June 25, 2025
Generative artificial intelligence (GenAI) is no longer a distant innovation confined to science fiction and research labs; it has become an integral part of daily business operations worldwide. Employees across industries are adopting GenAI tools at a remarkable pace, including in Southeast Asia, where a tech-savvy workforce and widespread internet and mobile access have driven early adoption.

The reality facing organizations today is clear: employees are integrating GenAI into their daily work, often without official approval or clear policies. This phenomenon, often called “Bring Your Own AI,” stems from a disconnect between organizational governance and employee behavior and underscores the urgent need for proactive AI policies and oversight.

For business leaders and legal teams, GenAI is both an opportunity and a challenge. On one hand, these tools can deliver real business value and boost efficiency. On the other, the unsanctioned and unmonitored use of GenAI introduces substantial legal risks, such as data privacy violations, confidentiality breaches, and intellectual property issues.

The widespread adoption of GenAI tools by employees, regardless of official organizational stance or guidelines, demonstrates that prohibition is neither practical nor effective. A more strategic approach is to establish comprehensive governance policies that encourage responsible AI use while managing the risks. Organizations that take the lead in developing GenAI governance policies are better positioned to benefit from its transformative potential. The question isn’t whether GenAI will change how we work, but how quickly organizations can put the right safeguards in place to manage this change successfully.

Risks of GenAI Use

The use of GenAI in business operations, whether sanctioned or not, exposes organizations to a unique set of risks.
The following are particularly relevant:

- Data security and confidentiality: General-purpose GenAI tools on the market may transmit data to external servers, retain conversation histories, and use inputs for model training.