Generative artificial intelligence (GenAI) is no longer a distant innovation confined to science fiction and research labs; it has become an integral part of daily business operations worldwide. Employees across industries are adopting GenAI tools at a remarkable pace—including in Southeast Asia, where a tech-savvy workforce and widespread internet and mobile access have driven early adoption.
The reality facing organizations today is clear: employees are integrating GenAI into their daily work, often without official approval or clear policies. This phenomenon, often called “Bring Your Own AI,” stems from a disconnect between organizational governance and employee behavior and underscores the urgent need for proactive AI policies and oversight.
For business leaders and legal teams, GenAI is both an opportunity and a challenge. On one hand, these tools can deliver real business value and boost efficiency. On the other, the unsanctioned and unmonitored use of GenAI introduces substantial legal risks, such as data privacy violations, confidentiality breaches, and intellectual property issues.
The widespread adoption of GenAI tools by employees, regardless of official organizational stance or guidelines, demonstrates that prohibition is neither practical nor effective. A more strategic approach involves establishing comprehensive governance policies that encourage responsible AI use while managing the risks.
Organizations that take the lead in developing GenAI governance policies are better positioned to benefit from the technology’s transformative potential. The question isn’t whether GenAI will change how we work, but how quickly organizations can put the right safeguards in place to manage this change successfully.
Risks of GenAI Use
The use of GenAI in business operations, whether sanctioned or not, exposes organizations to a unique set of risks. The following are particularly relevant:
- Data security and confidentiality: Publicly available GenAI tools may transmit data to external servers, retain conversation histories, and use inputs for model training. Employees may also share confidential organizational or client information without realizing the implications, increasing the risk of unintentional data leakage and unauthorized disclosure. This risk is heightened because organizations often cannot tell which GenAI tools employees are using or what types of information they are sharing.
- Data protection and regulatory compliance: The evolving legal landscape regulating AI creates compliance challenges across multiple jurisdictions. Organizations must navigate complex data protection laws, such as Thailand’s Personal Data Protection Act (PDPA) and Vietnam’s Personal Data Protection Decree (PDPD), each with its own compliance requirements. In the absence of AI-specific legislation, sector-specific regulations add further complexity, and unclear regulatory guidance often leaves organizations operating in legal uncertainty, particularly when they use AI for decision-making that affects individuals or deploy AI systems that interact directly with customers.
- Intellectual property risks: AI-generated content raises unresolved questions about ownership, originality, and copyright infringement. Additionally, proprietary information shared with GenAI tools can be inadvertently incorporated into model training data, potentially compromising trade secrets or violating confidentiality agreements.
- Governance and accountability: Fragmented or inadequately governed GenAI adoption creates oversight gaps, making it difficult to track usage, assign responsibility for outputs, or respond to incidents. In addition, traditional approval processes may not account for AI-assisted work, creating quality control issues.
Developing an Internal GenAI Policy
Forward-thinking organizations across Southeast Asia are establishing internal policies that provide clear direction on both sanctioned and unsanctioned AI use. These policies form the cornerstone of responsible AI adoption, balancing innovation with effective risk management.
An effective AI policy functions as both a protective framework and an enablement tool. Rather than simply listing restrictions, the best policies provide practical guidance that empowers employees to leverage AI capabilities while maintaining organizational standards. Achieving this requires addressing several critical components, including:
- Policy scope: Effective AI policies begin with a clear articulation of their purpose, defining exactly which AI tools and use cases the policy governs and distinguishing between enterprise-approved solutions and publicly available AI tools.
- Access and authorization: Organizations should define user tiers and access levels, specifying which roles are permitted to use specific AI tools and under what circumstances. This includes establishing approval processes for new AI tool adoption and creating exceptions for specialized use cases.
- Data governance and privacy protection: As GenAI tools may process personal information, policies must establish strict protocols for data handling. This encompasses defining what types of data can be shared with AI systems and ensuring compliance with regional privacy regulations such as Thailand’s PDPA or Vietnam’s PDPD.
- Accountability and verification: Policies should also assign internal accountability for AI-generated content and outputs. It is important to establish appropriate review protocols based on the type of AI-assisted work, along with guidelines for transparently disclosing when and how AI was used, especially in client-facing materials or critical decision-making, where human validation may be required.
- Monitoring and incident response: Effective policies establish clear procedures for tracking AI usage, identifying potential misuse, and responding to security incidents, policy violations, and problematic AI outputs such as hallucinations or biased content. This includes defining escalation procedures and reporting mechanisms.
- Vendor management: As organizations increasingly rely on third-party AI services, policies must address vendor evaluation criteria, contract requirements, and ongoing performance monitoring to ensure external AI providers meet legal obligations, data protection requirements, and operational expectations related to security, accountability, and transparency.
Given the rapid pace of AI development, policies should include review cycles, update mechanisms, and processes for incorporating new regulatory requirements or technological capabilities. They should also provide a framework for assessing emerging technologies and adapting policy coverage to reflect evolving risks and capabilities.
Finally, organizations should hold comprehensive education and training sessions to ensure that employees understand both the capabilities and limitations of AI tools, recognize potential risks, and follow organizational policies when using AI in their work.
Proactive Implementation
The GenAI revolution isn’t waiting for businesses to catch up—it’s already here, integrated into daily workflows. Organizations can either proactively implement robust governance frameworks to safely harness AI’s immense potential or risk falling behind in an increasingly complex and fast-moving landscape.
By establishing clear guidelines, accountability structures, and effective risk management protocols, organizations can confidently leverage AI capabilities to encourage innovation while maintaining oversight and minimizing risks. This approach not only builds stakeholder trust and ensures regulatory compliance but also encourages greater AI adoption and transparency among employees. With well-designed guardrails in place, employees can confidently and responsibly integrate GenAI into their work.
Ultimately, organizations that strike the right balance between innovation and responsibility will be best positioned to lead in the GenAI era.