In response to the rapid advancement of artificial intelligence (AI) and evolving global digital trends, Thailand has undertaken significant efforts to establish a comprehensive national policy framework aimed at fostering an AI ecosystem.
This framework seeks to promote the responsible development and deployment of AI technology to enhance Thailand’s economic competitiveness and improve quality of life, with targeted implementation by 2027.
In furtherance of this national AI policy, regulatory authorities have initiated efforts to develop and refine the applicable legal framework, including the drafting of Thailand’s first unified AI legislation.
Pending the drafting and enactment of such comprehensive legislation, sector-specific regulators have proactively issued guidelines applicable to regulated entities within their respective jurisdictions, including financial institutions, banks, insurance companies, securities and derivatives business operators, and digital asset service providers.
Concurrently, cross-sectoral regulatory bodies, notably the Personal Data Protection Committee (PDPC) and the National Cyber Security Agency (NCSA), have promulgated guidelines applicable to all business operators within their regulatory purview.
While unified AI legislation has not been enacted, the design, development and use of AI across various industries in Thailand remain subject to existing sector-specific legislation.
National AI policy
The Thai cabinet approved the Thailand National AI Strategy and Action Plan (2022-2027) in July 2022, aiming to establish an AI development and application ecosystem by 2027.
The strategy is built around five pillars:
- Preparing social, ethical, legal and regulatory readiness for AI;
- Developing national infrastructure;
- Increasing human capability and AI education;
- Driving AI technology and innovation; and
- Promoting AI adoption in public and private sectors.
A national AI committee, under the National Digital Economy and Society Committee (NDESC), was established in August 2022, chaired by the prime minister.
Comprehensive legislation
Following the national AI strategy, the government has been developing comprehensive AI legislation to govern and promote AI adoption in Thailand. The first set of draft legislation consists of two laws.
First, the draft Royal Decree on Business Operations that Use Artificial Intelligence Systems, issued by the Office of the National Digital Economy and Society Commission (ONDE) under the Ministry of Digital Economy and Society (MDES), adopts a risk-based approach modelled on the EU AI Act.
Second, the draft Act on the Promotion and Support of AI Innovations, issued by the Electronic Transactions Development Agency (ETDA), focuses on building the AI ecosystem through provisions on sandboxes, data sharing, standards and risk assessment. Both drafts were issued for public hearing in 2022-2023.
The ETDA acknowledged that earlier drafts modelled on the EU’s framework needed updating to reflect Thailand’s evolving legal and technological landscape. In June 2025, the MDES, through the ETDA, introduced the (Draft) Principles of the Law on Artificial Intelligence for public hearing.
The draft AI principles are organised around five key areas:
- Risk-based requirements. Rather than specifying prohibited or high-risk AI categories in primary legislation, the framework delegates that authority to a central enforcement agency and sectoral regulators, which are considered best positioned to assess risks in their respective domains. Providers of high-risk AI would be required to implement risk management systems aligned with international standards (e.g., ISO/IEC 42001:2023), appoint local representatives in Thailand if they are offshore providers and report serious incidents. Deployers of high-risk AI must, among other things, ensure human oversight, maintain operational logs, ensure input data quality and notify individuals whose rights may be affected.
- Innovation support. The principles propose exceptions for text and data mining of online data and regulatory sandboxes for AI testing in controlled conditions. Sandbox participants acting in good faith would benefit from a safe harbour against penalties, though civil liability for damages would still apply.
- General principles. The framework affirms that AI-generated actions must be attributable to a human, prohibits denial of legal effect to AI-assisted contracts or administrative decisions, and establishes protections against unforeseeable AI errors. Individuals may have the right to be notified when AI is used, to receive explanations of AI-driven decisions, and to contest those decisions, although these rights may be limited to high-risk AI contexts.
- Regulators. No new regulatory body is proposed. Instead, the existing AI Governance Centre under the ETDA would oversee implementation including research, guidance, sandbox support and international co-operation.
- Legal enforcement. The enforcement agency and sectoral regulators would be empowered to issue administrative orders to cease prohibited or non-compliant AI services. Enforcement mechanisms include ordering digital platforms to remove or block services, seizing products containing prohibited AI, and co-ordinating with the MDES to direct internet service providers to block access within Thailand.
The draft AI principles, once revised after the hearing, will be transformed into a draft AI Act for further public hearing before proceeding through the legal enactment process.
Existing applicable laws
In the absence of AI-specific legislation, existing laws apply to AI adoption throughout the AI lifecycle – from design and testing to deployment and monitoring.
Key examples include:
- Liability. Under the Civil and Commercial Code, general wrongful act principles may impose civil liability for AI-caused damages.
- Data governance. Collection, use, disclosure and overseas transfer of personal data in AI systems are subject to the Personal Data Protection Act. Collection of computer data, including web scraping, could violate the Computer Related Crime Act (CCA). Critical information infrastructure organisations must also comply with the Cybersecurity Act, which requires national cybersecurity measures to be implemented.
- Content regulation and transparency. The CCA, Consumer Protection Act, Criminal Code and Child Protection Act restrict harmful, false, defamatory or obscene AI-generated content. The Royal Decree on Digital Platform Services (DPS Decree) requires certain platforms to disclose algorithmic ranking and decision-making parameters.
In addition to the above-mentioned examples, laws such as the Copyright Act, Trademark Act, Gender Equality Act, Persons with Disabilities Empowerment Act, and Trade Competition Act, as well as the Thai Constitution, may be applicable, depending on the issue in question.
Sector-specific frameworks
While AI legislation has not been enacted, several regulators have proactively issued guidelines for their regulated businesses. Although certain guidelines carry no legally binding effect, regulatory bodies expect compliance with them to foster adherence to existing regulations.
Key sector-specific AI guidelines are:
- Banking and financial services. The Bank of Thailand issued Guiding Principles for Artificial Intelligence Risk Management in September 2025, applicable to financial institutions and payment service providers, covering AI lifecycle management, risk assessment, data governance, cybersecurity, transparency and human oversight.
- Capital markets. The Securities and Exchange Commission of Thailand issued a governance framework for AI and machine learning applicable to securities, derivatives and digital asset operators. It establishes four core principles – fairness, legal and ethical compliance, accountability, and transparency – with guidance on risk management, documentation and lifecycle monitoring.
- Insurance. The Office of Insurance Commission issued AI governance guidelines for insurance companies in 2025, addressing risk management, security, transparency, fairness and consumer protection in AI applications, particularly in high-risk processes such as underwriting and claims management.
- Data protection. In February 2026, the Personal Data Protection Committee released draft Guidelines on Personal Data Protection in AI Development and Use. The guidelines address stakeholder roles, require data processing agreements to include model training prohibitions, mandate data protection impact assessments for high-risk AI, and establish security measures throughout the AI lifecycle.
- Cybersecurity. The National Cyber Security Agency released AI security guidelines in September 2025, providing recommendations on protecting AI systems from cyber threats aligned with ISO/IEC 42001:2023 and the National Institute of Standards and Technology’s AI risk management framework.
Conclusion and outlook
Thailand has taken significant steps towards establishing a comprehensive AI governance framework, though it has not yet enacted AI-specific legislation. The draft AI principles, once finalised and enacted, will provide the foundational regulatory structure.
Meanwhile, sector-specific regulators have moved proactively to issue guidelines covering financial services, capital markets, insurance, data protection and cybersecurity.
Organisations deploying AI in Thailand should closely monitor legislative developments and assess compliance with existing legislation, aligning governance, risk management and transparency practices with existing guidelines to ensure readiness for the anticipated regulatory framework.
This article was originally published by Asian Business Law Journal.