May 15, 2025

Thailand Resumes Development of AI Regulatory Framework

Thailand’s Electronic Transactions Development Agency (ETDA) held an explanatory session on the draft principles and regulatory approaches of the country’s planned artificial intelligence (AI) law on May 2, 2025. This came after a lull of two years following the initial release of draft legislation on AI.

In the session, the ETDA explained that the earlier drafts were modeled after the EU’s legal framework for AI, but given the evolving Thai legal and technological landscape, it is now necessary to revisit and refine the drafts to ensure they remain relevant and effective in the local context. To aid in this process, the ETDA will accept public comments on the draft principles of the AI law until June 9, 2025.

Based on gap analysis and a comparative study of how different countries have addressed AI issues, the ETDA’s draft AI law principles are structured into five key areas. These are described below.

1. Risk-Based Requirements

The draft principles outline a set of approaches that the legislation will take toward mitigating risk:

Delegation of powers to enforcement agency or sectoral regulators

The primary legislation will not directly specify a list of prohibited risks or high-risk types of AI. Instead, it will empower an enforcement agency or relevant sectoral regulators to determine and issue such lists. This approach allows regulators in each specific industry to assess the necessity of risk classifications within their respective sectors, based on the principle that sectoral regulators are best positioned to understand the specific risks in their domains. These regulators are expected to issue subordinate legislation in alignment with the overall framework. Meanwhile, the central enforcement agency will coordinate oversight across sectors and cover areas not under the jurisdiction of any specific regulator.

Duties of high-risk AI providers

Providers of AI deemed by the enforcement agency or sectoral regulators to be high-risk will have certain additional requirements:

  • Risk management frameworks: High-risk AI providers must implement risk management systems (e.g., ISO/IEC 42001:2023 or the NIST Risk Management Framework). The draft principles draw a “duty of care” boundary to clarify the basis for judicial discretion and to provide a reference for government agencies in their enforcement. Failure to comply with the prescribed standards does not automatically constitute a violation; however, if such failure results in harm, the provider may bear liability for a wrongful act. The framework is designed to align with international standards and support consistency across sectors, including through secondary regulations issued by the enforcement body.
  • Local legal representatives: Offshore high-risk AI providers will be required to appoint a local representative in Thailand to ensure effective enforcement of the law for all service providers. The enforcement agency must also be notified of the appointment of a legal representative.
  • Serious incident reporting: High-risk AI providers will be required to report serious incidents to the enforcement agency.

Duties of high-risk AI deployers

Entities deploying high-risk AI must ensure human oversight of AI systems, maintain operational logs, ensure the quality of input data, and notify affected individuals in cases where the AI system may have an impact on their rights or interests. Deployers must also cooperate with investigations if AI causes harm, and may be held liable if their use falls below the standard of care expected of professionals.

2. Measures in Support of Innovation

The supportive principles—most of which can be implemented without new legislation—focus on two key areas:

  • Data: Introducing exceptions to permit the use of online data for purposes such as text and data mining, similar to the EU approach, while commercial use will still be subject to rightsholder reservations.
  • Sandbox: Testing in real-world conditions will be permitted under controlled environments to ensure that regulatory design aligns with practical realities. This will require an agreement between private entities and the relevant government agency overseeing the sandbox, allowing the use of personal data originally collected for other purposes to develop AI, provided it serves the public interest. Entities operating within a sandbox and acting in good faith should not be penalized for any harm that arises during the experimental phase, in line with a safe harbor principle. However, this safe harbor will not exempt participants from civil liability for damages.

3. General Principles

Some general principles guiding the development of Thailand’s legislative approach to AI include:

  • Nondiscrimination: Prohibiting the denial of legal effect to contracts or administrative decisions made using AI.
  • AI as a tool: Affirming that all actions generated by AI must be attributable to a human, regardless of the degree of human intervention. Developers and users cannot escape liability by citing unpredictability alone.
  • Protection against unexpected actions: Establishing exceptions to protect individuals from being bound by AI-generated acts that arise from unforeseeable errors. Such exceptions would apply only if the affected party could not have reasonably foreseen the AI action and the counterparty either knew or could have known of the error.
  • Right to explanation and appeal: Granting individuals the right to understand how AI systems are developed and the ability to appeal decisions made by or with AI, potentially requiring human involvement in decision making. These rights, which are under consideration and may apply only to high-risk AI, include the right to be notified when AI is used, the right to an explanation of how AI made a decision, and the right to contest the decision.

4. Regulator

The current proposal does not call for the establishment of a new regulator; instead, it designates the existing AI Governance Center (AIGC) under the ETDA to oversee the implementation of the law. The AIGC’s roles include conducting research and development on AI governance, providing guidance to organizations on AI adoption, and supporting pilot projects and regulatory sandboxes. Additional responsibilities include monitoring global trends, compiling national AI-readiness data, and developing cooperative mechanisms both domestically and internationally.

5. Legal Enforcement

The draft AI law empowers the enforcement agency and relevant sectoral regulators to jointly issue administrative orders requiring AI providers or deployers to cease the provision or use of prohibited or high-risk AI. If such parties fail to comply and the AI service is hosted on a digital platform, authorities may order the platform provider to remove or block access to the service. For prohibited AI embedded in physical products, enforcement may extend to seizure of the items, including through entry into premises. If the noncompliant AI service is hosted outside digital platforms or a platform fails to comply, the regulators may coordinate with the Ministry of Digital Economy and Society to order internet service providers to block access within Thailand.

Status and Outlook

The ETDA will take the comments into consideration as part of the legislative revision process. After reviewing the draft legislation based on the feedback received in this round, a revised version of the draft law will be published for another public hearing.

Business operators should review the proposed principles of the draft AI law and submit their comments, if any, to the ETDA. They should also start monitoring the development of this law to ensure timely compliance. In particular, operators that develop, use, or rely on high-risk AI systems should begin assessing their current risk management structures, data governance practices, and human oversight mechanisms.