April 3, 2026

Thailand’s AI and Machine Learning Governance Framework for Capital Markets

Thailand’s Securities and Exchange Commission (SEC) has established a comprehensive governance framework for the use of artificial intelligence and machine learning (AI/ML) in the capital markets. The framework provides guidance to capital market business operators on understanding the risks associated with AI/ML implementation and adopting appropriate practices to build public confidence in Thailand’s capital markets. While the guidelines are principle-based rather than prescriptive, they reflect the SEC’s expectations for responsible AI/ML governance and are likely to inform supervisory activities and industry standards going forward.

Scope

The framework applies to capital market business operators supervised by the SEC. This includes, for example, securities and derivatives firms, asset management companies, mutual fund and private fund managers, investment advisors and investment consultants (including robo-advisory service providers), derivatives intermediaries, and other licensed intermediaries and market operators in the Thai capital markets that deploy AI/ML in their operations.

Core Principles of the Guidelines

The framework is presented as a best-practice manual rather than prescriptive regulation, providing guidance that regulated entities may apply to their AI/ML governance and risk management as appropriate. While currently nonbinding, the guidelines signal the SEC’s expectations for the sector, particularly in relation to other binding SEC regulations such as those covering IT risk management and market conduct.

The guidelines name four core principles for AI/ML deployment:

  1. Fairness: Design and develop AI/ML with consideration for fairness, equality, and social diversity to prevent discrimination against individuals or groups.
  2. Legal and ethical compliance: Ensure AI/ML use aligns with applicable laws, ethical standards, and organizational values and policies.
  3. Accountability: Establish clear responsibility—both internally and externally—for AI/ML activities and outcomes.
  4. Transparency: Provide adequate disclosure to users about AI/ML use, including explainability of decisions and traceability of activities.

AI/ML Best Practices

The guidelines set out best practices across four stages of the AI/ML lifecycle, as described below.

System Design

System design translates objectives, risk controls, and usage principles into AI/ML tool selection requirements. Organizations remain responsible for outcomes regardless of whether solutions are off-the-shelf, outsourced, or internally developed. Where external vendors are used, organizations are expected to conduct due diligence, test model reliability, understand operations sufficiently to explain them to customers, and execute contracts with clear SLAs and KPIs, with ongoing performance monitoring to ensure adherence to the core principles above.

Data Preparation and Model Development

This stage covers several areas of practice:

  • Defining data requirements: Organizations should determine the data types, volume, accuracy, and time-series coverage needed for the AI/ML system to operate efficiently.
  • Data collection and integration: Documentation should record data provenance (the source of the data) and data lineage (its processing and changes throughout preparation) to ensure traceability.
  • Data labeling: Training, validation, and testing datasets should be properly labeled so that AI/ML systems can accurately recognize data meanings.
  • Data quality evaluation: Datasets should be evaluated, and improved to meet organizational quality standards, before they are used for training, validation, or testing.
  • Personal data protection: Appropriate measures, including access controls, encryption, and anonymization, should be applied to personal or sensitive data used in AI/ML systems.

For high-risk AI/ML applications, the guidelines recommend implementing human-in-the-loop controls (where AI provides recommendations only, with no autonomous action without human participation), human-over-the-loop supervision (where AI operates independently under human oversight), or kill-switch mechanisms (emergency halt functions with clear activation conditions).
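As an illustration only (not drawn from the guidelines themselves), the human-in-the-loop and kill-switch patterns described above might be sketched as follows; every class, method, and threshold here is a hypothetical placeholder.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # e.g. "flag_order" (hypothetical action name)
    confidence: float  # model confidence score, 0.0-1.0

class HumanInTheLoopController:
    """Wraps model output so that no action executes without human
    approval, and a kill switch with a clearly defined activation
    condition can halt all AI/ML output immediately."""

    def __init__(self, loss_limit: float):
        self.kill_switch_engaged = False
        self.loss_limit = loss_limit   # pre-defined activation condition
        self.cumulative_loss = 0.0

    def record_loss(self, loss: float) -> None:
        # Kill-switch condition: cumulative loss breaches the limit.
        self.cumulative_loss += loss
        if self.cumulative_loss >= self.loss_limit:
            self.kill_switch_engaged = True

    def propose(self, rec: Recommendation, human_approved: bool) -> str:
        if self.kill_switch_engaged:
            return "halted"            # emergency stop: no output at all
        if not human_approved:
            return "pending_review"    # AI recommends; a human must act
        return rec.action              # executed only after human sign-off
```

A human-over-the-loop variant would instead let `propose` execute by default while logging each decision for supervisory review.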

Deployment and Monitoring

Before deployment, organizations should verify accuracy, efficiency, capacity, and latency in test environments. Proper change management should be applied to minimize operational impact and ensure that changes achieve their intended objectives. Ongoing monitoring should include performance evaluation, model tuning as needed, and monitoring for new data sources, supported by automated tools that raise alerts when loss thresholds are breached or accuracy degrades.
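The automated threshold-based alerting described above could be sketched as a simple health check; the metric names and threshold values below are illustrative assumptions, not figures from the guidelines.

```python
def check_model_health(metrics: dict,
                       accuracy_floor: float = 0.90,
                       loss_ceiling: float = 0.25) -> list[str]:
    """Return alert messages when monitored metrics breach thresholds.

    `accuracy_floor` and `loss_ceiling` are placeholder thresholds an
    organization would set per its own risk appetite.
    """
    alerts = []
    accuracy = metrics.get("accuracy", 1.0)
    loss = metrics.get("loss", 0.0)
    if accuracy < accuracy_floor:
        alerts.append(f"accuracy {accuracy:.2f} below floor {accuracy_floor}")
    if loss > loss_ceiling:
        alerts.append(f"loss {loss:.2f} above ceiling {loss_ceiling}")
    return alerts
```

In practice a check like this would run on a schedule against live performance metrics and route any alerts to the responsible oversight function.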

User Communication

Organizations should communicate sufficient information to users to support their understanding and confidence, while withholding details that could enable exploitation. For example, disclosing the use of a chatbot is appropriate, whereas specifics of a fraud detection model would generally not be disclosed.

Governance, Documentation, and Risk Management

Beyond the lifecycle stages, the guidelines identify several organizational measures relevant to achieving the core principles in practice. These span governance structure, documentation, training, risk management, data privacy, IT audits, third-party contracts, human oversight, and transparency obligations.

On governance, organizations are expected to establish or expand frameworks to include AI/ML-specific oversight, with board-level engagement and dedicated committees with relevant expertise. Comprehensive documentation of data provenance, data lineage, model development, and testing processes should also be maintained to support audit trails.

Risk management frameworks may need updating to incorporate AI/ML-specific risk assessments, control measures, and continuous monitoring. Organizations using personal data in AI/ML should also ensure compliance with Thailand’s Personal Data Protection Act (PDPA), supported by robust technical safeguards.

Outlook

Although currently framed as best practices rather than binding rules, the Thai SEC’s AI/ML guidelines are likely to foreshadow more formalized regulatory requirements in due course. The emphasis on fairness, transparency, and accountability also reflects broader global regulatory trends in AI governance. Organizations that align with the framework proactively will be better positioned as the regulatory landscape continues to develop.
