
September 24, 2025

Thailand Issues AI Risk Management Guidelines for Financial Service Providers

On September 12, 2025, the Bank of Thailand (BOT) officially released its AI Risk Management Guidelines for Financial Service Providers, building upon the draft guidelines issued in June 2025. The guidelines reflect a balanced approach, encouraging innovation while safeguarding financial stability and consumer protection.

The guidelines are targeted at all financial service providers, including financial institutions and special financial institutions under the Financial Institution Business Act, as well as payment providers under the Payment Systems Act.

The guidelines apply to both AI systems developed in-house and those developed by third parties that are adopted for use by financial service providers.

AI Risk Management Guidelines

The two main pillars in managing AI risk are (1) governance of AI system implementation and (2) AI system development and security controls, consisting of the following key elements:

1. Governance

  • Stakeholder roles and responsibilities. Boards and senior management assume accountability for decisions and operations involving AI systems, and are responsible for defining roles and responsibilities for AI oversight. This includes establishing an AI system usage policy, designating personnel responsible for AI risk management, and building awareness of AI-related risk within the organization. Organizations are expected to foster internal capabilities to use AI securely and avoid overreliance that could compromise business continuity or customer service.
  • AI system usage policy. Policies governing AI usage should align with organizational goals, regulatory obligations, and recognized responsible AI frameworks—such as the FEAT principles (fairness, ethics, accountability, and transparency). These policies should be reviewed regularly to respond to technological advancements and evolving risk profiles.
  • Risk management throughout the AI lifecycle. Risk management should encompass the entire AI lifecycle, from establishing risk appetite to implementing continuous risk assessment and control measures tailored to specific use cases. Financial service providers should assess the risks and impacts of AI usage on operations and customer services. Human oversight must be embedded in decision-making processes, with the degree of oversight calibrated to the level of risk and impact, especially when AI systems are used in strategic functions or customer interactions (e.g., loan approval or account opening). When customers interact with AI systems, they should be notified and given the option to contact the financial service provider's personnel.
2. Development and security controls
  • Data risk. Financial service providers should have measures to assess and ensure the quality, accuracy, timeliness, volume, and diversity of data used in AI model training. They should also implement data leakage prevention measures.
  • Model development risk. Financial service providers should have (1) clear evaluation metrics for assessing model accuracy and reliability through ongoing testing and monitoring, both before and after deployment, and (2) measures to ensure the explainability of AI outcomes, supported by documentation detailing model inputs, outputs, and parameters. For generative AI applications, there should be specific measures to reduce AI hallucination risks, such as retrieval-augmented generation and prompt engineering.
  • Cybersecurity risk. Financial service providers should have measures to prevent and detect emerging cyber threats targeting AI systems, based on established standards such as the OWASP Machine Learning Security Top 10.
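To illustrate the retrieval-augmented generation technique referenced above, the sketch below shows the basic pattern: retrieve relevant internal documents first, then constrain the model's answer to that retrieved context. This is a minimal, illustrative example only; the toy keyword-overlap retriever and all function and variable names are assumptions for demonstration, not part of the BOT guidelines.

```python
# Minimal RAG sketch: ground a generative model's answer in retrieved
# documents to reduce hallucination risk. Toy retriever for illustration;
# production systems typically use vector search over embeddings.

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by simple word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Assemble a prompt instructing the model to answer only from context."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "Personal loan applications require proof of income and a credit check.",
    "Branch opening hours are 9:00 to 16:00 on weekdays.",
]
prompt = build_grounded_prompt(
    "What documents are needed for a loan application?", docs
)
print(prompt)
```

The key design point is the explicit instruction to refuse when the context lacks an answer, which gives human reviewers a clear, auditable grounding trail for each AI response.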

In addition, the BOT emphasizes the importance of financial service providers strictly complying with applicable laws when adopting AI, including personal data protection laws and intellectual property laws.
