On February 17, 2026, Thailand’s Personal Data Protection Committee (PDPC) released its draft Guidelines on Personal Data Protection in the Development and Use of Artificial Intelligence. The draft guidelines, which translate data controller and data processor compliance obligations under the Personal Data Protection Act (PDPA) into measures tailored to AI development and deployment, are open for public comment until February 25, 2026.
At a public hearing session on the draft guidelines held on February 19, the PDPC emphasized that its approach to AI is not to hinder innovation but to develop practical guidance supporting safe deployment while ensuring data protection. Although the guidelines are not legally binding, they indicate the regulator’s expectations and the likely direction of interpretation and enforcement.
Scope of Application and Role of Stakeholders
The guidelines will apply to all data controllers and data processors in Thailand, and to overseas data controllers and data processors whose data processing falls within the extraterritorial scope of the PDPA.
The draft guidelines distinguish the roles of parties involved in AI deployment. AI users who determine the purpose of use, designate the input data, and retain the outputs generated by the AI are considered data controllers. In contrast, AI model providers or system integrators that process personal data under the instructions of the data controller are generally regarded as data processors. However, if an AI model provider uses that data for its own purposes, such as model fine-tuning or training, it may instead be classified as a data controller.
Key Obligations for AI Data Collection and Use
The basic principles of data processing under the PDPA must be maintained throughout the AI implementation lifecycle, from design to decommissioning, emphasizing accountability and privacy-by-design principles. The draft guidelines also stipulate the following:
Data Protection Impact Assessments for High-Risk AI
Data protection impact assessments (DPIAs) for high-risk AI applications are necessary to identify, manage, and mitigate AI-specific risks that may affect the confidentiality, integrity, or availability of personal data processed within AI systems.
High-risk AI applications include, for example, automated decision-making with legal or similarly significant effects on individuals, large-scale processing of sensitive data for AI model training, systematic behavioral monitoring in public spaces, and generative AI capable of creating defamatory or misleading content about individuals.
Businesses must conduct DPIAs from the design phase, assessing necessity, proportionality, and AI-specific risks such as algorithmic bias, model inversion attacks, and limited explainability of outputs. These assessments should identify risk-mitigation measures, which may include the deployment of privacy-enhancing technologies, anonymization techniques, data encryption, and the implementation of human-in-the-loop controls for high-risk AI systems.
The draft guidelines also provide examples of sector-specific applications that may face heightened scrutiny. For instance, financial institutions using AI for credit scoring must ensure explainability and fairness, with human oversight required for adverse or rejection decisions. HR departments deploying AI for recruitment or performance evaluation must audit for algorithmic bias to prevent unlawful discrimination. In the healthcare sector, AI tools that support diagnosis must not be used as the sole basis for life-affecting medical decisions, and a physician must make the final determination.
Security Measures and Vendor Management
The draft guidelines prescribe layered security obligations, including organizational, physical, and technical measures.
Organizational measures should include access controls that follow the principle of least privilege, with developers restricted to anonymized data in testing environments and general users barred from accessing model weights or training datasets. Businesses must adopt acceptable use policies (AUPs) prohibiting employees from entering personal data into public generative AI platforms and must train staff on AI-specific risks such as hallucinations and prompt-injection attacks.

When procuring external AI services, businesses must conduct vendor due diligence and execute data processing agreements that explicitly prohibit vendors from using client data to train or improve their own models without authorization. Third-party and open-source models also introduce supply chain risks, including the possibility that models were trained on unlawfully collected data or contain embedded backdoors. Using an open-source AI model does not reduce legal responsibility; the deploying organization remains the data controller and must assess the model's provenance and security.
Physical measures should also be implemented to cover both hardware and system architecture, such as restricting access to premises where computer networks and cloud infrastructure are hosted, and ensuring the separation of testing sandbox environments from primary production environments at both physical and network levels.
Technical measures should include AI-specific safeguards that reflect the complexity and sensitivity of the data involved, such as input sanitization and data minimization, data encryption and anonymization, and the implementation of audit trails specifically designed for AI systems. These audit trails should cover interaction logs and metadata, including model versions, system prompts, and input data, as these are necessary to support digital forensic investigations. API rate limiting, proactive penetration testing, and input and output guardrails must also be implemented.
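As one illustration of the audit-trail requirement above, the sketch below builds a structured log entry capturing a model version, system prompt, and input/output data for each AI interaction. This is a hypothetical example, not a format prescribed by the draft guidelines; the field names and the choice to store content as SHA-256 hashes (so the log supports forensic matching without duplicating personal data) are assumptions for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_audit_record(model_version: str, system_prompt: str,
                      user_input: str, output: str) -> dict:
    """Build one audit-trail entry for a single AI interaction.

    Prompt, input, and output text are stored as SHA-256 hashes
    (an illustrative data-minimization choice) so the log itself
    does not replicate personal data, while still allowing forensic
    matching against separately retained source records.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "system_prompt_sha256": hashlib.sha256(system_prompt.encode()).hexdigest(),
        "input_sha256": hashlib.sha256(user_input.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }

record = make_audit_record("model-v1.2", "You are a support assistant.",
                           "Customer query text", "Model reply text")
print(json.dumps(record, indent=2))
```

Whether hashing, encryption, or plaintext retention of interaction logs is appropriate will depend on the organization's forensic needs and its own DPIA conclusions.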
Data Subject Rights and Breach Notification
Businesses must design AI systems to support data subjects' rights under the PDPA, including the rights to access, rectification, erasure, and objection to processing, taking into account technical feasibility to ensure these rights can be exercised effectively.
AI-related breach scenarios also require tailored response protocols. Prompt-injection attacks or data leakage through model inference may constitute reportable breaches if they result in unauthorized exposure of personal data. Businesses should assess breach severity by considering the sensitivity and volume of data involved and whether the exposure is contained internally or made public. Agreements with AI vendors should establish joint incident response procedures and require vendors to provide logs and forensic support within defined timeframes.
Next Steps
Businesses deploying AI in their operations should monitor the finalization of the guidelines and any subsequent regulatory clarifications. Although the guidelines are nonbinding, regulators are likely to assess compliance against them, making their expectations difficult for organizations deploying AI to ignore in practice.
A gap analysis of internal AI governance and the preparation of AUPs may be necessary, as integrating AI into business operations will require clear and demonstrable compliance with data protection requirements. Staff training and robust contractual safeguards with third parties should also be put in place to ensure enforceability, coordination, and effective risk management when AI-related issues arise.