Vietnam’s emerging governance framework for artificial intelligence (AI) is developing through a multi-layered structure comprising three components:
Policy level. At the policy level, the foundation for a strategic framework for AI development and governance was laid in 2021 by the National Strategy for Research, Development and Application of AI until 2030, which aims to strengthen the national AI ecosystem and position Vietnam as a regional AI innovation hub.
Subsequently, Resolution No.57-NQ/TW (2024) identified AI as a key driver of science, technology, innovation and national digital transformation. AI was also designated a strategic technology under Decision No.1131/QD-TTg (2025), which lists priority technologies across sectors.
Regulatory framework. At the legislative level, the new Law on Artificial Intelligence (AI Law) took effect on 1 March 2026, establishing the core regulatory framework governing the development, provision, deployment and use of AI systems.
Controlled testing for emerging AI technologies is implemented under the Law on Science, Technology and Innovation.
The AI Law is expected to be further operationalised through implementing instruments, most notably a draft decree guiding the AI Law and a draft decision of the Prime Minister identifying high-risk AI systems (both published in February 2026). A decision establishing priority datasets for AI development is also anticipated.
Compliance obligations may also arise under sectoral regulatory regimes, including data protection, cybersecurity, banking, consumer protection, e-commerce and intellectual property, particularly where AI systems are used in automated decision-making or data-driven services.
Technical standards and non-binding guidelines. Vietnam's AI governance framework is also supported by technical standards and voluntary guidelines. A key instrument is Decision No.1290/QD-BKHCN (2024), which provides guidelines for responsible research and development of AI systems and represents Vietnam's first national AI ethics code. The Ministry of Science and Technology (MST) encourages organisations to adopt these principles, though they are not legally binding, to promote responsible AI development.
Vietnam has also begun incorporating international AI technical standards into its national standards system. While these standards are not legally binding unless incorporated into legislation or National Technical Regulations, they provide guidance on AI terminology, lifecycle management, robustness, governance frameworks and machine learning systems, helping align Vietnam’s AI governance ecosystem with international standards.
Scope of application. The AI Law applies to Vietnamese organisations and individuals, as well as foreign entities engaging in AI-related activities in Vietnam, but excludes AI systems used solely for national defence, security and cryptography purposes.
A defining feature of the AI Law is that it regulates by role rather than by industry, distinguishing between the providers, deployers and users of AI systems.
Risk-based classification as the first compliance gate. At the AI Law's core is a regulatory model in which AI systems are classified as high, medium or low risk, with obligations scaled accordingly.
Providers and deployers must comply with transparency obligations and be prepared, on request, to explain a system's purpose, operation, key input data and risk management measures, without being required to disclose source code, detailed algorithms or other trade secrets. Deployers are also responsible for explaining system operation, risk controls, incident-handling measures and safeguards for the lawful rights and interests of affected persons.
Low-risk AI systems, by contrast, are subject to a largely post-hoc oversight model. Providers and deployers need account for such systems only when there are indications of legal violations or adverse impacts on lawful rights or interests, while users remain free to use low-risk systems for lawful purposes on their own responsibility.
In addition to governance under the AI Law, several sector-specific regulations impose additional requirements on the deployment and use of AI in regulated industries.
In banking and finance, the State Bank of Vietnam has issued a draft circular on safety and risk management for AI deployment. Financial institutions must complete pre-deployment procedures, including risk classification documentation, information security testing, impact assessments for high-risk systems, and operational safety plans covering monitoring and incident response. The draft also introduces transparency requirements and prohibits using AI to exploit customer vulnerabilities or promote unsuitable financial products.
Under consumer protection law, operators of large digital platforms must periodically assess and report their use of AI technologies and provide information to competent authorities for regulatory supervision.
In e-commerce, the Law on E-Commerce requires transparency where algorithms or AI-based recommendation systems are used to rank or display goods on digital marketplaces. Platforms must disclose the main criteria used by these algorithms and allow users to enable or disable such features.
For data protection, AI-related data processing is governed by the Law on Personal Data Protection. Organisations using personal data for AI training or analytics must ensure processing occurs for legitimate purposes and implement safeguards such as access controls, encryption and compliance with data subject rights and cross-border transfer requirements. The Data Law further establishes principles governing data management, sharing, and infrastructure relevant to AI development.
Vietnam has taken a significant step towards establishing a comprehensive legal framework for AI governance. While the AI Law provides the foundational regulatory structure, several implementing instruments remain under development and will further clarify compliance obligations.
As Vietnam’s digital economy expands, the regulatory approach is likely to evolve towards a more integrated governance model combining AI-specific regulations, sectoral oversight, and internationally aligned technical standards. Organisations deploying AI systems should therefore closely monitor regulatory developments and strengthen internal governance, risk management and transparency practices to prepare for the next phase of AI regulation.
This article was originally published by Asian Business Law Journal.