January 9, 2026
A Closer Look at Vietnam’s New AI Law: What It Means for AI Businesses

Vietnam has taken a decisive step into the global artificial intelligence regulatory landscape with the promulgation of the Law on Artificial Intelligence No. 134/2025/QH15 (AI Law), adopted on December 10, 2025, and effective from March 1, 2026. As one of the earliest comprehensive, standalone AI statutes in Southeast Asia, the AI Law signals Vietnam’s ambition to position itself as both an innovation-friendly and governance-conscious AI market.

In doing so, the legislature has also streamlined Vietnam’s AI regulatory architecture. The AI Law repeals most AI-related provisions previously embedded in the Law on Digital Technology Industry No. 71/2025/QH15, consolidating AI governance under a single, unified legal framework. This structural move underscores an intent to provide greater regulatory clarity and coherence for businesses operating across the AI value chain.

Against this backdrop, the key question for AI developers, providers, deployers, and governance teams is how the new risk-based framework will shape compliance expectations, operational decisions, and governance design in practice. This article examines the new AI Law through that practical lens, focusing on what it means for AI businesses operating in or into Vietnam.

Scope of Application

The AI Law applies broadly to Vietnamese organizations and individuals, as well as foreign entities that participate in AI-related activities within Vietnam. The law expressly excludes AI activities conducted solely for national defense, security, and cryptography purposes.

A defining feature of the AI Law is that it regulates by role, not by industry. It distinguishes between:

  • Developers, who design, build, train, test, or fine-tune AI models and have direct control over the technical methods, training data, or model parameters;
  • Providers, who place AI systems on the market or put them into use under their own names;
  • Deployers, who use AI systems under their control in professional, commercial, or service-provision activities;
  • Users, who interact with AI systems or rely on their outputs; and
  • Affected persons, whose lawful rights or interests, life, health, property, reputation, or opportunity to access services are directly or indirectly impacted by the deployment of, or by the outputs generated by, AI systems.

From a practical perspective, this role-based structure is critical. An organization may play multiple roles across different AI systems, or even within the same system. Where contractual roles do not align with regulatory roles under the AI Law, businesses may face unexpected compliance exposure or risk failing to fully meet their statutory obligations. As a result, role identification is the first governance decision any AI-related business must make under the AI Law.

Risk-Based Classification as the First Compliance Gate

At the core of the AI Law is a risk-based regulatory model. AI systems are classified as high-risk, medium-risk, or low-risk, as defined below:

  • High-risk: May cause significant harm to life, health, or the lawful rights and interests of organizations or individuals, as well as to national interests, public interests, or national security. Given this broad definition, the prime minister is tasked with issuing a list specifying which AI systems are classified as high-risk; only the systems included in that list will be subject to the strictest regulatory requirements applicable to high-risk AI (discussed further below).
  • Medium-risk: May confuse, influence, or manipulate users who are unable to recognize that they are interacting with an AI system or that content has been generated by one.
  • Low-risk: All remaining systems.

This classification is essentially the gateway to compliance, since it determines whether notification, conformity assessment, and other ongoing governance obligations apply. While Vietnam’s AI Law is broadly aligned with the EU AI Act in adopting a risk-based regulatory philosophy, its framework is structurally simpler. Unlike the EU AI Act, which embeds outright prohibitions within a four-tier risk taxonomy, Vietnam addresses prohibited AI practices separately and applies its three-tier classification only to AI systems that are otherwise lawful. From a governance perspective, this reduces classification ambiguity and supports more predictable enforcement.

Under the AI Law, providers bear the formal responsibility for self-classifying AI systems before they are put into use. Deployers inherit this classification but must reassess it if they materially modify the system or change how it is used.

For medium- and high-risk systems, providers must additionally prepare a risk classification dossier and notify the Ministry of Science and Technology (MST) through the national AI portal before deployment.

Governance of AI Systems Based on Risk Levels

Governance of High-Risk AI Systems

Classification as high-risk (that is, inclusion in the list of high-risk AI systems to be announced by the prime minister) has significant operational and governance implications. The AI Law imposes a lifecycle-wide governance framework that directly affects product design, deployment decisions, internal controls, and regulatory engagement. In particular:

Transparency obligation: Transparency under the AI Law is a user-facing operational obligation, not merely a documentation requirement. Providers must ensure that users can recognize when they are interacting with an AI system, and that AI-generated audio, images, and videos are appropriately marked in accordance with government standards. Deployers have corresponding duties when AI-generated or AI-edited content is made public, including clear disclosure and visible labeling where such content may cause confusion or involve simulation or impersonation. In practice, this requires transparency to be embedded into product design, user interfaces, content workflows, and public communications throughout the AI system’s lifecycle.

Incident management: The AI Law treats incident management as a collective obligation across the AI value chain. Developers, providers, deployers, and users are all required to ensure the safety, security, and reliability of AI systems, and to promptly detect and address incidents that may cause harm to individuals, property, data, or social order. Where a serious incident occurs, developers and providers must take immediate technical measures to remedy the issue, including suspending or recalling the system if necessary, and notify the competent authorities through the national one-stop AI portal. Deployers and users, in turn, are required to record, report, and cooperate in incident handling and remediation.

From a governance perspective, this framework requires organizations to establish clear internal incident thresholds, reporting and escalation procedures, and cross-functional coordination between technical, legal, and compliance teams, as well as operational readiness to suspend or withdraw AI systems when mandated by regulators.

Conformity assessment: High-risk AI systems are subject to mandatory conformity assessment before being put into use and upon any significant modification during operation. Depending on whether a system appears on the prime minister’s list of systems requiring prior certification, conformity assessment takes the form of either third-party certification by a registered or recognized assessment body (where certification is mandatory) or self-assessment or outsourced assessment by the provider (where it is not). A positive conformity assessment is a legal precondition for deployment, and providers are required to maintain conformity and publicly disclose relevant information on an ongoing basis.

Local presence for foreign providers: Foreign providers supplying high-risk AI systems in Vietnam are required to establish a lawful local contact point in Vietnam. When a high-risk system falls within the category subject to mandatory conformity certification prior to deployment, the provider must additionally establish a commercial presence or appoint an authorized representative in Vietnam.

Lifecycle governance obligations: Beyond these headline requirements, high-risk systems are subject to continuous risk management, data governance controls, technical documentation, human oversight, and regulatory cooperation obligations. For deployers, this translates into stricter limits on how systems may be used and monitored, and on whether they may be scaled beyond their original purpose.

Governance of Medium-Risk and Low-Risk AI Systems

Under the AI Law, medium-risk AI systems are governed primarily through transparency and accountability mechanisms rather than through ex ante conformity assessment and certification. Providers and deployers must comply with the transparency requirements mentioned above and be prepared to explain, upon request by competent authorities, the system’s purpose, functional operation, key input data, and risk management measures, without being required to disclose source code, detailed algorithms, or trade secrets. Deployers also bear responsibility for explaining system operation, risk controls, incident handling, and the protection of affected persons’ lawful rights.

Low-risk AI systems, by contrast, are subject to a largely post hoc oversight model. Providers and deployers are only required to account for such systems when there are indications of legal violations or adverse impacts on lawful rights or interests, while users remain free to use low-risk systems for lawful purposes at their own responsibility.

From a governance perspective, this lighter regulatory approach does not eliminate the need for internal controls. Organizations deploying medium- and low-risk AI systems should still maintain basic documentation, transparency mechanisms, and internal escalation pathways to respond efficiently if regulatory scrutiny or incidents arise, and are encouraged to apply relevant technical standards on a voluntary basis.

Other Notable Features of the AI Law

The AI Law establishes a sandbox mechanism for AI, under which testing results may be used by authorities to recognize conformity assessment results or adjust applicable obligations.

Vietnam will adopt a National AI Strategy, issued by the prime minister and subject to periodic review at least every three years or upon significant technological or market developments.

The AI Law introduces a National AI Ethics Framework to guide the development of standards, technical regulations, sector-specific guidance, and incentive policies for safe, trustworthy, and responsible AI, with voluntary application encouraged.

Depending on the nature, severity, and consequences of a violation, violators of the AI Law and other AI-related legal provisions will be subject to administrative sanctions or criminal liability. Where damage occurs, they must also pay compensation in accordance with civil law.

Outlook

While the AI Law represents a significant milestone in Vietnam’s digital regulatory development, it is best understood as a framework law rather than a fully exhaustive regulatory regime. Many key compliance elements, including detailed risk classification criteria, transparency and labeling requirements, incident reporting thresholds, conformity assessment procedures, and local presence obligations for foreign providers, are to be provided in subordinate implementing regulations.

At the time of writing, the competent authorities have not announced a specific timeline for the issuance of these implementing decrees and guidance. As a result, the full scope of practical compliance and enforcement expectations, especially for providers and deployers of high-risk AI systems, will only become clear as secondary legislation and regulatory guidance are issued.

Regardless, businesses developing, providing, or deploying AI systems in or into Vietnam should begin compliance planning at an early stage, rather than waiting for implementing decrees or enforcement actions. Early preparation will be particularly important for organizations operating complex AI supply chains or deploying systems that may fall within higher risk categories.


Related Professionals
Anh Ha Mai Ho
+84 24 3772 5549
Anh Hoai Nguyen
+84 24 3772 5596
Waewpen Piemwichai
+84 24 3772 5618