Artificial intelligence (AI), semiconductors, and digital assets are considered critical drivers of Vietnam's future economic growth and are fundamental to the nation's digital transformation targets. These sectors form the core of Vietnam's strategy to build a robust, globally competitive digital economy.

This strategic direction gained substantial momentum with the issuance of the Law on Digital Technology Industry (DTI Law) on June 14, 2025. The DTI Law was designed to attract investment, stimulate innovation, cultivate high-quality human resources, and ensure the responsible, secure, and sustainable growth of digital technologies such as AI and digital assets, harmonizing Vietnam's digital industry with international standards while safeguarding public interests and national security. Several key provisions of the DTI Law took effect on July 1, 2025, and the law will become fully effective on January 1, 2026. The government is tasked with issuing further guidelines and detailed regulations for the law's implementation.

Artificial Intelligence (AI): Principle-Driven and Risk-Based Regulations

The DTI Law sets out seven core principles guiding the development, provision, and use of AI, applicable to AI developers, providers, and deployers. These principles favor values-based governance over purely technical prescriptions and include the following:

- Taking a human-centered approach that upholds ethical values, inclusivity, flexibility, equality, and non-discrimination.
- Ensuring transparency, accountability, and explainability, with AI systems remaining under human control.
- Maintaining cybersecurity and system safety.
- Adhering to data protection and privacy regulations.
- Retaining the ability to control AI algorithms and models.
- Managing risk effectively throughout the entire lifecycle of AI systems.
- Complying with consumer protection laws and other relevant legal frameworks.

AI system management follows a risk-based approach, with the law categorizing systems as high-risk, high-impact, or other. High-risk AI systems are those that, in certain applications, may pose significant threats or harm to individuals or the public interest while