
Thread Transfer

AI Regulatory Landscape 2025: Global Overview

The EU AI Act's prohibitions and general-purpose AI obligations are now enforceable, with fines up to €35M or 7% of global turnover for prohibited systems. Colorado and California enacted comprehensive AI laws. Here's the global regulatory landscape.

Jorgo Bardho

Founder, Thread Transfer

August 23, 2025 · 20 min read
AI regulation · EU AI Act · compliance · GDPR · ISO 42001 · enterprise governance
[Figure: Global AI regulatory landscape visualization]

The EU AI Act's prohibitions took effect in February 2025, and its governance rules and obligations for general-purpose AI (GPAI) models became enforceable on August 2, 2025. Prohibited systems face fines of up to €35 million or 7% of global turnover. Colorado and California enacted comprehensive AI laws governing high-risk systems in employment, housing, and credit decisions. On December 11, 2025, the US President signed an executive order attempting to preempt state AI regulation, a move legal scholars call unconstitutional. China mandated explicit watermarking of all AI-generated synthetic content. 2025 is the year AI regulation shifted from proposals to enforceable frameworks with real financial penalties.

EU AI Act: enforcement timeline and prohibited systems

The AI Act (Regulation (EU) 2024/1689) is the first comprehensive legal framework on AI worldwide, addressing the risks of AI and positioning Europe to play a leading role globally. It entered into force on August 1, 2024, and becomes fully applicable two years later, on August 2, 2026, with some exceptions: prohibited AI practices and AI literacy obligations applied from February 2, 2025; the governance rules and obligations for GPAI models became applicable on August 2, 2025; and the rules for high-risk AI systems embedded in regulated products have an extended transition period until August 2, 2027.

The European Commission has made it clear that the timetable for implementing the Artificial Intelligence Act remains unchanged; there are no plans for transition periods or postponements. The first major enforcement deadline, February 2, 2025, introduced two key obligations: the bans on prohibited AI practices and AI literacy requirements. The Act explicitly bans AI systems that engage in manipulative behavior, social scoring, or unauthorized biometric surveillance.

Prohibited AI practices under the EU AI Act

Prohibited practices include cognitive behavioral manipulation of people or specific vulnerable groups (for example, voice-activated toys that encourage dangerous behavior in children); social scoring AI (classifying people based on behavior, socio-economic status, or personal characteristics); biometric identification and categorization of people; and real-time and remote biometric identification systems, such as facial recognition in public spaces. However, some exceptions are made for law enforcement purposes, such as searching for missing persons or preventing terrorist attacks.

AI systems that manipulate people's decisions or exploit their vulnerabilities, systems that evaluate or classify people based on their social behavior or personal traits, and systems that predict a person's risk of committing a crime are banned outright. The Act also bans AI systems that scrape facial images from the internet or CCTV footage, infer emotions in the workplace or educational institutions, and categorize people based on their biometric data. These prohibitions carry the highest penalties: up to €35 million or 7% of global annual turnover.

High-risk AI systems and compliance requirements

Article 6 of the AI Act defines the conditions under which an AI system is considered "high-risk." High-risk AI systems are those the EU considers to pose a high risk to the health, safety, or fundamental rights of EU citizens, but whose major socio-economic benefits outweigh these risks, which is why they are regulated rather than banned.

These high-risk use cases include AI safety components in critical infrastructure (e.g., transport), whose failure could put the life and health of citizens at risk; AI used in education that may determine access to education and the course of someone's professional life (e.g., scoring of exams); AI-based safety components of products (e.g., AI in robot-assisted surgery); and AI tools for employment, management of workers, and access to self-employment. More broadly, the high-risk categories cover biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and the administration of justice.

Providers, deployers, importers, and distributors of systems that meet the AI system description need to consider whether those systems further qualify as 'high-risk' AI systems. Providers and deployers of high-risk AI systems will be subject to significant regulatory obligations, with enhanced thresholds of diligence, initial risk assessment, and transparency compared to AI systems not falling into this category. The technology itself will need to comply with certain requirements—including around risk management, data quality, transparency, human oversight, and accuracy—while the businesses providing or deploying that technology will face obligations around registration, quality management, monitoring, record-keeping, and incident reporting.
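
To make the classification step concrete, here is a minimal sketch in Python of how an organization might record whether a system falls into one of these high-risk areas. The category names and the `classify_risk` helper are illustrative assumptions, not an official mapping of the Act's Annex III list.

```python
from dataclasses import dataclass

# Hypothetical high-risk use-case areas, loosely following the categories
# discussed above (biometrics, critical infrastructure, education, etc.).
HIGH_RISK_AREAS = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration",
    "justice",
}

@dataclass
class AISystem:
    name: str
    use_case_area: str           # e.g. "employment"
    is_prohibited: bool = False  # e.g. social scoring or manipulative systems

def classify_risk(system: AISystem) -> str:
    """Return a coarse, illustrative risk tier for an AI system."""
    if system.is_prohibited:
        return "prohibited"
    if system.use_case_area in HIGH_RISK_AREAS:
        return "high-risk"
    return "limited-or-minimal-risk"

print(classify_risk(AISystem("cv-screening", "employment")))  # -> "high-risk"
```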

General-purpose AI (GPAI) obligations

The GPAI rules that took effect in August 2025 introduce significant compliance requirements for general-purpose AI model providers. All GPAI model providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Providers of GPAI models released under a free and open license only need to comply with the copyright obligations and publish the training data summary, unless their models present a systemic risk.

All providers of GPAI models that present a systemic risk—open or closed—must also conduct model evaluations, adversarial testing, track and report serious incidents, and ensure cybersecurity protections. For providers of GPAI models, the European Commission may impose a fine of up to €15 million or 3% of worldwide annual turnover. The European AI Office and authorities of the Member States are responsible for implementing, supervising, and enforcing the AI Act. The AI Office, based in Brussels, will enforce the obligations for providers of GPAI models, as well as support EU Member State national authorities in enforcing the AI Act's requirements for AI systems.

US state AI regulation: Colorado and California lead

Thirty-eight states enacted laws in 2025 regulating AI in one way or another. According to the National Conference of State Legislatures, all fifty states, Puerto Rico, the Virgin Islands, and Washington, D.C., introduced AI legislation in 2025. Several states and municipalities, including California, Colorado, Illinois, Texas, and New York City, have enacted laws or regulations that impose restrictions on the use and development of AI, including transparency obligations, notice requirements, antidiscrimination protections, and safety guardrails.

Colorado's Consumer Protections for Artificial Intelligence is the first comprehensive state law in the U.S. that aims to regulate AI systems used in employment, housing, credit, education, and health care decisions. The Colorado law aims to protect people from algorithmic discrimination and requires organizations using these "high-risk systems" to make impact assessments of the technology, notify consumers whether predictive AI will be used in consequential decisions about them, and make public the types of systems they use. However, enforcement of the law has been delayed while the state legislature considers its ramifications.

California's Transparency in Frontier Artificial Intelligence Act specifies guardrails on the development of the most powerful AI models. These models, called foundation or frontier models, are trained on extremely large and varied datasets and can be adapted to a wide range of tasks without additional training. Among recently passed California laws are a ban on AI makers blaming the technology itself when defending themselves in court against claims that it harmed people, a prohibition on using algorithms to raise prices, a requirement that AI makers supply the public with tools to identify AI-generated content, and a law requiring AI makers to disclose details about the data used to train their models.

US executive order vs. state regulation

On December 11, 2025, President Trump signed an executive order purporting to limit the ability of states to regulate the use of artificial intelligence. The order's stated purpose is to ensure that American AI companies are "free to innovate without cumbersome regulation" and to "remove barriers to American AI leadership." The order directs the Attorney General to establish an AI Litigation Task Force charged with challenging (suing) states over their AI laws. It also directs the Secretary of Commerce to publish an "evaluation" of existing state AI laws that the administration believes may conflict with its goal of freeing AI from regulatory restrictions.

The order directs the Commerce Department to halt funding under the Broadband Equity, Access, and Deployment (BEAD) program to states whose laws conflict with the order, and directs all federal agencies to examine whether their discretionary grant programs can be conditioned on states not enacting AI laws that conflict with the order. The executive order exempts state AI laws related to child safety. The final order also places express limits on its call for federal legislation, stating that such legislation should exempt certain categories of state AI laws from preemption: child safety protection, AI compute and data center infrastructure, and state government procurement and use of AI.

It is not clear what effect the executive order will have, and observers have said it is illegal because only Congress can supersede state laws. It is also highly likely that the order will be challenged by states that have adopted AI regulation as an unconstitutional encroachment of federal authority on states' rights. On July 1, 2025, the Senate voted 99–1 to strip a provision from the budget reconciliation bill that would have imposed a moratorium on state and local enforcement of AI regulations. House Republicans have since renewed efforts for federal legislation to preempt state AI laws, pushing to include an AI preemption provision in the Fiscal Year 2026 National Defense Authorization Act (NDAA).

China: mandatory AI content watermarking

China's regulatory framework for generative AI is anchored in the Interim Measures for the Management of Generative Artificial Intelligence Services, which took effect on August 15, 2023. These measures mark China's first administrative regulation directly targeting generative AI and are enforced by a coalition of state agencies led by the Cyberspace Administration of China. China introduced strict new rules in March 2025, mandating explicit and implicit labeling of all AI-generated synthetic content. These rules align with broader efforts to strengthen digital ID systems and reinforce state control.

In 2025, the Cyberspace Administration of China (CAC) expanded its licensing regime to include foundation-model developers, aligning compliance with data-security and cybersecurity audits. Since 2023, China has required AI developers to watermark synthetic content, perform security assessments, and store data locally unless exceptions are granted. New rules coming into effect in 2025 further tighten requirements, especially for labeling AI-generated media. Organizations operating AI systems in China must comply with local data residency, security assessments, and watermarking requirements or face licensing restrictions.
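
As a toy illustration of combining explicit and implicit labeling, the sketch below appends a visible disclosure to generated text and attaches a machine-readable metadata tag. The label wording and metadata keys are assumptions for illustration, not the exact wording or format the Chinese measures require.

```python
import json

def label_synthetic_text(text: str, model_name: str) -> dict:
    """Attach an explicit (human-visible) label and an implicit (metadata) label."""
    explicit = f"{text}\n\n[AI-generated content]"  # visible disclosure appended to the output
    implicit = {
        "synthetic": True,        # machine-readable flag
        "generator": model_name,  # which model produced the content
        "label_version": "demo-1",
    }
    return {"content": explicit, "metadata": implicit}

labeled = label_synthetic_text("Quarterly outlook summary ...", model_name="demo-llm")
print(json.dumps(labeled["metadata"]))
```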

Canada: voluntary code of conduct and AIDA

The Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27 in 2022, is Canada's proposed regulatory framework for ensuring responsible development and use of AI technologies. It aims to protect individuals and uphold Canadian values while supporting innovation and international interoperability, using a risk-based approach to regulate high-impact AI systems. AIDA has not yet become law; in the meantime, a voluntary Code of Conduct guides responsible AI development and use.

UK: sector-specific guidance over new laws

In the UK, AI oversight is guided by sector regulators and voluntary principles rather than a single law, with sector-specific guidance intended to improve transparency and accountability without introducing new legislation. The UK remains cautious, reiterating the need for thorough risk assessments before finalizing comprehensive legislation, while continuing to accept industry-led recommendations. This approach emphasizes flexibility but creates uncertainty for enterprises seeking clear compliance requirements.

Japan, South Korea, Singapore: risk-based frameworks

South Korea's "Basic Act on the Development of Artificial Intelligence and Establishment of Foundation for Trust" will come into force on January 22, 2026. Similar to its EU counterpart, the Basic Act employs a risk-based approach to regulate the deployment, operation, and development of AI systems, with more stringent requirements applying only to specific "high-risk" use cases. Japan, South Korea, and Singapore are shaping risk-based frameworks aligned with OECD and G7 AI Principles, combining innovation incentives with robust governance standards.

International compliance frameworks: ISO 42001 and NIST AI RMF

International standards, particularly ISO/IEC 42001, are increasingly influential in shaping risk management, privacy, and auditing processes. ISO/IEC 42001 is the first international standard for AI management systems; it helps organizations govern AI responsibly, manage risks, and ensure transparency. The NIST AI Risk Management Framework (AI RMF) is US-developed but globally respected. It offers practical guidance for building trustworthy, fair, and explainable AI systems and is endorsed by over 40 countries.

Organizations pursuing EU AI Act compliance often adopt ISO 42001 as the foundational governance structure, then layer region-specific requirements on top. The framework covers AI system lifecycle management, risk assessment, stakeholder engagement, and continuous monitoring—all core requirements under the EU AI Act for high-risk systems. NIST AI RMF, while not a compliance standard itself, provides a common language for discussing AI risks and trustworthiness across jurisdictions.
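
One way to operationalize the "baseline framework plus jurisdictional overlays" idea is to treat governance controls as data and merge region-specific requirements onto a shared baseline. A minimal sketch follows; the control names and overlays are illustrative assumptions, not an ISO 42001 control catalogue.

```python
# Baseline controls loosely inspired by an AI management system such as ISO/IEC 42001;
# the specific control names here are illustrative, not taken from the standard.
BASELINE_CONTROLS = {
    "risk_assessment": True,
    "lifecycle_management": True,
    "stakeholder_engagement": True,
    "continuous_monitoring": True,
}

# Hypothetical jurisdiction-specific overlays layered on top of the baseline.
JURISDICTION_OVERLAYS = {
    "EU": {"fundamental_rights_impact_assessment": True, "conformity_documentation": True},
    "Colorado": {"algorithmic_discrimination_impact_assessment": True},
    "China": {"synthetic_content_labeling": True, "data_residency": True},
}

def controls_for(jurisdiction: str) -> dict:
    """Merge baseline governance controls with a jurisdiction's overlay."""
    return {**BASELINE_CONTROLS, **JURISDICTION_OVERLAYS.get(jurisdiction, {})}

print(sorted(controls_for("EU")))
```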

Enterprise compliance challenges in 2025

AI regulations are new and not always consistent across regions. What's acceptable in the US might not fly in the EU or China. Businesses are left guessing what "compliant" really means, and that uncertainty slows down decision-making. Providers must establish robust compliance frameworks to adhere to regulatory requirements and avoid penalties. Regulatory bodies enforce these requirements through audits. More organizations are appointing dedicated teams or officers to manage AI risk, compliance, and ethics.

There's growing focus on fairness, transparency, and accountability in AI systems—not just as a best practice, but as a compliance expectation. Global AI regulation is a moving target, but the fundamentals are now clear: trust, transparency, and local adaptability are non-negotiable. Businesses should design AI systems with a strong central governance framework that can be tailored to meet the specific requirements of each jurisdiction.

Compliance steps for organizations

Companies should consider the following to enhance compliance: establishing a complete AI inventory with risk classification; clarifying the company's role (supplier, modifier, or deployer); preparing the necessary technical and transparency documentation; implementing copyright and data protection requirements; training and verifying AI competence among employees (including external staff); and adapting internal governance structures, including the appointment of responsible persons.
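
A minimal sketch of what a single AI inventory entry might look like is shown below. The field names mirror the checklist above but are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class InventoryEntry:
    """One row in a hypothetical enterprise AI inventory."""
    system_name: str
    risk_class: str            # e.g. "prohibited", "high-risk", "limited", "minimal"
    company_role: str          # "supplier", "modifier", or "deployer"
    documentation_ready: bool  # technical and transparency documentation prepared
    copyright_reviewed: bool   # copyright and data protection requirements checked
    responsible_person: str    # internally appointed owner
    trained_staff: list[str] = field(default_factory=list)  # staff with verified AI competence

inventory = [
    InventoryEntry("resume-screener", "high-risk", "deployer", True, True, "j.doe", ["hr-team"]),
    InventoryEntry("marketing-copy-bot", "minimal", "deployer", False, True, "a.lee"),
]

# A simple compliance gap report: high-risk systems missing documentation.
gaps = [e.system_name for e in inventory
        if e.risk_class == "high-risk" and not e.documentation_ready]
print("High-risk systems missing documentation:", gaps)
```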

For high-risk AI systems under the EU AI Act, organizations must implement risk management systems, data governance protocols, technical documentation, record-keeping, transparency measures, human oversight, and cybersecurity safeguards. Post-market monitoring and incident reporting obligations require continuous risk monitoring and reporting serious incidents related to high-risk AI systems. These obligations mirror the quality management and compliance frameworks used in regulated industries like medical devices and automotive safety.
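
To make the record-keeping and incident-reporting obligations concrete, here is a minimal sketch of a serious-incident record checked against a reporting deadline. The 15-day window and field names are illustrative assumptions; actual deadlines depend on the incident type and the applicable rules.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative reporting window; real deadlines under the AI Act vary by incident type.
REPORTING_WINDOW_DAYS = 15

@dataclass
class SeriousIncident:
    system_name: str
    occurred_on: date
    became_aware_on: date
    description: str
    reported_on: date | None = None

    def reporting_deadline(self) -> date:
        """Date by which the incident must be reported, counted from awareness."""
        return self.became_aware_on + timedelta(days=REPORTING_WINDOW_DAYS)

    def is_overdue(self, today: date) -> bool:
        """True if the incident is still unreported past its deadline."""
        return self.reported_on is None and today > self.reporting_deadline()

incident = SeriousIncident(
    system_name="credit-scoring-model",
    occurred_on=date(2025, 9, 1),
    became_aware_on=date(2025, 9, 3),
    description="Systematic score errors affecting a protected group",
)
print(incident.reporting_deadline(), incident.is_overdue(date(2025, 9, 20)))
```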

Context management for regulatory compliance

AI regulation requires maintaining detailed records of model training data, decision logic, human oversight interventions, and outcomes. For high-risk systems, this includes version control for model updates, audit logs for every inference, and documentation of human review processes. Thread Transfer addresses this by bundling AI interaction context into portable, auditable packages that preserve the full decision trail—input data, model version, output, human review, and outcome.
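
The article does not spell out Thread Transfer's internal bundle format, so the following is only a rough sketch of what a portable, auditable context bundle covering that decision trail could look like; all field names are assumptions.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ContextBundle:
    """Hypothetical portable record of one AI-assisted decision."""
    bundle_id: str
    model_version: str
    input_data: dict
    output: dict
    human_review: dict  # who reviewed, when, and what they decided
    outcome: str        # final decision that reached the affected person

    def serialize_with_hash(self) -> tuple[str, str]:
        """Serialize the bundle and return (payload, content hash) for tamper evidence."""
        payload = json.dumps(asdict(self), sort_keys=True, default=str)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        return payload, digest

bundle = ContextBundle(
    bundle_id="2025-10-07-0001",
    model_version="scorer-v3.2",
    input_data={"applicant_id": "A-123"},
    output={"score": 0.42},
    human_review={"reviewer": "j.doe", "decision": "override approved"},
    outcome="loan denied, manual review requested",
)
payload, digest = bundle.serialize_with_hash()
print(digest[:16])  # short fingerprint for the audit index
```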

When a regulatory authority requests documentation for a high-risk AI system, organizations must produce evidence that the system was deployed according to approved risk management protocols, that human oversight occurred as required, and that incidents were reported within mandated timeframes. Without structured context preservation, reconstructing this audit trail is labor-intensive and error-prone. Context bundling automates capture of compliance-relevant metadata at the point of inference, ensuring audit readiness without manual record-keeping overhead.
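
A simple way to capture compliance-relevant metadata at the point of inference is to wrap the model call so that inputs, outputs, model version, and timestamps are logged automatically. The decorator below is a hedged sketch, with an in-memory list standing in for an append-only audit store; every name in it is an assumption.

```python
import functools
import json
import time
import uuid

AUDIT_LOG = []  # in practice this would be an append-only, access-controlled store

def audited(model_version: str):
    """Decorator that records an audit entry for every inference call."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            entry = {
                "inference_id": str(uuid.uuid4()),
                "model_version": model_version,
                "timestamp": time.time(),
                "inputs": {"args": repr(args), "kwargs": repr(kwargs)},
            }
            result = fn(*args, **kwargs)
            entry["output"] = repr(result)
            AUDIT_LOG.append(json.dumps(entry))
            return result
        return inner
    return wrap

@audited(model_version="scorer-v3.2")
def score_applicant(features: dict) -> float:
    # Placeholder model logic for the sketch.
    return 0.42 if features.get("income", 0) < 30_000 else 0.87

score_applicant({"income": 25_000})
print(len(AUDIT_LOG), "audit entries captured")
```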

The path forward

2025 is the year AI regulation shifted from proposals to enforceable frameworks with real financial penalties. The EU AI Act's prohibitions and GPAI obligations are now enforceable, US states have enacted comprehensive AI laws despite federal preemption attempts, and China has mandated AI content watermarking. Organizations deploying AI systems must navigate a fragmented global regulatory landscape where compliance requirements vary by jurisdiction, use case, and risk level. The organizations winning are those treating compliance as infrastructure rather than an afterthought, building risk management, transparency, and auditability into AI systems from day one.