EU AI Act 2025: What enterprise teams must know
The EU AI Act's first bans took effect in February 2025. Here's what the law prohibits, what it restricts, and how to stay compliant.
Jorgo Bardho
Founder, Thread Transfer
The EU AI Act entered into force in August 2024, and its first prohibitions became applicable in February 2025, making it the world's first comprehensive regulatory framework for artificial intelligence. Penalties for the most serious violations reach €35 million or 7% of global annual turnover, whichever is higher. If your organization deploys AI in Europe or serves European customers, compliance is no longer optional; it's existential.
What the Act bans outright
Certain AI practices are prohibited entirely. These include social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), subliminal manipulation that causes harm, exploitation of vulnerable groups, and emotion recognition in workplaces and schools. Deploying prohibited systems triggers the maximum penalties.
High-risk AI classifications
High-risk AI systems face strict requirements. The Act identifies two categories:
- Product safety components—AI embedded in machinery, medical devices, aviation, or automotive systems already covered by EU product-safety legislation.
- Standalone high-risk systems—Biometric identification, critical infrastructure management, educational access, employment decisions, essential service access, law enforcement, migration control, and justice administration.
If your AI makes or significantly influences decisions in these areas, high-risk obligations apply.
Compliance requirements for high-risk AI
High-risk systems must meet seven core obligations:
- Risk management system—Continuous identification, analysis, and mitigation throughout the AI lifecycle.
- Data governance—Training, validation, and test datasets must be relevant, representative, and examined and corrected for possible biases.
- Technical documentation—Detailed records of design, development, and testing sufficient for conformity assessment.
- Record-keeping—Automatic logging of events to enable traceability and post-market monitoring (see the logging sketch after this list).
- Transparency and user information—Clear instructions for deployers; users informed when interacting with AI.
- Human oversight—Measures enabling humans to intervene, stop, or override AI decisions.
- Accuracy, robustness, cybersecurity—Systems must perform reliably, resist attacks, and maintain accuracy across deployment contexts.
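To make the record-keeping obligation concrete, here is a minimal Python sketch of structured decision logging. The schema and field names are our own illustration, not anything the Act prescribes; the point is a queryable trail tying each automated decision to a system, a model version, an input, and any human reviewer.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative event logger for record-keeping. The Act requires automatic
# logging for high-risk systems but does not mandate a particular format;
# this schema is our own convention.
logger = logging.getLogger("ai_audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("ai_decisions.log"))

def log_decision(system_id: str, model_version: str, input_ref: str,
                 output: dict, operator: str | None = None) -> None:
    """Append one decision event with enough context to reconstruct it later."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,          # ties the event to your AI inventory
        "model_version": model_version,  # which model produced the output
        "input_ref": input_ref,          # pointer to stored input, not raw personal data
        "output": output,                # the decision or score that was returned
        "human_operator": operator,      # who reviewed or overrode, if anyone
    }
    logger.info(json.dumps(event))

# Example: an automated CV-screening decision with no human override.
log_decision("cv-screener-01", "2025.06.1", "s3://inputs/req-8842",
             {"decision": "shortlist", "score": 0.87})
```

Logging a reference to the stored input rather than the input itself keeps audit trails useful without duplicating personal data into log files.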
General-purpose AI models (GPAI)
Foundation models like GPT-4, Claude, or Gemini fall under the GPAI rules. Providers must maintain technical documentation (including known or estimated energy consumption), publish a sufficiently detailed summary of the content used for training, and put a policy in place to comply with EU copyright law. Models presumed to carry "systemic risk" (training compute above 10^25 FLOPs) face additional obligations: model evaluations, adversarial testing, serious-incident reporting, and cybersecurity protections.
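To get a feel for where that threshold sits, the common heuristic that dense-transformer training costs roughly 6 FLOPs per parameter per token gives a back-of-the-envelope estimate (an approximation for illustration, not the Act's official measurement method):

```python
SYSTEMIC_RISK_FLOPS = 1e25  # Act's presumption threshold for systemic risk

def training_flops(params: float, tokens: float) -> float:
    """Heuristic training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

# GPT-3-scale run: 175B parameters trained on ~300B tokens.
flops = training_flops(175e9, 300e9)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: {flops > SYSTEMIC_RISK_FLOPS}")
# 3.15e+23 FLOPs -> systemic risk presumed: False
```

A GPT-3-scale run lands around 3×10^23 FLOPs, roughly two orders of magnitude below the threshold; the systemic-risk presumption targets only the very largest training runs.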
Timeline and enforcement
Prohibited-practice bans applied from February 2025. GPAI obligations apply from August 2025, most high-risk requirements from August 2026, and rules for high-risk AI embedded in regulated products from August 2027. National market-surveillance authorities will conduct inspections, request documentation, and impose penalties for non-compliance. The European AI Office coordinates enforcement, particularly for GPAI models, and issues guidance.
Compliance checklist
- Inventory all AI systems and classify risk levels using Annex III criteria (see the triage sketch after this checklist).
- For high-risk systems: establish risk management, data governance, logging, and oversight processes.
- Document technical architecture, training data sources, and performance benchmarks.
- Implement transparency notices and human-in-the-loop mechanisms.
- Conduct regular audits and update documentation as systems evolve.
- Designate a compliance owner and train teams on Act requirements.
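As a starting point for the inventory step, here is a small Python sketch of a triage structure that maps systems onto the Act's risk tiers. The area names and classification logic are simplifications for a first pass, not a substitute for legal review.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"   # transparency obligations only
    MINIMAL = "minimal"

# Simplified stand-ins for the Annex III high-risk areas listed above.
ANNEX_III_AREAS = {
    "biometric_identification", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration", "justice",
}

@dataclass
class AISystem:
    name: str
    purpose: str
    decision_area: str          # your internal taxonomy, mapped to Annex III
    influences_decisions: bool  # does it make or materially shape outcomes?

def triage(system: AISystem) -> RiskTier:
    """First-pass classification only; confirm with legal review."""
    if system.decision_area in ANNEX_III_AREAS and system.influences_decisions:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

screener = AISystem("cv-screener-01", "rank job applicants", "employment", True)
print(triage(screener))  # RiskTier.HIGH
```

Keeping the inventory in code (or a structured registry) makes reclassification cheap as systems evolve, which matters because the audit and documentation obligations attach per system, not per organization.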
Need help scoping your AI Act obligations? Reach out at info@thread-transfer.com and we'll share frameworks from teams already building compliant systems.