EU AI Act August 2026: Complete Compliance Guide for Businesses
The EU AI Act becomes fully enforceable on August 2, 2026. Fines reach €35 million or 7% of global revenue. Here is everything your business needs to know — in plain language.
TL;DR
- Full enforcement: August 2, 2026 (high-risk AI systems)
- 4 risk tiers: Unacceptable (banned) → High → Limited → Minimal
- High-risk AI in hiring, credit, healthcare, biometrics requires conformity assessment
- GPAI models (ChatGPT, Claude, Gemini) have transparency obligations since August 2025
- Fines: up to €35M or 7% global revenue for worst violations
- Applies to non-EU companies serving EU customers — GDPR-style extraterritoriality
EU AI Act Enforcement Timeline
| Date | What Takes Effect | Who Is Affected |
|---|---|---|
| August 1, 2024 | Act enters into force | All — staggered compliance deadlines begin |
| February 2, 2025 | Prohibited AI practices banned | Social scoring, real-time biometrics, manipulative AI |
| August 2, 2025 | GPAI model obligations | OpenAI, Anthropic, Google, Meta (foundation model providers) |
| August 2, 2026 | Full enforcement — high-risk AI | Any business deploying high-risk AI in the EU |
| August 2, 2027 | Annex I high-risk systems (extended deadline) | Legacy systems in safety-critical sectors |
The 4 Risk Tiers Explained
Tier 1: Unacceptable Risk (Banned)
These AI uses are prohibited entirely in the EU:
- Social scoring by governments (China-style citizen rating)
- Real-time remote biometric identification in public spaces (with narrow exceptions for law enforcement)
- Subliminal or manipulative AI techniques that exploit vulnerabilities
- AI that exploits vulnerabilities of specific groups (age, disability)
- Emotion recognition in workplaces and educational institutions
- Biometric categorization systems inferring race, religion, political views, sexual orientation
Tier 2: High Risk (Strict Requirements)
High-risk AI systems must undergo conformity assessment, maintain technical documentation, implement human oversight, and register with the EU AI database before deployment.
| Sector | Examples of High-Risk AI |
|---|---|
| Employment & HR | Resume screening, promotion decisions, performance monitoring |
| Finance & Credit | Credit scoring, loan decisioning, insurance risk assessment |
| Healthcare | Medical devices, diagnostic AI, triage systems |
| Education | Student assessment, access to educational institutions |
| Critical Infrastructure | Energy, water, transport, cybersecurity management |
| Law Enforcement | Crime prediction, evidence evaluation, risk assessment |
| Border Control | Visa and asylum applications, border risk assessment |
Tier 3: Limited Risk (Transparency Only)
AI systems that interact with humans must disclose that they are AI. The key requirement: chatbots, deepfakes, and AI-generated content must be clearly labeled.
- Chatbots must tell users they are interacting with AI (not a human)
- AI-generated images, video, and audio must be labeled as synthetic
- Deepfake content must be disclosed unless clearly artistic/satirical
Tier 4: Minimal Risk (No Obligations)
The vast majority of AI applications — spam filters, recommendation engines, product search, AI in video games — fall here. No mandatory compliance requirements, though voluntary codes of conduct apply.
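The four tiers above amount to a triage decision every deployer must make per system. The sketch below illustrates that triage in code; the category lists are a small illustrative sample, not an exhaustive legal mapping, and the function names are hypothetical.

```python
# Illustrative triage helper mirroring the four risk tiers.
# The sets below are examples from this guide, not the Act's full Annexes.
PROHIBITED = {"social_scoring", "realtime_public_biometrics",
              "workplace_emotion_recognition"}
HIGH_RISK = {"resume_screening", "credit_scoring", "medical_triage",
             "border_risk_assessment"}
LIMITED_RISK = {"customer_chatbot", "deepfake_generation",
                "ai_image_generation"}

def risk_tier(use_case: str) -> str:
    """Map a use case to its tier; anything unlisted defaults to minimal."""
    if use_case in PROHIBITED:
        return "unacceptable (banned)"
    if use_case in HIGH_RISK:
        return "high (conformity assessment required)"
    if use_case in LIMITED_RISK:
        return "limited (transparency obligations)"
    return "minimal (no mandatory obligations)"

print(risk_tier("resume_screening"))  # high (conformity assessment required)
print(risk_tier("spam_filter"))       # minimal (no mandatory obligations)
```

In practice this classification requires legal analysis of Annex III, but encoding a first-pass mapping like this helps inventory systems before formal review.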
What High-Risk AI Systems Must Do
| Requirement | What It Means |
|---|---|
| Risk management system | Continuous identification, analysis, and mitigation of risks throughout lifecycle |
| Data governance | Training data must meet quality standards; bias testing required |
| Technical documentation | Full system design, capabilities, limitations must be documented before deployment |
| Logging & record-keeping | Automatic logging of operations to enable traceability |
| Transparency | Users must be informed they are interacting with high-risk AI |
| Human oversight | Humans must be able to monitor, intervene, and override AI decisions |
| Accuracy & robustness | Appropriate accuracy metrics; resilience against adversarial attacks |
| Conformity assessment | Third-party audit (some sectors) or self-assessment + registration in EU database |
GPAI Models: What Frontier AI Providers Must Do
GPAI (General-Purpose AI) model obligations have applied since August 2, 2025. All GPAI model providers — including OpenAI, Anthropic, Google, and Meta — must comply with:
- Technical documentation of model architecture, training data, and capabilities
- Published, sufficiently detailed summary of the content used for training
- Compliance with EU copyright law during training
- Maintain documentation for 10 years after the model is discontinued
Models with systemic risk (training compute ≥ 10²⁵ FLOPs — a threshold that today captures only the largest frontier models) face additional requirements:
- Adversarial testing (red-teaming) before and after deployment
- Serious incident reporting to the European AI Office without undue delay
- Cybersecurity measures appropriate to the risk level
- Energy efficiency reporting
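The 10²⁵ FLOP threshold can be roughly sanity-checked with the widely used "6 × parameters × training tokens" estimate for dense transformer training. That formula is a community approximation, not part of the Act, and the model size below is hypothetical.

```python
SYSTEMIC_RISK_FLOPS = 1e25  # the Act's presumption threshold

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate via the common ~6*N*D rule of
    thumb for dense transformers (an approximation, not the Act's
    measurement methodology)."""
    return 6 * n_params * n_tokens

# Hypothetical 500B-parameter model trained on 15T tokens:
flops = estimated_training_flops(500e9, 15e12)
print(f"{flops:.1e}", flops >= SYSTEMIC_RISK_FLOPS)  # 4.5e+25 True
```

By this estimate, such a model would exceed the threshold by 4–5×, so its provider should assume the systemic-risk obligations apply and seek formal confirmation.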
Fine Structure
| Violation | Max Fine (whichever is higher) |
|---|---|
| Using prohibited AI (Tier 1) | €35M or 7% global annual turnover |
| High-risk AI non-compliance | €15M or 3% global annual turnover |
| Incorrect / misleading information to regulators | €7.5M or 1.5% global annual turnover |
| SMEs and startups | Capped at whichever of the fixed amount or percentage is lower |
Business Compliance Checklist
- Inventory every AI system you build, buy, or deploy for the EU market
- Classify each system against the four risk tiers above
- Confirm no use case falls into the prohibited tier (banned since February 2025)
- For high-risk systems: begin conformity assessment, technical documentation, bias testing, and human-oversight procedures; register in the EU AI database before deployment
- For limited-risk systems: add AI disclosure to chatbots and label synthetic content
- Assign an internal owner for ongoing risk management, logging, and record-keeping
Frequently Asked Questions
Does the EU AI Act apply to US companies?
Yes — extraterritorial scope like GDPR. Any company placing AI on the EU market or whose AI outputs are used in the EU must comply. US companies with EU customers, EU employees, or EU data processors are all in scope.
Is a standard chatbot high-risk under the EU AI Act?
Most customer service chatbots are limited risk (must disclose they are AI) or minimal risk. A chatbot becomes high-risk only if it makes consequential decisions in regulated domains: credit approval, hiring, healthcare triage, or law enforcement.
What counts as "high-risk AI" in hiring?
AI that screens CVs, ranks candidates, or informs hiring decisions is classified as high-risk. Tools like automated video interview analysis and AI-powered applicant tracking systems fall into this category and require conformity assessment, bias testing, and human oversight.
Build AI workflows that are designed for compliance
HappyCapy includes human-in-the-loop controls, audit logging, and transparent AI outputs — designed with responsible deployment in mind.
Try HappyCapy Free