
By Connie · Last reviewed: April 2026 — pricing & tools verified · This article contains affiliate links. We may earn a commission at no extra cost to you if you sign up through our links.

AI Regulation · April 5, 2026 · 9 min read

EU AI Act August 2026: Complete Compliance Guide for Businesses

The EU AI Act becomes fully enforceable on August 2, 2026. Fines reach €35 million or 7% of global revenue. Here is everything your business needs to know — in plain language.

TL;DR

  • Full enforcement: August 2, 2026 (high-risk AI systems)
  • 4 risk tiers: Unacceptable (banned) → High → Limited → Minimal
  • High-risk AI in hiring, credit, healthcare, and biometrics requires conformity assessment
  • GPAI models (ChatGPT, Claude, Gemini) have had transparency obligations since August 2025
  • Fines: up to €35M or 7% of global revenue for the worst violations
  • Applies to non-EU companies serving EU customers — GDPR-style extraterritoriality

EU AI Act Enforcement Timeline

Date | What Takes Effect | Who Is Affected
--- | --- | ---
August 1, 2024 | Act enters into force | All — 24-month countdown to full enforcement begins
February 2, 2025 | Prohibited AI practices banned | Social scoring, real-time biometrics, manipulative AI
August 2, 2025 | GPAI model obligations | OpenAI, Anthropic, Google, Meta (foundation model providers)
August 2, 2026 | Full enforcement — high-risk AI | Any business deploying high-risk AI in the EU
August 2, 2027 | Annex I high-risk systems (extended deadline) | AI embedded in products under existing EU safety laws (machinery, medical devices, vehicles)

The 4 Risk Tiers Explained

Tier 1: Unacceptable Risk (Banned)

These AI uses are prohibited entirely in the EU:

  • Social scoring by governments (China-style citizen rating)
  • Real-time remote biometric identification in public spaces (with narrow exceptions for law enforcement)
  • Subliminal or manipulative AI techniques that materially distort a person's behavior
  • AI that exploits vulnerabilities of specific groups (age, disability)
  • Emotion recognition in workplaces and educational institutions
  • Biometric categorization systems inferring race, religion, political views, sexual orientation

Tier 2: High Risk (Strict Requirements)

High-risk AI systems must undergo conformity assessment, maintain technical documentation, implement human oversight, and register with the EU AI database before deployment.

Sector | Examples of High-Risk AI
--- | ---
Employment & HR | Resume screening, promotion decisions, performance monitoring
Finance & Credit | Credit scoring, loan decisioning, insurance risk assessment
Healthcare | Medical devices, diagnostic AI, triage systems
Education | Student assessment, access to educational institutions
Critical Infrastructure | Energy, water, transport, cybersecurity management
Law Enforcement | Crime prediction, evidence evaluation, risk assessment
Border Control | Visa and asylum applications, border risk assessment

Tier 3: Limited Risk (Transparency Only)

AI systems that interact with humans must disclose that they are AI. The key requirement: chatbots, deepfakes, and AI-generated content must be clearly labeled.

  • Chatbots must tell users they are interacting with AI (not a human)
  • AI-generated images, video, and audio must be labeled as synthetic
  • Deepfake content must be disclosed unless clearly artistic/satirical
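
What does that look like in practice? Here is a minimal sketch in Python of a chatbot disclosure and a synthetic-content label. The wording and helper names are our own illustration, not text the Act prescribes:

```python
# Minimal AI-disclosure helpers for a customer-facing chatbot.
# The Act requires telling users they are interacting with AI;
# the exact wording below is our own, not mandated text.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Responses are generated automatically."
)

def start_chat_session(send) -> None:
    """Show the AI disclosure before the first reply in a session."""
    send(AI_DISCLOSURE)

def label_synthetic_media(caption: str) -> str:
    """Prefix AI-generated media captions with a synthetic-content label."""
    return f"[AI-generated] {caption}"

# Hypothetical usage:
start_chat_session(print)
print(label_synthetic_media("Hero image rendered by our image model"))
```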

Tier 4: Minimal Risk (No Obligations)

The vast majority of AI applications — spam filters, recommendation engines, product search, AI in video games — fall here. There are no mandatory compliance requirements, though voluntary codes of conduct are encouraged.

What High-Risk AI Systems Must Do

Requirement | What It Means
--- | ---
Risk management system | Continuous identification, analysis, and mitigation of risks throughout the lifecycle
Data governance | Training data must meet quality standards; bias testing required
Technical documentation | Full system design, capabilities, and limitations documented before deployment
Logging & record-keeping | Automatic logging of operations to enable traceability (sketched below)
Transparency | Users must be informed they are interacting with high-risk AI
Human oversight | Humans must be able to monitor, intervene, and override AI decisions
Accuracy & robustness | Appropriate accuracy metrics; resilience against adversarial attacks
Conformity assessment | Third-party audit (some sectors) or self-assessment plus registration in the EU database
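
The logging and human-oversight rows are the ones most teams can act on in code first. Below is a minimal sketch of traceable, overridable AI decisioning in Python; the log structure and field names are our own assumption, not a format the Act mandates:

```python
import json
import logging
from datetime import datetime, timezone

# Structured audit log: every AI decision is recorded with enough
# context to reconstruct it later (the traceability requirement).
logging.basicConfig(filename="ai_decisions.log", level=logging.INFO)

def log_decision(system_id: str, inputs: dict, output: str,
                 model_version: str, overridden_by: str | None = None) -> None:
    """Append one AI decision to the audit log as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_override": overridden_by,  # None means the decision stood
    }
    logging.info(json.dumps(record))

# Human oversight: the AI output stays a recommendation until a named
# reviewer accepts it or overrides it.
def review(system_id: str, inputs: dict, ai_output: str,
           reviewer: str, override: str | None = None) -> str:
    final = override if override is not None else ai_output
    log_decision(system_id, inputs, final, "1.4.0",
                 overridden_by=reviewer if override is not None else None)
    return final

# Hypothetical usage: a reviewer overrides a resume-screening decision.
review("resume-screener-v2", {"candidate_id": "c-123"},
       ai_output="reject", reviewer="jane@example.com", override="advance")
```

The point is that every decision, and every human override, leaves a timestamped record an auditor can replay.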

GPAI Models: What Frontier AI Providers Must Do

GPAI (general-purpose AI) model obligations have applied since August 2, 2025. All GPAI model providers (OpenAI, Anthropic, Google, Meta) must comply with:

  • Technical documentation of model architecture, training data, and capabilities
  • A published, sufficiently detailed summary of the content used for training
  • Compliance with EU copyright law during training
  • Documentation maintained for 10 years after the model is discontinued

Models with systemic risk (training compute ≥ 10²⁵ FLOPs — roughly GPT-5.4, Claude Mythos 5, Gemini 3.1 Ultra) face additional requirements:

  • Adversarial testing (red-teaming) before and after deployment
  • Incident reporting to the European AI Office within 72 hours
  • Cybersecurity measures appropriate to the risk level
  • Energy efficiency reporting
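
How do you know whether a model crosses the 10²⁵ FLOP line? A widely used rule of thumb puts transformer training compute at roughly 6 × parameters × training tokens. A quick sketch, with illustrative figures only:

```python
# Rule-of-thumb estimate of transformer training compute:
# total FLOPs is roughly 6 * parameters * training tokens.
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the AI Act

def training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

# Illustrative figures only: a 500B-parameter model on 15T tokens.
estimate = training_flops(500e9, 15e12)
print(f"Estimated training compute: {estimate:.1e} FLOPs")
print("Systemic-risk GPAI" if estimate >= SYSTEMIC_RISK_THRESHOLD
      else "Below the threshold")
```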

Fine Structure

Violation | Max Fine
--- | ---
Using prohibited AI (Tier 1) | €35M or 7% of global annual turnover, whichever is higher
High-risk AI non-compliance | €15M or 3% of global annual turnover, whichever is higher
Incorrect / misleading information to regulators | €7.5M or 1% of global annual turnover, whichever is higher
SMEs and startups | Same categories, but the lower of the two amounts applies (see the worked example below)
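
Because each cap is the higher of a fixed amount and a turnover percentage (the lower for SMEs), the applicable maximum depends on company size. A quick worked illustration in Python, with hypothetical turnover figures:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float,
             is_sme: bool = False) -> float:
    """Applicable cap: the higher of the two amounts, or the lower for SMEs."""
    pct_based = turnover_eur * pct
    return min(fixed_cap_eur, pct_based) if is_sme else max(fixed_cap_eur, pct_based)

# Prohibited-AI violation (35M EUR / 7%) for a hypothetical 2B EUR turnover:
print(max_fine(2e9, 35e6, 0.07))         # 140,000,000.0 -> the 7% cap dominates
# Same violation for a 10M EUR turnover startup under the SME rule:
print(max_fine(10e6, 35e6, 0.07, True))  # 700,000.0 -> the lower amount applies
```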

Business Compliance Checklist

Step 1: Inventory all AI systems in use across your organization (a minimal inventory sketch follows this checklist)
Step 2: Classify each system by risk tier (unacceptable / high / limited / minimal)
Step 3: For any unacceptable risk systems — discontinue or reconfigure immediately
Step 4: For high-risk systems — begin conformity assessment process now (allow 6+ months)
Step 5: Assign a responsible person (AI Compliance Officer equivalent) for each high-risk system
Step 6: Implement technical documentation for each high-risk system
Step 7: Enable logging and human oversight mechanisms
Step 8: Conduct bias and accuracy testing on training data
Step 9: Register high-risk systems in the EU AI database before August 2026
Step 10: Add AI disclosure labels to all customer-facing chatbots and AI-generated content
Step 11: Review GPAI providers you use (OpenAI, Anthropic, etc.) for their compliance status
Step 12: Set up incident reporting procedures for potential AI failures
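
Steps 1 and 2 amount to building a machine-readable register of every AI system and its risk tier. A minimal sketch of such an inventory in Python; the field names are our own suggestion, not a mandated format:

```python
from dataclasses import dataclass, asdict
from enum import Enum
import json

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    vendor: str
    purpose: str
    tier: RiskTier
    owner: str                      # responsible person (Step 5)
    eu_db_registered: bool = False  # Step 9, high-risk systems only

inventory = [
    AISystem("resume-screener", "Acme HR", "CV ranking",
             RiskTier.HIGH, "jane@example.com"),
    AISystem("support-bot", "in-house", "customer FAQ chat",
             RiskTier.LIMITED, "ops@example.com"),
]

# Anything high-risk and unregistered needs action before August 2026.
for system in inventory:
    if system.tier is RiskTier.HIGH and not system.eu_db_registered:
        record = {**asdict(system), "tier": system.tier.value}
        print(json.dumps(record))
```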

Frequently Asked Questions

Does the EU AI Act apply to US companies?

Yes — extraterritorial scope like GDPR. Any company placing AI on the EU market or whose AI outputs are used in the EU must comply. US companies with EU customers, EU employees, or EU data processors are all in scope.

Is a standard chatbot high-risk under the EU AI Act?

Most customer service chatbots are limited risk (must disclose they are AI) or minimal risk. A chatbot becomes high-risk only if it makes consequential decisions in regulated domains: credit approval, hiring, healthcare triage, or law enforcement.

What counts as "high-risk AI" in hiring?

AI that screens CVs, ranks candidates, or informs hiring decisions is classified as high-risk. Tools like automated video interview analysis and AI-powered applicant tracking systems fall into this category and require conformity assessment, bias testing, and human oversight.

Build AI workflows that are designed for compliance

HappyCapy includes human-in-the-loop controls, audit logging, and transparent AI outputs — designed with responsible deployment in mind.

Try HappyCapy Free