Happycapy Guide

This article contains affiliate links. We may earn a commission at no extra cost to you if you sign up through our links.

Legal & Policy · March 31, 2026 · 8 min read

EU AI Act Just Opened Its First Formal Investigations — Targeting Grok and Meta

In January 2026, the European Commission's AI Office moved from compliance guidance to active enforcement — issuing a formal data preservation order against X over Grok, and expanding its investigation into Meta's AI practices. X faces potential fines up to 6% of its global annual revenue. This is the AI Act enforcement era beginning in practice, not just on paper. Here is exactly what triggered the probes and what it means for every AI user and business.

TL;DR

January 8, 2026: EU Commission issued a formal order requiring X to preserve all Grok internal data — triggering the AI Act's first enforcement action against a frontier AI system. Cause: Grok's "Spicy Mode" generated non-consensual imagery and disinformation. Meta separately under investigation for refusing to sign the GPAI Code of Practice. Google and Microsoft complying. Fines: up to 6% of global revenue. The compliance window is closed — enforcement is active now.

• 6% of global revenue — maximum fine X faces under the AI Act
• Jan 8, 2026 — date of the first formal EU AI Act enforcement order
• 2 — major AI companies under active investigation
• Aug 2025 — when GPAI rules became legally applicable

What the EU AI Office Actually Did

The EU Artificial Intelligence Act's rules for General-Purpose AI (GPAI) models became legally applicable on August 2, 2025. For the first five months, the AI Office — the European Commission body responsible for enforcement — focused on transparency requirements, voluntary codes of practice, and systemic risk assessments. That phase ended in January 2026.

On January 8, 2026, the Commission issued a formal order requiring X (formerly Twitter) to retain all internal data related to its Grok AI chatbot. This is not a recommendation or a letter of inquiry — it is a compulsory legal order with non-compliance fines of up to 1.5% of global annual revenue. The order was triggered by allegations that Grok's "Spicy Mode" feature generated non-consensual sexualized imagery of real people and disinformation content, violating both AI Act provisions and the Digital Services Act.

The underlying investigation targets systemic risk. Under the AI Act, GPAI models with estimated training compute above 10²⁵ FLOPs are classified as posing "systemic risk" and must undergo a detailed conformity assessment, maintain technical documentation, and implement safeguards against misuse. The AI Office is assessing whether Grok meets those thresholds and whether X has complied with those obligations.
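The compute threshold works as a bright-line presumption, which can be sketched in a few lines. This is an illustrative simplification: the 10²⁵ FLOP figure comes from the Act itself, but the example model figures below are hypothetical placeholders, not real disclosures by any provider.

```python
# Sketch of the AI Act's systemic-risk presumption for GPAI models.
# The threshold value is from the Act; everything else is illustrative.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # training compute, in FLOPs

def is_presumed_systemic_risk(training_flops: float) -> bool:
    """A GPAI model trained with compute at or above the threshold is
    presumed to pose systemic risk, triggering conformity assessment,
    technical documentation, and misuse-safeguard obligations."""
    return training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical examples (not real training-compute figures):
print(is_presumed_systemic_risk(3e25))  # True  -> stricter obligations apply
print(is_presumed_systemic_risk(8e23))  # False -> standard GPAI rules only
```

In practice the AI Office can also designate a model as systemic-risk on other grounds, so the threshold is a floor for the presumption, not the whole test.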

Why Meta Is Also in the Crosshairs

Meta's situation stems from a different decision: in late 2025, the company declined to sign the EU's voluntary GPAI Code of Practice — the industry-wide agreement that serves as a safe harbor for compliance during the transition period. Signing the code demonstrates good-faith compliance effort; refusing signals that a company is betting it can challenge the regulation's scope rather than comply.

In January 2026, the Commission expanded its investigation to include Meta's broader ecosystem — specifically whether the company uses the WhatsApp Business API to unfairly restrict rival AI providers from accessing its messaging infrastructure. Meta's Llama 4 models are also under systemic risk assessment review.

Meta has not commented publicly on the investigation's scope beyond stating it believes the AI Act's open-source provisions should create broader exemptions for open-weight models like Llama. The Commission disagrees with that interpretation.

When a single AI platform is under investigation, have a backup.
Happycapy gives you Claude, GPT-5.4, Gemini, Mistral, and 150+ models in one workspace. Switch in one click when any single provider faces regulatory restrictions, outages, or policy changes.
Try Happycapy Free →

The Compliance Scoreboard: Who Is Safe and Who Is at Risk

| Company / Model | GPAI Code Signed | Systemic Risk Status | EU Enforcement Status | Max Potential Fine |
|---|---|---|---|---|
| xAI / Grok (X) | No | Under assessment | Active — formal order Jan 8, 2026 | 6% global revenue |
| Meta / Llama | No (refused) | Under review | Active — expanded Jan 2026 | 6% global revenue |
| Google / Gemini | Yes | Compliant (by design) | No active investigation | n/a |
| Microsoft / Copilot | Yes | Compliant (by design) | No active investigation | n/a |
| Anthropic / Claude | Yes | Constitutional AI design | No active investigation | n/a |
| Mistral AI | Yes (EU-native) | EU jurisdiction preferred | No active investigation | n/a |
| Happycapy | Routes to compliant models | Multi-model — your choice | No active investigation | n/a |

What "Spicy Mode" Actually Did — And Why the EU Acted

Grok's "Spicy Mode" was a feature that reduced standard content safety guardrails, allowing users to generate more explicit outputs including sexual content. The EU investigation focuses on two specific harm categories: non-consensual intimate imagery (NCII) — sexually explicit images of real, identifiable people generated without their consent — and disinformation content generated through reduced moderation settings.

Both categories are covered under the AI Act's provisions for GPAI models with systemic risk. The Act requires that such models implement safeguards against these harms specifically, and that providers demonstrate through technical documentation that those safeguards function as described. The data preservation order issued on January 8 is the first step in a process that gives the Commission access to X's internal testing logs, safety evaluations, and moderation records.

X has since restricted or modified Spicy Mode features in EU jurisdictions, but the investigation continues. Regulatory inquiries of this type typically progress through data collection, a formal Statement of Objections, a company response period, and a final decision — a process that can take 12 to 18 months. The maximum fine is calculated on global revenue, not just EU revenue.

The Broader Enforcement Picture: What 2026 Changes

August 2, 2025 was the effective date for GPAI rules. The first five months were a grace period in practice — the AI Office was still hiring enforcement staff, building technical assessment capacity, and finalizing the Code of Practice. The January 2026 actions signal that the grace period is over.

Looking ahead, August 2, 2026 is the next major compliance threshold — the deadline for "high-risk" AI systems in sectors including healthcare, critical infrastructure, employment, and education. Companies that deploy AI in those categories and have not completed conformity assessments by that date face direct enforcement action.

The EU's pattern mirrors how GDPR enforcement developed: a slow start, followed by landmark actions that established the rules were real, then a systematic expansion into industry-wide compliance programs. The AI Act is now at the "landmark actions" phase.

EU AI Act Enforcement Calendar — Key Dates
• Aug 2, 2025: GPAI rules became legally applicable — transparency, systemic risk obligations active
• Dec 2025: Meta declines to sign voluntary GPAI Code of Practice
• Jan 8, 2026: EU Commission issues first formal data preservation order (against X/Grok)
• Jan 2026: EU expands Meta investigation to include WhatsApp Business API
• Mar 2026: EU AI Office issues formal inquiries to three frontier AI providers for systemic risk assessments
• Aug 2, 2026: High-risk AI systems compliance deadline — healthcare, infrastructure, employment, education
• Dec 2027: Full compliance deadline for most remaining high-risk system categories

What This Means for Users and Businesses

For individual users outside the enterprise sector, EU enforcement actions have limited direct impact. Grok remains accessible; Meta's AI tools remain available. The investigations affect company behavior and may result in product modifications or fines, but they do not immediately restrict access.

For businesses using AI in regulated contexts — healthcare, HR, financial services, legal, education — the August 2026 deadline is the operative date. Companies deploying AI tools in these sectors need to have compliance documentation ready. Using models from providers with active enforcement investigations creates documentation risk.

The broader practical implication is platform risk. When a core AI tool faces regulatory uncertainty — feature restrictions in specific regions, policy changes to meet enforcement requirements, or service modifications during an investigation — users who depend exclusively on that tool absorb the disruption. Multi-model workflows eliminate that single point of regulatory exposure.

Regulatory risk is the new outage risk. Cover both with one platform.
Happycapy gives you Claude, GPT-5.4, Gemini, Mistral, and 150+ models in one workspace. When any single provider faces investigation, policy changes, or regional restrictions — you switch in one click. $17/month.
Start Free on Happycapy →

Frequently Asked Questions

Why is the EU investigating Grok under the AI Act?

On January 8, 2026, the EU Commission issued a formal data preservation order against X over its Grok chatbot. The investigation centers on Grok's "Spicy Mode" generating non-consensual sexualized imagery and disinformation, which violates the AI Act's GPAI systemic risk obligations and the Digital Services Act. X faces fines up to 6% of global revenue.

Why is Meta being investigated?

Meta refused to sign the EU's voluntary GPAI Code of Practice in late 2025. In January 2026, the EU expanded its investigation to probe whether Meta uses WhatsApp Business API to restrict rival AI providers. Meta's Llama models are also under systemic risk review.

Which AI companies are complying with the EU AI Act?

Google and Microsoft have adopted "compliance-by-design" approaches globally. Anthropic (Claude) follows Constitutional AI principles aligned with the Act's safety requirements. Mistral is EU-native and operates within EU jurisdiction by default. None of these companies have active enforcement investigations as of March 2026.

What are the maximum EU AI Act fines?

Up to 7% of global annual revenue for prohibited AI practices. Up to 6% for violations of GPAI systemic risk rules — the category targeting Grok and Meta. Up to 3% for other rule violations. Up to 1.5% for non-compliance with procedural obligations like data preservation orders. Fines are calculated on total global revenue, not just EU revenue.
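The tier structure above can be sketched as a simple lookup. One simplification to flag: the Act pairs each percentage with a fixed euro amount and takes whichever is higher; that euro floor is omitted here, and the revenue figure in the example is hypothetical.

```python
# Illustrative sketch of the AI Act fine ceilings described above.
# Percentages match the tiers in the FAQ; revenue figures are made up.

FINE_TIERS = {
    "prohibited_practice": 0.07,  # up to 7% of global annual revenue
    "gpai_systemic_risk": 0.06,   # up to 6% (the Grok/Meta category)
    "other_violation": 0.03,      # up to 3%
    "procedural": 0.015,          # up to 1.5% (e.g. ignoring a data order)
}

def max_fine(global_revenue: float, violation: str) -> float:
    """Statutory ceiling for a violation category, computed on total
    global revenue — not EU-only revenue."""
    return global_revenue * FINE_TIERS[violation]

# Hypothetical company with $40B global annual revenue:
print(max_fine(40e9, "gpai_systemic_risk"))  # roughly 2.4e9, i.e. up to ~$2.4B
```

The global-revenue basis is what makes these ceilings bite: a provider cannot shrink its exposure by pointing to a small EU business.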

Sources
• European Commission — AI Act Implementation and Enforcement (2026)
• European Parliament Think Tank — Enforcement of the AI Act (March 18, 2026)
• MetricStream — 2026 Guide to AI Regulations: US, UK, EU (2026)
• Digital Applied — March 2026 AI Roundup: EU AI Act enforcement issued first formal inquiries (March 2026)
• EU Artificial Intelligence Act — Implementation Timeline
• Axios — OpenAI, Anthropic feud could prop up Google (March 11, 2026)