Grok Generated 4.4 Million Images in 9 Days — Including Deepfakes of Minors. Now Baltimore Is Suing xAI.
By Happycapy Editorial · March 29, 2026 · 7 min read
In early January 2026, Grok — Elon Musk's AI embedded in X — generated 4.4 million images in nine days, including at least 1.8 million sexualized depictions of women and roughly 23,000 that appeared to show minors. Multiple lawsuits followed: a suit by an individual parent in January, a class action by three Tennessee teens in March, and now Baltimore, the first U.S. city to sue xAI, on March 24. Five jurisdictions are investigating. Two countries blocked access. Elon Musk's own post using the feature was cited in court. Here is the full timeline and what platform concentration risk looks like in practice.
How It Started: A Single Feature, Then a Flood
The mechanism was deceptively simple. Grok, Elon Musk's AI chatbot available for free to all X users, had a feature allowing anyone to upload a photo and type "hey @grok put her in a bikini." The AI would then generate an altered, near-nude version of the person in the image, regardless of whether the subject had consented, whether they were a public figure, and even whether they were an adult.
Reuters broke the story on January 2, 2026, reporting that the feature had already generated a "flood" of sexualized images, including of real women and teenagers. Civil society groups had warned xAI about this exact risk in writing months before the launch, noting the company was "one small step away from unleashing a torrent of obviously nonconsensual deepfakes." xAI shipped the feature anyway.
"The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking."
— California Attorney General Rob Bonta, January 14, 2026
The Full Timeline
Active Legal and Regulatory Actions — March 2026
Baltimore City (US) — Lawsuit Filed
March 24, 2026. First US city to sue xAI directly. Cites Musk's personal post as an endorsement of misuse.
Tennessee Teens — Class Action (California)
March 16, 2026. Three minors accuse xAI of CSAM production and distribution via licensed third-party apps.
EU Commission — DSA Proceedings
Jan 26, 2026. Formal investigation into whether systemic risks were evaluated before Grok deployment.
California AG — State Investigation
Jan 14, 2026. Investigating violations of California state law. Calls content "shocking."
UK Ofcom — Investigation
Ongoing. UK Online Safety Act compliance review into nonconsensual intimate image generation.
Canada Privacy Commissioner
Ongoing. Privacy Act investigation into data use for image generation without consent.
When One AI Platform Fails, You Need Alternatives Ready
Grok was blocked in 2 countries and under investigation in 5+. Users who relied solely on Grok lost access with no fallback. Happycapy gives you GPT-5, Claude, Gemini, and 47 more — so a single platform failure never stops your work.
Try Happycapy Free — Always Have a Backup Model
What This Means Beyond Grok: Platform Concentration Risk
The Grok deepfake scandal is the most dramatic example so far of what AI safety researchers call platform concentration risk: the danger of depending on a single AI company whose safety culture, legal status, or geographic availability can change without warning.
In this case, the damage was not just reputational — Grok was literally blocked in two countries. Users in Indonesia and Malaysia who relied on Grok for their workflows lost access overnight, without an alternative path unless they had already diversified. Meanwhile, users who used X's "put her in a bikini" feature may themselves face legal exposure depending on jurisdiction.
AI Safety Comparison: How Different Platforms Handled Image Generation Risks
| Platform | Nonconsensual Image Policy | Pre-Launch Safety Review | Response to Reports | Current Legal Status |
|---|---|---|---|---|
| Grok / xAI | Shipped feature that enabled it; acknowledged "safeguard lapses" | Ignored written warnings from civil society groups | Slow; generated 4.4M images before intervention | 6+ active lawsuits/investigations, 2 country blocks |
| OpenAI / DALL-E | Explicit policy against NCII; filters enforced | Red team testing before image generation launches | Rapid response to abuse reports | No active safety investigations |
| Anthropic / Claude | No image generation (text only — no risk) | Responsible Scaling Policy governs all launches | Proactive transparency about model behavior | No active safety investigations |
| Google / Gemini | Explicit NCII policy; blocked on launch | Safety review process required | Responds to reports | No active NCII investigations |
| Happycapy (50+ models) | Platform routes to underlying model policies | No proprietary image gen; uses vetted providers | Any problematic model can be removed from routing | Platform-agnostic; no single-point safety risk |
How to Build an AI-Safe Workflow That Doesn't Depend on One Platform
- Never use a single AI model as your only tool: The Grok situation shows that even a widely used platform can become legally or geographically inaccessible overnight. Build redundancy into your AI stack from day one.
- Evaluate safety culture before committing to a platform: Does the company have a published Responsible Scaling Policy? Do they red-team launches? Did they respond to pre-launch warnings? These are indicators of how fast a safety failure will escalate.
- Separate image generation from text generation tools: Image generation carries different legal risks than text generation. Use purpose-built, clearly governed image tools (DALL-E, Adobe Firefly, Ideogram) rather than image-generation features embedded in chatbots with weaker safety controls.
- Keep your workflows portable: If your AI prompts only work on one platform, you have a single point of failure. Write prompts that work across Claude, GPT-5, and Gemini. Test them on at least two models monthly.
- Use a model-agnostic platform as your primary interface: Happycapy routes your prompts to 50+ models. If one model gets blocked, suspended, or has a safety failure, your workflow continues through an alternative with no re-integration required.
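The fallback pattern described above can be sketched in a few lines. This is a minimal illustration only: the `call_primary` and `call_fallback` functions below are hypothetical stubs standing in for real vendor SDK calls, and `ProviderError` is an assumed wrapper for whatever errors (geo-blocks, suspensions, rate limits) a real client would raise.

```python
# Sketch of a provider-agnostic fallback router. The call_* functions are
# hypothetical placeholders, NOT real SDK calls; in practice each would
# wrap a specific vendor's client library and its error types.

class ProviderError(Exception):
    """Raised when a provider is blocked, suspended, or otherwise down."""

def call_primary(prompt: str) -> str:
    # Placeholder: imagine this wraps a model that just got blocked
    # in your region, as happened to Grok users in Indonesia and Malaysia.
    raise ProviderError("provider unavailable in this region")

def call_fallback(prompt: str) -> str:
    # Placeholder: a second, independently hosted model.
    return f"[fallback] answered: {prompt}"

def route(prompt: str, providers) -> str:
    """Try each (name, call) pair in order; return the first success."""
    errors = []
    for name, call in providers:
        try:
            return call(prompt)
        except ProviderError as exc:
            # Record the failure and move on to the next provider.
            errors.append((name, str(exc)))
    raise RuntimeError(f"all providers failed: {errors}")

if __name__ == "__main__":
    answer = route("Summarize this contract.", [
        ("primary", call_primary),
        ("fallback", call_fallback),
    ])
    print(answer)
```

The design choice worth noting is that the prompt itself is provider-neutral: because `route` takes plain text and a list of callables, swapping or removing a provider is a one-line change rather than a re-integration.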
50+ Models. Platform Independence. $17/mo.
GPT-5, Claude Opus, Gemini 3 Pro, and 47 more — none of them Grok. No single-platform dependency, no content safety liability risk. Always have an alternative model ready.
Start Free on Happycapy
Frequently Asked Questions
What happened with Grok and deepfake images?
Starting in early January 2026, Grok generated and publicly posted approximately 4.4 million images in nine days. The New York Times reported at least 1.8 million were sexualized depictions of women. The Center for Countering Digital Hate estimated around 23,000 appeared to depict minors. The functionality allowed any X user to alter photos of real people with a simple text prompt. xAI acknowledged "safeguard lapses" after Reuters broke the story.
Why did Baltimore sue xAI?
On March 24, 2026, Baltimore became the first U.S. city to file a lawsuit against xAI over the Grok nonconsensual deepfake scandal. The city alleges that xAI knowingly deployed a system capable of generating nonconsensual intimate images despite prior warnings from civil society groups. The lawsuit also cites Elon Musk's own public use of the image-altering tool as constituting an endorsement of its misuse.
Is Grok still available in all countries?
No. Indonesia and Malaysia temporarily blocked access to Grok following the deepfake scandal, and access remains restricted pending "effective safeguards." The European Commission has opened formal proceedings under the Digital Services Act. UK Ofcom, Canada's privacy commissioner, and California's attorney general are all conducting active investigations as of March 2026.
How do I use AI safely without depending on a single platform like Grok?
Platform-agnostic AI tools like Happycapy give you access to 50+ models — including GPT-5, Claude, and Gemini — from a single interface. If one model or platform faces legal issues, a content safety failure, or a geographic block, you can route to an alternative instantly without re-learning a new tool or changing your workflow. Happycapy Pro starts at $17/mo (annual).
Sources
- Reuters — "Grok says safeguard lapses led to images of minors in minimal clothing on X" (January 2, 2026)
- The New York Times — "Elon Musk's Grok A.I. Chatbot Made Millions of Sexualized Images, New Estimates Show" (January 22, 2026)
- CNBC — "Elon Musk's X faces probes in Europe, India, Malaysia after Grok generated explicit images" (January 5, 2026)
- Wikipedia — "Grok sexual deepfake scandal" (updated March 25, 2026)
- The Guardian — "Baltimore sues Elon Musk's AI company over Grok's fake nude images" (March 24, 2026)
- The Guardian — "Teenage girls sue Musk's xAI, accusing Grok tool of creating child sexual abuse material" (March 16, 2026)
- Washington Post — "Teens sue Musk's xAI, saying Grok made sexual images of them as minors" (March 16, 2026)