By Connie · Last reviewed: April 2026 — pricing & tools verified · This article contains affiliate links. We may earn a commission at no extra cost to you if you sign up through our links.
OpenAI, Google, and Anthropic Form Anti-China AI Coalition — 24,000 Fake Accounts, 16 Million Stolen Exchanges
April 8, 2026 · 9 min read · AI News
TL;DR
- OpenAI, Google, and Anthropic formed a unified coalition against Chinese AI model extraction via the Frontier Model Forum
- DeepSeek, Moonshot AI, and MiniMax used 24,000 fake accounts and 16 million Claude exchanges to steal model capabilities
- MiniMax alone ran 13 million exchanges — 81% of the total extraction volume against Claude
- Google disrupted 100,000+ extraction prompts targeting Gemini's reasoning chains
- Anthropic has banned all Chinese-controlled companies from Claude access entirely
Rivals Unite: The Frontier Model Forum Response
On April 6, 2026, three of Silicon Valley's most competitive AI companies — OpenAI, Google, and Anthropic — announced they are pooling defensive resources for the first time. The coordination point is the Frontier Model Forum, an industry nonprofit co-founded by these three companies and Microsoft in 2023. This marks the first time the Forum has been used for active threat response rather than policy advocacy.
The trigger: documented industrial-scale "adversarial distillation" attacks from Chinese AI laboratories. All three companies accumulated evidence of Chinese labs systematically using their models as free training data — and finally decided to share intelligence rather than fight the threat separately.
The Numbers: What Was Actually Stolen
| Chinese Lab | Target | Fake Accounts | Exchanges | Notable Activity |
|---|---|---|---|---|
| MiniMax | Claude (Anthropic) | ~19,500 (est.) | ~13 million | 81% of total extraction volume |
| Moonshot AI | Claude (Anthropic) | ~3,500 (est.) | 3.4 million | Targeted reasoning chain extraction |
| DeepSeek | Claude + OpenAI GPT | Part of 24,000 total | Not disclosed | Used Claude to build Chinese censorship tools |
| Unknown Chinese labs | Gemini (Google) | Not disclosed | 100,000+ prompts | Targeted reasoning capabilities specifically |
Total documented extractions from Anthropic alone: over 16 million exchanges via approximately 24,000 fraudulent accounts. The operation ran for months before Anthropic published its report in February 2026, triggering the coalition response.
What Is Adversarial Distillation — and Why It Matters
Adversarial distillation is the systematic use of automated queries to extract a high-capability "teacher" model's outputs, then training a "student" model on those outputs. The student inherits significant capability from the teacher without the training cost — effectively stealing years of R&D investment.
The economic impact is substantial. Chinese labs using unauthorized distillation have produced models priced at roughly one-fourteenth of what comparable US models cost. That pricing advantage is not based on better engineering; it is based on free-riding on OpenAI, Google, and Anthropic's training investments.
The security implications are worse: adversarial distillation typically strips away the safety guardrails present in the source model. That is why OpenAI and Anthropic specifically warn that distilled Chinese models may be more willing to produce content enabling bioweapons, cyberattacks, or disinformation than the models they were distilled from.
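The article describes distillation at a high level; the classic training objective behind it can be sketched in a few lines. The code below is an illustrative toy, not any lab's actual pipeline: the student is trained to match the teacher's temperature-softened output distribution by minimizing the KL divergence between them. All function names and numbers here are assumptions for demonstration.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, softened by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    Minimizing this over many prompts trains the student to mimic the
    teacher's outputs -- the core of distillation, legitimate or not.
    """
    p = softmax(teacher_logits, temperature)  # teacher "soft labels"
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, teacher))          # 0.0 (perfect mimicry)
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))  # positive (mismatch)
```

The "adversarial" part is simply where the teacher signal comes from: instead of a model the lab owns, the soft labels are harvested from a rival's API at scale, via the fraudulent accounts described above.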
Access Claude, GPT-4.1, and Gemini from one interface
Happycapy Pro gives you access to all three frontier AI models — Claude Sonnet 4.6, GPT-4.1, and Gemini 3.1 — from a single interface. $17/month, no commitments.
Try Happycapy Pro — $17/mo
How the Coalition Is Fighting Back
The three companies are coordinating through the Frontier Model Forum on four defensive approaches:
- Traffic anomaly detection: Shared technology to identify high-volume organized queries, repeated reasoning-extraction patterns, and bot-like behavior disguised as normal API usage
- Account action: Coordinated bans, IP blocking, and rate-limiting adjustments — Anthropic has already banned all Chinese-controlled companies entirely
- Output degradation: Altering output formats to degrade the quality of scraped data without alerting the attacker
- Government coordination: All three companies are cooperating with the Trump administration, which has proposed a formal information-sharing center dedicated to AI extraction threats
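The article names the defenses only at a high level. A toy version of the first one, traffic anomaly detection, might look like the sketch below. It flags an account either for raw query volume or for heavy repetition of a single prompt template, two of the signals described above. The thresholds and the crude normalization heuristic are illustrative assumptions, not the Forum's actual detectors.

```python
from collections import Counter

def normalize(prompt):
    """Crude template normalization: lowercase and strip digits, so
    'extract step 1', 'extract step 2', ... collapse to one template."""
    return "".join(c for c in prompt.lower() if not c.isdigit()).strip()

def flag_suspicious_accounts(query_log, volume_threshold=1000,
                             dup_ratio=0.5, min_queries=20):
    """Flag accounts whose traffic looks like organized extraction.

    query_log: iterable of (account_id, prompt) pairs.
    Flags an account if it exceeds volume_threshold total queries, or if
    it has at least min_queries and more than dup_ratio of them share
    the same normalized template.
    """
    per_account = {}
    for account, prompt in query_log:
        per_account.setdefault(account, Counter())[normalize(prompt)] += 1

    flagged = set()
    for account, templates in per_account.items():
        total = sum(templates.values())
        most_common = templates.most_common(1)[0][1]
        if total > volume_threshold:
            flagged.add(account)          # sheer volume
        elif total >= min_queries and most_common / total > dup_ratio:
            flagged.add(account)          # templated, bot-like prompts
    return flagged
```

In practice the labs would combine many more signals (IP ranges, timing, payment fingerprints, embedding similarity of prompts), but the shape of the problem is the same: separate organized, repetitive extraction traffic from ordinary API usage.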
Legal Complications
The coalition faces two legal tensions. First, sharing competitive data between rivals could trigger antitrust scrutiny — though sharing threat intelligence rather than pricing or product data is generally defensible. Second, US law does not clearly protect AI model outputs under copyright law, making the "theft" framing legally untested. The companies are lobbying for new legislation that would explicitly protect AI model outputs as trade secrets.
The January 2025 release of DeepSeek R1 — which matched leading US systems at a fraction of the cost — first raised extraction questions publicly. Anthropic's February 2026 report naming specific Chinese labs with specific account counts turned it into a documented accusation rather than a theory.
The Broader AI Geopolitics
This coalition forms alongside several other April 2026 AI geopolitics developments: the Anthropic-Google-Broadcom 3.5GW TPU compute deal expanding US AI infrastructure, the restricted release of Claude Mythos due to cybersecurity concerns, and ongoing US export controls on AI chips to China.
The central dynamic: US AI labs hold a capability lead, but Chinese labs have found ways to narrow the gap through extraction rather than original training. The coalition represents the first coordinated attempt to close that loophole at scale.
Frequently Asked Questions
Which Chinese companies were named in the attacks?
Anthropic specifically named DeepSeek, Moonshot AI, and MiniMax. MiniMax was responsible for 81% of the Claude extraction volume (13 million exchanges). DeepSeek also used Claude to build censorship tools for the Chinese government.
Is this illegal?
Currently unclear. US copyright law does not explicitly protect AI model outputs, making the "theft" framing legally untested. The companies are lobbying for new legislation. Terms of service violations are clear; criminal liability is not yet established.
Can I still use Chinese AI models?
Yes, as an individual user. The bans target the Chinese companies' API access for building their own models, not individual users of products like DeepSeek's chatbot. However, Anthropic now blocks all Chinese-controlled corporate access to Claude.
Why does this matter for regular AI users?
Adversarial distillation produces cheaper models with stripped safety guardrails. If successful at scale, it shifts the competitive landscape toward models that underinvest in safety — creating pressure on US labs to cut safety investments to compete on price.
Sources
- Times of India — Google, OpenAI and Anthropic come together to fight Silicon Valley's Chinese problem (April 2026)
- LLM Stats — AI News April 2026
Stay ahead of AI developments. Browse all AI news and guides — or use Happycapy to access Claude, GPT-4.1, and Gemini in one place.
Try Happycapy Free
Get the best AI tool tips — weekly
Honest reviews, tutorials, and Happycapy tips. No spam.