By Connie · Last reviewed: April 2026 — pricing & tools verified · AI-assisted, human-edited · This article contains affiliate links. We may earn a commission at no extra cost to you if you sign up through our links.
Anthropic Now Requires Photo ID + Selfie to Use Claude: What It Means and How to Prepare
Anthropic has quietly started asking some Claude users for a government photo ID plus a live selfie. Here is who is affected, why it is happening, and what to do if you get flagged.
Anthropic is rolling out identity verification for a subset of Claude users — a government-issued photo ID plus a live selfie — as a compliance response to US export-control rules and documented model-distillation attacks by labs in China, Russia, and North Korea. Most US, EU, and Asia-Pacific users will never see the prompt. If you do, you can verify, appeal, or keep working through Happycapy, which reaches Claude through approved enterprise channels and does not require a selfie or government ID from normal users.
On April 20, 2026, The Information reported that Anthropic has begun asking selected Claude users to verify their identity before continuing to use the service. The verification flow asks for two things: a photo of a government-issued ID and a live selfie taken in the same session so the two can be matched.
The rollout is not universal. Anthropic is targeting accounts whose signals match what the company calls “adversary risk” — traffic from the four US-designated adversary countries (China, Russia, North Korea, Iran), bulk account creation patterns, payment methods tied to embargoed jurisdictions, and API traffic that looks like model distillation. Everyone else continues to use Claude as usual.
Why This Is Happening Now
Two forces are pushing Anthropic in the same direction at the same time.
1. The US AI diffusion rule
Updates to US export controls in late 2025 put frontier AI labs on the hook for knowing who is using their most capable models. Closed-weight frontier models like Claude Mythos 5 now sit in roughly the same category as advanced semiconductors: the company that serves them is expected to use “know-your-customer” practices when the risk profile warrants it. Photo-ID + live-selfie is the least invasive verification flow that satisfies the rule.
2. Adversarial distillation campaigns
In its April 2026 threat report, Anthropic documented coordinated extraction campaigns against Claude by Chinese labs including DeepSeek, Moonshot, and MiniMax, using tens of thousands of fraudulent API accounts. The goal of these campaigns is adversarial distillation — cloning a frontier model's behavior into a cheaper open-weights replica, then selling that replica back into global markets at a fraction of the price. ID verification is the single most effective tool the company has to break that supply chain at the account-creation layer.
Who Gets Asked (and Who Does Not)
| User profile | Verification asked? | Why |
|---|---|---|
| US / EU / UK / Canada / Australia / Japan, paying card, normal usage | No | Outside adversary list, payment KYC already covers identity |
| Southeast Asia, Latin America, Africa, paying card, normal usage | Usually no | Only triggered by suspicious traffic patterns |
| IP from CN / RU / KP / IR (including via consumer VPNs) | Yes | Listed adversary jurisdiction — default-on verification |
| API account with bulk-creation signatures | Yes | Matches distillation-campaign fingerprint |
| Enterprise tenant via AWS Bedrock / GCP Vertex / Azure Foundry | No (at user level) | Identity handled at cloud-tenant layer |
| Users reaching Claude via Happycapy | No | Enterprise channel; Happycapy handles the relationship |
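As the table notes, enterprise cloud channels handle identity at the tenant layer, so individual users never see a verification prompt. A minimal sketch of what reaching Claude through AWS Bedrock looks like in code — the model ID is illustrative, and the commented-out call assumes you have AWS credentials and Bedrock model access configured:

```python
import json

def build_bedrock_request(prompt: str,
                          model_id: str = "anthropic.claude-3-haiku-20240307-v1:0") -> dict:
    """Assemble the kwargs for bedrock-runtime's invoke_model call.

    The body follows the Anthropic-on-Bedrock messages format; identity and
    access control are enforced by the AWS account, not by the end user.
    """
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    }
    return {"modelId": model_id, "body": json.dumps(body)}

# With credentials configured, you would pass this straight to boto3:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(**build_bedrock_request("Hello"))
req = build_bedrock_request("Hello")
```

The point of the sketch is the shape of the channel: the request carries no personal identity at all, because the cloud tenant already authenticated.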
What the Verification Flow Looks Like
Users who are asked to verify see a single-page flow inside Claude.ai or the Anthropic Console. The steps are:
- Select the country that issued your ID.
- Upload a clear photo of a government-issued document — passport, national ID card, or driver's license.
- Take a live selfie in-browser (webcam or phone camera).
- Wait for the automated match to complete — typically under two minutes.
- Continue using Claude with the same account.
Anthropic has stated that the ID and selfie are processed through a third-party KYC vendor, stored encrypted, and deleted after a defined retention window once verification succeeds. The raw images are not used to train any Anthropic model.
Happycapy is an agent platform powered by Claude, routed through approved enterprise channels. Email + payment sign-up — no government ID, no selfie, no friction.
Start Free on Happycapy
What to Do If You Are Flagged by Mistake
Every automated risk system produces false positives. If you believe you have been flagged in error — for example, you use a VPN for general privacy reasons, or you share an IP range with many other residential users — there are three clear options.
Option 1 — Complete the verification
This is the fastest path. If you are a legitimate user, uploading a real ID and taking a live selfie unlocks full access within minutes. Anthropic retains the verification result, not the documents themselves, so you will not be asked again unless new risk signals appear.
Option 2 — File an appeal
If you cannot or prefer not to verify, the flagged-account email includes an appeal link. Appeals are reviewed manually and typically resolve within 3–5 business days. Expect to provide billing history, a short description of how you use Claude, and any organizational context (company name, domain).
Option 3 — Keep working through Happycapy
Because Happycapy is an agent platform that reaches Claude through enterprise channels, end users never hit the ID-verification layer directly. You still get Claude-quality reasoning; you just avoid the KYC step entirely. Many users use Happycapy as their primary day-to-day interface and keep a direct Anthropic account only for edge cases like playground experimentation or raw-API work.
How This Compares to OpenAI and Google
Anthropic is the first major US frontier lab to roll out visible photo-ID-plus-selfie verification at the consumer account layer. OpenAI and Google apply lighter identity checks, triggered at different points in the funnel:
| Provider | Identity verification for individual users | Trigger |
|---|---|---|
| Anthropic (Claude) | Photo ID + live selfie | Adversary-region risk signals |
| OpenAI (ChatGPT, API) | Phone number + (for API) business verification for advanced model tiers | Requesting access to GPT-5.x frontier tiers |
| Google (Gemini) | Google account KYC at payment time; enterprise access via Workspace admin | Paid Gemini Advanced / Gemini Enterprise tiers |
| Happycapy | Email + payment only | None for normal users; abuse flags handled internally |
What This Signals for AI in 2026
Anthropic's ID rollout is an early, visible sign of something many have expected for two years: frontier AI is being re-classified from “consumer software” to “dual-use technology.” That does not mean most users will be inconvenienced — the overwhelming majority of people who ever use Claude will never see a verification prompt. But it does mean that the old “sign up with any email and go” pattern is ending at the frontier layer.
The practical takeaway for individuals and small teams is simple: keep more than one path to frontier AI. A direct Claude account for experimentation. A cloud-vendor tenant (Bedrock, Vertex, Foundry) for production. And an agent layer like Happycapy that stays friction-free and keeps your daily workflow moving when the frontier providers tighten a knob.
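One way to wire up that "more than one path" advice: a small fallback helper that tries each provider in order and moves on when one refuses to serve you. Everything below is illustrative — the provider names and stub clients stand in for real SDK calls (Anthropic API, Bedrock, an agent platform):

```python
from typing import Callable, Sequence

class AllProvidersFailed(Exception):
    """Raised when every provider in the chain errored out."""

def complete_with_fallback(prompt: str,
                           providers: Sequence[tuple[str, Callable[[str], str]]]
                           ) -> tuple[str, str]:
    """Try each (name, call) pair in order; return (provider_name, reply)."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # e.g. a verification hold, rate limit, outage
            errors.append(f"{name}: {exc}")
    raise AllProvidersFailed("; ".join(errors))

# Stub providers standing in for real clients:
def direct_claude(prompt: str) -> str:
    # Imagine the direct account is paused pending ID verification.
    raise RuntimeError("identity verification required")

def bedrock_claude(prompt: str) -> str:
    return f"bedrock reply to: {prompt}"

provider_chain = [("anthropic-direct", direct_claude),
                  ("aws-bedrock", bedrock_claude)]
name, reply = complete_with_fallback("hello", provider_chain)
# name == "aws-bedrock": the chain skipped the blocked direct account.
```

The design choice here is deliberate: each provider is just a callable, so swapping a stub for a real SDK client changes nothing about the fallback logic.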
Further Reading
For more context on the events shaping this policy, see our coverage of the Frontier Model Forum's anti-distillation alliance, Claude Mythos 5's 10-trillion-parameter launch, and Anthropic's $30B Series G — the three stories that together explain why the company is tightening access now.
FAQ
Will verification eventually apply to everyone?
No. The current design is risk-based. A universal rollout is not on the roadmap Anthropic has publicly discussed; the company has explicitly said most users will never see the prompt.
Can I avoid the check by using a VPN?
Technically sometimes, but it is a poor long-term strategy. Anthropic cross-references IP with billing, device fingerprint, and account-creation patterns. Using a VPN can also trigger the flag by itself in some cases. Verify or use an approved channel like Happycapy instead.
What happens to my ID and selfie after I verify?
The documents are handled by a third-party KYC vendor, encrypted in transit and at rest, and deleted after a defined retention window. Anthropic does not train models on them. For sensitivities beyond that baseline, most users prefer the Happycapy route.
Does Happycapy require its own ID verification?
No. Happycapy uses standard email + payment onboarding for normal users. Enterprise customers can layer on SSO or admin-managed identity on top, but there is no photo-ID or live-selfie step for individual accounts.
Sources: The Information — “Anthropic Requires ID & Selfies to Block Adversary Access” (April 2026); Anthropic Threat Report, April 2026; Frontier Model Forum joint statement (April 7, 2026); Reuters — US AI diffusion rule updates (late 2025).