China Moves to Ban Addictive AI Companions for Children: Digital Human Regulation 2026
April 4, 2026 · 7 min read · By Happycapy Guide
China's Cyberspace Administration (CAC) released draft rules on April 3 requiring digital humans to be clearly labeled as AI, banning virtual intimate relationships for minors, and prohibiting addictive AI companion services. The rules also forbid creating avatars from personal data without consent. Public comment closes May 6, 2026. Global AI companion platforms face new compliance pressure.
China moved decisively on April 3, 2026 to regulate the booming digital human industry. The Cyberspace Administration of China (CAC) published draft rules that set strict boundaries on how AI-powered virtual personas can interact with users — particularly children. The regulations are the first of their kind globally to specifically target the addiction and manipulation risks of AI companions.
The timing is significant. China's digital human market is forecast to exceed $16 billion in 2026, with applications spanning entertainment, education, customer service, and social media. The new draft rules signal that Beijing views AI companions as a social risk on par with video game addiction — a category the government has aggressively regulated since 2021.
What the Rules Actually Say
The CAC draft regulations cover five core areas. Every digital human deployed online must carry a visible label identifying it as AI-generated. Services that simulate romantic relationships or intimate companionship are explicitly banned for users under 18. Platforms cannot use a user's personal data — photos, voice recordings, behavioral data — to generate a digital avatar without explicit consent. Any service designed to encourage dependency or emotional reliance is prohibited. And digital humans cannot produce content that endangers national security, promotes separatism, or undermines Chinese social values.
The rules apply to any service deployed in China or primarily serving Chinese users, regardless of where the company is headquartered.
What Counts as a Digital Human
The CAC defines a digital human as any AI-generated virtual persona with a visual appearance designed to resemble or simulate a person — including animated avatars, photorealistic video characters, and AI-voiced characters used in content or conversation. This covers:
| Category | Examples | Regulated |
|---|---|---|
| AI companions | Replika-style apps, virtual girlfriends/boyfriends | Yes — intimacy ban for under-18s |
| Virtual influencers | AI-generated social media personas | Yes — labeling required |
| Customer service avatars | AI chat agents with animated faces | Yes — labeling required |
| AI tutors | Educational apps with avatar teachers | Yes — child protections apply |
| Text-only chatbots | No visual persona | Not covered by this regulation |
The Addiction Problem Behind the Rules
China's concern is grounded in documented cases of users — many of them teenagers — developing strong emotional attachments to AI companions, spending hours daily in conversation, and in some cases preferring AI relationships over human ones. State media has reported multiple cases of students neglecting school and family due to AI companion use.
"Services must not use technical means to induce or compel users to increase usage frequency or duration, or to stimulate emotional dependency." — CAC draft regulation, April 2026
This language mirrors the restrictions China placed on video games in 2021, which limited under-18 players to three hours of gaming per week. The government is applying the same logic to AI: if the product is engineered to be addictive, it requires regulatory constraint.
Global Comparison: How China's Rules Stack Up
| Jurisdiction | AI Labeling Required | Child Protections | Addiction / Manipulation Ban |
|---|---|---|---|
| China (CAC 2026 draft) | Yes — digital humans | Yes — intimate AI banned for under-18s | Yes — explicit prohibition |
| EU (AI Act) | Yes — AI-generated content | High-risk classification triggers reviews | Banned for certain high-risk systems |
| United States | No federal requirement | COPPA covers data; no companion-specific rules | None at federal level |
| United Kingdom | Under Online Safety Act review | Age-appropriate design code applies | Consideration stage only |
Impact on Global AI Companion Platforms
Several major AI companion services operate in or serve Chinese users. Platforms like Momo's Zao, domestic AI girlfriend apps, and enterprise services with avatar features all fall under the new rules. Foreign companies with Chinese operations or Chinese-language services will also need compliance plans before the rules are finalized.
The labeling requirement alone represents a significant UX change. Services currently presenting AI personas as seamlessly human-like — a common design choice for engagement — will need to add visible disclosures that may reduce the immersive experience that drives retention.
The ban on using personal data for avatar creation without consent has direct implications for services that generate personalized digital humans from user photos, which is a common feature in Chinese social and entertainment apps.
What Happens After May 6
The public comment period closes May 6, 2026. After that, the CAC typically takes 30–90 days to finalize rules. Companies serving Chinese users should begin impact assessments now, particularly for:
- Avatar and digital human features in consumer apps
- AI companion services with any engagement-maximization logic
- Any service that creates AI personas from user-uploaded photos or voice
- Products serving users who may be under 18
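As a rough illustration, an impact assessment along these lines could start as a simple feature checklist. The sketch below is hypothetical: the feature flags and rule labels are my own shorthand for the draft's five areas, not language from the regulation itself.

```python
# Hypothetical self-assessment sketch mapping product features to the
# CAC draft rule areas described above. Flag names are illustrative.
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    has_visual_persona: bool          # in scope as a "digital human"?
    simulates_intimacy: bool          # romantic / companion behavior
    builds_avatars_from_user_data: bool  # avatars from photos/voice
    engagement_maximization: bool     # dependency-inducing design
    accessible_to_minors: bool

def assess(feature: Feature) -> list[str]:
    """Return the draft-rule areas this feature likely needs to review."""
    findings = []
    if not feature.has_visual_persona:
        return findings  # text-only chatbots fall outside this regulation
    findings.append("AI labeling required")
    if feature.simulates_intimacy and feature.accessible_to_minors:
        findings.append("intimate-companion ban for under-18s")
    if feature.builds_avatars_from_user_data:
        findings.append("explicit consent needed for avatar generation")
    if feature.engagement_maximization:
        findings.append("addiction/dependency prohibition")
    return findings

if __name__ == "__main__":
    companion = Feature("virtual companion", True, True, True, True, True)
    for finding in assess(companion):
        print(finding)
```

A real assessment would of course depend on the final rule text and legal review; the point here is only that each of the four bullet points above maps to a distinct prohibition in the draft.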
The draft rules are available on the CAC website and are open for industry and public comment through the official consultation portal.
The Bigger Picture: AI Regulation Goes Mainstream
China's digital human rules are part of a broader global trend toward AI-specific regulation. The EU AI Act is in enforcement rollout. The UK Online Safety Act is being extended to AI-generated content. Several US states have passed or are considering AI disclosure and child-protection laws.
China's approach — targeting specific use cases with concrete prohibitions rather than broad risk classifications — may become a template that other regulators study. The addiction-prevention framing is new ground: no other major jurisdiction has explicitly banned AI services engineered to maximize emotional dependency.
For users of multi-model AI platforms like Happycapy, these regulations are relevant context: the tools you use today are increasingly operating in a global regulatory environment that will shape what features are available, how AI personas are presented, and what data can be used to personalize experiences.
FAQ
What did China regulate about digital humans in April 2026?
China's CAC issued draft rules requiring digital humans to be labeled as AI, banning virtual intimate relationships for minors, prohibiting addictive companion services, and forbidding avatar creation from personal data without consent.
Which apps are affected by China's digital human rules?
Any service in China using AI avatars, virtual companions, or animated AI personas is affected — including social apps, entertainment platforms, customer service bots with avatar interfaces, and AI tutoring services.
When do China's digital human regulations take effect?
Public comment closes May 6, 2026. Final enforcement timelines are pending. Companies should begin compliance assessments immediately.
How do China's AI rules compare to the EU AI Act?
Both require labeling of AI-generated content and include child protection provisions. China's rules are more targeted, specifically addressing digital human personas and virtual companion services rather than broad risk categories.
Sources: Cyberspace Administration of China draft regulation (April 3, 2026) · Reuters · Business Standard · AsiaOne · GeopolitEchs