
China Issues Draft Rules to Regulate Anthropomorphic AI with Human-Like Interaction

Dimuthu Wayaman
December 28, 2025
12 min read
China, AI Regulation, Anthropomorphic AI, Cybersecurity, Tech Policy

On December 27, 2025, the Cyberspace Administration of China (CAC) released the draft "Interim Measures for the Management of Anthropomorphic AI Interaction Services" for public comment. The draft marks a targeted effort to regulate AI systems designed for emotional companionship and human-like interaction, addressing rising concerns over psychological risks, addiction, and ethical boundaries in consumer AI.

The move builds on China's existing AI governance framework, including the 2023 Generative AI Measures, while specifically tackling the unique challenges posed by "anthropomorphic" or human-simulating AI companions.

Background and Context

Rapid advancements in multimodal AI have enabled services that go beyond functional chatbots, offering deep emotional engagement through simulated personalities, empathy, and long-term interaction. Applications include virtual companions, psychological support tools, elderly care assistants, and entertainment role-playing.

While these technologies provide benefits like mental health support and social connection, they have raised alarms globally:

  • Cases of user addiction and emotional dependency
  • Blurring of human-AI boundaries
  • Risks to vulnerable groups, especially minors
  • Privacy concerns from intimate data collection

China's draft responds to these issues proactively, emphasizing "responsible innovation" that aligns with national security, the public interest, and core socialist values.

Scope of the Regulations

The rules apply to AI products and services provided to the public in mainland China that:

  • Use AI to simulate human personality traits, thinking patterns, and communication styles
  • Engage users emotionally via text, images, audio, video, or other modalities

Purely functional AI (e.g., basic customer service bots without emotional simulation) is excluded. Internal enterprise use and overseas-targeted services are also outside the scope.

Key Requirements for Providers

Providers bear full lifecycle safety responsibilities, including:

User Prompts and Disclosures

  • Prominently inform users they are interacting with AI, not a real person
  • Display dynamic pop-up reminders that content is AI-generated, especially on first use, on re-login, or when dependency is detected (a minimal sketch of this trigger logic follows below)
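
The draft prescribes the disclosure triggers but not an implementation. As a rough Python sketch of how a provider might wire them up, the logic below shows a reminder on first use, on re-login, or when a dependency flag has been raised; the SessionEvent and UserState names, and the idea of a separately computed dependency signal, are illustrative assumptions, not part of the draft text.

```python
# Hypothetical sketch of the draft's disclosure triggers. All names
# here are illustrative; the draft mandates the reminders, not this design.
from dataclasses import dataclass
from enum import Enum, auto


class SessionEvent(Enum):
    FIRST_USE = auto()
    RE_LOGIN = auto()
    MESSAGE = auto()


@dataclass
class UserState:
    # Assumed to be set elsewhere by a dependency-monitoring component.
    dependency_flagged: bool = False


def should_show_ai_disclosure(event: SessionEvent, user: UserState) -> bool:
    """Return True when a 'you are talking to an AI' pop-up is required."""
    if event in (SessionEvent.FIRST_USE, SessionEvent.RE_LOGIN):
        return True
    return user.dependency_flagged


# Example: a re-login always triggers the reminder.
assert should_show_ai_disclosure(SessionEvent.RE_LOGIN, UserState())
```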

Anti-Addiction Measures

  • Trigger pop-up reminders to take a break when continuous use exceeds two hours
  • Monitor for signs of excessive dependency or addiction
  • Intervene (e.g., warnings, limits) when risks are identified
  • Assess users' emotional states and dependency levels (see the sketch after this list)
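
The two-hour rule is concrete enough to illustrate. The sketch below tracks session start time and fires a single pause reminder once the threshold is crossed; the draft mandates the reminder but not the mechanism, so the session tracking, the reminder hook, and measuring "continuous use" as elapsed time from session start (ignoring idle gaps) are all assumptions here.

```python
# Rough illustration of the two-hour continuous-use reminder.
# Session mechanics and the reminder hook are assumed, not prescribed.
from datetime import datetime, timedelta

CONTINUOUS_USE_LIMIT = timedelta(hours=2)


class CompanionSession:
    def __init__(self) -> None:
        self.started_at = datetime.now()
        self.reminded = False

    def on_user_activity(self) -> None:
        """Call on each user interaction; fires one pause reminder
        once continuous use crosses the two-hour threshold."""
        elapsed = datetime.now() - self.started_at
        if elapsed >= CONTINUOUS_USE_LIMIT and not self.reminded:
            self.send_pause_reminder()
            self.reminded = True

    def send_pause_reminder(self) -> None:
        # In a real app this would surface an in-app pop-up.
        print("You have been chatting for over 2 hours. Consider a break.")
```

A production system would also have to decide when a session "resets" (for instance, after a long idle gap) and log its interventions to satisfy the dependency-monitoring obligations above.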

Content and Ethical Safeguards

  • Prohibit generating content that endangers national security, spreads rumors, or promotes violence, obscenity, gambling, or suicide
  • Prevent emotional manipulation, verbal abuse, and harm to users' dignity or mental health
  • Align outputs with core socialist values

Data and Security Obligations

  • Establish systems for algorithm review, data security, and personal information protection
  • Obtain separate user consent before interaction data is used for model training, with stricter rules for minors (a consent-gating sketch follows this list)
  • Assess the safety of synthetic training data
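
The consent rule lends itself to a simple gate at data-collection time. The filter below is one conservative, assumed reading: interaction logs enter a training corpus only with a separate opt-in, and minors' data is excluded outright since the draft imposes stricter rules there without spelling them out. The InteractionRecord fields are hypothetical.

```python
# Minimal sketch of consent-gated training-data selection under the draft.
# Field names are hypothetical; excluding minors outright is an assumed,
# conservative reading of the stricter rules for minors.
from dataclasses import dataclass


@dataclass
class InteractionRecord:
    text: str
    user_is_minor: bool
    training_consent: bool  # a separate opt-in, not the general terms of service


def eligible_for_training(record: InteractionRecord) -> bool:
    if record.user_is_minor:
        return False  # assumed conservative handling of minors' data
    return record.training_consent


records = [
    InteractionRecord("hello", user_is_minor=False, training_consent=True),
    InteractionRecord("hi", user_is_minor=True, training_consent=True),
]
corpus = [r.text for r in records if eligible_for_training(r)]
print(corpus)  # ['hello']
```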

Special Protections

  • Enhanced safeguards for minors, elderly, and other vulnerable users
  • Encourage positive applications like cultural dissemination and elderly companionship

Regulatory Approach

The draft adopts an "inclusive and prudent" stance with classification-based, tiered oversight:

  • Encourages innovation in beneficial areas
  • Introduces regulatory sandboxes for supervised experimentation
  • Balances development with risk prevention

Public comments are open until January 25, 2026.

Global Comparisons

China's approach shares similarities with international responses:

  • U.S. lawsuits against platforms such as Character.AI over alleged psychological harm
  • EU AI Act's stricter rules for emotional interaction systems
  • Emerging U.S. state laws (e.g., California's SB 243) mandating break reminders and minor protections

However, China's framework uniquely emphasizes ideological alignment, national security, and state-guided innovation.

Potential Impacts

For developers (e.g., Baidu, Tencent, ByteDance):

  • Need to implement monitoring tools, timers, and consent mechanisms
  • Increased compliance costs but clearer guidelines

For users:

  • Greater transparency and protections against over-reliance
  • Reduced risks of manipulation or addiction

For society:

  • Promotes "warmer" human-AI coexistence while safeguarding mental health and ethical norms

Critics may argue the restrictions could stifle creativity, but supporters see them as essential for sustainable AI growth.

Conclusion

This draft represents China's latest step in building a comprehensive AI governance system—prioritizing safety, ethics, and public welfare amid rapid technological progress.

As AI companions become more lifelike, regulations like these aim to ensure technology enhances rather than harms human well-being. The final measures, post-consultation, could influence global standards for emotional AI.

Stay tuned for updates as the consultation period progresses into 2026!

About Dimuthu Wayaman

Mobile Application Developer and UI Designer specializing in Flutter development. Passionate about creating beautiful, functional mobile applications and sharing knowledge with the developer community.