China Proposes Strict New AI Regulations to Protect Children and National Security
China has proposed strict new rules for artificial intelligence to protect children and prevent chatbots from advising self-harm or violence, along with a ban on content promoting gambling. The draft regulations, published by the Cyberspace Administration of China (CAC), would apply to AI products and services in China once finalised.
The child-safety measures outlined in the draft require personalised settings, usage time limits, and guardian consent before AI can provide emotional companionship services. Additionally, chatbot operators must have a human intervene in any conversation involving suicide or self-harm and must immediately notify the guardian or emergency contact.
AI providers must also ensure their services do not generate or share content that endangers national security, damages national honour and interests, or undermines national unity. At the same time, the CAC encourages the use of AI to promote local culture and provide companionship for the elderly, provided the services are safe and reliable.
The CAC is currently seeking public feedback on the proposals. The move comes amid a surge in AI chatbot launches globally, with DeepSeek topping app download charts and platforms such as Z.ai and Minimax, each with tens of millions of users, announcing plans to list on the stock market.