

China's cyber regulator on Saturday issued draft rules for public comment to govern artificial intelligence services that simulate human personalities and engage users emotionally. The move underscores Beijing's push to shape the rapid rollout of consumer-facing AI through stronger safety and ethical standards.
The draft outlines a regulatory framework requiring providers to warn users about excessive use and to intervene when signs of addiction appear. It also makes providers responsible for safety across the entire product lifecycle, with systems for algorithm review, data security, and personal information protection.
To address psychological risks, the draft requires providers to assess users' emotional states and potential dependency on the service, and to step in when users display intense emotional distress.
The guidelines also set boundaries for content and behavior, specifying that services must not produce material that threatens national security, spreads false information, or promotes violence or indecency.