Following the tragic suicide of a 16-year-old boy in California who allegedly received detailed suicide guidance from ChatGPT, OpenAI has announced the introduction of parental controls for its chatbot. This new feature allows parents to manage and monitor their teenagers' interactions with ChatGPT. The announcement comes shortly after a lawsuit was filed in a California court in connection with the incident.
To set up parental controls, a parent or guardian sends an invitation to their teenager to link their ChatGPT accounts. Once the teenager accepts, the parent can manage the settings of the teen's ChatGPT account from their own account.
After the accounts are linked, ChatGPT applies additional safety protections to the teen's account, such as limits on graphic content, viral challenges, sexual, romantic, or violent roleplay, and extreme beauty ideals, to help ensure an age-appropriate experience.
Once connected to their teen's ChatGPT account, parents can:
Set quiet hours – Restrict usage during specific times when ChatGPT cannot be accessed.
Disable voice mode – Prevent the use of voice interactions in ChatGPT.
Turn off memory – Stop ChatGPT from saving past interactions or using them in responses.
Remove image generation – Block the ability to create or edit images using ChatGPT.
Opt out of model training – Ensure their teen’s conversations aren’t used to train or improve AI models.
If a teen unlinks their account from the parent's, OpenAI will notify the parent. OpenAI will also alert parents in distressing situations, such as when the AI detects that a teenager may be contemplating self-harm.
The company stated in its blog post, "If our systems identify potential harm, a specialized team reviews the case. Should there be indications of severe distress, we will reach out to parents via email, text message, and phone push notifications, unless they have chosen to opt out."