

OpenAI has begun rolling out a new safety feature in ChatGPT called “Trusted Contact,” allowing adult users to nominate a trusted friend, family member, or caregiver who can be alerted in rare cases where automated systems and trained reviewers detect serious self-harm concerns in conversations. The feature is designed to encourage users in emotional distress to reach out to their chosen contact, adding an extra layer of human support alongside existing safety measures.
Explaining the feature, OpenAI said, “Sometimes, when you are having a hard time, it can feel difficult to reach out or ask for help directly. The trusted contact feature is designed to support real-world connections in those moments. It follows four steps, from adding a contact to notification.”
ChatGPT’s “Trusted Contact” feature lets users add one adult (a friend, family member, or caregiver) from their settings; that person must accept the invitation within seven days for the feature to activate. If ChatGPT’s systems detect signs of distress, the platform first encourages the user to seek support and may route the case to trained human reviewers. When a high-risk concern is confirmed, the Trusted Contact receives a brief alert along with relevant support guidance. Notifications are generally issued within an hour of review.
Select one trusted adult (friend, family member, or caregiver) from ChatGPT settings.
Send an invite via email, SMS, WhatsApp, or in-app notification.
The contact must accept within seven days; otherwise you can choose someone else.
If ChatGPT detects potential self-harm or suicide-related content, trained reviewers may assess the case.
In high-risk situations, the Trusted Contact may be notified to check in on the user.
No chat messages or conversation transcripts are shared, only a safety alert.
Trusted Contacts can be changed or removed at any time from settings.
The contact can also opt out later via the help centre.
Trusted Contact builds on ChatGPT’s existing parental controls, which can alert guardians to safety concerns on teen accounts. Developed with input from clinicians, researchers, and the American Psychological Association, the feature is supported by OpenAI’s Global Physicians Network of 260+ doctors across 60 countries.
It works alongside existing safeguards like crisis helpline referrals, refusal to provide self-harm instructions, and guidance shaped by over 170 mental health experts. The feature is currently rolling out for users aged 18 and above.