OpenAI is rolling out a new ChatGPT feature that could send a warning to someone you trust when a conversation signals a serious safety risk.
The optional tool lets adult users choose a “Trusted Contact” — a friend, family member, or caregiver — who may receive an alert when OpenAI detects that the user has discussed self-harm, suicide, or related mental health dangers with the chatbot. The move pushes ChatGPT into more sensitive territory, where AI no longer just responds in the moment but may also trigger action beyond the app.
OpenAI’s new system gives adult users a way to name someone who could be notified when conversations raise urgent safety concerns.
Reports indicate the feature is designed as a voluntary safeguard rather than a default setting. That distinction matters. OpenAI appears to frame the tool around user choice while also acknowledging a harder reality: people increasingly bring crises, not just questions, to AI systems. A chatbot can offer supportive language, but it cannot replace direct human care when the stakes rise.
Key Facts
- OpenAI is launching an optional Trusted Contact feature for ChatGPT.
- Adult users can designate a friend, family member, or caregiver as their emergency contact.
- Alerts may be sent if OpenAI detects discussion of self-harm, suicide, or similar safety concerns.
- The feature focuses on mental health and user safety during high-risk conversations.
The launch also opens fresh questions about privacy, detection, and trust. OpenAI has not yet laid out every detail of how alerts are triggered or what information a contact would receive. Those unanswered points will shape how people judge the feature. For some users, it may feel like a useful backstop. For others, any system that monitors deeply personal conversations will demand clear limits and plain-language explanations.
What happens next will matter far beyond one product update. If users embrace the feature, other AI companies may face pressure to build similar crisis-response tools into consumer chatbots. If users resist it, OpenAI will need to show that safety interventions can work without eroding privacy or autonomy. Either way, the launch signals a broader shift: AI companies no longer just moderate risky speech — they increasingly must decide when to pull another human into the conversation.