FrontierNews.ai

ChatGPT's New Safety Feature Alerts Trusted Contacts When Users Show Signs of Self-Harm

OpenAI has launched ChatGPT Trusted Contact, an optional safety feature that notifies a pre-selected adult if the AI system detects serious self-harm concerns during a conversation. The feature, now rolling out to users over 18, represents a shift in how major AI platforms are building safeguards around mental health conversations. Rather than relying solely on automated alerts or crisis hotlines, the system creates a bridge between AI detection and real-world human support.

How Does ChatGPT's Trusted Contact Feature Work?

The process is straightforward but involves multiple layers of human oversight. Users can add one trusted adult from their ChatGPT settings; that person then receives an invitation to accept the role within one week. If OpenAI's automated systems detect a conversation that may indicate serious self-harm concerns, the user is notified first and encouraged to reach out to their trusted contact directly.

What happens next is critical: a small team of specially trained reviewers then examines the conversation. If they confirm a serious safety concern, the trusted contact receives a brief notification by email, text message, or in-app alert. OpenAI emphasizes that the notification does not include chat transcripts or detailed conversation content, only a general indication that self-harm came up in a concerning way.

  • User Control: Adults can remove or change their trusted contact at any time through settings, and the trusted contact can remove themselves through OpenAI's help center
  • Human Review: Every notification undergoes trained human review before being sent, with OpenAI aiming to complete reviews in under one hour
  • Limited Information: Notifications share only the general reason for concern and encourage the contact to check in, without revealing specific chat details
  • Complementary to Crisis Services: The feature does not replace crisis hotlines, emergency services, or professional mental health support, which ChatGPT continues to recommend when appropriate
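The layered process described above — automated flag, user nudge, trained human review, then a limited notification — can be sketched as a simple decision flow. All class and function names below are hypothetical illustrations for clarity; they do not reflect OpenAI's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable

class Outcome(Enum):
    NO_ACTION = auto()
    USER_NUDGED = auto()       # user encouraged to reach out themselves
    CONTACT_NOTIFIED = auto()  # brief, limited notification sent

@dataclass
class Review:
    confirmed: bool
    reason: str  # general reason only; never transcript content

def escalate(conversation_flagged: bool,
             human_review: Callable[[], Review],
             has_trusted_contact: bool) -> Outcome:
    """Hypothetical sketch of the escalation flow: an automated flag
    alone never reaches the trusted contact; a trained human review
    must confirm the concern, and the user must have opted in."""
    if not conversation_flagged:
        return Outcome.NO_ACTION
    # Step 1: the user is notified first and encouraged to reach out.
    outcome = Outcome.USER_NUDGED
    # Step 2: a trained reviewer examines the conversation.
    review = human_review()
    # Step 3: only a confirmed concern, plus an opted-in contact,
    # triggers the brief notification described above.
    if review.confirmed and has_trusted_contact:
        outcome = Outcome.CONTACT_NOTIFIED
    return outcome
```

Note that in this sketch, as in the article's description, human review sits between detection and notification, so a false automated flag stops at the user-nudge stage.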

Why Are Mental Health Experts Supporting This Approach?

The feature builds on established psychological research showing that social connection is a protective factor during emotional distress. OpenAI developed Trusted Contact with guidance from more than 170 mental health experts, clinicians, and researchers, including input from its Global Physicians Network of over 260 licensed physicians across 60 countries.

"Psychological science consistently shows that social connection is a powerful protective factor, especially during periods of emotional distress. Helping people identify a trusted person in advance, while preserving their choice and autonomy, can make it easier to reach out to real-world support when it matters most," said Dr. Arthur Evans, Chief Executive Officer of the American Psychological Association.

Dr. Munmun De Choudhury, a researcher at Georgia Tech and member of OpenAI's Expert Council on Well-Being and AI, emphasized the potential for AI to strengthen human connections during vulnerable moments. She noted that the feature represents progress toward empowering users to seek real-world support when they need it most.

What Does This Mean for Schools and EdTech Platforms?

The rollout of Trusted Contact signals a broader shift in how AI platforms handle safeguarding, particularly in educational settings. The feature builds on OpenAI's existing parental controls, which already allow parents and guardians to receive alerts when signs of acute distress are detected on linked teen accounts, and extends similar protections to adults over 18 who choose to opt in.

For schools, universities, and EdTech providers integrating ChatGPT into their platforms, the feature demonstrates that safety controls are moving from system-level decisions into user-facing settings. This gives individuals more agency over their own support networks while maintaining the oversight needed to catch serious concerns. As more students and educators use AI tools in academic environments, how platforms handle mental health escalation is becoming a key differentiator in responsible AI deployment.

OpenAI says it has also worked with mental health experts to improve ChatGPT's underlying ability to detect and respond to signs of distress, de-escalate sensitive conversations, refuse harmful requests, and guide users toward real-world support. The combination of automated detection, human review, trusted contact notification, and crisis service integration creates a multi-layered approach to safety that acknowledges both the potential and the limitations of AI in mental health contexts.