FrontierNews.ai

ChatGPT's New Safety Feature Alerts Trusted Contacts When Users Show Signs of Mental Health Crisis

OpenAI has introduced a new "trusted contacts" feature in ChatGPT that automatically alerts a person you designate if the AI detects signs of mental health distress during your conversations. The feature represents a significant shift in how AI companies are approaching user safety, particularly as millions of people increasingly turn to chatbots for mental health advice.

Why Are AI Companies Adding Mental Health Safeguards?

The move comes as generative AI systems like ChatGPT, GPT-5, Claude, Gemini, Grok, and Copilot have become go-to resources for mental health guidance. ChatGPT alone has over 900 million weekly active users, with a significant portion using the platform to discuss mental health concerns. The accessibility and affordability of these AI systems make them attractive alternatives to traditional therapy: users can access them 24/7, for free or at minimal cost, from anywhere with an internet connection.

However, this widespread adoption has created serious risks. General-purpose AI systems are not equipped with the robust capabilities of human therapists and can readily dispense unsuitable or even dangerously inappropriate mental health advice. Last year, OpenAI faced a high-profile lawsuit alleging that it had failed to implement adequate safeguards when users sought cognitive and mental health guidance from the platform. As AI makers face mounting legal exposure from users and their loved ones claiming AI-related mental harms, companies are rapidly implementing protective features.

How Does the Trusted Contacts Feature Work?

The trusted contacts system operates similarly to parental oversight features already available in many AI platforms. Users can designate in advance a trusted person, such as a family member, close friend, or coworker, who will be contacted by OpenAI if the AI detects warning signs of mental health distress. The designated contact should be informed of this responsibility beforehand and must agree to take on the role.

The feature addresses a critical gap in current AI safety: even when an AI system suggests that a user should reach out to someone for help, the user may simply ignore that recommendation and continue using the platform. By having the AI proactively contact a trusted human, the system creates an additional layer of intervention that doesn't rely on the user's own initiative during a moment of crisis.
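To make that escalation logic concrete, here is a minimal, purely illustrative sketch of how such a flow could be structured. Nothing in it reflects OpenAI's actual implementation: the `TrustedContact` class, the `distress_score` input, the threshold value, and the `send_alert` step are all hypothetical placeholders standing in for whatever detection and notification machinery a real system would use.

```python
from dataclasses import dataclass
from typing import Optional

# Purely hypothetical sketch -- this is not OpenAI's implementation.
# It assumes an upstream classifier has already produced a distress score in [0, 1].

@dataclass
class TrustedContact:
    name: str
    email: str
    consented: bool          # the contact agreed to the role beforehand

ALERT_THRESHOLD = 0.85       # illustrative value; tuning it is the hard part

def send_alert(contact: TrustedContact) -> None:
    # Placeholder: a real system would use email/SMS and include crisis resources.
    print(f"Notifying {contact.name} at {contact.email}")

def maybe_escalate(distress_score: float, contact: Optional[TrustedContact]) -> str:
    """Decide whether to notify the user's predesignated trusted contact."""
    if contact is None or not contact.consented:
        return "no-contact-configured"   # nothing to escalate to
    if distress_score >= ALERT_THRESHOLD:
        send_alert(contact)              # the extra layer that does not rely on the user
        return "contact-alerted"
    return "no-action"

# Example: a high distress score triggers the alert.
friend = TrustedContact(name="Alex", email="alex@example.com", consented=True)
print(maybe_escalate(0.92, friend))      # -> contact-alerted
```

The key design point the sketch captures is the last one in the paragraph above: the escalation path runs without requiring any action from the user at the moment of crisis.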

Steps to Setting Up Your Trusted Contact in ChatGPT

  • Identify Your Contact: Choose someone you trust completely, such as a family member, best friend, or trusted coworker who would be appropriate to contact during a mental health emergency.
  • Get Their Agreement: Inform the person beforehand that you want to designate them as your trusted contact and ensure they understand and accept this responsibility before proceeding.
  • Complete the Setup: Designate the contact within your ChatGPT account settings; if your first choice declines, you can nominate someone else until you find someone willing to take on the role.
  • Maintain Communication: Keep your trusted contact informed about your mental health status and ensure they know how to respond if OpenAI reaches out to them.

What Are the Challenges and Limitations?

The feature comes with significant technical and practical challenges. The AI must strike a delicate balance between false positives and false negatives. A false positive occurs when the AI alerts a trusted contact even though the user isn't actually in crisis, potentially causing unnecessary alarm and frustration. Conversely, a false negative happens when the AI fails to alert the contact during a critical moment when the user genuinely needs human intervention.
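As a rough illustration of why this is a genuine tradeoff rather than a solvable bug, consider how moving a single alert threshold shifts the balance between the two error types. The scores and labels below are invented purely for demonstration and have no connection to any real system or dataset.

```python
# Illustrative only: one threshold trades false positives against false negatives.
# Scores and crisis labels are made up; a real system faces far messier signals.

conversations = [
    # (distress score from a hypothetical classifier, user actually in crisis?)
    (0.95, True), (0.80, True), (0.60, True),
    (0.70, False), (0.40, False), (0.20, False),
]

for threshold in (0.5, 0.75, 0.9):
    false_positives = sum(score >= threshold and not crisis for score, crisis in conversations)
    false_negatives = sum(score < threshold and crisis for score, crisis in conversations)
    print(f"threshold={threshold}: {false_positives} needless alerts, {false_negatives} missed crises")
```

Lowering the threshold catches more genuine crises but alarms contacts unnecessarily; raising it quiets the false alarms but risks missing the one conversation where intervention mattered.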

Tuning the AI to hit this balance correctly is extraordinarily difficult. The system must be sensitive enough to catch genuine warning signs but not so hair-trigger that it generates constant false alarms. Additionally, there are privacy and autonomy concerns. Some argue that adults should have complete control over whether anyone is contacted on their behalf, and that the responsibility for reaching out to others should rest entirely with the individual user.

Another practical consideration is that the designated contact may not always be in a position to help, or the relationship dynamics may be complicated. A user might choose someone who seems appropriate at the time of setup but later regret that choice, creating an awkward situation if the contact is actually notified.

What Does This Mean for AI Liability and the Future?

From a legal perspective, OpenAI and other AI makers implementing trusted contacts features are positioning themselves defensively against future lawsuits. By demonstrating that they have implemented reasonable safeguards and taken proactive steps to protect users experiencing mental health crises, companies can argue they have acted responsibly. This doesn't eliminate legal risk entirely, but it significantly strengthens their position if sued by users or their families claiming the AI failed to intervene appropriately.

The feature also signals a broader industry trend. Many popular AI makers are gradually rolling out similar mental health safeguards, recognizing that as AI becomes more deeply integrated into people's daily lives, the responsibility to monitor for and respond to signs of distress becomes unavoidable. This represents a fundamental shift in how AI companies view their role: no longer simply as neutral tools, but as systems with some responsibility for user welfare.

The trusted contacts feature is not a replacement for professional mental health care, and OpenAI and other AI makers continue to emphasize that general-purpose AI systems should not be treated as substitutes for human therapists. However, as millions of people continue to use these systems for mental health guidance, implementing safety mechanisms like trusted contacts represents a pragmatic acknowledgment of reality: people are already turning to AI for mental health support, and companies have an obligation to build in safeguards that might save lives.