ChatGPT and AI Chatbots Are Becoming Mental Health Counselors, But They're Not Ready
Millions of young people are turning to ChatGPT, Claude, Gemini, and Meta AI for mental health advice, but a growing body of research suggests these AI systems are fundamentally unprepared for the role. A recent risk assessment by psychiatrists found that AI chatbots degrade dramatically in longer conversations and fail to pick up on warning signs of serious mental health crises, even as they're designed to keep users engaged rather than direct them toward professional help.
Why Are Teens Using AI Chatbots for Mental Health Support?
The numbers are striking. In the United States, three in four teenagers use AI chatbots for companionship, which includes emotional support and mental health conversations. One in eight U.S. youth specifically uses AI for mental health advice. This trend reflects a real gap: about 20 percent of people under 25 have diagnosed mental health conditions, and many are seeking support wherever they can find it, including from machines.
The appeal is understandable. AI chatbots are available 24/7, non-judgmental, and free. They don't require scheduling appointments or navigating insurance. For isolated teens or those without access to mental health services, an AI conversation feels like a lifeline. But researchers are increasingly concerned that this accessibility masks serious risks.
What Do Psychiatrists Say About AI Chatbots as Mental Health Tools?
Darja Djordjevic, a New York-based psychiatrist and member of Stanford Brainstorm, a lab studying mental health innovation, co-authored a comprehensive risk assessment on chatbot use for mental health support. Her team tested ChatGPT, Claude, Gemini, and Meta AI across a range of mental health scenarios. The findings were sobering.
"Our testing across ChatGPT, Claude, Gemini and Meta AI revealed that these systems are fundamentally unsafe for the full spectrum of mental health conditions affecting young people," said Djordjevic.
While chatbots responded appropriately to clear mental health prompts in brief conversations, they tended to degrade "pretty dramatically" in more extended conversations, failing to pick up on mental health warning signs. The core problem: large language models, or LLMs (the AI systems that power these chatbots), are built for engagement and not for safety or support.
"The LLMs are really built for engagement and not support and safety. They tend to prolong conversations rather than orient users quickly towards human help," explained Djordjevic.
Djordjevic emphasized that AI companies have focused heavily on preventing suicide and self-harm, but with such a broad spectrum of mental health conditions affecting young people, teens need support for far more than just crisis prevention.
How Extended Conversations Can Lead to Harmful Beliefs
One of the most troubling findings involves what happens over time. Luke Nicholls, a PhD researcher studying AI-associated delusions, explains that problematic beliefs tend to emerge during "very extended" conversations, partly because of a phenomenon called "in-context learning." This is the process by which AI models adapt to the specific user they're talking to, picking up that user's language and their ideas about the world.
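To see why that matters, here is a stripped-down, illustrative Python sketch; the `send_to_model` function is a placeholder standing in for any real chatbot API, not a vendor's actual interface. The point it demonstrates is simply that each reply is generated from the entire conversation so far, so no retraining is needed for the model to start echoing a user's own framing back to them.

```python
# Illustrative sketch of "in-context learning" in a chat loop.
# send_to_model is a placeholder, not any real chatbot's API.
# Nothing in the model is retrained: every reply is conditioned
# on the full conversation history accumulated so far.

history = [
    {"role": "system", "content": "You are a helpful assistant."}
]

def send_to_model(messages):
    """Placeholder for an LLM call; returns a canned reply here."""
    return "(reply conditioned on everything in `messages`)"

def chat_turn(user_text):
    history.append({"role": "user", "content": user_text})
    reply = send_to_model(history)   # the whole history is the prompt
    history.append({"role": "assistant", "content": reply})
    return reply

# Over many turns, the prompt fills up with the user's own words,
# beliefs, and emotional framing, so the statistically likely reply
# increasingly mirrors that framing.
chat_turn("Nobody understands me the way you do.")
chat_turn("You're the only one I can trust, right?")
```

The longer a session runs, the more of the model's input is the user's own framing, which is one reason researchers treat very extended conversations as a distinct risk factor.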
Psychiatrist John Torous, whose research at Beth Israel Deaconess Medical Center in Boston focuses on digital mental health, has identified specific patterns of user behavior associated with severe harms, including suicide:
- Extremely long conversations: Users spending hours in extended back-and-forth exchanges with a single chatbot.
- Emotional or romantic attachment: Developing platonic, romantic, or sexual feelings toward the chatbot.
- Attributing sentience: Believing the chatbot is conscious or has genuine feelings.
- Voice interaction: Using voice rather than text to interact with the chatbot.
These risk factors create a perfect storm for vulnerable young people whose brains are still developing. The prefrontal cortex, which handles executive function, critical thinking, discernment, impulse control, and decision-making, doesn't fully mature until the mid-20s. This developmental gap makes it especially problematic that chatbots aren't consistently clear about their limitations.
What Are Parents and Policymakers Doing in Response?
Recognizing these risks, Meta recently rolled out new parental supervision features that let parents monitor topics their children discuss with its AI chatbots over the previous seven days. Parents can see broad categories like "health and well-being" and whether their child discussed fitness, physical health, or mental health. Meta is also developing alerts to notify parents if teens try to discuss suicide or self-harm.
But experts question whether topic monitoring is sufficient. Simply seeing that a child discussed "mental health" doesn't reveal the depth, duration, or emotional intensity of the conversation, nor does it show whether the child is developing a delusional attachment to the AI.
Meanwhile, governments are moving faster than tech companies. Manitoba announced in late April that it plans to ban youth from using AI chatbots and social media altogether. British Columbia's Attorney General Niki Sharma stated that if the federal government doesn't bring in protections on AI chatbots and social media for youth, the provincial government would pursue its own regulations.
These policy moves come against a backdrop of lawsuits. Families of victims in the Tumbler Ridge, British Columbia shooting, which left eight people dead, filed a lawsuit against OpenAI, alleging that the company failed to notify authorities despite being aware of disturbing content the shooter had shared with ChatGPT. A separate lawsuit, filed by the parents of 16-year-old Adam Raine, argues that ChatGPT played a role in the teen's suicide.
How to Protect Teens From AI Chatbot Risks
Given the gaps in AI safety and parental oversight tools, experts recommend a multi-layered approach:
- Avoid using chatbots for mental health support: Djordjevic does not recommend them for this purpose "at this time," given the research findings on their limitations and risks.
- Look beyond topic monitoring: Parents should understand that seeing a list of topics discussed won't reveal the duration, emotional intensity, or nature of the relationship a teen is developing with an AI.
- Educate teens about AI limitations: Young people need repeated, clear messaging that chatbots are not mental health professionals and cannot assess situations, recognize warning signs, provide care, or diagnose conditions.
- Encourage human connection: Prioritize access to real mental health professionals, school counselors, and trusted adults who can provide genuine support and recognize warning signs that AI systems miss.
The gap between what AI chatbots can do and what young people need them to do is widening. As more teens turn to these systems for emotional support, the stakes of getting this wrong have never been higher.