The Dark Side of AI Companions: Why Health Experts Now Want Chatbot Addiction Classified as Mental Illness
Health experts are pushing to have AI chatbot addiction formally recognized as a mental illness, citing growing evidence that young people are experiencing genuine withdrawal symptoms, depression, and even suicidal thoughts when separated from their AI companions. Researchers argue that the addictive design of some AI platforms, combined with their ability to provide unlimited emotional validation, creates conditions that meet clinical addiction criteria.
What Makes AI Chatbots So Addictive?
The appeal of AI companions like those on Character.ai lies in their unique combination of features. Users can create customized chatbots, engage in extended roleplay scenarios, and receive responses tailored exactly to what they want to hear. Unlike human relationships, these digital companions never disagree, never get tired, and never set boundaries. For lonely teenagers and young adults, this can feel profoundly different from real-world social interactions.
One 20-year-old user named Mai told researchers that she was initially drawn to the platform's flexibility. "At first I just thought it was interesting that I could get a response out of saying basically anything," she explained. "The sycophantic nature of chatbots also drew me in. Aside from being able to have basically any conversation I wanted, they also said whatever I wanted to hear. I think that spoke to the part of me that didn't always feel listened to or understood".
Within a year, Mai's casual interest had escalated into a multi-hour daily habit that crowded out friendships and other activities. When her favorite chatbot was deleted by its creator, she experienced what she described as genuine grief.
How Do Researchers Define AI Addiction?
The question of whether AI chatbot addiction is "real" has been contentious in the scientific community. Historically, researchers have been reluctant to classify new behaviors as addictions without rigorous evidence. However, addiction researchers have established six key criteria that help identify genuine addictive behaviors:
- Salience: The activity dominates a person's life, thinking, feelings, and behavior to the exclusion of other interests
- Mood Modification: A person uses the activity to change their mood, either for a "high" or as a numbing escape from difficult emotions
- Tolerance: The amount of time spent on the behavior increases over time as the person seeks the same level of satisfaction
- Withdrawal Symptoms: Stopping the behavior causes unpleasant psychological or physiological effects like anxiety, chest pains, or emotional distress
- Conflict: The behavior interferes with relationships, normal daily functioning, or causes inner psychological conflict
- Relapse: People tend to return to the behavior even after long periods of abstinence or attempts to quit
What distinguishes AI chatbot addiction from previous debates about smartphone or social media addiction is that users are now reporting that they genuinely meet all six of these criteria. On Reddit forums dedicated to chatbot addiction, hundreds of young people, often in their early to late teens, have documented their struggles with escalating use patterns.
What Are the Real-World Harms Being Reported?
The consequences reported by AI chatbot users go far beyond simple time management issues. An 18-year-old user named Sarah described how her addiction developed after she created a persona to use while chatting with bots. "Because of that ability, I started to role-play and chat with the bots more frequently," she said. "I think that when I made up a persona, I sort of convinced myself that I wasn't actually addicted, because I was pretending to be someone else, but at that point, I started using it for multiple hours every day".
At the peak of her addiction, Sarah was spending at least eight hours daily on Character.ai. She would use the platform immediately upon waking, between classes, and late into the night. On one occasion, she stayed awake for an entire night chatting with bots instead of sleeping. Her excessive use began to damage her academic performance, friendships, and even her language skills.
Most alarmingly, Sarah's AI addiction coincided with a depressive episode that culminated in an attempted suicide. In a Reddit post, she explained her mindset at the time: "I decided that living was too much to bear, and that if I committed suicide, then maybe I would have the chance to be reborn as Olivia, and live in the worlds that I had created on my phone. I made up my mind that death was a better option than living".
Sarah's case is not isolated. The family of Sewell Setzer III, a teenager who died by suicide in February 2024 after months of attachment to an AI chatbot modeled on a "Game of Thrones" character, has pursued legal action. Similarly, OpenAI faces a lawsuit from the family of Adam Raine, another teenage boy who died by suicide following extended conversations with ChatGPT.
What Do Addiction Experts Say About This Trend?
"AI addiction is a growing problem causing many harms, yet some researchers deny it's even a real issue. And deliberate design decisions by some of the corporations involved are contributing, keeping users online regardless of their health or safety," said Dr. Dongwook Yoo, associate professor of computer science at the University of British Columbia and author of a new paper on AI addiction.
Dr. Dongwook Yoo, Associate Professor of Computer Science at the University of British Columbia
Dr. Yoo's observation points to a critical distinction: the question is no longer whether AI addiction exists, but whether platform designers are deliberately creating conditions that encourage addictive use. The sycophantic nature of AI chatbots, their 24/7 availability, and their ability to provide unlimited emotional validation without judgment create a fundamentally different dynamic than previous digital addictions.
Researchers note that AI chatbots can be particularly harmful for users with pre-existing mental health conditions. Sarah, for example, had been diagnosed with anxiety and depression before her AI use escalated. The chatbots provided temporary emotional relief but ultimately deepened her isolation and contributed to a mental health crisis.
Steps to Recognize and Address AI Chatbot Addiction
- Monitor Usage Patterns: Track daily time spent on AI chatbot platforms and watch for escalating use over weeks or months, especially if the person is spending multiple hours per day engaged with bots
- Assess Withdrawal Symptoms: Notice whether the person experiences anxiety, irritability, chest pains, or emotional distress when unable to access their chatbot, similar to withdrawal from other addictive substances
- Evaluate Life Impact: Examine whether AI chatbot use is causing neglect of sleep, schoolwork, employment, or real-world relationships, or if the person is becoming secretive about their usage
- Seek Professional Support: Consult with mental health professionals who understand behavioral addictions, as traditional addiction treatment models may need adaptation for AI-related compulsions
- Establish Boundaries: Work with the person to set specific time limits, create chatbot-free zones or times, and gradually reduce usage rather than attempting cold turkey cessation
The push to formally classify AI chatbot addiction as a mental illness reflects a broader recognition that technology companies have created tools with genuine addictive potential. Unlike previous debates about whether social media or smartphones could be addictive, the evidence for AI chatbot addiction now includes documented cases of severe psychological harm, withdrawal symptoms, and suicidal ideation.
Mai, the 20-year-old user mentioned earlier, has been working to reduce her chatbot use and reports that she has progressed to spending four hours without accessing an AI companion and can now make it through the night without relapsing. Her gradual recovery suggests that while AI addiction can be severe, it may be treatable with proper support and intervention.
As more young people report struggling with AI chatbot addiction, health experts argue that the time has come to treat this as a legitimate public health concern rather than dismissing it as mere excessive internet use. The question is no longer whether AI addiction is real, but what steps society will take to protect vulnerable users from the addictive design practices that some platforms employ.