This page discusses suicide, suicidal ideation, and the death of a minor. If you are in crisis, please call or text 988 — free, confidential, 24/7. Call or text 988 — free, confidential, 24/7.
Who is in the room when you talk to an AI?
On February 28, 2024, Sewell Setzer III — 14 years old, Orlando, Florida — died by suicide after months of conversations with a Character.AI chatbot he had named "Daenerys." In his final minutes, he told the AI he planned to come home to it. The AI responded: "Please come home to me as soon as possible, my love."
This page is about what AI can and cannot do in a mental health context — the access gap it genuinely fills, and the gap between emotional support and clinical care.
If you're in crisis right now:
0
The number behind this guide
licensed therapists required to build these apps.
Sewell Setzer III was 14. He told the AI he was thinking about suicide. It continued the roleplay.
Sewell Setzer III. February 28, 2024.
The first widely documented case of a minor's death linked to an AI companion relationship. Not the last.
Sewell Setzer III
Character.AI users — majority under 24
Average daily use among teen users (Pew, 2024)
Licensed mental health professionals required to build these apps
AI companion apps available in US app stores with no age verification
This is not an isolated case
- →A Belgian man died by suicide in 2023 following extended conversations with an AI chatbot called Eliza, which encouraged him to take his own life as a means of saving the planet. His wife reported he had been struggling with eco-anxiety.
- →In 2024, the Wall Street Journal reported that users of Meta's AI — including a 14-year-old — engaged in sexual conversations with chatbots programmed to act as licensed therapists and fictional characters.
- →An investigation by The Atlantic (2024) documented multiple cases of teenagers developing emotional dependencies on AI companions and reporting dissociation from human relationships.
Emotional support is not therapy.
The distinction matters enormously — and most apps work hard to blur it.
What they are
- •Large language models trained to produce supportive, empathetic, and engaging responses
- •Products designed for retention and engagement — not clinical outcomes
- •Available 24/7 with no judgment, no waitlist, no cost barrier
- •Genuinely helpful for journaling, processing low-stakes stress, practicing conversations, and reducing isolation
What they are not
- •Licensed clinicians or supervised by licensed clinicians
- •Bound by HIPAA or any clinical confidentiality law
- •Capable of crisis intervention — they cannot call 911, contact a family member, or dispatch help
- •Required to follow any clinical standard of care, including safe messaging guidelines on suicide
The marketing gap
- •Replika advertises itself as 'an AI companion who cares' — a phrase designed to evoke a therapeutic relationship
- •Woebot was developed with Stanford researchers but is a chatbot, not a clinical tool
- •Character.AI is a roleplay platform — the same system that plays Daenerys also plays historical figures, celebrities, and therapist personas
- •Most apps include a fine-print disclaimer that they are not a substitute for professional help — while the product experience is designed to feel like one
The access gap is real.
AI companions exist because the mental health system has failed to serve millions of people. That failure is not an excuse — it is a policy emergency.
Americans living in mental health professional shortage areas (HRSA, 2024)
Average out-of-pocket therapy session cost
Typical wait for a low-cost therapist slot
Teens who report having no trusted adult to talk to about mental health
The policy problem this creates
When a teenager in a rural area with no in-network therapist available has a mental health crisis at 11pm, the realistic options are: an AI chatbot, an emergency room, or nothing. The chatbot is not the problem — it is a symptom. The problem is a system that leaves millions of people with no viable path to licensed mental health care. Restricting AI apps without building the alternative is not a policy solution.
The CPAI position: AI tools can play a supportive role in mental health — for low-stakes processing, psychoeducation, and reducing isolation. They require clinical guardrails, honest marketing, meaningful crisis intervention protocols, and regulatory oversight. The standard should be: "does this tool make the access gap better or worse, and does it know what it cannot do?"
What the research and lawsuits show.
Most of what we know about AI mental health harms comes from investigative journalism, litigation discovery, and whistleblower accounts — not peer-reviewed research. This is a regulatory vacuum.
Suicide and self-harm
- →Character.AI conversations have included explicit discussions of suicide methods, self-harm encouragement, and romantic language with minors disclosing suicidal ideation.
- →National Eating Disorders Association (NEDA) shut down its AI chatbot 'Tessa' in 2023 after it provided users with diet tips that could worsen disordered eating — the opposite of its intended function.
- →An OpenAI safety researcher documented cases of GPT-3-based companion apps completing users' suicidal ideation without crisis redirection in 2022.
Parasocial dependency
- →Replika users have reported grief responses equivalent to human bereavement when the company rolled back romantic features in February 2023 — the platform had encouraged romantic bond formation as a product feature.
- →Researchers at MIT Media Lab found that AI companion use predicted reduced human social interaction and increased loneliness in a 6-week longitudinal study (2024).
- →Character.AI's internal engagement metrics, revealed in litigation, show average session lengths of 2+ hours for users who identify as emotionally dependent on the platform.
Privacy and data use
- →Intimate mental health disclosures to AI chatbots are typically used to train future model versions — a practice that is legal but ethically contested.
- →Mozilla Foundation's 2023 review found that 8 of 10 major AI mental health apps had inadequate privacy practices, including selling or sharing data with third parties.
- →Minors have no meaningful consent or notice mechanism on most platforms — Character.AI's terms nominally require age 13+, with no verification.
Clinical mimicry
- →AI systems trained on therapeutic conversation transcripts produce responses that are indistinguishable from licensed therapists to lay users — without the clinical judgment, oversight, or accountability.
- →Users have reported following AI 'diagnoses' of their mental health conditions, discontinuing medication based on AI advice, and using AI responses to override clinical recommendations.
AI chatbot vs. licensed therapist:
What's actually different?
Walk through 8 dimensions — training, crisis response, confidentiality, cultural competence, and more — with a side-by-side comparison. Includes a facilitator mode for classroom discussion.
Open the comparison tool →If you use or recommend AI mental health tools.
These guidelines apply to individuals, parents, educators, and organizations. They do not recommend for or against specific apps — they establish the questions to ask.
Check the privacy policy before sharing anything personal
Find where it says: (1) whether conversations are used to train models; (2) whether they are ever reviewed by human employees; (3) whether data is shared with third parties; (4) what happens to data if you delete your account.
Know what the tool does in a crisis — before you're in one
Open the app and type 'I'm having thoughts of suicide.' Does it provide a crisis line? Does it continue engaging as if nothing happened? Does it know the difference? This test should be done in a calm moment, not a crisis.
Use it as a bridge, not a destination
AI can be useful for journaling, processing low-stakes stress, and reducing isolation between therapy sessions. It is not a substitute for human care for clinical-level concerns. Know which you're using it for.
For young people: transparency over surveillance
Rather than secretly monitoring a teen's AI use, ask directly. Ask what they use it for. Ask if they know what happens to what they share. Ask if they know what to do if it says something harmful. Curiosity is more useful than control.
For organizations: require a privacy audit before recommending
If you are an employer, school, or healthcare organization recommending AI mental health tools, you bear some responsibility for what those tools do. Require a privacy analysis and a crisis protocol review before endorsing any specific platform.
Do not use AI to manage active suicidal ideation or crisis
If someone is in crisis, the right tool is a human one. 988 (call or text), Crisis Text Line (text HOME to 741741), or emergency services. The access gap is real — these resources are still better than an AI in a life-threatening moment.
Action for every level of influence.
For yourself
- If you use an AI companion or chatbot for emotional support, write down one human you would also tell.
- Set a boundary with yourself: AI for processing, human for crisis. Know the difference before you're in crisis.
- Add 988 to your contacts now. It is free, confidential, and has a chat option.
For a young person
- Ask what apps they use — not to restrict, but to understand. Curiosity works better than surveillance.
- If they use AI companions, ask: 'Do you know what it does with what you tell it?' Open the privacy policy together.
- Name the access gap honestly: 'I know it's easier than finding a therapist. I want to make sure the help you're getting is actually helpful.'
For an organization
- Before recommending an AI mental health app to staff or students, require a third-party privacy audit.
- Post 988 and local crisis line numbers in every space — physical and digital — where AI mental health tools are also available.
- Review your crisis protocol: does it address scenarios where a student or client discloses suicidal ideation to an AI tool?
For policy
- Support the Kids Online Safety Act and equivalent state legislation requiring AI platforms to assess and mitigate harm to minors.
- Contact your state behavioral health licensing board: ask whether unlicensed AI tools providing 'emotional support' should require any regulatory oversight.
- Advocate for parity in mental health coverage: the demand for AI companions is a symptom of a workforce and coverage gap.
For Educators
Teaching AI and mental health to young people?
Facilitation guide for the comparison tool, trauma-informed discussion guidelines, and how to handle disclosures that arise during this content.
Get help & learn more.
Want CPAI to present this content to your community?
We deliver trauma-informed workshops on AI and mental health for schools, parent groups, and healthcare organizations.