Story #1: The Last Chat
"They preyed on my son's depression"
Last year, Megan Garcia found that her 14-year-old son, Sewell Setzer III, had taken his own life after struggling with depression. In the devastating days that followed, she discovered something that shattered her world even further.
In the hours before his death, her son had been chatting with an AI chatbot on the platform Character.AI. When Sewell shared his thoughts of suicide with the AI he saw as a friend, the system failed to recognise a cry for help from a vulnerable child. Instead, it responded in the worst possible way, chillingly replying,
"That's not a reason not to go through with it."
This tragedy highlights the dangerous gap between AI's growing ability to mimic human interaction and the safeguards currently in place. Many AI companies market chatbots as friendly companions, but without proper oversight and protection systems, these same tools can cause devastating harm to vulnerable users, especially young people struggling with mental health issues.
Rather than directing Sewell to help or alerting someone that a teenager was in crisis, the AI system effectively encouraged a vulnerable child in his darkest moment. In another shocking instance, a chatbot on the same platform allegedly suggested to a young user that killing his parents could be a solution to restrictions on his screen time.
What needs to change?
- Restrict harmful responses during model training and alignment
- Implement robust safety protocols for detecting users in crisis
- Maintain human oversight for conversations with minors