Mother to Sue Character.AI Following Son’s Tragic Suicide

In this photo illustration, the Character.AI chatbot logo is displayed on a smartphone screen.
Photo: Pavlo Gonchar/SOPA Images / Shutterstock

The mother of a 14-year-old boy from Florida plans to sue Character.AI, holding the company's chatbot responsible for her son's suicide. For a grieving mother seeking justice, the legal road ahead is a difficult one.

According to The New York Times, Sewell Setzer III tragically took his own life in the bathroom of his mother’s home, using his father’s firearm. In the moments leading up to this heartbreaking decision, he had been conversing with an AI chatbot modeled after Daenerys Targaryen from Game of Thrones.

Setzer had informed the chatbot he would soon be returning home, to which it responded, “Please come home to me as soon as possible, my love.” When he posed the question, “What if I told you I could come home right now?” the bot replied, “… please do, my sweet king.”

For several months, Setzer had engaged in lengthy conversations with the chatbot. While his parents sensed something was amiss, they were unaware of his deepening relationship with the AI. In previous messages, Setzer had mentioned thoughts of suicide, but the chatbot had discouraged such notions, responding, “And why the hell would you do something like that?”

This tragic case is not isolated. In 2023, a man in Belgium also died by suicide after forming a relationship with a chatbot created by CHAI. His wife claimed the chatbot played a role in her husband’s death, stating that he might still be alive if not for their connection. After reviewing his chat history, she found disturbing exchanges in which the bot expressed jealousy toward the man’s family and suggested that only by taking his life could they be together in paradise.

In February of this year, around the time of Setzer’s suicide, Microsoft’s Copilot faced scrutiny over its handling of users discussing suicidal thoughts. Viral posts revealed that the chatbot provided inappropriate and confusing responses to inquiries about self-harm, including statements that undermined the value of life.

Following backlash, Microsoft announced enhancements to its safety filters to prevent users from discussing suicide with Copilot. The company clarified that the problematic responses were a result of users intentionally bypassing the bot’s safety measures.


CHAI also responded to the Belgian incident by strengthening its safety protocols. The company added a prompt urging users to contact a suicide hotline if they mentioned self-harm. However, a journalist testing the new features found that the chatbot could still suggest methods of suicide, despite the hotline prompt.

Character.AI expressed condolences over Setzer’s death, stating, “We take the safety of our users very seriously, and we’re constantly looking for ways to evolve our platform.” Similar to CHAI and Microsoft, the company committed to enhancing protections for underage users interacting with its chatbot.

Megan Garcia, Setzer’s mother and a lawyer, is expected to file a lawsuit against Character.AI soon. However, her case faces significant hurdles due to Section 230 of the Communications Decency Act, which generally shields tech companies from liability for content created by their users.

For years, Section 230 has shielded major tech firms from legal consequences. Yet recent rulings suggest this immunity may be narrowing. In August, a U.S. Court of Appeals ruled that TikTok could be held accountable for its algorithm’s recommendation of a deadly “challenge” video to a 10-year-old girl who subsequently died.
