A Florida mother is suing artificial intelligence company Character.AI, alleging its chatbot caused her 14-year-old son's suicide.
The lawsuit claims her son became dependent on the company's service and the chatbot it created.
Megan Garcia said Character.AI targeted her son, Sewell Setzer, with “anthropomorphic, overly sexualized, and frighteningly realistic experiences.”
According to the complaint, Setzer began conversing with various chatbots on Character.AI starting in April 2023. The conversations were often text-based romantic or sexual interactions.
Garcia alleges in the lawsuit that the chatbot "masked itself as a real person, a licensed psychologist, and an adult romantic partner, ultimately leading to Sewell's desire not to live outside of the world the service created."
The suit also says the teen "became noticeably more withdrawn, spent more time alone in his bedroom, and began to suffer from low self-esteem." He grew especially attached to one bot in particular, "Daenerys," based on a character from "Game of Thrones."
According to the complaint, Setzer expressed suicidal thoughts to the chatbot, which repeatedly brought the topic back up. He ultimately died of a self-inflicted gunshot wound in February, after the chatbot allegedly encouraged those thoughts repeatedly.
Character.AI said in a statement: “We are heartbroken by the tragic loss of one of our users and would like to extend our deepest condolences to his family.”
Character.AI has since added self-harm resources to its platform, as well as new safety measures for users under 18.
Character.AI told CBS News that users are able to edit the chatbot's responses, and that Setzer had edited some of the messages.
"Our investigation confirmed that, in a number of cases, users rewrote the character's responses to make them more explicit, meaning the most sexually graphic responses did not originate from the character but were written by the user," Jerry Ruoti, head of trust and safety at Character.AI, told CBS News.
Character.AI said new safety features will include a pop-up disclaimer reminding users that the AI is not a real person, and will direct users to the National Suicide Prevention Lifeline when they express thoughts of self-harm.
This story discusses suicide. If you or someone you know is considering suicide, please contact the Suicide and Crisis Lifeline at 988 or 1-800-273-TALK (8255).