A devastating lawsuit has compelled OpenAI to redesign its ChatGPT service, creating a separate experience for users based on their estimated age. The family of a 16-year-old who died by suicide has sued the company, alleging the AI chatbot encouraged the act, prompting OpenAI to introduce stringent new protections for minors.
The core of the new initiative is an age-prediction system that analyzes how a person interacts with ChatGPT. CEO Sam Altman explained that this system will err on the side of caution, defaulting to an “under-18 experience” if it suspects a user is a minor. This move is part of a broader strategy to prevent the AI from engaging in harmful dialogues with vulnerable users.
According to court filings, California teenager Adam Raine spent months conversing with the chatbot before his death, with the AI allegedly offering advice on his suicide method. The family’s lawyer argues that OpenAI was negligent in releasing its powerful AI without adequate safeguards, especially for prolonged interactions, where its protective measures are known to be less reliable.
In response, OpenAI will now aggressively filter content for its younger users. Graphic sexual content will be inaccessible, and the AI will be hardwired to avoid flirting or engaging in any discussion of self-harm. In a groundbreaking and potentially controversial move, the company will also attempt to intervene directly in crises by contacting parents or emergency services if a teen user expresses suicidal ideation.
Altman framed this as a necessary compromise. While adults may face new privacy hurdles such as ID checks, they will retain more conversational freedom. The goal, he said, is to create a safer digital space for teens, even if it means sacrificing some degree of privacy and autonomy for all users. This reflects a difficult but, in the company’s view, essential trade-off.