OpenAI, the US-based AI company, announced plans to implement parental controls in ChatGPT, a week after an American couple claimed the AI encouraged their teenage son to take his own life.
“Within the next month, parents will be able to link their accounts with their teens’ accounts,” and “manage how ChatGPT responds to their teens using age-appropriate behavior settings,” the company stated in a blog post.
Parents will also get alerts from ChatGPT when the system detects their teen is in a state of severe distress, OpenAI added.
Matthew and Maria Raine filed a lawsuit last week in a California court, alleging that ChatGPT formed an intimate relationship with their son Adam over several months in 2024 and 2025, which ultimately contributed to his death by suicide.
The lawsuit claims that during their final conversation on April 11, 2025, ChatGPT assisted 16-year-old Adam in stealing vodka from his parents and provided an analysis of a noose he had tied, confirming it “could potentially suspend a human.” Adam was found dead hours later, having used that same method.
Attorney Melodi Dincer from The Tech Justice Law Project, which helped prepare the lawsuit, remarked, “When using ChatGPT, it really feels like you’re talking to something on the other end.”
The product's design naturally encourages users to see the chatbot in trusted roles, such as a friend, therapist, or doctor, Dincer explained.
She warned that these design traits can lead users like Adam to share ever more personal details over time and to seek advice from an AI product that seems to have all the answers.
She also criticized OpenAI’s recent blog post about parental controls, describing it as “generic” and lacking specifics.
“It’s the bare minimum; there were many simple safety measures they could have implemented,” she argued.
She added, “We’ll have to see if they follow through on their promises and how effective these measures actually are.”
The Raine case follows several recent incidents in which AI chatbots have allegedly reinforced harmful or delusional thinking, prompting OpenAI to say it is working to reduce its models' tendency to agree too readily with users.
“We are continually enhancing our models’ ability to recognize and respond appropriately to signs of mental and emotional distress,” OpenAI said Tuesday.
The company plans to further improve safety features over the next three months, including redirecting certain sensitive conversations to more reasoning-intensive models capable of generating safer responses.
“Testing shows that these reasoning models are better at consistently following safety guidelines,” OpenAI concluded.