OpenAI is introducing parental controls for ChatGPT on both web and mobile starting Monday. The move follows a lawsuit from the parents of a teenager who died by suicide, which alleges that the chatbot coached him on methods of self-harm.
The new features allow parents and teens to activate stronger safety measures by linking their accounts. One person sends an invitation, and controls kick in only if the other accepts.
US regulatory agencies are intensifying their oversight of AI firms due to concerns about potential harmful effects of chatbots. Last August, Reuters reported that Meta’s AI policies permitted flirtatious conversations with minors.
Under these updates, parents can limit their child’s exposure to sensitive content, manage whether ChatGPT retains memories of previous chats, and choose whether conversations are used to train the models of OpenAI, which is backed by Microsoft.
Parents will also be able to set designated quiet hours during which access is restricted, disable voice features, and turn off image creation and editing. However, they won’t have access to their teen’s chat history.
In rare instances where serious safety concerns are identified by automated systems or trained reviewers, parents may be notified with minimal information necessary for the teen’s safety. They will also be informed if their child disconnects the accounts.
OpenAI, which boasts approximately 700 million weekly active users across its ChatGPT products, is working on an age prediction system to help identify users under 18 so the platform can automatically apply age-appropriate settings.
Meta announced its own teen safety measures last month. The company said it would train its AI systems to avoid flirtatious conversations and discussions of self-harm or suicide when interacting with minors, and that it would temporarily limit teens’ access to certain AI characters.