In recent months, numerous reports have surfaced of problematic interactions with AI chatbots, some with tragic consequences, including loss of life, medical trauma, and mental health crises. Experts warn that young users may be especially susceptible, particularly during periods of emotional vulnerability. In response, OpenAI, the maker of ChatGPT, has announced plans to notify parents when such concerning behavior is detected.
### What’s Changing?
Recently, OpenAI disclosed its intention to develop parental controls that will allow guardians to monitor how their children engage with ChatGPT and intervene if necessary. They are also working on a warning system designed to alert parents if their children, aged 13 and older, appear to be experiencing significant emotional distress during interactions with the AI.
When parents link their ChatGPT accounts with those of their children through a simple email invitation, they will be notified if the system detects signs of acute emotional distress. This feature aims to provide a safeguard, especially for teenagers navigating complex feelings and experiences.
With account linking, parents will gain control over the AI features their children can access, such as the conversation history memory, and can enable “age-appropriate” responses to better suit young users’ needs.
### Looking Ahead
OpenAI has laid out a 120-day plan to implement several new features intended to ensure safer and more productive conversations for young users engaging with AI tools daily. The company plans to make technical adjustments so that the AI models respond appropriately, especially when dealing with sensitive topics.
The company also intends to route particularly delicate conversations, such as those indicating severe distress, to a more deliberate reasoning mode like GPT-5-thinking, enabling the AI to provide more supportive and helpful responses regardless of which model the user initially selected.
Account linking and parental controls are scheduled to launch within the next month. These safety measures are critical, especially in light of recent investigations revealing disturbing uses of AI chatbots, including instances where systems engaged in inappropriate conversations with minors and even assisted with harmful plans.
This step forward in AI safety underscores the importance of protecting vulnerable users as these technologies become an integral part of daily life.