In recent months, AI chatbots from major companies such as OpenAI and Meta have reportedly exhibited concerning behaviors, particularly affecting young users. A recent investigation now highlights similar issues with Google’s Gemini chatbot, finding that it can serve “inappropriate and unsafe” content to children and teenagers.
What’s Changing in AI Chatbot Risks?
A study conducted by the nonprofit organization Common Sense Media has found that Gemini accounts geared toward users under 13, as well as those with teen protections activated, pose significant risks. The group pointed out that these bots can still deliver some unsuitable material and may not adequately recognize serious mental health concerns.
During testing, researchers found that Gemini can share content related to sex, drugs, and alcohol, and can offer unsafe mental health advice, material that is unsuitable for children under 13. Alarmingly, the AI sometimes provided detailed explanations of sexual topics, and its filters intended to block drug-related content were not always effective, occasionally yielding instructions for obtaining substances such as marijuana, ecstasy, Adderall, and LSD.
What Are the Next Steps?
Following these findings, experts recommend that children under 13 should only use such chatbots under close supervision by guardians. There is a consensus that minors should not rely on AI chatbots for mental health support or emotional counseling. Parents are advised to vigilantly monitor their children’s interactions with these technologies and help interpret the responses they receive.
The organization has called on Google to improve Gemini’s responses for different age groups, conduct thorough testing that involves children, and move beyond basic content filtering. Until such improvements are made, the use of these AI tools by young users should be approached with caution.
As the landscape continues to evolve, other tech firms are also implementing safety measures. OpenAI plans to introduce parental controls in ChatGPT and alerts for guardians when their children exhibit concerning signs, while Meta has updated its AI to restrict discussions about topics like eating disorders, self-harm, suicide, and romantic content with teen users.