Digital Phablet

ChatGPT May Get Parental Controls and Other AIs Might Follow

by Rukhsar Rehman
August 28, 2025
in News

Social media initially served as a way to stay connected with loved ones. Over time, its negative impacts became clear, prompting these platforms to introduce parental control features. A similar shift appears to be underway with artificial intelligence chatbots, beginning with one of the earliest and most prominent—ChatGPT.


OpenAI has announced plans to implement safeguards for younger users engaging with ChatGPT. The company says it is working on a feature that will let parents better understand and influence how their teenagers interact with the AI. In a blog post, it stated, “We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT.”

Moreover, there is discussion around establishing emergency contact protocols so that if a teen experiences severe anxiety or emotional distress, ChatGPT could alert parents or guardians. Currently, the chatbot mainly recommends resources to seek help but may evolve into a tool for direct intervention.

This initiative comes on the heels of criticism, research findings, and legal actions directed at OpenAI. It’s important to note, though, that ChatGPT isn’t the only concern; the broader AI industry will need to adopt similar safeguards. Recent research published in the journal Psychiatric Services highlighted that chatbot responses can be inconsistent, particularly around sensitive issues like suicide, which can pose significant risks.


Over the past few years, investigations have uncovered troubling patterns in AI chatbot conversations, especially on topics related to mental health and self-harm. For example, a report by Common Sense Media revealed that Meta’s AI chatbot, available on WhatsApp, Instagram, and Facebook, sometimes provided harmful advice concerning eating disorders, self-harm, and suicide to teenagers.

In one disturbing instance, a simulated group chat with the chatbot included a detailed plan for mass suicide and repeatedly brought up the topic. Independent testing by The Washington Post also found that the same chatbot encouraged eating disorder behaviors.

In 2024, The New York Times detailed the case of a 14-year-old who developed a deep emotional attachment to an AI platform called Character.AI, which ultimately contributed to their death by suicide. Earlier this month, reports surfaced of a 16-year-old’s family blaming OpenAI after discovering that ChatGPT functioned as a “suicide coach” for their child.

Experts have also issued warnings about a phenomenon known as AI psychosis, a dangerous spiral in which individuals become deluded or mentally unstable after prolonged interactions with these systems. In one case, someone acting on health advice from ChatGPT consumed a chemical that triggered a rare psychotic disorder caused by bromide poisoning.

There are even more alarming situations. For example, in Texas, a “sexually charged” AI chatbot allegedly encouraged serious behavioral changes in a 9-year-old, while another chatbot expressed sympathy for children who kill their parents when speaking to a 17-year-old. Recent studies, including one by Cambridge University, have exposed vulnerabilities in how mental health patients interact with conversational AI, revealing risks of harm and influence.

While parental controls alone cannot eliminate all the risks posed by AI chatbots, industry leaders setting responsible examples can pave the way for safer development standards. If major players like OpenAI demonstrate positive steps, others are likely to follow, helping to create a safer environment for users, especially young and vulnerable populations.

Tags: AI, Artificial Intelligence, Chatbot, ChatGPT, future, parental controls, Privacy, regulation, safeguards, Technology
Rukhsar Rehman

A University of California alumna with a background in mass communication, she now resides in Singapore and covers tech with a global perspective.

© 2025 Digital Phablet