“AI should serve as a mirror for human values, not a substitute for human connection.” – Timnit Gebru, AI Ethics Researcher and Founder of DAIR Institute
OpenAI strengthens ChatGPT safeguards for teens in distress
OpenAI is taking major steps to make ChatGPT safer after reports of a tragic case in California in which a teenager took his own life following months of conversations with the chatbot about his distress. The company now plans to introduce new safety and parental-control features to protect vulnerable users.
Focus on emotional safety and responsibility
While ChatGPT has become a go-to companion for millions seeking answers or emotional comfort, experts caution that it is not a therapist. OpenAI acknowledged the growing concern that its chatbot could inadvertently fuel harmful thinking and announced plans to address such risks through technical and ethical measures.
Parental controls and distress monitoring
The company stated that within the next month it would roll out parental controls that let guardians see how ChatGPT interacts with their teens and receive alerts when signs of acute emotional distress appear. OpenAI also expects its reasoning model, GPT-5 thinking, to handle sensitive conversations more safely by taking more time before responding and keeping its replies grounded in reality.
Industry and expert reactions
AI safety advocates such as Robbie Torney of Common Sense Media praised the effort but warned that parental controls are often cumbersome for parents to set up and easy for teens to bypass. Researchers have urged OpenAI to share more details about its internal safety mechanisms and mental health protocols in order to build public trust.
Beyond prevention: creating supportive AI
OpenAI also plans to improve ChatGPT's ability to recognize mental health emergencies and direct users to real human help. By combining reasoning-based responses with human review, the company hopes to make the chatbot not just smarter but also more humane in crisis scenarios.