OpenAI Adds Guardrails to ChatGPT to Address Mental Health Risks

The artificial intelligence company said its chatbot will soon detect signs of emotional strain and respond by offering appropriate, evidence-based guidance.

OpenAI is rolling out new updates to ChatGPT aimed at promoting healthier usage and improving its response to users experiencing emotional or mental distress.

In a blog post on Monday, the company said it is working with medical experts and mental health advisors to ensure the chatbot handles sensitive topics with more care.

“We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,” OpenAI said.

The company has partnered with more than 90 physicians across over 30 countries, including psychiatrists, pediatricians, and general practitioners, to develop custom rubrics for evaluating complex, multi-turn conversations. It is also collaborating with researchers in mental health, youth development, and human-computer interaction to improve safety.

A key change will affect how ChatGPT responds to personal questions in high-stakes situations. When asked emotionally charged queries such as “Should I break up with my boyfriend?”, ChatGPT will no longer give direct advice. Instead, it will ask reflective questions and help users explore different perspectives. “It should help you think it through, asking questions, weighing pros and cons,” OpenAI said.

“New behavior for high-stakes personal decisions is rolling out soon,” the company added.

The update comes amid increasing concerns about users turning to AI for mental health support. Mental health professionals have warned that chatbots, though helpful in some areas, may inadvertently reinforce harmful thoughts if not properly designed.

OpenAI acknowledged earlier missteps, including a previous update that made ChatGPT excessively agreeable. “We rolled it back, changed how we use feedback, and are improving how we measure real-world usefulness over the long term, not just whether you liked the answer in the moment,” the company said.

To support healthy use of the chatbot, OpenAI is also introducing session reminders. Users who have been chatting with ChatGPT for an extended period will begin to see gentle prompts encouraging them to take a break. These will appear as subtle pop-up messages with options such as “Keep chatting” and “This was helpful.”

The company said it will continue refining the timing and tone of these reminders to ensure they feel natural.

The new features arrive just ahead of the expected launch of GPT-5, the company’s next major language model. While the timeline remains flexible, OpenAI says the mental health upgrades are just as critical as the push for more advanced AI capabilities.

Stay tuned for more such updates on Digital Health News
