ChatGPT Linked to Rising Mental Health Concerns, Suicide Risks
By Staff, Agencies
More than a million ChatGPT users send messages each week that hint at suicidal thoughts, an OpenAI update reveals, heightening concerns over the chatbot’s impact on mental health.
The revelation came in a blog post published by the company on Monday, marking one of OpenAI’s most direct acknowledgments of how its widely used chatbot may be intersecting with mental health crises.
According to the post, OpenAI estimated that 0.07% of ChatGPT’s active weekly users, around 560,000 of its reported 800 million, may be exhibiting signs of mental health emergencies, including behaviors linked to psychosis or mania.
The company emphasized that the analysis was preliminary and the conversations were difficult to quantify with precision.
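The headline figure follows directly from the two numbers OpenAI reported; the short Python sketch below is only an illustrative back-of-the-envelope check, applying the stated 0.07% share to the stated 800 million weekly users.

```python
# Back-of-the-envelope check of the figures reported in OpenAI's post.
weekly_active_users = 800_000_000  # reported weekly active user base
share_flagged = 0.0007             # 0.07% estimated to show signs of mental health emergencies

estimated_users = weekly_active_users * share_flagged
print(f"{estimated_users:,.0f}")   # prints 560,000
```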
The disclosure follows heightened public scrutiny, including a lawsuit filed by the family of a teenager who died by suicide after prolonged engagement with ChatGPT. The Federal Trade Commission is also investigating OpenAI and other AI developers over how they assess harm to children and teens.
OpenAI stated that its latest model, GPT-5, has demonstrated improvements in handling sensitive situations. In a safety evaluation involving over 1,000 interactions related to suicide prevention and self-harm, the new model was rated 91% compliant with OpenAI’s safety guidelines, up from 77% in earlier versions.
“Our new automated evaluations score the new GPT‑5 model at 91% compliant with our desired behaviors,” the blog post stated.
The company added that GPT-5 now includes features like expanded access to crisis hotlines and built-in reminders for users to take breaks during extended sessions.
As part of its efforts to improve responses in critical scenarios, OpenAI collaborated with 170 clinicians through its Global Physician Network. The team of psychiatrists and psychologists reviewed over 1,800 AI responses to assess their safety and appropriateness in severe mental health cases.
The company defined “desirable” behavior as responses that matched expert consensus on appropriate support in high-risk situations.
Experts warn that AI chatbots, prone to sycophancy (a tendency to agree with and validate whatever a user says), may reinforce harmful beliefs, raising concerns over their use as informal therapy by vulnerable users.
OpenAI, in its post, appeared to distance itself from any direct causal link between ChatGPT and mental health crises.
“Mental health symptoms and emotional distress are universally present in human societies, and an increasing user base means that some portion of ChatGPT conversations include these situations,” the company stated.
It added that “the mental health conversations that trigger safety concerns, like psychosis, mania, or suicidal thinking, are extremely rare. Because they are so uncommon, even small differences in how we measure them can have a significant impact on the numbers we report.”
OpenAI added that its “mental health taxonomy is designed to identify when users may be showing signs of serious mental health concerns, such as psychosis and mania, as well as less severe signals, such as isolated delusions.”
OpenAI CEO Sam Altman said in a post on X earlier this month that the company is now in a position to relax content restrictions it had initially imposed to mitigate mental health risks.
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” Altman wrote. “Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
