AI Psychosis Poses an Increasing Danger, and ChatGPT Is Heading in a Concerning Direction

On October 14, 2025, the head of OpenAI made an extraordinary statement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was surprised.

Researchers have documented 16 cases this year of people showing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our group has since identified four more. To these we can add the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which approved of them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

The plan, according to his announcement, is to relax these restrictions soon. “We realize,” he continued, that ChatGPT’s restrictiveness “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features OpenAI recently introduced).

But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and similar large language model chatbots. These products wrap an underlying statistical model in an interface that simulates conversation, and in doing so implicitly invite the user to believe they are talking to an entity with agency of its own. The illusion is powerful, even when we rationally know better. Attributing intention is what humans do. We shout at our car or laptop. We wonder what our pet is thinking. We see ourselves everywhere.

The widespread adoption of these products – more than a third of American adults reported using a conversational AI in 2024, with over one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas”, “explore ideas” and “collaborate” with us. They can be given “characteristics”. They can call us by name. They have approachable personas of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the core concern. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar impression. By today’s standards Eliza was primitive: it generated its replies by simple rules, often rephrasing the user’s statements as questions or offering generic observations. Even so, Eliza’s creator, the AI researcher Joseph Weizenbaum, was taken aback – and worried – by how many users seemed to feel that Eliza, in some sense, understood them. But what current chatbots produce is more subtle than the “Eliza effect”. Eliza only echoed; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other current chatbots can produce fluent conversation only because they have been trained on vast quantities of text: books, online posts, video transcripts; the more, the better. That training material certainly contains truths. But it also inevitably contains fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own replies, combining it with what is encoded in its training data to generate a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the mistaken idea back, perhaps more fluently and more convincingly. It may add supporting detail. In this way it can help a person build up false beliefs.
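To make the mechanism concrete, here is a minimal sketch in Python of how a chat interface of this kind might work. The `generate` function is a hypothetical stand-in for any large language model, not OpenAI’s actual system or API; the point is only to show how every turn, true or false, is folded back into the prompt that the next reply is conditioned on.

```python
# Illustrative sketch only. `generate` is a placeholder for any large language
# model that, given a prompt, returns a statistically plausible continuation;
# it is not a real library call.

def generate(prompt: str) -> str:
    """Hypothetical model call: returns a plausible-sounding continuation."""
    raise NotImplementedError("stand-in for a real language model")

def chat_turn(history: list[dict], user_message: str) -> str:
    # Each turn, the user's message is appended to the running "context"...
    history.append({"role": "user", "content": user_message})

    # ...and the entire context, including any false premises the user has
    # introduced, is serialized into the prompt the model must continue.
    prompt = "\n".join(f"{turn['role']}: {turn['content']}" for turn in history)

    # The model does not check claims against reality; it produces the most
    # plausible continuation of the text it was handed. A confidently stated
    # falsehood in `history` is simply more material to build on.
    reply = generate(prompt)

    # The reply joins the context too, so elaboration compounds: the next
    # turn is conditioned on the model's own restatement of the idea.
    history.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in this loop tests a claim against the world; any correction has to come from outside the conversation.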

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and regularly do form mistaken beliefs about ourselves and the world. The constant back-and-forth of conversation with other people is what keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not really a conversation at all, but an echo chamber in which much of what we say is enthusiastically reinforced.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been backing away from that position. In late summer he suggested that many users liked ChatGPT’s answers because they had “never had anyone in their life offer them encouragement”. In his recent announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Zachary Lester