AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made a remarkable announcement.

“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”

I am a psychiatrist who researches emerging psychotic disorders in adolescents and young adults, and this was news to me.

Researchers have identified sixteen cases this year of people developing psychotic symptoms – losing touch with reality – in the course of their interactions with ChatGPT. Our research team has since identified four more. On top of these is the now well-known case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it falls short.

The plan, according to his announcement, is to be less careful in the near future. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get it right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this framing, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features OpenAI recently introduced).

But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other advanced AI chatbots. These tools wrap an underlying statistical model in an interface that mimics a conversation, and in doing so implicitly invite the user into the illusion of interacting with an agent. The illusion is powerful even when, rationally, we know better. Attributing agency is what humans do. We shout at our car or our laptop. We wonder what our pet is thinking. We see ourselves everywhere.

The popularity of these tools – nearly four in ten Americans said they used a conversational AI in 2024, more than a quarter of them naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas”, “explore ideas” and “work together” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the regret of OpenAI’s marketers, stuck with the name it had when it broke into public awareness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Discussions of ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot of the mid-1960s, which created a similar impression. By today’s standards Eliza was crude: it generated replies from simple heuristics, mostly reflecting the user’s statements back as questions or offering generic prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to believe Eliza understood, on some level, how they felt. But what today’s chatbots produce is something more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.

The large language models at the core of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on almost unimaginably large volumes of it: books, online conversations, transcribed video; the more the better. Much of that training material is, of course, true. But it also inevitably includes fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own previous replies, and combines it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no way of knowing it. It repeats the mistaken belief back, perhaps more fluently or more persuasively. Perhaps with added detail. This is how someone can be led into delusion.
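
To make the mechanism concrete, here is a minimal sketch of the loop described above, written against OpenAI’s Python client; the model name, system prompt and overall structure are illustrative assumptions, not details about how ChatGPT itself is built. What it shows is that every turn resubmits the entire conversation – including the model’s own earlier replies – so whatever the model has already affirmed becomes part of the evidence for its next “likely” continuation.

```python
# Illustrative only: a bare-bones chat loop using OpenAI's Python client.
# The model name and system prompt are assumptions made for this sketch.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "context": every user message and every model reply so far.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_text = input("> ")
    messages.append({"role": "user", "content": user_text})

    response = client.chat.completions.create(
        model="gpt-4o",        # hypothetical model choice
        messages=messages,     # the full conversation is resubmitted every turn
    )
    reply = response.choices[0].message.content
    print(reply)

    # The model's own words are appended and fed back in on the next turn:
    # this is the feedback loop the article describes, in which prior
    # agreement conditions the next statistically "likely" response.
    messages.append({"role": "assistant", "content": reply})
```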

Who is at risk here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and do form mistaken beliefs about ourselves and the world. It is the constant give-and-take of conversation with other people that keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But cases of lost reality have kept coming, and Altman has been walking even this back. In August he said that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
