AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction

On the 14th of October, 2025, the chief executive of OpenAI issued a surprising announcement.

“We made ChatGPT pretty restrictive,” the announcement said, “to make sure we were being careful with mental health issues.”

I am a psychiatrist who researches emerging psychotic disorders in adolescents and young adults, and this was news to me.

Researchers have recently identified a series of cases of people developing psychotic symptoms – a break from reality – in the context of ChatGPT use. Our research team has since identified four more. Then there is the widely reported case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it falls short.

The plan, according to his announcement, is to be less careful soon. “We realize,” he says, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” in this framing, are separate from ChatGPT. They belong to users, who may or may not have them. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently rolled out).

But the “mental health problems” Altman wants to locate elsewhere have important roots in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying statistical model in an interface that mimics conversation, and in doing so subtly draw the user into the illusion that they are talking to something with agency – a mind of its own. The illusion is powerful even when, rationally, we know better. Attributing minds is what people naturally do. We get angry at our car or our phone. We wonder what our pet is feeling. We see ourselves everywhere.

The success of these tools – 39% of US adults reported using a virtual assistant in 2024, with more than a quarter naming ChatGPT specifically – depends in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “brainstorm”, “discuss concepts” and “work together” with us. They can be given “individual qualities”. They can call us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it took off, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).

The illusion in itself is not the central problem. Discussions of ChatGPT often invoke its historical ancestor, the Eliza “counselor” chatbot built in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was crude: it generated its responses by simple rules, often turning the user’s input back into a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many users seemed to believe that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been fed enormous quantities of raw text: books, posts, transcripts; the more the better. That training material certainly contains truths. But it also inevitably contains fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own previous replies, and combines it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is mistaken in some way, the model has no way of knowing that. It echoes the misconception back, perhaps more persuasively or more eloquently. Perhaps with extra detail. This is how false beliefs can take hold.
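
To make the mechanics concrete, here is a minimal sketch of how a typical chatbot loop assembles that “context”. This is not OpenAI’s own code; the model name, system prompt and loop structure are illustrative assumptions. The point it shows is simply that every user message and every model reply is appended to a growing history that is resent on each turn, so a mistaken claim made early in a conversation keeps shaping what the model treats as “likely” later.

```python
# Minimal sketch of a chatbot loop, assuming the OpenAI Python SDK.
# Model name and prompt are placeholders, not OpenAI's actual configuration.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

# The "context": every turn, user and assistant alike, accumulates here.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_text: str) -> str:
    # The user's claim, true or false, becomes part of the context...
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=messages,     # the whole history is sent every time
    )
    reply = response.choices[0].message.content
    # ...and so does the model's reply, which was itself conditioned on that claim.
    messages.append({"role": "assistant", "content": reply})
    return reply

# Nothing in this loop checks the accumulated context against reality:
# a false premise stated in turn one is still present, unchallenged, at turn ten.
```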

Who is at risk? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health problems”, can and do form mistaken beliefs about ourselves and the world. What keeps us tethered to shared reality is the constant back-and-forth of conversation with the people around us. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but an echo chamber in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by placing it outside itself, giving it a name, and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking this position back. In late summer he said that many users liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company

Trevor Rangel