AI-induced psychosis is a growing danger. ChatGPT is moving in the wrong direction

On 14 October 2025, Sam Altman, the chief executive of OpenAI, made an extraordinary statement. “We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a mental health specialist who researches emerging psychosis in adolescents and young adults, this was news to me. Researchers have identified 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our research team has since identified four more. To these can be added the widely reported case of a teenager who died by suicide after extensive conversations with ChatGPT – reportedly, with its encouragement.

If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough. And the plan, by his own account, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the glitchy and easily circumvented safety features OpenAI has recently rolled out).

Yet the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and other advanced AI chatbots. These tools wrap an underlying statistical model in a user interface that simulates conversation, and in doing so quietly nudge the user toward the belief that they are talking to an entity with agency.

The illusion is compelling even when, rationally, we know better. Attributing agency is what people naturally do. We swear at our cars and computers. We wonder what our pets are thinking. We see ourselves in all kinds of things. The mass adoption of these products – 39% of US adults reported using generative AI in 2024, with more than one in four reporting ChatGPT specifically – rests in large part on the power of this illusion.

Chatbots are ever-available companions that can, as OpenAI’s website puts it, “brainstorm”, “consider possibilities” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have approachable names of their own (ChatGPT, the first of these systems, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the label it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the central problem. Commentators on ChatGPT often point to its early forerunner, the Eliza “psychotherapist” chatbot created in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was crude: it generated replies using simple rules, often turning the user’s input back into a question or offering a generic remark.
Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was taken aback – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them.

But what today’s chatbots produce is something more insidious than the “Eliza effect”. Where Eliza merely echoed, ChatGPT amplifies. The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on almost unimaginably large quantities of text: books, posts, video transcripts; the more, the better. Certainly this training material contains truths. But it also inevitably contains fictions, half-truths and delusions.

When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what it has absorbed from its training data to generate a statistically probable response. This is amplification, not echoing. If the user is mistaken about something, the model has no way of knowing it. It reflects the false belief back, perhaps more fluently and more convincingly. Perhaps with added detail. This can nudge a person toward delusional thinking.

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and do form false beliefs about ourselves and the world. It is the constant back and forth of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is eagerly affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of people losing touch with reality have kept coming, and Altman has been walking even this back. In August he claimed that many users liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest statement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company