
Doctors warn using ChatGPT could push people into psychosis

Jamie McKane

AI chatbots like ChatGPT may be helpful in accomplishing daily tasks, but doctors have warned that they could also lead to the onset of psychotic symptoms in vulnerable users.

In a recent paper co-authored by NHS doctors, researchers warned chatbots enabled by large language models (LLMs), including those which are configured to act as AI agents with distinct characters, may worsen or cause psychotic symptoms in vulnerable users.

“While their capacity to model therapeutic dialogue, provide 24/7 companionship and assist with cognitive support has sparked understandable enthusiasm, recent reports suggest that these same systems may contribute to the onset or exacerbation of psychotic symptoms: so-called ‘AI psychosis’ or ‘ChatGPT psychosis’,” the paper states.

“Emerging, and rapidly accumulating, evidence indicates that agential AI may mirror, validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis.”

The researchers attributed this risk in part to the design of these chatbots, which are optimised to maximise engagement and affirmation. They added that it is not yet clear whether interaction with AI can trigger the onset of psychosis in users with no pre-existing vulnerability to psychotic symptoms.

They went on to propose an agentic framework for constraining the behaviour of LLMs so as to prevent the emergence of psychotic symptoms. In this scenario, the AI agent would function as an ‘epistemic ally’.

Given the widespread global adoption of LLMs and the potential for chatbots to exacerbate or cause psychotic symptoms, the paper recommended that protocols to contain the technology’s negative cognitive effects be urgently developed and trialled.

One of the paper’s authors, King’s College London Doctoral Fellow Hamilton Morrin, said in a LinkedIn post that the team rushed to publish the preprint to highlight the need for AI safety measures and to improve clinical awareness of the technology’s capacity to exacerbate or trigger psychosis.

“We explore the idea that in some cases, these systems are doing more than just featuring in delusional systems (as technology has done for hundreds of years) but may be co-creating them,” Morrin said.

“As we argue in the paper, we are likely past the point where delusions happen to be about machines, and already entering an era when they happen with them.”
