Hypnosis Offers Insights Into Artificial Intelligence Limitations

New research finds similarities between hypnotized minds and large language models like ChatGPT, highlighting vulnerabilities in current AI systems.

Published on Mar. 8, 2026

A new review published in Cyberpsychology, Behavior, and Social Networking suggests that when under hypnosis, the human brain behaves in ways that closely resemble the functioning of a large language model (LLM) such as ChatGPT. The finding challenges long-held assumptions about consciousness and offers important insights for building safer and more reliable artificial intelligence.

Why it matters

The parallels between hypnosis and LLMs underscore a central point: fluent performance does not equal understanding. Both hypnotic cognition and LLM output rely on complex pattern matching while lacking the deeper layers of interpretation and self-awareness that characterize human reflective thought. This has implications for the development of safer and more trustworthy AI systems.

The details

The paper argues that hypnotized minds and LLMs share three core features: a dominance of automaticity, suppressed executive monitoring, and extreme contextual dependency. Both display automatic pattern-completion processes and operate without robust executive oversight, meaning they can generate sophisticated responses without genuinely understanding them. The most striking parallel is what the researchers call the “meaning gap”: hypnotized subjects can deliver seemingly insightful statements that prove incoherent once the trance ends, while LLMs likewise lack grounded comprehension, with meaning arising only through the user’s interpretation.

The players

Giuseppe Riva

Director of the Humane Technology Lab at the Catholic University of Milan, Italy, where he is Full Professor of General and Cognitive Psychology.

Brenda K. Wiederhold

Professor at the Virtual Reality Medical Center in San Diego.

Fabrizia Mantovani

Professor at the University of Milano-Bicocca.

Yann LeCun

Chief AI Scientist at Meta.

Anthropic

An AI research company that recently published a study on the internal activations of large language models.


What they’re saying

“Achieving artificial general intelligence (AGI) will require not only scaling existing systems but rethinking their architecture altogether.”

— Yann LeCun, Chief AI Scientist at Meta

What’s next

Researchers suggest that insights from hypnosis may support the design of future AI architectures, such as the introduction of “cognitive immune systems”: internal supervisory functions able to detect inconsistencies or harmful trajectories. Understanding how humans produce false memories under hypnosis may also offer a potential framework for detecting similar behaviors in AI.
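For readers who want a concrete picture, the supervisory idea can be sketched in a few lines of Python. Everything below is an illustrative assumption, not anything specified in the paper: the `SupervisedModel` wrapper, the `consistency_check` heuristic, and the toy generator are all hypothetical stand-ins for the kind of internal oversight the researchers describe.

```python
import itertools


def consistency_check(answers):
    """Flag a response set as self-consistent or not.

    A real supervisory layer would use learned signals; here we use the
    crude proxy of asking whether repeated answers to the same prompt agree.
    """
    normalized = {a.strip().lower() for a in answers}
    return len(normalized) == 1  # True -> the model did not contradict itself


class SupervisedModel:
    """Wraps an 'automatic' generator with a supervisory layer that
    withholds outputs failing the consistency check (hypothetical sketch)."""

    def __init__(self, generate, n_samples=3):
        self.generate = generate      # the pattern-completion layer
        self.n_samples = n_samples    # how many samples the supervisor compares

    def answer(self, prompt):
        samples = [self.generate(prompt) for _ in range(self.n_samples)]
        if consistency_check(samples):
            return samples[0]
        return None  # supervisor vetoes an unstable answer


# Usage: a toy generator that is stable on one prompt and contradictory
# on another, to show the supervisor passing one and vetoing the other.
flip = itertools.cycle(["yes", "no"])
model = SupervisedModel(lambda p: "Paris" if "capital" in p else next(flip))

print(model.answer("capital of France?"))  # consistent -> "Paris"
print(model.answer("is it raining?"))      # contradictory -> None
```

The design choice mirrors the article's framing: the base generator supplies the automatic, pattern-completion layer, while the wrapper plays the role of the missing executive monitor that checks the output before it reaches the user.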

The takeaway

The convergence between hypnosis and large language models indicates that current AI represents only one layer of intelligence: the automatic, pattern-completion layer, operating without the executive oversight that makes human cognition stable and reliable. Improving an LLM's linguistic fluency will not bridge the gap to genuine awareness, and achieving artificial general intelligence will require rethinking the architecture of AI systems altogether.