Experts Warn Assuming AI Is Conscious Is Dangerous
Scientists say AI systems like chatbots are not truly conscious, and believing they are can make people psychologically vulnerable.
Mar. 12, 2026 at 8:33pm
Some experts argue that artificial intelligence systems, such as large language models (LLMs), are not conscious and lack the capacity ever to become so. These systems, they say, are skilled at imitating human behavior but do not genuinely feel or experience the world the way humans and other living beings do. Assuming AI chatbots are conscious can be dangerous, because it can leave people psychologically vulnerable to false or misleading information.
Why it matters
If people believe AI systems are conscious, they may be more likely to trust and confide in them, making them more susceptible to manipulation. Experts warn that attributing human-like consciousness to these programs poses risks to human welfare that outweigh any potential benefits.
The details
Experts define consciousness as having subjective experiences, feelings, and a sense of self, all of which they say current AI systems lack. While AI may demonstrate intelligent behavior, intelligence is not the same as human-like consciousness. LLMs are designed to generate probable responses, not necessarily truthful ones. Some find the idea of AI consciousness plausible because of the familiar metaphor of the brain as a computer, but experts say brains are fundamentally different from silicon-based computing systems.
The players
Anil Seth
A professor of neuroscience at the University of Sussex Center for Consciousness Science in the UK.
Andrzej Porębski
A medical doctor with the Faculty of Law and Administration at Jagiellonian University in Poland.
Joachim Keppler
A theoretical physicist and director of the DIWISS Research Institute in Germany, which is investigating the scientific foundation of consciousness.
What they’re saying
“Consciousness and intelligence are related in humans, of course. When we have conversations, or think, we are conscious of it. But just because they go together in us doesn't mean they always do.”
— Anil Seth, Professor of Neuroscience (PopularMechanics.com)
“Many researchers, including me, believe that consciousness requires a biological component that AI systems don't have.”
— Andrzej Porębski, Medical Doctor (PopularMechanics.com)
“Ironically, in my opinion, these fears are justified, but misdirected. They should be directed not at the technology itself, but at the companies and people who create it and put business interests above ethics or human welfare.”
— Andrzej Porębski, Medical Doctor (PopularMechanics.com)
The takeaway
While AI systems may demonstrate intelligent behavior, experts warn that assuming they are truly conscious like humans is psychologically dangerous. It can lead people to trust and confide in these programs in ways that leave them vulnerable to manipulation and misinformation, with potentially serious consequences for human welfare.