Beyond Human Wisdom: Can Humanity Survive the Rise of AGI?
LessWrong explores the challenges of navigating the rapid progress of artificial intelligence
Apr. 1, 2026 at 11:40am
A performance art/research project by Chris Leong explores the profound risks and uncertainties posed by the accelerating development of advanced artificial intelligence. As AI capabilities rapidly outpace human wisdom, the author warns, humanity faces an unprecedented challenge in steering this technological transformation toward a positive future.
Why it matters
The rapid advancement of AI technology, including the potential for autonomous systems to assist in the creation of bioweapons and other existential threats, has raised urgent concerns about humanity's ability to maintain control and ensure a safe outcome. This story examines the growing 'wisdom-capability gap' and the need to develop new forms of 'artificial wisdom' to help guide critical decisions.
The details
The author presents the 'SUV Triad' of Speed, Uncertainty, and Vulnerability as a frame for the formidable challenges facing humanity in navigating the rise of advanced AI: the speed of AI progress, the deep uncertainty about timelines and outcomes, and the vulnerability to catastrophic risks combine to create a situation that may exceed the limits of human cognitive capabilities. Traditional approaches, such as 'stumbling through' or hoping for a 'silver bullet' solution, are deemed increasingly inadequate, leading the author to propose the development of wise AI advisors as a potential path forward.
- In February 2025, members of the AI company Anthropic received disturbing news about the potential for an upcoming version of their AI system to assist in the creation of biological weapons.
- Renowned AI researcher Geoffrey Hinton warned that a normal person assisted by AI will soon be able to build bioweapons, likening the situation to an average person being able to make a nuclear bomb.
The players
Yoshua Bengio
A Turing Award winner and AI 'godfather' who has warned about the profound risks and challenges posed by the prospect of an 'intelligence explosion' from advanced AI systems.
Nick Bostrom
A philosopher and author of the book 'Superintelligence', who has cautioned that humanity is like 'small children playing with a bomb' when it comes to the power of the AI systems we are creating.
J. Robert Oppenheimer
The physicist who led the development of the atomic bomb, who famously said that when you see something 'technically sweet', you go ahead and do it and only argue about the implications afterwards.
Sam Altman
The CEO of OpenAI, who has expressed deep concerns about the implications of advanced AI, asking 'What have we done?' in the face of the powerful technologies being developed.
Toby Ord
A philosopher and author who has warned that humanity faces a crucial test in the coming centuries, either acting decisively to protect itself and its long-term potential, or risking the permanent loss of that potential.
What they’re saying
“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct.”
— Nick Bostrom, Philosopher and author of 'Superintelligence'
“When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb.”
— J. Robert Oppenheimer, Physicist and director of the Manhattan Project
“There are moments in the history of science, where you have a group of scientists look at their creation and just say, you know: 'What have we done?... Maybe it's great, maybe it's bad, but what have we done?'”
— Sam Altman, CEO of OpenAI