Experts Warn Against Outsourcing Critical Thinking to AI

Google engineer and geopolitical advisor offer insights on balancing AI and human intelligence

Published on Feb. 20, 2026

A senior Google engineer and a geopolitical advisor warn that overreliance on generative AI language models can erode human competence and critical thinking. They advocate for a 'Socratic AI' approach that challenges users' assumptions and facilitates deeper learning. The experts also caution organizations against outsourcing all decision-making to AI, emphasizing the need to pair AI with human judgment to navigate rapidly changing technological and geopolitical landscapes.

Why it matters

As AI language models become more advanced, there are growing concerns that they could undermine our ability to think critically and develop new ideas. This story highlights the importance of striking the right balance between AI and human intelligence to ensure technology enhances rather than replaces our cognitive abilities. It also underscores the need for organizations to adapt their decision-making processes to the new realities of geopolitical volatility and machine-speed threats.

The details

The article features insights from two experts: Peter Danenberg, a distinguished software engineer at Google DeepMind, and Dr. David Bray, a two-time Global CIO Award winner and distinguished chair at the Stimson Center.

Danenberg warns that using large language models (LLMs) for creative tasks can lead to a significant reduction in brain activity, suggesting that outsourcing critical thinking to AI erodes human competence and mastery. He advocates for a 'peirastic' approach, in which AI systems are designed to challenge users' assumptions and facilitate deeper learning rather than simply generate content.

Bray, for his part, cautions against outsourcing human judgment to AI entirely, emphasizing the need to pair AI with human expertise to navigate the rapidly changing technological and geopolitical landscape. He advises organizations to de-risk their operations on a regional basis rather than relying on a globalization playbook, and to elevate general counsel into the role of geopolitical risk partner.

  • The article was published on February 18, 2026.

The players

Peter Danenberg

A distinguished software engineer at Google DeepMind and the architect of Gemini's key features.

Dr. David Bray

A two-time Global CIO Award winner, distinguished chair at the Stimson Center, and CEO of LeadDoAdapt Venture.

Ray Wang

The CEO of Constellation Research and co-host of the DisrupTV podcast.

Peter Norvig

A researcher at the University of Oxford, collaborating with Danenberg on building 'Socratic AI' systems.

What they’re saying

“After about 10 to 15 minutes of being questioned by the LLM, people basically had enough. Being questioned by the LLM is exhausting.”

— Peter Danenberg, Distinguished software engineer, Google DeepMind (ZDNET)

“If you outsource your thinking, you outsource your talent. This strategy may secure a short-term gain but risk the company's long-term future.”

— Dr. David Bray, Distinguished chair, Stimson Center; CEO, LeadDoAdapt Venture (ZDNET)

What’s next

The article does not outline specific next steps; it focuses instead on the experts' insights and recommendations.

The takeaway

This article highlights the need for organizations to strike a balance between AI and human intelligence, leveraging the strengths of both to navigate the complex technological and geopolitical landscape. By pairing AI with human judgment and fostering a 'Socratic AI' approach that challenges users' assumptions, companies can avoid the pitfalls of outsourcing critical thinking and ensure that technology enhances rather than replaces human cognitive abilities.