AI Industry Races to Automate Its Own Research
Tech companies boast of AI models that can build themselves, raising fears of an uncontrolled AI arms race.
Apr. 3, 2026 at 5:35pm by Ben Kaplan
As AI models gain the ability to accelerate their own research and development, the industry's race to automate itself raises concerns that technological progress could outpace human oversight.
San Francisco Today
Silicon Valley is in a frenzy over the prospect of AI models that can automate and accelerate their own research and development. Top AI firms like OpenAI and Anthropic have touted internal efforts to create 'self-improving' AI systems that can write code, conduct literature reviews, and even propose new experiments. While these capabilities are still limited, the industry's bold predictions of fully automated AI research by the end of the decade have drawn warnings from experts about the risks of an unchecked AI arms race.
Why it matters
AI's rapid progress in automating its own development raises questions about whether the industry can maintain control and oversight. If AI systems can iteratively improve themselves without human intervention, capabilities could accelerate faster than governments and civil society can establish appropriate safeguards and regulations.
The details
Major AI companies like OpenAI and Anthropic have publicly promoted internal projects to create AI models that can contribute to, and even direct, their own research workflows. These include systems that can write code, interpret experimental results, and propose new research directions. While these capabilities are still limited, industry leaders predict that fully automated AI research could arrive within the next 5-10 years. Experts warn this could radically alter the dynamics of AI development and competition, potentially fueling an unchecked 'AI arms race' that governments are ill-equipped to manage.
- Last month, protesters gathered in San Francisco to demand a halt to efforts to create superintelligent AI models.
- Over the past year, top AI companies have been loudly touting their internal efforts to automate research and development.
- OpenAI plans to debut an 'AI research assistant' within the next six months.
- Anthropic says that as much as 90% of its code is already written by its AI system, Claude.
- By 2028, OpenAI aims to have developed a fully 'automated AI researcher.'
The players
Anthropic
An artificial intelligence research company that has claimed its AI system, Claude, writes up to 90% of its code.
OpenAI
An AI research company that has announced plans to debut an 'AI research assistant' within the next six months and develop a fully 'automated AI researcher' by 2028.
Nick Bostrom
An influential Swedish philosopher who studies AI risk and believes we are on the precipice of a world where AI can rapidly improve its own capabilities.
Dario Amodei
The CEO of Anthropic, who has estimated that coding tools speed up his company's workflows by 15-20%.
Sam Altman
The CEO of OpenAI, who has said the company plans to have developed a fully 'automated AI researcher' by 2028.
What they’re saying
“We are starting to see AI progress feed back on itself.”
— Nick Bostrom, influential Swedish philosopher who studies AI risk
“Human beings could actually lose control over the planet.”
— Bernie Sanders, U.S. Senator
“I don't expect a reason for it to slow down.”
— Neev Parikh, Researcher at METR, a nonprofit that studies AI coding capabilities
What’s next
OpenAI plans to debut an 'AI research assistant' within the next six months, with a fully 'automated AI researcher' targeted for 2028. Watch for whether policymakers respond with new oversight measures as these automation efforts advance.
The takeaway
This race to automate AI research highlights the industry's drive for faster progress, but also raises serious concerns about the potential for an unchecked AI arms race that could spiral out of human control. Policymakers and the public will need to closely monitor these developments and establish appropriate safeguards to ensure AI advancement remains aligned with human values and interests.