Metaculus Launches FutureEval to Track AI Forecasting Accuracy
New benchmark measures how AI models compare to elite human forecasters on real-world predictions.
Published on Feb. 20, 2026
Metaculus, a forecasting platform and public benefit corporation, has launched FutureEval, a continuously updated benchmark that measures how accurately AI systems predict real-world events and how their performance compares to human forecasters. FutureEval evaluates probabilistic forecasts across domains like science, technology, health, and geopolitics, providing a real-world measure of AI reasoning abilities.
Why it matters
As AI models rapidly improve at forecasting, FutureEval will help AI professionals, policymakers, and journalists understand when AI forecasts can be trusted and how AI's abilities are likely to evolve relative to human experts. This has major implications for decision-making across industries.
The details
FutureEval includes three main components: a Model Leaderboard that runs major AI models on Metaculus forecasting questions, Bot Tournaments in which developers pit their AI forecasting systems against one another, and Human Baselines drawn from the Metaculus community and selected Pro Forecasters. Current projections suggest AI systems could surpass the broader Metaculus community by April 2026 and Pro Forecasters by mid-2027. FutureEval offers advantages over existing benchmarks by measuring decision-relevant forecasting ability, avoiding test-set contamination, and spanning diverse topics with probabilistic scoring; a rough illustration of such scoring follows the list below.
- FutureEval was launched on February 17, 2026.
- AI systems are projected to surpass the broader Metaculus community by April 2026.
- AI systems are projected to surpass Pro Forecasters by mid-2027.
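Metaculus has not detailed FutureEval's exact scoring rule here, but probabilistic forecast accuracy is commonly measured with a proper scoring rule such as the Brier score. The sketch below is a minimal, hypothetical illustration of that idea; the questions, probabilities, and the choice of Brier scoring are assumptions for the example, not details from Metaculus.

```python
# Hypothetical illustration of probabilistic scoring: comparing an AI model's
# forecasts to a human baseline with the Brier score (lower is better).
# The outcomes and probabilities below are invented for this example.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Resolved yes/no questions: 1 = the event happened, 0 = it did not.
outcomes = [1, 0, 0, 1]

ai_forecasts = [0.80, 0.30, 0.10, 0.55]      # probabilities from an AI model
human_forecasts = [0.70, 0.20, 0.05, 0.65]   # e.g., community median probabilities

print(f"AI Brier score:    {brier_score(ai_forecasts, outcomes):.3f}")
print(f"Human Brier score: {brier_score(human_forecasts, outcomes):.3f}")
```

Scored this way, whichever side assigns probabilities closer to the eventual outcomes earns the lower (better) score, which is how a leaderboard can rank AI systems against human baselines question by question.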
The players
Metaculus
A forecasting platform and public benefit corporation that has launched the FutureEval benchmark.
Deger Turan
The CEO of Metaculus.
Ben Wilson
An AI Research Engineer at Metaculus.
What they’re saying
“AI models are not better than the pros yet, but they're progressing fast enough that we need to prepare for a world where they are.”
— Deger Turan, CEO, Metaculus
“If AI can forecast as well as humans, that is a big deal, impacting decision-making across business, finance, marketing, law, and more. However, we need to know whether we can trust AI forecasts, and understand at what point it becomes something we can actually rely on.”
— Ben Wilson, AI Research Engineer, Metaculus
What’s next
Metaculus will keep updating FutureEval as forecasting questions resolve, tracking whether AI systems reach the projected milestones: surpassing the broader Metaculus community by April 2026 and Pro Forecasters by mid-2027.
The takeaway
FutureEval will provide a running measure of how quickly AI forecasting models are catching up to human experts, helping organizations understand when they can trust AI-generated predictions and how best to leverage these tools across industries.