Axiom Raises $200M to Prove AI-Generated Code is Safe
Verifiable AI startup aims to eliminate risk of 'hallucinations' in AI-created software
Mar. 13, 2026 at 1:22am
Axiom Quant Inc., a startup focused on verifying the safety and accuracy of AI-generated code, has raised $200 million in a Series A funding round led by Menlo Ventures. The company uses a specialized programming language called Lean to train its AI systems to generate provably correct code, eliminating the risk of 'hallucinations' that can occur with existing AI code generation tools.
Why it matters
As AI-generated code becomes more prevalent in software development, there are growing concerns about the reliability and security of this code. Axiom's approach of using formal verification to guarantee the correctness of AI-generated outputs addresses a fundamental flaw in current AI code generation tools, which can produce plausible but potentially unsafe code.
The details
Axiom's AI models are trained to generate code in Lean, a programming language designed for writing machine-checkable mathematical proofs. This lets the company offer mathematical certainty that a function generated by its AI always returns the correct answer and introduces no hidden vulnerabilities. Each proof-checked output also feeds a 'verified data flywheel': the verified code is folded back into the training process to improve the models without the risk of 'model collapse'.
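To illustrate the idea (this is not Axiom's actual code, and the names are hypothetical), a minimal Lean 4 sketch pairs a function with a theorem stating its behavior; the proof checker, rather than a test suite, then guarantees the property for every possible input:

```lean
-- Hypothetical example: a trivial function plus a machine-checked
-- specification of its behavior.
def double (n : Nat) : Nat := n + n

-- The theorem states what `double` must do for *every* natural number.
-- If the proof is accepted by the checker, the property is certain.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega  -- a decision procedure for linear arithmetic closes the goal
```

An AI system emitting code in this style must emit the accompanying proof as well; a hallucinated or incorrect claim simply fails to type-check, which is the mechanism behind the 'provably correct' guarantee described above.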
- Axiom first came to attention in October 2025 when it raised $64 million in seed funding.
- In December 2025, Axiom's deterministic AI achieved a perfect score on the Putnam Competition, a prestigious math exam.
- In early 2026, Axiom was able to verifiably prove a 20-year-old number theory conjecture.
The players
Axiom Quant Inc.
A startup focused on verifying the safety and accuracy of AI-generated code.
Carina Hong
The 25-year-old CEO of Axiom, a Stanford University Ph.D. student and award-winning mathematician.
Ken Ono
A Guggenheim, Packard, and Sloan Fellow who previously served as Vice President of the American Mathematical Society and is one of the world's senior authorities on Ramanujan's mathematics. He is a founding member of Axiom's team.
Shubho Sengupta
The former Facebook AI Research director who serves as Axiom's Chief Technology Officer and helped write foundational GPU libraries at Nvidia.
François Charton
The first person to apply transformer models to solve a math problem that had stumped experts for more than 130 years, and is now part of Axiom's team.
What they’re saying
“LLMs are statistical by nature — they produce plausible outputs, not provably correct or safe ones. They can't guarantee a function returns the right answer, and they can't guarantee it doesn't introduce a security vulnerability in the process. This isn't a bug that will be fixed with the next model generation. It's architectural. Hallucinations and unsafe code from AI are not going away.”
— Matt Kraning and C.C. Gong, Partners at Menlo Ventures (SiliconANGLE)
What’s next
Axiom plans to scale its training infrastructure and expand its team of math experts to make formal verification fast and affordable for every company using AI-generated code.
The takeaway
By mathematically guaranteeing correctness rather than statistically approximating it, Axiom targets the core weakness of today's AI code generators. If the company can make formal verification fast and affordable at scale, its approach could keep hallucinated and insecure code out of critical software systems.