0G Labs Publishes Verification Framework for Decentralized AI Training
New approach adds cryptographic verification layer to decentralized training, complementing the economic models used by existing networks
Mar. 27, 2026 at 7:18pm by Ben Kaplan
0G Labs has published a technical framework for verifying decentralized AI training, addressing the growing trust gap as distributed models scale toward frontier performance. The framework combines Trusted Execution Environments (TEEs) with economic incentive alignment to provide cryptographic proof that every training step executed correctly.
Why it matters
As AI models grow more capable and more widely deployed, the integrity of their training process becomes a security question, not just an engineering one. Training runs have crossed $100 million in cost and are heading toward $1 billion, while AI agents already execute financial transactions and issue medical recommendations, making training integrity directly relevant to output safety. The EU AI Act now requires transparency about how high-risk AI systems are developed, turning verification from a technical question into a compliance requirement.
The details
0G Labs' four-layer verified infrastructure adds hardware-level cryptographic proof to decentralized AI training, moving beyond economic incentives alone. Every compute operation runs inside a Trusted Execution Environment (TEE), a hardware-isolated processor region where code executes in an encrypted state. The TEE generates cryptographic attestations proving that specific code ran on specific data and produced a specific result, and these attestations are verifiable by any third party without trusting the node operator. The approach complements the economic incentive models used by existing decentralized AI networks such as Bittensor, whose Covenant-72B model relies on staking penalties to discourage cheating.
- 0G Labs published the technical framework on March 27, 2026.
- The company recently demonstrated DiLoCoX-107B, the world's largest decentralized AI model at 107 billion parameters, which achieved 357x communication efficiency over standard methods on ordinary 1 Gbps internet connections.
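To make the attestation idea concrete, here is a minimal sketch of the pattern the framework describes: an enclave binds the code, the input data, and the result into one signed digest, which any third party can check against the same inputs. This is an illustrative toy, not 0G's implementation; the `HARDWARE_KEY` and an HMAC stand in for the hardware-rooted signing scheme a real TEE (e.g. via vendor-issued attestation keys) would provide.

```python
import hashlib
import hmac

# Placeholder for the key a real TEE derives from hardware; in practice the
# verifier checks a vendor-signed certificate chain, not a shared secret.
HARDWARE_KEY = b"vendor-provisioned-attestation-key"

def attest(code: bytes, data: bytes, result: bytes) -> bytes:
    """Inside the enclave: bind code, data, and result into one signed digest."""
    digest = hashlib.sha256(code + data + result).digest()
    return hmac.new(HARDWARE_KEY, digest, hashlib.sha256).digest()

def verify(code: bytes, data: bytes, result: bytes, attestation: bytes) -> bool:
    """Any third party: recompute the digest and check the signature,
    without having to trust the node operator who ran the computation."""
    digest = hashlib.sha256(code + data + result).digest()
    expected = hmac.new(HARDWARE_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, attestation)

# A training step's attestation passes only for the exact result it covers.
tag = attest(b"train_step_v1", b"batch_001", b"gradients")
assert verify(b"train_step_v1", b"batch_001", b"gradients", tag)
assert not verify(b"train_step_v1", b"batch_001", b"tampered", tag)
```

The key property mirrored here is that tampering with any one of the three inputs invalidates the attestation, so a verifier learns whether the claimed code actually produced the claimed result on the claimed data.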
The players
0G Labs
The creator of the Blockchain for AI Agents and one of the best-funded AI infrastructure projects in Web3 with $40 million in seed funding and a $250 million token commitment.
Ming Wu
The CTO of 0G Labs.
Jake Salerno
The VP of GTM at 0G Labs.
Bittensor
A decentralized AI network that uses staking penalties to discourage cheating in its Covenant-72B model.
EU AI Act
A regulation that requires transparency about how high-risk AI systems are developed.
What they’re saying
“Training a model is one problem. Trusting how it was trained is a different problem entirely. When training runs cost hundreds of millions of dollars and the resulting models handle financial transactions and medical decisions, 'the incentives should keep people honest' is not sufficient. The hardware needs to prove it.”
— Ming Wu, CTO of 0G Labs
“As AI models become more powerful and widely deployed, the integrity of their training process becomes a security question, not just an engineering one.”
— Jake Salerno, VP of GTM at 0G Labs
What’s next
0G's verification framework will be presented at EthCC Cannes on April 1 in a keynote titled "Why Verification Should Be a First-Class Citizen in AI."
The takeaway
0G Labs' new verification framework adds a critical layer of cryptographic proof to decentralized AI training, complementing existing economic incentive models and addressing growing concerns about the integrity of training processes as AI systems become more powerful and widely deployed.