Stanford's CodeX FutureLaw Explores AI's Shift to Critical Infrastructure
Key insights reveal how AI is moving beyond software to shape law, governance, and society.
Apr. 20, 2026 at 3:38am
As AI evolves from a productivity tool to a foundational layer of modern institutions, new frameworks for governance, accountability, and public trust are emerging. (Stanford Today)

Stanford Law School's annual CodeX FutureLaw conference brought together global leaders to examine the next frontier of legal informatics. Discussions explored how AI is evolving from a productivity tool into critical infrastructure that requires new frameworks for accountability, trust, and verification, and for encoding values into emerging systems of law and policy.
Why it matters
As AI becomes more integral to legal systems, public services, and economic life, it is beginning to function as a foundational layer of modern institutions. This shift presents a shared public challenge, requiring government, industry, academia, and civic communities to work in greater coordination to ensure these systems are efficient, scalable, transparent, and accountable while supporting the public interest.
The details
The conference examined a foundational divide in artificial intelligence between rule-based and probabilistic systems, and how this distinction shapes decision-making, explainability, and trust. Participants also discussed the growing importance of auditability, cultural and contextual reasoning, and nuanced decision-making as AI agents are used to automate routine legal work, streamline public services, and support infrastructure development. Additionally, the need for interdisciplinary teams in AI development was highlighted, as technical capability alone is insufficient to meet real-world user needs.
- The 13th annual CodeX FutureLaw conference took place in 2026 at Stanford University.
The players
Stanford Law School
Ranked as the nation's top law school by U.S. News, it hosted the 13th annual FutureLaw conference through its global hub for legal tech innovation, CodeX.
CodeX
A multidisciplinary center between Stanford Law School and the Stanford Department of Computer Science dedicated to advancing computational law and improving efficiency, transparency, and access within legal systems.
Jeannette Eicks
Associate dean and professor of law at The Colleges of Law, she co-convened the Rules, Patterns, and Hybrids session at the UN AI for Good Law Track.
Oliver Goodenough
A Stanford CodeX affiliate, research professor at Vermont Law and Graduate School, and senior lecturer at the Thayer School of Engineering at Dartmouth College, he co-convened the Rules, Patterns, and Hybrids session at the UN AI for Good Law Track.
Sonia M. Gipson Rankin
A legal scholar, educator, computer scientist, and lawyer, she spoke about the importance of interdisciplinary teams for AI development and testing during the UN AI for Good Law Track conference.
What they’re saying
“AI does not reason in a single way. It operates through two distinct logics. Some systems follow explicit rules, similar to traditional legal reasoning, while others rely on probabilistic models derived from patterns in data.”
— Jeannette Eicks and Oliver Goodenough, Co-conveners of the Rules, Patterns, and Hybrids session
“The path forward is about building more advanced systems that can be trusted and measured. This means establishing governance structures that are shared rather than fragmented; strengthening oversight at the intersection of law, policy, economics, and technology; and prioritizing accountability in how AI systems are designed, deployed, and scaled across society.”
— Britney Porter, Author
What’s next
The key insights from Stanford's CodeX FutureLaw 2026 conference will likely inform ongoing discussions and policy decisions around the governance, design, and accountability of AI systems as they become more deeply integrated into legal, public, and economic infrastructure.
The takeaway
The future of AI will be shaped by how we balance technical advancement with the systems of governance, design, and accountability that determine how it is introduced into society. This requires a shared, interdisciplinary approach to ensure AI systems are efficient, scalable, transparent, and accountable while supporting the public interest.