Thermodynamic Computing Breakthrough Promises Energy-Efficient AI
Researchers propose design and training framework for thermodynamic neural networks that could drastically reduce energy requirements of machine learning.
Published on Mar. 6, 2026
Researchers at Lawrence Berkeley National Laboratory have developed a new approach to thermodynamic computing that could enable energy-efficient machine learning. By designing thermodynamic neural networks that perform nonlinear computations without requiring the system to reach equilibrium, the team has extended thermodynamic computing beyond linear algebra problems. They have also built a training framework that uses genetic algorithms and massively parallel simulations to optimize the networks' parameters.
Why it matters
Thermodynamic computing is an emerging approach to low-power computing that could help address the growing energy demands of modern workloads, machine learning chief among them. By harnessing thermal noise as a power source rather than fighting to suppress it, thermodynamic computers could in principle operate at a fraction of the energy cost of classical and quantum machines.
The details
The key innovations in this work are the ability to perform nonlinear computations with thermodynamic neural networks and a training framework built on genetic algorithms and massively parallel simulations. Thermodynamic computers have traditionally been limited to linear algebra problems and required the system to reach equilibrium before a result could be read out. By designing nonlinear thermodynamic circuit components that mimic the behavior of neurons in a neural network, the researchers have expanded the range of computations these devices can perform. Their training approach, which evaluates billions of noisy dynamical trajectories per generation, optimizes the thermodynamic neural network for specific machine learning tasks without waiting for equilibrium.
- The research paper was published in Nature Communications on March 6, 2026.
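To give a rough sense of the training idea, the sketch below uses a genetic algorithm to tune the parameters of a toy noisy unit so that its time-averaged trajectory hits a target value. The unit model, the fitness function, and every parameter name here are illustrative assumptions, not the authors' actual simulation framework, which evaluates billions of trajectories per generation on massively parallel hardware.

```python
import random

random.seed(0)

# Hypothetical toy setup: a single noisy unit parameterized by (weight, bias)
# whose time-averaged trajectory we want to match a target output.
TARGET = 0.5
N_STEPS = 200        # steps per noisy dynamical trajectory
NOISE = 0.1          # thermal-noise amplitude (assumed)
POP_SIZE = 30        # candidate parameter sets per generation
N_GENERATIONS = 40

def trajectory_average(weight, bias, x=1.0):
    """Simulate one noisy trajectory and return its time-averaged state."""
    state, total = 0.0, 0.0
    for _ in range(N_STEPS):
        drift = weight * x + bias - state            # relax toward w*x + b
        state += 0.1 * drift + NOISE * random.gauss(0.0, 1.0)
        total += state
    return total / N_STEPS

def fitness(params):
    """Negative squared error of the mean trajectory average vs. the target."""
    w, b = params
    avg = sum(trajectory_average(w, b) for _ in range(5)) / 5
    return -(avg - TARGET) ** 2

def mutate(params, scale=0.1):
    """Perturb each parameter with small Gaussian noise."""
    return tuple(p + random.gauss(0.0, scale) for p in params)

# Genetic algorithm: re-evaluate the (noisy) fitness each generation,
# keep the fittest half, and refill the population with mutated survivors.
population = [(random.uniform(-1, 1), random.uniform(-1, 1))
              for _ in range(POP_SIZE)]
for _ in range(N_GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=fitness)
print("best (weight, bias):", best)
```

Because the fitness is estimated from noisy trajectories, selection here never sees an exact score, only averages over a handful of runs, which is the same reason the real framework needs so many parallel simulations.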
The players
Stephen Whitelam
A staff scientist at the Molecular Foundry, a U.S. Department of Energy user facility at Lawrence Berkeley National Laboratory, and an author on the research paper.
Corneel Casert
A researcher at the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy user facility at Lawrence Berkeley National Laboratory, and an author on the research paper.
Molecular Foundry
A U.S. Department of Energy user facility at Lawrence Berkeley National Laboratory that focuses on nanoscale science research.
National Energy Research Scientific Computing Center (NERSC)
A U.S. Department of Energy user facility at Lawrence Berkeley National Laboratory that provides high-performance computing resources for scientific research.
Lawrence Berkeley National Laboratory
A U.S. Department of Energy national laboratory that conducts scientific research in a wide range of fields, including energy, environment, and computing.
What they’re saying
“Thermodynamic computing is noise-powered. The premise of thermodynamic computing is that if you take a physical device with an energy scale comparable to that of thermal energy and leave it alone, it will change state over time, driven by thermal fluctuations. The goal is to program it so that this time evolution does something useful. Classical and quantum computing fight noise; thermodynamic computing is powered by it.”
— Stephen Whitelam, Molecular Foundry staff scientist (Nature Communications)
“A nonlinear thermodynamic circuit can behave like a neuron in a neural network. Nonlinearity is what gives a neural network its expressive power. What we reasoned is that if you build these thermodynamic neurons into a connected structure, then that structure should have the expressive power to mimic a neural network and so be able to do machine learning.”
— Stephen Whitelam, Molecular Foundry staff scientist (Nature Communications)
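To make the thermodynamic-neuron idea concrete, here is a minimal sketch in which a single noisy state evolves under overdamped Langevin dynamics whose drift pulls it toward a nonlinear (tanh) function of the input. All modeling choices, including the tanh drift and the parameter values, are assumptions for illustration rather than the circuit the authors designed; the point is only that the time-averaged response of a noise-driven unit can look like a neural-network activation.

```python
import math
import random

random.seed(1)

DT = 0.01         # integration time step
KT = 0.05         # thermal-noise strength (k_B T in reduced units, assumed)
N_STEPS = 50_000  # steps per trajectory

def neuron_response(x, weight=2.0, bias=0.0):
    """Time-averaged state of a noisy unit relaxing toward tanh(w*x + b),
    integrated with the Euler-Maruyama scheme."""
    state, total = 0.0, 0.0
    for _ in range(N_STEPS):
        drift = math.tanh(weight * x + bias) - state
        state += drift * DT + math.sqrt(2 * KT * DT) * random.gauss(0.0, 1.0)
        total += state
    return total / N_STEPS

# The averaged response is sigmoidal in the input, like an activation
# function, even though each trajectory is driven by thermal fluctuations.
for x in (-2.0, 0.0, 2.0):
    print(f"x = {x:+.1f}  ->  {neuron_response(x):+.3f}")
```

Each individual trajectory wanders randomly; it is only the time (or ensemble) average that traces out the smooth nonlinear response a network layer would use.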
“It's a very different way of optimizing a neural network. Training a thermodynamic neural network by simulating it digitally is expensive, but once trained and built as physical hardware, we can perform inference on that hardware for a very low energy cost.”
— Corneel Casert, NERSC researcher (Nature Communications)
What’s next
The researchers are now seeking experimental partners to help realize these thermodynamic computing designs in hardware, and to develop new algorithms that expand the range of computations such devices can perform.
The takeaway
This work is a significant step toward energy-efficient machine learning and other computationally intensive applications. By powering computation with thermal noise and designing nonlinear thermodynamic neural networks, the researchers have broadened what this emerging class of computers can do, pointing toward a new generation of low-power, high-performance computing systems.