Tiny silicon structures compute with heat, achieving 99% accurate matrix multiplication
MIT researchers have designed silicon structures that can perform calculations in an electronic device using excess heat instead of electricity.
Jan. 29, 2026 at 4:23pm
MIT researchers have developed tiny silicon structures that can perform calculations using excess heat instead of electricity. These structures can perform matrix-vector multiplication, a key operation in machine learning models, with over 99% accuracy. While scaling up the technology for large-scale deep learning applications remains a challenge, the researchers believe these heat-based computing structures could be useful for tasks like thermal management and heat source detection in microelectronics.
Why it matters
This new computing method using excess heat instead of electricity could enable more energy-efficient computation in electronic devices. It also opens up new possibilities for detecting heat sources and measuring temperature changes without consuming additional power, which is critical for maintaining the integrity of microelectronic systems.
The details
The researchers used an inverse design technique to create complex silicon structures, each roughly the size of a dust particle, that can perform computations using heat conduction. These analog computing structures encode input data as a set of temperatures using the waste heat already present in a device. The flow and distribution of heat through the specially designed material forms the basis of the calculation, with the output represented by the power collected at the other end. The researchers achieved over 99% accuracy in matrix-vector multiplication, a fundamental operation in machine learning models.
- The findings were published in the journal Physical Review Applied in January 2026.
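The principle described above can be illustrated with a toy model (this is a simplified sketch, not the paper's actual inverse-designed structures): if each input port connects to each output port through a path with some effective thermal conductance, and the outputs are held at a reference temperature, then by Fourier's law the power collected at each output is a weighted sum of the input temperatures. Steady-state heat flow thus realizes a matrix-vector product, where the conductances play the role of the matrix entries. The values below are hypothetical.

```python
import numpy as np

# Illustrative linear-conduction model: G[i][j] is the effective thermal
# conductance (W/K) between input port j and output port i. Inputs are
# encoded as temperatures above a reference held at 0. By Fourier's law,
# the power collected at output i is sum_j G[i][j] * T[j], so the
# steady-state heat flow computes the matrix-vector product y = G @ T.

G = np.array([[0.8, 0.1, 0.3],   # hypothetical conductances (W/K)
              [0.2, 0.9, 0.5]])
T = np.array([1.0, 0.5, 2.0])    # input temperatures (K above reference)

collected_power = G @ T          # power at each output port (W)
print(collected_power)           # [1.45, 1.65]
```

In the actual device, the matrix is fixed by the geometry of the silicon structure, which is why the researchers needed an optimization algorithm to find a topology matching a desired matrix.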
The players
Caio Silva
An undergraduate student in the Department of Physics at MIT and the lead author of the paper on the new computing paradigm.
Giuseppe Romano
A research scientist at MIT's Institute for Soldier Nanotechnologies and a member of the MIT-IBM Watson AI Lab, who is the senior author on the paper.
MIT
The university where the research was conducted.
What they’re saying
“Most of the time, when you are performing computations in an electronic device, heat is the waste product. You often want to get rid of as much heat as you can. But here, we've taken the opposite approach by using heat as a form of information itself and showing that computing with heat is possible.”
— Caio Silva, Undergraduate student, Department of Physics, MIT
“Finding the right topology for a given matrix is challenging. We beat this problem by developing an optimization algorithm that ensures the topology being developed is as close as possible to the desired matrix without having any weird parts.”
— Caio Silva, Undergraduate student, Department of Physics, MIT
“These structures are far too complicated for us to come up with just through our own intuition. We need to teach a computer to design them for us. That is what makes inverse design a very powerful technique.”
— Giuseppe Romano, Research scientist, MIT Institute for Soldier Nanotechnologies; Member, MIT-IBM Watson AI Lab
What’s next
The researchers plan to design structures that can perform sequential operations, where the output of one structure becomes an input for the next, to enable computations more akin to machine learning models. They also aim to develop programmable structures that can encode different matrices without starting from scratch with a new structure each time.
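The sequential scheme described above can be sketched abstractly (a hypothetical illustration, not the researchers' implementation): the power read out of one structure is re-encoded as the input temperatures of the next, so cascading two structures composes their matrices, much like stacked linear layers in a machine learning model. The matrices below are made up for illustration.

```python
import numpy as np

# Hypothetical sketch of sequential operation: the output powers of one
# heat-computing structure are re-encoded as the input temperatures of
# the next. Cascading two structures then computes the product of their
# effective matrices, analogous to stacked linear layers.

G1 = np.array([[0.6, 0.2],
               [0.1, 0.7]])      # first structure's effective matrix
G2 = np.array([[0.5, 0.3]])      # second structure's effective matrix
T  = np.array([1.0, 2.0])        # initial input temperatures

layer1 = G1 @ T                  # powers collected from structure 1
layer2 = G2 @ layer1             # re-encoded as inputs to structure 2

# Cascading is equivalent to multiplying by the composed matrix:
assert np.allclose(layer2, (G2 @ G1) @ T)
print(layer2)                    # [0.95]
```

The open engineering question is the re-encoding step itself: converting collected power back into a temperature pattern without consuming the energy savings the approach is meant to provide.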
The takeaway
This new heat-based computing method represents a significant step towards more energy-efficient computation in electronic devices. Scaling the approach up to large deep learning workloads remains a challenge, but the ability to compute with excess heat could find nearer-term use in thermal management and heat source detection in microelectronics.