New Framework Boosts Trust in Robot, Vehicle Networks
Harvard researchers propose a system to help autonomous agents verify the trustworthiness of data from other connected devices.
Apr. 3, 2026 at 6:30am by Ben Kaplan
A conceptual illustration of the hardware and sensors powering the 'cy-trust' framework, which aims to help autonomous systems verify the trustworthiness of data from connected devices.

Harvard computer scientists have developed a new framework called 'cy-trust' that aims to help networks of connected machines, such as self-driving cars and smart power grids, determine how much they can trust information from other agents before acting on it. The researchers argue that establishing this trust framework is crucial for developing secure and reliable cyber-physical systems as they become more prevalent in the real world.
Why it matters
As autonomous and connected systems become more common in applications like self-driving vehicles, smart infrastructure, and robotic fleets, ensuring the trustworthiness of data shared between them is critical to preventing disruptions, accidents, and other real-world harms. Traditional network-security methods alone are not sufficient, so the researchers propose a new quantitative measure of 'cy-trust' to help these systems validate the origin and integrity of the data they receive.
The details
The paper introduces 'cy-trust' as a way for autonomous agents such as vehicles or robots to assign a numerical trust value between 0 and 1 to data received from other agents or from the cloud, letting each agent decide how heavily to rely on that information when making decisions. The researchers envision using onboard sensors to cross-validate data, and applying signal processing to wireless transmissions to verify their origin. In lab experiments, the researchers have shown how this approach can help a group of cooperative robots ignore disruptive inputs from malicious 'attackers' trying to undermine the network.
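To make the idea concrete, the trust-weighted decision step described above can be sketched in a few lines of code. This is an illustrative sketch only, not the authors' published algorithm: the function name, the trust threshold, and the weighted-average fusion rule are all assumptions made for the example.

```python
def trust_weighted_estimate(readings, trust, threshold=0.5):
    """Fuse readings from neighbor agents, weighting each by its
    trust value in [0, 1].

    Agents whose trust falls below `threshold` are excluded entirely,
    mimicking how a cooperative robot might ignore inputs from
    likely attackers.
    """
    kept = [(r, t) for r, t in zip(readings, trust) if t >= threshold]
    if not kept:
        raise ValueError("no sufficiently trusted neighbors")
    total_trust = sum(t for _, t in kept)
    return sum(r * t for r, t in kept) / total_trust


# Two honest agents report ~10.0; a low-trust "attacker" reports 100.0.
# The attacker's trust (0.1) is below the threshold, so its reading
# is dropped and the fused estimate stays near the honest values.
estimate = trust_weighted_estimate([10.0, 10.2, 100.0], [0.9, 0.8, 0.1])
```

The hard threshold is just one possible design: a smoother alternative would let low-trust readings contribute with proportionally tiny weight rather than be dropped outright.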
- The paper was published on April 3, 2026 in the Proceedings of the IEEE.
The players
Stephanie Gil
The John L. Loeb Associate Professor of Engineering and Applied Sciences in the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and associate faculty member in the Kempner Institute, who led the research.
Andrea Goldsmith
The president of Stony Brook University and a co-author of the paper, who said the work 'could not come at a more important time' as autonomous systems become more prevalent.
What they’re saying
“Cyber-physical systems are going to become very pervasive. The question is, how do we secure these systems? How do we make sure they are going to be resilient as they go into the real world? This is something we had to learn from making internet systems secure.”
— Stephanie Gil, John L. Loeb Associate Professor of Engineering and Applied Sciences
“As we move into a world where so many of our physical systems consist of multiple agents controlled by AI in the cloud, we require a rigorous framework for their design that is secure and robust against malicious agents. Our paper provides a comprehensive roadmap of state-of-the-art techniques and new research frontiers to design secure robust collaborative multiagent systems.”
— Andrea Goldsmith, President of Stony Brook University
What’s next
The researchers plan to continue testing and refining the 'cy-trust' framework in their lab, exploring how it can be applied to real-world autonomous systems like self-driving vehicles and smart infrastructure.
The takeaway
As connected and autonomous systems become more prevalent, ensuring the trustworthiness of data shared between them is crucial to prevent disruptions, accidents, and other real-world harms. The 'cy-trust' framework proposed by Harvard researchers offers a new quantitative approach to help these systems validate the origin and integrity of information they receive, paving the way for more secure and resilient cyber-physical systems.