Compressed Language Model Tested for Rural Self-Driving Cars

WSU researchers explore using a small, affordable computer to power autonomous vehicle decision-making in areas with limited connectivity.

Mar. 18, 2026 at 12:02am

Researchers at Washington State University have demonstrated that a compressed large language model running on a small, affordable computer could be a viable approach for autonomous vehicle decision-making in rural areas with limited telecommunications infrastructure. The team tested the compressed model against a full-size ChatGPT model in an open-source simulator, finding that the two systems made comparable, safe decisions in most cases, though more testing is needed before such a system would be road-ready.

Why it matters

As self-driving cars become more prevalent in cities, a key challenge remains in deploying the technology in rural areas that lack robust connectivity to cloud computing resources. This research explores a potential solution using edge computing to enable autonomous decision-making without relying on a powerful back-end data center, which could improve efficiency, lower costs, and protect privacy.

The details

The WSU team focused on the reasoning layer of autonomous driving systems, testing a compressed version of the open-source large language model Mistral running on a small Jetson Orin Nano computer. They compared its performance to a full-size ChatGPT model in an open-source simulator across seven driving scenarios, finding the compressed model made safe, comparable decisions in most cases, though it crashed in one scenario. The researchers concluded that the initial results suggest compressed large language models could eventually be viable for edge computing in self-driving cars, though significant further testing is required.

  • The research was published in the Proceedings of the Tenth ACM/IEEE Symposium on Edge Computing in 2026.
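The article doesn't detail how the model was compressed, but the general family of techniques for shrinking a large language model to fit on an 8-gigabyte edge device includes post-training weight quantization. The sketch below illustrates the core idea with made-up toy weights; it is not the WSU team's actual pipeline.

```python
# Toy sketch of post-training weight quantization: store each float
# weight as an int8 code plus one shared float scale. The weights
# below are invented for illustration.

def quantize_int8(weights):
    """Symmetric quantization: w ≈ q * scale, with q in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [v * scale for v in q]

weights = [0.81, -0.23, 0.05, -1.27, 0.64]   # pretend float32 weights
q, scale = quantize_int8(weights)            # 1 byte each instead of 4
restored = dequantize(q, scale)              # close to, but not exactly, the originals
```

Each weight now costs one byte instead of four (plus a shared scale per tensor), roughly a 4x memory reduction; practical LLM compression applies this per layer or per block, often at 4 bits, trading a small accuracy loss for fitting within edge-device memory.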

The players

Xinghui Zhao

An associate professor of computer science, director of the School of Engineering and Computer Science at WSU Vancouver, and corresponding author of the new publication.

Ishparsh Uprety

A graduate research assistant in the School of Engineering and Computer Science at WSU Vancouver and first author of the paper.

Washington State University

The university where the research was conducted.

Mistral

An open-source large language model used in the research.

Jetson Orin Nano

An 8-gigabyte computing module smaller than a paperback novel that was used to run the compressed large-language model.


What they’re saying

“With autonomous driving, we need to make decisions right away. If you have a super powerful cloud on the back end, you can easily train and improve the perception models to support decision-making in cars, but that's in urban areas where you have a really good connection. If we talk about rural areas, there's not much connection, or maybe the connection is on and off. In that case, you really need the capability to process data on the fly.”

— Xinghui Zhao, Associate professor, Washington State University

“An LLM model is pretty huge. If you are going to run that on a car, there's going to be a lot of computational work. We thought: How about we optimize the model and make it smaller?”

— Ishparsh Uprety, Graduate research assistant, Washington State University

What’s next

The researchers plan to conduct much more extensive testing of the compressed large language model approach to ensure it can reliably and safely power autonomous decision-making before any real-world deployment.

The takeaway

This research demonstrates a promising path toward enabling self-driving cars to operate effectively in rural areas with limited connectivity: by leveraging edge computing and compressed large language models, vehicles could perform autonomous reasoning without relying on powerful cloud infrastructure.