Brain Uses Eye Movements To See In 3D

New research challenges long-held beliefs about how the brain processes visual information.

Published on Feb. 5, 2026

Researchers at the University of Rochester have discovered that the visual motion caused by eye movements helps the brain perceive depth and understand the 3D structure of the world, contrary to the long-standing belief that the brain must discount, or subtract off, this motion as visual "noise." The team developed a new theoretical framework to predict how humans should perceive an object's motion and depth during different types of eye movements. Across experiments measuring motion direction and depth perception, they found consistent, predictable patterns of errors that matched those predictions.

Why it matters

This research has important implications for understanding visual perception and could provide insights for improving visual technologies like virtual reality headsets, which currently do not account for how the brain uses eye movement patterns to interpret a scene.

The details

The researchers found that the specific patterns of visual motion created by eye movements help the brain determine how objects move and where they are located in 3D space. Contrary to conventional ideas, the brain doesn't ignore or suppress the image motion produced by eye movements; instead, it uses this information to accurately estimate an object's motion and depth.

  • The research was published in Nature Communications in 2026.

The players

Greg DeAngelis

George Eastman Professor; professor in the Departments of Brain and Cognitive Sciences, Neuroscience, and Biomedical Engineering and the Center for Visual Science; member of the Del Monte Institute for Neuroscience; and lead author of the new research.

Zhe-Xin Xu

A former graduate student in the DeAngelis lab who is now a postdoctoral fellow at Harvard University and first author on the study.

Jiayi Pang

A former undergraduate who is now a graduate student at Brown University and contributed to the research.

Akiyuki Anzai

A research associate at the University of Rochester who contributed to the study.


What they’re saying

“The conventional idea has been that the brain needs to somehow discount, or subtract off, the image motion that is produced by eye movements, as this motion has been thought to be a nuisance. But we found that the visual motion produced by our eye movements is not just a nuisance variable to be subtracted off; rather, our brains analyze these global patterns of image motion and use this to infer how our eyes have moved relative to the world.”

— Greg DeAngelis, George Eastman Professor; professor in the Departments of Brain and Cognitive Sciences, Neuroscience, and Biomedical Engineering and the Center for Visual Science; member of the Del Monte Institute for Neuroscience (Nature Communications)

“We show that the brain considers many pieces of information to understand the 3D structure of the world through vision, including the patterns of image motion caused by eye movements. Contrary to conventional ideas, the brain doesn't ignore or suppress image motion produced by eye movement. Instead, it uses this image motion to understand a scene and accurately estimate an object's motion and depth.”

— Greg DeAngelis (Nature Communications)

“VR headsets don't factor in how the eyes are moving relative to the scene when they compute the images to show to each eye. There may be a stark mismatch between the image motion that is shown to the observer in VR and what the brain is expecting to receive based on the eye movements that the observer is making.”

— Greg DeAngelis (Nature Communications)

What’s next

The research team plans to further investigate how the brain uses eye movement patterns to interpret 3D scenes, with the goal of informing the design of more immersive and comfortable virtual reality experiences.

The takeaway

This study challenges long-held beliefs about how the brain processes visual information, showing that the brain actively uses the visual motion caused by eye movements to perceive depth and understand the 3D world, rather than treating it as meaningless interference. The findings could lead to significant advancements in visual perception research and virtual reality technology.