AI Model Helps Decode Canary Songs for Neuroscience Research

TweetyBERT, a self-supervised neural network, can rapidly process and annotate birdsongs to aid studies on the neural basis of complex learned behaviors.

Published on Mar. 4, 2026

Researchers at the University of Oregon have developed a new machine learning model called TweetyBERT that can automatically segment and classify canary vocalizations with expert-level accuracy. The tool, which adapts the BERT language-model architecture, offers a scalable platform for neuroscience research into how the brain learns and produces complex vocal behavior, a window into the neural basis of speech and language. Beyond canaries, the underlying approach behind TweetyBERT could be applied to study vocal patterns in other bird species and even marine mammals like dolphins and whales.

Why it matters

Canaries, like other songbirds, are commonly used by neuroscientists to study the neural basis of complex learned behaviors because of their remarkable ability to learn and produce intricate songs throughout their lives. TweetyBERT provides a faster and more scalable way to analyze these birdsongs, which could lead to new breakthroughs in understanding how the brain processes and generates speech and language.

The details

TweetyBERT is a self-supervised neural network that rapidly processes unlabeled vocal recordings, identifies communication units such as notes and syllables, and annotates song sequences without human-labeled training data. This is a significant improvement over current methods, which require slow, labor-intensive manual labeling by experts. The tool adapts the BERT language-model architecture to the distinctive acoustic structure of birdsong, allowing it to match expert annotators in classifying vocal units and tracking changes in vocal patterns over time.

  • The study by University of Oregon researchers was published in the scientific journal Patterns on March 4, 2026.
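The article does not detail the model's internals, but the general self-supervised idea it describes, masking portions of an unlabeled spectrogram and training a model to reconstruct them BERT-style, can be sketched roughly as follows. All names, shapes, and the stand-in predictor below are illustrative assumptions, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "spectrogram": 200 time frames x 64 frequency bins of unlabeled song audio.
spectrogram = rng.random((200, 64)).astype(np.float32)

def mask_frames(spec, mask_frac=0.15, rng=rng):
    """BERT-style corruption: zero out a random subset of time frames."""
    n_frames = spec.shape[0]
    masked_idx = rng.choice(n_frames, size=int(n_frames * mask_frac), replace=False)
    corrupted = spec.copy()
    corrupted[masked_idx] = 0.0
    return corrupted, masked_idx

def reconstruction_loss(prediction, target, masked_idx):
    """As in masked-token prediction, score only the frames that were hidden."""
    diff = prediction[masked_idx] - target[masked_idx]
    return float(np.mean(diff ** 2))

corrupted, masked_idx = mask_frames(spectrogram)

# Stand-in "model": the identity on the corrupted input. A real model would be
# a transformer that infers the hidden frames from the surrounding context,
# and training would minimize this loss over many recordings.
prediction = corrupted

loss = reconstruction_loss(prediction, spectrogram, masked_idx)
print(f"{len(masked_idx)} frames masked, reconstruction loss = {loss:.4f}")
```

Because the training signal comes entirely from the recordings themselves, no human annotation is needed, which is the property the researchers highlight.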

The players

Tim Gardner

An associate professor of bioengineering at the University of Oregon's Knight Campus who led the development of TweetyBERT.

George Vengrovski

A graduate student in Gardner's lab who developed TweetyBERT as a means of automatically annotating the songs of canaries.


What they’re saying

“Current AI methods for analyzing animal vocalizations require human-labeled training data, a slow and labor-intensive process. We developed TweetyBERT, a self-supervised neural network for analyzing birdsongs. It can rapidly process unlabeled vocal recordings, identify communication units, and annotate sequences.”

— Tim Gardner, Associate Professor of Bioengineering (Mirage News)

What’s next

The researchers believe the underlying approach behind TweetyBERT could be applied to study vocal patterns in other bird species and even marine mammals like dolphins and whales, suggesting the potential for further development and broader applications of the technology.

The takeaway

TweetyBERT is a significant advance for neuroscience research, providing a scalable and efficient tool for analyzing the complex vocalizations of songbirds. By automating the segmentation and classification of birdsongs, the model opens new routes to understanding the neural basis of vocal learning and production, with potential applications extending beyond canaries to a wide range of animal species.