USC AI Transcends Training to Master Obscure Programming Language
Researchers develop feedback loop to push AI model's success rate from 39% to 96% on Idris coding exercises
Published on Mar. 11, 2026
A new study from the USC Viterbi School of Engineering found that by giving an AI model feedback on its errors and letting it try again, researchers could dramatically improve the model's performance on an obscure programming language for which it had minimal training data. The method, developed by USC undergraduate Minda Li and her advisor Professor Bhaskar Krishnamachari, pushed the model's success rate from a dismal 39% to an impressive 96% on Idris coding exercises.
Why it matters
This research suggests that AI models can transcend the limits of their initial training data by leveraging the right feedback mechanisms. The ability to master tasks and languages that have minimal available data online could unlock new applications for AI in areas like mathematical reasoning, legal logic, and endangered human languages.
The details
Li and Krishnamachari chose to test the AI model on Idris, an obscure dependently typed functional programming language with only around 2,000 publicly available code repositories online - roughly 12,000 times fewer than the 24 million repositories available for the popular Python language. Crucially, neither the researcher nor her advisor could write a line of Idris code themselves. By implementing a 'compiler feedback loop' that captured error messages from the Idris compiler and fed them back to the AI model, Li pushed the model's success rate from 39% to 96% on a set of 56 Idris coding exercises.
- The study was accepted at the IEEE SoutheastCon 2026 conference, taking place March 12-15, 2026.
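The study's exact prompts and tooling are not reproduced here, but the core idea of a compiler feedback loop is simple to sketch. In the Python sketch below, the model and the Idris compiler are stand-in callables (hypothetical names, not the researchers' code): generate a solution, try to compile it, and if compilation fails, append the compiler's error messages to the prompt and ask again.

```python
# Sketch of a compiler-in-the-loop repair cycle. The helper names
# (generate, compile_code) are hypothetical stand-ins, not the paper's code.

def feedback_loop(generate, compile_code, task, max_attempts=10):
    """Ask the model for code, compile it, and feed errors back until it builds."""
    prompt = task
    for attempt in range(1, max_attempts + 1):
        code = generate(prompt)
        ok, errors = compile_code(code)
        if ok:
            return code, attempt
        # Append the compiler's error messages so the model can self-correct.
        prompt = (f"{task}\n\nYour last attempt failed to compile with:\n"
                  f"{errors}\nPlease fix the code and try again.")
    return None, max_attempts

# Toy stand-ins so the loop can be exercised without Idris or an AI model:
def toy_model(prompt):
    # Pretend the model only gets it right once it has seen an error message.
    return "fixed" if "failed to compile" in prompt else "buggy"

def toy_compiler(code):
    return (True, "") if code == "fixed" else (False, "type error at line 1")

code, attempts = feedback_loop(toy_model, toy_compiler, "Write an Idris function")
print(code, attempts)  # the toy model succeeds on its second attempt
```

In a real setup, `generate` would call the language model's API and `compile_code` would invoke the Idris compiler (for instance via a subprocess) and capture its stderr; the loop itself is the "one simple thing" Li describes - keep recompiling, keep trying.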
The players
Minda Li
A USC Viterbi undergraduate who has been pursuing research since her freshman year, working alongside her advisor Bhaskar Krishnamachari.
Bhaskar Krishnamachari
A Faculty Fellow and Systems Professor in the Ming Hsieh Department of Electrical and Computer Engineering at the USC Viterbi School of Engineering, with a joint appointment in the Thomas Lord Department of Computer Science.
GPT-5
The AI model tested by Li and Krishnamachari, which was able to dramatically improve its performance on the obscure Idris programming language through the feedback loop method.
What they’re saying
“Our AI tools are now able to transcend their initial training. Used to be, maybe a year or two ago, you would say an AI model is only as good as the data it has seen. This paper is saying something different.”
— Bhaskar Krishnamachari, Faculty Fellow and Systems Professor
“I thought we'd probably get a 10% jump. I was surprised that just that alone, seemingly one simple thing, just keep recompiling, keep trying, was able to get to 96%.”
— Minda Li, USC Viterbi Undergraduate
What’s next
Li is already thinking about how to make the model get smarter with each problem it solves, rather than starting from scratch every time. Krishnamachari sees potential applications of this approach in areas like mathematical reasoning, legal logic, and endangered human language translation.
The takeaway
This research demonstrates that AI models can transcend the limits of their training data by leveraging the right feedback mechanisms, opening up new possibilities for AI to tackle tasks and languages that have minimal available data online.