Amsterdam Today
By the People, for the People
AI Suddenly Becomes Useful Tool for Open-Source Developers
But legal and quality issues still loom as AI-generated code floods projects
Apr. 1, 2026 at 1:40am
Open-source software maintainers report that AI has become a genuinely useful tool for tasks like cleaning up old code, maintaining abandoned programs, and improving existing codebases. Significant legal and quality challenges remain, however, including concerns that AI-generated code could be used to clone copyrighted projects, and a flood of low-quality "AI slop" that is overwhelming many maintainers.
Why it matters
Open-source software powers much of the world's critical infrastructure, but the vast majority of open-source projects have only a single maintainer, making them vulnerable to disruption. Using AI to assist with code maintenance and improvements could help address this problem, but the legal and quality issues must be resolved first to ensure the integrity of open-source software.
The details
Several prominent open-source maintainers, including Linux stable kernel maintainer Greg Kroah-Hartman and Ruby project maintainer Stan Lo, report that AI coding tools have recently become far more useful and capable. The legal questions around AI-generated code cloning copyrighted projects remain unresolved, and many maintainers are still wading through low-quality "AI slop." The Linux Foundation's security organizations are responding by making AI tools available to maintainers at no cost.
- Months ago, open-source projects were receiving "AI slop": low-quality, obviously wrong security reports generated by AI.
- A month ago, the situation "switched," and open-source projects started receiving real, high-quality AI-generated security reports.
- By the end of the year, AI programming tools are expected to be much more reliable for open-source developers.
The players
Greg Kroah-Hartman
Maintainer of the Linux stable kernel.
Dirk Hohndel
Senior director of open source at Verizon.
Stan Lo
Maintainer of the Ruby programming language project.
Dan Blanchard
Maintainer of the important Python library chardet.
Mark Pilgrim
Original developer of the chardet library.
What they’re saying
“Months ago, we were getting what we called 'AI slop,' AI-generated security reports that were obviously wrong or low quality.”
— Greg Kroah-Hartman, Linux kernel maintainer
“This is almost possible today. And at the rate of improvement these tools have seen over the last couple of quarters, I am convinced that it will be possible with acceptable results at some point this year.”
— Dirk Hohndel, Senior director of open source, Verizon
“AI has already helped me with documentation themes, refactors, and debugging. I wonder whether AI tools will ‘help revive unmaintained projects’ and ‘raise a new generation of contributors, or even maintainers.’”
— Stan Lo, Ruby project maintainer
What’s next
The Linux Foundation's security organizations, the Alpha-Omega Project and the Open Source Security Foundation (OpenSSF), are addressing the flood of low-quality AI-generated reports by making AI tools available to maintainers at no cost, helping them triage and process the growing volume of AI-generated security reports.
The takeaway
AI has become a much more useful tool for open-source developers, but significant legal and quality challenges must be addressed before AI and open-source programming can truly work together seamlessly. Maintaining the integrity and security of open-source software is crucial, and the community is working to balance AI's capabilities against the quality and trustworthiness of the resulting code.