Open Source Maintainers Are Drowning in Junk Bug Reports Written By AI
December 11, 2024

An anonymous reader shares a report: Software vulnerability submissions generated by AI models have ushered in "a new era of open source security reporting" -- and the developers who maintain these projects wish that bug hunters would rely less on results produced by machine-learning assistants. Seth Larson, security developer-in-residence at the Python Software Foundation, raised the issue in a blog post last week, urging those who report bugs not to use AI systems for bug hunting.

"Recently I've noticed an uptick in extremely low-quality, spammy, and LLM-hallucinated security reports to open source projects," he wrote, pointing to similar findings from the Curl project in January. "These reports appear at first glance to be potentially legitimate, and thus require time to refute." Larson believes that low-quality reports should be treated as if they are malicious.

As if to underscore the persistence of these concerns, a Curl project bug report submitted on December 8 shows that nearly a year after maintainer Daniel Stenberg raised the issue, he is still confronted with "AI slop" -- and still wasting his time arguing with bug submitters who may be partially or fully automated.
