Open source project maintainers are drowning in a sea of low-quality, AI-generated security reports, according to security report triager Seth Larson.
Larson said he has seen an increase in low-quality reports that waste maintainers’ time and contribute to burnout.
“I’ve recently noticed an increase in extremely low quality security reports, spam, and LLM-generated bullshit aimed at open source projects,” he wrote in a blog post, describing it as a problem of the LLM era. “These reports appear to be potentially legitimate at first glance and therefore take time to debunk.”
“This issue is difficult to resolve because it is spread across thousands of open source projects, and because the reports are security-sensitive, maintainers are discouraged from sharing their experiences or asking for help.”
Larson wants platforms that accept vulnerability reports to add systems that prevent the automated or abusive creation of security reports, and to allow such reports to be made public without a vulnerability record being published, essentially letting maintainers name and shame offenders.
They should also publicly identify reporters who abuse the system, remove positive incentives for submitting security reports, and limit the ability of newly registered users to file them.
Meanwhile, Larson urged reporters to stop using LLM systems to hunt for vulnerabilities and to submit only human-reviewed reports. Don’t spam projects, he said, and show up with patches, not just reports.
As for maintainers, he said that low-quality reports should be treated as if they were malicious.
“Put as much effort into responding as the reporter put into sending a sloppy report, which is to say almost none,” he suggested.
“If you receive a report that you suspect was generated by AI or an LLM, respond with a short reply and close the report: ‘I suspect this report is AI-generated/incorrect/spam. Please respond with more justification for this report.’”
Larson isn’t the only one raising concerns about the poor quality of AI-generated security reports.
Earlier this month, curl maintainer Daniel Stenberg complained that while the project had always received a certain number of bad reports, AI was now making them more believable and thus more time-consuming to verify.
“When reports are made to look better and appear to make a point, it takes us longer to investigate and ultimately discard them. Every security report requires someone to take the time to review it and assess what it means,” he said.
“The better the bullshit, the more time and energy we have to spend on the report until we close it. A crappy report doesn’t help the project at all. Instead, it takes developers’ time and energy away from something productive.”