AI 'slop security reports' are driving open source maintainers mad

Open source project maintainers are drowning in a sea of AI-generated 'slop security reports', according to Seth Larson, the Python Software Foundation's security developer-in-residence, who triages security reports for open source projects.

Larson said he’s witnessed an increase in poor-quality reports that are wasting maintainers' time and contributing to burnout.

"Recently I've noticed an uptick in extremely low-quality, spammy, and LLM-hallucinated security reports to open source projects. The issue is in the age of LLMs, these reports appear at first-glance to be potentially legitimate and thus require time to refute," he wrote in a blog post.

"This issue is tough to tackle because it's distributed across thousands of open source projects, and due to the security-sensitive nature of reports open source maintainers are discouraged from sharing their experiences or asking for help."

Larson wants platforms to add systems that prevent the automated or abusive creation of security reports, and to allow reports to be made public without publishing a vulnerability record - essentially letting maintainers name and shame offenders.

They should also remove the public attribution of reporters who abuse the system, taking away the positive incentive for filing such reports, and limit the ability of newly registered users to report security issues.

Meanwhile, Larson called on reporters to stop using LLM systems for detecting vulnerabilities, and to only submit reports that have been reviewed by a human being. Don't spam projects, he said, and show up with patches, not just reports.

As for maintainers, he said low-quality reports should be treated as if they are malicious.

"Put the same amount of effort into responding as the reporter put into submitting a sloppy report: ie, near zero," he suggested.

"If you receive a report that you suspect is AI or LLM generated, reply with a short response and close the report: 'I suspect this report is AI-generated/incorrect/spam. Please respond with more justification for this report'."

Larson isn't the only maintainer to raise the issue of low-quality AI-generated security reports.

Earlier this month, Daniel Stenberg, founder and lead developer of the Curl project, complained that, while the project had always received a certain number of poor-quality reports, AI was now making them look more plausible - and therefore more time-consuming to investigate.

"When reports are made to look better and to appear to have a point, it takes a longer time for us to research and eventually discard it. Every security report has to have a human spend time to look at it and assess what it means," he said.

"The better the crap, the longer time and the more energy we have to spend on the report until we close it. A crap report does not help the project at all. It instead takes away developer time and energy from something productive."

Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.