AI 'slop security reports' are driving open source maintainers mad
Low-quality, LLM-generated reports should be treated as if they are malicious, according to one expert


Open source project maintainers are drowning in a sea of AI-generated 'slop security reports', according to Seth Larson, security developer-in-residence at the Python Software Foundation, whose role includes triaging security reports.
Larson said he’s witnessed an increase in poor-quality reports that are wasting maintainers' time and contributing to burnout.
"Recently I've noticed an uptick in extremely low-quality, spammy, and LLM-hallucinated security reports to open source projects. The issue is in the age of LLMs, these reports appear at first-glance to be potentially legitimate and thus require time to refute," he wrote in a blog post.
"This issue is tough to tackle because it's distributed across thousands of open source projects, and due to the security-sensitive nature of reports open source maintainers are discouraged from sharing their experiences or asking for help."
Larson wants to see platforms add systems that prevent the automated or abusive creation of security reports, and allow reports to be made public without publishing a vulnerability record - essentially letting maintainers name and shame offenders.
Platforms should also remove the public attribution of reporters who abuse the system, take away any positive incentive to report security issues, and limit the ability of newly registered users to report security issues.
Meanwhile, Larson called on reporters to stop using LLM systems for detecting vulnerabilities, and to only submit reports that have been reviewed by a human being. Don't spam projects, he said, and show up with patches, not just reports.
As for maintainers, he said low-quality reports should be treated as if they are malicious.
"Put the same amount of effort into responding as the reporter put into submitting a sloppy report: ie, near zero," he suggested.
"If you receive a report that you suspect is AI or LLM generated, reply with a short response and close the report: 'I suspect this report is AI-generated/incorrect/spam. Please respond with more justification for this report'."
Larson isn't the only maintainer to raise the issue of low-quality AI-generated security reports.
Earlier this month, Daniel Stenberg, founder and lead developer of the curl project, complained that, while curl had always received a certain number of poor reports, AI was now making them look more plausible - and thus taking more time to investigate.
"When reports are made to look better and to appear to have a point, it takes a longer time for us to research and eventually discard it. Every security report has to have a human spend time to look at it and assess what it means," he said.
"The better the crap, the longer time and the more energy we have to spend on the report until we close it. A crap report does not help the project at all. It instead takes away developer time and energy from something productive."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.