The Security Slop Slavine: Why AI Can't Replace Domain Expertise
AI Tools Are Flooding Bug Bounty Programs — and Real Researchers Are Paying the Price
Back in July 2025, Daniel Stenberg, the creator of curl, wrote a piece called
"Death by a Thousand Slops" showing how his project's bug bounty program had been overwhelmed by AI generated vulnerability reports.
That was eight months ago; things have only gotten worse.
His frustration resonated with me immediately, because I encounter AI slop every single day.
By day, I work at HeroDevs, evaluating vulnerability reports for a range of end-of-life libraries and frameworks.
By night (and weekends, and the occasional holiday), I help the Node.js project with triage and security releases.
I've now triaged so much slop that I've become the sommelier who can tell the wine by a single sniff: whether the report was crafted from the finest vintages of genuine research effort, or merely scooped from the slop bucket of AI-generated garbage.
Same Slop, Different Day
At HeroDevs, one of the libraries in our research scope is Hibernate, the Java ORM framework used in millions of enterprise applications.
Over the past months, we have received an exceptionally high number of security reports that share a common thread: they describe an attacker modifying `persistence.xml`.
For those unfamiliar: `persistence.xml` is a server-side configuration file.
It lives on the server.
It is not exposed to end users, it is not parsed from untrusted input at runtime, and it is not reachable over the network under any normal deployment.
For an attacker to modify it, they would need to have already compromised the server, at which point editing an XML configuration file is the least of your problems.
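To make this concrete, here is a minimal sketch of what such a file looks like. Per the JPA/Jakarta Persistence spec it is packaged inside the application archive at `META-INF/persistence.xml`; the unit name and properties below are illustrative, not taken from any real report.

```xml
<!-- META-INF/persistence.xml — packaged inside the application's JAR/WAR,
     read once at startup. Never parsed from user input, never served
     over the network. -->
<persistence xmlns="https://jakarta.ee/xml/ns/persistence" version="3.0">
  <persistence-unit name="example-unit">
    <!-- Illustrative settings; real values vary by deployment -->
    <properties>
      <property name="jakarta.persistence.jdbc.url"
                value="jdbc:postgresql://localhost:5432/app"/>
    </properties>
  </persistence-unit>
</persistence>
```

Anyone who can rewrite this file already has write access to the deployed artifact, which is the compromise, not the consequence.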
Different submitters, same scenario, same fabricated exploit chain, same confident tone.
Same complete absence of working proof.
It's not a coincidence; this is what happens when someone points an AI at a project and asks it to find vulnerabilities.
The AI obliges, generates a report that sounds plausible, cites real source code, references real CVE patterns, wraps it in the formatting of a real security report, and hands it back to you.
The submitter has no idea that it's all fake because they blindly trust the AI.
And if you argue back, the AI will argue back. You'll explain why the attack scenario is not viable, and the AI will come back with a new one.
Just as confident, just as fake.
It is a Dante circle of hell, a descent through plausible-sounding nonsense, each reply slightly different from the previous one (but equally lengthy and detailed), each requiring the same explanation you just gave.
The only exit is to close the report. Then, days later, a different account opens it again.

Two Different Kinds of Pain
The same flood hits the Node.js project every day with the major difference that Node.js security triage is volunteer work.
The people doing it are doing so on their own time out of commitment to the project and the community.
When a wave of AI generated slop hits the Node.js security inbox, there are human beings spending their weekends sorting through garbage instead of doing meaningful work.
Every hour spent on a fake report is an hour that did not go toward a real fix, a real CVE, a real security release that protects the millions of developers and organizations that depend on Node.js.
The project has already raised the bar on HackerOne, requiring a minimum reputation score to filter out throwaway accounts.
It helped, but not enough.
So now the team is exploring the next line of defense: LLM-assisted triage.
A rule-based classifier to deprioritize the obvious noise first, then an LLM to assess what actually deserves human attention.
The volume of garbage has grown large enough that a human alone can’t keep pace, so we are deploying AI to defend against AI.
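To illustrate the shape of such a pipeline (this is a hypothetical sketch, not the Node.js team's actual implementation; the red-flag patterns and thresholds are invented for the example), the rule-based first pass might look like this, with everything it marks "normal" flowing on to the LLM stage and ultimately a human:

```python
import re
from dataclasses import dataclass

@dataclass
class Report:
    title: str
    body: str
    reporter_reputation: int

# Heuristic red flags often seen in low-effort submissions
# (patterns invented for illustration).
RED_FLAGS = [
    r"persistence\.xml",           # "attacker modifies server config"
    r"attacker modifies .*config",
    r"theoretical(ly)? possible",
]

def rule_based_priority(report: Report) -> str:
    """First pass: cheap rules deprioritize obvious noise.

    Returns "low" for reports from low-reputation accounts that match
    known slop patterns, "normal" otherwise. "Normal" reports proceed
    to the LLM stage for a deeper plausibility assessment.
    """
    text = f"{report.title}\n{report.body}".lower()
    hits = sum(bool(re.search(pattern, text)) for pattern in RED_FLAGS)
    if hits >= 1 and report.reporter_reputation < 10:
        return "low"
    return "normal"
```

The design point is ordering: the rules are nearly free to run and catch the repetitive, template-shaped noise, so the expensive LLM call (and the far more expensive human hour) is spent only on reports that survive the first filter.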
This is a defeat. Every filter we add carries the risk that a genuine report from a real researcher gets caught in the net.
Maybe their HackerOne account is new or the LLM deprioritizes something it does not have enough context to recognize.
That researcher moves on and the vulnerability stays open.
Bug bounty programs exist because skilled researchers find real things.
Every legitimate report that gets lost as collateral damage is a loss for the open source ecosystem, for the companies that depend on it, for everyone.
If You Think AI Will Handle Your Security, Think Again
These reports come from people who ran an AI tool against a codebase, got output that looked like a security finding, and submitted it without the domain knowledge to tell whether it was real. The AI generated something plausible and they trusted it.
This is what happens when the cost of generating a security report drops to zero and the skills required to validate one are not considered necessary.
If a security team is using AI to assess their own dependencies and trusting the output without expert review, they are getting false confidence.
They may be prioritizing fake vulnerabilities while real ones go unnoticed.
They may be spending engineering time patching something that was never broken.
Worse, they may be filing reports against open source projects with volunteer maintainers, burning those maintainers out over imaginary vulnerabilities.
Real security work requires knowing how a framework actually works.
It requires understanding the threat model (which AI tends to ignore completely).
It requires knowing which attack surfaces are real and which are theoretical.
In short, it requires domain-specific expertise.
Hibernate's `persistence.xml` is not a remote attack vector. Any security professional who has spent a week with the framework knows this. An AI does not know this, because it is pattern-matching on training data, not reasoning.
The gap between "AI-generated slop" and "vulnerability" is exactly where domain expertise lives.
That gap is not getting smaller as AI improves. If anything, the reports are getting more convincing and harder to dismiss, which means triage takes longer.

What Needs to Change
I do not have a silver bullet, but a few things seem clear.
For bug bounty programs and open-source projects, the submission cost needs to go up somehow.
The current asymmetry, where generating a report takes thirty seconds and reviewing it takes an hour, is not viable.
For companies that consume security tooling, AI-assisted research is a force multiplier for experts, not a replacement for them. If your security assessment process does not include people who deeply understand the libraries and frameworks you depend on, you are creating the illusion of doing a security assessment.
Across the industry, we need to talk honestly about the toll this is taking on open source maintainers. Flooding their inboxes with AI-generated nonsense is a form of harm, even if it is unintentional.
The industry's current answer to every problem is "Here, have some AI": free tiers, bundled subscriptions, one-click agents, IDE plugins on by default.
Use it to write code. Use it to review PRs. Use it to identify vulnerabilities in open-source projects.
The incentive to ship AI everywhere is enormous. Every month, the tools get more accessible and more persuasive.
What we need is for the people doing real security work, whether at companies like HeroDevs or in open source communities like Node.js, to name this problem clearly and push back on the assumption that AI makes expertise optional. It doesn't. If anything, it makes expertise more important than ever.