The concept of the bug bounty is relatively simple: a researcher probes networks and applications for potential vulnerabilities, finds one, and reports it to the system owner. If there is agreement that the issue represents a genuine flaw, the researcher discloses the details and receives a reward.
It makes sense. The road to this level of acceptance has been slow and uneven, though, with bumps along the way: rewards have ranged from T-shirts to mooted payments of up to $5m. However, a recent post by Daniel Stenberg, founder of cURL, raised fresh questions about the sustainability of bug bounties.
He says he received seven reported issues in a 16-hour period, none of which identified a vulnerability. These formed part of 20 submissions reviewed in the first weeks of 2026. As a result, he has now announced plans to shut down the bounty programme he ran through HackerOne “to remove the incentive for people to submit poorly researched reports … AI-generated or not.”
Stenberg says the volume of submissions placed a heavy burden on his security team. Speaking to Computer Weekly, he confirms the programme ended in January and that he has switched from HackerOne to GitHub for vulnerability reporting.
Stenberg is careful not to criticise HackerOne, describing it as “a great company” that had supported his projects in the past. However, he says the change from HackerOne to a new provider has had “an immense impact” on the number of reports received.
He notes that report volumes had been relatively stable for years, before increasing sharply in 2025. “Maybe the first five years made it possible for researchers to find and report the low-hanging fruit,” he wrote.
“In previous years, somewhere north of 15% of submissions ended up as confirmed vulnerabilities. Starting in 2025, that rate fell below 5%. Not even one in 20 was real.”
Valid reports
Stenberg says his issue was not simply that reports were poor, but that it now takes significantly more time to determine whether they are valid. “Every report now takes longer, even to establish whether it’s true or not,” he adds.
In 2025, he says, the project received twice as many reports as in 2024, and projections suggested this could rise to three times as many in 2026. He expressed hope that stepping away from the bounty programme would expose low-effort submissions and prompt industry reflection, but acknowledged that “it doesn’t work like that”.
Is the problem driven by artificial intelligence (AI)-generated reports based on automated scanning? Stenberg identifies two indicators. The first is the language used: every word capitalised, and reports ending with identical bullet-point structures.
Second, he says, follow-up questions often revealed a lack of understanding. “The user doesn’t know what I’m asking about,” says Stenberg. “They pass my question back to the AI, then paste its response. The follow-up answer is overly long. No human does that.”
There seems to be no escaping the power of generative AI (GenAI): Deloitte predicts that by mid-2026, more adults will have used passive GenAI search summaries (72%) than standalone GenAI tools (61%).
Other predictions suggest GenAI and large language models (LLMs) “will move from reactive to preemptive”: instead of waiting to be asked, they will anticipate users’ needs, offer suggestions and steer the conversation unprompted. How does this affect vulnerability reporting? Could a tool prompt the user to examine code that has not been reviewed for some time, or that has proved troublesome for certain sets of users?
For Stenberg, the core issue is not that AI is being used, but that many reports are inaccurate or misleading. “Previously, bad reports were easy to detect,” he says. “Now, we get large, detailed submissions that look plausible at first, but often turn out to be wrong.”
He adds that a report may appear valid for some time before inconsistencies become clear. Stenberg wants to highlight abuse of the system and encourage broader discussion of how to address it.
One option is to reduce or remove financial incentives. Given the sums sometimes paid, some researchers may see bug bounties as an easy source of income, especially when work can be outsourced to GenAI tools.
Stenberg says money has always been part of bug bounties and does not fully explain the increase in poor-quality reports. “Most likely, all of these factors combine,” he adds.
Incentive structures
Commenting on the issue, Michael Daniel, president and CEO of the Cyber Threat Alliance (CTA), says incentive structures are difficult to design. Bug bounties assume finding bugs requires time and expertise, which justifies payment.
“In effect, you were compensating hunters for their effort and skill,” he says. “AI tools are undermining those assumptions by making scanning easier and faster.”
Daniel adds that evaluating AI output still requires expertise, which shifts the burden to programme owners and reduces overall signal quality.
Could this affect how companies view bug bounties? Katie Moussouris, CEO of Luta Security, pioneered bug bounties at Microsoft and in the US government, and is outspoken on the issues surrounding them. She says AI for offence is outpacing AI for defence across cyber security. In bug bounties, this manifests as both low-quality submissions and high volumes of legitimate findings, overwhelming response teams.
Moussouris agrees that patch development and verification are harder to automate. “We need to rebalance the use of AI quickly and deliberately if we are to adapt to today’s realities,” she says.
Asked whether any security response organisation can keep up with the scale of AI bug reports, Moussouris notes that attackers have never respected scope restrictions, so narrowing scope is no solution for a bug bounty programme struggling to keep pace with reports.
“It comes down to the sheer volume of reports, valid or otherwise, that are coming out of AI,” she adds. “Very few organisations could keep up with human researcher volume, especially when their software and processes are immature. The AI research is an accelerant to accumulating technical debt.”
Stenberg says he has long supported bug bounties as a way to attract skilled researchers. “They are full-time bounty hunters, and they focus on projects that pay,” he says. “If it’s your job, you want some reward occasionally.
“By offering money, you attract experts who will find your mistakes. I’ve always seen it as a luxury to be able to pay them to compete with other projects for their time.”
Mediating platforms
Another consideration is the role of the platforms that mediate between researchers and organisations. While Stenberg has moved away from HackerOne, the platform’s co-founder and senior director of product management, Michiel Prins, says the cURL experience “looks meaningfully different from what we see across many other programmes”.
He explains that open source projects often operate with lean teams and fewer buffers between submissions and validation, making them more vulnerable to AI-driven surges.
In an email to Computer Weekly, Prins says most customers rely on managed triage and AI workflows so internal teams focus only on serious findings. “With the right architecture and support, bug bounties remain one of the most powerful tools for uncovering real-world risk,” he affirms.
Asked whether HackerOne discourages AI-generated submissions, a spokesperson says: “We welcome valid reports regardless of what tools are used.
“In our most recent annual survey, three-quarters of respondents said they already use GenAI tools in bug bounty work or plan to, reflecting how quickly these tools are becoming part of standard workflows,” they add.
This growth may explain the surge in reports. Moussouris says large rewards are one factor, but even modest payments attract attention: “Any payment can be enough incentive for low-cost AI labour. AI simply makes spray-and-pray approaches faster and cheaper.”
What’s in scope?
Could organisations manage volumes better by tightening scope? HackerOne says clear scope and continuously updated policies can reduce noise, but are not sufficient alone.
“The answer is a mix of tight policy, stronger gating and filtering, and consistent enforcement against low-effort submissions,” the spokesperson says. “Effective programmes combine AI analysis with human oversight.”
The impact of low-quality AI-generated submissions is difficult to manage. AI is now embedded across the industry, and bug bounties are no exception. HackerOne’s own research shows valid AI vulnerability reports have risen by 210%, accelerating the discovery of genuine AI security flaws.
How should this be handled? Michael Daniel points to basic economics: if supply rises, price should fall. “Reduce the bounty to reduce the incentive,” he says.
Stenberg expects more open source projects and companies to struggle with the growing ease of report generation.
Bug bounties changed how vulnerabilities are discovered and fixed. Rising rewards created opportunities for skilled researchers, while AI has lowered barriers to entry. The result is a complex situation that reflects the wider dynamics shaping modern cyber security.