AI's Ability to Identify Major Software Bugs Grows by 490% Annually

A surge in vulnerability reports is leading to the decline of bug bounty programs.

Tech companies and open-source teams are being overwhelmed by AI-discovered software vulnerabilities, and the numbers hint at the scale of the problem. The Zero Day Initiative, the world’s largest independent bug bounty program, reported a 490% increase in submissions this month compared with April of last year, and the month isn’t over yet.

“Organizations receiving bug reports are struggling with triage and response,” Dustin Childs, Head of Threat Awareness at the Zero Day Initiative, told Mashable. “Some programs, like the Internet Bug Bounty program, have closed entirely rather than trying to keep up.”

On March 27, the Internet Bug Bounty program announced it was closing to submissions, citing a bug submission crisis that it said was transforming the “landscape” of vulnerability discovery.

“AI-assisted research is expanding vulnerability discovery across the ecosystem, increasing both coverage and speed,” said HackerOne, the group that ran the program, in a statement. “We are pausing submissions to reconsider the structure and incentives needed.”

As AI tools improve, they’re uncovering more severe vulnerabilities that require urgent patching. Anthropic, a key player in AI innovation, is likely contributing to the deluge.

Anthropic recently launched Claude Mythos, a model it initially deemed too dangerous for public release. Claude Mythos demonstrated a massive leap in cybersecurity capabilities, autonomously finding and exploiting zero-day vulnerabilities across major operating systems. The company ultimately released it to selected organizations to help secure critical software, saying it had found too many bugs to report all at once.

Critics have labeled the move security theater and a publicity stunt; Anthropic, for its part, has committed to disclosing all vulnerabilities found by Claude once they are patched.

In its April 7 blog post, Anthropic noted that “less than 1% of the potential vulnerabilities we’ve discovered have been fully patched by maintainers.” The company prioritizes disclosing only the most severe bugs to avoid overwhelming other organizations.

However, Anthropic estimates this is just a fraction of the bugs it expects to find soon, and it has begun hiring security contractors to handle disclosures.

Even before Claude Mythos, AI tools were driving a surge in bug reports, though many were low quality. Now the severity of reported bugs is rising as well, adding to developers’ burdens.

“Not every submission is a real bug, but we must triage them as if they are,” Childs said.

Daniel Stenberg, the lead developer of cURL, paused the project’s bug bounty program, citing a years-long surge in AI-driven bug reports. “The aim of shutting down the bounty is to reduce poor-quality reports, AI-generated or not,” he wrote on his blog.

However, Stenberg told Mashable that the recent influx represents genuine security concerns, a reversal of the earlier trend. More than 20 open-source projects confirmed a similar shift toward high-quality security reports.

In his latest blog update, Stenberg noted an increase in both the volume and severity of new bugs in 2026, with confirmed vulnerabilities surpassing pre-AI levels.

He also worries about the impact on developers, especially in volunteer-driven projects, warning that even high-quality reports could overwhelm maintainers.

There are indications that private companies are also facing an increase in AI-discovered bugs. Microsoft announced 165 new bugs patched in its April security update, the second largest in its history, with AI likely contributing to the rise, Childs noted in his Patch Tuesday blog.

Microsoft has denied that AI was behind the large update, but the impact is hard to miss, with both potential and confirmed bugs needing urgent fixes.

Still, AI tools may offer long-term benefits to cybersecurity defenders, even if attackers hold the short-term advantage: AI boosts threat actors’ productivity, too.

AI is both the problem and part of the solution, as developers turn to AI to triage AI-discovered bugs. “We’ve begun using AI for triage,” Childs said. “It’s the only way to keep up.” Some entries are AI-generated nonsense, but the team has used those to train models to better flag low-quality submissions.

Without industry adaptation, consumers might face the threat of exposure to unpatched vulnerabilities. “We must scale up our fixes as researchers scale up findings,” Childs warned, or users will have “little chance to apply these [fixes] in time to avoid attacks.”