Google Halts AI-Developed Zero-Day Hack

For the first time, Google has identified and halted a zero-day exploit developed with AI. According to Google’s Threat Intelligence Group (GTIG), cybercriminals planned to use the vulnerability in a “mass exploitation event” that would bypass two-factor authentication in an unnamed “open-source, web-based system administration tool.”

The exploit’s code showed signs of AI involvement, such as a “hallucinated CVSS score” and formatting consistent with LLM training data. It targets a “semantic logic flaw” in which trust was hardcoded into the platform’s 2FA system. The discovery follows discussions about cybersecurity AI models like Anthropic’s Mythos and a recent AI-identified Linux vulnerability.
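The report doesn’t detail the flaw, but a “semantic logic flaw” with hardcoded trust in a 2FA check might look something like the following minimal sketch. All function and field names here are invented for illustration; none come from the affected tool.

```python
# Hypothetical illustration of hardcoded trust in a 2FA check.
# The logic is syntactically valid but semantically broken: a request
# that merely *claims* a trusted source skips code verification.

def verify_2fa(submitted_code: str, expected_code: str, source: str) -> bool:
    if source == "internal-admin":      # hardcoded trust -- the flaw
        return True                     # 2FA is silently bypassed
    return submitted_code == expected_code

# An attacker who controls the source field bypasses 2FA entirely:
print(verify_2fa("000000", "123456", source="internal-admin"))  # True
# A normal request still needs the correct code:
print(verify_2fa("123456", "123456", source="web"))             # True
print(verify_2fa("000000", "123456", source="web"))             # False
```

The bug is a trust decision baked into application logic rather than derived from authenticated state, which is why it reads as a semantic rather than a memory-safety flaw.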

This is the first time Google has seen AI used in such an attack, though it does not believe Gemini was involved. Google says it has stopped the exploit, but warns that attackers are increasingly using AI to find security vulnerabilities. The report also notes that AI itself is a target, with adversaries increasingly attacking the components of AI systems.

Google describes hackers using “persona-driven jailbreaking,” prompting an AI model as if it were a security expert so that it will find vulnerabilities. Hackers feed entire vulnerability datasets into AI models, and OpenClaw is reportedly used to refine AI-generated payloads in test environments, ensuring exploit reliability before execution.
