From identifying elusive security vulnerabilities to enhancing threat detection, Google’s AI innovations are set to redefine digital security standards.
In a series of new announcements leading up to Black Hat USA and DEF CON 33, Google has showcased how its in-house AI tools are currently discovering significant bugs, assisting security teams in reducing response times, and partnering with humans in real-time hacker contests.
Google’s AI agent Big Sleep, originally unveiled last year, recently identified a security flaw (CVE-2025-6965) in SQLite that was known only to threat actors and at risk of imminent exploitation. The discovery, aided by intelligence from the Google Threat Intelligence Group, shows that AI can now catch vulnerabilities before attackers weaponize them.
Big Sleep is designed to operate like a human security analyst, sifting through code to spot exploitable flaws just as a traditional researcher would. Google has also tuned it to detect subtle variants of known vulnerabilities, the kind hackers prize when probing modern software.
Google’s open-source digital forensics tool, Timesketch, is also receiving a significant AI upgrade. Backed by a new model named Sec-Gemini, the system can now handle much of the heavy lifting in forensic investigations, such as scanning logs and surfacing potential threats, lightening analysts’ workloads and speeding up incident response. A live demonstration is scheduled for Black Hat USA.
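The kind of triage such a system automates can be illustrated with a deliberately simple heuristic: flag any source IP that produces a burst of failed logins within a short window. This is a toy sketch for intuition only, not Timesketch’s or Sec-Gemini’s actual mechanism; the event format and the `flag_bruteforce` helper are hypothetical.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_bruteforce(events, threshold=5, window=timedelta(minutes=1)):
    """Flag source IPs with >= threshold failed logins inside a sliding window.

    events: iterable of (timestamp, source_ip, outcome) tuples, sorted by time.
    Returns a list of (source_ip, timestamp) alerts.
    """
    failures = defaultdict(list)  # per-IP timestamps of recent failures
    alerts = []
    for ts, ip, outcome in events:
        if outcome != "login_failed":
            continue
        times = failures[ip]
        times.append(ts)
        # Evict failures that fell out of the sliding window.
        while times and ts - times[0] > window:
            times.pop(0)
        if len(times) >= threshold:
            alerts.append((ip, ts))
    return alerts
```

Real forensic pipelines layer many such detectors, and the promise of an AI assistant is to pick which signals matter and summarize them, rather than leaving analysts to sift raw log lines.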
Another internal tool is stepping out of the shadows. Google will give an exclusive preview of FACADE, its insider threat detection system, which has been quietly analyzing billions of events a day since 2018. Rather than relying on historical attack data, it flags anomalies using a machine learning technique called contrastive learning.
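To make the idea concrete, here is a minimal sketch of contrastive-style anomaly scoring in the spirit of that approach, not FACADE’s actual model: embeddings of observed (actor, action) pairs are pulled together while randomly sampled pairs are pushed apart, so unusual pairings score as anomalous without any labeled attack data. All names and parameters below are illustrative.

```python
import math
import random

random.seed(0)
DIM = 8  # embedding dimension (toy-sized)

def init_vec():
    return [random.gauss(0, 0.1) for _ in range(DIM)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(pairs, actions, epochs=200, lr=0.05, negatives=3):
    """Contrastive training with negative sampling: pull embeddings of
    observed (actor, action) pairs together, push random pairs apart."""
    actor_vecs = {a: init_vec() for a, _ in pairs}
    action_vecs = {b: init_vec() for b in actions}
    for _ in range(epochs):
        for actor, action in pairs:
            # One positive sample plus a few random negatives.
            samples = [(action, 1.0)] + [
                (random.choice(actions), 0.0) for _ in range(negatives)
            ]
            for act, label in samples:
                p = sigmoid(dot(actor_vecs[actor], action_vecs[act]))
                step = lr * (p - label)  # logistic-loss gradient
                for i in range(DIM):
                    a_i = actor_vecs[actor][i]
                    actor_vecs[actor][i] -= step * action_vecs[act][i]
                    action_vecs[act][i] -= step * a_i
    return actor_vecs, action_vecs

def anomaly_score(actor_vecs, action_vecs, actor, action):
    # Low actor/action compatibility maps to a high anomaly score.
    return 1.0 - sigmoid(dot(actor_vecs[actor], action_vecs[action]))
```

Trained on, say, engineers who routinely read code and push commits alongside a finance role that opens the ledger, an engineer suddenly opening the ledger would receive a high anomaly score, with no prior example of that misuse required.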
At DEF CON 33, Google will also co-host a Capture the Flag (CTF) event alongside Airbus, where teams will use AI assistants to solve a range of security challenges, putting AI tooling directly into the hands of security professionals and enthusiasts.
Beyond its own tooling, Google is committing to safer AI development across the industry. It is contributing data from its Secure AI Framework (SAIF) to the Coalition for Secure AI (CoSAI), supporting workstreams on agentic AI, software supply chain security, and cyber defense. The effort follows the initiative’s launch at last year’s Aspen Security Forum.
Lastly, next month marks the finale of the AI Cyber Challenge (AIxCC), a DARPA-led competition that Google supports. The winners will unveil new AI systems designed to find and fix vulnerabilities in major open-source software, a significant step toward proactive digital security.