Google has agreed to let the U.S. Department of Defense use its AI models on classified networks for all lawful purposes, according to multiple news reports.
The deal follows Anthropic’s public standoff with the Trump administration over those same terms. The Pentagon sought unrestricted use of AI, while Anthropic insisted on safeguards against its models being used for domestic surveillance or autonomous weapons.
After Anthropic refused, the DoD labeled the company a “supply-chain risk,” a designation typically reserved for foreign threats. The two are now in a legal battle, and a judge recently granted Anthropic an injunction blocking the label while the case proceeds.
Google is the third AI company to capitalize on Anthropic’s falling-out; OpenAI and xAI have also struck agreements with the DoD. Google’s deal includes provisions stating that it does not intend its AI to be used for domestic surveillance or autonomous weapons, The Wall Street Journal reports, mirroring language in OpenAI’s contract. How enforceable those provisions are, however, remains unclear, according to the WSJ.
Even so, 950 Google employees signed an open letter urging the company to adopt Anthropic’s stance and refuse to sell AI to the DoD without similar safeguards.
In a statement to TechCrunch, Google said it is proud to support national security through its AI services, while emphasizing that AI should not be used for domestic mass surveillance or for autonomous weapons without human oversight. Here is the full statement:
“We are proud to be part of a broad consortium of leading AI labs and technology and cloud companies providing AI services and infrastructure in support of national security. We support government agencies across both classified and non-classified projects, applying our expertise to areas like logistics, cybersecurity, diplomatic translation, fleet maintenance, and the defense of critical infrastructure.”
“We believe that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security. We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.”
