The confidential agreement reportedly does not give Google the power to veto the government’s use of its AI models.
Google has entered into a secretive deal that permits the U.S. Department of Defense to use its AI models for “any lawful government purpose,” according to The Information. This arrangement was disclosed a day after Google employees urged CEO Sundar Pichai to prevent the Pentagon from employing its AI due to fears it might be utilized in “inhumane or extremely harmful ways.”
If confirmed, this agreement would align Google with OpenAI and xAI, which have also secured classified AI agreements with the U.S. government. Anthropic was part of this group until it was blacklisted by the Pentagon for rejecting the Department of Defense’s requests to remove weapon- and surveillance-related safeguards from its AI models.
The Information cites an anonymous source “with knowledge of the situation,” who says the agreement specifies that Google’s AI systems should not be used for domestic mass surveillance or for autonomous weapons “without appropriate human oversight and control.” However, the contract also states that it does not grant Google “any right to control or veto lawful government operational decision-making,” suggesting these restrictions are informal understandings rather than legally binding commitments.
In a statement to Reuters, a Google spokesperson reiterated the company’s position that AI should not be used for domestic mass surveillance or for autonomous weapons without proper human oversight. “We believe that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security,” Google told the publication.
