OpenAI Discloses Additional Information on Its Pentagon Agreement

OpenAI’s recent agreement with the Department of Defense, which CEO Sam Altman acknowledged was “rushed” and looked bad, came after negotiations between Anthropic and the Pentagon fell apart. In the aftermath, President Trump instructed federal agencies to stop using Anthropic’s technology after a transitional period, and Secretary of Defense Pete Hegseth labeled the AI firm a supply-chain risk.

OpenAI swiftly announced its own agreement to deploy its models in classified settings. Although OpenAI and Anthropic described similar restrictions on how their technology could be used, questions arose about OpenAI’s transparency and why its negotiation succeeded where Anthropic’s failed.

In response to the scrutiny, OpenAI detailed its approach to the agreement in a blog post, asserting three prohibited areas of use: mass domestic surveillance, autonomous weapon systems, and high-stakes automated decisions such as “social credit” scoring. In contrast to companies that have scaled back safety measures, OpenAI emphasized its multi-layered safeguards, including cloud deployment, staff oversight, and strong contractual terms, alongside existing U.S. legal protections.

OpenAI said it was unsure why Anthropic’s negotiation failed and encouraged other labs to consider similar agreements. Afterward, Techdirt’s Mike Masnick suggested the deal still permits domestic surveillance under Executive Order 12333, which authorizes NSA activities abroad that can sweep in U.S. communications.

OpenAI’s head of national security partnerships, Katrina Mulligan, argued that deployment architecture matters more than contract language in preventing AI from being integrated into weapons or surveillance systems. Addressing the deal’s rushed rollout and the ensuing backlash on social media, Altman said the agreement was intended to de-escalate tensions between the industry and the Department of War, and expressed hope that it would ultimately foster industry cooperation and innovation despite the risk of criticism.