Inside Anthropic's Existential Negotiations with the Pentagon

A former Uber executive is challenging the AI lab in a dispute over a $200 million military contract, though the implications extend far beyond the contract itself. Anthropic has been embroiled in a lengthy conflict with the Department of Defense, which has unfolded in social media exchanges, critical public statements, and anonymous comments from Pentagon officials to the media. The core issue for the $380 billion AI startup hinges on the phrase “any lawful use.” This new contractual term, already accepted by OpenAI and xAI, would allow the US military to employ AI services for mass surveillance and autonomous lethal weapons, with no human involvement in the decision-making process.

The negotiations have become contentious, with Pentagon CTO Emil Michael, a former Uber executive, reportedly using threats to label Anthropic as a “supply chain risk,” a designation typically reserved for national security threats such as foreign interference or cyber warfare. Anthropic’s CEO Dario Amodei is set to meet with Secretary Pete Hegseth at the Pentagon in what has been described as a critical meeting.

It is unusual for the Pentagon to publicly threaten an American company. The Pentagon’s typical practice is to keep such classifications confidential. Publicly branding Anthropic as a national security risk would compel other companies to end their business relationships with Anthropic.

Should this classification become official, it could terminate Anthropic’s $200 million contract with the Pentagon and have significant repercussions on the company’s finances. Major defense contractors and tech firms like AWS, Palantir, and Anduril rely on Anthropic’s Claude AI model for Pentagon projects. If classified as a “supply chain risk,” companies associated with the military would have to abandon Anthropic’s AI systems, which are considered industry-leading.

The Pentagon recently signed an agreement to use the AI model Grok, developed by Elon Musk’s xAI, in classified systems. This development occurred just before Amodei’s meeting with Hegseth.

The Pentagon’s position towards Anthropic could follow either a narrow or broad course of action. Analyst Geoffrey Gertz suggests the more logical outcome is a limited restriction, with Anthropic’s technology barred from specific Pentagon work. However, the unprecedented labeling of Anthropic suggests a more severe punitive action could be entertained.

Despite facing accusations of being “woke,” Anthropic has not been formally accused of any security vulnerabilities. Instead, the conflict with the Pentagon centers on Anthropic’s adherence to its “acceptable use policy.” Sources indicate that Anthropic has clearly communicated its boundaries to the government, particularly around unmanned military operations and mass domestic surveillance, citing concerns about infringing on civil liberties and the limits of current technological capabilities.

Anthropic’s policy is aligned with existing government directives preventing the collection of information on U.S. citizens without legal authorization and requiring human oversight in autonomous weapon systems.

Emil Michael is reported as an assertive negotiator for the Pentagon, possibly unhappy with a private company setting boundaries for government technology use. Anthropic’s “acceptable use policy” is central to its existing contract, emphasizing responsible AI that upholds democratic values. However, a memo from Hegseth highlights the need for rapid AI adoption, removing obstacles to data sharing, and asserts that AI must be applied “from campaign planning to kill chain execution.”

OpenAI, xAI, and Google have adjusted their Pentagon contracts following Hegseth’s directives. Yet, given Claude’s unique Impact Level 6 classification, these models cannot fully replace it if Anthropic is blacklisted, creating vulnerability for the Pentagon.

The dispute has also highlighted that Anthropic’s systems were reportedly used in a military operation involving Venezuelan President Nicolás Maduro, contradicting its current agreement.

While Anthropic cannot legally coordinate with other AI companies on the matter, the public nature of the battle has led to industry frustration that other firms are not advocating for similar terms as Anthropic. Some believe it is inevitable that Anthropic will eventually comply.

William Fitzgerald of The Worker Agency argues that AI labs hold substantial power and can justify their valuations without military contracts, and that they could find ways to do business without building warfare into their models.
