Over the past two weeks, a public dispute has emerged between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth over the military’s use of AI. Anthropic opposes its AI models being used for mass surveillance of Americans or for autonomous weapons that operate without human oversight. Hegseth, meanwhile, maintains that the Department of Defense should not be constrained by a vendor’s policies and that any “lawful use” of the technology is fair game.
On Thursday, Amodei made clear that Anthropic will hold its ground, even under threats that the company could be designated a supply chain risk for doing so. Given the pace of the news cycle, it’s worth stepping back to consider what’s at stake. At its core, this confrontation is about who controls powerful AI systems: the companies that build them, or the government that wants to use them.
**What is Anthropic concerned about?**
Anthropic worries that its AI models could be used for mass surveillance of Americans or in autonomous weapons that select and strike targets without human involvement. Unlike traditional defense contractors, Anthropic has consistently emphasized the unique dangers AI poses and the need for specific safeguards, and the company is focused on how to preserve those protections when the military uses its technology.
The U.S. military already fields highly automated lethal systems, though decisions to use force have traditionally rested with humans. Legal restrictions on autonomous weapons are limited, however: the DoD does not prohibit fully autonomous weapon systems, and it permits AI systems to select and engage targets on their own, provided they meet certain standards and receive approval from senior officials.
That is what troubles Anthropic: military technology is inherently secretive, so moves to automate lethal decision-making might go undetected until such systems are already operational. And if Anthropic’s models were used this way, it could still qualify as “lawful use.”
Anthropic is not calling for these uses to be banned permanently; rather, it believes its models are not yet capable enough to support them safely. Imagine an autonomous system misidentifying a target or escalating a conflict: a less-capable AI embedded in weaponry could make irreversible lethal decisions quickly and with unwarranted confidence, which is why Anthropic urges caution.
AI could also enable far more extensive, lawful surveillance of American citizens. Current U.S. law already permits some surveillance of texts, emails, and other communications, but AI could amplify it through better pattern detection, entity resolution, risk scoring, and behavior analysis.
**What does the Pentagon want?**
The Pentagon wants to deploy Anthropic’s technology for any lawful purpose it deems necessary, rather than being limited by Anthropic’s policies on autonomous weapons or surveillance. According to the Pentagon’s chief spokesperson, Sean Parnell, Hegseth insists that the Department of Defense should not be bound by a vendor’s rules and intends to use the technology for lawful purposes. Parnell also clarified that the department is not interested in mass domestic surveillance or in deploying autonomous weapons.
Parnell asked Anthropic to allow the Pentagon to use its models for all lawful purposes, warning that restrictions would jeopardize vital military operations and warfighter safety. Anthropic was given until 5:01 p.m. ET on Friday to decide, with the partnership at risk of termination if it declined.
Although the DoD frames its objection as a matter of corporate usage policies, Hegseth’s criticism of Anthropic appears to carry cultural grievances as well: in a January address at SpaceX and xAI offices, he railed against “woke AI.”
**So what now?**
The Pentagon could designate Anthropic a “supply chain risk,” effectively blacklisting it from government contracts, or invoke the Defense Production Act to bend its models to military needs. With the deadline looming, it’s unclear whether the Pentagon will follow through on its threat.
Neither side seems likely to back down easily. VC Sachin Seth of Trousdale Ventures warns that a supply chain risk designation could be disastrous for Anthropic. But cutting Anthropic off could itself create a national security problem, potentially delaying the military’s access to the best models by six to twelve months. xAI is positioning itself to fill the classified-ready niche, and given owner Elon Musk’s stance, it would likely grant the DoD full control over its technology. OpenAI, meanwhile, may hold to boundaries similar to Anthropic’s, according to recent reports.
