The law does not support Sam Altman’s assertions.
On Friday, amid a conflict between the Department of Defense and Anthropic, OpenAI CEO Sam Altman announced that OpenAI had reached new agreements with the Pentagon. The U.S. government had just blacklisted Anthropic for refusing to compromise on two major conditions for military use: no mass surveillance of Americans and no lethal autonomous weapons (AI systems that can kill without human intervention). Altman suggested that he had managed to embed these restraints in OpenAI's contract.
“Two of our key safety principles prohibit domestic mass surveillance and require human responsibility in the use of force, including for autonomous weapons,” Altman stated. “The DoW endorses these principles and embeds them in law and policy, which we’ve integrated into our agreement,” he added, using the “Department of War” name favored by the Trump administration for the Defense Department.
Social media and the AI industry quickly reacted to challenge Altman’s claims. Many questioned why the Pentagon would now agree to restraints it previously refused to consider.
The Pentagon hadn’t conceded, sources told The Verge. Instead, OpenAI agreed to comply with existing laws that already permit forms of mass surveillance, while arguing that doing so upholds its principles.
During negotiations, the Pentagon did not waver on its position of gathering and interpreting bulk data on Americans, a source clarified. An examination of OpenAI’s contract showed a reliance on what is “technically legal,” a standard the government has interpreted broadly in past mass-surveillance programs.
Former OpenAI official Miles Brundage argued that, under those legal interpretations, OpenAI was framing concessions as victories, at significant cost to companies like Anthropic that held firm.
OpenAI representative Kate Waters clarified that the Pentagon did not request mass surveillance permissions and that the agreement prohibited such actions. “The system cannot systematically gather or analyze Americans’ data indiscriminately,” Waters assured.
Nonetheless, AI tools can supercharge investigations by analyzing existing data patterns at extraordinary speed, and consolidating personal information can build a comprehensive picture of individuals’ lives. Anthropic’s CEO made this point in a statement, highlighting how AI can enable surveillance at unprecedented scale.
While Anthropic has insisted on tighter controls over AI deployment, OpenAI leans on present legal frameworks. Their deal references existing surveillance and weapons regulations, extending prior protections without enforcing new constraints.
The legal considerations cited by OpenAI have previously been used by the US to endorse widespread governmental surveillance following 9/11. Numerous programs justified under those interpretations, revealed by whistleblower Edward Snowden, included mass data collection activities and remain largely unreformed.
According to Waters, the Pentagon did not ask to use OpenAI systems for unwarranted surveillance, and intelligence activities under the deal must abide by existing legal frameworks.
Anthropic founder Dario Amodei suggested that legal frameworks trail behind AI’s scale and capabilities for data analysis. Anthropic’s proposed contract with the DoW insisted on adapting those frameworks to current technological capacities.
On lethal AI weapons, OpenAI’s constraints fall back on existing legal requirements mandating human oversight. That position aligns with the Pentagon’s stated policies but lacks the new, stricter provisions Anthropic advocated.
OpenAI’s contract terms, which emphasize compliance with applicable law, have sparked concern over whether they are sufficient to uphold OpenAI’s stated restrictions.
Defense leaders had openly declared that a tech company would not dictate military operations, and their partnership with OpenAI, built on the principle of “all lawful use,” appears to reflect that stance. OpenAI said it would push to extend those terms to all AI firms, implying that dissenting positions such as Anthropic’s lacked a reasonable basis.
As a result of its firm position, Anthropic was labelled a security threat, an unprecedented move against an American company that could effectively exclude it from future government contracts.
The AI community has rallied behind Anthropic, questioning why other tech firms haven’t adopted similar ethical standards. Anthropic’s defiance elevated its product to the leading position in app download charts, as prominent figures voiced their solidarity, adding pressure on competing AI organizations.
Even as it is praised for challenging a major defense deal, Anthropic does not reject autonomous combat systems outright: it would accept them in the future under stricter conditions, regarding current technical capabilities as insufficient.
In summary, the agreement between OpenAI and the Pentagon reiterates existing surveillance and lethal-weapons rules without imposing additional constraints, sparking debate over legal compliance and responsibility in national security.
