AI is now part of the culture wars and real wars

Untangling the Anthropic-Pentagon and OpenAI-Iran news cycle.

This was, to put it mildly, not a chill weekend.

For a few hours on Saturday, I thought the Anthropic-Pentagon contract dispute, which seemed to have concluded on Friday night when Defense Secretary Pete Hegseth declared the company a supply-chain risk, would take a backseat in the news cycle. Right around 1AM Saturday morning, the US launched 100 fighter jets toward Iran. I’d been texting sources late into the night about OpenAI’s new contract with the Pentagon, asking whether Sam Altman really did get those red lines on mass surveillance and autonomous lethal weapons, but by the time I woke up, the United States had assassinated Ayatollah Ali Khamenei and several other Iranian leaders in an aerial strike on Tehran, openly and unapologetically, in broad daylight.

Soon it became clear that Anthropic was part of the story, too. On Sunday, The Wall Street Journal, citing sources familiar with the matter, reported that several military command centers had used Claude-powered intelligence tools during the strike. How exactly the Pentagon used Claude in this specific operation is unknown; that information would be classified and known only to people directly involved. But the Journal wrote that the Pentagon had already deeply embedded Claude into technology that performs “intelligence assessments, target identification and simulating battle scenarios,” the same technology that was used in the Iran strike.

A few observations can be drawn from this. First, the conflict was never really about Anthropic posing an actual national security risk: if Claude were a genuine supply-chain threat, the Pentagon presumably wouldn’t have left it embedded in the systems that helped run the strike. Second, while AI may not yet have reached the “fully autonomous lethal weapon” stage, it has developed to a level sophisticated enough to support an impressively precise (though uncomfortably extralegal) strike on a foreign leader. That is all the more striking considering that Iran had been under a near-total, government-imposed internet blackout for several months, with virtually no digital connection to the outside world.

I contacted Hamza Chaudhry, the AI and National Security lead at the nonpartisan Future of Life Institute, for his long view on Operation Epic Fury. He noted that both sides of the conflict were already using artificial intelligence in their warfare — Iran has deployed AI-assisted missiles in recent months — and while the US had clearly prevailed in this scenario, it was the prelude to what he described as a “dyadic automated warfare problem: two AI systems effectively talking to each other through the medium of kinetic action, each optimizing and responding faster than human decision-makers can follow.”

Chaudhry’s nightmare scenario suggested the end of nuclear deterrence as a tool for global stability:

“Recent analyses of the 2025 India-Pakistan and Iran-Israel conflicts found that AI renders second-strike forces more transparent and thus more vulnerable, and that while nuclear arsenals still impose a ceiling on all-out war, AI lowers the floor for sub-threshold aggression and compresses political reaction time. If an adversary believes its nuclear deterrent is becoming visible, the rational response is to expand the arsenal or shift to a launch-on-warning posture.

“Experts have described this as threatening ‘arms race stability’: the risk that one side might seek a breakout advantage in advanced technology, triggering complementary efforts by the other. This is not a hypothetical future problem. The technologies that made Operation Epic Fury possible are the same technologies that are slowly making nuclear deterrence more fragile. We have no international governance framework that addresses this adequately.”

Natsec Lawyer-GPT

So what exactly is in the magical, red-line-respecting contract that Altman was bragging about? So far, we don’t know much beyond what OpenAI wrote about the contract on its company blog. Though it was essentially a press release, the post did contain excerpts from what the company claims is the contract itself, which stated that “The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities,” citing several preexisting security laws. But even that didn’t pass the legal sniff test. As my colleague Hayden Field reported yesterday:

OpenAI appears to rely heavily on existing legal limits. It said its Pentagon agreement states that “for intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose.”

But this isn’t reassuring. In the years after 9/11, US intelligence agencies ramped up surveillance programs that they determined fell within the very legal limits OpenAI cites, including multiple mass domestic spying operations.

Here’s one more piece of imaginary legalese, pointed out by a reader: “unconstrained monitoring” isn’t a real legal term, much less one defined in any of the authorities OpenAI is pointing to.
