On Friday afternoon, as our interview was getting underway, a news alert appeared on my screen: the Trump administration was cutting ties with Anthropic, the San Francisco AI company founded in 2021 by Dario Amodei and other former OpenAI researchers who left over safety concerns. Defense Secretary Pete Hegseth invoked a national security law meant to address foreign supply chain threats, blacklisting the company from Pentagon dealings after Amodei refused to allow Anthropic’s technology to be used for mass surveillance of U.S. citizens or in autonomous armed drones that could target and kill without human oversight.
The sequence of events was staggering. Anthropic stands to lose a contract worth up to $200 million and faces a ban on further work with other defense contractors after President Trump directed all federal agencies, via Truth Social, to “immediately cease all use of Anthropic technology.” Anthropic plans to challenge the Pentagon in court, calling the supply-chain-risk designation legally unfounded and “never before publicly applied to an American company.”
Max Tegmark, a Swedish-American physicist and MIT professor, has spent years warning that AI development is advancing faster than our ability to regulate it. Founder of the Future of Life Institute in 2014, Tegmark played a key role in organizing a 2023 open letter, signed by over 33,000 people including Elon Musk, calling for a pause in advanced AI development.
Tegmark takes a critical view of the Anthropic crisis, arguing that the company and its peers created this predicament themselves by resisting binding regulation. Companies like Anthropic, OpenAI, and Google DeepMind have pledged to police themselves, he says, but repeatedly break their own commitments, including Anthropic’s recent decision to abandon its pledge not to release powerful AI systems without confidence they wouldn’t cause harm.
In a landscape without binding rules, Tegmark says, there is little to protect these companies, either. More of the conversation will be available on TechCrunch’s StrictlyVC Download podcast this week.
**What was your initial reaction to the Anthropic news?**
The situation recalls the adage that the road to hell is paved with good intentions. A decade ago, there was widespread optimism about AI’s potential to solve problems like cancer and boost American prosperity. Now the U.S. government is going after a company for refusing to let its AI be used for domestic mass surveillance and autonomous killer drones.
**Does Anthropic’s collaboration with defense and intelligence contradict its safety-first stance?**
Yes, it’s contradictory. Anthropic markets itself as safety-centric, but actions speak louder than words. Companies like Anthropic, OpenAI, Google DeepMind, and xAI have all talked about safety while resisting mandatory safety regulations, and they’ve all abandoned key safety promises. Google discarded its “Don’t be evil” slogan, OpenAI removed “safety” from its mission, xAI disbanded its safety team, and Anthropic dropped its core safety commitment just this week.
**Why did companies with prominent safety commitments reach this juncture?**
These companies have consistently opposed regulatory measures for AI, claiming self-regulation suffices. As a result, AI is less regulated in America than food: a sandwich shop with health violations gets shut down, yet AI systems with known dangers proceed unchecked. This regulatory vacuum is the companies’ own fault. By refusing to endorse binding rules, they created a void in which anyone can act without restraint, much as happened with thalidomide or tobacco before those industries were regulated.
There are no laws against AI technologies that could harm Americans. Had the companies backed binding regulation earlier, they might have avoided their current predicament.
**Does the race with China validate the companies’ counter-argument?**
Look at what China is actually doing: it is moving against certain AI applications, such as anthropomorphic AI, out of concern for their impact on young people, harms that affect American youth just as much. The race to develop superintelligence, by contrast, is a race no one controls, posing a global risk if machines escape human oversight, an outcome contrary to China’s own governance ethos.
The idea that developing superintelligence is a security threat rather than an asset could gain traction in Washington. If experts come to see Dario Amodei’s vision of a “country of geniuses in a data center” as a risk, it may prompt a reevaluation. Recognizing that superintelligence may be uncontrollable echoes the Cold War’s nuclear race, where restraint ultimately prevailed for the sake of global security.
**What implications does this have for AI development’s pace?**
Not long ago, AI experts expected human-level AI to be decades away. Instead, progress has come rapidly: systems like GPT-5 show significant strides toward AGI. Students now face a future in which job security is uncertain because of AI, and they need to prepare for it proactively.
**With Anthropic blacklisted, will AI giants stand firm with them, or will others pursue such contracts?**
Sam Altman’s recent declaration of support for Anthropic’s principles is commendable. The silence from Google and xAI could breed internal discontent. This situation forces companies to reveal their core values.
