DeepSeek allegedly targeted Claude’s reasoning abilities and used the model to generate ‘censorship-safe alternatives to politically sensitive questions.’
Anthropic alleges that DeepSeek and two other Chinese AI companies misused its Claude AI model to improve their own products. In an announcement on Monday, Anthropic said the “industrial-scale campaigns” involved the creation of about 24,000 fraudulent accounts and more than 16 million exchanges with Claude, as first reported by The Wall Street Journal.
The three companies, DeepSeek, MiniMax, and Moonshot, are accused of “distilling” Claude: training a smaller AI model on the outputs of a more capable one. While Anthropic considers distillation a “legitimate training method,” it warns the technique can be misused to “acquire powerful capabilities from other labs in significantly less time and cost than developing them independently.”
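For readers unfamiliar with the term: in its classic form, distillation trains a small “student” model to match the softened output distribution of a larger “teacher” model; the API-based variant Anthropic describes instead trains on the larger model’s text responses. The short Python sketch below is purely illustrative and hypothetical (the function names, temperature value, and toy numbers are assumptions for this explainer, not any company’s actual pipeline); it shows only the standard logit-matching distillation loss.

import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between the teacher's and student's softened distributions,
    # scaled by T^2 as in the original distillation formulation (Hinton et al.).
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return float(np.mean(kl)) * temperature ** 2

# Toy usage: one example over a 4-token vocabulary.
teacher = np.array([[4.0, 1.0, 0.5, 0.2]])
student = np.array([[2.5, 1.2, 0.7, 0.4]])
print(distillation_loss(student, teacher))

The appeal, as Anthropic notes, is economic: minimizing a loss like this against a frontier model’s outputs transfers capability far more cheaply than training those capabilities from scratch.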
Anthropic warns that illicitly distilled models are unlikely to retain existing safeguards. “Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems — enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance,” it states.
DeepSeek, known for its efficient models, conducted more than 150,000 exchanges with Claude that targeted the model’s reasoning capabilities, according to Anthropic. The company is accused of using Claude to generate “censorship-safe alternatives to politically sensitive questions about dissidents, party leaders, or authoritarianism.” Last week, OpenAI similarly accused DeepSeek of “ongoing efforts to free-ride on the capabilities developed by OpenAI and other U.S. frontier labs.”
Moonshot and MiniMax conducted over 3.4 million and 13 million exchanges with Claude, respectively. Anthropic is urging other AI companies, cloud providers, and lawmakers to tackle illicit distillation, suggesting that “restricted chip access” could limit both model training and the scale of such abuse.
