
Anthropic is accusing three Chinese AI companies of creating more than 24,000 fake accounts to access its Claude AI models and improve their own.
The labs — DeepSeek, Moonshot AI, and MiniMax — purportedly generated more than 16 million interactions with Claude through those accounts using a technique known as “distillation.” Anthropic stated the labs “targeted Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding.”
The accusations surface amid discussions on enforcing export controls on advanced AI chips, a policy aimed at limiting China’s AI progress.
Distillation is a common training technique that AI labs use to create smaller, more cost-effective versions of their own models, but rivals can leverage it to essentially replicate another lab’s work. OpenAI recently sent a memo to House lawmakers accusing DeepSeek of using distillation to imitate its products.
DeepSeek first gained attention a year ago with its open-source R1 reasoning model, which nearly matched models from leading American labs in performance at a reduced cost. DeepSeek is expected to soon unveil its latest model, DeepSeek V4, which allegedly surpasses Anthropic’s Claude and OpenAI’s ChatGPT at coding.
The scale of each attack varied. Anthropic identified over 150,000 exchanges from DeepSeek aimed at enhancing foundational logic and alignment, specifically around censorship-safe alternatives to sensitive queries.
Moonshot AI conducted more than 3.4 million exchanges focusing on agentic reasoning and tool use, coding and data analysis, computer-use agent development, and computer vision. The company recently launched a new open-source model, Kimi K2.5, along with a coding agent.
MiniMax’s 13 million exchanges targeted agentic coding, tool use, and orchestration. Anthropic reported that it observed MiniMax redirecting nearly half of its traffic toward extracting capabilities from the newest Claude model when it launched.
Anthropic says it plans to keep investing in defenses that make distillation attacks harder to conduct and easier to detect, and it is calling for “a coordinated response across the AI industry, cloud providers, and policymakers.”