Anthropic accuses three Chinese AI companies of setting up more than 24,000 fake accounts to tap its Claude AI models and improve their own.
The labs — DeepSeek, Moonshot AI and MiniMax — reportedly generated more than 16 million exchanges with Claude through these accounts using a technique called “distillation.” Anthropic said the labs “focused on Claude’s most diverse abilities: agentic reasoning, tool use, and coding.”
The accusations come amid debate over how strictly to enforce export controls on advanced AI chips, a policy aimed at curbing AI development in China.
Distillation is a common training method that AI labs use on their own models to create smaller, cheaper versions, but competitors can use it to essentially copy other labs’ homework. OpenAI sent letters to House lawmakers earlier this month accusing DeepSeek of using distillation to imitate its products.
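In its classic supervised form, distillation trains a small student model to match a larger teacher's temperature-softened output distribution rather than hard labels; the API-based "distillation attacks" described here instead harvest a model's responses as training data, but the underlying idea is the same. A minimal sketch of the soft-label objective (function names, logits, and the temperature value are all illustrative, not from the article):

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature; a higher T yields a softer distribution,
    # exposing more of the teacher's "dark knowledge" about wrong answers.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions:
    # the student is nudged toward the teacher's full output distribution,
    # not just its top answer.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy example: the teacher is confident in class 0; the student is not yet.
teacher = [4.0, 1.0, 0.2]
student = [2.0, 2.0, 0.5]
loss = distillation_loss(teacher, student)  # positive while they disagree
```

In practice this loss is minimized over many teacher outputs, which is why access to millions of a frontier model's responses is valuable to a competitor.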
DeepSeek first made waves a year ago when it released its open-source reasoning model R1, which nearly matched the performance of America’s frontier labs at a fraction of the cost. DeepSeek is expected to soon release DeepSeek V4, its latest model that can reportedly beat Anthropic’s Claude and OpenAI’s ChatGPT in coding.
The scale of each attack varied. Anthropic tracked more than 150,000 exchanges from DeepSeek that appeared focused on improving underlying logic and alignment, specifically around censorship-safe alternatives to policy-sensitive queries.
Moonshot AI logged more than 3.4 million exchanges focused on agentic reasoning and tool use, coding and data analysis, agents for desktop use, and computer vision. Last month, the company released a new open-source Kimi K2.5 model and a coding agent.
MiniMax's 13 million exchanges focused on agentic coding, tool use, and orchestration. Anthropic said it was able to observe MiniMax in action when the company diverted nearly half of its traffic to siphon from the latest Claude model at launch.
Anthropic says it will continue to invest in defenses that make distillation attacks harder to execute and identify, but calls for a “coordinated response across the AI industry, cloud providers and policymakers.”
The distillation attacks come at a time when US chip exports to China are still hotly debated. Last month, the Trump administration formally allowed US companies like Nvidia to export advanced AI chips (like the H200) to China. Critics have argued that this loosening of export controls boosts China’s AI computing capacity at a critical time in the global race for AI dominance.
Anthropic says the scale of extraction by DeepSeek, MiniMax and Moonshot “requires access to advanced chips.”
“Distillation attacks therefore reinforce the logic of export controls: limited access to chips limits both direct model training and the extent of illicit distillation,” the Anthropic blog says.
Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think tank and co-founder of CrowdStrike, told TechCrunch that he was not surprised by the attacks.
“It’s been clear for some time that one of the reasons for the rapid progress of China’s AI models was theft through distillation of US frontier models. We now know that for a fact,” Alperovitch said. “This should give us even more compelling reasons to refuse to sell any AI chips to any of these companies, which would only advantage them further.”
Anthropic also said that distillation not only threatens America’s AI dominance but could also create national security risks.
“Anthropic and other US companies are building systems to prevent state and non-state actors from using AI to, for example, develop biological weapons or conduct malicious cyber activities,” the Anthropic blog says. “Models made through illicit distillation are unlikely to retain these safeguards, meaning that dangerous capabilities can proliferate with safeguards completely removed.”
Anthropic pointed to authoritarian governments deploying frontier AI for things like “offensive cyber operations, disinformation campaigns and mass surveillance,” a risk that is multiplied if these models are open source.
TechCrunch has reached out to DeepSeek, MiniMax and Moonshot for comment.