Anthropic alleges that DeepSeek and two other Chinese AI companies misused its Claude AI model to improve their own products. In Monday's announcement, Anthropic says the "industrial-scale campaigns" involved the creation of approximately 24,000 fraudulent accounts and more than 16 million exchanges with Claude, as first reported by The Wall Street Journal.
The three companies – DeepSeek, MiniMax and Moonshot – are accused of "distilling" Claude, or training a smaller AI model based on a more advanced one. Although Anthropic says distillation is a "legitimate training method," it adds that it can "also be used for illicit purposes," including "obtaining powerful capabilities from other labs at a fraction of the time and cost of developing them independently."
Anthropic adds that illicitly distilled models are "unlikely" to carry over its existing safeguards. "Foreign labs that distill US models can then feed these vulnerable capabilities into military, intelligence, and surveillance systems—allowing authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance," Anthropic writes.
DeepSeek, which has caused a stir in the AI industry with its powerful but more efficient models, conducted more than 150,000 exchanges with Claude, focusing on its reasoning abilities, Anthropic reports. It is also accused of using Claude to create "censorship-safe alternatives to politically sensitive questions about dissidents, party leaders or authoritarianism." In a letter to lawmakers last week, OpenAI similarly accused DeepSeek of "continuing efforts to parasitize capabilities developed by OpenAI and other frontier labs in the US."
Moonshot and MiniMax had more than 3.4 million and 13 million exchanges with Claude, respectively. Anthropic is calling on other members of the AI industry, cloud providers, and lawmakers to address distillation, adding that "restricted access to chips" could limit model training and "the scope of illicit distillation."