Leading US artificial intelligence company Anthropic said on 24 February 2026 that it had uncovered campaigns by three Chinese AI firms to illegally extract capabilities from its Claude chatbot, in what it described as industrial-scale intellectual property theft. OpenAI leveled similar charges in January 2026.
Anthropic said Chinese frontier labs DeepSeek, Moonshot AI and MiniMax used a technique known as "distillation": using outputs from a more powerful AI system to rapidly boost the performance of a less capable one. The company added that the campaigns were growing in intensity and sophistication, and that the window to act was narrow.
Distillation is a common practice within AI development, often used by companies to create cheaper, smaller versions of their own models. But with the US and China locked in a geopolitical battle for AI dominance, the technique has become a bone of contention.
1. Anthropic's allegation of industrial-scale distillation: what exactly is claimed?
Anthropic, the U.S. AI company behind the Claude large language model, has publicly accused three Chinese AI developers — DeepSeek, Moonshot AI, and MiniMax — of conducting “industrial-scale distillation campaigns” on its Claude systems. The company claims these labs used tens of thousands of fraudulent accounts to generate millions of interactions with Claude, extracting outputs to train or enhance their own models.
2. The technique called 'Distillation'
Distillation is a legitimate and widely used machine learning approach where a smaller “student” model is trained on the outputs of a larger “teacher” model, enabling efficiency improvements. However, Anthropic says in this case it was used as a shortcut to replicate proprietary capabilities without authorization, and conducted at a scale that exceeds normal research or competitive benchmarking.
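Conceptually, distillation trains the student to imitate the teacher's softened output distribution rather than hard labels. A minimal sketch of the core objective, temperature-softened softmax plus KL divergence, is below; the logits and temperature are illustrative values, not anything from Anthropic's or the labs' actual pipelines:

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature; higher T yields a softer, more
    # informative distribution over classes (the "dark knowledge").
    scaled = [z / temperature for z in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between softened teacher and student distributions,
    # the standard knowledge-distillation objective. Zero when the
    # student exactly matches the teacher.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [4.0, 1.0, 0.2]   # hypothetical teacher logits for one input
student = [2.5, 1.5, 0.5]   # hypothetical student logits for the same input
loss = distillation_loss(teacher, student)
```

In a real pipeline this loss would be minimized over many inputs by gradient descent on the student's parameters; when the "teacher" is a third party's API, each such input is one of the queries at issue in Anthropic's complaint.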
3. Scale and Scope: Millions of queries
According to Anthropic's complaint, the distillation campaigns involved about 24,000 fake accounts and more than 16 million interactions with Claude, a volume Anthropic describes as coordinated and sustained. MiniMax alone reportedly accounted for the majority of those interactions, suggesting the efforts were not accidental or limited.
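Taking the reported round figures at face value, the averages themselves suggest sustained automation rather than casual use:

```python
# Figures as reported from Anthropic's complaint (approximate round numbers)
interactions = 16_000_000
accounts = 24_000

# Average load per fake account over the campaign
per_account = interactions / accounts   # roughly 667 queries per account
```

Several hundred queries per account, across tens of thousands of accounts, is the kind of volume Anthropic characterizes as coordinated and sustained.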
4. Circumventing access restrictions
Claude is not commercially available within mainland China; Anthropic has policies that restrict access in certain jurisdictions for safety and regulatory reasons. The company alleges that the Chinese labs used proxy services and networks to bypass regional access controls and service terms, allowing them to funnel large volumes of queries through Claude.
5. National security framing
Anthropic has framed the issue as more than commercial competition. The company has warned that models trained on unauthorized distilled outputs could bypass safety guardrails and be used for harmful purposes, including surveillance, cyber operations, or other national security threats. This feeds into ongoing debates about AI governance, export controls, and technology protection policies.
6. Collision with export controls
The charges come at a time when the U.S. government has restricted exports of advanced AI hardware, especially high-performance GPUs, to China. Anthropic argues that activities like unauthorized distillation undermine these export controls by effectively transferring advanced capabilities through model outputs rather than hardware. Critics see this as part of a broader policy and strategic tension in the AI race.
7. Industry and Academic reactions
Reactions among AI researchers and industry commentators have been mixed. Some see Anthropic's claims as a wake-up call about intellectual property and safety in open model ecosystems. Others note that distillation is a standard technique and that drawing the line between research and theft is complex, especially when foundational models are publicly accessible through APIs. Still others bluntly point out that Anthropic itself has been accused of scraping the internet for training data without paying anyone at all, and has settled lawsuits arising from those charges.
8. Broader AI competition Between the U.S. and China
This dispute fits into the larger context of AI competition between the U.S. and China, where both sides are rapidly building large language models and AI capabilities. Previous allegations have involved similar accusations from other U.S. tech firms, and China has defended its own growing AI sector as following common global practices. The strategic backdrop includes billions in funding, government policy interventions, and geopolitical tensions.
9. Technical and ethical questions on model use
Beyond geopolitics, the episode sparks deeper ethical and technical questions about how models should be accessed and used. At what point does training on another model’s outputs become unfair appropriation? What safeguards should API providers build to detect and block misuse? Experts point out that the line between research, competition, and exploitation is still evolving as AI systems get more capable.
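On the safeguards question, one plausible building block is per-account throttling with a sliding window, which catches the sustained high-volume pattern described above. This is a toy sketch of that general idea, not a description of Anthropic's actual defenses; class and parameter names are invented for illustration:

```python
import time
from collections import defaultdict, deque

class RateGuard:
    """Toy sliding-window rate limiter: one simple safeguard an API
    provider might layer with account vetting and anomaly detection."""

    def __init__(self, max_requests=100, window_seconds=60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # account_id -> request timestamps

    def allow(self, account_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[account_id]
        # Evict timestamps that have fallen out of the sliding window
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False        # over budget: block or flag for review
        q.append(now)
        return True
```

Real defenses against coordinated campaigns would also need to correlate activity across accounts, since the alleged pattern here is many accounts each staying under naive per-account limits.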
10. Next steps: Governance, Legal Action, and Industry Standards
Anthropic has called for coordinated action — including stricter platform defenses, collaboration with cloud providers, and stronger legal or regulatory frameworks — to curb unauthorized extraction of model capabilities. The issue is likely to be taken up in policy forums, standard-setting bodies, and possibly international AI governance discussions, as industry players weigh competitive interests against safety and IP norms.
Summary
Anthropic accused Chinese AI labs DeepSeek, Moonshot AI, and MiniMax of industrial-scale distillation from Claude, using fake accounts and millions of queries to extract capabilities. While distillation is a standard ML technique, Anthropic says this crossed into unauthorized IP extraction and security risk. The dispute sits within the US-China AI rivalry, export controls, and compute shortages. The case raises unresolved questions about API misuse, model IP, ethical boundaries, and how frontier AI capabilities diffuse despite access restrictions.
[The Billion Hopes Research Team shares the latest AI updates for learning and awareness. Various sources are used. All copyrights acknowledged. This is not professional, financial, personal, or medical advice. Please consult domain experts before making decisions. Feedback welcome!]
