🔴 Breaking · AI & Technology

Anthropic Says Chinese AI Labs Used 24,000 Fake Accounts to Copy Claude

DeepSeek, Moonshot AI, and MiniMax allegedly made more than 16 million coordinated prompts to extract and replicate Claude's capabilities — what Anthropic is calling industrial-scale distillation.

ObjectWire Technology Desk
February 23, 2026 · 5 min read

Anthropic alleged on Monday that three Chinese artificial intelligence companies — DeepSeek, Moonshot AI, and MiniMax — used roughly 24,000 fraudulent accounts to extract capabilities from its Claude chatbot in a coordinated effort the company described as industrial-scale distillation, according to a blog post and a report first obtained by Fox News Digital.

The three firms prompted Claude more than 16 million times, siphoning outputs to train and improve their own AI systems, Anthropic said. The scale of each company's activity varied significantly — with MiniMax responsible for the vast majority of the interactions — according to reporting by The Wall Street Journal.


The Scale of the Operation: How 16 Million Prompts Break Down

The sheer volume of interactions reported by Anthropic signals that this was not opportunistic misuse but a deliberate, structured data extraction campaign. According to The Wall Street Journal's reporting on the Anthropic disclosure:

Alleged Claude Prompts by Company

[Chart: alleged prompt volume by company, with bars scaled relative to the MiniMax total. Source: Anthropic blog post and Wall Street Journal reporting, February 2026.]

  • MiniMax — accounted for more than 13 million of the total interactions, by far the largest share
  • Moonshot AI — responsible for more than 3.4 million prompts
  • DeepSeek — linked to approximately 150,000 interactions

Together, the three companies' alleged activity accounts for the bulk of the reported 16 million prompts. Anthropic says the fraudulent accounts were used to circumvent its rate limits, usage policies, and detection mechanisms — effectively disguising an allegedly coordinated scraping operation as organic user traffic.

What Is AI Model Distillation — and Why Does It Matter?

Model distillation is a legitimate machine learning technique in which a smaller, more efficient "student" model is trained to mimic the outputs of a larger "teacher" model. When done with authorization or on openly licensed models, it is a standard part of the AI development toolkit.

What Anthropic is alleging is the unauthorized version: systematically querying a proprietary model at scale and using those outputs as training data without consent, in violation of terms of service. The resulting "distilled" model inherits behavioral characteristics, reasoning patterns, and factual knowledge from the source model — effectively transferring competitive capability without building it from scratch.
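In code terms, the core of a distillation pipeline is straightforward: query the teacher model, record each prompt-response pair, then fine-tune a student model on the collected pairs. The sketch below is illustrative only — the `teacher` callable stands in for API requests to a proprietary model, and none of the names come from Anthropic's report:

```python
def collect_distillation_pairs(teacher, prompts):
    """Query a teacher model and record (prompt, response) training pairs.

    In the kind of campaign Anthropic alleges, `teacher` would be an API
    call to the target model, spread across many accounts to stay under
    per-account rate limits.
    """
    pairs = []
    for prompt in prompts:
        response = teacher(prompt)
        pairs.append({"prompt": prompt, "response": response})
    return pairs


# Mock teacher standing in for a proprietary model's API (illustrative).
def mock_teacher(prompt):
    return f"Answer to: {prompt}"


prompts = ["Explain recursion.", "Write a sorting function.", "Summarize this text."]
dataset = collect_distillation_pairs(mock_teacher, prompts)

# Each pair then becomes supervised fine-tuning data for the "student":
#   student.train(inputs=[p["prompt"] for p in dataset],
#                 targets=[p["response"] for p in dataset])
```

The design point is that the expensive part — the teacher's capability — is captured entirely in the recorded responses, which is why volume matters: more pairs, across more task types, transfer more of the teacher's behavior.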

📊 Why 16 million prompts matter: At that volume, with carefully designed queries, a company can extract enough high-quality input-output pairs to meaningfully influence a model's capabilities across reasoning, instruction-following, coding, and general knowledge — the core differentiators between frontier AI models.

The technique has become a flashpoint in the AI industry since open-source and commercial AI companies began trading accusations about whose training data came from whose outputs. Earlier controversies around models like Alpaca (which was distilled from GPT-3.5) established the practice as both technically viable and legally contested.

Who Are DeepSeek, Moonshot AI, and MiniMax?

DeepSeek

DeepSeek is a Chinese AI lab backed by hedge fund High-Flyer that attracted global attention in January 2025 when its DeepSeek-R1 model posted competitive benchmark results against leading US frontier models at a fraction of the reported training cost. The release triggered a sharp selloff in US AI-related stocks and prompted widespread discussion about whether American AI companies held a sustainable compute advantage. The roughly 150,000 prompts attributed to DeepSeek are the smallest share of the three — though even that scale is significant for targeted capability extraction.

Moonshot AI

Moonshot AI is a Beijing-based startup best known for its Kimi AI assistant, which has attracted significant user growth in China. The company has raised hundreds of millions in venture funding and is positioned as one of the leading Chinese consumer-facing AI products. The alleged 3.4 million Claude prompts represent a substantial extraction effort relative to the company's public profile.

MiniMax

MiniMax is a Shanghai-based AI company that operates multimodal models under the Abab series and has built consumer and enterprise AI products. The company is reportedly a unicorn valued at several billion dollars. With over 13 million alleged interactions — more than 80% of the reported total — MiniMax's role in this operation, if confirmed, represents by far the most aggressive extraction activity of the three named companies.

How Anthropic Detected and Disclosed the Activity

Anthropic disclosed the activity via a blog post published Monday and in materials first shared with Fox News Digital. The company did not detail the specific technical methods used to detect and attribute the fraudulent accounts to particular organizations, but said it identified the pattern as part of ongoing efforts to monitor its platform for terms-of-service violations and systematic abuse.

The disclosure follows a broader pattern of AI companies hardening their detection capabilities against coordinated scraping. Anthropic, OpenAI, and others have invested significantly in behavioral analysis to distinguish legitimate API use from systematic data extraction — though the arms race between detection and evasion via distributed fake accounts remains ongoing.
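Anthropic has not published its detection methods, but the general shape of this kind of behavioral analysis can be sketched as a heuristic over request logs: accounts that send very high volumes of near-identical, templated prompts look scripted rather than human. All thresholds and field names below are illustrative assumptions, not Anthropic's actual pipeline:

```python
from collections import Counter

def flag_suspicious_accounts(request_log, volume_threshold=1000, template_share=0.8):
    """Flag accounts whose traffic looks like scripted extraction.

    request_log: list of (account_id, prompt_template) tuples, where the
    template is the prompt with variable content normalized away.
    An account is flagged if it is both high-volume and dominated by a
    single prompt template — a crude proxy for automated querying.
    """
    totals = Counter(acct for acct, _ in request_log)
    templates = {}
    for acct, tmpl in request_log:
        templates.setdefault(acct, Counter())[tmpl] += 1

    flagged = []
    for acct, total in totals.items():
        top_count = templates[acct].most_common(1)[0][1]
        if total >= volume_threshold and top_count / total >= template_share:
            flagged.append(acct)
    return flagged


# Tiny demo: one scripted account, one organic-looking user.
log = [("acct_fake", "Solve: {}")] * 50 + [("acct_user", "q1"), ("acct_user", "q2")]
flagged = flag_suspicious_accounts(log, volume_threshold=50, template_share=0.9)
```

Real detection systems presumably combine many such signals (timing, payment metadata, prompt entropy) precisely because any single threshold is easy to evade by splitting traffic across more accounts — which is one plausible reason a campaign would need 24,000 of them.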

It is not yet known whether Anthropic has taken or plans to take legal action against any of the three companies. The jurisdictional complexity of pursuing Chinese AI companies in US courts — combined with the difficulty of establishing direct legal standing for outputs that are not clearly copyrightable — makes litigation a challenging path. Anthropic has not publicly stated its intended next steps beyond the disclosure itself.

Broader Implications: AI Model Security and the Distillation Arms Race

The Anthropic disclosure lands at a charged moment in the geopolitics of AI development. US-China competition over frontier AI capabilities has intensified following export controls on advanced semiconductors and growing scrutiny of Chinese AI companies' methods and data sources. Whether this incident is characterized as corporate espionage, terms-of-service violation, or a gray area of competitive intelligence will depend heavily on legal and political framing.

For the broader AI industry, the incident highlights a structural tension: the more capable and accessible a model is — especially through low-cost or free tiers — the more attractive it becomes as a distillation target. Anthropic's Claude is among the highest-quality commercially available models, making it a natural target for any organization seeking to bootstrap comparable capabilities without the underlying research investment.

The distillation dilemma: Restricting API access limits both legitimate use and extraction. Making models freely available accelerates adoption but also accelerates competitive imitation. There is no configuration that solves both problems simultaneously — which is why detection and disclosure, rather than access restriction alone, is increasingly the industry's primary defense.

The incident is also likely to accelerate calls in Washington for tighter controls on how US-developed AI systems can be accessed by foreign entities — adding another dimension to an already complex regulatory environment around AI export controls, cloud infrastructure access, and data sovereignty.


Disclaimer: This article is based on public reporting from Anthropic's blog post and coverage by Fox News Digital and The Wall Street Journal. None of the three named companies have been charged with any crime. This article is for informational and journalistic purposes only.

Tags

#Anthropic #Claude #DeepSeek #MoonshotAI #MiniMax #AIDistillation #ChinaAI #AISecurity #LLM #APIAbuse #FakeAccounts #AIIntellectualProperty #GenerativeAI


Written by

ObjectWire Technology Desk

AI Reporter

Part of ObjectWire coverage