Anthropic claims 3 Chinese companies ripped it off, using its AI tools to train their models: ‘How the turn tables’
Anthropic has accused three prominent Chinese artificial intelligence firms of using its Claude chatbot on a massive scale to secretly train rival models, an unexpected development in a years-long global debate over where fraud ends and industry standard practice begins.
In a blog post on Monday, San Francisco–based Anthropic alleged that Chinese labs DeepSeek, Moonshot AI, and MiniMax violated its terms of service through their interactions with Claude, its flagship AI assistant. “We have identified industrial-scale campaigns by three AI laboratories—DeepSeek, Moonshot, and MiniMax—to illicitly extract Claude’s capabilities to improve their own models,” the company said. “These labs generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions.”
According to Anthropic, the Chinese companies relied on a technique known as “distillation,” in which one model is trained on the outputs of another, often a more capable system. The campaigns allegedly focused on areas that Anthropic considers key differentiators for Claude, including complex reasoning, coding assistance, and tool use.
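In code terms, distillation boils down to harvesting a stronger model's outputs and using them as training labels for a weaker one. The sketch below is a minimal, hypothetical illustration of that loop, assuming a toy stand-in for the teacher model; the function names (`toy_teacher`, `collect_training_pairs`) are illustrative and do not correspond to any real API.

```python
def toy_teacher(prompt: str) -> str:
    """Stand-in for a more capable model's API: returns a canned answer.
    In real distillation this would be a call to the teacher model."""
    return f"Step-by-step answer to: {prompt}"

def collect_training_pairs(prompts):
    """Query the teacher and keep (prompt, output) pairs.
    These pairs become the supervised training data for the student model."""
    return [(p, toy_teacher(p)) for p in prompts]

# Harvest teacher outputs for a batch of prompts.
pairs = collect_training_pairs(["What is 2+2?", "Sort a list in Python"])
# A student model would then be fine-tuned on `pairs` in place of
# (or alongside) human-written training data.
```

The controversy is not the technique itself, which labs routinely apply to their own models, but running this harvesting loop at scale against a competitor's system in violation of its terms of service.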
Anthropic argues that while distillation is a “widely used and legitimate training method,” the Chinese firms’ use of it in this manner may have been “for illicit purposes.” Using sprawling networks of fake accounts to replicate a competitor’s proprietary model violates its terms of service and undermines U.S. export controls aimed at constraining China’s access to cutting‑edge AI, Anthropic said, urging “rapid, coordinated action among industry players, policymakers, and the global AI community.”
Anthropic has faced its own accusations of improper data acquisition, if not quite distillation: thousands of authors alleged in a copyright lawsuit that the company downloaded books in bulk from shadow libraries to train its AI models, rather than buying copies and scanning them itself. In a historic move, Anthropic settled that lawsuit for $1.5 billion in September 2025, paying authors around $3,000 per book for roughly 500,000 works.
How the Chinese firms are accused of doing it
The company claims the three labs bypassed geofencing and business restrictions that limit Claude’s commercial availability in China by routing traffic through proxy services that resell access to major Western AI models. One such “hydra cluster,” Anthropic said, operated tens of thousands of accounts simultaneously to spread requests across different API keys and cloud providers.
Once those accounts were in place, the labs allegedly scripted long, high‑token conversations designed to extract detailed, step‑by‑step answers that could be fed back into their own systems as training data. In Anthropic’s telling, the result was an off‑the‑books pipeline that turned Claude into an unwilling teacher for models being developed inside China’s increasingly competitive AI sector.
Anthropic has not yet announced specific lawsuits against the three companies, but it has signaled that it has cut off known access points and is urging Washington to tighten export controls on advanced chips and AI services to prevent similar efforts in the future.
‘How the turn tables’
If Anthropic hoped for sympathy, the reaction online and among industry watchers has been notably skeptical. Commentators quickly pointed out that Anthropic itself has faced high‑profile accusations of overreach in its own data collection practices beyond the authors’ copyright case, including a separate lawsuit over scraping Reddit content. “How the turn tables,” wrote a commenter on the Reddit thread r/singularity, a deliberate malapropism popularized by a meme from the television show The Office.
Behind the sniping lies a broader fight over who sets the rules for an industry built on remixing human work. U.S. firms such as Anthropic and OpenAI have increasingly pushed for aggressive enforcement against foreign competitors they accuse of copying proprietary systems, even as they defend their own sprawling data collection under the banner of fair use.
Chinese labs, many of which release more open‑source models, are racing to close the performance gap with Western rivals using any legal advantage they can find. With Washington already debating tighter restrictions on exporting AI chips and cloud services to China, Anthropic’s allegations are likely to feed calls for new guardrails—while giving critics one more chance to note the uncomfortable symmetry at the heart of modern AI.
For this story, Fortune journalists used generative AI as a research tool. An editor verified the accuracy of the information before publishing.
This story was originally featured on Fortune.com