You've Stolen Data Too: Elon Musk & The Internet Call Out Anthropic's Hypocrisy After Chinese Data Theft Announcement
· Free Press Journal

Anthropic's bombshell revelation that three Chinese AI labs conducted industrial-scale data theft against its Claude model was supposed to put the company in the role of victim. Instead, it has ignited a firestorm of accusations against Anthropic itself - with Elon Musk leading the charge and much of the internet not far behind.
Anthropic accuses Chinese AI companies of 'distillation attacks'
On Monday, Anthropic publicly accused three prominent Chinese AI laboratories - DeepSeek, Moonshot AI, and MiniMax - of orchestrating coordinated campaigns to siphon capabilities from its Claude models using tens of thousands of fraudulent accounts. The labs allegedly generated more than 16 million exchanges with Claude through those accounts using a technique called 'distillation,' targeting Claude's most differentiated capabilities: agentic reasoning, tool use, and coding.
The announcement was clearly designed to build pressure around US export controls on AI chips. Anthropic argued that "distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation."
Elon Musk calls out Anthropic's hypocrisy
The moral high ground Anthropic sought didn't last long. Within hours, Elon Musk - whose own AI venture xAI has faced similar questions about training data - highlighted Anthropic's own legal entanglements. Pointing to Anthropic's reported settlement with authors over the use of pirated books and an ongoing lawsuit from music publishers, Musk posted bluntly to X, "Anthropic is guilty of stealing training data at massive scale and has had to pay multi-billion dollar settlements for their theft. This is just a fact."
Anthropic is guilty of stealing training data at massive scale and has had to pay multi-billion dollar settlements for their theft. This is just a fact. https://t.co/EEtdsJQ1Op
— Elon Musk (@elonmusk) February 23, 2026
Community Notes on the platform piled on, flagging that Anthropic had settled a $1.5 billion lawsuit related to using millions of books from shadow libraries - including the notorious Library Genesis - to train Claude, and that the company faces a $3 billion lawsuit for allegedly torrenting tens of thousands of copyrighted songs.
the community notes COOKED anthropic pic.twitter.com/hn4BVeBff8
— NIK (@ns123abc) February 23, 2026
Critics were quick to note that Musk's own AI companies have faced analogous questions about data sourcing, making his intervention feel less like principled concern and more like opportunistic score-settling.
The Internet was not sympathetic
The reaction on social media was swift, sardonic, and largely unflattering to Anthropic. A user named Swapna Kumar Panda captured a sentiment that rippled across X, "When you guys train your model by bombarding others for free of cost, it's fine. But if others are training by paying your model, it's illegal? Unethical?"
Another user took a more darkly comic tone, writing, "Wow, that's very unfair. Maybe you can ask them to contribute to the huge bill you'll pay after scanning books without giving authors any rights. Hit me up for more problem solving free advices."
The irony of Anthropic, which stole everyone's data to train its AI only to sell it back to us, is now upset that a company stole that data from them just to give it back to the people for free. https://t.co/vXJdtnWx2z
— tt'Grex (@tt_Grex) February 24, 2026
Perhaps the sharpest critique came from Stephen Pimentel, who framed the controversy as a philosophical contradiction at the heart of the AI industry: "It is deeply ironic for AI labs to object to others distilling their models via public APIs. Since that is closely analogous to how the models are pretrained from data sets that are often under copyright. 'I can use others' data without permission; no one can use mine without permission.'"
I have to agree with Musk here pic.twitter.com/oYbaRnv4ML
— Muhammad Ali (@ali_agenticai0) February 23, 2026
This is unacceptable. Anthropic spent years painstakingly stealing data from the entire internet without paying anyone a single dollar.
They spent months carefully ignoring everyone else's IP and terms of service to enrich themselves.
How dare these Chinese AI companies come… https://t.co/WZD7CouLT3
— Sasha Yanshin (@sashayanshin) February 24, 2026
On X, another critic wrote, "You trained on the open internet and then call it 'distillation attacks' when others learn from you." Another user was less measured, "Ohhh nooo not my private IP, how dare someone use that to train an AI model, only Anthropic has the right to use everyone else's IP."
Ohhh nooo not my private IP how dare someone use that to train an AI model, only Anthropic has the right to use everyone elses IP nooooo, this cannot stand! https://t.co/HfLAnfOCZY
— Teknium (e/λ) (@Teknium) February 23, 2026
The most honest sentence in the entire AI industry right now is one nobody wants to say out loud.
Every major foundation model was trained on data its creators did not have explicit permission to use. Every single one.
Anthropic settled for $1.5 billion over 7 million pirated… https://t.co/B6F3woz7jm pic.twitter.com/s8CZ0k62Ns
— Shanaka Anslem Perera ⚡ (@shanaka86) February 23, 2026
AI labs, including Anthropic, routinely distill their own models to create smaller, cheaper versions for customers. But the same technique can be weaponised – a competitor can pose as a legitimate customer, bombard a frontier model with carefully crafted prompts, collect the outputs, and use those outputs to train a rival system.
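The query-collect-train loop described above can be sketched in a few lines. This is a deliberately toy illustration, not anyone's actual attack: the `teacher_model` function stands in for a commercial API, and the "student" is a trivial lookup table rather than the fine-tuned neural network a real distillation effort would produce.

```python
# Toy sketch of API-based distillation (all names are hypothetical).
# Step 1: query the "teacher" at scale. Step 2: collect transcripts.
# Step 3: fit a "student" to imitate the teacher's outputs.

def teacher_model(prompt: str) -> str:
    """Stand-in for a frontier model's paid API endpoint."""
    canned = {
        "2+2": "4",
        "capital of France": "Paris",
    }
    return canned.get(prompt, "I don't know.")

def harvest(prompts: list[str]) -> list[dict]:
    """Pose as a customer: send prompts, record prompt/response pairs."""
    return [{"prompt": p, "response": teacher_model(p)} for p in prompts]

def train_student(dataset: list[dict]) -> dict:
    """Fit a 'student' on the transcripts. Here that is a lookup table;
    a real attack would fine-tune a rival model on millions of pairs."""
    return {row["prompt"]: row["response"] for row in dataset}

dataset = harvest(["2+2", "capital of France"])
student = train_student(dataset)
print(student["capital of France"])  # the student now mimics the teacher
```

The point of the sketch is that nothing in the loop looks abnormal from the API's side: each request is an ordinary customer query, which is why Anthropic says the campaigns were spread across tens of thousands of accounts.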