Anthropic, a leading developer in the artificial intelligence sector, has published a detailed report identifying what it describes as systematic efforts by prominent Chinese AI laboratories to illicitly extract proprietary technology from its Claude models. The San Francisco-based firm revealed that three major Chinese entities—DeepSeek, Moonshot, and MiniMax—engaged in coordinated campaigns to bypass security protocols and harvest model data. According to the company, these organizations utilized approximately 24,000 fraudulent accounts to facilitate more than 16 million exchanges with Claude, aiming to enhance their own large language models (LLMs) using Anthropic’s internal logic and output patterns.
The disclosure marks a significant escalation in the ongoing technological rivalry between U.S. and Chinese AI developers. Anthropic characterized these activities as "industrial-scale campaigns" designed to circumvent regional access restrictions and terms of service. By flooding the system with automated queries through a massive network of fake identities, the Chinese firms allegedly attempted to perform "distillation attacks," a sophisticated method of reverse-engineering the capabilities of a frontier AI model.
The report specifically names DeepSeek, Moonshot, and MiniMax as the primary actors behind these efforts. Moonshot and MiniMax are among the most well-funded AI startups in China, frequently counted among the so-called "Tigers" of the Chinese AI industry, while DeepSeek has recently gained international attention for its high-performance models, which some industry analysts previously noted shared striking similarities with Western counterparts in terms of reasoning and formatting.
Distillation is a legitimate and common technique within the AI industry when used by a developer on its own technology. It involves training a smaller, more efficient model using the outputs of a larger, more complex "teacher" model. This allows companies to create faster and cheaper versions of their technology for consumer use. However, when an outside entity performs this process without permission, it is viewed as a form of intellectual property theft.
In an illicit distillation attack, a competitor sends millions of carefully crafted prompts to a rival model. By analyzing the nuanced responses, the competitor can effectively "steal" the underlying reasoning, safety guardrails, and knowledge base of the target system. This allows the attacking firm to develop advanced AI capabilities at a fraction of the cost and time required for original research and development. Anthropic’s report suggests that by siphoning Claude’s data, these Chinese labs sought to bridge the technological gap between their models and those developed in the United States.
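Mechanically, distillation trains the student to match the teacher's full output distribution rather than a single "correct" answer. A minimal sketch of the core loss (the names and toy logits here are illustrative, not taken from any lab's actual training code) is a temperature-scaled cross-entropy between teacher and student outputs:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher temperatures produce softer,
    # more informative distributions for the student to imitate.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy between the teacher's softened distribution and the
    # student's, averaged over the batch. Minimizing this pushes the student
    # to reproduce the teacher's nuanced preferences, not just its top answer.
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature))
    return float(-(p_teacher * log_p_student).sum(axis=-1).mean())

# A student whose logits agree with the teacher scores a lower loss
# than one that ranks the same options in reverse.
teacher = np.array([[4.0, 1.0, 0.5]])
good_student = np.array([[4.0, 1.0, 0.5]])
bad_student = np.array([[0.5, 1.0, 4.0]])
print(distillation_loss(teacher, good_student) < distillation_loss(teacher, bad_student))
```

An attacker querying a hosted model sees only sampled text rather than raw logits, so illicit distillation typically fine-tunes on the responses themselves, but the objective is the same: imitate the teacher's behavior at scale.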

The scale of the operation described by Anthropic suggests a high level of sophistication and resource allocation. To manage 24,000 accounts and generate 16 million interactions, the entities likely employed automated scripts and cloud infrastructure designed to mimic human behavior. This method was intended to evade Anthropic’s standard rate-limiting and fraud-detection systems, which are designed to flag and block suspicious patterns of high-volume usage.
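To see why spreading traffic across 24,000 accounts defeats conventional defenses, consider the shape of a typical per-account rate limiter. The sketch below is a generic sliding-window design, not Anthropic's actual system; the class and parameter names are hypothetical:

```python
import time
from collections import defaultdict, deque

class SlidingWindowRateLimiter:
    """Blocks an account once it exceeds max_requests within window_seconds."""

    def __init__(self, max_requests=100, window_seconds=60.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.history = defaultdict(deque)  # account_id -> request timestamps

    def allow(self, account_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[account_id]
        # Evict timestamps that have aged out of the window.
        while q and now - q[0] > self.window_seconds:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # this account is over its budget
        q.append(now)
        return True

limiter = SlidingWindowRateLimiter(max_requests=3, window_seconds=10.0)
print([limiter.allow("acct-1", now=t) for t in (0, 1, 2, 3)])  # [True, True, True, False]
```

Because the budget is tracked per account, an operation that rotates requests across tens of thousands of identities keeps every individual account comfortably under threshold, which is why catching such campaigns requires aggregate behavioral analysis rather than per-account counters alone.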
The timing of this exposure is particularly sensitive, as it follows similar allegations made by OpenAI earlier this year. In January, OpenAI accused DeepSeek of engaging in similar distillation practices to improve its models. At the time, the incident sparked a heated debate within the tech community regarding the ethics of model training and the definition of intellectual property in the age of generative AI.
The controversy is further complicated by the broader debate over copyright and "fair use" in the AI industry. Many U.S. AI companies, including Anthropic and OpenAI, are currently facing lawsuits from authors, news organizations, and artists who allege that their copyrighted works were used to train LLMs without permission. Critics of the AI industry have pointed out a perceived hypocrisy: American firms argue that scraping the public internet for training data is "fair use," yet they characterize the scraping of their own model outputs as "theft."
This tension was highlighted during an AI summit in July 2025, where President Donald Trump defended the practices of U.S. tech firms. The President argued that American companies must be allowed to utilize all available information to maintain a competitive edge over China. "You can’t be expected to have a successful AI program when every single article, book, or anything else that you’ve read or studied, you’re supposed to pay for," Trump stated, adding that China was already ignoring such restrictions to accelerate its own development.
The geopolitical implications of these distillation attacks are profound. The U.S. government has increasingly viewed AI leadership as a matter of national security, implementing export controls on high-end semiconductors to slow China’s progress in the field. If Chinese firms can successfully bypass these hurdles by simply "distilling" the intelligence of American models, the effectiveness of U.S. trade policy could be severely undermined. Anthropic’s report explicitly framed these attacks as a national security concern, urging for a coordinated response between the private sector and government agencies.
The financial stakes are equally high. Companies like Anthropic, OpenAI, Meta, and Google are currently engaged in a massive capital expenditure cycle, spending tens of billions of dollars on data centers, specialized chips, and high-quality training data. The cost of developing a frontier model like Claude or GPT-4 is estimated to be in the hundreds of millions, if not billions, of dollars. If a rival can replicate those capabilities for a few million dollars through distillation, the original developer’s competitive advantage evaporates, threatening the economic model of the entire industry.

Anthropic noted that the campaigns it detected were not isolated incidents but part of a growing trend of increasing intensity and sophistication. The company stated that it has implemented new defensive measures to identify and block distillation attempts, but it acknowledged that as AI models become more capable, the methods used to exploit them will also evolve. The firm called for the establishment of industry-wide standards and shared intelligence to combat what it views as a global threat to AI safety and innovation.
Industry analysts suggest that the battle over model distillation will likely move into the legal and diplomatic spheres. While Anthropic’s terms of service clearly prohibit the use of its outputs to train competing models, enforcing those terms against foreign entities—particularly those operating in jurisdictions with different intellectual property frameworks—remains a daunting challenge. There is currently no international treaty that specifically addresses the legality of AI model distillation, leaving companies with few remedies beyond technical blocks and public exposure.
The impact of these revelations on the public perception of AI remains to be seen. While some see it as a clear-cut case of corporate espionage, others view it as an inevitable consequence of an industry built on the massive ingestion of data. The tension between the "open" nature of AI research and the "closed" nature of commercial model development continues to sharpen as the global race for AI supremacy accelerates.
As the industry moves forward, the focus will likely shift toward more robust "digital watermarking" and fingerprinting techniques. These technologies would allow developers to embed invisible markers in their model’s outputs, making it easier to prove when a competitor has used that data for training purposes. However, such measures are still in their infancy and can often be stripped away by sophisticated attackers.
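One family of proposed schemes works by deterministically biasing token choices so that generated text carries a statistical signature a detector can later measure. The toy sketch below illustrates the idea of a "green-list" watermark; it is not any vendor's production scheme, and the vocabulary and helper names are invented for illustration:

```python
import hashlib

def green_list(prev_token, vocab, fraction=0.5):
    # Deterministically split the vocabulary based on the previous token,
    # so generator and detector agree without coordinating in advance.
    def score(tok):
        return hashlib.sha256(f"{prev_token}|{tok}".encode()).digest()[0]
    ranked = sorted(vocab, key=score)
    return set(ranked[: int(len(ranked) * fraction)])

def green_fraction(tokens, vocab):
    # Detection: measure how often each token falls in its predecessor's
    # green list. Unwatermarked text hovers near the baseline fraction;
    # watermarked text scores far above it.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev, vocab))
    return hits / max(len(tokens) - 1, 1)

# A generator that always samples from the green list produces text
# whose green fraction is 1.0, well above the 0.5 chance baseline.
vocab = [f"tok{i}" for i in range(50)]
tokens = ["tok0"]
for _ in range(20):
    tokens.append(sorted(green_list(tokens[-1], vocab))[0])
print(green_fraction(tokens, vocab))  # 1.0
```

The article's caveat applies directly here: paraphrasing or re-tokenizing the output scrambles the token sequence the detector depends on, which is one reason such markers can be stripped by sophisticated attackers.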
Anthropic’s decision to name DeepSeek, Moonshot, and MiniMax publicly serves as a warning to the broader tech community. It signals that the era of quiet observation is over and that U.S. AI firms are now willing to engage in public confrontation to protect their intellectual property. The company’s blog post concluded with a call for rapid action, stating that the "window to act is narrow" and that the future of the global AI landscape depends on the ability of democratic nations to secure their technological breakthroughs.
For now, the three Chinese firms named in the report have not issued formal responses to the allegations. The situation remains fluid, with U.S. policymakers likely to use the Anthropic report as further evidence for the necessity of stricter regulations on international AI interactions. As the technological divide between the East and West grows, the security of the data flowing through these advanced systems will remain a central point of contention for years to come.