Artificial intelligence company Anthropic has accused several Chinese technology firms of using thousands of fraudulent accounts to extract and replicate capabilities from its flagship AI model Claude, escalating tensions between US and Chinese AI developers at a time when Washington is debating stricter export controls on advanced semiconductor technology.
According to reporting by TechCrunch, Anthropic claims that entities linked to Chinese AI labs including DeepSeek, Moonshot AI, and MiniMax created approximately 24,000 fake user accounts to interact with Claude’s systems. The alleged goal was to distill or reverse-engineer aspects of the model’s reasoning and response behavior, a practice that involves training a new model on the outputs of a more advanced one.
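In broad terms, the first stage of distillation is simply harvesting prompt-and-response pairs from the stronger "teacher" model to use as training data for a smaller "student". The sketch below illustrates that data-collection step only; the `teacher_model` function is a hypothetical stand-in for a provider's API, not any real endpoint.

```python
import json

def teacher_model(prompt: str) -> str:
    # Hypothetical stand-in for a proprietary model's API.
    # A real harvesting pipeline would call the provider's endpoint here.
    return f"Answer to: {prompt}"

def harvest(prompts: list[str]) -> list[dict]:
    # Collect prompt/response pairs -- the raw material a student
    # model would later be fine-tuned on.
    return [{"prompt": p, "response": teacher_model(p)} for p in prompts]

dataset = harvest(["What is 2+2?", "Define entropy."])
print(json.dumps(dataset[0]))
```

Done at the scale alleged here, across thousands of accounts, this kind of automated collection is exactly what provider terms of service typically prohibit.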
Anthropic has not publicly released detailed forensic evidence, but the company said it identified patterns of coordinated activity that suggested automated querying and large-scale harvesting of Claude’s responses. Distillation, while widely used within companies to compress and optimize internal models, becomes controversial when it involves extracting insights from a competitor’s proprietary system without authorization.

The accusations come as the US government continues to evaluate how best to limit China’s access to cutting-edge AI chips and related technologies. Policymakers in Washington have argued that advanced semiconductors power not only commercial AI systems but also potential military applications, prompting export restrictions aimed at slowing China’s technological rise. The debate has intensified as generative AI capabilities expand rapidly across sectors including defense, finance, healthcare and media.
Anthropic, which positions itself as a safety-focused AI research firm, has been vocal about responsible AI development and the risks associated with uncontrolled capability growth. Claude, its large language model family, competes with other major systems developed by US and international companies. Protecting proprietary training methods and model outputs is considered central to maintaining competitive advantage in the AI race.
The Chinese firms named in the allegations have not publicly confirmed the claims. China’s AI sector has grown quickly in recent years, supported by strong domestic demand, government backing and a highly competitive startup ecosystem. Companies such as DeepSeek, Moonshot AI and MiniMax have developed large language models tailored to Chinese language users and regional enterprise needs.

The broader issue of model distillation sits in a gray area of AI governance. Technically, once a model produces an output to a user query, that text can be read and analyzed. However, systematically querying a model at scale to recreate its underlying knowledge and reasoning structures may violate terms of service or intellectual property protections. AI firms increasingly deploy rate limits, identity verification and anomaly detection systems to prevent such scraping.
Anthropic’s allegations also intersect with geopolitical concerns. US officials have expressed worries that even if advanced chips are restricted, foreign firms could attempt to replicate frontier models through indirect means such as distillation. If successful, this approach could reduce the effectiveness of hardware export controls by narrowing the capability gap through software-driven workarounds.
At the same time, critics of sweeping export restrictions argue that overly aggressive controls could fragment the global technology ecosystem, disrupt supply chains and push Chinese firms to accelerate domestic semiconductor development. China has already invested heavily in building indigenous chip manufacturing capacity in response to earlier rounds of US restrictions.

The dispute underscores how intellectual property, national security and commercial rivalry are increasingly intertwined in the AI sector. As models become more capable and expensive to train, companies are likely to intensify efforts to safeguard their systems from misuse. Governments, meanwhile, face mounting pressure to balance open innovation with strategic competition.
Whether Anthropic’s claims lead to formal legal action or regulatory scrutiny remains to be seen. What is clear is that as AI advances, the battle over who controls the most powerful models and the infrastructure that powers them is moving beyond research labs and into the realm of international policy and economic strategy.

