The AI industry has spent three years in a war of all against all: OpenAI versus Anthropic versus Google, each poaching the others' researchers, racing to claim benchmark supremacy, and fighting over every enterprise contract in every city. So when all three quietly began sharing sensitive threat intelligence earlier this year through a nonprofit they co-founded, it was a moment almost no one had predicted. The enemy of my enemy, it turns out, can be China. And the consequences of that realization may reshape the global AI market more profoundly than any single model release in 2026.
What Actually Happened
In early April 2026, Bloomberg reported that OpenAI, Anthropic, and Alphabet's Google had formalized a previously informal arrangement: sharing threat intelligence about Chinese AI labs engaging in what researchers call "adversarial distillation attacks." The sharing happens through the Frontier Model Forum, the nonprofit industry body the three companies co-founded with Microsoft in 2023 to address AI safety standards and responsible development. The mechanism is modeled explicitly on how cybersecurity firms have exchanged threat data for decades: when one company detects a new attack pattern, it immediately flags that pattern for the others, allowing all parties to update their detection systems before the next wave hits. In practice, this means that a Chinese lab that crafts a prompt sequence capable of extracting high-value reasoning demonstrations from Claude without triggering Anthropic's safety systems will find those same prompts blocked by OpenAI and Google within hours, rather than weeks or months later, after each company has been victimized independently.
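Nothing technical about the Forum's pipeline has been published, but the cybersecurity analogy implies a familiar shape. The sketch below is a hypothetical illustration of that shape only: the indicator fields, the `SharedFeed` API, and the lab names are all invented for this article, not drawn from any disclosed Frontier Model Forum specification.

```python
# Hypothetical sketch of a cybersecurity-style indicator exchange; every
# name and field here is illustrative, not the Forum's actual protocol.
import hashlib
import time
from dataclasses import dataclass


@dataclass(frozen=True)
class ThreatIndicator:
    kind: str          # e.g. "prompt_sequence", "account_signature"
    pattern_hash: str  # hash of the raw pattern; raw prompts stay private
    observed_at: float
    source: str        # which lab reported it


class SharedFeed:
    """Broadcasts each new indicator to every participating lab."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, lab):
        self.subscribers.append(lab)

    def publish(self, indicator):
        for lab in self.subscribers:
            lab.ingest(indicator)


class Lab:
    """Keeps a local blocklist that the shared feed updates in real time."""

    def __init__(self, name, feed):
        self.name = name
        self.blocklist = set()
        feed.subscribe(self)

    def ingest(self, indicator):
        self.blocklist.add(indicator.pattern_hash)

    def detect_and_share(self, raw_pattern, kind, feed):
        digest = hashlib.sha256(raw_pattern.encode()).hexdigest()
        feed.publish(ThreatIndicator(kind, digest, time.time(), self.name))


feed = SharedFeed()
labs = [Lab(name, feed) for name in ("lab_a", "lab_b", "lab_c")]
labs[0].detect_and_share("extraction prompt template", "prompt_sequence", feed)
print([len(lab.blocklist) for lab in labs])  # [1, 1, 1]: one detection, all block
```

The property that matters is the last line: one detection becomes three blocks, which is exactly the dynamic that defeats rotating attacks across providers.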
The specific threat all three are now combating together is model distillation: a process where a competitor feeds carefully crafted prompts to a powerful frontier model, harvests the outputs, and uses those outputs to train a cheaper knockoff that mimics the original's capabilities without its compute bill. Anthropic published a blog post publicly naming three Chinese AI labs (DeepSeek, Moonshot AI, and MiniMax) as having engaged in exactly this practice against Claude. According to Anthropic, those three labs collectively generated over 16 million exchanges with Claude models through approximately 24,000 fraudulent accounts created specifically to circumvent usage policies. The accounts used stolen payment credentials, VPNs, and rotating IP addresses to evade detection, a level of operational sophistication that suggests organized coordination rather than opportunistic experimentation. U.S. officials estimate that adversarial distillation collectively costs American AI labs billions of dollars annually in lost competitive advantage, though the precise methodology behind that estimate remains contested in academic circles.
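For readers unfamiliar with the mechanics, the core of a distillation pipeline is just two steps: harvest, then fine-tune. The sketch below is deliberately minimal; `query_teacher` is a stub standing in for paid API calls to a frontier model, and no lab's actual tooling is implied.

```python
# Minimal sketch of output-based distillation. `query_teacher` is a stub
# for API calls to a frontier "teacher" model; the training step is elided
# because it is ordinary supervised fine-tuning.


def query_teacher(prompt: str) -> str:
    """Placeholder for an API call to a frontier model."""
    return f"<high-quality reasoning trace for: {prompt!r}>"


def harvest(prompts: list[str]) -> list[tuple[str, str]]:
    """Step 1: collect (prompt, teacher_output) pairs at scale."""
    return [(p, query_teacher(p)) for p in prompts]


def fine_tune_student(pairs: list[tuple[str, str]]) -> None:
    """Step 2: train a cheaper model to reproduce the teacher's outputs.

    The student inherits much of the teacher's capability in the targeted
    domain without paying the teacher's pre-training compute bill.
    """
    for prompt, target in pairs:
        # student.train_step(prompt, target) in a real pipeline
        pass


fine_tune_student(harvest(["Prove sqrt(2) is irrational.", "Refactor this loop."]))
```

The economics follow directly: the attacker's marginal cost is API tokens plus fine-tuning compute, which is why harvesting at the scale Anthropic describes (millions of exchanges) is attractive.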
Alongside the intelligence-sharing arrangement, the three companies announced the creation of the AI Frontier Fund, a $1 billion initiative combining two complementary strategies. The technological track funds research into model watermarking: embedding imperceptible signals in model outputs that survive multiple rounds of processing and allow forensic investigators to trace a distilled model's lineage back to the original frontier model used as a teacher. The legal track funds coordinated litigation against named infringers, with DeepSeek and Alibaba's AI division among the initial targets. The $1 billion commitment dwarfs any previous industry-level AI IP enforcement effort by an order of magnitude, signaling that the companies view distillation not as a nuisance but as an existential competitive threat requiring a sustained, well-funded response over years, not months.
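The announcement does not say which watermarking family the Fund is pursuing. The best-known public baseline is the "green list" scheme of Kirchenbauer et al. (2023), where a secret-keyed hash of the preceding token biases generation toward a pseudorandom slice of the vocabulary. The toy version below is an assumption about the general approach, not a description of the Fund's work; the vocabulary and key name are invented.

```python
# Toy keyed watermark in the style of published "green list" schemes: a
# secret-keyed hash of the previous token selects a fraction gamma of the
# vocabulary, and generation nudges those tokens' logits upward. Whether
# the AI Frontier Fund's research resembles this is an assumption.
import hashlib

VOCAB = [f"tok{i}" for i in range(1000)]  # stand-in vocabulary
SECRET_KEY = b"forum-forensic-key"        # invented name


def is_green(prev_token: str, token: str, gamma: float = 0.5) -> bool:
    """Keyed, pseudorandom membership test covering ~gamma of the vocab."""
    h = hashlib.sha256(SECRET_KEY + prev_token.encode() + b"|" + token.encode())
    return int.from_bytes(h.digest()[:8], "big") / 2**64 < gamma


def watermarked_choice(prev_token: str, logits: dict[str, float],
                       delta: float = 2.0) -> str:
    """Pick the next token after boosting green-list logits by delta."""
    return max(logits,
               key=lambda t: logits[t] + (delta if is_green(prev_token, t) else 0.0))


choice = watermarked_choice("tok0", {t: 0.0 for t in VOCAB[:10]})
print(choice, is_green("tok0", choice))  # the boosted pick is almost surely green
```

Because the key is secret, outputs generated this way carry a statistical skew that only the key holder can test for; the hope behind the lineage-tracing claim is that a student model trained on such outputs inherits a detectable trace of that skew.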
Why This Matters More Than People Think
The surface story is familiar: large American technology companies are upset that Chinese competitors copied their work and want legal protection. But the deeper story is about something far more consequential: the moment when competitive dynamics in AI shifted from purely private races to something resembling geopolitical industrial coordination. What is truly striking about the Frontier Model Forum arrangement is not that OpenAI, Anthropic, and Google are cooperating. It is that they have chosen to cooperate on defense while remaining ferociously competitive on offense. OpenAI just released GPT-5.5 with 52% fewer hallucinations than its predecessor. Anthropic is shipping Claude Opus 4.7 with new enterprise security capabilities. Google's Gemini 3.1 Pro just posted a 94.3% GPQA Diamond score, topping reasoning benchmarks globally. None of them has slowed its competitive posture against the others by a single basis point. Researchers are still being recruited away; benchmark scores are still compared hourly on social media; enterprise sales teams are still competing for the same Fortune 500 contracts with aggressive pricing.
But all three have carved out one specific domain, the integrity of the frontier model market itself, where they have decided their collective interest outweighs their individual competitive interests. Before this arrangement, each company was individually responsible for detecting and blocking distillation attacks, with no information flowing between them. That meant Chinese labs could rotate systematically: attack OpenAI until blocked, move to Anthropic until blocked, move to Google until blocked, then return to OpenAI with a new set of accounts. The patterns blocked at each company never reached the others. The intelligence-sharing arrangement breaks that rotation permanently. When one company detects a new attack pattern (a specific prompt structure, an account behavior signature, a data exfiltration sequence), all three update their defenses simultaneously, and the rotation strategy that Chinese labs had exploited for two years becomes non-viable overnight; the toy simulation below makes the difference concrete. Whether regulators in the U.S., EU, or UK will eventually scrutinize the arrangement as a form of competitive collusion remains to be seen; the arrangement has been structured through a nonprofit specifically to provide legal distance from direct commercial coordination.
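To see why sharing changes the attacker's economics rather than just each defender's, here is a toy simulation of the rotation strategy described above. Every number in it (detection odds, round count) is invented purely to expose the structural difference between isolated and shared blocklists.

```python
# Toy simulation of target rotation against isolated vs. shared blocklists.
# Detection probability and round count are invented for illustration only.
import random

LABS = ["lab_a", "lab_b", "lab_c"]


def run_campaign(shared: bool, detect_prob: float = 0.05,
                 rounds: int = 300, seed: int = 0) -> int:
    random.seed(seed)
    blocked = {lab: set() for lab in LABS}
    pattern = "extraction-pattern-v1"
    harvested = 0
    for i in range(rounds):
        lab = LABS[i % 3]  # rotate targets each round
        if pattern in blocked[lab]:
            continue  # this lab already recognizes the pattern
        harvested += 1
        if random.random() < detect_prob:  # the lab spots the pattern
            if shared:
                for bl in blocked.values():
                    bl.add(pattern)  # intel sharing: one detection blocks all three
            else:
                blocked[lab].add(pattern)  # only the detecting lab blocks
    return harvested


print(run_campaign(shared=False), run_campaign(shared=True))
# Isolated blocklists require three separate detections to end the campaign;
# sharing ends it at the first one, cutting the expected harvest to a third.
```

The attacker's real response, of course, is to craft a new pattern and start over, which is why the text above frames detection as raising costs rather than eliminating the threat.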
The Competitive Landscape
The three Chinese labs named by Anthropic (DeepSeek, Moonshot AI, and MiniMax) represent different tiers of China's rapidly expanding AI ecosystem, and their alleged distillation activities reveal a coherent strategic logic. DeepSeek is the most high-profile of the three: its R1 model caused genuine alarm in American AI circles in early 2025, when its apparent efficiency suggested Chinese labs had closed the compute-cost gap faster than Western analysts expected. DeepSeek's reported $6 million training budget for R1 was widely cited as evidence that frontier capability could be achieved without frontier-level compute, a narrative that now looks markedly different in light of Anthropic's distillation claims. If a significant portion of R1's capability in agentic reasoning and code generation derived from systematic extraction of Claude outputs, the true cost of DeepSeek's development was partially offloaded onto Anthropic's infrastructure and, indirectly, onto Anthropic's paying customers. Moonshot AI, backed by Alibaba and other major Chinese investors, has built the Kimi family of models with notable long-context capabilities. MiniMax has focused on multimodal generation, particularly video. All three allegedly targeted the same domains: computer vision, agentic reasoning, and agentic coding, exactly the areas where frontier model outputs provide the richest training signal for a distillation campaign.
The legal front is now active for the first time in the distillation space. OpenAI and Anthropic have both indicated active litigation against named infringers, but no U.S. court has yet issued a definitive ruling on whether model distillation constitutes copyright infringement, unfair competition, or a breach of contract actionable for civil damages. The legal ambiguity is significant: model outputs are not obviously copyrightable under current U.S. law, since copyright requires human authorship. If outputs cannot be copyrighted, then collecting them through fraudulent accounts may violate terms of service but may not constitute intellectual property theft in the traditional legal sense. The companies are therefore pursuing a combination of theories: computer fraud under the Computer Fraud and Abuse Act, trade secret misappropriation, and tortious interference with business relationships. Which theory prevails will determine not just the outcome of these specific cases but the legal framework governing AI model IP for the next decade.
Hidden Insight: The Cartel Nobody Is Calling a Cartel
Here is what almost no one is saying about this story: the Frontier Model Forum intelligence-sharing arrangement is more significant as a precedent than as a solution to the distillation problem itself. Model distillation cannot be fully prevented through detection and blocking. Any model that accepts external inputs and produces outputs can be distilled by a sufficiently motivated adversary, at some scale, using patient and methodical approaches that stay below detection thresholds. The three companies know this. Their own internal security researchers will have told them that the information-sharing arrangement and the $1 billion watermarking fund will slow distillation and raise its cost significantly, but will not stop it entirely; the back-of-the-envelope arithmetic below shows why. The goal is not purely technical: it is normative. The public announcement is designed to establish the norm that frontier model outputs are legally and practically protected intellectual property, and that coordinated enforcement of that norm is both possible and legitimate.
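The "slow it, don't stop it" conclusion falls out of simple throughput arithmetic. The account figures below come from the Anthropic claims cited earlier; the per-day query rates are invented to make the comparison concrete.

```python
# Back-of-the-envelope illustration of why detection raises the cost of
# harvesting without eliminating it. Account counts echo the Anthropic
# figures cited above; the per-day query rates are invented assumptions.


def days_to_harvest(target_examples: int, accounts: int,
                    queries_per_account_per_day: int) -> float:
    return target_examples / (accounts * queries_per_account_per_day)


# Before intel sharing: a large fraudulent account pool, queried aggressively.
print(days_to_harvest(16_000_000, accounts=24_000,
                      queries_per_account_per_day=200))  # ~3.3 days of capacity

# After: a small pool of surviving accounts throttled to look like real users.
print(days_to_harvest(16_000_000, accounts=500,
                      queries_per_account_per_day=20))   # 1,600 days
```

Under these invented rates the campaign does not become impossible; it becomes roughly 500 times slower and proportionally more expensive per example, which is exactly the gap the normative and legal tracks are meant to fill.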
This is, in effect, a cartel agreement on IP enforcement. That term has negative connotations in antitrust law, but in this context it describes something structurally accurate: three dominant firms in a market coordinating to set and enforce rules that govern how outputs from their products can be used by third parties. The immediate application is against Chinese competitors, and the geopolitical framing makes the cooperation politically palatable in the current regulatory environment in Washington. But the logic extends far beyond China. If a European startup systematically distilled Claude Opus to build a competitive model; if a South American university lab used GPT-5.5 outputs to train a research model that it then commercialized; if an open-source community used frontier model outputs as a training dataset component, would the same framework apply? Almost certainly yes. The Frontier Model Forum arrangement is a mechanism for protecting the market position of the three largest American AI companies against all potential free-riders, with China as the politically convenient initial target.
There is a second-order implication for the AI safety debate that deserves serious attention. One of the core arguments for open-source AI models (from Meta, Mistral, and the broader open-weights community) is that concentrating frontier capabilities in a small number of American companies is itself a dangerous form of power concentration that threatens the global diversity of AI development. The Frontier Model Forum arrangement cuts directly against that argument. By coordinating to prevent distillation, the three companies are explicitly working to ensure that frontier capabilities remain concentrated in their hands and in the hands of whoever they choose to license those capabilities to. The AI industry is quietly centralizing at precisely the moment when the public discourse is celebrating openness, democratization, and the proliferation of capable smaller models. The uncomfortable truth is that the cartel and the democratizers are both right, about different layers of the market. Open-source models are proliferating rapidly at the mid-capability tier. But the actual frontier, the models that define the absolute state of the art, is being locked behind a tighter and tighter perimeter. The Frontier Model Forum arrangement is one of the primary locks being installed.
What to Watch Next
The most important leading indicator is the first major legal ruling in the distillation IP cases. A U.S. federal court finding in 2026 or early 2027 that affirms distillation as legally actionable, under any of the theories being pursued, would dramatically change the economics of Chinese AI model development. Chinese labs that have been using distillation as a cost-efficient path to capability would need to invest far more heavily in genuine frontier training compute, raising their development costs and slowing the pace at which Chinese AI capabilities converge with American frontier models. Watch the preliminary injunction phase of the DeepSeek and Alibaba cases: a court granting an injunction against ongoing distillation while the case proceeds would itself be a significant win for the enforcement framework, regardless of how the merits are ultimately decided. Also watch Beijing's regulatory response: expect retaliatory measures against American AI products in Chinese markets if the legal framework is perceived in Beijing as a tool of economic coercion rather than legitimate IP protection.
The watermarking arms race deserves equally close attention. The AI Frontier Fund has committed significant resources to watermarking research, and academic groups are already publishing counter-techniques. Track publications from MIT CSAIL, Stanford's AI Lab, and Carnegie Mellon's CyLab; they typically set the research agenda six to twelve months before industry adoption. The specific technical question is whether statistical watermarking of model outputs can be made robust against adversarial stripping while remaining imperceptible enough not to degrade output quality for legitimate users. If a reliable, stripping-resistant watermarking system emerges from the Fund's research program, it would give the Frontier Model Forum real forensic capability: the ability to prove in court that a specific distilled model was trained on specific frontier model outputs. That evidentiary capability is what transforms the legal strategy from plausible to decisive. Without it, the cases rest on circumstantial evidence of account behavior patterns, which is significantly harder to win on in federal court.
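Forensically, the question reduces to a hypothesis test: does a suspect model's output contain more green-list tokens than chance allows? Continuing the toy scheme sketched earlier, and with the same caveat that the Fund's actual design is unknown, the detector side looks like this:

```python
# Toy detector for the earlier green-list sketch. Under the null hypothesis
# (no watermark), each token is green with probability gamma, so a one-sided
# z-test on the green count flags watermarked text. Robustness to paraphrase
# and stripping attacks, the actual research battleground, is omitted here.
import math


def watermark_z_score(tokens: list[str], is_green, gamma: float = 0.5) -> float:
    """z-score of the observed green-token count versus the chance rate."""
    n = len(tokens) - 1  # each token is scored against its predecessor
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = gamma * n
    stdev = math.sqrt(gamma * (1 - gamma) * n)
    return (hits - expected) / stdev


# Usage: with is_green reconstructed from the secret key, a z-score above ~4
# over a few hundred tokens is overwhelming evidence the text is watermarked.
```

Whether enough of that statistical skew survives a full distillation run to show up in a student model's outputs is precisely the robustness question the Fund is paying to answer.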
The moment Silicon Valley's three AI rivals began sharing threat intelligence, the frontier model race stopped being purely competitive, and the Chinese AI industry will feel that shift for years.
Key Takeaways
- 16 million Claude exchanges via 24,000 fraudulent accounts: Anthropic's documented evidence of adversarial distillation by DeepSeek, Moonshot AI, and MiniMax, using stolen credentials and rotating IPs to systematically extract frontier training data
- $1 billion AI Frontier Fund: OpenAI, Anthropic, and Google's joint initiative combining model watermarking technology and coordinated legal action, with DeepSeek and Alibaba named as the first enforcement targets
- Frontier Model Forum intelligence sharing: the three fierce rivals now exchange attack-pattern data in real time, breaking the rotation strategy Chinese labs had exploited for two years
- Billions in annual competitive losses: U.S. officials' estimate of adversarial distillation's cost to American AI labs, reframing DeepSeek's celebrated $6 million R1 training budget as partially subsidized by Anthropic's infrastructure
- Precedent-setting IP enforcement framework: if upheld by U.S. courts, the legal theories being tested will govern how frontier AI model outputs are treated as protected intellectual property globally, extending well beyond China
Questions Worth Asking
- If model distillation is legally actionable IP theft when Chinese labs do it, does that same logic apply to Western academic researchers, open-source communities, and startups using frontier model outputs to train smaller models, and where exactly does the law draw the line?
- Could the Frontier Model Forum's enforcement success paradoxically accelerate China's strategic push to develop genuine frontier training capabilities from scratch, ultimately producing a more capable and self-sufficient Chinese AI ecosystem than distillation ever would have?
- If you are building a product or career on top of AI models, what does it mean for your long-term position that the three most important AI companies are now coordinating to control the frontier, not just racing to define it?