OpenAI just handed Europe something Anthropic would not. On May 11, 2026, the EU confirmed that vetted European cybersecurity teams will receive access to GPT-5.5-Cyber, OpenAI's specialized model for offensive and defensive security work. Anthropic's Mythos, which the UK AI Security Institute independently confirmed is slightly more capable on the same attack benchmarks, remains locked behind White House-approved access controls. Two models, two opposite bets on who gets to hold the keys to the most dangerous AI tools in existence.
What Actually Happened
On May 7, OpenAI published its EU Cyber Action Plan and confirmed the European Commission would gain access to GPT-5.5-Cyber, a fine-tuned variant of GPT-5.5 purpose-built for cybersecurity operations. The model is not public: access runs through a vetting process administered jointly by OpenAI and ENISA, the EU Agency for Cybersecurity. Approved teams can deploy GPT-5.5-Cyber for vulnerability discovery, incident triage, malware analysis, and adversarial simulation against enterprise infrastructure.
European Commission officials described the deal as a breakthrough. The Commission confirmed it had held "four or five" meetings with Anthropic about similar access to Mythos, but those conversations were "not yet at the same stage as the solution we have on the table from OpenAI." Anthropic has not announced a timeline for European access to Mythos, citing ongoing safety evaluations and coordination with US export-control frameworks. The net result: Europe's front-line cyber defenders now have OpenAI's model, not Anthropic's better-performing one.
The capability gap between the two models is small but verified. The UK AI Security Institute (UKASI) ran both through a standardized red-team test: a 32-step simulated corporate cyberattack requiring multi-hop lateral movement, credential harvesting, and exfiltration. Mythos succeeded in 3 of 10 runs; GPT-5.5-Cyber in 2 of 10. At ten runs per model, that gap is nowhere near statistically significant, but in a defender context even a marginally higher-success-rate model matters, and the same asymmetry would favor any adversary who eventually reached these systems through compromised channels.
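How little ten runs per model can resolve is easy to check. The sketch below (function name and integer-arithmetic implementation are my own, not UKASI's methodology) runs a two-sided Fisher exact test on the published 3-of-10 versus 2-of-10 results:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test on the 2x2 table [[a, b], [c, d]],
    computed with exact integer arithmetic over the hypergeometric
    distribution (no floating-point error until the final division)."""
    row1 = a + b          # trials in group 1
    col1 = a + c          # total successes across both groups
    n = a + b + c + d     # total trials
    total = comb(n, row1)
    # Hypergeometric weight of the observed table.
    p_obs = comb(col1, a) * comb(n - col1, row1 - a)
    # Two-sided p-value: sum the weights of all tables (same margins)
    # that are no more likely than the observed one.
    tail = sum(
        w
        for k in range(max(0, row1 + col1 - n), min(row1, col1) + 1)
        if (w := comb(col1, k) * comb(n - col1, row1 - k)) <= p_obs
    )
    return tail / total

# Mythos: 3 successes / 7 failures; GPT-5.5-Cyber: 2 / 8 (per the UKASI runs).
print(fisher_exact_two_sided(3, 7, 2, 8))  # → 1.0
```

The p-value of 1.0 means the two score lines are statistically indistinguishable at this sample size; separating a 30% success rate from a 20% one with any confidence would take far more runs.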
Why This Matters More Than People Think
European governments spend more than €20 billion annually on cybersecurity, and that budget is now in active play for AI-augmented services. OpenAI's EU Cyber Action Plan is not merely a press release: it is the opening bid for a decade of government procurement relationships across 27 member states. The institutional credibility of an EU Commission endorsement converts directly into multi-year contracts with national CERTs, defense ministries, and critical infrastructure operators. OpenAI's playbook mirrors what Microsoft executed after the Snowden revelations: become the trusted vendor before the regulatory framework forces a choice, then collect the compound returns of incumbency.
The deeper implication is for the politics of AI capability distribution. The EU has spent three years trying to build regulatory leverage over American AI companies through the AI Act, foreign investment screening, and data-sovereignty requirements. None of those mechanisms gave Brussels the ability to access advanced AI tools on its own terms. What actually moved the negotiation was not a regulation: it was a procurement budget. The EU's leverage turns out to be the same as everyone else's.
Beyond procurement, there is a talent dimension. The EU's universities and national labs produce world-class cybersecurity researchers, many of whom will now begin their careers working with GPT-5.5-Cyber as their primary AI tool. The model they learn on will shape the techniques they develop, the vendor relationships they build, and the institutional preferences they carry for decades. OpenAI is not just winning a contract; it is shaping the formation of the next generation of European cyber professionals. Anthropic will struggle to displace that preference even if it opens Mythos access tomorrow.
There is a further dimension that has received almost no coverage: what this deal does to the EU's domestic AI industry. France's Mistral, the continent's most credible frontier model lab, does not have a cybersecurity-specialized variant that competes with GPT-5.5-Cyber or Mythos. By formalizing access to an American model for sensitive national-security-adjacent work, the EU Commission has implicitly certified that European models are not sufficient for the task. That certification will echo through procurement cycles for years.
The Competitive Landscape
OpenAI and Anthropic are not building in isolation. Microsoft's Threat Intelligence Center runs AI-assisted threat hunting across a customer base of 300 million business users, with its own proprietary models trained on security telemetry. Google's Security AI Workbench, built on Gemini and Mandiant's threat data, is already deployed at hundreds of enterprise customers. CrowdStrike and Palo Alto Networks have both embedded LLM reasoning layers into their flagship platforms. What separates GPT-5.5-Cyber and Mythos from these embedded tools is the capacity for autonomous multi-step reasoning across complex attack chains: not pattern matching, but genuine adversarial planning.
The context that matters most is timeline. Microsoft, Google, CrowdStrike, and Palo Alto Networks are all shipping AI security products today, but none has the regulatory endorsement that OpenAI just received from the EU Commission. Regulatory endorsement does not just unlock contracts; it creates liability shields. A European government agency that uses a Commission-endorsed model has a defensible answer when something goes wrong. An agency that deploys an unendorsed model takes on the reputational and legal risk itself. That dynamic will push European procurement toward GPT-5.5-Cyber regardless of whether a technically superior alternative becomes available.
The Chinese dimension matters too. Beijing has been investing in autonomous cybersecurity AI through state-backed labs since at least 2023, and the People's Liberation Army's cyber units are presumed to be testing similar capabilities. Against that backdrop, the US government's decision to allow GPT-5.5-Cyber into Europe while keeping Mythos restricted starts to look like a calibrated choice: give allies a capable tool, but retain the most capable tool as a strategic reserve. It is the AI equivalent of the tiered nuclear technology-sharing that defined NATO's early decades.
Hidden Insight: What Anthropic's Silence Reveals
The dominant narrative frames this as OpenAI winning a diplomatic race while Anthropic falls behind. That framing misses something. Anthropic is a company whose mission is the responsible development of AI for the long-term benefit of humanity. It has consistently chosen capability restraint over market share when it believed the deployment risk was real. The Mythos restriction may not be a failure: it may be the correct decision by a company that has modeled the downstream risk and concluded that EU government networks are not sufficiently secured to hold a model capable of completing autonomous corporate cyberattacks with a 30% success rate.
The bear case for OpenAI's strategy is concrete. Every model handed to a vetted government operator is a model that adversaries will now target. ENISA has faced sophisticated intrusion attempts before. EU member-state CERTs range from Denmark's world-class operation to smaller teams with less mature security practices. A breach of any approved operator that yields training data, system prompts, or fine-tuning weights from GPT-5.5-Cyber would transfer capability to an adversary that OpenAI, the EU Commission, and ENISA would all find deeply uncomfortable. Critics argue that no vetting process is airtight when the asset being protected is this valuable.
There is also the question of normalization. Every time a government deploys an AI model capable of multi-step cyberattacks, it moves the Overton window on what is acceptable for state-sponsored cyber operations. Today it is European defenders using GPT-5.5-Cyber against simulated attacks. Within three years, the same capability will be integrated into active government offensive operations. The precedent being set now is not just about access; it is about what governments believe they are permitted to do with AI in the cyber domain.
The most underappreciated signal in this story is what the US government's asymmetric treatment of the two models implies about their relative capabilities. The administration approved GPT-5.5-Cyber for EU deployment while holding Mythos under export-control-style restrictions. That asymmetry suggests Washington classifies Mythos as a higher-leverage strategic asset, consistent with the UKASI benchmark showing Mythos at 3 of 10 versus GPT-5.5-Cyber at 2 of 10. But the gap in classified evaluations may be larger than those public numbers suggest. If so, Anthropic's model is being treated as a strategic asset rather than a commercial product, and that framing changes everything about how the company will eventually bring it to market.
What to Watch Next
The most important near-term indicator is whether Anthropic announces any European access to Mythos before ENISA's Q3 2026 procurement cycle, which typically closes in September. That cycle is when national CERTs across the EU lock in multi-year vendor relationships. If OpenAI's name appears on those contracts first, the institutional advantage will compound: every European security operator trained on GPT-5.5-Cyber becomes a customer OpenAI retains through tooling lock-in, integration depth, and institutional familiarity.
The second indicator is how US agencies with cyber mandates respond. Any announcement from the Defense Innovation Unit or DARPA about AI cybersecurity model evaluations will reveal which company has the more durable government relationship. Watch also for parliamentary debate in France or Germany about AI sovereignty in security contexts: that debate will surface before procurement contracts are signed, and it will force the EU Commission to defend its choice of an American model for national cyber defense.
A third signal worth watching: the EU's AI Act implementation office begins full enforcement reviews in late 2026 and has not yet published guidance on how AI systems used for cybersecurity will be classified under the high-risk provisions. If GPT-5.5-Cyber or Mythos is eventually classified as a high-risk system, the entire access framework will need to be renegotiated. OpenAI's current advantage may be fragile precisely because it was secured through bilateral negotiation rather than formal regulatory approval. A rule change that forces both models through the same compliance framework would reset the competitive clock to zero.
The better model is locked in a vault. The question is whether Anthropic is protecting the world or losing the war.
Key Takeaways
- OpenAI's EU Cyber Action Plan (published May 7, 2026; EU access confirmed May 11) grants vetted European cyber defenders access to GPT-5.5-Cyber, positioning OpenAI as the default AI security partner for Europe's €20B annual cybersecurity market.
- Anthropic's Mythos narrowly outperforms GPT-5.5-Cyber on the UKASI benchmark, with 3 of 10 successes on a 32-step corporate cyberattack simulation versus 2 of 10 for GPT-5.5-Cyber, yet Mythos remains inaccessible to allied governments.
- EU Commission confirmed 4-5 meetings with Anthropic but no deal was reached, leaving Europe's defenders with the second-best available model.
- US government's asymmetric treatment of the two models implies Washington treats Mythos as a higher-leverage strategic asset held back for national security rather than commercial reasons.
- ENISA's Q3 2026 procurement cycle closes in September, the critical window where OpenAI can lock in multi-year European CERT relationships before Anthropic opens any access.
Questions Worth Asking
- If Mythos succeeds at autonomous corporate cyberattacks 30% of the time in tests, what success rate is safe enough to deploy to government networks with imperfect security?
- Does OpenAI's EU deal accelerate the normalization of AI in state-sponsored offensive cyber operations in ways that will be impossible to reverse?
- If you ran Anthropic's board, would you open Mythos to EU allies and accept the blowback risk of a breach, or hold the line and cede the market to OpenAI?