Regulation

Pentagon Signs OpenAI, Google, Cuts Anthropic Out

The Pentagon awarded classified AI contracts to OpenAI, Google, Microsoft, SpaceX, and four others, excluding Anthropic over a dispute about military use-of-force terms.


Key Takeaways

  • Pentagon signed IL6/IL7 classified AI contracts with 8 companies including OpenAI, Google, Microsoft, SpaceX; Anthropic excluded
  • Anthropic refused "all lawful purposes" terms covering autonomous weapons, triggering an unprecedented "supply chain risk" threat, a label historically reserved for Chinese-linked firms
  • The $200M Anthropic-Pentagon contract from July 2025 is functionally dead; a California court blocked blacklisting but did not restore the contract
  • Defense AI market is projected to reach $6B by FY2027; insider companies gain exclusive access to classified training data and operational integrations
  • SpaceX's inclusion adds IL7 AI to its Starlink/Starshield infrastructure, creating an integrated intelligence-communications-AI stack no other private company matches

The phrase "supply chain risk" has one historical use in U.S. defense procurement: it means the vendor is linked to a foreign adversary. The Trump administration applied that label to Anthropic on May 1, 2026. Anthropic is a San Francisco AI company founded by Americans, funded largely by American and allied investors, and headquartered in the country it was just implicitly compared to China. The juxtaposition is extraordinary, and most of the coverage has missed it.

What Actually Happened

On May 1, 2026, the Pentagon announced classified-network AI contracts with eight companies: OpenAI, Google, Microsoft, Amazon Web Services, NVIDIA, SpaceX, Reflection AI, and Oracle. These contracts cover the military's most sensitive systems at Impact Level 6 and Impact Level 7, which represent the highest classification tiers in the Department of Defense cloud framework. IL6 handles classified national security workloads; IL7, used by Special Operations Command and NSA programs, handles the most sensitive intelligence operations in the U.S. government.

Anthropic was excluded after a dispute over contract terms. The government demanded that Anthropic agree its Claude models could be used for "all lawful purposes," language that would have authorized their use in autonomous weapons systems and mass surveillance programs. Anthropic refused. The Trump administration then threatened to designate Anthropic a "supply chain risk," effectively blacklisting the company from federal contracting. The designation had previously been applied only to Huawei, ZTE, and a handful of other companies with documented ties to Chinese military or intelligence services. Anthropic sued the administration in federal court. A judge in California blocked the government's effort last month, but the injunction did not restore the contracts. The $200 million Anthropic-Pentagon agreement signed in July 2025 is functionally dead.

Why This Matters More Than People Think

The defense AI market is not a rounding error. The Department of Defense spent $3.3 billion on AI-related contracts in fiscal year 2025, and that figure is projected to exceed $6 billion by fiscal 2027. The eight companies now inside the Pentagon's classified network have a structural advantage that no amount of technical superiority can overcome from outside: they are the only vendors allowed to train on classified datasets, operate in air-gapped environments, and build tools that military personnel can actually use in the field. If this division holds, it means Anthropic is building the most capable AI models in the world for a market that explicitly excludes the use case where governments write the largest checks.


The precedent set by the "supply chain risk" threat deserves careful attention. That designation is the nuclear option in federal contracting: it doesn't just cost you one contract, it potentially bars you from the entire federal supply chain. Applying it to an American company over a policy disagreement, rather than a documented foreign-influence finding, is structurally different from anything regulators have done before. The California judge's injunction matters precisely because it signals that courts may not allow the executive branch to use national-security designations as commercial retaliation. But legal protection is not the same as commercial protection.

The Competitive Landscape

OpenAI's inclusion tells a story about how far that company has traveled from its founding charter. OpenAI was created in 2015 as a nonprofit with an explicit mission to ensure AI benefited all of humanity and avoided catastrophic uses. By 2026, OpenAI is one of eight vendors authorized to build AI tools for the Pentagon's most classified weapons systems. The company's transformation is complete. Microsoft, which owns a roughly 49% economic stake in OpenAI, was also awarded a contract, creating a situation where a single corporate family holds two of the eight slots in the Pentagon's AI infrastructure.

SpaceX's inclusion is the most strategically interesting award. The company already operates the Starlink constellation, which Ukraine has used extensively in its conflict with Russia, and has deep relationships with the NRO and Space Force. Adding IL6/IL7 AI access to Starlink's existing sensor and communications role means SpaceX is building toward an integrated intelligence-communication-AI capability that no other private company currently possesses. Reflection AI, the least well-known of the eight, is a stealth-stage company founded by former Google DeepMind researchers, suggesting the Pentagon made at least one bet on pre-commercial AI research talent.

Hidden Insight: Anthropic Is Building the Right Model for the Wrong Customer

Anthropic's Constitutional AI framework, its investment in interpretability research, and its refusal to authorize autonomous-weapons use all reflect a coherent theory: that the most capable AI systems are also the most dangerous ones, and that capability should be gated by safety properties. This is a legitimate and defensible position. The problem is that governments do not purchase AI on safety grounds alone. They purchase it on capability, reliability, and willingness to execute legally authorized operations without unilateral carve-outs imposed by the vendor.

There is, however, a real commercial case for Anthropic's position: the company's refusal to accept "all lawful purposes" language is not a purely ethical stance. It is also a commercial strategy. If Anthropic had accepted the Pentagon's terms, it would have forfeited the positioning that makes it attractive to European governments, privacy-conscious enterprises, and civil-liberties-oriented organizations. Dario Amodei's White House meeting after the Mythos cybersecurity launch suggests the company is trying to thread that needle: demonstrate enough national-security utility to avoid formal blacklisting, while preserving enough policy independence to retain non-defense enterprise customers. That is an exceptionally narrow needle to thread.

The deeper issue is that Anthropic's legal victory may be pyrrhic. The injunction blocked the "supply chain risk" designation, but it did not compel the Pentagon to award contracts. The eight companies currently inside the classified network will train their models on datasets Anthropic will never see, build tool integrations into weapons programs Anthropic cannot access, and develop the operational understanding of military workflows that creates switching costs lasting decades. Every month Anthropic spends outside the Pentagon's network is a month the other eight companies spend building moats that no future contract could quickly overcome.

What to Watch Next

The critical indicator is whether Anthropic and the Trump administration reach a negotiated resolution in the next 90 days. Dario Amodei's White House visit after the Mythos launch was not coincidental. The administration wants AI tools that can identify zero-day vulnerabilities in adversary systems; Anthropic has built exactly that. The question is whether both sides can find contract language that preserves the "all lawful purposes" authorization the Pentagon requires while adding oversight mechanisms Anthropic can live with. A framework modeled on how defense contractors handle dual-use export controls, where use-of-force authorizations require additional sign-off, is the most likely path to resolution.

Watch the congressional response. Several senior senators on the Armed Services Committee have publicly questioned whether the "supply chain risk" threat was appropriate for an American company. If Congress holds hearings on the designation authority, it creates political cover for the administration to negotiate a face-saving resolution without appearing to capitulate. The 180-day window before the next defense appropriations cycle is the practical deadline: after that, the eight companies will have embedded themselves so deeply in Pentagon AI infrastructure that adding a ninth becomes logistically complicated regardless of the politics.

The Pentagon did not exclude Anthropic because its AI was worse; it excluded Anthropic because its ethics were inconvenient.


Key Takeaways

  • 8 companies awarded IL6/IL7 classified AI contracts: OpenAI, Google, Microsoft, AWS, NVIDIA, SpaceX, Reflection AI, Oracle; Anthropic excluded
  • $200M Anthropic-Pentagon contract dead: the July 2025 agreement is functionally terminated over "all lawful purposes" use-of-force language
  • "Supply chain risk" threat unprecedented: designation historically reserved for Huawei and Chinese-linked firms; a California judge blocked its application to Anthropic
  • Defense AI market projected at $6B by FY2027: companies inside the classified network gain access to training data and operational contexts unavailable to outside vendors
  • SpaceX adds IL7 AI to Starlink/Starshield: layering classified AI on top of its sensor and communications infrastructure creates an integrated intelligence-communications-AI stack no other private company matches

Questions Worth Asking

  1. If Anthropic wins the legal fight but loses the commercial war, has it actually advanced AI safety, or has it just ensured that the defense-grade AI systems get built without its input?
  2. Does a company that refuses to support autonomous weapons have a viable long-term business model as AI becomes the primary interface for national security infrastructure?
  3. SpaceX now holds Starlink, Starshield, and an IL7 AI contract. At what point does the concentration of critical defense infrastructure in a single private company become a national security risk in itself?
