Why the Pentagon Just Sidelined Anthropic for Grok and OpenAI in Classified AI Deals

The U.S. Department of Defense has struck classified AI agreements with seven companies, including Elon Musk's xAI and its Grok model, while deliberately excluding Anthropic despite the startup's previous $200 million contract to handle sensitive military information. The Pentagon's decision signals a major shift in how the military plans to deploy artificial intelligence in classified settings and raises questions about what ethical boundaries AI companies are willing to cross for government contracts.

What Led to Anthropic's Exclusion from Pentagon Deals?

Anthropic, an AI safety-focused startup founded by former OpenAI researchers, had been working with the Pentagon on classified projects. However, the relationship fractured over fundamental disagreements about how the military could use AI systems. Anthropic refused to loosen what it calls "red lines" around two specific applications: mass domestic surveillance and fully autonomous weapons systems that operate without human oversight.

The Pentagon's decision to exclude Anthropic and label it a supply-chain risk represents a significant escalation in the dispute. Emil Michael, the Defense Department's chief technology officer, acknowledged Anthropic's security strengths but emphasized the agency's concerns. Michael stated that while Anthropic's security model, called Mythos, represents "a separate national security moment" with unique capabilities for finding and patching cyber vulnerabilities, the company remains a supply-chain risk overall.

In response to the exclusion, Anthropic sued the federal government and won a temporary injunction, preventing the Pentagon from immediately implementing the ban. The legal battle underscores the tension between AI safety advocates and military applications of artificial intelligence.

Which AI Companies Now Have Pentagon Classified Access?

The Pentagon's new agreements span a diverse set of AI providers, each bringing different capabilities and existing relationships with the military. The seven companies now authorized for classified use include:

  • OpenAI: Already had a prior agreement with the Pentagon for lawful use of its AI systems, now formalized under the new classified framework.
  • Google: Struck a similar agreement allowing "any lawful" use of its AI tools in classified military settings.
  • Microsoft: Leverages its existing deep relationships with the Pentagon to provide AI capabilities for classified operations.
  • Amazon: Brings its established infrastructure and cloud services to support classified AI deployments.
  • Nvidia: Newly contracted to provide AI hardware and computing resources for military applications.
  • xAI (Elon Musk's company): Reached agreements for lawful use of Grok and other AI systems in classified settings.
  • Reflection: A startup entering the Pentagon's classified AI ecosystem with new contract opportunities.

The inclusion of xAI and Grok represents a notable development, as Elon Musk's AI company has rapidly positioned itself as a contender in the government AI space. The Pentagon's announcement states these agreements will enable the "lawful operational use" of AI systems, with the goal of "establishing the United States military as an AI-first fighting force."

How to Understand the Pentagon's AI Strategy Shift

The Pentagon's classified AI agreements reveal several strategic priorities that define how the military plans to deploy artificial intelligence:

  • Breadth Over Specialization: Rather than relying on a single AI provider, the Pentagon is diversifying across multiple companies to avoid dependency on any one vendor and ensure redundancy in critical systems.
  • Lawful Use as the Standard: The Pentagon emphasizes "lawful operational use," a phrase that notably excludes the ethical guardrails Anthropic insisted upon, suggesting the military prioritizes flexibility in deployment scenarios.
  • Speed to Deployment: By signing agreements with established tech giants like Google, Microsoft, and Amazon alongside newer players like xAI, the Pentagon signals it wants rapid access to cutting-edge AI capabilities without lengthy development timelines.
  • Hardware and Software Integration: Including Nvidia alongside software companies indicates the Pentagon is securing both the AI models and the computing infrastructure needed to run them at scale in classified environments.

The exclusion of Anthropic despite its security expertise suggests the Pentagon prioritizes operational flexibility over the safety constraints that Anthropic champions. This represents a fundamental disagreement about how AI should be governed in military contexts.

What Does This Mean for AI Ethics in Government?

The Pentagon's decision to work around Anthropic's refusal to support autonomous weapons and mass surveillance raises broader questions about the future of AI governance. Anthropic's willingness to walk away from a $200 million contract demonstrates that some AI companies are willing to sacrifice lucrative government deals to maintain ethical boundaries. Conversely, the Pentagon's ability to find alternative providers suggests that companies willing to accept fewer restrictions will gain access to classified military applications.

The temporary injunction Anthropic won provides a brief window for the legal and policy debates to continue, but the Pentagon's rapid expansion of classified AI agreements suggests the military is moving forward with its AI-first strategy regardless. The outcome of Anthropic's lawsuit could influence whether other AI companies follow its lead in setting ethical red lines or whether the industry standard becomes accepting military applications without restrictions.

For observers of AI policy, the Pentagon's classified deals represent a critical moment where government procurement decisions are shaping which AI companies thrive and which ethical principles get embedded in military systems. The companies that win these contracts will have enormous influence over how AI is deployed in national security contexts for years to come.