Why the NSA Is Using Anthropic's Most Powerful AI Model Despite Pentagon Concerns
The U.S. National Security Agency is actively using Anthropic's most advanced AI model, Claude Mythos Preview, even though the Department of Defense has formally designated Anthropic as a supply chain risk and pushed to limit ties with the company. This contradiction highlights a fundamental challenge facing governments worldwide: the most capable AI tools for defending critical infrastructure are also the ones that raise the most serious concerns about misuse and strategic dependence.
What Makes Claude Mythos Different From Other Anthropic Models?
Claude Mythos represents a significant leap beyond Anthropic's existing lineup of Claude Haiku, Sonnet, and Opus models. The company introduced a new top tier called Copybara within the Mythos family, positioning it as substantially more powerful than anything Anthropic has released before. What sets Mythos apart is its exceptional strength in cybersecurity and coding tasks. The model can identify software vulnerabilities and develop sophisticated exploits at a level that often exceeds human expert capabilities. This dual nature, the ability to both defend and potentially attack, is precisely why access to Mythos is tightly controlled and why it has become so controversial in government circles.
The tension between Mythos's defensive value and its offensive potential became impossible to ignore when Anthropic CEO Dario Amodei met with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent to discuss the model's use within government. According to reporting on the meeting, both sides described the conversation as productive, with next steps expected to focus on how departments beyond the Pentagon might engage with the model.
How Is the Government Planning to Use This Powerful AI Safely?
The primary defensive initiative leveraging Claude Mythos is Project Glasswing, a collaborative effort led by Anthropic alongside major technology and security firms. The partnership includes Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. This broad coalition reflects the scale of the cybersecurity challenge and the recognition that no single organization can manage the risks alone.
- Vulnerability Detection: Project Glasswing uses Claude Mythos Preview to identify hidden software flaws in critical systems before attackers can exploit them, helping secure infrastructure that underpins banking, healthcare, energy, and government operations (a sketch of what such a scan could look like follows this list).
- Controlled Access Model: Anthropic is intentionally limiting access to Mythos for now, sharing the model only with vetted partners and government agencies to prevent widespread availability before responsible deployment practices are established.
- Financial Investment: Anthropic is investing $100 million in usage credits and funding open-source security projects, while sharing findings to improve industry-wide defenses and strengthen cybersecurity practices across the sector.
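To make the vulnerability-detection workflow concrete, here is a minimal sketch of how an analyst might ask a Claude model to review a code snippet for flaws. It uses Anthropic's publicly documented Python Messages API, but everything specific to this story is an assumption: the model identifier claude-mythos-preview is this article's name rather than a documented one, and the prompt wording and the helper scan_for_vulnerabilities are hypothetical, not Project Glasswing's actual tooling.

```python
# A minimal sketch, not Project Glasswing's real pipeline. Assumes the public
# `anthropic` Python SDK is installed and ANTHROPIC_API_KEY is set in the
# environment; "claude-mythos-preview" is this article's name for the model,
# not a documented identifier.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SNIPPET = """
char buf[64];
strcpy(buf, user_input);  /* unbounded copy of attacker-controlled data */
"""

def scan_for_vulnerabilities(code: str) -> str:
    """Ask the model to flag likely flaws in a code snippet (hypothetical helper)."""
    response = client.messages.create(
        model="claude-mythos-preview",  # assumed identifier
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                "Review the following code for security vulnerabilities. "
                "For each finding, give the flaw class, the affected line, "
                "and a suggested fix.\n\n" + code
            ),
        }],
    )
    # The Messages API returns a list of content blocks; take the first text block.
    return response.content[0].text

if __name__ == "__main__":
    print(scan_for_vulnerabilities(SNIPPET))
```

In any real deployment, output like this would feed triage and patching workflows rather than being read raw, and access to the underlying model would be gated through the vetting process described above.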
The stakes for this approach are enormous. Global cybercrime is estimated at around $500 billion annually, and many attacks are backed by state-level actors with significant resources. Modern software contains vulnerabilities that have always been difficult to find, but advanced AI models like Mythos have dramatically reduced the effort and expertise required to discover and exploit them. This means that without proper defensive measures, attacks could become faster, more frequent, and more damaging.
Why Does the Pentagon View Anthropic as a Supply Chain Risk?
The Department of Defense's formal classification of Anthropic as a supply chain risk reflects broader concerns about dependence on a single AI vendor, particularly one developing frontier models with dual-use capabilities. The Pentagon's position creates an awkward situation where agencies can publicly criticize a vendor while simultaneously relying on that vendor's technology for critical operations. This disconnect suggests that the real tension is not only between governments and AI companies, but also between procurement caution and operational urgency. When cyber defense demands speed, stability, and scale, the newest and most capable model can become too valuable to ignore, even when policy concerns remain unresolved.
The NSA's use of Mythos despite Pentagon objections is not unique. The United Kingdom's AI Security Institute also has access to the model, indicating that multiple governments have concluded that the defensive benefits outweigh the risks, at least for now.
What Does This Mean for the Future of AI Governance?
The NSA-Anthropic situation reveals a fundamental policy problem that governments have not yet solved. Agencies can criticize a vendor in public statements or legal filings while still relying on the same vendor's technology in practice. This creates accountability gaps and raises questions about how governments will maintain oversight of frontier AI systems when operational needs conflict with procurement restrictions.
For policymakers, the lesson is uncomfortable but straightforward. Governments need strong AI tools to defend networks, but they also need procurement rules, audit trails, and usage boundaries that prevent those tools from becoming opaque dependencies. The Pentagon's tension with Anthropic demonstrates what happens when those boundaries and operational needs fall out of alignment. If an agency says a vendor is too risky for broad use but still wants the model for its own missions, the issue becomes one of trust, accountability, and national strategy.
In the end, the NSA-Anthropic story is less about one model and more about the future of cyber power. The organizations that can safely deploy frontier AI will move faster in defense, but they will also face greater pressure to justify how these tools are controlled. Claude Mythos may be a glimpse of what is coming: a world where the most capable cyber systems are also the most contested, and where operational need often outruns policy comfort.