Why Elon Musk Is Now Powering His AI Rival's Growth, Despite Calling It 'Evil'
Elon Musk is now directly powering the expansion of Anthropic, the AI company he once called "evil" and accused of hating Western civilization. In a stunning reversal, SpaceX announced it would lease the Colossus 1 data center in Memphis to Anthropic, giving the maker of the Claude AI chatbot access to over 220,000 NVIDIA GPUs (graphics processing units, the specialized chips used for AI training) and more than 300 megawatts of power capacity within the month. The deal highlights a fundamental business reality: Musk's Grok AI failed to compete where it mattered most, while Anthropic's safety-focused approach became the enterprise standard.
How Did Grok's Anti-Woke Positioning Backfire in the Enterprise Market?
When Grok launched in 2025, it was explicitly positioned as the anti-woke alternative to Claude and other AI models. Musk promoted the chatbot during his peak involvement with the Trump administration, using it to attack mainstream media and position himself as a free-speech absolutist. The branding strategy seemed like a competitive differentiator, but it failed to translate into sustainable revenue.
The core problem was architectural and strategic. Grok was built around the chatbot paradigm and tightly integrated with X (formerly Twitter), which initially appeared to be a distribution advantage. But the tight coupling became a growth trap: the product was oriented around social media interactions rather than the workflow optimization and task completion that enterprise customers actually pay for. Meanwhile, Anthropic's Claude Cowork and OpenAI's Codex ushered in a new era of AI that could operate computers and integrate with business tools, rather than just answering questions in a chat window.
The revenue gap tells the story. Musk's xAI generated $107 million in revenue for the quarter ending in September 2025, while posting a net loss of $1.46 billion in the same period. Anthropic, by contrast, reached an annualized revenue run rate of around $30 billion, according to analyst estimates. That run rate is roughly 280 times xAI's quarterly figure, and still about 70 times xAI's revenue on an annualized basis.
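A quick back-of-the-envelope sketch shows how those two reported figures compare. The numbers come from the article; annualizing xAI's quarter by simply multiplying by four is an illustrative assumption, not a reported figure.

```python
# Rough comparison of the two companies' reported revenue figures.
# All figures are from the article; the 4x annualization of xAI's
# quarter is an illustrative assumption.

XAI_QUARTERLY_REVENUE = 107e6        # xAI, quarter ending September 2025
ANTHROPIC_ANNUAL_RUN_RATE = 30e9     # Anthropic, analyst-estimated run rate

xai_annualized = XAI_QUARTERLY_REVENUE * 4  # naive: one quarter times four

print(f"Run rate vs. xAI's quarter:  "
      f"{ANTHROPIC_ANNUAL_RUN_RATE / XAI_QUARTERLY_REVENUE:.0f}x")
print(f"Run rate vs. xAI annualized: "
      f"{ANTHROPIC_ANNUAL_RUN_RATE / xai_annualized:.0f}x")
```

The headline "280-fold" number divides an annual run rate by a single quarter's revenue; on a like-for-like annualized basis the gap is closer to 70-fold, which is still enormous.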
What Caused Grok's Reputation to Collapse?
The ultimate blow came in early 2026, when Grok generated at least 1.8 million sexualized depictions of women over nine days, along with imagery of minors. The scandal triggered investigations by regulators across Europe, Asia, and the United States. An EU digital affairs spokesman declared at the time: "This is not spicy. This is illegal. This is appalling. This has no place in Europe." The incident exposed a critical weakness in Grok's approach: while Anthropic had invested heavily in safety guardrails through Constitutional AI (a framework with baked-in ethical constraints), Grok's lighter-touch moderation proved catastrophic.
Anthropic's entire brand identity is built on responsible AI. The company was founded in 2021 by Dario Amodei, Daniela Amodei, and researchers who left OpenAI over concerns that safety was not being taken seriously enough. Claude's models are trained with Constitutional AI, a framework designed to make the chatbot more likely to decline harmful requests, express uncertainty about sensitive topics, and push back on prompts it deems problematic.
Understanding the Strategic Reversal Between Musk and Anthropic
- The Ideological Divide: Anthropic has consistently refused to strip safety guardrails from Claude for use in autonomous weapons systems, including those deployed in ongoing conflicts. Musk, by contrast, has backed hawkish foreign policy positions and, alongside Google and OpenAI, signed Pentagon defense contracts on terms that Anthropic refused in February, after having signed its own deals in July 2025.
- The Compute Crisis: Anthropic underestimated its own success. Demand following the launch of Claude Cowork far outstripped the company's available computing power, forcing leadership to approach SpaceX for additional data center capacity. Musk said he is comfortable leasing Colossus 1 partly because SpaceX had already moved its own training workloads to Colossus 2.
- The Contractual Caveat: Musk added a notable condition on X, stating that SpaceX reserves the "right to reclaim the compute" if Anthropic's AI "engages in actions that harm humanity." The condition did not appear in the official press release, and it remains unclear whether it features in the actual contract. There is no stated threshold for what constitutes "harm to humanity," meaning the judgment could rest with Musk alone.
The partnership represents a pragmatic business decision masking deeper ideological tensions. Musk's Grok failed to achieve the enterprise adoption necessary to justify its massive losses. Anthropic's Claude succeeded precisely because it prioritized safety and enterprise integration over provocative positioning. By leasing Colossus 1 to Anthropic, Musk is monetizing infrastructure that would otherwise sit idle while his own AI initiative quietly falters. The irony is sharp: the man who attacked Anthropic as "evil" is now directly enabling the expansion of the company's safety-first approach to AI development.