Anthropic Just Doubled Claude's Power: Here's What That Means for AI Users
Anthropic announced major upgrades to Claude on May 6, 2026, doubling usage limits for code work, removing peak-hour slowdowns, and securing 300 megawatts of new computing capacity from SpaceX's Colossus 1 data center. The moves signal that computing power, not talent or algorithms, is now the main constraint limiting AI model capabilities. For paying Claude users, the practical impact is immediate: longer coding sessions without interruption, faster response times during busy hours, and more room for teams running production AI agents.
What Changed for Claude Users Today?
Anthropic made three concrete changes effective immediately. The usage allowance within each five-hour window, the budget that gates how much work a Pro, Max, Team, or seat-based Enterprise account can run through Claude Code, has doubled. This matters most for users running long, complex tasks like multi-file code refactors, end-to-end test runs, and large code reviews that previously required splitting work across multiple billing windows.
The company also removed the throttle that previously slowed Pro and Max accounts during peak demand windows, typically during US business hours. Users will see fewer "rate limited" interruptions when trying to use Claude during busy times. Additionally, Anthropic raised API rate limits for Claude Opus models, the company's most capable tier, allowing teams to run more concurrent AI agents without hitting usage caps.
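Higher API rate limits mostly pay off when clients bound their own concurrency rather than firing unbounded bursts. Below is a minimal sketch of that pattern using a semaphore; `call_model` and the limit of 8 are stand-ins for illustration, not Anthropic's actual SDK or published caps.

```python
import asyncio

# Cap on simultaneous in-flight requests. The value 8 is illustrative;
# tune it to whatever concurrency your account actually allows.
MAX_CONCURRENT = 8

async def call_model(prompt: str) -> str:
    """Stand-in for a real API call; sleeps briefly to simulate latency."""
    await asyncio.sleep(0.01)
    return f"response to: {prompt}"

async def run_agents(prompts: list[str]) -> list[str]:
    # A semaphore keeps the number of concurrent calls at or below
    # MAX_CONCURRENT, so a burst of agent tasks cannot blow past the cap.
    sem = asyncio.Semaphore(MAX_CONCURRENT)

    async def bounded(prompt: str) -> str:
        async with sem:
            return await call_model(prompt)

    return await asyncio.gather(*(bounded(p) for p in prompts))

results = asyncio.run(run_agents([f"task {i}" for i in range(20)]))
```

With raised limits, the only change needed in a setup like this is the single `MAX_CONCURRENT` constant; the queueing logic stays the same.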
Why Is SpaceX's Computing Power Such a Big Deal?
The SpaceX partnership is the most significant part of this announcement because of what it reveals about AI infrastructure in 2026. Anthropic now has access to more than 300 megawatts of computing capacity, equivalent to over 220,000 Nvidia GPUs, available within the month. For context, that single deal supplies roughly the entire computing footprint that brought GPT-4 to market in 2023.
What makes this unusual is the speed. Most multi-hundred-megawatt computing agreements take years to deliver. This one arrives within weeks. That timeline suggests Anthropic was facing genuine near-term capacity pressure, likely from Claude Code's rapid growth and enterprise deployment of Claude Opus 4.7, and needed computing resources faster than traditional hyperscalers could provide.
The forward-looking angle is equally intriguing. Anthropic expressed interest in partnering with SpaceX to develop multiple gigawatts of orbital AI compute capacity. Orbital data centers are not a new concept, but the constraints have historically been launch costs, radiation hardening, and data-transmission bandwidth. SpaceX, with Starship approaching a regular launch cadence and Starlink operating the largest satellite broadband constellation, is currently the only Western company positioned to attempt orbital compute infrastructure at scale.
How Does This Fit Into Anthropic's Broader Computing Strategy?
Today's announcement is part of a sustained push by Anthropic throughout 2025 and 2026 to secure long-dated power and GPU capacity ahead of demand. The company is no longer relying on a single computing partner. Instead, Anthropic's compute supply now spans multiple providers, reducing concentration risk and giving the company negotiating leverage on every renewal.
- Amazon Web Services: Up to 5 gigawatts of capacity, with roughly 1 gigawatt available by the end of 2026 and the balance rolling out over multiple years, combining hyperscaler partnership with custom silicon.
- Google and Broadcom: TPU-based capacity with rollout beginning in 2027, representing a shift toward custom processors designed specifically for AI workloads.
- Microsoft Azure: A $30 billion Azure capacity commitment spanning multiple years for Nvidia GPU-based computing.
- Fluidstack: Roughly $50 billion in American infrastructure investment over multiple years for domestic GPU build-out.
- SpaceX: 300 megawatts and 220,000 Nvidia GPUs available within the month, plus forward interest in orbital compute capacity.
Two patterns emerge from this diversified approach. First, Anthropic is deliberately avoiding the single-vendor trap that has historically constrained competitors. OpenAI anchored on Microsoft, and Google built its own first-party stack. Anthropic's multi-partner strategy reduces risk and strengthens its negotiating position. Second, the timeline shape has changed. Amazon and Google deals are long-dated build-outs measured in years, while the SpaceX agreement is measured in weeks, suggesting Anthropic needed immediate capacity to keep pace with demand.
How to Maximize the New Claude Limits as a Paying User
- Consolidate Multi-Session Work: Long-horizon code refactors and multi-step builds that previously required splitting across two billing windows now fit in a single session, reducing context switching and improving workflow efficiency.
- Run Longer Agent Loops: The doubled five-hour usage budget means roughly twice as many agent loops per session before hitting rate limits, enabling more complex automation tasks without interruption.
- Increase Production Concurrency: Teams running production agents on Opus 4.7 can increase concurrent workloads without renegotiating limits with Anthropic sales, improving throughput for research assistants, document-processing pipelines, and multi-agent stacks.
It is important to note that these changes apply only to paid plans. Free-tier users at claude.ai were not mentioned in the announcement and should not assume these improvements apply to their accounts. The per-token pricing on the API has not changed; what moved is throughput and availability.
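Even with doubled budgets, production clients should still handle the occasional rate-limit response gracefully rather than failing outright. A common approach is exponential backoff with jitter, sketched below; `RateLimitError` here is a generic stand-in for whatever 429-style error your client library raises, not a specific Anthropic SDK class.

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the rate-limit (HTTP 429) error a client would raise."""

def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(), retrying on RateLimitError with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # budget exhausted; surface the error to the caller
            # Exponential backoff (1s, 2s, 4s, ...) plus random jitter
            # so many clients do not all retry in lockstep.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```

Wrapping each API call in a helper like this means the new, rarer rate limits become transparent pauses instead of failed jobs.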
What Does This Signal About AI's Future?
Two years ago, the binding constraint on AI model quality was talent. One year ago, it shifted to algorithmic ideas around post-training and reasoning. In 2026, the binding constraint is electrical power and GPU supply. Today's announcement reads like Anthropic loosening that constraint and immediately handing the headroom back to users. It is the cleanest signal yet that capacity is no longer the bottleneck for paying Claude customers in mid-2026.
The SpaceX partnership also hints at Anthropic's confidence in its product roadmap. Securing 300 megawatts of capacity plus expressing interest in orbital compute suggests the company is preparing for significant new capabilities, possibly including Claude Opus 5 or other major model releases that would require substantial computing resources to train and serve at scale.