The Cooling Crisis Nobody's Talking About: Why AI Data Centers Are Ditching Water for Liquid Refrigerant

As AI data centers race to scale, they're running into a problem that has nothing to do with chips: cooling systems that guzzle water and energy while creating environmental friction with host communities. A new generation of two-phase liquid cooling technology is emerging as a potential solution, offering data center operators a way to dramatically reduce both power consumption and water use without sacrificing performance or reliability.

Why Are Data Centers Struggling With Cooling in the First Place?

Traditional air cooling works fine for small server rooms, but when you're running thousands of GPUs simultaneously to train massive AI models, the heat becomes overwhelming. The industry's first attempt to solve this problem was single-phase liquid cooling, where treated water circulates directly to the chips. But this approach introduced significant complications: leak risks that could destroy expensive hardware, corrosion concerns, and constant water-quality maintenance requirements. More important, the complexity made these systems nearly impossible to distribute through standard IT supply chains, keeping liquid cooling accessible only to the largest hyperscalers.

Now, communities hosting these data centers are pushing back. Billions of dollars in planned projects have already been delayed or blocked due to concerns about strain on local water and power supplies. This has created an urgent need for cooling solutions that are both efficient and practical to deploy at scale.

How Does Two-Phase Cooling Work Differently?

Two-phase cooling uses a non-conductive dielectric refrigerant with an A1 safety rating and low global warming potential. Unlike single-phase systems, no water enters the IT rack at all, which means leak events pose minimal risk to GPUs or server electronics. The refrigerant boils as it absorbs heat at the chip and condenses as it releases that heat downstream; because the heat is carried as latent heat of vaporization rather than as a temperature rise in the coolant, the thermal transfer is more efficient than sensible-only cooling.
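To make the latent-heat advantage concrete, here is a back-of-envelope comparison of the coolant mass flow needed to remove one kilowatt of chip heat via sensible heating (a single-phase water loop) versus boiling (a two-phase refrigerant). The property values and the assumed temperature rise are illustrative textbook-order figures, not Accelsius specifications.

```python
# Back-of-envelope: coolant mass flow to remove 1 kW of heat.
# All property values below are illustrative assumptions.

HEAT_LOAD_KW = 1.0            # heat to remove from one chip, kW

# Single-phase water loop: heat absorbed as a temperature rise.
CP_WATER = 4.18               # specific heat of water, kJ/(kg*K)
DELTA_T = 10.0                # assumed coolant temperature rise, K
flow_water = HEAT_LOAD_KW / (CP_WATER * DELTA_T)    # kg/s

# Two-phase refrigerant: heat absorbed as latent heat of boiling.
H_FG = 190.0                  # assumed latent heat of vaporization,
                              # kJ/kg (order of magnitude for low-GWP
                              # dielectric refrigerants)
flow_refrigerant = HEAT_LOAD_KW / H_FG              # kg/s

print(f"water flow:       {flow_water * 1000:.1f} g/s")
print(f"refrigerant flow: {flow_refrigerant * 1000:.1f} g/s")
```

Under these assumptions the refrigerant moves the same heat with roughly a quarter of the mass flow, and it does so nearly isothermally, since boiling happens at a constant temperature rather than requiring the coolant to warm up.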

The efficiency gains are substantial. Industry studies have shown that two-phase cooling systems can reduce cooling energy consumption by up to 90 percent compared to air-cooled alternatives, while simultaneously eliminating millions of gallons of annual water use. Independent analysis by Jacobs Engineering found that two-phase solutions deliver 35 to 44 percent annual operating expense savings and 8 to 17 percent five-year total cost of ownership savings compared to single-phase direct-to-chip systems.
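A quick way to see what a 90 percent cut in cooling energy means at the facility level is to translate it into power usage effectiveness (PUE). The baseline figures below (a 10 MW IT load and a 40 percent cooling overhead for an air-cooled site) are illustrative assumptions, not numbers from the cited studies.

```python
# Illustrative PUE arithmetic for a 90% cooling-energy reduction.
# Baseline figures are assumptions, not measured facility data.

IT_LOAD_MW = 10.0             # assumed IT (compute) load
COOLING_FRACTION = 0.40       # assumed cooling overhead, air-cooled

baseline_cooling = IT_LOAD_MW * COOLING_FRACTION    # 4.0 MW
reduced_cooling = baseline_cooling * (1 - 0.90)     # 90% reduction

pue_before = (IT_LOAD_MW + baseline_cooling) / IT_LOAD_MW
pue_after = (IT_LOAD_MW + reduced_cooling) / IT_LOAD_MW

print(f"PUE before: {pue_before:.2f}")   # 1.40
print(f"PUE after:  {pue_after:.2f}")    # 1.04
```

Under these assumptions, the same 10 MW of compute sheds 3.6 MW of cooling load, capacity an operator can hand back to the grid or spend on more GPUs.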

What's Changing in the Market Right Now?

Accelsius, a company founded by Innventure Inc. (NASDAQ: INV), announced the general availability of the NeuCool IR150 at Data Center World 2026. This is significant because it's the industry's first fully integrated rack-level cooling solution that combines a two-phase Coolant Distribution Unit (CDU), 42U of IT rack space, and built-in liquid and vapor manifolds in a single 800-millimeter-wide enclosure, offering up to 150 kilowatts of cooling capacity.
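To put the 150 kW and 42U figures in context, here is a rough sizing sketch of how many AI servers one such rack envelope could host. The per-server power draw and chassis height are illustrative assumptions (typical of dense 8-GPU systems), not Accelsius or server-vendor specifications.

```python
# Rough rack-sizing sketch for a 150 kW, 42U enclosure.
# Server power and height are illustrative assumptions.

RACK_POWER_KW = 150.0   # cooling capacity per the announcement
RACK_SPACE_U = 42       # IT rack space per the announcement

SERVER_POWER_KW = 10.0  # assumed draw of one 8-GPU AI server
SERVER_HEIGHT_U = 4     # assumed chassis height in rack units

# The rack fills up against whichever limit binds first.
power_limited = int(RACK_POWER_KW // SERVER_POWER_KW)   # 15 servers
space_limited = RACK_SPACE_U // SERVER_HEIGHT_U         # 10 servers

servers = min(power_limited, space_limited)
print(f"servers per rack: {servers} (~{servers * 8} GPUs)")
```

Under these assumptions the 42U of space, not the 150 kW of cooling, is the binding constraint, which suggests the thermal envelope leaves headroom for denser or hotter next-generation servers.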

What makes this different from previous liquid cooling offerings is its design philosophy. The IR150 is a true plug-and-play system that moves through established IT infrastructure channels, making two-phase liquid cooling accessible not just to hyperscalers but to enterprises and smaller operators for the first time. Its fully integrated form factor is particularly suited for edge deployments and the small language model workloads that will proliferate as AI matures.

"Hyperscalers and neoclouds are under enormous pressure to deliver AI capacity faster than ever, and that urgency is understandable. Our message at Data Center World is simple: moving fast and planning responsibly are not mutually exclusive," said Josh Claman, CEO of Accelsius.


How Are Data Center Operators Preparing for Deployment?

Accelsius also launched the NeuCool HyperStart program, a structured initiative designed to help hyperscale operators, neocloud providers, and key partners validate two-phase direct-to-chip liquid cooling solutions. The program provides early engineering support, deployment planning, and technical validation to accelerate readiness for high-density, large-scale AI deployments. Several hyperscale AI cloud providers have already engaged with Accelsius under the program as they build out cooling roadmaps for next-generation AI infrastructure.

  • Integrated Product Suite: The NeuCool product family includes the IR150 for rack-level cooling, the MR250 row-based CDU delivering 250-plus kilowatts of cooling capacity per rack, and the NeuCool Thermal Simulation Rack (TSR), a thermal test platform that allows operators to evaluate solutions before full deployment.
  • Scalability Across Deployment Stages: The product lineup is designed to scale from single-rack evaluation through full data center deployment, allowing operators to test and validate technology before committing to large-scale rollouts.
  • Community and Sustainability Focus: By reducing water consumption and energy use, these solutions address the environmental and resource concerns that have blocked billions of dollars in data center projects in various communities.

The timing of these announcements reflects a critical inflection point in the data center industry. Hyperscale and neocloud operators are moving at unprecedented speed to bring gigawatt-class AI facilities online, but that velocity has triggered increasing public scrutiny of power and water use in host communities. The challenge facing the industry is clear: continue scaling AI infrastructure while demonstrating responsible stewardship of local resources.

For enterprises and smaller operators who have been locked out of liquid cooling due to complexity and cost, the IR150 represents a meaningful shift. It removes the barrier to entry that has kept advanced cooling technology confined to the largest players. As AI workloads continue to proliferate across organizations of all sizes, having cooling solutions that scale from edge deployments to hyperscale facilities becomes increasingly important.

The broader implication is that the data center industry may finally be moving beyond the false choice between speed and responsibility. By embedding efficient cooling architectures into reference designs from day one, operators can avoid the costly mistake of scaling infrastructure that will eventually need to be retrofitted or replaced. For communities hosting these facilities, that shift could mean the difference between welcoming AI infrastructure as an economic asset or viewing it as an environmental liability.