FrontierNews.ai

Jensen Huang and Dell CEO to Unveil AI Factory 2.0 with Extreme GPU Density

Nvidia CEO Jensen Huang and Dell CEO Michael Dell will deliver a joint keynote on May 18, 2026, announcing Dell AI Factory 2.0, a new generation of servers designed to accelerate artificial intelligence (AI) workloads with unprecedented GPU density and integrated cooling systems. The announcement marks a significant step in how enterprise customers will build and scale AI infrastructure, combining Dell's server expertise with Nvidia's accelerator technology into a tightly integrated package.

What Is Dell AI Factory 2.0 and Why Does It Matter?

Dell AI Factory 2.0 represents the next phase of the Dell-Nvidia partnership, moving beyond selling individual components to offering complete, pre-integrated systems designed specifically for large-scale AI training and inference. The new servers will feature Nvidia's Blackwell GPUs, the latest generation of accelerators designed for AI workloads, along with advanced liquid cooling technology to manage the heat generated by dense GPU configurations.

The hardware specifications are ambitious. Dell's expanded PowerEdge server line will include both air-cooled and liquid-cooled models, with the liquid-cooled variants (XE9780L and XE9785L) supporting up to 192 Nvidia Blackwell Ultra GPUs per system. For customers needing even greater density, configurations can be customized to support 256 Nvidia Blackwell Ultra GPUs per Dell IR7000 rack (a rack is the standardized physical enclosure that houses multiple servers).
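As a rough sanity check on these density figures, the per-rack GPU counts imply a certain number of server nodes. The sketch below assumes 8 GPUs per node, a common configuration for dense GPU servers but not a figure stated in the article:

```python
# Back-of-envelope density arithmetic for the GPU counts quoted above.
# ASSUMPTION (not from the article): 8 GPUs per server node.
GPUS_PER_NODE = 8

def nodes_required(total_gpus: int, gpus_per_node: int = GPUS_PER_NODE) -> int:
    """Number of server nodes needed to house a given GPU count."""
    # Round up: a partially filled node still occupies a rack slot.
    return -(-total_gpus // gpus_per_node)

print(nodes_required(192))  # -> 24 nodes under this assumption
print(nodes_required(256))  # -> 32 nodes under this assumption
```

Under that assumption, the jump from 192 to 256 GPUs per rack means packing roughly a third more nodes into the same physical footprint, which is why the liquid cooling emphasized in the announcement matters.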

Dell claims these systems will deliver up to four times faster large language model (LLM) training compared with the previous generation PowerEdge XE9680 servers. An LLM is a type of AI model trained on vast amounts of text data to understand and generate human language. Additionally, the PowerEdge XE9712 will include the Nvidia GB300 NVL72, which Dell says offers 50 times more AI reasoning and inference output, meaning the system can process and respond to queries far more quickly.

How Are Dell and Nvidia Bundling Hardware and Software Together?

A key shift in the AI infrastructure market is the move from selling discrete components to offering integrated hardware-software-service bundles. Dell and Nvidia are following this trend by making Nvidia AI Enterprise, a suite of software tools and services, available directly through Dell's sales and support channels. This approach reduces the burden on customers to integrate different vendors' products themselves.

  • Hardware Integration: Dell provides the server chassis, storage systems, and networking infrastructure, while Nvidia contributes the GPU accelerators and software stack, creating a unified system optimized for AI workloads.
  • Software Availability: Nvidia AI Enterprise will be sold and supported directly through Dell, simplifying procurement and ensuring compatibility between hardware and software components.
  • Service and Support: Nvidia AI Enterprise support will flow through Dell's channels, and Nvidia-led breakout sessions on May 18 and 19 at Dell Technologies World will walk customers through AI factories and data activation strategies.

Michael Dell has publicly argued that AI's future follows data, implying that organizations need infrastructure capable of processing data wherever it resides, whether in cloud environments, on-premises data centers, or at the edge of networks. This strategic framing aligns with Dell AI Factory 2.0's design, which aims to serve diverse deployment scenarios.

When Will These Systems Be Available?

Timeline matters for enterprise customers planning infrastructure investments. According to reporting, the PowerEdge XE7745, which will offer the Nvidia RTX Pro 6000 Blackwell Server Edition, is expected to ship in July 2025. Other new server hardware is expected to launch around the same period, though specific availability dates for all models have not been fully disclosed.

The joint keynote at Dell Technologies World on May 18, 2026, at 10 a.m. PT will serve as the formal announcement venue, giving customers and industry analysts a detailed look at the product roadmap and performance expectations.

What Questions Remain About Performance Claims?

Vendor performance claims require independent validation. Dell's assertions that AI Factory 2.0 delivers four times faster LLM training and 50 times more inference output are significant, but these figures come from Dell and Nvidia themselves. Third-party benchmarks and real-world customer deployments will be essential to verify whether these performance gains hold up in production environments with diverse workloads and configurations.
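When independent numbers do arrive, checking a vendor multiplier is simple arithmetic: divide the previous generation's measured time by the new generation's. A minimal sketch, with invented placeholder times rather than real benchmark results:

```python
# Compare a measured speedup against a vendor-claimed multiplier.
# The benchmark times below are INVENTED placeholders, not real data.

def measured_speedup(old_seconds: float, new_seconds: float) -> float:
    """Speedup factor: how many times faster the new system completed."""
    return old_seconds / new_seconds

# Hypothetical: previous-gen trains a workload in 100 h, new-gen in 30 h.
speedup = measured_speedup(100.0, 30.0)
claimed = 4.0
print(f"measured {speedup:.2f}x vs claimed {claimed:.1f}x")
```

In this invented case the measured 3.33x would fall short of the 4x claim, illustrating why per-workload verification matters: vendor figures typically reflect a best-case configuration.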

"This is a great addition to the AI factory with Nvidia that Dell has, because it's going to really help customers adopt AI servers faster than ever before," stated Varun Chharba, Senior Vice President of Infrastructure and Telecom Marketing at Dell.


Beyond hardware performance, customers and analysts should watch for when Nvidia AI Enterprise images and managed-service agreements appear in Dell's official product catalog, and whether pricing information becomes public. These details will determine how accessible and affordable the integrated solution is for different customer segments, from hyperscale cloud providers to mid-market enterprises.

The Dell-Nvidia partnership reflects a broader industry trend toward extreme GPU density and integrated infrastructure stacks. As enterprises race to build and deploy AI applications, the ability to provision high-performance compute quickly and reliably has become a competitive advantage. The May 2026 announcement will clarify whether Dell AI Factory 2.0 delivers on its promises and how it positions both companies in the rapidly evolving AI infrastructure market.