Jensen Huang Says Nvidia Could Hit $3 Trillion in Revenue. Here's Why He Believes It.

Nvidia CEO Jensen Huang believes the company could become a $3 trillion revenue business in the near future, a staggering projection that reflects how rapidly artificial intelligence is reshaping global computing infrastructure. Speaking on a recent podcast with Lex Fridman, Huang explained that Nvidia's growth isn't constrained by traditional market-share battles but rather by the sheer scale of opportunity AI is creating across healthcare, telecom, and other industries.

What Makes Huang So Confident About Nvidia's Growth Potential?

Huang's optimism rests on a fundamental shift in how companies are deploying artificial intelligence. Unlike traditional semiconductor markets where growth comes from stealing customers from competitors, Nvidia is creating entirely new demand by bringing AI platforms into industries that have never used them before. As these sectors adopt AI systems to solve real-world problems, they need continuous computing power not just to train models, but also to run them in production.

The numbers backing this vision are striking. Nvidia's annual revenue reached $215 billion in the most recent full year, up 65% from the prior year. Just three years ago, the company's entire annual revenue was $27 billion, less than what it now generates in a single quarter. In the latest quarter alone, Nvidia reported $68 billion in revenue, with data center revenue hitting $62.31 billion, up 75% year-over-year.
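As a back-of-the-envelope check on the figures above, a short sketch can show how many years of compounding at the current growth rate would carry $215 billion in annual revenue to the $3 trillion mark (the 65% rate and dollar figures come from the article; sustaining that pace is of course the big assumption):

```python
import math

def years_to_target(current_rev_b, annual_growth, target_rev_b):
    """Whole years of compounding needed for revenue to reach the target (in $B)."""
    return math.ceil(math.log(target_rev_b / current_rev_b) / math.log(1 + annual_growth))

# Article's figures: $215B annual revenue growing 65% per year, target $3T ($3,000B).
print(years_to_target(215, 0.65, 3000))  # -> 6
```

In other words, roughly six more years at the current growth rate would get there, which is why the sustainability of that rate is the question the rest of this piece turns on.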

During Nvidia's recent GTC conference, Huang raised his revenue estimate for Blackwell and Vera Rubin system sales from $500 billion to $1 trillion through 2027. Vera Rubin, Nvidia's next-generation system, is already in full production and launching later this year.

How Is the Memory Supply Chain Struggling to Keep Up?

The infrastructure supporting Nvidia's growth is straining under demand. Memory suppliers, which provide the high-bandwidth memory (HBM) that every Nvidia GPU cluster requires, are experiencing unprecedented pressure. DRAM prices surged 90% to 95% in recent months, and suppliers are booked solid through 2027.

Micron Technology, a major memory supplier, reported record fiscal Q1 2026 DRAM revenue of $10.8 billion, up 69% year-over-year. CEO Sanjay Mehrotra stated plainly that the demand-supply gap is the highest the company has ever seen. Micron has locked in agreements for its entire 2026 HBM supply, yet the company can meet only half to two-thirds of demand from several key customers.

"The gap between the demand and supply for all of DRAM, including HBM, is really the highest that we have ever seen," stated Sanjay Mehrotra, CEO at Micron Technology.

Equipment makers supplying the factories that manufacture these memory chips are also experiencing record demand. Applied Materials reported record DRAM revenue in Q1 FY2026, with DRAM now representing 34% of its semiconductor systems sales. Lam Research posted $5.34 billion in Q2 revenue with $2.68 billion in deferred revenue, signaling customers are locking in equipment capacity well in advance.

What's Driving This Unprecedented Demand Surge?

The catalyst behind all this growth is what Huang describes as "the agentic AI inflection point." Agentic AI refers to AI systems that can autonomously plan and execute tasks, not just respond to prompts. These systems require even more computing power than traditional AI models because they need to continuously process information and make decisions.

Huang explained that Nvidia isn't limited by physical constraints or existing market boundaries. Instead, the company's revenue ceiling is determined by how many new industries and applications adopt AI technology. Consider the following factors driving this expansion:

  • Healthcare Applications: Nvidia is establishing AI platforms in healthcare, where systems can analyze medical data and improve diagnostic accuracy, creating sustained demand for computing power.
  • Telecom Infrastructure: Telecommunications companies are deploying AI to optimize networks and improve service quality, requiring continuous GPU and memory resources.
  • Real-World Problem Solving: As AI systems move from research labs into production environments, they need compute not just for training but for ongoing inference, where models "think through" questions and perform their jobs.
  • Annual Hardware Upgrades: Nvidia's commitment to updating its chip architecture annually means existing customers must expand their infrastructure to adopt new capabilities, creating recurring revenue opportunities.

How to Assess Nvidia's Path to $3 Trillion Revenue

  • Track Quarterly Data Center Revenue: Monitor Nvidia's quarterly earnings reports for data center revenue growth rates. The company's most recent quarter showed 75% year-over-year growth, a pace that would support Huang's long-term projections if sustained.
  • Watch Memory Supply Constraints: Follow reports from Micron, Samsung, and SK Hynix on HBM production capacity and order backlogs. Suppliers booked through 2027 indicate sustained demand, but any slowdown in orders would signal weakening AI adoption.
  • Observe Competitive Threats: While Amazon and Meta have designed some of their own chips, neither has suggested these will replace Nvidia's offerings. Monitor announcements from these companies about in-house chip deployment rates to gauge whether custom silicon is eroding Nvidia's market share.
  • Evaluate Industry Adoption Rates: Look for announcements from healthcare, telecom, and other industries about AI infrastructure investments. Broader adoption across sectors supports Huang's thesis that Nvidia is creating new markets rather than competing for existing ones.
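The first checkpoint above can be made concrete with a little arithmetic: given a target year, what sustained compound annual growth rate (CAGR) would Nvidia need to turn today's $215 billion into $3 trillion? This is an illustrative sketch using the article's figures, not a forecast:

```python
def required_cagr(current_rev_b, target_rev_b, years):
    """Compound annual growth rate needed to grow revenue to the target in `years` years."""
    return (target_rev_b / current_rev_b) ** (1 / years) - 1

# For example: sustaining what rate turns $215B into $3T ($3,000B) over five years?
rate = required_cagr(215, 3000, 5)
print(f"{rate:.1%}")  # -> 69.4%
```

A five-year path demands roughly 69% compound growth, close to the 75% data center growth rate in the latest quarter, which is why tracking whether that quarterly pace holds is the single most direct test of Huang's projection.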

Huang's $3 trillion projection may sound audacious, but it's grounded in real dynamics. Nvidia has maintained its lead through first-mover advantage with GPUs designed specifically for AI, annual chip updates that make competitors' offerings obsolete, and a complete ecosystem that locks customers into its platform. The memory supply crisis, where even record-breaking production can't meet demand, suggests the bottleneck isn't Nvidia's ability to sell chips but rather the industry's ability to manufacture supporting components.

The company faces competition from other chip designers and from customers building their own silicon, but these alternatives haven't dented Nvidia's dominance. As AI moves from experimental projects into core business operations across industries, the computing infrastructure required will likely dwarf today's data center buildout. If Huang is right about the scale of this transition, $3 trillion in annual revenue may not be a ceiling but a waypoint on a much longer growth trajectory.