Live: Will NVIDIA (NVDA) Continue To Surge After Q2 Earnings?
Here’s a summary of all of Wall Street’s questions along with NVIDIA’s response:
Q: With 12-month wafer-to-rack lead times and Rubin on track for 2H, what's the growth vision into 2026, and what's the networking vs. data-center split?
A: Growth is driven by “agentic/reasoning” AI, which can require 100×–1,000× more compute than one-shot chat. NVIDIA built the NVLink-72 rack-scale Blackwell system for this moment, delivering big speed/efficiency gains in token generation. Over ~5 years, NVIDIA expects to scale Blackwell → Rubin (and successors) into a ~$3–$4 trillion AI-infrastructure opportunity; top 4 CSP capex has already doubled to ~$600B/year.
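The 100×–1,000× compute claim comes down to token math: compute scales roughly with tokens processed, and agentic/reasoning workloads multiply tokens through long chains of thought, tool loops, and multiple model calls. A minimal sketch, with all per-step numbers being hypothetical illustrations (not figures from the call):

```python
# Why agentic/reasoning AI needs far more compute than one-shot chat:
# inference compute scales roughly with tokens generated, and reasoning
# workloads multiply tokens several ways at once.
# All numbers below are hypothetical illustrations.

one_shot_tokens = 500                # a single chat response
reasoning_tokens_per_step = 2_000    # hypothetical chain-of-thought length
steps = 25                           # hypothetical tool/reflection loop
model_calls = 4                      # hypothetical parallel samples

agentic_tokens = reasoning_tokens_per_step * steps * model_calls
multiplier = agentic_tokens / one_shot_tokens
print(f"{multiplier:.0f}x")  # 400x, inside the 100x-1,000x range cited
```

Even modest per-factor assumptions compound into the cited range, which is the point of the rack-scale NVLink-72 design: the extra tokens must be generated fast and cheaply.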
Q: What must happen to realize the $2–$5B China revenue this quarter, and what’s sustainable into Q4?
A: There’s demand for H20; NVIDIA has supply and initial licenses. The actual range depends on additional licenses and geopolitical dynamics. More approvals → more H20 shipments.
Q: Do customer ASICs shift spend away from NVIDIA GPUs?
A: ASIC success is rare because accelerated computing is full-stack co-design and models evolve rapidly. NVIDIA’s platform is everywhere (cloud to edge), supports every framework, and accelerates the whole pipeline (data → pretrain → RL → inference). Platform breadth (GPUs, CPUs, memory, SuperNIC, NVLink, Quantum InfiniBand, Spectrum-X Ethernet, Spectrum-X GS) plus best perf/W and perf/$ keeps lifetime utility and margins highest—hence broad customer preference for NVIDIA.
Q: Clarify the $3–$4T data-center infrastructure outlook and NVIDIA’s share; are power limits a bottleneck?
A: Hyperscaler capex is ~$600B/year and rising, broadening beyond the top-4 CSPs into enterprises and other regions. A 1-GW AI factory can cost ~$50–$60B; NVIDIA is an AI-infrastructure company (six chip types for Rubin). The limiters will be power and buildings; perf/W directly drives factory revenue (“tokens per MW”), and NVIDIA focuses on maximizing it. The $3–$4T figure through the end of the decade is “sensible.”
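The “tokens per MW” logic can be sketched as a toy model: once a factory is power-limited, revenue scales linearly with tokens generated per megawatt, so every perf/W gain drops straight to the top line. All parameter values below are hypothetical, not figures from the call:

```python
# Toy model of why perf/W drives AI-factory revenue: power is the fixed
# budget, so revenue scales linearly with tokens per megawatt.
# All numbers below are hypothetical illustrations.

POWER_BUDGET_MW = 1_000.0          # a 1-GW factory
PRICE_PER_M_TOKENS = 2.0           # hypothetical $/1M tokens
SECONDS_PER_YEAR = 365 * 24 * 3600

def annual_revenue(tokens_per_mw_per_sec: float) -> float:
    """Yearly revenue for a power-limited factory at a given perf/W."""
    tokens_per_year = tokens_per_mw_per_sec * POWER_BUDGET_MW * SECONDS_PER_YEAR
    return tokens_per_year / 1e6 * PRICE_PER_M_TOKENS

# Doubling perf/W doubles revenue at the same power draw.
base = annual_revenue(1e6)
assert annual_revenue(2e6) == 2 * base
```

This is why the answer frames power, not chips, as the binding constraint: at a fixed megawatt budget, perf/W is the only lever left.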
Q: Long-term China prospects and the importance of licensing Blackwell there?
A: China is a ~$50B opportunity this year if NVIDIA can serve it competitively, and could grow ~50% YoY. ~50% of AI researchers are in China; many leading open-source models originate there, fueling enterprise adoption globally. H20 is approved for non-entity-listed firms (with licenses). NVIDIA is advocating for Blackwell access; bringing Blackwell to China is a “real possibility,” subject to U.S. policy.
Q: Scope for Spectrum-X GS and NVIDIA networking?
A: NVIDIA now offers three layers: NVLink (scale-up) for giant virtual GPUs (key to reasoning workloads), InfiniBand & Spectrum-X Ethernet (scale-out), and Spectrum-X GS (scale-across) to interconnect multiple AI factories. Networking choice can lift effective cluster throughput from ~65% to 85–90%, creating $10–$20B of effective benefit in a $50B 1-GW factory—so networking is economically pivotal. Spectrum-X, only ~1.5 years old, has already become a sizable “home run.”
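The networking arithmetic can be sanity-checked: the lower end of the quoted $10–$20B falls straight out of the utilization gain applied to the factory's cost base, while the upper end presumably reflects revenue leverage beyond this simple model. A quick check (figures from the answer; variable names are mine):

```python
# Sanity check of the cluster-throughput benefit cited above.
# Dollar and utilization figures are from the call; names are illustrative.

factory_cost = 50e9              # ~$50B for a 1-GW AI factory
baseline_util = 0.65             # effective throughput, generic networking
improved_util = (0.85, 0.90)     # with NVIDIA networking

# Effective benefit = extra usable throughput applied to the cost base.
benefit = tuple((u - baseline_util) * factory_cost for u in improved_util)
print([f"${b/1e9:.1f}B" for b in benefit])  # ['$10.0B', '$12.5B']
```

The lower bound matches the ~$10B cited; the gap to $20B suggests the call is valuing recovered throughput at more than the hardware's cost.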
Q: How to apportion the +$7B guide (Q/Q) across Blackwell vs. Hopper vs. networking?
A: Blackwell is the “lion’s share” of data center growth; Hopper (H100/H200 HGX) is still selling; networking rises alongside Blackwell due to NVLink-rich systems. No finer split given.
Q: Rubin transition—incremental capability and size of step vs. Blackwell?
A: NVIDIA is on an annual cadence to keep lifting perf/W and customer revenue. Blackwell delivers ~order-of-magnitude higher perf/W vs. Hopper for reasoning; Rubin brings “a lot of great ideas” to be detailed at GTC. Today, NVIDIA is ramping GB200 and GB300 (Blackwell Ultra); 2024 set a record, and 2025 is expected to be another record year. Rubin consists of six new chips and is already in fab; the supply chain will be more mature at launch.
Q: With the AI market growing at ~50% CAGR, can NVIDIA data-center revenue grow at least in line next year; what are the visibility and puts/takes?
A: NVIDIA has strong forecasts from large customers and rising wins. AI funding is surging. Capacity is tight (H100/H200 sold out; CSPs renting from each other). With hyperscaler capex at ~$600B/year, NVIDIA expects multi-year, through-the-decade growth as a significant share of that spend.
Closing notes from Jensen (summary):
Blackwell (NVLink-72 rack-scale) is the platform the market needed as reasoning AI drives huge compute jumps. Blackwell Ultra is ramping fast with extraordinary demand. Rubin (third-gen NVLink rack-scale) is already in fab with six chips and will scale into the $3–$4T AI-factory build-out. Customers are moving from tens of MW (Hopper) to hundreds of MW (Blackwell) and, soon, multi-GW multi-site “AI super-factories” (Rubin). The “AI race is on.”