TX_034 · Devices & Hardware

Nvidia: $1T in Blackwell + Rubin orders through 2027. Groq LPU pushes 35x tokens-per-watt.

Jensen Huang's GTC 2026 keynote raised projected Blackwell + Vera Rubin orders from last year's $500B figure to $1 trillion through 2027 [CNBC GTC keynote]. Nvidia also unveiled the Groq 3 LPU, which delivers a 35x tokens-per-watt boost when paired with Rubin GPUs.

── What shipped ──

Three announcements beyond the Rubin platform itself (TX_033):

  • Nvidia Groq 3 LPU. The first Groq chip since Nvidia absorbed most of the company in a $20B asset purchase in December 2025. The Groq LPX rack increases tokens-per-watt by 35x when paired with Rubin GPUs [Nvidia newsroom].
  • Kyber rack architecture (2027). 144 GPUs in vertical compute trays instead of horizontal — a density and latency play. Expected to ship in 2027.
  • Nvidia Space Computing. A program to put AI inference into Earth orbit, in partnership with launch and satellite operators.

── Why it matters ──

The $1T order projection is the headline. Doubled in twelve months. Two reads of that number:

  • Demand-side: AI infrastructure spend continues to compound faster than even Nvidia's own internal forecasts. Hyperscalers, sovereign-AI programs, and neoclouds are all pulling forward orders.
  • Supply-side: Nvidia is signalling to TSMC, HBM suppliers, and packaging partners that capacity commitments need to grow proportionally. Expect knock-on capex announcements through Q3.

The Groq LPU pairing is the most underrated announcement. Tokens-per-watt is the metric inference economics will increasingly be priced on, particularly as AI moves into latency-sensitive consumer products and on-device deployments. A 35x boost, even if real-world numbers come in at half that, is the kind of efficiency gain that makes inference-heavy products viable that are not economical today.
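To see why tokens-per-watt maps directly onto inference economics, here is a minimal back-of-the-envelope sketch. The throughput and electricity figures are illustrative assumptions, not Nvidia or Groq numbers; only the 35x multiplier comes from the announcement.

```python
def energy_cost_per_million_tokens(tokens_per_joule, price_per_kwh):
    """Electricity cost (USD) to generate one million tokens.

    tokens_per_joule: tokens produced per joule of energy (1 W·s = 1 J),
                      i.e. the tokens-per-watt efficiency figure.
    price_per_kwh: electricity price in USD per kilowatt-hour.
    """
    joules_needed = 1_000_000 / tokens_per_joule
    kwh_needed = joules_needed / 3_600_000  # 1 kWh = 3.6 million joules
    return kwh_needed * price_per_kwh

# Hypothetical baseline: 50 tokens per joule at $0.08/kWh.
baseline = energy_cost_per_million_tokens(tokens_per_joule=50, price_per_kwh=0.08)
# Same workload with the claimed 35x tokens-per-watt improvement.
improved = energy_cost_per_million_tokens(tokens_per_joule=50 * 35, price_per_kwh=0.08)

print(f"baseline:  ${baseline:.6f} per million tokens")
print(f"with 35x:  ${improved:.6f} per million tokens")
```

The energy cost per token falls linearly with the efficiency multiplier, which is why even a real-world gain of half the claimed figure would still reprice what counts as an affordable inference-heavy product.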

Kyber-class density is a 2027 problem. Buying decisions made today should not factor it in.

── Editor's take ──

Nvidia at GTC 2026 looks like a company that has stopped trying to convince anyone of the AI demand thesis. The $1T number is the new floor; the upside is whatever sovereign-AI and inference-at-scale build-outs add on top. The risk to the thesis is no longer demand. It is whether AMD, sovereign chip efforts, or model-architecture changes (sparser MoE, smaller distillations) reduce per-task compute requirements faster than capacity scales. So far the answer is no. We'll keep watching.
