DeepSeek V4 ships at 97% below GPT-5.5 — and it runs on Huawei silicon

DeepSeek V4 ships as 1.6T-parameter Pro and 284B-parameter Flash variants under an MIT license. Pricing is reportedly 97% below OpenAI's GPT-5.5. The under-reported story: V4 is the first DeepSeek model optimised for Huawei Ascend chips.

DeepSeek released a preview of V4 on April 24 with two open-weight variants under an MIT license: the 1.6-trillion-parameter Pro and the 284-billion-parameter Flash. Both ship with a 1M-token context window [CNBC].

── What shipped ──

V4 is DeepSeek's biggest model line yet. Pricing is reportedly 97% below OpenAI's GPT-5.5 [SCMP], a gap large enough to force a market-wide repricing if the capability claims hold.

The headline that didn't make every press release: V4 is the first DeepSeek model optimised for domestic Chinese chips, specifically Huawei's Ascend series [MIT Technology Review].

── Why it matters ──

The pricing is the loud story; the chip optimisation is the consequential one.

Until now, every frontier-class model — including DeepSeek's earlier releases — has been trained and served on Nvidia silicon. US export controls on Nvidia H100/H200 hardware to China have made this an increasingly fragile dependency for Chinese labs. V4 running natively on Huawei Ascend is the first credible demonstration that the dependency can be broken at frontier scale.

For the global AI market, two implications:

  • Price floor moves. A 97%-cheaper model with credible quality forces every API vendor to either match price or differentiate sharply on capability. Expect rapid pricing pressure on the cheap-fast tier.
  • The chip-as-moat thesis weakens. Nvidia's premium pricing rests partly on the assumption that frontier training requires CUDA. DeepSeek shipping a frontier-class model on Huawei silicon punctures that, at least for inference workloads.
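To make the repricing pressure concrete, here is a minimal sketch of what a 97% price gap means for a high-volume API workload. Both per-token prices and the monthly volume are placeholder figures chosen for illustration; the article reports only the 97% ratio, not absolute prices.

```python
# Hypothetical illustration of the reported 97% price gap.
# Neither price is from the article; both are placeholder figures.
gpt_price_per_mtok = 10.00  # assumed GPT-5.5 price, USD per 1M tokens
v4_price_per_mtok = gpt_price_per_mtok * (1 - 0.97)  # "97% below"

monthly_mtok = 500  # assumed workload: 500M tokens per month
cost_gpt = gpt_price_per_mtok * monthly_mtok
cost_v4 = v4_price_per_mtok * monthly_mtok

print(f"GPT-5.5: ${cost_gpt:,.2f}/month")
print(f"V4:      ${cost_v4:,.2f}/month")
print(f"Savings: ${cost_gpt - cost_v4:,.2f}/month")
```

At these assumed figures the same workload drops from $5,000 to $150 per month — the kind of gap that forces a vendor to either match price or justify a 33x premium on capability.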

MIT Technology Review's read is more measured: V4 is "likely the best open-source option" but "not competitive with frontier U.S. models." That gap matters. The 97% price cut is meaningless if the model fails on tasks where the Western frontier still leads.

── Editor's take ──

Two things to watch. First, independent benchmarks across coding, multilingual, and long-context tasks — DeepSeek's self-reported numbers warrant scepticism. Second, whether US export-control policy adjusts now that frontier-scale inference on domestic chips has been demonstrated: the leverage those controls provide just shrank.
