Mistral Large 3 ships as 41B-active sparse MoE under Apache 2.0

Mistral 3 family launched with three dense small models (3B, 8B, 14B) and Mistral Large 3 — a sparse MoE with 41B active and 675B total parameters. All under Apache 2.0. Large 3 hits #2 in OSS non-reasoning on LMArena.

Mistral launched the Mistral 3 family in early 2026. The lineup includes three dense small models (3B, 8B, 14B) and Mistral Large 3 — a sparse mixture-of-experts model with 41B active and 675B total parameters [Mistral 3 announcement].

── What shipped ──

The full lineup, all under Apache 2.0:

  • Mistral Small 3B — edge and on-device
  • Mistral Small 8B — efficient general-purpose
  • Mistral Small 14B — strong general-purpose
  • Mistral Large 3 — flagship sparse MoE, 41B active / 675B total

Mistral Large 3 debuts at #2 in OSS non-reasoning models and #6 amongst OSS models overall on the LMArena leaderboard [Mistral models].

── Why it matters ──

Three signals.

One — sparse MoE at frontier scale, fully open. Mistral Large 3's 675B total parameters put it above Llama 4 Maverick's 400B, and both shipped under permissive licences within months of each other. The open-weight frontier in 2026 is no longer "behind closed-weight by 18 months" — the gap has narrowed materially.
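The 41B-active / 675B-total split is the defining property of a sparse MoE: each token is routed to a small top-k subset of experts, so only a fraction of the weights are exercised per forward pass. A toy sketch of top-k routing — hypothetical tiny dimensions and a generic gated-FFN design, not Mistral's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse-MoE feed-forward layer. Dimensions are tiny stand-ins;
# the real model's expert count, width, and router are not public here.
d_model, d_ff = 16, 64
n_experts, top_k = 8, 2   # route each token to 2 of 8 experts

# One weight-matrix pair per expert — these make up the "total" parameters.
W_in = rng.standard_normal((n_experts, d_model, d_ff)) * 0.1
W_out = rng.standard_normal((n_experts, d_ff, d_model)) * 0.1
W_gate = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_forward(x):
    """Route token x to its top-k experts; only their weights are used."""
    logits = x @ W_gate
    top = np.argsort(logits)[-top_k:]                         # chosen experts
    gates = np.exp(logits[top]) / np.exp(logits[top]).sum()   # softmax over top-k
    out = np.zeros_like(x)
    for g, e in zip(gates, top):
        out += g * (np.maximum(x @ W_in[e], 0) @ W_out[e])    # ReLU FFN per expert
    return out, top

x = rng.standard_normal(d_model)
y, chosen = moe_forward(x)

total_params = W_in.size + W_out.size
active_params = top_k * (W_in[0].size + W_out[0].size)
print(f"total expert params: {total_params}")
print(f"active per token:    {active_params} ({active_params / total_params:.0%})")
```

With 2 of 8 experts active, only 25% of expert weights fire per token — the same mechanism, at vastly larger scale, is how a 675B-total model runs with a 41B-active compute cost.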

Two — Apache 2.0 means commercial use without restriction. Llama's licence carries commercial-use carve-outs (the >700M-MAU clause) and acceptable-use restrictions; Mistral 3's plain Apache 2.0 is genuinely permissive. For enterprises sensitive to licence-compliance risk, Mistral wins on legal cleanness even where Llama wins on raw benchmarks.

Three — small models matter more than they used to. The 3B/8B/14B Small variants are sized for on-device and edge inference. With Apple, Samsung, and Google all shipping AI features that prioritise on-device execution, the demand for high-quality 3B–14B models has stepped up sharply.
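For scale, a back-of-envelope sketch of weights-only memory for the Small variants under common quantisation levels. The bytes-per-weight figures are generic assumptions, not Mistral-specific, and real deployments also need room for KV cache and activations:

```python
# Weights-only footprint estimate: parameters × bytes per weight.
# Parameter counts are the announced Small sizes; quantisation levels
# (fp16, int8, int4) are generic assumptions, not Mistral's shipping formats.
models = {"Small 3B": 3e9, "Small 8B": 8e9, "Small 14B": 14e9}
bytes_per_weight = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

footprint_gib = {
    (name, q): n * b / 2**30
    for name, n in models.items()
    for q, b in bytes_per_weight.items()
}

for name in models:
    row = ", ".join(f"{q}: {footprint_gib[(name, q)]:.1f} GiB" for q in bytes_per_weight)
    print(f"{name:9s} → {row}")
```

The arithmetic explains the sizing: a 3B model at int4 fits comfortably in phone-class memory, 14B at int4 suits a laptop NPU, while 14B at fp16 already wants a workstation GPU.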

── Editor's take ──

The Mistral 3 family is the clearest "we are still here" signal from Europe's flagship AI lab. The honest read: not first on capability, not first on flashiness, consistently first on permissive licensing and shipping cadence. For European enterprises, sovereign-AI programmes, and any use case that needs on-prem deployment, Mistral 3 is the practical choice. For consumer-facing API competition, the gap to Anthropic/OpenAI/Google remains.
