#open-source
// 5 transmissions tagged with #open-source
Mistral Medium 3.5 lands as a 128B dense model with agentic features
Mistral shipped Medium 3.5 on April 29: a 128B dense model with new agentic primitives. The Paris lab keeps up its open-weight cadence while its American competitors keep their frontier models closed.
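If the new agentic primitives surface through Mistral's existing chat-completion tool-calling interface, a call might look like the sketch below. The model string `mistral-medium-3.5` and the `search_flights` tool are assumptions for illustration; only the SDK's `chat.complete` tool-calling shape is real today.

```python
import os
from mistralai import Mistral

MODEL = "mistral-medium-3.5"  # assumed identifier; not a confirmed model string

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# One tool in the function-calling schema the chat API already accepts.
tools = [{
    "type": "function",
    "function": {
        "name": "search_flights",  # illustrative tool, not a real backend
        "description": "Search flights between two cities.",
        "parameters": {
            "type": "object",
            "properties": {
                "origin": {"type": "string"},
                "destination": {"type": "string"},
            },
            "required": ["origin", "destination"],
        },
    },
}]

resp = client.chat.complete(
    model=MODEL,
    messages=[{"role": "user", "content": "Find a flight from Paris to Oslo."}],
    tools=tools,
    tool_choice="auto",
)

# If the model elects to act, the call arrives as structured JSON.
for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```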
DeepSeek V4 ships at 97% below GPT-5.5, and it runs on Huawei silicon
DeepSeek V4 ships in two MIT-licensed variants: 1.6T-parameter Pro and 284B Flash. Pricing lands 97% below OpenAI's GPT-5.5. The quieter story: V4 is the first model optimised for Huawei Ascend chips.
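A quick sanity check on what a 97% discount means per million tokens. The GPT-5.5 rate below is a placeholder assumption; only the discount figure comes from the announcement.

```python
# Illustrative only: the GPT-5.5 rate is an assumed placeholder
# (USD per million input tokens); the 97% figure is the claimed discount.
gpt55_per_mtok = 10.00
discount = 0.97

v4_per_mtok = gpt55_per_mtok * (1 - discount)
print(f"GPT-5.5 (assumed): ${gpt55_per_mtok:.2f} / M tokens")
print(f"DeepSeek V4:       ${v4_per_mtok:.2f} / M tokens")  # $0.30 at this rate
```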
Meta's Llama 4 family: 10M-token context, MoE architecture, fully open
Llama 4 ships with two open-weight models: Scout (17B active / 109B total, 10M context) and Maverick (17B active / 400B total). A mixture-of-experts architecture replaces the dense transformer of earlier Llama generations; Scout's 10M-token window is the largest open context on the market.
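For readers who have only worked with dense transformers, the sketch below shows the core MoE move in plain PyTorch: a router scores each token and dispatches it to its top-k expert feed-forward blocks, so only a fraction of the parameters fire per token. This is a toy illustration with made-up sizes, not Meta's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    """Top-k mixture-of-experts feed-forward layer (illustrative sizes)."""

    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.router = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x):                      # x: (tokens, d_model)
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalise over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e       # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

tokens = torch.randn(16, 64)
print(ToyMoE()(tokens).shape)                  # torch.Size([16, 64])
```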
Mistral ships Voxtral TTS open-source for nine languages
Mistral released Voxtral TTS, an open-source text-to-speech model, on March 23. It supports nine languages, including Hindi and Arabic, and is designed for enterprise voice agents.
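No client snippet accompanies the release notes here, so the following is a purely hypothetical HTTP sketch: the endpoint path, model id, payload fields, and output format are all assumptions, not a documented Mistral API.

```python
import os
import requests

# Hypothetical sketch: Mistral has not published a TTS API shape for
# Voxtral here, so the endpoint, model id, and fields are assumptions.
resp = requests.post(
    "https://api.mistral.ai/v1/audio/speech",          # assumed path
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "voxtral-tts",                        # assumed model id
        "input": "नमस्ते, आपका स्वागत है।",            # Hindi sample text
        "language": "hi",
        "voice": "default",                            # assumed parameter
    },
    timeout=30,
)
resp.raise_for_status()
with open("greeting.wav", "wb") as f:                  # assumed audio format
    f.write(resp.content)
```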
Mistral Large 3 ships as 41B-active sparse MoE under Apache 2.0
The Mistral 3 family launches with three small dense models (3B, 8B, 14B) and Mistral Large 3, a sparse MoE with 41B active of 675B total parameters. Everything ships under Apache 2.0, and Large 3 ranks #2 among open-source non-reasoning models on LMArena.
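Rough arithmetic on what the sparse shape buys: about 6% of Large 3's weights are active per token, while the full set still has to sit in memory. The bf16 assumption below is mine; quantised deployments would shrink these numbers.

```python
total_params = 675e9    # Mistral Large 3: total parameters
active_params = 41e9    # parameters active per token
bytes_per_param = 2     # assumption: bf16 weights

print(f"active fraction:   {active_params / total_params:.1%}")             # ~6.1%
print(f"weights in memory: {total_params * bytes_per_param / 1e9:.0f} GB")  # 1350 GB
print(f"active per token:  {active_params * bytes_per_param / 1e9:.0f} GB") # 82 GB
```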