channel_ai · 15_broadcasts

AI.

// models · tooling · llms · agents

LLM news, model releases, AI tooling

TX_002 · 16:42 · AI

Anthropic ships Claude 4.7 with 1M-context

Claude 4.7 lands with a million-token context window and modest pricing changes. Five things shipping engineers should care about.

TX_004 · 11:30 · AI

Anthropic locks in $200B of Google TPU capacity

Anthropic signs a five-year, $200B compute commitment to Google's TPU fleet. The deal reframes the cost basis of frontier model training — and tightens the cloud-vendor knot.

TX_005 · 22:15 · AI

OpenAI ships GPT-5.5 Instant. Anthropic just overtook them on ARR.

OpenAI announced GPT-5.5 Instant on Monday. The same week, Anthropic's ARR ($30B) eclipsed OpenAI's ($24B) for the first time. The model is the headline; the revenue inversion is the story.

TX_012 · 20:00 · AI

Gemini 3.2 Flash quietly hit the iOS app. Pricing is the news.

Google rolled Gemini 3.2 Flash into the iOS Gemini app and AI Studio with no announcement. $0.25 per million input tokens. Performance reportedly near 3.1 Pro.
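At the quoted $0.25 per million input tokens, cost is straight arithmetic. A sketch; the workload sizes below are illustrative assumptions, not figures from the report, and output-token pricing is left out because none was quoted:

```python
# Back-of-envelope input cost at the quoted Gemini 3.2 Flash rate.
# Only the $0.25/M input price comes from the report; the workload
# numbers below are illustrative assumptions.
INPUT_PRICE_PER_M = 0.25  # USD per 1M input tokens

def input_cost_usd(tokens: int, price_per_m: float = INPUT_PRICE_PER_M) -> float:
    """Cost in USD of sending `tokens` input tokens at a per-million rate."""
    return tokens / 1_000_000 * price_per_m

# e.g. 10k requests averaging 2k input tokens each = 20M tokens
print(input_cost_usd(10_000 * 2_000))  # → 5.0
```

At that rate, a 20M-token daily workload runs about $5/day on input alone, which is what makes the price the headline.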

TX_014 · 16:30 · AI

Mistral Medium 3.5 lands as a 128B dense model with agentic features

Mistral shipped Medium 3.5 on April 29: a 128B dense model with new agentic primitives. The Paris lab keeps up its open-weight cadence while American competitors keep their frontier models closed.

TX_018 · 19:00 · AI

Microsoft Foundry adds Claude. The OpenAI-only era is over.

Microsoft made Anthropic's Claude models available in Microsoft Foundry on April 27, ending the OpenAI exclusivity that had defined Azure's AI strategy since 2023.

TX_015 · 08:00 · AI

OpenAI shut down Sora. The official reason is deepfakes; the real reason is the bill.

Sora's web and app experiences shut down April 26. OpenAI cited deepfake risk in an election year. Internal reporting puts compute burn at $1M/day on declining usage. Both reasons are true.

TX_013 · 18:30 · AI

DeepSeek V4 ships at 97% below GPT-5.5 — and it runs on Huawei silicon

DeepSeek V4 ships as 1.6T-param Pro and 284B Flash variants under MIT license. Pricing is 97% below OpenAI's GPT-5.5. The quieter story is that V4 is the first model optimised for Huawei Ascend chips.

TX_049 · 14:00 · AI

Microsoft's 2026 capex hits $150B. AI infrastructure now dominates the balance sheet.

Microsoft's 2026 capital expenditure runs to roughly $150B, the bulk allocated to AI compute capacity. The number reframes Microsoft as a hyperscaler-first business with software as the monetisation layer.

TX_011 · 15:00 · AI

Meta's Llama 4 family: 10M-token context, MoE architecture, fully open

Llama 4 ships with two open-weight models: Scout (17B active / 109B total, 10M context) and Maverick (17B active / 400B total). MoE replaces the dense transformer. Scout's 10M window is the largest open context window on the market.
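The active-vs-total parameter split comes from MoE routing: a router picks a few experts per token, so only those experts' weights run in the forward pass. A minimal top-k routing sketch; the expert counts and names here are illustrative, not Meta's actual architecture:

```python
# Minimal top-k MoE routing sketch: only TOP_K of NUM_EXPERTS run per
# token, which is why "active" parameters are far fewer than "total".
# NUM_EXPERTS and TOP_K are illustrative values, not from the release.
import random

NUM_EXPERTS = 16   # total experts in the layer
TOP_K = 2          # experts actually executed per token

def route(scores: list[float], k: int = TOP_K) -> list[int]:
    """Return the indices of the k highest-scoring experts for one token."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

scores = [random.random() for _ in range(NUM_EXPERTS)]
active = route(scores)
print(len(active), "of", NUM_EXPERTS, "experts active for this token")
```

Scaling total experts grows capacity while per-token compute stays pinned to the top-k slice.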

TX_043 · 16:30 · AI

Mistral ships Voxtral TTS open-source for nine languages

Mistral released Voxtral TTS as an open-source text-to-speech model on March 23. Supports nine languages including Hindi and Arabic. Designed for enterprise voice agents.

TX_044 · 14:00 · AI

Mistral's Leanstral writes machine-checkable proofs in Lean 4

Mistral released Leanstral on March 16 — the first open-source AI agent built specifically for Lean 4 formal proof engineering. Generates code plus a machine-checkable proof of correctness.
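"Code plus a machine-checkable proof" means output the Lean 4 kernel itself verifies: if it compiles, the property holds. A hypothetical sketch of that kind of artifact (the function and theorem are illustrative, not Leanstral output):

```lean
-- Hypothetical example of a generated artifact: an implementation
-- plus a proof the Lean 4 kernel checks mechanically.
def double (n : Nat) : Nat := n + n

-- If this compiles, correctness is established; no test suite needed.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  simp [double, Nat.two_mul]
```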

TX_017 · 14:00 · AI

SpaceX absorbs xAI. Frontier AI now sits inside a launch company.

SpaceX merged with xAI in February, consolidating Musk's AI operations under his space company. The combined entity now carries xAI's implied AI valuation into SpaceX's IPO target.

TX_016 · 22:00 · AI

Grok 4.20 ships multi-agent, 2M context, weekly updates

xAI released Grok 4.20 in public beta with multi-agent orchestration, a 2M-token context window, and a weekly-update cadence. Hallucination rates reportedly cut to 4.2%.

TX_045 · 11:00 · AI

Mistral Large 3 ships as 41B-active sparse MoE under Apache 2.0

The Mistral 3 family launched with three small dense models (3B, 8B, 14B) and Mistral Large 3, a sparse MoE with 41B active and 675B total parameters. All under Apache 2.0. Large 3 ranks #2 among open-source non-reasoning models on LMArena.