
Anthropic ships Claude 4.7 with a 1M-token context window
Claude 4.7 lands with a million-token context window and modest pricing changes. What shipping engineers should care about.
Anthropic released Claude 4.7 today with a 1M-token context window and incremental improvements to long-document recall [Anthropic blog].
── What shipped ──
Claude 4.7 expands the context window from 200K to 1M tokens at the same per-token price as the previous generation. Anthropic claims meaningful gains on long-document recall benchmarks, though independent evaluations are not yet public.
The pricing structure remains unchanged for inputs under 200K tokens [pricing page]. Inputs above that threshold incur a long-context surcharge — roughly 1.5x the standard input rate.
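As a rough illustration of what that surcharge means in practice, here is a minimal cost sketch. Only the 200K threshold and the ~1.5x multiplier come from this piece; the base input rate is a placeholder, and the assumption that the surcharge applies to the whole request once it crosses the threshold (rather than only to the excess tokens) is ours, so check the pricing page before relying on the numbers.

```python
# Sketch: estimated input cost under the long-context surcharge.
# Assumptions (not from the article): a $3.00 per 1M input tokens base
# rate as a placeholder, and the surcharge applying to the whole request
# once it exceeds the threshold.
LONG_CONTEXT_THRESHOLD = 200_000        # tokens, per the pricing note above
SURCHARGE_MULTIPLIER = 1.5              # "roughly 1.5x the standard input rate"
BASE_RATE_PER_TOKEN = 3.00 / 1_000_000  # USD per input token, placeholder


def estimate_input_cost(input_tokens: int) -> float:
    """Estimate the USD input cost of one request."""
    rate = BASE_RATE_PER_TOKEN
    if input_tokens > LONG_CONTEXT_THRESHOLD:
        rate *= SURCHARGE_MULTIPLIER
    return input_tokens * rate


for n in (150_000, 300_000, 1_000_000):
    print(f"{n:>9,} input tokens -> ${estimate_input_cost(n):.2f}")
```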
── Why it matters ──
Context-window expansions are now table stakes for frontier models. The interesting story is whether 4.7 actually uses that context — early models with large windows often degraded sharply past the 100K-token mark.
For shipping engineers, the practical effect is more headroom for retrieval-light workloads: full repository ingestion, multi-document analysis, and long-running agent contexts without aggressive summarisation.
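A quick back-of-the-envelope check helps decide whether a given workload actually fits. The sketch below estimates a repository's token footprint before shipping it as one context; the 4-characters-per-token heuristic and the file filter are assumptions for illustration, not a real tokenizer.

```python
# Sketch: rough token budgeting for full-repository ingestion.
# CHARS_PER_TOKEN is a crude heuristic; swap in a real tokenizer for
# anything that matters.
from pathlib import Path

CONTEXT_BUDGET = 1_000_000                       # the advertised window
CHARS_PER_TOKEN = 4                              # rough heuristic (assumption)
TEXT_SUFFIXES = {".py", ".md", ".toml", ".txt"}  # illustrative filter


def estimate_repo_tokens(repo_root: str) -> int:
    """Approximate the token count of text-like files under repo_root."""
    chars = 0
    for path in Path(repo_root).rglob("*"):
        if path.is_file() and path.suffix in TEXT_SUFFIXES:
            chars += len(path.read_text(errors="ignore"))
    return chars // CHARS_PER_TOKEN


tokens = estimate_repo_tokens(".")
print(f"~{tokens:,} tokens; fits in window: {tokens <= CONTEXT_BUDGET}")
```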
── Editor's take ──
Until independent long-context benchmarks land, treat the 1M number as a marketing ceiling, not a working figure. Design as if quality degrades past 250K tokens, and re-evaluate when the evals come in.
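One way to operationalise that stance is a hard working budget with compaction past it. The sketch below is a minimal illustration, not a recommended implementation: the 250K cap echoes the take above, and count_tokens and summarise are caller-supplied stand-ins for whatever counting and compaction a project already uses.

```python
# Sketch: cap the working context at a conservative budget and compact the
# oldest material when it is exceeded. Terminates because each pass merges
# two chunks into one.
WORKING_BUDGET = 250_000  # conservative cap from the editor's take


def fit_context(chunks, count_tokens, summarise):
    """Compact the oldest chunks until the total fits the working budget."""
    chunks = list(chunks)
    while len(chunks) > 1 and sum(count_tokens(c) for c in chunks) > WORKING_BUDGET:
        # Merge the two oldest chunks into a single compacted summary.
        chunks[:2] = [summarise(chunks[0] + "\n" + chunks[1])]
    return chunks
```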