Three stories worth your time. All sources linked.

TIME · Safety & Governance

Anthropic scrapped the central promise of its 2023 Responsible Scaling Policy: the commitment never to train or deploy models without advance safety guarantees. The replacement is a looser "Responsible Development Policy" with no hard gates. For practitioners this matters because the only meaningful self-imposed brake at a frontier lab just got loosened, exactly as competitive pressure peaks.

The Futurum Group · AI Agents & Autonomy

Anthropic’s Frontier Red Team reports Claude Opus 4.6 autonomously discovered and validated 500+ high-severity zero-days in production open-source code. The actual challenge is what happens next: vulnerability discovery just got decoupled from remediation capacity. The gap between finding and patching is where incidents live.

Liquid AI · LLM Infrastructure

An open-weight 24B-parameter model that mixes short-convolution blocks with grouped-query attention in a sparse MoE layout, with a total footprint under 32GB of RAM. Benchmark numbers are competitive with models 3x its size. This is the class of model that makes serious local deployment practical.
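The sub-32GB figure is easy to sanity-check with back-of-envelope arithmetic. A quick sketch (the precisions below are generic assumptions for illustration, not details from Liquid AI's release; it ignores KV cache and activation memory, which add overhead at inference time):

```python
def model_memory_gib(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB for a given parameter count
    and storage precision (weights only, no KV cache or activations)."""
    return n_params * bytes_per_param / 2**30

# 24B parameters at common weight precisions (illustrative only):
for label, nbytes in [("fp16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
    print(f"{label}: {model_memory_gib(24e9, nbytes):.1f} GiB")
```

The arithmetic shows why the number is notable: at fp16 the weights alone run roughly 45 GiB, so fitting under 32GB implies 8-bit-or-lower weights (or offloading). That is exactly the regime where a single consumer machine becomes viable.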

Keep reading