Tuesday, February 25, 2026
Anthropic Drops Flagship Safety Pledge, Overhauling Its Responsible Scaling Policy
AI Governance · Safety Policy
Anthropic quietly replaced its Responsible Scaling Policy with a new "Core Views" document, removing the explicit commitments that previously tied capability releases to safety evaluations. The RSP was the industry's most specific self-governance framework — its removal, days before a major Pentagon contract dispute, signals that safety commitments may increasingly bend to commercial and geopolitical pressure.
Enterprise customers and compliance teams that cited Anthropic's RSP in contracts as evidence of safety governance now need to re-evaluate what that assurance was actually worth.
Claude Found 500 Zero-Days. Who Patches Them Before Attackers Arrive?
AI Security · Vulnerability Research
Google's Project Naptime used Claude to autonomously discover over 500 zero-day vulnerabilities across major open-source projects — significantly more than human researchers found in the same period. The findings are being disclosed to maintainers, but the window between an AI-discovered vulnerability and attacker exploitation is now measured in days, not months.
Every security team using open-source dependencies now has a documented risk model: AI can find what humans miss, and attackers have access to the same models.
Liquid AI Releases LFM2-24B-A2B: Hybrid MoE That Fits in 32GB
LLM Infrastructure · Local Deployment
Liquid AI released LFM2-24B-A2B, a hybrid mixture-of-experts architecture with 24B total parameters but only 2B active per token — matching GPT-4-class performance while fitting in 32GB of consumer RAM. The model uses a novel liquid neural network backbone that reportedly outperforms transformer architectures on long-context tasks.
32GB consumer hardware is now the threshold for frontier-capable local deployment. Teams that ruled out on-prem AI for cost or hardware reasons need to re-run that calculation.
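The 32GB figure is worth sanity-checking: 24B parameters at fp16 would not fit, so the claim implies quantized weights. A back-of-envelope sketch (the precision levels below are our illustrative assumptions, not figures from the release):

```python
# Rough memory footprint of a 24B-parameter model's weights at
# common precisions. Ignores KV cache and runtime overhead, which
# add further headroom requirements on top of these numbers.

GIB = 1024 ** 3

def weight_memory_gib(total_params: float, bits_per_param: float) -> float:
    """GiB needed just to hold the weights."""
    return total_params * bits_per_param / 8 / GIB

TOTAL_PARAMS = 24e9  # 24B total; only ~2B are active per token

for label, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{label}: {weight_memory_gib(TOTAL_PARAMS, bits):.1f} GiB")
# fp16 (~44.7 GiB) exceeds 32GB; int8 (~22.4 GiB) and int4 (~11.2 GiB) fit
```

Note that the MoE split changes compute, not residency: only ~2B parameters run per token, but all 24B must sit in RAM, which is why total parameter count, not active count, sets the hardware floor.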
More signal, less noise → thesignal.press
