On March 20, the White House released its first national AI framework, and it said nothing about the problem that drove OpenAI's hardware chief to resign.
⚠️ OpenAI Resignation: Caitlin Kalinowski Exits Over Pentagon AI Contracts
AI Governance / Ethics
Caitlin Kalinowski spent years building the hardware that makes AI real: first AR headsets at Meta, then OpenAI's silicon roadmap. In early March, she resigned. The public reason: OpenAI's contract with the Pentagon. Her stated red lines were specific: surveillance without judicial oversight, lethal autonomy without a human authorization step. This isn't an abstract policy objection; she gave up her job over it. What makes the exit significant isn't the resignation itself but the role. Hardware chiefs don't quit over press releases. They quit when they've seen the roadmap. The next senior person to leave will have a template. And the ones who stayed made a different calculation, one that's now visible by contrast.
📜 Military AI Policy by Contract: The Limits of Procurement as Governance
AI Policy / Regulatory Structure
Every major frontier lab now holds a DoD contract or is actively pursuing one. Anthropic signed for intelligence analysis and logistics. Google/DeepMind followed. OpenAI expanded into classified government work. Each announcement used the same framing: dual-use applications, safety guardrails, national security rationale. What's absent from every one of them is an independent verification mechanism. The Lawfare piece names the underlying structure plainly: governance-by-procurement. The United States has no dedicated AI regulatory framework. What exists instead are negotiated contracts: DoD sets requirements, labs accept restrictions to win the work, and the enforced standard is whatever both parties agreed to in a document that may be classified. No independent audit. No public accountability. No congressional feedback loop. The contract is both the rule and the enforcement mechanism. The labs that built their brand on Constitutional AI and safety culture are signing the same contracts as Lockheed, minus the oversight apparatus that surrounds a traditional defense contractor. For anyone building or buying AI at enterprise scale, this matters: the safety assurances you're evaluating from frontier labs are being shaped by procurement conversations you won't see.
🏦 Anthropic and Google Take Pentagon AI Deals — The Ethics Trade-Off
AI Governance / Defense Contracts
The Anthropic-Pentagon deal is the case study the Lawfare piece circles. The economic logic isn't hard to follow: frontier AI development requires compute at a scale where government contracts bring capital, infrastructure access, and political relationships that no enterprise deal replicates. The labs that most loudly differentiated themselves on safety culture ("we're different, we built safeguards first") are now in the same procurement pipeline as traditional defense contractors, but without the oversight structures those contractors operate under. There's no equivalent of a defense inspector general evaluating AI contract compliance. No congressional Armed Services subcommittee with established jurisdiction. The dual-use designation does real work here: it licenses deployment under existing authorities without triggering the oversight frameworks designed for weapons programs. Whether that's deliberate design or an accidental gap in the regulatory map is, at this point, an academic distinction.
🏦 Trump White House Releases National AI Framework — Preempts State Rules
AI Policy / Regulatory Framework
On March 20, the White House released its national AI framework, the federal government's first attempt to set a unified standard and preempt a 50-state patchwork of conflicting rules. What it addresses: child privacy and parental controls, streamlined permitting for data center power, IP rights, AI workforce education, anti-censorship provisions. What it doesn't address: independent oversight of classified military AI contracts, the accountability gap Lawfare documented, or any mechanism for verifying safety claims from frontier labs operating under DoD agreements. Michael Kratsios framed the preemption case simply: "We need one national AI framework, not a 50-state patchwork." That's a coherent argument. But the framework runs on a track parallel to the governance problem Kalinowski's resignation made visible. One is a political document addressing what's politically addressable. The other is a structural problem that doesn't have a constituency yet. The safety pledge era didn't end with a manifesto. It ended with a signing on a Friday afternoon and a resignation on a Tuesday morning. The institutions that would normally fill this gap (a dedicated AI regulator, an established congressional oversight framework, any international accountability mechanism) don't exist yet. Procurement law is the placeholder. The question worth sitting with: what are you assuming about your frontier AI vendors when you assume their safety culture is intact?
📡 More signal, less noise → www.thesignal.press
