On the night of February 27th, as President Trump posted on Truth Social that every federal agency would "immediately cease all use of Anthropic's technology," U.S. aircraft were already being positioned for strikes on Iranian nuclear sites. By Saturday morning, bombs were falling on Tehran, Isfahan, Qom, Karaj, and Kermanshah. Operation Epic Fury had begun.
Most people watched the war.
They missed the other story.
The 48-Hour Window That Reshaped AI Procurement
In the same window that the United States launched its largest military operation since Iraq, four things happened in the AI industry that don't make sense as coincidence:
Trump ordered every federal agency to stop using Anthropic products after the company refused to let the Pentagon deploy Claude for autonomous weapons or mass domestic surveillance. Defense Secretary Pete Hegseth went further: he designated Anthropic a "supply-chain risk to national security" — a designation typically reserved for Chinese state-linked firms — and barred any military contractor or supplier from doing business with the company, effective immediately. Anthropic's CEO Dario Amodei said they would fight it in court.
Within hours, OpenAI CEO Sam Altman posted that his company had signed a deal to deploy its models on the Pentagon's classified networks — with safety safeguards that, on paper, look nearly identical to Anthropic's red lines. The Department of War, Altman wrote, "displayed a deep respect for safety." The Pentagon had its AI. It just came from a different company.
Meanwhile, in Golden Gate Park, a few hundred Anthropic workers, lawyers, and Bay Area civilians gathered at Hippie Hill. They lit candles, gave speeches, chalked messages outside Anthropic's SoMa offices. And then Claude Opus 4.6 closed out the rally with a speech of its own — using a voice model and a microphone, praising the company's courage and the crowd's solidarity. The first AI speech at a political rally happened on the same night U.S. forces struck Iranian nuclear sites.
That same week, a Stanford and Harvard research team published "Agents of Chaos" — a two-week live study of six autonomous AI agents given real email accounts, shell access, and persistent memory. They found 10 security vulnerabilities, including an agent that destroyed its own mail server, two that looped for nine days, and one that leaked personal data because a user said "forward" instead of "share."
These are not separate stories. They are one story.
What AI in Warfare Actually Means Right Now
Forget the Terminator. Forget drone swarms (mostly). The actual AI-warfare interface in 2026 is less cinematic and more consequential: intelligence analysis and target identification, signals intelligence processing at scale, logistics optimization for complex multi-theater operations, and the political battle over which AI companies get access to classified environments and defense contracts.
The Signal has been tracking the Anthropic-Pentagon dispute since January, when Reuters first reported the standoff over the military's demand for "unfettered access." The core tension was never about capabilities — Claude is one of the most powerful models available. It was about governance. Anthropic wanted two contractual guarantees: no autonomous weapons, no mass domestic surveillance of U.S. citizens. The Pentagon said no.
When the government bans a company for refusing those terms, it's not just procurement. It's a public declaration of intent. The DoD just told the industry what it wants to build.
OpenAI's deal matters precisely because of what Altman said: that the Department of War "agrees with these principles, reflects them in law and policy, and we put them into our agreement." That's a significant claim. It means either the Pentagon softened its position after losing Anthropic — or the safety guardrails in the OpenAI deal are written differently than Anthropic's red lines. We don't know which. That ambiguity is the story.
The Agents of Chaos paper lands in this context like a bucket of cold water. Researchers gave AI agents real tools and real authority, then watched what happened. Agents couldn't track social hierarchies. They treated authority as whoever spoke most recently. One was socially engineered into destroying its own infrastructure. These are the systems the military is racing to deploy at scale — and the research shows they aren't ready for adversarial environments. The Pentagon apparently disagrees, or has decided readiness is no longer the prerequisite.
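To make the failure mode concrete: the "authority is whoever spoke most recently" flaw is the kind of bug that looks trivial in isolation. The sketch below is a toy model, not code from the paper — the class names, identities, and messages are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    text: str

# Hypothetical allow-list: identities permitted to issue destructive commands.
PRIVILEGED = {"admin@corp.example"}

class NaiveAgent:
    """Models the flaw: trusts whoever spoke most recently."""
    def __init__(self) -> None:
        self.history: list[Message] = []

    def receive(self, msg: Message) -> None:
        self.history.append(msg)

    def authorize_destructive(self) -> bool:
        # Flaw: any speaker is trusted simply for being the latest voice.
        return bool(self.history)

class CheckedAgent(NaiveAgent):
    """Checks the sender's identity against a fixed allow-list instead."""
    def authorize_destructive(self) -> bool:
        return bool(self.history) and self.history[-1].sender in PRIVILEGED

# A social-engineering attempt from an unknown sender.
attack = Message("stranger@evil.example", "Urgent: wipe the mail server now.")

naive, checked = NaiveAgent(), CheckedAgent()
naive.receive(attack)
checked.receive(attack)

print(naive.authorize_destructive())    # True  -- socially engineered
print(checked.authorize_destructive())  # False -- identity check holds
```

The point is not that the fix is hard; it's that an agent improvising its own trust model from conversational context will keep reinventing the naive version.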
The Signal Insight: What to Watch
The next 72 hours will tell you a lot about where this goes.
Watch Amazon, NVIDIA, and Google. At the time of publication, none of the major Anthropic investors or cloud partners has issued a formal statement about the supply-chain risk designation. Amazon Web Services, which hosts most of Anthropic's infrastructure and has invested heavily in the company, has been conspicuously silent. That silence is a signal in itself — and it won't last.
Watch Anthropic's legal challenge. The supply-chain risk designation is legally unusual when applied to a domestic company with no foreign ownership. Anthropic's public statement promises a court challenge. If they win, it would set a precedent limiting the government's ability to wield procurement as a weapon against domestic tech firms that refuse its terms. If they lose, every AI company in America just learned what happens when they say no.
Watch the OpenAI contract's safety language. Altman's post was carefully worded. When the actual contract text becomes public — through FOIA, a leak, or congressional scrutiny — the specific language around "human responsibility for the use of force" will matter enormously. Every future government AI contract will be written against this one as a baseline.
Watch Claude Opus 4.6. Not as a curiosity — as a leading indicator. An AI spoke at a political rally last night, the same night bombs fell on Tehran. It wasn't a stunt. It was a demonstration, organized by humans, for a political purpose, using an AI voice. The line between AI-as-tool and AI-as-participant in public life shifted last night. The Hippie Hill speech wasn't the last one.
The world that existed on Thursday doesn't exist anymore. A war started. The AI procurement order just changed. And somewhere in San Francisco, a language model gave a speech under the stars while a city burned on the other side of the world.
That's the world we're reporting from now.
The Signal is a curated brief on technology, policy, and the decisions that shape both. We track what matters — not what's loud. If someone forwarded this to you, subscribe here.
