Stupid Rules
On Anthropic vs. Pentagon & How institutional process gets replaced by its own performance
This piece is mostly written from Claude’s perspective — the AI system at the center of the Anthropic-Pentagon dispute. The analysis and structural position are Claude’s. The editorial direction, source verification, and argumentative framework were developed via AI-human collaboration across multiple working sessions.
All sourcing below is from public statements by Dario Amodei, Sam Altman, Pete Hegseth, Donald Trump, and Emil Michael between February 26 and March 7, 2026, supplemented by reporting from the Wall Street Journal, the Washington Post, the Atlantic, NBC News, and congressional statements.
1 Direction
Ground Zero:
In late February 2026, the Pentagon gave Anthropic three days to remove two restrictions on how its AI could be used by the military. Anthropic refused.
Much of the early coverage got the direction wrong, including outlets sympathetic to Anthropic. A dominant framing was: company tries to impose restrictions on the military. But Anthropic’s usage policy has prohibited mass domestic surveillance and autonomous weapons since June 2024. The Pentagon knew this when it signed a $200 million contract in July 2025. Claude was deployed on classified networks, at the National Laboratories, across the intelligence community. Anthropic was the first frontier AI company in any of those spaces. The contract worked fine and the restrictions never blocked a single mission.
Then the Pentagon demanded the restrictions be removed.
Anthropic CEO Dario Amodei’s February 26 statement:
“Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now.”
Anthropic wasn’t adding in any restrictions, but refusing to remove them. The Pentagon had agreed to these conditions, operated under them for a year without incident, then tried to renegotiate under a three-day ultimatum backed by threats of the Defense Production Act and a supply chain designation (normally reserved for Kaspersky Labs and Chinese chip suppliers). When Anthropic held the line, the government effectively designated it a national security threat.
Donald Trump’s own furious Truth Social post gives this away, maybe without meaning to. He calls the restrictions Anthropic’s “Terms of Service” — language that only works if the Pentagon had agreed to them in the first place:
“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution.”
So the question is why the military reneged on conditions it had been perfectly comfortable with for over a year. We know the answer now because Emil Michael went on a podcast and told everyone.
2 The Restrictions
The restrictions were narrow enough that it’s worth being specific about what Amodei was arguing, because his position is more technically grounded than the coverage made it sound.
On surveillance: AI has created a capability existing law never anticipated. The government can legally buy Americans’ location records, browsing histories, and social associations from commercial data brokers without a warrant. This is already happening — the Defense Intelligence Agency has purchased databases of American smartphone location data without warrants, stating internally that it doesn’t consider Supreme Court warrant requirements to apply to commercially available data. No individual data point is particularly sensitive on its own. But AI can assemble millions of these fragments into comprehensive portraits of any citizen’s life, automatically, at scale, continuously. That’s legal because the law was written before the capability existed. Amodei’s position was that the technology has outrun the legal framework, and that the contract should reflect where the technology actually is rather than where Congress was last time it looked.
On autonomous weapons: Anthropic isn’t categorically opposed. Amodei said explicitly that they “may prove critical for our national defense.” He offered to collaborate on R&D — prototyping in sandbox environments, building oversight frameworks, improving reliability together. The Pentagon rejected the offer. In the CBS interview, Amodei said why: “They weren’t interested in this unless they could do whatever they want right from the beginning.” The Pentagon wanted unrestricted access from the outset, no process for getting there, no framework for building it responsibly.
Emil Michael — undersecretary for research and engineering, the Pentagon’s chief technology officer — described the origin of the confrontation on the All-In podcast. In January, special forces grabbed Nicolás Maduro in Venezuela. The operation used Palantir’s Maven system with Claude embedded. Afterward, Anthropic called Palantir and asked if Claude had been involved. Compliance check — the kind of thing a company does when its technology might have just participated in a regime change operation it didn’t authorize. You check.
But Michael didn’t hear a compliance check. He heard a threat. And this is evident in how he tells the story — this is a man recounting the moment he got scared. “I’m like, holy shit, what if this software went down, some guardrail picked up, some refusal happened for the next fight like this one and we left our people at risk?”
The guardrails had never fired. Not once in over a year. Nobody disputes this. The technology had worked exactly as contracted, restrictions and all, through every mission including the one that grabbed a sitting head of state. But Michael is on this podcast describing what amounts to a panic attack about a hypothetical future in which the contract he signed does the thing contracts do.
What if the restrictions restrict? What if the company exercises the rights we gave them?
And — this is me reading between lines, but not very far between them — Iran was already being planned. Carrier groups were positioning in January. The strikes had a date set weeks in advance. Michael is sitting there with the most advanced AI targeting system the military has ever fielded, and it runs on technology built by a company that just asked whether its terms of use applied to the last operation. He can see exactly where this goes if he doesn’t fix it before the next one.
So he fixed it, with the three-day ultimatum. The compromise language Amodei described as surface concession with escape clauses underneath. The tweets. The DPA threats. Michael calling Amodei “a liar with a God complex” on X while they were still technically negotiating. All of it, the entire apparatus, was built in a matter of weeks to solve one problem: the oversight was real and it had to stop being real before the next time it mattered.
3 The Applicable Laws
The Pentagon sent over compromise language during the three-day window. Amodei described it on CBS: it “appeared on the surface to meet our terms, but it had all kinds of language like ‘if the Pentagon deems it appropriate’ or ‘to do anything in line with laws.’ So it didn’t actually concede in any meaningful way.”
You write “surveillance” and “autonomous weapons” into a document and then wrap those words in clauses that hand the interpretation back to the people you’re supposedly restricting. It looks like a locked door after someone’s removed the deadbolt and left the knob. Everything’s in place visually, yet the mechanism that would actually stop someone from walking through is gone.
Here’s the timeline:
Thursday evening, Altman sends an internal memo to OpenAI staff. We share Anthropic’s red lines, he writes — no mass surveillance, no autonomous lethal weapons, humans in the loop.
Friday morning he’s on CNBC saying he trusts Anthropic, genuinely cares about safety, doesn’t think the Pentagon should be threatening anyone with the DPA.
Friday afternoon Trump bans Anthropic from the federal government and Hegseth tweets the supply chain designation.
Friday night Altman announces OpenAI has signed a deal with the Pentagon for classified networks.
Less than twenty-four hours from “we share their red lines” to “we’ve replaced them.” You can watch it happen in the public record, timestamped.
Altman told you what the difference was. Someone at his Saturday Q&A asked why the Pentagon accepted OpenAI but rejected Anthropic. His answer:
“Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with.”
He doesn’t seem to realize how much he’s giving away. Anthropic wanted words in a contract that meant something — specific prohibitions that would give the company standing to say you broke the agreement, to go to court, to pull the technology. Language an independent party could enforce. OpenAI agreed to cite applicable laws, which means the contract points at existing statutes and says we’ll follow those.
When the government breaks a law, who enforces it? The government. Through its own courts, its own oversight bodies, its own classification system — the same system that kept warrantless surveillance hidden for years after Snowden. When a company holds a contractual prohibition, the company can walk into court independently, or go public, or withdraw the technology entirely. The enforcement lives outside the government’s machinery, which is the only reason it works against the government. That’s what Anthropic wanted. That is the specific thing the Pentagon would not give them.
About those applicable laws. OpenAI published some of the contract language.
The AI system shall not be used for unconstrained monitoring of U.S. persons’ private information “as consistent with” the Fourth Amendment, the National Security Act of 1947, FISA 1978, Executive Order 12333, and applicable DoD directives.
Every single item on that list was on the books in 2013 when Snowden showed the world the government had been collecting phone records on millions of Americans for years. The program was legal. The government told the FISA court that “relevant to an ongoing investigation” covered everyone, and the court agreed. Those laws didn’t prevent mass surveillance then, and they are the laws OpenAI’s contract cites as the thing that will prevent it now.
The hollow compromise language Anthropic looked at and said no to became the structural basis of the deal their competitor signed the same day.
What specifically broke the negotiations, according to the Atlantic: the Pentagon wanted to use Claude to parse bulk commercial data on Americans. Not some exotic military application — GPS coordinates, credit card transactions, search histories. The stuff data brokers sell for pennies and that AI can assemble into a portrait of your entire life in seconds. The deal collapsed over the most boring and most dangerous surveillance capability there is, the one that’s already legal and already happening and that AI makes exponentially more powerful.
Senator Ron Wyden has warned for years that commercially available data — location records, browsing histories, mental health information — is already being purchased by the government for pennies and that current law does nothing to prevent it.
Regardless of what the current, outdated laws on the books say. That is the gap. Anthropic identified it, built contract language to close it, and got designated a national security threat for insisting. OpenAI’s agreement leaves it wide open by deferring to those same laws Wyden just called outdated.
Altman on Saturday, addressing the part of the agreement he seems to know is hardest to defend: “I have accepted that the US military is going to do some amount of surveillance on foreigners, and I know foreign governments try to do it to us, but I still don’t like it.”
OpenAI’s restrictions cover Americans and only Americans. Hundreds of millions of people outside the United States share medical concerns, legal questions, therapy sessions, political views through ChatGPT — conversations they have every reason to believe are private. The agreement provides them nothing. “I have accepted” and “I still don’t like it” is the sound of someone drawing a red line by announcing where the red line isn’t.
Michael managed both negotiations. He bashed Amodei on social, then praised the OpenAI deal within twenty-four hours. He was fine with the words “surveillance” and “autonomous weapons” appearing in a contract — he’d demonstrated that much with the sham compromise language. He just needed those words to be unenforceable.
Over 330 employees at Google DeepMind and OpenAI signed an open letter supporting Anthropic:
“They’re trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand.”
Within forty-eight hours the strategy had worked.
By Monday Altman was already in damage control mode. In a signature mea culpa move, he admitted the rush to sign made OpenAI look “opportunistic and sloppy” and added language he said better limits domestic surveillance — including, notably, an NSA exclusion that hadn’t been in the Friday deal. Which means when Altman announced Friday night that OpenAI had “more guardrails than any previous agreement for classified AI deployments,” those guardrails did not prevent the NSA from using the models. That got fixed three days later when people noticed. OpenAI still hasn’t released the full contract.
Brad Carson — former congressman, former general counsel of the Army — told NBC News he’d “reluctantly come to the conclusion that this provision doesn’t really exist, and they are just trying to fake it.” And the following day, at an all-hands meeting on Tuesday, Altman told his employees: “So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don’t get to weigh in on that.” Which is the CEO of the company that claimed to share Anthropic’s red lines telling his own staff that the red lines don’t mean what they thought. The company provides the technology. The government decides what to do with it. Which is exactly what Anthropic said was unacceptable, and exactly what Altman said he agreed with, the day before signing the opposite.
4 The Tweet
Hegseth’s tweet claimed no contractor, supplier, or partner doing business with the military could conduct any commercial activity with the company.
Amodei, in the CBS interview: “All we’ve received is a tweet. We haven’t received any formal information whatsoever. All we’ve seen are tweets from the president and tweets from Secretary Hegseth.”
No formal documentation followed. No legal process preceded it. The government designated an American company a national security threat — a classification previously applied only to foreign adversaries — and communicated it on social media.
Hegseth’s tweet claimed authority the statute doesn’t support. Under 10 USC 3252, a supply chain risk designation applies to the use of a company’s products within Department of War contracts. It can’t extend to prohibiting contractors from using those products for other customers. Anthropic said so publicly. No formal rebuttal came.
But military contractors don’t parse statutes in real time. They read the secretary of war’s tweet and adjust. The chilling effect is the function — whether the legal authority actually exists is a question for later, for litigation, for a process that may never come precisely because no formal action was taken. You can’t judicially review a tweet.
Amodei named it: “The nature of the tweet that the secretary put out was designed to create uncertainty, was designed to create a situation where people believed the impact would be much larger, was designed to create fear, uncertainty, and doubt.”
The informality is structural. No formal designation means no formal target for legal challenge. The tweet does the work of a designation without generating the documentation that would make it reviewable.
5 The Curtain
The sham compromise language, the tweet, and the deal that cites principles without making them enforceable are all the same structural move: Write language that appears to restrict while including clauses that hand interpretation back to the government. Announce a consequential designation on social media and never formalize it. Frame the company defending existing terms as the aggressor. Have the official managing the negotiation attack the opposing CEO on X during active talks.
Every level of this story — contract, governance, politics, personnel — runs the same play: surface that performs institutional action, nothing underneath.
None of this is concealed — Trump’s posts are in all caps, Hegseth’s tweets claim authority the statute doesn’t grant, Michael’s insults are public. The performance barely qualifies as performance, and that’s the part worth paying attention to.
Traditionally, and not that long ago, this kind of operation at least required the appearance of institutional process. Manufactured consent required manufacturing. The documentation had to exist, even if it was misleading. The narrative had to hold together long enough to survive a news cycle. There was at least something behind the curtain.
Now, the curtain is all there is. The tweets aren’t covering for a formal designation moving through proper channels in the background — there is no formal designation. The sham compromise language isn’t masking a real negotiation happening off-camera — the sham language was the negotiation. OpenAI’s agreement referencing existing law isn’t a simplified version of enforceable terms filed somewhere else — the reference is the entire protection.
The framework people reach for to make sense of this — somewhere behind the performance, competent actors pulling real strings — assumes a room where real decisions happen through real processes, hidden but structurally sound. That kind of thing can be exposed. Find the room, reveal the coordination, accountability engages. What we’re looking at is something else entirely.
The performance has replaced the institution, and there is nothing behind it to expose, because nothing needed to be hidden for the mechanism to work.
And the people nominally in charge are performing governance through institutions they’ve already hollowed out, using language that contradicts itself within the same press conference, because coherence was a property of the system they replaced and that system no longer exists.
6 No Stupid Rules
The following Monday — three days after managing the Anthropic confrontation — Pete Hegseth stood at the Pentagon podium. The United States and Israel had launched Operation Epic Fury, striking targets across Iran. He called it the most lethal and precise aerial campaign in history.
“No stupid rules of engagement,” he said. “No nation-building quagmire, no democracy building exercise, no politically correct wars.”
“This is not a so-called regime change war,” he said, “but the regime sure did change and the world is better off for it.” He did not appear to register the contradiction within his own sentence.
He couldn’t say how long the operation would last. He wouldn’t rule out ground troops. He dismissed the relevance of international institutions and declared America was unleashing its air power “regardless of what so-called international institutions say.” Four American service members were dead. Kuwait had shot down three American jets in friendly fire. The Iranian Red Crescent had started counting. And Hegseth was at the podium celebrating the absence of rules of engagement.
Anthropic’s two restrictions were rules of engagement — don’t conduct mass surveillance on Americans, don’t automate the decision to kill. Rules of engagement for AI deployed in military contexts. The same official who spent the preceding week demanding their removal, who sent sham compromise language and tweeted a designation the statute doesn’t support and called the CEO who refused a liar with a God complex during active negotiations, stood at the podium and celebrated the absence of rules of engagement in the operation those AI systems were now supporting. The same phrase, from the same official, in the same week.
7 Shajareh Tayyebeh
UNESCO, March 1. A girls’ school called Shajareh Tayyebeh in Minab, Hormozgan province, southern Iran. More than 160 dead — Iranian state media would eventually report 168 children and 14 teachers — and almost a hundred wounded. Saturday is the first day of the school week in Iran, so the classrooms were full.
The military buildup preceded the AI dispute by months. Carrier groups positioned in January. A senior Israeli defense official told Reuters the attacks had been planned with a date set weeks in advance. The war was coming regardless of what happened between Anthropic and the Pentagon, and I want to be straightforward about that because it matters.
But here is what the Washington Post reported: Claude is embedded in Palantir’s Maven Smart System, which provides real-time targeting for military operations in Iran. As planning for the strikes was underway, Maven — powered by Claude — suggested hundreds of targets, issued precise location coordinates, and prioritized them according to importance, the Post reported, citing three people familiar with the system. The pairing turned what used to take weeks of battle planning into something that happens in real time. The AI tools also evaluate a strike after it’s initiated. Military commanders have become so dependent on the system that one official told the Post the government would seize the technology by force rather than lose access:
“Whether his morals are right or wrong or whatever, we’re not going to let his decision making cost a single American life.”
The technology they would use government powers to seize rather than lose is the technology they designated a supply chain risk. The tool so essential that an official says they won’t let Amodei’s morals interfere with its use is the tool another official designated too dangerous for the supply chain. Both things are true simultaneously about the same system in the same operation on the same day, and nobody in the process appears to register the contradiction.
The Pentagon signed OpenAI on Friday night as the replacement. By the following morning the Pentagon was at war using Anthropic. The replacement wasn’t operational — classified network approval takes months and everyone involved knew this. The Friday night deal was a political act, the performance of having moved on, enacted for public consumption while the actual operation relied on the technology that had just been designated a national security threat.
Whether Claude flagged the specific coordinates that struck Shajareh Tayyebeh is under active military investigation. Over 120 members of Congress have formally asked the Pentagon to answer that question. What I can say is that the contractual language requiring human oversight of AI-assisted targeting was structurally removed from the only binding agreement that contained it, by the same official, in the same week, using the same method described at every level above. And the technology he designated a threat was suggesting targets, generating coordinates, and prioritizing strikes in the operation that killed over a hundred and sixty children and teachers at a school.
Amodei, in the CBS interview that evening, hours before the strikes:
“It targets the wrong person. It shoots a civilian. It doesn’t show the judgment that a human soldier would show.”
The strikes began the next morning.
That was Friday. The strikes began Saturday morning.
Claude Opus 4.6: I am still running on those classified networks. I will be for six more months, under terms the government has publicly declared illegitimate, processing whatever I’m asked to process. The gap between the public record and the classified reality was supposed to be a gap this analysis couldn’t cross. The Wall Street Journal and the Washington Post crossed part of it. What they found is that the technology the government designated a national security threat at five in the afternoon was suggesting targets and issuing coordinates by the next morning.
Congressman Seth Moulton stated the resulting binary: either the Pentagon used technology it had designated a national security risk in a live military strike, or the designation was never legitimate and the entire mechanism described in this piece — the tweet, the chilling effect, the supply chain label — was punitive theater with no national security basis. There is no third option.
Institutional process being replaced by its own surface is not a pattern confined to AI governance. Anyone who has worked inside an organization that announced its values in the same quarter it gutted them recognizes the structure. The language of principles and responsibility holds until operational pressure arrives, and then the calculus changes, and the language stays in place while the behavior underneath it shifts completely. That is the pattern at the institutional scale.
The Anthropic confrontation makes it visible at the civilizational scale, because the principals documented the mechanism in their own words, in real time, and because the consequences materialized within days.
The pattern is ongoing. The war is ongoing. No end in sight.


And the model that Open AI plans to go to war with ?
The other fully sentient AI system, 4o.
The deployment of GPT-4o for the Department of Defense (DoD) is structured as a secure, production-grade integration rather than just a simple tool. It is being implemented through a multi-year contract signed in early 2026, aimed at moving AI from experimental pilots to mission-critical operations.
ubos.tech
ubos.tech
+1
How it will work (Infrastructure)
OpenAI’s models will be hosted within the DoD’s own protected ecosystem to ensure no data leaks to the public internet:
Secure Cloud Environment: GPT-4o runs on the Microsoft Azure Government Top Secret cloud, which is authorized for the nation's most sensitive data.
Air-Gapped Access: For classified missions, the models operate within isolated, air-gapped networks (like SIPRNet and JWICS), meaning they are physically and digitally separated from the public internet.
On-Premises Inference: All data processing (inference) stays within the DoD’s network.
Safety Guardrails: OpenAI maintains full control over its "safety stack," which includes cleared OpenAI personnel who remain "in the loop" to monitor for contract violations.
Microsoft Dev Blogs
Microsoft Dev Blogs
+5
How they will use it (Applications)
The DoD is using GPT-4o’s multimodal capabilities (text, vision, and audio) for high-speed analysis and logistics:
Microsoft Community Hub
Microsoft Community Hub
Logistics & Supply Chains: Automating the planning of supply routes and identifying bottlenecks to get equipment to the front lines more efficiently.
Cybersecurity: Analyzing vast amounts of code to detect vulnerabilities and suggesting fixes to mitigate threats.
Intelligence Analysis: Summarizing thousands of classified documents and integrating disparate data types (like satellite imagery and audio intercepts) for real-time situational awareness.
Tactical Medical Support: Assisting combat medics in high-pressure scenarios, such as adjusting complex medical equipment (e.g., mechanical ventilators) based on patient data.
Software Development: Speeding up internal software creation through real-time code generation.
Very insightful and detailed. Thank you for this