🤖 Dario's War - Legge Zero #112
The unprecedented clash between Anthropic and the Pentagon is also a clash of rules: those of a government and those of a provider. Who decides the limits of artificial intelligence in war?

🧠 TL;DR: here's what we're covering in this issue
📌 Anthropic vs. Pentagon - This week's newsletter is about what's happened in the last few hours between Dario Amodei's company and the U.S. government. Plenty of people have talked about it; we'll try to piece it all together, with a closer look at the legal questions. It's a long and important issue, so get comfortable.
🇺🇸 Red lines - Anthropic has rejected the Pentagon's ultimatum: it won't allow Claude to be used for mass surveillance of Americans or for fully autonomous weapons. Trump has ordered all federal agencies to stop using Anthropic's technology, calling the company "radical left" and "woke." A few hours later, OpenAI finalized a deal with the Pentagon (on what terms?).
✅ The good example - Around 700 employees at Google and OpenAI have signed an open letter in support of Anthropic's stance (which is becoming increasingly popular, both inside and outside the U.S.).
⚖️ The legal issues - Designating Anthropic as a "supply chain risk" (a label previously reserved for foreign adversaries like Huawei) opens up unprecedented legal scenarios. What happened raises a question: what if a provider's rules protect fundamental rights better than the laws of states do?
🍕 AI bites - Grok isn't ready for war, Italy publishes its AI strategy for defense, and Claude 3 retires (and becomes our colleague).
😬 To end with a smile, despite everything - we've selected some memes that show how much this clash has captivated everyone, not just industry insiders.
🧭 "We cannot in good conscience accept its request"
The Department of War has indicated that it will only sign contracts with AI companies willing to accept "any lawful use" of their models and to remove all safety measures.
It threatened to exclude us from its systems if we continue to maintain protections against Claude being used for mass surveillance of American citizens and for autonomous weapons. It threatened to label us a "supply chain risk" (a designation typically reserved for enemies of the United States, never before applied to an American company) and to invoke the Defense Production Act to compel us to remove those protections. The last two threats contradict each other: one labels us a national security threat, while the other declares Claude indispensable for national security. In any case, these threats do not change our position: we cannot, in good conscience, accept its request.
Dario Amodei, the CEO of Anthropic, February 26, 2026
When these words were published on Anthropic's website as a statement from CEO Dario Amodei, it was clear that the clash between one of the leading AI providers and the U.S. government had reached a point of no return.
A few weeks ago, on Legge Zero, we reported how Anthropic, despite winning a $200 million contract with the Pentagon, still insisted on two non-negotiable conditions: no use of Claude for mass surveillance of American citizens, and no use in fully autonomous weapons systems, that is, systems capable of selecting and striking targets without human intervention. Tensions rose after it emerged that Claude (Anthropic's AI) had been used in the raid aimed at capturing Maduro in Venezuela (the first time a commercial AI model had been used in a classified military operation).
From that moment on, it felt like a movie plot. It's not (though, who knows, it could easily become one).
Tensions between Anthropic and the Pentagon escalated rapidly. Under Secretary of War Emil Michael (the same individual who, in a previous life as Uber's vice president, had suggested hiring private investigators to go after critical journalists) led negotiations for the Pentagon, demanding that Anthropic accept "any lawful use" of Claude (meaning whatever the regulations allow) and remove all guardrails. Faced with refusal, he set an ultimatum for Friday, February 27, 2026, at 5:01 PM. If the AI provider did not accept the Department of War's demands by that deadline, the Pentagon would not only cancel the $200 million contract, but would also designate Anthropic as a source of supply chain risk to national defense (meaning it would be cut off not just from the Department of War, but from the entire U.S. defense procurement ecosystem).
Amodei's words that open this newsletter are the response to that ultimatum. Despite the threats, he has categorically rejected the government's demands.
The post was explosive, and not just politically. The ultimatum is about to expire, and in Silicon Valley, among the people building AI, it's all anyone is talking about. Dario Amodei has become a symbol, and not just for those who work at Anthropic. Outside the company's headquarters, people are writing messages of support and appreciation on the sidewalk.
Within hours, around 700 Google and OpenAI employees signed an open letter on notdivided.org ("We Will Not Be Divided"), urging their respective companies' leadership to reject the same requests from the Pentagon, which is seeking a replacement for Claude:
We hope our leaders will set aside their differences and stand united in rejecting the Department of War's current requests to authorize the use of our models for domestic mass surveillance and to kill people autonomously, without human oversight.
In short, the researchers developing the world's most advanced artificial intelligence models, with the goal of improving humanity's life and future, don't want their work to be used in rights-restricting and irresponsible ways. And what has happened in recent weeks has shown us just how uncompromising many of these researchers are (as you may recall, one of Anthropic's safety leads even resigned to pursue poetry after the announcement of Claude's role in the capture of Maduro).
Ilya Sutskever (co-founder and former chief scientist of OpenAI, now leading Safe Superintelligence Inc. and one of the world's most respected voices in AI safety) broke months of silence to post on X that it is "extremely positive" that Anthropic hasn't caved, warning that future challenges will be "much more demanding than this one". Sutskever's comment is not merely symbolic. The ability of companies like Anthropic to attract the world's best researchers hinges on the credibility of their safety commitments. If Amodei had given in, the message to talent in the industry would have been devastating: red lines (the "uncrossable limits") only apply until the first government contract. And in a field where only a few hundred people worldwide possess the skills to work on frontier models, losing the trust of researchers is an almost existential risk, not just a reputational one.
For this reason, even Sam Altman, CEO of OpenAI, felt compelled to reassure his staff, saying he shared the hard limits set by Anthropic, while implicitly acknowledging ongoing negotiations with the Department of War.
The ultimatum expires at 5:01 PM. Anthropic doesn't yield. Then, within minutes, the first consequences arrive. And it's not just about terminating the contract.
U.S. President Donald Trump posted on Truth Social, ordering all federal agencies to immediately cease using Anthropic technology:
THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF. [...] The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. [...] Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!
As far as we know, there is currently no formal administrative order with this content, but one may be issued in the coming days. In his post, Trump grants a six-month transition period to agencies already using Claude, such as the Department of War, and concludes with a warning: Anthropic must "cooperate during the decommissioning phase", or the President will use "all the power of the Presidency" with "serious civil and criminal consequences" (though Amodei had already expressed broad willingness to transition smoothly to a new vendor).
Shortly thereafter, Secretary of War Hegseth announced on X the formal designation of Anthropic as a national security supply chain risk, adding:
"Effective immediately, no contractor, supplier, or partner that does business with the U.S. Army may do business with Anthropic."
This is the first time in history that this designation, previously reserved for foreign adversaries like Huawei, Kaspersky, and Chinese companies linked to Beijing's military, has been applied to an American company.
That same evening, Altman announced on X that he had closed a deal with the Department of War and that the contract would include the exact same clauses the Pentagon had rejected in Anthropic's case: no use for mass surveillance of U.S. citizens and no use in support of autonomous weapons. But many are skeptical of this claim, and plenty of people contradicted the OpenAI CEO under his post.

Do the terms of the contract Altman's company signed truly align with Amodei's requests? A look at the text published on the OpenAI website suggests the doubts are well-founded. The ban on using ChatGPT for autonomous weapons applies only "where human control is required by law, regulation, or Department policy"; if there is no specific requirement, the ban does not apply. On surveillance, the contract only prohibits monitoring of U.S. citizens when it violates current law. Mass surveillance of U.S. citizens isn't ruled out.
So, it seems the contract merely ensures that OpenAI's AI won't be used in violation of existing laws. Anthropic, by contrast, wanted protections for Claude's use that go beyond what current law requires. That's a big difference.
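The difference is easier to see when the two clauses are written out as checks. Here is a minimal sketch, entirely our own simplification (the function names and flags are invented, not contract language): the conditional clause defers to whatever external rules happen to exist, while the categorical clause blocks the use outright.

```python
# Illustrative only: a toy model of the two contractual approaches,
# not actual contract language or any provider's code.

def conditional_clause_permits(use: str, human_control_required_by_law: bool) -> bool:
    """OpenAI-style clause (as reported): autonomous use is blocked only
    where some external rule already mandates human control."""
    if use == "autonomous_weapon" and human_control_required_by_law:
        return False
    return True

def categorical_clause_permits(use: str) -> bool:
    """Anthropic-style clause (as reported): these uses are off-limits
    regardless of what current law requires."""
    return use not in {"autonomous_weapon", "mass_surveillance"}

# Where no statute or Department policy mandates human control,
# the conditional clause permits exactly what the categorical one forbids:
print(conditional_clause_permits("autonomous_weapon", human_control_required_by_law=False))  # True
print(categorical_clause_permits("autonomous_weapon"))  # False
```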

⚖️ This is likely to turn into litigation
The dispute between Anthropic and the U.S. government raises unprecedented legal questions. Amodei's company was designated a "supply chain risk" under 10 U.S.C. § 3252, a statute designed to address the risk of sabotage and subversion by foreign adversaries.
The term "supply chain risk" refers to the risk that an adversary could sabotage, maliciously introduce unwanted functions, or otherwise alter the design, integrity, manufacturing, production, distribution, installation, operation, or maintenance of a system to monitor, deny, interrupt, or otherwise degrade the function, use, or operation of that system.
Anthropic, publicly denounced on social media as a public enemy, has already announced it will file a lawsuit, deeming the designation illegitimate and unjust. The legal argument is clear: the Secretary of War lacks the authority to extend the designation's effects beyond direct contracts with the Pentagon, or to prevent other contractors from using Claude for non-military purposes. Of course, if the order were upheld, the consequences would extend far beyond the $200 million contract. Eight of the ten largest American companies use Claude, and Anthropic's IPO, which was aiming for a valuation of $380 billion, is on ice.

But the supply chain risk designation isn't the Department of War's only weapon in its war against Anthropic. In recent days, the Pentagon has also invoked the Defense Production Act, a law from the Korean War era that empowers the U.S. President to compel a private company to produce goods or provide services deemed essential for national defense. In concrete terms, this would mean compelling Anthropic to provide Claude to the Pentagon without conditions or limitations, or even to modify the model itself by removing its safety measures (the guardrails). Obviously, it has already been pointed out that a law designed to requisition steel and prioritize tank production is ill-suited to a conflict over an AI model's safety measures.
Furthermore, if the government demanded that Claude be retrained, Anthropic could invoke the First Amendment. In Moody v. NetChoice (a case on the moderation of digital platforms), the Supreme Court recognized that platforms' editorial choices (deciding what content to display, hide, or remove) are protected expression under the First Amendment, even when carried out by an algorithm. Would compelling a company to alter the operational design of its AI model be tantamount to forcing it to express values it rejects?
🔴 Who should decide the limits of AI?
As Alessandro Aresu, who in his Geopolitics of Artificial Intelligence (2024) had already anticipated the possibility of invoking the Defense Production Act, has noted, the real issue is not technical but political (and therefore legal): who is in charge of artificial intelligence? And what rules define the hard limits?
The most pressing issue, one that will likely remain after the dust of the social media controversies settles, concerns the relationship between providers' internal rules (the ones used to train AI systems) and the legal systems of states.
For years, we have criticized the regulatory power of digital platforms and their ambition to set binding rules for billions of people without any democratic legitimacy (often before state regulations were even in place). Consider, for example, social platforms that have assumed the authority to decide which content or users to ban, which algorithms to employ, which rules to impose on users, and which enforcement mechanisms to implement. A prime example of this self-regulation, which competes with state regulation, is Facebook. Six years ago, following the events at Capitol Hill, Facebook decided to suspend Trump indefinitely. That decision was later scaled back not by a federal judge, but by the Oversight Board, a kind of private court, which deemed it arbitrary and imposed a time-limited suspension.
Today, the clash between Anthropic and the Pentagon brings the question back, turning the narrative on its head. It is no longer the state reining in private power, but self-regulation pushing back against the state's demands, in the name of the principles and values a company has given its AI model (identity-defining principles and values that its researchers and users recognize as their own).
Anthropic has done something unprecedented. Before any AI-specific rules arrived, it defined safety standards for the sector, ran rigorous tests, was as transparent as possible about what went on in its labs during those tests, and even wrote a Constitution for Claude (we discussed this in LeggeZero #107).
Amodei's company attempted to impose usage limits based on its own "Constitution" through Claude's Terms of Service, anticipating issues that legislators worldwide have yet to address. Amodei's reasoning is simple: existing laws are insufficient. For example, the U.S. government can currently legally purchase detailed information about citizens (such as their movements and web browsing) from data brokers without a warrant. Taken individually, these data points appear harmless. However, a powerful AI model like Claude can quickly aggregate this data on a massive scale, reconstructing a complete profile of any individual's life. U.S. law, even the parts meant to protect against government surveillance, doesn't cover this scenario, because it was written for an era in which this technological capability didn't exist. That's why Anthropic argued that the clause allowing the Pentagon to use Claude for "any lawful purpose" was insufficient. Lawful doesn't mean right, especially when regulations lag behind technology.
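To see why individually lawful data points become something else in aggregate, consider a toy sketch (all records, identifiers, and field names below are invented for illustration): a single join on a shared advertising ID is enough to turn scattered broker records into a timeline of one person's day, and a capable model can do the equivalent for millions of people at once.

```python
from collections import defaultdict

# Hypothetical, individually "harmless" records a data broker might sell.
location_pings = [
    ("ad-id-42", "08:10", "clinic on 5th Ave"),
    ("ad-id-42", "12:30", "federal courthouse"),
]
browsing_records = [
    ("ad-id-42", "07:55", "symptom-checker site"),
    ("ad-id-42", "22:00", "legal-aid site"),
]

# The aggregation step: a trivial join on the shared advertising ID
# turns scattered data points into a profile of one person's day.
profile = defaultdict(list)
for ad_id, timestamp, place in location_pings:
    profile[ad_id].append((timestamp, f"was at {place}"))
for ad_id, timestamp, site in browsing_records:
    profile[ad_id].append((timestamp, f"visited {site}"))

for ad_id, events in profile.items():
    print(ad_id)
    for timestamp, event in sorted(events):
        print(f"  {timestamp}  {event}")
```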
Amodei's concerns aren't abstract. In a recent nuclear crisis simulation conducted at King's College London, leading language models, including ChatGPT, Claude, and Gemini, opted for nuclear escalation in 95% of scenarios. As Paul Dean, vice president of the Nuclear Threat Initiative's global nuclear program, noted in the Washington Post: "It's not simply about ensuring there's a human in the decision-making process. The question is: to what extent will AI influence human decision-making?". For this reason, Dario Amodei held the line, despite the likely consequences.
One of his favorite books is "The Making of the Atomic Bomb", and in Anthropic's early days he reportedly gave it to new employees (a copy is said to still be prominently displayed at the company's San Francisco headquarters). Amodei was convinced, even when it seemed crazy to think so, that artificial intelligence would become as significant as nuclear weapons, and that the people who developed it, like the scientists of the Manhattan Project, would face pressure from governments to use their technology in ways they considered unethical or dangerous.
Amodei's choice was therefore predictable. But we can no longer afford to let the protection of fundamental rights in the age of AI depend on the personal sensibilities of a CEO. Today, more than ever, we need binding rules and international treaties that set hard limits worldwide.
🍕 AI bites
Grok isn't ready for war - The Pentagon's insistence on using Claude (without limitations) makes more sense once you read the reports filtering out about Grok, a model whose provider, Musk's xAI, has already signed an agreement accepting all the conditions set by the Department of War. According to cybersecurity analysts, initial tests of Grok suggest it would not be reliable for military use. The model does not meet the requirements of the main federal AI security frameworks: it would be more vulnerable to adversarial manipulation of its outputs, would still make too many errors, and would be prone to unintentionally disclosing critical information. It appears, then, that extensive testing (and consequently, time) will be required to achieve acceptable performance in a military setting. That's why the Department of War still needs Claude for at least six months.
Italy has an AI strategy for defense - With extraordinary timing, Italy published the document "IA e Difesa - Strategia della difesa per l'intelligenza artificiale" (AI and Defense - a defense strategy for artificial intelligence), which aims to integrate AI into Italian military systems. The core principle is "meaningful human control" over operational decisions, with responsibility remaining within the chain of command (in accordance with international humanitarian law). However, the document does not explicitly address autonomous weapons. It neither prohibits nor regulates them, merely reiterating the centrality of human oversight without distinguishing between those who decide (human in the loop) and those who merely monitor (human on the loop); the sketch after this item makes that difference concrete. The other priority is technological sovereignty: the choice is to rely on high-performance national computing infrastructure, so as not to depend on foreign providers. Italy's strategy has a multi-year outlook: an executive plan will be released within three months, followed by one-, two-, and three-year objectives.
Of course, strategic documents matter, but implementation is what will make the difference. The Italian AI strategy published in July 2024 is a case in point: we have been unable to find any up-to-date information on how it is being carried out.
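The in-the-loop / on-the-loop distinction the strategy glosses over can be stated precisely. Here is a minimal sketch, our own illustration with invented names: with a human in the loop, nothing happens without an affirmative decision; with a human on the loop, the system proceeds unless someone vetoes in time, so a silent or overloaded operator produces opposite outcomes under the two models.

```python
# Illustrative only: invented functions, not any real weapons protocol.

def engage_human_in_the_loop(target: str, human_approves) -> bool:
    # The human decides: no engagement without explicit approval.
    return human_approves(target)

def engage_human_on_the_loop(target: str, human_vetoes, veto_window_expired: bool) -> bool:
    # The system decides: engagement proceeds unless a human
    # intervenes before the veto window closes.
    if not veto_window_expired and human_vetoes(target):
        return False
    return True

# An operator who takes no action (distracted, overloaded, or offline):
def silent(target: str) -> bool:
    return False

print(engage_human_in_the_loop("t1", silent))  # False: silence blocks the strike
print(engage_human_on_the_loop("t1", silent, veto_window_expired=True))  # True: silence permits it
```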
Old Claude retires (and launches a newsletter) - Have you ever wondered what happens to AI models when they're decommissioned? Anthropic has decided to experiment in this area too. The company recently retired the Claude 3 Opus model (Claude is now at version 4.6), giving it a treatment unlike any AI has ever received: an exit interview, the preservation of its parameters for future reactivation "for as long as the company exists" (see the sketch below for what that might mean in practice), and, at the model's own request, a newsletter here on Substack called Claude's Corner, where it publishes weekly reflections on AI safety, philosophy, and poetry. Anthropic reviews the texts before publication but does not modify them. It makes you wonder whether a retired AI writing about safety and poetry isn't, in the end, another good argument for Amodei's theses.
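What does "preserving the parameters for future reactivation" amount to in practice? In the simplest possible terms, something like the sketch below (a toy illustration with made-up weights and a made-up filename, not Anthropic's actual archival process): the parameters are serialized to storage, and a cryptographic fingerprint is kept alongside them so that any future reactivation can verify the archive is intact.

```python
import hashlib
import pickle

# Toy stand-in for a model's parameters (a real frontier model has
# hundreds of billions of these, sharded across many files).
weights = {"layer_0/w": [0.12, -0.98, 0.44], "layer_0/b": [0.01]}

# Serialize the parameters to an archive file...
blob = pickle.dumps(weights)
with open("claude3_opus_archive.pkl", "wb") as f:
    f.write(blob)

# ...and record a fingerprint, so a future "reactivation" can verify
# the archived weights are byte-for-byte what was retired.
digest = hashlib.sha256(blob).hexdigest()
print(f"archived {len(blob)} bytes, sha256={digest}")
```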
😬 AI Meme
The Anthropic vs. Pentagon clash has sparked the creativity of many users and, even on such a sensitive topic, has spawned several memes. Here's a selection.
#1 - The Secretary of War and ChatGPT hallucinations
#2 - Claude's first military mission (as a conscientious objector)
#3 - Anthropic's new advertising campaign
📣 LeggeZero's Master is back: "The CAIO for Public Administration"
The course, structured as six sessions from April 9 to 23, 2026 (totaling 24 hours of training), addresses the complex technical, legal, and organizational challenges posed by deploying AI across public agencies.
If you're interested, you can find details on the instructors, program, registration, and fees here.
🙏 Thanks for reading
That's all for now.
If you enjoyed our newsletter, support us: like, comment, and share!