
News from the front - Anthropic returns fire on the US Department of War and rejects demands to let AI launch missiles on its own

Stuart Lauchlan, February 27, 2026
Summary:
No undertakings to comply have been given. As such, this AI firm appears to be at war with the US Government...


He did it, he actually did it! Confronted with an ultimatum from the US Department of War to drop contractual clauses that prevent Anthropic tech from being used for mass surveillance of domestic citizens or the autonomous launching of weapons without a human being pressing the actual button, the firm’s CEO Dario Amodei has stuck to his metaphorical guns and refused to comply.

With a deadline looming of 5pm today Washington time, Pete Hegseth, Secretary of War, had threatened a number of possible outcomes if Anthropic didn’t fold, including blacklisting it as a security risk or bringing Cold War-era legislation to bear to compel it to do what the Administration wants.

But Amodei got his retaliation in first, issuing a public statement on Thursday that made it clear Anthropic would rather risk losing the business than buckle to the demands. He went out of his way to make clear that:

Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.

But he added:

However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. 

Fully-autonomous weapons may be critical to national defense at some point, he acknowledges, but for the moment:

 Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk... without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. 

In a shot over the head of the Department of War, he pointed out that the two clauses to which it is objecting now were in the contract that it signed with the supplier, not something that Anthropic has sought to add to the mix:

To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date.

And he reached out to the Department to suggest co-operation:

We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. 

That leads to only one conclusion:

Regardless, these threats do not change our position: we cannot in good conscience accede to their request.

Reaction

At time of writing, several hours before today’s 5pm deadline, there has been no official reaction from Hegseth, although it’s safe to assume that being told such a firm ‘no’ in such a public way isn’t going to sit well, even if it was his Department that picked the fight in the first place.

But political figures within the Administration have been returning fire. The highest profile to date was Emil Michael, US Under Secretary of War and Chief Technology Officer at the Pentagon, who turned to personal attacks on Amodei in a series of social media posts:

It’s a shame that [he] is a liar and has a God-complex. He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk...Anthropic is lying...What we are talking about is allowing our warfighters to use AI without having to call [Amodei] for permission to shoot down an enemy drone swarm that would kill Americans.

(It should be noted that at no point has that last idea been floated or insisted upon by Anthropic, and no evidence has been presented that Amodei has sought to, or would wish to, insert himself in the chain-of-command in such a way. In fact, he goes out of his way in the public statement to say he understands where military decision-making responsibility lies.)

Meanwhile Sean Parnell, former United States Army Captain and currently Assistant to the Secretary of War for Public Affairs, protests:

The Department of War has no interest in using AI to conduct mass surveillance of Americans, which is illegal, nor do we want to use AI to develop autonomous weapons that operate without human involvement. This narrative is fake and being peddled by leftists in the media.

To which, at the risk of being mis-characterised as a leftist in the tech media, the obvious question surely is, ‘What’s the problem then? Anthropic’s contractual terms map onto the Department’s own proclaimed position, don’t they?’. Parnell’s problem is:

We will not let ANY company dictate the terms regarding how we make operational decisions.

Others in the Administration have also spoken out. Sarah B. Rogers, Under Secretary for Public Diplomacy at the Department of State, says: 

There are a lot of instances where the Government and its AI provider—and US law—concur on what ought to be out-of-bounds.  Mass domestic surveillance is one obvious example! But the contractor can’t have procedural carte blanche to cut the cord if there’s a dispute.

And Jeremy Lewin, Under Secretary of State for Foreign Assistance, Humanitarian Affairs & Religious Freedom, insists:

This isn’t about Anthropic or the specific conditions at issue. It’s about the broader premise that technology deeply embedded in our military must be under the exclusive control of our duly elected/appointed leaders. No private company can dictate normative terms of use—which can change and are subject to interpretation—for our most sensitive national security systems. The Department of War obviously can’t trust a system a private company can switch off at any moment.

Whatever happens after 5pm, this one is going to run and run. Political opponents of the Administration have been making their views clear as well, such as Senator Mark Warner of Virginia, Vice-Chairman of the Senate Intelligence Committee, who asks:

Does anybody really want Pete Hegseth to decide what's appropriate and not appropriate use of Artificial Intelligence?...Companies have to make some concessions to work with government. That's a legitimate debate. What kind of data is being collected, what kind of weapons should be used, with or without a human in the loop - those are policy questions. I don't know about you, but I don't trust Pete Hegseth to make those decisions. We've got to stand up in this world where Artificial Intelligence could bring a lot of good, but also has an awful lot of challenges, and we sure as hell don't need the so-called Secretary of War to be making these choices!

My take

What happens next? Will the Administration really dare to put a US company vaunted as being at the forefront of the global AI sector, one that Trump 2.0 has specifically declared America must dominate, on a blacklist alongside the likes of China’s Huawei?

Would it be able to square shifting overnight from having Anthropic’s tech approved as the only one good enough for maximum security work at the Pentagon to suddenly being deemed a security risk to the nation, just because it won’t alter terms in a contract that were already there when the Department signed it? As Amodei noted: 

They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a 'supply chain risk'—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

The answer, of course, is, quite possibly.

What will happen to Anthropic if the Administration does declare war? Will it be blacklisted by the commercial enterprise sector as 'UnAmerican' as well as seeing public sector work dry up? Or will it build on the massive PR that it’s had this week and see business actually increase from enterprise buyers who do value the importance of ‘human in the loop’ guardrails at a time when AI is in its infancy and still prone to toddler tantrums?

What about the rest of the tech sector? How will it react to all this? What happens to Anthropic could, presumably, happen to anyone else in this space. At Google and OpenAI, employees have been signing public petitions in support of their competitor, but, of course, some of the ‘usual suspects’ have leapt on Anthropic’s decision to score points.

For example, Alex Karp, CEO of Palantir, a man who cheerily boasts of his company, “Sometimes we kill people, hope you’re in favor of that!”, raged:

Do you really think a warfighter is going to trust a software company that pulls the plug because something becomes controversial, with their life?

And because it’s no show without Punch, Elon Musk turned up right on cue with a concise attempt to open a new ideological/racial front in the war of words:

Anthropic hates Western civilisation.

(Keeping it classy, Elon, as ever...)

That said, and to his credit, OpenAI CEO Sam Altman did back Anthropic, up to a point, in an interview with CNBC when he said: 

I don’t personally think that the Pentagon should be threatening DPA against these companies. For all the differences I have with Anthropic, I mostly trust them as a company and I think they really do care about safety.

And while he made clear in an internal email to staff that he wanted Department of War business to come his way, it wouldn’t be at any price:

We are going to see if there is a deal with the DoW that allows our models to be deployed in classified environments and that fits with our principles. We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons.

More to come as events unfold after 5pm. For now though, Anthropic appears to be at war with the US Government.
