Pentagon And Anthropic Duke It Out In Feud Over AI Use


Anthropic, the maker of the Claude AI platform, is in a high-stakes fight with the Pentagon over how its AI can be used. The company already has a two-year agreement with the Defense Department, worth up to $200 million, to provide advanced AI for classified systems.

  • But Anthropic insists on guardrails. It does not want its models used for domestic mass surveillance of Americans or deployed in fully autonomous weapons without a human in the loop.

    • CEO Dario Amodei has called those uses illegitimate and prone to abuse.

  • WAR GAMES: That concern is not theoretical. A recent study reported by New Scientist found leading AI models from OpenAI, Anthropic and Google chose to deploy tactical nuclear weapons in 95 percent of simulated war games.

    • Researchers said the findings were unsettling and warned AI systems may not grasp the stakes of escalation the way humans do.

WHAT ANTHROPIC WANTS:
Anthropic says it will work with the military, but only if certain limits are written into its agreements.

  • No use of its models for domestic mass surveillance of Americans

  • No deployment in fully autonomous weapons without a human in the loop

  • Clear usage terms that prevent the Pentagon from overriding safety guardrails at will

WHAT THE PENTAGON WANTS:
The Pentagon’s position, according to reporting from NBC News, is that models should be available for any lawful military purpose. Defense Secretary Pete Hegseth has outlined a push for an “AI-first warfighting force” and has threatened to invoke the Defense Production Act or label Anthropic a supply chain risk if it refuses to comply.

  • The Pentagon has already signed a deal with Elon Musk’s xAI and is nearing one with Google, hoping those agreements increase pressure on Anthropic.

HISTORY AS A GUIDE:
Tech companies have clashed with the military over AI guardrails before.

  • In 2018, Google adopted AI ethics principles after employee protests over Project Maven, a 2017 Defense Department initiative launched during President Donald Trump’s first term to accelerate machine learning in military operations.

    • Google later revised those principles and dropped its explicit ban on military applications.

  • OpenAI moved from early caution about military uses to major defense contracts under negotiated terms.

  • Other firms, including Palantir and NVIDIA, have long partnered with the defense sector without publicly imposing strict ethical prohibitions.

Anthropic's stance is sharper in tone: it says it will participate, but only with defined limits.

NOW WHAT?
Anthropic recently announced a $20 million donation to a group backing AI-focused political candidates, signaling it understands this fight extends beyond contract language. The outcome could shape whether private AI companies retain meaningful control over how their systems are used in war, surveillance and national security.
