Anthropic’s $200 million contract with the Department of Defense is up in the air after Anthropic reportedly raised concerns about the Pentagon’s use of its Claude AI model during the Nicolas Maduro raid in January.
“The Department of War’s relationship with Anthropic is being reviewed,” Chief Pentagon Spokesman Sean Parnell said in a statement to Fortune. “Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people.”
Tensions have escalated in recent weeks after a top Anthropic official reportedly reached out to a senior Palantir executive to question how Claude was used in the raid, per The Hill. The Palantir executive interpreted the outreach as disapproval of the model’s use in the raid and forwarded details of the exchange to the Pentagon. (President Trump said the military used a “discombobulator” weapon during the raid that made enemy equipment “not work.”)
“Anthropic has not discussed the use of Claude for specific operations with the Department of War,” an Anthropic spokesperson said in a statement to Fortune. “We have also not discussed this with, or expressed concerns to, any industry partners outside of routine discussions on strictly technical matters.”
At the center of this dispute are the contractual guardrails dictating how AI models can be used in defense operations. Anthropic CEO Dario Amodei has consistently advocated for strict limits on AI use and regulation, even admitting it becomes difficult to balance safety with revenue. For months now, the company and the DOD have held contentious negotiations over how Claude can be used in military operations.
Under the Defense Department contract, Anthropic won’t permit the Pentagon to use its AI models for mass surveillance of Americans or in fully autonomous weapons. The company also banned the use of its technology in “lethal” or “kinetic” military applications. Any direct involvement in active gunfire during the Maduro raid would likely violate these terms.
Among the AI companies contracting with the government, including OpenAI, Google, and xAI, Anthropic holds a lucrative position: Claude is the only large language model authorized on the Pentagon’s classified networks.
Anthropic highlighted this position in a statement to Fortune. “Claude is used for a wide variety of intelligence-related use cases across the government, including the DoW, in line with our Usage Policy.”
The company “is committed to using frontier AI in support of US national security,” the statement read. “We are having productive conversations, in good faith, with DoW on how to continue that work and get these complex issues right.”
Palantir, OpenAI, Google, and xAI did not immediately respond to a request for comment.
AI goes to war
Though the DOD has accelerated efforts to integrate AI into its operations, only xAI has granted the DOD use of its models for “all lawful purposes,” while the others maintain usage restrictions.
Amodei has been sounding the alarm for months on user protections, offering Anthropic as a safety-first alternative to OpenAI and Google in the absence of government regulation. “I’m deeply uncomfortable with these decisions being made by a few companies,” he said back in November. Though it was rumored that Anthropic was planning to ease restrictions, the company now faces the possibility of being cut out of the defense industry altogether.
A senior Pentagon official told Axios that Defense Secretary Pete Hegseth is “close” to removing Anthropic from the military supply chain, forcing anyone who wants to do business with the military to also cut ties with the company.
“It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this,” the senior official told the outlet.
Being deemed a military supply risk is a special designation usually reserved only for foreign adversaries. The closest precedent is the government’s 2019 ban on Huawei over national security concerns. In Anthropic’s case, sources told Axios that defense officials have been looking to pick a fight with the San Francisco-based company for some time.
The Pentagon’s comments are the latest in a public dispute coming to a boil. The government claims that having companies set ethical limits on their models would be unnecessarily restrictive, and that the sheer number of gray areas would render the technologies useless. As the Pentagon continues to negotiate with the AI contractors to expand usage, the public spat becomes a proxy fight over who will dictate the uses of AI.
