Anthropic CEO Dario Amodei has returned to negotiations with the US Department of Defense after talks collapsed last week over how the military can use the company’s artificial intelligence technology, according to a report by the Financial Times.

The discussions are aimed at resolving a dispute over the Pentagon’s access to Anthropic’s Claude AI models, which have already been deployed in classified government networks under a major defense contract.

Talks resume after breakdown

Amodei is now in discussions with Emil Michael, the Pentagon’s under-secretary of defense for research and engineering, in what people familiar with the matter described as a last-ditch attempt to reach an agreement governing how the military can deploy Anthropic’s tools, FT reported.

Negotiations fell apart on Friday after the two sides failed to bridge differences over the scope of the technology’s use.

In the aftermath, President Donald Trump directed federal agencies to halt their use of Anthropic’s software, while Defense Secretary Pete Hegseth said the company could be designated a supply-chain risk to US national security.

The dispute also turned personal.

Michael publicly criticized Amodei in a post on social media platform X, calling him a “liar” with a “God complex.”

Disagreement over military use of AI

At the heart of the conflict is how far the Pentagon can go in applying AI models such as Claude in military operations.

Anthropic was awarded a $200 million contract that made Claude the first major AI model deployed within classified government networks.

However, the company later sought assurances that its technology would not be used for domestic surveillance or to power autonomous weapons systems.

The Pentagon, by contrast, has pushed for the ability to deploy the technology for any lawful military purpose.

According to a memo Amodei sent to employees seen by FT, the Defense Department offered to accept Anthropic’s terms near the end of negotiations if a specific phrase referring to the “analysis of bulk acquired data” was removed.

Amodei wrote that the wording reflected exactly the kind of use case the company was concerned about.

Even as Anthropic faced the wrath of the DoD, its Claude model was reportedly used by the US military in a barrage of strikes against Iran, with the technology said to "shorten the kill chain," the process running from target identification through legal approval to strike launch.

Rivalry with OpenAI intensifies

The controversy has also spilled into the broader AI industry, highlighting growing competition between Anthropic and its rival OpenAI.

Only hours after federal agencies were directed to stop using Anthropic's tools, OpenAI signed a deal with the Pentagon and said that the department had agreed to the company's restrictions.

In the memo sent to staff on Friday, Amodei also claimed that statements from the Pentagon and OpenAI about their deal were "just straight up lies about these issues or tries to confuse them."

In a separate internal note earlier on Wednesday, Amodei also criticized OpenAI chief Sam Altman, calling him “mendacious” and accusing him of offering what he described as “dictator-style praise” for Trump, according to The Information.

Amodei added that Anthropic had maintained its “red lines with integrity,” rather than working with officials to create what he described as “safety theater” intended to reassure employees.

He wrote that many in Washington, including officials at the Pentagon and Palantir as well as political consultants, had assumed that such reassurance was what the company was really after.

The timing of OpenAI’s deal has triggered backlash online, with Anthropic’s Claude reportedly seeing a surge in downloads while uninstallations of OpenAI’s ChatGPT increased.

Altman also later acknowledged that his company “shouldn’t have rushed” its deal with the Pentagon and said he had urged officials not to designate Anthropic as a supply-chain risk.

Industry concern over government pressure

The standoff has alarmed parts of the technology sector, which fear that labeling a domestic AI developer as a national security risk could have broader consequences for the industry.

A technology trade group whose members include Nvidia, Google and Anthropic sent a letter to Defense Secretary Hegseth this week expressing concern over the possibility of such a designation.

Anthropic, founded in 2021 by former OpenAI researchers who left after disagreements over the company’s direction, has positioned itself as a safety-focused alternative in the rapidly expanding AI market.

The outcome of the negotiations could set an important precedent for how private AI companies cooperate with governments as advanced machine learning systems become increasingly integrated into national security operations.

The post Anthropic resumes Pentagon talks in last ditch attempt to reach deal: report appeared first on Invezz