The Pentagon is considering ending its partnership with the artificial intelligence company Anthropic due to disagreements over restrictions on how the U.S. military can utilize its AI models, Axios reported on Saturday, citing a government official. The Pentagon has been pressuring four AI firms to permit military use of their technologies for “all lawful purposes,” including development of weapons, intelligence gathering, and combat operations. However, Anthropic has refused to accept these conditions, and after prolonged negotiations, the department is growing frustrated, Axios noted. The other companies involved are OpenAI, Google, and xAI.
An Anthropic spokesperson said the company has not discussed deploying its AI model, Claude, for specific military operations with the Pentagon. The spokesperson added that conversations so far have been limited to policy questions around usage, particularly strict limits on fully autonomous weapons and domestic surveillance, and do not pertain to ongoing military activities. The Pentagon did not respond to a Reuters request for comment.
Claude was reportedly used in a U.S. military effort to capture former Venezuelan President Nicolás Maduro, through Anthropic’s collaboration with data firm Palantir, the Wall Street Journal reported on Friday. Reuters also reported Wednesday that the Pentagon has been urging leading AI companies, including OpenAI and Anthropic, to make their tools available on classified networks without the usual restrictions they impose on ordinary users.