OpenAI CEO Sam Altman announced late on Friday that his company has reached an agreement allowing the Department of Defense to use its AI models on the department's classified network.
This follows a high-profile standoff between the DoD, also known under the Trump administration as the Department of War, and OpenAI's rival Anthropic. The Pentagon pushed AI companies, including Anthropic, to allow their models to be used for "all lawful purposes," while Anthropic sought to draw a red line around mass domestic surveillance and fully autonomous weapons.
In a lengthy statement released Thursday, Anthropic CEO Dario Amodei said the company "never raised objections to specific military operations nor tried to limit use of our technology in an ad hoc manner," but he argued that "in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values."
More than 60 OpenAI employees and 300 Google employees signed an open letter this week asking their employers to support Anthropic's position.
After Anthropic and the Pentagon failed to reach an agreement, President Donald Trump criticized the "Leftwing nut jobs at Anthropic" in a social media post that also directed federal agencies to stop using the company's products after a six-month phase-out period.
In a separate post, Secretary of Defense Pete Hegseth claimed Anthropic was trying to "seize veto power over the operational decisions of the United States military." Hegseth also said he is designating Anthropic as a supply-chain risk: "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
On Friday, Anthropic said it had "not yet received direct communication from the Department of War or the White House on the status of our negotiations," but insisted it would "challenge any supply chain risk designation in court."
Surprisingly, Altman claimed in a post on X that OpenAI's new defense contract includes protections addressing the same issues that became a flashpoint for Anthropic.
"Two of our most important safety principles are prohibitions on domestic mass surveillance and human accountability for the use of force, including for autonomous weapon systems," Altman said. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
Altman said OpenAI "will build technical safeguards to ensure our models behave as they should, which the DoW also wanted," and it will deploy engineers with the Pentagon "to help with our models and to ensure their safety."
"We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept," Altman added. "We have expressed our strong desire to see things de-escalate away from legal and governmental actions and toward reasonable agreements."
Fortune's Sharon Goldman reports that Altman told OpenAI employees at an all-hands meeting that the government will allow the company to build its own "safety stack" to prevent misuse and that "if the model refuses to do a job, then the government wouldn't force OpenAI to make it do that job."
Altman's post came shortly before news broke that the U.S. and Israeli governments had begun bombing Iran, with Trump calling for the overthrow of the Iranian government.