Sam Altman explains why the Department of War (DoW) moved forward with OpenAI while talks with Anthropic collapsed.
In a new X post, Altman says OpenAI’s approach to safety is ultimately what led to a deal with the DoW.
“First, I saw reporting that they were extremely close on a deal, and for much of the time, both sides really wanted to reach one. I have seen what happens in tense negotiations when things get stressed and deteriorate super fast, and I could believe that was a large part of what happened here.
We believe in a layered approach to safety–building a safety stack, deploying FDEs (forward deployed engineers) and having our safety and alignment researchers involved, deploying via cloud, working directly with the DoW. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it’s very important to build a safe system, and although documents are also important, I’d clearly rather rely on technical safeguards if I only had to pick one.”
Altman adds that OpenAI and the Department of War ultimately aligned on contract language, while suggesting that operational control was a sticking point for Anthropic.
“I think Anthropic may have wanted more operational control than we did.”
OpenAI separately outlines the guardrails embedded in its agreement.
“Our agreement with the Department of War upholds our redlines:
No use of OpenAI technology for mass domestic surveillance.
No use of OpenAI technology to direct autonomous weapons systems.
No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as ‘social credit’).”
The company says those limits are reinforced through layered safeguards rather than standalone prohibitions.
“In our agreement, we protect our redlines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in US law.”
OpenAI also says it does not believe Anthropic should face a supply chain risk designation and notes that it has communicated that view to the Department of War.