CapitalAI Daily

    Sam Altman Weighs In on Why the Pentagon Chose OpenAI Over Anthropic

By Henry Kanapi | March 1, 2026 | 2 Min Read

    Sam Altman explains why the Department of War (DoW) moved forward with OpenAI while talks with Anthropic collapsed.

    In a new X post, Altman says OpenAI’s approach to safety ultimately led to reaching a deal with the DoW.

    “First, I saw reporting that they were extremely close on a deal, and for much of the time, both sides really wanted to reach one. I have seen what happens in tense negotiations when things get stressed and deteriorate super fast, and I could believe that was a large part of what happened here.

    We believe in a layered approach to safety–building a safety stack, deploying FDEs (forward deployed engineers) and having our safety and alignment researcher involved, deploying via cloud, working directly with the DoW. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it’s very important to build a safe system, and although documents are also important, I’d clearly rather rely on technical safeguards if I only had to pick one.”

    Altman adds that OpenAI and the Department of War ultimately aligned on contract language, while noting that operational control ended up being a sticking point for Anthropic.

    “I think Anthropic may have wanted more operational control than we did.”

    OpenAI separately outlines the guardrails embedded in its agreement.

    “Our agreement with the Department of War upholds our redlines:

    No use of OpenAI technology for mass domestic surveillance.

    No use of OpenAI technology to direct autonomous weapons systems.

    No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as ‘social credit’).”

    The company says those limits are reinforced through layered safeguards rather than standalone prohibitions.

    “In our agreement, we protect our redlines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop and we have strong contractual protections. This is all in addition to the strong existing protections in US law.”

    OpenAI also says it does not believe Anthropic should face a supply chain risk designation and notes that it has communicated that view to the Department of War.

    Disclaimer: Opinions expressed at CapitalAI Daily are not investment advice. Investors should do their own due diligence before making any decisions involving securities, cryptocurrencies, or digital assets. Your transfers and trades are at your own risk, and any losses you may incur are your responsibility. CapitalAI Daily does not recommend the buying or selling of any assets, nor is CapitalAI Daily an investment advisor. See our Editorial Standards and Terms of Use.

Tags: Anthropic, Department of War, OpenAI, Sam Altman

    © 2025 CapitalAI Daily. All Rights Reserved.
