CapitalAI Daily
    Sam Altman Weighs In on Why the Pentagon Chose OpenAI Over Anthropic

By Henry Kanapi · March 1, 2026 · 2 Mins Read

    Sam Altman explains why the Department of War (DoW) moved forward with OpenAI while talks with Anthropic collapsed.

In a new X post, Altman says OpenAI’s approach to safety is ultimately what led to a deal with the DoW.

    “First, I saw reporting that they were extremely close on a deal, and for much of the time, both sides really wanted to reach one. I have seen what happens in tense negotiations when things get stressed and deteriorate super fast, and I could believe that was a large part of what happened here.

    We believe in a layered approach to safety–building a safety stack, deploying FDEs (forward deployed engineers) and having our safety and alignment researcher involved, deploying via cloud, working directly with the DoW. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it’s very important to build a safe system, and although documents are also important, I’d clearly rather rely on technical safeguards if I only had to pick one.”

    Altman adds that OpenAI and the Department of War ultimately aligned on contract language, while noting that operational control ended up being a sticking point for Anthropic.

    “I think Anthropic may have wanted more operational control than we did.”

    OpenAI separately outlines the guardrails embedded in its agreement.

    “Our agreement with the Department of War upholds our redlines:

    No use of OpenAI technology for mass domestic surveillance.

    No use of OpenAI technology to direct autonomous weapons systems.

    No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as ‘social credit’).”

    The company says those limits are reinforced through layered safeguards rather than standalone prohibitions.

    “In our agreement, we protect our redlines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop and we have strong contractual protections. This is all in addition to the strong existing protections in US law.”

    OpenAI also says it does not believe Anthropic should face a supply chain risk designation and notes that it has communicated that view to the Department of War.

    Disclaimer: Opinions expressed at CapitalAI Daily are not investment advice. Investors should do their own due diligence before making any decisions involving securities, cryptocurrencies, or digital assets. Your transfers and trades are at your own risk, and any losses you may incur are your responsibility. CapitalAI Daily does not recommend the buying or selling of any assets, nor is CapitalAI Daily an investment advisor. See our Editorial Standards and Terms of Use.


© 2025 CapitalAI Daily. All Rights Reserved.