
    Sam Altman Warns AI Could Threaten Humanity Through Human Misuse and This Hidden Risk

By Henry Kanapi | October 3, 2025 | 2 Mins Read

    OpenAI CEO Sam Altman warns that artificial intelligence could pose a threat to humanity in more ways than one.

    In a new interview with Axel Springer CEO Mathias Döpfner, Altman outlines how the technology, when directed by nation-states in conflict, could be turned toward devastating ends.

According to Altman, AI models are designed to follow user commands, which means someone could weaponize the technology.

    “AI systems do exactly what we tell them to do, but somebody is able to build one and a country, and we talked about warfare, a country that is using it to wage war is able to do sort of like an unimaginable amount of damage to the world. So that’s one we can think about, right?

“Where AI is aligned, AI follows our instructions, but a human running a powerful country fighting other powerful countries decides to horribly misuse it and create a bioweapon or hack into someone’s nuclear weapon systems or who knows what. That’s not really like, that’s not an AI alignment failure in any sense of the word.”

    He contrasts that scenario with a second category of risk, in which AI systems themselves resist human control.

    “Category two is sort of the more classic sci-fi, the sort of like AI, let’s not even call it conscious, but the AI develops some sense of agency and does not want to be stopped. Even if it’s just like trying to accomplish a goal and there’s no intentionality or consciousness, but it’s like, I need to not be stopped by these humans in pursuit of this goal. That would be an alignment, a big alignment failure, and category two, sort of the sci-fi thing.

“And the world spends a lot of time talking about those two. I think they’re obviously important things we need to address.”

    Altman also mentions a third category of risk, one where AI could “accidentally take over the world” without wars or consciousness.

    Disclaimer: Opinions expressed at CapitalAI Daily are not investment advice. Investors should do their own due diligence before making any decisions involving securities, cryptocurrencies, or digital assets. Your transfers and trades are at your own risk, and any losses you may incur are your responsibility. CapitalAI Daily does not recommend the buying or selling of any assets, nor is CapitalAI Daily an investment advisor. See our Editorial Standards and Terms of Use.

