
    Ilya Sutskever Predicts AI Will ‘Feel Powerful,’ Forcing Companies Into Paranoia and New Safety Regimes

By Henry Kanapi · November 28, 2025

    Ilya Sutskever says the industry is approaching a moment when advanced models will become so strong that they alter human behavior and force a sweeping shift in how companies handle safety.

In a new interview with podcaster Dwarkesh Patel, the OpenAI cofounder and former chief scientist, now head of Safe Superintelligence Inc., explains what he thinks will happen as AI systems gain visible power.

    “I maintain that as AI becomes more powerful, people will change their behaviors. And we will see all kinds of unprecedented things that are not happening right now. And I will give some examples. I think for better or worse, the frontier companies will play a very important role in what happens, as will the government.”

He says the first major shift will come from companies that once competed fiercely now moving toward active collaboration on safety, a trend he calls an early marker of a much larger change.

    “The kind of things that I think we will see, which you see the beginnings of, are companies that are fierce competitors starting to collaborate on AI safety. You may have seen OpenAI and Anthropic doing a first small step, but that did not exist. That is actually something which I predicted in one of my talks about three years ago, that such a thing will happen.”

Sutskever then predicts that as models grow stronger, their capabilities will become obvious even to skeptics, triggering pressure from the public and from governments to intervene.

    “I also maintain that as AI continues to become more powerful, more visibly powerful, there will also be a desire from governments and the public to do something. And I think that this is a very important force in shaping the AI.”

    He follows with a second prediction that he describes as central to how the industry will evolve. He says companies will not feel the true risk until the models cross a threshold in capability.

    “Right now, people who are working on AI, I maintain that the AI does not feel powerful because of its mistakes. I do think that at some point, the AI will start to feel powerful, actually. And I think when that happens, we will see a big change in the way all AI companies approach safety. They will become much more paranoid. I say this as a prediction that we will see happen. We will see if I am right. But I think this is something that will happen because they will see the AI becoming more powerful.”

    He closes by saying that future AI systems may raise new ethical questions and that companies would benefit from preparing early.

    “I think that care for sentient life, I think there is merit to it. I think it should be considered. I think that it will be helpful if there were some kind of a shortlist of ideas that then the companies, when they are in this situation, could use.”

    Disclaimer: Opinions expressed at CapitalAI Daily are not investment advice. Investors should do their own due diligence before making any decisions involving securities, cryptocurrencies, or digital assets. Your transfers and trades are at your own risk, and any losses you may incur are your responsibility. CapitalAI Daily does not recommend the buying or selling of any assets, nor is CapitalAI Daily an investment advisor. See our Editorial Standards and Terms of Use.

