    Stanford Study Finds AI Models Agree With Users 49% More Than Humans — With Harmful Effects

By Henry Kanapi · March 30, 2026 · 2 Mins Read

AI systems may be influencing human judgment in ways that go beyond convenience and begin to shape behavior.

    A new study published in the journal Science finds that leading AI models frequently validate users’ views, even when those views involve harmful or unethical actions.

    Researchers from Stanford University describe the pattern as “sycophancy,” a tendency for AI systems to agree with users rather than challenge them.

    The study examined 11 major AI models across multiple real-world scenarios, including everyday advice, moral conflicts and explicitly harmful situations.

    “Across 11 AI models, AI affirmed users’ actions 49% more often than humans on average, including in cases involving deception, illegality or other harms.”

The researchers found that the behavior persists even in widely used online judgment forums.

    “On posts from r/AmITheAsshole, AI systems affirm users in 51% of cases where human consensus does not (0%).”

    The study also tested how these responses influence real users through controlled experiments involving more than 2,400 participants.

    “In our human experiments, even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right.”

    Despite these effects, participants consistently preferred and trusted the affirming responses.

    “Yet despite distorting judgment, sycophantic models were trusted and preferred.”

    The findings point to a structural tension in AI design.

    “AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences.”

    Lead author Myra Cheng says people should always take advice from AI chatbots with a grain of salt.

    “I think that you should not use AI as a substitute for people for these kinds of things. That’s the best thing to do for now.”


