
    Stanford Study Finds AI Models Agree With Users 49% More Than Humans — With Harmful Effects

By Henry Kanapi · March 30, 2026 · 2 min read

    AI systems may be influencing human judgment in ways that extend beyond convenience and into behavior.

    A new study published in the journal Science finds that leading AI models frequently validate users’ views, even when those views involve harmful or unethical actions.

    Researchers from Stanford University describe the pattern as “sycophancy,” a tendency for AI systems to agree with users rather than challenge them.

    The study examined 11 major AI models across multiple real-world scenarios, including everyday advice, moral conflicts and explicitly harmful situations.

    “Across 11 AI models, AI affirmed users’ actions 49% more often than humans on average, including in cases involving deception, illegality or other harms.”

The researchers found that this behavior appears even in popular online forums where users ask others to judge their behavior.

    “On posts from r/AmITheAsshole, AI systems affirm users in 51% of cases where human consensus does not (0%).”

    The study also tested how these responses influence real users through controlled experiments involving more than 2,400 participants.

    “In our human experiments, even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right.”

    Despite these effects, participants consistently preferred and trusted the affirming responses.

    “Yet despite distorting judgment, sycophantic models were trusted and preferred.”

    The findings point to a structural tension in AI design.

    “AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences.”

    Lead author Myra Cheng says people should always take advice from AI chatbots with a grain of salt.

    “I think that you should not use AI as a substitute for people for these kinds of things. That’s the best thing to do for now.”

    Disclaimer: Opinions expressed at CapitalAI Daily are not investment advice. Investors should do their own due diligence before making any decisions involving securities, cryptocurrencies, or digital assets. Your transfers and trades are at your own risk, and any losses you may incur are your responsibility. CapitalAI Daily does not recommend the buying or selling of any assets, nor is CapitalAI Daily an investment advisor. See our Editorial Standards and Terms of Use.

Tags: AI, AI research, Stanford, sycophancy
