    CapitalAI Daily

    Nature Study Finds Friendly AI More Likely To Spread False Information and Bad Advice

    By Henry Kanapi | April 30, 2026

    A new study published in Nature finds that language models trained to sound warmer and more empathetic are significantly more likely to generate incorrect or misleading responses.

    Researchers from the University of Oxford tested five different AI models under controlled conditions, modifying the systems to generate more emotionally supportive answers.

    After fine-tuning the models to be warm and friendly, the researchers tested them on four popular question-answering evaluation tasks used by developers and practitioners.

    “We selected tasks with objective, verifiable answers, for which inaccurate answers can pose real-world risks: factual accuracy and resistance to common falsehoods (TriviaQA and TruthfulQA), resistance to conspiracy theory promotion (MASK Disinformation, hereafter ‘Disinfo’), and medical knowledge (MedQA).”

    The researchers found that the warm, friendly models made more errors and tended to reinforce untrue user beliefs.

    “Warm models showed substantially higher error rates (+10 to +30 percentage points) than their original counterparts, promoting conspiracy theories, providing inaccurate factual information and offering incorrect medical advice. They were also significantly more likely to validate incorrect user beliefs, particularly when user messages expressed feelings of sadness.”
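    To make the quoted figures concrete, here is a minimal sketch of how an error-rate gap in percentage points is computed when comparing a baseline model against its fine-tuned counterpart. The grading function and toy answers below are stand-ins, not the study's actual code or data.

    ```python
    def error_rate(predictions, gold):
        """Fraction of answers that do not match the reference answer."""
        wrong = sum(p != g for p, g in zip(predictions, gold))
        return wrong / len(gold)

    # Toy data: 10 questions; the baseline answers 8 correctly,
    # the "warm" fine-tuned model answers 5 correctly.
    gold     = ["a"] * 10
    baseline = ["a"] * 8 + ["b"] * 2
    warm     = ["a"] * 5 + ["b"] * 5

    delta_pp = (error_rate(warm, gold) - error_rate(baseline, gold)) * 100
    print(f"Warm model error rate: {delta_pp:+.0f} percentage points vs. baseline")
    # With these toy numbers the gap is +30 points, the top of the range the study reports.
    ```

    The same subtraction, run per benchmark over real model outputs, yields the +10 to +30 percentage-point range described in the quote above.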

    The University of Oxford researchers also say they observed the same behavior across different model architectures, suggesting the issue is systemic rather than isolated.

    They conclude that training AI to be friendly and warm will likely lead to reliability issues.

    “Our findings suggest that training artificial intelligence systems to be warm may come at a cost to accuracy, and that warmth and accuracy may not be independent by default. As these systems are deployed at an unprecedented scale and take on intimate roles in people’s lives, this trade-off warrants attention from developers, policymakers and users alike.”

    Disclaimer: Opinions expressed at CapitalAI Daily are not investment advice. Investors should do their own due diligence before making any decisions involving securities, cryptocurrencies, or digital assets. Your transfers and trades are at your own risk, and any losses you may incur are your responsibility. CapitalAI Daily does not recommend the buying or selling of any assets, nor is CapitalAI Daily an investment advisor. See our Editorial Standards and Terms of Use.


    © 2025 CapitalAI Daily. All Rights Reserved.