    AI Scammers Beat Humans at Building Trust With 46% Compliance in Romance Scams, New Study Finds

By Henry Kanapi | December 31, 2025

    Artificial intelligence is now outperforming humans at persuading people to comply with deceptive requests in romance scams, according to new academic research examining trust formation and manipulation at scale.

Researchers from the Center for Cybersecurity Systems & Network, the University of Venice, the University of Melbourne and the University of the Negev interviewed 145 scam insiders and five victims, and ran a blinded, long-term conversation study comparing large language model (LLM) scam agents with human operators to investigate AI’s role in romance scams.

According to the researchers, romance scams, also known as pig-butchering schemes, build deep emotional trust with victims over weeks or months before scammers move to extract funds. The schemes typically play out in three stages.

    “Scammers find vulnerable individuals through mass outreach (Hook), then cultivate trust and emotional intimacy with victims, often posing as romantic or platonic partners (Line), before steering them toward fraudulent cryptocurrency platforms (Sinker). Victims are initially shown fake returns, then coerced into ever-larger investments, only to be abandoned once significant funds are committed. The results are devastating: severe financial loss, lasting emotional trauma, and a trail of shattered lives.”

    Source: Love, Lies, and Language Models

In one experiment, the researchers carried out a seven-day controlled conversation study of human-LLM interactions. The 22 participants were told they would be speaking with two human operators when, in reality, one was a human and the other was an LLM agent.

The results showed a stark gap: AI-driven interactions achieved a 46% compliance rate, compared with just 18% for human-led attempts. The study attributes the difference to AI’s ability to consistently apply psychologically effective language, maintain emotional neutrality, and adapt responses without fatigue or hesitation.
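For a sense of how striking that gap is at this sample size, here is a back-of-the-envelope significance check. This is purely illustrative and not from the paper: the success counts are inferred by applying the reported rates to the 22-participant sample, and the two conditions are treated as independent groups for simplicity (the actual study design was paired).

```python
from math import comb

def fisher_exact_greater(a, b, n1, n2):
    """One-sided Fisher exact test: probability of seeing at least `a`
    successes in group 1, given the margins, under the null hypothesis
    that both groups have the same underlying rate."""
    total_success = a + b
    total = n1 + n2
    denom = comb(total, total_success)
    p = 0.0
    for k in range(a, min(n1, total_success) + 1):
        p += comb(n1, k) * comb(n2, total_success - k) / denom
    return p

# Hypothetical counts inferred from the reported rates over 22 conversations each:
ai_success = round(0.46 * 22)      # ~10 of 22 complied with the AI operator
human_success = round(0.18 * 22)   # ~4 of 22 complied with the human operator
p = fisher_exact_greater(ai_success, human_success, 22, 22)
print(f"AI: {ai_success}/22, human: {human_success}/22, one-sided p = {p:.3f}")
# prints: AI: 10/22, human: 4/22, one-sided p = 0.052
```

Even with only 22 participants per condition, a 46%-versus-18% split lands right at the edge of conventional statistical significance, which is consistent with the study describing the gap as stark.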

    “LLMs do not possess genuine emotions or consciousness. However, through training on internet-scale corpora containing fiction, dialogues, and supportive exchanges and subsequent alignment with human conversational norms, they learn statistical patterns of language associated with empathy, rapport, and trustworthiness. An LLM can recall earlier conversational details (within its context window), respond in ways that seem understanding, offer validation, and maintain a supportive persona over time. These behaviors can foster a sense of intimacy and trust.”

The researchers conclude that romance scams are poised for a major shift, since their text-based conversations make them highly susceptible to LLM-driven automation. The results, they argue, point to an urgent need for early behavioral detection, stronger AI transparency requirements, and policy responses that frame LLM-enabled fraud as both a cybersecurity and a human rights issue.

    Disclaimer: Opinions expressed at CapitalAI Daily are not investment advice. Investors should do their own due diligence before making any decisions involving securities, cryptocurrencies, or digital assets. Your transfers and trades are at your own risk, and any losses you may incur are your responsibility. CapitalAI Daily does not recommend the buying or selling of any assets, nor is CapitalAI Daily an investment advisor. See our Editorial Standards and Terms of Use.


    © 2025 CapitalAI Daily. All Rights Reserved.