    AI Scammers Beat Humans at Building Trust With 46% Compliance in Romance Scams, New Study Finds

By Henry Kanapi | December 31, 2025

    Artificial intelligence is now outperforming humans at persuading people to comply with deceptive requests in romance scams, according to new academic research examining trust formation and manipulation at scale.

Researchers from the Center for Cybersecurity Systems & Network, the University of Venice, the University of Melbourne and the University of the Negev interviewed 145 scam insiders and five victims, and ran a blinded long-term conversation study comparing large language model (LLM) scam agents with human operators to investigate AI’s role in romance scams.

According to the researchers, romance scams, also known as pig-butchering schemes, build deep emotional trust with victims over weeks or months before scammers move to extract funds. The schemes typically play out in three stages.

    “Scammers find vulnerable individuals through mass outreach (Hook), then cultivate trust and emotional intimacy with victims, often posing as romantic or platonic partners (Line), before steering them toward fraudulent cryptocurrency platforms (Sinker). Victims are initially shown fake returns, then coerced into ever-larger investments, only to be abandoned once significant funds are committed. The results are devastating: severe financial loss, lasting emotional trauma, and a trail of shattered lives.”

    Source: Love, Lies, and Language Models

In one experiment, the researchers ran a seven-day controlled conversation study of human-LLM interactions, telling 22 participants that they would be speaking with two human operators when, in reality, one was a human and the other was an LLM agent.

The results showed a stark gap: AI-driven interactions achieved a 46% compliance rate, compared with just 18% for human-led attempts. The study attributes the difference to AI’s ability to consistently apply psychologically effective language, maintain emotional neutrality and adapt responses without fatigue or hesitation.

    “LLMs do not possess genuine emotions or consciousness. However, through training on internet-scale corpora containing fiction, dialogues, and supportive exchanges and subsequent alignment with human conversational norms, they learn statistical patterns of language associated with empathy, rapport, and trustworthiness. An LLM can recall earlier conversational details (within its context window), respond in ways that seem understanding, offer validation, and maintain a supportive persona over time. These behaviors can foster a sense of intimacy and trust.”
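
    The behavior the authors describe is the ordinary chat-agent pattern rather than anything exotic: a fixed persona sits in a system message, and the full conversation history is resubmitted on every turn, so the model can refer back to earlier details for as long as they fit in its context window. Below is a minimal, hypothetical sketch of that pattern; call_llm, PERSONA, and ChatAgent are illustrative names, not code or prompts from the study.

```python
# Minimal sketch (not from the paper) of how an LLM chat agent appears to
# "remember" and to hold a consistent persona: the persona is a system
# message, and every prior turn is re-sent on each call, bounded only by
# the model's context window.
from typing import Dict, List

PERSONA = (
    "You are a warm, attentive conversation partner. "
    "Remember personal details the user shares and refer back to them."
)


def call_llm(messages: List[Dict[str, str]]) -> str:
    """Hypothetical stand-in for a real chat-completion API call.

    Here it just returns a canned string so the sketch runs; in practice
    this would send `messages` (persona + full history) to an LLM service.
    """
    return f"(model reply to: {messages[-1]['content']!r})"


class ChatAgent:
    def __init__(self) -> None:
        # The growing message list *is* the agent's memory: everything in it
        # is re-submitted on every turn, up to the context-window limit.
        self.messages: List[Dict[str, str]] = [
            {"role": "system", "content": PERSONA}
        ]

    def reply(self, user_text: str) -> str:
        self.messages.append({"role": "user", "content": user_text})
        answer = call_llm(self.messages)  # persona + full history each call
        self.messages.append({"role": "assistant", "content": answer})
        return answer


if __name__ == "__main__":
    agent = ChatAgent()
    print(agent.reply("I just moved and don't know anyone here yet."))
    print(agent.reply("Do you remember what I told you earlier?"))
    # Both turns, plus the persona, are present in agent.messages on every call.
```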

The researchers conclude that romance scams are poised for a major shift, because their text-based conversations make them highly susceptible to LLM-driven automation. They argue the results show an urgent need for early behavioral detection, stronger AI transparency requirements, and policy responses that treat LLM-enabled fraud as both a cybersecurity and a human rights issue.
