Artificial intelligence is eroding one of cybersecurity’s most basic defenses: the ability to tell what’s real from what’s fake.
In a new global survey commissioned by Yubico, researchers found that people routinely misidentify AI-generated messages, exposing companies and consumers to a surge of targeted fraud.
As the report puts it: “One of the key threats AI poses in the realm of cybersecurity is its uncanny ability to mimic human communication patterns.”
The research, conducted by Talker Research between August 15 and August 27, 2025, polled 18,000 employed adults across nine countries: the United States, the United Kingdom, Germany, France, Japan, India, Australia, Singapore, and Sweden.
The data shows the fragility of human judgment when faced with an AI-generated message.
“54% of respondents either misidentified or could not identify an AI-generated message, with just 46% correctly labeling it as AI.”
The finding highlights how convincing synthetic content has become and how easily phishing campaigns can slip past traditional filters.
Even when confronted with human writing, most adults struggled to discern the difference.
“When presented with sample messages, just 30% of respondents correctly identified a human-written message, with 70% incorrectly attributing it to AI or were unsure.”
But younger respondents showed a sharper eye, suggesting familiarity with the technology may provide a small edge.
The study also highlights a key reason why respondents failed to distinguish between real and fake messages.
“34% of respondents said the reason they fell for the ruse was that it appeared to come from a trusted source. With AI’s ability to cater to specific individuals and draw from vast amounts of data, this finding shows how AI is allowing these types of threats to grow and become more successful.”
The broader survey shows rising anxiety as well. Yubico found that 76% of those surveyed are now concerned about AI impacting the security of their personal or professional accounts, up from 58% a year earlier.
Ronnie Manning, Yubico’s chief brand advocate, says:
“AI is actively rewriting the rules of cybercrime, making it easier for bad actors to launch sophisticated and highly targeted attacks. Organizations that utilize basic authentication methods like passwords and SMS are going to find themselves behind the curve. It’s clear that the time is now for companies to modernize and adopt security methods that are proven against today’s threats.”