Artificial intelligence (AI)-powered impersonation scams are already inflicting staggering losses, with fake CEOs and executives tricking employees into wiring millions of dollars or sharing sensitive data.
Losses from executive deepfakes topped $200 million in the first quarter alone, according to James Turgal, vice president of global cyber advisory at Optiv, the Wall Street Journal reports.
Executive deepfakes use AI models trained on a leader’s voice and video, often pulled from interviews, earnings calls, or YouTube clips, to generate convincing live calls or recorded messages. Fraudsters pose as CEOs or CFOs, instructing staff to transfer money, share credentials, or click on malicious links.
Brian Long, CEO of the AI cybersecurity firm Adaptive Security, warns that the threat has drastically grown in just the last 12 months.
“A year ago, maybe one in 10 security executives I spoke to had seen one. Now it’s closer to five in 10.”
Margaret Cunningham, director of security and AI strategy at Darktrace, notes that the scams succeed by exploiting human trust.
“These attacks work because they simply target how humans operate. Once trust is established, even briefly, attackers can step into an insider role and request actions that feel legitimate.”
Cases have already hit companies including automaker Ferrari, cloud-security firm Wiz, and advertising agency WPP. In one incident, an employee of a U.K. engineering firm transferred $25 million after a video call with AI-generated executives.
Regulators, including the U.S. Treasury's FinCEN, have issued alerts as the fraud wave escalates, warning banks and corporations that AI-driven impersonations are no longer a futuristic threat but a present-day crisis.