The FBI says malicious actors have been using AI-generated text and voice impersonation to target close contacts of senior US government officials, creating a growing trust and security risk at the highest levels of government.
In a new alert, the FBI says the activity dates back to at least 2023, with attackers impersonating senior US state government officials, White House and Cabinet-level leaders, and members of Congress.
According to the FBI, the campaign relies on AI-powered smishing and vishing techniques to establish credibility with victims who are often family members or close acquaintances of government officials.
“Since at least 2023, malicious actors have sent text messages and AI-generated voice messages — techniques known as smishing and vishing, respectively — that claim to come from a senior US official to establish rapport with targeted individuals.”
The FBI says attackers typically begin with an SMS message and quickly attempt to move the conversation to encrypted messaging platforms such as Signal, Telegram or WhatsApp. Once communication is established, the actors exploit the perceived authority of the impersonated official to manipulate victims, often tailoring conversations around topics the target knows well.
The agency said attackers use these conversations to propose high-level meetings, suggest political or corporate appointments and discuss sensitive policy issues.
“Actors continue to engage the victim in any number of ways, including discussions on current events or bilateral relations, proposing a meeting with the president of the United States or other high-ranking officials, or noting the victim is being considered for a nomination to a company’s board of directors.”
In more severe cases, the FBI says victims were pressured to take concrete actions that exposed sensitive data or financial assets.
“Actors have requested victims provide authentication codes, supply personally identifiable information and copies of sensitive personal documents such as a passport, wire funds to an overseas financial institution, or introduce the actor to a known associate.”
The FBI warns that AI-generated content has advanced to a point where impersonation is increasingly difficult to detect, even for experienced professionals. The agency urges heightened verification measures, including independently confirming identities, scrutinizing contact details and being alert to subtle signs of AI-generated images, video or voice.
“When in doubt about the authenticity of someone wishing to communicate with you, contact your relevant security officials or the FBI for help.”