On December 19, 2025, the Federal Bureau of Investigation (FBI) published an Alert warning the public that, based on data going back as far as 2023, "malicious actors have impersonated senior U.S. state government, White House, and Cabinet level officials, as well as members of Congress to target individuals, including officials' family members and personal acquaintances."
The malicious actors send AI-generated voice messages in vishing campaigns and AI-generated text messages in smishing campaigns that impersonate officials. After establishing communication with the victim on an encrypted messaging application, the actors may:
- Discuss current events;
- Ask about U.S. policy;
- Propose a meeting with high-ranking officials;
- Request copies of personal documents;
- Request a wire transfer to an overseas financial institution;
- Announce the victim’s appointment to a company’s board of directors;
- Request an authentication code that allows the threat actor to sync their device with the victim’s contact list; and
- Request the victim introduce the threat actor to a known associate.
The threat actor starts the communication with a text message and then asks the victim to move to an encrypted platform such as Signal, Telegram, or WhatsApp.
The Alert provides recommendations for spotting a fake message, including:
- Verify the identity of the person calling you or sending text or voice messages. Before responding, research the originating number, organization, and/or person purporting to contact you. Then independently identify a phone number for the person and call to verify their authenticity.
- Carefully examine the email address, messaging contact information, including phone numbers, URLs, and spelling used in any correspondence or communications. Scammers often use slight differences to deceive you and gain your trust. For instance, actors can incorporate publicly available photographs in text messages, use minor alterations in names and contact information, or use AI-generated voices to masquerade as a known contact.
- Look for subtle imperfections in images and videos, such as distorted hands or feet, unrealistic facial features, indistinct or irregular faces, unrealistic accessories such as glasses or jewelry, inaccurate shadows, watermarks, voice call lag time, voice matching, and unnatural movements.
- Listen closely to the tone and word choice to distinguish between a legitimate phone call or voice message from a known contact versus AI-generated voice cloning, as they can sound nearly identical.
- AI-generated content has advanced to the point that it is often difficult to identify. When in doubt about the authenticity of someone wishing to communicate with you, contact your relevant security officials or the FBI for help.
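The "slight differences" in contact details that the Alert warns about can be made concrete with a short sketch. The example below (not part of the Alert; the trusted addresses and the similarity threshold are hypothetical) flags an email address that is not on a known-contact list but closely resembles one, the pattern behind a one-letter-off scam address.

```python
# Illustrative sketch only: flag contact addresses that differ slightly
# from a trusted address -- the kind of near-miss scammers rely on.
import difflib
import unicodedata

# Hypothetical list of known, verified contacts.
TRUSTED = {"chief.of.staff@example.gov", "press.office@example.gov"}

def normalize(address: str) -> str:
    """Lowercase and strip non-ASCII lookalike characters so visually
    similar addresses compare as near-misses rather than exact matches."""
    return unicodedata.normalize("NFKD", address).encode("ascii", "ignore").decode().lower()

def is_suspicious(address: str, threshold: float = 0.9) -> bool:
    """True if the address is NOT a trusted contact but closely resembles one."""
    trusted_norm = {normalize(t) for t in TRUSTED}
    norm = normalize(address)
    if norm in trusted_norm:
        return False  # exact match with a known contact
    close = difflib.get_close_matches(norm, list(trusted_norm), n=1, cutoff=threshold)
    return bool(close)  # near-miss: looks like a trusted address but isn't

print(is_suspicious("chief.of.staft@example.gov"))  # one-letter swap -> True
print(is_suspicious("random@stranger.net"))         # unrelated address -> False
```

A real deployment would also compare display names and phone numbers, but the point matches the Alert's guidance: a message that almost matches a known contact deserves more scrutiny, not less.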