FBI Warns of Increase in AI-Generated Impersonations of Senior U.S. Officials

Robinson+Cole Data Privacy + Security Insider

On December 19, 2025, the Federal Bureau of Investigation (FBI) published an Alert warning the public that, based on data dating back to 2023, “malicious actors have impersonated senior U.S. state government, White House, and Cabinet level officials, as well as members of Congress to target individuals, including officials’ family members and personal acquaintances.”

The malicious actors send AI-generated voice messages in vishing campaigns and AI-generated text messages in smishing campaigns that impersonate officials. Once they establish communication with the victim on an encrypted messaging application, they:

  • Discuss current events;
  • Ask about U.S. policy;
  • Propose a meeting with high-ranking officials;
  • Request copies of personal documents;
  • Request a wire transfer to an overseas financial institution;
  • Note appointment of the victim to a company’s board of directors;
  • Request an authentication code that allows the threat actor to sync their device with the victim’s contact list; and
  • Request the victim introduce the threat actor to a known associate.

The threat actor starts the communication with a text message and then asks the victim to move to an encrypted platform such as Signal, Telegram, or WhatsApp.

The Alert provides recommendations for spotting a fake message, including:

  • Verify the identity of the person calling you or sending text or voice messages. Before responding, research the originating number, organization, and/or person purporting to contact you. Then independently identify a phone number for the person and call to verify their authenticity.
  • Carefully examine the email address, messaging contact information, including phone numbers, URLs, and spelling used in any correspondence or communications. Scammers often use slight differences to deceive you and gain your trust. For instance, actors can incorporate publicly available photographs in text messages, use minor alterations in names and contact information, or use AI-generated voices to masquerade as a known contact.
  • Look for subtle imperfections in images and videos, such as distorted hands or feet, unrealistic facial features, indistinct or irregular faces, unrealistic accessories such as glasses or jewelry, inaccurate shadows, watermarks, voice call lag time, voice matching, and unnatural movements.
  • Listen closely to the tone and word choice to distinguish between a legitimate phone call or voice message from a known contact and AI-generated voice cloning, as they can sound nearly identical.
  • AI-generated content has advanced to the point that it is often difficult to identify. When in doubt about the authenticity of someone wishing to communicate with you, contact your relevant security officials or the FBI for help.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Robinson+Cole Data Privacy + Security Insider
