AI-Driven "Deepfakes" Expose Holes in Corporate Compliance Programs: The Road to Recovery

King & Spalding

Artificial intelligence (“AI”) has made astounding advancements in recent years. Driven by ever-greater computing power, AI has the potential to change nearly every industry: it is touted as a way to reduce disease outbreaks, detect never-before-seen objects, improve the navigational accuracy of autonomous vehicles, and sharpen voice and facial recognition. Like most technological advancements, these breakthroughs promise great benefits. However, advancements in AI also present our clients with unprecedented risks in corporate compliance and risk management that demand immediate attention.

Particularly concerning are “deepfakes.” Initially used to create phony photographs of celebrities in compromising situations, the technology has graduated to realistic-looking and realistic-sounding video and audio generated by powerful computers and machine-learning software. “Deepfakes” can also be used to commit fraud and corporate theft.

Cybersecurity company Symantec[i] has reported that “deepfake” audio was recently used in three separate corporate attacks in which criminals played phony audio files during phone calls to trigger transfers of funds: essentially a new variation on the “business email compromise” schemes that remain rampant among cybercriminals worldwide. The criminals are alleged to have culled voice samples from hours of the victim companies’ CEOs speaking in TED Talks, earnings calls, and YouTube videos. “Deepfake” technology thus has the potential to cause immediate financial losses, adversely affect a company’s financial performance, and give rise to many types of litigation, including lawsuits under various securities laws.

A recent real-world example highlights the danger. According to a report in the Wall Street Journal[ii], the CEO of a U.K.-based energy firm was duped into wiring $243,000 to criminals who used AI to impersonate the voice of his boss, the CEO of the firm’s German parent. In what is thought to be a first-of-its-kind crime, the fake caller reproduced the German CEO’s voice down to his distinctive accent and intonation. Because the U.K. CEO knew his boss’s voice well, he was convinced the request was legitimate and made the payment. The impersonation was apparently convincing enough that the criminals relied on it across three separate phone calls, although the third call made the U.K. CEO suspicious enough that all but one of the fraudulent transactions were stopped.

That technology exists to allow criminals to pull off such a brazen theft is not shocking. Over the past several decades, cybercrime has tracked the latest technological innovations, from blockchain and cryptocurrency to AI-enabled password cracking. Yet relatively few companies have adopted the compliance programs and technologies needed to mitigate these risks and keep pace with increasingly sophisticated criminal and nation-state actors.

Many companies have not invested in, or are simply unaware of, the latest cybersecurity tools that can detect “deepfakes,” particularly live spoofed voice and video. Indeed, because detection technology lags behind the spoofing technology itself, videoconferencing and audio-verification systems are all the more likely to remain susceptible to similar spoofs and thefts.

Many banks and tax professionals use voice recognition, or voice biometrics, to confirm the identity of their customers. Countless companies allow audio approvals, sometimes backed by voice biometrics, for payments under certain thresholds. The assumption is that voice biometrics make identification faster and more secure. But with AI-aided “deepfake” technology now able to spoof voice biometrics, one should question whether this form of identification remains secure, let alone a best practice.
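For readers curious about the mechanics, the following is a purely illustrative sketch, not any vendor’s actual product: many voice-biometric checks can be thought of as comparing a numerical “voiceprint” of the caller against one enrolled in advance, accepting the caller when the two are sufficiently similar. The embeddings, threshold, and function names below are hypothetical stand-ins, but they show why such a check measures acoustic similarity rather than whether a live human produced the audio, which is precisely the gap a convincing “deepfake” exploits.

```python
# Hypothetical illustration only: a simplified voice-biometric check.
# Real systems use proprietary models; these embeddings are stand-ins.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Return the cosine similarity between two speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_speaker(enrolled: np.ndarray, sample: np.ndarray,
                   threshold: float = 0.85) -> bool:
    """Accept the caller if the sample is close enough to the enrolled
    voiceprint. Note the check measures acoustic similarity only; a
    sufficiently faithful "deepfake" clears the same threshold."""
    return cosine_similarity(enrolled, sample) >= threshold

# Simulated data: the spoofed sample closely tracks the real voiceprint.
rng = np.random.default_rng(0)
voiceprint = rng.normal(size=128)                       # enrolled voice
spoofed = voiceprint + rng.normal(scale=0.1, size=128)  # AI-generated mimicry
print(verify_speaker(voiceprint, spoofed))              # True: the spoof passes
```

Because similarity alone cannot distinguish a cloned voice from a real one, controls such as liveness detection, out-of-band callback verification, and dual-approval requirements for payments are among the measures that can help close this gap.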

Advancements in AI thus necessitate immediate action to build or update corporate compliance programs so they can deal effectively with the impending threats these technologies pose. Traditional approval methods may no longer be sufficient to guard against sophisticated AI attacks using “deepfake” technologies. We recommend seeking counsel experienced with these issues to ensure that your compliance program is up to the task, and to the technology.

Corporate compliance is just one area where “deepfake” technology can adversely impact your business. In forthcoming alerts, we will discuss other potential adverse consequences our clients may experience as a result of “deepfake” technologies.

_________________

[i] https://www.fastcompany.com/90379001/criminals-are-using-deepfakes-to-impersonate-ceos

[ii] https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© King & Spalding | Attorney Advertising

