Deepfakes: Navigating Data Privacy and Cybersecurity Risks

DRI

[author: Afam I. Okeke]

Since 2017, deepfakes have proliferated across the internet. Deepfakes (also styled as deep fakes) are synthetic media created with artificial intelligence (AI) technology that utilize machine learning algorithms, such as generative adversarial networks, to create audio, video, or images of real people saying or doing things they never said or did. “Deepfake,” Merriam-Webster (2021), https://www.merriam-webster.com/dictionary/deepfake. While deepfakes offer innumerable opportunities and benefits, their malicious use could cause a spectrum of serious harms, from attacks on an individual’s dignity to the compromise of an organization’s data and proprietary information. Accordingly, as deepfakes continue to improve and proliferate rapidly, individuals and businesses must be aware of the risks they present and how to mitigate them. This article explores the historical development of deepfake technology, scrutinizes its diverse risks to personal and organizational data privacy, and suggests pragmatic strategies for mitigating those risks.

The Rise and Expansion of Deepfakes

In the past, only highly skilled media editors and CGI experts could digitally manipulate videos to create movies or shows that appear authentic and realistic. Today, however, new applications and open-source code render deepfake technology widely accessible to the public. 10 Best Deepfake Apps and Websites You Can Try for Fun, Geeks for Geeks (2023), https://www.geeksforgeeks.org/10-best-deepfake-apps-and-websites-you-can-try-for-fun/ (last visited Aug. 16, 2023). Currently, as long as deepfake creators have a few images and sound clips of a person, they can create a deepfake of that person on a computer in the comfort of their homes. See Henry Ajder et al., The State of Deepfakes: Landscape, Threats, and Impact, Deeptrace (Sept. 2019). Moreover, publicly accessible generative AI programs like OpenAI’s ChatGPT and DALL-E now allow malicious users to produce convincingly deceptive deepfake content quickly and with less sophisticated technology. Shannon Bond, It Takes a Few Dollars and 8 Minutes to Create a Deepfake. And That’s Only the Start, NPR (2023), https://www.npr.org/2023/03/23/1165146797/it-takes-a-few-dollars-and-8-minutes-to-create-a-deepfake-and-thats-only-the-sta (last visited Aug. 16, 2023).

Another form of media manipulation that should not be confused with deepfakes is the “cheap fake” or “shallow fake,” an audiovisual manipulation created with cheaper, more accessible software. Britt Paris & Joan Donovan, Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence, Data & Society (2019). Cheap fakes can be rendered in Photoshop or simply by changing the speed of a video. These videos do not rely on machine learning, only simple techniques available in any video recording or editing software. Unlike cheap fakes, deepfakes utilize deep learning technology to create synthetic content.

According to data from the World Economic Forum (WEF), the number of deepfake videos online grew at a staggering annual rate of 900% between 2019 and 2020, underscoring the rapid expansion of this technology's reach. How Can We Combat the Worrying Rise in Deepfake Content?, World Economic Forum, https://www.weforum.org/agenda/2023/05/how-can-we-combat-the-worrying-rise-in-deepfake-content/ (last visited Aug. 16, 2023). In 2022, the WEF found that 66% of cybersecurity professionals had faced deepfake attacks in their organizations. Furthermore, according to a report by Europol, researchers estimate that as much as 90% of online content may be synthetically generated by 2026. Europol, Facing Reality? Law Enforcement and the Challenge of Deepfakes, an observatory report from the Europol Innovation Lab, Publications Office of the European Union, Luxembourg (2022). These developments emphasize the urgent need for strong legal frameworks to address the potential threats posed by deepfakes.

The United States was the first country to promulgate federal legislation aimed at addressing deepfakes. In December 2019, then-President Donald Trump signed deepfake legislation into law under the National Defense Authorization Act, which requires, in pertinent part, a comprehensive report on the foreign weaponization of deepfakes. Currently, only a handful of states have enacted deepfake legislation. California and New York have introduced deepfake legislation that grants individuals affected by deepfakes a private right of action, while Virginia has amended its penal laws to criminalize the sharing of deepfakes without consent and with malicious intent.

Deepfake Technology

The creation of deepfakes is as compelling as their dissemination. Deepfakes depend on two advancements in machine learning: (1) deep neural networks (also known as “deep learning”) and (2) generative adversarial networks (GANs). See Chris Nicholson, Artificial Intelligence (AI) vs. Machine Learning vs. Deep Learning, Pathmind, https://pathmind.com/wiki/ai-vs-machine-learning-vs-deep-learning; Janelle Shane, Neural Networks, Explained, Physics World (July 9, 2018), https://physicsworld.com/a/neural-networks-explained/; Joseph Rocca, Understanding Generative Adversarial Networks (GANs), Medium (Aug. 25, 2019), https://towardsdatascience.com/understanding-generative-adversarial-networks-gans-cd6e4651a29. Neural networks are a form of machine learning algorithm modeled loosely after the neural systems of the human brain. Thus, the more images, audio, or video of an individual that are fed through the neural network, the more accurately and quickly it can reproduce realistic representations of that individual.

Deep neural networks are only half of the equation; the other half of the technology underlying deepfakes is GANs. GANs adopt a game-theoretic approach in which two neural networks compete while training on a mix of real and fake images. See A Game-Theoretic Approach for Generative Adversarial Networks (Mar. 30, 2020), https://arxiv.org/pdf/2003.13637.pdf (explaining the game-theoretic approach). The first neural network, known as the generator, creates new images by attempting to replicate the dataset it is fed. The second neural network, known as the discriminator, attempts to identify which images in the dataset are fake. Each time the discriminator succeeds, the generator learns how the fake was detected and corrects the error it made.
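The generator-versus-discriminator loop described above can be illustrated with a toy example. The sketch below is our own illustration, not drawn from any cited source: a one-dimensional GAN in plain NumPy, where the "real" data are samples from a normal distribution centered at 3, the generator is a simple affine map, and the discriminator is a logistic classifier. Real deepfake systems use deep convolutional networks, but the adversarial dynamic is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D GAN: the generator learns to mimic samples from N(3, 1).
a, b = 1.0, 0.0          # generator G(z) = a*z + b
w, c = 0.1, 0.0          # discriminator D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(3000):
    # Discriminator update: learn to tell real samples from generated ones,
    # minimizing -log D(real) - log(1 - D(fake)).
    real = rng.normal(3.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    gw = np.mean(-(1 - d_real) * real + d_fake * fake)
    gc = np.mean(-(1 - d_real) + d_fake)
    w -= lr * gw
    c -= lr * gc

    # Generator update: fool the freshly updated discriminator,
    # minimizing the non-saturating loss -log D(fake).
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    ga = np.mean(-(1 - d_fake) * w * z)
    gb = np.mean(-(1 - d_fake) * w)
    a -= lr * ga
    b -= lr * gb

samples = a * rng.normal(0.0, 1.0, 10_000) + b
print(samples.mean())  # should drift toward 3, the mean of the real data
```

Each discriminator update sharpens the boundary between real and fake, and each generator update moves the fakes across that boundary, mirroring the correction cycle the article describes.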

GANs are a major reason it is difficult to differentiate a deepfake from an authentic video, image, or audio recording. With GANs, the moment a deepfake is detected, a correction can be made, rendering the deepfake harder to detect the next time it is trained. Danielle K. Citron & Robert Chesney, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 California Law Review 1753 (2019). Whereas each detection improves the deepfake, each new method of detection works only once. For example, if a person in a deepfake video exhibited irregular blinking patterns or strange lip movements, the deepfake's creators can run detection algorithms against it and create new videos with more realistic blinking and lip movements. Matthew Hutson, Detection Stays One Step Ahead of Deepfakes, for Now, IEEE Spectrum (2023), https://spectrum.ieee.org/deepfake (last visited Aug. 16, 2023). Thus, detection algorithms play a continual game of cat and mouse, while deepfake creators inch closer to realism with each correction they make.

Data Privacy & Cybersecurity Risks

Deepfakes present both opportunities and risks for organizations. While they can be used for legitimate business purposes, for example, translating a video of a person speaking one language into another language, organizations must be aware of the inherent dangers associated with deepfakes. Legal and ethical considerations regarding deepfakes are paramount, particularly in intellectual property, data privacy, and cybersecurity law. Although intellectual property, especially the right of publicity, is an important legal consideration, data privacy and cybersecurity are the more pressing concerns. The creation of convincing deepfakes often relies on vast amounts of personal data; implementing strict data protection measures is therefore essential to secure that data and reduce its availability for malicious purposes.

The emergence of AI-generated identity fraud, exemplified by deepfakes, has become a pressing concern. According to a survey conducted by Regula, a global developer of identity verification solutions and forensic devices, 37% of organizations have experienced deepfake voice fraud, while 29% have fallen victim to deepfake videos. A Third of Businesses Hit by Deepfake Fraud: Regula Survey, Regula, https://regulaforensics.com/news/a-third-of-businesses-hit-by-deepfake-fraud/ (last visited Aug. 16, 2023). As such, safeguarding personal data is imperative to prevent and minimize the risk of deepfake exploitation.

Data Privacy Breaches and Unauthorized Use

One of the most significant concerns arising from deepfakes is the potential for data privacy breaches and unauthorized use. Malicious actors can use deepfakes to impersonate employees, clients, or business partners, posing serious privacy risks when personal information, such as biometric data and voice recordings, is exploited without consent or authorization. Deepfake-based social engineering attacks or fraudulent communications can expose sensitive data, like financial information and trade secrets, with severe consequences for businesses. Although U.S. state laws, such as the California Consumer Privacy Act (CCPA), the New York SHIELD Act, and the Illinois Biometric Information Privacy Act (BIPA), aim to protect residents' personal information, deepfake content poses unique challenges for victims pursuing privacy claims. For example, malicious actors are often difficult to trace because deepfakes can be created and disseminated anonymously. The deceptive nature of deepfakes can further complicate the process of proving a data privacy breach.

Targeted Cyberattacks

Targeted cyberattacks have also been empowered by deepfake technology. By skillfully manipulating audio and video, attackers can deceive targets into revealing sensitive information, granting unauthorized access, or taking harmful actions. These attacks can be devastatingly effective. In 2021, cybercriminals used AI voice cloning to impersonate the CEO of a large company and tricked the organization's bank manager into transferring $35 million to another account to complete a purported acquisition. Thomas Brewster, Fraudsters Cloned Company Director's Voice in $35 Million Heist, Police Find, Forbes (Oct. 14, 2021), https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/?sh=3bcd55a47559 (last visited Aug. 16, 2023). This is a newly defined cyberattack vector known as business identity compromise (BIC). By leveraging deepfake technology, attackers can create synthetic corporate personas or convincingly imitate existing employees, heightening the potential for sophisticated cybersecurity attacks. Cybercriminals can exploit BIC to unlawfully obtain patents and trade secrets, compromise relationships with stakeholders, and even manipulate capital markets.

Phishing Campaigns

The integration of deepfakes into phishing campaigns has further escalated the sophistication of cyberattacks. Attackers can craft convincing messages impersonating executives or colleagues, enticing victims to disclose sensitive information, login credentials, or financial data. Cybercriminals employ audio deepfakes of trusted figures within the organization as a starting point for these attacks. They leverage platforms like web conferencing or voicemail to manipulate employees, using social engineering tactics like business email compromise (BEC) to coerce them into disclosing sensitive information or making unauthorized financial transactions. Business Email Compromise, FBI (2020), https://www.fbi.gov/how-we-can-help-you/safety-resources/scams-and-safety/common-scams-and-crimes/business-email-compromise (last visited Aug. 16, 2023). In 2019, a fraudster used AI to impersonate the chief executive of a UK-based firm's German parent company and demanded an immediate transfer of $243,000 to a Hungarian supplier. Jesse Damiani, A Voice Deepfake Was Used to Scam a CEO Out of $243,000, Forbes (Sept. 3, 2019), https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/?sh=f5e2aad22416 (last visited Aug. 16, 2023).

Reputation and Brand Image

Lastly, beyond data breaches and financial losses, malicious deepfakes can severely damage a company's reputation and brand image. By impersonating key executives or influential figures associated with the business, these deepfakes can spread false information, eroding trust and credibility among customers, partners, and stakeholders, and harming both the company's reputation and the executive's personal and public image. Furthermore, repairing the damage caused by deepfake-driven misinformation can prove a challenging and time-consuming task for businesses.

Risk Mitigation and Prevention Strategies

Individuals and organizations should take proper steps to safeguard their data to mitigate the risks posed by malicious deepfake attacks.

Individuals

To protect themselves from deepfake exploitation, individuals can take the following proactive measures:

  • Avoid sharing personal information online, especially on public platforms like Facebook or X (formerly known as Twitter).
  • When consuming information on the internet, utilize the SIFT method, which encourages individuals to Stop, Investigate the source of the information, Find trusted coverage, and Trace the original content to avoid being deceived by manipulated media.
  • Use strong, unique passwords for all online accounts and enable multi-factor authentication to further protect them. If you have a large number of online accounts, a password manager can help secure and organize your passwords and improve your digital security.
  • Stay vigilant for suspicious content, such as videos or text/audio messages that seem manipulated or out of character for the sender. It is important to verify the person’s identity through alternative independent sources.
  • Stay informed about deepfake technology and its implications to recognize potential risks.

Organizations

To protect their data and proprietary information from deepfake attacks, organizations can utilize the following prevention and mitigation strategies:

  • Establish robust cybersecurity measures, privacy policies, and procedures to securely manage and protect employees' personal information.
  • Limit internal access to sensitive data and implement strong authentication mechanisms, like multi-factor authentication or blockchain technology.
  • Reinforce risk expectations and clarify protocols for verifying requests, especially when it comes to payment requests from senior executives, to prevent fraudulent financial transactions.
  • Build comprehensive incident response plans and business continuity plans to mitigate potential damages in the event of a data breach.
  • Conduct regular cybersecurity training and awareness programs, to educate employees about deepfake-related risks and how to identify and report potential threats.
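Several of the measures above recommend multi-factor authentication. As a concrete illustration of one common second factor, the sketch below generates a time-based one-time password (TOTP) of the kind produced by authenticator apps, following RFC 6238. It is a minimal example using only the Python standard library, not a production authentication system, and the sample secret is the RFC's published test key.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 time-based one-time password using HMAC-SHA1."""
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of 30-second steps since the Unix epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time = 59 s.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59, digits=8))  # prints "94287082" per the RFC
```

Because the code changes every 30 seconds and is derived from a shared secret, a deepfaked voice or video alone cannot satisfy this factor, which is why verification through an independent channel is repeatedly emphasized above.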

Conclusion

Amid the rapid progress of deepfake technology, the gravity of its cybersecurity and data privacy implications grows increasingly apparent. To effectively safeguard against the emerging threats posed by deepfakes, a collaborative effort among individuals, organizations, and governments is imperative. Central to this endeavor is a heightened emphasis on data privacy, ensuring that sensitive information remains protected from manipulation and unauthorized usage. Concurrently, bolstering cybersecurity defenses, particularly in areas susceptible to deepfake attacks, is essential to thwart potential breaches and malicious exploitation. Furthermore, fostering widespread awareness and knowledge dissemination regarding the intricacies of AI-generated content is vital to navigating this transformative era and empowering individuals to identify and confront these threats confidently. In a world where seeing is no longer believing, we must work together to protect the integrity of our increasingly digitized world.
