Is Privacy Enforcement Impending for Generative Artificial Intelligence Technologies?

Rothwell, Figg, Ernst & Manbeck, P.C.

Just last week, researchers at Robust Intelligence manipulated NVIDIA’s artificial intelligence software, the “NeMo Framework,” into ignoring its safety restraints and revealing private information. According to reports, it took the Robust Intelligence researchers only hours to get the NeMo Framework to release personally identifiable information from a database.[1] After these vulnerabilities were exposed, NVIDIA stated that it had fixed one of the root causes of the flaw. The issues with NVIDIA’s AI software follow a significant data security incident involving ChatGPT in March 2023. After that incident, OpenAI published a report explaining that a bug had caused ChatGPT to expose users’ chat queries and personal information.[2] Like NVIDIA, OpenAI patched the bug once it was discovered. Relatedly, in April 2023, OpenAI announced a “bug bounty” program inviting “ethical hackers” to report vulnerabilities, bugs, or security flaws discovered in its systems.

These data security incidents occur against the backdrop of what has been termed an “AI Arms Race.”[3] This “race” kicked off at the beginning of the year, when the popularity of ChatGPT spurred an influx of consumer-facing artificial intelligence products. Popular examples of these emerging AI products include Microsoft’s Bing chatbot and Google’s Bard chatbot. The urgency to release these products to the public is exemplified by Microsoft’s CEO, Satya Nadella, who in February 2023 declared that “[a] race starts today. . . We’re going to move, and move fast.”[3] In light of this race to get artificial intelligence products to market, one could argue that technology companies have opted for a “release first, fix later” approach to data security.

Of course, bugs are inevitable in complex software, and “bug bounty” programs are nothing new. Many prominent technology companies run these types of programs, and the FTC often requires entities to establish similar programs for fixing known security vulnerabilities. But are these programs sufficient to shield AI companies from liability for data security incidents? The ease with which hackers can exploit the vulnerabilities in these artificial intelligence products, and the speed at which the products hit the market, raise the question of whether the NVIDIA and OpenAI data leaks were preventable at the outset. Indeed, AI developers know that today’s AI products pose consumer privacy risks. This reality is underscored by testimony from Gary Marcus, a leader in the field, who stated that existing AI systems do not adequately protect our privacy.[4] Given how widespread and openly acknowledged these security shortcomings are, companies could be held liable for the vulnerabilities in their AI products regardless of how they respond after those vulnerabilities are exploited.

In the past, companies have faced enforcement actions based at least in part on their failure to take reasonable measures to prevent security vulnerabilities. Enforcement on these grounds is often brought by the FTC. For example, in a 2023 complaint against Ring, the FTC admonished Ring’s “lax attitude toward privacy and security.” And in a 2013 enforcement action, the FTC alleged that HTC America “introduced numerous security vulnerabilities … which, if exploited, provide third-party applications with unauthorized access to sensitive information and sensitive device functionality.”

Enforcement actions on similar grounds have already been suggested in connection with today’s artificial intelligence products. In his April 2023 “Early Thoughts on Generative AI” remarks before the International Association of Privacy Professionals, FTC Commissioner Alvaro M. Bedoya warned that “[the FTC has] frequently brought actions against companies for the failure to take reasonable measures to prevent reasonably foreseeable risks.”[5] This warning echoes a recent FTC blog post on deceptive AI urging companies to “take all reasonable precautions before [the generative AI product] hits the market.”[6] While not directly addressing data privacy concerns, the same post states that “deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal.”[6] And Commissioner Bedoya’s May 31, 2023 statement on the FTC’s recent settlement with Amazon, emphasizing that “[m]achine learning is no excuse to break the law,” sends an unequivocal message to technology companies.[7] The FTC’s public statements about how AI companies should safeguard consumers from the potential harms of AI products may be a harbinger of privacy enforcement actions to come.

While there has yet to be a privacy enforcement action against a technology company over its AI product, it is likely only a matter of time. Indeed, in March 2023, the Center for AI and Digital Policy filed a complaint with the FTC asking it to investigate OpenAI, alleging that the GPT-4 product poses a privacy and public safety risk.[8] Amid calls for regulation from industry leaders, the data security practices (or lack thereof) of the developers of these technologies will only face greater scrutiny going forward.

[1] https://finance.yahoo.com/news/privacy-breach-risk-nvidias-ai-123408857.html

[2] https://openai.com/blog/march-20-chatgpt-outage

[3] https://time.com/6255952/ai-impact-chatgpt-microsoft-google/

[4] https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Testimony%20-%20Marcus.pdf

[5] https://www.ftc.gov/system/files/ftc_gov/pdf/Early-Thoughts-on-Generative-AI-FINAL-WITH-IMAGES.pdf

[6] https://www.ftc.gov/business-guidance/blog/2023/03/chatbots-deepfakes-voice-clones-ai-deception-sale

[7] https://www.ftc.gov/system/files/ftc_gov/pdf/Bedoya-Statement-on-Alexa-Joined-by-LK-and-RKS-Final-1233pm.pdf

[8] https://www.caidp.org/cases/openai/

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Rothwell, Figg, Ernst & Manbeck, P.C. | Attorney Advertising
