More Critical Than Ever: Cyber Risk "Tabletop Exercises" in the AI-Infused Workplace

Epstein Becker & Green
Since the dawn of digitalization, the collection and retention of personal and other confidential business data by employers has created security and privacy challenges, both by amassing a treasure trove of data for bad actors (or unwitting or unauthorized employees) and by drawing a roadmap for those seeking to breach the system. Adding artificial intelligence (AI) into the mix creates further areas of concern. A recent survey by the Society for Human Resource Management of more than 2,000 human resources professionals indicates that a majority of respondents use AI in virtually every stage of the employment lifecycle, from recruitment to termination. Of additional concern, third-party contractors servicing an organization’s supply chain may themselves use AI in one form or another. Among organizations using AI to support HR-related activities, two in five respondents expressed concern about the security and privacy of data used by AI tools.

AI is trained through the assimilation of massive datasets, which often contain personal and sensitive information. Prompts to ChatGPT, Gemini, Copilot, and other generative AI tools can also inadvertently contain confidential or proprietary information, which in extreme cases may be exploited by bad actors who inject malware into the tools or poison the underlying data on which the algorithmic model is trained. Illustrative is a complaint filed in February 2024 in U.S. District Court in Connecticut, West Technology Group v. Sundstrom, alleging that a former employee used the AI-powered transcription and recording tool Otter to acquire and abscond with trade secrets and other confidential and proprietary information of his employer. Indeed, poorly implemented or inadequately secured deployments of generative AI systems can create vulnerabilities, allowing unauthorized access to or manipulation of the models and compromising overall cybersecurity in the workplace.
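
Given these risks, some organizations screen prompt text for sensitive patterns before it ever leaves the network. The following is a minimal illustrative sketch in Python of such a data-loss-prevention gate; the patterns and the redact_prompt helper are hypothetical examples, not features of any tool named in this post, and a real deployment would rely on a vetted DLP product.

import re

# Hypothetical patterns an organization might flag before a prompt
# is sent to an external generative AI service. Illustrative only;
# production systems would use a vetted DLP product.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace sensitive substrings with labeled placeholders."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the claim for jane.doe@example.com, SSN 123-45-6789."
    print(redact_prompt(raw))
    # Summarize the claim for [REDACTED-EMAIL], SSN [REDACTED-SSN].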

2024 has already seen the release of new and more sophisticated AI tools, such as Sora (still in the testing stage, but expected to be unveiled to the public later this year), giving even less sophisticated users of technology the ability to create, within minutes, voice and video content that mimics others with alarming alacrity, looking and sounding like the “real thing.” An August 2023 New York Times article, “Voice Deepfakes Are Coming for Your Bank Balance,” describes AI-generated scams designed to trick bankers into moving money away from account holders and into the scammers’ pockets.

In February 2024, the Federal Communications Commission announced that it was banning the use of AI-generated voice cloning in robocalls, following an AI-generated call in January that imitated the voice of U.S. President Joseph Biden. Deepfake videos that mislead, misinform, or simply embarrass can also manipulate stock prices. Lyft’s stock reportedly jumped more than 60 percent in after-hours trading when a zero was added to a profitability measure, “a move likely exacerbated by trading algorithms that didn’t immediately catch that a five percentage point increase in margins might be a mistake.” AI is also increasingly used to create fabricated harassment scenarios, sexual and otherwise (see, e.g., an October 2023 report by the Internet Watch Foundation).

These concerns have garnered the attention of the National Institute of Standards and Technology (NIST) in its continuing efforts, spurred by President Biden’s 2023 Executive Order on AI, to support the development of trustworthy AI and alleviate the ever-increasing harms and dangers of its unsupervised use. In January 2024, NIST issued a compendium of the types of cyberattacks that manipulate the behavior of AI systems, along with mitigation strategies and their limitations. While it adds some arrows to the risk-management quiver, it is, by NIST’s own admission, no magic bullet. On February 7, 2024, U.S. Secretary of Commerce Gina Raimondo announced the leadership team of NIST’s U.S. AI Safety Institute, designed to create safe and trustworthy AI. To support the Institute, NIST has also created a U.S. AI Safety Institute Consortium of more than 200 organizations to develop science-based and empirically backed guidelines and standards for AI measurement and policy.[1]

On February 28, 2024, President Biden issued another Executive Order, designed to prevent access by “countries of concern” to Americans’ bulk sensitive personal data as well as U.S.-government-related data, directing U.S. government agencies to issue regulations to assist in that effort. Subject to public notice and comment, the regulations will prohibit or restrict Americans from engaging in certain transactions involving bulk sensitive personal or U.S.-government-related data that pose an “unacceptable risk to the national security of the United States.” The Department of Justice reported that, for purposes of this program, these countries would likely include China, Russia, Iran, North Korea, Cuba, and Venezuela. “Countries of concern can rely on advanced technologies, including artificial intelligence (AI), to analyze and manipulate bulk sensitive personal data to engage in espionage, influence, kinetic, or cyber operations or to identify other potential strategic advantages over the United States,” the Order states. “Countries of concern can also use access to bulk data sets to fuel the creation and refinement of AI and other advanced technologies, thereby improving their ability to exploit the underlying data and exacerbating the national security and foreign policy threats.”

“Tabletop Exercises”? What Are They? Does Our Organization Need Them? (Yes.)

In this era of persistent cyber threats, an organization will be secure only with the active participation of everyone. Each member of the group, from the newest employee to the chief executive, holds the power to harm or to help, to weaken or strengthen, the organization’s security posture.

--From NIST’s National Initiative for Cybersecurity Education (NICE) Working Group, Subgroup on Workforce Management, “Cybersecurity is Everyone’s Job.”

More than ever, employers should be reexamining their security protocols, not only in the face of continuing and costly cyberattacks but also in light of the continued deployment of AI in their workplaces and consumer interactions. As noted in a recently issued IBM report, the global average cost of a data breach in 2023 was $4.45 million, a 15 percent increase over three years. It may be time for organizations to revisit and reconstitute cybersecurity “tabletop exercises” in the context of AI, both to build understanding of these new and evolving risks and to drive the creation of protocols and policies governing the use and administration of AI tools.

Organizations and risk managers have long utilized “tabletop exercises” to educate employees and to develop a coordinated defense plan, usually around potentially catastrophic events such as a cybersecurity breach, by engaging in realistic simulations designed to assess (and ultimately improve) organizational crisis management and incident response capabilities. Corporate resiliency and recovery are key aspects of such undertakings. Moreover, “tabletop exercises” provide a forum for executives and employees to discuss and resolve real-world scenarios of cyber intrusion—whether by phishing, vishing, or smishing, to name a few—to determine their risk levels, undertake training, and establish resilient protocols to mitigate those risks.

A corporate board’s satisfaction of its oversight obligation under Delaware law, for example, and of other federal, state, and local regulatory requirements—including the U.S. Securities and Exchange Commission’s cybersecurity disclosure rules, under which registrants must disclose and describe, on new Item 1.05 of Form 8-K, any cybersecurity incident they determine to be “material”—could come into question when AI development and utilization are not adequately addressed. Ensuring competent deployment, oversight, and ongoing monitoring and maintenance—given the evolving dynamics of this disruptive technology—will be critical for several reasons beyond security itself. And to the extent that an organization is contemplating cybersecurity risk insurance, “tabletop exercises” provide insurers, as well as regulators, with assurance of the organization’s good-faith, reasonable efforts to secure its cyber and AI systems.
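
To make the exercise format concrete, a facilitator might script a scenario as a sequence of timed “injects,” each paired with the responses the team is expected to discuss. The Python sketch below is purely illustrative; the Inject and TabletopScenario classes and the deepfake-wire-fraud scenario are hypothetical examples, not drawn from any standard or product mentioned in this post.

from dataclasses import dataclass, field

@dataclass
class Inject:
    minute: int                   # minutes into the exercise
    event: str                    # what the facilitator announces
    expected_actions: list[str]   # responses the team should discuss

@dataclass
class TabletopScenario:
    title: str
    injects: list[Inject] = field(default_factory=list)

    def run(self) -> None:
        """Walk the team through each inject in chronological order."""
        print(f"Scenario: {self.title}")
        for inject in sorted(self.injects, key=lambda i: i.minute):
            print(f"[T+{inject.minute:>3} min] {inject.event}")
            for action in inject.expected_actions:
                print(f"    discuss: {action}")

# Hypothetical exercise built around the deepfake risks discussed above.
scenario = TabletopScenario(
    title="Deepfake voice request to wire funds",
    injects=[
        Inject(0, "CFO's 'voice' calls treasury asking for an urgent wire.",
               ["Verify via a known call-back number", "Notify security team"]),
        Inject(30, "A second call pressures staff, citing a board deadline.",
               ["Escalate to incident commander", "Preserve call records"]),
    ],
)
scenario.run()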

“Tabletop exercises” also provide an opportunity to explore and assess not only the areas of concern inherent in the use of AI but also its benefits. As EBG’s Privacy Officer’s Roadmap: Data Breach and Ransomware Defense—Speaking of Litigation video podcast (February 27, 2024) points out, AI can be used “in the same way that the bad guys use AI”—meaning that companies can likewise leverage AI to help detect a present or imminent attack, or to solidify their defenses against a breach.
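
For instance, one common defensive application is machine-learning anomaly detection over activity logs. The minimal Python sketch below, which assumes scikit-learn’s IsolationForest and entirely synthetic login data, illustrates the idea; it is not a description of any tool or podcast guidance referenced above.

import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per login event: hour of day, MB transferred,
# and failed attempts in the prior hour. Synthetic data for demonstration.
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.normal(13, 2, 500),   # logins cluster around business hours
    rng.normal(5, 2, 500),    # modest data transfer
    rng.poisson(0.2, 500),    # rare failed attempts
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A 3 a.m. login moving 500 MB after 7 failed attempts is flagged.
suspect = np.array([[3, 500, 7]])
print(model.predict(suspect))   # -1 indicates an outlier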

Epstein Becker Green Staff Attorney Ann W. Parks contributed to the preparation of this post.

----

[1] EBG Advisors, the consulting arm of our national law firm Epstein Becker Green (EBG), is privileged to be collaborating with NIST in this AI Safety Consortium.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Epstein Becker & Green | Attorney Advertising
