AI at the gate: NYDFS issues guidance on addressing new AI-driven cybersecurity risks under existing cybersecurity requirements

Eversheds Sutherland (US) LLP

On October 16, 2024, the New York State Department of Financial Services (DFS) issued an industry letter providing guidance on how DFS-regulated entities (covered entities) should be evaluating and responding to artificial intelligence (AI)-driven cybersecurity risks.

  • Threat actors can use AI to scale their social engineering operations with deceptive deepfakes and more efficiently exploit breaches to exfiltrate information.
  • Companies using or developing AI systems face unique vulnerabilities along their supply chain and expanded risk from pooling data resources to develop proprietary AI solutions.
  • While the letter does not impose any new regulatory requirements, it emphasizes that DFS expects covered entities to account for and address AI cybersecurity risks under existing DFS cybersecurity requirements.
  • The guidance urges covered entities to take proactive measures to review and reevaluate their Cybersecurity programs to address AI risks in their documented risk assessments, board-level reporting, vendor management and training.

In the letter, DFS enumerates several AI-driven cybersecurity risks, based both on the use of AI by threat actors and on a covered entity’s own use of AI.

AI Cybersecurity Risks from Threat Actors

The letter identifies two ways in which AI exacerbates known threats from hackers:

1. Generative AI is multiplying the risk of deepfakes. By enabling threat actors to develop increasingly realistic communications through AI-enabled audio or visual deepfakes and tailored written messages, AI is more effective and convincing in coercing unsuspecting targets to share sensitive information or credentials.

2. AI makes cyberattacks more effective and efficient. Because AI enables threat actors to comprehensively surveil an exploited system, bad actors can quickly identify and exploit vulnerabilities and exfiltrate more information faster following a breach.

AI Cybersecurity Risks from Using AI Systems and the Accompanying Chain of Service Providers

The DFS letter also identifies security risks posed by a covered entity’s own use of AI systems or tools.

1. AI uses lots of data, which increases risk. AI systems often require access to and the processing of extremely large amounts of valuable data, including nonpublic personal information and sensitive personal information, such as biometric data. This large volume of data not only is inherently harder to protect but also creates a greater incentive for hackers to target the covered entity and its trove of data.

2. AI tools rely on layers of vendors, creating AI supply chain risk. Most covered entities using AI license all or part of their AI systems from service providers that in turn are leveraging other vendors to support the AI tools they offer. Each link in this chain can have access to and process large amounts of the covered entity’s data, making each vendor an enticing target for hackers. Each layer also adds the potential for security vulnerabilities and weaknesses that could be exploited by a threat actor. To the extent a covered entity’s AI systems are integrated with, or its data is accessible to, those third- and fourth-party service providers, a breach at one entity in the supply chain can become a gateway for a much larger incident.

To account for and address these risks, DFS is encouraging covered entities to ensure the issues highlighted in the letter are specifically addressed. The letter identifies several steps covered entities should consider taking to mitigate AI-driven threats:

1. Ensure required risk assessments address the full range of AI use cases and interactions. Covered entities should be assessing and documenting AI-driven security risks—like those identified by DFS—as part of their required assessments of risk, including from their own use of AI systems and the use of AI by their third-party service providers. Overlooking internal or external AI systems can result in failing to capture potential security vulnerabilities that could compromise the confidentiality, integrity or availability of the covered entity’s information systems.

2. Conduct enhanced, specific due diligence of AI service providers before providing system access, and impose heightened security requirements on vendors in the AI ecosystem that have access to the covered entity’s nonpublic information.Acknowledging the risks associated with granting access to third-party AI service providers, covered entities should perform appropriate risk-based due diligence when engaging service providers in the AI ecosystem that process the covered entities’ nonpublic information.

3. Use AI-resistant multifactor authentication. As with many previous DFS guidance letters on cybersecurity, the department continues to emphasize using strong forms of multifactor authentication (MFA) as a key security control. Generative AI creates some new risks when using certain types of biometric authentication as part of MFA, because generative AI could be used to impersonate the biometric characteristics used for authentication. For example, a deepfake video or image could be used to bypass facial recognition requirements. Accordingly, DFS encourages covered entities to embrace “authentication factors that can withstand AI-manipulated deepfakes and other AI-enhanced attacks,” including “digital-based certificates and physical security keys” or multiple simultaneous biometric modalities. That is easier said than done. While there is no obligation to use these specific kinds of authentication factors under either the currently applicable DFS MFA requirements or the elevated MFA requirements going into effect in 2025, this letter continues the pattern of DFS pushing for stricter MFA practices than what is required under its regulations. This letter could be a precursor to future revisions to DFS’s MFA requirements and an issue that could arise during DFS exams or enforcement proceedings.

4. Engage all personnel in risk-based and function-focused training on AI cybersecurity risks and capabilities. The letter provides that covered entities should amend existing and ongoing training to ensure employees are aware of the unique cybersecurity risks posed by AI. For covered entities deploying AI solutions, their cyber and IT staff should be trained on how to secure and defend their AI systems.

5. Monitor for AI-powered attacks and minimize accessible data. Covered entities should be engaging in surveillance and monitoring for specific behaviors unique to AI-focused attacks. DFS specifically points to the need to scan for unusual querying behaviors in generative AI systems, where a threat actor could be attempting a prompt injection attack1 or other effort to extract sensitive information from the model. Covered entities should already be implementing data minimization tactics, but because of the data-intensive nature of AI model training and tuning, organizations should be careful to silo and secure AI training data and other nonpublic information used in AI systems to prevent widespread access to such data in the event of a breach.

Separately, DFS also notes that AI can have significant benefits for cybersecurity defense. Covered entities should be looking at AI as a potential innovative security solution.

With this letter, DFS is encouraging covered entities to ensure the issues highlighted are specifically addressed under their current cybersecurity programs, including in documented risk assessments, board-level reporting, vendor management and training.

__________

1 In a prompt injection attack on a generative AI system, the attacker submits prompts to the system designed to cause the system to circumvent information and privacy protection controls and to reply with sensitive or valuable information.

[View source.]

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Eversheds Sutherland (US) LLP

Written by:

Eversheds Sutherland (US) LLP
Contact
more
less

PUBLISH YOUR CONTENT ON JD SUPRA NOW

  • Increased visibility
  • Actionable analytics
  • Ongoing guidance

Eversheds Sutherland (US) LLP on:

Reporters on Deadline

"My best business intelligence, in one easy email…"

Your first step to building a free, personalized, morning email brief covering pertinent authors and topics on JD Supra:
*By using the service, you signify your acceptance of JD Supra's Privacy Policy.
Custom Email Digest
- hide
- hide