Cyber AI Chronicles II – AI-enabled cyber threats and defensive measures

Constangy, Brooks, Smith & Prophete, LLP

EDITOR’S NOTE: This is part two of “Cyber AI Chronicles” – written by lawyers and named by ChatGPT. This series will highlight key legal, privacy, and technical issues associated with the continued development, regulation, and application of artificial intelligence.

Recent developments in artificial intelligence (AI) have opened the door to exciting possibilities for innovation. From helping doctors communicate better with their patients to drafting a travel itinerary as you explore new locales (best to verify that the recommendations are still open!), AI is beginning to demonstrate that it can positively affect our lives.

However, the same capabilities allow malicious actors to abuse these systems and introduce new or “improved” cyber threats.

For example, malicious actors have already begun exploiting AI to develop synthetic media, commonly referred to as “deepfakes.” Just last month, the FBI warned of malicious actors creating sexually explicit deepfakes to extort, coerce, and harass victims. Although deepfakes can be created without AI, AI makes the content more believable and therefore more dangerous. AI can also draft more convincing phishing content, increasing the likelihood that an intended victim will fall for the scam. Malicious actors have even combined these techniques, creating audio deepfakes of trusted parties who call friends, family, and colleagues asking them to send money to help in an emergency.

AI-enabled threats don’t stop at more believable scams. It is possible, or will likely soon be possible, for AI to generate new malicious software (“malware”) or to improve existing strains. AI-generated variants could be produced quickly and at scale, making it easier for malware to evade network defenses that rely on recognizing previously seen threats. Commercially available AI tools like ChatGPT have safeguards in place to prevent this kind of abuse, but a maxim in information security is that there is no such thing as perfect security. In other words, we cannot reasonably expect commercially available AI tools to block every attempt to create malware. Worse, new AI systems may be introduced expressly for malicious purposes.

There is, however, a flip side to these new threats: AI systems can also be used to improve our information security measures. The National Security Agency has noted an opportunity for AI to support efforts to secure and defend networks. That opportunity stems in part from AI’s capacity to review and analyze massive data sets and recognize patterns, patterns that can reveal previously seen malicious activity as well as slight variations on it. AI can also rapidly perform high volumes of security tasks that are time consuming for human analysts and involve repetitive or near-repetitive actions. And as AI’s capacity to learn from the past and predict future behavior improves, so will its capacity to anticipate novel, emerging threats.
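To make the pattern-recognition idea above concrete, here is a minimal, illustrative sketch in Python using an isolation forest, a common open-source anomaly-detection technique from the scikit-learn library, to flag login events that deviate from a historical baseline. The features, values, and thresholds are hypothetical and chosen purely for illustration; no particular product or agency tool is being described.

```python
# Illustrative sketch: an anomaly detector trained on "normal" activity
# flags events that deviate from past patterns. All features and values
# below are hypothetical and chosen only for demonstration.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical features per login event:
# [hour of day, failed login attempts, megabytes transferred].
# Baseline traffic clusters around business hours with few failures.
baseline = np.column_stack([
    rng.normal(13, 2.5, 1000),   # logins centered in early afternoon
    rng.poisson(0.2, 1000),      # occasional failed attempts
    rng.normal(50, 15, 1000),    # typical data transfer in MB
])

# Train on historical activity assumed to be mostly benign.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline)

# Two new events: one routine login, and one resembling credential
# stuffing followed by an unusually large data transfer.
new_events = np.array([
    [14.0, 0, 45.0],    # ordinary afternoon login
    [3.0, 25, 900.0],   # 3 a.m., many failures, large transfer
])

# predict() returns 1 for events consistent with the baseline, -1 for outliers.
for event, label in zip(new_events, detector.predict(new_events)):
    status = "anomalous - review" if label == -1 else "normal"
    print(f"event {event.tolist()}: {status}")
```

In practice, flagged events would feed a human review queue rather than trigger automatic action; the value of this approach lies in triaging massive logs quickly, not in replacing analysts.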

It’s too early to tell whether AI will be a greater boon or bane to information security, but we can influence the outcome. Doing so begins with understanding the possible uses and abuses of this revolutionary technology. 

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Constangy, Brooks, Smith & Prophete, LLP | Attorney Advertising
