Digital Forensics: The Good, the Bad, and the AI-Generated

Association of Certified E-Discovery Specialists (ACEDS)

Unless you have been living under a rock for the past year, you have heard something about artificial intelligence (AI). AI chat, art, and everything in between have been all over social media and news feeds. The AI craze isn't likely to end soon; in fact, more companies are moving to build AI into their tools and software.

Digital forensics is the scientific process of identifying, preserving, collecting, analyzing, and reporting on electronically stored information (ESI). An obvious question is: how will AI affect digital forensics?

The Good

Providers of digital forensic tools have already started to implement AI into their tools. Features like image categorization, where AI analyzes media files and categorizes them based on their content, can be useful in quickly identifying files of interest. Think of it as facial recognition for objects.

For example, suppose a case revolves around items such as firearms or drugs. Many forensic tools offer image categorization features to help identify photos or videos that may contain those types of items. Of course, they are not always 100% accurate, and any digital forensic examiner worth their salt will verify and validate the findings. Relying solely on a tool to find all your evidence is called "push button forensics."
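To make the idea concrete, here is a minimal sketch of how such categorization might work under the hood, using a stock ImageNet classifier from torchvision as a stand-in for a vendor's proprietary model. The keyword list, confidence threshold, and file handling here are hypothetical; this illustrates the general technique, not any tool's actual implementation.

```python
# Minimal sketch of AI image categorization, assuming torchvision is
# installed. A stock ImageNet classifier stands in for a forensic tool's
# proprietary model; the keywords and threshold are illustrative only.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

KEYWORDS = ("rifle", "revolver", "pill")  # hypothetical items of interest

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]  # ImageNet-1k class names

def flag_image(path: str, threshold: float = 0.3) -> list[tuple[str, float]]:
    """Return (label, probability) pairs that match a keyword of interest."""
    batch = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    hits = []
    for idx in probs.argsort(descending=True)[:5]:  # top-5 predictions only
        label, p = labels[idx], probs[idx].item()
        if p >= threshold and any(k in label.lower() for k in KEYWORDS):
            hits.append((label, p))
    return hits
```

Even with a filter like this, the flagged files are only leads; the examiner still opens and verifies each one by hand.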

Building robust AI models into digital forensic software and toolsets will likely bring significant improvements in the ability to categorize and filter files and file contents. AI may also reduce the overall time an analysis takes, helping to shrink case backlogs and wait times.

The Bad

It is worth repeating: AI-assisted analysis of evidence can be a tremendous benefit to the profession, but it may be a double-edged sword. Relying solely on AI to find evidence leads to "push button forensics." Examiners who depend entirely on the tool are hoping to find all the evidence with the push of an "Easy" button, and they are not performing a true digital forensic examination.

Digital forensic examiners need to understand where the evidence comes from and what it means, and, most importantly, they need to validate their findings. Tools and software can and do get it wrong. Just because your AI-infused software found evidence doesn't mean the finding is valid or accurate. Where did the evidence come from? What is its significance? Does the evidence actually exist, or is the tool manipulating the output?
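Hash verification is one concrete way to answer the "does the evidence actually exist" question. The sketch below, which assumes you can reach the file in a read-only mount of the evidence, recomputes a SHA-256 hash and compares it to the value the tool reported; the paths and the reported hash are hypothetical placeholders.

```python
# Minimal sketch of validating a tool's finding against the evidence
# itself. The file path and reported hash below are hypothetical.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large evidence items don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

tool_reported_hash = "d2c7..."  # value copied from the tool's report
evidence_file = Path("/mnt/evidence/DCIM/IMG_0042.jpg")

computed = sha256_of(evidence_file)
if computed == tool_reported_hash:
    print("Hash matches the tool's report.")
else:
    print(f"MISMATCH: tool said {tool_reported_hash}, file is {computed}")
```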

Additionally, bad actors and criminals will find a way to exploit or use the tools for nefarious purposes, but more on that next.

The AI-Generated

It is no secret that tools such as ChatGPT, DALL-E, Midjourney, and others can be used for nefarious purposes. Plenty of articles describe AI being used to create highly credible phishing emails, write malware, and generate fake images or videos of real people. We've already seen a fake AI-generated call purporting to be from President Joe Biden.

Software such as Adobe Photoshop has already started to incorporate generative AI. You can take a photo and have AI assist in adding a person, removing a person, or changing the background. All of this can, and likely will, start appearing as electronic evidence.

It is not just desktop programs that are embracing AI; smartphone apps are also making use of it. Google's Pixel smartphones ship with Magic Eraser, a feature that lets a user remove objects from photos. AI apps are already being released across the various app stores for consumer use. Even OpenAI, the creator of ChatGPT, has launched a marketplace where users can find custom versions of ChatGPT.

From a forensic perspective, the important questions are: what traces are left behind on the phone, and can you tell for sure that a submitted photo hasn't been edited or altered from its original state?
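One place to start, though by no means conclusive, is the photo's metadata. A minimal sketch, assuming the Pillow library: read the EXIF tags and look for a Software field naming an editor. The file name is hypothetical, and the absence of such traces proves nothing, since EXIF data is trivially stripped or forged.

```python
# Minimal sketch of checking a photo's EXIF metadata for editing traces,
# assuming Pillow is installed. Absence of traces proves nothing: EXIF
# data is easily stripped or forged, so this is a first look, not proof.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_tags(path: str) -> dict[str, str]:
    """Return human-readable EXIF tag names mapped to their values."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, str(tag_id)): str(value)
            for tag_id, value in exif.items()}

tags = exif_tags("IMG_0042.jpg")  # hypothetical file name
for key in ("Software", "DateTime", "Make", "Model"):
    print(f"{key}: {tags.get(key, '<not present>')}")
# A 'Software' value such as 'Adobe Photoshop' suggests post-processing;
# an on-device editor may or may not leave such a marker behind.
```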

Additionally, looking at criminal matters, how will AI image-, video-, and audio-generation technology affect the criminal justice system? The FBI recently put out an announcement about the use of AI in the generation of child sexual abuse material (CSAM).

To quote the very first sentence of the release: "[the] FBI is warning the public that child sexual abuse material (CSAM) created with content manipulation technologies, to include generative artificial intelligence (AI), is illegal." The release goes further, providing recent examples of cases involving the use of AI in CSAM investigations.

While it may come as a surprise to some that criminals would use AI in this manner, it was only a matter of time before cases such as those cited popped up.

Final Thoughts

While AI certainly has its benefits, there are big questions about how it is used and what it can do that still need to be answered. It may reduce the time it takes to analyze or identify ESI, and it may help filter documents. But what happens when it is used improperly, or there isn't any validation of how it works? In other words: will your tool and process survive a Daubert challenge?
