AI Threatens Courts with Fake Evidence, UW Prof Says

EDRM - Electronic Discovery Reference Model

Ethics prof supports pause in development and deployment of AI until safeguards are in place

[Editor’s Note: Republished with permission, this article first appeared in The Record, 3.28.23.]

Courts are not equipped to detect the fake video and audio evidence that will contaminate the justice system thanks to advances in artificial intelligence, says Maura Grossman, a University of Waterloo expert in ethics, law and technology.

“I worry about what happens with the court system when neither a judge nor a jury will be in any position to believe their eyes and ears anymore when they look at evidence,” said Grossman in an interview.

Only defendants who can afford to pay for expert analysis of the evidence will be able to get a fair trial when trying to refute deep fakes, she said.

The AI race among big tech companies became public in a new way in November 2022, when California-based OpenAI released ChatGPT, followed more recently by GPT-4. Ask a question, and it confidently answers. Microsoft announced it will invest billions in the technology.

A few days ago, Google released its chatbot, called Bard, to a limited number of users in the United States and the United Kingdom.

Both ChatGPT and Bard are among the latest advances in natural language processing, an area called generative AI. The technology can be used to take apart a real, authentic recording and rearrange its contents.

With a relatively short recording of a person speaking, the AI can have the subject saying something radically different in a new recording it makes based on data from the original, said Grossman.

There are already reports of telephone scams that use fake audio produced by a generative AI to trick people into sending money, she said.

Since 2016, Grossman has taught ethics and AI to graduate and undergraduate students at the University of Waterloo, and Osgoode Hall Law School in Toronto. Prior to that she practised law in New York for 17 years, specializing in law and technology.

Grossman is also on faculty at the Vector Institute for Artificial Intelligence in Toronto, and the principal at Maura Grossman Law, an eDiscovery law and consulting firm in Buffalo, N.Y.

She is trying to warn the legal profession and government ministries that oversee the justice system about the threats posed to the courts by generative AI.

“What does that do to the justice system that has always relied on either a judge or a jury being able to assess evidence?” said Grossman. “We are moving into a world where that is going to be harder and harder to do.”

But she stops short of calling for government regulation of AI for two reasons. First, it can easily become a blunt instrument that stifles advances that can help with disease diagnoses and new drug development. Second, the public sector does not have the experts needed to meaningfully regulate AI.

The European Union has draft legislation that is the most comprehensive so far and includes a ban on facial recognition technology for public surveillance, said Grossman, and some U.S. states have also banned the use of facial recognition tech.

“A lot of people say the EU went too far,” said Grossman.

She likes the idea of telling the public when they are interacting with a chatbot. She also likes the idea of requiring impact assessments before deploying AI, and making the findings public.

“There has to be some middle ground,” said Grossman. “Let’s find something in between outright banning and doing nothing, and maybe it is some agency for algorithms that has to approve them.”

AI specialists are among the best-paid workers in tech, and it is going to be very difficult for the public sector to recruit experts to help draft and enforce regulations.

Tech associations, industry leaders and government should call for a pause on the deployment of new AI until some of the concerns are addressed, she said. Grossman acknowledges that will be difficult, and won’t work unless everybody agrees to a voluntary pause.

“Some people have asked, ‘Shouldn’t we hit pause, at least commercially, to get our arms around this before they go to market?’ Maybe more research needs to be done,” said Grossman.

“There are some smart, thoughtful people who have taken that position that this stuff has shifted a little too quickly without a lot of thought about the dangers,” she added.

Written by:

EDRM - Electronic Discovery Reference Model