New York Bar: Courts Should Take First Crack at “Deepfake” Evidence

Esquire Deposition Solutions, LLC

One interesting aspect of the New York State Bar Association’s recent review of the ethical and legal issues raised by artificial intelligence in the practice of law was its suggestion that now might not be the best time to craft a solution to the problem of unreliable, AI-generated pleadings and evidence. After reviewing legislative efforts in New York and very recent discussions of changes to federal evidence rules, New York’s artificial intelligence policy experts recommended that trial courts be allowed to consider the admissibility of AI-generated evidence under the current rules on a case-by-case basis.

The Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence (April 2024) summarizes both the development of the artificial intelligence technologies finding their way into law offices across the country and the various court opinions and law reform efforts to date. The report is a great introduction to artificial intelligence generally and, more specifically, to the legal community’s efforts to protect client interests and the reliability of the adversarial justice system. In short, the task force concluded that more experience with artificial intelligence technologies is necessary before workable new rules for lawyers and courts can be written.

The AI “Deepfake” Problem

A significant danger with generative artificial intelligence is its ability to create compelling outputs (e.g., “deepfake” representations of images, text, and sound) while at the same time masking the inputs and processes by which the outputs were created. The “black box” nature of AI technologies creates challenges when used in legal proceedings, where participants have traditionally been able to discover the factual basis for any conclusions offered as evidence in court.

In New York, Assembly Bill 8110 would prohibit the introduction of evidence either “created” or “processed” by artificial intelligence unless the proponent demonstrates – with other, independent admissible evidence – the reliability and accuracy of the artificial intelligence used to create or process the proffered evidence. A similar bill, Senate Bill 8390, is pending in the state senate. Neither has advanced beyond the introduction phase.

At the federal level, the advisory committee for the Federal Rules of Evidence recently discussed a proposed amendment to Fed. R. Evid. 901, the rule on authentication of evidence. The proposed revision would require that the proponent of evidence created by artificial intelligence both describe the technology that created the evidence and demonstrate that the proffered evidence is reliable.

Two changes are under discussion. After the proposed revision, Fed. R. Evid. 901(b)(9) would read as follows (language to be deleted appears in brackets):

(9) Evidence about a Process or System. For an item generated by a process or system:

(A) evidence describing it and showing that it produces [an accurate] a valid and reliable result; and

(B) if the proponent concedes that the item was generated by artificial intelligence, additional evidence that:

(i) describes the software or program that was used; and

(ii) shows that it produced valid and reliable results in this instance.

A new Fed. R. Evid. 901(c) would read:

901(c): Potentially Fabricated or Altered Electronic Evidence. If a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court that it is more likely than not either fabricated, or altered in whole or in part, the evidence is admissible only if the proponent demonstrates that its probative value outweighs its prejudicial effect on the party challenging the evidence.

Details on the proposal can be found in the agenda book for the advisory committee’s April 19 meeting (PDF, 10.94 MB).

The intent of the proposed revisions is to allow “valid and reliable” AI-generated evidence while addressing the black-box aspect of artificial intelligence tools by requiring transparency into how the evidence was created. Advisory committee materials suggest that the committee will likely hold off recommending rule changes for now, in order to give trial courts time to gain experience working through deepfake evidence issues under existing rules.

The New York bar’s task force appeared to agree with a “wait and see” approach to deepfake evidence challenges. It said that, while both proposals were “laudable,” the legislative and rulemaking processes might not be the appropriate approach to regulating evidence generated by artificial intelligence. Trial judges ought to have the first crack at writing evidentiary standards for admissibility of AI-generated evidence, the task force suggested.

“[I]t may well be that the common law at the trial court level provides at least an interim roadmap for how judges should consider these issues,” the task force wrote. “Indeed, this approach was largely employed to develop the law regarding discovery and admissibility of social media evidence when those issues first took hold.”

The recommendations in the task force report were adopted April 6 by the New York State Bar Association’s House of Delegates.

Evidence Rule Changes Appear Unlikely

Elsewhere, in California, legislation has been introduced (Senate Bill 970) that would regulate, to some extent, the production and use of “synthetic media” and direct the California Judicial Council to study the need for changes in state evidence rules “to assist courts in assessing claims that evidence that is being introduced has been generated by or manipulated by artificial intelligence.” The measure does not appear to be headed for passage; however, the Judicial Council recently launched a deepfake evidence initiative without the need for legislative prompting.

Last year, in Texas, the Taskforce for Responsible AI in the Law (TRAIL) released an Interim Report to the State Bar of Texas Board of Directors that discussed deepfake evidence challenges. Task force members saw two problems with deepfakes. First, deepfake evidence clearly threatens the integrity of the fact-finding process – thus, authentication rules must be able to screen out faked evidence. Second, the ubiquity of deepfake technologies will likely raise in jurors’ minds the possibility that genuine evidence might have been faked – thus, jurors may need some assurance that the evidence they’ve heard or seen is reliable.

“Though the technology is relatively new, courts already have processes in place to handle fake evidence and can apply these same procedures to managing deepfakes,” the task force noted. “But courts are less prepared to deal with proving that real evidence is, in fact, real. Furthermore, the better the evidence, the more likely that juries will feel required to verify its legitimacy.”

The likely result of the ongoing policy work on deepfake evidence – at both the federal and state levels – is that there will be no evidence rule changes anytime soon. Instead, trial courts, and the litigators working in those courts, will have to work out authentication issues for digital video, images, and sounds on a case-by-case basis under current rules – all while the technology for creating digital evidence continues to advance. Whether a new rule for authenticating digital evidence emerges from this experience is anyone’s guess.
