Is Your AI Algorithm Admissible in Court? Some Things to Consider.

Increasing use of AI requires careful consideration by both legal practitioners and the courts.

Artificial Intelligence (AI) technology is becoming pervasive in our lives. There’s no doubt about it. Whether it’s Netflix suggesting the next movie to watch or series to binge, Pandora suggesting songs to play, or Alexa or Siri responding to voice commands, we’re probably using AI technology every day without giving it a second thought. It’s everywhere, impacting our lives in ways we’re not even aware of. For example, did you know that an AI algorithm identified the outbreak of what came to be known as COVID-19 several days before either the World Health Organization or the U.S. Centers for Disease Control and Prevention?

AI and the Court

AI and machine learning technology have had notable impacts within the practice of law and law enforcement as well. Since Judge Andrew Peck’s 2012 ruling in Da Silva Moore v. Publicis Groupe & MSL Group approving the use of computer-assisted review (and many subsequent rulings reinforcing that approval), machine-learning tools have been considered a court-approved protocol for managing review in eDiscovery.
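
For a sense of what happens under the hood of computer-assisted review, here is a minimal sketch of the core idea: a text classifier trained on attorney-coded seed documents ranks the rest of the collection by predicted relevance, so reviewers see the likeliest-relevant material first. The documents, labels, and scikit-learn modeling choices below are illustrative assumptions, not any vendor’s actual workflow or the protocol approved in Da Silva Moore.

```python
# Minimal sketch of the idea behind computer-assisted review
# (a.k.a. technology-assisted review / predictive coding).
# All documents and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Seed set: documents a human reviewer has already coded.
seed_docs = [
    "board meeting minutes discussing the merger",
    "fantasy football league standings",
    "draft merger agreement circulated to counsel",
    "office holiday party invitation",
]
seed_labels = [1, 0, 1, 0]  # 1 = relevant, 0 = not relevant

# Unreviewed documents to be prioritized for human review.
unreviewed = [
    "follow-up on merger due diligence questions",
    "lunch order for Friday",
]

vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Rank unreviewed documents by predicted probability of relevance.
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```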

From a law enforcement standpoint, however, the use of AI, and whether any particular application will be accepted in court, is in a state of flux as evolving technologies challenge existing rules and standards. Facial recognition technology, license plate recognition, predictive policing, and other enhanced surveillance technologies are under scrutiny, as are AI-driven tools for sentencing.

In one well-known sentencing example, the Wisconsin Supreme Court in 2016 unanimously upheld a six-year prison sentence for 34-year-old Eric Loomis, who had been deemed at high risk of re-offending by a popular algorithm-based risk-assessment tool for recidivism known as COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a 137-question assessment covering criminal and parole history, age, employment status, social life, education level, community ties, drug use, and beliefs. Loomis had challenged the decision, maintaining that it violated his constitutional right to due process for a trial court to rely on risk-assessment results produced by a proprietary tool whose accuracy and scientific validity a defendant cannot challenge. Notably, COMPAS assessments take gender into account in formulating the risk assessment, and studies have found racial disparities in the tool’s error rates, muddying the constitutional waters further. Although the court upheld the use of COMPAS, there has been much controversy over whether an algorithm that cannot be examined by the people it affects should be allowed such influence.
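
COMPAS’s actual model is proprietary, and nothing below reproduces it. As a purely hypothetical illustration of why that opacity matters, though, consider a toy weighted questionnaire: a defendant who cannot inspect the weights has no meaningful way to contest the score. Every question name and weight here is invented.

```python
# Toy, entirely invented questionnaire-based risk score. This is NOT
# COMPAS; it only illustrates why hidden weights frustrate challenges
# to a score's accuracy and validity.
WEIGHTS = {
    "prior_offenses": 0.40,       # invented weight
    "age_under_25": 0.25,         # invented weight
    "unemployed": 0.20,           # invented weight
    "weak_community_ties": 0.15,  # invented weight
}

def risk_score(answers: dict) -> float:
    """Weighted sum of answers in [0, 1], mapped to a 1-10 scale."""
    raw = sum(WEIGHTS[q] * answers.get(q, 0.0) for q in WEIGHTS)
    return round(1 + 9 * raw, 1)

# The same answers yield a very different score under different
# weights, yet a defendant facing a proprietary tool sees neither.
print(risk_score({"prior_offenses": 0.8, "age_under_25": 1.0,
                  "unemployed": 1.0, "weak_community_ties": 0.5}))
```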

These questions of AI transparency and bias are important ones, and they will likely become more pressing as the pervasive use of AI confronts our understanding of due process and a just legal system.

Future Acceptance? No Guarantees.

The uses of AI that courts have approved so far do not guarantee that any other technology using AI will be admissible. As we noted in our recent article on ABA Resolution 112, What’s Your Ethical Duty Regarding AI? Here’s What the ABA Says (available for download), AI algorithms are only as good as the data they learn from and the competence of those who deploy them. Improper use and bias can skew the results of an AI model. As with any evidence, relevance, reliability, and authentication are key; it’s just that authenticating algorithms and their results comes with both practical and ethical challenges. It is thus incumbent upon counsel to understand both the limitations of AI functionality and the ethical implications of its use.

So, where does that leave us as far as the admissibility of AI in the courts? There are a multitude of open questions, but here is some helpful information to consider about the admissibility of AI evidence in Federal courts.

Federal Rules to Consider for AI Admissibility

There is no single rule in the Federal Rules of Evidence that specifically addresses the admissibility of AI technology, nor is there any known court ruling that comprehensively addresses the evidentiary issues AI raises. However, components of three FRE rules can be applied to the authentication of AI-related evidence:

Rule 401. Test for Relevant Evidence

Evidence is relevant if:

(a) it has any tendency to make a fact more or less probable than it would be without the evidence; and

(b) the fact is of consequence in determining the action.

Rule 901. Authenticating or Identifying Evidence

(a) In General. To satisfy the requirement of authenticating or identifying an item of evidence, the proponent must produce evidence sufficient to support a finding that the item is what the proponent claims it is.

(b) Examples. The following are examples only — not a complete list — of evidence that satisfies the requirement:

(1) Testimony of a Witness with Knowledge. Testimony that an item is what it is claimed to be.

(9) Evidence About a Process or System. Evidence describing a process or system and showing that it produces an accurate result.

Rule 902. Evidence That Is Self-Authenticating

The following items of evidence are self-authenticating; they require no extrinsic evidence of authenticity in order to be admitted:

(13) Certified Records Generated by an Electronic Process or System. A record generated by an electronic process or system that produces an accurate result, as shown by a certification of a qualified person that complies with the certification requirements of Rule 902(11) or (12). The proponent must also meet the notice requirements of Rule 902(11).

These are the rules through which parties can approach the authentication of AI evidence. A combination of testimony from a knowledgeable witness and evidence demonstrating that the AI algorithm produces accurate, unbiased results will help make a compelling case.
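
In practical terms, Rule 901(b)(9) asks the proponent to show that the system "produces an accurate result." Here is a minimal, hypothetical sketch of the kind of validation evidence that could support such a showing: overall accuracy on a holdout set, plus a comparison of error rates across subgroups to probe for bias. The data, group names, and choice of metrics are illustrative assumptions, not a legal standard.

```python
# Hypothetical validation evidence for an AI system's outputs:
# overall accuracy plus per-subgroup error rates. All data invented.
from collections import defaultdict

# (predicted_label, true_label, subgroup) triples from a holdout set.
results = [
    (1, 1, "group_a"), (0, 0, "group_a"), (1, 0, "group_a"),
    (1, 1, "group_b"), (0, 1, "group_b"), (0, 0, "group_b"),
]

correct = sum(pred == truth for pred, truth, _ in results)
print(f"Overall accuracy: {correct / len(results):.2%}")

# Large disparities in subgroup error rates would undercut a claim
# that the system "produces an accurate result" across the board.
errors = defaultdict(list)
for pred, truth, group in results:
    errors[group].append(pred != truth)
for group, errs in errors.items():
    print(f"{group} error rate: {sum(errs) / len(errs):.2%}")
```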

The Sedona Conference Commentary on ESI Evidence & Admissibility

If you’re looking for more robust information on ESI admissibility, including the admissibility of AI evidence, The Sedona Conference® released the final version of its Commentary on ESI Evidence & Admissibility, Second Edition in October 2020. The Second Edition updates the guidance of the first edition (published way back in 2008) to reflect advances in technology and the 2017 and 2019 amendments to the Federal Rules of Evidence, in particular Rules 803(16), 807, and 902(13) and (14). It also discusses new issues and pitfalls, such as ephemeral data, blockchain, and (of course) AI!

Conclusion

Applications for, and evidentiary issues related to, AI are continuing to evolve, so this is a topic that will certainly bear revisiting. Eventually, case law will help define how courts treat AI admissibility, which in turn will shape how parties handle AI evidence. Stay tuned!
