Perspectives on AI and the Law

The 2018 Bloomberg Law Leadership Forum in New York featured a robust agenda covering current regulatory priorities, corporate legal risks, and the tools counsel have at their disposal to address them. Keynote speakers included Jay Clayton, Chairman of the U.S. Securities and Exchange Commission, and Rod Rosenstein, Deputy Attorney General at the U.S. Department of Justice. Participating in the day provided rare access to the current perspectives of thought leaders who matter to our industry.

Artificial intelligence, machine learning, and the law

It was an honor to share the stage with a panel of legal experts to explore the implications of the rise of artificial intelligence and machine learning for the law. The panel was moderated by Cassandra Porter, Senior Privacy Counsel, Americas, at Cognizant, and included Behnam Dayanim, Partner, Paul Hastings; Bert Kaminski, Chief Commercial Counsel, GE Digital; Pedro Pavón, Managing Counsel, Oracle; and Boris Segalis, Partner, Cooley. The experience increased my confidence that technology-enabled services – including eDiscovery – remain in an extraordinary period of growth.

What is artificial intelligence?

The panel's working definition of AI, offered by Porter, was "any task that, if performed by a human, would require intelligence." To expand on that, each panelist was asked, "What does AI mean to you?" The answers were illuminating.

Segalis opened with "to me, AI is decisions made on synthetic intelligence." Pavón followed with "I think of AI as a category of things… systems that make decisions without human involvement… in an algorithmic way." He cited examples including machine learning, deep learning, and, perhaps in the future, applications enabled by quantum computing. Kaminski's response included "it's a tool. Basically mathematical algorithms that tease out correlations. So it's not really intelligence as we would think in a human sense… it's meant to supplement humans." He highlighted that these tools break down when they are presented with situations they have not "seen" before. I shared my perspective that AI is a collection of algorithms that are categorizers: the algorithms observe patterns in flows of data to provide decision support, or sometimes even to take actions independently. Dayanim finished the definition round with the observation that "from a legal perspective, I think of AI as decision making that impacts people but does not involve human intervention except in a very limited way." He related that the legal focus regarding AI is on "automated decisions that impact people."
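
For readers who want a concrete picture of the "categorizer" idea, here is a minimal sketch of an algorithm that learns patterns from labeled examples and then offers decision support on new items. The data, labels, and feature values are invented for illustration; none of this comes from the panel, and real systems are far more sophisticated.

```python
# A toy "categorizer": it observes patterns in labeled examples and then
# offers decision support on new, unseen items. All data here is invented.

def centroid(points):
    """Average each feature across a list of feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical training data: feature vectors labeled by a human reviewer,
# e.g. documents scored on two relevance signals during document review.
training = {
    "responsive":     [[0.9, 0.8], [0.8, 0.7], [0.95, 0.9]],
    "non-responsive": [[0.1, 0.2], [0.2, 0.1], [0.15, 0.3]],
}

centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(item):
    """Return the label whose training examples the item most resembles."""
    return min(centroids, key=lambda label: distance(item, centroids[label]))

print(classify([0.85, 0.75]))  # -> "responsive"
print(classify([0.12, 0.25]))  # -> "non-responsive"
```

Even this toy version shows why Kaminski's caution matters: an item unlike anything in the training data still gets assigned to the nearest category, confidently and wrongly.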

These were powerful answers from people deeply involved in figuring out not only what AI is, but what it means to us all culturally, legally, and ethically. One theme throughout the discussion was, as Kaminski put it, "who is responsible for some of the decisions that AI makes?"

Artificial intelligence and liability

The panel considered various issues relating to autonomous vehicles. For example, who is responsible for the liabilities that arise when autonomous cars are programmed to make what would be considered ethical choices if made by people, such as whether to prioritize the safety of a passenger over that of a pedestrian? The use of autonomous weapons raises similar questions. One manufacturer pursuing autonomous vehicles, Volvo, has taken the position that it will be the responsible party.

But determining in the abstract where responsibilities lie is not at all the same as establishing accountability in specific instances. To get closer to that, Segalis offered an analogy to the physical evidence that is analyzed following a plane crash. The airline industry tracks the manufacturing process of parts down to the data files used to create castings for hydraulic components. Similarly, when autonomous vehicles are involved in an accident, "the data will just be there. We will know what led to that particular accident."

Data and eDiscovery

This is a powerful and useful analogy. From the perspective of eDiscovery, it is the crux of the matter. The volume of data analyzed during autonomous vehicle operation is extraordinary. The data stored in the vehicle's memory may also be vast, and it must include the results of the algorithms that operate the vehicle; it cannot be a full copy of all of the environmental data from the time leading up to the accident. Furthermore, the algorithms themselves are complex.
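
To make that point concrete, the sketch below imagines an onboard recorder that retains only a bounded window of the algorithms' recent decisions, in the spirit of a flight data recorder. The field names, capacity, and actions are all hypothetical; this is not how any particular vehicle works, only an illustration of why the stored record is algorithm output rather than raw sensor streams.

```python
# A hedged sketch of why an onboard recorder stores algorithm *outputs*
# rather than full environmental data. All names and values are invented.
from collections import deque
from dataclasses import dataclass

@dataclass
class Decision:
    timestamp_ms: int
    action: str        # e.g. "brake", "steer_left"
    confidence: float  # the algorithm's own score for its choice

class EventRecorder:
    """Keep only the most recent N decisions, like a flight data recorder."""
    def __init__(self, capacity: int = 1000):
        self.buffer = deque(maxlen=capacity)

    def record(self, decision: Decision):
        self.buffer.append(decision)  # oldest entries fall off automatically

    def snapshot(self):
        """What an investigator (or eDiscovery team) would extract."""
        return list(self.buffer)

recorder = EventRecorder(capacity=3)
for t, action in enumerate(["cruise", "cruise", "brake", "steer_left"]):
    recorder.record(Decision(timestamp_ms=t * 100, action=action, confidence=0.9))

# Only the last three decisions survive; the raw camera and lidar frames
# that drove those decisions were never retained.
print(recorder.snapshot())
```

Reconstructing *why* each recorded decision was made still requires understanding the algorithms that produced it, which is where expert analysis comes in.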

While there may be sufficient information to know what led to accidents involving AI, actually establishing the cause of an accident will require eDiscovery and expert analysis. As AI is embedded in more products, expert perspectives regarding not only the electronically stored information (ESI) in AI-enabled products, but also the algorithms that generated that ESI, will be required to determine what led to any particular accident, and, more generally, to any liability for which we are to hold AI responsible.

Once again, as new forms of ESI become both relevant and reasonably accessible, the eDiscovery industry must be prepared to support related administrative, legal and regulatory processes.
