Recent AI Developments And The Practice Of Law—An Update

Manatt, Phelps & Phillips, LLP

Artificial intelligence (AI) developments over the past several weeks continue to affect the practice of law. Since my article “Recent AI Developments—Can AI Replace Attorneys?” was published on March 14, 2023, a number of new generative AI technologies have been introduced that bear on the legal profession. While these technologies still cannot replace attorneys, they give attorneys improved generative AI tools to use.

OpenAI, the company behind the Generative Pre-trained Transformer (GPT)-3.5 technology that powers ChatGPT, introduced a new version of that technology on March 14: GPT-4, now available through ChatGPT Plus. Like GPT-3.5, GPT-4 is a generative AI technology that can power chatbots, search engines and other computer systems, and it learns by analyzing huge amounts of data from the Internet. Generative AI is a type of artificial intelligence that can create new content, including text, images, video, audio, code and simulations.

GPT-4 is more advanced than its predecessor: it handles more complex tasks, accepts image inputs in addition to text inputs, and was trained on significantly more data (The New York Times, “OpenAI Plans to Up the Ante in Tech’s A.I. Race”).

OpenAI stated on its website that GPT-4 is “more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5.” For example, OpenAI claims that GPT-4 passed a simulated bar exam with a score around that of the top 10% of test takers; GPT-3.5’s score was around the bottom 10%. GPT-4 received a score in the 88th percentile on the Law School Admission Test (LSAT), compared to a score in the 40th percentile received by GPT-3.5.

Microsoft also announced on March 14 that its new Bing search engine runs on GPT-4, which Microsoft customized for the search engine. On March 16, Microsoft introduced Microsoft 365 Copilot, a new generative AI integration that leverages GPT-4 to bring automation to Microsoft Word, PowerPoint, Excel, Outlook, Teams and more. Google announced on March 21 that it had released a chatbot called Bard, powered by Google’s Language Model for Dialogue Applications (LaMDA) technology.

To test these new generative AI technologies in the legal context, I asked ChatGPT Plus (based on GPT-4) the same question that I had asked ChatGPT (based on GPT-3.5): Why is truth in advertising important? Here is the first part of the response:

Advertising is an essential part of modern business, allowing companies to promote their products and services to consumers in a competitive marketplace. However, as advertising has become increasingly sophisticated and targeted, there has been growing concern about the need for truth in advertising.

Truth in advertising refers to the ethical principle that advertising should be accurate, honest, and not misleading. It is the responsibility of advertisers to ensure that the claims they make about their products or services are truthful, and that any images or descriptions used in advertising accurately represent what is being sold.

It is interesting that ChatGPT Plus’s answer characterized truth in advertising as an ethical principle; ChatGPT, by contrast, placed more emphasis on the legal consequences of false advertising.

I also asked the new Bing (based on GPT-4) why truth in advertising is important. Here is the response:

Truth in advertising is important for several reasons. One of the most important reasons is that it manages expectations. You want potential customers to know exactly what you’re offering. If they show up and it turns out you promised more than you can deliver, they’re going to leave, and it’s going to hurt your reputation.

Another reason is that it helps build trust with consumers. The fastest way to break a customer’s trust is to promise something in your advertising that does not hold true the second they get their hands on the product.

The Federal Trade Commission (FTC) looks especially closely at advertising claims that can affect consumers’ health or their pocketbooks—claims about food, over-the-counter drugs, dietary supplements, alcohol, and tobacco and on conduct related to high-tech products and the Internet.

That is a fairly good response: it frames truth in advertising in practical, ethical and legal terms. It is closer to the answer you would get from an attorney, though it provides much less legal analysis.

Why It Matters

Even with the introduction of these new generative AI technologies, attorneys remain essential for providing legal services to clients. These recent generative AI developments can, however, serve as useful tools for attorneys. Attorneys must decide which generative AI tool is best suited to a task and how to frame the right queries, and must assess the relevance, quality and accuracy of the responses. Attorneys must also ensure that client confidentiality is protected whenever generative AI tools are used.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Manatt, Phelps & Phillips, LLP | Attorney Advertising
