Applying today’s legal ethics to today’s AI (part 2)


To ensure ethical AI use, lawyers should look to today’s ethics rules 

In part 1 we examined three different types of generative AI that have become available in the year since GPT-4 was launched, including specific-use AI made for legal practitioners, such as CoCounsel. Part 2 examines how existing rules of professional responsibility related to competence, diligence, communication, and candor might apply to the use of AI. 

Before we dive into specific rules, it’s important to note that ethical use of generative AI is predicated on the understanding that this technology is a legal assistant, not a lawyer. Lawyers must exercise the same caution with AI-generated work as they would with work produced by a junior associate or paralegal. In each case, it’s essential to use independent judgment to review and finalize the work product.

Ethical use of generative AI also assumes that:

  • The AI is developed responsibly;
  • The user understands how the AI works, including what it can and cannot do; and 
  • The user is always in control of the technology and accountable for its use. 

Thus the responsibility for using AI ethically falls on both the legal professionals who employ it and the developers who create the technology. Developers should take measures to educate users on how the AI works, and users must be intentional about learning the AI’s capabilities and about their ongoing obligation to use AI ethically and responsibly.

To that end, we explore how today’s rules of professional conduct—the ABA’s Model Rules of Professional Conduct, specifically—apply to lawyers’ use of legal AI. 

Rules 5.1 & 5.3: Partner/Supervisory Lawyer Duties Regarding Nonlawyer Assistance

Under Rule 5.1 (Responsibilities of Partners, Managers, and Supervisory Lawyers) and Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance) of the ABA’s Rules of Professional Conduct (“RPC”), lawyers are required to oversee both the lawyers and the nonlawyers who help them provide legal services, and to ensure that their conduct complies with the RPC.

Notably, Rule 5.3’s language covers responsibilities regarding nonlawyer “assistance,” rather than “assistants,” a critical change to the Rule’s title made in 2012. The effect of this change was to expand the ethical obligation to non-human assistance, including the work generated by technology (such as legal AI) that’s used in the provision of legal services.

The bottom line is that non-human legal assistance is within the scope of the ABA’s rules, and you must supervise an AI legal assistant just as you would any other legal assistant.

Rule 1.1: Competence

A lawyer’s duty to be technologically competent is recognized in Rule 1.1 of the ABA’s RPC, which requires lawyers to provide competent representation to a client. The duty of technological competence is specifically set forth in Comment 8 to the rule, which states that to maintain the knowledge and skill necessary for competent representation, a lawyer should “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology,” and do so by way of continuing education.

Note that the expectation that lawyers keep up with new technology (such as AI) is built into the language “keep abreast of changes.” Comment 8, which was part of the ABA’s 2012 amendments to the RPC, was added in light of cloud computing and technology such as smartphones and tablets, which were becoming increasingly widespread in law practice.

Since then, the ABA has turned its attention to AI specifically. In 2019, it adopted Resolution 112, urging courts and lawyers to address the ethical and legal issues related to AI use, including “bias, explainability, and transparency of automated decisions made by AI” and the “controls and oversight of AI and the vendors that provide AI.”

The takeaway is that the duty of technological competence requires an understanding of relevant technology, and in today’s world that includes AI. Make an effort to engage in learning opportunities, such as webinars and CLEs, so you understand the “benefits and risks” associated with AI.

Rule 3.3: Candor Toward the Tribunal

Rule 3.3 sets forth the special duties of lawyers as officers of the court, including the obligation to “avoid conduct that undermines the integrity of the adjudicative process.” Comment 2 to Rule 3.3 states lawyers “must not allow the tribunal to be misled by false statements of law or fact or evidence that the lawyer knows to be false,” while Comment 3 provides that lawyers are “responsible for pleadings and other documents prepared for litigation.”

An example of a failure to follow these rules when using general-use generative AI in practice can be found in Mata v. Avianca, more widely known as the “ChatGPT lawyer” incident. In short, plaintiff’s counsel filed a brief in federal court (the S.D.N.Y., no less) filled with citations to nonexistent case law. When confronted by the judge, the lawyer explained he’d used ChatGPT to draft the brief and claimed he was unaware the AI could hallucinate cases (despite the disclaimer directly beneath the chat box).

The judge didn’t take kindly to the lawyer’s attempt to shift blame to ChatGPT. It’s clear from the court’s decision that misunderstanding technology isn’t a defense for misusing it, and that the lawyer was still obligated to verify the cases cited in the documents he filed with the court.

There are several ways this situation can be avoided. First and foremost, don’t rely on general-use AI such as ChatGPT, which, as we explained in part 1, doesn’t draw from a reliable source of law. Use legal AI instead, precisely because it is limited to a reliable, up-to-date source of information. CoCounsel, for example, draws from Casetext’s database of case law, statutes, and regulations. It also shows its work by providing links to the cases it cites, making its output easy to verify.

Second, understand the risks of AI before using it (see Rule 1.1 regarding technological competence, above), and check the veracity of the AI’s output (as required by Rules 5.1 and 5.3). Finally, such debacles can be avoided by disclosing AI use to the court.

Rule 1.4: Communications

It’s a good idea to disclose AI use to clients, too. Comment 1 to Rule 1.4 on client communications states: “Reasonable communication between the lawyer and the client is necessary for the client effectively to participate in the representation.” Comment 3 provides that the Rule “requires the lawyer to reasonably consult with the client about the means to be used to accomplish the client’s objectives.”

How should these rules be applied in practice? If you’re using AI in the provision of legal services to your clients, explain your use to them. Be transparent with clients about how you’ll use AI—and be ready to explain how it works, and address any privacy and security concerns.

Additionally, there are several ways to disclose your AI use. One option is to include the disclosure in fee agreements or retention letters to clients.

Some firms, though, treat AI as just another form of technology and don’t single it out: in most terms and conditions, privacy policies, and engagement letters, “technology” is an umbrella term rather than an introduction to an app-by-app list. Even so, any firm that takes this approach should still be ready to give a thorough answer to anyone who asks about its AI use.

Rule 1.6: Confidentiality of Information

Comment 2 to Rule 1.6 states lawyers must not reveal information relating to the representation of a client unless they have the client’s informed consent, while Comment 18 requires lawyers to

“act competently to safeguard information relating to the representation of a client against unauthorized access by third parties and against inadvertent or unauthorized disclosure by the lawyer or other persons who are participating in the representation of the client.” 

This rule comes into play not only when using AI, but also when selecting it. Generative AI is built on a network of technologies and partnerships, such as cloud storage and third-party data-processing agreements. To that end, look for legal AI that is private, secure, and built by experienced developers, such as CoCounsel, which is carefully engineered to minimize security and data privacy risks.

Legal AI is meant to serve as a legal assistant, not as a substitute for a lawyer, and lawyers should look to existing ethics rules to help guide their use. By choosing reliable, specific-use AI and using it responsibly, lawyers can tap into this powerful technology to improve their practice and better serve their clients.
