Ethical AI Guideposts for Lawyers Using Generative AI

By Ralph Artigliere
Image: The Hon. Ralph Artigliere (ret.) and AI tools.

Introduction

Generative AI can autonomously generate content such as text, images, and even entire media. These AI systems are designed to produce new data or content that is often indistinguishable from what a human might create. Since Generative AI stormed onto the world stage, garnering hundreds of millions of users and spawning a flurry of powerful uses, its demonstrated faults and misuse have generated a myriad of concerns for legal professionals, regulators, governments, and organizations. Fear of hallucinations, bias, misuse, abuse, and more has caused judges and organizations to ban or qualify the use of AI, governmental bodies to adopt laws and regulations, and lawyers and law firms to shy away from using AI models. See Hon. Xavier Rodriguez, Artificial Intelligence (AI) and the Practice of Law, 24 The Sedona Conference Journal 782 (Sept. 2023).

Using AI in law practice requires an understanding of how to use it effectively while mitigating dangers and downsides. Almost immediately after OpenAI and others introduced public versions of large language and other generative AI models, lawyers made blunders in using Generative AI products, such as submitting briefs drafted by AI without checking the citations, only to find that the cases cited were hallucinations. See, e.g., Mata v. Avianca, No. 22-cv-01461, 2023 WL 3698914 (S.D.N.Y. May 26, 2023) (lawyers sanctioned for citing nonexistent cases that were hallucinations by ChatGPT in an unverified brief to the court). Those errors alone prompted corrective measures from judges, ranging from standing orders that ban the use of ChatGPT products for court submissions, to orders requiring disclosure of such use, to orders requiring lawyers to check their work product and stand behind it.

Now The Florida Bar is considering whether to establish advisory guidance on the ethical use of AI by Florida lawyers, addressing whether a lawyer is required to obtain a client’s informed consent to use generative AI in the client’s representation, along with four other distinct issues involving the use of AI. Transparency with clients and ethical safeguards on danger points are understandably important, but is The Florida Bar’s approach a good way to deal with the issues and potential dangers of Generative AI, or a bad idea?

Care should be taken before resorting to restrictive measures. It is early, the landscape of generative AI products is fluid, and I am in the camp with Judge Paul Grimm, Maura Grossman, and Daniel Brown, who point out that ad hoc orders may discourage reasonable and appropriate use of GenAI that makes the legal process more accessible and the practice of law more efficient. The learned authors argue that existing rules of procedure and professional conduct already prohibit the misconduct exhibited in those cases, without the overregulation and disadvantages of orders banning use or requiring disclosure. See Is Disclosure and Certification of the Use of Generative AI Really Necessary?, Judicature, Vol. 107, No. 2, October 2023.

Judge Xavier Rodriguez, a learned U.S. District Judge in the Western District of Texas, eloquently encapsulated the problem of judicial over-regulation in response to generative AI missteps:

Some judges (primarily federal) have entered orders requiring attorneys to disclose whether they have used AI tools in any motions or briefs that have been filed. This development first occurred because an attorney in New York submitted a ChatGPT-generated brief to the court without first ensuring its correctness [Mata case referenced above]. The ChatGPT brief contained several hallucinations and generated citations to nonexistent cases. In response, some judges have required the disclosure of any AI that the attorney has used. As noted above, that is very problematic considering how ubiquitous AI tools have become. Likely these judges meant to address whether any generative AI tool had been used in preparing a motion or brief. That said, if any order or directive is given by a court, it should merely state that attorneys are responsible for the accuracy of their filings. Otherwise, judges may inadvertently be requiring lawyers to disclose that they used a Westlaw or Lexis platform, Grammarly for editing, or an AI translation tool.

24 The Sedona Conference Journal at 822.

Overbroad orders also have a chilling effect on the positive use of needed products, implicating access to justice concerns, and can compel needless disclosure of attorney work product.

In a similar manner, caution should be exercised in other types of regulation. Governmental, institutional, and organizational rules, laws, and regulation of AI in general can be overbroad and stifle creativity and the valuable advantages that may be achieved through reasonable use of Generative AI and large language models. Some organizations are banning their use altogether by employees, associates, and independent contractors.

Every organization will need to decide which safety measures to put in place beyond compliance with the laws, rules, and regulations that may emerge, but banning AI use is draconian. Responsible use of emerging AI technology in the legal profession is far too diverse, powerful, and beneficial to ban or overregulate. In the legal arena alone, the potential of ChatGPT to permit individuals and entities of varied aptitudes to enhance their ability to express themselves and advocate their cause implicates access to justice and access to effective representation in the legal process.

Used properly as a tool complementing human ingenuity, generative AI can be a game-changer in the effectiveness of communication, which is a foundational skill for all law practice. Perhaps more importantly, the term artificial intelligence encompasses such a broad and diverse field of tools, in a state of unparalleled expansion, that nonspecific rules grounded in law or ethics may unintentionally sweep in products, models, and tools that are developed and used in ways that exhibit none of the characteristic dangers overbroad regulation or guidance is intended to avoid. Narrowing the targeted conduct to use of Generative AI or large language models is still overbroad, since the types of uses and the myriad products that fall within those categories are expansive and diverse.

Understandably, abuse and misuse issues need to be addressed. Responsible use of AI entails knowledge of the tools and their potential problems and the intent and ability to use them properly. This is a sizable ask given the novelty, complexity, and changing landscape of the AI world. But for lawyers, it is well worth investing the time and effort to learn to use these products safely and effectively. Lawyers have the ability to learn and undertake the responsible use of these powerful products and should be allowed to do so.

Assuming AI is here to stay, with unquestioned benefits to offer but known downsides and issues, what is the best mechanism for legal professionals and those who regulate them to ensure guardrails or guideposts are in place and that lawyers and law firms stay within the lines? I prefer the analogy of guideposts or beacons in the form of education, reserving reasonable control and the freedom to operate to professionals.

As with most nascent issues that arise in the legal profession, obvious guideposts are found in the ethical obligations that bind and guide us in the Rules of Professional Responsibility. There is always the question of whether existing ethical requirements and their attendant commentary are sufficient guides for lawyers. If further guidance is needed in the form of ethical opinions, especially with regard to evolving or emerging technology, care must be taken to ensure that the guidance is solidly based on real dangers and required protective measures rather than perceived or transient ones. Never has this been truer than with the emergence of tools based on AI.

The Moving Target

New products are developed to maximize their utility and uniqueness and to make them attractive to a broad range of users, and there are potential pitfalls for those professionally or legally responsible for confidentiality, safety, accuracy, and bias avoidance. This was exquisitely demonstrated earlier this year when hallucinations in a public ChatGPT product resulted in the submission of briefs to courts that included false citations. There were likely many missteps that were not so prominent, or even known, as lawyers without a true understanding of large language models and how to use them tried the newly available models. The response from the judiciary has included prohibitions on using AI in court submissions, requirements to disclose the use of AI, and requirements that lawyers take responsibility for the content of submissions whether or not AI was used. What is the best approach?

The burgeoning development of AI products, including generative AI language products like ChatGPT as well as generative AI for creating images, video, and sound, is in its infancy: public release is less than a year old, and new products and improvements to existing models are occurring virtually daily. Many of the products are being enhanced to control or eliminate known and emerging problems. Others are developed and tailored to suit the requirements of specific uses, like legal applications. Products for the legal market should contain safeguards that eliminate the issues lawyers fundamentally need to avoid. The attractiveness of a product is impacted substantially by its safety with respect to issues specific to the field of law, and product developers are already motivated to adjust and modify design, development, and instructions on use to minimize relevant problems.

The bigger problem with overbroad prohibitions or regulation is that AI comes in so many forms and uses. Many AI products can be used quite safely. Others may not even exhibit the defects or issues that the regulation is designed to avoid. Further, many lawyers are likely using AI-enhanced products without even knowing it, in conjunction with word processing, search engines, Westlaw, Lexis, email clients, and other everyday applications.

Nonetheless, ChatGPT does not understand ethical or moral standards and will not govern itself to conform to them. That is a uniquely human attribute, and it requires the user to question anything generated by the model for lies, hallucinations, bias, and the like. Products that are not tailored to eliminate problems, or are not used in conjunction with software that performs that function, will need to be used with the care due under the circumstances. For example, using products for time-saving tasks in the office may not implicate any of the risks. Using generative AI for research and drafting may require protection of confidential and private information and human verification of output.

The common denominator for the professionally sound use of any AI product is understanding the product well enough to obtain the benefits of its use while identifying and eliminating the risks. That is a task well-suited to the legal professional bound by ethical guidance.

Solving the Problem: Reasonable Regulation and Ethics as the Touchstone

Ethical limitations on human behavior can often be more favorable than regulations or laws when it comes to preserving creativity and freedom and enabling innovation. Though laws and regulations aim to protect society, an overabundance of rigid rules that are slow to evolve with changing circumstances can stifle creativity and the advancement of ideas. In contrast, an ethical framework that promotes values such as honesty, responsibility, and compassion allows for more flexibility. It appeals to professional responsibility and the capability to apply due regard to fundamental principles in an evolving technological landscape, rather than demanding behavioral compliance with unbending and often outdated rules.

The profession of law is imbued with the ethical and professional responsibilities that come with the revocable license to practice law and the oath administered to newly admitted lawyers. Each lawyer, and the members of the Bar collectively, are responsible for the adherence of all lawyers to the standards of the profession. This is akin to the function of a constitution, fundamentally binding all of us. Laws, regulations, principles, and guidelines take time to develop and are not as effective at dealing with emerging and complex issues like those arising virtually every day in the explosive environment of AI. Ethical standards and professionalism are uniquely suited to govern the range of issues while allowing legal professionals to tap the enormous potential of new products, models, and applications.

Key advantages of emphasizing ethics over regulation include:

– Ethics solutions allow for nuance and consideration of context, whereas regulations are uniformly applied regardless of circumstances. There is room for reasonable discretion.

– Ethics solutions are adaptive and evolve with changing social norms and new dilemmas. Regulations struggle to keep pace with the times.

– Ethics solutions enable cooperative, good-faith efforts to tackle complex challenges.

The intent here is not to dismiss the importance of laws and regulations entirely. Some baseline rules and government oversight are certainly beneficial for a functioning society overall. However, when regulations become overbroad, intrusive, cumbersome, and excessive, they can do more harm than good. An emphasis on ethics serves the same goals of social order and justice while retaining space for human creativity and progress to thrive. The most successful societies strike a balance between moral wisdom and pragmatic rules.

The Florida Ethical Guidance Proposal

According to the Florida Bar website, The Florida Bar’s Board Review Committee on Professional Ethics has been asked by the Florida Bar Board of Governors to consider adopting a proposed advisory opinion addressing several issues relating to the use of AI by lawyers. This effort has generated interest outside the state. See Michael Berman, Florida Bar Weighs Whether Lawyers Using AI Need Client Consent, EDRM/JDSupra (Florida Bar Weighs Whether Lawyers Using AI Need Client Consent | EDRM – Electronic Discovery Reference Model – JDSupra).

Florida’s effort to generate guidance is a good idea in concept. It is based on the ethical commitments of professionals and is to be guidance rather than hard and fast rules. The announcement calls for comments by December 1, 2023. Ideally, the process will yield measured guidance that allows lawyers to rely on their own judgment and ethical compasses while retaining the freedom to use a full range of tools as they become available. But that will not be easy. The proposed guidance has a five-issue framework, but at the current time there are no drafts or details for commenters to consider.

The proposed advisory opinion will address:

  1. Whether a lawyer is required to obtain a client’s informed consent to use generative AI in the client’s representation;
  2. Whether a lawyer is required to supervise generative AI and other similar large language model-based technology pursuant to the standard applicable to non-lawyer assistants;
  3. The ethical limitations and conditions that apply to a lawyer’s fees and costs when a lawyer uses generative AI or other similar large language model-based technology in providing legal services, including whether a lawyer must revise their fees to reflect an increase in efficiency due to the use of AI technology and whether a lawyer may charge clients for the time spent learning to use AI technology more effectively;
  4. Whether a law firm may advertise that its private and/or in-house generative AI technology is objectively superior or unique when compared to those used by other lawyers or providers; and
  5. Whether a lawyer may instruct or encourage clients to create and rely upon due diligence reports generated solely by AI technology.

This list includes potentially far-reaching and somewhat unrelated issues. Consider whether each targeted area is truly ripe for regulation and specific guidance beyond existing rules. The Rules of Professional Conduct already cover what is under consideration and more. See Rule 1.1 (competence and staying current with technology), Rule 1.4(2) (“reasonably consult with the client about the means by which the client’s objectives are to be accomplished”), Rules 5.1-5.3 (proper supervision and training of subordinates), Rule 1.6 (confidentiality and privacy), Rule 1.5 (reasonable fees), and Rule 8.4 (bias and discrimination). For a perspective on the ethical use of AI in eDiscovery, refer to the excellent white paper recently published by EDRM, Professional Responsibility Considerations in AI for eDiscovery: Competence, Confidentiality, Privacy, and Ownership, found at AI Ethics & Bias – EDRM.

Notwithstanding the existence of comprehensive general rules of conduct, guidance from The Bar on ethical issues, especially novel and emerging issues, and context for their application, is a fundamental and valuable function. Advisory opinions alert, educate, and guide attorneys on important issues. Lawyers should welcome the guidance and attendant dialogue created by a well-reasoned, informed, and narrowly crafted advisory opinion on one or more of the issues, assuming the issue is ripe for decision making.

Given the diligence with which The Florida Bar undertakes these measures, those who develop and recommend the guidance to the Board of Governors will have input from experts in the technology and in the use of the models and products implicated by the guidance. In this case, it is imperative that the guidance contain accurate descriptions of the products and specify the types of uses affected by the guidance.

Comments and feedback by knowledgeable and potentially impacted persons are welcome in the process. I personally intend to send comments to The Florida Bar. The eDiscovery and technology legal community worldwide is a tremendous, sharing, brilliant group. I hope many of you will submit comments, as the Florida Bar’s work may be groundbreaking and may impact those jurisdictions that adopt similar guidance. Comments should be submitted to Jonathan Grabb, Ethics Counsel, The Florida Bar, 651 E. Jefferson Street, Tallahassee 32399-2300, and must be postmarked no later than December 1, 2023.

Conclusion and Call to Action

The intent here is not to dismiss the importance of laws and regulations. Some universal rules and government oversight of such impactful products are beneficial for a functioning society. However, intrusive, cumbersome, and excessive regulations can do more harm than good. Reliance on ethical responsibility is measured, reasonable, and consistent with the profession of law, and it gives legal professionals and their clients the full power and benefits of the products.

Applying ethical guidance serves the same goals of social order and justice while retaining space for human creativity and progress. Lawyers, as professionals with proper guideposts, should be up to the task. However, the most successful professional regulation of a complex and dynamic field requires focused, current, and necessary guidance after considered deliberation. When given the opportunity to participate in the process, we need to do so. I urge lawyers and other interested persons and organizations to become informed and to comment on the Florida proposal and on any proposed ethical guidance on these issues in your respective jurisdictions.
