Your AI Legal Strategy Framework


Part 1: Separating the Signal from the Noise with the Help of “Bootleggers and Baptists”

Summary. Law firm leaders face a mountain of AI[1] noise at the extreme ends of the fear-and-greed continuum, drowning out the real story. But this phenomenon is familiar, having accompanied other significant technological introductions. A framework for assessing the most relevant ideas is the best way to shape your AI strategy, and you need to develop that strategy now.

Introduction

The AI hype today is ablaze with both doomsayers and opportunists. Despite the frenzied panic, most law firm leaders feel some compulsion to respond by learning about AI’s strategic implications. The challenge is figuring out what is essential for each firm and acting on it.

Marc Andreessen’s recent blog post, “Why AI Will Save the World,” offers a helpful perspective. It outlines why we must embrace AI and apply its virtues to society while managing its risks, without being stifled by the mania and panic stoked by policymakers and market participants.[2] Andreessen wrote for businesses generally, without referencing legal services, so I will draw the analogies and make recommendations for law firms.

Today, I will focus on separating relevant ideas from the noise and hype. Then, in Part 2 (coming in a few weeks), I will provide a strategic AI framework to help you develop action plans for pursuing this exciting technology.

AI Co-Pilots, Assistants, and Augmentation

Andreessen eloquently describes how AI will provide human augmentation for great things like tutors for children, assistants for busy managers, and researchers for scientists. He describes a golden age for the arts and sciences. Even in the tragic event of military conflict, he theorizes that this technology may aid in more rational decision-making to resolve conflict faster by providing clarity in the fog of war.

It is easy to draw similar analogies to co-pilots in the research and practice of law. New tools are being launched (e.g., Harvey AI, Casetext, LawDroid), providing on-ramps for law firms to deploy generative AI models.

Many writers are projecting the transformation of legal services via generative AI. Two articles of note are The Economist’s June 6 article, “Generative AI could radically alter the practice of law,”[3] and the Goldman Sachs economic report forecasting that half of today’s legal tasks may be automated via generative AI.[4] Our recent LawVision discussions and surveys with law firm leaders yield a range of views and timetables on the expected speed and magnitude of change to the law firm model.

AI Moral Panic: Ghost Stories Are Contagious

One downside Andreessen sees is the moral panic surrounding the AI opportunity. Andreessen stated that AI is “shot through with hysterical fear and paranoia.” For more, see MIT Technology Review’s June 19 article, “How existential risk became the biggest meme in AI,”[5] and Harvard Business Review’s article, “The AI Hype Cycle Is Distracting Companies.”[6]

Andreessen observed that moral panics are the norm rather than the exception, citing reactions to past societal transformations like electricity, automobiles, and radios.[7]

Andreessen should also have mentioned calculators. When they were introduced, people feared they would lead to a decline in mental math skills, making users overly dependent on technology for simple calculations. Mathematician and accountant jobs were on the chopping block, and educators worried that calculators would hinder the learning process.

Sound familiar? I would add that the moral panic also includes a chorus of FOMO: the fear of eating last at the AI table, left only with scraps, and thereby dying a quick organizational death for failure to invest.

Separating the Signal from the Noise

Much of today’s confusion stems from diverse stakeholders with cross-over views and varying motivations. Andreessen explains today’s AI moral panic via an economic theory labeled “Bootleggers and Baptists.”[8]

Under this theory, “Baptists” are:

“the true believer social reformers who legitimately feel – deeply and emotionally, if not rationally – that new restrictions, regulations, and laws are required to prevent societal disaster.”

Whereas “Bootleggers” are:

“the self-interested opportunists who stand to financially profit by the imposition of new restrictions, regulations, and laws that insulate them from competitors.”

The theory originated during alcohol prohibition in the United States. During that time, Baptists strongly supported the ban on alcohol due to moral and religious reasons. Bootleggers, on the other hand, profited from the illegal production and distribution of alcohol. Despite their differing motivations, both groups found themselves supporting the same policy of alcohol prohibition.

Relatedly, in an article published by the Cato Institute, Scott Lincicome makes a compelling case for why we should be skeptical of today’s AI Bootleggers (my words) advocating for new AI regulation.[9] He persuasively details the long history of incumbents demanding regulation, ostensibly for societal protection. In reality, these positions are well-thought-out strategies meant to sustain the status quo and increase competitive advantage by blocking new market entrants or hurting competitors.[10]

Can you trust anyone? Yes, but you must verify. I recommend instituting the intelligence stream described below.

Setting Up Your AI Strategy Information Stream

One of the first steps your firm should take is to establish an active stream of strategic AI intelligence, supplying an ongoing flow of relevant topics to act on.

I propose creating a strategic intelligence team, one that is action-driven rather than ideation-driven. The team’s mission is to lead the timely gathering of insights to help shape an accurate narrative, educate your fellow partners, lead them in discourse, and drive them to timely action. It is not an easy task. The steps include (a sketch of one way to automate the intelligence gathering itself appears after the list):

  1. Developing a mission statement for the AI Strategic Intelligence Team: “Our mission as the AI Strategic Intelligence Team is to serve as the core of a strategic steering committee dedicated to leveraging emerging technologies to shape and direct our firm’s trajectory.”
  2. Assembling the team with crucial law firm stakeholders, including key partners and data, strategy, and risk leaders.
  3. Establishing or enhancing data ethics policies and rules. Develop comprehensive guidelines for data gathering and evaluation that ensure confidentiality and ethical strategic assessment.
  4. Adopting a scientific, data-skeptical posture, exercising caution over claims without sufficient evidence while scrutinizing the motivation and incentives of information sources. Be thorough but not panicked.
  5. Recognizing deliberate attempts, driven by bootlegger motives, to instill fear.
  6. Casting a wary eye toward anyone who speaks with definite knowledge of the future.
  7. Encouraging debate and focusing on crucial AI topics. Foster discussion within the intelligence team on ethical considerations, intellectual property, privacy, security, critical AI applications, tools, competitive intelligence, AI enablement, and business model implications.
  8. Emphasizing knowledge sharing and actionable outcomes by highlighting learnings, observing key trends, and translating these into strategic, actionable plans.
  9. Striving for an active AI-educated law firm by promoting a culture of continuous learning and education about AI technologies.
  10. Driving to mobilize and act.
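
For firms that want to jump-start the gathering itself, a small script can seed the team’s weekly review. The sketch below (in Python) is a hypothetical illustration rather than any established tool: it polls a few public RSS feeds and flags items mentioning the team’s watch topics. The feed URLs and keywords are placeholders to replace with your own sources, and it assumes the third-party feedparser package is installed.

    # Hypothetical sketch: seed the AI Strategic Intelligence Team's weekly
    # review by polling public RSS feeds and flagging on-topic items.
    # Requires the third-party "feedparser" package (pip install feedparser).
    import feedparser

    # Placeholder sources and watch topics -- substitute your firm's own.
    FEEDS = [
        "https://hbr.org/feed",                    # placeholder feed URL
        "https://www.technologyreview.com/feed/",  # placeholder feed URL
    ]
    WATCH_TOPICS = ["generative ai", "legal", "regulation", "law firm"]

    def gather_items(feeds, topics):
        """Return (source, title, link) tuples for entries whose title
        or summary mentions any watch topic."""
        hits = []
        for url in feeds:
            parsed = feedparser.parse(url)
            source = parsed.feed.get("title", url)
            for entry in parsed.entries:
                text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
                if any(topic in text for topic in topics):
                    hits.append((source, entry.get("title", ""), entry.get("link", "")))
        return hits

    if __name__ == "__main__":
        for source, title, link in gather_items(FEEDS, WATCH_TOPICS):
            print(f"[{source}] {title}\n    {link}")

The flagged items become raw material for the debate and knowledge sharing in steps 7 and 8; the filtering deliberately stays simple so that the team, not the script, judges what matters.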

Part 2 will discuss charting your strategic AI framework. 


[1]  According to Google’s Bard, Artificial Intelligence is the use of computer technology to simulate human intelligence to perform tasks that would otherwise require human intervention.

[2] Marc Andreessen. (2023, June 6). Why AI Will Save the World. a16z Blog. https://a16z.com/2023/06/06/ai-will-save-the-world/. In his post, Andreessen responds to five risks commonly raised about AI: (1) that it will kill us, (2) that it will ruin society, (3) that it will take our jobs, (4) that it will lead to crippling inequality, and (5) that it will lead to people doing bad things.

[3] The Economist. (2023, June 6). Generative AI could radically alter the practice of law.

[4] Goldman Sachs Economic Research. (2023, March 26). The Potentially Large Effects of Artificial Intelligence on Economic Growth. https://www.key4biz.it/wp-content/uploads/2023/03/Global-Economics-Analyst_-The-Potentially-Large-Effects-of-Artificial-Intelligence-on-Economic-Growth-Briggs_Kodnani.pdf

[5] MIT Technology Review. (2023, June 19). How existential risk became the biggest meme in AI. https://www.technologyreview.com/2023/06/19/1075140/how-existential-risk-became-biggest-meme-in-ai/

[6] Harvard Business Review. (2023, June). The AI Hype Cycle Is Distracting Companies. https://hbr.org/2023/06/the-ai-hype-cycle-is-distracting-companies

[7] Electricity brought safety fears over possible electrocution and fires, concerns about health effects, and apprehension about job losses in traditional trades like gas lighting and candle making. Automobiles introduced fears about accidents and pedestrian crashes and a propensity to encourage dangerous behaviors like speeding and reckless driving. With radio, people worried it could more readily spread propaganda or false information, enable unauthorized eavesdropping, and lead to a decline in live performances.

[8] Bruce Yandle. (1999). Bootleggers and Baptists in Retrospect. Regulation. Often cited as one of the seminal works on the concept, the paper explores the origins and implications of the Bootleggers and Baptists coalition and its impact on policymaking.

[9] Scott Lincicome. (2023, May 24). The Worst Possible Reason to Support New AI Regulation. Cato Institute. https://www.cato.org/commentary/worst-possible-reason-support-new-ai-regulation

[10] Examples include Amazon lobbying for an increase in the minimum wage to hurt Walmart, banks calling for regulation of their fintech competitors, and Exxon lobbying to roll back methane regulations that hurt small producers the most.

Written by:

LawVision