Though laws might be months away, smart lawyers are employing these best practices today
We’ve already shared our take on how to think about regulating AI, informed in part by last fall’s Blueprint for an AI Bill of Rights, which laid the groundwork for much-needed laws and regulations governing the use of AI. But policy experts can’t say how long it will be until those laws and regulations are on the books. And while last month’s voluntary commitment to AI safety by seven top tech companies seems like a step forward, without clear standards or a way to enforce the companies’ pledge, we seem no closer to the guardrails most people, especially professionals looking to use AI in their work, agree we need.
That doesn’t mean it’s the Wild West, though. In fact, we in the legal industry have the opportunity, and even the responsibility, to lead by modeling the right way to use AI. In few professions are the stakes so high, or is there so much to gain so quickly, for practitioners and clients alike. And given how quickly lawyers are adopting AI, we can’t afford to wait for official rules.
In March, a survey found that 88% of US lawyers and law students had heard of generative AI tools such as ChatGPT. But of those, only 32% were considering using the tools professionally, only 9% had used them for work, and only 10% believed generative AI would “transform” the practice of law.
But by July, less than four months later, in a related survey of lawyers and law students in the US, Canada, UK, and France, 43% of those who had heard of such tools planned to use them for legal work, 15% had used them for legal work, and 47%—nearly a 400% increase since March—believed they would transform the practice of law.
Clearly, lawyers must stay on top of how best to use AI in their work, and in the absence of laws governing AI use, the best principles to guide legal practice are transparency and education.
The same survey found that 45% of law firms expect to offer clients a choice about their use of generative AI, while 65% of in-house counsel clients expect to be given that choice. However, lawyers, their teams, and their clients can only make good decisions if they have all the necessary information. In other words, no one can make good decisions about the use of AI unless they know it has been, or is being, used. Following these best practices will put you on the right side of existing guidelines for ethical behavior, and once AI-specific regulations arrive, you’ll likely already be adhering to those as well:
1. Let your clients know you’re using AI.
Being forthcoming about adding this powerful new tool to your practice, especially now that most people have at least heard of generative AI, demonstrates your trustworthiness. It also shows you’re on top of the latest developments in the legal industry and willing to adopt tools and techniques that help you deliver the best possible service, all reasons clearly related to the ABA Model Rules. Better yet, give clients a quick look at how the technology works, which gives you the opportunity to offer reassurance, for example on the question of whether entering information into a tool like CoCounsel breaks attorney-client privilege. In fact, sharing client information with a legal technology vendor (e.g., eDiscovery or legal research software) does not break attorney-client privilege, as long as the vendor is acting as an agent of the attorney, the information is shared for the purpose of facilitating legal advice or services, and the vendor’s security practices are up to snuff. There are exceptions, such as when an attorney discloses privileged information for a non-legal purpose, but that is not the case when a lawyer uses Casetext products, including CoCounsel, in the ways they are intended to be used and in compliance with our Terms of Service.
2. Let your colleagues and the court know you’re using AI.
First, if you work at a firm or in an in-house counsel department, you’ll need to ensure your Knowledge Management, IT, and Security teams or vendors know when you’re adopting new software, so it can be properly vetted and securely installed. And whether you’re part of a larger team or a solo practitioner, let any outside counsel or consulting attorneys know you’re taking advantage of the power of generative AI. Even if there aren’t yet specific legal industry or legislative rules about sharing this information, as a relatively early adopter you have an opportunity to help demystify AI, which still makes a lot of people understandably jumpy. If you don’t disclose, it will seem as if there’s something to hide, when in fact, used properly, AI tools can be an enormous boon to your work and your clients’ outcomes. Some judges have even begun requiring disclosure, but it’s wise to disclose even when you’re not explicitly asked to.
3. Require anyone you work with to disclose their use of AI.
For the same reasons it’s a best practice to disclose your own use of generative AI, it’s vital to know when anyone you’re collaborating with is using it, too. Everyone on a legal team, including any consultants, such as brief-writing services or forensic accountants, should be held to the same standard. The more you work in coordination with those on your team, the better your outcomes, and you’ll only be able to help each other make the most of this powerful new technology if you know who’s taking advantage of it. You also don’t want a collaborator’s undisclosed use of AI to come back to bite you.
The amount of information available right now about generative AI is undoubtedly overwhelming, for just about anyone. But when staying on top of it is key to your profession, the pressure and stakes are even higher. These guidelines can help you focus on what matters most:
1. Know the implications of the ways you’re using AI.
In most commentary about regulating AI, attention focuses on situations in which AI is making decisions, for instance in policing or in housing and employment determinations. Paying close attention to how the AI is trained, how it works, and what biases it might bring is vital to protecting people whose lives will be affected by those decisions, both by enacting legislation to prevent unethical use and by enforcing those regulations once they’re instituted. There is much in these uses to be cautious about and potentially even alarmed by.
But in legal AI, while responsible use and regulation are still important, the tool is never meant to make decisions; rather, it extracts and surfaces information that humans use to make decisions. You will always need to review the work, not only to ensure accuracy (because even AI isn’t perfect), but also to decide how to apply the information produced and what strategy to employ. In short, to do the work that can’t be done by machine. Here, the focus is on ensuring that the tools you use are suitable for professional work: they’re grounded in the right information, keep your information separate from the underlying large language model, and ensure the security and privacy of all your data.
2. Know the differences between general-use generative AI and legal-specific (“professional-grade”) generative AI.
You can only choose the right AI tool for your professional needs if you know how to vet the solutions that are out there. Ultimately, your responsibility is to provide the best possible service to your clients while keeping their information completely confidential. So when using legal AI, you need a tool that:
- Draws upon accurate, vetted information. Solutions for professional work need to be built for law and for substantive reliability. For instance, CoCounsel can surface more accurate, on-point information than large language models accessed directly, because we’ve implemented controls that limit CoCounsel to answering from known, reliable data sources—such as our comprehensive, up-to-date database of case law, statutes, regulations, and codes, or your own database of content—or not answering at all.
- Was tested extensively before launch, continues to be tested, and generates work that’s easily checked. Casetext’s Trust Team—a dedicated group of AI engineers and experienced litigation and transactional attorneys—spent more than 4,000 hours before we launched CoCounsel running more than 30,000 real-world examples through CoCounsel, creating criteria for correct answers, and grading the results, then turning those correct answers into automated tests to run nightly. This work continues, every day. And because CoCounsel wasn’t designed to replace a lawyer, but rather to help them accomplish high-quality work in less time, it’s vital that reviewing its work is straightforward. So just as a lawyer reviews all work delegated to a junior associate or paralegal, they need to validate CoCounsel’s output. We’ve made it easy to do so: all answers link to their origin in the source documents, so it’s simple for lawyers to trust, but verify.
- Employs best-in-class security and privacy practices. CoCounsel always keeps lawyers’ and their clients’ data private and secure. Data entered into CoCounsel is subject to substantially more rigorous security controls than data entered into consumer-facing LLM-powered products such as ChatGPT. CoCounsel accesses OpenAI’s model through private, dedicated servers and through a zero-retention API. This means OpenAI cannot store any customer data longer than required to process the request, cannot view any of that data, and cannot use any of it to train the AI model. Users always retain control over their data and can remove it completely from the platform at any time.