The New Jersey Supreme Court is being very specific about the educational courses that will qualify as “technology-related subjects” under recently revised continuing legal education requirements for New Jersey licensed attorneys. In a notice to the bar published Dec. 30, 2025, the court singled out several technology hot topics, including the use of artificial intelligence during discovery and the use of internal policies to safeguard client confidential information when dealing with outside vendors.
After Jan. 1, 2027, New Jersey attorneys will be required to obtain one CLE credit in a technology-related course every two years.
New language in regulations governing the state’s Board on Continuing Legal Education would define “technology-related subjects” to include “developments in artificial intelligence (AI) and other emerging technologies affecting the overall practice of law and specific legal practice areas.”
Drilling down to specifics, the proposed regulations identified the following subjects as appropriate topics for CLE coursework:
- technologies used to gather electronic or digital evidence and authentication of that evidence for use at trial
- artificial intelligence’s potential relevance to legal research, discovery practices, brief-writing, and the preparation of court materials
- use of cybersecurity features of tools for sending, receiving, and storing digital information
- use of computer network, hardware, software, and mobile device security technologies to prevent, mitigate, and counter cybersecurity threats and data breaches, and
- the formulation and implementation of relevant internal policies for attorneys and outside vendors in both the public and private sectors, and those whom they supervise.
Comments on the proposed regulation are due Jan. 30, 2026. The new continuing legal education requirement, approved by the court in April 2025, will take effect Jan. 1, 2027.
State Demands for Technology Competence
New Jersey is one of a handful of states to align its continuing legal education requirements with the legal profession’s growing appreciation of the key role technology plays in the ethical delivery of legal services. Technology education mandates for licensed attorneys have also been adopted in California, Florida, New York, North Carolina, and the U.S. Virgin Islands. And the list of states whose bar regulators have opined on the ethical use of generative artificial intelligence is long, and getting longer.
In July 2024, the American Bar Association published Formal Opinion 512 on the ethical use of generative artificial intelligence in law practice. Just this month, the ABA’s Task Force on Law and Artificial Intelligence published a detailed account of the legal profession’s AI-related policy work so far, including a list of all state bar opinions addressing the ethical use of artificial intelligence technologies.
The Latest Wrinkle: Hallucinated Facts
The success of these bar groups’ educational efforts has been mixed, at least in the area of generative AI “hallucinations” that produce inaccurate statements of law and inaccurate citations to those same inaccurate statements of law. In fact, if you asked Judge Carlton W. Reeves of the U.S. District Court for the Southern District of Mississippi, he’d say that “hallucinations” is too kind a word for it. Judge Reeves would call them “lies,” as he did in a Dec. 30 order in Pauliah v. Univ. of Miss. Med. Ctr., No. 3:23-cv-3113 (S.D. Miss., Dec. 30, 2025), a case that involved not hallucinated law but hallucinated facts asserted in a declaration opposing a motion for summary judgment.
The declaration at issue contained AI-generated citations to deposition testimony that were, upon examination, what the court called “completely fabricated deposition testimony.” Although it appeared that the client had taken a lead role in creating the errant deposition citations, the court found the attorney’s failure to detect and correct the citations to be sanctionable. After all, the court noted, the attorney actually took the depositions in question and obtained copies of the transcripts. “Simply turning to the purported source of any of these quotations – each was accompanied by a manufactured citation to a deposition transcript, ostensibly to lend the appearance of authenticity – would have revealed them as fake,” the court wrote.
“Lawyers may be unfamiliar with artificial intelligence, the hallucinations it has shown itself prone to, or how their clients may use or misuse it,” the court wrote, adding that “lawyers are obligated to maintain the standards imposed on them as officers of the court, including forthrightness and candor. Those obligations predate artificial intelligence by centuries.”
The court ordered the attorney to pay $4,000 in monetary sanctions to opposing counsel and to attend a continuing legal education course on the topic of hallucinatory citations generated by AI in the legal field.
Mississippi’s Rules of Professional Conduct do not contain an explicit technology competence requirement, nor do the state’s mandatory continuing legal education rules contain a technology education requirement. However, in Ethics Opinion No. 267 (Nov. 14, 2024), Mississippi Bar officials said that lawyers have an affirmative duty to verify the accuracy and sufficiency of work performed by generative artificial intelligence. “As part of the lawyer’s responsible use of a GAI tool, the lawyer should determine whether there is a reasonable basis for trusting the tool’s output,” the bar opinion noted. “GAI tools perform a wide variety of tasks, and the level of trust and amount of verification depends on the circumstances.”
According to Opinion No. 267, the ethical obligation to verify the accuracy of generative artificial intelligence outputs arises from Rule 1.1, which creates the duty to provide competent representation to a client.