Deputy Attorney General Lisa Monaco Announces DOJ’s Approach to AI

Morrison & Foerster LLP

On February 14, 2024, Deputy Attorney General Lisa Monaco announced two important developments concerning artificial intelligence (AI) and the work of the U.S. Department of Justice (DOJ). First, Monaco announced that federal prosecutors will now seek longer sentences for defendants convicted of crimes involving the use of AI. Second, DOJ will conduct an internal evaluation, called Justice AI, to determine how DOJ can deploy AI safely and ethically in performing its work. As we anticipated in our regulatory, compliance, and enforcement predictions for 2024, these two pronouncements confirm that DOJ has turned its attention toward AI.

Monaco described AI as “a double-edged sword” and perhaps “the sharpest blade yet,” able to assist both law enforcement in investigating potential criminal conduct and individuals in committing criminal acts. Monaco emphasized that DOJ has already used AI to trace sources of opioids and other drugs, triage tips received by the Federal Bureau of Investigation, and synthesize large volumes of electronic evidence collected in some of DOJ’s more significant cases, including prosecutions stemming from the January 6, 2021 attack on the U.S. Capitol. But she also explained that AI has many risks, including presenting a special threat to the security of upcoming elections, both in the U.S. and abroad, as well as potentially perpetuating discrimination, price-fixing, and identity theft.

To deter the use of AI in criminal conduct, federal prosecutors are now instructed to seek more significant prison sentences for crimes “made significantly more dangerous” by the involvement of AI. Monaco justified the pursuit of longer sentences for defendants who use AI to facilitate their criminal conduct in part by noting that enhanced penalties are also sought for defendants who use certain other instrumentalities, for example, firearms that “enhance the danger of a crime” and thus warrant “more severe” sentences. Currently, recommended sentences are calculated by consulting the U.S. Sentencing Guidelines (Guidelines), which contain certain enhancements that can be added to the base offense level for particular federal crimes. While the Guidelines do not contain enhancements specifically referring to the use of AI, prosecutors could potentially seek an enhancement under §2B1.1(b)(10)(C) for the use of “sophisticated means,” §5H1.2 for the misuse of “special training or education to facilitate criminal activity,” or §3B1.3 for the use of a “special skill” not possessed by members of the general public. Monaco noted that DOJ may seek a proposed amendment to the Guidelines should the existing enhancements prove inadequate to address the harms caused by AI.

Notwithstanding Monaco’s announcement, DOJ has yet to incorporate any AI-related enhancements into its Corporate Criminal Enforcement Policies. However, Monaco also announced that DOJ is taking steps internally to evaluate how best to harness the benefits of AI while also guarding against its risks—all in accordance with President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “AI EO”), issued in October 2023. The AI EO directs DOJ to anticipate the impact of AI on the criminal justice system, competition, and U.S. national security.

Monaco announced that DOJ is now working with other federal agencies to create guidance and controls regarding AI to ensure that the use of AI does not harm the safety or rights of U.S. residents. Monaco noted that AI will be a top priority of the Disruptive Technology Strike Force, a joint effort of DOJ and the U.S. Department of Commerce launched in February 2023 to use export control laws to ensure international adversaries are not able to misappropriate cutting-edge American technology. Also at the international level, Monaco highlighted the U.S. government’s participation in the Hiroshima AI Process, which issued its Comprehensive Policy Framework, an allied attempt to “internationalize” a code of conduct regarding the design of safe and trustworthy advanced AI systems.

Monaco stated that DOJ, like other federal agencies, is also working on DOJ-specific guidance to govern its own use of AI. DOJ currently identifies 15 use cases of AI in its inventory, including Amazon Rekognition, which “offers pre-trained and customizable computer vision (CV) capabilities to extract information and insights from lawfully acquired images and videos.” DOJ Chief Information Officer Melinda Rogers also recently explained how DOJ is currently using AI in security applications to “make sure” it “leverage[s] information” from collected log data, which contains information about usage patterns, login activities, and operations that occurred on a device. But before DOJ uses AI to “assist in identifying a criminal suspect” or to “support a sentencing decision,” Monaco emphasized that DOJ “must first rigorously stress test that AI application and assess its fairness, accuracy, and safety.” Monaco therefore announced Justice AI, an internal initiative to study how DOJ can best use AI to advance its work.

As part of Justice AI, Monaco stated that in January 2024, DOJ appointed its first chief AI officer, who will lead a new Emerging Technology Board to advise the U.S. attorney general and Monaco on responsible and ethical uses of AI by DOJ. DOJ also will confer with academics, technology experts, and industry leaders in the U.S. and abroad to “draw on varied perspectives” on this topic. Based on that input, Justice AI will report to President Biden at the end of 2024 regarding the possible uses of AI in the federal justice system.

Ultimately, Monaco’s announcement aligns with the longstanding DOJ goal of ensuring that existing laws are used fairly and effectively when applied to new technologies. DOJ’s emphasis on the safe and ethical use of AI signals that DOJ will hold itself and others to high standards when using AI and will seek increased punishment for those who use AI in the commission of crimes. Time will tell whether additional revisions to the Guidelines and DOJ’s policies regulating corporate criminal enforcement will become necessary. In the interim, businesses should take steps to test their AI uses, vet AI vendors, and ensure AI is used responsibly.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Morrison & Foerster LLP | Attorney Advertising
