U.S. Artificial Intelligence Regulation Takes Shape

Orrick, Herrington & Sutcliffe LLP

Artificial Intelligence (AI) has the potential to create breakthrough advances across a wide range of industries while raising legal and ethical questions that will likely define the next era of technological advancement.  Companies with AI-based products and services must carefully monitor, and account for, the unsettled and evolving landscape of AI-specific laws and regulations.

In the European Union, the European Commission (EC) recently published its highly anticipated communication and “Proposal for a Regulation laying down harmonised rules on artificial intelligence” (E.U. Regulation)—see Orrick’s guidance here.  The proposed E.U. Regulation takes a risk-based approach, calibrating the controls placed on an AI system according to its intended purpose.

In contrast to the EC’s comprehensive regulatory framework, regulatory guidance in the United States has so far emerged on an agency-by-agency basis.  Below, we outline key developments related to AI regulation in the U.S. and describe steps that companies can take to avoid potential regulatory pitfalls.

Department of Commerce

A recent flurry of AI-related activity has emanated from the U.S. Department of Commerce (DoC)—including steps toward developing a risk management framework.

In the National Defense Authorization Act for Fiscal Year 2021, Congress directed the National Institute of Standards and Technology (NIST), which falls under the DoC, to develop “a voluntary risk management framework for trustworthy AI systems.”  In July, NIST issued a Request for Information (RFI) seeking input to inform the development of the AI Risk Management Framework (AI RMF).  The AI RMF may greatly influence how companies and organizations approach AI-related risks, including avoiding bias and promoting accuracy, privacy, and security.

In September, the DoC also established the National Artificial Intelligence Advisory Committee (NAIAC) in accordance with the National AI Initiative Act of 2020.  The NAIAC will “advise the President and other federal agencies on a range of issues related to artificial intelligence,” and will offer recommendations on the “current state of U.S. AI competitiveness, the state of science around AI, issues related to the AI workforce” and how AI can enhance opportunities for historically underrepresented populations, among other topics.

Given its responsibilities and engagement with AI, DoC—and NIST in particular—appears poised to be central to the federal approach to AI regulation. 

Federal Trade Commission

In April, the Federal Trade Commission (FTC) published a blog post entitled “Aiming for truth, fairness, and equity in your company’s use of AI” (FTC Memo).  The FTC Memo makes clear that the FTC will use its authority under Section 5 of the FTC Act, as well as the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA), to take action against the sale or use of biased algorithms.  The FTC lays out a roadmap for its compliance expectations, stating that companies should “keep in mind that if you don’t hold yourself accountable, the FTC may do it.”

Among other things, companies are expected to:

  • Rely on inclusive data sets: “companies should think about ways to improve their data set, design their model to account for data gaps, and—in light of any shortcomings—limit where or how they use the model.”
  • Test each algorithm “both before companies use it and periodically after that—to make sure that it doesn’t discriminate based on race, gender, or other protected class” (a minimal testing sketch follows this list).
  • Be truthful with customers about how their data is being used and not exaggerate what an algorithm can deliver.
  • Be transparent and independent, “for example, by using transparency frameworks and independent standards.”
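
To make the FTC’s testing expectation concrete, the following is a minimal Python sketch of one common check: comparing a model’s favorable-outcome rates across protected groups.  The example data, group labels, and the 0.8 threshold (borrowed from the EEOC’s “four-fifths” rule of thumb in employment contexts) are illustrative assumptions; the FTC Memo does not prescribe any specific test.

```python
# A minimal sketch of the kind of periodic fairness test the FTC Memo
# describes: compare a model's favorable-outcome rates across groups.
# The example data and the 0.8 threshold (the EEOC "four-fifths" rule
# of thumb) are illustrative assumptions, not FTC requirements.

from collections import defaultdict

def selection_rates(predictions, groups):
    """Rate of favorable outcomes (prediction == 1) for each group."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        favorable[group] += int(pred == 1)
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(predictions, groups):
    """Lowest group selection rate divided by the highest.
    Values far below 1.0 suggest the model favors some groups."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values()), rates

if __name__ == "__main__":
    # Hypothetical model outputs (1 = favorable) and protected-group labels.
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    ratio, rates = disparate_impact_ratio(preds, groups)
    print("selection rates:", rates)
    print(f"disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # four-fifths rule of thumb
        print("flag: review the model for potential disparate impact")
```

In practice, such a check would run against real holdout data, cover each protected class, and be repeated on a schedule, consistent with the FTC’s guidance to test “both before . . . and periodically after.”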

The FTC’s statements are a starting point for companies to prevent AI bias in practice, and companies that develop and use AI should be forward-thinking as they evaluate and address potential AI risks.

The White House

In September, the E.U.-U.S. Trade and Technology Council (TTC) released its Inaugural Joint Statement.  The TTC committed to cooperate on developing “AI systems that are innovative and trustworthy and that respect universal human rights and shared democratic values,” to “uphold and implement the OECD Recommendation on Artificial Intelligence,” and to discuss “measurement and evaluation tools . . . to assess the technical requirements for trustworthy AI.”  The TTC will also undertake a joint economic study examining the impact of AI on the future of the labor market.

Food and Drug Administration

The Food and Drug Administration (FDA) released its Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan (Action Plan).  SaMD is software intended to treat, diagnose, cure, mitigate, or prevent disease or other conditions without being part of a hardware medical device; AI/ML-based SaMD relies on AI/ML techniques to do so.  The Action Plan outlines how the FDA intends to oversee the use and development of AI/ML-based SaMD, including by updating the proposed regulatory framework outlined in its 2019 discussion paper.  The FDA recently held a virtual public workshop on transparency in AI/ML-enabled medical devices and is accepting comments until November 15, 2021.

National Security Commission and Government Accountability Office (GAO)

On March 1, 2021, the National Security Commission on Artificial Intelligence (NSCAI) released and submitted its final report to Congress.  The report recommends that the government take certain domestic actions to protect privacy, civil rights, and civil liberties in its AI deployment.  It notes that a lack of public trust in AI from a privacy or civil rights/civil liberties standpoint will undermine the deployment of AI for U.S. intelligence, homeland security, and law enforcement purposes.  The report advocates for the public sector to lead the way in promoting trustworthy AI, which will likely affect how AI is deployed and regulated in the private sector.

Similarly, in June 2021, the GAO published a report identifying key practices to help ensure accountability and responsible AI use by federal agencies and other entities involved in the design, development, deployment, and continuous monitoring of AI systems.  The report identifies four key focus areas: (1) organizational and algorithmic governance; (2) system performance; (3) documenting and analyzing the data used to develop and operate an AI system; and (4) continuous monitoring and assessment of the system to ensure reliability and relevance over time.

Next Steps

While there is currently no comprehensive federal AI-specific regulation in the U.S., regulators have sent a clear message that AI regulation is on the horizon.  Companies should craft policies and procedures across the organization to create a compliance-by-design program that promotes AI innovation while also ensuring the transparency and explainability of AI systems.  Companies should also audit and review their AI usage regularly and document those processes so they can respond to regulators who may seek further information.  For more information about steps that companies can take to harness the benefits of AI while limiting regulatory issues, see Orrick’s guidance here.
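
As one hypothetical illustration of documenting AI usage, the Python sketch below wraps model calls so that every prediction is logged with a timestamp, model version, and inputs, producing an audit trail that could be reviewed internally or produced for a regulator.  The model, field names, and log destination are assumptions for illustration only.

```python
# Hypothetical sketch of documenting AI usage: wrap each model call so
# the inputs, output, model version, and timestamp are appended to an
# audit log. The model, fields, and log file are illustrative choices.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")

def audited_predict(model, features, model_version="v1.0"):
    """Run a prediction and append a structured audit record."""
    prediction = model(features)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
    }
    logging.info(json.dumps(record))
    return prediction

if __name__ == "__main__":
    # Stand-in for a real model: approve when the score exceeds 0.5.
    toy_model = lambda f: int(f["score"] > 0.5)
    print(audited_predict(toy_model, {"score": 0.7}))
```

Retention periods, access controls, and which fields are appropriate to log (for example, avoiding unnecessary personal data) are compliance decisions in their own right.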

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Orrick, Herrington & Sutcliffe LLP | Attorney Advertising
