Where is AI Regulation Heading and What Can Companies Do to Prepare?

Shook, Hardy & Bacon L.L.P.

Artificial intelligence (AI) is top of mind for companies, and while early adoption of this technology has strategic value, companies that adopt it with an eye on regulation will be better positioned to defend their use of AI. To help you do that, below we outline existing legal frameworks, review pending legislation, and provide practical tips for preparing your company for new AI regulations.

Existing legal frameworks that apply to AI

There are several existing legal frameworks, including one at the federal level, that apply to companies’ use of AI. We’ll start with the general and move to the specific.

On the general side of the spectrum, there is oversight by the Federal Trade Commission (FTC) under Section 5 of the FTC Act—which prohibits “unfair or deceptive acts or practices.” This broad authority has allowed the FTC to protect consumers against company practices involving privacy, information security, and now AI. Most recently, for example, the FTC opened an investigation into OpenAI, the maker of ChatGPT, requesting information about its use of large language models. The FTC has previously taken enforcement action against AI deployers, including requiring the deletion of algorithms trained on unlawfully obtained data.

On the specific side of the spectrum, jurisdictions have passed legislation to assess bias or discrimination in the use of AI. New York City, for example, passed Local Law 144, which took effect January 1, 2023, and requires employers to provide notice when AI is used in hiring decisions and to conduct annual audits to assess potential bias. In Colorado, Gov. Polis signed Senate Bill 21-169 into law in 2021, prohibiting insurers from using big-data systems to unfairly discriminate and requiring them to take corrective action to address any consumer harms discovered from such discrimination.

Twelve states (California, Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, Oregon, Tennessee, Texas, Utah, and Virginia) have now enacted comprehensive privacy laws that take effect over the next several years. The majority of these laws contain rules for “profiling”: any form of automated processing performed on personal data to evaluate, analyze or predict personal aspects related to an individual’s economic situation, health, personal preferences, interests, reliability, behavior, location or movements. The rules generally require notice about the profiling and the opportunity to opt out of profiling decisions that produce legal or similarly significant effects.

There is also guidance from different government agencies relating to AI used in certain contexts, such as employment decisions and consumer products and services.

So, while we might lack comprehensive AI regulation (for now), certain use cases are subject to specific requirements, particularly in relation to consumers and employees/applicants.

Proposed legislation

We have seen significant efforts at the federal level to craft rules around AI technology. On June 21, 2023, Sen. Chuck Schumer launched the SAFE Innovation Framework, which is based on the pillars of security, accountability, foundations, explainability and innovation. As part of the framework, AI Insight Forums—panels of legislators, industry experts, and other stakeholders—will be convened to assist with the development of bipartisan legislation.

A number of other federal bills introduced this year would address different aspects of AI use.

In addition, the National Telecommunications and Information Administration (NTIA) at the Department of Commerce issued a request for comments on AI Accountability Policy. The NTIA is seeking feedback on policies to support the development of AI audits, assessments, certifications, and other mechanisms that foster trust in AI systems. Since issuing the request in early April, the NTIA has received more than 1,400 comments, which will inform a forthcoming report setting out policy recommendations.

Other countries are also considering AI regulation that may impact U.S. companies. Chief among these is the EU AI Act—a regulation in the vein of the GDPR—which is currently in the “trilogue” process, in which the EU Commission, the EU Parliament, and the Council of the European Union negotiate the final text. The legislation would take a risk-based approach to AI regulation: prohibiting certain uses of AI that pose unacceptable risks to individuals; subjecting high-risk systems to human oversight, transparency, cybersecurity, risk management, data quality, monitoring, and reporting obligations; and imposing lighter compliance burdens on limited- and minimal-risk systems. Canada is also considering an AI bill as part of its legislative package updating privacy rules. Like the EU AI Act, Canada’s Artificial Intelligence and Data Act would take a risk-based approach to AI regulation.

This flurry of activity is a strong indicator that regulation of AI technology itself—and not just certain use cases—is a near-term possibility.

Tips for preparing for AI regulation

Companies adopting AI tools and systems can prepare for forthcoming regulation in the following ways:

  1. Develop an AI inventory. The first step in understanding which legal requirements might apply is identifying how AI is being used by the company. For example, is HR using AI to screen job applicants? Are business teams using it for productivity? Is customer and consumer data being used with AI? An inventory of AI use cases helps answer these questions, and like any good inventory, it should be kept up to date through a defined process for regular review (a minimal sketch of what such an inventory might capture appears after this list).
  2. Update risk assessment procedures to include evaluating AI. Companies can leverage existing risk assessment procedures—such as those for cyber, privacy, and third-party risk management—to evaluate AI systems, identify potential risks, and implement mitigating measures.
  3. Create a multi-stakeholder working group. Because of its breadth, AI technology can be deployed in many different contexts within a company. It is therefore important to convene a working group representing different areas and functions (such as IT, privacy, legal, security, product, HR, and marketing) to ensure that all potential AI use cases are assessed and their risks appropriately managed.
  4. Keep records of AI system evaluation. Accountability requirements appear throughout the legislative landscape. To stay a step ahead of regulation, companies should document their risk assessments and decision-making with respect to AI system adoption. This documentation will help when it comes time to determine whether applicable legal obligations are satisfied and where gaps might exist in the AI governance program.
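
For companies working through the first tip, the inventory does not need to be elaborate; a spreadsheet or a simple structured record works. Below is a minimal sketch in Python of the kind of information worth capturing, offered purely as an illustration: the AIUseCase class, its fields, and the 180-day review interval are hypothetical choices, not drawn from any regulation.

```python
# Hypothetical sketch of an AI use-case inventory entry. The class name,
# fields, and review interval are illustrative, not a prescribed format.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class AIUseCase:
    name: str                   # e.g., "Resume screening tool"
    business_function: str      # e.g., "HR", "Marketing"
    vendor_or_model: str        # third-party tool or in-house model
    uses_personal_data: bool    # may trigger state privacy-law analysis
    significant_effects: bool   # hiring, credit, insurance, housing, etc.
    last_reviewed: date
    notes: str = ""


def needs_attention(entry: AIUseCase, review_interval_days: int = 180) -> bool:
    """Flag entries overdue for review or involving decisions with legal or
    similarly significant effects (e.g., profiling opt-out rights under
    state privacy laws, or NYC's bias-audit requirement for hiring tools)."""
    overdue = date.today() - entry.last_reviewed > timedelta(days=review_interval_days)
    return overdue or entry.significant_effects


inventory = [
    AIUseCase("Resume screening tool", "HR", "Vendor X", True, True,
              date(2023, 1, 15)),
    AIUseCase("Marketing copy drafts", "Marketing", "Hosted LLM API", False,
              False, date(2023, 6, 1)),
]

for entry in inventory:
    if needs_attention(entry):
        print(f"Review needed: {entry.name} ({entry.business_function})")
```

However the inventory is actually kept, recording fields like these makes it easier to map use cases to the notice, profiling, and audit obligations discussed above, and it doubles as the documentation recommended in the fourth tip.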

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Shook, Hardy & Bacon L.L.P. | Attorney Advertising

Written by:

Shook, Hardy & Bacon L.L.P.