2023 Artificial Intelligence Regulation: An Update from a Senior FTC Official

Holland & Knight LLP

Highlights

  • Michael Atleson, a senior attorney for the Federal Trade Commission (FTC), joined Holland & Knight for a recent question-and-answer webinar presentation on artificial intelligence (AI) regulation.
  • This Holland & Knight alert provides a number of key takeaways from that discussion, including best practice tips and "inside information" that can help companies enhance their regulatory compliance programs and minimize risks associated with AI use.
  • These takeaways apply to businesses in every industry sector – from financial services firms and healthcare product manufacturers to retailers and many others – that use AI systems or tools in the course of business or market their own AI systems.

Holland & Knight hosted Michael Atleson, a senior attorney for the Federal Trade Commission (FTC or Commission), for a webinar presentation on Nov. 7, 2023. Mr. Atleson has been with the FTC for nearly two decades and currently serves as a staff attorney with the FTC's Division of Advertising Practices.

During the interview, Holland & Knight Partner Anthony DiResta (former director of the FTC's Southeast Regional Office and co-chair of the firm's Consumer Protection Defense and Compliance Team) and Associate Benjamin Genn (a member of the firm's Consumer Protection Defense and Compliance Team) asked Mr. Atleson dozens of questions covering a broad range of topics concerning artificial intelligence (AI) regulation and best practices, including:

  • the working definition of AI
  • the FTC's philosophy concerning AI
  • the legal bases for the FTC to regulate AI
  • recent enforcement actions concerning unfair or deceptive use of AI
  • federal directives, including the Biden Administration's recent AI Executive Order (Executive Order)
  • bias and discrimination
  • cooperation among agencies
  • duty to monitor AI products and use of disclaimers
  • liability and available relief to consumers and the government as a result of an enforcement action
  • risk management, including recommended policies and procedures
  • the future of AI regulation

This alert summarizes the discussion between Holland & Knight and Mr. Atleson. The use of AI is increasingly prevalent in every industry sector of the U.S. economy, including financial services, healthcare and life sciences, retail, technology, hospitality and tourism, transportation, education, media, telecommunications and manufacturing. The webinar is chock-full of best practice tips and "inside information" that can help companies enhance their regulatory compliance programs and minimize risks associated with AI use in the consumer space. Holland & Knight anticipates continuing conversations with key FTC regulators as new AI developments come to light.

Key Takeaways

Defining AI

"AI" is a fairly ambiguous term. It means different things to different people. The FTC does not employ one official definition of AI. That said, federal sources tend to ascribe to AI broad definitions. AI goes beyond just chatbots. AI encompasses algorithms in the form of tools and systems that utilize computations and predictive coding used by a variety of industries in the regular course of business.

From the FTC's perspective, the focus is on AI in the marketing space: many companies market their AI capabilities, and those claims have the potential to harm consumers.

The FTC's Approach

The FTC views AI through the lens of its own mission: consumer protection. Accordingly, the FTC wants companies to confront the hard questions surrounding AI's impact, value and potential negative consequences.

In contrast, companies often view AI through the lens of the technology itself. However, the FTC cautions that companies must acknowledge the risks of AI and remember that the data collected and processed, at the end of the day, is information about consumers.

Legal Underpinnings

Section 5 of the FTC Act prohibits "unfair or deceptive acts or practices in or affecting commerce." This is a broad and flexible provision under which the FTC actively prosecutes companies that deploy AI in a harmful or deceptive manner. The FTC emphasizes that Section 5 is more than sufficient to regulate AI and that no further legal authority is required. Thus, any marketing or use of AI must adhere to the long-established principles found within Section 5.

In the case of deception, the FTC has identified two common scenarios. The first involves companies that exaggerate the capabilities of AI as their selling point (the "Fake AI Problem"). The second involves AI deployed solely to deceive consumers through "deepfakes," including cloned voices and language models used to generate phishing messages.

Enforcement Actions

The FTC has brought several enforcement actions against companies engaging in harmful use of AI and other algorithms, including:

  • luring consumers to invest in online stores by using deceptive claims that the company's AI ensures success and profitability; the FTC sued Automators for claiming that its "AI machine learning" was trained to maximize revenues, which would help users achieve more than $10,000 per month in sales
  • promoting "smart" devices that claim to treat health ailments; the FTC sued Physician's Technology for falsely claiming that its low-level light therapy device emitted infrared and visible light to diagnose and treat chronic pain and reduce inflammation
  • overstating the benefits of automated investment services; the FTC sued DK Automation for pitching supposed cryptocurrency investment services that were the "#1 secret passive income crypto trading bot," which the company claimed could generate profits "while you sleep"
  • deceiving consumers about the use of facial recognition technology and associated retention policies; the FTC sued Everalbum for applying facial recognition technology to customers' content, despite promising not to use such technology unless the customer affirmatively chose to activate the feature; in addition, the company failed to keep its promise to delete the content of customers who deactivated their accounts and instead retained the data indefinitely

Federal Policy

The Biden Administration is highly concerned about the risks of AI, as detailed in its Oct. 30, 2023, Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence and expounded upon by Holland & Knight in a webinar and previous alert, "What to Know About the New Artificial Intelligence Executive Order," both published on Oct. 31, 2023.

The Executive Order seeks to establish new standards for AI safety and security, protect Americans' privacy, advance equity and civil rights, protect consumers and promote innovation. Importantly, the Executive Order directs certain agencies – such as the U.S. Department of Homeland Security and U.S. Department of Energy – to advance AI safety and address AI systems' threats to critical infrastructure.

With respect to the FTC, the Executive Order did not explicitly direct the Commission to take specific actions. That said, in the context of irresponsible uses of AI that result in bias or discrimination, the Executive Order signaled that the FTC should use existing authority to protect people's rights. A fair reading of the Executive Order suggests that the FTC should continue to regulate AI – within the scope of its jurisdiction – under the FTC Act.

Potential for Bias and Discrimination

As indicated by the Executive Order, AI outputs can sometimes be biased or discriminatory. It is well-documented that AI systems have discriminated, often inadvertently, with respect to individuals' immutable characteristics, including race, ethnicity, gender and language.

What triggers AI bias? There are a number of reasons why an AI system may discriminate. Sometimes the bias is embedded in the data on which the algorithm was trained. Other times, an AI system may inadvertently discriminate because its underlying model is being used for something other than its original purpose. In either case, a company runs the risk of violating the law.
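
Companies can screen for this kind of disparate impact before and after release. The following minimal Python sketch compares approval rates across demographic groups using the "four-fifths rule," a heuristic drawn from EEOC guidance; the FTC does not prescribe any particular test, and the data, group labels and threshold here are purely hypothetical.

    # Hypothetical screen for disparate impact in a model's decisions.
    # The four-fifths rule is a common heuristic from EEOC guidance,
    # not an FTC-mandated test; all values here are illustrative.
    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group, approved) pairs, e.g. ("A", True)."""
        counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
        for group, approved in decisions:
            counts[group][0] += int(approved)
            counts[group][1] += 1
        return {g: approvals / total for g, (approvals, total) in counts.items()}

    def impact_ratios(rates):
        """Each group's selection rate relative to the highest group's rate.
        Ratios below roughly 0.8 are often treated as a flag for review."""
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items()}

    # Example: loan decisions logged with a hypothetical group label
    decisions = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False)]
    print(impact_ratios(selection_rates(decisions)))  # {'A': 1.0, 'B': 0.5}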

From the FTC's perspective, a company that uses an AI system that results in disparate treatment can be prosecuted under Section 5 of the FTC Act based on an "unfairness" theory. For example, the FTC brought an action against Passport Automotive Group and obtained a settlement of $3.3 million (to be refunded to consumers) based on allegations that the company engaged in lending practices that regularly charged African American and Latino customers more in financing costs and fees.

The FTC – along with the U.S. Department of Justice (DOJ), Equal Employment Opportunity Commission (EEOC) and Consumer Financial Protection Bureau (CFPB) – is highly concerned about unfairness in AI, as detailed in the agencies' April 25, 2023, Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems (Joint Statement).

Interagency Cooperation

As a general matter, and as evidenced by the recent Joint Statement, federal agencies communicate and cooperate in support of federal directives. For example, in the context of AI, the FTC is conducting a joint effort with the CFPB to collect information about tenant screening tools, including whether the underlying algorithms can have an adverse impact on underserved communities.

In addition, the FTC is closely coordinating with state attorneys general given the Commission's loss of its ability to seek disgorgement following the U.S. Supreme Court's ruling in AMG Capital Management, LLC v. FTC.

Monitoring and Disclaimers

Under the FTC Act, companies may be liable for what vendors or contractors do on their behalf. This means that companies have an implied duty to vet and monitor the third parties they engage. Whether a company regularly monitors its vendors and contractors is an important factor in enforcement discretion. In other words, if a company's AI system results in consumer harm, the FTC will investigate whether the company monitored both the product and its vendors and contractors. A showing of diligent and continuous monitoring practices may dissuade the FTC from prosecuting or, at the very least, persuade it to minimize the remedies sought.

Disclaimers can be used when marketing AI as long as they are clear and conspicuous. The extent to which a disclaimer limits liability is narrow, similar to disclaimers and waivers in the context of tort claims. Put another way, a disclaimer cannot cure blatant deception or harm that the consumer cannot reasonably avoid.

Enforcement Relief

During the course of an investigation and negotiations, the FTC considers injunctive relief and monetary relief. In this context, injunctive relief comes in the form of requiring companies to implement certain compliance provisions in their AI programs. If appropriate and legally available, monetary relief comes in the form of civil penalties.

Does the FTC have any recourse against the technology itself? In a 2021 Commission statement, then-FTC Commissioner Rohit Chopra stated that no longer allowing "data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data [is an] important course correction." Consistent with this view, the FTC now seeks algorithmic deletion as a remedy in its enforcement actions. For example, in actions against the aforementioned Everalbum and against WW International and its subsidiary Kurbo, the Commission successfully required those companies to delete both the data collected (photos and children's information, respectively) and the algorithms derived from that data.

Best Practices

What safeguards can companies implement to limit their liability? The FTC recommends reviewing its recent Policy Statement on Biometric Information. While the statement deals with biometrics, its guidance can be readily applied to AI systems.

In a nutshell, the FTC believes that AI best practices include:

  • conducting pre-release assessments concerning foreseeable harms
  • taking steps to mitigate the risks of those harms and not releasing the product at issue if those risks cannot be mitigated
  • being transparent to consumers regarding the collection and use of their data
  • evaluating vendors' capabilities to minimize risks to consumers
  • providing appropriate training for employees and contractors whose job duties involve interacting with AI systems and their related algorithms
  • conducting ongoing monitoring of AI systems to ensure that they operate as intended and are not likely to harm consumers (a minimal sketch of such a check follows this list)
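
As a hypothetical illustration of the final item, ongoing monitoring can be as simple as periodically comparing live metrics against a pre-release baseline and escalating drift for human review. The metric names, thresholds and values in this Python sketch are illustrative assumptions, not FTC requirements.

    # Hypothetical ongoing-monitoring check for a deployed AI system.
    # Metric names, baselines and the 2x tolerance are illustrative only.
    PRE_RELEASE_BASELINE = {
        "complaint_rate": 0.002,  # consumer complaints per decision
        "reversal_rate": 0.010,   # decisions later reversed on human review
    }
    DRIFT_TOLERANCE = 2.0  # flag a metric that exceeds 2x its baseline

    def review_metrics(live, baseline=PRE_RELEASE_BASELINE, tol=DRIFT_TOLERANCE):
        """Return the live metrics that have drifted beyond tolerance."""
        return {name: value for name, value in live.items()
                if name in baseline and value > baseline[name] * tol}

    # Example: weekly metrics pulled from production logs (hypothetical values)
    flags = review_metrics({"complaint_rate": 0.009, "reversal_rate": 0.008})
    if flags:
        # Escalate to compliance and document remediation; a record of diligent
        # monitoring can matter in the FTC's enforcement discretion.
        print(f"Escalate for review: {flags}")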

Companies must remember that the FTC Act does not explicitly outline a standard of "reasonable foreseeability." In other words, the Commission does not have to prove intent. That said, under a theory of "unfairness," the FTC will consider the reasonableness of a company's conduct – what the company knew about its AI system, what it should have known and what steps it took to mitigate risk and remediate harm – in its discretion to prosecute a company.

The Future of AI Regulation

The world of AI is rapidly evolving. Consequently, we can expect more comprehensive regulation in the near future. Congress may look to what other countries are doing in this space, such as the European Union's AI Act. But even domestically, some states, such as California, are taking steps to direct their own agencies to adopt a proactive approach to AI regulation – one that promotes trustworthy principles and elevates fairness as crucial components of AI deployment.

Some experts believe that Congress may be amenable to providing guidance with respect to AI as part of a broader federal privacy bill. If that is the case, however, Congress should pair any legislation with additional resources for the agencies. In the meantime, the FTC will continue to rely on "deceptive" and "unfair" fact patterns to prosecute companies under Section 5.

Co-author Zachary Sherman was a Holland & Knight summer associate and will join the firm as an associate in fall 2024.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Holland & Knight LLP | Attorney Advertising
