Although the U.S. has no federal law that specifically regulates artificial intelligence (AI), the Federal Trade Commission (FTC) has indicated that it may be preparing to exercise its consumer protection authority with respect to AI deployment. In May, the FTC issued new guidance for the use of AI, building upon its 2020 AI guidance and its 2016 report on big data. And FTC Acting Chair Rebecca Kelly Slaughter has stated in public remarks that the Commission will be exploring concerns relating to algorithmic harms, including bias and discrimination. Organizations deploying AI systems in the U.S. are advised to familiarize themselves with the FTC guidance to ensure that their uses of AI comply with U.S. consumer protection requirements.
The FTC has long exercised its authority to regulate private sector uses of personal information and algorithms that impact consumers. As discussed below, that authority stems from Section 5 of the FTC Act (Section 5), the Fair Credit Reporting Act (FCRA), and the Equal Credit Opportunity Act (ECOA).
Section 5 prohibits unfair or deceptive acts or practices in or affecting commerce. An act or practice is considered deceptive if there is a statement, omission, or other practice that is likely to mislead a consumer acting reasonably under the circumstances, causing harm to the consumer. An act or practice is considered unfair if it causes or is likely to cause substantial injury to consumers that consumers cannot reasonably avoid and that is not outweighed by countervailing benefits to consumers or to competition. The FTC’s most recent guidance offers examples of how AI deployments could be deemed deceptive (e.g., if organizations overpromise regarding AI performance or fairness) or unfair (e.g., if algorithms impact certain racial or ethnic groups unfairly).
FCRA regulates consumer reporting agencies and the use of consumer reports. The FTC’s AI guidance and enforcement actions make clear that the FTC considers certain algorithmic or AI-based collection and use of data subject to the FCRA. For example, if an organization purchases a report or score about a consumer from a background check company that was generated using AI tools, and uses that score or report to deny the consumer housing, that organization must provide an adverse action notice to the consumer as required by the FCRA. The FTC has also noted that organizations that supply data which may be used for AI-based insurance, credit, employment, or similar eligibility decisions may have FCRA obligations as “information furnishers.”
The ECOA prohibits discrimination in access to credit based on protected characteristics such as race, color, religion, national origin, sex, age, and marital status. The FTC notes in both its 2020 and 2021 guidance that if, for example, a company used an algorithm that, either directly or through disparate impact, discriminated against a protected class with respect to credit decisions, the FTC could challenge that practice under the ECOA.
The FTC’s updated guidance provides insight into the expectations for organizations using AI.
Organizations deploying AI are well-advised to assess whether they are doing so in alignment with the FTC’s recommendations and to consider how best to demonstrate that such use is truthful, fair, and equitable in the eyes of the FTC.
If the early part of the 21st century comes to be known as the age of big data, then we have now entered the age of algorithms.
Across industries, organizations are increasingly relying on artificial intelligence and machine learning technologies to automate processes, introduce innovative products into consumer markets, and enhance research and development.
This article is part of a series examining the existing and emerging legal challenges associated with AI and algorithmic decision-making. We will take a detailed look at key issues, including algorithmic bias, privacy, consumer harms, explainability, and cybersecurity. We will also explore the specific impacts in industries such as financial services and healthcare, with consideration given to how existing policy proposals may shape the future use of AI technologies.
Stevie DeGroff, a Law Clerk in our Denver office, and Brittney Griffin, a Senior Paralegal in our New York office, contributed to this entry.