The Federal Trade Commission and Artificial Intelligence

ArentFox Schiff

Artificial intelligence (AI) burst into the public consciousness less than one year ago, with OpenAI’s highly successful public release of ChatGPT. Since then, AI-enabled products and services have proliferated throughout the economy. AI-enabled tools can now handle everything from marketing automation to HR functions, and from legal research to accounting. Virtually every company is now scrambling to embed AI into its products and services, if it has not done so already.

As has been the case with other technological developments, such as the rise of the commercial internet in the late 1990s, AI promises both benefits and potential harms to consumers. The Federal Trade Commission (FTC), which has stated publicly that it missed a chance to develop a more effective consumer privacy regulatory regime at the dawn of the commercial internet 20 years ago, is now in the midst of an extraordinary campaign, largely through blog posts but also in reports and enforcement actions, to warn industry that it is on the beat and ready to enforce existing laws that already apply to tools powered by AI.

FTC guidance tends to preview its enforcement agenda. Accordingly, the FTC’s rapid and comprehensive response to the commercial availability of AI-enabled tools should be read as identifying the areas of likely enforcement: unlawful bias, and compliance with laws regulating consumer reports, extensions of credit, advertising, intellectual property, competition, privacy, and data security. Enforcement has only just begun and lags a market now in full bloom, but there is no question that more is coming, and this body of guidance is the early roadmap to it.

Eligibility Determinations for Employment, Credit, Insurance, and Housing: The FCRA, ECOA, and the Problem of Bias

As far back as 2016, the FTC was thinking about automated systems and algorithms. In a Commission report entitled Big Data: A Tool for Inclusion or Exclusion?, the FTC warned companies using big data algorithms that existing laws, namely the FTC Act, the Fair Credit Reporting Act (FCRA), and the Equal Credit Opportunity Act (ECOA), apply to both the inputs and outputs of this technology. The FTC admonished users of big data algorithms to make sure that they use representative data sets, that their data models account for bias, that their predictions using big data are accurate, and that they incorporate principles of ethics and fairness in their models. Ultimately, the FTC used this report to warn companies against using big data and algorithms to perpetuate unlawful bias or to unlawfully restrict opportunities in the employment, credit, insurance, and housing markets.

The FTC’s Bureau of Consumer Protection (BCP) revisited these topics in a 2020 blog post entitled Using Artificial Intelligence and Algorithms. BCP instructed companies to consider FCRA obligations (including data accuracy, limitations on data use, and providing adverse action notices to consumers where required) when making eligibility determinations based on third-party datasets; to disclose the reasons for denying consumers something of value (including what data their models use and how those data drive decisions) and any factors used in risk scoring of consumers; and to avoid any disparate impact on protected classes.

In the 2021 Appropriations Act, the US Congress directed the FTC to study and report on whether and how AI can be used to address online harms, including harassment, hate crimes, and misleading or exploitative consumer interfaces. Congress asked the FTC for recommendations on “reasonable policies, practices, and procedures” for such AI uses and on legislation to “advance the adoption and use of AI for these purposes.” In its 2022 report to Congress entitled Combatting Online Harms Through Innovation, the FTC counseled that AI is not the right solution to stop the spread of these online harms, noting that AI tools can be inaccurate, biased, and discriminatory by design, and can incentivize reliance on increasingly invasive forms of “commercial surveillance.”

In a joint statement with the US Department of Justice’s Civil Rights Division, the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission, the FTC drove home its concern regarding bias in AI-enabled tools, reminding industry that the FTC Act prohibits the use of automated tools that have discriminatory impacts, and that the FTC has required firms to destroy algorithms trained on data that should not have been collected.

Balancing Benefits and Risks: Unfairness and Section 5 of the FTC Act

By 2021, the FTC was beginning to cast a broader net regarding potentially unlawful uses of artificial intelligence in its blog post entitled Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI. While continuing to remind industry that the FCRA and ECOA apply to the use of artificial intelligence in connection with eligibility determinations, the FTC argued that Section 5 of the FTC Act itself prohibits the use of algorithms that perpetuate bias, even where the ECOA does not apply. Importantly, the FTC’s guidance stated that the unfairness prong of Section 5, which prohibits acts and practices that cause, or are likely to cause, substantial consumer harm that consumers cannot reasonably avoid and that is not outweighed by benefits to consumers or to competition, applies to a facially neutral offer of credit that in fact has a discriminatory effect. Ultimately, the FTC stated that it would evaluate algorithms on whether they “do more good than harm.” This is, obviously, a very broad interpretation of the law, and one that opens the door to a wide range of subsequent enforcement.

Advertising Law

The FTC’s 2021 blog post also asserted that core advertising law principles apply to AI, whether companies direct their claims regarding AI products to consumers or to business customers. The FTC urged companies not to overstate their claims about what their AI products do or can do.

The FTC’s 2023 blog post entitled Keeping Your AI Claims in Check came just months after OpenAI released ChatGPT, as the market reacted with scores of new AI-powered products. In it, the FTC warned that advertising claims regarding artificial intelligence products must comply with the FTC’s body of law prohibiting false, misleading, and unsubstantiated advertising claims. Claims are false, the FTC noted, when they are exaggerated, and claims are misleading if, for example, they apply only to certain users or under certain conditions.

Importantly, the FTC also counseled companies to identify and mitigate the reasonably foreseeable risks of an AI product before releasing it to the market. The FTC stated that even where the underlying technology is developed by others, those who disseminate it are liable for law violations arising out of its use: “If something goes wrong – maybe it fails or yields biased results – you can’t just blame a third-party developer of the technology. And you can’t say you’re not responsible because that technology is a ‘black box’ you can’t understand or didn’t know how to test.”

Later in 2023, the FTC published Chatbots, Deepfakes, and Voice Clones: AI Deception for Sale. The FTC’s concern in this post is not false claims about AI, but rather false or fake content created by AI, such as fictitious websites, false social media posts, and fake online reviews, as well as imposter scams, extortion, and even malware. The FTC admonished companies to consider the reasonably foreseeable risks associated with their AI technology and to mitigate those risks to the extent possible. This is another very broad statement, and one that provides the FTC wide enforcement latitude.

The FTC also discussed the intersection of AI and advertising law in an August 2023 blog post entitled Can’t Lose What You Never Had: Claims About Digital Ownership and Creation in the Age of Generative AI, in which the FTC warned companies to make clear when selling AI-enabled products “what [they are] selling, including who made it, how it was made, or what rights people have in their own creations.” For example, if selling a digital music file, companies should not claim the music is by a certain artist when it was generated by AI using the artist’s voiceprint. The FTC also suggested that when selling an AI product, companies may have an affirmative duty to disclose the extent to which the training data includes copyrighted or otherwise IP-protected material.

Intellectual Property

The FTC is not, as such, an intellectual property enforcement agency. But in connection with its consumer protection mission, it has recently begun to explore intellectual property issues that arise in the context of AI. An October 2023 blog post entitled Consumers Are Voicing Concerns About AI provides a rare glimpse into the vast array of consumer complaints housed in the FTC’s Consumer Sentinel database, shedding light on consumers’ concerns about AI. This is important because, historically, the FTC’s law enforcement program has tended to be informed by consumer complaints. The FTC’s query of the database revealed thousands of AI-related consumer complaints in the previous 12 months alone, among them concerns over training data sets containing copyrighted material scraped from the internet. The FTC also expressed concern over the inclusion of biometric data, such as voice recordings, that can be used to train models or to create “voice prints” for use in scams; over bias inherent in AI-enabled facial recognition software; and over the increasing absence of human alternatives available to consumers.

The FTC hosted a workshop in early October 2023 on The Creative Economy and Generative AI to gain a better understanding of how creative fields are affected by generative artificial intelligence. As we reported here, participants representing artists, writers, actors, musicians, and other creative professions voiced concern about the use of copyrighted works to train generative AI models without consent, compensation, or credit. They also called for companies to disclose the sources of their training data. FTC Chair Lina Khan remarked that the agency will use its law enforcement authority to police deceptive and exploitative business practices. As with other FTC workshops, an FTC report on this topic is likely to follow.

Competition

In June 2023, the FTC’s Bureau of Competition, together with the FTC’s Office of Technology, joined the conversation with a blog post entitled Generative AI Raises Competition Concerns. This piece stated that because AI requires vast training data and computational resources, the barrier to entry for competing against established firms is high, raising the risk of anticompetitive practices. It went on to warn companies not to lock up their highly skilled workforces, for example with noncompete agreements, so that this talent can move freely among competitors. It also warned against M&A activity that could concentrate power in large firms and affect competition in the AI market, and against network and platform effects that early entrants can exploit to develop dominant market positions.

Privacy and Data Security

Given that the FTC long ago established itself as the de facto federal privacy enforcement agency and has directed substantial resources to privacy policymaking and enforcement over the previous three decades, it is curious that the agency has not addressed privacy and data security issues associated with the use of AI tools in its series of recent AI-related blog posts. Perhaps one or more blog posts on these issues are forthcoming. Nevertheless, as we discussed at length here, a leaked FTC Civil Investigative Demand (CID) directed to OpenAI earlier this year reveals that privacy and data security are top of mind for the FTC when evaluating AI-enabled tools.

From a privacy perspective, the FTC is concerned, again, with both the inputs and outputs of AI-enabled tools. Notably, the FTC asks OpenAI what steps it has taken to remove personal information from training data sets. Is this a suggestion that such removal is necessary? If the data were scraped from publicly available sources on the internet, does the FTC mean to suggest that companies are not free to use those data? If so, that is startling in and of itself, and would alter privacy law in the United States. In terms of outputs, the FTC asks a series of questions aimed at understanding the extent to which AI outputs about individuals are misleading and harmful to those individuals, suggesting, perhaps, a Section 5 accuracy standard for AI tools, not unlike the FCRA’s accuracy standard. The FTC’s CID goes on to address more familiar privacy territory, asking questions about the choices offered to users and how OpenAI has honored those choices.

The FTC’s data security questions go to OpenAI’s own data security practices, especially with respect to users’ chat histories, payment information, and any “prompt injection” attacks (unauthorized attempts to bypass filters or to manipulate a large language model into ignoring prior instructions or performing actions its developers did not intend). The FTC does not seem to be trying to expand the law here, but rather to apply its existing data security standards to AI tools. More interesting is the line of questioning about what OpenAI requires of users of its plugins and API, which goes to how the company ensures that downstream users protect the security of consumers’ personal information. This line of questioning suggests that the FTC expects companies in the AI industry to conduct due diligence on their partners and to use contractual provisions and monitoring to make sure those partners do not misuse the technology.

Conclusion

The FTC’s campaign to underscore the application of existing law to AI-enabled tools is remarkable for its speed, its scope, and the sheer volume of guidance it has produced. Clearly, the FTC wants industry to know it is on the beat. More enforcement is coming, and if there were any doubt about what the FTC intends to enforce, this comprehensive body of guidance is the roadmap. Those that develop or deploy AI-enabled tools are on notice and should heed the FTC’s statements regarding FCRA and ECOA compliance, as well as its admonitions to avoid bias and discriminatory impacts, to treat content creators fairly, and to comply with existing advertising, competition, privacy, and data security law.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© ArentFox Schiff | Attorney Advertising
