Four Federal Agencies Reaffirm Authority to Monitor Automated Systems for Unlawful Discrimination and Other Federal Law Violations

On April 25, 2023, four federal agencies—the Consumer Financial Protection Bureau (CFPB), the Federal Trade Commission (FTC), the U.S. Department of Justice (DOJ), and the U.S. Equal Employment Opportunity Commission (EEOC)—released a joint statement pledging vigorous use of their respective authorities to protect against discrimination and bias in automated systems.

Although the statement breaks no new ground, it reflects federal agencies' concern about how quickly AI and other automated systems are advancing and, we think, amounts to a tacit acknowledgment that comprehensive AI legislation or regulation is unlikely in the near term. The statement asserts that the agencies' enforcement authorities apply to automated systems and that those systems may contribute to unlawful discrimination or otherwise violate federal law. Each of these agencies has already issued guidance or taken action concerning automated systems, stressing that its existing legal authorities reach innovative technologies even where it is not immediately apparent how those authorities apply. The joint statement is a reminder that entities deploying automated systems to make important decisions about individuals must approach that deployment thoughtfully to ensure those decisions comply with the law.

Broad Definition of Automated Systems

The joint statement defines “automated systems” broadly; it covers not just AI, but any software and algorithmic processes “that are used to automate workflows and help people complete tasks or make decisions.” This expansive definition encompasses many algorithms used by businesses, as well as other applications that leverage consumer data.

The statement focuses on three sources of potential discrimination:

Data and Datasets - Automated systems need large amounts of data to find patterns or correlations and then apply those patterns to new data. Issues with the underlying data can therefore affect how the system makes decisions. For example, outcomes can be skewed by unrepresentative datasets, and datasets can contain baked-in biases that lead to discriminatory outcomes when the system is applied to new data (a short illustration follows this list).

Model Opacity and Access - Automated systems are complex, and most people, sometimes even those who develop the tools, do not know exactly how these systems work. This lack of transparency makes it difficult for entities to assess whether their automated systems are fair.

Design and Use - Developers might design an automated system based on flawed assumptions about its users, relevant context, or the underlying traditional practices that the system is replacing.
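
To make the dataset concern concrete, the following minimal Python sketch is our own hypothetical illustration, not drawn from the joint statement; the groups, scores, and threshold rule are all assumptions. It shows how a decision rule derived from a sample dominated by one group can yield different selection rates for an underrepresented group:

```python
# Hypothetical illustration only: all groups, scores, and rules are invented.
import random

random.seed(0)

def make_applicants(group, n, score_mean):
    # Each applicant gets a single synthetic "score" feature.
    return [{"group": group, "score": random.gauss(score_mean, 10)} for _ in range(n)]

# Training sample: group "A" is heavily overrepresented (assumed numbers).
training = make_applicants("A", 900, 60) + make_applicants("B", 100, 55)

# A naive decision rule: approve anyone scoring above the training sample's
# mean. Because group A dominates the sample, the threshold reflects group A.
threshold = sum(a["score"] for a in training) / len(training)

def selection_rate(applicants, group):
    members = [a for a in applicants if a["group"] == group]
    approved = sum(1 for a in members if a["score"] >= threshold)
    return approved / len(members)

# Evaluated on a balanced population, the skew in the training data shows up
# as a gap in selection rates between the two groups.
population = make_applicants("A", 1000, 60) + make_applicants("B", 1000, 55)
for g in ("A", "B"):
    print(f"group {g}: selection rate {selection_rate(population, g):.2f}")
```

In this sketch the gap arises not from any intent to discriminate but from the composition of the training sample, which is precisely the kind of skew the agencies highlight.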

Existing Agency Guidance

The four agencies that issued the joint statement are among the federal agencies responsible for enforcing civil rights, non-discrimination, fair competition, and consumer protection laws. All four have previously expressed concern about the potential harms of AI systems through statements, guidance, or enforcement actions. For example, in a 2022 circular, the CFPB confirmed that federal consumer protection laws apply regardless of the technology being used, and that the complexity of the technology used to make a credit decision is not a defense for violating those laws. The FTC has previously issued a report evaluating the use and impact of AI in combating online harms, highlighting that AI tools can be discriminatory and can incentivize reliance on invasive forms of commercial surveillance. The FTC has also warned market participants that using automated tools with discriminatory impacts might violate the FTC Act, and that making unsubstantiated claims about AI or deploying AI before taking steps to evaluate and minimize risk could be violations as well.

Takeaways and Conclusion

The joint statement and recent agency guidance make clear that the CFPB, FTC, DOJ, and EEOC will monitor the development and use of automated systems to protect consumers, promote fair competition, and prevent discrimination. Companies using automated systems should keep this guidance in mind and assess the risks and potential harmful impacts of those systems. In particular:

  1. Companies using automated systems should establish sound governance processes that include (a) inventorying automated systems; (b) assigning risk levels to those systems based on factors such as their potential impact on consumers and on current and prospective employees; (c) documenting system design and testing; and (d) implementing a robust change management process.
  2. Companies should understand what biases might spread from skewed datasets. For instance, datasets containing disproportionate data points about certain demographic groups could lead automated systems to perpetuate discrimination.
  3. Entities should understand how their automated systems work and make decisions, so they can evaluate and address any potential biases in the system's design that could lead to discriminatory outcomes (a simple screening heuristic is sketched after this list).
  4. Businesses should understand who will use their AI systems and in what context, in order to mitigate unintended discriminatory outcomes.
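
As one concrete starting point for items 2 and 3, the hypothetical Python sketch below applies the “four-fifths rule” drawn from the federal Uniform Guidelines on Employee Selection Procedures, a common first-pass heuristic that flags any group whose selection rate falls below 80% of the highest group's rate. The joint statement does not prescribe this or any other test; failing the check is a signal for further review, not a legal conclusion:

```python
def four_fifths_check(selection_rates):
    """Flag groups whose selection rate is below four-fifths (80%) of the
    highest group's rate, per the Uniform Guidelines' screening heuristic."""
    highest = max(selection_rates.values())
    return [g for g, rate in selection_rates.items() if rate < 0.8 * highest]

# Hypothetical selection rates (approvals / applications, per group) that an
# entity might compute from its own decision logs.
rates = {"group_1": 0.60, "group_2": 0.44, "group_3": 0.58}
print("flagged for further review:", four_fifths_check(rates))  # ['group_2']
```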

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© WilmerHale | Attorney Advertising
