FTC Announces Groundbreaking Action Against Rite Aid for Unfair Use of AI

WilmerHale

On December 19, 2023, the Federal Trade Commission (FTC) announced an enforcement action against the retail pharmacy Rite Aid for unfair practices associated with its use of a facial recognition technology (FRT) surveillance system to deter theft in its retail stores and for violations of a previous FTC order. The FTC’s 54-page complaint alleges that Rite Aid (1) failed to take reasonable measures to prevent harm to consumers from its use of facial recognition technology and (2) violated provisions of a 2010 FTC Order that required a comprehensive information security program and document retention for vendor management.

This action marks an important moment in AI regulatory history: it is the first time the FTC has taken enforcement action against a company for using AI in an allegedly biased and unfair manner. It also continues a trend in FTC enforcement actions of relying on data and algorithmic disgorgement as a remedy: Rite Aid (and any third parties with whom it shared covered information) must delete all of Rite Aid’s biometric data that was processed unfairly, along with any AI models or algorithms associated with such data.

This enforcement decision also serves as a warning and a guidepost for companies using and developing AI systems, especially those that use facial recognition technology to identify individuals and make automated decisions about them that could result in potential harm. The FTC’s enforcement action in this case aligns with the unfairness criteria outlined by the agency in its May 2023 policy statement warning about misuses of biometric information and harm to consumers. Moreover, this enforcement action highlights the importance of conducting risk assessments to understand potential consumer impacts, implementing bias mitigation strategies, overseeing vendors, training employees, and complying with company security standards at every stage of the procurement and deployment of an AI system (vendor selection, model development, model maintenance, and post-deployment monitoring).

In this post, we summarize the FTC’s complaint, including its formal counts against Rite Aid and supporting allegations. We also present a consolidated assessment of the stipulated order and some key takeaways that companies currently deploying or interested in deploying AI technology should note.

The Complaint

The complaint details how Rite Aid deployed an FRT surveillance system in hundreds of its retail pharmacy locations, primarily in urban, low-income, and racially diverse communities, from October 2012 until July 2020. The company contracted with two different third-party vendors to develop the AI system, which involved creating a database of images of “persons of interest” composed of (1) individuals whom Rite Aid employees enrolled in the system for suspected or actual criminal activity at a Rite Aid store and (2) individuals who had been flagged by law enforcement in “Be On the Look Out” alerts. Rite Aid encouraged its store-level employees to enroll as many individuals as possible. The FRT database contained the personal details and images, often low-quality still shots from CCTV cameras or cell phone photos, of “tens of thousands” of individuals. Rite Aid had no data retention policy; the FRT database stored these images indefinitely.
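The complaint describes the contents of these enrollment records but not any schema. For concreteness, a record of the kind the FTC describes might be modeled roughly as follows; every field name here is an illustrative assumption:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class EnrollmentSource(Enum):
    # The two enrollment paths described in the complaint.
    STORE_EMPLOYEE = "suspected or actual criminal activity at a store"
    LAW_ENFORCEMENT = "'Be On the Look Out' alert"

@dataclass
class Enrollment:
    """One 'person of interest' record; field names are hypothetical."""
    enrollment_id: str
    image: bytes             # often a low-quality CCTV still or cell phone photo
    personal_details: dict   # per the complaint, records included personal details
    source: EnrollmentSource
    enrolled_at: datetime
    # Notably absent, per the FTC: any retention or expiration field.
```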

Rite Aid’s FRT surveillance system captured images of all customers entering a store and compared them against the database to identify potential matches. If a match cleared a specific confidence score, the system would alert store employees and provide instructions (e.g., “observe and provide customer service” or “notify police”) based on the match information. These match alerts generally did not show the actual confidence score to store employees. However, as the FTC notes at length in its complaint, the system frequently generated false-positive matches that “subjected consumers to surveillance, removal from stores, and emotional and reputational harm,” among other harms. Rite Aid instructed its store employees not to tell shoppers or the media about the use of this FRT.
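The complaint does not disclose how the matching pipeline was implemented, but the alert flow it describes can be approximated in a short sketch. The numeric thresholds and the mapping of scores to instructions below are hypothetical placeholders, not figures from the complaint:

```python
from dataclasses import dataclass

# Hypothetical thresholds; the complaint does not disclose the confidence
# scores Rite Aid's vendors actually used.
NOTIFY_POLICE_THRESHOLD = 0.95
OBSERVE_THRESHOLD = 0.80

@dataclass
class MatchAlert:
    enrollment_id: str
    instruction: str
    # Per the complaint, the underlying confidence score was generally
    # NOT shown to the store employee receiving the alert.

def evaluate_match(enrollment_id: str, confidence: float) -> MatchAlert | None:
    """Turn a face-match confidence score into an employee-facing alert."""
    if confidence >= NOTIFY_POLICE_THRESHOLD:
        return MatchAlert(enrollment_id, "notify police")
    if confidence >= OBSERVE_THRESHOLD:
        return MatchAlert(enrollment_id, "observe and provide customer service")
    return None  # below threshold: no alert generated
```

Because the score is stripped before the alert reaches the employee, the human operator has no signal about how reliable any given match is, which compounds the false-positive problem the complaint emphasizes.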

The FTC’s allegations roll up into two counts against Rite Aid, summarized below. Notably, although the FTC could have obtained monetary penalties against Rite Aid given the agency’s decision to enforce a prior order, the consent order does not require Rite Aid to pay money (presumably because the company is in bankruptcy proceedings).

1. Unfair facial recognition technology practices
The FTC contends that Rite Aid failed to:

  • Assess, monitor, or mitigate the risks of incorrectly identifying consumers, with such errors occurring more frequently based on race or gender. As a result, Black, Asian, Latinx, and women consumers were at higher risk of being misidentified by false-positive matches from Rite Aid’s FRT.
  • Reasonably assess or even inquire about the accuracy of the FRT before deploying the technology. In fact, both of its vendors’ contracts expressly disclaimed any warranty as to the accuracy or reliability of the results.
  • Enforce image quality controls, increasing the likelihood of false-positive match alerts. The FTC alleges that Rite Aid’s image quality policies demonstrated that it understood how poor image quality could lead to false alerts, and yet Rite Aid did not implement any controls or oversight to ensure that enrollment photos complied with the policy.
  • Appropriately train or oversee store employees responsible for operating the FRT, including how to interpret and act on the match alerts. The available training materials also did not adequately address the possibility of false-positive matches. 
  • Regularly monitor or test the accuracy of the technology after deployment. According to the complaint, Rite Aid failed to adequately examine the accuracy of match alerts, document outcomes, monitor the frequency of false-positive matches, or address issues with problematic enrollments. (A sketch of what such monitoring could look like follows this list.)
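The order does not prescribe a particular mechanism, but the post-deployment monitoring the FTC says was missing could look something like the following sketch. The record shape, outcome labels, and review threshold are illustrative assumptions:

```python
from collections import Counter

# Each entry records how an employee resolved a match alert; the field
# names and outcome labels are assumptions, not taken from the order.
alert_log: list[dict] = []

def record_outcome(enrollment_id: str, outcome: str) -> None:
    """outcome is one of: 'confirmed', 'false_positive', 'unresolved'."""
    alert_log.append({"enrollment_id": enrollment_id, "outcome": outcome})

def false_positive_rate() -> float:
    """Share of resolved alerts that turned out to be false positives."""
    resolved = [a for a in alert_log if a["outcome"] != "unresolved"]
    if not resolved:
        return 0.0
    return sum(a["outcome"] == "false_positive" for a in resolved) / len(resolved)

def problematic_enrollments(min_false_positives: int = 3) -> list[str]:
    """Enrollments that repeatedly trigger false alerts are candidates
    for review or removal from the database."""
    counts = Counter(a["enrollment_id"] for a in alert_log
                     if a["outcome"] == "false_positive")
    return [eid for eid, n in counts.items() if n >= min_false_positives]
```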

2. Unfair failure to implement or maintain a comprehensive information security program mandated by a previous FTC Order in 2010
The FTC also claimed that Rite Aid violated the 2010 Order, which mandated the implementation and maintenance of a comprehensive information security program and documentation of compliance. These violations included failures to:

  • Take reasonable steps to assess and select service providers capable of meeting the security standards for the personal information shared by Rite Aid. (Rite Aid also failed to maintain risk assessment documentation for these service providers.)
  • Conduct periodic security assessments of its service providers to ensure that they continued to meet the company’s security standards.
  • Contractually require its service providers to establish appropriate safeguards for the personal information shared by Rite Aid.

The Stipulated Order

The order contains 16 stipulations that Rite Aid agrees to in order to settle the case. In addition to submitting an annual certification of compliance, reporting any “covered incidents” (such as data breaches) to the FTC, and maintaining accurate records, Rite Aid:

1. Cannot use any Facial Recognition or Analysis System (defined broadly as “an automated biometric security or surveillance system that analyzes or uses depictions or images, descriptions, recordings, copies, measurements, or geometry of or related to an individual’s face to generate an output”) for 5 years.

2. Must delete covered biometric information and destroy any AI models or algorithms derived from that information (also called model disgorgement). Rite Aid also must notify any third parties with these data or models and instruct them to do the same.

3. Must establish and implement an Automated Biometric Security or Surveillance System Monitoring Program if Rite Aid continues to use the FRT surveillance system in its stores after the 5-year ban or if it wants to use another Automated Biometric Security or Surveillance System not subject to the ban. (Requirements for the monitoring program start on p. 7 of the order.)

4. Must establish and implement procedures to provide consumers with notice and a means for submitting complaints related to the outputs of the AI system if Rite Aid continues to use the FRT surveillance system in its stores after the 5-year ban or if it wants to use another Automated Biometric Security or Surveillance System not subject to the ban. (Requirements for the notice and complaint procedures start on p. 13 of the order.)

5. Must have retention limits for its biometric data prior to implementing any Automated Biometric Security or Surveillance System. (A minimal sketch of such a retention limit appears after this list.)

6. Must post clear and conspicuous notices at its retail locations and on its online platforms disclosing the company’s use of any Automated Biometric Security or Surveillance System that uses biometric information collected from consumers. The notices must contain information about:

a. The specific types of biometric information collected,
b. The types of outputs generated by the AI,
c. All purposes for which Rite Aid uses the FRT and its outputs, including any actions that store employees may take on account of the outputs, and
d. The timeframe for deletion of each type of biometric information used, as established in the (also mandated) data retention policies.

7. Cannot misrepresent its compliance with these orders.

8. Must establish and maintain a comprehensive information security program for vendors that protects the personal information shared by Rite Aid. (Requirements for the security program start on p. 17 of the order.)

9 & 10. Must periodically undergo a security assessment by an independent third party, report the results to the FTC, and cooperate with the assessor.
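As a rough illustration of the retention limit in provision 5 (the 30-day window and record shape are assumptions for the sketch, not figures from the order):

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention window; the order requires retention limits but
# does not supply this duration.
RETENTION_WINDOW = timedelta(days=30)

def purge_expired(records: list[dict]) -> list[dict]:
    """Drop biometric records older than the retention window.

    Each record is assumed to carry a timezone-aware 'collected_at'
    datetime. In practice, deletion would also have to propagate to any
    third parties and to models derived from the data, per the
    disgorgement requirement in provision 2.
    """
    cutoff = datetime.now(timezone.utc) - RETENTION_WINDOW
    return [r for r in records if r["collected_at"] >= cutoff]
```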

Key Takeaways 

In a statement accompanying the enforcement action, Commissioner Alvaro Bedoya said, “Section 5 of the FTC Act requires companies using technology to automate important decisions about people’s lives…to take reasonable measures to identify and prevent foreseeable harms.” He explained that the settlement with Rite Aid “offers a strong baseline for what an algorithmic fairness program should look like.”

Companies looking to ensure compliance and avoid attention from regulators for consumer-facing AI systems—especially those engaging with high-risk applications of AI that use personal and/or biometric information to make automated decisions about an individual—should look at this order for guidance. It highlights some of the FTC’s priorities in data privacy and AI governance enforcement, and it provides concrete steps companies can take to avoid AI bias and protect personal information. Here are some specific takeaways from this enforcement action:

Consider the what, how, and where—that is, the context—when deploying AI. As the FTC and other regulators warned companies in 2023, existing laws and authorities that protect against discrimination apply to new technologies, like AI, just as they apply to other practices. In addition to assessing the AI itself for any potential disparate outcomes based on race, ethnicity, or gender presentation, companies should be cognizant of the context in which the AI is being deployed. Here, the FTC noted that Rite Aid not only failed to assess false-positive rates across races and gender, but it also failed to consider how the locations it targeted for deploying its AI surveillance system—locations considered “urban” and “along public transportation routes”—would have disproportionate impacts on racial and ethnic minority communities.

FTC penalties for noncompliance in AI cases can include data deletion and algorithmic disgorgement. We have seen the FTC order algorithmic disgorgement (the deletion or destruction of algorithms or models) before, such as in its enforcement actions against Everalbum, Cambridge Analytica, and Ring. The Rite Aid enforcement action demonstrates that disgorgement remains a remedy the FTC actively seeks, one that can have meaningful consequences for companies.

Conducting regular risk assessments is important for high-risk AI that produces outputs that could potentially harm consumers. Provision 3 in the stipulated order can serve as a compliance checklist for companies looking to implement “automated biometric security or surveillance systems” or similar AI technology that makes automated decisions about consumers. In particular, the FTC is concerned about the risks of “physical, financial, or reputational harm to consumers” (including stigma and severe emotional distress) and states that these risk assessments should not only identify and address risks, but also consider whether there will be a disproportionate impact on consumers based on race, ethnicity, gender, sex, age, or disability, alone or in combination.
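One concrete way to operationalize that check is to break false-positive rates out by demographic group and flag large gaps. The record shape and the disparity threshold below are illustrative assumptions, not requirements drawn from the order:

```python
from collections import defaultdict

def group_false_positive_rates(outcomes: list[dict]) -> dict[str, float]:
    """outcomes: [{'group': str, 'false_positive': bool}, ...] (assumed shape)."""
    totals: dict[str, int] = defaultdict(int)
    false_positives: dict[str, int] = defaultdict(int)
    for o in outcomes:
        totals[o["group"]] += 1
        false_positives[o["group"]] += o["false_positive"]
    return {g: false_positives[g] / totals[g] for g in totals}

def flag_disparities(rates: dict[str, float], max_ratio: float = 1.25) -> list[str]:
    """Flag groups whose false-positive rate exceeds the best-performing
    group's rate by more than max_ratio (an illustrative threshold)."""
    if not rates:
        return []
    baseline = min(rates.values())
    if baseline == 0:
        return [g for g, r in rates.items() if r > 0]
    return [g for g, r in rates.items() if r / baseline > max_ratio]
```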

Ensure that your company has a strong training program for employees operating high-risk AI systems. Employees tasked with operating and/or acting on outputs from AI, like Rite Aid’s FRT surveillance system, should undergo regular training and understand the limitations of the technology. The FTC’s complaint details what in-store employees should have been trained on:

  • how to evaluate the quality of the images enrolled into the model,
  • how to visually compare the images (as the human operator) to see if their assessment agrees with the AI’s outcome, and 
  • how to understand the effects of bias that might be inherent in the data and model. 

Companies should be transparent about any use of AI that involves consumer personal information, especially biometric data. Provision 4 in the order explains what details should be in a consumer notice and how Rite Aid can set up a formal complaint procedure for affected consumers. For example, the notice should include the specific types of biometric information collected, the type of outputs generated, the purpose of the data collection and use, and the data retention policy. These requirements highlight the importance of transparency and consumer communication when deploying AI tools like automated biometric security or surveillance systems. 

Companies need to scrutinize contracts with vendors handling biometric data or other personal information. The FTC emphasizes the importance of vendor oversight and due diligence at every stage of the procurement process, from the initial selection to ongoing retention. Conducting periodic assessments of vendors’ capability to safeguard consumers’ personal information and retaining documentation of these efforts can help a company remain in compliance with FTC policy guidelines. 

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© WilmerHale | Attorney Advertising
