Director of FTC’s Consumer Protection Bureau Gives Guidance on Consumer Protection Risks Associated with AI and Algorithms

Wilson Sonsini Goodrich & Rosati

These days, most companies are focused on the myriad legal, health and safety, and financial issues caused by COVID-19. While the firm is actively monitoring these issues,[1] we also want to keep you abreast of other developments that may be relevant to your business. Here, we provide an overview of guidance recently offered by Andrew Smith, the director of the Federal Trade Commission's (FTC) Bureau of Consumer Protection, on how to manage consumer protection risks associated with artificial intelligence (AI) and algorithms.[2] While recognizing the significant potential of AI, Smith explained that, to avoid consumer protection problems, companies should ensure that their use of AI tools is transparent, explainable, fair, and empirically sound, while fostering accountability.

More detail on Smith's guidance on each of these points is included below.

Transparency

  • Smith noted that companies should be careful not to mislead when using AI tools, such as chatbots, to interact with consumers. He explained that this is an area where the FTC has been active, taking action against companies that used fake dating profiles to convince consumers to sign up for a dating service,[3] and that sold fake followers, subscribers, and "likes" to enhance other companies' and individuals' social media influence.[4]
  • According to Smith, companies should also be transparent about their collection of sensitive data and consumers' choices with respect to the collection of that data. He noted that failing to do so could give rise to an FTC action, using the FTC's recent action against Facebook as an example.[5]
  • Finally, to be more transparent, Smith said that companies should give consumers adverse action notices where legally required to do so. He explained that companies that make automated decisions based on information received from a "consumer reporting agency" (CRA) may be required to give consumers adverse action notices under the Fair Credit Reporting Act (FCRA).[6] For example, charging a consumer higher rent based on a risk score received from a background check company triggers a requirement under the FCRA to inform the consumer of their right to access the information received about them and to correct inaccurate information.

Explainability

  • According to Smith, companies that deny consumers something of value based on algorithmic decision-making should be able to explain why. He noted that companies that use AI to make decisions about consumers in any context should be able to explain to consumers what data is used in their models and how that data is used to arrive at a decision.
  • When using algorithms to assign risk scores to consumers, Smith explained that companies should also disclose the key factors that affected the score in order of importance (a simple illustration of such a ranking appears after this list).
  • Smith also noted that companies should notify consumers if they change the terms of a deal based on automated tools. As an example, he pointed to an FTC action from over a decade ago against a subprime credit marketer that failed to disclose that it used a behavioral scoring model to reduce consumers' credit limits.[7]
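
To make this concrete, the following Python sketch shows one way a company might rank the key factors behind an individual risk score for disclosure purposes. It assumes a hypothetical additive scoring model; the factor names and weights are invented for illustration and are not drawn from Smith's guidance.

    # A minimal, illustrative sketch (not the FTC's or any particular
    # lender's method): for a simple additive risk-scoring model, rank the
    # factors that most affected an individual consumer's score so they can
    # be disclosed "in order of importance." All names and weights are
    # hypothetical.

    MODEL_WEIGHTS = {              # hypothetical model coefficients
        "payment_history": -3.0,   # negative weight lowers the risk score
        "utilization_pct": 0.8,
        "recent_inquiries": 1.5,
        "account_age_years": -0.5,
    }

    def key_factors(applicant: dict[str, float], top_n: int = 4) -> list[tuple[str, float]]:
        """Return factors ranked by the size of their contribution to the score."""
        contributions = {
            name: MODEL_WEIGHTS[name] * applicant[name] for name in MODEL_WEIGHTS
        }
        return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]

    applicant = {"payment_history": 0.2, "utilization_pct": 85.0,
                 "recent_inquiries": 3.0, "account_age_years": 2.0}
    for factor, contribution in key_factors(applicant):
        print(f"{factor}: contribution {contribution:+.1f}")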

Fairness

  • To ensure fairness, Smith encouraged companies to test their algorithms. He explained that testing should take place before an algorithm is deployed and at regular intervals afterwards to ensure it does not discriminate against or disparately impact a protected class.
  • Similarly, Smith suggested that companies evaluate inputs and outputs for potential discrimination issues. According to Smith, companies should review the types of information that go into their models to determine whether they include ethnically based factors or proxies for such factors, such as census tract. He also said that companies should consider testing their outputs to ensure they are not discriminating against or disparately impacting a protected class (one common output check is sketched after this list).
  • Smith also asked companies to consider giving consumers access to the information they use to make decisions about them and allowing consumers to dispute the accuracy of that information even if they are not legally required to do so under the FCRA.
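
One widely used output check, offered here as an illustration rather than a legal standard endorsed in Smith's post, is to compare favorable-outcome rates across groups and flag ratios below the rough "four-fifths" screen sometimes used in disparate impact analysis. The groups and decisions below are hypothetical.

    # Illustrative disparate impact screen: compare each group's approval
    # rate to the highest group's rate and flag ratios under 0.8 (the
    # "four-fifths" rule of thumb). Group labels and data are hypothetical.

    from collections import defaultdict

    def disparate_impact_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
        """outcomes: (group, approved) pairs. Returns each group's approval
        rate divided by the highest group's approval rate."""
        totals, approvals = defaultdict(int), defaultdict(int)
        for group, approved in outcomes:
            totals[group] += 1
            approvals[group] += approved
        rates = {g: approvals[g] / totals[g] for g in totals}
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items()}

    decisions = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
              + [("group_b", True)] * 55 + [("group_b", False)] * 45
    for group, ratio in disparate_impact_ratios(decisions).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} -> {flag}")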

Empirically Sound

  • Smith flagged the importance of ensuring that the data used in models is accurate and up to date. He explained that companies that compile and provide consumer data to others for use in eligibility decisions about consumers may be acting as CRAs and be required under the FCRA to implement procedures to ensure the data is accurate and up to date. He also noted that companies that furnish data about their own customers to others for use in automated eligibility decisions may have similar accuracy obligations under the FCRA.
  • Smith encouraged companies to take steps to ensure their AI tools are "empirically derived, demonstrably and statistically sound." For example, he suggested that companies assess whether their tools are: based on data derived from an empirical comparison of sample groups; developed, validated, and periodically reevaluated using accepted statistical principles and methodology; and adjusted as necessary to maintain predictive ability. A simple sketch of periodic revalidation follows.
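
As a rough illustration of periodic reevaluation, the sketch below rechecks a model's accuracy on fresh labeled data against its validation baseline and flags degradation. This is one possible monitoring approach, not a regulatory requirement; the baseline, tolerance, and data are hypothetical.

    # Illustrative periodic revalidation: compare current accuracy on fresh
    # labeled data against the accuracy recorded at initial validation and
    # flag the model if predictive ability has slipped. All numbers here
    # are hypothetical.

    def accuracy(predictions: list[bool], actuals: list[bool]) -> float:
        return sum(p == a for p, a in zip(predictions, actuals)) / len(actuals)

    def needs_recalibration(baseline_acc: float, current_acc: float,
                            tolerance: float = 0.05) -> bool:
        """Flag the model if predictive ability has dropped beyond tolerance."""
        return (baseline_acc - current_acc) > tolerance

    baseline = 0.91                      # accuracy at initial validation
    preds   = [True, True, False, True, False, True, True, False]
    actuals = [True, False, False, True, False, True, False, False]
    current = accuracy(preds, actuals)
    print(f"current accuracy: {current:.2f}")
    if needs_recalibration(baseline, current):
        print("Predictive ability degraded -- re-estimate or adjust the model.")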

Fostering Accountability

  • To avoid risks of bias or other harm to consumers, Smith identified four key questions that companies should ask before using an algorithm: (1) How representative is your data set? (2) Does your model account for biases? (3) How accurate are your predictions based on big data? (4) Does your reliance on big data raise ethical or fairness concerns? (A simple check relevant to the first question is sketched after this list.)
  • Smith noted that companies that sell AI tools to other businesses should consider whether access controls and other technologies can be used to prevent unauthorized use.
  • Finally, Smith discussed the importance of evaluating accountability mechanisms, suggesting that companies consider using independent standards or experts to test their AI tools for risks of bias or other harm to consumers.
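
For Smith's first question, one hypothetical representativeness check might compare the group composition of a training sample against a reference population and flag large gaps. The groups, shares, and threshold below are invented for illustration.

    # Illustrative data-representativeness check: compare each group's share
    # of the training sample to its share of a reference population and
    # surface deviations beyond a chosen threshold. All values hypothetical.

    from collections import Counter

    def representation_gaps(sample: list[str], population_shares: dict[str, float],
                            threshold: float = 0.05) -> dict[str, float]:
        """Return groups whose sample share deviates from the reference
        population share by more than `threshold`."""
        counts = Counter(sample)
        n = len(sample)
        gaps = {g: counts.get(g, 0) / n - share
                for g, share in population_shares.items()}
        return {g: gap for g, gap in gaps.items() if abs(gap) > threshold}

    sample = ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50
    population = {"group_a": 0.60, "group_b": 0.30, "group_c": 0.10}
    for group, gap in representation_gaps(sample, population).items():
        print(f"{group}: sample share off by {gap:+.2%}")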

[1] For more information, visit our COVID-19 Client Advisory Resource page here: https://www.wsgr.com/en/services/practice-areas/COVID-19.html.

[2] Andrew Smith, “Using Artificial Intelligence and Algorithms,” FTC Business Blog, April 8, 2020, https://www.ftc.gov/news-events/blogs/business-blog/2020/04/using-artificial-intelligence-algorithms.

[3] FTC v. Ruby Corp. et al., FTC Matter No. 1523284 (2016), https://www.ftc.gov/enforcement/cases-proceedings/152-3284/ashley-madison.

[4] FTC v. Devumi, LLC & German Calas, Jr., FTC Matter No. 1823066 (2019), https://www.ftc.gov/enforcement/cases-proceedings/182-3066/devumi-llc.

[5] In the Matter of Facebook, Inc., FTC Matter No. C-4365 (2019), https://www.ftc.gov/enforcement/cases-proceedings/092-3184/facebook-inc.

[6] Under the FCRA, a CRA is any person that compiles and sells consumer information that is used or expected to be used for credit, employment, insurance, housing, or similar decisions about consumers’ eligibility for certain benefits and transactions.

[7] FTC v. CompuCredit Corp. & Jefferson Capital Sys., LLC, FTC Matter No. 0623212 (2008), https://www.ftc.gov/enforcement/cases-proceedings/062-3212/compucredit-corporation-jefferson-capital-systems-llc.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Wilson Sonsini Goodrich & Rosati | Attorney Advertising
