FTC Provides Guidance on Using Artificial Intelligence and Algorithms

Patrick Law Group, LLC

The Director of the Federal Trade Commission (FTC) Bureau of Consumer Protection recently issued guidance on the agency's Tips and Advice blog explaining how companies can manage the consumer protection risks that may arise from using artificial intelligence and algorithms. The blog post includes the following key takeaways:

  • Be Transparent. Companies should ensure that consumers are not misled about their interactions with AI tools. The FTC offers the example of the 2017 Ashley Madison complaint, which alleged in part that the website deceived consumers by using bots to send male users fake messages and entice them to subscribe to the service. Companies should also be transparent when collecting sensitive data; secretly collecting audio or visual data could lead to an enforcement action.

The FTC guidance also notes that companies that make automated decisions based on information from a third-party vendor may be required to provide the consumer with an “adverse action” notice, which is triggered when a negative action is taken against an individual because of information in a consumer report. For example, if a company uses reports from a background check company to predict whether an individual will be a good tenant, and the background check company’s AI tool uses credit reports to make the prediction, the company may be required to provide an adverse action notice if it relies on the report to deny someone an apartment or charge higher rent.

  • Explain Your Decision to the Consumer. Companies that deny consumers something of value based on algorithmic decision-making should explain why. Although the FTC acknowledges that it may not be easy to explain the many factors involved in algorithmic decision-making, companies must know what data is used (and how it is used) to train their algorithms in order to make adequate disclosures to individuals. In addition, companies that use algorithms to assign credit scores to consumers should disclose the key factors that adversely affected an individual’s credit score (a simplified illustration of such “key factor” reporting appears below).

The blog post also warns that a company must notify consumers if it changes the terms of an agreement based on automated tools. For example, if a company decides to use an AI tool to determine whether it will reduce a consumer’s credit score (e.g., by taking into account purchases made by the consumer) and this practice was not initially disclosed, the company must now disclose it to the consumer.
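To make the “key factors” point concrete, the following is a minimal sketch assuming a simple linear scoring model; the feature names, weights, and baseline values are invented for illustration, and real credit-scoring systems and the reason codes they generate are considerably more involved:

```python
# Hypothetical sketch: surfacing "key factors" (reason codes) from a simple
# linear credit-scoring model. Feature names, weights, and baselines are
# invented for illustration only.

BASELINE = {              # assumed population-average value for each input
    "utilization": 0.30,  # fraction of revolving credit in use
    "late_payments": 0.5, # late payments in the last 24 months
    "inquiries": 1.0,     # recent hard credit inquiries
}

WEIGHTS = {               # negative weight: higher values lower the score
    "utilization": -120.0,
    "late_payments": -35.0,
    "inquiries": -10.0,
}

def adverse_factors(applicant: dict, top_n: int = 2) -> list[str]:
    """Return the factors that pulled this applicant's score furthest
    below the baseline, ordered by size of impact."""
    impact = {
        name: WEIGHTS[name] * (applicant[name] - BASELINE[name])
        for name in WEIGHTS
    }
    # A negative impact means the factor lowered the score vs. baseline.
    negatives = sorted((v, k) for k, v in impact.items() if v < 0)
    return [name for _, name in negatives[:top_n]]

applicant = {"utilization": 0.85, "late_payments": 3, "inquiries": 4}
print(adverse_factors(applicant))  # ['late_payments', 'utilization']
```

The point of the sketch is only that a company able to attribute a score change to specific inputs is in a position to make the disclosures the FTC describes; a model whose factors cannot be identified cannot support them.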

  • Ensure That Your Decisions Are Fair. Although AI tools have many beneficial uses, they can also result in discrimination against a protected class. For example, if a company makes credit decisions based on consumers’ ZIP codes and this results in a disparate impact on a protected group, the company may be in violation of the Equal Credit Opportunity Act. Companies should also give consumers an opportunity to correct information used to make decisions about them.
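As a purely illustrative sketch of how such a disparity might be screened for, the following applies the “four-fifths rule” comparison of approval rates between groups; the data and threshold are hypothetical, and this heuristic is not the legal test for disparate impact under the Equal Credit Opportunity Act:

```python
# Illustrative disparate-impact screen using the "four-fifths rule"
# heuristic. A rough diagnostic only, not a legal standard.

def approval_rate(decisions: list[bool]) -> float:
    """Fraction of applicants approved (True = approved)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected: list[bool], reference: list[bool]) -> float:
    """Ratio of the protected group's approval rate to the reference
    group's. A ratio below ~0.8 is a conventional red flag."""
    return approval_rate(protected) / approval_rate(reference)

# Hypothetical outcomes for two groups of applicants.
protected_group = [True, False, False, True, False, False, False, True]
reference_group = [True, True, False, True, True, False, True, True]

ratio = disparate_impact_ratio(protected_group, reference_group)
print(f"impact ratio: {ratio:.2f}")  # 0.50 here
if ratio < 0.8:
    print("Approval rates diverge; review model inputs (e.g., ZIP code).")
```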
  • Ensure That Your Data and Models Are Robust and Empirically Sound. If a company provides consumer data to third parties to train algorithms that will make decisions about consumer eligibility for credit, employment, insurance, housing, or similar benefits, the company may be considered a consumer reporting agency. Consequently, the company would be required to comply with the Fair Credit Reporting Act (FCRA) and would be responsible for ensuring that the data is accurate. The company would also be required to give consumers access to their own information and the opportunity to correct it.

In addition, even if a company is not deemed to be a consumer reporting agency, companies that provide data about their customers to third parties for use in automated decision-making may have an obligation to ensure the data is accurate. Under the FCRA, a company is considered a “furnisher” if it provides data about customers to consumer reporting agencies. Furnishers are prohibited from providing data they have reason to believe may be inaccurate, and they are required to maintain written policies and procedures to ensure the information they furnish is accurate and to investigate consumer disputes related to such data.

  • Hold Yourself Accountable for Compliance, Ethics, Fairness, and Nondiscrimination. Before using an automated decision tool, companies should consider four key questions: whether the data set is representative, whether the model accounts for bias, whether the predictions are accurate, and whether reliance on big data raises ethical or fairness concerns. The FTC also stresses that companies should protect their algorithms from unauthorized use and consider whether access controls or other safeguards could prevent abuse. Lastly, companies may want to engage objective third parties to independently test their algorithms for potential problems.
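As one simplified illustration of the kind of self-audit these questions suggest, a company might compare its model’s prediction accuracy across demographic groups before deployment; the records, group labels, and tolerance below are invented for the example:

```python
# Simplified pre-deployment check: does the model predict equally well
# for every group in the data? Records and the 5-point tolerance are
# hypothetical.

from collections import defaultdict

def accuracy_by_group(records: list[dict]) -> dict[str, float]:
    """records: each has 'group', 'predicted', and 'actual' keys."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += int(r["predicted"] == r["actual"])
    return {g: hits[g] / totals[g] for g in totals}

records = [
    {"group": "A", "predicted": 1, "actual": 1},
    {"group": "A", "predicted": 0, "actual": 0},
    {"group": "A", "predicted": 1, "actual": 1},
    {"group": "B", "predicted": 1, "actual": 0},
    {"group": "B", "predicted": 0, "actual": 0},
    {"group": "B", "predicted": 0, "actual": 1},
]

scores = accuracy_by_group(records)
print(scores)  # {'A': 1.0, 'B': 0.33...}
if max(scores.values()) - min(scores.values()) > 0.05:
    print("Accuracy gap across groups; data set may not be representative.")
```

A large accuracy gap between groups is one signal that the training data was not representative or that the model encodes bias, which is exactly what the FTC’s four questions are meant to surface before deployment.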

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Patrick Law Group, LLC | Attorney Advertising
