Legal Requirements for Mitigating Bias in AI Systems

Wilson Sonsini Goodrich & Rosati

An alphabet soup of U.S. government agencies has taken steps toward regulating artificial intelligence (AI). Last year, Congress passed the National Artificial Intelligence Initiative Act, which creates numerous new initiatives, committees, and workflows on AI, with the goals of preparing the federal workforce, conducting and funding research, and identifying and mitigating risks. In November 2021, the White House announced efforts to create a bill of rights for an automated society. And members of Congress are introducing bills like the Algorithmic Accountability Act and the Algorithmic Fairness Act, aimed at promoting ethical AI decision making. On the state level, at least 17 state legislatures introduced AI legislation in 2021.

With this flurry of activity, you might think that no legal requirements implicating AI exist today. But you’d be mistaken. There are already many requirements on the books that touch on AI, and some pack a big punch. Here are some U.S. local, state, and federal requirements to be aware of:

  • State and Local Rules on AI in Hiring: The New York City Council passed a measure banning employers in New York City from using automated employment decision tools to screen job candidates unless the technology has been subject to a “bias audit” in the year before use of the tool. Illinois requires employers using AI interview technology to notify applicants about how the AI works and to obtain the applicant’s consent. Maryland similarly requires applicant consent before employers use facial recognition tools during interviews.
  • Federal Laws on Using AI for Eligibility Decisions: Under the Fair Credit Reporting Act (FCRA), a vendor that assembles information to automate decision making about an applicant’s eligibility for credit, employment, insurance, housing, or similar benefits or transactions may be a “consumer reporting agency.” This triggers duties for companies that use the vendor’s services, such as a requirement to provide an adverse action notice to the applicant. For example, suppose an employer purchases AI-based scores for assessing whether an applicant will be a good employee. In many circumstances, if the employer denies the applicant a job based on the score, it must, among other things, provide the applicant with an adverse action notice, which tells the applicant they can access the underlying information from the vendor, and correct it if it is false.
  • Civil Rights Laws: Although not specific to AI, federal prohibitions on discrimination based on protected characteristics such as race, color, sex or gender, religion, age, disability status, national origin, marital status, and genetic information apply regardless of whether a human or a machine engages in the discrimination. Indeed, in 2019, the Department of Housing and Urban Development charged Facebook with violating the Fair Housing Act, alleging that its advertising platform allowed advertisers to exclude certain categories of consumers from seeing housing ads based on protected characteristics such as race. If your AI tool discriminates against a protected class, whether intentionally or not, you could be the subject of a civil rights inquiry or lawsuit.
  • Privacy Laws: Given the possibilities of using AI in the healthcare industry, AI developers should be familiar with the requirements of the Health Insurance Portability and Accountability Act (HIPAA). When using consumer data to populate algorithms, companies should also consider federal and state privacy laws that require notice to consumers about how their information will be used, including HIPAA, the Children’s Online Privacy Protection Act (COPPA), and the Gramm-Leach-Bliley Act. California privacy laws give consumers the right to be informed about how data is being gathered about them and the right to access, delete, and opt out of certain disclosures of their data to third parties, which may implicate AI-based systems. California’s new privacy agency is tasked with issuing regulations that will require businesses to provide consumers with meaningful information about the logic involved in automated decision-making processes, a description of the likely outcome of the process with respect to the consumer, and the right to opt out. Virginia’s and Colorado’s new privacy laws will also require businesses to offer an opt-out for certain automated processing of consumers’ data. And several state laws, such as Illinois’ Biometric Information Privacy Act (BIPA), require notice and consent before collecting biometric identifiers, which may feed into algorithms.
  • Prohibitions on Unfair or Deceptive Practices: The FTC Act and corresponding state laws prohibit unfair or deceptive practices. For example, if you make false or unsubstantiated claims about lack of bias in your algorithm, that could be a deceptive practice. The Federal Trade Commission has also stated that using an algorithm that discriminates against protected classes could be an unfair practice.

The consequences for violating these laws can be severe. For example, federal agencies can seek and obtain civil penalties for violations of HIPAA and COPPA. The Fair Credit Reporting Act, civil rights laws, and certain state privacy laws like BIPA include private rights of action, where plaintiffs often seek and obtain significant damages.

So, what should companies creating and using algorithms do now to avoid running afoul of these requirements? At the very least, they should be thinking about these issues, asking questions, evaluating risks, and mitigating those risks. Here are some tips:

  • Develop cross-disciplinary, diverse teams to develop and review algorithms: Lawyers, engineers, economists, data scientists, ethicists, and others might spot different issues. Creating diverse teams with different viewpoints and life experiences is essential to any effort to mitigate bias.
  • Educate your teams on the causes of bias and how to mitigate them: One cause of bias might be a lack of diverse representation in the data set used to train the algorithm. But even if the data set is diverse, it might replicate historical patterns of bias. A few years ago, ProPublica published a study finding that, under a risk-assessment algorithm judges used to determine whether defendants should be released on bail, Black defendants were twice as likely as white defendants to be misclassified as having a higher risk of violent recidivism. Institutional biases in the U.S. criminal justice system may have been responsible for this outcome.
  • Use a risk-based approach to determine appropriate solutions: An algorithmic decision about who sees an ad for a financial or educational opportunity should get more scrutiny than a decision about a shoe ad. An algorithm that determines who gets a job or credit should get more scrutiny than one that determines whether a customer gets a product upgrade. Indeed, if you are making eligibility decisions based on an algorithm, consider whether you need to comply with the FCRA. At the same time, even lower-risk decisions merit further questioning and internal discussion.
  • Ask questions, and develop and document programs, policies, and procedures based on the answers: What will the automated decision do? Which groups are you worried about when it comes to training data errors, disparate treatment, and disparate impact? How will potential bias be detected, measured, and corrected? What do you hope to gain from the algorithm, and what are the potential bad outcomes? How open will you make the process of developing the algorithm? A good set of sample questions can be found here.
  • Consider effective ways to test the algorithm before deploying it: If the algorithm produces a disparate impact on different groups, ask additional questions. Do you need additional training data? Do you need human intervention or review before making a decision based on the data? A minimal sketch of one such pre-deployment check appears after this list.
  • Consider periodic audits: Periodic audits can surface ongoing problems with an algorithm. Depending on the risk level of the algorithmic decision, these audits can be internal or external. Companies may also want to consider vetting their programs with outside advocacy groups.
  • Comply with privacy laws: Be aware of the legal requirements described above, and make sure you’re complying with your company’s policies on use of consumer data. This includes abiding by the privacy representations you make to consumers.
  • Consider how you would explain any algorithmic decisions: Some may say it’s too difficult to explain the multitude of factors that might affect algorithmic decision making, but under the Fair Credit Reporting Act, such an explanation is required in some circumstances. For example, if a credit score is used to deny credit or to offer credit on less favorable terms, the law requires that consumers be given notice, a description of the score, and the key factors that adversely affected the score. If you are using AI to make decisions about consumers in any context, consider how you would explain your decision to your customer if asked. A sketch of how key adverse factors might be surfaced from a simple scoring model also follows this list.
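
To make the pre-deployment testing point more concrete, here is a minimal sketch in Python of a disparate impact check based on the commonly cited “four-fifths rule” heuristic. The column names, example data, and 0.8 benchmark are assumptions for illustration only; they are not a legal standard, and passing this check does not establish compliance with any of the laws discussed above.

```python
# Illustrative only: a minimal disparate impact check using the "four-fifths rule"
# heuristic. Column names ("group", "selected") and the 0.8 threshold are
# assumptions for this sketch, not legal standards or a required methodology.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str = "group",
                    outcome_col: str = "selected") -> pd.Series:
    """Share of each group receiving the favorable outcome (1 = selected)."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratios(df: pd.DataFrame, group_col: str = "group",
                            outcome_col: str = "selected") -> pd.Series:
    """Each group's selection rate divided by the highest group's rate."""
    rates = selection_rates(df, group_col, outcome_col)
    return rates / rates.max()

if __name__ == "__main__":
    # Hypothetical screening results: 1 = advanced by the tool, 0 = rejected.
    results = pd.DataFrame({
        "group":    ["A"] * 100 + ["B"] * 100,
        "selected": [1] * 60 + [0] * 40 + [1] * 40 + [0] * 60,
    })
    ratios = disparate_impact_ratios(results)
    print(ratios)
    # Flag groups whose ratio falls below the commonly cited 0.8 benchmark.
    flagged = ratios[ratios < 0.8]
    if not flagged.empty:
        print("Groups below the four-fifths benchmark; ask additional questions:")
        print(flagged)
```

In practice, the appropriate fairness metric (selection rates, error rates, calibration, and so on) depends on the decision at issue and should be chosen with input from counsel and domain experts.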
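
And as a rough illustration of the explainability point, the following sketch shows how the factors that most lowered an applicant’s score might be identified for a simple linear scoring model. The feature names, weights, and baseline values are hypothetical, and a real adverse action notice under the FCRA has specific content requirements that this sketch does not address.

```python
# Illustrative only: for a simple linear scoring model, list the factors that
# pulled an applicant's score down the most (sometimes called "reason codes").
# The feature names, weights, and baseline values below are hypothetical.
from typing import Dict, List, Tuple

def adverse_factors(weights: Dict[str, float],
                    applicant: Dict[str, float],
                    baseline: Dict[str, float],
                    top_n: int = 3) -> List[Tuple[str, float]]:
    """Rank features by how much they lowered the score relative to a baseline."""
    contributions = {
        name: weights[name] * (applicant[name] - baseline[name])
        for name in weights
    }
    negatives = [(name, c) for name, c in contributions.items() if c < 0]
    return sorted(negatives, key=lambda item: item[1])[:top_n]

if __name__ == "__main__":
    weights   = {"payment_history": 2.0, "utilization": -1.5, "account_age_years": 0.5}
    applicant = {"payment_history": 0.6, "utilization": 0.9, "account_age_years": 2.0}
    baseline  = {"payment_history": 0.9, "utilization": 0.3, "account_age_years": 8.0}
    for name, impact in adverse_factors(weights, applicant, baseline):
        print(f"{name}: {impact:+.2f}")
```

More complex models typically require purpose-built explanation techniques, which is another reason to consider explainability before settling on a modeling approach.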

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Wilson Sonsini Goodrich & Rosati | Attorney Advertising
