Artificial Intelligence Briefing: Feds Take Aim at Algorithmic Bias

Faegre Drinker Biddle & Reath LLP

Our latest briefing examines a fresh signal that the Federal Trade Commission is considering rulemaking, a groundbreaking settlement between the Justice Department and Meta over allegedly discriminatory algorithms, proposed AI legislation and other noteworthy developments at the federal level. We also cover important guidance issued by the California Department of Insurance and a much-anticipated insurance department hearing in the District of Columbia over potential bias in the auto insurance market.

Regulatory Developments and Government Activity

  • The Federal Trade Commission delivered a report to Congress warning about the use of artificial intelligence to combat online harms. The June 16 report lays out the FTC's latest thinking on AI, and any organization that uses algorithmic decision-making in a way that impacts consumers should take heed. Key takeaways include:
    • The importance (and limitations) of having a human in the loop.
    • The need for AI to be "meaningfully transparent, which includes the need for it to be explainable and contestable, especially when people’s rights are involved or when personal data is being collected or used."
    • Companies that use AI "must be accountable both for their data practices and for their results" and should consider independent audits and algorithmic impact assessments.
    • Companies that develop or use AI "are responsible for both inputs and outputs." They should "strive to hire and retain diverse teams, which may help reduce inadvertent bias or discrimination, and to avoid using training data and classifications that reflect existing societal and historical inequities."
  • The FTC sends another signal on possible rulemaking. The FTC is considering rulemaking to “ensure that algorithmic decision-making does not result in unlawful discrimination.” The agency could publish an advance notice of proposed rulemaking shortly, with the public comment period potentially ending in August. On June 22, Senators Ed Markey (D-MA), Elizabeth Warren (D-MA), Brian Schatz (D-HI), Cory Booker (D-NJ), Ron Wyden (D-OR), Tina Smith (D-MN) and Bernie Sanders (I-VT) sent a letter to FTC Chair Lina Khan urging the FTC to "build on its guidance regarding biased algorithms and use its full enforcement and rulemaking authority to stop damaging practices involving online data and artificial intelligence."
  • DOJ takes aim at discriminatory algorithms. The Justice Department and Meta (formerly known as Facebook) have entered into a settlement agreement that resolves allegations that Meta's advertising algorithms discriminate against Facebook users based on characteristics protected under the Fair Housing Act. Among other things, DOJ alleged that Meta's machine-learning algorithms violated the FHA by "steering ads for housing in majority-White neighborhoods disproportionately to White users and steering ads for housing in majority-Black neighborhoods disproportionately to Black users." Meta has until December to develop a new system for housing ads that addresses the problems detailed in the complaint.
  • Health Equity and Accountability Act of 2022 would address algorithmic bias in health care. The bill (H.R. 7585) would require the Secretary of Health and Human Services to establish a “Task Force on Preventing AI and Algorithmic Bias in Healthcare.” The purpose of the Task Force would be to develop guidance “on how to ensure that the development and [use] of artificial intelligence and algorithmic technologies” in delivering care “does not exacerbate health disparities” and help ensure broader access to care. The Task Force would be charged with identifying the risks posed by a health care system’s use of such technologies to individuals’ “civil rights, civil liberties, and discriminatory bias in health care access, quality, and outcomes.”
  • Federal data privacy bill includes civil rights protections. Among other things, the American Data Privacy and Protection Act (H.R. 8152) would prohibit a covered entity or service provider from collecting, processing, or transferring covered data “in a manner that discriminates in or otherwise makes unavailable the equal enjoyment of goods or services on the basis of race, color, religion, national origin, sex, or disability.” The bill also would require certain covered entities to perform annual algorithm impact assessments and algorithm design evaluations to ensure algorithms comply with the prohibition on discrimination.
  • The EEOC has announced a series of AI-focused summer workshops, following the 2021 launch of its artificial intelligence and algorithmic fairness initiative.
    • The Indianapolis and Memphis District Offices will present “Building Back Your Workforce: Artificial Intelligence and the HIRE Initiative” on July 13, 2022.
    • The Indianapolis and Philadelphia District Offices will present “Creating Equity in the Workplace: Artificial Intelligence and Equal Pay” on July 20, 2022.
    • The Houston District Office will present “Understanding the Risks and Rewards of Using Artificial Intelligence in the Workplace” on July 21, 2022.
    • The Los Angeles District Office will present “AI, Algorithms & the ADA: How the Use of Artificial Intelligence Can Violate Disability Rights” on August 18, 2022.
    Information on these and other EEO workshops can be found on the EEOC Training Institute’s website.
  • California Department of Insurance issues bulletin addressing racial bias and unfair discrimination. The CDI joined the debate over insurers' use of AI and consumer data in a big way, with a far-reaching and detailed bulletin issued on June 30. Key takeaways include:
    • "[I]nsurance companies and other licensees must avoid both conscious and unconscious bias or discrimination that can and often does result from the use of artificial intelligence, as well as other forms of 'Big Data' (i.e., extremely large data sets analyzed to reveal patterns and trends) when marketing, rating, underwriting, processing claims, or investigating suspected fraud…."
    • "In order to ensure that all Californians are treated equally, before utilizing any data collection method, fraud algorithm, rating/underwriting or marketing tool, insurers and licensees must conduct their own due diligence to ensure full compliance with all applicable laws."
    • "[I]nsurers and licensees must provide transparency to Californians by informing consumers of the specific reasons for any adverse underwriting decisions."
  • The District of Columbia's Department of Insurance, Securities and Banking held a public hearing on June 29 as part of its evaluation of possible bias in the auto insurance market. Commissioner Karima Woods presided over the three-hour hearing, with support from Associate Commissioner Phil Barlow and a presentation by Cathy O’Neil. The Department heard testimony from 10 witnesses, who staked out competing positions on applicable law and the proposed data call, and occasionally posed questions to each other. Witness presentations and a recording of the hearing will be posted to the Department’s website.

What We’re Watching

  • The Colorado Division of Insurance will hold its next stakeholder session on the implementation of SB 21-169 on July 8.
  • The District of Columbia Council has scheduled a hearing on the proposed Stop Discrimination by Algorithms Act of 2021 for September 22.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Faegre Drinker Biddle & Reath LLP | Attorney Advertising
