Regulation of Artificial Intelligence Heats Up

Faegre Drinker Biddle & Reath LLP

2021 ended with several significant developments in the regulation of artificial intelligence (AI). Among the most notable:

  • District of Columbia Attorney General Karl A. Racine introduced legislation in the D.C. Council aimed at discriminatory or biased algorithms. The bill specifically targets algorithms that limit “important life opportunities” involving insurance, credit, education, employment, housing and public accommodations. Among other things, the bill would require companies to inform consumers about what personal information is collected and how it is used to make decisions, mandate annual algorithmic audits and reports, impose penalties for non-compliance and create a private cause of action.
  • The Federal Trade Commission announced that it is considering rules to ensure that algorithmic decision-making does not result in unlawful discrimination. The announcement was the latest sign of the FTC’s keen interest in AI. Earlier in 2021, the FTC provided guidance on how to use AI truthfully, fairly and equitably and hired AI experts to advise the agency on emerging issues. In addition, FTC Commissioner Rebecca Kelly Slaughter published an article in the Yale Journal of Law & Technology titled “Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission.” We expect more AI news from the FTC in 2022.
  • The National Telecommunications and Information Administration announced plans to convene three listening sessions and publish a report on the ways that commercial data flows of personal information can lead to disparate impact and outcomes for marginalized or disadvantaged communities. (The public notice specifically called out personal information being used by the insurance industry.) Assistant Attorney General Kristen Clarke delivered the keynote at the first listening session, noting that the DOJ’s Civil Rights Division is “particularly concerned about how the use of algorithms may perpetuate past discriminatory practices by incorporating, and then replicating or ‘baking in,’ historical patterns of inequality.”
  • Looking ahead, the Colorado Insurance Division will soon kick off the stakeholder sessions required by Senate Bill 169, which restricts insurers’ use of external consumer data, algorithms and predictive models. We expect the sessions to start with life insurance, with health insurance (especially wellness programs) coming second. Any rules that are developed through this process will not be effective until January 1, 2023, at the earliest.

We continue to monitor these and other AI-related developments.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Faegre Drinker Biddle & Reath LLP | Attorney Advertising
