Digitalized Discrimination: COVID-19 and the Impact of Bias in Artificial Intelligence

[co-author: Jordan Rhodes]

As the world grapples with the impacts of the COVID-19 pandemic, we have become increasingly reliant on artificial intelligence (AI) technology. Experts have used AI to test potential treatments, diagnose individuals, and analyze other public health impacts. Even before the pandemic, businesses were increasingly turning to AI to improve efficiency and overall profit. Between 2015 and 2019, the adoption of AI technology by businesses grew by more than 270 percent.

The growing reliance on AI—and other machine learning systems—is to be expected considering the technology’s ability to help streamline business processes and tackle difficult computational problems. But as we’ve discussed previously, the technology is hardly the neutral and infallible resource that so many view it to be, often sharing the same biases and flaws as the humans who create it.

Recent research continues to point out these potential flaws. One particularly important flaw is algorithmic bias, the discriminatory treatment of individuals by a machine learning system. This treatment can come in various forms but often leads to discrimination against one group of people based on specific categorical distinctions. The reason for this bias is simpler than you may think. Computer scientists have to “teach” an AI system how to respond to data. To do this, the technology is trained on datasets—datasets that are both created and influenced by humans. As such, it is necessary to understand and account for potential sources of bias, both explicit and inherent, in the collection and creation of a dataset. Failure to do so can result in bias seeping into a dataset and ultimately into the results and determinations made by an AI system or product that utilizes that dataset. In other words, bias in, bias out.
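To make “bias in, bias out” concrete, the sketch below trains a simple classifier on synthetic, entirely hypothetical data in which historical labels favored one group. The protected attribute is deliberately excluded from training; the point is that bias can survive anyway through a correlated “neutral” feature (think zip code or alma mater). All names and numbers here are invented for illustration.

```python
# A minimal sketch of "bias in, bias out" -- hypothetical data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (group 0 vs. group 1); never shown to the model.
group = rng.integers(0, 2, size=n)

# A "neutral" proxy feature that happens to correlate with group
# membership (e.g., zip code or alma mater).
proxy = group + rng.normal(0, 0.5, size=n)

# A genuinely job-relevant feature, identically distributed in both groups.
skill = rng.normal(0, 1, size=n)

# Historical labels reflect human bias: group 1 was favored regardless of skill.
label = ((skill + 1.5 * group + rng.normal(0, 1, size=n)) > 1).astype(int)

# Train on the "neutral" features only -- the protected attribute is excluded.
features = np.column_stack([proxy, skill])
model = LogisticRegression().fit(features, label)
pred = model.predict(features)

# The bias survives anyway, because the proxy encodes group membership.
for g in (0, 1):
    print(f"group {g}: predicted positive rate = {pred[group == g].mean():.2f}")
```

Running this typically shows a far higher predicted-positive rate for the historically favored group, even though the model never sees group membership directly.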

Examining AI-driven hiring systems exposes this flaw in action. An AI system can sift through hundreds, if not thousands, of résumés in a short period of time, evaluate candidates’ answers to written questions, and even conduct video interviews. However, when these AI hiring systems are trained on biased datasets, the output reflects that exact bias. For example, imagine a résumé-screening machine learning tool that is trained on a company’s historical employee data (such as résumés collected from a company’s previously hired candidates). This tool will inherit both the conscious and unconscious preferences of the hiring managers who previously made all of those selections. In other words, if a company historically hired predominantly white men to fill key leadership positions, the AI system will reflect that preferential bias when selecting candidates for other, similar leadership positions. As a result, such a system discriminates against women and people of color who may otherwise be qualified for these roles. Furthermore, it can embed a tendency to discriminate within the company’s systems in a manner that makes it more difficult to identify and address. And as the country’s unemployment rate skyrockets in response to the pandemic, some have taken issue with companies relying on AI to make pivotal employment decisions—like reviewing employee surveys and evaluations to determine whom to fire.
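One simple way to surface this kind of inherited preference is a counterfactual check: score two otherwise-identical candidates who differ only in a group-correlated feature. The sketch below is hypothetical and reuses the `model` and feature layout (proxy, then skill) from the previous sketch.

```python
# Counterfactual check: identical skill, different proxy values.
import numpy as np

skill_level = 0.8
candidate_a = np.array([[0.0, skill_level]])  # proxy typical of group 0
candidate_b = np.array([[1.0, skill_level]])  # proxy typical of group 1

print("P(select | group-0-like proxy):", model.predict_proba(candidate_a)[0, 1])
print("P(select | group-1-like proxy):", model.predict_proba(candidate_b)[0, 1])
```

A large gap between the two probabilities indicates the screener is responding to group membership rather than to qualifications.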

Congress has expressed specific concerns regarding the increase in AI dependency during the pandemic. In May, some members of Congress addressed a letter to House and Senate Leadership, urging that the next stimulus package include protections against federal funding of biased AI technology. If the letter’s recommendations are adopted, certain businesses that receive federal funding from the upcoming stimulus package will have to provide a statement certifying that bias tests were performed on any algorithms the business uses to automate or partially automate activities. Specifically, this testing requirement would apply to companies using AI to make employment and lending determinations. Although the proposal’s future is uncertain, companies invested in promoting equality do not have to wait for Congress to act.
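The letter does not specify what such a bias test must look like, but one long-standing screen in the employment context is the EEOC’s “four-fifths rule”: a selection rate for any group below 80 percent of the highest group’s rate is treated as evidence of adverse impact. The following is a minimal sketch using hypothetical counts.

```python
# A minimal sketch of the EEOC "four-fifths rule" -- hypothetical counts.

def adverse_impact_ratios(selected_by_group, total_by_group):
    """Return each group's selection rate divided by the highest group's rate."""
    rates = {g: selected_by_group[g] / total_by_group[g] for g in total_by_group}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

ratios = adverse_impact_ratios(
    selected_by_group={"group_a": 48, "group_b": 24},
    total_by_group={"group_a": 100, "group_b": 100},
)
for group, ratio in ratios.items():
    flag = "potential adverse impact (below 0.80)" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio = {ratio:.2f} -- {flag}")
```

Here group_b’s selection rate is half of group_a’s, so its impact ratio of 0.50 falls well below the 0.80 threshold.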

In recent months, many companies have publicly announced initiatives to address how they can strive to reduce racial inequalities and disparities. For companies considering such initiatives, one actionable step is a strategic review of the AI technology the company utilizes. Such a review could include verifying whether that technology has been bias-tested and considering its overall potential for automated discriminatory effects given the context of its specific use.

Only time will reveal the large-scale impacts of AI on our society and whether we’ve used the technology responsibly. In many ways, however, the pandemic demonstrates that these concerns are only just beginning.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Pillsbury - Internet & Social Media Law Blog
