When Machines Discriminate – NIST Tackles Bias in AI

Rothwell, Figg, Ernst & Manbeck, P.C.

At this point you have probably heard about one of the many incidents in which an AI-enabled system discriminated against certain populations in settings including healthcare, law enforcement, and hiring, among others. In response to this problem, the National Institute of Standards and Technology (NIST) recently proposed a strategy for identifying and managing bias in AI, with emphasis on biases that can lead to harmful societal outcomes. The NIST authors summarize:

“[T]here are many reasons for potential public distrust of AI related to bias in systems. These include:

  • The use of datasets and/or practices that are inherently biased and historically contribute to negative impacts
  • Automation based on these biases placed in settings that can affect people’s lives, with little to no testing or gatekeeping
  • Deployment of technology that is either not fully tested, potentially oversold, or based on questionable or non-existent science causing harmful and biased outcomes.”

As a starting place, the NIST authors outline an approach for evaluating the presentation of bias in three stages modeled on the AI lifecycle: pre-design, design & development, and deployment. In addition, NIST will host a variety of activities in 2021 and 2022 in each area of the core building blocks of trustworthy AI (accuracy, explainability and interpretability, privacy, reliability, robustness, safety, security (resilience), and bias). NIST is currently accepting public comment on the proposal until September 10, 2021.

Notably, the proposal points out that “most Americans are unaware when they are interacting with AI enabled tech but feel there needs to be a ‘higher ethical standard’ than with other forms of technologies,” which “mainly stems from the perceptions of fear of loss of control and privacy.” From a regulatory perspective, there currently is no federal data protection law in the US that broadly mirrors Europe’s GDPR Art. 22 with respect to automated decision making – “the right to not be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” But several U.S. jurisdictions have passed laws that more narrowly regulate AI applications with the potential to cause acute societal harms, such as the use of facial recognition technology in law enforcement or interviewing processes, and further regulation seems likely as (biased) AI-enabled technology continues to proliferate into more settings.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Rothwell, Figg, Ernst & Manbeck, P.C. | Attorney Advertising
