About Face: Algorithm Bias and Damage Control

As research continues to show that AI is not an impartial arbiter of who’s who (or who’s what), various mechanisms are being devised to mitigate the collateral damage from facial recognition software.

Legislation: Since 2019, several bills have been introduced in the House or Senate to address privacy issues and algorithm bias associated with facial recognition software, including the Commercial Facial Recognition Privacy Act, the Ethical Use of Facial Recognition Act, and the Facial Recognition and Biometric Technology Moratorium Act. While none of these bills has moved forward in the current congressional quicksand, their existence gives us hope for more legislative momentum in the future.

Technology: If you can’t beat it, block it. That was the idea behind a pair of glasses, developed by Japan’s National Institute of Informatics, that uses near-infrared light to prevent facial recognition by smartphone and tablet cameras. The concept inspired artist Ewa Nowak to design Incognito, a line of minimalist masks that block facial recognition software in public and on social media.

Remediation: Any discussion of fixing algorithm bias should begin with the standard argument that it’s fundamentally unfixable. Karen Yeung, a professor at the University of Birmingham Law School, in the United Kingdom, puts it well:

“How could you eliminate, in a non-arbitrary, non-subjective way, historic bias from your dataset? You would actually be making it up. You would have your vision of your ideal society, and you would try and reflect it by altering your dataset accordingly, but you would effectively be doing that on the basis of arbitrary judgment.”

That said, the problems specific to facial recognition software seem more straightforward, and therefore potentially easier to fix, than other types of algorithm bias. For example, we know—and should be able to correct for the fact—that lighter-skinned people account for the vast majority of images (perhaps as high as 81 percent) that train this software. We also should be able to recalibrate photographic technology that’s been optimized for lighter skin.
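The dataset skew described above points to one concrete, if partial, remedy: reweight (or resample) the training data so that underrepresented groups carry proportionally more influence on the model. The Python sketch below is purely illustrative; the group labels and the 81/19 split are hypothetical stand-ins mirroring the figure cited above, not drawn from any actual facial recognition pipeline.

    from collections import Counter

    def inverse_frequency_weights(group_labels):
        """Weight each sample inversely to its group's frequency so
        every group contributes equally, in aggregate, to the loss."""
        counts = Counter(group_labels)
        n_samples = len(group_labels)
        n_groups = len(counts)
        # In a perfectly balanced dataset, every weight would be 1.0.
        return [n_samples / (n_groups * counts[g]) for g in group_labels]

    # Hypothetical training set mirroring the skew cited above:
    # 81% lighter-skinned images, 19% darker-skinned images.
    labels = ["lighter"] * 81 + ["darker"] * 19
    weights = inverse_frequency_weights(labels)

    print(round(weights[0], 3))   # lighter-skinned sample: 0.617
    print(round(weights[-1], 3))  # darker-skinned sample:  2.632

These weights would typically be passed to the training loss (many machine-learning libraries accept a per-sample weight argument); oversampling the minority group achieves the same effect through duplication rather than weighting.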

Humans are the cause of algorithm bias, and humans can help mitigate it by keeping the problem front of mind from development to application.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Pillsbury - Internet & Social Media Law Blog | Attorney Advertising
