Auditing AI

Thomas Fox - Compliance Evangelist

The recent kerfuffle over an AI tool that interpreted an instruction to make a woman look more professional as an instruction to make her look Caucasian has raised important questions about how to audit AI code to avoid undesirable outcomes. AI tools behave in a fundamentally different way from most other types of apps and systems, and auditing AI code for implicit bias is not yet feasible. Matt Kelly recently wrote a blog post on this topic at Radical Compliance. I thought it would make a great podcast, so this week's episode of Compliance into the Weeds is dedicated to it. I also thought it was important enough to blog about as well.

It started when MIT graduate student Rona Wang asked an AI tool called Playground AI to modify a photo of herself, wearing an MIT T-shirt, to look ‘more professional’. Rather than replacing the T-shirt with business attire, the tool interpreted ‘more professional’ as making her look Caucasian. Wang posted a before-and-after comparison of the photo on Twitter, which set off a heated discussion in the AI world about how this could happen. The CEO of Playground AI responded to Wang on Twitter, saying, “We’re quite displeased with this and hope to solve it.”

We began with a discussion of the implications of implicit bias in AI code. Matt suggested that the AI app may have been influenced by the disproportionate number of white people on LinkedIn. That may not be the fault of the AI program itself, but rather a result of structural bias and racism in the world. Matt believes that, at this point, it is impossible for a human to audit the code of an AI program like ChatGPT, which reportedly evaluates data across 1.76 trillion parameters. Unfortunately, implicit bias in AI code cannot be eliminated by simply correcting a few parameters. Matt compared the difficulty of eliminating implicit bias in AI code to the difficulty of eliminating racism in the human brain.

An AI model can weigh 1.76 trillion parameters of data, which makes auditing it for an ethical outcome difficult. AI can absorb and reproduce the structural racism and inequities that exist in the world, and it can be used to filter images in ways that do not represent the population as a whole. Auditing AI is also difficult because there are few people who know how to design, let alone audit, these programs. AI decisions may have life-and-death consequences, but there is no way to audit them yet.

Companies using AI in the hiring process must consider whether to scrap the AI tool and use another, rely on human HR staff and recruiters, or have auditors and coders sit down and try to figure out the problem. Additionally, there is a risk of implicit bias whenever someone must define the pool of data the AI looks at. New York City now has a regulation requiring employers to audit AI tools used in hiring at least annually, but this is only a small step towards addressing implicit bias in AI.
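To make that New York City requirement a little more concrete: the bias audits it mandates center on selection rates and impact ratios across demographic groups. The short Python sketch below illustrates that kind of calculation. The sample data, column names, and the 0.8 reference line (the EEOC "four-fifths" rule of thumb, which the City's rule does not itself impose) are illustrative assumptions, not anything prescribed by the regulation.

    # Illustrative sketch of an impact-ratio calculation of the kind a hiring
    # bias audit might report. The data and the 0.8 benchmark are assumptions.
    import pandas as pd

    # Hypothetical audit data: one row per applicant scored by the AI tool.
    df = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
        "selected": [1, 1, 0, 1, 0, 0, 0, 1, 1, 0],
    })

    # Selection rate for each demographic group.
    rates = df.groupby("group")["selected"].mean()

    # Impact ratio: each group's selection rate relative to the highest-rate group.
    impact_ratios = rates / rates.max()
    print(impact_ratios)

    # A common (but not legally binding) screen flags ratios below 0.8.
    flagged = impact_ratios[impact_ratios < 0.8]
    print("Groups below the four-fifths benchmark:", list(flagged.index))

In practice such an audit would be run by an independent auditor on real applicant data, using the demographic categories the rule specifies. The point of the sketch is simply that this arithmetic is auditable even when the model itself is not.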

 

Auditing AI code for implicit bias is a complex process. The AI tools used in hiring range from simple keyword matching to models like ChatGPT. While it is important for companies to audit their AI tools, it is just as important to consider the data used to train the AI: if the data is biased, the AI will be biased as well. To reduce that risk, companies should consider training on a diverse set of data and conducting regular audits of their AI tools.
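As one illustration of what looking at the training data can mean in practice, the sketch below compares a training set's demographic mix against a reference population. Everything in it, the group labels, the counts, the reference shares, and the ten-percentage-point review threshold, is a hypothetical assumption for the example.

    # Minimal sketch of a training-data representativeness check.
    # Group labels, counts, reference shares, and the 0.10 threshold are
    # hypothetical values chosen only for illustration.
    training_counts = {"group_a": 7200, "group_b": 1900, "group_c": 900}
    reference_shares = {"group_a": 0.55, "group_b": 0.30, "group_c": 0.15}

    total = sum(training_counts.values())
    for group, count in training_counts.items():
        share = count / total
        gap = share - reference_shares[group]
        status = "REVIEW" if abs(gap) > 0.10 else "ok"
        print(f"{group}: training {share:.1%} vs. reference {reference_shares[group]:.1%} -> {status}")

A check like this does not prove a model is fair, but it is the kind of regular, repeatable test a compliance team can actually run while the harder question of auditing the model itself remains open.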

The Wang incident is a reminder of why auditing AI code to avoid undesirable outcomes matters. Because AI tools behave so differently from most other apps and systems, auditing their code for implicit bias is not yet feasible, and companies using AI in hiring are left weighing the options discussed above: scrap the tool and use another, fall back on human HR staff and recruiters, or put auditors and coders together to work through the problem.

Finally, there remains a risk of implicit bias whenever someone has to define the pool of data the AI looks at. New York City's annual audit requirement is only a small step towards addressing that issue. Until better answers emerge, diverse training data and regular audits of AI tools are the most practical safeguards companies have.

For the complete discussion of this issue check out this week’s episode of Compliance into the Weeds.



