Cyber Chronicles IV: Bias, discrimination, and AI for employers

Constangy, Brooks, Smith & Prophete, LLP

It’s an understatement to say that companies are excited about Artificial Intelligence. AI has the potential to optimize productivity and improve efficiency in many areas of a business. The potential benefits are undeniable, but some uses present significant risk. One area that warrants particular caution is employment. 

Some employers have already begun using AI to identify qualified applicants with the hope of reducing the time it takes to sift through an overwhelming number of applications.  When well-executed, AI can be extremely helpful for this daunting task; however, there are some areas where employers must be cautious. One such area—and the focus of this post—is bias and discrimination in the AI systems themselves.

Artificial Intelligence, just like any computer program, reflects how it was designed. In practice, this means that where the programmer overlooks something, the resulting AI system may be flawed. One deficiency that presents significant risk is unintended bias that leads the program to “prefer” some candidates over others based on a protected characteristic. 

When AI programs are trained on historical data sets, underrepresentation in those data sets will be perpetuated in the AI’s outputs. This is not a novel problem and has been discussed by leading experts and government officials for years. However, new and expanded uses of AI in the employment process have brought the issue to the forefront. Previously, early machine learning systems were used to review applications and identify promising leads. Now, generative AI systems may also be used to develop job descriptions and the materials used to evaluate candidates. Bias can creep into these uses in a number of ways, including through the manner in which prompts are given to the AI system. Each of these uses must be considered when determining whether and how bias is present and, if it is, how it can be mitigated so that it does not result in unlawful discrimination. 

Nationwide, new laws are emerging that regulate the use of AI in the workplace. For example, New York City’s Local Law 144 came into effect in January 2023, with enforcement starting in July 2023. This law requires (1) that any employer or employment agency using an automated employment decision tool in hiring or promotion complete a bias audit within one year of the tool’s use and publish the results, and (2) that the employer notify job applicants or candidates for promotion who are NYC residents that an AI tool is being used. This past Thursday, September 21, the U.S. Equal Employment Opportunity Commission issued its Strategic Enforcement Plan for fiscal years 2024-2028. The plan notes that, among other things, the EEOC intends to focus on eliminating barriers in recruitment and hiring, including barriers created through the use of AI systems to “target job advertisement, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups.” 
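
For context, the bias audits contemplated by Local Law 144 generally involve comparing selection rates across demographic categories and computing impact ratios. The sketch below is a minimal, hypothetical illustration of that kind of calculation in Python; the categories and data are invented, and an actual audit must follow the detailed rules issued by the NYC Department of Consumer and Worker Protection and be performed by an independent auditor.

```python
# Minimal sketch of an impact-ratio calculation of the kind used in bias audits.
# Categories and data are hypothetical; a real Local Law 144 audit must follow
# the NYC Department of Consumer and Worker Protection's rules.
from collections import defaultdict

# Hypothetical audit records: (demographic category, selected by the AI tool?)
records = [
    ("Category A", True), ("Category A", True), ("Category A", False),
    ("Category B", True), ("Category B", False), ("Category B", False),
]

selected = defaultdict(int)
total = defaultdict(int)
for category, was_selected in records:
    total[category] += 1
    if was_selected:
        selected[category] += 1

# Selection rate = number selected / number of applicants in each category.
rates = {c: selected[c] / total[c] for c in total}

# Impact ratio = each category's selection rate divided by the highest rate.
highest = max(rates.values())
impact_ratios = {c: rate / highest for c, rate in rates.items()}

for category in sorted(rates):
    print(f"{category}: selection rate {rates[category]:.2f}, "
          f"impact ratio {impact_ratios[category]:.2f}")
```

A low impact ratio for a category does not by itself establish unlawful discrimination, but it is the kind of disparity an audit is designed to surface so the employer can investigate and, if necessary, adjust or discontinue the tool.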

We recommend that employers carefully consider how they use AI in recruiting and hiring. When implemented thoughtfully, AI can also be used to expand a traditional candidate pool to identify diverse applicants with the right qualifications. However, employers must plan ahead, carefully evaluate the AI platform, and take steps to comply with the emerging laws and regulations that are implicated by this practice. 
