Artificial Intelligence (“AI”) programs have gained popularity by injecting ease into otherwise burdensome and difficult daily tasks. However, as with most innovative advancements, AI has also drawn concern from skeptics regarding the everyday effects such programs may have.
With fast-growing scrutiny around AI’s use and evolution, New York City has recently taken its own stab at regulating how these programs are implemented, specifically in the world of hiring candidates. Effective this month, New York City has rolled out a new rule that requires any automated employment decision tool (“AEDT”) to pass a third-party audit in order for businesses to continue using it (the “AI Law”). The AI Law requires that the AEDT undergo an extensive review to determine whether the program is free from bias based on race/ethnicity or sex. Not only must companies publish their results, but they are prohibited from using any program that has not been audited. In other words, businesses must prove it, or lose it.
Specifics of the AI Law
Under the AI Law, the use of any AEDT will be illegal unless: (1) the AEDT has undergone a bias audit within one year of its use; (2) the results of the most recent audit are published; and (3) each candidate is provided with notice of the AEDT’s use and is given the opportunity to request an alternative selection process. The AI Law defines the required bias audit as “an impartial evaluation by an independent auditor” and subjects it to certain minimum requirements, including a calculation of the selection rate for each race/ethnicity and sex category and a comparison of those rates to determine an “impact ratio.” To further promote regulation of AEDTs, New York City has also provided guidance for individuals to file complaints reporting an employer or employment agency that used an AEDT but failed to comply with the AI Law.
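The selection-rate and impact-ratio arithmetic described above can be sketched roughly as follows. This is only an illustrative example of the calculation, not the regulation's official methodology or reporting format; the category names, applicant counts, and function names are hypothetical.

```python
# Illustrative sketch of a selection-rate / impact-ratio calculation.
# All categories and counts below are hypothetical examples.

def selection_rate(selected, applicants):
    """Share of applicants in a category who were selected."""
    return selected / applicants

def impact_ratios(pools):
    """Divide each category's selection rate by the highest
    selection rate observed across all categories."""
    rates = {cat: selection_rate(s, a) for cat, (s, a) in pools.items()}
    highest = max(rates.values())
    return {cat: rate / highest for cat, rate in rates.items()}

# Hypothetical pools: category -> (number selected, number of applicants)
pools = {
    "Category A": (48, 120),   # selection rate 0.40
    "Category B": (24, 100),   # selection rate 0.24
}

print(impact_ratios(pools))
```

On these hypothetical numbers, Category A (the most-selected group) has an impact ratio of 1.0, and Category B's ratio is 0.24 / 0.40 = 0.6. A low ratio is the kind of disparity an auditor would flag for further review.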
Employers can protect themselves from violating the AI Law’s provisions by first identifying whether they are utilizing any tools that would be subject to the AI Law and by ensuring that a proper audit is completed. A tool is subject to regulation if: (1) it is used in connection with decisions regarding hiring and/or promotion; (2) it issues a score, classification, or recommendation that is used as: (a) the sole criterion; (b) a criterion given considerably more influence than any other; or (c) a criterion used to overrule conclusions derived from other factors, including human decision-making; and (3) it is derived from “machine learning, statistical modeling, data analytics, or artificial intelligence,” as more fully defined in the final regulation. Once a sufficient audit is complete, employers should consider developing a universal protocol for: (1) providing notice of their AEDT use; and (2) publishing the results in conformance with the AI Law, in order to avoid issues such as complaints regarding the sufficiency and/or location of the posted content.
While certain restrictions may act as a deterrent to some, the sheer number of applicants being screened will likely compel companies to continue seeking machine-guided ways to vet candidates for open positions. In doing so, it is important that employers take heed in their use of AEDTs, not only in New York City but around the country, as attention remains on AI technology and the cry for further regulation grows louder.
An AEDT is defined as any process “derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making.”