Biometrics and AI in the Workplace: Boon or Bane?

Kohrman Jackson & Krantz LLP

Technology has brought efficiency into the workplace, but not without legal risk. Employers increasingly rely on technology to assist with human resources functions, security, and workplace monitoring, needs that have only grown with the proliferation of remote and hybrid work. When that technology involves biometrics and artificial intelligence (AI), employers must ensure that their uses and practices do not run afoul of employees' privacy rights under the common law or under a growing number of state laws that regulate the use of biometrics and/or AI, impose stringent requirements, and levy harsh penalties for violations.

What are Biometrics?

Biometrics involves identifying (who are you?) and authenticating (are you who you say you are?) an individual through data collected from physiological, biological, and/or behavioral identifiers unique to that individual. Typical biometric identifiers include facial geometry, fingerprints, voice prints, retina or iris scans, vein patterns, and even gait patterns. A biometric key, a mathematical representation of the identifier, is then created and encrypted. Generally, the biometric key cannot be transformed back into the original identifier, thus increasing the security of the data against hackers.
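
The one-way nature of a biometric key can be illustrated with a short sketch. The code below is purely illustrative and rests on simplifying assumptions: it treats a captured fingerprint template as a fixed byte string and stores only a salted hash of it, so the stored value cannot be reversed into the original template. Real biometric systems must tolerate variation between scans and therefore use specialized template-protection techniques rather than exact hashing; the function names and data here are hypothetical.

```python
# Minimal illustration of a one-way "biometric key": the raw template can be
# matched against later, but cannot be recovered from the stored value.
# Simplified sketch only; production systems use dedicated template-protection
# schemes that tolerate scan-to-scan variation, not a plain salted hash.
import hashlib
import hmac
import os

def enroll(template: bytes) -> tuple[bytes, bytes]:
    """Store only a salted, one-way derivation of the captured template."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", template, salt, 100_000)
    return salt, key  # the raw template is discarded after enrollment

def verify(template: bytes, salt: bytes, stored_key: bytes) -> bool:
    """Re-derive the key from a fresh scan and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", template, salt, 100_000)
    return hmac.compare_digest(candidate, stored_key)

if __name__ == "__main__":
    scan = b"example-fingerprint-template"        # hypothetical captured template
    salt, key = enroll(scan)
    print(verify(scan, salt, key))                 # True
    print(verify(b"different-person", salt, key))  # False
```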

Employers typically use biometrics:

  • During pre-employment, to authenticate candidates and ensure the individual hired is the individual who was interviewed
  • To allow secure access to a physical facility or IT infrastructure
  • To clock employees in and out and establish accurate work hours
  • To prevent fraudulent clock-ins and support self-reported timecards

The benefits are obvious: (i) increased efficiency, (ii) additional security, (iii) elimination or minimization of errors, (iv) insight into working patterns, (v) decreased costs, and (vi) decreased risk of lost or stolen credentials such as fobs and passwords. Use of biometrics also decreases the risk of hacking and data breaches because biometric data is not immediately usable, and may not be usable at all, by a hacker.

What is AI?

AI involves the use of algorithms to make, or assist in making, decisions. Using an algorithm designed for a defined purpose, a computer can process data to produce predictions, recommendations, and even decisions. Employers can use AI:

  • To evaluate job candidates
  • To evaluate employees for promotion
  • To determine team members to be placed on a certain project
  • To evaluate productivity by monitoring time spent on tasks, screen recordings, keystrokes, and mouse movements

AI also increases efficiency and reduces costs. It is often used with the intent of minimizing bias in personnel decisions.

These Advantages Come with Significant Liabilities

Behind all these advantages lurks significant liability for employers who do not protect the privacy of their employees when implementing these practices or who fail to properly store and manage the data they collect. Several states have enacted laws to protect employees' privacy rights in their biometric data. Generally, these laws require the employee's informed consent, often in writing, prohibit discrimination based on an employee's biometric data, forbid the sale of such data, and impose requirements for its storage and deletion.

The most notorious state law is the Illinois Biometric Information Privacy Act (BIPA) and its counterpart, the Illinois Artificial Intelligence Video Interview Act. BIPA is the most restrictive of the state laws enacted solely to govern the use of biometrics in the workplace, and because it is the only one with a private right of action, it has unleashed a spate of class actions against employers and the third parties engaged to perform the data collection. Before an employer obtains its employees' biometric identifiers for any reason, BIPA requires it to notify each employee in writing of the collection of the biometric identifier, the specific reason for collecting, storing, and using the information, and how long the employer intends to use or store it. The employer must also obtain the employee's written consent and develop a publicly available written policy that includes a retention schedule. BIPA permits statutory damages of $1,000 for each negligent violation and $5,000 for each intentional or reckless violation. More than 400 class actions have been filed since its enactment in 2008.

Bhavilav v. Microsoft Corp

One such class action is Bhavilav v. Microsoft Corp., recently filed in the Circuit Court of Cook County, Illinois. Microsoft provided the software the employer used to scan fingerprints to clock its employees in and out. Although the employer complied with BIPA, including obtaining its employees' written consents, the plaintiffs claim that Microsoft is also bound by BIPA and failed to obtain written consent before obtaining the data or to make publicly available a written policy explaining its retention and deletion practices. Microsoft does have a data center in Illinois. In any event, third-party vendors are finding it difficult to hide behind an extraterritoriality defense.

An Increase in Laws Addressing Biometrics

Other states with laws specifically addressing biometrics include Washington, Texas, Maryland, and Connecticut. California recently introduced SB 1189, which would prohibit employers from collecting a person's biometric data absent written consent or a valid business purpose. Penalties would include actual damages or statutory damages between $100 and $1,000 per violation per day, punitive damages, and attorney fees.

The European Union has released a first draft of a proposed Artificial Intelligence Act, which seeks to implement a comprehensive regulatory framework for AI technologies. As with the EU's General Data Protection Regulation, its territorial scope will extend beyond Europe and reach any business whose operation of AI impacts EU residents.

Employers Utilizing AI Must Comply with the ADA

Although there is no comprehensive federal law regulating the use of biometrics or AI, the EEOC has raised concerns that AI in particular can have a deleterious impact on individuals with disabilities. The EEOC has issued a technical assistance document to educate employers on complying with the Americans with Disabilities Act (ADA) when using AI and other software to hire and assess employees.

Violations of the ADA can occur in several situations: when the AI application itself is biased or fails to account for disability, or when the administration of the AI application in the employment setting fails to provide a person with a disability an accommodation necessary to fully participate. An employer can be liable even if the failure stems from a third party's development of the AI application.

ADA Violations May Be Inherent to the Application

The use of AI can violate the ADA by screening out a person who fails to meet the job criteria as coded into the AI application, either because the application does not account for the person's disability or because it fails to account for a reasonable accommodation.

  • The AI application may unlawfully make disability-related inquiries or require what amount to medical examinations if not programmed correctly
  • An AI application evaluating a person's ability to solve problems may screen out an applicant who cannot fully participate in the process, such as when the applicant's answers are not fully intelligible to the application because the applicant has a speech impediment

ADA Violations May Be Due to Access and Failure to Accommodate

AI administration can violate the ADA where the employer fails to provide accommodations for accessing the AI application, such as providing or allowing specialized equipment or an alternate environment during the AI evaluation process. The technical assistance document provides an example in which the employer announces to job applicants that the process may involve a video interview and provides a way for applicants to request a reasonable accommodation, such as particular equipment needed to fully participate. Other accommodations may include additional time to complete the evaluation, whether because of cognitive issues in understanding the questions or a lack of manual dexterity to use a keyboard or other input device, or even an alternative means of evaluating the applicant other than through AI. As with all accommodation requests, the accommodation must be reasonable and not impose an undue hardship on the employer.

Recently, claims have also been filed under Title VII for failure to accommodate religious beliefs. In EEOC v. Consol Energy, Inc., the U.S. Court of Appeals for the Fourth Circuit affirmed an award of over $400,000 against an employer that failed to accommodate an employee's religious beliefs when it imposed mandatory hand scanning for timekeeping purposes. The employee, a devout Christian, claimed that using the biometric hand scanner would associate him with the "Mark of the Beast" and was therefore prohibited by his religion. Although the employer offered to let the employee use his left hand, as apparently only the right hand is associated with the Mark of the Beast, the jury found the employer's practice of allowing employees with injuries to bypass the hand scanner to be convincing evidence of discrimination.

Absent a law such as BIPA, employers are generally free to require their employees to submit to such scanning systems or other biometric or AI applications in the workplace as a condition of employment. Employers must, however, allow accommodations for disabilities and religious beliefs, as discussed above, where legally required.

Additionally, recent studies have shown that AI is not immune to bias of its own. Bias creeps into AI through the training data fed into its algorithms, whether because of the bias of the people supplying that data or because of flawed data samples included in the training set.

Employers Must Affirmatively Manage Risks

Ultimately, employers may enjoy the benefits of using biometrics and AI, but they need to affirmatively manage the risks. Employers must ensure that they comply with any applicable laws, in every jurisdiction where they have employees, governing privacy in general or the use of biometric data or AI in particular, as well as with their obligations under the ADA and Title VII that may be implicated by the use of these technologies.

Employers can minimize such claims by adopting policies and practices that protect the privacy of employees and their rights to accommodations for disabilities or religious beliefs. At a minimum, employers should:

  • Implement a written privacy policy to:
    • Define what data is going to be collected and for what purpose
    • Explain how the data will be collected, stored, and used
    • Identify who is collecting the data
    • Protect the confidentiality of the data
    • Provide a means to delete the data
  • Obtain written consents from employees before instituting the collection practice, including authorization to share the biometric information with the employer's business partners.
  • Identify any potential for race, disability or other bias and work to eliminate it.
  • Implement and maintain a consistent practice for securely storing the data, both internally and by any third party, and using the same or more protective methods and standards of care that the employer uses for other confidential or sensitive information.
  • Confirm that the employer’s practices for the collection, storage, use, and destruction of biometric information are consistent with the standards of the employer’s industry.
  • Require indemnification agreements from any third-party vendor engaged to collect or store biometric or AI data, covering violations of applicable law, mishandling of the data, or a data breach.
  • Periodically audit AI applications for bias creep (a simple illustrative starting point appears after this list).
  • Ensure that the use of biometrics and AI complies with the ADA and Title VII.
    • Is the interface used for AI interviews accessible to persons with disabilities?
    • Is there notice to persons with disabilities that an alternative format may be available as an accommodation?
    • Has the AI application been reviewed to confirm that it accounts for the possible existence of a disability and does not operate in a way that is skewed against persons with disabilities?
  • Confirm that the employer’s insurance policies cover claims related to the use of biometrics or AI.
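
For the bias-audit item above, one hypothetical starting point is to periodically compare the AI tool's selection rates across demographic groups against the "four-fifths" rule drawn from the EEOC's Uniform Guidelines on Employee Selection Procedures. The sketch below is illustrative only: the group labels and data are invented, a flag is not a legal conclusion, and a meaningful audit requires legal and statistical expertise.

```python
# Hypothetical starting point for a periodic bias audit of an AI screening
# tool: compare selection rates across groups using the "four-fifths" rule
# from the EEOC's Uniform Guidelines on Employee Selection Procedures.
# A flag only means the results warrant closer legal and statistical review.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected_bool) pairs from the AI tool."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in outcomes:
        counts[group][1] += 1
        if selected:
            counts[group][0] += 1
    return {g: sel / total for g, (sel, total) in counts.items() if total}

def four_fifths_flags(rates):
    """Flag any group whose selection rate is below 80% of the highest rate."""
    best = max(rates.values(), default=0.0)
    if best == 0:
        return {g: False for g in rates}
    return {g: r / best < 0.8 for g, r in rates.items()}

if __name__ == "__main__":
    # Invented data only: (group label, was the candidate advanced?)
    results = [("A", True), ("A", True), ("A", False),
               ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(results)
    print(rates)                     # roughly {'A': 0.67, 'B': 0.33}
    print(four_fifths_flags(rates))  # {'A': False, 'B': True}
```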

The use of biometrics and AI can be fraught with legal risk. Our attorneys in the Labor & Employment Practice Group have the skill and knowledge to assist you in deploying these technologies and adopting appropriate policies and practices.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Kohrman Jackson & Krantz LLP
