California's Civil Rights Council Approves New Regulations On AI In Employment

Ervin Cohen & Jessup LLP

The California Civil Rights Council has approved new regulations that clarify how existing anti-discrimination laws under the Fair Employment and Housing Act (“FEHA”) apply to the use of artificial intelligence (AI) and automated decision systems. These regulations become effective on October 1, 2025.

The new rules are not a ban on AI. Instead, they expand and clarify existing safeguards to ensure that AI tools do not result in disparate treatment of, or a disparate impact on, applicants and employees based on FEHA-protected characteristics.

This development is distinct from AB 1018, a separate legislative proposal that was not signed into law this session. While AB 1018 is on pause for now, the new AI regulations are moving forward and require every California employer's attention.

The regulations are broad and cover any Automated Decision System (ADS) used in employment decisions, from resume screening to targeted job advertising. Here is what employers need to know:

  • Proactive Bias Testing: Employers must audit any automated system for potential discriminatory impact before using it in employment decision-making. The absence of such efforts could be used against employers in any subsequent legal claim.
  • Expanded Record-Keeping: The regulations double the required record retention period for ADS-related data to a minimum of four years. This includes dataset descriptions, scoring outputs, and all audit findings.
  • Vendor Responsibility: Employers are responsible for the actions of their third-party vendors. Any AI tools purchased or used for employment decisions must comply with these new rules.

Employers must also ensure that the use of AI does not interfere with the obligation to provide reasonable accommodations to individuals with disabilities. For example, if an AI tool evaluates an applicant's dexterity, reaction time, or tone of voice, the employer may need to provide an alternative assessment or method for a candidate with a disability. Further, the regulations clarify that an AI-based assessment that is "likely to elicit information about a disability" can be considered an unlawful medical inquiry. The key is to ensure automated systems do not screen out qualified candidates with disabilities without providing an opportunity for reasonable accommodation.

Compliance with these regulations is crucial to avoid costly litigation and potential penalties. We recommend that employers take a complete inventory of all AI and automated systems used for employment decisions and consult with legal counsel to ensure that these processes are in full compliance with the new rules.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Ervin Cohen & Jessup LLP
