AI and the employment relationship

Ius Laboris
[author: Marie Behle Pondji]*

The general public's enthusiasm for artificial intelligence (AI) technologies is making its way into the workplace.

While AI offers many advantages, employers must remain aware of the risks that a lack of supervision can generate.

Avoiding discrimination

Discrimination is one of the most feared risks arising from the intrusion of AI into decision-making processes, particularly in recruitment and candidate selection. Failure to comply with non-discrimination rules exposes the employer to various risks, ranging from the invalidity of the decision in question to possible civil and/or criminal proceedings.

If AI is used, the main risk to watch out for is indirect discrimination, i.e. criteria that appear neutral but are nevertheless likely to place individuals at a disadvantage because of a protected characteristic (e.g. gender, age, disability).

In 2018, a large multinational realised that the algorithm behind its (experimental) AI recruitment tool had absorbed sexist biases, rating applications from women less favourably. It turned out that the algorithm had been trained on applications from the previous ten years, a period marked by a predominance of male applicants for technical positions.

This demonstrates that an algorithm built on biased practices or situations increases the probability that the final decision will itself be tainted by discrimination.

There is no guarantee that, in the event of legal action, the employer will be able to exonerate itself from liability on the sole ground that the discriminatory practice was ‘caused’ by an algorithm it designed. A well-informed employer will therefore take care to ‘clean up’ and properly parameterise the data on which the AI model is built.

Ensuring compliance with data protection regulations

Employers who use AI technologies as part of the employment relationship will need to perform an analysis of compliance with the General Data Protection Regulation (GDPR).

The market is full of promising and highly effective HR solutions: systems capable of analysing a candidate’s facial expressions and body language, predicting resignations on the basis of performance reviews and salary increases, and discreetly monitoring the productivity of teleworking employees. However, these practices are not neutral in terms of data protection: fully or partially automated decision-making, profiling and worker surveillance are all subject to very strict rules, if not outright bans.

The involvement of the Data Protection Officer (if there is one) and a prior data protection impact assessment are essential for any company wishing to take full advantage of this technological potential.

It should be remembered that the administrative fines provided for in the GDPR are particularly punitive (up to EUR 20 million or 4% of worldwide annual turnover, whichever is higher), not to mention the fact that they may be combined with criminal and civil penalties.

Supervising the use of generative AI by staff

It is no longer taboo for many employees to carry out their day-to-day tasks with the help of online translators, audio and video transcribers or chatbots.

Employers therefore need to be proactive in supervising these practices, or risk legal and operational setbacks. For example, one of the giants of the technology sector recently had to deal with a serious breach of trade secrecy after several employees disclosed highly sensitive data via ChatGPT.

The inappropriate use of these generative AI tools may also give rise to civil liability on the part of the organisation and/or its directors vis-à-vis third parties (notably the employer’s vicarious liability for damage caused by its employees), particularly if it results in a violation of the rights of others (e.g. invasion of privacy, copyright infringement).

Updating IT charters and policies, as well as raising awareness and training staff, remain essential preventive measures.

Respecting the prerogatives of staff delegations

Finally, employers planning to make use of AI technologies in the context of the employment relationship will need to take account of the powers conferred by law on staff delegations, as they would for any decision likely to affect the organisation of work and working conditions, the health and safety of staff (e.g. issues relating to technological stress and the right to disconnect), or employees’ rights with regard to workplace surveillance.

Takeaway for employers

AI is a powerful tool, and one that is increasingly entering the mainstream. It promises to streamline and simplify many tasks in the workplace. However, employers should remember that, like any tool, AI must be used responsibly and within the framework of existing rules.

*CASTEGNARO

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Ius Laboris | Attorney Advertising
