AI In The Workplace: Helpful Or Harmful?

Kerr Russell

Most of us use artificial intelligence (AI) every single day without even thinking about it. We unlock our phones with Face ID, we ask Alexa to set an alarm, we scroll through our social media algorithms, and we rely on spellcheck to correct our typing before we send out emails.

With so many forms of AI available, individuals are increasingly exploring how AI can help them, not only in their personal lives but also in their workplaces.

What are the Benefits of AI in the Workplace?

The use of AI can be extremely beneficial to businesses and organizations through the reduction of human error, enhancement of productivity, improvement of workflow processes, and automation of routine, monotonous tasks. Generative AI in particular, such as the large language model-based chatbot ChatGPT, enables employees to instantly generate emails, documents, social media posts, contracts, and many other forms of work product.

In fact, a recent survey found that 43% of professionals use AI tools at work.[1] Of those surveyed, however, almost 70% were using AI without their employer’s knowledge.

What are the Risks of AI in the Workplace?

Although it is abundantly clear that AI has many beneficial applications that can help make employees more efficient, there are also some significant pitfalls to be aware of.

Confidentiality Concerns

Using AI requires providing inputs to generate content. Information entered into an AI tool is disclosed to a third party, which often does not treat that information as confidential. In addition to not being treated as confidential, these systems often retain records of the information and may use it for other purposes, such as training data. Moreover, AI tools are not immune to data breaches.

To protect against the unintended disclosure of confidential and proprietary information, employers should ensure that certain information, including medical, personal, financial, and vendor/customer information, is never entered into AI tools.

AI Can Be Biased

Unsurprisingly, AI has been shown to exhibit bias.[2] Because AI learns from human-generated inputs, the potential for discrimination in AI outputs is a significant concern. In fact, the Equal Employment Opportunity Commission (“EEOC”) has issued guidance, Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964, to address some of these concerns.[3]

In this guidance, the EEOC warns employers that discrimination resulting from AI is still discrimination imputable to the employer. This is the case even when AI tools are administered by third-party AI vendors. Before using AI tools to assist with decisions about whether to hire, retain, promote, or take similar employment actions, employers should first carefully assess those tools to ensure they will not cause an adverse impact on protected groups and that they otherwise comply with all applicable state and federal laws.

AI Can Hallucinate

One particularly dangerous consequence of relying on AI is that it is known to generate outright falsehoods, errors, and fabrications.[4] These AI-generated inaccuracies are referred to as “hallucinations.” Hallucinations have already led to liability, as when lawyers were sanctioned for citing fictitious cases in a legal brief written by ChatGPT.[5]

How Can Employers Mitigate the Risks of AI in the Workplace?

Similar to how employers responded to the rise of social media, employers must adjust to the reality that AI is and will continue to be a part of the workplace. A responsible path forward includes:

  • Developing clear and responsive company policies about the use of AI in the workplace;
  • Properly vetting AI vendors;
  • Prohibiting the entry of confidential and proprietary information into AI tools;
  • Ensuring compliance with applicable guidance and regulations from various agencies including the EEOC, Department of Justice, Consumer Financial Protection Bureau, and the Federal Trade Commission, which have each highlighted their commitment to the enforcement of existing rights as they apply to AI in the workplace;
  • Ensuring compliance with existing data privacy laws including the General Data Protection Regulation and the California Consumer Privacy Act; and
  • Requiring human involvement in the process, with a human always serving as the final reviewer of any AI-generated work product or decisions.

[1] Fishbowl, 70% of Workers Using ChatGPT at Work Are Not Telling Their Bosses; Overall Usage Among Professionals Jumps to 43% (Feb. 1, 2023).

[2] Jeffrey Dastin, Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women, Reuters (Oct. 10, 2018).

[3] EEOC, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964, https://www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial.

[4] The New York Times (May 1, 2023), https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html.

[5] Reuters (June 22, 2023), https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Kerr Russell | Attorney Advertising
