AI‑powered notetaking tools that can record or transcribe employment-related discussions and meetings have quietly entered the workplace, often through everyday applications employees already use. Employees can transcribe or summarize discussions using the “Dictate” feature in Microsoft Word (see here) or Copilot (see here), the “Take Notes” function in Zoom (see here), or the record-and-transcribe option in Teams (see here), or can use devices such as Meta’s Neural Band to record entire meetings, often without detection.
However, what may seem convenient and harmless—letting software “capture the conversation”—can quickly become a legal minefield. Employers who fail to address this emerging AI technology risk may find themselves, or the company, responsible for recordings they never knew existed. Many of these tools automatically record, transcribe, or summarize conversations, which may trigger state recording, eavesdropping, or wiretapping laws. This is especially critical in the 13 U.S. jurisdictions that require all parties to consent to a recording before it occurs. It is of particular concern in two-party consent states like California, Florida, Maryland, and Pennsylvania, where unauthorized recording can be a criminal offense—a felony or misdemeanor, potentially carrying jail time. In fact, in some states, such as Florida, merely possessing a recording that was not legally made is itself a crime.
The use of such AI tools raises several important considerations for employers and the lawyers advising them. For example, a company could be held liable for unauthorized recordings made and/or maintained by its employees in the course of their employment. On remand in Sanders v. American Broadcasting Companies, Inc., 20 Cal. 4th 907 (1999), ABC was found vicariously liable for intrusion upon seclusion because its undercover employee secretly recorded a coworker while acting within the scope of her employment. See here. AI tools also raise privilege considerations that attorneys and their clients should keep in mind. Recently, in U.S. v. Heppner, No. 25 Cr. 503 (S.D.N.Y.), the court held that a defendant’s AI legal queries about a pending investigation were not protected by the attorney-client privilege or the attorney work product doctrine–“the AI tool is plainly not an attorney.” See here. Thus, notes or transcripts of calls produced by an AI tool–even when a lawyer for the company is present and providing legal advice–may be discoverable in litigation.
What Should Employers Do?
Employers who deploy AI notetaking systems, or who fail to monitor employees’ use of such tools, may face legal risk if they do not ensure compliance with applicable state consent laws. They also risk exposing sensitive discussions–including those they would otherwise expect to be privileged–to wider disclosure.
With the above in mind, employers should consider taking the following preventive measures:
- Consult with legal counsel to identify potential risks related to AI use in the workplace, including recording, privacy, confidentiality, discrimination and e-discovery issues, among others.
- Require explicit consent from all participants in meetings or conversations where AI will be used, and implement controls that automatically prompt participants to provide that consent when any recording feature is activated.
- Develop policies addressing the acceptable use of AI in the workplace for both employers and employees and ensure compliance with the National Labor Relations Act (NLRA) when crafting such policies.
- Regularly review and update policies to adapt to evolving legal standards and technological advancements relating to AI.
- Train managers and IT staff on the use of company-approved AI tools and provide clear guidance on which AI tools are prohibited.