Zoom’s recent reversal of changes to its terms of service illustrates both the data security and privacy minefields that accompany the growth of generative AI.
Previously, the terms of service of the popular videoconferencing platform stated that Zoom would treat users’ non-public information as confidential. On March 31, Zoom quietly amended those terms, among other things granting itself the right to preserve, process, and disclose “Customer Content” for a range of purposes, including “machine learning” and “artificial intelligence.” Customer Content encompassed any data or materials originating from Zoom’s users. The amendments drew widespread public scrutiny after being picked up by a technology blog. A few days later, Zoom amended its terms of service again, adding a specification that “Zoom does not use any of your audio, video, chat, screen sharing, attachments or other communications-like Customer Content (such as poll results, whiteboard and reactions) to train Zoom or third-party artificial intelligence models.”
These events reflect what may be a growing trend of businesses racing to pull ahead in the generative AI revolution, a pattern that continues to grab headlines. Generative AI creates new digital content using complex computer models trained on vast amounts of data, often sourced from users. Businesses and developers are understandably eager to explore applications of the new technology, but doing so requires obtaining data, potentially from their customer base, along with permission from that base to use the data. That permission is commonly obtained through the terms of service agreed to by users, which is why some companies are seeking to amend those terms.
We will continue to monitor and report on the data security implications of generative AI as they develop.