Generative AI is incredibly popular. So popular, in fact, that many GenAI tools now offer browser extensions that operate across every page open in the browser where they are installed. They helpfully summarize pages, highlight portions of likely interest, and gather and aggregate information for deeper analysis. Unfortunately, they also often send a copy of that information back to the vendor for training or other purposes. The problem is that these tools seldom discriminate and have voracious appetites for information. Any information. And that presents security and privacy challenges.
The information the browser extension collects could be intellectual property subject to an NDA your organization has with another company, and that company likely will not want the GenAI vendor holding a copy of its IP. The information could be sensitive health or employment data. The browser extension is indifferent. Your company hopefully has an agreement in place under which the GenAI vendor promises never to sell or share any of the information it collects. But even if the information never goes anywhere else, simply sharing this type of information with an unnecessary third party will likely violate certain privacy laws and security or privacy agreements with other organizations. It may even violate the privacy terms of your own organization's website.
An even bigger problem is recognizing that this is happening at all. Individual employees are very creative at installing technology that makes their lives easier, and depending on how open your organization's computers are, there may be no restrictions on downloading a GenAI browser extension. Most users are unaware of how the extension works, never read the privacy notice associated with it and, unless they specialize in this area, would not fully appreciate what they read anyway. Even if the installation is handled by the IT department and the GenAI product has a fully vetted privacy policy, the tools are still exfiltrating information. We are seeing organizations negotiate sound privacy agreements with their GenAI vendors but fail to account for their other privacy commitments, which may prohibit exactly what the GenAI tool is trying to accomplish. This is why we are issuing this legal alert.
How to Respond to Generative AI Security and Privacy Threats
To address this, organizations first need to determine whether a problem exists. There are products (ironically, some of which use AI) that detect whether browser plug-ins are being used and identify them. Any unauthorized plug-in should be reviewed, and a risk assessment should be performed to determine whether its use should continue. This is also an opportunity to educate everyone about the risks associated with plug-ins: some are fundamentally malicious, others are legitimate but associated with malicious activity, and others are both harmless and useful, just never approved for use. Some, however, will likely be associated with a generative AI tool and require further analysis.
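Enterprise inventory products are the right tool for this step, but even a basic script can reveal what is installed on a single machine. The sketch below is a minimal illustration, assuming Chrome's on-disk layout (a profile's Extensions folder containing one directory per extension ID, then one per version, each holding a manifest.json); it is not a substitute for a vetted inventory product, and paths differ by browser and operating system.

```python
import json
from pathlib import Path

def inventory_extensions(extensions_root):
    """List extensions found under a Chrome-style Extensions folder.

    Each extension lives at <root>/<extension-id>/<version>/manifest.json.
    Names beginning with "__MSG_" are localization placeholders and would
    need the extension's locale files to resolve fully.
    """
    found = []
    for manifest in Path(extensions_root).glob("*/*/manifest.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed manifest; skip it
        found.append({
            "id": manifest.parent.parent.name,       # extension ID directory
            "name": data.get("name", "(unnamed)"),
            "version": data.get("version", "unknown"),
        })
    return found

# Example usage (macOS path shown; Windows and Linux profiles differ):
# profile = Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions"
# for ext in inventory_extensions(profile):
#     print(ext["id"], ext["name"], ext["version"])
```

A list like this still needs human review: the extension ID and name must be checked against the organization's approved list, and anything unrecognized triaged as described above.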
For the generative AI tools, the organization must review the privacy policy and terms of use, which should explain what information the tool captures and how it will be used. Whatever the use, it will need to be measured against any security regulation applicable to your industry and any agreements you have with others about how you will secure the information entrusted to your organization. If there is a conflict, it will need to be addressed.
Other immediate steps your organization should consider include reviewing the policies and procedures in place around the use of generative AI. Determine whether they address this situation, whether there is a list of approved generative AI tools and, if so, whether there are limitations on how those tools may be used. Confirm that the policies align with current regulations and contracts, as these change almost daily. Finally, review employee training content to determine whether these issues are on the training agenda and, if not, develop a program to inform your workforce of these risks and how to reduce them.
Long term, a great deal of policy work needs to happen around how GenAI interacts with web content. One obvious solution is a tag that operators can place on web-based applications and that, the AI industry indicates, would stop the tool from reading or exfiltrating content from that page. This will not stop malicious code, but it will help with legitimate GenAI tools. Another step would be to create a legal exception built around a defined standard that binds the GenAI tool on how it may use the content it consumes when it operates according to that standard. This would protect both third parties and the primary contractor for that service. GenAI providers could charge a premium for this service, much as there is a premium for hosting on a government cloud, to cover the costs of those additional protections and limitations. Of course, for any of these solutions to be created, everyone first needs to recognize that there is an issue and that it matters.
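No such tag has been standardized yet. One informal convention that has emerged is a "noai" directive placed in a page's robots meta tag, which some crawlers voluntarily honor; because compliance is voluntary, it illustrates the tagging idea rather than enforcing it. The sketch below, using only Python's standard library, shows how a well-behaved tool could check a page for that directive before processing its content (the directive name and the opt-out semantics are assumptions based on the informal convention, not a ratified standard).

```python
from html.parser import HTMLParser

class NoAIMetaParser(HTMLParser):
    """Detect a robots meta tag carrying an informal 'noai' directive.

    Note: 'noai' is not part of any ratified standard; honoring it is
    voluntary, so it cannot stop malicious or indifferent tools.
    """
    def __init__(self):
        super().__init__()
        self.noai = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attr_map = dict(attrs)
        if (attr_map.get("name") or "").lower() == "robots":
            content = attr_map.get("content") or ""
            directives = {d.strip().lower() for d in content.split(",")}
            if "noai" in directives:
                self.noai = True

def page_opts_out_of_ai(html_text):
    """Return True if the page's robots meta tag includes 'noai'."""
    parser = NoAIMetaParser()
    parser.feed(html_text)
    return parser.noai
```

A legitimate GenAI extension could run a check like this before summarizing or uploading page content; a binding legal standard of the kind described above would be what turns such a courtesy into an obligation.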