“Generative AI” tools – i.e., tools that use artificial intelligence (AI) technology to generate content, such as text, images, and videos – have become increasingly popular. As a result, companies are quickly facing questions about whether and how employees should use them. These AI tools can be beneficial; for example, they can help employees perform certain kinds of work more efficiently. But using generative AI tools can also raise unique issues in the workplace, and companies are well advised to set guardrails in advance through a generative AI policy.
Below we outline five key considerations for companies and organizations developing a generative AI policy. These are only some of the relevant factors; companies will also need to look closely at how the tools might be used in order to address other issues (e.g., intellectual property) that may arise.
- Privacy. Generative AI tools can collect information as part of the process of generating output, and may use data inputs to train their AI models. Any user needs to understand what information can be collected and used, and what controls are in place. In the workplace, companies should establish rules that, at a minimum, prohibit employees from sharing personal or employee information, confidential commercial information, or any other sensitive data with these tools.
- Transparency. As part of their own risk management strategies, companies should understand when and how employees are using AI tools. Depending on the specific use case, companies also may need to tell customers and business partners whether and how these tools are being used. Policies requiring employees to document when they use generative AI can assist with this.
- Avoiding bias. Generative AI tools can reflect the bias of their data inputs, and while many are programmed with controls to try to avoid generating content reflecting certain biases, their outputs should still be carefully reviewed. Companies will want to ensure that antidiscrimination and fair treatment policies apply to the use of AI tools, and that they are adequately monitoring for unfair and unlawful outputs and impacts.
- Human review. Generative AI tools are useful, but their outputs should not generally be treated as the final word. Companies should put accountability mechanisms and procedures in place to ensure adequate oversight of generative AI outputs, including checks for accuracy and relevance.
- Limiting use cases. As a practical matter, some uses may raise few flags, while others – such as using AI tools for customer-related purposes – can bring greater risks if not properly managed. Given the breadth of potential AI uses and the unique risks some of them carry, companies should consider identifying particularly risky or sensitive use cases that are off-limits for generative AI, as well as uses the company affirmatively permits. For any other uses, companies should consider requiring up-front approval before employees deploy generative AI tools.
While having basic policies and procedures for employee use of generative AI tools is an important place to start, managing generative AI tools should be a broader part of how a company or organization uses AI and manages AI risk more generally. Frameworks like NIST’s AI Risk Management Framework – discussed in more detail in our AI RMF summary and podcast interview – can help with overall AI risk management for an organization.
Overall, as AI technology and the surrounding legal landscape continue to develop quickly, organizations should make sure that their AI approach keeps pace.