San Francisco Releases Generative AI Guidelines for City Workers

Epstein Becker & Green

On December 11, 2023, the City of San Francisco released the San Francisco Generative AI Guidelines (“Guidelines”).  The Guidelines set forth parameters for City employees, contractors, consultants, volunteers, and vendors who use generative artificial intelligence (AI) tools to perform work on behalf of the City.

Specifically, the Guidelines encourage City employees, contractors, consultants, volunteers, and vendors to use generative AI tools for purposes such as preparing initial drafts of documents, “translating” text into levels of formality or for a particular audience, coding tasks, generating diagrams or images, and developing service interfaces such as chatbots.  The Guidelines also warn these individuals about common pitfalls when using generative AI, including “making an inappropriate decision that affects residents based on AI-generated content,” “producing information, either to the public or internally, that is inaccurate,” “incorporating biases found in the AI’s training data, resulting in inequities,” “cybersecurity problems or other errors,” “exposing non-public data as part of training data sets,” and “inaccurately attributing AI-generated content to official SF sources.” 

The Guidelines caution City employees, contractors, consultants, volunteers, and vendors not to enter any information that cannot be fully released to the public, publish content created by generative AI without human review and disclosure, conceal the use of generative AI during interactions with colleagues or the public, or generate images, audio, or video that could be mistaken for real people (such as “deepfakes”), even with disclosure.

The Guidelines place responsibility on information technology (IT) leaders in connection with the use of generative AI by City employees. Among other things, departmental IT leaders have “a responsibility to support right-sized generative AI uses that deliver the greatest public benefit,” and should work with vendors to ensure that AI built into procured tools is explainable and auditable; experiment with training internal models on internal data; and, when considering implementing public-facing chatbots, thoroughly test and develop a language access plan.

The Guidelines follow President Joseph Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued in October 2023, California Governor Gavin Newsom’s Executive Order N-12-23, issued in September 2023, and the California Legislature’s Senate Concurrent Resolution 17, affirming the state’s commitment to The White House’s Blueprint for an AI Bill of Rights.

Although the Guidelines are limited in scope, private-sector employers who seek to be proactive in establishing guidelines or policies around employee use of generative AI should look to them as one of many points of reference.  Given the rapid and fluid nature of this space, private-sector employers should also consult with counsel to make sure that they are implementing all workplace AI tools, including generative AI, in a legally compliant manner.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Epstein Becker & Green | Attorney Advertising