Too Much Chatter? AGs Continue Criticism of AI Chatbots

Kelley Drye & Warren LLP

Following a multistate AG letter to AI companies relating to protecting children, California Attorney General Rob Bonta and Delaware Attorney General Kathleen Jennings met with OpenAI and sent a separate letter to its board. The letter states that, as a Delaware nonprofit with California headquarters, the company's recapitalization plan "is subject to review" to protect beneficiaries and the nonprofit mission. The letter reiterates concerns previously outlined in the multistate correspondence regarding inappropriate chatbot interactions with children and cites additional news reports linking similar interactions to recent suicides. The AGs invoke OpenAI's charitable mission, noting that "before we get to benefiting [humanity], we need to ensure that adequate safety measures are in place to not harm." The AGs urge the company to "amplify safety" as the dialogue between their offices continues.

Relatedly, soon after, AG Bonta announced his support of California's Leading Ethical AI Development (LEAD) for Kids Act, AB 1064. The Act would prohibit companion chatbots from being made available to children unless the chatbot "is not foreseeably capable of," among other things:

  • encouraging certain harm to themselves or other illegal activity,
  • providing therapy,
  • engaging in sexually explicit interactions, and
  • prioritizing validation of the child over factual accuracy or the child's safety.

The bill defines companion chatbots to include generative AI "that simulates a sustained humanlike relationship" by retaining user sessions/interactions to personalize and facilitate ongoing engagement, asking unsolicited, emotional questions, and sustaining personal ongoing dialogue. It excludes customer service bots, research or technical support systems, and internal business systems.

The legislature's findings recite recent instances of teen deaths by suicide after interacting with chatbots, which it states "are not incidental but the direct result of design choices by companies that intentionally simulate social attachment and emotional intimacy," and "are designed to exploit children's psychological vulnerabilities." The legislature declares, "[a]llowing children to use companion chatbots that lack adequate safety protections constitutes a reckless social experiment on the most vulnerable users."

If the bill passes, businesses may rely on an actual knowledge standard until 2027; after that, they must have made a reasonable determination that a user is not a child before permitting use of a companion chatbot. Under the Act, the AG may seek penalties of up to $25,000 per violation. In addition, a parent may bring a civil action for damages and other relief.

California already has a law requiring disclosure of the use of chatbots generally, including in the context of customer service. And other states have more specific AI chatbot laws, requiring, among other things, increased transparency for customers when they are interacting with AI. Be on the lookout for our continued reporting on government enforcement in the AI space.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Kelley Drye & Warren LLP

Written by: Kelley Drye & Warren LLP
