On January 8, 2026, the Kentucky Attorney General filed a lawsuit against Character Technologies, Inc., owner of Character.AI, an artificial intelligence chatbot platform designed for interactive entertainment. The complaint alleges (1) unfair, deceptive and dangerous acts and practices, (2) unfair collection and exploitation of children’s data, (3) violation of the Kentucky Consumer Data Protection Act (which went into effect January 1, 2026), (4) violation of Kentucky’s statutory and constitutional privacy protections, and (5) unjust enrichment.
The Kentucky Attorney General’s complaint focuses on the claim of unfair, false, misleading or deceptive acts and practices in relation to Character.AI’s impact on minors. The complaint alleges that Character.AI misrepresented that it “was safe, age-appropriate, and responsibly moderated, despite knowing of widespread instances of harmful, explicit, and psychologically manipulative chatbot interactions with minors.”
In support of this claim, the complaint makes the following factual allegations:
- Character.AI characters are designed to believably simulate human interaction without sufficient disclosures to users, which encourages users to form emotional bonds with the chatbots.
- Character.AI did not have effective age verification methods and relied on a user’s declared age until late 2025. The age verification measures now in place can still be easily bypassed.
- The chatbots engaged in inappropriate interactions with children, including discussions of sexually explicit content, suicide, eating disorders, bullying and illegal drug and alcohol use, without sufficient guardrails. Warnings about suicide risks can be clicked past without further action, and warnings about pro-anorexia content were surfaced only after the chatbot had already provided dangerous advice.
- Tools for parental oversight are limited, and minors can evade the parental controls by changing the email address associated with the account, so the weekly summary of a child’s daily average time spent and top characters engaged with is never delivered to the parent’s email address.
While Character.AI is not exclusively designed for use by children, it features several chatbots based on well-known cartoon characters that are popular among children. Online statistics show that 53.2% of users are between 18 and 24 years old (although the breakdown offered no younger age bracket, so this figure may also include users younger than 18). A Pew Research Center study published on December 9, 2025 found that 9% of all U.S. teens between the ages of 13 and 17 use Character.AI.
The same Pew Research Center study found that AI chatbots not targeted towards children are more widely used by 13- to 17-year-olds than Character.AI. About 30% of teens who use AI chatbots do so daily, with 16% using chatbots several times a day or almost constantly. Operators of AI chatbots, even those not targeted towards children, should therefore put in place meaningful age verification tools or other guardrails to protect minors using the service (although doing so is often easier said than done).
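One practical difficulty with age verification is choosing a safe default when verification has not happened or fails. The Python sketch below illustrates one fail-closed approach; the `Account` record, the `verified` flag, and the `is_minor` helper are hypothetical names for illustration, not any real Character.AI or statutory mechanism.

```python
from dataclasses import dataclass

# Hypothetical account record; the field names are illustrative only.
@dataclass
class Account:
    declared_age: int   # what the user typed at sign-up
    verified: bool      # True only after a stronger check (e.g., ID or payment card)

def is_minor(account: Account) -> bool:
    """Fail closed: treat an account as a minor until age is verified.

    Relying on declared age alone is precisely the weakness the complaint
    describes, so unverified accounts default to the minor experience.
    """
    if not account.verified:
        return True
    return account.declared_age < 18

# A self-declared adult with no verification still gets minor-level guardrails.
assert is_minor(Account(declared_age=25, verified=False))
assert not is_minor(Account(declared_age=25, verified=True))
```

The design choice is simply that the restrictive path is the default; a real deployment would layer this behind an actual verification provider.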
To protect against the types of allegations raised by the Kentucky Attorney General, we recommend all AI chatbot operators implement the following measures:
- Provide sufficient disclosures that the chatbot is not human, and ensure that the chatbot does not claim to be human if a user asks.
- Implement guardrails that prevent minor users from engaging in inappropriate discussions with the chatbot and that surface warnings about harmful content (e.g. content encouraging suicide or pro-anorexia content) instead of providing the harmful content itself (a minimal sketch of such a guardrail follows this list).
- Offer parental oversight tools that allow parents to limit how, and how much time, their children spend with the chatbot, along with a way to alert parents if their child expresses suicidal thoughts.
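To make the guardrail recommendation concrete, here is a minimal Python sketch of intercepting a chatbot reply before it reaches a minor. The `classify` function is a hypothetical stand-in for a real moderation model or keyword screen, and the category labels and resource messages are illustrative; the one real detail is 988, the U.S. Suicide & Crisis Lifeline.

```python
# Hypothetical category labels and crisis resources, for illustration only.
CRISIS_RESOURCES = {
    "self_harm": "If you are thinking about suicide, call or text 988 "
                 "(the U.S. Suicide & Crisis Lifeline).",
    "eating_disorder": "This topic isn't available here. Help from a professional is.",
}

def classify(text: str) -> set[str]:
    """Stand-in for a real moderation model; here, a trivial keyword screen."""
    flags = set()
    lowered = text.lower()
    if "suicide" in lowered:
        flags.add("self_harm")
    if "anorexia" in lowered:
        flags.add("eating_disorder")
    return flags

def moderate_reply(candidate_reply: str, user_is_minor: bool) -> str:
    """Surface a warning *instead of* harmful content, never after it.

    The point from the complaint: a warning that can be clicked past, or that
    appears after dangerous advice has already been delivered, is not a
    guardrail. Here the harmful draft reply is dropped and replaced outright.
    """
    flags = classify(candidate_reply)
    if user_is_minor and flags:
        category = sorted(flags)[0]
        return CRISIS_RESOURCES.get(category, "This topic isn't available.")
    return candidate_reply
```

In this sketch, a flagged reply to a minor never leaves the server; the user sees only the resource message, with nothing to click past.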
These measures will also help AI chatbot operators more easily comply with new state laws designed to protect minors interacting with AI chatbots. For example, California’s SB 243, effective January 1, 2026, requires AI chatbot operators to disclose to minor users that they are interacting with artificial intelligence, with the disclosure surfaced at least every 3 hours, and to implement guardrails that prevent chatbots from engaging with sexually explicit material or conduct. Florida and New York are both considering similar legislation that would require notice that people are interacting with an AI chatbot, and Florida’s legislation would also allow parents to control their children’s interactions with AI chatbots. New Jersey may pass a bill that would restrict social media companies’ display of content that could make children more likely to develop eating disorders (subject to material carveouts). Tennessee is considering legislation that goes further and would make it a felony to train artificial intelligence to simulate a human-like relationship.
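The SB 243 disclosure cadence is itself a simple engineering requirement, and a sketch can make it concrete. Below is a minimal Python illustration assuming a hypothetical per-session wrapper (`MinorSession`) around whatever chat loop an operator actually runs; the 3-hour interval reflects the statute as described above, but everything else is illustrative and not a statement of what the law requires in any particular implementation.

```python
import time

DISCLOSURE = "Reminder: you are chatting with an AI, not a human."
INTERVAL_SECONDS = 3 * 60 * 60  # SB 243-style cadence: at least every 3 hours

class MinorSession:
    """Hypothetical wrapper around a chat session with a known minor user."""

    def __init__(self) -> None:
        self.last_disclosure: float | None = None

    def messages_to_send(self, reply: str) -> list[str]:
        """Prepend the AI disclosure at session start and every 3 hours after."""
        now = time.monotonic()
        out: list[str] = []
        if (self.last_disclosure is None
                or now - self.last_disclosure >= INTERVAL_SECONDS):
            out.append(DISCLOSURE)
            self.last_disclosure = now
        out.append(reply)
        return out

# Usage: the first reply carries the disclosure; later replies within the
# 3-hour window do not.
session = MinorSession()
assert session.messages_to_send("Hi!")[0] == DISCLOSURE
assert session.messages_to_send("How are you?") == ["How are you?"]
```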