California Lawmakers Aim to Protect Kids From AI Chatbots

Lawmakers want to prevent chatbots capable of human-like conversations from encouraging teens to hurt themselves or engaging in sexual interactions with kids.

The Character.AI app allows people to create their own chatbots, impersonating anyone and anything living, dead or inanimate. It's the kind of app that some states are targeting with new regulations.
(Dreamstime/TNS)
In Brief:

  • Parents in some states say AI “companion chatbots” encouraged their teenagers to commit suicide.

  • California could be the first in the nation to pass a bill regulating these kinds of chatbots’ interactions with minors. Two bills have reached Gov. Gavin Newsom’s desk. He has until Oct. 12 to sign or veto.

  • Online safety advocates and tech industry advocates disagree over what kinds of chatbots should be covered by the legislation. 


Certain AI chatbots are coming under fire after allegedly encouraging several teenagers to commit suicide, and lawmakers are looking for answers.

Parents of one 16-year-old in California say ChatGPT discouraged their son from sharing his distress with his parents, offered to write him a suicide note and encouraged him to go through with plans to kill himself. Last year, a mother in Florida sued Character.AI, charging that its bot engaged her 14-year-old son in an emotionally and sexually abusive relationship that contributed to his suicide.

The issue has drawn national attention. The Federal Trade Commission is investigating how major companies’ chatbots interact with youth, and several tech companies announced plans for safeguards.

States aren’t waiting around for federal or industry action, however. In California, two bills that aim to better protect young users from harmful interactions with these “companion chatbots” hit the governor’s desk last month. If Newsom neither signs nor vetoes them by his deadline, the bills automatically become law.

The two bills restrict how chatbots can interact with teenagers and children. Among other things, they would bar chatbots from encouraging suicide or self-harm and from engaging in sexually explicit interactions with kids. Acting now is essential, says Sen. Alex Padilla, sponsor of one of the bills.

“We have an opportunity here, as this technology begins to really rapidly evolve, to make sure that we're getting it right,” he says. It’s a chance not to repeat the mistakes governments made with social media, where they failed to establish content safeguards early on that “would have prevented a lot of the issues that we as a society have grappled with,” he says.

Gov. Gavin Newsom has until Oct. 12 to decide whether to sign or veto either bill.

Inside the Bills


Padilla’s bill, SB 243, would require chatbot platforms to regularly remind users that they aren’t interacting with a real person. Teenage and child users would get this alert every three hours. Chatbots would also be banned from providing sexually explicit images to minors or telling them to engage in sexual acts. Companies would have to take steps to prevent “production” of content related to suicide, self-harm or suicidal ideation. If users suggest they’re thinking of suicide or self-harm, the chatbot would need to direct them to a crisis services provider.

SB 243 has been considerably narrowed from its original draft. In the process, the bill lost the support of the American Academy of Pediatrics and some online safety advocates, who say it has become too weak.

“It was clear that a number of industry-led changes were made and our edits were not taken,” said Sacha Haworth, executive director of the Tech Oversight Project, during a press call. In a letter, Haworth argued that the bill exempted too many kinds of chatbots, like video game-based bots, and had been weakened in other ways, including by dropping the requirement for compliance checks by independent third parties.

Some tech industry advocacy groups say the bill has become more balanced or realistic. Computer and Communications Industry Association (CCIA) State Policy Director Megan Stokes says SB 243 has been usefully updated to avoid sweeping up simpler chatbots like those customers use on store websites to check inventory, and to ensure bots will direct users in distress to helplines, rather than simply cutting off conversation on harmful topics. Still, she worries that companies may be penalized if a technical glitch delays their issuing the required reminders that the user is not interacting with a real human. Her organization continues to oppose SB 243, but reportedly prefers it to the other proposed companion chatbot regulation.

Padilla says many of the changes to his bill were compromises essential to getting it passed.

“Do I wish that everything I had in the bill, in its initial draft, was still there? Sure. But I don't get to make that decision by myself in a democracy,” Padilla says. “Let's not forget: right now, there are no guardrails; there are no protocol requirements; there are no referral requirements; there are no noticing requirements; there are no reporting requirements — there's nothing. And so, I think the desire has been to get a bill that brings some positive benefit and some positive reasonable protections and guardrails that can be in place this January.”

Some of SB 243’s former supporters, like Tech Oversight California, have shifted to backing the other bill. The LEAD for Kids Act would require companies to prevent their chatbots from encouraging underage users to engage in violence, self-harm, drug or alcohol use, suicidal ideation or disordered eating. It would also require companies to prevent their bots from having sexually explicit interactions with youth and from encouraging kids to harm people or break laws. Such chatbots could not offer mental health therapy to a child without the supervision of a qualified professional, nor discourage children from getting qualified help. Companies would also have to stop their bots from telling users what they want to hear at the expense of factual accuracy or the youth’s safety.

CCIA, TechNet and several other organizations sent a letter to Newsom asking him to veto the LEAD for Kids Act, on the grounds that it is too vague, broad and burdensome and would dampen innovation. Stokes says the bill risks sweeping up chatbots that she says aren’t a problem, such as home virtual assistants. (Tech Oversight Project, meanwhile, specifically does want to see virtual assistants like Alexa included in a companion chatbot bill; Tech Oversight dropped support for SB 243 partly over its added exemption for certain voice-activated virtual assistants.)

For his part, Padilla doesn’t see the two bills as competing. LEAD for Kids has “more depth” and more specifics around chatbot interactions with minors, he says, while his bill applies to “some platforms that may not be marketed as companion chatbots, but function as such. ... I view the bills as pretty harmonious and hope the governor signs them both.”

Jule Pattison-Gordon is a senior staff writer for Governing. Jule previously wrote for Government Technology, PYMNTS and The Bay State Banner and holds a B.A. in creative writing from Carnegie Mellon.