The legislation requires platforms to remind users they're interacting with a chatbot and not a human. The notification would appear every three hours for users who are minors. Companies will also have to maintain a protocol to prevent self-harm content and to refer users to crisis service providers if they express suicidal ideation.
“Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids,” the Democrat said. “We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability.”
California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits alleging that chatbots made by Meta, OpenAI and others engaged young users in highly sexualized conversations and, in some cases, coached them to take their own lives.
The legislation was among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry, which is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to the advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight.
California Attorney General Rob Bonta in September told OpenAI he has “serious concerns” about the safety of its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks to children when they use chatbots as companions.
Research by a watchdog group says chatbots have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.
OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen’s account.
Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.
EDITOR’S NOTE: This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
