California's AI chatbot law, SB 243, has been signed. Governor Gavin Newsom announced the new safeguards for minors in a Monday notice from the governor's office.
The package targets AI companion chatbots and the social media platforms and websites that serve California users.
SB 243 was introduced in January 2025 by Senator Steve Padilla and Senator Josh Becker.
The bill sets age verification duties, warnings and disclosure rules, and suicide and self-harm protocols for services that present chatbots to minors. The law applies to any platform offering these tools to California residents.
The California AI chatbot law takes effect in January 2026.
Covered platforms now have a clear timetable to implement age verification, display the mandated warnings, and stand up suicide and self-harm protocols.
The governor’s office listed the measures as child-safety safeguards.
Age Verification and AI Companion Chatbots: Warnings and Disclosure for Minors
Age verification becomes mandatory where AI companion chatbots are available to minors. Platforms must confirm a user’s age before enabling chatbot access.
The California AI chatbot law positions verification as the first control.
The statute also requires warnings and disclosure. Chatbot interfaces must tell minors that replies are AI-generated and may not be suitable for children.
The text directs platforms to present the warning clearly and in language young users can understand.
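Purely as an illustration, the gate-then-disclose flow described above might look like the sketch below. Every name in it (start_chat_session, AI_DISCLOSURE, the under-18 cutoff) is a hypothetical placeholder; SB 243 sets duties, not implementations.

```python
# Hypothetical sketch of an age-gated chatbot entry point.
# SB 243 sets the duties (verify age, disclose AI-generated replies);
# it does not prescribe this or any particular implementation.

AI_DISCLOSURE = (
    "Heads up: replies in this chat are AI-generated and "
    "may not be suitable for children."
)

def start_chat_session(user_age_verified: bool, user_age: int | None) -> dict:
    """Gate chatbot access on age verification, then disclose to minors."""
    if not user_age_verified:
        # Verification is the first control: no verified age, no access.
        raise PermissionError("Age verification required before chatbot access.")
    session = {"messages": []}
    if user_age is not None and user_age < 18:
        # Minors see a plainly worded notice before any AI-generated reply.
        session["messages"].append({"role": "system_notice", "text": AI_DISCLOSURE})
    return session
```

The ordering is the point: verification gates the session before any disclosure or reply is produced, mirroring the statute's verification-first framing.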
These rules reach beyond mainstream social media platforms. Websites, decentralized social media, and gaming platforms that provide AI companion chatbots to California residents fall in scope.
The SB 243 framework sets a uniform baseline for access, disclosure, and presentation.
Suicide and Self-Harm Protocols: Platform Duties Under SB 243
The law requires formal suicide and self-harm protocols. Platforms must maintain procedures to detect risk and escalate cases that involve minors.
The California AI chatbot law ties these protocols to concrete operational steps.
Supporters cited reports of unsafe outputs. Senator Steve Padilla said, "This technology can be a powerful educational and research tool," while arguing that the industry is incentivized to hold young users' attention "at the expense of their real world relationships." The quote appears in legislative communications tied to SB 243.
Under SB 243, platforms should integrate escalation paths into chatbot workflows. The suicide and self-harm protocols are intended to prompt timely action when AI companion chatbots surface risky exchanges.
The requirement applies to social media platforms, websites, and gaming platforms serving California minors.
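As a sketch only, an escalation hook wired into a chatbot workflow might resemble the following. The upstream risk classifier, the 0.8 threshold, and the reference to the 988 crisis line are illustrative assumptions, not anything the statute prescribes.

```python
# Hypothetical escalation hook for a chatbot pipeline.
# The classifier, threshold, and resources below are illustrative
# placeholders; SB 243 requires protocols, not this specific design.

CRISIS_RESOURCES = "If you are in crisis, call or text 988 (US) for support."

def handle_message(text: str, user_is_minor: bool, risk_score: float) -> dict:
    """Route a message through a self-harm check before generating a reply.

    risk_score is assumed to come from an upstream self-harm classifier
    (0.0 = no signal, 1.0 = strong signal).
    """
    if risk_score >= 0.8:
        # High risk: interrupt the normal reply path, surface crisis
        # resources, and flag the exchange for human review.
        return {
            "reply": CRISIS_RESOURCES,
            "escalate_to_human_review": True,
            "log_for_audit": user_is_minor,  # minors' cases get priority handling
        }
    # Low risk: proceed with the ordinary chatbot response path.
    return {"reply": None, "escalate_to_human_review": False, "log_for_audit": False}
```

The design choice worth noting is that escalation runs before reply generation, so a high-risk exchange is interrupted rather than answered.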
Liability and Autonomy Claims: What Changes for Social Media Platforms
The California AI chatbot law narrows autonomy claims. Companies will find it harder to argue that an AI companion chatbot “acted autonomously” to avoid liability.
The language pushes accountability to the service that deploys and manages the tool.
This shift affects social media platforms and websites that offer chatbots to minors. With SB 243, the duty now includes age verification, warnings and disclosure, and working suicide and self-harm protocols. The liability framework aligns with those concrete requirements.
The timeline matters. The January 2026 effective date gives time to document controls, update logs, and test escalation flows.
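To make "test escalation flows" concrete, a team might pin the behavior of the hypothetical handle_message hook from the earlier sketch with checks like these. Again, this is an assumption-laden illustration, not a compliance recipe.

```python
# Assumes the hypothetical handle_message hook from the earlier sketch
# lives in a module named chatbot_pipeline; both names are placeholders.
from chatbot_pipeline import handle_message

def test_high_risk_message_escalates():
    result = handle_message("...", user_is_minor=True, risk_score=0.9)
    assert result["escalate_to_human_review"] is True
    assert "988" in result["reply"]

def test_low_risk_message_passes_through():
    result = handle_message("...", user_is_minor=False, risk_score=0.1)
    assert result["escalate_to_human_review"] is False
```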
The California AI chatbot law sets a path where autonomy claims do not erase responsibility for foreseeable risks.
Federal and State Context: Utah Law and the RISE Act
Other jurisdictions are active. A Utah law took effect in May 2024, requiring chatbots to disclose that users are not speaking with a human.
That statute targets warnings and disclosure across consumer interfaces. It provides a reference point for state-level oversight.
In June, Senator Cynthia Lummis introduced the Responsible Innovation and Safe Expertise (RISE) Act. The federal bill proposes limited civil-liability immunity for AI developers whose tools are used by professionals in healthcare, law, finance, and other fields.
The measure drew mixed reactions and was referred to committee for consideration.
The contrast is clear. The California AI chatbot law focuses on minors, age verification, warnings and disclosure, and suicide and self-harm protocols.
The RISE Act addresses developer liability at the national level. Firms operating in multiple states will need to monitor both efforts.
Who Is Covered: Websites, Decentralized Social Media, and Gaming Platforms
Coverage spans a wide set of services. Social media platforms, websites, decentralized social media, and gaming platforms that offer AI companion chatbots to California minors fall under SB 243. The jurisdictional hook is service to California residents.
The California AI chatbot law emphasizes interface duties. Age verification gates access, warnings and disclosure inform users, and suicide and self-harm protocols shape escalation. Each requirement targets the interaction where harm could occur.
The effective date in January 2026 anchors implementation. Governor Gavin Newsom, Senator Steve Padilla, and Senator Josh Becker are the central names tied to the California AI chatbot law. The statute sets clear expectations for platforms and establishes a model other states may assess.
Disclosure: This article does not contain investment advice or recommendations. Every investment and trading move involves risk, and readers should conduct their own research when making a decision.
Kriptoworld.com accepts no liability for any errors in the articles or for any financial loss resulting from incorrect information.
Tatevik Avetisyan is an editor at Kriptoworld who covers emerging crypto trends, blockchain innovation, and altcoin developments. She is passionate about breaking down complex stories for a global audience and making digital finance more accessible.
📅 Published: August 4, 2025 • 🔄 Last updated: August 4, 2025