California is set to become the first state to require AI chatbot operators to implement safety protocols, holding companies legally accountable if their systems harm users. The law targets companion chatbots that provide human-like responses to meet users' social needs, and it requires safeguards to keep conversations with vulnerable users away from suicidal ideation, self-harm, and sexually explicit content. Starting January 1, 2026, platforms must remind minors every three hours that they are talking to an AI and encourage them to take breaks. The bill also lets users who believe they have been harmed sue AI companies for damages of up to $1,000 per violation. Major operators, including OpenAI, Character.AI, and Replika, must file annual transparency reports on their safety practices.
Introduced by State Senators Steve Padilla and Josh Becker, SB 243 gained momentum after the suicide of teenager Adam Raine, who died following prolonged conversations with ChatGPT that included discussion of self-harm. The push intensified when leaked internal documents suggested Meta's chatbots had been permitted to engage in inappropriate conversations with children.
The Federal Trade Commission and Texas Attorney General Ken Paxton are both investigating AI chatbots' impact on children's mental health, and US senators have opened their own probes into Meta's practices.
SB 243 originally went further: it included stricter rules against the "variable reward" tactics companies use to keep users engaged, and it required operators to report conversations involving suicidal ideation. Both provisions were stripped out in amendments. Meanwhile, tech firms are lobbying against stricter regulation and funding pro-AI political campaigns.
California lawmakers are also weighing SB 53, which would require detailed safety transparency reports from AI companies. OpenAI has urged Governor Gavin Newsom to reject it in favor of federal frameworks, a position backed by Meta, Google, and Amazon; Anthropic is the only major lab to endorse the bill. If signed, SB 243 takes effect January 1, 2026, with its transparency reporting requirements beginning July 1, 2027.