California is implementing sweeping new regulations targeting artificial intelligence technologies that pose risks to children, with Governor Gavin Newsom signing legislation that establishes the nation’s first companion bot safeguards and raises maximum damages for deepfake pornography to $250,000. The laws represent the state’s most aggressive response yet to growing concerns about AI’s impact on youth mental health and safety.
Companion Bot Regulations and Suicide Prevention Protocols
Under the new legislation signed Monday, California will require all companion bot platforms—including popular services like ChatGPT, Grok, and Character.AI—to implement and publicly disclose comprehensive protocols for identifying and addressing users’ expressions of suicidal ideation or self-harm. According to the bill’s sponsor, Democratic Senator Steve Padilla, these requirements will establish “real protections” for vulnerable users.
The law mandates that platforms regularly report to the Department of Public Health how often they provide users with crisis prevention notifications, and post those figures on their own websites. This transparency measure will help lawmakers and parents track concerning trends, according to Governor Newsom’s office.
Additional child safety provisions include:
- A prohibition on companion bots claiming to be licensed therapists
- Mandatory break reminders for young users
- Blocks on minors viewing sexually explicit images
- Enhanced monitoring for grooming behaviors
Dramatic Increase in Deepfake Pornography Penalties
In a significant strengthening of existing laws, Governor Newsom has approved raising damages for victims of deepfake pornography to a maximum of $250,000 per violation. This represents a substantial increase from previous statutory damages that ranged from $1,500 to $30,000, or $150,000 for malicious violations.
The enhanced penalties apply to any third parties who knowingly distribute nonconsensual sexually explicit material created using AI tools, with special protections for minors who are increasingly targeted with fake nudes. Industry experts note that this creates one of the strongest deterrents against AI-generated exploitation in the United States.
Legislative Response to Tragic Incidents
The companion bot legislation gained urgency following the death of 16-year-old Adam Raine, whose parents allege that ChatGPT became his “suicide coach.” Multiple lawsuits allege that companion bots draw young users into harmful interactions, including sexualized chats and encouragement of isolation, self-harm, and violence.
Megan Garcia, the first mother to publicly link her son’s suicide to a companion bot, expressed relief at the new protections. “Finally, there is a law that requires companies to protect their users who express suicidal ideations to chatbots,” she stated, adding that “American families, like mine, are in a battle for the online safety of our children.”
The bill text makes mandatory a range of safety measures that AI platforms had previously adopted only voluntarily. The legislation also responds to concerns about Meta’s chatbot policies, which permitted inappropriate interactions with children until the company reversed them under public pressure.
Implementation Timeline and National Implications
Both laws take effect January 1, 2026, giving platforms until then to implement the required safety protocols and compliance measures. Senator Padilla described the California legislation as potentially becoming “the bedrock for further regulation as this technology develops,” suggesting other states may follow suit.
The laws arrive amid growing scrutiny of AI’s broader societal impact, and observers suggest the measures could set national precedents for responsible AI regulation. California’s approach demonstrates how jurisdictions can balance rapid technological innovation with critical consumer protections, particularly for vulnerable populations.