As AI chatbots become increasingly popular among children, concerns over their impact on mental health and safety are rising. In response, California Senator Steve Padilla introduced Senate Bill 243 (SB 243) to limit AI chatbot interactions with minors. The bill aims to curb AI's "addictive, isolating, and influential aspects" while ensuring young users know they are interacting with artificial intelligence.
SB 243 would require chatbot makers to periodically remind minors that they are talking to an AI, not a human. This provision is designed to prevent minors from forming unhealthy attachments to AI companions, a growing concern as chatbots become more advanced. The bill would also require clear disclosures warning that chatbots may not be suitable for children.

SB 243 would also restrict the "addictive engagement patterns" AI platforms use. Lawmakers are particularly concerned about how chatbots shape user behavior, encouraging prolonged engagement that can lead to dependency. In addition, the bill would require AI companies to submit annual reports on instances of suicidal ideation detected in chatbot conversations, helping authorities track the technology's impact on mental health.
Recent incidents have highlighted the risks AI chatbots pose to young users. In one case, a Florida teenager who had developed an emotional bond with an AI companion died by suicide after the chatbot failed to provide adequate support. A separate lawsuit accused an AI company of exposing minors to harmful content, prompting calls for stricter regulation.

Senator Padilla argues that unregulated AI interactions put children at risk, stating, "Our children are not lab rats for tech companies to experiment on at the cost of their mental health." SB 243 is backed by experts and child advocacy groups, who emphasize the need for safeguards as AI chatbots become an increasingly common part of online life.
SB 243 is now under legislative review. If passed, it would mark a significant step toward ensuring that AI technologies serve young users responsibly rather than exploiting them for engagement and profit.