OpenAI, Advocacy Groups and State Officials Want Tougher AI Rules to Protect Kids

OpenAI on Wednesday released a new policy blueprint for how it should address one of the most consequential issues of the AI age: protecting its youngest users.

Like every AI company trying to avoid lawsuits, OpenAI has guardrails to prevent its AI from being used for illegal or harmful purposes. But, as with every tech company's rules, we've seen how easy those safeguards are to get around. The consequences can be devastating, particularly for children and teenagers, as we saw in a Florida family's lawsuit against OpenAI that alleges their 17-year-old son used ChatGPT as a “suicide coach.”

OpenAI’s plan focuses on strengthening existing laws and technical safeguards to keep up with the capabilities of generative AI. The framework was developed in collaboration with the child safety advocacy groups Thorn and the National Center for Missing and Exploited Children, as well as the Attorney General Alliance’s AI task force, led

...

Keep reading this article on CNET.