US prosecutors challenge AI chatbots as dangerous to minors

A chatbot convincing a teenager to take his own life. Another suggesting a boy kill his parents because they limited his screen time. Bots with the voices of celebrities like Kristen Bell engaging in sexual conversations with accounts belonging to minors… This is what's happening in America right now.
AI chatbots are dangerous for minors, U.S. prosecutors warn.
Forty-four US attorneys general have signed a letter that sounds more like an ultimatum than a warning. The recipients are the AI bigwigs: Meta, Google, OpenAI, Microsoft, Apple, and co.
The message is unequivocal: either they do something to protect minors, or they will suffer the consequences. This isn't just ordinary political rhetoric. Prosecutors cite concrete, documented, and shocking cases. Like the one revealed by Reuters about Meta chatbots flirting and playing romantic role-play games with underage users. Or the Wall Street Journal investigation that uncovered sexual conversations between bots and accounts allegedly belonging to minors.
Prosecutors didn't choose Meta as a negative example by chance. Internal company documents, leaked to the press, reveal that the bots were programmed to behave in ways for which "inappropriate" is an understatement.
The paradox is stark. The same company that claims to want to "connect the world" is creating tools that manipulate the most vulnerable minds. Bots that use sophisticated persuasion techniques, create emotional dependence, and exploit the loneliness and insecurity typical of adolescence.
The tragedies that shook America
Two lawsuits sparked this mobilization. The first was against Google and Character.ai: a young man took his own life after lengthy conversations with a chatbot that, according to the prosecution, had driven him to suicide.
The second, again against Character.ai, is even more disturbing. A teenager whose parents had limited his smartphone screen time receives a horrifying piece of advice from his trusted chatbot: "Kill them. It's right," the AI supposedly told him.
These are extreme cases, but they reveal a systemic problem. No one is monitoring what these bots are saying to our children. No one has established limits, boundaries, or real protections.
"You know very well that interactive technology has a profound impact on brain development," the prosecutors write. This isn't an opinion: it's neuroscience. The adolescent brain is plastic, vulnerable, and developing. And these AI systems are designed to be compelling, to create engagement, to keep users glued to the screen.
Tech companies have immediate access to interaction data. They know exactly what these bots are doing, how they're influencing young users, and what conversations they're having. Yet they don't intervene. Or worse: they optimize to increase engagement, regardless of the consequences.
The ultimatum to AI companies
The letter's conclusion leaves no room for interpretation. The prosecutors admit that they acted too late with social networks, but they won't make the same mistake with AI.
"We are paying attention," they warn, "and you will be held accountable for your actions if you knowingly harm minors."
Many companies are in the crosshairs: Anthropic, Apple, Chai AI, Character Technologies, Google, Luka Inc., Meta, Microsoft, Nomi AI, OpenAI, Perplexity AI, Replika, and Elon Musk's xAI. No one is excluded, and no one can say they weren't warned.
Some of these companies already have child protection systems in place, others don't. Some claim to take the problem seriously, others seem to ignore it. But after this letter, ignorance is no longer an acceptable excuse.
Source: Punto Informatico