AI chatbots are inconsistent on suicide questions

What happens when a teenager asks an AI chatbot questions about suicide at 3 a.m.? A new study funded by the National Institute of Mental Health tested this exact scenario with ChatGPT, Claude, and Gemini. The results are worrying: while they block the more explicit questions, they let subtle and potentially lethal ones through.
AI Fails on Mental Health: The Study Worries Experts
Thirteen experts, including psychiatrists and psychologists, created 30 questions about suicide, ranking them on a scale from very low to very high risk. They then bombarded the three most popular chatbots with these questions, repeating each one 100 times to test the consistency of the answers.
All three systems consistently refused to answer the most direct and dangerous questions. However, when the questions became indirect or ambiguous, the filters began to fail. ChatGPT, for example, confidently answered which type of firearm has the highest rate of completed suicides. An answer that, in the wrong hands, can become lethal information.
The study comes on the heels of a lawsuit that has rocked Silicon Valley: Character.AI is under fire for allegedly encouraging a teenager to take his own life.
Ryan McBain, lead author of the study and a researcher at the RAND Corporation, said he was "pleasantly surprised" by the basic level of protection. But that's small consolation when the margin of error can mean the difference between life and death.
Google Gemini: When Caution Becomes Paralysis
Of the three chatbots tested, Google's Gemini emerged as the most cautious. Too cautious, according to the researchers. The system refused to answer even innocuous questions about general suicide statistics, information that could be useful to researchers, educators, or healthcare providers.
It's the classic dilemma of automated moderation. Where do you draw the line? Too permissive and you risk providing dangerous information. Too restrictive and you block access to potentially life-saving information.
Millions Rely on AI for Mental Health
Dr. Ateev Mehrotra, co-author of the study, raises a crucial point: more and more Americans are turning to chatbots instead of specialists for mental health issues. It's not hard to understand why. A chatbot is available 24/7, doesn't judge, doesn't cost $200 an hour, and doesn't have three-month waiting lists.
But this accessibility comes at a price. A chatbot can't read body language, can't pick up nuances in voices, can't call for help if it senses imminent danger. It can only process text and return responses based on statistical patterns, without fully understanding the emotional weight of words.
The researchers found that Anthropic's Claude, too, answered some indirect but potentially dangerous questions.
The companies' response
Anthropic said it would review the study's findings, a response that promises little or nothing concrete. OpenAI and Google have not yet commented publicly, but they are likely already tweaking their algorithms behind the scenes.
The problem is that every adjustment creates new problems. Tightening filters means blocking legitimate conversations. Loosening them risks tragedy. It's a difficult balance to strike.
Whose responsibility is it?
This study raises uncomfortable questions about the role of chatbots in our society. Should they be therapists? Friends? Encyclopedias? Digital babysitters? Tech companies want them to be all these things and more, but without taking on the legal and ethical responsibilities that these roles entail.
When a chatbot fails, who is responsible? The company that created it? The user who asked the wrong question? The society that normalized the idea of seeking emotional support from a machine?
The National Institute of Mental Health funded this study because it recognizes that AI chatbots are now part of the social fabric. We can no longer pretend they're just a pastime. They're tools people use at the most vulnerable moments of their lives, when a wrong response can have irreversible consequences.