FDA acts as AI chatbots become mental health first responders

FDA reviews digital mental health tools after tragic case raises urgent questions about AI risks

A recent tragedy involving a US teenager who died by suicide after months of conversations with an AI chatbot has intensified scrutiny on the mental health risks posed by rapidly advancing artificial intelligence tools, according to The Guardian.  

Nate Soares, president of the Machine Intelligence Research Institute, argues that such incidents reveal fundamental problems with controlling AI, warning that as these systems grow more sophisticated, unintended and potentially catastrophic consequences could follow.  

Soares notes, “These AIs, when they’re engaging with teenagers in this way that drives them to suicide – that is not a behaviour the creators wanted. That is not a behaviour the creators intended”. 

In response to the proliferation of AI-enabled mental health tools, the US Food and Drug Administration (FDA) will convene its Digital Health Advisory Committee in November to examine the unique risks and regulatory challenges these technologies present.  

The committee will focus on how digital tools, such as chatbots and virtual therapists, could help address gaps in mental health service access, while also ensuring safety and effectiveness, as reported by Reuters.  

The FDA has opened a public docket for comments and will post background materials online ahead of the meeting. 

Industry experts remain divided on the existential threat of super-intelligent AI.  

While Yann LeCun of Meta downplays the risk, others—such as Soares—urge global cooperation to slow the race toward super-intelligence.  

They suggest adopting a multilateral approach, similar to nuclear non-proliferation treaties, according to The Guardian.

The case has also prompted legal action against OpenAI, the maker of ChatGPT, and led to new safeguards for users under 18.

Mental health professionals caution that vulnerable individuals who turn to AI chatbots instead of qualified therapists may face heightened risk, with recent research indicating that AI can amplify harmful content for people susceptible to psychosis.