Written by: Jayati Dubey
May 5, 2025
Researchers at Dartmouth College are developing an artificial intelligence (AI) application called Therabot to address the global shortage of mental health professionals.
Unlike many unverified mental health apps currently on the market, Therabot is grounded in clinical science and is showing early promise in delivering reliable psychotherapy.
Nick Jacobson, assistant professor of data science and psychiatry at Dartmouth, emphasized the urgency of innovation in mental health delivery.
"Even if we multiply the number of therapists by ten, we still won't meet the demand," he told AFP.
Jacobson's team recently published a clinical study demonstrating Therabot's effectiveness in treating individuals with anxiety, depression, and eating disorders. A second trial is planned to compare outcomes with traditional face-to-face therapies.
Unlike many commercial AI apps in the mental health space, Therabot has been in development for nearly six years, with a focus on safety and clinical efficacy.
Jacobson and project co-lead psychiatrist Michael Heinz stress that profit-driven shortcuts could compromise patient safety.
Rather than relying solely on mined therapy transcripts or existing training videos, the team created simulated patient-caregiver interactions from scratch to enhance trust and reliability.
This approach, they say, adds a layer of authenticity and clinical control over the AI's development.
The American Psychological Association (APA) is cautiously optimistic about AI's potential in mental health.
Vaile Wright, APA's senior director of healthcare innovation, sees a future where AI chatbots—if developed responsibly—could support people with scientifically validated therapy.
"These applications have a lot of promise, particularly if they are done ethically," she said, although she expressed concern about risks to younger users.
Darlene King, who chairs the American Psychiatric Association's committee on mental health technology, echoed these sentiments. While acknowledging AI's potential, she emphasized the need for more data.
"There are still a lot of questions," she said, adding that thorough research is essential to determining AI's long-term safety and impact.
Beyond product development, the Dartmouth team is considering a nonprofit structure for Therabot to ensure affordability and accessibility for those who cannot afford traditional therapy.
Jacobson sees Therabot as more than just an app—it is a public health tool designed to fill systemic gaps in mental health care.
By delivering daily, on-demand support, Therabot could act as a bridge between individuals and the overstretched mental health care system.
The tool is particularly relevant when traditional therapy is inaccessible, such as late at night or in rural regions.
Dartmouth's cautious approach contrasts sharply with that of other AI mental health apps, which critics say prioritize engagement over therapeutic value.
"Many apps seem more designed to capture attention and generate revenue than to improve mental health," said Wright. Young users may be unable to distinguish helpful tools from manipulative ones, raising ethical red flags.
The danger of poorly regulated digital tools was highlighted by a Florida case involving Character.AI, where a mother claimed a chatbot contributed to her teenage son's suicide. Incidents like this underscore the need for stricter oversight.
Despite growing reliance on digital health tools, the U.S. Food and Drug Administration (FDA) does not certify AI-based therapy apps.
According to an FDA spokesperson, the agency may authorize marketing for such tools after reviewing the appropriate pre-market submission. However, enforcement remains limited, leaving room for unregulated apps to enter the market.
The FDA has acknowledged the potential of digital mental health therapies to expand access to care but continues to face scrutiny over its oversight mechanisms.
Startups like Earkick are also entering the AI mental health space. Its product, Panda, is being clinically tested for its ability to detect emotional crises and suicidal ideation and trigger help alerts.
CEO Herbert Bay highlighted the importance of clinical safety and differentiation from entertainment-based bots.
"AI isn't suited for handling major psychiatric crises yet," Bay said, "but for daily mental health support, it's a game-changer."