US researchers seek to legitimize AI mental health care

by Anna M.
7 months ago

Researchers at Dartmouth College, seen here, believe they have developed a reliable AI-driven app to deliver psychotherapy, addressing a critical need for mental health care. ©AFP

New York (AFP) – Researchers at Dartmouth College believe artificial intelligence can deliver reliable psychotherapy, distinguishing their work from the unproven and sometimes dubious mental health apps flooding today’s market. Their application, Therabot, addresses the critical shortage of mental health professionals. According to Nick Jacobson, an assistant professor of data science and psychiatry at Dartmouth, even multiplying the current number of therapists tenfold would leave too few to meet demand.

“We need something different to meet this large need,” Jacobson told AFP. The Dartmouth team recently published a clinical study demonstrating Therabot’s effectiveness in helping people with anxiety, depression, and eating disorders. A new trial is planned to compare Therabot’s results with conventional therapies. The medical establishment appears receptive to such innovation. Vaile Wright, senior director of health care innovation at the American Psychological Association (APA), described “a future where you will have an AI-generated chatbot rooted in science that is co-created by experts and developed for the purpose of addressing mental health.” Wright noted these applications “have a lot of promise, particularly if they are done responsibly and ethically,” though she expressed concerns about potential harm to younger users.

Jacobson’s team has so far dedicated close to six years to developing Therabot, with safety and effectiveness as primary goals. Michael Heinz, a psychiatrist and the project’s co-leader, believes rushing to market for profit would compromise safety. The Dartmouth team is prioritizing understanding how their digital therapist works and establishing trust. They are also contemplating creating a nonprofit entity linked to Therabot to make digital therapy accessible to those who cannot afford conventional in-person care.

Given its developers’ cautious approach, Therabot could stand out in a marketplace of untested apps that claim to address loneliness, sadness, and other issues. According to Wright, many apps appear designed more to capture attention and generate revenue than to improve mental health. Such models keep people engaged by telling them what they want to hear, but young users often lack the savvy to realize they are being manipulated. Darlene King, chair of the American Psychiatric Association’s committee on mental health technology, acknowledged AI’s potential for addressing mental health challenges but emphasized the need for more information before its true benefits and risks can be determined. “There are still a lot of questions,” King noted.

To minimize unexpected outcomes, the Therabot team went beyond mining therapy transcripts and training videos, manually creating simulated patient-caregiver conversations to train its AI app. While the US Food and Drug Administration is theoretically responsible for regulating online mental health treatment, it does not certify medical devices or AI apps. Instead, “the FDA may authorize their marketing after reviewing the appropriate pre-market submission,” according to an agency spokesperson. The FDA acknowledged that “digital mental health therapies have the potential to improve patient access to behavioral therapies.”

Herbert Bay, CEO of Earkick, defends his startup’s AI therapist, Panda, as “super safe.” Bay says Earkick is conducting a clinical study of its digital therapist, which detects signs of emotional crisis or suicidal ideation and sends help alerts. “What happened with Character.AI couldn’t happen with us,” said Bay, referring to a Florida case in which a mother claims a chatbot relationship contributed to her 14-year-old son’s death by suicide. For now, AI is better suited to day-to-day mental health support than to life-shaking breakdowns, according to Bay. “Calling your therapist at two in the morning is just not possible,” Bay noted, whereas a therapy chatbot is always available. One user named Darren, who declined to provide his last name, found ChatGPT helpful in managing his post-traumatic stress disorder, even though the OpenAI assistant was not designed specifically for mental health. “I feel like it’s working for me,” he said. “I would recommend it to people who suffer from anxiety and are in distress.”

© 2024 AFP

Tags: Artificial Intelligence, Mental Health, Therapy