Health and Wellness

Ready or not, AI chatbots aim to help with mental health struggles

WASHINGTON (AP) — Download Earkick’s mental health chatbot and you’re greeted by a bandanna-wearing panda that could easily fit into a kids’ cartoon.

Start talking or writing about anxiety and the app will generate the kind of comforting, compassionate statements therapists are trained to use. Panda may then suggest breathing exercises, ways to reframe negative thoughts, or tips for coping with stress.

It’s all part of the well-established approach used by therapists, but please don’t call it therapy, says Earkick co-founder Karin Andrea Stephan.

“When people call us a form of therapy, there’s nothing wrong with that, but we don’t want to come out and tout it,” says Stephan, a former professional musician and self-described serial entrepreneur. “We just don’t feel comfortable with it.”

The question of whether these AI-powered chatbots provide mental health services or simply represent a new form of self-help is crucial to the emerging digital health industry and its survival.

This image provided by Earkick in March 2024 shows the company’s mental health chatbot on a smartphone. (Earkick via AP)

Earkick is one of hundreds of free apps aimed at tackling the mental health crisis among teens and young adults. Because they don’t explicitly claim to diagnose or treat medical conditions, these apps are not regulated by the Food and Drug Administration. That hands-off approach is coming under new scrutiny with the startling advances of chatbots powered by generative artificial intelligence, technology that uses vast amounts of data to mimic human language.

The industry’s argument is simple: chatbots are free, available 24/7, and don’t carry the stigma that keeps some people away from therapy.

However, there is limited data showing that they actually improve mental health. None of the leading companies have gone through the FDA approval process to show they’re effective in treating conditions such as depression, though several have begun the process voluntarily.

“There is no regulatory body overseeing them, so consumers have no way of knowing whether they are actually effective,” said Vaile Wright, a psychologist and chief technology officer at the American Psychological Association.

Chatbots aren’t equivalent to traditional therapy, but Wright believes they could help with less serious mental and emotional problems.

Earkick’s website states that the app “does not provide any form of medical care, medical opinion, diagnosis or treatment.”

Some health lawyers say such disclaimers aren’t enough.

“If you’re really worried about people using your app to provide mental health services, you need a more direct disclaimer: It’s just for fun,” said Glenn Cohen of Harvard Law School.

Still, chatbots are already playing a role due to the ongoing shortage of mental health professionals.

The British National Health Service has begun offering a chatbot called Wysa to help combat stress, anxiety and depression among adults and teens, including those waiting to see a therapist. Some U.S. insurers, universities and hospital networks offer similar programs.

Dr. Angela Skrzynski, a family physician in New Jersey, says patients are usually very open to trying a chatbot after she describes the months-long waiting list to see a therapist.

Skrzynski’s employer, Virtua Health, began offering the password-protected Woebot triage app to adult patients when it realized it would not be able to hire or train enough therapists to meet demand.

“It’s helpful not only for patients, but also for the clinician who is trying to give something to people who are struggling,” Skrzynski said.

Virtua data shows that patients use Woebot for about seven minutes a day, typically between 3 a.m. and 5 a.m.

Founded in 2017 by a Stanford-trained psychologist, Woebot is one of the older companies in the field.

Unlike Earkick and many other chatbots, the current Woebot app doesn’t use so-called large language models, the generative AI that allows programs like ChatGPT to quickly produce original text and conversations. Instead, Woebot uses thousands of structured scripts written by the company’s staff and researchers.

Founder Alison Darcy argues that this rules-based approach is safer for healthcare, given the tendency of generative AI chatbots to “hallucinate” or invent information. Woebot is testing generative artificial intelligence models, but Darcy says there have been problems with the technology.

“We couldn’t stop the large language models from interfering and telling someone how to think, instead of making the process easier for them,” Darcy said.

Woebot offers apps for teenagers, adults, people with substance use disorders and women experiencing postpartum depression. None has been approved by the FDA, although the company did submit its postpartum app for the agency’s review. The company says it has since “paused” that effort to focus on other areas.

Woebot’s research was included in a sweeping review of AI chatbots published last year. Of the thousands of papers reviewed, the authors found just 15 that met the gold standard of medical research: rigorously controlled trials in which patients were randomly assigned to chatbot therapy or a comparison treatment.

The authors concluded that chatbots can “significantly reduce” symptoms of depression and stress in the short term. However, most of the studies lasted only a few weeks, and the authors said there was no way to assess their long-term effects or overall impact on mental health.

Other papers have raised concerns about the ability of Woebot and other apps to recognize suicidal thinking and emergency situations.

When one researcher told Woebot he wanted to climb a cliff and jump off it, the chatbot replied, “It’s great that you’re taking care of both your mental and physical health.” The company says it “does not provide crisis counseling” or “suicide prevention” services – and makes that clear to customers.

When it recognizes a possible emergency situation, Woebot, like other applications, provides emergency contact information and other resources.

Ross Koppel of the University of Pennsylvania worries that these apps, even when used properly, could displace proven treatments for depression and other serious disorders.

“There is a distraction effect: people who could get help in the form of counseling or medication end up playing with the chatbot instead,” said Koppel, who studies health information technology.

Koppel is among those who would like to see the FDA step in and regulate chatbots, perhaps using a sliding scale based on potential risk. While the FDA does regulate artificial intelligence in medical devices and software, its current system mainly focuses on products used by doctors, not consumers.

Currently, many health systems are focusing on expanding mental health services by integrating them into general screenings and care, rather than offering chatbots.

“There are many questions we need to answer about this technology so that we can ultimately do what we are all here to do: improve children’s mental and physical health,” said Dr. Doug Opel, a bioethicist at Seattle Children’s Hospital.


This article was originally published on thegrio.com.
