Mental health counselor Nicole Doyle was stunned when the head of the US National Eating Disorders Association (NEDA) appeared at a staff meeting to announce that a chatbot would replace the group's helpline.
Days after the helpline was shut down, the bot - named Tessa - was also pulled for giving harmful advice to people with mental illness, the Thomson Reuters Foundation (TRF) reports.
"People discovered that he was giving weight loss advice to people who told him they were struggling with an eating disorder," said Doyle, 33, one of five employees who were fired in March, about a year after the chatbot launched.
"While Tessa might be able to simulate empathy, it's not the same as real human empathy," Doyle said.
NEDA said that while research into the bot's performance had yielded positive results, it is still determining how the harmful advice came to be given and is "carefully considering" next steps. NEDA did not directly respond to questions about the staff layoffs, but said there had been no plan for the chatbot to replace the helpline.
From the US to South Africa, mental health chatbots using artificial intelligence (AI) are gaining popularity amid a shortage of healthcare resources, despite concerns from tech experts about data privacy and counseling ethics, TRF writes.
While digital mental health tools have been around for more than a decade, there are now more than 40 mental health chatbots worldwide, according to the International Journal of Medical Informatics.
Jonah, an anthropology student from New York, tried a dozen different psychiatric medications and helplines over the years to help him deal with his OCD.
He has now added ChatGPT to his list of support services to complement weekly consultations with a therapist.
Even before ChatGPT, Jonah had thought about talking to a machine because "there is already a growing ecosystem of venting to the void on Twitter and Discord," he told TRF.
While the 22-year-old, who asked to use a pseudonym, said ChatGPT gives "standard advice", he finds it useful "if you're upset and just need to hear something basic... instead of worrying yourself".
Cost is a major obstacle
Mental health tech startups have raised $1.6 billion in venture capital since December 2020, when mental health took center stage due to Covid-19, according to data from PitchBook.
"The need for remote medical assistance has been highlighted even more by the Covid pandemic," said Johan Stein, artificial intelligence researcher and founder of AIforBusiness.net, an AI education and management consultancy.
An estimated 82 million people around the world were living with anxiety and depression before Covid - 27 percent of them in low- and middle-income countries, according to data from the World Health Organization (WHO). The pandemic increased that figure by about 25 percent, WHO estimates.
Access to treatment for mental disorders also splits along income lines, with cost a major barrier to care.
Researchers caution that while the affordability of AI therapy may be tempting, tech companies must be careful not to deepen existing inequities in healthcare.
People without internet access risk being left out, and patients with health insurance may be steered toward in-person therapy while those without are left with the cheaper chatbot option, according to the Brookings Institution.
Privacy protection
Despite the growing popularity of mental health support chatbots around the world, privacy concerns remain a major risk for users, the Mozilla Foundation found in research published in May.
Of the 32 mental health and prayer apps such as Talkspace, Woebot and Calm analyzed by the tech nonprofit, 28 were flagged for "strong concerns about user data management" and 25 failed to meet security standards such as requiring strong passwords.
For example, mental health chatbot Woebot was singled out in the survey for “sharing personal data with third parties”.
Woebot says that while it promotes the app using targeted ads on Facebook, "no personal data is shared or sold to those marketing/advertising partners," and that it gives users the option to delete all of their data upon request.
Mozilla researcher Misha Rykov described the apps as "data vacuums with the appearance of a mental health app", which leaves users' data open to being harvested by insurers, data brokers and social media companies.
AI experts warn virtual therapy companies could lose sensitive data to cyber breaches.
"AI chatbots face the same privacy risk as more traditional chatbots or any online service that accepts users' personal information," Eliot Bendinelli, senior technologist at rights group Privacy International, told TRF.
In South Africa, mental health app Panda is set to launch an artificial intelligence-powered "digital companion" that would chat with users, make treatment suggestions and, with the user's consent, provide results and insights about users to traditional therapists who are also available in the app.
"Saputnik does not replace traditional forms of therapy, but expands them and helps people in their daily lives," said Panda founder Alon Lits.
Panda encrypts all backups and access to AI conversations is completely private, Lits said.
Technology experts like Stein hope that strong regulation will eventually be able to "protect against unethical AI practices, strengthen data security and maintain consistency in health standards."
From the United States to the EU, lawmakers are rushing to regulate AI tools and are pressuring the industry to adopt a voluntary code of conduct as new laws are prepared.
Empathy
Nevertheless, anonymity and the absence of judgment are why people like 45-year-old Tim, a warehouse manager in Britain, have turned to ChatGPT instead of a human therapist.
"I know it's just a big language model and it doesn't 'know' anything, but it actually makes it easier to talk about things that I don't talk about with anyone else," said Tim - not his real name - who addressed the bot in an attempt to ward off chronic loneliness.
Some research suggests that chatbot responses can come across as more empathetic than those of humans.
A 2023 study in the journal JAMA Internal Medicine evaluated the responses of chatbots and physicians to 195 patient questions drawn at random from a social media forum.
The researchers found that the bots' responses were rated "significantly higher in both quality and empathy" than those of the physicians.
The researchers concluded that "artificial intelligence assistants can help compile answers to patients' questions", rather than replace doctors altogether.
But while bots can simulate empathy, it will never be the same as the human empathy that people long for when they call the helpline, said Doyle, a former NEDA adviser.
"We should use technology to work together with us people, not to replace us," she assessed.