Can AI therapy help ease America’s mental health crisis?
Over the past few years, there has been an explosion in digital tools to help people manage their mental health. By one estimate, there are as many as 20,000 apps available for that purpose. Most of them offer pre-programmed tips for maintaining an emotionally healthy routine, like breathing exercises, daily affirmations and wellness checklists.
A small handful go even further, offering actual therapy delivered not by a human but by artificial intelligence.
Apps like Woebot and Wysa include AI-powered chatbots that can maintain a complex text-based conversation with users and respond to their inputs using many of the same treatment strategies that real therapists rely on — including an often passable re-creation of human empathy. There have also been reports of people turning to popular AI chatbots like ChatGPT, which were not designed to serve as digital therapists, for mental health support.
It’s estimated that 1 in 5 American adults lives with depression, anxiety or some other mental illness. Therapy has proven to be very effective in helping people manage these conditions. But a long list of barriers — including a nationwide shortage of therapists, spotty insurance coverage and lack of access to care — means that more than half of those with a mental illness don’t receive treatment. AI therapists, at least in theory, could help fill this massive gap in mental health care.
Why there’s debate
Even the most enthusiastic backers of AI therapy say the technology isn’t ready to replace human therapists, at least not yet. They do believe, though, that it has become sophisticated enough to be an important supplement to regular mental health care, especially for people whose conditions would go untreated if the only option were a real-life practitioner. Some research suggests that people can develop strong connections with AI therapists and that the programs can have a positive impact — particularly when delivering more structured forms of psychological treatment like cognitive behavioral therapy.
Skeptics say there are too many risks to trust AI to provide therapy to emotionally vulnerable people. There are already examples of these systems giving dangerously incorrect advice or being used unethically. Others have major concerns about privacy, oversight and accountability when something goes wrong. On a deeper level, though, many experts believe that human-to-human connection is the foundation of effective therapy and that, no matter how well AI might mimic that bond, it will never truly replicate it.
What’s next
None of the available AI therapy options have been approved by the Food and Drug Administration, meaning they’re unregulated and aren’t legally considered an alternative to traditional therapy. That could soon change. Wysa’s chatbot is currently going through an expedited research process that could lead to FDA approval as early as next year.