Mental health apps are scooping up your most sensitive data. Will you benefit?

An app for monitoring people with bipolar disorder and schizophrenia is so precise it can track when a patient steps outside for a cigarette break or starts a romantic relationship — and where that new partner lives. Another app, meant to screen for suicidality, analyzes not only text message metadata, but also the content of the conversations.

While these smartphone trackers are being developed through academic research projects, which require patients to give informed consent before sharing intensely personal information, mental health apps without such safeguards are already on the market for depression, anxiety, PTSD, and other conditions.

Many of these apps have a common problem, say experts on health technology: They put patients’ privacy at risk while providing marginal or unknown benefits. And in some cases, without customers’ knowledge, makers of mental health apps or services are using the data collected to create products that have nothing to do with health care.

Phone apps hold enormous promise for mental health research and treatment, and could even prevent acute episodes of psychosis or suicide attempts. They’re always with the patient, unobtrusively monitoring sleep patterns, movements, location, and social interactions, and giving clinicians insight into a person’s life that a monthly appointment can’t provide. But it remains an open question whether patients using these apps are aware of just how much information they’re handing over, and how it’s being used.

“You don’t typically share medical information like this with companies,” said Dr. Mason Marks, an assistant professor at Gonzaga University School of Law who has written extensively on digital privacy. “Our health information used to be off limits, but because this is kind of a Wild West of health apps, that information can be collected and it can be shared with anyone.”

An eye-opening study published in JAMA Network Open in April revealed that 81% of the 36 top-rated mental health apps sent data to Google and Facebook for analytics or advertising purposes. Only 59% of those apps disclosed this in their privacy policies; three explicitly stated that they would not share information with third parties but did so anyway, and nine others had no privacy policy at all.

Most apps share data with Google and Facebook in order to find out who is using their product by taking advantage of the giant companies’ computing power and databases. Based on an IP address or digital identifier from a phone, a data aggregator can pull together the different websites or apps a person uses and create a profile that includes their age, gender, geographic location, interests, and income. The apps say they employ this service to understand their users better, but they can also use the information to sell targeted advertisements on their site or even sell the user profiles outright to other companies that are looking to appeal to a similar clientele.
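
To make that mechanism concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the field names, the identifier, and the records do not come from any real SDK or data broker. It only illustrates the general pattern described above, in which records tagged with the same device identifier can be merged into a single profile.

```python
# Purely illustrative: hypothetical field names and data, not any real SDK's API.
from collections import defaultdict

# The kind of usage event an app's analytics call might transmit (made up).
event_from_app = {
    "device_id": "ad-id-1234",   # advertising identifier shared across apps
    "app": "mood-tracker",
    "event": "opened_depression_module",
}

# Records an aggregator might already hold for the same identifier,
# gathered from other apps and websites (also made up).
existing_records = [
    {"device_id": "ad-id-1234", "source": "shopping-app", "interest": "running shoes"},
    {"device_id": "ad-id-1234", "source": "news-site", "location": "Boston"},
]

# Joining on the shared identifier folds the new signal ("uses a
# depression-related app") into a richer profile of the same person.
profiles = defaultdict(dict)
for record in existing_records + [event_from_app]:
    profiles[record["device_id"]].update(
        {k: v for k, v in record.items() if k != "device_id"}
    )

print(profiles["ad-id-1234"])
```

The point is not the code itself but how little is needed: once a stable identifier travels with the data, no name or email address is required to connect a mental health app to the rest of a person's digital footprint.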

Consumers by and large seem to have accepted this tradeoff inherent in using apps, but for mental health tools, the potential downsides are greater, experts caution.

While none of the data shared in the JAMA paper was personally identifiable, such as a name or an email address, that may not matter, said Dr. John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Boston and one of the authors of the JAMA paper. “There is a digital breadcrumb that’s tying some identifier from your phone to a mental health app, and they’re sending it to places like Facebook Analytics,” he said. “You can imagine that on the Facebook side [they] might say, ‘Hey, we’ve seen this metadata tag from John’s phone before. Now this tag has a depression app. We can probably infer John is seeking help for depression.’”

A major concern with companies obtaining this type of information is that they could use it for something Marks calls “algorithmic discrimination” — when a group, like people with a certain health condition, is screened out of opportunities such as housing, employment, or insurance by an automated system. Last year, the Department of Housing and Urban Development sued Facebook, alleging it violated the Fair Housing Act by excluding people from receiving housing-related ads based on their race, religion, sex, and disability. Facebook agreed in March to overhaul its targeted advertising for housing, credit, and job opportunities.

Another worry is targeted advertising that could exacerbate people’s health problems: people with chronic pain or substance use disorders receiving ads for opioids, someone with an eating disorder seeing an ad for stimulants or laxatives, or a person with a gambling problem targeted with ads for discounted airfare to Las Vegas.

“It’s not like Facebook is going in there and has some nefarious programmer that’s trying to target these people intentionally,” said Marks. “It’s that the systems are automated and they learn on their own, ‘Oh these particular people in this group, they tend to click on these ads more often than others,’ or, ‘They tend to follow through and actually make a purchase more than others.’ That mechanism can just be baked into the advertising platform, whether it’s Facebook or YouTube or Google.”

Even if companies are transparent with their data policies, most people don’t bother reading them, especially when they’re in the middle of a mental health crisis. Jennifer Murray’s doctor recommended she download the MoodGym app while she was going through a tough time a few years ago. MoodGym describes itself as an “interactive self-help book” that helps people cope with anxiety and depression. Murray said it never occurred to her to read the app’s privacy policy, which states the company may disclose personal information to “entities who assist us in providing our services (including hosting and data storage providers and payment providers)” and “specialist consultants.”

“That was one of my more vulnerable times,” she said. “I’m guessing most people’s more vulnerable times are when they’re downloading and dumping data into a mental health coaching app.”

Not everyone who uses these apps gets upset about targeted marketing, though. Jennifer Billock, who was diagnosed with anxiety and panic attacks, used the app CheckingIn to monitor her mood. CheckingIn’s privacy policy is relatively detailed, and says it both sends and receives data and personal information from third parties, including Google, for analytics and advertising purposes. But Billock said it doesn’t bother her if other people or companies know about her struggle with mental illness. In fact, she said she found out about CheckingIn through a targeted online advertisement.

“I like seeing what new products are out there that could potentially help,” she said. “I’ve written about [my mental health] a few times, so I’m really not all that uncomfortable. I feel like people need to be open about their mental health if we want the stigma to go away.” But, she added, “I do think it should be up to me to disclose.”

One reason companies can get away with sharing user information is that their products are classified as wellness rather than medical apps, so they’re not regulated by the Food and Drug Administration and don’t need to be compliant with HIPAA privacy regulations.

“They’re kind of like the vitamin C in the pharmacy aisle,” said Torous. “Because they self-categorize as a health and wellness product in the terms and conditions, that says, ‘We’re not subject to any kind of medical regulations around privacy, around confidentiality.’”

Some mental health services are using client data to create businesses that have nothing to do with health care — a strategy that falls into an ethical gray area if the clients aren’t told what their information will be used for.

Users of Crisis Text Line can chat in real time with volunteer crisis counselors who offer an empathetic ear and de-escalation strategies. From millions of these text conversations, the not-for-profit organization has devised algorithms to detect trigger words indicative of suicidal thoughts, substance abuse, eating disorders, and other mental health problems, signals that can help guide the conversation.
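
Crisis Text Line has not published the details of its models, which are built from a far larger and richer body of real conversations, but the underlying idea of flagging risk-related language can be shown with a deliberately simplified sketch in Python. The categories and word lists below are invented for illustration only.

```python
# Deliberately simplified illustration of keyword-based risk flagging.
# The categories, terms, and matching logic are invented; a production system
# would rely on models trained on large volumes of real conversations.

RISK_TERMS = {
    "suicidal ideation": {"overdose", "pills", "end it"},
    "eating disorder": {"purge", "binge", "laxatives"},
    "substance use": {"relapse", "withdrawal"},
}

def flag_message(text: str) -> dict:
    """Return the risk categories whose terms appear in a message."""
    lowered = text.lower()
    hits = {
        category: sorted(term for term in terms if term in lowered)
        for category, terms in RISK_TERMS.items()
    }
    return {category: terms for category, terms in hits.items() if terms}

print(flag_message("I took some pills and I just want to end it"))
# -> {'suicidal ideation': ['end it', 'pills']}
```

In practice, a flag like this would be one signal among many that helps a counselor prioritize and steer a conversation, not an automated decision on its own.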

Now, a sister company is providing these sorts of algorithms to corporations to improve their employee communication. Crisis Text Line founder and CEO Nancy Lublin launched Loris.ai in 2018 to teach human resources departments, sales teams, and customer service reps how to negotiate hard conversations. According to Loris’ website, the software was developed by mining the intelligence gleaned from “the hard conversations” and “large sentiment-rich data corpus” of Crisis Text Line.

“Our Loris team analyzed the over 100 million messages exchanged by Crisis Text Line counselors and extracted words and patterns that have proven effective [at] moving texters from a hot moment to a cool calm,” Etie Hertz, Loris’ CEO, said via email. “Loris uses these proprietary learnings and insights … to analyze and score customer messages. We then propose techniques and language to guide agents’ response in real time.”

“No personal identifiable information from Crisis Text Line has been shared with Loris or any other company. Only aggregated and anonymized trends,” added Lublin.

Crisis Text Line is open about using texter data to improve its service and sharing it with research institutions and other nonprofits, and it recently added Loris to its list of partners. In its terms of service, Crisis Text Line states it “may collect, use, transfer, and disclose anonymized non-Personally Identifiable Information to third parties for any purpose, including but not limited to improving our Services, generating support for Crisis Text Line, or as required by law.”

To Lublin, Loris is a source of income for Crisis Text Line and a way to help “the sustainability of the organization.” But Marks sees the situation differently.

“Hundreds of thousands of people text in from this vulnerable group of people who are having suicidal ideation … [and] the intelligence that may be derived from that information is being sold to Fortune 500 companies,” he said.

Some apps grow out of academic studies, which must be HIPAA-compliant and have participants go through an informed consent process. The information these studies collect is often more invasive than what consumer apps gather, including GPS data, call and text logs, voice analysis, and even the content of messages.

“But at least as a patient or subject in them,” Torous said, “you have some reassurance that there’s a whole institutional ethics review board that’s watching.”

A key question for such research is how much of a benefit these apps can potentially provide and whether that offsets the privacy risks. Nicholas Allen, director of the Center for Digital Mental Health at the University of Oregon, is running one of the most extensive mobile phone studies in teenagers who are at risk for suicide. The trial collects GPS data, activity levels, call and text metadata, audio diaries, text conversations, and social media posts, and even analyzes facial expressions in selfies. The goal is to see whether these digital signals can predict an imminent suicide threat so clinicians or emergency responders can intervene in time.

“It is a fairly intrusive method,” he admitted. “This is why the suicide example is a good place to start working with it, because the benefit of early detection and prevention is really profound. I mean, you could be saving a life potentially.”

Allen isn’t analyzing the data in real time — he’s still working to separate a predictive signal from the noise — so any benefit to the user is still several years off. For now, the teenagers in the trial have to trust that their information is being stored securely, just as in any research study.

Torous is optimistic about the promise of mobile phone apps to improve care for people with mental health problems, but he acknowledges they’re not a panacea. “This isn’t all smoke and mirrors,” he said. “But they’re probably not as great as they’re sometimes put out to be.”