Suicide hotline shares data with for-profit spinoff, raising ethical questions

Crisis Text Line is one of the world’s most prominent mental health support lines, a tech-driven nonprofit that uses big data and artificial intelligence to help people cope with traumas such as self-harm, emotional abuse and thoughts of suicide.

But the data the charity collects from its online text conversations with people in their darkest moments does not end there: The organization’s for-profit spinoff uses a sliced and repackaged version of that information to create and market customer service software.

Crisis Text Line says any data it shares with that company, Loris.ai, has been wholly “anonymized,” stripped of any details that could be used to identify people who contacted the helpline in distress. Both entities say their goal is to improve the world — in Loris’ case, by making “customer support more human, empathetic, and scalable.”

In turn, Loris has pledged to share some of its revenue with Crisis Text Line. The nonprofit also holds an ownership stake in the company, and the two entities shared the same CEO for at least a year and a half. The two call their relationship a model for how commercial enterprises can help charitable endeavors thrive.

For Crisis Text Line, an organization with financial backing from some of Silicon Valley’s biggest players, its control of what it has called “the largest mental health data set in the world” highlights new dimensions of the tech privacy debates roiling Washington: Giant companies like Facebook and Google have built great fortunes based on masses of deeply personal data. But information of equal or greater sensitivity is also in the hands of nonprofit groups that fall outside federal regulations on commercial businesses — with little outside control over where that data ends up.

Ethics and privacy experts contacted by POLITICO saw several potential problems with the arrangement.

Some noted that studies of other types of anonymized datasets have shown that it can sometimes be easy to trace the records back to specific individuals, citing past examples involving health records, genetics data and even passengers in New York City taxis.

Others questioned whether the people who text their pleas for help are actually consenting to having their data shared, despite the approximately 50-paragraph disclosure the helpline offers a link to when individuals first reach out.

The nonprofit “may have legal consent, but do they have actual meaningful, emotional, fully understood consent?” asked Jennifer King, the privacy and data policy fellow at the Stanford University Institute for Human-Centered Artificial Intelligence.

Those disclosure terms also note that Meta’s Facebook Messenger and WhatsApp services can access the content of conversations taking place through those platforms. (Before this article was published, Meta confirmed that it has access to that data but said it does not use any of it, except for cases involving risk of imminent harm. After publication, WhatsApp clarified that it and Meta have no access to the contents of messages sent via WhatsApp.)

Former federal regulator Jessica Rich said she thought it would be “problematic” for third-party companies to have access even to anonymized data, though she cautioned that she was unfamiliar with the companies involved.

“It would be contrary to what the expectations are when distressed consumers are reaching out to this nonprofit,” said Rich, a former director of the Federal Trade Commission’s Bureau of Consumer Protection. She later added: “The fact that the data is transferred to a for-profit company makes this much more troubling and could give the FTC an angle for asserting jurisdiction.”

The nonprofit’s vice president and general counsel, Shawn Rodriguez, said in an email to POLITICO that “Crisis Text Line obtains informed consent from each of its texters” and that “the organization’s data sharing practices are clearly stated in the Terms of Service & Privacy Policy to which all texters consent in order to be paired with a volunteer crisis counselor.”

In an earlier exchange, he emphasized that Crisis Text Line’s relationship with its for-profit spinoff is “ethically sound.”

“We view the relationship with Loris.ai as a valuable way to put more empathy into the world, while rigorously upholding our commitment to protecting the safety and anonymity of our texters,” Rodriguez wrote. He added that “sensitive data from conversations is not commercialized, full stop.”

Loris’ CEO since 2019, Etie Hertz, wrote in an email to POLITICO that Loris has maintained “a necessary and important church and state boundary” between its business interests and Crisis Text Line.

After POLITICO began asking questions about its relationship with Loris, the nonprofit changed wording on its website to emphasize that “Loris does not have open-ended access to our data; it has limited contractual rights to periodically ask us for certain anonymized data.” Rodriguez said such sharing may happen every few months.

A ‘tech startup’ for mental health crises

Since its launch in 2013, Crisis Text Line says it has exchanged 219 million messages in more than 6.7 million conversations over text, Facebook Messenger and WhatsApp — channels that it says allow it to meet its often youthful client base “where they are.” It has spread beyond the U.S. to open operations in Canada, the U.K. and Ireland.

The New York-based nonprofit says it knows how “deeply personal and urgent” these silent conversations are for those reaching out, many of them young, people of color, LGBTQ or living in rural areas: “68% of our texters share something with us that they have never shared with anyone else,” the helpline wrote in one government filing.

In a little less than 1 percent of cases, the group says, the conversation becomes so dire that it contacts emergency services to “initiate an active rescue.” Two to three times a week, it wrote in one 2020 report, the discussion turns to thoughts of homicide, “most often a school shooting or partner murder.”

Data science and AI are at the heart of the organization — ensuring, it says, that those in the highest-stakes situations wait no more than 30 seconds before they start messaging with one of its thousands of volunteer counselors. It says it combs the data it collects for insights that can help identify the neediest cases or zero in on people’s troubles, in much the same way that Amazon, Facebook and Google mine trends from likes and searches.

“We know that if you text the words ‘numbs’ and ‘sleeve,’ there's a 99 percent match for cutting,” the nonprofit’s co-founder and former CEO, Nancy Lublin, said in a 2015 TED talk. “We know that if you text in the words ‘mg’ and ‘rubber band,’ there's a 99 percent match for substance abuse. And we know that if you text in ‘sex,’ ‘oral’ and ‘Mormon,’ you're questioning if you're gay.”

“I love data,” added Lublin, who has also described the helpline as “a tech startup.” She had previously founded the group Dress for Success, which provides business clothing and job training to women in need. (This month, Lublin referred questions about the relationship between the helpline and Loris to Hertz, the current Loris CEO.)

Crisis Text Line has partnered with local governments and more than a dozen school systems across the country and has expanded its reach by teaming up with tech titans like Google, Meta and TikTok. The organization also allows access to its data for research purposes.

But it also came to view texters’ data as valuable for another purpose: helping corporations deal with their customer service problems.

So in 2018, Lublin created Loris, with backing from investors including former LinkedIn CEO Jeff Weiner and the Omidyar Network of billionaire eBay founder Pierre Omidyar. Its purpose, the company outlined, was to use Crisis Text Line’s “de-escalation techniques, emotional intelligence strategies, and training experience” to develop AI software that helps guide customer service agents through live chats with customers.

“We’ve baked all of our learning into enterprise software that helps companies boost empathy AND bottom line,” says Loris’ website, which features testimonials from clients such as the ride-hailing company Lyft and the meal-subscription service Freshly.

The “core” of its artificial intelligence, Loris said in a news release last year, comes from the insights “drawn from analyzing nearly 200 million messages” at Crisis Text Line. (Hertz, the Loris CEO, said in an email that its AI has evolved and now includes data from e-commerce and other industries.)

Loris’ website says a portion of the company’s revenue would go toward supporting the nonprofit, calling the arrangement “a blueprint for ways for-profit companies can infuse social good into their culture and operations, and for nonprofits to prosper.” In practice, Crisis Text Line’s Rodriguez said, the company “has not yet reached the contractual threshold” where such revenue-sharing would occur, although he said Loris paid Crisis Text Line $21,000 in 2020 for office space.

“Simply put, why sell t-shirts when you can sell the thing your organization does best?” reads Crisis Text Line’s description of the data-sharing partnership.

Volunteers speak out

Former Crisis Text Line volunteer Tim Reierson has a different term for Loris’ use of the crisis line’s data: “disrespectful.”

“When you're in conversation with someone, and you don't know how it's going to end … it's a very delicate and tender and fragile space,” said Reierson, who has started a public campaign to try to change the nonprofit’s data practices. He said the people who contact the text line — many of them teens or younger — include “somebody staring at blades on their table in front of them, or somebody hiding from a parent who's on a rampage, or someone who's struggling with an eating disorder, somebody who's ready to end their life.”

In his experience as a volunteer, he said he believed those individuals “definitely have an expectation that the conversation is between just the two people that are talking.”

Reierson said the organization terminated him in August after he began raising concerns internally about its handling of data. Rodriguez, the Crisis Text Line general counsel, disputed this, saying Reierson “was dismissed from the volunteer community because he violated Code of Conduct.” Despite repeated requests, Rodriguez declined to specify what the alleged violations were.

“I absolutely did not ever violate the code of conduct, not even close, and they know that,” Reierson said. “There is a process that is followed for code of conduct violations and it was never invoked. … The organization encourages volunteers and staff to use their internal system for expressing any concerns, and that’s what I did.”

Former volunteer Alison Diver — who described talking one texter out of jumping off a freeway bridge and another off a hotel balcony — left Crisis Text Line in July. She said she has since signed onto a Change.org petition, started by Reierson, that urges the nonprofit “to phase out its practice of monetizing crisis conversations as data, as soon as possible.” Diver expressed alarm after hearing Reierson describe the nonprofit’s data practices.

“That makes me feel betrayed,” she said, contending that volunteers should have a say in whether the data is used for other purposes. “They wouldn't even have a Crisis Text Line if it wasn't for us.”

Beck Bamberger, a current volunteer in California who has logged nearly 300 hours during her three years with the hotline, said she was not aware that data from those conversations was being used for customer service applications until POLITICO reached out for this story.

"Mental health and people cutting themselves adapted to customer service?” she said. “That sounds ridiculous. Wow.”

She added: “If your volunteers, staff and the users themselves are not aware of that use, then that’s a problem.”

Reierson launched a website in January calling for “reform of data ethics” at Crisis Text Line, and his petition, started last fall, also asks the group to “create a safe space” for workers to discuss ethical issues around data and consent.

Crisis Text Line’s Rodriguez disputed the accuracy of Reierson’s complaints, while asserting that “Loris’s access and use of anonymized data is not a privacy issue.”

Crisis Text Line also has a data, ethics and research advisory board that includes Reddit’s vice president of data, Jack Hanlon, and medical experts affiliated with Harvard, Yale, Brown and other health-focused institutions. (None are volunteer crisis counselors.) Until recently, the chief data scientist in charge of that committee was Crisis Text Line co-founder Bob Filbin, who left for Meta last fall.

‘People at their worst moments’

Data ethics experts contacted about the nonprofit helpline, its for-profit spinoff and their data-sharing practices said the setup raised various possible issues.

One is whether the data being shared is truly anonymous, despite Rodriguez’s assurance that “[s]haring personally identifiable information is strictly forbidden under the contracts between Crisis Text Line and Loris.”

“The re-personalization [of data] is perhaps esoteric but not completely beyond the means of some nefarious operator,” said health tech expert John Nosta, founder of the think tank NostaLab.

Rich, the former FTC consumer protection chief, agreed. “Anonymizing the data could decrease the likelihood of harm, but we don’t know whether that could be reverse-engineered,” she said.

And should someone manage to trace the data back to specific individuals, the people who sought help could find their autonomy and choices compromised, said Eric Perakslis, the chief science and digital officer at the Duke Clinical Research Institute.

If that were to happen with Crisis Text Line’s data, “your name could be associated with a suicide hotline,” he said, noting how disclosures about a person’s HIV status in the 1980s, or involvement with Planned Parenthood today, could put someone at risk. “It's a lot different than someone just understanding your cholesterol,” Perakslis said.

Asked how texters’ data is scrubbed to ensure anonymity, Rodriguez from Crisis Text Line said it’s done “via an automated process” that removes information including names, phone numbers and social media handles. “This is done in order to strongly and responsibly reinforce that anonymized data cannot reveal the identity of a texter,” he said.

But King, the Stanford fellow, called it “ethically questionable” to make commercial use of this kind of data — even if it’s anonymized — given the emotional stress that people are under when presented with a link to terms of service they may never open. (“By texting further with us, you agree to our Terms,” says the automated first message.)

“We're seeing more and more how often data online is not just my shopping history; it's a real glimpse into my psyche,” King said. “These are people at their worst moments. Using that data to help other people is one thing, but commercializing it just seems like a real ethical line for a nonprofit to cross.”

She added: “It probably passes legal muster, but does it pass the ‘feel-good’ muster? A lot less certain.”

A miss for Washington

As a nonprofit, Crisis Text Line falls into a gap in the federal government’s approach to data privacy.

Washington has spent more than a decade debating how to regulate the way companies collect and sell individuals’ sensitive personal information, an ever more valuable source of revenue for tech giants and smaller startups alike. But that debate has largely overlooked nonprofits, which are chiefly regulated by states and fall outside the jurisdiction of the FTC’s consumer protection rules.

Nonprofits are “a really significant missing piece” of federal regulators’ authority, said Rich, the former FTC consumer protection director. She noted that even nonprofits — from educational institutions to hospitals — have mishandled highly sensitive data.

“It's tricky to cover nonprofits because in general, they aren't in the business of monetizing their data the way profit-making companies are — but this is a gap that should be filled, and Congress hasn't filled it,” she said in an interview.

Numerous congressional proposals on privacy — including a leading bill from Senate Republicans, S. 2499 — would expand the FTC’s jurisdiction to include nonprofits, but passage of that legislation is unlikely anytime soon.

Crisis Text Line is not the only nonprofit support line that collects and shares data as part of its operations. For example, the Trevor Project, a 23-year-old group that provides suicide prevention services to young LGBTQ people, discloses in its online privacy policy that it passes along information about visitors to its website to third parties including Meta and Google for targeting of online ads or other purposes, and the content of their conversations to partners for research. The Trevor Project did not initially respond to questions from POLITICO, but said after the story was published Friday that it does not sell or share personal information from its “crisis contacts” with any for-profit companies or use it for any other commercial purpose, including “to make money, to help for-profits, nor to power advertising.”

Nosta, the tech think tank founder, noted that it’s also not uncommon in the digital health space for businesses to share data in exchange for services, describing it as “the nature of the beast” — with one “classic example” being the genetics testing company 23andMe.

“It's definitely not unusual in the life sciences industry,” Nosta said, “and I think in many instances, it's looked at as almost a cornerstone of revenue generation: If we're generating data, we could use the data to enhance our product or our offering, but we can also sell the data to supplement our income.”

But Duke’s Perakslis argued that Crisis Text Line’s arrangement with a for-profit company is still unusual.

“For self-improvement of the services, I think that's an expected use of their data,” he said. “But the fact that that improvement then goes to a for-profit company that sells it for other uses — that's where you have to kind of look at and see: Is this simply exploiting people with mental health crises?”

CLARIFICATION: This story has been updated to include additional details about the data use practices of WhatsApp and the Trevor Project.