Paralysis can rob people of their ability to speak. Now researchers hope to give it back

When Jaimie Henderson was 5 years old, his father was in a devastating car crash. The accident left his father barely able to move or speak. Henderson remembers laughing at his dad's jokes, though he never could understand the punchlines. "I grew up wishing I could know him and communicate with him."

That early experience drove his professional interest in helping people communicate.

Now, Henderson's an author on one of two papers published Wednesday showing substantial advances toward enabling speech in people injured by stroke, accident or disease.

Although still very early in development, these so-called brain-computer interfaces are five times better than previous generations of the technology at "reading" brainwaves and translating them into synthesized speech. The successes suggest it will someday be possible to restore nearly normal communication ability to people like Henderson's late father.

"Without movement, communication is impossible," Henderson said, referencing the trial's participant who has amyotrophic lateral sclerosis, or ALS, which robs people of their ability to move. "We hope to one day tell people who are diagnosed with this terrible disease that they will never lose the ability to communicate."

A research subject named Pat tries out a brain-computer interface designed to help her speak.

Both technologies, developed at Stanford and at the nearby University of California, San Francisco, enabled a volunteer to generate 60 to 80 words per minute. That's less than half the pace of normal speech, which typically ranges from 150 to 200 words per minute, but substantially faster than previous brain-computer interfaces. The new technologies can also interpret and produce a much broader vocabulary, rather than simply choosing from a short list of words.

At Stanford, researchers chose to decode signals from individual brain cells. The resolution will improve as the technology gets better at allowing recording from more cells, Henderson said.

"We're sort of at the era of broadcast TV, the old days right now," he said in a Tuesday news conference with reporters. "We need to increase the resolution to HD and then on to 4K so that we can continue to sharpen the picture and improve the accuracy."

The two studies "represent a turning point" in the development of brain-computer interfaces aimed at helping paralyzed people communicate, according to an analysis published in the journal Nature along with the papers.

"The two BCIs represent a great advance in neuroscientific and neuroengineering research, and show great promise in boosting the quality of life of individuals who have lost their voice as a result of paralysing neurological injuries and diseases," wrote Dutch neurologist Nick Ramsey and Johns Hopkins University School of Medicine neurologist Nathan Crone.


Two different approaches to communication, and both work

At UCSF, researchers chose to implant 253 high-density electrodes across the surface of a brain area involved in speech.

The fact that the different approaches both seem to work is encouraging, the two teams said Tuesday.

It's too early to say whether either will ultimately prove superior or if different approaches will be better for different types of speech problems. Both teams implanted their devices into the brains of just one volunteer each, so it's not yet clear how challenging it will be to get the technology to work in others.

The UCSF team also personalized the synthesized voice and created an avatar that can recreate the facial expressions of the participant, to more closely replicate natural conversation. Many neurological conditions, like ALS and stroke, also paralyze the muscles of the face, leaving the person unable to smile, look surprised or show concern.

Ann, the participant in the UCSF trial, had a brain stem stroke 17 years ago and has been participating in the research since last year. Researchers identified her only by her first name to protect her privacy.

The electrodes intercepted brain signals that, if not for Ann's stroke, would have gone to muscles in her tongue, jaw and larynx, as well as her face, according to UCSF. A cable, plugged into a port fixed to her head, connected the electrodes to a bank of computers.

For weeks, she and the team trained the system’s artificial intelligence algorithms to recognize her distinctive brain signals by repeating phrases over and over again.

Instead of recognizing whole words, the AI decodes words from phonemes, according to UCSF. “Hello,” for example, contains four phonemes: “HH,” “AH,” “L” and “OW.”

Researchers used video from Ann's wedding to create a computer-generated voice that sounds much like her own did and to create an avatar that can make facial expressions similar to the ones she made before her stroke.

Advances in machine learning have made such technologies possible, said Sean Metzger, a bioengineering graduate student who helped lead the research. "Overall, I think this work represents accurate and naturalistic decoding of three different speech modalities, text, synthesis and an avatar to hopefully restore fuller communication experience for our participant," he told reporters.


Stanford approach: Tiny sensors on the brain

The Stanford trial relied on volunteer Pat Bennett, now 68, a former human resources director, who was diagnosed with ALS in 2012.

“When you think of ALS, you think of arm and leg impact,” Bennett wrote in an interview with Stanford staff conducted by email and provided to the media. “But in a group of ALS patients, it begins with speech difficulties. I am unable to speak.”

On March 29, 2022, neurosurgeons at Stanford placed two tiny sensors each on the surface of two regions of Bennett's brain involved in speech production. About a month later, she and a team of Stanford scientists began twice-weekly, four-hour research sessions to train the software that was interpreting her speech.

She would repeat in her mind sentences chosen randomly from telephone conversations, such as: “It’s only been that way in the last five years.” Another: “I left right in the middle of it.”

As she recited these sentences, her brain activity was translated by a decoder into a stream of "sounds" and then assembled into words. Bennett repeated 260 to 480 sentences per training session. Initially, she was restricted to a 50-word vocabulary, but she was later allowed to choose from 125,000 words, essentially all she would ever need.

After four months, she was able to generate 62 words per minute on a computer screen merely by thinking them.

“For those who are nonverbal, this means they can stay connected to the bigger world, perhaps continue to work, maintain friends and family relationships,” she wrote.

The technology made a lot of mistakes. About 1 out of every 4 words was interpreted incorrectly even after this training.

Frank Willett, the research scientist who helped lead the Stanford work, said he hopes to improve accuracy in the next few years, so that only 1 out of 10 words will be wrong.

Edward Chang, the senior researcher on the UCSF paper, said he hopes his team's work will "really allow people to interact with digital spaces in new ways," communicating beyond simply articulating words.

All four researchers said restoring communication abilities to Ann and Bennett during the trial was a highlight in their professional careers.

"It was quite emotional for all of us to see this work," said Chang, a member of the UCSF Weill Institute for Neuroscience.

"I felt like I'd come full circle from wishing I could communicate with my dad as a kid to seeing this actually work," Henderson added. "It's indescribable."

Contact Karen Weintraub at kweintraub@usatoday.com.

Health and patient safety coverage at USA TODAY is made possible in part by a grant from the Masimo Foundation for Ethics, Innovation and Competition in Healthcare. The Masimo Foundation does not provide editorial input.

This article originally appeared on USA TODAY: New brain-computer interface helps 2 paralyzed people communicate