- Anarthria, which is the loss of the ability to speak, can have a variety of causes, such as a stroke or amyotrophic lateral sclerosis.
- People with anarthria often retain their linguistic ability, but paralysis may prevent them from using technological aids to communicate.
- Neuroscientists used machine learning to translate electrical activity in the part of the brain responsible for speech into words and sentences.
For the first time, scientists have managed to translate into text the brain activity of a person with paralysis who is unable to speak.
Neuroscientists implanted an electrode array the size of a credit card onto the 36-year-old’s sensorimotor cortex, which is the part of the brain that controls the articulation of speech.
The scientists, led by researchers at the University of California, San Francisco (UCSF), then used “deep-learning algorithms” to train computer models to recognize and classify words from patterns in the participant’s brain activity.
They asked the man to think about saying words that they presented to him on a screen.
In addition to the deep-learning models, they used a “natural-language model,” which is a program that predicts the next word in a sentence based on the previous words.
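The article does not detail the study’s language model, but the underlying idea can be sketched with the simplest possible version: a bigram model, which counts how often each word follows each other word in a training corpus and ranks candidates accordingly. The corpus below is a made-up stand-in, not data from the study.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the large text corpus such a model
# would normally be trained on (hypothetical, not from the study).
corpus = "i am thirsty i need my glasses i am good".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(prev_word):
    """Rank candidate next words by their bigram probability."""
    counts = following[prev_word]
    total = sum(counts.values())
    return [(word, count / total) for word, count in counts.most_common()]

print(predict_next("i"))  # [('am', 0.666...), ('need', 0.333...)]
```

A model like this supplies a prior over likely word sequences, which can then be weighed against what the brain-activity classifier reports.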
“This outcome really is a milestone toward restoring speech for people who are unable to communicate due to paralysis,” said David Moses, a postdoctoral engineer at UCSF and lead author of the study, in an interview with the San Francisco Chronicle.
“There have been other existing approaches, but this is the first proof that someone can attempt to speak and that we can translate what they were trying to say from that brain activity.”
The participant, who was 36 at the start of the study, had a stroke in his brain stem when he was 20 that resulted in severe paralysis and loss of intelligible speech, known as anarthria.
Other causes of anarthria include amyotrophic lateral sclerosis, a rare neurological condition that mainly affects the nerves responsible for voluntary movement.
The man in the study could vocalize grunts and moans but was unable to articulate words, despite having healthy cognitive functions.
Using slight movements of his head, he can control a computer-based typing device to communicate, but his typing speed with this device is only around five correct words per minute.
After 48 training sessions with the new “mind-reading” technology over the course of 81 weeks, he was able to generate around 15 words per minute, with an error rate of 26%.
The scientists, who report their findings in The New England Journal of Medicine, say speech decoding technologies are generally considered usable if they have a word error rate below 30%.
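Word error rate is the standard metric here: the fraction of words that would have to be substituted, inserted, or deleted to turn the decoded sentence into the intended one. A minimal sketch of how it is computed (this is the conventional edit-distance definition, not the study’s own evaluation code):

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words, divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn first i reference words into first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[-1][-1] / len(ref)

print(word_error_rate("i need my glasses", "i am my glasses"))  # 0.25
```

By this measure, decoding “i am my glasses” when the intended sentence was “i need my glasses” scores 25%, just under the usability threshold the authors cite.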
“This study represents a transformational breakthrough in the field of brain-computer interfaces,” said Dr. Lee H. Schwamm, who chairs the advisory committee of the American Stroke Association and was not involved in the new research.
“While the intervention is quite invasive, requiring brain surgery to implant a recording strip on the surface of the brain, and the ‘thought-to-spoken’ conversion accuracy was modest, the paradigm is groundbreaking,” Dr. Schwamm told Medical News Today.
Dr. Schwamm is vice president, virtual care, at Mass General Brigham and professor of neurology at Harvard Medical School in Boston, MA.
He said that while previous research has used a similar interface to translate thoughts about movement into control of a robotic arm, it is an even bigger leap to decipher words.
“What is most astounding is that the area of the brain for sensor implant, the sensory-motor cortex, is not thought to play an important role in language understanding or word generation,” he added.
The area is involved in moving the lips and throat to produce speech sounds, so the next step may be to see whether it can also help people with aphasia, a more common cause of disabling language disturbance after a stroke, said Dr. Schwamm.
In 48 training sessions, the participant attempted to produce particular words from a set of 50 words.
In each trial, he was presented with one of these words on a screen. When the word turned green after a 2-second delay, he attempted to say the word.
Over the course of these sessions, the researchers collected 22 hours of brain activity data, which they fed to the deep-learning algorithm.
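As a rough sketch of what that training step involves, the pipeline pairs windows of recorded multichannel activity with the word the participant was attempting, then fits a classifier. Everything below is a hypothetical simplification with placeholder shapes and random data; the study itself used deep-learning models, not this minimal softmax classifier.

```python
import numpy as np

# Toy stand-in: windows of neural activity (channels x time, flattened)
# labeled with the attempted word. Shapes and data are illustrative only.
n_trials, n_channels, n_samples, n_words = 400, 128, 50, 50
rng = np.random.default_rng(1)
X = rng.standard_normal((n_trials, n_channels * n_samples))  # activity windows
y = rng.integers(0, n_words, size=n_trials)                  # attempted-word labels

# Minimal softmax classifier trained by gradient descent.
W = np.zeros((n_channels * n_samples, n_words))
for _ in range(100):
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(n_trials), y] -= 1.0              # d(cross-entropy)/d(logits)
    W -= 0.01 * (X.T @ p) / n_trials              # gradient step

accuracy = ((X @ W).argmax(axis=1) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```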
In two final sessions, the man attempted to use the previously trained words to generate sentences such as “I am thirsty” and “I need my glasses.”
In each trial, the participant was presented with the sentence and attempted to generate the words as quickly as possible by thinking about saying them.
The deep-learning algorithm and language model enabled his thoughts to be decoded without error in more than half of the sentence trials.
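The article does not describe the decoder itself, but one common way to combine a classifier’s word probabilities with a language model’s predictions is to score each candidate sentence under both. The sketch below is hypothetical (tiny vocabulary, invented probabilities), not the study’s actual decoder.

```python
import math

# Hypothetical classifier outputs: for each attempted word, a probability
# over a tiny vocabulary (the study's vocabulary had 50 words).
classifier_probs = [
    {"i": 0.80, "am": 0.10, "need": 0.05, "thirsty": 0.03, "glasses": 0.02},
    {"need": 0.40, "am": 0.35, "i": 0.15, "thirsty": 0.05, "glasses": 0.05},
    {"thirsty": 0.50, "glasses": 0.30, "am": 0.10, "i": 0.05, "need": 0.05},
]

# Hypothetical bigram probabilities P(next word | previous word).
bigram = {
    ("i", "am"): 0.6, ("i", "need"): 0.4,
    ("am", "thirsty"): 0.7, ("need", "glasses"): 0.8,
}

def decode(probs, floor=1e-3):
    """Score every word sequence by classifier probability times
    language-model probability and return the best one."""
    paths = {(w,): math.log(p) for w, p in probs[0].items()}
    for step in probs[1:]:
        new_paths = {}
        for seq, score in paths.items():
            for word, p in step.items():
                lm = bigram.get((seq[-1], word), floor)  # unseen pairs get a small floor
                new_paths[seq + (word,)] = score + math.log(p) + math.log(lm)
        paths = new_paths
    return max(paths, key=paths.get)

print(" ".join(decode(classifier_probs)))  # "i am thirsty"
```

Note that the classifier alone would have picked “need” for the second word; the language model’s preference for “am thirsty” over the improbable “need thirsty” corrects the error, illustrating the kind of help a natural-language model provides. A real decoder would also prune candidates (a beam search) rather than enumerate every sequence.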
However, one of the biggest challenges for researchers trying to create a deep-learning program that translates the brain activity of a person with paralysis is detecting precisely when the person is attempting to speak.
This is because the algorithm has no obvious way to distinguish the electrical activity that arises from attempts to articulate words from background brain activity.
Following training, however, the new algorithm managed to correctly identify 98% of the attempts by the participant to produce individual words.
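Whatever the study’s detector looks like internally, the downstream step is easy to illustrate: given per-timepoint probabilities that the activity resembles a speech attempt (simulated here with random numbers), find the sustained windows where that probability stays high. Everything below is a hypothetical toy.

```python
import numpy as np

# Simulated per-timepoint probabilities from a detection model that
# scores how "speech-attempt-like" the neural activity looks.
rng = np.random.default_rng(0)
background = rng.uniform(0.0, 0.3, size=50)   # resting activity
attempt    = rng.uniform(0.7, 1.0, size=20)   # attempted word
probs = np.concatenate([background, attempt, background])

def detect_attempts(probs, threshold=0.5, min_len=5):
    """Return (start, end) windows where probability stays above threshold."""
    above = probs > threshold
    windows, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:          # ignore brief blips
                windows.append((start, i))
            start = None
    if start is not None and len(probs) - start >= min_len:
        windows.append((start, len(probs)))
    return windows

print(detect_attempts(probs))  # [(50, 70)]
```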
Another challenge is to process the patterns of brain activity fast enough to translate in real time.
“The new study does indeed decode the brain activity in real time, but I believe that is not the biggest achievement,” said Prof. Christian Herff, who was not involved in the new research.
He told MNT that a few teams had previously achieved real-time translation, including, in 2020, the Brain-Computer Interface Group at Maastricht University, where he now works.
“The big step of [the new study] is that they present their results in a patient who is actually not able to speak anymore. Before, all studies were conducted with volunteers who were still able to speak,” said Prof. Herff.
Source: Medical News Today