Decoding inner speech from brain signals could help people with paralysis
At a glance
- Scientists designed a brain-computer interface to decode inner speech in real time from activity in the brain's motor cortex.
- The findings could help some people with paralysis communicate more easily than they can with current interfaces, which are based on attempted speech.
Brain-computer interfaces (BCIs) show promise for allowing people with paralysis to communicate. These devices use machine learning to translate electrical signals within the brain into words. Current systems require users to physically attempt to speak.
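As a rough sketch of that idea (not the study's actual decoder, and using simulated data), a classifier trained on per-trial neural features can map activity patterns to words:

```python
# Minimal sketch of a speech BCI's decoding step, with simulated data.
# Real systems use recurrent networks over phonemes plus a language model;
# this toy version classifies binned firing rates directly into words.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
VOCAB = ["yes", "no", "water", "help", "hello"]  # hypothetical 5-word vocabulary
N_CHANNELS = 128  # e.g., features from a microelectrode array

# Simulated training trials: each word evokes a distinct activity pattern.
patterns = rng.normal(size=(len(VOCAB), N_CHANNELS))
y = np.repeat(np.arange(len(VOCAB)), 200)
X = patterns[y] + rng.normal(scale=0.8, size=(len(y), N_CHANNELS))

decoder = LogisticRegression(max_iter=1000).fit(X, y)

def decode_word(features: np.ndarray) -> str:
    """Translate one trial's binned firing rates into the most likely word."""
    return VOCAB[decoder.predict(features.reshape(1, -1))[0]]

# A new noisy trial of the "water" pattern decodes back to "water".
trial = patterns[2] + rng.normal(scale=0.8, size=N_CHANNELS)
print(decode_word(trial))
```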
Decoding inner speech (thoughts expressed as words in a person's head) could inform the design of practical speech BCIs. Because inner speech doesn't require attempting to speak, decoding it could make speech output easier for some users. But beyond the technical challenge, one concern is the possibility of exposing thoughts that the person didn't intend to say out loud.
A research team led by Dr. Erin Kunz, Benyamin Abramovich Krasa, and Dr. Francis Willett at Stanford University studied how inner speech is represented in the brain compared with attempted speech. They also explored whether a BCI could decode such speech, and whether unintentional output could be prevented. Results appeared in Cell on Aug. 14.
The team began by recording activity in the brain's motor cortex from four participants. All had impaired speech due to either ALS (amyotrophic lateral sclerosis) or stroke. Participants either attempted to speak or imagined saying a set of words. For all participants, the team found that words could be decoded from motor cortex activity in both cases.
Attempted and inner speech evoked similar patterns of neural activity. But attempted speech evoked stronger signals, on average, than inner speech. This difference allowed a decoder to distinguish attempted from inner speech.
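As a hedged illustration of that magnitude difference (with invented numbers, not the study's data), even a simple threshold on overall signal modulation can separate the two modes:

```python
# Toy illustration: attempted speech modulates motor-cortex activity more
# strongly than inner speech, so overall magnitude alone separates them.
# Amplitude scales and the threshold are made up for this sketch.
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(mode: str, n_channels: int = 128) -> np.ndarray:
    """Simulated firing-rate changes; attempted speech is larger-amplitude."""
    scale = 1.0 if mode == "attempted" else 0.4
    return rng.normal(scale=scale, size=n_channels)

def classify(activity: np.ndarray, threshold: float = 0.55) -> str:
    """Label a trial by its mean absolute modulation across channels."""
    return "attempted" if np.abs(activity).mean() > threshold else "inner"

labels = ["attempted", "inner"] * 50
trials = [simulate_trial(m) for m in labels]
accuracy = np.mean([classify(t) == m for t, m in zip(trials, labels)])
print(f"threshold rule accuracy: {accuracy:.0%}")  # near 100% on this toy data
```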
Next, the participants imagined speaking whole sentences while a BCI decoded the sentences in real time. Error rates were between 14% and 33% for a 50-word vocabulary, and between 26% and 54% for a 125,000-word vocabulary. These participants had severe weakness in the muscles used for speech. They preferred using imagined speech over attempted speech, mainly because of the lower physical effort required.
To test whether the system could also decode unintentional inner speech, participants performed non-verbal tasks involving sequence recall and counting. A speech BCI was able to decode the memorized sequences and the counted numbers from participants' brain signals.
The team devised two strategies to prevent BCIs from decoding private inner speech. In one, they trained the decoder to distinguish attempted speech from inner speech and to silence the latter. This prevented decoding of inner speech while still effectively decoding attempted speech. Another strategy used a system that decoded inner speech only after it was "unlocked" by detecting a specific keyword from the user. The system recognized the keyword more than 98% of the time. Some speech BCIs may require additional design elements like these to lock out decoding until the user intends to speak.
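A minimal sketch of that keyword-gating idea (the keyword, names, and word stream here are hypothetical, not the study's system):

```python
# Sketch of the "unlock" safeguard: decoded inner speech is discarded
# until a chosen keyword is detected, after which output is switched on.
from typing import Optional

UNLOCK_KEYWORD = "openoutput"  # hypothetical; a real system would pick a rare phrase

class GatedDecoder:
    def __init__(self) -> None:
        self.unlocked = False

    def step(self, decoded_word: str) -> Optional[str]:
        """Pass decoded words through only after the unlock keyword appears."""
        if not self.unlocked:
            if decoded_word == UNLOCK_KEYWORD:
                self.unlocked = True  # output starts with the next word
            return None  # private inner speech stays silent
        return decoded_word

gate = GatedDecoder()
stream = ["seven", "eight", "nine", UNLOCK_KEYWORD, "i", "need", "water"]
print([w for w in (gate.step(s) for s in stream) if w is not None])
# -> ['i', 'need', 'water']; the counting before the keyword never leaves the device
```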
The findings suggest that attempted speech and inner speech are similarly represented in the brain's motor cortex. BCIs can thus decode inner speech from neural signals, including private inner speech the user never meant to share. But the study demonstrated strategies that can prevent such unintentional decoding without sacrificing accuracy. This approach could help people who are unable to speak communicate more easily than they can with BCIs based on attempted speech.
"This work gives real hope that speech BCIs can one day restore communication that is as fluent, natural, and comfortable as conversational speech," Willett says.
-by Brian Doctrow, Ph.D.
For further information on this and other health topics, visit the website of the National Institutes of Health at www.nih.gov.