Brain-computer interface could translate thoughts into speech

31 January 2019


Neuroengineers from the Mortimer B. Zuckerman Mind Brain Behavior Institute at Columbia University in the US have developed a brain-computer interface that can directly translate thoughts into intelligible, recognisable speech.

The new system offers hope for people with little to no ability to speak, such as patients with amyotrophic lateral sclerosis (ALS) or those recovering from a stroke. Around one third of people who have had a stroke have some kind of problem with speech.

“Our voices help connect us to our friends, family and the world around us, which is why losing the power of one’s voice due to injury or disease is so devastating,” said Dr Nima Mesgarani from the Zuckerman Mind Brain Behavior Institute. “We’ve shown that, with the right technology, these people’s thoughts could be decoded and understood by any listener.”

The interface is based on speech synthesisers and artificial intelligence (AI): it tracks brain activity and reconstructs it into words. The researchers believe that the new system has the potential to be used in computers that can directly communicate with the brain.

To create the system, the team used a computer algorithm called a vocoder, which synthesises speech after being trained on recordings of people talking. The same technology underpins the smart speakers and voice assistants sold by major consumer technology companies.

The researchers partnered with Northwell Health Physician Partners Neuroscience Institute neurosurgeon Dr Ashesh Dinesh Mehta to train the vocoder in interpreting brain activity.

Epilepsy patients currently being treated by Mehta were asked to listen to specific sentences and digits, and their brain signals were recorded to be run through the vocoder. The sound generated in response to these signals was analysed and refined by AI-based neural networks, which mimic the structure of biological neurons.
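The pipeline described above, recorded brain signals mapped to parameters that a vocoder can render as sound, can be illustrated with a toy sketch. This is not the study's code: the electrode counts, the simulated signals, and the use of a simple least-squares fit (standing in for the deep neural networks the researchers used) are all assumptions for illustration.

```python
# Illustrative sketch only: decode simulated neural activity into vocoder
# parameters with a linear map fit by least squares. The real study used
# deep neural networks trained on intracranial recordings.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_electrodes, n_voc_params = 200, 16, 8

# Hypothetical ground-truth mapping from brain activity to vocoder parameters.
W_true = rng.normal(size=(n_electrodes, n_voc_params))
neural = rng.normal(size=(n_samples, n_electrodes))  # simulated recordings
voc_params = neural @ W_true + 0.01 * rng.normal(size=(n_samples, n_voc_params))

# "Training" the decoder on paired (brain signal, speech) data.
W_hat, *_ = np.linalg.lstsq(neural, voc_params, rcond=None)

# Decode activity into vocoder parameters; a real vocoder would then
# synthesise these parameters into audible speech.
decoded = neural @ W_hat
error = float(np.mean((decoded - voc_params) ** 2))
print(f"mean squared decoding error: {error:.4f}")
```

The point of the sketch is the structure of the problem (a learned mapping from neural features to speech-synthesis parameters), not the specific model, which in the study was far more powerful than a linear fit.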

This resulted in the production of a robotic-sounding voice. People were able to understand and repeat the produced sounds in about 75% of the cases, which is significantly higher than previous attempts.

Although the system requires further training and testing, researchers hope that it can later be applied in implants that can be worn to directly translate the user’s thoughts into words. “This would be a game changer,” said Mesgarani. “It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them.”


