I wouldn’t call it mind reading -- mind translating is probably more accurate -- but with extremely invasive electrodes and consenting patients, it is possible to guess which word a person just heard by deciphering the way their brain responds to it. When you hear the word “waldo”, for example, the auditory information travels through certain regions of your brain and is processed so that you can understand what you just heard. The neurons in these areas respond differently to “waldo” than to “property”, which is what allows you to discriminate between the two words. A lot of processing goes on in the brain for this to happen, and information deemed unnecessary for comprehension may be lost along the way, but researchers at Berkeley wanted to see whether enough information got through for them to decode which word led to a specific response.
To get this information directly from the neurons responsible for processing speech, the researchers placed electrodes on the areas of the brain involved in speech processing, a method only possible with patients already undergoing brain surgery to treat epilepsy or brain tumors. First they recorded the neural responses triggered by hearing specific words, and then they translated the brain’s language into something resembling sound. They did this by feeding the patterns of neural responses triggered by different words into mathematical models that transform the brain’s response into something as similar as possible to the word that produced it. They judged that similarity by comparing auditory spectrograms -- diagrams representing the qualities that define the sounds making up each word -- and adjusted their models until the spectrogram reconstructed from the neural responses looked similar enough to the original word that simple speech recognition algorithms could recognize it.
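To give a rough flavor of what “mathematical models” like these can look like, here is a toy sketch in Python. It assumes the simplest possible case -- a linear mapping from neural activity to a spectrogram, fit with ridge regression -- and uses entirely synthetic numbers in place of real recordings; the study's actual models were more elaborate than this.

```python
import numpy as np

# Toy sketch: decode spectrograms from neural responses with a linear
# model fit by ridge regression. All data here are synthetic stand-ins,
# not recordings from the study.
rng = np.random.default_rng(0)

n_samples = 500    # time points pooled across training words
n_electrodes = 64  # neural features per time point
n_freq_bins = 32   # spectrogram frequency channels

# Synthetic "ground truth": spectrograms are a noisy linear mix of
# the neural activity, so a linear decoder can recover them.
true_weights = rng.normal(size=(n_electrodes, n_freq_bins))
neural = rng.normal(size=(n_samples, n_electrodes))
spectrogram = neural @ true_weights + 0.1 * rng.normal(size=(n_samples, n_freq_bins))

# Fit the decoder: closed-form ridge regression.
lam = 1.0
W = np.linalg.solve(neural.T @ neural + lam * np.eye(n_electrodes),
                    neural.T @ spectrogram)

# Reconstruct spectrograms from new, held-out neural responses.
test_neural = rng.normal(size=(10, n_electrodes))
reconstruction = test_neural @ W
print(reconstruction.shape)  # one reconstructed spectrogram frame per response
```

The key idea is only that the decoder is *fit* to make its reconstructions resemble the true spectrograms on training data, then applied to responses it has never seen.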
Once they were confident that their brain-to-word translator was as accurate as possible, they could test it with new words. The goal was to guess which word a person had just heard out of 47 possibilities. They measured the neural responses to each of the 47 words, used their mathematical filters to reconstruct a spectrogram of what each one should look like, and then used speech recognition algorithms to match each reconstruction to the closest of the 47 original words. You can listen to this audio file for some examples of how the original word (the first you will hear) compares to the reconstructions (the second and third have been decoded with two different methods). This worked remarkably well: 89% of the time, the word the researchers guessed from the reconstruction was the one the person had actually heard.
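The matching step above can be sketched with a simple stand-in for a speech recognizer: flatten each spectrogram to a vector and assign a reconstruction to whichever of the 47 originals it correlates with most strongly. The spectrograms below are random placeholders, and nearest-correlation matching is an assumption for illustration, not the algorithm the researchers used.

```python
import numpy as np

# Toy sketch: identify a word by matching its reconstructed spectrogram
# to the most similar of 47 candidate spectrograms. All spectrograms
# here are random stand-ins for real audio.
rng = np.random.default_rng(1)

n_words = 47
spec_size = 32 * 20  # e.g. 32 frequency bins x 20 time frames, flattened

originals = rng.normal(size=(n_words, spec_size))
# Fake "reconstructions": noisy copies of the originals.
reconstructions = originals + 0.5 * rng.normal(size=originals.shape)

def best_match(recon, candidates):
    # Correlate the reconstruction with every candidate spectrogram
    # and return the index of the strongest match.
    r = recon - recon.mean()
    c = candidates - candidates.mean(axis=1, keepdims=True)
    scores = (c @ r) / (np.linalg.norm(c, axis=1) * np.linalg.norm(r))
    return int(np.argmax(scores))

guesses = [best_match(reconstructions[i], originals) for i in range(n_words)]
accuracy = sum(g == i for i, g in enumerate(guesses)) / n_words
print(f"identification accuracy: {accuracy:.0%}")
```

Note that this measures how often the right word is the *closest* match among 47, which is the same kind of score as the 89% figure reported -- a much easier bar than producing audio a human would understand.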
While they could match the reconstructions to the original words, this was done by computer, not by ear. The researchers aren’t sure yet whether getting better information from the electrodes would allow these reconstructions to make sense to a listener, or whether the way the brain processes information means that some of it can’t easily be converted back into intelligible sound. An even bigger hurdle is whether internal speech is processed in a similar way. As we understand more about how speech information is represented in the brain, we might one day be able to reconstruct inner speech in a way that allows someone who has lost the ability to speak to be understood.