Brian Pasley at the University of California, Berkeley quips, "If a pianist were watching a piano being played on TV with the sound off, they would still be able to work out what the music sounded like because they know what key plays what note." Something analogous happens with brainwaves: activity in areas of the auditory cortex reveals which parts of speech are being processed. Details of frequency can be extracted and recorded, along with their fluctuations and the rhythm of syllables tripping off the tongue.
It sounds a little far-fetched, but if this kind of brain reading takes off, we are in for thought reading and brain scanning in an alarming way. Perhaps, for example, we would no longer need courts of law or juries. The verdict is in the computer: the individual admits guilt without even uttering a syllable!
"The area of the brain that they are recording from is a pathway somewhere between the area that processes sound and the area that allows you to interpret it and formulate a response," says Jennifer Bizley, an auditory researcher at the University of Oxford. She helped create a spectrogram, a plot showing how much of each sound frequency is present in speech at each moment in time.
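To make the idea of a spectrogram concrete, here is a minimal sketch of how one is built from an audio signal: a short-time Fourier transform turns the waveform into spectral power per frequency band per time window. The window length and hop size below are illustrative choices, not values from the study.

```python
import numpy as np

def spectrogram(signal, win_len=256, hop=128):
    """Return time-varying spectral power, shape (n_windows, win_len // 2 + 1)."""
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        segment = signal[start:start + win_len] * window
        frames.append(np.abs(np.fft.rfft(segment)) ** 2)  # power per frequency bin
    return np.array(frames)

# Example: one second of a 1 kHz tone sampled at 8 kHz. With 256-sample
# windows, each bin spans 8000/256 = 31.25 Hz, so the tone's energy
# concentrates around bin 32.
fs = 8000
t = np.arange(fs) / fs
spec = spectrogram(np.sin(2 * np.pi * 1000 * t))
```

The same representation works for any signal, which is what makes it a natural common currency between recorded speech and decoded neural activity.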
Figure: Participants listened to words (acoustic waveform, top left) while neural signals were recorded from cortical surface electrode arrays (top right, red circles) implanted over the superior and middle temporal gyrus (STG, MTG). Speech-induced cortical field potentials (bottom right, gray curves) recorded at multiple electrode sites were used to fit multi-input, multi-output models for offline decoding. The models take as input time-varying neural signals at multiple electrodes and output a spectrogram consisting of time-varying spectral power across a range of acoustic frequencies (180-7,000 Hz, bottom left). To assess decoding accuracy, the reconstructed spectrogram is compared to the spectrogram of the original acoustic waveform. Credit: PLoS Biology, doi:10.1371/journal.pbio.1001251.g001
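The "multi-input, multi-output model" in the caption can be sketched as a linear map from electrode signals to spectrogram bands. The toy below generates synthetic data and fits such a map with ridge regression; the data, dimensions, and solver are illustrative assumptions, not the study's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_times, n_electrodes, n_freqs = 500, 16, 32

# Synthetic "ground truth": each frequency band is a noisy linear mix of electrodes.
true_weights = rng.normal(size=(n_electrodes, n_freqs))
neural = rng.normal(size=(n_times, n_electrodes))           # inputs: electrode signals
spectrogram = neural @ true_weights + 0.1 * rng.normal(size=(n_times, n_freqs))

# Fit one ridge-regularised linear map for all output bands at once.
lam = 1.0
W = np.linalg.solve(neural.T @ neural + lam * np.eye(n_electrodes),
                    neural.T @ spectrogram)

# Decoding: predict the full spectrogram from the neural signals alone.
reconstruction = neural @ W
```

On this synthetic data the reconstruction correlates highly with the target spectrogram; real cortical recordings are far noisier, which is why only some words decode cleanly.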
The key test is comparing this reconstructed spectrogram with the original speech spectrogram. Another test converted the neural spectrogram into audible speech. Unfortunately, to date, only some words can be interpreted easily. Steven Laureys at the University of Liège, Belgium, says, "We know that for much of our sensory processing, mental imagery activates very similar networks." So merely thinking about a word could be enough to create its neural equivalent. He thinks that medical situations, with comatose patients for example, would benefit from this research.
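One simple way to score the comparison described above is to correlate the reconstructed and original spectrograms band by band and average. This is a generic accuracy metric offered as a sketch; it is not necessarily the paper's exact measure.

```python
import numpy as np

def decoding_accuracy(original, reconstructed):
    """Mean Pearson correlation across frequency bands (spectrogram columns)."""
    corrs = [np.corrcoef(original[:, f], reconstructed[:, f])[0, 1]
             for f in range(original.shape[1])]
    return float(np.mean(corrs))

# Toy check: a perfect reconstruction scores 1.0, a noisy one somewhat less.
rng = np.random.default_rng(1)
orig = rng.normal(size=(200, 8))            # 200 time points, 8 frequency bands
noisy = orig + 0.2 * rng.normal(size=(200, 8))
```

A score near 1.0 means the decoded spectrogram tracks the original closely; in practice scores vary widely across words and electrode placements.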
The research accessed signals from the cortex by borrowing techniques from epilepsy neurosurgery: electrode arrays placed directly on the cortical surface pick up the brain's electrical activity, and the spectrograms are composed from those recordings. The research has a long way to go, but the potential is enormous, and not only for the medical industry.