Neuroimaging of Brain Shows Who Spoke to a Person and What Was Said

November 12th, 2008

And what was said: as long as what was said was limited to one of three vowels. Man, I love the Science Daily headlines.

Anyway, this is probably good enough to enable the researchers to swindle some more grant money, buy a few new MacBooks and possibly attract the attention of Darth Vader or some other deep-pocketed evil lord. (Hint: This is about as good as life gets in any neuroscience department.)

Look for version 2 to be able to differentiate between All Your Base and Chocolate Rain.

Via: Science Daily:

Scientists from Maastricht University have developed a method to look into the brain of a person and read out who has spoken to him or her and what was said. With the help of neuroimaging and data mining techniques the researchers mapped the brain activity associated with the recognition of speech sounds and voices.

In their Science article “‘Who’ is Saying ‘What’? Brain-Based Decoding of Human Voice and Speech,” the four authors demonstrate that speech sounds and voices can be identified by means of a unique ‘neural fingerprint’ in the listener’s brain. In the future this new knowledge could be used to improve computer systems for automatic speech and speaker recognition.

Seven study subjects listened to three different speech sounds (the vowels /a/, /i/ and /u/), spoken by three different people, while their brain activity was mapped using neuroimaging (fMRI). With the help of data mining methods, the researchers developed an algorithm to translate this brain activity into unique patterns that identify a speech sound or a voice. The brain activity itself was found to be determined by the various acoustic characteristics of the spoken sounds, which are reflected in distinct neural patterns.

Just like real fingerprints, these neural patterns are both unique and specific: the neural fingerprint of a speech sound does not change if uttered by somebody else and a speaker’s fingerprint remains the same, even if this person says something different.

Moreover, this study revealed that part of the complex sound-decoding process takes place in areas of the brain previously associated only with the early stages of sound processing. Existing neurocognitive models assume that sounds are processed by different regions of the brain according to a certain hierarchy: after simple processing in the auditory cortex, the more complex analysis (turning speech sounds into words) takes place in specialised regions of the brain. The findings from this study, however, imply that speech processing is less hierarchical and spread out more across the brain.

The research was partly funded by the Netherlands Organisation for Scientific Research (NWO): two of the four authors, Elia Formisano and Milene Bonte, carried out their research with NWO grants (Vidi and Veni). The data mining methods were developed during the PhD research of Federico De Martino (doctoral thesis defended at Maastricht University on 24 October 2008).
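
For what it's worth, the "data mining" part is what the fMRI world calls multivariate pattern analysis: hand the voxel activity patterns to an ordinary classifier and check whether it can tell the conditions apart better than chance, and whether a vowel decoder trained on some speakers still works on a speaker it never saw (the "fingerprint" claim above). Below is a minimal sketch of that idea on simulated data; the ROI size, trial counts, noise model and linear SVM are my own assumptions for illustration, not the pipeline from the Science paper.

```python
# Hypothetical sketch of decoding "who said what" from fMRI voxel patterns.
# Everything here (ROI size, trial counts, noise, classifier) is made up
# for illustration; it is not the authors' actual analysis.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_voxels = 200            # voxels in a pretend auditory-cortex ROI
n_trials = 20             # trials per (vowel, speaker) combination
vowels = ["a", "i", "u"]
speakers = ["sp1", "sp2", "sp3"]

# Give each vowel and each speaker its own faint spatial pattern, then bury
# both in noise -- the "neural fingerprints" from the press release.
vowel_patterns = rng.normal(0, 1, size=(len(vowels), n_voxels))
speaker_patterns = rng.normal(0, 1, size=(len(speakers), n_voxels))

X, vowel_labels, speaker_labels = [], [], []
for v in range(len(vowels)):
    for s in range(len(speakers)):
        signal = 0.3 * vowel_patterns[v] + 0.3 * speaker_patterns[s]
        X.append(signal + rng.normal(0, 1, size=(n_trials, n_voxels)))
        vowel_labels += [v] * n_trials
        speaker_labels += [s] * n_trials
X = np.vstack(X)
vowel_labels = np.array(vowel_labels)
speaker_labels = np.array(speaker_labels)

# 1) Decode *what* was said: accuracy above chance (1/3) means the voxel
#    patterns carry information about the vowel.
acc_vowel = cross_val_score(LinearSVC(dual=False), X, vowel_labels, cv=5).mean()

# 2) Decode *who* said it: same data, different labels.
acc_speaker = cross_val_score(LinearSVC(dual=False), X, speaker_labels, cv=5).mean()

# 3) The "fingerprint" claim: train the vowel decoder on two speakers and
#    test on the held-out speaker. Generalization means the vowel pattern
#    does not depend on who uttered it.
train = speaker_labels != 2
test = ~train
clf = LinearSVC(dual=False).fit(X[train], vowel_labels[train])
acc_cross = clf.score(X[test], vowel_labels[test])

print(f"vowel decoding:        {acc_vowel:.2f} (chance 0.33)")
print(f"speaker decoding:      {acc_speaker:.2f} (chance 0.33)")
print(f"vowel, unseen speaker: {acc_cross:.2f} (chance 0.33)")
```

Swap in real per-trial response estimates from a real region of interest and this is, give or take feature selection and a mountain of preprocessing, the general shape of this kind of decoding analysis.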

One Response to “Neuroimaging of Brain Shows Who Spoke to a Person and What Was Said”

  1. pdugan says:

    “The research was partly funded by the Netherlands Organisation for Scientific Research (NWO)”

    What an acronym! 😛
