We thought we knew how our brains understand speech. We were wrong
New findings suggest that your brain processes sounds and words separately and simultaneously, overturning the long-standing assumption that the mind first processes sound and only then turns it into familiar words.
After seven years of research, a team of neuroscientists has finally uncovered how our brains process speech – and it's not the way we thought it was.
Instead of turning the sound of someone speaking into words, as had long been assumed, our minds process both the sounds and the words at the same time, in two different locations in the brain.
This finding, the researchers say, could have implications for our understanding of disorders of hearing and language, like dyslexia.
Scientists' ability to understand speech processing has been held back by anatomy: the brain region involved in speech processing, the auditory cortex, is hidden deep in the fold between the brain's frontal and temporal lobes.
Even when researchers could access this area of the brain, taking neurophysiological recordings from the auditory cortex would require a scanner with extremely high resolution.
But advancements in technology, along with nine participants undergoing brain surgery, allowed a team of neuroscientists and neurosurgeons from across Canada and the USA to answer the question of how we understand speech.
“We went into this study hoping to find evidence of the transformation of the low-level representation of sounds into the high-level representation of words,” said Dr Edward Chang, one of the study's authors from the University of California, San Francisco.
When we hear the sound of talking, the cochlea in our ear turns it into electrical signals, which it then sends to the auditory cortex in the brain. Before their research, Chang explained, scientists believed that this electrical information had to be processed by a specific area known as the primary auditory cortex before it could be translated into the syllables, consonants and vowels that make up the words we understand.
"That is, when you hear your friend’s voice in a conversation, the different frequency tones of her voice are mapped out in the primary auditory cortex first... before it is transformed into syllables and words in the nonprimary auditory cortex."
"Instead, we were surprised to find evidence that the nonprimary auditory cortex does not require inputs from the primary auditory cortex and is likely a parallel pathway for processing speech," Chang said.
To test this, researchers stimulated the primary auditory cortex in participants' brains with small, harmless electrical currents. If participants needed this area to understand speech, stimulating it would prevent, or distort, their perception of what they were being told.
Surprisingly, the patients could still clearly hear and repeat any words that were said to them.
Then, the team stimulated an area in the nonprimary auditory cortex.
The impact on the patients' ability to understand what they were being told was significant. "I could hear you speaking but can’t make out the words," one said. Another patient said it sounded like the syllables were being swapped in the words they heard.
"[We had thought it was] a serial pathway – like an assembly line," Chang explained. "The parts are assembled and modified along a single path, and one step depends upon the previous steps.

"A parallel pathway is one where you have other pathways that are also processing information, which can be independent."
The researchers caution that while this is an important step forward, they don't yet understand all the details of the parallel auditory system.
"It certainly raises more questions than it answers," Chang said. "Why did this evolve, and is it specific to humans? What is the anatomical basis for parallel processing?
"The primary auditory cortex may not have a critical role in understanding speech, but does it have other potential functions?"
Amy is the Editorial Assistant at BBC Science Focus. Her BA degree specialised in science publishing and she has been working as a journalist since graduating in 2018. In 2020, Amy was named Editorial Assistant of the Year by the British Society of Magazine Editors. She looks after all things books, culture and media. Her interests range from natural history and wildlife, to women in STEM and accessibility tech.