Loud and Clear: The Neurological Secrets Behind Hearing in Noisy Surroundings

Researchers have found that the brain processes speech differently depending on how distinct it is and whether we’re focusing on it. The study combines neural recordings and computer modeling, demonstrating that phonetic information is encoded differently when speech is overwhelmed by louder voices than when it isn’t.

Columbia University scientists have found that the brain encodes speech differently based on its clarity and our focus on it. This discovery, involving distinct processing of “glimpsed” and “masked” speech, could improve the accuracy of brain-controlled hearing aids.

Researchers led by Dr. Nima Mesgarani at Columbia University, US, report that the brain treats speech in a crowded room differently depending on how easy it is to hear, and whether we are focusing on it. Published recently in the open-access journal PLOS Biology, the study uses a combination of neural recordings and computer modeling to show that when we follow speech that is being drowned out by louder voices, phonetic information is encoded differently than when that speech is the clearer, louder voice. The findings could help improve hearing aids that work by isolating attended speech.

Focusing on speech in a crowded room can be difficult, especially when other voices are louder. However, amplifying all sounds equally does little to help listeners isolate these hard-to-hear voices, and hearing aids that try to amplify only the attended speech are still too inaccurate for practical use.
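
To make the decoding problem concrete, here is a minimal sketch of the standard stimulus-reconstruction approach to auditory attention decoding, not the authors’ specific implementation: a linear decoder reconstructs the attended talker’s speech envelope from neural activity, and the reconstruction is correlated against each candidate talker’s actual envelope. All array names, lag counts, and regularization values below are illustrative assumptions.

```python
import numpy as np

def lag_matrix(x, n_lags):
    """Stack time-lagged copies of the neural channels (time x channels)
    so the decoder can integrate information over n_lags samples."""
    T, C = x.shape
    X = np.zeros((T, C * n_lags))
    for k in range(n_lags):
        X[k:, k * C:(k + 1) * C] = x[:T - k]
    return X

def fit_envelope_decoder(neural, attended_env, n_lags=32, lam=1e3):
    """Ridge regression from lagged neural data to the attended talker's
    speech envelope (stimulus reconstruction)."""
    X = lag_matrix(neural, n_lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                           X.T @ attended_env)

def decode_attention(neural, env_a, env_b, w, n_lags=32):
    """Reconstruct an envelope from new neural data and pick the talker
    whose true envelope best matches the reconstruction."""
    recon = lag_matrix(neural, n_lags) @ w
    r_a = np.corrcoef(recon, env_a)[0, 1]
    r_b = np.corrcoef(recon, env_b)[0, 1]
    return "talker_a" if r_a > r_b else "talker_b"
```

In a real device, the decoded label would drive which talker gets amplified; the study’s results suggest such decoders might improve by treating the “glimpsed” and “masked” segments described below separately.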

Figure: Two brain mechanisms for picking speech out of a crowd; an example of listening to someone talking in a noisy environment. Credit: Zuckerman Institute, Columbia University (2023) (CC-BY 4.0)

To better understand how speech is processed in these situations, the Columbia University researchers recorded neural activity from electrodes implanted in the brains of people with epilepsy as they underwent brain surgery. The patients were asked to attend to a single voice, which was sometimes louder than a competing voice (“glimpsed”) and sometimes quieter (“masked”).
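
One simple way to picture the “glimpsed”/“masked” distinction is to label each moment of a two-talker mixture by whether the target talker is locally louder or quieter than the competitor. The sketch below, which compares short-time energy envelopes, illustrates that idea rather than reproducing the study’s exact criterion; the signal names and frame sizes are assumptions.

```python
import numpy as np

def short_time_envelope(signal, frame=1024, hop=512):
    """Root-mean-square energy of the signal in overlapping frames."""
    n_frames = 1 + max(0, (len(signal) - frame) // hop)
    return np.array([
        np.sqrt(np.mean(signal[i * hop:i * hop + frame] ** 2))
        for i in range(n_frames)
    ])

def label_glimpsed_masked(target, competitor, frame=1024, hop=512):
    """Label each frame 'glimpsed' when the attended talker is locally
    louder than the competing talker, and 'masked' when it is quieter."""
    env_t = short_time_envelope(target, frame, hop)
    env_c = short_time_envelope(competitor, frame, hop)
    return np.where(env_t > env_c, "glimpsed", "masked")
```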

The researchers used the neural recordings to generate predictive models of brain activity. The models showed that phonetic information of “glimpsed” speech was encoded in both the primary and secondary auditory cortex, and that encoding of the attended speech was enhanced in the secondary cortex. In contrast, phonetic information of “masked” speech was encoded only when it came from the attended voice. Lastly, speech encoding occurred later for “masked” speech than for “glimpsed” speech. Because “glimpsed” and “masked” phonetic information appear to be encoded separately, focusing on deciphering only the “masked” portion of attended speech could lead to improved auditory attention-decoding systems for brain-controlled hearing aids.
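
Predictive models of this kind are typically linear encoding models that map phonetic features of each talker to the recorded neural activity; how well, and at which time lags, those features predict the response indicates whether and when they are encoded. Below is a hedged sketch of one such forward model, a temporal response function (TRF) fit with ridge regression, with hypothetical feature and response arrays standing in for the study’s data.

```python
import numpy as np

def fit_forward_model(features, response, n_lags=40, lam=1e2):
    """Ridge regression from time-lagged phonetic features (time x features)
    to one neural channel: a simple temporal response function (TRF)."""
    T, F = features.shape
    X = np.zeros((T, F * n_lags))
    for k in range(n_lags):                    # stimulus precedes the response
        X[k:, k * F:(k + 1) * F] = features[:T - k]
    w = np.linalg.solve(X.T @ X + lam * np.eye(F * n_lags), X.T @ response)
    pred = X @ w
    score = np.corrcoef(pred, response)[0, 1]  # how well the features predict
    return w.reshape(n_lags, F), score         # rows of w index the time lags
```

Comparing such fits for “glimpsed” versus “masked” feature streams, and inspecting the lag at which the TRF weights peak, mirrors the paper’s two contrasts: encoding strength and encoding latency.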

Vinay Raghavan, the lead author of the study, says, “When listening to someone in a noisy place, your brain recovers what you missed when the background noise is too loud. Your brain can also catch bits of speech you aren’t focused on, but only when the person you’re listening to is quiet in comparison.”

Reference: “Distinct neural encoding of glimpsed and masked speech in multitalker situations” by Vinay S. Raghavan, James O’Sullivan, Stephan Bickel, Ashesh D. Mehta and Nima Mesgarani, 6 June 2023, PLOS Biology.
DOI: 10.1371/journal.pbio.3002128

This work was supported by the National Institutes of Health (NIH), National Institute on Deafness and Other Communication Disorders (NIDCD) (DC014279 to NM). The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
