
How the Brain Decodes Speech in Noisy Rooms

by Universalwellnesssystems

Summary: Researchers found that in a noisy environment, the brain decodes speech differently depending on how loud it is and how focused the listener is on it.

Their work, which combines neural recordings and computer models, shows that when we struggle to follow a conversation over louder voices, our brains encode speech information differently than when the speech is easy to hear. This could be extremely important for enhancing hearing aids that isolate the sounds a listener wants to hear.

This study may lead to significant improvements in auditory attention decoding systems, especially in brain-controlled hearing aids.

Important facts:

  1. The study reveals that in noisy situations, our brains encode speech information differently depending on how loud the speech we focus on is and how much attention we pay to it.
  2. The researchers used neural recordings to build predictive models of brain activity, demonstrating that 'glimpsed' and 'masked' speech information are encoded differently in the brain.
  3. This finding could provide a major breakthrough in improving hearing aid technology, especially auditory attention decoding systems in brain-controlled hearing aids.

Source: PLOS

Researchers led by Dr. Nima Mesgarani of Columbia University in the United States report that the brain handles speech heard in a crowded room differently depending on how loud it is and whether the listener is concentrating on it.

Published June 6 in the open-access journal PLOS Biology, the study combines neural recordings and computer modeling to show that when we track speech that is drowned out by a louder voice, speech information is encoded differently than when that speech is the louder, easier-to-hear voice.

The findings could help improve hearing aids that work by separating sounds.

In a crowded room, it can be difficult to focus on a single speaker, especially when other voices are loud. However, amplifying all sounds equally does little to improve the ability to separate these competing voices, and hearing aids that try to amplify only the attended speech are still too imprecise to be practical.

Credit: Neuroscience News

To better understand how speech is processed in these situations, researchers at Columbia University recorded neural activity from electrodes implanted in the brains of epilepsy patients undergoing brain surgery. Patients were asked to listen to a single voice, which at times was louder ("glimpsed") or softer ("masked") than a competing voice.

The researchers used the neural recordings to generate predictive models of brain activity. These models showed that 'glimpsed' speech information is encoded in both the primary and secondary auditory cortex, and that encoding of the attended speech is enhanced in the secondary cortex.
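
For readers curious what such a predictive (encoding) model looks like in practice, the sketch below fits a simple temporal response function: neural activity is predicted from time-lagged speech features with ridge regression. All names, sizes, and the synthetic data are illustrative assumptions, not the study's actual pipeline or code.

```python
# Minimal sketch of a temporal response function (TRF) encoding model:
# predict neural activity from time-lagged speech features via ridge regression.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_samples, n_features, n_lags = 2000, 8, 20   # frames, speech features, lags (hypothetical sizes)
speech = rng.standard_normal((n_samples, n_features))   # stimulus features (e.g., spectrogram bands)
# Toy neural response: a smoothed version of one feature plus noise.
neural = np.convolve(speech[:, 0], np.hanning(10), mode="same") + 0.5 * rng.standard_normal(n_samples)

def lagged_design(stim, lags):
    """Stack time-lagged copies of the stimulus into one design matrix."""
    X = np.concatenate([np.roll(stim, lag, axis=0) for lag in range(lags)], axis=1)
    X[:lags] = 0.0            # drop wrap-around samples introduced by np.roll
    return X

X = lagged_design(speech, n_lags)
split = n_samples // 2        # fit on the first half, evaluate on the second

trf = Ridge(alpha=1.0).fit(X[:split], neural[:split])
pred = trf.predict(X[split:])
print(f"held-out prediction correlation: {np.corrcoef(pred, neural[split:])[0, 1]:.2f}")
```

In studies like this one, the held-out prediction accuracy of such models is typically compared across conditions (glimpsed vs. masked, attended vs. ignored) and across brain regions.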

In contrast, 'masked' speech information was encoded only when it belonged to the attended speaker, and its encoding occurred later than for 'glimpsed' speech. Because 'glimpsed' and 'masked' speech information appear to be encoded separately, focusing on decoding only the 'masked' portion of the attended speech could lead to improved auditory attention decoding systems for brain-controlled hearing aids.
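
As a rough illustration of the attention-decoding idea mentioned above, the following sketch fits one encoding model per talker and labels the talker whose model best predicts the ongoing neural signal as the attended one. The two synthetic speech envelopes and the single recording channel are assumptions made up for the example, not the study's method.

```python
# Hedged sketch of auditory attention decoding: compare how well each
# talker's features predict the neural signal; the better predictor is
# taken as the attended talker.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n = 3000
talker_a = rng.standard_normal(n)     # e.g., speech envelope of talker A (attended in this toy data)
talker_b = rng.standard_normal(n)     # speech envelope of talker B (ignored)
neural = 0.8 * talker_a + 0.2 * talker_b + rng.standard_normal(n)   # toy neural mixture

def heldout_corr(stim, resp, split):
    """Fit a one-feature encoding model on the first half, return held-out correlation."""
    model = Ridge(alpha=1.0).fit(stim[:split, None], resp[:split])
    pred = model.predict(stim[split:, None])
    return np.corrcoef(pred, resp[split:])[0, 1]

split = n // 2
r_a = heldout_corr(talker_a, neural, split)
r_b = heldout_corr(talker_b, neural, split)
print(f"r_A={r_a:.2f}, r_B={r_b:.2f} -> decoded attention: talker {'A' if r_a > r_b else 'B'}")
```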

Lead author of the study, Vinay Raghavan, said, "When you listen to someone in a noisy environment, your brain recovers what it missed when the background noise was too loud. Your brain can also pick up snippets of speech you are not focused on, but only when the person you are listening to is relatively quiet."

About this auditory neuroscience research news

Author: Nima Mesgarani
Source: PLOS
Contact: Nima Mesgarani – PLOS
Image: The image is credited to Neuroscience News

Original research: Open access.
"Unique neural encoding of glimpsed and masked speech in multi-speaker situations" by Nima Mesgarani et al. PLOS Biology


Abstract

Unique neural encoding of glimpsed and masked speech in multi-speaker situations

Humans can easily tune in to one speaker in a multi-speaker environment while still picking up some of the background speech. However, it remains unclear how we perceive masked speech and to what extent non-target speech is processed.

Some models suggest that perception is achieved through glimpses, which are spectrotemporal regions where the target speaker has more energy than the background; other models, however, require recovery of the masked regions.

To clarify this question, we recorded directly from the primary and non-primary auditory cortex (AC) of neurosurgical patients as they attended to one speaker in a multi-speaker conversation, and trained temporal response function models to predict high-gamma neural activity from glimpsed and masked stimulus features.

We found that glimpsed speech was encoded at the level of speech features for both the target and non-target speakers, with enhanced encoding of the target speech in non-primary AC. In contrast, encoding of masked speech features was found only for the target speaker, with longer response latencies and a distinct anatomical distribution compared with glimpsed speech features.

These findings suggest distinct mechanisms for encoding glimpsed and masked speech and provide neural evidence for a glimpsing model of speech perception.
