Our Inner Ear Pores Let Us Tune In And Out Of Conversations
We've all done it: sitting in a noisy room, you pick out bits of conversation by focusing on a single voice. While scientists have long known of this phenomenon, they could never figure out the mechanism behind it. Is it a neurological talent, something the brain does without us realizing it? Or is something else going on in the ear itself?
In mammals, the "inner ear separates sounds by their frequency content, and loss of this separation impairs our ability to understand speech in noisy environments in ways that cannot generally be compensated with a hearing aid," wrote the authors of a groundbreaking new study, led by Jonathan Sellon, an MIT graduate student, and published recently in Biophysical Journal. "Whereas this problem is well understood, its molecular origins are not."
Sellon and his colleagues say they've answered the question by cracking open the inner ears of mice. Inside is a part of the ear called the cochlea, a snail-shaped structure responsible for delivering sound information to the brain. Within the cochlea sits a meshy material called the tectorial membrane. The membrane is porous, with each individual hole thinner than a human hair.
The researchers delicately removed these membranes from the mice and suspended them in a wave chamber. By photographing them, they could see the sound waves at work. How wide or narrow those holes are, the researchers found, holds the key to our ability to tune in and out of conversations in a noisy room. "This is the first study to suggest that porosity may affect cochlear tuning," William Brownell, an ear, nose, and throat scientist, told MIT News. (Brownell was not involved in this research.)
They found that membrane pores of a certain size were optimal for picking out conversations; any smaller or larger, and hearing was impaired. This turns previous hypotheses on their heads. Before, scientists thought that neural sensitivity to frequency (how high or low a sound's pitch is) was the key. Not so: hypersensitivity to frequency is actually associated with worse sound filtering. Besides, humans are able to tune in and out more quickly than neurons can relay information.
The new research resolves the puzzle by describing a built-in mechanical process: our ears simply evolved the physical structure over time. "It really changes the way we think about this structure," co-author Roozbeh Ghaffari told MIT News. Now that we know how this sound discrimination works, the team says engineers could eventually apply it to machines. Hearing aids and computers (like Siri?) that must pick out speech in loud environments could be improved based on this new understanding of sound filtering.
© 2012 iScience Times All rights reserved. Do not reproduce without permission.