The way humans experience music and speech differs from what was previously thought. That is the finding of a study conducted by researchers from Linköping University in Sweden and Oregon Health & Science University in the United States. The findings, which have recently been published in the journal Science Advances, could improve cochlear implant design.
We are sociable beings. We value hearing other people’s voices, and we use our hearing to recognize and experience human speech. Sound that enters the outer ear is transmitted by the eardrum to the spiral-shaped inner ear, known as the cochlea. The cochlea is home to the outer and inner hair cells, the sensory cells of hearing. The “hairs” of the inner hair cells bend in response to sound waves, sending a signal via the auditory nerve to the brain, which interprets it as the sound we hear.
For the last 100 years, it has been assumed that each sensory cell has its own “optimal frequency” (a measure of the number of sound waves per second), the frequency that elicits the strongest response from that cell. On this view, a sensory cell with an optimal frequency of 1000 Hz responds significantly less strongly to sounds of slightly lower or higher frequency. It has also been thought that all parts of the cochlea function in the same way. The research team has now shown that this is not the case for the sensory cells that process low-frequency sound, below 1000 Hz. The vowel sounds of human speech fall into this category.
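The traditional picture described above can be sketched as a toy model: each cell has a tuning curve peaking at its optimal frequency and falling off to either side. The Gaussian shape and bandwidth here are our simplification for illustration, not the study's model.

```python
import math

def tuning_response(freq_hz, best_freq_hz, bandwidth_octaves=0.5):
    """Toy Gaussian tuning curve: relative response (0 to 1) of a
    hair cell with a given optimal ("best") frequency, falling off
    with distance in octaves. Shape and bandwidth are illustrative
    assumptions, not measured values."""
    octaves = math.log2(freq_hz / best_freq_hz)
    return math.exp(-((octaves / bandwidth_octaves) ** 2))

# A cell tuned to 1000 Hz responds maximally at 1000 Hz...
print(round(tuning_response(1000, 1000), 2))  # 1.0
# ...and noticeably less to slightly lower or higher frequencies.
print(round(tuning_response(800, 1000), 2))
print(round(tuning_response(1200, 1000), 2))
```

The new finding is that below 1000 Hz this sharply tuned picture breaks down: many cells respond at once, rather than each responding only near its own peak.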
“Our study shows that many cells in the inner ear react simultaneously to low-frequency sound. We believe that this makes it easier to experience low-frequency sounds than would otherwise be the case since the brain receives information from many sensory cells at the same time,” says Anders Fridberger, professor in the Department of Biomedical and Clinical Sciences at Linköping University.
The scientists believe that this arrangement makes our hearing system more robust: if some sensory cells are damaged, many others remain that can send nerve impulses to the brain.
It is not only the vowel sounds of human speech that lie in the low-frequency region; many of the sounds that make up music lie here too. Middle C on a piano, for example, has a frequency of around 262 Hz.
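The 262 Hz figure follows from standard equal-temperament tuning, which fixes A4 at 440 Hz and spaces each semitone by a factor of 2^(1/12). A short sketch (the function name is ours):

```python
def piano_key_frequency(key_number):
    """Equal-temperament frequency (Hz) of a piano key,
    using key 49 (A4 = 440 Hz) as the reference and
    twelve keys (semitones) per octave."""
    return 440.0 * 2 ** ((key_number - 49) / 12)

print(round(piano_key_frequency(40), 1))  # middle C (key 40): 261.6 Hz
print(round(piano_key_frequency(49), 1))  # A4 (key 49): 440.0 Hz
```

Middle C thus comes out at roughly 261.6 Hz, commonly rounded to 262 Hz, which sits well below the 1000 Hz boundary discussed above.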
These results may eventually be significant for people with severe hearing impairments. The most successful treatment currently available in such cases is a cochlear implant, in which electrodes are placed into the cochlea.
“The design of current cochlear implants is based on the assumption that each electrode should only give nerve stimulation at certain frequencies, in a way that tries to copy what was believed about the function of our hearing system. We suggest that changing the stimulation method at low frequencies will be more similar to the natural stimulation, and the hearing experience of the user should in this way be improved,” says Anders Fridberger.
The researchers now plan to examine how their new knowledge can be applied in practice. One of the projects they are investigating concerns new methods to stimulate the low-frequency parts of the cochlea.
These results come from experiments on the cochlea of guinea pigs, whose hearing in the low-frequency region is similar to that of humans.
Reference: “Best frequencies and temporal delays are similar across the low-frequency regions of the guinea pig cochlea” by George Burwood, Pierre Hakizimana, Alfred L Nuttall and Anders Fridberger, 23 September 2022, Science Advances.
The study was funded by the U.S. National Institutes of Health and the Swedish Research Council.