Unit III Hearing Aids
Anatomy of ear, Common tests – audiograms, air conduction, bone conduction, masking
techniques, SISI, Hearing aids – principles, drawbacks in the conventional unit, DSP based
hearing aids.
I. ANATOMY OF EAR
Outer Ear
The visible part of the ear is called the Pinna or the Auricle. The pinna is made of
cartilage.
The outer ear is concerned with the transmission of sound.
The outer ear consists of the Pinna, the ear canal and the outer layer of the eardrum,
also called the Tympanic membrane.
The ear canal is filled with air and is about 2.5cm long.
The skin surrounding the ear canal contains glands that secrete ear wax.
Ear wax is part of the ear's protection mechanism.
Middle Ear
The middle ear is a small air filled space connecting the outer and inner ear.
The primary function of the middle ear is to conduct sound waves through the
tympanic membrane to the cochlea via the ear bones.
The 3 smallest bones in the body are in the middle ear; they are called the hammer
(malleus), anvil (incus) and stirrup (stapes).
These bones are collectively known as the ossicles. Sound waves cause them to
vibrate.
The eustachian tube is also inside the middle ear. The eustachian tube controls the
pressure within the ear.
Inner Ear
The inner ear has 2 main functions: to convert sound waves into electrical signals for
the brain and to maintain balance by detecting position and motion.
The inner ear has 3 main parts: the cochlea, the semi-circular canals and the
vestibule.
The cochlea is filled with liquid and acts like a microphone, converting sound waves
to nerve impulses that travel to your brain via the auditory nerve.
The vestibule and semi-circular canals both help you to balance.
II. COMMON TESTS – AUDIOGRAMS
An audiogram is a simplified graph of symbols representing the softest sounds that a person
can hear across a defined range of pitches.
Decibel (dB)
Decibel refers to the loudness of sounds. A sound low in dB is perceived as soft and a sound
high in dB is perceived as loud.
dB SPL vs. dB HL
Loudness of sound is typically measured in sound pressure level (dB SPL). The output of
hearing aids and assistive listening devices is displayed in dB SPL; however, auditory
thresholds (on an audiogram) are measured in hearing level (dB HL).
Frequency
The unit used to measure frequency is Hertz (Hz). The perceptual correlate of frequency is
pitch. As frequency increases, so does pitch. Examples of low frequency (low pitch) sounds
include drums, bass guitars, and vowels, while high frequency (high pitch) sounds include
consonants such as f, th, and s.
Hearing is typically tested between 250 and 8000 Hz, which is where most speech sounds
fall.
Auditory thresholds
Auditory thresholds are the softest sounds an individual can detect. They are plotted
between -10 and 110 dB HL at octave or mid-octave intervals from 125 to 8000 Hz. The
normal hearing listener can typically hear sounds as soft as 0 dB HL and when sounds are
above 100 dB HL they are generally considered to be uncomfortably loud.
KEY CONCEPTS
Conductive hearing losses (CHL) are characterized by a reduction in hearing ability despite a
normally functioning cochlea (inner ear). This type of hearing loss is caused by impaired
sound transmission through the ear canal, eardrum, and/or ossicular chain. Ear infections
and wax impaction are two common causes of this type of hearing loss. In conductive hearing
losses, air conduction thresholds are abnormal, bone conduction thresholds are normal, and
an air-bone gap is present.
Sensorineural hearing losses (SNHL) are characterized by a reduction in hearing ability due to disorders involving the
cochlea and/or the auditory nervous system. This type of hearing loss is usually irreversible.
Sensorineural hearing losses can be further divided into sensory and neural losses. A sensory
(cochlear) hearing loss occurs when the damage to the auditory system is located within the
cochlea. Noise induced and age related hearing losses are typically sensory in nature. A
neural (retrocochlear) hearing loss occurs when the damage to the auditory system is
beyond the level of the cochlea, ranging anywhere from the hearing nerve up to the brain. A
tumor on the hearing nerve can be one cause of a neural hearing loss. In sensorineural
hearing losses, air conduction and bone conduction thresholds are both abnormal, but are
impaired to approximately the same degree (no air-bone gap present).
Mixed hearing losses occur when both conductive and sensorineural components are
present. In mixed hearing losses, air conduction and bone conduction thresholds are both
abnormal, but air conduction thresholds are worse than bone conduction thresholds (an
air-bone gap is present).
Degree (or severity) of hearing loss is determined by looking at where one’s pure tone air
conduction thresholds were obtained (and are plotted on the audiogram). Degree of hearing
loss can be calculated by taking the average pure tone air conduction thresholds at several
frequencies and matching that number to a category of severity. A three frequency pure tone
average (PTA) at 500, 1000, and 2000 Hz is commonly used, although some entities utilize
higher frequencies (3000 and/or 4000 Hz) in order to encompass the higher frequency
speech areas. The PTA (500, 1000, and 2000 Hz) calculated for the above audiogram is
approximately 53 dB HL in each ear, a hearing loss in the moderate range. Degrees of hearing
sensitivity include: normal (< 25 dB HL), mild (26 to 40 dB HL), moderate (41 to 55 dB HL),
moderately-severe (56 to 70 dB HL), severe (71 to 90 dB HL), and profound (> 90 dB HL).
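As a worked illustration of the PTA calculation described above, here is a minimal Python sketch (the threshold values are hypothetical, chosen only to reproduce the approximately 53 dB HL average mentioned in the text):

```python
# Minimal sketch (hypothetical thresholds, not the source audiogram) of the
# three-frequency pure tone average and the severity categories listed above.

def pure_tone_average(thresholds_db_hl, freqs=(500, 1000, 2000)):
    """Average the air conduction thresholds (dB HL) at the given frequencies."""
    return sum(thresholds_db_hl[f] for f in freqs) / len(freqs)

def degree_of_loss(pta_db_hl):
    """Map a PTA value to the severity categories used in this section."""
    if pta_db_hl <= 25:
        return "normal"
    elif pta_db_hl <= 40:
        return "mild"
    elif pta_db_hl <= 55:
        return "moderate"
    elif pta_db_hl <= 70:
        return "moderately-severe"
    elif pta_db_hl <= 90:
        return "severe"
    return "profound"

right_ear = {500: 45, 1000: 55, 2000: 60}   # hypothetical thresholds giving a ~53 dB HL PTA
pta = pure_tone_average(right_ear)
print(round(pta, 1), degree_of_loss(pta))   # 53.3 moderate
```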
Configuration of hearing loss refers to the “shape” of one’s hearing loss. Audiograms are
always read by looking at an individual’s low frequency thresholds followed by their mid
frequency thresholds, and high frequency thresholds. For example, most individuals have
high frequency sensorineural hearing losses that are sloping in configuration, which
suggests that their hearing loss gets progressively worse with increasing frequency. As an
example, the audiogram with PTA of 53 dB above shows a sloping sensorineural hearing loss.
If air conduction and bone conduction thresholds are similar, this indicates that the outer
ear and middle ear can carry sounds to the cochlea without resistance, but there is an issue
in the inner ear or hearing nerve. This would be sensorineural hearing loss (SNHL).
Air-Bone Gap
What if air conduction thresholds and bone conduction thresholds are different? In this case
we speak of an air-bone gap. Just like it sounds, this is the difference between air
conduction thresholds and bone conduction thresholds.
[Air conduction threshold (dB)] – [Bone conduction threshold (dB)] = Air-bone gap (dB).
Understanding this simple measure can help you quickly identify if your patient’s
hearing loss is sensorineural, conductive, or a mixed hearing loss, which is a combination of
conductive and sensorineural hearing loss.
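The same logic can be expressed programmatically. The following minimal Python sketch applies the air-bone gap rule above at a single frequency; the 25 dB HL "normal" limit matches the severity table earlier, while the 10 dB gap criterion is an illustrative assumption rather than a value taken from this text:

```python
# Minimal sketch applying the air-bone gap rule at one frequency. The 25 dB HL
# "normal" limit follows the severity table above; the 10 dB gap criterion is
# an illustrative assumption.

def classify_loss(ac_db_hl, bc_db_hl, normal_limit=25, gap_limit=10):
    air_bone_gap = ac_db_hl - bc_db_hl
    if ac_db_hl <= normal_limit:
        return "normal hearing", air_bone_gap
    if air_bone_gap >= gap_limit:
        if bc_db_hl <= normal_limit:
            return "conductive", air_bone_gap    # BC normal, AC abnormal, gap present
        return "mixed", air_bone_gap             # both abnormal, gap present
    return "sensorineural", air_bone_gap         # both abnormal, no significant gap

print(classify_loss(ac_db_hl=50, bc_db_hl=10))   # ('conductive', 40)
print(classify_loss(ac_db_hl=55, bc_db_hl=50))   # ('sensorineural', 5)
print(classify_loss(ac_db_hl=70, bc_db_hl=40))   # ('mixed', 30)
```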
But why is it so important to identify the type of hearing loss?
With a pure sensorineural hearing loss, traditional hearing aids are generally effective,
although it can be difficult if there is high-frequency hearing loss or a more severe degree of
sensorineural hearing loss. However, if there is a conductive hearing loss present (above an
air-bone gap of approximately 25–35 dB), any amplification from a hearing aid cannot
clearly and effectively be transmitted to the inner ear. This can lead to muffled, distorted
sound quality from traditional hearing aids.
Here’s where bone conduction hearing solutions shine. A bone conduction system bypasses
the conductive structures of the outer and middle ear, so it can send sound vibrations
directly to the cochlea. This is what makes bone conduction solutions an ideal treatment
option when conductive hearing loss is present. Now let’s take a quick look at the bone
conduction systems that we’ll cover in these candidacy guidelines.
Modern Bone Conduction Systems
In the end, the most effective bone conduction system is the one your patients will wear. It
can be helpful to have multiple solutions available, but many bone conduction systems
compromise comfort & wearability in exchange for hearing performance, or vice versa.
That’s why we created the only two bone conduction systems designed to combine healthy
skin and optimal hearing performance for the best possible patient experience.
ADHEAR
ADHEAR is a non-surgical bone conduction solution that uses a unique no-pressure adhesive
adapter, so it offers all day wearing comfort. Thanks to this new design, ADHEAR can deliver
hearing performance that matches SoundArc, Softband, and other non-surgical bone
conduction systems—without any of the painful pressure and prominent headband.
This combination of comfort and performance makes ADHEAR an ideal non-surgical choice
for treating temporary or chronic conductive hearing loss in children and adults.
ADHEAR Candidacy
With the simple non-surgical adhesive adapter, your patients can easily test the immediate
benefits of ADHEAR for themselves.
Bone conduction thresholds should be between 0–25 dB across the frequency range from
500 Hz to 4,000 Hz.
Any air conduction thresholds are acceptable, because ADHEAR relies on bone conduction.
Temporary or chronic conductive hearing loss.
Unilateral or bilateral hearing loss.
BONEBRIDGE
If your patients are ready for a reliable implantable solution to conductive hearing loss, or
need a higher maximum power output, BONEBRIDGE is a one-of-a-kind active bone
conduction implant that offers direct-drive amplification with no feedback.
As the only active bone conduction implant, BONEBRIDGE is fully implanted under the skin,
so your patients won’t have to worry about skin infections or osseointegration failures
commonly associated with the percutaneous screw of the Baha Connect or Ponto systems.
The external audio processor is held in place by gentle magnetic attraction, offering all-day
wearing comfort.
BONEBRIDGE offers an excellent option for conductive or mixed hearing loss, as well as
single-sided deafness
BONEBRIDGE Candidacy
If your patients want a reliable, implantable bone conduction solution, BONEBRIDGE is the
right choice. It’s also the best solution for patients who want more amplification, because
BONEBRIDGE offers a higher maximum power output thanks to direct-drive active bone
conduction. This transcutaneous direct-drive technology maximizes the efficiency of sound
transfer and eliminates any microphone feedback for the best possible sound quality.
Bone conduction thresholds should be between 0–45 dB from 500 Hz through 4,000 Hz.
If bone conduction thresholds are normal, but there’s a moderate air-bone gap (e.g., a 30 dB
PTA gap), this indicates a pure conductive hearing loss. This is often caused by temporary or
chronic conditions affecting the outer or middle ear.
In this case, the cochlea is able to effectively detect sounds through bone conduction (BC
thresholds of 0-20 dB), but the outer ear and middle ear structures are damaged or
completely missing (high air conduction thresholds).
An ADHEAR non-surgical solution can be an effective treatment option in these cases. If the
conductive hearing loss is stable, the BONEBRIDGE implant offers a straightforward,
permanent solution that combines outstanding comfort and hearing performance.
In this case, there is also a 30 dB air-bone gap. But, as you’ll notice, the bone conduction
thresholds are in the mild-to-moderate hearing loss range (BC thresholds of 20–40 dB).
This indicates that this hearing loss is caused by a combination of mild-to-moderate
sensorineural hearing loss, plus a conductive hearing loss.
For mixed hearing loss, a bone conduction system can be very effective, but more
amplification is needed to overcome the sensorineural element at the cochlea. In this case,
the direct-drive design of the BONEBRIDGE Active Bone Conduction implant is the ideal
treatment option.
III. MASKING TECHNIQUES
Central masking
It has long been recognized that even with small to moderate amounts of masking noise in
the non-test ear, thresholds of the test ear shift by as much as 5–7 dB. The term central masking
is used to explain this phenomenon and is defined as a threshold shift in the test ear resulting
from the introduction of a masking signal into the non-test ear that is not due to crossover.
Central masking occurs due to an inhibitory response within the central nervous system,
behaviorally measured as small threshold shifts in the presence of masking noise. As a result,
the signal intensity levels must be raised to compensate for the attenuation effect from the
neural activity. Both pure tone and speech thresholds are affected similarly by the central
masking phenomenon.
This refers to the inability of the brain to identify a tone in the presence of masking, even
when the tone and the masker are presented to opposite ears; hence the masking occurs
centrally rather than peripherally (in the cochlea).
Central masking is a phenomenon that occurs beyond the ear, during central auditory
processing.
This phenomenon occurs when two stimuli are presented binaurally through well-
insulated headphones: A test signal sounds in one ear while a masker sounds in the opposite
ear. Although no direct interference between the stimuli occurs, a person’s perceptual
threshold for the test signal increases and the signal becomes more difficult to detect. This
effect is most commonly apparent at the higher masking levels.
IV. SISI
The Short Increment Sensitivity Index (SISI) test is a valuable tool in audiology for evaluating
the function of the inner ear and auditory nerve. Let’s delve into what this test entails, its
purpose, and how it aids in diagnosing hearing disorders.
Test Procedure:
The SISI test requires the following equipment:
Headphones or insert phones
A response button
Here’s how the test is conducted:
1. Select the desired test frequency and set the input level 20 dB above threshold.
2. In the most common type of SISI test, the incremental steps are set to 1 dB.
3. Prior to the actual test, a trial with 2 or 5 dB steps ensures patient understanding.
4. The patient is informed that they will hear a series of tones.
5. If there’s a sudden change in loudness during tone presentation, the patient presses the
response button.
6. The system counts the number of reactions over 20 presentations to calculate a SISI score.
7. Repeat the test for all desired test frequencies.
Interpreting Results:
The SISI test should be conducted at 20 dB SL (Sensation Level) for all tested frequencies.
A low SISI score may indicate retrocochlear damage.
Conclusion
The SISI test provides crucial insights into the functioning of the auditory system. By
assessing the ability to detect minute intensity changes, audiologists can better diagnose and
manage hearing-related conditions. Remember, though, that this test is just one piece of the
comprehensive audiological puzzle.
The Short Increment Sensitivity Index (SISI) test is a hearing assessment that evaluates the
function of the inner ear and auditory nerve. It measures the ability of the inner ear to detect
small changes in sound intensity that patients with normal hearing wouldn’t be able to
perceive.
The ability to recognise and respond to such small changes is linked to loudness recruitment,
an abnormal growth of loudness that usually results from damage to the cochlea (cochlear
origin) rather than from damage to the ascending auditory pathway (retrocochlear origin).
This can regularly be used in a test battery when assessing a patient’s suitability for a
cochlear implant, as it helps differentiate between cochlear and retrocochlear lesions.
The patient will need to respond to these increments to indicate they can perceive the
change. A normal listener shouldn’t be able to perceive an increment this small. In total, 20
increment bursts will happen and the test result is calculated as a percentage.
Overall, the SISI test is a valuable tool for assessing hearing function and can help healthcare
providers diagnose and treat a variety of conditions that affect the inner ear and auditory
nerve. It can also be used to differentiate between cochlear and retro cochlear disorders
because a patient with a cochlear disorder can perceive the increments of 1dB, whereas a
patient with a retro cochlear disorder can’t.
Here are some basic steps for performing a SISI test:
1. At the start of the test, make sure the patient is comfortable and understands
the procedure
2. Provide the patient with headphones or insert earphones
3. Play a tone at a specific frequency and supra-threshold loudness. This will
serve as the reference tone and will be continuous. The frequency and loudness
should vary depending on the individual's hearing level as well as specific testing
protocol
4. Present a second tone as a short-duration increment, this is the "stimulus", at
the same frequency as the reference tone but at a slightly higher intensity (1dB
higher)
5. Ask the patient to flag if they perceive a change in loudness between the
reference tone and the stimulus tone by pressing a button, raising a hand, or giving a
verbal response
6. This stimulation should be repeated 20 times to provide a percentage score of
how many correct increments the patient has identified
7. Calculate the SISI score by dividing the number of correct responses by the
total number of trials. The result should be expressed as a percentage (see the
sketch after this list). A score of 0-20% is considered normal, while a score of
20-65% is inconclusive, and a score above 75% is positive.
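A minimal Python sketch of this scoring step (hypothetical; the interpretation cut-offs follow the 70% and 30% criteria quoted later in this section):

```python
# Minimal sketch of SISI scoring (hypothetical; cut-offs follow the 70% / 30%
# interpretation given later in this section).

def sisi_score(detected, presented=20):
    """Percentage of the 1 dB increments that the patient detected."""
    return 100.0 * detected / presented

def interpret(score_percent):
    if score_percent >= 70:
        return "positive: suggests a cochlear lesion (recruitment)"
    if score_percent <= 30:
        return "negative: normal listener or retrocochlear/central lesion"
    return "inconclusive"

score = sisi_score(detected=17)        # patient flagged 17 of the 20 increments
print(score, "->", interpret(score))   # 85.0 -> positive: suggests a cochlear lesion
```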
It is essential to note that the SISI test should be performed by a licensed audiologist or
trained healthcare professional. The interpretation of results requires expertise and should
be done in the context of the patient's overall hearing evaluation.
The SISI test should be conducted at 20dB SL for all frequencies. If the patient does not
manage to get a high score on the SISI test, this could be indicative of retro cochlear damage.
The final calculation of the percentage score indicates whether the test has shown a positive
or negative indication of recruitment.
If the patient scores above 70%, this is a positive indication and is indicative of the loss being
due to a cochlear lesion. Results lower than 30% are perceived to be negative and indicative
of a normal listener or a patient with a central lesion (as a cause of their hearing impairment).
Results within the range of 30% to 70% are indifferent and a lesion can’t be determined.
V. HEARING AIDS – PRINCIPLES
Hearing aids can't restore normal hearing, but they're packed with advanced technology that
can amplify and process select sounds to improve speech recognition.
At its core, every hearing aid works the same way: a microphone captures sound, an
amplifier makes the sound louder, and a receiver (similar to a speaker) outputs the amplified
sound into the wearer’s ear.
In young people, healthy hearing spans a frequency range from 20-20,000Hz. As we age, we
progressively lose the ability to hear high-frequency sounds.
Age-related and noise-induced hearing loss typically result from damage to the hair cells;
this is also known as sensorineural hearing loss. When hair cells stop functioning, we lose
sensitivity to certain frequencies, meaning some sounds seem much quieter or become
inaudible. For as long as sounds remain audible, sensorineural hearing loss responds well to
amplification.
Hearing loss can also originate from damage to other parts of the ear, in which case hearing
aids might be of limited use. For example, when the eardrum or ossicles can’t transmit sound,
amplification alone might not be able to bypass the damaged middle ear to reach the cochlea.
Hearing aids do more than amplify sounds. To create a custom sound profile that improves
speech recognition, they process sounds in multiple ways.
Microphones
Like the human ear, digital hearing aids don’t process sound waves directly. First in line are
the microphones. They act as transducers, capturing mechanical wave energy and
converting it into electrical energy.
The analog signals coming from the microphones are converted into a digital signal (A/D).
The binary signal is subject to digital signal processing (DSP). The processed digital signal is
then converted back into an acoustic signal (D/A), which enters the ear canal through the
receiver.
Figure: A block diagram outlining the analog-to-digital (A/D) and digital-to-analog (D/A)
conversion in a hearing aid (The Hearing Review).
Much can go wrong during these conversions. The Hearing Review notes:
This (conversion) process, if not carefully designed, can introduce noise and distortion, which
will ultimately compromise the performance of the hearing aids.
A key issue of the A/D conversion is its limited dynamic range. Average human hearing spans
a dynamic range of 140dB, helping us hear anything from rustling leaves (0dB) to fireworks
(140dB). The still common 16-bit A/D converter is limited to a dynamic range of 96dB,
which, like a CD, could range from 36-132dB. While raising the lower limit eliminates soft
sounds, for example whispering at 30dB, lowering the upper limit yields poorer sound
quality in loud environments.
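The 96 dB figure quoted above follows directly from the converter's bit depth; a quick check (illustrative arithmetic only, not taken from the source):

```python
# Quick check of the 96 dB figure quoted above (illustrative arithmetic): the
# theoretical dynamic range of an N-bit converter is about 20*log10(2**N) dB.
import math

for bits in (16, 20, 24):
    print(bits, "bits ->", round(20 * math.log10(2 ** bits), 1), "dB")
# 16 bits -> 96.3 dB, 20 bits -> 120.4 dB, 24 bits -> 144.5 dB
```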
Generally, hearing loss reduces an individual’s dynamic range, often to as little as 50dB. This
narrows the range that sounds need to be compressed into to remain audible and yet sound
comfortable.
Human hearing can span frequencies from 20 to 20,000Hz and a dynamic range from -10 up
to 125+dB SPL.
While linear amplification would make soft sounds louder, ideally audible, it would also
make loud sounds uncomfortable or even painful. Hence, most hearing aids use wide
dynamic range compression (WDRC). This compression method strongly amplifies soft
sounds, only applies a moderate amount of amplification to medium sounds, and barely
makes loud sounds any louder.
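A minimal sketch of how WDRC gain behaves with input level, assuming an illustrative compression threshold, linear gain, and compression ratio (none of these values come from the text):

```python
# Minimal sketch (illustrative values) of wide dynamic range compression: soft
# inputs get the full linear gain; above the compression threshold the gain
# shrinks by (1 - 1/ratio) dB per extra input dB, so loud sounds receive little
# or no additional amplification.

def wdrc_gain_db(input_db_spl, threshold_db=45.0, linear_gain_db=25.0, ratio=3.0):
    if input_db_spl <= threshold_db:
        return linear_gain_db
    over = input_db_spl - threshold_db
    return max(0.0, linear_gain_db - over * (1.0 - 1.0 / ratio))

for level in (30, 50, 70, 90):
    print(level, "dB SPL ->", round(wdrc_gain_db(level), 1), "dB gain")
# 30 -> 25.0, 50 -> 21.7, 70 -> 8.3, 90 -> 0.0
```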
Modern hearing aids process sounds based on the wearer’s hearing loss. To do that, they
split up frequencies into a number of channels, anywhere from three to over 40. Each
channel covers a different frequency range and is analyzed and processed separately.
While channels determine processing, bands control the volume or gain at different
frequencies. Most modern hearing aids have at least a dozen frequency bands they can
amplify. This is similar to headphone or speaker equalizers, where you can manually raise
the level of a specific frequency range, for example, to boost the bass.
Figure: A graphic representation of channels and bands within a digital hearing aid
(Acoustic Today, courtesy of Steve Armstrong).
The upside of having more channels and bands is increased finetuning. The hearing aid can
better separate speech from background noise, cancel out feedback, reduce noise, and match
the volume and compression at different frequencies to the wearer’s specific needs.
The downside of having additional channels is longer processing times. This can be a
problem for people who can still hear environmental sounds without a hearing aid. Studies
have shown that a processing delay of three to six milliseconds gets noticed as degraded
sound quality.
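For orientation, the delay contributed by block-based processing can be estimated from the block size and sample rate; the block sizes and filter-bank delay below are illustrative assumptions, not figures from the studies cited:

```python
# Rough latency estimate for block-based processing (assumed block sizes and
# sample rate): the signal is delayed by at least one block plus any
# analysis/synthesis filter-bank delay.

def processing_delay_ms(block_size, sample_rate_hz, filterbank_delay_samples=0):
    return 1000.0 * (block_size + filterbank_delay_samples) / sample_rate_hz

print(processing_delay_ms(32, 16000))       # 2.0 ms -- likely unnoticed
print(processing_delay_ms(96, 16000, 32))   # 8.0 ms -- in the range reported as audible
```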
No single hearing aid setting is perfect for all occasions. When you’re at home listening to
quiet music, you’ll need different processing than when you’re out with friends in a loud
environment. To address this, most hearing aids come with different programs that let you
switch from a default program to music-friendly processing or to a more aggressive
elimination of background noise.
While your hearing care provider can customize these programs to your hearing, they can
only cover a limited number of scenarios. And since you’ll be in a quiet environment for the
hearing aid fitting, they have to guesstimate your preferences for different sound scenes.
That’s where artificial intelligence (AI) comes in.
The Widex MOMENT My Sound AI can help users identify the best sound profile for every
environment.
Equipped with machine learning abilities, the AI can learn about different environments, tap
into the data from other users, predict the settings you’ll find most pleasant, and
automatically adapt its processing.
We’ve already established that hearing aids can’t restore normal hearing because they can’t
fix what’s broken. Their main task is to restore speech recognition by working with the
remaining hearing ability. But what happens when hearing loss affects the frequencies that
cover speech?
Figure: The frequencies between 2,000 and 4,000 Hz are most important for speech
intelligibility (DPA Microphones).
In non-tonal (Western) languages, the frequencies key to speech intelligibility range from
125-8,000Hz. That’s the exact bandwidth covered by standard hearing tests and hearing
aids. Remember, human hearing can reach up to 20,000Hz, typically 17,000Hz in a middle-
aged adult.
When hearing loss is so severe that it affects frequencies below 8,000 Hz, hearing aids can
shift those higher frequency sounds into a lower frequency band. Unfortunately, transposing
sounds in this manner creates artifacts.
Figure: The core components of a hearing aid – microphone, amplifier, receiver, and battery (Starkey).
This seems simple enough, yet you can choose from a number of different hearing aids and
features. What exactly happens inside a hearing aid, why does it never sound as good as
natural hearing, and why should you use a hearing aid in the first place?
The visible outer part of your ear, the pinna, captures and funnels sound to the ear canal and
middle ear, eventually it reaches the inner ear, followed by the auditory nerve, and finally
the auditory cortex; the region in the brain where we perceive sound. The sound is subject
to processing at every step of the way.
Like the pinna, the ear canal attenuates some frequencies, while emphasizing others. The
only things that can pass through a healthy eardrum (tympanic membrane) are sound waves.
On the inner side, the membrane connects to a series of small bones; the ossicles.
The interaction of the membrane with the ossicles transforms the sound waves into
mechanical energy. That’s necessary because otherwise the air-based sound waves would
simply bounce off of the higher density liquid in the cochlea and reflect back into the ear
canal. By matching the impedance (resistance) of the liquid, sound can pass into the cochlea.
In the cochlea, hair cells convert acoustic energy into electrochemical nerve impulses, a
process known as transduction. The outer hair cells act as amplifiers, which helps with the
perception of soft sounds, while the inner hair cells work as transducers. When the nerve
impulse is strong enough, the inner hair cells trigger the release of neurotransmitters,
activating the auditory nerve, which in turn relays the signal to the auditory cortex in the
brain.
The smallest and least visible units focus on amplification and simple processing. Larger
units, like behind-the-ear (BTE) or receiver-in-the-canal (RIC) models, can include an
additional microphone, a second processor, and a larger battery, and can apply more
complex algorithms.
What are the benefits of hearing aids?
On a physiological level, hearing aids do much more than restore speech recognition.
Multiple studies link age-related hearing loss to dementia, Alzheimer’s disease, depression,
and other forms of cognitive decline. One proposed reason is social isolation; people with
hearing loss avoid socializing. Why do they retreat? Even with mild to moderate hearing loss,
the brain has to work harder to process speech, which can be exhausting and frustrating.
Assuming the brain has limited resources, this increase in auditory processing may also
impair memory. The cognitive load theory is compelling and well-studied, but it’s not the
only possible explanation.
Figure: Hearing loss can trigger a number of consequences, but other factors, like genetics or
lifestyle, can impact the outcomes (NCBI).
Other potential reasons for the correlation of cognitive decline and hearing loss are much
simpler. Hearing loss and mental decline could have a common cause, such as inflammation.
The lack of sensory input could also lead to structural changes in the brain (cascade
hypothesis). Finally, over-diagnosis (harbinger hypothesis) may be an issue, though recent
studies clearly confirm the cognitive load theory and cascade hypothesis and suggest that a
combination of causes isn’t uncommon.
That’s where hearing aids come in. By addressing hearing loss, they can slow down cognitive
decline, reduce the risk for late-life depression, and thus improve the quality of life;
regardless of causes. While hearing aids are a key component, a holistic treatment should
aim to address all underlying causes.
VI. DRAWBACKS OF CONVENTIONAL UNIT
As technology advances, hearing aids continue to improve. But in recent years, most
improvements have been limited to aesthetics, comfort, or secondary functions (e.g.,
wireless connectivity). With respect to their primary function—improving speech
perception—the performance of hearing aids has remained largely unchanged. While
audibility may be restored, intelligibility is often not, particularly in noisy environments
(Hearing Health Care for Adults. National Academies Press, 2016).
Why do hearing aids restore audibility but not intelligibility? To answer that question, we
need to consider what aspects of auditory function audibility and intelligibility depend on.
For a sound to be audible, it simply needs to elicit a large enough change in auditory nerve
activity for the brain to notice; almost any change will do. But for a sound to be intelligible,
it needs to elicit a very particular pattern of neural activity that the language centers of the
brain can recognize.
The key problem is that hearing loss doesn't just decrease the overall level of neural activity;
it also profoundly distorts the patterns of activity such that the brain no longer recognizes
them. Hearing loss isn't just a loss of amplification and compression; it also results in the
impairment of many other important and complex aspects of auditory function.
A good example is the creation of distortions: When a sound with two frequencies enters the
ear, an additional sound is created by the cochlea itself at a third frequency that is a complex
combination of the original two. These distortions are, of course, what we measure as
distortion product otoacoustic emissions (DPOAEs), and their absence indicates impaired
cochlear function.
But these distortions aren't only transmitted out of the cochlea into the ear canal. They also
elicit neural activity that is sent to the brain. While a hearing aid may restore sensitivity to
the two original frequencies by amplifying them, it does not create the distortions and, thus,
does not elicit the neural activity that would have accompanied the distortions before
hearing loss.
These distortions themselves may not be relevant when listening to broadband sounds like
speech, but they are representative of the complex functionality that hearing aids fail to
restore. Without this functionality, the neural activity patterns elicited by speech are very
different from those that the brain has learned to expect. Because the brain does not
recognize these new patterns, perception is impaired.
A useful analogy is to think of the ear and brain as two individuals having a conversation.
The effect of hearing loss is not simply that the ear now speaks more softly to the brain, but
rather that the ear now speaks an entirely new language that the brain does not understand.
Hearing aids enable the ear to speak more loudly, but make no attempt to translate what the
ear is saying into the brain's native language. In this sense, hearing aids are like tourists who
hope that by shouting they will be able to overcome the fact that they are speaking the wrong
language.
Why don't hearing aids correct for the more complex effects of hearing loss? In severe cases
of extensive cochlear damage, it may be impossible. Even when hearing loss is only
moderate, it is not yet clear how a hearing aid should transform incoming sounds to elicit the
same neural activity patterns as the original sounds would have elicited before hearing loss.
EVOLVING OPPORTUNITIES
But there is reason for optimism. In recent years, advances in machine learning have been
used to transform many technologies, including medical devices (Nature. 2015 May
28;521(7553):436). In general, machine learning is used to identify statistical dependencies
in complex data. In the context of hearing aids, it could be used to develop new sound
transformations based on comparisons of neural activity before and after hearing loss.
But machine learning is not magic; to be effective, it needs large amounts of data. Fortunately,
there have also been recent advances in experimental tools for recording neural activity (J
Neurophysiol. 2015 Sep;114(3):2043; Curr Opin Neurobiol. 2018 Feb 10;50:92). These new
tools allow recordings from thousands of neurons at the same time and, thus, should be able
to provide the required “big data.”
In audiology, hearing loss is quantified by measuring hearing thresholds for each ear at
different frequencies, but hearing loss is not a linear reduction in ear sensitivity that can be
compensated for by a simple, or even frequency-dependent, linear sound amplification, as
shown in Figure 1.
meaning that the degree and type of hearing loss can vary significantly depending on the
specific frequencies and sound levels involved.
One of the main characteristics of hearing loss is "loudness recruitment," where the
perceived loudness of sounds grows much faster for people with hearing loss than for those
with normal hearing. As a result, the difference in sound level between "barely heard" and
"normal" loudness becomes very small for people with hearing loss, which can explain why
they often have difficulty hearing others speaking to them in a normal voice.
This phenomenon has led to the development of multi-channel Wide Dynamic Range
Compression (WDRC) for hearing loss compensation in hearing aids. However, hearing loss
is a complex, non-linear, frequency-dependent, and cognition-related phenomenon that
cannot be universally compensated by amplifying sounds. Nonetheless, proper hearing
amplification can still benefit most people with hearing loss to some extent. Our goal is to
make such amplification as efficient as possible.
Most of the digital signal processing in hearing aids is done in frequency sub-bands. Digital
signal processing is sequential, block by block. After all the processing is done, the full-band
signal is resynthesized (combined) from the processed sub-band signals. In some cases, the
processing blocks can receive sub-band signals from multiple sources and combine them
into a set of sub-band signals. Acoustic beamforming is one type of this processing, shown in
Figure 3.
Splitting into and combining frequency sub-bands can be accomplished using various and
well-known methods such as the short-time Fourier transform (STFT) or filter banks. Finite
impulse response (FIR) or infinite impulse response (IIR) filter banks can be used, each
having its own advantages and disadvantages. For example, IIR filter banks provide the
shortest delays but are less suitable for implementing other algorithms such as adaptive
beamforming, noise reduction, and others. When STFT is used for decomposition, the sub-
bands contain complex numbers and are said to be processed in the "frequency domain."
The number of sub-bands, their frequency distribution, and overlap depend on the method
of sub-band decomposition as well as a tradeoff between frequency resolution, processing
delay, computational complexity, available data memory on a given chip, and other affected
parameters.
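As a concrete illustration of STFT-based sub-band processing, here is a minimal Python sketch using SciPy's stft/istft; the frame length, test signal, and per-band gain are assumptions standing in for the real per-channel processing:

```python
# Minimal sketch (illustrative values) of STFT-based sub-band processing:
# analyze the signal into complex sub-band frames, apply a per-band gain
# (an arbitrary high-frequency boost standing in for real per-channel
# processing), then resynthesize the full-band signal by overlap-add.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 500 * t) + 0.3 * np.sin(2 * np.pi * 3000 * t)  # toy test signal

f, frames, X = stft(x, fs=fs, nperseg=128)   # analysis: sub-band (frequency-domain) signals
gains = np.where(f > 2000, 4.0, 1.0)         # illustrative per-band gain
Y = X * gains[:, None]                       # process each sub-band independently
_, y = istft(Y, fs=fs, nperseg=128)          # synthesis: recombine into a full-band signal
```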
The channels are amplified/compressed differently according to a specific hearing loss. Each
channel usually contains signal frequencies that correspond to a certain predefined
frequency range and partially overlap. The channels can correspond to the frequency sub-
bands one-to-one or combine several and even an unequal number of sub-bands, depending
on the required frequency resolution. In practical implementations for hearing aids, the
number of frequency channels can vary from 4 to 64.
Most modern digital hearing aids are equipped with two microphones, positioned apart from
each other (front and back) for acoustic beamforming. This technique generates a directional
"sensitivity beam" toward the desired sound, preserving sounds within the beam while
attenuating others. Typically, sounds from the front are preserved, while sounds from the
sides and rear are considered unwanted and suppressed. However, different beamforming
strategies can be applied in specific environments.
Acoustic beamforming relies on a slight time delay, typically in tens of microseconds, for
sounds to reach the two microphones. This time difference is dependent on the direction of
the sound and the distance between the microphones, as illustrated in Figure 5.
Front sounds reach the front microphone first, side sounds arrive simultaneously, and rear
sounds reach the rear microphone first. Acoustic beamforming utilizes this time difference
to create a variable sensitivity called a "polar pattern" or "polar sensitivity pattern" that is
directionally dependent.
While sensitivity to front sounds stays consistent, sensitivity to sounds from other directions
decreases with one angle having zero sensitivity. However, these theoretical polar patterns
only apply in a free field and are not realistic in practical situations due to acoustic distortion
caused by the user's head and torso.
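A minimal sketch of a fixed delay-and-subtract (first-order differential) two-microphone beamformer evaluated in a free field; the 12 mm spacing, 1 kHz test frequency, and the internal delay chosen to place the null at the rear are illustrative assumptions:

```python
# Minimal free-field sketch (assumed 12 mm spacing, 1 kHz tone) of a fixed
# first-order differential beamformer: delay the rear microphone by the
# inter-microphone travel time and subtract it from the front microphone,
# which places the null of the polar pattern at the rear (180 degrees).
import numpy as np

c = 343.0   # speed of sound in air, m/s
d = 0.012   # microphone spacing, m (illustrative assumption)

def response(angle_deg, freq_hz):
    """Magnitude of front_mic - delayed(rear_mic) for a plane wave from angle_deg (0 = front)."""
    tau_acoustic = d * np.cos(np.radians(angle_deg)) / c  # rear mic lags the front mic by this
    tau_internal = d / c                                  # electronic delay applied to the rear mic
    w = 2 * np.pi * freq_hz
    return abs(1 - np.exp(-1j * w * (tau_acoustic + tau_internal)))

for angle in (0, 90, 180):
    print(angle, "deg ->", round(response(angle, 1000), 3))
# 0 deg (front) -> 0.437, 90 deg (side) -> 0.219, 180 deg (rear) -> 0.0 (null)
```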
Acoustic beamforming can be either fixed or adaptive. Fixed beamforming has a static polar
pattern that is independent of the microphone signals and acoustic environment, making it
easy to implement and computationally efficient. Adaptive beamforming, on the other hand,
changes the polar pattern in real time to steer the null of the sensitivity polar pattern toward
the direction of the strongest noise, optimizing array performance.
Adaptive acoustic beamforming can be implemented in either the full frequency band or in
frequency sub-bands. The full band scheme optimizes an average performance with a
uniform polar pattern across all frequencies in the hearing aid frequency range, which may
not be optimal for individual frequencies. In the sub-band scheme, polar patterns are tailored
to different frequency regions to optimize performance across frequencies. This can be more
efficient for practical scenarios where noise comes from different directions in different
frequency regions.
Noise Reduction
The primary concern of individuals with hearing loss is difficulty communicating in noisy
environments. While acoustic beamforming can effectively reduce noise from specific
directions, it has a limited ability to mitigate diffuse noise - noise that emanates from
multiple sources without a distinct direction. Diffuse noise arises from multiple noise
reflections off hard surfaces in a reverberant space, which further complicates noise
directionality. For example, noise inside a car is often diffuse, as uncorrelated noises from
various sources such as the engine, tires, floor, and roof are diffused by reflections off the
windows.
Compensating for hearing loss with WDRC amplifies sounds that are below the user's
hearing threshold, but it can also amplify noise and reduce the contrast between noise and
the desired sound. Noise reduction technologies aim to mitigate these issues by suppressing
noise prior to amplification by WDRC, thereby improving the output Signal to Noise Ratio
(SNR).
Today, most noise reduction technologies used in hearing aids are based on spectral
subtraction techniques. The input signal is divided into frequency sub-bands or channels,
and the noise is assumed to have relatively low amplitude and be stationary across sub-
bands, while speech exhibits dynamic spectral changes with rapid fluctuations in sub-band
amplitude. If the average noise amplitude spectrum is known or estimated, a simple "spectral
subtraction" strategy can be employed to suppress noise, as shown in Figure 6.
The upper plot illustrates the sub-band signal amplitudes of a speech in noise signal across
different frequency sub-bands. The relatively stationary noise amplitudes are averaged by
the red line, while the speech signal amplitudes are significantly higher. In the lower plot, the
sub-band noise amplitude shown by the red line in the upper plot has been subtracted from
the sub-band input amplitudes using the spectral subtraction technique, which is applied to
all sub-bands. Finally, the output full-band signal is reconstructed using the same method
used for splitting, with the noise effectively reduced.
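A minimal Python sketch of sub-band spectral subtraction along the lines described above; the STFT parameters, the noise-only reference, and the spectral floor are illustrative assumptions rather than details from the text:

```python
# Minimal sketch of sub-band spectral subtraction (illustrative parameters):
# estimate the average noise magnitude per sub-band, subtract it from every
# frame, keep a small floor to avoid over-subtraction, and resynthesize using
# the noisy phase.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(noisy, noise_only, fs=16000, nperseg=256, floor=0.05):
    _, _, X = stft(noisy, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise_only, fs=fs, nperseg=nperseg)
    noise_mag = np.abs(N).mean(axis=1, keepdims=True)            # average noise spectrum ("red line")
    mag = np.maximum(np.abs(X) - noise_mag, floor * np.abs(X))   # subtract, with a spectral floor
    Y = mag * np.exp(1j * np.angle(X))                           # reuse the noisy phase
    _, y = istft(Y, fs=fs, nperseg=nperseg)
    return y

# Toy usage: a 400 Hz tone buried in white noise, with a noise-only reference.
rng = np.random.default_rng(0)
noise = 0.1 * rng.standard_normal(16000)
tone = np.sin(2 * np.pi * 400 * np.arange(16000) / 16000)
enhanced = spectral_subtract(tone + noise, noise)
```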
Alango Digital Signal Processing for Hearing Aids
Alango Technologies has developed a full range of state-of-the-art DSP algorithms, software,
and turnkey reference designs that can be scaled for various types of hearing enhancement
devices, including OTC hearing aids, as well as TWS earbuds and headsets with amplified
transparency and conversation boost.
Figure 7 displays the DSP technologies that are integrated into the Hearing Enhancement
Package (HEP) developed by Alango.
While some of the technologies share names with those shown in Figure 2, they are based
on Alango's proprietary algorithms, which have been refined based on our experience and
feedback received from customers.
To select or design a DSP for hearing aids, one needs to compare different DSP cores and
rank them according to certain criteria. The most important criteria are:
- Selection of available software algorithms
- Ease of porting unavailable algorithms
- Ease of debugging and optimization
- Power consumption required to achieve the target uptime before recharging.
While we may have our own preferences, Alango partners with all major DSP IP providers
and can deliver optimized solutions for them. Please contact us for more details.