
UNIT III HEARING AIDS

Anatomy of ear, Common tests – audiograms, air conduction, bone conduction, masking
techniques, SISI, Hearing aids – principles, drawbacks in the conventional unit, DSP based
hearing aids.

I. ANATOMY OF EAR

Outer Ear

 The visible part of the ear is called the Pinna or the Auricle. The pinna is made of
cartilage.
 The outer ear is concerned with the transmission of sound.
 The outer ear consists of the Pinna, the ear canal and the outer layer of the eardrum,
also called the Tympanic membrane.
 The ear canal is filled with air and is about 2.5cm long.
 The skin surrounding the ear canal contains glands that secrete ear wax.
 Ear wax is part of the ear's protection mechanism.

Middle Ear

 The middle ear is a small air filled space connecting the outer and inner ear.
 The primary function of the middle ear is to conduct sound waves through the
tympanic membrane to the cochlea via the ear bones.
 The 3 smallest bones in the body are in the middle ear; they are called the hammer
(malleus), anvil (incus) and stirrup (stapes).
 These bones are collectively known as the ossicles. Sound waves cause them to
vibrate.
 The eustachian tube is also inside the middle ear. The eustachian tube controls the
pressure within the ear.
Inner Ear

 The inner ear has 2 main functions: to convert sound waves into electrical signals for
the brain and to maintain balance by detecting position and motion.
 The inner ear has 3 main parts: the cochlea, the semi-circular canals and the
vestibule.
 The cochlea is filled with liquid and acts like a microphone, converting sound waves
to nerve impulses that travel to your brain via the auditory nerve.
 The vestibule and semi-circular canals both help you to balance.

II. COMMON TESTS – AUDIOGRAMS

An Audiogram is a simplified graph of symbols representing the softest sounds that a person
can hear across a defined range of pitches.

Decibel (dB)

Decibel refers to the loudness of sounds. A sound low in dB is perceived as soft and a sound
high in dB is perceived as loud.

dB SPL vs. dB HL
Loudness of sound is typically measured in sound pressure level (dB SPL). The output of
hearing aids and assistive listening devices is displayed in dB SPL; however, auditory
thresholds (on an audiogram) are measured in hearing level (dB HL).

Frequency

The unit used to measure frequency is Hertz (Hz). The perceptual correlate of frequency is
pitch. As frequency increases, so does pitch. Examples of low frequency (low pitch) sounds
include drums, bass guitars, and vowels, while high frequency (high pitch) sounds include consonants such as f, th, and s.
Hearing is typically tested between 250 and 8000 Hz, which is where most speech sounds
fall.

Auditory thresholds

Auditory thresholds are the softest sounds an individual can detect. They are plotted
between -10 and 110 dB HL at octave or mid-octave intervals from 125 to 8000 Hz. The
normal hearing listener can typically hear sounds as soft as 0 dB HL and when sounds are
above 100 dB HL they are generally considered to be uncomfortably loud.

KEY CONCEPTS

Conductive hearing losses (CHL)

CHL are characterized by a reduction in hearing ability despite a normal functioning cochlea
(inner ear). This type of hearing loss is caused by impaired sound transmission through the
ear canal, eardrum, and/or ossicular chain. Ear infections and wax impaction are two
common causes of this type of hearing loss. In conductive hearing losses,
air conduction thresholds are abnormal, bone conduction thresholds are normal, and an air-
bone gap is present.

Sensorineural hearing losses (SNHL)

SNHL are characterized by a reduction in hearing ability due to disorders involving the
cochlea and/or the auditory nervous system. This type of hearing loss is usually irreversible.
Sensorineural hearing losses can be further divided into sensory and neural losses. A sensory
(cochlear) hearing loss occurs when the damage to the auditory system is located within the
cochlea. Noise induced and age related hearing losses are typically sensory in nature. A
neural (retrocochlear) hearing loss occurs when the damage to the auditory system is
beyond the level of the cochlea, ranging anywhere from the hearing nerve up to the brain. A
tumor on the hearing nerve can be one cause of a neural hearing loss. In sensorineural
hearing losses, air conduction and bone conduction thresholds are both abnormal, but are
impaired to approximately the same degree (no air-bone gap present).

Mixed hearing losses

Mixed hearing losses occur when both conductive and sensorineural components are
present. In mixed hearing losses, air conduction and bone conduction thresholds are both
abnormal, but air conduction thresholds are worse than bone conduction thresholds (an
air-bone gap is present).

Degree (or severity) of hearing loss

Degree (or severity) of hearing loss is determined by looking at where one’s pure tone air
conduction thresholds were obtained (and are plotted on the audiogram). Degree of hearing
loss can be calculated by taking the average pure tone air conduction thresholds at several
frequencies and matching that number to a category of severity. A three frequency pure tone
average (PTA) at 500, 1000, and 2000 Hz is commonly used, although some entities utilize
higher frequencies (3000 and/or 4000 Hz) in order to encompass the higher frequency
speech areas. For example, a PTA (500, 1000, and 2000 Hz) of approximately 53 dB HL in each
ear would indicate a hearing loss in the moderate range. Degrees of hearing
sensitivity include: normal (< 25 dB HL), mild (26 to 40 dB HL), moderate (41 to 55 dB HL),
moderately-severe (56 to 70 dB HL), severe (71 to 90 dB HL), and profound (> 90 dB HL).
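
As a worked illustration of the PTA calculation and severity categories described above, the short Python sketch below averages three illustrative air conduction thresholds and maps the result to a category. The threshold values and function names are invented for the example; this is a minimal sketch, not a clinical tool.

# Minimal sketch: three-frequency PTA and severity category (illustrative values).

def pure_tone_average(thresholds_db_hl, freqs=(500, 1000, 2000)):
    """Average the air conduction thresholds (dB HL) at the given frequencies."""
    return sum(thresholds_db_hl[f] for f in freqs) / len(freqs)

def severity(pta_db_hl):
    """Map a PTA to the severity categories listed above."""
    if pta_db_hl <= 25:
        return "normal"
    elif pta_db_hl <= 40:
        return "mild"
    elif pta_db_hl <= 55:
        return "moderate"
    elif pta_db_hl <= 70:
        return "moderately-severe"
    elif pta_db_hl <= 90:
        return "severe"
    return "profound"

# Example audiogram (one ear): frequency in Hz -> threshold in dB HL
right_ear = {250: 30, 500: 45, 1000: 55, 2000: 60, 4000: 70, 8000: 75}
pta = pure_tone_average(right_ear)
print(f"PTA = {pta:.0f} dB HL -> {severity(pta)} hearing loss")   # PTA = 53 dB HL -> moderate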

Configuration of hearing loss

Configuration of hearing loss refers to the “shape” of one’s hearing loss. Audiograms are
always read by looking at an individual’s low frequency thresholds followed by their mid
frequency thresholds, and high frequency thresholds. For example, most individuals have
high frequency sensorineural hearing losses that are sloping in configuration, which
suggests that their hearing loss gets progressively worse with increasing frequency. For
example, the 53 dB PTA case mentioned above could correspond to such a sloping sensorineural hearing loss.

AIR CONDUCTION VS. BONE CONDUCTION: CANDIDACY GUIDE FOR BONE CONDUCTION SYSTEMS
How do you choose the best treatment option for conductive hearing loss and mixed hearing
loss? Do you look at reference audiograms, patient preference, or clinical guidelines? When
should you step up to an implantable bone conduction system? We put together a simple
guide to air conduction and bone conduction thresholds and the air-bone gap to help you
easily determine which bone conduction solution is right for each patient. This guide for
clinicians covers conductive and mixed hearing loss, as well as bone conduction options for
unilateral hearing loss. First, we’ll have a quick review of the concepts and compare available
bone conduction systems. Then we’ll go over audiograms and show how to determine the
right bone conduction hearing solution for your patient.
Air Conduction & Bone Conduction Audiometry

Air Conduction Thresholds


In natural hearing, sound waves are carried through the air. Air conduction relies on the
outer ear, middle ear, and inner ear. This makes air conduction audiometry an effective
measure of everyday hearing ability. Air conduction thresholds are usually marked with
an O for the right side and an X for the left side.
Bone Conduction Thresholds
Bone conduction audiometry sends vibrations via a bone oscillator directly to the inner ear,
so bone conduction thresholds should be equal to or better than air conduction thresholds
in the same ear. Bone conduction thresholds are often marked with a < for the right side or
a > for the left side.

If air conduction and bone conduction thresholds are similar, this indicates that the outer
ear and middle ear can carry sounds to the cochlea without resistance, but there is an issue
in the inner ear or hearing nerve. This would be sensorineural hearing loss (SNHL).
Air-Bone Gap
What if air conduction thresholds and bone conduction thresholds are different? In this case
we speak of an air-bone gap. Just like it sounds, this is the difference between air
conduction thresholds and bone conduction thresholds.
[Air conduction threshold (dB)] – [Bone conduction threshold (dB)] = Air-bone gap (dB).
Understanding this simple measure can help you quickly identify if your patient’s
hearing loss is sensorineural, conductive, or a mixed hearing loss, which is a combination of
conductive and sensorineural hearing loss.
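
The classification logic just described can be illustrated with a small Python sketch. The 25 dB "normal" cutoff and the 10 dB air-bone-gap criterion are assumptions chosen for the example, and the function name is made up; real interpretation is done per frequency on the full audiogram.

# Minimal sketch of the air-bone gap calculation and a rough loss-type classification.

def classify_loss(ac_db_hl, bc_db_hl, normal_limit=25, gap_limit=10):
    air_bone_gap = ac_db_hl - bc_db_hl           # [AC threshold] - [BC threshold]
    if ac_db_hl <= normal_limit:
        return "normal hearing", air_bone_gap
    if air_bone_gap >= gap_limit:
        if bc_db_hl <= normal_limit:
            return "conductive hearing loss", air_bone_gap
        return "mixed hearing loss", air_bone_gap
    return "sensorineural hearing loss", air_bone_gap

print(classify_loss(ac_db_hl=50, bc_db_hl=10))   # ('conductive hearing loss', 40)
print(classify_loss(ac_db_hl=55, bc_db_hl=50))   # ('sensorineural hearing loss', 5)
print(classify_loss(ac_db_hl=70, bc_db_hl=40))   # ('mixed hearing loss', 30)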
But why is it so important to identify the type of hearing loss?
With a pure sensorineural hearing loss, traditional hearing aids are generally effective,
although it can be difficult if there is high-frequency hearing loss or a more severe degree of
sensorineural hearing loss. However, if there is a conductive hearing loss present (an
air-bone gap above approximately 25–35 dB), any amplification from a hearing aid cannot
clearly and effectively be transmitted to the inner ear. This can lead to muffled, distorted
sound quality from traditional hearing aids.
Here’s where bone conduction hearing solutions shine. A bone conduction system bypasses
the conductive structures of the outer and middle ear, so it can send sound vibrations
directly to the cochlea. This is what makes bone conduction solutions an ideal treatment
option when conductive hearing loss is present. Now let’s take a quick look at the bone
conduction systems that we’ll cover in these candidacy guidelines.
Modern Bone Conduction Systems
In the end, the most effective bone conduction system is the one your patients will wear. It
can be helpful to have multiple solutions available, but many bone conduction systems
compromise comfort & wearability in exchange for hearing performance, or vice versa.
That’s why we created the only two bone conduction systems designed to combine healthy
skin and optimal hearing performance for the best possible patient experience.

 Optimal hearing outcomes


 Healthy, intact skin
 Comfortable, discreet wearing options

ADHEAR
ADHEAR is a non-surgical bone conduction solution that uses a unique no-pressure adhesive
adapter, so it offers all day wearing comfort. Thanks to this new design, ADHEAR can deliver
hearing performance that matches SoundArc, Softband, and other non-surgical bone
conduction systems, without the painful pressure or prominent headband.

This combination of comfort and performance makes ADHEAR an ideal non-surgical choice
for treating temporary or chronic conductive hearing loss in children and adults.

ADHEAR Candidacy
With the simple non-surgical adhesive adapter, your patients can easily test the immediate
benefits of ADHEAR for themselves.
Bone conduction thresholds should be between 0–25 dB across the frequency range from
500 Hz to 4,000 Hz.

 Any air conduction thresholds are acceptable, because ADHEAR relies on bone conduction.
 Temporary or chronic conductive hearing loss.
 Unilateral or bilateral hearing loss.

BONEBRIDGE
If your patients are ready for a reliable implantable solution to conductive hearing loss, or
need a higher maximum power output, BONEBRIDGE is a one-of-a-kind active bone
conduction implant that offers direct-drive amplification with no feedback.

As the only active bone conduction implant, BONEBRIDGE is fully implanted under the skin,
so your patients won’t have to worry about skin infections or osseointegration failures
commonly associated with the percutaneous screw of the Baha Connect or Ponto systems.
The external audio processor is held in place by gentle magnetic attraction, offering all-day
wearing comfort.

BONEBRIDGE offers an excellent option for conductive or mixed hearing loss, as well as
single-sided deafness.

BONEBRIDGE Candidacy
If your patients want a reliable, implantable bone conduction solution, BONEBRIDGE is the
right choice. It’s also the best solution for patients who want more amplification, because
BONEBRIDGE offers a higher maximum power output thanks to direct-drive active bone
conduction. This transcutaneous direct-drive technology maximizes the efficiency of sound
transfer and eliminates any microphone feedback for the best possible sound quality.

Bone conduction thresholds should be between 0–45 dB from 500 Hz through 4,000 Hz (a simple screening sketch of these candidacy ranges follows the list below).

 Any air conduction thresholds are acceptable.


 Ages 5 years and older.
 Absence of retro-cochlear or central auditory disorders.
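
Purely as an illustration of the candidacy ranges quoted above (not clinical guidance), a minimal Python sketch might screen bone conduction thresholds like this. The function and variable names are invented for the example, and only the numeric ranges come from the text.

# Hedged sketch of the quoted bone conduction candidacy ranges
# (ADHEAR: BC thresholds 0-25 dB at 500-4,000 Hz; BONEBRIDGE: 0-45 dB, age 5+).

TEST_FREQS = (500, 1000, 2000, 4000)

def bc_candidacy(bc_thresholds_db, age_years):
    """Return which systems the quoted ranges would allow for these BC thresholds."""
    worst_bc = max(bc_thresholds_db[f] for f in TEST_FREQS)
    options = []
    if worst_bc <= 25:
        options.append("ADHEAR (non-surgical)")
    if worst_bc <= 45 and age_years >= 5:
        options.append("BONEBRIDGE (active implant)")
    return options or ["outside the quoted bone conduction candidacy ranges"]

bc = {500: 20, 1000: 25, 2000: 35, 4000: 40}
print(bc_candidacy(bc, age_years=12))   # ['BONEBRIDGE (active implant)']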

Air-Bone Gap: Audiograms & Candidacy

If bone conduction thresholds are normal, but there's a moderate air-bone gap (e.g., 30 dB
PTA), this indicates a pure conductive hearing loss. This is often caused by temporary or
chronic conditions, such as:

 Otosclerosis (initial state)


 Stenosis of ear canal
 Chronic tympanic membrane perforation
 Persistent Eustachian tube dysfunction
 Persistently blocked ventilation tubes
 Chronic otitis externa
 Chronic otitis media with effusion (glue ear)
 Chronic wax impaction
Normal bone conduction thresholds indicate that this 30dB air-bone gap is a case of mild-to-
moderate conductive hearing loss. ADHEAR would be an effective non-surgical solution for
temporary conductive hearing loss, or chronic conditions such as damaged ossicles, chronic
otitis media, or other causes of conductive hearing loss.

Large Air-Bone Gap


A large air-bone gap indicates severe conductive hearing loss. This could be caused by
external auditory canal atresia, or other significant blockage of sound conduction. If the bone
conduction thresholds are relatively normal, this indicates a pure conductive hearing loss.

In this case, the cochlea is able to effectively detect sounds through bone conduction (BC
thresholds of 0-20 dB), but the outer ear and middle ear structures are damaged or
completely missing (high air conduction thresholds).

An ADHEAR non-surgical solution can be an effective treatment option in these cases. If the
conductive hearing loss is stable, the BONEBRIDGE implant offers a straightforward,
permanent solution that combines outstanding comfort and hearing performance.

Moderate Air-Bone Gap: Mixed Hearing Loss


Now for a challenge: A moderate air-bone gap (e.g., 30 dB PTA) indicates some degree
of conductive hearing loss. But is it a case of moderate conductive hearing loss or something
else?

In this case, there is also a 30 dB air-bone gap. But, as you'll notice, the bone conduction
thresholds are in the mild-to-moderate hearing loss range (BC thresholds of 20–40 dB).
This indicates that this hearing loss is caused by a combination of mild-to-moderate
sensorineural hearing loss, plus a conductive hearing loss.

For mixed hearing loss, a bone conduction system can be very effective, but more
amplification is needed to overcome the sensorineural element at the cochlea. In this case,
the direct-drive design of the BONEBRIDGE Active Bone Conduction implant is the ideal
treatment option.

Bonus Audiogram: Single-Sided Deafness


What if air and bone conduction thresholds are normal in one ear, but there's severe-to-
profound sensorineural hearing loss in the other ear? This pattern is single-sided deafness; a
bone conduction system such as BONEBRIDGE can pick up sound on the deaf side and route
it through the skull to the healthy cochlea on the other side.

III. MASKING TECHNIQUES

Purpose of the test


The technique of masking is used in order to isolate the test ear and ensure that results
obtained are the true thresholds of the test ear. In pure tone audiometry for both air
conduction and bone conduction it is possible that responses obtained are those of the non-
test ear.
Rationale
To establish the true threshold of detectability for air and bone conduction.
Air conduction pure tone audiometry
It is possible for sounds introduced into the test ear via headphones to be carried by bone
conduction across the skull and stimulate the cochlea of the non-test ear.
The amount of sound energy that is lost as it crosses the skull is known as transcranial
attenuation.
It varies in individuals between 40 and 85 dB.
It is accepted that if the difference between the air conduction thresholds of the two ears at
any frequency is 40 dB HL or greater, then it is possible that the response is due to stimulation
of the non-test ear.
When there is a difference of at least 40 dB HL, masking is introduced in order to
isolate the test ear and obtain true thresholds.
In masking a narrow band noise centered around the test frequency is introduced into the
non-test ear. This noise “occupies” the non-test ear and allows the test ear to respond at its
true threshold. Pure tones are presented into the test ear in the usual way until a true
threshold can be recorded.
Masking procedure in air conduction testing (This is known as Hood’s technique)
The procedure uses conventional headphones.
The adult/child is asked to listen to the narrow band noise in the non-test ear and indicate
when it is just audible. Increase the level by 20dB. Instruct the adult/child to ignore this noise
and listen for the signal.
Using the usual 10 down 5 up method, re-measure the threshold of the test ear.
Increase the masking level by 10dB.
Re-measure the threshold.
Repeat the process until for two successive increases in masking level the threshold does
not change.
This gives the true air conduction threshold of the test ear.
This technique is not recommended for very young children as they can find it difficult to
understand what to do. Generally, it can be done at around age seven.

Bone conduction pure tone audiometry


In bone conduction pure tone audiometry, masking is required when there is a gap of 15 dB
or more at any frequency between the unmasked bone conduction result and the air
conduction threshold. This gap is known as the air-bone gap.
Masking in bone conduction testing
The same method is used as for air conduction.
The bone conduction vibrator is placed on the mastoid process of the test ear.
Masking noise is introduced to the non-test ear through an insert earphone which is placed
in the ear canal and held in place by a hook over the pinna. The test tone is presented to the
test ear through the bone conduction vibrator.
Indicators of cross-hearing and the rules for masking
Rule 1: Masking is needed at any frequency where the difference between the left and right
not-masked A/C thresholds is 40 dB or more when using supra- or circum-aural earphones,
or 55 dB or more when using insert earphones.
Rule 2: Masking is needed at any frequency where the not-masked b-c threshold is more
acute than the air-conduction threshold of either ear by 10 dB or more. The worse ear (by
air conduction) would then be the test ear and the better ear would be the non-test ear to be
masked.
Rule 3: Masking will be needed additionally where Rule 1 has not been applied, but where
the b-c threshold of one ear is more acute by 40 dB (if supra or circum-aural earphones have
been used) or 55 dB (if insert earphones have been used) or more than the not-masked a-c
threshold attributed to the other ear.
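
A rough Python translation of the three rules, applied at a single test frequency, is sketched below. The 40/55 dB interaural attenuation figures come from the rules above; the function itself is only an illustrative simplification and its name and structure are assumptions.

# Hedged sketch of the masking rules at one frequency. Thresholds are in dB HL.

def masking_needed(ac_left, ac_right, bc_unmasked, inserts=False):
    ia = 55 if inserts else 40                 # interaural attenuation (inserts vs headphones)
    worse_ac = max(ac_left, ac_right)          # the worse ear by air conduction is the test ear
    # Rule 1: left/right air conduction thresholds differ by >= interaural attenuation
    rule1 = abs(ac_left - ac_right) >= ia
    # Rule 2: unmasked bone conduction is 10 dB or more better than the worse ear's AC
    rule2 = (worse_ac - bc_unmasked) >= 10
    # Rule 3: where Rule 1 did not apply, unmasked BC is better than the AC attributed
    # to the other ear by >= interaural attenuation
    rule3 = (not rule1) and (worse_ac - bc_unmasked) >= ia
    return rule1 or rule2 or rule3

# Example: right ear much worse by air conduction, normal unmasked bone conduction
print(masking_needed(ac_left=10, ac_right=60, bc_unmasked=5))   # True -> masking required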

Central masking
It has long been recognized that even with small to moderate amounts of masking noise in
the non-test ear, thresholds of the test ear shift by as much as 5–7 dB. The term central masking
is used to explain this phenomenon and is defined as a threshold shift in the test ear resulting
from the introduction of a masking signal into the non-test ear that is not due to crossover.
Central masking occurs due to an inhibitory response within the central nervous system,
behaviorally measured as small threshold shifts in the presence of masking noise. As a result,
the signal intensity levels must be raised to compensate for the attenuation effect from the
neural activity. Both pure tone and speech thresholds are affected similarly by the central
masking phenomenon.
This refers to the inability of the brain to identify a tone in the presence of masking, even
when the tone and the masker are heard in opposite ears; hence the masking occurs centrally
rather than peripherally (in the cochlea).
Central masking is a phenomenon that occurs beyond the ear, during central auditory
processing.
This phenomenon occurs when two stimuli are presented binaurally through well-
insulated headphones: A test signal sounds in one ear while a masker sounds in the opposite
ear. Although no direct interference between the stimuli occurs, a person’s perceptual
threshold for the test signal increases and the signal becomes more difficult to detect. This
effect is most commonly apparent at the higher masking levels.

IV. SISI

The Short Increment Sensitivity Index (SISI) test is a valuable tool in audiology for evaluating
the function of the inner ear and auditory nerve. Let’s delve into what this test entails, its
purpose, and how it aids in diagnosing hearing disorders.

What Is the SISI Test?


The SISI test measures an individual’s ability to recognize small changes in sound intensity.
Specifically, it assesses the patient’s capacity to detect 1 dB increases in intensity during a
series of bursts of pure tones. These pure tones are presented at a level 20 dB above the pure
tone threshold for the test frequency.

Why Is the SISI Test Important?


Differentiating Cochlear and Retrocochlear Disorders:
 Patients with normal hearing typically struggle to perceive these subtle intensity changes.
 However, the SISI test becomes particularly informative when distinguishing between
cochlear and retrocochlear disorders.
 A patient with a cochlear disorder will be able to perceive the 1 dB increments, while a
patient with a retrocochlear disorder will not.

Test Procedure:
 The SISI test requires the following equipment:
 Headphones or insert phones
 A response button
 Here’s how the test is conducted:
1. Select the desired test frequency and set the input level 20 dB above threshold.
2. In the most common type of SISI test, the incremental steps are set to 1 dB.
3. Prior to the actual test, a trial with 2 or 5 dB steps ensures patient understanding.
4. The patient is informed that they will hear a series of tones.
5. If there’s a sudden change in loudness during tone presentation, the patient presses the
response button.
6. The system counts the number of reactions over 20 presentations to calculate a SISI score.
7. Repeat the test for all desired test frequencies.
Interpreting Results:
 The SISI test should be conducted at 20 dB SL (Sensation Level) for all tested frequencies.
 A low SISI score may indicate retrocochlear damage.
Conclusion

The SISI test provides crucial insights into the functioning of the auditory system. By
assessing the ability to detect minute intensity changes, audiologists can better diagnose and
manage hearing-related conditions. Remember, though, that this test is just one piece of the
comprehensive audiological puzzle.

What is the Short Increment Sensitivity Index (SISI) test?

The Short Increment Sensitivity Index (SISI) test is a hearing assessment that evaluates the
function of the inner ear and auditory nerve. It measures the ability of the inner ear to detect
small changes in sound intensity that patients with normal hearing wouldn’t be able to
perceive.

The heightened ability to recognise and respond to such small changes is associated with
loudness recruitment, an abnormal growth of loudness that typically points to damage within
the cochlea (cochlear origin) rather than to the ascending auditory pathway (retrocochlear
origin).

The test is regularly used in a test battery, for example when assessing a patient's suitability
for a cochlear implant, as it helps differentiate between two different lesion sites.

The patient will need to respond to these increments to indicate they can perceive the
change. A normal listener shouldn’t be able to perceive an increment this small. In total, 20
increment bursts will happen and the test result is calculated as a percentage.

Overall, the SISI test is a valuable tool for assessing hearing function and can help healthcare
providers diagnose and treat a variety of conditions that affect the inner ear and auditory
nerve. It can also be used to differentiate between cochlear and retro cochlear disorders
because a patient with a cochlear disorder can perceive the increments of 1dB, whereas a
patient with a retro cochlear disorder can’t.
Here are some basic steps for performing a SISI test:

1. At the start of the test, make sure the patient is comfortable and understands
the procedure
2. Provide the patient with headphones or insert earphones
3. Play a tone at a specific frequency and supra-threshold loudness. This will
serve as the reference tone and will be continuous. The frequency and loudness
should vary depending on the individual's hearing level as well as specific testing
protocol
4. Present a second tone as a short-duration increment, this is the "stimulus", at
the same frequency as the reference tone but at a slightly higher intensity (1dB
higher)
5. Ask the patient to flag if they perceive a change in loudness between the
reference tone and the stimulus tone by pressing a button, raising a hand, or giving a
verbal response
6. This stimulation should be repeated 20 times to provide a percentage score of
how many correct increments the patient has identified
7. Calculate the SISI score by dividing the number of correct responses by
the total number of trials. The result should be expressed as a percentage. A score of
0-30% is considered negative (normal), a score of 35-65% is inconclusive, and a score
above 70% is positive.

It is essential to note that the SISI test should be performed by a licensed audiologist or
trained healthcare professional. The interpretation of results requires expertise and should
be done in the context of the patient's overall hearing evaluation.

The SISI test should be conducted at 20dB SL for all frequencies. If the patient does not
manage to get a high score on the SISI test, this could be indicative of retro cochlear damage.
The final calculation of the percentage score indicates whether the test has shown a positive
or negative indication of recruitment.

SISI score      Conclusion (recruitment)

70–100%         Positive
35–65%          Indifferent
0–30%           Negative

If the patient scores above 70%, this is a positive indication and is indicative of the loss being
due to a cochlear lesion. Results lower than 30% are perceived to be negative and indicative
of a normal listener or a patient with a central lesion (as a cause of their hearing impairment).
Results within the range of 30% to 70% are indifferent and a lesion can’t be determined.
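
The scoring and interpretation described above can be summarized in a short Python sketch. The cut-off bands follow the table above; the example responses and function names are invented for illustration.

# Minimal sketch of SISI scoring over 20 one-dB increments.

def sisi_score(responses):
    """responses: list of booleans, one per 1 dB increment presentation."""
    return 100.0 * sum(responses) / len(responses)

def interpret(score):
    if score >= 70:
        return "positive - suggests a cochlear lesion (recruitment present)"
    if score <= 30:
        return "negative - normal listener or central/retrocochlear lesion"
    return "indifferent - lesion site cannot be determined"

detected = [True] * 17 + [False] * 3      # patient detected 17 of 20 increments
score = sisi_score(detected)
print(f"SISI score: {score:.0f}% -> {interpret(score)}")   # 85% -> positive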
V. HEARING AIDS – PRINCIPLES
Hearing aids can't restore normal hearing, but they're packed with advanced technology that
can amplify and process select sounds to improve speech recognition.

At its core, every hearing aid works the same way: a microphone captures sound, an
amplifier makes the sound louder, and a receiver (similar to a speaker) outputs the amplified
sound into the wearer’s ear.


This seems simple enough, yet you can choose from a number of different hearing aids and
features. What exactly happens inside a hearing aid,
why does it never sound as good as natural hearing, and why should you use a hearing aid in
the first place?

Where does hearing loss happen?

In young people, healthy hearing spans a frequency range from 20-20,000Hz. As we age, we
progressively lose the ability to hear high-frequency sounds.

Age-related and noise-induced hearing loss typically result from damage to the hair cells;
this is also known as sensorineural hearing loss. When hair cells stop functioning, we lose
sensitivity to certain frequencies, meaning some sounds seem much quieter or become
inaudible. For as long as sounds remain audible, sensorineural hearing loss responds well to
amplification.

Hearing loss can also originate from damage to other parts of the ear, in which case hearing
aids might be of limited use. For example, when the eardrum or ossicles can’t transmit sound,
amplification alone might not be able to bypass the damaged middle ear to reach the cochlea.

How do digital hearing aids process sound?

Hearing aids do more than amplify sounds. To create a custom sound profile that improves
speech recognition, they process sounds in multiple ways.

Microphones
Like the human ear, digital hearing aids don’t process sound waves directly. First in line are
the microphones. They act as transducers, capturing mechanical wave energy and
converting it into electrical energy.

Modern hearing aids come with two microphones:

1. The omnidirectional microphone picks up sounds from any direction.


2. The directional microphone targets sounds coming from the wearer’s front; its main focus is
capturing speech.

The directional microphone can be fixed or adaptive. An adaptive directional microphone
can turn on or off as needed. When turned on, it automatically switches between
different directional microphone algorithms, depending on the listening environment.

Analog to Digital Conversion

The analog signals coming from the microphones are converted into a digital signal (A/D).
The binary signal is subject to digital signal processing (DSP). The processed digital signal is
then converted back into an acoustic signal (D/A), which enters the ear canal through the
receiver.

Figure: A block diagram outlining the analog-to-digital (A/D) and digital-to-analog (D/A)
conversion in a hearing aid (source: The Hearing Review).

Much can go wrong during these conversions. The Hearing Review notes:

This (conversion) process, if not carefully designed, can introduce noise and distortion, which
will ultimately compromise the performance of the hearing aids.

A key issue of the A/D conversion is its limited dynamic range. Average human hearing spans
a dynamic range of 140dB, helping us hear anything from rustling leaves (0dB) to fireworks
(140dB). The still common 16-bit A/D converter is limited to a dynamic range of 96dB,
which, like a CD, could range from 36-132dB. While raising the lower limit eliminates soft
sounds, for example whispering at 30dB, lowering the upper limit yields poorer sound
quality in loud environments.

Sounds that don’t make it through the converter can’t be amplified.
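
The 96 dB figure quoted above follows from the usual rule of thumb that an ideal N-bit converter spans roughly 20·log10(2^N) dB of dynamic range; the short calculation below illustrates this for a few common bit depths (the loop values are just examples).

# Worked arithmetic behind the quoted 16-bit / 96 dB dynamic range figure.
import math

for bits in (16, 20, 24):
    dr = 20 * math.log10(2 ** bits)
    print(f"{bits}-bit A/D converter: ~{dr:.0f} dB dynamic range")
# 16-bit: ~96 dB, 20-bit: ~120 dB, 24-bit: ~144 dB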

Amplification and frequency compression

Amplification is a key function of hearing aids, and it’s delicate.

Generally, hearing loss reduces an individual's dynamic range, often to as little as 50 dB. This
narrows the range into which sounds must be compressed to remain audible yet
comfortable.

Human hearing can span frequencies from 20 to 20,000Hz and a dynamic range from -10 up
to 125+dB SPL.

While linear amplification would make soft sounds louder, ideally audible, it would also
make loud sounds uncomfortable or even painful. Hence, most hearing aids use wide
dynamic range compression (WDRC). This compression method strongly amplifies soft
sounds, only applies a moderate amount of amplification to medium sounds, and barely
makes loud sounds any louder.
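
A minimal single-channel WDRC sketch is shown below, assuming an illustrative 30 dB gain, a 50 dB SPL knee point, and a 3:1 compression ratio (all made-up fitting values); it shows how soft inputs receive full gain while loud inputs receive very little.

# Minimal sketch of wide dynamic range compression (WDRC) on levels in dB SPL.

def wdrc_output_level(input_db_spl, gain_db=30, knee_db_spl=50, ratio=3.0):
    """Return the output level for a single-channel compressor."""
    if input_db_spl <= knee_db_spl:
        return input_db_spl + gain_db                          # linear region: full gain
    # compression region: each extra input dB adds only 1/ratio dB of output
    return knee_db_spl + gain_db + (input_db_spl - knee_db_spl) / ratio

for level in (30, 50, 70, 90):   # soft, moderate, loud, very loud inputs
    print(f"in {level} dB SPL -> out {wdrc_output_level(level):.0f} dB SPL")
# Soft sounds gain the full 30 dB, loud sounds gain only a few dB.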

Channels and bands

Modern hearing aids process sounds based on the wearer’s hearing loss. To do that, they
split up frequencies into a number of channels, anywhere from three to over 40. Each
channel covers a different frequency range and is analyzed and processed separately.

While channels determine processing, bands control the volume or gain at different
frequencies. Most modern hearing aids have at least a dozen frequency bands they can
amplify. This is similar to headphone or speaker equalizers, where you can manually raise
the level of a specific frequency range, for example, to boost the bass.
Figure: A graphic representation of channels and bands within a digital hearing aid
(source: Acoustics Today, courtesy of Steve Armstrong).

The upside of having more channels and bands is increased finetuning. The hearing aid can
better separate speech from background noise, cancel out feedback, reduce noise, and match
the volume and compression at different frequencies to the wearer’s specific needs.

The downside of having additional channels is longer processing times. This can be a
problem for people who can still hear environmental sounds without a hearing aid. Studies
have shown that a processing delay of three to six milliseconds gets noticed as degraded
sound quality.

So, what kind of processing happens at the channel level?

Custom programs and artificial intelligence for different environments

No single hearing aid setting is perfect for all occasions. When you’re at home listening to
quiet music, you’ll need different processing than when you’re out with friends in a loud
environment. To address this, most hearing aids come with different programs that let you
switch from a default program to music-friendly processing or to a more aggressive
elimination of background noise.

While your hearing care provider can customize these programs to your hearing, they can
only cover a limited number of scenarios. And since you’ll be in a quiet environment for the
hearing aid fitting, they have to guesstimate your preferences for different sound scenes.
That’s where artificial intelligence (AI) comes in.
Figure: The Widex MOMENT My Sound AI can help users identify the best sound profile for
every environment.

Equipped with machine learning abilities, the AI can learn about different environments, tap
into the data from other users, predict the settings you’ll find most pleasant, and
automatically adapt its processing.

Frequency shifting and lowering improves speech recognition

We’ve already established that hearing aids can’t restore normal hearing because they can’t
fix what’s broken. Their main task is to restore speech recognition by working with the
remaining hearing ability. But what happens when hearing loss affects the frequencies that
cover speech?

Figure: The frequencies between 2,000 and 4,000 Hz are most important for speech
intelligibility (source: DPA Microphones).

In non-tonal (Western) languages, the frequencies key to speech intelligibility range from
125-8,000Hz. That’s the exact bandwidth covered by standard hearing tests and hearing
aids. Remember, human hearing can reach up to 20,000Hz, typically 17,000Hz in a middle-
aged adult.

When hearing loss is so severe that it affects frequencies below 8,000 Hz, hearing aids can
shift those higher frequency sounds into a lower frequency band. Unfortunately, transposing
sounds in this manner creates artifacts.

The Hearing Review explains:


Some reported that the transposed sounds are “unnatural,” “hollow or echoic,” and “more
difficult to understand.” Another commonly reported artifact is the perception of “clicks” which
many listeners find annoying.

How does human hearing work?

The visible outer part of your ear, the pinna, captures and funnels sound to the ear canal and
middle ear; eventually it reaches the inner ear, followed by the auditory nerve, and finally
the auditory cortex, the region of the brain where we perceive sound. The sound is subject
to processing at every step of the way.

The outer ear


To experience how the pinna works, try holding your right hand behind your right ear. By
expanding the physical reach of your ear, you should notice that some sounds seem a bit
louder.

The middle ear

Like the pinna, the ear canal attenuates some frequencies, while emphasizing others. The
only things that can pass through a healthy eardrum (tympanic membrane) are sound waves.
On the inner side, the membrane connects to a series of small bones; the ossicles.

The interaction of the membrane with the ossicles transforms the sound waves into
mechanical energy. That’s necessary because otherwise the air-based sound waves would
simply bounce off of the higher density liquid in the cochlea and reflect back into the ear
canal. By matching the impedance (resistance) of the liquid, sound can pass into the cochlea.

The inner ear

In the cochlea, hair cells convert acoustic energy into electrochemical nerve impulses, a
process known as transduction. The outer hair cells act as amplifiers, which helps with the
perception of soft sounds, while the inner hair cells work as transducers. When the nerve
impulse is strong enough, the inner hair cells trigger the release of neurotransmitters,
activating the auditory nerve, which in turn relays the signal to the auditory cortex in the
brain.


Hearing aid types

The smallest and least visible units focus on amplification and simple processing. Larger
units, like behind-the-ear (BTE) or receiver-in-the-canal (RIC) models, can include an additional
microphone, a second processor, and a larger battery, and can apply more complex algorithms.
What are the benefits of hearing aids?

On a physiological level, hearing aids do much more than restore speech recognition.

Multiple studies link age-related hearing loss to dementia, Alzheimer’s disease, depression,
and other forms of cognitive decline. One proposed reason is social isolation; people with
hearing loss avoid socializing. Why do they retreat? Even with mild to moderate hearing loss,
the brain has to work harder to process speech, which can be exhausting and frustrating.
Assuming the brain has limited resources, this increase in auditory processing may also
impair memory. The cognitive load theory is compelling and well-studied, but it’s not the
only possible explanation.

Figure: Hearing loss can trigger a number of consequences, but other factors, like genetics or
lifestyle, can impact the outcomes (source: NCBI).

Other potential reasons for the correlation of cognitive decline and hearing loss are much
simpler. Hearing loss and mental decline could have a common cause, such as inflammation.
The lack of sensory input could also lead to structural changes in the brain (cascade
hypothesis). Finally, over-diagnosis (harbinger hypothesis) may be an issue, though recent
studies clearly confirm the cognitive load theory and cascade hypothesis and suggest that a
combination of causes isn’t uncommon.

That’s where hearing aids come in. By addressing hearing loss, they can slow down cognitive
decline, reduce the risk for late-life depression, and thus improve the quality of life;
regardless of causes. While hearing aids are a key component, a holistic treatment should
aim to address all underlying causes.
VI. DRAWBACKS OF THE CONVENTIONAL UNIT

As technology advances, hearing aids continue to improve. But in recent years, most
improvements have been limited to aesthetics, comfort, or secondary functions (e.g.,
wireless connectivity). With respect to their primary function—improving speech
perception—the performance of hearing aids has remained largely unchanged. While
audibility may be restored, intelligibility is often not, particularly in noisy environments
(Hearing Health Care for Adults. National Academies Press, 2016).

Why do hearing aids restore audibility but not intelligibility? To answer that question, we
need to consider what aspects of auditory function audibility and intelligibility depend on.
For a sound to be audible, it simply needs to elicit a large enough change in auditory nerve
activity for the brain to notice; almost any change will do. But for a sound to be intelligible,
it needs to elicit a very particular pattern of neural activity that the language centers of the
brain can recognize.

UNDERSTANDING THE LIMITATIONS

The key problem is that hearing loss doesn't just decrease the overall level of neural activity,
it also profoundly distorts the patterns of activity such that the brain no longer recognizes
them. Hearing loss isn't just a loss of amplification and compression, it also results in the
impairment of many other important and complex aspects of auditory processing.

A good example is the creation of distortions: When a sound with two frequencies enters the
ear, an additional sound is created by the cochlea itself at a third frequency that is a complex
combination of the original two. These distortions are, of course, what we measure as
distortion product otoacoustic emissions (DPOAEs), and their absence indicates impaired
cochlear function.

But these distortions aren't only transmitted out of the cochlea into the ear canal. They also
elicit neural activity that is sent to the brain. While a hearing aid may restore sensitivity to
the two original frequencies by amplifying them, it does not create the distortions and, thus,
does not elicit the neural activity that would have accompanied the distortions before
hearing loss.

These distortions themselves may not be relevant when listening to broadband sounds like
speech, but they are representative of the complex functionality that hearing aids fail to
restore. Without this functionality, the neural activity patterns elicited by speech are very
different from those that the brain has learned to expect. Because the brain does not
recognize these new patterns, perception is impaired.

A useful analogy is to think of the ear and brain as two individuals having a conversation.
The effect of hearing loss is not simply that the ear now speaks more softly to the brain, but
rather that the ear now speaks an entirely new language that the brain does not understand.
Hearing aids enable the ear to speak more loudly, but make no attempt to translate what the
ear is saying into the brain's native language. In this sense, hearing aids are like tourists who
hope that by shouting they will be able to overcome the fact that they are speaking the wrong
language.

Why don't hearing aids correct for the more complex effects of hearing loss? In severe cases
of extensive cochlear damage, it may be impossible. Even when hearing loss is only
moderate, it is not yet clear how a hearing aid should transform incoming sounds to elicit the
same neural activity patterns as the original sounds would have elicited before hearing loss.

EVOLVING OPPORTUNITIES

But there is reason for optimism. In recent years, advances in machine learning have been
used to transform many technologies, including medical devices (Nature. 2015 May
28;521(7553):436). In general, machine learning is used to identify statistical dependencies
in complex data. In the context of hearing aids, it could be used to develop new sound
transformations based on comparisons of neural activity before and after hearing loss.

But machine learning is not magic; to be effective, it needs large amounts of data. Fortunately,
there have also been recent advances in experimental tools for recording neural activity (J
Neurophysiol. 2015 Sep;114(3):2043; Curr Opin Neurobiol. 2018 Feb 10;50:92). These new
tools allow recordings from thousands of neurons at the same time and, thus, should be able
to provide the required “big data.”

The combined power of machine learning and large-scale electrophysiology provides an
opportunity for an entirely new approach to hearing aid design. Instead of relying on simple
sound transformations that are hand-designed by engineers, the next generation of hearing
aids will have the potential to perform sound transformations that are far more complex and
subtle. With luck, these new transformations will enable the design of hearing aids that can
restore both audibility and intelligibility—at least to a subset of patients with mild-to-
moderate hearing loss.

Digital Signal Processing for Over-The-Counter Hearing Aids


Figure 1: Hearing test and audiogram example. In audiology, hearing loss is quantified by
measuring hearing thresholds for each ear at different frequencies. These thresholds are
expressed relative to those of an average individual with healthy hearing (usually around
0-10 dB SPL) and plotted on an audiogram, a graph that summarizes the measured relative
thresholds.

In audiology, hearing loss is quantified by measuring hearing thresholds for each ear at
different frequencies, but hearing loss is not a linear reduction in ear sensitivity that can be
compensated for by a simple, or even frequency-dependent, linear sound amplification, as
shown in Figure 1. Instead, hearing loss is both frequency and sound level dependent,
meaning that the degree and type of hearing loss can vary significantly depending on the
specific frequencies and sound levels involved.

One of the main characteristics of hearing loss is "loudness recruitment," where the
perceived loudness of sounds grows much faster for people with hearing loss than for those
with normal hearing. As a result, the difference in sound level between "barely heard" and
"normal" becomes very short for people with hearing loss, which can explain why they often
have difficulty hearing others speaking to them in a normal voice.

This phenomenon has led to the development of multi-channel Wide Dynamic Range
Compression (WDRC) for hearing loss compensation in hearing aids. However, hearing loss
is a complex, non-linear, frequency-dependent, and cognition-related phenomenon that
cannot be universally compensated by amplifying sounds. Nonetheless, proper hearing
amplification can still benefit most people with hearing loss to some extent. Our goal is to
make such amplification as efficient as possible.

Basic Digital Signal Processing in Modern Hearing Aids


All modern hearing aids are digital, i.e., analog electrical microphone signals are first
digitized, then processed in the digital domain, and finally converted to an analog signal and
reproduced through a receiver (loudspeaker).
The typical digital signal processing blocks of a high-end hearing aid are shown in Figure 2.

Figure 2: Standard DSP tasks in hearing aids.

Most of the digital signal processing in hearing aids is done in frequency sub-bands. Digital
signal processing is sequential, block by block. After all the processing is done, the full-band
signal is resynthesized (combined) from the processed sub-band signals. In some cases, the
processing blocks can receive sub-band signals from multiple sources and combine them
into a set of sub-band signals. Acoustic beamforming is one type of this processing, shown in
Figure 3.

Figure 3: Acoustic beamforming sub-band merging.

Splitting into and combining frequency sub-bands can be accomplished using various and
well-known methods such as the short-time Fourier transform (STFT) or filter banks. Finite
impulse response (FIR) or infinite impulse response (IIR) filter banks can be used, each
having its own advantages and disadvantages. For example, IIR filter banks provide the
shortest delays but are less suitable for implementing other algorithms such as adaptive
beamforming, noise reduction, and others. When STFT is used for decomposition, the sub-
bands contain complex numbers and are said to be processed in the "frequency domain."

The number of sub-bands, their frequency distribution, and overlap depend on the method
of sub-band decomposition as well as a tradeoff between frequency resolution, processing
delay, computational complexity, available data memory on a given chip, and other affected
parameters.
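
As a minimal sketch of sub-band decomposition and resynthesis, the example below uses SciPy's STFT with an identity per-band processing step, so near-perfect reconstruction is expected. The sample rate, window length, and test signal are arbitrary assumptions chosen only for illustration.

# Hedged sketch: STFT-based sub-band split and resynthesis with no per-band change.
import numpy as np
from scipy.signal import stft, istft

fs = 16000                                   # illustrative sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 440 * t)              # dummy input signal

f, frames, X = stft(x, fs=fs, nperseg=128)   # split into frequency sub-bands
# ... per sub-band processing (gain, noise reduction, beamforming) would go here ...
_, x_hat = istft(X, fs=fs, nperseg=128)      # resynthesize the full-band signal

print("sub-bands:", X.shape[0],
      "max reconstruction error:", np.max(np.abs(x - x_hat[:len(x)])))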

Personalized Hearing Amplification


Modern digital hearing aids use multi-channel WDRC to achieve personalized amplification.
This involves digitally amplifying and compressing the wide dynamic range of sounds heard
by a healthy ear into a narrower range. As ambient sound increases, amplification is
naturally reduced, compensating for the loudness recruitment.

The channels are amplified/compressed differently according to a specific hearing loss. Each
channel usually contains signal frequencies that correspond to a certain predefined
frequency range and partially overlap. The channels can correspond to the frequency sub-
bands one-to-one or combine several and even an unequal number of sub-bands, depending
on the required frequency resolution. In practical implementations for hearing aids, the
number of frequency channels can vary from 4 to 64.

In general, gain/compression parameters are set according to individual hearing thresholds
measured directly or indirectly during a hearing test. Figure 4 illustrates the input/output
characteristics of a four-channel WDRC corresponding to a high-frequency loss.

Figure 4: Example compensation for a high tone frequency loss.
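
A toy illustration of per-channel gain selection is sketched below. It uses the classic "half-gain" rule of thumb purely as an assumption; real fittings use prescriptive formulas (e.g., NAL or DSL) plus per-channel compression as in Figure 4, and the channel frequencies and thresholds here are invented.

# Illustrative sketch: deriving per-channel gains from measured thresholds.

channels = {               # channel centre frequency (Hz) -> threshold (dB HL)
    500: 20, 1000: 30, 2000: 55, 4000: 70,
}

gains = {f: 0.5 * hl for f, hl in channels.items()}   # half-gain rule (assumption)
for f, g in gains.items():
    print(f"{f:>5} Hz channel: threshold {channels[f]} dB HL -> gain {g:.0f} dB")
# High-frequency channels receive much more gain, matching a sloping loss.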

Most modern digital hearing aids are equipped with two microphones, positioned apart from
each other (front and back) for acoustic beamforming. This technique generates a directional
"sensitivity beam" toward the desired sound, preserving sounds within the beam while
attenuating others. Typically, sounds from the front are preserved, while sounds from the
sides and rear are considered unwanted and suppressed. However, different beamforming
strategies can be applied in specific environments.

Acoustic beamforming relies on a slight time delay, typically in tens of microseconds, for
sounds to reach the two microphones. This time difference is dependent on the direction of
the sound and the distance between the microphones, as illustrated in Figure 5.

Figure 5: Acoustic beamforming principles.

Front sounds reach the front microphone first, side sounds arrive simultaneously, and rear
sounds reach the rear microphone first. Acoustic beamforming utilizes this time difference
to create a variable sensitivity called a "polar pattern" or "polar sensitivity pattern" that is
directionally dependent.

While sensitivity to front sounds stays consistent, sensitivity to sounds from other directions
decreases with one angle having zero sensitivity. However, these theoretical polar patterns
only apply in a free field and are not realistic in practical situations due to acoustic distortion
caused by the user's head and torso.

Acoustic beamforming can be either fixed or adaptive. Fixed beamforming has a static polar
pattern that is independent of the microphone signals and acoustic environment, making it
easy to implement and computationally efficient. Adaptive beamforming, on the other hand,
changes the polar pattern in real time to steer the null of the sensitivity polar pattern toward
the direction of the strongest noise, optimizing array performance.
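
The delay-and-subtract idea behind a fixed (cardioid-like) two-microphone beamformer can be sketched in a few lines of Python. The microphone spacing, sample rate, and test signal below are illustrative assumptions, and the function names are invented for the example.

# Minimal sketch of fixed delay-and-subtract (differential) beamforming.
import numpy as np

fs = 48000                               # sample rate (Hz), assumption
d = 0.012                                # microphone spacing (m), assumption
c = 343.0                                # speed of sound (m/s)
delay_samples = int(round(fs * d / c))   # ~2 samples at 48 kHz for 12 mm spacing

def beamform(front_mic, rear_mic):
    """Fixed pattern: front - delayed(rear). Sound arriving from the rear cancels."""
    delayed_rear = np.concatenate([np.zeros(delay_samples), rear_mic[:-delay_samples]])
    return front_mic - delayed_rear

# Simulate a source behind the wearer: it hits the rear mic first, the front mic later.
t = np.arange(0, 0.01, 1 / fs)
rear_source = np.sin(2 * np.pi * 1000 * t)
rear_mic = rear_source
front_mic = np.concatenate([np.zeros(delay_samples), rear_source[:-delay_samples]])
print("residual from rear source:", np.max(np.abs(beamform(front_mic, rear_mic))))
# ~0: the rear source is nulled, while a source from the front would pass through.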

Adaptive acoustic beamforming can be implemented in either the full frequency band or in
frequency sub-bands. The full band scheme optimizes an average performance with a
uniform polar pattern across all frequencies in the hearing aid frequency range, which may
not be optimal for individual frequencies. In the sub-band scheme, polar patterns are tailored
to different frequency regions to optimize performance across frequencies. This can be more
efficient for practical scenarios where noise comes from different directions in different
frequency regions.
Noise Reduction
The primary concern of individuals with hearing loss is difficulty communicating in noisy
environments. While acoustic beamforming can effectively reduce noise from specific
directions, it has a limited ability to mitigate diffuse noise - noise that emanates from
multiple sources without a distinct direction. Diffuse noise arises from multiple noise
reflections off hard surfaces in a reverberant space, which further complicates noise
directionality. For example, noise inside a car is often diffuse, as uncorrelated noises from
various sources such as the engine, tires, floor, and roof are diffused by reflections off the
windows.

Compensating for hearing loss with WDRC amplifies sounds that are below the user's
hearing threshold, but it can also amplify noise and reduce the contrast between noise and
the desired sound. Noise reduction technologies aim to mitigate these issues by suppressing
noise prior to amplification by WDRC, thereby improving the output Signal to Noise Ratio
(SNR).

Today, most noise reduction technologies used in hearing aids are based on spectral
subtraction techniques. The input signal is divided into frequency sub-bands or channels,
and the noise is assumed to have a relatively low, stationary amplitude within each sub-
band, while speech exhibits dynamic spectral changes with rapid fluctuations in sub-band
amplitude. If the average noise amplitude spectrum is known or estimated, a simple "spectral
subtraction" strategy can be employed to suppress noise, as shown in Figure 6.

Figure 6: Spectral subtraction principles.

The upper plot illustrates the sub-band signal amplitudes of a speech in noise signal across
different frequency sub-bands. The relatively stationary noise amplitudes are averaged by
the red line, while the speech signal amplitudes are significantly higher. In the lower plot, the
sub-band noise amplitude shown by the red line in the upper plot has been subtracted from
the sub-band input amplitudes using the spectral subtraction technique, which is applied to
all sub-bands. Finally, the output full-band signal is reconstructed using the same method
used for splitting, with the noise effectively reduced.
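
A hedged Python sketch of the spectral subtraction idea is shown below. It assumes the first fraction of a second of the recording is noise-only for estimating the average noise magnitude, which is a simplification of the adaptive noise estimators used in real hearing aids; the signal, sample rate, and parameter values are invented for illustration.

# Hedged sketch: sub-band spectral subtraction with a leading noise-only segment.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, fs, noise_seconds=0.25, nperseg=256):
    f, frames, X = stft(noisy, fs=fs, nperseg=nperseg)
    mag, phase = np.abs(X), np.angle(X)
    noise_frames = int(noise_seconds * fs / (nperseg // 2))   # leading frames assumed noise-only
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)
    clean_mag = np.maximum(mag - noise_mag, 0.0)              # subtract average noise, floor at zero
    _, enhanced = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return enhanced

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
speech_like = np.sin(2 * np.pi * 300 * t) * (t > 0.5)         # "speech" starts at 0.5 s
noisy = speech_like + 0.1 * np.random.randn(len(t))           # add stationary noise
enhanced = spectral_subtraction(noisy, fs)                    # noise-reduced output signal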
Alango Digital Signal Processing for Hearing Aids
Alango Technologies has developed a full range of state-of-the-art DSP algorithms, software,
and turnkey reference designs that can be scaled for various types of hearing enhancement
devices, including OTC hearing aids, as well as TWS earbuds and headsets with amplified
transparency and conversation boost.
Figure 7 displays the DSP technologies that are integrated into the Hearing Enhancement
Package (HEP) developed by Alango.

Figure 7: Alango DSP block for hearing aids.

While some of the technologies share names with those shown in Figure 2, they are based
on Alango's proprietary algorithms, which have been refined based on our experience and
feedback received from customers.

To select or design a DSP for hearing aids, one needs to compare different DSP cores and
rank them according to certain criteria. The most important criteria are:
- Selection of available software algorithms
- Ease of porting unavailable algorithms
- Ease of debugging and optimization
- Power consumption required to achieve the target uptime before recharging.

While we may have our own preferences, Alango partners with all major DSP IP providers
and can deliver optimized solutions for them. Please contact us for more details.

The Roadmap to Hearing Enhancement


The possibilities for DSP technologies in the realm of hearing are endless. There are many
other exciting opportunities to explore. For example, we could harness the power of neural
networks to develop even more advanced noise reduction techniques, or leverage low
latency wireless connectivity to create smart remote microphones with broadcasting
capabilities. Additionally, active occlusion effect management and dynamic vent
technologies offer promising avenues for addressing common hearing problems.
