The document discusses the interdisciplinary connections between sound, mind, and emotion, highlighting research presented at symposiums organized by The Sound Environment Centre at Lund University in 2008. It covers various aspects of how sound influences emotional and psychological states, including the effects of sound in trauma, mental disturbances, and conditions like tinnitus and hyperacusis. The publication serves as a comprehensive report on these findings, contributing to the understanding of the auditory system and its implications for therapy and well-being.

Sound, mind and emotion - research and aspects

Mossberg, Frans

2008


Citation for published version (APA):


Mossberg, F. (Ed.) (2008). Sound, mind and emotion - research and aspects. (Ljudmiljöcentrum skriftserie; Vol. 8). Sound Environment Center at Lund University.

Total number of authors: 1

General rights
Unless other specific re-use rights are stated the following general rights apply:
Copyright and moral rights for the publications made accessible in the public portal are retained by the authors
and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the
legal requirements associated with these rights.
• Users may download and print one copy of any publication from the public portal for the purpose of private study
or research.
• You may not further distribute the material or use it for any profit-making activity or commercial gain.
• You may freely distribute the URL identifying the publication in the public portal.

Read more about Creative Commons licenses: https://creativecommons.org/licenses/


Take down policy
If you believe that this document breaches copyright please contact us providing details, and we will remove
access to the work immediately and investigate your claim.

LUND UNIVERSITY
PO Box 117
221 00 Lund
+46 46-222 00 00
LJUDMILJÖCENTRUM
Sound, mind and emotion
Research and aspects

Publications from The Sound Environment Centre at Lund University
Report no. 8

Human mind and emotion can be profoundly affected by sounds. In this interdisciplinary volume researchers from different fields take a look at connections between sound, mind and emotion, and ways of understanding them.

In the spring of 2008 a series of interdisciplinary symposiums were arranged in Lund by The Sound Environment Centre at Lund University. The main objective was to further understanding of sound and sound environments and their influence on personal, emotional and psychological levels. The three symposiums had different perspectives. The first focused on how sound can affect us in times of emotional crisis, trauma or catastrophic events. The second investigated how sound is perceived in states of mental disturbance of various kinds, and the third discussed the implications of oversensitivity to sound – hyperacusis – as well as hearing impairments such as tinnitus.

This is the eighth issue in a series of reports from The Sound Environment Centre at Lund University.

Texts from a series of interdisciplinary symposiums arranged 2008 by The Sound Environment Centre at Lund University, Sweden.

Editor: Frans Mossberg
Lund 2009

ISSN 1653-9354
ISBN 978-91-976560-1-6
www.ljudcentrum.lu.se
Printed at Media-Tryck, Lund 2009

Contents

Patrik N. Juslin
Seven Ways in which the Brain Can Evoke Emotions from Sounds 9

Ulf Rosenhall
Auditory Problems - Not only an Issue of Impaired Hearing 37

Sören Nielzén, Olle Olsson, Johan Källstrand and Sara Nehlstedt


The Role of Psychoacoustics for the Research on Neuropsychiatric States 49

Sverker Sikström and Göran Söderlund


Why Noise Improves Memory in ADHD Children 63

Gerhard Andersson
Tinnitus and Hypersensitivity to Sounds 75

Kerstin Persson Waye


”It Sounds like a Buzzing in my Head”
– children’s perspective of the sound environment in pre-schools 83

Björn Lyxell, Erik Borg and Inga-Stina Olsson


Cognitive Skills and Perceived Effort in Active and Passive
Listening in a Naturalistic Sound Environment 91

Åke Iwar
Sound, Catastrophe and Trauma 105

Kerstin Bergh Johannesson


Sounds as Triggers 115

Sound, Mind and Emotion

– Editor’s foreword

In the spring of 2008 a series of interdisciplinary symposiums were arranged in Lund by the Sound Environment Centre at Lund University. The main objective was to further understanding of how sound and sound environments can affect humans on personal, emotional and psychological levels. The three symposiums had different perspectives. The first focused on how sound can affect us with intensified strength in times of emotional crisis, trauma or catastrophic events. The second investigated how sound is perceived by subjects with mental disturbances of various kinds, and the third discussed oversensitivity to sound – hyperacusis – as well as hearing impairments such as tinnitus.
Patrik Juslin opens this volume by presenting an attempt to find out how the human brain arouses emotions from sounds, suggesting an analytical model originally applied to music. This model consists of a number of psychological mechanisms, organized by their approximate place in the line of human evolution. Predictions are then made about how various parameters affect these mechanisms, as well as different brain regions. Juslin argues that this theoretical framework provides a more precise tool for understanding the interaction of sound, music and emotion, something that can also be useful in therapeutic situations.
Professor Ulf Rosenhall gives a thorough description of the complexity of the whole auditory system from ear to brain, and of the various impairments that can affect hearing at different levels. Drawing on psychoacoustics and neurophysiology, Professor Sören Nielzén et al. follow with an overview of the clinical and scientific background, in theory and practice, of the so-called “S-detect method”, developed as an aid to the diagnostics and treatment of schizophrenia in psychiatry. This method uses responses to certain sound stimuli to discriminate between healthy subjects and persons with schizophrenia, claiming to do so with a certainty of 90%.

That noise can affect people in unexpected ways is discussed here by Sverker Sikström and Göran Söderlund. By, amongst other things, looking at brain arousal and dopamine production, their studies show that the cognitive performance of children with ADHD can sometimes benefit from noise at appropriate levels.
Specializing in the treatment of tinnitus and hyperacusis, Professor Gerhard Andersson gives a brief account of diagnosis, treatment and research on impairments such as these.
From the sound world of preschools, Kerstin Persson Waye, at the Department of Occupational and Environmental Medicine at Sahlgrenska University Hospital in Gothenburg, reports on the sound environment at day care centers from the child’s perspective. The study shows, through in-place measurements and interviews, that children tend to evaluate sounds by the consequences they have for them in their immediate perception as well as in their own bodies, and adopt avoidance strategies to loud or unwanted sounds.
Preschool and day care sound environments are not only hard on children’s hearing, but also on the hearing of the staff, and are well known to produce tiredness, sick leave and hearing impairments. Staff who have developed impaired hearing from their work and continue to work face double trouble as far as hearing goes. Björn Lyxell et al. show that the cognitive skills of those with impaired hearing are taxed more heavily, as they have to work harder to decode their sound environment than their normal-hearing colleagues, resulting in problems of fatigue and social alienation: those with impaired hearing always have to listen actively not to miss out, and in reality seldom have the possibility of relaxed passive listening.
Åke Iwar, a practicing psychologist specialising in trauma treatment and a member of the SOS International Crisis Group in Copenhagen, gives a picture of the role that experiences and memories of sound impressions can play in the therapeutic treatment of victims of serious accidents and catastrophes, like the recent tsunami in Thailand.
Finally, Kerstin Bergh Johannesson, at the National Center for Disaster Psychiatry, discusses an unorthodox method of treating traumatic memories in patients with post-traumatic stress symptoms, memories that are repeatedly triggered by internal or external stimuli such as sounds, thoughts or pictures.
This is the eighth in a series of reports from The Sound Environment Centre
at Lund University.
The centre wishes to thank all authors for their participation.

2009-08-28 Frans Mossberg

Sound of music:

Seven Ways in which the Brain Can Evoke


Emotions from Sounds

Patrik N. Juslin

Sound moves us. It may cause great pleasure as well as great pain. Nowhere
is this more apparent than in the world of music – often referred to as “that
one of the fine arts which is concerned with the combination of sounds with
a view to beauty of form and the expression of emotion” (Oxford English
Dictionary, 3rd ed.). Emotional reactions to music have fascinated people
since Ancient Greece (Budd, 1985), though it is only recently that researchers
have made progress in understanding how such reactions come about (Juslin
& Västfjäll, 2008). It turns out that our reactions to music tell us a story
about who we are – both as individuals (e.g., in terms of our memories,
preferences, and personalities) and as a species (e.g., in terms of our innate
human disposition to use sounds as sources of information in our inferences
about future events, potential danger and affective states of other individuals).
Although music arouses positive emotions more frequently than
negative emotions (Juslin et al., 2008), music does arouse some negative
emotions such as sadness and irritation quite frequently. If we consider
sounds more generally, it is even more common that sounds are a cause of
negative emotions and stress (Västfjäll, in press). As shown in some of the
other contributions to this volume, specific sounds may also be connected to
traumatic life events in post-traumatic stress disorder (PTSD) such that hearing
a certain sound may continue to arouse negative emotions long after the
event in which the sound originally occurred. Strange as it may seem, the
underlying mechanisms that cause these responses may be partly similar to
those that arouse positive emotions in music listening. Hence, systematic
knowledge about the mechanisms that underlie emotional reactions to music
could be of potential importance also for therapeutic attempts to address
aversive reactions to sounds in PTSD.
In this chapter, I will propose a psychological framework for
understanding how the brain evokes emotional reactions from sound waves.
The framework was developed with music especially in mind, although as will
be apparent, it is one of the assumptions of the framework that most of the
psychological mechanisms apply to perception of sound more generally. In
the following, I first discuss the role of psychological theory in studying how
the human brain arouses emotions from sounds. Then, I outline a theoretical
framework, featuring six mechanisms and a set of predictions that can guide
future research. Finally, I consider the implications of this framework for
both empirical research and applications in therapy.

Why is psychological theory important?


The important role of psychological theory in studies of music and emotion
may be illustrated with regard to a recent series of neuropsychological studies
(for a review, see Koelsch, 2005). First of all, it should be noted that emotional
responses can be analyzed along a number of different dimensions from a
neuropsychological perspective. Thus, for instance, one may distinguish
brain regions in terms of whether they involve perception or experience of
emotions (Blonder, 1999; Davidson, 1995; Tucker & Frederick, 1989). For
example, perceiving a facial expression as “happy” is different from feeling
“happy”. One may also distinguish brain regions in terms of discrete emotional
states (Damasio, Grabowski, Bechara, Damasio, Ponto, Parvizi, & Hichwa,
2000; Murphy, Nimmo-Smith, & Lawrence, 2003; Panksepp, 1998; Phan,
Wager, Taylor, & Liberzon, 2002). That is, the experience of “fear” might
activate a different brain region than the experience of “joy”. Yet another
approach, which ultimately may be more fruitful in this context, is to analyze
brain regions in terms of distinct psychological processes or functions (Cabeza
& Nyberg, 2000). For instance, an emotion aroused by an episodic memory
may involve a different set of brain regions than an emotion aroused by a
startle response.

In this chapter, I shall argue that an analysis of underlying psychological
processes is crucial for an understanding of emotional reactions to sounds.
Indeed, the coupling of psychological predictions with functional brain
imaging techniques is probably one of the most promising avenues in the study
of music and emotion. While imaging studies could inform and constrain
psychological theorizing, psychological theories could organize data from
imaging studies. Unfortunately, this is not how current neuropsychological
research on emotional reactions to music has been conducted (Juslin &
Västfjäll, 2008).
A review of the literature reveals that a number of different brain regions
have been implicated in studies of emotional reactions to music, including
the thalamus, cerebellum, hippocampus, amygdala, prefrontal cortex,
orbitofrontal cortex, midbrain, insula, Broca’s area, nucleus accumbens,
visual cortex, and supplementary motor areas. Note, however, that different
regions have been activated in different studies, without any explanation of
these differences (e.g., Bauer Alfredson et al., 2004; Blood & Zatorre, 2001;
Blood et al., 1999; Brown et al., 2004; Gosselin et al., 2006; Koelsch et al.,
2006; Menon & Levitin, 2005).
How can we bring some order to the inconsistent findings from
previous research? The solution is to consider the underlying psychological
mechanisms that give rise to the emotional responses. Brain imaging studies
have tended to simply present listeners with supposedly “emotional” music to
explore which areas may be activated by this stimulus. In some cases listeners
have been asked to bring their own music to increase the chance that the
music will be “effective”. Rarely, however, have researchers manipulated – or
at least controlled for – the underlying mechanism that induced the emotion.
As I will show, music can induce emotions in many different ways, and what
brain regions are activated will depend on the precise mechanism involved.
Hence, if researchers can manipulate (or at least control for) induction
mechanisms in future experiments, they may be better able to account for
obtained activation patterns. In the following, I briefly outline a framework
that aims for a more theory-driven approach to studying musical emotions.

A novel theoretical framework
Although most scholars regard the question of how music evokes emotions
as the primary issue (e.g., Dowling & Harwood, 1986, p. 202), a literature
search reveals that few studies make any attempt to test a theory about the
psychological mechanism that underlies emotional reactions to music. Yet,
such reactions are intriguing. This is because in the paradigmatic case, an
emotion is evoked when an event is appraised as having the capacity to
influence the goals of the perceiver somehow. Music does not appear to have
any capacity to further or block goals in life. Thus, researchers have been
forced to come up with alternative mechanisms that make more sense in a
musical context.1
I shall use the term “psychological mechanism” broadly in this chapter
to refer to any information processing that leads to the induction of emotions
through listening to music. The processing may be simple or complex.
It may be available to consciousness or not. The crucial thing is that the
mechanism somehow takes the music as its “object”. Most scholars who have
written about possible mechanisms have limited themselves to only one or
a few mechanisms (e.g., Levinson, 1997), or have argued that the “default”
mechanism for induction of emotions – cognitive appraisal – is most suitable
to explain emotional reactions to music (e.g., Waterman, 1996).
In contrast, Juslin and Västfjäll (2008) outlined a novel framework,
featuring six psychological mechanisms (besides cognitive appraisal) through
which music may evoke emotions. The mechanisms are: brain stem reflexes,
evaluative conditioning, emotional contagion, visual imagery, episodic memory,
and musical expectancy (explained further below). Juslin and Västfjäll argue
that one may think of these mechanisms as consisting of a number of distinct
“brain functions” that have developed gradually and in a specific order during
the evolutionary process – from simple sensations to syntactical processing
(e.g., Gärdenfors, 2003). The mechanisms are seen as information-processing
devices at different levels of the brain that utilize different means to track
significant aspects of the environment, and that may lead to conflicting
outputs in some contexts. All mechanisms have their origin outside the
musical domain. Because the mechanisms depend on brain functions with
different evolutionary origins, each mechanism is expected to have unique

1 However, musical induction of emotions through a cognitive appraisal may occur some-
times, such as when our ‘goal’ to go to sleep at night is ‘blocked’ by a neighbor playing
loud music.
characteristics that one should be able to demonstrate, for instance, in
experiments.
Below, I first briefly define each mechanism and then outline theoretical
predictions for each mechanism – in particular, as they pertain to neural
correlates. I hope that this may contribute to more hypothesis-driven
approaches to brain imaging studies of music and emotion (for further
discussion and evidence, see Juslin & Västfjäll, 2008).

Psychological mechanisms
Building on the work of the pioneers in this field (Berlyne, 1971, Meyer,
1956) as well as on more recent research (Juslin & Sloboda, 2001), Juslin &
Västfjäll (2008) suggested the following six mechanisms:

Brain stem reflex refers to a process whereby an emotion is induced by music


because one or more fundamental acoustical characteristics of the music are
taken by the brain stem to signal a (potentially) important and urgent event.
All other things being equal, sounds that are sudden, loud, dissonant, or
feature fast patterns induce arousal in the listener. The responses reflect the
immediate impact of simple auditory sensations.
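The acoustic trigger described here – a sudden jump in loudness – is simple enough to sketch computationally. The following Python fragment is an illustration, not part of the original chapter; the frame length and jump ratio are arbitrary assumptions. It flags frames whose energy leaps abruptly above the preceding frame, mirroring the kind of crude, fast acoustic feature a brain stem reflex is hypothesized to respond to:

```python
import math

def rms(frame):
    """Root-mean-square level of one frame of samples."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def sudden_onsets(samples, frame_len=4, ratio=4.0):
    """Indices of frames whose RMS jumps by at least `ratio` over the
    previous frame (frame_len and ratio are arbitrary demo values)."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    levels = [rms(f) for f in frames]
    return [i for i in range(1, len(levels))
            if levels[i - 1] > 0 and levels[i] / levels[i - 1] >= ratio]

# A quiet signal with an abrupt loud burst in the third frame:
signal = [0.01] * 8 + [0.8] * 4 + [0.01] * 4
print(sudden_onsets(signal))  # → [2]: the burst frame is flagged
```

Of course, the actual reflex operates on far richer input (dissonance, rate of change, spectral content); the sketch only captures the "sudden and loud" case.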

Evaluative conditioning (EC) refers to a process whereby an emotion is


induced by music simply because this stimulus has been paired with other
positive or negative stimuli. For instance, a specific piece of music may have
occurred repeatedly together in time with a specific event that always makes
you happy such as meeting your best friend. Over time, through repeated
pairings, the music itself will, eventually, arouse happiness even in the absence
of the friendly interaction.
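The gradual build-up of such a conditioned response can be illustrated with the classic Rescorla-Wagner delta rule, a standard model of associative learning (not a claim about the chapter's own formalism; the learning rate is an arbitrary assumption). Repeated music-event pairings drive the tune's associative value toward the value of the event:

```python
def rescorla_wagner(pairings, alpha=0.3):
    """Track the associative strength V of a piece of music across
    repeated pairings with an outcome of value lam (+1 = pleasant event).
    Delta rule: V <- V + alpha * (lam - V); alpha is a demo value."""
    v = 0.0
    history = []
    for lam in pairings:
        v += alpha * (lam - v)
        history.append(v)
    return history

# Ten pairings of the music with a pleasant event (+1):
h = rescorla_wagner([1.0] * 10)
print(round(h[-1], 2))  # → 0.97: strength approaches the event's value
```

The same rule with negative outcomes would model sounds becoming aversive through pairing, which is relevant to the PTSD cases mentioned earlier in the chapter.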

Emotional contagion refers to a process whereby an emotion is induced by


music because the listener perceives the emotion expressed in the music, and
then ‘mimics’ this expression internally, which by means of either peripheral
feedback from muscles, or a more direct activation of the relevant emotion
representations in the brain, leads to an induction of the same emotion.

Visual imagery refers to a process whereby an emotion is induced in a listener


because he or she conjures up visual images (e.g., of a beautiful landscape)
while listening to the music. The emotions experienced are the result of

a close interaction between the music and the images. Listeners appear
to conceptualize the structure of the music in terms of a metaphorical,
nonverbal mapping between the music and image-schemata grounded in
bodily experiences; for instance, hearing the melody as “moving upward”.
Listeners react to the mental images much in the same way as they would to
the corresponding visual stimuli in the “real” world (e.g., reacting positively
to a beautiful nature scene).

Episodic memory refers to a process whereby an emotion is induced in a


listener because the music evokes a memory of a particular event in the
listener’s life (often referred to as the “Darling they are playing our tune”
phenomenon). When the memory is evoked, so is also the emotion associated
with the memory, and this emotion may be relatively intense – perhaps
because the psychophysiological reaction pattern to the original event is
stored in memory along with the experiential contents.

Musical expectancy refers to a process whereby an emotion is evoked in a


listener because a feature of the musical structure violates, delays, or confirms
the listener’s expectations about the continuation of the music. Thus, for
example, the sequential progression of E-F# sets up the expectation that the
music will continue with G#. If this does not happen, a listener familiar with
the musical idiom could become, say, surprised. The expectations are based
on the listener’s previous experiences of the same style of music.
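One common way to model such style-based expectations computationally is a simple bigram (first-order Markov) model of note transitions; surprisal, the negative log-probability of a continuation, then serves as a rough index of expectancy violation. This is a generic modelling sketch, not the chapter's own method, and the toy corpus below is invented for illustration:

```python
import math
from collections import defaultdict

def train_bigrams(melodies):
    """Count note-to-note transitions in a corpus of melodies."""
    counts = defaultdict(lambda: defaultdict(int))
    for mel in melodies:
        for a, b in zip(mel, mel[1:]):
            counts[a][b] += 1
    return counts

def surprisal(counts, context, note):
    """log2(1/P(note | context)); high values = expectancy violation."""
    total = sum(counts[context].values())
    p = counts[context][note] / total if total else 0.0
    return float('inf') if p == 0 else math.log2(1 / p)

# Invented toy corpus in which F# is always followed by G#:
corpus = [['E', 'F#', 'G#'], ['E', 'F#', 'G#'], ['B', 'F#', 'G#']]
model = train_bigrams(corpus)
print(surprisal(model, 'F#', 'G#'))  # → 0.0 (fully expected)
print(surprisal(model, 'F#', 'A'))   # → inf (a surprising continuation)
```

Real expectancy models are far more elaborate, but the principle is the same: the listener's statistics over a musical idiom determine which continuations feel inevitable and which feel surprising.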

Theoretical hypotheses
By synthesizing theories and findings from several research domains outside
music, Juslin and Västfjäll (2008) were able to offer the first set of hypotheses
that may help music researchers to distinguish among different mechanisms in
future research. Table 1 shows the preliminary hypotheses. The mechanisms
are listed in the approximate order in which they can be assumed to have
appeared during evolution (see Gärdenfors, 2003; Joseph, 2000; Tulving,
1983).
The 66 predictions can be divided into two subgroups: the first
subgroup concerns characteristics of the psychological mechanism as such.
Survival value of brain function describes the most important benefit that each
brain function brought to organisms that possessed this function. Information
focus specifies broadly the type of information that each mechanism is
processing. Ontogenetic development concerns the approximate time in human

development when respective mechanism begins to have a noticeable effect
on emotional responses to music. Key brain regions describes those regions of
the brain that have been most consistently associated with each mechanism
in functional brain imaging studies (detailed below). Cultural impact and
learning refers to the extent to which each mechanism is influenced differently
by music that varies from one culture to another.
A second group of characteristics (see Table 1) concerns the specific
nature of the emotion induction process of the respective mechanism.
Hence, Induced affect specifies which emotional states might be expected
to be induced, depending on the mechanism. Induction speed refers to how
much time each mechanism requires, in relation to other mechanisms, for
an emotion to occur in a specific situation. Degree of volitional influence
refers to the extent to which a listener him- or herself can actively influence
the induction process (e.g., through focus of attention, active recall, etc.).
Availability to consciousness is the extent to which at least some aspects of
the induction process are available to the listener’s consciousness, so that
the listener may be able to explain his or her response. Modularity refers to
the extent to which the mechanism may function as an independent and
information-encapsulated “brain module” that can be activated in parallel
with other psychological processes. Dependence on musical structure refers to
the relative extent to which the induction depends on the precise structure or
style of the music the listener is hearing.

Table 1: Hypotheses for six psychological mechanisms through which music might induce emotions (adapted from Juslin & Västfjäll, 2008).

Brain stem reflex
– Survival value of brain function: Focusing attention on potentially important changes or events in the close environment
– Information focus: Extreme or rapidly changing basic acoustic characteristics
– Ontogenetic development: Prior to birth
– Key brain regions: Reticular formation in the brain stem, the intralaminar nuclei of the thalamus, the inferior colliculus
– Cultural impact/learning: Low
– Induced affect: General arousal, unpleasantness vs. pleasantness
– Induction speed: High
– Degree of volitional influence: Low
– Availability to consciousness: Low
– Modularity: High
– Dependence on musical structure: Medium

Evaluative conditioning
– Survival value of brain function: Being able to associate objects or events with positive and negative outcomes
– Information focus: Covariation between events
– Ontogenetic development: Prior to birth
– Key brain regions: The lateral nucleus of the amygdala, the interpositus nucleus of the cerebellum
– Cultural impact/learning: High
– Induced affect: Basic emotions
– Induction speed: High
– Degree of volitional influence: Low
– Availability to consciousness: Low
– Modularity: High
– Dependence on musical structure: Low

Emotional contagion
– Survival value of brain function: Enhancing group cohesion and social interaction, e.g. between mother and infant
– Information focus: Emotional motor expression
– Ontogenetic development: First year
– Key brain regions: ‘Mirror neurons’ in the pre-motor regions, right inferior frontal regions, the basal ganglia
– Cultural impact/learning: Low
– Induced affect: Basic emotions
– Induction speed: High
– Degree of volitional influence: Low
– Availability to consciousness: Low
– Modularity: High
– Dependence on musical structure: Medium

Visual imagery
– Survival value of brain function: Permitting internal simulations of events that substitute for overt and risky actions
– Information focus: Self-conjured visual images
– Ontogenetic development: Pre-school years
– Key brain regions: Spatially mapped regions of the occipital cortex, the visual association cortex, and (for image generation) left temporo-occipital regions
– Cultural impact/learning: High
– Induced affect: All possible emotions
– Induction speed: Low
– Degree of volitional influence: High
– Availability to consciousness: High
– Modularity: Low
– Dependence on musical structure: Medium

Episodic memory
– Survival value of brain function: Allowing conscious recollections of previous events and binding the self to reality
– Information focus: Personal events in particular places and at particular times
– Ontogenetic development: 3-4 years
– Key brain regions: The medial temporal lobe, especially the hippocampus, and the right anterior prefrontal cortex (applies to memory retrieval)
– Cultural impact/learning: High
– Induced affect: All possible emotions, although especially nostalgia
– Induction speed: Low
– Degree of volitional influence: Medium
– Availability to consciousness: High
– Modularity: Low
– Dependence on musical structure: Low

Musical expectancy
– Survival value of brain function: Facilitating symbolic language with a complex semantics
– Information focus: Syntactic information
– Ontogenetic development: 5-11 years
– Key brain regions: The left perisylvian cortex, ‘Broca’s area’, the dorsal region of the anterior cingulate cortex
– Cultural impact/learning: High
– Induced affect: Surprise, awe, pleasure, ‘thrills’, disappointment, hope, anxiety
– Induction speed: Low
– Degree of volitional influence: Low
– Availability to consciousness: Medium
– Modularity: Medium
– Dependence on musical structure: High

From: Juslin & Västfjäll (2008), adapted by permission from Cambridge University Press. For further theoretical and empirical support of the different hypotheses, see the original article.
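For hypothesis-driven experiments of the kind advocated here, the induction-process hypotheses of Table 1 can be transcribed into a small lookup structure, so that an observed response profile can be matched against candidate mechanisms. The sketch below is illustrative only; it encodes three of the table's eleven characteristics:

```python
# Induction-process profiles per mechanism, transcribed from Table 1
# (speed = induction speed, volitional = degree of volitional influence,
# conscious = availability to consciousness).
HYPOTHESES = {
    'brain stem reflex':       {'speed': 'high', 'volitional': 'low',    'conscious': 'low'},
    'evaluative conditioning': {'speed': 'high', 'volitional': 'low',    'conscious': 'low'},
    'emotional contagion':     {'speed': 'high', 'volitional': 'low',    'conscious': 'low'},
    'visual imagery':          {'speed': 'low',  'volitional': 'high',   'conscious': 'high'},
    'episodic memory':         {'speed': 'low',  'volitional': 'medium', 'conscious': 'high'},
    'musical expectancy':      {'speed': 'low',  'volitional': 'low',    'conscious': 'medium'},
}

def candidate_mechanisms(observed):
    """Mechanisms whose hypothesized profile matches every observed feature."""
    return [m for m, profile in HYPOTHESES.items()
            if all(profile.get(k) == v for k, v in observed.items())]

# A slow, consciously reportable response under the listener's control:
print(candidate_mechanisms({'speed': 'low', 'volitional': 'high'}))
# → ['visual imagery']
```

As the first query shows, some feature combinations single out one mechanism, while others (e.g. fast, involuntary responses) leave several candidates, which is exactly why converging measures such as brain imaging are needed.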

Key brain regions: Overview
Returning to our previous discussion of functional brain imaging studies, it may
be useful to consider the key brain regions associated with each mechanism.
However, first of all, it should be noted that emotional responses to music
may be expected to involve three general types of brain regions: (1) regions
virtually always involved when music is perceived (e.g., the primary auditory
cortex), (2) regions virtually always involved in the conscious experience of
emotion – regardless of the “source” of the emotion (candidates include the
rostral anterior cingulate and the medial prefrontal cortex; e.g., Lane, 2000,
pp. 356-358), and (3) regions involved in emotional information-processing
that differ depending on the precise mechanism causing the emotion. Hence,
although responses to music are likely to involve several regions of the brain,
the predictions in Table 1 concern only the last set of regions (i.e., those that
can help researchers to discriminate between the underlying mechanisms). It
should be noted that several of the processes that these mechanisms entail
(e.g., syntactical processing, episodic memory) do not in themselves imply
that emotions have been evoked – they may also occur in the absence of
emotion. However, whenever emotional responses to music occur, one or
more of the mechanisms will be involved, and, hence, at least one associated
sub-process should be observable as well.

Brain stem reflexes


The precise neurophysiological processes that underlie brain stem reflexes
are not fully understood, although evidence suggests that they occur in close
connection with the reticular formation of the brain stem and the intralaminar
nuclei of the thalamus, which receive inputs from the auditory system (see
Kinomura et al., 1996). The brainstem is an ancient structure of the brain
subserving several sensory and motor functions, including auditory perception
and the mediation and control of attention, emotional arousal, heart rate,
breathing, and movement (Joseph, 2000). The reticular formation is able to
quickly induce arousal so that attention can be selectively directed at sensory
stimuli of potential importance. The system exerts its widespread influences
on sensory and motor functions and arousal through neurotransmitters such
as norepinephrine and serotonin. While the system may be activated and
inhibited by the amygdala, hypothalamus, and orbitofrontal cortex, it may
also be activated independently of these structures in a more “reflex-like”

manner (Tranel, 2000). Brain stem reflexes to music rely on the early stages
of auditory processing. When an auditory signal reaches the primary auditory
cortex, it has already undergone several analyses by such brain structures
as the superior olivary complex, the inferior colliculus, and the thalamus.
Accordingly, alarm signals to auditory events that suggest “danger” may be
emitted already at the level of the inferior colliculus. Brain stem reflexes
appear to be largely “hard-wired” in the brain.

Evaluative conditioning
EC is often regarded as a special kind of “classical” conditioning, with some
unusual characteristics (e.g., De Houwer et al., 2001). First, EC can occur
even if the participant is unaware of the contingency of the associated stimuli.
In fact, studies have shown that EC responses can both be established and
arouse emotions without awareness (Martin et al., 1984). Second, EC appears
to be fairly resistant to extinction, as compared to classical conditioning (Diaz
et al., 2005). Hence, once a piece of music has been associated with a certain
emotional outcome, this association may be quite persistent. Although EC has
not been studied much in regard to music, compared to other mechanisms,
two studies have reported EC effects with music (Blair & Shimp, 1992;
Razran, 1954).
EC depends on unconscious, unintentional, and effortless processes,
which involve sub-cortical regions of the brain, such as the amygdala and
the cerebellum (Fanselow & Poulos, 2005; Johnsrude et al., 2000; LeDoux,
2000; Sacchetti et al., 2005; Thompson, 2005). In the EC process, it is
plausible that the amygdala is particularly involved in the evaluation of the
emotional stimulus, whereas the cerebellum is involved in the timing of the
response (Cabeza & Nyberg, 2000). In addition to these brain areas, one can
expect some hippocampal activation if the conditioning depends strongly on
the specific context. However, the amygdala appears to be required for EC,
whereas the hippocampus is not (LeDoux, 2000).

Emotional contagion
People commonly catch the emotions of others when seeing their facial
expressions or hearing their vocal expressions (e.g., Hatfield, Cacioppo,
& Rapson, 1994; Neumann & Strack, 2000). Because music commonly

features acoustic patterns similar to those that occur in emotional speech
(see Juslin & Laukka, 2003), it has been argued that we might get aroused
by voice-like features of music that lead us to “mimic” the perceived emotion
internally. That emotional reactions to music could involve contagion effects
is supported by findings that the perception of “happy” and “sad” music may
induce the corresponding emotions in listeners, as indicated by experiential,
physiological, and expressive emotion components (Lundqvist, Carlsson,
Hilmersson, & Juslin, 2009).
Recent research has suggested that the process of emotional contagion
may occur through the mediation of so-called “mirror neurons” discovered
in studies of the monkey pre-motor cortex in the 1990’s (di Pellegrino et
al., 1992). It was found that the mirror neurons discharged both when the
monkey carried out an action and when it observed another individual
carrying out a similar action. Direct evidence for “mirror neurons” in humans
is still lacking, but there is indirect evidence: Several studies have shown
that when individuals observe an action carried out by another individual,
the motor cortex may become active in the absence of overt motor activity
(Rizzolatti & Craighero, 2004).
While the idea of emotional contagion remains speculative in
relationship to music, a recent fMRI study by Koelsch et al. (2006) found
that music listening activated brain areas related to a circuitry serving the
formation of pre-motor representations for vocal sound production even
though no singing was observed among the participants. Koelsch et al.
concluded that this may reflect a mirror-function mechanism. The findings
support the idea that listeners may mimic the emotional expression of the
music internally. Thus, emotional contagion from music might be assumed
to involve two types of brain regions. First, several studies have indicated
that perception of emotion from the voice (and thus presumably of emotion
from voice-like characteristics of music) may involve particularly the right
inferior frontal regions and the basal ganglia (see Adolphs, 2002; Adolphs
et al., 2002; Buchanan et al., 2000; Cancelliere & Kertesz, 1990; George et
al., 1996). Also, we would expect to find activation of “mirror neurons” in
pre-motor regions, in particular the areas involved in vocal sound production
(Koelsch et al., 2006).

Visual imagery
Visual imagery is commonly defined as an experience which resembles
perceptual experience, but that occurs in the absence of relevant sensory
stimuli. The primary issue in imagery research has been whether visual
imagery involves a distinctively “pictorial” representation of events in the mind
or a propositional representation. The pictorial view is supported by findings
that many brain regions that are activated during visual perception are
similarly activated when a person is involved in imagery (for reviews, see e.g.,
Farah, 2000; Ganis et al., 2004). Specifically, both studies with focally brain-
damaged patients and brain imaging studies of normal participants have
shown that visual representations in the occipital lobe known to be spatially
mapped are activated in a “top-down” manner during imagery (Charlot et
al., 1992; Goldenberg et al., 1991). ERP studies have further suggested that
imagery involves the same visual cortical areas that are involved in early visual
processing (Farah, 2000).
It has been proposed that music may be particularly effective in
stimulating mental imagery (Osborne, 1980; Quittner & Glueckauf, 1983),
and several studies also indicate that imagery can be effective in enhancing
emotional reactions to music in listeners (e.g., Band, Quilter, & Miller, 2001-
2002). However, what is characteristic of visual imagery as a mechanism is
that the listener is able to influence the process to a considerable extent and
therefore is very much a “co-creator” of the emotions evoked by the music.
Although images may sometimes come into the mind unbidden, in general
a listener may conjure up, manipulate, and dismiss images at will. Hence,
whereas the “bottom-up” process of visual perception is “automatic”, visual
imagery appears to require the intervention of an attention-demanding
process of image generation. Though the localization of this image generation
process is a controversial issue, a number of studies suggest a left temporo-
occipital localization (Farah, 2000). Thus, activation of visual association
cortex including occipital cortex during music listening may indicate that the
listener is involved in music-stimulated visual imagery.

Episodic memory
Music often evokes memories (Gabrielsson, 2001; Juslin et al., 2009; Sloboda,
1992). When a memory is evoked, so too is the emotion associated with
it (Baumgartner, 1992). Though evaluative conditioning is also

a form of memory, episodic memory differs from conditioning in that it
involves a conscious recollection of a previous event in time, preserving a lot
of contextual information (Tulving, 1983). Also, the two kinds of memory
have different process characteristics and brain substrates (Table 1).
Episodic memory can be divided into different stages (encoding, storage,
retrieval). Note that different brain regions may be involved depending on the
stage of the memory process. Here, I focus on the retrieval stage, which may
be the most relevant for differentiating the mechanisms in a listening
experiment. Previous research has consistently
indicated that the conscious experience of recollection of an episodic memory
involves the medial temporal lobe, in particular hippocampus, and the right
anterior prefrontal cortex (e.g., Nyberg et al., 1996; Schacter et al., 1996;
see also Cabeza & Nyberg, 2000).

Musical expectancy
Musical expectancy does not simply refer to any unexpected event that might
occur in regard to music. Musical expectancy refers to those expectancies that
involve syntactic relationships among different parts of the music’s structure
(Patel, 2003; see also Meyer, 1956, 2001). Like language, music consists
of perceptually discrete elements, organized into hierarchically structured
sequences according to “well-formedness” rules. Thus, it is a common view
among music theorists that most musical styles are, in principle at least,
describable by a grammar (Lerdahl & Jackendoff, 1983). It is only through
the perception of this syntax that the relevant musical expectations arise.
These expectations are based on the listener’s previous experience of the
musical style (e.g., Carlsen, 1981). Emotional reactions are evoked when a
listener’s musical expectations are disrupted somehow – for instance by new
or unprepared harmony (Steinbeis, Koelsch, & Sloboda, 2006).
Lesion studies suggest that several areas of the left perisylvian cortex
are involved in different aspects of syntactic processing (Brown, Hagoort,
& Kutas, 2000). Relatively few brain imaging studies of language so far
have looked at processes at the sentence level and beyond. A few studies
have indicated that especially parts of Broca’s area increase their activity
when sentences increase in syntactic complexity (e.g., Caplan et al., 1998;
Stromswold et al., 1996). Similarly, a few studies using MEG or fMRI have
revealed that musical syntax is processed in Broca’s area (see
Maess, Koelsch, Gunter, & Friederici, 2001; Tillmann, Janata, & Bharucha,
2003). It has been found that violations of musical expectancy activate the
same brain areas as have previously been implicated in violations of syntax
in language. Patel (2003) has thus argued that syntax in language and music
share a common set of processes for syntactical integration (localized around
Broca’s area) that operate on different structural representations in more
posterior brain regions.
In addition to the areas involved in syntactic processing, which
might work in fairly automatic fashion (e.g., Koelsch et al., 2002), musical
expectancy also probably involves a monitoring of musical expectancies and
conflicts between expected and actual musical sequences. This monitoring
might involve parts of the anterior cingulate cortex as well as the prefrontal
cortex (Botvinick, Cohen, & Carter, 2004; Cabeza & Nyberg, 2000).
In sum, we see that different mechanisms are likely to involve partly
different brain regions, which could explain the inconsistent findings
in previous functional brain imaging studies, since these studies have not
controlled for underlying mechanisms.
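The mechanism-to-region associations reviewed above can be summarized in a simple lookup structure. The sketch below is purely illustrative: the region labels are shorthand for the structures named in the text, and the ranking function is a hypothetical reasoning aid, not an analysis method used in the cited studies.

```python
# Hypothesized key brain regions per induction mechanism, as summarized
# in the text. Labels are simplified shorthand, not an established taxonomy.
MECHANISM_REGIONS = {
    "brain stem reflex": {"reticular formation", "intralaminar thalamus",
                          "inferior colliculus"},
    "evaluative conditioning": {"amygdala", "cerebellum"},
    "emotional contagion": {"right inferior frontal cortex", "basal ganglia",
                            "pre-motor cortex"},
    "visual imagery": {"occipital cortex", "left temporo-occipital cortex"},
    "episodic memory": {"hippocampus", "right anterior prefrontal cortex"},
    "musical expectancy": {"Broca's area", "anterior cingulate cortex"},
}

def candidate_mechanisms(observed):
    """Rank mechanisms by how many of their hypothesized discriminating
    regions appear in an observed activation pattern."""
    overlap = {m: len(regions & observed)
               for m, regions in MECHANISM_REGIONS.items()}
    return sorted((m for m in overlap if overlap[m] > 0),
                  key=lambda m: -overlap[m])

# An activation pattern including the amygdala and cerebellum points
# toward evaluative conditioning:
print(candidate_mechanisms({"amygdala", "cerebellum"}))
# → ['evaluative conditioning']
```

In this toy form, the point of the framework becomes concrete: only if mechanisms map onto partly distinct region sets can an activation pattern be traced back to a candidate mechanism at all.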

Implications of the novel framework

Implications for research


One implication of the novel framework is that it may resolve many
disagreements in the field. Apparent contradictions of different approaches
may be partly reconciled by observing that they focus on different mechanisms.
Hence, the framework might help to resolve past disagreement regarding what
emotions music can evoke, how early musical emotions develop, whether
listeners are “active” or “passive” in inducing emotions, how much time it
takes to induce an emotion through music, and whether musical emotions
are innate or learned responses – it all depends on the mechanism concerned
(Table 1).
The most crucial implication of the new framework for studies of
music and emotion is that it will not be sufficient to induce and study
musical emotions in general. In order for data to contribute in a cumulative
fashion to our knowledge, researchers must try to specify, as far as possible,
the mechanism involved in each study. Otherwise studies will yield results
that are inconsistent or that cannot be given a clear interpretation. As noted

earlier, neuropsychological studies offer one example of this problem. The
studies have tended to simply present “emotional music” to listeners without
manipulating or at least controlling for the underlying induction mechanism.
This makes it exceedingly difficult to understand what the obtained neural
correlates actually reflect in each study (“It is not possible to disentangle
the different subcomponents of the activation due to limitations of this
experimental design”, Bauer Alfredson et al., 2004, p. 165). Given the aim
to explore emotional reactions to music, one would expect the manipulation
of musical stimuli to be crucial to the task. Yet, stimuli used so far have been
selected non-systematically (e.g., instrumental songs of the rembetika style,
joyful dance tunes, listener-selected music). The fact that different studies
have reported activations of different brain regions does suggest that different
mechanisms were involved. But after the fact, there is no way of knowing.
This shows that musical emotions cannot be studied without regard to how they
were evoked. On the other hand, if researchers could manipulate induction
mechanisms in future listening experiments, they would be better able to
explain the obtained brain activation patterns. Indeed, to the extent that we
can obtain systematic relations among mechanisms and brain regions, we
might eventually be able to discriminate between the mechanisms based on
brain indices alone. To facilitate studies of music and emotion, we should try
to develop standard paradigms and tasks that reliably evoke specific emotions
in listeners through each of the mechanisms mentioned earlier. (This would
be somewhat analogous to the different tasks used to measure distinct
memory systems; e.g., Tulving, 1983).

Implications for therapeutic applications


The novel framework could also have implications for therapy that involves
sound, in one way or another. For example, the various mechanisms are related
to a number of different methods in music therapy (e.g., Bunt & Hoskyns,
2002) which involve music in procedures to accomplish different aims.
One of the six mechanisms in the framework is already used systematically
in music therapy – namely, the visual imagery mechanism. Helen Bonny
developed a method, Guided Imagery and Music (GIM), where a “traveler”
is invited to “share” his or her images as they are experienced in real time
during a pre-programmed sequence of music (Bonny & Savary, 1973). Such
music-induced imagery may help to facilitate the expression, identification,
and experience of specific emotions in patients. Imagery may also yield a
state of deep relaxation, with health benefits such as reduced cortisol levels
(McKinney, Antoni, Kumar, Tims, & McCabe, 1997). To help to activate
the visual imagery mechanism, one may choose music with characteristics
that seem especially effective in stimulating vivid imagery, such as repetition,
predictability in melodic, harmonic, and rhythmic elements, and slow tempo
(McKinney & Tims, 1995). In order to further increase the relaxing effects of
the music – if that is the intended effect of the therapy – one can make sure
to choose music with features opposite to those that activate the brain stem
reflex mechanism (and increase the arousal level of the listener). That is, one
should select music that features a slow tempo, low sound level, soft timbre,
and, in particular, no sudden changes in the musical parameters.
There are other, less obvious ways in which mechanisms can be involved
in specific kinds of music therapy. One of the oldest methods in music therapy
is the so-called “iso principle” (Altshuler, 1954). This means that, in order to
modify the mood of the patient, one must first match the patient’s mood using
music with the same emotional expression (e.g., “sad” music if the patient is
feeling “sad”), and then begin to gradually change the expression of the music
in the direction of the desired mood (e.g., gradually increase the tempo and
pitch to aim for “joy”). In this process, the emotional contagion mechanism
is of central importance. Music therapists may find it useful to take stock of
current theory on this mechanism in order to maximize the effectiveness of
the method. For instance, if musical expression is based on its similarities
with emotional speech, as hypothesized in the new framework, then music
therapists should create musical stimuli based on similar emotion-specific
patterns of acoustic parameters in speech and music (Juslin & Laukka, 2003,
Table 7), perhaps also using timbres that are particularly “voice-like”, such as
the cello and the violin. This might increase the chances that the therapist is
able to “steer” the patient’s mood in the desired direction.
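The gradual shift that the iso principle prescribes can be pictured as a ramp of a musical parameter from the matched starting value toward a target value. A minimal sketch, in which the tempo figures are arbitrary examples rather than clinically derived values:

```python
def iso_ramp(start, target, steps):
    """Linearly interpolate a musical parameter (e.g., tempo in BPM) from
    a value matched to the patient's current mood toward a target value,
    mirroring the gradual change prescribed by the iso principle."""
    return [start + (target - start) * i / (steps - 1) for i in range(steps)]

# From a "sad" 60 BPM toward a "joyful" 120 BPM over five stages:
print(iso_ramp(60, 120, 5))  # → [60.0, 75.0, 90.0, 105.0, 120.0]
```

The same ramp could of course be applied to pitch level or any other expressive parameter the therapist wishes to steer.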
Music can also be used in therapy to create new and positive associations
to stimuli that evoke, say, anxiety in patients. Such procedures involve the
evaluative conditioning mechanism, and theoretical knowledge about the
characteristics of this mechanism (see Table 1) might help therapists to
design more efficient procedures. For instance, to make the conditioning
procedure as effective as possible, perhaps the conditioning should occur
outside awareness – for instance, by giving the patient some task that requires
his or her attention. (Research has indicated that evaluative conditioning is
more easily established outside conscious awareness; see De Houwer et al.,
2005; Razran, 1954.)

The above discussion focuses on how sounds of music can be
beneficial in therapy. Sometimes, however, the sounds themselves are part
of the problem. As I mentioned in the introduction, PTSD involves cases
where specific sounds are perceived as extremely unpleasant because they
are somehow associated with traumatic and stressful events in the patient’s
past. In such instances, the two mechanisms based on memory (evaluative
conditioning, episodic memory) are most relevant. Memories associated with
sounds may give rise to anything from nostalgic recognition to sheer fear.
Studies of fear conditioning have provided examples of “one-trial-learning”
to simple sounds (LeDoux, 2000). Further studies are needed to clarify the
exact degree of generality and discriminability of sounds involved in EC.
How similar to the sound occurring in the original traumatic event must a
subsequent sound be in order to elicit the conditioned response?
Note that the two memory mechanisms (evaluative conditioning,
episodic memory) have quite different implications for treatments, as
suggested by the various hypotheses presented earlier. If a negative reaction
to a sound is based on the largely sub-conscious evaluative conditioning
mechanism, treatments that focus on conscious thought patterns may have
little effect. Instead, the treatment of choice to get rid of the negative emotion
associated with the sound may be to create new associations to the sound
through a new conditioning procedure with repeated pairing of the sound to
a positive stimulus. Episodic memories evoked by sounds, on the other hand,
are presumably more amenable to a re-structuring and conscious reasoning
about the previous event.
Much more could be said about the therapeutic implications of the
new framework for musical emotions outlined in this chapter. However, even
the simple examples above may go some way towards demonstrating that
research on music and emotion might be useful also in our understanding
of how sounds evoke pleasant and unpleasant emotions more generally.
Hopefully, the present volume may help to motivate further collaboration
among researchers and practitioners in this important endeavor.2

2 This research was supported by the Swedish Research Council.


References
Adolphs, R. (2002). Neural systems for recognizing emotion. Current Opinion in
Neurobiology, 12, 169-177.
Adolphs, R., Damasio, H., & Tranel, D. (2002). Neural systems for recognition of
emotional prosody: A 3-D lesion study. Emotion, 2, 23-51.
Altshuler, I. M. (1954). The past, present and future of music therapy. In E. Podolsky
(Ed.), Music therapy (pp. 24-35). New York: Philosophical Library.
Band, J. P., Quilter, S. M., & Miller, G. M. (2001-2002). The influence of selected
music and inductions on mental imagery: Implications for practitioners of
Guided Imagery and Music. Journal of the Association for Music and Imagery, 8,
13-33.
Bauer Alfredson, B., Risberg, J., Hagberg, B., & Gustafson, L. (2004). Right temporal
lobe activation when listening to emotionally significant music. Applied
Neuropsychology, 11, 161-166.
Baumgartner, H. (1992). Remembrance of things past: Music, autobiographical
memory, and emotion. Advances in Consumer Research, 19, 613-620.
Berlyne, D. E. (1971). Aesthetics and psychobiology. New York: Appleton Century
Crofts.
Blair, M. E., & Shimp, T. A. (1992). Consequences of an unpleasant experience
with music: A second-order negative conditioning perspective. Journal of
Advertising, 21, 35-43.
Blonder, L. X. (1999). Brain and emotion relations in culturally diverse populations.
In A. L. Hinton (Ed.), Biocultural approaches to the emotions (pp. 275-296).
Cambridge, UK: Cambridge University Press.
Blood, A. J., & Zatorre, R. J. (2001). Intensely pleasurable responses to music
correlate with activity in brain regions implicated in reward and emotion.
Proceedings of the National Academy of Sciences, 98, 11818-11823.
Blood, A. J., Zatorre, R. J., Bermudez, P., & Evans, A. C. (1999). Emotional responses
to pleasant and unpleasant music correlate with activity in paralimbic brain
regions. Nature Neuroscience, 2, 382-387.
Bonny, H. L., & Savary, L. M. (1973). Music and your mind. New York: Station Hill.
Botvinick, M. M., Cohen, J. D., & Carter, C. S. (2004). Conflict monitoring and
anterior cingulate cortex. Trends in Cognitive Sciences, 8, 539-546.

Brown, C. M., Hagoort, P., & Kutas, M. (2000). Postlexical integration processes
in language comprehension: Evidence from brain-imaging research. In M.
S. Gazzaniga (Ed.), The new cognitive neurosciences (2nd ed.). (pp. 881-895).
Cambridge, MA: MIT Press.
Brown, S., Martinez, M. J., & Parsons, L. M. (2004). Passive music listening
spontaneously engages limbic and paralimbic systems. Neuroreport, 15, 2033-2037.
Buchanan, T. W., Lutz, K., Mirzazade, S., Specht, K., Shah, N. J., Zilles, K., &
Jäncke, L. (2000). Recognition of emotional prosody and verbal components
of spoken language: An fMRI study. Cognitive Brain Research, 9, 227-238.
Budd, M. (1985). Music and the emotions. The philosophical theories. London:
Routledge.
Bunt, L., & Hoskyns, S. (Eds.). (2002). The handbook of music therapy. London:
Routledge.
Cabeza, R., & Nyberg, L. (2000). Imaging cognition II: An empirical review of 275
PET and fMRI studies. Journal of Cognitive Neuroscience, 12, 1-47.
Cancelliere, A. E. B., & Kertesz, A. (1990). Lesion localization in acquired deficits of
emotional expression and comprehension. Brain and Cognition, 13, 133-147.
Caplan, D., Alpert, N., & Waters, G. (1998). Effects of syntactic structure and
propositional number on patterns of regional cerebral blood flow. Journal of
Cognitive Neuroscience, 10, 541-542.
Carlsen, J. C. (1981). Some factors which influence melodic expectancy.
Psychomusicology, 1, 12-29.
Charlot, V., Tzourio, N., Zilbovicius, M., Mazoyer, B., & Denis, M. (1992).
Different mental imagery abilities result in different regional cerebral blood
flow activation patterns during cognitive tasks. Neuropsychologia, 30, 565-
580.
Damasio, A. R., Grabowski, T. J., Bechara, A., Damasio, H., Ponto, L. L. B., Parvizi,
J., & Hichwa, R. D. (2000). Subcortical and cortical brain activity during
the feeling of self-generated emotions. Nature Neuroscience, 3, 1049-1056.
Davidson, R. J. (1995). Cerebral asymmetry, emotion, and affective style. In R. J.
Davidson & K. Hugdahl (Eds.), Brain asymmetry (pp. 361-387). Cambridge,
MA: MIT Press.
De Houwer, J., Baeyens, F., & Field, A. P. (2005). Associative learning of likes and
dislikes: Some current controversies and possible ways forward. Cognition &
Emotion, 19, 161-174.

De Houwer, J., Thomas, S., & Baeyens, F. (2001). Associative learning of likes and
dislikes: A review of 25 years of research on human evaluative conditioning.
Psychological Bulletin, 127, 853-869.
Diaz, E., Ruiz, G., & Baeyens, F. (2005). Resistance to extinction of human evaluative
conditioning using a between-subjects design. Cognition & Emotion, 19, 245-
268.
Di Pellegrino, G., Fadiga, L., Fogassi, L., Gallese, V., & Rizzolatti, G. (1992).
Understanding motor events: A neurophysiological study. Experimental Brain
Research, 91, 176-180.
Dowling, W. J., & Harwood, D. L. (1986). Music cognition. New York: Academic Press.
Fanselow, M. S., & Poulos, A. M. (2005). The neuroscience of mammalian associative
learning. Annual Review of Psychology, 56, 207-234.
Farah, M. J. (2000). The neural bases of mental imagery. In M. S. Gazzaniga (Ed.),
The new cognitive neurosciences (2nd ed.). (pp. 965-974). Cambridge, MA:
MIT Press.
Gabrielsson, A. (2001). Emotions in strong experiences with music. In P. N. Juslin &
J. A. Sloboda (Eds.), Music and emotion: Theory and research (pp. 431-449).
New York: Oxford University Press.
Ganis, G., Thompson, W. L., Mast, F., & Kosslyn, S. M. (2004). The brain’s mind
images: The cognitive neuroscience of mental imagery. In M. S. Gazzaniga
(Ed.), The cognitive neurosciences (3rd ed.). (pp. 931-941). Cambridge, MA: MIT
Press.
Gärdenfors, P. (2003). How homo became sapiens: On the evolution of thinking. New
York: Oxford University Press.
George, M., Parekh, P., Rosinsky, N., Ketter, T., Kimbrell, T., Heilman, K.,
Herscovitch, P., & Post, R. (1996). Understanding emotional prosody
activates right hemisphere regions. Archives of Neurology, 53, 665-670.
Goldenberg, G., Podreka, I., Steiner, M., Franzén, P., & Deecke, L. (1991).
Contributions of occipital and temporal brain regions to visual and acoustic
imagery: A SPECT study. Neuropsychologia, 29, 695-702.
Gosselin, N., Samson, S., Adolphs, R., Noulhiane, M., Roy, M., Hasboun, D.,
Baulac, M., & Peretz, I. (2006). Emotional responses to unpleasant music
correlate with damage to the parahippocampal cortex. Brain, 129, 2585-
2592.
Hatfield, E., Cacioppo, J. T., & Rapson, R. L. (1994). Emotional contagion. New
York: Cambridge University Press.
Johnsrude, I. S., Owen, A. M., White, N. M., Zhao, W. V., & Bohbot, V. (2000).
Impaired preference conditioning after anterior temporal lobe resection in
humans. Journal of Neuroscience, 20, 2649-2656.
Joseph, R. (2000). Neuropsychiatry, neuropsychology, clinical neuroscience. New York:
Academic Press.
Juslin, P. N., & Laukka, P. (2003). Communication of emotions in vocal expression
and music performance: Different channels, same code? Psychological Bulletin,
129, 770-814.
Juslin, P. N., Liljeström, S., Västfjäll, D., Barradas, G., & Silva, A. (2008). Emotional
reactions to music in everyday life: Music, listener, and situation. Emotion, 8,
668-683.
Juslin, P. N., & Sloboda, J. A. (Eds.). (2001). Music and emotion: Theory and research.
New York: Oxford University Press.
Juslin, P. N., & Västfjäll, D. (2008). Emotional responses to music: The need to
consider underlying mechanisms. Behavioral and Brain Sciences 31, 559-575.
Kinomura, S., Larsson, J., Gulyás, B., & Roland, P. E. (1996). Activation by attention
of the human reticular formation and thalamic intralaminar nuclei. Science,
271, 512-515.
Koelsch, S. (2005). Investigating emotion with music: Neuroscientific approaches.
Annals of the New York Academy of Sciences, 1060, 412-418.
Koelsch, S., Fritz, T., von Cramon, D. Y., Müller, K., & Friederici, A. D. (2006).
Investigating emotion with music: An fMRI study. Human Brain Mapping,
27, 239-250.
Koelsch, S., Schroger, E., & Gunter, T. C. (2002). Music matters: Preattentive
musicality of the human brain. Psychophysiology, 39, 38-48.
Lane, R. D. (2000). Neural correlates of conscious emotional experience. In R. D.
Lane & L. Nadel (Eds.), Cognitive neuroscience of emotion (pp. 345-370). New
York: Oxford University Press.
LeDoux, J. (2000). Cognitive-emotional interactions: Listen to the brain. In R. D.
Lane & L. Nadel (Eds.), Cognitive neuroscience of emotion (pp. 129-155). New
York: Oxford University Press.
Lerdahl, F., & Jackendoff, R. (1983). A generative theory of tonal music. Cambridge,
MA: MIT Press.
Levinson, J. (1997). Emotion in response to art. In M. Hjort & S. Laver (Eds.),
Emotion and the arts (pp. 20-34). New York: Oxford University Press.

Lundqvist, L.-O., Carlsson, F., Hilmersson, P., & Juslin, P. N. (2009). Emotional
responses to music: Experience, expression, and physiology. Psychology of
Music, 37, 61-90.
McKinney, C. H., Antoni, M. H., Kumar, M., Tims, F. C., & McCabe, P. M. (1997).
Effects of Guided Imagery and Music (GIM) therapy on mood and cortisol
in healthy adults. Health Psychology, 16, 390-400.
McKinney, C. H., & Tims, F. C. (1995). Differential effects of selected classical
music on the imagery of high versus low imagers: Two studies. Journal of
Music Therapy, 32, 22-45.
Maess, B., Koelsch, S., Gunter, T. C., & Friederici, A. D. (2001). Musical syntax is
processed in Broca’s area: A MEG study. Nature Neuroscience, 4, 540-545.
Menon, V., & Levitin, D. J. (2005). The rewards of music listening: Response and
physiological connectivity of the mesolimbic system. Neuroimage, 28, 175-
184.
Meyer, L. B. (1956). Emotion and meaning in music. Chicago: Chicago University
Press.
Meyer, L. B. (2001). Music and emotion: distinctions and uncertainties. In P. N.
Juslin & J. A. Sloboda (Eds.), Music and Emotion: Theory and Research (pp.
309-337). New York: Oxford University Press.
Murphy, F. C., Nimmo-Smith, I., & Lawrence, A. D. (2003). Functional
neuroanatomy of emotions: A meta-analysis. Cognitive, Affective, & Behavioral
Neuroscience, 3, 207-233.
Neumann, R., & Strack, F. (2000). Mood contagion: The automatic transfer of mood
between persons. Journal of Personality and Social Psychology, 79, 211-223.
Nyberg, L., McIntosh, A. R., Houle, S., Nilsson, L.-G., & Tulving, E. (1996).
Activation of medial-temporal structures during episodic memory retrieval.
Nature, 380, 715-717.
Osborne, J. W. (1980). The mapping of thoughts, emotions, sensations, and images
as responses to music. Journal of Mental Imagery, 5, 133-136.
Panksepp, J. (1998). Affective neuroscience. New York: Oxford University Press.
Patel, A. D. (2003). Language, music, syntax, and the brain. Nature Neuroscience, 6,
674-681.
Phan, K. L., Wager, T., Taylor, S. F., & Liberzon, I. (2002). Functional neuroanatomy
of emotion: A meta-analysis of emotion activation studies in PET and fMRI.
NeuroImage, 16, 331-348.

Quittner, A., & Glueckauf, R. (1983). The facilitative effects of music on visual
imagery: a multiple measures approach. Journal of Mental Imagery, 7, 105-120.
Razran, G. (1954). The conditioned evocation of attitudes: cognitive conditioning?
Journal of Experimental Psychology, 48, 278-282.
Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of
Neuroscience, 27, 169-192.
Sacchetti, B., Scelfo, B., & Strata, P. (2005). The cerebellum: Synaptic changes and
fear conditioning. The Neuroscientist, 11, 217-227.
Schacter, D. L., Alpert, N. M., Savage, C. R., Rauch, S. L., & Alpert, M. S. (1996).
Conscious recollection and the human hippocampal formation: Evidence
from positron emission tomography. Proceedings of the National Academy of
Science, 93, 321-325.
Sloboda, J. A. (1992). Empirical studies of emotional response to music. In M. Riess-
Jones & S. Holleran (Eds.), Cognitive bases of musical communication (pp.
33-46). Washington, DC: American Psychological Association.
Steinbeis, N., Koelsch, S., & Sloboda, J. A. (2006). The role of harmonic expectancy
violations in musical emotions: Evidence from subjective, physiological, and
neural responses. Journal of Cognitive Neuroscience, 18, 1380-1393.
Stromswold, K., Caplan, D., Alpert, N., & Rauch, S. (1996). Localization of syntactic
comprehension by positron emission tomography. Brain and Language, 52,
452-473.
Thompson, R. F. (2005). In search of memory traces. Annual Review of Psychology,
56, 1-23.
Tillman, B., Janata, P., & Bharucha, J. J. (2003). Activation of the inferior frontal
cortex in musical priming. Cognitive Brain Research, 16, 145-161.
Tranel, D. (2000). Electrodermal activity in cognitive neuroscience: Neuroanatomical
and neuropsychological correlates. In R. D. Lane & L. Nadel (Ed.), Cognitive
neuroscience of emotion (pp. 192-224). New York: Oxford University Press.
Tucker, D. M, & Frederick, S. L. (1989). Emotion and brain lateralization. In
H. Wagner et al. (Eds.) Handbook of social psychophysiology (pp. 27-70).
Chichester, UK: Wiley and Sons.
Tulving, E. (1983). Elements of episodic memory. Oxford, UK: Oxford University
Press.
Västfjäll, D. (in press). Affective reactions to sounds without meaning. Cognition &
Emotion.

34
Waterman, M. (1996). Emotional responses to music: Implicit and explicit effects in
listeners and performers. Psychology of Music, 24, 53-67

Correspondence should be addressed to


Dr. Patrik N. Juslin, Department of Psychology, Uppsala University, Box 1225,
SE-751 42 Uppsala, Sweden,
e-mail: [email protected]

Auditory Problems

- Not only an Issue of Impaired Hearing

Ulf Rosenhall

Department of Audiology, Karolinska University Hospital, and Karolinska
Institutet, Department of Clinical Neuroscience, Section of ENT and
Hearing, Stockholm, Sweden

Introduction
The auditory system is phylogenetically a very old sensory system. It provides
information for environmental orientation and serves as a warning system.
Vision is the most important sensory system for orientation, but hearing
has a complementary function: the auditory system scans the surroundings
in all directions, day and night, and in terrain where the view is blocked.
For this purpose it must be able to detect short, deviant sounds against a
background of ambient noise. Moreover, the auditory system has very accurate
directional hearing, above all in the horizontal plane. The sound localisation
process is based on dichotic hearing, a fusion of the auditory input from both
ears. Minor differences in arrival time, phase, and intensity of sounds between
the two ears are the physiological basis for directional hearing.
The most important function of the auditory system for us is communication.
Hearing constitutes the afferent branch of oral communication, which is the
foundation of social contact. The ability to develop spoken language in early
childhood depends on normal hearing; the presence of a hearing impairment,
even of moderate extent, is detrimental to language development. The auditory
system must be able to detect and register the subtle and rapidly changing
patterns of speech with regard to frequency, intensity, and rhythm, often in
the presence of ambient background noise. This on-line process represents a
formidable challenge to perception, cognition and working memory, and it is
executed at all levels of the auditory system.
Still another function of the auditory system is to provide aesthetic
qualities: music is important to most persons, and we also appreciate sounds
in nature such as bird song, the wind blowing in the canopy of a tree, and
the sound of the sea on a beach. These sensory inputs give us refreshment and
positive experiences.
Finally, after a day in our normal soundscape at work and in leisure
time, we need to relax in a quiet environment at home.

Anatomy and physiology of the auditory system – a short summary
The auditory system has a peripheral part and a central part. The peripheral
part is constituted by the external and middle ear, the inner ear, and the
cochlear nerve. The central part consists of the auditory pathway, which starts
in the pons, passes the midbrain, and ends in the cortex (Figure 1).

Figure 1. The peripheral part of auditory system consists of the external,
middle and inner ear, and the cochlear nerve (blue ellipses). The central part
of the auditory system consists of the auditory pathway (red ellipse), and
cortical areas (green ellipses). There are connections to other cortical areas
and commissural connections between the hemispheres.

The outer part of the ear consists of the external ear and canal, the tympanic
membrane, and the middle ear with its three ossicles. Sound waves are
conveyed from the air to the inner ear by this passive, mechanical sound-
transmitting system. It provides an amplification of about 30 dB, and by this
mechanism a physical obstacle is overcome when airborne sound waves are
transformed to pressure waves in the peri- and endolymph fluids of the cochlea
in the inner ear.
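The 30 dB figure can be checked with elementary decibel arithmetic: a power ratio corresponds to 10·log10(ratio) dB, and a sound-pressure ratio to 20·log10(ratio) dB. A minimal sketch (illustrative only; the conversion formulas are standard acoustics, not taken from this chapter):

```python
import math

def db_from_power_ratio(ratio: float) -> float:
    """Decibels corresponding to a power (intensity) ratio."""
    return 10.0 * math.log10(ratio)

def pressure_ratio_from_db(db: float) -> float:
    """Sound-pressure ratio corresponding to a dB value (20*log10 convention)."""
    return 10.0 ** (db / 20.0)

# A 30 dB middle-ear gain corresponds to a 1000-fold power ratio,
# or roughly a 31.6-fold sound-pressure ratio.
power_ratio = 10.0 ** (30.0 / 10.0)
pressure_ratio = pressure_ratio_from_db(30.0)
```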
The cochlea is the sense organ where the mechanical energy of the
sound waves is transformed into nerve impulses by approximately 15 000 hair
cells. A single row of inner hair cells (IHCs) serves as mechanoreceptors. Three
rows of outer hair cells (OHCs) modulate the sensitivity of the cochlea and
sharpen the frequency discrimination.

Figure 2. The cochlea with the sense organ, the organ of Corti. There are
two types of hair cells, IHCs in one row, and OHCs in three rows. The hair
bundles of the IHCs are slightly curved (upper row in the photomicrograph),
and W-shaped in the OHCs (lower row).

An active, nonlinear process in the cochlea is mediated by the OHCs and
facilitates the perception of the complex sound patterns in speech. These
patterns are characterized by rapid sound variations combined with slow
modulations caused by speech syllables, words and intonation. Weak sounds,
otoacoustic emissions (OAEs), are generated in the normal cochlea, and they
reflect the active, motile function of the OHCs.


Figure 3. Different functions of the IHCs and OHCs. The IHCs function at
moderately intense levels and can perceive complex sound patterns. The OHCs
sharpen the frequency discrimination.

Every hair cell has a hair bundle consisting of about 100 kinocilia (Figure
2). At the base of the hair cell there are synaptic nerve endings from the first
auditory neuron; 85–90% of all afferent neurons (in all about 30 000) form
synaptic contacts with the IHCs. A few hundred efferent nerve fibers from the
brain stem make synaptic contacts with the OHCs and mediate a modulating
function from the brainstem to the cochlea, the efferent medial olivocochlear
(MOC) regulatory system.
The central auditory system (CAS) consists of the auditory pathway,
with its nerve tracts and nuclei, starting at the entrance of the cochlear nerve in
the pons and passing through the midbrain and subcortical areas. In the cortex
there are primary, secondary and tertiary auditory areas. Commissural
connections between the hemispheres are present at all levels of the auditory
system, from the brainstem to the corpus callosum (Figure 1).

Impairments of threshold hearing and of speech perception
When we talk about auditory problems we most often think of hearing
impairment (HI). The presence of a HI means that sounds within a specific
frequency range are not perceived when they are presented at an intensity
level heard by a normally functioning ear. The range and extent of a HI is
measured with a “conventional” hearing test, the pure tone audiogram. A HI
can be of very variable extent, from mild to profound up to total deafness,
when the ear cannot hear any sounds even at intense levels. The frequency
range affected also varies; in the most common types of HI the high frequencies
are affected more severely than the low frequencies.
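As an illustration of how an audiogram is summarized: thresholds at the tested frequencies are averaged into a pure-tone average (PTA), which is then mapped to a severity grade. The grade boundaries below follow one common clinical convention and are an assumption of this sketch, not taken from the chapter:

```python
def pure_tone_average(thresholds_db):
    """Average hearing threshold (dB HL) over the tested frequencies."""
    return sum(thresholds_db) / len(thresholds_db)

def grade_hearing_impairment(pta_db):
    """Map a pure-tone average to a severity label.
    Boundaries are one common convention (assumed, not from the text)."""
    if pta_db <= 25:
        return "normal"
    elif pta_db <= 40:
        return "mild"
    elif pta_db <= 60:
        return "moderate"
    elif pta_db <= 80:
        return "severe"
    else:
        return "profound"
```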
Speech perception is often disturbed, in most instances in connection
with a HI. High frequency HI compromises the perception of many
consonants, with disturbed speech perception as a result. It is considerably
easier to hear speech in quiet than in background noise. Without disturbing
noise, the capacity to hear monosyllables is remarkably intact in cases of
mild to moderate high frequency HI, but at a certain level of HI speech
perception collapses (Figure 4). In background noise, a person with even a mild
high frequency HI has difficulty hearing monosyllables, and the situation
becomes still worse as the hearing impairment progresses (Figure 4).

Figure 4. Speech perception (in percent) for monosyllabic words in quiet (SPIQ)
and in background noise (SPIN), adults aged 20 to 50 years. The effect of high
frequency hearing impairment is shown in the figure. The average threshold
elevation over the high frequencies 3–6 kHz (HFPTA) is shown on the X-axis,
with normal hearing to the left and severe hearing impairment to the right. On
average, an ear with normal hearing perceives 100% of the words in quiet and
about 85% of the words in noise. High frequency HI reduces speech perception,
especially in the SPIN situation. The data have been collected from Lidén,
1954 (SPIQ); Magnusson, 1996; Barrenäs & Wikström, 2000 (SPIN).

Many patients with normal pure tone audiograms have difficulty perceiving
speech in relatively mild background noise (King-Kopetzky syndrome). The
cause of this syndrome remains unknown in most instances and may vary
from psychogenic, stress-related factors to disturbances of the CAS. In some
instances the efferent MOC system is not functioning properly.
A small group of patients with auditory neuropathy (AN) have extreme
difficulty perceiving speech even in quiet. These patients can hear sounds, but
they cannot understand speech, which sounds totally blurred to them. The
impairments in AN are complicated and poorly understood. It has been
suggested that there can be selective IHC loss, synaptic disturbances, and
lesions of the cochlear nerve. The OHC function is normal in AN.

Tinnitus
Tinnitus is defined as a sensation of a sound, or sounds, in one or both
ears, or inside the head, such as buzzing, ringing, or whistling, occurring
without an external stimulus. Tinnitus is a symptom, not a diagnosis, and its
causes vary. In many instances tinnitus is related to a concomitant HI: the
peripheral lesion triggers changes in the neural input to the CAS, which
activate unimpaired cortical and subcortical auditory centres. Severe
tinnitus is common in profound hearing impairment and total deafness, and
is regarded as a phantom sensation.
Tinnitus retraining therapy is a specific clinical method based on the
neurophysiological model of tinnitus that involves the limbic system and
the autonomic nervous system (Jastreboff, 2007). The method is aimed at
habituation of reactions evoked by tinnitus, and subsequently habituation of
the tinnitus perception. One part of the method is sound therapy, aimed at
weakening tinnitus-related neuronal activity.
Other explanatory models of tinnitus include ion channel dysfunction of
the IHCs, impaired gate control, and ephaptic transmission, “cross-talk”
between nerve fibres (Holgers & Barrenäs, 2003). The gate control theory,
originally formulated for pain, hypothesizes that physical pain is not just a
direct result of activation of pain receptor neurons but is modulated by
interactions between different neurons; an analogous mechanism has been
proposed for sound.
There is also a somatosensory model, in which non-auditory neural
input triggers tinnitus (Shore & Zhou, 2006). These patients most often have
normal hearing as measured with pure tone audiometry.
Other auditory symptoms, paracuses
Paracuses encompass a variety of disturbances of auditory perception
(Hinchcliffe, 2003). Sound intolerance constitutes one important group.
Hyperacusis is defined as abnormal intolerance of everyday sounds. It is
often seen without any deterioration of hearing sensitivity, and a common
cause is exposure to loud noise. Phonophobia, fear of sounds, is an extreme
and very disabling variant of hyperacusis, often present together with
psychiatric conditions. Misophonia is intolerance of specific sounds. Very
little is known about the neurophysiological mechanisms causing sound
intolerance; dysfunction of the efferent systems (one is the MOC system)
has been proposed, as well as abnormal gate control. Autophonia is
intolerance of the subject’s own voice. The condition has been related to a
patulous Eustachian tube caused by abnormal muscular activity.
Loudness recruitment is a non-linear increase of loudness seen in
cochlear hearing impairment. The patient has a sensorineural hearing
impairment, and suprathreshold sounds, even at levels fairly close to the
thresholds, are disturbingly loud. The condition is seen in OHC degeneration,
where the reception of sounds of mild to moderate intensity is impaired
(Figure 3). When the intensity reaches the level where the tuning curves are
flattened (the domain of IHC hearing), the reception of sounds reaches the
normal level within a narrow increase of intensity. The dynamic range of
loudness, from threshold to an uncomfortable level, is thus decreased.
Recruitment should not be labelled as hyperacusis.
Sound distortion can refer to abnormal non-linear effects of the inner
ear. The most common distortion is diplacusis, a frequency-related
disturbance in which a single tone is heard as two tones of different pitch in
one ear, or in the two ears. It is a typical phenomenon of Ménière’s disease.
Disturbed directional hearing causes no or only minor problems in most
situations. However, accurate sound localisation is useful in traffic situations
and for localising a person in a crowded place.

Conclusions
Hearing impairment is only one manifestation of lesions within the auditory
system. Other symptoms are problems hearing speech in noise, tinnitus,
hyperacusis and other phenomena related to intolerance of sounds, sound
distortion, and diplacusis (Figure 5). These manifestations of auditory
lesions often occur in combination. Lesions, diseases and disorders are most
common in the peripheral part of the auditory system, but can also appear
at higher levels. Influences from other somatosensory systems are important
in many instances, e.g. of tinnitus.

Figure 5. There is a variety of auditory symptoms, often occurring in various
combinations: hearing impairment (moderate to severe, profound, or total
deafness), impairment of perception and cognition, tinnitus, difficulty
listening in noise (KKS), hyperacusis, poor sound localization, sound
distortion, and diplacusis.

References
Barrenäs ML, Wikström I. The influence of hearing and age on speech recognition
scores in noise in audiological patients and in the general population. Ear
Hear 21, 569-77, 2000
Hinchcliffe R. In: Textbook of Audiological Medicine. Clinical Aspects of Hearing
and Balance (eds: LM Luxon, JM Furman, A Martini, D Stephens). Taylor &
Francis Group, London, pp. 579-91, 2003
Holgers KM, Barrenäs ML. The pathophysiology and assessment of tinnitus. In:
Textbook of Audiological Medicine. Clinical Aspects of Hearing and Balance
(eds: LM Luxon, JM Furman, A Martini, D Stephens). Taylor & Francis
Group, London, pp. 555-69, 2003
Jastreboff PJ. Tinnitus retraining therapy. Prog Brain Res 166, 415-23, 2007
Lidén G. Speech audiometry. Arch Otolaryngol Suppl 89, 399-403, 1954
Magnusson L. Predicting the speech recognition performance of elderly individuals
with sensorineural hearing impairment. A procedure based on the Speech
Intelligibility Index. Scand Audiol 25, 215-22, 1996
Shore SE, Zhou J. Somatosensory influence on the cochlear nucleus and beyond.
Hear Res 216-217, 90-9, 2006

The Role of Psychoacoustics for the
Research on Neuropsychiatric States

Theoretical basis of the S-Detect method

by

Sören Nielzén, Olle Olsson, Johan Källstrand and Sara Nehlstedt

Introduction
Sound is constituted by airborne pressure waves. A completely regular, simple
form is the sine wave. This may vary in its excursions – amplitude variations
– thereby representing higher or lower energy.
It may further have shorter or longer periods, which represent wave
frequencies. Sine waves of many separate frequencies may combine into
a more complex pressure wave, which then contains a sound spectrum of
frequencies.
The spectra get different contours when the separate frequencies are
depicted side by side along an abscissa. Such pictures describe what are called
frequency envelopes.
Two sine tones may have exactly the same amplitude maxima and
minima in time during their propagation through the air; they are then in phase.
Displacements between two waves may appear at any angle within the period.
Phases and amplitudes may also be depicted with regard to their envelopes
in time. The sound wave contains the basic structure for communication,
but communication functions only thanks to the analyzing powers of the
various detection systems in machines, animals and humans. Combinations
of the basic sound parameters create endless variations of sound perceptions.
In humans, these combinations importantly serve the reception of speech and
have an almost equally appreciated value in music1.
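The parameters introduced above – amplitude, frequency, phase, and the superposition of sine waves into a complex wave – can be illustrated numerically. A minimal sketch in Python; the sample rate and component frequencies are arbitrary assumptions:

```python
import math

SAMPLE_RATE = 8000  # samples per second (assumed)

def sine_wave(freq_hz, amplitude=1.0, phase_rad=0.0, duration_s=0.01):
    """Samples of a single sine wave with the given amplitude and phase."""
    n = int(SAMPLE_RATE * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE + phase_rad)
            for t in range(n)]

def mix(*waves):
    """Superpose several sine waves into one complex pressure wave."""
    return [sum(samples) for samples in zip(*waves)]

# A complex tone: a fundamental plus two partials at lower amplitudes.
complex_tone = mix(sine_wave(200.0),
                   sine_wave(400.0, amplitude=0.5),
                   sine_wave(600.0, amplitude=0.25))
```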

Clinical aspects
In the following, an abbreviated description is given of the analyzing
systems in the auditory pathway and of psychoacoustic phenomena used to
reveal various aspects of them. Such descriptions help to clarify the complexity
of stimulus-response relations and to better understand implications for
perception, cognition and other psychological effects.
Sound experiences become different when a neuropsychiatric
disturbance overshadows the natural neurophysiological and mental
system. The aim of this article is to show how psychoacoustics and
neurophysiology can be used to demonstrate aberrations of function within
the auditory system in schizophrenia and ADHD.

Sounds in experiments
The elements of sound have been studied by comparisons registered with
sophisticated, often statistical methods such as discriminant ratings,
forced-choice techniques, etc. Regarding frequency and sound pressure, it is
found that although – as for all senses – they are processed in a
logarithmic manner, they are further non-linear owing to greater discriminative
sensitivity for high-pitched tones and medium-loud sounds. The logarithmic
scales therefore have to be corrected into scales of subjective perception:
the mel scale for pitch and the sone scale for loudness. Distances in pitch
and sound pressure are measured in mel and dB (decibel). The human ear is
extremely sensitive to pitch changes and has a very low threshold for sound
pressure (10⁻¹² W/m²). It should be noted that pitch and loudness are
perceptual concepts, subject to reciprocal interactions and influences.
This means that a tone may be perceived as higher or lower when combined
with different sound pressures2.
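The mel and sone corrections are not given as formulas in the text; one widely used approximation of the mel scale, together with the standard dB SPL definition (reference pressure 20 µPa), can be sketched as follows (both formulas are common conventions, assumed here rather than taken from the chapter):

```python
import math

def hz_to_mel(f_hz):
    """One common formula for the mel (subjective pitch) scale."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

P_REF = 20e-6  # standard reference pressure, 20 micropascals

def spl_db(pressure_pa):
    """Sound pressure level in dB re 20 uPa."""
    return 20.0 * math.log10(pressure_pa / P_REF)
```

Note how the mel scale compresses high frequencies: equal steps in mel correspond to ever larger steps in Hz, reflecting the non-linear discrimination described above.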
A spectrum of many frequencies may be depicted in a sonagram, i.e. a
graph with the ordinate representing frequency and the abscissa giving time.
Complex sounds have a fundamental, which is the dominating, regular – often
the lowest – tone, and partials, which arise from even and/or uneven parts of
a sounding string or pillar of air. In speech experiments one talks of formants
as corresponding to the partials of musical sounds. The combination of the
fundamental and two formants, and their relative sound pressures, defines the
vowels in speech, while fundamentals and partials in music make voices and
instruments characteristic. Any complex sound is perceived as having a
fundamental – it is assigned one by the ear – even if no fundamental exists
by acoustic definition. It is worth mentioning that all natural sounds are
complex; the sine tone is an exception. Further, the ear constructs sound
itself and emits it from the ear-drum. As mentioned, the central nervous
system similarly constructs a fundamental when specific partials are
sounding, and we perceive this as a tone (periodic pitch, Tartini pitch,
musical pitch, virtual pitch, organ fundamental)3.

Psychoacoustic stimuli
Zwicker tones are produced by presenting white noise that has a “hole” in it,
i.e. a small range of frequencies is lacking somewhere in the middle of the
spectrum. When a subject is stimulated with it for a while (about 30 seconds)
and the stimulus is then stopped, he or she will start to hear a tone
corresponding to the hole. This is a psychoacoustic after-effect, well known to
psychophysiologists in other sensory modalities too. After-effects vary with
psychiatric conditions and are therefore valuable in the development of test
methods4.
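A Zwicker-tone stimulus of this kind can be approximated by building broadband noise from random-phase sinusoids while leaving the spectral “hole” empty. A sketch; all parameter values (sample rate, component spacing, notch limits) are assumptions:

```python
import math
import random

SAMPLE_RATE = 8000  # samples per second (assumed)

def notched_noise(low_hz, high_hz, notch=(1000.0, 1200.0),
                  duration_s=0.05, seed=1):
    """Broadband noise built from random-phase sinusoids, with a spectral
    'hole' (the notch) left empty - the kind of stimulus that can evoke
    a Zwicker tone at the notch frequency after offset."""
    rng = random.Random(seed)
    n = int(SAMPLE_RATE * duration_s)
    # Sinusoidal components every 50 Hz, skipping the notch band.
    components = [f for f in range(int(low_hz), int(high_hz), 50)
                  if not (notch[0] <= f <= notch[1])]
    out = [0.0] * n
    for f in components:
        phase = rng.uniform(0.0, 2.0 * math.pi)
        for t in range(n):
            out[t] += math.sin(2 * math.pi * f * t / SAMPLE_RATE + phase)
    # Normalize so samples stay within [-1, 1].
    scale = len(components)
    return [x / scale for x in out]
```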
Another stimulus dealing with noise in the auditory system is gap
detection. A noise is presented, followed by silence (the gap), after which the
noise continues. When the gap is sufficiently short, the subject perceives a
continuous sound. The threshold for this is normally less than 20 ms of
silence; longer thresholds indicate midbrain lesions (as in aging)5.
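A gap-detection stimulus is easy to construct: a burst of noise, a silent interval, and another burst. A sketch with assumed durations and sample rate:

```python
import random

SAMPLE_RATE = 8000  # samples per second (assumed)

def gap_stimulus(gap_ms, noise_ms=100.0, seed=0):
    """Noise - silent gap - noise, as used in gap-detection tests.
    Per the text, a gap shorter than roughly 20 ms is normally heard
    as a continuous sound."""
    rng = random.Random(seed)
    n_noise = int(SAMPLE_RATE * noise_ms / 1000.0)
    n_gap = int(SAMPLE_RATE * gap_ms / 1000.0)
    def burst():
        return [rng.uniform(-1.0, 1.0) for _ in range(n_noise)]
    return burst() + [0.0] * n_gap + burst()
```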
Masking denotes the hearing of one sound in the presence of another.
Commonly, noise is used to “cover” a tone, and this may be done
simultaneously, forward in time or backward in time in relation to the tone.
Masking is used to study the temporal and frequency resolution of hearing.
By moving the masker to the sides of the tone (within the frequency domain),
researchers have assessed regions and cell populations that are tuned to
sharper pitch perception. In this way critical bands for frequency perception
have been defined6. These are centered automatically on the pitch that first
arrives at the ear – they are so-called dynamic filters.
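Critical-band-like bandwidths can be approximated numerically. The sketch below uses the equivalent-rectangular-bandwidth (ERB) formula of Glasberg and Moore as a stand-in for the classical critical band; this particular formula is an assumption, not taken from the text:

```python
def erb_bandwidth_hz(f_hz):
    """Equivalent rectangular bandwidth (ERB) around centre frequency f_hz,
    per the Glasberg & Moore approximation (assumed, not from the text).
    The band widens with frequency, mirroring the critical-band concept."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)
```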
If any component of the noise in masking experiments is shared by the
stimulus tone, such as amplitude modulation (tremolo in music) or frequency
modulation (vibrato in music), “release of masking” occurs. Investigations of
thresholds for this are valuable for shedding light on specific problems. Binaural
(both ears) or dichotic (one stimulus in one ear and another in the other)
stimulation is used for research on specific questions related to directional
hearing, changes of thresholds, and effects on perception.
Disturbance of simultaneous masking is related to dysfunctions of
peripheral structures of the hearing system, while disturbed forward and
backward masking point to dysfunctions of the central nervous system7.
Not only noise may be used to demonstrate suppression of reactions to
sounds. A tone presented together with another is suppressed, i.e. less clearly
heard. Two-tone suppression is used experimentally in neurophysiological
experiments, e.g. to assess inhibition in neural networks and of single cells8.
Virtual pitch is an interesting psychophysical phenomenon of
importance for clearer hearing. When aliquots are added to a dull-sounding
16-foot bass voice of the organ, the bass sound is heard very clearly and
distinctly; it may even be heard without any bass pipe sounding. This
mechanism evidently serves some sound-detection function within the
hearing system. It was originally supposed to result from unresolved parts
of the pitch processing in the auditory pathway9 (the auditory system
resolves a complex sound into its partial pitch components), but has later
been proposed to be a more central process based on resolved harmonics in
the nervous system10. The virtual pitch is computed by the central nervous
system’s pitch processors, thereby integrating binaural fusion when accurate11.
Virtual pitch thus constitutes a possible stimulus for neuropsychiatric studies.
The resolution of pitches is a complex process influenced by the mass
of elements in the incoming stimulus. This may be exemplified by the “pitch
paradoxes” investigated by Diana Deutsch12, who demonstrated that pitch
judgments of ascending or descending tones are influenced by factors such
as the proximity of tones and their spectral envelopes.
The sound experience may be still more complicated when binaural
mechanisms are involved. The precedence effect offers such an example. In an
experiment where two clicks are presented with a delay between the two ears,
they will be perceived as one click coming from the side of the first-arriving
click. If the time between them is more than 12 ms, two sounds will be heard,
and if it is 2 ms or less, only one sound is perceived, interpreted as coming
from a central position in the auditory field. Preliminary experimental results
indicate that this mechanism is dysfunctional in schizophrenia13.
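The percept boundaries just described (fusion at 2 ms or less, precedence up to about 12 ms, two separate clicks beyond that) can be captured in a small classifier, of the kind one might use to label stimuli in an experiment script. A sketch following the figures given in the text:

```python
def click_pair_percept(delay_ms):
    """Predicted percept for two clicks, one per ear, separated by
    delay_ms, following the boundaries described in the text."""
    if delay_ms <= 2.0:
        # Fused image, heard at a central position.
        return "one click, central"
    elif delay_ms <= 12.0:
        # Precedence effect: the leading click dominates localization.
        return "one click, from the leading side"
    else:
        # Beyond the echo threshold: two separate clicks.
        return "two clicks"
```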
Psychoacoustic phenomena are not only advantageous to study in
connection with instantaneous events and elementary aspects of sound.
On the contrary, when discontinuous sounds with broad frequency spectra
dispersed in patterns are used as stimuli, many possibilities of ambivalent
interpretation arise that may lead the listener in diverse directions. This is
because so-called features are created by means of cross-correlations at high
levels in the central nervous system. In the owl, for example, localization is
computed in the form of a map in the matrix of cells in the superior colliculi,
and the owl reacts to its prey according to combined visual and auditory cues
represented as a “feature” in neural activity14. Humans have similar preformed
neural systems in relation to syllables and linguistic sounds15. Complex
stimulation may contain elements that elicit features which mislead the
listener, and in this way auditory illusions occur.
Consider consecutive tones played very slowly: you hear the tones one
by one. Played a bit faster, you suddenly perceive a melody. When played
very fast and with broad intervals, voices emerge, and at extreme speeds
mixtures of tones and noise seem to exist. That the perceptual apparatus
organizes the sound material into different percepts means that different
“streams” are formed. Streaming is the psychological term for the appearance
of, e.g., voices in a composition. The conditions under which streams form
were formulated by Albert Bregman in his 1990 monograph Auditory
Scene Analysis16.
Olle Olsson17 used streaming in his investigations of persons with
schizophrenia and found that clear aberrations in the perception of streaming
were connected with the disorder. This was true even for the continuity
illusion, which may arise in loud environments and means that the brain
reconstructs missing sounds. This makes it possible to hear a single message
at a noisy party, and it is therefore sometimes called “the cocktail party effect”.
Persons with schizophrenia also showed aberrations compared with
healthy subjects regarding contralateral induction. Contralateral induction
refers to the localization of sounding objects in the environment. Localization
is generally controlled by interaural time and intensity differences between
the two ears, but if, e.g., the spectral content is more or less identical, most
people will say that the sound comes from one defined sound source where the
dominating spectrum is emitted, and not judge according to the time- and
intensity-difference cues. The process is analogous to watching a person
speak on TV: you hear the voice from his or her mouth, even if the
loudspeaker is fairly far from the TV set.

Neurophysiology
In order to understand the rationale for investigating neuropsychiatric states
by auditory measures, a brief description of the principles of the neural
functions of hearing is helpful, if not necessary.
The hearing system is built up of the receptor organ, the cochlea, and its
nerve projections into the brain up to the cortex. On the way to that location
the signals pass main relay stations called nuclei. They are, in order from
bottom to top, the cochlear nucleus, the olivary complex with the trapezoid
bodies, and the inferior colliculus in the brain stem, and the medial geniculate
body in the thalamus.
In the cochlea, an elastic structure called the basilar membrane fills the
function of a sensory receptor. The sound waves are transformed into a
fluid oscillation in the cochlea, and owing to the anatomical form of the basilar
membrane and the cochlea, frequencies are spatially separated at different
places along this receptor organ. Specific receptor cells, the hair cells,
transform the physical stimulation into electrical pulses, which are transmitted
to the auditory nerve. The spatial frequency representation is preserved among
the axons (nerve fibers) of the auditory nerve, further through the whole
pathway, and even in the cortex.
This representation of different frequencies is a code that is fairly
simple in comparison with the demands placed on the nerve functions for
resolving most other tasks dealt with by the system. It must be recognized
that the auditory nervous system is highly differentiated. It contains cells that
may fire up to at least 3000 times per second, while skin receptor cells are
refractory for tenths of a second before the next firing. Further, a multitude
of specializations occur among auditory cells. Some are sensitive to frequency
ranges, some to specific loudness ranges, and some to lateralization (from
which ear, or side, signals come). There are cells that react to features, i.e.
compound nerve processing from earlier stages in the pathway, representing
e.g. calls among birds, or linguistic and musical components among humans.
All these functions depend on anatomical specializations developed
during the phylogenetic past. In the cochlear nucleus there are “bushy” cells
with trees of short dendrites (cell receptor branches) securing phase-constant
transmission. Stellate cells, on the other hand, have longer dendrites and
compound functions, and can signal with bursts containing variable spike
numbers. Fusiform cells of the dorsal part of the nucleus have long dendrites
which make them integrate signals and exert inhibition on the activity of
other cells. Up to ten types of reaction patterns of single cells within the
auditory system are known, such as “on, off, sustained, chopper, pauser”, and
so on.
The grouping of signals into features and finally percepts does seldom
rely only on one coding principle. By inhibition, facilitation, integration
and cross correlation a continuous refining of basic information takes place.
Frequency, for example, is also coded in real time, because the cells of the
auditory nerve sample the periods of the sound wave. Together with neighboring
cells they send a volley18 carrying a continuous electrical picture of the
frequency to which they are especially sensitive. At the same time they put
each frequency in the same phase, because they always fire in the rising portion
of the stimulation from the hair cells. This is called phase locking. But this is
not enough. While exerting these specialized functions they simultaneously
convey a code for loudness by changing the underlying general spike rate.
Furthermore, they are, from systems higher up, subjected to control of
the degree of synchrony among cell firings, a further instrument for
refined coding, perhaps of pitch sharpening.
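The phase-locking and volley mechanisms described above can be illustrated with a toy simulation. This is a didactic sketch, not a physiological model: the firing probability, the time step, and the quarter-cycle "rising portion" are arbitrary assumptions made for illustration.

```python
import random

def phase_locked_spikes(freq_hz, duration_s, p_fire=0.05, dt=1e-4, seed=0):
    """Toy phase locking: a fiber may fire only during the rising portion
    of the stimulus cycle, and at most once per cycle, so every spike it
    produces lands at roughly the same phase of the sound wave."""
    rng = random.Random(seed)
    spikes, last_cycle = [], -1
    for i in range(int(duration_s / dt)):
        t = i * dt
        phase = (t * freq_hz) % 1.0    # position within the cycle, 0..1
        cycle = int(t * freq_hz)
        rising = phase < 0.25          # first quarter stands in for the rising portion
        if rising and cycle != last_cycle and rng.random() < p_fire:
            spikes.append(t)
            last_cycle = cycle         # silent for the rest of this cycle
    return spikes

def cycles_marked(n_fibers, freq_hz, duration_s):
    """Volley principle: the pooled spikes of several fibers mark many more
    stimulus cycles than any single fiber can follow on its own."""
    marked = set()
    for seed in range(n_fibers):
        for t in phase_locked_spikes(freq_hz, duration_s, seed=seed):
            marked.add(int(t * freq_hz))   # which cycle this spike marked
    return len(marked)

# One fiber marks only a fraction of the 440 cycles in one second of a
# 440 Hz tone; ten fibers together mark most of them.
print(cycles_marked(1, 440.0, 1.0), cycles_marked(10, 440.0, 1.0))
```

Every spike falls at the same phase of the stimulus, so the pooled train preserves a continuous picture of the stimulus frequency even though no single fiber fires on every cycle.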
The mechanism behind phase locking has generalized counterparts
elsewhere. In the owl and the bat, for instance, locking to spatial cues occurs
via “space specific” cells19. At higher levels a diffuse border between learning
and automatic grouping (the subconscious perceptual counterpart of feature
formation) exists, and there are many types of feature locking for natural
sounds, harmonies, tones, linguistic word roots and so forth.
The olivary complex is designed to analyze directions of the incoming
sounds. This is achieved by the phase-locked firing from the two sides. An
interaural time difference results in different delays according to the angle
in front of the head. The time difference is coded, and space specific cells in
the inferior colliculus register an angle. Similarly, an angle is coded for the
intensity difference between the ears, because the head shadows one side more
or less. For humans, this works in azimuth, but for positions in the vertical
plane the frequency spectra from the two ears are compared since these
spectra are different because of filtering in the two pinnae.
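The interaural time cue can be made concrete with Woodworth's classic spherical-head approximation. This is a standard textbook formula, not one derived in this chapter, and the head radius below is an assumed typical value:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at room temperature
HEAD_RADIUS = 0.0875     # m; an assumed typical adult value

def interaural_time_difference(azimuth_deg):
    """Woodworth's spherical-head approximation: the extra path to the
    far ear produces a delay of (r / c) * (theta + sin(theta)) seconds,
    where theta is the azimuth of the source."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source straight ahead gives no delay; a source at 90 degrees gives
# roughly 0.66 ms, the order of magnitude the olivary complex resolves.
print(interaural_time_difference(0.0))                     # 0.0
print(round(interaural_time_difference(90.0) * 1000, 2))   # 0.66
```

The monotonic growth of the delay with azimuth is what lets time-difference-coded cells register an angle in the frontal plane.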
The space sensitive cells are organized in a spatial pattern, which is
another principle of the function of the auditory system. As mentioned
before, frequency is displayed in a topographical manner and so is loudness.
Detector cells for various features (amplitude modulation frequency is an
example20) are similarly organized in quasi-circular patterns in the inferior
colliculus, the medial geniculate body and the cortex. In this way spatial
maps of frequency, amplitude, space and features are organized in separate
planes, most clearly observed in the inferior colliculus. Time codes are
converted to place (of the neuronal substrate) codes.
In the auditory cortex the same mapping principle and spatial organization
are at hand, but both renewed parceling and integration occur. Speech in
bilingual persons, for example, is processed at different locations for the two
languages21. On the other hand, features are created by combination sensitive
neurons which react to specific combinations of loudness and frequency,
and of course to combinations of other parameters. A final percept arises
only after fairly long times (fractions of seconds). In the case of auditory
illusions this so-called build-up period may take up to five seconds before
the relevant information becomes conscious to the listener. Finally, the main
function of the cortex is the integration of auditory information by associative
connections to vision, memory, thinking etc.

Preparatory studies
Auditory illusions and schizophrenia were studied at the Psychoacoustic
Laboratory of the Psychiatric Clinic in Lund by Olle Olsson22. The aim
of the studies was to assess aberrances of perception between persons with
schizophrenia and healthy subjects. It was further anticipated that such
aberrances could be assessed with contemporary objective measures from
neurophysiology. This should ultimately lead to the goal of offering an
objective method in support of diagnosis and treatment monitoring in
psychiatry.
The aims were based on emerging evidence that schizophrenia is a
neuropsychiatric state in its own right. It has been documented for nearly a
century that cortical degeneration occurs during the course of the disease,
which explains the gradual deterioration of cognitive functions, the dementia
characteristic of the illness. This, however, seems to be a secondary process
correlated with each psychotic exacerbation. Atrophic processes of cortical
gray and white matter and microscopic cell abnormalities are demonstrated
in several studies23. Disconnectivity between functional circuits has been
assessed with Diffusion Tensor Imaging. Disruption of anatomical structures
is compatible with clinical symptoms of discontinuity in perceiving,
thinking, talking and smooth movements regularly observed as signs of the
disease. From the experimental point of view it has been noted that adaptation
is less effective in schizophrenia than in healthy states. This dishabituation, as
it is called, applies to all senses and is an argument for theories of deficient
filters in schizophrenia. Deficient “filters” are supposed to cause an unduly
increased influx of stimuli to the central nervous system, thereby causing
chaos in mental processing.
Neurophysiological dysfunctions in prepsychotic states and in
relatives of psychotic patients point towards the existence of trait markers of
an elementary character. Thus, Freedman et al. have documented abnormality
of an electrophysiological brain wave at 50 ms post stimulus24. At 300
ms, several studies have shown abnormalities of auditory evoked responses in
the common electroencephalogram (EEG). Näätänen et al. studied another
characteristic of the EEG, the so-called “mismatch negativity”25. This is a
response which appears in healthy subjects after a short break in an otherwise
continuous sound stimulation. Persons with schizophrenia show abnormal
results in this examination. Green26 has shown that visual masking does not
function normally in schizophrenics, just as Källstrand et al. observed in
connection with auditory masking.
A study by means of fMRI (functional magnetic resonance imaging),
using tonal streaming as stimulus, was performed in Vienna in 2001–
200227. The study was very elaborate and time-consuming, and therefore only
a few subjects could be measured within the cost and time given for the study.
Nor did we reach the goal of including schizophrenic patients. However,
a valuable finding was obtained for the three subjects who validly reported
hearing the streaming and were measured in a technically reliable manner. As
seen in Figure 1, there is no activation of associative connections of the
cortex. The processing of streaming appears to take place within networks
in the brainstem and thalamus and up to the gyri of the temporal lobe.
This fact supports a supposition made within psychology, that automatic
grouping, such as streaming, is a genuine or primitive function without
elements of any conscious processing.

Figure 1: fMRI activation of streaming in a sample of healthy subjects
tentatively showing, that streaming is mainly processed in subcortical
generators and that it is not a process of associative cortical activity.

From the assertions related above, it became logical to direct attention
to the brain stem in order to search for neurophysiological correlates of
stimuli provoking automatic grouping in the nervous system.
A few postulates for the continuing work with the auditory brain stem
evoked response (see below) were put forward:
1. Persons with schizophrenia harbor constitutional traits (neurophysiological
markers) which make them susceptible to reacting with clinical signs of the
schizophrenic disorder when exposed to releasing factors.
2. The fundamental pathological processes in schizophrenia may affect any
part or parts of a sensory or motor system and are not localized at any single
spot in the nervous system.
3. The assessment of neurophysiological correlates must therefore rely on
complex stimulation and complex measurements.
4. The analysis of differential measurements must take into account both
general differences and systematic discrepancies related to single elements of
the stimuli and to the singular response patterns of the nuclei corresponding
to the peaks and troughs of the ABR waves.

There are several techniques available for investigating the functional
state of the brain stem. For practical reasons BERA (Brain stem Evoked
Response Audiometry), also denoted ABR (Auditory Brain stem Response),
was chosen. A number of different sounds were composed to serve as convenient
triggers targeting the previously revealed weaknesses in schizophrenic
perception. The sounds were patented. The stimuli and test parameters were
then organized to suit the technical requirements of ABR measurement as it
is commonly used in audiology. Once this had succeeded, the testing of
healthy subjects and persons with schizophrenia began. It has turned out that
differences in the ABR waves between the two groups reliably separate them
from each other. As hypothesized, there is no simple correspondence between
a non-specific stimulation and any particular peak or trough of the resulting
brain wave. Differences are discerned only by sophisticated analysis of the
waves with regard to correlations with specific qualities of the diversified acoustic
stimuli. The analysis was based on the huge amount of data acquired from
the measurements and was accomplished through weighting of significant
discrepancies. Results from our studies are in line with our hypotheses
and postulates and the discrimination between groups of persons with
schizophrenia and groups of healthy subjects is over 90 percent. Preliminary
findings on ADHD seem very promising. The S-Detect method, an
application of ABR for psychiatric use, has been developed as a consequence
of these studies. It is predicted to play an important role as a support tool
within clinical psychiatry.

References
Everest, F.A., Master Handbook of Acoustics. McGraw-Hill, New York, 2001.
Nielzén, S., Olsson, O., (Eds.) Structure and Perception of Electroacoustic Sound
and Music. Elsevier Science Publishers, Amsterdam, 1989.
Rosen, S., Howell, P., Signals and Systems for Speech and Hearing. Academic Press,
London, 1999.
Franosch, J-M. et al., Zwicker Tone Illusion and Noise Reduction in the Auditory
System. Physical Review Letters, vol. 90, no. 17, pp. 178103-1 to 178103-3, 2003.
Fitzgibbons, P.J., & Gordon-Salant, S., Temporal Gap Resolution in Listeners with
High-Frequency Sensorineural Hearing Loss. J. Acoust. Soc. Am., 81, 133-137,
1987.
Fletcher, H., Auditory Patterns. Rev. Mod. Phys., 12, 47-65, 1940.
Gutschalk, A. et al., Human Cortical Activity During Streaming Without Spectral
Cues Suggests a General Neural Substrate for Auditory Stream Segregation. J.
Neurosci. 2007 Nov 28;27(48):13074-81.
Källstrand, J., Montnémery, P., Nielzén, S., Olsson, O., Auditory masking experiments
in schizophrenia. Psychiatry Res. 2002 Dec 15;113(1-2):115-25.
Schouten, J.F., The Perception of Subjective Tones. Proc. K. Ned. Akad. Wet., 41,
1086-1093, 1938.
Terhardt, E., Grubert, A., Factors Affecting Pitch Judgments as a Function of Spectral
Composition. Percept Psychophys. 1987 Dec;42(6):511-4.
Pantev, C., Elbert, T., Ross, B., Eulitz, C., Terhardt, E., Binaural Fusion and the
Representation of Virtual Pitch in the Human Auditory Cortex. Hear Res.
1996 Oct;100(1-2):164-70.
Deutsch, D., Some New Musical Paradoxes. In Nielzén, S., Olsson, O., (Eds.)
Structure and Perception of Electroacoustic Sound and Music. Elsevier
Science Publishers, Amsterdam, 1989.
Fristedt Nehlstedt, S., A Study of the Precedence Effect in a Sample of Auditory
Hallucinating Schizophrenics, Research Report for M.Sc., Dpt of Audiology,
Lund University, 2004.
Knudsen, E. I., Experience Shapes Sound Localization and Auditory Unit Properties
During Development in the Barn Owl. In Auditory Function, Neurobiological
Bases of Hearing. Eds.: Edelman G.M., Gall W.E. and Cowan W. M., John
Wiley and Sons, New York, 1985.

Mattingly, G. M. and Liberman, A. M., Specialized Perceiving Systems for Speech and
Other Biologically Significant Sounds. In Auditory Function, Neurobiological
Bases of Hearing. Eds.: Edelman G.M., Gall W.E. and Cowan W. M., John
Wiley and Sons, New York, 1985.
Bregman, A. S., Auditory Scene Analysis. ”A Bradford Book”, MIT Press, Cambridge
Mass., London, 1990.
Olsson, O., Psychoacoustics and Hallucinating Schizophrenics; A Psychobiological
Approach to Schizophrenia. Thesis, Lund 2000.
Wever, E.G., Theory of Hearing, Wiley, New York, 1949.
Suga, N., Feature Extraction in the Auditory System of Bats. In Basic Mechanisms in
Hearing Møller, ed., pp. 675-744, Academic, New York, 1973.
Ehret, G., Merzenich, M.M.,Complex sound analysis (frequency resolution, filtering
and spectral integration) by single units of the inferior colliculus of the cat.
Brain Res. 1988 Apr-Jun; 472(2):139-63.
Öjemann, G.A., Models of the brain organization for higher integrative functions
derived with electrical stimulation techniques. Hum Neurobiol. 1982;
1(4):243-9.
Olsson, O., Psychoacoustics and Hallucinating Schizophrenics; A Psychobiological
Approach to Schizophrenia. Thesis, Lund 2000.
Stephan, K.E., Baldeweg, T., Friston, K.J., Synaptic plasticity and dysconnection in
schizophrenia. Biol Psychiatry. 2006 May 15;59(10):929-39.
Freedman, R., Adler, L.E., Gerhardt, G.A., Waldo, M., Baker, N., Rose, G.M.,
Drebing, C., Nagamoto, H., Bickford-Wimer, P., Franks, R., Neurobiological
studies of sensory gating in schizophrenia. Schizophr Bull. 1987;13(4):669-78.
Todd, J., Michie, P.T., Schall, U., Karayanidis, F., Yabe, H., Näätänen, R., Deviant
matters: duration, frequency, and intensity deviants reveal different patterns of
mismatch negativity reduction in early and late schizophrenia. Biol Psychiatry.
2008 Jan 1;63(1):58-64.
Green, M.F., Mintz, J., Salveson, D., Nuechterlein, K.H., Breitmeyer, B., Light,
G.A., Braff, D.L., Visual masking as a probe for abnormal gamma range activity
in schizophrenia. Biol Psychiatry. 2003 Jun 15;53(12):1113-9.
Nielzén, S., Geissler, A., Olsson, O., Lanzenberger, R., Tahamtan, N., Gartus, A.,
Deecke, L., Källstrand, J., Beisteiner, R. CNS-activation with experience of
auditory streaming. Unpublished manuscript.

Why Noise Improves Memory in
ADHD Children

Sverker Sikström
Lund University Cognitive Science, Lund University

Göran Söderlund
Department of Linguistics, Stockholm University, and School of Psychology,
University of Southampton, UK

Please address correspondence to:


Sverker Sikström, Lund University Cognitive Science, Kungshuset,
Lundagård, S-222 22 Lund.
Email: [email protected], phone: 0046-70-3614333

Abstract
Noise is typically conceived as detrimental to cognitive performance.
Contrary to this notion, recent studies have found that auditory noise may
in some cases improve cognition. Recent data and theories suggest that this
beneficial effect of noise depends on several factors, including the type of task
being performed and the noise level. More importantly, whereas some groups of
children improve their performance during noise, other groups decline. From our
point of view these findings can be accounted for in a neurocomputational
theory in which external noise interacts with environmental factors and
individual dopamine levels, such that well controlled levels of noise may be
beneficial for cognitive performance.

Introduction to noise-induced enhancements of cognitive performance
Noise is typically considered to be deleterious for cognitive functioning. Under
most circumstances cognitive processing is easily disturbed by noise from the
environment and by non-task distractors, an effect that has been known for a
long time (Broadbent, 1954, 1957, 1958a, 1958b). The effect of distraction is
believed to be due to competition for attentional resources between the target
stimuli and the distractor, i.e., the distractor removes attention from the target
task. Repeated research on this topic has demonstrated this finding to hold
across a wide variety of target tasks, distractors and participant populations
(Belleville et al., 2003; Boman et al., 2005; Hygge et al., 2003; Rouleau
& Belleville, 1996; Shidara & Richmond, 2005). Most experiments since
Broadbent’s day have dealt with the negative effects of noise and distraction.
Recently, however, in contrast to the main body of evidence regarding
distractors and noise, the opposite has been shown. Two studies were able
to demonstrate that under certain circumstances ADHD participants could
benefit from auditory, task-irrelevant noise presented concurrently with
the target task (Abikoff et al., 1996; Gerjets et al., 2002). This finding is
particularly surprising because persons with attentional problems, for example
children with attention deficit hyperactivity disorder (ADHD), are known to
be more vulnerable to distraction than normal control children
(Blakeman, 2000; Brodeur & Pond, 2001; Higginbotham & Bartling,
1993). These studies did not, however, provide a satisfactory theoretical
account of why noise was beneficial for cognitive performance. Our research
has recently extended these findings and suggested a theoretical framework
for understanding these apparently contradictory results. We showed that
auditory stimulation had different effects on the memory performance
of children with ADHD and control children (Söderlund et al., 2007).
These effects were replicated in two studies for children with sub-clinical
attentional problems (Söderlund et al., in progress). In this chapter we review
our finding that the noise-induced improvements in cognitive performance
can be accounted for by a statistical phenomenon that occurs in threshold-
based systems, called stochastic resonance. We suggest that auditory noise can,
under certain prescribed circumstances, improve attention and cognitive
performance in inattentive children. We review a model and findings that
show a link between noise stimulation and cognitive performance. This
is accomplished in the Moderate Brain Arousal (MBA) model (Sikström &
Söderlund, 2007), which suggests a link between attention, dopamine
transmission, and external auditory noise (white noise) stimulation.

The Moderate Brain Arousal Model


Stochastic resonance (SR) is the counterintuitive phenomenon by which weak
signals that cannot be detected, because they are presented below the detection
threshold, become detectable when additional random (stochastic) noise is
added (Moss et al., 2004). SR may be conceived of as noise adding
energy to the signal and pushing it above the threshold for detection. However,
it should be noted that a beneficial effect of SR is also found when the threshold
is lower than the signal.
SR, although a paradoxical phenomenon, is well established across a
range of settings; it exists in any threshold-based system with noise and signal
that requires the passing of a threshold for the registering of a signal. Figure
1 is a representation of this phenomenon.

Figure 1: Stochastic resonance. A weak sinusoidal signal goes undetected
as it does not bring the neuron over its activation threshold. With added
noise, the same signal results in action potentials.
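The threshold scenario of Figure 1 is easy to reproduce numerically. The sketch below uses arbitrary values (signal amplitude 0.8, threshold 1.0) chosen only to make the effect visible; it is not the procedure of any study cited here:

```python
import math
import random

def sr_correlation(noise_sd, threshold=1.0, amp=0.8, periods=200,
                   samples_per_period=32, seed=1):
    """Correlation between a subthreshold sine (amp < threshold) and the
    0/1 train of threshold crossings. With no noise nothing crosses; with
    moderate noise crossings cluster at the signal peaks and the
    correlation rises; with very strong noise crossings become random
    and the correlation falls again, tracing the SR curve."""
    rng = random.Random(seed)
    n = periods * samples_per_period
    signal, detected = [], []
    for i in range(n):
        s = amp * math.sin(2 * math.pi * i / samples_per_period)
        signal.append(s)
        detected.append(1.0 if s + rng.gauss(0.0, noise_sd) > threshold else 0.0)
    # Pearson correlation between the signal and the detection train
    ms, md = sum(signal) / n, sum(detected) / n
    cov = sum((a - ms) * (b - md) for a, b in zip(signal, detected))
    vs = sum((a - ms) ** 2 for a in signal)
    vd = sum((b - md) ** 2 for b in detected)
    if vd == 0.0:
        return 0.0        # nothing ever crossed the threshold
    return cov / math.sqrt(vs * vd)

for sd in (0.0, 0.1, 0.4, 3.0):
    print(sd, round(sr_correlation(sd), 3))
```

With no noise the subthreshold signal produces no detections at all; moderate noise yields detections that track the signal, and very strong noise drowns the signal again, giving the inverted-U curve characteristic of SR.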

SR has been identified in a number of naturally occurring phenomena, and
the concept has been used to explain climate change (Benzi et al., 1982);
bistable optical systems (Gammaitoni et al., 1998); mechanoreceptors of the
crayfish (Douglass et al., 1993); and the feeding behavior of the paddlefish
(Russell et al., 1999). In particular, SR has been found in neural systems
and in behavioral data. Threshold phenomena in neural systems are linked
to the all-or-none nature of action potentials and they can be modeled by
a non-linear activation function, the sigmoid function, that estimates the
probability that a neural cell will fire (Servan-Schreiber et al., 1990). This
firing probability, or gain parameter, modifies how responsive a neural cell is to
stimulation; the higher the gain (the less random the response), the better the
performance. In humans SR has been found in different modalities, including
audition (Zeng et al., 2000), vision (Simonotto et al., 1999), and touch (Wells
et al., 2005), where moderate noise improves sensory discriminability. In fMRI
scans a moderate noise level increased neural activity in the visual cortex
(Simonotto et al., 1999).
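The role of the gain parameter is easiest to see when the sigmoid is written out. A minimal sketch of the activation function of Servan-Schreiber et al. (1990), with arbitrary example inputs:

```python
import math

def firing_probability(net_input, gain=1.0, bias=0.0):
    """Sigmoid activation function: the gain parameter sets how steeply
    the firing probability rises with net input. High gain gives a
    near-deterministic response; low gain (used to model low dopamine
    levels) gives a flatter, more random response."""
    return 1.0 / (1.0 + math.exp(-gain * (net_input - bias)))

# The same weak input (1.0) is barely distinguishable from no input
# (0.0) at low gain, but clearly separated at high gain.
for gain in (0.5, 4.0):
    off = firing_probability(0.0, gain)   # no signal present
    on = firing_probability(1.0, gain)    # weak signal present
    print(gain, round(on - off, 3))
```

A flatter curve means the cell's response carries less information about its input, which is how low gain stands in for a noisier, less reliable neural system.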
Interestingly, the SR effect is not restricted to sensory processing; SR
has been found to have an enhancing effect on higher cognitive functions as well.
Auditory noise improved the speed of arithmetic computations in a normal
population (Usher & Feingold, 2000). The amount of noise needed to induce
an SR effect on higher functions is much greater than that used in signal
detection experiments. In Usher and Feingold’s (2000) experiment noise levels
ranged between 50 and 90 dB, and performance, as measured by reaction times,
was fastest at a 77 dB noise level. Moreover, SR can be transferred to other
modalities, as when auditory noise improves visual detection (Manjarrez et al.,
2007), and it has a role in the motor system as well (Martinez et al., 2007). SR
may also play a role in patients with neurodegenerative disease, suggesting
that SR may also improve central processing (Yamamoto et al., 2005). Tactile
stochastic stimulation provided by vibrating insoles improved balance control
in the elderly (Priplata et al., 2003) and in stroke and diabetes patients (Priplata
et al., 2006), and also improved gait (speed, stride length and variability) in
Parkinson patients (Novak & Novak, 2006).

Inattention, dopamine and Stochastic Resonance


Noise-induced cognitive enhancement is of particular interest in ADHD
children, who are normally viewed as having severe problems with attention.
There are several types of attentional problems, and these problems also
depend on the subtype of ADHD (Nigg, 2005). Paradigms involving
attention deficits include delay aversion, deficits in arousal/activation
regulation, and executive function/inhibitory deficits (Castellanos & Tannock,
2002). Delay aversion is a phenomenon characterized by intolerance for
waiting and is believed to be related to difficulty in sustaining attention on
long and boring tasks (Sonuga-Barke, 2002). Poor regulation of activation
or arousal is also connected with inattention (Castellanos & Tannock,
2002), and hyperactivity may be regarded as a form of self-stimulation to
achieve a higher arousal level. Executive deficits are predominantly linked to
impairments in working memory and effortful attentional control, shown in
the difficulty of stopping an ongoing response and of shifting responses (Casey
et al., 1997).
The MBA model suggests that attentional problems arise from overly
strong reactions to environmental stimuli, caused by too low levels of
extracellular dopamine. Dopamine signaling comes in two different forms.
One form is stimulus independent, more or less continuous, and is called
tonic firing. This form determines the amount of dopamine in the extra-
cellular fluid. The second form is fast and stimulus dependent, and is called
phasic dopamine release. The tonic form modulates the phasic form via a pre-
synaptic auto-feedback mechanism. Autoreceptors in the pre-synaptic cell are
activated when the tonic level is too high and suppress spike-dependent
phasic dopamine release. However, when tonic levels are low, phasic
release increases (Grace, 1995). Too much tonic firing inhibits phasic release
and is, according to the MBA model, associated with cognitive rigidity. Low tonic
levels, in contrast, cause neuronal instability and boosted phasic responses
(Grace et al., 2007). Excessive phasic transmission could cause instability
in neuronal network activation and is related to the failure to sustain
attention, the distractibility and the excessive flexibility that are common in
ADHD. It is known that persons with ADHD have low tonic dopamine levels
(Volkow et al., 2002), and from the MBA perspective this leads to an abundance
of phasic dopamine release and to behavioral problems. In this context we prefer to view
ADHD not as a discrete category; rather, we believe that children can be
more or less likely to have the symptoms typical of ADHD, and that ADHD
should be viewed as a continuous dimension. On this view, ADHD-like
symptoms are spread throughout the population and can explain the inattention
and hyperactivity seen in normal populations as well. A major insight gained
from the MBA model is that individual differences in the level of background
noise within the neural system (linked to differences in dopamine signaling)
will be reflected in different effects of environmental noise on performance.
Simulations of dopamine in neural cells show that a neural system
with low dopamine levels requires more noise for optimal performance.
This modeling has been conducted in the MBA model, where dopamine is
manipulated by the gain parameter. The modeling shows that children with
low levels of dopamine (ADHD and inattentive) require more noise than
attentive children to perform well in cognitive tasks. Attentive children are
believed to have enough internal noise for performing well. Therefore, neural
systems with low levels of noise, as in inattention, require more external noise
for the facilitating effect of SR to be observed. Accordingly, systems with high
internal noise levels require less external noise. In this sense the individual
levels of neural noise, and the individual SR curve, influence the external
noise and performance differently. The effect of noise on performance
follows an inverted U-shaped curve. A moderate noise is beneficial for
performance whereas too little and too much noise diminish performance.
Levels of noise that enhance performance of children with low internal noise
attenuate performance for children with higher levels of internal noise. The
MBA model takes as an input an external noise and a signal that in turn
activates internal neural noise and signal. Through the SR phenomenon these
provide an output measured by cognitive performance. Thus, this provides
a straightforward prediction of noise-induced improvement in cognitive
performance in ADHD and inattentive children.
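This right-shifted inverted-U prediction can be caricatured in a few lines of code. The sketch below is only an illustration, with arbitrary units and a Gaussian curve standing in for the SR curve; it is not the simulation reported by Sikström and Söderlund (2007):

```python
import math

def performance(external_noise, internal_noise, optimum=1.0, width=0.5):
    """Toy inverted-U: performance peaks when total noise (internal +
    external) sits at an optimal level and falls off on either side."""
    total = internal_noise + external_noise
    return math.exp(-((total - optimum) ** 2) / (2 * width ** 2))

def best_external_noise(internal_noise):
    """Sweep external noise levels and return the one that maximizes
    performance for a given level of internal (neural) noise."""
    levels = [i / 100 for i in range(0, 201)]
    return max(levels, key=lambda x: performance(x, internal_noise))

# A child with little internal noise (modeling low tonic dopamine, as in
# ADHD) needs more external noise to reach the optimum than a child
# with more internal noise: the SR curve is right shifted.
print(best_external_noise(internal_noise=0.2))   # ADHD-like profile
print(best_external_noise(internal_noise=0.8))   # control-like profile
```

The same sweep also shows the other half of the prediction: the external noise level that is optimal for the low-internal-noise profile pushes the high-internal-noise profile past its optimum and lowers its performance.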
To conclude, the MBA model predicts that the dopamine system
modulates the SR phenomenon, so that the cognitive performance of
ADHD and inattentive children benefits from noisy environments. The
stochastic resonance curve is right shifted in ADHD due to lower dopamine.
The MBA model predicts that for a given cognitive task ADHD children and
inattentive children require more external noise or stimulation, compared to
control children, in order to reach optimal (i.e. moderate) brain arousal level.
This prediction is tested in three studies that are reviewed below.
Experimental support of the MBA model

The aforementioned predictions of the MBA model have been experimentally
tested in three studies built around an episodic memory task in which
participants learn word pairs. The
main manipulations were auditory noise and the grouping of children based on
ADHD status and other behavioral testing. Participants are presented with verbal
commands, simple verb-noun sentences such as “roll the ball” or “break
the match” (Nilsson, 2000). At the subsequent memory test, participants
are instructed to remember as many of the verbal commands presented as
possible. Results from the studies are summarized in figures 2 to 4 below. For
a more extensive description of study 1 see Söderlund et al. (2007), study 2
and 3 see Söderlund, Sikström and Loftesnes (in preparation).

Figure 2. Study 1: Percentage correct recall as a function of noise and group
(ADHD vs. Control). The interaction between noise and group was significant
(p = .023).

In study 1 (Söderlund et al., 2007), ADHD and normal children participated
in a word pair learning task followed by a free recall task, which was conducted
either during noise exposure or in a silent control condition. The results showed
an interaction between noise and group when medicated children were
excluded, since medication could be a possible confound (F(1,33) = 5.73,
p = .023, eta2 = .15) (see Figure 2). When the medicated group was included
in the assessment, to see if the noise effect was present in this group too, the
interaction between noise and group became stronger (F(1,40) = 8.41, p =
0.006, eta2 = .17).
Study 2 (Söderlund, Sikström and Loftesnes, in preparation) comprised
a normal population of school children, divided into groups depending on
cognitive performance. Cognitive performance was measured by teachers’
judgments of general scholastic skills at three levels: average, above average
and below average. Since the below group consisted of only four participants,
the below and average groups were merged. Figure 3A shows that the
interaction between noise and group is significant (F(1,30) = 5.92, p = 0.021,
eta2 = .14). The significant difference between groups in the no-noise
condition (t(30) = 3.67, p = .001) disappears in the noise condition
(Figure 3A).

Study 3 (Söderlund, Sikström & Loftesnes, in preparation) was an
extension and replication of study 2, and also comprised a normal
population of school children. The children were grouped according to (1)
teachers’ judgments of general school performance, (2) teachers’ judgments
of inattention/hyperactivity, and (3) the score on a Raven test. The results
are presented in Figures 3B, 4A, and 4B below; note that group sizes differ
between the figures.
In Study 3, there was a significant interaction effect between noise and
below/above groups, however, there was no interaction effect involving the
middle group (Figure 3B). Note that the memory performance level was
significantly lower for the below group as compared to the average and above
groups (F(2,48)= 8.51, p= .001).

Figure 3A. Study 2: Recall performance as a function of noise and school
performance in two groups (teachers’ judgments: above N = 12, below/average
N = 20). Interaction: p = .021.

Figure 3B. Study 3: Recall performance as a function of noise and school
performance in three groups (teachers’ judgments: above N = 22; average
N = 22; below N = 7). Interaction: p = .069; below vs. above: p = .039.

In Study 3, the interaction between noise and Raven score was significant
(F(2,48)= 3.35, p= .044, eta2=.12) (Figure 4B). Note that the difference
in memory performance between below and high performing groups
disappeared with noise exposure when t-tested separately. Figure 4A shows
the lowest p-value, for the interaction between attention and noise (F(2,48) =
4.99, p = .011, eta2 = .17). Inattentive children benefited most from noise,
and there was no main effect of group on performance; all groups performed
at the same level (F(2,48) = 1.28, p = .288).

Conclusions
Traditionally, noise has been conceived as being detrimental to cognitive
performance. Recent results from our laboratory show that this picture has
to be revised. Several independent datasets now show that noise may
actually be beneficial for cognition. However, this beneficial effect only occurs
under well defined circumstances. First of all, the volume of the noise has to be
well tuned to the task. Our data (see also Usher & Feingold, 2000) show a
beneficial effect at noise levels of 70-80 decibels, whereas lower levels show
weaker or absent effects, and higher volumes are detrimental to performance.
The aforementioned noise levels apply to cognitive testing and are much higher
than the noise levels showing benefits in perceptual auditory tests, where most
stochastic resonance studies have been conducted. More importantly, our
studies show that the benefit of noise differs between groups of
participants: some groups show benefits in cognitive performance from
noise, whereas other groups show a decline in performance at the
same noise levels. The groups that show benefits in performance are children
with ADHD and children with low cognitive skills, whereas normal controls,
and particularly high achieving children, show a decline in performance. The
decline in performance for some of the groups should not be interpreted as
meaning that noise is always bad for these groups. On the contrary, the MBA
framework suggests that these participants may benefit from noise at other
noise levels, or in other tasks. This framework further suggests that a moderate
amount of noise may increase neural activity to optimal levels and function as
a substitute for insufficient dopamine levels. Further studies from our group
will focus on directly measuring how noise influences dopamine levels and
neural activity. This line of research may potentially lead to possibilities of
tuning our neural systems to optimal levels. In the future, such environmental
therapy may be an alternative to classical pharmacological therapies.
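The inverted-U pattern described above is the signature of stochastic resonance: too little noise gives no benefit, too much is detrimental. The following toy simulation is purely illustrative (it is not the authors' model; the signal, threshold and noise levels are arbitrary choices of ours): a subthreshold sinusoid is only "detected" by a simple threshold unit when a moderate amount of noise is added.

```python
import numpy as np

def detection_score(noise_sd, seed=0):
    """Correlation between a subthreshold sinusoid and the output of a
    simple threshold detector, at a given noise level (illustrative only)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 10.0, 5000)
    signal = 0.8 * np.sin(2 * np.pi * t)        # peak 0.8, below the threshold
    threshold = 1.0
    noisy = signal + rng.normal(0.0, noise_sd, t.size)
    output = (noisy > threshold).astype(float)  # detector fires on crossings
    if output.std() == 0:                       # detector never fired
        return 0.0
    return float(np.corrcoef(signal, output)[0, 1])

# Weak, moderate and strong noise: the moderate level tracks the signal best.
scores = {sd: detection_score(sd) for sd in (0.05, 0.4, 3.0)}
```

With almost no noise the signal never crosses the threshold, so nothing is detected; with excessive noise the output is dominated by randomness; an intermediate level lets the signal push the detector over threshold at the right moments.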
References
Abikoff, H., Courtney, M. E., Szeibel, P. J., & Koplewicz, H. S. (1996). The effects of
auditory stimulation on the arithmetic performance of children with ADHD
and nondisabled children. Journal of Learning Disabilities, 29(3), 238-246.
Belleville, S., Rouleau, N., Van der Linden, M., & Collette, F. (2003). Effect of
manipulation and irrelevant noise on working memory capacity of patients
with Alzheimer’s dementia. Neuropsychology, 17(1), 69-81.
Benzi, R., Parisi, G., Sutera, S., & Vulpiani, A. (1982). Stochastic resonance in
climatic change. Tellus, 34, 10-16.
Blakeman, R. S. (2000). ADHD and distractibility: The role of distractor appeal.
Dissertation Abstracts International: Section B: The Sciences & Engineering,
61(1-B), 517.
Boman, E., Enmarker, I., & Hygge, S. (2005). Strength of noise effects on memory
as a function of noise source and age. Noise Health, 7(27), 11-26.
Broadbent, D. E. (1954). Some effects of noise on visual performance. Quarterly
Journal of Experimental Psychology, 6, 1-5.
Broadbent, D. E. (1957). Effects of noises of high and low frequency on behaviour.
Ergonomics, 1, 21-29.
Broadbent, D. E. (1958a). Effect of noise on an “intellectual” task. Journal of the
Acoustical Society of America, 30, 824-827.
Broadbent, D. E. (1958b). The effects of noise on behaviour. In. Elmsford, NY, US:
Pergamon Press, Inc.
Brodeur, D. A., & Pond, M. (2001). The development of selective attention in
children with attention deficit hyperactivity disorder. Journal of Abnormal
Child Psychology, 29(3), 229-239.
Douglass, J. K., Wilkens, L., Pantazelou, E., & Moss, F. (1993). Noise enhancement
of information transfer in crayfish mechanoreceptors by stochastic resonance.
Nature, 365(6444), 337-340.
Gammaitoni, L., Hänggi, P., Jung, P., & Marchesoni, F. (1998). Stochastic resonance.
Reviews of Modern Physics, 70(1), 223-287.
Gerjets, P., Graw, T., Heise, E., Westermann, R., & Rothenberger, A. (2002). Deficits
of action control and specific goal intentions in hyperkinetic disorder.
II: Empirical results/Handlungskontrolldefizite und störungsspezifische
Zielintentionen bei der Hyperkinetischen Störung: II: Empirische Befunde.
Zeitschrift für Klinische Psychologie und Psychotherapie: Forschung und Praxis,
31(2), 99-109.
Higginbotham, P., & Bartling, C. (1993). The effects of sensory distractions on
short-term recall of children with attention deficit-hyperactivity disorder
versus normally achieving children. Bulletin of the Psychonomic Society, 31(6),
507-510.
Hygge, S., Boman, E., & Enmarker, I. (2003). The effects of road traffic noise and
meaningful irrelevant speech on different memory systems. Scandinavian
Journal Psychology, 44(1), 13-21.
Manjarrez, E., Mendez, I., Martinez, L., Flores, A., & Mirasso, C. R. (2007). Effects
of auditory noise on the psychophysical detection of visual signals: cross-
modal stochastic resonance. Neurosci Lett, 415(3), 231-236.
Martinez, L., Perez, T., Mirasso, C. R., & Manjarrez, E. (2007). Stochastic resonance
in the motor system: effects of noise on the monosynaptic reflex pathway of
the cat spinal cord. J Neurophysiol, 97(6), 4007-4016.
Moss, F., Ward, L. M., & Sannita, W. G. (2004). Stochastic resonance and sensory
information processing: a tutorial and review of application. Clinical
Neurophysiology, 115(2), 267-281.
Novak, P., & Novak, V. (2006). Effect of step-synchronized vibration stimulation
of soles on gait in Parkinson’s disease: a pilot study. J Neuroeng Rehabil, 3, 9.
Priplata, A. A., Niemi, J. B., Harry, J. D., Lipsitz, L. A., & Collins, J. J. (2003).
Vibrating insoles and balance control in elderly people. Lancet, 362(9390),
1123-1124.
Priplata, A. A., Patritti, B. L., Niemi, J. B., Hughes, R., Gravelle, D. C., Lipsitz, L.
A., et al. (2006). Noise-enhanced balance control in patients with diabetes
and patients with stroke. Ann Neurol, 59(1), 4-12.
Rouleau, N., & Belleville, S. (1996). Irrelevant speech effect in aging: an assessment
of inhibitory processes in working memory. The Journals of Gerontology. Series
B, Psychological Sciences and Social Sciences, 51(6), P356-363.
Russell, D. F., Wilkens, L. A., & Moss, F. (1999). Use of behavioural stochastic
resonance by paddle fish for feeding. Nature, 402(6759), 291-294.
Servan-Schreiber, D., Printz, H., & Cohen, J. D. (1990). A network model of
catecholamine effects: gain, signal-to-noise ratio, and behavior. Science,
249(4971), 892-895.
Shidara, M., & Richmond, B. J. (2005). Effect of visual noise on pattern recognition.
Exp Brain Res, 163(2), 239-241.
Simonotto, E., Spano, F., Riani, M., Ferrari, A., Levero, F., Pilot, A., et al. (1999). fMRI
studies of visual cortical activity during noise stimulation. Neurocomputing:
An International Journal. Special double volume: Computational neuroscience:
Trends in research 1999, 26-27, 511-516.
Söderlund, G. B. W., Sikström, S., & Loftesnes, J. M. (in progress). Noise is Not a
Nuisance: Noise Improves Cognitive Performance in Low Achieving School
Children.
Söderlund, G. B. W., Sikström, S., & Smart, A. (2007). Listen to the noise: Noise
is beneficial for cognitive performance in ADHD. Journal of Child Psychology
and Psychiatry, 48(8), 840-847.
Usher, M., & Feingold, M. (2000). Stochastic resonance in the speed of memory
retrieval. Biological Cybernetics, 83(6), L11-16.
Wells, C., Ward, L. M., Chua, R., & Timothy Inglis, J. (2005). Touch noise increases
vibrotactile sensitivity in old and young. Psychological Science, 16(4), 313-
320.
Yamamoto, Y., Struzik, Z. R., Soma, R., Ohashi, K., & Kwak, S. (2005). Noisy
vestibular stimulation improves autonomic and motor responsiveness in
central neurodegenerative disorders. Ann Neurol, 58(2), 175-181.
Zeng, F. G., Fu, Q. J., & Morse, R. (2000). Human hearing enhanced by noise.
Brain Research, 869(1-2), 251-255.
Zentall, S. S., & Zentall, T. R. (1983). Optimal stimulation: A model of disordered
activity and performance in normal and deviant children. Psychological
Bulletin, 94(3), 446-471.

Tinnitus and Hypersensitivity to Sounds

Gerhard Andersson, Prof.


Department of Behavioural Science and Learning, Linköping University;
Swedish Institute for Disability Research.
Department of Clinical Neuroscience, Karolinska Institutet
Email: [email protected]

Introduction
In this text the phenomena of tinnitus and hypersensitivity to sound will be
highlighted. Tinnitus is defined as the experience of sound that does not come
from an external source. One can speak of tinnitus as "sound in silence". It
often takes the form of shrieking, whistling or rushing sounds, but many other
sounds may occur (Andersson, 2000). On the other hand, tinnitus does not
include meaningful sound: the experience of, for example, voices or music
does not count as tinnitus. The occurrence of tinnitus is widespread. At least
15% of the adult population have some form of recurring tinnitus, even if
only a smaller proportion of these are troubled by it. Epidemiological studies
suggest that 2% of the population suffer from severe tinnitus. The problem
increases with age and is rare among younger people but common among the
elderly. In certain cases tinnitus can occur in children, and the risks of noise
exposure among youngsters should be taken seriously (Olsen Widén and
Erlandsson, 2004). It should be added that musicians often develop tinnitus,
related to the sound exposure their ears endure in the profession. The fact
remains, though, that tinnitus increases with advancing age, which mainly
reflects increasing hearing impairment among the elderly, and that tinnitus
is closely linked to hearing impairment.
Sound sensitivity is a closely related phenomenon, especially with regard
to extreme sound sensitivity, so-called hyperacusis (Andersson et al., 2005b).
This pertains to sensitivity to everyday sounds, and therefore not only to
loud sounds, which is reported by almost half the population. The term
hyperacusis means that the patient reacts strongly to the crumpling of a
piece of paper, traffic, and a number of sounds that would not usually lead to
reactions of pain or distress. A small number go so far as to protect themselves
with earplugs, which in some cases can be motivated for loud sounds, but
for everyday sounds tends to lead to the patient becoming more sensitive
(Formby et al., 2003). The prevalence of hyperacusis is unclear, but a Swedish
study found a prevalence of 9% (Andersson et al., 2002). If we look at more
serious hyperacusis, where the person protects himself, the prevalence is likely
to be considerably lower, 1-2% (Baguley and Andersson, 2007).

Can it be measured?
Tinnitus is measured, as is pain, with the help of evaluation on the part
of the patient. Tinnitus has admittedly been objectively demonstrated in the
brain with the help of brain scanning techniques, but these have, as yet, no
practical clinical use (Andersson et al., 2005a). It is of interest that tinnitus
appears to engage those parts of the brain that "interpret sound", that is to
say secondary areas of the auditory cortex. Activation has also been observed
in the areas that govern attention (Andersson et al., 2006) and emotion
(Lockwood et al., 1998). Although the sound of tinnitus itself cannot be
measured, it is important to measure hearing levels and to carry out other
relevant tests to investigate hearing pathology and problems that may be
related to, for example, the jaw. With the help of an audiometer one can partly
recreate tinnitus and ask the patient to evaluate its level. One can also ask the
patient to report when tinnitus can no longer be heard because it is masked
by an external sound (static). These methods have no obvious clinical
relevance, but can be experienced as important by the patient, as the
symptoms are being taken seriously.
Sound sensitivity and hyperacusis are also measured mainly by self-report
scales and patient interviews. Discomfort thresholds can be measured with
an audiometer, but in certain cases this is impossible or of no diagnostic value,
as it is unreliable and dependent on instructions. In other words, the patient
can "cope" in the test situation but be tormented by the same level of sound
in everyday life. Sound sensitivity occurs frequently among those afflicted
with tinnitus and hearing impairments. In this instance it should be noted
that it can involve so-called recruitment, a term describing a greatly increased
level of discomfort with loud sounds which does not apply to everyday
sounds. Recruitment means that perceived loudness does not grow linearly
with sound level. This is something that modern hearing aids are often
capable of handling. Hyperacusis is not the same thing, but recruitment and
hyperacusis can be present together, especially if the term hyperacusis is not
reserved for people without hearing impairment.

Problems connected to tinnitus and sound sensitivity


If we start with tinnitus, we can list the complaint categories that are related
to tinnitus distress. These include sleeping problems, depression and anxiety,
hearing problems and concentration problems that the patient relates to
tinnitus (Andersson et al., 2005a). With severe tinnitus, concurrent
depression and anxiety are not unusual (Zöger et al., 2001). The handicap
can affect work and participation in life in general. Some severely affected
tinnitus patients cannot accept that they have tinnitus and avoid certain
situations. Tinnitus can be stressful for some, but stress alone is not thought
to cause tinnitus; rather, it exacerbates the discomfort and becomes a
consequence of tinnitus. By far the most common problem that tinnitus
patients describe is that tinnitus never disappears and that they "miss silence"
(Andersson and Edvinsson, 2008). Discomfort can vary: for some, sleep is the
most troublesome aspect, while for others it might be concentration.
The handicap caused by hyperacusis is similar to the problems suffered
by patients with chronic pain. It often involves difficulty in remaining in
certain environments. Partaking in activities may also prove difficult. For
musicians, hyperacusis can, for example, mean that they cannot continue
with their work.

Which groups have the greatest problems?


In stark contrast to what is reported in the newspapers, it is older people
who have the greatest problems with tinnitus (Davis and El Rafaie, 2000).
However, they seldom seek help for their tinnitus trouble, and the typical
tinnitus patient in the clinic is likely to be in their fifties. Young people can
be afflicted with acute tinnitus, but they seldom develop long-term
symptoms, even if these can occur. Severe tinnitus in children is rare. There
is no clear gender difference, but men and women differ in their discomfort.
There are several factors which increase the risk of developing severe tinnitus.
The degree of hearing impairment, dejection and anxiety, and, according to
one theory, the degree to which tinnitus is associated with something negative
are factors that can predict the development of tinnitus distress (Andersson
and Westin, 2008). Concerning hyperacusis there is still little support from
research, but in the clinic we often see such professional groups as teachers
and musicians (Anari et al., 1999). We should remember, though, that not
everybody with hyperacusis can be found within the world of audiology
(Andersson et al., 2005). Migraine attacks, for example, are common among
those with hyperacusis.

What forms of treatment are there and how successful are they?
There are many forms of treatment that have been tried for tinnitus. One
category attempts to silence tinnitus. In principle there is no evidence
that this works, with the exception of patients with an obvious ear pathology
that can be treated surgically (e.g. otosclerosis). On the other hand, there is
more hope for the kind of treatment that concentrates on alleviating distress.
Among these is Cognitive Behavioural Therapy (CBT), which is the
method that has the strongest support in research (Martinez Devesa et al.,
2007). CBT includes working with relaxation, thoughts, concentration and,
where relevant, sleep and noise sensitivity. A self-help book based on CBT
principles is available in Swedish (Kaldo and Andersson, 2004), and has been
tried with good results in a controlled study (Kaldo et al., 2007). A method
called Tinnitus Retraining Therapy (TRT) has some support (Jastreboff and
Hazell, 2004). There exist a number of other experimental methods and
several complementary medical treatments such as acupuncture. Support
for these is virtually non-existent or of doubtful quality. Severely afflicted
patients with diagnosed depression can be helped by antidepressant drugs,
but antidepressants should not be prescribed for most tinnitus patients,
according to a Cochrane review (Baldo et al., 2006). The treatment of
hyperacusis is often successful, but there are unfortunately no controlled
studies to support this statement. Hyperacusis treatment requires a gradual
approach (exposure) to sound, without the patient protecting himself too
much. Sound stimulators that produce static can help, but for the most severe
cases one should consider a referral to a psychologist with a focus on CBT
(Baguley and Andersson, 2007).
Conclusions
Tinnitus and sound sensitivity are common phenomena which we only partly
understand. Tinnitus research is an active area, and several different methods
of treatment have been tried. With regard to extreme sensitivity to sound
there is so far very little research. Modern psychological research studies
cognitive mechanisms, but also strategies to better enable acceptance of the
fact that tinnitus cannot be cured. Certain forms of tinnitus may, in the
future, be curable, but at the moment no safe and reliably effective method
exists that can silence tinnitus. However, there is much that can be done to
relieve discomfort, and among these methods CBT has the strongest support
in research.

(Translation: Janet Kinnibrugh)

References
ANARI, M., AXELSSON, A., ELIASSON, A. & MAGNUSSON, L. (1999)
Hypersensitivity to sound. Questionnaire data, audiometry and classification.
Scandinavian Audiology, 28, 219-230.
ANDERSSON, G. (2000) Tinnitus: orsaker, teorier och behandlingsmöjligheter, Lund,
Studentlitteratur.
ANDERSSON, G., BAGULEY, D. M., MCKENNA, L. & MCFERRAN, D. J.
(2005a) Tinnitus: A multidisciplinary approach, London, Whurr.
ANDERSSON, G. & EDVINSSON, E. (2008) Mixed feelings about living with
tinnitus: a qualitative study. Audiological Medicine, 6, 48-54.
ANDERSSON, G., JÜRIS, L., KALDO, V., BAGULEY, D. M., LARSEN, H. C.
& EKSELIUS, L. (2005b) Hyperacusi – ett outforskat område. Kognitiv
beteendeterapi kan lindra besvären vid ljudöverkänslighet, ett tillstånd med
många frågetecken. Läkartidningen, 44, 3210-3212.
ANDERSSON, G., JÜRIS, L., TRAUNG, L., FREDRIKSON, M. & FURMARK,
T. (2006) Consequences of suppressing thoughts about tinnitus and the
effects of cognitive distraction on brain activity in tinnitus patients. Audiology
& Neurootology, 11, 301-309.
ANDERSSON, G., LINDVALL, N., HURSTI, T. & CARLBRING, P. (2002)
Hypersensitivity to sound (hyperacusis). A prevalence study conducted via
the Internet and post. International Journal of Audiology, 41, 545-554.
ANDERSSON, G. & WESTIN, V. (2008) Understanding tinnitus distress:
Introducing the concepts of moderators and mediators. International Journal
of Audiology, 47(Suppl. 2), 178-183.
BAGULEY, D. M. & ANDERSSON, G. (2007) Hyperacusis: Mechanisms, diagnosis,
and therapies, San Diego, Plural Publishing Inc.
BALDO, P., DOREE, C., LAZZARINI, R., MOLIN, P. & MCFERRAN, D. J.
(2006) Antidepressants for patients with tinnitus. Cochrane Database of
Systematic Reviews, CD003853.
DAVIS, A. & EL RAFAIE, A. (2000) Epidemiology of tinnitus. IN TYLER, R. S.
(Ed.) Tinnitus handbook. San Diego, Singular. Thomson Learning.
FORMBY, C., SHERLOCK, L. P. & GOLD, S. L. (2003) Adaptive plasticity of
loudness induced by chronic attenuation and enhancement of the acoustic
background. Journal of the Acoustical Society of America, 114, 55-58.
JASTREBOFF, P. J. & HAZELL, J. (2004) Tinnitus retraining therapy: Implementing
the neurophysiological model, Cambridge, Cambridge University Press.
KALDO, V. & ANDERSSON, G. (2004) Kognitiv beteendeterapi vid tinnitus, Lund,
Studentlitteratur.
KALDO, V., RENN, S., RAHNERT, M., LARSEN, H.-C. & ANDERSSON, G.
(2007) Use of a self-help book with weekly therapist contact to reduce tinnitus
distress: a randomized controlled trial. Journal of Psychosomatic Research, 63,
195-202.
LOCKWOOD, A. H., SALVI, R. J., COAD, M. L., TOWSLEY, M. L., WACK,
D. S. & MURPHY, B. W. (1998) The functional neuroanatomy of tinnitus.
Evidence for limbic system links and neural plasticity. Neurology, 50, 114-
120.
MARTINEZ DEVESA, P., WADDELL, A., PERERA, R. & THEODOULOU,
M. (2007) Cognitive behavioural therapy for tinnitus. Cochrane database of
systematic reviews (Online), CD005233.
OLSEN WIDÉN, S. & ERLANDSSON, S. (2004) Self-reported tinnitus and noise
sensitivity among adolescents in Sweden. Noise and Health, 7, 29-40.
ZÖGER, S., HOLGERS, K.-M. & SVEDLUND, J. (2001) Psychiatric disorders in
tinnitus patients without severe hearing impairment: 24 month follow-up of
patients at an audiological clinic. Audiology, 40, 133-140.

”It Sounds like a Buzzing in my Head”
– children’s perspective of the sound environment in pre-schools

Kerstin Persson Waye, PhD, Associate Professor.

Occupational and Environmental Medicine, Sahlgrenska Academy,
Gothenburg University.
Email: [email protected]

1 INTRODUCTION
Previous investigations indicate that noise may be a serious occupational
and public health problem in pre-schools [1,2]. Typical A-weighted
sound pressure levels (LpAeq 8h) are in the range of 75-80 dB in Sweden.
Data from our own measurements in pre-schools show similar tendencies,
with personnel on average being exposed to 78 dB LpAeq, 95% CI (77.3-
78.7), when indoors. Children are generally exposed to higher levels,
and our data show that they are on average exposed to 85 dB LpAeq, 95% CI
(83.7-86.3), when indoors. The individual exposure dose is affected by the
wearer's own voice, and parallel studies [3,4] have yielded estimations of the
contribution of the voice under different conditions. For example, if the wearer
of the dosimeter is speaking 50% of the time in a background level of
70 dB LpAeq, the contribution can be estimated at 5 dB; if speaking 20%
of the time, the estimated contribution is 3 dB. How this applies to the
field situation remains to be analyzed.
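Equivalent levels such as these combine energetically rather than arithmetically: each contribution is converted to relative energy 10^(L/10), the energies are summed, and the total is converted back to decibels. The sketch below illustrates this arithmetic; the 75 dB voice-alone level is a made-up value for illustration, not a figure from the study.

```python
import math

def combine_leq(levels_db):
    """Energetic sum of equivalent sound levels:
    L_total = 10 * log10( sum of 10^(L_i/10) )."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

def leq_with_duty_cycle(background_db, voice_db, fraction_speaking):
    """Leq over a period where the wearer's voice (voice_db, a hypothetical
    voice-alone level at the microphone) is present only part of the time."""
    e_bg = 10 ** (background_db / 10)
    e_voice = fraction_speaking * 10 ** (voice_db / 10)
    return 10 * math.log10(e_bg + e_voice)

# Two equal 70 dB sources sum to about 73 dB (doubling the energy adds 3 dB),
double = combine_leq([70, 70])
# and the voice contribution is the combined total minus the background:
contribution = leq_with_duty_cycle(70, 75, 0.5) - 70
```

The contribution grows with the fraction of time spent speaking, which matches the pattern reported above (a larger contribution at 50% speaking time than at 20%).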
The main sources of the sounds are children's activities in the rooms.
The level and the sound characteristics are influenced by the room acoustics,
the total number of children and the number of children per room, but
other factors, such as the pedagogic methods applied, the awareness of the
noise problem and the occurrence of other noise sources such as traffic and
ventilation systems, can also be expected to play a role. Official statistics from
the work environment show that fifty percent of female pre-school teachers
report that they are exposed to sound levels that make it impossible to speak
with a normal voice during at least 25% of their working day [5]. Reported
problems among the personnel are “tired ears”, general tiredness, lack of
energy, stress and voice problems [2, 6, 7]. The adverse health effects of
these rather high sound pressure levels and long daily exposure times for
preschool children are less well known. It has been suggested that children
may be at particular risk of developing noise-induced hearing impairment
[8,9], due to their behaviour at play and possibly to increased age-related
vulnerability [9], but precise knowledge is lacking. As children need a better
signal-to-noise ratio to understand speech compared with normal-hearing
adults, the noisy environments in pre-schools could lead to a reduced or
delayed understanding of speech and later impaired reading and writing
abilities [10, 11]. Children with a mother tongue other than Swedish and
children with hearing impairment are especially affected. In order to make
themselves heard in the noisy environment, pre-school children need to raise
their voices or in fact scream, and there is a risk of acute and chronic voice
disorders [12, 13]. Apart from these effects, it cannot be excluded that noise
also leads, for children, to general tiredness, reduced wellbeing and stress-
related symptoms.
It could be assumed that children who voluntarily and gladly produce
sounds when they play would also be less disturbed or distressed by them.
While this may be true for shorter periods of the day, we do not know how
they feel when spending 6-10 hours in a noisy environment.
In order to increase the knowledge of how the sound environment at
pre-schools affects the health of children and personnel, a three-year research
project has been initiated at Occupational and Environmental Medicine. The
project is financed by the Swedish Council for Working Life and Social
Research (FAS) and the Swedish Research Council for Environment,
Agricultural Sciences and Spatial Planning (FORMAS). A further aim is to
evaluate the results of an intervention program aimed at improving the
acoustic conditions at pre-schools and, finally, to see if the improved acoustic
conditions have a positive effect on health.
The study sites are all in the city of Mölndal, a municipality of about
59,000 inhabitants in the south-west of Sweden. The study population
comprises seven randomly selected pre-schools. They were part of a larger
renovation program in the city of Mölndal with special emphasis on improving
the acoustic conditions in order to reduce the noise levels generated by the
activities inside the pre-schools. The study population also comprises three
pre-schools where no renovation had been undertaken (controls).
The renovation program includes putting up ceiling panels (class A
absorbents), in some cases adding absorbents to one wall of the children's
play room, and changing the flooring to vinyl flooring with acoustic
properties. The chairs are fitted with chair pads. In all pre-schools, including
the controls, the table surfaces are made of an acoustically soft material (type
vinyl Tapiflex).
The selected pre-schools are studied in the autumn, one month before
the renovation, and in the spring, at least three months after the renovation.
Noise levels are measured over a week using sound level meters (B&K 2260)
with the microphone hanging from the ceiling in the rooms, and individual
dosimeters (SPARK 705+) worn every day by two children and two members
of the personnel. Room acoustic measurements are made in empty rooms
before and after the renovation. Subjective data are collected using
questionnaires distributed to the personnel and the parents, while the
children are interviewed. Voice data are recorded at four specific intervals
during the day, as well as during daily activities, for one child and one
member of the personnel.
Before the study, a number of challenges were identified, the major one
being that methods to study the effects of noise on pre-school aged children
were lacking. We did not know how pre-school aged children recognize,
communicate, perceive and are affected by the sounds at their pre-school.
This knowledge was necessary in order to develop a questionnaire
assessing possible adverse effects on children. The process carried out to
develop the interview questionnaire to be used for children aged 4-6 years
will be summarised in the following. For further information see [14].

2 Children’s Perspectives of the Sound Environment in
Pre-schools

2.1 A qualitative approach


Constructivist grounded theory was used as the qualitative approach,
taking the perspective that individuals create social realities through their
interpretation as well as through individual and collective actions [15].

2.2 Sampling procedure and study-sample


The pre-schools were strategically selected to capture a variation of
sound environments and pedagogic principles. Prior to each interview,
the children's parents were informed about the study and gave their written
consent. The children themselves were also asked whether they would like
to participate. All children aged 4-6 years in the five selected pre-schools
who were present on the day the interviews took place were interviewed.
This formed a sample of 36 young children. The sampling and data collection
proceeded in parallel with analysis, in line with grounded theory
methodology, until theoretical saturation was reached.

2.3 Focus-group interviews


Qualitative focus group interviews were used in order to generate an in-depth
understanding of the young children's shared perceptions and experiences of
the pre-school sound environment [16, 17]. Their perceptions of positive and
negative sounds were explored, as well as their perceptions and management
of the consequences. Eleven focus group interviews with 1-3 participants in
each group were completed.

2.4 Qualitative analysis


The analysis of data was in line with the constructivist grounded theory
approach [14], with an initial systematic description followed by
conceptualising theory development. Theoretical notes and theoretical
discussions within the research group were used to deepen the analysis. The
interviews were coded in order to describe categories and their relations.

3 RESULTS

3.1 Qualitative description of young children’s perception of the sound
environments at pre-schools

The results describe how young children relate their experience of sounds to
the consequences the sound had for them, the type of sound, their
understanding of its source, their bodily experience of the sound and the
communicative opportunities the sound may offer them.
In general, four categories of sounds could be deduced from the data:
threatening sounds, high-frequency sounds, background sounds and
communicating sounds.

Threatening sounds
Threatening sounds were sounds like screams, crying and angry voices. These
sounds were experienced as negative, threatening and discomforting. The
interviewed young children often related this noise to a certain child, who
often screamed and thereafter became angry, violent or very sad and upset.
The interviewed children described their discomfort in these situations.

High-frequency sounds
Children also described great discomfort when there were high-frequency
noises at the pre-school. Squeaking, creaking and scratching noises were
described as unexpected and as a physical experience, for example noise from
squeaking and creaking bicycles, tableware, plates, doors or swings.

Background sounds
Background sounds, e.g. from a fan, radiators or computers, were often
unknown sounds that the children had not reflected on and found hard to
communicate. Many children noted these sounds, but some did not notice
them at all, although they might occur at rather high levels. Children also
showed uncertainty and lack of knowledge about the source of the
background noise, making it difficult to communicate and reflect upon it.

Communicating sounds
Some sounds were interpreted as communicating and learned sounds. These
could be sounds from animals, e.g. how a dog sounds, or family noises such
as “my mum is snoring”.

Strategies for discomforting sounds


The strategies mentioned by the children when hearing discomforting
sounds were holding their ears, hiding, running away, running out to the
play-ground or going to the teacher.

Described bodily experience of sounds


The young children described “how they felt” when they were exposed to
disliked sounds. The descriptions were often physical and emotional. They
could experience a “pain in their ears”, feel it in their stomach, feel their
heart beating very fast, feel it hurt or spin in their head, or just feel bad
and feel discomfort. They handled this with avoidance strategies, e.g.
withdrawal and “holding their ears”, hiding, running away or running out
to the play-ground.

4 CONCLUDING COMMENTS
Personnel, and to an even higher degree children, at pre-schools are exposed
to high sound levels during time spent indoors. In order to better understand
how children perceive their sound environment, focus group interviews were
carried out. The qualitative methods used gave us insights beyond what could
have been achieved using common quantitative measurement techniques.
The children's experience of sounds was related to the consequences the
sound had for them in their immediate situation. They described various
experiences related to the type of sound and also their understanding of its
source. Further, the consequences they described were typically experienced
as physical, i.e. as within their bodies. Their descriptions of sounds they
disliked could be formed into different categories of sounds. These sounds
were disliked to a high degree and resulted in avoidance strategies among the
children. The findings also enabled us to construct a questionnaire that is
used in interviews with children.

5 AKNOWLEDGEMENTS
The research group for this part of the project involves Lotta Dellve, PhD,
Lena Samuelsson, special needs educator, and Agneta Agge, research assistant,
Occupational and Environmental Medicine, The Sahlgrenska Academy,
Gothenburg University.

6 REFERENCES
A. Bertilsson, A-C. Hagaeus, Y. Sandqvist, K. Skagelin, M. Björkman and L.
Barregård, ” Report from sound level measurements in preschools and schools
in the municipalities of Lidköping and Skara 2000-2003,” The Municipalities
of Lidköping and Skara and the Department of Public Health and Community
Medicine, Göteborg (2003). (In Swedish)
U. Landström, B. Nordström, A. Stenudd and L. Åström, “Effects of the number
of children on noise levels and of the experience among staff in preschools,”
Working Life Report nr 6, 1-43. ISSN 1401-2928 (2003). (In Swedish)
Ryherd, S., Persson Waye, K., Kleiner, M. and Ryherd, E. (2008). Quantifying the Noise
Environment: Effects of Wearer’s Voice on Body-mounted Noise Dosimeter
Measurements. Acoustics’08, Paris, France.
Ryherd, S. R. (2008). Influence of wearer´s voice on noise dosimeter measurements.
Chalmers Room Acoustics Group. Gothenburg, Chalmers University of
Technology. Master in sound and vibration: 47pp.
The Work Environment 2001, SOS, SM: AM 68 SM 0201. (In Swedish)
B. Fritzell, “Voice problems are related to the profession,” Läkartidningen nr 14, 93,
1325-1327 (1996). (In Swedish).
L. Rantala, E. Vilkman and R. Bloigu, ”Voice changes during work: subjective
complaints and objective measurements for female primary and secondary
school teachers,” J Voice, 16(3), 344-55 (2002).
Guidelines for community noise. Edited by Birgitta Berglund, Thomas Lindvall and
Dietrich H. Schwela, World Health Organization (WHO) (1999).
W. Passchier-Vermeer, “Noise and health of children,” Leiden: TNO report PG/
VGZ/2000.042 (2000).
G. W. Evans and S. J. Lepore, “Nonauditory effects of noise on children: a critical
review,” Child Environ, 10, 31-51 (1993).

L. Maxwell and G. Evans, “The effects of noise on pre-school children’s pre-reading
skills,” J Environ Psychol, 20, 91-97 (2000).
A. McAllister, “Acoustic, perceptual and physiological studies of ten-year-old
children´s voices,” Thesis. Department of Logopedics and Phoniatrics,
Karolinska University Hospital, Huddinge and Department of Speech, Music
and Hearing, KTH, Stockholm, Sweden (1997).
E. Sederholm, “Prevalence of hoarseness in ten-year-old children,” Scand J Log
Phon, 20, 165-173 (1995).
Persson Waye, K., Samuelsson, L., Agge, A. and Dellve, L. (2006). Methods for assessing
preschool children’s perception and experience of their soundscape. The 2006
Congress and exposition of noise control engineering, Inter-Noise, Honolulu,
Hawaii, USA.
K. Charmaz, “Constructing grounded theory – a practical guide through qualitative
analysis,” SAGE Publications Inc. Thousand Oaks, USA (2006).
R. A. Krueger, “Focus Groups. A Practical Guide for Applied Research,” 2ed. Sage
Publications. Thousand Oaks, USA (1994).
C. Webb and J. Kevern, “Focus groups as a research method: a critique of some
aspects of their use in nursing research,” Journal of Advanced Nursing, 33,
798-805 (2001).

Cognitive Skills and Perceived Effort in Active and Passive
Listening in a Naturalistic Sound Environment

Björn Lyxell1,2,3, Erik Borg2 and Inga-Stina Olsson2


1 Department of Behavioural Sciences, Linköping University, Sweden
2 Ahlsén Research Institute, Örebro University, Sweden
3 The Swedish Institute for Disability Research, Linköping University

INTRODUCTION
Speech understanding is more difficult in situations with background noise
than in situations without noise and this is further hampered if the listener
is hearing-impaired (Divenyi & Simon, 1999; Hällgren, Larsby, Lyxell &
Arlinger, 2001; Schneider, Daneman & Pichora-Fuller, 2002). Understanding
speech in noise engages the individual´s cognitive system to a higher extent as
a larger portion of the spoken signal is either missing or ambiguous and has to
be inferred or disambiguated by means of cognitive operations (Lunner, 2003;
Lyxell, Andersson, Olsson & Borg, 2003; Pichora-Fuller, 2003; Schneider
& Pichora-Fuller, 2000). The level of speech understanding may also vary
as a function of the type of listening task that is required in the background
noise. That is, performance in listening tasks that require some form of
execution based on the spoken information (i.e., active listening) taxes the
individual´s cognitive abilities relatively more compared to listening situations
where such requirements are not present (passive listening). The nature of the
interfering noise is also important for the degree of interference with speech
understanding (e.g. the spectrum and temporal features: Dreschler et al, 2001
and Magnusson, 1995). In order to handle this multivariate situation and
to construct useful and reasonably reliable and valid test situations, standard
masking noise has typically been composed of, for example speech spectrum
noise, babble noise or ICRA-noise (Dreschler et al, 2001). An alternative,
but less frequently used, approach is to record and edit naturalistic noise
sequences representative of specific classes of environmental situations (e.g.,
a specific workplace). In the present study focus is on hearing impaired and
normal hearing day-care centre teachers and a typical sound environment at
the day-care centre entrance hall was created (cf., Tun & Wingfield, 1999).
The general purpose of the present study is to examine the possible
relationship between the individual´s cognitive skills, type of listening
situation (active vs passive) in a naturalistic background noise (i.e., the noise
specific for their workplace environment) and perceived effort during and
after listening. In the study, we will compare the performance of two groups
of day-care centre teachers; one with hearing-impairment and a group of
matched normal hearing individuals with respect to cognitive skills, perceived
effort and listening in naturalistic situations.
In the present study, the listening tasks will be constituted by the “Just-
follow-conversation” paradigm (JFC; Hygge, Rönnberg, Arlinger & Larsby,
1992; Hygge, 2003). In this paradigm, the participants listen to a story that
is presented against a competing background noise. The background noise
is presented at a fixed or individual sound level (65 dB A, most comfortable
level, or maximal acceptable level) and the individual´s task is to adjust the
sound level of the speaker to a level where it is just possible to follow the
content of the story. JFC has proved to be a sensitive method to investigate
speech perception in noise for populations of (young and old) hearing-
impaired and normal hearing listeners (Borg et al., 1999). Two different
JFC-tasks will be used, one with passive and one with active listening. The
passive listening condition will follow the standard JFC procedure, where the
individual’s task is to adjust the sound level of the speaker to the level where
they are able to just follow the speaker. This is also the situation in the active
listening task but the individuals are, in addition, required at random time
intervals during listening, to answer simple questions based on the content
of the story. The same story is employed in both tasks and background noise
was constituted by noise typically occurring in day-care centres (e.g., children
screaming and playing, adults conversing). The hypothesis is that the active
listening task will require a more active and effortful processing, i.e. more
cognitively demanding listening situation. Thus, being prepared to answer
simple questions about the content of a story requires a number of cognitive
operations, but primarily that parts of the story are actively held in working
memory and that lexical and phonological information stored in long-term
memory is accessed relatively fast.
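The dependent measure in the JFC paradigm can be read either as the adjusted speech level itself or as a signal-to-noise ratio relative to the fixed background. As a minimal sketch of that bookkeeping (the helper function and the 65 dB(A) default are illustrative, not the study's own code):

```python
# Hypothetical helper (not from the study's materials): expresses an
# adjusted speech level as a signal-to-noise ratio against a fixed
# background noise level.
NOISE_LEVEL_DB = 65.0  # one of the fixed presentation levels mentioned (65 dB A)

def snr_db(speech_level_db: float, noise_level_db: float = NOISE_LEVEL_DB) -> float:
    """SNR in dB is simply the level difference between speech and noise."""
    return speech_level_db - noise_level_db

# A listener who settles on a 68 dB speech level is listening at +3 dB SNR.
print(snr_db(68.0))  # → 3.0
```

Because the noise term is a common constant within a condition, group differences expressed in adjusted level and in SNR are equivalent.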

The assessment of the individuals´ cognitive skills will focus on the skills
that are central for processing of spoken language, particularly in populations
with hearing-loss (Andersson, 2003; Andersson, Lyxell, Rönnberg & Spens,
2001; Lyxell, Andersson, Ohlsson & Borg, 2003; Pichora-Fuller, 2003).
Specifically, we will examine three cognitive components; working memory,
lexical access speed and phonological processing skills.
A frequent report from hearing-impaired individuals is that listening
in general and in noisy environment in particular is effortful and resource
demanding. However, few studies have examined how perceived effort relates
to cognitive skills and to listening situations with varying cognitive demands.
In the present study, perceived effort will be assessed by means of Borg´s
(1998) CR-10 scale, where level of perceived effort is assessed before, during
and after the listening tasks.

METHOD

Participants
The participants in the study were 11 female day-care centre teachers (21
– 65 years) constituting the total number of bilaterally hearing impaired
individuals (with one exception) in this profession in the Örebro region (with
a total population of 270 000). A group of 11 normal hearing female day-
care centre teachers matched for age and work places constituted the control
group.

Materials and procedure


Design of sound environment
The acoustic recordings were made in a Swedish day-care centre. The
recordings were designed on the basis of observations in the day-care centres
and interviews with hearing impaired individuals. Repeated recordings were
obtained in the same room, i.e., an entrance hall. The recordings were made
with a two channel digital tape recorder, edited off-line (Digidesign session
8) and stored in a computer. Individual recordings were made of different
groups of parents and children and combined to create an acoustically
active and realistic environment representing a time when several children
were leaving the day-care centre. The environments were reproduced in a
specially designed test room (Borg et al, 1998, 1999) equipped with twelve
loudspeakers. The day care centre environment was presented from eleven
loudspeakers and the target sound from loudspeaker 12 (0 degrees azimuth).

Active and passive listening task


In the naturalistic test environment a target speech sound was introduced to
the sound environment and the dependent measure was the adjusted sound
level of the target sound in dB. This speech material (i.e., the target sound)
was a 1-hour recording from Selma Lagerlöf´s book “Nils Holgersson´s
wonderful journey”. The participants´ task was to adjust the level of the
spoken sound to that level where they were able to just follow the conversation
(cf, Hygge et al, 1992). Two listening conditions were used in the study:
One active and one passive. In the active mode the participants were asked
to adjust the sound level in order to be able to just follow the conversation
and also to be prepared to answer simple questions on content of the text at
random intervals during the test session. In the passive condition the task
was to adjust the spoken sound level so that they were able to just follow the
conversation.

Perceived effort
The individual´s perceived effort before, during and after listening was
assessed by means of G. Borg´s CR-10 scale. During the session this scale
was administered halfway through both listening tasks. The participants were
asked to indicate on the scale how effortful they experienced the listening
task.

Cognitive tests
All cognitive testing was performed individually and administered by a
computer test-platform. All instructions regarding the cognitive tasks were
presented in written form and complemented with oral instructions.

RESULTS
In the first part we will describe the test results and measurements of cognitive
capacity and active and passive listening. In the second part we will examine
the relationship between the individually selected signal noise ratios in the
two listening conditions and the cognitive capacity and how these factors
relate to level of perceived effort.

Cognitive capacity
Table 1 gives the descriptive statistics for the cognitive tasks used in the
study. As can be seen, the two groups do not differ statistically from each
other. The performance levels are further similar to performance levels that
have been reported from other studies where the present tasks have been
employed (Andersson, 2002; Andersson, Lyxell, Rönnberg & Spens, 2001;
Lyxell, Andersson, Arlinger, Bredberg, Harder & Rönnberg, 1996, Lyxell,
Andersson, Arlinger, Bredberg & Harder, 1998) and where moderately
hearing-impaired, deafened adults and normal hearing individuals have
participated in the working memory tasks and the lexical and semantic
decision-making tasks. Furthermore, for the hearing-impaired group, the
level of hearing loss did not correlate with performance on any cognitive task.

Active and passive listening


The number of errors (i.e., wrong answers to the questions) in the active
condition did not differ between the groups and was also low. The results
display that there is a highly significant difference between the two groups
in both the active and the passive listening condition (t = 5.13 and t = 5.40,
p < .001, respectively). It is also interesting to note that the magnitude of the
difference is parallel across the two conditions. That is, the two conditions do
not interact with each other and the adjusted sound level does not increase
as a function of increased cognitive demands for the hearing-impaired group
relative to the normal hearing participants. A further inspection of the data
reveals that there is a significant difference between the active and the passive
listening condition for the normal hearing participants in the study. That
is, when the cognitive demands in the listening situation are increasing, the
adjusted speech sound level is also increasing. For the group of hearing-
impaired individuals, the adjusted signal to noise ratio sound level between
the active and the passive listening condition is different, but this difference
is not statistically significant (t = 1.80, p > .05).
An analysis of the results from 0º azimuth at an individual level
demonstrates that the pattern of adjusted sound level between the first and
the second half of the two listening tasks differs between the two groups.
That is, in the active condition most of the hearing impaired individuals
increased (i.e., seven individuals increased, two decreased and two remained
at the same level) their adjusted sound level (average increase 2.22 dB),
whereas the normal hearing individuals decreased their average adjustment
(- .45 dB). A sign test reveals that the change (i.e., increase vs decrease in
dB) is significant for the hearing-impaired group (z = 1.83, p < .05), whereas
significance was not reached for the normal hearing group. In the passive
condition there is an increase in the adjusted speech sound level for both
groups (the hearing-impaired increased with 2.1 dB and the normal hearing
group with .81 dB). A sign test yielded significance only for the hearing-
impaired group (z = 2.38, p < .05).
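The sign tests above compare, per listener, the direction of change in adjusted level between the two measurement occasions. The paper reports normal-approximation z statistics; an exact binomial version of the same test, applied to the counts quoted for the hearing-impaired group in the active condition (7 increases, 2 decreases, 2 ties dropped), can be sketched as follows (illustrative code, not the authors' analysis, and the exact p-value need not match the quoted z approximation):

```python
from math import comb

def sign_test_one_sided(increases: int, decreases: int) -> float:
    """One-sided exact sign test; ties are assumed to be excluded beforehand.

    Returns P(X >= increases) for X ~ Binomial(n, 0.5), n = increases + decreases.
    """
    n = increases + decreases
    return sum(comb(n, k) for k in range(increases, n + 1)) / 2 ** n

# Hearing-impaired group, active condition: 7 increased, 2 decreased,
# 2 unchanged (ties dropped), so n = 9.
p = sign_test_one_sided(7, 2)
print(round(p, 4))  # → 0.0898
```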

Cognitive performance and active – passive listening


Table 2 gives the correlation coefficients for the two groups in cognitive
performance and in the adjusted sound level (signal-to-noise ratio) for
loudspeaker 12 in the active and passive listening conditions. The empirical
picture for the normal hearing group reveals significant correlation coefficients
between the cognitive tasks (with one exception, the semantic decision-making
task) and the active listening task. For the passive listening condition,
significant correlation coefficients are, with one exception (the rhyme
judgement task), absent. The empirical picture for the hearing-impaired
participants, on the other hand, demonstrates no pattern of correlation
regardless of type of listening situation and cognitive task.
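The coefficients in Table 2 are ordinary Pearson correlations between cognitive scores and adjusted sound levels. A self-contained sketch of that computation, on invented data (not the study's raw scores), where a negative r means that higher cognitive scores go with lower adjusted levels:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

# Hypothetical data: reading-span scores and adjusted sound levels (dB)
# for five listeners.
span = [28, 25, 24, 21, 19]
level = [60.0, 63.0, 61.0, 64.0, 65.0]
print(round(pearson_r(span, level), 2))  # → -0.9
```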

Perceived effort and cognitive performance


The results demonstrate that perceived effort increases, as expected, during
and after compared to before the tasks were performed. The hearing-impaired
subjects showed consistently higher level of perceived effort than the normal
hearing participants. Significance was not reached for the difference between
active and passive listening. The pattern of correlations reveal that perceived

effort does not correlate in a systematic way with cognitive performance
for the hearing-impaired group, whereas we have a systematic pattern of
correlations for the normal hearing participants in the active condition for
the measurement after the test session.

GENERAL DISCUSSION
The purpose of the present study was to examine the possible relationship
between the individual´s cognitive skills, type of listening task in a naturalistic
sound environment and perceived effort.
The results can be summarised in five main points: First, the results
display no differences in cognitive performance between the two groups. The
observation that the groups perform at the same level on the rhyme-judgement
task when speed as well as accuracy level are examined is interesting, as this
pattern is different in comparison with results reported when participants
with more severe hearing losses (e.g., a bilateral hearing loss greater than
75 dB for the “best ear”) or deafened adults have been studied (Andersson,
2002; Conrad, 1979; Hanson & McCarr, 1988; Lyxell, Rönnberg, &
Samuelsson, 1995; Lyxell, Andersson, Arlinger, Bredberg, Harder, &
Rönnberg, 1996). Typically, these populations show signs of a deteriorating
phonological processing skill. The results from the present sample, with a
relatively moderate hearing-loss, may suggest that there is a “breaking-point”
in terms of hearing loss where this deterioration starts to operate (c.f., Lyxell
& Holmberg, 2001) and that this “breaking-point” is not reached with the
level of hearing-loss in the present sample.
Second, the difference in the adjusted sound level between the two groups
follows the expectations: The hearing-impaired group adjust the spoken
voice to a significantly higher level than the normal hearing participants in
both the active and the passive listening condition. For the active – passive
manipulation of the listening situations, the pattern of the adjusted sound
level for the normal hearing individuals follows again the expectation. That is,
a significantly higher adjusted sound level for the active condition compared
to the passive is observed. The hearing-impaired participants deviate from
this pattern, as their adjusted sound level is higher for the active condition
than for the passive, but the difference between the two conditions does not
reach statistical significance.
Third, the groups differ in how they adjust the speech sound level
between the two measurement occasions. The hearing impaired increased,
rather than decreased, the sound level, between occasion one and two,
whereas such a pattern was not present in the normal hearing group. This
outcome may reflect the fact that listening is more effortful for the hearing-
impaired over time compared to the normal hearing individuals and one way
of compensating for this state of affairs is to increase the signal to noise ratio
level.
Fourth, the groups differ in how their cognitive skills relate to adjusted
sound level in the active versus the passive listening condition. Cognitive skills
are correlated to adjusted sound level in the active condition, but not in the
passive condition for the normal hearing individuals. Correlations between
adjusted sound level and cognitive skill are absent in the hearing-impaired
group for both listening conditions. Thus, the outcome for the normal hearing
group may imply that the active listening condition is more demanding from
a cognitive point of view, whereas the absence of correlations in the hearing-impaired group
may reflect that the distinction between active and passive listening is not a
fruitful one for this population. This will be discussed in some detail below.
Fifth, the two groups differ in perceived effort during and after the
listening tasks, but the perceived effort is only reflected in terms of significant
correlation coefficients in the normal hearing group for the active listening
condition after the test. Hence, perceived effort is related to cognitive capacity
for the normal hearing individuals when the listening situation demands a
higher extent of cognitive processing. This relation is absent for the hearing-
impaired.
The results from the present study for the normal hearing participants
display the expected pattern. That is, an increase in the cognitive demands in
the listening situation (i.e., active compared to passive listening) is correlated
with cognitive skills and perceived effort, whereas this pattern is not displayed
in the hearing-impaired group. There are at least two possible explanations
for this state of affairs. First, there is a difference in recruitment between
the two groups, such that only a small increase in the speech sound level in
the hearing-impaired group generates an impression of a large increase in
the speech sound level. Thus, the consequence is that the physical difference
in signal to noise ratio between active and passive listening in the hearing-
impaired group is small and does not reach significance. A second explanation
is that the response criteria differ between the two groups. The normal mode
of listening in normal hearing individuals is in most cases a rather effortless or
a low cognitively demanding information-processing task. This state changes
when the listening task requires a more active processing of information

(e.g., answering questions) and/or when the background noise makes parts
of the spoken stimuli ambiguous or when pieces of information are missing.
The listening situation for the hearing-impaired individuals is, on the other
hand, never an effortless task. Parts of the stimuli will always be missing or
ambiguous regardless of listening situation. Thus, the distinction between
active and passive listening is not valid for the hearing-impaired individuals.

References
Allén, S. (1970). Frequency dictionary of present-day Swedish. (In Swedish:
Nusvensk frekvensordbok.). Stockholm: Almquist & Wiksell.
Andersson, U. (2003). Deterioration of the phonological processing skills in
adults with an acquired severe hearing loss. European Journal of Cognitive
Psychology.
Andersson, U. & Lyxell, B. (1998). Phonological Deterioration in Adults with an
Acquired Severe Hearing Impairment. Scandinavian Audiology, 27 (suppl,
49), 93 - 100.
Andersson, U., Lyxell. B., Rönnberg, J., & Spens, K-E. (2001). Cognitive correlates
of visual speech understanding in hearing impaired individuals. Journal of
Deaf Studies and Deaf Education.
Baddeley, A., Logie, R., Nimmo-Smith, I. & Brereton, N. (1985). Components of
fluent reading. Journal of Memory and Language, 24, 119-131.
Baddeley, A. and Wilson, B. (1985). Phonological coding and short-term memory
in patients without speech. Journal of Memory and Language, 24, 490-502.
Borg, G. (1998). Borg´s Perceived Exertion and Pain Scales. Champaign, IL: Human
Kinetics Publishers.
Borg E, Rönnberg J, Neovius L. (1999). Monitoring the environment: Sound
localization equipment for deaf-blind people. Acta Otolaryngol: 119; 146-
149.
Borg E, Wilson M, Samuelsson E. (1998). Towards an ecological audiology:
Stereophonic listening chamber and acoustic environmental tests. Scand
Audiol; 27: 195-206.
Conrad, R. (1979). The deaf schoolchild. London: Harper & Row.
Dreschler, W.A., Verschuure, H., Ludvigsen, C., & Westermann, S. (2001)
ICRA noises: artificial noise signals with speech-like spectral and temporal
properties for hearing instrument assessment. International Collegium for
Rehabilitative Audiology. Audiology 40, 148-157
Divenyi, P. & Simon, H. (1999). Hearing in aging: Issues old and young. Current
opinion in otolaryngology and head and neck surgery, 7, 282 – 289.
Hanson, V. and McCarr, N. (1989). Rhyme generation by deaf adults. Journal of
Speech and Hearing Research, 32, 2-11.

Hygge, S. (2003). Classroom experiments on the effects of different noise sources
and sound levels on long-term recall and recognition in children. Applied
Cognitive Psychology, 17, 895 – 914.
Hygge, S., Rönnberg, J., Larsby, B. & Arlinger, S. (1992). Normal-hearing and
hearing-impaired subjects’ ability to just follow conversation in competing
speech, reversed speech, and noise backgrounds. J Speech Hear Res, 35, 208-
215.
Hällgren, M., Larsby, B., Lyxell, B. & Arlinger, S. (2001). Evaluation of a cognitive
test battery in young and elderly normal-hearing and hearing-impaired
subjects. J Am Acad Audiol, 12, 357-370.
Lunner, T. (2003) Cognitive function in relation to hearing aid use. Int J Audiol,
42, S49-S58.
Lyxell, B. & Holmberg, I. (2000). Speechreading and cognitive skills in schoolchildren
(11 – 14 years). British Journal Educational Psychology
Lyxell, B., Rönnberg, J., & Samuelsson, S. (1994). Internal Speech Functioning
and Speechreading in Deafened and Normal Hearing Adults. Scandinavian
Audiology, 23, 179-185.
Lyxell, B., Andersson, J., Andersson, U., Arlinger, S., Bredberg, G., Harder, H.
(1998). Phonological representation and speech understanding with cochlear
implants in deafened adults. Scandinavian Journal of Psychology, 39, 175-
179.
Lyxell, B., Andersson, J., Arlinger, S., Bredberg, G., Harder, H., & Rönnberg, J.
(1996). Verbal Information-processing Capabilities and Cochlear Implants:
Implications for Preoperative Predictors Of Speech Understanding. Journal
of Deaf Studies and Deaf Education, 1, 190-201.
Lyxell, B., Andersson, U., Olsson, I-S & Borg, E. (2003). Working memory capacity
and phonological processing in deafened adults andindividuals with a severe
hearing-impairment. International Journal of Audiology, 42, suppl.1, 86-89.
Magnusson L. Reliable clinical determination of speech recognition scores using
Swedish PB words in speech-weighted noise. Scand Audiol 1995; 24: 217-23.
Pichora-Fuller, M.K. (2003). Processing speed and timing in aging adults:
psychoacoustics, speech perception, and comprehension. Int J Audiol, 42,
S59-S67.
Schneider, B. & Pichora-Fuller, MK. (2000). Implications of perceptual processing for
cognitive aging research. In F. Craik & T. Salthouse (eds) The handbook of
aging and cognition (pp. 155-219). Mahwah, NJ: Lawrence Erlbaum Associates.

Schneider, B., Daneman, M. & Pichora-Fuller, M. K. (2002). Listening in aging
adults: From discourse comprehension to psychoacoustics. Canadian Journal
of Experimental psychology, 56, 139 – 152.
Shoben, E. (1982). Semantic and lexical decisions. In Puff, C. R. (ed.) Handbook
of research methods in human memory and cognition. New York: Academic
Press.
Tun, P. A. & Wingfield, A. (1999). One voice too many: Adult age differences in
language processing with different types of distracting sounds. J Gerontol,
54B, 317-327.

Table 1
Table 1 gives the mean performance and SDs on the cognitive tasks for
both groups. For the span tests the means reflect the number of correctly
recalled items in each task; for the three other tasks they reflect mean reaction time.

_________________________________________________
Cognitive tasks            Hearing-impaired   Normal hearing

Reading span               24.75 (5.19)       22.83 (5.16)   ns
Word-span                  56.16 (9.98)       57.91 (8.79)   ns
Semantic decision-making    1.01 (.16)         1.00 (.14)    ns
Lexical decision-making     2.57 (.81)         2.38 (.66)    ns
Rhyme-judgement             1.48 (.27)         1.52 (.37)    ns
__________________________________________________

Table 2
Table 2 gives the correlation coefficients between performance on the
cognitive tasks and the adjusted sound level in the active and passive listening
condition for both groups.
__________________________________________________
                           Hearing-impaired    Normal hearing

                           Active   Passive    Active   Passive

Reading span                 .09     -.17      -.69*     -.03
Word-span                   -.42      .18      -.49*      .13
Semantic decision-making    -.43     -.08       .23      -.16
Lexical decision-making     -.19     -.23      -.57*      .21
Rhyme-judgement             -.18     -.13      -.56*     -.75*
__________________________________________________
* p < .05, one-tailed

Sound, Catastrophe and Trauma

Åke Iwar

Introduction
Each individual reacts and behaves differently in connection
with a traumatic experience, and this includes the sensory impressions. In the
shock phase especially, patients can describe sensory phenomena as “unreal
and supernatural” as though they had considerable amplification of hearing
for example. The experience of silence or the absence of sound can be
marked, as in the classical expression “you could hear a pin drop”. So-called
dissociation can arise in the shock phase, where time and place can change so
that a minute can be like an hour and vice versa the sound experience may
have a lengthy, or split second lapse. In connection with robbery, a patient
may have reported no experience at all of glass being broken for example,
only the impression of the perpetrators eyes or gun, everything else is missing
or very diffuse. When the traumatised person enters into the reaction phase,
certain flashbacks of sensory experiences such as an explosion, screeching car
tyres etc. can suddenly return and often be very frightening. In the case of a
severe bus accident in 1988, where many children died and many were injured,
terrifying memories of the accident were awakened at the hospital when some
children recognised the sound of drink being sucked through a straw, which
was similar to the sound in the bus immediately prior to the accident.
So-called traumatised sensory experience, such as a sound, can of course be
unique, for example a bus skidding round on a motorway. In that context
the sound was described as “when you crumple up a Coca Cola tin”. Sensory
experience/sounds can also cause a conflict of feeling as it can be a common
everyday sound, and basically a positive one, but transformed by the trauma
to one representing danger, threat and death. Another example of so-called
sound trauma is told by a patient who had been robbed twice, where the first
assailant had “only” pointed his pistol at her temple, but the second had fired
his pistol in the air. The patient could now experience her own death; the
scene and the sound from the two traumas became one in her experience.
105
It appears to be common that one or several sensory experiences
disappear or are amplified with trauma, and create permanent scars paired
with severe anxiety. According to my clinical experience, those traumas that
are both sudden and violent have a deeper and more prolonged
effect on the patient. One patient who, when young was exposed to sexual
violence, could as an adult be frightened by the sound of a door being opened
before she could see who it was.
It is of the utmost importance that the traumatised patients receive
psychological help for their crises. An important part of crisis counselling
is to create security and to work through emotions. Music can also play a
meaningful role both in evoking closeness and facilitating grief reactions.
All events that threaten our security and existence will cause us huge
psychological stress. They will challenge our belief in the world as a good and
safe place to inhabit.
When we are hit it happens in the middle of our daily lives when we least
expect it. In a flash, life changes and will never be the same again. The more
sudden the event, the more our psychological preparedness will be challenged.
We must face a chaos that will have enormous immediate consequences, but
also consequences in the long term. We want the pain to heal as soon as
possible and are forced to see that we are no longer invulnerable. We will ask
ourselves questions like, “Why does this happen to me or my family?” Then
we must cope with our daily lives again with all their uncertainty and worry.
We try in every way to make sense of what has happened, to bring meaning
to the terrible event. This as an attempt to bring knowledge, understanding
and meaning as to why we react as we do, when catastrophe threatens our
daily lives.

The Myth of Invulnerability


There is a tendency to hide from painful and difficult things that can befall us.
Through the media we can take part in how others are affected by violence,
accidents, and catastrophes, but we run the risk of failing to see that we
ourselves are exposed to the same dangers. We often underestimate the risks,
having difficulty in accepting the impermanence of life and that we can all be
affected. This behaviour I would name “The Myth of Our Invulnerability.”
This can obstruct our awareness of real risks and it also amplifies the scope
of our reactions. We will also search for explanations to make a critical event
more understandable. There is always a risk that we think the fault is our
106
own, or that we become trapped in attempts to find an explanation. There is
also the possibility that the media influences our need to find a guilty party.
The trauma will also contest the view of the world as a good place in which
to live in, being inhabited mainly by good people.
Through knowledge and understanding of our reactions we can obviate
and prevent our tendency to flee. This is important because it will better
enable us to make use of our previous experience when we go through a crisis.
“Try to forget what has happened” is not a good command. The danger is
that we do not accommodate our experiences and sorrow. It is important to
try to incorporate those experiences, both good and bad.

The Phases of Crisis


Suddenly afflicted by critical events we will have various crisis reactions. All
reactions are normal, even if they make us feel uncertain, afraid, and lacking
in control. Even if each person reacts differently, there is a pattern that we
follow. Our reaction patterns have a survivalist and protective function to
help us mobilise our inner physical and psychological resources. The crisis
phases vary in intensity and magnitude.

Shock

The Shock Phase (Shutting off) is a primal form of survival that we cannot
control at will. Later we can look back on this phase as having been in a
dream or state of unreality. Our senses are strongly focused. Thoughts and
feelings are often shut off. Seconds can feel like minutes/hours. The shock
may remain for several seconds, hours, or days. When we look back on
what happened, some of our experiences seem amplified, while others may
be very diffuse. Sometimes we question ourselves or have feelings of guilt,
for example, of why we did not react more strongly. Then it is important to
understand that shock can shut off both thoughts and feelings. Dissociation
is a strongly protective function which means, amongst other things, that
visual and auditory experiences become split off and “disappear” from
consciousness. In certain cases we must redefine the sensation of a sound we had never previously experienced.

The Reaction Phase (Repetition) is a phase in which we begin more openly
to understand and feel what has happened. We begin trying to take in an unmanageable reality, although we fight against it. It is a period of feelings, reactions and thoughts that come and go, perhaps mostly in the evenings when there is stillness and peace. It may be difficult to
sleep. Memories and thoughts can bombard us, and we may feel indecisive.
Our fears and physical discomforts make us less concentrated and maybe
more irritable. It is always difficult to accept what has happened. Although
one might not do very much, one might feel tired because so much energy
is used in bearing these experiences. Various questions seek answers in an
attempt to make the situation more understandable. There are many “What
if...?” thoughts that we brood over. There is also a risk that we isolate ourselves.

The Processing Phase (Approaching) means that we try to wrestle with what
has happened to us. We can perhaps visit those times and situations that
might bring back memories. The pain and the grief remain, but one might
begin to try to accept the wounds and scars that were caused. It is of great help
to speak to others about what has happened. Each experience contributes an
important piece of the puzzle in helping to get a better grip on events. Being
able to arrange memorials and other ceremonies becomes an important form
of support. Time and space are needed to work through the experiences.

The Reorientation Phase means that perhaps we begin to discover our surrounding world again, as if life had stood still during our crisis, grief and trauma. The outside attention the crisis attracted has now distinctly diminished. This can mean that
we are now more alone with our experiences, but also that we might have
found greater fellowship with others. Perhaps the crisis has given our life a
new perspective and meaning. Perhaps we have become much closer to other
people. Our view of ourselves, others, and the world about us has probably
changed, for better or for worse. We have a greater vulnerability, but also
increased experience and maturity. Nevertheless, we may constantly worry
that the same thing can happen again. Slowly we begin to accept everyday
life with its potentials, even if it may never be the same again.

Violence
When someone abuses us psychologically or physically, for example by using
violence or sexual abuse, the result can cause deep physical and psychological
wounds. When we are suddenly attacked, we lose our natural security and control. We become more suspicious, and convinced that the same thing can happen again. Who can one trust?
Trust and confidence in others may be seriously damaged. Perhaps we
begin to experience that which is unfamiliar, unknown and dark, as unpleasant
or even dangerous. Perhaps the question of whether justice and goodness really exist arises within us. We may isolate ourselves in order to create temporary
protection. Our increased vulnerability creates an uncertainty where both
our self-esteem and social networks are threatened. Guilt-imposing thoughts
often come to those who have experienced violence. “I should have…” and “What if I had done so and so instead?” become reproaches that repeat inside us like a scratched gramophone record. It is therefore crucial that every
crime victim has the possibility of undergoing crisis therapy, and receiving
help in working through that which has happened in order to understand and
become aware of their reactions and feelings of guilt. It is of great importance
to help those afflicted to understand that there are forces of goodness and
strength that can protect and re-enforce a sense of security and safety again.
Fellowship, a feeling of belonging, happiness, laughter, and tears, belong
together and create a light in the darkness. Support from relatives, colleagues
and friends is very important, especially in the case of a trial, where we will once
again confront the perpetrator. We need to understand that this is a process
of healing that in many cases can take considerable time. Long-term support
– practically, legally, emotionally, and even spiritually – is necessary to help us
find our security again when violence has threatened our existence.

Crisis Reactions
Understanding the reactions to crisis can fill a preventative and support
giving function. Each individual’s reaction and sorrow is unique and must be
approached with great respect. There are differences and similarities, but all
reactions are normal even though they may differ in intensity and magnitude.
The reactions come and go, open up and close, sometimes like waves, when
we begin to sense and understand what has happened. When we return, in our thoughts and memories, to what has happened, we are once again in the
grip of fear and grief. Grief is a part of everybody’s experience, as death is a
part of life. Grief and fear can be expressed in different ways, the external
events mirroring the internal meaning for the person concerned. Below is a
list of diverse reactions that we may feel.

Acute physical reactions / Thought reactions


Difficulty in breathing / Difficulty in concentrating
Palpitations / Limited ability to think logically
Sweating and chills / Difficulty in acting and decision making
Weakness / Disorientation
“Lump in the throat”, no appetite / Strong imagery
Diminished hearing / Tunnel vision or distorted memory
Heightened sense experience

Emotional reactions / Behavioural reactions


Feelings of vulnerability / Impaired capacity
Fear – Uncertainty / Difficulty in relaxing
Apathy – Numbness of feeling / Hyperactivity and restlessness
Anger, rage / Conflict within relationships
Sadness, feelings of guilt / Crying, wanting to cry
Uneasiness / Isolation, retreating
Suspicion / Difficulty in self-expression, verbal and written
Strong apprehension of new dangers

Social Reactions
Our perception of the value of life can change in either a positive or a negative
direction. We may have difficulty in returning to everyday life again. For example,
ordinary problems at work and in our surroundings seem unimportant. An
increased vulnerability might mean that we become more cautious and have a
greater need for control, especially when the threat is invisible. There is a risk of our becoming conflict-prone. We might also have a feeling of living more in the
present or more intensely. Various conflicts may arise in our daily encounters.
There is also the possibility that those around us “forget” what has happened
sooner than we ourselves, and make a quicker return to everyday life.
Crisis Handling

Preventative measures
Can we prepare ourselves for a crisis? Both yes and no. Life experiences
naturally prepare us to a certain extent, but, at the same time, we can never
know how we will react in a given situation. Sometimes we are told to forget
what has happened in order to go on with our lives. This tends to have the
reverse effect, forcing us to remain with the thoughts and pain connected
to the event. It is when we can voluntarily permit ourselves to approach painful experiences that we can free ourselves from them. It is moreover of great importance that we carry our experiences with us into the future. “If we neither look back nor forward we must look out.” Also of significance is the protection of good traditions, and even humour, that strengthen solidarity, because ceremonies and rituals can help us to express what we feel without words. Education and training within, for example, the police, rescue services and healthcare have been shown to strengthen mental readiness for difficult events.
One must, however, be aware that there are always limits to what we
can cope with. This applies especially when children are involved in a crisis. To “utilize” so-called evil and painful experiences gives greater readiness in the future. Going through different experiences to create “objective pictures” beforehand (briefing) is a way to increase mental readiness more systematically.

Urgent Help
This is aimed primarily at helping the afflicted recover from shock (shutting off). It is necessary to create maximum safety and care as quickly as possible,
with the helper taking the initiative, as the afflicted party is incapable of
decision-making with regard to help.

Never leave anyone alone who is in a state of shock / Try to give warmth
(blanket) contact (careful physical contact, eye contact) / Protection from
further trauma and the press / The importance of early contact to inform and
gather family, relatives or a personnel group / Strong reactions are a positive
sign of recovery from shock / Everyone who experiences a trauma has extra
need of security and of someone to trust / If you give information, remember
that it is quickly forgotten / Do not forget the so-called strong and responsible
person, (for example the leader) who has an initial tendency to reject help for
himself/herself.

Long term Help


Great attention may be given to a critical event. When daily life returns
to normal there is a risk of feelings of alienation. It is important to have
support even in the long-term, and to be aware that this is a process that takes
time. It is also vital to support a process in which the afflicted deals with his/
her feelings instead of fighting them. The aim is to prevent the person, the
feelings, and the event from being isolated.

Family and friends are important resources / Making room for company in
which thoughts, feelings and impressions can be freely expressed / Try to gather
facts in order to combat falsification / Do not take over responsibility for daily
problems / Try to counteract isolation / Support various kinds of rituals and
ceremonies / In the case of death, make provision for those nearest to say farewell /
Give information and support on common reactions / Remember that grieving
takes time / Encourage physical activity; it reduces restlessness / Anniversaries
and other memorials/places, help the afflicted tend their grief / Do not forget
to find support and emotional outlet for yourself as helper / Find spontaneous
or organised forms of dialogue to release the pressure / Try to return to daily
routine. Be aware that conflicting feelings will arise / Try not to demand too
much of yourself. Give yourself space to pause / Physical activity improves
fitness and helps fight restlessness / Remember that everyone needs extra support
and encouragement especially in a crisis / Supervision and debriefing reduce
the risk of burn-out.

Conclusion
A traumatic experience has an extensive impact on the individual socially, psychologically and physically, and our sensory organs are of vital significance for our survival. In traumatic circumstances hearing and sound can be “distorted” or can “disappear” completely from consciousness. The ways our senses coordinate or become damaged probably depend on the type of trauma and the individual’s capability of overcoming and working through the crisis. Many interesting future research areas open up around this subject. With regard to treatment, the consideration of the senses is of great importance.

(translation: Janet Kinnibrugh)


Sounds as Triggers

How traumatic memories can be processed by Eye Movement Desensitization and Reprocessing (EMDR)

Kerstin Bergh Johannesson
lic. Psychologist, specialist in clinical psychology, National Centre for Disaster Psychiatry

In December 1991 an aircraft from Scandinavian Airlines System (SAS) had to make an emergency landing in a field in the countryside north-west of Stockholm just after take-off. The plane smashed through the trees in its descent.
All 129 passengers survived. A few were physically injured. The passengers
reported afterwards that the sound of the trees smashing against the airplane
remained as vivid auditory impressions. Later on, some of the passengers
talked about reminders, that is to say other sounds which were very similar
and which became triggers for the experience of the emergency landing.
These reminders activated the same type of arousal and emotional reactions
as the original reactions at the time of the accident.
A threatening incident such as the one described above, which is sudden and unexpected, is experienced differently by different people. Most people react initially with strong arousal, followed, for some time, by sensory re-experiencing, like pictures, sounds, tastes or body sensations, but also
accompanied by elevated tension and anxiety and maybe a lack of emotional
balance. For most people these reactions will decrease after a shorter period,
but for some they will remain. It seems as if these persons cannot process their
impressions, and the unprocessed memories are stuck in the mental memory
system. Anxiety arises when the individual is reminded of the incident, and
because of this fact, tries to avoid everything that might remind them of what
happened. An invisible wound has emerged, which does not seem to change
over time. What happened in the past continues as an everlasting presence.
This is the nature of the traumatic memory, which is typically dominated
by perceptual characteristics, pictures, sounds, scents or body sensations.
Often the memory is fragmented, but the fragments can be very vivid. The
fragmented memory is associated with strong negative feelings. Some parts
of the incident might be dissociated, that is, they cannot be retrieved in a
voluntary way. If this condition lasts for more than a month, the person
might have developed what is called posttraumatic stress disorder (PTSD).
Persons can be reminded of a traumatic experience by so-called triggers.
These triggers consist of external or internal reminders, which are related to
the original incident. Triggers can consist of verbal stimuli, sounds, thoughts
or pictures. The reminders will cause the person to re-live the traumatic
incident, as if it happened again. This re-living can be characterised by
sensory stimuli and might be experienced as a video clip, as pictures, or as
body sensations. Sometimes re-living might have a more auditive quality, like
words, or as in the example above, noise or even inner voices.
Findings suggest that the right hemisphere of the brain is important for traumatic memories. A study by Pagani, Högberg et al (2005) demonstrated that clients with PTSD who had auditory trauma reminders had increased blood flow in the right brain hemisphere when they were compared to a group who had not developed PTSD.

Treatment for PTSD


Standard psychotherapeutic treatments for PTSD and traumatic memories
usually include some type of exposure to the traumatic experience, either
in vivo or, if more appropriate and perhaps more common, in an imaginal mode. It is also common that the treatment model includes investigating and
processing maladaptive cognitions of the self, which might have developed
from the negative experience.
Eye Movement Desensitization and Reprocessing (EMDR) is a
psychotherapeutic approach for reducing distress after traumatic experiences that is disturbing in everyday life. Treatment focuses on how trauma affects present functioning. EMDR is an evidence-based method for treating chronic posttraumatic stress disorder (Bisson et al 2007). The method
has been demonstrated to be equally effective as exposure-based therapies
(Spates et al 2009). EMDR can also be applied for acute PTSD. EMDR has
also been used for other types of problems like anxiety and panic attacks,
traumatic grief, reactions to physical illnesses and many other conditions that
are associated with distressing experiences.
EMDR is a therapeutic approach that emphasises the brain’s information
processing system and how memories are stored. The adaptive information-
processing model posits the existence of an information processing system
that assimilates new experiences into already existing memory networks.
These memory networks are the basis of perception, attitudes and behaviour.
Problems arise when an experience is inadequately processed. Current
symptoms are viewed as resulting from disturbing experiences that have been
encoded in state-specific, dysfunctional form (Shapiro, 1995, 2001, 2007,
2008). Even if the traumatic incident took place a long time ago, it will
be experienced once again together with the emotions and sensations that
were experienced at the original time. The core goal of EMDR involves the
transmutation of these dysfunctionally stored experiences into an adaptive
resolution, which promotes psychological health. EMDR aims at activating
the ability to handle the distress of traumatic memory and to decrease
disturbing thoughts and emotions. It can also help the patient to think
differently about himself in relation to the traumatic memory.
EMDR integrates elements of many psychotherapeutic orientations,
such as psychodynamic, cognitive behavioural and body-centred orientation.
Treatment follows a structured protocol. The method was originally developed
for adults, but it is easily adjusted for children. Treatment is usually focused
on the individual, but applications have been made for group treatment.
EMDR uses an eight-phase approach. During EMDR processing, the
patient is asked to focus on a specific traumatic memory and to identify the
distressing image that represents the memory, the associated negative cognition,
an alternative positive cognition, to identify emotions that are associated with
the traumatic memory, and to identify trauma-relevant physical sensations
and their respective body locations. This process is quantified by use of
subjective indicators and measures. After these preparations, the patient is
asked to hold the distressing image in mind along with the negative cognition
and associated body sensations, while tracking the therapist’s fingers back
and forward across the patient’s field of vision in rhythmic sweeps during
approximately 20 – 40 seconds. The patient is then asked to take a break
and to give feedback to the therapist of any changes in images, sensations,
thoughts or emotions that might have occurred. This process is repeated
and continued until the client no longer experiences any distress from the
traumatic memory. Bilateral tactile stimulation or sounds can be used as an
alternative to eye movements. If EMDR is effective for the particular patient,
this will show within one to two treatment sessions. A limited number of
sessions are often enough for problems after a single trauma. However, length
of treatment depends on the complexity of the traumatic experiences.
It is not yet established how and why EMDR is effective, but there
are some hypotheses. One line of thinking stresses the fact that processing is connected with a dual focus. The client is encouraged to think of
the memory and simultaneously follow the moving hand of the therapist.
Possibly this could establish a state of mindfulness, which creates a more open mind, stimulating a free process of association that could open up other perspectives. Some authors have described this as changing the
orienting response in the mind. The method is also characterised by a dosed
exposure to the traumatic content, which might be beneficial for the client, preventing him or her from being overwhelmed.
Processing with EMDR can be emotionally powerful. Only therapists licensed to work with psychotherapy and who have specially approved EMDR training should therefore perform EMDR treatment.

References
Bisson, J. & Andrew, M. (2007). Psychological treatment of post-traumatic stress disorder (PTSD). Cochrane Database of Systematic Reviews, Issue 3.
Pagani M, Högberg G, Salmaso D, Tärnell B, Sanchez-Crespo A, Soares J, Aberg-
Wistedt A, Jacobsson H, Hällström T, Larsson SA, Sundin O. (2005).
Regional cerebral blood flow during auditory recall in 47 subjects exposed
to assaultive and non-assaultive trauma and developing or not posttraumatic
stress disorder. Eur Arch Psychiatry Clin Neurosci, 255(5), 359-65.
Solomon, R & Shapiro, F (2008). EMDR and the Adaptive Information Processing
Model. Journal of EMDR Practice and Research; 2 (4): 315-325.
Spates, C.R., Koch, E., Cusack, K., Pagoto, S. & Waller, S. (2009). In Foa, E.B.,
Keane, T.M., & Friedman, M.J., (eds.), Effective treatments for PTSD:
Practice Guidelines of the International Society for Traumatic Stress Studies
(pp. 279-305). New York: Guilford Press.
Shapiro, F. (1995). Eye Movement Desensitization and Reprocessing: basic principles,
protocols and procedures. New York: Guilford Press.
Shapiro, F. (2001). Eye Movement Desensitization and Reprocessing: basic principles,
protocols and procedures (2nd Ed.). New York: Guilford Press.
Shapiro, F. (2007) EMDR and case conceptualization from an adaptive information
processing perspective. In F. Shapiro, F. Kaslow, & L. Maxfield (Eds.),
Handbook of EMDR and family therapy processes (pp. 3-36). New York:
Wiley.
