Psychopathology - Research, Assessment, and Treatment in Clinical Psychology - Third Edition (G. Davey 2021)
PSYCHOPATHOLOGY
RESEARCH, ASSESSMENT, AND TREATMENT IN CLINICAL PSYCHOLOGY
GRAHAM DAVEY
The only series to be approved by the BRITISH PSYCHOLOGICAL SOCIETY
BPS TEXTBOOKS | WILEY Blackwell
Psychopathology
THIRD EDITION
GRAHAM DAVEY
University of Sussex
Brighton, UK
Acknowledgements
Once again I'd like to begin by thanking the Wiley commissioning and production team. Without their
hard work, skills, and persistence this new edition would not have been possible. I would also like to
thank all of those people who advised me on the first and second editions of the book by making
sensible suggestions about content, providing me with materials, and reviewing drafts. These include
Chris Brewin, Alison Brown, Kate Cavanagh, Roger Cocks, Rudi Dallos, Suzanne Dash, Thomas
Ehring, Andy Field, Daniel Freeman, Theresa Gannon, David Green, Richard Hastings, Marko Jelicic,
Jo Johnson, Fergal Jones, Nick Lake, Ruth Mann, Charlie Martin, Lance McCracken, Fran Meeten,
Michael Morgan, Peter Muris, Filip Raes, Ben Smith, Helen Startup, Emma Veitch, Brendan Weekes,
and Leonora Wilkinson. Those who have advised me on aspects of the third edition include Kate
Cavanagh, Ann Cook, Alan Frances, Samei Huda, Lucy Johnstone, Nick Lake, Frances Meeten, Paul
Salkovskis, and Adrian Whittington.
Also, a big thank you to all my family and friends who have endured me talking about little else but finishing this new edition for the last 6 months! Especially my mother Betty and daughters Kate and Lizzie.
If you have time on your hands you can read my thoughts about mental health and other issues on my
blogs at http://www.papersfromsidcup.com and
https://www.psychologytoday.com/gb/experts/graham‐cl‐davey‐phd and follow me on Twitter at
http://twitter.com/GrahamCLDavey
Finally, I finished the last few pages of this third edition during the second week of the first coronavirus lockdown in the UK. I would like to dedicate this book to all NHS health professionals and staff, whose tireless and courageous work has done so much for the country under unprecedented and very difficult circumstances.
Graham Davey
Brighton, April 2020
Preface to Third Edition
It feels like a significant landmark to have published the third edition of Psychopathology. Students still
want to study psychology in their tens of thousands, and a significant proportion of those still enter
university with the intention of becoming professional psychologists, and more specifically, with a view
to learning about mental health and becoming clinical psychologists.
And there is no more important time than now when it comes to learning about mental health. We live
in times when mental health problems are more high profile than they've ever been, more people are
becoming aware of their mental health and seeking help and support for their problems, and there are
many researchers and clinicians who would claim that we live in times when many common mental
health problems have reached epidemic proportions.
Over the three editions of its life so far, this textbook has been refined to provide the enquiring student
with access to most aspects of mental health and clinical psychology, and I hope this new edition is in a
form that is easily accessible and engaging and facilitates learning.
One of the main aims of this third edition was to update the research database in the book, and we've
included a total of over 1,100 new references spread across all 17 chapters. This enables Psychopathology
to act as a sourcebook for students and instructors wanting to access the most recent evidence‐based
research on the causes and treatments of mental health problems.
So, has our understanding of mental health problems changed over the 5 years since the publication of
the second edition? Yes, there have been many significant developments in the science, treatment, and
conceptualisation of mental health problems over recent years and many of these are represented in this
third edition. These developments include the growth in alternative evidence‐based approaches to
diagnosing and categorising mental health problems; the surge in research in epigenetics and the
possibility that many forms of early experience may moderate the expression of genes in ways that
affect mental health symptoms; the continuing advancement of the neurodiversity movement, which
promotes the idea that there is not just one ‘normal’ or ‘healthy’ type of brain; the growing importance
of network analyses showing how mental health symptoms interact; and the greater understanding of
how mental health problems develop out of the socioeconomic conditions in which people live and are
not in any simple way about biological or even psychological dysfunction.
The book continues to provide multiple perspectives on mental health problems from a range of
psychological, sociological, biological, and genetic approaches and to emphasise the importance of
experimental psychopathology to good clinical psychology research. It is also associated with a wide
variety of supplementary material, available for both students and instructors. In this third edition, these
resources have been expanded and updated.
Just like the first and second editions, this third edition is supplemented by a range of features designed
to facilitate effective teaching and learning. These include:
Focus Point Boxes: These provide more in‐depth discussion of particular topics that are conceptually
important, controversial, or simply of contemporary interest. Whenever possible these are linked to
everyday examples—such as high‐profile news items—that allow the reader to consider the issues in a
contemporary, everyday context.
Research Methods Boxes: These features contain detailed descriptions of methods utilised in
psychopathology research and describe the pros and cons of individual methods and their potential
uses. These examples act to supplement the general material provided on research methods in Chapter
3. Like most researchers, those involved in clinical psychology research are often imaginative in their use
of research methods, and many of the examples provided in research methods boxes attempt to convey
how methods from other areas of psychology and science generally can be adapted to study issues
relevant to psychopathology.
Case Histories: Most chapters contain example case histories describing the symptoms, experiences
and life circumstances encountered by individuals experiencing particular psychopathologies. Many of
these examples conclude with a clinical commentary that is designed to link the detail of that specific case
history to the general facts to be learned in the text.
The Client's Perspective: Many chapters also contain examples of an individual's own descriptions
of the experience of psychopathology. These are designed to provide the reader with an insight into the
phenomenology of different psychopathologies and the way that symptoms affect moods, experiences,
and everyday living—including social, occupational, and educational functioning. These descriptions
are also supplemented by the personal accounts of psychopathologies that begin each chapter. As with case
histories, client's perspective features usually conclude with a clinical commentary that links the personal
experiences of the psychopathology to the academic content of the text.
Treatment in Practice Boxes: These boxes attempt to provide the reader with a more detailed
insight into how individual treatments and interventions are conducted in practice. It is often difficult for
a student to understand, for example, how a therapy is conducted in practice from descriptions given in
academic texts. These boxes provide some specific examples of how a practitioner might implement the
principles of a treatment in a specific case.
Self‐Test Questions: Throughout each chapter the reader will encounter self‐test questions. These are
designed to test the reader's absorption of basic factual and conceptual knowledge. Instructors and
teachers can also use these questions as a basis for discussing key material in class or in small group
discussions.
Glossary: At the end of each chapter there is a list of key terms used in the chapter. When each term
first appears in the text, it is highlighted in bold and is either described or defined at that point.
Highlighting these terms makes them easy to locate, and the list of key terms can serve as a revision
checklist—especially for students due to take multiple‐choice questionnaire assessments.
Finally, I hope you find this third edition readable, accessible, and enlightening and a worthwhile
addition to your teaching and learning activities. To all who have used the earlier editions, thank you so
much for your support, and I wish you well with your teaching and learning. Good luck!
Graham Davey
Brighton
April 2020
About the Companion Website
CHAPTER OUTLINE
1.1 A BRIEF HISTORY OF PSYCHOPATHOLOGY
1.2 DEFINING PSYCHOPATHOLOGY
1.3 EXPLANATORY APPROACHES TO PSYCHOPATHOLOGY
1.4 MENTAL HEALTH AND STIGMA
1.5 CONCEPTS, PARADIGMS, AND STIGMA REVISITED
LEARNING OUTCOMES
When you have completed this chapter, you should be able to:
1. Discuss the pros and cons of a number of different approaches to defining
psychopathology.
2. Describe important developments in the history of our understanding and response to
mental health problems.
3. Describe and evaluate the nature and causes of mental health stigma.
4. Compare and contrast approaches to the explanation of psychopathology, including
historical approaches, the medical model, and psychological models.
Am I crazy? I don't know what is wrong with me. I did have depression in the past and what I am going through
doesn't feel a lot like what I had before. My moods change every 30 minutes at times. I have been like this for a
while. I started out about once a week I would have a day where I was going from one extreme to the next. In the
past few weeks it has gotten worse. It seems like my moods change for no reason at all. There are times that I will
just lay down and cry for what appears to be no reason at all and then 2 hours later I will be happy. I find myself
yelling at my son for stupid reasons and then shortly after I am fine again. I truly feel that I am going crazy and
the more I think about it the worse I get. I am not sleeping or eating much and when I do eat I feel like I will be
sick.
Joan's Story
Ten years ago after a pub brawl I was beaten and left for dead outside a pub. Following the attack, my physical
recovery from a broken collarbone and broken ribs was slow. From that day on, I lost my confidence and was
scared of going out in case I was attacked again as the attackers were never caught; scared that I would lose my
rag and end up in prison. I started getting panic attacks and getting easily upset by noise, spending most of the
time in my room on my own. I was unable to stop thinking about the beating and it played in my mind almost
constantly like a film. Sometimes, I went crazy, I lost the sense of where I was and it felt like the assault was
happening all over again—the footsteps behind me, the whack on my head, the sense of falling on my face
thinking ‘This is it, I'm in for it’. I started smoking cannabis because that was the only way to numb my feelings
and get me to sleep but I woke soon after with nightmares of being chased and would wake up shouting and
soaked in sweat.
Ten years on and little has changed. I'm stuck in a rut and don't know how to get out. My life is worthless. I'm a
failure for letting this get on top of me.
Adapted from Davey GCL, Lake N & Whittington A (2015) Clinical Psychology. Routledge
Peter's Story
I found it hard at secondary school to make friends and not having a lot of money meant I was singled out. While
it wasn't physical, mainly name calling and being spat at, it reinforced the feeling that I didn't deserve to be here.
By now I was hearing ‘inside’ voices in my head telling me I was useless, shouting words like ‘Bitch!’ and ‘Die!’.
I was having severe mood swings. I thought about self‐harming and became controlling about my food intake. At
age 14 I started taking drugs and drinking alcohol. Between 14 and 21 I had a cannabis and cocaine addiction
which I overcame. At age 24 I was planning my suicide when my father died. Within 3 days I was having
extreme audio and visual hallucinations such as whispering and people calling my name and seeing deceased
people, dead bodies and shadows as well as everyday objects. People also transformed into other people in front of
me leading me to believe they were possessed by the dead. I also heard menacing voices issuing commands. I
experienced strange smells, tasting poison in my food and on one occasion felt someone stroking my hair. It sounds
crazy, but I thought that my mind was being controlled, that I could communicate with the dead and that, because
of this, the government was spying on me and plotting to kill me.
Adapted from Davey GCL, Lake N & Whittington A (2015) Clinical Psychology. Routledge
Jo's Story
I started using cocaine at 13. Before, I was using marijuana and alcohol and it didn't really work for me, so I
wanted to step it up a level. I started using heroin when I was 15. I began using it to come down from cocaine
and get some sleep. But I started liking the heroin high and started using it straight. Every day, after a while.
Along with cocaine, I also began taking prescription drugs when I was thirteen. They were so easy to get. I never
had to buy them or get them from a doctor. I would just get them from friends who had gone through their parent's
medicine cabinet. I also thought that prescription drugs were “safer” than other drugs. I figured that it was okay
for people to take them, and if they were legal, I was fine. Like I said, prescription drugs were incredibly easy to
get from friends, and it always seemed to be a last‐minute thing. Heroin was also easy to get—all I had to do was
go into town and buy it. My heroin use started spiraling out of control. I stopped going to school. I was leaving
home for days at a time. My whole life revolved around getting and using drugs—I felt like I was going crazy.
Erica's Story
Introduction
We begin this book with personal accounts from four very different individuals. Possibly the only
common link between these four accounts is that they each use the word ‘crazy’ in relating their story.
Joan questions whether she is going crazy, Peter feels he's going crazy as he relives the trauma he
experienced, Jo experienced auditory and visual hallucinations that in retrospect seem crazy, and Erica's
life gets so out of control that she too felt like she was ‘going crazy’. We tend to use words like ‘crazy’,
‘madness’, and ‘insanity’ regularly—as if we knew what we meant by those terms. However, we do tend
to use these terms in a number of different circumstances—for example, (a) when someone's behaviour
deviates from expected norms, (b) when we are unclear about the reasons for someone's actions, (c)
when a behaviour seems to be irrational, or (d) when a behaviour or action appears to be maladaptive
or harmful to the individual or others. You can try seeing whether these different uses of the term
‘crazy’ or ‘mad’ apply to each of our personal accounts, but they probably still won't capture the full
meaning of why they each used the word ‘crazy’ in their vignettes. Trying to define our use of everyday
words like ‘crazy’, ‘madness’, and ‘insanity’ leads us on to thinking about those areas of thinking and
behaving that seem to deviate from normal or everyday modes of functioning and cause distress to those
exhibiting these behaviours. For psychologists the study of these phenomena is known as
psychopathology, and the branch of psychology responsible for understanding and treating
psychopathology is known as clinical psychology.
clinical psychology The branch of psychology responsible for understanding and treating
psychopathology.
But before we go any further, let's quickly unpack what is meant by the term psychopathology. A
traditional definition of psychopathology based on the linguistic origins of the term is that it is ‘the
scientific study of mental disorders’—a definition that harks back to the days when the medical or illness
model of mental health problems was the most influential. But as we'll see in this chapter, in current
usage the term psychopathology has a much broader meaning covering the in‐depth study of mental
health problems generally, and many contemporary approaches to mental health problems do not
conceive of them as disorders or illnesses but as the product of perfectly healthy psychological processes
in response to stressful or extreme life experiences. In this way, psychopathology has become a term that
describes a general scientific approach that embraces attempts to understand the causes of mental
health problems, how we should classify them, and how we can successfully fix them. If you're interested
in how the nature of mental health terms can change over time, see the descriptions of ‘concept creep’
in clinical psychology terminology discussed by Haslam (2016) and McGrath, Randall‐Dzerdz, Wheeler,
Murphy, & Haslam (2019).
With this broader, eclectic meaning of psychopathology in mind, let's examine our four personal
accounts a little closer. In each case, the individual finds what is happening to them distressing, and to
some extent out of their control. Joan is distressed because she appears to have no control over her
moods. She feels depressed, she shouts at her son, and she feels sick when she eats. Peter is plagued by
continually reliving the horrors of a traumatic assault and feels his life is now stuck in a rut and is
worthless. In response to severe bullying at school and then the death of her father, Jo developed
unusual ways of interpreting events around her, hearing voices and experiencing visual hallucinations
—interpretations of the world that many other people might label as crazy. Finally, Erica's behaviour has
become controlled by her need for drugs. She feels out of control and all other activities in her life—
such as her education—are suffering severely because of this.
These four cases are all ones that are likely to be encountered by clinical psychologists and although
very different in their detail, they do all possess some commonalities that might help us to define what
represents a mental health problem. For example, (a) both Joan and Peter experience debilitating distress,
(b) both Joan and Erica feel that important aspects of their life (such as their moods or cravings) are out
of their control and they cannot cope, (c) both Joan and Erica find that their conditions have resulted in
them failing to function properly in certain spheres of their life (e.g., as a mother or as a student), and (d)
Jo's life appears to be controlled by interpretations of the world that are extreme and are probably not
real. As we shall see later, these are all important aspects of psychopathology and define to some extent
what will be the subject matter of clinical practice.
However, deciding what are proper and appropriate examples of psychopathology is not easy. Just
because someone's behaviour deviates from accepted norms or patterns does not mean they are
suffering from a mental health problem, and just because we might use the term ‘crazy’ to describe
someone's behaviour does not mean that it is the product of disordered thinking. Similarly, as we
alluded to earlier, we cannot attempt to define psychopathology on the basis that some ‘normal’
functioning (psychological, neurological, or biological) has gone wrong. This is because (a) we are still
some way from understanding the various processes that contribute to mental health problems, and (b)
many forms of behaviour that require treatment by clinical psychologists are merely extreme forms of
what we would call ‘normal’ or ‘adaptive’ behaviour. For example, we all worry and we all get depressed
at some times, but in most cases these activities do not significantly interfere with our everyday living.
However, for some other people, their experience of these activities may be so extreme or so chronic as
to cause them significant distress and prevent them from undertaking normal daily activities such as
looking after a family or earning a living (Focus Point 1.1).
There has been a general concern in recent decades that the prevalence of mental health
problems is increasing, to almost epidemic proportions. This concern has been expressed in the
media generally (https://www.bbc.co.uk/news/health‐41125009), by blogging health
journalists (https://www.thenational.ae/lifestyle/wellbeing/the‐fear‐factor‐1.640564), by heads
of mental health agencies (e.g., https://www.telegraph.co.uk/health‐fitness/mind/mental‐
health‐crisis‐among‐children‐selfie‐culture‐sees‐cases/), and by health practitioners and
researchers (Davey, 2018a; The Lancet, 2018b).
But is there an epidemic? Well, it's very hard to find longitudinal data to confirm this. In part
this is because reliable data on prevalence rates has been patchy over the last 3–4 decades, and
when data are reported they are often reported in different ways using different methods of
data collection.
The Adult Psychiatric Morbidity Survey carried out in England by the National Health Service
(NHS) uses validated mental health screening and assessment tools to gauge the level of mental
health problems in a sample of the general population, and this survey provides prevalence data
for years 1993, 2000, 2007, and 2014 (NHS Digital, 2016). Key findings suggest that between 2007 and 2014 the proportion of adults aged 16–74 years in the UK with conditions such as anxiety or depression who were accessing mental health treatment increased significantly, from 24% to 39%. In addition, in 2014 around one in six adults met the criteria for a common mental health problem such as anxiety or depression, a figure that has increased only modestly across the four survey years since 1993—and these increases are probably not statistically significant (Spiers et al., 2016). However, what these figures do suggest is that while there may not be a substantial
increase in diagnosable common mental health problems, there does appear to be a sizable
increase in the number of people accessing mental health services for these conditions.
Prevalence of common mental health problems (anxiety or depression) in England between 1993 and 2014 (data
from NHS Digital, 2016).
Similarly, the picture on more severe diagnosable mental health problems such as
schizophrenia, autism, and eating disorders does not obviously indicate a growing mental health
epidemic. The following table shows data on the global prevalence of mental health problems
as collected by the Global Burden of Disease 2017 study (The Lancet, 2018a). To be sure, this
indicates that a significant number of people worldwide (970.8 million) are suffering from a
diagnosable mental health problem, and on current population figures that amounts to one in
seven people globally (14.2%). But a comparison of figures between 1990 and 2017 suggests a
small decrease in reported mental health problems in both males (−2.1%) and females (−3.0%)
over this time. This evidence for a relative stability of mental health problems over time is also
supported by systematic reviews and meta‐analyses, which suggest at best only a modest
increase in the prevalence of mental health problems since at least the 1970s (Richter, Wall,
Bruen, & Whittington, 2019).
Global Prevalence of Mental Health Problems 2017 (The Lancet, 2018a)
Mental Health Problem Prevalence (million)
All mental health problems 970.8
Anxiety disorders 284.3
Depressive disorders 264.4
Substance use disorders 175.5
Bipolar disorder 45.5
Autism spectrum disorders 31.1
Schizophrenia 19.7
Eating disorders 15.8
So what is driving talk of a mental health epidemic? Perhaps people are becoming more aware of the immense scale of existing levels of mental health problems (the NHS Psychiatric Morbidity Survey for 2014 suggests that one in six people in the UK will be suffering a diagnosable common mental health problem at any one time; NHS Digital, 2016). People may
also be growing more aware of when they themselves are suffering mental health symptoms.
They may be more able to identify these symptoms and have a better knowledge of how to
access mental health services for treatment.
In addition, there may be a real growth of mental health problems in specific demographic
groups and a growth in numbers of individuals experiencing only specific conditions. For
example, there may be a growing recognition of mental health problems in children and
adolescents—especially common mental health problems such as anxiety, depression, and self‐
harm (e.g., Creswell, Waite, & Cooper, 2013), and the number of young people experiencing
severe emotional disorders in England has increased from 3.9% in 2004 to 5.8% in 2017 (NHS
Digital, 2017). Also, the 2017 Global Burden of Disease study indicates that anxiety has
overtaken depression as the predominant mental health condition globally—the ‘silent
epidemic’ may have eclipsed the ‘black dog’ (Davey, 2018a). The following figure shows Google Trends results for searches for ‘depression’ (blue line) and ‘anxiety’ (red line) from 2004 to 2019, indicating an increase in interest in the term ‘anxiety’ relative to ‘depression’ over this period.
Before we continue to discuss individual mental health problems in detail, it is important to consider how the way we define these problems has evolved over time.
asylums In previous centuries asylums were hospices converted for the confinement of
individuals with mental health problems.
In Western societies demonology survived as an explanation of mental health problems right up until the eighteenth century, with witchcraft and demonic possession serving as common explanations for psychopathology, and analyses of examples of demonic possession from the Middle Ages have identified symptoms of psychosis, mood disorders, neurosis, and personality disorders in those believed to be possessed (Forcén & Forcén, 2014). Today, demonic possession is still a common explanation of psychopathology in some less developed areas of the world—especially in places such as Haiti and some areas of Western Africa, where witchcraft and voodoo are still important features of the local culture (Desrosiers & Fleurose, 2002). The continued adoption of demonic possession as an explanation of
mental health problems (especially in relation to psychotic symptoms) is often linked to local religious
beliefs (Hanwella, de Silva, Yoosuf, Karunaratne, & de Silva, 2012; Ng, 2007), and may often be
accompanied by exorcism as an attempted treatment—even in individuals with a known history of
diagnosed psychotic symptoms (e.g., Tajima‐Pozo et al., 2011) (Focus Point 1.2).
psychiatry A scientific method of treatment that is based on medicine, the primary approach
of which is to identify the biological causes of psychopathology and treat them with medication
or surgery.
However, despite its obvious importance in developing a scientific view of psychopathology and
providing some influential treatments, the medical model of psychopathology has some important
implications for the way we conceive mental health problems.
First, an obvious implication is that medical or biological causes underlie psychopathology. This is by no means always the case, and bizarre behaviour can develop through perfectly normal learning processes. For example, children with autism or intellectual disabilities often
learn disruptive, challenging or self‐harming behaviours through normal learning processes that have
nothing to do with their intellectual deficits (see Treatment in Practice Box 17.1). Furthermore, in
contrast to the medical model, both psychodynamic and contemporary cognitive accounts of
psychopathology argue that many psychological problems are the result of the individual acquiring
dysfunctional ways of thinking and acting, and they acquire these characteristics through normal,
functional learning processes. In this sense, it is not the individual or any part of their biology that is
dysfunctional, it is the experiences they have had that are dysfunctional and that have led to them thinking and acting in the way they do.
Second, the medical model adopts what is basically a reductionist approach by attempting to reduce the
complex psychological and emotional features of psychopathology to simple biology. If you look at the
personal accounts provided at the beginning of this chapter, it is arguable whether the phenomenology
(i.e., the personal experience of psychopathology) or the complex cognitive factors involved in many
psychological problems can be reduced to simple biological descriptions. Biological reductionism cannot
easily encapsulate the distress felt by sufferers, nor can it easily explain the dysfunctional beliefs and
forms of thinking that are characteristic of many psychopathologies. In addition, complex mental
health problems are often not just biological or even simply reducible to psychological problems and processes; they are also influenced by the socio‐economic situation in which the individual lives (Lund et al.,
2018), their potential for employment and education, and the support they are given that will provide
hope for recovery and support for social inclusion (this broad ranging approach to understanding and
treating mental health problems is known as the recovery model and is discussed in more detail in
Chapter 5, Section 5.3.3). All of these factors arguably contribute to a full understanding and
explanation of psychopathology.
recovery model Broad-ranging treatment approach which acknowledges the influence and
importance of socio-economic status, employment and education and social inclusion in helping
to achieve recovery from mental health problems.
Finally, as we have mentioned already, there is an implicit assumption in the medical model that
psychopathology is caused by ‘something not working properly’. For example, this type of explanation
may allude to brain processes not functioning normally, brain or body biochemistry being imbalanced,
or normal physical development being impaired. This ‘something is broken and needs to be fixed’ view
of psychopathology is problematic for a number of reasons:
1. Rather than reflecting a dysfunction, psychopathology might just represent a more extreme form
of normal behaviour. We all get anxious, we all worry, and we all get depressed. Yet anxiety, worry,
and depression in their extreme form provide the basis of many of the common mental health
problems we will cover in this book. If we take the example of worry, we can all testify to the fact
that we worry about something at some time. However, for some of us it may become such a
prevalent and regular activity that it becomes disabling, and may lead to a diagnosis of generalised
anxiety disorder (GAD, see Chapter 6). Nevertheless, there is no reason to suppose that the
cognitive mechanisms that generate the occasional worry bout in all of us are not the same ones
that generate chronic worry in others (Davey & Meeten, 2016). In this sense, psychopathology can
be viewed as being on a dimension rather than being a discrete phenomenon that is separate from
normal experience, and there is accumulating evidence that common psychopathology symptoms
such as anxiety and depression are on a dimension from normal to distressing, rather than being
qualitatively distinct (e.g., Haslam, Holland, & Kuppens, 2011; Lupien et al., 2017).
2. By implying that psychopathology is caused by a normal process that is broken, imperfect or
dysfunctional, the medical model may have an important influence on how we view people
suffering from mental health problems, and indeed, how they might view themselves. At the very
least it can be stigmatising to be labelled as someone who is biologically or psychologically imperfect,
and people with mental health problems are often viewed as second‐class citizens—even when
their symptoms are really only more prominent and persistent versions of characteristics that we all
possess (see Section 1.4).
Bethlem Hospital One of the first psychiatric hospitals originally established in Moorfields,
London
However, in the nineteenth century there was a gradual movement towards more humane treatments
for individuals in asylums, and these developments were led by a number of important reforming
pioneers. Philippe Pinel (1745–1826) is often considered to be the first to introduce more humane
treatments during his time as the superintendent of the Bicêtre Hospital in Paris. He began by removing
the chains and restraints that had previously been standard ways of shackling inmates and started to
treat these inmates as sick human beings rather than animals. Further enlightened approaches to the
treatment of asylum inmates were pioneered in the US by Benjamin Rush of Philadelphia, and by the
Quaker movement in the UK. The latter developed an approach known as moral treatment, which
abandoned contemporary medical approaches in favour of understanding, hope, moral responsibility,
and occupational therapy (Digby, 1985).
moral treatment Approach to the treatment of asylum inmates, developed by the Quaker
movement in the UK, which abandoned contemporary medical approaches in favour of
understanding, hope, moral responsibility, and occupational therapy.
Even into the twentieth century and up until the 1970s in both the UK and the US, hospitalisation was
usually the norm for individuals with severe mental health problems, and lifelong hospitalisation was not
uncommon for individuals with chronic symptoms. However, it became clear that custodial care of this
kind was not economically viable, nor did it provide an environment in which patients had an
opportunity to improve (Photo 1.1). Because of the growing numbers of inpatients diagnosed with
mental health problems, the burden of care came to rest more and more on nurses and attendants who,
because of lack of training and experience, would resort simply to restraint as the main form of
intervention. This would often lead to deterioration in symptoms, with patients developing what was
called social breakdown syndrome, consisting of confrontational and challenging behaviour,
physical aggressiveness, and a lack of interest in personal welfare and hygiene (Gruenberg, 1980).
Between 1950 and 1970, these limitations of hospitalisation came to be recognised and some attempts
were made to structure the hospital environment for patients. The first of these were known as milieu
therapies, which sought to create a therapeutic community on the ward that would
develop productivity, independence, responsibility, and feelings of self‐respect. This included mutual
respect between staff and patients and the opportunity for patients to become involved in vocational
and recreational activities. Patients exposed to milieu therapy were more likely to be discharged from
hospital sooner and less likely to relapse than patients who had undergone traditional custodial care
(Cumming & Cumming, 1962; Paul & Lentz, 1977). A further therapeutic refinement of the hospital
environment came in the 1970s with the development of token economy programmes (Ayllon &
Azrin, 1968; see Hackenberg, 2018, for a review of research and application). These were programmes
based on operant reinforcement, where patients would receive tokens (rewards) for emitting desired
behaviours. These desired behaviours would usually include social and self‐help behaviours (e.g.,
communicating coherently to a nurse or other patient, or washing, or combing hair), and tokens could
subsequently be exchanged for a variety of rewards such as chocolate, cigarettes, and hospital privileges.
A number of studies have demonstrated that token economies can have significant therapeutic gains.
For example, Gripp and Magaro (1971) showed that patients in a token economy ward improved
significantly more than patients in a traditional ward, and Gershone, Errickson, Mitchell, and Paulson
(1977) found that patients in a token economy scheme were better groomed, spent more time in
activities and less time in bed, and made fewer disturbing comments than patients on a traditional ward.
Patients on token economy schemes also earned discharge significantly sooner than patients who were not
on such a scheme or who had been involved in a milieu therapy programme (Hofmeister, Schneckenbach, &
Clayton, 1979; Paul & Lentz, 1977). However, despite the apparent success of token economies, their
use in the hospital setting has been in serious decline since the early 1980s (Dickerson, Tenhula, &
Green‐Paden, 2005). There were a number of reasons for this decline, and these include the legal and
ethical difficulties of withholding desired materials and events so they can be used as reinforcers, and a
lack of consensus on whether behaviours nurtured in token economy schemes were maintained after the
scheme ended and whether they generalised to other environments and settings (Davey, 1998; Glynn,
1990).
PHOTO 1.1 This photograph shows a ward in Cardiff City Mental Hospital, Whitchurch, UK, in the early twentieth
century. Beds are crowded close together allowing little personal space for patients, who were often hospitalised for much of
their life.
https://www.bbc.co.uk/news/uk‐wales‐south‐east‐wales‐35766956.
milieu therapies The first attempts to structure the hospital environment for patients, which
attempted to create a therapeutic community on the ward in order to develop productivity,
independence, responsibility and feelings of self-respect.
token economy A reward system which involves participants receiving tokens for engaging in
certain behaviours, which at a later time can be exchanged for a variety of reinforcing or
desired items.
In 1963, the US Congress passed the Community Mental Health Act, which specified that, rather than be
detained and treated in hospitals, people with mental health problems had the right to receive a broad
range of services in their communities. These services included outpatient therapy, emergency care,
preventative care, and aftercare. Growing concerns about the rights of mental health patients and a
change in social attitudes away from the stigma associated with mental health problems meant that
other countries around the world swiftly followed suit in making mental health treatment and aftercare
available in the community (Hafner & van der Heiden, 1988). These events led to the development of a
combination of services usually termed assertive community treatment or assertive outreach, and, in the
US alone, by the 1990s this had led to around a 10‐fold decrease in the number of people being treated
in hospital for mental health problems (Torrey, 2001).
Given these developments, treatment and care of individuals diagnosed with severe mental health
problems has moved away from long‐term hospitalisation to various forms of community care.
However, the psychiatric hospital is still an important part of the treatment picture for those displaying
severe and distressing symptoms—especially since it will often be the environment in which treatment
takes place for an individual's first acute experience (e.g., a first psychotic episode). Even so, length of
stay in hospital for individuals has been significantly reduced as a result of the development of more
effective early intervention treatments and supportive community care and outreach programmes, and
even for individuals diagnosed with a serious mental health problem, length of stay typically ranges
from a few days to just a few weeks depending on the nature of the diagnosis (Jacobs et al., 2015).
Nevertheless, even when living back in their communities, it was clear that many individuals diagnosed
with mental health problems would often need support and supervision. They would need help
maintaining their necessary medication regime, finding and keeping a job or applying for and securing
welfare benefits. They may also need help with many aspects of normal daily living that others would
take for granted, such as personal hygiene, shopping, feeding themselves, managing their money, and
coping with social interactions and life stressors. Today in the UK, these outreach services are delivered
by a Community Mental Health Team (CMHT) that can include psychiatrists, clinical
psychologists, social workers, and nurses, and in more complex cases a Care Programme Approach
(CPA) might be applied where an individual care plan is developed to provide ongoing support (NHS
England, 2019). Many mental health services also have Assertive Outreach Teams whose function is
to help individuals with mental health problems who find it difficult to work with mental health services
or have related problems such as violence, self‐harm, homelessness, or substance abuse. Assertive
outreach staff would expect to meet their clients in their own environments, whether that is a home,
café, park, or street, with the aim of building up a long‐term relationship between the client and mental
health services (Photo 1.2). (The video at https://www.youtube.com/watch?v=zBcmTUMJZfI shows how
crisis mental health services are delivered in parts of London using dedicated teams of mental health
professionals; see also Treatment in Practice 1.1.)
PHOTO 1.2 Assertive Outreach staff try to meet their clients in their own environments, and for many homeless
individuals suffering psychotic symptoms this may mean parks, streets, and cafes. The aim of such programmes is to help
individuals with their medication regimes, provide assistance in dealing with everyday life and its stressors and securing
welfare benefits. These programmes also aim to help build a long‐term relationship between the individual and local mental
health services.
https://www.julianhouse.org.uk/life‐as‐an‐outreach‐worker.
1.1.4 Summary
This section has provided an historical perspective on the way in which people have attempted to
understand and explain mental health problems and has also described how people with mental health
problems have been treated over the centuries. Today, most models of mental health provision espouse
compassion, support, understanding, and empowerment for individuals suffering mental health
problems (Repper & Perkins, 2006), but it has been a long journey to get to this point. It has required us
to understand that individuals with mental health problems are not ‘possessed’, they do not need to have
‘demons’ exorcised or driven from their bodies by physical force, they do not need to be incarcerated in
asylums, nor do they need lifelong custodial care in psychiatric institutions. However, while most of
the physical constraints and impositions placed on individuals with mental health problems have been
lifted, attitudes to mental health problems have been slower to evolve, and the stigma and discrimination
associated with mental health problems remain a significant issue in need of resolution (see Section 1.4).
SELF‐TEST QUESTIONS
Why was demonic possession such a popular way of explaining psychopathology in
historical times?
What are the pros and cons of the medical model of psychopathology?
How has care for people with mental health problems developed from the times of
asylums to the present day?
SECTION SUMMARY
service user groups Groups of individuals who are end users of the mental health services
provided by, for example, government agencies such as the NHS.
So, when considering how to define psychopathology we must consider not only whether a definition is
useful in the scientific and professional sense but also whether it provides a definition that will minimise
the stigma experienced by sufferers and facilitate the support they need to function as inclusive members
of society. Let us bear this in mind as we look at some potential ways of identifying and defining
psychopathology.
FIGURE 1.1 This figure represents a normal distribution curve for IQ scores. From this distribution it can be seen that
68% of people score between 84 and 116 points, while only 2.27% of people have an IQ score below 68 points. This
graph suggests that around 2–3% of the population will have IQs lower than the 70 points that is the diagnostic criterion
for intellectual development disorder. However, the problem with basing a definition of psychopathology on scores that
deviate substantially from the norm is that a high IQ is also very rare: only 2.27% of the population have an IQ score
greater than 132 points.
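The percentages quoted in Figure 1.1 follow directly from the normal distribution. A minimal Python sketch, assuming IQ scores are normally distributed with a mean of 100 and a standard deviation of 16 (the scaling implied by the figure's cut-off points), reproduces them:

```python
from math import erf, sqrt

def norm_cdf(x, mean=100.0, sd=16.0):
    """Cumulative probability of a normal distribution, via the error function."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

# Proportion scoring between 84 and 116 (within one SD of the mean)
middle = norm_cdf(116) - norm_cdf(84)    # ~0.68
# Proportion scoring below 68 (more than two SDs below the mean)
low_tail = norm_cdf(68)                  # ~0.0227
# Proportion scoring above 132 (more than two SDs above the mean)
high_tail = 1.0 - norm_cdf(132)          # ~0.0227
print(round(middle, 4), round(low_tail, 4), round(high_tail, 4))
```

The symmetry of the low and high tails is exactly the point the caption makes: statistical rarity alone cannot define psychopathology, because it flags very high scorers just as readily as very low scorers.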
Finally, emotions such as anxiety and depression that underlie the most common mental health
problems are not statistically rare emotions. They are experienced almost daily by most people, and this
represents another reason why deviation from the statistical norm does not make a good basis on which
to define psychopathology.
ataque de nervios A form of panic disorder found in Latinos from the Caribbean.
Psychopathology can manifest itself in different forms in different cultures, and this can lead to
some disorders that are culture‐specific (i.e., have a set of symptoms which are found only in
that particular culture). Two such examples are Ataque de Nervios, which is an anxiety‐based
disorder found almost exclusively amongst Latinos from the Caribbean (Salman et al., 1998),
and Seizisman, a state of psychological paralysis found in the Haitian community (Nicolas et al.,
2006).
Ataque de Nervios
Its literal translation is ‘attack of nerves’, and symptoms include trembling, attacks of crying,
screaming uncontrollably, and becoming verbally or physically aggressive. In some cases, these
primary symptoms are accompanied by fainting bouts, dissociative experiences, and suicide
attempts.
Research on Ataque de Nervios has begun to show that it is found predominantly in women, in those
over 45 years of age, and in people from low socio‐economic backgrounds or disrupted marriages
(Guarnaccia, De La Cancela, & Carrillo, 1989). The symptoms appear to resemble many of
those found in panic disorder, but with a coexisting affective disorder characterised by
emotional lability and anger (Salman et al., 1998).
From this research, it appears that Ataque de Nervios may be a form of panic disorder brought on
by stressful life events (such as economic or marital difficulties), but whose expression is
determined by the social and cultural norms within that cultural group. In particular, Latino
cultures place less emphasis on self‐control and emotional restraint than other Western cultures,
and so the distress of panic disorder in Latinos tends to be externalised in the form of
screaming, uncontrolled behaviour and aggression. In contrast, in Western cultures the distress
of panic disorder is usually coped with by adopting avoidance and withdrawal strategies—
hence the common diagnosis of panic disorder with agoraphobia.
Seizisman
The name literally means ‘seized‐up‐ness’ and refers to a state of paralysis usually brought on
by rage, anger, or sadness, and in rare cases happiness. Events that can cause Seizisman include
a traumatic event (such as receiving bad news), a family crisis, and verbal insults from others.
Individuals affected by the syndrome become completely dysfunctional, disorganised, confused, and
unresponsive to their surroundings (Laguerre, 1981). The following quote
illustrates how viewing traumatic events while working within a Haitian community that is
attuned to the symptoms of this syndrome can actually give rise to these culture‐bound
symptoms:
‘I remember over and over, when I was a UN Human Rights Monitor and I was down there in
Port‐au‐Prince viewing cadaver after cadaver left by the Haitian army, people would say, “Now
go home and lie down or you will have Seizisman”. And I never really had a problem, you
know? I never threw up or fainted no matter what I saw, but I started to feel “stressed,” which is
an American illness defined in an American way. After viewing one particularly vile massacre
scene, I went home and followed the cultural model I had been shown. I lay down, curled up,
and went incommunicado. “Ah‐hah! Seizisman!” said the people of my household’ (From
Nicolas et al., 2006, p. 705).
1.2.5 Summary
None of these individual ways of defining psychopathology is ideal. They may fail to include examples
of behaviour that we intuitively believe are representative of mental health problems (the distress and
impairment approach), they may include examples we intuitively feel are not examples of
psychopathology (e.g., the statistical approach, the deviation from social norms approach), or they may
represent forms of categorisation that would lead us simply to imposing stigmatising labels on people
rather than considering their individual needs (e.g., the statistical approach). In practice, classification
schemes tend to use an amalgamation of all these approaches with emphasis being placed on individual
approaches depending on the nature of the symptoms and disorder being classified.
SELF‐TEST QUESTIONS
What are the problems with using the normal curve to define psychopathology?
How do cultural factors make it difficult to define psychopathology in terms of deviations
from social norms?
What are the pros and cons of using maladaptive behaviour or distress and impairment as
means of defining psychopathology?
SECTION SUMMARY
Genetics
Genetics is a fast growing and important branch of science, and collaborations such as the Human
Genome Project are attempting to identify those genes that may be responsible for human
characteristics, disorders, and diseases (Collins & McKusick, 2001; Lander, 2011). People are biological
organisms who come into the world with a biological substructure that will be significantly determined
by the genes they have inherited from their ancestors. It is therefore almost a truism to say that
behaviour—and mental health problems too—will have at least some genetic component. In
some cases the genetic component may be extremely influential (e.g., in Huntington's disease—see Focus
Point 1.4); in others it may be a necessary component but not always a sufficient one to trigger a
mental health problem; and in still other cases the genetic component may be relatively nonspecific and
less important to the development of a mental health problem than the experiences that an individual
may have during their lifetime.
Huntington's disease is a degenerative neurological condition that can often give rise to
dementia, and it is caused by a dominant mutation in a gene on the fourth chromosome. Each
person has two copies of this gene (each one called an allele), one inherited from each parent.
In the case of Huntington's disease an individual needs only one copy of the mutant allele to
develop the disease. Parents randomly give one of their two alleles to their offspring, so a child
of a parent who has Huntington's disease has a 50% chance of inheriting the mutant version of
the gene from their parent. A grandchild of a person with Huntington's disease has a 25%
chance of inheriting the mutant gene and so developing the disease.
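The 50% and 25% risks described above are just repeated, independent 50/50 transmissions at each generation. A brief Python sketch of that arithmetic (an illustration of the probabilities only, not a clinical risk calculator):

```python
def dominant_allele_risk(generations_removed):
    """Chance of inheriting a single dominant mutant allele from an
    affected ancestor `generations_removed` steps back, assuming each
    parent-to-child transmission is an independent 50/50 draw and no
    other relatives carry the allele."""
    return 0.5 ** generations_removed

print(dominant_allele_risk(1))  # child of an affected parent: 0.5
print(dominant_allele_risk(2))  # grandchild: 0.25
```

Because one copy of the dominant allele is sufficient for the disease, the inheritance risk and the disease risk coincide here, which is precisely what makes Huntington's disease such a clear-cut case compared with polygenic mental health problems.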
The gene for Huntington's disease is dominant, so inheriting a single copy of the mutant allele from
either parent is sufficient for the disease to develop. In this case, inheriting the mutant gene is the primary factor in the
affected individual developing the disease. In other mental health problems where genetic
factors have been found to be important (e.g., schizophrenia), inheritance is only one of a
number of factors that has been found to contribute to the development of symptoms, and this
has led researchers to advocate a diathesis–stress model in which inherited factors provide a
vulnerability to develop symptoms, but these symptoms do not appear unless the individual
encounters stressful life experiences.
The way in which genetics might influence psychopathology can be studied in a variety of ways: (a) by
studying psychopathology symptoms across different family members who may differ in the extent to
which they are genetically related to each other. These studies are known as concordance studies,
where the probability of symptoms occurring can be related to the degree to which different family
members share genes in common; (b) twin studies compare the probability with which monozygotic
(MZ) and dizygotic (DZ) twins both develop psychopathology symptoms. MZ twins (identical twins)
share 100% of their genetic material, whereas DZ twins (nonidentical twins) share on average only 50% of
theirs, so a genetic explanation of psychopathology would predict that there would be greater
concordance in the diagnosis of a mental health problem in MZ than in DZ twins (see Chapter 8 for
some examples of this approach); and (c) because both families and twins are likely to share similar
environments as well as genes, interpretation of family and twin studies can be difficult; however, many
of these difficulties can be overcome by studying the offspring of MZ and DZ twins
rather than the twins themselves (Gottesman & Bertelsen, 1989; see McAdams et al., 2018, for methods
of exploring intergenerational genetic associations). If one MZ twin develops psychopathology
symptoms and the other does not, any genetic element in symptoms should still show up in the children
of either of the two MZ twins. That is, the children of the MZ twins should still exhibit similar rates of
risk for the psychopathology (because they have inherited the same predisposition)—even though one of
their parents developed the symptoms and the other did not.
concordance studies Studies designed to investigate the probability with which family
members or relatives will develop a psychological disorder depending on how closely they are
related – or, more specifically, how much genetic material they have in common.
twin studies Studies in which researchers have compared the probability with which
monozygotic (MZ) and dizygotic (DZ) twins both develop symptoms indicative of a
psychopathology in order to assess genetic contributions to that psychopathology.
However, in the vast majority of psychopathologies we will describe in this book, people do not solely
inherit a mental health problem through their genes; a mental health problem develops because of an
interaction between a genetic predisposition and our interactions with the environment (Shenk, 2010).
This is basically what is known as a diathesis–stress model of psychopathology, where ‘diathesis’
refers to an inherited predisposition and ‘stress’ refers to a variety of experiences that may trigger the
inherited predisposition (this is a model that is particularly important in the understanding of psychosis;
see Chapter 8, and the video at https://www.youtube.com/watch?v=yuMi50PrwIM). This interaction between
genes and experiences gives rise to the notion of heritability. Heritability is a measure of the degree to
which symptoms can be accounted for by genetic factors; this ranges from 0 to 1, and the nearer this
figure is to 1, the more important are genetic factors in explaining the symptoms. In the case of
Huntington's disease described in Focus Point 1.4, the heritability of Huntington's symptoms is very
close to 1, because inheriting the dominant gene for this disorder is sufficient to ensure that the
individual will develop the disease.
diathesis-stress model Model that suggests a mental health problem develops because of an
interaction between a genetic predisposition and our interactions with the environment.
heritability A measure of the degree to which symptoms can be accounted for by genetic
factors. It ranges from 0 to 1, and the nearer this figure is to 1, the more important are genetic
factors in explaining the symptoms.
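The text does not give a formula for heritability, but one common textbook estimate derived from the twin studies described earlier is Falconer's formula, h² ≈ 2(rMZ − rDZ), which doubles the gap between MZ and DZ twin correlations. A sketch of that simplified estimate, using hypothetical correlation values for illustration:

```python
def falconer_heritability(r_mz, r_dz):
    """Falconer's estimate of heritability from twin correlations:
    h^2 = 2 * (r_MZ - r_DZ), clamped to the 0-1 range over which
    heritability is defined. This is a simplification that assumes,
    among other things, equally similar environments for MZ and DZ pairs."""
    return max(0.0, min(1.0, 2.0 * (r_mz - r_dz)))

# Hypothetical twin correlations for illustration only:
print(falconer_heritability(0.50, 0.25))  # 0.5
```

The logic follows directly from the shared-genes figures in the text: MZ pairs share twice as much genetic material as DZ pairs, so the excess MZ similarity, doubled, is attributed to genes.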
Not only do genetic approaches to psychopathology attempt to estimate the heritability of individual
disorders, the area of molecular genetics also seeks to identify individual genes that may be involved
in transmitting psychopathology symptoms (see Uher & Zwicker, 2017, for a review of genetic
approaches to understanding psychopathologies). One method of identifying individual genes that has
been particularly applied to psychopathology is genetic linkage analysis. Linkage analysis works by
comparing the inheritance of characteristics for which gene location is known (e.g., eye colour) with the
inheritance of psychopathology symptoms. For example, if the inheritance of eye colour follows the
same pattern within a family as particular psychopathology symptoms, then it can reasonably be
concluded that the gene controlling the psychopathology symptoms can probably be found on the same
chromosome as the gene controlling eye colour. While such methods are extremely valuable, it should
be pointed out that it is very rare that psychopathology symptoms can be traced to an individual gene,
and very often symptoms are associated with multiple genes, which testifies to the complex and often
heterogeneous nature of mental health problems (e.g., Badner & Gershon, 2002; Faraone et al., 2007;
Levinson, Lewis, & Wise, 2002). Finally, an alternative means of identifying psychopathology‐relevant
genes is to use nonhuman animals. For example, researchers can manipulate specific genes in animals
with some accuracy, and in studies with mice can even delete individual genes. This then enables the
researcher to determine whether that gene is linked to any changes in the animal's behaviour that might
be indicative of psychopathology (e.g., by observing more anxious behaviour) (Gross et al., 2002).
molecular genetics Genetic approach that seeks to identify individual genes that may be
involved in transmitting psychopathology symptoms.
Neuroscience
The neuroscience paradigm seeks an understanding of psychopathology by identifying aspects of the
individual's biology that may contribute to symptoms. The main focus of this paradigm is on brain
structure and function, although the broader activity of the neuroendocrine system has also been
implicated in some psychopathology symptoms, especially mood disorders (the neuroendocrine system
involves interactions between the brain and the endocrine system that produces hormone secretions in
the body).
corpus callosum A set of nerve fibres which connects the two mirror-image hemispheres of
the brain.
cerebral cortex The outer, convoluted area of the brain.
temporal lobe The areas of the brain that lie at the side of the head behind the temples and
which are involved in hearing, memory, emotion, language, illusions, tastes and smells.
frontal lobes One of four parts of the cerebrum that control voluntary movement, verbal
expressions, problem solving, will power and planning.
FIGURE 1.3 The neuroanatomy of the brain. (a) The cerebral cortex. (b) The limbic system.
A further set of brain areas that are often implicated in psychopathology are collectively known as the
limbic system. The limbic system comprises the hippocampus, mammillary body, amygdala,
hypothalamus, fornix, and thalamus. It is situated beneath the cerebral cortex (see Figure 1.3) and is
thought to be critically involved in emotion and learning. For example, the hippocampus is involved
in spatial learning and the amygdala is an important region coordinating attention to emotionally‐
relevant stimuli (e.g., threatening or fear‐relevant stimuli). Because of its function in regulating
emotional responses, the amygdala is an important brain structure in understanding many aspects of
psychopathology. It is involved in the formation and storage of emotion‐relevant stimuli and provides
feedback to the thalamus that results in appropriate motor action (Del Casale et al., 2012). Because of
this role, the amygdala is important in activating phobic fear (Ahs et al., 2009), and depressed
individuals show more activity in the amygdala when viewing emotional stimuli than nondepressed
individuals (Sheline et al., 2001).
limbic system A brain system comprising the hippocampus, mammillary body, amygdala,
hypothalamus, fornix and thalamus. It is situated beneath the cerebral cortex and is thought to
be critically involved in emotion and learning.
hippocampus A part of the brain which is important in adrenocorticotropic hormone
secretion and is also critical in learning about the context of affective reactions.
amygdala The region of the brain responsible for coordinating and initiating responses to fear.
Brain neurotransmitters
These are the chemicals that help neurones to communicate with each other and thus are essential
components of the mechanisms that regulate efficient and effective brain functioning. During synaptic
transmission, neurones release a neurotransmitter that crosses the synapse and interacts with receptors
on neighbouring neurones, and most neurotransmitters relay, amplify and modify signals between
neurones (see Figure 1.4). There are many different types of neurotransmitters, which can be grouped
according to either their chemical structure or their function, and a number of different
neurotransmitters have been implicated in psychopathology, including dopamine, serotonin,
norepinephrine, and gamma‐aminobutyric acid (GABA). Abnormalities in levels of serotonin
and norepinephrine have been implicated in the symptoms of mood disorders (see Chapter 7),
dopamine is central to important theories of schizophrenia and psychotic symptoms (see Chapter 8,
Section 8.5), and norepinephrine and GABA may play a role in anxiety symptoms (Kaur & Singh,
2017; Liu, Zhao, & Guo, 2018; Monaco, Coley, & Gao, 2016). Even so, the functions of
neurotransmitters are often not simple or easy to define. For example, dopamine has many functions in
the brain, including important roles in regulating voluntary movement, motivation, and reward and is
critically involved in mood, attention, and learning. Similarly, early theories of the role of
neurotransmitters in psychopathology symptoms tended to assume that symptoms were caused by either
too little or too much of a particular neurotransmitter. This picture, however, is much too simple, and
more recent theories suggest that symptoms may be associated with much more complex interactions
between different neurotransmitters (e.g., Carlsson et al., 2001; Devor et al., 2017).
serotonin An important brain neurotransmitter where low levels are associated with
depression.
Summary
Most chapters of this book have a section on biological explanations of psychopathology where
explanations of the causes of symptoms are discussed in terms of genetics, brain structure and function,
and brain neurotransmitters. Because behaviour and thought cannot occur in the absence of a
biological substrate, it is clear that biological explanations of psychopathology will be highly relevant.
They will tell us whether all or some of the symptoms of a mental health problem are inherited or not,
and they will also provide us with information about whether abnormalities in brain function or
neurotransmitter activity are associated with psychopathology. There are some clear advantages to the
biological approach—especially in terms of treatments. One prominent example is that if we can
identify associations between psychopathology and imbalances in neurotransmitters, then we can
develop pharmaceutical products that might resolve this imbalance—and this has been particularly the
case with mood disorders and psychotic symptoms. However, mental health problems cannot always be
reduced simply to biological descriptions, and a full understanding of the causes and experience of
mental health problems will require description and explanation at other levels (e.g., how a person's
experiences influence their thoughts and behaviour, how their interpretation of events affects their
emotions, and how distress is experienced and manifested). We discuss some of these alternative—but
complementary—paradigms later in the chapter. For a fuller introductory coverage of neuroscience,
the brain and behaviour, see Ward and King (2018).
Sigmund Freud An Austrian neurologist and psychiatrist who founded the psychoanalytic
school of psychology.
Freud used the concept of the id to describe innate instinctual needs—especially sexual needs. He noted
that from a very early age, children obtained pleasure from nursing, defecating, masturbating, and other
‘sexually’ related activities and that many forms of behaviour were driven by the need to satisfy the
needs of the id.
Freud argued that, as we grow up, it becomes apparent to us that the environment itself will not satisfy
all our instinctual needs, and we develop a separate part of our psychology known as the ego. This is a
rational part of the psyche that attempts to control the impulses of the id, and ego defence
mechanisms develop by which the ego attempts to control unacceptable id impulses and reduce the
anxiety that id impulses may arouse.
ego In psychoanalysis, a rational part of the psyche that attempts to control the impulses of the
id.
ego defence mechanisms Means by which the ego attempts to control unacceptable id
impulses and reduce the anxiety that id impulses may arouse.
superego Key concept in Sigmund Freud’s psychoanalytic theory. The superego develops out
of both the id (innate instinctual needs) and ego (a rational part of the psyche that attempts to
control the impulses of the id), and represents our attempts to integrate ‘values’ that we learn
from our parents or society.
The superego develops out of both the id and ego, and represents our attempts to integrate ‘values’
that we learn from our parents or society. Freud argued that we will often judge ourselves by these values
that we assimilate, and if we think our behaviour does not meet the standards implicit in these values, we
will feel guilty and stressed.
According to Freud, the id, ego, and superego are often in conflict, and psychological health is
maintained only when they are in balance. However, if these three factors are in conflict then behaviour
may begin to exhibit signs of psychopathology. Individuals attempt to control conflict between these
factors and also reduce stress and conflict from external events by developing defence mechanisms,
and Table 1.1 describes some of these defence mechanisms together with some examples of how they
are presumed to prevent the experience of stress and anxiety.
A further factor that Freud believed could cause psychopathology was how children negotiated various
stages of development from infancy to maturity. He defined a number of important stages through
which childhood development progressed, and each of these stages was named after a body area or
erogenous zone. If the child successfully negotiated each stage then this led to personal growth and a
psychologically healthy person. If, however, adjustment to a particular stage was not successful, then the
individual would become fixated on that early stage of development. For example, Freud labelled the
first 18 months of life as the oral stage because of the child's need for food from the mother. If the
mother fails to satisfy these oral needs, the child may become fixated at this stage and in later life
display ‘oral stage characteristics’ such as extreme dependence on others. Other stages of development
include the anal stage (18 months to 3 years), the phallic stage (3–5 years), the latency stage (5–12 years),
and the genital stage (12 years to adulthood).
oral stage According to Sigmund Freud, the first 18 months of life based on the child’s need
for food from the mother. If the mother fails to satisfy these oral needs, the child may become
fixated at this stage and in later life display ‘oral stage characteristics’ such as extreme
dependence on others.
learning theory The body of knowledge encompassing principles of classical and operant
conditioning (and which is frequently applied to explaining and treating psychopathology).
classical conditioning The learning of an association between two stimuli, the first of
which (the conditioned stimulus, CS) predicts the occurrence of the second (the unconditioned
stimulus, UCS).
FIGURE 1.5 Classical conditioning. (a) Before conditioning takes place, Pavlov's dog salivates only to the
presentation of food and not to the presentation of the bell; (b) pairing the bell with food then enables
the dog to learn to predict food whenever it hears the bell; and (c) this results in the dog subsequently
salivating whenever it hears the bell. This type of learning has frequently been used to explain
psychopathology, and one such example is the acquisition of specific phobias where the phobic stimulus
(the CS) elicits fear because it has been paired with some kind of trauma (the UCS) (see Figure 6.1).
operant conditioning The modification of behaviour as a result of its consequences.
Rewarding consequences increase the frequency of the behaviour; punishing consequences
reduce its frequency.
These two forms of learning have been used to explain a number of examples of psychopathology.
First, classical conditioning has been used to explain the acquisition of emotional disorders including
many of those with anxiety‐based symptoms (see Chapter 6). For example, some forms of specific
phobias appear to be acquired when the sufferer experiences the phobic stimulus (the CS) in association
with a traumatic event (the UCS), and such experiences might account for the acquisition of dog phobia
(in which dogs have become associated with, for example, being bitten or chased by a dog), accident
phobia (in which travelling in cars has become associated with being in a traumatic car accident), and
dental phobia (when being at the dentist has become associated with a traumatic dental experience)
(Davey, 1989; Doogan & Thomas, 1992; Kuch, 1997). Classical conditioning processes have also been
implicated in a number of other forms of psychopathology, including the acquisition of PTSD (see
Chapter 6), the acquisition of paraphilias (see Chapter 11), and substance dependency (see Chapter 9).
Operant conditioning has been used extensively to explain why a range of psychopathology‐relevant
behaviours may have been acquired and maintained. Examples you will find in this book include
learning approaches to understanding the acquisition of bizarre behaviours in schizophrenia (Ullman
& Krasner, 1975), how the stress‐reducing or stimulant effects of nicotine, alcohol, and many illegal
drugs may lead to substance dependency (e.g., Runegaard, Jensen, Wörtwein, & Gether, 2018; Schacter,
1982), how hypochondriacal tendencies and somatoform disorders may be acquired when a child's
illness symptoms are reinforced by attention from parents (Latimer, 1981), and how the disruptive, self‐
harming, or challenging behaviour exhibited by individuals with intellectual or developmental
disabilities may be maintained by attention from family and carers (Machalicek et al., 2014).
PHOTO 1.3 Operant Conditioning. In operant conditioning, the rat learns to press the lever in this Skinner Box because
it delivers food, and food acts to reinforce that behaviour so that it occurs more frequently in the future (known as operant
reinforcement). Operant reinforcement has been used to explain how many behaviours that are typical of psychopathology are
acquired and maintained. That is, many bizarre and disruptive behaviours may be acquired because they actually have
positive or rewarding outcomes.
The behavioural approach led to the development of important behavioural treatment methods,
including behaviour therapy and behaviour modification. For example, if psychopathology is
learned through normal learning processes, then it should be possible to use those same learning
processes to help the individual ‘unlearn’ any maladaptive behaviours or emotions. This view enabled
the development of treatment methods based on classical conditioning principles (such as flooding,
systematic desensitisation, aversion therapy, see Chapter 4) and operant conditioning principles (e.g.,
functional analysis, token economies, see Chapter 4). Furthermore, learning principles could be used to
alter psychopathology symptoms even if the original symptoms were not necessarily acquired through
conditioning processes themselves, and so the behavioural approach to treatment had a broad appeal
across a very wide range of symptoms and disorders.
behaviour therapy A term currently used for all interventions that attempt to change the
client’s behaviour (and have largely been based on principles from learning theory).
cognitive behaviour therapy (CBT) An intervention for changing both thoughts and
behaviour. CBT represents an umbrella term for many different therapies that share the
common aim of changing both cognitions and behaviour.
As successful as the cognitive approach seems to have been in recent years, it too has some
limitations. For example, rather than being a cause of psychopathology, dysfunctional
thoughts and beliefs may themselves simply be another symptom of psychopathology.
For example, we have very little knowledge at present about how dysfunctional thoughts and beliefs
develop. Are they the product of childhood experiences? Do they develop from the behavioural and
emotional symptoms of psychopathology (i.e., do depressed people think they are worthless because of
their feelings of depression)? Or are they merely post hoc constructions that function to help the
individual rationalise the way they feel? These are all potentially fruitful areas for future research.
empathy An ability to understand and experience a client’s own feelings and personal
meanings, and a willingness to demonstrate unconditional positive regard for the client.
unconditional positive regard Valuing clients for who they are without judging them.
As we said earlier, this type of approach to psychopathology places little emphasis on how
psychopathology was acquired but instead tries to alleviate psychopathology by moving the individual from
one phenomenological perspective (e.g., one that contains fears and conflicts) to another (e.g., one that
enables the client to view themselves as a worthy, respected, and achieving individual). Approaches such
as humanistic and existentialist ones are difficult to evaluate. For example, some controlled studies have
indicated that clients undergoing client‐centred therapy tend to fare no better than those undergoing
nontherapeutic control treatments (Greenberg, Watson, & Lietaer, 1998; Patterson, 2000), whereas some
other studies suggest a significant effectiveness of person‐centred counselling over a 5‐year period when
compared with a waiting list control group (Gibbard & Hanley, 2008). Even so, exponents of existential
therapies believe that experimental methodologies are inappropriate for estimating the effectiveness of
such therapies, because such methods either dehumanise the individuals involved or are incapable of
measuring the kinds of existential benefits that such approaches claim to bestow (May & Yalom, 1995;
Walsh & McElwain, 2002). Nevertheless, such approaches to treatment are still accepted as having some
value and continue to be used, at least in part, by clinical psychologists, counselling psychologists, and
psychotherapists.
Summary
The four psychological paradigms we have discussed in this section have tended to evolve historically
from explanatory paradigms that have represented different ‘schools’ of psychology generally, but all
have a relevant place in explaining psychopathology—either at different levels of explanation (e.g.,
cognitive vs behavioural), or using different philosophical approaches to explaining human behaviour
and psychopathology (e.g., the hypothetical constructs developed by the psychoanalytical approach vs.
the learning paradigm developed by behaviourist approaches). In addition to pure psychological
paradigms, clinical psychologists are continually developing new ways of conceptualising and studying
the factors that influence the development of mental health problems, and one approach of growing
importance is to consider how sociocultural factors might affect the acquisition of psychopathology.
Some examples of this latter approach are discussed in Focus Point 1.5.
There is a growing realisation that sociocultural factors can influence both the acquisition of
mental health problems and the way that psychopathology is expressed. These factors include
gender, culture, ethnicity, and socioeconomic factors such as poverty and deprivation, and we
discuss some examples of these here.
Gender
Your gender is a significant factor in whether you are likely to be diagnosed with a
particular mental health problem. For example, the prevalence of major depression is twice as
high in women as it is in men (Kuehner, 2016); women are significantly more likely to develop
anxiety‐based problems such as social anxiety disorder, panic disorder, or GAD, and to suffer
trauma‐ and stress‐related mental health problems (see Chapter 6). Women are also significantly
more likely to be diagnosed with eating disorders such as anorexia nervosa or bulimia nervosa
(Zayas et al., 2018; see Chapter 10), but males are more likely to be diagnosed with conduct
disorders, ADHD (see Chapter 16), and antisocial personality disorder (Chapter 12). How
gender differentially affects the acquisition of these various disorders is far from clear and could
be linked to gender‐based biological differences, for example, sex hormone differences (Li &
Graham, 2017), to factors associated with the gender roles that males and females adopt in
different societies (e.g., women's roles in society may be more stressful than men's and so
increase the risk of mental health problems) (Mayor, 2015), or differences in gender‐based
coping practices (e.g., women ruminate more than men, whereas men frequently react to stress
by distracting themselves; Just & Alloy, 1997). A comprehensive discussion of the many possible
explanations for gender differences in diagnosis and prevalence rates is provided by Hartung
and Lefler (2019).
Culture
The culture in which you live can also be a factor that will determine whether you will be
diagnosed with a particular mental health problem and also how that problem will manifest
itself. For example, prevalence rates for many common mental health problems differ
significantly across the world. In the case of major depression, prevalence rates can vary
between 1.5% and 19% (Weissman et al., 1996), with some of the highest prevalence estimates
for depression being found in some of the wealthiest countries in the world (Kessler & Bromet,
2013). These cultural‐demographic differences may be caused by the stigma associated with
reporting symptoms in some societies, by cultural differences in diagnosing symptoms, by
depression being expressed in more physical terms in some societies (called somatisation), or
simply by cultural differences in the way people report their depression or by the methodologies
used to collect data about depression (Compton et al., 1991; Huang, Beshai, & Mabel, 2016;
Patten, 2003). Eating disorders are another example where prevalence rates are higher in most
Western cultures but in the past have only rarely been reported in less socio‐economically
developed societies (Keel & Klump, 2003). However, in recent times, ‘Westernisation’,
industrialisation, and urbanisation of underdeveloped countries do seem to be associated with
rises in the levels of eating disorders reported in those countries (Pike & Dunne, 2015). Finally,
some combinations of mental health problems may be found only in certain specific cultures
and may be examples of the culturally specific ways in which stress and trauma are manifested.
Two specific examples of this are provided in Focus Point 1.3.
Ethnicity
The frequency of diagnosis of many mental health problems also differs across different
ethnicities. For example, schizophrenia is more frequently diagnosed in individuals of African
descent than in those of White European origin. Conversely, specific types of eating disorders—such as
anorexia nervosa—have been diagnosed more commonly in White women than Black women
(Walcott, Pratt, & Patel, 2003). In some of these cases, there may be a genetic component, for
example, individuals of Asian descent inherit a gene which makes drinking large amounts of
alcohol aversive and so makes them less likely to develop alcohol dependency and abuse
problems (Li, Zhao, & Gelernter, 2012), but equally it may be the case that diagnostic criteria
are either wittingly or unwittingly applied differently to people from different ethnic
backgrounds (e.g., it is caused by a cultural bias in assessment—see Section 2.2.6, Chapter 2).
For example, Black Americans with severe depression are significantly more likely than other
racial or ethnic groups to be misdiagnosed with schizophrenia (Gara, Minsky, Silverstein,
Miskimen, & Strakowski, 2019), and such differential effects may reflect differential diagnoses
driven by implicit racial and ethnic stereotyping.
Socio‐economic Conditions
Finally, the socio‐economic conditions in which an individual is raised or lives are an
important contributor to the development of psychopathology (Lund et al., 2018). Obvious
examples include the development of conduct disorders, some personality disorders such as
antisocial personality disorder, and substance abuse and dependency problems, many of which
are associated with poverty and low socio‐economic conditions (Karriker‐Jaffe, 2013; Walsh et
al., 2013). However, poverty is also a risk factor for the development of many common mental
health problems such as depression (Freeman et al., 2016), and the development of anxiety
disorders in women—but not necessarily men (Mwinyi et al., 2017). Reasons for the association
between poor socio‐economic conditions and psychopathology may include the additional
stressors and traumas that accompany poverty, such as unemployment, substandard
accommodation, and neglect, and poverty also brings with it feelings of a lack of control over
one's own life and an inability to access the resources that could actively change poor living
conditions (Evans & Kim, 2012). Indeed, so specific are many of the stressors that afflict people
living in poverty, that it may be necessary to develop interventions that are tailored to the
specific sociocultural experiences of low‐income families (Goodman, Pugach, Skolnik, & Smith,
2013).
SELF‐TEST QUESTIONS
What are the main approaches to understanding psychopathology that are advocated by
the biological approach?
What are the pros and cons of attempting to explain mental health problems in terms of
genetics?
Can you describe the basic concepts underlying psychoanalytic and psychodynamic
approaches to psychopathology?
What are the learning principles on which the behavioural approach to psychopathology is
based?
Who were the main founders of the cognitive approach to psychopathology, and what
were their main contributions?
How do humanistic‐existential approaches to psychopathology differ from most of the
others?
SECTION SUMMARY
Lewis is a university lecturer who has suffered from depression for much of his life. Here is his
view of the mental health stigma he encountered:
‘There can be no doubt that there is considerable stigma associated with depression. I am
repeatedly congratulated for being so brave, even courageous, in talking so openly about my
depression. I, in fact, am a “performer” and there is no bravery, but these comments show how
others view depression and that it is highly stigmatised. An example of how stigma can present
a particularly difficult problem for sportsmen is provided by the case of a professional footballer,
Stan Collymore who played for England. He had a severe depression and his career went into a
rapid decline. He says that he can never forgive the Aston Villa manager for the way he reacted
to his depression. He told him to pull his socks up and that his idea of depression was that of a
woman living on a 20th floor flat with kids. The Sun newspaper said that he should be kicked
out of football as how could anyone be depressed when he is earning so much money. He
bitterly remarks that if you suffer from an illness that millions of others suffer from, but it is a
mental illness which leads many to take their own lives, then you are called spineless and weak.’
(From Wolpert, 2001, Stigma of depression—a personal view. British Medical Bulletin, 57, 221–224.)
Mental health stigma Mental health stigma can be divided into two distinct types: social
stigma is characterised by prejudicial attitudes and discriminating behaviour directed towards
individuals with mental health problems. Perceived stigma or self-stigma is the internalising by
the mental health sufferer of their perceptions of discrimination. This can significantly affect
feelings of shame and lead to poorer treatment outcomes.
In relation to social stigma, studies have suggested that stigmatising attitudes towards people with
mental health problems are widespread and commonly held (Byrne, 1997; Crisp, Gelder, Rix, Meltzer,
& Rowlands, 2000; Heginbotham, 1998). In a survey of over 1,700 adults in the UK, Crisp et al. (2000)
found that (a) the most commonly held belief was that people with mental health problems were
dangerous—especially those with schizophrenia, alcoholism, and drug dependence; (b) people believed
that some mental health problems such as eating disorders and substance abuse were self‐inflicted; and
(c) respondents believed that people with mental health problems were generally hard to talk to. People
tended to hold these negative beliefs regardless of their age, regardless of what knowledge they had of
mental health problems, and regardless of whether they knew someone who had a mental health
problem. More recent studies of attitudes to individuals with a diagnosis of schizophrenia or major
depression convey similar findings. In both forms of psychopathology, a significant proportion of
members of the public considered that people with mental health problems such as depression or
schizophrenia were likely to lose control, be unpredictable, and be dangerous, and they would be less
likely to employ someone with a mental health problem (Mannarini & Rossi, 2019; Reavley & Jorm,
2011; Wang & Lai, 2008). Importantly, mental health stigma is an almost universal phenomenon that
can be found in most cultures, and similar forms of mental health prejudice and discrimination can be
found regardless of significant differences in cultural backgrounds (Abdullah & Brown, 2011; Grover et
al., 2017; Mannarini, Boffo, Rossi, & Balottin, 2018).
Finally, while many forms of mental health stigma are overt and represent discriminatory beliefs and
opinions openly held by many members of society, unconscious negative beliefs about mental health
problems can often be found using implicit bias tasks with participants who claim not to hold
stigmatising beliefs about mental health problems (Mannarini & Buffo, 2014; Schlier & Lincoln, 2019)
(similar findings are often found with racist beliefs which may exist as unconscious implicit biases). Such
implicit biases operating outside of an individual's conscious control appear to reflect stigmatisation
based on ‘in‐group’ versus ‘out‐group’ distinctions, where those with mental health problems are
implicitly categorised as members of an ‘out‐group’ (Schlier & Lincoln, 2019). The fact that such
stigmatising beliefs can operate outside of conscious awareness poses some significant challenges for
interventions designed to alleviate mental health stigma.
We discuss ways in which stigma can be addressed later, but it must also be acknowledged here that the
media regularly play a role in perpetuating stigmatising stereotypes of people with mental health
problems. Media portrayals of mental health problems have long been recognised as being misleading
and stigmatising, and the popular press is a branch of the media that is frequently criticised for
perpetuating these stereotypes. A study of mental health‐related stories in nine UK newspapers
published in 2017 found that over half of the articles were negative in tone, and 18.5% indicated an
association with violence (Chen & Lawrie, 2017). But maybe there is some positive movement on this.
Bowen and Lovell (2019) explored how UK newspapers had represented mental health issues on their
Twitter feeds between 2014 and 2017. They did identify a significant decrease in the proportion of
mental health tweets that were characterised as ‘bad news’ over that period. But even in 2017, 24% of
these tweets were still considered as ‘bad news’.
Blame can also be levelled at the entertainment media. For example, cinematic depictions of
schizophrenia are often stereotypic and characterised by misinformation about symptoms, causes and
treatment. In an analysis of English‐language movies released between 1990 and 2010 that depicted at
least one character with schizophrenia, Owen (2012) found that most schizophrenic characters displayed
violent behaviour, one third of these violent characters engaged in homicidal behaviour, and a quarter
committed suicide. This suggests that negative portrayals of schizophrenia in contemporary movies are
common and are likely to reinforce biased beliefs and stigmatising attitudes towards people with mental
health problems. While the media may have increased their portrayal of anti‐stigmatising
material in recent years, studies suggest that there has been only a minimal decrease in the news
media's publication of stigmatising articles, suggesting that the media is still a significant source of
stigma‐relevant misinformation (Thornicroft et al., 2013).
As well as the need to inform and educate members of the public about mental health stigma, there is
also a need to create interventions that will reduce internalised stigma in those suffering from mental
health problems. A variety of methodologies have been successfully employed to reduce internalised
stigma, and these include (a) psychoeducational interventions, (b) CBT interventions aimed at modifying
self‐stigmatising beliefs, (c) interventions based on understanding the causes of mental health problems,
and (d) multifaceted interventions that combine several of the above (Alonso, Guillen, & Munoz, 2019).
Such interventions have been shown to reduce measures of internalised stigma, facilitate subjective
measures of recovery or coping, and improve self‐efficacy and insight (e.g., Wood, Byrne, Varese, &
Morrison, 2016).
SELF‐TEST QUESTIONS
What are the different types of mental health stigma?
What kinds of factors may be responsible for causing and maintaining mental health
stigma?
What kinds of interventions have been developed to try to reduce mental health stigma?
SECTION SUMMARY
CHAPTER OUTLINE
2.1 CLASSIFYING PSYCHOPATHOLOGY
2.2 METHODS OF ASSESSMENT
2.3 CASE FORMULATION
2.4 CLASSIFICATION AND ASSESSMENT IN CLINICAL PSYCHOLOGY
REVISITED
LEARNING OUTCOMES
When you have completed this chapter, you should be able to:
1. Compare and contrast the pros and cons of DSM as a means of classifying and diagnosing
psychopathology.
2. Describe a range of clinical assessment methods and evaluate the benefits and limitations
of each.
3. Describe the concepts of reliability and validity as applied to clinical assessment methods.
4. Critically analyse some of the sources of cultural bias that may influence the process of
clinical assessment.
5. Explain what a case formulation is, and provide some examples from different
psychological approaches.
I saw Ann Smith, aged 39, in my clinic today. She met criteria for depression, with a 5‐month history of low
mood. This was triggered by an argument with her husband during the Christmas period. Despite receiving a lot
of support from her husband, Ann continues to experience ‘black spells’, which can go on for 5 days per episode.
In March she expressed some suicidal thoughts, which precipitated her referral to psychiatric services.
At interview Ann was well presented, clear, and articulate. However, both her eye contact and concentration were
poor, and she reported having lost 10 lbs in weight. This is the first time she has been referred to psychiatric
services but has been prescribed antidepressant medication on three previous occasions by her GP. She states that
she has experienced low times throughout her life.
She is the middle child of three and stated that she missed a lot of schooling due to the combination of having
chronic asthma and an overprotective mother. She left school without qualifications, feeling she has not realised her
potential in any area. She married Michael 14 years ago. Owing to his job as a vicar, they entertain frequently.
She finds the entertaining difficult. I would welcome your assessment of this case, with a view of taking her on for
therapy.
Ann's Story (as told by Blackburn, James, & Flitcroft, 2006)
Introduction
Ann has low mood. She has been prescribed antidepressant medication by her GP, who subsequently
referred her to a psychiatrist. Following an interview with Ann, the psychiatrist sent the referral
letter to a clinical psychologist. The referral letter immediately raises a number of questions, the main
one being ‘Can we help this person?’ But this question itself raises a number of other questions that will
need answering. These questions include (a) are Ann's symptoms typical of a specific psychological
problem (e.g., depression)? (b) do they meet the criteria for formal diagnosis of a mental health problem?
(c) what has led this person to have these problems? (d) are there specific events that trigger her
symptoms? (e) how can we help this person? and (f) by what criteria will we judge that we have
successfully helped this person to recover? These are all questions that the clinical psychologist must
answer by gathering a variety of information about the client, and this information is often gathered
using a range of different clinical tools and techniques. Clinical assessment procedures are formal ways
of finding answers to these questions, especially ‘Precisely what problems does this person have?’, ‘What
has caused their problems?’, ‘What is the best way to treat their problems?’, and ‘Were our support and
interventions successful?’
Clinicians use a wide range of assessment procedures to gather this information. In many cases, the
types of techniques they use will depend on their theoretical orientation to psychopathology. For
example, the cognitive‐behavioural clinician may want to find out quite different information to a
psychodynamic clinician—largely because their conceptions of the causes of psychopathology are
different and because the kinds of therapeutic techniques they employ are different. The cognitive
therapist will want to know what kinds of cognitions may trigger symptoms so that these cognitions can
be addressed in therapy, whereas a psychodynamic therapist may want to explore the client's history of
conflicts and defence mechanisms in order to assess their suitability for psychodynamic therapy
(Marzillier & Marzillier, 2008).
In this chapter we describe the range of assessment techniques available to clinicians that enable them
to answer the basic questions about a case that we have just raised. These techniques are an aid to
diagnosis, an aid to determining the best intervention for a client, and a help in establishing whether
treatment has successfully dealt with the client's symptoms. We discuss these assessment types
individually, but to gain a complete picture of the client's condition, the clinician will usually use a range
of different assessments (Antony & Barlow, 2012; Meyer et al., 2001). The chapter begins by discussing
the ways in which we currently classify and diagnose mental health problems, because many forms of
assessment are structured in ways that enable classification. We then move on to discussing different
types of assessment, including the interview, psychological tests, biologically based tests, and
observation. Finally we discuss some issues relating to diagnosis and how diagnosis can be associated
with the development of a treatment plan (known as formulation).
Diagnostic and Statistical Manual (DSM) First published in 1952 by the American
Psychiatric Association (APA), the DSM extended the World Health Organisation’s (WHO)
International List of Causes of Death (ICD) classification system to include a more widely
accepted section on mental disorders.
Neurodevelopmental disorders
Schizophrenia spectrum and other psychotic disorders
Bipolar and related disorders
Depressive disorders
Anxiety disorders
Obsessive‐compulsive and related disorders
Trauma‐ and stressor‐related disorders
Dissociative disorders
Somatic symptom disorders
Feeding and eating disorders
Elimination disorders
Sleep‐wake disorders
Sexual dysfunctions
Gender dysphoria
Disruptive, impulse control, and conduct disorders
Substance use and addictive disorders
Neurocognitive disorders
Personality disorders
Paraphilic disorders
Other disorders
The total number of disorders in DSM‐5 has not increased significantly, but some disorders have now had their
importance recognised by being allocated separate chapter headings (e.g., obsessive‐compulsive disorder). The chapter
on neurodevelopmental disorders is a new heading containing autism spectrum disorders, intellectual development
disorder, and attention‐deficit/hyperactivity disorder (ADHD). The chapter on substance use and addictive disorders now
includes gambling disorder. The importance of both bipolar disorder and depressive disorders is recognised by them
being allocated to separate chapters.
Second, simply using DSM criteria to label people with a disorder can be stigmatising and harmful. We
saw in Chapter 1 that individuals with a mental health diagnosis tend to be viewed and treated
differently within society (Perkins et al., 2018). In addition, diagnostic labels may encourage individuals
to adopt a ‘sick’ role and can result in people adopting a long‐term role as an individual with what they
perceive as a debilitating illness (Scheff, 1975).
Third, DSM diagnostic classification tends to define disorders as discrete entities (i.e., after being assessed,
you will either be diagnosed with a disorder or you will not). However, much recent evidence has begun
to suggest that many common mental health problems such as anxiety and depression may be
dimensional rather than discrete (Krueger & Piasecki, 2002; den Hollander‐Gijsman et al., 2012; Clark et
al., 2017). That is, symptoms diagnosed as a disorder may just be more extreme versions of everyday
behaviour. For example, at times we all worry about our own life problems—some more than others. In
extreme cases worry can become so regular and persistent that it will interfere with our daily living and
may meet DSM criteria for diagnosis as a disorder (e.g., generalised anxiety disorder, GAD, see Chapter
6). However, chronic worrying and GAD symptoms appear to be dimensional and range in frequency
and intensity across the general population (Niles, Lebeau, Liao, Glenn, & Craske, 2012), and in such
circumstances, the cut‐off point for defining an activity such as worrying as a disorder becomes
relatively arbitrary. DSM has traditionally attempted to deal with this problem by adding a clinical
significance criterion to many diagnostic categories which required that symptoms cause ‘significant
distress or impairment in social, occupational, or other important areas of functioning’ (Spitzer &
Wakefield, 1999), and the purpose of this was to try to differentiate symptoms that reflect normal
reactions to stress that the individual may be able to cope with from those that may require intervention
and treatment to restore functioning. However, with growing evidence that psychopathology symptoms
are on a dimension, DSM‐5 has included simple dimensional measures of disorder severity to
accompany more specific diagnostic criteria. For example, in the case of GAD, dimensional measures
such as per cent of the day spent worrying can provide an indication of symptom severity on a
dimensional scale.
Fourth, DSM conceptualises psychopathology as a collection of hundreds of distinct categories of
disorders, but what happens in practice provides quite a different picture. For example, the discrete,
differentially defined disorders listed in DSM regularly co‐occur. This is known as comorbidity, where
an individual client will often be diagnosed with two or more distinct disorders (e.g., an anxiety disorder
such as obsessive‐compulsive disorder and major depression). What is interesting is that comorbidity is
so common that it is the norm rather than the exception. For example, surveys suggest that up to 79%
of individuals diagnosed with a disorder at some point during their lifetime will have a history of more
than one disorder (Kessler et al., 1994). The frequency of comorbidity suggests that most disorders as
defined by DSM may indeed not be independent discrete disorders but may represent symptoms of
either hybrid disorders (e.g., a disorder that contains elements of a number of different disorders) or
a more broad ranging syndrome or disorder spectrum that represents a higher‐order categorical
class of symptoms (Kotov et al., 2017; Widiger & Samuel, 2005). An example of a hybrid disorder is
mixed anxiety‐depressive disorder, in which many people exhibit symptoms of both anxiety and
depression yet do not meet the threshold for either an anxiety or a depression diagnosis (Möller et al.,
2016; Barlow & Campbell, 2000). Examples such as this suggest that because DSM defines disorders as
numerous individual discrete entities, it fails to recognise when combinations of discrete symptoms may
each not reach a level significant enough for diagnosis but may collectively be causing significant
distress. There is also a broader theoretical implication to the fact that comorbidity is so common, and
this is that psychopathology may occur in a spectrum that has a hierarchical structure rather than
consisting merely of numerous discrete disorders (e.g., Conway et al., 2019; see Section 2.1.4 for further
detail).
disorder spectrum The frequency of comorbidity suggests that most disorders as defined by
DSM may indeed not be independent discrete disorders, but may represent symptoms of a
disorder spectrum that represents a higher-order categorical class of symptoms.
One final problem with DSM is that it can be conceived as a ‘hodgepodge’ collection of disorders that
have been developed and refined in a piecemeal way across a number of revisions (see Focus Point 2.1) –
and this makes it almost impossible to frame a definition of what a mental health problem actually is.
Frances & Widiger (2012) characterise this ‘hodgepodge’ view in the following quotation:
The current list of mental disorders certainly constitutes a hodgepodge collection. Some describe short‐term states, others
life‐long personality. Some reflect inner misery, others bad behaviour. Some represent problems rarely or never seen in
normals, others are just slight accentuations of the everyday. Some reflect too little self‐control, others too much. Some are
quite intrinsic to the individual; others are defined against varying and changing cultural mores and stressors. Some begin
in infancy, others in old age. Some affect primarily thought; others emotions, behaviors, or inter‐personal relations; and
there are complex combinations of all these. Some seem more biological, others more psychological.
(Frances & Widiger, 2012, p. 111)
DSM regularly undergoes an intensive revision process to take account of new research on
mental health problems and to refine the diagnostic categories from earlier versions of the
system. One would assume that this would be a deliberate and objective process that could only
further our understanding of psychopathology, and that is certainly the intention of the
majority of those involved. However, at least some people argue that the process of developing
a classification system such as DSM can never be entirely objective, free from bias, or free from
corporate or political interests. Allen Frances and Thomas Widiger were two individuals who
were prominent in the development of the fourth edition of the DSM, and they have written a
fascinating account of the lessons they believe should be learned from previous attempts to
revise and develop mental health classification systems (Frances & Widiger, 2012). They make
the following points:
1. Just as the number of mental health clinicians grows, so too will the number of life
conditions that work their way into becoming disorders. This is because the proliferation
of diagnostic categories tends to follow practice rather than guide it.
2. Because we know very little about the true causes of mental health problems, it is easier
and simpler to proliferate multiple categories of disorder based on relatively small
differences in descriptions of symptoms.
3. Most experts involved in developing DSM are primarily worried about false negatives (i.e.,
the missed diagnosis or patient who does not fit neatly into the existing categorisations),
and this leads to either more inclusive diagnostic criteria or even more diagnostic
categories. Unfortunately, experts are relatively indifferent to false positives (patients who
receive unnecessary diagnosis, treatment, and stigma) and so are less likely to be concerned
about over‐diagnosis.
4. Political and economic factors have also shaped the ‘medical model’ view of
psychopathology on which DSM is based, and also contributed to the establishment and
proliferation of diagnostic categories. For example, the pharmaceutical industry benefits
significantly from the sale of medications for mental health problems, and its profits will be
dependent on both (a) conceptions of mental health based on a medical model that implies
a medical solution, and (b) a diagnostic system that will err towards overdiagnosis rather
than underdiagnosis (see Pilecki, Clegg, & McKay, 2011).
While DSM is not ideal, it is the most comprehensive classification system we have available, and while
we have just listed a number of criticisms of DSM we must also remember that classification in and of
itself does also have some advantages (see Section 2.1).
2.1.3 DSM‐5
DSM‐5 arguably represents the most comprehensive revision of the DSM so far (Table 2.1), and it has
involved many years of deliberation and field trials to determine what changes to mental health
classification and diagnosis are essential and empirically justifiable. The main changes between DSM‐5
and its predecessor (DSM‐IV‐TR) are listed in Table 2.2 (see Blashfield, Keeley, Flanagan, & Miles,
2014, for a review of changes in the DSM from DSM‐I to DSM‐5).
However, while these most recent changes to the DSM have been extensively discussed and researched,
many of the revisions have been received critically, and it is worth discussing some of these criticisms
because they provide an insight into the difficulties of developing a classification system for
psychopathology that is fair and objective.
First, many of the diagnostic changes in DSM‐5 have reduced the number of criteria necessary to
establish a diagnosis. This is the case with attenuated psychosis syndrome, major depression, and
generalised anxiety disorder, and this runs the risk of increasing the number of people that are likely to
be diagnosed with common mental health problems such as anxiety and depression. It is a debatable
point whether increases in the number of diagnosed cases are a good or a bad thing, but they are likely to have
the effect of ‘medicalising’ many everyday emotional experiences (such as ‘grief’ following a
bereavement, or worry following a stressful life event), and may create “false‐positive” epidemics
(Frances, 2010; Wakefield, 2016). This is a process that many clinicians critical of DSM claim is ‘eating
up normality’, and ‘none of the new mental disorders added to DSM‐5 passes any of the three
necessary tests for a new diagnosis: a low false positive rate, an effective treatment, and safety. Future
DSMs should stick to well defined mental disorders that have clear treatment implications and can be
easily distinguished from normality’ (Frances, 2015, p. 179).
TABLE 2.2 Summary of changes in DSM‐5
mild neurocognitive disorder DSM-5 has introduced disorder categories that are designed
to identify populations that are at risk for future mental health problems, and these include mild
neurocognitive disorder, which diagnoses cognitive decline in the elderly.
attenuated psychosis syndrome DSM-5 has introduced disorder categories that are
designed to identify populations that are at risk for future mental health problems. Attenuated
psychosis syndrome is seen as a potential precursor to psychotic episodes.
Third, there are concerns that changes in diagnostic criteria will result in lowered rates of diagnosis for
some particularly vulnerable populations. For example, 5 years after the implementation of DSM‐5
criteria for autism spectrum disorder (ASD), a comprehensive review and meta‐analysis suggests
there has been a significant decrease of at least 20% in the number of individuals being diagnosed with
ASD using DSM‐5 criteria compared to using DSM‐IV‐TR criteria (Kulage et al., 2019). In addition,
many researchers believe that introducing a single autism diagnostic dimension in DSM‐5 risks
marginalising future research on the different subtypes of autism that constituted specific diagnostic
categories in DSM‐IV‐TR (Tsai, 2015). Similar concerns have been voiced about changes to specific
learning disorder diagnostic criteria in DSM‐5, and the possibility that deletion of the term dyslexia
as a diagnostic label will disadvantage individuals with specific phonologically based, developmental
reading disabilities.
Finally, two enduring criticisms of DSM generally that have continued to be fired specifically at DSM‐5
have been that (a) DSM‐5 has continued the process of attempting to align its diagnostic criteria with
developments and knowledge from neuroscience (Regier, Narrow, Kuhl, & Kupfer, 2011), but
neuroscience may never be able to provide a comprehensive basis for diagnosis because its approach is
too reductionist and by its very nature it will be unable to capture the social and cultural factors that
indisputably contribute to the symptoms of mental health problems (e.g., de Macedo, 2017); and (b)
most mental health problems (and psychological distress generally) are now viewed as dimensional, so
any criteria defining a diagnostic cut‐off point will be entirely arbitrary. DSM‐5 has attempted to
recognise the importance of the dimensionality of symptoms by introducing dimensional severity rating
scales for individual disorders. But as we have seen from the previous discussion, each iterative change
in DSM diagnostic criteria alters the number and range of people who will receive a diagnosis, and
this makes it increasingly hard to accept diagnostic categories as valid constructs (e.g., Kendler, Kupfer,
Narrow, Phillips, & Fawcett, 2009).
Network analyses
DSM defines psychological disorders in terms of a series of co‐occurring symptoms, yet does not specify
how these symptoms are caused. One way to try to explain how these symptoms co‐occur is to assume
that there is some kind of underlying cause or latent variable that connects the symptoms together
(Reise & Waller, 2009). For example, a lung tumour can explain why an individual experiences bloody
sputum, chest pains, and a chronic cough, and so in psychopathology it may be that an underlying
hypothetical process called ‘depression’ can explain the co‐occurrence of criteria symptoms for
depression such as insomnia, sadness, and loss of interest. However, the thing we call ‘depression’ may
not be a separate entity that causes symptoms in the same way that a lung tumor is an identifiable entity
that is separate from its symptoms and causes its symptoms. For example, clinicians and
psychometricians have proposed a radically different way of understanding what makes up a collection
of symptoms such as depression. Instead of being an underlying cause of the symptoms, ‘depression’ is
the name given to the dynamic causal interactions between defining symptoms themselves, and this has
given rise to network analysis of symptoms indicative of mental health problems (Borsboom &
Cramer, 2013). This approach assumes that disorders emerge from the causal interactions between
symptoms themselves, and understanding the causal relationships between symptoms will enable us to
define individual clusters of symptoms that define individual disorders. In this way, depression is not a
causal entity but merely a name for the network of symptoms that interact to cause the syndrome we
call ‘depression’.
FIGURE 2.2 HiTOP consortium working model. Constructs higher in the figure are broader and more general, whereas
constructs lower in the figure are narrower and more specific. Dashed lines denote provisional elements requiring further
study. At the lowest level of the hierarchy (i.e., traits and symptom components), conceptually related signs and symptoms
(e.g., phobia) are indicated in bold for heuristic purposes, with specific manifestations indicated in parentheses. ADHD =
attention‐deficit/hyperactivity disorder, BPD = bipolar disorder, GAD = generalised anxiety disorder, HiTOP =
Hierarchical Taxonomy of Psychopathology; IED = intermittent explosive disorder, MDD = major depressive disorder;
OCD = obsessive‐compulsive disorder, ODD = oppositional defiant disorder; SAD = separation anxiety disorder; PD =
personality disorder; PTSD = post‐traumatic stress disorder.
From Conway et al., 2019.
Networks of interacting symptoms can be identified using statistical methods that measure the strength
of the associations between symptoms and the centrality of symptoms within the network (i.e., how
important a symptom is in affecting other symptoms) (e.g., Fried, Epskamp, Nesse, Tuerlinckx, &
Borsboom, 2016). Focus Point 2.2 provides a detailed description of a network analysis of depression
symptoms and its implications.
The benefits of a network analysis approach are that it provides an objective measure of how symptoms
are interrelated and can identify symptoms that are centrally important in defining a disorder and
affecting other symptoms (i.e., have ‘high centrality’) (Contreras, Nieto, Valiente, Espinosa, & Vazquez,
2019). Identifying symptoms with high centrality that trigger other symptoms means that these
symptoms can then be prime targets for clinical interventions. Network analysis can also help to explain
comorbidity. For example, McNally, Mair, Mugno, and Riemann (2017) found that a network analysis
revealed that sadness was the one symptom that connected obsessive‐compulsive disorder (OCD)
symptoms with depression in clients with comorbid OCD and depression. However, network analysis is
a cross‐sectional analysis of relationships between symptoms and as yet does not provide information on
causal relationships between symptoms that may occur over different time scales, and experimental lab‐
based studies of individual symptoms may be necessary to complement the statistical analyses. A useful
introduction to network analysis is provided by McNally (2016), and a review and future directions by
Robinaugh, Hoekstra, Toner, and Borsboom (2019).
Network analyses can be created by collecting data about symptoms in a variety of ways
and subjecting these data to statistical analyses that provide information about the
connectedness between symptoms (Borsboom & Cramer, 2013). First, it is important to identify
the elements or symptoms that will function as nodes, and these can be taken from diagnostic
manuals such as the DSM, or from clinician‐rated or client‐rated assessment tools such as the
Inventory of Depressive Symptomatology (IDS‐C, IDS‐SR; Trivedi et al., 2004) if you are studying
depressive symptoms. Even greater detail can be achieved by asking clinicians and
clients to rate the direction of causality of symptoms.
These data can then be used to create a visual representation of the network of symptoms that
consists of nodes and edges. Nodes represent symptoms, and edges represent associations
between symptoms. The centrality of a node can then be calculated by such details as the
number of edges connected to it and the strength with which it will activate associated nodes.
Nodes (symptoms) with high centrality are those of greatest importance in the network.
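The strength‐centrality calculation described above can be illustrated with a short Python sketch. Everything in it is an assumption made for illustration: the symptom names, the simulated scores, and the use of pairwise correlations as edge weights are invented here, and this is not the estimation procedure used in any published network analysis.

```python
import numpy as np

def strength_centrality(scores: np.ndarray) -> np.ndarray:
    """Strength centrality of each node: the sum of the absolute weights
    of the edges attached to it. Here, edges are simply the pairwise
    correlations between symptom scores."""
    edges = np.corrcoef(scores, rowvar=False)  # symptom x symptom matrix
    np.fill_diagonal(edges, 0.0)               # a node has no edge to itself
    return np.abs(edges).sum(axis=0)

# Simulated ratings for 200 respondents on four hypothetical symptoms;
# interest_loss is constructed to track sadness, so those two nodes should
# form a strongly connected pair in the resulting network.
rng = np.random.default_rng(42)
n = 200
sadness = rng.normal(size=n)
insomnia = rng.normal(size=n)
worry = rng.normal(size=n)
interest_loss = 0.8 * sadness + rng.normal(size=n)

scores = np.column_stack([sadness, insomnia, worry, interest_loss])
strength = strength_centrality(scores)
for name, s in zip(["sadness", "insomnia", "worry", "interest_loss"], strength):
    print(f"{name}: {s:.2f}")
```

In this toy network, sadness and interest loss come out with the highest strength because they share a strong edge, mirroring the logic by which high‐centrality symptoms are identified as those most densely connected to others.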
The figure shows an example of a network analysis carried out on depression symptoms by
Fried, Epskamp, Nesse, Tuerlinckx, and Borsboom (2016). On the left is the representation of
the network. Green lines represent positive associations between symptoms and red lines
negative ones. The thickness and brightness of the connecting edges indicate the strength of the
association between nodes. The four sleep symptoms are closely connected (hyp, in1, in2, in3) but
only weakly associated with other symptoms. The DSM core depression criterion symptoms of
diminished interest (int) and loss of pleasure (ple) were closely associated. On the right of the figure is
the centrality strength of each node, with DSM core symptoms of sadness, interest loss, and
pleasure loss all having high centrality, meaning they are symptoms which will have a significant
influence on other symptoms, and the centrality of these symptom nodes validates their use as
core diagnostic criteria in DSM‐5. In addition, the fact that anxiety (anx) is also a significant
node in the network is consistent with the high comorbidity rates found between depression and
anxiety (Kessler, Chiu, Demler, Merikangas, & Walters, 2005), and this provides a way of
showing how depression and anxiety may be connected when the two are comorbid. (From
Fried EI, Epskamp S, Nesse RM, Tuerlinckx F & Borsboom D (2016) What are ‘good’
depression symptoms? Comparing the centrality of DSM and non‐DSM symptoms of
depression in a network analysis. Journal of Affective Disorders, 189, 314‐320.)
The Power Threat Meaning (PTM) framework
In 2018 the British Psychological Society published the Power Threat Meaning Framework. This project was
funded by the Division of Clinical Psychology and developed over 5 years by a team of leading clinical
psychologists and service users in the UK. The framework is a scholarly document aimed at promoting
discussion and debate by offering a fundamentally different perspective on the origins, experience and
expression of mental health problems
(https://www.bps.org.uk/sites/bps.org.uk/files/Policy/Policy%20‐%20Files/PTM%20Summary.pdf). It
represents an attempt to move away from primarily biological and medical models of mental health
problems that are based on psychiatric diagnosis and the assumption that mental health problems are
disorders of biology and are ‘pathological’. Instead, the framework takes a broad view of the causes of
psychopathology and views people as social beings whose experiences of distress and troubling
behaviour are inseparable from their material, social, environmental, socio‐economic, and cultural
contexts. In effect, those aspects of life that are labelled as mental health problems can be viewed as
quite natural reactions to stressful and threatening life events, examples of which include poverty,
discrimination, and inequality, and traumatic experiences such as abuse and violence, along with more
subtle pressures such as social norms and expectations. These events can be indicative of the adverse
operation of ‘Power’ (coercive, legal, economic, ideological, social/cultural, or interpersonal), and the
‘Threat’ that the negative operation of power may pose. As a consequence this gives rise to learned and
evolved threat responses that can range from automatic physiological reactions to consciously selected
actions such as hyper‐vigilance, self‐injury, or compulsive behaviour which are currently considered as
symptoms of mental health problems. A significant feature of the framework is the importance of
offering ‘Meaning’ to the client, and helping them to construct a nondiagnostic, nonblaming,
demystifying story about strength and survival, one that frames such experiences and reactions as
perfectly natural attempts to survive the negative impacts of power.
Instead of asking ‘What is wrong with you?’, the framework replaces this type of question with four
others:
‘What has happened to you?’ (How has Power operated negatively in your life?)
‘How did it affect you?’ (What kinds of Threats did it pose?)
‘What sense did you make of it?’ (What is the Meaning of these situations and experiences to you?)
‘What did you have to do to survive?’ (What kinds of Threat Response are you using?)
Translated into clinical practice, two additional questions need to be asked:
‘What are your strengths?’
‘What is your story?’
The latter two questions can form the basis for giving meaning to the client's experiences and providing
a road map to recovery.
The PTM Framework is basically a radical nondiagnostic and dimensional approach to what are
currently called mental health problems. It is dimensional in that reactions to stressors and threats are
seen as perfectly normal attempts to survive such events, and its purpose is to offer a way of helping
people to create more hopeful narratives or stories about the difficulties in their lives instead of seeing
themselves as blameworthy, weak, deficient or ‘mentally ill’.
However, there is still some way to go: at present the PTM Framework offers only a theoretical basis for
approaching and resolving mental health problems in a nondiagnostic way, and more is needed to
translate this framework into clinical practice. Also, because of its radical anti‐diagnostic approach, the
framework has generated a good deal of controversy and discussion amongst mental health
professionals and service users alike (e.g.,
https://www.mentalhealthtoday.co.uk/blog/diagnosis/challenging‐narratives‐the‐power‐threat‐
meaning‐framework; Johnstone, et al., 2019).
2.1.5 Conclusions
Despite its conceptual difficulties and its many critics, DSM is still the most widely adopted classification
and diagnostic system for clinical practice and clinical research. Such a system can be useful for a
number of reasons, including determining the allocation of resources and support for mental health
problems, serving circumstances that require a legal definition of mental health problems, and providing a
common language that allows the world to share and compare data on mental health problems. Having
said this, there are still many significant problems associated with DSM, and diagnosing and labelling
people with specific psychological disorders raises other issues to do with stigma and discrimination.
Indeed, we should be clear that diagnostic systems are not a necessary requirement for helping people
with mental health problems to recover, and many clinical psychologists prefer not to use diagnostic
systems such as DSM or ICD, instead treating each client as someone with a unique mental
health problem that can best be described and treated using other means such as case formulation
(see Section 2.3 for a fuller description and examples of case formulation). In recent years, the
imperfections in DSM and in particular the criticisms of DSM‐5 have led clinicians and researchers to
attempt to develop alternative evidence‐based classification systems to the DSM (e.g., RDoC, HiTOP,
and network analysis) or to offer theoretical frameworks for clinical practice that do not require
diagnostic classification at all (e.g., the PTM Framework).
SELF‐TEST QUESTIONS
Briefly describe the history of the development of psychopathology classification systems.
What is the DSM classification system primarily designed to do?
DSM is not an ideal classification system. Describe at least four problems associated with
this method of classification.
What are the main alternatives to DSM‐5 that are being developed by clinical
psychologists?
SECTION SUMMARY
reliability The extent that an assessment method will still provide the same result when used
by different clinicians on different occasions.
validity The extent that an assessment method actually does measure what it claims to be
measuring.
Reliability
Reliability refers to how consistently an assessment method will produce the same results, and reliability
can be affected by a number of different factors. First, test‐retest reliability refers to the extent that
the test will produce roughly similar results when the test is given to the same person several weeks or
even months apart (as long as no treatments or interventions have occurred in between). As we indicated
earlier, most psychological tests are based on the assumption that most traits and personal
characteristics are relatively stable and can be reliably measured. If the test has high test‐retest reliability
then when an individual is given the test on two separate occasions, the two scores should be highly
correlated.
test–retest reliability The extent that a test will produce roughly similar results when the test
is given to the same person several weeks or even months apart (as long as no treatments or
interventions have occurred in between).
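Test–retest reliability is, in practice, just the correlation between the two administrations. A minimal sketch, using invented scores for ten hypothetical clients tested twice several weeks apart:

```python
import numpy as np

# Hypothetical questionnaire scores for the same ten clients, administered
# twice several weeks apart (no intervention in between). Invented data.
time1 = np.array([12, 18, 25, 9, 30, 22, 15, 27, 11, 20], dtype=float)
time2 = np.array([14, 17, 24, 10, 28, 23, 16, 29, 12, 19], dtype=float)

# Test-retest reliability: the Pearson correlation between the two
# administrations; a value close to 1.0 suggests a stable measure.
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r:.2f}")
```

Because each client's second score sits close to their first, the correlation here is near 1.0; a test with poor temporal stability would produce a much lower value.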
Second, interrater reliability refers to the degree to which two independent clinicians will actually
agree when interpreting or scoring a particular test. Most highly structured tests, such as personality
inventories, will have high interrater reliability because the scoring system is clearly defined and there is
little room for individual clinician judgements when interpreting the test. However, some other tests
have much lower interrater reliability, especially where scoring schemes are not rigidly defined, and
projective tests are one example of this (see Section 2.2.3).
interrater reliability The degree to which two independent clinicians or researchers actually
agree when interpreting or scoring a particular test.
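When the two clinicians' judgements are categorical (e.g., diagnostic labels), interrater agreement is commonly quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. The sketch below is illustrative only; the diagnostic labels and ratings are invented:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical scores."""
    n = len(rater_a)
    # Proportion of cases on which the two raters actually agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, given each rater's label frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in categories) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical diagnoses assigned by two clinicians to eight clients
a = ["GAD", "MDD", "GAD", "OCD", "MDD", "GAD", "OCD", "MDD"]
b = ["GAD", "MDD", "MDD", "OCD", "MDD", "GAD", "OCD", "GAD"]
kappa = cohens_kappa(a, b)
print(f"kappa = {kappa:.2f}")  # prints kappa = 0.62
```

Here the raters agree on 6 of 8 cases (raw agreement .75), but kappa is lower (.62) because some of that agreement would be expected by chance alone.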
Third, many assessment tests have multiple items (e.g., personality and trait inventories), and internal
consistency within such tests is important. Internal consistency refers to the extent to which all the
items in the test consistently relate to each other. For example, if there are 20 items in a test, then we
would expect scores on each of those 20 items to correlate highly with each other. If one item does not
correlate highly with the others, then it may lower the internal consistency of the test. The internal
consistency of a questionnaire or inventory can usually be assessed by using a statistical test called
Cronbach's α, and this test will also indicate whether any individual item in the test is significantly
reducing the internal consistency of the test (Field, 2017, pp. 821‐830).
internal consistency The extent to which all the items in a test consistently relate to one
another.
Validity
It is important to be sure that an assessment method actually measures what it claims to be measuring,
and this is covered by the concept of test validity. However, validity is a complex concept, and we
begin by discussing some of the more obvious issues surrounding this problem.
To determine whether a test actually measures what it claims to measure, we need to establish the
concurrent validity of the test. That is, we need to see if scores on that test correlate highly with
scores from other types of assessment that we know also measure that attribute. For example, the Spider
Phobia Questionnaire (SPQ) purports to be a measure of the spider phobic's anxious reaction to spiders
(Watts & Sharrock, 1984), but in order to establish the concurrent validity of this questionnaire, we
might need to be sure that scores actually correlate highly with other measures of spider fear such as the
magnitude of physiological anxiety measures taken while the individual is viewing a spider.
concurrent validity A measure of how highly correlated scores of one test are with scores
from other types of assessment that we know also measure that attribute.
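Concurrent validity is typically quantified as a correlation between the two measures. A sketch using Pearson's r; the SPQ scores and heart-rate figures below are invented for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired observations."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) *
                           sum((b - my) ** 2 for b in y))

# Hypothetical SPQ scores and heart-rate increases (bpm) while viewing a spider
spq = [4, 9, 13, 18, 22, 26]
bpm = [2, 5, 9, 11, 16, 17]
print(round(pearson_r(spq, bpm), 2))  # → 0.99
```

A high positive r, as here, is the kind of evidence that would support the questionnaire's concurrent validity against the physiological measure.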
A particular assessment method may appear to be valid simply because it has questions which intuitively
seem relevant to the trait or characteristic being measured. This is known as face validity, but just
because a test has items that seem intuitively sensible does not mean that the test is a valid measure of
what it claims to be. For example, a questionnaire measuring health anxiety may ask about how
frequently the respondent visits a doctor. Although this would be a characteristic of health anxiety, it is
also a characteristic of individuals who are genuinely ill or have chronic health problems.
face validity The idea that a particular assessment method may appear to be valid simply
because it has questions which intuitively seem relevant to the trait or characteristic being
measured.
For an assessment method to have high predictive validity it must be able to help the clinician to
predict future behaviour and future symptoms, and so be valuable enough to help with the planning of
care, support or treatment for that individual. For example, a good measure of depression would predict
that certain types of antidepressant medication will help to alleviate the symptoms. Some assessment
measures are predictive in the sense that they help us to understand the kinds of factors that might pose
as risk factors for subsequent psychopathology. For example, assessments that allow us to gather reliable
information about childhood abuse and neglect will indicate that such individuals are likely to suffer a
range of possible psychopathologies in later life (see Table 16.2).
predictive validity The degree to which an assessment method is able to help the clinician
predict future behaviour and future symptoms.
Structured interviews
The clinician can also use the interview method to acquire the kinds of standardised information they
need to make a diagnosis or to construct a case formulation (see Section 2.3), but this requires that they
conduct the interview in a structured way. The normal clinical interview would probably contain many
open questions such as ‘Tell me something about yourself and what you do’, and the direction of the
interview will be to some extent determined by the client's responses to these open questions. However,
structured interviews can be used to enable the clinician to make decisions about diagnosis and
functioning. One such structured interview technique is known as the Structured Clinical Interview
for DSM‐5 (SCID‐5) (First, Williams, Karg, & Spitzer, 2016), which can be used for determining
diagnoses using DSM‐5 criteria. The SCID is a branching, structured interview in which the client's
response to one question will determine the next question to be asked. This enables the clinician to
establish the main symptoms exhibited by a client, their severity, and whether a combination of these
symptoms and severity meet DSM‐5 criteria for a particular disorder, and the SCID‐5 has been shown
to provide highly reliable diagnoses and severity ratings for many disorders (e.g., Shankman et al., 2018;
Somma et al., 2018). In clinical practice, most clinical psychologists are skilled enough to reach
diagnostic decisions about a client without a structured interview such as the SCID‐5, but the reliability
of a diagnosis tends to be much higher when a structured interview is used (Garb, 2005).
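The branching logic of a structured interview can be pictured as a small decision tree in which each answer selects the next question. The sketch below is a heavily simplified invention for illustration only; it is not the SCID-5, whose modules are far more detailed and must be administered by trained interviewers:

```python
# Hypothetical branching structure: each answer names the next node to visit.
interview = {
    "q_low_mood": {
        "text": "In the past month, have you felt down or depressed most of the day?",
        "yes": "q_duration", "no": "q_anhedonia",
    },
    "q_anhedonia": {
        "text": "Have you lost interest or pleasure in things you usually enjoy?",
        "yes": "q_duration", "no": None,  # skip out of this module
    },
    "q_duration": {
        "text": "Has this lasted for at least 2 weeks?",
        "yes": "flag_depression_module", "no": None,
    },
}

def run(answers, start="q_low_mood"):
    """Follow the branch dictated by each answer; return the endpoint reached."""
    node = start
    while node in interview:
        node = interview[node][answers[interview[node]["text"]]]
    return node

print(run({interview[q]["text"]: "yes" for q in interview}))
# → flag_depression_module
```

The endpoint reached tells the clinician which symptom module warrants fuller exploration; a client answering "no" throughout simply exits the module.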
Structured interviews can also be used to determine overall levels of psychological and intellectual
functioning, especially in older people who may be suffering from degenerative disorders such as
dementia. One such structured interview is the Mini Mental State Examination (MMSE), which is
a structured test that takes 10 minutes to administer and can provide reliable information on the client's
overall levels of cognitive and mental functioning. A fuller description of this structured interview is
given in Chapter 15 (see Focus Point 15.2).
Mini Mental State Examination (MMSE) A structured test that takes 10 minutes to
administer and can provide reliable information on a client’s overall levels of cognitive and
mental functioning.
Limitations of the clinical interview
The clinical interview is usually a good way of beginning the process of assessment, and it can provide a
range of useful information for the clinician. However, there are limitations to this method. First, the
reliability of clinical interviews is probably quite low. That is, no matter how skilled they may be, two
different clinicians are quite likely to end up with rather different information from an unstructured
interview. For example, clients are likely to give different information to an interviewer who is ‘cold’ and
unresponsive than to one who is ‘warm’ and supportive (Hersen & Thomas, 2007; Eisenthal, Koopman,
& Lazare, 1983), and a teenage client is likely to respond differently to a young interviewer who is
dressed casually than to an older interviewer who is dressed formally. There is also significant evidence
that an interviewer's race and sex will influence a client's responses (Paurohit, Dowd, & Cottingham,
1982). As we have already mentioned, many clients may have quite poor self‐awareness, so only a skilled
interviewer will be able to glean the information they require by inferring information from the client's
responses. Interviewers are also prone to biases that may affect the conclusions they draw from an
interview. For example, they may rely too heavily on first impressions (the primacy effect), or give
priority only to negative information (Meehl, 1996), and may be influenced by irrelevant details such as
the client's biological sex, race, skin colour or sexual orientation. Interestingly, diagnoses are most likely
to be correct when relevant diagnostic information is presented last (a recency effect) (Cwik & Margraf,
2017). Finally, there are some mental health problems in which sufferers may intentionally mislead the
interviewer or lie to them, and these can mean that the client can manipulate the interview or
deliberately provide misleading information. This can occur in the case of personality disorders or
sexual disorders such as paedophilia that may involve illegal behaviours (see Chapters 11 and 12)
(Activity Box 2.1).
4. Unlike the ad hoc quizzes and questionnaires you might find in popular magazines, most
structured psychological tests are rigorously tested to ensure that they are both valid and reliable
(see Section 2.2.1). That is, they are tested to ensure that they are a valid measure of what they
claim to be measuring (e.g., that scores on a written psychological test claiming to measure anxiety
actually correlate with behavioural measures of anxiety) and that the test is reliable in the sense
that it yields consistent scores when it is given to the same person on different occasions.
psychometric approach The idea that a psychological test assumes that there are stable
underlying characteristics or traits (e.g. anxiety, depression, compulsiveness, worry) that exist at
different levels in everyone.
Personality inventories
The most well known of the personality inventories used by clinical psychologists and psychiatrists
is the Minnesota Multiphasic Personality Inventory (MMPI). This was originally developed in
the 1940s by Hathaway and McKinley (1943) and was updated in 1989 by Butcher et al. (1989) (known
as the MMPI‐2). The MMPI‐2 consists of 567 self‐statements to which the client has to respond on a 3‐
point scale by replying either ‘true’, ‘false’ or ‘cannot say’. The questions cover topics such as mood,
physical concerns, social attitudes, psychological symptoms, and feelings of well‐being. The original
authors asked around 800 psychiatric patients to indicate whether the questions were true for them and
compared their responses with those from 800 nonpsychiatric patients. They then included in the
inventory only those questions that differentiated between the two groups. The test has 4 validity scales
and 10 clinical scales, and examples of these are shown in Table 2.4. The test provides scores for each
scale between 0 and 120, and scores above 70 on a scale are considered to be indicative of
psychopathology. The scores from the various scales can be displayed on a graph to give a distinctive
profile indicating the client's general personality features, potential psychopathology, and emotional
needs. The validity scales are particularly useful, because they allow the clinician to estimate whether a
client has been providing false information on the test. Clients might provide false information for a
number of reasons: (a) because they want to ‘look good’ and so respond in a socially acceptable way
(measured by the lie scale), (b) because they may want to fake psychopathology symptoms in order to
receive attention and treatment (measured by the F scale) (Rogers, Sewell, Martin, & Vitacco, 2003), (c)
because they are being evasive or simply having difficulty reading or interpreting the questions
(measured by the ? scale), or (d) because they are defensive and want to avoid appearing incompetent
(measured by the K scale).
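In practice the profile check itself is simple arithmetic: any clinical scale with a score above 70 is flagged for attention. The scale names below follow MMPI-2 clinical scales, but the scores and the bare cut-off rule are invented for this sketch:

```python
CUTOFF = 70  # scores above 70 are conventionally taken as clinically notable

# Hypothetical client profile: scale name → score (each scale runs 0–120)
profile = {
    "Hypochondriasis": 58,
    "Depression": 74,
    "Paranoia": 81,
    "Social introversion": 49,
}

elevated = [scale for scale, score in profile.items() if score > CUTOFF]
print(elevated)  # → ['Depression', 'Paranoia']
```

A real interpretation would also consult the validity scales before trusting any elevated clinical scale, for exactly the reasons (faking, evasiveness, defensiveness) listed above.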
hypothetical constructs Constructs that are not necessarily directly observable but have to
be inferred from other data.
Because of their potential diagnostic and theoretical value (such inventories can also be used as research
tools to help us understand the causes of psychopathology), the number of specific trait inventories
available to clinicians and researchers has burgeoned in the past 20 years. While some are very valuable
and have good face validity, many others are relatively underdeveloped. For example, unlike the MMPI,
a majority of specific trait inventories fail to include any questions to indicate whether respondents are
faking responses or are merely being careless with their answers, and many are not subjected to
stringent standardisation, validation, and reliability tests. There is even a view that researchers may
simply create a specific trait inventory to serve their own theoretical purposes and to give their own
theoretical perspective a façade of objective credibility (i.e., they may create an inventory simply to
‘measure’ a construct that they themselves have invented) (Davey, 2003).
Projective tests
This group of tests usually consists of a standard fixed set of stimuli that are presented to the client but
are ambiguous enough for the client to put their own interpretation on what the stimuli represent. This
often allows for considerable variation in responses between clients and also considerable variation
between clinicians in how the responses should be interpreted. The most widely used of the projective
tests are the Rorschach Inkblot Test, the Thematic Apperception Test (TAT), and the Sentence
Completion Test. Projective tests were originally based on the psychodynamic view that people's
intentions and desires are largely unconscious and must be inferred indirectly (Dosajh, 1996). Most
projective tests were designed during the mid‐twentieth century and were extremely popular for
assessment purposes right up to the turn of the century. However, as we shall see later because they are
open‐ended tests that allow significant variation in client responding, they are significantly less reliable
and valid than more structured tests. Nevertheless, even though their popularity has declined in recent
years (Piotrowski, Belter, & Keller, 1998), many clinicians still use these types of tests to give them some
first impressions of a client’s symptoms or as part of a larger battery of assessment procedures (Garb,
Wood, Lilienfeld, & Nezworski, 2002).
projective tests A group of tests usually consisting of a standard fixed set of stimuli that are
presented to clients, but which are ambiguous enough for clients to put their own interpretation
on what the stimuli represent.
The Rorschach Inkblot Test was originally developed by the Swiss psychiatrist Hermann Rorschach.
He created numerous inkblots by dropping ink onto paper and then folding the paper in half to create a
symmetrical image. He discovered that everyone he showed them to saw designs and shapes in the blots,
and he assumed that their responses revealed information about the individual's psychological
condition. Most versions of the Rorschach Inkblot Test now use around 10 official inkblots of which 5
are black ink on white, 2 are black and red ink on white, and 3 are multicoloured. Examples of a
black‐and‐white and a multicoloured inkblot are given in Figure 2.3. The clinician will have available to
them a highly structured scoring system (e.g., Exner & Weiner, 1995) that allows them to compare the
scores the client provides with a set of standardised personality norms that may provide indications of
underlying psychopathology. However, if the test is used as a formal assessment procedure, it is still
heavily dependent on the clinician's interpretation of the client's responses. For example, if certain
themes keep appearing in the client's responses they may provide evidence of underlying conflicts, such
as the repeated perception of ‘eyes’ on the inkblots, which may perhaps provide evidence of paranoia that
the clinician may want to explore further. Nevertheless, the Rorschach test can be a valid and reliable test for the
detection of thought disorders that may be indicative of schizophrenia or people at risk of developing
schizophrenia (Lilienfeld, Wood, & Garb, 2000; Mondal & Prakash, 2015).
Rorschach Inkblot Test A projective personality test using inkblots created by dropping ink
onto paper and then folding the paper in half to create a symmetrical image.
FIGURE 2.3 The Rorschach Inkblot Test. The Rorschach Inkblot Test usually consists of 10 official
inkblots. Five inkblots are black ink on white. Two are black and red ink on white. Three are multicoloured, and the pictures
give examples of a black and white inkblot and a multicoloured one. The clinician shows the inkblots in a particular order
and asks the client: ‘What might this be?’ (a free association phase). After the client has seen and responded to all the
inkblots, the clinician then presents them again one by one to study (the inquiry phase). The client is asked to list everything
they see in each blot, where they see it, and what there is in the blot that makes it look like that. The blot can also be
rotated. The clinician also times the client, which then factors into the overall assessment. Methods of interpretation differ.
The most widely used method in the United States is based on the work of John E. Exner (Exner & Weiner, 1995). In
this system, responses are scored systematically with reference to their level of vagueness or synthesis of multiple images in
the blot, the location of the response, which of a variety of determinants is used to produce the response (for example,
whether the shape of the inkblot, its colour, or its texture is primary in making it look like what it is said to resemble), the
form quality of the response (to what extent a response is faithful to how the actual inkblot looks), the contents of the
response (what the respondent actually sees in the blot), the degree of mental organising activity that is involved in producing
the response, and any illogical, incongruous, or incoherent aspects of responses.
The Thematic Apperception Test (TAT) is a projective personality test consisting of 30 black and
white pictures of people in vague or ambiguous situations (Morgan & Murray, 1935) (Figure 2.4). The
client is asked to create a dramatic story around the picture, describing what they think is happening in
the picture; what events preceded it; what the individuals in the picture are saying, thinking, or feeling;
and what the outcome of the situation is likely to be. Many clinicians claim that this test is particularly
useful for eliciting information about whether the client is depressed, has suicidal thoughts, or strong
aggressive impulses (Rapaport, Gill, & Shaefer, 1968). Clients usually identify with one of the characters
in the pictures (known as the ‘hero’) and the picture then serves as a vehicle for the client to describe
their own feelings and emotions as if they were involved in the ambiguous scene. The TAT may also
allow the clinician to determine the client's expectations about relationships with peers, parents, other
authority figures, and romantic partners. However, there is some doubt about how valid the TAT is as a
clinical diagnostic tool, with one study indicating that clinicians classified individuals as clinical or
nonclinical cases at close to chance level when using the TAT alone (Wildman & Wildman, 1975).
Nevertheless, it can be a useful tool after a client has been formally diagnosed in order to match them
with a suitable form of psychotherapy.
Thematic Apperception Test (TAT) A projective personality test consisting of 30 black and
white pictures of people in vague or ambiguous situations.
FIGURE 2.4 The Thematic Apperception Test (TAT). The Thematic Apperception Test consists of 30
black and white pictures similar to the one in the figure. The client is asked to create a dramatic story around the picture.
Clients will usually identify with one of the characters in the picture which enables them to express their own feelings and
emotions as if they were involved in the scene.
Finally, the Sentence Completion Test is a useful open‐ended assessment test that was first
developed in the 1920s and provides clients with the first part of an uncompleted sentence, such as ‘I
like. . . .’, ‘I think of myself as. . .’, ‘I feel guilty when. . .’, which the client then completes with words of
their own. This test allows the clinician to identify topics that can be further explored with the client,
and can also help to identify ways in which an individual's psychopathology might bias their thinking
and the way they process information. Research Methods in Clinical Psychology Box 2.1 shows how the
sentence completion task has been used to identify trauma‐relevant thinking biases in combat veterans
with post‐traumatic stress disorder (PTSD) (Kimble et al., 2002). Such thinking biases help to maintain
emotional problems, and using the sentence completion task can help the clinician to identify ways of
thinking that can be targeted during treatment (Research Methods Box 2.1).
Sentence Completion Test An open-ended projective personality test that provides clients
with the first part of an uncompleted sentence which they complete with words of their own.
As we mentioned earlier, the popularity of projective tests has declined steadily over the years. There are
a number of reasons for this:
1. Such tests are mainly based on revealing information that is relevant to psychodynamic approaches
to psychopathology, and the role of psychodynamic approaches in the assessment and treatment of
psychopathology has itself declined over the past 30 years.
2. Even though standardised procedures for scoring projective tests have developed over recent years,
the reliability of such tests is still disappointingly low (Lilienfeld, Wood, & Garb, 2000), and
different clinicians will often interpret the same responses in quite different ways (Wildman &
Wildman, 1975).
3. Even with highly standardised scoring methods, some projective tests such as the Rorschach Test
often result in psychopathology being inferred when other evidence for such a conclusion is sparse.
For example, Hamel, Shaffer, & Erdberg (2000) administered the Rorschach Test to 100 school
children—none of whom had any history of mental health problems. However, the results of the
test were interpreted in almost all cases as evidence of faulty reasoning that might be indicative of
schizophrenia or mood disorder.
4. Projective tests such as the TAT have intrinsic cultural biases. For instance, in the traditional set of
TAT pictures there are no ethnic minority characters even though the client is expected to identify
with one of the characters in the picture. In some cases, this has been overcome by developing
more contemporary TAT pictures that contain figures from ethnic minorities (Constantino,
Flanagan, & Malgady, 2001).
The sentence completion task is an open‐ended assessment test that provides the client
with the first part of a sentence which the client then has to complete in their own words.
This is a useful projective test that allows the clinician to identify topics that are important
to the client, and to identify any biases in the way that a client tends to think about things.
For example, incomplete sentences such as ‘My greatest fear. . . .’, ‘I feel. . . . .’, ‘I need. . . ’
etc. can give the clinician an insight into some of the client’s emotional responses.
Similarly, questions such as ‘My father. . . .’, ‘Other pupils. . .’, ‘Most girls. . .’ will provide
some insight into the client's feelings about others.
The sentence completion task can also be used successfully as an important research tool.
For example, Kimble et al. (2002) used a sentence completion task to assess interpretation
biases in combat veterans who were diagnosed with PTSD (see Chapter 6). They gave their
participants 33 sentences to complete. Each item was generated so that it could be
completed with words of military or nonmilitary content. Examples included:
5. Most projective tests are labour intensive for the limited amount of objective information they
provide. Clinicians need extensive training in order to administer tests such as the Rorschach and
TAT, and they are time consuming to administer, interpret, and score. Given the development of
more objective and easily scored inventories, this has inevitably led to a decline in the popularity
of projective tests.
Intelligence tests
Intelligence tests are regularly used by clinicians in a variety of settings and for a variety of reasons. IQ
(intelligence quotient) tests, as they are now generally known, were first devised in the early part of
the twentieth century as a means of comparing intellectual ability in specific groups of people (e.g.,
army recruits). Arguably the first IQ test was that produced by the French psychologist Alfred Binet in
1905, a test that purported to measure intelligence across a number of verbal and nonverbal skills. From
early tests such as this there are now over 100 tests of intelligence available, most of which are
standardised to have a score of 100 as the mean and a score of 15 or 16 as the standard deviation (see
Figure 1.1). As you can see from Figure 1.1, 68% of the population will score between 84 and 116 (one
standard deviation from the mean) on IQ tests, and around 2–3% of the population will have IQ scores
more than 2 standard deviations below the mean (i.e., less than 70). Because of their continued
development over the previous 100 years, IQ tests have high internal consistency (i.e., a client will score
roughly the same on different items that measure the same ability), high test‐retest reliability (i.e., a client
who takes the same test twice but some months or years apart will achieve roughly the same score both
times), and good validity (i.e., the tests are good at predicting intellectual ability or future educational
performance) (Sparrow & Davies, 2000).
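The population percentages quoted above follow directly from the normal curve. A minimal sketch assuming a mean of 100 and a standard deviation of 15 (tests normed with a standard deviation of 16 give slightly different bands):

```python
import math

def iq_cdf(x, mean=100.0, sd=15.0):
    """Proportion of the population scoring below x on a normal IQ distribution."""
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

within_one_sd = iq_cdf(115) - iq_cdf(85)  # scores within 1 SD of the mean
below_70 = iq_cdf(70)                     # more than 2 SDs below the mean
print(round(within_one_sd, 3), round(below_70, 3))  # → 0.683 0.023
```

These figures reproduce the roughly 68% of scores within one standard deviation and the 2–3% falling more than two standard deviations below the mean described in the text.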
Intelligence tests are used by clinicians in a number of contexts. For example, they are used with other
measures of ability to diagnose intellectual and learning disabilities, and the cardinal DSM‐5 diagnostic
criterion for intellectual disability is based primarily on an IQ score two standard deviations below the
mean (e.g., an IQ score of between 65 and 75, see Chapter 17). IQ tests are also used to try to assess the
needs of individuals with learning, developmental, or intellectual disabilities so that support can be
provided in any specific areas of need. Tests that provide scores on a range of different ability scales are
best suited for this purpose, and one such example is the Wechsler Adult Intelligence Scale, now in
its fourth edition (WAIS‐IV) (Wechsler, 2008). This contains scales that measure vocabulary, arithmetic
ability, digit span, information comprehension, letter‐number sequencing, picture completion ability,
reasoning ability, symbol search, and object assembly ability (Photo 2.1). Tests such as the WAIS‐IV can
also be used as part of a battery of tests to assess whether an individual is eligible for special educational
needs, and it will provide information that will suggest strategies, services, and supports that will
optimise the individual's functioning within society. Intelligence tests are frequently used as part of a
battery of assessments used in neurological evaluations (see Chapter 15, Section 15.1.2) and can help to
detect when a client has brain damage caused by traumatic injury or cerebral infection or has a
degenerative brain disorder such as Alzheimer's disease.
PHOTO 2.1 The Wechsler Adult Intelligence Scale (WAIS‐IV). The WAIS‐IV is one of the tests of intellectual ability
most commonly used by clinicians. It comprises a range of verbal and performance tests that measure intellectual ability on
a variety of subscales including vocabulary, arithmetic ability, digit span, information comprehension, letter‐number
sequencing, picture completion ability, reasoning ability, symbol search, and object assembly ability
Wechsler Adult Intelligence Scale A test designed to measure intelligence in adults and
older adolescents. It contains scales that measure vocabulary, arithmetic ability, digit span,
information comprehension, letter‐number sequencing, picture completion ability, reasoning
ability, symbol search, and object assembly ability.
However, despite their practical benefits across a range of clinical contexts, intelligence tests still have a
number of limitations. First, intelligence is an inferred construct. That is, it does not objectively exist in
the same way that physical attributes such as heart rate or blood pressure exist but is a hypothetical
construct that has been developed by psychologists to help us try to understand how well individuals can
adapt to various problems. This has led some sceptical psychologists to suggest that there is no clear
definition of intelligence but that ‘intelligence is merely what IQ tests measure’! Second, if the latter
statement is true, then our conception of whether someone is intelligent or not will depend on the
reliability and validity of the individual IQ test we use to measure their intelligence, and this can raise
some difficulties. For example, many IQ tests are culturally biased and appear to be based on middle‐
class, majority ethnic background views of what is adaptive (Gopaul‐McNicol & Armour‐Thomas,
2002), and so will disadvantage those from lower socio‐economic backgrounds, from ethnic minorities,
or from poorer quality educational backgrounds (Walker, Batchelor, & Shores, 2009). While attempts
have been made over the years to eradicate cultural bias of this kind, it is difficult to eliminate it entirely.
For instance, a test question may ask whether a cup goes with a bowl, a spoon, or a saucer, but a child
from a low socio‐economic background may never have drunk from a cup with a saucer and may be
more likely to associate a cup or mug with a spoon. Even so, because it is widely known that some ethnic
minorities perform relatively poorly on IQ tests, this knowledge alone can interfere with test
performance in that group (Spencer, Steele, & Quinn, 1999). Third, intelligence tests tend to be rather
‘static’ tests of intellectual ability (but see the next point), providing only a snapshot of ability at a
single point in time. What they do not usually measure is the individual's
capacity to learn or their potential to acquire new cognitive abilities (Grigorenko & Sternberg, 1998).
Fourth, IQ scores may not be as stable over time as we originally imagined: between 1932 and 1978 there
was a 13.8‐point increase in IQ scores (Flynn, 1984). This is known as the Flynn effect, and although
the rate of this IQ increase has slowed in recent years, it is still apparent (Trahan, Stuebing, Hiscock, &
Fletcher, 2014). It is not clear what has been causing this increase in IQ scores over time, but one
implication is that a test will overestimate an individual's IQ score by an average of about 0.3 points per
year between the year the test was normed and the year in which the test was administered. This has
important implications for psychopathologies such as intellectual disabilities which use the IQ measure
as a criterion for disability and, as a consequence, a criterion for access to services and support. Fifth,
many researchers argue that our current conception of intelligence as measured by IQ tests is too
narrow. There are many other skills that are not usually included in our conceptions of, and measures
of, intelligence, and these include musical ability; physical skill; the ability to perceive, understand, and
express emotion (known as ‘emotional intelligence’); and the ability to implement solutions to real‐world
problems (Gardner, 1998; Mayer, Salovey, & Caruso, 2000). For example, are Lionel Messi's footballing
skills as much an intelligent skill as arithmetic or verbal ability (Bishop, Wright, Jackson, & Abernathy,
2013)? (Photo 2.2).
PHOTO 2.2 ‘The Little Maestro’. Are Lionel Messi's footballing skills as much an intelligent skill as arithmetic or
verbal ability?
Source: Reuters / Darren Staples ‐ stock.adobe.com
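The norm-inflation correction implied by the Flynn effect described earlier is straightforward arithmetic: subtract roughly 0.3 points for every year between when the test was normed and when it was administered. The years and score below are invented for illustration:

```python
FLYNN_RATE = 0.3  # approximate IQ-point inflation per year since norming

def flynn_adjusted(observed_iq, year_normed, year_tested, rate=FLYNN_RATE):
    """Correct an observed IQ score for norm inflation (Flynn, 1984)."""
    return observed_iq - rate * (year_tested - year_normed)

# A score of 72 on a test normed 20 years earlier adjusts to 66.0,
# crossing the conventional two-SD threshold discussed in the text
print(flynn_adjusted(72, 1990, 2010))  # → 66.0
```

As the text notes, such an adjustment can determine whether an individual meets an IQ criterion for intellectual disability, and hence their access to services and support.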
Psychophysiological tests
There are a number of psychophysiological tests that can be used to provide information about
potential psychological problems. For example, anxiety causes increased activity in the sympathetic
nervous system and is regularly accompanied by changes in physiological measures such as heart rate,
blood pressure, body temperature, and electrodermal responding. Similarly, anger is usually associated
with physiological changes in blood pressure and heart rate. So, psychophysiological tests can provide
useful information related to emotionally‐based psychological problems.
One important measure of physiological activity is electrodermal responding, sometimes known as
the galvanic skin response (GSR) or skin conductance response (SCR). Emotional responses such as
anxiety, fear, or anger increase sweat‐gland activity, and changes in this activity can be recorded with the
use of electrodes that would normally be attached to the fingers of the participant. Traditionally,
changes in skin conductance caused by sweat‐gland activity have been measured as changes on a
polygraph—a pen that records changes in skin conductance on a continually moving roll of graph
paper, but this has now been superseded by direct computer analysis of skin conductance which is
displayed on a computer screen (see Photo 2.3). Skin conductance measures have been used in a variety
of contexts: (a) to assess the kinds of stimuli or events that elicit anxiety in a client (Cuthbert et al., 2003;
Alpers, Wilhelm, & Roth, 2005), (b) to assess autonomic or physiological reactivity in certain diagnostic
groups (e.g., individuals diagnosed with antisocial personality disorder tend to have less reactive
autonomic nervous systems than nonclinical samples) (Lykken, 1995), (c) to assess the ability of clients to
cope following treatment interventions (Bobadilla & Taylor, 2007; Grillon et al., 2004), and (d) whether
autonomic indices of anxiety or arousal correspond with appropriate changes in behaviour (e.g., in
panic disorder, avoidance responses may be triggered by physiological changes indicative of anxiety)
(Karekla, Forsyth, & Kelly, 2004).
PHOTO 2.3 Polygraph machine. The polygraph is a device used to measure changes in physiological responding that may
indicate emotional changes such as anxiety, fear, or anger. The polygraph works by recording physiological measures (such as
skin conductance or heart rate) on a continually moving roll of graph paper. In more recent times, these measures can be
analysed directly by computer and the output displayed on the computer screen.
Finally, another important psychophysiological assessment measure is the electroencephalogram
(EEG). This involves electrodes being attached to the scalp that record underlying electrical activity and
can help to localise unusual patterns of activity in different areas of the brain. Abnormal electrical patterns
detected by EEG can indicate a number of problems, including epilepsy, brain tumours, or brain injury
(Cuthill & Espie, 2005).
Neuroimaging techniques
Many behavioural, cognitive, and psychological problems may be linked to abnormalities in brain
functioning, and while neurological tests can indicate that possible brain dysfunction may be involved,
we will usually need to use techniques that provide images of the brain to confirm this. There are a
range of neuroimaging or brain imaging techniques now available, and some provide the clinician with
anatomical and structural information about the brain (e.g., whether a brain tumour is present) while
others provide information about brain activity and brain functioning (e.g., whether specific brain areas
are fully functioning).
Computerised axial tomography, or CAT scan machines, are sophisticated versions of X‐ray
machines and can be used to form a three dimensional picture of the brain. Figure 2.5 shows a CAT
scan machine. The patient lies down on a platform which then moves through a large doughnut‐like
ring. The ring turns so that with each turn a narrow ‘slice’ of the brain is X‐rayed, and a computer uses
this information to construct two‐dimensional cross‐sections of the brain; these separate
images can also be combined to provide a three‐dimensional image of the brain. CAT scan images can
help to detect abnormal growths in the brain such as tumours or enlargement of the ventricles in the
brain that can indicate tissue degeneration typical of dementia or schizophrenia.
Positron emission tomography, or PET, scanning allows measurement of both brain structure and
function. A PET scan can provide pictures of chemical activity in the brain either at rest or when the
participant is undertaking cognitive tasks such as language, learning, remembering, or sensory
processing. The PET scanner utilises radiation emitted from the participant to develop images. Each
participant is given a minute amount of a radioactive drug that closely resembles a natural substance
used by the body. Gamma radiation produced by the radioactive drug is detected by the PET scanner
and shows in fine detail the metabolism of glucose in the brain. The PET scanner's computer uses this
detail to produce colour pictures of the functioning brain. Brightly coloured areas represent areas of the
brain where metabolic rates are high, and represent high levels of brain activity. Figure 2.6 illustrates a
series of PET scans taken of a human infant's brain at intervals from 1 to 12 months, and these images
clearly show the increase in brain activity with early development. Because the PET scan
provides images of the brain indicating both levels of activity and areas of activity, it is a useful tool for
assessing cognitive functioning, and provides information about brain functioning in degenerative
diseases such as Alzheimer's disease, and brain functioning in intellectual disabilities such as Down
syndrome. In Chapter 8, Figure 8.3 also provides an example of the use of PET scans in assessing brain
functioning in schizophrenia.
A more recent and less expensive way of measuring chemical activity in the brain is Single‐Photon
Emission Computed Tomography (SPECT). Like PET, SPECT requires injecting a radioisotope
into the bloodstream but is able to provide a 3‐D image of neurotransmitter activity in the brain
(Pagani, Carletto, & Ostacoli, 2019).
FIGURE 2.5 Computerised Axial Tomography or ‘CAT’ Scan. The top picture shows a CAT scan
machine. The client lies down on the platform with their head positioned within the large doughnut‐like ring. The ring turns
to X‐ray individual thin ‘slices’ of the brain, and a computer is used to turn these individual images into either a two‐
dimensional or three‐dimensional picture of the brain. The lower picture shows an example CAT scan of a ‘slice’ of a
normal brain next to one that reveals a large brain tumour (the darker area).
One further imaging technique that has been developed is known as magnetic resonance imaging,
or MRI. MRI scanning involves the participant being placed inside a large circular magnet which
causes the hydrogen atoms in the body to move. This then produces an electromagnetic signal that is
converted by the scanner's computer into visual pictures of the brain. Pictures of the brain produced by
MRI scanning are highly detailed and allow the detection of even the smallest of lesions or tumours. A
subsequent development of MRI technology is known as functional magnetic resonance
imaging, or fMRI. This allows the clinician to take brain images so quickly that tiny changes in brain
metabolism can be detected and can provide minute‐to‐minute information about actual brain activity.
This technology can be used to measure changing brain activity while the participant is undertaking
particular tasks, such as a memory task or viewing an emotional film. Figure 2.7 provides an example of
an fMRI scan of individuals diagnosed with PTSD who are asked to recall an autobiographical
memory that gives rise to flashbacks. fMRI analyses allow the researcher to identify which areas of the
brain are involved in this activity and the sequential activation of brain areas that are specific to
experiencing a distressing ‘flashback’.
FIGURE 2.6 PET Scan of a Developing Infant Brain. These images are of PET scans showing the
increase in brain activity which accompanies the growth of the brain, in the same infant, from the age of 1 to 12 months.
This can be used, for instance, to pinpoint developmental problems in children much earlier than other tests would. Brightly
coloured areas represent areas of the brain where metabolic rates are high and indicate high levels of brain activity.
The use of modern brain imaging technology has been useful in providing detailed evidence of brain
abnormalities and dysfunction in relation to a number of psychopathology problems, and as we shall see
in Chapter 3, these techniques are not only valuable for assessment purposes but are also a useful
research tool. However, the use of brain imaging methodology to understand mental health problems is
not without its problems, and as yet there are still many mental health problems for which we have so
far failed to find any significant brain biomarkers – either in the form of dysfunctional brain processes
or normal brain processes that are indicators of specific disorders (Lozupone et al., 2017).
2.2.5 Clinical Observation
A further method of collecting useful clinical information is by direct observation of a client's behaviour.
This can supplement information from interviews and psychological tests and often allows an
assessment of behaviour in its natural context, such as the home, school classroom, or community
setting. Direct observation can provide an objective assessment of the frequency of particular behaviours
(e.g., aggressive behaviours) when this may not be so easily obtained from reports given by the client
themselves, their family, or carers. It also allows behaviour to be assessed in the context of events that
precede the behaviour (and so may trigger the problem behaviour) and events that immediately follow the
behaviour (and may represent the consequences of the behaviour that reinforce its occurrence). In
Chapter 17, Treatment in Practice Box 17.1 provides a detailed example of how an observational
technique can be used to identify what factors might be triggering and maintaining challenging
behaviour in an individual with intellectual disabilities. This example used an ABC chart that requires
the observer to note what happens before the target behaviour occurs (A), what the individual did (B),
and what the consequences of the behaviour were (C). Focus Point 2.2 provides some examples of how
behaviours and events can be coded when undertaking a clinical observation (Nock & Kurtz, 2005),
and the type of coding method used will depend largely on what you want to find out (e.g., do you
simply want to know how frequently a behaviour occurs, or do you want to know more about the
context in which a behaviour is enacted?) (Focus Point 2.3).
FIGURE 2.7 Functional Magnetic Resonance Imaging (fMRI). fMRI scans allow the researcher to
measure brain activity while the participant is completing various behavioural or cognitive tasks. In this example,
individuals with a diagnosis of PTSD were asked to recall autobiographical memories that might give rise to distressing
‘flashbacks’ of their trauma, and researchers were able to track the brain activity that accompanied this experience.
From Whalley et al., 2013.
ABC chart An observation method that requires the observer to note what happens before the
target behaviour occurs (A), what the individual did (B), and what the consequences of the
behaviour were (C).
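The ABC chart described above is essentially a three-column record: antecedent, behaviour, consequence. As a purely illustrative sketch (the field names and logged behaviours below are invented, not taken from the text), the chart's logic of looking for the consequences that reliably follow a target behaviour could be represented like this:

```python
from dataclasses import dataclass
from collections import Counter

# One row of a hypothetical ABC observation chart: what happened before
# the behaviour (A), what the individual did (B), and what followed (C).
@dataclass(frozen=True)
class ABCRecord:
    antecedent: str
    behaviour: str
    consequence: str

def consequences_for(records, behaviour):
    """Tally the consequences that followed a given target behaviour;
    consequences that recur may hint at what is reinforcing it."""
    return Counter(r.consequence for r in records if r.behaviour == behaviour)

# Illustrative entries, not drawn from a real observation
log = [
    ABCRecord("noisy room", "self-injury", "removed from room"),
    ABCRecord("crowded hall", "self-injury", "removed from room"),
    ABCRecord("quiet task", "on-task", "praise"),
]
print(consequences_for(log, "self-injury"))  # Counter({'removed from room': 2})
```

In this toy log, both instances of self-injury are followed by removal from the room, which is exactly the kind of pattern the ABC method is designed to surface.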
There are a number of advantages to using observational techniques. First, if the observer is
appropriately trained, observation can provide important objective measures of the frequency of
behaviours and those events that precede and follow them, and the latter information will often provide
an insight into the purpose the problematic behaviour serves (e.g., in Treatment in Practice Box 17.1,
systematic observation reveals that the purpose of Andy's self‐injurious behaviour is to enable him to be
removed from noisy and crowded situations) (Kahng et al., 2014; Hastings & Noon, 2005). Second,
observational data has greater external or ecological validity than self‐reports or other forms of
testing because such data provide a measurement of the behaviour as it is actually occurring in a
context. Third, observation of behaviour in a context can often suggest workable answers to problem
behaviour as can clearly be seen in Treatment in Practice Box 17.1. Once it was established that Andy's
self‐injurious behaviour was functioning to get him removed from a stressful environment, staff could
look out for signs that he was becoming overstimulated and remove him from the room before he began
to injure himself.
ecological validity The extent to which conditions simulated in the laboratory reflect real-life
conditions.
Having listed these advantages of clinical observation, it also has a number of drawbacks. First, it is one
of the more time‐consuming forms of assessment, not just in terms of the amount of time required to
simply observe behaviour but also in terms of the amount of time needed to properly train observers
in the use of the various coding systems (e.g., Hawes, Dadds, & Pasalich, 2013) (see Focus Point 2.3).
This is especially so if members of the client's family need to be trained to make systematic observations
in the home setting. Second, observation will usually take place in a specific setting (e.g., the school
classroom or the home), and behaviour in this specific context may not be typical of behaviour in other
contexts (Nock & Kurtz, 2005). Third, the presence of an observer may lead those involved in the
observation setting to behave differently to how they would normally behave (Kazdin, 1978; Skinner,
Dittmer, & Howell, 2000), and this may often be the case with children, who will show dramatic
improvements in behaviour when they are aware they are being observed. This problem can often be
overcome by videorecording behaviour without an observer present and then analysing this at a later
time. Similarly, the clinician may want to undertake analogue observations in a controlled
environment that allows surreptitious observation of the client. For example, children can be observed
interacting in a playroom while the observer is situated behind a two‐way mirror. Fourth, unless
observers are properly trained in the coding methods used, there may be poor interobserver reliability
(Kamphaus & Frick, 2002). That is, two different observers assessing the same participant may focus on
quite different aspects of behaviour and context and arrive at quite different conclusions about the
frequency and causes of behaviour. Fifth, as in all observational procedures, the data can be influenced
by the observer's expectations. Observer expectations can cause biases in the way that information is
viewed and recorded and this can be caused by the theoretical orientation of the observer and what
they already know about the person being observed.
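Interobserver reliability of this kind is commonly quantified with chance-corrected agreement statistics such as Cohen's kappa, which compares two observers' observed agreement with the agreement expected by chance. A minimal sketch (the interval codes below are invented for illustration):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two observers
    coding the same session. 1 = perfect agreement, 0 = chance level."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Proportion of intervals on which the two observers agreed
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance from each observer's category base rates
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two observers coding the same ten observation intervals as on-task/off-task
a = ["on", "off", "on", "on", "off", "on", "on", "off", "on", "on"]
b = ["on", "off", "on", "off", "off", "on", "on", "on", "on", "on"]
print(round(cohens_kappa(a, b), 2))  # 0.52
```

Here the observers agree on 8 of 10 intervals, but kappa is only about 0.52 once chance agreement is discounted, illustrating why raw percentage agreement can flatter poorly trained observers.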
In the figure are four different types of observational coding forms and the one that a clinician
would use would depend on the type of data they want to collect. Figure 1 is a simple coding
scheme in which the observer merely describes the behaviours they observe plus the antecedents
and consequences of these behaviours. Because the observer is describing the behaviours in
their own words, there is likely to be poor inter‐rater reliability using this scheme. Figure 2
provides a coding system that simply measures the frequency of selected behaviours. Figure 3
extends this by providing the frequency of selected behaviours over a period of time. This will
allow the clinician to see if there is anything interesting in the sequence or order that
behaviours are emitted. Finally, Figure 4 provides a coding scheme that allows the recording of
quite complex information, including the behaviour of the client in relation to others in the
situation, such as teacher and peers.
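The frequency-over-time scheme described for Figure 3 can be sketched as a simple tally of coded behaviours within fixed time intervals (the behaviour codes and timings below are invented for illustration):

```python
from collections import defaultdict

# Hypothetical event stream: (minute_of_session, behaviour_code)
events = [(1, "out-of-seat"), (2, "talking"), (2, "out-of-seat"),
          (7, "talking"), (8, "out-of-seat"), (14, "talking")]

def frequency_by_interval(events, interval=5):
    """Count each coded behaviour within consecutive fixed-length intervals,
    so the clinician can inspect the order in which behaviours occur."""
    table = defaultdict(lambda: defaultdict(int))
    for minute, code in events:
        table[minute // interval][code] += 1
    return {k: dict(v) for k, v in sorted(table.items())}

print(frequency_by_interval(events))
```

For these toy data the first 5-minute interval contains two out-of-seat events and one talking event, which is the kind of sequence information the simpler frequency-only form in Figure 2 would lose.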
One final form of clinical observation that is frequently used is known as self‐observation or self‐
monitoring. This involves asking the client to observe and record their own behaviour, perhaps by
using a diary or a smartphone to note when certain behaviours or thoughts occur and in what contexts
they occur. (Focus Point 2.4—Apps for Self‐Monitoring). This has the benefit of collecting data in real
time and overcomes problems associated with poor and biased recall of behaviour and events when
using retrospective recall methods (Strongman & Russell, 1986). The increasing use of electronic diaries
for self‐observation has come to be known as ecological momentary assessment (EMA) (Stone &
Shiffman, 1994; Dunton, 2017), and such methods have been used to gather information about clients'
day‐to‐day experiences, to aid diagnosis, to plan treatment, and to evaluate the effectiveness of
treatment (Piasecki, Hufford, Solhan, & Trull, 2007). In addition, self‐monitoring itself can have
beneficial effects on behaviour even prior to any attempts at intervention. For example, many
problematic behaviours (e.g., smoking, illicit drug use, excessive eating) can occur without the individual
being aware of how frequently they happen and in what circumstances they happen, and self‐
monitoring can begin to provide some self‐knowledge that can be acted on by the individual. As a result,
self‐monitoring often has the effect of increasing the frequency of desirable behaviours and decreasing
the frequency of undesirable behaviours (McFall & Hammen, 1971; Maas, Hietbrink, Rinck, & Keijsers,
2013). This is known as reactivity, and clinicians can often take advantage of this process to facilitate
behaviour change (Figure 2.8).
self-observation A form of clinical observation that involves asking clients to observe and
record their own behaviour, perhaps by using a diary or a smartphone to note when certain
behaviours or thoughts occur and in what contexts they occur.
self-monitoring A form of clinical observation that involves asking clients to observe and
record their own behaviour, to note when certain behaviours or thoughts occur, and in what
contexts they occur.
ecological momentary assessment (EMA) The use of diaries for self- observation or self-
monitoring, perhaps by using an electronic diary or a palmtop computer.
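As an illustrative sketch of the kind of electronic diary EMA relies on (the class and field names are hypothetical, not taken from any real app), each entry is stamped at the moment it is recorded, which is what sidesteps the biased retrospective recall discussed above:

```python
from datetime import datetime

class EMADiary:
    """A minimal electronic diary: each record is timestamped in real time
    as the client logs a behaviour and the context in which it occurred."""

    def __init__(self):
        self.entries = []

    def record(self, behaviour, context, mood=None, when=None):
        self.entries.append({
            "time": when or datetime.now(),  # stamped at the moment of entry
            "behaviour": behaviour,
            "context": context,
            "mood": mood,
        })

    def count(self, behaviour):
        """How often a behaviour was logged: the frequency information
        that self-monitoring makes visible to the client."""
        return sum(1 for e in self.entries if e["behaviour"] == behaviour)

diary = EMADiary()
diary.record("cigarette", context="doing schoolwork", mood="stressed")
diary.record("cigarette", context="received low grade")
diary.record("walk", context="with family", mood="relaxed")
print(diary.count("cigarette"))  # 2
```

Even this trivial count illustrates the reactivity effect described below: simply seeing the tally of cigarettes logged can itself change the behaviour.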
However, even with sophisticated apps, self‐reporting of mood can still be very unreliable, and
apps are now being developed that can record mood changes throughout the day by detecting
telltale changes in a person's voice that indicate the emotional state they are in. Miller (2012)
provides a detailed review of previous psychological research using mobile electronic devices,
and Lv, Su, and Martin (2016) review a number of recent apps and provide evidence‐based
recommendations for the development of future apps.
FIGURE 2.8 Self‐Monitoring of Smoking Behaviour. This figure shows the results of a smoking self‐
monitoring task undertaken by a college student. The student was asked to record what they were doing each time a cigarette
was smoked and each time they were given random prompts by a palm‐top computer. Comparing the base rates (random
prompts) with smoking rates when in various situations allows the clinician to see whether certain situations are triggers for
smoking. In this case comparisons suggest that low grades and doing schoolwork represent triggers for smoking, whereas
being with family or partner usually elicits no cigarette smoking.
From Piasecki, Hufford, Solhan, & Trull, 2007.
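The comparison described in Figure 2.8, situational smoking rates against base rates from random prompts, can be sketched as a simple ratio computation (the counts below are invented for illustration, not taken from the study):

```python
# Hypothetical counts: how often each situation co-occurred with smoking,
# versus how often it appeared at random prompts (the base rate).
smoking_prompts = {"schoolwork": 12, "low grade": 8, "with family": 1, "other": 9}
random_prompts = {"schoolwork": 6, "low grade": 2, "with family": 10, "other": 12}

def situation_ratios(smoking, base):
    """Ratio of a situation's share of smoking occasions to its share of
    random prompts; ratios well above 1 suggest the situation is a trigger."""
    total_s, total_b = sum(smoking.values()), sum(base.values())
    return {situation: (smoking[situation] / total_s) / (base[situation] / total_b)
            for situation in smoking}

ratios = situation_ratios(smoking_prompts, random_prompts)
# For these toy counts, 'schoolwork' and 'low grade' come out well above 1
# (candidate triggers), while 'with family' comes out well below 1.
```

This is the same logic the clinician applies informally when reading Figure 2.8: situations over-represented among smoking occasions, relative to their base rate, are candidate triggers.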
confirmatory bias A clinical bias whereby individuals with a mental health problem ignore
information that does not support their beliefs and they interpret ambiguous information as
supporting their beliefs.
Summary
You can see from the preceding sections that cultural bias in assessment and diagnosis is a complex and
pervasive phenomenon. The clinician needs to be aware of the sources of any cultural bias in these
processes and should be reflective about their own potential stereotypes of ethnic minorities and the
effect this might have on their clinical judgments.
SELF‐TEST QUESTIONS
Define what is meant by test‐retest reliability, interrater reliability, and internal reliability.
Define what is meant by concurrent validity, face validity, predictive validity, and construct
validity.
What are the main benefits and limitations of the clinical interview?
What is a structured interview? Can you provide an example of one?
What are the advantages of psychological tests as methods of assessment?
Can you describe a detailed example of a personality inventory?
How do projective tests differ from other types of psychological test?
Can you describe the features of at least one projective test?
What are the benefits and limitations of intelligence tests?
Can you name and describe at least two psychophysiological tests that might be used for
clinical assessment?
Can you name and describe three different neuroimaging techniques?
What are the benefits and limitations of clinical observation techniques?
What is ecological momentary assessment (EMA)?
Can you provide at least three examples of cultural bias in assessment and diagnosis?
Can you describe at least two studies that have identified some of the causes of cultural
bias in assessment and diagnosis?
SECTION SUMMARY
Most practicing clinical psychologists will develop a case formulation when dealing with a
client; this is an attempt to explain the client's problems in established theoretical
terms. In most cases, the explanation developed will also suggest interventions that may be successful in
resolving those problems, and it will be a precise account of the client's problems developed collaboratively
with the client rather than imposed on them (as a psychiatric diagnosis might be). Persons
(1989) has described case formulation as having six components: (a) creating a list of the client's
problems, (b) identifying and describing the underlying psychological mechanisms that might be
mediating these problems (and the nature of the mechanisms described will depend on the theoretical
orientation of the clinical psychologist—see below), (c) understanding the way in which the
psychological mechanisms generate the client's problems, (d) identifying the kinds of events that may
have precipitated the client's problems, (e) identifying how these precipitating events may have caused
the current problems through the proposed psychological mechanisms, and (f) developing a scheme of
treatment based on these explanations and predicting any obstacles to treatment.
How a case formulation is constructed will depend on the theoretical orientation of the clinical
psychologist and, within an individual formulation, explanation of the client's problems will be couched
in terms of the psychologist's own preferred theoretical approach. For example, those who work within a
cognitive or behavioural model of psychopathology (see Chapter 1, Section 1.3.2) will attempt to find
explanations for the client's problems based on cognitive and behavioural causes—sometimes known as
an ABC approach. That is, they will attempt to identify the antecedents (A) to the problems, describe the
beliefs (B) or cognitive factors that are triggered by these antecedents, and the consequences (C) of these
events. For example, if a client suffers from panic attacks, the case formulation may discover that (a)
these occur in situations where there are crowds of people (antecedents); (b) that the client believes that
feeling hot, sweaty, and faint are signals for an impending heart attack (beliefs); and (c) the client
indulges in certain ‘safety’ behaviours designed to keep her ‘safe’—such as avoiding going out of the
house—but which reinforce the symptoms and beliefs (consequences) (Tarrier & Johnson, 2015). Based
on this knowledge, the clinical psychologist can begin to understand the factors that are causing and
maintaining these problems (e.g., the faulty beliefs and the ‘safety’ behaviours) and develop therapeutic
interventions to try and deal with these.
In contrast, psychologists who hold a psychodynamic perspective use formulations to address the way
that current problems reflect underlying unconscious conflicts and early developmental experiences and
will couch their formulations in these kinds of ways. For those psychologists who believe that a holistic
or systemic view of a person's problems is important (e.g., their problems can only be fully understood
within a family or social context), the formulation will be developed in terms of the important
relationships between the client and important other people in their life. For example, within the context
of the family, someone with a psychological problem may be seen as a weak and dependent person, and
this may influence how other members of the family treat the client, and determine what demands the
client may make on their family. Thus, the client's problems can be formulated as interactions between
various ‘actors’ (the family members) that may maintain the client's problems (Marzillier & Marzillier,
2008; Dallos & Draper, 2005). Figure 2.9 provides a simple example of a systemic formulation that
attempts to explain how a client's problems are maintained by the relationships between him and other
members of his family. Johnstone & Dallos (2013) provide a comprehensive guide to the different
approaches to case formulation, and the British Psychological Society's Division of Clinical Psychology
has provided a good practice guide on the use of psychological formulation (British Psychological
Society, 2011).
In many cases clinicians prefer to represent their formulations in a diagrammatic form that permits easy
identification of factors that may be causing the client's problems, and it also enables the clinician to
clearly explain the formulation to the client. Activity Box 2.1 provides a detailed and structured
example of how a formulation based on a cognitive‐behavioural approach could be attempted, and
provides an example of a formulation interview that the reader can attempt to interpret in terms of the
theoretical model provided. This example shows how the case formulation for a client suffering panic
disorder would be interpreted by a cognitive‐behavioural psychologist in terms of existing cognitive
models of panic disorder (see Chapter 6, Section 6.4). Once the diagram is completed this should
suggest some possible targets for interventions (e.g., using cognitive behaviour therapy to change
misinterpretations of bodily sensations and to prevent the use of safety behaviours, see Chapter 6)
(Activity Box 2.2).
FIGURE 2.9 A Systemic Case Formulation. Jack has problems with both drugs and drink. He
later became involved in petty crime, and was diagnosed as depressed. He also began to exhibit
paranoia and delusional ideation. This simple formulation shows how the reactions of Jack and his
mother and sister reinforce Jack's feelings of rejection and his abuse of drink and drugs.
From Dallos & Stedman, 2006.
Tarrier (2006) lists the various advantages of the case formulation approach: (a) it allows a flexible and
idiosyncratic understanding of each client's individual problems irrespective of individual diagnoses
they may have been given (i.e. in clinical practice, a client's problems do not usually fall into simple
diagnostic categories but may reflect a range of problems unique to that individual); (b) it is
collaborative and treats the client with regard; (c) it is firmly based on a theoretical understanding of
psychopathology (unlike diagnosis which is based entirely on a description of symptoms); (d) it can
include information about a client's past history (e.g., their exposure to risk factors) and the client's
personal, social, and family history; and (e) it allows the development of treatment strategies that can be
moulded to the specific needs of that individual client, and is especially advantageous in treating
complex cases that do not easily conform to standard diagnostic categories. Finally, there is growing
empirical support that the case formulation approach has genuine therapeutic benefits, with comparison
studies suggesting that clients receiving case formulation approaches may have better outcomes than
those who have not received either an initial case formulation or frequent monitoring of progress based
on the formulation (Jacqueline & Lisa, 2015; Persons & Hong, 2015).
SELF‐TEST QUESTIONS
What are the main components of a case formulation?
Can you describe how a cognitive‐behavioural clinician and a psychodynamic clinician
might approach case formulation differently?
CHAPTER OUTLINE
3.1 RESEARCH AND SCIENCE
3.2 CLINICAL PSYCHOLOGY RESEARCH—WHAT DO WE WANT TO FIND
OUT?
3.3 RESEARCH DESIGNS IN CLINICAL PSYCHOLOGY
3.4 ETHICAL ISSUES IN CLINICAL PSYCHOLOGY RESEARCH
3.5 RESEARCH METHODS IN CLINICAL PSYCHOLOGY REVISITED
LEARNING OUTCOMES
When you have completed this chapter, you should be able to:
1. Describe and evaluate a range of research methods that can be used in clinical psychology
research.
2. Describe the types of research questions that are central to clinical psychology research.
3. Critically evaluate the ethical issues relevant to clinical psychology research.
I am a clinical psychologist. Among the health care professions, clinical psychology is one of the few to provide
extensive research training, and a clinical psychologist can be involved in both basic and applied research. Because
of the breadth of their training in research methods, clinical psychologists are well suited to design, implement, and
evaluate research and to conduct evaluations of the services provided by mental health care agencies. When you are
a practicing clinical psychologist, finding time to conduct research of any kind is difficult. But when I am involved
it helps me to satisfy my curiosity, to generate new knowledge on which more effective treatments may be based, and
to evaluate whether the current services we offer are effective.
Sarah's Story
Introduction
Why might a profession whose main aim is arguably to alleviate mental health problems want to do
research or be involved in research? Why are clinical psychologists given such a rigorous training in
research methods anyway—shouldn't they simply be taught how to help people recover from their
mental health problems? The personal account opening the chapter goes some way to answering these
questions. Even if they are simply offering a treatment‐based service, clinical psychologists should be
able to evaluate whether their services are effective and successful, and to do this with any degree of
objectivity requires a knowledge of scientific method. Sarah's Story reflects a widely‐held view that the
clinical psychologist should be thought of as a scientist‐practitioner or an applied scientist—
someone who is competent as both a researcher and a practitioner (Overholser, 2010). This view arose
in the early twentieth century when psychology was thought of as an experimental science. However, as
the discipline of psychology developed from being a pure research subject to an applied profession,
clinical psychology still maintained its links with universities and the academic world (Davey, 2019).
Indeed, in the UK, almost all clinical psychology training courses are based in university psychology
departments and have substantial research training components to them. The current view of the link
between research and practice that is held in the UK tends to be one in which scientific method is
systematically integrated into clinical work (Barker, Pistrang, & Elliott, 2015). Shapiro (1985) defined this
applied scientist view of clinical psychologists as (a) applying the findings of general psychology to the
area of mental health, (b) using only methods of assessment that have been scientifically validated, and
(c) doing clinical work within the framework of scientific method (see also Overholser, 2010). However,
this view of clinical psychologists, their approach to research, and how they use research is not as clear
cut as it sounds, and in order to understand how research is used by clinicians and integrated into their
role as mental health professionals we need to spend a little time understanding what is meant by (a)
research and (b) scientific method, and also we need to look at what value research might have within
the broader scope of psychopathology. For example, some researchers simply want to understand what
causes psychopathology, others want to know whether there is empirical evidence supporting the
efficacy of specific treatments, and others simply want a systematic way of understanding and
interpreting the symptoms of their clients.
SELF‐TEST QUESTIONS
Can you describe the main principles of the scientific method?
What is the difference between a theory and a hypothesis?
What do we mean when we say that decisions about the effectiveness of an intervention
should be evidence based?
What are the benefits and drawbacks with clinical psychology using the scientific method
as a model for research?
What is the ‘replication crisis’ in psychology?
What is social constructionism and how does it offer a different research approach to the
scientific method?
However, although we are one hundred years on from the ‘Little Albert’ experiment and we would have
expected experimental methods to have been refined and data reporting to be fully standardised, we are
still experiencing what is known as a ‘replication crisis’. This is an ongoing methodological crisis in
many sciences in which it has been found that many scientific studies are difficult or impossible to
replicate (Pashler & Wagenmakers, 2012). Focus Point 3.1 discusses some of the issues that this
replication crisis raises for clinical psychology research in particular.
As well as being replicable, research findings also need to be testable. By testable, we mean that a
scientific explanation is couched in such a way that it clearly suggests ways in which it can be tested and
potentially falsified. Scientific method often relies on the construction of theories to explain phenomena,
and a theory is a set of propositions that usually attempt to explain a phenomenon by describing the
cause–effect relationships that contribute to that phenomenon. Theories are expected to be able to take
into account all relevant research findings on a phenomenon and be articulated in such a way that they
will also have predictive value. That is, they should be able to predict what might happen in as yet
untested situations. Thus, a good theory will allow the researcher to generate hypotheses about what
might happen and to test these hypotheses in other research studies. If the hypotheses are confirmed in
these other studies the theory is upheld, but if the hypotheses are disconfirmed then the theory is either
wrong or needs to be changed in detail to explain the new facts. This process illustrates one of the
important distinctions between science and so‐called nonscience. Karl Popper (1959) proposed that
science must be able to formulate hypotheses that are capable of refutation or falsification, and if it is
not possible to falsify a theory by generating testable hypotheses, then that theory is not scientific, and in
Popper's view is of little explanatory value (this is a specific criticism that has been directed at Sigmund
Freud and his theory of psychoanalysis, see Chapter 1, Section 1.3.2).
theory A set of propositions that usually attempt to explain a phenomenon by describing the
cause–effect relationships that contribute to that phenomenon.
SECTION SUMMARY
3.1 RESEARCH AND SCIENCE
Research is about furthering understanding of a topic through careful consideration or
study.
Scientific method espouses the pursuit of knowledge through systematic observation, and
requires that research findings are replicable and testable.
The ‘replication crisis’ is an ongoing methodological crisis in many sciences in which it has
been found that many scientific studies are difficult or impossible to replicate.
A theory is a set of propositions that attempt to explain a phenomenon.
There is growing pressure for mental health services to recommend treatments whose
efficacy is evidence based.
Social constructionism is one research approach in clinical psychology that is an alternative to
the scientific method.
The replication crisis in science emerged in the early 2010s when it was discovered that a
mixture of questionable research practices (QRPs) had been contributing to difficulties in
replicating many scientific studies (Pashler & Wagenmakers, 2012; Simmons, Nelson &
Simonsohn, 2011), and psychology was one of the major disciplines implicated in this
replication crisis.
Psychological research is at the centre of this controversy for a number of reasons. These
include (a) questionable research practices involving dubious ‘flexibility’ in data collection and
reporting, (b) not publishing data that do not fit in with expected results, (c) choosing when to
stop data collection (e.g., when statistical significance has been achieved), (d) manipulation of
outliers (arbitrarily deciding whether to leave in or take out outliers depending on the
hypotheses being tested), and (e) capitalising on grey areas of acceptable scientific practice (e.g.,
arbitrarily deciding how to measure constructs that are not directly observable). Astonishingly, in one survey of over 2,000 psychologists, a majority admitted to using at least one of these questionable research practices (John, Loewenstein, & Prelec, 2012). These practices
individually and collectively have contributed to generating a ‘publication bias’ in which papers
with statistically significant effects are more likely to be published, and so this may motivate
researchers to use questionable research practices to establish significant effects that in reality
are either very weak or nonexistent—and therefore very difficult to replicate.
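The inflationary effect of one of these practices, optional stopping (practice (c) above), can be illustrated with a short simulation. This is a hypothetical sketch of our own (the function names and parameters are not from any cited study): even when there is genuinely no effect, ‘peeking’ at the data after every new participant and stopping as soon as p < .05 pushes the false-positive rate well above the nominal 5%.

```python
# Simulation: "optional stopping" (testing after each new participant and
# stopping as soon as p < .05) inflates the false-positive rate, even when
# the null hypothesis is true. Illustrative sketch, not from the chapter.
import random
from statistics import NormalDist, mean

random.seed(42)
norm = NormalDist()

def p_value(sample):
    """Two-sided z-test p-value against mean 0 (population sd known to be 1)."""
    z = mean(sample) * len(sample) ** 0.5
    return 2 * (1 - norm.cdf(abs(z)))

def run_study(max_n=50, min_n=10, peek=True):
    """Return True if the study reaches p < .05 (a false positive: data are pure noise)."""
    data = []
    for _ in range(max_n):
        data.append(random.gauss(0, 1))
        if peek and len(data) >= min_n and p_value(data) < 0.05:
            return True          # researcher stops as soon as result looks 'significant'
    return p_value(data) < 0.05  # otherwise test once at the planned sample size

n_sims = 2000
peek_rate = sum(run_study(peek=True) for _ in range(n_sims)) / n_sims
fixed_rate = sum(run_study(peek=False) for _ in range(n_sims)) / n_sims
print(f"Fixed-n false-positive rate:           {fixed_rate:.3f}")  # near .05
print(f"Optional-stopping false-positive rate: {peek_rate:.3f}")   # well above .05
```

With a fixed, planned sample size the false-positive rate stays near .05; with optional stopping it is substantially higher, which is one reason preregistering sample sizes and analysis plans (discussed below) is recommended.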
Clinical psychology research has been less implicated in this replication crisis than have other
areas of psychology, and there may be some positive reasons for this. For example, Tackett,
Brandes, King, and Markon (2019) mention that (a) sample sizes used in studies in clinical
psychology journals tend to be much larger (and so generate more statistical power) than studies
published in other areas of psychology (Fraley & Vazire, 2014), and (b) some of the areas of
research relevant to clinical psychology (such as personality psychology) do have high
replication rates, at least partially because of the large body of research that underlies the
conceptualisation and measurement of personality traits (Soto, 2018).
However, Tackett et al. (2019) do mention some areas of concern within clinical psychology
research. These include (a) the fact that clinically related research that is expensive and time
consuming tends to use relatively small sample sizes (examples include clinical neuroscience and
treatment/intervention research such as randomised controlled trials, RCTs, see Chapter 4,
Section 4.2.2), and this can only raise the chances of unreliable research results caused by a lack
of statistical power (Button et al., 2013; Cuijpers, 2016); (b) much clinical psychology research
uses the Diagnostic and Statistical Manual of Mental Disorders (DSM) diagnostic criteria to define
participant groups, and we know that interrater reliability using DSM criteria is itself poor; (c)
there is a consistent publication bias towards significant results in intervention studies (e.g.,
randomised controlled trials, RCTs) resulting in a likely overestimation of treatment effects for
many categories of mental health problems; and (d) many randomised controlled trials testing
the efficacy of individual treatments or interventions often show what are known as ‘allegiance
effects’ (the tendency of treatment studies to support the theoretical orientation of the authors),
with some critics arguing that such authors may often wittingly or unwittingly use questionable
research practices (QRPs) to influence the outcomes of their studies (Leichsenring et al., 2017).
Attempts to eradicate questionable research practices and increase the transparency and
replicability of scientific research also need to be applied to clinical psychology research
(Tackett et al., 2019). Ways of doing this include ensuring that the materials used in research
(e.g., questionnaires, computer software) and the raw data collected are openly available for
scrutiny by others. One way of preventing questionable research practices during the research
process is to ensure that researchers preregister the details of their studies, such as how they will
be conducted and analysed. Finally, multiple collaborations can be utilised for both replications
and original research—ensuring that acceptable research practices are shared across different
research sites and centres (Open Science Collaboration, 2015).
hypotheses Tentative explanations for a phenomenon used as a basis for further investigation
or predicting the outcome of a scientific study.
Activity Box 3.2 provides an example of a psychological theory that attempts to explain why some
people develop panic disorder (see Chapter 6, Section 6.4). This is a theory that can be represented
schematically as a series of cause–effect relationships, and the sequence of cause–effect relationships
described in this theory is assumed to precipitate regular panic attacks in those diagnosed with panic
disorder. This relatively simple theory is constructed in such a way that we can generate a number of
testable hypotheses from it, and so we could potentially falsify the theory according to Popper's criteria
(Focus Point 3.2).
We have described what the scientific method is, but is it the best model by which to conduct
clinical psychology research? In many countries of the world, clinical psychology has either
explicitly or implicitly adopted the scientist practitioner model described in Section 3.1.2, with
the implication that practicing clinical psychologists are willing to at least call themselves
scientists by training even if they do not regularly practice as scientists. Nevertheless, in many
countries there is a growing pressure for mental health services to provide scientific evidence
that treatments and therapies are effective and economical. In the UK, one such agency that
attempts to assess and recommend effective forms of treatment for mental health problems is
the National Institute for Health and Care Excellence (known as NICE,
www.nice.org.uk). It does this primarily by recommending treatments whose efficacy can be
labelled as ‘evidence‐based’. That is, treatments whose efficacy has been demonstrated through research using
the scientific method. There is thus some pressure from these agencies for clinical psychologists
to accept the scientific method—at least as a way of assessing the effectiveness of therapies—
and as a way of assessing the cost effectiveness of individual interventions, which is an
important consideration for agencies providing psychological services (e.g., Crow et al., 2013;
Radhakrishnan et al., 2013). However, as we shall see on numerous occasions throughout this
book, many forms of therapy are not couched in ways that make them amenable to assessment
through a traditional scientific approach (e.g., psychoanalysis). As a result, at least some
clinicians view processes designed to scientifically assess treatments (such as those reported by
NICE) as being ways in which those clinicians who support therapies derived from traditional
scientific approaches can impose their own view of what treatments are effective (Elliott, 1998;
Roth & Fonagy, 1996). Let us look at some of the benefits and costs of clinical psychology
adopting the scientific method in its research.
There are a number of benefits to clinical psychology using the scientific method as a model for
research, and there is no doubt that clinical psychology has used its scientific status as a means
of acquiring prestige and establishing its status as an independent discipline within the field of
mental health (Davey, 2019; Lavender, 1996). However, clinical psychologists often fail to use research evidence to inform their treatments and instead rely on anecdotal clinical experience (Dawes, 1994), with evidence suggesting that many clinical psychologists are not consistently utilising NICE guidelines for evidence‐based best practice (Court, Cooke, & Scrivener, 2016). If this is so, then there is a thin line between a clinician who bases their
interventions on unvalidated experience and a bogus psychotherapist who invents a so‐called
therapy whose basic tenets are not amenable to objective assessment. Keeping abreast of recent
developments in evidence‐based research is therefore an important component of good practice
for clinical psychologists, and scientific method provides the theoretical and empirical
developments by which the clinician can achieve this (Singer, 1980).
In contrast, at least some clinicians have argued that the scientific method in its strictest form
may not be suitable for clinical psychology research or practice. First, some writers claim that to
base clinical psychology research on strict scientific method aligns it too closely to the medical
model of psychopathology and invites many of the problems associated with a strict medical
model of psychopathology (see Chapter 1, Section 1.1.2) (Corrie & Callahan, 2000). Second,
while the scientist‐practitioner model is often seen as the model for clinical psychology, it is
seldom an ideal that is fulfilled in practice (Barlow, Hayes, & Nelson, 1984). For the clinical
psychologist, the need to alleviate a client's psychological problems is often more pressing than
the need to be scientifically rigorous. Similarly, the demands placed on overworked clinicians in
underresourced mental health services mean that they are rarely likely to engage in any
meaningful research independently of their clinical practice (Head & Harmon, 1990) and will
certainly rank research as a priority significantly lower than their service commitment (Allen,
1985; Corrie & Callahan, 2000). The pressures of their work mean that they will often view the
research literature (whether based on scientific method or not) as irrelevant to their professional
practice (Barlow et al., 1984). Third, in contrast to scientific method, an alternative approach to
research in clinical psychology is one based on social constructionism (Burr, 1995;
McCann, 2016). This approach emphasises that reality is a social construction, and so there are
no basic ‘truths’ of the kind that we seek to discover using the scientific method. Instead,
knowledge consists of multiple realities that are constructed by people and may be historically
and culturally specific. It is claimed this approach has particular relevance in clinical psychology
because psychopathology frequently involves individuals creating their own realities (e.g., the
paranoid individual creates a reality in which everyone is against them, and the depressed
individual creates a reality in which they view themselves as worthless). These various realities
can be accessed through analysing language and social interactions, and so those who advocate
a social constructionist approach argue that the study of language and discourse is the only
means of understanding human experience and, as a consequence, human psychopathology
(Lofgren, Hewitt, & das Nair, 2014).
Despite the fact that at least some clinicians have adopted alternative frameworks (e.g., the
social constructionist approach), scientific method is still the most favoured model for research
in most areas of clinical psychology, including research on the causes of psychopathology
(aetiology), research pursuing the development of new forms of treatment, and research
assessing the efficacy and cost effectiveness of treatments. Even though each person may
develop their own individual psychological reality, the fact that human beings are evolved
biological organisms means that there are almost certain to be general ‘truths’ or processes
common to all humans that can be discovered using scientific method. As a consequence, there
are also likely to be a set of general ‘truths’ or processes common to psychopathology across all
individuals.
Once we have been able to describe and categorise psychopathology then we are one step away from
prediction. A logical next stage is to use these descriptions and categorisations to help us predict
psychopathology. For example, we know that certain childhood or developmental experiences may
increase the risk of developing psychopathology later in life, and one such list of these risk factors is
provided in Table 16.1 in Chapter 16. This table indicates how various forms of childhood abuse or
neglect can raise the risk of developing a range of psychopathologies (as one example, childhood
physical and sexual abuse increases the risk of developing adolescent eating disorders). However, while
research may have identified such experiences as risk factors, this does not imply a direct causal
relationship between the risk factor and the psychopathology—it merely indicates that the early
experience in some as yet unknown way increases the possibility that a psychopathology will occur.
prediction A statement (usually quantitative) about what will happen under specific conditions
as a logical consequence of scientific theories.
risk factors Factors that may increase the risk of developing psychopathology later in life.
The next aim of research would move beyond describing and categorising events to actually trying to
control them in a way that (a) provides us with a clear picture of the causal relationships involved, and
(b) allows us to develop methods of changing events for the better. In the case of psychopathology, this
latter aim would include using our knowledge of the causal relationships between events to control
behaviour so that we could change it—the basic tenet of many forms of treatment and psychotherapy.
One of the main tools for discovering causal relationships between events is the experimental method
that we will describe in more detail in Section 3.3. This approach is generally known as experimental
psychopathology because it is an experimental attempt to control individual variables in a way that
allows us to define the causal relationships underlying psychopathology (van den Hout, Engelhard, &
McNally, 2016). For example, a number of studies have indicated that experimentally inducing a bias to
interpret ambiguous events as threatening causes an increase in experienced anxiety (Wilson, MacLeod,
Mathews, & Rutherford, 2006). As a consequence this research suggests that if we can decrease this
interpretation bias in anxious individuals, it should significantly reduce the anxiety they experience (see,
for example, Treatment in Practice Box 6.4).
control Using our knowledge of the causal relationships between events to manipulate
behaviour or cognitions.
The final aim of research is understanding. That is, once we have described and categorised
psychopathology, and once we have begun to identify some of the causal factors affecting
psychopathology, we are probably at a point where we want to describe how all these factors interact,
and this will provide us with a theory or model of the phenomenon we are trying to explain. Activity
Box 3.2 provides a useful example of how researchers believe the various causal factors involved in
panic disorder interact, and it is the development of models such as this (describing the
interrelationships between events) that can add significantly to our understanding of psychopathology
and suggest practical ways of alleviating and treating symptoms.
understanding A full description of how the causal factors affecting psychopathology
interact.
To be useful in helping to understand psychopathology, research does not necessarily have to be carried
out on those who have mental health problems or who display symptoms of psychopathology. In
Chapter 1 we discussed the possibility that much of what is labelled as psychopathology is often just an
extreme form of common and accepted behaviours. That is, symptoms diagnosed as a psychological
disorder may just be more extreme versions of everyday behaviour. One good example is worrying.
Worrying is usually viewed as a perfectly normal reaction to the challenges and stressors encountered in
daily life and the activity of worrying may often help us to cope with these problems by enabling us to
think them through. However, once uncontrollable worrying becomes a chronic reaction to even minor
stressors it then begins to cause distress and interfere with normal daily living. Because symptoms
diagnosed as a disorder may just be more extreme versions of everyday behaviour, what we find
out about activities such as worrying in nonclinical populations will probably provide some insights into
the aetiology of pathological worrying when it is a significant indicator of a psychological disorder such
as generalised anxiety disorder (GAD). Undertaking research on healthy, nonclinical populations in
order to shed light on the aetiology of psychopathology is known as analogue research, and such
research makes an important contribution to the understanding of psychopathology (Davey, 2003, 2017;
Vredenburg, Flett, & Krames, 1993).
Another important function of clinical psychology research is to determine the efficacy of treatments
and interventions. This includes testing the effectiveness of newly developed drug, surgical or
psychological treatments. Research may even try to compare the effectiveness of two different types of
treatment for a psychological disorder (e.g., comparing a drug treatment for depression with a
psychological treatment for depression). Such studies are not quite as simple as they may initially seem
because the researcher will have to compare those who undergo the treatment with those who do not,
and they will also have to control for extraneous factors that might influence improvement that are not
directly due to the therapy being tested (e.g., how attentive the therapist is, or the degree to which the
client participating in the study “expects” to get better). We will discuss therapy outcome research of this
kind in more detail in Chapter 4, but the interested reader may want to have a look at Mendelberg
(2018), or Clark et al. (2006) as examples of how intervention outcome research is conducted and
evaluated, and at Smith & Thew (2017) for consideration of the difficulties encountered when
attempting to conduct research in clinical practice.
analogue research Research on healthy, non-clinical populations in order to shed light on the
aetiology of psychopathology.
Finally, practicing clinical psychologists often have pressing questions that, for various reasons, they need
to answer. Very often these are questions of a practical nature related to their employment as mental
health professionals working in organisations that provide mental health services. For example, in the
UK, most National Health Service (NHS) service providers will want to ensure that the service they are
offering is effective, and this is known as evaluation research or clinical audit. Clinical audit uses
research methods to determine whether existing clinical knowledge, skills and resources are effective and
are being properly used, and the kind of questions addressed will include ‘what is the service trying to
achieve?’ and ‘how will we know if the service has achieved what it is trying to achieve?’ (Barker et al.,
2015). In this sense, clinical audit does not add to the body of knowledge about psychopathology but is
an attempt to ensure that current knowledge is being effectively used. In particular, clinical audit is
intended to influence the activities of a local team of clinicians, rather than influencing clinical practice
generally (Cooper, Turpin, Bucks, & Kent, 2005), and clinical audit uses research methods to assess how
much end users value the services on offer, their satisfaction with these services, and what is perceived as
good and bad about the services offered (see Tulett, Jones, & Lavender, 2006, for an example).
clinical audit The use of research methods to determine whether existing clinical knowledge,
skills and resources are effective and are being properly used. Also known as evaluation research.
These, then, are some of the reasons why clinical psychologists do research. They include attempts to
answer pressing practical problems (e.g., what treatments are effective?) and attempts to add to the body
of knowledge about psychopathology (e.g., what causes specific mental health problems?). The next
section introduces you to some of the research methods that can be used to answer these questions.
SELF‐TEST QUESTIONS
Can you name four main goals of research?
What does the term aetiology mean?
What is analogue research?
What is clinical audit?
SECTION SUMMARY
To undertake a correlational analysis the researcher needs to obtain pairs of scores on the variables
being studied. For example, if you are interested in whether there is a relationship between trait anxiety
and worrying, you can ask participants to complete questionnaires measuring worry and trait anxiety.
You will then have two scores for each participant, and these scores can be entered into a spreadsheet
for a computer statistical package such as IBM's Statistical Package for the Social Sciences (SPSS)
(Field, 2017), or the R programming language for statistical computing (Field, Miles, & Field,
2012). This will then compute a correlation coefficient (denoted by the symbol r) that measures the
degree of relationship between the two variables. The correlation coefficient can range from +1.00
through 0.00 to −1.00, with +1.00 denoting a perfect positive correlation between the two variables
(i.e., as scores on one variable increase, scores on the other variable will increase), and −1.00
denoting a perfect negative correlation between the two variables (i.e., as scores on one variable
increase, scores on the other variable will decrease). A correlation coefficient of 0.00 indicates that
the two variables are completely unrelated. The relationship between two variables can also be
represented graphically in what is known as a scattergram, and Figure 3.1 provides three examples of
scattergrams representing three different types of relationships between variables. These scattergrams
show how the line of best fit differs with the nature of the relationship between the two variables
concerned. The statistical package that calculates the correlation coefficient and prints out the
scattergram for you will also provide you with an indication of the statistical significance of your
results. A researcher will want to know how likely it is that a correlation of the size they observed would have occurred by chance if the two variables were in fact unrelated, and if this probability is low then they can be relatively assured that the finding is a reliable one. Traditionally, a correlation is considered statistically significant if the probability of it occurring by chance is less than 5 in 100, and this is written as p < .05 (p stands for probability). From the examples given in Figure 3.1 you can see that the correlations in both panels (a) and (b) are statistically significant (because the p values are less than .05). However, the correlation in panel (c) is not significant (because the p value is higher than .05), meaning we cannot conclude that there is any reliable relationship between the two variables (see also Focus Point 3.3 for a discussion of probability levels and effect sizes).
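The calculation that packages such as SPSS or R perform can be sketched in a few lines. The scores below are invented for illustration (they are not the worry and trait anxiety data plotted in Figure 3.1):

```python
# Illustrative calculation of Pearson's correlation coefficient r for paired
# scores -- the statistic that SPSS or R would report. The data are invented
# for illustration only.
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical questionnaire scores for five participants:
worry         = [12, 25, 31, 44, 52]
trait_anxiety = [20, 28, 30, 47, 55]
r = pearson_r(worry, trait_anxiety)
print(f"r = {r:.2f}")  # a value near +1 indicates a strong positive correlation
```

A statistical package would also report the p value associated with r; computing that requires the t distribution and is omitted here for brevity.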
Statistical Package for the Social Sciences (SPSS) A computer program specifically
developed for statistical analysis for the social sciences.
positive correlation A relationship between two variables in which a high score on one
measure is accompanied by a high score on the other.
negative correlation A relationship between two variables in which a high score on one
measure is accompanied by a low score on the other.
line of best fit A straight line used as a best approximation of a summary of all the points in
a scattergram.
statistical significance The degree to which the outcome of a study is greater or smaller
than would be expected by chance.
Correlational designs are valuable for clinical psychology researchers in a variety of ways. First, they
allow the researcher to begin to understand what variables may be interrelated and this provides a useful
first step towards understanding a particular phenomenon. Second, correlational designs are useful for
researching how individual differences and personality factors may relate to psychopathology. For
example, it would allow us to determine whether a personality factor, such as perfectionism, was related
to a psychopathology, such as obsessive‐compulsive disorder (e.g., Tolin, Woods, & Abramowitz, 2003).
Third, it would also allow us to determine whether certain experiences were associated with specific
psychopathologies, such as whether the experience of stressful events is associated with depression (e.g.,
Brown & Harris, 1978).
FIGURE 3.1 Correlation scattergrams.
In a questionnaire study, 132 female college student participants were asked to fill in valid and reliable questionnaires
measuring (a) the extent to which they worried, (b) their level of trait anxiety, (c) the degree of positive mood they exhibited
over the past 6 months, (d) their level of dissatisfaction with their body shape, (e) their current level of depression, and (f)
their height.
(a) This scattergram shows the relationship between worry scores and trait anxiety scores for the 132 participants. This
exhibits a positive correlation, and the line of best fit (the straight line) indicates this by showing an increasing trend.
The correlation coefficient calculated by SPSS was r = .66, and this was significant at p < .001.
(b) This scattergram shows the relationship between measures of positive mood and body dissatisfaction for the 132
participants. This exhibits a negative correlation, and the line of best fit (the straight line) indicates this by showing a
decreasing trend. The correlation coefficient calculated by SPSS was r = −.40, and this was significant at p < .001.
(c) This scattergram shows the relationship between measures of height and depression for the 132 participants. This
indicates that these variables are unrelated with the line of best fit (the straight line) showing neither an increasing nor
decreasing trend. The correlation coefficient calculated by SPSS was r = .01, with p > .80, and this was nonsignificant.
However, as we indicated earlier, correlational designs are limited. They certainly do not allow us to
draw any conclusions about causality, and they usually provide very little insight into the mechanism or
process that might mediate the relationship between the two variables that are correlated (see Focus
Point 3.3). We need to use other designs (such as the experimental design) to help us answer the question
of how the two variables are related.
3.3.2 Longitudinal Studies and Prospective Designs
An alternative form of correlational design is known as the longitudinal study or prospective
design. In the traditional correlational design, all measures are taken at the same point in time (known
as a cross‐sectional design, because the study simply takes a sample of measures as a ‘cross‐section’
of ongoing behaviour). However, in longitudinal or prospective designs, measures are taken at two or
more different times. In a longitudinal study, measures are taken from the same participants on different
occasions usually over extended periods of time. This may extend over many years, or in more long‐
term studies, over a participant's whole lifetime. Prospective studies take measures of the relevant
variables at a particular point in time (usually called time 1), and then go back to the same participants
at some future time and take the same or similar measures again (usually called time 2). Both
longitudinal and prospective designs enable the researcher to specify more precisely the time‐order
relationships between variables that are correlated. That is, because measures are taken from the same
participant at both times 1 and 2, the researcher can not only see whether there are correlations
between variables X and Y but also whether variable X measured at time 1 predicts changes in measures
of variable Y that occurred between times 1 and 2. A detailed example of a prospective design is given in
Research Methods Box 7.2 (Chapter 7) where a measure of negative attributional style at time 1 was
shown to predict increases in depression scores between times 1 and 2. This type of design enables the
researcher to understand the time course of relationships between two variables, and to determine
whether one variable predicts changes in a second variable. In the case given in Research Methods Box
7.2, a negative attributional style predicts future increases in depression, and can therefore be identified
as a risk factor for depression.
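The time-order logic of a prospective design can be sketched in a few lines: we ask whether scores on X at time 1 correlate with the *change* in Y between times 1 and 2. The scores below are invented for illustration (they are not the data from Research Methods Box 7.2):

```python
# A minimal sketch of the prospective-design logic: does variable X at time 1
# predict changes in variable Y between time 1 and time 2? Data are invented
# for illustration only.
from math import sqrt

def corr(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

# Hypothetical scores for six participants:
neg_attrib_t1 = [2, 3, 5, 6, 8, 9]        # negative attributional style at time 1
depression_t1 = [10, 12, 11, 13, 12, 14]  # depression at time 1
depression_t2 = [10, 13, 14, 17, 18, 22]  # depression at time 2

change = [t2 - t1 for t1, t2 in zip(depression_t1, depression_t2)]
# A high positive correlation here would identify X as a risk factor for
# later increases in depression:
print(f"X at time 1 with change in Y: r = {corr(neg_attrib_t1, change):.2f}")
```

A fuller analysis would typically regress time-2 depression on both time-1 depression and time-1 attributional style, so that baseline severity is controlled; this sketch simply illustrates why two measurement points are needed to talk about prediction rather than mere association.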
longitudinal study Research that takes measures from the same participants at two or more
different times in order to specify the time relationships between variables. This may extend
over many years or over a participant’s whole lifetime.
prospective designs Research that takes measures of relevant variables from a sample at one
point in time (time 1) and then takes the same or similar measures from the same participants at
a future time (time 2).
cross-sectional design A research design that involves the collection of data from a sample
at just one point in time.
One example of a longitudinal study is the Dunedin Multidisciplinary Health and Development Study
—a longitudinal investigation of health and behaviour in a complete birth cohort
(http://dunedinstudy.otago.ac.nz). Participants in the study were born in Dunedin, New Zealand,
between April 1972 and March 1973, and over 1,000 of these individuals then participated in follow‐up
assessments at ages 3, 5, 7, 9, 11, 13, 15, 18, 21, 26, 32, 38, and 44 years (a future assessment is
scheduled for age 50 years). The study has enabled researchers to understand the time‐order
relationships between variables associated with health and psychopathology, and to understand how
some variables can be identified as predictors or risk factors for later behaviour. For example, using
prospective data from the Dunedin study, Reichenberg et al. (2010) found that children who grow up to
develop adult schizophrenia enter primary school struggling with verbal reasoning and lag further
behind peers in working memory, attention, and processing speed as they grow older.
Epidemiological studies Research which takes the form of a large-scale survey used to study
the frequency and distribution of disorders within specific populations over a specified period of
time.
One of the main uses of epidemiological studies is to determine the prevalence rates of various mental
health problems, and prevalence rates can be described in a number of different ways. For example,
respondents in an epidemiological study can be asked (a) ‘Have you ever experienced symptoms of a
specific psychopathology in your lifetime?’ (providing information on the lifetime prevalence rate of
a disorder), (b) ‘Have you experienced symptoms of a specific psychopathology in the last year?’
(providing information on the one‐year prevalence rate of a disorder, e.g., Regier et al., 1993), or (c)
‘Are you experiencing symptoms of a specific psychopathology at the present time?’ (providing
information on what is known as the point prevalence of a disorder, that is, the frequency of a
disorder in the population at any one point in time). You can see from these examples that prevalence
rates represent incidence × duration, and it is important to view prevalence in this way because some
disorders are of high incidence but low duration (e.g., bouts of depression), and some others are of low
incidence but long duration (e.g., schizophrenia). DSM usually provides information on either the
lifetime prevalence rates of a disorder or its point prevalence, and these are the kinds of statistics we will
be using when considering specific disorders in later chapters.
point prevalence The frequency of a disorder in the population at any one point in time.
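The prevalence = incidence × duration relationship can be illustrated with some invented figures (these are not real epidemiological estimates):

```python
# Hypothetical figures only, to illustrate prevalence = incidence x duration.
# A low-incidence but long-duration disorder can have the same point
# prevalence as a high-incidence but short-duration one.

def point_prevalence(incidence_per_year, mean_duration_years):
    """Steady-state approximation: prevalence ~ incidence x duration."""
    return incidence_per_year * mean_duration_years

# A rare-onset, long-duration pattern (schizophrenia-like)
low_inc_long = point_prevalence(0.0005, 14)
# A common-onset, short-episode pattern (depression-like)
high_inc_short = point_prevalence(0.014, 0.5)

print(low_inc_long, high_inc_short)  # ~0.007 (0.7%) in both cases
```

Two very different disorders can therefore produce the same point prevalence, which is why prevalence figures alone can mask important differences in incidence and duration.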
The benefits of epidemiological studies are that they provide information about the frequency of
mental health problems that can be used for planning health care services. They may also provide
information about the risk factors for various psychological disorders, which will help health service
providers to identify those who may be at risk of developing a mental health problem and so introduce
programmes designed to help prevent those problems. For example, excessive alcohol consumption in
pregnant mothers is a risk factor for infant fetal alcohol syndrome in the offspring, and prevention
programmes aim to identify those women at risk of alcohol abuse during pregnancy and to provide
interventions or alcohol‐reduction counselling (Floyd, O'Connor, Bertrand, & Sokol, 2006; Montag,
2016).
However, like all research approaches, there are some limitations to epidemiological studies. For
example, to provide valid descriptions of the prevalence rates of mental health problems in a particular
population, the sample used must be truly representative of that population. This is often difficult to
achieve because such studies never attain a 100% response rate, and many of those approached will
refuse to take part. Studies suggest that those who are most likely to refuse to take part in an
epidemiological survey are men, individuals of low socio‐economic status, and individuals from ethnic
minority populations (Fischer, Dornelas & Goether, 2001), and this is likely to mean that the samples
used in most epidemiological studies are not fully representative of the population being studied.
experiment A design in which the researcher manipulates a particular variable and observes
the effect of this manipulation on some outcome, such as the participant’s behaviour.
control conditions Conditions within an experiment that control for any effects other than
that produced by the independent variable.
control group A group of participants who experience manipulations other than the
independent variable being investigated.
Traditionally, inferential statistics has used a very specific method for testing whether differences
between experimental groups are statistically significant (and are therefore unlikely to have
occurred by chance). This is known as null hypothesis significance testing (Field, 2017),
and the statistical tests we use tell us the degree to which the pattern of results we got could
have occurred by chance (the null hypothesis). In an experiment where we decide to look at
whether manipulating negative mood increases worrying, and we find that measures of worry
are higher in the group that experienced negative mood, we can create two simple hypotheses:
null hypothesis significance testing The use of inferential statistics to establish
whether differences between experimental groups are statistically significant (and are
therefore unlikely to have occurred by chance).
Null hypothesis: Increased levels of worrying were nothing to do with negative mood.
Alternative hypothesis: Increased levels of worrying were caused by negative mood.
When using statistics, you can only show that the null hypothesis is likely to be wrong (you can
never prove your alternative hypothesis). Traditionally, researchers have adopted a significance
level of p < .05 to decide whether the null hypothesis can be rejected. If p < .05, it means that
there is less than a 1 in 20 chance of obtaining results as extreme as yours if the null hypothesis
were true, so they are more likely to have been caused by your experimental manipulation.
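One common way of computing a p value of this kind (without making assumptions about the distribution of scores) is a permutation test. This is a sketch only, using invented worry scores rather than real data:

```python
import random

# Invented worry scores for the two mood groups
negative_mood = [14, 17, 15, 18, 16, 19]
neutral_mood = [11, 12, 13, 10, 14, 12]

observed = (sum(negative_mood) / len(negative_mood)
            - sum(neutral_mood) / len(neutral_mood))

# Under the null hypothesis, group labels are interchangeable, so we
# shuffle the labels many times and ask how often a difference at least
# as large as the observed one arises by chance alone.
pooled = negative_mood + neutral_mood
n = len(negative_mood)
random.seed(1)  # fixed seed only so the sketch is reproducible
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:n]) / n - sum(pooled[n:]) / n
    if diff >= observed:
        count += 1

p_value = count / trials
print(p_value)  # a small p (< .05) leads us to reject the null hypothesis
```

The p value is simply the proportion of random relabellings that produce a group difference at least as large as the one actually observed.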
One problem with null hypothesis significance testing is that it led researchers to believe that
either an experimental effect existed (if p < .05) or it did not exist (if p > .05). However, selecting a
p level of .05 by which to reject the null hypothesis is entirely arbitrary; it merely tells us that the
null hypothesis is unlikely—it does not tell us that our experimental hypothesis is correct. In
addition, the p value calculated with inferential statistics can be influenced by many factors that
are not directly related to your manipulation. For example, the more participants you test in
your experiment, the more likely you are to get a p of less than .05—even when the measured
differences between the groups in your experiment are quite small (Cohen, 1990).
More recently, less emphasis has been placed on the importance of the actual p value (and
whether the null hypothesis might be rejected) and more on effect size. An effect size is simply
an objective and standardised measure of the magnitude of the difference between your
experimental condition and the control conditions. The larger the effect size, the greater the
magnitude of the experimental effect, whatever the sample size. This provides a dimensional scale by which to judge
the importance of your results rather than an arbitrary categorical “all‐or‐none” decision
provided by null hypothesis significance testing. Many measures of effect size are used in
contemporary clinical psychology research, with the most common being Cohen's d and
Pearson's correlation coefficient r (Field, 2017, see also a special issue of Behaviour Research &
Therapy for discussion of best practice guidelines for modern statistical methods in applied
clinical research, Brown & Field, 2017).
effect size An objective and standardized measure of the magnitude of the effect
observed in a research study.
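As a sketch of how an effect size such as Cohen's d is calculated, the following uses invented scores: d is the difference between the two group means divided by their pooled standard deviation.

```python
import statistics

# Invented scores for an experimental and a control group
group_a = [16, 18, 15, 19, 17, 16]
group_b = [12, 13, 11, 14, 12, 13]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)

# Pooled standard deviation across the two groups
n_a, n_b = len(group_a), len(group_b)
pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5

# Cohen's d: difference between means in pooled-SD units
d = (mean_a - mean_b) / pooled_sd
print(round(d, 2))  # a very large effect in this toy example
```

Because d is expressed in standard deviation units, it can be compared across studies that used different measurement scales, which is one reason effect sizes are preferred for judging the importance of a result.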
One other feature of designing experiments is in the assignment of participants to the various
conditions or groups in the experiment. Typically, researchers use random assignment of
participants to experimental conditions. This is to ensure that at the outset of the experiment, all groups
have participants with similar characteristics. In our example experiment, the findings of our study
would be compromised if we happened to have participants in our negative mood group who naturally
worried more than the participants in the other two control groups—even before we had completed our
experimental manipulations. This can usually be prevented by the random assignment of participants to
groups, and this should normally ensure that there are no statistical differences between the groups at
the outset of the experiment on characteristics that may influence the dependent variable.
random assignment Assignment of participants to different treatments, interventions or
conditions according to chance.
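Random assignment itself is straightforward to sketch; the participant identifiers and the three-group design below are hypothetical:

```python
import random

# A sketch of random assignment: shuffle the participant pool, then
# split it into equal-sized groups, so pre-existing characteristics
# (e.g., trait worry) are distributed across groups by chance alone.
participants = [f"P{i:02d}" for i in range(1, 31)]  # hypothetical IDs

random.seed(42)  # fixed seed only so the sketch is reproducible
random.shuffle(participants)

negative_mood_group = participants[:10]
neutral_mood_group = participants[10:20]
no_induction_group = participants[20:]

print(len(negative_mood_group), len(neutral_mood_group), len(no_induction_group))
```

Because assignment depends only on the shuffle, any participant is equally likely to end up in any group, which is what licenses the assumption that the groups do not differ systematically at the outset.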
Finally, both the experimenter and the participant may introduce bias into an experiment that can affect
the validity of the findings. For example, during an experiment, a participant may begin to think about
the purpose of the experiment and behave in a way that is consistent with these thoughts. When this
occurs, the participant is said to be responding according to the demand characteristics of the
experiment (i.e., what they think the experiment is about) rather than to the stimuli and events in the
experiment. Equally, the experimenter may unwittingly bias the results of an experiment. Because the
experimenter may know which of the experimental conditions a participant is in, and also knows what
the experimental predictions are for these conditions, the experimenter may provide subtle cues that
lead the participant to behave in the predicted way. To avoid experimenter bias of this kind a double‐
blind procedure can be used in which neither the experimenter nor the participant is aware of which
group the participant is in (i.e., a second experimenter may be employed simply to assign participants to
experimental conditions without the first experimenter, who runs the experiment, knowing). An
interesting example of implicit experimenter bias that we encountered in our lab when researching the
relationship between cognitions and OCD symptoms is related in my blog at
https://www.papersfromsidcup.com/graham‐daveys‐blog/an‐effect‐is‐not‐an‐effect‐until‐it‐is‐
replicated‐pre‐cognition‐or‐experimenter‐demand‐effects.
demand characteristics The features of an experiment which are the result of participants
acting according to what they believe is expected of them.
Activity Box 3.3 introduces you to the kinds of questions that someone undertaking an experimental
study needs to ask when designing and analysing their experiment, and you should ensure you
understand the various concepts described in this section before attempting this activity.
analogue populations Populations that participate in mental health research but do not have
mental health diagnoses; they may be human or non-human animals.
1. It is increasingly argued that psychopathology is dimensional rather than discrete. That is,
symptoms diagnosed as a psychological disorder may just be extreme versions of normal, everyday
behaviours and reactions (Krueger & Piasecki, 2002; Niles, Lebeau, Liao, Glenn, & Craske, 2012;
Olatunji, Williams, Haslam, Abramowitz, & Tolin, 2008). If so, then what we find out about these
behaviours and reactions in nonclinical populations will tell us something about the processes that
cause the more severe reactions found in clinical populations.
2. In the laboratory, we can use experimental manipulations to simulate mild psychopathology
symptoms in nonclinical participants. For example, in Chapter 6, the attention bias modification
procedure described in Treatment in Practice Box 6.4 demonstrates how nonclinical participants
might be made “anxious” by establishing in them a threat interpretation bias, which can then be
used to study how this “anxiety” might be alleviated (e.g., MacLeod & Mathews, 2012).
3. Nonclinical participants can be selected for an experimental study because they are similar to
individuals with psychopathology. For example, a good deal of research has been carried out on
college students who score high on measures of depression but at subclinical levels (Vredenburg et
al., 1993). Such participants do not usually have levels of depression that are clinically significant,
but they do allow an experimenter to compare how college students scoring high or low on
measures of depression might react to an experimental manipulation (see Davey, 2017, and van
den Hout, Engelhard & McNally, 2016, for further discussion of the use of human analogue
participants in experimental psychopathology).
Finally, analogue populations do not even have to be human to provide valuable information about
psychopathology. Animal studies are also a valuable source of information about basic processes that
might underlie psychopathology, especially when attempting to understand how brain function may
influence psychopathology (Belzung & Lemoine, 2011; Stewart & Kalueff, 2015). Animal models
allow researchers to experimentally investigate such factors as the genetics of a psychopathology (using
intensive breeding programmes), changes in brain biochemistry associated with specific
psychopathologies (such as changes in brain neurotransmitter levels associated with psychotic‐like
symptoms), and the effects of drugs on psychopathology (such as the effect of antidepressants on brain
biochemistry and behaviour) (e.g., Lavi‐Avnon, Yadid, Overstreet & Weller, 2005; Porsolt, Leipchon &
Jalfre, 1977). Animal studies give the researcher complete control over the organism's
developmental history (and so control over genetic factors and factors affected by feeding and living
experiences), and permit the use of some experimental methods that would be considered too intrusive
to use with human participants (such as assessing the effects of electrical stimulation of the brain, and
the sampling of brain neurotransmitters). Nevertheless, even though many types of animal research are
legally licensed by governments, it is an area of research that has become increasingly controversial
because of changing views on the ethical implications of using nonhuman animals in scientific
experiments (Guidelines for the Use of Animals, 2018; Rollin, 2006).
Animal models The use of laboratory animals in research to simulate processes comparable
to those occurring in humans.
Another important use of the experimental design in psychopathology research is in studies testing the
effectiveness of treatments for mental health problems. These types of studies are often known as
clinical trials, and attempt to test (a) whether a treatment is more effective than no treatment, (b)
whether treatment A is more effective than treatment B, or (c) whether a newly developed treatment is
more effective than existing treatments. In a standard treatment efficacy experiment, researchers will
allocate clients or patients with a specific psychopathology (e.g., depression) to different experimental
conditions. The experimental group will receive the treatment manipulation whose efficacy is being
tested (e.g., a form of psychotherapy), and control groups will undergo other manipulations depending
on what comparisons need to be made. For example, if the researchers want to discover if the
psychotherapy treatment is more effective than a drug treatment, then a control group will receive the
drug instead of psychotherapy. The researchers will then measure symptoms at various points in time
after the two treatments to assess which is more effective (e.g., Leff et al., 2000; Ward et al., 2000).
Sometimes, researchers may want to assess whether a particular intervention is more effective than
simply doing nothing. However, this is not as simple a comparison as it sounds. Logically you would
imagine that a researcher would subject half the participants to the intervention, and allocate the other
half to a control condition in which they receive no treatment. Suppose the researcher wants to assess
the effectiveness of a drug treatment for depression. Just giving the experimental group a pill containing
the drug and giving the control group nothing has a number of problems. First, the experimental group
may get better simply because they are being given a pill and this leads them to expect to get better. This
is known as a placebo effect, where a participant may improve simply because the procedure they are
undergoing leads them to believe they should or might get better. To control for this possibility, a
control group should be included in which the participants are given a pill that contains an inactive
substance (such as a sugar pill). This is known as a placebo control condition that controls for the
possibility that participants may improve simply because they are being given a pill regardless of what is
in the pill (Gupta & Verma, 2013). Suffice it to say here that the experimental method does
provide a useful paradigm for assessing the effectiveness of different interventions, but we will discuss
the complexities and limitations of this approach when we discuss treatment methods more thoroughly
in the next chapter.
clinical trials Experimental research studies used to test the effectiveness of treatments for
mental health problems.
placebo effect The effect when participants in a clinical trial show improvement even though
they are not being given a theoretically structured treatment.
placebo control condition A control group that is included in a clinical trial to assess the
effects of participant expectations.
Summary
The experiment is arguably the most powerful research tool that we have because it allows us to draw
conclusions about the direction of causality between variables, and this is the first step towards putting
together theories and models of how psychopathology is caused. However, in order to provide valid
results, experiments must be carefully designed and well controlled. Experiments are more than just
data collection exercises, and the experimenter needs to manipulate important variables in order to
discover causal relationships between events and behaviour. In some cases this makes the
experiment too intrusive to use with clients experiencing psychopathology, which is why many
of our studies investigating psychopathology need to be conducted on analogue populations such as
healthy volunteers and nonhuman animals.
3.3.5 Mixed Designs
One of the basic principles of experimental design is that participants must be assigned to different
groups on a random basis. However, this principle can be set aside if the research question being tackled
requires a mixed design. For example, suppose we wanted to see whether negative mood caused anxious
individuals to worry more than depressed individuals. In an experiment of this kind, we would still be
experimentally inducing negative mood (the experimental manipulation), but we would not be assigning
participants randomly to the experimental groups; instead, we would want to ensure that one
experimental group contained only anxious individuals and a second experimental group only depressed
individuals. We would select the participants pre‐experimentally on the basis of these attributes and
assign them nonrandomly to each group. This is known as a mixed design because (a) we are
adopting elements from the experimental approach (i.e., we are manipulating an independent variable),
but (b) we are assigning our participants nonrandomly to the experimental groups. This is a design that
is used quite frequently in psychopathology research because the clinical psychology researcher may
often want to know if a particular variable will affect individuals with different psychopathologies in
similar or different ways. An example of a mixed design is a study by Sanderson, Rapee, and Barlow
(1989) investigating the effects of expectations on panic disorder. They preselected two groups of
participants: one group consisted of individuals diagnosed with panic disorder and the other group
consisted of individuals with no psychiatric diagnosis. They then subjected both groups to an
experimental manipulation. In this case, they asked all participants to inhale compressed air but told
them they were inhaling CO2 that could induce a panic attack. Even though the compressed air itself
could not have induced a panic attack, participants diagnosed with panic disorder were significantly
more likely to have a panic attack after the manipulation, suggesting that in such individuals the mere
expectation of a panic attack is likely to induce one.
mixed design Research which uses the non-random assignment of participants to groups in
an experiment.
Mixed designs are frequently used in treatment outcome studies, where the effectiveness of a particular
intervention is being assessed on individuals with different psychiatric diagnoses or with different
severity of symptoms. Figure 3.2 shows the results of a mixed design study carried out by Huppert et al.
(2004) designed to assess the effects of administering a placebo pill to three different groups of
participants, each diagnosed with a different psychiatric condition. In this study they found that their
experimental manipulation (the administration of a placebo pill) significantly reduced the severity of
reported symptoms in individuals diagnosed with social phobia and panic disorder, but not in
individuals diagnosed with OCD.
This example illustrates how useful the mixed design can be when attempting to assess how individuals
with different diagnoses or groups of symptoms will react to an experimental manipulation (such as a
treatment intervention). However, we must always be aware of the fact that one of the variables in a
mixed design (in this case the diagnostic groups) is not manipulated, and so we cannot infer a direct
causal relationship between the diagnostic category and the effects of the manipulation. For example, in
Figure 3.2 we cannot infer that the failure of the OCD group to improve after being given a placebo is
caused by the specific fact that they are suffering from OCD because we have not explicitly manipulated
that variable. It could be that some other variable related to OCD is causing the failure to respond,
such as having less faith in drug treatments generally, or individuals with OCD may be more resistant to
any treatment than those in the other groups (see also Focus Point 3.3).
FIGURE 3.2 Example of a Mixed Design. Differential response to placebo among patients with social phobia, panic
disorder, and obsessive‐compulsive disorder. American Journal of Psychiatry, 161, 1485–1487.
From Huppert, Schultz, Foa, & Barlow (2004).
Natural experiments Research which allows researchers to observe the effects on behaviour
of a naturally occurring ‘manipulation’ (such as an earthquake).
Other variables that may play a part in the development of psychopathology include poverty and social
deprivation, and these are clearly factors that we could not easily manipulate in a controlled experiment.
However, Costello, Compton, Keeler, and Angold (2003) took advantage of the opening of a casino in
an American Indian reservation to study how poverty and conduct disorder in children might be linked
(see Chapter 16, Section 16.2.2). The introduction of the casino provided income that moved many of
the local families out of poverty, and Costello et al. found that this resulted in a significant decrease in
the symptoms of conduct disorder in local children—but only in those children whose families had
benefited financially from the introduction of the casino, suggesting either a direct or indirect link
between poverty and symptoms of childhood conduct disorder.
PHOTO 3.1 The Grenfell Tower fire 2017. Natural disasters and accidents—such as the 2017 Grenfell Tower fire in
London—can be studied as ‘natural’ experiments in the sense that they allow clinical psychology researchers to collect data
on the psychological effects of events that we would not be able to manipulate in laboratory settings (e.g., Strelitz, Lawrence,
Lyons‐Amos, & Macey, 2018).
Case studies
Before the development of sophisticated research designs, the case study was one of the most widely
used methods of collecting information about psychopathology, and knowledge collected in this way
often served as the basis for the development of early theories of psychopathology. One famous
exponent of the case study was Sigmund Freud himself, and many important features of psychoanalytic
theory were based on Freud's detailed observation and analysis of individual cases. One such example is
the famous case of Little Hans, a 5‐year‐old boy who had a fear of horses. In Chapter 6, Focus Point 6.1
describes how Freud studied this single case in detail and how it enabled him to develop his view that
many childhood fears were caused by a subconscious Oedipus Complex. In a different example in the
1940s, case studies of disturbed children provided the Austrian psychiatrist Leo Kanner with a set of
observations indicating a consistent set of symptoms that he called infantile autism, and which gave rise to
the symptom classification that we currently know as autistic spectrum disorder (see Chapter 17).
Case studies are valuable in a number of different circumstances. They are useful when there are only a
few instances of a particular psychopathology available for study. This was the case when dissociative
identity disorder (DID) (multiple personalities) was first reported as a specific disorder in the 1950s and
1960s, and an example of the use of case histories in the first descriptions of this disorder is provided in
Case History 14.1 in Chapter 14. Case studies are also valuable for providing new insights into existing
psychopathologies, and the detailed information that a case study can offer can often provide new ways
of looking at a particular problem and new facts that can subsequently be subjected to more rigorous
research methods (Davison & Lazarus, 1995); Kanner's discovery of infantile autism through meticulous
case studies of individual children is one such instance. The case study can also
provide detailed information that may disprove existing theories. We saw in Section 3.1.2 that scientific
hypotheses can often be refuted or falsified by a single finding, and case histories are capable of
providing individual findings that are inconsistent with existing theories or explanations of a
psychopathology. For example, some theories of eating disorders such as anorexia nervosa propose that
dissatisfaction with body shape is a critical factor in developing an eating disorder. However, it would
only take one case history describing an individual who developed anorexia without exhibiting any body
dissatisfaction to question the universality of this theory.
Despite these benefits, the case study also has a significant number of limitations. First, and most
important, case studies lack the objectivity and control provided by many other research methods. For
example, the information collected by a clinical researcher in a case study is likely to be significantly
influenced by that clinician's theoretical orientation. Arguably, the detailed information on Little Hans
collected by Freud was significantly influenced by Freud's own theoretical views on psychopathology,
and it was quite likely that he collected and used only that information that was consistent with his
existing views. Freud clearly spent much time finding out about Little Hans' childhood whereas more
cognitively or behaviourally oriented psychologists would focus on current cognitions or those current
environmental factors that might be maintaining Little Hans' behaviour (see Chapter 1, Section 1.3.2).
Second, case studies are usually low on external validity. That is, the findings from one case are
rarely generalisable to other cases. For instance, because of the subjective nature of the information
collected by a clinician in a case study, how can we be sure the supposed causes of psychopathology in
that case study will also be true for other individuals with similar psychopathologies?
external validity The extent to which the results of a study can be extrapolated to other
situations.
Finally, we have just argued that the case study can be valuable in providing evidence that could
disprove a theory, but because of the uncontrolled way in which case studies are collected it is not
particularly useful for providing evidence to actually support theories. For example, a case study may
indicate that a young woman with an eating disorder is dissatisfied with her body shape. This is
information that is consistent with theories of eating disorders that assume a role for body dissatisfaction,
but it is not evidence that differentially favours that theory because the case study does not (a) rule out
other explanations, or (b) indicate that body dissatisfaction plays a critical role in causing the eating
disorder.
Single‐case experiments
The single‐case experiment has a particular value in psychopathology research and is used relatively
frequently. The main value of this method is that it enables the researcher (a) to undertake an
experimental manipulation (and so potentially make some inferences about causal relationships between
variables), and (b) to use one individual as both experimental and control participant. There is a
particular advantage to using a single participant and subjecting that individual to both experimental
and control conditions. First, in many psychopathology studies the use of a control group may mean
denying individual participants a treatment that they need. For example, if a researcher is attempting to
assess the efficacy of a particular treatment, they would have to compare the treatment with a control
group who did not receive that treatment. This obviously raises ethical issues about withholding
treatment from clients who may benefit from it. Second, some psychopathologies are quite rare, and it
can be quite difficult to gather enough participants to form groups of experimental and control
participants, and conducting an experiment on a single participant may be a necessity.
The single case experiment allows the experimenter to take some baseline measures of behaviour (the
control condition) before introducing the experimental manipulation (the experimental condition), and
behaviour during baseline can then be compared with behaviour following the manipulation. Most
single‐case experiments use variations of what are known as the ABA or ABAB design. In the ABA
design, an initial baseline stage involves the observation and measurement of behaviour without any
intervention (A). This is then followed by a treatment or manipulation stage in which the experimental
manipulation is introduced and its effect on behaviour observed and measured (B). Finally, a
return‐to‐baseline stage is introduced (A) in which behaviour is once more observed in the absence
of the treatment or manipulation. The second baseline stage is included to ensure that any behaviour
change that occurs in stage B is caused by the manipulation and not any confounding factor such as a
natural drift in behaviour over time. In the ABAB design (sometimes known as a reversal design), a
second treatment or manipulation stage is introduced and provides extra power in demonstrating that
any changes in behaviour are explicitly due to the manipulation or treatment. Figure 3.3 provides an
example of the use of an ABAB design. This demonstrates the effectiveness of providing a social story
conveying information about appropriate mealtime behaviour for an individual with autism spectrum
disorder (Bledsoe et al., 2007). In this example, the effectiveness of the manipulation was demonstrated
by the fact that behaviours returned to baseline levels following the withdrawal of the manipulation (the
second A stage), and across all four stages the frequency of the measured behaviour fluctuated in
accordance with whether the experimental manipulation was present (B) or not (A).
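The logic of comparing behaviour across the four phases can be sketched in code. This is an illustrative sketch only, using invented daily counts (not the Bledsoe et al. data): if the target behaviour changes in both B phases and returns towards baseline in the second A phase, the manipulation, rather than natural drift over time, is the more plausible cause.

```python
# Hypothetical ABAB data: daily counts of a target behaviour in each phase.
from statistics import mean

phases = {
    "A1 (baseline)":     [8, 7, 9, 8],
    "B1 (intervention)": [4, 3, 2, 3],
    "A2 (baseline)":     [7, 8, 8, 7],
    "B2 (intervention)": [3, 2, 2, 1],
}

# Summarise each phase: a drop in both B phases, with recovery during the
# second A phase, is the pattern that supports a treatment effect.
for label, counts in phases.items():
    print(f"{label}: mean = {mean(counts):.1f}")
```

In practice single‐case data are usually inspected visually, phase by phase, rather than analysed statistically; the phase means here simply make the reversal pattern explicit.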
ABA design A single-case experiment which involves an initial baseline stage of observation
and measurement of behaviour without any intervention (A), followed by a treatment or
manipulation stage where the experimental manipulation is introduced and its effect on
behaviour observed and measured (B). A final return‐to‐baseline stage is then introduced (A) in
which behaviour is once more observed in the absence of the treatment or manipulation.
FIGURE 3.3 Example of a single‐case experimental ABAB design. The participant in this study was a 13‐year‐old
male with Asperger's syndrome and attention deficit hyperactivity disorder (ADHD) (see Chapter 17) who exhibited a
number of eating‐related problems (e.g., talking with mouth full, spilling food, talking in a loud voice, etc.). Days 1–7
show the baseline levels of spills (a ‘bad’ response) and mouthwipes (a ‘good’ response) (the first ‘A’ phase). The
intervention used (phase ‘B’) was a social story provided to the participant to help him improve his eating habits. The figure
shows how good eating behaviours tended to increase and bad behaviours tended to decrease in frequency during the
intervention phases, but return to normal during baseline phases.
From Bledsoe et al. (2007).
ABAB design A single-case experiment, similar to the ABA design, with the addition of a
second treatment or manipulation stage, providing extra power in demonstrating that any
changes in behaviour are explicitly due to the manipulation or treatment.
One disadvantage to the ABAB design is that it alternates periods of treatment with non‐treatment, and
this may be problematic if the study is assessing the effectiveness of a treatment that has important
benefits for the participant (e.g., it prevents self‐injurious behaviour or alleviates distress). This can be
overcome by using a multiple‐baseline design (e.g., Coon & Rapp, 2017). There are two variations
to this procedure: (a) using a single participant, the researcher can select two or more behaviours to
measure and can target the treatment or manipulation on one behaviour but allow the other behaviours
to act as control comparisons; or (b) the researcher can use multiple participants by first taking baseline
measures from each (stage A), and then introducing the treatment or manipulation (B) successively
across the participants. The multiple baseline design means that each individual within the study can
receive the treatment for a maximum amount of time without compromising the experimental balance
of the study (e.g., Thompson, Kearns, & Edmonds, 2006).
While the single‐case experiment has a number of significant benefits, it too has some limitations. Most
important, it is still a single case study, so it may be difficult to generalise the results to other individuals
with similar psychopathologies—just because a treatment works for one person does not necessarily
mean it will work for another. Group designs overcome this problem by using statistical inference across
a number of participants to determine the probability that the findings from the study will be
generalisable to a larger population. However, the problem of generalisability can be overcome to some
extent by using more than one participant. If the treatment or manipulation is effective across more
than one participant then this increases the chances that it will be generalisable to other individuals.
systematic review A review of a clearly formulated question that uses systematic and explicit
methods to identify, select, and critically appraise relevant research, and to collect and analyse
data from the studies that are included in the review.
In addition to a systematic review, a meta‐analysis attempts to detect trends across studies found through
systematic review that may have used different procedures, different numbers of participants, different
types of control procedures, and different forms of measurement, and it does this by comparing effect
sizes across studies. An effect size is an objective and standardised measure of the magnitude of the
effect observed in a study (i.e., the difference in measured outcome between participants in a treatment
or experimental group and those in appropriate control conditions) (see Focus Point 3.4), and the fact
that it is standardised means that we can use this measure to compare the outcomes of studies that may
have used different forms of measurement. Meta‐analyses are now a widely accepted way of
overviewing an area of studies that address the same or a similar research issue, and are particularly
popular as a statistical tool for assessing the effectiveness of interventions for psychopathology (for
examples see Cuijpers, Cristea, Karyotaki, Reijnders, & Huibers, 2016; De Maat, Dekker, Schoevers, &
De Jonghe, 2006; Hanrahan, Field, Jones, & Davey, 2013).
effect size An objective and standardised measure of the magnitude of the effect observed in a
research study.
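The arithmetic behind these two ideas can be made concrete. The sketch below uses hypothetical numbers, not figures from any study cited here: it first computes Cohen's d (one common standardised effect size, a mean difference divided by the pooled standard deviation) for a single imaginary treatment study, and then combines d values from three imaginary studies using a simple fixed‐effect, inverse‐variance weighting, which is one of several combination rules used in meta‐analysis.

```python
# Illustrative sketch with invented numbers: computing and pooling effect sizes.
import math

def cohens_d(mean_tx, mean_ctrl, sd_tx, sd_ctrl, n_tx, n_ctrl):
    """Standardised mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_tx - 1) * sd_tx**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_tx + n_ctrl - 2))
    return (mean_tx - mean_ctrl) / pooled_sd

# One hypothetical treatment study: symptom-improvement scores.
d = cohens_d(mean_tx=12.0, mean_ctrl=8.0, sd_tx=5.0, sd_ctrl=5.0,
             n_tx=30, n_ctrl=30)
print(f"Cohen's d = {d:.2f}")  # 0.80, conventionally a 'large' effect

# Three hypothetical studies as (d, variance of d) pairs; each study is
# weighted by the inverse of its variance so more precise studies count more.
effects = [(0.80, 0.04), (0.45, 0.02), (0.60, 0.09)]
weights = [1 / v for _, v in effects]
pooled = sum(w * d_i for (d_i, _), w in zip(effects, weights)) / sum(weights)
print(f"Pooled effect size = {pooled:.2f}")
```

Because d is standardised, the three studies could have used entirely different outcome measures and still be combined in this way, which is precisely what allows a meta‐analysis to compare studies with different forms of measurement.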
While many meta‐analyses have been carried out specifically on the effectiveness of individual
treatments and interventions, the basis of comparison can be other factors such as type of
psychopathology being treated or the comparison of drug treatments generally versus psychotherapy
interventions. One of the earliest meta‐analyses was a large‐scale study carried out by Smith, Glass, &
Miller (1980) assessing whether psychotherapies were more effective than no treatment at all. From the
results of their meta‐analyses they concluded that (a) a very wide range of psychotherapies were more
effective at reducing symptoms of psychopathology than no treatment at all, and (b) perhaps more
controversially, that effect sizes did not differ significantly across different types of psychotherapies—
implying that all psychotherapies were equally effective (see also Focus Point 4.4)!
Nevertheless, while a meta‐analysis may seem like an objective solution to the problem of reviewing the
findings from groups of studies, this method too has its limitations. First, meta‐analyses frequently rely
almost entirely on analysing the results of published studies, and published studies are much more likely
to have significant results than nonsignificant results (Dickersin, Min, & Meinert, 1992). This means that
meta‐analyses are likely to overestimate mean effect sizes because they are unlikely to include
unpublished studies that are probably nonsignificant. The result is that they are probably biased towards
claiming that a variable or treatment is effective when it may not be (Field, 2013). Second, effect sizes
will be influenced by the quality of the research (e.g., whether the control conditions are adequate or
whether outcome measures are accurate and sensitive), but meta‐analyses include all studies equally and
do not take into account the quality of individual studies. The researcher undertaking a meta‐analysis
can overcome this problem by comparing effect sizes in ‘well‐conducted’ and ‘badly conducted’ studies
(Field, 2013), but this then involves the researcher in making some subjective judgements about what is
‘good’ and ‘bad’ research (Eysenck, 1994). There is even the possibility that meta‐analyses might
become a self‐perpetuating form of analysis, with at least some studies now attempting meta‐analyses of
meta‐analyses (e.g., Butler, Chapman, Forman, & Beck, 2006)!
FIGURE 3.5 An example of a flow chart for recording and reporting how studies for a systematic review are sourced
and selected.
From Moher et al. (2009). For more information, visit http://www.prisma‐statement.org.
3.3.10 Qualitative Methods
So far we have mainly discussed those research methods that place an important emphasis on accurate
and valid measurement of behaviour and attempt to draw conclusions from their studies on the basis of
statistical inference. These methods tend to be collectively known as quantitative methods, but there
is a growing body of research methodologies in clinical psychology that place less emphasis on exact
measurement and statistical analysis, and these are known as qualitative methods. Instead of
emphasising mathematical analyses of data, the raw material for qualitative research is ordinary
language, and any analysis is verbal rather than statistical. The raw data in qualitative studies are
usually the participant's own descriptions of themselves, their experiences, their feelings and thoughts,
their ways of communicating with others, and their ways of understanding the world. Study samples are
often small, and data are collected using unstructured or semi‐structured interview techniques that can
be analysed in a variety of nonstatistical ways. Qualitative methods are particularly suited to clinical
psychology research because they enable the researcher to gain an insight into the full experience of
psychopathology, including the sufferer's feelings, ways of coping, and the specific ramifications that the
psychopathology has on everyday life (see Research Methods in Clinical Psychology Box 3.1). In recent
years, qualitative methods have provided information relevant to scale development, informed theories
of psychopathology, and provided explanations for unusual research findings and unusual case histories
(Harper, 2017; Harper & Thompson, 2011; Nelson & Quintana, 2005; Rennie, Watson, & Monteiro,
2002; Willott & Larkin, 2011).
quantitative methods Research methods that place an important emphasis on accurate and
valid measurement of behaviour and attempt to draw conclusions from their studies on the
basis of statistical inference.
qualitative methods Research methods that rely on the analysis of verbal reports rather than
on statistical analyses of quantifiable data.
AIMS
To explore the situation of dental phobic patients and to investigate (a) how their dental phobia
interferes with their normal routines, their daily functioning and their social activities and
relationships, (b) what factors contribute to the maintenance of their phobia, and (c) how they
cope with their fear.
STUDY SAMPLE
18 patients applying for treatment at a specialised dental fear clinic in Göteborg, Sweden. All
patients were currently refusing dental treatment because of their phobia.
IN‐DEPTH INTERVIEWS
Audiotaped, open‐ended interviews were conducted with each participant. The purpose of
using open‐ended interviews was to explore the situation of dental phobics as expressed by the
participants themselves. An interview guide was used as a basic checklist to make sure that
relevant topics were covered. These included onset of dental fear, family, experiences in dental
care, health and effects on everyday life, and coping strategies. Interviews were introduced with
questions such as ‘Does your dental fear have an impact on your daily life?’, ‘In what way?’,
‘What do you do?’, ‘feel?’, etc.
ETHICAL ISSUES
It was stressed that participation was voluntary, all data collected would be confidential, and the
participant had the right to end participation at any time. All participants completed and signed
an informed consent form.
ANALYSIS OF DATA
Interview transcripts were analysed using grounded theory (see Section 3.3.10). The aim of this
method is to focus on different qualities of a phenomenon in order to generate a model or a
theory. Different qualities of phenomena might include psychosocial processes, existing
problems caused by dental phobia, how participants coped with their problems, etc. This
process should be conducted with the original aims of the study clearly in mind. The interviews
were analysed line by line and broken down into segments reflecting their content; segments
with similar contents were then grouped together to form more abstract categories.
CONCLUSIONS
This analysis allowed the researchers to construct a model or theory of the experience of dental
phobia which is represented schematically below. Four main categories of experience were
developed: threat to self‐respect and well‐being, avoidance, readiness to act, and ambivalence in
coping. This provides a rich description of how dental fear affects the daily lives of these
individuals and how social and psychological factors interact to determine how they cope with
this fear.
Taken from Abrahamsson, Berggren, Hallberg, and Carlsson (2002).
Barker, Pistrang, and Elliott (2002) provide a succinct illustration of the difference between qualitative
and quantitative research:
A simplified illustration of the difference between the quantitative and the qualitative approach is shown in the differing
responses to the question ‘How are you feeling today?’ A quantitative oriented researcher might ask the participant to
respond on a seven‐point scale, ranging from 1=‘very unhappy’ to 7=‘very happy’, and receive an answer of 5,
signifying ‘somewhat happy’. A qualitative researcher might ask the same person the same question, ‘How are you
feeling today?’, but request an open‐ended answer, which could run something like ‘Not too bad, although my knee is
hurting me a little, and I've just had an argument with my boyfriend. On the other hand, I think I might be up for
promotion at work, so I'm excited about that’. In other words, the quantitative approach yields data which are relatively
simple to process, but are limited in depth and hide ambiguities; the qualitative approach yields a potentially large
quantity of rich, complex data which may be difficult and time consuming to analyse.
(Barker et al., 2002, p. 73)
This example shows how qualitative methods are nonquantitative, usually open ended (in the sense that
the researcher does not know before the study exactly what data they may collect), and enable the
researcher to begin to understand an individual's lived experiences, the feelings they have about their
experiences, and the perceptions and meaning they give to their experiences (Nelson & Poulin, 1997;
Polkinghorne, 1983). Given these characteristics, a typical qualitative study will involve detailed
interviewing of participants to identify themes involving feelings and the meaning that those
participants give to their feelings.
The advantages of using qualitative methods are: (a) some aspects of psychopathology are difficult to
express numerically, and a qualitative approach allows data to be collected about more complex aspects
of experience; (b) they permit intensive and in‐depth study of individuals or small groups of individuals;
(c) because interviewing techniques are usually open ended, the researcher may discover interesting
things about a psychopathology that they were not originally looking for; and (d) they can be an
extremely valuable source of information at the outset of a research programme and provide the
researcher with a rich source of information which may lead them to construct hypotheses suitable for
study using quantitative methods.
Conducting and analysing qualitative methods
Qualitative studies are not entirely unstructured, and qualitative techniques specify ways in which data
should be collected and analysed. First, unlike quantitative methods that tend to emphasise the random
selection of participants and allocation to experimental groups, qualitative methods tend to deliberately
specify groups of participants for sampling depending on the phenomenon or psychopathology the
researcher is interested in. For example, these may include individuals who have suffered childhood
abuse, families with a member who is suffering a mental health problem, parents of autistic children,
etc. (Creswell, 1998). Once selected, participants will then usually take part in a semi‐structured, open‐ended interview in a relaxed and comfortable interaction (McGrath, Palmgren, & Liljedahl, 2018). All
interview questions would normally be related back to the original research question(s) posed prior to
the study. For instance, a research question might be ‘How do individuals with panic disorder cope with
day‐to‐day living?’. In this example, the interviewer can ask very general questions or more specific
questions that are derived from the original research question. A general question might be ‘What
problems do you encounter each day because of your panic attacks, and how do you cope with them?’.
A more specific question might be ‘How do you feel about not being able to leave the house because of
the possibility you might have a panic attack?’ In this kind of structure, the participant has the
opportunity to respond to both general and specific questions. The general questions allow the
participant to create their own picture of their experiences, and the specific questions allow the
researcher to obtain detailed information that is relevant to the original research question.
Once detailed responses from the interview have been collected, the researcher has the task of making
sense of the data, picking out consistent themes that emerge in the participant's responding, and
deciding how these themes might relate to the original research question that was posed. The first step is
to break up the interview transcript into manageable and meaningful units. There are a number of
ways to do this (Giorgi, 1985; Merleau‐Ponty, 1962), but for simplicity we will describe a commonly
used approach known as grounded theory (see Heydarian, 2016, for a summary of developing theory
with the grounded theory approach). Grounded theory is an approach to qualitative analysis that was
developed by Glaser & Strauss (1967). It involves identifying consistent categories or themes within the
data, and then building on these to provide more abstract theoretical insights into the phenomenon
being studied. Research Methods Box 3.1 provides a detailed specific example of how grounded theory
has been used to understand how dental phobics cope with their psychopathology and how it affects
their day‐to‐day living. As we can see, this study was able to identify a number of consistent themes that
emerged from the interview data and provided a rich insight into the everyday experiences and feelings
of individuals with dental phobia. The study also provided some higher‐level theoretical insights by
suggesting how several psychological and social factors interact to determine how dental phobics cope
with their fear (Abrahamsson et al., 2002). Grounded theory can be used with data collected in a
number of forms, including interviews, focus groups, observation of participants, and diary material. It
is also an approach that allows a constant dynamic interaction between research and theory. For
example, the study reported in Research Methods Box 3.1 provided some theoretical insights into how
dental phobics coped with their fear; this theoretical insight can then provide the basis of a refined
research question and a subsequent qualitative study pursuing this issue in further detail.
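The grouping step described above—collecting transcript segments with similar codes into more abstract categories—can be sketched mechanically. This is only an illustration: the segments are invented, and the codes simply echo the category labels from the dental-phobia study for familiarity. Real grounded-theory coding is an interpretive, iterative process, not a mechanical tally.

```python
# Illustrative sketch (invented data): grouping coded interview segments
# into categories, the organisational step of a grounded-theory analysis.
from collections import defaultdict

# Each tuple: (code assigned by the researcher, the transcript segment)
coded_segments = [
    ("avoidance", "I cancel appointments at the last minute"),
    ("threat to self-respect", "I feel ashamed of the state of my teeth"),
    ("avoidance", "I haven't seen a dentist in ten years"),
    ("ambivalence in coping", "I know I should go, but I just can't"),
    ("threat to self-respect", "I hide my mouth when I laugh"),
]

# Segments sharing a code are collected together; the researcher then works
# upwards from these groupings towards a model or theory.
categories = defaultdict(list)
for code, segment in coded_segments:
    categories[code].append(segment)

for code, segments in categories.items():
    print(f"{code}: {len(segments)} segment(s)")
```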
Summary
Qualitative methods lend themselves particularly well to understanding and describing many aspects of
psychopathology, and are becoming increasingly used in clinical psychology research. They are useful
for collecting data on everyday feelings and experiences associated with psychopathology, and data
collected in this way can make a significant contribution to theory. In this section of the chapter it has
not been possible to convey the full range of qualitative methods available to the researcher, nor to
convey the important philosophical and epistemological underpinnings of many of these techniques (see
Howitt, 2019; Sullivan & Forrester, 2018). However, qualitative methods are not just an alternative to
quantitative methods; the two can be combined in a useful and productive way in clinical psychology
research. Examples include using qualitative data to clarify quantitative findings, beginning research in a
new area with qualitative research but moving this on using quantitative methods, or using qualitative
data to develop quantitative measures (Barker et al., 2015).
Obtaining full informed consent of a participant also becomes somewhat problematic if informing the
participant of all the details of the study is likely to significantly affect the results. For example,
participants who are in a control condition in many drug treatment studies are given placebo pills to
assess what improvement might occur if they believe they are receiving a drug but in fact are not. This
involves some deception on the part of the researcher in the sense that they have not told those taking
the placebo that it is not an active drug. At the very least, this means that the participant is not being
given all the information about what is happening in a study, and if this is the case, can they then make
an informed decision about whether to participate? This is a moot point, and it is important because
many psychological studies depend for the validity of their findings on deceptions of this kind. Many
researchers overcome this problem by withholding some information from participants if providing that
information is likely to affect the outcome of the study. They will then offer the participant a full
debriefing at the end of the study, explaining any deception and offering any withheld information. If
the participant is unhappy about this, the researcher can then offer them the opportunity to withdraw
their consent to use their data (Bersoff & Bersoff, 1999).
Finally, it is worth reiterating that all efforts should be made to ensure that a participant's involvement in
a study should be truly voluntary. There should be no explicit or implicit coercion. How often has a
student trying to finish off their undergraduate project gone into the corridor or coffee bar and tried to
persuade someone to take part in their experiment ‐ ‘I only need half a dozen more people’, they plead!
Is anyone approached in this way a genuine volunteer if they agree to take part? Probably not, because
at least some will feel obliged to participate in order to ‘help’ the student out. Similar problems in
obtaining truly voluntary consent are found in many other situations, such as studies that involve
hospital inpatients, prisoners, and even undergraduate psychology students who have to take part in
research studies to gain course credits for their degree programme! Some service providers (such as the
NHS) have developed a more imaginative and inclusive way of seeking participant involvement in
clinical research, and this is by involving service users (e.g., clients and patients) in the whole research
process, from design, through participation to the reporting of findings. Also, at a national level,
INVOLVE is a group dedicated to facilitating public participation in research designed to improve well‐
being and reduce social problems generally (www.involve.org.uk).
Privacy The right of participants to decide not to provide some forms of information to the
researcher if they so wish (e.g. their age or sexual orientation).
However, issues of confidentiality and anonymity become problematic when the participant discloses
information about illegal activities or events or circumstances that may be detrimental to an individual's
psychological or physical health. For example, what should a researcher do if a participant tells them
about suicidal intentions, serious drug abuse, criminal activities, physical or sexual abuse, etc.? Certainly,
a researcher has a legal and moral obligation to consider appropriate action if they believe a crime has
been committed or is intended, and in some countries it is mandatory by law, for example, to report
information about criminal activities such as child abuse (Becker‐Blease & Freyd, 2006). Perhaps it is
important to be clear that confidentiality is not the same as secrecy, and is therefore not absolute. If the
researcher believes that a study might reveal information about illegal or immoral activities, then they
might inform participants at the outset of the study that (a) confidentiality is not absolute, and (b) the
researcher will inform the participant if confidentiality is broken. However, providing such information
at the outset of a study is likely to mean that participants will be significantly less willing to provide
sensitive information (Bersoff & Bersoff, 1999; Koocher, 2013).
Finally, what should a researcher do when a participant provides information indicating that they are likely to
harm themselves or others or are seriously distressed? This obviously requires a judgement on the part
of the researcher, and no one can morally turn a blind eye knowing that others may be harmed or an
individual is in a state of life‐threatening distress. Because of their knowledge of psychopathology and
the provision of treatments, most clinical psychology researchers are usually in the privileged position of
being able to offer at least some kind of support and guidance to those disclosing information indicating
serious distress. As a consequence, a researcher may be able to suggest treatment or referral to an
appropriate support service immediately after the study.
SELF‐TEST QUESTIONS
What is informed consent?
What ethical issues need to be considered when a research study may cause distress to a
participant or lead to the withholding of benefits?
How should issues of privacy and confidentiality be considered when designing and
conducting a research study?
SECTION SUMMARY
CHAPTER OUTLINE
4.1 THE NATURE AND FUNCTION OF TREATMENTS FOR
PSYCHOPATHOLOGY
4.2 EVALUATING TREATMENT
4.3 TREATING PSYCHOPATHOLOGY REVIEWED
LEARNING OUTCOMES
When you have completed this chapter, you should be able to:
1. Describe some of the reasons for wanting to treat psychopathology symptoms.
2. Describe and compare and contrast the basic theoretical principles on which at least four
different types of psychotherapy are based.
3. Describe and evaluate at least three to four different modes of delivery for treatments of
psychopathology.
4. Critically assess methods for determining the effectiveness of treatments for
psychopathology.
I was a 22‐year‐old trainee working for a publishing company in London, and I was obsessed with food. I made
a pact with myself to limit myself to less than 700 calories a day. This worked well for a while, but then I
started binge eating, and my fear of gaining weight led me to make myself sick. Sometimes up to 5 or 6 times a
day. This left me totally drained—both emotionally and physically, and my relationship with my partner began to
go downhill rapidly. I really hated myself, and I felt fat and disgusting most days. If only I felt thinner I would
feel better about myself. My GP eventually referred me to a clinical psychologist, who helped me to understand how
my thinking was just plain wrong. He explained to me how I evaluated my self‐worth purely on the basis of my
weight and body shape. My thinking was also ‘black and white’—I believed that foods were either ‘good’ or if
not, they were ‘bad’. During therapy I learned to identify and challenge my irrational thoughts about food and
eating; this helped me to begin to eat relatively normally again, and I began to feel less anxious and worthless.
What amazed me most was that eating normally didn't mean I put on weight, and I felt in control again—the
first time for years. All this was so wonderful that I became anxious about the possibility of therapy ending and that
I'd simply go back to starving and bingeing. But I was encouraged to practise a number of coping strategies and
learned what I should do in circumstances where I felt I might relapse back into my old ways.
Elly's Story
Introduction
Psychopathology can take many forms and involve anxiety, depression, worthlessness, guilt, and feelings
of lack of control, amongst others. For many people these feelings become so intense that they cause
personal distress and significantly impair normal daily functioning. Some people are able to deploy
adaptive coping strategies that allow them to successfully negotiate such periods in their life (e.g., by
seeking help and support from friends and family, or using problem‐solving strategies to deal with life
problems that may be causing their symptoms). Others may be less able to cope constructively and
choose less adaptive means of dealing with their symptoms, such as resorting to substance abuse and
dependency or deliberate self‐harm. Whatever route an individual may take, the distress and disruption
that symptoms of psychopathology cause will often lead an individual to seek professional help and
support for their problems. The first port of call is usually the individual's doctor or GP, and the GP
may be able to offer sufficient help to deal with acute bouts of psychopathology such as those involving
depression, stress and anxiety‐based problems. In most cases, this support will usually be in the form of
suitable medication, but it may also take the form of access to stress‐management courses, short‐term
counselling or psychotherapy, access to self‐help information or even computerised cognitive behaviour
therapy (CCBT) (e.g., Andersson, 2016; Fairburn & Patel, 2017). In other cases, it may be necessary for
the individual to be referred for more specific and specialised treatment, and the nature of this
treatment may often depend on the nature and severity of that person's symptoms. This is a fairly
standardised route by which individuals suffering psychopathology come into contact with the treatment
methods required to alleviate their symptoms and their distress. Others may simply decide to bypass the
health services available in their community and directly approach an accredited counsellor or
psychotherapist who can privately supply the treatment services they require. Whichever route is
followed, the aim is to find a suitable specialist who can help the individual to recover by successfully
alleviating the symptoms of psychopathology and easing the distress that is experienced.
palliative effect The reduction of the severity of symptoms and alleviation of distress.
The treatment that is provided for a psychopathology will depend on at least two factors: (a) the
theoretical orientation and training of the therapist, and (b) the nature of the psychopathology. First, a
therapist will tend to adopt those treatment practices that they have most experience with and were
originally trained to use. This will often involve therapies with a specific theoretical approach (e.g., a
psychodynamic approach, a client‐centred approach, a cognitive approach or a behavioural approach—
see Section 1.3.2 in Chapter 1), and these theoretical approaches will not just advocate different
treatment procedures but will also advocate quite different approaches to understanding and explaining
psychopathology. Most accredited therapists will now also have to demonstrate that they have
periodically engaged in continuing professional development (CPD). That is, they must
demonstrate that they regularly update their knowledge of recent developments in treatment
techniques. If a therapist is unable to demonstrate that they are actively engaged in CPD, then they may
be in danger of losing their status as a legally registered practitioner. This has meant that practitioners
have become much more eclectic in the types of treatment they will offer as they learn new treatment
methods through the need to demonstrate their continuing professional development. While some
practising therapists may also use the research literature as a way of updating their therapeutic skills,
most rely on information from less formal sources, such as colleagues, supervision, personal therapy,
professional newsletters, workshops and conferences (Goldfried & Wolfe, 1996; Hill, Spiegel, Hoffman,
Kivlighan, & Gelso, 2017).
Second, treatments may be chosen largely on the basis that they are effective at treating a certain type
of psychopathology. In the UK, the National Institute for Health & Clinical Excellence (NICE)
(https://www.nice.org.uk/guidance/lifestyle‐and‐wellbeing/mental‐health‐and‐wellbeing) recommends
treatments for specific psychopathologies on the basis that their effectiveness is evidence‐based and
empirically supported by scientifically rigorous research (see also Focus Point 3.2 in Chapter 3), and we
will discuss some of these recommendations in later chapters when we discuss treatment programmes
for specific psychopathologies (see Chapters 6–17). Nowadays, most types of theoretical approach have
been adapted to treat most psychopathologies, or at least some aspect of most psychopathologies, and these will be discussed in detail in the treatment sections of each ensuing chapter.
4.1.1 Theoretical Approaches to Treatment
Traditionally, popular treatments have been developed around a relatively small number of important
theoretical approaches. We discussed these theoretical approaches in some detail in Chapter 1 (Section
1.3), and you may want to return to this section in order to refresh your memory about how these
different theoretical models conceptualise and explain psychopathology. This section continues with a
summary of how these theoretical approaches are adapted to treat psychopathology.
Psychodynamic approaches
The aim of most psychodynamic therapies is to reveal unconscious conflicts that may be causing
symptoms of psychopathology. Most psychodynamic approaches assume that unconscious conflicts
develop early in life, and part of the therapy is designed to identify life events that may have caused
these unconscious conflicts. Once these important developmental life events and unconscious conflicts
have been identified, the therapist will help the client to acknowledge the existence of these conflicts,
bring them into conscious awareness, and work with the client to develop strategies for change. One
important form of psychodynamic therapy is psychoanalysis, and this is a type of therapy based on
the theoretical works of Sigmund Freud (1856–1939). The aim of psychoanalysis is to bring any
unconscious conflicts into awareness, to help the individual understand the source of these conflicts
(perhaps by identifying past experiences or discussing the nature of important relationships), and to help
the individual towards a sense of control over behaviour, feelings and attitudes. There are a number of
basic techniques used by psychoanalysts to achieve these goals:
1. Free association: Here the client is encouraged to verbalise all thoughts, feelings, and images that
come to mind while the analyst is normally seated behind them, and this process functions to bring
into awareness any unconscious conflicts or associations between thoughts and feelings.
2. Transference: Here the analyst is used as a target for emotional responses, and the client behaves
or feels towards the analyst as they would have behaved towards an important person in their lives.
This allows the client to achieve understanding of their feelings by acting out any feelings or
neuroses that they have towards that person.
Transference A technique used in psychoanalysis where the analyst is used as a target for
emotional responses: clients behave towards the analyst as they would have behaved
towards an important person in their lives.
3. Dream Analysis: Freud believed that unconscious conflicts often revealed themselves in symbolic
forms in dreams, and this made the analysis of dream content an important means of accessing
unconscious beliefs and conflicts.
4. Interpretation: Finally, the skilled psychoanalyst has to interpret information from all of the above sources, helping the client to identify important underlying conflicts and to develop ways of dealing with them.
Interpretation In psychoanalysis, helping the client to identify important underlying
conflicts.
Frosh (2012, p. 100) summarises Freud's conception of psychoanalysis as a therapeutic process in the
following way:
Psychoanalysis is a way of exploring the unconscious that might have therapeutic effects, but it is
not solely or necessarily therapeutic in its aims.
The assumption that psychological conflict arises from unconscious complexes suggests that if
psychoanalysis brings unconscious material into consciousness, it will have the effect of lessening
psychological disturbance.
Freud was cautious about the power of psychoanalysis to make a significant difference but
nevertheless believed that the movement from “unconscious to conscious” was an important step in
advancing individual well‐being as well as social life.
Psychological disturbance is caused by a complex array of phenomena, but at its core is the
relationship between anxiety and repression, which produces a variety of strategies aimed at
keeping troubling unconscious material out of awareness.
The different strategies (defences) adopted by different people and in different circumstances (see
Table 1.1, Chapter 1) characterise the various forms of psychological disturbance—neurosis,
psychosis, and so on.
Psychoanalysis as a mode of therapy aims to produce insight. Its main methods of therapeutic
activity are focussed on interpretation and transference.
As a form of treatment, psychoanalysis may involve three to five sessions a week, and, because change is expected to take place at a normal maturational rate, the full therapeutic benefits of the therapy may take anything between three and seven years to be realised. Other forms of
psychodynamic therapy may be briefer and less intensive than psychoanalysis and may draw on
techniques from other sources, such as family therapy (see Section 4.1.1). Primarily, psychoanalysis
represents a quest for self‐knowledge, where an individual's problems are viewed in the context of the
whole person, and in particular, any conflicts they may have repressed. It can be a helpful treatment for
many people with moderate to severe anxiety or depression‐based problems—especially when other,
more conventional, therapies have failed. In studies where the effects of long‐term psychoanalytic therapy have been measurable, it has been shown to be more effective only than control treatments that do not possess a specialised psychotherapy component, suggesting that the evidence for the effectiveness of long‐term psychoanalytic therapy for psychopathology is still limited and at best conflicting (e.g.,
Lindfors et al., 2019; Smit et al., 2012).
Behaviour therapy
In the 1940s and 1950s there was a growing dissatisfaction with the medical or disease model of
psychopathology and also with the unscientific approaches to psychopathology being generated by
many psychodynamic theories. These dissatisfactions led psychologists to look towards the developing
area of experimental psychology for objective knowledge that might be used to inform treatment and
therapy. The body of knowledge that psychologists turned to was that of conditioning (see Section
1.3.2, Chapter 1), and this gave rise to the development of what came to be known as behaviour
therapies. First, such therapies stressed the need to treat symptoms of psychopathology as bona fide
behavioural problems rather than the mere symptoms of some other, hidden underlying cause.
Secondly, at the time, many psychologists believed that numerous psychological disorders were the result
of what was called ‘faulty learning’, and that symptoms were acquired through simple conditioning
processes. For example, it was believed that anxiety symptoms could be acquired through classical
conditioning (see Figure 6.1), and behavioural problems might be acquired through processes of
operant conditioning—e.g., bizarre and inappropriate behaviours might be acquired because they have
been reinforced or rewarded in the past (see Focus Point 8.5). The reasoning here was that, if
psychological problems were acquired through learning, then conditioning principles could be used to
develop therapies that effectively helped the individual to ‘unlearn’ those problematic associations. Two
distinctive strands of behaviour therapy developed from these assumptions. The first was a set of
therapies based on the principles of classical conditioning and the second based on principles of
operant conditioning. While the former group of therapies continues to be known as behaviour
therapy, the latter group has also come to be known as behaviour modification or behaviour
analysis. The term behaviour therapy is often used even more eclectically nowadays to refer to any
treatment that attempts to directly change behaviour (rather than, say, cognitions), whether the
underlying principles are based on conditioning or not.
faulty learning A view that the symptoms of psychological disorders are acquired through the
learning of pathological responses.
extinction The classical conditioning principle which assumes emotional problems can be
‘unlearnt’ by disrupting the association between the anxiety‐provoking cues or situations and the
threat or traumatic outcomes with which they have become associated.
flooding A form of exposure therapy for the treatment of phobias and related disorders in
which the patient is repeatedly exposed to highly distressing stimuli.
Exposure therapy Treatment in which sufferers are helped by the therapist to confront and
experience events and stimuli relevant to their trauma and their symptoms.
Aversion therapy is another treatment based on classical conditioning but is rather different from the
preceding therapies because it attempts to condition an aversion to a stimulus or event to which the
individual is inappropriately attracted. For example, aversion therapy is most widely used in the
treatment of addictive behaviours such as alcoholism, and in these procedures the taste of alcohol is
paired with aversive outcomes (e.g., sickness‐inducing drugs) in order to condition an aversive reaction to
alcohol (e.g., Lemere & Voegtlin, 1950; Voegtlin & Lemere, 1942) (see Chapters 8 and 10 for discussion
of the use of aversion therapy in the treatment of substance abuse and paraphilias). Since the 1950s
and 1960s, this type of procedure has been used to treat a wide variety of problems, including
inappropriate or distressing sexual activities (e.g., Feldman & MacCulloch, 1965), drug and alcohol
addiction (McRae, Budney, & Brady, 2003), and even obsessions and compulsions associated with
anxiety (Lam & Steketee, 2001). Aversion therapy was popularised in the 1971 cult film ‘A Clockwork
Orange’, where the lead character's excessive violence was treated by ‘conditioning’ him to vomit
whenever he saw a violent act (Photo 4.1). However, while aversion therapy for some problems (e.g.,
alcoholism, sexual offending) has been shown to have some therapeutic gains when used in conjunction
with broader community support programmes (Azrin, 1976) or social skills training (Maletzky, 1993),
substance abuse or sexual offending responses are often very resistant to this form of treatment. There is very little evidence that aversion therapy alone has anything other than short‐lived effects (e.g., Wilson, 1978), and it does not significantly reduce reoffending in sexual offenders (Dennis et al., 2012; Marques, Wiederanders, Day, Nelson, & van Ommeren, 2005).
PHOTO 4.1 Alex, the leading character in the 1971 film A Clockwork Orange undergoes aversion therapy to cure his
violent tendencies.
In a functional analysis, the therapist attempts to identify consistencies between problematic
behaviours and their consequences—especially to try to discover whether there might be a consistent
event or consequence that appears to be maintaining the behaviour by rewarding it. For example, self‐
injurious or challenging behaviours may be maintained by a range of reinforcing consequences, such as
the attention the behaviour may attract or the sensory stimulation it provides (see Treatment in Practice
Box 17.1). Identifying the nature of the consequence allows the therapist to disrupt the reinforcement
contingency and, if necessary, reduce the frequency of that behaviour through extinction (Wacker et al.,
1990). Functional analysis has been adopted across a range of clinical settings and has been successfully
applied to managing aggressive/challenging behaviour (Delgado‐Casas, Navarro, Garcia‐Gonzalez‐
Gordon, & Marchena, 2014), tantrums (Wilder, Chen, Atwell, Pritchard, & Weinstein, 2006), attention‐
deficit‐hyperactivity disorder (Fabiano et al., 2008), depression (Ferster, 1985; Kanter, Cautilli, Busch, &
Baruch, 2011), eating problems (Meyer, 2008), and self‐injurious behaviour (Hastings & Noone, 2005).
Other influential interventions based on operant conditioning principles include the token economy,
response shaping, and behavioural self‐control. In the psychiatric setting, a token economy involves
participants receiving tokens (a generalised reinforcer) for engaging in behaviours defined by the
programme, and at a later time, these tokens can then be exchanged for a variety of reinforcing or
desired items (e.g., access to the hospital grounds, a visit to the cinema, etc.) (Hackenberg, 2018). In
psychiatric care, the token economy was first used to foster prosocial or self‐help behaviours (e.g.,
combing hair, bathing, brushing teeth, etc.) in previously withdrawn patients. However, despite the
apparent success of token economies, their use in the hospital setting has been in serious decline since
the early 1980s (Dickerson, Tenhula, & Green‐Paden, 2005). There were a number of reasons for this
decline, and these include the legal and ethical difficulties of withholding desired materials and events so
they can be used as reinforcers, and a lack of consensus on whether behaviours nurtured in token
economy schemes were maintained after the scheme ended and whether they generalised to other
environments and settings (Davey, 1998; Glynn, 1990).
Response shaping is a procedure that can be used to encourage new behaviours that are not already
occurring at a reasonable frequency. This may be a particular problem for withdrawn individuals or
individuals with restricted behavioural repertoires (such as those with severe intellectual disabilities).
However, the technique of response shaping by successive approximations is a way around this problem.
Here, the therapist will first reinforce a behaviour that does occur quite frequently and is an
approximation to the specific target response. Once this general response is established, reinforcement is
given only for closer and closer approximations to the target response.
Finally, the use of operant conditioning principles for behaviour change purposes does not have to be
overseen or administered by a therapist. The principles are quite clear and can be used by any
individual to manage their own behaviour. This personal use of operant conditioning principles has
come to be known as behavioural self‐control training (e.g., Thoresen & Mahoney, 1974), and has
since been developed into multifaceted behavioural programmes to deal with a variety of personal
problems which include addiction, habits, obsessions, and other behavioural problems (Lutzker &
Martin, 1981; Miller & Munoz, 2013). An early programme developed by Stuart (1967) provides a good
example of a multifaceted behavioural self‐control scheme designed to address obesity by managing
behaviours contributing to overeating. The main elements of this programme were (a) recording the
time and quantity of food consumption (self‐observation); (b) weighing in before each meal and before
bedtime (helping the individual to discriminate how eating might have contributed to weight gain); (c)
removal of food from all places in the house except the kitchen (so that only the kitchen comes to act as
a cue for eating); (d) pairing eating with no other activity that might make eating enjoyable, and so
reinforce it (e.g., eating should not occur while watching an enjoyable TV programme); (e) setting a
weight loss goal of 1‐2 pounds/week (setting clearly attainable goals); (f) slowing down the pace of
eating (defining appropriate responses), and (g) substituting other activities for between‐meal eating
(programming acceptable competing responses). These principles are relatively easy to apply to your
own behaviour, and Activity Box 4.1 (available on the website) provides some suggestions as to how you
might develop your own behavioural self‐control programme to promote an activity such as studying.
cognitive therapies Therapeutic interventions which seek to help the patient overcome
difficulties by identifying and changing dysfunctional thinking, behaviour, and emotional
responses. They include rational emotive therapy (RET), Beck’s cognitive therapy, and cognitive
behaviour therapy (CBT).
Rational Emotive Therapy (RET) A cognitive therapy technique developed by Albert Ellis
(1962) which addresses how people construe themselves, their life and the world.
Aaron Beck's cognitive theory of depression is outlined in more detail in Chapter 7, and from this
theory he developed a cognitive therapy for depression (see Whittington, 2019, and Strauss, 2019). Beck
argues that depression results when the individual develops a set of cognitive schemas (or beliefs) which
bias the individual towards negative interpretations of the self, the world and the future, and any
therapy for depression must therefore address these schemas, deconstruct them and replace them with
more rational schemas which do not always lead to negative interpretations. Beck's Cognitive
Therapy does this by engaging the depressed individual in an objective assessment of their beliefs, and
requires them to provide evidence for their biased views of the world. This enables the individual to
perceive their existing schemas as biased, irrational, and overgeneralised (see Section 7.1.2, Chapter 7).
Beck’s Cognitive Therapy An intervention derived from Beck’s view that depression is
maintained by a ‘negative schema’ that leads depressed individuals to hold negative views about
themselves, their future and the world (the ‘negative triad’).
Out of these early pioneering cognitive therapies developed what is now known as cognitive
behaviour therapy (CBT), which is an intervention for changing both thoughts and behaviour and
represents an umbrella term for many different therapies that share the common aim of changing both
cognitions and behaviour. A CBT intervention usually possesses most of the following characteristics: (a)
the client is encouraged to keep a diary noting the occurrence of significant events and associated
feelings, moods, and thoughts in order to demonstrate how events, moods and thoughts might be
interlinked; (b) with the help of the therapist, the client is urged to identify and challenge irrational,
dysfunctional, or biased thoughts or assumptions; (c) clients are given homework in the form of
‘behavioural experiments’ to test whether their thoughts and assumptions are accurate and rational; and
(d) clients are trained in new ways of thinking, behaving, and reacting in situations that may evoke their
psychopathology. As an example, Treatment in Practice Box 6.3 in Chapter 6 demonstrates how a
cognitive behaviour therapist would conduct an interview designed to identify and challenge irrational
and dysfunctional beliefs in an individual diagnosed with panic disorder.
cognitive behaviour therapy (CBT) An intervention for changing both thoughts and
behaviour. CBT represents an umbrella term for many different therapies that share the
common aim of changing both cognitions and behaviour.
‘Waves’ of CBT
CBT has not been a static treatment innovation, and just like any other knowledge‐based development,
new forms of CBT have evolved out of earlier ones. These progressive developments have come to be
known as ‘waves’, and at the present time we are experiencing what is called the third wave of CBT
techniques. The ‘first wave’ occurred during the 1950s and 1960s and was represented largely by
behaviour therapy techniques based on learning theory and conditioning principles (see Section 4.1.1).
The ‘second wave’ developed in the 1970s and 1980s when it became clear that what we do is not just
influenced by our learning and conditioning experiences but also by what and how we think
(cognitions), and how the way we think affects our emotions. This gave rise to the traditional forms of
CBT initially developed by therapists such as Aaron Beck and described in the previous section.
However, a ‘third wave’ or ‘third generation’ of CBT methods have developed which emphasise
mindfulness, acceptance, and a greater concern with the individual's relationship with their
psychopathology experiences and aim to reduce distress by changing the function of the experience
rather than necessarily changing the experience (for example, instead of attempting to eliminate the
hearing of voices in individuals with a diagnosis of schizophrenia, third‐wave CBT for psychosis
attempts to change the individual's beliefs about their voices and their relationship with those voices)
(Culpitt, 2018). Important examples of third‐wave CBT therapies include Dialectical Behaviour
Therapy (DBT) (to be discussed more fully in Chapter 12), MBCT, ACT, and Behavioural Activation
(BA).
Mindfulness‐based cognitive therapy (MBCT) is a direct extension of traditional CBT in which
treatments emphasise achieving a mental state characterised by present‐moment focus and
nonjudgmental awareness (Bishop et al., 2004; Kabat‐Zinn, 2003). The purpose of this is to improve
emotional well‐being by increasing awareness of how automatic cognitive and behavioural reactions to
thoughts, sensations, and emotions can cause distress. Clients are encouraged to acknowledge and
accept their thoughts and feelings, and by focussing on the present rather than the past or future, the
individual can learn to deal more effectively with life stressors and challenges that generate anxiety or
depression (Activity Box 4.2 – available on the website – provides a series of instructions for a simple 5–
10 minute mindfulness exercise that will facilitate bodily awareness). Mindfulness interventions are
considered to reduce symptoms of common mental health problems such as anxiety and depression by
countering avoidance strategies, helping the individual to respond reflectively rather than reflexively to
stressors, and reducing physical symptoms by advocating the use of meditation and yoga exercises
(Kabat‐Zinn, 1982). Since its early development, mindfulness has now been successfully applied to a
wide range of mental health problems, including anxiety and stress, depression, pain relief, post‐
traumatic stress disorder (PTSD), and psychosis (Chadwick, Hughes, Russell, Russell, & Dagnan, 2009;
Goyal et al., 2014; Vujanovic, Niles, Pietrefesa, Schmertz, & Potter, 2011; Williams & Kuyken, 2012;
Zeidan & Vago, 2017). A description of the science of mindfulness given by Mark Williams can be
found at https://www.youtube.com/watch?v=0wBZjb8u95o.
Acceptance and commitment therapy (ACT) is also a third‐wave CBT intervention that has
grown in popularity over recent years (Hayes, Strosahl, & Wilson, 2016). It is an approach that adopts
some aspects of mindfulness, but has developed more from the behaviour analysis or Skinnerian
approach to understanding behaviour (see Chapter 1, Section 1.3.2). ACT differs from traditional CBT
in that, rather than getting individuals to manage and change their thoughts and the way they think, it
teaches them to ‘just notice’, accept, and embrace private events such as thoughts (especially thoughts
that may be intrusive, distressing, or unwanted). As such, it aims to help the individual clarify their
personal values, to take action on them, and to increase their psychological flexibility (Hayes, Luoma,
Bond, Masuda, & Lillis, 2006; Zettle, 2005). Systematic reviews and meta‐analyses suggest that ACT is
probably efficacious for chronic pain disorders, depression, some psychotic symptoms, obsessive‐
compulsive disorder (OCD), and some forms of substance abuse, but effect sizes found in randomised
controlled trials (RCTs) tend to be relatively small and there may be insufficient evidence yet to
conclude that ACT for anxiety‐based problems is efficacious (Hacker, Stone, & McBeth, 2016; Öst,
2014; but see Atkins et al., 2017, for a response to these reviews by Hayes and colleagues). On the
book’s website, Activity Box 4.3 provides you with an example of how ACT attempts to help you
distance yourself from negative or distressing thoughts.
Acceptance and commitment therapy (ACT) A third wave CBT intervention that adopts
some aspects of mindfulness, but has developed more from the Skinnerian approach to
understanding behaviour.
Behavioural activation (BA) is a third‐wave therapy that encourages individuals with depression to
approach activities they may have been avoiding. The rationale is that depression is a condition caused
in part by avoiding particular activities or situations that in the past were positively reinforcing. Together
with the client, the therapist defines goals to be achieved and draws up a schedule of activities to achieve
these goals and so reestablish activities that are rewarding and pleasurable. For example, if one of the
goals of the client is to be a compassionate person, the client and therapist might draw up a schedule of
activities such as volunteering, helping out a friend, or donating to charity. BA has its origins in 1970s
behaviourism, which argued that depression was at least in part the result of the depressed individual no
longer engaging in activities that were positively reinforcing (Lewinsohn, 1974). After conducting a
component analysis of CBT, Jacobson and colleagues developed the BA approach to therapy by arguing that the behavioural activation component of CBT was sufficient to account for its success when treating depression (Jacobson et al., 1996). Subsequent meta‐analyses of randomised controlled trials
have generally reported significantly better effects of BA in treating depression than appropriate control
conditions (Ekers et al., 2014), but there is less consensus on whether BA is superior to other treatments
such as CBT or antidepressant medication (Dimidjian et al., 2016).
While ‘new‐wave’ developments bring new and exciting ways of delivering CBT across a range of
disorders, these new approaches are still being evaluated. Studies suggest that third‐wave therapies such
as MBCT, ACT, and BA should not be considered as separate from the growing body of CBT
approaches (Hofmann, Sawyer, & Fang, 2010), may have their successful therapeutic effects through similar mechanisms to CBT (Dimidjian et al., 2016; Hofmann & Asmundson, 2008), and are generally as effective as each other (Gaudiano, 2009; Öst, 2008). CBT in general is perceived as an evidence‐
based and cost‐effective form of treatment that can be successfully applied to a very broad range of
psychopathologies (Butler, Chapman, Forman, & Beck, 2006; Cuijpers, Cristea, Karyotaki, Reijnders, &
Huibers, 2016), is as effective as other forms of psychotherapy, and superior to many other
forms of psychotherapy when treating anxiety and depressive disorders (Tolin, 2010).
Humanistic therapies
Throughout the twentieth century, many psychotherapists felt that psychological therapy was becoming
too focussed on psychological and behavioural mechanisms, or on psychological structures (such as
personality), and was losing sight of both the feelings of the individual and the individual themselves. As
a consequence, a number of what are called ‘humanistic’ therapies developed, including Gestalt therapy
(Perls, 1969), existential therapies (Cooper, 2003), primal therapy (Janov, 1973), narrative therapy
(Freedman & Combs, 1996), and transpersonal therapy (Wellings & McCormick, 2000); arguably the most successful of these is client‐centred therapy (Rogers, 1961). These therapies had a number of
factors in common: (a) they espoused the need for the therapist to develop a more personal relationship
with the client in order to help the client reach a state of realisation that they can help themselves; (b)
they were holistic therapies, in that they emphasised the need to consider the ‘whole’ person and not
just those ‘bits’ of the person that manifest psychopathology; (c) therapy should be seen as a way of
enabling the individual to make their own decisions and to solve their own problems rather than
imposing structured treatments or ways of thinking on the individual; (d) humanistic therapies
espouse the need for the therapist‐client relationship to be a genuine reciprocal and empathetic one,
rather than the limited skilled professional‐referred client relationship that exists in many forms of
psychological therapy; and (e) increasing emotional awareness is a critical factor in alleviating
psychological distress, and is necessary before the client can begin to resolve life problems (Focus Point
4.1).
holistic therapies Therapies which emphasise the need to consider the ‘whole’ person, not
just those ‘bits’ of the person that manifest psychopathology.
humanistic therapies Therapies that attempt to consider the ‘whole’ person and not just the
individual symptoms of psychopathology.
FOCUS POINT 4.1 GESTALT THERAPY
Gestalt therapy is a popular existential humanistic therapy that was originally developed by
Fritz Perls and colleagues in the 1940s and 1950s (Perls, 1947; Perls, Hefferline, & Goodman,
1951). It focuses on an individual's experiences in the present moment, the therapist‐client
relationship, and the contexts in which the individual lives their life. It emphasises that the most
helpful focus of psychotherapy is on what a person is doing, thinking, and feeling at the present
moment, rather than on what was, might be, or could be. It is also a method of awareness
practice very similar to mindfulness (see Section 4.1.1) in which current perceiving, feeling and
acting are helpful to interpreting, explaining, and conceptualising experience. Much of the
Gestalt approach is about the client exploring their relationship with themselves, and one
method of achieving this is by using the role‐playing empty‐chair technique, which involves
the client addressing the empty chair as if another person was in it and acting out the two sides
of a discussion.
Gestalt therapy is often considered a good method for managing tension, depression and
anxiety, and uncontrolled outcome studies suggest that Gestalt therapy provides participants
with better emotional well‐being and a heightened sense of hope (Leung, Leung, & Ng, 2013).
At least some psychotherapists believe that the methods deployed by Gestalt therapy might be
productively integrated with more conventional interventions such as cognitive therapy
(Tonnesvang, Sommer, Hammink, & Sonne, 2010) and might make a useful contribution to
modern psychiatric practice (Clegg, 2010).
Client‐centred therapy focuses on the individual's immediate conscious experience, and critical to
this form of humanistic therapy is the creation of a therapeutic climate that allows the client to progress
from a state of rigid self‐perception to one which encourages the client to become independent, self‐
directed, and to pursue self‐growth. For Carl Rogers (1902–1987), empathy (‘putting yourself in
someone else's shoes’) was the centrally important feature of any therapist‐client relationship, and it is this
ability that is essential in guiding the client towards resolving their own life problems. Empathy has at
least two main components in this context: (a) an ability to understand and experience the client's own
feelings and personal meanings, and (b) a willingness to demonstrate unconditional positive regard
for the client. This latter feature involves valuing the client for who they are and refraining from judging
them. Another important feature of client‐centred therapy is that it is not directive. The therapist acts
primarily as an understanding listener who assists the client by offering advice only when asked. The
overriding goal is to develop the client through empathy, congruence, and unconditional positive regard
to a point where they are successful in experiencing and accepting themselves and are able to resolve
their own conflicts and difficulties.
In much the same way that psychoanalysis has evolved, client‐centred therapy has developed not just as
a therapy but also as a process for fostering personal self‐growth. The general approach places relatively
little emphasis on how the psychopathology was acquired but attempts to eliminate symptoms by
moving the client from one phenomenological state (e.g., a state of anxiety, depression, etc.) to another
(e.g., one that enables the client to view themselves as a worthy and respected individual).
Family and systemic therapies
Family therapy is a form of intervention that is increasingly used as a means of dealing
with psychopathology that may result from the relationship dynamics within the family (Dallos &
Draper, 2015). Family therapy has a number of purposes: (a) it helps to improve communications
between members of the family—especially where communication between individuals might be the
cause of psychopathology in one or more family members, (b) it can resolve specific conflicts—for
example between adolescents and their parents, and (c) it may apply systems theory (attempting to
understand the family as a social system) to treatment by trying to understand the complex relationships
and alliances that exist between family members, and then attempting to remould these relationships
into those expected in a well‐functioning family (this usually involves ensuring that the primary relationship in the family—between the two parents—is strong and functional) (Minuchin, 1985).
Family therapy A form of intervention involving family members that is helpful as a means
of dealing with psychopathology that may result from the relationship dynamics within the
family.
systems theory Approach that attempts to understand the family as a social system.
In family therapy, the therapist or family therapy team meets with those members of the family willing
to participate in discussion about a topic or problem raised by one or more members of the family. In
the case of an adolescent eating disorder, the parents may have raised the issue of how their child's
eating disorder affects family functioning, and this may be explored with the family over a series of
meetings. Family therapists are usually quite eclectic in the range of approaches they may bring to
family therapy, and these may include cognitive‐behavioural methods, psychodynamic approaches, and
systemic analyses depending on the nature of the problem and its underlying causes. In many cases,
family therapists may focus on how patterns of interaction within the family maintain the problem (e.g.,
an eating disorder) rather than trying to identify the cause (the latter may be seen as trying to allocate
blame for the problem within the family). Over a period of between 5 and 20 sessions, the family
therapist will attempt to identify family interaction patterns that the family may not be aware of, and to
suggest to family members different ways of responding to each other. Family therapy has been shown
to be an effective intervention for a number of psychopathologies in both children and adults, including
conduct disorder, substance abuse, depression, and eating disorders (Stratton, 2016; Sydow, Beher,
Schweitzer, & Retzlaff, 2010). A case example of the use of family therapy with an adolescent with an
eating disorder is provided in Treatment in Practice Box 10.1 in Chapter 10.
Drug treatments
Pharmacological or drug treatments are regularly used to alleviate some of the symptoms of
psychopathologies. They are often the first line of treatment provided by GPs and doctors to tackle
anxiety and mood‐based problems and may be sufficient to enable an individual to see through an acute
bout of anxiety or depression. Some of the most commonly used drug treatments include
antidepressant drugs to deal with symptoms of depression and mood disorder, anxiolytic drugs
to treat symptoms of anxiety and stress, and antipsychotic drugs prescribed for symptoms of
psychosis and schizophrenia.
antidepressant drugs Drug treatments intended to treat symptoms of depression and mood
disorder.
anxiolytic drugs Drug treatments intended to treat symptoms of anxiety and stress.
tricyclic antidepressants Antidepressant drugs developed in the 1960s which have their
effect by increasing the amount of norepinephrine and serotonin available for synaptic
transmission.
monoamine oxidase inhibitors (MAOIs) Antidepressants which are effective for some
people with major depression who do not respond to other antidepressants.
Comparisons of antidepressant drugs with placebo controls suggest that most antidepressants are more
effective in treating depression symptoms than placebos (e.g., Cipriani et al., 2018), but not everyone
benefits from the use of antidepressants. The Royal College of Psychiatrists estimates that 50–65% of
people will see an improvement in symptoms, but some studies suggest that only one in three depression
sufferers will achieve full symptom relief after using antidepressants (Trivedi et al., 2006), and in other
studies only around 40% of people achieve sustained recovery (Arroll, Macgillivray, Ogston, Reid, &
Sullivan, 2005; Nelson, Portea, & Leon, 2006). Some studies suggest that antidepressants are more
effective than placebos for people with moderate to severe depression during the acute phase of their
depression (Undurraga & Baldessarini, 2011) but not for those with subthreshold or mild depression
(Fournier et al., 2010). These issues are compounded by the fact that almost 40% of those who are
prescribed antidepressants stop taking the drug within the first month—often because of the side
effects (Olfson, Blanco, Liu, Moreno, & Laje, 2006). The length of antidepressant use varies, with
around half of those with a first diagnosis of depression terminating use after around 7 months, but
around a third continuing use for 1 year or more (Coupland et al., 2015). Many individuals continue with long‐term use of antidepressants, at least in some cases because their continuing need for the medication has not been adequately reviewed (Sinclair, Aucott, Lawton, Reid, & Cameron, 2014).
Finally, although antidepressants do appear to hasten recovery from an episode of depression, relapse is
common after the drug is discontinued (Reimherr, Strong, Marchant, Hedges, & Wender, 2001),
although this tendency to relapse is less pronounced with SSRIs than with other forms of antidepressant (Sim,
Lau, Sim, Sum, & Baldessarini, 2015).
Interestingly, there has been a dramatic increase in the prescribing of antidepressants in the past 10–15 years: in the UK, the number of prescriptions doubled between 2007 and 2017 (Royal College of Psychiatrists, 2019), with a similar dramatic increase being reported in
the United States (Dowrick & Frances, 2013). In 2017, one in six adults in the UK received at least one antidepressant prescription, and individuals over 60 years of age were twice as likely as those
in their twenties to be using an antidepressant (Royal College of Psychiatrists, 2019). Part of this
increase in prescribing may be due to an increase in the diagnosing of depression during this period
(e.g., Sarginson et al., 2017), and there is some evidence that the rate particularly accelerated between
2008 and 2012, during the period of financial recession. But at least some of this increase may be the
result of increased awareness of the symptoms of depression by the public generally (but see also Focus
Point 1.6 in Chapter 1 on the Medicalisation of Normality).
There is no clear conception of how antidepressants work, and the original view that antidepressants correct a chemical imbalance in the brain is an oversimplification. The assumption has been that
antidepressants target monoamine neurotransmitter function and increase serotonin or noradrenaline
availability in the brain (see Chapter 7 for discussion of the role of neurotransmitters in depression). But
there is a lack of evidence that antidepressants cure a deficiency of monoamines that is thought to be
causing depression (Healy, 2015; Huda, 2019). What is perplexing is that the main supposed effect of
antidepressants, inhibiting the reuptake of monoamines such as serotonin and noradrenaline, occurs as soon as the individual begins taking the medication, but improvements in depression symptoms
rarely begin until several weeks after drug commencement (Harmer, Duman, & Cowen, 2017). More
recent conceptions of how antidepressants work claim that these medications may help to build up
neural plasticity which may have been eroded by stress, and that this may account in part for the
delayed onset of recovery (e.g., McEwen et al., 2015). In addition, there is evidence that antidepressants
decrease processing of negative emotional stimuli and increase attention to positive emotional stimuli
(Godlewska & Harmer, 2019; Harmer, Goodwin, & Cowen, 2009), and that this eventually leads to improvement in mood, consistent with cognitive theories of depression (see Chapter 7).
TABLE 4.1 Common antidepressant medications (category; generic name; brand name; common side effects)

Selective serotonin reuptake inhibitors (SSRIs)
Fluoxetine (Prozac): feeling sick, headaches, sleep problems, diarrhoea, feeling tired
Citalopram (Cipramil): dry mouth, sweating, sleep problems, tiredness
Sertraline (Lustral): feeling sick, headaches, sleep problems, diarrhoea, dry mouth, dizziness, feeling tired

Serotonin‐noradrenaline reuptake inhibitors (SNRIs)
Duloxetine (Cymbalta, Yentreve): nausea, dry mouth, constipation, fatigue, sleep problems
Venlafaxine (Efexor): feeling sick, headaches, sweating, dry mouth, sleep problems, feeling dizzy, constipation

Tricyclic antidepressants (TCAs)
Amitriptyline (Tryptizol): constipation, nausea, appetite or weight changes, itching or rash, decreased sex drive
Imipramine (Tofranil): dry mouth, blurred vision, tiredness, dizziness, constipation, nausea, stomach cramps, weight gain/loss

Monoamine oxidase inhibitors (MAOIs)
Tranylcypromine (Parnate): vision problems, headaches, fast heartbeat, nausea, dizziness
Phenelzine (Nardil): headaches, chest pain, weight gain, agitation/unusual thoughts
benzodiazepines A group of anxiolytics which have their effect by increasing the level of the
neurotransmitter GABA at synapses in the brain.
Nowadays, the first choice medications for anxiety‐based problems are antidepressants, and in particular
SSRIs or SNRIs (Hoffman & Mathews, 2008). SSRIs and SNRIs have been shown to be an effective
and well tolerated treatment in individuals with anxiety disorders (Dell'Osso, Buoli, Baldwin, &
Altamura, 2010; Slee et al., 2019), making them superior to benzodiazepines in terms of significantly
reduced side effects and less likelihood of withdrawal symptoms following discontinuation (Schweizer,
Rickels, Case, & Greenblatt, 1990).
Group therapy
Therapy can also be undertaken in a group and not just on a one‐to‐one therapist‐client basis. Group
therapy can be useful (a) when a group of individuals share similar problems or psychopathologies (e.g.,
self‐help groups), or (b) when there is a need to treat an individual in the presence of others who might
have a role in influencing the psychopathology (e.g., family therapy). Group therapies can have a
number of advantages, especially when individuals (a) may need to work out their problems in the
presence of others (e.g., in the case of emotional problems relating to relationships, feelings of isolation,
loneliness and rejection), (b) may need comfort and support from others, and (c) may acquire
therapeutic benefit from observing and watching others. There are now many different types of group
therapy (Block & Crouch, 1987), and these include experiential groups and encounter groups
(which encourage therapy and self‐growth through disclosure and interaction) and self‐help groups
(which bring together people who share a common problem, in an attempt to share information and
help and support each other—e.g., Alcoholics Anonymous, http://www.alcoholics‐anonymous.org.uk,
see Focus Point 9.2 in Chapter 9). However, interventions that have traditionally been used only in one‐
to‐one client‐therapist situations are now being adapted to group settings, and two of these include CBT
and mindfulness (Arch et al., 2013; Wolgensinger, 2015). Adapting interventions for groups is likely to be
a cost‐effective solution for service providers as well as an effective way of helping clients to manage
symptoms of psychopathology.
encounter groups Group therapy which encourages therapy and self-growth through
disclosure and interaction.
self-help groups Group therapy which brings together people who share a common problem
in an attempt to share information and help and support one another.
Counselling
Counselling is still a developing and evolving profession that has burgeoned in the past 20–30 years,
and its expansion has partly resulted from the increasing demand for trained specialists able to provide
immediate support and treatment across a broad range of problems and client groups. Counsellors
receive specialised training in a range of support, guidance, and intervention techniques, and their levels
of training are monitored and accredited by professional bodies such as the British Association for
Counselling and Psychotherapy (BACP) in the UK, or Division 17 (Counselling Psychology) of the
American Psychological Association. Arguably, the primary task for counselling is to give the client an
opportunity to explore, discover, and clarify ways of living more satisfyingly and resourcefully, and to
‘help clients to understand and clarify their views of their lifespace, and to learn to reach their self‐
determined goals through meaningful, well‐informed choices and through resolution of problems of an
emotional or interpersonal nature.’ (Burks & Stefflre, 1979, p. 14). These definitions indicate that
counselling is a profession that aims to both promote personal growth and productivity and to alleviate
any personal problems that may reflect underlying psychopathology. In order to achieve these aims,
counsellors tend to adopt a range of theoretical approaches, with the main ones being psychodynamic, cognitive‐behavioural, and humanistic (Galbraith, 2018). Counsellors with different theoretical
orientations may often focus on different outcomes. Humanistic counsellors tend to promote self‐
acceptance and personal freedom, psychodynamic counsellors focus primarily on insight, and cognitive‐
behavioural counsellors are mainly concerned with the management of behaviour and symptoms of
psychopathology (Galbraith, 2018). Some counsellors specialise in areas such as marital breakdown,
rape, bereavement, or addictions, and their specialised roles may be recognised by the use of titles such
as mental health counsellor, marriage counsellor or student counsellor. Counselling
agencies have been established in a range of organisations to supplement community mental health
services and provide more direct and immediate access to support for vulnerable or needy individuals.
Counselling services may be directed towards people with particular medical conditions, such as those who are HIV positive or have cancer, and also towards the carers of individuals with these illnesses, and these services are
often provided by voluntary and charitable organisations set up specifically for these purposes. Even
individual companies and organisations may have set up their own in‐house counselling services to help
people through difficulties and anxieties associated with their work.
Counselling A profession that aims both to promote personal growth and productivity and to
alleviate any personal problems that may reflect underlying psychopathology.
Digital technologies
The availability of digital technologies offers new ways of providing psychological treatments for
mental health problems through computers, the Internet, mobile devices such as smartphones, and
mobile software applications (apps). The value of these types of interventions lies in the fact that they
reduce the need for continual therapist–client face‐to‐face therapeutic interactions at a time when
referrals for mental health problems are rising dramatically, and mental health services resources are
often severely stretched. In this section we cover a number of these developing technologies, including
computer‐based treatments, mental health apps, e‐therapy via the Internet, and virtual reality based
exposure.
There are already many well‐established computer‐based treatments for common mental health
problems such as anxiety and depression, and most are designed to be used on their own or with some
form of support (Fairburn & Patel, 2017). For example, because a treatment such as CBT has a highly
organised structure, it lends itself well to delivery by other modes and as a package that might be used
independently by the client. In recent years, computerised CBT (CCBT) has been developed as an
alternative to therapist‐delivered CBT, and consists of highly developed software packages that can be
delivered via an interactive computer interface on a personal computer, over the Internet or via the
telephone using interactive voice response (IVR) systems (Moritz, Schilling, Hauschildt, Schröder, &
Treszl, 2012). Examples of CCBT packages for depression and anxiety are ‘Beating the Blues’,
‘MoodGYM’, and ‘Fear Fighter’: (a) Beating the Blues® is an option for delivering computer‐based CBT
in the management of mild and moderate depression; (b) MoodGYM was developed by the Centre for
Mental Health Research at the Australian National University and is an online self‐help programme
that provides CBT training to help individuals manage symptoms of anxiety and depression
(Christensen, Griffiths, & Groves, 2004); and (c) Fear Fighter™ is an option for delivering computer‐based
CBT in the management of panic, agoraphobia and specific phobia. For example, Beating the Blues
consists of a 15‐minute introductory video and eight 1‐hour interactive sessions, including homework to
be completed between sessions. The programme helps the client identify thinking errors, challenge
negative thoughts, identify core negative beliefs, and provides help and advice on more adaptive
thinking styles (www.beatingtheblues.co.uk; see also Treatment in Practice Box 7.1).
Digital CBT is recommended by the UK National Institute for Health & Clinical Excellence as a
suitable intervention for adults and young people with mild depression (National Institute for Health
and Clinical Excellence, 2019), and early studies comparing CCBT with other forms of support and
intervention were relatively supportive, and suggested that CCBT along with other guided self‐help
programmes could be a valuable and effective way of treating individuals with common mental health
problems (Cuijpers et al., 2009; Kaltenthaler, Parry, Beverley, & Ferriter, 2008). For example, Proudfoot
et al. (2004) found that Beating the Blues provided a more effective treatment for depression and anxiety
than GP treatment as usual, and in a review of 16 studies exploring the efficacy of CCBT, Kaltenthaler,
Parry, and Beverley (2004) found that five studies showed CCBT to have equivalent outcomes to
therapist‐led CBT, and four studies found CCBT to be more effective than GP treatment as usual.
CCBT offers a number of advantages to both clients and service providers, including increased flexible
access to evidence‐based treatments, reduction of waiting lists, and savings in therapist time
(Learmonth, Trosh, Rai, Sewell, & Cavanagh, 2008; Titov, 2007).
However, there are still issues surrounding the usefulness and effectiveness of CCBT. First, clients may
often find it difficult to engage with CCBT without support. For example, clients need to have a
minimum knowledge of computers, and may often view CCBT as both mechanical and impersonal
(Beattie, Shaw, Kaur, & Kessler, 2009). CCBT may also lack many of the ingredients for successful
therapy that are a function of the therapeutic relationship between therapist and client (Dogg‐
Helgadottir, Menzies, Onslow, Packman, & O'Brian, 2009), but there are some indications that it may
be possible to incorporate these important ‘alliance’ ingredients into self‐help programmes such as
CCBT (Barazzone, Cavanagh, & Richards, 2012; Ormrod, Kennedy, Scott, & Cavanagh, 2010).
Second, and more important, recent large‐scale long‐term RCTs of CCBT have failed to find any
superiority for CCBT (e.g., MoodGYM) over GP care as usual, and relatively low uptake of CCBT is
found when it has been offered (Gilbody et al., 2015). However, the Gilbody et al. (2015) study was
conducted entirely within primary care settings, in contrast to many previous studies demonstrating a superiority of CCBT, which had recruited their participants through the Internet or through specialist referral centres where participants were directly supervised while using the CCBT package.
Much of the most recent evidence on the efficacy of CCBT suggests that support is critical for CCBT
to be effective or fully engaged with (e.g., with the help of a telephone support worker who provides a
programme of weekly telephone calls explaining the process and introducing the principles of CBT)
(Fairburn & Patel, 2017; Gilbody et al., 2017). To this extent, CCBT is still a work in progress, with researchers searching for the optimal conditions that will make it both a significant help in managing symptoms of mental health problems and suitably cost‐effective.
Intervention programmes can also be delivered via mobile phones, and there has been a rapid growth in
the number of mental health apps made available in recent years. This has fuelled a belief that
mental health apps might be able to provide help to those with mental health problems without imposing
substantial pressures on already squeezed mental health resources (e.g., Donker et al., 2013) (see also
Focus Point 2.4 in Chapter 2). However, there is still a significant amount of work required to evaluate
the usefulness of a rapidly expanding body of mental health apps. In 2013 there were 1,536 mental
health apps for depression available for download, but only 32 published articles evaluating their
efficacy (Martínez‐Pérez, de la Torre‐Díez, & López‐Coronado, 2013). Unfortunately, many apps have
no reliable evidence base to support the principles they use in their programmes and often cannot
provide data from controlled trials to support the efficacy of their application—and this is also true of
apps accredited and recommended by mental health organisations such as the NHS
(https://www.nhs.uk/apps‐library/category/mental‐health/) (Leigh & Flatt, 2015). What evidence is
available does indicate some hope for the usefulness of mental health apps. A meta‐analysis of RCTs by
Firth et al. (2017) found a significant small to moderate positive effect for apps on depressive symptoms
compared to controls, but this effect was restricted to those with only mild to moderate depression.
These findings are promising and suggest that apps may be helpful for those with subclinical depression
symptoms and may help to prevent worsening mood symptoms over time.
The rapid growth of the Internet over the past 30 years has meant people now have almost immediate
access to information about mental health problems, and email provides another potential form of e‐therapy communication between therapists and clients. As a result, many therapists and practitioners
use email as an integral part of the treatment they provide (Hsiung, 2002). Email is a useful adjunct to
face‐to‐face sessions in a number of ways: (a) it may be used to enhance weekly sessions, monitor
treatment from a distance, monitor behaviour daily, communicate with the client's family members, or
intervene in a crisis (Yager, 2002; Yellowlees, 2002); (b) it allows clients to initiate contact with the therapist more easily, and this may make the client feel safer and, because the communication is online, more secure (Ainsworth, 2002); (c) it allows clients who may be withdrawn or shy in personal face‐to‐face
interviews (such as adolescents with eating disorders) to be more open and compliant (Yager, 2002); and
(d) it enables clients to contact therapists more regularly in areas where resources are more difficult to
access in person, or when clients are living in remote and inaccessible areas (Gibson, Morley, & Romeo‐
Wolff, 2002). However, there are also a number of limitations to email communication: miscommunication is more likely because neither party can see the nonverbal cues being given by the other; the confidentiality of online communications is very difficult to ensure; and it is very difficult to intervene effectively in severe emergencies when, for example, a client may have suicidal intentions. Nevertheless, in recent years psychotherapy through Internet
communications has become a fast growing intervention channel—especially when Internet
communication has been combined with the client's use of an app or Internet programme. The
expansion of the use of internet‐based forms of therapist‐client communication, especially video‐based communication, has also been a necessary consequence of the COVID‐19 pandemic lockdowns in
2020 and 2021, and is likely to be a form of therapist‐client communication much used in future years.
Recent reviews suggest that e‐therapy has been used successfully in the treatment of a broad range of
psychopathologies, and has been found to be cost effective for both clients and mental health service
providers (Kumar, Sattar, Bseiso, Khan, & Rutkofsky, 2017).
e-therapy A treatment method which involves the use of email and internet technology.
Finally, it is around 20 years since the first virtual reality exposure treatments were conducted, and since
then the use of virtual reality environments to assess and treat a range of mental health problems
has become significantly more widespread (Photo 4.2). Virtual reality is an interactive computer
environment that allows the user to experience a particular environment and also interact with that
environment, usually by wearing a head‐mounted display, headphones, and a position tracker that
allows the user to view changes in the environment in real time with head movements. Problems
interacting with the world are at the centre of many mental health problems, and virtual reality
exposure (VRE) is useful in helping the therapist to identify environmental factors that may trigger
symptoms and is predominantly used as a ‘safe’ form of exposure therapy that will allow clients to think,
react and behave differently in the simulated environments that are of concern to them (Freeman et al.,
2017). VRE therapies have now been successfully used in the treatment of a range of
psychopathologies, including anxiety disorders (Carl et al., 2019), symptoms of psychosis, such as
paranoia (Freeman et al., 2016), PTSD (Peskin et al., 2018), and eating disorders (Cesa et al., 2013).
PHOTO 4.2 Virtual reality exposure (VRE) therapies were first established to treat anxiety‐based problems such as fear
of heights, fear of flying, spider phobia, and post‐traumatic stress disorder (PTSD), and allowed clients to encounter their
fears and develop less fearful responses in the relative “safety” of a virtual environment.
Virtual reality has been used imaginatively during treatment to create helpful virtual environments,
some of which would be difficult to simulate in real life. For example, Freeman et al. (2014) used virtual
reality to manipulate a client's height in order to influence their self‐esteem and help them manage their
paranoia, Girard, Turcotte, Bouchard, and Girard (2009) attempted to reduce cravings for cigarette
smoking by getting clients to crush virtual cigarettes, and Keizer, van Elburg, Helms, and Dijkerman
(2016) used virtual reality to help clients with anorexia nervosa to experience ownership of a healthy
body mass index (BMI).
Studies suggest that VRE is significantly more effective than waiting list or placebo control conditions
(e.g., Carl et al., 2019), has dropout rates that are not significantly different from those found in in vivo
treatment, and can be used successfully to supplement and enhance more traditional forms of therapy
such as CBT (see special issue of the Journal of Anxiety Disorders, Powers & Rothbaum, 2019). To date, VRE treatments have tended to be used with a therapist present, and so it remains to be seen whether
VRE treatments will work effectively as a stand‐alone digital intervention that a client can successfully
use without a therapist.
In summary, emerging digital technologies offer a variety of new ways to deliver psychological
treatments for mental health problems, although most of these still need conclusive evidence endorsing
their effectiveness. CCBT has attracted much research and is now commercially available in a number
of different forms, although there is a growing consensus that some form of professional supervision
may be critical for its effectiveness to be achieved. Mental health apps are very much in their infancy,
and although they can often be useful in helping to manage subclinical mood‐based problems, there is
an urgent need for more controlled trials to establish evidence for their effectiveness—especially over the
longer term. Ever since the establishment of the Internet, therapist–client communications via email
have proven to be a helpful means of e‐therapy—especially in circumstances where geographic and social
factors make it difficult for direct face‐to‐face therapist‐client interaction. Finally, developing virtual reality
packages are proving to be effective in helping to treat a broad range of mental health problems. They
can be readily adapted to the very specific symptoms of the client and help to deliver exposure‐like
therapy in relative safety.
psychological well-being practitioners (PWPs) People trained under the IAPT initiative
to deliver psychological therapies such as CBT.
FOCUS POINT 4.2 TRAINING PRACTITIONERS UNDER THE
IAPT INITIATIVE
In 2018–2019 there were 1.6 million referrals to IAPT services in the UK for assessment and treatment,
with 1.09 million people starting a course of therapy of two or more sessions. Each client had an
average of 6.9 sessions of treatment and 52.1% of referrals moved on to recovery (NHS Digital, 2019).
CBT and guided self‐help (use of a self‐help book) accounted for 61.5% of all courses of therapy
delivered.
Schemes for improving access to psychological therapies are now being rolled out in many countries,
including developing countries such as India, Pakistan, and Uganda (Patel, Chowdhary, Rahman, &
Verdeli, 2011), where lay and community health workers are being trained in evidence‐based practices.
However, the challenges of providing access to psychological therapies for those who need them are still
immense, with the ‘treatment gap’ (a term used to describe the shortfall in mental health provision for
sufferers) exceeding 75% in most parts of the world (Kohn, Saxena, Levav, & Saraceno, 2004), with
access to mental health services even in the European Union being considered “far from satisfactory”
even for common mental health problems such as anxiety and depression (Barbato, Vallarino,
Rapisarda, Lora, & Caldas de Almeida, 2019).
Summary
With the average waiting time for conventional treatments provided by community mental health services in many parts of the world often longer than 1–2 years, especially for popular and specialised treatments such as CBT, practitioners and service providers are under pressure to provide more cost‐effective and immediate forms of intervention for people with mental health problems (London School of Economics and Political Science, 2006). We have reviewed some modes of delivery that may prove more immediate and cost‐effective, although most of these represent relatively new innovations that have yet to be fully and properly evaluated.
SELF‐TEST QUESTIONS
What are the main principles of psychodynamic therapy?
Can you describe some of the basic techniques used by psychoanalysts?
Can you describe the behaviour therapy techniques that are based on classical
conditioning?
Can you describe some of the treatment techniques that have been developed based on
principles of operant reinforcement?
Can you describe at least two types of cognitive therapy?
What are ‘third‐wave’ CBT methods—can you describe at least one of these?
What are the main theoretical principles on which humanistic therapies are based?
What are the main principles used in client‐centred therapy?
What is family therapy and how is it conducted?
Can you name the main types of drug treatments for psychopathology?
Can you describe two to three types of group therapy?
What are the main characteristics of counselling?
Can you describe the main features of CCBT, e‐therapy, and VRE?
What does IAPT stand for, and what is it trying to achieve?
SECTION SUMMARY
Spontaneous remission
Just because someone exhibits objective improvement in symptoms after treatment does not necessarily
mean that the treatment was the cause of the improvement. The famous British psychologist, Hans
Eysenck, argued that many people who have mental health problems will simply get better anyway over
a period of time—even without therapy (Eysenck, 1961), and this can often be the result of an
individual's changing life experiences (e.g., positive life changes) (Neeleman, Oldehinkel, & Ormel,
2003). This is known as spontaneous remission, and it is estimated that 23% of cases of an
untreated common mental health problem such as depression will spontaneously remit within 3 months,
32% within 6 months, and 53% within 12 months (Whiteford et al., 2013). So, if we are assessing the
effectiveness of a therapy we would expect to see improvement rates significantly greater than 23% after 3
months, 32% after 6 months, and so on, in order to take into account the fact that many of those
undertaking the therapy would show a spontaneous improvement in symptoms anyway.
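The logic of comparing an observed improvement rate against these spontaneous remission baselines can be sketched as a simple one‐sided proportion test. The function name and the client numbers below are purely illustrative assumptions, not figures from the text or from any particular study.

```python
import math

def exceeds_baseline(recovered, n, baseline):
    """One-sided z-test (normal approximation): does the observed
    recovery proportion significantly exceed the spontaneous-remission
    baseline? An illustrative sketch, not a full trial analysis."""
    p_hat = recovered / n
    # standard error of the proportion under the baseline hypothesis
    se = math.sqrt(baseline * (1 - baseline) / n)
    z = (p_hat - baseline) / se
    return z > 1.6449  # one-sided critical value at alpha = 0.05

# Hypothetical example: 40 of 100 clients recover within 3 months of
# therapy, compared against the 23% spontaneous-remission baseline
# (Whiteford et al., 2013).
print(exceeds_baseline(40, 100, 0.23))   # True
# 25 of 100 would not clearly beat that baseline:
print(exceeds_baseline(25, 100, 0.23))   # False
```

Real treatment‐outcome studies would, of course, compare against a concurrent control group rather than a historical baseline, which is one reason randomised controlled designs are the standard discussed in this chapter.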
spontaneous remission The fact that many people who have psychological disorders will
simply get better anyway over a period of time, even without therapy.
FOCUS POINT 4.3 STOP SMOKING IN ONE SESSION!
“As I was passing a local ‘holistic’ health clinic, I noticed a sign outside which—in large letters
—implored ‘STOP SMOKING IN ONE SESSION’. (which session—surely not the first one?!)
Having interests in both clinical and health psychology areas, I was intrigued to find out more.
As it turns out, a friend of mine had just recently visited the clinic and had received a single 1‐
hour session of hypnotherapy in an attempt to stop smoking—cost £50. Knowing the literature
on psychological treatments for smoking and how difficult it is to achieve abstinence, I decided
to find out a little more about these treatment claims. I emailed the hypnotherapists offering
services at the clinic asking if stopping smoking in one session of hypnotherapy was achievable,
and what their success rates were like. I did get a reply from one of the practitioners, who had
worked as a hypnotherapist for seven years. She replied: ‘I did not do follow‐up calls as I
thought this would be intrusive so therefore I did not have stats on my success rates. However, I
knew I had a high success rate as people referred others to me and came back to me for help on
other issues’. Interestingly, my friend who had attended the clinic was smoking regularly again
within three days of the hypnotherapy session, and—despite having long discussions with me
about the validity of the treatment and its lack of success—said she was thinking of attending
again (this time in relation to other aspects of her life) because the hypnotherapist had been so
caring, understanding and interested in her problems!”
Placebo effects
If someone suffering from anxiety symptoms is given a sugar pill but told that it is an anxiolytic medication, they often report significant improvements in those symptoms. This suggests that individuals will often get better because they expect to get better—even though the actual treatment they have been given is effectively useless. This is known as the placebo effect (Paul, 1966). Thus, it may be the case that many psychological treatments have beneficial effects because the
client expects them to work—and not because they are treatments that are effective in tackling the factors
maintaining the psychopathology (see Ashar, Chang, & Wager, 2017, for a discussion of the potential
psychological mechanisms of the placebo effect). In some cases, placebo effects can be quite large: in trials of pharmacological treatments for PTSD, between 19% and 62% of participants taking a placebo showed benefits over the short term (Benedetti, Piedimonte, & Frisaldi, 2018). Unfortunately, the positive gains produced by placebo effects are short‐lived, and comparative studies of
placebo effects with actual structured psychotherapies strongly suggest that structured psychotherapies
lead to greater improvement than placebo control conditions (Robinson, Berman, & Neimeyer, 1990;
Andrews & Harvey, 1981). This emphasises the importance of using placebo conditions as a comparison
to ensure that structured and theoretically‐based interventions are internally valid and work because of
the way they are constructed and not simply because the client ‘expects’ them to work.
befriending A control condition providing attention, understanding, and caring, used in treatment outcome studies.
recovery rates The percentage of people who are no longer diagnosable once they have
finished treatment.
Nevertheless, despite some of these limitations, RCTs have remained a popular method for assessing the
relative effectiveness of treatments and are still considered to be the main standard by which the
effectiveness of an intervention is experimentally tested.
FIGURE 4.1 A randomised controlled trial comparing the effectiveness of CBT and applied relaxation for social
phobia.
The following flow chart shows how the 116 participants in the study were allocated to experimental conditions and
assessed. The graph below shows that the CBT condition (CT) was more effective at reducing the symptoms of social
phobia than applied relaxation (Exp + AR) and a “waiting list” control (Wait).
From Clark et al., 2006.
Dodo Bird Verdict An expression from Lewis Carroll’s Alice’s Adventures in Wonderland,
implying that all psychotherapies are more effective than no treatment, but produce equivalent
benefits.
Rather than conducting objective and well‐controlled outcome studies, some researchers have approached the issue of treatment effectiveness by viewing the client as a consumer and canvassing their
views on how satisfied they have been with the treatment ‘product’ that they received. Seligman (1995)
reported the results of a large scale survey of individuals in the US who had undergone psychotherapy
and concluded that (a) respondents claimed they benefited significantly from psychotherapy; (b)
psychotherapy alone did not differ in effectiveness from medication plus psychotherapy; (c)
psychologists, psychiatrists, social workers and counsellors did not differ in their effectiveness as
therapists; and (d) the longer the duration of their treatment, the larger the positive gains respondents
reported. While the empirical rigour of this consumer‐based study falls far short of that expected in
well‐controlled outcome studies, it does provide some information about how the recipients of
psychotherapy view their treatment and its effects. But as we have noted in Focus Point 4.3, asking a
client how satisfied they are following treatment may not be the best way to judge the effectiveness of a
treatment and may reflect the involvement of psychological factors that extend beyond the original
purpose of the treatment.
4.2.4 Summary
In this section we have described some of the methods that have been developed to try and evaluate the
effectiveness of treatments for psychopathology. Over the past 20 years a large number of studies assessing the efficacy of therapies for psychopathologies have been carried out, and there is now good
empirical evidence to support the effectiveness and internal validity of many of these therapies. There
are too many to mention here, but the interested reader is referred to the treatment sections of
individual psychopathology chapters in this book for a review of the most effective treatments for
individual psychopathologies.
SELF‐TEST QUESTIONS
What kinds of factors can affect the evaluation of treatments and need to be controlled for
in treatment outcome studies?
What are the two most popular methodologies for assessing the effectiveness of
treatments?
Are structured treatments for psychopathology effective?
What is the ‘Dodo Bird Verdict’ in relation to the effectiveness of psychotherapies?
SECTION SUMMARY
CHAPTER OUTLINE
5.1 THE ECONOMIC COST OF MENTAL HEALTH PROBLEMS
5.2 WHO ARE MENTAL HEALTH PROFESSIONALS?
5.3 PROVIDING MENTAL HEALTH SERVICES
5.4 THE ROLE OF THE CLINICAL PSYCHOLOGIST
5.5 CLINICAL PRACTICE REVIEWED
LEARNING OUTCOMES
When you have completed this chapter, you should be able to:
1. Describe the nature and size of mental health problems facing service providers.
2. Describe how mental health services in the UK are structured.
3. Describe the different types of mental health professionals in the UK together with some
of the skills they possess.
4. Describe and evaluate the role of clinical psychologists in the mental health services.
My family came to England from Uganda when I was four. I found leaving our home behind quite traumatic, and
then a year after we arrived here, my father died. It was difficult, losing him on top of everything else. In some
Asian families there are rigid rules about what can and can't be talked about and intimate things aren't mentioned,
so I never spoke to anyone about how I felt or what it was like growing up. I just pushed the sadness down within
myself. I got married to an English guy but I still didn't talk about it. But, a bit like holding a ball underwater,
there's only so long you can keep your feelings hidden and by the summer of 2006 I was suffering from what I
now know was depression. In the end I went to see my GP because I felt I needed to talk through what was going
on. My GP was able to refer me to an Asian psychologist for cognitive behavioural therapy and psychodynamic
therapy, which involved talking through my problems and finding ways to tackle them. When I saw the therapist I
wasn't worried about confidentiality anymore. I live in a different area from my family now and was assured that
all our sessions would be completely confidential. There were lots of things, which she, as an Asian woman, could
relate to. I saw my therapist for 50 minutes every week or two, over 10 months, and she has helped me a lot. I've
started building my own life and learned not to get so bogged down with the pressures of having to look after
everybody else.
I've been discharged now, but I still go for follow‐up sessions to see how I'm doing. I know the service is there for
me if I need it again: all I have to do is contact them if I have a crisis or need a session. It has been great to have
support from someone I identify with. I've seen things change since I was younger and now there are services for
people when they feel ready to get help. If you want help, see your GP. They may be able to refer you for
confidential counselling, or to a psychologist with a specialist understanding of cultural issues.
Hina's Story (adapted from http://www.nhs.uk/Conditions/stress‐anxiety‐depression/Pages/therapy‐changed‐my‐
life.aspx)
Introduction
Just like Hina, we all experience life events that cause us distress in some form or another. In many cases this distress is transient and short‐lived, but for others it stays with them for long periods and
becomes an overbearing and central feature of their life. Many people cope as best they can by keeping
their distress and their feelings to themselves. But, as Hina describes in her personal account, there's
only so long you can ‘hold a ball under water’ before it pops up on the surface. It is often only when
distress eventually becomes unbearable, affects social, occupational, and family functioning, or is
recognised by others that many people begin to seek help for their problems. So how do people begin to
seek help for mental health problems? What services are available? How are those services structured?
These are some of the questions we'll be addressing in this chapter. For Hina, her journey for help and
support began with a visit to her local GP, who then referred her on to more specialised services such as counsellors, psychotherapists, or clinical psychologists.
Clinical psychologists are closely involved in helping people to recover from problems similar to those
described by Hina. They attempt to help people understand the causes of their distress, provide
interventions that can help to alleviate specific symptoms, and provide support and guidance through
the recovery process. However, mental health provision is characterised by the involvement of a whole
range of professionals, all with differing skills and roles, and we describe some of these people and their
individual skills later in this chapter. You are likely to meet directly with a clinical psychologist only if
your problems are relatively severe and enduring, but clinical psychologists do work within
multidisciplinary teams (MDTs) that aim to provide the basis for support and recovery from mental
health problems.
In this chapter we describe the kinds of professionals who provide mental health services, and what
services and facilities are available and how people access them. We then focus on the role of the clinical
psychologist by discussing what they do, what their skills are, and how they are trained. But first it's
probably helpful to begin by looking at the size of the problem that those who provide mental health
services are facing—what is the prevalence of mental health needs across a country such as the UK, and
why are mental health services so important?
One adult in six had a common mental health problem (CMHP): about one woman in five and
one man in eight. Since 2000, overall rates of CMHP in England have steadily increased in women and remained largely stable in men.
Most mental health problems were more common in people living alone, in poor physical health, and not employed. Those living alone were identified as having experienced higher rates of most mental health problems, including CMHP, post‐traumatic stress disorder, psychotic disorder, personality disorder, and bipolar disorder.
One person in three with CMHP reported current use of mental health treatment in 2014, an
increase from the one in four who reported this in 2000 and 2007. This was driven by steep
increases in reported use of psychotropic medication. Increased use of psychological therapies
was also evident among people with more severe CMHP symptoms.
There were demographic inequalities in who received treatment. After controlling for level of
need, people who were White British, female, or in midlife (especially aged 35–54) were more
likely to receive treatment. People in the Black ethnic group had particularly low treatment rates.
In 2014, one in five 16–24‐year‐old women reported having self‐harmed at some point in their lives when asked face to face, and one in four reported this in the self‐completion section of the survey.
Given this background, we now move on to consider in more detail the structure of mental health
provision and the nature of clinical practice.
SELF‐TEST QUESTIONS
What percentage of people in the UK will experience a mental health problem in any 1
year?
What is the estimated annual cost of mental health problems in the UK?
SECTION SUMMARY
PHOTO 5.1 GPs or physicians are usually the first point of contact for people with mental health problems, and up to
one in three patients seen by a GP on a daily basis will have emotional or psychological problems.
Community mental health nurses Registered nurses with specialist training in mental
health.
Once referred for a more detailed assessment of their mental health problems, an individual is likely to
then be seen either by a psychiatrist or a clinical psychologist. Psychiatrists are qualified medical
doctors who have received further training in mental health problems. Psychiatrists can make initial
assessments of a person's condition and needs and can also prescribe medications. Clinical
psychologists are normally psychology graduates who have completed 3 years of intensive
postgraduate training to learn the skills required for clinical practice, and they will specialise in the
assessment and treatment of mental health problems. Clinical psychologists will often specialise in
working with only one or maybe two client groups, often within a preferred psychological approach (e.g.,
cognitive, behavioural, systemic, or psychodynamic, see Chapter 4). Some may work exclusively within
child and adolescent services, known in the UK as Child & Adolescent Mental Health Services (CAMHS); others may work in learning disability services, with older people, or with substance
dependency. Clinical psychologists will often work within MDTs and are closely involved in providing
care and interventions for mental health problems, but—unlike psychiatrists—are not able to prescribe
medications.
Others involved in the provision of mental health services include counsellors, psychotherapists,
occupational therapists, social workers, and approved mental health workers. Counsellors are trained
to offer talking therapies that will support people with mental health problems and help them to cope
better with their lives and their symptoms (see Chapter 4, Section 4.1.2). Psychotherapists provide a
range of psychotherapy support to individuals with mental health problems. Psychotherapists will often
specialise in a particular therapeutic approach and may provide support for longer‐term mental
health problems. Occupational Therapists are available to help people with mental health problems
adjust to the demands of normal day‐to‐day living, including developing personal independence and
achieving levels of functioning required for employment. Social workers are trained to provide a
valuable link between mental health services and broader social service provision, and they can provide
advice and support on issues such as benefits, housing, day care, and occupational training. Finally,
approved mental health workers have a professional clinical qualification and have also
received special training in assessing and helping people who may need to be compulsorily detained
because of their mental health problems.
Counsellors People who are trained to offer talking therapies that will support people with
mental health problems and help them to cope better with their lives and their symptoms.
Occupational Therapists Clinicians who specialise in assessing and training (or retraining)
occupational and daily living skills.
Social workers Professionals whose main focus is clients’ social care needs (e.g. housing).
Approved Social Workers are also involved in Mental Health Act assessments.
approved mental health workers Professionals who hold a professional clinical qualification and have received special training in assessing and helping people who may need to be compulsorily detained because of their mental health problems.
multidisciplinary teams (MDTs) MDTs include workers from a range of disciplines that
specialise in different aspects of health and social care, e.g., psychiatrists, clinical psychologists,
social workers and occupational therapists.
TABLE 5.2 Clinical psychologists frequently work as members of multidisciplinary teams (MDTs).
From Casares & Lake (2015)
Part of the rationale for MDTs is that they can provide clients with more holistic care, since they
include workers from a range of disciplines that specialise in different aspects of health and social care.
Managers
Psychiatrists (Doctors)
Psychologists (usually Clinical or Counselling Psychologists)
Mental health nurses
Occupational therapists
Social workers
Psychological therapists
Speech therapists (in learning disabilities teams)
Health care or nursing assistants
Support workers
Peer recovery workers (people who draw on their experiences of using mental health services to
support others)
Administrators
SELF‐TEST QUESTIONS
How do people with mental health problems access mental health services?
Can you name at least five different types of mental health professionals in the UK?
SECTION SUMMARY
inpatient hospital care Treatment provided to a client who has voluntarily admitted himself
or herself to hospital. Some people can be compulsorily detained in a hospital under the Mental
Health Act if their mental health problems are severe enough.
In addition to psychiatric hospitals, regional secure units and secure hospitals are available to treat
individuals who have been admitted by the courts under the Mental Health Act, transferred from prison
under the Mental Health Act, or have been transferred from an ordinary hospital ward because they
may need treatment in a more secure setting.
regional secure units Facilities available to treat individuals who have been admitted by the
courts under the Mental Health Act, transferred from prison under the Mental Health Act, or
have been transferred from an ordinary hospital ward because they may need treatment in a
more secure setting.
Finally in this section, it should be emphasised once again that while the range and type of facilities
available for treating mental health problems is quite varied, the vast majority of people with a mental
health problem are treated voluntarily on an outpatient basis, usually by a care team of professionals
with a range of skills. Compulsory detention or treatment is extremely rare, and the overwhelming
majority of people with mental health problems are neither violent nor a danger to themselves or
others.
SELF‐TEST QUESTIONS
What different types of mental health facilities are available in the UK?
What is the recovery model, and what are its main features?
SECTION SUMMARY
assessment Normally, the first stage of clinical work with a client, which typically involves
understanding the problems that a client is experiencing, what may have caused these problems
and be maintaining them, and how the client would like to change.
Normally, the first stage of clinical work with a client would be to undertake an assessment. Typically
this will involve understanding the problems that a client is experiencing, what may have caused these
problems and be maintaining them, and how the client would like to change. A full assessment of these
factors may be undertaken by a team of professionals with different skills (e.g., psychiatrists, community
health workers as well as clinical psychologists), and the nature of the assessments will depend on the
theoretical orientation of the professionals involved. For example, those with a psychodynamic approach will want to understand how the client's problems relate to underlying intrapsychic conflicts, whereas those with a cognitive‐behavioural approach will want to explore the importance of cognitions and how they influence moods and behaviour (Jones & Hartley, 2015; Lake & Whittington, 2015). Professionals with a
medical training such as psychiatrists (and also some clinical psychologists) may want to provide a
diagnosis, which will classify the client's symptoms according to current diagnostic criteria (e.g., DSM‐
5; see Chapter 2, Section 2.2.3). A variety of methods can be used to collect relevant information for an
assessment. The primary means is the clinical interview (see Chapter 2, Section 2.2.2), especially for
clients who are able to communicate their problems effectively. Validated psychological tests and tests of
cognitive ability can also be used when required (see Chapter 2). Primarily, it is important in this first
stage for the clinical psychologist to establish a good working relationship with the client, and you will
have seen in Section 4.2 that this can be an important contributor to the success of any subsequent
therapeutic interventions. Case History 5.1 provides an example of how an assessment might progress in practice.
When the assessment process is complete, the process will move on to a formulation stage. This
combines psychological theory with information gained from the assessment and enables the clinical
psychologist to develop an account of how the client's problems were acquired and how they are being
maintained, and an indication of what interventions might be theoretically useful in alleviating these
problems. A formulation may acknowledge biological and genetic influences (if they are relevant) but
would normally focus on psychological and social processes. This is the stage in which the clinical
psychologist's skill in being able to link psychological theory with practice is highlighted. Formulations
can often be viewed as a series of hypotheses about what is causing a client's problems and these can be
tested out during the intervention stage. A detailed example of formulation and how it is undertaken is
provided in Chapter 2, Section 2.3.
Once the formulation has been completed, a psychological intervention can be implemented on this basis. The intervention may be based on one specific theoretical approach (e.g., CBT), or it may
integrate a number of different approaches and ideas depending on the client's needs and strengths.
Very often the clinical psychologist will directly involve the client in planning the intervention, both to help the client understand what will happen and to develop the client's conviction that the intervention might be successful (Hall & Llewelyn, 2006, pp. 91–92). The
intervention may also need to include the client's family members, especially if the client's difficulties
seem to be linked to family relationships and dynamics or if the client has specific disabilities, such as
learning disabilities.
The client, Peter, was reported to be suffering from an episode of clinical depression and, as a
result, was experiencing depressed mood for much of the week, was struggling to sleep properly,
and had stopped doing many of the things that he used to enjoy.
The assessment began by asking Peter to say something about why he had come along and then
focussed in detail on his current difficulties. As the clinical psychologist was taking a cognitive‐
behavioural approach, this involved being curious about the thoughts and images Peter was
experiencing (i.e., the cognitive) and the things he was doing or not doing (i.e., the behavioural),
and how these affected and were affected by his depressed mood. They went on to explore how
his problems had developed, the impact they were having on his life currently, and his goals for
the future. They also used some questionnaires to (a) establish a baseline estimate of the severity
of Peter's depression and (b) to explore whether Peter was experiencing depression related
thoughts that had not been covered in the interview. Throughout, the clinical psychologist
attempted to build a therapeutic relationship with Peter by listening carefully to what he was
saying and then summarising their understanding, and also by conveying a sense of positive
regard and kindness. Crucially, during the assessment it was also established that, while Peter
sometimes experienced suicidal thoughts, he was at low risk of acting on these. In addition, for
homework, Peter was asked to keep a record of his thoughts when he was feeling depressed,
both because doing so might identify important negative thoughts that he had forgotten in the
assessment, and because the process of recording could help him to become more aware of his
thinking and the impact it had on his mood.
Once the formulation and intervention stages have been completed it is important to evaluate them.
What has been the impact of the intervention? Is the intervention having the desired effects?
Evaluation can be achieved in a number of ways, including discussion with the client and with the use
of validated questionnaires. Therapeutic change can often be very gradual so the client may not have
good insight into the changes that may have begun to occur, and validated questionnaires can often
provide a more objective measure of change. Indeed, the use of validated means of measuring the
effectiveness of interventions is a central feature of evaluation nowadays—largely because of the need
to ensure that interventions can be identified as effective using evidence‐based criteria that will allow
services to use objective criteria by which to decide what types of interventions are effective and which
are not. However, it is still important to get the client's (and carers' or family members') impressions of
whether the intervention seems to be helping and their sense of what has and has not changed as a
result of the intervention. Usually, clinical psychologists will want to collect both quantitative and
qualitative information to get a full picture of the effects of the intervention.
Evaluation Stage of treatment that seeks to ensure any intervention is having the desired
effect. Can be achieved in a number of ways, including discussion with the client and with the
use of validated questionnaires.
In summary, a clinical psychologist needs to have the competencies to assess, formulate, intervene and
evaluate, and to be able to do this in collaboration with other mental health professionals. In addition to
these core competencies, clinical psychologists will also need to be able to communicate effectively with
their clients—both verbally and in writing (Hall & Llewelyn, 2006, pp. 22–25), and these are all skills
that will be taught during the clinical psychologist's extensive period of training (see Davey, 2011,
Section 5.4.4).
Finally, it must be pointed out that the reflective practitioner and scientist‐practitioner approaches are
not entirely mutually exclusive, and no matter which philosophical approach you take to your clinical
practice, it will almost certainly be beneficial to regularly reflect on that practice.
The way that clinical psychologists are regulated differs from country to country. In the US and Canada
a license is required to practice as a clinical psychologist, and the terms of this license often differ
between states. But in general terms all states require (a) graduation from an accredited training
institution with an appropriate degree, (b) completion of supervised clinical experience, and (c) passing a
written exam and, in some cases, an oral exam.
Once they have qualified, clinical psychologists are expected to undertake continuing professional
development (CPD) throughout their career in order to develop their professional competencies.
This can include reading about the latest research, attending conferences, and training workshops,
carrying out new activities (e.g., supervising trainee clinical psychologists or applying for a research
grant) and undertaking further qualifications (e.g., specialist training in a particular type of
psychological therapy).
FIGURE 5.1 Number of applications and actual places for UK clinical training courses between 2013 and 2019. In
2019 the applications to places ratio was 6.6:1.
Clinical training
Clinical training courses in the UK are 3‐year programmes that lead to a doctorate in clinical
psychology. This doctorate confers eligibility to register as a chartered psychologist with the BPS and as
a practitioner psychologist with the HCPC. Training usually takes place in service settings within the
NHS, which covers the trainee's training fees. Training can be divided into three components: (a) an
academic component that takes place in a university setting and consists of conventional teaching and
learning practices covering such topics as human development and interaction, psychological problems,
and mental well‐being and some skills learning associated with different psychological approaches to
mental health problems. Students will learn about the main approaches to alleviating mental health
problems, including cognitive‐behavioural, psychodynamic, and systemic, and a key part of training is
concerned with integrating concepts and interventions from different models to develop interventions
that best suit an individual's needs; (b) a clinical placement component, in which the trainee learns how to
apply these different approaches to different client groups, usually within the NHS. These placements
can take up 3 days a week for as long as 6 months, and a trainee will gain experience by taking
placements across a range of settings and client groups, the latter of which will normally include
working‐age adults with mental health problems, people with learning disabilities, children with mental
health problems, and older adults. During their placements, trainees receive support and guidance from
at least one supervisor who will be a practising clinical psychologist; and (c) a research component, in which
research skills are developed through both teaching and the trainee conducting their own research
projects. The research projects usually consist of small‐scale service‐based research such as an
evaluation of an intervention, and the most substantial of these projects will form the trainee's doctoral
dissertation. Table 5.3 provides some examples of what a clinical psychology trainee will experience
during their training and covers the nature of the placements they encounter, the types of psychological
models they will learn about, and some of the key competencies they will acquire (see Mayers & Mwale,
2018, for further information on pursuing a career in clinical psychology).
TABLE 5.3 Some examples to illustrate the range of client groups, models and competencies typically covered in clinical
psychology training (for a more in‐depth picture see Mayers & Mwale, 2018)
Clinical placements
Working age adults with mental health problems
Children with mental health problems and their families
People with learning disabilities
Older people with mental health problems
Possible supplementary placements
Further experience in one of the ‘core’ areas
Adults with brain injury
Children with physical health problems
Adults with physical health problems
Classes of models
Cognitive‐behavioural
Psychodynamic
Systemic
Developmental
Neuropsychological
Key competencies
The ability to communicate effectively with a range of different clients.
The ability to build and maintain effective relationships with others.
The ability to conduct assessments appropriate to different settings.
The ability to develop ‘formulations’, by forming theory‐practice links.
The ability to facilitate interventions appropriate to particular clients and problems.
The ability to evaluate clinical work.
Sufficient self‐awareness to be able to reflect on and learn from clinical work.
The ability to understand, conduct, and apply the findings of psychological research.
SELF‐TEST QUESTIONS
What are the four key capabilities and competencies of clinical psychologists?
What is the reflective practitioner approach to clinical practice?
Can you describe how clinical psychologists in the UK are regulated?
What pretraining qualifications and experiences would someone need before applying for
a clinical training place in the UK?
What are the three main components of a clinical training programme in the UK?
SECTION SUMMARY
CHAPTER OUTLINE
6.1 ANXIETY AS A COMORBID CONDITION
6.2 SPECIFIC PHOBIAS
6.3 SOCIAL ANXIETY DISORDER
6.4 PANIC DISORDER AND AGORAPHOBIA
6.5 GENERALISED ANXIETY DISORDER (GAD)
6.6 OBSESSIVE-COMPULSIVE DISORDER (OCD)
6.7 TRAUMA AND STRESS-RELATED DISORDERS
6.8 ANXIETY-BASED PROBLEMS REVIEWED
LEARNING OUTCOMES
When you have completed this chapter, you should be able to:
1. Describe the kinds of presenting symptoms that are associated with individual anxiety‐
and stressor‐based problems.
2. Describe the characteristics and diagnostic criteria of six of the important anxiety and
stressor disorders.
3. Describe, compare, and contrast at least two contemporary theories of the aetiology of
each disorder.
4. Distinguish between biological and psychological explanations of anxiety‐based problems.
5. Describe the relevance of research methodologies that have contributed to the
understanding of the acquisition of anxiety‐ and stress‐related disorders.
6. Describe, compare, and contrast at least two therapeutic procedures used for each
individual anxiety disorder or stress‐related disorder.
I'm 26 years old and experience severe anxiety. I've had on and off panic attacks since I was 17. I've really
worked hard to manage it using breathing, daily exercise, and diet. I thought I'd beaten it…2 years pretty
symptom free. But shortly after becoming engaged, the panic attacks started and anxiety came back with a
vengeance. I can barely manage my days at work, I have little appetite and I'm terrified of negative thoughts I've
been having. My scariest thought in the past was having a heart attack. But knowing I'm in such good physical
shape I know this isn't a possibility. My scary thought for the past few weeks has been what if I kill myself…so
scary I try not to be alone. My doctor has given me some medication, but I don't know if it's working. My friends
who know about psychology say I'm fine and those thoughts are my anxiety.
Michelle's Story
Introduction
Anxiety and stress are common features of everyday living and will be experienced by us all in one form
or another, and this makes anxiety a very common mental health problem. As an adaptive emotion
anxiety can help us prepare to deal effectively with anticipated threats and challenges by increasing our
arousal and reactivity, focusing our attention, and helping us to solve problems. However, we can often
find there will be times in our lives when we have difficulty managing our anxiety, and it starts to feel
uncontrollable and distressing. Michelle's story provides an example of how anxiety can come to feel
distressing and her commentary gives us a real insight into some of the more debilitating symptoms of
an anxiety problem. These include panic attacks, lack of appetite, scary, uncontrollable thoughts,
thoughts about physical illness, even suicidal ideation—every conceivable threat seems to be amplified.
Anxiety generally has both physical and cognitive attributes. First, there are the physical symptoms of
anxiety—such as muscle tension, dry mouth, perspiring, trembling, and difficulty swallowing. In its more
chronic form, anxiety can also be accompanied by dizziness, chronic fatigue, sleeping difficulties, rapid
or irregular heartbeat, diarrhoea or a persistent need to urinate, sexual problems, and nightmares. In
contrast, the cognitive features of anxiety include a feeling of apprehension or fear, usually resulting
from the anticipation of a threatening event or situation. Usually accompanying anxiety are intrusive
thoughts about the threat, catastrophic bouts of worrying about the possible negative outcomes
associated with the threat, and—in some specific types of problems—uncontrollable flashbacks about
past traumas and anxiety‐provoking experiences. Overly anxious people also find it hard to stop
thinking negative and threatening thoughts, and this is in part due to the cognitive biases that have
developed with the experience of anxiety. In the case of some anxiety‐related conditions, such as
obsessive‐compulsive disorder (OCD), the sufferer can also develop sequences of complex
ritualised behaviours designed to help them relieve their anxiety.
We all experience feelings of anxiety quite naturally in many situations—such as just before an
important exam, while making a presentation at college or work, at an interview, or on a first date. Most
anxiety reactions are perfectly natural, and they have evolved as adaptive responses that are essential for
us to perform effectively in challenging circumstances. However, anxiety can often become so intense, or
so attached to inappropriate events or situations, that it becomes maladaptive and problematic for the
individual (Davey, 2018). This is when an anxiety disorder may develop. In a sufferer of an anxiety
disorder the anxiety response:
1. May be out of proportion to the threat posed by the situation or event (e.g., in specific phobias),
2. May be a state that the individual constantly finds themselves in and may not be easily attributable
to any specific threat (e.g., in generalised anxiety disorder (GAD), or some forms of panic
disorder (PD)),
3. May persist chronically and be so disabling that it causes constant emotional distress to the
individual, who is unable to plan and conduct their normal day‐to‐day living. This can result in an
inability to hold down a regular job, or an inability to maintain long‐term relationships with
friends, partners, and family, etc.
Anxiety‐based problems are relatively common, and around one in five people will report high levels of
anxiety at any one time (Davey, 2018), one in three people will experience an anxiety disorder in their
lifetime (Bandelow & Michaelis, 2015), and a review of anxiety in 84 countries reported that the global
prevalence of diagnosable anxiety disorders at any one time is 7.3% (Baxter, Scott, Vos, & Whiteford,
2012). Anxiety problems may also be one mental health condition that is on the increase, with a US
study conducted by the American Psychiatric Association indicating a 5% rise in anxiety ratings
between 2017 and 2018 alone (American Psychiatric Association, 2018). Anxiety is also a mental health
problem that tends to affect more people in high‐income than in low‐income countries (Ruscio et
al., 2017).
As a result of its high levels of prevalence, pathological anxiety imposes a high individual and social
burden, tends to be more chronic than many other psychological problems, and can be as disabling as
physical illness. In both Europe and the US the cost of treating anxiety‐based problems runs into many
billions of pounds annually, making them more economically expensive than most other psychological
problems (Rovner, 1993; Greenberg et al., 1999). These economic costs include psychiatric,
psychological, and emergency care, hospitalisation, prescription drugs, reduced productivity,
absenteeism from work, and suicide (Lepine, 2002; Fineberg et al., 2013).
In this chapter we discuss in detail six of the main anxiety and stress‐related disorders:
Specific phobias
Social anxiety disorder (SAD)
Panic disorder
Generalised anxiety disorder (GAD)
Obsessive‐compulsive disorder (OCD)
Post‐traumatic stress disorder (PTSD)
                                            12‐Month            Lifetime
Disorder                                    Women, %  Men, %    Women, %  Men, %
Mood disorders
  Major depressive disorder                 23.7      19.1      38.3      30.0
  Dysfunctional beliefs                     11.2      8.9       12.9      10.6
  Bipolar disorder I                        2.4       3.3       3.0       3.3
  Bipolar disorder II                       3.3       3.9       3.4       3.2
Substance use disorders
  Alcohol abuse                             4.3       8.2       15.0      33.2
  Alcohol dependency                        2.9       4.8       7.9       16.7
  Drug abuse                                1.4       4.5       10.0      21.8
  Drug dependency                           1.0       2.2       4.9       9.3
Eating disorders
  Anorexia nervosa                          0.0       0.0       0.6       0.2
  Bulimia nervosa                           1.0       0.0       2.2       0.5
  Binge‐eating disorder                     2.0       1.3       2.7       2.3
Other
  Attention deficit/hyperactivity disorder  6.9       7.6       7.8       11.1
  Intermittent explosive disorder           9.4       12.5      11.6      19.3
Anxiety disorder
  Any additional                            37.3      27.9      44.8      34.2
Let's look at each of the anxiety diagnostic categories in turn, starting with a closer look at specific
phobias.
phobic beliefs Beliefs about phobic stimuli that maintain the phobic’s fear and avoidance of
that stimulus or situation.
6.2.1 Prevalence
Specific phobias are extraordinarily common, with surveys suggesting that a clear majority of the
general population (60.2%) experience ‘unreasonable fears’ (Chapman, 1997)—although in most cases
these fears are rarely severe enough to result in impairment or distress. Cross‐national epidemiological
studies suggest lifetime and 12‐month prevalence rates of 7.4% and 5.5% respectively (Wardenaar et al.,
2017), indicating that severe and disruptive phobic symptoms are relatively common. Table 6.4
shows the prevalence rates for some of the more common forms of specific phobia. There is also a clear
gender difference in the prevalence of specific phobias, with women being twice as likely as men to be
diagnosed with a specific phobia (Wardenaar et al., 2017).
Psychoanalytic accounts
Phobias have intrigued psychologists for more than a century. This may be because they manifest as
irrational fears of things that usually pose little if any realistic threat, and because their acquisition more
often than not cannot be explained by recourse to simple learning experiences such as a specific traumatic
event. This has led at least some approaches to psychopathology to view phobias as symbolic of other,
more deep‐rooted psychological difficulties. For example, psychoanalytic theory as developed by Freud
saw phobias as a defence against the anxiety produced by repressed Id impulses, and this fear became
associated with external events or situations that had a symbolic relevance to that repressed Id impulse.
Focus Point 6.1 describes the classic case of Little Hans, a 5‐year‐old boy who developed a severe
phobia of horses. Within Freud's psychoanalytic theory, the function of phobias was to avoid
confrontation with the real, underlying issues (in this case, a repressed childhood conflict). However,
because of the nature of psychoanalytical theorising, there is little in the way of objective evidence to
support such accounts of phobias. Nevertheless, there is often an element of insight that can be drawn
from the symbolic interpretations of case histories provided by psychoanalysis, and many anxiety
disorders may indeed function for the sufferer as a way of avoiding confrontation with more challenging
life issues and difficulties.
One of the most famous cases in the history of psychoanalysis is that of ‘Little Hans’, a 5‐year‐
old who revealed many of his perceptions, fantasies, and fears to his physician father, who, in
turn, reported them to Sigmund Freud. Hans began to have a fear of horses, which eventually
grew to the point that he refused to leave the house. The immediate event that precipitated this
phobia was seeing a big, heavy horse fall down. Freud interpreted this to mean that Hans at
that moment perceived his own wish that his father would fall down. Then Hans, a little
Oedipus, could take his father's place with his beautiful mother. Another part of the fear
derived from the large size of horses, which Hans unconsciously identified with the great power
of his father. He expressed the fear that a horse would come into his room. He also became
afraid not only of horses biting him, but of carts, furniture vans, and buses. This revealed, to
the psychoanalyst, still another aspect of Hans' unconscious fantasies, namely that the falling‐
down horse stood not only for his father but also for his mother in childbirth, the box‐like carts
and vehicles representing the womb. All these complicated, repressed feelings and perceptions
were thus incorporated in a single phobia.
It is important to note that Little Hans was basically a straightforward, cheerful child who
experienced normal psychosexual development marred only by the episode of the phobia, from
which he recovered rather promptly. Fourteen years later, 19‐year‐old Hans came to see Freud.
He had continued to develop well and had survived without unusual difficulty the divorce and
remarriage of both parents. The problems of his childhood were used by Freud to illustrate the
normal process of psychosexual development—the complex, intense, erotic drama of early
childhood.
FIGURE 6.1 The ‘Little Albert’ classical conditioning study by Watson & Rayner (1920) demonstrated the
acquisition of a phobia by pairing a white rat (the conditioned stimulus, CS) with a loud noise (the unconditioned
stimulus, UCS).
1. Traumatic experiences are essential for traditional conditioning accounts, yet many phobics appear
unable to recall any trauma or aversive conditioning experience at the time of the onset of their
phobia (Rachman, 1977; Marks, 1969; Emmelkamp, 1982). This appears to be particularly true of
some animal phobics such as snake and spider phobics (Davey, 1992b; Murray & Foote, 1979), and
also height and water phobics (Menzies & Clarke, 1993a, 1993b);
2. Not all people who have pain or trauma paired with a situation develop a phobia. For example, not
everyone who has a traumatic experience undergoing dental treatment acquires a dental phobia
(Lautch, 1971), not everyone who experiences a violent thunderstorm acquires a thunderstorm
phobia (Liddell & Lyons, 1978), and not all fliers who experience a traumatic flying accident
develop a subsequent fear of flying (Aitken, Lister, & Main, 1981; Goorney, 1970). This suggests
that a potential conditioning experience is in itself insufficient to cause a phobia;
3. Simple conditioning models treat all stimuli as equally likely to enter into association with aversive
consequences, yet fears and phobias are not evenly distributed across stimuli and experiences.
People appear to develop phobias of animals (snakes, spiders), heights, water, death, thunder, and
fire more readily than fears of hammers, electric outlets, knives, guns, etc., even though the latter
group of stimuli seem to have a high likelihood of being associated with pain or trauma
(Seligman, 1971);
4. A simple conditioning model does not appear to account for the common clinical phenomenon of
incubation, in which fear increases in magnitude over successive encounters with the phobic
stimulus even though the stimulus is not followed by any traumatic consequence (Eysenck, 1979).
Incubation is frequently observed clinically, but according to conditioning theory such repeated
unreinforced exposure should lead to extinction of the fear response rather than its enhancement.
Given these features, it is difficult for a classical conditioning account to explain the acquisition
of all phobias as resulting from traumatic conditioning episodes. There is, however, strong evidence
that traumatic conditioning experiences are responsible for the acquisition of at least some phobias.
These include dental phobia (Davey, 1988), choking phobia (Greenberg, Stern, & Weilburg, 1988),
accident phobia (Kuch, 1997), and most dog phobias (DiNardo et al., 1988; Doogan & Thomas,
1992).
If participants in a classical conditioning experiment are shown pictures of ‘fear‐relevant’ stimuli such
as snakes and spiders (CSs) paired with electric shock (UCSs), they develop fear of the CSs more quickly
and show a greater resistance to extinction than if pictures of fear‐irrelevant stimuli (e.g., pictures of
houses) are used as CSs (Öhman, Erixon, & Lofberg, 1975).
Cook and Mineka (1989, 1990) also found that laboratory‐reared rhesus monkeys that had never before
seen a snake rapidly acquired fear reactions to snakes after being shown a demonstration of another
monkey being frightened in the presence of a snake. They did not acquire fear reactions after watching
a demonstration of another monkey being frightened in the presence of a stimulus such as a rabbit or a
flower. Both studies suggest that humans and primates such as rhesus monkeys may have an unlearned
predisposition to rapidly acquire fear responses to some types of stimuli and not others (see Öhman &
Mineka, 2001).
Second, Poulton & Menzies (2002) have argued for the existence of a limited number of innate,
evolutionary‐relevant fears. This nonassociative fear acquisition model argues that fear to a set of
biologically relevant stimuli develops naturally after very early encounters given normal maturational
processes and normal background experiences, and no specific traumatic experiences with these stimuli
are necessary to evoke this fear. Following repeated exposure to these stimuli, the innate fear reaction
will habituate and should eventually disappear. Poulton and Menzies (2002) claim that this account
explains why most children go through a discrete developmental period when they appear to be
frightened of potential life‐threatening stimuli such as heights and water (Graham & Gaffan, 1997), and
why there is little evidence in retrospective studies for phobias such as height and water phobia being
caused by specific traumatic experiences (Menzies & Clarke, 1993a, 1993b). This account then goes on to
explain adult phobias as instances where these developmental phobias have failed to habituate properly.
nonassociative fear acquisition A model that argues that fear of a set of biologically
relevant stimuli develops naturally after very early encounters given normal maturational
processes and normal background experiences, and no specific traumatic experiences with these
stimuli are necessary to evoke this fear.
While evolutionary accounts are appealing and appear to have at least some face validity, we must be
cautious about accepting them on the basis of existing evidence (Delprato, 1980). First, such accounts
depend on the fact that current phobic stimuli have actually acted as important selection pressures over
our evolutionary past. But this is very difficult to empirically verify. For example, do we tend to have
phobic reactions to spiders because they once constituted an important life‐threatening pressure on our
pretechnological ancestors? There is no convincing evidence to suggest this. Second, evolutionary
accounts can be constructed in a post hoc manner and are at risk of being either ‘adaptive stories’
(McNally, 1995) or ‘imaginative reconstructions’ (Lewontin, 1979) (cf. Merckelbach & de Jong, 1997).
This view argues that it is possible to construct, post hoc, an adaptive scenario for the fear and
avoidance of almost any stimulus or event (McNally, 1995)—see Activity Box 6.1. This does not mean
that evolutionary accounts are wrong (see Öhman & Mineka, 2001, for a recent evolutionary account of
phobias), merely that they are tantalisingly easy to propose, but very difficult to substantiate. You might
find this informal blog discussion on the ‘Mystery of the Origins of Phobias’ useful and informative
(https://www.papersfromsidcup.com/graham‐daveys‐blog/the‐mystery‐of‐the‐origins‐of‐phobias).
PHOTO 6.1 A majority of people claim to have a phobia of some kind, although most are not severe enough to cause
distress or to disrupt normal daily life. Some phobias are unusual, such as phobia of cotton wool or buttons—but they are
much more common than you might think.
disgust A food-rejection emotion whose purpose is to prevent the transmission of illness and
disease through the oral incorporation of contaminated items.
disease‐avoidance model The view that some animal phobias are related to attempts to
avoid disease or illness that might be transmitted by these animals.
Alternatively, there is evidence that factors closely associated with panic and panic disorder (see Section
6.4) are also linked to a number of specific phobias. First, there is a fairly high comorbidity rate between
panic disorder and some specific phobias. Studies have identified comorbidity rates of between 40%
and 65% (de Ruiter, Rijken, Garssen, van Schaik, & Kraaimaat, 1989; Starcevic, Uhlenhuth, Kellner, &
Pathak, 1992) suggesting that panic is common in people suffering from many different types of specific
phobia. Second, some categories of specific phobia—especially situational phobias (e.g., fear of heights,
enclosed spaces)—share important characteristics in common with panic disorder. For example,
situational phobias appear to have a preponderance of spontaneous onsets typical of panic disorder
(Himle, Crystal, Curtis, & Fluent, 1991), have a significantly higher rate of comorbidity with panic
disorder than do other types of phobias, such as animal phobias (Starcevic & Bogojevic, 1997), and
frequently have uncontrollable panic attacks as one of the symptoms of phobic responding (e.g., height
phobia, Antony, Brown, & Barlow, 1997; flying phobia, McNally & Louro, 1992; claustrophobia,
McIsaac, 1995). Similarly, both claustrophobia and height phobia share aetiological factors in common
with panic disorder. For instance, subjective fear in claustrophobia is focussed not just on external
dangers but on anxiety expectancies and bodily sensations (Craske, Mohlman, Yi, Glover, & Valeri,
1995), and spontaneous panic attacks are found significantly more often in claustrophobics than in other
types of phobias (Rachman & Levitt, 1985; Craske & Sipsas, 1992). Height phobia is associated not only
with heightened discrimination of bodily sensations, but also with a bias towards interpreting
ambiguous bodily sensations as threatening—a characteristic which is central to the aetiology of panic
disorder (Davey, Menzies, & Gallardo, 1997) (see Section 6.4.4).
PHOTO 6.2 Small animal phobias are very common and consist of creepy‐crawlies, insects, molluscs, rodents, spiders,
snakes, and lizards, etc. Interestingly, if you are fearful of one of these types of animals you are more likely to be fearful of
others in this group. Fear of such animals may be related more to the emotion of disgust rather than anxiety.
These examples suggest that specific phobias may have a number of different causes—depending on the
nature of the phobic stimulus or event—and the aetiologies appear to involve quite different
vulnerability factors and psychological processes. This being so, specific phobias are a coherent category
only on the basis of their defining symptoms, and therapists may need to look more closely at the
different aetiologies to construct successful treatments.
One‐session treatments for specific phobias were developed during the 1990s and provide
effective and long‐lasting treatment for many specific phobias (Öst,
1997; Koch, Spates, & Himle, 2004; Öst, Alm, Brandberg, & Breitholtz, 2001). One‐session
treatments usually include a combination of graduated in vivo exposure and modeling. The
following is an example of a one‐session treatment procedure for spider phobia.
STEP 1: Catching a small spider in a plastic bowl: The therapist first models how the
client should catch the spider by putting a bowl over it, sliding a card underneath to
trap it, and then picking the bowl up using the card as a lid. This is
repeated three to four times and on the last occasion the client is instructed to hold the
bowl in the palm of his/her hand. At this point a brief role‐play can be carried out by
having the therapist play the part of a person born blind, and the client has to describe
what is happening (thus forcing the client to look at the spider in the bowl).
STEP 2: Touching the spider: The therapist asks the client what they think will happen if
they touch the spider. Most spider phobics say the spider will climb up their arm. This is a
prediction that can be tested by the therapist who then touches the spider. This is repeated
up to 10 times to show the client that the spider's reaction is almost always to run away.
This is followed by the client touching the spider—usually with some physical guidance
from the therapist.
STEP 3: Holding the spider in the hand: The therapist takes the spider on his/her hand,
letting it walk from one hand to another. The client is then encouraged to put their index
finger on the therapist's hand so that the spider can walk across the finger and back to the
therapist's hand. This is repeated a number of times until the spider walks across all the
client's fingers. Gradually, the therapist withdraws physical support and the client allows
the spider to walk from one hand to another.
These three steps are then repeated with spiders of increasingly larger size. Throughout the
session, the client is taught that he/she can acquire control over the spider by gradually being
able to predict what the spider will do. The goal of the therapy is to ensure that at the end of
the session the client can handle two spiders with low or no anxiety and no longer believe
his/her catastrophic cognitions about spiders.
From Öst (1997).
In conclusion, it must be remembered that many people can live with their phobias—either because
they are subclinical in intensity or their fears are so specific that they do not interfere substantially with
their daily lives. So only those with the most distressing or disabling phobias are the ones who seek
treatment. In general, recently developed therapies for specific phobias have been shown to be
extremely effective and successful, and these therapies are usually multifaceted and combine aspects of
exposure therapy with cognitive restructuring.
SELF‐TEST QUESTIONS
What are the main diagnostic criteria for specific phobias?
What are the most common phobias, and what are the kinds of phobic beliefs that
accompany them?
How do classical conditioning and evolutionary theories attempt to explain the acquisition
of phobias? What are their similarities and differences?
What is the role of brain areas such as the amygdala in the formation of specific phobias?
Why is exposure such an important feature of treatment for specific phobias?
SECTION SUMMARY
Distinct fear of social interactions, typified by anxiety about being judged negatively or about
giving offence to others
Social interactions are avoided or are experienced with intense fear or anxiety
The avoidance, fear, or anxiety often lasts for 6 or more months and causes significant distress and
difficulty in performing social or occupational activities
Anxiety cannot be explained by the effects of other mental or medical disorders, drug abuse, or
medication
6.3.1 Prevalence
Cross‐national epidemiology studies have indicated that SAD has a lifetime prevalence of 4% and a 1‐
year prevalence rate of 2.4%, and globally is associated with specific socio‐demographic features such as
younger age, female gender, unmarried status, lower education, and lower income (Stein et al., 2017).
However, the prevalence rates of social fears and phobias more generally may be considerably higher,
with lifetime and 1‐year rates of 12% and 8% respectively (Ruscio et al., 2008). Age of onset is
considerably earlier than for many of the other anxiety disorders, with typical age of onset in early to
mid‐teens, and usually prior to 18 years of age (Rapee, 1995; Otto et al., 2001). It is also a particularly
persistent disorder and has the lowest overall remission rate of the main anxiety disorders (Massion et
al., 2002; Hirshfeld, Micco, Simoes, & Henin, 2008).
Genetic factors
Evidence is accruing that there is an underlying genetic component to SAD, although it is not
clear how specific this genetic component might be, and a systematic review of heritability studies has
revealed heritability rates that vary drastically between 13% and 76%—largely as a result of significant
differences in methodologies between studies (Moreno, Osório, Martín‐Santos, & Crippa, 2016).
Evidence consistent with a genetic component indicates that children with SAD are more likely to have
parents with the disorder than nonphobic children (Lieb et al., 2000; Mancini et al., 1996), and twin
studies also suggest that there is a significant but moderate genetic influence on the development of
SAD (Beatty, Heisel, Hall, Levine, & La France, 2002; Ollendick & Hirshfeld‐Becker, 2002). While
indicating the importance of genetic influences, such studies raise the question of what aspect of SAD
is inherited. Some studies have been able to identify specific constructs related to SAD that appear to
have a genetic component, and these include submissiveness, anxiousness, social avoidance, introversion,
and behavioural inhibition (Warren, Schmitz, & Emde, 1999; Robinson, Kagan, Reznick, & Corley,
1992; Stein et al., 2017). Other studies indicate that SAD contains an inherited component that is
shared with other anxiety disorders—and this suggests that what might be inherited is a vulnerability to
anxiety disorders generally rather than social phobia specifically (Kendler et al., 1995; Nelson et al.,
2000). Several genes have been associated with socially anxious traits such as shyness and introversion,
although these studies have been far from consistent in their findings (Gelernter, Page, Stein, & Woods,
2004; Stein & Stein, 2008; Stein & Gelernter, 2014).
Cognitive factors
There appear to be a number of cognitive processes that are characteristic of SAD and which may all
act in some way to maintain fear of social situations (Stravynski, Bond, & Amando, 2004). First,
individuals diagnosed with SAD possess an information processing and interpretation bias in which they
make excessively negative predictions about future social events (Heinrichs & Hofmann, 2001; Hirsch &
Clark, 2004)—predictions that are often based on past failures, poor performance, or rejection (Sluis,
Boschen, Neumann, & Murphy, 2017). For example, individuals with SAD rate the probability of
negative social events occurring as higher than either nonclinical controls or individuals with other
anxiety disorders (Foa, Franklin, Perry, & Herbert, 1996; Gilboa‐Schechtman, Franklin, & Foa, 2000),
and this negative evaluation is likely to maintain their avoidance of social situations. Second, individuals
with SAD interpret their performance in social situations significantly more critically than nonsufferers
and independent assessors who have observed their behaviour (Stopa & Clark, 1993; Rapee & Lim,
1992) and also underestimate their own social skills (Dodd, Hudson, Lyneham, Morris, & Monier,
2011). Socially anxious individuals also find it very difficult to process positive social feedback (Alden,
Mellings, & Laposa, 2004). This focus on negative aspects of the social situation, and the relative
inability to take anything ‘good’ from a social performance are likely to maintain the socially anxious
individual's dysfunctional beliefs that social situations are threatening and that their own performance is
likely to be flawed. Third, some theories of SAD argue that sufferers show a strong tendency to shift
their attention inwards onto themselves and their own anxiety responses during social performance—
especially when they fear they will be negatively evaluated (Clark & Wells, 1995; Rapee & Heimberg,
1997). This is known as self‐focused attention (Spurr & Stopa, 2002; Bogels & Mansell, 2004) and
has the effect of leading socially anxious individuals to believe they may look as anxious as they feel
inside. This prevents objective processing of the social situation, leads them to engage in critical self‐
evaluation, and may well adversely affect their actual performance in the social situation (e.g., Kley,
Tuschen‐Caffier, & Heinrichs, 2011). Studies have shown that social phobics do indeed display higher
levels of self‐reported self‐focused attention than nonclinical populations (Bogels & Lamers, 2002) and
that they recall social memories more often from an observer perspective than a personal perspective
(suggesting that they do indeed ‘observe’ themselves while performing socially) (Wells, Clark & Ahmad,
1998) (Figure 6.3). Self‐focused attention therefore appears to reinforce the
individual's perception of their own anxiety in the social situation, can distract the individual from
the social task at hand and lead to unskilled performance, and can result in avoidance of future
social situations (Alden, Teschuk, & Tee, 1992). Finally, individuals with SAD also engage in excessive
post‐event processing of social events that includes critical self‐appraisal of performance and assessment
of symptom severity. Such post‐event rumination has the effect of maintaining negative appraisals of
performance over time and maintaining social anxiety (Abbott & Rapee, 2004; Gavric, Moscovitch,
Rowa, & McCabe, 2017) (Focus Point 6.3).
THE CLIENT'S PERSPECTIVE 6.1 SOCIAL ANXIETY
DISORDER
‘You have to be a sufferer of Social Anxiety to understand the pure terror that a victim of this
illness feels. It's the sort of blind panic dread and fear that one would feel facing a firing squad
or if you fell into a lions cage. You shake like a leaf, you blush, your mouth goes dry, you can't
speak, you break out in a cold sweat, your legs feel so weak you think you're going to fall. Your
thoughts become confused and disorientated. Forget butterflies in the stomach—your guts are
twisted inside out with FEAR.
Social Anxiety made me sink so low I ended up cleaning public toilets for a living. I never
married (No I'm not gay) I had no children I never owned my own house. I rent a small flat in a
very poor part of London all because of SEVERE Social Anxiety. My parents were cold
reserved people unable to show their emotions I was never abused in any way but I look back
on my childhood as a lonely unhappy time. Maybe that was the root cause of my phobia. I
mention that because we can all think of something that may have been the cause. My Social
Anxiety started in my last year at school when I became very self‐conscious and developed a
fearful dread of being asked to read in front of the class. This extreme anxiety moved on with
me into my working life. I was a smart looking young man so I got some good jobs, but because
of Social Anxiety no way could I hold them. Would you buy from a salesman who went a deep
red, stammered, couldn't look you in the eye, and shook so much that even his head trembled.
No—nor would the boss who in the end would say get lost, you're bad for business.
Over the years I slid down and down the social ladder with long spells out of work and, of
course, no money. By the time I was 30 I could only do work where I did not have to deal with
people like road sweep, night work in factories and in the end cleaning public toilets when
closed at night. Social Anxiety was now so bad I couldn't face going into a shop to buy
something. To pass a queue of people waiting for a bus was hell I was sure they were all staring
at me. I couldn't sit facing other passengers on a train unless I had a newspaper to hide behind.
If I attempted going into a restaurant or café, I'd pick a table facing the wall and if anyone sat
at my table my hands would shake so much I couldn't get the food into my mouth. I became the
ultimate night person only going out late to walk the streets’.
Clinical Commentary
This client’s perspective highlights the extreme fear experienced by many social phobics
in a range of social and performance situations, and the impact it can have on social
functioning specifically and life planning more generally. This description highlights a
number of important features of SAD, including (a) the biased interpretations that
social phobics have of the reactions of others to them (e.g., ‘To pass a queue of people
waiting for a bus was hell I was sure they were all staring at me’), (b) the belief that there
are obvious physical signs of their nervousness which observers interpret judgementally
(e.g., ‘Would you buy from a salesman who went a deep red, stammered, and couldn’t
look you in the eye’), and (c) the tendency of social phobics to focus their attention on
themselves and their own reactions to the possible detriment of their own performance
(e.g., ‘My social anxiety started in my last year at school when I became very self‐
conscious and developed a fearful dread of being asked to read in front of the class’).
FOCUS POINT 6.2 BEHAVIOURAL INHIBITION (BI) AND
SOCIAL ANXIETY DISORDER
Many children seem quiet, isolated, and anxious when confronted either with social situations
or with novelty, and this characteristic has come to be defined by the construct called
behavioural inhibition (BI) (Kagan, Reznick, Clark, Snidman, & Garcia Coll, 1984). BI
represents ‘a consistent tendency to display extreme fearfulness, withdrawal, reticence, and
restraint in the face of novelty’ (Hirschfeld‐Becker, 2010), and toddlers exhibiting BI will show
overt distress and cling to their mothers in unfamiliar or novel situations. They are also
reluctant to approach novel objects, peers, and adults. Preschoolers with BI seem quiet and are
reticent to speak or play spontaneously, and by age 7 the reluctance to socialise is found mainly
in group contexts. BI is estimated to have quite a high level of heritability—between 50 and
70% (Smoller & Tsuang, 1998), and BI is considered to be a specific risk factor for SAD
(Hirschfeld‐Becker, 2010).
FIGURE 6.3 High and low socially anxious participants were asked to give a speech to a group of observers. After
giving the speech, the high socially anxious participants rated the observers' enjoyment of their speech significantly
lower than the low socially anxious participants did. The high socially anxious participants did this even when the observers had
been instructed to provide positive feedback—suggesting that socially phobic individuals do not attend to positive feedback
cues given by an audience.
After Perowne & Mansell (2002).
self‐focused attention The tendency of individuals with social anxiety disorder to shift their
attention inwards onto themselves and their own anxiety responses during social performance –
especially when they fear they will be negatively evaluated.
The following provides a step‐by‐step account of the Cognitive Therapy for SAD devised by
Clark & Wells (1995). The aims of this procedure are (a) to decrease self‐focused attention, (b) to
reduce the level of negative interpretations of internal information (e.g., sweating as a sign of
poor performance), (c) to eliminate the use of safety behaviours which maintain negative beliefs
(e.g., if the phobic believes they are trembling and this may be visible, they may grip objects
tightly in order to conceal this—this response merely maintains the phobic's belief that they are
anxious and trembling), and (d) to reduce negative post‐event processing (see Section 4.2.1).
STEP 1: The initial phase is designed to inform the client about those factors that are
maintaining their social phobia (see above) and that these are the factors that the therapy is
specifically designed to target.
STEP 2: The second phase attempts to manipulate safety behaviours. Here the client has
to role play a social situation and observe his or her own responses and identify key safety
behaviours. The client will then attempt to drop these safety behaviours during subsequent
role playing.
STEP 3: Clients are trained to shift their attention externally and away from their own
internal responses and cognitions.
STEP 4: Video feedback of performance can be used to modify distorted self‐imagery.
STEP 5: The client is provided with some behavioural experiments in which they specify
their fears of particular social situations and then test out whether they occurred during
role‐play sessions.
STEP 6: Problematic post‐event processing is identified and modified using focused
cognitive restructuring techniques.
SECTION SUMMARY
Marilyn is a 33‐year‐old single woman who works at a local telephone company and lives alone
in her apartment. She has panic disorder with agoraphobia and her first panic attack occurred
3 years ago when driving over a bridge on a very rainy day. She experienced dizziness,
pounding heart, trembling, and difficulty breathing. She was terrified her symptoms meant she
was about to pass out and lose control of her car. Since that time she has experienced eight
unexpected panic attacks during which she feared she was about to pass out and lose control of
herself. She frequently experiences limited symptom attacks (e.g., feels dizzy and fears she may
pass out). As a result of her intense fear of having another panic attack she is avoiding the
following situations: waiting in line, drinking alcohol, elevators, movie theatres, driving over
bridges, driving on the freeway, flying by plane, and heights (e.g., will not go out on her 10th
floor balcony). She is often late for work because of taking a route that doesn't require her to
take the freeway. She is also finding herself avoiding more and more activities. She frequently
feels tearful and on guard. Sometimes she gets very angry at herself as she does not understand
why she has become so fearful and avoidant.
Sharon is a 38‐year‐old single mother of two teenage daughters who works as a fitness
instructor at a local gym. She experienced her first panic attack during her teens when watching
a horror movie with friends at a local movie theatre. Since that time she has experienced one to
two full panic attacks per year that come out of the blue in a variety of situations (e.g., while
waiting in line at the bank, at a shopping mall, walking alone at the park). The panic attacks
recurred out of the blue when she was 29 while eating a hot and spicy meal at a local
restaurant. Her panic attacks always include dizziness, feeling of choking, dry mouth, unreality,
feeling detached from her body, and feeling as if she may lose bowel control. Her main fear is
that she is dying due to a stroke although medical problems have been ruled out. Sharon does
not avoid anything to prevent the panic attacks and there has not been a huge negative impact
of the panic attacks upon her work, family or social functioning.
Clinical Commentary
Both Marilyn and Sharon exhibit a number of physical symptoms typical of panic attacks, although
these examples show that not everyone experiences similar symptoms. Panic attacks often come ‘out of the
blue’ and are unpredictable, and this adds to their frightening nature. In both examples, the individual
believes that the symptoms are signs of impending physical illness or loss of control (catastrophic
misinterpretation). The pervasive fear of further attacks means that Marilyn has developed avoidance
responses in an attempt to minimise future attacks. These avoidance responses interfere with her normal
daily life (causing further stress), and inadvertently help to maintain dysfunctional catastrophic beliefs.
TABLE 6.6 Summary: DSM‐5 Criteria for a Panic Attack
A sudden feeling of extreme fear or distress, which can originate from either a calm or an anxious
state. Symptoms intensify in a short space of time and will include a range of sensations such as:
fluctuations in heart rate
shortness of breath or chest pain
nausea
dizziness
shaking
Distinct fear of situations where the individual is outside, in a crowd or an open space, or in
public spaces such as shops, cinemas, or buses
Situations are avoided or are experienced with intense fear that help will be unavailable or that
panic or other resultant symptoms will occur
The individual experiences fear in at least two different situation types and symptoms of anxiety
or avoidance will last for 6 months or more
Fear causes difficulty in performing social or occupational activities and cannot be explained by
the effects of other mental or medical disorders
6.4.2 Agoraphobia
Agoraphobia is a fear or anxiety of any place where the sufferer does not feel safe or feels trapped,
and is accompanied by a strong urge to escape to a safe place (e.g., home). Very often, this urge to
escape or avoid ‘unsafe’ places is associated with the fear of having a panic attack and the
embarrassment that might cause. Agoraphobia is typically associated with fear of specific types of
places or situations, including public transport; open spaces such as car parks; bridges; being in shops,
supermarkets, theatres, or cinemas; standing in a queue or being in a crowd; or simply being alone away
from home. As a result of these fears, many individuals with a diagnosis of agoraphobia rarely venture
far from home, and when they do, they may do so only with a trusted friend or relative. Thus,
agoraphobia can be a severely disabling condition that often confines sufferers to their homes for many
years, rendering them unable to work or to socialise. Sufferers also come to rely heavily on friends and
relatives to help them with even basic activities like shopping or trips to the doctor. This will often put a
severe strain on those family members who care for individuals suffering from agoraphobia (Table 6.8).
Agoraphobia A fear or anxiety of any place where the sufferer does not feel safe or feels
trapped, and is accompanied by a strong urge to escape to a safe place (e.g. home).
6.4.3 Prevalence
Cross‐national studies indicate that, among individuals who have ever had a panic attack, 2 out of 3
will probably have recurrent panic attacks, and of those who do experience recurrent panic
attacks only 1 in 8 (12%) will fulfill DSM‐5 criteria for panic disorder (de Jonge et al., 2016). The 12‐
month prevalence rate for panic disorder is around 1.5–3% and between 0.4% and 3% for agoraphobia
(e.g., Goodwin et al., 2005). Onset is common in adolescence or early adulthood and can often be
associated with a period of stress in the individual's life (de Lijster et al., 2016; Pollard, Pollard, & Corn,
1989). Prevalence rates differ considerably across different countries (de Jonge et al., 2016), possibly
because of the culturally different ways in which symptoms are expressed. For example, in some Asian
societies, prevalence is particularly low—possibly because of the stigma related to admitting and
reporting psychological disorders (e.g., in Taiwan—Weissman et al., 1997). However, in other cultures,
panic disorder may be expressed in the form of quite different symptoms. For example, Ataque de
Nervios is an anxiety‐based disorder found almost exclusively in Latinos in the Caribbean. This appears
to be a form of panic disorder brought on by stressful life events (such as economic or marital
difficulties), but whose expression is determined by the social and cultural norms within that cultural
group (see Chapter 1, Focus Point 1.3). In particular, Latino cultures place less emphasis on self‐control
and emotional restraint than other Western cultures, and so the distress of panic disorder in Latinos
tends to be externalised in the form of screaming, uncontrolled behaviour and aggression (Salman et al.,
1998). In contrast, in Western cultures the distress of panic disorder is usually coped with by adopting
avoidance and withdrawal strategies—hence the common diagnosis of panic disorder with agoraphobia
(see Davey, 2018, chapter 9, for a further discussion of how panic disorder symptoms are experienced in
other cultures).
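The conditional proportions quoted at the start of this section can be chained together to show how few people who ever experience a panic attack go on to develop the full disorder. The following is an illustrative sketch only, using the rough population figures quoted above (de Jonge et al., 2016):

```python
# Chaining the cross-national proportions quoted above (de Jonge et al., 2016).
# These are approximate population figures, used purely for illustration.
p_recurrent_given_attack = 2 / 3   # ~2 out of 3 who ever panic have recurrent attacks
p_disorder_given_recurrent = 0.12  # ~1 in 8 (12%) of those meet DSM-5 panic disorder criteria

# Probability that someone who has ever had a panic attack develops panic disorder
p_disorder_given_attack = p_recurrent_given_attack * p_disorder_given_recurrent
print(f"{p_disorder_given_attack:.0%}")  # → 8%
```

In other words, on these figures only around 8% of people who ever experience a panic attack would be expected to meet full DSM‐5 criteria for panic disorder.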
It is important to remember that panic attacks may be a feature of the symptoms in a number of the
anxiety disorders (e.g., specific phobias and social phobia). However, panic disorder itself is
characterised by frequent uncontrollable panic attacks, and an important aspect of this anxiety‐based
problem is the individual's intense fear of experiencing panic attacks. As we shall see, it is this latter
feature of panic disorder that plays a central role in theories of the disorder.
Biological factors
Panic attacks represent an activation of our physiological fight/flight defensive reaction that's evolved to
provide us with the immediate means to avoid danger. So during periods of stress or danger, our
sympathetic nervous system is activated, and its major effect is to signal release of the hormones
adrenaline and noradrenaline that release energy and prepare the body for action. These chemicals have
the effect of generating an increase in heartbeat that increases blood flow and improves delivery of
oxygen to the muscles (the pounding heart experience), they increase the speed and depth of breathing
to provide more oxygen to the tissues for action (this hyperventilation can make the individual feel dizzy
or light headed), and they increase perspiration which prevents the body from overheating (and makes it
harder for a predator to grab you!). There are some other effects which give rise to panic‐type
symptoms, such as the redirecting of blood away from places where it's not needed (such as the stomach)
towards places where it is needed (the muscles), resulting in feelings of nausea as a result of reduced
activity in the digestive system. The pupils widen to let in more light, resulting in the blurred vision
often experienced during a panic attack. Finally, adrenaline and noradrenaline also cause muscle groups
to tense up in preparation for fight or flight, which can often give rise to sensations such as chest pains
which many panic attack sufferers will interpret as signs of an impending heart attack or cardiac arrest.
These biological symptoms of panic attacks help us to understand the physical experiences that define a
panic attack, but they don't immediately tell us what triggers a panic attack. However, in biological
terms, panic attacks are associated with the brain area known as the locus coeruleus which is located
in the brain stem. This is an area of the brain that is the main source of the neurotransmitter
norepinephrine, and it is the release of norepinephrine in response to stress that activates most of the
biological reactions found in a panic attack. Individuals with a diagnosis of panic disorder show a
greater sensitivity to norepinephrine and to drugs that increase activity in the locus coeruleus (Bandelow
et al., 2017), suggesting that it may be the greater reactivity of the locus coeruleus in some individuals
that bestows a vulnerability for panic attacks.
biological challenge tests Research in which panic attacks are induced by administering
carbon dioxide-enriched air (CO2) or by encouraging hyperventilation.
Classical conditioning
Goldstein and Chambless (1978) were the first to suggest that an important feature of panic disorder
was the sufferer's ‘fear of fear’. That is, when they detected what they thought were any internal signs of
a panic attack (e.g., mild dizziness), they would immediately become fearful of the possible
consequences. The stress of this fear would then precipitate a full‐blown attack. Goldstein and
Chambless (1978) interpreted this as a form of interoceptive classical conditioning, in which the internal
cue (such as dizziness) had become established as an internal CS predicting a panic attack (the UCS)
(see Chapter 1, Section 1.3.2 for a brief description of classical conditioning). However, while this
account has intuitive appeal, it is not clear in conditioning terms what is the CS and what is the UCS.
For example, is a skipped heartbeat a CS that precipitates a panic attack, or is it a symptom of the panic
attack itself (the UCS) (McNally, 1990)? Bouton, Mineka, & Barlow (2001) have attempted to address
these conceptual difficulties by suggesting that anxiety and panic are separable aspects of panic disorder.
They suggest that anxiety is anticipatory and prepares the system for a trauma, whereas panic deals
with a trauma that is already in progress. In this conditioning account, anxiety is the learned reaction
(the conditioned response, or CR) to the detection of cues (CSs) that might predict a panic attack, and once conditioned anxiety
develops it will exacerbate subsequent panic attacks and lead to the development of panic disorder. As
predicted by this model, studies confirm that panic attacks are regularly preceded by anxiety in
individuals with panic disorder (Barlow, 1988; Kenardy & Taylor, 1999).
Anxiety sensitivity
What is clear about the phenomenology of panic disorder is that sufferers become extremely anxious
when they detect any cues (internal or external) that may be indicative of a panic attack. So any theory
of panic disorder needs to explain why sufferers are made anxious by the detection of these cues, and
how this subsequently leads to a full‐blown panic attack. Individuals who do not suffer panic disorder
report a number of interoceptive and affective responses in biological challenge tests, but they are only
rarely made anxious by these symptoms and hardly ever panic (Bass & Gardner, 1985; Starkman,
Zelnik, Nesse, & Cameron, 1985). So, what determines whether someone will panic in response to
unusual bodily sensations? Reiss & McNally (1985) proposed that some individuals have preexisting
beliefs that bodily sensations may predict harmful consequences. They developed the construct of
anxiety sensitivity, which refers to fears of anxiety symptoms that are based on beliefs that such
symptoms have harmful consequences (e.g., that a rapid heartbeat predicts an impending heart attack).
In order to measure this construct, Reiss, Peterson, Gursky, and McNally (1986) developed the Anxiety
Sensitivity Index (ASI) (see also the Revised Anxiety Sensitivity Index, ASI‐R, Taylor & Cox, 1998),
and this contains items such as ‘Unusual body sensations scare me’ and ‘It scares me when I feel faint’.
Studies have shown that individuals with panic disorder score significantly higher on the ASI than either
nonclinical controls or individuals diagnosed with other anxiety disorders (Taylor & Cox, 1998; Rapee,
Ancis, & Barlow, 1988). Furthermore, in a prospective study, high ASI scores predicted the occurrence
of subsequent panic attacks in army recruits undergoing a stressful period of training (Schmidt, Lerew,
& Jackson, 1997), and this suggests that elevated anxiety sensitivity may be a risk factor for panic and
also panic disorder (McNally, 2002).
anxiety sensitivity Fears of anxiety symptoms based on beliefs that such symptoms have
harmful consequences (e.g. that a rapid heartbeat predicts an impending heart attack).
Anxiety Sensitivity Index A measure, developed by Reiss, Peterson, Gursky & McNally
(1986), to measure anxiety sensitivity.
So where does anxiety sensitivity come from? Why are some people high on anxiety sensitivity and
others seemingly unbothered about whether they're anxious or not? There are still relatively few
comprehensive studies of the origins of anxiety sensitivity, but there are some possible causal candidates
in the research literature. One is genetics. Twin studies suggest that anxiety sensitivity is ‘moderately’
heritable (accounting for up to 61% of the variance in anxiety sensitivity scores) and this genetic effect is
generally stable over time (Zavos, Gregory, & Eley, 2012). Stressful life events are also associated with
subsequent increases in anxiety sensitivity in adolescents, especially stressful events related to health (e.g.,
the individual or a member of the family being hospitalised) and to family discord (e.g., parents being
divorced) (McLaughlin & Hatzenbuehler, 2009), suggesting some experiential factors might also
facilitate anxiety sensitivity.
FIGURE 6.4 Clark's (1986) model of panic disorder. Perception of a threat triggers apprehension and then bodily
sensations associated with that apprehension are interpreted catastrophically. This causes further anxiety which feeds into a
vicious cycle that triggers a full‐blown panic attack.
Despite being the explanation of choice for most clinical psychologists, the catastrophic
misinterpretation of bodily sensations model still leaves open the question of why some people misinterpret
bodily sensations catastrophically and others do not. One possibility is that those individuals who do
develop panic disorder may have a history of medical illness that has been distressing for them, and now
leads them to interpret all bodily sensations as distressing and threatening, and this possibility is
supported by the fact that panic disorder is frequently comorbid with a number of forms of medical
illness (Meuret, Kroll, & Ritz, 2017).
Summary
All of these accounts suggest that while the symptoms of panic attacks are very much biological
manifestations of the sympathetic nervous system, there is likely to be an important psychological
component to the development of panic disorder, and one that involves a negatively valenced bias in
how the individual interprets and reacts to their own bodily sensations. This interpretation bias appears
to trigger anxiety, which in turn triggers a panic attack. The issues that remain to be resolved in these
accounts are (a) exactly how the anxiety elicited by catastrophic misinterpretation of bodily sensations
leads to panic, and (b) why some individuals have acquired high levels of anxiety sensitivity and
catastrophic beliefs in the first place.
The following transcript gives an example of how a cognitive therapist (T) would try to
challenge the catastrophic beliefs of a panic disorder sufferer (P) who believes that signs of an
impending panic attack are signals for an imminent heart attack.
P:
When I'm panicking, it's terrible I can feel my heart pounding; it's so bad I think it could
burst through my chest.
T:
What thoughts go through your mind when your heart is pounding like that?
P:
Well I'll tell you what I think; it's so bad that I think I'm going to have a heart attack. It can't
be good for your heart beating like that.
T:
So you're concerned that anxiety can damage your heart or cause a heart attack.
P:
Yes, it must do you some damage. You hear of people dropping down dead from heart
attacks caused by stress.
T:
Do you think more people have stress in their lives than die of heart attacks?
P:
Yes, I suppose so.
T:
How can that be if stress causes heart attacks?
P:
Well, I suppose it doesn't always cause problems. Maybe it does only in some people.
T:
Yes, that's right; stress can cause some problems in some people. It tends to be people who
have something wrong with their hearts in the first place. But stress is not necessarily the
same as sudden anxiety or panic. When you panic your body releases adrenalin which
causes the heart to speed up and your body to work faster. It's a way of preparing you to deal
better with danger. If adrenalin damaged the heart or body, how would people have evolved
from dangerous primitive times? Wouldn't we all have been wiped out?
P:
Yes, I suppose so.
T:
So maybe panic itself doesn't cause heart attacks, there has to be something physically
wrong for that to happen. When people have had heart attacks they are often given an
injection of adrenalin directly into the heart in order to help start it again. Do you think they
would do that if it damaged the heart even more?
P:
No I'm sure they wouldn't.
T:
So, how much do you now believe that anxiety and panic will damage your heart?
SELF‐TEST QUESTIONS
What are the main diagnostic criteria for social anxiety disorder and how does this
disorder manifest itself?
Can you describe the various cognitive factors that appear to play an important role in
maintaining social anxiety disorder?
How do CBT and drug treatments complement each other in the treatment of social
anxiety disorder?
SECTION SUMMARY
catastrophising An example of magnification in which the individual takes a single fact to its
extreme, one example being catastrophic worrying.
6.5.2 Prevalence
Cross‐national studies indicate that DSM‐5 diagnosed GAD has a lifetime prevalence rate of 3.7% and
a 12‐month prevalence rate of 1.8%, and the disorder is significantly more prevalent and impairing in
high‐income than in low‐income countries (Ruscio et al., 2017). Subthreshold GAD is also a significant
problem worldwide, with as many as 12% of individuals suffering disabling and distressing subthreshold
symptoms in their lifetime (Haller, Cramer, Lauche, Gass, & Dobos, 2014). GAD is almost twice as common in women as in men and can often persist from adolescence to old age (McLean, Asnaani, Litz, & Hofmann, 2011; Barlow, Blanchard, Vermilyea, Vermilyea, & DiNardo, 1986). In addition, GAD is associated with significant impairments in psychosocial functioning, role functioning, work productivity, and health‐related quality of life (Revicki et al., 2012). Comorbidity is also highly prevalent in sufferers
of GAD, with rates ranging from 45% to 98% (Goisman, Goldenberg, Vasile, & Keller, 1995).
Individuals diagnosed with GAD are likely to be diagnosed with other mental health problems such as
major depression (Zhou et al., 2017) and eating disorders (Brown, O'Leary, & Barlow, 2001), with
comorbidity predicting poorer treatment outcomes.
Biological theories
There is some evidence for a genetic component to both anxiety generally and GAD specifically (Noyes et al., 1992). Twin studies estimate the heritable
component at around 30% (Dellava, Kendler, & Neale, 2011; Hettema, Neale, Myers, Prescott, &
Kendler, 2001) and recent genome‐wide association studies (GWAS) have reported a similar figure of
30% heritability (Gottschalk & Domschke, 2017). But what appears to be inherited more than anything
else is a vulnerability to anxiety disorders and neuroticism generally, and less an inherited vulnerability
to a specific disorder such as GAD (Tambs et al., 2009).
Neuropsychological perspectives on GAD are still in their infancy, with neuroimaging studies of worry
implicating prefrontal brain regions in this activity (Hoehn‐Saric, Schlund, & Wong, 2004; Paulesu et
al., 2010), but such studies have not yet provided any convincing reasons why worrying in GAD
sufferers should be so extreme and distressing. Other neuroimaging studies have focused on possible
abnormalities in emotional regulation in GAD sufferers (Mennin, Holaway, Fresco, Moore, &
Heimberg, 2007). A common finding is decreased connectivity between the amygdala (the emotional centre that mediates fear‐related emotion) and the prefrontal cortex (PFC) (the brain area involved in complex cognitive behaviour such as decision making and planning). Because the PFC is critical for the effective regulation of emotion, particularly negative emotions such as anxiety, this suggests that individuals with a diagnosis of GAD may have significant problems in regulating and inhibiting their anxiety (Roy et al., 2013; Patriquin & Mathew, 2018).
TABLE 6.9 Catastrophising in Worriers and Nonworriers
After Vasey & Borkovec (1992).
These catastrophising sequences from a chronic worrier (top) and a nonworrier (bottom) were generated using the catastrophising interview procedure, in which the individual is first asked ‘What is your main worry at the moment?’ In this case both participants replied, ‘Getting good grades in school’. The interviewer then passes this response back to the participant by asking ‘What is it that worries you about getting good grades in school?’ Each time the participant responds, the interviewer passes that response back by asking what it is about the response that worries them. The interview continues until the participant can no longer think of any reasons.
By looking at the catastrophising sequences above, we can deduce a number of things about chronic
worriers: (a) they produce significantly more catastrophising steps than nonworriers, (b) they
experience increasing emotional distress as catastrophising continues as evidenced by their ‘discomfort’
scores, and (c) the content of their catastrophising steps becomes more and more threatening and
catastrophic, as evidenced by their increasing ‘likelihood’ scores as catastrophising progresses.
Chronic Worrier Topic: Getting good grades in school.
Catastrophising step Discomfort Likelihood
I won't live up to my expectations. 50 30
I'd be disappointed in myself. 60 100
I'd lose my self‐confidence. 70 50
My loss of self‐confidence would spread to other areas of my life. 70 50
I wouldn't have as much control as I'd like. 75 80
I'd be afraid of facing the unknown. 75 100
I'd become very anxious. 75 100
Anxiety would lead to further loss of self‐confidence. 75 80
I wouldn't get my confidence back. 75 50
I'd feel like I wouldn't have any control over my life. 75 80
I'd be susceptible to things that normally wouldn't bother me. 75
I'd become more and more anxious. 80 80
I'd have no control at all and I'd become mentally ill. 85 30
I'd become dependent on drugs and therapy. 50 30
I'd always remain dependent on drugs. 85 50
They'd deteriorate my body. 85 100
I'd be in pain. 85 100
I'd die. 90 80
I'd end up in hell. 95 80
Nonworrier Topic: Getting good grades in school.
Catastrophising step Discomfort Likelihood
I might do poorly on a test. 3 20
I'd get a bad grade in the class. 3 100
That would lower my grade‐point average. 2 100
I'd have less of a chance of getting a good job. 2 60
I'd end up in a bad job. 2 80
I'd get a low salary. 2 100
I'd have less money to spend on what I want. 2 100
I'd be unhappy. 2 35
It would be a strain on me. 2 10
I'd worry more. 2 5
TABLE 6.10 Summary: DSM‐5 Criteria for Diagnosing Generalised Anxiety Disorder (GAD)
Disproportionate fear or anxiety relating to areas of activity such as finances, health, family, or
work/school life
The individual experiences anxiety or worry relating to a number of different areas of activity, and symptoms of intense anxiety or worry will last for 6 months or more and will be present for the majority of the time during this period
Feelings of anxiety or worry will be accompanied by symptoms of restlessness, agitation, or
muscle tension
Anxiety or worry is also associated with behaviours such as frequently seeking reassurance,
avoidance of areas of activity that cause anxiety, or excessive procrastination or effort in
preparing for activities
Symptoms cannot be explained by other mental disorders such as panic disorder
Psychological theories
attention bias modification (ABM) A training procedure that uses experimental tasks to reverse the attentional and interpretational biases towards threat exhibited by highly anxious individuals, thereby neutralising a known cause of anxiety.
Because highly anxious individuals have attentional and interpretational biases towards threat
that are known to cause anxiety, a practical way of reversing these biases is to use experimental
procedures that will neutralise them. This training procedure is known as attention bias
modification (ABM) and has been used to modify both attentional and interpretation biases in
anxious individuals and to reduce anxiety vulnerability and levels of dysfunctional anxiety
(MacLeod & Mathews, 2012; Hakamata et al., 2010).
The procedure uses the classic dot probe task that has traditionally been used to measure
attentional biases (van Rooijen, Ploeger, & Kret, 2017). Stimuli are presented on a computer
screen. First, the participant sees a focus point (+), and this is followed by two words presented
simultaneously for a very short duration (usually 500 ms). One will be threatening (e.g., ‘Humiliated’) and the other will be nonthreatening (e.g., ‘Dishwasher’). Immediately after the words have disappeared, a ‘probe’ (e.g., a colon) appears in the position previously occupied by one of the words, and the participant has to respond as quickly as possible by indicating where the probe appeared. Whereas in the standard measurement task the probe is equally likely to appear in either position, in the ABM task the probe always appears in the position where the nonthreatening word had been presented. This process trains the individual to attend more rapidly to the nonthreatening than the threatening word and so ameliorates any attentional bias to threat that the participant possesses.
To date, findings on the effectiveness of ABM procedures in reducing clinical and nonclinical anxiety have been mixed, with different types of methodologies reporting different rates of success (Mogg, Waters,
& Bradley, 2017). More adequately powered randomised controlled trials (RCTs) are required
with a broader range of relevant measures of anxiety and treatment satisfaction before we are
able to conclude that ABM interventions are an effective and long‐lasting means of reducing
anxiety.
Dispositional characteristics of worriers
In addition to these specific psychological models of pathological worrying, a good deal is known about the other psychological features that chronic worriers possess. For
example, worriers are intolerant of uncertainty (Ladouceur, Talbot, & Dugas, 1997; Birrell,
Meares, Wilkinson, & Freeston, 2011), are high on perfectionism (Pratt, Tallis, & Eysenck, 1997), and
have feelings of responsibility for negative outcomes (Startup & Davey, 2003; Wells & Papageorgiou,
1998). All of these are characteristics that will drive them to attempt to think through and resolve problematic issues. However, worriers also have poor problem‐solving confidence (Davey,
1994a) and couch their worries in ways that reflect personal inadequacies and insecurities (Davey &
Levy, 1998), and this contrasting combination of characteristics appears to drive the individual to try to
resolve problems, but the process is thwarted by their personal doubt about their own ability to solve
them successfully (Davey, 1994b).
A further characteristic of chronic worriers is that they often experience negative mood (negatively
experienced emotional states such as anxiety, sadness, anger, pain, tiredness), and this negative mood
can facilitate their worrying in a number of ways. First, negative mood promotes a more systematic, deliberate, and effortful information‐processing style which facilitates detailed worrying (Dash, Meeten, & Davey, 2013). It also raises performance standards, which means the worrier will be less satisfied with the solutions their worrying generates and so will want to continue worrying (Scott & Cervone, 2002). Second, worriers often judge the acceptability of their worrying by using their
concurrent mood as information about whether their worrying has been successful (known as the
mood‐as‐input hypothesis). Because they are usually in a negative mood, they interpret this
negativity as though they are not satisfied with their worrying so they should therefore continue to worry
(Meeten & Davey, 2011). In these different ways the worrier's negative mood contributes to making the
worry bout longer, and this may be one factor that makes worrying seem uncontrollable in those with a
diagnosis of GAD.
mood‐as‐input hypothesis A hypothesis claiming that people use their concurrent mood as
information about whether they have successfully completed a task or not.
Pharmacological treatments
Because GAD involves chronic daily anxiety and emotional discomfort, we might expect that anxiolytics
—such as the benzodiazepines—would be the first line of treatment for sufferers. However, at least 50%
of GAD sufferers receive initial treatment with antidepressants such as SSRIs or SNRIs on the basis of
their proven effectiveness in treating the symptoms of GAD in clinical trials, whereas less than 35% are
treated with benzodiazepines (Berger et al., 2011). In support of the use of antidepressants in the
treatment of GAD, a recent large‐scale meta‐analysis of RCTs for drug treatments of GAD indicated
that SSRIs and SNRIs are relatively well tolerated and do reduce symptoms of GAD (Slee et al., 2019).
Treatment with antidepressants is justified in many cases because GAD is regularly comorbid with
depression and SSRIs tend to be better tolerated than benzodiazepines. However, the long‐term
effectiveness of pharmacological treatments for GAD is questionable, given that at least one in four GAD sufferers do not respond to the most effective medications (Baldwin, Waldman, & Allgulander,
2011).
Psychological treatments
Psychological treatments for GAD have developed using both behavioural and cognitive methodologies.
For example, stimulus control treatment was originally developed out of behaviour therapy principles for
dealing with worry behaviours, while acceptance and commitment therapy (ACT, see Chapter 4,
Section 4.1.1) and a number of effective variants of CBT are now available for dealing with the
pathological worrying found in GAD (e.g., cognitive restructuring and metacognitive therapy).
stimulus control treatment An early behavioural intervention for worry in GAD which
adopted the principle of stimulus control, based on the conditioning principle that the
environments in which behaviours are enacted come to control their future occurrence and can
act to elicit those behaviours.
Stimulus control treatment is an effective intervention for GAD that reduces the frequency of worry by controlling the range of contexts in which the activity occurs (see Section 6.4.4).
There are four basic instructions underpinning this procedure:
1. Learn to identify worrisome thoughts and other thoughts that are unnecessary or
unpleasant. Distinguish these from necessary or pleasant thoughts related to the present
moment.
2. Establish a half‐hour worry period to take place at the same time and in the same location
each day.
3. When you catch yourself worrying, postpone the worry to the worry period and replace it
with attending to present‐moment experience.
4. Make use of the half‐hour worry period to worry about your concerns and to engage in
problem‐solving to eliminate those concerns.
Self‐monitoring involves making the client aware of their fixed patterns of behaviour and the
triggers that may precipitate worry. These triggers are often thoughts about future events that have very
low probabilities of happening (e.g., the accidental death of a loved one while driving to work), and the
client's attention is drawn to the fact that these are cognitively constructed rather than real events.
Relaxation training is an obvious way of dealing with the chronic stress experienced by GAD
sufferers. The specific technique of progressive muscular relaxation is often used (Bernstein, Borkovec,
& Hazlett‐Stevens, 2000), and relaxation is found to be as effective as some forms of cognitive therapy
(Arntz, 2003). Cognitive restructuring methods are used to challenge the biases that GAD sufferers
hold about how frequently bad events might happen (Beck, Emery, & Greenberg, 1985) and to generate
thoughts that are more accurate (Borkovec, 2005). One way of doing this is by using an outcome diary
in which the client writes down on a daily basis their worries and how likely they think the focus of their
worries will actually happen. Clients can then compare their own inflated estimate of the likelihood of
the event with subsequent reality (Borkovec, Hazlett‐Stevens, & Diaz, 1999).
Self‐monitoring A form of clinical observation that involves asking clients to observe and
record their own behaviour, to note when certain behaviours or thoughts occur, and in what
contexts they occur.
Cognitive restructuring Methods used to challenge the biases that a client might hold about
how frequently bad events might happen and to generate thoughts that are more accurate.
Other types of cognitive restructuring involve the challenging and replacement of dysfunctional
metacognitive beliefs about worrying—known as metacognitive therapy (Wells, 1999, 2010)—or the
belief held by pathological worriers that uncertainty has to be resolved by thinking through every
possible scenario (Dugas et al., 2003). Finally, behavioural rehearsal involves either the actual or
imagined rehearsal of adaptive coping responses that need to be deployed when a worry trigger is
encountered. These coping strategies may involve the deployment of relaxation exercises or pleasant
distracting activities designed to avoid worry (Butler, Fennell, Robson, & Gelder, 1991). CBT for GAD
has been shown to be effective with or without the use of pharmacological treatments (Lang, 2004), has
long‐term effectiveness for a substantial proportion of clients (Durham, Chambers, MacDonald, Power,
& Major, 2003), and can also help to alleviate comorbid depression (Cuijpers et al., 2014). However,
recent meta‐analyses suggest that while CBT treatments for GAD have improved over the past 15–20
years, there is still a significant percentage of sufferers who fail to recover fully following cognitive
treatments for GAD (Hanrahan, Jones, Field, & Davey, 2013). This is a testament to the pervasiveness of GAD, a condition that can last for many years in a large percentage of sufferers despite treatment with
either medications or psychological therapy (see my blog ‘The Lost 40%’,
https://www.psychologytoday.com/gb/blog/why‐we‐worry/201211/the‐lost‐40).
behavioural rehearsal A coping strategy that involves either the actual or imagined
rehearsal of adaptive coping responses that need to be deployed when a worry trigger is
encountered.
SELF‐TEST QUESTIONS
What is the cardinal diagnostic feature of GAD?
What are the features of worry in GAD that make it a distressing experience for the
sufferer?
How do information processing biases and cognitive factors contribute to the acquisition,
maintenance and experience of anxiety in GAD?
How do psychological treatments of GAD attempt to manage the activity of worrying?
SECTION SUMMARY
Obsessions Intrusive and recurring thoughts that an individual finds disturbing and
uncontrollable.
Luke was 21 years old and attending university, living a considerable distance from his family
with whom he had always been close. It was 2 months before his final year exams and his alarm
went off at 8:30 a.m. Turning to switch off his alarm, Luke was immediately overwhelmed by
the feeling of dread that had been with him constantly since the beginning of term.
He quickly got out of bed and went to the bathroom. Luke washed his hands and brushed his
teeth before washing his hands twice more. Stepping into the shower Luke scrubbed himself all
over with shower gel, and washed his hands several more times in between cleaning the rest of
his body. After rinsing his hair through, Luke washed his hands again and got out of the
shower; by now it was 9:30.
Luke began to dry his hair but he felt that his hands were still dirty, though by now they were
red raw. He washed them again, rapidly rubbing the backs of his hands and cleaning repeatedly
under his nails. They still felt dirty. Telling himself not to be stupid, Luke moved to the door of
the bathroom but felt a strong urge to go back and clean his hands once more. The feeling was
too strong to resist so he quickly rinsed his hands again, dousing them in sanitiser gel, and
rushed out of the bathroom, knowing he would be late for his lecture.
Before he could leave the house, Luke checked that the back door was locked and all his
housemates’ windows were securely closed, then he went back to the bathroom to wash his
hands twice more. Grabbing a clean towel from the wash, Luke carefully wrapped up his hand
so he could open the front door without touching the dirty handle. Once the door was open,
Luke ran back inside to check that he had remembered to secure the windows in all his friends’
rooms and to close all the doors.
Pulling the front door shut with his foot, Luke tested it was firmly shut and then ran towards the
bus stop. It was now 10:15 and he was already 15 minutes late. A bus was arriving just as he
reached the stop, hot and out of breath. Sitting on the bus, Luke felt the sweat on the palms of
his hands and wiped them clean on a tissue, repeatedly cleansing them with the sanitiser gel he
always carried with him.
Halfway through the journey, Luke began to panic that he hadn’t locked up the house and
imagined coming home to find he had been burgled and all his housemates’ belongings had
been ransacked. Luke got off at the next stop and took another bus back home, only to find that
the doors and windows were locked. Cursing himself for his irrationality, Luke turned to leave
the house once more but, looking at his watch, saw it was now 11:30 and his lecture was almost
over.
Later on, when Luke was diagnosed with obsessive compulsive disorder, 6 months after failing
his exams, he felt a huge sense of relief and was pleased that he could now give his friends and
family a logical explanation for his behaviour. Luke is now receiving treatment for his OCD
and, when he explained his condition to the university, they agreed he could return and re‐sit
his exams.
Clinical Commentary
This example shows how obsessions and compulsions in OCD are often compelling and difficult for the
sufferer to resist—even when the individual is aware that these thoughts and actions are ‘stupid’ or
irrational. Luke’s compulsions are fuelled by the ‘feelings of dread’ that he experiences most mornings
when he wakes up, and this provides the highly anxious state under which compulsions (such as
compulsive washing) are performed. ‘Doubting’ is also a common feature of OCD, and Luke experiences
this on his way to university in the form of doubting whether the doors and windows are locked. The
high levels of inflated responsibility usually possessed by OCD sufferers mean that Luke is driven to
continually check that his doubts are unfounded.
OCD onset is usually gradual, with symptoms frequently beginning to manifest in early adolescence or early adulthood following a stressful event such as pregnancy, childbirth, or relationship or work problems
(Kringlen, 1970). OCD symptoms are also a common way for anxiety to manifest itself in childhood,
and this is discussed further in Chapter 16. Lifetime prevalence of OCD is around 2.5%, with a 12‐month prevalence rate of around 1.2%, and the disorder affects women marginally more frequently than men
(Ruscio, Stein, Chiu, & Kessler, 2010; Adam, Meinlschmidt, Gloster, & Lieb, 2012).
Risk factors for OCD in adulthood include childhood isolation and poor peer relationships during
childhood, negative emotionality in adolescence (this is a measure of interpersonal alienation, irritable‐
aggressive attitudes, and reactivity to stress), a history of childhood physical and sexual abuse (those who
reported physical abuse had a specific risk for OCD), experiencing problems at birth such as
haemorrhaging or respiratory distress, poor motor skills, and lower IQ measures during childhood
(Grisham et al., 2011). However, it must be pointed out that many of these risk factors are also risk
factors for other anxiety‐based problems, so they are not necessarily specific to OCD. But there are also
some cultural and ethnic practices that seem to increase the risk of developing OCD symptoms
specifically. In particular, religiosity is a known risk factor for OCD because many religions define
taboos that explicitly prohibit certain behaviours or thoughts and they often ritualise many basic aspects
of daily life (Huppert, Siev, & Kushner, 2007). Some of the ways in which these factors can lead to
obsessions and compulsions in anxious individuals are described in Case History 6.3.
TABLE 6.11 Summary: The Main DSM‐5 Diagnostic Criteria for Obsessive‐Compulsive Disorder
Presence of obsessions such as repeated and unwanted thoughts, urges, or images that the
individual tries to ignore or suppress, and/or
Presence of compulsions where the individual feels compelled to repeat certain behaviours or
mental activities
The individual believes that the behaviours will prevent a catastrophic event but these beliefs have
no realistic connections to the imagined event or are markedly excessive
Obsessions and compulsions consume at least 1 hour per day and cause difficulty in performing
other functions
Symptoms cannot be explained by the effects of other mental or medical disorders, drug abuse, or
medication
hoarding disorder Difficulty discarding or parting with possessions to the point where the
individual’s living area is severely congested by clutter.
trichotillomania Hair-pulling disorder in which the individual compulsively pulls out their
own hair resulting in significant hair loss.
skin-picking disorder Recurrent picking of the skin that results in skin lesions.
Ana's history of self‐doubt dated back to age 7, when she first recalled being uncertain about
adhering to many of the religious rituals normally associated with her community's
cultural/religious beliefs. For example, many Orthodox Jews observe a waiting period of 6 hours
between eating meat and dairy foods. Ana would carefully count the hours between eating a
meat meal and a dairy meal, but she would later be beset with doubts about whether or not she
had waited the religiously allowed number of hours. Such self‐doubt extended into other
religiously defined regulations, but for the most part she was able to refrain from any ritualistic
behaviours. At the age of 18, Ana attended a girls' school in Israel and embarked on a course of
chronic checking rituals. Thus, if she stopped at restaurants that served nonkosher food, she
compulsively washed her hands repeatedly because she feared that she might have
unintentionally touched some forbidden food, utensils, table, or countertop. She began to skip
meals because she was afraid that she might contaminate her own food with nonkosher food
products unknowingly carried on her hands. Other symptoms included habitually rechecking
labels of food products to ensure that they were kosher.
Ana met her husband‐to‐be in an arranged meeting. Two months after the marriage, however,
her obsessions and compulsions returned and began to trouble her more than ever. She was
unable to prepare any meals because of increasing self‐doubt about contaminating utensils
meant to be used for meat dishes with those reserved for dairy foods, and vice versa.
After marriage, Orthodox Jewish men are prohibited from touching their wives during the time
of menstruation or for 7 days thereafter. According to stipulated ritual, an Orthodox Jewish wife
is responsible for ensuring that she is no longer exhibiting vaginal bleeding by swabbing herself
carefully with a linen cloth for each of the 7 days following the overt cessation of the menstrual
flow. It is only then, after a ritual bath (the Mikvah), that she and her husband are allowed to
physically touch one another. Faced with this responsibility, Ana obsessed about whether there
was a tinge of pink on her linen cloths. She checked the linen cloths repetitively and was unable
to decide definitively that the menstrual flow had ceased. Ultimately, she consulted with her
rabbi, who agreed to check the linen on a monthly basis and make the decision about whether
or not she was free of blood.
Biological factors
As with other anxiety disorders, there is evidence of an inherited component to OCD. Large‐scale
community twin studies suggest that the heritability estimate for OCD is ‘moderate’ (Monzani, Rijsdijk,
Harris, & Mataix‐Cols, 2014)—and it is well known that OCD tends to run in families, which may
reflect this heritable component (Lenane et al., 1990; Hanna et al., 2002). More recent genetic linkage
studies have also begun to identify some of the candidate genes for the transmission of the inherited
component to OCD (e.g., Wang et al., 2008), and serotonin neurotransmitter genes have been an
important focus in OCD candidate gene studies (Sinopoli, Burton, Kronenberg, & Arnold, 2017)—
largely because SSRIs are often effective in alleviating OCD symptoms, suggesting that serotonin
neurotransmitter abnormalities may be influencing OCD symptoms (Nicolini, 2010).
However, OCD is a heterogeneous disorder with considerable variation in the nature of its symptoms
and severity across sufferers, suggesting that the causes of the condition may themselves be varied. For
example, there is evidence that, in some sufferers, small but significant deficits in brain functioning may
be associated with generating repetitive and compulsive responding. These cognitive deficits may not
necessarily give rise to OCD symptoms on their own, but in combination with life experiences that give
rise to anxiety (e.g., periods of stress or increased responsibility) they may help to establish compulsions
and rituals as a way of dealing with that anxiety. They may also make it difficult to inhibit or control the
obsessive aversive thoughts that often generate anxiety during OCD. Cognitive problems identified in those with a diagnosis of OCD include:
Difficulties in shifting attention, which will make it difficult to shake off obsessive thoughts or to move away from ritualised trains of behaviour
Difficulties in suppressing tendencies to behave in old, no‐longer relevant ways, a deficit that would help to maintain dysfunctional, lengthy rituals once they have been established
Problems with working memory updating, meaning that more time is required to collect sufficient information to inform actions and decision‐making, leading to uncertainty and doubting
Poor working memory capacity, making it hard to keep in mind relevant information for decision‐making, resulting in doubting caused by reduced memory confidence
Deficits in what are known as executive functioning skills, such as strategic planning, organised searching, goal‐oriented behaviour, and the ability to suppress responding when required; these deficits give rise to impaired cognitive flexibility and poor abstraction skills, and may lead to the development of very explicit and well‐defined sequences of behaviour that the individual finds difficult to inhibit (Aydin, Koybasi, Sert, Mete, & Oyekcin, 2014).
Interestingly, one group of individuals who often possess a number of these cognitive deficits are those with a diagnosis of autism spectrum disorder (ASD) (see Chapter 17), a disorder whose diagnostic features include repetitive behaviours as well as social and communication deficits. Studies suggest that around 37% of children with ASD also suffer diagnosable OCD
symptoms (de Bruin, Ferdinand, Meester, Nijs, & Verheij, 2007; Leyfer et al., 2006), and there are
moderate correlations between measures of autism and measures of OCD (Wakabayashi, Baron‐
Cohen, & Ashwin, 2012). However, there are differences between the types of symptoms exhibited by
individuals with autism and those with a single diagnosis of OCD: individuals with ASD tend to
exhibit higher levels of touching/tapping/rubbing, repetitive self‐injury, hoarding, ordering, and
repeating compulsions (Ruta, Mugno, D'Arrigo, Vitiello, & Mazzone, 2010). This relationship between
symptoms in OCD and ASD has led some researchers to suggest that there may be a symptoms overlap
between ASD and OCD (Wakabayashi, Baron‐Cohen, & Ashwin, 2012) or that ASD may be a clinically
important feature of many people diagnosed with OCD (Bejerot, 2007). Interestingly, individuals with ASD and individuals with OCD both tend to exhibit executive function deficits and a preference for local rather than
global information processing, with the possibility that these characteristics may represent a risk factor
for OCD symptoms in both groups (Savage, Baer, & Keuthen, 1999).
Finally, despite difficulties in interpreting many of the studies that have linked cognitive and brain
deficits with OCD, there is one brain area that has been regularly associated with OCD. This is the
basal ganglia. The basal ganglia is made up of a series of structures deep in the brain, including the
striatum, the globus pallidus, the substantia nigra, the nucleus accumbens, and the subthalamic nucleus.
When all the structures in the basal ganglia are functioning properly, they are believed to contribute to
voluntary planning of actions and habit formation. Attention was first drawn to the basal ganglia when
it was found that individuals with conditions known to affect the basal ganglia (such as Huntington's
disease and Parkinson's disease) were at significant risk for OCD. In addition, lesions to the area of the
basal ganglia called the striatum result in behaviours similar to those seen in OCD sufferers, such as
stereotyped activities with highly patterned compulsive and obsessive behaviour (Laplane et al., 1989;
Gunaydin & Kreitzer, 2016). However, rather than this being the result of a localised abnormality in the
basal ganglia, it is possible that it is the cortico‐basal ganglia neurocircuitry itself that is dysfunctional.
This pathway links the basal ganglia with cortical brain regions that are
involved in decision‐making and may be responsible for developing smooth sequences of behaviour. A
problem in this pathway would therefore lead to disjointed sequences of behaviours very much like the
overcontrolled and ritualised chains of behaviour seen in many OCD sufferers.
Psychological factors
Memory deficits
‘Doubting’ is a central feature of OCD, especially the compulsions associated with the disorder. As a
result, it has been suggested that OCD may be characterised by memory deficits that give rise to the
doubting that, for example, doors have been locked or hands have been washed properly. Memory
deficit models take a number of different forms. It has been suggested that OCD sufferers may have:
A general memory deficit (Sher, Mann, & Frost, 1984)
Less confidence in the validity of their memories (Watts, 1993)
A deficit in the ability to distinguish between the memory of real and imagined actions (Brown,
Kosslyn, Breiter, Baer, & Jenike, 1994).
However, evidence supporting the role of these memory deficits in OCD is equivocal (Hermans et al.,
2008), and more recent theoretical views suggest that the deficits that give rise to OCD “doubting” may
not be problems with memory or working memory per se, but may instead result from a deficit in
executive functioning in general (Harkin & Kessler, 2011). For example, if OCD sufferers spend a lot of time
checking both relevant and irrelevant things on a daily basis, this will overload executive processes and
result in poor encoding of information and poor attention to relevant information, which in turn will
cause memory deficits. This is consistent with another body of evidence which suggests that lack of
confidence in recall may be a consequence of compulsive checking or washing rather than a cause of it
(van den Hout & Kindt, 2003; Harkin & Kessler, 2011; Radomsky & Alcolado, 2010). In effect, the
more someone checks, the less confident they will be about what they have checked (see Hezel &
McNally, 2016, for a brief review of memory biases in OCD).
inflated responsibility The belief that one has power to bring about or prevent subjectively
crucial negative outcomes. These outcomes are perceived as essential to prevent. They may be
actual: that is, having consequences in the real world, and/or at a moral level.
mental contamination Feelings of dirtiness can be provoked without any physical contact
with a contaminant. Mental contamination can be caused by images, thoughts, and memories
and may be associated with compulsive washing and even betrayal experiences.
Inflated responsibility
Everyone experiences uncontrollable intrusive thoughts on almost a daily basis (Rachman & de Silva,
1978). However, what differentiates these normal intrusive thoughts from the distressing and obsessive
thoughts experienced in OCD is the meaning attached to them by OCD sufferers. Individuals
diagnosed with OCD appear to have developed a series of dysfunctional beliefs about their obsessional
thoughts. For example:
1. Because they had the thought, they feel responsible for its content—so, if a sufferer thinks of
murdering their child, they believe they may be going crazy and will murder their child (Salkovskis,
1985),
2. Sufferers appraise obsessional thoughts as having potentially harmful consequences and this causes
intense anxiety and triggers compulsive actions designed to eradicate the thought or to make sure
the perceived harm does not occur (e.g., compulsive thought suppression strategies such as
counting backwards or checking and re‐checking locks and windows to ensure that the home is
safe),
3. Individuals with OCD tend to have inflated conceptions of their own responsibility for preventing
harm, and this inflated responsibility appears to be an important vulnerability factor in developing
OCD (Salkovskis, 1985; Rachman, 1998).
Salkovskis et al. (1996) have defined inflated responsibility as ‘the belief that one has power which is
pivotal to bring about or prevent subjectively crucial negative outcomes. These outcomes are perceived
as essential to prevent. They may be actual, that is having consequences in the real world, and/or at a
moral level’. There is considerable evidence that inflated responsibility is a characteristic that is a central
causal feature of OCD generally (Salkovskis, Shafran, Rachman, & Freeston, 1999; Salkovskis et al.,
2000) and compulsive checking specifically (Rachman, 2002; Bouchard, Rheaume, & Ladouceur, 1999).
Experimental studies that have manipulated inflated responsibility have shown that it causes increases in
perseverative activities such as compulsive checking (Lopatka & Rachman, 1995; Bouchard, Rheaume,
& Ladouceur, 1999).
Thought‐action fusion
It is a common clinical experience that many individuals with OCD believe that their unpleasant,
unacceptable thoughts can influence events in the world. For example, if they have an intrusive thought
about an airplane crashing, they believe this may cause an airplane to crash. If they have a thought
about becoming ill, they believe this makes it more likely that they will become ill. This is known as
thought‐action fusion (TAF) (Shafran & Rachman, 2004), and it is a belief that simply having thoughts
can in some way directly affect what happens in the world. If the supposed consequences of thoughts
are aversive or negative, then this will cause the sufferer to try to suppress these thoughts (see section on
thought suppression), and it will generate considerable distress and anxiety. Interestingly, TAF will be
related to the degree to which an individual assigns importance to thoughts, and this is something that
has been identified as making religiosity a risk factor for OCD (see Section on Diagnosis and
Prevalence). It is not clear why some individuals hold these significant views about thoughts, but they
may have developed more general metacognitive beliefs that ‘thoughts are dangerous’ or ‘thoughts must
be controlled’ which give rise to obsessive thoughts and to the development of compulsions (Hezel,
Stewart, Riemann, & McNally, 2019), and these negative beliefs about thoughts may even be precursors
to more complex beliefs related to inflated responsibility for preventing harm (Amir, Freshman, Ramsey,
Neary, & Brigidi, 2001).
Mental contamination
While there are a significant number of OCD sufferers who fear actual contamination (e.g., by contact
with dirt, germs, etc.), there is also another group for whom comparable feelings of dirtiness can be
provoked without any physical contact with a contaminant, and this has come to be called mental
contamination (Rachman, 2004, 2006). Mental contamination can be caused by images, thoughts, and
memories, and tends to be caused by a violation of some kind by another person, e.g., degradation,
betrayal, emotional abuse, physical abuse, or humiliation which give rise to feelings of dirtiness or
pollution and in many cases may be associated with compulsive washing or cleansing (Zysk, Shafran, &
Williams, 2018; Rachman, 2010). In addition to these feelings of contamination, individuals also
experience anxiety, disgust, shame, anger, guilt, and sadness (Rachman, Radomsky, Elliot, & Zysk,
2012). While mental contamination seems to represent a very specific form of OCD contamination fear
that is caused by quite specific experiences, it is possible to ameliorate the symptoms using adaptations
of standard CBT interventions (Warnock‐Parkes, Salkovskis, & Rachman, 2012) (Focus Point 6.6).
Thought suppression
Because individuals with obsessive thoughts find these intrusions aversive and distressing, they may try
to actively suppress them (using either thought suppression or distraction techniques). However, there is
good evidence that actively suppressing an unwanted thought will actually cause it to occur more
frequently once the period of suppression or inhibition is over (known as a ‘rebound’ effect) (Figure 6.5),
and this may account to some degree for the fact that OCD sufferers experience significantly more
intrusions than nonclinical populations (Wenzlaff & Wegner, 2000). Wenzlaff, Wegner, and Klein (1991)
have also argued that suppressing an unpleasant thought induces a strong negative emotional state that
results in the suppressed thought becoming associated with that negative mood state. Whenever that
negative mood state occurs in the future, it is therefore more likely to elicit the unwanted and aversive
thought, and this may also contribute to the OCD sufferer experiencing regular, uncontrollable
intrusions (see Abbott & Norton, 2019, for a recent review of the thought suppression paradigm and its
relevance to mental health problems).
Summary
As we mentioned at the outset of this section on aetiology, many of these theories are designed to
address only specific features of OCD rather than represent universal explanations of the disorder. For
example, some theories try to explain why ‘doubting’ is a central feature of OCD (these include both
neurophysiological and memory deficit models), others address why intrusive thoughts become so
aversive and uncontrollable (e.g., inflated responsibility and thought suppression accounts), and yet
others try to explain why individuals with OCD show dysfunctional perseveration at activities such as
checking or washing (e.g., the mood‐as‐input model). Undoubtedly, a full account of OCD will contain
at least some, if not all, of these different elements of explanation. (Clinical psychologists Amita Jassi
and Gazal Jones talk to freelance journalist Jo Carlowe about OCD and its treatment in a podcast at
https://soundcloud.com/user‐664361280/in‐conversation‐ocd‐with‐dr‐amita‐jassi‐dr‐gazal‐jones).
6.6.4 The Treatment of Obsessive‐Compulsive Disorder
Arguably the most effective therapies for OCD are Exposure & Ritual Prevention
Therapies (ERP) (see Section 6.6.4). The first table gives examples of a graded exposure
regime for fear of contamination from germs and distressing thoughts about sexual abuse. The
second provides some examples of response prevention techniques.
Exposure & Ritual Prevention Therapies A means of treatment for
obsessive‐compulsive disorder (OCD) which involves graded exposure to the thoughts
that trigger distress, followed by the development of behaviours designed to prevent the
individual's compulsive rituals.
ERP is a highly flexible therapy that can be adapted to group, self‐help, inpatient, and outpatient
settings, and can be combined with family therapy, mindfulness (Strauss et al., 2018), cognitive therapy
(Rector, Richter, Katz, & Leybman, 2018), Acceptance and Commitment Therapy (ACT) (Twohig et al.,
2018), and computer‐guided interventions (Fischer, Himle, & Hanna, 1998; Grayson, 1999; Wetzel,
Bents, & Florin, 1999; Hand, 1998; Grunes, 1999; Nakagawa et al., 2000).
SELF‐TEST QUESTIONS
Can you describe what obsessions and compulsions are, and provide some examples of each?
How have biological theories attempted to explain the obsessions and compulsions found
in OCD?
How does the construct of ‘inflated responsibility’ help to explain how OCD is acquired
and maintained?
What are the similarities and differences between exposure & ritual prevention treatment
(ERP) and cognitive behaviour therapy (CBT) for OCD?
SECTION SUMMARY
‘It's been 8 months since my experience and I still deal with these feelings. I'm doing a lot better
but throughout the week, I can feel myself getting worse and worse until I breakdown. The
worst part is the irritability and rage I have inside of me. I don't know why I'm so mad at life
but the littlest things will set me off. I don't have the “flashbacks” anymore…just the feelings I
had when I was going through the ordeal. It's a very dark and depressing place and it's getting
harder to come out of it each time it happens. I almost feel blinded and out of control when I
get these attacks. It scares me to think of what I'm capable of doing. The worst part about this
is that I don't know what triggers these feelings. I can be fine all day and then my mood will
change for the worst. I generally feel very depressed and it's hard to deal with at times. Just
when I think I don't have to worry about it anymore, it hits ten times harder. I've tried just
about every remedy there is. I've seen four therapists and have been on three SSRIs…all of
which made me worse. I feel very discouraged with life. I don't know if this even has to do with
PTSD because I thought I was over it.’
From http://www.healthboards.com
Clinical Commentary:
This description is typical of many PTSD sufferers and highlights feelings of depression, lack of control
and anger. Some theories of PTSD (such as ‘Mental Defeat’) emphasise that those who develop PTSD
after a severe trauma tend to view themselves as a victim, process all information about the trauma
negatively, and view themselves as unable to act effectively. Such individuals believe they are unable to
influence their own fate and do not have the necessary skills to protect themselves from future trauma.
Ehlers and Clark (2000) suggest that such individuals only partially process their memory of the
trauma because of their perceived lack of control over it, and so they do not integrate that event fully into
their own autobiographical knowledge.
Once symptoms develop, PTSD is often a chronic condition that can last for years (Perkonigg et al.,
2005). It can also have significant social consequences such as marital problems, physical illness, sexual
dysfunction, substance abuse, stress‐related violence, suicidal thoughts, and self‐harm. The kinds of
traumatic events that precipitate PTSD are usually life threatening in their
severity. Studies suggest that PTSD symptoms are reported by up to 90% of rape victims (Chivers‐
Wilson, 2006), between 70 and 90% of torture victims (Moisander & Edston, 2003), over 50% of
prisoners of war (POWs) (Engdahl, Dikel, Eberly, & Blank, 1997), between 20 and 28% of earthquake
and flood survivors (Basoglu, Kilic, Salcioglu, & Livanou, 2004; Dai et al., 2016), and around 15% of
motor vehicle accident victims (Bryant & Harvey, 1998). DSM‐5 also emphasises that PTSD symptoms
can be acquired in cases where the stressor has not been life threatening to the sufferer (e.g., suffering
PTSD after the loss of a loved one—Breslau, Holmes, Carlson, & Pelton, 1998) or has involved simply
viewing stressful images of life‐threatening traumas (e.g., watching images of the 9/11 terrorist attacks
on TV—Piotrowski & Brannen, 2002), and this extension of the diagnostic criteria for PTSD has
generated controversy, either because it makes the symptoms of PTSD easier to fake in those who might
benefit financially from a diagnosis (Rosen, 2004) or because it confuses PTSD with merely
experiencing stress (McNally, 2003).
TABLE 6.12 Summary: DSM‐5 Criteria for the Diagnosis of PTSD
The individual has been exposed to or threatened with death, serious injury, or sexual violation:
by direct experience or by witnessing a traumatic event; upon learning about a violent or
accidental death of a close friend or family member; or by extreme or repeated exposure to the
effects of a traumatic event, such as emergency workers encountering human remains
Intrusive symptoms associated with the traumatic event will be experienced, such as disturbing
dreams or feeling that the event is recurring while awake; uncontrolled memories of the event;
extreme physical reactions or mental distress upon being reminded of the trauma
Individuals will avoid internal and/or external reminders of the trauma
At least two changes to mood or thought processes will occur, such as feelings of disconnection,
continual negative emotions and ongoing difficulty in experiencing positive emotions, extreme
and disproportionate negative expectations; reduced interest in activities, being unable to
remember certain aspects of the traumatic event
Changes to reactive behaviour will occur, and individuals will display two or more of the
following symptoms: recklessness, aggression, hypervigilance, inability to concentrate, difficulty
sleeping, an exaggerated startle response
Symptoms began or worsened after the traumatic event(s) and continued for at least 1 month,
causing significant difficulty in functioning
Symptoms cannot be explained by the effects of other mental or medical disorders, drug abuse, or
medication
Prevalence rates for PTSD vary considerably between countries, with prevalence in European countries
ranging between 0.56% and 6.6% in the general population (Wittchen et al., 2011)—findings which at
least in part may be explained by the different probabilities of experiencing trauma as a result of factors
such as the social or political conditions existing in different countries (Burri & Maercker, 2014).
Prevalence rates also differ for groups who are at risk of experiencing severe trauma. These include a
12–33% prevalence rate for civilians living in war zones (Farhood & Dimassi, 2012; Charlson et al.,
2012), 10% for rescue workers (Berger et al., 2012), and 13.2% for members of operational infantry in
the recent Iraq and Afghanistan conflicts (Kok, Herrell, Thomas, & Hoge, 2012). Around 50% of adults
experience at least one event in their lifetime that might qualify as a PTSD‐causing trauma (Ozer &
Weiss, 2004). Following such events, women are significantly more likely than men to develop PTSD (by
a factor of 2.4 : 1), and this is not explained simply by differences in the perceived threat to life from the
experience (Holbrook, Hoyt, Stein, & Sieber, 2002). Apart from gender differences in prevalence rates,
there is also some emerging evidence on the role that cultural variables play in PTSD. Ethnic groups
can differ quite significantly in the prevalence of PTSD—Caucasian disaster victims show lower
prevalence rates than Latinos or African Americans—and these differences cannot be entirely explained
simply by differences in the frequency of exposure to traumatic experiences (Perilla, Norris, & Lavizzo,
2002; Norris, Perilla, Ibanez, & Murphy, 2001). The fact that around 50% of the population will
experience at least one event in their life that could be classified as a PTSD‐relevant trauma means that
many people experience trauma but do not develop PTSD. It is understanding these individual
differences in susceptibility to PTSD that will give us some insight into the mechanisms that give rise to
PTSD.
TABLE 6.13 Summary: DSM‐5 Criteria for the Diagnosis of Acute Stress Disorder
The individual has been exposed to or threatened with death, serious injury, or sexual violation: by
direct experience or by witnessing a traumatic event; upon learning about a violent or accidental death
of a close friend or family member; or by extreme or repeated exposure to the effects of a traumatic
event. At least nine of the following symptoms will be displayed:
Recurrent, intrusive, and involuntary memories of the traumatic event
Repeated distressing dreams related to the traumatic event or feeling that the event is recurring
while awake
Extreme physical reactions or mental distress upon being reminded of the trauma
Numbness or detachment from others
Changes in the individual's sense of reality and an altered perspective of oneself or one's
surroundings
Difficulty in remembering aspects of the traumatic event
Avoidance of internal and/or external reminders of the trauma
Difficulties in sleeping
Hypervigilance
Irritability or aggression
Difficulty in concentrating
Exaggerated startle response
Symptoms may begin within 3 days to 1 month of the trauma taking place and will persist for at least
3 days or up to 1 month
Symptoms cannot be explained by the effects of other mental or medical disorders, drug abuse, or
medication
Symptoms cause difficulties in performing important functions and cause clinically significant distress
The symptoms of acute stress disorder are very similar to those of PTSD, but the duration is much
shorter (3 days to 1 month after trauma exposure). DSM‐5 criteria have been explicitly modified to
make ASD symptoms much more compatible with PTSD, and so ASD may well be a diagnostic
category that predicts subsequent PTSD—but that remains to be determined. There has been some
prior debate about whether ASD actually represents a psychological disorder as such, or whether it is a
normal short‐term psychological and physical reaction to severe trauma (Harvey & Bryant, 2002).
However, with the diagnostic criteria for ASD in DSM‐5 shifting significantly towards those of PTSD, it
remains to be seen whether fewer post‐trauma survivors will be diagnosed with ASD than under
DSM‐IV‐TR criteria.
acute stress disorder A short-term psychological and physical reaction to severe trauma.
Symptoms are very similar to those of PTSD, but the duration is much shorter (3 days to 1
month after trauma exposure).
Biological factors
Family and twin studies suggest that PTSD has a genetic element to it, and a heritability component in
the range of 30–50% has been estimated (Smoller, 2016). More recent molecular genetic research has
examined the ways in which traumatic experiences may impact gene expression through the process of
epigenetics (see Chapter 1, Section 1.3.1). Thus, certain inherited genes may only be expressed
following trauma, and this makes those who have inherited these genes and have experienced trauma
vulnerable to PTSD symptoms (Sheerin, Lind, Bountress, Nugent, & Amstadter, 2017).
One inherited factor that may make some individuals vulnerable to PTSD symptoms is the level of
corticosteroid hormones normally secreted from the adrenal glands following a stressful experience.
Those who go on to develop PTSD symptoms—especially intrusive vivid memories—have been found
to have significantly lower cortisol levels immediately after the traumatic experience than individuals
who do not develop PTSD (Delahanty, Raimonde, & Spoonster, 2000). This is important because high
levels of cortisol can interfere with the laying down of memories of trauma, so low levels of cortisol will
do the opposite and result in the overconsolidation of traumatic memories (Chou, La Marca, Steptoe, &
Brewin, 2014)—an overconsolidation that preserves the vivid detail of the experience over a long period
of time. Even more intriguing is the finding that low levels of cortisol in
pregnant mothers who experience trauma and develop PTSD can also be passed on to their unborn
children through the process of epigenetics (Yehuda, Teicher, Seckl, Grossman, & Morris, 2007). That is,
the trauma experience doesn't affect DNA sequences as such but does affect gene activity by switching
specific genes ‘on’ or ‘off ’, and some of this chemically altered activity is inherited from one generation
to another.
Other studies have identified the hippocampus as a brain area linked to PTSD. The hippocampus plays a
critical role in memories related to emotion and also plays a central role in regulating stress hormones
such as cortisol. Studies have indicated that this
region is significantly smaller in individuals who develop PTSD (Logue et al., 2018) and a smaller
hippocampus represents a real risk factor for the development of PTSD following a traumatic
experience (Gilbertson et al., 2002). The implications of this are that a smaller hippocampus may make
the regulation of stress hormones less efficient and may also make it more difficult to coordinate
emotional memories such that it can become difficult for the individual to locate emotional memories in
space, time, and context. As a result, the soldier who witnessed friends and comrades die on a hill in
Afghanistan may feel the same extreme emotions experienced during that event when seeing any hill—
even after returning home.
Vulnerability factors
As not everyone who experiences a life‐threatening trauma develops PTSD, there must be factors that
make some people more vulnerable than others. A number of factors have been identified that
characterise those individuals likely to develop PTSD after trauma, which include:
A tendency to take personal responsibility for the traumatic event and the misfortunes of others
involved in the event (Mikulincer & Solomon, 1988),
Developmental factors such as early separation from parents or an unstable family life during early
childhood (King et al., 1996)
Personality factors such as neuroticism, hypochondriasis, and paranoia (DiGangi et al., 2013)
An emotion‐focussed, avoidant or negative coping style (Constans et al., 2012)
A family history of PTSD (Foy et al., 1987),
Existing high levels of anxiety or a preexisting psychological disorder (Breslau et al., 1997).
Interestingly, lower IQ and lower cognitive ability generally increase the risk of PTSD (Vasterling et al.,
2002; Betts, Williams, Najman, Bor, & Alati, 2012), and high IQ is the best predictor of resistance to the
development of PTSD (Silva et al., 2000). This may be because there is a link between IQ level and the
development of coping strategies to deal with experienced trauma or stress. Other important predictors
of PTSD development are the experiences reported by trauma victims at the time of the trauma. These
include the reporting of dissociative symptoms at the time of the trauma (see below) and a belief that
one is about to die (McNally, 2003). These types of experiences may be important in that they may
relate to how the individual processes and stores information about the trauma at the time, and this is
important in some specific theories of PTSD symptoms.
Conditioning theory
Because there is always an identifiable traumatic experience in the history of PTSD, it is quite
reasonable to suppose that many of the symptoms of PTSD may be due to classical conditioning (see
Chapter 1, Section 1.3.2). That is, trauma (the UCS) becomes associated at the time of the trauma with
situational cues associated with the place and time of the trauma (the CS) (Keane, Zimering, & Caddell,
1985). When these cues (or similar cues) are encountered in the future, they elicit the arousal and fear
that was experienced during the trauma. For example, seeing a pile of bricks on the ground may elicit
strong arousal, fear, and startle responses for an earthquake survivor, because such cues had become
associated with the fear experienced during the traumatic earthquake experience. The conditioning
model would further argue that such conditioned fear responses do not extinguish because the sufferer
develops both cognitive and physical avoidance responses. These responses distract the sufferer from
fully processing such cues and therefore do not allow the associations between cues and trauma to extinguish. The
reduction in fear resulting from these avoidance responses reinforces those responses and maintains
PTSD symptoms. There is probably an element of classical conditioning in the development of PTSD,
largely because formerly neutral cues do come to elicit PTSD symptoms. There is also evidence that
individuals suffering PTSD will more readily develop CRs in laboratory‐based experiments than
nonsufferers (Orr et al., 2000). However, classical conditioning does not provide a full explanation of
PTSD. It does not explain why some individuals who experience trauma develop PTSD and others do
not, and it cannot easily explain the range of symptoms that are peculiar to PTSD and rarely found in
other anxiety disorders, such as reexperiencing symptoms, dissociative experiences, etc.
‘Mental defeat’
Ehlers and Clark (2000) have suggested that there is a specific psychological factor that is important in
making an individual vulnerable to PTSD. This is a specific frame of mind called ‘mental defeat’, in
which the individual sees themselves as a victim: they process all information about the trauma
negatively, and view themselves as unable to act effectively. This negative approach to the traumatic
event and its consequences simply adds to the distress, influences the way the individual recalls the
trauma, and may give rise to maladaptive behavioural and cognitive strategies that maintain the
disorder. In effect, these individuals believe they are unable to influence their own fate and do not have
the necessary skills to protect themselves from future trauma. Ehlers and Clark suggest that such
individuals only partially process their memory of the trauma because of their perceived lack of control
over it, and so they do not integrate that event fully into their own autobiographical knowledge. This
leads to symptoms such as reexperiencing the trauma in the present (outside of a temporal context),
difficulty in recalling events from the trauma, and dissociation between the experience of fear responses
and their meaning. The ‘mental defeat’ model is supported by evidence suggesting that PTSD sufferers
do indeed have negative views of the self and the world, including negative interpretations of the
trauma (Dunmore, Clark, & Ehlers, 1999), negative interpretations of PTSD symptoms (Clohessy &
Ehlers, 1999; Mayou, Bryant, & Ehlers, 2001), negative interpretations of the responses of others
(Dunmore, Clark, & Ehlers, 1999), and a belief that the trauma has permanently changed their life
(Dunmore, Clark, & Ehlers, 1999; Ehlers, Maercker, & Boos, 2000). Importantly, mental defeat has been
identified as a critical risk factor for PTSD after just a single traumatic experience or even after multiple
traumatisations (as experienced by Ugandan rebel war survivors) (Wilker et al., 2017), suggesting that it
is still a significant factor in the development of PTSD even after an individual has experienced multiple
traumas.
mental defeat A specific frame of mind in which the individual sees themselves as a victim.
This is a psychological factor that is important in making an individual vulnerable to PTSD.
emotional processing theory Theory that claims that severe traumatic experiences are of
such major significance to an individual that they lead to the formation of representations and
associations in memory that are quite different to those formed as a result of everyday
experience.
Once symptoms have developed, most psychological therapies either rely on some form of exposure
therapy (usually involving the client imagining events during their traumatic experience) in an attempt
to extinguish fear symptoms, or adopt other treatments that focus on the client's trauma memories or
their meanings, and international treatment guidelines have recommended trauma‐focused
psychological treatments as the first‐line treatment for PTSD (Ehlers et al., 2010). Therapies that possess
this exposure element include imaginal flooding, eye movement desensitisation and reprocessing
(EMDR), and cognitive restructuring, but there is an ever‐expanding set of emerging treatments for
PTSD (see Cukor, Spitalnick, Difede, Rizzo, & Rothbaum, 2009), which meta‐analyses suggest may all
be equally effective in treating PTSD symptoms (Benish, Imel, & Wampold, 2008), and which contain
additional elements that help to address self‐blame, reduce guilt, bolster effective problem‐solving
coping, and improve emotional regulation (e.g., Kulkani, Barrard, & Cloitre, 2014; Tran et al., 2016).
Psychological debriefing
Over the past 30 years or so there has been a growing belief amongst mental health professionals that
PTSD can be prevented by immediate and rapid debriefing of trauma victims within 24–72 hours after the
traumatic event (Caplan, 1964; Bisson, 2003). The exact form of the intervention can vary, with the
most widely used techniques referred to as crisis intervention or critical incident stress
management (CISM) (Everly, Flannery, & Mitchell, 2000). The purpose of these interventions is to
assure the participants that they are normal people who have experienced an abnormal event, to
encourage them to review what has happened to them, to express their feelings about the event, and to
discuss and review support and coping strategies in the immediate post‐trauma period. Psychological
debriefing has been used with survivors, victims, relatives, emergency care workers, and providers of
mental health care (Bisson, 2003). The scale of this type of intervention can be gauged by reactions to
the terrorist attacks on the World Trade Centre on 9/11, when more than 9,000 counsellors went to
New York to offer immediate aid to victims and families of the attack (McNally, Bryant, & Ehlers, 2003).
Critical incident stress debriefing includes a number of components, including:
Explanation of the purpose of the intervention
Asking participants to describe their experiences
Discussion of the participants' feelings about the event
Discussion of any trauma‐related symptoms the participant may be experiencing
Encouraging the participant to view their symptoms as normal reactions to trauma
Discussing the participant's needs for the future
PHOTO 6.4 The psychological impact of the devastating Asian tsunami of December 2004 is difficult to calculate.
Over the past 20 years it was felt that immediate counselling of victims was the best way to prevent the development of
PTSD. However, more recent research has suggested that such immediate interventions may not be helpful, and in many
cases may impede natural recovery.
As laudable as immediate professional help may seem in these circumstances, there is much criticism of
psychological debriefing and its value as an intervention for PTSD. First, it is not clear whether victims
will gain any benefit from being counselled by strangers and possibly ‘coerced’ into revealing thoughts
and memories that in the immediate wake of the trauma may be difficult to reveal. Second, many of
the survivors of severe trauma do not display symptoms of psychological disorders, nor will they
develop PTSD. Psychological debriefing techniques make little attempt to differentiate these survivors
from those who may genuinely need longer‐term guidance and treatment. Third, controlled
comparative studies that have attempted to evaluate the effects of psychological debriefing techniques
suggest there is little convincing evidence that debriefing reduces the incidence of PTSD—and indeed it
may in some cases impede natural recovery following trauma (Bisson, 2003; McNally, Bryant, & Ehlers,
2003). Most recent reviews of early psychological interventions for the prevention of PTSD suggest that
no psychological intervention can be recommended for routine use following traumatic events with
adults (Roberts, Kitchiner, Kenardy, & Bisson, 2009), or with children (National Institute for Health and
Care Excellence, 2018) (Photo 6.4).
Exposure therapies
Arguably the most effective form of treatment for PTSD is exposure therapy, in which the sufferer is
helped by the therapist to confront and experience events and stimuli relevant to their trauma and their
symptoms. Exposure‐based treatments are more effective than medications (Powers et al., 2010), and
have been found to result in therapeutic benefits that persist for a minimum of 5 years post‐therapy (Foa & McLean, 2016). In vivo exposure involves the direct confrontation of feared objects,
activities, or situations by a person under the guidance of the therapist. For example, a woman with
PTSD who fears the location where she was assaulted may be assisted by her therapist in going to that
location and directly confronting those fears (as long as it is safe to do so).
Exposure therapy typically requires between 8 and 15 90‐minute sessions, occurring once or twice a
week (Foa & McLean, 2016). The rationale behind exposure therapy is that (a) it will help to
extinguish associations between trauma cues and fear responses (Foa & Rothbaum, 1998), and (b) it will
help the individual to disconfirm any symptom‐maintaining dysfunctional beliefs that have developed
as a result of the trauma (e.g., ‘I can't handle any stress’) (Foa & Rauch, 2004). For the individual
suffering PTSD, exposure to their fear triggers is often a difficult step to take and exposure may even
make symptoms worse in the early stages of treatment (Keane, Gerardi, Quinn, & Litz, 1992). As a
result, up to a third of clients may drop out of treatment because of the need for them to re‐live their
experiences (Imel, Laska, Jakupcak, & Simpson, 2013). This being the case, exposure can be delivered in a number of different ways—especially in various imaginal forms. This can be achieved (a) by asking the
client to provide a detailed written narrative of their traumatic experiences (Resick & Schnicke, 1992),
(b) with the assistance of virtual reality technology using computer‐generated imagery (Peskin et al.,
2019), or (c) by simply asking the client to visualise feared, trauma‐related scenes for extended periods of
time (known as imaginal flooding) (Keane, Fairbank, Caddell, & Zimering, 1989). Such imaginal
treatments are usually then supplemented with subsequent in vivo exposure that would require graded
exposure to real trauma‐related cues.
A somewhat controversial form of exposure therapy for PTSD is known as Eye movement
desensitisation & reprocessing therapy (EMDR) (Shapiro, 1989, 1995). In this form of
treatment, the client is required to focus their attention on a traumatic image or memory while
simultaneously visually following the therapist's finger that is moving backwards and forwards in front of
their eyes. This continues until the client reports a significant decrease in anxiety to the image or
memory. The therapist then encourages the client to restructure the memory positively, by thinking
positive thoughts in relation to that image (e.g., ‘I can deal with this’). The rationale for this procedure is
that combining eye movements with attention to fearful images encourages rapid deconditioning and
restructuring of the feared image (Shapiro, 1995, 1999). There is evidence that EMDR is more effective
than no treatment, supportive listening and relaxation (McNally, 1999), but some studies have shown
that it has a higher relapse rate than CBT (Devilly & Spence, 1999). Nevertheless, recent reviews suggest
that EMDR is one of the most effective treatments for PTSD, despite its controversial status (Bradley,
Greene, Russ, Dutra, & Westen, 2005). Critics of EMDR argue that, although it does have some success
in treating the symptoms of PTSD, it is little more than just another form of exposure therapy.
However, experimental studies have demonstrated that the eye movement component of EMDR is essential for successful treatment, regardless of how the movements are achieved (e.g., up‐and‐down eye movements are as effective as side‐to‐side movements) (Lee & Cuijpers, 2013). What does appear to be
important is that the eye movement task—however achieved—should tax working memory and so
weaken traumatic memories (van den Hout & Engelhard, 2012).
Eye movement desensitisation & reprocessing therapy (EMDR) A form of exposure
therapy for PTSD in which clients are required to focus their attention on a traumatic image or
memory while simultaneously visually following the therapist’s finger moving backwards and
forwards before their eyes.
Cognitive restructuring
There are various forms of cognitive restructuring therapy for PTSD, but most attempt to help clients
do two things: evaluate and replace intrusive or negative automatic thoughts; and evaluate and change
dysfunctional beliefs about the world, themselves and their future that have developed as a result of the
trauma (Marks, Lovell, Noshirvani, Livanou, & Thrasher, 1998; Foa & Rothbaum, 1998). For example,
Foa & Rothbaum (1998) suggested that two basic dysfunctional beliefs mediate the development and
maintenance of PTSD. These are (a) ‘the world is a dangerous place’, and (b) ‘I am totally
incompetent’. Foa & Cahill (2001) argued that immediately after a severe trauma, all victims develop a
negative view of the world and themselves, but for most individuals these beliefs become disconfirmed
through daily experience. However, those who avoid trauma‐related thoughts will also avoid
disconfirming these extreme views and this will foster the development of chronic PTSD. While
exposure therapy alone may encourage experiences that disconfirm these dysfunctional beliefs, cognitive
therapists have proposed that procedures that directly attempt to alter PTSD‐related cognitions should
also be included in the treatment (Resick & Schnicke, 1992). Meta‐analyses that have compared cognitive restructuring therapies and exposure therapies do not find that one approach necessarily outperforms the other (Watkins, Sprang, & Rothbaum, 2018).
SELF‐TEST QUESTIONS
Can you describe the main symptoms of PTSD and how they may differ from the
symptoms found in other anxiety disorders?
What are the diagnostic differences between PTSD and acute stress disorder?
Can you list some of the important risk factors for PTSD, and describe how they might
contribute to the development of PTSD?
We discussed four main theories of the aetiology of PTSD (conditioning theory, emotional
processing theory, ‘mental defeat’, and dual representation theory). Can you describe the
main features of at least two of these and discuss their similarities and differences?
What are the main treatments for PTSD, and how have these been derived from theories
of the aetiology of PTSD?
CHAPTER OUTLINE
7.1 MAJOR DEPRESSION
7.2 BIPOLAR DISORDER
7.3 THE TREATMENT OF DEPRESSION
7.4 NONSUICIDAL SELF-INJURY (NSSI)
7.5 SUICIDE
7.6 DEPRESSION AND MOOD DISORDERS REVIEWED
LEARNING OUTCOMES
When you have completed this chapter, you should be able to:
1. Describe the characteristics and main diagnostic criteria of major depression and bipolar
disorder.
2. Compare and contrast at least two biological theories of the aetiology of depression and
mood disorders.
3. Compare and contrast at least two psychological theories of the aetiology of depression
and mood disorders.
4. Distinguish between biological and psychological theories of depression.
5. Describe and evaluate the role of cognitive factors in explaining the development of
depression.
6. Compare and contrast biological, behavioural, and cognitive therapies for depression and
mood disorders.
7. Summarise the main psychological characteristics of suicide and nonsuicidal self‐injury
(NSSI).
My name is Sally; I'm 36 and live with my partner and two young children in a small village. But about 6
months after my daughter Georgia was born I started to feel really low and was crying all the time. I stopped
eating and stopped going to the local parent and baby group or meeting up with my friends. I thought I wasn't a
good enough mum and would sit at home and dwell on all the reasons why I wasn't doing a good enough job.
Although my mood improved after a few months, it has recently become much worse and I am finding it difficult to
cope. Everyone else is coping well and I feel that I am completely useless.
Sally's Story
Introduction
In Sally's Story, her account of the experience of depression gives considerable insight into the various
disabling features of this psychological problem. These include the overwhelming feelings of sadness
and lethargy. Each day is as miserable as the next in an unrelenting cycle of emptiness, and Sally ends
up hardly ever experiencing pleasure or any positive emotion. Those suffering depression lack initiative
and often move and speak more slowly than nonsufferers (Sobin & Sackeim, 1997). They spend more
time alone and may spend long periods ruminating about their low mood and its causes. Episodes of
depression and low mood sometimes follow significant life events such as the birth of a child or may
follow a series of more day‐to‐day difficulties, but depression can also emerge for what appears to be no
good reason at all. Some people experience just a single episode of depression and recover relatively
quickly; others may endure chronic or recurrent depression that may cause problems for much of their
life. Beck (1967) described depression as a ‘paralysis of will’ in which individuals experience only
pessimism and hopelessness about their lives. This can often lead to suicidal thinking, and suicide often
feels like the only solution for many individuals suffering depression.
We all experience periods of depression in our lives, and we can often attribute these periods of
depression to specific events. Two types of events are particularly important in triggering periods of
depression: losses and failures. Experiences such as losing a job or the death of a loved one are likely to
trigger periods of sadness, lethargy, and rumination. Similarly, failures—such as failing an exam or
failing to persuade someone you like to date you—can also lead to periods of hopelessness and the
fostering of negative cognitions associated with pessimism and self‐doubt. Nevertheless, most of us can
shake off these feelings within a few days or weeks and get on productively with life. For others, however,
the symptoms of depression may linger and spread to all aspects of their lives—emotional, behavioural,
cognitive, and physical—and this can result in diagnosable bouts of depression that are debilitating
enough to prevent an individual from living a normal day‐to‐day life. In extreme cases, the symptoms of
depression can develop even without the occurrence of a precipitating life event, such as a loss or a
failure, and will often persist for much longer than would be expected following a loss such as a
bereavement.
Depression is the prominent emotion in mood disorders, but it can often be associated with its
opposite—namely mania. Mania is an emotion characterised by boundless, frenzied energy and
feelings of euphoria. As we shall see later, individuals who have a bipolar disorder frequently
oscillate between deep depression and frenetic mania.
Depression involves emotional, motivational, behavioural, physical, and cognitive symptoms. The
emotional experiences of depressed individuals are usually restricted to negative ones and these are often
described as ‘sad, hopeless, miserable, dejected, and discouraged’. Such individuals are often close to
tears and have frequent crying episodes. Only very rarely do depressed individuals report experiencing
pleasant or positive emotions, they often exhibit a loss of sense of humour and rarely display positive
facial expressions (Sloan, Strauss, & Wisner, 2001). Anxiety is also commonly experienced with
depression, which suggests that many sufferers may simultaneously experience a range of negative
emotions (Bakish, 1999) that may be reflective of a single underlying symptomatology. Depressed
individuals exhibit a range of motivational deficits, and these include a loss of interest in normal daily
activities or hobbies. They exhibit a lack of initiative and spontaneity, and frequently report ‘not caring
anymore’ and not getting pleasure from activities that they previously enjoyed. This lack of initiative
may manifest itself initially in social withdrawal (depressed individuals regularly report wanting to stay
where they are and to be left alone), and appetite and sexual desire can also be significantly reduced.
Depressed individuals exhibit a number of behavioural symptoms, including a slowness of speech and
behaviour generally (Joiner, 2002). They become physically inactive, stay in bed for long periods, and
reports of decreased energy, tiredness, and fatigue are common. Even the smallest of tasks seem to
require substantial physical exertion. Depression also has an embodied element to it. People who
experience depression also exhibit characteristic postures and movements that are an integral feature of
their depression experience, and the role of these embodiments is discussed more fully in Focus Point
7.1. Physical symptoms also include sleep disturbance such as middle insomnia (waking up during the
night and having difficulty getting back to sleep) and terminal insomnia (waking early and being unable
to return to sleep). In some cases, depression can be associated with oversleeping (hypersomnia), where
the individual indulges in increased daytime sleeping. Depressed individuals also report regular
headaches, indigestion, constipation, dizzy spells, and general pain (Fishbain, 2000).
Arguably the most disabling of the symptoms of depression are its cognitive features, and, as we shall see
later, many theories of depression view these cognitive symptoms as central and as factors that need to
be addressed in order to complete effective treatment. In particular, depressed individuals tend to have
developed extremely negative views of themselves, the world around them, and their own future (Beck,
1987; Strauss, 2019), and this generates pessimistic thinking where sufferers believe nothing can improve
their own lot. This in turn leads to a lack of initiative, with individuals reporting impaired ability to
think, concentrate or make decisions. This inability to affect the future also generates other problematic
beliefs, such as a sense of worthlessness, shame, and guilt. Because of this, many depressed individuals
develop the dysfunctional belief that others would be better off if they were dead, and this can often
lead to transient but recurrent suicidal thoughts (see Section 7.5).
There are two main types of clinical depression. The most common is major depression, and the
second is bipolar disorder. Major depression (sometimes known as unipolar depression) is one of
the most common of all the psychological problems and is characterised by relatively extended periods
of clinical depression that cause significant distress to the individual and impairment in social or
occupational functioning. Bipolar disorder is characterised by periods of mania that alternate with
periods of depression, and this leads individuals with bipolar disorder to describe their lives as an
‘emotional roller‐coaster’. Sufferers experience the extremes of these emotions in ways that cause
emotional discomfort and distress.
This chapter now discusses the diagnosis, prevalence, and aetiology of major depression, and then—in
Section 7.2—the diagnosis, prevalence, and aetiology of bipolar disorder.
major depressive episode Episode of major depression, defined by the presence of five or
more depressive symptoms during the same 2-week period, as stated by the DSM-5.
At least five of the following are present, including either depressed mood or loss of interest:
Depressed mood most of the time
Less interest or enjoyment of most activities
Significant weight change not associated with dieting
Insomnia or excessive sleep
Excessive increase or reduction in physical movement
Substantial fatigue or lack of energy
Feelings of worthlessness or excessive or inappropriate guilt
Lack of concentration or ability to think or make decisions
Recurrent thoughts of death and suicide or suicide attempt
The symptoms are not better accounted for by schizoaffective disorder or other mental disorder
or due to the effects of a substance or other medical condition
TABLE 7.2 Summary: DSM‐5 criteria for major depressive disorder, single episode & recurrent
Major Depressive Disorder
Presence of a single major depressive episode (not attributable to normal and expected reactions
to bereavement, etc.) without previous manic or hypomanic episode where symptoms are not
better accounted for by other disorders
The symptoms must cause clinically significant distress or impairment in social, occupational, or
other forms of functioning.
Chronic mood disturbances primarily characterised by depressive symptoms can also be diagnosed,
although these conditions must have been apparent for at least 2 years, and would normally not be
severe enough to disrupt normal social and occupational functioning and warrant a diagnosis of major
depression. The most significant of these is dysthymic disorder, in which the sufferer has
experienced at least 2 years of depressed mood for more days than not. Individuals diagnosed with this
disorder experience many of the behavioural and cognitive characteristics of major depression, but
these are less severe (meeting only two or more of the symptom criteria for major depression) (Table
7.3).
dysthymic disorder A form of depression in which the sufferer has experienced at least 2
years of depressed mood for more days than not.
TABLE 7.3 Summary: DSM‐5 criteria for dysthymic disorder
TABLE 7.4 Comorbidity of Major Depressive Disorder with Other DSM Disorders
After Kessler et al. (1996).
melatonin A hormone that acts to slow organisms down, making them sleepy and less
energetic.
Biological theories
Genetic factors
There is good evidence that depressive symptoms run in families, suggesting a possible inherited or genetic component to major depression: first‐degree relatives of major depression sufferers are around two to three times more likely to develop depressive symptoms than are individuals who are not first‐degree relatives of sufferers (Gershon et al., 1982; Maier et al., 1992). Heritability estimates based on twin studies can vary considerably, but meta‐analyses and genome studies generally estimate heritability to be between 30% and 40% (Sullivan, Neale, & Kendler, 2000; Lubke et al., 2012), with adoption studies suggesting that genetic factors and rearing experiences contribute to approximately equal extents (Kendler, Ohlsson, Sundquist, & Sundquist, 2017).
However, after 2 decades of genetic research the specific genes contributing to this inherited component
appear to be diverse—not least because major depression is a heterogeneous condition with a broad
range of symptoms. A recent genome‐wide meta‐analysis of depression identified 102 independent gene
variants and 269 genes associated with depression, including genes and gene pathways associated with
synaptic structure in the brain and factors associated with brain neurotransmitters (Howard et al., 2019).
Candidate genes that may be important include the serotonin transporter gene (SLC6A4) that can
enhance or terminate the action of the brain neurotransmitter serotonin (see next section on
Neurochemical Factors) (Clarke, Flint, Attwood, & Munafò, 2010; Levinson, 2006). However, there is
growing evidence that a gene such as the serotonin transporter gene may increase the risk of depression
only when it interacts with certain environmental risk factors, and risk factors that have been identified
include childhood maltreatment and serious interpersonal stressors (i.e., major events that primarily
affect the quality or quantity of one's interpersonal relationships) (Vrshek‐Schallhorn et al., 2014; Caspi
et al., 2003). So, just as with psychotic symptoms (see Chapter 8), there may be a diathesis–stress
relationship in depression between inherited genes and experienced stressors, such that severe
depression symptoms only develop when there is an interaction between a high‐risk inherited gene and
certain stressful life events.
Neurochemical factors
Depression and mood disorders have been shown to be reliably associated with abnormalities in the
levels of certain brain neurotransmitters, and three are particularly significant—namely the monoamines serotonin, norepinephrine, and dopamine (Delgado & Moreno, 2000). Major depression is often associated with low levels of these neurotransmitters. For example,
depression is associated with decreased serotonin transporter binding in the midbrain and amygdala
(Gryglewski, Lanzenberger, Kranz, & Cumming, 2014), and decreased 5‐HT1A receptor binding in
frontal, temporal, and limbic regions (Sargent et al., 2000), all suggesting that depression is associated
with some form of monoaminergic dysfunction. However, impaired monoamine pathway activity alone
is unlikely to be sufficient to cause depression, and studies that have experimentally manipulated
monoaminergic pathway activity have shown that this induces depression only in those people who either have a family history of depression or have suffered previous episodes of depression (Ruhé, Mason, & Schene, 2007).
Several lines of evidence pointed to the importance of monoamines such as serotonin and
norepinephrine. First, in the 1950s it was noticed that many medications for high blood pressure also
caused depression (Ayd, 1956), and this effect was found to be the result of such medications decreasing
brain serotonin levels. The 1950s also saw the development of drugs that significantly alleviated the
symptoms of depression. The main ones were tricyclic drugs (such as imipramine) and monoamine
oxidase (MAO) inhibitors (such as tranylcypromine). Both classes of drug have their effect by
increasing levels of both serotonin and norepinephrine in the brain. These findings led to the
development of neurochemical theories of depression that argued that depression was caused by either
low norepinephrine activity (Bunney & Davis, 1965) or low serotonin activity (Golden & Gilmore, 1990).
Because these neurotransmitters are necessary for the successful transmission of impulses between
neurones, their abnormally low levels in depressed individuals may account for the cognitive,
behavioural, and motivational deficits found in major depression. In addition to abnormalities in
serotonin and norepinephrine levels, it is also believed that low levels of the neurotransmitter dopamine
might be involved in major depression (Naranjo, Tremblay, & Busto, 2001). Dopamine is significantly
involved in the reward systems in the brain, and so depleted dopamine levels may be responsible for
deficits in this system in depression giving rise to a lack of motivation, initiative, and pleasure (Treadway
& Zald, 2011).
tricyclic drugs Drugs which block the reuptake of both serotonin and norepinephrine.
The medications that are often prescribed for people suffering major depression are assumed to have
their beneficial effects by preventing the reuptake of the neurotransmitters serotonin and norepinephrine by the presynaptic neurone. This results in higher levels of these neurotransmitters in the synapse,
and this facilitates the transmission of impulses to the postsynaptic neurone—thus facilitating brain
activity. More recently, the development of selective serotonin reuptake inhibitors (SSRIs) (such
as fluoxetine) and serotonin‐noradrenaline reuptake inhibitors (such as duloxetine) has allowed
researchers to assess the specific roles of serotonin and noradrenaline in depression, and researchers
now believe that serotonin levels in particular play a central role in major depression (Figure 7.2).
However, this picture is relatively simplistic, and more recent neurochemical theories of mood disorders
suggest that interactions between different neurotransmitters may be important. Some researchers
suggest that depression is associated more with an imbalance in neurotransmitters than with deficits in
specific neurotransmitters (Rampello, Nicoletti, & Nicoletti, 2000; Ressler & Nemeroff, 1999). Other
theorists have suggested that low levels of serotonin interact with levels of norepinephrine in rather
complex ways, such that combinations of low levels of both serotonin and norepinephrine produce
depression, but low levels of serotonin and high levels of norepinephrine result in mania (Mandell &
Knapp, 1979) (see also Chapter 4, Section 4.1.1 for a fuller discussion of how antidepressant
medications may have their effect on depression symptoms).
FIGURE 7.2 Neurones release the neurotransmitters serotonin and norepinephrine from their endings when they fire and
these help transmission between brain cells. Some of the neurotransmitter molecules are recaptured by the neurone using a re‐
uptake mechanism, and this can occur before they are received by the receptor neurone, thus weakening the transmission
between neurones. Both tricyclic drugs and SSRIs have their effect by blocking the reuptake of these neurotransmitters and so
ensure that neural transmission is more effective. Tricyclic drugs block the reuptake of both serotonin and norepinephrine,
while SSRIs selectively block the reuptake only of serotonin.
anterior cingulate cortex (ACC) The frontal part of the cingulate cortex resembling a
‘collar’ form around the corpus callosum, used for the relay of neural signals between the right
and left hemispheres of the brain.
Neuroendocrine factors
Depression is regularly associated with problems in the regulation of levels of cortisol, a hormone
that is secreted in times of stress. We mentioned earlier that the hippocampus is important in
adrenocorticotropic hormone secretion, and that depressed individuals frequently exhibit hippocampal
abnormalities (Mervaala et al., 2000). These hippocampal abnormalities are regularly linked with high
levels of cortisol (an adrenocortical hormone)—especially high cortisol levels at the time of wakening
(Vrshek‐Schallhorn et al., 2013)—and patients receiving chronic corticosteroid therapy for endocrine
problems have smaller hippocampal volumes and higher depression ratings than non‐patient controls
(Brown et al., 2004). The hypothalamic‐pituitary‐adrenocortical (HPA) network is the biological system that manages the body's reaction to stress and triggers the secretion of cortisol. It is the lack
of inhibitory control over cortisol secretion that is linked with depression, and around 80% of
individuals who are hospitalised because of their depressive symptoms show poor regulation of the HPA
network (Heuser, Yassouridis, & Holsboer, 1994) and are more likely to be prone to future bouts of
depression (Aubry et al., 2007). Furthermore, increased early life stress (a known risk factor for
depression) has been shown to cause hyperactivity of the HPA network that persists into adulthood
(Pariante & Lightman, 2008).
Depression has also been associated with abnormalities in the immune system and in particular with
increases in inflammation, and this may lead to symptoms of depression by inducing a tryptophan‐
metabolising enzyme that decreases the availability of serotonin in the brain (Cowen, 2015). The
relationship between inflammation and depression is of current interest to researchers and is discussed
more fully in Focus Point 7.3.
Psychological theories
Psychodynamic explanations
There are a number of different psychodynamic views of depression (see Blatt & Homann, 1992), but
the most well established is the psychoanalytic account pioneered by Freud (1917/1963) and Abraham
(1916/1960). This view argues that depression is a response to loss, and, in particular, a response to the
loss of a loved one such as a parent. According to psychoanalytic theory, such losses return the individual to the oral stage of development, and so depression has a functional role to play, in that it returns the person to a period in their life when they were dependent on others (their parents). During their depressed state, this regression to the oral stage allows the individual to become dependent on their relationships with others and so make use of the support they offer.
FOCUS POINT 7.3 DEPRESSION AND INFLAMMATION
One problem with this psychoanalytic interpretation is that not everyone who experiences depression
has lost a loved one, and this led Freud to propose the concept of symbolic loss in which other kinds
of losses within one's life (e.g., losing a job) are viewed as equivalent to losing a loved one. These losses
then cause the individual to regress to the oral stage of development and may trigger memories of
inadequate parental support during childhood. In addition, parental loss is no longer seen as a necessary
condition for the development of depression, and poor parenting is a more significant risk factor (Lara
& Klein, 1999). Support for this view comes from studies that have shown a relationship between risk for
depression in adulthood and having experienced a particular kind of parenting style known as
affectionless control (Garber & Flynn, 2001). This type of parenting is characterised by high levels
of overprotection combined with a lack of warmth and care.
symbolic loss A Freudian concept whereby other kinds of losses within one’s life (e.g. losing a
job) are viewed as equivalent to losing a loved one.
There is some empirical support for this psychoanalytic view of depression. For example, individuals
who report that their childhood needs were not adequately met by their parents are more likely to
become depressed after experiencing a loss (Goodman, 2002). Nevertheless, there are a number of
difficulties with the psychoanalytic view. First, much of the empirical evidence that is consistent with this
view is also consistent with many other theories of depression, so the evidence does not help to
differentiate between theoretical approaches. Second, many individuals who do experience parental loss
or poor parenting do not go on to develop depression. Psychoanalytic approaches do not clearly explain
why this is the case. Finally, because of the way that psychodynamic theories are formulated, many of
the key aspects of the theory are difficult to test. Psychodynamic concepts used to explain depression,
such as introjection, fixation at the oral stage of development, and symbolic loss, are all difficult to
operationalise and measure, and so to verify empirically. This difficulty is compounded by the Freudian
view that such processes operate at the unconscious level.
Behavioural theories
The most obvious characteristics of depressed individuals include a lack of motivation and initiative, a
considerably diminished behavioural repertoire, and a view of the future that lacks positive and fulfilling
experiences. Some theorists have suggested that these characteristics provide evidence that depression
results from a lack of appropriate reinforcement for positive and constructive behaviours (Lewinsohn,
1974). This leads to the extinction of existing behaviours, and to a ‘behavioural vacuum’ in which the
person becomes inactive and withdrawn. It is certainly the case that periods of depression follow life
‘losses’ such as bereavement, retirement, or redundancy, and each of these events represents the loss of
important sources of reward and reinforcement for social and occupational behaviours (see also Figure
7.7 showing how suicide rates increase during a financial recession). In support of this account, it has
been shown that depressed individuals report fewer rewards in their life than nondepressed individuals,
and introducing rewards into the lives of depressed individuals helps to elevate their mood and forms
the central feature of behavioural activation (BA) therapy, which encourages depressed individuals
to approach activities they may have been avoiding (Lewinsohn, Youngren, & Grosscup, 1979; Jacobson
et al., 1996; see Chapter 4, Section 4.1.1).
behavioural activation therapy (BA) A therapy for depression that attempts to increase
clients’ access to pleasant events and rewards and decrease their experience of aversive events
and consequences.
The fact that life ‘losses’ are likely to result in the reduction of reinforcing events for the depressed
individual also leads to a vicious cycle that can establish depression as a chronic condition. For example,
once a person becomes depressed, then their lack of initiative and withdrawal is unlikely to lead to the
development of other alternative sources of reinforcement. Unfortunately, the negative disposition of
the depressed individual is likely to be an active contributor to a lack of reinforcement—especially social
reinforcement. For example, depressed individuals are less skilled at interacting with others than
nondepressed individuals (Joiner, 2000; Segrin, 2000), and a chronically depressed individual will often
communicate negative attitudes, appear withdrawn and unresponsive, and tend to seek excessive
reassurances about themselves and their future. As a consequence, studies suggest that when interacting
with depressed individuals, nondepressed control participants exhibit less positive social behaviour, are
less verbal, and are less positive than when interacting with a nondepressed individual (Gotlib &
Robinson, 1982).
The frequent failure of depressed individuals to elicit reinforcing reactions from individuals with whom
they are communicating has led to interpersonal theories of depression. These theories argue that depression is
maintained by a cycle of excessive reassurance seeking from depressed individuals that is
subsequently rejected by family and friends because of the negative and repetitive way in which
depression leads the individual to talk about their problems (Joiner, 1995). Excessive reassurance seeking
is defined as ‘the relatively stable tendency to excessively and persistently seek reassurances from others
that one is lovable and worthy, regardless of whether such assurance has already been provided’ (Joiner,
Metalsky, Katz, & Beach, 1999, p. 270). The negative beliefs that depressed individuals hold about
themselves, their world, and their future lead them to doubt any reassurance they are given by friends
and family, and this continual doubting can have a negative impact on relationships with partners
(Fowler & Gasiorek, 2016). Excessive reassurance seeking in depressed individuals predicts future
depressive symptoms (Haeffel, Voelz, & Joiner, 2007; Joiner et al., 1999) and can be associated with a
motivation to obtain self‐confirming negative feedback, a further risk factor for depressive symptoms
and interpersonal rejection that creates a vicious cycle of rejection and depression (Davila et al., 2009)
and, in turn, can exacerbate the symptoms of depression (Evraire & Dozois, 2011).
However, we must be cautious about how we interpret this evidence as a causal factor in depression.
First, much of the research on the link between lack of reinforcement and depression has been
retrospective in nature, and it is quite reasonable to suppose that depressed individuals may
underestimate the extent of the actual rewards in their life. Second, we need to understand whether
excessive reassurance seeking and seeking negative feedback are dispositional factors that create a risk
for depressive symptoms or whether depressive symptoms themselves elicit these characteristics (Evraire
& Dozois, 2011).
negative schema A set of beliefs that tends individuals towards viewing the world and
themselves in a negative way.
Beck argued that the depressed individual's negative schema maintains three interrelated aspects of
negative thinking that he called the negative triad. In particular, depressed people hold negative
views of themselves (e.g., ‘I am unattractive’), negative views of their future (e.g., ‘I will never achieve
anything’), and of the world (e.g., ‘the world is a dangerous and unsupportive place’). This set of negative
beliefs eventually generates self‐fulfilling prophecies. That is, the depressed individual interprets events
negatively, fails to take the initiative, and then inevitably experiences failure (Figure 7.4). The negative
triad of beliefs leads to a number of systematic biases in thinking, including arbitrary inference,
selective abstraction, overgeneralisation, magnification and minimisation, personalisation, and all‐or‐
none thinking (Table 7.5).
negative triad A theory of depression in which depressed people hold negative views of
themselves (e.g. ‘I am unattractive’), of their future (e.g. ‘I will never achieve anything’) and of
the world (e.g. ‘The world is a dangerous and unsupportive place’).
FIGURE 7.4 Beck's negative schema in depression.
This figure shows how the negative biases in the thinking of depressed individuals lead to a vicious cycle in which depression becomes a
self‐fulfilling prophecy.
TABLE 7.5 Thinking biases in Beck's model of depression
Arbitrary inference: Jumping to a conclusion when evidence is lacking or is actually contrary to the conclusion
Selective abstraction: Abstracting a detail out of context and missing the significance of the total situation
Overgeneralisation: Unjustified generalisation on the basis of a single incident (e.g., making a single mistake and concluding “I never do anything right”)
Magnification and minimisation: Perceiving events as either totally bad or as neutral or irrelevant. Catastrophising is an example of magnification, in which the individual takes a single fact to its extreme (e.g., a scratch on a new car means the car is wrecked and needs replacing)
Personalisation: The propensity to interpret events in terms of their personal meaning to the individual rather than their objective characteristics (e.g., believing that a frown on another person's face means they are annoyed specifically with you)
All‐or‐none thinking: Events are labelled as black or white, good or bad, wonderful or horrible (e.g., assuming that everyone will either accept you or reject you)
There is considerable evidence that depressed individuals do show the negative cognitive biases that
Beck's theory predicts. First, some studies have shown attentional biases to negative information in
depressed individuals that result in them prioritising that negative information. In the emotional Stroop
procedure, depressed individuals are slower at naming the colour of negative words than positive words,
suggesting that their attention is drawn towards the meaning of such words (Gotlib & Cane, 1987; Epp,
Dobson, Dozois, & Frewen, 2012). Also, in a dichotic listening procedure, depressed individuals have
greater difficulty ignoring negative words that are presented as distractors than do nondepressed
participants (Ingram, Burnett, & McLaughlin, 1994). The exact nature of this attentional bias is
unclear, and some studies have failed to replicate these experimental effects (e.g., Mogg, Bradley,
Williams, & Mathews, 1993). Nevertheless, there is sufficient evidence to suggest that there is a bias
towards processing negative information in depression—especially if it is information that is specifically
relevant to depression (rather than more general negative information) (Gotlib, Gilboa, & Sommerfeld,
2000). Most recently, there is emerging evidence that depressed individuals may have particular
problems disengaging attention from negative information that has captured their attention, and this
lack of cognitive control may also cause difficulties when it comes to emotion regulation (Joormann &
Stanton, 2016; LeMoult & Gotlib, 2019; see Figure 7.5).
FIGURE 7.5 Depression is associated with (a) cognitive biases in self‐referential processing, attention, interpretation,
and memory, (b) the use of maladaptive versus adaptive cognitive emotion regulation strategies, and (c) deficits in cognitive
control over mood‐congruent material, which in turn, can contribute to cognitive biases and the use of maladaptive emotion
regulation strategies (e.g., rumination, distraction), all of which exacerbate and sustain symptoms of depression.
From LeMoult and Gotlib (2019).
Second, important memory biases are also apparent in depression, with depressed individuals able to
recall more negative words than positive words in explicit memory tests (Mathews & MacLeod, 1994),
but this again seems to apply predominantly to depression‐relevant material rather than threat‐relevant
material generally (Watkins, Mathews, Williamson, & Fuller, 1992). Furthermore, studies have indicated
that depressed individuals will remember more negative than positive information about themselves
(Alloy, Abramson, Murray, Whitehouse, & Hogan, 1997), and of particular interest is the biased recall
of autobiographical memories by depressed individuals. Depressed individuals tend to favour
recalling negative autobiographical events, recall fewer positive autobiographical experiences than
nondepressed individuals, recall overgeneral and less detailed autobiographical memories than
nondepressed individuals, and are more likely to either avoid or ruminate about events from their past
(Kohler et al., 2015; Raes, Hermans, Williams, & Eelen, 2005; Williams & Scott, 1988). Subsequent
studies have suggested that there may be an association between experiencing early life trauma (such as
childhood abuse) and reduced autobiographical memory specificity (Raes, Hermans, Williams, & Eelen,
2005), and that poorly detailed autobiographical memories may be linked to the deficits in problem‐
solving ability that are characteristic of depressed individuals (Pollock & Williams, 2001).
RESEARCH METHODS IN CLINICAL PSYCHOLOGY 7.1
USING EXPERIMENTAL PSYCHOPATHOLOGY METHODS TO
UNDERSTAND DEPRESSION
Theories of depression that allude to biases in thinking and dysfunctional beliefs (such as Beck's
cognitive theory of depression) need to be empirically tested to find out whether such biases do
mediate depression. One important way of doing this is to use experimental psychopathology to study
the psychological processes that underlie basic mental health problems such as depression
(Vervliet & Raes, 2013).
pessimistic thinking A form of dysfunctional thinking where sufferers believe nothing can
improve their lot.
In summary, Beck's cognitive theory of depression has been highly influential in shaping the
way we conceptualise, research, and treat depression. It has generated a range of research on cognitive
biases in depression and has contributed substantially to cognitive‐based treatments of depression (see
Section 7.3). However, it is still unclear whether the negative cognitive biases defined by Beck's theory
actually cause depression, or whether these biases are simply a consequence of experienced depression.
Further research will be needed to clarify issues such as this.
battered woman syndrome The view that a pattern of repeated partner abuse leads
battered women to believe that they are powerless to change their situation.
learned helplessness A theory of depression that argues that people become depressed
following unavoidable negative life events because these events give rise to a cognitive set that
makes individuals learn to become ‘helpless’, lethargic and depressed.
These difficulties and inconsistencies in the original learned helplessness theory led to the development
of a revised theory that included the important concept of attribution (Abramson, Seligman, &
Teasdale, 1978; see Watkins, 2019, for a discussion of the original Abramson et al., 1978, study and a
critique of the attributional style approach to understanding depression). Attributional theories of
depression argue that people learn to become helpless, or more specifically ‘hopeless’, because they
possess certain attributional styles that generate pessimistic thinking. Attributions are the explanations
that individuals have for their behaviour and the events that happen to them. In particular, Abramson et
al. (1978) argue that people become depressed when they attribute negative life events primarily to
factors that either cannot easily be manipulated or are unlikely to change. Specifically, people who are
likely to become depressed attribute negative life events to (a) internal rather than external factors (e.g., to
personal traits rather than outside events), (b) stable rather than unstable factors (e.g., things that are
unlikely to change in the near future), and (c) global rather than specific factors (e.g., causes that have an
effect over many areas of their life rather than being specific to one area of functioning). Table 7.6
provides an example of the range of attributions that someone might make in relation to failing a maths
exam. In this case, the global, stable, and internal attribution is ‘I lack intelligence’, and this attribution
is likely to have a number of negative consequences. First, it is the kind of cause that is not easily
changed so that future failures might be avoided. Second, it reflects negatively on the individual's self‐
concept, and so is likely to reduce self‐esteem. Third, it is a global attribution, and so the individual is
likely to believe that they will fail at many other things, and not just a maths exam, and this is likely to
lead to the kinds of pessimistic thinking typical of depression. In contrast, if the student had attributed
their failure to specific, unstable factors, such as ‘I am fed up with maths’ or ‘my maths test was
numbered 13’, they would have been less likely to experience helplessness or reduced self‐esteem
(because these are factors that could change quite easily).
attribution theories Theories of depression which suggest that people who are likely to
become depressed attribute negative life events to internal, stable and global factors.
TABLE 7.6 Why I failed my GCSE maths exam
People who become depressed tend to attribute negative life events to internal, stable, and global causes (in this example ‘I
lack intelligence’ is an example of this). In contrast, had the individual attributed their failure to specific, unstable
factors (such as ‘I am fed up with maths’ or ‘My maths test was numbered 13’), they are less likely to experience
helplessness. GCSE = General Certificate of Secondary Education.
In order to test the attributional account of depression, Peterson et al. (1982) developed the
Attributional Style Questionnaire (ASQ) which measures tendencies to make the particular kinds of
causal inference that are hypothesised to play a causal role in depression. A number of studies have
subsequently found that use of the global‐stable attributional style is a vulnerability factor for future
depression (Butters, McClure, Siegert, & Ward, 1997; Chaney et al., 2004)—especially following
negative life events (Hankin & Abramson, 2002). A study by Metalsky, Joiner, Hardin, and Abramson
(1993) gave students the ASQ prior to a midterm exam and then measured depressive symptoms over
the subsequent 5 days. They found that the students' enduring depressive reactions during this period
were predicted by a global‐stable attributional style together with low self‐esteem and exam failure. This
suggests that the global‐stable attributional style in the context of a negative life event (e.g., exam failure)
is a good predictor of subsequent depression. Finally, a computer‐based attention bias modification task
(see Focus Point 6.4, Chapter 6) can be used to create either negative or positive attributional styles, and
individuals in whom the positive attributional style is induced subsequently report less depressed mood
in response to a stressor than individuals in whom a negative attributional style is induced (Peters,
Constans, & Mathews, 2011). Experimental studies such as this suggest a direct causal link between
attributional style and depressed mood.
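For readers who want to see the scoring logic concretely, the composite idea behind attributional‐style measures can be sketched in a few lines of code. This is a hypothetical illustration, not the published ASQ scoring procedure: each negative event is rated on internality, stability, and globality scales (here 1–7), and the ratings are averaged so that higher composite scores indicate a more internal, stable, and global (i.e., more depressogenic) style.

```python
# Hypothetical sketch of composite attributional-style scoring
# (illustrative only; not the published ASQ scoring procedure).

def attributional_style_score(event_ratings):
    """event_ratings: list of (internality, stability, globality) tuples, each rated 1-7.

    Returns the mean of all ratings; higher scores = more negative style."""
    all_ratings = [rating for event in event_ratings for rating in event]
    return sum(all_ratings) / len(all_ratings)

# A respondent who explains failures internally, stably, and globally ('I lack intelligence')
negative_style = attributional_style_score([(7, 6, 7), (6, 7, 6)])

# A respondent who explains the same failures externally and unstably ('the test was unlucky')
benign_style = attributional_style_score([(2, 1, 2), (1, 2, 1)])
```

On this toy scoring, the first respondent's composite (6.5) is far higher than the second's (1.5), capturing the global‐stable‐internal style that the theory treats as a vulnerability factor.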
Hopelessness theory
The attributional/helplessness account of depression has been further refined to account for the fact
that attributional style appears to interact with a number of other factors to cause depression.
Abramson, Metalsky, and Alloy (1989) suggested that the tendency to attribute negative events to
global/stable causes represents a diathesis which, in the presence of negative life events, increases
vulnerability to a group of depressive symptoms, including retarded initiation of voluntary responses,
apathy, lack of energy, and psychomotor retardation. This cluster of symptoms is known as
hopelessness, which is an expectation that positive outcomes will not occur, negative outcomes will
occur, and the individual has no responses available that will change this state of affairs. Hopelessness
theory is very similar to attributional/helplessness accounts in that negative life events are viewed as
interacting with a global/stable attributional style to generate depressed symptomatology. However,
hopelessness theory also predicts that other factors, such as low self‐esteem, may also be involved as
vulnerability factors (Metalsky et al., 1993). Many studies have supported the hopelessness theory by
confirming that depression can be predicted by a combination of negative attributional style, negative
life events, and low self‐esteem (Alloy, Lipman, & Abramson, 1992; Bohon, Stice, Burton, Fudell, &
Nolen‐Hoeksema, 2008; Metalsky et al., 1993; Metalsky & Joiner, 1992), and that the negative
attributional style is significantly more related to hopelessness depression symptoms (e.g., lethargy,
hopelessness, difficulty making decisions) than endogenous depression symptoms (e.g., loss of interest in
sex, loss of appetite, loss of weight) (Joiner, 2001). In addition to predicting many symptoms of
depression, hopelessness is also a construct that has been shown to predict suicidal tendencies and, in
particular, suicidal ideation and completed suicide (Conner, Duberstein, Conwell, Seidlitz, & Caine,
2001; Wolfe et al., 2019).
Nevertheless, despite the enhanced ability of the revised model to predict depressive episodes, there are
still a number of limitations to hopelessness theory: (a) many of the studies claiming to support
hopelessness theory have been carried out on healthy or only mildly depressed participants who are not
representative of individuals who are clinically depressed (Coyne, 1994), (b) a majority of studies testing
the model are correlational in nature, and so cannot provide any evidence on the possible causal role of
hopelessness cognitions in generating depression (Henkel, Bussfield, Moller, & Hegerl, 2002), (c) the
model does not explain all of the depressive symptoms required for a DSM‐5 diagnosis, only those
related to hopelessness, and (d) there is some evidence that the negative attributional style disappears
during remission or recovery from depression (Hamilton & Abramson, 1983), and this suggests that it
may not be a universal or enduring feature of individuals who experience depression. This latter fact
raises the question of what comes first, the negative attributional style or symptoms of depression (but
see Research Methods in Clinical Psychology 7.2).
RESEARCH METHODS IN CLINICAL PSYCHOLOGY 7.2
Questionnaire studies are usually designed to see whether there are any relationships (i.e.,
correlations) between different measures. This is a very useful first step in researching a topic
because it tells us what measures or constructs appear to be strongly associated. For example,
measures of negative attributional style (attributing negative events to global/stable causes) are
found to be highly correlated with measures of depression (Alloy et al., 1992).
Theorists who support the hopelessness theory of depression would say that these kinds of
correlations provide support for that theory, that is, support for the view that a negative
attributional style is a causal factor in developing depression. However, when two measures are
highly correlated we must be cautious for at least two reasons:
1. We cannot infer that there is a causal relationship between these measures, because their
association might be mediated by some third variable that has not been measured. For
example, negative attributional style and depression might be highly correlated because
they are both related to the number of negative life events that people have experienced.
2. Similarly, if there is a causal relationship between two variables that are highly correlated,
we do not know the direction of that causal relationship. While hopelessness theory
predicts that negative attributional style should cause depression, a correlation between
these two variables is just as likely to imply that depression causes a negative attributional
style.
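The third‐variable caution in point 1 can be demonstrated with a toy simulation (illustrative numbers only, generated at random). Below, the attributional‐style and depression scores are each generated purely from a shared 'life events' count, with neither score depending on the other, yet the two measures still end up strongly correlated.

```python
import random

# Toy simulation of a third-variable confound (illustrative, not real data).
# Both measures depend only on the number of negative life events, so the
# correlation between them is mediated entirely by that shared third variable.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

random.seed(1)
life_events = [random.randint(0, 10) for _ in range(500)]

# Neither measure is computed from the other -- only from life_events plus noise
neg_style = [e + random.gauss(0, 1) for e in life_events]
depression = [e + random.gauss(0, 1) for e in life_events]

r = pearson_r(neg_style, depression)  # a substantial positive correlation
```

Even though no causal path runs between the two measures, the correlation comes out strongly positive, which is exactly why a raw correlation cannot establish that negative attributional style causes depression.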
One way to overcome some of these difficulties in interpreting results from correlational studies
is to conduct what are known as prospective studies.
Prospective studies take measures of the relevant variables at a particular point in time (usually
called Time 1) and then go back to the same participants at some future time or times and take
the same measures again (usually called Time 2). Using this method a researcher can see if
measures of a variable at Time 1 (e.g., negative attributional style) predict, or are correlated
with, measures of variables taken at Time 2 (e.g., depression). In addition, because the
researchers will have taken measures of depression at both Times 1 and 2, they can also see if
levels of negative attributional style predict changes in depression scores between Times 1 and 2.
This procedure allows the researcher to make much stronger statements about the possible
causal direction of a relationship between two variables, and whether one variable is a risk factor
for the other (i.e., whether negative attributional style is a risk factor for increased depression
over time).
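The prospective logic described above can also be sketched concretely. The scores below are made up for illustration (they are not data from any actual study): given Time 1 and Time 2 depression scores for the same participants, we analyse the *change* in depression between the two time points, crudely controlling for Time 1 depression, and ask whether participants with a more negative Time 1 attributional style show larger increases.

```python
# Sketch of a prospective-study analysis with made-up scores for 8 participants
# (hypothetical numbers for illustration only).

neg_style_t1 = [1, 2, 3, 4, 5, 6, 7, 8]          # Time 1 negative attributional style
depression_t1 = [10, 12, 9, 11, 10, 13, 12, 11]  # Time 1 depression score
depression_t2 = [10, 13, 10, 13, 13, 16, 16, 16] # Time 2 depression score

# Analysing the change score is a crude way of taking Time 1 depression into account
change = [t2 - t1 for t1, t2 in zip(depression_t1, depression_t2)]

# Split participants on Time 1 attributional style and compare mean change in depression
low_style = [c for s, c in zip(neg_style_t1, change) if s <= 4]
high_style = [c for s, c in zip(neg_style_t1, change) if s > 4]

mean_low = sum(low_style) / len(low_style)
mean_high = sum(high_style) / len(high_style)
```

In this fabricated example the high‐style group worsens more (mean change 3.75 vs. 1.0), which is the pattern a prospective study would report as evidence that negative attributional style is a risk factor for increased depression over time.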
Such a prospective study was undertaken by Robinson and Alloy (2003). Using undergraduate
students as participants, they took measures of negative attributional style (using the Cognitive
Style Questionnaire) and depression (using the Beck Depression Inventory) at Time 1. Regular
prospective assessments then took place every 6 months for 2.5 years (Time 2, Time 3, and so on).
Even when the level of depressive symptoms at Time 1 was taken into account, they found that
measures of variables such as negative attributional style predicted the incidence and number
of future depressive episodes. They concluded that negative cognitive style (including negative
attributional style) was a risk factor for future depression.
Rumination theory
Depressed individuals spend a good deal of time indulging in ruminative activities which may either
increase the risk of depression or increase the probability of relapse following recovery from depression.
Rumination is a tendency to repetitively dwell on the experience of depression or its possible causes,
either in a repetitive or ‘brooding’ fashion (Crane, Barnhofer, & Williams, 2007) or in an analytical way
which attempts to seek explanations for the experience (Watkins, 2008), and is a maladaptive form of
emotion regulation that has been linked to periods of stress and an inability to deploy more adaptive
forms of emotion regulation (such as reappraising emotional events) (Vanderhasselt, Brose, Koster, & De
Raedt, 2016). A tendency to ruminate has been shown to predict the onset of depressive episodes
(Morrow & Nolen‐Hoeksema, 1990; Nolen‐Hoeksema, 2000), and relapse into bouts of depression
(Michalak, Holz, & Teismann, 2011). Rumination in depressed individuals appears to be driven by
meta‐cognitive beliefs that rumination is a necessary process to undertake in order to resolve depression
(Papageorgiou & Wells, 2001), and these beliefs appear to contribute to the repetitive or perseverative
nature of depressive rumination (Chan, Davey, & Brewin, 2013; Hawksley & Davey, 2010). Other
studies have indicated that rumination is associated with overgeneral autobiographical memory
(Sumner, 2012)—which is a common characteristic of depressed individuals. Rumination is also a
vulnerability factor for depression during the transition from early to middle adolescence (Abela &
Hankin, 2011), and is higher in women than men—possibly contributing to the significantly higher
depression rates in women than men (Nolen‐Hoeksema, 2000).
Rumination The tendency to repetitively dwell on the experience of depression or its possible
causes.
SELF‐TEST QUESTIONS
What are the two main mood disorders?
Name at least five of the symptoms that must be present during a 2‐week period for a
diagnosis of major depression.
Why has it been difficult to gauge the prevalence of depressive disorders?
Can you describe some of the problems involved in diagnosing major depression?
Describe the evidence that suggests there is a genetic component to major depression
What role do the neurotransmitters serotonin, dopamine, and norepinephrine play in
depression?
How have abnormalities in certain brain areas been linked to the experience of
depression?
How are cortisol levels supposed to be involved in the development of depression?
Why is the individual's response to loss so important in psychodynamic theories of
depression?
Can you describe how behavioural theories and interpersonal theories explain the
development of depression?
What is Beck's ‘negative triad’?
What is the evidence that depressed individuals hold negative beliefs about themselves and
the world?
What are the benefits and the limitations of learned helplessness as an explanation of
depression?
What kinds of attributions are likely to lead to depressed thinking?
What are the important features of hopelessness theory of depression?
What effect does rumination have on depressive symptoms?
SECTION SUMMARY
Descriptions offered by people with bipolar disorder give valuable insights into the various
mood states associated with the disorder:
DEPRESSION
‘I doubt completely my ability to do anything well. It seems as though my mind has slowed
down and burned out to the point of being virtually useless…[I am] haunt[ed]…with the total,
the desperate hopelessness of it all…Others say, “It's only temporary, it will pass, you can get
over it”, but of course they haven't any idea of how I feel, although they are certain they do. If
I can't feel, move, think, or care, then what on earth is the point?’
HYPOMANIA
‘At first when I'm high, it's tremendous…ideas are fast…like shooting stars you follow until
brighter ones appear…All shyness disappears, the right words and gestures are suddenly
there…uninteresting people, things become intensely interesting. Sensuality is pervasive; the
desire to seduce and be seduced is irresistible. Your marrow is infused with unbelievable feelings
of ease, power, well‐being, omnipotence, euphoria…you can do anything…but, somewhere this
changes’.
MANIA
‘The fast ideas become too fast and there are far too many…overwhelming confusion replaces
clarity…you stop keeping up with it—memory goes. Infectious humour ceases to amuse. Your
friends become frightened…everything is now against the grain…you are irritable, angry,
frightened, uncontrollable, and trapped’.
Clinical Commentary
This provides an insight into how the different mood states in bipolar disorder are
experienced, and how the transition from a depressive episode moves through the mild
manic episode called hypomania to full blown mania. Typical of the transition from
depression to full‐blown mania are (a) the overwhelming flow of thoughts and ideas that
lead to the sufferer seeming incoherent and interrupting on‐going conversations, (b) the
temptation to indulge in inappropriate sexual interactions as everyone around becomes
a focus of interest and shyness is lost, and (c) the inevitable drift by the sufferer into
irritability, frustration and anger as friends and acquaintances try to quell the excesses
of thought and behaviour.
TABLE 7.7 Summary: DSM‐5 criteria for a manic episode
Unusual and continual elevated, unreserved, or irritable mood and unusual and continual
increase in energy levels lasting at least a week
Presence of at least three of the following:
Inflated self‐esteem or grandiosity
Less need for sleep
Increased talkativeness
Racing thoughts
Easily distractible
Increase in goal‐directed activity or unintentional and purposeless motions
Unnecessary participation in activities with a high potential for painful consequences
TABLE 7.8 Summary: DSM‐5 criteria for bipolar II disorder
Presence or history of at least one major depressive episode(s)
Presence or history of at least one hypomanic episode(s)
No history of manic episode(s)
Symptoms are not better accounted for by schizoaffective disorder or other disorders
TABLE 7.9 Summary: DSM‐5 criteria for a hypomanic episode
Unusual and continual elevated, unreserved, or irritable mood and unusual and continual
increase in energy levels lasting at least a week
Presence of at least three of the following:
Inflated self‐esteem or grandiosity
Less need for sleep
Increased talkativeness
Racing thoughts
Easily distractible
Increase in goal‐directed activity or unintentional and purposeless motions
Unnecessary participation in activities with a high potential for painful consequences
A noted change in functionality which is not usually seen in the individual and changes in
functionality and mood are noticeable by others
The episode is not due to the use of medication, drug abuse, or other treatment
TABLE 7.10 Summary: DSM‐5 criteria for cyclothymic disorder
For at least 2 years there have been many periods with hypomanic symptoms that do not meet the
criteria for a hypomanic episode and many periods with depressive symptoms that do not meet
the criteria for a major depressive episode. These symptoms have not been absent for more than
2 months at a time
No major depressive episode, manic episode, or hypomanic episode has been present during the
first 2 years of the disorder
The episode is not due to the use of medication, drug abuse, or other treatment
Cyclothymic disorder is a mild form of bipolar disorder in which the sufferer has mood swings over
a period of years that range from mild depression to euphoria and excitement. It is characterised by at
least 2 years of hypomanic symptoms that do not meet the criteria for a manic episode, and the sufferer
will experience alternating periods of withdrawal then exuberance, inadequacy and then high self‐
esteem, and so on (Table 7.10).
Cyclothymic disorder A form of depression characterized by at least 2 years of hypomanic
symptoms that do not meet the criteria for a manic episode and in which the sufferer
experiences alternating periods of withdrawal then exuberance, inadequacy and then high self-
esteem.
Bipolar disorder is much less common than major depression, and epidemiological studies suggest a
lifetime prevalence rate of around 1% for bipolar I disorder (Pini et al., 2005). A systematic review of
global prevalence data from all bipolar spectrum disorders showed a 12‐month prevalence rate of 0.8%
when data were compared across 20 geographic regions of the world (Ferrari, Baxter, & Whiteford,
2011).
Biological theories
Genetic factors
There is good evidence that bipolar disorder has an inherited component. For example, family studies
have indicated that 10–25% of first‐degree relatives of bipolar disorder sufferers have also reported
significant symptoms of mood disorder (Gershon, 2000), and it has been estimated that approximately
7% of the first‐degree relatives of sufferers also have bipolar disorder (Kelsoe, 2003)—this is compared
to a lifetime prevalence rate in the general population of between 0.4% and 1.6%. Twin studies also
suggest a significant inherited component to bipolar disorder. For example, Table 7.11 shows the
concordance rates for bipolar disorder in sets of monozygotic (MZ) twins and dizygotic (DZ) twins
(Kelsoe, 2003). Concordance rates average 69% and 29% for MZ and DZ twins respectively, suggesting
that sharing all genes as opposed to half of genes more than doubles the risk for developing bipolar
disorder. More recent twin studies have reported heritability estimates as high as 0.93 for bipolar I
disorder (Kieseppä, Partonen, Haukka, Kaprio, & Lönnqvist, 2014).
TABLE 7.11 Concordance rates in selected twin studies of bipolar disorder
After Kelsoe (2003).
Neurochemical factors
Bipolar disorder has also been shown to be reliably associated with abnormalities in levels of brain
neurotransmitters. Like major depression, bipolar disorder seems to be associated with dopamine and
norepinephrine irregularities. Converging findings from pharmacological and imaging studies support
the view that overactive dopamine receptor availability and a hyperactive reward processing network in
the brain underlies mania (Ashok et al., 2017), but serotonin does not appear to play such a central role
in bipolar symptoms, as evidenced by the fact that serotonin reuptake inhibition is not a sufficient
condition for antidepressant efficacy in bipolar depression (Fountoulakis, Kelsoe, & Akiskal, 2012). In
addition, the manic episodes in bipolar disorder are also found to be associated with high levels of
norepinephrine (Altshuller et al., 1995; Bunney, Goodwin, & Murphy, 1972). These findings linking
bipolar disorder to both overactivity and deficits in a range of neurotransmitters testify to the
complexity of the pathophysiology of this particular mental health problem, and the interactions
between these various effects are still not clearly understood (Sigitova, Fišar, Hroudová, Cikánková, &
Rabouch, 2016).
SELF‐TEST QUESTIONS
What are dysthymic disorder and cyclothymic disorder?
What is the distinction between bipolar disorder I and bipolar disorder II?
Describe the evidence that suggests there is a genetic component to bipolar disorder.
What are the main types of pharmacological treatments for bipolar disorder?
What factors trigger either periods of depression or periods of mania in bipolar disorder?
SECTION SUMMARY
Stepped-care models An approach to the treatment of psychopathology which emphasises that
the type of treatment provided for an individual should be tailored to the severity of their
symptoms and their personal and social circumstances.
The stepped‐care model provides a framework in which to organise the provision of services, and
supports patients, carers and practitioners in identifying and accessing the most effective interventions.
In stepped care the least intrusive, most effective intervention is provided first; if a person does not
benefit from the intervention initially offered, or declines an intervention, they should be offered an
appropriate intervention from the next step.
Each step below gives the focus of the intervention, followed by the nature of the intervention.
Step 4: Severe and complex depression; risk to life; severe self‐neglect. Interventions: medication, high‐intensity psychological interventions, electroconvulsive therapy, crisis service, combined treatments, multiprofessional and inpatient care.
Step 3: Persistent subthreshold depressive symptoms or mild to moderate depression with inadequate response to initial interventions; moderate and severe depression. Interventions: medication, high‐intensity psychological interventions, combined treatments, collaborative care and referral for further assessment and interventions.
Step 2: Persistent subthreshold depressive symptoms; mild to moderate depression. Interventions: low‐intensity psychosocial interventions, psychological interventions, medication and referral for further assessment and interventions.
Step 1: All known and suspected presentations of depression. Interventions: assessment, support, psychoeducation, active monitoring and referral for further assessment and interventions.
7.3.1 Biological Treatments
Antidepressant medications
The three main types of medication for depression are (a) tricyclic drugs (such as imipramine), (b) MAO
inhibitors (such as tranylcypromine), and (c) SSRIs (such as Prozac) and serotonin and norepinephrine
reuptake inhibitors (SNRIs). The first two types of drug increase levels of both serotonin and
norepinephrine in the brain; SSRIs act selectively on serotonin and prevent its reuptake by the
presynaptic neurone, while SNRIs inhibit the reuptake of both serotonin and norepinephrine
(see Figure 7.2). Treatment outcome studies have generally
indicated that depressed individuals given these forms of medication benefit when compared with
individuals taking placebos (Cipriani et al., 2018), and around 50–65% of individuals taking tricyclic
drugs show improvement (Gitlin, 2002) along with around 50% taking MAO inhibitors (Thase, Trivedi,
& Rush, 1995). Although these forms of drug treatment can be effective for some people, they often
have significant physical and psychological side effects (see Chapter 4, Table 4.1), and the most recently
developed of these drugs, SSRIs and SNRIs, do have some benefits over tricyclic drugs and MAO
inhibitors in that they produce fewer side effects (Enserink, 1999) and it is harder to overdose on them
(Isbister, Bowe, Dawson, & Whyte, 2004). However, SSRIs such as fluoxetine (Prozac) take around 2
weeks to begin to have an effect on symptoms (which is roughly the same as tricyclics) and also have
their own side effects such as headache, gastric disorders, and sexual dysfunction (Rosen, Lane, &
Menza, 1999) (see Chapter 4, Section 4.1.1 for a fuller discussion of antidepressant drugs, their
effectiveness, and how they might work when they are effective).
There is also controversy about whether SSRIs such as Prozac increase the risk of suicide. Recent meta‐
analyses suggest that increased risk of suicide with the use of SSRIs cannot be ruled out, but these risks
should be balanced against the relative effectiveness of SSRIs in treating depression (Gunnell, Saperia,
& Ashby, 2005). Nevertheless, despite these cautions, the drugs that have been developed to treat
depression do help to alleviate symptoms in a majority of cases, they provide relatively rapid relief from
symptoms in around half of those treated, and they are effective not only with bouts of major
depression but also with chronic depressive disorders such as dysthymic disorder (Hellerstein, Kocsis,
Chapman, Stewart, & Harrison, 2000). However, relapse is a common occurrence after drug treatment
for depression has been withdrawn (Reimherr et al., 2001), and a more effective treatment may be to
combine drug therapy with psychological therapies such as CBT (Kupfer & Frank, 2001).
In contrast to those drugs prescribed for major depression, the drug therapies of choice for bipolar
disorder are rather different. The traditional treatment for bipolar disorder has been lithium
carbonate. Around 80% of bipolar disorder sufferers who take lithium benefit from it, and the drug
can provide relief from symptoms of both manic and depressive episodes (Curran & Ravindran, 2014;
Won & Kim, 2017). There is some debate about how lithium actually moderates the symptoms of
bipolar disorder. Early views suggested that lithium stabilises the activity of sodium and potassium ions
in the membranes of neurones, and it is the instability of these ions that gives rise to the symptoms of
bipolar disorder (Swonger & Constantine, 1983). Other accounts argue that it changes synaptic activity
in neurons in such a way as to help neurotransmitters bind to a receiving neuron, thus helping to
increase neuronal plasticity (Ghaemi, Boiman, & Goodwin, 1999; Won & Kim, 2017). However,
treatment of bipolar symptoms with lithium carbonate does have some disadvantages. First,
discontinuation often increases the risk of relapse (Suppes, Baldessarini, Faedda, & Tohen, 1991), and
second, it is difficult to prescribe a suitable dosage on an individual basis. Lithium is a toxic
substance, and the effective dose for alleviating symptoms is often
close to the toxic level. As a consequence, an overdose can cause delirium, convulsions and, in rare
cases, can be fatal. More recently, combinations of antipsychotic drugs and SSRIs have been used
successfully to address symptoms of bipolar disorder (e.g., a combination of the antipsychotic drug
olanzapine and the antidepressant SSRI drug fluoxetine), and this combination has been shown to
have significant effects on both mania and depression symptoms (Deeks & Keating, 2008).
lithium carbonate A drug used in the treatment of bipolar disorder.
fluoxetine (Prozac) A selective serotonin reuptake inhibitor (SSRI) which reduces the uptake
of serotonin in the brain and is taken to treat depression.
One drug that has received a good deal of recent publicity for its potential as an antidepressant
medication is the anaesthetic ketamine. The few randomised controlled trials that have tested the
effectiveness of ketamine to date suggest that it may have significant antidepressant effects when taken
either orally or intravenously (Rosenblat et al., 2019) and may do so by increasing neural plasticity
(Moda‐Sava et al., 2019). But a fuller judgement on the suitability of ketamine as an antidepressant
awaits the outcome of more robust randomised controlled trials and further research on the
mechanisms by which it may reduce depressive symptoms.
PHOTO 7.1 Jack Nicholson's character in the famous 1975 film One Flew over the Cuckoo's Nest was subjected to
ECT treatment, and this unsympathetic portrayal led many to view ECT as a form of patient management rather than
treatment.
Nevertheless, despite these criticisms, ECT may still have a role to play in the treatment of severe
depression in both major and bipolar depression when symptoms are resistant to pharmacological
treatment (Medda, Perugi, Zanello, Ciuffa, & Cassano, 2009), and the almost immediate beneficial
effects of ECT may be helpful in alleviating depression when suicide is a real possibility.
Psychoanalysis
In Section 7.1.2 we discussed some of the psychodynamic explanations of depression. Central to these
accounts is the view that depression is a response to loss (perhaps of a loved one) and may manifest as
symbolic loss, in which other kinds of losses (such as losing a job) are seen as equivalent to losing a loved
one. Psychodynamic theories (such as those developed by Freud and Abraham) argue that the
individual's response to loss is to turn their anger at the loss inwards, and this in turn can develop into
self‐hate resulting in low self‐esteem (Frosh, 2012, chapter 13). The aim of psychodynamic therapy,
therefore, is to help the depressed individual achieve insight into this repressed conflict and to help
release the inwardly directed anger. Psychodynamic therapy will do this by using various techniques to
help people explore the long‐term sources of their depression (see Chapter 4, Section 4.1.1), and this
will involve exploring conflicts and problematic relationships with attachment figures—such as parents
—and discussing long‐standing defensive patterns. For example, the psychodynamic therapist may use
free association or dream interpretation to help the individual recall early experiences of loss
that may have contributed to repressed conflicts and symptoms of depression. In this way,
psychodynamic therapies attempt to bring meaning to the symptoms of depression and help the
individual understand how early experiences may have contributed to their symptoms and affected their
current interpersonal relationships.
Evidence for the therapeutic efficacy of psychodynamic therapies in the treatment of depression is
meagre. This is in part because processes within psychodynamic therapies are difficult to objectify and
study in a controlled way. Psychodynamic therapists also differ significantly in the way they interpret
psychodynamic principles in practice. A controlled study by the American Psychiatric Association (APA,
1993) reported that there was no evidence for the long‐term efficacy of psychodynamic treatment of
depression, although some more recent studies have indicated that short‐term psychodynamic
interventions may be effective in significantly reducing some symptoms of depression (Leichsenring,
2001; Lemma, Target, & Fonagy, 2011).
Behavioural activation
Behavioural theories of depression emphasise that depression may be triggered by a life‐event loss (such
as a bereavement), and this event may represent the loss of important sources of reward and
reinforcement for the individual. This leads the depressed individual into a vicious cycle where this lack
of reward generates depressive symptoms, and in turn, the individual's depressive behaviour may
ultimately lead to aversive social consequences in the form of negative social reactions from friends and
family (Coyne, 1976). This view has led to the development of behavioural activation (BA) therapies for
depression that attempt to increase the client's access to pleasant events and rewards and decrease their
experience of aversive events and consequences (Lewinsohn et al., 1980; Turner & Leach, 2012). Early
BA programmes attempted to achieve these goals through daily monitoring of pleasant/unpleasant
events and the use of behavioural interventions that developed activity scheduling (e.g., scheduling
reinforcing activities so that they will reinforce less attractive activities). They also included social skills
and time management training (Lewinsohn & Shaffer, 1971; Zeiss et al., 1979). The use of BA was
given a further boost by the fact that a number of studies demonstrated that cognitive change is just as
likely to occur following purely behavioural interventions as after cognitive interventions (Jacobson &
Gortner, 2000; Simons, Garfield, & Murphy, 1984). That is, negative thinking and
negative self‐statements in depression can be reduced by behavioural interventions that contain no
explicit cognitive change components. Recent developments of BA include the self‐monitoring of
pleasant/unpleasant experiences and the identification of behavioural goals within major life areas (e.g.,
relationships, education, etc.) that can be targeted for development and reinforcement, and BA has also
been shown to be beneficial for individuals suffering chronic depression (Erickson & Hellerstein, 2011).
Treatment in Practice 7.1 gives an example of how a brief BA programme for depression is structured
and executed (Lejuez, Hopko, LePage, Hopko, & McNeil, 2001).
CLINICAL PERSPECTIVE: TREATMENT IN PRACTICE 7.1
BRIEF BEHAVIOURAL ACTIVATION TREATMENT FOR
DEPRESSION (BATD)
Behavioural activation treatment for depression (BATD) is conducted over 8–15 sessions, and
sessions progress through the following stages:
1. Assessing the function of depressed behaviour; weakening access to positive reinforcement
(e.g., sympathy) and negative reinforcement (e.g., escape from responsibilities); establishing
rapport with the client and introducing the treatment rationale.
2. Increasing the frequency and subsequent reinforcement of healthy behaviour; clients begin
a weekly self‐monitoring exercise that serves as a baseline assessment of daily activities and
orients clients to the quantity and quality of their activities, and generates ideas about
activities to target during treatment.
3. Emphasis is shifted to identifying behavioural goals within major life areas, such as
relationships, education, employment, hobbies and recreational activities, physical/health
issues, and spirituality.
4. Following goal setting, an activity hierarchy is constructed in which 15 activities are rated
ranging from ‘easiest’ to ‘most difficult’ to accomplish. With progress being monitored by
the therapist, over a period of weeks the client progressively moves through the hierarchy
from easiest to most difficult. Patients are urged to identify weekly rewards that can be
administered if activity goals are met.
Meta‐analyses of randomised controlled trials have generally reported significantly better effects of BA in
treating depression than appropriate control conditions (Ekers et al., 2014), but there is less consensus
on whether BA is superior to other treatments such as CBT or antidepressant medication (Dimijian et
al., 2016).
Cognitive therapy
As we saw in Section 7.1.2, dysfunctional cognitions appear to play an important part in the
maintenance of depressive symptoms. Beck's Cognitive Theory of depression (Beck, 1967, 1987) argues
that depression is maintained by a systematic set of dysfunctional negative beliefs that form a negative
schema through which the depressed individual views themselves, their world, and their future. From this
theory Beck developed one of the most successful and widely adopted therapeutic approaches for
depression, and this has come to be known by various names including cognitive therapy, cognitive
retraining, or cognitive restructuring. The thrust of this approach is (a) to help the depressed
individual identify their negative beliefs and negative thoughts, (b) to challenge these thoughts as
dysfunctional and irrational, and (c) to replace these negative beliefs with more adaptive or rational
beliefs (see Strauss, 2019). For example, depressed individuals tend to hold beliefs and attributional styles
that are overgeneralised. They will respond to a specific failure (such as failing their driving test) with
statements such as ‘Everything I do ends in failure’ or ‘The world is against me’. The cognitive therapist
will attempt to identify these overgeneralised beliefs and challenge them as irrational—using, if at all
possible, relevant examples from the client's own experiences. In addition to this, the client will be asked
to monitor the negative automatic thoughts that give rise to negative beliefs and depressive
symptoms, often using a form which allows them to link the automatic thoughts to particular situations
and outcomes, and to think through possible rational alternatives to the negative automatic thought
(Table 7.13). The overall philosophy of cognitive therapy for depression is to correct the negative
thinking bias possessed by depressed individuals, and in some cases this aim can be supplemented with
the use of reattribution training (Beck, Rush, Shaw, & Emery, 1979). Reattribution training
attempts to get the client to interpret their difficulties in more hopeful and constructive ways rather than
in the negative, global, stable ways typical of depressed individuals (see Table 7.6).
cognitive therapy A form of psychotherapy based on the belief that psychological problems
are the products of faulty ways of thinking about the world.
negative automatic thoughts Negatively valenced thoughts that the individual finds difficult
to control or dismiss.
CBT is significantly more effective at reducing depression symptoms than treatment‐as‐usual (López‐
López et al., 2019), and works more quickly than interpersonal psychotherapy (Mulder, Boden, Carter,
Luty, & Joyce, 2017). Outcome studies have shown that cognitive therapy is usually at least as effective as
drug therapy in treating the symptoms of depression (Rush, Beck, Kovacs, & Hollon, 1977), and some
have shown that it is superior to drug therapy at 1‐year follow‐up (Blackburn & Moorhead, 2000;
Hollon, Shelton, & Davis, 1993). DeRubeis et al. (2005) compared cognitive therapy with drug therapy
(paroxetine) and a placebo‐control condition. After 8 weeks they found improvement in 43% of the
cognitive therapy group and 50% of the drug treatment group, compared with only 25% in the placebo group, and
these levels of improvement were maintained at 16 weeks. Cognitive therapy also appears to have
longer‐term beneficial effects by preventing relapse compared to medication (Dobson et al., 2008;
Hensley, Nadiga, & Uhlenhuth, 2004), but a combination of cognitive therapy with drug treatment
appears to be superior to either treatment alone (Kupfer & Frank, 2001).
Cognitive therapy has also been successfully adapted to treat individuals with bipolar disorder in
conjunction with appropriate medication (Newman et al., 2002) in both individual (da Costa et al.,
2010) and group (Gomes et al., 2011) settings. These interventions help the sufferer with medication
compliance, mood monitoring, anticipating stressors, interpersonal functioning, and problem solving
(Danielson, Feeny, Findling, & Youngstrom, 2004; Scott, Garland, & Moorhead, 2001).
While there is no doubt that cognitive therapy is successful in helping to treat depression, there is still
some debate about how it achieves these effects. We have seen earlier that cognitive change is just as
likely to occur following purely behavioural treatments as it is after cognitive treatments. Cognitive
therapy contains both elements of cognitive restructuring which aims to change cognitions directly, and
behavioural exercises designed to establish new cognitions—so is the cognitive restructuring element
entirely necessary? In addition, there is evidence that cognitive therapy not only changes negative
cognitions, but also results in improvements in abnormal biological processes (Blackburn & Moorhead,
2000). This raises the question of whether cognitive therapy has its effects by changing cognitions or
biological processes. Nevertheless, regardless of how it works, cognitive therapy certainly does work, and
recent evidence suggests that it not only reduces the occurrence of negative cognitions in depression,
but it also helps to dissociate negative cognitions from the symptoms of depression better than other
treatments (Beevers & Miller, 2005).
Mindfulness‐based cognitive therapy (MBCT)
A critical issue in the treatment of depression is how to predict and eliminate possible relapse after
remission or successful treatment. In the case of major depression, it appears that the risk of relapse
increases with every consecutive bout of depression, and this increased risk also means that depression
can reoccur with less and less external provocation (such as a stressful life event) (Kendler, Thornton, &
Gardner, 2000). This increased risk of relapse in recovered depressed individuals appears to be caused
by periods of negative mood (dysphoria) activating patterns of negative or depressogenic thinking such
as self‐devaluation and hopelessness (Ingram, Miranda, & Segal, 1998; Segal, Gemar, & Williams,
1999). That is, as soon as the recovered depressed individual begins to feel depressed again, this
reactivates negative thinking that leads to a downward spiral to relapse. Mindfulness‐based
cognitive therapy (MBCT) was developed in order to try to combat this linkage between periods of
dysphoria and the onset of negative thinking, and it aims to get individuals to take a ‘decentred’
perspective by being aware of negative thinking patterns and viewing them purely as mental events
rather than accurate reflections of reality (Teasdale, 1988; Teasdale, Segal, & Williams, 1995). MBCT
is based on an integration of aspects of CBT and components of the mindfulness‐based stress reduction
programme that contains elements of meditation and provides training in the deployment of attention
(Kabat‐Zinn, 1990). Clients are taught to become more aware of, and relate differently to, their
thoughts, feelings, and bodily sensations, and treat thoughts and feelings as passing events in the mind
rather than identifying with them. It also teaches skills that allow individuals to disengage from habitual
dysfunctional cognitive routines and depression‐related patterns of ruminative thought. Studies suggest
that MBCT can (a) significantly reduce the probability of future relapse (Ma and Teasdale, 2004, found
that MBCT reduced relapse from 78% to 36%, and in participants who had experienced four or more
bouts of depression only 38% of those receiving MBCT relapsed compared with 100% in the
treatment‐as‐usual control group), and (b) significantly reduce symptoms of mood disorders generally
(Hofmann, Sawyer, Witt, & Oh, 2010). More recently, some of the mechanisms of change that make
MBCT effective have been identified, and these include reducing repetitive negative thinking (e.g.,
rumination), and facilitating self‐compassion and positive affect (Mackenzie, Abbott, & Kocovski, 2018).
These findings suggest that teaching previously depressed individuals to adopt a detached, decentred
relationship to their depression‐related thoughts and feelings can have significant therapeutic gains. A
Science Oxford Live lecture on the science of mindfulness by Professor Mark Williams can be found at
https://www.youtube.com/watch?v=wAy_3Ssyqqg.
TABLE 7.13 Example of a thought record form used to record the negative automatic thoughts (“hot thoughts”) experienced by depressed individuals

The form has seven columns: Situation (Who? What? When? Where?); Moods (What do you feel?); Automatic thoughts/images (What was going through your mind just before you started to feel this way? Any other thoughts? Images?); Evidence that supports the hot thought; Evidence that does not support the hot thought; Alternative/balanced thoughts (write an alternative or balanced thought, and rate how much you believe in each alternative or balanced thought, 0–100%); and Rate moods now (re‐rate the moods listed in column 2, as well as any new moods, 0–100%).

Example entry:
Situation: In hotel room, alone, Sunday 10 p.m.
Moods: Depressed 100%; Lonely 100%; Empty 100%; Confused 100%; Unmotivated 100%; Stressed.
Automatic thoughts (images): I need something to make me feel numb and take away the pain. Nothing is going right for me. I'm worthless—I can never achieve anything. At the moment I simply feel like ending it all. What is there to look forward to in life? I feel like an empty shell.
Evidence that supports the hot thought: The hurt inside is unbearable. Killing myself will solve all this. People have tried to help me and I've been given many drugs, none of which work.
Evidence that does not support the hot thought: When I'm with other people, talking about myself, things begin to feel better. I've felt like this before and have managed to get myself through it. Some mornings I wake up feeling and thinking differently, so there may be hope. My friends tell me that I have something to offer. I do laugh when I'm with others.

This form relates these thoughts to possible situational triggers and attempts to get the depressed individual to think up evidence that might be contrary to that “hot thought.” (After Greenberger & Padesky, 1995)
SELF‐TEST QUESTIONS
What is a stepped‐care model for the treatment of depression?
What drugs are important in treating depression, and how do they have their effect?
What are the important components of social skills training for depression?
What is the rationale behind behavioural activation therapy for depression?
How does cognitive therapy attempt to eradicate negative thinking?
What is reattribution training?
What is MBCT and what role does it play in the control of depression?
What are the main types of pharmacological treatments for bipolar disorder?
What is transcranial magnetic stimulation?
SECTION SUMMARY
nonsuicidal self-injury (NSSI) The act of deliberately causing injury to one’s body
without conscious suicidal intent.
Deliberate self‐harm is predominantly an adolescent activity with surveys suggesting that between 7.5%
and 45% of adolescents may have deliberately self‐harmed at some time (e.g., Cipriano, Cella, &
Cotrufo, 2017; Plener, Libal, Keller, Fegert, & Muehlenkamp, 2009), rising to 38.9% among university
students, and 4–23% among adults (Andover, 2014; Whitlock et al., 2011). People typically engage in
self‐harm when they are alone and experiencing negative thoughts and feelings (e.g., having a bad
memory, feeling angry, experiencing self‐hatred, or numbness) (Nock, Prinstein, & Sterba, 2009). This
suggests that self‐injury is performed as either a means of self‐soothing or of help‐seeking (e.g., with the
end goal of enlisting others to help the individual cope with their negative feelings or negative self‐
image) (Muehlenkamp et al., 2009). Many adolescents who self‐harm do not suffer any long‐term
psychological effects from doing so, but there are groups of individuals who are more at risk of
developing self‐harm activities, and these include depressed adolescents (Hawton & James, 2005)—
especially those going through inter‐personal crises, or individuals with existing mental health problems
such as eating disorders (Wedig & Nock, 2010), excessive alcohol intake (Hussong, 2003), substance
abuse (Greydanus & Shek, 2009; Koob & Kreek, 2007), and psychosis (Gerard, de Moore, Nielssen, &
Large, 2012). In particular, adolescents at risk of deliberate self‐harm show intrapersonal vulnerabilities
such as higher physiological arousal in response to frustrating tasks and stressful events (Nock & Mendes,
2008; Nock, Wedig, Holmberg, & Hooley, 2008), and poor verbal communication and social problem‐
solving skills (Nock & Mendes, 2008; Nock & Photos, 2006) (Client's Perspective 7.2).
TABLE 7.14 DSM‐5 diagnostic criteria for nonsuicidal self‐injury
Over the previous year on at least five occasions the individual has intentionally self‐inflicted
damage to the surface of their body to induce bleeding, bruising, or pain with the anticipation
that the injury will lead to only minor or moderate physical injury
Presence of at least two of the following:
Negative feelings or thoughts such as depression, anxiety, and suchlike immediately prior to
the self‐injury
Before the self‐injury a period of fixation with the intended self‐injury which is hard to resist
Preoccupation with self‐injury occurs frequently even when not acted upon
The self‐injury is carried out with the expectation that it will relieve a negative feeling or
induce a positive feeling during or directly after the self‐injury
The self‐injury does not occur only during states of psychosis, delirium, or intoxication
There is no suicidal intent
Preventing self‐harm can be difficult because acts of self‐harm are often impulsive and carried out in
secret, and denial is a common feature of those who self‐harm—especially when the self‐harm may
have a positive effect by providing temporary relief from their difficulties. Similarly, most self‐injurers
also report feeling little or no pain during self‐harming, and this also makes the activity difficult to detect
(Favazza, 1996; Nock & Prinstein, 2004). However, it may be possible to target vulnerable groups and to
ensure that they have access to mental health services and support services. As we have noted,
vulnerable groups that have been identified include (a) depressed adolescents; (b) those with
interpersonal crises, such as those who have lost a partner or have run away from home; and (c) those
who have previously self‐harmed (especially in conjunction with substance misuse and conduct disorder)
(Hawton & James, 2005).
Effective treatments for NSSI are still being developed, but psychological interventions such as
dialectical behaviour therapy, CBT, and mentalisation‐based therapy (MBT) have all been shown to
reduce levels of self‐harming (Ougrin, Tranah, Stahl, Moran, & Asarnow, 2015). The evidence on the
effectiveness of medications for NSSI is less convincing, but some antidepressant and antipsychotic
drugs have been shown to reduce the level of self‐harming during treatment with these medications
(Turner, Austin, & Chapman, 2014).
SELF‐TEST QUESTIONS
How is nonsuicidal self‐injury defined and what kinds of problems lead adolescents in
particular to self‐harm?
What are the most common forms of nonsuicidal self‐injury?
What psychological functions is nonsuicidal self‐injury thought to serve?
SECTION SUMMARY
7.5 SUICIDE
The World Health Organization estimates that 800,000 people commit suicide each year (WHO,
2019), but the number of people who attempt suicide can be up to 20 times higher. The WHO report
also described other sobering facts about suicide. Suicide is the third highest cause of death worldwide
amongst 15–19 year‐olds, and it is not confined to high‐income countries: it is a global phenomenon,
with over 79% of global suicides occurring in low‐ and middle‐income countries in 2016 (Figure 7.6).
suicide The action of killing oneself intentionally.
Suicidal phenomena have become more common in teenagers and adolescents. In the UK there were
177 suicides among 15–19 year olds in 2017, compared with 110 in 2010. However, the UK male
suicide rate in 2017 was 15.5 deaths per 100,000, which was the lowest since the time series began in
1981. The suicide rate for females in the UK in 2017 was 4.9 deaths per 100,000, which has remained
at roughly the same rate since 2007 (Office for National Statistics, 2018). The reasons for the increase in
adolescent suicide rates are unclear, but a number of factors may be relevant: (a) modern teenagers are
probably exposed to many of the life stressors experienced by adults, yet may lack the coping resources
to deal with them effectively (Reynolds & Mazza, 1994); (b) suicide is a sociological as well as a
psychological phenomenon, and media reports of suicide often trigger a significant increase in suicides
(Gould, Jamieson, & Romer, 2003). This is especially true in the case of adolescents and teenagers,
where news of a celebrity suicide is often associated with increases in the rate of teenage suicide
attempts (Focus Point 7.5); and (c) there has been a recognised increase in stress‐ and anxiety‐related
problems amongst children and adolescents over the past decade, and this is discussed in Focus Point 1.1
in Chapter 1.
When Nirvana lead singer Kurt Cobain committed suicide in April 1994 it had a significant
impact on young people who saw Cobain as the spokesman for their troubled generation. The
sudden deaths of celebrities in this way have given prominence to social factors that may
influence suicide—especially amongst the young. That is, reporting of suicide in the media may
trigger ‘contagion’ effects in which young people imitate their idols. However, the evidence for
media contagion effects on suicide rates is equivocal. Some studies have found evidence for
increased rates of adolescent suicide after high‐profile media stories about suicide (Littman,
1985; Motto, 1970; Phillips & Carstensen, 1986), while others have failed to find any effect. In
particular, Martin and Koo (1997) investigated the effect of the suicide of Kurt Cobain on
Australian adolescent suicide rates for the 30‐day period after his suicide. They found no
evidence for a ‘suicide contagion’ effect, with suicide rates for the 30‐day period after Cobain's
death being lower than rates for the same period in some previous years. Nevertheless, a recent
systematic review of the effects of media reporting on suicide rates concludes that media
reporting and suicidality are probably related, suggesting that the media need to be responsible
about the way they report celebrity suicides in order to minimize imitation by vulnerable groups
(Sisask & Varnik, 2012).
FIGURE 7.7 Suicide and the economic recession. Time trend analysis of suicide rates in 50 US states between 1999
and 2010. Vertical line shows onset of the economic recession.
After Reeves et al. (2012).
In a large‐scale study of risk factors carried out for the World Health Organization, Borges et al. (2010)
found that risk factors for suicidal behaviours in both developed and developing countries included
being female, being young, lower education and income, unmarried status, unemployment, parent
psychopathology, childhood adversities, and a current DSM psychiatric diagnosis. A combination of
these risk factors was able to predict suicide attempts with some accuracy. Perhaps not surprisingly, life
stress is one of the most significant predictors of suicide, and suicide attempts are often preceded by a
significant negative life event. The types of life events that may trigger suicide can differ across age
groups. For adolescents and teenagers these are more likely to be relationship issues, separations, and
interpersonal conflicts; in middle age they are more likely to be financial issues; and in later life they
tend to be related to disability and physical health (Rich, Warstadt, Nemiroff, Fowler, & Young, 1991).
Finally, there is also a genetic element to suicidal behaviour. Both twin studies and adoption studies
support the view that suicidality has an inherited component that may be as high as 48% (Joiner, Brown,
& Wingate, 2005) and which is independent of the heritability of other psychiatric disorders (Rujescu,
Zill, Rietschel, & Maier, 2009). In addition, recent twin studies have also indicated that suicidal ideation
has a substantial inherited component of 57% (Dutta et al., 2017). This genetic component may be
related to factors controlling low levels of serotonin metabolites in the brain which have been found to
be associated with suicidal behaviour in individuals suffering major depression (e.g., Lutz, Mechawar, &
Turecki, 2017).
This diversity of risk factors has led researchers to argue that suicide probably results from a complex
interplay between sociocultural factors, traumatic events, psychiatric history, personality traits and
genetic vulnerability (e.g., Balazic & Marusic, 2005; Rujescu et al., 2009), all of which will need to be
included in a comprehensive model of suicide aetiology (O'Connor & Portzky, 2018).
SELF‐TEST QUESTIONS
Can you name the main risk factors for suicide?
What are the best ways of identifying and preventing suicide?
SECTION SUMMARY
7.5 SUICIDE
The World Health Organization estimates that 800,000 people worldwide commit suicide
each year.
One in four people with a diagnosis of depression are likely to attempt suicide at least once
in their lifetime.
The prevalence rate for suicide attempts for developed countries is between 0.4% and 2%
in any 1 year.
Women are three times more likely to attempt suicide than men, but the rate for successful
suicide is four times higher in men than women.
Risk factors for suicide include an existing psychiatric diagnosis, low self‐esteem, poor
physical health and physical disability, and experiencing a significant negative life event.
There is an inherited component to suicide which may be as high as 48%.
The main forms of intervening to prevent suicide include 24‐hour helplines and telephone
support lines (e.g., the Samaritans), school‐based educational programmes warning about
the early signs of suicidal tendencies, and the WHO BIC procedure.
Both medications for mood disorders and CBT can be helpful in reducing suicide risk in
vulnerable people.
CHAPTER OUTLINE
8.1 THE NATURE OF PSYCHOTIC SYMPTOMS
8.2 THE DIAGNOSIS OF SCHIZOPHRENIA SPECTRUM DISORDERS
8.3 THE PREVALENCE OF SCHIZOPHRENIA SPECTRUM DISORDERS
8.4 THE COURSE OF PSYCHOTIC SYMPTOMS
8.5 THE AETIOLOGY OF PSYCHOTIC SYMPTOMS
8.6 THE TREATMENT OF PSYCHOSIS
8.7 EXPERIENCING PSYCHOSIS REVIEWED
LEARNING OUTCOMES
When you have completed this chapter, you should be able to:
1. Describe the main clinical symptoms of psychosis, and the key features of the main
diagnostic categories defining schizophrenia spectrum disorders in DSM‐5.
2. Describe and evaluate the main biological theories of the aetiology of psychosis—
especially the role of genetics, brain neurotransmitters, and brain abnormalities.
3. Describe, evaluate, and compare the main psychological and sociocultural theories of the
aetiology of psychosis.
4. Describe a range of treatments for psychotic symptoms, including biological,
psychological, familial, and community care interventions.
Looking back I had a lot of factors that contributed to my psychosis. I was a quiet child, lacking in confidence.
My parents divorced when I was 8 years old and my father was treated for depression so it seems to run in the
family. My father was a disinterested one, reinforcing my feelings of worthlessness.
I found it hard at secondary school to make friends and not having a lot of money meant I was singled out. While
it wasn't physical, mainly name calling and being spat at, it reinforced the feeling that I didn't deserve to be here.
By now I was hearing ‘inside’ voices in my head telling me I was useless, shouting words like ‘Bitch!’ and ‘Die!’.
I was having severe mood swings. I thought about self‐harming and became controlling about my food intake. At
age 14 I started taking drugs and drinking alcohol. Between 14 and 21 I had a cannabis and cocaine addiction
which I overcame. At age 24 I was planning my suicide when my father died. Within 3 days I was having
extreme audio and visual hallucinations such as whispering and people calling my name and seeing deceased
people, dead bodies and shadows as well as everyday objects. People also transformed into other people in front of
me leading me to believe they were possessed by the dead. I also heard menacing voices issuing commands. I
experienced strange smells, tasting poison in my food and on one occasion felt someone stroking my hair. I thought
that my mind was being controlled, that I could communicate with the dead and that, because of this, the
government was spying on me and plotting to kill me.
Jo's Story (from Hayward, Meddings, & Harris, 2015)
Introduction
Psychotic symptoms can be crippling and are often characterised by disturbances in thought and
language, sensory perception, emotion regulation, and behaviour. Sufferers may experience sensory
hallucinations and also develop thinking biases which may lead to pervasive false beliefs or delusions
about themselves and the world around them. Individuals with psychotic symptoms may often withdraw
from normal social interaction because of these disturbances of perception and thought, and this can
result in poor educational performance, declining productivity, difficulties in interpersonal
relationships, neglect of day‐to‐day activities and a preoccupation with a personal world to the exclusion
of others. As a result, many individuals exhibiting psychotic symptoms fall to the bottom of the social
ladder or even become homeless because they cannot hold down a job or sustain a relationship—a
phenomenon known as downward drift (Hollingshead & Redlich, 1958) (see Section 8.5.3).
delusions Firmly held but erroneous beliefs that usually involve a misinterpretation of
perceptions or experiences.
Jo's Story is a common example of how psychotic symptoms can manifest themselves. She hears voices
criticizing her and calling her names, believes other people are controlling her, finds it hard to develop
friendships, and uses alcohol and street drugs to help control her emotions and feelings. Eventually Jo
begins to experience sensory hallucinations in which she sees dead bodies and everyday objects
transform into menacing and frightening figures, she hears threatening voices and begins to develop
paranoid thoughts. In time she feels so hopeless about the future that she just wants to die.
Psychosis is a collective name given to an extensive range of disparate symptoms that can often leave
an individual feeling frightened and confused, and the presence of different combinations of these
symptoms may lead to a diagnosis of any one of a number of schizophrenia spectrum disorders.
DSM‐5 has moved away from a single overriding diagnostic category (schizophrenia) split into a series
of subtypes (paranoid, disorganised, catatonic, undifferentiated) and now lists a number of separate
psychotic disorders that range across a spectrum depending on severity, duration and complexity of
symptoms. The main diagnostic categories in DSM‐5 are schizophrenia, schizotypal personality
disorder, delusional disorder, brief psychotic disorder, and schizoaffective disorder, and we discuss these
individually later in this chapter. But first we will discuss the key cognitive and behavioural features that
define psychosis—combinations of which give rise to the different diagnoses. These key features include
delusions, hallucinations, disorganised thinking, abnormal motor behaviour, and negative symptoms
(indicative of diminished emotional expression) (Focus Point 8.1).
schizophrenia spectrum disorders The name for separate psychotic disorders that range
across a spectrum depending on severity, duration and complexity of symptoms.
The symptoms of psychosis have been reported throughout history, but because the symptoms
can be so varied and wide ranging ‘schizophrenia’ has only gradually been isolated as a single
diagnostic category to cover these heterogeneous characteristics.
The German psychiatrist Emil Kraepelin (1896) was the first to distinguish schizophrenia
from a range of other psychiatric symptoms (such as manic depressive illness). He did this by
bringing together a number of contemporary diagnostic concepts including paranoia, catatonia
and hebephrenia (symptoms indicative of incoherence and fragmentation of personality) under
the general term dementia praecox. He assumed that dementia praecox was a single disease
that manifested itself in late adolescence or early adulthood and had a deteriorating prognosis
from which there was no recovery. In contrast to Kraepelin, the Swiss psychiatrist Eugen
Bleuler (1911) believed that the onset of dementia praecox was not simply restricted to
adolescence and early adulthood and also believed that it did not inevitably lead to dementia.
He preferred to use the term schizophrenia (from the Greek schiz, to split, and phren, the mind),
because he felt that it properly described the splitting of different psychological functions within
a single personality. Unfortunately, this term has also had its problems, with the popular belief
that the term schizophrenia refers to a split or double personality. In order to try and unify the
various symptoms under a single diagnostic category, Bleuler used the concept of the ‘breaking
of associative threads’ as being central to all of the symptoms of schizophrenia. That is,
effective thinking, communication, and action were not possible if the ability to associate things
together was disrupted. In this respect, it is interesting to note that in later sections in this
chapter we will see that there is evidence that at least some of the clinical symptoms of
schizophrenia may be determined by dysfunctions in associative and attentional processes (see
Section 8.5.2).
8.1.1 Delusions
These are firmly held but erroneous beliefs that (a) usually involve a misinterpretation of perceptions or
experiences and (b) become fixed beliefs that are resistant to change even in light of conflicting or
contradictory evidence. Such delusions are the most common symptom of psychosis in individuals with
a diagnosis of schizophrenia (Baker, Konova, Daw, & Horga, 2019), are experienced by over 75% of
those individuals hospitalised because of their psychotic symptoms (Maher, 2001) and may be the result
of abnormalities in inference or the process of shaping beliefs through experience (Hemsley & Garety,
1986). While some delusions may be clearly bizarre (e.g., the individual may believe that their entire
internal organs have been taken out and replaced by those of someone else), others may not (e.g., a
paranoid belief that the individual is constantly under surveillance by the police). Regardless of how
bizarre a delusion is, the sufferer is often able to bring reason and logic to support their delusion—even
though the underlying belief itself is clearly absurd (Maher, 2001). This ability to support absurd beliefs
with logical thought has led some clinicians to suggest that delusions may be the result of an inability to
integrate perceptual input with prior knowledge even though rational thought processes are still intact
(Frith, 1996; Frith & Dolan, 2000). For other clinicians it is suggestive of the development of biased
information processing and the development of dysfunctional beliefs about the world (e.g., Freeman,
Garety, Kuipers, Fowler, & Bebbington, 2002; Morrison, 2001a), or decision‐making processes that lead
the individual to ‘jump to conclusions’ on the basis of minimal evidence (e.g., Moritz & Woodward,
2005; Dudley, Taylor, Wickham, & Hutton, 2016).
The main types of delusion found in those experiencing psychosis are (a) persecutory delusions
(paranoia), in which the individual believes they are being persecuted, spied upon, or are in danger
(usually as the result of a conspiracy of some kind); (b) grandiose delusions, in which the individual
believes they are someone with fame or power or have exceptional abilities or wealth (e.g., Jesus
Christ, or a famous music star); (c) delusions of control, where the person believes that their
thoughts, feelings, or actions are being controlled by external forces (e.g., extraterrestrial or supernatural
beings), and this is often associated with the belief that control is being exerted through devices (such as
the radio) which are sending messages directly to the person's brain; (d) delusions of reference,
where the individual believes that independent external events are making specific reference to them
(e.g., in Jo's Story at the beginning of this chapter); (e) nihilistic delusions, where the individual
believes that some aspect of either the world or themselves has ceased to exist (e.g., the person may
believe that they are in fact dead) or a major catastrophe will occur; and (f) erotomanic delusions
when an individual falsely believes that another person is in love with him or her (see Focus Point 8.2).
persecutory delusions Delusions in which the individual believes they are being persecuted,
spied upon, or are in danger (usually as the result of a conspiracy of some kind).
grandiose delusions Delusions in which the individual believes they are someone with fame
or power or have exceptional abilities or wealth.
delusions of control Delusions where the person believes that his or her thoughts, feelings or
actions are being controlled by external forces (e.g. extraterrestrial or supernatural beings).
delusions of reference Delusions where the individual believes that independent external
events are making specific reference to him or her.
nihilistic delusions Delusions where individuals believe that some aspect of either the world
or themselves has ceased to exist (e.g. the person may believe that they are in fact dead).
erotomanic delusions A relatively rare psychotic delusion where an individual has a
delusional belief that a person of higher social status falls in love and makes amorous advances
towards them.
One common feature of psychotic thought is that sufferers frequently believe that their thoughts are
being interfered with or controlled in some way, either by being openly broadcast to others or by having
thoughts planted into their mind by external forces. This type of delusion (sometimes known as ‘hearing
voices’, see Section 8.5.2 on interpretational biases) is so common that it may offer some insight into the
cognitive deficits underlying a majority of psychotic thought. For example, in an experimental study,
Blakemore, Oakley, and Frith (2003) used hypnosis to generate beliefs in nonclinical participants that
their self‐generated actions could be attributed to an external source. They found that such erroneous
beliefs generated higher than normal levels of activation in the parietal cortex and cerebellum and they
suggest that these areas of the brain may be altered during psychotic episodes so that self‐produced
actions and thoughts are experienced as external.
8.1.2 Hallucinations
People suffering psychotic symptoms regularly report sensory abnormalities across a broad range of
sensory modalities, and this is usually manifested as perceiving things that are not there.
Hallucinations can occur in any modality (e.g., auditory, olfactory, gustatory, and tactile), but the most
common are auditory hallucinations that are reported by around 80% of sufferers (Laroi et al., 2012).
Auditory hallucinations are usually manifested as voices, and these can be experienced as external voices
commanding the individual to act in certain ways, two or more voices conversing with each other, or a
voice commentating on the individual's own thoughts. In all cases these voices are perceived as being
distinct from the individual's own thoughts. Research on brain areas involved in speech generation and
the perception of sounds suggests that when sufferers claim to hear ‘voices’, this is associated with neural
activation in the areas of the brain associated with language (Keefe, Arnold, Bayen, McEvoy, &
Wilson, 2002; Dollfus et al., 2018), even though the sufferer attributes the voices to external sources.
hallucinations A sensory experience in which a person can see, hear, smell, taste or feel
something that isn’t there.
Visual hallucinations are the second most common type of hallucination and can take either a diffuse
form as in the perception of colours and shapes that are not present, or they can be very specific such as
perceiving that a particular person (e.g., a partner or parent) is present when they are not (see Jo's Story
at the beginning of this chapter). Other hallucinations can be tactile and somatic (e.g., feeling that one's
skin is tingling or burning) or olfactory and gustatory (e.g., experiencing smells that are not present or
foods that taste unusual).
For those who believe that their hallucinations are real, such experiences can be extremely frightening.
However, while some individuals suffering psychosis are convinced their hallucinations are real, many
others are aware that their hallucinations may not be real. This suggests that psychotic episodes may be
associated with a reality‐monitoring deficit. That is, individuals suffering psychotic symptoms may have
difficulty identifying the source of a perception and difficulty distinguishing whether it is real or
imagined. In support of this possibility, Brebion et al. (2000) found that when individuals diagnosed with
schizophrenia and nonclinical controls were asked to remember words that had either been generated
by themselves or been generated by the experimenter, individuals with a diagnosis of schizophrenia
differed in three important ways from nonclinical controls. First, they were more likely to identify items
as having been in the generated list of words when they were not (false positives); second, they were
more likely to report that words they had generated themselves were generated by the experimenter;
and third, they were more likely to report that spoken items had been presented as pictures. These
results suggest that individuals diagnosed with schizophrenia have a reality‐monitoring deficit (i.e.,
a problem distinguishing between what actually occurred and what did not occur), and a self‐
monitoring deficit (i.e., they cannot distinguish between thoughts and ideas they generated
themselves and thoughts or ideas that other people generated).
self-monitoring deficit Where individuals cannot distinguish between thoughts and ideas
they generated themselves and thoughts or ideas that other people generated.
derailment A disorder of speech where the individual may drift quickly from one topic to
another during a conversation.
loose associations Disorganised thinking in which the individual may drift quickly from one
topic to another during a conversation.
clanging A form of speech pattern in schizophrenia where thinking is driven by word sounds.
For example, rhyming or alliteration may lead to the appearance of logical connections where
none in fact exists.
neologisms Made up words, frequently constructed by condensing or combining several
words.
word salads When the language of the person experiencing a psychotic episode appears so
disorganised that there seems to be no link between one phrase and the next.
People with psychotic symptoms frequently exhibit a range of speech attributes that indicate
disordered thinking. The following are examples of some of the more common of these disorganised
speech symptoms.
Word Salad
In many cases, the language of the person experiencing a psychotic episode appears so
disorganised that there seems to be no link between one phrase and the next, and this is known
as a ‘word salad’. Some word salads simply do not seem to be attempts to communicate
anything structured and appear to drift without substance from one unconnected sentence to
the next:
‘Everything is going around in slow motion. The boxes are clanging and chattering to be let out. Behind my
forehead the past is surfacing mixing a bottle of acid solution. A stake jams a door that leads to a mirage of
broken appearances. Inside a box, pounding fists try to pull down my imagination. The ground work is split
into hundreds of pieces; each fragment is separate as if it had some kind of individual purpose. The truth is
locked up in a unit’.
In other cases, word salads appear to be sets of phrases or words linked by association to the
previous phrase. For example, in answer to the question ‘What colour is your dress?’, a sufferer
answered ‘red…Santa Claus…flying through the sky….God’. This is known as loose
association or derailment and makes it very difficult to follow the conversation of an individual
when a single, often unimportant word from the previous sentence becomes the focus of the
next sentence.
Neologisms
Many individuals suffering psychotic symptoms make up words and use them in their
attempts to communicate. These are called neologisms, and they are frequently constructed by
condensing or combining several words. Some examples given by individual sufferers include
the following:
Circlingology Study of a rolling circle; a fruit can in the form of a cylinder rolling
Clanging
People exhibiting psychotic symptoms often try to communicate using words that rhyme, and
this is known as ‘clanging’. In other cases, sufferers only appear able to construct sentences if
the words in them rhyme—and this communication may begin with a sensible response but
then degenerate into nonsense because of the urge to ‘clang’ as the following transcript shows:
TH: ‘What colour is your dress?’
CL: ‘Red. . . . Like a bed’.
TH: ‘Why is it like a bed?’
CL: ‘Because it's dead’.
TH: ‘Why is a bed like being dead?’
CL: ‘I dunno. . . maybe it's a med’.
TH: ‘What's a med?’
CL: ‘A bled’.
avolition An inability to carry out or complete normal day-to-day goal-oriented activities,
resulting in the individual showing little interest in social or work activities.
alogia A lack of verbal fluency in which the individual gives very brief, empty replies to
questions.
SECTION SUMMARY
brief psychotic disorder The sudden onset of at least one of the main psychotic symptoms,
with this change from a nonpsychotic state to the appearance of symptoms occurring within 2
weeks and being associated with emotional turmoil or overwhelming confusion.
8.2.3 Schizophrenia
A diagnosis of schizophrenia is given when there is a range of symptoms covering cognitive,
behavioural, and emotional dysfunction and also impaired occupational or social functioning—but no
single symptom is characteristic of this diagnosis. Table 8.3 shows the DSM‐5 diagnostic criteria for
schizophrenia, and two or more of the five symptoms must be present for a significant proportion of
time during a 1‐month period or longer. It is also important to recognise that these symptoms must be
associated with impaired functioning across areas such as work, interpersonal relations, or self‐care.
Prodromal symptoms often precede the active phase, and residual symptoms may follow it (see below).
Similarly, negative symptoms or social isolation are common during the prodromal phase and can be a
significant indicator of later full‐blown symptoms of psychosis (Lencz, Smith, Auther, Correll, &
Cornblatt, 2004). Additional symptoms displayed by individuals with a diagnosis of schizophrenia may
include inappropriate affect (e.g., laughing inappropriately), depressed mood, anxiety or anger, disturbed
sleep patterns, and lack of interest in eating. Individuals with a diagnosis of schizophrenia may often
show a lack of insight into their symptoms and may be hostile and aggressive. However, aggression is
more common in younger males and for individuals with a past history of violence, nonadherence to
treatment, substance abuse, and impulsivity (DSM‐5, American Psychiatric Association, 2013, p. 101;
Johnson et al., 2016; Buchanan, Sint, Swanson, & Rosenheck, 2019). It must, however, be emphasised
that the vast majority of people with a diagnosis of schizophrenia are not aggressive and are more likely
to be the victims of violence and aggression than be perpetrators.
TABLE 8.3 Summary: the main DSM‐5 diagnostic criteria for schizophrenia
At least two of the following must be present for a significant period of time during a 1‐month
period:
Delusions
Hallucinations
Disorganised speech
Highly disorganised or catatonic behaviour
Negative symptoms such as diminished emotional expression
The ability to function in one or more major areas such as work, self‐care, or interpersonal
relationships is markedly diminished
Continuous signs of the disturbance last for at least 6 months
The disorder is not directly attributable to the use of a substance or medication and is not better
explained by other mental disorder
8.2.4 Schizoaffective Disorder
Schizoaffective disorder is diagnosed when an individual displays symptoms that meet the criteria
for schizophrenia (discussed previously) but where there is also a significant mood episode, reflecting
either depression or mania, that is present for the majority of the duration of the illness.
Schizoaffective disorder will frequently impair occupational functioning and may be associated with
restricted social functioning, difficulties with self‐care, and an increased risk for suicide (see Table 8.4).
TABLE 8.4 Summary: main DSM‐5 diagnostic criteria for schizoaffective disorder
A continuous period of illness during which there is a major mood episode (major depressive or
manic)
Delusions or hallucinations for 2 or more weeks without the occurrence of a major mood episode
Symptoms for a major mood episode are present for the majority of the duration of the illness
The disorder is not directly attributable to the use of a substance or medication and is not better
explained by other mental disorder
SELF‐TEST QUESTIONS
What are the four main schizophrenia spectrum disorder diagnostic categories in DSM‐5?
SECTION SUMMARY
Some studies have identified consistent cultural differences in the prevalence of schizophrenia
within individual countries. For example, in a UK‐based study, King et al. (2005) found that the
reporting of psychotic symptoms was higher in ethnic minority groups than in White individuals.
This increase in the reporting of symptoms was twice as high in people of African‐Caribbean origin as
in Whites. There have been a number of hypotheses that have attempted to explain this apparent
cultural difference, and there is at least some evidence that the higher symptom levels in Black American
men than White American men may be the result of racial disparities in mental health treatment
between Blacks and Whites in the US (Whaley, 2004). However, higher levels of stress experienced by
ethnic minorities may also be a reason why ethnicity is a risk factor for psychosis, and some studies have
shown that this differential effect of ethnicity disappears once psychological stress levels are controlled
for (Cohen & Marino, 2013; Tortelli et al., 2018). In other within‐country studies, immigrants have been
shown to have significantly higher rates of schizophrenia diagnosis than members of the native‐born
population. A personal or family history of migration is an important risk factor, and immigrants from
developing countries are at greater risk than those from developed countries (Cantor‐Graae & Selten,
2005). At least part of the explanation for the higher incidence in immigrants can be traced to the stress
caused by many of the initial consequences of immigration, such as language difficulties,
unemployment, poor housing, and low socio‐economic status (Hjern, Wicks, & Dalman, 2004). Finally,
the incidence of schizophrenia is similar for males and females, although females tend to have a later
age of onset and fewer hospital admissions, and this may be the result of females attaining higher levels
of social role functioning before illness, which confers a better outcome (Hafner, 2000; Murray & Van
Os, 1998; Angermeyer, Kuhn, & Goldstein, 1990).
Finally, delusional disorder and brief psychotic disorder are new diagnostic categories within
schizophrenia spectrum disorders, and DSM‐5 estimates that the lifetime prevalence rate for delusional
disorder is around 0.2% and that brief psychotic disorder may account for 9% of cases of first‐onset
psychosis (DSM‐5, American Psychiatric Association, 2013).
SELF‐TEST QUESTIONS
What is the estimated lifetime prevalence rate for a diagnosis of schizophrenia worldwide?
Some ethnic and cultural differences in the prevalence rates of schizophrenia have been
found. Can you describe some of these differences?
SECTION SUMMARY
prodromal stage The slow deterioration from normal functioning to the delusional and
dysfunctional thinking characteristic of many forms of schizophrenia, normally taking place
over an average of 5 years.
During the development of DSM‐5, a case was made for including what was to be called attenuated
psychotic symptoms syndrome (also known as ‘psychosis risk syndrome’) (Woods, Walsh, &
McGlashan, 2010). This would have been characterised by mild psychotic symptoms that don’t meet the
diagnostic criteria for full‐blown schizophrenia, but would enable clinicians to identify at least some
individuals who were in the prodromal state for subsequent schizophrenia spectrum disorders. However,
the final decision was to omit this category from DSM‐5 because of the poor diagnostic reliability
revealed in an earlier clinical trial (Carpenter & van Os, 2011) and the fact that only a proportion of
those who exhibit prodromal symptoms go on to warrant a full diagnosis of schizophrenia (studies
suggest a conversion rate of between 20 and 50%; Corcoran, First, & Cornblatt, 2010).
active stage The stage in which an individual begins to show unambiguous symptoms of
psychosis, including delusions, hallucinations, disordered speech and communication, and a
range of full‐blown symptoms.
residual stage The stage of psychosis when the individual ceases to show prominent signs of
positive symptoms (such as delusions, hallucinations or disordered speech).
CASE HISTORY 8.1 THE PRODROMAL STAGE—IDENTIFYING
THE EARLY SIGNS
‘Fifteen‐year‐old Caitlin was an excellent student with many friends when she entered the
ninth grade. One year later, she suddenly became restless in school, stopped paying attention
to her teachers, and eventually failed all of her subjects. At home she appeared increasingly
withdrawn and isolated, spending hours sleeping or watching television. The previously
even‐tempered adolescent became angry, anxious, and suspicious of those around her, and
was occasionally seen talking to herself while making repetitive, odd hand motions. Several
years later, hearing voices and insisting that the CIA was hatching an elaborate plot to
murder her and her family, she was diagnosed with schizophrenia’.
(https://www.disorders.org/personality‐disorders/schizophrenia/can‐we‐prevent‐or‐delay‐
schizophrenia/)
Clinical Commentary
This description of the development of Caitlin’s symptoms is typical of the prodromal
stage of schizophrenia. She became withdrawn, ill‐tempered, anxious, and suspicious
and showed a marked decline in academic performance. Unfortunately, these signs are
often difficult to differentiate from many of the behavioural changes exhibited by
normal individuals as they progress through adolescence, so diagnosis at an early stage is
often difficult. These difficulties with early diagnosis are unfortunate, because evidence
suggests that the earlier treatment begins after the development of actual psychosis, the
more rapid the immediate recovery and the better the overall outcome.
Some more specific prodromal features associated with schizophrenia include:
Peculiar behaviours
Impairment in personal hygiene and grooming
Inappropriate affect (e.g., laughing when talking about something sad)
Vague, overly elaborate, or circumstantial speech
Poverty of speech
Odd beliefs or magical thinking
Unusual perceptual experiences
SELF‐TEST QUESTIONS
What are the main stages through which psychotic symptoms normally develop?
What are the factors that may contribute to relapse following recovery from an acute
psychotic episode?
SECTION SUMMARY
Genetic factors
It has long been known that psychotic symptoms appear to run in families, and this suggests that there
may well be some form of inherited predisposition. That psychosis has an inherited component has
been supported by the results of concordance studies. Table 8.5 shows the probability with which
a family member or relative of an individual diagnosed with schizophrenia will also develop
the disorder. This shows that the probability with which the family member or relative will develop
schizophrenia is dependent on how closely they are related—or more specifically, how much genetic
material the two share in common (Gottesman et al., 1987; Cardno et al., 1999). Studies have suggested
that an individual who has a first‐degree relative diagnosed with schizophrenia is 6–10 times more likely
to develop psychotic symptoms than someone who has no first‐degree relatives diagnosed with
schizophrenia (Schneider & Deldin, 2001; Chou et al., 2017).
TABLE 8.5 Concordance rates for individuals with a diagnosis of schizophrenia (after Gottesman, McGuffin, &
Farmer, 1987)
Relation to Proband % Diagnosed with Schizophrenia
Spouse 1.00
Grandchildren 2.84
Nieces/Nephews 2.65
Children 9.35
Siblings 7.30
Dizygotic (Fraternal) Twins 12.08
Monozygotic (Identical) Twins 44.30
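As a rough illustration of what these concordance rates imply, the elevated risk for relatives can be checked with simple arithmetic. Note that the 1% general‐population baseline used below is a commonly quoted assumption for the lifetime risk of schizophrenia, not a figure taken from Table 8.5, so the resulting ratios are only a sketch:

```python
# Rough illustration: relative risk of a schizophrenia diagnosis for each
# class of relative in Table 8.5, against an ASSUMED general-population
# lifetime risk of about 1% (an illustrative baseline, not from the table).

concordance = {            # % diagnosed, from Table 8.5
    "Spouse": 1.00,
    "Grandchildren": 2.84,
    "Nieces/Nephews": 2.65,
    "Children": 9.35,
    "Siblings": 7.30,
    "DZ (fraternal) twins": 12.08,
    "MZ (identical) twins": 44.30,
}

BASELINE = 1.0  # assumed lifetime risk in the general population (%)

for relative, rate in concordance.items():
    relative_risk = rate / BASELINE
    print(f"{relative:22s} {relative_risk:5.1f}x baseline")

# First-degree relatives (children, siblings) come out at roughly 7x and 9x,
# in the same region as the 6-10x figure quoted in the text.
```

The exact ratios depend entirely on the assumed baseline, which is why published estimates are given as a range rather than a single figure.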
However, simply because psychotic symptoms tend to run in families does not establish a genetic basis
for this psychopathology. For example, some family environments may have dysfunctional elements (e.g.,
difficulties in communication between family members) that may give rise to the development of
psychosis. In order to examine the genetic basis more carefully, many researchers have undertaken twin
studies, in which they have compared the probability with which monozygotic (MZ) and dizygotic
(DZ) twins both develop symptoms indicative of schizophrenia. MZ twins share 100% of their genetic
material, whereas DZ twins share on average only 50% of their genes, so a genetic explanation of psychotic
symptoms would predict that there would be greater concordance in the diagnosis of schizophrenia in
MZ than in DZ twins. This can clearly be seen in Table 8.5 where the concordance rate for MZ twins is
44% but falls to only 12% in DZ twins. Twin studies have indicated that the heritability estimate for
schizophrenia is high at between 64 and 81% (Sullivan, Kendler, & Neale, 2003; Lichtenstein et al.,
2009), which makes schizophrenia one of the most heritable of psychiatric disorders (Gejman, Sanders,
& Kendler, 2011).
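One classical way of turning MZ/DZ concordance into a heritability figure is Falconer's formula, h² ≈ 2(rMZ − rDZ). Strictly, the formula applies to twin correlations rather than raw concordance rates, so the sketch below, which plugs the Table 8.5 rates in directly, is only an illustrative simplification and not the method behind the published 64–81% estimates:

```python
# Falconer's formula: h^2 ~= 2 * (r_MZ - r_DZ).
# Illustrative simplification: the raw concordance rates from Table 8.5
# stand in for the twin correlations the formula actually requires.

r_mz = 0.4430  # monozygotic (identical) twin concordance, Table 8.5
r_dz = 0.1208  # dizygotic (fraternal) twin concordance, Table 8.5

heritability = 2 * (r_mz - r_dz)
print(f"h^2 ~= {heritability:.2f}")  # ~0.64, close to the lower bound of
                                     # the 64-81% range cited in the text
```

Even this crude calculation lands near the lower bound of the range reported by the formal model-fitting studies, which estimate heritability from much larger samples and correct for the issues noted below.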
As convincing as these data may seem, there are still problems in interpreting twin studies. For example,
(a) MZ twins will always be the same sex whereas DZ twins may not be; (b) MZ twins are usually
physically identical, unlike DZ twins, and this may lead to family and friends treating MZ twins more
similarly than they would DZ twins (i.e., MZ twins could experience more similar environmental factors
than DZ twins); and (c) MZ twins are likely to have shared the same placenta prior to birth whereas DZ
twins each have their own, and this would mean that any intrauterine abnormalities would be more likely to affect
both MZ twins through the shared placenta (Davis & Phelps, 1995) (see also Chapter 1, Figure 1.2 for a
discussion of how twin studies may overestimate the heritability of a psychopathology when compared
with molecular genetics research).
However, many of these difficulties of interpretation can be overcome by studying the offspring of MZ
and DZ twins rather than the twins themselves (Gottesman & Bertelsen, 1989). If one MZ twin develops
psychotic symptoms and the other does not, any genetic element in psychosis should still show up in the
children of either of the two MZ twins. That is, the children of the MZ twins should still exhibit similar
rates of risk for schizophrenia (because they have inherited the same predisposition)—even though one
of their parents developed schizophrenia and the other did not. This is exactly what Gottesman &
Bertelsen (1989) found: the risk of developing psychotic symptoms was 16.8% for the offspring of the
MZ twins who were diagnosed with schizophrenia, and 17.4% for the offspring of the MZ twins who
were not diagnosed with schizophrenia. This suggests
that a genetic risk factor has been passed on to offspring, even though one set of parents did not develop
schizophrenia themselves. A similar kind of family‐based study of the genetics of schizophrenia is
known as a familial high‐risk study, which begins with biological parents with schizophrenia and
then studies their offspring in a longitudinal study to identify the frequency with which these offspring
develop psychotic symptoms. One such study has found that children of a parent with schizophrenia are
six times more likely to develop a schizophrenia spectrum disorder than offspring of parents without a
diagnosis of schizophrenia (Goldstein, Buka, Seidman, & Tsuang, 2010).
Another way of tackling the problems of separating out the influence of genetic inheritance and
environmental experience is to look at the incidence of schizophrenia in children who are biologically
similar but have been reared apart (adoption studies). If there is an important genetic element to
psychosis, then we would expect the children of a mother diagnosed with schizophrenia to have similar
probabilities of developing schizophrenia regardless of whether they had been reared with their mother
or not. A seminal study by Heston (1966) compared 47 adopted children who were reared apart from
their schizophrenic biological mothers with 50 control adopted children whose mothers were not
diagnosed with schizophrenia. He found symptoms of psychosis in 16.6% of the adopted children of
the schizophrenic mothers and no symptoms in the adopted children of mothers without schizophrenia.
Studies of adopted children conducted in Denmark have shown similar results. Kety (1988) and Kety et
al. (1994) found that adopted children who develop psychotic symptoms are significantly more likely to
have had biological relatives with a diagnosis of schizophrenia (21.4%) than adoptive relatives with a
diagnosis of schizophrenia (5.4%). These types of studies provide strong evidence for a genetic
component to schizophrenia and psychosis.
adoption studies Research conducted on children who have been reared by individuals other
than their biological parents.
However, some more recent adoption studies suggest that genetic liability still interacts with
environmental factors to predict the development of psychotic symptoms. Wahlberg et al. (2004) found
that in adopted children, inherited genetic factors were an important predictor of a diagnosis of
schizophrenia but only in combination with certain environmental factors found in the adopted home
environment. In this particular study, an adopted child was more likely to be diagnosed with
schizophrenia if they had a biologically inherited predisposition and they were also brought up in an
adopted home environment where there were dysfunctional communication patterns (see Section 8.5.3).
While genetic inheritance is an important predictor of psychotic symptoms, this is further evidence that
genetic factors interact with environmental factors in a way predicted by diathesis–stress models.
Not only are these kinds of genetic studies important in determining whether a diagnosis of
schizophrenia has a significant inherited component, they are also beginning to show that individual
symptoms associated with a diagnosis of schizophrenia may also have an important inherited
component, and these include factors such as experiencing hallucinations (Hur, Cherny, & Sham, 2012),
volume of gray matter in specific brain regions (van Haren et al., 2012), and catatonia (Beckmann &
Franzek, 2000). However, for some other psychotic symptoms there is less evidence for overriding
genetic determination, and these include some negative symptoms such as anhedonia (Craver & Pogue‐
Geile, 1999) and delusions (Cardno & McGuffin, 2006; Varghese et al., 2013).
Finally, recent genetic studies of schizophrenia and its related disorders have begun to show that there
are genetic overlaps between schizophrenia and some other psychiatric disorders, such as bipolar
disorder and autism, suggesting that variation in a specific gene or set of genes may simultaneously
affect the development of these different diagnoses (Gejman, Sanders, & Kendler, 2011; Fromer et al.,
2013).
Molecular genetics
If, as seems likely, there is a genetic component to psychosis, how is it transmitted between related
individuals, and how does this inherited component influence the development of psychotic symptoms?
In recent years, much effort has been directed at attempting to identify the specific genes through which
the risk for psychosis may be transmitted (Henriksen, Nordgaard, & Jansson, 2017), the chromosomes
on which these genes are located (see Owen & Doherty, 2016), and how these genes, their possible
defects, and their interaction with environmental factors may give rise to psychotic symptoms
(Andreasen, 2001; Zwicker, Denovan‐Wright, & Uher, 2018).
These endeavours have often involved genetic linkage analyses, in which blood samples are collected
in order to study the inheritance patterns within families that have members diagnosed with
schizophrenia. Linkage analyses work by comparing the inheritance of characteristics for which gene
location is well known (e.g., eye colour) with the inheritance of psychotic symptoms. If the inheritance
of, for example, eye colour follows the same pattern within the family as psychotic symptoms, then it can
reasonably be concluded that the gene controlling psychotic symptoms is probably found on the same
chromosome as the gene controlling eye colour, and is probably genetically linked to that ‘marker’
characteristic in some way. Research Methods in Clinical Psychology 8.1 illustrates an example of how a
particular trait of those diagnosed with schizophrenia, in this case poor eye‐tracking of a moving object,
can be used as a genetic marker to track other psychotic symptoms that may be linked genetically to this
characteristic.
Using analyses such as these, genes associated with the development of psychotic symptoms have been
identified on a number of chromosomes including 8 and 22 (Kendler et al., 2000), 2, 3, 5, 6, 11, 13, and
20 (Badner & Gershon, 2002; Levinson et al., 2002), and 1 and 15 (Gejman, Sanders, & Kendler, 2011).
Also, other techniques, such as genome‐wide association studies (GWAS), allow researchers to
identify genetic loci that are associated with schizophrenia. In 2014 the Psychiatric Genomics
Consortium, an international consortium of scientists conducting analyses of genetic data in relation to
mental health problems, identified 108 different genetic loci that were associated with schizophrenia
(Schizophrenia Working Group of the Psychiatric Genomics Consortium, 2014), confirming that
schizophrenia is a condition underpinned by very many genes that in various combinations can create a
large variety of symptoms with a range of symptom severities. GWAS can also identify rare mutations
in genes that might give rise to psychotic symptoms, especially those mutations that give rise to ‘copy
number variations’ (CNVs), in which a segment of DNA is abnormally deleted or
duplicated. Mutations resulting in DNA deletions (International Schizophrenia Consortium, 2008) as
well as mutations causing DNA duplications (Levinson et al., 2011; Kirov et al., 2009) have been found
to be associated with schizophrenia, and one candidate gene, DRD2, encodes a receptor
for dopamine in the brain (Lawford et al., 2005), a neurotransmitter that has frequently been
implicated in psychotic symptoms (see following section on brain neurotransmitters). Other studies have
regularly identified 22q11.2 deletion syndrome, which is a deletion found on the long arm of
chromosome 22, as one of the strongest known genetic risk factors for schizophrenia (Owen & Doherty,
2016), and a high proportion of adults with this deletion have a diagnosis of schizophrenia (25% in
22q11.2 deletion syndrome compared with 0.75% in the general population) (Murphy, Jones, & Owen,
1999). The physical phenotype of this deletion is very variable, but cognitive impairment—which is a
significant feature of schizophrenia—is a common characteristic.
However, while many studies have shown associations between individual genes and schizophrenia
symptoms, there have also often been failures to replicate many of these findings (Kim, Zerwas, Trace,
& Sullivan, 2011). This probably testifies to the heterogeneity of schizophrenia as a diagnostic category
and the fact that different people with a diagnosis of schizophrenia may not have the same underlying
genetic factors contributing to their symptoms. First, many of the mutations causing the CNVs
mentioned above are very rare, and so schizophrenia symptoms may be caused by many different and
very rare gene mutations. Second, some of the gene factors associated with schizophrenia may have
their impact on quite specific aspects of psychological functioning. For example, deficits in executive
functioning (planning, working memory, problem solving) are known to be characteristic of
schizophrenia, and some genes have been identified that are associated specifically with executive
functioning deficits in schizophrenia (Harrison & Weinberger, 2004; Owen, Williams, & O'Donovan,
2004). But executive functioning deficits are also associated with many other psychiatric disorders, such
as autistic spectrum disorder, intellectual disabilities, and bipolar disorder, and so this particular gene
mutation may not be purely a risk factor for schizophrenia and we might predict it to be present
without an individual necessarily developing psychotic symptoms.
Brain neurotransmitters
Cognition and behaviour are very much dependent on the efficient working of brain neurotransmitters,
which enable effective communication between brain cells and functionally different parts of the brain
itself. It is not surprising, therefore, that many researchers have suspected that the thought disorders,
hallucinations, and behaviour problems characteristic in the diagnosis of schizophrenia may be caused
by malfunctions in these brain neurotransmitters. The biochemical theory of schizophrenia that has
been most prominent over the past 50 years is known as the dopamine hypothesis, and this account
argues that the symptoms of schizophrenia are importantly related to excess activity of the
neurotransmitter dopamine. There are a number of factors that have led to the implication of excess
dopamine activity.
dopamine hypothesis A theory which argues that the symptoms of schizophrenia are related
to excess activity of the neurotransmitter dopamine.
First, the discovery of antipsychotic drugs that helped to alleviate the symptoms of psychosis (such as
the phenothiazines) led to the discovery that such drugs acted by blocking the brain's dopamine
receptor sites and so reduced dopamine activity (Schneider & Deldin, 2001). Interestingly, while the
administration of antipsychotic drugs alleviated many of the positive symptoms of schizophrenia, such
as thought disorder and hallucinations, they also had the side effect of producing muscle tremors
very similar to those seen in Parkinson's disease, and it was already known that Parkinson's disease was
caused by low levels of dopamine. In contrast, when people suffering Parkinson's disease were given the
drug L‐dopa to raise brain dopamine levels, they often began to exhibit psychotic symptoms (Grilly,
2002). This evidence implies that either high levels of brain dopamine or excess dopamine activity is
responsible for many of the symptoms of psychosis. Subsequent research has suggested that many
antipsychotic drugs have their effect by binding specifically to dopamine receptors and reducing brain
dopamine activity (Burt, Creese, & Snyder, 1977).
Second, during the 1970s it was noticed that there was a strong link between excessive use of
amphetamines and a syndrome known as amphetamine psychosis. When taken in high doses for
long periods of time, amphetamines produce behavioural symptoms in humans and animals that closely
resemble symptoms of psychosis. These include paranoia and repetitive, stereotyped behaviour patterns
(Angrist, Lee, & Gershon, 1974). Subsequently we have learned that amphetamines produce these
disturbed behaviour patterns by increasing brain dopamine activity, and giving amphetamines to those
diagnosed with schizophrenia actually increases the severity of their symptoms (Faustman, 1995).
amphetamine psychosis A syndrome in which high doses of amphetamines taken for long
periods of time produce behavioural symptoms in humans and animals that closely resemble
symptoms of psychosis.
Third, brain imaging studies have indicated that individuals diagnosed with schizophrenia show
excessive levels of dopamine released from areas of the brain such as the basal ganglia—especially when
biochemical precursors to dopamine, such as dopa, are administered to the individual (Carlsson, 2001;
Goldsmith, Shapiro, & Joyce, 1997).
Finally, post‐mortem studies have found increased levels of dopamine and significantly more dopamine
receptors in the brains of deceased schizophrenia sufferers—especially in the limbic area of the brain
(Seeman & Kapur, 2001).
So, how might dopamine activity be involved in the production of psychotic symptoms? Figure 8.1
illustrates two important dopamine pathways in the brain, the mesolimbic pathway and the
mesocortical pathway. These two pathways begin in the ventral tegmental area of the brain but
may have quite different effects on the appearance of psychotic symptoms. First, an excess of dopamine
receptors only seems to be related to the positive symptoms associated with schizophrenia
(hallucinations, delusions, disordered speech). This is consistent with the fact that antipsychotic drugs
only appear to attenuate positive symptoms and have little or no effect on negative symptoms (the
behavioural symptoms associated with flattened affect), and this effect of excess dopamine appears to be
localised in the mesolimbic dopamine pathway (Davis, Kahn, Ko, & Davidson, 1991). However, the
mesocortical pathway begins in the ventral tegmental area but projects to the prefrontal cortex, and the
dopamine neurons in the prefrontal cortex may be underactive. This has important implications for
cognitive activity because the prefrontal cortex is the substrate for important cognitive processes such as
working memory, and these cognitive processes contribute to motivated and planned behaviour
(Winterer & Weinberger, 2004). In this way, dopamine activity might account for both the positive and
the negative symptoms observed in schizophrenia, but because antipsychotic drugs block dopamine
receptors only in the mesolimbic pathway, this accounts for why such drugs only affect positive
symptoms.
mesolimbic pathway One of two important dopamine pathways in the brain, which may be
impaired during schizophrenia. The other pathway is the mesocortical pathway.
mesocortical pathway One of two important dopamine pathways in the brain, which may
be impaired during schizophrenia. The other pathway is the mesolimbic pathway.
While the dopamine hypothesis has been an influential biochemical theory of schizophrenia for more
than 30 years, there is still some evidence that does not fit comfortably within this hypothesis. First, while
antipsychotic drugs are usually effective in dealing with many of the symptoms of schizophrenia, they
do not start having an effect on symptoms until about 6 weeks after treatment has commenced. This is
unusual, because antipsychotic drugs are known to start blocking dopamine receptors in the brain
within hours of administration, so we would expect improvement to be immediate (Sanislow & Carson,
2001; Davis, 1978). Second, many new antipsychotic drugs are effective despite having only a minimal
effect on brain dopamine levels (e.g., clozapine) or appear to be effective because they not only block
dopamine receptors but also block other neurotransmitters such as serotonin (Nordstrom et al., 1995).
Other neurotransmitters that have been implicated in psychotic symptoms include serotonin,
acetylcholine, glutamate, and gamma‐aminobutyric acid (GABA) neurons (Stone, Morrison, & Pilowsky,
2007; Brisch et al., 2014), and this is perhaps not so surprising given that serotonin neurons regulate
dopamine neurons in the mesolimbic pathway, and glutamate and dopamine dysregulation may interact
with each other. A full understanding of the role of neurotransmitters in psychotic symptomatology will
only result from a full understanding of how these brain neurotransmitters affect each other, and how
this interaction influences brain processes that give rise to both positive and negative symptoms.
enlarged ventricles Enlargement of the areas in the brain containing cerebrospinal fluid,
associated with schizophrenia.
Second, schizophrenia is associated with reduced volume of gray matter in the prefrontal cortex
(Buchanan, Vladar, Barta, & Pearlson, 1998). This area plays an important role in a number of
cognitive processes, the most important being executive functioning, which enables planning, goal‐
directed behaviour, and decision‐making; it also mediates speech and coordinates working memory.
Deficits in executive functioning would encompass poor performance on cognitive tasks associated with
speed and accuracy, abstraction/categorisation, memory, and sustained attention, and they are also
associated with negative symptoms of schizophrenia such as blunted affect and social withdrawal
(Antonova, Sharma, Morris, & Kumari, 2004; Artiges et al., 2000; Pinkham, Penn, Perkins, &
Lieberman, 2003). In particular, individuals with a diagnosis of schizophrenia who exhibit negative
symptoms show significantly lower prefrontal cortex metabolic rates than nonsufferers (Potkin et al.,
2002), and they have reduced prefrontal cortex blood flow when undertaking decision‐making card
sorting tasks such as the Wisconsin card sort test (WCST) (Weinberger, Berman, & Illowsky, 1988). All
of this is consistent with the fact that the mesocortical dopamine pathway extends to the prefrontal
cortex, and that dopamine neurons in the prefrontal cortex are relatively less active in individuals with
schizophrenia. Most recent evidence suggests that these deficits in prefrontal cortex functioning in
schizophrenia are not necessarily due to a reduction in the number of neurons in this area, but to
disrupted synaptic connections between neurons in the glutamatergic, GABAergic, and dopaminergic
pathways (Seshadri, Zeledon & Sawa, 2013) and to a reduction in the dendritic spines of neurons
which reduces the connectivity between these cells (Paspalas, Wang, & Arnsten, 2013; McGlashan &
Hoffman, 2000). This reduced connectivity between neurons in an area of the brain responsible for
executive functioning may well give rise to the disordered speech and behavioural disorganisation often
found in schizophrenia. Figure 8.2 shows how a positron emission tomography (PET) scan reveals
decreased frontal lobe activity in a schizophrenia sufferer compared with a healthy control participant,
as well as the enlarged ventricles in the brain of the schizophrenia sufferer.
dendritic spines Small protrusions from a neuron’s dendrites, each receiving input from a
single synapse of an axon.
FIGURE 8.1 Abnormalities in dopamine activity may be linked to the brain's mesocortical pathway and the mesolimbic
pathway. Both begin in the ventral tegmental area, but the former projects to the prefrontal cortex and the latter to the
hypothalamus, amygdala, hippocampus, and nucleus accumbens. The dopamine neurons in the prefrontal cortex may be
underactive (leading to the negative symptoms of schizophrenia), and this underactivity may then fail to inhibit dopamine
neurons in the mesolimbic pathway causing an excess of dopamine activity in this pathway (resulting in positive symptoms)
(e.g., Davis, Kahn, Ko, & Davidson, 1991).
Third, brain imaging studies have also shown abnormalities in the temporal cortex, including
limbic structures, the basal ganglia, and the cerebellum (Shenton, Dickey, Frumin, &
McCarley, 2001; Gur, Cowell, et al., 2000; Gur, Turetsky, et al., 2000). Abnormalities in neural activity
in the temporal lobe‐limbic system are more associated with the positive symptoms of schizophrenia
such as hallucinations and symptoms of thought disorder (McCarley et al., 2002), and auditory
hallucinations have been shown to be associated with neural activation in the temporal lobes‐limbic
system (Shergill, Brammer, Williams, Murray, & McGuire, 2000). These deficits are also associated with
reduced volume in the temporal cortex and hippocampus in individuals with a diagnosis of
schizophrenia (Steen et al., 2006; Fischer et al., 2012). Furthermore, impaired hippocampal function in
schizophrenia could underlie a range of symptoms because of the role of the hippocampus in memory
for events and facts and in pattern completion, all of which are disrupted in schizophrenia and could
give rise to spurious associations, chaotic speech, and hallucinations (Tamminga, Stan, & Wagner, 2010;
Lieberman et al., 2018).
temporal cortex Abnormalities in this brain area are associated with symptoms of
schizophrenia.
basal ganglia A series of structures located deep in the brain responsible for motor
movements.
cerebellum The part of the brain at the back of the skull that coordinates muscular activity.
These findings tend to suggest that abnormalities in different areas of the brain may each be associated
with different symptoms of psychosis. Some individuals with a diagnosis of schizophrenia show
abnormalities in some of these brain areas, but many others show abnormalities in all of them—which
explains why many exhibit both positive and negative symptoms (Kubicki et al., 2002).
One final issue, of course, is what causes these structural and functional differences in the brains of
individuals with a diagnosis of schizophrenia. One factor we have already mentioned is genetic
mutation in genes that control the development of the brain and its associated cognitive processes.
Many of the neurological defects found in schizophrenia research are ones that could have occurred
only during early brain development when the complex structure of the brain is developing, and this
suggests that prenatal factors may be important in causing subsequent brain abnormalities (Allin &
Murray, 2002). In particular, individuals diagnosed with schizophrenia do not show the normal
hemispheric asymmetry in brain development that occurs during the second trimester of pregnancy (4–
6 months), and this may give rise to deficits in those areas of the brain concerned with language and
associative learning (Sommer, Aleman, Ramsey, Bouma, & Kahn, 2001). In addition, brain damage or
abnormalities that occur after the third trimester of pregnancy are normally self‐repairing through a
process known as glial reactions. That such repair is not found in post‐mortem studies of the brains of
individuals diagnosed with schizophrenia suggests that brain areas must have been damaged or suffered
abnormal development prior to the third trimester (Brennan & Walker, 2001).
Another important consideration is that environmental factors may influence the early development of
the brain either during gestation or at birth. Risk factors that have been postulated include birth
complications, maternal nicotine consumption during pregnancy, and maternal infections. Birth
complications—such as reduced supply of oxygen to the brain—appear to occur at a higher rate in
individuals who eventually display symptoms of psychosis (Brown, 2011; Walker, Kessler, Bollini, &
Hochman, 2004), but, of course, not everyone who suffers birth complications then develops psychosis,
so other factors must be involved. Cigarette smoking during pregnancy has been associated with
significantly higher risk of schizophrenia symptoms in the offspring—possibly as a result of the effects of
maternal nicotine consumption on offspring brain development (Niemelä et al., 2016). And the probability of an
offspring developing schizophrenia is also significantly higher in mothers who have suffered an infection
during pregnancy (Brown & Derkits, 2010), and one particular infection that has been widely studied in
this respect is influenza. For example, one study suggested that a mother's exposure to influenza during
the first trimester of pregnancy resulted in a sevenfold increase in the probability of their offspring
developing psychotic symptoms (Brown et al., 2004). However, the effect sizes in studies such as these
are small, and there have been many failed attempts to demonstrate that maternal influenza is a risk
factor for offspring psychosis, so the jury is still out on this issue. Some of the evidence for a role of
maternal influenza on offspring psychosis is discussed in Focus Point 8.4.
Finally, if the causes of schizophrenia can be traced to early brain development, then why don't at‐risk
individuals usually develop psychotic symptoms until they are well into adolescence? There may be at
least two reasons for this. First, we have argued that many of the symptoms of schizophrenia—and
especially the negative symptoms—can be traced to abnormalities in the prefrontal cortex, the area that
controls many complex cognitive activities. However, the prefrontal cortex is a brain structure that only
fully matures in adolescence and early adulthood, so any developing deficits in that brain region are
only likely to manifest in an obvious way at maturation (Giedd, 2004), and this is consistent with
adolescents at risk for psychosis showing prodromal symptoms such as social withdrawal, shallow
emotion, and deterioration in school work during their early teens. Second, late adolescence is also a
period associated with increased stress, and especially exposure to stressors that the individual will not
have experienced before (e.g., sexual relationships, and social and educational responsibilities). Stress
increases cortisol levels which in turn activates brain dopamine activity, and any factor that stimulates
brain dopamine activity in at‐risk individuals is likely to trigger the onset of psychotic symptoms
(Walker, Mittal, & Tessner, 2008; Schifani et al., 2018).
FIGURE 8.3 This PET scan shows sections of brains from a patient diagnosed with schizophrenia (right) and a
healthy control (left). In the top pictures, higher activation is indicated by red areas, and these are more widespread in the
brain of the healthy control. The bottom pictures show ventricular enlargement in the brain of the individual diagnosed
with schizophrenia (indicated by dark blue areas), and this is a common feature of the brains of individuals diagnosed with
schizophrenia.
Reproduced by permission of Monte S Buchsbaum, Mount Sinai School of Medicine.
In summary, there is now good evidence that psychotic symptoms are associated with deficits in
important brain areas. These deficits manifest as lower brain volume, neurotransmitter imbalances—
especially in dopamine, serotonin, glutamate, and GABA pathways—and poorer performance on
cognitive neuropsychological tasks. Particularly important brain areas exhibiting deficits are the
prefrontal cortex, the temporal cortex, and the hippocampus, and deficits in functioning in the
prefrontal cortex and temporal cortex can be clearly associated with negative and positive symptoms
respectively. The causes of these structural and functional brain deficits in schizophrenia are
unclear, although gene mutations, environmental factors influencing prenatal development, and
maternal infections are all possibilities that are being currently researched.
8.5.2 Psychological Theories
Over the past 40 years or so most research has been focused on genetic and biological theories of
schizophrenia, and psychological models have generally received less attention. However, the past 10
years have seen a resurgence of interest in psychological models of psychosis—especially cognitive
models that view psychotic symptoms as the result of cognitive biases in attention, reasoning and
interpretation (Savulich, Shergill, & Yiend, 2012; Ramos & Torres, 2016). We begin this section by
discussing some traditional psychological interpretations of psychosis—especially psychodynamic and
behavioural accounts. We then move on to consider cognitive accounts of psychosis, including both
cognitive deficits (impairments in cognitive functioning) and cognitive biases (the tendency to attend to a
certain type of stimulus or interpret ambiguity in just one particular direction).
Psychodynamic theories
Freud (1915, 1924) hypothesised that psychosis is caused by regression to a previous ego state which
gives rise to a preoccupation with the self—this is known in psychoanalytic terminology as regression to
a state of primary narcissism characteristic of the oral stage of development. This regression is
thought to be caused by cold and unnurturing parents, and the regression to a state of primary
narcissism gives rise to a loss of contact with reality. Freud described the symptoms of thought disorder,
communication disorder, and withdrawal typical of psychosis as evidence of a self‐centred focus, and he
argued that any attempts to re‐establish contact with reality gave rise to the hallucinations and delusions
characteristic of psychosis.
primary narcissism An early, self‐focussed ego state characteristic of the oral stage of
development; Freud argued that psychosis involves regression to this state.
In the 1950s and 1960s, many psychodynamic explanations of psychosis were related to dysfunctional
family dynamics, and championed by such contemporary psychodynamic theorists as Gregory Bateson
and R.D. Laing. Prior to this, Fromm‐Reichmann (1948) had developed the concept of the
‘schizophrenogenic mother’—literally a mother who causes schizophrenia! According to Fromm‐
Reichmann, schizophrenogenic mothers were cold, rejecting, distant, and dominating. Such mothers
demanded dependency and emotional expressions from their children but simultaneously rejected
displays of affection and even criticised the dependency that they implicitly attempted to foster in their
children. This account suggests that when subjected to such conflicting messages and demands from a
dominant close relative, the child withdraws and begins to lose touch with reality—at least in part as a
way of avoiding the stresses and conflicts created by the mother.
schizophrenogenic mother A cold, rejecting, distant and dominating mother who causes
schizophrenia according to Fromm-Reichmann.
The empirical evidence supporting these psychodynamic theories of psychosis is meagre (Harrington,
2012). First, genetic accounts of psychosis are now largely accepted as important contributors to
psychosis—even by psychodynamic theorists—and have been incorporated in some way into
psychodynamic theories. In some cases it is argued that inherited biological predispositions may
facilitate regression to earlier psychological states (Willick, Milrod, & Karush, 1998), whereas others
suggest biological predispositions may prevent the individual from developing an ‘integrated self ’, and
this gives rise to the disrupted behaviour patterns exhibited in individuals diagnosed with schizophrenia
(Pollack, 1989). Second, there is very little evidence that mothers of individuals displaying psychotic
symptoms actually possess the characteristics of the schizophrenogenic mother described by Fromm‐
Reichmann (Waring & Ricks, 1965).
Behavioural theories
There are a number of views that suggest a role for learning and conditioning in the development of
psychotic symptoms—if not as a full theory of psychosis, then as an explanation of why unusual
behaviour patterns are typical of many forms of psychosis. Ullman & Krasner (1975) argued that the
bizarre behaviours of individuals diagnosed with schizophrenia develop because they are rewarded by
a process of operant reinforcement. That is, because of the disturbed family life often experienced by
individuals diagnosed with schizophrenia and the attentional difficulties that are a central feature of the
psychopathology, such individuals tend to find it difficult to attend to normal social cues and involve
themselves in normal social interactions. Instead, their attention becomes attracted to irrelevant cues,
such as an insect on the floor, an unimportant word in a conversation, a background noise, etc.
Attention to irrelevant cues such as these makes their behaviour look increasingly bizarre, and as a result
it gets more and more attention, which acts as a reinforcer to strengthen such behaviours.
There is some limited evidence to support the view that inappropriate reinforcement may generate
some bizarre behaviours, and it may account for the frequency of inappropriate behaviour emitted by an
individual diagnosed with schizophrenia. For example, Focus Point 8.5 describes a study conducted
some years ago by Ayllon, Haughton & Hughes (1965). They reinforced a female resident in a
psychiatric hospital for carrying a broom. Whenever she was observed holding the broom nurses were
asked to approach her, offer her a cigarette, or give her a token which could be exchanged for a cigarette
(Focus Point 8.5). This study suggests that what look like quite bizarre and inappropriate behaviours can
be developed by simple contingencies of reinforcement. Further support for a learning view comes from
evidence that extinction procedures can be used to eliminate or to significantly reduce the frequency of
inappropriate behaviours simply by withdrawing attention or withholding rewards when these
inappropriate behaviours are emitted. Ayllon (1963) describes the behaviour of a 47‐year‐old female
diagnosed with schizophrenia who insisted on wearing around 25 pounds of excess clothing, even in hot
weather. This individual's bizarre clothing habits, however, soon returned to normal when a
weight limit was set each time she tried to enter the hospital dining room. On each day she was allowed
into the dining room only if she weighed 2 pounds less than the previous day. This could be achieved
only by discarding some of the excess clothing, and within 14 weeks she was down to wearing quite
normal clothing. The fact that inappropriate behaviours can be eliminated and acceptable social and
self‐care behaviours developed using operant reinforcement procedures does suggest that at least some
of the unusual behaviours emitted by individuals diagnosed with schizophrenia may be under the
control of contingencies of reinforcement.
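The weight‐limit contingency Ayllon describes can be sketched as a simple rule. This is an illustrative sketch only: the function name and the example weights are hypothetical, and only the 2‐pound criterion comes from the study.

```python
# Illustrative sketch (not from the original study's materials) of the
# extinction contingency Ayllon (1963) describes: entry to the dining
# room is permitted only if today's weigh-in (body plus clothing) is at
# least 2 pounds below the previous day's.

def may_enter_dining_room(weight_today, weight_yesterday):
    """Return True if the 2-pound reduction criterion is met."""
    return weight_today <= weight_yesterday - 2

# Hypothetical daily weigh-ins (pounds): the criterion can only be met
# by discarding some of the excess clothing each day.
weights = [185, 183, 180, 179]
admitted = [may_enter_dining_room(today, yesterday)
            for yesterday, today in zip(weights, weights[1:])]
print(admitted)  # [True, True, False]
```

The point of the contingency is that access to a valued reinforcer (meals) is made conditional on behaviour incompatible with the inappropriate response, so the excess clothing is gradually shed.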
FOCUS POINT 8.5 CAN PERFECTLY NORMAL
PSYCHOLOGICAL PROCESSES CAUSE BIZARRE
BEHAVIOUR?
A revealing study by Ayllon, Haughton, and Hughes in 1965 provides insight into some of the
processes that might generate the kinds of bizarre and apparently irrational behaviour that
make up some forms of psychopathology.
They used operant reinforcement methods (see Chapter 1, Section 1.1.3) to reward a female
patient diagnosed with schizophrenia for carrying a broom.
Whenever she was observed holding the broom a nurse would approach her, offer her a
cigarette, or give her a token which could be exchanged for a cigarette. After a period of this
reinforcement, the patient was carrying the broom around for most of the day, and even taking
it to bed with her when she slept.
At this point, the researchers called in two psychiatrists (who were unaware of the
reinforcement schedule) to give their opinions on the nature of the behaviour. One of them
gave the following reply:
‘Her constant and compulsive pacing, holding a broom in the manner she does, could be seen
as a ritualistic procedure, a magical action. . .Her broom would be then: (1) a child that gives
her love and she gives him in return her devotion, (2) a phallic symbol, (3) the scepter of an
omnipotent queen. . . .this is a magical procedure in which the patient carries out her wishes,
expressed in a way that is far beyond our solid, rational and conventional way of thinking and
acting’. (Ayllon et al., 1965, p. 3)
First, this psychodynamic explanation given by one of the psychiatrists is a good example of
how easy it is to over‐speculate about the causes and meaning of a behaviour when the real
causes are unknown.
Second, it shows how behaviour that is viewed as representative of psychopathology can be
acquired through a perfectly normal learning mechanism (in this case operant reinforcement).
Cognitive theories
Cognitive deficits
Cognitive deficits are one of the core features of schizophrenia that evolve during the development of
the disorder. These deficits include dysfunctions in working memory, attention, processing speed, visual
and verbal learning, and consequential deficits in reasoning, planning, abstract thinking, and problem
solving (Heinrichs & Zakzanis, 1998; Fioravanti, Bianchi, & Cinti, 2012). A decline in these cognitive
processes is one of the earliest signs of psychosis and occurs in the prodromal phase as well as
throughout the development of explicit symptoms. The deterioration in working memory efficiency is
also an identifiable precursor of relapse (Hui et al., 2016). Working memory impairment is particularly
important because of its role in many complex forms of cognition, including maintaining
representations in an activated easily accessible state required during activities involving reasoning and
language (Baddeley, 2012), and studies show that working memory deficits are highly correlated
with poor cognitive performance in individuals with a diagnosis of schizophrenia (Johnson et al., 2013;
Gold et al., 2019). Research Methods in Clinical Psychology 8.2 describes how working memory
capacity can be tested and compared across healthy controls, individuals with a diagnosis of
schizophrenia, and individuals with an alternative mental health diagnosis.
These cognitive deficits almost certainly have their origins in the biological and neurological factors
discussed in Section 8.5.1, including abnormalities in brain neurotransmitter activity and abnormalities
in brain structure and function, and this decline in cognitive function probably underlies many of the
cognitive biases discussed later in this section that contribute to psychotic thinking and give rise to
disorganised thought, hallucinations, delusional beliefs, paranoia, hearing voices, and deficits in social
cognition and emotion regulation.
Cognitive biases
Of specific interest to cognitive theorists are the hallucinations and delusional beliefs that are regularly
developed during psychotic episodes, with over 50% of individuals diagnosed with schizophrenia
exhibiting paranoid beliefs (Guggenheim & Babigian, 1974). This raises the issue of why so many
sufferers should develop these particular kinds of delusions. Harris (1987) found that individuals
diagnosed with schizophrenia who were living in the community were 20 times more likely
than nonsufferers to report intrusive or confrontational experiences, such as threats from landlords,
police enquiries, burglaries, and unwanted sexual propositions, so there may be some basis in experience
for the development of persecutory beliefs. However, researchers have pointed out that paranoid
delusions may also be the result of cognitive biases that have been developed by the sufferer (in much
the same way that cognitive biases may underlie the experience of anxiety and its related disorders, see
Chapter 6, Section 6.4.1). In the following section, we consider the evidence for four types of biased
cognition: attentional biases, attributional biases, reasoning biases, and interpretational biases,
as well as the potential role of theory of mind (ToM) impairments. In particular, these types of
accounts have led to a greater understanding of how individuals develop paranoid ideation in delusional
disorder, and why “hearing voices” is such a prominent and distressing feature of psychosis.
RESEARCH METHODS IN CLINICAL PSYCHOLOGY 8.2
TESTS OF WORKING MEMORY IN SCHIZOPHRENIA
Working memory is a limited capacity system used for temporarily holding information
available in awareness so it can be manipulated, transformed into a more useful form, or used
in other more complex cognitive tasks such as reasoning or decision‐making.
Working memory capacity differs from individual to individual, and there are a number of
cognitive tests that can be used to determine this capacity.
One is the digit‐span task. In this task participants are presented with a series of digits and
must repeat them back. If they do this successfully, they are given a longer series of digits.
Working memory capacity is judged by the number of digits that the individual can successfully
remember and repeat back.
In an alternative task (sometimes called the AX‐CPT, a continuous performance task) the
participant is instructed to make a response that distinguishes target from nontarget stimuli. For example,
in a typical study, the target letter is X. However, there is an added constraint: X is only a target
when it is preceded by the letter A. Thus, if a participant sees A‐X‐A‐X, both Xs are targets. If
a participant sees B‐X‐B‐X, neither of those Xs are targets. Researchers can manipulate how
likely it is that a target appears. Thus, the AX‐CPT measures a person's ability to maintain a
goal state (e.g., that X must follow A to be a target).
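The target rule just described can be captured in a few lines. This is an illustrative sketch, not the published task code, and the function name is hypothetical.

```python
# Illustrative sketch of the AX-CPT target rule: a letter counts as a
# target only when it is an X immediately preceded by an A.

def ax_cpt_targets(stream):
    """Return a True/False target flag for each letter in the stream."""
    flags = []
    for i, letter in enumerate(stream):
        flags.append(letter == 'X' and i > 0 and stream[i - 1] == 'A')
    return flags

print(ax_cpt_targets(list('AXAX')))  # [False, True, False, True]  (both Xs are targets)
print(ax_cpt_targets(list('BXBX')))  # [False, False, False, False] (neither X is a target)
```

Performing well therefore requires holding the previous letter in working memory across the whole stream, which is why the task is sensitive to working memory deficits.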
Such tests can be used to test the working memory capacity of individuals with a diagnosis of
schizophrenia and compared with healthy controls or individuals with a different mental health
diagnosis.
This figure shows that individuals with a diagnosis of schizophrenia (SZ) perform significantly
worse on an AX‐CPT continuous performance task (left panel) and a digit span task (right
panel) than healthy controls (HC) and individuals with a diagnosis of bipolar disorder but
without psychotic symptoms (BPD‐). But the bipolar group that does exhibit psychotic
symptoms (BPD+) performs as badly as the schizophrenia group (Frydecka et al., 2014),
suggesting that both the SZ and BPD+ groups have working memory deficits when compared with
healthy controls.
Attentional biases
Anxiety disorders are typically associated with attentional biases towards threatening stimuli
(Chapter 6, Section 6.4.1) and individuals suffering psychotic symptoms also show some similar
attentional biases. There is evidence that individuals with delusional disorder selectively attend to
pathology congruent information. For example, individuals with persecutory delusions exhibit
attentional biases towards stimuli that have emotional meaning or are paranoia relevant (Fear, Sharp, &
Healy, 1996; Bentall & Kaney, 1989; Moritz & Laudan, 2007). Interestingly, individuals prone to
persecutory delusions may have a bias towards avoiding attending to some forms of threatening stimuli.
For example, they are slower to locate angry faces than control participants (Green, Williams, &
Davidson, 2001) and make fewer fixations and show reduced attention to the salient information of
facial features than controls (Loughland, Williams, & Gordon, 2002; Phillips & David, 1998). This is
particularly interesting given that potentially angry facial expressions would have added significance for
someone with a persecutory delusion and suggests that although they may be initially attentive to threat,
they then adopt an avoidance strategy that involves avoiding fixating on threatening stimuli (Green,
Williams, & Davidson, 2003). However, what is not yet clear is whether attentional biases to threat in
paranoia are specifically linked to psychotic symptoms, or whether they may alternatively be developed
by a history of stress or trauma (Gibson, Cooper, Reeves, Olino, & Ellman, 2019), and it is relevant that
individuals with persecutory beliefs often report stressful experiences in the period immediately prior to
the onset of full persecutory delusions (Freeman et al., 2019).
Attributional biases
Early research indicated that individuals with delusional beliefs (particularly persecutory beliefs) have a
bias towards attributing negative life events to external causes (Bentall, 1994; Bentall & Kinderman,
1998; Bentall, Corcoran, Howard, Blackwood, & Kinderman, 2001). For example, using the
Attributional Style Questionnaire (see Chapter 7, Section 7.1.2), Kaney & Bentall (1989) found that
patients with paranoid delusions made excessively stable and global attributions for negative events (just
like depressed individuals) but also attributed positive events to internal causes and negative events to
external causes. A subsequent study by Bentall, Kaney, & Dewey (1991) found that this tendency of
individuals with paranoid delusions to attribute negative events to external causes was only evidenced
when there was a perceived threat to the self—they did not necessarily attribute negative events to
external sources when describing the experiences of others. These studies all suggest that individuals
exhibiting paranoid delusions have had significantly more negative, threatening life events than control
individuals without a diagnosis, and have also developed a bias towards attributing negative events to
external causes. At the very least, such an attributional bias will almost certainly act to maintain
paranoid beliefs and maintain their delusions that someone or something external is threatening them.
However, more recent studies have called into question the generality of a specific externalizing bias to
negative events and indicated that paranoia may instead be associated with a decreased sense of self‐
causation generally, and the significance of this psychological process in creating paranoid beliefs still
requires further research (Langdon, Still, Connors, Ward, & Catts, 2013a,b; Randjbar, Veckenstedt,
Vitzthum, Hottenrott, & Moritz, 2010).
PHOTO 8.2 Suspicious Minds. People vulnerable to paranoid thinking try to make sense of unusual internal experiences
by using those feelings as a source of evidence that there is a threat, and they then incorporate other evidence around them to
substantiate that belief (e.g., interpreting the facial expressions of strangers in the street as additional evidence that they are
threatened). Freeman (2007) argues that these paranoid interpretations often occur in the context of emotional distress, are
often preceded by stressful events (e.g., difficult interpersonal relationships, bullying, isolation), and happen against a
background of previous experiences that have led the person to have beliefs about the self as vulnerable, others as potentially
dangerous, and the world as bad. In addition, living in difficult urban areas is likely to increase the accessibility of such
negative views about others.
Reasoning biases
Over the past 20 years or so, considerable evidence has accrued to suggest that individuals with
delusional disorders exhibit reasoning biases. One form of this is known as ‘jumping to
conclusions’. That is, such individuals make a decision about the meaning or importance of an event
on the basis of significantly less evidence than someone without a delusional disorder. This has been
demonstrated using the classic Jumping to Conclusions task (Huq, Garety, & Hemsley, 1988;
Westermann, Salzmann, Fuchs, & Lincoln, 2012). In this task, participants view two jars each
containing 100 beads: Jar A with 85 red beads and 15 yellow beads and Jar B with 85 yellow beads and
15 red beads. The experimenter hides the jars from view and then, one by one, draws a series of beads
from one of the jars and asks the participant to say which jar the beads are being drawn from. The
fewer the beads drawn before the participant reaches a decision, the greater the jumping to
conclusions bias. Typically, individuals with a delusional disorder draw three beads or fewer before
making a decision (Fine, Gardner, Craigie, & Gold, 2007; Peters & Garety, 2006), and this jumping to
conclusions bias has been shown in individuals who are delusion‐prone (Colbert & Peters, 2002), at high
risk for psychosis (Broome et al., 2007), or suffering a first episode of psychosis (Falcone et al.,
2010). These studies collectively suggest that jumping to conclusions may create a biased reasoning
process that leads to the formation and acceptance of delusional beliefs and eventually to delusional
symptoms (Savulich, Shergill, & Yiend, 2012). However, it's not clear whether jumping to conclusions is
symptomatic of all delusional beliefs (e.g., delusions of grandeur or reference), or whether it is restricted
to persecutory delusions (Startup, Freeman, & Garety, 2008).
jumping to conclusions The process of making a decision about the meaning or importance
of an event on the basis of insufficient evidence.
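The evidential value of each draw in the beads task can be illustrated with a simple Bayesian calculation using the 85:15 proportions described above. This is an illustrative sketch: the function name is hypothetical, and the task itself simply counts the number of draws taken before a decision.

```python
# Posterior probability that a series of beads was drawn from Jar A
# (85 red / 15 yellow) rather than Jar B (15 red / 85 yellow), assuming
# equal priors for the two jars and draws with replacement.

def p_jar_a(draws):
    like_a = like_b = 1.0
    for bead in draws:
        if bead == 'red':
            like_a *= 0.85
            like_b *= 0.15
        else:  # yellow
            like_a *= 0.15
            like_b *= 0.85
    return like_a / (like_a + like_b)

# A few same-colour beads already give strong evidence, so the measure
# of 'jumping' is how little evidence a person requests before deciding.
print(round(p_jar_a(['red']), 3))                # 0.85
print(round(p_jar_a(['red', 'red', 'red']), 3))  # 0.995
```

Because three same‐colour draws already push the posterior above 0.99, the task indexes data‐gathering style (how many draws a person requests) rather than the accuracy of the final decision.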
The reason why individuals with psychotic symptoms jump to conclusions is also not clear. Jumping to
conclusions is correlated with cognitive impairments—particularly working memory impairments
(Ormrod et al., 2012; Garety et al., 2013), implying that sufferers cannot hold information in working
memory long enough to make fully informed judgments. Alternatively, individuals with a diagnosis of
schizophrenia may simply make decisions based on less evidence or may have a tendency to ‘overvalue’
each individual piece of evidence (Evans, Averbeck, & Furl, 2015).
One final issue is how reasoning biases might interact with the experiences of psychosis‐prone
individuals to cause persecutory symptoms. One particular model that attempts to deal with these issues
is the threat‐anticipation model of persecutory delusions (Freeman, Garety, Kuipers, Fowler, &
Bebbington, 2002). This model argues that four factors are important in contributing to the
development of cognitive biases involved in persecutory ideation: (a) anomalous experiences (such as
hallucinations) that do not appear to have a simple and obvious explanation (and are therefore open to
biased interpretations); (b) anxiety, depression, and worry, that would normally cause a bias towards
negative thinking and threatening interpretations of events (see Chapter 6, Section 6.4.1); (c) reasoning
biases on the part of the individual which lead them to seek confirmatory evidence for their persecutory
interpretations rather than question them (Eisenacher & Zink, 2017); and (d) social factors, such as
isolation and trauma, which add to feelings of threat, anxiety, and suspicion. Of particular importance
is the relationship between anxiety and jumping to conclusions, where anxiety is known to increase the
tendency to jump to conclusions even in a healthy population, and so to generate and reinforce
paranoid ideation (Lincoln, Peters, Schafer, & Moritz, 2009). In addition, there is now a growing body
of evidence showing that there is a significant association between state anxiety and jumping to
conclusions in individuals with a diagnosis of schizophrenia (Ellett, Freeman, & Garety, 2008; Lincoln,
Lange, Burau, Exner, & Moritz, 2009).
‘Brian, my brother, started smoking at a very young age, in his teens. He was a daily smoker and he used to smoke
the equivalent of a pack of cigarettes a day. I had a phone call once from the police in High Wycombe saying they
had found him. He was talking like a Rastafarian and he believed he was John the Baptist. I had to get him
sectioned which absolutely broke the family up. My father and mother had very old‐fashioned ideas about mental
illness—you didn’t speak about it—and they practically disowned him. He came to live with me. He would be
awake all night and sleep all day. One doctor asked me if he was smoking cannabis and I said he was—she
believed that was what triggered his downfall. They put him on medication because they believed he was
schizophrenic—he was hearing voices, saw messages in the paper and was having delusions of grandeur. I believe
the last time anyone saw him was around High Wycombe in 1996 and he was basically living the life of a
down‐and‐out. I believe his problems were brought on by the smoking. He had to live 28 days off it while in
hospital and he improved. He seemed in better shape to me’.
This BBC news interview describes how one woman believed that smoking cannabis had
caused her brother to develop psychotic symptoms. There has long been a view that regular
psychotropic drug use may be related to the development of psychotic symptoms, and research has
focussed on the relationship between cannabis use and subsequent diagnosis of schizophrenia
(Arseneault, Cannon, Witton, & Murray, 2004). Apart from legal substances such as alcohol
and tobacco, cannabis is the illicit drug most widely used by schizophrenia sufferers, and
substance use disorder generally is estimated to be 4.6 times higher in schizophrenia sufferers
than in the general population (Regier et al., 1990).
Cross‐sectional studies have shown that individuals diagnosed with schizophrenia use cannabis
significantly more often than other individuals in the general population (Degenhardt & Hall,
2001). Some have argued that this relationship between cannabis use and schizophrenia reflects
a form of ‘self‐medication’, in which individuals may start using cannabis because of a
predisposition for schizophrenia (Khantzian, 1985). However, others have argued for a direct
causal link between cannabis use and schizophrenia, and case history studies frequently
describe psychotic episodes being preceded by the heavy use of cannabis (Wylie & Burnett,
1995), and cannabis use being associated with earlier onset of symptoms (Bühler, Hambrecht,
Löffler, An der Heiden, & Häfner, 2002).
Prospective studies that have monitored cannabis use and psychotic symptoms in individuals
over a lengthy period of time appear to indicate that there is indeed a causal link between
cannabis and the development of psychotic symptoms. First, Andreasson, Allebeck, Engstrom,
and Rydberg (1987) found a dose‐response relationship between cannabis use at 18 years and
later increased risk of psychotic symptoms. Subsequent prospective studies found that 18‐year‐
olds meeting the criteria for cannabis dependence had rates of subsequent psychotic symptoms
that were twice the rate of young people not meeting these criteria (Fergusson, Horwood, &
Swain‐Campbell, 2003). Also, this relationship could not be explained by high cannabis use
being associated with any pre‐existing psychiatric symptoms (Fergusson, Horwood, & Ridder,
2005). Statistical modelling of these longitudinal data shows that the direction of causality
appears to be from cannabis use to psychotic symptoms and not vice versa (Fergusson et al.,
2005), and this causal link has recently been supported using a genetic approach to estimating
the risk of psychotic symptoms following cannabis use (Vaucher et al., 2017). In addition,
further studies have demonstrated that cannabis use increases the risk of psychotic symptoms
but has a greater impact on those who already have a vulnerability to schizophrenia (Verdoux,
Gindre, Sorbara, Tournier, & Swendsen, 2003; Henquet et al., 2005). Finally, meta‐analyses
suggest that the mean age of onset of psychotic symptoms among cannabis users is almost 3
years earlier than that of non‐cannabis users—even when other factors such as tobacco use are
controlled for (Myles, Newall, Nielssen, & Large, 2012), and patients with schizophrenia using
cannabis are more regularly hospitalised than non‐cannabis users, and have higher relapse rates
(van Dijk, Koeter, Hijman, Kahn, & van den Brink, 2012; Schoeler et al., 2016).
So if there is a causal link between cannabis use and schizophrenia, what is the mechanism that
mediates this link? First, there may be a neurological explanation. Research suggests that
cannabis has an important effect on brain chemistry, and the compound tetrahydrocannabinol
(THC) that is found in cannabis can release the neurotransmitter dopamine (Tanda, Pontieri, &
Di Chiara, 1997; Arnold, Boucher, & Karl, 2012). Excess dopamine activity has been identified
in the aetiology of schizophrenia, and heavy cannabis use may therefore raise brain dopamine
activity to levels triggering psychotic episodes (but see contrary evidence reported by Bloomfield
et al., 2014). In addition, regular cannabis use may also affect the course of brain maturational
processes associated with schizophrenia, and has been shown to result in smaller cerebellar
white‐matter volume in schizophrenia patients who use cannabis regularly (Solowij et al., 2011).
Alternatively, Freeman, Garety, Kuipers, Fowler, & Bebbington (2002) have argued that
anomalous experiences (that do not have a simple and obvious explanation) are one of the
fundamental factors contributing to the development of delusional thinking, and psychoactive
street drugs such as cannabis are likely to increase the frequency of such anomalous
experiences. If the individual is in an anxious state and already feeling isolated, then these
anomalous experiences are likely to be interpreted threateningly and give rise to the persecutory
and paranoid ideation often found in schizophrenia.
In conclusion, despite over 30 years of research highly suggestive of a link between cannabis
use and psychosis, that tantalising piece of evidence confirming a direct causal link is still
elusive, as is a clear indication of the mechanism by which any causal link might operate.
Interpretational biases
In addition to attentional, attributional and reasoning biases, accounts of psychotic delusions have been
supplemented by findings that a number of other information processing biases may be involved
in the development of delusions. For example, an enhanced tendency to interpret ambiguous
information as threatening is likely to increase perception of risk of personal harm and negative
interpretations of hallucinations that may facilitate stress and paranoia. Such negative interpretation
biases have been observed in individuals with psychosis (Savulich, Shergill, & Yiend, 2012) and these
biases appear to be present before the onset of psychotic symptoms, suggesting that such negative biases
could contribute to the development of psychosis (Yiend et al., 2019).
In addition, Morrison (2001) has argued that many individuals diagnosed with schizophrenia have a
bias towards interpreting cognitive intrusions such as ‘hearing voices’ in a negative or threatening
way. In this case, a perfectly normal auditory hallucination may then be interpreted as threatening (e.g.,
‘I must be mad’, ‘the Devil is talking to me’, ‘if I do not obey the voices they will hurt me’), and this
misinterpretation causes anxiety, negative mood, and physiological arousal which produces more
auditory hallucinations, which are in turn interpreted negatively, and so on (Baker & Morrison, 1998).
Interestingly, hearing voices is not restricted to individuals suffering psychosis, and around 13% of
healthy individuals report hearing voices (Beavan, Read, & Cartwright, 2011). However, what
characterises hearing voices in individuals suffering psychosis is the distress that these voices induce.
Individuals with a diagnosis of schizophrenia report the voices they hear as more unacceptable,
uncontrollable, and distressing than healthy individuals (Morrison, Nothard, Bowe, & Wells, 2004), and
interpreting voices as dominating or insulting is associated with distress (Vaughan & Fowler, 2004). In
addition, Peters, Williams, Cooke, & Kuipers (2012) found a link between beliefs about voices and both
emotional and behavioural responses to voices. Beliefs about the omnipotence of voices were significantly
associated with measures of distress, and beliefs about the intent of voices (e.g., malevolence vs.
benevolence) were associated with resistance to or engagement with voices respectively.
However, what is not yet fully clear is why individuals suffering psychosis develop the negative
interpretations of voices that they do—such as them being uncontrollable, external, dominating, and
distressing. Waters et al. (2012) argue that the aberrant voices heard by both healthy and psychosis‐
suffering individuals are generated by hyperactivation of auditory neural networks that may be
triggered by environmental or internal factors, and it is failures in signal detection that lead the
individual to accept these voices as real and meaningful but not self‐generated. In addition to this, the
deficits in working memory and executive functioning exhibited by individuals suffering psychosis mean
that they in particular may be unable to suppress these voices using deliberative, top‐down processing,
nor are they easily able to distract themselves from them. The distress this uncontrollability causes is likely to be
one source of interpretational bias, which will lead the individual increasingly to interpret the voices as
threatening, and the voice content as malevolent. In addition to voices being potentially uncontrollable
in this way, hearing voices is often a consequence of traumatic experiences such as physical assault,
sexual trauma, and victimisation (Mueser et al., 1998), and the voices may become distressing because
they reflect elements of the original traumatic experiences (Garety, Kuipers, Fowler, Freeman, &
Bebbington, 2001; Luhrmann et al., 2019).
Finally, as individuals suffering psychosis increasingly come to interpret the voices they hear as external
and uncontrollable, they will develop a ‘relationship’ with these voices, and the nature of this
relationship may determine the level of distress and disability and control that the voices elicit in the
individual. Hayward, Berry, & Ashton (2011) have identified a number of different types of
relationships that individuals with a diagnosis of schizophrenia have with their voices. Often this is a
‘power’ struggle, in which the sufferer is constantly engaged in trying to regain power over their voices.
It can lead some individuals to become socially isolated as they withdraw into the world of their voices.
But for others who are already socially withdrawn, it may lead to the generation of hallucinations and
delusions to make sense of the world of their voices (Hoffman, 2007).
theory of mind (TOM) The ability to understand one’s own and other people’s mental
states.
FIGURE 8.4 Two typical cartoons taken from the study by Corcoran, Cahill, & Frith (1997). In type (a) jokes the
participant needs to infer the mental state of one of the characters to understand the joke, and if individuals with
persecutory delusions lack a ‘theory of mind’ they will find these jokes difficult to understand. The type (b) joke is an
example of their physical/behavioural jokes, where only interpretation of the physical events in the cartoon is needed to
understand the joke. Corcoran et al. found that individuals with persecutory delusions found type (a) jokes more difficult to
understand, whereas people without persecutory delusions were equally able to understand both types (a) and (b).
Metacognitive deficits
Metacognition is the ability to think about thinking or to consciously monitor ongoing thoughts in order
to make judgements based on information from our senses combined with prior knowledge. Such
abilities are necessary for planning complex tasks, awareness of the cognitive requirements necessary for
complex tasks, and exerting cognitive control over the different cognitive requirements for a task (Cella,
Reeder, & Wykes, 2015). Metacognition has been found to be impaired in individuals with
schizophrenia spectrum disorders, and this has been linked to a range of symptoms and disabilities
including negative symptoms, lack of insight, poor self‐awareness and self‐knowledge (such as an
inability to know what one knows) (e.g., Pinkham, 2017). Metacognitive deficits also have a
significant impact on important aspects of social cognition, including understanding speech nuances,
facial expressions, and hence the intentions of others, and also mean that the individual with a diagnosis of
schizophrenia has poorer insight into their own mental states (Lysaker, Dimaggio, & Brüne, 2014). In
effect, the inability to ‘oversee’ and manage the different cognitive processes involved in activities such as
social cognition and decision planning means that the psychosis sufferer's experience of these activities is
fragmented, resulting in an inability to integrate information required for successful completion of tasks
and also to form an integrated sense of self (Gagen, Zalzala, Hochheiser, Schnakenberg, & Lysaker,
2019). A number of treatments are being developed to help improve metacognition in individuals with a
diagnosis of schizophrenia, including cognitive remediation approaches (see also Section 8.6.2 this
chapter), metacognitive training, and metacognitive interpersonal therapy for psychosis, and these
interventions have been shown to result in modest improvements in measures of metacognition and
cognitive functioning generally (Reeder et al., 2017; Lysaker, Gagen, & Schweitzer, 2018).
Social factors
The highest rates of diagnosis of schizophrenia are usually found in poorer inner‐city areas and in those
of low socio‐economic status, and this has given rise to two rather different sociocultural accounts of
schizophrenia. The first is known as the sociogenic hypothesis. This claims that individuals in low
socio‐economic classes experience significantly more life stressors than individuals in higher socio‐
economic classes, and these stressors are associated with unemployment, poor educational levels, crime,
and poverty generally. Having to endure these stressors may trigger psychotic symptoms in vulnerable
people. A study conducted in Denmark indicated that factors associated with low socio‐economic status
may be risk factors for psychosis, and these include unemployment, urbanicity, low educational
attainment, lower wealth status, low income, parental unemployment, and lower parental income
(Byrne, Agerbo, Eaton, & Mortensen, 2004; Lee et al., 2018). Studies conducted on immigrants have
also indicated that such groups have a higher incidence of the diagnosis of schizophrenia, and this has
been attributed to the stress caused by many of the initial consequences of immigration, such as
language difficulties, unemployment, poor housing, and low socio‐economic status (Hjern, Wicks, &
Dalman, 2004). However, while this evidence provides some support for the sociogenic hypothesis, there
is little evidence that socio‐economic class per se increases the risk of psychotic symptoms. In particular,
parental socio‐economic class is not a significant risk factor for a diagnosis of schizophrenia (Byrne,
Agerbo, Eaton, & Mortensen, 2004), and studies of individuals with a diagnosis of schizophrenia have
indicated that, although they may be of low socio‐economic status, they are as likely to have parents
from a higher socio‐economic class as a low one (Turner & Wagonfeld, 1967).
sociogenic hypothesis The theory that individuals in low socio-economic classes experience
significantly more life stressors than individuals in higher socio-economic classes, and these
stressors are associated with unemployment, poor educational levels, crime and poverty
generally.
An alternative explanation for the fact that individuals diagnosed with schizophrenia appear to have low
socio‐economic status is that the intellectual, behavioural, and motivational problems afflicting
individuals with psychotic symptoms mean they will suffer a downward drift into unemployment,
poverty, and the lower socio‐economic classes as a result of their disorder. This is known as the social‐
selection theory and claims that individuals displaying psychotic symptoms will drift into lifestyles
where there is less social pressure to achieve and no need to hold down a regular job, and they can cope
with their difficulties on a simple day‐to‐day basis. This hypothesis is supported by the fact that many
individuals diagnosed with schizophrenia may have parents with high socio‐economic status, even
though they themselves are living in poverty‐ridden areas of towns and cities (Turner & Wagonfeld,
1967). In addition, longitudinal studies have suggested there may be a reciprocal relationship between
socio‐economic status and mental health where each factor influences the other (Mossakowski, 2014).
social-selection theory Argues that there are more individuals diagnosed with schizophrenia
in low socio-economic groups because after they have developed psychotic symptoms they will
drift downwards into unemployment and low-achieving lifestyles.
One final sociocultural view of schizophrenia is known as social labelling, in which it is argued that
the development and maintenance of psychotic symptoms is influenced by the diagnosis itself (Modrow,
1992). In particular, if someone is diagnosed as ‘schizophrenic’ then it is quite possible (a) that others
will begin to behave differently towards them, and define any deviant behaviour as a symptom of
schizophrenia, and (b) that the person who is diagnosed may themselves assume a ‘role’ as someone who
has a disorder, and play that role to the detriment of other—perhaps more adaptive—roles. At the very
least this is likely to generate a self‐fulfilling prophecy, in which a diagnosis leads to the individual, their
family, and friends behaving in ways which are likely to maintain pathological symptoms. Evidence for
such an effect can be found in the classic study by Rosenhan (1973) in which eight individuals without
any symptoms of psychopathology presented themselves at psychiatric hospitals complaining of various
psychotic symptoms. Not only were these ‘normal’ individuals immediately diagnosed with
schizophrenia, they were subsequently treated in an authoritarian and uncaring manner by hospital
staff; began to feel powerless, bored, and uninterested; and even had great difficulty being viewed and
treated as ‘normal’ once they had left the hospital! (see Bentall, 2019, for a fuller discussion of this
study).
social labelling The theory that the development and maintenance of psychotic symptoms
are influenced by the diagnosis itself.
Familial factors
There is a general belief across most theoretical perspectives on schizophrenia that the characteristics of
the family are in some way important in making an individual vulnerable to acquiring psychotic
symptoms. As we have already seen, some psychodynamic views believed it was certain characteristics
possessed by the mother that were important in precipitating psychosis (the schizophrenogenic mother,
Section 8.5.2). However, more recently, attention has turned from the characteristics of individual
family members to the patterns of interactions and communications within the family.
Some approaches suggest that the risk factor within families for the development of psychotic symptoms
lies in the nature of the way that parents and children communicate. In the 1950s, Bateson, Jackson,
Haley, & Weakland (1956) argued that psychosis may develop in families where communication is
ambiguous and acts to double‐bind the child. This double‐bind hypothesis claims that the
individual is subjected within the family to contradictory messages from loved ones (e.g., a mother may
both request displays of affection, such as a hug, and then reject them as being a sign of weakness). This
leaves the individual in a conflict situation, in which they may then eventually withdraw from all social
interaction. Focus Point 8.7 offers some examples of double‐bind situations and conversations, and it is
clear from these examples that, whichever of the themes the child reads into the ambiguous message,
they are in a no‐win situation.
double-bind hypothesis Theory advocating that psychotic symptoms are the result of an
individual being subjected within the family to contradictory messages from loved ones.
The double‐bind hypothesis has subsequently been superseded by more empirical research which has
identified a construct in families called communication deviance (CD), which is related to the
development of psychotic symptoms. CD is a general term used to describe communications that would
be difficult for an ordinary listener to follow, leaving them puzzled and unable to share a focus of
attention with the speaker. Such communications would include (a) abandoned or abruptly ceased
remarks or sentences, (b) inconsistent references to events or situations, (c) using words or phrases oddly
or wrongly, or (d) use of peculiar logic. Studies have demonstrated that CD is a stable characteristic of
families with offspring who develop psychotic symptoms (Wahlberg et al., 2001). When children with a
biological predisposition to schizophrenia have been adopted and brought up in homes with adoptive
parents who do not have a biological predisposition for schizophrenia, CD has been found to be an
independent predictor of the adopted child developing psychotic symptoms (Wahlberg et al., 2004;
Roisko, Wahlberg, Hakko, Wynne, & Tienari, 2011). This suggests that CD is a risk factor for a
diagnosis of schizophrenia that is independent of any biological or inherited predisposition, and that
CD is not simply the product of a shared genetic defect between parents and offspring.
Another construct that has been closely linked to the appearance and reappearance of psychotic
symptoms is known as expressed emotion (EE). The importance of the family environment in
contributing to psychotic symptoms was first recognised when it was found that individuals who left
hospital following treatment for psychosis were more likely to relapse if they returned to live with
parents or spouses than if they went to live in lodgings or to live with siblings (Brown, Carstairs, &
Topping, 1958). From this it was discovered that many of the discharged patients were returning to
environments where communications were often hostile and critical. This led to the development of the
construct of EE which refers to high levels of criticism, hostility and emotional involvement between key
members of a family, and some examples of high EE are shown in Activity Box 8.1. Since its
development, EE has been shown to be a robust predictor of relapse (Kavanagh, 1992) and, in
particular, relapse involving positive psychotic symptoms. Families high in EE tend to be intolerant of
the sufferer's problems and have inflexible strategies for dealing with their difficulties and symptoms.
High EE families also have an attributional style that tends to blame the sufferer themselves for their
condition and the consequences of their symptoms (Weisman, Nuechterlein, Goldstein, & Snyder, 2000;
Barrowclough, Johnston, & Tarrier, 1994). It is not clear how high EE within a family might influence
the tendency to relapse—but a recent 20‐year prospective study suggests that EE is a valid predictor of
relapses and rehospitalisations (Cechnicki, Bielanska, Hanuszkiewicz, & Daren, 2013). One mechanism
by which EE may trigger relapses is through a high sensitivity to stress in psychosis sufferers. The stress
caused by EE may trigger cortisol release in the hypothalamic‐pituitary‐adrenal system, and this is
known to increase dopamine activity and so reactivate symptoms in vulnerable individuals (Walker,
Mittal, & Tessner, 2008). The link between EE and psychotic symptoms is further supported by the fact
that some studies have shown that interventions to moderate the high EE levels in a family may actually
have a beneficial effect on relapse, suggesting a possible causal link between high EE and relapse
(Hogarty et al., 1986; Tarrier et al., 1988). Finally, cultural factors also appear to moderate the effect of
EE on symptoms and relapse. Aguilera, Lopez, Breitborde, Kopelowicz, & Zarate (2010) found that EE
was less likely to cause relapse in Mexican immigrants to the US than in the indigenous population.
However, as immigrants became more familiar with American culture (language and media) EE became
increasingly related to relapse, which suggests that the EE‐schizophrenia relapse link may be mediated
by cultural differences in warmth, mutual interdependence, and kin relationships (Singh, Harley, &
Suhail, 2013).
The following are some visualisations inspired by double‐bind theory, where the verbal message
may contradict the implied message, thereby invalidating both.
SELF‐TEST QUESTIONS
What is the diathesis–stress perspective that is used to explain the aetiology of psychotic
symptoms?
Concordance studies, twin studies and adoption studies are used to determine the extent
of genetic factors in psychosis. Can you give examples of these types of methods?
What are genetic linkage analyses and how are they used to identify the specific genes
through which the risk for psychosis may be transmitted?
What is the dopamine hypothesis and how did the role of dopamine in psychosis come to
be discovered?
What abnormalities can be found in the brains of individuals diagnosed with
schizophrenia, and which brain areas are most affected by these abnormalities?
Can you describe the evidence supporting the view that brain abnormalities in individuals
diagnosed with schizophrenia may result from abnormal prenatal development?
What are the main features of psychodynamic explanations of psychosis?
What are the main cognitive deficits found in individuals with a diagnosis of
schizophrenia, and how do these deficits affect thinking and behaviour?
Can you describe some of the attentional deficits that are characteristic of psychosis and
explain how they might contribute to the clinical symptoms?
A number of cognitive biases have been implicated in the development of some psychotic
symptoms. What are these biases and how might they contribute to factors such as
delusional thinking?
What is metacognition and how do deficits in metacognition affect social cognition and
self‐identity in psychosis sufferers?
What is a sociocultural theory of psychosis? Can you describe and evaluate the significance
of at least two sociocultural accounts of psychosis?
What is double‐bind hypothesis and how does it try to explain the development of
psychotic symptoms?
What are expressed emotion and communication deviance, and what is the evidence that
they constitute a risk factor for the development of psychotic symptoms?
SECTION SUMMARY
8.5 THE AETIOLOGY OF PSYCHOTIC SYMPTOMS
The overarching approach to understanding psychosis is a diathesis–stress perspective in which
a combination of genetically‐inherited predisposition (diathesis) and environmental stress
are thought to cause psychotic symptoms.
Concordance studies suggest that an individual who has a first‐degree relative diagnosed with
schizophrenia is 10 times more likely to develop psychotic symptoms than someone who
has no first‐degree relatives diagnosed with schizophrenia.
The concordance rate for schizophrenia in MZ twins is 44% but falls to 12% in DZ twins.
Heritability estimates for schizophrenia are high at between 64 and 81%.
Adoption studies show that the probability of an adopted child developing schizophrenia is
linked to the probability of the biological mother developing schizophrenia and not to the
probability of the adopted mother developing schizophrenia.
Genetic linkage analyses have helped to identify some of the specific genes through which the
risk for psychosis might be transmitted.
Genome‐wide association studies (GWAS) allow researchers to identify rare mutations in genes
that might give rise to psychotic symptoms.
108 different genetic loci have been identified that are associated with schizophrenia
symptoms.
The main biochemical theory of schizophrenia is the dopamine hypothesis, which argues that
psychotic symptoms are related to excess activity of the neurotransmitter dopamine.
Two important dopamine pathways in the brain, the mesolimbic pathway and the mesocortical
pathway, may be impaired during schizophrenia.
Psychotic symptoms are associated with brain abnormalities, including smaller brain size
and enlarged ventricles (the areas in the brain containing cerebrospinal fluid).
Schizophrenia is specifically associated with reduced volume of gray matter in the prefrontal
cortex which affects executive functioning, decision‐making and working memory.
Brain imaging studies have also shown abnormalities in the temporal cortex, including limbic
structures, the basal ganglia, and the cerebellum.
Evidence suggests that schizophrenia may also be associated with birth complications,
maternal nicotine consumption during pregnancy, and maternal infections during
pregnancy.
Psychodynamic theories of psychosis have claimed that it is (a) due to regression to a state
of primary narcissism, or (b) develops because of a ‘schizophrenogenic mother’ who fosters
psychotic symptoms in her offspring (psychodynamic theories).
At least some inappropriate behaviour patterns exhibited by individuals diagnosed with
schizophrenia may be developed and maintained through processes of operant
reinforcement (behavioural theories).
Cognitive deficits in working memory, attention, processing speed, and visual and verbal
learning are core features of schizophrenia.
Individuals with persecutory delusions exhibit attentional biases towards stimuli that have
emotional meaning or are paranoia relevant.
Delusional disorder is often associated with cognitive biases in attention, attribution,
reasoning, and interpretation.
Delusional disorders are associated with a reasoning bias called ‘jumping to conclusions’ where
the individual infers meaning on the basis of very little evidence.
There is evidence that individuals diagnosed with schizophrenia may not be able to
understand the mental states of others (a ‘TOM’ deficit), and this may be a factor in the
development of delusions—especially delusions of persecution.
Metacognition has been found to be impaired in individuals with schizophrenia spectrum
disorders, and this has been linked to a range of symptoms and disabilities including
negative symptoms, lack of insight, poor self‐awareness, and self‐knowledge.
The sociogenic hypothesis claims that individuals in low socio‐economic classes experience
significantly more life stressors than those in higher socio‐economic classes, and these
stressors may contribute to the increased prevalence of the diagnosis of schizophrenia in
low socio‐economic groups.
Social‐selection theory argues that there are more individuals diagnosed with schizophrenia in
low socio‐economic groups because after they have developed psychotic symptoms they
will drift downwards into unemployment and low‐achieving lifestyles.
Social labelling theory claims that once an individual has been diagnosed with schizophrenia,
such labelling is likely to give rise to circumstances which will tend to maintain psychotic
symptoms.
High levels of expressed emotion (high levels of criticism, hostility and emotional involvement
between members) and communication deviance (poorly structured means of communication
between family members) within the families of individuals diagnosed with schizophrenia
have been shown to be associated with relapse and the development of positive symptoms.
prefrontal lobotomy A surgical procedure that involves severing the pathways between the
frontal lobes and lower brain areas.
Antipsychotic drugs
Specially developed antipsychotic drugs are the first line of intervention for psychotic
symptoms, and arguably the most effective treatment for the positive clinical symptoms. The main
classes of drugs used for the treatment of psychotic symptoms are known as antipsychotics or
neuroleptics (because some of these drugs produce undesired motor behaviour effects similar to the
symptoms of neurological diseases such as Parkinson's disease). Nowadays, antipsychotic drugs can be
divided into two broad groups usually labelled first‐ and second‐generation drugs (see Table 4.2 in
Chapter 4; see also Chapter 4, Section 4.1.1, for a further discussion of antipsychotic medications).
First generation antipsychotics (sometimes also called typical antipsychotics) consist of the traditional
drugs that have been developed over the past 50 years (such as chlorpromazine and haloperidol), and
second‐generation antipsychotics (sometimes called atypical antipsychotics) refer to those that have been
developed in recent years. In the years after their development, it was originally thought that second‐
generation drugs were more effective over a broader range of symptoms than first‐generation drugs
(Citrome, Bilder, & Volavka, 2002), were associated with less risk of relapse (Leucht et al., 2003) and
with less risk of involuntary motor behaviour side effects (Csernansky & Schuchart, 2002).
neuroleptics One of the main classes of drugs used for the treatment of psychotic symptoms.
The development of new drug treatments for psychosis is an important ongoing process, and it is still unclear which biochemical mechanisms many of these drugs influence to produce their successful therapeutic effects (see Chapter 4, Section 4.1.1). However, antipsychotic drugs have become a central
feature of treatment for psychotic symptoms and significantly more independent research is needed to
improve their effectiveness, reduce their unpleasant side‐effects, and improve patient adherence to
medication regimes.
It has been known for some time that the sooner psychotic symptoms are detected and treated,
the better the long‐term prognosis (Marshall et al., 2005). In the UK, this has led to the
establishment of multidisciplinary clinical teams whose purpose is to provide intensive case management of at-risk individuals and to educate GPs and physicians in the recognition of, and response to, subclinical symptoms. The aims of these teams are to reduce the duration of untreated psychosis to less than 3 months and then to provide intensive case management for the patient over the next 3–5 years. Early intervention has been shown to produce better clinical outcomes than standard service treatment (Garety et al., 2006), is cost-effective (Singh, 2010), and significantly reduces the risk of a second relapse (Alvarez-Jimenez, Parker, Hetrick, McGorry, & Gleeson, 2011). However, it is still unclear whether these services
might merely be delaying psychosis without necessarily reducing long‐term risk (Preti & Cella,
2010; Fusar‐Poli, McGorry, & Kane, 2017). Even so, early intervention services have been
established in a number of countries worldwide, including the UK, Australia, New Zealand, Norway, Denmark, Canada, and the United States.
Finally, there is still vigorous debate about the success of CBTp in alleviating psychotic symptoms and
enabling recovery. Early randomised controlled trials indicated that CBTp was effective in helping to
reduce hallucinations and delusions, and decreased both positive and negative symptoms while lifting
mood and improving life functioning (Bustillo, Lauriello, Horan, & Keith, 2001; Wykes, Steel, Everitt, &
Tarrier, 2008). However, effect sizes were often modest (Hazell, Hayward, Cavanagh, & Strauss, 2016),
and while CBTp is more effective than either usual treatment or attentional control conditions, it does
not appear to perform significantly better than other forms of therapy for treating psychosis (Jones et al.,
2018; Newton‐Howes & Wood, 2013, but see Hutton, 2013 for a critical commentary on this meta‐
analysis). However, it is clear that CBTp does provide a useful adjunct to treatment for psychotic
symptoms, is well tolerated by sufferers, and can be used in conjunction with medication or other forms
of psychosocial intervention (e.g., Sommer et al., 2012).
Personal therapy
When individuals diagnosed with schizophrenia are discharged from hospital after an acute episode,
they usually find themselves in a challenging environment in which their cognitive skills and their ability to cope may fall short of what is needed. As a consequence, relapse rates are usually high. Personal therapy is
a broad‐based cognitive‐behaviour programme that is designed to help such individuals with the skills
needed to adapt to day‐to‐day living after discharge. Clients are taught a range of skills in either a group
setting or on an individual basis, and these include (a) learning to identify signs of relapse (e.g., social
withdrawal) and what to do in such circumstances, (b) acquiring relaxation techniques designed to help
the client deal with the anxiety and stress caused by challenging events (e.g., to reduce levels of anger
that might give rise to unnecessary aggression), (c) identifying inappropriate emotional and behavioural
responses to events, and learning new and adaptive responses (e.g., to help with gaining and maintaining
employment and accommodation), (d) identifying inappropriate cognitions and dysfunctional thinking
biases that might foster catastrophic and deluded thinking (and so help the client to prevent intrusive
catastrophic thinking), (e) learning to deal with negative feedback from others and to resolve
interpersonal conflicts (known as ‘criticism management and conflict resolution’), and (f) learning how to
comply with medication regimes (Hogarty et al., 1997b; Hogarty et al., 1997a).
family psychoeducation Family intervention designed to educate the family about the
nature and symptoms of psychosis and how to cope with the difficulties that arise from living
with someone with a diagnosis.
applied family management An intensive form of family intervention which goes beyond
education and support to include active behavioural training elements.
Outcome studies have indicated that family interventions significantly reduce the risk of relapse by
around 50–60% (McFarlane, 2016), improve outcomes in early psychosis (Claxton, Onwumere, &
Fornells‐Ambrojo, 2017), reduce symptoms, and improve the sufferer's social and vocational functioning
for periods up to 2 years (Huxley, Rendall, & Sederer, 2000; Pharoah, Mari, Rathbone, & Wong, 2010),
and family interventions that are conducted for longer than 9 months appear to be particularly effective
(Kopelowicz & Liberman, 1998). Studies suggest that no one form of family intervention is necessarily
more effective than others (Huxley, Rendall, & Sederer, 2000), but family psychoeducation interventions
without accompanying behavioural training components may be less effective at achieving some goals,
such as medication adherence (Zygmunt, Olfson, Boyer, & Mechanic, 2002).
CORE CHARACTERISTICS: Assertive Outreach involves targeting clients with severe and enduring mental health problems who have difficulty engaging with services, with the aim of helping them to:
Reduce their number of hospital admissions, in terms of both frequency and duration
Find and keep suitable accommodation
Sustain family relationships
Increase social networks and relationships
Improve their money management
Increase medication adherence
Improve their daily living skills
Undertake satisfying daily activities (including employment)
Improve their general health
Improve their general quality of life
Stabilise symptoms
Prevent relapse
Receive help at an early stage
What is of some concern, however, is the apparent prevalence of substance and chemical abuse by
individuals suffering psychosis and living in the community. The lifetime prevalence rate for substance
abuse among people diagnosed with schizophrenia is around 50%, and may be significantly higher in
those who are homeless (Kosten & Ziedonis, 1997). In an Australian study, Wallace, Mullen, & Burgess
(2004) found that between 1975 and 1995, substance abuse problems for individuals diagnosed with
schizophrenia increased from 8.3 to 26.1%, and significantly higher rates of criminal conviction were
found for those with substance abuse problems (68.1% compared to 11.7%). We know that regular use
of some substances (such as cannabis—see Focus Point 8.6) can directly increase the risk of developing
positive symptoms, and that the use of others (such as cocaine and amphetamines) can exacerbate these
symptoms (Laruelle & Abi-Dargham, 1999). The challenge for community care programmes is to tackle these apparently increasing levels of substance abuse in individuals with psychotic symptoms living in the community and, in so doing, to decrease the risk of relapse and hospitalisation.
8.6.5 Summary of Treatment for Psychosis
Treating psychotic symptoms is a relatively long‐term process. This will often begin with subclinical
symptoms being picked up by an early intervention team, and may require immediate and urgent
treatment with antipsychotic drugs to deal with the positive symptoms found during early psychotic
episodes. Psychological therapies may be required to deal with the longer‐term cognitive and
behavioural deficits that may restrict full social and occupational functioning, and family‐based
interventions will help to maintain a stable, stress‐free environment in which the risk of relapse is
minimised. Long-term community care is often overseen by a case manager who will help the sufferer with their medication regimes, residential supervision, vocational training, and regular access to mental health services. NICE recommends that a number of different interventions be considered in planning for recovery from a first episode of schizophrenia—and these can include both medications and
psychotherapy (NICE, 2016). However, there are a range of differing views across the medical,
psychological, and social spectrum about what is the best approach to take for the long‐term treatment
of individuals with schizophrenia.
FIGURE 8.5 The occurrence of violence in schizophrenia as a consequence of two developmental trajectories stemming
from antisocial or violent behaviour prior to onset of the disorder, and no violent behaviour prior to disorder onset. Note the
different types of primary explanations for these two different trajectories
(after Bo, Abu‐Akel, Kongerslev, Haahr, & Simonsen, 2011).
SELF‐TEST QUESTIONS
What are antipsychotic drugs, how are they thought to deal with the psychotic symptoms,
and how are they categorised?
What problematic side effects do antipsychotic drugs have?
What are the important features of social skills training for individuals diagnosed with
schizophrenia?
What is CBTp and how is it used to treat individuals diagnosed with schizophrenia? With
what particular types of symptoms is it most effective?
What is cognitive remediation training (CRT)?
Can you describe a typical family‐based intervention for psychosis and the factors that
such interventions are designed to address?
What are the different types of community care programmes provided for individuals
diagnosed with schizophrenia, and is there any evidence for their effectiveness in
controlling psychotic symptoms?
SECTION SUMMARY
CHAPTER OUTLINE
9.1 DEFINING AND DIAGNOSING SUBSTANCE USE DISORDERS
9.2 THE PREVALENCE AND COMORBIDITY OF SUBSTANCE USE
DISORDERS
9.3 CHARACTERISTICS OF SPECIFIC SUBSTANCE ABUSE DISORDERS
9.4 THE AETIOLOGY OF SUBSTANCE USE DISORDERS
9.5 THE TREATMENT OF SUBSTANCE USE DISORDERS
9.6 SUBSTANCE USE DISORDERS REVIEWED
LEARNING OUTCOMES
When you have completed this chapter, you should be able to:
1. Describe the main diagnostic criteria for substance use disorders and be able to define key
terms such as craving, tolerance, and withdrawal.
2. Describe the specific characteristics of a range of substances that give rise to dependency
and abuse, including specific stimulants, sedatives, and hallucinogenic drugs.
3. Describe and evaluate the psychological, physical health, and economic costs of specific
substance use disorders.
4. Describe a developmental model of substance dependency and evaluate the risk factors
that contribute to the different stages in this model.
5. Describe and evaluate the efficacy of a range of psychological and biological treatment
methods for treating substance use disorders.
My name is Tim and I am from Yorkshire. I had a normal life until I was 12 years old, and then my mother and
father started to fight. The fights were very violent and quite frightening; I have since learnt that this was mostly
my mother's fault. It became apparent that we were left outside Pubs a lot but it seemed normal. My two Brothers
and I suffered a terrible few years; the scars are still with us.
Our house was sold and we ended up on a bad Council estate in Sheffield which has since been knocked down.
The violent drinking bouts got worse and I left home although I was 15. I still found a job but suffered terribly
over leaving my younger Brothers. I found a bedsit and a job and the peace was heavenly. I then moved to Derby to
live with my Uncle's family and eventually got married to a lovely lady and had two daughters.
My drinking started in Derby. No one thing made me drink but I gradually drank more and more over the years. I
started my own catering business and was extremely successful. I employed 65 staff and enjoyed all the benefits of
being my own boss. I had money, cars, and plenty of time to drink!! I did not know then what would happen
because of my drinking. My wife told me about my behaviour but I ignored her and her advice. I would not listen
to anyone. Worst of all I was out on the road driving to my catering sites and drinking all day. I still functioned
but I do not know how to this day nor do I know how I kept my licence.
I sold my Company and borrowed £100,000 from the Bank to buy—yes, you guessed—a Pub/Restaurant.
What a nightmare—my own “Booze” on tap!! Needless to say the venture was doomed from the start. I drank
morning noon and night and had plenty of friends or so I thought. Eventually my Wife left me and went back to
her Parents, and I do not blame her.
I went Bankrupt and moved to a bedsit once again. I then went on cider and anything else I could get. I had
defrauded the Customs and Excise while I was drinking so I ended up in Prison for 12 months which was a
disaster. They put me in charge of the Officers mess and bar!!!! Needless to say I was in seventh heaven and came
out a complete wreck and moved from City to City for 10 years. I was sacked from numerous chefs' jobs and was
in and out of several mental hospitals all over the Country. I did stay dry for a while but when my father died I
started to drink again and went back to Prison as I wanted the peace and friendship I found the first time, however
this was not meant to be and I found it very hard to cope without the booze second time around.
I was begging in Soho when I decided to try and turn my life around. I moved to Leicester where—through
Alcoholics Anonymous—I stopped drinking. I did have re‐lapses but following hepatitis, jaundice, and a bleeding
throat I stopped four and a half years ago. I could not suffer those terrible withdrawals again and I still have the
scars of drinking—epilepsy and digestive problems. But I am dry.
Tim's Story
Introduction
A drug can be very loosely defined as any substance, other than food, that affects either our bodies or
our minds in some way. Such substances may give us energy, relax us when nervous, change our ways of
thinking, distort our perceptions, or change our moods. They can, of course, have these effects either for
better or for worse, and the short‐term benefit of a substance may lead to longer term physical and
mental costs (as the experience of Tim, above, clearly demonstrates). Nevertheless, in most Westernised
cultures, drugs are almost a normal part of daily life. We use drugs to wake up in the morning (caffeine
in tea and coffee), to stay alert during the day (nicotine in cigarettes), to reduce pain (aspirin and
paracetamol), to control our physical shape (dieting pills), and to relax (alcohol, sleeping pills). While the
use of drugs in this way may seem to provide benefits to daily living, there are a number of problems
that arise out of this culture: (a) while many of these substances have short‐term benefits they may have
longer term negative physical and psychological effects with persistent use (e.g., alcohol), (b) many
people either become psychologically or physically addicted to a drug, and continue to use the drug
when it no longer has the original benefits (e.g., sleeping pills and dieting pills), and (c) many people
move on from taking legal drugs to taking illegal substances, many of which are physically damaging,
highly addictive, and frequently blight social, educational, and occupational performance (e.g., cocaine,
heroin, solvents, and hallucinogens such as lysergic acid diethylamide (LSD)). Furthermore, in addition
to traditional illicit drugs such as cocaine and ecstasy, recent years have been characterised by a
dramatic rise in the number of newer classes of psychostimulants—usually known as synthetic cathinones,
but more frequently referred to as ‘legal highs’, or alternatively as ‘bath salts’ or ‘plant food’ on the
basis of how their packaging is used to disguise the contents (Baumeister, Tojo, & Tracy, 2015) (see
Focus Point 9.5).
drug A substance that has a physiological effect when ingested or otherwise introduced into the
body.
bath salts ‘Bath salts’ is the name for an emerging group of drugs containing synthetic
chemicals related to cathinone, which is an amphetamine-like stimulant found in the khat plant.
The abuse and misuse of drugs has become one of society's biggest problems. Substance abusers often
pay a high personal cost for their dependency in terms of failed relationships, ruined careers, poor
health, and premature death (Photo 9.1). Society also pays a high cost in terms of lost productivity and
the strain such abuse puts on national health resources. In 2019, the World Drug Report (WDR; United
Nations Office on Drugs and Crime 2019a) estimated that (a) in 2017 over 271 million people
worldwide aged 15–64 had used an illicit drug at least once in the previous year, representing 1 in every
18 people in the world; (b) there were an estimated 165,000 deaths worldwide from illicit drug use in
2017, and added to this there were an additional 184,000 direct deaths from alcohol use (Ritchie &
Roser, 2019); (c) with an estimated 188 million people using cannabis in 2017, it remains the world's
most widely used illicit substance; and (d) in terms of prevalence, amphetamine‐type stimulants (ATS)
(including ‘ecstasy’) remain second only to cannabis, with an estimated 40 million users worldwide in
2017 (Figure 9.1).
PHOTO 9.1 Paul Gascoigne was well known as a talented footballer throughout Europe, having played for teams such
as Newcastle United, Tottenham Hotspur, Lazio, and Rangers. But since retiring from professional football, his life has become dominated by his dependency on alcohol and its associated mental health problems. Like most people with a
substance use disorder, his health suffered, his problems curtailed a promising coaching career, and he has had numerous
run‐ins with the law. Despite a willingness to enter rehabilitation, his many relapses are well known, and such relapses are
a common feature of treatment for severe substance use disorder.
Even legal drugs such as tobacco and alcohol are problematic, and their use regularly leads to death, illness,
and impoverishment. There are more than one billion tobacco smokers worldwide, of whom around
80% live in low- and middle-income countries. Tobacco kills more than eight million people each year: more than seven million of these deaths are the direct result of tobacco use, with the remainder largely due to non-smokers being exposed to second-hand smoke (World Health Organization, 2020b).
The world's population consumes an average of 6.4 L of alcohol a year per person (an equivalent of 53
bottles of wine per person older than 15 years), and the harmful use of alcohol results in 2.8 million
premature deaths a year (Ritchie & Roser, 2019). In the UK, the percentage of the population that
reports smoking cigarettes has declined to 15.1% in 2017 (down from 39% in 1980, and 21% in 2008)
(Office for National Statistics, 2018), and alcohol consumption in the UK has been slightly decreasing
over the past 20 years. This is exemplified by a steady decrease in the percentage of both men and
women drinking over 8 units a day between 2006 and 2017 (see Figure 9.2), and a decline in the
numbers in treatment for alcohol problems since 2014 (NHS, 2019a).
Although drug use in adolescents and school children in the UK is still a recognised problem and may
well lead to lifelong dependency and health problems, the numbers of young people in specialist
substance misuse services has shown a decrease of almost 35% since its peak in 2008–2009 (Public
Health England, 2018). However, despite these encouraging figures, 15% of young people in 2016 said
they had taken drugs in the previous year, up from 10% in 2014 (NHS Digital, 2017), and Department
for Education data for 2016–2017 show that school exclusions for alcohol and drug use have increased substantially in recent years, with fixed-term exclusions up 34% since 2012–2013 (Department for
Education, 2018). While these figures suggest that there may be a reduction in young people accessing
specialist substance abuse services in the UK in recent years, other factors indicate that the number of
young people using drugs may have started to increase again in most recent times (Public Health
England, 2018). Unfortunately, the evidence indicates that once an individual has used one illegal drug,
a majority will go on to abuse more than one (e.g., cocaine, cannabis, crack cocaine) (Tsuang et al.,
1998), and multiple drug abuse significantly increases other risks to well‐being such as being in a car
crash, mental health problems, violent behaviour, and promiscuous sexual behaviour (Greenwood et al.,
2001).
FIGURE 9.1 Global trends in drug use 2006–2017. The global prevalence of illicit drug use worldwide has grown
only slightly between 2006 and 2017 (at between 4% and 6%) while the prevalence of people with illicit drug use
disorders has remained relatively stable over this time (prevalence rate <1%) (United Nations Office on Drugs and Crime,
2019a).
FIGURE 9.2 Percentage of men and women in England who drank more than 8 units in a day. The percentage of men
and women drinking more than 8 units of alcohol a day has fallen significantly between 2006 and 2017 (NHS,
2019a,b,c).
The significant risk to physical health, mental health, social integration, and productivity posed by
substance abuse and dependence makes it quite a suitable subject for prevention and treatment. If we
look at Tim's story at the beginning of this chapter, we can see that his alcohol abuse and dependence
resulted in failed relationships; a ruined career and business; criminality; physical health problems such
as hepatitis, jaundice, and epilepsy; and mental health problems requiring hospitalisation. The
remainder of this chapter looks at some of the physical and psychological factors that lead to
dependence on, and abuse of, a range of substances, and how these problematic behaviour patterns can
be treated. But first, it is necessary to describe some of the terminology commonly used in this area of
psychopathology, and to look at the more general criteria for diagnosing and describing substance abuse
and dependence.
substance abuse A pattern of drug or substance use that occurs despite knowledge of the
negative effects of the drug, but where use has not progressed to full-blown dependency.
Social impairment:
Substance use results in failure to fulfil major role obligations at work, school, or home
The individual persists with substance use despite recurrent social and interpersonal problems caused by the substance
The individual may withdraw from family activities and hobbies in order to use the substance
addiction When a person’s ‘normal’ body state is the drugged state (so that the body requires
the substance to feel normal).
craving The strong subjective drive that addicts have to use a particular substance.
psychological dependence When individuals have changed their life to ensure continued
use of a particular drug such that all their activities are centred on the drug and its use.
tolerance The need for increased amounts of a substance in order to achieve similar effects
across time.
withdrawal Where the body requires the drug in order to maintain physical stability, and lack
of the drug causes a range of negative and aversive physical consequences (e.g. anxiety, tremors
and, in extreme cases, death).
TABLE 9.2 Basic terminology in the study and treatment of substance use disorders
Terminology Definition
Addiction: Drug use to the point where the body's ‘normal’ state is the drugged state (so the body requires the drug to feel normal)
Psychological dependence: The user's tendency to alter their life because of the drug and to centre their activities around the drug
Craving: A strong subjective drive to use the substance
Tolerance: The need for greater amounts of the drug or substance to achieve intoxication (or the desired effect), or a markedly diminished effect with continued use of the same amount of the drug or substance (DSM-IV-TR, p. 192)
SELF‐TEST QUESTIONS
Can you define the terms craving, tolerance, and withdrawal?
What are the main diagnostic criteria for substance use disorder?
What is craving?
SECTION SUMMARY
stimulants Substances that increase central nervous system activity and increase blood
pressure and heart rate.
sedatives Central nervous system depressants which slow the activity of the body, reduce its
responsiveness, and reduce pain, tension, and anxiety. This group of substances includes alcohol,
the opiates and their derivatives (heroin, morphine, methadone and codeine), and synthesised
tranquillizers such as barbiturates.
FIGURE 9.4 A drug chart showing how the different substances described in this chapter overlap across categories.
Alcohol has its physical and psychological effects when its main constituent, ethyl alcohol, is absorbed
into the bloodstream through the lining of the stomach and intestine. Alcohol then reaches the brain
and central nervous system via the bloodstream. At first, alcohol acts to relax the individual, and it does
this by influencing the receptors associated with the neurotransmitter gamma‐aminobutyric acid
(GABA). Alcohol facilitates GABA's inhibitory function, preventing neurons from firing and making the drinker feel more relaxed (Harvey et al., 2002). Initially, this makes the drinker more
talkative, friendly, confident, and happy. As more alcohol is absorbed into the central nervous system,
the second stage of intoxication sets in: the drinker becomes less able to make judgements, talks less coherently, and shows impaired memory, and may switch from being relaxed and happy to emotional and
aggressive. Finally, the physical effects of alcohol intoxication include motor coordination difficulties (in
balance and walking), slowed reaction times, and blurred vision. This time course of alcohol's effects is
known as biphasic, because the initial effects act as a stimulant (making the drinker reactive and happy),
but the later effects act as a depressant (making the drinker sluggish and experience negative emotions).
We can see how drinking alcohol can be appealing to many people because of its initial effects (i.e., it
helps alleviate stress after a busy day at work, increases sociability, reduces inhibitions, etc.). However,
many of the so‐called effects of alcohol are actually mythical, and result from a drinker's expectations
about the effects of alcohol rather than its real effects. For example, in a couple of classic studies, Lang,
Goeckner, Adesso, and Marlatt (1975) and Wilson and Lawson (1976) gave participants a disguised nonalcoholic beverage when they were expecting alcohol. Participants subsequently reported increases in sexual arousal and aggression, even though they had become less physiologically aroused. Expectations
about the effects of alcohol appear to play an important role in drinking behaviour, with positive
expectancies about the effects of alcohol being a significant predictor of its use (Sher, Wood, Wood, &
Raskin, 1996).
delirium tremens (DTs) A severe form of alcohol withdrawal that involves sudden and
severe mental or nervous system changes.
fetal alcohol syndrome Physiological risk associated with heavy drinking in women, in
which heavy drinking by a mother during pregnancy can cause physical and psychological
abnormalities in the child.
The DSM-5 diagnostic criteria for alcohol use disorder are provided in Table 9.4; the disorder is defined by a cluster of behavioural and physical symptoms such as evidence of tolerance effects and withdrawal symptoms that develop within 4–12 hours of restricted consumption. However, many individuals with
alcohol dependence may never experience withdrawal once a pattern of compulsive drinking develops
in which their whole life centres around obtaining and consuming alcohol. Work performance and
childcare or household responsibilities may be significantly affected either by the aftereffects of drinking
(e.g., hangovers) or by being intoxicated while trying to perform these functions. Interestingly, a US
national survey indicated that workplace alcohol use and impairment directly affected an estimated 15%
of the US workforce, with 1.6% working under the influence of alcohol, and 9.2% working with a
hangover (Frone, 2006), and lost productivity features as the dominant economic cost of alcohol
consumption in many countries around the world (Rehm et al., 2009). Alcohol abuse is also
characterised by the drinker putting themselves at physical risk while intoxicated, including drink
driving and becoming engaged in violent arguments (see The Costs of Alcohol Use Disorders section).
Such individuals will also continue to drink when they know that their drinking is a cause of significant
social or interpersonal problems (such as their physical abuse of family members, or by causing
problems in their relationship with a partner) (Focus Point 9.2).
alcohol use disorder A problematic pattern of alcohol use leading to clinically significant
impairment or distress.
TABLE 9.4 Summary: DSM‐5 diagnostic criteria for alcohol use disorder
A pattern of alcohol use causing impairment or distress leading to at least two of the following
within a 12‐month period:
Alcohol is taken in greater amounts or for longer than was intended
A continuing desire or unsuccessful efforts to control alcohol use
A lot of time is spent in acquiring, using and recovering from the effects of alcohol
Craving, or a strong desire to use alcohol
Alcohol use results in a failure to fulfil major life roles at work, home and so forth
Persistent alcohol use despite the effect on interpersonal, recreational, or social interactions or
despite having an ongoing physical or psychological problem that is likely to have been caused or
made worse by alcohol
Tolerance symptoms associated with high alcohol use
Withdrawal symptoms associated with high alcohol use
Prevalence of use
The 12-month and lifetime prevalence rates for alcohol use disorder are an alarming 13.9% and 29.1% respectively (Grant et al., 2015). Dependence is more prevalent among men than women, in younger
and unmarried adults, and those in lower socio‐economic groups (Hasin, Stinson, Ogburn, & Grant,
2007). There are some ethnic differences in prevalence rates, with White Americans being more likely to
be diagnosed than Black Americans, and rates of diagnosis are also inversely related to educational
level. Alcohol dependence and abuse is frequently associated with abuse of other drugs, and is highly
comorbid with other psychiatric disorders. For example, heavy alcohol use is often part of polydrug
abuse or abuse of more than one drug at a time, and over 80% of alcohol abusers are smokers (e.g.,
Shiffman et al., 1994). This strong relationship between alcohol and tobacco abuse may be a result of
nicotine acting to suppress the aversive, sleep‐promoting effects of alcohol (Sharma, Lodhi, Sahota, &
Thakkar, 2015). Alcohol use disorder is also associated with other mental health diagnoses such as major
depressive disorder, bipolar I disorder, and antisocial personality disorder (Grant et al., 2015).
SELF‐HELP GROUPS
The most commonly sought source of help for alcohol-related problems is community self-help groups such as Alcoholics Anonymous (AA) (http://www.alcoholics-anonymous.org.uk).
AA describes what it calls 12 steps that alcoholics should achieve during the recovery process
(https://www.alcoholics‐anonymous.org.uk/About‐AA/The‐12‐Steps‐of‐AA), and the 12‐step
programme has been shown to achieve long‐term abstinence in around 25% of participants
and a significant decrease in alcohol consumption in 78% (Ouimette, Finney, & Moos, 1997).
Many of the beneficial effects of self‐help groups such as AA may be attributable to the client
replacing social networks of drinking friends with other AA members.
PHARMACOTHERAPY
Drugs have been developed that attempt to block alcohol‐brain interactions that might promote
alcohol dependency. One of these is the drug naltrexone, which helps prevent relapse in those
recovering from alcohol dependency. Acamprosate has also been shown to be a successful
treatment, with outcome studies suggesting that it enabled twice as many clients to
remain abstinent 1 year later than did psychosocial therapy alone (Swift, 1999). In addition,
some drugs, such as ondansetron, have been shown to be effective with early‐onset alcoholics
who began drinking heavily before 25 years of age (Johnson et al., 2000).
naltrexone An opioid receptor antagonist which has been found to be beneficial in the
control of hyperactivity and self‐injurious behaviour.
BRIEF INTERVENTIONS
Many people with alcohol‐related problems receive brief periods of treatment, such
as counselling (five or fewer sessions). Such treatments are usually conducted by GPs, nursing
staff, or trained counsellors and consist mainly of communicating alcohol‐relevant health
advice, providing information on the negative consequences of drinking, and offering practical
advice on community resources that might help achieve moderation or abstinence. Controlled
trials in the US and Canada have demonstrated that this approach significantly reduced
alcohol‐related problems and increased use of health care services (Fleming, Barry, Manwell,
Johnson, & London, 1997; Israel et al., 1996). Brief interventions are particularly valuable for
helping those in the early stages of alcohol use who are at risk of developing full‐blown alcohol
use disorders.
Summary
Alcohol use disorders are alarmingly prevalent in most societies and cause significant short‐term and
long‐term impairment, including impairment to occupational, educational and social functioning, and
they have important detrimental long‐term effects on health. Alcohol use disorders are also closely
associated with a range of social problems, such as drink driving, violent crime, and criminal activities
generally. It is still unclear why some people acquire an alcohol dependency, although alcohol use
disorders are highly comorbid with other psychiatric disorders—including other substance abuse
disorders. This suggests that, for many people, alcohol use may become a means of coping with adverse
or challenging life experiences because most alcohol users have an expectancy that drinking alcohol will
have beneficial effects (e.g., reduce tension, make social interactions easier).
Prevalence of use
After alcohol, nicotine is the second most widely used drug worldwide, and kills up to half of its users
(World Health Organization, 2013). About one‐third of the adult global population smokes and, among
teenagers aged 13–15 years, about one in five smokes worldwide. While the rate of smoking is gradually
falling in developed nations, it is rising by 3.4% per year in the developing world (World Health
Organization, 2015), and nearly 80% of the world's 1.1 billion smokers live in low‐ and middle‐income
countries (World Health Organization, 2020b). Evidence suggests that around 50% of those who start
smoking in their adolescent years will go on to smoke for at least a further 15–20 years. In the UK, the
percentage of the population that reports smoking cigarettes has declined to 15.1% in 2017 (down from
39% in 1980, and 21% in 2008) (Office for National Statistics, 2018), but the level of use in those people
who do smoke is also still unacceptably high, with smokers reporting an average of 11.3 cigarettes a day
(Office for National Statistics, 2017a)—and these figures are worrying when we come to look at the
adverse long‐term health consequences of smoking (discussed later). The overall decrease in smoking
prevalence since 1980 seems to be mainly due to the increase in people who have never smoked or only
occasionally smoked, with the proportion of adults who have never smoked rising from 43% in 1982 to
55% in 2010 (UK Government Statistics, 2010). A quarter of children in the UK aged 11–15 have tried
smoking at least once, and in 2011, 5% of children were regular smokers. It is worth noting that almost
two thirds of smokers in the UK said they wanted to give up, but over half said it would be difficult to
go without a cigarette for a day, and one of the main DSM‐5 criteria for substance use disorder is
repeated unsuccessful attempts to control use of the substance. However, e‐cigarettes have
offered many smokers an opportunity to quit smoking traditional cigarettes: the number of
‘vapers’ in the UK rose to approximately 3.2 million in 2018, and more than half of
people using e‐cigarettes give their reason for vaping as an aid to stopping cigarette smoking (Office for
National Statistics, 2019a).
Finally, legislation prohibiting smoking in workplaces and enclosed public areas is being introduced in
many countries across the world, and was introduced in England in 2007. The effect of this legislation
has been to significantly reduce exposure to secondhand smoke (SHS exposure among children declined
by nearly 70%), to decrease the number of hospital admissions for cardiac problems, and to produce
a statistically significant increase in the number of smokers making a quit attempt (Bauld, 2011).
tobacco use disorder A problematic pattern of tobacco use leading to clinically significant
impairment or distress.
TABLE 9.5 Summary: DSM‐5 diagnostic criteria for tobacco use disorder
A pattern of tobacco use causing impairment or distress leading to at least two of the following
within a 12‐month period:
Tobacco is taken in greater amounts or for longer than was intended
A continuing desire or unsuccessful efforts to control tobacco use
A lot of time is spent in acquiring and using tobacco
Craving, or a strong desire to use tobacco
Tobacco use results in a failure to fulfil major life roles at work, home, and so on
Persistent tobacco use despite the effect on interpersonal, recreational, or social interactions or
despite having an ongoing physical or psychological problem that is likely to have been caused or
made worse by tobacco
Tolerance symptoms associated with high tobacco use
Withdrawal symptoms associated with high tobacco use
Twelve‐month and lifetime prevalence rates for DSM‐5 nicotine use disorder in a US population are
estimated at 20.0% and 27.0%, respectively (Chou et al., 2016), and prevalence is found to be as high as
92% in individuals with lung cancer (Paik et al., 2019). Tobacco use disorder is also found to be comorbid with
a range of other psychiatric disorders, the most common being alcohol/substance use disorders,
depressive, bipolar, anxiety, and personality disorders, and ADHD (DSM‐5, American Psychiatric
Association, 2013), with comorbidity ranging from 22% to 32% in these cases (Focus Point 9.3).
As we have already noted, smokers find it extremely hard to quit the habit—even though they
may be fully aware of the health implications of their habit, and even when they themselves are
already suffering from smoking‐related diseases. Since around 80–90% of all smokers would
meet DSM‐5 criteria for substance use disorder, successfully treating nicotine dependence is
likely to need a range of approaches, including psychological and pharmaceutical.
Smoking is difficult to treat because (a) smokers are constantly suffering nicotine withdrawal
symptoms when not smoking, and this drives the craving for further cigarettes, and (b) smokers
come to use cigarettes as a way of dealing with any negative mood (not just those associated
with withdrawal), so any life problems that cause negative affect and stress are likely to trigger
the desire to smoke.
For these reasons, treatment programmes for smokers tend to have poor success rates and high
relapse rates. Around 40% of smokers report attempts to quit in a given year, making an
average of 2.1 attempts to do so (Borland, Partos, Yong, Cummings, & Hyland, 2012), and it is
estimated that it may take regular smokers up to 30 attempts before they successfully quit the
habit (Chaiton et al., 2016).
There are some important predictors of whether an attempt to quit will fail, and these include
(a) a diagnosis of major depression (Glassman, 1993)—50% of smokers who make repeated
unsuccessful attempts to quit can be diagnosed with major depression, (b) regular bouts of
negative mood which increase cigarette cravings, and (c) whether the person has to spend
periods of time in environments where smoking is common and cigarettes are readily available
(e.g., pubs and bars).
Some of the main forms of intervention for smoking are the following:
E‐CIGARETTES
Since being introduced to the market in 2003, e‐cigarettes have seen an exponential increase in
global use and are frequently used as a means of quitting smoking tobacco cigarettes. An e‐
cigarette is a handheld battery‐powered vaporiser that simulates many of the behavioural
features of smoking such as the hand‐to‐mouth action but without burning the tobacco that can
create the tar that is considered responsible for many forms of smoking‐related disease. This
activity is known as ‘vaping’, and in the UK the number of users has increased from 700,000 in
2012 to 3.6 million in 2019, of whom 54% are smokers and the majority of the remainder are
ex‐smokers; only 0.8% of users are never‐smokers (Action on Smoking and Health UK, 2019).
There is evidence from randomised controlled trials that using e‐cigarettes can help people quit
smoking long‐term compared with placebos (McRobbie, Bullen, Hartmann‐Boyce, & Hajek,
2014), and may be more successful in helping smokers quit than nicotine replacement therapy
(NRT) (see next item) (Hajek et al., 2019). However, while these results are encouraging, there
are still some uncertainties about the health effects of e‐cigarettes, and as of 2019 e‐cigarettes
were still banned in some countries such as Japan, Brazil, and Singapore, and their sale
regulated in many other countries (e.g., sale is prohibited to children under 18 years in the UK).
In addition, there is worrying evidence that young people who use e‐cigarettes are more likely
to go on to smoke cigarettes (US Public Health Service, 2014).
BUPROPION
This is a mild antidepressant drug that acts as a selective inhibitor of dopamine and
noradrenalin reuptake and is thought to act directly on the brain pathways involved in
dependence and withdrawal. Bupropion is significantly more effective than a placebo control,
and 19% of those taking the drug had not smoked in the 12 months following the treatment.
The UK National Institute for Health and Care Excellence recommends using bupropion if NRTs are
inappropriate (National Institute for Health and Care Excellence, 2018).
AVERSION THERAPY
This treatment attempts to replace the pleasant feelings associated with smoking a cigarette
with negative consequences such as feeling ill or nauseous. One form of aversion therapy is
known as rapid smoking, where the smoker puffs on a cigarette roughly every 4–5 s until
they feel ill and cannot take another puff (Spiegler & Guevremont, 2003). This type of treatment
is known to reduce craving but has had limited success at controlling actual smoking behaviour
(Houtsmuller & Stitzer, 1999).
COMPLEMENTARY THERAPIES
Two forms of complementary therapy frequently used by smokers in order to try and quit are
hypnotherapy and acupuncture. There is some evidence that hypnotic and suggestion‐
based approaches do yield higher rates of abstinence relative to waiting list and no treatment
controls, but there is little systematic evidence to suggest that hypnotherapy is more effective
than equivalent placebos (Green & Lynn, 2000; Villano & White, 2004)—so those ‘stop
smoking in one session’ signs outside your local holistic health centre might be somewhat
misleading! There is some evidence that compared with control participants, acupuncture can
help smokers to reduce their levels of smoking over a number of years (He, Medbo, &
Hostmark, 2001). However, there is little more than anecdotal evidence that acupuncture is an
effective means of quitting smoking (Villano & White, 2004).
passive smoking The breathing in of air that contains other people’s smoke.
Summary
A significant number of regular smokers meet the diagnostic criteria for nicotine dependency, which
makes it an activity of concern for both clinical psychologists and medical doctors. In part, smoking
appears to be acquired as a result of the effect of nicotine on brain dopamine reward pathways and
maintained by the smoker's need to reverse the unpleasant nicotine withdrawal effects that are
experienced between cigarettes or during abstinence. Tobacco use disorder does not have many of the
short‐term costs associated with alcohol dependency (such as impairment of occupational and social
functioning), but it does have significant medium‐ to long‐term health costs, and is the single largest
cause of premature death worldwide.
The main active ingredient in cannabis is THC (Δ9‐tetrahydrocannabinol), and the amount of THC in
cannabis will determine the strength of its psychoactive effects. THC is generally believed to have low
addictive properties, although it is still possible for regular cannabis users to become dependent on the
drug (see below). THC has a mild stimulant effect by increasing heart rate, and has its psychoactive
effects by influencing cannabinoid brain receptors CB1 and CB2 found in the hippocampus,
cerebellum, and striatum (Zou & Kumar, 2018). These receptors appear to influence levels of dopamine
in those brain areas known to play a role in mediating reward and pleasure experiences, and this seems
to be the route by which cannabis has its most important positive psychoactive effects.
Cannabis was used in the mid‐twentieth century for its supposed medicinal properties, which included
its analgesic effects (see Focus Point 9.4), but it was smoked mainly for pleasure. It is now an illegal drug
in most countries even though its effects on behaviour and health are less severe than many other illicit
drugs.
Prevalence of use
Cannabis remains the most widely used illicit substance globally, and in contrast to many other common
drugs, its use has increased significantly worldwide in the past 25 years. There were an estimated 188
million people using cannabis in 2017 with an estimated annual prevalence of 3.3–4.4% of the adult
population aged between 15 and 64 years, and cannabis use has been estimated to have increased by
over 30% between 1998 and 2017 (United Nations Office on Drugs and Crime, 2019b). In the UK
cannabis has around two to five million regular users, with around one in five adolescents reporting
occasional or regular use of cannabis (Taylor et al., 2016).
FOCUS POINT 9.4 THE MEDICAL APPLICATIONS OF
CANNABIS
Long before it became an illegal drug, cannabis was used primarily for medicinal purposes. It
was known to have relaxing and analgesic effects, and was used in the 1970s to reduce the
nausea and lack of appetite caused by chemotherapy in cancer patients (Sallan, Zinberg, & Frei,
1975). Neurophysiological studies have shown that cannabis has moderate analgesic effects, and
these are caused by the active ingredient in cannabis, THC, helping to block pain signals
reaching the brain (Richardson, Kilo, & Hargreaves, 1998). These analgesic effects are more
powerful than those of codeine and of longer duration.
Because of the potential medical applications of cannabis as a powerful analgesic, there have
been significant lobbies in many countries to legalise cannabis for medical use. In a UK survey,
individuals reported the medicinal use of cannabis with chronic pain, multiple sclerosis,
depression, arthritis, and neuropathy (Ware, Adams, & Guy, 2005). Cannabis has also been
used in the treatment of patients with seizures, glaucoma, asthma, and anxiety (Mather,
2001). Outcome studies that have employed double‐blind randomised controlled trials and
placebo controls in patients with neuropathic pain or multiple sclerosis have demonstrated that
cannabis reduced the severity of reported pain significantly more in the cannabis treated than
in the placebo group (Berman, Symonds, & Birch, 2004; Zajicek et al., 2003). However, there
may be insufficient evidence that cannabis alleviates other forms of pain (Nugent et al., 2017).
Problems with the medical application of cannabis are (a) that it is still an illegal drug in most
developed countries, and (b) smoking cannabis may not be the healthiest way to take the drug
given the potential health risks associated with smoking (Mather, 2001). However, many
governments are now licensing the use of cannabis‐based drugs for use with specific patient
groups. For example, in 2005, the UK Home Office licensed the drug Sativex for individual
patient use (such as those with multiple sclerosis where cannabis has been shown to ease
stiffness, muscle spasms and pain). Sativex avoids the problems of smoking by providing the
active ingredients THC and cannabidiol in a mouth spray. Also, in 2019, the NHS in England
approved a further cannabis‐based medicine, Epidyolex, to be used in the treatment of epilepsy.
cannabis use disorder A disorder that usually develops over a period of time and is characterised
by continuing increased use of cannabis and a reduction in its pleasurable effects.
cannabis intoxication Symptoms of intoxication after recent use of cannabis begin with a
‘high’ feeling followed by symptoms that include euphoria with inappropriate laughter and
grandiosity, sedation, lethargy, impairment in short-term memory, impaired judgment, distorted
sensory perception and impaired motor performance.
TABLE 9.6 Summary: DSM‐5 diagnostic criteria for cannabis use disorder
A pattern of cannabis use causing impairment or distress leading to at least two of the following
within a 12‐month period:
Cannabis is taken in greater amounts or for longer than was intended
A continuing desire or unsuccessful efforts to control cannabis use
A lot of time is spent in acquiring, using, and recovering from the effects of cannabis
Craving or a strong desire to use cannabis
Cannabis use results in a failure to fulfil major life roles at work, home, and so on
Persistent cannabis use despite the effect on interpersonal, recreational, or social interactions or
despite having an ongoing physical or psychological problem that is likely to have been caused or
made worse by cannabis
Tolerance symptoms associated with high cannabis use
Withdrawal symptoms associated with high cannabis use
Studies have identified a number of risk factors for developing cannabis dependency, and these include
(a) age of onset—the earlier that first use is recorded the higher the likelihood of cannabis dependency
(Taylor, Malone, Iacono, & McGue, 2002), (b) tobacco smoking and regularity of cannabis use are both
independent predictors of cannabis dependency (Coffey, Carlin, Lynskey, Li, & Patton, 2003), (c)
impulsiveness and unpredictability of moods (Simons & Carey, 2002), (d) a diagnosis of conduct
disorder and emotional disorders during childhood (Meltzer, Gatward, Goodman, & Ford, 2003), and
(e) dependence on alcohol and other drugs (Degenhardt, Hall, & Lynskey, 2001).
Like many substance use disorders, cannabis use disorder is associated with a number of other
psychiatric diagnoses, and these include anxiety and panic disorder (Thomas, 1996), major depression
(Chen, Wagner, & Anthony, 2002), increased tendency for suicide (Serafini et al., 2012), and
schizophrenia (see Focus Point 8.6 in Chapter 8). This once more raises the question of whether
individuals suffering psychological problems are likely to resort to cannabis use as a form of self‐
medication, or whether cannabis use is linked to a future increase in psychiatric diagnoses (Forti,
Morrison, Alexander, & Murray, 2007). The evidence on this is far from clear, although prospective
studies indicate that (a) there is a causal link between regular cannabis use and the development of
psychotic symptoms typical of schizophrenia (Fergusson, Horwood, & Ridder, 2005; see also Chapter 8,
Focus Point 8.6), and (b) daily cannabis users may double their risk of subsequently developing
symptoms of anxiety and depression (Patton et al., 2002). Also, in a longitudinal New Zealand study,
McGee, Williams, Poulton, and Moffitt (2000) found that mental health problems at age 15 years were a
predictor of cannabis use at age 18 years, but that cannabis use at age 18 predicted increased risk of
mental health problems at age 21 years. While these studies tend to suggest that regular cannabis use
indeed predicts increased risk for subsequent mental health problems, the causal relationship may not
be direct. For example, both heavy cannabis use and mental health problems are also associated with
factors like low socio‐economic status, childhood behavioural problems, parental neglect, etc., and it
may be these factors that act as causes of both cannabis use and subsequent mental health problems.
amotivational syndrome A syndrome in which those who take up regular cannabis use are
more likely to be those who exhibit apathy, loss of ambition and difficulty concentrating.
There is also some debate about whether regular cannabis use has long‐term physical health
consequences. First, cannabis generally contains more tar than tobacco cigarettes and so presents a
significant risk for smoking‐related diseases such as cancer. Studies have suggested that cannabis smoke
can cause mutations and cancerous changes (Marselos & Karamanakos, 1999), but there is only modest
epidemiological evidence suggesting that cannabis users are more prone to cancer than nonusers
(Ghasemiesfe et al., 2019; Zhang et al., 1999). Second, regular cannabis use does appear to be
associated with a reduction in the male hormone testosterone (Grinspoon, Bakalar, Zimmer, & Morgan,
1997), and there is a possibility that this could cause impaired sexual functioning in the young males
who are the drug's main users. Third, chronic cannabis use does appear to impair the efficiency of the
body's immune system (Nahas, Paton, & Harvey, 1999), although as yet there has been no obvious effect
of this found on the rate of physical illnesses in cannabis users (Meier et al., 2016). Overall, the most
probable adverse effects of cannabis on health generally include a dependence syndrome, increased risk
of psychotic episodes for those individuals with a prior vulnerability to such episodes, and accidental
injury as a result of cognitive deficits that can be experienced within 72 hours of using cannabis.
cocaine A natural stimulant derived from the coca plant of South America which, after
processing, is an odourless, white powder that can be injected, snorted or, in some forms (e.g.
crack cocaine), smoked.
caffeine A central nervous system stimulant that increases alertness and motor activity and
combats fatigue; found in a number of different products, including coffee, tea, chocolate and
some over-thecounter cold remedies and weight-loss aids.
Stimulant use is most common among individuals aged 12–25 years, and first regular use occurs on
average at age 23 years (DSM‐5, American Psychiatric Association, 2013). Stimulants are often first used
for purposes such as controlling weight or to improve work or school performance and can be used daily
or in ‘binges’ in which high doses are used every hour or so. Addiction can occur very rapidly, and
individuals using amphetamines or cocaine can develop a stimulant use disorder in as little as 1
week, and sufferers can often develop conditioned responses to drug‐related stimuli (e.g., craving when
seeing white powder) which contributes to relapse and treatment difficulties (Volkow et al., 2006).
Approximately 16 million adults in the US used prescription stimulants in 2015 (that is, 6.6% of the
adult population), most without a use disorder (Compton, Han, Blanco, Johnson, & Jones, 2018). The
estimated 12‐month prevalence of amphetamine‐type stimulant (ATS) use disorders in the US is 0.2%
among 12–17‐year‐olds and 0.2% among adults (DSM‐5, American Psychiatric Association, 2013, p.
564). Risk factors for developing stimulant use disorder include comorbid bipolar disorder,
schizophrenia, and antisocial personality disorder. Childhood conduct disorder is also associated with
later development of stimulant‐related disorders.
Cocaine
Cocaine is a natural stimulant derived from the coca plant of South America. After it has been
processed, cocaine is an odourless, white powder that can be injected, snorted, or in some forms (e.g.,
crack cocaine), smoked. When used for recreational purposes it is usually snorted and absorbed into the
bloodstream through the mucus membrane of the nose. The ‘rush’ caused by a standard dose of
cocaine takes approximately 8 minutes to take effect, and lasts for about 20 minutes. The ‘rush’ often
brings feelings of euphoria, and has its initial effects on the brain to make users feel excited and
energised. After these initial effects, the drug then affects other parts of the central nervous system to
produce increased alertness, arousal, and wakefulness. The main effects of cocaine are caused by the
drug blocking the reuptake of dopamine in the brain. This facilitates neural activity and results in
feelings of pleasure and confidence (Volkow et al., 1997) (see Focus Point 9.8).
Prevalence of use
The lifetime prevalence rate of cocaine use in developed countries is between 1% and 3% (World
Health Organization, 2020a). In European countries this varies between 0.1% and 2.7%, with the UK,
The Netherlands and Spain being at the upper end of this range (Statista, 2018). Use is reported at
around 1.8% in the US (John & Wu, 2017).
cocaine dependence Occurs when the individual finds it difficult to resist using the drug
whenever it is available, leading to the neglect of important responsibilities.
CLIENT'S PERSPECTIVE 9.1 COCAINE DEPENDENCY
It all began in 1983 when I first starting doing Cocaine. At first it was something that I did
about once a week. Usually on a Friday night and into Saturday. I would use it to go out to the
bars and go drinking. I would buy a gram, and usually there would be a little left over for
Saturday morning. This went on for several months and as I came in contact with more people
who liked Coke, I would start to split grams with people during the week. This increased till I
was doing that everyday, this took about a year to develop. I was fixing business machines and
would collect a little money everyday, by the second year I would buy Coke at least once a day. I
thought that I would try selling it, to help with the costs that were starting to add up. But, the
Coke I would buy would always end up being snorted by myself and a couple of close friends.
By the third year nearly all of my money was going for Coke. Food became secondary to me
and I would skip days eating to be able to afford Coke. I started to hang out with a guy who
shot his Coke up with needles. We became best friends, I would fix a machine and he would be
waiting for me in my car, we would instantly go and buy Coke with the money that I had just
made, occasionally stopping somewhere for a sandwich, which was all that I would eat any
more.
This went on till I was made homeless but we managed to get a cheap flat to share. I started to
get concerned that I had a habit that I could not kick. I saw many people wreck their lives
during this period. In fact my business was in serious trouble as I never paid my bills. Part of the
problem was that Coke was really ‘the’ thing to do in this town at the time. Seemed everyone I
knew was into it. It was a real social drug. I stopped doing it at bars, and would go back to the
flat and just lay around doing Coke all the time. My friend shot his, I snorted mine. In
desperation to make more money I expanded the territory that I was working and would drive
80 miles to do service calls. I was doing about a gram a day, my friend doing the same in his
veins. My attitude in life became one of giving up and thinking that I would die eventually, but
that was OK, as long as I could do Coke till I did.
Alan's Story
Clinical Commentary
Like many people, Alan began using cocaine as a recreational drug, taking it mainly at weekends and
when socialising in bars. Because the drug has a relatively brief ‘high’ (around 30 minutes), users
require more and more regular doses in order to maintain the euphoria generated by cocaine, and the cost
of this leads to significant financial problems. As is typical of individuals with cocaine dependency, Alan
began to neglect his responsibilities, including failing to pay bills and losing his home. Eventually,
psychological dependency was complete when his life revolved entirely around acquiring and taking the
drug.
There are few deaths that are directly attributable to cocaine use, but cocaine is known to exacerbate
already existing cardiovascular problems because of its effect on blood‐pressure levels and heart rate.
Amphetamines
Amphetamines are a group of synthetic drugs used primarily as a central nervous system stimulant.
Common forms are amphetamine itself (Benzedrine), dextroamphetamine (Dexedrine), and
methamphetamine (Methedrine). These are highly addictive drugs that are used primarily to generate
feelings of energy and confidence, and to reduce feelings of weariness and boredom. They were
originally synthesised in the 1920s as an inhalant to aid breathing but came to be used later as a means
of appetite control and to combat feelings of lethargy and depression. When used in small doses,
amphetamines enable individuals to feel alert, confident, and energised. They also help motor
coordination but, contrary to popular belief, do not help intellectual skills (Tinklenberg, 1971). They
also have a number of physical effects, such as increasing blood pressure and heart rate, but can cause
headaches, fevers, tremors, and nausea.
Amphetamines have their effects by causing the release of the neurotransmitters norepinephrine and
dopamine, and simultaneously blocking their reuptake. They are normally taken in a pill or capsule
form, but methamphetamine can also be taken intravenously or by ‘snorting’. In its clear,
crystal form, methamphetamine is known as ‘ice’, ‘crank’, or ‘crystal meth’, and dependence on
methamphetamine can be particularly rapid. With the use of higher doses, and during withdrawal,
users experience a range of negative symptoms, including anxiety, paranoia, irritability, confusion, and
restlessness (Kaplan & Sadock, 1991) (Focus Point 9.6).
amphetamine intoxication A state following amphetamine use that normally begins with a ‘high’
but is equally likely to be followed by stereotyped, repetitive behaviour, anger, physically
aggressive behaviour, and impaired judgement.
Caffeine
We are probably all familiar with taking caffeine in one form or another—as are around 85% of the
population of the world. Caffeine can be found in a number of different products, including coffee, tea,
chocolate, and some over‐the‐counter cold remedies and weight‐loss aids. Caffeine is a central nervous
system stimulant that increases alertness and motor activity and combats fatigue. However, it can also
reduce fine motor coordination and cause insomnia, headaches, anxiety, and dizziness (Paton & Beer,
2001). Caffeine enters the bloodstream through the stomach and increases brain dopamine levels in a
similar way to amphetamine and cocaine. Caffeine in the body reaches its peak concentration within an
hour, and has a half‐life of 6 hours, which implies that if you have a cup of coffee at 4 p.m. that contains
200 mg of caffeine, you will still have around 100 mg of caffeine in your body 6 hours later at 10 p.m.
So while caffeine may have beneficial short‐term effects on alertness, it may have detrimental longer‐
term effects which may prevent you from sleeping. The average caffeine intake per day in most of the
developing world is less than 50 mg, compared with highs of 400 mg in the UK and other European
countries. It is taken more by men than women, and caffeine intake usually decreases with age, with
older people showing more intense reactions and reporting greater interference with sleep.
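The half‐life figure quoted above implies simple exponential decay, which can be sketched in a few lines of Python (the 6‐hour half‐life is the chapter's figure; actual caffeine half‐lives vary considerably between individuals, so treat this as an illustration rather than a pharmacokinetic model):

```python
def caffeine_remaining(dose_mg, hours_elapsed, half_life_h=6.0):
    """Caffeine (mg) left in the body after hours_elapsed, assuming
    simple exponential elimination with the stated half-life."""
    return dose_mg * 0.5 ** (hours_elapsed / half_life_h)

# The text's example: a 200 mg coffee at 4 p.m., checked 6 hours later
print(caffeine_remaining(200, 6))   # 100.0 mg at 10 p.m.
print(caffeine_remaining(200, 12))  # 50.0 mg at 4 a.m.
```

The same function reproduces the 4 p.m. coffee example in the text and shows why a late‐afternoon dose can still interfere with sleep at bedtime.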
Although caffeine consumption is almost a daily occurrence for most people, it can have both positive
and detrimental effects. On the positive side, the benefits of moderate caffeine intake include increased
task focus through improved alertness, attention, and cognitive function, as well as elevated mood, fewer
depressive symptoms, and a lower risk of suicide. However, high doses of caffeine can induce psychotic
and manic symptoms and, most commonly, anxiety (Broderick & Benjamin, 2004). These anxiety‐
generating effects make individuals with panic disorder and social anxiety disorder particularly
vulnerable to high doses (Davey, 2018; Lara, 2010). Research on the effects of regular caffeine intake
has increased significantly in recent years, and has led to the inclusion of caffeine use disorder as a
research diagnosis in DSM‐5 (American Psychiatric Association, 2013, pp. 792–793). Research that has
investigated these diagnostic criteria has found that around 8% of individuals would meet the diagnostic
criteria for caffeine use disorder, which is defined by (a) unsuccessful attempts to cut down use, (b)
continued caffeine use despite having physical or psychological problems that would be exacerbated by
caffeine, and (c) aversive withdrawal symptoms (Sweeney, Weaver, Vincent, Arria, & Griffiths, 2019).
opiates Opium, taken from the sap of the opium poppy. Its derivatives include morphine,
heroin, codeine and methadone.
Opioids
The opioids consist of opium—taken from the sap of the opium poppy—and its derivatives, which
include morphine, heroin, codeine, and methadone. In the 1800s, opium was used mainly to treat
medical disorders, because of its ability to relax the patient and reduce both physical and emotional
pain. Morphine and heroin were both derived from opium during the late 1800s and early
1900s. Both were used as analgesics, but over time it became apparent that they were highly addictive,
and even after being successfully treated with morphine or heroin, patients were unable to give
up using them. Finally, a synthetic form of opium, called methadone, was developed in Germany
during World War II. Unlike the other opioids, methadone can be taken orally (rather than injected)
and is longer lasting. Heroin is currently the most widely abused of the opioids. It is purchased in
powder form and is normally taken by injection, usually directly into a vein (known as ‘mainlining’). In
contrast, methadone is frequently used as a replacement drug for heroin abusers because of its slow
onset and weaker effects.
Heroin A highly addictive drug derived from morphine, often used illicitly as a narcotic.
In the 1990s, heroin became the recreational drug of choice for many in Europe and the US. Most
opioids and their derivatives cause drowsiness and euphoria. In addition, heroin gives a feeling of
ecstasy immediately after injection (known as a ‘rush’, which lasts for 5–15 minutes). For about 5–6
hours after this rush, the user forgets all worries and stresses, experiences feelings of euphoria and well‐
being, and loses all negative feelings. However, as with many other drugs, individuals who regularly use
heroin rapidly develop tolerance effects and experience severe withdrawal symptoms that begin about 6
hours after they have injected the dose.
Opioids have their effects by depressing the central nervous system: the drug attaches to brain
receptor sites that normally receive endorphins and stimulates these receptors to produce more
endorphins (Gerrits, Wiegant, & Van Ree, 1999). Endorphins are the body's natural opioids, and release
of these neurotransmitters acts to relieve pain, reduce stress, and create pleasurable sensations.
endorphins The body’s natural opioids. The release of these neurotransmitters acts to
relieve pain, reduce stress and create pleasurable sensations.
Prevalence of use
The estimated annual prevalence of opiate use (drugs extracted from the opium poppy such as
morphine and heroin) in 2016 was 0.3–0.4% of the global population aged between 15 and 64 (around
19 million people) (World Health Organization, 2018), and prescription opioid use in 2014 was around
0.7% (32.4 million adult users) (Vasilev, Milcheva, & Vassileva, 2016). While the majority of the world's
opiate users are in Asia, most prescription opioid users are in North America, followed by Asia and
Europe. In the UK there has been a steady decrease in the number of problem opiate users (mainly
heroin users) from 281,000 in 2004 to 257,000 in 2015.
opioid use disorders The development of tolerance to opiates, in which the user has to use
larger and larger doses to experience equivalent physical and psychological effects. Also
associated with severe withdrawal effects.
controlled drug user A long-term drug user who has never been in specialised treatment
and who displays levels of occupational status and educational achievement similar to the
general population.
unobtrusive heroin user A long-term heroin user who has never been in specialised
treatment and who displays levels of occupational status and educational achievement similar to
the general population.
hallucinogens Psychoactive drugs which affect the user’s perceptions. They may either
sharpen the individual’s sensory abilities or create sensory illusions or hallucinations.
phencyclidines Group of common hallucinogenic drugs, which includes PCP, ‘angel dust’,
and less potent compounds such as ketamine, cyclohexamine and dizocilpine.
Prevalence of use
In the US and Europe, use of LSD peaked during the 1960s and 1970s and has been gradually
declining ever since as stimulant drugs such as cocaine and amphetamines became the recreational drug
of choice. In the UK, the use of LSD has declined from 4% in 1996 to 0.5% in 2011–2012 (Home
Office, 2012). In the US, the 12‐month prevalence rate for LSD use in adults was 0.3% in 2011, a figure
that has also been declining significantly over the past 20 years (National Institute on Drug Abuse,
2012).
Ecstasy
Most readers will by now be aware of the drug MDMA (3,4‐methylenedioxymethamphetamine)—better
known as the ‘clubbing drug’ ecstasy. Over the past two decades, ecstasy has often been the drug of choice
for those regularly attending techno‐dance parties, raves, or nightclubs. It is usually taken in pill form
and acts as both a stimulant and hallucinogen. It gives the user added energy to continue partying, and
elevates mood. Ecstasy has its effects by releasing the neurotransmitters serotonin and dopamine
(Malberg & Bronson, 2001; Vegting, Reneman, & Booij, 2016). Elevated levels of serotonin generate
feelings of euphoria, well‐being and sociability, and sounds and colours are experienced more intensely
(high levels of brain serotonin are also found in individuals with bipolar disorder experiencing a manic
phase). Effects can be experienced within around 20 minutes of taking the dose and will last for around
6 hours. However, high levels of brain dopamine can cause psychotic symptoms, such as paranoid
thinking and confusion, and these are symptoms often experienced by regular ecstasy users (Photo 9.2).
Prevalence of use
Global use of ecstasy‐group substances is estimated at 0.2–0.6% of the adult population (between 10.5
and 28 million users) and is comparable to levels of cocaine use (United Nations Office on Drugs and
Crime, 2012). There was some evidence that usage was declining, but most recent evidence suggests a
possible resurgence of ecstasy use in Europe and the US. In the UK, there were estimated to be around
0.2 million ecstasy users in 2011–2012; the proportion of users had fallen from 1.8% of the
population in 2000 to 1.4% in 2011–2012, but may have risen again in recent years to 1.7% of the
population in 2018 (Statista, 2019).
In addition to this developmental view, we also need to look at some of the neurological and
behavioural processes that underlie substance use disorders. These include the neurocircuitry of
addiction (i.e., the reward pathways in the brain that make substance use pleasurable), and the
psychology of ‘craving’ (i.e., the way in which liking and wanting a drug becomes conditioned to
external cues which trigger the desire for that drug). These two processes are highlighted and described
in Focus Points 9.8 and 9.9.
With most of the drugs of abuse that we've discussed earlier in this chapter, you'll notice that
they all either have pleasurable effects on mood or they help people to feel less bad (e.g., by
alleviating negative moods or withdrawal symptoms) (Koob & Le Moal, 2008). Research on
both humans and animals suggests that a broad range of drugs have their pleasurable effects by
activating the natural reward pathways in the brain by converging on a common circuitry in the
brain's limbic system (Feltenstein & See, 2008), and many of these drugs also cause permanent
adaptive effects on this common pathway that contribute to the progression and maintenance
of addiction (Taylor, Lewis, & Olive, 2013). Drugs achieve their pleasurable effects by
influencing the dopamine system, and in particular, the dopaminergic neurons in the ventral
tegmental area (VTA) of the midbrain and subsequent areas in the limbic forebrain—
especially the nucleus accumbens (NAc). This VTA‐NAc pathway is arguably the most
important reward pathway in the brain and gives rise to the pleasurable effects caused not only
by drug use but also by food, sex, and social interactions (Kelley & Berridge, 2002; Tobler,
Fiorillo, & Schultz, 2005), and it has also been implicated in other addictive behaviours such as
pathological overeating, gambling, and sex addictions (Nestler, 2005). Several additional brain
areas have been shown to interact with this VTA‐NAc pathway and are implicated in drug
addiction. These include regions in the amygdala, hippocampus, hypothalamus, and several
regions of the frontal cortex (Koob & Le Moal, 2008; Nestler, 2001).
ventral tegmental area (VTA) Part of the midbrain associated with the dopamine
system.
nucleus accumbens (NAc) Part of the limbic forebrain and dopamine system.
The figure shows the converging actions that a number of different drugs of abuse have by
affecting different components in the VTA‐NAc reward pathway (after Nestler, 2005).
Stimulants directly increase dopaminergic transmission (DA) in the NAc. Opiates do the same
indirectly by inhibiting GABAergic interneurons in the VTA, which disinhibits the dopamine
neurons there. Opiates also act directly on opiate receptors in the NAc. Nicotine seems to
activate VTA dopamine neurons directly. Alcohol appears to have a number of different effects
at different points in the pathway, and the effects of cannabis are also complex but may act
directly on NAc neurons themselves. Finally, phencyclidine (PCP) may act directly on neurons
in the NAc.
We continue by describing some of the risk factors that influence substance use at different
developmental stages (Figure 9.5).
9.4.1 Experimentation
Availability
Whether an individual can readily get access to the substance is an important factor in the early stages
of substance use, and factors such as whether the drug is legally available (e.g., alcohol and cigarettes)
and its cost are important determinants of initially experimenting with the drug. For example, there is
evidence for an inverse relationship between the use of a drug and its cost—especially amongst young
adolescents. This is true for alcohol and for cigarettes (Room, Babor, & Rehm, 2005; Stead & Lancaster,
2005), which suggests that strategies such as enforcing the minimum age for purchase of tobacco and
alcohol and increasing the price of these commodities may be effective means of controlling early use
(Ogilvie, Gruer, & Haw, 2006). Availability and cost are also significant factors influencing the use of
illicit drugs. For example, when Mexican heroin production increased from 8 to 50 metric tons a year
between 2005 and 2009, availability became one of the main factors identified in the decision by an
individual to start using heroin in the US (Cicero, Ellis, Surratt, & Kurtz, 2014).
Familial factors
Two important factors that can influence early substance use are (a) whether the substances are
regularly used by other family members, and (b) whether the family environment is problematic. For
example, if a child's parents both smoke, then the child is significantly more likely to smoke at an
early age, and if both parents regularly drink alcohol, the child is also more likely to drink at an
early age (Hawkins et al., 1998; Mattick et al., 2017). Similarly, having older siblings or spouses who
abuse drugs significantly increases the risk of drug abuse (Kendler, Ohlsson, Sundquist, & Sundquist,
2013). Neglectful parenting also increases the use of alcohol, cigarettes, and cannabis by the child
(Cadoret, Yates, Troughton, Woodworth, & Stewart, 1995), and the negative background factors that
predict longer‐term substance use include (a) substance use in the childhood home, (b) extreme poverty
in the childhood home, (c) marital or legal problems in the household, (d) childhood neglect and abuse
—especially childhood sexual abuse (Sartor et al., 2013), and (e) serious psychiatric illness in the
household (Alverson, Alverson, & Drake, 2000; Wills, DuHamel, & Vaccaro, 1995).
Media influences
Substance use in young adolescents is significantly influenced by advertising and exposure to the
product in media contexts such as television programmes and magazines. Exposure to advertising has
been shown to be an important factor in encouraging children to take up smoking (While, Kelly, Huang,
& Charlton, 1996), and a US study found that exposure to in‐store beer displays, magazines with
alcohol advertisements, and television beer advertising predicted drinking in school‐age children
(Ellickson, Collins, Hambarsoomians, & McCaffrey, 2005) (Photo 9.3). While banning direct advertising
of a product has been shown to produce a significant fall in adolescent use of that product (Saffer,
1991), substance use (such as smoking) is still linked to indirect exposure to images found in fashion,
entertainment and gossip magazines, and TV and films—especially when alcohol and tobacco products
are regularly associated with attractive people enjoying themselves (Carson, Rodriguez, & Audrain‐
McGovern, 2005). In addition, while advertisements for smoking‐related products such as e‐cigarettes
might seem to be a good way of diverting young people away from tobacco to safer options, studies
suggest that exposure even to e‐cigarette advertisements may inadvertently decrease the perceived risks
of smoking among non‐smokers (Kim, Popova, Halpern‐Felsher, & Ling, 2017). It may not just be
explicit advertising that influences first drug use, but any exposure to a drug regardless of the context.
PHOTO 9.3 Exposure to in‐store beer and alcohol displays has been shown to be one factor that encourages children to
begin drinking alcohol.
Mood regulation
One of the main reasons for using drugs is that they have important mood‐altering effects. Alcohol
makes the drinker friendly, confident and relaxed (Harvey et al., 2002), smokers claim that smoking
cigarettes has a relaxing and calming effect (Ikard et al., 1969), stimulants—such as cocaine and
amphetamines—affect reward pathways in the brain causing feelings of euphoria, energy, and
confidence (Taylor, Lewis, & Olive, 2013; Volkow et al., 1997), opiates—such as heroin—generate
immediate feelings of ecstasy and feelings of well‐being and loss of negative emotions, and
hallucinogens—such as cannabis—produce feelings of relaxation, euphoria, sociability, and sharpened
perceptions. Interestingly, most of the drugs that we've discussed in this chapter have their pleasurable
effects by activating a common brain circuitry that governs reward and pleasure, and a full discussion of
this circuitry is provided in Focus Point 9.8. Given these rewarding effects of drug use, it makes sense to
assume that regular use develops because drug use is reinforced by these pleasurable
consequences. Furthermore, if taken regularly enough, many drugs can cause permanent changes to the
brain reward system in such a way as to facilitate addiction (Taylor et al., 2013).
reward pathways The brain neurocircuitry that make substance use pleasurable.
In addition to the intrinsic pleasurable effects of some substances, a good deal of research has been
carried out on the putative tension or stress‐reducing effects of drugs such as nicotine and alcohol.
There is some evidence that alcohol may reduce tension, even in individuals who are not alcohol
dependent (Sher & Levenson, 1982), and this is consistent with the drinker's everyday belief that
drinking is a good way to unwind, such as after a demanding day at work. However, the picture is not
quite that simple. Alcohol appears to reduce responding in the presence of both negative and positive
affect, and so may simply have an arousal‐dampening effect regardless of the valence of the drinker's
emotional state (Stritzke, Patrick, & Lang, 1995). Subsequent studies have indicated that alcohol has its
apparent arousal‐dampening effects by altering perception and attention. The alcohol‐intoxicated
individual has less cognitive capacity available to process all ongoing information, and so alcohol acts to
narrow attention, so that the drinker processes fewer cues, and processes them less well. This is known
as alcohol myopia (Steele & Josephs, 1990), meaning that the drinker's behaviour is likely to be under the
influence of the most salient cues in the situation. In lively, friendly environments this will result in the
drinker processing only these types of cues, and as a consequence will feel happy and sociable and will
not have the capacity to simultaneously process worries or negative emotions. However, in drinking
situations where there are no happy, lively cues (such as in the case of the unhappy, lone drinker), the
reduced cognitive processing can result in attentional‐focusing on negative thoughts, experiences, and
emotions and means that the drinker experiences more negative affect than if they had abstained.
alcohol myopia The situation in which an alcohol-intoxicated individual has less cognitive
capacity available to process all ongoing information, so that alcohol acts to narrow attention
and the drinker processes fewer cues, and processes them less well.
FOCUS POINT 9.9 CRAVING
The pleasurable effects of drug use can also act as potent stimuli that will generate “craving”
responses to cues associated with the drug. This occurs through a process of conditioning in
which the drug acts as a powerful unconditioned stimulus (UCS) that reinforces conditioned
responses to drug cues (see Chapter 1, Section 1.3.2, for a discussion of classical and operant
conditioning) (Robinson & Berridge, 2003). This means that cues such as the sensory features of
the drug (e.g., a white powder), the environment in which it is taken (e.g., a pub), or the people
the user socialises with when taking the drug can all become cues which elicit craving for the
drug. Cues for a particular drug can actually trigger responses in the user that are very similar
to those associated with actual use of the drug, and these include pleasurable feelings,
physiological arousal, and activation of brain reward centres (Filbey & DeWitt, 2012). In
addition, craving can induce attentional biases that enhance processing of drug cues, as well as
drug anticipatory responses that exacerbate the craving (Field, Mogg, Mann, Bennett, &
Bradley, 2013). Although generated by a nonconscious classical conditioning process, craving is
often characterised as a conscious state that intervenes between the unconscious cues and
consumption (Andrade, May, & Kavanagh, 2012). Craving has significant effects on those
suffering substance abuse disorders. People who crave a substance do use that substance more
than people who don't crave it (Berkman, Falk, & Lieberman, 2011), and as you can imagine,
craving is a significant obstacle to successful treatment and frequently leads to relapse (Evren,
Durkaya, Evren, Dalbudak, & Cetin, 2012; Paliwal, Hyman, & Sinha, 2007).
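The acquisition process that Focus Point 9.9 describes can be illustrated with the Rescorla–Wagner learning rule, a standard formal model of classical conditioning (the model is my choice for illustration; the chapter itself does not specify one). The associative strength of a drug cue grows with each cue–drug pairing, driven by a prediction error, until it approaches the maximum that the drug (the UCS) can support:

```python
def rescorla_wagner(n_pairings, alpha=0.3, lam=1.0):
    """Associative strength V of a drug cue over repeated cue-drug pairings.
    alpha is the learning rate; lam is the maximum strength the drug (UCS)
    can support. Each update is proportional to the prediction error (lam - V)."""
    v, history = 0.0, []
    for _ in range(n_pairings):
        v += alpha * (lam - v)  # prediction-error update
        history.append(v)
    return history

curve = rescorla_wagner(10)
# Cue-elicited responding grows fastest early on, then levels off near lam,
# mirroring how drug cues rapidly come to elicit craving in regular users
```

The negatively accelerated curve this produces captures why craving responses to cues such as a white powder or a familiar pub are established quickly early in a drug‐taking career.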
In the case of nicotine, we saw in Section 9.3.2 that there is evidence that regular smokers use nicotine
as a means of coping with stress (Parrott & Garnham, 1998, 1999; Schachter, 1982). This begins when
the smoker lights a cigarette in order to alleviate nicotine withdrawal symptoms, but after regular use,
many smokers come to associate smoking with tension relief generally, and so become conditioned to
having a cigarette during or after any stressful experience (Kassel, 2000). This gives feelings of relaxation
and improved concentration as nicotine levels increase, and functions to reinforce the act of smoking.
Consistent with this view is the longitudinal finding that increases in negative affect and stressful life
events are associated with increases in smoking (Wills, Sandy, & Yaeger, 2002). However, while smoking
may have an immediate apparent stress‐reducing effect for the smoker, long‐term smoking actually
increases the smoker's responses to stress by creating a hyper‐responsiveness in the body's stress system
—especially in adolescents when they are at the start of their smoking career (Holliday & Gould, 2016).
This developed hyper‐responsiveness to stress as a result of nicotine consumption may cause an addictive
vicious cycle in which smoking becomes the stress‐reducing response to increased stress sensitivity.
Finally, many drugs are powerful reinforcers that can condition the effects of drugs to stimuli and cues
associated with the drug (e.g., seeing a cigarette packet, or a white powder), and this gives rise to the
concept of ‘craving’ (Robinson & Berridge, 2003). Individuals who acquire learnt cravings for a drug
are more likely to consume more of that drug and to have a significantly higher rate of relapse following
abstention. The role of craving is discussed more fully in Focus Point 9.9.
In summary, there is clear evidence that many substances appear to have mood regulating effects—often
caused by their effects on a common brain reward circuitry in the limbic system, and so substance use
may be maintained by these effects. However, in many cases, these effects are more complicated than
they appear to the user. For example, alcohol appears not to have a simple mood enhancing effect, but
has an attentional‐focusing effect which makes drinkers feel relaxed and happy only when there are
happy cues for them to focus on. In the longer term nicotine use increases stress hypersensitivity even
though the smoker perceives smoking as a stress‐reducing activity. In addition, drugs can often have
important conditioned effects that will generate craving for the drug when drug‐related cues are
encountered.
Self‐medication
Rather than simply being used as a means of reducing the effects of everyday tensions and stressors,
substance use can become regular as a means of self‐medication when the individual is suffering
more severe adjustment difficulties such as those caused by diagnosable psychiatric disorders. This view
is supported by the fact that substance use disorders are highly comorbid with a range of other
psychiatric disorders including bipolar disorder, depression, eating disorders, schizophrenia, personality
disorders, and anxiety disorders such as OCD, post‐traumatic stress disorder, and panic disorder
(Brooner, King, Kidorf, Schmidt, & Bigelow, 1997; Regier et al., 1990) (see Table 9.3), and self‐
medication is frequently reported by substance users as a motive for using the substance (Sbrana et al.,
2005). There is evidence that anxiety disorders and mood disorders predate the onset of substance use
disorders such as alcohol dependency (Lazareck, Robinson, & Bolton, 2012; Liraud & Verdoux, 2000),
which is again consistent with the view that drugs are used for medication purposes after a psychiatric
disorder has already developed. However, if individuals who use drugs for self‐medication purposes are
aware of the longer‐term negative effects of these drugs, why do they continue to use them? Drake,
Wallach, Alverson, and Mueser (2002) suggest a number of reasons for continued use: (a) the drug has
intrinsic rewarding effects and leads to physical dependence, (b) the lives of individuals with psychiatric
disorders are so miserable that the medicinal effects of the drug offset its negative effects, and (c) the
drug may not only reduce tension and negative affect, it may have other positive consequences such as
helping the individual to cope in social situations. Finally, if substance abusers do genuinely use drugs to
self‐medicate, then their choice of drug should be consistent with their psychiatric symptoms. For
example, we would expect someone with ADHD to prefer amphetamines to alcohol because of their
stimulating properties, while individuals with anxiety would prefer alcohol to amphetamines because of
the anxiolytic effects of alcohol. However, there is relatively little evidence in support of these more
detailed predictions from the self‐medication account (Lembke, 2012).
self-medication Self-administration of often illicit drugs by an individual to alleviate
perceived or real problems, usually of a psychological nature.
Cultural variables
There are some cultural factors that will influence the transition from first use to regular use. For
example, alcohol consumption differs significantly across different countries, and is most prevalent in
wine‐drinking societies (such as France, Italy, and Spain) where drinking alcohol is widely accepted as a
social and recreational activity (deLint, 1978). Increased use in these countries may be caused by the
regular availability of alcohol in a range of situations, such as drinking wine with meals and the
availability of wine and alcohol in a broad range of social settings. There are also some culturally
determined differences in beliefs about the effects of drugs which appear to affect the frequency of use,
and Ma and Shive (2000) found that white Americans reported significantly less risk associated with a
range of drugs (alcohol, cigarettes, cocaine, and cannabis) than African or Hispanic Americans. The
former group was found to use all of these drugs significantly more than the latter groups.
Genetic predisposition
Based on twin, adoption and family studies, the heritability component for substance use disorders
generally has been estimated anywhere between 30% and 80% (Agrawal & Lynskey, 2008; Li &
Burmeister, 2009), around 40–60% for alcohol use disorders (Schuckit, 2009; van der Zwaluw & Engels,
2009), and around 40–48% for cannabis use (Verweij et al., 2010). Twin studies have indicated that the
concordance rates for alcohol abuse in MZ and DZ twins respectively are 54% and 28% (Kaij, 1960),
indicating a strong genetic component in alcohol abuse, and a similar concordance‐based genetic
predisposition has been found in twin studies of cannabis abuse (Kendler & Prescott, 1998), nicotine
dependency (True et al., 1999), and drug abuse generally (Tsuang et al., 1998). Adoption studies also
support a role for genetic inheritance in alcohol use disorders and drug abuse generally (Cadoret,
Troughton, O'Gorman, & Heywood, 1986; Kendler et al., 2012). In addition to family, twin, and
adoption studies, molecular genetic studies have begun to identify some of the genes that may be
involved in substance use disorders. For example, research has studied the link between smoking and
SLC6A3 (DAT1), a gene that regulates the reuptake of dopamine (e.g., Pomerleau, Collins, Shiffman, &
Pomerleau, 1993), and variability in this gene can make individuals more or less sensitive to the effects
of nicotine. In addition, the cytochrome P450 2A6 gene (CYP2A6) may contribute to nicotine
dependence by coding for enzymes that regulate nicotine (Murphy, 2017), and greater metabolism of
nicotine is associated with smoking faster and smoking more (Park et al., 2017). Nevertheless, while the
role of some genes has been identified in the aetiology of some substance use disorders, identification of
genetic risk variants has been challenging because genome‐wide association studies (GWAS) require
large sample sizes to obtain significant findings (Tawa, Hall, & Lohoff, 2016; Stringer et al., 2016).
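The MZ/DZ twin concordance rates quoted earlier in this section can be turned into a rough heritability estimate using Falconer's formula. This is a standard quantitative‐genetics back‐of‐envelope, strictly defined over twin correlations and only approximate when applied to concordance rates; the chapter itself does not perform this calculation:

```python
def falconer_h2(r_mz, r_dz):
    """Rough heritability estimate from MZ and DZ twin resemblance
    (Falconer's formula: h2 is approximately 2 * (r_MZ - r_DZ))."""
    return 2 * (r_mz - r_dz)

# The MZ/DZ alcohol-abuse concordance rates quoted in the text (54% and 28%)
h2 = falconer_h2(0.54, 0.28)
print(round(h2, 2))  # 0.52
```

The resulting figure of roughly 52% sits comfortably within the 40–60% heritability range cited above for alcohol use disorders.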
However, while there are important genetic considerations in understanding substance use disorders,
there is a strong effect of interactions between genetic and environmental risk factors. For instance, in a
large‐scale study in Finland, Dick et al. (2007) found that heritability was an important factor in
determining alcohol use, but its effect was amplified in those who had many peers who drank
alcohol compared with those who had fewer drinking peers. Cloninger (1987) has also argued
that a genetic predisposition for alcohol dependency will be activated only by experiencing
environmental stress (e.g., low socio‐economic status). However, other studies have indicated that there
may be a more general genes–environment interaction, where a genetic predisposition for alcohol
dependency will only cause alcohol abuse if other environmental factors are present. These
environmental factors are not necessarily stressors, but include factors which might facilitate alcohol use,
such as living in places where there are large numbers of young people (Dick, Rose, Viken, Kaprio, &
Koskenvuo, 2001) or peer pressure or parental modelling (Rose, 1998). These environmental factors
seem to be important because they are likely to initiate drinking. Once drinking has started, genetic
factors then appear to play a significant role in determining regular use, abuse and dependency (Heath,
1995), and this process is illustrated in Figure 9.6.
FIGURE 9.6 Gene‐environment interactions in substance use disorders. The initiation of substance use is influenced
largely by environmental factors; the use of the addictive substance is affected largely by genetic factors
From Wang, Kapoor, & Goate (2012).
Genetic factors may have their effect by influencing regular use and tolerance levels to drugs such as
alcohol or affecting central nervous system responses to the drug (Schuckit, 1983). For example, alcohol
dependency requires that the user has to drink a lot and to do this regularly, and many individuals who
develop alcohol use disorders appear to have inherited a strong tolerance for alcohol. That is, they
report low levels of intoxication after a drinking bout and they show fewer signs of physical intoxication
(such as body sway) (Schuckit, 1994; Schuckit & Smith, 1996), and these higher thresholds for
intoxication may permit heavier and heavier bouts of drinking that are typical of alcohol use disorders.
There is considerable evidence that certain genes influence sensitivity to alcohol, and it is the
inheritance of these genes that determines whether a drinker will become dependent. The main
candidate is a gene known as ALDH2 (Luczak et al., 2014; Wall, Shea, Chan, & Carr, 2001). Alcohol
metabolism in the liver goes through two main stages, and these are the conversion of alcohol into a
toxic substance called acetaldehyde followed by conversion of acetaldehyde into nontoxic acetic acids.
ALDH2 is thought to affect the rate at which acetaldehyde is metabolised—if it is metabolised more
slowly, then the individual will begin to feel the negative effects of its toxicity, such as nausea, headaches,
stomach pains, and physical signs of intoxication. Interestingly, many Asians are known to have a
mutant allele for ALDH2 which slows acetaldehyde metabolism and allows it to build up after a
drinking bout, and makes drinking large amounts of alcohol aversive. This appears to be an important
factor in explaining why Asians develop alcohol use disorders at only about half the rate of non‐Asians
(Tu & Israel, 1995). Other mutant forms of this gene which allow rapid metabolism of acetaldehyde
may be the inherited factor that causes tolerance effects in some individuals and leads to regular alcohol
use and dependence. That such a tolerance is inherited is supported by the fact that sons of heavy
alcohol users report being less intoxicated than others after a standard amount of alcohol and show
fewer physical signs of intoxication (Schuckit et al., 1996). Recent studies have identified a form of the
ALDH2 allele in White American college students that is associated with lower rates of alcohol use
disorder, lower levels of drinking, and with alcohol‐induced headaches (Wall, Shea, Luczak, Cook, &
Carr, 2005), suggesting that variations in the form of this gene can have an important influence on
alcohol consumption and alcohol use disorders.
Poverty
Without doubt there are important socio‐economic factors at work in determining whether individuals
will use drugs and develop from being regular users into being drug dependent. One such factor is
poverty. There is evidence that an individual's first experience with an illicit drug increases in probability
if they live in or near an economically poor neighbourhood (Petronis & Anthony, 2003). This is perhaps
not surprising given that such individuals may well be unemployed, have no other forms of recreation
available to them, have little hope of occupational or educational fulfilment, and already live in
subcultures that revere drug dealing as a high‐status profession. Such conditions are perfect for the
downward spiral into drug abuse and dependency and are likely to represent circumstances in which the
individual will have poor access to treatment services and long‐term psychiatric support (Cook &
Alegria, 2011). Endemic drug use in poor communities fosters other problems, including infections such
as HIV and hepatitis C contracted through injecting drugs intravenously (Rosenberg, Drake, Brunette,
Wolford, & Marsh, 2005). Finally, substance dependency in poor areas fosters crime in the form of
robbery, fraud, and prostitution as the only means of securing money to buy drugs, and is a strong risk
factor for first‐time homelessness (Thomson, Wall, & Hasin, 2013).
9.4.4 Summary
Substance use disorder has to be viewed as a developmental process that progresses through a number
of well‐defined stages, and different factors are involved in establishing substance use at these different
stages. The main stages of development that we have highlighted are experimentation (what influences
first use of a substance?), regular use (what factors influence the move from experimentation to regular
use?), and abuse and dependency (what makes some people continue to use drugs even though this
activity is having significant negative effects on their lives and their health?). It is important to
understand that use may be confined to any one of these stages, and many regular users can often
function relatively successfully in their social, work and family environments. However, in terms of
understanding psychopathology, it is the development from regular use to abuse and dependency that is
of most interest to us as practitioners and clinical psychologists.
SELF‐TEST QUESTIONS
What factors lead young people to experiment with drugs?
How do peer group influences affect whether a young person will try a new drug?
What factors lead individuals to become regular users of a substance?
In what ways do different drugs such as alcohol, nicotine, cocaine, heroin, and cannabis
alter the user's mood?
Research suggests a broad range of drugs have their effects by activating the natural
reward pathway in the brain—what areas of the brain are involved in this pathway?
How does craving develop, and how does it contribute to relapse?
What is the evidence that regular drug use can become a form of self‐medication when the
user has severe adjustment difficulties?
What are the main factors that maintain substance abuse and dependency?
How have twin and adoption studies shown that there is an inherited component to
alcohol use disorder?
How does the gene ALDH2 influence whether a drinker is likely to become alcohol
dependent?
What is the evidence that regular substance use might cause long‐term cognitive deficits?
What is the link between poverty and substance use disorders?
SECTION SUMMARY
9.4.1 Experimentation
Early use is influenced by the availability of the drug and its economic cost.
Whether substances are regularly used by family members and whether the family
environment is problematic can also hasten experimentation with drugs.
Peer groups influence first substance use as individuals start using a substance in order to
self‐categorise themselves as members of a particular group.
Substance use in young adolescents is significantly influenced by advertising and exposure
to substances (such as cigarettes and alcohol) in the media.
Alcoholics Anonymous (AA) A support group for individuals who are alcohol dependent
and are trying to abstain.
Drug‐prevention schemes are now widespread and take many forms. Their purpose is to try to
prevent first use of a drug, or to prevent experimentation with a drug developing into regular use—
usually through information about the effects of drugs and through developing communication and peer‐
education skills. In the UK, government‐sponsored schemes have a number of elements, which focus
respectively on young people (especially in schools), communities (targeting young people and their
parents who may be specifically at risk), treatment, and availability. 24‐hour telephone helplines and
websites also provide constant advice and information (e.g., talktofrank.com). Prevention schemes are
often local and tailored to the specific needs of the community. They aim to educate schoolteachers and
parents on how to deal with specific drug‐related incidents and how to provide drug advice to young
people. Many schemes train young people themselves to deliver drug education information to their
peers. Particular strategies that drug prevention schemes use are (a) peer‐pressure resistance
training, where students learn assertive refusal skills when confronted with drugs, (b) campaigns to
counter the known effects of the media and advertising (e.g., by combating tobacco advertising with
antismoking messages), (c) peer leadership, where young people are trained to provide anti‐drugs
messages to their peers, and (d) changing erroneous beliefs about drugs (e.g., that use is more prevalent
than it is, or that a drug's effects are relatively harmless). The evidence on the effectiveness of these
types of schemes is difficult to gauge because they take place across different types of communities
characterised by different risk factors and employ a range of different strategies over different
timescales. However, systematic reviews of school‐based prevention programmes do indicate that such
programmes can be effective in reducing smoking and alcohol use, and have protective effects against
drugs and cannabis use (Das, Salam, Arshad, Finkelstein, & Bhutta, 2016). Some studies have also
indicated that at the very least such schemes do appear to delay the onset of drug use—even if longer‐
term effects are difficult to evaluate (Faggiano et al., 2008; Sussman et al., 1995).
Drug-prevention schemes Community-based services whose purpose is to try to prevent first
use of a drug or to prevent experimentation with a drug developing into regular use – usually
through information about the effects of drugs and through developing communication and
peer-education skills.
peer leadership A strategy used by drug prevention schemes where young people are trained
to provide anti-drugs messages to their peers.
Residential rehabilitation centres are also important in the treatment and longer‐term support of
individuals with substance use disorders. Such centres allow people to live, work, and socialise with
others undergoing treatment in an environment that offers advice, immediate support, group and
individual treatment programmes, and they enable the client to learn the social and coping skills
necessary for the transition back to a normal life. In such centres, detoxification programmes can be
monitored and supported with the help of peripatetic key workers. Residential rehabilitation
programmes usually combine a mixture of group work, psychological interventions, social skills training,
and practical and vocational activities. In the UK, clients would normally begin residential
rehabilitation after completing inpatient detoxification. Despite the support offered by residential
rehabilitation centres, the percentage of clients in such centres who do not complete their treatment
programme is often unacceptably high (Eastwood et al., 2018; Westreich, Heitner, Cooper, Galanter, &
Guedj, 1997) and, perhaps not surprisingly, noncompleters fare significantly less well than completers
(Aron & Daily, 1976; Berger & Smith, 1978). However, a number of studies have clearly indicated that
longer stays in residential rehabilitation centres are consistently associated with better outcomes, with a
minimum stay of 3 months recommended (Simpson, 2001).
Residential rehabilitation centres Centres that allow people to live, work and socialise
with others undergoing treatment in an environment that offers advice, immediate support, and
group and individual treatment programmes enabling clients to learn the social and coping
skills necessary for the transition back to a normal life.
Aversion therapy
This treatment has been regularly used in the context of a number of substance disorders, but most
notably with alcohol dependency. Using a classical conditioning paradigm, clients are given their drug
(the conditioned stimulus) followed immediately by another drug (the aversive UCS) that causes
unpleasant physiological reactions such as nausea and sickness (Lemere & Voegtlin, 1950). The
assumption here is that pairing the favoured drug with unpleasant reactions will make that drug less
attractive. In addition, rather than physically administering these drugs in order to form an aversive
conditioned response, the whole process can be carried out covertly by asking the client to imagine
taking their drug followed by imagining some upsetting or repulsive consequence. This variant on
aversion therapy is known as covert sensitisation (Cautela, 1966). However, there is limited evidence
that aversion therapy has anything but short‐lived effects (Wilson, 1978; but see Elkins et al., 2017, for a
procedure with alcohol use disorder that reports some longer‐term benefits 12 months after aversion
therapy), and outcomes are significantly less favourable when clients with long‐standing substance
dependency are treated in this way (Howard, 2001). Nevertheless, aversion therapy can be used as part
of a broader treatment package involving community support, detoxification, and social skills training.
covert sensitisation The association of an aversive stimulus with a behaviour the client
wishes to reduce or eliminate.
Contingency management therapy Behavioural therapy which aims to help the individual
identify environmental stimuli and situations that have come to control symptoms such as
substance use.
controlled drinking A variant of BSCT in which emphasis is put on controlled use rather
than complete abstinence.
There is stigma associated with drug abuse and substance use disorders—to the point where
there is often a public outcry if treatment programmes propose controversial interventions that
seem counterintuitive to the nondrug user. Recent examples include (a) treating heroin addicts
by giving them supervised heroin injections, and (b) giving drug addicts financial rewards for
staying ‘clean’.
In the first example, addicts are given daily injections of heroin in supervised clinics in an
attempt to wean them off the drug (known colloquially as ‘shooting galleries’). A study of this
intervention based in London, Darlington, and Brighton in the UK divided heroin addicts into
three groups, giving one group heroin and giving the other two groups intravenous methadone and
oral methadone, respectively. All three groups showed improvement, but the heroin‐using group fared much
better than the other two, with 75% having stopped using street heroin and also having
significantly reduced their involvement in crime (Strang et al., 2010).
Giving drug addicts monetary rewards for staying clean also has a significant effect on the
success of treatment. In the ‘Harbour Steps’ trial run in Lambeth in London, addicts earn a
small credit each time they give a crack‐free urine sample, and can be tested up to three times a
week. Such programmes are significantly more effective at establishing abstinence than control
treatments (Lussier, Heil, Mongeon, Badger, & Higgins, 2006).
Despite the success of these types of programmes, and despite the fact that people will rarely
criticise rewarding people for losing weight or giving up smoking, there is often a public
reluctance to endorse programmes that appear to ‘reward’ individuals who have an illegal
substance use disorder, and there have been problems establishing clinics that use these types of
programmes in countries such as Germany, The Netherlands, Canada, and the UK because of
public criticism. It's clear that many people still believe that drug addiction is solely the ‘fault’ of
the addict and as such is self‐inflicted (Crisp, Gelder, Rix, Meltzer, & Rowlands, 2000). Until we
can change this unhelpful and discriminatory view of those with substance use disorders, it may
continue to be difficult to propose, develop, and finance these types of interventions.
Many methods of collecting data about substance abuse are relatively unreliable. Self‐report is
obviously problematic, because users will often have reason to lie about their drug use (if it
involves legal issues such as child custody), or their recall of drug use may be affected by the
changed states of consciousness caused by regular use of certain substances. Even blood and
urine samples can be very variable in the data they provide (Spiehler & Brown, 1987) and are
certainly not suitable for estimating longer‐term drug use.
However, one relatively reliable method of collecting data about drug use is through hair
sample analysis (e.g., Uhl & Sachs, 2004). Small amounts of the drug will accumulate in hair
after use and, because head hair grows at approximately 0.8–1.3 cm per month, a record of
drug use is available over a period of weeks or months after intake. A hair sample of only 3–5
cm in length is required to provide a record of drug use over the previous 3–4 months, and
high‐performance liquid chromatography is used to identify the concentrations of any drugs taken up
into the hair sample.
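The arithmetic behind these detection windows is simple enough to sketch. The short Python snippet below (the function name and the example figures are illustrative only, using the 0.8–1.3 cm per month growth rates quoted above) estimates the period of prior drug use a hair sample of a given length can cover:

```python
def detection_window_months(hair_length_cm, growth_cm_per_month=(0.8, 1.3)):
    """Estimate the period of prior drug use (in months) recorded in a
    head-hair sample, given that head hair grows at roughly 0.8-1.3 cm
    per month. Returns (longest, shortest): slower-growing hair of the
    same length spans a longer period of use."""
    slow_growth, fast_growth = growth_cm_per_month
    return hair_length_cm / slow_growth, hair_length_cm / fast_growth

# A 4 cm sample spans roughly 3-5 months of prior use:
longest, shortest = detection_window_months(4.0)
print(f"{shortest:.1f}-{longest:.1f} months")  # prints "3.1-5.0 months"
```

Because growth rates vary between individuals, any such estimate is a range rather than a point value, which is why a 3–5 cm sample is described above as covering only approximately 3–4 months.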
Hair sample analysis is not only used as a more reliable way of collecting research data about
previous drug use, it is becoming widely used for medico‐legal purposes where the user needs to
prove long‐term abstinence (especially in cases related to rehabilitation and legal custody). It is
also used to provide a longer‐term record of drug use in the case of individuals who may have
died from overdose (Tagliaro, Battisti, Smith, & Marigo, 1998) and has been used to detect the
use of opiates, cocaine, cannabis, and amphetamines (Jurado & Sachs, 2003).
Nevertheless, hair sample analysis is not a fool‐proof way of estimating drug use, and it does
have its own drawbacks as a methodology. For instance, it is not suitable as a measure of
current drug use, but only as a method of estimating previous medium‐term use. It also cannot
be used on those who present with very short hair or no head hair!
In recent years, ‘third‐wave’ CBT methods (see Chapter 4) have also been applied to the treatment of
substance use disorders. For example, metacognitive therapy for alcohol use disorder attempts to
identify and modify dysfunctional metacognitive beliefs about drug use (e.g., ‘I cannot stop thinking
about using alcohol’, ‘I have no control over alcohol because my brain is abnormal in some way’) and
this has been found to be useful in reducing weekly alcohol use in preliminary studies (Caselli, Martino,
Spada, & Wells, 2018). Similarly, mindfulness practices have also been applied to cravings for cigarettes
and alcohol and have shown benefits in terms of both craving reduction and reducing the extent to
which craving leads to substance use (Tapper, 2018).
Some approaches recommend CBT primarily when substance abuse disorder is comorbid with another
psychiatric disorder such as anxiety or depression. However, systematic reviews and RCTs indicate only
moderate support for CBT when used alone to treat some forms of substance dependency such as
alcohol use disorder (Coates, Gullo, Feeney, Young, & Connor, 2018; Hides, Samet, & Lubman, 2010),
with the implication that more potent forms of CBT still need to be developed and may need to be used
in combination with other forms of treatment to be effective.
detoxification A process of systematic and supervised withdrawal from substance use that is
either managed in a residential setting or on an outpatient basis.
Those drugs that help reduce withdrawal symptoms include clonidine (which reduces noradrenergic
activity in the brain; Baumgartner & Rowan, 1987) and acamprosate, a drug that helps to reduce the
cravings associated with withdrawal (Mason, 2001). Basic anxiolytic and antidepressant drugs can also
be used to improve mood and alleviate negative emotions experienced during withdrawal (Cornelius et
al., 1995).
Antabuse (disulfiram) has been used for over 60 years in the detoxification of individuals with
alcohol dependency. It affects the metabolism of alcohol so that the normal process of converting toxic
alcohol products into nontoxic acetic acids is slowed, and this causes the individual to feel nauseous or
vomit whenever they take alcohol. However, the use of Antabuse does have some problems. First, it is
rarely effective when patients are given the drug to take unsupervised, and noncompliance and dropout
from such programmes are high (Fuller et al., 1986). Second, it has a number of side effects and in
some rare cases causes liver disease and hepatitis (Mohanty, LaBrecque, Mitros, & Layden, 2004).
However, when taken in properly supervised programmes, Antabuse has been shown to be more
effective at reducing drinking behaviour than placebo controls (Chick et al., 1992; Fuller & Gordis,
2004; Skinner, Lahmek, Pham, & Aubin, 2014), including having a beneficial effect on short‐term
abstinence and days until relapse (Jorgensen, Pedersen, & Tonnesen, 2011). Indeed, some long‐term
studies of alcohol treatment have suggested that abstinence rates of 50% are achievable up to 9 years
after initial treatment with the supervised and guided use of alcohol deterrents such as Antabuse
(Krampe et al., 2006), and that Antabuse can increase treatment effectiveness when combined with
CBT and individual‐based treatment (Pettinati et al., 2010; Skinner et al., 2014).
A further set of drugs used to treat substance use disorders are those that influence brain
neurotransmitter receptor sites and prevent the neuropsychological effects of stimulants, opiates, and
hallucinogens. For example, drugs such as naltrexone, naloxone, and the more recently developed
buprenorphine attach to endorphin receptor sites in the brain. This prevents opioids from having their
normal effect of stimulating these sites to produce more endorphins that create the feeling of euphoria
when drugs such as heroin are taken. Such drugs do appear to reduce craving for opiates and they help
the therapeutic process when combined with other forms of psychological therapy (Streeton & Whelan,
2001). However, such drugs do come with a cost. Dosage has to be properly regulated, otherwise the
client may be thrown rapidly into an aversive withdrawal (Roozen, de Kan, van den Brink, Kerkhof, &
Geerlings, 2002), and narcotic antagonists such as naltrexone and naloxone are effective only for as long
as the client is taking them. Nevertheless, because these drugs affect the release of endorphins, they have
been used to help treat a number of substance use disorders, including alcohol (O'Malley, Krishnan‐
Sarin, Farren, & O'Connor, 2000), cocaine and opiate dependency (O'Brien, 2005). The reason why
such drugs may be effective over a range of substances that have their psychoactive effects across
different brain neurotransmitter pathways is because they suppress the release of endorphins, and
endorphin receptors are closely associated with the brain's reward centres (Leri & Burns, 2005).
naltrexone An opioid receptor antagonist which has been found to be beneficial in the control
of hyperactivity and self-injurious behaviour.
naloxone One of a set of drugs used to treat substance use disorders which influence brain
neurotransmitter receptor sites and prevent the neuropsychological effects of stimulants, opiates
and hallucinogens.
drug replacement treatment Involves treating severe cases of substance abuse and
dependency by substituting a drug that has lesser damaging effects.
9.5.6 Summary
The treatment of substance use disorders is inevitably a multifaceted one, with most mental health
services providing a range of treatments (detoxification, skills training, behavioural and cognitive
therapies, and family and couple therapies) in a variety of settings (e.g., individual, community‐based, or
residential). Treatments usually involve a combination of drug‐based detoxification, psychological
therapy, and skills training, and will usually attempt to involve the client's family and friends in the
therapeutic process. Substance use disorders are difficult to treat and we described some of the
difficulties at the outset of this section on treatment. Nevertheless, outcomes are often good, and total
abstinence is an achievable goal—even with severely addictive substances such as opiates and stimulants.
For example, a long‐term study of heroin dependence in a small town in the south‐east of England 33
years after initial treatment found that 42% of those treated had been abstinent for 10 years (Rathod,
Addenbrooke, & Rosenbach, 2005). This suggests that long‐term dependency is not inevitable after
exposure to addictive drugs, and individuals can often control their use as well as receive effective
treatment for their dependency when required.
SELF‐TEST QUESTIONS
Why are substance use disorders particularly hard to treat?
Can you describe the different kinds of community‐based programmes that help to
prevent or treat substance use disorders?
How successful is residential rehabilitation in treating substance use disorders?
How have the principles of behaviour therapy been adapted to treat substance use
disorders?
Can you name the main features of aversion therapy, contingency management therapy,
and controlled drinking?
What are some of the benefits of using controlled drinking goals rather than complete
abstinence?
What are abstinence violation beliefs and how do cognitive behaviour therapies (CBT)
attempt to treat them?
What is meant by the term detoxification, and how are drugs used in detoxification
programmes?
How have drugs such as naltrexone, naloxone, and buprenorphine proved to be useful in the
treatment of substance use disorders?
SECTION SUMMARY
CHAPTER OUTLINE
10.1 DIAGNOSIS AND PREVALENCE
10.2 CULTURAL AND DEMOGRAPHIC DIFFERENCES IN EATING
DISORDERS
10.3 THE AETIOLOGY OF EATING DISORDERS
10.4 THE TREATMENT OF EATING DISORDERS
10.5 EATING DISORDERS REVIEWED
LEARNING OUTCOMES
When you have completed this chapter, you should be able to:
1. Describe the characteristics and main diagnostic criteria of the three main eating
disorders.
2. Describe the cultural and demographic distribution of eating disorders, and evaluate why
this information is important in understanding eating disorders.
3. Compare and contrast a range of risk factors for eating disorders, covering risk factors at
different levels of explanation (such as genetic, developmental, cultural, and psychological).
4. Describe, compare and contrast at least two interventions commonly used in the treatment
of eating disorders.
For as long as I can remember, I've wanted to do everything under the sun—and be the best at it. If I got a C I'd
be really hard on myself, and my parents made it pretty clear they wanted me to get a scholarship, since paying for
college would be a challenge.
Plus, things weren't so great at home. I'd always had a terrible relationship with my dad. I felt like he ignored me
most of the time. He could be pretty scary. Like screaming at me for little things ‐ like leaving crumbs on the
kitchen table after making a snack. I'd tell him when he hurt my feelings, but he'd just walk away and slam the
door. On top of it all, he and my Mum were fighting a lot, too.
It was hard to be at school and even harder to be at home. As a result, I began eating less. Starving myself wasn't
my actual goal at first—just more of a response to everything going on in my life. But I started to lose weight.
Soon, my clothes got looser. Then I became a vegetarian, also cutting out all foods with chemicals and
preservatives. I lost even more. I felt I had finally found something I could completely control—my weight. Even
though my life felt crazy, I could do this one thing very well and, initially, I got a high from this accomplishment.
Gaining or losing a single pound determined my mood for the whole day.
I remember watching a film in health class about the dangers of anorexia. I even hung warning posters around
school during Eating Disorders Awareness Week. But I never connected my own weight loss to anorexia. Denial,
of course, is a symptom of the disease. A voice in my head kept telling me the less food I let touch my lips, the
more stable and safe I would be. My friends and family kept telling me I was too skinny, but no one could force
me to eat. And, to be honest, it made me feel powerful that I could ignore pleas and starve myself. Even as my
bones poked out from under my skin, I could not admit to anyone—including myself—how incredibly sick I was.
Amy's Story
Introduction
Disorders of eating are complex and have their roots in psychological, sociological, and cultural
phenomena. In many of today's cultures, individuals are torn between advertising that implores them to
eat a range of foods high in calories and campaigns designed to promote selective and healthy eating
(see Activity Box 10.1 on the book’s website). Eating behaviour is also influenced by media
representations of ideal body shapes. These prompt appearance‐conscious individuals to control and
restrict their eating in order to achieve these media‐portrayed ideals of a slim body—ideals which are
usually underweight. Given these pressures, and the psychological factors that accompany them, it is not
surprising that eating patterns can become pathological and result in disorders of both undereating
(anorexia nervosa) and overeating (bulimia nervosa and binge‐eating disorder).
Amy's story at the start of this chapter illustrates what a slippery slope the descent into an eating
disorder can be. The story starts with someone who is troubled in various spheres of her life, including
school life and home life. In Amy's case, this leads more by accident than design to a reduction in eating.
Eventually, controlling eating becomes a goal in itself, and a source of satisfaction when even the
smallest of dietary goals are met. The obvious physical consequences of lack of nutrition then follow:
exhaustion and lack of concentration, menstrual irregularities, proneness to infections, insomnia,
dizzy spells, and sensitivity to cold. Amy's story also highlights some of the potential risk factors that
have been found to predict the development of an eating disorder such as anorexia, and these include
high levels of perfectionism and parents who exhibit coercive parental control or who are hostile and
unresponsive to the individual's needs. Also characteristic of the disorder, and featured in Amy's story,
are the need to control eating as a central feature of eating disorders generally, the development of very
durable and resistant beliefs about the need to diet and control eating, and the use of denial as a means
of avoiding confronting the disorder and challenging dysfunctional beliefs about eating. Focus Point 10.1
summarises some of the warning signs of anorexia nervosa, many of which will have been apparent as
Amy developed her own eating disorder.
Although men are being increasingly diagnosed with eating disorders (see Murray et al., 2017, for a
discussion of eating disorders in men), these disorders are largely conditions that are diagnosed in
women (Walters & Kendler, 1994). Many women consider themselves to be overweight despite having a
body mass index (BMI) in the normal range. Studies of college women suggest that 43% were
currently dieting despite 78% of them having a healthy BMI (Fayet, Petocz, & Samman, 2012), while
community studies indicate that between 25 and 30% of females claim to be dieting or actively
attempting to lose weight (McVey, Tweed, & Blackmore, 2004; Wardle & Johnson, 2002). Dieting is
often a significant precursor of anorexia nervosa symptoms (Patton et al., 1997) and can become an
entrenched habit that is resistant to both psychological and pharmacological treatment, and this may
contribute to the persistence of symptoms in some anorexia nervosa sufferers (Walsh, 2013).
body mass index (BMI) A way of measuring a healthy weight range, derived by using both
height and weight measurements.
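BMI is simply weight in kilograms divided by the square of height in metres. A minimal sketch of the calculation (the function names are ours; the cut-offs are those cited in this chapter: below 18.5 underweight, above 25 overweight, above 30 obese):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    """Classify a BMI using the cut-offs cited in this chapter: below 18.5
    (the WHO lower limit for normal body weight, adopted by DSM-5 as one
    trigger for considering a diagnosis of anorexia nervosa), above 25
    (overweight), and above 30 (obese)."""
    if value < 18.5:
        return "underweight"
    if value <= 25:
        return "normal"
    if value <= 30:
        return "overweight"
    return "obese"

# For example, 50 kg at 1.70 m gives a BMI of about 17.3, i.e., underweight
example = bmi(50, 1.70)
```

Note that these thresholds classify weight relative to height only; they say nothing by themselves about disordered eating behaviour, which requires the other diagnostic criteria discussed in this chapter.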
Similarly, recent figures suggest that obesity is increasing significantly in Western cultures. In the UK,
obesity rates have increased significantly in the last 20 years, with 67% of men and 62% of women in
the UK considered to be overweight (a BMI >25, see Activity Box 10.1), and 29% of the population
considered to be obese (a BMI >30) (NHS Digital, 2019). This suggests that both undereating and
overeating have reached almost epidemic proportions, with surveys suggesting that 6.3% of the UK
population exhibit disordered eating patterns leading to either underweight or overweight (McBride,
McManus, Thompson, Palmer, & Brugha, 2013). Apart from the cultural pressures that can trigger
overeating and undereating, psychological factors also represent both risk factors and outcomes of
disordered eating. As we shall see later in this chapter, developmental and psychological processes
appear to act as vulnerability factors in the development of eating disorders, and eating disorders
themselves can result in psychological symptoms such as low self‐esteem, substance misuse, and suicidal
ideation (Neumark‐Sztainer & Hannan, 2000).
This chapter covers the three main eating disorders, namely anorexia nervosa, bulimia nervosa, and
binge‐eating disorder, and for each we discuss diagnosis and prevalence, the role of sociocultural factors,
aetiology, and treatment.
Weight loss is often viewed as an important achievement (see the example of Amy at the beginning of
this chapter), and weight gain as a significant loss of self‐control. Even when individuals suffering
anorexia do admit they may be underweight, they often deny the important medical implications of this
and will continue to focus on reducing fat in areas of their body that they still believe are too ‘fat’. Table
10.1 sets out the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM‐5) diagnostic
criteria for anorexia nervosa, and this stresses objective levels for judging the severity of the symptoms
based on BMI. DSM‐5 has adopted the World Health Organization lower limit for normal body weight
of a BMI of 18.5 kg/m2 as a level below which body weight should be considered as low enough to
trigger the possibility of a diagnosis of anorexia nervosa if other criteria are met. The criteria also
emphasise the pathological fear of weight gain in sufferers and the distortions in self‐perception that
accompany anorexia. DSM‐5 also distinguishes two types of anorexia nervosa. These are the
restricting type in which self‐starvation is not associated with concurrent purging (e.g., self‐inducing
vomiting or use of laxatives), and the binge‐eating/purging type, in which the
sufferer regularly engages in purging activities to help control weight gain.
Because of the severe physical effect of this disorder on the body, anorexia nervosa is usually associated
with a number of biological symptoms that are effects of the self‐imposed starvation regime. These
include (a) tiredness, cardiac arrhythmias, hypotension (low blood pressure), and a slowed heart rate
resulting from altered levels of body electrolytes, such as sodium and potassium; (b) dry skin and brittle
hair; (c) kidney and gastrointestinal problems; (d) the development of lanugo (a soft, downy hair) on the
body; (e) the absence of menstrual cycles (amenorrhoea); and (f) hypothermia, often resulting in feeling
cold even in hot environments. In many cases, starvation has the effect of severely weakening the heart
muscles as the body uses these muscles as a source of protein. As a result, mortality rates (including
suicides) in anorexia nervosa and bulimia nervosa are still unacceptably high, ranging from 5 to 8%
(Herzog et al., 2000; Steinhausen, Seidel, & Metzke, 2000), with one in five of those deaths the result of
suicide (Arcelus, Mitchell, Wales, & Nielsen, 2011).
Anorexia nervosa usually begins to develop around adolescence. It rarely begins before puberty or after
40 years of age. Onset can be associated with a stressful life event, such as leaving home (Tozzi, Sullivan,
Fear, McKenzie, & Bulik, 2003), and is often preceded by a period of changed eating patterns, such as
self‐imposed dieting. Fortunately, most individuals with a diagnosis of anorexia nervosa will remit and
be symptom free within 5 years. However, for others, hospitalisation may be required to restore weight
and address other medical complications caused by self‐starvation.
There is a tendency to think of eating disorders as modern ailments driven by cultures obsessed
with projecting ideal body shapes to impressionable young people. However, examples of
disordered eating behaviour can be found throughout history, and many resemble the eating
disorders we find today (Keel & Klump, 2003). Cases of self‐starvation have been reported in
classical and medieval times, often as a means of achieving heightened spirituality amongst
religious devotees. Bell (1985) called this holy anorexia and cited the example of St.
Catherine of Siena who began self‐starvation at the age of 16 years and continued until her
death in 1380 (at the age of 32). Like modern‐day anorexics, St. Catherine portrayed herself as
being afflicted by an inability to eat, and all attempts by peers and superiors to induce eating in
such fasting saints usually failed. From the sixteenth to the eighteenth centuries, reports of self‐
starvation were relatively common (McSherry, 1985), with the case of Mary, Queen of Scots
(1542–1587) being a prominent one. During the nineteenth century, study of self‐starvation
became more systematic within the medical profession, with Marce (1860) describing a form of
hypochondria in which ‘young girls, who at the period of puberty and after a precocious
physical development, become subject to inappetency carried to the utmost limits’ (1860, p.
264). Probably the first use of the term anorexia nervosa was by Imbert (1840), who
characterised anorexie nerveuse by loss of appetite, refusal to eat, and emaciation (cited in
Vandereycken & Van Deth, 1994). However, while these historical examples bear a formalistic
similarity to modern eating disorders, the issue of the motivation behind self‐starvation in these
historical examples is important. At least some of the earliest examples of self‐starvation appear
to be motivated by religious and spiritual factors, while examples from the eighteenth and
nineteenth centuries were justified as either forms of convalescence or hysterical paralysis
(Habermas, 1996). However, Habermas (1989) quite rightly points out that individuals with
eating disorders tend to hide their goal of losing weight and give other explanations for their
refusal to eat. This may also be true of the historical examples we have reviewed here.
Historical examples resembling bulimia nervosa are much rarer than those resembling anorexia
nervosa. Most examples taken from classical times through to the nineteenth century report
individuals exhibiting periods of fasting followed by a binge‐purge cycle, which suggests that
bingeing and purging was rarely found outside of the context of fasting or self‐starvation (Keel
& Klump, 2003). However, Silverman (1987) reports a seventeenth‐century description of
fames canina—a disorder characterised by large food intake followed by vomiting (Stein &
Laakso, 1988). Interestingly, however, when symptoms similar to bulimia are reported in
historical writings, most cases involve adult men. This is quite unlike the current disorder of
bulimia, which is primarily an affliction of females.
This brief review suggests that disordered eating (especially self‐starvation) has been around as
long as people have been able to write about it and report it. In different periods of history, the
motivations for self‐starvation appear to be different, although the symptoms remain
remarkably similar. One implication of this is that disordered eating symptoms similar to
modern disorders have been around for a considerable period of history. However, changes in
contemporary sociocultural factors may influence the frequency and prevalence of such
disorders by providing a motivation for disordered eating (e.g., religious fasting would have
provided a suitable trigger for self‐starvation in vulnerable individuals in classical and medieval
times). In addition, sociocultural factors can also provide socially acceptable means of hiding
the psychological reasons for self‐starvation and loss of appetite. For example, when fasting
became an acceptable form of convalescence from illness in the eighteenth and nineteenth
centuries, this may have provided a suitable means of hiding the anorexic individual's simple
desire to restrict and control their eating (just as the trend to diet to achieve a media‐driven thin
ideal serves the same purpose today).
TABLE 10.1 Summary: DSM‐5 diagnostic criteria for anorexia nervosa
A significantly reduced calorie intake relative to the requirements of the body, leading to a
significantly low body weight
Intense fear of gaining weight or becoming fat
A disturbance in the way the patient evaluates their body weight or shape, with undue influence
of body weight or shape on self‐evaluation
DSM‐5 cites the 12‐month prevalence rate for anorexia nervosa among young females as around 0.4%,
with a female‐to‐male ratio of around 10:1, making anorexia primarily a female disorder. The lifetime
prevalence rate for females by age 20 is 0.8%, with peak onset at age 19–20 (Stice, Marti, &
Rohde, 2013), and recent data from an 8‐month surveillance study of young people aged 8–17 years in
the UK and Ireland suggest that the incidence of anorexia nervosa diagnosis for young women is 25.66
per 100 000, with a mean age of 14.6 years, and a ratio of women to men of 10:1 (Petkova et al., 2018;
see also Demmler, Brophy, Marchant, John, & Tan, 2020). In many cases anorexia can be a persistent
condition, and a 30‐year long‐term outcome study indicated that one in five individuals with a diagnosis
during adolescence had a chronic eating disorder 30 years later (6% anorexia, 2% binge eating, 11%
other eating disorders) (Dobrescu et al., 2020).
There is some evidence that cultural and societal factors can affect the frequency of anorexia, so that
prevalence rates may differ across cultures and across time (see Section 10.2) (Miller & Pumariega,
2001). However, recent analysis suggests that anorexia may represent a similar proportion of the general
psychiatric population in several Western and non‐Western nations, with numbers of cases increasing
significantly in Asian countries where industrialisation and urbanisation are occurring (Pike & Dunne,
2015). This converging evidence suggests that anorexia nervosa may not be just a disorder of affluent
Western cultures (Keel & Klump, 2003).
High rates of comorbidity exist between anorexia and other psychiatric disorders. For example, studies
suggest between 50 and 68% of anorexia sufferers also receive a lifetime diagnosis of major depression
(Halmi et al., 1991), and between 15 and 69% of anorexia sufferers also meet diagnostic criteria for
OCD or Obsessive‐Compulsive Personality Disorder (OCPD) at some time during their life (Hudson,
Pope, Jonas, & Yurgelun‐Todd, 1983; Wonderlich, Swift, Slotnick, & Goodman, 1990). Surveys suggest
significant levels of comorbidity between anorexia nervosa and anxiety disorders such as OCD (21%),
panic disorder and agoraphobia (25%), social anxiety disorder (30%) and specific phobias (25%), and
with substance abuse disorders (34%) (Jordan et al., 2008). In addition, anorexia nervosa puts the
sufferer at significantly increased risk of a suicide attempt: the risk is five to six times greater than
for someone without an eating disorder (Udo, Bitley, & Grilo, 2019) (Photo 10.1).
bulimia nervosa (BN) An eating disorder, the main features of which are recurrent episodes
of binge eating followed by periods of purging or fasting.
PHOTO 10.1 In 2020, award‐winning singer‐songwriter Taylor Swift revealed her struggle to overcome an eating
disorder. During her 2015 world tour, she revealed she actively restricted her food intake. Some days she would ‘starve a
little bit and just stop eating’. She also kept lists of everything she ate and exercised constantly until she was a size double‐
zero (size two in the UK). She struggled with this condition for several years and constantly denied having a problem when
she was confronted about her weight. These experiences are typical of someone with a restrictive eating disorder: exercising
regularly specifically to lose weight, restricting food intake to the point of eating little or nothing for a whole day, and
denying any problem when confronted. Taylor Swift talks openly about these experiences in her Netflix film Miss
Americana.
Adapted from https://www.bbc.co.uk/news/entertainment‐arts‐51234055.
TABLE 10.2 Summary: DSM‐5 diagnostic criteria for bulimia nervosa
Sara was the youngest child in her family, with two brothers several years older than her who
both left home before she entered high school. Sara tried hard to please her older brothers even
though they teased her and would often call her names, telling her she was stupid and ugly.
Her father worked long hours as a salesman and was often away from home travelling or
staying late in the office. When Sara was 13 she discovered that her father had been having an
affair and that her mother had known of this for some years. Sara was very angry with her
father and also with her mother for allowing him to ‘get away with it’. She tried to be
supportive of her mother but felt hurt and confused, and their close relationship was damaged.
Sara didn't feel able to confide in her mother anymore and felt strong resentment towards her
father, who she could not forgive.
Over the next 2 years Sara felt increasingly isolated and unhappy at home and spent as little
time there as possible. When Sara was 16 she met her first boyfriend, Kyle, who was 4 years
older than her. At first Sara was very happy in the relationship and after 4 months she moved
out to live with Kyle. Soon, the relationship became difficult and they often argued about
money and household chores, as well as about Sara's belief that Kyle flirted with other women.
Two years into their relationship Sara discovered that Kyle had been cheating on her with one
of her close friends. When Sara confronted him, Kyle confessed what he had done but blamed
Sara for being ‘boring’ and ‘a nag’. He told her that he had never found her attractive and had
always wanted to be with her friend instead. Sara was angry and upset but blamed herself for
not making more of an effort to be attractive.
Sara reluctantly moved back into her parents' house, although her relationship with them had
not improved in recent years. Feeling isolated and unhappy she began a crash diet and lost
some weight quickly. Her father often ridiculed her weight loss efforts while her mother
encouraged her to ‘make more of herself ’. Sara became increasingly unhappy after her initial
weight loss and became tired, irritable and preoccupied with food. One evening she saw a
Facebook update telling her that Kyle and her friend were now engaged and that her friend was
pregnant. She felt jealous and angry that all her efforts to lose weight and ‘improve herself ’ had
been for nothing. Sara stuffed herself with food until she couldn't eat any more. Feeling out of
control and ashamed she made herself vomit.
The next day she resolved to eat even less but a week later she binged again. Although she tried
to stop, Sara felt caught in a pattern and was soon bingeing several times a week. Her eating
became wildly erratic but her weight stayed much the same. Her parents didn't realise that
anything was wrong and regularly chastised her for not making more of her life, while Sara
began to feel increasingly desperate and alone and even thought of trying to kill herself.
Although she still saw her friends and was able to hold down a job, Sara was now locked into a
cycle of bingeing and vomiting known as bulimia nervosa.
Clinical Commentary
Sara’s case contains a number of elements that are typical of individuals who develop bulimia. She had
a difficult home environment and experienced teasing about her weight and appearance, as well as having
to adjust to significant changes in parental relationships. Her subsequent relationship was difficult and
when it came to an end she probably felt that she had little control over her life, and she had no obvious
means of escape from her situation. Arguments with her boyfriend had reinforced her belief that she had
‘let herself go’, and she resolved to diet to lose weight. Dieting then triggered feelings of extreme hunger
which led to binge eating following anger at her boyfriend’s infidelity. Following the binge, feelings of
self‐disgust and shame led to purging. This started a vicious cycle in which, after each binge, Sara
resolved to eat less but inevitably ended up bingeing again.
Rosa was a binge eater but had not had a food binge for over three‐and‐a‐half years when she
travelled to attend the wedding of her friend's daughter. Rosa was normally a confident,
professional woman who enjoyed her work and had just successfully completed an important
project, the kind of achievement that often left her feeling down and empty afterwards. She had spent 3 years attending
Overeaters Anonymous (OA) and knew she needed to avoid food—especially when she was
feeling low.
Rosa managed to keep herself occupied during the day of the wedding, but as nighttime came,
the bustle of the after‐wedding party made it easy for her to disappear—physically and
emotionally—into a binge. She started with a plate of what would have been an ‘abstinent’
meal (an OA concept for whatever is included on one's meal plan): pasta salad, green salad, cold
meats, and lots of bread. The food was plentiful, but Rosa still wanted more and spent the next
three‐and‐a‐half hours eating. Very soon she started to feel guilty and ashamed and began to
surreptitiously steal food from plates out of the gaze of the other guests.
When most of the guests had left the dining room she began helping herself to the cakes and
desserts. Then, beginning to feel desperate, Rosa began to pile the food high on her plate, so
that if other guests saw her she could always escape with a large amount of food. By now, the
food tasted of nothing to her, but she couldn't stop eating it. Eventually she realised what she
had been doing, felt that she was out of control, and ran crying back to her room.
This event was the beginning of a 6‐month relapse into binge eating for Rosa. During the
relapse, she binged on foods and refined carbohydrates, started smoking cigarettes in an attempt
to control the bingeing, and was driven to excessive exercise after each binge.
Throughout the relapse, Rosa went to therapy and to OA. Finally, a combination of
antidepressants and a structured food plan that excluded refined sugars, breads, crackers, and
similar carbohydrates helped her to bring her bingeing under control and manage her eating.
Rosa was eventually able to stop taking the antidepressants and continued to be active in OA.
Clinical Commentary
Rosa's case history is a good example of how a person can lose control of their own eating patterns and
eating behaviour. Features that are typical of binge‐eating disorder include (a) eating significantly more
than a normal meal portion in one session; (b) an uncontrollable urge to continue eating despite the
situation and surroundings; (c) forcing oneself to eat food that is unpalatable or contaminated; (d) a
desire to conceal one's overeating from others; and (e) subsequent shame, self‐disgust, and depression when
the binge episode is over.
SELF‐TEST QUESTIONS
What are the three main eating disorders defined by DSM‐5?
Can you describe the main diagnostic criteria for anorexia nervosa, bulimia nervosa, and
binge‐eating disorder?
What are the prevalence rates for the main eating disorders, and how do incidence rates
compare between males and females?
Both anorexia nervosa and bulimia nervosa are highly comorbid with other psychiatric
disorders—which ones?
SECTION SUMMARY
1. Few prospective studies have investigated factors that predict future onset of specific eating
disorders.
2. No prospective studies have investigated factors that predict future onset of diagnostic
levels of eating disorder symptoms (e.g., weekly binge eating for a period of 3 months).
3. Very few studies have examined the temporal sequencing of the emergence of empirically
established risk factors.
4. No study has provided a comprehensive test of potential interactions between a broad
range of risk factors.
5. No study has tested the ability of a multivariate interactive or mediational model of
aetiology to predict future onset of an eating disorder.
6. Research on the validity of scales used to assess putative risk factors is limited.
Figure 10.2 illustrates a recent attempt to classify the risk factors for anorexia and bulimia across a
developmental timeframe, and this shows how important risk factors are at a number of different levels
of description. These include ‘prenatal’ risk factors such as gender and ethnicity, early developmental
influences that generate eating difficulties such as infant sleeping and eating patterns (Herle et al., 2020),
early experiences such as sexual abuse and physical neglect (Molendijk, Hoek, Brewerton, & Elzinga,
2017), dispositional factors such as low self‐esteem, perfectionism, negative self‐evaluation and negative affect
(Bardone‐Cone, Lin & Butler, 2017), familial factors such as parental obesity and parental attitudes to
weight, adolescent attitudes to dieting and exercise, and comorbid psychological disorders such as OCD
and social phobia (see also Lindberg & Hjern, 2003). What you will also see from Figure 10.2 is that
different eating disorders such as anorexia and bulimia often share many of the same risk factors, and it
is not clear why an individual may develop one of these disorders rather than the other. As a
consequence it is often difficult to separate out theories of the aetiology of anorexia and bulimia, and
many of the following theories are aimed at understanding eating disorders generally rather than
individual eating disorders specifically.
In addition to simply identifying individual risk factors that predict future eating disorders, some
researchers have begun to investigate how risk factors may interact to predict the onset of eating
disorders, and this process is a first step in understanding their aetiology in terms of the potency of
individual variables and how different variables might interact with each other. For example, Stice and
Desjardins (2018) discovered that (a) low BMI was the most potent predictor of anorexia nervosa, and
body dissatisfaction amplified this predictive relationship; (b) overeating was the most potent predictor
of bulimia nervosa, and positive expectancies for thinness and body dissatisfaction amplified this
relationship; and (c) body dissatisfaction was the most potent predictor of binge‐eating disorder, and
overeating, low dieting, and thin‐ideal internalisation amplified this relationship. While these are still
only findings that make prediction of eating disorders more precise, any future model of eating disorder
aetiology, no matter how complex, will have to take them into account.
FIGURE 10.2 Classification of the known risk factors for anorexia and bulimia across a developmental time frame.
This shows how important risk factors are at a number of different levels of description.
After Jacobi, Hayward, De Zwaan, Kraemer, & Agras, 2004.
In the following section, we look at risk factors in more detail and try as best we can to understand how
these risk factors might have their effects on the development of eating disorders.
Combinations of brain mechanisms and reward pathways may also be involved in
generating eating disorders as a result of their role in triggering either satiation or food ‘craving’ or
‘liking’ (Berridge, Ho, Richard, & DiFeliceantonio, 2010), and we have already indicated that there may
be some genetic influence on the strength of these effects. For example, self‐starvation and maintaining
a low body weight may be reinforced by the endogenous opioids that the body releases during
starvation to reduce pain sensation (Hardy & Waller, 1988), and dieting and restrictive eating may itself
become rewarding and override the rewarding properties of foods (Walsh, 2013; Foerde, Steinglass,
Shohamy, & Walsh, 2015). In anorexia, starvation may directly increase the levels of opioids, thus
producing a state of euphoria; however, because bulimia sufferers are not necessarily overweight, this
disorder may be accompanied by low levels of opioids, and this is known to promote craving. In support
of this latter hypothesis, Brewerton, Lydiard, Laraia, Shook, & Ballenger (1992) did find low levels of
the opioid beta‐endorphin in bulimia sufferers. Nevertheless, it is still difficult to interpret the
significance of this finding because low opioid levels may be a consequence of the cravings that accompany
bulimia rather than a cause of them. Also, low levels of serotonin metabolites have been found in
individuals with a diagnosis of anorexia and bulimia (Kaye, Ebert, Raleigh, & Lake, 1984; Carrasco,
Dyaz‐Marsa, Hollander, Cesar, & Saiz‐Ruiz, 2000). Serotonin promotes satiety, and so people with low
levels of serotonin metabolites may be prone to binge eating; low levels of serotonin are also associated
with depression, so this might also be a reason why eating disorders are so often comorbid with mood
disorders such as depression. The problem with this as a cause of binge eating problems is that animal
studies have shown that enforced dieting tends to reduce serotonin functioning, so low levels of
serotonin metabolites in individuals with either anorexia or bulimia may have been caused by any prior
dieting behaviour (Chandler‐Laney et al., 2007). In the case of obese individuals and those who
experience regular urges to binge eat (both bulimia nervosa and binge‐eating disorder), evidence
suggests that at‐risk individuals show hyperresponsivity of brain reward circuitry to high‐calorie food
tastes, and overconsumption of high‐calorie food results in increased reward and attention region
responsivity to cues associated with intake of these high‐calorie foods (Stice & Burger, 2019). Finally,
dopamine is a brain neurotransmitter involved in the pleasurable and rewarding consequences of food,
and women diagnosed with anorexia and bulimia exhibit greater expression of the dopamine
transporter gene DAT, suggesting that they might be more susceptible to the rewarding and
pleasurable effects of eating (Frieling et al., 2010; Thaler et al., 2012).
endogenous opioids A compound that the body releases to reduce pain sensation.
While most of the evidence seems to suggest that various brain areas and neurotransmitters are involved
in eating disorder symptoms, we still cannot be sure that these are genuine causal factors or whether
they are consequences of behaviours associated with eating disorders such as dieting or bingeing.
PHOTO 10.3 Social Media Is Not Real Life! Many young women post glamorous photos of themselves on social
media platforms such as Instagram, Facebook, and Twitter. These postings influence millions of young women by showing
what appear to be the perfect, glamorous, happy, fit lives of other people. But those photos of the toned, tanned bodies of
beautiful people are hardly reflective of real life. Australian social media star Essena O'Neill decided to spill the beans.
What is supposed to be a spontaneous, candid selfie can take over 50 shots and a retouching app to perfect. Posting another
photo, this time of her in a bikini, she encouraged fans to ask themselves why someone would post a shot like that in the first
place. ‘What is the outcome for them? Make a change? Look hot? Sell something?’ she asked. ‘I thought I was helping
young girls get fit and healthy. But then I realised that placing any amount of your self‐worth on your physical form is so
limiting. I could have been writing, exploring, playing, anything beautiful and real…Not trying to validate my worth
through a bikini shot with no substance’. And, of a workout selfie, she wrote, ‘A 15‐year‐old girl that calorie restricts and
excessively exercises is not goals’. Social media platforms where people are either explicitly or implicitly encouraged to post
‘perfect’ pictures of themselves have become yet another form of media associated with heightened valuation of body image
and eating concerns, and they are likely to influence the weight concerns and restrictive eating practices of both women and men
who use social media on a daily basis (e.g., Rodgers & Melioli, 2016).
While the preceding evidence suggests that media images of idealised thin body shapes are relevant in
determining attitudes towards body shape, the question we need to ask is how this media‐based
pressure is converted into the eating problems that meet DSM‐5 criteria for a psychological disorder.
The most obvious route is that idealised media images generate dissatisfaction with the individual's own
body shape (especially in comparison to extreme ideals), and recent network analyses (see Focus Point
2.2 in Chapter 2 for an explanation of network analyses) of eating disorder symptoms indicate that
over‐evaluation of weight and shape and judging one's self‐worth largely by these factors is a central
causal feature of all eating disorders, including anorexia nervosa, bulimia nervosa, and binge‐eating
disorder (DuBois, Rodgers, Franko, Eddy, & Thomas, 2017; Solmi et al., 2018). Body dissatisfaction
is usually defined as the gap between one's actual and ideal weight and shape (Polivy & Herman, 2002),
and most theories of eating disorders implicate body dissatisfaction as an important component of the
aetiology, especially when the individual tends to judge their own self‐worth by body weight and shape
(e.g., Stice, 2001; Polivy & Herman, 1985; van den Berg, Thompson, Obremski, & Coovert, 2002;
Murphy, Straebler, Cooper, & Fairburn, 2010). Body dissatisfaction is likely to trigger bouts of dieting
in order to move towards the ideal body shape, and regular or excessive dieting is also a common
precursor to eating disorders (Polivy & Herman, 1987; Stice, 2001). Figure 10.3 provides a schematic
representation of one model describing how body dissatisfaction might be generated through media,
peer, and parental influences, and then this itself will affect dieting behaviour and eating disorder
symptomatology such as bulimia (e.g., Rogers, Chabrol, & Paxton, 2011). There is no doubt that body
dissatisfaction and dieting are important predictors of all eating disorders (Joiner, Heatherton, Rudd, &
Schmidt, 1997; Steiger, Stotland, Trottier, & Ghadirian, 1996; Stice, Shaw, & Nemeroff, 1998), but it is
important to note that they are not sufficient conditions for an individual to develop an eating disorder.
For example, (a) many individuals may believe that their actual body shape is quite disparate from their
ideal, yet be quite happy with that fact (Polivy & Herman, 2002), and (b) many individuals who express
real body dissatisfaction do not necessarily go on to develop an eating disorder. Similarly, while dieting is
usually an activity that precedes an eating disorder, many individuals who diet regularly do not go on to
develop an eating disorder. This suggests that additional psychological factors are necessary for body
dissatisfaction and dieting to develop into an eating disorder, and we discuss some of these factors in
Section 10.3.4. Nevertheless, body dissatisfaction and dieting are vulnerability factors, and this is
demonstrated in part by the fact that occupations that require an individual to control and monitor
their weight (usually through either exercise or dieting) have higher incidences of eating disorders.
These include fashion models (Santonastaso, Mondini, & Favaro, 2002), actors, athletes (Sudi et al.,
2004), figure skaters (Monsma & Malina, 2004), and ballet dancers (Ravaldia et al., 2003) (Figure 10.4).
Finally, because body dissatisfaction has been identified as a central node in network analyses of eating
disorder symptoms (meaning that it is likely to causally influence other symptoms), it is a factor that
needs to be targeted early on in the treatment of an eating disorder (DuBois, Rodgers, Franko, Eddy, &
Thomas, 2017; Solmi et al., 2018).
body dissatisfaction The gap between one’s actual and ideal weight and shape.
dieting A restricted regime of eating, followed in order to lose weight or for medical reasons.
FIGURE 10.3 This is one model of how body dissatisfaction might mediate eating disorder symptoms and is known as
the tripartite model (van den Berg, Thompson, Obremski, & Coovert, 2002; Yamamiya, Shroff, & Thompson, 2008). In
this model, the effect of influences from media, parents and peers is mediated by internalisation of social ideals and social
comparison, leading to body dissatisfaction, disordered eating and negative affect.
FIGURE 10.4 Research on female body shape dissatisfaction has demonstrated that females consistently overestimate
their own body size compared to (a) the body size that they thought that men would like most, and (b) the body size they
think that most women would like to have. Interestingly, women rated the body size they thought men would like most as
significantly slimmer than the body size that men themselves rated as most attractive.
Peer influences
Just like the media, peer attitudes and views can seriously influence an adolescent's view of their body,
their weight, and their eating and dieting activities, and adolescent girls tend to learn their attitudes to
slimness and dieting through their close contact with their peers (Levine, Smolak, Moodey, Shuman, &
Hessen, 1994). Peer pressure can influence attitudes to body shape and eating in a variety of ways. In
some cases, attitudes to eating and body shape within a peer group converge towards those that are
socially valued (such as dieting or restricted eating) (Meyer & Waller, 2001), and this convergence also
results in the group adopting psychological characteristics that may facilitate pathological eating
behaviours, such as perfectionism. A study of adolescent schoolgirls by Eisenberg, Neumark‐Sztainer,
Story, & Perry (2005) found that the use of unhealthy weight‐control behaviours (e.g., self‐induced
vomiting, laxatives, diet pills, or fasting) was significantly influenced by the dieting behaviour of close
friends, and this influence was effective in generating unhealthy weight‐control behaviours regardless of
whether the individual was overweight, normal weight or underweight. Despite these findings, it is
difficult to determine whether peer influence (a) determines attitudes towards eating and body shape (it
is possible that peer groups recruit members on the basis of shared concerns rather than directly
changing the attitudes of their members), or (b) has a significant role in the development of eating
disorders (while peer pressure can increase the tendency to diet or to be dissatisfied with one's body
shape, these factors do not automatically lead to the development of eating disorders).
Familial factors
We have noted earlier that eating disorders have a tendency to run in families, and while this may in
part be due to inherited characteristics, it may also be a result of the direct influence of family attitudes
and dynamics on the behaviour of those in the family. In particular, Minuchin (Minuchin et al., 1975;
Minuchin, Rosman, & Baker, 1978) has argued that eating disorders are best understood by considering
the family structure of which the sufferer is a part. This family systems theory view argues that the
sufferer may be embedded in a dysfunctional family structure that actively promotes the development of
eating disorders (see Dallos, 2019, for a review of Minuchin's seminal works on family systems theory).
The family structure may inadvertently, but actively, reinforce a child's disordered eating, and this can
function to distract from dealing with other conflicts within the family (such as a deteriorating
relationship between the child's mother and father). In Minuchin's view the families of individuals with
eating disorders tend to show one or more of the following characteristics: (a) enmeshment, in which
parents are intrusive, overinvolved in their children's affairs, and dismissive of their children's emotions
and emotional needs (Minuchin et al., 1978); (b) overprotection, where members of the family are
overly concerned with parenting and with one another's welfare, and this can often be viewed by the
child as coercive parental control (Shoebridge & Gowers, 2000; Haworth‐Hoeppner, 2000); (c) rigidity,
where there is a tendency to maintain the status quo within the family; and (d) lack of conflict
resolution, where families avoid conflict or are in a continual state of conflict. A study of 181 families
where one of the adolescent children had been diagnosed with an eating disorder revealed profiles of
problematic family functioning when compared with the profiles of balanced families (Cerniglia et al.,
2017). The families with eating‐disordered adolescent children reported interpersonal boundary
problems, avoidance of conflict resolution, and high scores on measures of enmeshment and rigidity, all
largely supporting the factors outlined in Minuchin's description of the dysfunctional family that is a
risk factor for eating disorders. How these characteristics of the sufferer's family influence the
development of an eating disorder is unclear, although the family may focus on the disorder once it has
developed in order to avoid dealing with other difficult and important problems within the family. The
disorder may serve a functional purpose for both the parents (by distracting attention away from other
family difficulties such as a problematic relationship between mother and father) and the eating
disordered child (as a tool for manipulating the family) (Minuchin et al., 1978). As we shall see later, the
issue of how a dysfunctional family environment may generate an eating disorder is still unclear, but it
may do so by generating specific psychological characteristics in the child that play an active role in the
acquisition and maintenance of the disorder (Polivy & Herman, 2002).
family systems theory A theory which argues that the sufferer may be embedded in a
dysfunctional family structure that actively promotes psychopathology.
enmeshment A characteristic of family systems theory in which parents are intrusive, over-
involved in their children’s affairs, and dismissive of their children’s emotions and emotional
needs.
overprotection A characteristic of family systems theory where members of the family are
overconcerned with parenting and with one another’s welfare, and this can often be viewed by
the child as coercive parental control.
rigidity A characteristic of family systems theory where there is a tendency to maintain the
status quo within the family.
lack of conflict resolution A characteristic of family systems theory where families avoid
conflict or are in a continual state of conflict.
As an important part of the family, mothers may have a specific influence on the development of eating
disorders in their children. Mothers of individuals with an eating disorder are themselves more likely to
have dysfunctional eating patterns and psychiatric disorders (Hill & Franklin, 1998; Hodes, Timimi, &
Robinson, 1997) and these problematic maternal eating patterns appear to produce feeding problems in
their offspring at an early age (Whelan & Cooper, 2000), some of which may give rise to weight gain,
disordered eating, and emotional problems in their offspring later in life (Easter et al., 2013; Micali, De
Stavola, Ploubidis, Simonoff, & Treasure, 2014). Mothers of sufferers also tend to excessively criticise
their daughters' appearance, weight, and attractiveness when compared with mothers of nonsufferers
(Hill & Franklin, 1998; Pike & Rodin, 1991), and there is a significant inverse relationship between a
mother's critical comments and her daughter's chances of successful recovery following treatment
(van Furth, van Strien, Martina, van Son, & Hendrickx, 1996).
While this research strongly implicates the involvement of familial factors in the aetiology of eating
disorders, Polivy & Herman (2002) quite rightly point out that most of the studies are retrospective and
correlational in nature and so do not imply causation. There may indeed be some form of intrafamilial
transmission of disordered eating patterns within families, but it is quite likely that some other factor
(biological, psychological, or experiential) may be necessary to trigger the severe symptoms typical of a
clinically diagnosable disorder (Steiger, Stotland, Trottier, & Ghadirian, 1996).
Laboratory procedures have been developed that provide an objective behavioural measure of
the tendency to ‘binge’ eat, and one of these is the food preload test (see Polivy, Heatherton,
& Herman, 1988).
This test begins by asking participants to eat a filling preload (e.g., a 15 oz. chocolate milkshake
or a large bowl of ice cream) under the pretence of rating its palatability.
After eating the preload and making their ratings, participants are then told they can eat as
much of the remaining milkshake (or ice cream) as they wish.
The real measure of interest is the amount of milkshake or ice cream that the participant eats
at the end of the study—this is a measure of how willing the individual is to continue eating
after having already had a full, filling portion of food.
This experimental procedure has shown that willingness to continue eating is a function of a
number of factors, including whether the individual (a) is a restrained eater (has a tendency to
dieting or has distorted attitudes about eating), (b) has low self‐esteem, and (c) is in a negative
mood. Restrained eaters will eat more food than nondieters even if they rate the food as
relatively unpalatable.
Eating disorders are very much associated with negative affect (usually depressed mood), and mood
disorders are often comorbid with both anorexia and bulimia (Braun et al., 1994; Brewerton et al.,
1995). While negative mood and stress are commonly reported antecedents of eating disorders (Ball &
Lee, 2000), there is some disagreement about whether negative affect is a cause or just a consequence of
the disorder. Nevertheless, negative affect has been proposed to play a number of discrete roles in the
aetiology of eating disorders. Experimental studies have indicated that induced negative mood does
increase body dissatisfaction and body‐size perception in bulimia sufferers (Carter, Bulik, Lawson,
Sullivan, & Wilson, 1996), and it may contribute in part to eating disorders through this route. Negative
mood states have also been shown to increase food consumption and bingeing in individuals who are
dieting or who have distorted attitudes about eating, and this may represent a role for negative mood in
generating the bingeing and purging patterns typical of bulimia sufferers (Culbert, Racine, & Klump,
2015; Herman, Polivy, Lank, & Heatherton, 1987) (Research Methods in Clinical Psychology 10.2). For
example, individuals with bulimia try to alleviate their negative mood by eating, and purging allows
them to use eating as a mood regulation process without gaining weight. However, when the bulimia
sufferer begins to realise that their eating is out of control, this activity no longer provides relief from
negative mood, and purging may take over as a means of relieving guilt, self‐disgust, and tension
(Johnson & Larson, 1982). This is consistent with laboratory‐based studies that show that bulimia
sufferers show reduced anxiety, tension, and guilt following a binge‐purge episode (Sanftner & Crowther,
1998). Studies such as these suggest that the negative mood possessed by individuals with eating
disorders may not simply be a consequence of the disorder but may play an active role in generating
symptoms by increasing body dissatisfaction and being involved in processes of mood regulation which
act to reinforce disordered eating behaviours.
A second prominent characteristic of individuals with eating disorders is low self‐esteem (Mora,
Rojo, & Quintero, 2017). This low self‐esteem may simply be a derivative of the specific negative views
that those with eating disorders have of themselves (such as being ‘fat’, having an unattractive body, or
in bulimia, having a lack of control over their eating behaviour). However, there is some evidence to
suggest that low self‐esteem may have a role to play in the development of eating disorders. First, it is a
significant prospective predictor of eating disorders in females (suggesting that it is not just a
consequence of eating disorders) (Button, Sonuga‐Barke, Davies, & Thompson, 1996). Second, eating
disorders such as anorexia are viewed by some researchers as a means of combating low self‐esteem by
demonstrating control over one specific aspect of the sufferer's own life—i.e., their eating (Troop, 1998).
In this sense, self‐esteem may be implicated in the development of eating disorders because controlled
eating is the individual's way of combating their feelings of low self‐esteem. In addition, low self‐esteem
may be an important factor in turning dissatisfaction with body shape into an indicator of general self‐
worth (‘I am dissatisfied with my body, therefore I am a worthless person’) and in such circumstances
self‐worth (and self‐esteem) can be raised only by dieting or purging to deal with the body dissatisfaction
problem.
Individuals diagnosed with anorexia and, to a lesser extent, those diagnosed with bulimia score
high on measures of perfectionism, and this personality characteristic has regularly been implicated
in the aetiology of eating disorders (Garner, Olmsted, & Polivy, 1983; Bastiani, Rao, Weltzin, & Kaye,
1995). Perfectionism is multifaceted and can be either self‐oriented (setting high standards for oneself) or
other‐oriented (trying to conform to the high standards set by others). It can also be adaptive (in the
sense of trying to achieve the best possible outcome) or maladaptive (in terms of striving to attain what
may well be unachievable goals) (Bieling, Israeli, & Antony, 2004; Bardone‐Cone, Lin, & Butler, 2017).
Perfectionism is a predictor of bulimic symptoms in women who perceive themselves as overweight
(Joiner, Heatherton, Rudd, & Schmidt, 1997), and both self‐oriented and other‐oriented perfectionism
have been found to predict the onset of anorexia (Tyrka, Waldron, Graber, & Brooks‐Gunn, 2002).
Perfectionism is also one of the few personality traits that predicts the maintenance of eating disorders
at 10‐year follow‐up (Holland, Bodell, & Keel, 2013). Other research has suggested that the perfectionist
characteristics displayed by individuals with eating disorders may actively contribute to their disordered
eating. For example, Strober (1991) has argued that self‐doubting perfectionism predisposes individuals
to eating disorders. Perfectionism is highly associated with measures of body dissatisfaction and drive for
thinness (Ruggiero, Levi, Ciuna, & Sassaroli, 2003), and so it is not difficult to see how perfectionism
may be an indirect causal factor in the aetiology of eating disorders as it drives the dieter to achieve the
perfect body shape or the stringent dieting goals they set themselves (Keel & Forney, 2013). Interestingly,
perfectionism is a characteristic of many psychological disorders (Egan, Wade, & Shafran, 2011) and is
one of the best predictors of comorbidity across the anxiety disorders (Bieling, Summerfelt, Israeli, &
Antony, 2004). So, if perfectionism does play a causal role in eating disorders, we need to ask why it was
an eating disorder that developed and not any one of a number of other disorders that have
perfectionism as a prominent feature.
10.3.7 Summary
This section on aetiology has concluded that a number of psychological and cognitive
processes may be important common factors in the acquisition and maintenance of all eating disorders,
and these psychological factors include the defining of self‐worth in terms of control over eating, low
self‐esteem, clinical perfectionism, interpersonal problems, and intolerance of negative moods such as
depression. Many of these psychological factors may be influenced by exposure to media ideals of body
shape, peer attitudes to controlled eating, and familial factors—such as intrafamily conflict or
dysfunctional mother–daughter interactions. Traumatic life events also appear to be risk factors for
eating disorders, and childhood maltreatment has been one specific form in which trauma has been
researched in relation to eating disorders. There is an inherited component to eating disorders, although
twin studies have tended to emphasise that unique environmental experiences are equally as important
as genes in the aetiology of eating disorders. Although a very small number of genes have been
identified that may contribute to eating disorder risk, the influence of epigenetic processes may be
more important. In these cases individual genes with eating disorder relevance may be switched on or
off depending on significant life experiences such as starvation, weight loss, or trauma. Finally, eating
disorder symptoms have been found to be associated with a number of brain mechanisms and reward
pathways, including opioid, serotonin, and dopamine pathways, but it is still unclear whether these
neurobiological processes are causes of eating disorder symptoms or are themselves consequences of
those symptoms. Similarly, deficits in the cognitive control of behaviours that may result in overeating
may contribute to conditions such as binge‐eating disorder and bulimia, but more research is needed to
determine the causal direction of these relationships.
SELF‐TEST QUESTIONS
Can you name some of the important risk factors for anorexia and bulimia?
Can you describe some of the biological factors that might be involved in the development
of an eating disorder?
What role might brain neurotransmitters play in the acquisition and maintenance of
eating disorder symptoms?
Can you name some of the important sociocultural factors that influence the development
of eating disorders? What evidence is there that these factors influence body dissatisfaction
and attitudes to dieting?
What are the important dispositional factors associated with eating disorders? Do they
have a causal role to play in the development of an eating disorder?
What are the main features of the tripartite model of eating disorders?
ok so last night i was purging ya know and i kinda came out with more force that i was expecting
well anyway i leaned closer and actually only go half in the toilet. The other half down the side
and on my sock and the floor. i spent forever cleaning up but my mum found the leftovers and
asked if i was sick i said no i think she bought it but she'll be on the look out so how can i hide it
better? Seriously i need help!
i really don't want you to become bulimic because it makes you feel awful, you get headaches,
light‐headedness, irregular heartbeats and you can rip your throat and you burst blood vessels and
all this bad stuff. However, if you do start, to save you a lot of pain, make sure you drink lots of
water with everything, and diet coke is good too. Don't try and purge orange juice because it
hurts like hell. Make sure you chew everything thoroughly too, otherwise it can get stuck in your
throat. And if you see blood when you purge, it is not a good sign and you should stop for a while.
ok im so sure someone would have to have an answer for me … ok so i'm really good at making
myself sick … but 4 the last 2 days i cant get anything up … its nasty tho bc i definatly get dry
heaves and i gag for about 10 minutes at a time … and the last time i tried it i was choking … i
could still breath but i was gasping … i freaked myself out … and i know to chew really well and
all that, so i know thats the problem … u think its because my esophagus is irritated or something
like that?? please give me a bit of advice… thanks
Why is it that I can go without food for most of the day and then when it comes to the evening I
go totally crazy and binge, then afterwards I feel really bad and hate myself and vow never to eat
that much again, but I always do. Can somebody please give me some ideas on how to stop
binging? I'd really appreciate it.
after you eat would you go to the bathroom? I am relatively new at it but have had some luck but
just not as much coming up as I thought. How soon after and how long would you do it for?
The examples above illustrate how bulimia sufferers will actively swap experiences, share information about
the best ways of purging, and advise one another on how best to conceal their activities.
In the following sections, we describe and discuss the main forms of treatment that have been used for
eating disorders. These are pharmacological treatments, family therapy, and CBT. The UK National
Institute for Health and Care Excellence (NICE) recommends a number of different treatments
depending on the nature of the eating disorder (NICE, 2017). For anorexia nervosa in adults these
include individual eating‐disorder‐focussed CBT (CBT‐ED), the Maudsley Anorexia Nervosa
Treatment for Adults (MANTRA), and specialist supportive clinical management (SSCM); for bulimia
nervosa and binge‐eating disorder the recommendations are CBT‐ED and guided self‐help
programmes. Self‐help programmes are an important component of the treatment provision for both
bulimia and binge‐eating disorder, and bulimia self‐help groups that use structured manuals and
require minimum practitioner management can show significant treatment gains—especially when
these help the patient to identify triggers for bingeing and develop preventative behaviours for purging
(Cooper, Coker, & Fleming, 1994). They can be equally or more effective than CBT in establishing
remission from bingeing and purging symptoms (Bailer et al., 2004; Priemer & Talbot, 2013).
Alternative delivery systems do allow access to services for sufferers who might, for whatever reason, not
receive other forms of treatment. These include treatment and support via telephone therapy, email, the
Internet, computer‐software CD‐ROMs, and virtual reality techniques (see Chapter 4, Section 4.1.2),
and assessment of the effectiveness of these methods is encouraging (Myers, Swan‐Kremeier,
Wonderlich, Lancaster, & Mitchell, 2004; Wagner et al., 2013) (Table 10.6).
Pharmacological treatments
Because both anorexia and bulimia are frequently comorbid with major depression, eating disorders
have tended to be treated pharmacologically with antidepressants such as fluoxetine (Prozac) (Kruger &
Kennedy, 2000). There is some evidence that such treatments can be effective with bulimia when
compared with placebo conditions, but this evidence is still far from convincing (e.g., Grilo et al.,
2007). Some studies have indicated a modest reduction in the frequency of bingeing and purging with
such antidepressants compared to placebo controls (e.g., Wilson & Pike, 2001; Bellini & Merli, 2004),
but dropout rates can still be unacceptably high (Bacaltchuk & Hay, 2003). More significant treatment
gains are reported if antidepressant medication is combined with psychological treatments such as CBT
(Pederson, Roerig, & Mitchell, 2003). The benefits with joint drug and CBT programmes appear to be
reciprocal in that CBT helps to address the core dysfunctional beliefs in bulimia (see below), and
antidepressant drug treatment appears to reduce the tendency to relapse following cognitive behavioural
treatment (Agras et al., 1992). CBT plus antidepressants can also be effectively used in a stepped‐care
approach in which CBT comprises the initial step with the addition of fluoxetine for nonresponders
after six sessions (Mitchell et al., 2011).
TABLE 10.6 NICE guidelines for use of CBT‐ED with anorexia nervosa
Individual CBT‐ED programmes for adults with anorexia nervosa should:
typically consist of up to 40 sessions over 40 weeks, with twice‐weekly sessions in the first 2 or 3
weeks
aim to reduce the risk to physical health and any other symptoms of the eating disorder
encourage healthy eating and reaching a healthy body weight
cover nutrition, cognitive restructuring, mood regulation, social skills, body image concern, self‐
esteem, and relapse prevention
create a personalised treatment plan based on the processes that appear to be maintaining the
eating problem
explain the risks of malnutrition and being underweight
enhance self‐efficacy
include self‐monitoring of dietary intake and associated thoughts and feelings
include homework, to help the person practice in their daily life what they have learned.
Pharmacological treatments with anorexia have tended to be significantly less successful than with
bulimia, but the studies assessing drug treatment with anorexia have been relatively limited in number
(Pederson, Roerig, & Mitchell, 2003; Claudino et al., 2006). Antidepressants would rarely be used as the
sole intervention for anorexia but might be helpful in preventing relapse and improving anxiety and
depression symptoms (Marvanova & Gramith, 2018). Nevertheless, outcome studies so far have found
very little effect of antidepressants on either weight gain in anorexia or significant changes in other core
features of the disorder, such as eating attitudes or body shape perceptions (Attia, Haiman, Walsh, & Flater,
1998; Biederman et al., 1985). Pharmacological treatments of eating disorders also have the added
disadvantage of higher dropout rates from treatment than psychological therapies (Fairburn & Wilson,
1992) and also have a number of physical side effects.
Family therapy
One of the most common therapies used with eating disorder sufferers—and particularly anorexia
sufferers—is family therapy. This stems mainly from the theories of Minuchin (Minuchin et al.,
1975; Minuchin, Rosman, & Baker, 1978—see Dallos, 2019), whose family systems theory view argues
that the sufferer may be embedded in a dysfunctional family structure that actively promotes the
development of eating disorders (see Section 10.3.2). In particular, this view argues that the eating
disorder may be hiding important conflicts within the family (such as a difficult relationship between the
sufferer's parents), and the family may be implicitly reinforcing the eating disorder in order to avoid
confronting these other conflicts. As we noted in Section 10.3.2, the families of individuals with eating
disorders exhibit the characteristics of enmeshment, overprotectiveness, rigidity, and lack of conflict
resolution, and family therapy can be used to unpack and address these dysfunctional family
characteristics. Treatment in Practice Box 10.1 is an example of how family therapy is applied in the
context of an adolescent family member with anorexia. This example shows how family therapy can be
used to explore concerns about relationships and emotional expression within the family, as well as
individual feelings of failure, shame, and guilt. Exploring these issues throws up other conflicts and
difficulties within the family and how the anorexia sufferer may see themselves as trapped within these
existing relationships and conflicts (Dallos, 2004).
Sandy is 17 and for 2 years had been suffering with anorexia of such severity as to require two
brief stays in hospital. She was living with her parents and older brother. Two older brothers
had left home.
Though all were invited, only Sandy and her parents attended family therapy, which took
place at intervals of 3–4 weeks over 18 months. The sessions, held with the full permission of all of
the family, lasted 1 hour, with the therapist in the room with them and a team
observing from behind a one‐way mirror. The team usually joined the family and the therapist
after 40–50 minutes and held a reflective discussion with each other in front of the family in
which they shared their ideas about the family's problems, ideas, understandings, and feelings.
The family were then invited to comment, and afterwards held a closing discussion with the therapist
when the team had left the room. The core idea of family therapy is that problems such as
anorexia are not simply, or even predominantly, individual but are related to wider stresses and
distress that the family are experiencing. In addition, it is recognised that the sense of failure and
blame associated with conditions such as anorexia can paralyse families' abilities to help each
other.
Initially each member of the family was asked to describe what they saw as the main problems
and invited to offer their explanations of the causes of the problem. This was followed by a
focus on two broad areas: (a) the impact of the problems on each of them and their
relationships with each other, and, in turn (b) the influence that they could exert on the nature
of the problems. Initially the parents indicated that the distress caused by Sandy's anorexia was
the main problem for all of them. However, it quickly emerged that the parents had very
different ideas about what caused the problems and what to do to help. Mr. Sinclair had a
medical and practical view and Mrs. Sinclair a more relational and emotional one. Through the
use of a genogram (family tree) it was revealed that both parents had themselves had very
negative experiences of being parented that made it hard for them to know how to comfort and
help their children. It also transpired that their marriage was in serious difficulty. Sandy
commented that her parents' conflicts upset her and she felt caught in the middle in trying to
meet the emotional needs of both of her lonely parents. In effect she felt like she was a therapist
for her own family (Interestingly, she has gone on to study psychology at university).
The therapist and the reflecting team discussed the possible impacts that the parents' own
experiences may have played on how they acted towards Sandy. Along with this there were
discussions of a variety of related issues, such as the pressure on young women to conform to
stereotypes of thinness and starving as an attempt to exert control in one's life. Some marital
work was done separately with the parents to look at their childhoods, their marriage, their own
needs and how these affected Sandy. Mrs. Sinclair in particular felt she had failed as a mother
but was relieved to hear that the team did not see it in this way. Sandy gained considerable
insight into how the family dynamics across the generations had impacted on her and her
parents. She became independent enough to go to university but initially struggled to separate,
as did her parents. Some struggle with her weight continues but she is confident that she will
cope in the long term.
Case history provided by Dr. Rudi Dallos, Consultant Clinical Psychologist (Somerset
Partnership Trust) and Director of Research, Clinical Psychology Teaching Unit, University of
Plymouth
A more recent family‐based therapy for eating disorders is the Maudsley Approach, which has a
number of stages beginning with a stage which focuses on how the family can help solve the problems
they are facing, a second stage which helps the family to challenge the eating disorder symptoms, and a
third stage which develops family relations and activities once recovery from eating disorder symptoms
has occurred (Eisler, 2005; Lock et al., 2010).
However, while there is significant support for the use of family therapy with eating‐disordered individuals
—especially those with a diagnosis of anorexia nervosa (e.g., Rosman, Minuchin, & Liebman, 1975)—
well‐controlled treatment outcome research remains somewhat limited (Cottrell, 2003; Downs & Blow,
2013).
The Maudsley Anorexia Nervosa Treatment for Adults (MANTRA)
The MANTRA model is a treatment programme for anorexia nervosa developed at the Maudsley
Hospital in London and has been recommended as a first‐line treatment by NICE since 2017. This
model suggests that anorexia typically develops in people with anxious, perfectionist and obsessional
personality traits at times of stress or increased developmental demands, and dietary restriction becomes
a way of managing negative emotions and coping with stress (Schmidt & Treasure, 2006). In the
MANTRA model treatment targets are key anorexia nervosa maintaining factors including (a) a
thinking style characterised by rigidity, detail focus, and fear of making mistakes; (b) an inexpressive,
avoidant emotional and relational style; (c) positive beliefs about the utility of anorexia for the
individual; and (d) the response of close others (especially family members) characterised by high
expressed emotion (criticism and anger) and enabling and accommodating the eating disorder. The
treatment model is supported by recent evidence that MANTRA results in significant improvements in
BMI and reduction in eating disorder symptoms and distress levels (Schmidt et al., 2015).
Cognitive behaviour therapy (CBT)
The treatment of choice for bulimia is generally considered to be CBT. The UK NICE guidelines for
the treatment of eating disorders make their strongest recommendation for the use of CBT with bulimia,
usually for 16–20 sessions over a 4–5 month period (Wilson & Shafran, 2005). CBT for bulimia is based
on the transdiagnostic cognitive model developed by Fairburn and colleagues (Fairburn, Shafran, &
Cooper, 1999, see Section 10.3.6). According to this model, individuals with bulimia have a long‐
standing pattern of negative self‐evaluation that interacts with concerns about weight, shape, and
attractiveness. Such individuals come to evaluate their worth solely in terms of their weight and shape—
largely because this is often the only area of their lives that they can control. They develop idealised
views of thinness that are often unachievable and as a result end up in a constant state of dissatisfaction
with their body shape and their weight. This leads them to excessive dieting, and as a result of this
dietary restriction they lapse into episodes of bingeing. This in turn invites the use of weight
compensation methods such as vomiting and laxative abuse. Each episode of bingeing and purging is
followed by a more determined effort to restrict eating, which leads to an increasingly vicious cycle of bingeing
and purging (see Figure 10.5). Treatment in Practice Box 10.2 shows the three stages of CBT that are
required to deal with both the symptoms of bulimia and the dysfunctional cognitions that underlie these
symptoms. These cover (a) meal planning and stimulus control, (b) cognitive restructuring to address
dysfunctional beliefs about shape and weight, and (c) developing relapse prevention methods. In stage 1,
individuals are taught to identify the stimuli or events that may trigger a binge episode (such as a period
of stress or after an argument with a boyfriend or parent), and are also taught not to indulge in extremes
of eating behaviour (e.g., dieting and bingeing) and that normal body weight can be maintained simply
by planned eating. In stage 2, dysfunctional beliefs about weight, body shape, and eating are identified,
challenged, and replaced with more adaptive cognitions. For example, beliefs that relate eating and
weight to self‐worth, such as ‘no one will love me if I am a single pound heavier’ are challenged. In
stage 3 relapse prevention is often encouraged with the use of behavioural self‐control procedures that
enable the individual to structure their daily activities to prevent bingeing and purging, and rewarding
oneself for ‘good’ behaviours (e.g., sticking to a planned eating programme).
FIGURE 10.5 Wilson, Fairburn, and Agras' (1997) cognitive model of the maintenance of bulimia and on which
contemporary CBT for bulimia is based. Low self‐esteem leads to concerns about weight, followed by dietary restriction,
which—when such dieting fails—leads to binge eating and subsequent purging. Following purging, individuals become
more determined to restrict eating, and a vicious cycle is established that maintains the bingeing‐purging pattern.
CLINICAL PERSPECTIVE: TREATMENT IN PRACTICE 10.2
THE THREE STAGES OF CBT FOR BULIMIA NERVOSA
Outcome studies indicate that CBT for bulimia is successful in reducing bingeing, purging, and dietary
restraint, and there is usually an increase in positive attitudes towards body shape (Compas, Haaga,
Keefe, Leitenberg, & Williams, 1998; Richards et al., 2000). In addition, follow‐up studies suggest that
therapeutic gains can be maintained for up to 5 years following treatment (Fairburn et al., 1995).
Importantly, when CBT is effective it not only significantly reduces the behavioural aspects of bulimia
such as bingeing and purging (Agras, 1997; Wilson, Fairburn, & Agras, 1997) but also has beneficial
effects on core ‘cognitive’ aspects of bulimia such as beliefs about dietary restraint and
low self‐esteem (Anderson & Maloney, 2001). CBT has also been shown to be a more comprehensive
treatment of bulimia than antidepressant drugs (Whittal, Agras, & Gould, 1999), and other
psychotherapeutic interventions such as psychodynamically oriented therapy (Walsh et al., 1997) and
interpersonal psychotherapy (Wilson, Fairburn, Agras, Walsh, & Kraemer, 2002). When effective, CBT
also produces rapid improvement, with 76% of clients showing an improvement in the
frequency of binge eating and 69% showing improvement in the frequency of purging within 3 weeks
of the start of treatment (Wilson et al., 1999). Much of this rapid improvement can be ascribed to the
behavioural homework assignments that are a unique feature of CBT and which appear to help
alleviate depression and enhance self‐efficacy (Burns & Spangler, 2000; Fennell & Teasdale, 1987).
An ‘enhanced’ form of CBT has been developed for use with all forms of eating disorder (labelled
Enhanced Cognitive Behavioural Therapy for Eating Disorders or CBT‐E). It uses
strategies and procedures to address the over‐evaluation of shape and weight in eating disorder sufferers
by targeting these cognitive mechanisms (de Jong, Schoorl, & Hoek, 2018), and it can be
helpful with sufferers who are significantly underweight (e.g., anorexia nervosa sufferers) by increasing
their motivation to change and then helping them to regain weight while at the same time addressing
psychological issues relating to shape and weight (Cooper & Fairburn, 2011). Preliminary outcome
studies with anorexia sufferers suggest that a majority were able to complete this enhanced programme
with a substantial increase in weight and reduction in eating disorder symptoms (Fairburn et al., 2013),
and Table 10.6 lists the NICE guidelines for use of CBT‐ED with anorexia nervosa.
Prevention programmes
Finally, clinicians are aware of the importance of prevention programmes that put eating disorders
into a social context and try to prevent eating disorders occurring. School‐based prevention programmes
attempt to (a) educate vulnerable populations about eating disorders, their symptoms, and their causes;
(b) help individuals to reject peer and media pressure to be thin; and (c) target risk factors for eating
disorders such as dieting, dissatisfaction with body image, etc. In a review of effective prevention
programmes, Stice, Becker, and Yokum (2013) identified two contemporary prevention programmes
whose effectiveness could be empirically verified. These were the Body Project intervention (in which
young women critique the thin‐ideal in a series of exercises) (Stice, Rohde, & Shaw, 2012;
www.bodyprojectsupport.org), and the Healthy Weight intervention which attempts to improve dietary
intake and physical activity (Stice, Trost, & Chase, 2003). The most effective prevention programmes
have been found to be those that are the most interactive and selectively target individuals at high risk
for eating disorders (Stice, Shaw, & Marti, 2007). Future developments in prevention programmes will
probably need to extend the number of risk factors that they target (i.e., not just body dissatisfaction but
also dieting and negative affect), and also consider ways in which they can incorporate the prevention of
both eating disorders and obesity into individual programmes (see Khanh‐Dao Le, Barendregt, Hay, &
Mihalopoulos, 2017, for a recent review).
SELF‐TEST QUESTIONS
Can you name the three main forms of treatment for eating disorders? Which ones are
more suited to bulimia or to anorexia, and why?
What is the rationale for adopting a CBT approach to the treatment of bulimia nervosa,
and what are the important stages of this treatment?
How successful are pharmacological interventions in the treatment of eating disorders?
What are the main features of family therapy for eating disorders?
SECTION SUMMARY
CHAPTER OUTLINE
11.1 DEFINING PATHOLOGICAL SEXUAL BEHAVIOUR
11.2 SEXUAL DYSFUNCTIONS
11.3 PARAPHILIC DISORDERS
11.4 SEXUAL PROBLEMS REVIEWED
LEARNING OUTCOMES
When you have completed this chapter, you should be able to:
1. List the various types of sexual dysfunction and their place in the sexual cycle, and
describe their diagnostic criteria.
2. Compare and contrast various theories of the aetiology of sexual dysfunctions.
3. Describe and evaluate both psychological and biological treatments for sexual
dysfunctions.
4. Describe the basic characteristics of paraphilic disorders and their diagnostic criteria.
5. Describe and evaluate both psychological and biological explanations for the aetiology of
paraphilic disorders.
6. Describe and evaluate behavioural, cognitive, and biological treatments for paraphilic
disorders.
7. Discuss some of the conceptual and ethical issues involved in defining and diagnosing
sexual and gender identity problems.
‘So, I have kind of a weird problem here. Normally, getting an erection is no problem for me. I wake up with one
every morning, masturbate all the time, sometimes pop them up at inopportune moments, etc. I'm 21, by the way.
Also in most sexual encounters there's no problem. I'd say 90% of the time. However, the problem arises (or fails
to arise!) when I'm in bed with someone I'm extremely attracted to and very interested in. To date this has only
happened with two men, but it is happening with the second of those two people right now and it's driving me
crazy. Things will start out fine (i.e., erect) and I can be rolling around in bed with an erection for an hour, but it
seems that as soon as it's time for my penis to do its duty (that is when foreplay ends and penetration commences),
it deflates and vehemently refuses to be resuscitated. I don't know what to do! It's unendingly embarrassing, but
the worst is that it only occurs with people I really like’.
James' Story
Introduction
Sexual behaviour plays a central role in most of our lives. It is a very personal and individual topic that
we very rarely discuss openly with others. Sexual development is also an important part of our lives,
where we learn about the nature of sexual behaviour and develop our own personal likes and dislikes
about sexual activities. Furthermore, during adolescence and early adulthood, sexual performance is
often related to self‐esteem and so becomes an important contributor to psychological development.
The importance of sexual behaviour and the critical role it may play in many of our relationships
means that it can regularly affect psychological functioning and quality of life generally. There is no
definition of what is sexually ‘normal’, but clinical psychologists may become involved when an
individual becomes distressed by their sexuality or their sexual activities or when these cause interpersonal
difficulties. James' story is one of problematic sexual performance, where his normally active libido fails
him just prior to intercourse with men he particularly likes. This provides a simple example case that
clinical psychologists are likely to encounter when sexual performance has an impact on psychological
well‐being, causing anxiety and distress, and feelings of guilt, shame, and depression.
sexual dysfunctions Problems with the normal sexual response cycle (e.g. lack of sexual
desire or pain during intercourse).
gender dysphoria A gender identity disorder in which an individual has a sense of gender
that is opposite to his or her biological sex.
gender identity The internal sense of being either male or female. Usually congruent with
biological gender, but not always, as in gender identity disorder.
Most of us take our sexual identity for granted. We do not question that we are the sex we were
born as, and we find that behaving as either a male or a female is natural and effortless. Our
gender identity seems to have been determined for as long as we have lived, and we think,
act, and dress accordingly. However, many individuals develop a sense of gender dysphoria
(unhappiness with their own gender) and feel that they have a sense of gender that is not
congruent with the biological sex they were born with. In such circumstances, the individual
may see themselves biologically developing as a man or a woman (e.g., growing a beard, or
developing breasts) but cannot shake off the belief that underneath the physical appearance
they are of a different gender.
In a survey of over 150,000 people in the US, around 1 in every 200 persons endorsed having a
gender identity problem or being transgender (Crissman, Berger, Graham, & Dalton, 2017).
The UK Office for National Statistics (2017) reports that estimates for the size of the
transgender community in the UK may range from 65,000 to 300,000, and in 2008 the Gender
Identity Research & Education Society reported that there were 6,200 people in the UK who
had transitioned to a new gender role via medical intervention.
Many people who experience gender identity problems opt to change their body's physical
characteristics to be consistent with their feelings of gender. This can be achieved by using
hormonal treatments to change secondary sexual characteristics or undergoing gender
reassignment surgery to alter the genitalia to be consistent with their feelings of gender. Despite
the radical nature of gender reassignment surgery, outcome studies tend to indicate that a large
majority of clients who undergo the full treatment are generally satisfied with the outcome and
express no regrets about their decision (Smith, van Goozen, Kuiper, & Cohen‐Kettenis, 2005;
Gijs & Brewaeys, 2007), and such individuals report improvements in life satisfaction, sexual
satisfaction, and mental health (Monstrey, Vercruysse, & De Cuypere, 2009).
But gender identity is not simply a binary matter where there are only two alternatives, male
and female; many people feel that they do not fit obviously into either of these categories. The
Office for National Statistics in its ‘Trans data position paper’ adopted the term Trans to
describe such individuals and defined it as ‘an umbrella term referring to individuals whose
gender identity or gender expression falls outside of the stereotypical gender norms’ (O'Neil,
McWhirter, & Cerezon, 2008).
Gender dysphoria is also the title of a diagnosable psychopathology in DSM‐5. In 2013 the
American Psychiatric Association changed the diagnostic category called ‘gender identity
disorder’ to ‘gender dysphoria’ and removed this condition from a general chapter on sexually‐
related problems and assigned it to a chapter of its own. The DSM‐5 criteria for gender
dysphoria in adolescents and adults are:
A marked discrepancy between an individual's expressed gender and assigned gender, over
a period of at least 6 months, as expressed by at least two of the following:
A discrepancy between expressed gender and sex characteristics
An intense desire to be without existing sex characteristics due to a discrepancy between
expressed and assigned gender
An intense desire for the sex characteristics of another gender or to be treated as another
gender
An intense belief that one has the typical feelings and reactions of another gender
Symptoms cause significant distress or impairment in performing major occupational,
social, or interpersonal life functions
The inclusion of gender dysphoria as a psychopathology diagnostic category in DSM‐5 has
caused considerable controversy. Labelling these criteria as ‘gender dysphoria’ appears to imply
that anyone who has gender identity problems has a diagnosable psychiatric disorder, but it is
only the final criterion (symptoms cause distress and impairment) that makes this diagnostic
category eligible as a potential mental health problem.
However, it is clear that many articles and studies alluding to the DSM‐5 category of gender
dysphoria implicitly view the gender identity issues as mental health problems in their own right
—which, of course, they are not (Davy & Toze, 2018). Associating gender identity issues
directly with psychiatric disorders stigmatises transgender individuals, and although it is the case
that transgendered individuals as a group are particularly vulnerable to common mental health
problems such as anxiety and depression, they are no more likely to suffer major psychiatric
disorders such as schizophrenia and bipolar disorder than any member of the general
population (Dhejne, Van Vlerken, Heylens, & Arcelus, 2016).
In 2017, in an attempt to destigmatise transgender individuals, Denmark became the first
country to take official action to ensure that the country's medical code titles did not associate
terms such as ‘gender’ or ‘transgender’ with ‘dysphoria’, ‘problem’, or ‘incongruence’. This
action explicitly recognises that a mismatch between one's birth gender and adopted gender
identity is not necessarily pathological and shifts the emphasis in treating individuals from fixing
a ‘psychiatric disorder’ to resolving any distress caused by the mismatch. We wait to see how the
next edition of DSM reacts to these controversial and important matters!
SELF‐TEST QUESTIONS
What are the sociocultural problems involved in defining pathological sexual behaviour?
SECTION SUMMARY
Erectile disorder
The basic feature of erectile disorder is the repeated failure to obtain or maintain erection during
partnered sexual activities, on all or almost all occasions over a significant period of time (see Table
11.3). This is often associated with low self‐esteem, low self‐confidence, and a decreased sense of
masculinity. Male erectile disorder is one of the most common of the sexual dysfunctions in men, is
usually the disorder that is most commonly referred for treatment, and is likely to have a significant
impact on the sexual satisfaction of both the sufferer and his partner. Approximately 13–21% of men
aged 40–80 years experience occasional problems with erections, but only 2% of men younger than 40
years do (DSM‐5, American Psychiatric Association, 2013, p. 427). The causes of male erectile disorder
are complex, and appear to range across physical, psychological, and sociocultural factors. Hormonal
and vascular problems such as high blood pressure, diabetes, and heart disease are associated with
erectile disorders (Berman et al., 1999) as are activities such as smoking and excessive alcohol
consumption (Westheimer & Lopater, 2002; Lee, Ho, Yip, Fan & Lam, 2010). Psychological factors that
may affect the ability to achieve and maintain erection include severe depression (Seidman, 2002), and
marital, financial, and occupational stress (Morokoff & Gillilland, 1993).
erectile disorder The inability to maintain an adequate erection during sexual activity.
Around 10 per cent of males report erection problems, but this increases to 20 per cent in the
over 50s.
TABLE 11.2 Summary: DSM‐5 diagnostic criteria for male hypoactive sexual desire disorder
Persistently or recurrently deficient sexual/erotic thoughts or desire for sexual activity for a period
of at least 6 months, causing significant distress to the patient
The sexual dysfunction is not better accounted for by a nonsexual mental disorder or as a
consequence of relationship or other significant stressors and is not attributable to the effects of a
medication/substance or other medical condition
TABLE 11.3 Summary: DSM‐5 diagnostic criteria for erectile disorder
At least one of the following occurs during at least 75% of sexual activity for a period of at least 6
months causing significant distress to the patient:
Difficulty in obtaining an erection during sexual activity
Difficulty in maintaining an erection until the completion of sexual activity
Decrease in erectile rigidity
The sexual dysfunction is not better accounted for by a nonsexual mental disorder or as a
consequence of relationship or other significant stressors and is not attributable to the effects of a
medication/substance or other medical condition
Disorders of orgasm
Orgasm is the third stage of the sexual cycle when sexual stimulation has been sufficient to enable the
individual's sexual pleasure to peak. In both males and females this involves a rhythmic muscular
contraction of the genitals which results in a release of sexual tension, and in the male is accompanied
by ejaculation of semen. There are three DSM‐5 defined disorders of this stage, and these are female
orgasmic disorder, delayed ejaculation, and early ejaculation.
female orgasmic disorder Marked absence, delay or infrequency of orgasm and markedly
reduced intensity of orgasmic sensations.
early ejaculation The onset of orgasm with minimal sexual stimulation. Treatment for this
disorder is typically sought by men under the age of 30 years.
FOCUS POINT 11.2 CONSTRUCTING FEMALE SEXUAL
DYSFUNCTION
Sexual dysfunction is not diagnosed simply on the basis of problems in sexual desire and
performance but is dependent on the condition also causing marked distress and interpersonal
difficulty. Clearly, many people may only rarely experience sexual desire and very rarely indulge
in sexual activity—but a sizable proportion of those people are quite happy with this state of
affairs and indeed may even advocate and seek sexual abstinence. But what happens if attempts
are made to make such people feel inadequate or in some way ‘abnormal’?
In 1966, a New York gynaecologist called Robert A. Wilson published a best‐selling book called
Feminine Forever, in which he argued that the menopause robbed women of their femininity, their
sexuality and ruined the quality of their lives. He labelled postmenopausal women as ‘castrates’
and described the menopause as a ‘deficiency disease’ that should be treated pharmacologically
with hormone replacement therapy. The book and its ensuing publicity had two effects—it
made some postmenopausal women begin to believe they were inadequate and had a disorder
that needed treatment. It made many others—especially those in the feminist movement that
was developing at the time—believe that menopausal symptoms as described by Wilson were
not a medical deficiency, but the creation of a sexist society (Houck, 2003).
More recently, other writers have argued that the pharmaceutical industry has also attempted to
manipulate women's beliefs about their sexuality in order to sell their products (Moynihan,
2006). Some drug companies claim that sexual desire problems affect up to 43% of American
women (Moynihan, 2003), and can be successfully treated with, for example, hormone patches.
However, others claim that this figure is highly improbable and includes women who are quite
happy with their reduced level of sexual interest (Bancroft, Loftus, & Long, 2003).
Tiefer (2006) lists a number of processes that have been used either wittingly or unwittingly in
the past to ‘medicalise’ what many see as normal sexual functioning. These include:
1. Taking a normal function and implying that there is something wrong with it and it should
be treated.
2. Imputing suffering that is not necessarily there.
3. Defining as large a proportion of the population as possible as suffering from the disease.
4. Defining a condition as a ‘deficiency’, disease, or disease of hormonal imbalance.
5. Taking a common symptom that could mean anything and making it sound as if it is a
sign of a serious disease.
While sexual dysfunctions are sometimes caused by medical conditions (see Section 11.2.2), lack
of sexual desire and interest is itself often portrayed as a medical condition in need of
treatment. Yet a reduction in sexual interest and desire can be a healthy and adaptive response
to normal changes in body chemistry or as a normal reaction to adverse life stressors or
relationship changes.
Delayed ejaculation
This is a persistent or recurrent delay in, or absence of, ejaculation following a normal sexual
excitement phase that causes the individual marked distress and interpersonal difficulty. This is the least
common of the male sexual complaints, and less than 1% of men will complain of problems in
reaching ejaculation over a period of 6 months or longer (DSM‐5, American Psychiatric Association,
2013, p. 425). The clinician must make judgements about whether ejaculation is problematically
delayed by taking into account the client's age and the degree of sexual stimulation. The problems can
be caused by physical factors such as low testosterone levels (Stahl, 2001), alcohol consumption, and
prescription drugs such as antidepressants and anxiolytic drugs, all of which can affect the response of
the sympathetic nervous system (Seagraves, 1995; Altman, 2001) (Table 11.6).
TABLE 11.6 Summary: DSM‐5 diagnostic criteria for delayed ejaculation
Delay, infrequency or absence of ejaculation in at least 75% of partnered sexual activity for a
period of at least 6 months causing significant distress to the patient
The sexual dysfunction is not better accounted for by a nonsexual mental disorder or as a
consequence of relationship or other significant stressors and is not attributable to the effects of a
medication/substance or other medical condition
Early ejaculation
This is the persistent or recurrent onset of orgasm and ejaculation with minimal sexual stimulation
within 1 minute of penetration, and before the person wishes it to happen. Early or premature
ejaculation is not unusual when aspects of the sexual activity are novel (e.g., the person is indulging in
novel sex acts, with new partners, or has sex only rarely), and this must be taken into account by the
clinician when making a diagnosis. Most young males learn to delay orgasm with continued sexual
experience and age, but some continue to have premature ejaculation problems and so may seek
treatment for the disorder, but these are typically men under the age of 30 years (Bancroft, 1989). In
men between the ages of 18 and 70 years, 20–30% express concern about premature ejaculation, but in
practice only around 1% of men would meet the DSM‐5 diagnostic criteria for early ejaculation (DSM‐
5, American Psychiatric Association, 2013, p. 444). Early ejaculation has been linked to infrequent
climactic sex (Spiess, Geer, & O'Donohue, 1984), overresponsiveness to tactile stimulation (Rowland,
Cooper, & Slob, 1996), sexual performance anxiety (Dunn, Croft, & Hackett, 1999; McMahon, Jannini,
Serefoglu, & Hellstrom, 2016), and, in some cases, with physical or biological causes (Metz, Pryor,
Nesvacil, Abuzzhab, & Koznar, 1997) (Table 11.7).
TABLE 11.7 Summary: DSM‐5 diagnostic criteria for premature (early) ejaculation
Continual or recurring pattern of ejaculation occurring in at least 75% of partnered sexual activity
within approximately 1 minute of vaginal penetration and before the patient desires it, for a period
of at least 6 months, causing significant distress to the patient
The sexual dysfunction is not better accounted for by a nonsexual mental disorder or as a
consequence of relationship or other significant stressors and is not attributable to the effects of a
medication/substance or other medical condition
Summary: DSM‐5 diagnostic criteria for genito‐pelvic pain/penetration disorder
Persistent or recurring difficulties with at least one of the following over a period of at least 6
months:
Sexual vaginal penetration
Vulvovaginal or pelvic pain during vaginal penetration
Distress about vulvovaginal or pelvic pain during or in anticipation of vaginal penetration
Tensing or tightening of the pelvic floor muscles during vaginal penetration
The sexual dysfunction is not better accounted for by a nonsexual mental disorder or as a
consequence of relationship or other significant stressors and is not attributable to the effects of a
medication/substance or other medical condition
The disgust emotion has evolved as a means of protecting the individual from disease and
contamination, and research suggests that high levels of disgust sensitivity may play a significant
role in a number of sexual dysfunctions. For instance, the evolutionary purpose of the disgust
emotion is to prevent pathogens entering the body, and the mouth and vagina represent areas
of the body that are likely to become the focus of disgust reactions as the individual attempts to
ensure they do not allow pathogens or disease‐relevant substances into their body (Rozin,
Nemeroff, Horowitz, Gordon, & Voet, 1995). In addition, the stimuli usually involved in sexual
encounters are highly disgust relevant, and these include saliva, sweat, semen, and body odours.
As a result, if an individual has developed a strong sensitivity to disgust‐relevant stimuli, this
may well interfere with normal sexual behaviour.
It is only recently that disgust has been investigated in the female sexual dysfunctions of
vaginismus and dyspareunia. Vaginismus is the persistent and involuntary contraction of
the muscles of the vagina when vaginal penetration occurs, and dyspareunia is female genital
pain experienced during or after coitus. Studies by de Jong and colleagues have found that
women suffering vaginismus displayed a general enhanced dispositional disgust propensity (de
Jong, van Overveld, Schultz, Peters, & Buwalda, 2009), and women diagnosed with both
vaginismus and dyspareunia exhibited enhanced automatic sex‐disgust associations in an
implicit association task (IAT) (Borg, de Jong, & Schultz, 2010). Based on this growing body of
research, de Jong, van Lankveld, Elgersma, and Borg (2010) have argued that enhanced disgust
sensitivity in relation to sexual activity may be an important part of the mechanism in
disruption of sexual arousal, and this conclusion makes intuitive sense given that sexual
activities emerge as a primary disgust elicitor in factor analysis and cluster analysis studies of
disgust (Marzillier & Davey, 2004). Thus, if sexual activities are a natural disgust elicitor for
many people, then heightened disgust propensity and sensitivity will enhance anxiety by
priming avoidance goals to sexually related stimuli and activities.
Also, it may be the case that sexual arousal itself directly interacts with disgust, and in both men
and women, disgust sensitivity is reduced as sexual arousal increases (Oaten, Stevenson, Tapp,
Case, & Cousins, 2019; Borg & de Jong, 2012). If there is a bidirectional relationship between
disgust and sexual arousal, then we would also expect sexual arousal to be affected by changes
in disgust sensitivity. Given what appears to be this important relationship between disgust and
sexual behaviour, interventions that directly attempt to reduce disgust sensitivity may help to
alleviate sexual dysfunctions such as vaginismus and dyspareunia that are known to be linked to
the disgust emotion.
Psychoanalytic theory
Much of the psychoanalytic approach to understanding behaviour revolves around repressed emotions
and desires, and specifically around repressed sexual desires, so it is not unusual that psychodynamic
theorists have had something to say about the factors underlying sexual dysfunction. Thus,
vaginismus (an involuntary contraction of muscles around the vagina making sexual intercourse
involving penetration painful or impossible) may be seen as a woman unconsciously expressing hostility
towards men, premature ejaculation as men expressing repressed hostility towards women, and female
orgasmic disorder as a function of enduring penis envy. Because sexual activity is usually pleasurable—
yet often frowned upon by society—many psychodynamic views see sexual dysfunction as resulting from
this conflicting state of affairs. Still others view male sexual dysfunctions such as male erectile disorder
as a result of an unresolved Oedipus complex based on a continual sexual attachment of the male to his
mother (Fenichel, 1945). Regardless of the theoretical validity of these analyses, these views gave
psychoanalysis an important therapeutic function prior to the development of therapies based on more
objective and more detailed knowledge of sexual dysfunctions such as those pioneered by Masters and
Johnson in the 1970s.
vaginismus The involuntary contraction of the muscles surrounding the vagina when vaginal
penetration is attempted. Of all women who seek treatment for sexual dysfunctions, around
15–17 per cent are suffering from vaginismus.
performance anxiety The fear of failing to achieve an acceptable level of sexual performance,
causing an individual to become distanced from the sexual act and fail to become aroused.
Sexual dysfunction and interpersonal problems
Sex is usually an interpersonal activity, and interpersonal problems may be a cause of at
least some of the sexual dysfunctions. Many clinicians believe that individuals with sexual dysfunctions
have both sexual and interpersonal problems, and that the latter may be an important cause of the
former (e.g., Rosen & Leiblum, 1995). For example, there is an inverse relationship between a woman's
sexual desire and her level of concern about her partner's affection (Nobre & Pinto‐Gouveia, 2006),
and individuals who are angry with their partners are less likely to desire sexual activity (Beck &
Bozman, 1995). If negative emotion is a central feature of a couple's relationship, then emotions such as
resentment, disgust, anxiety, anger, distrust, and depression are likely to significantly interfere with the
development of positive feelings required in the desire and arousal stages of the sexual cycle. If general
communication is poor within a couple, then this is likely to have an important impact on talk about
intimate activities such as sex (Brotto et al., 2016). Studies have indicated that interpersonal difficulties
are apparent in sexual dysfunctions diagnosed in both men and women, and are associated with early
ejaculation and erectile disorders in men (Patrick et al., 2005; Swindle, Cameron, Lockhart, & Rosen,
2004) and sexual dysfunctions generally in women (Clayton, 2003). Men are also significantly more
likely to seek help for their sexual dysfunction if it is associated with interpersonal difficulties
(Papaharitou et al., 2006). However, we must still be cautious about how to interpret these findings
because they indicate only that there is an association between sexual dysfunction and interpersonal
difficulties and we do not know the direction of any causal relationship.
Apart from general difficulties that may have arisen in a relationship, sexual dysfunction may result from
specific deficiencies in sexual knowledge or sexual expertise in one or both of the couple. For example,
women who suffer female orgasmic disorder often have partners who are awkward or inexperienced
lovers (LoPiccolo, 1997; Kaplan, 1974), and individuals who develop sexual dysfunctions often lack the
knowledge and skills required to fully stimulate their partner or satisfy themselves (LoPiccolo & Hogan,
1979). Sexual problems can also develop if one member of a couple is overanxious about pleasing the
other, giving rise to performance anxiety that may inhibit sexual feelings and responsiveness to sexual
stimuli (Kaplan, 1974).
Finally, untangling the role that interpersonal difficulties may play in causing sexual dysfunction is
problematic. This is because interpersonal difficulties are very often a central outcome of sexual
dysfunction anyway (Brotto et al., 2016). However, therapies for sexual dysfunction that focus on the
relationship between couples are often successful, and this suggests that at least some of the causes of
some sexual dysfunctions lie in the details of individual relationships.
remote causes Factors such as feelings of shame and guilt about sexual activity, general
feelings of inadequacy, and feelings of conflict brought about by long-term life stress.
Biological causes
There is considerable debate about whether sexual dysfunctions are the result of psychological or
organic (biological) causes. In the period following the pioneering work of Masters and Johnson,
researchers began to focus on the importance of psychological factors in the aetiology of sexual
dysfunction. However, nowadays, there is a belief that organic or biological factors may be an
underlying factor in many cases and that these may combine with psychological factors to generate a
chronic disorder.
Biological causes can be classified into three broad categories: (a) dysfunction caused by an underlying
medical condition, (b) dysfunction caused by hormonal abnormalities, and (c) changes in sexual
responsiveness caused by ageing.
A whole range of medical conditions can give rise to sexual desire and performance problems. For
example, male erectile and orgasmic disorders are associated with high blood pressure, diabetes, heart
disease, cigarette smoking, and alcoholism. Dysfunctions are also associated with a variety of
medications such as antidepressant and anxiolytic drugs, and with treatments for hypertension and renal
problems (Berman et al., 1999; Altman, 2001; Wincze & Weisberg, 2015). Medical conditions that
reduce blood flow to the penis (such as blocked arteries or heart disease) will influence the ability to
reach and maintain an erection (Stahl, 2001), and other medical conditions may cause central nervous
system damage that affects sexual performance and desire, and these include diabetes, multiple sclerosis,
and renal problems (Frohman, 2002). Female arousal and orgasm are also affected by medical conditions
in much the same way that these conditions influence erection and ejaculation in men. Female arousal
and orgasmic disorders have been linked to multiple sclerosis and diabetes, and both antidepressant (e.g.,
SSRIs such as Prozac, Kronstein et al., 2015) and anxiolytic medications can affect sexual desire in
women in much the same way that they do in men (Hensley & Nurnberg, 2002). Similarly, sexual pain
disorders may have an organic or medical origin, and these may range from painful allergic reactions to
contraceptive creams, condoms or diaphragms (e.g., in the case of female dyspareunia) to
gynaecological diseases and infections of the vagina, bladder, or uterus (which may cause symptoms of
vaginismus) (Brown & Ceniceros, 2001). Nevertheless, although these forms of organic disorder may be
an underlying cause of sexual desire and performance problems, it is quite likely that they will often
generate associated psychological problems that give rise to a diagnosable sexual dysfunction. For
example, sexual pain or disability caused by disease or medical conditions can give rise to anxiety about
sexual performance or to relationship difficulties.
dyspareunia A genital pain that can occur during, before or after sexual intercourse. Some
clinicians believe this is a pain disorder rather than a sexual dysfunction.
Sexual desire and subsequent arousal and orgasm are dependent on levels of the sex hormones
testosterone, estrogen, and prolactin, and imbalances in these hormones can cause sexual desire
problems in both men and women. In women, either high or low levels of estrogen can cause sexual
desire problems, and estrogen levels can be affected if a woman is taking the birth control pill (which
artificially raises estrogen levels) or is receiving anti‐estrogen therapy for breast cancer (which lowers
them) (Amsterdam, Wheler, Hudis, & Krychman, 2005). High prolactin levels have the
effect of suppressing the hormones responsible for the normal functioning of the ovaries and testes, and
high prolactin levels can therefore lead to menstrual irregularity and/or fertility problems. In men,
erectile dysfunction is usually associated with high levels of prolactin, and erectile problems can be
eased with the use of drugs that lower prolactin levels (Spollen, Wooten, Cargile, & Bartztokis, 2004).
However, in middle‐aged and elderly men low prolactin levels are associated with reduced enjoyment of
orgasmic experiences and lower levels of physical activity generally (Corona et al., 2014).
prolactin A hormone from the pituitary gland stimulating milk production after childbirth.
Finally, one of the important variables that affect sexual functioning is age, and the prevalence of sexual
dysfunction in both males and females increases with age. For example, reports of erectile problems in
men increase significantly after 50 years of age (Laumann, Gagnon, Michael, & Michaels, 1994; Gareri,
Castagna, Francomano, Cerinara, & De Fazio, 2014), and a study of Australian men over the age of 40
years indicated that 34% of those men surveyed reported one or more reproductive health disorder,
including erectile dysfunction (21%), lower urinary tract symptoms (16%), and prostate disease (14%)
(Holden et al., 2005). Such findings may indicate that levels of male hormones generally decrease with
age, or that reproductive health disorders may significantly affect sexual functioning. Sexual desire and
performance also decrease with increased age in women, and the menopause has a significant influence
here. Menopause is associated with decreases in estrogen and testosterone levels that can exacerbate
female sexual dysfunction (Graziottin & Leiblum, 2005). Studies suggest that around one in four women
report a loss of sexual desire after the menopause, and this is associated with fluctuations in levels of
estrogen and testosterone. However, menopause is associated not only with physical changes but also
with psychological changes, and loss of sexual desire in postmenopausal women has been shown to be
associated with physical factors such as lower hormonal levels, vaginal dryness and psychological factors
such as depression and living with children (Gracia et al., 2004).
One final biological factor that has been researched in the aetiology of sexual dysfunctions is genetics,
and recent epidemiology and candidate gene studies have suggested a strong genetic influence on
female sexual functioning, with the hope that successful identification of biomarkers and novel genes
underlying female sexual dysfunction will help to improve diagnosis and treatment (Burri, Cherkas, &
Spector, 2009). Similarly, researchers have identified a genetic variant that increases the risk of erectile
dysfunction. They found that DNA variations near the SIM1 gene were associated with a 26%
increased risk of erectile dysfunction, and the SIM1 gene is part of the leptin‐melanocortin system
which has a role in sexual function (Jorgenson et al., 2018).
Sociocultural causes
The level of sexual dysfunction within a society can change depending on a range of cultural and
economic factors within that society. For example, the stress caused by poverty, financial problems, or
unemployment has been linked to erectile dysfunction in men (Morokoff & Gillilland, 1993) and
this effect has been identified during the recent economic downturn in European countries (Christensen
et al., 2011).
Summary
We can see from this section that theories of sexual dysfunction are quite wide ranging and encompass
both psychological and biological explanations. There is no doubt that many cases of sexual dysfunction
have an organic or biological basis, including dysfunctions caused by medical conditions, hormone
imbalances, and age-related changes in biology. However, since the pioneering work
of Masters and Johnson, psychological factors have also been identified in the aetiology of sexual
dysfunction, and these include performance anxiety, underlying interpersonal problems, existing
psychopathology such as depression and anxiety, and a variety of life experiences, such as childhood
abuse, psychosexual trauma, and exposure to religious and social taboos.
stop-start technique A technique used to help clients with premature ejaculation where the
client’s partner stimulates the penis until close to ejaculation, at which point the partner is
signalled to stop by the client.
squeeze technique A technique used to help clients with premature ejaculation where the
client’s partner firmly squeezes below the head of the penis just prior to ejaculation.
Current forms of sex therapy involve a number of components, and these different components
are designed to identify specific sexual problems, to address these specific problems with direct
treatment, to deal with associated psychological and relationship issues, and to provide clients
with sexual knowledge and sexual skills. Sex therapy usually treats the couple rather than the
individual who manifests the dysfunction, and couples are urged to share the responsibility for
the sexual problem. The following are some of these separate components which form the
important core stages of sex therapy.
ASSESSMENT
Through interview, the therapist will collect information about specific sexual problems (e.g.,
lack of desire by one partner or erectile problems in a male partner) and discuss current life
issues and past life events that may be contributing to the problem. This stage will usually be
accompanied by a medical examination to determine whether there are organic factors
contributing to the problems.
DEALING WITH ORGANIC DYSFUNCTION
If there are clearly organic or medical factors contributing to the dysfunction (such as low
hormone levels, medical conditions such as diabetes or high blood pressure, or other
medications, such as antidepressants or anxiolytics), then these may be addressed early in the
programme (e.g., by reducing levels of antidepressant drugs).
SEXUAL SKILLS TRAINING
Many types of sexual problem arise through lack of knowledge about the physiology of sex and
a lack of basic technique during lovemaking. The therapist can address these factors by
providing the clients with educational materials such as booklets and videos.
CHANGING DYSFUNCTIONAL BELIEFS
Sex is associated with a whole range of myths and false beliefs (e.g., ‘too much masturbation is
bad for you’, ‘nice women aren't aroused by erotic books or films’, etc.) (Bach, Wincze, &
Barlow, 2001), and if a client holds these false beliefs they may be preventing full sexual arousal
and satisfaction. Using a range of methods, such as those used in CBT, the therapist will
attempt to identify any dysfunctional beliefs, challenge them, and replace them with more
functional beliefs.
DIRECT INTERVENTION AND BEHAVIOURAL TRAINING
Depending on the specific sexual dysfunction that has been referred for treatment, the therapist
will advise the clients on the use of a range of behavioural techniques designed to help their
specific problem. These techniques, discussed more fully in Section 11.2.3, include the ‘tease
technique’ for erectile dysfunction, ‘stop‐start technique’, and ‘squeeze technique’ for premature
ejaculation (Semans, 1956; LoPiccolo, 1997), and directed masturbation training for arousal
and orgasmic disorders (Heiman, 2002). Therapists may also teach clients a technique known as
nondemand pleasuring, which involves a couple exploring and caressing each other's body
to discover sexual pleasure rather than achieving orgasm. This allows couples to learn how to
give and receive sexual pleasure without the pressure of needing to achieve orgasm.
DEALING WITH RELATIONSHIP AND LIFESTYLE ISSUES
Sexual dysfunction is often related to conflict within the relationship and to stressful lifestyles
(e.g., one partner may be dominating and controlling or the demands of factors such as family
and work may be causing unnecessary stress). The therapist will usually attempt to identify any
factors that may be contributing to the disorder and advise clients on how to improve these.
A direct treatment method designed to deal with symptoms of erectile disorder and female orgasmic
disorder is the tease technique. This involves the partner caressing the client's genitals, but stopping
when the client becomes aroused (e.g., achieves an erection) or approaches orgasm. This enables couples
to experience sexual pleasure without the need to achieve orgasm and as a result may reduce any
performance anxiety that may have been contributing to erectile problems or arousal and orgasmic
problems (LoPiccolo, 1997). For individuals with arousal or orgasmic problems, directed
masturbation training is often helpful (Heiman, 2002; Faubion & Rullo, 2015). With the use of
educational material, videos, diagrams, and—in some cases—erotic materials, a woman can be taught
step by step to achieve orgasm—even in cases where she has never previously experienced an orgasm.
This method has been shown to be highly effective, and over 90% of women treated with this method
learn how to achieve orgasm during masturbation (Heiman & LoPiccolo, 1988).
tease technique A direct treatment method designed to deal with symptoms of erectile
dysfunction or male and female orgasmic disorder. It involves the partner caressing the client’s
genitals, but stopping when the client becomes aroused (e.g. achieves an erection) or approaches
orgasm.
Couples therapy
As we have mentioned several times in this chapter, sexual dysfunction may be closely associated with
relationship problems. If sexual dysfunctions are a manifestation of broader problems within a
relationship, then the latter need to be effectively addressed. For example, a lack of sexual desire in one
partner may be a way that the partner can exert some control within the relationship—especially if
there are conflicts over power and control. In such cases, underlying sexual dysfunction may entail some
implicit reward for both partners, one partner gaining reward from their ability to control sex, and the
other gaining reward by viewing their partner's lack of desire as a weakness which enables them to see
themselves as controlling. A therapist will explore these issues with a couple and try to identify if there
are any implicit payoffs within the relationship for maintaining the sexual dysfunction.
sexual skills and communication training A treatment method in which a therapist can
help clients to acquire a more knowledgeable perspective on sexual activity, communicate to
partners effectively about sex, and reduce any anxiety about indulging in sexual activity.
couples therapy A treatment intervention for sexual dysfunction that involves both partners
in the relationship.
Biological treatments
Many cases of sexual dysfunction may have biological or organic causes such as medical conditions,
hormone imbalances, changes in biology with age, or are a reaction to other medications being taken by
the client. This indicates that a biological or medical treatment may be appropriate for the disorder.
Biological treatments fall into three broad categories: (a) drug treatments, including medications that
directly influence the organic nature of the disorder; (b) hormone treatments designed to correct any
hormonal imbalances caused by age or illness; and (c) mechanical devices, designed to aid mechanical
functioning during sex (such as achieving erection).
Drug treatments
Perhaps the most well‐known drug treatments for sexual dysfunction are Viagra (sildenafil citrate)
and Cialis (tadalafil), both phosphodiesterase type 5 (PDE‐5) inhibitors which are used primarily to
treat erectile dysfunction in men. Viagra acts directly on the tissue of the penis itself. It causes relaxation
of the smooth muscle of the penis that increases blood flow and encourages erection. Studies suggest
that 75% of men taking Viagra can achieve erection within 60 minutes of administration (Goldstein et al.,
1998), and in clinical trials Viagra results in significantly more erections and successful intercourse
attempts than a placebo control (Moore, Edwards, & McQuay, 2002) (see Figure 11.1). Viagra has also
proved to be an effective treatment for male erectile disorder, with over 95% of clients treated with
Viagra over a 1–3 year period expressing satisfaction with their erections and their ability to effectively
engage in sex (Carson, Burnett, Levine, & Nehra, 2002). In addition, Viagra has been considered to be
an effective treatment for male erectile disorder in cases where this is due to a medical condition (such as
diabetes or cardiovascular disorder) or as a result of ageing (Salonia, Rigatti, & Montorsi, 2003).
However, Viagra may not be the treatment of choice for many clients because it also has a number of
side effects, such as headaches, dizziness, and facial flushing, and may interact badly with some
medications for cardiovascular disease (Bach, Wincze, & Barlow, 2001).
Viagra (sildenafil citrate) A drug treatment for sexual dysfunction which is used primarily
to treat erectile dysfunction in men.
Cialis (tadalafil) A drug treatment, used primarily to treat erectile dysfunction in men.
FIGURE 11.1 Mean number of erections per week (blue) and erections resulting in successful intercourse (red) with
placebo and different doses of Viagra (sildenafil citrate).
From Moore, Edwards, & McQuay (2002).
Other drugs that have proved useful in treating sexual dysfunctions include yohimbine for erectile
dysfunctions, which facilitates norepinephrine release in the brain. This appears to have the effect of
correcting any brain neurotransmitter problems that are causing the erectile dysfunction (Mann et al.,
1996). Interestingly, both Viagra and yohimbine have also been shown to be effective in treating female
sexual desire problems (Hernandez‐Serrano, 2001).
yohimbine A drug treatment for sexual dysfunction which is used primarily to treat erectile
dysfunction in men by facilitating norepinephrine release in the brain.
Finally, antidepressant SSRIs such as Prozac are also an effective treatment for early ejaculation, and
delayed orgasm is a known side‐effect of SSRIs in depressed individuals who are taking these
medications (Assalian & Margolese, 1996).
Hormone treatments
At least some sexual dysfunctions may result from imbalances in hormone levels, and disorders can
result from either high or low levels of estrogen in women, low levels of testosterone in men, and high
levels of prolactin in men. These hormonal imbalances can be caused by medical conditions or ageing.
Hormone replacement therapy can be used to treat disorders of sexual desire—especially in older
women or women who have undergone hysterectomy, and sexual pain disorders such as genito‐pelvic
pain/penetration disorder can also be helped with estrogen treatment, which can improve vaginal
lubrication in postmenopausal women (Walling, Anderson, & Johnson, 1990).
Mechanical devices
Because an erect penis is such an important contributor to successful sexual penetration, a number of
mechanical devices have been developed that can help the male with an erectile dysfunction achieve
erection.
The first of these is known as a penile prosthesis, and use of these devices is normally reserved for
nonreversible organic‐based erectile problems. The prosthesis consists of a fluid pump located in the
scrotum and a semirigid rod that is surgically inserted in the penis. A discrete squeeze of the pump
releases fluid into the rod that causes the penis to erect. Studies suggest that the penile prosthesis is a safe
and effective means of dealing with erectile dysfunction caused by organic or medical conditions, and
over a period of 7 years since the implant, the penile prosthesis was still successfully dealing with erectile
dysfunction in 82% of patients (Zermann, Kutzenburger, Sauerwein, Schubert, & Loeffler, 2006) (Case
History 11.1).
An alternative to the penile prosthesis is a vacuum erection device (VED). This is a hollow cylinder
that is placed over the penis. The client then draws air out of the cylinder using a hand pump, and this has
the effect of drawing blood into the penis and causing an erection. As cumbersome as this may seem,
many clients prefer the VED to other more conventional treatments for erectile dysfunction such as
Viagra. Of those given a choice between equally effective VED and Viagra treatments, 33% preferred
the VED—largely because they disliked the adverse side effects of Viagra (Chen, Mabjeesh, &
Greenstein, 2001).
vacuum erection device (VED) A hollow cylinder placed over the penis from which air is
drawn using a hand pump, drawing blood into the penis and causing an erection.
Mechanical devices to aid penile erection may seem outdated now that effective drugs have been
developed that reliably aid erection during sexual intercourse. However, while around 70% of men
respond well to drugs such as Viagra, this rate is significantly lower in men with diabetes or those who
have been treated for prostate cancer, and at least some will find mechanical devices preferable. Even though a
‘wonder drug’ such as Viagra may seem like it has solved the problem for most men with erectile
difficulties, new technology is already progressing the range of treatments. These new technologies
include external penis supports, penile vibrators, low-intensity extracorporeal shockwave therapy (using
ultrasound to promote the formation of new blood vessels), tissue engineering, nanotechnology, and
endovascular technology (Stein, Lin, & Wang, 2014).
CASE HISTORY 11.1 ERECTILE DYSFUNCTION
R.K., 47, a senior corporate executive, had been happily married for 20 years and had three
children but complained of declining erections. Over the preceding 6 months, his erections had
become so weak that he could not penetrate. He stopped trying 3 months ago.
He thought that this was due to his highly stressful lifestyle and pressures at the workplace. He
even took a vacation with his wife hoping that this would improve matters. It only made them
worse. His wife, at first very co‐operative, eventually began to feel rejected and there was a
palpable friction in their marriage.
When first seen at the clinic, R.K. was defensive. ‘How can this happen to someone like me? I
could do it all night, several times a night, night after night. My family doctor says that this kind
of thing is quite common these days and it's probably the stress’.
It turned out that R.K. was a diabetic of 8 years' standing. He also had high blood pressure for
which he was on beta blockers. He was obese (209 lbs, 175 cm) and smoked 40 cigarettes a
day. He partied 7 days a week and drank quite heavily. He had never exercised in his life. Sadly,
his family doctor had never connected any of these to his sexual problem.
Tests revealed that his overall rigidity levels were well below normal and that he had problems
both with his arteries and his veins. He was eventually cured with an inflatable penile prosthesis.
Adapted from www.testosterones.com
Clinical Commentary
R.K. was quick to link his erectile problems with a stressful lifestyle, and his defensive reaction is typical
of a man who values his own sexual performance as an indicator of his own worth. However, once
R.K.'s medical history was investigated, it became clear that there were a variety of organic and lifestyle
factors that were probably contributing to his erectile dysfunction, including a history of diabetes, high
blood pressure, medications that can interfere with sexual arousal, heavy smoking, and heavy drinking.
Because many of the important causes were organic (e.g., diabetes and cardiovascular problems), the
best long‐term solution in this case was to implant a mechanical device such as a penile prosthesis to aid
erection.
SECTION SUMMARY
11.2 SEXUAL DYSFUNCTIONS
The four stages of the sexual response cycle are desire, arousal, orgasm, and resolution,
and sexual dysfunctions can be diagnosed in any of these individual stages.
11.2.1 Diagnosis of Sexual Dysfunctions
Sexual dysfunctions are a heterogeneous group of disorders characterised clinically by the
individual’s inability to respond sexually or to experience sexual pleasure.
Male hypoactive sexual desire disorder is characterised by a persistent and recurrent deficiency
or absence of desire for sexual activity.
Erectile disorder is the inability to maintain an adequate erection during sexual activity.
Approximately 13–21% of men aged 40‐80 years experience occasional problems with
erections, but only 2% of men younger than 40 years do.
Female sexual interest/arousal disorder is characterised by significantly reduced interest or lack
of interest in sexual activity, erotic thoughts, and reduced sexual sensations during sexual
activity.
Female orgasmic disorder is characterised by a delay or absence of orgasm during sexual
activity, and around 10% of adult women may never have experienced an orgasm.
Delayed ejaculation is a persistent or recurrent delay in ejaculation following a normal sexual
excitement phase.
Early ejaculation is the onset of orgasm with minimal sexual stimulation. Treatment for this
disorder is typically sought by men under the age of 30 years.
Genito‐pelvic pain/penetration disorder refers to four commonly occurring symptoms, namely
difficulty having intercourse, genito‐pelvic pain, fear of pain or vaginal penetration, and
tension of the pelvic floor muscles.
11.2.2 The Aetiology of Sexual Dysfunctions
Sexual dysfunction is more prevalent in women than in men (43% and 31% respectively)
and is more likely amongst those experiencing poor physical and emotional health.
Psychoanalytic theory attempts to account for sexual dysfunctions in terms of repressed
sexual desires or hostility to the opposite sex.
Masters and Johnson developed a two‐factor model of sexual dysfunction where (a) early
sexual experiences give rise to anxiety during sex, and (b) this anxiety leads the individual
to adopt a spectator role during sexual activity which directs attention away from stimuli
providing sexual arousal.
Interpersonal difficulties may be both a cause and an outcome of sexual dysfunctions.
Anxiety and depression are closely associated with sexual dysfunctions, and these negative
emotions may interfere with sexual performance.
The causes of sexual dysfunctions can sometimes be defined in terms of immediate causes
(e.g., lack of sexual knowledge) and remote causes (e.g., feelings of shame and guilt that are a
result of a specific upbringing).
Biological causes of sexual dysfunctions can be classified as (a) dysfunctions caused by an
underlying medical disorder, (b) dysfunction caused by hormonal abnormalities, and (c)
changes in sexual responsiveness with age.
11.2.3 The Treatment of Sexual Dysfunctions
Direct treatments attempt to deal with the specific symptoms of the disorder (e.g., the squeeze
technique for premature ejaculation).
The ‘stop‐start’ technique, squeeze technique, and the tease technique are all specific behavioural
treatments designed to treat premature ejaculation and orgasmic disorders.
If sexual dysfunctions are a manifestation of broader problems then couples therapy can be
adopted.
Biological treatments can be categorised into (a) drug treatments, including medications
that directly influence the organic nature of the disorder; (b) hormone treatment designed
to correct hormone imbalances caused by age or sickness; and (c) mechanical devices,
designed to aid mechanical functioning during sex.
75% of men who take Viagra can achieve erection within 60 minutes of administration,
and over 95% of clients treated with Viagra over a 1–3 year period express satisfaction
with their ability to effectively engage in sex.
Mechanical devices to aid penile erection and penetration include the penile prosthesis and
the VED.
Fetishistic disorder
A diagnosis of fetishistic disorder is given when a person experiences recurrent, intense sexually
arousing fantasies and urges involving nonanimate objects, and this causes them personal distress or
affects social and occupational functioning. Often fetishes are restricted to articles associated with sex,
such as women's clothing or undergarments (bras, stockings, shoes, boots, etc.) or to body parts such as
feet, toes, or hair. The individual with fetishistic disorder may experience strong desires to obtain or
touch these items (e.g., by stealing them from washing lines), may ask a sexual partner to wear them
during sex, or may masturbate while holding, rubbing or smelling these articles. A fetish will usually
have developed by adolescence and may have developed as a result of specific experiences during
childhood or early adolescence. Some individuals exhibit a phenomenon known as partialism, which
is fascination with an individual object or body part to the point where normal sexual activity no longer
occurs. Note that fetishistic disorder is not diagnosed if the object concerned is for the purpose of tactile
genital stimulation (such as a vibrator). However, Focus Point 11.4 provides some case reports from the
British Medical Journal of penile injuries and illustrates the lengths to which some individuals will go to
gain sexual excitement. While the injuries incurred were obviously not amusing for the victims, the
reader may be amused by the reasons given for these injuries! (Focus Point 11.4) (Table 11.10).
fetishistic disorder Recurrent, intense sexually arousing fantasies and urges involving
nonanimate objects that cause the individual personal distress or affect social and occupational
functioning.
Transvestic disorder
A diagnosis of transvestic disorder is given when a heterosexual male experiences recurrent, intense
sexual arousal from cross‐dressing in women's attire, and this causes significant distress or impairment
in social or occupational functioning. A Swedish study has indicated that 2.8% of men and 0.4% of
women report at least one episode of transvestic behaviour during their life, and risk factors for this
disorder include same‐sex sexual experiences, being easily sexually aroused, pornography use, and
relatively high masturbation frequency (Langstrom & Zucker, 2005). In this particular disorder, sexual
excitement is achieved primarily because female clothes are a symbol of the individual's femininity
rather than because the garments trigger sexual arousal per se (as would be the case with a simple
fetish). The person with transvestic disorder will often keep a collection of women's clothes, and sexual
arousal is normally caused by the man having thoughts or images of himself as a female. Case History
11.2 is a typical example of transvestic behaviour. Like Chris, most individuals diagnosed with
transvestic disorder are relatively happily married men who are worried about what others (including their
wives) may think of their behaviour. As a result, over half of those who admit cross‐dressing seek
counselling at some stage because of its effects on their intimate relationships (Doctor & Prince, 1997).
Most men with transvestic disorder have been cross‐dressing for many years, usually since childhood or
early adolescence (Doctor & Fleming, 2001), and many women are happy to tolerate their husbands'
cross‐dressing or even incorporate it into their own sexual activities (Case History 11.2) (Table 11.11).
Because of its close association with the gender dysphoria diagnosis in DSM‐5, and the controversy
surrounding that issue (see Focus Point 11.1), there is also considerable discussion about whether
transvestic fetishism should be conceived of as a prima facie mental health problem (Gijs & Carroll, 2011).
transvestic disorder When a heterosexual male experiences recurrent, intense sexual arousal
from cross-dressing in women’s attire, and this causes significant distress or impairment in social
or occupational functioning.
FOCUS POINT 11.4 PENILE INJURIES RESULTING FROM A
VACUUM CLEANER
The following are four cases of penile injury incurred when using a vacuum cleaner in search
of sexual excitement. At least two of these injuries were caused by a ‘Hoover Dustette’ which
has fan blades only 15 cm from the inlet.
Case 1—A 60‐year‐old man said that he was changing the plug of his Hoover Dustette vacuum
cleaner in the nude when his wife was out shopping. It ‘turned itself on’ and caught his penis,
causing tears around the external meatus and deeply lacerating the side of the glans.
Case 2—A 65‐year‐old railway signalman was in his signal box when he bent down to pick up
his tools and ‘caught his penis in a Hoover Dustette which happened to be switched on’. He
suffered extensive lacerations to the glans, which were repaired with cat gut with a good result.
Case 3—A 49‐year‐old man was vacuuming his friend's staircase in a loose‐fitting dressing
gown when, intending to switch the machine off, he leaned across to reach the plug: ‘At that
moment his dressing gown became undone and his penis was sucked into the vacuum cleaner’.
He suffered multiple lacerations to the foreskin as well as lacerations to the distal part of the
shaft of the penis.
Case 4—This patient was aged 68, and no history is available except that the injury was caused
by a vacuum cleaner. The injury extended through the corpora cavernosa and the corpus
spongiosum and caused complete division of the urethra proximal to the corona.
From Citron & Wade, 1980
TABLE 11.10 Summary: DSM‐5 diagnostic criteria for fetishistic disorder
Over a period of 6 months, recurring and strong sexual arousal from the use of nonliving objects
or a highly specific focus on nongenital body parts as part of fantasies, urges, or behaviours,
causing significant distress or impairment in social, occupational, or other areas of life
The fetish is not limited to the clothing used in cross‐dressing or to objects such as vibrators and
other genital stimulators
TABLE 11.11 Summary: DSM‐5 diagnostic criteria for transvestic disorder
Continuing and powerful sexual arousal from cross‐dressing as part of fantasies, urges, or
behaviours, over a period of at least 6 months, causing significant distress or impairment in social,
occupational, or other areas of life
Exhibitionistic disorder
This paraphilic disorder involves sexual fantasies about exposing the genitals to a stranger. These
fantasies are usually strong and recurrent to the point where the individual feels a compulsion to expose
himself or herself, and this compulsion often makes the individual oblivious to the social and legal
consequences of what they are doing (Stevenson & Jones, 1972). The onset of exhibitionistic
disorder usually occurs before 18 years of age, and is often found in individuals who are immature in
their relationships with the opposite sex, and many have problems with interpersonal relationships
generally (Mohr, Turner, & Jerry, 1964). The sufferer's urge to expose himself/herself will often lead
them to find a victim in a public place, often a park or a side street, where they expose themselves—
usually to a single victim. The victim's response of shock, fear, or revulsion often forms part of the
gratification that reinforces this behaviour, and the exhibitionist may sometimes masturbate while
exposing himself/herself (especially if he finds the victim's reaction to his behaviour sexually arousing)
or may return home to masturbate while fantasising about the encounter. Exhibitionists will usually
expose themselves to women or children, and while no physical harm is usually involved, the experience
for the victim is often traumatic and may have lasting psychological consequences. A significant
percentage of individuals with a diagnosis of exhibitionistic disorder are nondisclosing, deny any urges
or fantasies related to exposing themselves, and report that incidents of exposure were either accidental
or nonsexual. Men are significantly more likely to be diagnosed with exhibitionistic disorder than
women, and the prevalence rate of exhibitionist acts in men is estimated to be between 2% and 4% (DSM‐5,
American Psychiatric Association, 2013, p. 690) (Table 11.12).
exhibitionistic disorder Involves sexual fantasies about exposing the genitals to a stranger.
CASE HISTORY 11.2 TRANSVESTIC DISORDER
‘I am sure I am not the first cross‐dresser to feel this way but when I get the chance to dress at first I am excited, I
can hardly wait to put on the stockings, skirt and get all dolled up and once I am fully dressed I feel so good!
Almost like this is the way I am supposed to be but that feeling does not last……Sometimes it will last an hour
to hours but I have actually had it diminish within 15 mins before I feel guilty and then undress and go back to
my guy clothes. For some reason a light goes on in my head that tells me, I am a guy…. why am I wearing a
skirt?!? and then I quickly undress. But then when I am dressed as a guy I will admire women in skirts and
dresses and wish that could be me and all I want to do is rush home and dress (when my wife is not home). I
would love to tell my wife about my CD'ing but I need to come to terms with it first’.
Chris's Story
TABLE 11.12 Summary: DSM‐5 diagnostic criteria for exhibitionistic disorder
Continuing and powerful sexual arousal from exposing one's genitals to an unsuspecting audience,
over a period of at least 6 months, as part of fantasies, urges, or behaviours
The patient has acted on these urges with a nonconsenting person or the urges cause significant
distress or impairment in social, occupational, or other areas of life
Voyeuristic disorder
A diagnosis of voyeuristic disorder is given when an individual experiences recurrent, intense
sexually arousing fantasies or urges involving the act of observing an unsuspecting person who is naked,
in the process of undressing, or engaging in a sexual activity. Sexual arousal normally comes from the
act of looking (‘peeping’), and the individual may masturbate while in the act of observing others.
However, the individual rarely seeks sexual activity with those being observed. Voyeurism usually begins
in early adolescence, and may often constitute that individual's sole sexual activity in adulthood (Kaplan
& Kreuger, 1997). The risk of being discovered while indulging in voyeuristic behaviours may also add
to the excitement that this behaviour engenders. However, we must be clear that voyeurism can be a
perfectly acceptable sexual activity when practiced between consenting individuals, but is clearly
problematic when the voyeur begins seeking non‐consenting victims and violates their privacy.
Voyeuristic acts are the most common of the potentially illegal paraphilic behaviours, with estimates of
the possible lifetime prevalence of voyeuristic acts being as high as 12% in males and 4% in females
(DSM‐5, American Psychiatric Association, 2013, p. 688). (Table 11.13).
Frotteuristic disorder
This involves intense, recurrent sexual urges to touch and rub up against nonconsenting people—
usually in crowded places such as underground trains, buses, cinema or supermarket queues, etc. This is
usually a male activity, and manifests as a sexual urge to rub the genitalia against the victim's thighs and
buttocks or to fondle the victim's genitalia or breasts with his hands. This behaviour is usually
undertaken in a surreptitious way in order to try to make it appear unintentional or as if someone else
in the crowded environment is the culprit. Like exhibitionism and voyeurism, this activity usually begins
in adolescence, but may subside in frequency by the time the individual is in their late 20s. Frotteurism is
considered by many to be a form of sexual assault, and at least part of the excitement for frotteurs is the
feeling of power it gives them over their victim—a feeling that is relatively common in those who
indulge in sexual assault generally. Around 10‐14% of adult males seen in outpatient settings for
paraphilic disorders meet the criteria for frotteuristic disorder (DSM‐5, American Psychiatric
Association, 2013, p. 693) (Table 11.14).
TABLE 11.13 Summary: DSM‐5 diagnostic criteria for voyeuristic disorder
Continuing and powerful sexual arousal from the observance of an unsuspecting person who is
naked, undressing, or engaging in sexual activity, over a period of at least 6 months, as part of
fantasies, urges, or behaviours
The patient has acted on these urges with a nonconsenting person or the urges cause significant
distress or impairment in social, occupational, or other areas of life
The individual experiencing the arousal is at least 18 years of age
Paedophilic disorder
Paedophilic disorder is defined as sexual attraction towards prepubescent children, normally aged
13 years or younger. To be diagnosed with paedophilic disorder, the individual must be at least 16 years of
age and at least 5 years older than the victim. Recent studies suggest that up to 9% of men have
reported at least one sexual fantasy involving a child, and so such fantasies are not that uncommon
in the general population (Seto, 2009). However, DSM‐5 does highlight the fact that paedophilic
disorder is only diagnosed if the individual acts on these fantasies or is distressed by them.
The extensive use of pornography depicting children is usually an indicator of paedophilic disorder,
and the condition often becomes apparent around puberty as an emerging sexual interest in children. Those
who report paedophilic sexual urges usually report a preference for males or females, or sometimes for both.
Those attracted to females usually prefer 8‐ to 10‐year‐olds, whereas those attracted to males usually
prefer older children (DSM‐IV‐TR, p. 571). Paedophilia generally appears to be a lifelong condition,
and many with a sexual interest in children will deny that attraction despite making multiple
sexual approaches to children. Paedophilic disorder, however, is defined by other elements that may
change over time such as guilt, shame, or feelings of isolation. The highest likely prevalence for
paedophilic disorder in males is around 3‐5% (DSM‐5, American Psychiatric Association, 2013, p. 698;
Tenbergen et al., 2015).
TABLE 11.14 Summary: DSM‐5 diagnostic criteria for frotteuristic disorder
Continuing and powerful sexual arousal from touching or rubbing against a nonconsenting
person, over a period of at least 6 months, as part of fantasies, urges, or behaviours
The patient has acted on these urges with a nonconsenting person or the urges cause significant
distress or impairment in social, occupational, or other areas of life
The central feature of the psychopathology is sexual attraction to children, but this is not equivalent to
‘child sexual abuse’, ‘incest’, or ‘child molestation’ because the latter represent criminal acts. It is
important to make this distinction because not all who sexually abuse children are diagnosable with
paedophilic disorder—for example, many who sexually abuse children may opportunistically select
children simply because they are available, and such people do not necessarily have specific fantasies
about having sex with children (Fagan, Wise, Schmidt, & Berlin, 2002). Girls are 3 times more likely
than boys to be sexually abused, and children from low‐income families are 18 times more likely to be
sexually abused (Sedlak & Broadhurst, 1996a,b). The paedophile's sexual activity with children is usually
limited to acts such as undressing the child, exposing themselves, masturbating in the presence of the
child, or gently touching or fondling the child and their genitalia. However, in more severe cases, this
activity can extend to performing oral sex acts with the child, or penetrating the child's vagina, mouth
or anus with fingers, foreign objects, or their penis. In general, paedophiles rarely believe that what they
are doing is wrong and avidly deny their sexual attraction to children. They will also often use
egocentric forms of rationalisation to justify their acts (e.g., the acts had ‘educational value’ or that the
child was consenting or gained pleasure from the activity). Because of this, they often fail to experience
distress or remorse, and so experiencing distress or psychological impairment is not a necessary part of
the diagnostic criteria for paedophilic disorder (Table 11.15).
There are a number of unofficial subtypes of paedophilia. First, some paedophiles limit their activities
to their immediate family (e.g., children, stepchildren, nieces and nephews, etc.) and incest is listed as a
specifying factor in DSM‐5. Men who indulge in incest tend to differ from other paedophiles (a) by
indulging in sexual activity with children of a slightly older age (e.g. an incestuous father may show
sexual interest in a daughter only when the daughter begins to become sexually mature), and (b) by
having a relatively normal heterosexual sex life outside of the incestuous relationship. In contrast,
nonincestuous paedophiles will normally become sexually aroused only by sexually immature children
and are sometimes known as preference molesters (Marshall, Barbaree, & Christophe, 1986).
Second, most paedophiles rarely intend to physically harm their victims (even though they may threaten
their victims in order to prevent disclosure), but some may get full sexual gratification only from
harming and even murdering their victims. This latter group are probably best described as child
rapists; they appear to be fundamentally psychologically different from other paedophiles and often have
comorbid diagnoses of personality disorder or sexual sadism (Groth, Hobson, & Guy, 1982).
child rapists A group of paedophiles who only get full sexual gratification from harming and
even murdering their victims.
TABLE 11.15 Summary: DSM‐5 diagnostic criteria for paedophilic disorder
Continuing and powerful sexual arousal from fantasies, urges, or behaviours involving sexual
activity with a prepubescent child or children (aged 13 years or younger)
The individual has acted on these urges or the urges cause significant distress or impairment in
social, occupational or other areas of life
The individual is at least 16 years old and at least 5 years older than the child or children involved
Does not include an individual in late adolescence in an ongoing sexual relationship with a 12‐ or
13‐year‐old
Because their behaviour is illegal and socially outlawed, and because they need to gain the trust of their
child victims in order to indulge in their sexual activities, most individuals diagnosable with paedophilic
disorder develop elaborate ways of gaining access to children. This can involve taking jobs in
environments where children are frequently found (e.g., schools, residential children's homes, etc.),
gaining the confidence of the parents or family of a child, or more recently by ‘grooming’ children in
Internet chat rooms by pretending to be someone of a similar age to the victim. Focus Point 11.5
provides an example of how paedophiles may ‘groom’ and lure children for sexual purposes on the
Internet (Focus Point 11.5). In a qualitative study of the modus operandi of male paedophiles, Conte,
Wolf, and Smith (1989) were able to describe a standard process through which many paedophiles
operated to attract and isolate their victims and desensitise them to their sexual advances. This process
included (a) choosing an open, vulnerable child who would be easily persuaded and would remain silent
after the abuse, (b) using nonsexual enticements such as purchases or flattery on early encounters with
the child, (c) introducing sexual topics into the conversation, and (d) progressing from non‐sexual
touching to sexual touching as a means of desensitising the child to the purpose of the touching. After
the abuse, the paedophile would use his adult authority to isolate the child and their ‘shared behaviour’
from family and peers.
From www.bewebaware.ca
Finally, it is important to remember that by their very age, the victims of paedophilia are nonconsenting
victims, and that sexual activity with prepubescent children is illegal in most societies. Self‐report studies
indicate that 20% of adult females and 5–10% of adult males recall a childhood sexual assault or sexual
abuse incident (National Center for Victims of Crime, 2011). Furthermore, it is also important to note
that the victims of paedophilia can suffer long‐term psychological problems as a result of their
experiences, and these can manifest as eating disorders, sleep disorders, depression, anxiety disorders
such as panic attacks and phobias, self‐harm, and dissociative disorders, all persisting well into
adulthood. These psychological problems are more intense and more enduring if the abuse occurred at
an early age and the victim knew their abuser well (Kendall‐Tuckett, Williams, & Finkelhor, 1993).
sexual masochism disorder When an individual gains sexual arousal and satisfaction from
being humiliated, and this causes the individual significant distress.
sexual sadism disorder When a person gains sexual arousal and satisfaction from the
psychological or physical suffering of others, and this diagnosis is given if the symptoms cause
the individual significant distress or if the person acts on the impulses with a non-consenting
person.
hypoxyphilia An act performed by sexual masochists which involves the individual using a
noose or plastic bag to induce oxygen deprivation during masturbation.
TABLE 11.16 Summary: DSM‐5 diagnostic criteria for sexual masochism disorder
Continuing and powerful sexual arousal from the act of being humiliated, tied up, beaten, or
made to suffer, over a period of at least 6 months, as part of fantasies, urges, or behaviours
The urges cause significant distress or impairment in social, occupational, or other areas of life
TABLE 11.17 Summary: DSM‐5 diagnostic criteria for sexual sadism disorder
Continuing and powerful sexual arousal from the physical or psychological suffering of another
person, over a period of at least 6 months, as part of fantasies, urges, or behaviours
The patient has acted on these urges with a nonconsenting person or the urges cause significant
distress or impairment in social, occupational, or other areas of life
A number of studies have also identified some of the risk factors involved in paedophilia. These can be
categorised as either remote factors (i.e., factors from the individual's developmental history) or
precipitant factors (i.e., factors that lead directly to the expression of paedophile behaviour). Remote risk
factors for paedophilia include being a victim of childhood sexual abuse (Cohen et al., 2010), or
possessing an inadequate attachment style that results from being brought up in a dysfunctional family
(Hanson & Slater, 1988). Precipitating risk factors include depression, psychosocial stress (for example,
as a result of losing a relationship or a job), alcohol abuse (Fagan, Wise, Schmidt, & Berlin, 2002), social
incompetence, emotional dysregulation, and substance abuse generally (Yakeley & Wood, 2014).
Psychiatric comorbidity is also highly associated with paedophilic disorder, with 93% of paedophiles
being diagnosed with at least one other psychopathology during their lifetime, such as major depression
or anxiety disorders. In addition, 60% of paedophiles are diagnosed with a substance abuse disorder,
and 60% meet the diagnostic criteria for a personality disorder (Raymond, Coleman, Ohlerking,
Christensen, & Miner, 1999). These statistics support the view that psychopathology may be a
precipitating factor in triggering paedophilic behaviour.
Because of the way in which psychodynamic theory is couched, it is difficult to find objective evidence
to support these explanations of paraphilias. If such factors do underlie paraphilia behaviour, then
exploring them in psychoanalysis should help to alleviate these diverse sexual activities. However, there
is only modest evidence that psychoanalysis is successful in the treatment of paraphilias (Cohen &
Seghorn, 1969), and it has generally proved ineffective in treating sexual offenders (Knopp, 1976).
Classical conditioning
One very simple explanation for paraphilic disorders is that unusual sexual urges are the result of early
sexual experiences (such as masturbation) being associated with an unusual stimulus or behaviour
through associative learning (classical conditioning). For example, an adolescent boy's first sexual
experiences may be masturbating to pictures of women dressed in fur or leather (resulting in a fur or
leather fetish), or masturbating after accidentally seeing a neighbour undressing (resulting in voyeurism).
Such early experiences may determine the route an adolescent's sexual development will take, and this
conditioning account is consistent with the fact that many of the paraphilias first manifest in early
adolescence. Support for the classical conditioning account also comes from an early experiment that
attempted to develop a fetish for women's knee‐length leather boots in a group of male volunteers.
Rachman (1966) showed participants slides of a pair of black, knee‐length women's leather boots (the
conditioned stimulus, CS) followed immediately by a slide of an attractive female nude (the
unconditioned stimulus, UCS). After a number of pairings of the CS with the UCS, participants
showed an increase in penis volume (as measured by a phallo‐plethysmograph) whenever the CS slide
was shown. One participant even generalised this sexual response to pictures of other forms of female
footwear! Nevertheless, while the conditioning of a sexual fetish can be experimentally demonstrated
under controlled conditions, it is unlikely that conditioning is the cause of all paraphilias. It may
account for the initial development of some fetishes, and may also account for why sexual urges initially
become associated with specific activities such as voyeurism, frotteurism, or object‐assisted sexual
behaviours in women (O'Keefe et al., 2009). However, as normal sexual activities are experienced
during adolescence and early adulthood, conditioning theory would predict that sexual urges become
associated with these normal activities and that the links between sexual urges and early, learnt
paraphilic behaviour should extinguish. Nevertheless, paraphilias frequently persist, even when the sufferer finds
them distressing and even when they are also concurrently engaging in normal sexual behaviour.
cognitive distortions Beliefs held by sexual offenders that enable them to justify their sexual
offending.
TABLE 11.18 Cognitive distortions found in the post‐offending statements of paedophiles and exhibitionists
Adapted from Maletzky (2002).
Misattributing blame. Paedophilia: ‘She would always run around half‐dressed’. Exhibitionism: ‘The way she was dressed, she was asking for it’.
Denying sexual intent. Paedophilia: ‘I was just teaching her about sex’. Exhibitionism: ‘I was just looking for a place to pee’.
Debasing the victim. Paedophilia: ‘She always lies’. Exhibitionism: ‘She was just a slut anyway’.
Minimising consequences. Paedophilia: ‘She's always been really friendly to me—even afterwards’. Exhibitionism: ‘I never touched her—so I couldn't have hurt her’.
Deflecting criticism. Paedophilia: ‘This happened years ago, why can't everyone forget about it?’. Exhibitionism: ‘It's not like I raped anyone’.
Justifying the cause. Paedophilia: ‘If I wasn't molested as a kid, I'd never have done this’. Exhibitionism: ‘If I knew how to get dates, I wouldn't have to expose myself’.
The cognitive distortions that many sex offenders hold are often the products of more dynamic
cognitive processes. For example, Stermac and Segal (1989) found that sexual offenders interpret sexual
information in a biased way, usually in a manner consistent with their underlying beliefs about the
acceptability of their behaviour. They found that child molesters differed from other respondent groups
by having a predisposition to interpret information as implying that benefits could be gained from sexual
contact with children, that there was greater complicity on the child's part, and that there was less
responsibility on the adult's part. Finally, research has suggested that sex offenders—and rapists in particular—may have
developed integrated cognitive schemata that guide the offender's interactions with their victims and
justify their behaviour. Polaschek and Ward (2002) called these implicit theories. Offenders use these
schemata as causal theories about themselves, their victims and broader categories of people (such as
women or children). Polaschek and Gannon (2004) identified five types of implicit theory held by
rapists. These included the beliefs that (a) women are unknowable (i.e., ‘sexual encounters will end up
being adversarial because a woman's intentions are unknowable’), (b) women are sex objects (i.e.,
‘women are constantly sexually receptive and so will enjoy sex even when it is forced on them’), (c) the
male sex drive is uncontrollable (i.e., ‘a man's sex levels will build up to a dangerous level if women do
not provide them with reasonable sexual access’), (d) men are naturally dominant over women (i.e., ‘men
are more important in society than women, and a woman should meet a man's needs on demand’), and
(e) the world is a dangerous place (i.e., ‘it is a dog‐eat‐dog world and a man needs to take what he can
from it’). Implicit cognitive theories such as these can provide the sex offender with a justification for
both impulsive and premeditated sexual offences, and can be used as a way of denying both the
significance of the offence and the offender's responsibility for the offence, and further research is
beginning to identify the different forms of dysfunctional schemas that may act as vulnerability factors
for different forms of sexual offending (Sigre‐Leirós, Carvalho, & Nobre, 2015).
implicit theories In sexual offending, integrated cognitive schemas that guide sexual
offenders’ interactions with their victims and justify their behaviour.
Biological theories
As we mentioned earlier, the vast majority of those diagnosed with a paraphilia are male, and so it has
been hypothesised that paraphilia is caused by abnormalities in male sex hormones or by imbalances in
those brain neurotransmitters that control male sexual behaviour. For example, androgens are the
most important of the male hormones, and it may be that unusual sexual behaviour, such as impulsive
sexual offending involving non‐consenting others, may be due to imbalances in these hormones.
However, there is relatively little convincing evidence that abnormal androgen levels play a significant
role in the development of paraphilic behaviour, although androgen levels may help to maintain
paraphilic behaviour once it has been acquired (Buvat, Lemaire, & Ratajczyk, 1996; Thibaut, De La
Barra, Gordon, Cosyns, & Bradford, 2010), and antiandrogen drugs that reduce testosterone levels are
regularly used to reduce the sexual urges of those with paraphilia disorders (Bradford & Pawlak, 1993;
Jordan, Fromberger, Stolpmann, & Muller, 2011). Abnormalities in brain neurotransmitter metabolism
—such as serotonin—have also been associated with paraphilia (Maes, De Vos, Van Hunsel, & Van
West, 2001), although it is unclear whether such abnormalities are a cause of paraphilia or whether they
are a consequence of acquiring paraphilic behaviour and of the anxiety and depression that are
frequently comorbid with paraphilia.
androgens The most important of the male hormones. Unusual sexual behaviour, such as
impulsive sexual offending involving non-consenting others, may be due to imbalances in these
hormones.
Finally, there are a small number of studies that have identified abnormalities or deficits in brain
functioning associated with paraphilias. First, abnormalities in the brain's temporal lobe have been linked to
a number of paraphilias, including sadism, exhibitionism and paedophilia (Mason, 1997; Murphy,
1997; Mendez, Chow, Ringman, Twitchell, & Hinkin, 2000), and gray matter volume has been found to
be lower in the temporal lobes of paedophiles than nonpaedophiles (Schiffer et al., 2017). In particular,
these abnormalities appear to be related to dysfunction in the temporal lobes leading to sexual
disinhibition of previously controlled behaviour, and Schiffer et al. (2017) also found that lower gray
matter volume in the dorsomedial prefrontal cortex was associated with a higher risk of reoffending in
paedophilic child molesters. The dorsomedial prefrontal cortex plays a role in processing a sense of self,
theory of mind, empathy, and making morality judgements—and deficits in these abilities may all play a
role in sex offending.
Other studies (albeit based on small samples of participants) have identified deficits in cognitive abilities
in paedophiles that are mediated by striato‐thalamically controlled areas of the frontal cortex (Tost et
al., 2004). These areas are associated with neuropsychological functions that include response inhibition,
working memory and cognitive flexibility, and deficits in these domains are consistent with the finding
that paedophiles frequently have lower than expected IQ scores—often as much as two thirds of a
standard deviation below the population mean (Cantor, Blanchard, Robichaud, & Christensen, 2005).
Summary
Research on the aetiology of paraphilic disorders has largely been restricted to understanding the causes
of those paraphilias that involve sexual offending (e.g., paedophilia)—mainly because of the desire to
understand and prevent criminal activity. However, the research that is available has identified some risk
factors for paraphilia (e.g., hypersexuality, childhood abuse, and neglect), and has also indicated that
some paraphilias are associated with cognitive biases and dysfunctional beliefs that act to maintain
sexual offending and serve to legitimise or justify sexual activities.
Behavioural techniques
In Section 11.3.2 we discussed a number of early theories of paraphilic disorders that viewed these
problems as resulting from classical conditioning processes. In these accounts unconventional stimuli or
events (such as specific stimuli in fetishes, watching others naked in voyeurism, etc.) have become
associated with sexual experiences, such as masturbation, during early adolescence. The assumption of
behaviour therapy is that if these behaviours are learnt through conditioning, then they can also be
‘unlearnt’ through the use of basic conditioning procedures. Three types of technique are described
here. These are aversion therapy, masturbatory satiation and orgasmic reorientation.
Aversion therapy is based on the assumption that inappropriate stimuli have become positively
associated with sexual arousal and sexual satisfaction, and in order to break this association, those
stimuli must now be paired with negative or aversive experiences. For example, treatment of a fur fetish
may involve pairing pictures of fur or women wearing fur clothing with aversive experiences such as an
electric shock or drug‐induced nausea. Alternatively, a paedophile may be given electric shocks when
shown pictures of naked children. An avoidance component can be added to this treatment in which
the client can avoid the negative outcome by pressing a button which changes the picture from their
preferred sexual stimulus (e.g., fur, naked child) to an acceptable one (e.g., an attractive female). Aversion
therapy can also be used in a covert conditioning form, where the client does not actually experience
the pairing of sexual stimuli with aversive outcomes, but imagines these associations during controlled
treatment sessions. For example, the client may be asked to imagine one of their sexual fantasies and
then to vividly imagine a highly aversive or negative outcome, such as his wife finding him indulging in
his paraphilic sexual activities or being arrested, etc. (Barlow, 1993). Aversion therapy has been used to
treat fetishes, transvestism, exhibitionism, and paedophilia, and there is some evidence that it may have
some treatment benefit when combined with other approaches such as social skills training (Marks,
Gelder, & Bancroft, 1970). However, as we have reported elsewhere in this book, aversion therapy rarely
achieves long‐term success when used alone—and high rates of relapse are associated with the sole use
of aversion therapy (Wilson, 1978; Beech & Harkins, 2012).
covert conditioning Using the client’s ability to imagine events in order to condition
associations between events.
Satiation is an important conditioning principle in which the unconditioned stimulus (in this case sexual
satisfaction) comes to be ineffective because it is experienced in excess, and this leads to extinction of the
sexual urges that had been conditioned to stimuli or events associated with that unconditioned stimulus
(e.g., fetishes, etc.). This has led to the development of masturbatory satiation as a treatment for
paraphilic disorders, in which the client is asked to masturbate in the presence of arousing stimuli (e.g.,
women's underwear if the client has an underwear fetish) and to simultaneously verbalise fantasies on a
tape recorder. Immediately after orgasm, the client is instructed to masturbate again, no matter how
unaroused or uninterested they feel, and to continue for at least an hour (Marshall & Barbaree, 1978). After
a number of these sessions, the client often reports that the stimuli that previously aroused them sexually
have become boring or even aversive (LoPiccolo, 1985). Latency to orgasm increases, and the number of
sexual fantasies elicited by the paraphilic stimulus significantly decreases (Marshall & Lippens, 1977).
An important task for anyone treating paraphilic disorders is not only to suppress inappropriate or
distressing sexual activities (perhaps using the methods described here), but to replace these with
acceptable sexual practices. Orgasmic reorientation is a treatment method that aims to make the
client sexually aroused by more conventional or acceptable stimuli. This is a more explicit attempt to
recondition sexual urges to more conventional stimuli and can be used as an extension of the
masturbatory satiation technique. For example, a male client is first asked to masturbate while attending
to conventionally arousing stimuli (such as pictures of nude females), but if they begin to feel bored or
lose their erection, they are asked to switch to attending to pictures associated with their paraphilia. As
soon as they feel sexually aroused again, they must switch back to attending to the conventional
stimulus, and so on. Although there are a number of individual case studies suggesting that some
variations of orgasmic reorientation may be successful in helping clients to manage their paraphilic
behaviour, there are few controlled outcome studies available to evaluate the success of this method over
the longer term (Laws & Marshall, 1991). Perhaps more important, reorientation treatments raise the
question of what criteria should be used to decide whether a sexual behaviour requires reassignment,
and this is an issue of significant debate, especially in LGBT communities (e.g., Flentje, Heck, &
Cochran, 2013), and the outcome of this debate will have important ramifications for clinical policy and
practice.
Cognitive treatments
We saw in the section on aetiology that dysfunctional beliefs play a central role in developing and
maintaining a number of paraphilic disorders—especially those paraphilias that involve sexual
offending with nonconsenting victims. Cognitive treatment for these paraphilias often involves CBT,
which is adapted to help the client to identify dysfunctional beliefs, to challenge these beliefs and then
replace them with functional and adaptive beliefs about sexual behaviour and sexual partners. Table
11.18 shows a list of the kinds of dysfunctional beliefs held by paedophiles and exhibitionists. These
beliefs act as justifications for sexual offending and are part of a belief system that effectively ‘gives
them permission’ to carry out their offences. Challenging dysfunctional beliefs includes (a)
demonstrating to clients that their dysfunctional beliefs are based on their deviant sexual behaviour
rather than being justifiable reasons for the behaviour, (b) helping clients to see how they might
misinterpret the behaviour of their victims to be consistent with their dysfunctional beliefs, and (c)
discussing dysfunctional beliefs within existing individual and broader social norms in order to
demonstrate that the client's beliefs are not shared by most other members of society (e.g., that women
are not merely objects for sexual gratification). One integrated CBT‐based treatment for sexual offenders
is called the Core Sex Offender Treatment Programme (SOTP), which was recommended
for use in UK prisons by the Ministry of Justice. Core SOTP extensively adopts CBT methods for
treating imprisoned sex offenders (Beech, Fisher, & Beckett, 1999) and targets risk factors for reoffending
such as sexual preoccupation, sexual preferences for children, offence‐supporting attitudes, lack of
emotional intimacy with adults, impulsive lifestyle, and poor problem‐solving abilities. Despite evidence
that initially supported the efficacy of the programme in marginally reducing recidivism (e.g.,
Schmucker & Lösel, 2008), a later randomised controlled study of the longer‐term effects of the
treatment programme over 8 years showed paradoxically that more treated sex offenders committed at
least one sexual re‐offence when compared with non‐treated control offenders (Mews, Di Bella, &
Purver, 2017). Even though the Core SOTP programme was based on tried‐and‐tested CBT principles,
the Ministry of Justice withdrew the treatment programme in 2017, and it's not clear why the
programme failed to have the predicted outcome on sex offender recidivism. This may simply attest to
the great difficulty in finding effective psychological treatments that have any significant impact on
longer‐term sexual offending behaviour (Dennis et al., 2012).
Cognitive treatment Treatment approach intended to help the client identify and challenge
dysfunctional beliefs.
Relapse‐prevention training
Rather than focus on an all‐embracing ‘cure’ for paraphilias, many forms of treatment focus specifically
on relapse prevention, and this is especially relevant in the case of sexual offenders (see previous
section). Relapse‐prevention training consists primarily of helping clients to identify circumstances,
situations, moods, and types of thoughts that might trigger paraphilic behaviour. For example, a mood
trigger might be a period of stress or anxiety or alcohol abuse that precipitates sexual offending, or close
contact with children might activate paedophile behaviours. Sexual offenders are also taught to identify
the distorted cognitions that might lead to offending (e.g., ‘that child is running around half dressed, so
she must be interested in sex’) and are taught self‐management skills that will enable them to interrupt
sequences of thoughts that lead to offending or to avoid situations that place them at risk (e.g., in the
case of paedophilia, to avoid taking jobs that involve working with or near children, or living near a
school).
antiandrogen drugs A group of drugs that significantly decrease the levels of male
hormones such as testosterone.
An alternative to antiandrogens is the use of antidepressant drugs such as SSRIs (e.g., fluoxetine), and
there is some modest evidence that such drugs can help the individual control sexual urges—especially
if depression is a trigger for indulging in paraphilic behaviour (Kafka & Hennen, 2000). In particular,
the available evidence suggests SSRIs may be helpful for conditions such as hypersexuality (increased
sex drive) (National Institute for Health and Care Excellence, 2015).
I'm 35 now and for as long as I can remember I've been involved with services. As a child, I was in and out of
foster care as life at home was just one big mess. My parents drank heavily, and all my brothers and sisters—
including me—were neglected, physically, and emotionally. When I was seven, my mum's brother moved in, my
so‐called uncle, and that's when the sex abuse started. I was moved into care because no one could understand why
I was so unhappy all the time, but I couldn't tell anyone, he made me promise, and anyway he was pretty
controlling and scary. I should have been safe in care but there were people who hurt me and interfered with me.
I started self‐harming when I was 13 or 14. It was a complete distraction from all the mess and unhappiness
around me, cutting my arms, being in control of something, and letting the pressure out. Then I got pregnant by an
older guy when I was 16 and he promptly dumped me. That was a terrible time. I was utterly desperate and so
vulnerable, and I ended up giving my baby up for adoption. I will always feel guilty for that.
I had to leave school, though I was pretty rubbish there, and just couldn't focus or concentrate on anything, and
didn't really have anyone I could call a friend, someone who really knew me and what my life was like. I ended
up going from one job to the next, supermarkets, cleaning, you name it. So boring I couldn't hack it.
Then I really started hurting myself bad, so alone, so depressed, so much wanting not to exist, so needing to be
dead. Drinking helped, as did smoking weed, but that feeling of nothingness, the deadened peace, never really
lasted and I ended up doing some pretty stupid things with people. Don't even go there. I was 20 and going from
one fella to the next. Then I started being knocked around and was hurt so badly one time I just did it, tried to kill
myself, that's when I first went to the mental hospital. I was in and out of the wards, it was like a pattern, pick
myself up and start all over again and then bam, I was back to square one. The shrinks and the therapists tried to
help but it was useless, and I just hurt myself more and more. I think they were as close as I was to giving up,
maybe more so.
Jane's Story (from Solts & Harvey, 2015)
Introduction
We all have personalities. Personalities tend to be enduring features of individuals that determine how
we respond to life events and experiences, and they also provide a convenient means by which others
can label and react to us. To this extent, a personality is a global term that describes how you cope with,
adapt to, and respond to a range of life events, including challenges, frustrations, opportunities,
successes, and failures. A personality is something that we inwardly experience ourselves and outwardly
project to others. While personalities tend to be relatively enduring in their main features, most people
will learn and evolve with their experiences, and they will learn new and effective ways of behaving that
will enable them to adapt with increasing success to life's demands. In contrast, others will have
experienced many difficulties and much unhappiness in their lives, and will struggle to live the
‘normal’ lives they see others around them leading. Their attempts to cope with these difficulties are
often extreme and damaging. We can see many of these latter characteristics in Jane's Story. After a
childhood of neglect and abuse she spent much of her early life in care and then drifted from job to job
and relationship to relationship, experiencing a roller‐coaster of emotions. Her attempts to cope with
her life were damaging and often extreme, from self‐harm to drug abuse, and eventually suicide
attempts, and from childhood to adulthood her approach to life and its problems became more
entrenched and destructive. Jane's life story is characteristic of an individual with a diagnosis of
borderline personality disorder (BPD), which is the most common of the diagnosable personality
disorders.
The personality disorders diagnostic category is arguably the category that creates the most controversy
amongst clinical psychologists and mental health professionals, and we discuss many of these
controversies in this chapter. We also discuss the effects that these debates have had on the diagnostic
criteria for personality disorders found in the most recent edition of the DSM.
For the purposes of clinical diagnosis, DSM‐5 defines a personality disorder as ‘an enduring pattern of
inner experience and behaviour that deviates markedly from the expectations of the individual's culture,
is pervasive and inflexible, has an onset in adolescence or early adulthood, is stable over time, and leads
to distress and impairment’ (DSM‐5, American Psychiatric Association, 2013, p. 645). DSM‐5 has
grouped personality disorders into three distinct clusters (see Table 12.3). Cluster A includes paranoid,
schizoid, and schizotypal personality disorders, and DSM describes individuals with these disorders as
‘appearing odd or eccentric’. Cluster B includes antisocial, borderline, histrionic, and narcissistic
personality disorders, where individuals ‘often appear dramatic, emotional, or erratic’. Cluster C
includes avoidant, dependent, and obsessive‐compulsive personality disorders (OCPDs), and individuals
with these disorders ‘often appear anxious or fearful’ (DSM‐5, American Psychiatric Association, 2013,
p. 646).
Individuals diagnosed with a personality disorder will often deny their psychopathology, will often be
unable to comprehend that their behaviour may on some occasions be contrary to conventional and
acceptable ways of behaving, and will find it difficult to associate their own psychological difficulties
with their own inflexible ways of thinking, behaving, and coping.
personality disorder types Each of the six personality disorder types specified in the alternative
diagnostic scheme published in DSM‐5.
TABLE 12.2 Summary: DSM‐5 diagnostic criteria for general personality disorder
An ongoing rigid pattern of thought and behaviour that is significantly different from the
expectations of the person's culture, manifested in two or more of the following
areas:
Cognition
Affectivity
Interpersonal functioning
Impulse control
The pattern is constant and long lasting and can be traced back to adolescence or early adulthood
The pattern leads to distress or impairment in social, occupational, and other areas of life
The symptoms are not better accounted for by another mental disorder or due to the effects of a
substance or other medical condition
12.1.4 Summary
This discussion should have given you an insight into how the diagnosis of personality disorders may
develop in the near future, and some of the reasons why diagnosis might need to change. However, the
American Psychiatric Association decided against introducing these
changes with the publication of DSM‐5 in 2013 and agreed to provide more time for further research
on the alternative, dimensional approach.
But it is expected that when the new revision of the International Classification of Diseases (ICD‐11) is
published by the World Health Organization (WHO) in 2022, it will adopt a fully dimensional
classification of personality disorders, in which a severity dimension will be the most significant
diagnostic dimension (mild, moderate, and severe), with five personality trait domains serving as
qualifiers (negative affectivity, detachment, dissociality, disinhibition, and anankastia, which are similar
to the higher‐order trait domains in the DSM‐5 alternative model, see Table 12.1) (Bach, 2018; Tyrer,
Mulder, Kim, & Crawford, 2018). This should allow the clinician to make a diagnosis that (a) is more
clinically meaningful in terms of how the severity of the symptoms affects functioning, (b) does not
require the need for multiple comorbid diagnoses (this would be covered by the severity dimension), and
(c) would enable treatment to be directed towards aiding functioning rather than treating dysfunctional
traits.
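As a rough illustration of how such a dimensional record differs from a single categorical label, the following sketch captures the scheme described above: one severity rating qualified by zero or more of the five trait domains. This is purely hypothetical; the class name, field names, and coding are assumptions for illustration, not an official ICD‐11 data model.

```python
# Hypothetical sketch of a fully dimensional personality disorder record:
# a primary severity dimension plus optional trait-domain qualifiers,
# rather than a categorical diagnosis. Not an official ICD-11 data model.
from dataclasses import dataclass, field

SEVERITIES = ("mild", "moderate", "severe")
TRAIT_DOMAINS = ("negative affectivity", "detachment", "dissociality",
                 "disinhibition", "anankastia")

@dataclass
class DimensionalDiagnosis:
    severity: str                                    # the primary diagnostic dimension
    qualifiers: list = field(default_factory=list)   # zero or more trait domains

    def __post_init__(self):
        # Validate against the scheme's permitted values
        assert self.severity in SEVERITIES
        assert all(q in TRAIT_DOMAINS for q in self.qualifiers)

# One dimensional record can stand in for what might otherwise require
# several comorbid categorical diagnoses.
dx = DimensionalDiagnosis("moderate", ["negative affectivity", "disinhibition"])
print(dx.severity)  # moderate
```

The point of the sketch is that severity carries the main clinical weight, while the trait qualifiers simply describe the flavour of the difficulties, which mirrors points (a) to (c) above.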
SELF‐TEST QUESTIONS
What are the problems associated with the current categorical approach to diagnosing
personality disorders?
What is the process for diagnosing personality disorders in DSM‐5's alternative model?
SECTION SUMMARY
A universal distrust and suspicion of others to the extent that their motives are seen as malicious,
as indicated by at least four of the following:
Suspicions that others are misusing, hurting, or misleading him/her
Fixation with unjustified doubts about the trustworthiness of friends or associates
Unwilling to confide in others because of fear that the information will be used against
him/her
Sees hidden threats in nonthreatening words or events
Bears persistent grudges
Sees attacks on their character or status that are not apparent to others and is quick to react
angrily
Has ongoing suspicions about the faithfulness of their sexual partner or spouse
Symptoms do not occur exclusively during the course of any other psychotic disorder
A persistent pattern of separation from social relationships and a restricted range of expression of
emotions in relational situations, as indicated by at least four of the following:
Does not like or want close relationships
Prefers solitary activities
Takes little or no pleasure in sexual experiences with another person
Takes pleasure in few, if any, activities
Lacks close friends or confidants other than immediate relatives
Indifferent to the praise or criticism of others
Displays emotional coldness, detachment, or flat expression
Symptoms do not occur exclusively during the course of any other psychotic disorder
A persistent pattern of social and relational shortfalls, evidenced by a lack of ease with, and
reduced ability for, close relationships, as well as distortions and peculiarities of behaviour as
shown by at least five of the following:
Ideas of reference (beliefs that irrelevant or innocuous events refer personally to them)
Odd beliefs that influence behaviour and are not within subcultural norms
Strange perceptions of what is occurring around them
Vague or other odd thinking and speech
Suspicious or paranoid ideas
Inappropriate or constricted emotional expression
Odd, eccentric, or strange behaviour or appearance
Lacks close friends or confidants other than immediate relatives
High levels of social anxiety despite familiarity
The pattern does not occur during the course of schizophrenia or other psychotic disorder
There is some evidence that schizotypal personality disorder may be very closely related to schizophrenia. First,
schizotypal personality disorder is found to be significantly more common in individuals who have
biological relatives with schizophrenia than those who do not (Nicolson & Rapoport, 1999), suggesting a
possible inherited link between the two. Second, schizotypal personality disorder is significantly more
likely to be found in the offspring of individuals with schizophrenia than in the offspring of individuals
diagnosed with anxiety disorders or no mental disorder (Hans, Auerbach, Styr, & Marcus, 2004). Third,
some of the symptoms of schizotypal personality disorder can be successfully treated with antipsychotic
drugs also used to treat schizophrenia (Schulz, Schulz, & Wilson, 1988). Fourth, cognitive studies have
shown that many of the attentional and working memory deficits found in schizophrenia are also
apparent in individuals diagnosed with schizotypal personality disorder (Barch et al., 2004). Finally,
neuroimaging studies show that schizotypal personality disorder shares many forms of brain pathology
in common with schizophrenia, suggesting it may be a schizophrenia‐spectrum condition (Fervaha &
Remington, 2013). Schizotypal personality disorder is closely related in many ways to schizophrenia and
may even represent a genetic risk factor for schizophrenia (Walter, Fernandez, Snelling, & Barkus, 2016)
(Case History 12.1).
CASE HISTORY 12.1 SCHIZOTYPAL PERSONALITY
DISORDER
‘Ian is 23 and lives at home with his parents. He is unemployed. He spends most of his time
watching TV, and often simply sits and stares into space. He says he just feels ‘out of it’ a lot of
the time. He reports that he seems to see himself from outside, as if watching himself in a film
and reading from a script. He has tried a few jobs but never manages to persist at one for very
long. At his last job, which was in a DIY store, several customers complained to the manager
about Ian talking to them in a rambling and vague way—often about irrelevant things. This led
to Ian being sacked from this job. Ian doesn't understand why people don't seem to like him and
get along with him. He notices that people move away from him on public transport or avoid
talking to him in queues, but nothing he seems to do or say changes this and he now tries to
avoid interactions with others because they make him anxious. He has no close relationships
and complains of feeling lonely and isolated’.
Clinical Commentary
Ian shows many of the diagnosable symptoms of schizotypal personality disorder including unusual
ideas of reference (feeling he is in a film), vague and circumstantial speech in conversations,
suspiciousness and paranoia about others, a lack of close relationships, and feelings of anxiety in
interactions with others. Currently, these characteristics have led to Ian being unemployed and leading the
life of a relatively uncommunicative ‘loner’ who shows little emotion.
Antisocial personality disorder (APD) A personality disorder, the main features of which
are an enduring disregard for, and violation of, the rights of others. It is characterised by
impulsive behaviour and lack of remorse, and is closely linked with adult criminal behaviour.
Prior to 1980, APD or psychopathy was defined primarily by personality traits such as egocentricity,
deceit, shallow affect, manipulativeness, selfishness, and lack of empathy. However, with the
introduction of DSM‐III in 1980, APD has been defined more in terms of violations of social norms. The
reason given for this shift in emphasis is that personality traits are difficult to measure, and it is easier to
agree a diagnosis on the basis of well‐defined behaviours (such as breaking laws or aggressive
behaviours) (Widiger & Corbitt, 1993)—and these well‐defined, antisocial behaviours are well
represented in the DSM‐5 diagnostic criteria for APD (see Table 12.7). This shift in the diagnostic
criteria has meant that APD has become very closely associated with criminal activity rather than being
purely a psychopathology requiring treatment. This indicates that the changes to the diagnostic criteria
for APD over the years have moved this category more towards identifying criminals and criminal
behaviour and away from identifying psychological factors that might give rise to such behaviour (such
as lack of empathy, superficial interpersonal style, inflated sense of self‐importance, etc.). There is a real
possibility that this move towards defining APD in terms of antisocial activities could blur the
distinction between psychopathology in need of treatment and criminal behaviour in need of restraint.
TABLE 12.7 Summary: DSM‐5 criteria for antisocial personality disorder (APD)
Pattern of indifference to and violation of the rights of others as shown by at least three of the
following since the age of 15 years:
Lack of conformity to social norms and regularly indulging in unlawful behaviours
Lying, pretending to be someone else, or deceiving others for personal gain
Failure to plan ahead or impulsiveness
Irritability and aggressiveness leading to physical fights and assaults
Reckless indifference to their own and others' personal safety
Consistent irresponsible behaviour
Lack of remorse
The person is at least 18 years old
The antisocial behaviour is not associated with symptoms of schizophrenia or mania
Borderline personality disorder (BPD) A personality disorder, the main features of which
are instability in personal relationships, a lack of well-defined and stable self‐image, regular and
unpredictable changes in moods and impulsive behaviour.
TABLE 12.8 Summary: DSM‐5 criteria for borderline personality disorder (BPD)
A long‐term display of instability of relationships, self‐image, and behaviour, as well as high levels
of impulsivity beginning in early adulthood and indicated by at least five of the following:
Desperate attempts to avoid real or imagined abandonment
A pattern of unstable and intense interpersonal relationships, fluctuating between adulation
and deprecation
Constantly unstable self‐image and identity disturbance
Potentially self‐damaging impulsivity in at least two areas such as sex, substance abuse, and
reckless driving
Repeated suicidal behaviour or self‐mutilation
Emotional instability due to reactivity of mood
Inappropriate, intense anger or difficulty controlling anger
Stress‐related paranoid ideation or severe dissociative symptoms
Because of its close association with mood disorders, depression, and suicide, some researchers have
argued that BPD may well be a form of depression (Gunderson & Elliott, 1985), but in fact it is just as
likely to be comorbid with anxiety disorders or with depressive symptoms (Grant et al., 2008). Zanarini
et al., (1998) found that 96.3% of individuals diagnosed with BPD met the criteria for a mood disorder
(major depression, dysthymia, bipolar II disorder), but 88.4% also met the criteria for an anxiety
disorder, with panic disorder (47.8%) and social phobia (45.9%) being the most prevalent. Interestingly,
64.1% met the criteria for substance abuse disorders—reaffirming the link between BPD and impulsive
behaviour, and 53% met the criteria for eating disorders. Another important finding is that BPD is
often comorbid with post‐traumatic stress disorder (PTSD), with 30.2% of those with a diagnosis of
BPD also having a diagnosis of PTSD, and 24.2% of those with PTSD also having a diagnosis of BPD
(Pagura et al., 2010), a finding which is consistent with the view of some clinicians that BPD may be a
form of PTSD—and PTSD and BPD may be products of a history of traumatic childhood experience
related to neglect or physical and sexual abuse (Scheiderer, Wood, & Trull, 2015; Heffernan & Cloitre,
2000). At the very least, these data suggest that BPD represents a behavioural style that may put an
individual at severe risk for a wide range of other psychopathologies, and while the prevalence rates for
BPD in the general community are between 0.2% and 1.8%, this prevalence rises to 15–25% in
psychiatric inpatients (Leichsenring, Leibing, Kruse, New, & Leweke, 2011; Lieb, Zanarini, Schmahl,
Linehan, & Bohus, 2004).
An ongoing pattern of grandiosity, need for adoration and lack of empathy, beginning in early
adulthood and indicated by at least five of the following:
Has a highly exaggerated sense of self‐importance and self‐achievement
Preoccupied with illusions of unlimited success, power, beauty, or ideal love
Believes that they are special and can be understood only by similarly special people
Commands excessive admiration
Has unreasonable expectations of favourable treatment
Exploits others for personal gain
Lacks compassion and cannot identify with the needs and feelings of others
Often jealous of others and believes that others are jealous of them
Shows conceited, self‐important behaviour or attitudes
avoidant personality disorder A personality disorder the features of which are avoidance
of a wide range of social situations, feelings of inadequacy, and hypersensitivity to negative
evaluation and criticism.
People with avoidant personality disorder generally have low self‐esteem, and will frequently feel angry
at themselves for being withdrawn and not enjoying the apparent social rewards and intimate
relationships experienced by others (Lynum, Wilberg, & Karterud, 2008). As you can imagine, avoidant
personality disorder has many features in common with social anxiety disorder (see Chapter 6), and
many individuals diagnosed with avoidant personality disorder also receive a diagnosis of social anxiety
disorder (Widiger, 2001; Marques et al., 2012). However, individuals with social anxiety disorder tend to
be made anxious by social situations where particular levels of performance might be required (e.g.,
making a work presentation or having a job interview), whereas the personality disorder is more
associated with (a) fear of personal interactions and social relationships generally, (b) the criticism and
rejection that they believe will be associated with these types of experiences, and (c) difficulties in being
open with people they are close to (Turner, Beidel, Dancu, & Keys, 1986; Marques et al., 2012). In
addition, there is some evidence that avoidant personality disorder is associated with avoidance
behaviour generally, and individuals diagnosed with the disorder show greater avoidance of emotion,
novelty, and other nonsocial events than nonsufferers (Taylor, Laposa, & Alden, 2004). However, some
clinicians believe that avoidant personality disorder and social anxiety disorder are both components of
a broader social anxiety spectrum (Tillfors & Ekselius, 2009), and there is evidence to suggest that
(a) the severity of the symptoms of avoidant personality disorder is significantly increased if it is
comorbid with social anxiety disorder (Ralevski et al., 2005), and (b) there is a genetic link between the
two disorders evidenced by the fact that if an individual is diagnosed with one of the disorders, first‐
degree relatives of that individual are 2–3 times more likely to be diagnosed with either of them (Tillfors
et al., 2001).
‘The way I see it, people like us (with avoidant personality disorder) were born with brains that
were very sensitive to social situations. As a child I used to get so frightened and scared that I
probably unconsciously decided to build up a defence system against terrible feelings in order to
protect myself. I just instinctively knew I had to do something, so my personality was formed in
a way designed to avoid the harm. I hated the fact that other kids would be out to criticise me,
so I adopted avoidance as a defence system. I had very low self‐esteem, so I didn't think anyone
liked me anyway. So I tried to stay away from potentially harmful situations, and lived in a
world of my own. When I was younger, my classmates used to tell me that at parties they would
turn the lights down and dance, but I would sit in the corner playing with my bike‐lights. I
would often stay off school and read books all day—that would comfort me because I liked the
stories. My real life became less important to me, and I didn't participate in social events apart
from just trying to be pleasant when needed. As I grew older, I should have developed a
different defence system, but I couldn't because I had become pretty much a social outcast, and
the fear of being criticised and rejected had got stronger. It was like I was in a vicious circle that
I couldn't get out of’.
Clinical Commentary
In this personal account of avoidant personality disorder, the individual describes how her desire to avoid
social encounters developed during childhood from a fear of being criticised (and possibly bullied) by her
peers. When avoiding social encounters (e.g., by staying off school), she would reward these avoidance
responses by indulging in enjoyable activities, such as reading stories she liked. At adolescence she
discovers she has become something of a social outcast, and this maintains her low self‐esteem and
feelings of not being liked, which further maintains social avoidance. She shows a number of the symptoms of avoidant personality disorder, including avoiding activities that involve significant interpersonal contact because of fears of criticism, disapproval, or rejection; a preoccupation with being criticised or rejected in social situations; and a view of herself as socially inept and personally unappealing to others.
The characteristics of dependent personality disorder appear to fall into two distinct categories: (a) attachment/abandonment, in which the individual fears abandonment and constantly seeks attachment with significant others, and (b) dependency/incompetence, in which the individual has constant feelings of incompetence which drive them to rely on others (Gude, Hoffart, Hedley, & Ro, 2004). Because of
their self‐doubting and overdependence, individuals with dependent personality disorder often dislike
themselves (Overholser, 1996), which may lead to depression, anxiety, eating disorders, and suicidal
ideation (e.g., Godt, 2002). Disney (2013) provides a thorough critical review of dependent personality
disorder and relevant research (Photo 12.1).
TABLE 12.12 Summary: DSM‐5 criteria for dependent personality disorder
An inescapable and extreme need to be taken care of, leading to submissive and clingy behaviour
and fear of separation, beginning in early adulthood and indicated by at least five of the
following:
Cannot make everyday decisions without an unnecessarily high level of advice and
reassurance from others
Needs others to assume the majority of responsibility for the major areas of his/her life
Struggles to express disagreement with someone for fear of loss of support
Has difficulty initiating/doing things on his/her own
Feels uncomfortable or afraid when left alone due to a fear of not being able to care for
oneself
Urgently seeks to secure another caring and supportive relationship when the previous one
ends
Is unrealistically obsessed with fears of being left to take care of oneself
PHOTO 12.1 From letters and biographies of Wolfgang Mozart it was assumed he may have suffered from bipolar
disorder because of periods of depression followed by bouts of mania. However, more recent analyses suggest he may have
been suffering from dependent personality disorder because of his mood lability, impulsiveness, and negative reactions to his
wife's absences (Huguelet & Perroud, 2005).
Obsessive‐compulsive personality disorder
Individuals diagnosed with obsessive‐compulsive personality disorder show exceptionally
perfectionist tendencies including a preoccupation with orderliness and control at the expense of
flexibility, efficiency, and productivity. They will stick to rules, work schedules and prearranged
procedures to such a degree that the overall purpose of the activity is lost. Diverging from a preset
schedule causes them significant distress, as does failing to achieve the highest of standards in the things
they do, and their attention to detail and their inflexibility will often annoy other people because of the
delays and inconvenience that this may cause. For example, they may hold up a work project by insisting
that their component of the project has to be completed meticulously and in the way in which it was
originally specified. Individuals with obsessive‐compulsive personality disorder nearly always plan ahead
meticulously and are unwilling to contemplate changes to their plan. This means that even hobbies and
recreational activities are approached as serious tasks requiring organisation and scheduling. For
example, they will need to plan a visit to a restaurant well in advance, the menu needs to be checked to
ensure that everyone will be happy with what is on offer, and the quality of the restaurant's service must
be checked with friends who have been there or by consulting dining reviews. If this planning is
disrupted (e.g., if the restaurant is closed when the party arrives), this will cause the individual
considerable distress and a spontaneous alternative will be difficult for them to consider. If things are
not done ‘their way’ this also causes distress, and this may be taken to unnecessary extremes (such as
asking a child to ride their bike in a straight line or telling people that there is only one way to wash the
dishes, etc.), and they will then become upset or angry if people do not comply, although the anger is
rarely expressed directly. Because of this they will rarely delegate tasks, but insist on doing them
themselves, and often become viewed as ‘workaholics’. Their perfectionist tendencies also mean that
they often end up hoarding things rather than throwing them away and will adopt a miserly attitude to
spending, believing that money should not be wasted. Because of this, they often end up living at a
standard well below what they can afford. OCPD is one of the most prevalent of the personality
disorders, with a recent large‐scale epidemiological study recording a lifetime prevalence rate of 7.8%
with rates being similar between males and females (Grant, Mooney, & Kushner, 2012) (Table 12.13).
TABLE 12.13 Summary: DSM‐5 criteria for obsessive‐compulsive personality disorder (OCPD)
An ongoing pattern of concern with orderliness, perfection, and mental and interpersonal
control, at the expense of flexibility, openness, and efficiency, beginning in early adulthood and
indicated by at least four of the following:
An obsession with details, rules, lists, organisation, or schedules to the exclusion of the main
point of the activity
Perfectionism that hinders task completion
Excessive devotion to work to the exclusion of social and leisure activities
Inflexibility about matters of morals, ethics, or values
Is unable to dispose of worn‐out or worthless objects despite them having no sentimental
value
Reluctant to delegate to others unless they submit to exactly his/her way of doing things
Hoards money and is reluctant to spend on themselves or others
Is rigid and stubborn
CASE HISTORY 12.2 OBSESSIVE‐COMPULSIVE
PERSONALITY DISORDER (OCPD)
‘Jane likes to describe herself as a perfect mother. She takes pride in keeping an orderly
household and attending all of her daughters' horse‐riding events, while being office manager
in an insurance company. She knows the schedules of each family member and follows rigid
routines to make sure everyone gets to work or school on time. Jane gets very upset when her
teenage daughters want to go out with friends at weekends or in the evenings. She says it takes
away from their family time and all of her efforts and planning are wasted. She refuses to go
out for the evening if this interferes with her planned weekly activities in the house. Her
husband doesn't mind Jane planning his schedule but he does complain when he helps out with
the household chores because she consistently complains that he hasn't followed her instructions
properly. For example, if he does the shopping but does not get the right discounted items, Jane
gets upset and accuses him of being careless and extravagant. Jane continually tells everyone
that if she wants something doing properly, she has to do it herself, and she will religiously clean
the house in exactly the same way every week—whether things are dirty and untidy or not’.
Clinical Commentary
Jane exhibits many of the symptoms of OCPD and probably has the minimum four symptoms required
for a DSM‐5 diagnosis. These are a preoccupation with details, rules, lists, order, organisation, or
schedules to the extent that the major point of the activity is lost (e.g., she will do the housework each
week in exactly the same way regardless of whether this is necessary), she is excessively devoted to work
and productivity to the exclusion of leisure activities, she is reluctant to delegate tasks or to work with
others unless they submit to exactly her way of doing things, she shows rigidity and stubbornness, and
adopts a miserly spending style. From this brief case description you can see that Jane frequently gets
upset and anxious about family life because of her rigid perfectionism (and this may well lead to a
comorbid diagnosis of generalised anxiety disorder, see Chapter 6), and her rigid and inflexible behaviour
also puts severe strains on family relationships.
While these characteristics may seem very similar to the symptoms of obsessive‐compulsive disorder
(OCD) (see Chapter 6), the exact relationship between OCPD and OCD has been the subject of debate
for some time. Some clinicians have argued that OCPD is a precursor for the development of OCD
(Krockmalik & Menzies, 2003). However, OCPD is not a necessary precursor of OCD, and studies have found the prevalence of OCPD in patients diagnosed with OCD to range only between 23% and 34% (Albert, Maina, Forner, & Bogetto, 2004; Lochner et al., 2011). Nevertheless, regardless of whether
OCPD is a risk factor for OCD, individuals diagnosed with comorbid OCPD and OCD do appear to
exhibit more severe symptoms, are more functionally impaired, and are likely to develop other problems
such as alcohol dependence and depression (Garyfallos et al., 2010; Gordon, Salkovskis, Oldfield, &
Carter, 2013) (Case History 12.2).
12.2.4 Summary
While the different personality disorders we have discussed may seem to take quite contrasting forms
(e.g., some represent withdrawn and avoidant forms of behaviour, some are characterised by
behavioural and emotional lability and impulsivity, and others are characterised by intense fears of
criticism, rejection, and abandonment), they are all assumed within DSM‐5 to represent enduring
patterns of behaviour that we would consider to be close to the borderline of what is
adaptive/maladaptive, normal/abnormal, or culturally acceptable/unacceptable. Because the
behavioural styles of individuals with personality disorders can be conceptualised as lying on normal personality dimensions, albeit at the extremes of those dimensions (Costa & McCrae, 1990), there is an
issue about what it is that is ‘disordered’ or ‘abnormal’ about personality disorders, and this is likely to
be addressed with dimensional measurements for personality traits in future editions of DSM (see
Section 12.1.3) (see Activity Box 12.1 on the book's website).
SELF‐TEST QUESTIONS
Personality disorders generally consist of a loosely bound cluster of subtypes. What are the
four common features of all personality disorders?
What are the three clusters of personality disorders listed in DSM‐5, what are the
disorders listed in each cluster, and what are their main defining features?
Can you list the diagnostic criteria for (a) APD and (b) BPD?
Schizophrenia spectrum disorder, bipolar disorder spectrum, and social anxiety spectrum
are broader disorder categories associated respectively with which individual personality
disorders?
SECTION SUMMARY
SELF‐TEST QUESTIONS
What is the estimated prevalence rate for personality disorders in the general population?
Do prevalence rates for personality disorders vary with culture and ethnicity?
Psychodynamic approaches
In the case of both paranoid and schizoid personality disorders, psychodynamic theorists have argued
that the causes of these disorders lie in the relationships that the sufferer had with their parents. In the
case of paranoid personality disorder, parents may have been demanding, distant, overrigid, and
rejecting (Manschreck, 1996), and the lack of love provided by parents makes the individual suspicious
and lacking in trust of others (Cameron, 1974). In contrast, parents of individuals with schizoid
personality disorder may have rejected or even abused their children, resulting in the child being unable
to give or receive love (Carstairs, 1992). As we shall see later, there is certainly some evidence that
individuals with personality disorders may have suffered childhood abuse and neglect (Johnson et al.,
1999; Waxman, Fenton, Skodol, Grant, & Hasin, 2014), so there is some supportive evidence for this
view.
conduct disorder (CD) A pattern of behaviour during childhood in which the child exhibits
a range of behavioural problems, including fighting, lying, running away from home, vandalism
and truancy.
Many studies have emphasised that adolescent problem behaviours are strong predictors of adult APD.
McGue & Iacono (2005) found that adolescent smoking, alcohol use, illicit drug use, police trouble, and
sexual intercourse (all before 15 years of age) each significantly predicted APD symptoms in later life. In
fact, for those who exhibited four or more of these problem behaviours prior to age 15, there was a 90%
likelihood of subsequent APD diagnosis in males and a 35% probability in females. A possible link
between childhood attention‐deficit/hyperactivity disorder (ADHD) and APD is discussed in Focus
Point 12.2. Furthermore, harsh parental discipline and poverty strongly predict adult APD (Jaffee,
Strait, & Odgers, 2012), so there may be some form of vicious cycle occurring where the antisocial
behaviour of a child can provoke harsh discipline from parents which exacerbates antisocial behaviours.
Perhaps disappointingly, most of these studies merely indicate that adult antisocial behaviour defined by
APD is predicted by adolescent and childhood antisocial behaviour. However, such studies do
demonstrate that the behaviour patterns are often enduring and that these behaviours during childhood
and early adolescence should be taken as indicators of the possible need for intervention. For factors
involved in causing APD we need to explore developmental, psychological and biological factors (Focus
Point 12.3).
Developmental factors
There are a range of views about how familial factors might influence the development of APD, and
because APD is an antisocial disorder, there has been much speculation about how maladaptive
socialisation might have contributed to this pattern of behaviour. One important fact is that there is a
high incidence of APD in the parents of individuals with APD (Paris, 2001), suggesting that one
important developmental factor may be the learning of antisocial behaviours through modelling and
imitation (although this may also indicate a genetic or inherited component—see below). For example,
the children of parents with APD may often see aggressive and deceitful behaviour rewarded—
especially if a parent has had a relatively successful criminal career. Alternatively, parents may have
patterns of parenting which inadvertently reward their children for aggression, impulsivity, and
deceitfulness (Capaldi & Patterson, 1994). For instance, parents may try to calm down an aggressive or
impulsive child by giving him/her toys or sweets—a reaction which is likely to increase the frequency of
such behaviours rather than suppress them.
Parents may play a more subtle role in developing APD tendencies through the emotional relationship
they have with their children. Psychodynamic explanations of APD argue that a lack of parental love
and affection during childhood is likely to lead to the child failing to learn trust (Gabbard, 1990). This
lack of love and affection can take a number of forms, and there is evidence that individuals with APD
come from backgrounds of family violence, poverty, and conflict—including separation and divorce
(Farrington, 1991; Paris, 2001). In such circumstances, the child is likely to have had little experience of
positive emotional relationships and is more likely to have experienced conflict and aggression as a
normal way of life. Finally, some studies have identified both inconsistent parenting and harsh parenting
(e.g., corporal punishment, hitting, kicking, slapping, and emotional coercion, such as insulting,
threatening, or belittling) as being important in developing antisocial behaviours (Burnette, Oshri,
Richards, & Ragbeer, 2012), and parents of individuals with APD frequently fail to be consistent in
disciplining their children and also fail to teach them empathy and responsibility (Marshall & Cooke,
1999). At least one reason for this lack of consistency in parenting is that many of the fathers of
individuals with APD also exhibit the disorder.
Some researchers have suggested that conduct disorder (CD) in childhood is not the only
psychological diagnosis that predicts APD in later life. Lynam (1998) has argued that children
with hyperactivity/attention deficits (such as ADHD) are ‘fledgling psychopaths’ who, because of their impulsivity and attentional problems, are likely to develop into long‐term psychopaths—not least because their underlying problems are neuropsychological in nature and therefore likely to be resistant to behavioural treatments. However, more recent studies that have been based on
structured diagnostic interviews do not necessarily support this view. Lahey, Loeber, Burke, &
Applegate (2005) investigated whether a diagnosis of CD or ADHD in males between 7 and 12
years of age predicted a diagnosis of APD at 18–19 years. While CD predicted subsequent
APD in around 50% of the participants, ADHD predicted APD at rates no better than if the
child had neither ADHD nor CD at ages 7–12 years (see figure), suggesting that ADHD during
childhood is not a significant differential predictor of APD in later life. More recent analyses
suggest only a weak link between ADHD and APD in prospective studies (Klein et al., 2012), or
that if youths with comorbid conduct disorder and ADHD do develop APD it is due to the
levels of conduct disorder, not the influence of ADHD (Smith & Hung, 2012).
Lahey et al. (2005) investigated whether a childhood diagnosis of CD or ADHD predicted a diagnosis of APD
at 18–19 years of age. The results show that while around 50% of those diagnosed with either CD or CD and
ADHD went on to develop APD, ADHD did not predict subsequent APD any better than if a child had neither
disorder.
FOCUS POINT 12.3 PREDICTORS OF ANTISOCIAL
BEHAVIOUR AND VIOLENT CRIME
APD is closely associated with criminal and antisocial behaviour, and so some efforts have been
focussed on attempting to identify childhood predictors of these behaviours. The hope here is
that being able to identify such individuals at an early stage may prevent crime and enable
either treatment or reeducation programmes to be directed at individuals at risk of developing
APD.
Some childhood and early adolescent predictors of APD that have been identified include:
A diagnosis of CD in childhood
Persistent and aggressive behaviour before age 11 years
Fighting and hyperactivity
Low IQ and low self‐esteem
Persistent lying
Running away from home
Vandalism
Truancy
Discordant and unstable family life
Educational failure
Adolescent smoking, alcohol use, illicit drug use, police trouble, and sexual intercourse all
before age 15 years.
Having at least one parent diagnosed with APD
Coming from a background of family violence, poverty, and conflict
However, we must be cautious about how we interpret these developmental factors. They may not
represent causal factors in the development of APD but merely represent failures and inconsistencies in
parenting that are a consequence of having a child with severely disruptive and impulsive behaviour. We
must also remember that because an individual with APD may have a parent with the disorder does not
mean that they have learnt such behaviours from the parent—the disorder may involve psychological
and biological dysfunctions that may be inherited rather than learnt (such as maladaptive physiological
reactions that give rise to impulsivity and risk seeking, see below).
Genetic factors
There is clear evidence that APD runs in families, and apart from the developmental factors that may contribute to this effect, there is also the real possibility that there is a genetic or inherited
component to APD. Twin studies have demonstrated significantly higher concordance rates for APD in
monozygotic (MZ) twins than in dizygotic (DZ) twins (Lyons et al., 1995), and adoption studies have also
shown that incidence of APD in the adopted child is better predicted by APD in the biological than in
the adopted mother (Ge et al., 1996). The heritability of APD and antisocial psychopathy traits appears to be
relatively high, with recent estimates of heritability between 51% and 81% (Rosenstrom et al., 2017;
Tuvblad, Bezdjian, Raine, & Baker, 2014; Torgersen et al., 2012), with the highest estimates being
related to aggressive aspects of APD (Burt & Donnellan, 2009). In addition, studies suggest that the
heritability of APD and substance use disorder may be related, and this may account in part for the
high comorbidity between these two disorders (Kendler et al., 2003; Gizer et al., 2012).
Molecular genetic studies are beginning to identify the specific gene locations of some APD
characteristics, with the long allele of the serotonin transporter gene 5‐HTTLPR linked specifically with
the emotional deficits found in APD (Sadeh, Javdani, & Verona, 2013). Possession of a genotype
conferring low levels of monoamine oxidase A (MAO‐A) together with the experience of childhood
abuse does appear to predict adult antisocial behaviour (Caspi & Moffitt, 2006), and this genotype is
associated with volume reductions in the amygdala, anterior cingulate, and orbitofrontal cortex (Meyer‐
Lindenburg et al., 2006) and provides a potential pathway from genes, to brain, to antisocial behaviour
that implicates prefrontal and amygdala regions in antisocial behaviour (Raine, 2018, see subsequent
sections).
Finally, epigenetics is the process by which early experiences may affect the expression of certain genes,
and one process that can prevent the expression of a gene is DNA methylation (a process by which
methyl groups are added to the DNA molecule and typically act to repress gene transcription). Greater
methylation of the OXTR gene in children with CD has been associated with callous, unemotional
traits and lower circulating oxytocin—a hormone involved in empathy and amygdala functioning
(Dadds et al., 2014). Similarly, aggressive males have been found to have greater methylation of the
SLC6A4 serotonin transporter gene, as have women with a diagnosis of APD (Wang et al., 2012; Beach,
Brody, Todorov, Gunter, & Philibert, 2011), indicating that antisocial personality may be associated with
abnormalities in the serotonin system. If DNA methylation occurs as a result of particular types of early
experiences (e.g., childhood stress and abuse), then research may be able to identify how certain adverse
early experiences affect DNA expressions that have neurodevelopmental consequences which in turn
facilitate the development of antisocial traits.
Cognitive models
Some recent models have argued that individuals with APD have developed dysfunctional cognitive
schemas that cause their responses to various situations to be extreme, impulsive, and changeable. For
example, Young, Klosko, and Weishaar (2003) have suggested that individuals with APD possess five
important and relatively independent dysfunctional schemas, and—when responding to important
events—they are assumed to switch quickly and unpredictably between schemas in a way that makes
their behaviour appear impulsive and unpredictable. Young et al. proposed five important schema
modes that determine the responses and reactions of individuals with APD. As we will see from the
nature of these schema modes, it is claimed that they are developed as a result of abuse and neglect
experienced during childhood (Horowitz, Widom, McLaughlin, & White, 2001; Marshall & Cooke,
1999). The five dysfunctional schemas are (a) the Abandoned and Abused Child mode (generating
feelings of pain, fear of abandonment, and inferiority), (b) the Angry and Impulsive Child mode (where
bottled up aggression is discharged as anger), (c) the Punitive Parent mode (where the individual views
themselves as having done something wrong, evil, and worthless), (d) the Detached Protector mode (a
state where the individual endeavours not to feel the pain and emotion caused by the first three modes),
and (e) the Bully and Attack mode (where the individual hurts other people to overcompensate for, or to
cope with, mistrust, abuse, deprivation, and defectiveness) (Lobbestael, Arntz, & Sieswerda, 2005). The
development of instruments to measure these various schema modes has shown that individuals
diagnosed with APD do indeed score higher on measures of these five dysfunctional modes than
nonclinical participants (Lobbestael, Arntz, & Sieswerda, 2005). Individuals with APD are assumed to
switch rapidly and unpredictably from a ‘Healthy Adult’ mode—where their behaviour appears normal
—to pathological modes, and this can occur rapidly when the individual experiences negative emotions
such as anger (Lobbestael & Arntz, 2012). Because schemas such as these form part of the individual's
normal way of thinking, the person with APD does not recognise them as faulty. If such dysfunctional
schemas do represent important causal factors in the antisocial behaviour exhibited by individuals with
APD, then challenging and replacing these dysfunctional schemas may represent a useful starting point
for treating the disorder (Beck & Freeman, 1990).
A seminal study by Lykken (1957) suggested that individuals with what was then labelled as a
sociopathic personality were unable to learn to avoid physically aversive stimuli, and this
learning deficit may explain why individuals with APD are able to ignore threatening signals
and also appear to lack the ability to learn from experience about events that have negative
outcomes.
This learning deficit can be demonstrated in a simple laboratory‐based conditioning
experiment, and laboratory studies such as this often serve as good analogues of real‐life
learning situations.
A study by Birbaumer et al. (2005) replicated Lykken's original study. They used a differential
aversive conditioning procedure in which male faces acted as the conditioned stimuli (CSs). For
some participants, faces with a moustache were followed by an aversive unconditioned stimulus
(US) (in this case a painful pressure applied to the hand or arm) (CS+), while faces without a
moustache were followed by nothing (CS‐). For other participants the painful US followed the
faces without moustaches, and the moustached faces were followed by nothing (a
counterbalanced procedure so that conditioning could not be affected by the specific features of
the CS).
Normally, participants would show signs of anxiety during the CS+ (as recorded by
physiological measures such as skin conductance levels), and would also rate the CS+ face as
less pleasant than the CS‐ face. Birbaumer et al. compared the performance of 10 psychopaths
(six of whom met DSM‐IV criteria for APD) with 10 healthy control participants. While the
normal, healthy participants rated the CS+ as significantly less pleasant than the CS‐, the
psychopaths showed no difference in pleasantness ratings even after 16 pairings of CS+ with
the US, suggesting that they had failed to learn the significance of the aversive‐signalling CS+.
Evidence suggests that individuals with BPD have a number of brain abnormalities that may give rise to
impulsive behaviour. They tend to possess relatively low levels of the brain neurotransmitter serotonin,
and this is associated with impulsivity (Norra et al., 2003) and may account for their regular bouts of
depression (see Chapter 7). There is also some evidence for dysfunction in brain dopamine activity in
BPD, and such dopamine activity is known to play an important role in emotion information processing,
impulse control, and cognition (Friedel, 2004). However, much of this evidence is currently
circumstantial and derives mainly from the fact that administration of drugs that influence serotonin
and dopamine activity also appear to influence BPD symptoms.
Neuroimaging studies of individuals with BPD have revealed abnormalities in a number of brain areas,
primarily in frontal lobe functioning and in the limbic system, including the hippocampus and amygdala
(Juengling et al., 2003; Soloff et al., 2003). The frontal lobes play an important part in decision‐making, and abnormalities in this area in individuals with a diagnosis of BPD could underlie their impulsivity. The amygdala is an important part of the brain system controlling and regulating emotion, and abnormalities here may contribute to some of the defining behavioural features of BPD, including their emotion regulation problems (Davies, Hayward, Evans, & Mason, 2020). In addition, activation of the limbic
areas and the amygdala is often excessive, which may be responsible for the extreme emotional reactions
often displayed by individuals with BPD (Lis, Greenfield, Henry, Guile, & Dougherty, 2007; Silbersweig
et al., 2007). Other studies have also identified abnormalities in the medial prefrontal cortex and
anterior cingulate areas, areas of the brain which are crucial for self‐referential processing, successful
social cognition, interpersonal transactions, and emotional regulation (Visintin et al., 2016).
Nevertheless, while these abnormalities are important correlates of BPD symptoms, it is still far from
clear whether these abnormalities represent a consequence of the disorder or a genetically or
developmentally determined cause of the disorder (Lieb et al., 2004).
Psychological theories of BPD
We have seen that a majority of individuals with BPD report experiencing relatively high levels of
childhood abuse and difficult or neglectful parenting, and a number of psychological theories of BPD
attempt to explain how these experiences might relate to the behavioural and emotional problems
characteristic of the disorder.
Some forms of psychodynamic theory, such as object‐relations theory, argue that people are
motivated to respond to the world through the perspectives they have learnt from important other
people in their developmental past. However, if these important others have offered only inadequate
support and love, or in fact been actively abusive, then this is likely to cause the child to develop an
insecure ego, which is likely to lead to lack of self‐esteem, increased dependence and a fear of
separation and rejection—all central features of BPD (Bartholomew, Kwong, & Hart, 2001; Kernberg,
1985). Object‐relations theory also argues that individuals with weak egos engage in a defence
mechanism called splitting, which means that they evaluate people, events or things in a completely
black or white way, often judging people as either good or bad with no shades of grey. This may give
rise to their difficulties with relationships, in which their all‐or‐none assessments mean that someone
they evaluate as ‘good’ can just as quickly become ‘bad’ on the basis of a single act or statement (e.g., if
a partner does not return from a work social event at exactly the time they said they would, the
individual with BPD is likely to respond with anger and threaten to withdraw from the relationship).
People with a diagnosis of BPD also have a tendency to perceive others as quarrelsome, which triggers
negative affect and leads to more quarrelsome behaviour during their interactions with others (Sadikaj,
Moskowitz, Russell, Zuroff, & Paris, 2013). Interestingly, Suvak et al. (2011) found that individuals with
a diagnosis of BPD judged their own emotions primarily on dimensions of valence, and hardly at all on
dimensions of arousal. This suggests that they are likely to judge their emotions in an ‘all or nothing’
way with relatively little intensity control, leading to extreme swings in emotions.
object‐relations theory Argues that individuals with borderline personality disorder (BPD)
have received inadequate support and love from important others (such as parents) and this
results in an insecure ego, which is likely to lead to lack of self‐esteem and fear of rejection.
splitting An element of object relations theory which argues that individuals with weak egos
engage in a defence mechanism by which they evaluate people, events or things in a completely
black or white way, often judging people as either good or bad with no shades of grey.
While object‐relations theory is consistent with the fact that a majority of individuals with BPD have
experienced childhood abuse, conflict and neglect, one problem is that such experiences are common
features of many of the personality disorders (including antisocial, paranoid, narcissistic, and OCPD)
(Klonsky, Oltmanns, Turkheimer, & Fiedler, 2000). This being the case, an account such as object‐
relations theory does not easily explain how such negative early experiences get translated into BPD
rather than these other disorders which also have such experiences as part of their history.
We have already noted the high levels of comorbidity between the different personality disorders
(Marinangeli et al., 2000), and, in particular, between 10% and 47% of individuals with BPD also
display antisocial behaviour and meet the diagnostic criteria for APD (Zanarini & Gunderson, 1997).
This suggests that there may be some commonality of aetiology between the two disorders, and we have
already noted that significant childhood abuse and neglect is apparent in both groups. This has led
Young, Klosko, and Weishaar (2003) to suggest that individuals with BPD may develop a similar set of
dysfunctional schema modes to those acquired by individuals with APD. We have already described
these dysfunctional schemas in relation to APD (Section 12.4.2), and Young et al. have argued that these
dysfunctional schemas also determine reactions to events in individuals with BPD, such as dissociation
(Johnston et al., 2009). Subsequent studies have confirmed that individuals with APD and BPD do score
higher than control participants without a personality disorder diagnosis on measures of these
dysfunctional schemas, and also report levels of childhood abuse that were higher than controls
(Lobbestael, Arntz, & Sieswerda, 2005). This suggests a significant amount of similarity in both the
developmental history of BPD and APD and in the dysfunctional cognitive schemas that characterise
these disorders. This has led some researchers to argue that APD and BPD may be different
manifestations of one single underlying disorder which may express itself as BPD in women and APD
in men (Paris, 1997; Widiger & Corbitt, 1997).
In July 2005, Brian Blackwell—a 19‐year‐old public schoolboy from Liverpool—killed both his
parents and then used their credit cards to fund a £30,000 spending spree. After his arrest he
was subsequently diagnosed as suffering from narcissistic personality disorder and was reported
to have regularly fantasised about unlimited success, power, and brilliance. He had falsely
claimed to be a professional tennis player and applied for numerous credit cards to help fund
his fantasies. After the killings, Blackwell went on holiday to the US with his girlfriend, spending
huge sums of money while staying at expensive hotels in New York.
Clinical Commentary
Many researchers believe that narcissistic personality disorder is closely
associated with APD, and individuals with the disorder usually show clear signs of deceitfulness,
lying, lack of empathy with the feelings of others, acting impulsively and aggressively, showing
no remorse for acts of harm or violence, and going to any lengths to achieve their own personal
goals. Narcissistic personality disorder is differentiated from APD by the individuals’ grandiose
view of themselves and their need to brag about fantasised achievements. The parents of some
individuals with narcissistic personality disorder undoubtedly dote on them and may treat them
too positively in a way that fosters unrealistic grandiose self‐perceptions (Millon, 1996).
SECTION SUMMARY
Cluster A Disorders
Behavioural and genetic links between Cluster A disorders (paranoid, schizoid and
schizotypal personality disorders) and schizophrenia suggest that they may be part of a
broader schizophrenia spectrum disorder.
Psychodynamic approaches to paranoid personality disorder suggest that parents may have
been demanding, distant, overrigid, and rejecting, giving rise to a lack of trust in others.
The risk of all three types of Cluster A disorder is increased in relatives of individuals
diagnosed with schizophrenia, suggesting a genetic link between schizophrenia and the
Cluster A personality disorders.
Antisocial Personality Disorder
One of the best predictors of APD in adulthood is conduct disorder in childhood.
Adolescent smoking, alcohol use, illicit drug use, police trouble, and sexual intercourse
before the age of 15 significantly predict APD in later life.
APD appears to run in families, suggesting that APD may be acquired through social
learning and imitation.
Psychodynamic approaches to APD suggest that lack of parental love and affection during
childhood and inconsistent parenting is likely to lead to the child failing to learn trust.
Heritability of APD traits appears to be relatively high, with estimates between 51% and
81%, with highest estimates related to aggressive traits.
DNA methylation may occur as a result of particular types of early experiences (e.g.,
childhood stress and abuse), and this has neurodevelopmental consequences which may
facilitate the development of antisocial traits in individuals with a diagnosis of APD
through epigenetic processes.
Individuals with both APD and BPD appear to possess a set of dysfunctional cognitive schemas
that give rise to their unpredictable mood swings and impulsive behaviour.
Individuals with APD show a number of physiological and neurological characteristics,
such as physiological indicators of low anxiety, low levels of baseline arousal and reactivity,
lack of learning in simple aversive conditioning procedures, and neurological impairments
indicative of impulsivity.
Cluster C Disorders
Having a family member diagnosed with either social anxiety disorder or avoidant personality
disorder increases the risk for both disorders 2–3 fold, suggesting that both social phobia and
avoidant personality disorder may be part of a broader social anxiety spectrum that has a
genetic element.
Dependent personality disorder has many features similar to depression, including
indecisiveness, passiveness, pessimism, self‐doubting, and low self‐esteem, and drugs used
to treat depression are also successful at alleviating the symptoms of dependent personality
disorder.
Dependent personality disorder is also regularly comorbid with a number of other
psychopathologies, particularly social anxiety disorder, panic disorder and OCD.
The reported comorbidity of obsessive–compulsive personality disorder in individuals with OCD
is as low as 22%, suggesting that the two disorders may not be closely related.
Dialectical behaviour therapy (DBT) can be split into four distinct stages: (a) addressing dangerous and
impulsive behaviours and helping the client to learn how to manage these behaviours, (b) helping the
client to moderate extremes of emotionality (e.g., learning to tolerate emotional distress), (c) improving
the client's self‐esteem and coaching them in dealing with relationships, and (d) promoting positive
emotions such as happiness.
This approach has been particularly successful with individuals with BPD (Robins & Chapman, 2004;
Bloom, Woodward, Susmaras, & Pantalone, 2012), has been shown to have long‐lasting positive effects
on suicidal and nonsuicidal self‐harm behaviours, depression, interpersonal functioning, anger control,
and rehospitalisation (McMain, Guimond, Streiner, Cardish, & Links, 2012; Linehan, Heard, &
Armstrong, 1993), and it is particularly effective as a treatment for BPD over the longer term when
combined with appropriate medication (Soler et al., 2005).
The normal stages through which CBT would progress in the treatment of a personality
disorder are the following:
1. During the initial sessions the therapist will deal with any coexisting psychiatric problems
(usually specific anxiety disorders such as panic disorder or social anxiety disorder or major
depression).
2. The therapist then teaches the client to identify and evaluate key negative automatic
thoughts (e.g., ‘nobody likes me’ or ‘I am worthless’ in avoidant or dependent personality
disorders).
3. The therapist will then structure the sessions carefully to build a collaborative and trusting
relationship with the client—especially in the case of those disorders where the client is
distrusting or manipulative (e.g., BPD).
4. The therapist may then employ guided imagery to unravel the meaning of new and earlier
experiences that may have contributed to the dysfunctional behaviour patterns (such as
problematic early childhood and parenting experiences).
5. In collaboration with the client, the therapist will prepare homework assignments tailored
to the client's specific issues.
6. Finally, the therapist will apply specific cognitive, behavioural, and emotion‐focused
schema restructuring techniques to dispute core beliefs and to develop new and more
adaptive beliefs and behaviour (see also Schema Therapy).
The two main treatment objectives are, first, to help the client develop new and more adaptive core
beliefs, and second, to help the client develop more adaptive problem‐solving interpersonal
behaviours.
FIGURE 12.3 A schematic representation of the way that dysfunctional schemas are thought to develop in borderline
personality disorder (BPD). See text for further elaboration.
After Arntz, 1999.
The development of specific CBT procedures for personality disorders is still an active process.
Procedures have been developed that are specific to the cognitive and behavioural requirements of
individual disorders (Beck & Freeman, 1990), and current attempts are being made to identify
maladaptive schemata and dysfunctional beliefs that are important determinants of the behavioural
styles typical of individual personality disorders (Young, Klosko, & Weishaar, 2003; Beck, Broder, &
Hindman, 2016; see Table 12.16). It is still too early to say with any confidence that CBT
offers an important and effective method of treatment for personality disorders. However, a number of
controlled studies have shown that CBT for personality disorders is superior to non‐therapy control
conditions in reducing symptoms (Linehan, Tutek, Heard, & Armstrong, 1994; Linehan et al., 1999)
and is as effective as psychodynamic therapy (Leichsenring & Leibing, 2003).
schema therapy Central to this approach is the concept of early maladaptive schemas
(EMSs) that are thought to develop during childhood and result in dysfunctional beliefs and
behaviours during adulthood.
TABLE 12.16 Core beliefs and strategies of individuals with personality disorder diagnoses—target beliefs for CBT
From Beck, Broder, & Hindman, 2016.
Personality Core Belief Assumptions Coping Strategies
Disorder About Self
Antisocial I'm a potential If I manipulate, attack, and Lie, cheat, manipulate, threaten
victim. take advantage of others, I others, act in an overly aggressive
can protect myself and get fashion
what I want.
Avoidant I'm unlovable and If I avoid intimacy, I will be Avoid social interaction and negative
vulnerable to less likely to be rejected. If I emotion
negative emotion. avoid dysphoria, I won't fall
apart.
Borderline I'm bad, helpless, If I depend on others, guard Rely on others, be overly vigilant for
defective against being hurt, and act harm, reject others to forestall
(unlovable), likely aggressively, I can survive. inevitable rejection from others
to be abandoned.
Dependent I'm weak and If I subjugate myself to Avoid independent problem solving
incapable. others, they'll take care of and decision making, go overboard
me. in pleasing others
Histrionic I'm nothing. If I entertain others, they'll Be inappropriately dramatic,
be drawn to me. seductive, entertaining
Narcissistic I'm inferior. If I demand entitlements and Put others down to display own
accolades, it will show I'm superiority; expect special favours
special. and accommodations, brag about
accomplishments
Obsessive‐ I'm vulnerable to If I'm highly responsible, Act in overly responsible ways,
compulsive bad things maintain order and control, rigidly control self and others, strive
happening. and perform to the highest, for perfection
I'll be okay.
Paranoid I'm vulnerable. If I'm on guard, I can protect Mistrust others, look for hidden
myself. motives
Schizoid I don't fit in. If I avoid others, I'll be okay. Distance themselves from others
Schizotypal I am different and If I get close to others, they'll Mistrust others, try to be different in
vulnerable. be unfriendly. a ‘special’ way
SELF‐TEST QUESTIONS
Can you name the factors that make personality disorders difficult to treat?
What is the evidence in favour of a role for drug treatment in the management of
personality disorders?
What are the difficulties involved in adapting CBTs to the treatment of personality
disorders such as APD and BPD?
What are the main features of object‐relations psychotherapy and dialectical behaviour
therapy as applied to the treatment of personality disorders?
What is schema‐focused cognitive therapy and how does it approach the treatment of
personality disorders?
SECTION SUMMARY
CHAPTER OUTLINE
13.1 THE DIAGNOSIS AND CHARACTERISTICS OF SOMATIC SYMPTOM
DISORDERS
13.2 THE AETIOLOGY OF SOMATIC SYMPTOM DISORDERS
13.3 THE TREATMENT OF SOMATIC SYMPTOM DISORDERS
13.4 SOMATIC SYMPTOM DISORDERS REVIEWED
LEARNING OUTCOMES
When you have completed this chapter, you should be able to:
1. Describe the main diagnostic criteria and symptom characteristics for the DSM‐5 listed
somatic symptom disorders and evaluate some of the issues concerning diagnosis and
comorbidity.
2. Describe and evaluate the main factors contributing to the aetiology of somatic symptom
disorders, and be able to compare psychological and biological explanations.
3. Describe and evaluate three or four treatments that have been developed to address
somatic symptom disorders.
I have a core belief that I am dying, I know it's silly but you can't reason with a core belief, on top of that a huge
mistrust of doctors after one diagnosed my father with anxiety instead of checking him for bowel cancer (which he
actually had) and therefore delaying his treatment. It doesn't get any better when you get one doctor diagnosing you
with a UTI and the other saying that all UTIs are in my imagination so I end up avoiding doctors because all I
am told is that I am making it up to then get a rather nasty infection because it actually wasn't in my head after
all. I would love to know how I am supposed to think rationally when I can't even trust doctors anymore. I don't
spend my entire life worrying, I don't sit around doing nothing, despite the assumption that many GPs make, I am
not stressed but I am busy but as soon as I get a worrying symptom that is it, I can no longer function.
Kirsty's Story
Introduction
How often do we have physical symptoms such as aches and pains that trigger worries about contracting
cancer or heart disease? How often do we worry about becoming ill or even dying—even when we have
no physical symptoms of illness? For some people these everyday experiences are enough to cause
significant distress and to interfere with their normal day‐to‐day living. When such concerns and worries
become obsessive or a source of chronic anxiety or depression, they may be diagnosed as a somatic
symptom disorder. This category of disorders includes somatic symptom disorder, illness anxiety
disorder (formerly known as hypochondriasis or health anxiety), and conversion disorder. This is a new
category of disorders listed in DSM‐5, and all of these disorders share a common feature—the
prominence of somatic symptoms associated with significant distress and impairment. With somatic
symptom disorder individuals find somatic symptoms—either real or imagined—distressing and
spend significant amounts of time in medical settings attempting to seek a diagnosis for symptoms that
may either be trivial or not continuously present. Illness anxiety disorder is a preoccupation with
having or acquiring an illness to the extent that there is a high level of anxiety about health that both
causes distress and interferes with normal daily living. Conversion disorder is when the individual
begins to experience symptoms of altered motor or sensory functioning (e.g. unable to voluntarily move
a hand, or temporary blindness)—even when there is little or no evidence of relevant neurological
impairment. As you can imagine there is often a good deal of overlap in symptoms between these three
disorders, and that is one reason why they have been grouped together in their own chapter in DSM‐5.
In all cases, DSM‐5 emphasises that diagnosis should be made on the basis of positive symptoms such as
distress related to somatic symptoms and dysfunctional thoughts about health and not on excessive
anxiety in the absence of somatic symptoms (as had previously been the case in DSM‐IV‐TR). This is
because it is rarely the case that somatic symptom disorders occur in the absence of actual somatic
symptoms (even with conversion disorder), and so the exaggerated responses that develop to health and
physical symptoms in these disorders can frequently have their basis in actual somatic symptoms. This
can be seen in Kirsty's Story at the beginning of this chapter. She reports having a number of experiences
with GPs in relation to illness that she believes have given rise to her illness anxiety—an illness anxiety
that has become significantly disabling whenever she develops any symptoms of illness. Many
individuals with somatic symptom disorders believe that their problems are genuinely medical and are
often disbelieving when told there is no diagnosable evidence for a medical problem. In addition, those
with symptoms that mimic neurological disorders, such as full or partial blindness or loss of feeling
(anesthesia), genuinely believe they have a disability, but their normal functioning can often be
demonstrated in situations where drugs or hypnosis is used to alter levels of consciousness or where
elegant experimental methods are used to infer ability (e.g., Grosz & Zimmerman, 1970). It is also
important when diagnosing some somatoform disorders to differentiate true disorders from malingering
(Merten & Merckelbach, 2013). Claiming to have a physical illness when a person does not is not just a
ploy to avoid work or other situations that the individual may not enjoy but can also be an actively
deployed coping strategy at times of stress. The difference is that malingerers are fully aware that they
are exaggerating or inventing their symptoms, but individuals with somatoform disorders are not. This
is not an easy distinction to make, but malingerers will tend to be defensive when interviewed about
their symptoms, whereas many with somatoform disorders may often display a surprising indifference
about their symptoms (e.g., those with conversion disorder)—especially when the symptoms to most
people would be disturbing (e.g., blindness, paralysis). This is sometimes known as la belle
indifference, or “beautiful indifference.”
somatic symptom disorder A group of loosely associated disorders all of which can be
characterised by psychological problems manifesting as physical symptoms or as psychological
distress caused by physical symptoms or physical features.
One somatic symptom disorder that is thought to be related to malingering is factitious disorder
(previously known as Munchausen's Syndrome). Rather than being concerned with existing somatic
symptoms, factitious disorder is associated with the deliberate falsification of physical or psychological
symptoms and the induction of injury, illness, or disease through deception, and this may include
reporting fictitious neurological symptoms or deliberately manipulating laboratory tests (e.g., by adding
blood to urine). In the case of malingering, the individual may intentionally produce symptoms for a
specific reason (e.g., to avoid jury service, to avoid working in a stressful environment). In contrast, in
factitious disorder the individual's motivation is to adopt the sick role—perhaps because of the attention
that this role may bestow on them. Individuals diagnosed with factitious disorder often have a history of
pathological lying and have developed an extensive knowledge of medicine and medical terminology. A
related disorder is factitious disorder imposed by another, in which parents or carers make up or
induce physical illnesses in others (such as their children). The reasons that drive individuals to
deliberately make others ill are unclear, although such individuals often crave the attention and praise
they receive in caring for someone who is ill (see Focus Point 13.1).
la belle indifference An indifference about real symptoms (especially when the symptoms
would be disturbing to most people) sometimes displayed by individuals with somatic symptom
disorders.
We continue by describing the DSM‐5 diagnostic criteria and the main characteristics of somatic
symptom disorder, illness anxiety disorder, conversion disorder and factitious disorder.
13.1 THE DIAGNOSIS AND CHARACTERISTICS OF SOMATIC
SYMPTOM DISORDERS
Beverley Allitt was a nurse who was convicted in 1993 of killing four children and injuring nine
others at Grantham Hospital, Lincolnshire. While working on a children's ward in the hospital
she was found to be secretly injecting infants with insulin—a drug that induced cardiac arrest,
causing death and brain damage. During the time that she was involved in these killings, she
was also befriending the parents of her victims and displaying what appeared to be a caring and
sympathetic manner. She received 13 life sentences for these crimes, yet her motives for the
killings have never been fully explained. One theory is that she suffered from factitious
disorder imposed by another (previously known as Munchausen's Syndrome by Proxy),
which is a controversial diagnosis in which sufferers are prompted to deliberately falsify illnesses
in others in order to attract attention to themselves.
What motivates some carers and parents to deliberately inflict illness, pain, and even death
on young children? Most mothers diagnosed with factitious disorder imposed by
another are emotionally needy and require attention and praise, and they receive this when
appearing caring and loving towards their ill child. They often have poor relationships with
their partners, receive little in the way of support outside of the medical environment, and
regularly exhibit low self‐esteem. Many have a good knowledge of medicine and medical
procedures that allows them to cause their child's illness with a minimum of suspicion (Adshead
& Brooke, 2001; Bluglass, 2001).
The syndrome is notoriously difficult to diagnose. This is because most of the victims are very
young children, many of whom may have genuinely experienced acute life‐threatening events
whose causes are difficult to detect (such as sudden infant death syndrome, SIDS) (Galvin,
Newton, & Vandeven, 2005). In such circumstances, carers who present the problems of their
children in unusual ways are often treated with suspicion—especially if their own emotional
needs are consistent with those often found in factitious disorder imposed by another (Pankratz,
2006).
TABLE 13.1 Summary: DSM‐5 diagnostic criteria for somatic symptom disorder
Shows at least one somatic symptom (present for at least 6 months) that causes distress or
disruption in everyday life
Unwarranted thoughts, feelings or behaviours related to the somatic symptoms or associated
health concerns, indicated by at least one of the following:
Disproportionate and persistent thoughts about how serious the symptoms are
Constantly high levels of anxiety about symptoms or health in general
Unwarranted levels of time and energy devoted to symptoms or health concerns
Somatic symptom disorder is also closely associated with other psychiatric diagnoses such as anxiety
disorders and major depression (Gureje, Simon, Ustun, & Goldberg, 1997). In younger individuals it
can be associated with impulsive and antisocial behaviour, suicide threats and deliberate self‐harm,
making the lives of such individuals chaotic and complicated. It is difficult to estimate the prevalence
rates of somatic symptom disorder because it is such a new diagnostic category. Based on previous
similar diagnostic categories DSM‐5 predicted lifetime prevalence rates of around 1% (DSM‐5,
American Psychiatric Association, 2013, p. 312), but subsequent reviews suggest it may be as high as 5–7% in the general population (Kurlansik & Maffei, 2016), and 17% in the primary care patient
population (Creed & Barsky, 2004). There may also be cultural variations in the way in which somatic
symptoms are either described or accepted which may affect the diagnosis of somatic symptom disorder.
For example, some cultures give negative meaning to many bodily symptoms that in other cultures are
not described in those ways (e.g., too much heat in the body, burning in the head, etc.), and which serve
as the basis for worry about somatic symptoms.
glove anaesthesia A conversion disorder symptom in which numbness begins at the wrist and
is experienced evenly across the hand and all fingers.
hysteria A common term used in psychodynamic circles to describe conversion disorder (prior
to the latter’s inclusion in the DSM).
Care must be taken to ensure that conversion symptoms are not the result of developing
neurological problems, and it is estimated that between 13% and 30% of individuals diagnosed with
conversion disorder have later been found to develop some relevant neurological deficit (Kent,
Tomasson, & Coryell, 1995; Maldonado & Spiegel, 2003). Conversion disorder symptoms can develop
throughout the life course and are often seen to develop after some stressful life event (see Kanaan &
Craig, 2019, for a discussion of this issue). Severity of the symptoms can be linked to the severity of the
life stressor, and important stressful life events that can contribute to conversion disorder include work
experiences and relationship difficulties (Roelofs et al., 2005). However, symptoms can often
spontaneously remit, only to return at a later time, and there is some evidence to suggest that a history
of trauma and childhood abuse may be a vulnerability factor (Bowman & Markand, 1996).
The lifetime prevalence rate of conversion disorder is thought to be less than 1%, and it is significantly
more common in women than in men (Maldonado & Spiegel, 2003). There are also important cultural
differences in the way that conversion disorder manifests itself. For example, Janca, Isaac, Bennett, and
Tacchini (1995) found that sexual and menstrual symptoms were prominent in Western cultures, while
complaints of body temperature irregularities were found only in Nigeria, kidney problems only in China,
and body odour complaints only in Japan. In addition, the lower the economic or educational standards
in a culture or community, the higher the prevalence of conversion disorder (Maldonado & Spiegel,
2003). Similarly, the higher the educational standards in a community the more likely it is that the
symptoms will resemble a known medical or neurological disorder (Iezzi et al., 2001). Conversion
disorder is also highly comorbid with other disorders, particularly anxiety disorders such as panic
disorder, and depressive disorders. A study by Sar, Akyuz, Kundakci, Kiziltan, and Dogan (2004) found
at least one other psychiatric diagnosis in 89.5% of a group of individuals with a diagnosis of
conversion disorder.
SELF‐TEST QUESTIONS
What are the main diagnostic criteria for somatic symptom disorder?
What are the main psychiatric disorders that tend to be comorbid with somatic symptom
disorders?
Can you describe the main diagnostic criteria for illness anxiety disorder, and by what
other name was this disorder previously known?
Can you describe the main diagnostic criteria for conversion disorder together with its
main features?
How do cultural factors affect the prevalence rate and manifestation of conversion
disorder symptoms?
What is factitious disorder and what are its main diagnostic criteria?
How does factitious disorder imposed by another differ from basic factitious disorder?
SECTION SUMMARY
Underlying sexual conflict was also seen by psychodynamic theorists as being an important contributor
to other disorders such as somatic symptom disorder and illness anxiety disorder. Freud believed that
repressed sexual energy was often turned inward on the self, transforming it into physical symptoms that
created physical pain or were interpreted as indicators of illness and disease. Indeed, psychodynamic
theorists often view those suffering from somatic symptom disorders as regressing to the state of a sick
child, unconsciously seeking attention and relief from symptoms and responsibilities, and thus reducing
experienced anxiety (Kanaan & Craig, 2019; Kellner, 1990; Kuechenoff, 2002; Phillips 1996).
These psychodynamic accounts appear to make intuitive sense in that those who develop somatic
symptom disorders often appear to have either a history of conflict, stress, and abuse or have recently
experienced an important life stressor (Bowman & Markand, 1996; Roelofs et al., 2005), and an
important aspect of the psychodynamic conflict‐resolution model is that the physical symptoms either
cause relief from anxiety or from having to deal with current conflicts and stress (Temple, 2002).
Consistent with this view, some studies of conversion disorder have shown that onset of the disorder is
preceded by stress or trauma events and that the nature of these events is such that their negative or
distressing effects can be ameliorated by becoming ill (Nicholson et al., 2016). However, in contrast,
other studies have found that as few as 13% of conversion disorder sufferers can point to a traumatic
event preceding symptom onset (Kranick et al., 2011), although this latter study defined traumatic
events as 'life threatening', and many of the life stressors that precipitate conversion disorder may not be
life threatening at all, merely difficult for the individual to cope with (e.g., the break‐up of a relationship
or the birth of a child) (Kanaan & Craig, 2019). Nevertheless, disorders such as somatic symptom
disorder and illness anxiety disorder appear to involve high levels of anxiety (Noyes et al., 1994), and a
sizable minority of those with conversion disorder also fail to exhibit the calming effects of 'la belle
indifference' (Gureje et al., 1997). Thus, there is some evidence that somatisation may be an attempt to
alleviate the pain and distress of a recent negative life event, but it is also clear that for many individuals
suffering somatisation symptoms, these symptoms themselves are distressing.
sick role Playing the role of being sick as defined by the society to which the individual
belongs.
Nevertheless, there does seem to be some reasonable evidence that children may learn somatising
attitudes from their parents in various ways, and this may provide a basis for the possible development
of somatic symptom disorders in later life.
As well as interpretation biases, individuals with somatic symptom disorders also exhibit attentional
biases that lead to preferential processing of health‐relevant information, including increased attention to
bodily sensations and a limited ability to distract from illness‐signalling information (Marcus et al.,
2007; Warwick & Salkovskis, 1990). This attentional bias is also associated with hyperactivation in brain
regions such as the amygdala that are crucial for an arousal‐related fear response (Mier et al., 2017).
Recent studies also suggest that individuals with somatic symptom disorders may have a memory
bias towards remembering and retrieving illness relevant material. This bias also appears to make it
difficult for somatic symptom disorder sufferers to suppress illness‐related material, which may be
rapidly retrieved from memory when thinking about potential symptoms or illnesses (Wingenfeld,
Terfehr, Meyer, Lowe, & Spitzer, 2013).
FIGURE 13.2 This cognitive model of illness anxiety disorder illustrates how physical symptoms or bodily sensations
evoke negative automatic thoughts about illness. These thoughts then trigger feelings of anxiety, which in turn trigger a range
of behavioural, cognitive, and mood reactions that reinforce biased beliefs and illness anxiety symptoms.
After Warwick (1995).
memory bias Individuals with many psychopathologies may have a bias towards
remembering and retrieving illness relevant material.
One interesting feature of individuals with illness anxiety disorder and somatic symptom disorder is
their tendency to reject diagnoses that disagree with their own beliefs about their health and to seek
further opinions—presumably in the belief that someone will agree with their own view. Smeets, de
Jong, and Mayer (2000) found that individuals with illness anxiety disorder possessed a reasoning bias
that supported this ‘doctor shopping’. They would actively seek out and accept information that agreed
with their own view of their medical state but would ignore or reject arguments against their own
beliefs. This process will inevitably maintain hypochondriacal thinking and generalised anxiety about
health issues. In addition, individuals with somatic symptom disorders show greater attention allocation
to words and phrases that support their own beliefs about their health than those that don't (Witthöft,
Rist, & Bailer, 2009)—a process that is likely to reinforce existing dysfunctional beliefs.
reasoning bias The tendency of individuals with hypochondriasis to reject diagnoses that
disagree with their own beliefs about their health and to seek further opinions – presumably in
the belief that someone will agree with their view.
The preceding evidence strongly suggests that many somatic symptom disorders are maintained by
cognitive factors that take the form of (a) attentional biases to physical threats, (b) biases towards
interpreting body sensations and symptoms as threatening, (c) reasoning biases that maintain beliefs
about illness and being ill, (d) memory biases that facilitate the retrieval of illness relevant material, and
(e) catastrophising of symptoms. However, none of these accounts explain how the individual with a
somatic symptom disorder acquires these thinking and information processing biases. Some insight into
how these biases might develop has been provided by Brown (2004). Brown argues that “rogue
representations” are developed by a range of experiences, and these representations provide
inappropriate templates by which information is selected and interpreted. Rogue representations can be
created by experiences that include (a) a history of physical illness that causes a tendency to interpret
any sensation as a symptom of illness, (b) a history of experiencing emotional states that have strong
physical manifestations (e.g., anxiety is associated with shaking, palpitations, nausea, muscle tension,
chest pain, dizziness, etc.); such experiences might arise from childhood trauma and maltreatment and
result in a tendency to interpret such symptoms fearfully, and (c) exposure to physical illness in others
(e.g., abnormal levels of illness in the family) which creates a memory template by which one's own
physical sensations are interpreted. In support of this account, there is good evidence to suggest that
individuals with somatic symptom disorders do experience these factors with significantly greater
frequency than nonsufferers (Holder‐Perkins & Wise, 2001; Hotopf, Mayou, Wadsworth, & Wessely,
1999; Schrag, Brown, & Trimble, 2004; Iezzi, Duckworth, & Adams, 2001).
For an integrated view of cognitive‐behavioural models of somatic symptom disorders, the review by
Witthöft and Hiller (2010) is recommended.
It was a quiet Friday morning in July 2019 when pandemonium broke out at a school in
Kelantan in northeast Malaysia. A 17‐year‐old female student was at the centre of the
outbreak. This is her account of what happened.
The assembly bells rang.
I was at my desk feeling sleepy when I felt a hard, sharp tap on my shoulder.
I turned round to see who it was and the room went dark.
Fear overtook me. I felt a sharp, splitting pain in my back and my head started spinning. I fell to the floor.
Before I knew it, I was looking into the ‘otherworld’. Scenes of blood, gore, and violence.
The scariest thing I saw was a face of pure evil.
It was haunting me, I couldn't escape. I opened my mouth and tried to scream but no sound came out.
I passed out.
This student's outburst triggered a spontaneous and immediate chain reaction in which students
throughout the school started screaming and running from their classrooms. One girl fainted
and others barricaded themselves into classrooms. By the end of the day up to 39 people were
suffering physical symptoms as a result of this outbreak of ‘mass hysteria’.
‘Mass hysteria’ (sometimes called ‘mass psychogenic illness’) is the rapid spread of physical
symptoms amongst a large group of people. These symptoms are very similar to those
frequently found in panic disorder (see Chapter 6) such as hyperventilation, rapid heartbeat,
sweating, headaches, abdominal pain, chest pains, dizziness, trembling, and feelings of nausea.
These symptoms originate from an arousal‐based nervous system disturbance involving excitation and
loss or alteration of function; the physical complaints are exhibited unconsciously and have no obvious
corresponding organic origin, features that are very similar to those found in somatic symptom disorders
(Bartholomew & Wessely, 2002). Such
symptoms are associated with strong feelings of fear and anxiety which help to spread the
condition amongst members of a cohesive group, in this case schoolchildren.
‘Mass hysteria’ is a phenomenon that is not well understood, and it is not listed as a specific
psychopathology in DSM‐5. Although the symptoms experienced are real, there is no obvious
biomedical explanation for them.
The causes of ‘mass hysteria’ appear to lie in psychological and social factors, with deeply
religious and spiritual communities being especially vulnerable to these phenomena, especially
where communities believe in the powers of traditional folklore and the supernatural.
Predisposing factors that have been noted include intense stress within the community affected
(in this example, the students were in the middle of a stressful exam period), conflicts, low
educational status, lower socio‐economic status, being part of a minority group, and a history of abuse and
trauma (Haque et al., 2013; Swartz, Blazer, George, & Landerman, 1986; Tarafder et al.,
2016).
So, while many of the physical symptoms of ‘mass hysteria’ resemble autonomic arousal
responses typical of panic disorder, the phenomenon also has many characteristics in common
with somatic symptom disorders—especially the fact that these physical symptoms are
generated unconsciously and with no obvious biomedical explanation.
(Original news item published at https://www.bbc.co.uk/news/world‐asia‐48850490)
Finally, because of the startling symptoms of conversion disorder, such as paralysis and blindness, there
have been a number of studies that have investigated the role of the brain in this particular disorder.
Studies that have monitored the brain waves of individuals with conversion disorder suggest that
sensory information is reaching the appropriate areas of the brain but is not being registered in
consciousness. Marshall, Halligan, Fink, Wade, and Frackowiak (1997) carried out a positron emission
tomography (PET scan) study of a conversion disorder patient who had a paralysed left leg. They found
increased activation in the right orbitofrontal and anterior cingulate cortices, but an absence of activity
in the right primary motor cortex when the patient attempted to move the leg. This suggests that unexplained
paralysis involves some form of inhibition of primary motor activity by brain areas such as the
orbitofrontal and anterior cingulate cortices. Interestingly, this same pattern of excitation and
inhibition can be found in PET scans of individuals who have leg paralysis induced by hypnosis
(Halligan, Athwal, Oakley, & Frackowiak, 2000), suggesting that paralysis caused by conversion disorder
and hypnosis may reflect very similar underlying brain processes. These findings suggest that brain areas
that would normally instigate movement are being activated, but other areas of the brain that would not
be involved are being activated in order to inhibit the movement (e.g., orbitofrontal and anterior
cingulate cortices—see Research Methods in Clinical Psychology 13.1).
Clinical Commentary: This patient exhibits many of the classic symptoms of an individual
with illness anxiety disorder (formerly known as hypochondriasis). He is obsessed with his symptoms by
continually checking his rash to see if it has grown and can talk of nothing else to friends and families.
The continual checking of symptoms and reassurance seeking from friends and family merely act to
maintain his anxiety. He also displays a number of cognitive biases typical of illness anxiety disorder.
He interprets his rash and the explanation given to him by a skin specialist in threatening terms—even
though there are many other explanations for them. He is also unmoved by reassurances from doctors that
his condition is not life‐threatening. He has a bias to dismiss evidence that is not consistent with his own
view of his symptoms and to accept only evidence that is consistent with his view. Treatment consisted
of CBT (described more fully in the text) to deal with these cognitive biases.
CBT has proven to be particularly effective with those diagnosed with illness anxiety disorder (formerly
hypochondriasis). Such sufferers tend to interpret anything to do with bodily symptoms or health issues
as threatening (Smeets et al., 2000), and CBT can be used to challenge these dysfunctional beliefs and
replace them with more functional health beliefs. Case History 13.1 relates the symptoms of an illness
anxiety disorder patient who was convinced he had leukaemia after developing a harmless rash
(Salkovskis & Warwick, 1986). The treatment for this case involved the client being asked to test either
of two competing hypotheses—(a) that he was suffering from a life‐threatening illness, or (b) he had a
problem with anxiety which was maintained by repeated medical consultation and checking of his
symptoms. He was also asked to stop indulging in behaviours that might maintain his anxiety such as
checking to see if his rash had extended, continually seeking consultations with his doctor, and reading
medical textbooks. After around 30 days his symptoms had significantly reduced. He was no longer
regularly seeking medical reassurance about his symptoms and his self‐rated scores on measures of
health anxiety and illness beliefs had also significantly decreased.
Randomised controlled trials (RCTs) indicate that CBT for somatic symptom disorders is significantly
more effective at treating symptoms than normal medical care and is still effective at 6‐ and 12‐month
follow‐up (Barsky & Ahern, 2004), but RCTs suggest that it may not necessarily be more effective than
other treatments such as progressive muscle relaxation (Schröder, Heider, Zaby, & Göllner, 2004). Other
studies have begun to look at the feasibility of new third‐wave CBT interventions such as
mindfulness‐based cognitive therapy (MCBT) (see Chapter 4) in the treatment of somatic
symptom disorders. Initial results indicate that the effects of mindfulness may compare favourably with
those of CBT (Fjorback et al., 2013) and that it has a small to moderate effect in reducing the pain, symptom
severity, depression, and anxiety associated with somatisation disorders (Lakhan & Schofield, 2013).
13.3.5 Summary
A range of different treatments have been utilised with somatic symptom disorders. Traditionally,
psychodynamic therapy has been an important method of treating hysteria‐based disorders such as
conversion disorder, although the evidence for the medium‐term success of such interventions is
meagre. Both behaviour therapy and CBT have become important interventions over the past 20 years
with CBT being successfully used across a range of somatic symptom disorders to challenge
dysfunctional beliefs and to correct interpretational biases. Third‐wave CBT treatments, such as MCBT,
are being tested as possible new effective interventions for somatic symptom disorders, and finally, drug
treatments can also be effective in helping to alleviate some of the symptoms of somatic symptom
disorders, with the most effective being antidepressants.
SELF‐TEST QUESTIONS
What are the two main difficulties encountered when attempting to treat somatic symptom
disorders with psychological therapies?
Can you describe the main features of behavioural stress management procedures for
somatic symptom disorders?
How are CBT interventions used to treat somatic symptom disorders? What particular
aspects of somatic symptom disorders does CBT target?
What kinds of drug treatments are most effective in treating the symptoms of somatic
symptom disorders?
SECTION SUMMARY
CHAPTER OUTLINE
14.1 THE DIAGNOSIS AND CHARACTERISTICS OF DISSOCIATIVE
DISORDERS
14.2 THE AETIOLOGY OF DISSOCIATIVE DISORDERS
14.3 THE TREATMENT OF DISSOCIATIVE DISORDERS
14.4 DISSOCIATIVE DISORDERS REVIEWED
LEARNING OUTCOMES
When you have completed this chapter, you should be able to:
1. Describe the main diagnostic criteria and symptom characteristics for the DSM‐5 listed
dissociative disorders and evaluate some of the issues concerning diagnosis, comorbidity,
and prevalence.
2. Describe and evaluate the main theories of the aetiology of dissociative disorders.
3. Evaluate the difficulties associated with treating dissociative experiences and describe at
least two types of therapies that have been used to treat dissociative experiences.
This is DID … dissociative identity disorder … multiple personality disorder. We are a freak. I've started writing
this a million times … I don't know how to explain this. I know I hide. I don't want you to know me. I feel
shame about who I am … maybe that word defines me … shame. I lived through childhood abuses that one only
hears about … between the ages of 4 and 20. I think. I am not sure. I don't even know if I remember everything
yet. That's a part of the disorder … forgetting. The other part of the disorder is having 11 other people living
inside of me. Therapy is working. Most of the time I remember when they are out now … in the past they used to
come out and I wouldn't know about it unless they left a clue behind … lots of clues for me to see. Sometimes they
would hurt me … intentionally. Sometimes I would hear them screaming in my head or saying things to me …
sometimes derogatory, sometimes soothing … sometimes they would only cry. Sometimes I would find things that I
couldn't understand. Waking with a teddy bear beside me that I didn't remember. Buying toys and items that I
would never buy … losing money … people saying hello to me in the street who I didn't know. My spouse looks at
me and asks me who I am half the time. My spouse no longer knows me but still loves me and I reciprocate.
Without the support I couldn't make it.
Michael's Story
Introduction
Dissociative disorders generally are characterised by significant changes in an individual's sense of
identity, memory, perception, or consciousness, and these changes can either be gradual or sudden and
transient or chronic. Symptoms of these disorders include an inability to recall important personal or
life events (e.g., dissociative amnesia), a temporary loss or disruption of identity (e.g., dissociative identity
disorder), or significant feelings of depersonalisation in which the person feels that something about
themselves has been altered (depersonalisation disorder). Dissociative symptoms such as these are
often found in the aftermath of severe or prolonged traumatic experiences, such as childhood abuse,
natural disasters, or life‐threatening accidents. Because of this close association with trauma, dissociative
symptoms are often found in individuals with a diagnosis of post‐traumatic stress disorder
(PTSD), and the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM‐5) recognises this
relationship by placing dissociative disorders in the chapter next to trauma and stressor‐related disorders
(see Chapter 6). The diagnostic criteria for both PTSD and acute stress disorder contain reference to
dissociative symptoms such as amnesia, flashbacks, numbing, and depersonalisation. We discuss the
relationship between PTSD and dissociative symptoms later in this chapter (Section 14.1.1).
Michael's Story presented at the beginning of this chapter describes one particular form taken by
dissociative symptoms, and this is the presence of many distinct identities that each periodically take
control of his behaviour. These are often known as multiple personalities and identities, and the sufferer
often appears to be unaware that they present these different personalities to the world. This is known as
dissociative identity disorder (DID) and represents a failure to integrate various aspects of
identity, consciousness, and memory. As we shall see, in many cases dissociative disorders develop
because the individual is attempting to cope with psychological distress and conflict that may be related
to earlier traumatic life experiences. Being able to adopt different personalities and repress specific
memories is viewed by many theorists as a way of coping with the anxiety and stress derived from these
earlier life experiences (e.g., Gleaves, 1996).
To a certain degree we all have dissociative experiences at some time during our lifetime: we will
sometimes have brief periods of memory loss, become confused about our identity, and sometimes just
feel ‘strange’ or depersonalised (Kihlstrom, 2001). A community sample study by Seedat, Stein, &
Forder (2003) found that 6% of respondents endorsed four to five lifetime dissociative symptoms, and
approximately one in three endorsed at least one lifetime symptom—suggesting that dissociative
symptoms are relatively common in the general population. Very often, these experiences will coincide
with periods of stress or trauma, and it is common for individuals who have experienced severe trauma
—such as combat troops and survivors of natural disasters or terrorist attacks—to experience these
kinds of dissociative symptoms (Kozaric‐Kovacic & Borovecki, 2005) (Photo 14.1). However, for some
individuals these symptoms either become so severe that they significantly disrupt their day‐to‐day
living, or they become chronic conditions rather than temporary responses to stress and cause significant
distress to the individual. In such circumstances, they may become diagnosable as a dissociative disorder.
In this chapter we discuss three dissociative disorders, namely (1) dissociative amnesia, (2) dissociative
identity disorder (DID), and (3) depersonalisation disorder. Table 14.1 shows the prevalence and
comorbidity rates for dissociative disorders taken from an American community sample (Johnson, Cohen,
Kasen, & Brooks, 2006). These figures suggest a 12‐month prevalence rate of 9.1% for dissociative
disorders generally in individuals with a mean age of 33 years; however, prevalence rates can vary
considerably depending on the screening methods used in individual studies (Sar, 2011). Such disorders
are also comorbid in around one in three cases with anxiety disorders, eating disorders, mood disorders,
or personality disorders.
PHOTO 14.1 Cases of dissociative disorder increase significantly after war, natural disasters, or terrorist attacks when
individuals experience life‐threatening trauma well beyond that experienced during normal daily living, one such example
being the trauma experienced by those involved in the terrorist suicide bombing attack at the Ariana Grande concert in the
Manchester Arena in May 2017.
Dissociative amnesia is associated with several types of memory disturbances. Localised amnesia is
when the individual is unable to recall events that occurred during a specific time period (e.g., memory
loss for a period of 2 days following a serious car accident). Selective amnesia is when an individual
can recall some, but not all, of the events during a specific time period (e.g., a combat veteran may be
able to recall some events during a violent military encounter, but not others). The final three types of
dissociative amnesia are the least common but represent the most severe types of symptoms.
Generalised amnesia is a failure of recall that encompasses the person's entire life, and such
individuals may suddenly report to police stations or to hospitals as a result of this disorientation.
Continuous amnesia is the inability to recall events from a specific time up to and including the
present, and is also associated with the forgetting of new events as they occur. Systematic amnesia is
a loss of memory that relates to specific categories of information, such as family history (Table 14.2).
Localised amnesia When an individual is unable to recall events that occurred during a
specific time period (e.g. memory loss for a period of 2 days following a serious car accident).
Selective amnesia A memory disturbance where an individual can recall some, but not all, of
the events during a specific time period (e.g. a combat veteran may be able to recall some events
during a violent military encounter, but not others).
Generalised amnesia A failure of recall that encompasses the person’s entire life, and
such individuals may suddenly report to police stations or to hospitals as a result of this
disorientation.
Continuous amnesia A memory disturbance where there is an inability to recall events from
a specific time up to and including the present.
Systematic amnesia A memory disturbance where there is a loss of memory that relates to
specific categories of information, such as family history.
TABLE 14.1 Prevalence and comorbidity of dissociative disorders
From Johnson, Cohen, Kasen, & Brooks (2006).
Dissociative amnesia can present in any age group from young children to adults—but it is difficult to
diagnose in young children because it can be confused with attentional and educational difficulties. An
episode may last for minutes or years, but symptoms can often be alleviated simply by removing the
individual from the circumstances or situation that may have caused trauma or stress (e.g., dissociative
amnesia may spontaneously remit when a soldier is removed from the locality of the battlefield).
Interestingly, individuals with dissociative amnesia are much less disturbed by their symptoms than we
might expect, and this may imply that the amnesia serves some kind of coping function that enables the
individual to deal with stress and trauma (Kihlstrom, 2001).
TABLE 14.2 Summary: DSM‐5 diagnostic criteria for dissociative amnesia
The prevalence rate for dissociative amnesia in a community sample is around 1.8%, with rates being
higher in females than in males (Johnson et al., 2006—see Table 14.1).
host identity The identity that existed before the onset of dissociative identity disorder.
alter identities The identities that develop after the onset of dissociative identity disorder.
A significant factor in the history of DID sufferers appears to be childhood trauma, and surveys suggest
that over 95% of individuals diagnosed with DID report childhood sexual and physical abuse, including
incest (Putnam, 1997; Putnam et al., 1986). Many sufferers report their disorder beginning in childhood,
often before 12 years of age and at times of severe trauma (Putnam, 1997), and over 70% of
outpatients with DID report having attempted suicide at least once (DSM‐5, American Psychiatric
Association, 2013). This seems to suggest that DID may be a coping strategy adopted by children and
adolescents to distance themselves from experienced trauma (Atchison & McFarlane, 1994). We will
discuss this issue more fully in the section on aetiology (Table 14.3) (Case History 14.1). Individuals with
DID also exhibit a large number of comorbid conditions, including PTSD, depressive disorders,
trauma‐ and stressor‐related disorders, personality disorders (especially avoidant and borderline
personality disorders), conversion disorder, somatic symptom disorder, eating disorders, obsessive‐
compulsive disorder, and substance use disorders (DSM‐5, American Psychiatric Association, 2013; see
also Table 14.1).
TABLE 14.3 Summary: DSM‐5 diagnostic criteria for dissociative identity disorder
Disturbance of identity marked by at least two distinct personality states, which in some cultures
may be seen as evidence of possession.
Recurring gaps in the recall of everyday events, personal information, or traumatic events that are
inconsistent with ordinary forgetting.
The symptoms cause significant distress or impairment in important areas of functioning.
The disturbance is not a normal part of broadly accepted cultural or religious practice, for
example, children having an imaginary friend.
The symptoms are not the result of the use of a substance or due to another medical condition.
The prevalence rate for DID is around 1.5% in a community sample (Johnson et al., 2006), but the
number of reported cases has risen significantly in recent years. For example, Elzinga, van Dyck, &
Spinhoven (1998) found that the number of reported cases worldwide rose from 79 in 1980 to 6,000 in
1986, and the vast majority of these have been reported in the US. What then has caused this
significant increase in diagnosed cases of DID? There may be a number of factors, including (a) the
inclusion of DID for the first time as a diagnostic category in DSM‐III published in 1980; (b) early cases
of DID may simply have been diagnosed as examples of schizophrenia rather than a dissociative
disorder (Rosenbaum, 1980); (c) during the 1970s, interest in multiple personality disorder was fuelled
by the publication of Sybil (Schreiber, 1973), a case history describing an individual with 16 personalities
which was later popularised in a Hollywood film; (d) therapists have increasingly used hypnosis in an
attempt to get victims of childhood abuse to reveal details of this abuse or to reveal alter identities, and
there is some evidence that the power of suggestion under hypnosis may be enough to generate
‘multiple personalities’ that were not there in the first place (Piper, 1997; Powell & Gee, 1999); (e)
dissociative disorders such as DID are closely associated with trauma and PTSD, and interest in these
syndromes grew following the experience of veterans of the Vietnam war; and (f) many of the
symptoms of DID can be relatively easily faked, and some experts estimate that as many as 25% of
DID cases are either faked or are induced by therapy (Ross, 1997) (see also Brand et al., 2016, for a
discussion of some of the ‘myths’ surrounding DID).
CASE HISTORY 14.1 THE EMERGENCE OF ‘EVELYN’
The psychiatrist Robert F. Jeans reported the case of a single, 31‐year‐old professional woman
called Gina. Her initial symptoms included sleepwalking and screaming in her sleep, and he
noted that she was uncomfortable about being a woman and about the thought of having a
sexual relationship with her married boyfriend known as T.C. During the course of therapy, he
noticed a second personality emerging, which was called Mary Sunshine by Gina and her
therapist. Mary was more feminine, outgoing, and seductive than Gina. Over time Gina found
evidence that Mary had been controlling her behaviour across various aspects of her life: she
found hot chocolate drinks in the sink (Gina did not like hot chocolate), large sums of money
withdrawn from her bank account, and a sewing machine was delivered that had presumably been
ordered by Mary. Mary also seemed to take over Gina's relationship with T.C. and acted as a
seductive and warm partner, whereas Gina had often been cynical and cold. Eventually a third
personality emerged which appeared to be a synthesis of the features of Gina and Mary. Gina
described how this happened:
I was lying in bed trying to go to sleep. Someone started to cry about T.C. I was sure that it was Mary. I
started to talk to her. The person told me that she didn't have a name. Later she said that Mary called her
Evelyn but that she didn't like that name. I asked her what she preferred to be called. She replied that she will
decide later.
I was suspicious at first that it was Mary pretending to be Evelyn. I changed my mind, however, because the
person I talked to had too much sense to be Mary. She said that she realized that T.C. was unreliable but she
still loved him and was very lonely. She agreed that it would be best to find a reliable man.
She told me that she comes out once a day for a very short time to get used to the world. She promised that she
will come out to see you sometime when she is stronger.
I asked her where Mary was. She said Mary was so exhausted from designing her home that she had fallen
asleep.
(Jeans, 1976, pp. 254–255)
Over time Evelyn appeared more and more and appeared to be an adaptive alter identity that
allowed Gina to cope better with the range of issues in her life. Within months she was Evelyn
all the time, had no recollection of Mary, and later became successfully married to a physician.
Clinical Commentary
Like many alter identities in DID, Mary evolved primarily to take charge of certain areas of Gina’s life
—particularly controlling her feminine role and her relationship with T.C. Typically, Gina had no
recollection of her behaviour when Mary was in control and came to be aware of Mary only by
encountering evidence that a different personality had been controlling behaviour. In this particular case,
Evelyn eventually emerged as a synthesis of both Gina and Mary's personalities, and this proved to be an
adaptive change that enabled Gina to deal with a range of matters across her life.
As is the case in panic disorder, sufferers of depersonalisation disorder often think they are ‘going
crazy’—especially if this is also associated with a sense of derealisation (a feeling that the world is
strange or unreal). Other common symptoms include disturbances in the sense of time, obsessive
rumination, and somatic concerns. Depersonalisation disorder is also highly comorbid with anxiety
symptoms and depression, and a past history of anxiety and depression is regularly reported in those
suffering depersonalisation disorder (Baker et al., 2003).
In everyday life, depersonalisation experiences can occur when the individual is in transitional
physiological states such as on waking up, when feeling tired, practising meditation, or following an
acute stressor or scary experience. Interestingly, depersonalisation disorder has been associated with
severe life trauma such as childhood physical and emotional abuse (Simeon, Gralnik, Schmeidler, Sirof,
& Knutelska, 2001), and research suggests that depersonalisation during periods of stress or trauma may
be adaptive in reducing symptoms of anxiety or depression immediately after the event (Shilony &
Grossman, 1993). In fact, depersonalisation may account for the periods of emotional ‘numbing’ that
individuals feel immediately after a severe traumatic experience, and before developing symptoms of
PTSD (see Chapter 6, Section 6.7).
Depersonalisation disorder often develops in late adolescence or early adulthood, with a mean onset age
of 16 years, and with fewer than 20% of sufferers reporting onset after 20 years of age. The 12‐month
prevalence rate for depersonalisation disorder is relatively low at 0.8% (Johnson et al., 2006), but it must
be remembered that individual depersonalisation experiences are significantly more prevalent than this.
Complex PTSD A severe form of PTSD often associated with early age interpersonal trauma
and with dissociative symptoms from that early age.
SELF‐TEST QUESTIONS
What are the main diagnostic features of dissociative amnesia?
Can you name the five types of memory disturbance that occur in dissociative amnesia?
What are the main features of DID and what was it previously called?
Can you describe the difference between host identities and alter identities in DID?
What is the estimated prevalence rate of DID and what problems are involved in
estimating its prevalence?
What are the main features of depersonalisation disorder?
What is complex PTSD and how is it related to dissociative experiences?
SECTION SUMMARY
FIGURE 14.1 Percentage correct recall of to‐be‐remembered neutral and trauma‐related words presented under
conditions of divided attention. Grey bars represent participants who scored high on dissociative experiences, and red bars
represent those who scored low on dissociative experiences.
After DePrince and Freyd (2004).
An alternative explanation of the memory failures experienced by dissociative disorder sufferers is in
terms of how changes in their physiological and emotional state can influence recall of memories.
State‐dependent memory is a well‐established cognitive phenomenon in which the individual is
more likely to remember an event if they are in the same physiological state as when the event occurred
(Bower, 1981). We have already noted that individuals with dissociative disorders often experience
severely traumatic life events that cause significant changes in mood and physiology when such events
occur (e.g., being involved in a natural disaster such as an earthquake may be experienced during states
of hyperarousal and panic). If the events relating to this experience are encoded in memory during
these unusual emotional states, then it may be that the individual will have difficulty properly recalling
them in less traumatic emotional states (Radulovic, Lee, & Ortony, 2018). State‐dependent learning has
also been used to explain the between‐identity amnesia that is often experienced in DID (Szostak, Lister,
Eckhart, & Weingartner, 1995), and it has been suggested that most between‐identities amnesia will
occur between those alter identities that differ most in their normal mood states (e.g., there will be less
cross‐identity knowledge between identities that display negative emotions, such as sadness or anger, and
those that exhibit mainly positive emotions such as joy and happiness) (Bower, 1981). Nevertheless, while
state‐dependent memory may seem like an appealing explanation of dissociative amnesia, there are
some difficulties with this explanation. First, dissociative amnesia is usually much more severe than has
been reported in basic studies of state‐dependent memory. Second, individuals with DID have problems
with both free recall memory and recognition memory, but state‐dependent memory is usually only
found with the former (Peters, Uyterlinde, Consemulder, & van der Hart, 1998). Third, studies have
demonstrated that different identities in DID can recall autobiographical information from
the other identities when a concealed recognition test is used, suggesting that dissociative amnesia in
DID probably does not involve separate inter‐identity memory systems and is not constrained by
state‐dependent learning (Huntjens, Verschuere, & McNally, 2012).
Finally, one other cognitive theory of dissociative symptoms involves the concept of reconstructive
memory. This view argues that an individual's autobiographical memory is stored as a series of discrete
elements associated with that experience (e.g., context, emotional state, sensory and perceptual features,
etc.). These elements will then be recognised as an autobiographical memory to the extent that
they can be retrieved and associated together (the act of reconstruction). In some cases,
not all of the elements that go to make up an autobiographical memory may be activated, and this may
lead the individual to doubt that the retrieved fragments of memory refer to a memory from his or her
own past. Being unable to recall the relevant elements of an autobiographical experience from memory
is known as a deficit in source‐monitoring ability (Johnson, Hashtroudi, & Lindsay, 1993), and an
example of this is when an individual cannot remember whether they read something in a newspaper or
whether it was just a rumour they heard from a friend. It has been suggested that dissociative amnesia
may result from deficits in both reconstructive memory and source‐monitoring abilities. For some
reason, individuals with dissociative symptoms may not be able to recover from memory sufficient
elements of an autobiographical event to convince them it was an experience that happened to them. In
addition, a deficit in reality monitoring (a form of source monitoring required to distinguish mental
contents arising from experience from those arising from imagination), may also lead them to doubt that
they have actually had a particular experience (Johnson & Raye, 1981), and both of these processes may
contribute to dissociative amnesia. Consistent with this view are findings that women who have
experienced childhood sexual abuse and score high on dissociative experiences have greater difficulty
than nonabused control participants in distinguishing between words they had seen in a memory test
and words they imagined seeing (McNally, Clancy, Barrett, & Parker, 2005; Clancy, Schachter, McNally,
& Pitman, 2000)—a finding which suggests that they may well have a deficit in reality monitoring.
However, deficits in reality monitoring can work both ways. They can prevent a person from identifying
an autobiographical memory as one they have actually experienced, but they may also lead to the
individual identifying an imagined event as an actual experience. This may be the basis for what have
now become known as false recovered memories of trauma (Loftus, 1993), in which various
therapeutic techniques are used to try to recover repressed childhood memories of trauma but which
may actually generate false memories of events that did not occur. Such techniques may inadvertently
lead the client to falsely recognise imagined experiences as ones that actually happened, and this issue is
discussed more fully in Focus Point 14.2 and Research Methods in Clinical Psychology 14.1.
There has been a belief amongst many therapists and clinicians that individuals can forget
traumatic or stressful events in their life for relatively lengthy periods of time, and this view
stems from the original works of Freud, who believed that severe trauma was repressed to the
unconscious mind because it was too painful to tolerate. Many of the symptoms of dissociative
disorders seem to support this belief—especially because many of these disorders are
characterised by amnesia, and childhood abuse is a common factor in the history of many with
dissociative disorders. However, attempting to confirm that memories have been repressed is a
difficult process. For example, it is often difficult to find corroborative evidence even when
repressed memories of abuse have been recovered, because many of the recovered memories
may be of abuse that the perpetrators will be unwilling to substantiate. There are therefore a
number of issues to address when considering repressed memories. In particular these are:
Can memories of early childhood trauma or abuse be repressed?
If they can be repressed, can they subsequently be recovered?
If so‐called repressed memories are recovered, are they accurate?
Individuals who have suffered amnesia for stressful life events may occasionally recall what have
now come to be known as false recovered memories of trauma. That is, they may actually recall
events that they believed happened, but which objective evidence subsequently suggests did not
happen. A classic example of this is described in Focus Point 14.2.
We have described in Section 14.2 why we think some people might be prone to recalling
memories that are false, but how do we go about studying this phenomenon experimentally?
False recognition—the mistaken belief that one has previously encountered a novel item—has
been studied extensively in the laboratory and the methods used to investigate this have been
applied to the study of false recovered memories in individuals with dissociative disorder
symptoms.
In the laboratory procedure, participants are presented with lists of words, and each list is
composed of words associated to a single non‐presented ‘theme word’. For example, a list may
consist of words associated with sweet (such as sour, sugar, bitter, candy, etc.). After hearing the lists,
participants are then given a recognition test where they are presented with words (a) that were
presented in the previous lists, (b) words that have not been presented before but are related to
the theme words (known as false targets), and (c) a control set of words that have never been
presented before but which are not related to the theme words.
Using college students as participants, many studies have found that rates of false
recognition of false targets are high—so even nonclinical populations often believe they have seen
words in the original lists when in fact they have not (i.e., exhibit false recognition) (Roediger &
McDermott, 1995; Schachter, Norman, & Koutstaal, 1998).
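The scoring logic of such a recognition test can be sketched in a few lines of Python. This is purely illustrative: the word lists, the response data, and the `score_recognition` function are assumptions for the sketch, not materials or analyses from the studies cited.

```python
# Minimal sketch of scoring a DRM-style recognition test.
# All stimuli and responses below are hypothetical examples.

def score_recognition(responses, studied, lures, controls):
    """Return the proportion of 'old' judgements for each item type.

    responses: dict mapping each test word to True ('old') or False ('new')
    studied:   words actually presented in the study lists
    lures:     non-presented theme words related to the lists (false targets)
    controls:  non-presented words unrelated to the lists
    """
    def rate(items):
        # Proportion of items the participant judged as previously seen
        return sum(responses[w] for w in items) / len(items)

    return {
        "hit_rate": rate(studied),              # correct 'old' to studied words
        "false_recognition": rate(lures),       # 'old' to related lures
        "baseline_false_alarm": rate(controls), # 'old' to unrelated controls
    }

# Hypothetical participant: studies words related to 'sweet', then calls
# the non-presented lure 'sweet' old -- an instance of false recognition.
responses = {
    "sour": True, "sugar": True, "bitter": False, "candy": True,  # studied
    "sweet": True,                                                # lure
    "chair": False, "river": False,                               # controls
}
result = score_recognition(
    responses,
    studied=["sour", "sugar", "bitter", "candy"],
    lures=["sweet"],
    controls=["chair", "river"],
)
```

Group comparisons in this paradigm then amount to comparing the false-recognition rate across participant groups, judged against the baseline false-alarm rate to unrelated control words.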
A number of studies have used this paradigm to test whether individuals with dissociative
disorder symptoms have particularly high levels of false recognition. Clancy, Schachter,
McNally, & Pitman (2000) indeed found that a group of women who reported recovered
memories of childhood sexual abuse was more prone to false recognition in this laboratory
procedure than other groups (such as women who believed they were sexually abused as
children but could not remember it, and women with no history of childhood sexual abuse).
Interestingly, people who report having been abducted by space aliens also exhibit proneness to
false recognition (Clancy, McNally, Schachter, Lenzenweger, & Pitman, 2002).
Experimental studies such as these suggest that false recognition of past experiences is not
uncommon, and clinicians need to be aware of this when dealing with the recall of traumatic
experiences in clients with dissociative disorders. In addition, it suggests that some groups of
people (i.e., those who claim to have recovered memories of past traumas) are particularly
prone to false recognition. It is not at all clear why this is so, but individuals who have
undergone severe stress may have deficits in source monitoring (i.e., remembering how, when,
and why a memory was acquired), and this can be manifested in this simple false recognition lab
test as well as in real life.
14.2.5 Biological Explanations
Dissociative disorders generate symptoms—such as amnesia—that prima facie look as though they may
have been generated by neurological defects or abnormalities in brain processes. Even so, there is very
little evidence that these amnesic symptoms are caused by underlying deficits in brain function. First,
memory loss tends to be selective and in many cases it is transitory. This suggests that if there are brain
abnormalities causing these symptoms, these too must be selective and transitory. One such candidate
that has been suggested is undiagnosed epilepsy (Sivec & Lynn, 1995). Epileptic seizures are known to
be associated with DID and with symptoms of depersonalisation disorder such as blackouts and déjà vu.
Even so, the symptoms of some dissociative disorders—such as DID—are very complex, and it is
unlikely that undiagnosed bouts of epilepsy could explain the intricate way in which knowledge about
alter identities is suppressed or recovered by the sufferer.
epilepsy A disorder of the nervous system characterised either by mild, episodic loss of
attention or sleepiness or by severe convulsions with loss of consciousness.
An alternative biological explanation alludes to the role of the hippocampus. Brain scan studies have
suggested that the hippocampus is the area of the brain that brings together the various elements of an
autobiographical memory and integrates them to provide the individual with a memory that they
recognise as a past personal experience. Given that individuals with dissociative disorders appear to
have problems recalling and integrating memories of certain experiences (such as childhood abuse), this
may be caused by abnormalities in the hippocampus. Bremner, Krystal, Charney, & Southwick (1996)
have argued that neurotransmitters released during stress can modulate memory function—particularly
at the level of the hippocampus—and this release may interfere with the laying down of memory traces
for high stress incidents such as childhood abuse. In addition, extended periods of stress may also cause
long‐term, semi‐permanent alterations in the release of these neurotransmitters, causing long‐term
amnesic effects for experiences related to trauma. More recent functional magnetic resonance imaging
research has suggested that the prefrontal cortex may play an important role in inhibiting the activity of
the hippocampus in individuals with dissociative amnesia, a process that will result in memory
repression (Kikuchi et al., 2010).
What evidence is there that alter identities in DID are a construction of the therapeutic process?
Supportive evidence includes:
1. Alter identities are significantly less well defined in childhood and appear in adulthood usually after
treatment by a therapist has begun (Spanos, 1994; Lilienfeld et al., 1999).
2. Relatives of individuals with DID rarely report having seen evidence of alter identities before
treatment (Piper & Mersky, 2004).
3. Individuals who develop DID usually have strong imaginations and a rich fantasy life that enables
them to play different roles with some ease (Lynn, 1988).
4. There is some evidence that many cases of DID may be diagnosed by only a relatively small
number of clinicians who may have a therapeutic style that allows alter egos to develop; for
example, in a Swiss survey, Modestin (1992) found that 66% of the DID diagnoses in the country
were made by fewer than 10% of the clinicians in the survey.
5. Individuals diagnosed with dissociative disorders are very susceptible to suggestion and hypnosis
(Bliss, 1980; Butler, Duran, Jasiukaitis, Koopman, & Spiegel, 1996), and hypnotherapy is a
common form of treatment for DID; Spanos (1996) argues that such susceptible individuals may
adopt the “hypnotic role” and simply produce the kind of behaviour that the therapist wants.
6. Spanos (1994) noted that those who support DID as a diagnostic category have described a wide
range of symptoms as potentially indicative of DID, which justifies constant probing in therapy to
confirm a diagnosis, sometimes to the point where therapists insist to doubting clients that
they do have multiple alter egos (Mersky, 1995); consistent with the desire of many clinicians to
diagnose DID is the finding that the prevalence of diagnosed DID has increased dramatically since
1980 (Elzinga, van Dyck, & Spinhoven, 1998).
Nevertheless, there is still debate about whether most cases of DID are strategic enactments or not.
Gleaves (1996) has provided a vigorous defence of the psychiatric view that DID is a legitimate
diagnostic category and not a construction of the therapeutic process. He argues that (a) it is not
surprising that the rate of DID diagnosis has increased significantly in recent years because this may be
a result of less scepticism about the diagnostic category and a reduction in the misdiagnosis of DID as
schizophrenia; (b) there is relatively little evidence that hypnotherapy actively contributes to the
development of DID symptoms because the number of clients diagnosed with DID after hypnotherapy
is as low as one in four; (c) core symptoms of DID, such as amnesia, are frequently found in DID
sufferers before their first treatment session (Coons, Bowman, & Milstein, 1988), so DID cannot be
entirely constructed as a result of therapy; and (d) rather than being openly collusive with the therapist
about their symptoms, many individuals with DID are highly reluctant to talk about their symptoms
and have an avoidant style that is not conducive to revealing a history of abuse or the existence of
multiple personalities (Kluft, 1994).
As an epilogue to this ongoing debate, it is worth discussing an interesting study conducted by Spanos,
Weekes, & Bertrand (1985). They designed an experiment based on the famous case of Kenneth
Bianchi, who was accused of a series of murders and rapes in Los Angeles in the late 1970s. During his
psychiatric evaluation under hypnosis, Bianchi revealed evidence of DID symptoms and eventually of
an alter identity called Steve whom he claimed committed the rapes and murders. When he came out
of the hypnotic state, Bianchi claimed to know nothing about Steve or the murders or what he had said
under hypnosis. Table 14.5 provides a transcript of part of the discussion between the clinician and
Bianchi while the latter was under hypnosis. Spanos, Weekes, & Bertrand (1985) claim that this is an
excellent example of how Bianchi's alter identity was constructed via the therapeutic discussion.
Constructing ‘Steve’ served a useful purpose for Bianchi, because it allowed him to plead not guilty to
murder by reason of insanity (i.e., his supposed DID). In their experimental study, Spanos et al. (1985)
asked three groups of students to act out variations of the hypnotherapy procedure undergone by
Bianchi. All groups were instructed to play the role of individuals accused of murder. Group 1 was then
hypnotised and underwent questioning taken almost verbatim from the Bianchi transcript. Group 2
were also hypnotised and told that under hypnosis many individuals reveal evidence of hidden multiple
personalities, but this aspect was then not directly addressed in the interview. Group 3 were a control
condition that were not hypnotised and were given little or no information about hidden multiple
personalities. After the interviews, all participants were questioned about whether they had a hidden
personality or second identity. In Group 1—which underwent a procedure similar to Bianchi—81%
admitted a second personality. In Group 2—whose interview did not allude to hidden personalities—
only 31% revealed an alter identity. Only 13% of those in Group 3 admitted a hidden personality.
Spanos et al. (1985) argued that these results provided evidence that alter identities can be developed as
a result of the demand characteristics of the interview style of the therapist, and that such alter
identities are strategic enactments that serve the purposes of the client (e.g., by diverting or avoiding
blame for their behaviour). Nevertheless, while this study provides convincing evidence that some alter
identities can be developed by the therapist's interviewing style, it is still not evidence that all alter
identities are strategic enactments.
TABLE 14.5 Transcript of the discussion that took place between accused murderer, Kenneth Bianchi, and a clinician
while Bianchi is under hypnosis (see text for further elaboration)
After Schwarz (1981), pp. 139–143.
Clinician: I've talked a bit to Ken, but I think that perhaps there might be another part of Ken that I
haven't talked to. And I would like to communicate with that other part. And I would like
that other part to come to talk to me….And when you're here, lift the left hand off the chair
to signal to me that you are here. Would you please come, Part, so I can talk to you … Part,
would you come and lift Ken's hand to indicate to me that you are here… Would you talk to
me, Part, by saying ‘I'm here’?
Bianchi: Yes.
Clinician: Part, are you the same as Ken or are you different in any way?
Bianchi: I'm not him.
Clinician: You're not him. Who are you? Do you have a name?
Bianchi: I'm not Ken.
Clinician: You're not him? OK. Who are you? Tell me about yourself. Do you have a name I can call
you by?
Bianchi: Steve. You can call me Steve.
14.2.7 Summary
We posed two questions at the outset of this section on aetiology: (a) How do the normally integrated
components of consciousness become dissociated in the dissociative disorders? and (b) Do the distinctive
symptoms of dissociative disorders (such as amnesia and multiple personalities) have a specific function?
We have reviewed a range of theories about how the elements of consciousness become dissociated in
these disorders and how memories might become suppressed. Cognitive theories try to explain these
dissociative symptoms primarily by attempting to describe the mechanisms that might mediate effects
such as selective amnesia. We reviewed two specific accounts, state‐dependent memory and
reconstructive memory. The latter additionally argued that individuals with dissociative disorders may
suffer deficits in source‐monitoring ability and reality monitoring—both may prevent an individual from
identifying an autobiographical memory as one they have actually experienced. Some relatively
undeveloped biological accounts also intimate that selective amnesia may result from abnormal brain
processes (such as epilepsy) or specific brain functions which inhibit the ability of the hippocampus to
lay down or recall specific memories. An alternative view of the striking symptoms of dissociative
disorders is that many of them may be a construction of the therapeutic process. In particular, directive
therapeutic approaches (including hypnotherapy) may encourage the client to create alter identities that
did not exist prior to therapy or to recall false memories of events that had never happened. We
concluded that while some symptoms of dissociative disorders may be developed by overly directive
therapy techniques, it is highly unlikely that this explains all dissociative symptoms. Finally, in relation to
our second question, it is quite probable that the symptoms of dissociative disorders (particularly
selective amnesia and alter identities) do serve some kind of palliative role, and in the psychodynamic
view they may allow the sufferer to repress traumatic memories that are too painful to tolerate.
SELF‐TEST QUESTIONS
Can you describe some of the main risk factors for dissociative disorders?
What is the psychodynamic concept of repression, and how does it account for the
symptoms of dissociative disorders?
What is the evidence that fantasy and early dissociative experiences may play a role in the
development of dissociative disorders?
Can you describe the procedure for a laboratory‐based experiment designed to investigate
deficits in memory processes in individuals with dissociative disorders?
What is state‐dependent memory and how does it attempt to account for dissociative
amnesia?
Can you explain how deficits in source monitoring ability or reality monitoring might
account for both dissociative amnesia and false recovered memories of trauma?
What is the evidence that alter identities in DID are a construction of the therapeutic
process?
SECTION SUMMARY
5. Some overly directive therapeutic styles may lead to the recovery of false memories, with the
potential broad range of negative consequences that this might have for the client and their family
(see Focus Point 14.2).
6. In DID, integrating alter identities into a single, functional identity is an extremely difficult process
—many clients find that having a series of alter identities is a useful way of explaining their
behaviour to others and absolving the ‘host’ identity from blame and responsibility, and breaking
this down is difficult (Hale, 1983). For example, in a survey of 153 clients undergoing therapy for
DID, Piper (1994) found that only 38 of 153 (25%) achieved a stable integration of their alter
identities, but other longer‐term studies have suggested more optimistic outcomes, with Kluft
(2000) finding a successful integration rate of 68% over a period of 3 months after therapy.
7. All dissociative disorders are usually comorbid with a range of other common psychopathologies,
and particularly with anxiety disorders, depression, and PTSD; dealing with these comorbid
problems will also usually be a requirement in therapy.
As we mentioned at the beginning of this section, therapies for dissociative disorders are relatively
underdeveloped, but we will discuss the most commonly used ones. These are psychodynamic therapy,
hypnotherapy, and—to a lesser extent—drug therapy.
14.3.2 Hypnotherapy
Hypnotherapy is used relatively regularly with those who suffer dissociative disorders because
sufferers are unusually susceptible to suggestion and hypnosis (Bliss, 1980), and at least some
clinicians believe that dissociative symptoms such as amnesia or multiple identities may be the result of
a form of ‘self‐hypnosis’ by which individuals are able to prevent certain thoughts and memories
from entering consciousness (Frischholz, Lipman, Braun, & Sachs, 1992). Using hypnotherapy, the clinician
can help guide the client through the recall of repressed memories. Hypnosis is also used to help people
to regress to childhood states in an attempt to help them recall significant events that they may have
repressed. Drugs such as sodium amobarbital and sodium pentobarbital can also be used
concurrently with hypnotherapy to help clients recall past events (Ruedrich, Chu, & Wadle, 1985). One
assumption in the hypnotherapy approach is that hypnosis will recreate the physical and mental state
the client was in prior to experiencing any trauma, and this will help the individual to recall events
during earlier stages of their life. This is known as age regression, and while some clients find this
helpful in recalling and dealing with repressed memories, there is no objective evidence that hypnosis
does recreate any of the physical or mental states experienced earlier in life. Hypnotherapy is also used
in the treatment of DID in order to help bring potential alter identities into consciousness and to
facilitate the fusion of identities. However, although widely used in the treatment of dissociative
disorders there have been no systematic group‐ or single‐case studies of the effectiveness of this
technique (Cardena, 2000).
sodium amobarbital A drug which can be used concurrently with hypnotherapy to help
clients recall past events.
sodium pentobarbital A drug which can be used concurrently with hypnotherapy to help
clients recall past events.
age regression In hypnotherapy, the recreation of the physical and mental state that a client
was in prior to experiencing any trauma in order to help the individual recall events during
earlier stages of his or her life.
14.3.4 Summary
Treating dissociative disorders can often be a lengthy process, whether through conventional
psychotherapy or hypnotherapy. This picture is additionally clouded by the fact that many health
insurers are increasingly requiring treatments that are empirically supported, and in some countries—
such as the Netherlands—psychoanalysis is no longer being reimbursed as a treatment for dissociative
disorders (Brand, 2012). More recently, staged treatments have been developed that are assessed using
empirically based methods (e.g., schema therapy for DID), and may provide more evidence‐based
alternatives to traditional psychoanalysis and hypnotherapy (Baars et al., 2011; Brand, Lanius,
Vermetten, Loewenstein, & Spiegel, 2012; Huntjens, Rijkeboer, & Arntz, 2019). Apart from the
therapies we have discussed in this section, many individuals with dissociative disorders can often be
treated with cognitive behaviour therapy (CBT) for their depression and anxiety symptoms. Because
both dissociative and PTSD symptoms may be an outcome of extreme trauma, those therapies used to
treat PTSD also have some success in dealing with dissociative symptoms (e.g., therapies such as
cognitive restructuring, eye movement desensitisation and reprocessing—EMDR, see Chapter 6).
SELF‐TEST QUESTIONS
What are the main problems facing clinicians who attempt to treat dissociative disorders?
Can you describe the main characteristics of psychodynamic therapies for dissociative
disorders?
What is the evidence that hypnotherapy is an effective treatment for dissociative disorders?
SECTION SUMMARY
CHAPTER OUTLINE
15.1 THE DIAGNOSIS AND ASSESSMENT OF NEUROCOGNITIVE
DISORDERS
15.2 TREATMENT AND REHABILITATION FOR NEUROCOGNITIVE
DISORDERS
15.3 NEUROCOGNITIVE DISORDERS REVIEWED
LEARNING OUTCOMES
When you have completed this chapter, you should be able to:
1. Describe some of the cognitive impairments that characterise neurocognitive disorders.
2. Describe some of the main methods that clinical neuropsychologists use to assess cognitive
functioning in neurocognitive disorders.
3. Describe a range of types of neurocognitive disorders and evaluate their causes.
4. Describe, compare, and contrast the various types of treatment and rehabilitation
programmes that have been developed to deal with neurocognitive disorders.
Within the last 8 months I've been at war with the cooker. I put the oven on at a temperature which I know is
right only to find the meat burnt or not cooked because what I thought was the right temperature was not. It gets
me so annoyed. I also forget what time I put things in the oven—even if I repeat it to myself a few times and keep
looking at the clock. If I do something else, go upstairs, for example, I cannot remember what time the food went in
no matter how I try.
I've had to give up driving. I kept losing concentration and my speed just got faster and faster. I could have caused
an accident—especially when feeling disoriented. I still work part‐time, but that is slipping something awful. I do
things at work alone as much as possible so no one can see the mistakes I'm making. I've had all the tests now, but
the neurologist tells me all the findings so far are consistent with AD (Alzheimer's disease). To be honest, trying to
get through the day is like knitting with a knotted ball of wool. Every now and again I come to a knot. I try to
unravel it but can't, so I knit the knot in. As time goes by, there are more and more knots.
Paddy's Story
Introduction
The majority of the disorders we have discussed in this book so far appear to have psychological origins.
That is, people have experiences that give rise to problematic ways of thinking and behaving, and these
ways of thinking and behaving may cause distress and form the basis for diagnosable psychopathologies.
In contrast, neurocognitive disorders have their origins in damage or abnormalities in the biological
substrates that underlie thinking and behaving. This damage or degeneration can be caused by disease,
physical trauma (such as brain injury), or genetic predispositions causing irreversible changes in the
brain and central nervous system. By definition, the causes of neurocognitive disorders are biological
and can usually be identified as biochemical imbalances in the brain and central nervous system or
direct or indirect damage to brain tissue. Despite the fact that the causes of neurocognitive disorders are
primarily physical, psychology is centrally important in the diagnosis, assessment, and rehabilitation of
individuals suffering such disorders. For example, some of the first signs of neurocognitive disorders
(such as dementia, brain injury, or stroke) are deficits in basic cognitive functions such as perception,
learning, memory, attention, language, and visuospatial skills, and also deficits in what are known as
executive functions (i.e., those skills that involve problem‐solving, planning, and engaging in goal‐
directed behaviour). Clinical psychologists are therefore actively engaged in assessing these abilities and
interpreting whether any deficits are early signs of neurocognitive disorders. In addition, neurocognitive
disorders do not only generate deficits in basic cognitive functioning, they can also affect disposition and
personality. An individual diagnosed with a neurocognitive disorder may become both depressed and
anxious and require suitable treatment for these conditions. They may also display radical changes in
personality and behaviour, such as impulsivity or outbursts of aggressive behaviour. These also need to
be managed and treated. Finally, clinical psychologists are also centrally involved in the development of
rehabilitation programmes that may have a variety of aims, including (a) restoring previously
affected cognitive and behavioural functions (although this is often a difficult task), (b) helping clients to
develop new skills to replace those that have been lost as a result of tissue damage (e.g., learning to use
memory aids), (c) providing therapy for concurrent depression, anxiety or anger problems, and (d)
providing clients and carers with skills and advice that will help them structure their living environment
in a way that will help to accommodate changes in cognitive and behavioural abilities.
executive functions Cognitive skills that involve problem-solving, planning and engaging in
goal‐directed behaviour.
rehabilitation programmes Treatment programmes that usually combine a mixture of
group work, psychological interventions, social skills training and practical and vocational
activities.
At the beginning of this chapter we described Paddy's Story, which recounts the experiences of someone
in the early stages of Alzheimer's disease. It describes her awareness of the memory lapses, mistakes,
and periodic disorientation that give rise to frustration, anxiety, and depression. For many who
are in the early stages of a degenerative neurocognitive condition, these experiences can be both
frequent and frightening.
We continue this chapter by discussing some of the more general characteristics of neurocognitive
disorders and the diagnostic and assessment issues that are relevant to them.
anterograde amnesia Memory loss for information acquired after the onset of amnesia.
Also known as anterograde memory dysfunction.
anterograde memory dysfunction Memory loss for information acquired after the onset
of amnesia. Also known as anterograde amnesia.
Language deficits
The individual may appear to be rambling during conversations and have difficulty conveying what they
have to say in a coherent manner. They may also have difficulty reading and understanding the speech
of others. Language deficits are one of the most common features of neurocognitive disorders, and are
collectively known as aphasias. Language impairments can take many forms, including (a) an inability
to comprehend or understand speech or to repeat speech accurately and correctly; (b) the production of
incoherent, jumbled speech (known as fluent aphasia); and (c) an inability to initiate speech or
respond to speech with anything other than simple words (known as nonfluent aphasia). A distinction
can be made between Broca's aphasia and Wernicke's aphasia. Disruption of the ability to speak
is known generally as Broca's aphasia, and consists of difficulties with word ordering (agrammatism),
finding the right word (anomia) and articulation. It is characterised by laborious nonfluent speech
involving mispronunciation rather than mis‐selection of words. In contrast, Wernicke's aphasia is a
deficit in the comprehension of speech involving difficulties in recognising spoken words and converting
thoughts into words. Each of these deficits reflects damage to a different area of the left hemisphere
(which controls speech): Wernicke's aphasia is associated with damage to regions behind the
frontal lobes, whereas Broca's aphasia is more likely to result from damage to the left frontal lobe itself
(see Table 15.1).
nonfluent aphasia An inability to initiate speech or respond to speech with anything other
than simple words.
Broca’s aphasia Disruption of the ability to speak consisting of difficulties with word
ordering, finding the right word and articulation.
apraxia Loss of the ability to execute or carry out learnt (familiar) movements, despite having
the desire and the physical ability to perform the movements.
Wisconsin card sorting task A widely used test of executive functioning where individuals
must sort cards for a number of trials using one rule (e.g. colour) and then sort cards using a
different rule (e.g. shape).
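To make the rule‐switching logic behind tasks like this concrete, here is a minimal sketch in Python. The card attributes, the switch‐after‐five‐correct criterion, and the function names are simplifying assumptions for illustration only, not the standardised test's actual administration or scoring rules:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Card:
    colour: str
    shape: str

def run_wcst(trials, rules, switch_after=5):
    """Score a simplified Wisconsin-style sorting run.

    `trials` is a list of (response_card, target_card) pairs; each
    response is judged against the currently active sorting rule
    ('colour' or 'shape'). The rule advances through `rules` after
    `switch_after` consecutive correct sorts -- a simplified version
    of the real switching criterion, and the switch is unannounced,
    which is what makes the task probe executive flexibility.
    Returns (total_correct, total_errors).
    """
    rule_idx, streak = 0, 0
    correct = errors = 0
    for response, target in trials:
        rule = rules[rule_idx % len(rules)]
        if getattr(response, rule) == getattr(target, rule):
            correct += 1
            streak += 1
            if streak == switch_after:  # unannounced rule switch
                rule_idx += 1
                streak = 0
        else:
            errors += 1
            streak = 0
    return correct, errors
```

Errors made by continuing to sort by the old rule after a switch are what clinicians look for: a participant with intact executive functioning uses the feedback to discover the new rule, whereas perseverative responding suggests impairment.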
Test batteries such as these also provide useful information on the source of any deficits (such as closed
head injury, alcohol abuse, Alzheimer's disease, stroke, etc.), whether the damage occurred during
childhood development, and whether any deficits are progressive. Focus Point 15.1 provides an example
of one of the basic tests—the trail making test—that provides a measure of information processing
speed and a range of recognition and visuo‐motor integration abilities. Many of these tests are so
extensive that they may take as long as 6 hours to administer, requiring substantial patience and stamina
on the part of both the clinician and the patient. In contrast, some other tests have been developed to
be quick and simple to implement, and to provide a reasonably reliable indication of general level of
impairment. One such test is the Mini‐Mental State Examination (MMSE), which is a brief 30‐
item test used to screen for dementia (e.g., in Alzheimer's disease) and takes about 10 minutes to
administer (see Focus Point 15.2) (see also Tsoi, Chan, Hirai, Wong, & Kwok, 2015, for a systematic
review and meta‐analysis of cognitive tests to detect dementia).
Difficulties of diagnosis
Diagnosis is made difficult by the fact that the symptoms and deficits found in neurocognitive disorders
often closely resemble those of other psychopathologies. For example, cognitive deficits typical of
neurocognitive disorders are a regular feature of dissociative disorders (e.g., amnesia), and schizophrenia
(e.g., language deficits, information‐processing deficits, and deficits in executive functions). Motor
coordination deficits, paralysis, and impairments of sensory input are also found in somatic symptom
disorders such as conversion disorder (e.g., hysterical paralysis and blindness, see Chapter 13). In
addition, in the early stages of a degenerative neurological disorder people will start to experience
cognitive impairments that affect their daily lives, and this will often lead to the development of
psychological problems (e.g., depression and anxiety) that compound the difficulties of diagnosis (see
Paddy's Story at the beginning of this chapter). Indeed, prior to the development of modern brain
scanning technology, it was often the case that a neurological disorder could be diagnosed only by
autopsy after the death of the sufferer (Patton & Shepherd, 1956). Even so, with the range of tests
available today psychological problems can still be misdiagnosed as neurological ones (e.g., Iverson,
2006), and vice versa (e.g., Sumpter & McMillan, 2005), and this has important implications for
rehabilitation and subsequent care.
The MMSE is a good instrument for assessing cognitive function in dementia and takes about
10 minutes to administer.
Orientation
What is the (year) (season) (date) (day) (month)?
5□
Where are we: (country) (city) (part of city) (number of flat/house) (name of street)?
5□
Registration
Name three objects: 1 second to say each.
Then ask the patient to name all three after you have said them.
Give one point for each correct answer.
3□
Attention and calculation
Serial 7s: Ask the patient to begin with 100 and count backwards by 7. Stop after 5 subtractions
(93, 86, 79, 72, 65). One point for each correct.
5□
Recall
Ask for the three objects repeated above (under Registration).
Give one point for each correct.
3□
Language
Name a pencil and watch (Show the patient a wrist‐watch and ask him or her what it is). Repeat for
pencil. (two points).
Repeat the following: ‘No ifs, ands or buts’ (one point).
Follow a three‐stage command: ‘Take a paper in your right hand, fold it in half and put it on the
floor’ (three points).
Read and obey the following: Close your eyes (one point).
Write a sentence (one point).
Copy a design (one point). On a clean piece of paper, draw intersecting pentagons (as below),
each side about 1 in. and ask him or her to copy it exactly as it is. All 10 angles must be present
and two must intersect to score 1 point. Tremor and rotation are ignored.
9□
Total score____________
A score of 20 or less generally suggests dementia but may also be found in acute confusion,
schizophrenia or severe depression. Mild Alzheimer's is usually linked to an MMSE score of
21–26, Moderate Alzheimer's to scores of 10–20, and severe Alzheimer's to an MMSE score of
less than 10.
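As a rough illustration, the scoring bands described above can be sketched as a simple lookup. The function name and band labels are illustrative rather than part of the MMSE itself, and the text above defines no band for scores of 27–30; note too that a low score is a screening signal, not a diagnosis, since it can equally reflect acute confusion, schizophrenia, or severe depression:

```python
def mmse_severity(score: int) -> str:
    """Map a total MMSE score (0-30) to the Alzheimer's severity
    bands described in Focus Point 15.2. Illustrative only: the
    MMSE is a screening instrument, not a diagnostic test."""
    if not 0 <= score <= 30:
        raise ValueError("MMSE total must be between 0 and 30")
    if score >= 27:
        return "above the bands described (27-30)"  # no band given in the text
    if score >= 21:
        return "mild Alzheimer's range (21-26)"
    if score >= 10:
        return "moderate Alzheimer's range (10-20)"
    return "severe Alzheimer's range (below 10)"
```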
To add to the difficulties of diagnosis, the symptoms of a range of neurological disorders overlap
considerably. For example, damage to specific areas of the brain as a result of a closed head injury can
give rise to similar cognitive deficits as those found in broader degenerative disorders such as
Alzheimer's disease. Similarly, a single causal factor (such as a brain tumour) can manifest as a range of
different symptoms including speech disorder, deficits in sensory perception, or emotionality and
aggressiveness. These are some of the reasons why neurological assessment is thorough and multifaceted
and continues to be administered throughout the period of diagnosis and rehabilitation.
Delirium
The main feature of a delirium is a disturbance of attention and awareness, and the disturbance in
attention is reflected in a reduced ability to direct, focus, sustain, and shift attention, and the disturbance
develops over a short period of time. The person may not understand simple questions, and may be
unable to shift attention from answering one question to answering another. Delirium often occurs in
the context of other neurocognitive disorders, and may be accompanied by memory and learning
deficits, disorientation, and perceptual disturbances such as hallucinations. There may be evidence that
the delirium is a result of the physiological consequences of a general medical condition, substance
intoxication or withdrawal, or use of a medication or a toxin and may be associated with disturbances
in the sleep‐wake cycle, such as a reversal of the day‐night cycle, restlessness, and hyperactivity. The
individual may also exhibit emotional disturbances such as anxiety, fear, depression, irritability, anger,
euphoria, and apathy, accompanied by rapid and unpredictable shifts from one emotional state to
another.
dementias The development of multiple cognitive deficits that include memory impairment
and at least one other specific deficit.
The symptoms of delirium can develop rapidly over hours to days but may begin abruptly after specific
traumatic events, such as head injury. Equally, delirium may resolve in just a few hours, but alternatively
persist for weeks to months especially in the elderly. Delirium appears to result from widespread
disruption of brain metabolism and neurotransmitter activity that can be triggered by a range of events.
These can include traumatic head injury, substance intoxication or withdrawal, surgery, sleep loss,
malnutrition, and psychological stress generally. Delirium is particularly common in older people,
especially those who are hospitalised. The community prevalence rate for delirium is 1–2%, but
increases with age, rising to 14% amongst those over 85 years‐of‐age (DSM‐5, American Psychiatric
Association, 2013, p. 600) (Table 15.2).
TABLE 15.2 Summary: DSM‐5 diagnostic criteria for delirium
A reduced ability to focus, direct, and sustain attention and awareness, developing over a short
period of time (hours to a few days) and fluctuating in severity throughout that time
Additional disturbances in cognitive functioning are also observed
Disturbances are not a result of a preexisting neurological condition and do not occur during the
course of a coma or other reduced level of arousal state
There is evidence that the disturbance is a direct physiological consequence of another medical
condition, substance intoxication or withdrawal, or exposure to a toxin
In the major neurocognitive disorders, language function may deteriorate to the point where the
individual's conversation is vague or empty, and they may be unable to name individual everyday
objects (such as tie, dress, desk, lamp, etc.). The condition may also (but not necessarily) be associated
with apraxia (impaired ability to execute motor activities) and agnosia (the failure to recognise or
identify objects despite intact sensory functioning). Disturbances in executive functioning are also
common and are evidenced by the individual having difficulty coping with new tasks, shifting mental
sets, generating novel verbal information, and using or recalling basic general knowledge. Simple tests
for executive functioning include asking the individual to count to 10, recite the alphabet, do
subtraction, or state as many animals as possible in 1 minute. Poor judgement and poor insight are also
common features of major neurocognitive disorders. They may underestimate the risks involved in
certain activities (e.g., driving) and indulge in inappropriate behaviours, such as making inappropriate
jokes, neglecting personal hygiene, or disregarding conventional rules of social conduct.
TABLE 15.3 Summary: DSM‐5 diagnostic criteria for major neurocognitive disorder
Significant cognitive deterioration from previous level in at least one of the cognitive domains
based on:
Concern of the patient, informant, or doctor that there has been a substantial decline in
cognitive function
A substantial impairment in cognitive performance, preferably as documented by standard
testing
The cognitive deterioration interferes with self‐reliance in everyday activities
The deficit does not occur in the context of delirium
The deficit is not better accounted for by another mental disorder such as major depressive
disorder or schizophrenia
TABLE 15.4 Summary: DSM‐5 diagnostic criteria for mild neurocognitive disorder
Limited cognitive deterioration from previous level in at least one of the cognitive domains based
on:
Concern of the patient, informant, or doctor that there has been a limited decline in
cognitive function
A limited impairment in cognitive performance, preferably as documented by standard
testing
The cognitive deterioration does not interfere with self‐reliance in everyday activities
The deficit does not occur in the context of delirium
The deficit is not better accounted for by another mental disorder such as major depressive
disorder or schizophrenia
DSM‐5 lists a number of specific major neurocognitive disorders, and we will look at these individually
in more detail next. These are Alzheimer's disease, vascular neurocognitive disorder, NCD due
to Parkinson's disease, NCD due to traumatic brain injury, NCD due to human immunodeficiency virus
(HIV) infection, NCD due to Huntington's disease, NCD due to prion disease, and NCD with Lewy
bodies.
FIGURE 15.1 Functional MRI (fMRI) scans showing activation during a motor task for HIV patients with normal
cognitive (NL), minor cognitive–motor disorder (MCMD), and HIV associated dementia (HAD). Darkened regions
indicate areas of activation. Compared with NL, patients with MCMD and HAD have significantly less activation.
From Tucker et al., 2004.
spongiform encephalopathy A fatal infectious disease that attacks the brain and central
nervous system. Commonly known as ‘mad cow disease’ or variant Creutzfeldt–Jakob disease
(vCJD).
prion disease Prion disease represents a group of conditions that affect the nervous system in
humans and animals.
TABLE 15.6 Summary: DSM‐5 diagnostic criteria for neurocognitive disorder due to prion disease
stroke A sudden loss of consciousness resulting when the rupture or occlusion of a blood vessel
leads to oxygen lack in the brain.
The most common causes of infarction are an embolism or a thrombosis. A cerebral embolism is a
blood clot that forms somewhere in the body before travelling through the blood vessels and lodging in
the brain, causing the brain cells to become damaged as a result of oxygen starvation. Cerebral
thrombosis occurs when a blood clot (thrombus) forms in an artery (blood vessel) supplying blood to
the brain. Furred‐up blood vessels with fatty patches of atheroma (an abnormal inflammatory
accumulation of macrophage white blood cells within the walls of arteries) may make a thrombosis
more likely. The clot interrupts the blood supply and brain cells are starved of oxygen.
infarction The injury caused when the blood flow to the brain is impeded in some way,
resulting in damage to the brain tissue fed by that blood flow.
haemorrhage When a blood vessel in the brain ruptures and affects local brain tissue.
cerebral embolism A blood clot that forms somewhere in the body before travelling through
the blood vessels and lodging in the brain, causing the brain cells to become damaged as a result
of oxygen starvation.
cerebral thrombosis An injury caused when a blood clot (thrombus) forms in an artery
(blood vessel) supplying blood to the brain. The clot interrupts the blood supply and brain cells
are starved of oxygen.
Haemorrhaging in the brain is often the result of hypertension (high blood pressure), and is frequently
due to an aneurysm or bulging in the wall of a blood vessel, usually an artery at the base of the brain.
aneurysm A localized bulging in a blood vessel caused by disease or weakening of the vessel
wall.
Strokes are remarkably common—especially in individuals over the age of 55 years. In the UK, an
estimated 100,000 people a year suffer a stroke—although the incidence of strokes in the UK has fallen
by 19% from 1990 to 2010 (The Stroke Association, 2017). Strokes are the fourth most common cause
of death in the UK, and the single most common cause of disability, and over 1.2 million people in the
UK are stroke survivors (The Stroke Association, 2017). Symptoms of a stroke often occur very
suddenly and unexpectedly. Symptoms include numbness, weakness, or paralysis on one side of the
body (signs of this may be a drooping arm, leg, a lowered eyelid, or a dribbling mouth), slurred speech
or difficulty finding words or understanding speech, sudden blurred vision or loss of sight, confusion or
unsteadiness, and a severe headache. The type and severity of symptoms will depend entirely on the
brain area affected by the CVA. The most common longer‐term symptoms of stroke include aphasia,
agnosia, apraxia (see Table 15.1), and paralysis, and a DSM‐5 diagnosis of vascular neurocognitive
disorder is established if there is good evidence for a cerebrovascular event as the cause of the disability,
and the criteria are met for major or mild neurocognitive disorder (see Table 15.3 and 15.4) (Table
15.8). One of the most common forms of stroke is a thrombosis in the left middle cerebral artery,
affecting the left hemisphere. This will cause disability to the right‐hand side of the body (which is
controlled by the left hemisphere) and also cause significant impairment in language ability (e.g.,
aphasia) because the left hemisphere is critically involved in language generation and comprehension.
As well as physical and cognitive deficits, individuals who have suffered a stroke also exhibit emotional
disturbance, often manifested as depressed mood or as emotional lability. Depression in particular is a
common and significant consequence of strokes, and a recent meta‐analysis indicated that 29% of
stroke victims suffer depression up to 10 years after their stroke, with cognitive impairment being one of
the main predictors of poststroke depression (Ayerbe, Ayis, Wolfe, & Rudd, 2013). Levels of depression
are also correlated with the severity of both physical and cognitive deficits (Kauhanen et al., 1999),
suggesting that there may be a link between degree of disability and depression, and depression is
associated with a significantly increased risk of early mortality (Pan, Sun, Okereke, Rexrode, & Hu,
2011). Recovery from physical and cognitive impairment is also significantly retarded in those with
depression (Morris, Robinson, Andrezejewski, Samuels, & Price, 1993; Robinson, Lipsey, Rao, & Price,
1986). It is tempting to conclude that the disabilities resulting from a stroke cause
depression that in turn inhibits recovery. However, the picture is rather more complex than this, and
there seems to be a bidirectional link between stroke and depression (Wium‐Andersen et al., 2019). For
example, depression is a risk factor for strokes. In a prospective study, May et al. (2002) found that men
with significant depressive symptoms were more likely to suffer a stroke within the following
14 years than those without such symptoms. Similarly, treating poststroke depression
with antidepressant medication also has the effect of significantly decreasing mortality rates over a 9‐
year period (Robinson et al., 2000). All of this suggests that depression is an important feature of
disability caused by strokes as well as a risk factor for strokes and is an area where clinical psychologists
might be suitably employed to manage depression in attempts to improve recovery rates and reduce
mortality rates.
TABLE 15.8 Summary: DSM‐5 diagnostic criteria for vascular neurocognitive disorder
Degenerative disorders
Degenerative disorders represent those neurocognitive disorders that are characterised by a slow, general
deterioration in cognitive, physical, and emotional functioning as a result of progressive physical
changes in the brain. Deterioration occurs gradually over a number of years, and degenerative disorders
are most frequently a feature of older age, and around 5–8% of the general population aged 60 and
over are estimated to have dementia at any one time (World Health Organization, 2019). Degenerative
disorders can affect both the cerebral cortex and subcortical regions of the brain. Those that affect
cortical areas cause impairments in cognitive abilities such as memory, language, attention, and
executive functioning (causing amnesia, aphasia, agnosia, slowed thinking, and confusion; see Table
15.1). Disorders affecting subcortical regions of the brain may in addition cause emotional disturbances
and motor coordination difficulties. There are currently an estimated 850,000 people in the UK with
dementia, and one in six people over 80 years‐of‐age have dementia (Alzheimer's Society, 2020a), with
the most common cause of degenerative dementia in the UK being Alzheimer's disease (contributing
55%). Degenerative disorders that in addition significantly affect subcortical areas, and so affect
emotional behaviour and motor coordination, are Parkinson's disease and Huntington's disease.
Diagnosis of degenerative disorders is difficult and complex. First, a degenerative disorder has to be
distinguished from the normal process of ageing. Normal ageing naturally results in a moderate
deterioration of cognitive abilities (such as forgetting or cognitive slowness) and a deterioration in
physical abilities (such as problems with balance and motor coordination). However, degenerative
disorders compound this natural process because they represent an active pathological organic
deterioration of the brain. Second, it is often difficult to distinguish between the different degenerative
disorders that may affect cognitive and physical functioning. Many manifest with very similar cognitive
impairments, such as amnesia. Third, degenerative disorders are most frequently found in the elderly,
and this particular population will often present with a wide range of psychological and medical
problems that complicate diagnosis. For example, anxiety and depression are common features of old
age and may complicate neurological testing. Performance during assessment may also be affected by
other physical illnesses or the effects of medications for other ailments (Shepherd et al., 2015). Finally,
how a degenerative disorder manifests on presentation may differ significantly between individuals
depending on factors such as their level of education, level of family and social support, and their
psychological history. In effect, two individuals with the same disorder may present themselves quite
differently and perform quite differently in assessments depending on a range of social and
psychological factors. Overall prevalence estimates for diagnosed dementia disorders are around 1–2%
at age 65 years and as high as 30% by age 85 years (DSM‐5, American Psychiatric Association, 2013, p.
608).
The following sections continue by describing in detail some of the main degenerative disorders. These
cover neurocognitive disorder due to Alzheimer's disease, frontotemporal neurocognitive disorder,
neurocognitive disorder due to Parkinson's disease, neurocognitive disorder with Lewy bodies, and
neurocognitive disorder due to Huntington's disease.
Known risk factors for Alzheimer's disease include the following (Barranco‐Quintana, Allam, Del
Castillo, & Navajas, 2005): (a) age—which is the principal marker for risk of the disease, (b) sex
—prevalence is higher in women than in men, (c) genetics—having a first‐degree relative with the
disease significantly increases risk, (d) family history of dementia—nearly 40% of those with
Alzheimer's disease have a family history of dementia, (e) a history of head injury (McDowell, 2001), (f)
high levels of anxiety (Becker et al., 2018), and (g) low educational status and low wealth (Cadar et al.,
2018). Interestingly, some activities appear to have a direct or indirect protective value by predicting
lower rates of Alzheimer's disease (even in those with a family history of dementia), and these include
physical activity, smoking, drinking moderate levels of alcohol, and diets high in vitamins B6, B12, and
folic acid (Barranco‐Quintana et al., 2005). However, we must be cautious about how we interpret these
factors. For example, smoking probably protects against Alzheimer's disease largely because it is likely to
prevent smokers from reaching old age, which is when the disease becomes prevalent. In addition, low
educational status may be correlated with Alzheimer's disease because it may adversely affect
performance on the cognitive tasks used to diagnose the disease. Several studies also suggest that factors
such as diet (e.g., a Mediterranean diet), regular exercise, and engagement in intellectual activities also
appear to be useful in warding off cognitive decline (Hamer & Chida, 2009; Lee et al., 2018; Williams,
Plassman, Burke, Holsinger, & Benjamin, 2010). For example, high levels of cognitive activity appear to
protect against cognitive decline even in individuals who have a high genetic risk for Alzheimer's disease
and have already developed the plaques and tangles in their brain that are associated with Alzheimer's
(Wilson, Scherr, Schneider, Tang, & Bennett, 2007).
The DSM‐5 diagnostic criteria for neurocognitive disorder due to Alzheimer's disease are given in Table
15.9. However, Alzheimer's disease itself is difficult to differentiate from other forms of degenerative
dementia, and it is often easier to identify Alzheimer's disease by successively eliminating other types of
disorder that cause dementia symptoms, such as Parkinson's disease, Huntington's disease,
hypothyroidism, HIV infection, substance abuse, or head trauma. This can be achieved using thyroid
function tests, blood tests, and a battery of neuropsychological tests of cognitive function. However, the
recent identification of some of the genes that carry a high risk for the development of Alzheimer's
disease means that genetic testing can be used to identify Alzheimer's as the possible cause of the
dementia or cognitive decline. In addition, neuroimaging plays an important part in the diagnosis of
Alzheimer's disease, by helping to exclude alternative causes of dementia such as brain tumour, cerebral
atrophy, and cerebrovascular disease.
beta amyloid plaques Abnormal cell development, possibly caused by abnormal protein
synthesis in the brain, which clump together with the consequence of killing healthy neurons.
neurofibrillary tangles Abnormal collections of twisted nerve cell threads which result in
errors in impulses between nerve cells and eventual cell death.
TABLE 15.9 Summary: DSM‐5 diagnostic criteria for neurocognitive disorder due to Alzheimer's disease
Another factor that is thought to be important in Alzheimer's disease is the faulty production of the
brain neurotransmitter acetylcholine, and Alzheimer's disease appears to affect structures involved in
the production of acetylcholine (Hampel et al., 2018). The enzyme acetylcholinesterase normally breaks
down acetylcholine after use so it can be recycled, but in Alzheimer's disease acetylcholine levels fall too
low and memory and other brain functions are impaired. A number of drug treatments can be utilised
to help facilitate acetylcholine production in the brain, and we discuss these in more detail in the
following section on treatment and rehabilitation.
There also appears to be a significant inherited component to Alzheimer's disease, with up to an
estimated 50% of the first‐degree relatives of sufferers also developing the disorder (Korten et al., 1993).
In addition, twin studies suggest that the heritability of the disease is high, at between 58% and 79% (Gatz
et al., 2006). Up to 43 susceptibility genes have been identified for late‐onset Alzheimer's disease
(Kamboh, 2018), and these include ApoE4 and GAB2 (Bertram & Tanzi, 2008; Bookheimer &
Burggen, 2009), and currently, genetic testing to identify such genes is often used to enable diagnosis.
Some of these genes may increase susceptibility to Alzheimer's disease, while others may play a more
direct causal role by generating the proteins that form beta amyloid plaques, which damage brain
tissue (e.g., by causing overproduction of beta amyloid that leads to the loss of healthy neurones).
In addition, some genes may play a role in early‐onset Alzheimer's disease, whereas others appear to be
linked to late onset (Bertram & Tanzi, 2008). All this suggests that, while there is no doubt about the
importance of an inherited component to Alzheimer's disease, it may have multiple causes, and a
number of genetic mechanisms may contribute to the factors which cause degenerative brain damage
(Focus Point 15.3).
FIGURE 15.3 Beta amyloid plaques and neurofibrillary tangles in the cerebral cortex in Alzheimer's disease. Plaques
appear to be caused by abnormal protein synthesis in the brain and they clump together killing healthy neurones. Tangles
consist of abnormal collections of twisted nerve cell threads causing errors in impulses between nerves. Certain genes such
as ApoE4 have also been identified that promote the formation of abnormal amyloid plaques (Bookheimer & Burggren,
2009).
From Blennow, de Leon, & Zetterberg, 2006.
FOCUS POINT 15.3 THE PROS AND CONS OF GENETIC
TESTING FOR DEGENERATIVE BRAIN DISORDERS
Some people may wish to use genetic testing to assess their risk of developing dementia later in
life. Genetic testing is not a straightforward issue and individuals need to think very carefully
before deciding to take such a test. The experience might be very difficult emotionally, may not
provide conclusive results either way, and may cause practical difficulties.
Possible advantages of genetic testing include that it might:
help genetic researchers understand the disease better and so lead to improved treatment
encourage someone to adopt a healthier lifestyle
allow people who are at a high risk of developing dementia to benefit from new treatments
that may become available in the future
help people to plan for the future
However, genetic testing may create problems, for the following reasons:
A genetic defect cannot be repaired, and effective treatment to slow the disease is not yet
generally available. A genetic test might therefore raise anxiety without offering a clear
course of action.
There is a risk of reading too much into the test results. Testing positive for one or two
gene variants does not mean a person will definitely develop Alzheimer's and testing
negative does not guarantee that they will be free from Alzheimer's.
People testing positive for any genetic test could face discrimination affecting their ability
to buy property, get insurance, or plan financially for their old age. However, following a
brief moratorium on the use of genetic information by UK insurance companies, a
voluntary code of practice was agreed between the UK Government and insurers in 2018
clarifying how information on genetic testing can be used in a way that does not
discriminate against those who have received positive tests (HM Government, 2018).
Parkinson's sufferers who develop psychological problems experience memory difficulties and exhibit
deficits in learning, judgement, and concentration, as well as becoming socially withdrawn and apathetic. It is
estimated that up to 75% of individuals with Parkinson's disease may eventually develop dementia, and
these symptoms can occur as early as 1–2 years after onset of the disease (Ehrt & Aarsland, 2005;
Williams‐Gray, Foltynie, Lewis, & Barker, 2006). As well as signs of cognitive impairment, Parkinson's
sufferers also regularly exhibit symptoms of psychosis and depression. Hallucinations occur in between
16% and 40% of sufferers, and this has often been considered as a medication‐induced phenomenon.
That is, the drugs used to facilitate substantia nigra dopamine production in sufferers are also known to
produce psychosis‐like symptoms. However, there is reason to believe that psychosis symptoms such as
hallucinations may also be intrinsic to the disease and result from progressive dementia or impairments
in primary visual processing (Williams‐Gray et al., 2006). Studies also suggest that depression is a
significant feature of Parkinson's disease in between 25% and 40% of sufferers (Leentjens, 2004), and
this is often considered to be an understandable reaction to having to cope with a chronic and
debilitating disease. However, as in the case of Alzheimer's disease, depression is also a significant
predictor of subsequent Parkinson's diagnosis, with the incidence of depression increasing significantly
in the 3 years prior to diagnosis of Parkinson's (Leentjens, Van den Akker, Metsemakers, Lousberg, &
Verhey, 2003). The fact that depression appears to be a biological risk factor for a number of
degenerative dementias has given rise to the view that depression may be accompanied by an
allostatic state (a biological state of stress) that can accelerate disease processes and cause atrophy of
nerve cells in the brain, in turn leading to dementia (McEwen, 2003).
Post‐mortem studies of individuals with Parkinson's disease suggest an association between dementia
and Lewy body deposition. Lewy bodies are abnormal protein deposits that disrupt the brain's normal
functioning. These Lewy body proteins are found in an area of the brain stem where they deplete the
neurotransmitter dopamine, causing Parkinson's symptoms. These abnormal proteins can also diffuse
throughout other areas of the brain, including the cerebral cortex, causing disruption of perception,
thinking, and behaviour. Around 80% of individuals with Parkinson's disease will develop dementia with
Lewy bodies, but others without Parkinson's disease can develop neurocognitive disorders purely as a
result of the development of Lewy bodies. This led DSM‐5 to introduce a new diagnostic category
known as Neurocognitive disorder with Lewy bodies. A diagnosis of NCD due to Parkinson's
disease is given if Parkinson's disease clearly precedes the onset of the neurocognitive disorder, and a
diagnosis of NCD with Lewy bodies is given if there is no convincing evidence of Parkinson's disease in
the aetiology (Tables 15.11 and 15.12). The prevalence of Parkinson's disease in the US begins at
around 0.5% between 65 and 69 years of age and rises to 3% by 85 years of age. Estimates of NCD
with Lewy bodies in the general elderly population range from 0.1% to 5% (DSM‐5, American
Psychiatric Association, 2013, p. 619), and this form of neurocognitive disorder may account for up to
30% of all dementias (Figure 15.4).
Lewy bodies Abnormal protein deposits that disrupt the brain’s normal functioning.
NCD due to Huntington's disease
Huntington's disease is an inherited, degenerative disorder of the central nervous system, caused by
a dominant gene. This means that everyone who inherits the gene will develop the disease, and each
child of an affected parent has a 50% chance of inheriting it (see Focus Point 1.4 in Chapter 1 for a fuller explanation of the
genetics of Huntington's disease). Symptoms of the disorder do not normally occur until after the age
of 35 years and can have an even later onset (however, the earlier the onset, the more severe the disease
tends to be). It is principally a movement disorder, with the first observable behavioural symptoms
manifesting themselves as clumsiness and an involuntary, spasmodic jerking of the limbs. However,
the earliest signs of the disease are often radical changes in temperament. The individual may become
rude, exhibit unpredictable mood changes, and switch dramatically from depression to euphoria.
Cognitive functioning is affected as the disease develops, and this is manifested as impairments in
memory, attention, and decision making leading to full dementia. In addition, as the disease progresses,
the sufferer may also begin to exhibit psychotic symptoms, including hallucinations and delusions. The
general psychological syndrome associated with Huntington's disease includes affective symptoms,
cognitive deficits, personality disorganisation, bloody‐mindedness, early loss of common sense,
hallucinations, delusional ideation, odd behaviours, and obsessions (Wagle, Wagle, Markova, & Berrios,
2000). Psychopathological symptoms associated with the disease include depression, mania,
schizophrenia, paranoia, anxiety, and obsessive‐compulsive behaviours (Barquero‐Jimenez & Gomez‐
Tortosa, 2001). Huntington's disease affects an estimated 3 to 7 people per 100,000 who have European
ancestry. The disorder is less common in other populations, including those of Japanese, Chinese, and
African descent (US National Library of Medicine, 2020).
TABLE 15.11 Summary: DSM‐5 diagnostic criteria for neurocognitive disorder due to Parkinson's disease
mutant Huntingtin (mHtt) A protein which causes cell death in the basal ganglia and
contributes to Huntington’s disease.
TABLE 15.13 Summary: DSM‐5 diagnostic criteria for neurocognitive disorder due to Huntington's disease
SELF‐TEST QUESTIONS
Can you define the terms amnesia, aphasia, agnosia, and apraxia?
What is the difference between Broca's aphasia and Wernicke's aphasia?
What specific skills are impaired when neurological disorders cause deficits in executive
functioning?
Can you name some of the difficulties involved in diagnosing specific neurological
disorders?
What kinds of individual abilities are assessed using the Wechsler Adult Intelligence Scale
(WAIS) IV or the Adult Memory and Information Processing Battery (AMIPB)?
Can you describe the DSM‐5 diagnostic criteria for delirium?
Can you name two or more types of cerebral infection that may cause cognitive deficits?
What are the main causes of traumatic brain injury?
What are some of the emotional and psychological consequences of suffering a stroke?
Can you name the main neurocognitive degenerative disorders?
What is frontotemporal neurocognitive disorder and how does it differ from other
neurocognitive disorders?
Can you describe the main characteristics of Alzheimer's disease?
What are the main risk factors for Alzheimer's disease and what changes in the brain are
thought to occur during the disorder?
What areas of the brain are affected by Parkinson's disease, and what are the cognitive and
emotional symptoms associated with the disorder?
What are Lewy bodies, and how are they involved in neurocognitive deficits?
SECTION SUMMARY
Drug treatments
There has been some success in recent years in developing drugs that can help to slow the progress of
degenerative disorders such as Alzheimer's disease and Parkinson's disease. In the case of Alzheimer's
we have already noted that the disease is often associated with abnormalities in the production of the
brain neurotransmitter acetylcholine. During the course of the disease, the enzyme
acetylcholinesterase breaks down acetylcholine and leads to depletion of this neurotransmitter. In
order to combat this effect, drugs have been developed that prevent acetylcholine breakdown in the
synaptic cleft by acetylcholinesterase and so increase its availability at postsynaptic receptors. The
most common of these drugs are donepezil, rivastigmine, and galantamine, and they are
collectively known as cholinesterase inhibitors (Petersen, Thomas, Grundman, & Thal, 2005).
Randomised, double‐blind, placebo‐controlled trials suggest that treatment for 6 months with
cholinesterase inhibitors produces moderate improvements in cognitive function in those with mild to
moderate Alzheimer's disease (Hitzeman, 2006). They may also help to slow memory decline (Birks,
2006), and prospects are best when treatment begins early in the course of the disease (Seltzer, 2006).
Although the emphasis has been on identifying early signs of Alzheimer's so that drug treatment can
begin as soon as possible, there is also some evidence that cholinesterase inhibitors such as donepezil can
improve cognition in individuals with severe Alzheimer's (Winblad et al., 2006), and accumulating
evidence that donepezil can also help to alleviate behavioural symptoms, mood disturbances, and
delusions associated with Alzheimer's (Cummings, McRae, & Zhang, 2006). However, recent systematic
reviews of cholinesterase inhibitors found no evidence that they could prevent dementia (Cooper, Li,
Lyketsos, & Livingston, 2013; Sharma, 2019). In 2011 the UK National Institute for Health and Care
Excellence (NICE) recommended to the NHS that donepezil, rivastigmine, and galantamine be made
available as part of the management of mild and moderate Alzheimer's disease, and those to be
targeted should score 10 points or higher on the MMSE (NICE, 2011) (see Focus Point 15.2). However,
more recent guidelines question whether prescribing anticholinergic drugs is appropriate before
randomised controlled trials are conducted to compare the potential efficacy of such medications
(NICE, 2018). While some cholinesterase inhibitors (especially galantamine) may have some effects in
slowing cognitive decline in Alzheimer's disease (e.g., Li, Zhang, Zhang, & Zhao, 2019), there is as yet
no reliable evidence for the efficacy of any drug treatments for frontotemporal neurocognitive disorder
(Schwarz, Froelich, & Burns, 2012).
Parkinson's disease is associated with degeneration in the substantia nigra area of the brain, where the
important neurotransmitter dopamine is produced. The main drug that is used to counteract this
decline in dopamine is levodopa, a natural amino acid that the brain converts into dopamine to
replace the depleted neurotransmitter. Although the drug has been relatively successful in helping
sufferers to control tremor and other motor symptoms, there is little evidence that levodopa alleviates
any of the cognitive impairments associated with Parkinson's dementia (Morrison et al., 2004),
although it may prevent a decline in cognitive function (Ikeda, Kataoka, & Ueno, 2017). Levodopa
administration has to be closely supervised because it also has a number of potential side effects,
including hypertension, and delusions and hallucinations similar to those found in schizophrenia and
amphetamine psychosis.
levodopa A natural amino acid that is converted by the brain into dopamine and is used in the
treatment of Parkinson’s disease.
Medication can also be successful in reducing disability following cerebrovascular accidents such as
strokes. Thrombolytic therapy is the use of drugs to break up or dissolve blood clots—one of the
main causes of strokes. The most commonly used thrombolytic drug is tissue plasminogen activator (t‐
PA), and if this is administered within the first 3 hours of a stroke then disability is significantly reduced
(Albers, 1997; Hacke et al., 2004). Nevertheless, the success of this treatment is critically dependent on
the individual being able to identify the early signs of a stroke and seek rapid treatment, and stroke
patients can begin to show cognitive impairments within 24 hours of stroke onset. Although early
administration of thrombolytic therapy can significantly aid survival and physical recovery, there is only
modest evidence that administration of t‐PA is associated with improvement in cognitive function
(Rosenbaum Halevi et al., 2019; Nys, van Zandvoort, Algra, Kappelle, & de Haan, 2006).
Thrombolytic therapy The use of drugs to break up or dissolve blood clots – one of the
main causes of strokes.
Medication is also used in the treatment of brain deficits caused by cerebral infections. Bacterial
infections, such as certain types of encephalitis and meningitis, are treatable with antibiotics. However,
many viral infections are much more problematic, although antiviral drugs can be used to combat viral
infections such as herpes encephalitis. In the case of HIV‐1 associated dementia, newly developed
antiretroviral drugs are proving to be effective in reducing the severity of HIV dementia and the
prevalence of diagnoses of neurocognitive disorder (Nath & Sacktor, 2006; Crum‐Cianflone et al.,
2013). Usually, up to three to four antiretroviral drugs are used that act at different stages of the virus
life cycle. This produces a dramatic reduction in viral load (the level of virus in the blood) and prevents
further immune damage.
antiretroviral drugs Chemicals that inhibit the replication of retroviruses, such as HIV.
Finally, mood disorders (such as depression) are a common feature of neurological disorders, including
stroke, traumatic brain injury, and degenerative disorders, and depression can often adversely affect the
course of the disorder, prevent recovery, and increase mortality rates (Leentjens, 2004; Ramasubbu &
Patten, 2003; Robinson, Lipsey, Rao, & Price, 1986). The use of drugs such as selective serotonin
reuptake inhibitors (SSRIs) and tricyclic antidepressants to help alleviate depressed mood has proven to
be successful in improving recovery from strokes (Hackett, Anderson, & House, 2005), alleviating
symptoms of depression in Parkinson's disease and Alzheimer's disease (Modrego, 2010; Weintraub et
al., 2005), and improving mood and cognitive performance following traumatic head injury (Horsfield
et al., 2002). In at least some of these disorders (e.g., Parkinson's disease) there is a view that depression
is an integral symptom of the disorder—especially because depression often precedes and predicts other
symptoms of the disease as well as affecting outcome. So tackling depression can be considered a direct
treatment of the disorder itself rather than dealing with a side effect of disability (e.g., Leentjens, 2004).
deep brain stimulation (DBS) A form of treatment for Parkinson’s disease which uses a
surgically implanted, battery-operated device called a neurostimulator to deliver electrical
stimulation to the ventral intermediate nucleus of the thalamus or the subthalamic nucleus area
in the basal ganglia.
This section continues by providing some examples of cognitive rehabilitation procedures that have
been shown to be effective in the rehabilitation of specific impairments. We then look at an example of
the holistic rehabilitation method.
Attention deficits
One form of rehabilitation training for attention deficits is known as Attention Process Training
(APT), and this uses a number of different strategies to promote and encourage attentional abilities
(Sohlberg & Mateer, 2010). Exercises include listening to an audio recording that contains target words
that must be responded to by pressing a buzzer. Learning to shift attention appropriately is also
encouraged by learning to attend to a new word following identification of a preceding target word.
APT has been shown to be superior to basic therapeutic support in promoting attention and memory
functioning (Sohlberg, McLaughlin, Pavese, Heidrich, & Posner, 2000) and has also been shown to
provide gains in other everyday skills such as independent living and driving ability (Sohlberg & Mateer,
2001). An alternative approach to dealing with attention deficits is not to try and improve attention
itself, but to provide the client with some compensatory skills that will allow them to effectively manage
their slowed information processing (Fasotti, Kovacs, Eling, & Brouwer, 2000). This is known as time
pressure management(TPM) and is an alternative to ‘concentration’ training of the kind taught by
the APT procedure.
Attention Process Training (APT) A form of rehabilitation training for attention deficits
that uses a number of different strategies to promote and encourage attentional abilities.
time pressure management (TPM) An approach to dealing with attention deficits which
aims not to try to improve attention itself, but to provide clients with some compensatory skills
that will allow them to effectively manage their slowed information processing.
CLINICAL PERSPECTIVE: TREATMENT IN PRACTICE 15.1
THE VIRTUAL REALITY KITCHEN
This virtual reality computer programme provides a safe and controlled environment for
patients with brain injury to learn and improve basic daily skills such as preparing a meal of a
can of soup and a sandwich. All necessary objects are found on the computer screen and can
be accessed by using the computer mouse. Prompts appear on the screen initiating actions,
sequencing actions, and providing reinforcing feedback for correct actions. For example, one of
the first steps for preparing a can of soup is to remove the can from the cupboard. If this does
not occur within a predetermined time, the cupboard door is highlighted by a pulsating colour.
If the action is still not initiated, a verbal cue tells the patient to ‘open the cupboard’. Each
action performed by the patient is recorded and their performance can be quantitatively
assessed over time. Training in virtual environments such as this results in improved
performance on the tasks over time and performance on the virtual task correlates well with
performance on the tasks in a real kitchen.
Visuospatial deficits
A number of programmes have been developed for the rehabilitation of unilateral visual neglect and to
compensate for partial deficits in visual perception caused by neurological disorders. One such example
is the computer‐assisted training programme designed to aid visual scanning (Pizzamiglio, Guariglia,
Antonucci, & Zoccolotti, 2006). This consists of a series of tasks in which the patient is asked to (a) read
out coloured numbers projected on to a wall (scanning the full frontal environment), (b) manually track
a red ball projected onto a wall (helping coordination of scanning and physical movement), (c) react to
moving images as they are projected in front of them (facilitating detection of stimuli in space), and (d)
move the projected image of a wheelchair down a simulated three‐lane road while avoiding obstacles.
Visual scanning training has been shown to reduce unilateral visual neglect symptoms and to generalise
to improved performance on a real‐life wheelchair obstacle course.
gestural training A form of rehabilitation training for limb apraxia in which the client is
taught to recognise gestures and postures that are appropriate and in context.
In contrast, computer‐based virtual reality (VR) environments have been developed to enable disabled
individuals with brain injury to learn and improve basic daily living skills in a safe and controlled
environment. For example, Zhang et al. (2003) developed a virtual kitchen in which the patient can
learn the sequence of behaviours required to make a bowl of soup or prepare a sandwich (see
Treatment in Practice 15.1). Zhang et al. (2003) found that training in this virtual environment
resulted in improved performance on both tasks, and that performance on the virtual tasks correlated
well with performance on the same tasks in a real kitchen. Learning in virtual environments may be
equally as effective as other cognitive retraining procedures (Jacoby et al., 2013). Recent systematic reviews of
virtual reality training suggest that VR is a feasible and effective tool in the treatment of a range of
neurological disorders, including dementia, stroke, spinal cord injury, Parkinson's disease, and multiple
sclerosis (Schiza, Matsangidou, Neokleous, & Pattichis, 2019).
Finally, a number of specific techniques exist to help individuals with aphasia and traumatic brain
injury improve their ability to name objects, improve writing skills, and improve sentence production,
and these may range from the use of cuing techniques to help the patient name specific objects to
semantic feature analysis (SFA) designed to improve lexical retrieval by increasing the level of activation
within a semantic network (e.g., Boyle, 2010; Coelho, McHugh, & Boyle, 2000).
Memory deficits
Procedures for dealing with memory impairments mainly revolve around what are known as
compensatory strategies—that is, providing patients with specific strategies for remembering
material on a daily basis. Compensatory strategies of this kind tend to be more efficient than simple
remedial strategies and more easily generalisable to daily activities (Cotelli, Manenti, Zanetti, &
Miniussi, 2012; Nadar & McDowd, 2010). Compensatory strategies may involve assistive technology
such as using diaries to aid recall of daily events, labelling cupboards to remember where everyday
items are stored or located, or using a pager to remind the individual of important daily events. Wilson,
Emslie, Quirk, and Evans (1999) report the case study of a young man called George who had severe
memory impairments after a head injury sustained in a road traffic accident. The pager could be used
to remind George about a range of tasks and activities during the day. Using assistive technology such
as a pager as a memory prompt has been shown to be effective, easy to use, and significantly reduces the
number of memory and planning problems experienced by people with traumatic brain injury (Wilson,
Emslie, Quirk, Evans, & Watson, 2005). Assistive technology (e.g., smartphones) can also be used
effectively in many different ways for memory support. For example, a range of assistive technology
devices has now been developed to support people with dementia and their carers to manage their daily
activities and to enhance safety, for example, electronic pill boxes, picture phones, and mobile tracking
devices (Van der Roest, Wenborn, Pastink, Droes, & Orrell, 2017). In one example, a smartphone app
has been developed to help individuals with dementia recognise friends and family. The app is installed
on the phones of patients and friends, family, and caregivers and uses GPS tracking to flash an alert to
the patient when one of the group members is nearby. The app then tells the patient who is
approaching and what the patient's relationship is to that person. It then provides information and
pictures of the relevant group member (Steele, 2016). Assistive technology can be used in a variety of
ways to aid impaired memory. Computers, tablets, and smartphones can function as simple memory
aids to enhance prospective memory (e.g., by acting as an electronic diary), by providing memory
training exercises, or by instructing the patient in the use of memory strategies (Kapur, Glisky, & Wilson,
2004).
Teaching memory strategies is also beneficial. One technique involves training the patient in the
use of visual imagery mnemonics in order to help store and retrieve items and events to be
remembered. Ten weeks of training in visual imagery techniques has been shown to result in significant
improvement in memory functioning 3 months after treatment (Kaschel et al., 2002). Errorless
learning is also a technique that has proven to be helpful in training individuals with amnesia.
Errorless learning is a training procedure where people are prevented—as far as possible—from making
any errors while learning a new skill or new information (Baddeley & Wilson, 1994), and in the context
of memory impairments it is useful for teaching new knowledge or training specific skills such as helping
the patient to find and use the right word to name objects. There is still some debate over whether
memory aids (e.g., smartphones, tablets, pagers, diaries, and personal organisers) are superior to
memory treatments (e.g., attempts to train better memory functioning), but in many cases a combination
of both aids and ‘treatments’ seems to be most effective (e.g., Middleton & Schwartz, 2012; Ownsworth
& McFarland, 1999).
Errorless learning A training procedure used in training individuals with amnesia where
people are prevented – as far as possible – from making any errors while learning a new skill or
new information.
goal management training (GMT) A procedure that involves training in problem solving
to help evaluate a current problem, followed by specification of the relevant goals, and
partitioning of the problem-solving process into subgoals or steps.
Many types of intervention for executive functioning deficits focus on both behavioural and emotional
regulation, and aim to train the individual in self‐regulation when confronting a problem and managing
their way through the sequence of cognitive and behavioural actions required to solve a problem. One
such procedure is known as self‐instructional training (SIT), where the individual learns a set of
instructions for talking themselves through particular problems. Such types of intervention have been
shown to raise personal self‐awareness of deficits and increase use of successful problem‐solving
strategies, but importantly these methods have the additional beneficial effects of improving emotional
self‐regulation and reducing outward expressions of anger and frustration (Medd & Tate, 2000;
Ownsworth, McFarland, & Young, 2000).
Finally, because of the nature of neurocognitive disorders and the frequent need for long‐term care,
programmes of support and training are increasingly becoming available for caregivers. These include
programmes to provide emotional support for caregivers, programmes to provide appropriate
management and coping skills for living with individuals with disabilities, and local or national support
groups that provide advice and information.
SELF‐TEST QUESTIONS
What is the difference between a restorative treatment and compensatory skills training?
How have drugs been used in the treatment of neurological disorders? Do such drugs help
in the treatment of the cognitive deficits caused by the disorder?
Can you describe at least one specific intervention designed to address each of the
following: attention deficits, visuospatial deficits, apraxia, language and communication
deficits, memory deficits, and deficits in executive functioning?
What are holistic rehabilitation methods, and how do they differ from specific restorative
interventions?
What are the main problems encountered by those giving care to people suffering from
neurological disorders, and what interventions have been developed to help them?
SECTION SUMMARY
CHAPTER OUTLINE
16.1 THE DIAGNOSIS AND PREVALENCE OF CHILDHOOD AND
ADOLESCENT PSYCHOLOGICAL PROBLEMS: SOME GENERAL ISSUES
16.2 DISRUPTIVE BEHAVIOUR PROBLEMS
16.3 CHILDHOOD AND ADOLESCENT ANXIETY AND DEPRESSION
16.4 THE TREATMENT OF CHILDHOOD AND ADOLESCENT
PSYCHOLOGICAL PROBLEMS
16.5 CHILDHOOD AND ADOLESCENT PSYCHOLOGICAL PROBLEMS
REVIEWED
LEARNING OUTCOMES
When you have completed this chapter, you should be able to:
1. Describe and evaluate some of the difficulties involved in diagnosing and treating
childhood and adolescent psychological problems.
2. Describe the characteristics of at least two disruptive behaviour disorders.
3. Describe the characteristics of childhood and adolescent anxiety and depression.
4. Compare and contrast theories of the aetiology of disruptive behaviour disorders,
childhood anxiety, and childhood depression.
5. Describe the characteristics of some of the main therapeutic methods used to treat
childhood and adolescent psychological problems and evaluate their efficacy.
My own troubles began when I was 3 years old and my father died abruptly of a brain tumour. A few years later
my mother was diagnosed with breast cancer and died when I was 11 years old. Watching so many important
people die was frightening and confusing. Even so, the most traumatic event of my childhood was my placement
into foster care. Although my aunts and uncles hinted at the possibility, I never believed they would give me away. I
threatened to jump off a high building if they went ahead with the plan. I knew they were conspiring to banish me
and I did not trust a single one of them. But they did it anyway. I'd had enough of the cycle of attachment and
desertion and decided I wasn't going to become attached to my new foster parents. To the outside world I was
withdrawn and detached. Yet towards myself I was overwhelmed by intense feelings of rage and hatred. My foster
mother repeatedly spoke of her disappointment in me and angrily talked about sending me away. I knew from my
brother that foster children often go from place to place and that being physically or sexually abused was common.
During the following 2 years I continued to float through time and space in a state of numb, disorganised misery,
going through the motions but not really alive. I was aware of my impairment, and ashamed of it. I believed I
was peculiar. When I was about 16, I became absorbed with the idea that my central problem was a bodily
defect, and I focussed on one aspect of my anatomy after another, determined to find the specific flaw. At 17 I
developed the sensation of a lump in my throat and became convinced I was about to choke to death. I had no
labels for any of my experiences, so I did not realise this latest state was a form of anxiety. Every night I stayed
awake to the point of exhaustion.
Frank's Story
Introduction
The study of childhood and adolescent psychopathology is fraught with a range of difficulties that are
not experienced in the study of adult psychopathology. First, any behavioural or psychological problems
have to be assessed in the context of the child as a developing organism. For example, bed‐wetting is
quite normal in infancy but might be a sign of anxiety or adjustment problems after the age of 5 years.
Similarly, shyness and withdrawal from social contact are often normal during periods of social
development as the child attempts to understand the rules of social interaction and learns how to
verbally communicate with others. However, in early adolescence these may represent the first signs of
psychopathology. In addition, children may often go through brief stages of development when they
exhibit behavioural problems or fears and anxieties, but these problems often disappear as rapidly as
they appeared. Most parents have experienced a child who refuses to eat or very suddenly becomes
frightened of noises, strangers, or certain types of animals, only for this to disappear within a matter of
weeks or even days. Second, because of their immaturity, children will tend to have poor self‐knowledge.
They may feel that something is wrong but be unable to label it as anxiety or depression, or convey
clearly how they feel to others. In such circumstances, the clinician has to infer psychological states from
overt behaviour and decide whether that behaviour is unusual for the developmental stages through
which the child is passing. With these issues in mind, clinicians have tended to organise childhood
psychological problems into two broad domains based on the general behavioural characteristics of the
child. The first domain covers externalising disorders, which are based on outward‐directed
behaviour problems such as aggressiveness, hyperactivity, noncompliance, or impulsiveness. The second
domain covers internalising disorders, which are represented by more inward‐looking and
withdrawn behaviours and may represent the experience of depression, anxiety, and active attempts to
socially withdraw. The former are now more commonly known as disruptive behaviour disorders, and
include the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM‐5) diagnosable disorders
such as conduct disorder (CD) and attention deficit hyperactivity disorder (ADHD). The
internalising disorders are still difficult to diagnose reliably, but DSM‐5 does include guidelines for
diagnosing childhood separation anxiety, generalised anxiety disorder (GAD), and major
depressive disorder. The childhood and adolescent disorders discussed in this chapter are covered in
a number of different chapters in DSM‐5, including the chapter on disruptive, impulse control, and
conduct disorders (oppositional defiant disorder [ODD], conduct disorder), neurodevelopmental
disorders (attention‐deficit/hyperactivity disorder, ADHD), anxiety disorders (childhood separation
anxiety, childhood GAD), depressive disorder (childhood major depression), and obsessive‐compulsive
and related disorders (childhood OCD).
separation anxiety A childhood anxiety problem involving an intense fear of being separated
from parents or carers.
Frank's Story described at the beginning of this chapter illustrates the kinds of experiences that might give
rise to psychological distress in childhood and adolescence and how this distress may be manifested in
behaviour. This personal account describes the negative emotional impact of the death of his mother
and father, his feelings of abandonment and impotency, and how this affected his ability to form
relationships. This in turn gave rise to feelings of guilt, shame, and inadequacy, and in adolescence
finally manifested as specific psychological problems such as body dysmorphic disorder and somatic
symptom disorder. He related this story as an adult and as such was able to look back on his childhood
and put his behaviour and emotions into a perspective that enabled him to understand them. But events
often seem confusing and uncontrollable to a child, and a clinician has to interpret what a child might
be feeling and experiencing from their behaviour alone (for example, much of Frank's behaviour might
be seen as internalising, and suggestive of anxiety and depression). Nevertheless, even during an
upbringing that is relatively trauma free, children will frequently experience childhood as a threatening
and frightening time. They will develop anxieties as they experience new people and new situations
(Crijnen, Achenbach, & Verhulst, 1999), will worry about many of their everyday activities such as
attending school (Ollendick, King, & Muris, 2002; Vasey, 1993), and develop behavioural problems such
as temper tantrums, eating irregularities, nightmares, and phobias. As the child moves into adolescence
even more challenges await as the individual develops sexually, changes physically, encounters
educational and occupational challenges, and moves into a new period where feelings of responsibility
and self‐worth are expected of them. It is perhaps not surprising at this stage that many adolescents
encounter feelings of confusion, anxiety, and depression while attempting to cope with these changes
(Lerner, 2002), and it is also not surprising therefore that the initial symptoms of many of the adult
mental health disorders we have covered earlier in this book first begin to develop during adolescence
(e.g., schizophrenia, paraphilic disorders, somatic symptom disorders, etc.).
In the following section of this chapter we look briefly at the difficulties involved in addressing
psychological problems of childhood and adolescence, and then look at the prevalence rates of specific
disorders. The remainder of the chapter looks in detail at the diagnosis, aetiology, and treatment of the
following: (a) attention deficit and disruptive behaviour disorders, and (b) childhood anxiety and
depression.
TABLE 16.1 Childhood Risk Factors for Adult Mental Health Problems

Childhood experience (risk factor) | Adult mental health problem | Reference | Chapter
Abnormal parent–child interaction style | Social anxiety disorder | Moore, Whaley, & Sigman (2004) | 6
Abnormal parent–child interaction style | Narcissistic and obsessive compulsive personality disorder | Johnson et al. (2003) | 12
Abnormal parent–child interaction style | Antisocial personality disorder | Gabbard (1990) | 12
Abnormal parent–child interaction style | Borderline personality disorder | Graybar & Boutilier (2002) | 12
Abnormal parent–child interaction style | Histrionic personality disorder | Bender, Farber, & Geller (2001) | 12
Childhood abuse (physical and sexual) | OCD | Grisham et al. (2011) | 6
Childhood abuse (physical and sexual) | PTSD | Wolf et al. (2012) | 6
Childhood abuse (physical and sexual) | Depression (reduced autobiographical specificity) | Raes, Hermans, Williams, & Eelen (2005) | 7
Childhood abuse (physical and sexual) | Psychosis | McGrath et al. (2017) | 8
Childhood abuse (physical and sexual) | Substance use disorder | Sartor et al. (2013) | 9
Childhood abuse (physical and sexual) | Suicide and suicidal ideation | Gould & Kramer (2001) | 7
FIGURE 16.1 The prevalence of any diagnosable mental health problem in childhood. Percentage of children (boys and
girls) with a diagnosable mental health problem in age groups 5–10 years, 11–16 years, and 17–19 years of age.
From Office for National Statistics (2018) Mental health of children and young people in England, 2017.
https://files.digital.nhs.uk/A0/273EE3/MHCYP%202017%20Trends%20Characteristics.pdf
SELF‐TEST QUESTIONS
Can you describe four difficulties involved in the detection and diagnosis of childhood
psychological disorders?
What kinds of childhood events act as precursors or risk factors for adult mental health
problems?
How prevalent are childhood psychological disorders?
SECTION SUMMARY
‘Mark, age 14, has more energy than most boys his age. But then, he's always been overly
active. Starting at age 3, he was a human tornado, dashing around and disrupting everything in
his path. At home, he darted from one activity to the next, leaving a trail of toys behind him. At
meals, he upset dishes and chattered nonstop. He was reckless and impulsive, running into the
street with oncoming cars, no matter how many times his mother explained the danger or
scolded him. In the playground, he seemed no wilder than the other kids. But his tendency to
overreact—like hitting playmates simply for bumping into him—had already gotten him into
trouble several times. His parents did not know what to do. Mark's doting grandparents
reassured them, “Boys will be boys. Do not worry, he'll grow out of it”. But he did not’.
From http://www.nimh.nih.gov/publicat/adhd.cfm#adhd2
TABLE 16.2 Summary: DSM‐5 Diagnostic Criteria for Attention Deficit Hyperactivity Disorder (ADHD)
An ongoing pattern of inattention and/or hyperactivity and impulsivity that interferes with
normal functioning or development, as marked by the following:
Inattention. At least six of the following for at least 6 months:
Not paying close attention to details or making careless mistakes
Difficulty in maintaining attention in activities
Does not listen when spoken to directly
Ignores instructions
Has difficulty organising
Dislikes or avoids tasks which require sustained mental effort
Loses things needed for tasks
Easily distractible
Forgetful in daily activities
Hyperactivity and Impulsivity. At least six of the following for at least 6 months:
High level of fidgeting
Not sitting still or leaving seat when expected to sit
Runs or climbs in situations where it is inappropriate
Unable to engage in activities quietly
Excessive talking
Blurts out an answer before the question is finished
Has difficulty awaiting their turn
Interrupts or intrudes on others frequently
Symptoms were present before the age of 12
Symptoms are present in at least two settings
Symptoms reduce the quality of educational, social, or occupational ability
Symptoms do not occur during schizophrenia or another psychotic disorder and are not better
explained by another mental disorder
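The counting rule in the criteria above can be expressed as a short sketch. This is an illustration of the threshold logic only, not a diagnostic instrument, and the checklist values below are hypothetical:

```python
def meets_symptom_threshold(inattention, hyperactivity_impulsivity,
                            duration_months, threshold=6):
    """Illustrate the DSM-5 ADHD counting rule: at least six endorsed
    symptoms in either domain, persisting for at least 6 months."""
    if duration_months < 6:
        return False
    return (sum(inattention) >= threshold
            or sum(hyperactivity_impulsivity) >= threshold)

# Hypothetical checklist: 1 = symptom endorsed, 0 = not endorsed
inattention = [1, 1, 1, 1, 1, 1, 0, 0, 0]            # 6 of 9 endorsed
hyperactivity = [1, 1, 0, 0, 0, 1, 0, 0, 0]          # 3 of 9 endorsed
print(meets_symptom_threshold(inattention, hyperactivity, 12))  # True
```

Note that the full criteria also require early onset, cross-situational symptoms, and functional impairment, which a simple count like this cannot capture.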
In terms of its course, ADHD is usually first recognised by parents when the child is a toddler, but not
all hyperactive toddlers go on to develop ADHD. The disorder is usually first recognised and diagnosed
after the child first begins schooling, because learning and adjustment at school are significantly
affected by the disorder. As the child develops into adolescence, symptoms usually attenuate,
although about half of sufferers will continue to show symptoms well into adulthood, and this can
detrimentally affect intellectual functioning and intelligence quotient (IQ) (Bridgett & Walker, 2006).
There is much discussion in the literature about whether ADHD is a culturally constructed disorder—
that is, whether the rates of diagnosis differ in different cultures because of differing cultural perceptions
of children's behaviour. The evidence on this is equivocal. Some studies suggest very similar rates of
diagnosis across different cultures and ethnic groups (Bailey & Owens, 2005; Rohde et al., 2005),
whereas others indicate differing rates of ADHD in different countries (Dwivedi & Banhatti, 2005).
Studies indicating different rates of diagnosis may do so because different cultural environments may
directly affect a child's behaviour (e.g., Buddhist cultures tend not to tolerate externalising behaviours),
or may affect the attitudes of parents and clinicians towards what is acceptable behaviour. For example,
Zwirs, Burger, Buitelaar, & Schulpen (2006) found that detection of externalising disorders was
significantly lower in a sample of non‐Dutch parents (Moroccan, Turkish, and Surinamese) than in Dutch
parents, suggesting that cultural context may have an important influence on whether ADHD symptoms are
detected and reported.
Biological factors
Genetic factors
There is now considerable evidence pointing to the involvement of an inherited susceptibility to ADHD.
ADHD appears to be one of the most heritable psychiatric disorders, and pooled data from 20 twin
studies report a mean heritability estimate of 76% (Faraone & Mick, 2010). Adoption studies also
suggest that ADHD in the adopted child is more likely to occur if a biological parent has ADHD than if
an adoptive parent has ADHD (Faraone et al., 2005). However, what is inherited has proved much harder
to determine (Faraone & Larsson, 2019). A meta‐analysis of gene linkage studies has revealed a region
on chromosome 16 that has the most consistent linkage evidence (Coghill & Banaschewski, 2009).
However, identifying individual genes is much more problematic, and reviews of genome‐wide
association studies have suggested that any individual gene variant for ADHD must have a very small
individual effect (Neale et al., 2010). However, many of those genes identified may underlie
abnormalities in neurotransmitter systems—particularly the dopamine, norepinephrine, and serotonin
systems (Waldman & Gizer, 2006). Specific genes that may be involved are the dopamine transporter
gene, the dopamine D4 and D5 receptors and SNAP‐25, a gene that controls the way dopamine is
released in the brain and promotes the plasticity and adaptability of neuron synapses.
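Twin‐study heritability estimates such as the pooled 76% figure quoted above are commonly approximated with Falconer's formula, which doubles the difference between the monozygotic (MZ) and dizygotic (DZ) twin correlations for a trait. A minimal sketch follows; the correlation values are hypothetical, chosen only so the arithmetic reproduces the pooled estimate:

```python
def falconer_heritability(r_mz, r_dz):
    """Falconer's formula: h^2 is approximately twice the difference
    between the MZ and DZ twin correlations for a trait."""
    return 2 * (r_mz - r_dz)

# Hypothetical twin correlations for ADHD symptom scores, chosen so the
# estimate matches the pooled 76% figure (Faraone & Mick, 2010)
h2 = falconer_heritability(r_mz=0.82, r_dz=0.44)
print(round(h2, 2))  # 0.76
```

The logic is that MZ twins share all their genes and DZ twins on average half, so the excess MZ similarity, doubled, indexes the genetic contribution under the assumptions of the classical twin design.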
However, while susceptibility to ADHD appears to have a significant genetic component, additional
studies strongly indicate a genes‐environment interaction. That is, what is inherited is a vulnerability to
ADHD, but ADHD becomes manifest only when certain environmental influences are found. For
example, Kahn, Khoury, Nichols, & Lanphear (2003) found that children with two copies of the 10‐
repeat allele of the DAT1 gene (a gene related to dopamine regulation in the brain) who were exposed to
maternal prenatal smoking exhibited significantly higher levels of hyperactivity, impulsiveness, and
oppositional behaviours than a control group of children who possessed these genes but whose mothers
did not smoke during pregnancy. In addition, children who possessed only one of the risk factors (the
high‐risk genotype or a mother that smoked during pregnancy) did not show significantly higher levels
of ADHD symptoms than children who possessed neither of the risk factors. Studies such as this
indicate that, while inherited factors are critically important in the aetiology of ADHD, they may
constitute a vulnerability that converts into ADHD only if certain environmental factors are present
(Coghill & Banaschewski, 2009). Other environmental risk factors that have been proposed include pre‐
or perinatal complications and maternal drinking during pregnancy (Mick et al., 2002; Milberger,
Biederman, Faraone, Guite, & Tsuang, 1997; Milberger, Biederman, Faraone, & Jones, 1998).
Neuroscience
Magnetic resonance imaging (MRI) studies of the brains of individuals with ADHD have revealed a
number of significant differences between ADHD sufferers and nonsufferers (e.g., Cortese, 2012; Krain
& Castellanos, 2006; Rubia, 2018). First, there is consistent evidence that the brains of children with
ADHD are smaller than those of healthy comparison children, and develop more slowly. Overall brain
volume has been shown to be smaller by an average of 3.2%, with the main areas affected being the
frontal, parietal, temporal, and occipital lobes (Durston et al., 2004), and ADHD is also associated with
a global reduction in gray matter (Nakao, Radua, Rubia, & Mataix‐Cols, 2011). Other brain areas
exhibiting decreased volume in ADHD include the frontal cortex, basal ganglia, and cerebellum (Krain
& Castellanos, 2006). Comparisons of the development of brain structures in children with ADHD and
typically developing controls also suggest that the age by which 50% of the cortex reaches peak
thickness is 10.5 years in children with ADHD, but only 7.5 years in normally developing controls (Shaw
et al., 2007). A range of studies has also indicated that brain volume in specific brain areas is inversely
correlated with a variety of ADHD symptoms. For example, children with ADHD are known to have
deficits in executive functioning (planning and problem solving), which depends on the frontal
lobes, and specifically have difficulty with selective attention, working memory, and inhibiting responses
(Fair, Bathula, Nikolas, & Nigg, 2012; Mueller, Hong, Shepard, & Moore, 2017) (Research Methods in
Clinical Psychology 16.1). Another area of the brain that regularly exhibits abnormalities in association
with ADHD symptoms is the cerebellum (Cherkasova & Hechtman, 2009). In ADHD, abnormalities
are usually found in the cerebellum's influence on the cortico‐striatal‐thalamo‐cortical circuits, and these
circuits are involved in choosing, initiating, and carrying out complex motor and cognitive responses
(Alexander, DeLong, & Strick, 1986; Graybiel, 1998). In this case it is not hard to imagine how
dysfunctions in these pathways may result in the disruption of the planning and execution of behaviour.
RESEARCH METHODS IN CLINICAL PSYCHOLOGY 16.1
COGNITIVE TESTS OF ADHD
A variety of tests have been devised that are capable of differentiating between children with
ADHD and control participants. The aim of most of these tasks is to test attention or to
determine whether the individual is able to successfully inhibit responses when required to do so
(see Seidman, 2006).
The Continuous Performance Test (CPT): The CPT is a computerised visual
vigilance/attention task in which the child is seated before a computer monitor and instructed
to observe a string of letters presented randomly and at varying speeds. Children are instructed
to press the space bar as quickly as possible following all letters except the letter X. Children
with ADHD are less able to inhibit responses following the presentation of the target letter X
and also show longer reaction times to letters that should be responded to (Epstein,
Johnson, Varia, & Conners, 2001).
The Stroop Task: This is generally considered a test of ability to inhibit responses. In the
task, a word describing a colour (e.g., red) is presented in a different colour (e.g., green), and the
participant has to respond as quickly as possible by naming the colour ink that the word is
written in. Children with ADHD take more time to respond and make more errors than control
participants (Shin, 2005).
The Trail Making Test: This is a measure involving connecting circles on a page (Reitan,
1958). The child is instructed to connect the circles by drawing lines alternating between circles
labelled with numbers and letters in sequential order until they reach the circle labelled ‘End’
(see Focus Point 15.1). Most studies show that children and adults with ADHD perform
significantly worse than control participants (Rapport, Van Voorhis, Tzelepis, & Friedman,
2001).
The Controlled Oral Word Association Test (COWAT): This test measures verbal fluency in
response to single letters, which taps into phonological associations, and category fluency
(‘name all the animals you can’) (Benton, Hamsher, & Sivan, 1983).
This test appears to measure speed of access to words, persistence at a task, and processing
speed. The majority of studies show impaired performance on this task in children with ADHD
compared to controls (Dinn, Robbins, & Harris, 2001).
Conners' Parent Rating Scale (CPRS): The CPRS (Conners, Sitarenios, Parker, &
Epstein, 1998) is an 80‐item scale completed by the child's parent using a 4‐point scale. This
instrument has well‐accepted reliability and validity and is considered to be standard in ADHD
diagnosis (Barkley, 1991). Norms by age are available for males and females in 3‐year intervals.
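The interference logic behind tasks such as the Stroop test can be sketched in a few lines of code. This is a minimal illustration, and the trial records and field layout are hypothetical: slower correct responding on incongruent trials, the pattern exaggerated in ADHD, shows up as a positive interference score.

```python
# Hypothetical trial records: (word, ink_colour, response, reaction_time_ms)
trials = [
    ("RED", "red", "red", 540),       # congruent trial
    ("GREEN", "red", "red", 710),     # incongruent trial
    ("BLUE", "green", "green", 690),  # incongruent trial
    ("BLUE", "blue", "blue", 520),    # congruent trial
]

def stroop_interference(trials):
    """Mean correct reaction time for incongruent minus congruent trials;
    a larger positive value indicates poorer response inhibition."""
    def mean_rt(congruent):
        rts = [rt for word, ink, resp, rt in trials
               if (word.lower() == ink) == congruent and resp == ink]
        return sum(rts) / len(rts)
    return mean_rt(congruent=False) - mean_rt(congruent=True)

print(stroop_interference(trials))  # 170.0
```

Only correct responses (where the response matches the ink colour) are scored, mirroring the usual practice of analysing errors and correct‐trial reaction times separately.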
Prenatal factors
As we have already noted, at least some prenatal experiences appear to interact with a genetic
predisposition to put an individual at risk for ADHD. These experiences include maternal smoking and
drinking during pregnancy (Mick et al., 2002) and general complications associated with childbirth,
such as low birthweight, respiratory distress, and birth asphyxia (Getahun, et al., 2013; Tannock, 1998).
Schmitz et al. (2006) found that mothers who smoked more than 10 cigarettes per day during pregnancy were
significantly more likely to give birth to children with ADHD than nonsmoking mothers – even when
other potential confounding factors such as maternal ADHD, ODD, birth weight, and alcohol use
during pregnancy were controlled for. In addition, a study by Milberger, Biederman, Faraone, Chen,
and Jones (1997) found that 22% of mothers of children with ADHD reported smoking a pack of
cigarettes a day during pregnancy compared with only 8% of mothers whose children did not develop
ADHD. Milberger, Biederman, Faraone, Guite, and Tsuang (1997) hypothesised that prenatal exposure
to nicotine caused abnormalities in the dopaminergic neurotransmitter system, resulting in the offspring
having difficulties inhibiting behaviour.
Environmental toxins
Some early accounts of ADHD did allude to the possibility that hyperactivity resulted from various
biochemical imbalances caused by such factors as food additives (Feingold, 1973), refined sugar
(Goyette & Conners, 1977), and lead poisoning (Thompson et al., 1989). There is only modest
evidence that artificial food colourings contribute to hyperactivity (Nigg, Lewis, Edinger, & Falk, 2012),
but there is some support for the view that both the levels of lead in the blood and chronic exposure to
nicotine or tobacco smoke may increase hyperactivity (e.g., Fung & Lau, 1989; Polanska, Jurewicz, &
Hanke, 2012), although it must be emphasised that most children with ADHD do not show raised levels
of lead in the blood.
Psychological factors
Parent–child interactions
ADHD appears to run in families, and this may have implications beyond the fact that there is a genetic
component to the disorder. For example, it also means that children with ADHD are more likely to be
brought up by parents who also have the disorder, and this may exacerbate any symptoms that are
caused by the genetic component alone. For example, fathers who are diagnosed with ADHD have been
found to be less effective parents (in terms of exhibiting ineffective discipline and adopting traditionally
conservative father roles) than fathers without an ADHD diagnosis (Arnold, O'Leary, & Edwards,
1997), and this may exacerbate any disruptive characteristics the ADHD child may exhibit.
Psychodynamic approaches to ADHD have also pointed to the possible role of inconsistent and
ineffective parenting of children with ADHD. Bettelheim (1967) proposed that hyperactivity resulted
when a predisposition to ADHD is accompanied by authoritarian parenting methods. He argued that
such parents are likely to become impatient with a disruptive and hyperactive child, resulting in a
vicious cycle whereby constant attempts to discipline the child cause even more defiant reactions on the
part of the child who reacts by defying rules across a range of life contexts (e.g., school, social situations,
etc.).
Learning theorists have suggested that parents may exacerbate ADHD symptoms in a rather different
way. Individuals with ADHD exhibit impulsive and disruptive behaviour that in many cases will require
control by the parent. In such circumstances the attention from the parent that these
behaviours demand may reward or reinforce them, thus increasing their frequency and
intensity. While there is no direct evidence to support this view, indirect support comes from studies
showing that time out from positive reinforcement can act as an effective procedure for reducing negative
and disruptive behaviour in children with ADHD (Fabiano et al., 2004).
Nevertheless, while parent–child interactions of various kinds may exacerbate ADHD symptoms, there
is no evidence to suggest that they are the sole cause of these symptoms (Johnston & Mash, 2001).
Summary of ADHD
The evidence on the aetiology of ADHD strongly indicates that there is a significant genetic component
to the disorder, and it may be one of the most heritable psychological disorders. However, it is not yet fully
clear whether this genetic component merely bestows a vulnerability for the disorder or whether it
may be a direct cause of abnormalities that underlie ADHD; nor has it yet been possible to clearly
identify the genes through which this heritability is transmitted. There are clearly some brain
abnormalities that characterise ADHD, including reduced overall brain volume and gray matter, and
reduced brain volume particularly in areas such as the frontal cortex, basal ganglia and cerebellum, and
abnormalities in the frontal lobes may contribute to the deficits in executive functioning that are found
in ADHD using cognitive tests. Some pre‐ and perinatal factors have been identified that may
contribute to abnormal development, and these include maternal smoking and drinking during
pregnancy, and complications at birth, including low birthweight, respiratory distress, and birth
asphyxia. There is also some evidence that dysfunctional parenting may contribute to the behavioural
symptoms of ADHD in children, but there is no evidence that dysfunctional parenting is a sole cause of
ADHD.
An ongoing pattern of behaviour where the rights of others or social norms are infringed, as
shown by at least three of the following over a 12‐month period:
Bullying or threatening others
Starting fights
Using a weapon to do serious physical harm
Physical cruelty to others
Physical cruelty to animals
Mugging or similar crimes
Forcing another into sexual activity
Fire setting to destroy/seriously damage property
Deliberate destruction of another's property
Breaking into buildings or cars
Lies to get goods or favours
Shoplifting or similar
Stays out at night despite parental intervention, starting from before the age of 13
Has run away from home at least twice or once for a long period of time
Often misses school, starting from before the age of 13
The disturbances cause significant impairment in social, academic, or occupational functioning
If the patient is 18 years or older, the condition is not better explained by antisocial personality
disorder
There are two main subtypes of conduct disorder based on the age of onset. Childhood‐onset type
is defined by the onset of at least one criterion characteristic of conduct disorder prior to 10 years of
age. Adolescent‐onset type is defined by the appearance of conduct disorder symptoms only after
the age of 10 years. Such individuals are less likely to be physically aggressive than those with
childhood‐onset type, and will usually have better peer relationships.
Like individuals with antisocial personality disorder (see Chapter 12), children and adolescents with
conduct disorder display little empathy with the feelings and intentions of others and will usually believe
that their aggressive reactions to others are justified. They will frequently try to blame others for their
misdeeds, and exhibit little if any genuine guilt for their antisocial actions. Risk‐taking, frustration,
irritability, impulsivity, and temper tantrums are regularly associated with conduct disorder and result in
higher accident rates for such individuals. Conduct disorder is also associated with early onset of a
range of behaviours, including sexual behaviour, drinking, smoking, substance abuse, and general risk‐
taking behaviour (e.g., dangerous and erratic driving). Finally, the disorder is more common in males
than in females, and males with a diagnosis will outnumber females by a ratio of at least 4 to 1 and
often up to 10 to 1 (Merikangas et al., 2010; Zoccolillo, 1993).
It is important to mention at least three issues related to the diagnosis of conduct disorder. First, individuals
diagnosed with conduct disorder will usually be under 18 years of age; older individuals are given the diagnosis
only if the criteria for antisocial personality disorder are not met. Second, the clinician will
need to take account of the social context in which behaviours characteristic of conduct disorder are
found. For example, in certain deprived inner‐city areas, behaviours characteristic of conduct disorder
may be seen as being protective. That is, they may represent the norm for that environment and may
serve an adaptive function in dealing with poverty and the threatening behaviour of others. In addition,
immigrants from war‐ravaged countries can have a reputation for violence because such behaviour has
been necessary for survival in their previous countries. Clinicians must be sure that a diagnosis of
conduct disorder is made only when the characteristic behaviours are symptomatic of dysfunction
rather than a reaction to a specific social context. Third, a related category of disruptive behaviour
disorders in DSM‐5 is known as oppositional defiant disorder (ODD). ODD is a diagnosis usually
reserved for those children who do not meet the full criteria for conduct disorder (e.g., extreme
aggression and violence) but who have regular temper tantrums, refuse to comply with requests or
instructions, or appear to deliberately indulge in behaviours that annoy others. ODD is common in
preschool children and may even be a precursor to later childhood conduct disorder (Lahey, McBurnett,
& Loeber, 2000). It is found more often in families where childcare has been disrupted (through the
child experiencing a number of different caregivers) or in families where at least one parent has a
history of mood disorders, antisocial personality disorder, ADHD or substance abuse.
Biological factors
Genetic factors
There is now some evidence that conduct disorder and its associated behaviours of aggressiveness and
criminality may have a genetic component. Twin studies specifically involving conduct disorders have
found heritability estimates between 45% and 67% (Viding & McCrory, 2012), while twin studies also
suggest that aggressive and violent behaviour (e.g., fighting, cruelty to animals) has a significant inherited
component (Edelbrock, Rende, Plomin, & Thompson, 1995). Adoption studies have also reported
significant genetic and environmental influences on both conduct disorder and criminal behaviour
(Simonoff, 2001). Some studies have even identified a specific gene, GABRA2, which is associated with
childhood conduct disorder (Dick et al., 2006), and, interestingly, this gene is also related to adult
alcohol dependence and drug dependence in adolescence. Two other candidate genes, 5HTTLPR and
MAOA have also been linked to conduct disorder. These genes influence serotonin and monoamine
neurotransmitters, and it may be the effect of this on socioemotional information processing in the
prefrontal cortex that facilitates responses such as aggression and threat reactivity found in conduct
disorder (Ficks & Waldman, 2014).
However, the results from candidate gene association studies have been modest, with only a small
number of candidate genes identified to date (Viding et al., 2013). What these studies suggest is that
there is probably an inherited component to conduct disorder – perhaps in the form of inherited
temperamental characteristics—but that environmental factors also play an important role in
determining behaviour patterns typical of conduct disorder, and may interact with inherited
predispositions. For example, using a large sample of over 1,000 children from the Dunedin
Multidisciplinary Health and Development Study, Caspi et al. (2002) found that levels of the MAOA
gene interacted with childhood maltreatment, and those individuals who had low levels of MAOA and
had been maltreated were more likely to develop conduct disorder than individuals who had any other
combination of MAOA and maltreatment.
Neuropsychological deficits
Just as with ADHD, conduct disorder is associated with neuropsychological deficits in cognitive
functioning, including deficits in executive functioning (planning and self‐control), verbal IQ, and
memory (Lynam & Henry, 2001). Low IQ is also associated with conduct disorder, and is particularly
associated with early‐age onset conduct disorder independently of related socio‐economic factors such
as poverty, race, or poor educational attainment (Lynam, Moffitt, & Stouthamer‐Loeber, 1993).
Nevertheless, there is some doubt about whether executive functioning deficits occur in conduct
disorder in the absence of ADHD symptoms. For example, Oosterlaan, Scheres, & Sergeant (2005)
found that while executive functioning deficits were found in children with comorbid conduct disorder
and ADHD, no deficits were found in children diagnosed solely with either ODD or CD. Similarly,
meta‐analyses have found that antisocial behaviour generally is associated with poor executive
functioning, but this association is predominantly driven by relationships between poor executive
functioning and criminality and externalising behaviours generally rather than conduct disorder
specifically (Ogilvie, Stewart, Chan, & Shum, 2011).
Children with a diagnosis of conduct disorder also exhibit reduced activation in brain areas such as the
amygdala, ventral striatum, and prefrontal cortex—brain regions that contribute to emotion, reward,
and learning to associate behaviour with rewards (Alegria, Radua, & Rubia, 2016; Blair, 2013), all
features that may contribute to lack of empathy, misconstruing of the intentions of others, and a failure
to learn from rewarding and aversive experiences.
Prenatal factors
A number of prenatal factors have been identified in the aetiology of conduct disorder. These include
maternal smoking and drinking during pregnancy and prenatal and postnatal malnutrition. Maternal
smoking has been found to predict the early emergence of conduct problems in the offspring, especially
socially resistant and impulsively aggressive behaviour (Wakschlag, Pickett, Kasza, & Loeber, 2006), and
conduct disorder is specifically associated with maternal consumption of one or more drinks during the first
trimester (Larkby, Goldschmidt, Hanusa, & Day, 2011), but this may be restricted to mothers and
children of low socio‐economic status (Monuteaux et al., 2006). Similarly, delinquent behaviour and
poor moral judgement have been found to be higher in children prenatally exposed to alcohol (Schonfeld,
Mattson, & Riley, 2005). Recent studies also suggest that externalising behaviours are associated with
prenatal malnutrition, especially deficits in protein, iron, and zinc (Liu & Raine, 2006). However, we
must be cautious about how we interpret all of these findings because correlations between prenatal
exposure and conduct disorder may be significantly confounded with other risk factors, such as parental
depression, family disadvantage, and genetic influences (Maughan, Taylor, Caspi, & Moffitt, 2004). If
so, prenatal exposure may simply be a marker of risk for conduct disorder rather than a
direct cause of the condition.
Psychological factors
Cognitive factors
Conduct disorder is associated with the development of deviant moral awareness. For example, most
children grow up learning that certain behaviours are morally acceptable and others are morally and
socially unacceptable. However, children with conduct disorder fail to acquire this moral awareness.
They are content to achieve their goals using violence and deceit; they have little respect for the rights
of others and show little or no remorse for their antisocial acts. Much of this lack of awareness of moral
standards may come from the fact that they may have developed highly biased ways of interpreting the
world. For example, a child with conduct disorder regularly interprets the behaviour of others as hostile
or challenging, and this appears to give rise to their aggressive reactions. Dodge (1991, 1993) has
proposed a social‐information processing model of antisocial and aggressive behaviour in which a
history of trauma, abuse, deprivation, and insecure attachment may give rise to specific information
processing biases. These include hypervigilance for hostile cues and attributing minor provocations to
hostile intent, and these biases give rise to unwarranted fear and to aggressive reactions. If the child is
brought up in a family environment where they learn aggressive behaviour in child‐parent interactions
(Patterson, Reid, & Dishion, 1992), and if they also have their own experiences with successful
aggressive tactics, they will evaluate aggression as an adaptive social strategy and use it proactively (see
Figure 16.2). In support of this hypothesis, Gouze (1987) found that aggressive children direct their
attention selectively towards hostile social cues and have difficulty diverting their attention away from
these cues. Aggressive children also exhibit what is called a ‘hostile attributional bias’ (Nasby, Hayden, &
DePaulo, 1979), in which they will interpret not only ambiguous cues as signalling hostility but also
many cues which are generated by benign intentions (e.g., Dodge, Bates, & Pettit, 1990). Once a hostile
attribution is made, studies also suggest that there is a 70% probability of an aggressive response,
compared with only 25% probability following a benign attribution (Dodge, 1991). Because of such
information processing and attributional biases, the individual with conduct disorder may be locked into
a cycle of hostile interpretations and aggressive responding that becomes difficult to break—especially
as continued aggressive behaviour by the sufferer is likely to generate genuine hostile intentions from
others in the future.
Socio‐economic factors
Delinquent, violent behaviour has been shown to be highly associated with poverty, low socio‐economic
class, unemployment, urban living, and poor educational achievement, and such factors may be a cause
of conduct disorder rather than a consequence of it (Lahey, Miller, Gordon, & Riley, 1999). A
longitudinal study of familial and socio‐economic predictors of conduct disorder in a Scottish cohort
indicated that individuals with conduct problems were more likely to have mothers who smoked during
pregnancy, to have poor general health, and to have a parent who agreed with smacking as a form of
punishment, and were less likely to be living with both parents (Wilson et al., 2013). So, poverty is likely to affect
conduct disorders through mediating factors such as disrupted family life, parental stress, poor
educational opportunities, and parental neglect. For example, Granero, Louwaars, & Ezpeleta (2015)
found that the relationship between low socio‐economic status and ODD in boys was mediated by harsh
parenting practices and inconsistent parental discipline, suggesting that some of the effects of poverty
on conduct disorder may be a result of the type of parenting found in poverty conditions rather than by
poverty per se.
FIGURE 16.2 Dodge's (1991) social‐information processing model of antisocial and aggressive behaviour in which a
history of trauma, abuse, deprivation, and insecure attachment can give rise to information processing biases including a
tendency to interpret even benign cues as signalling hostility.
Figure taken from Krol, Morton, & De Bruyn, 2004.
However, an instructive study by Costello, Compton, Keeler, and Angold (2003) suggests that poverty
may also have a direct causal effect on the level of conduct disorder in a local population. They studied
conduct disorder in American Indian children before and after a casino opened on their reservation
which provided income that moved many families out of poverty. This constituted an interesting natural
experiment on the role of poverty in childhood disorders. Before the casino opened, children of poor
families suffered more symptoms of psychopathology than those of non‐poor families. However, after
the casino had opened the children of those families who moved from the poor to non‐poor class
showed a significant drop in symptoms of conduct disorder and ODD. Levels of these symptoms in
families who remained poor after the casino opened did not change. This study provides a striking
example of how poverty may represent a genuine causal factor for conduct disorder. However, the exact
mechanisms that mediate the relationship between poverty and conduct disorder remain unclear.
SELF‐TEST QUESTIONS
Can you name the main symptoms of ADHD?
What are the different subtypes of ADHD and what other disorders is ADHD likely to be
comorbid with?
What is the evidence that ADHD is genetically determined?
How might deficits in executive functioning cause ADHD symptoms?
What prenatal factors have been identified as risk factors for ADHD?
How might parent–child interactional styles exacerbate the symptoms of ADHD?
Do children diagnosed with ADHD have a theory of mind (ToM) deficit?
What are the four main categories of symptoms found with conduct disorder?
What is ODD and how does it differ from conduct disorder?
Can you summarise the biological factors that may be involved in the aetiology of conduct
disorder?
What is the evidence that children develop symptoms typical of conduct disorder by
mimicking the violent activities they see around them in the media or displayed by peers?
How can interpretation biases account for behaviours typical of conduct disorder?
What socio‐economic variables act as risk factors for conduct disorder?
SECTION SUMMARY
Separation anxiety
As the name suggests, separation anxiety is an intense fear of being separated from parents or
carers. It is commonly found in many children at the end of the first year of life, but in most children
this fear gradually subsides. However, in others it persists well into the school years and may also re‐
appear in later childhood following a period of stress or trauma. Older children with separation anxiety
will become distressed at being away from home and will often need to know the whereabouts of
parents. They may also develop exaggerated fears that their parents will become ill, die, or be unable to
look after them. Consequences of this anxiety include a reluctance to attend school or to stay at friends'
homes overnight, and many will require that a parent or carer stay with them at bedtime until they have
fallen asleep. As with most childhood anxiety disorders, sufferers will also report physical complaints
such as stomach aches, headaches, nausea, and vomiting (Table 16.4). Separation anxiety is often a
normal feature of early development, but it can be triggered and exaggerated by specific life stressors
(such as the death of a relative or pet, an illness, a change of schools, or moving home).
The estimated 6‐ to 12‐month prevalence of diagnosable separation anxiety in children is
approximately 4%, and the 12‐month prevalence in adolescents is 1.6% (DSM‐5, American
Psychiatric Association, 2013, p. 192). However, once they have reached the age for school attendance,
many children suffering separation anxiety go on to exhibit school refusal problems including social
anxiety disorder (Egger, Costello, & Angold, 2003) (See Chapter 6, Section 6.3).
tic disorders Uncontrollable physical movements such as facial twitches, rapid blinking or
twitches of the mouth.
Excessive anxiety surrounding separation from those to whom the individual is attached, as shown
by at least three of the following:
Disproportionate distress when anticipating or experiencing separation from home or
attachment figures
Ongoing and unnecessary concern about losing attachment figures or potential harm to
them
Ongoing and unnecessary concern about an unexpected event which causes separation from
attachment figures
Ongoing aversion to going out or away from home because of fear of separation
Ongoing and unnecessary fear of being left alone or without attachment figures
Ongoing aversion to going to sleep alone or sleeping away from home
Repeated nightmares around separation
Complaints of physical symptoms such as headaches or nausea when separated or
anticipating separation from attachment figures
The anxiety lasts at least 4 weeks in children and 6 months in adults
The disturbance causes significant impairment in important areas of functioning
The disturbance is not better explained by another mental disorder
CASE HISTORY 16.2 CHILDHOOD AND ADOLESCENT OCD
Andy was a 13‐year‐old boy diagnosed with isolated testicular relapse of acute lymphoblastic
leukaemia 40 days before coming to psychiatric attention. He was first diagnosed with acute
lymphoblastic leukaemia at age 10. After his initial diagnosis, he experienced remission at the
end of chemotherapy induction and finished his treatment for acute lymphoblastic leukaemia.
Three years later he began further drug treatment for his leukaemia, and the psychiatric
consultation‐liaison service evaluated him after he expressed bothersome obsessive thoughts,
compulsive behaviours, and insomnia beginning 24–36 hours after he had completed a 28‐day
course of steroid drug treatment. Andy had no history of psychiatric illness or treatment.
At his initial interview, Andy described increasingly bothersome obsessions over the previous 2
days. He felt that he was ‘going crazy’ and feared that he would forget how to talk and lose his
cognitive abilities. He repeated mantras, reassuring himself that if he remained calm, these
bothersome thoughts would pass. He sought reassurance from his mother and the interviewers,
and he repeated the ‘ABCs’ to reassure himself that he could think clearly. His mood was
dysphoric, which he attributed both to insomnia and worry surrounding his constant
bombardment of unwanted thoughts. He reported no depressive symptoms, perceptual
disturbances, or suicidal thoughts. He displayed no manic symptoms.
Andy's symptoms rapidly worsened within 24 hours. His thoughts became dominated by fears
that he would be condemned to hell and that he deserved this fate. He noted images in his mind
of self‐harm and harming family members. He struggled against these thoughts and images, as
well as guilt from having them, by repeatedly telling family members that he loved them. He
continued to deteriorate and struck himself in the head with the blunt end of an ax. He stated
that he had no desire to die but had become convinced of the validity of the emerging thoughts
that he should harm himself. Although he did not sustain serious injury, he was admitted to a
child and adolescent psychiatric inpatient unit for safety.
Clinical Commentary
Andy’s symptoms are typical of many adolescents suffering OCD. In his case, these are
obsessive thoughts about going mad and harming himself and others. In an attempt to
try and prevent his obsessive thoughts entering consciousness, he indulges in protective
behaviours, such as repeating mantras, seeking reassurances from adults, and reciting
the alphabet. The symptoms appear to be precipitated by a stressful illness, and such
stressors are common precursors of OCD symptoms in both children and adults. OCD
symptoms would normally appear slowly and have a gradual onset, unlike Andy's,
which emerged very rapidly over a period of a few days. Such rapid acquisition may
have been facilitated by the abrupt cessation of steroid drugs that he was receiving as
part of his treatment for leukaemia.
Age of onset for childhood OCD can be as early as 3–4 years of age, but the mean age of onset is more
likely to be around 10 years (Swedo et al., 1989). Childhood OCD is regularly found to be comorbid
with a range of other disorders, including tic disorders, Tourette's syndrome, other anxiety
disorders, and eating disorders (Geller et al., 1996). Specifically, over 60% of children seeking treatment
for OCD symptoms also have a lifetime history of tics or Tourette's syndrome (Leonard et al., 1992),
and 50% of children with Tourette's syndrome subsequently develop OCD (Leckman, 1993). This
suggests that childhood OCD and tic disorder may be different manifestations of the same underlying
disorder (Swedo, 1994) (see Focus Point 16.1).
Tourette’s syndrome A disorder in which motor and vocal tics occur frequently throughout
the day for at least 1 year.
Tourette's syndrome (also known as Tourette's or TS) is a disorder with onset in childhood,
characterised by the presence of multiple physical (motor) tics and at least one vocal (phonic)
tic. It is important to understand that these are chronic and involuntary. Someone with TS may
be able to suppress them for a period but eventually they have to let the tics out.
Tics usually start in childhood around the age of 7 and are likely to persist throughout life,
though the symptoms often decrease towards the end of adolescence. The first symptoms are
usually facial tics such as rapid blinking or twitches of the mouth, but TS may start with sounds
such as throat clearing and sniffing, or even with multiple tics of movements and sounds. Tics
can be either simple or complex. Simple tics are of short duration and may include eye
blinks, shoulder shrugging, sniffing, grunting, or extensions of the extremities. Complex
motor tics are of longer duration and can consist of combinations of simple tics (e.g., head
turning plus shoulder shrugging). They can often appear purposeful when the tic consists of
imitating another person's movements, making tic‐like obscene or sexual gestures, or uttering
socially unacceptable words. For Tourette's disorder, both motor and vocal tics must be present.
Tic disorders are more common in children than in adults, in special education populations
than in general populations of children and among boys more than among girls (Knight et al.,
2012). Tic disorders are relatively common in children, with a point prevalence of transient tic
disorder of 2.99%. Tourette's disorder is less common, with a point prevalence of only 0.77%
(Knight et al., 2012).
Tourette's syndrome and behavioural tics are often comorbid with a diagnosis of OCD in
childhood. Studies suggest that up to 60% of children seeking treatment for OCD have a
lifetime history of tics (Leonard et al., 1992), and some theorists believe that OCD is a
heterogenous disorder with an inherited component that can manifest either as OCD
obsessions or compulsions, or as behavioural or vocal tics (Pauls et al., 1995).
OCD symptoms and behavioural and vocal tics can cause obvious problems for a child, with
them being a source of anxiety and fear for the sufferer, and provoking ridicule and
victimisation by peers (Storch et al., 2006). The severity of behavioural and vocal tics is usually
directly related to levels of stress, so learning how to control stress can greatly reduce symptoms
(e.g., by learning relaxation techniques). In some cases, a less socially acceptable tic can be
replaced with a more socially acceptable one using behaviour therapy methods, and medication
can also be used to help control the condition. Treatments normally used with OCD symptoms
(such as exposure with response prevention or cognitive behaviour therapy (CBT)—see
Chapter 6, Section 6.6.4) can also be used with behavioural tics (Turner, 2006; Verdellen,
Keijsers, Cath, & Hoogduin, 2004).
The factors that contribute to pathological worrying in children are as yet unclear, but recent evidence
suggests that negative information processing biases and executive functioning deficits may play a role
(Songco et al., 2020). Children and adolescents with a diagnosis of GAD show an attentional bias
towards negative information (Dalgleish et al., 2003; Waters, Bradley, & Mogg, 2014; but see Abend et
al., 2018, for a failure to find such a bias in GAD). They have a bias towards
interpreting ambiguous information as threatening (Pasarelu, Dobrean, Balazsi, Podina, & Mogoase,
2017), and have poor attentional control over worrisome thoughts (Geronimi, Patterson, & Woodruff‐
Borden, 2016).
Specific phobias
Specific fears and phobias are often common in the normal development of children. For example, fears
of heights, water, spiders, strangers, and separation often occur in the absence of individual learning
experiences and appear to represent characteristics of normal developmental stages through which the
child develops. A fear may appear suddenly and intensely, but then disappear almost as quickly (Poulton
& Menzies, 2002). However, for some children a fear may persist and become problematic in that it
prevents normal daily functioning. One such example in childhood is social phobia (social anxiety
disorder), and this often begins in childhood as a fear of strangers (Hudson & Dodd, 2011). Most
children will grow out of this fear by around 2–3 years of age, but some still persist with their fear of
social situations and may find it very difficult to speak to strangers or to be in the presence of strangers.
If pushed into social situations, they will often become mute, blush, withdraw or show extreme
emotional responses (e.g., burst into tears) (Vasey, 1995). However, children and adolescents with social
phobia are usually well adjusted in all other situations that do not involve significant social interaction
(e.g., at home), and this differs from separation anxiety, which is characterised by clinging and
demanding behaviour at home.
The prevalence for specific phobias in 8–9‐year‐olds is estimated to be around 7% for boys and 10%
for girls (Lichtenstein & Annas, 2000), but up to 76% of adolescents in some cultures report at least one
fear and up to 36% meet the lifetime criteria for specific phobia, indicating that specific phobias are a
common and enduring problem during childhood and adolescence (Benjet, Borges, Stein, Mendez, &
Medina‐Mora, 2012).
Genetic factors
Twin and familial studies of childhood anxiety disorders tend to indicate a significant and stable
inherited component. In a familial study of childhood OCD, Pauls et al. (1995) found that rates of
OCD were significantly greater in the first‐degree relatives of children with OCD than the relatives of
control participants without OCD. However, the inherited component appeared to be nonspecific, since
children with OCD were just as likely to have first‐degree relatives with behavioural tics as with specific
OCD symptoms. More recent twin studies of anxiety disorders in 7–9‐year‐olds have suggested a stable
heritability averaging 54% across anxiety disorders (Trzaskowski, Zavos, Haworth, Plomin, & Eley,
2012) that may be transmitted by many genetic variants, each with only a modest or small effect in itself
(Trzaskowski et al., 2013). Recent research involving linkage studies, genome‐wide association studies,
and candidate gene studies have identified a number of genes that may interact with environmental
factors such as childhood trauma, environmental adversity, and stressful life events to create heightened
risk for GAD and neuroticism (Gottschalk & Domschke, 2017). These genes include 5‐HTT, NPSR1,
COMT, MAOA, CRHR1, and RGS2, which have their effects within the serotonergic and
catecholaminergic systems of the central nervous system.
Trauma and stress experiences
Table 16.1 provides striking examples of how childhood trauma and stress represent significant risk
factors for a range of later diagnosable adult psychological disorders. It is also self‐evident that these
experiences will inevitably cause significant psychological stress during childhood (see Frank's Story at the
beginning of this chapter). Many of these experiences (such as childhood physical and sexual abuse)
represent extreme experiences for any individual, and there are clear links between such experiences
and childhood anxiety generally (e.g., Feerick & Snow, 2005; Whiffen & MacIntosh, 2005). However,
during childhood, even experiences that seem relatively unexceptional may be stressful to a
child who is relatively inexperienced in the world, and these can provide significant events that trigger
bouts of anxiety and distress. For example, living with illnesses such as asthma or eczema has been
shown to significantly increase childhood anxiety and reduce quality of life (Gillaspy et al., 2002; Lewis‐
Jones, 2006), and the death of a pet—to whom a child may become significantly attached—can cause
prolonged anxiety and depression (Kaufman & Kaufman, 2006). Even an event such as a minor road
traffic accident may be a new and frightening experience to a child, and a common consequence of
such an experience in childhood is a mixture of post‐traumatic stress disorder (PTSD) symptoms,
anxiety, and depression (Schafer, Barkmann, Riedesser, & Schulte‐Markworth, 2006).
Parenting style
Children are highly dependent on their parents or carers for guidance and emotional support during
their development, so it is not surprising that dysfunctional forms of parenting may cause psychological
and adjustment problems during childhood. Parents may be detached, rejecting, overcontrolling,
overprotective, or demanding, and each of these different parenting styles may cause anxiety and
maladjustment in the child. Research into how parenting style may influence childhood anxiety is
relatively underdeveloped at present. However, some studies do suggest links between overprotective
and overanxious parenting and a child who is overanxious or suffers separation anxiety (Giotakos &
Konstantopoulos, 2002; Rapee, 1997), and this appears to result from the parents' overprotectiveness
generating a lack of confidence and feelings of inadequacy in the child (Dadds, Heard, & Rapee, 1991;
Woodruff‐Borden, Morrow, Bourland, & Cambron, 2002). Specifically, Rapee (2001) has argued that
there may be a reciprocal relationship between child temperament and parenting whereby parents of
children with an anxious temperament are more likely to become over‐involved with the child in an
attempt to reduce the child's distress. However, this over‐involvement is likely to increase the child's
vulnerability to anxiety by increasing the child's perception of threat, reducing the child's perceived
control over threat, and increasing avoidance of threat (e.g., Gallagher & Cartwright‐Hatton, 2009).
Hudson & Rapee (2002) provided some experimental support for this view by reporting that mothers of
children with an anxiety disorder were more likely to be intrusive while the child was completing a
puzzle task than were mothers of non‐anxious control children. Most of these studies of overprotective
parenting tend to suggest that it is only the mother's overprotection that is likely to generate anxiety in
the child, and studies are less likely to find these effects with fathers. However, these effects of parenting
are often moderated by other factors (such as the child's existing level of trait anxiety), and broad
generalisations about the effects of overprotective parenting on offspring anxiety are difficult to make
because of significant methodological differences between studies (Percy, Cresswell, Garner, O'Brien, &
Murray, 2016) (see https://www.psychologytoday.com/gb/blog/why‐we‐worry/201705/helicopter‐
snowplow‐and‐bubble‐wrap‐parenting for a discussion of ‘Helicopter, Snowplow, and Bubble‐Wrap
Parenting’).
PHOTO 16.2 Are you frightened of this animal? Professor Andy Field has developed a procedure for investigating how
exposure to information about potential threats affects fear acquisition in children. This photo shows an Australian quoll,
an animal not well known to most children in the northern hemisphere. Some children are told that quolls are potentially
dangerous and others that they are harmless and benign. Children told they are dangerous subsequently fear them more and
avoid possible contact with them in experimental approach tasks, and this fear can often last up to 6 months.
Just as overprotective parents appear to generate anxiety in their children, so do parents who are
rejecting and hostile. Children who experience rejecting or detached parents also show increased levels
of anxiety and are often overly self‐critical and have poor self‐esteem (Chartier, Walker, & Stein, 2001;
Hudson & Rapee, 2001). For example, anxiety sensitivity (concern over the physical symptoms of
anxiety, such as trembling, shaking, etc.) is known to be a factor that mediates emotional distress in both
adulthood and childhood, and this has been linked to exposure to parental threatening, hostile, and
rejecting behaviours (Scher & Stein, 2003). Recently, studies have investigated the differential effects on
childhood anxiety that might be made separately by mothers and fathers. Moller, Majdandzic, de Vente,
& Bogels (2013) have argued that the evolved basis of sex differences in parenting means that mothers
and fathers will convey different aspects of anxiety to their offspring. Mothers will tend to transmit
caution and information about threat (with overanxious mothers transmitting more anxiety to their
offspring), whereas fathers may be more likely to teach their offspring how to explore the environment
and compete with others (and in doing so, reduce anxiety).
Clearly, parenting style is likely to be an important factor influencing the development of anxiety
symptoms in children. Both overprotective and overrejecting styles may have adverse effects and
facilitate anxiety and its cognitive and behavioural correlates (e.g., hypervigilance for threat, avoidance
behaviour, etc.). Further research is clearly required in this area to clarify the various mechanisms by
which such parenting styles might have their effects.
In younger children, childhood abuse or neglect is closely related to the development of depression and
appears to generate feelings of worthlessness, betrayal, loneliness, and guilt (Dykman et al., 1997; Wolfe
& McEachran, 1997). In addition, predictors of depression in children younger than 5 years of age
include parental marital partner changes, mother's health problems in pregnancy, child's health over the
first 6 months of life, maternal anxiety, marital satisfaction early in the child's development, and
the mother's attitude towards caregiving (Najman et al., 2005). Many of these risk factors may affect the
quality of mother–child interactions during early development, and these may be a significant factor in
the development of childhood depression. Childhood adversity generally is also a risk factor for
childhood and adolescent depression, and this includes factors such as financial hardship and chronic
illness (Hazel et al., 2008). In such cases, the positive relationship between negative life events and
depressive symptoms is found only in those adolescents characterised by an elevated level of depressive
rumination (see Chapter 7, Section 7.1.2 for a discussion of depressive rumination), suggesting that a
tendency to ruminate mediates the relationship between negative childhood events and depression
(Kosiak, Blaut, Klosowska, & Kosiak‐Pilch, 2019).
FOCUS POINT 16.2 THE PROTOTYPICAL ADOLESCENT AT
RISK FOR DEPRESSION
Lewinsohn, Rohde, and Seeley (1998) provide the following description of the prototypical
adolescent most at risk for adolescent depression:
‘The prototypical adolescent most likely to become depressed is a 16‐year‐old female who had
an early or late puberty. She is experiencing low self‐esteem/poor body image, feelings of
worthlessness, pessimism, and self‐blame. She is self‐conscious and overly dependent on others,
although she feels that she is receiving little support from her family. She is experiencing both
major and minor stressors, such as conflict with parents, physical illness, poor school
performance, and relationship breakups; she is coping poorly with the ramifications of these
events. Other psychopathologies, including anxiety disorders, smoking, and past suicidality, are
probably present’ (p. 778).
Genetic factors
Studies of the heritability of childhood depression have been variable in their findings. Family, twin and
adoption studies suggest that genetic influences on childhood depression may be indirect and have their
effects in combination with environmental risks (Rice, 2009; Thapar & Rice, 2006), and in particular,
prepubertal depression is strongly associated with psychological adversity (such as negative childhood
experiences), and has a significantly lower heritability component than adult depression (Rice, 2010).
Just as with adult depression, we would expect any genetic influences on depression to be present from
birth, but these genetic predispositions may only exert their effects at the point in development when the
relevant genes are expressed or when gene effects can interact with specific early life experiences (such
as adversity or stress). Indeed, some
genetic influences may have their effects only when hormonal and physical developments occur in the
child (e.g., during puberty), and those genes that influence levels of cortisol and growth hormone
secretion may be responsible for an increased incidence of depression during puberty (Reinecke &
Simons, 2005). Childhood depression may also be influenced by genes associated with the serotonin
transporter system. One example is SLC6A4, which is associated with a reduction in serotonin expression
(Caspi, Hariri, Holmes, Uher, & Moffitt, 2010). Reductions in serotonin expression are associated with
a vulnerability to depression; during early development serotonin also modulates neuroplasticity, and
dysfunction in these systems is known to contribute to the pathophysiology of depression (Kraus,
Castrén, Kasper, & Lanzenberger, 2017).
Familial studies have indicated a strong link between parental depression and childhood depression, and
a child with a depressed parent is almost four times more likely to suffer childhood depression than one
without a depressed parent (Hammen & Brennan, 2001). While this evidence is consistent with
an inherited view of childhood depression, it could also imply that the behaviour of depressed parents
may create adverse early experiences that precipitate depression in the child (see later). Finally, evidence
that childhood experiences may be significantly more important than inherited factors in causing
childhood depression comes from adoption studies. Such studies have provided little or no evidence for
a genetic influence on depressive symptoms in childhood (Rice, Harold, & Thapar, 2002).
Psychological factors
Two major areas of research into the aetiology of childhood depression are (a) the role of parent‐child
interaction and (b) the development of dysfunctional cognitions that shape and support depressive
thinking in childhood.
We have already noted that children who have depressed parents are themselves more prone to
depression, and this relationship could be mediated in a variety of ways. First, parents who are
depressed may simply transmit their negative and low mood to their children through their interactions
with them, and children may simply model the behavioural symptoms of depression exhibited by their
parents (Jackson & Huang, 2000). Alternatively, depressed parents may not be able to properly respond
to their children's emotional experiences, and in so doing may leave the child either feeling helpless or
unable to learn the necessary emotional regulation skills required to deal with provocative experiences.
In support of this view, (a) Beeghly et al. (2017) found that infants aged 2 to 18 months received more
maternal social support when their mothers exhibited fewer symptoms of depression, and (b) a study of
mother–child interactions by Shaw et al. (2006) found that depressed mothers were significantly less
responsive to their children's expressions of distress than nondepressed mothers. These findings suggest
that depressed mothers may be less sensitive to, or less knowledgeable about, their offspring's emotional
distress, and this lack of responsiveness may facilitate internalising symptoms typical of childhood
depression. Unfortunately, many parents exhibit symptoms of depression, especially in the immediate
postpartum period, and this factor may be significant in generating internalising symptoms in their
infant offspring. For example, Paulson, Dauber, and Leiferman (2006) estimated that 14% of mothers and
10% of fathers exhibited significant levels of depressive symptoms up to 9 months after the birth of a
child. In addition, mothers with postpartum depression were less likely to engage in healthy
feeding and sleeping practices with their infant, and depression in both mothers and fathers was
associated with fewer positive enrichment activities with the child (e.g., reading, singing songs, and
telling stories). Finally, those youths (especially girls) who are the offspring of depressed mothers also
show increased interpersonal impairment and risk of interpersonal dysfunction (Hammen, 2012), and
these romantic and interpersonal dysfunctions may not only be a consequence of depression but may
also intensify and maintain it (Hammen, 2009).
As the child grows older, symptoms of depression often come to be associated with dysfunctional
cognitive characteristics that may function to maintain depressed behaviour. For example, attributional
models of adult depression suggest that the depressed individual may have acquired (a) a pessimistic
inferential style (attributing negative events to stable, global causes), (b) a tendency to catastrophise the
consequences of negative events, and (c) a tendency to infer negative self‐characteristics (see Chapter 7)
(Abramson, Metalsky, & Alloy, 1989). Studies of attributional style in childhood and adolescence
indicate that a stable attributional style does not appear until early adolescence, suggesting that
attributional theories of depression may not apply to children until they reach adolescence (Cole et al.,
2008). Research on cognitive factors in adolescent depression has tended to focus mainly on the role of
pessimistic inferential styles and how this style interacts with negative experiences. For example,
children and adolescents with a pessimistic inferential style have been shown to be more likely to
experience increases in self‐reported depressive symptoms following negative events than children who
do not possess this inferential style (Hilsman & Garber, 1995), and a pessimistic inferential style
interacts with daily hassles (daily annoyances, like being caught in a traffic jam) to predict increases in
depressive symptoms (Brozina & Abela, 2006). Such studies suggest that as the depressed child develops
cognitively, they may develop negative ways of construing events that, in conjunction with negative
experiences, act to maintain depressed symptomatology. Longitudinal studies have pinpointed a
time, around 13–14 years of age, when depressive attributional styles begin to become stable attributes
of the individual and may constitute a risk factor for lifelong depression (Cole et al., 2008).
pessimistic inferential style The attribution of negative events to stable, global causes.
SELF‐TEST QUESTIONS
Can you name four different types of diagnosable childhood anxiety disorder?
How might negative information provide a basis for the learning of anxious responding?
What is the evidence that ‘overprotective parents’ generate anxiety in their children?
How prevalent is depression in childhood and adolescence?
Can you name some risk factors that may make individuals vulnerable to adolescent
depression?
What is the pessimistic inferential style and how might it contribute to depression in
children and adolescents?
SECTION SUMMARY
Specific behaviour therapy techniques such as systematic desensitisation (see Chapter 4) can also
be successfully adapted to treat anxiety‐based problems in children (King, Muris, Ollendick, & Gullone,
2005), although in vivo methods appear to be significantly more successful than ‘imaginal’
desensitisation, where the child has to imagine being in fearful situations. Sturges and Sturges (1998)
report the successful use of systematic desensitisation to treat an 11‐year‐old girl's elevator phobia.
Following an injury to her hand in an elevator door, she refused to ride in elevators. The clinicians
developed a behavioural hierarchy in which the sequential steps involved approaching and entering an
elevator while reciting self‐calming statements that she had agreed upon with the therapists. The child
very quickly resumed elevator use, with no recurrence of anxiety up to 1 year later.
Finally, behaviour management techniques can be used in a range of environments and can even
be taught to parents as an aid to controlling and responding to their children in the home. For example,
teaching parents to identify and reward positive behaviour also helps to prevent parents from focussing
on the negative and disruptive behaviours exhibited by children with both ADHD and conduct disorder.
This has the effect of facilitating adaptive behaviour in the child and reducing the parent's negative
feelings towards the child (Kazdin & Weisz, 2003).
behaviour management techniques Treatment methods that can be used in a range of
environments and can even be taught to parents as an aid to controlling and responding to their
children in the home.
Systemic family therapy A family intervention technique based on the view that
childhood problems result from inappropriate family structure and organisation. The
therapist is concerned with the boundaries between parents and children, and the ways in
which they communicate.
2. Parent management training attempts to teach parents to modify their responses to their
children so that acceptable rather than antisocial behaviours are reinforced; this is used
especially with the families of children diagnosed with conduct disorder (Kazdin, 2006). This
method has been shown to effectively decrease antisocial behaviour and has long‐term beneficial
effects (Brestan & Eyberg, 1998; Dishion & Andrews, 1995).
3. Functional family therapy (FFT) incorporates elements of systemic family therapy and CBT.
This approach views childhood problems as serving a function within the family; for example, they may
represent maladaptive ways of regulating distance or intimacy between other family members. This
type of therapy attempts to change maladaptive interactional patterns and improve
communication (Alexander & Parsons, 1982). In this context, you may also want to have a look at
Treatment in Practice Box 10.1, where Sandy—suffering from anorexia—may be using her eating
problems as a means of distancing herself from her parents' conflicts and marriage problems.
These various forms of family therapy have been used successfully with children with conduct disorder,
ADHD, childhood depression, anxiety problems, and eating disorders (see Carr, 2018, for an evidence‐
based review). Meta‐analyses of systemic and family therapies generally conclude that family
interventions have a positive effect when compared with no treatment and some alternative treatments
(Hazelrigg, Cooper, & Borduin, 1987), and a meta‐content analysis of 47 randomised controlled
outcome studies found that systemic family therapy was effective for treating children with eating
disorders, conduct problems, and substance use disorders (von Sydow, Behar, Rothers‐Schweitzer, &
Retzlaff, 2006).
CBT for anxious children is also designed to enable the child to become aware of problematic
thoughts and feelings (Kendall, Kane, Howard, & Siqueland, 1990). A typical treatment programme
involves (a) recognition of anxious feelings and somatic reactions, (b) understanding the role of
cognitions and self‐talk in exacerbating anxious situations, (c) learning the use of problem‐solving and
coping skills to manage anxiety, (d) using self‐evaluation and self‐reinforcement strategies to facilitate the
maintenance of coping, and (e) implementing a plan of what to do in order to cope when in an anxious
situation. CBT has been successfully used to treat a range of childhood anxiety disorders, including
OCD, GAD, specific phobias, social phobia, and separation anxiety, and has been used with children
aged 8–18 years (Banneyer, Bonin, Price, Goodman, & Storch, 2018; James, James, Cowdrey, Soler, &
Choke, 2015); it can also effectively reduce comorbid mood and behavioural symptoms (Mahdi,
Jhawar, Bennett, & Shafran, 2019). Long‐term follow‐up studies suggest that treatment gains are
maintained over 3 years after treatment (Kendall & Southam‐Gerow, 1996). CBT for childhood anxiety
appears to be as effective as medication alone (O'Kearney et al., 2006) and family
psychoeducation alone (Kendall et al., 2008) but is less effective for children under 4 years of age
(Rapee, Abbott, & Lyneham, 2006). More recent developments include computerised CBT and
Internet‐delivered CBT for children and adolescents (Stallard, Richardson, Velleman, & Attwood, 2011;
Vigerland et al., 2016), CBT for violent behaviour in children (Ozabaci, 2011) and childhood insomnia
(Paine & Gradisar, 2011). Finally, family interventions can also be used successfully to teach parents how
to use basic CBT procedures to address their children's anxiety, and such family‐based interventions
already show much promise (Cartwright‐Hatton et al., 2011).
Play therapy A range of play‐based therapeutic and assessment techniques that can be used
with younger children who are less able to communicate and express their feelings.
Play therapy is a term used to cover a range of therapies that build on the normal
communicative and learning processes of children. Clinicians may use play therapy to help a
child articulate what is troubling them, to manage their behaviour (e.g., impulsive or aggressive
behaviour), and to learn adaptive responses when the child is experiencing emotional problems
or skills deficits. The following are two examples of specific play therapies, one designed to help
children practice self‐control (the ‘Slow Motion Game’) and the other to enable children to
communicate any distress they are feeling (the ‘Puppet Game’).
THE SLOW MOTION GAME
Therapeutic rationale
It is well known that children learn best by doing. The Slow Motion Game (by Heidi Kaduson;
see Kaduson & Schaefer, 2001, pp. 199–202) was designed to have children actively practice
self‐control over their movements in a playful group context.
Description
Materials needed: stopwatches for each child, cards (see below), dice, poker chips, paper, and
colouring materials.
To begin, the therapist introduces the concept of self‐control, discussing how it is very difficult
to maintain self‐control when we are moving too fast. Next, the children are asked to illustrate
what fast moving looks like. Once it is clear that the children understand the concept of self‐
control, each child is given a stopwatch. In the centre of the table are cards created by the
therapist with various scenes that the children must act out in slow motion. For example,
playing soccer, doing jumping jacks, or taking a math test. The children are instructed to roll
the die to see who goes first. The highest number goes first, and that child picks a card and goes
to the front of the room with the therapist. The therapist tells the group what that child is going
to do in ‘very slow motion’. On the count of three, all of the children start their stopwatches.
Every 10 seconds, the group reports to the child performing the task how much time has passed.
When the child has reached a full minute, the group yells ‘Stop’. Having successfully completed
the task, the child receives a poker chip. Then the next child (working in a clockwise direction)
picks a card and the game starts again. Once each child has had a turn, the time is increased to
2 minutes, and the second round begins. At the end of the second round, each player will have
two chips, and a snack or treat is provided as a reward. The therapist can also give each
child a certificate for ‘Achievement in Slow Motion’.
Applications
The Slow Motion Game is successful with any group of children that has difficulty maintaining
self‐control. Common board games, such as Jenga, Operation, Perfection, and Do Not Break the Ice,
can also be used effectively to increase children's self‐control.
THE PUPPET GAME
Therapeutic rationale
Puppets serve a crucial role in play therapy. Frequently, children project their thoughts and
feelings onto puppets. In this way, puppets allow children the distance needed to communicate
their distress. Furthermore, the puppets serve as a medium for the therapist to reflect
understanding and provide corrective emotional experiences in the context of the children's
play. Most children naturally project their experiences onto the puppets. However, some
children are too fearful and withdrawn to become involved in any aspect of therapy. By using
the puppet as a symbolic client (a game created by Carolyn J. Narcavage; see Kaduson &
Schaefer, 1997, pp. 199–203), the therapist is able to engage these children and overcome
resistance. The creation of the symbolic client removes the focus from the child, thereby
increasing the child's comfort level and allowing him or her to remain at a safe emotional
distance.
Description
Materials needed: puppets
Once the therapist recognises that the child is frightened, the therapist might show the child a
puppet, remark that it is frightened, and reassure it of its safety. Next, the therapist should enlist
the help of the child in comforting the puppet. By completing these few simple steps, the
therapist has achieved three essential goals: the therapist has (a) responded to and empathised
with the child's feelings in a nonthreatening manner, (b) begun the child's participation in
therapy, and (c) started fostering a positive therapeutic relationship with the child. The puppet
often becomes a safety object for the child throughout therapy.
Applications
This technique is particularly effective for any child between 4 and 8 years of age who is
anxious or withdrawn in the beginning stages of therapy. A variation of this technique would
be to have the puppet present with the same problem as the child and to enlist the child's help
in brainstorming solutions to solve the puppet's problem.
There has been some criticism of play therapy in the past, with critics suggesting that it lacks an
adequate research base to justify its use (Campbell, 1992; Reade, Hunter, & McMillan, 1999). However,
more recent randomised controlled trials and meta‐analyses of outcome studies suggest that children
treated with play therapy function significantly better after therapy than those who have had no
treatment (e.g., Bratton, Ray, Rhine, & Jones, 2005; Masoudifard, Mahmoodi, Bagheri, Dolatian, &
Kabir, 2019). Play therapy also appears to be effective across modalities, settings, age, and gender and
has positive effects on children's behaviour generally, their social adjustment, and their personality.
SELF‐TEST QUESTIONS
What drugs are used in the treatment of ADHD and childhood depression? What are the
risks of using such drugs?
What is the evidence that SSRIs and SNRIs are dangerous for children and adolescents
below the age of 18 years?
How have behaviour therapy techniques been adapted to treat childhood behavioural
problems?
What are the different types of family intervention that might be used in dealing with
childhood mental health problems?
How can CBT be used to treat childhood and adolescent anxiety and depression?
What is play therapy and what aspects of childhood psychopathology can it be used to
treat?
What preventative strategies have been developed to help prevent common child and
adolescent mental health problems such as anxiety and depression?
SECTION SUMMARY
CHAPTER OUTLINE
17.1 THE HISTORY OF CATEGORISING AND LABELLING
NEURODEVELOPMENTAL DISABILITIES AND DIVERSITIES
17.2 SPECIFIC LEARNING PROBLEMS
17.3 INTELLECTUAL DISABILITIES
17.4 AUTISTIC SPECTRUM DISORDER (ASD)
17.5 NEURODEVELOPMENTAL DISABILITY AND DIVERSITY REVIEWED
LEARNING OUTCOMES
When you have completed this chapter you should be able to:
1. Discuss the different ways in which learning and developmental problems are categorised
and labelled.
2. Describe and compare the various types of specific learning problems, their aetiology and
treatment.
3. Describe the various forms of intellectual disability.
4. Compare and contrast genetic, biological, and environmental causes of intellectual
disabilities.
5. Describe and evaluate the main forms of intervention, care, and support for intellectual
disabilities.
6. Describe the diagnostic criteria for autistic spectrum disorder (ASD).
7. Compare and contrast theories of the aetiology of ASD.
8. Describe and evaluate the main forms of intervention, care, and support for individuals
with a diagnosis of ASD.
During childhood, no one knew what I had. I was considered ‘crazy’ by a doctor at age one, because I had
constant tantrums, which only ended, one day, when my mother took me to the beach during a holiday. My nerves
suddenly calmed down at the sight and the soothing sounds of the sea. I was beginning to say my first words,
and started to make some progress.
Despite the progress, I still had strange behaviours, like spinning plastic lids, jars and coins. I rejected teddy bears
that other toddlers liked, but held on to other objects, like dice (which had a smooth surface and were pleasant to
touch). I was terrorised by everyday noises, like planes passing by, thunder, machinery, drills, balloons bursting,
and any sudden noise.
Being the firstborn, my mother did not take notice of behaviours like rocking back and forth, or spending time on a
rocking horse in the day care centre as a toddler instead of playing with other kids.
Despite socialising difficulties, my interest for reading, and learning the alphabet pleased my mother. Instead of
pointing out pictures in a newspaper my mother was reading, I asked her what the letters were, and that prompted
her to teach me to read before starting school.
Socially, I had problems that worried people. I was not able to recognise people easily, and was not able to decode
nonverbal cues. My mother complained about always having to spell things out to me. While my younger (non‐
autistic) brother seemed to know instinctively when to bring up a subject, or when to say a joke, I was a nuisance,
because I could not tell if somebody was angry, sad, tired, etc., just by looking at him/her. I took things literally
and was terrorised by my mother's ‘threats’ which my younger brother did not take seriously. She uttered threats
like ‘I will send you away’ when we behaved badly. My brother was able to understand that she never meant it;
however, I was terrorised by them.
One thing that discouraged socialising was that most others did not like to talk about insects, calculators, or space
all the time. Other people liked my subjects ‘once in a while’ and got angry if I went on and on. My mother
constantly reminded me not to talk about the same things over and over. Changing subjects was hard for me. I was
fixated on certain subjects like entomology, and arachnology. Nobody cares to hear about the chelate pedipalps of
pseudoscorpions.
George's Story
Introduction
Neurodevelopmental disorders are a category of disabilities that typically begin to manifest in early
development and may affect intellectual, social, and motor development and, as a consequence, will have
effects on academic achievement, the development of social behaviours, and subsequent occupational
functioning. In this chapter, we will cover three categories of neurodevelopmental disorder—specific
learning disorders, intellectual disabilities, and autistic spectrum disorder (ASD). Each of these
categories is characterised either by a specific impairment in learning or control (e.g., language or
communication disorders, such as dyslexia), different ways of processing information about the world
(e.g., autistic spectrum disorder), or global impairments to intellectual, social, and motor skills (e.g.,
intellectual disability). A neurodevelopmental disorder can be considered as a significant, lifelong
condition that is usually present from birth, but it may often not be recognised until the individual's
behaviour appears different at certain milestones in their development. Failures to sit up, to talk, to read, or
to attend to what is going on in the world are all possible signs of a learning disability if these activities do
not appear as expected at normal developmental intervals. Most neurodevelopmental disorders are
permanent conditions, but with suitable support and encouragement many people with these conditions
contribute to society in significant ways (see Focus Point 17.1).
FOCUS POINT 17.1 NEURODIVERSITY
The term ‘neurodiversity’ was coined in 1998 by Australian sociologist Judy Singer, who was
diagnosed with autistic spectrum disorder, and the neurodiversity movement had its origins in
the Autistic Rights Movement that began in the 1990s. Neurodiversity is a term used to reflect
the variation in neurocognitive functioning in human beings, and to promote the idea that there
is not just one ‘normal’ or ‘healthy’ type of brain or form of neurodevelopment and that one
form of neurocognitive functioning is no more right or wrong than any other form. As a
consequence, many clinicians and researchers who integrate the concept of neurodiversity into
their research and their practice refuse to view neurodivergence as intrinsically pathological.
Subsumed under the concept of neurodiversity are a number of conditions usually found in
mental health diagnostic manuals, including autistic spectrum disorder, dyslexia, ADHD (see
Chapter 16), and bipolar disorder (see Chapter 7), and therapists who embrace neurodiversity
attempt to help individuals with these characteristics to find ways of living that are more in
harmony with their neurological dispositions. For example, people with autistic spectrum
disorder have a greater than normal capacity for processing information even from rapid
presentations and are better able to detect information defined as critical—an attribute that
may help to explain the higher than average prevalence of people with autism spectrum
disorders in the IT industry (Remington, Swettenham, & Lavie, 2012). Similarly, individuals
with dyslexia may struggle with word‐based tasks, but in some visuospatial skills they can often
outperform nondyslexic peers (Duranovic, Dedeic, & Gavrić, 2014) which suggests they may
not necessarily be learning disabled in the traditional sense of that phrase but may just be
learning differently.
George's Story describes the early life of someone diagnosed with ASD. This involves early difficulties in
interpreting nonverbal behaviour, impairment in communicating with others, and a repetitive
preoccupation with individual objects, activities, or topics. This personal account provides a striking
insight into how these characteristics can affect normal day‐to‐day living during childhood. George
prefers indulging in stereotyped behaviours, such as rocking, to playing with other kids. He is unable to
understand both normal verbal innuendo and the nonverbal body language that most people learn to
understand implicitly. This causes him to be seen by others as ‘difficult’, uncommunicative, and ‘a
nuisance’, all of which in turn causes him to feel more anxious and distressed. Most
neurodevelopmental disorders, no matter how specific, cause difficulties across the whole range of life
activities, including educational, social, and occupational, but the degree to which those with
neurodevelopmental diversity characteristics have problems in these areas of functioning will depend on
their background and family circumstances and the nature and degree of the disability.
This chapter will look in detail at the various neurodevelopmental diversities, their aetiology, and the
various support options that are available for these diversities (Focus Point 17.1).
intellectual disabilities A modern term replacing mental retardation to describe the more
severe and general learning disabilities.
There is considerable diversity across different areas of the world in how various learning disabilities
have been labelled. In the UK, Europe, and much of Australasia the term learning disability has
often been used as an umbrella term to cover disorders across all three of the main categories described
previously—and it is especially used in this way by health and social care services. In DSM‐IV‐TR, the
term mental retardation referred to a specific diagnostic category of disorder defined as significantly
below average intellectual functioning characterised by an IQ of 70 or below (DSM‐IV‐TR, p. 49).
However, the term mental retardation is now quite rightly frowned upon as stigmatising, demeaning,
and discriminatory, and it was replaced in the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition
(DSM‐5) by the label intellectual disability and in the International Classification of Diseases (ICD‐11) by
intellectual development disorder, a change in line with a United States federal statute known as Rosa's
Law (https://www.govtrack.us/congress/bills/111/s2781, 2010). There is as
yet no genuine international consensus on the use of these categories and labels, and even within
countries these terms can change quite frequently to reflect shifts in social attitudes towards individuals
with different learning abilities.
SELF‐TEST QUESTIONS
How are the terms specific learning disability, intellectual disability, and autistic spectrum
disorder defined?
SECTION SUMMARY
Specific learning disabilities such as these are commonly comorbid with other childhood
psychological problems, and studies suggest that specific learning disabilities can be diagnosed in 79%
of children with bipolar disorder, 71% with attention‐deficit‐hyperactivity disorder (ADHD), 67% with
autism, and considerably lower percentages with anxiety and depression (18–19%) (Mayes & Calhoun, 2006).
Literacy problems generally are associated with increased risk for both externalising and internalising
disorders in childhood, and this may be due either to the stressors associated with academic failure
(causing anxiety and depression), or the fact that certain types of cognitive deficit (such as attention
deficits) may be common to a number of different disorders, including specific learning disorders and
disruptive behaviour disorders such as ADHD (Maughan & Carroll, 2006).
TABLE 17.1 Summary: DSM‐5 diagnostic criteria for specific learning disorder
Impediments in learning and using academic skills, marked by at least one of the following over a
6‐month period:
- Inaccurate or slow and effortful reading
- Difficulty understanding the meaning of what is read
- Spelling difficulties
- Difficulties with expression through writing
- Difficulties understanding numbers
- Difficulties with mathematical reasoning
The affected academic skills are substantially below what would be expected for the individual's
age.
The difficulties are not better accounted for by intellectual disabilities, vision or hearing
difficulties, or other mental or neurological disorders.
TABLE 17.2 Specific learning disabilities

Disability | Description | Example symptoms
Dyslexia | | Read slowly and with poor comprehension
Problems with written expression | Writing skills are substantially below those expected for chronological age, intelligence, and educational level | Regular errors in spelling, grammar, or punctuation
 | | Failure to understand arithmetic concepts
 | | Difficulty recalling the right word
Dyslexia
Dyslexia is a complex pattern of learning difficulties associated with difficulties in word recognition
while reading, poor spelling, and difficulties with written expression. Reading may be characterised by
word distortions, substitutions, or omissions, and is generally slow, with the child having difficulty fully
comprehending what has been read. Around 3 to 17.5% of school‐age children have specific
developmental reading problems (DeFries, Fulker, & LaBuda, 1987; Shaywitz et al., 1998), and between
60 and 80% of those diagnosed are likely to be boys (Shaywitz, Shaywitz, Fletcher, & Escobar, 1990),
and this gender difference may be due to a number of factors, including (a) higher referral rates for
males, who may be more disruptive than girls in learning environments, (b) girls at least partially
offsetting their reading difficulties by enjoying reading more than boys (Chiu & McBride‐Chang, 2006),
(c) girls having more effective coping strategies than boys for dealing with reading difficulties
(Alexander‐Passe, 2006), and (d) boys having a slower and more variable processing speed and worse
inhibitory control than girls (Arnett et al., 2017). Most longitudinal studies suggest that
reading problems can often be persistent and chronic, which does not simply represent a developmental
lag in reading ability but is evidence for a specific learning disability (Bruck, 1992; Scarborough, 1990;
Francis, Shaywitz, Stuebing, Shaywitz, & Fletcher, 1996). Figure 17.1 shows that, even though children
with impaired reading skills show an improvement in reading ability with age, a gap in reading ability
remains across time between children with reading impairments and those without. In individuals
with dyslexia, writing skills also fall significantly below those expected for the child's
chronological age, IQ, and educational history. The child will have difficulty composing written text and
will exhibit grammatical errors, punctuation errors, poor paragraph organisation, spelling errors, and
poor handwriting (Photo 17.1).
FIGURE 17.1 Trajectory of reading skills over time in nonimpaired and dyslexic readers. Both nonimpaired and
dyslexic readers improve their reading scores as they get older, but the gap between nonimpaired and dyslexic readers remains,
suggesting that dyslexia is a deficit and not a developmental lag.
From Shaywitz (2003).
PHOTO 17.1 Dyslexia includes deficits in spelling and writing as well as reading. Other symptoms of dyslexia can
include poor comprehension, reversal of words or letters while reading, and difficulty decoding syllables or single words and
associating them with specific sounds (phonics). Here, a child with dyslexia attempts to reproduce a teacher's sentence.
Will and Deni McIntyre/Science Source, National Audubon Society Collection/Photo Researchers, Inc. Reproduced with permission.
Dyscalculia
The main feature of dyscalculia is that mathematical or arithmetical ability falls significantly short of
that expected for the child's chronological age, IQ, and educational history. Individual skills that may be
problematic in dyscalculia are (a) understanding or naming mathematical terms, (b) decoding problems
into mathematical terms, (c) recognising and reading numerical symbols or arithmetical signs, (d)
copying numbers or symbols correctly, (e) remembering to conduct certain mathematical operations
(such as ‘carrying’ figures when making calculations), and (f) following sequences of mathematical steps
in the correct order. It is estimated that around 3–4% of school‐age children may suffer from
developmental dyscalculia, with the male to female ratio as high as 4:1 (Reigosa‐Crespo et al., 2012).
Language disorder
Language disorder involves difficulties with language acquisition and use resulting from deficits in
vocabulary comprehension and production and in the construction of sentences. This usually results in
a significantly smaller vocabulary size, with grammatical and tense errors. These problems will appear
during early development and persist into adolescence and adulthood (Table 17.3). General features of
this particular problem include a limited amount of speech, limited vocabulary, difficulty learning new
words, difficulty finding the right word (e.g., unable to come up with the word car when pointing to a
car), shortened sentences, simple grammatical structures (e.g., use of relatively few verb forms), omission
of critical parts of sentences, unusual word order, and slow language development generally. Language
disorder is often comorbid in younger children with speech sound disorder (discussed next), reflecting a
general problem with the fluidity of language and erratic speech rhythms. Language disorder can be
identified as early as age 2–3 years (Eisenwort et al., 2004), but milder forms may not become apparent
until early adolescence.
TABLE 17.3 Summary: DSM‐5 Diagnostic Criteria for Language Disorder
Ongoing difficulties in the attainment and use of language (including spoken and written), due to
difficulties in comprehension or production that include the following:
Reduced vocabulary
Limited sentence structure ability
Difficulties in dialogue
Abilities are substantially below what would be expected for the patient's age
Symptoms start in early development
The difficulties are not better accounted for by vision or hearing difficulties, motor dysfunction, or
other mental or neurological disorders
speech sound disorder Persistent difficulty with speech sound production that interferes with
speech intelligibility or prevents verbal communication of messages.
TABLE 17.4 Summary: DSM‐5 Diagnostic Criteria for Speech Sound Disorder
Ongoing difficulty with speech sound production that causes problems with speech understanding
or prevents verbal communication
The difficulty causes limitations in effective communication, interfering with social participation,
academic or occupational performance
Symptoms start in early development
Difficulties are not better accounted for by congenital or acquired conditions including cerebral
palsy, deafness, or other medical conditions
stuttering A disturbance in the normal fluency and time patterning of speech that is
inappropriate for the individual’s age.
TABLE 17.5 Summary: DSM‐5 Diagnostic Criteria for Childhood‐Onset Fluency Disorder (Stuttering)
Ongoing disturbance of normal fluency and time patterns in speech, inappropriate to the
patient's age and language skills, as marked by at least one of the following:
Sound and syllable repetitions
Sound prolongation of consonants and vowels
Broken words
Filled or unfilled pauses in speech
Word substitution to avoid difficult words
Word pronunciation with excessive physical tension
Monosyllabic whole‐word repetitions
The disturbance causes anxiety about speaking or limitation in effective communication
Symptoms start in early development
Difficulties are not better accounted for by speech‐motor or sensory deficit or other medical
condition
Dyslexia
As we mentioned earlier, dyslexia is a condition that affects both reading and written expression and
is persistent and chronic, with reading ability lagging behind that of nonimpaired individuals for
most of the lifespan. The development of dyslexia can be predicted by a number of
risk factors, including difficulty recognising rhymes at age 4 years (Bradley & Bryant, 1985), difficulty
naming everyday objects at age 5 years (Wolf, Bally, & Morris, 1986), and difficulty learning syntactic
rules at age 2–3 years (Scarborough, 1990). However, the main causes of dyslexia now appear to be
identified as abnormalities in specific areas of the brain such as the temporoparietal region (Shaywitz &
Shaywitz, 2005). These abnormalities may be the result of genetic factors, and they may give rise to the
difficulties that individuals have in decoding and comprehending written material. We will review these
theories of the aetiology of dyslexia by looking in turn at evidence related to genetic inheritance,
cognitive factors, and brain functions.
dyslexia A persistent, chronic learning disability in which there are developmental deficits in
spelling, reading and writing abilities.
Genetic factors
As early as 1950, it was reported that more than 80% of children with dyslexia also had other family
members with the disability, with further studies suggesting that between 23 and 65% of children with
dyslexia have a parent with the disorder (Scarborough, 1990). In addition, 40% of the siblings of
sufferers will also exhibit symptoms of dyslexia (Pennington & Gilger, 1996). This suggests that dyslexia
runs in families and so may have an important genetic component. Evidence for this genetic
component comes from studies showing that dyslexia concordance rates are significantly higher in
monozygotic (MZ) than in dizygotic (DZ) twins (Stevenson, Graham, Fredman, & McLoughlin, 1987),
and heritability estimates have in some cases been as high as 0.72 (Plomin & Kovas, 2005). Using
genetic markers for dyslexia (see Chapter 1, Section 1.3), linkage studies have implicated genes on a
number of chromosomes in the aetiology of dyslexia, including loci on chromosomes 2, 3, 6, 15, and 18
(Fisher & DeFries, 2002), and the involvement of several genes spanning these regions in the aetiology
of dyslexia has been reported (Mascheretti et al., 2017). There is evidence from molecular genetics to
suggest that many of these genes participate in brain development, and cause the differences in brain
development associated with dyslexia (Galaburda, LoTurco, Ramus, Fitch, & Rosen, 2006; Scerri &
Schulte‐Körne, 2010).
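The logic linking twin concordance rates to heritability estimates can be illustrated with Falconer's classic approximation, which estimates broad heritability as roughly twice the difference between MZ and DZ concordance. This is a simplified sketch only; the concordance figures used below are purely illustrative and are not the values reported in the studies cited above.

```python
def falconer_h2(mz_concordance: float, dz_concordance: float) -> float:
    """Falconer's approximation: heritability (h^2) is roughly twice the
    difference between monozygotic and dizygotic twin concordance rates,
    because MZ twins share ~100% of their segregating genes and DZ twins ~50%.
    The estimate is clamped to the interval [0, 1]."""
    h2 = 2.0 * (mz_concordance - dz_concordance)
    return max(0.0, min(1.0, h2))

# Purely illustrative concordance rates (not the published dyslexia figures):
# a 0.30 gap between MZ and DZ concordance implies h^2 of about 0.6
print(falconer_h2(0.68, 0.38))
```

Published heritability estimates such as the 0.72 cited above come from more sophisticated model fitting, but the sketch captures the underlying logic: the larger the MZ–DZ concordance gap, the stronger the inferred genetic contribution.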
Cognitive factors
Research on the aetiology of dyslexia has converged on the view that reading disabilities in dyslexia are
caused primarily by difficulties in differentiating the elements of speech (phonemes) and associating
these sounds with the letters in a written word (Shaywitz, 2003). This is known as the phonological
theory of dyslexia where, in order to learn to read, the child must learn to recognise that letters and
letter strings represent the sounds of spoken language. The deficits in dyslexia impair the child's ability
to break up a spoken word into its basic phonological elements and link each letter to its corresponding
sound. This deficit is quite independent of other abilities, such as general intelligence, reasoning,
vocabulary, and use of syntax (Share & Stanovich, 1995; Shankweiler et al., 1995). Shaywitz & Shaywitz
(2005) characterise the experience of the dyslexic in the following way: ‘The problem is that the affected
reader cannot use his or her higher‐order linguistic skills to access the meaning until the printed word
has first been decoded and identified. Suppose, for example, an individual knows the precise meaning of
the spoken word ‘apparition’; however, she will not be able to use her knowledge of the meaning of the
word until she can decode and identify the printed word on the page and it will appear that she does not
know the word's meaning’ (p. 1302).
phonological theory The view that reading disabilities in dyslexia are caused primarily by
difficulties in differentiating the elements of speech (phonemes) and associating these sounds
with the letters in a written word.
However, phonological theory may not be the single answer to understanding dyslexia. First, it has
difficulty explaining all the behavioural symptoms of dyslexia (e.g., many individuals with a
diagnosis of dyslexia are both physically and socially immature, and have trouble reading social
cues) (Ramus & Ahissar, 2012). Second, not all individuals with dyslexia show a phonological
deficit (Pennington et al., 2012) and, conversely, not all individuals with a phonological deficit are
given a diagnosis of dyslexia (van der Leij et al., 2013). This suggests that phonological theory may not be the
sole explanation of dyslexia, with the possibility that various constellations of underlying cognitive
abnormalities may lead to some of the symptoms of dyslexia (van Bergen, van der Leij, & de Jong,
2014).
Brain function
Associated with problems in relating written letters to corresponding sounds are problems in brain
functioning in dyslexia—especially in the temporoparietal areas of the brain. Post‐mortem studies
of the brains of individuals with dyslexia suggest abnormalities in the temporoparietal brain region
(Galaburda et al., 1985) and in the number and organisation of neurones in the posterior language area
of the cortex (Galaburda, 1993). Nevertheless, these abnormalities found in post‐mortem studies might
simply represent the consequences of a lifetime of poor reading rather than a causal factor in dyslexia.
However, functional magnetic resonance imaging (fMRI) studies of the brains of young children with
dyslexia indicate that they show significantly less activation in a number of left hemisphere sites when
reading than do nonimpaired children. These areas include the inferior frontal, superior temporal,
parietotemporal, and middle‐temporal‐middle occipital gyri (Shaywitz et al., 2002). This represents a
common finding from functional brain imaging studies suggesting that a failure of proper functioning in
left hemisphere brain systems is associated with impaired reading in children with dyslexia (Xu, Yang,
Siok, & Tan, 2015). Studies of lesions of the temporoparietal areas of the brain also indicate that this
area may be critical for analysing the written word and transforming the symbol into the sounds
associated with the linguistic structure of the word (Damasio & Damasio, 1983; McCandliss, Cohen, &
Dehaene, 2003). Interestingly, brain imaging studies also suggest that individuals with dyslexia may
attempt to compensate for the lack of function in the temporoparietal areas of the brain by using other
brain areas to help them identify words and associate them with sounds. These compensatory effects
involve brain sites required for physically articulating a word, enabling the individual with dyslexia to
develop an awareness of the sound structure of a word by forming the word with their lips, tongue, and
vocal apparatus (Brunswick, McCrory, Price, Frith, & Frith, 1999). Compensatory effects such as this
may explain why reading performance in children with dyslexia improves with age but still fails to reach
the standard of nonimpaired children (see Figure 17.1).
Dyscalculia
Dyscalculia appears to be a specific but chronic condition, in which sufferers may perform better than
average on measures of IQ, vocabulary, and working memory but still perform poorly on
tests of mathematical ability (Landerl, Bevan, & Butterworth, 2004). This problem appears to be the
result of specific disabilities in basic number processing, and can take three basic forms: (a) a problem in
the memorising and retrieval of arithmetic facts, (b) developmentally immature strategies for solving
arithmetic problems, and (c) impaired visuospatial skills resulting in errors in aligning numbers or
placing decimal points (Geary, 1993, 2004).
Dyscalculia appears to have a familial component, but as yet the underlying genetic contributors to this
are not well researched (Monuteaux, Faraone, Herzig, Navsaria, & Biederman, 2005; Carvalho & Haase,
2019), even though abnormalities in brain function associated with dyscalculia may be partially
transmitted genetically (von Aster, Kucian, Schweiter, & Martin, 2005), and may be associated with the
genes that mediate mathematical ability generally (Plomin & Kovas, 2005). However, a number of
studies have also implicated prenatal factors such as fetal alcohol spectrum disorder (FASD) and low
birthweight (O'Malley & Nanson, 2002; Shalev, 2004). Brain functions specialising in number
processing are located in various areas of the brain, and fMRI studies have implicated abnormalities in
the left parietotemporal and inferior prefrontal cortex areas of the brain and the intraparietal sulcus in
mathematics disorder (Dehaene, Molko, Cohen, & Wilson, 2004; Molko et al., 2003). Thus, the current
evidence suggests a possible genetic or developmental cause that results in abnormalities of function in
those areas of the brain responsible for processing numbers and arithmetic calculations.
Communication disorders
Many communication disorders may be caused by organic problems relating to abnormal development
of the physical apparatus required to make and articulate sounds. For example, speech sound disorder
and stuttering can be associated with physical causes such as hearing impairment, cleft palate, cerebral
palsy, and ear, nose, and throat problems. In addition, some theories of stuttering argue that this
disorder results from problems with the physical articulation of sounds in the mouth and larynx
(Agnello, 1975). However, organic difficulties related to sound production may not represent the whole
picture. For example, there is growing evidence of a familial and genetic component to communication
disorders such as stuttering (Canhetti‐Oliveira & Richieri‐Costa, 2006; Frigerio‐Domingues & Drayna,
2017) that indicates that the heritability of stuttering may be as high as 80–85% (Ooki, 2005; Fagnani,
Fibiger, Skytthe, & Hjelmborg, 2011). There is also evidence from brain scan studies of abnormalities in
certain brain circuits that are related to stuttering. One such circuit is the basal ganglia‐thalamo‐
cortical motor circuit, which, if impaired, may affect the ability of the basal ganglia to produce timing
cues for the initiation of the next motor segment in speech (Alm, 2004). The fact that stuttering may be
a problem associated with the sequential production of sounds and words is supported by evidence
which suggests that stuttering rarely occurs in one‐word utterances and is affected by the length and
grammatical complexity of utterances (Bloodstein, 2006). Furthermore, stuttering is often a
consequence of brain injury in the basal ganglia, suggesting that this region plays an important role
in the production of normal speech (Tani & Sakai, 2011).
Finally, there is some evidence that the production of sounds in communication disorders may be
affected by emotional factors such as anticipatory anxiety or lack of control over emotional reactions
(Karrass et al., 2006). However, at least some researchers view this association between disorders such as
stuttering and anticipatory anxiety as secondary, and as a conditioned consequence of previous
stuttering experiences (Alm, 2004).
altered auditory feedback (AAF) A form of treatment for stuttering in which delayed
auditory feedback or a change in frequency of the voice is given to clients when they are
speaking.
SELF‐TEST QUESTIONS
What are the defining characteristics of specific learning disorder as a diagnostic category?
What are the individual skills that may be impaired in dyscalculia?
What are the main characteristics of language disorder, speech sound disorder, and
childhood‐onset fluency disorder?
What is the evidence that dyslexia is an inherited disorder?
Can you describe the phonological theory of dyslexia?
What areas of the brain appear to be most affected in dyslexia?
What is the evidence that communication disorders might be associated with physical
rather than psychological causes?
Can you describe treatments for stuttering such as AAF and prolonged speech?
SECTION SUMMARY
intellectual disability A modern term replacing mental retardation to describe the more
severe and general learning disabilities.
Deficits in intellectual functions as confirmed by clinical assessment and standard intelligence tests
Deficits in adaptive functioning resulting in an inability to meet development and sociocultural
standards for personal independence and social responsibility
Symptoms start in developmental period
DSM‐5 also allows intellectual disability to be specified by severity as mild, moderate, severe, or
profound. Mild is the least disabling: sufferers may be only mildly cognitively impaired, socially
'immature' rather than impaired, and may be able to deal with daily living tasks given appropriate
support. In the case of profound severity the individual may have little understanding of
symbolic communication and is dependent on others for all aspects of daily care, their health, and their
safety. Clinicians assessing individuals with intellectual disabilities would also gather information about
disabilities from other reliable independent sources, such as teachers and medical doctors.
17.3.2 Alternative Approaches to Defining Intellectual Disability
Rather than simply taking a negative approach to diagnosis and focusing on an individual's limitations,
impairments, and deficits, other views attempt to highlight those factors that might be required to
facilitate better intellectual and adaptive functioning in the individual. People with intellectual
disabilities differ significantly in the severity of their disabilities, with some able to function almost
without notice in everyday life, while others may require constant supervision and sheltered
environments in which to live. Similarly, individuals also differ significantly in their personalities. Some
will be passive, placid and dependent, while others may be aggressive and impulsive. These kinds of
issues mean that each individual with an intellectual disability is likely to differ in terms of both their
level of functioning and what is required to achieve any form of adaptive functioning. In this sense, the
notion of ‘intellectual disability’ is more of a social construction than a diagnostic category (a term that
is a product of particular historical and cultural conditions rather than medical or psychological science)
(Webb & Whitaker, 2012).
The American Association on Intellectual & Developmental Disabilities (AAIDD) has promoted a more
individualised assessment of a person's skills and needs rather than an approach based solely on
categorising intellectual and adaptive impairments. This approach emphasises that individuals have
both strengths and limitations and that an individual's limitations need to be described in a way that
enables suitable support to be developed. So, rather than simply forcing the individual into a diagnostic
category, this approach evaluates the specific needs of the individual and then suggests strategies,
services, and supports that will optimise individual functioning. Supports are defined as ‘the resources
and individual strategies necessary to promote the development, education, interests, and personal well‐
being of a person with intellectual disabilities’. Supports can be provided wherever necessary by parents,
friends, teachers, psychologists, doctors, and GPs or any other appropriate person or agency. People
with intellectual disabilities frequently face major stigma and prejudice, and they are often confronted
with significant barriers to realising their own potential, but approaches such as that advocated by the
AAIDD are designed to enable individuals with intellectual disabilities to achieve their potential. In the
UK, the Special Education Needs & Disability Act of 2001 extended the rights of individuals with
intellectual disabilities to be educated in mainstream schools, and schools are required to draw up
accessibility strategies to facilitate the inclusion of pupils with intellectual disabilities and to make
reasonable adjustments so that they are not disadvantaged. As a result of such changes in attitude,
support, and legislation, more than half of those people with intellectual disabilities in the UK now live
with their parents or carers.
accessibility strategies Programmes that extend the rights of individuals with intellectual
disabilities to be educated according to their needs in mainstream schools.
Biological causes
Biological factors represent the largest known group of causes of intellectual disabilities, and we will
divide these into three main categories: (a) chromosomal disorders, (b) metabolic causes, and (c)
perinatal causes.
Chromosomal disorders
For many years now, it has been known that forms of intellectual disability are genetically linked to
abnormalities in the X chromosome (the chromosome that also determines biological sex), and these
abnormalities will often manifest as physical weaknesses in the chromosomes or abnormalities resulting
from irregular cell division during the mother's pregnancy. Chromosomal abnormalities occur in around
5% of all pregnancies, and most of these pregnancies end in spontaneous miscarriage. However, it is
estimated that 0.5% of all newborn babies have a chromosomal disorder, and many of these die soon
after birth (Smith, Bierman, & Robinson, 1978). Chromosomal disorders account for around 25–30%
of all diagnosed cases of intellectual disability and the two most prominent forms are Down syndrome
and fragile X syndrome.
PHOTO 17.2 The typical facial features of children born with Down syndrome or fragile X syndrome. Individuals with
Down syndrome almost always possess an extra chromosome in pair 21, while in fragile X syndrome the X chromosome
shows physical weaknesses and may be bent or broken.
Down syndrome was first described by British doctor Langdon Down in 1866. However, it was not
until 1959 that French geneticist Jerome Lejeune first reported that individuals with Down syndrome
almost always possess an extra chromosome in pair 21, usually caused by errors in cell division
in the mother's womb (also known as trisomy 21) (Photo 17.2). Down syndrome
occurs in around 1.5 of every 1,000 births (a prevalence rate of 0.15%) (Simonoff, Bolton, & Rutter,
1996), although this may vary from country to country. An estimated 37,000 people in the UK had
Down syndrome in 2011 (a prevalence rate of 0.66 per 1,000 people; Wu & Morris, 2013). Risk is
related to the age of the mother, and for women aged 20–24 years the risk is 0.07%, and this rises to
1.1% for women aged 40 years, and up to 3.3% in women aged over 45 years (Morris, Wald, Mutton, &
Alberman, 2003). Although this link between maternal age and incidence of Down syndrome has been
known for some time, it is still unclear how maternal age contributes to the chromosomal abnormalities.
However, recent animal studies suggest that as a mother ages she will have lower amounts of the
proteins cohesin and securin in her eggs, and these proteins help to keep chromosomes together at
their centres. Lower levels of these proteins therefore lead to instability in chromosome pairs and the
possibility that chromosome division will happen unevenly (Nabti, Grimes, Sarna, Marangos, & Carroll,
2017).
Down syndrome A disorder caused by the presence of an extra chromosome in pair 21 and
characterised by intellectual disability and distinguishing physical features.
The majority of individuals with Down syndrome have moderate to severe intellectual impairment with
a measurable IQ usually between 35 and 55. They also have a distinctive physical appearance, with eyes
that slant upward and outward and an extra fold of skin that appears to exaggerate the slant. They are
usually shorter and stockier than average, with broad hands and short fingers. They may also have a
larger than normal furrowed tongue that makes it difficult for them to pronounce words easily. They
also suffer physical disabilities such as heart problems, and appear to age rapidly, with mortality high after
40 years of age. Aging is also closely associated with signs of dementia similar to Alzheimer's disease (see
Chapter 15), and this may be a result of the causes of both disorders being closely located on
chromosome 21 (Zigman, Schupf, Sersen, & Silverman, 1995; Selkoe, 1991). Down syndrome can be
identified prenatally in high‐risk parents by using a procedure known as amniocentesis which involves
extracting and analysing the pregnant mother's amniotic fluid. It is a routine procedure for pregnant
mothers that is carried out after week 15 of pregnancy. It is frequently recommended for mothers over
the age of 35 years, and the results of this process can leave prospective parents with difficult decisions
about whether to maintain a pregnancy or not.
amniocentesis A procedure which involves extracting and analysing the pregnant mother’s
amniotic fluid used prenatally in identifying Down syndrome in high-risk parents.
Another important chromosomal abnormality that causes intellectual disability is known as fragile X
syndrome, in which the X chromosome appears to show physical weaknesses and may be bent or
broken. Fragile X syndrome occurs in approximately 1 in 4,000 males and 1 in 8,000 females
(Hagerman & Lampe, 1999). Individuals with fragile X syndrome possess mild to moderate levels of
intellectual disability and may also exhibit language impairment and behavioural problems such as
mood irregularities (Eliez & Feinstein, 2001; Zigler & Hodapp, 1991). Like individuals with Down
syndrome, they also have specific physical characteristics, such as elongated faces and large, prominent
ears (see Photo 17.2). Studies suggest there may be a spectrum of fragile X syndrome in which
different individuals manifest rather different symptoms and degrees of disability (Hagerman, 1995). For
example, some may have normal IQ levels but suffer specific learning disabilities. Others may exhibit
emotional lability and symptoms characteristic of autism such as hand‐biting, limited speech and poor
eye contact (Dykens, Leckman, Paul, & Watson, 1988), and around one in three will exhibit symptoms
of ASD (Hagerman, 2006). Intellectual impairment will usually be greatest in males suffering fragile X
syndrome because they only have one X chromosome. Because females possess two X chromosomes the
risk of intellectual disability is less (Sherman, 1996).
fragile X syndrome A chromosomal abnormality that causes intellectual disability where the
X chromosome appears to show physical weaknesses and may be bent or broken.
Metabolic disorders
Metabolic disorders occur when the body's ability to produce or break down chemicals is impaired.
There are many different types of metabolic disorders and many can affect intellectual ability. Such
disorders are often caused by genetic factors and may be carried by a recessive gene. When both
parents possess the defective recessive gene, then their offspring are in danger of developing the
metabolic disturbances linked to that gene. We provide examples of two such genetically determined
metabolic disorders that affect intellectual ability. These are phenylketonuria (PKU) and Tay‐Sachs
disease.
recessive gene A gene that must be present on both chromosomes in a pair to show outward
signs of a certain characteristic.
Tay‐Sachs disease is also a metabolic disorder caused by a recessive gene (often found in children of
Eastern European Jewish ancestry). The defective gene results in an absence of the enzyme
hexosaminidase A in the brain and central nervous system, and this eventually causes neurones to die.
The disorder is degenerative, with infants of around 5 months showing an exaggerated startle response
and poor motor development. Only around 17% of sufferers live beyond 4 years of age (Sloan, 1991),
but those who do show a rapid decline in cognitive, motor, and verbal skills. The disorder is relatively rare,
occurring in around 1 in 360,000 live births worldwide, and this rate is being significantly reduced by
effective screening.
Perinatal causes
From conception to the early postnatal period is a dangerous time for an organism that is developing as
rapidly as a human baby. Because of this, there are a considerable number of prenatal and immediate
postnatal factors that put normal development at risk and may cause lifelong intellectual disability. One type of
risk involves those factors that can adversely affect the foetus's intrauterine environment and its food
supply. These include factors such as maternal infections, substance abuse, or malnutrition. Disorders
acquired during prenatal development are known as congenital disorders because they are acquired
prior to birth but are not genetically inherited.
congenital disorders Disorders acquired during prenatal development prior to birth but
which are not genetically inherited.
Maternal diet is one example. For instance, too little iodine in the mother's diet during
pregnancy can give rise to the condition known as cretinism, because the iodine deficiency leads to
a hormonal imbalance known as thyroxine deficiency. Children suffering from this
disorder show slow development, intellectual disabilities, and often have a small stature. Thankfully the
condition is relatively rare nowadays because of the availability of iodised table salt and the fact that
most diets now contain sufficient iodine. Similarly, mineral and vitamin deficiencies caused by
maternal malnutrition during pregnancy can also result in intellectual disability and significantly
affect the child's physical and behavioural development (Barrett & Frank, 1987). However, the adverse
effects of maternal malnutrition can often be partially rectified by providing newborn infants with
intellectually supportive environments and appropriate food supplements (Zeskind & Ramsay, 1981;
Super, Herrera, & Mora, 1990). In most Westernised societies maternal malnutrition is relatively rare,
but when it does occur it probably occurs in conjunction with other factors likely to harm the child's
intellectual and physical development, such as maternal drug or alcohol addiction, low socio‐economic
status, and possibly maternal HIV or syphilis infection (see below).
Maternal infectious diseases during pregnancy are another potential cause of intellectual disability in
the offspring. Such diseases are most damaging during the first trimester of pregnancy when the foetus
has little or no immunological protection. Common maternal diseases that can cause intellectual
impairment in the offspring include rubella (German measles), syphilis, and HIV, amongst others. If a mother contracts rubella during the first 10 weeks of pregnancy, there is almost a 90% chance that the baby will develop congenital rubella syndrome (CRS), which can result in abortion, miscarriage, stillbirth, or severe birth defects. Up to 20% of babies born live will have CRS, causing heart disease, deafness,
and intellectual impairment. The incidence of rubella infections during pregnancy in the UK between
2003 and 2016 was 0.23 infections per 100,000 pregnancies, and 16% of these infections led to CRS in
the infant (Bukasa et al., 2018).
In contrast, maternal HIV infection became an important cause of intellectual disability in the
1980s and 1990s. If the mother is not being treated for HIV during pregnancy there is a likelihood that
the infection will be passed on to the foetus. The infection can also be passed on through breastfeeding.
There is then almost a 50% chance that the newborn child will develop moderate to severe
neurodevelopmental and intellectual disabilities (Knox et al., 2018). However, in utero transmission of
HIV can be reduced from 25% to 8% if the mother is given an antiretroviral drug such as zidovudine
during pregnancy and if the newborn child then receives the drug for up to 6 weeks postnatally (Belfer
& Munir, 1997).
maternal HIV infection The incidence of a mother having HIV during pregnancy, leading
to a likelihood that the infection will be passed on to the fetus.
A further significant cause of intellectual disability is maternal drug use during pregnancy. In many
cases the drugs responsible for offspring intellectual disability may be ones taken for medicinal purposes
(such as drugs taken during cancer chemotherapy treatment), but most other cases occur where the
mother is a substance abuser. For instance, US studies indicated that 18% of pregnant women smoke
tobacco, 9.8% drink alcohol, and 4% use illegal drugs (Jones, 2006). Fetal alcohol syndrome (FAS) is one example of maternal drug abuse causing childhood intellectual disabilities. Whenever a pregnant mother drinks alcohol, the alcohol enters the foetus's bloodstream, slowing its metabolism and affecting its development. If this occurs on a regular basis, then development of the foetus will be severely
impaired. Children suffering FAS will usually have lower birthweight, lower IQ (between 40 and 80) and
suffer motor impairments and deficits in attention and working memory (Niccols, 1994; Burden,
Jacobson, Sokol, & Jacobson, 2005). They will also frequently exhibit distinctive facial characteristics
including slit eyes, short noses, drooping eyelids and thin upper lips. In the UK around 1 in every 6–
7,000 babies born have FAS (National Organisation on Fetal Alcohol Syndrome, 2012). Recently,
attention has also been focussed on the intellectual and developmental effects on children of illegal drug use by pregnant mothers. Use of both cocaine and crack cocaine (see Chapter 9) by a pregnant
mother can lead to babies being physically addicted to the drug before birth (known as ‘crack
babies’). There is some evidence that this can adversely affect physical development and brain
development in particular (Hadeed & Siegel, 1989) and result in slow language development (van Baar,
1990). However, it is clear that maternal drug‐taking while pregnant may often occur in contexts that
may also contribute to poor intellectual development in the offspring, and these may include the abuse
of other drugs, pregnancy deprivations (such as dietary imbalances), and economic and social
deprivation (Vidaeff & Mastrobattisa, 2003). As such, this makes it difficult to assess the specific effect of
maternal cocaine use on offspring intellectual development (Jones, 2006).
One final example of a perinatal cause of intellectual disability is anoxia, which is a significant period
without oxygen occurring during or immediately after delivery. Lack of oxygen to the brain during the
birth process can damage parts of the brain that are yet to develop, and as a result can cause both
physical and intellectual impairment (Erickson, 1992). The main neurological birth syndrome caused by
anoxia is cerebral palsy which is characterised by motor symptoms that affect the strength and
coordination of movement. While the primary disabilities are mainly physical, around one third of
those suffering from cerebral palsy will also suffer some form of intellectual, cognitive, or emotional disability.
anoxia A perinatal cause of intellectual disability, being a significant period without oxygen
that occurs during or immediately after delivery.
cerebral palsy The main neurological birth syndrome caused by anoxia which is
characterised by motor symptoms that affect the strength and coordination of movement.
Childhood causes
Although a child may be born healthy, there are potentially a number of early childhood factors that
might put the child at risk of intellectual disability. Very often these factors may operate in conjunction
with other causes such as perinatal problems. We will look briefly at four groups of potential childhood
causes of intellectual disability, namely accidents and injury, exposure to toxins, childhood infections,
and poverty and deprivation.
During their early developmental years, young children will often be involved in accidents, and these
can often be severe enough to cause irreversible physical damage and intellectual impairment (Ewing‐
Cobbs et al., 2006). Common childhood accidents that may cause permanent intellectual disability
include falls, car accidents, near drownings, suffocation, and poisoning. However, at least some of the
injuries that cause intellectual disability in children may not be genuine accidents but may be the result
of physical abuse by others. A retrospective study of head injuries in children aged between 1 and 6
years of age estimated that 81% of cases could be defined as accidents and 19% as definite cases of
abuse (Reece & Sege, 2000). One form of child abuse that is known to cause intellectual disability is
known as shaken baby syndrome. This refers to traumatic brain injury that occurs when a baby is
violently shaken. In comparison to babies who receive accidental traumatic brain injury, shaken baby
injuries have a much worse prognosis, including retinal haemorrhaging that is likely to cause blindness
and an increased risk of mental disability such as cerebral palsy or intellectual impairment (Lind,
Laurent‐Vannier, Toure, Brugel, & Chevignard, 2013). Nevertheless, we must remain cautious about the
degree to which shaken baby syndrome may contribute to intellectual disability because of controversies
over how the syndrome should be diagnosed (e.g., Kumar, 2005).
shaken baby syndrome A form of child abuse that is known to cause intellectual disability. It
refers to traumatic brain injury that occurs when a baby is violently shaken.
During early development children may also be exposed to toxins that can cause neurological damage
resulting in intellectual impairment. One such toxin is lead, which is still frequently found in the
pollution from vehicles that burn leaded petrol. Lead‐based paint is also found in older properties,
and so may well be a risk factor in children living in deprived, low socio‐economic areas. Lead causes
neurological damage by accumulating in body tissue and interfering with brain and central nervous
system metabolism. Children exposed to high levels of lead have been found to exhibit deficits in IQ
scores of up to 10 points (Dietrich, Berger, Succop, Hammond, & Bomschein, 1993). Even in
Westernised societies aware of the risks associated with exposure to lead, the prevalence of lead
poisoning in children aged 1–2 years is still as high as 1% (Ossiander, Mueller, & van Enwyk, 2005).
Prevalence rates are significantly higher than this in developing countries (Sun, Zhao, Li, & Cheng,
2004).
There is evidence to suggest that social deprivation and poverty can themselves contribute to
intellectual disability. Although such factors may not directly cause impairment to the biological
substrates underlying intellectual ability, they may produce a form of intellectual impoverishment that
can be measured in terms of lowered IQ scores (Garber & McInerney, 1982; Farah, 2017). Social
deprivation and poverty are also inextricably linked to other risk factors for intellectual disability,
including poor infant diet, exposure to toxins (such as lead paint in old or run‐down housing), maternal
drug‐taking and alcoholism, and childhood physical abuse. A cycle of deprivation, poverty, and
intellectual disability is established when young adolescents in deprived environments themselves give
birth to children while still teenagers (Wildsmith, Manlove, Jekielek, Moore, & Mincieli, 2012). Such
teenage mothers are frequently found to live in deprived areas, are often unmarried, live in poverty as a
result of their premature motherhood, and have a significantly lower than average IQ themselves
(Carnegie Corporation of New York, 1994; Borkowski et al., 1992). Studies have shown that teenage
mothers are significantly more likely to punish their children than praise them and are significantly less
sensitive to their children's needs than older mothers (Borkowski et al., 1992; Brooks‐Gunn & Chase‐
Lansdale, 1995). As a result, children born to teenage mothers are at increased risk of problematic
parent‐child interactions (Leadbeater, Bishop, & Raver, 1996), behavioural difficulties (Fergusson &
Lynskey, 1993), and cognitive disadvantage and educational underachievement (Fergusson &
Woodward, 1999; Brooks‐Gunn, Guo, & Furstenberg, 1993). Consequently, mild intellectual disability is
reckoned to occur three times more frequently in the children of teenage mothers (Borkowski et al.,
1992; Broman et al., 1987). As we said earlier, it is difficult to estimate how much of this is due solely to the
teenage mother's age and her parenting practices, because the child of a teenage mother is significantly
more likely to be raised in the kinds of deprived environments that contain many other risk factors for
intellectual disability.
teenage mothers In relation to intellectual disabilities, young mothers who become pregnant before 18 years of age, who are likely to have lived in deprived areas prior to giving birth, are often unmarried, live in poverty as a result of their premature motherhood, and are likely to have a significantly lower than average IQ.
Finally, one important feature of deprived environments is that they will usually provide significantly
decreased levels of stimulation for young children, including lower rates of sensory and educational
stimulation, lack of one‐to‐one child‐parent experiences, and poverty of verbal communication—all
factors that are thought to be associated with poor intellectual development. However, there are some
views that claim that lack of stimulation can have a direct effect on the early physical development of the
brain and so result in permanent impairments to brain functioning. For instance, neural development of
the brain occurs most extensively and rapidly in the first year after birth (Kolb, 1989), and a rich,
stimulating environment is necessary for full development of the brain's structure (Nelson & Bosquet,
2000). Conversely, an unstimulating, stressful environment can actually trigger the secretion of
hormones that prevent effective brain development (Gunnar, 1998). In a study comparing children
brought up in deprived inner city areas with a group exposed to good nutrition and provided with a
stimulating environment, Campbell & Ramey (1994) found that by 12 years of age the deprivation
experienced by the former group had a significant negative effect on brain functioning.
Training procedures
The quality of life of people with intellectual disabilities can be improved significantly with the help of
basic training procedures that will equip them with a range of skills depending on their level of
disability. Types of skills include self‐help and adaptive skills (such as toileting, feeding, and dressing),
language and communication skills (including speech, comprehension, sign language), leisure and
recreational skills (such as playing games, cooking skills), basic daily living skills (using a telephone,
handling money), and controlling anger outbursts and aggressive and challenging behaviour (reducing
the tendency to communicate through aggressive or challenging behaviours such as pushing or
shouting). Training methods can also be used in more severe cases to control life‐threatening behaviours
such as self‐mutilation or head‐banging.
Behavioural techniques that adopt basic principles of operant and classical conditioning are used
extensively in these contexts, and the application of learning theory to training in these areas is also
known as applied behaviour analysis (Davey, 1998; Cooper, 2019). Basic techniques that are used
include operant reinforcement (rewarding correct responses—for example, with attention or praise),
response shaping (breaking down complex behaviours into small achievable steps and then rewarding
each step successively), errorless learning (breaking down a behaviour to be learned into simple
components that can be learned without making errors—errorless learning is stronger and more
durable than learning with errors), imitation learning (where the trainer demonstrates a response for the
client to imitate), chaining (training the individual on the final components of a task first, and then
working backwards to learn the earlier steps), and self‐instructional training (teaching the client to guide
themselves through a task by verbally instructing themselves what to do at each step). Very often,
inappropriate, life‐threatening or challenging behaviours may be inadvertently maintained by
reinforcement from others in the environment (e.g., self‐mutilating behaviour may be maintained by the
attention it attracts from family or care staff). In these cases, a functional analysis can be carried out in
order to help identify the factors maintaining the behaviour, and this is done by keeping a record of the
frequency of the behaviours and noting the antecedents and consequences of the behaviour. Once it is
known what consequences might be maintaining the behaviour, these can be addressed to prevent the
behaviour being reinforced (Mazaleski, Iwata, Vollmer, Zarcone, & Smith, 1993; Wacker et al., 1990)
(see Treatment in Practice Box 17.1).
applied behaviour analysis Applying the principles of learning theory (particularly operant
conditioning) to the assessment and treatment of individuals suffering psychopathology.
CLINICAL PERSPECTIVE: TREATMENT IN PRACTICE BOX
17.1 A FUNCTIONAL ANALYSIS OF CHALLENGING
BEHAVIOUR
Some individuals with intellectual disabilities display behaviour that may put
themselves or others at risk, or which may prevent the use of community facilities or prevent
the individual having a normal home life. Challenging behaviours may take the form of
aggression, self‐injury, stereotyped behaviour, or disruptive and destructive behaviour generally.
In many cases, a functional analysis may help to identify the factors maintaining challenging
behaviour, and these may include social attention, tangible rewards such as a hug, escape from stressful situations, or simply the sensory stimulation the behaviour provides. A functional analysis is
undertaken by keeping a record of the frequency of the behaviours and noting the antecedents
and consequences of the behaviour. This will take the form of:
A. What happens before the challenging behaviour (the trigger)
B. What does the individual do? (the behaviour)
C. What does the person get as a result of the behaviour? (the consequence)
A typical ‘ABC’ chart on which family and carers will keep a record of these behaviours will often have the following columns:

Date | Antecedent (what happened before the behaviour occurred?) | Behaviour (describe exactly what the person did) | Consequence (what happened immediately after the behaviour?) | Signature
special educational needs (SEN) A term used in the UK to identify those who require
instruction or education tailored to their specific needs.
CASE HISTORY 17.1 THOMAS'S STORY
Thomas is 23 and lives with his mum and dad. His brother now lives away, but sees him quite
regularly.
Thomas has Down syndrome and needs a great deal of support. He goes to college 4 days a
week and, on Fridays, attends a project where he is learning living skills and enjoying cooking.
Thomas has a supported work placement for 2 hours a week in a riding stable. He has a hectic
social life with weekly activities including riding, sports, going to the gym, trampolining, and
football. He goes to monthly discos with a group of young people with a learning disability and
is regularly to be found in the local pub playing pool with his friends.
We were transporting him to and from these activities and were concerned that he should be
able to mix more with his own age group. We arranged for 15½ hours' worth of direct
payments for Thomas to choose someone in his peer group to help him access these activities.
There was a great deal of interest in the advert we placed at the local university for a student to
help with this and we have had several different students helping over the past year, who have
become firm friends. Thomas's current helper, Laura, accompanies him on his outings and
Thomas has now become a part of a wider social circle, going to the pub, out for meals, and
watching videos at Laura's house with her friends, which he greatly enjoys.
Thomas's moods, self‐esteem, and well‐being are greatly improved by the stimulation and social
nature of all that he does, as well as the routine and structure it brings to his life.
Thomas and his friends have gained enormous confidence from attending several drama
courses and the group has enjoyed the feeling of empowerment and also the opportunity to
show their feelings. Last year, a group of 11 young adolescents, including Thomas, attended a
week‐long outward bound course run by the Calvert Trust without their families. Afterwards,
the group made a presentation to about 80 people who had been involved in organising or
fundraising their trip, with a very professional PowerPoint presentation and question and
answer session. They were all keen to contribute, wanted to find other groups to make their
presentation to and gained lots of confidence from this. It makes a change from the usual
painting eggs and bingo offered by local services, just not appropriate for a 22‐year‐old.
(From evidence from a family carer given to the Foundation for People with Learning Disabilities' Inquiry
into Meeting the Mental Health Needs of Young People with Learning Disabilities–Count Us In)
Clinical Commentary
Thomas is an example of how individuals with intellectual disabilities can benefit significantly from
accessibility and inclusion strategies. He has work in a supported employment setting and has a full
social life in which he can mix with people of his own age. This approach has the benefit of building
confidence and self‐esteem, as well as providing the individual with a real sense of empowerment.
Inclusion policies have resulted in significant improvements to the quality of life experienced by
individuals with intellectual disabilities, and such individuals now have opportunities to pursue social,
educational, and occupational goals and pursue their own personal development. For example,
individuals with intellectual disabilities now have the right to pursue their own sexual and emotional
development—usually with the support of their own family. Whereas in the past involuntary
sterilization was common for such individuals, appropriate training and counselling now means that
most individuals can be taught about sexual behaviour to a level appropriate for their functioning. This
often means that they can learn to use contraceptives, employ responsible family planning, get married,
and—in many cases—successfully rear a family, either on their own or with the help of local services
(Lumley & Scotti, 2001; Levesque, 1996; but see Schaafsma, Kok, Stoffelen, & Curfs, 2017, for a
discussion of how sex education for individuals with intellectual disabilities might be improved).
However, while inclusion in its broadest sense continues to benefit individuals with intellectual
disabilities, there is still much confusion over the way in which inclusion is defined and operationalised
(Bigby, 2012) and this makes objectively measuring the success of such policies difficult (Martin &
Cobigo, 2011).
Finally, although inclusion is still an important goal for governments and service providers, in 2016 it
was estimated that only 5.7% of individuals with an intellectual or learning disability were in any form
of paid employment in the UK, compared with 74.5% of the working‐age population (Hatton, 2016).
If these figures are correct, then they represent a disappointingly low level of inclusion. Of those that
are employed, most are conscientious and valued workers employed in normal work environments.
Others with more specific needs may need to pursue employment within sheltered workshops or
supported employment settings which provide employment tailored to the individual's own needs.
The UK Government seeks to promote employment as another form of social inclusion for individuals
with intellectual disabilities. Those working in sheltered workshops have been shown to exhibit higher
levels of job satisfaction than those who work outside such a scheme, and those who live in a semi‐independent home and also work in a sheltered workshop show the highest levels of self‐esteem (Griffin,
Rosenberg, & Cheyney, 1996).
sheltered workshops Settings that provide individuals with intellectual disabilities with
employment tailored to their own needs and abilities.
autistic spectrum disorder (ASD) An umbrella term that refers to all disorders that display
autistic-style symptoms across a wide range of severity and disability.
After Adam's first birthday party his mother began to pay attention to some characteristics of
her son's personality that did not seem to match those of the other children. Unlike other
toddlers, Adam was not babbling or forming any word sounds, while others his age were saying
‘mama’ and ‘cake’. Adam made no attempt to label people or objects but would just produce a few noises, which he would utter randomly throughout the day.
At the birthday party and in other situations, Adam seemed uninterested in playing with other
children or even being around them socially. He seemed to enjoy everyone singing ‘Happy
Birthday’ to him, but made no attempt to blow the candles out on the cake—even after others
modelled the behaviour for him.
His parents also noted that Adam had very few interests. He would seek out two or three Disney
toys and their corresponding videotapes and that was it. All other games, activities, and toy
characters were rejected. If pushed to play with something new, he would sometimes throw
intense, inconsolable tantrums. Even the toys he did enjoy were typically not played with in an
appropriate manner. Often he would line them up in a row, in the same order, and would not
allow them to be removed until he decided he was finished with them. If someone else tried to
rearrange the toys he would have a tantrum.
As the months went by and he remained unable to express his wants and needs, Adam's
tantrums became more frequent. If his mother did not understand his noises and gestures, he
would become angry at not getting what he wanted. He would begin to hit his ears with his
hands and cry for longer and longer periods of time.
Source: Adapted from Gorenstein & Cromer (2004).
Clinical Commentary
From a very early age, Adam exhibited symptoms of the triad of impairments typical of autistic
spectrum disorder. He shows (a) no sign of engaging in or enjoying reciprocal social interactions (e.g., the
lack of interest in socialising with others at his birthday party), (b) a significant delay in the development
of spoken language (illustrated by his failure to form word sounds, label objects, or express his wants and needs), and (c) a lack of imagination and flexibility of thought (as demonstrated by his inability to use
toys in imaginative play and his inflexibly stereotyped behaviour towards these toys).
Impairments in communication
There is often a prominent delay in the development of spoken language, and in those that do learn to
speak there can be an inability to sustain a conversation. When speech does develop, it may fail to follow
the normal rules of pitch, intonation, or stress, and a child's speech may sound monotonous and
disinterested. Grammatical structures are often immature, and more than half of those diagnosed with autistic disorder fail to speak at all, although they may utter a range of noises and screams that are often unrelated
to attempts to communicate. Some individuals exhibit what is known as echolalia, which is immediate
imitation of words or sounds they have just heard (e.g., if asked “Do you want a drink?” the child will
reply “Do you want a drink?”). Others that do develop language may only be able to communicate in a
limited way and may exhibit oddities in grammar and articulation. For instance, some exhibit pronoun
reversal in which they refer to themselves as ‘he’, ‘she’, or ‘you’, and this is a feature of speech that is
highly resistant to change (Tramonta & Stimbert, 1970). An autistic child's ability to learn language is a
good indicator of prognosis. Those that have learned meaningful speech by age 5 years are the ones
that are most likely to benefit from subsequent treatment (Werry, 1996; Kobayashi, Murata, &
Yoshinaga, 1992).
echolalia The immediate imitation of words or sounds that have just been heard.
Biological causes
Genetic factors
Evidence has long suggested that the social and language deficits and psychological problems reminiscent of ASD often have a family history (Folstein & Rutter, 1988; Piven & Palmer,
1999). In particular, there is evidence for a strong familial aggregation of autistic symptoms, as
demonstrated in studies of sibling recurrence risk (i.e., studies investigating the probability of
developing autism given that an individual's sibling is autistic). These studies have estimated that the rate
of autism in the sibling of someone with autism ranges between 2 and 14% (Bailey, Phillips, & Rutter,
1996; Jorde et al., 1990), which is significantly higher than the prevalence rate found in the general
population. ASD also appears to co‐occur with several known genetic disorders such as
phenylketonuria, fragile X syndrome, and tuberous sclerosis (Smalley, 1998; Reiss & Freund, 1990),
implying a genetic link in its aetiology. There are also familial links between ASD and other
psychological problems. For instance, affective disorders are almost three times more common in the
parents of autism sufferers than in the parents of children suffering from tuberous sclerosis or epilepsy.
While we might expect that having a child with a disability might precipitate such psychological
problems, a majority of parents of autistic children developed their affective disorder before the birth of
the child (Bailey, Phillips, & Rutter, 1996).
Numerous twin studies have confirmed this genetic component to ASD. In studies comparing
concordance rates in MZ and DZ twins, Folstein & Rutter (1977) found concordance in 4 out of 11 MZ
twins but none in DZ twins. Subsequent twin studies have found concordance rates of between 60 and
91% for MZ twins and between 0 and 20% for DZ twins (Rutter et al., 1990; Bailey et al., 1995;
Steffenberg et al., 1989; Lichtenstein, Carlstrom, Ramstam, Gillberg, & Anckarsater, 2010), and a
recent meta‐analysis of twin studies has indicated that the heritability level of ASD is substantial, at
between 64 and 91% (Tick, Bolton, Happe, Rutter, & Rijsdijk, 2016). In addition, twin studies have also
demonstrated that each of the symptom components of autistic disorder—social impairments,
communication impairments, and restricted repetitive behaviours—all individually show high levels of
heritability (Ronald, Happe, Price, Baron‐Cohen, & Plomin, 2006).
Molecular genetic studies have implicated over 100 genes as risk factors for ASD (Satterstrom et al., 2020).
Most of these affect the development of brain synapses or regulate other genes, and while some have a
broad effect on early development, others are more specific to the symptoms of ASD. Figure 17.2
provides an overview of some of the mechanisms by which these gene effects are moderated. These
include (a) abnormalities resulting from gene copy number variations (CNVs), (b) epigenetics, in which
perinatal or early development experiences may modulate the expression of genes (e.g., Wong et al.,
2014), (c) double‐hit mutations (abnormalities resulting from rearrangements in two particular genes),
and (d) sex‐linked modifiers (i.e., sex‐linked genes that may make males more susceptible to ASD)
(Rylaarsdam & Guemez‐Gamboa, 2019). Most of this evidence strongly indicates that ASD is a
complex inherited condition that may involve a range of different genetic influences affecting symptom
expression and severity, including several different gene CNVs (Freitag, Staal, Klauck, Duketis, &
Waltes, 2010).
Perinatal factors
We noted in our discussion of intellectual disabilities that perinatal factors may play a significant role in
determining intellectual impairment and the same may be true in the case of ASD. A range of birth
complications and pre‐natal factors have been identified as risk factors in the development of ASD, and
these include maternal infections, such as maternal rubella during pregnancy (Chess, Fernandez, &
Korn, 1978), intrauterine exposure to drugs such as thalidomide and valproate (Stromland, Nordin,
Miller, Akerstrom, & Gillberg, 1994; Williams et al., 2001), maternal bleeding after the first trimester of
pregnancy (Tsai, 1987), and depressed maternal immune functioning during pregnancy (Tsai &
Ghaziuddin, 1997). However, many of these risk factors have been identified only in individual case
reports, and they probably account for a very small percentage of cases of ASD (Fombonne, 2002;
Muhle, Trentacoste, & Rapin, 2004). For example, recent studies suggest that congenital rubella
infection is present in less than 0.75% of autistic populations—largely because of
the near eradication of the disease in Western countries (Fombonne, 1999; however, rubella still affects
up to 5% of pregnant women worldwide and may still be a cause of autism, Hutton, 2016). Some
studies also claimed to have linked autism to postnatal events, notably a link between ASD, inflammatory
bowel disease, and administration of the measles, mumps, and rubella (MMR) vaccine (Wakefield et al.,
1998). This claim caused some controversy in the UK at the time because it led to many parents
refusing to have their children immunised with the vaccine, and so put them at significant risk for these
infections (see Activity Box 17.1). However, subsequent studies have failed to corroborate an association
between administration of MMR and autism (e.g., Madsen et al., 2002; Hviid, Hansen, Frisch, &
Melbye, 2019). In addition, recent studies have failed to find any association between infectious
diseases in the first 2 years of life and autism. Rosen, Yoshida, & Croen (2007) found that children with
subsequent diagnoses of autism had no more overall infections in the first 2 years of life than children
without autism.
FIGURE 17.2 Genetic modifiers in autism spectrum disorder. Autism spectrum disorder is estimated to be between 64
and 91% inherited (Tick, Bolton, Happe, Rutter, & Rijsdijk, 2016). However, both genetic and nongenetic factors
modulate the influence of risk genes, resulting in a highly heterogeneous set of symptoms. Examples of genetic modulators
include CNV (abnormalities resulting from gene copy number variations), epigenetics (e.g., maternal complications during
pregnancy may influence gene expression), and double‐hit mutations (where abnormalities result from rearrangements in two
particular genes). Examples of nongenetic modifiers include environmental exposures (e.g., maternal tobacco smoking) and
sex‐linked modifiers (e.g., sex‐linked factors that may protect females from ASD or make males more susceptible to ASD).
From Rylaarsdam & Guemez‐Gamboa, 2019.
Brain function
There is now a good deal of converging evidence from autopsy studies, fMRI studies, and studies
measuring EEG (electroencephalogram) and ERP (event‐related potentials) that autism is associated
with aberrant brain development. Autopsy studies of individuals diagnosed with ASD have revealed
abnormalities in a number of brain areas including the limbic system and the cerebellum. For example,
neurons in the limbic system are smaller and more densely packed than normal, and the dendrites, which carry messages between neurons, are shorter and less well developed (Bauman & Kemper, 1994).
Abnormalities in the cerebellum appear to correspond to deficits in motor skills such as impaired
balance, manual dexterity, and grip often found in individuals with ASD (Gowen & Miall, 2005). Finally,
autopsy studies have also shown overly large brain size and enlarged ventricles in the brain (Bailey et al.,
1998), and many of these abnormalities are typical of prenatal stages of brain development.
Interestingly, children with ASD are born with brains of normal size, but brain size increases
significantly between 12 and 24 months, and brain size at 24 months is positively correlated with autism
symptoms (Hazlett et al., 2017). A brain that is growing at a faster rate than normal may mean that
neurone connections are not being made selectively, and this may affect the functionality in important
parts of the brain such as the frontal and temporal lobes, and the cerebellum—brain areas important
for language, social, and emotional functions.
Anatomical and functional imaging studies have supplemented the evidence from autopsy studies and
given us an insight into how brain abnormalities in autism progress during different developmental
stages. They have confirmed that individuals with autism have abnormalities in a number of brain
regions, including the frontal lobes, limbic system, cerebellum, and basal ganglia (Sokol & Edwards‐
Brown, 2004), and they also confirm that autistic individuals have larger brain size and significantly
poorer neural connectivity than nonsufferers (McAlonan et al., 2005). We have already alluded to the
fact that individuals diagnosed with autism may lack a ‘theory of mind’ (the ability to attribute mental
states to others or to understand the intentions of others) (see Section 17.4.4), and fMRI studies indicate
that this is associated with decreased activation of the prefrontal cortex and amygdala, areas that form an important component of the brain system underlying the understanding of others' intentions (Castelli,
Frith, Happe, & Frith, 2002). The larger brain size of individuals with ASD may provide some insight
into the developmental factors that may impair normal brain development in childhood.
In addition, one in four individuals with autistic disorder also exhibit abnormal EEG patterns in the frontal and temporal lobes, and many of these individuals experience clinical seizures (Dawson, Klinger, Panagiotides, Lewy, & Castelloe, 1995; Rossi, Parmeggiani, Bach, Santucci, & Visconti, 1995). In
contrast, ERP studies provide information from brain activity about how individuals react to external
stimuli in the environment, and individuals with autism exhibit ERP patterns that indicate disrupted
and abnormal attention to a range of stimuli, including novel stimuli and language stimuli (Courchesne
et al., 1994; Dunn, 1994).
Taken together, these sources of evidence indicate that individuals with ASD exhibit abnormalities in a
number of different brain areas. These brain areas exhibit both anatomical (i.e., structural) developmental abnormalities and functional abnormalities (i.e., they do not appear to be able to
fulfil the cognitive functions they are intended for). These abnormalities appear to result from a
period of unusual brain overgrowth in early childhood (hence studies showing that autistic individuals
develop oversize brains), followed by abnormally slow or arrested growth, and these anomalies in brain
growth occur at a time during development when the formation of brain circuitry is at its most
vulnerable (Courchesne, 2004).
Cognitive factors
Depending on the severity of their symptoms, individuals with ASD often have problems attending to
and understanding the world around them. Most notably, they have difficulty with normal social
functioning. In severe cases they may be withdrawn and unresponsive, while less severe cases may
exhibit difficulty in reciprocal social interaction, including experiencing problems in communication
and in understanding the intentions and emotions of others. Some theorists have argued that these
deficits in social skills are a result of deficits in cognitive functioning (Rutter, Bailey, Bolton, & Le Couteur,
1994). First, individuals with ASD appear to exhibit deficits in executive functioning, resulting in
poor problem‐solving ability, difficulty planning actions, controlling impulses and attention, and
inhibiting inappropriate behaviour, and these deficits all have an impact on the ability to act
appropriately in social situations. Second, some theorists have argued that individuals with ASD lack a
‘theory of mind’ (TOM). That is, they fail to comprehend normal mental states and so are unable to
understand or predict the intentions of others. Third, and importantly, researchers such as Simon
Baron‐Cohen have argued that individuals with ASD do not just exhibit cognitive impairments, they
also have areas of strength, and one such strength is their ability to systematise information. We discuss
these three cognitive accounts separately.
Even adults with high‐functioning autism exhibit theory of mind deficits on some measures. For
example, many of the traditional tests of theory of mind are rather static and somewhat removed from
the dynamic situations an individual with autism will experience in real life. To make such tests more
akin to everyday experiences, Heavey, Phillips, Baron‐Cohen, & Rutter (2000) devised the Awkward
Moments Test, in which participants view a series of TV commercials and then are asked questions
about the events in each. Individuals with Asperger's syndrome were significantly less able to answer
questions about the mental state of the characters in the commercials than an age‐ and gender‐matched
control group without autism. However, the two groups did not differ on scores on questions related to
recall of events within the TV clips (a memory test), suggesting that the poorer scores on mental state
questions by Asperger's syndrome participants were not simply due to a memory deficit. Because of these
difficulties in understanding the mental states of others, individuals with ASD will undoubtedly have
difficulty indulging in symbolic play with others, actively participating in human interactions, and
forming lasting relationships.
However, some researchers argue that theory of mind deficits do not explain all the social interaction
problems found in individuals with ASD. Other processes that need to be considered are problems in facial and emotion processing, poor attentional focus, and the possibility that individuals with ASD may simply have a disinclination for social interactions rather than a deficit (e.g., Nuske, Vivanti, & Dissanayake, 2013;
Koldewyn, Jiang, Weigelt, & Kanwisher, 2013).
How can we measure whether someone can understand the intentions of others? Baron‐Cohen,
Leslie, & Frith (1985) designed an imaginative procedure that has been used many times to
assess theory of mind abilities in a range of clinical populations. This is known as the Sally‐
Ann False Belief Task. In this procedure, two dolls are used to act out the story shown in
the image, and at the end children are asked ‘Where will Sally look for her marble?’ Children
who have developed a theory of mind will say that when Sally comes back from her walk she
will look in the basket for her marble because they will understand that she has not seen Ann
move it. Children who are unable to understand that others have different beliefs from
themselves will say that Sally will look in the box because that is where they themselves know it
is.
Baron‐Cohen, Leslie, & Frith (1985) conducted this test with three groups of children, all with a
mental age of over 3 years. One group was diagnosed with autistic disorder, one with Down
syndrome, and the third group consisted of normally developing children. Most of the children
with autism answered incorrectly (saying Sally would look in the box) while most of the children
in the other two groups gave the right answer (saying Sally would look in the basket). The
inclusion in the study of a group of children with Down syndrome showed that the failure on
this task of children with autism could not be attributed to their learning difficulties more
generally. In addition, all children correctly answered two control questions, ‘Where is the marble really?’ and ‘Where was the marble in the beginning?’, demonstrating understanding of
the change in the physical location of the marble during the story.
Drug treatments
A number of drugs are used in the treatment of autism symptoms, mainly to help manage problem
behaviours in those with severe symptoms. Antipsychotic medications are the type of drug most
commonly used in the treatment of severe autistic symptoms, and these include haloperidol and,
more recently, risperidone (see also Chapter 4 for a description of antipsychotic drugs). Antipsychotic
drugs such as these have been shown to reduce repetitive and stereotyped behaviours, reduce levels of
social withdrawal, and also reduce symptoms associated with aggression and challenging behaviour,
such as hyperactivity, temper tantrums, mood changes, and self‐abusive behaviour (Malone, Gratz,
Delany, & Hyman, 2005). However, not all children with autism respond well to this class of drugs, and
they can have potentially serious side effects such as sedation, dizziness, increased appetite, and weight gain, and can result in jerky movement disturbances (dyskinesias) (Campbell et al., 1997).
The opioid receptor antagonist naltrexone has also been found to be beneficial in the control of
hyperactivity and self‐injurious behaviour, and a study by Symons, Thompson, & Rodriguez (2004)
suggested that the drug decreased self‐injurious behaviours by over 50% in 47% of the participants in
their study. Some studies have even indicated that naltrexone can produce moderate increases in social
interaction and communication (Aman & Langworthy, 2000; Kolmen, Feldman, Handen, & Janosky,
1995).
Most of these training procedures are time consuming and repetitive and require a significant amount
of investment in time and effort by those conducting the training. However, a way of supplementing
treatment by professionals is to train parents themselves so that they can apply these behavioural
techniques at home (Erba, 2000; Boone, 2018). This has a number of benefits. It enables the autistic
child to learn appropriate behaviours in the environment in which he/she is most likely to be using
them (the home), and it frees up professional therapists' time and offers a tiered structure to treatment
that provides a potentially larger number of sufferers with day‐to‐day treatment. Some studies even
suggest that parents may be more effective and efficient trainers than professionals, and a study by
Koegel, Schreibman, Britten, Burkey, & O'Neill (1982) suggested that 25–30 hours of parent training
was as effective as 200 hours of similar treatment by professionals in a clinic setting. Parent‐
implemented early intervention has been shown to improve child communication behaviour, increase maternal knowledge of autism, enhance maternal communication style and parent–child interaction, and reduce maternal depression (McConachie & Diggle, 2007). Parents can not only learn
to use behavioural techniques to train their own children but can also effectively train others who work
with or care for their children to use these techniques (Symon, 2005). This approach effectively expands
the group of individuals associated with an autistic child who are skilled in maintaining a consistent
training regime for that child.
Inclusion strategies
Many home‐based interventions for high‐functioning individuals with ASD teach self‐help strategies,
social and living skills, and self‐management that are designed to help the individual function more
effectively in society. However, even when an individual has effectively acquired many of these skills,
they may still need to be supported through important life transitions, such as finding and keeping a job.
One such support scheme is known as supported employment. This provides support to both the
employee with autism and the employer and includes (a) providing training and support for the
employer on how to manage the employee with autism, (b) provision of job preparation and interview
skills for the employee, (c) support for the employee for as long as it is needed, and (d) regular feedback
sessions with both employee and employer. Supported employment schemes such as this have been
shown to increase the employee's social integration, increase employee satisfaction and self‐esteem
(Kilsby & Beyer, 1996; Stevens & Martin, 1999), and promote higher rates of employment compared to
a matched control group (Mawhood & Howlin, 1999) (Focus Point 17.2—Compensatory Strategies).
Having hidden her ‘quirks’ her whole life, Eloise Stark—a student studying for a psychiatry
doctorate at Oxford University—struggled to make sense of why she felt different until she was
diagnosed with autism at the relatively late age of 27.
‘I adapted to try and fit in. I learned from an early age that you are expected to make eye
contact, then read that, actually, people do not keep constant eye contact and that was
something of an epiphany for me. So I started to look away for 2 seconds for every four
sentences of a conversation. I know that if someone makes a joke, I am expected to laugh,
whether I find it funny or not’. (BBC News, 2020).
Many adults with a diagnosis of ASD can appear quite neurotypical and rarely demonstrate atypical behaviours during social encounters. They may show good eye contact and appropriate social reciprocity, but many high‐functioning individuals with ASD have spent some time learning and developing compensatory strategies to deal with these socially related issues.
In a study investigating the types of compensatory strategies used by adults with a diagnosis of
ASD, Livingston, Shah, Milner, and Happé (2020) found that many intellectually able
individuals with a diagnosis of ASD reported using learned compensatory strategies to modify
their social behaviour. Some of these strategies included:
Predicting, planning, and rehearsing conversations before they happen
Mimicking facial expressions, gestures, and tone of voice picked up from other people or
from TV characters
Looking at the bridge of the nose or standing at right angles to the person they are talking
with in order to avoid eye contact
Making eye contact even though it is not appropriate for that particular conversation
Using props in social situations that will take attention away from any potential social faux
pas, props such as pets, children, or an interesting object
Such compensatory strategies can operate at both conscious and subconscious levels, and help
to explain why some individuals do not receive a diagnosis of ASD until well into adulthood
(Livingston & Happe, 2017; Lai et al., 2017).
Summary of support and interventions for individuals with autism spectrum disorder
This section has given a flavour of the broad range of support and interventions that are available for
individuals with a diagnosis of ASD. Basic behavioural training methods have proven to be effective at
promoting a range of self‐help, social, and communicative skills in those most severely affected, and this
has been supplemented with the adoption of parent training programmes that extend the range of
individuals with the skills necessary for successful intervention. Drugs are used primarily to
control negative behavioural symptoms such as self‐injurious, challenging, and hyperactive behaviours,
and they may also have some positive impact on communication and social behaviour. High‐functioning individuals with ASD can also receive support in the form of supported employment programmes that help the individual to seek a suitable job and to prosper in that employment.
developmental disabilities A broad umbrella term used, in the USA, to refer to intellectual
disabilities and pervasive developmental disorders such as autism and Asperger’s syndrome.