The Role of Therapist Training
Abstract
Evidence-based treatments (EBT) are underutilized in community settings, where consumers are
often seen for treatment. Underutilization of EBTs may be related to a lack of empirically informed
and supported training strategies. The goals of this review are to understand the state of the literature
for training therapists in psychotherapy skills and to offer recommendations to improve research in
this area. Results of this review of 55 studies evaluating six training methods indicate that multi-
component trainings have been studied most often and have most consistently demonstrated positive
training outcomes relative to other training methods. Studies evaluating the utility of reading, self-directed trainings, and workshops have documented that these methods do not routinely produce positive outcomes. Workshop follow-ups help to sustain outcomes. Little is known about the impact of train-the-trainer methods. Methodological flaws, factors that may influence training outcome, and future directions are also reviewed.
Keywords
therapist training; implementation; dissemination; psychosocial treatments
The hope that mental health problems can be successfully ameliorated is supported by the
availability of an increasing number of psychosocial treatment approaches with established
efficacy (e.g., Silverman & Hinshaw, 2008). For example, efficacious treatment programs have
been reported to address developmental disorders, behavioral and emotional disorders,
substance abuse, eating disorders, personality disorders, and psychotic disorders, among others
(e.g., Eyberg, Nelson, & Boggs, 2008; Scogin, Welsh, Hanson, Stump, & Coates, 2005).
However, these approaches continue to be underutilized in community settings (Street,
Niederehe, & Lebowitz, 2000) where millions of consumers receive mental health services
each year (National Advisory Mental Health Council [NAMHC], 2001; Ringel & Sturm, 2001).
The field’s current focus on EBT dissemination has highlighted both the need for effective
implementation strategies and the lack of data on knowledge transfer and implementation
topics (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Gotham, 2004). In general,
significant implementation difficulties and a lack of demonstrated clinical success have been
reported in transitioning treatments from university to community settings (National Institute
of Mental Health, 1998; President’s New Freedom Commission on Mental Health, 2003).
Ironically, the field lacks comprehensive empirical guidelines to support the transfer of EBTs
to community therapists. Little empirical attention has been paid to those who provide
community care and how to most effectively train them to implement psychosocial
interventions, including EBTs (Carroll, 2001; Luborsky, McLellan, Diguer, Woody, &
Seligman, 1997).
An early Division 12 Task Force on Promotion and Dissemination of Psychological Procedures outlined goals that included identifying efficacious treatments, developing guidelines for their documentation, and distributing information about effective services to
professionals, the public, and the media. The Division 12 Task Force spurred a movement
toward EBT, which has included enthusiasm, controversy and concern. Notable were concerns
related to a possible over-focus on manualized treatments and under-appreciation of common
factors and patient diversity. More recently, and perhaps in response, the American
Psychological Association’s Presidential Task Force on Evidence-based Practice (American
Psychological Association Presidential Task Force on Evidence-based Practice, 2006)
broadened the conceptualization of this topic and offered the following definition: “Evidence-
based practice in psychology (EBPP) is the integration of best available research with clinical
expertise in the context of patient characteristics, culture, and preferences." (p. 273).
1 The term "therapist" is used broadly and is meant to include professionals who provide psychological services to populations with clinically-significant mental or behavioral health difficulties. It is meant to include counselors, clinical social workers, psychologists, psychiatrists, and all other mental or behavior health clinicians.
Even more recently, Kazdin (2008) defined EBTs as interventions or techniques that have "produced
therapeutic change in controlled trials” (p. 147), and evidence-based practice (EBP) as a
broader term referring to “clinical practice that is informed by evidence about interventions,
clinical expertise, and patient needs, values, and preferences, and their integration in decision-
making about individual care” (p. 147). These definitions extend prior descriptions that
emphasize related concepts, but also incorporate alternative approaches and guidelines (Spring,
2007). The breadth of definitions reported is apparent in several recent studies and reviews
(Luongo, 2007; Schoenwald, Kelleher, & Weisz, 2008) and just what constitutes an EBT is
still a matter of debate.
A common thread to the debate is that, regardless of exactly what constitutes an EBPP, EBT,
or EBP, there is a continuing need to transfer science into practice, which will require effective
and efficient methods for transferring to therapists the skills and knowledge needed to conduct
empirically informed psychotherapies (Fixsen et al., 2005; Gotham, 2004). Similarly, all
psychotherapies are “soft technologies” (Hemmelgarn, Glisson, & James, 2006), meaning that
they are malleable and rely extensively on people (therapists), which further complicates their
implementation.
Dissemination and implementation efforts seek to integrate effective treatments into existing service systems and enhance overall outcomes for consumers of mental health
services. The National Institute of Mental Health (2002) has defined dissemination as “the
targeted distribution of information to a specific audience,” and has defined implementation
as “the use of strategies to introduce or adapt evidence-based mental health interventions within
specific settings” (PA-02-131; p. 1). These concepts have been examined and incorporated in
models designed to guide the communication of new technologies using various methods or
strategies (e.g., Berwick, 2003; Gotham, 2004; Greenhalgh, Robert, MacFarlane, Bate, &
Kyriakidou, 2004). All of these models acknowledge the importance of understanding and
enhancing the methods by which new knowledge can be conveyed and incorporated for routine
application.
Trainee characteristics include previous knowledge and skill, ability, motivation, self-efficacy, and personality. Training
design factors are the focus of this review and include the structure and format of training,
incorporation of learning principles into training, sequencing of training, and the job relevance
of training content. Work environment factors include constraints and opportunities to use the
trained skills, support from supervisors and peers, and organizational culture and climate.
Successful transfer of training to the work setting is not solely determined by any one factor.
Instead, transfer of training is a complex, multi-level process. This review will focus on training
design as one part of a larger process because the successful transfer of psychosocial
innovations, including EBTs from university or research clinics to community clinics, will
require an understanding of the effectiveness of current training methods for assisting post-
graduate professionals to implement new treatment approaches.
Commonly cited barriers to community therapists' adoption of new treatments include the lack of time, support, and opportunities for learning new skills (Essock et al., 2003;
Herschell, Kogan, Celedonia, Gavin, & Stein, 2009). Concerns have also been raised about the
relevance and utility of existing educational programs for professional psychologists (see
Sharkin & Plageman, 2003). Recent guidelines for training practitioners in the use of EBTs
emphasize the importance of using specialized techniques designed to engage, train, and
support practitioners in their use of new technologies, such as a review of treatment manuals,
exposure to intensive didactic training workshops, and participation in closely supervised
training cases (Bruns et al., 2008; Hyde, Falls, Morris, & Schoenwald, 2003). To further this objective, the goals of this review are to understand the state of the
literature for training therapists in psychotherapy and to offer recommendations to improve
research in this area.
Just as Alberts and Edelstein (1990), who reviewed studies from 1979 to 1990, picked up where Ford (1979) left off (1960 to 1978), the current review begins where Alberts and Edelstein (1990) left off and continues through February 2010.
Alberts and Edelstein’s review (1990) included studies divided into two clusters, training in
“traditional,” process-related skills (e.g., empathy, attending, open-ended questions) and
training in “complex verbal skill repertoires” (e.g., clinical assessment, case
conceptualization). Participants most often were graduate students in clinical, counseling, or
school psychology and techniques were studied within the context of the larger graduate
training program. Reviewed studies focused on training discrete skills. A combination of didactic instruction, modeling, feedback, and practice (rehearsal) was important for skill acquisition. Methodological flaws originally noted by Ford (1979) continued to be noted as problematic in the Alberts and Edelstein (1990) review, including a lack of validated dependent variables and control groups, as well as little attention devoted to interactions among therapist characteristics, target behaviors, training techniques, and instructors' credentials.
Additional concerns included use of single-method measurement strategies, lack of in vivo
skills assessments, limited external validity of skill assessments, and no follow-up evaluations
to assess maintenance.
More specific reviews will be highlighted as they apply to specific areas of this review. For
example, Miller and Binder (2002) completed a review focused on training issues related to
the use of treatment manuals, and Vandecreek and colleagues (1990) completed a review of
psychology continuing education. Stein and Lambert (1995) reviewed literature related to the
impact of graduate training in psychotherapy on therapist behavior and patient outcome. In
that same year (1995) Holloway and Neufeldt reviewed research related to clinical supervision.
Similar to Alberts and Edelstein (1990), this review is meant to provide a more comprehensive
evaluation of the broad therapist training literature. Just as treatment techniques have advanced
in the last 15 years, training strategies have expanded, which likely is due, in part, to the
increased interest in dissemination of EBT to community settings. This expansion is reflected
in the current review by the inclusion of studies that include community-based clinicians rather
than the graduate-level trainees included by Alberts and Edelstein (1990).
Methods
Search Strategy
Gumbley, & Storti, 2008). Also, training studies were found across a variety of treatment areas
(e.g., child, adult, substance abuse, mental health), which complicated the search.
Studies were included if they focused on training community-based mental health providers. Considering that the focus of this review was on training (not treatment), we included studies that implemented any psychosocial treatment, regardless of its
evidence base. Finally, we included only published empirical studies. Unpublished
dissertations, conceptual articles, and recommendation papers were excluded.
Studies were classified according to the system described by Nathan and Gorman (2002, 2007). Type 1 studies are the most rigorous. They involve a randomized, prospective clinical trial.
These studies use comparison groups, random assignment, blind assessments, clear inclusion
and exclusion criteria, state-of-the-art diagnostic methods, sufficient sample size and power,
and clearly described statistical methods. Type 2 studies are clinical trials in which an intervention is made, but some aspects of the Type 1 requirements are missing. They have some significant flaws (e.g., no blind assessment, lack of random assignment, short period of observation) but are not necessarily fatally flawed. Type 2 studies do not merit the same consideration as Type 1 studies, but often do provide useful information. Type 3 studies are clearly methodologically limited. This group includes studies that are uncontrolled or that use pre-post or retrospective designs. Type 3 studies are often aimed at providing pilot data and include case-control studies, open treatment studies, and retrospective data collection.
Type 4 studies include reviews with secondary data analysis such as meta-analysis. Type 5
studies are those that are reviews without secondary data analysis. Type 6 studies include case
studies, essays, and opinion papers (Nathan & Gorman, 2002, 2007). Only Types 1 through 3
were included in this review.
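To make this classification concrete, the following sketch (a reader's illustration, not part of the original review) encodes a simplified version of the Type 1 through 3 rules as a small Python function; the feature names are assumptions standing in for the criteria listed above.

```python
# Illustrative sketch: classifying a study into Nathan & Gorman (2002, 2007)
# Types 1-3 from a few design features. Feature names are hypothetical
# simplifications of the criteria described in the text above.
from dataclasses import dataclass

@dataclass
class Study:
    randomized: bool          # random assignment to comparison groups
    comparison_group: bool    # prospective comparison/control group
    blind_assessment: bool    # assessors blind to condition
    adequate_power: bool      # sufficient sample size and power
    prospective: bool         # prospective (not pre-post only or retrospective)

def nathan_gorman_type(s: Study) -> int:
    """Return 1, 2, or 3 per the simplified rules described in the text."""
    if (s.prospective and s.randomized and s.comparison_group
            and s.blind_assessment and s.adequate_power):
        return 1  # rigorous randomized, prospective clinical trial
    if s.prospective and s.comparison_group:
        return 2  # clinical trial with some Type 1 elements missing
    return 3      # uncontrolled, pre-post, or retrospective designs

# Example: a pre-post training study without a control group is Type 3.
print(nathan_gorman_type(Study(False, False, False, False, False)))  # -> 3
```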
Results
Summary of the Literature: Designs and Types of Investigations
Fifty-five studies evaluating training techniques or methods were identified. Methodologies of
these studies were diverse and included both quantitative and qualitative designs ranging from single-subject designs to randomized controlled trials. As is demonstrated in Tables 1
through 6, each of which includes studies of different training methods (i.e., written materials
[Table 1], self-directed training techniques [Table 2], workshops [Table 3], workshop follow-
ups [Table 4], pyramid training [Table 5], multicomponent training packages [Table 6]), only
a few of the 55 studies incorporated what would be considered rigorous methodologies. For
example, 14 (25%) studies used a group comparison, 29 (53%) used a pre/post, 5 (10%) used
a single-subject, and 5 (10%) used a survey design. Two (4%) studies used alternative designs
(e.g., time series). Nine of the 14 (64%) comparison studies used random assignment. Of all
the included studies, 19 (35%) included a follow-up, 24 (44%) utilized a multi-method
assessment strategy, and 26 (47%) included standardized measures. In addition, study sample
sizes ranged from 6 to 3,558 depending on the methodology employed (e.g., single subject versus survey research) and typically were small for group comparison studies (i.e., approximately 20 per group). According to Nathan and Gorman's (2002, 2007) classification system, only 6 (11%) studies were considered Type 1 studies; 20 (36%) were considered Type 2, and 29 (53%) were considered Type 3 studies. Similar to problems noted in previous reviews (e.g., Alberts & Edelstein, 1990; J. D. Ford, 1979), these studies suffer from several limitations including: a)
a lack of control groups, b) no measurement of training integrity, c) poor measurement methods,
d) short follow-ups, and e) lack of in-vivo assessments. Additional methodological limitations
included a lack of random assignment, standardized measurement, follow-up assessments, and
patient-level data. Consistent with our focus on training community therapists to use an EBT,
diverse treatment approaches are included that focused, for example, on substance abuse
treatment for adults, motivational interviewing, residential care for persons with mental
retardation, psychosocial and CBT interventions for patients with schizophrenia and their
families, and time-limited dynamic psychotherapy.
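As a simple consistency check on the counts reported above (an illustration added here, not an analysis from the review), the snippet below recomputes the design percentages from the raw counts out of 55 studies; small differences from the text reflect rounding.

```python
# Reader's consistency check: percentages of the 55 reviewed studies by design,
# using the counts reported in the paragraph above.
counts = {
    "group comparison": 14,
    "pre/post": 29,
    "single-subject": 5,
    "survey": 5,
    "alternative (e.g., time series)": 2,
}
total = sum(counts.values())  # 55
for design, n in counts.items():
    print(f"{design}: {n}/{total} = {100 * n / total:.0f}%")
# 14/55 ~ 25%, 29/55 ~ 53%, 5/55 ~ 9%, 2/55 ~ 4%, close to the rounded
# figures reported in the text.
```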
Investigators have evaluated a variety of different training approaches, methods, and issues.
Some have investigated the utility of specific training techniques, such as workshops and multi-component training packages (e.g., Bein, Anderson, Strupp, Henry, Schacht et al., 2000; Brooker & Butterworth, 1993; Henry, Strupp, Butler, Schacht, & Binder, 1993). Investigators also have examined whether individuals with
diverse training backgrounds can implement mental health techniques (e.g., Brooker et al.,
1994; Hawkins & Sinha, 1998) and the importance of treatment adherence and competence
(e.g., Barber, Crits-Christoph, & Luborsky, 1996; Huppert et al., 2001; Multon, Kivlighan, &
Gold, 1996).
What follows is a summary of the key details and results of studies that have been conducted
to examine six different training methods. A summary of the specific details of the studies in
each section is shown in Tables 1–6. A study was included in multiple tables if the study’s
aims addressed more than one topic area. For example, Sholomskas et al. (2005) evaluated the
effectiveness of written materials (Table 1), self-directed training (Table 2), and workshop
follow-ups as training techniques (Table 4); therefore, this study was included in each of the
three mentioned tables. The discussion section more definitively reviews the overall findings,
key limitations, and practice and research implications of this literature.
Written Materials
Description of Studies—Five studies examined the effectiveness of reading materials (e.g., a treatment manual; see Table 1). One of these studies was considered
a Type 1 study; two were considered Type 2 studies, and two were considered Type 3 studies.
Three of these studies were group comparisons in which sample sizes ranged from 74 to 174
(M = 109), of which two included random assignment. One study used a pre-post comparison
and the remaining study used a single-subject design. Four of the five studies included follow-up
assessments after training (Dimeff et al., 2009; Ducharme & Feldman, 1992; Kelly et al., 2000; Sholomskas et al., 2005). Studies examined a variety of assessment domains (e.g.,
knowledge, skills) using behavior observation (3 of 5 studies) and self-report (4 of 5 studies)
methods. Two studies included standardized assessment measures that had acceptable
psychometric ratings.
Summary of Findings—Reading may be used as a "first step" to introduce information about a psychosocial treatment, but reading alone does not result in significant changes in skills or treatment mastery (e.g., Sholomskas et al., 2005), as indicated in Table 1.
Limitations of Studies Reviewed—In the two (Type 2) group comparison studies, reading
a manual was compared with training programs that differed both in the modality and number
of hours in training, with additional hours being spent in more intensive trainings compared to
reading (Kelly et al., 2000; Sholomskas et al., 2005). This pairing makes it difficult to tease
out whether group differences were related to the modality or intensity (increased dose) of
training. These studies also are limited in that participating therapists were often taking part in
larger implementation efforts (e.g., Sholomskas et al., 2005), so it is not known if findings
would generalize to other groups of therapists in potentially less innovative settings.
Self-Directed Training
Description of Studies—Seven studies examined the effectiveness of self-directed training materials (e.g., computer, videotape review; see Table 2). Three studies were considered Type
1 studies; one was considered a Type 2 study, and three were considered Type 3 studies. Study
sample sizes ranged from 6 (single-subject design) to 3,558 (survey design). Four studies
included a group comparison (three with random assignment); one study included pre- and
post- testing; one study included only post- testing; and the final study was a single subject,
multiple baseline design. Four of the seven studies included follow-up assessments (Dimeff et
al., 2009; W. R. Miller, Yahne, Moyers, Martinez, & Pirritano, 2004; Sholomskas et al., 2005). Assessment primarily focused on knowledge and skill using behavior observation and
standardized, self-report methods.
Limitations of Studies Reviewed—While the focus of these studies was similar (i.e., self-
directed and motivated instruction), the specific training techniques differed. Dimeff and
colleagues (2009) reported on an interactive online training experience for Dialectical Behavior
Therapy. Miller and colleagues (2004) focused on the use of reading a treatment manual
supplemented with videotape review. The National Crime Victims Research and Treatment
Center (2007) focused on the utility of a web-based training program for Trauma-Focused
Cognitive Behavioral Therapy. Sholomskas et al (2005) focused on the utility of a web-based
training program for Cognitive Behavioral Therapy, and Suda and Miltenberger (1993) studied
the impact of instruction, goal setting, self-monitoring, self-evaluation and self-praise on
positive staff interactions with consumers. Worrall and Fruzzetti (2009) focused on participants
viewing and rating mock Dialectical Behavior Therapy sessions. Because there are not enough studies on any one of these methods (e.g., web-based training, manual and videotape review), few conclusions can be drawn about any single method.
Representativeness of studies also is questionable in that one study in particular (W. R. Miller
et al., 2004) intended to determine if training could change therapist behavior under optimal
conditions; thus, therapists were well motivated and perhaps less representative of a larger group
of therapists. Only one of these studies included patients (Suda & Miltenberger, 1993), and
only four included follow-ups (Dimeff et al., 2009; W. R. Miller et al., 2004; Moyers et al.,
2008; Sholomskas et al., 2005). The length of time between post-assessment and follow-ups
was short (1, 3, 4, 8, or 12 months). The inclusion of client outcome as well as longer follow-
ups would strengthen the methodology of these studies and any conclusions that could be drawn
from them.
Workshops
Description of Studies—Nineteen studies were reviewed that included an examination of
the effectiveness of workshops as a sole training technique (see Table 3). Four studies were
considered Type 1 studies, two studies were considered Type 2 studies, and the remaining 13
studies were Type 3. Study sample size ranged from 12 to 3,315 (median = 50). Eleven studies
included pre- and post-workshop testing. Another study included a post-test only with
retrospective reporting of a pre-assessment. Each of the six group comparison studies randomly
assigned participants to different training methods or to a control group. The final study relied
on a clinician survey at the end of training as well as review of patient charts. Eight of the
nineteen studies (42%) included a follow-up assessment, varying from 1 month up to 5 years
after training. Studies examined a variety of assessment domains (e.g., attitudes, knowledge,
organizational readiness, practice, satisfaction) using predominantly self-report methods, many
of which were not standardized (n = 8). Seven studies supplemented self-report with behavior
observation measures; one study supplemented self-report with chart review. Interestingly,
some of the studies that used behavior observation methods used simulated clients as part of
their assessment strategy (Baer, Rosengren, Dunn, Wells, & Ogle, 2004; Baer et al., 2009; DeViva, 2006; Dimeff et al., 2009; Freeman & Morris, 1999; W. R. Miller & Mount, 2001b; W. R. Miller et al., 2004). Each study had participants interact with a simulated client
(typically an actor); those interactions were audio- or video-recorded and later coded. This
assessment strategy offers a practical, yet methodologically rigorous, method for behavior
observation assessment of knowledge and skill acquisition as well as treatment
adherence (Russell, Silver, Rogers, & Darnell, 2007). Another study included work samples
(Moyers et al., 2008).
Length of workshop training varied considerably, and there may be a relation between training
time and training outcome. For example, Neff et al. (1999) found that one to three hour trainings
found no differences in 3 and 6 hour trainings on the same topic. Some have spent as many as
10 to 15 hours in workshop training with no change in behavior (Byington et al., 1997; Jensen-
Doss et al., 2007). Recent training studies (e.g., W. R. Miller et al., 2004; Sholomskas et al.,
2005) have shown that increases in skill and knowledge of motivational interviewing
techniques may be present immediately following the workshop, but that without ongoing
support (e.g., individualized feedback, continued consultation), gains can be reversed (Baer et
al., 2009; W. R. Miller et al., 2004).
Ducharme and Feldman (Type 2, 1992) found that a “general case” training strategy, in which
multiple examples are chosen and reviewed that represent nearly all possible client responses,
produced criterion levels of generalization even without the use of other strategies. Timing
also is important, in that Hawkins and Sinha (Type 3, 1998) found that expert consultation became more useful once therapists had acquired a reasonable amount of knowledge. Parsons and Reid
(Type 2, 1995) found that supervisor feedback enhanced maintenance of staff members’
teaching skills. Milne, Westerman, and Hanner (Type 2, 2002) examined the utility of a relapse
prevention module and found that in comparison to a group receiving standard training, the
relapse prevention group demonstrated greater knowledge and generalization of training across
behaviors and clients.
Pyramid Training
Description of Studies—Pyramid training also has been referred to as the "pyramid model" (e.g., Demchak & Browder, 1990), "train the trainer," and "cascading" models (e.g., S. E. Anderson & Youngson, 1990). Three studies focused on this method, as shown in Table 5, all of which were considered Type 3 studies; two used single-subject, multiple-baseline designs and one used a pre- and post-comparison of training knowledge (n = 40) without random assignment. None of these studies
included follow-up assessments. The pre- and post-training study utilized non-standardized
self-report measures to assess attitudes and knowledge (S. E. Anderson & Youngson, 1990).
In contrast, the single-subject design studies used behavior observation methods to assess skills
(Demchak & Browder, 1990; Shore, Iwata, Vollmer, Lerman, & Zarcone, 1995).
S. E. Anderson and Youngson (1990) reported increases in staff knowledge about sexual abuse after implementing a cascade training. Studies
with large sample sizes and scientifically rigorous designs are needed to determine if the
benefits of training only supervisors results in effective training for a broader group of
practitioners and their clients.
There was some indication, however, that the training effect was "watered down" from supervisor to staff. Larger, more representative, and methodologically rigorous replications that include follow-up assessments are needed to confirm
study results and the utility of this training method.
Multi-Component Training Packages
It is difficult to summarize findings across the 20 studies of multi-component training packages given that each study included different training protocols (with different components and timelines) and assessed different constructs. For example, Morgenstern et al. (Type 1,
2001) found that counselors responded well to the CBT content and manualized format of the
training and that adequate skill levels were reached. Henry, Strupp, et al., (1993) found that
their year-long training program successfully changed therapists’ technical skills in Time-
Limited Dynamic Psychotherapy: increases were observed in emphasis on the expression of
in-session affect, exploration of the therapeutic relationship, improved participant-observer
stance, and open-ended questions. Similarly, Lochman and colleagues (2009) found that their
more intensive training condition resulted in substantial treatment benefits for children treated
by trained school counselors in comparison to less intensely trained counselors and a
comparison condition. The two studies that showed little to no gains were Type 3 studies
(Bein, Anderson, Strupp, Henry, Schacht et al., 2000; Brooker & Butterworth, 1993). In a year-
long training with multiple components (reading, 100 hours of seminars, small group
supervision, audio- and video-tape review) for Time-Limited Dynamic Psychotherapy, the majority of therapists did not achieve basic competence in the model (56% did not conduct a case with at least minimal skill; Bein, Anderson, Strupp, Henry, Schacht et al., 2000). In
another study, community psychiatric nurses completing a six-month training evidenced little
change in their attitudes about schizophrenia and their preference for behavior therapy over
time (Brooker & Butterworth, 1993).
In some studies, applicants submitted samples of their work, which were rated for quality. Of those who applied for specific treatment
trainings, as few as 50% were accepted.
Discussion
This empirical review of 55 studies evaluating six therapist training methods has found that
there are differences in the number of studies for specific training methods and their respective
effectiveness. Multiple studies have been conducted on multi-component training packages
(20), workshops (19), and workshop follow-ups (9). Fewer studies have been completed on
the utility of pyramid (train-the-trainer) models (3), reading (5), and self-directed trainings (7).
Not only have multi-component training packages been studied most often, they also have
most consistently demonstrated positive training outcomes relative to other training methods.
Conversely, studies evaluating the utility of reading, self-directed trainings, and workshops
have documented that these methods do not routinely produce positive outcomes. Workshop
follow-ups help to sustain outcomes. Little is known about the impact of pyramid or train-the-
trainer methods.
The literature is limited by a lack of methodological rigor and multiple “pilot” studies
characterized by small sample sizes, limited power, and the absence of comparison groups, random assignment, standardized assessment measures, and routine follow-up assessments. The
inclusion of therapists who may not be representative of those providing services in community
agencies also has compromised conclusions that can be drawn within this area of investigation.
Few follow-ups have been conducted, and those that have been conducted are generally of
short duration. Patient outcomes also are rarely included in studies. Therefore, we are unable
to understand treatment sustainability or the impact of training on patient outcomes. Despite
significant methodological flaws, what follows is a brief summary of some key lessons learned
from this research, including: a) the level of effectiveness for a variety of training methods, b)
factors that appear to influence training outcome, c) methodological concerns, and d)
recommendations for therapist training and training research.
This review confirms what others (e.g., Davis et al., 1999; VandeCreek et al., 1990; Walters, Matson,
Baer, & Ziedonis, 2005) have found: while workshop participants sometimes demonstrate
increases in skills and (more often) knowledge, workshops are not sufficient for enabling
therapists to master skills (Sholomskas et al., 2005), maintain skills over time (Baer et al.,
2004; W. R. Miller et al., 2004), or impact patient outcome (W. R. Miller & Mount, 2001a).
Dimeff et al. (2009), a Type 1 study, demonstrated gains at post and 90-day follow-up using a sophisticated online learning method. In contrast,
Sholomskas et al. (2005), a Type 2 study, found that web-based training was only slightly more
effective than reading a treatment manual. There simply is not yet enough evidence to draw a
conclusion about the utility of this training technique. Additional information on the interactive
nature of the online method and other technologies (e.g., podcasts, archived webinars) will be
important to gather given their potential broad application.
Workshop follow-ups that included observation, feedback, consultation, and/or coaching have
improved adoption of the innovation (Type 2; Kelly et al., 2000), retention of proficiency (Type
1; W. R. Miller et al., 2004), and client outcome (Type 2; Parsons, Reid, & Green, 1993),
compared to workshops alone. Essentially, there does not seem to be a substitute for expert
consultation, supervision, and feedback for improving skills and increasing adoption. The
challenge is that these methods are resource intensive as they require the availability of expert
consultation, clinical supervisors, and therapist time, all of which are costly for community-
based mental health agencies. The implementation field needs to determine: a) how to sequence
learning activities to be cost-effective without compromising training and treatment outcome,
and b) how to use technology more effectively. Participants report liking web-based training
(e.g., National Crime Victims Research & Treatment Center, 2007); perhaps we can capitalize
on technology to increase the availability of expert consultation. Additionally, utilizing cost-
effective training methods initially might reduce the amount of expert consultation and
supervision needed later. Hawkins and Sinha (1998) found that consultations appeared to be
more effective for therapists with a reasonable amount of pre-training knowledge, but this result
is tentative given the methodological flaws of this Type 3 study. If results were replicated,
one strategy might be for therapists to complete a web-based training prior to attending a
workshop. Once competency levels of knowledge and skill were met, the therapist could proceed
to participate in conference calls with an expert trainer and other therapists from different
agencies as a form of group supervision. Afterward, the therapist could receive individual
supervision and expert consultation on selected cases. This type of training approach might
minimize costs and maximize the potential for skill acquisition by sequencing training
activities, imposing competency standards, and utilizing internet technology.
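Purely as an illustration of this sequencing idea, the sketch below frames the pathway as a competency-gated workflow; the stage names, their ordering details, and the 0.80 threshold are hypothetical assumptions rather than elements of any reviewed training protocol.

```python
# Hypothetical illustration of the competency-gated training sequence described
# above: web-based training -> workshop -> group consultation calls -> individual
# expert supervision. Stage names and the 0.80 threshold are assumptions.
STAGES = [
    "web_based_training",
    "workshop",
    "group_consultation_calls",
    "individual_supervision",
]
COMPETENCY_THRESHOLD = 0.80  # assumed pass criterion for knowledge/skill checks

def next_stage(current_stage: str, competency_score: float) -> str:
    """Advance to the next training activity only after the competency criterion
    for the current activity is met; otherwise repeat the current activity."""
    if competency_score < COMPETENCY_THRESHOLD:
        return current_stage  # remediate before moving on
    i = STAGES.index(current_stage)
    return STAGES[min(i + 1, len(STAGES) - 1)]

print(next_stage("web_based_training", 0.65))  # -> web_based_training (repeat)
print(next_stage("workshop", 0.90))            # -> group_consultation_calls
```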
Pyramid or train-the-trainer training methods also have the potential to be time- and cost-
effective; however, this method has received the least amount of rigorous examination, limited
to only three studies (S. E. Anderson & Youngson, 1990; Demchak & Browder, 1990; Shore
et al., 1995). The ultimate question that remains is whether, even if effects are watered down from supervisors to therapists, the improvements for consumers are still clinically meaningful.
Chamberlain and colleagues currently are conducting a large-scale, prospective study to
examine the effects of using a cascading model to transfer components of Multidimensional
Treatment Foster Care (NIMH Grant # 060195) from a research setting (The Oregon Social
Learning Center) to the foster care system in San Diego. Initially, the original developers of
the model will train and supervise staff in San Diego to implement the model. In the second
training iteration, the developers will have substantially less involvement. Similarly, Chaffin
and colleagues are examining the utility of a Cascading model for implementing in-home
family preservation/family reunification services (NIMH Grant #001334). Providers from a well-trained model ("seed") program will serve as implementation agents for sequential implementations at other agencies. Studies like these will contribute to a better understanding of the
utility of cascading models as a training technique.
Bickman's (1996) observation that "more is not always better" resonates in studies examining the effectiveness of multi-component training packages as a training method. Of the twenty studies in this area, the large majority found
positive training outcomes. However, two studies (Bein, Anderson, Strupp, Henry, Schacht et al., 2000; Brooker & Butterworth, 1993) found that therapists did not achieve even basic
competence in the treatment approach after extended (e.g., year-long) training initiatives. One
study (Crits-Christoph et al., 1998) found that only one of three therapies (CBT) demonstrated
learning that carried over from training case to training case. This is somewhat disappointing
given the substantial resources invested; however, it highlights the need to understand the utility
of specific components of these training packages and the ease of training specific approaches.
Additional information is also needed on the treatment approaches in which therapists should be trained.
Chorpita and Weisz (e.g., Chorpita, Becker, & Daleiden, 2007) have focused on comparing
the benefits of training therapists in a modular based treatment versus individual EBTs, which
will help to inform this area. As these authors have suggested, training therapists in one conceptual approach, rather than in multiple individual EBTs, may have broader applicability and be better received by therapists.
Therapist Characteristics
Therapist characteristics are often mentioned as key factors in treatment implementation and
dissemination. After all, the characteristics of those who receive the training and provide the
treatment could affect implementation on multiple levels such as treatment competence
(Siqueland et al., 2000) and client outcomes (Vocisano et al., 2004). Most EBTs have been
developed by and for doctoral-level clinical professionals (e.g., clinical psychologists,
psychiatrists) within defined theoretical orientations (e.g., behavioral, cognitive-behavioral).
In contrast, community mental health centers employ primarily masters-level therapists to
provide most of the mental health therapy (Garland, Kruse, & Aarons, 2003; Weisz, Chu, &
Polo, 2004). Therapists report their theoretical orientation to be “eclectic” (e.g. Addis &
Krasnow, 2000; Kazdin, 2002; Weersing, Weisz, & Donenberg, 2002) and that they value the
quality of the therapeutic alliance over the use of specific techniques (Shirk & Saiz, 1992).
Small sample sizes and lack of random assignment hinder our ability to determine the degree
to which therapist characteristics are important and which characteristics in particular need to
be addressed by trainers. Therapists are a diverse group with different learning histories,
training backgrounds, and preferences. Understanding more about how to tailor training to
maximize learning outcomes for diverse groups will be an important academic endeavor.
Studies that randomly assign therapists to different training conditions could control for characteristics that are common to research studies, such as high motivation and interest in the treatment approach, while examining factors that could be addressed, such as knowledge, caseload size, and supervisor support, each of which has been suggested to impact training
results. Examining therapist characteristics seems to be a missed opportunity within the
existing research. Much more could be learned if researchers conducted studies of therapist characteristics or, at a minimum, included moderator analyses in their existing implementation studies.
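As one illustration of how a moderator analysis could be added to an existing implementation study, the sketch below tests a training condition by baseline knowledge interaction with statsmodels; the variable names and the simulated data are entirely hypothetical and are not drawn from any study reviewed here.

```python
# Hypothetical moderator-analysis sketch: does baseline knowledge moderate the
# effect of training condition on post-training skill? Tested as a
# condition x knowledge interaction term in an ordinary least squares model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "condition": rng.integers(0, 2, n),         # 0 = workshop only, 1 = workshop + follow-up
    "baseline_knowledge": rng.normal(0, 1, n),  # standardized pre-training knowledge
})
# Simulated outcome with a condition x knowledge interaction built in.
df["post_skill"] = (0.4 * df["condition"]
                    + 0.3 * df["baseline_knowledge"]
                    + 0.5 * df["condition"] * df["baseline_knowledge"]
                    + rng.normal(0, 1, n))

model = smf.ols("post_skill ~ condition * baseline_knowledge", data=df).fit()
print(model.summary().tables[1])  # the interaction row is the moderator test
```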
of a study to potentially account for findings (e.g., post study interviews; W. R. Miller et al.,
2004; Schoener et al., 2006). Also missing in this literature are multiple studies on how
organizational interventions (Glisson & Schoenwald, 2005) could be used to enhance
implementation successes. This may be an emerging area of study (e.g., Glisson et al., 2008;
Gotham, 2006; Schoenwald, Chapman et al., 2008).
Glisson and colleagues (Glisson, Dukes, & Green, 2006) developed the Availability,
Responsiveness, and Continuity (ARC) organizational intervention strategy to improve
services in child welfare and juvenile justice systems, which is now being used to support the
implementation of Multisystemic Therapy in rural Appalachia (Glisson & Schoenwald,
2005). Similarly, the Addiction Technology Transfer Center of New England has implemented
an organizational change strategy, Science to Service Laboratory, in 54 community-based
substance abuse treatment agencies in New England (Squires et al., 2008) since 2003.
Few studies have examined the role of supervision in training (e.g., Henggeler, Melton, Brondino, Scherer, & Hanley, 1997; Henggeler, Schoenwald, Liao, Letourneau, & Edwards,
2002). These existing studies indicate that: a) training supervisors has been shown to facilitate
improvements in staff performance, b) supervision increases therapist knowledge and
proficiency with complex therapeutic procedures, c) supervisor expertise is positively
correlated with therapist treatment adherence, d) supervisor rigidity (over focus on the analytic
process and treatment principles) is associated with low therapist adherence, e) supervisor
feedback appeared to enhance the maintenance of staff members’ skills, and f) supervisors
benefit from receiving specific instruction on how to supervise others in addition to instruction
on treatment content. Even fewer studies have examined the relation of therapist performance
and client outcome to clinical supervision (Holloway & Neufeldt, 1995). A better
understanding of how supervisors should be trained and included in the implementation process
is needed.
Methodological Concerns
Lack of Theory to Drive Implementation Research—This emerging area of research
appears to be suffering from a lack of theory-driven studies. Researchers (Glisson &
Schoenwald, 2005; Gotham, 2004) have highlighted the value in understanding the complex
environment of which these training efforts are a part. Despite these recommendations, there
remains a lack of systematic investigations tied together by a strong theoretical framework.
Perhaps there is value in looking to other disciplines with similar missions to understand
potentially relevant theoretical frameworks. For example, the medical field has tried to
implement evidence-based practices. The field of behavioral health may benefit from incorporating organizational theories from this work, such as complexity science and complex adaptive systems (R. A. Anderson, Crabtree, Steele, & McDaniel, 2005; Scott et al., 2005).
The generalizability of findings across treatment approaches also is questionable. For example, it is unclear if results from training studies focused on implementing
family therapy for schizophrenia might be applicable to training studies focused on
implementing motivational interviewing. Perhaps the method and dose of training necessary
for adequate skill acquisition (competence in a treatment) is specific to each treatment. More
intensive treatment approaches and/or those that require skills significantly different from a therapist's current skill set may require more intensive training methods or doses than less intensive treatment approaches and/or those that are similar to therapists' existing skill sets.
Knowledge and skill do not increase at the same rate, nor do they always positively correlate. Freeman and Morris (1999)
found statistically significant gains on a knowledge measure, but not on a clinical vignette requiring the application of knowledge. Similarly, Byington et
al. (1997) found that knowledge improvements were evident, but improvements were not
evident on applying concepts. Reporting only knowledge outcomes can lead to an overly optimistic or skewed (Baer et al., 2009) view of training outcome.
Lack of Rigor in Study Design and Scope—As mentioned previously, the multiple
methodological flaws limit the conclusions that can be drawn from these studies. There also
is significant heterogeneity among therapists, training methods, training protocols,
interventions trained, and constructs assessed. All of this variability combined with a lack of
methodological rigor in completed studies significantly complicates this area of inquiry. While
this review sought to organize the literature in a meaningful way by using an established
classification system (Nathan & Gorman, 2002, 2007), the categorization of studies should
not be treated as sacrosanct. Nathan and Gorman’s classification system is not the only system
available for classifying research methodologies (e.g., Bliss, Skinner, Hautau, & Carroli,
2008); however, it is the most comprehensive and widely disseminated system with regard to
rank ordering research methods by degree of scientific rigor. For example,
Bliss and colleagues (2008) describe different research methodologies, but do not rank order
them.
Research Directions
We are just beginning to understand how to train community therapists in psychosocial
treatment skills. Thus far, some methods appear to be more effective in changing knowledge and
skill (e.g., multi-component training packages, feedback, consultation, supervision) than others
(e.g., reading a treatment manual, attending workshops). The former methods are notable for
their individualized approach, although it should also be noted that these methods have other
requirements or limitations (e.g., time, cost, intensity). Few studies have directly compared
different methods, which may be one of the main directions for further work. One key question is which method most efficiently achieves initial therapist skill acquisition.
Perhaps an even more important question is whether it is necessary to administer ongoing
training and consultation (feedback) in order to achieve therapist adoption. An ongoing study
by Chaffin and colleagues (NIMH Grant #065667) is evaluating the role of ongoing fidelity
monitoring on the implementation of an EBT at the state level. Results may help to determine
whether this component is essential in maintaining good adherence to a treatment model and
ultimately improved client outcome. Similar research might also examine the benefits of
different training activities, such as supervisor training or use of live coaching/consultation.
Complex, but important questions originally proposed in the review by Ford (1979) continue
to remain unanswered, including: a) What is the minimal therapist skill proficiency level that
could serve as a valid cutting point for predicting success or failure in training?, b) Are there
certain complex interpersonal skills that underlie treatment approaches that should be
considered prerequisites for training?, and c) Is there a way to match trainees with a training
method to produce better training outcomes? However, even simpler questions remain such
as: d) What educational level (e.g., M.A./M.S., M.S.W., Ph.D.) is necessary to be able to benefit
from training?, e) What is the impact of therapist training on client outcomes?, f) How well do
trained skills generalize from training cases to ‘real-world’ clients?, g) Is the impact of training
transient or long-term?, and h) What program/agency or organizational mechanisms/structures/
resources are needed to maximize the likelihood of successful therapist acquisition and
adoption of a psychosocial treatment? To address some of these unanswered questions, Kolko
and colleagues are currently completing a randomized effectiveness trial (NIMH Grant #
074737) to understand the potential benefits of training therapists who are diverse in
background (BA vs. MA/MS/MSW) and service setting (in-home, family-based, outpatient)
in one EBT for child physical abuse, Alternatives for Families: A Cognitive Behavioral
Therapy. This same study will provide information on therapist knowledge, skills, attitudes,
real-world practices, and the impact of these factors on family outcomes. It also will provide
information on supervisor and organizational characteristics that impact implementation over
time. Perhaps these efforts as well as some of those included in this review are reflective of a
shift toward applying increasingly rigorous methods to the study of psychosocial treatment implementation. Notably, 5 of the 6 studies in this review that were rated as Type 1 were published in 2004 or later (Baer et al., 2009; Dimeff et al., 2009; Lochman et al., 2009; W. R. Miller et al., 2004; Moyers et al., 2008).
In summary, surprisingly little research has been conducted to evaluate methods for training
therapists in implementing a broad array of psychotherapy techniques. Clearly, there is a need
to develop and test innovative and practical training methods. Research on training methods should move beyond examinations of workshop training toward developing and testing training models that are innovative, practical, and resource-efficient. Large-scale, methodologically rigorous trials that include representative clinicians, patients, and follow-up assessments are
necessary to provide sufficient evidence of effective training methods and materials. Without
such trials, the field will continue to try to disseminate evidence-based treatments without
evidence-based training strategies.
Ultimately, the current national focus on dissemination requires researchers to examine two
issues together: 1) how well community therapists can be trained to effectively retain and implement new psychotherapy skills and knowledge, and 2) whether the application of these new skills and knowledge increases positive outcomes for clients when delivered in community settings. Attention to the integration of these complementary objectives will hopefully promote
advances in training technologies that can play a significant role in advancing the
mental health competencies of community therapists and enhancing the quality of care
delivered in everyday practice settings. Ultimately, just as "Evidence-based medicine should be complemented by evidence-based implementation" (Grol, 1997), so too should evidence-based psychosocial treatments be complemented by evidence-based implementation.
References
Addis ME, Krasnow AD. A national survey of practicing psychologists’ attitudes toward psychotherapy
treatment manuals. Journal of Consulting & Clinical Psychology 2000;68(2):331–339. [PubMed:
10780134]
Alberts G, Edelstein B. Therapist training: A critical review of skill training studies. Clinical Psychology
Review 1990;10:497–511.
Bein E, Anderson T, Strupp HH, Henry WP, Schacht TE, Binder JL, et al. The effects of training in time-
limited dynamic psychotherapy: Changes in therapeutic outcome. Psychotherapy Research
2000;10:119–132.
Berwick DM. Disseminating innovations in health care. Journal of the American Medical Association
2003;289:1969–1975. [PubMed: 12697800]
Bickman L. A continuum of care: More is not always better. American Psychologist 1996;51(7):689–
701. [PubMed: 8694389]
Bliss SL, Skinner CH, Hautau B, Carroli EE. Articles published in four school psychology journals from
2000 to 2005: An analysis of experimental/intervention research. Psychology in the Schools 2008;45
(6):483–498.
Brooker C, Butterworth T. Training in psychosocial intervention: The impact on the role of community
psychiatric nurses. Journal of Advanced Nursing 1993;18:583–590. [PubMed: 8496506]
Brooker C, Falloon I, Butterworth A, Goldberg D, Graham-Hole V, Hillier V. The outcome of training
Chagnon F, Houle J, Marcoux I, Renaud J. Control-Group study of an intervention training program for
youth suicide prevention. Suicide and Life-Threatening Behavior 2007;37(2):135–144. [PubMed:
17521267]
Chambless DL, Baker MJ, Baucom DH, Beutler LE, Calhoun KS, Crits-Christoph P, et al. Update on
empirically validated therapies II. The Clinical Psychologist 1998;51:3–15.
Chambless DL, Sanderson WC, Shoham V, Johnson SB, Pope KS, Crits-Christoph P, et al. An update
on empirically validated therapies. The Clinical Psychologist 1996;49:5–18.
Chorpita BF, Becker KD, Daleiden EL. Understanding the common elements of evidence-based practice:
Misconceptions and clinical examples. Journal of the American Academy of Child and Adolescent
Psychiatry 2007;46:647–652. [PubMed: 17450056]
Conner-Smith JK, Weisz JR. Applying treatment outcome research in clinical practice: Techniques for
adapting interventions to the real world. Child and Adolescent Mental Health 2003;8:3–10.
Crits-Christoph P, Siqueland L, Chittams J, Barber JP, Beck AT, Frank A, et al. Training in cognitive,
supportive-expressive, and drug counseling therapies for cocaine dependence. Journal of Consulting
and Clinical Psychology 1998;66:484–492. [PubMed: 9642886]
Davis DA, Thomson MA, Freemantle N, Wolf FM, Mazmanian P, Taylor-Vaisey A. The impact of formal
continuing medical education: Do conferences, workshops, rounds, and other traditional continuing
education activities change physician behavior or health care outcomes? Journal of the American
Medical Association 1999;282:867–874. [PubMed: 10478694]
Davis DA, Thomson MA, Oxman AD, Haynes RB. Evidence for the effectiveness of CME: A review of
randomized controlled trials. Journal of the American Medical Association 1992;268:1111–1117.
[PubMed: 1501333]
Demchak M, Browder DM. An evaluation of the pyramid model of staff training in group homes for
adults with severe handicaps. Education & Training in Mental Retardation 1990;25(2):150–163.
DeViva J. The effects of full-day and half-day workshops for health care providers in techniques for
increasing resistant clients’ motivation. Professional Psychology - Research & Practice 2006;37:83–
90.
Dimeff LA, Koerner K, Woodcock EA, Beadnell B, Brown MZ, Skutch JM, et al. Which training method
works best? A randomized controlled trial comparing three methods of training clinicians in
dialectical behavior therapy skills. Behaviour Research and Therapy 2009;47:921–930.
Ducharme J, Feldman N. Comparison of staff training strategies to promote generalization. Journal of
Applied Behavior Analysis 1992;25:165–179. [PubMed: 1582964]
Ellis MV, Ladany N, Krengel M, Schult D. Clinical supervision research from 1981 to 1993: A
methodological critique. Journal of Counseling Psychology 1996;43:35–50.
Essock SM, Goldman HH, Van Tosh L, Anthony WA, Appell CR, Bond GR, et al. Evidence-based
practices: Setting the context and responding to concerns. Psychiatric Clinics of North America
2003;26:919–938. [PubMed: 14711128]
Eyberg SM, Nelson MM, Boggs SR. Evidence-based psychosocial treatments for children and
adolescents with disruptive behavior. Journal of Clinical Child and Adolescent Psychology 2008;37
(1):215–238. [PubMed: 18444059]
Fadden G. Implementation of family interventions in routine clinical practice following staff training
programs: A major cause for concern. Journal of Mental Health 1997;6:599–612.
Fixsen, DL.; Naoom, SF.; Blase, KA.; Friedman, RM.; Wallace, F. Implementation research: A synthesis
of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health
Institute, The National Implementation Research Network; 2005.
Ford JD. Research on training counselors and clinicians. Review of Educational Research 1979;49:87–
130.
Ford JK, Weissbein DA. Transfer of training: An updated review and analysis. Performance Improvement Quarterly
1997;10:22–41.
Freeman KA, Morris TL. Investigative interviewing with children: Evaluation of the effectiveness of a
training program for child protective service workers. Child Abuse & Neglect 1999;23(7):701–713.
[PubMed: 10442836]
Garland AF, Kruse M, Aarons GA. Clinicians and outcome measurement: What’s the use? Journal of
Gregorie TK. Assessing the benefits and increasing the utility of addiction training for public child welfare
workers: A pilot study. Child Welfare 1994;73:68–81.
Grol R. Personal paper: Beliefs and evidence in changing clinical practice. BMJ 1997;315:418–421.
[PubMed: 9277610]
Hawkins KA, Sinha R. Can front-line clinicians master the conceptual complexities of dialectical
behavior therapy? An evaluation of a State Department on Mental Health training program. Journal
of Psychiatric Research 1998;32:379–384. [PubMed: 9844954]
Hemmelgarn AL, Glisson C, James LR. Organizational culture and climate: Implications for services
and interventions research. Clinical Psychology: Science & Practice 2006;13(1):73–89.
Henggeler SW, Melton GB, Brondino MJ, Scherer DG, Hanley JH. Multisystemic therapy with violent
and chronic juvenile offenders and their families: The role of treatment fidelity in successful
dissemination. Journal of Consulting & Clinical Psychology 1997;65(5):821–833. [PubMed:
9337501]
Henggeler SW, Schoenwald SK, Liao JG, Letourneau EJ, Edwards DL. Transporting efficacious
treatments to field settings: The link between supervisory practices and therapist fidelity in MST
Miller SJ, Binder JL. The effects of manual-based training on treatment fidelity and outcome: A review
of the literature on adult individual psychotherapy. Psychotherapy: Theory, Research, Practice,
Training 2002;39(2):184–198.
Miller WR, Mount KA. A small study of training in motivational interviewing: Does one workshop
change clinician and client behavior? Behavioural & Cognitive Psychotherapy 2001a;29:457–471.
Miller WR, Mount KA. A small study of training in motivational interviewing: Does one workshop
change clinician and client behavior? Behavioural & Cognitive Psychotherapy 2001b;29:457–471.
Miller WR, Yahne CE, Moyers TB, Martinez J, Pirritano M. A randomized trial of methods to help
clinicians learn motivational interviewing. Journal of Consulting and Clinical Psychology 2004;72(6):
1050–1062. [PubMed: 15612851]
Milne D, Westerman C, Hanner S. Can a “Relapse Prevention” module facilitate the transfer of training?
Behavioural and Cognitive Psychotherapy 2002;30(3):361–364.
Morgenstern J, Morgan TJ, McCrady BS, Keller DS, Carroll KM. Manual-guided cognitive-behavioral
therapy training: A promising method for disseminating empirically supported substance abuse
treatments to the practice community. Psychology of Addictive Behaviors 2001;15(2):83–88.
[PubMed: 11419234]
Moyers TB, Manuel JK, Wilson PG, Hendrickson SML, Talcott W, Durand P. A randomized trial
investigating training in motivational interviewing for behavioral health providers. Behavioural and
Cognitive Psychotherapy 2008;36:149–162.
Multon KD, Kivlighan DM Jr, Gold PB. Changes in counselor adherence over the course of training.
Journal of Counseling Psychology 1996;43:356–363.
Nathan, PE.; Gorman, JM. A guide to treatments that work. 2. New York: Oxford University Press; 2002.
Nathan, PE.; Gorman, JM. A guide to treatments that work. 3. New York: Oxford University Press; 2007.
National Advisory Mental Health Council. Blueprint for change: Research on child and adolescent mental
health: A report by the National Advisory Mental Health Council’s Workgroup on Child and
Adolescent Mental Health Intervention Development and Deployment. Bethesda, MD: National
Institutes of Health/National Institute of Mental Health; 2001.
National Crime Victims Research & Treatment Center. TF-CBT Web: First Year Report. Charleston,
SC: Medical University of South Carolina; 2007.
National Institute of Mental Health. Bridging science and service: A report by the National Advisory
Mental Health Council’s Clinical Treatment and Services Research Workgroup. 1998.
National Institute of Mental Health. Dissemination and implementation research in mental health.
Washington, DC: 2002.
Neff JA, Amodei N, Martinez C Jr, Ingmundson P. HIV/AIDS mental health training for health care
providers: An evaluation of three models. American Journal of Orthopsychiatry 1999;69(2):240–
246. [PubMed: 10234389]
Oordt MS, Jobes DA, Fonseca VP, Schmidt SM. Training mental health professionals to assess and
manage suicidal behavior: Can provider confidence and practice behaviors be altered? Suicide and
Life-Threatening Behavior 2009;39(1):21–32. [PubMed: 19298147]
Parsons MB, Reid DH. Training residential supervisors to provide feedback for maintaining staff teaching
skills with people who have severe disabilities. Journal of Applied Behavior Analysis 1995;28(3):
317–322. [PubMed: 7592147]
Parsons MB, Reid DH, Green CW. Preparing direct service staff to teach people with severe disabilities:
A comprehensive evaluation of an effective and acceptable training program. Behavioral Residential
Treatment 1993;8(3):163–185.
President’s New Freedom Commission on Mental Health. Achieving the promise: Transforming mental
health care in America. 2003.
Reed, GM.; Eisman, E. Uses and misuses of evidence: managed care, treatment guidelines, and outcome
measurement in professional practice. In: Goodheart, CD.; Kazdin, AE.; Sternberg, RJ., editors.
Evidence-based psychotherapy: Where practice and research meet. Washington, DC: American
Psychological Association; 2006.
Ringel JS, Sturm R. National estimates of mental health utilization and expenditures for children in 1998.
Journal of Behavioral Health Services & Research 2001;28:319–333. [PubMed: 11497026]
Rubel EC, Sobell LC, Miller WR. Do continuing education workshops improve participants’ skills?
Effects of a motivational interviewing workshop on substance-abuse counselors’ skills and
knowledge. Behavior Therapist 2000;23(4):73–77.
Russell MC, Silver SM, Rogers S, Darnell JN. Responding to an identified need: A joint Department of
Defense/Department of Veterans Affairs training program in eye movement desensitization and
reprocessing (EMDR) for clinicians providing trauma services. International Journal of Stress
Management 2007;14(1):61–71.
Saitz R, Sullivan LM, Samet JH. Training community-based clinicians in screening and brief intervention
for substance abuse problems: Translating evidence into practice. Substance Abuse 2000;21(1):21–
31. [PubMed: 12466645]
Schoener EP, Madeja CL, Henderson MJ, Ondersma SJ, Janisse J. Effects of motivational interviewing
training on mental health therapist behavior. Drug and Alcohol Dependence 2006;82:269–275.
[PubMed: 16289396]
Schoenwald SK, Chapman JE, Kelleher K, Hoagwood KE, Landsverk J, Stevens J, et al. A survey of the
infrastructure for children’s mental health services: Implications for the implementation of
empirically supported treatments. Administration & Policy in Mental Health 2008;35:84–97.
[PubMed: 18000750]
Schoenwald SK, Kelleher K, Weisz JR. Building bridges to evidence-based practice: The MacArthur
Foundation Child System and Treatment Enhancement Projects (Child STEPS). Administration
and Policy in Mental Health and Mental Health Services Research 2008;35:66–72. [PubMed:
18085433]
Scogin F, Welsh D, Hanson A, Stump J, Coates A. Evidence-based psychotherapies for depression in
Task Force on Promotion and Dissemination of Psychological Procedures. Training in and dissemination
of empirically-validated psychological treatments. The Clinical Psychologist 1995;48:3–23.
VandeCreek L, Knapp S, Brace K. Mandatory continuing education for licensed psychologists: Its
rationale and current implementation. Professional Psychology: Research and Practice 1990;21(2):
135–140.
Vocisano C, Klein DN, Arnow B, Rivera C, Blalock JA, Rothbaum B, et al. Therapist variables that
predict symptom change in psychotherapy with chronically depressed outpatients. Psychotherapy:
Theory, Research, Practice, Training 2004;41(3):255–265.
Walters ST, Matson SA, Baer JS, Ziedonis DM. Effectiveness of workshop training for psychosocial
addiction treatments: A systematic review. Journal of Substance Abuse Treatment 2005;29:283–
293. [PubMed: 16311181]
Weersing VR, Weisz JR, Donenberg GR. Development of the Therapy Procedures Checklist: A therapist-
report measure of technique use in child and adolescent treatment. Journal of Clinical Child and
Adolescent Psychology 2002;31(2):168–180. [PubMed: 12056101]
Weisz JR, Chu BC, Polo AJ. Treatment dissemination and evidence-based practice: Strengthening
intervention through clinician-researcher collaboration. Clinical Psychology: Science and
Practice 2004;11(3):300–307.
Worrall JM, Fruzzetti A. Improving peer supervision ratings of therapist performance in dialectical
behavior therapy: An internet-based training system. Psychotherapy: Theory, Research, Practice,
Training 2009;46(4):476–479.
Table 1
Summary of Studies Examining the Effectiveness of Written Materials (e.g., treatment manuals) as a Training Method
Dimeff et al. (2009). Nathan & Gorman (2002) criteria: 1. Sample: n = 174 drug and mental health treatment providers. Design: group comparison with random assignment; 3 conditions (1. manual, 2. online training, 3. instructor-led 2-day workshop). Follow-up: 1 and 3 months (30 and 90 days). Measurement: domains I, A, C, K, S, Sat; behavior observation (BO) and self-report (SR); standardized measures with psychometrics: no. Findings: 2 > 3 > 1; relative to the other conditions, those who read the treatment manual showed smaller improvements in knowledge, self-efficacy, competence, and adherence, and lower satisfaction.

Ducharme & Feldman (1992), study 1. Nathan & Gorman (2002) criteria: 3. Sample: n = 9 direct care staff. Design: single-subject multiple baseline; random assignment not applicable; no comparison groups. Follow-up: 6 months. Measurement: domains G, S; BO; standardized measures with psychometrics: no. Findings: written material had little effect on skill.

Kelly et al. (2000). Nathan & Gorman (2002) criteria: 2. Sample: n = 74 AIDS Service Organization directors. Design: group comparison with random assignment; 3 conditions (1. manual, 2. manual + 2-day workshop, 3. manual + 2-day workshop + follow-up consultation). Follow-up: 6 and 12 months. Measurement: domains O, P; SR; standardized measures with psychometrics: no. Findings: 3 > 2 > 1; Condition 1 resulted in the least frequent adoption of the intervention and Condition 3 in more frequent adoption.

Rubel et al. (2000). Nathan & Gorman (2002) criteria: 3. Sample: n = 44 clinicians and researchers. Design: pre/post; no random assignment; no comparison groups. Follow-up: none. Measurement: domains K, S; SR; standardized measures with psychometrics: yes. Findings: no differences between those who read versus did not read the treatment manual.

Sholomskas et al. (2005). Nathan & Gorman (2002) criteria: 2. Sample: n = 78 full-time substance abuse counselors. Design: group comparison; no random assignment; 3 conditions (1. manual, 2. manual + website, 3. manual + didactic seminar + supervised casework). Follow-up: 3 months. Measurement: domains F, I, K, P, S; BO and SR; standardized measures with psychometrics: yes. Findings: 3 > 2 > 1; slight improvements in knowledge, adherence, and skill after reading, but improvements did not near criterion mastery levels.

Note. Measurement domains: A = Attitudes, C = Confidence, F = Treatment fidelity or adherence, G = Generalization, I = Implementation difficulty or barrier (anticipated or actual), K = Knowledge, O = Organizational resources and characteristics, P = Practices or techniques used, S = Skills/competence, Sat = Satisfaction/acceptability. Measurement types: BO = Behavior observation, SR = Self-report.
Table 2
Summary of Studies Examining the Effectiveness of Self-directed Training Techniques (e.g., computer assisted, video review) as a Training Method
Dimeff et al. (2009). Nathan & Gorman (2002) criteria: 1. Sample: n = 174 drug and mental health treatment providers. Design: group comparison with random assignment; 3 conditions (1. manual, 2. online training, 3. instructor-led 2-day workshop). Follow-up: 1 and 3 months (30 and 90 days). Measurement: domains I, A, C, K, S, Sat; BO and SR; standardized measures with psychometrics: no. Findings: 2 > 3 > 1; though sometimes comparable to the other conditions, findings favored online training, which showed the highest effect sizes in knowledge, competence, and adherence at post-training and 90-day follow-up.

Miller, Yahne et al. (2004). Nathan & Gorman (2002) criteria: 1. Sample: n = 140 substance abuse counselors. Design: group comparison with random assignment; 5 conditions (1. workshop, 2. workshop + practice feedback, 3. workshop + individual coaching sessions, 4. workshop + feedback + coaching, 5. waitlist control of self-guided training). Follow-up: 4 and 8 months. Measurement: domains K, P, S; BO and SR; standardized measures with psychometrics: yes. Findings: 4 > 2 > 3 > 1 > 5; manuals and videotape were insufficient for behavior change; ongoing support is necessary for maintenance of gains.

Moyers et al. (2008). Nathan & Gorman (2002) criteria: 1. Sample: n = 129 behavioral health providers. Design: group comparison with random assignment; 3 conditions (1. workshop, 2. workshop + feedback + consult calls, 3. waitlist control of self-guided training). Follow-up: 4, 8, and 12 months. Measurement: domain S; BO; standardized measures with psychometrics: yes. Findings: (1 = 2) > 3; the addition of feedback and consult calls to the workshop did not result in greater performance; skills declined by the 4-month follow-up; self-directed techniques did not result in skill improvement.

National Crime Victims Research & Treatment Center (2007). Nathan & Gorman (2002) criteria: 3. Sample: n = 3,558 mental health professionals. Design: pre/post; no random assignment; no comparison groups. Follow-up: none. Measurement: domains K, Sat; SR; standardized measures with psychometrics: no. Findings: 36.3% overall increase in knowledge; high satisfaction.

Note. Measurement domains: A = Attitudes, C = Confidence, F = Treatment fidelity or adherence, G = Generalization, I = Implementation difficulty or barrier (anticipated or actual), K = Knowledge, S = Skills/competence, Sat = Satisfaction/acceptability. Measurement types: BO = Behavior observation, SR = Self-report.
Table 3
Summary of Studies Examining the Effectiveness of Workshops as a Training Method
Dimeff et al. (2009). Nathan & Gorman (2002) criteria: 1. Sample: n = 174 drug and mental health treatment providers. Design: group comparison with random assignment; 3 conditions (1. manual, 2. online training, 3. instructor-led 2-day workshop). Follow-up: 1 and 3 months (30 and 90 days). Measurement: domains I, A, C, K, S, Sat; BO and SR; standardized measures with psychometrics: no. Findings: 2 > 3 > 1; the 2-day workshop resulted in self-efficacy, satisfaction, and demonstrated skills comparable to online training.

Moyers et al. (2008). Sample: n = 129 behavioral health providers. Design: group comparison; conditions included 1. workshop and 2. workshop + feedback + consult calls. Findings: the addition of feedback and consult calls to the workshop did not result in greater performance; skills declined by the 4-month follow-up.

Additional entry (only partially legible in this version of the manuscript): no changes evident in knowledge or confidence; attitudes slightly higher, though no statistical test was used.

Note. Measurement domains: A = Attitudes, C = Confidence, F = Treatment fidelity or adherence, I = Implementation difficulty or barrier (anticipated or actual), K = Knowledge, O = Openness to learning, OR = Organizational readiness for change, P = Practices or techniques used, S = Skills/competence, Sat = Satisfaction/acceptability, SP = Supportive practices. Measurement types: BO = Behavior observation, Ch = Chart review, SR = Self-report.
Table 4
Summary of Studies Examining the Effectiveness of Workshop Follow-ups (observation, feedback, consultation, coaching) as a Training Method
Ducharme & Feldman (1992), study 2. Nathan & Gorman (2002) criteria: 3. Sample: n = 7 staff members. Design: single-subject multiple baseline; random assignment not applicable; no comparison groups. Follow-up: none. Measurement: domains G, S; BO; standardized measures with psychometrics: no. Findings: general case training produced criterion levels of generalization.

Hawkins & Sinha (1998). Nathan & Gorman (2002) criteria: 3. Sample: n = 109 clinicians. Design: pre/post; no random assignment; no comparison groups. Follow-up: none. Measurement: domain K; SR; standardized measures with psychometrics: no. Findings: the best predictors of knowledge were (highest first) reading, peer support/consultation, study group attendance, and time spent applying the treatment; expert consultation became more important as training progressed.

Kelly et al. (2000). Nathan & Gorman (2002) criteria: 2. Sample: n = 74 AIDS Service Organization directors. Design: group comparison with random assignment; 3 conditions (1. manual, 2. manual + 2-day workshop, 3. manual + 2-day workshop + follow-up consultation). Follow-up: 6 and 12 months. Measurement: domains P, O; SR; standardized measures with psychometrics: no. Findings: 3 > 2 > 1; Condition 3 resulted in more frequent adoption of the intervention.

Miller, Yahne et al. (2004). Nathan & Gorman (2002) criteria: 1. Sample: n = 140 substance abuse counselors. Design: group comparison with random assignment; 5 conditions (1. workshop, 2. workshop + practice feedback, 3. workshop + individual coaching sessions, 4. workshop + feedback + coaching, 5. waitlist control of self-guided training). Follow-up: 4 and 8 months. Measurement: domains K, P, S; BO and SR; standardized measures with psychometrics: yes. Findings: 4 > 2 > 3 > 1 > 5; the addition of feedback and/or coaching improved retention of proficiency.

Additional entry (only partially legible in this version of the manuscript): "… use disorders … studies (Baer et al., 2004; Miller et al., 2004)."

Sholomskas et al. (2005). Nathan & Gorman (2002) criteria: 2. Sample: n = 78 full-time substance abuse counselors. Design: group comparison; no random assignment; 3 conditions (1. manual, 2. manual + website, 3. manual + didactic seminar + supervised casework). Follow-up: 3 months. Measurement: domains F, I, K, P, S; BO and SR; standardized measures with psychometrics: yes. Findings: 3 > 2 > 1.

Note. Measurement domains: A = Attitudes, F = Treatment fidelity or adherence, G = Generalization, I = Implementation difficulty or barrier (anticipated or actual), K = Knowledge, O = Organizational resources and characteristics, P = Practices or techniques used, S = Skills/competence, Sat = Satisfaction/acceptability. Measurement types: BO = Behavior observation, SR = Self-report.
Table 5
Summary of Studies Examining the Effectiveness of Pyramid Training as a Training Method
The individual study entries for this table are not legible in this version of the manuscript.

Note. Measurement domains: A = Attitudes, K = Knowledge, S = Skills/competence. Measurement types: BO = Behavior observation, SR = Self-report.
Table 6
Summary of Studies Examining the Effectiveness of Multicomponent Training Packages as a Training Method
Only fragments of the individual study entries are legible in this version of the manuscript:

"… (1993) … (8) and psychologists (8); 84 patients …"

"… nurses, 33 patients … psychiatric symptom ratings and in positive and affective symptoms. No change was noted in negative symptoms."

"… abuse counselors … manualized format of the training … Adequate skill levels were reached."

"… 2. Control group … 1 > 2."

Note. Measurement domains: A = Attitudes, C = Confidence, Cl = Clinical outcome, F = Treatment fidelity or adherence, G = Generalization, I = Implementation difficulty or barrier (anticipated or actual), K = Knowledge, M = Job morale, OR = Organizational readiness for change, P = Practices or techniques used, S = Skills/competence, Sat = Satisfaction/acceptability, SP = Supportive practices, T = Therapeutic interaction/rapport/working alliance. Measurement types: BO = Behavior observation, PR = Patient report of therapist behavior, SR = Self-report.