Implementation's Impact on Program Outcomes
Am J Community Psychol (2008) 41:327–350
DOI 10.1007/s10464-008-9165-0
ORIGINAL PAPER
Abstract  The first purpose of this review was to assess the impact of implementation on program outcomes, and the second purpose was to identify factors affecting the implementation process. Results from over 500 quantitative studies offered strong empirical support to the conclusion that the level of implementation affects the outcomes obtained in promotion and prevention programs. Findings from 81 additional reports indicate there are at least 23 contextual factors that influence implementation. The implementation process is affected by variables related to communities, providers and innovations, and aspects of the prevention delivery system (i.e., organizational functioning) and the prevention support system (i.e., training and technical assistance). The collection of implementation data is an essential feature of program evaluations, and more information is needed on which and how various factors influence implementation in different community settings.

Keywords  Implementation · Youth programs · Prevention · Health promotion

Introduction

"Build a better mousetrap and the world will beat a path to your door." While this dictum from the business world sounds challenging enough, the ultimate extent to which a better product will maximize its market share is dependent on a host of related developments. New consumers must learn about the product's existence and potential benefit, decide to buy the product, use it effectively, and continue its use if it works as intended and the need persists. This total process is known as diffusion or technology transfer and can refer to the spread of new ideas, technologies, manufactured products such as mousetraps, and evidence-based promotion, prevention or treatment programs (Rogers 2003).

For example, social scientists recognize that developing effective interventions is only the first step toward improving the health and well-being of populations. Transferring effective programs into real world settings and maintaining them there is a complicated, long-term process that requires dealing effectively with the successive, complex phases of program diffusion. These phases include how well information about a program's existence and value is supplied to communities (dissemination), whether a local organization or group decides to try the new program (adoption), how well the program is conducted during a trial period (implementation), and whether the program is maintained over time (sustainability). Moreover, if many people are to benefit, diffusion must be successful in multiple communities, and at each stage of the process, from dissemination through sustainability.

Unfortunately, research indicates that the diffusion of effective interventions typically yields diminishing returns as the process unfolds. For many reasons, information about effective interventions does not adequately reach many communities. When it does, only some in the community become motivated to try something new. Many innovations encounter implementation problems that
diminish a program's impact. Finally, only a relatively few interventions are sustained over time, regardless of their success achieved during a demonstration period (Rogers 2003).

The Need to Understand Implementation

The current paper focuses on the implementation stage of the diffusion process. It is important that the potential value of new interventions is adequately tested, and this is impossible without attending carefully to the process of implementation. Assessment of implementation is essential for assessing the internal and external validity of interventions. For example, accurate interpretation of outcomes depends on knowing what aspects of the intervention were delivered and how well they were conducted. Negative results can occur if the program is not implemented sufficiently, or positive impact can be achieved through an innovation that, in practice, was very different from what was intended. Valid judgments about the value of the original program would not be possible in either situation. Implementation data are also important in testing the theory behind an innovation. Theories about the crucial importance of different intervention components cannot be assessed without ascertaining that these components were effectively administered. Furthermore, early monitoring of implementation can identify problems in program application that can be corrected quickly to ensure better outcomes.

Although many authors would agree that implementation influences the outcomes of promotion and prevention programs, the relevant literature has not been completely examined. For example, Dane and Schneider (1998) found that only 39 of 162 (24%) published mental health prevention studies appearing between 1980 and 1994 described any steps that were taken to document implementation, and of these 39, only 13 assessed if implementation affected outcomes. Durlak (1997) reported that less than 5% of the 1,200 prevention studies appearing by the end of 1995 in mental and physical health and education provided any data on program implementation. Later, Durlak (1998) described the results of 11 representative investigations that related implementation to outcomes. Dusenbury et al. (2003a) examined several hundred outcome studies covering a 25-year period of drug prevention research but briefly summarized data from only nine reports providing information on the relationship between implementation and outcomes. Finally, in their review of 32 evidence-based mental health prevention programs, Domitrovich and Greenberg (2000) noted that only 13 studies conducted analyses relating implementation to outcomes.

Fortunately, there is now an extensive database that permits a systematic and comprehensive assessment of the literature. The primary purpose of this paper is to examine implementation studies in the field of prevention and promotion targeting children and adolescents to answer two research questions: (1) does implementation affect outcomes, and (2) what factors affect implementation? In addition to addressing these two questions, we also sought to identify gaps that exist in the literature and discuss the major implications of current research findings.

We hypothesized there would be credible and consistent evidence across programs that implementation was a significant influence on outcomes. Assuming this hypothesis would be confirmed, we developed an ecological framework adapted from previous authors, and hypothesized that a literature review would confirm that factors affecting implementation would reside in four major categories. The ecological framework is explained in a later section when the relevant data are evaluated.

Because answering the second research question is dependent on a positive answer to the first question, this article is organized as follows. First, we define different aspects of implementation and then describe the methods used in searching for studies relevant to both research questions. Then we review the findings on the relationship between implementation and outcomes before discussing the findings regarding which factors affect implementation. In the latter section, we indicate how our model of implementation fits with the Interactive Systems Framework for Dissemination and Implementation (ISF) (Wandersman et al. 2008) guiding this special issue. Lastly, a research agenda for future work on implementation is presented.

Defining Key Terms

Our primary interest was interventions conducted in real world settings by non-researchers. We use the term provider to designate the non-research staff of community-based organizations who implement the new intervention (e.g., the staff in schools, health clinics, or community coalitions). We use the terms program, innovation, and intervention interchangeably in reference to newly introduced promotion and preventive approaches. Evaluating the implementation literature presents a challenge due to the lack of consensus regarding a standardized vocabulary and set of operational definitions of relevant terms. Therefore, as we define major variables, some alternate terms used by others are presented in parentheses. Although these alternate terms can have other meanings depending on a specific context, they are often consistent with our definitions.
What is Implementation?

In general, implementation refers to what a program consists of when it is delivered in a particular setting. There are eight different aspects to implementation, and Dane and Schneider (1998) described five of these. (1) There is fidelity, which is the extent to which the innovation corresponds to the originally intended program (a.k.a. adherence, compliance, integrity, faithful replication). (2) There is dosage, which refers to how much of the original program has been delivered (quantity, intervention strength). (3) Quality refers to how well different program components have been conducted (e.g., are the main program elements delivered clearly and correctly?). (4) Participant responsiveness refers to the degree to which the program stimulates the interest or holds the attention of participants (e.g., are students attentive during program lessons?). (5) Program differentiation involves the extent to which a program's theory and practices can be distinguished from other programs (program uniqueness). The latter two aspects of implementation have not received much research attention, and are not evaluated here, but see Hogue et al. (2005) and Hansen and McNeal (1999) for examples.

There are three additional aspects of implementation worthy of attention. These include (6) the monitoring of control/comparison conditions, which involves describing the nature and amount of services received by members of these groups (treatment contamination, usual care, alternative services). (7) Program reach (participation rates, program scope) refers to the rate of involvement and representativeness of program participants. Finally, there is (8) adaptation, which refers to changes made in the original program during implementation (program modification, reinvention).

Monitoring of comparison groups is important. It is often assumed incorrectly that controls do not receive any services, but this is almost never the case in school-based studies (Durlak 1985). For example, several authors who have examined the issue have found that many individuals in their no-intervention control condition received some alternative services (Abbott et al. 1998; Ary et al. 1990; Basch et al. 1985; Elder et al. 1996; Kendrick et al. 1995; Kerr et al. 1985). Child psychotherapy studies in which alternate conditions have received treatment have yielded mean effect sizes half as large in magnitude as those produced in true treatment versus control designs (Kazdin et al. 1990). Similar findings are likely in promotion and prevention studies. As a result, the monitoring of comparison groups would provide a more accurate view of the value of a new intervention.

Reach is different from participant responsiveness because the former is concerned with questions relating to the percentage of the eligible population who took part in the intervention, and their characteristics. A new program may not attract as many or the same types of participants as the original program. For example, a prevention program potentially suitable for all parents in a diverse community may attract and retain less than 5% of eligible parents, almost all of whom are Caucasian and members of the upper socioeconomic classes. Finally, the inclusion of adaptation as an aspect of implementation might surprise some readers, but data presented later indicate why adaptation deserves study as a part of implementation. The different aspects of implementation are related, but can be separated for study, and we were interested in the extent to which this occurred in the studies reviewed here.

Method

Literature Search Procedures

The same three search strategies were used to locate published and unpublished studies relevant to both research questions. First, computer searches were conducted of PsycINFO, MEDLINE, and Dissertation Abstracts using a variety of search terms. Second, the references from several reviews were examined (e.g., Dane and Schneider 1998; Domitrovich and Greenberg 2000; Durlak 1998; Dusenbury et al. 2003a), and the citations from each included individual report were also inspected. Third, we conducted manual searches of the past 5 years of several journals that had published relevant articles.²

² These journals included American Journal of Community Psychology, Health Education and Behavior, Health Education Research, Journal of Community Psychology, Journal of Primary Prevention, and Prevention Science.

The primary focus was on prevention and health promotion programs for children and adolescents related to the following topics: physical health and development, academic performance, drug use, and various social and mental health issues such as violence, bullying, and positive youth development. The literature review began in 1976, when research on implementation first began appearing with any frequency, and ended on December 31, 2006. Only reports in English were included.

We included quantitative and qualitative investigations, but used them for different purposes.³ Because there were a sufficient number of reports, only quantitative investigations were used to examine the first research question pertaining to the influence of implementation on outcomes. Studies with control groups and one-group pre-post designs
were included. Both quantitative and qualitative studies were used to identify factors affecting implementation, and for this area we also included the commentaries of several authors based on their extensive research or field experiences (e.g., Elias et al. 2003; Hogan et al. 2003; Kealey et al. 2000; Mihalic et al. 2004; Scheirer 2005; Wolff 2001).

³ We refer here to traditional distinctions regarding what type of data were collected and how the data were analyzed. Several studies combined qualitative and quantitative methods.

Results

Research Question #1: Does Implementation Influence Program Outcomes?

Overall, we located reports on 542 relevant interventions. There were 483 studies summarized in five meta-analyses and 59 additional studies assessing the impact of implementation on outcomes. The meta-analytic findings are discussed first, followed by the data from the additional quantitative reports.

Findings from Meta-analyses

There are five meta-analyses containing information on the impact of implementation on outcomes. The primary studies in these reviews vary in terms of how they report on implementation. For example, in a review of 59 mentoring studies, DuBois et al. (2002) found programs that monitored implementation obtained effect sizes three times larger than programs that reported no monitoring (mean effects of 0.18 vs. 0.06, respectively). Similarly, Smith et al. (2004) reported that although 14 whole-school anti-bullying programs obtained modest effects overall, those that monitored implementation obtained twice the mean effects on self-reported rates of bullying and victimization as those that did not monitor implementation.

Tobler (1986) reported that 29% of the outcomes derived from 143 drug prevention studies were drawn from interventions that were improperly implemented, and comparisons suggested that well-implemented programs achieved effect sizes 0.34 greater than poorly implemented programs. In the largest relevant meta-analysis, Wilson et al. (2003) reviewed 221 school-based prevention programs targeting aggressive behaviors. A regression analysis indicated that implementation was the second most important variable overall, and the most important program feature that influenced outcomes.⁴

⁴ The risk status of students was the most important factor; i.e., students selected for intervention because of their early aggressive behavior improved the most.

A fifth meta-analysis took an innovative approach in evaluating the impact of implementation. Derzon et al. (2005) assessed findings from 46 unpublished drug prevention programs funded by SAMHSA. The initial outcome data were discouraging: the mean effect size on drug use from the 46 sites was only 0.02, and not statistically significant. Furthermore, 21 of the 46 sites produced negative effect sizes, indicating that comparison youth had less drug use at post than program participants. However, Derzon and colleagues (2005) found that three factors with the strongest effects on outcomes were related to implementation. Two of these involved the implementation of the intervention (i.e., the degree to which program objectives and procedures were put into everyday practice, and the intensity of program delivery) and the third factor related to the control groups (i.e., the exposure of the control students to alternative drug prevention services).⁵

⁵ A fourth factor, gender of the participants, was also related to program outcomes but was not included in their subsequent analyses.

Derzon and colleagues (2005) were then interested in how effective programs could be if data were adjusted to optimize the influence of these three factors, and they used regression procedures to re-estimate study outcomes. In other words, what would the results be if controls received no alternative services and if programs were implemented consistently and with sufficient intensity? The results of their synthetic projections were dramatic. Mean effects for the 46 programs rose from 0.02 to 0.24, reached statistical significance, and only one program now had a negative effect size. In other words, if issues related to implementation of the intervention and the receipt of services by control groups could be controlled, the programs would have been 12 times more effective!
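The logic of such synthetic projections can be illustrated with a brief simulation. The sketch below uses entirely hypothetical site-level data and a simple ordinary least squares model, not Derzon et al.'s actual data or specification: an observed effect size is regressed on implementation fidelity, delivery intensity, and control-group exposure to alternative services, and the fitted model is then used to project the mean effect size under idealized conditions (full implementation, no alternative services for controls).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 46  # hypothetical number of program sites

# Hypothetical site-level predictors, all scaled 0-1 (illustrative values only).
fidelity  = rng.uniform(0.2, 0.9, n)   # how fully objectives/procedures were delivered
intensity = rng.uniform(0.2, 0.9, n)   # intensity of program delivery
ctrl_alt  = rng.uniform(0.0, 0.8, n)   # control group's exposure to alternative services

# Hypothetical observed effect sizes: implementation helps, while control-group
# exposure to similar services shrinks the program-vs-control contrast.
effect = (0.05 + 0.45 * fidelity + 0.25 * intensity - 0.50 * ctrl_alt
          + rng.normal(0, 0.08, n))

# Fit an ordinary least squares model of effect size on the three factors.
X = np.column_stack([np.ones(n), fidelity, intensity, ctrl_alt])
beta, *_ = np.linalg.lstsq(X, effect, rcond=None)

# Project the mean effect under "optimized" conditions: full fidelity and
# intensity, and no alternative services received by controls.
x_ideal = np.array([1.0, 1.0, 1.0, 0.0])
print(f"observed mean effect size:  {effect.mean():.2f}")
print(f"projected mean effect size: {x_ideal @ beta:.2f}")
```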
On the one hand, the results of the above meta-analyses are consistent in indicating the influence of implementation on outcomes. The overall magnitude of the difference favoring programs with apparently better as opposed to poorer implementation is profound, and has resulted in mean effect sizes that are two to three times higher and, under ideal circumstances, may be up to 12 times higher (Derzon et al. 2005). Such findings offer strong support to the conclusion that implementation influences outcomes.

On the other hand, the meta-analytic findings are limited by the data contained in the original reports. Meta-analysts have had to depend on original authors' general comments relating to the monitoring of implementation, or on whether implementation problems occurred. Furthermore, the nature and extent of these problems, and the actual level of
implementation achieved in studies were not reported frequently enough to be examined. Fortunately, data are available from additional studies that provide more specific information on implementation and its effect on outcomes. These studies are evaluated next.

Additional Studies Linking Implementation to Outcomes

Researchers have analyzed implementation data in two major ways: (1) categorically, by creating groups of providers who differ in their level of implementation (e.g., low versus high implementation groups); or (2) by assessing implementation in a continuous fashion (e.g., by using percentages to assess the level of dosage or fidelity achieved). In the former case, investigators usually report whether statistically significant differences in outcomes were achieved for different implementation groups, or they compare outcomes for implementation groups and controls (e.g., Botvin et al. 1989; Gottfredson et al. 1993). In the second situation, the level of implementation is correlated with outcomes (e.g., Abbott et al. 1998). Both strategies have tended to find a relationship between implementation and outcomes, but the second approach has more statistical power. In either case, the full range of implementation data should always be reported. Designations of "low" or "high" implementation are arbitrary, and have reference only to locally obtained data; what is "high" in one study may not be "high" in another.
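The two analytic strategies can be sketched in a few lines. The example below uses invented provider-level dosage and outcome values (all names and numbers are illustrative only) to contrast a median-split group comparison with a continuous correlation; it is a minimal sketch of the general approach, not a re-analysis of any study in this review.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 40  # hypothetical number of providers

# Hypothetical data: percent of the curriculum each provider delivered (dosage)
# and a participant outcome that improves modestly with dosage.
dosage  = rng.uniform(30, 95, n)
outcome = 0.04 * dosage + rng.normal(0, 1.0, n)

# Strategy 1 (categorical): split providers at the local median into "low" vs.
# "high" implementation groups and compare group means.
high = dosage >= np.median(dosage)
t, p_cat = stats.ttest_ind(outcome[high], outcome[~high])

# Strategy 2 (continuous): correlate the level of implementation with outcomes.
r, p_cont = stats.pearsonr(dosage, outcome)

print(f"median split: t = {t:.2f}, p = {p_cat:.3f}")
print(f"continuous:   r = {r:.2f}, p = {p_cont:.3f}")
# Dichotomizing a continuous measure discards information, which is one reason
# the continuous analysis generally has more statistical power.
```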
Table 1 summarizes the findings from 59 additional studies and identifies what aspect of implementation was assessed in each study, how it was assessed, and with what results. In 76% of the studies (45 of 59), there was a significant positive relationship between the level of implementation and at least half of all program outcomes. Moreover, minimal variability in implementation levels could be an explanation for the weak or null results obtained in 8 of the remaining 14 studies (e.g., Basch et al. 1985; Cho et al. 2005; Elias et al. 1986; Hopkins et al. 1988; Komro et al. 2006; Resnicow et al. 1998a, b; Spoth et al. 2002, both studies). If levels of implementation are all very high or very low across groups or sites, the lack of variability does not provide much power in detecting any between-group differences. In the former case, all participants might have received an effective level of implementation so that their outcomes should be similar; in the latter case, implementation might be too low to yield expected benefits for any group. Overall, findings from the studies in Table 1 provide additional support for the relationship between implementation and outcomes. A majority of studies have found that higher levels of implementation lead to better outcomes.

[Table 1 Characteristics and findings for studies assessing the impact of implementation on program outcome. Columns: Study; General area; Implementation features (number of measures; aspect of implementation); Method of assessing implementation; Data treated categorically or continuously; Proportion of outcomes affected by implementation. Study-by-study entries omitted.]

The largest group of studies in Table 1 evaluated fidelity (n = 37), while 29 assessed dosage; fewer monitored any of the other aspects of implementation such as quality or program reach. It is noteworthy that the three studies that assessed adaptation all found a positive effect for adaptation on program outcomes (Blakely et al. 1987; Kerr et al. 1985; McGraw et al. 1996). The majority of studies (n = 41, or 69%) assessed only one aspect of implementation, but 18 (31%) evaluated at least two aspects such as fidelity and dosage (e.g., McGraw et al. 1996), or dosage and quality (Bush et al. 1989).

Assessment of Implementation

The two primary methods of assessing implementation have been provider self-reports and independent behavioral observations. Most of the latter studies have documented the reliability of their observational procedures, but studies relying on self-reports typically have not. There are some indications, however, that observational data are more likely to be linked to outcomes than self-report data (e.g., Hansen et al. 1991; Lillehoj et al. 2004; Resnicow et al. 1998a, b), but few studies have directly compared these two strategies. Because observational data are more objective, it seems preferable to use such information for implementation analyses, if it is realistic to do so. Regardless of the methodology, periodic spot checks of implementation can help identify providers who might be struggling with executing parts of the intervention. Several authors have indicated this might occur with the more difficult sections of interventions (Botvin et al. 1990; Hahn et al. 2002; Kallestad and Olweus 2003).

Other Notable Findings

There are two other notable findings from implementation studies that are not apparent in Table 1.

1. Expecting perfect or near-perfect implementation is unrealistic. Positive results have often been obtained with levels around 60%; few studies have attained levels greater than 80%. No study has documented 100% implementation for all providers. This point is important in light of program adaptation, which is discussed later.

2. There is marked variability in implementation achieved across providers within the same study. The range of implementation data has been as high as 87% when comparing the lowest and highest
implementation levels, and 20 to 40% differences between providers or sites are common. Reporting only mean implementation data can easily obscure the fact that some providers are much better at implementation than others, as the brief sketch below shows.
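The sketch below uses invented fidelity percentages for ten hypothetical providers; the mean looks adequate, while the provider-to-provider spread of the kind the review describes stays hidden unless the full range is reported.

```python
import numpy as np

# Hypothetical fidelity scores (percent of the program delivered) for ten providers.
fidelity = np.array([18, 35, 42, 55, 60, 63, 71, 80, 88, 98])

print(f"mean fidelity: {fidelity.mean():.0f}%")               # looks respectable on its own
print(f"range:         {fidelity.min()}%-{fidelity.max()}%")  # an 80-point spread
print(f"below 60%:     {(fidelity < 60).sum()} of {fidelity.size} providers")
```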
In sum, the results of 483 studies included in five meta-analyses that look broadly at implementation combined with the results of 59 additional studies with more specific findings clearly indicate that implementation matters. The level of implementation achieved is an important determinant of program outcomes. Achieving good implementation not only increases the chances of program success in statistical terms, but also can lead to much stronger benefits for participants.

Moreover, current research has been conducted across a diverse set of programs, providers, and settings. The literature includes studies of mentoring, after-school programs, drug prevention, and mental and physical health promotion and prevention programs of various types offered in schools, health clinics, and other community agencies. Implementation has been important in all these situations. Therefore, our hypothesis regarding the first research question was confirmed. There is credible and extensive empirical evidence that the level of implementation affects program outcomes.

This conclusion indicates the importance of identifying the factors that affect the implementation process. This research is reviewed next.

Research Question #2: What Factors Affect Implementation?

Before discussing the findings for the second research question, the hypothetical framework tested in this aspect of the literature review is described.
A Framework for Successful Implementation

Wandersman et al. (2008) note that "understanding capacity is central to addressing the gap between research and practice" (p. X, this issue). Capacity is often used in reference to the entire process of diffusion and can be defined as the necessary motivation and ability to identify, select, plan, implement, evaluate, and sustain effective interventions. Our focus was on capacity relative to successful implementation, and we hypothesized that a multilevel ecological perspective was necessary for understanding successful implementation, a view shared by several other authors (Altschuld et al. 1999; Riley et al. 2001; Shediac-Rizkallah and Bone 1998; Wandersman 2003).

Figure 1 depicts how our ecological framework is connected to the Interactive Systems Framework (ISF) presented in this special issue (Wandersman et al. 2008). Our view is that key elements of the Prevention Delivery System related to organizational capacity and two key elements of the Prevention Support System in the form of training and technical assistance lie at the center of effective implementation. Some type of organizational structure is necessary and responsible for guiding the implementation of a new program. This can be a newly created structure in the community (e.g., a community coalition) or an existing community-based agency (e.g., health clinic, hospital, school, or community service center). Therefore, organizational capacity is important for successful implementation. However, unlike the ISF model, we do not separate general and innovation-specific capacity. Although general and innovation-specific capacity may be distinct theoretically, there were no studies in our review that distinguished between these two elements of organizational functioning. While organizational capacity is important, organizations need support in conducting new interventions successfully, and this support comes primarily through training and technical assistance that is provided by outside parties (i.e., the prevention support system noted in the ISF model).

Most important, an organization's success at implementation will also be dependent on factors present in three other categories that provide an extended ecological context for implementation (i.e., innovation characteristics, provider characteristics, and community factors). Community factors are also noted in the ISF model as contributing to effective dissemination and implementation. The bidirectional arrows in the outer circles of Fig. 1 in our model indicate that variables in these categories can interact with each other and with the prevention delivery and support systems to affect implementation.

Fig. 1 [Ecological framework for effective implementation: community factors and innovation characteristics surround the Interactive Systems Framework for Dissemination and Implementation, within which the prevention delivery system (organizational capacity) and the prevention support system (training and technical assistance) combine to produce effective implementation]

In sum, we hypothesized that implementation is influenced by variables present in five categories: innovations, providers, communities, the prevention delivery system (i.e., features related to organizational capacity) and the prevention support system (i.e., training and technical assistance). Under favorable circumstances, variables in all five categories interact and lead to effective implementation, that is, a process for conducting the intervention as planned. What is specifically required for effective
implementation, however, depends on a constellation of factors because local contexts differ. Developing sufficient capacity for implementation is essential for helping local providers conduct new programs effectively, and the extent of their success will depend on the interaction of multiple ecological factors that contribute to capacity.

Research has supported the general relationship between capacity and implementation. One research group found that capacity (community readiness) was a significant predictor of the implementation of youth tobacco control programs during a 3-year intervention (Engstrom et al. 2002), and at a 3-year follow-up point (Jason et al. 2004). Similarly, Kegler et al. (1998a) found that the level of capacity achieved through the actions of community coalitions was significantly related to more effective implementation, and others have also reported significant relationships between organizational capacity and more effective implementation (Gingiss et al. 2006; Riley et al. 2001). The following sections attempt to explicate whether research supports the conceptual framework depicted in Fig. 1 regarding the major influences that contribute to capacity and thus effective implementation.

Research Findings

We located 81 studies containing quantitative or qualitative data on factors affecting the implementation process. Several studies contained data on more than one factor. Data from these reports offered strong support for our ecological framework for successful implementation. Although we did not predict which factors would be present within each category, the literature review identified 23 factors associated with one of the five categories in our model (Fig. 1).

Table 2 lists the specific factors affecting implementation in these five categories. A factor is listed in Table 2 only if it was related to implementation in at least five articles and if findings were consistent in the more rigorously conducted investigations. For quantitative studies this typically meant the use of larger samples and psychometrically sound assessment procedures; in qualitative reports this generally meant the use of multiple as opposed to single case studies, prospective rather than retrospective designs, and multiple versus single methods of data collection. Once again, we indicate alternate terminology that occurs across reports for similar constructs. Table 2 is not a comprehensive listing of all potentially relevant factors, because results based on fewer than five studies were not included.

Space does not permit an extended discussion of all the factors in Table 2. We selectively discuss a few factors below and instead emphasize the likely interactions that occur among factors, particularly across categories, which illustrate how an ecological perspective is essential for understanding successful implementation.

Community Level Factors

The community context in which a program will be conducted must be considered, and the important community factors identified in the literature are the prevention research system, politics, funding, and policy. Findings from prevention science should provide the basic information for dissemination to communities, and new findings appear all the time. Several groups and federal agencies have developed web sites and other means of disseminating information about prevention to the general public (see Chinman et al. 2005). Politics can help or hurt implementation. For example, school staff pressured by the superintendent's office to offer new programs often do not implement them very effectively, probably because they do not become committed to the intervention (Berman and McLaughlin 1976). Policies such as No Child Left Behind might enhance or impede implementation depending on the extent to which a new program is perceived as impacting students' academic performance. Funding is a necessary but insufficient condition for effective implementation, although many funders do not provide sufficient time or money for implementation. Finally, social policy is important for institutionalizing new procedures and practices, and supporting an administrative and financial infrastructure.

Provider Characteristics

The four provider characteristics most consistently related to implementation involve perceptions of the need for and potential benefits of the innovation, self-efficacy, and skill proficiency. Providers who recognize a specific need for the innovation, believe the innovation will produce desired benefits, feel more confident in their ability to do what is expected (self-efficacy), and have the requisite skills are more likely to implement a program at higher levels of dosage or fidelity (e.g., Barr et al. 2002; Cooke 2000; Kallestad and Olweus 2003; Ringwalt et al. 2003).

Innovation Characteristics

Two innovation characteristics consistently related to implementation are adaptability (flexibility) and compatibility (contextual appropriateness, fit, match, congruence; Berman and McLaughlin 1976; Gottfredson and Gottfredson 2002; Mihalic et al. 2004; Richard et al. 2004; Riley et al. 2001; Rogers 2003). The former indicates that programs that can be modified to fit the needs of providers,
organizations and communities have a better chance of stronger implementation than those that must be conducted "as is." The latter characteristic suggests that providers and organizations implement new programs more effectively to the extent they fit with the organization's current mission, priorities, and existing practices.

Table 2 continued

V. Factors Related to the Prevention Support System

A. Training (a, b, c): Approaches to ensure provider proficiencies in the skills necessary to conduct the intervention and to enhance providers' sense of self-efficacy

B. Technical Assistance (a, b, c): This refers to the combination of resources offered to providers once implementation begins, and may include retraining in certain skills, training of new staff, emotional support, and mechanisms to promote local problem solving efforts

(a) Factors also identified by Fixsen et al. (2005). (b) Factors also identified by Greenhalgh et al. (2005). (c) Factors also identified by Stith et al. (2006).

Note. A detailed listing of the studies supporting the importance of each factor is available from the first author on request.

The Prevention Delivery System: Factors Related to Organizational Capacity

There are several ways to describe factors related to organizational capacity. The variables identified in our literature review fit best into three categories: general organizational features, specific organizational practices and processes, and specific staffing considerations. Early diffusion research characterized individuals who were among the first to adopt innovations as adventuresome, open to change, and innovative (Rogers 2003), and these descriptions apply as well to organizations. Innovative organizations cultivate an atmosphere conducive to trying new approaches. Effective leadership is crucial to implementation, and the existence of at least one program champion has long been recognized as a valuable resource to encourage innovation. Program champions, particularly those who are highly placed in an organization and have the respect of other staff, can do much to help orchestrate an innovation through the entire diffusion process from adoption to sustainability.

An important organizational practice supporting implementation in several studies is shared decision-making (i.e., collaboration, community involvement or participation, local input, local ownership). Situations in which shared decision-making occurs among providers, researchers, administrators, and community members have consistently led to better implementation (e.g., Berman and McLaughlin 1976; Cooke 2000; Kegler and Wyatt 2003; McCormick et al. 1994; Mihalic et al. 2004; Riley et al. 2003). Ideally, this collaborative process is characterized by nonhierarchical relationships among participants, mutual trust and open communication, shared responsibilities for completing important tasks, and efforts to reach consensus when disagreements or stalemates arise. Moreover, other data indicate that shared decision-making also predicts program sustainability (Hahn et al. 2005). "The literature overwhelmingly shows a positive relationship between community participation and sustainability" (Shediac-Rizkallah and Bone 1998, p. 103). In other words, an effective program is more likely to be better implemented and then remain in a setting when collaborative methods have been used to determine what type of program should be conducted in the first place. As a result, the importance of encouraging local input into new programming cannot be overstated.

The Prevention Support System: Training and Technical Assistance

The two features of the prevention support system that have received the most attention and empirical support (from over 20 studies) are training and technical assistance (TA) (e.g., Allison et al. 1990; Barr et al. 2002; Basen-Engquist et al. 1994; McCormick et al. 1994; Perry et al. 1990). Ideally, training and TA occur after necessary resources related to time, staff, administrative, and financial support have been secured, and other factors are positively disposed toward implementation (shared vision, shared decision-making, effective leadership and support, and so on).

In general, the goals of training are to prepare providers effectively for their new tasks, but this means training should not only help providers develop mastery in specific intervention skills, but also attend to their expectations, motivation, and sense of self-efficacy, because the latter can affect their future performance in and support of the new innovation. Research indicates that active forms of learning promote skill acquisition. Training that includes modeling followed by role playing and performance feedback offered in a supportive emotional atmosphere has
been successful in many studies (e.g., Dufrene et al. 2005; Sterling-Turner et al. 2002).

In general, TA refers to the resources offered to providers once the intervention begins. The goals of TA are to maintain providers' motivation and commitment, improve their skill levels where needed, and support local problem solving efforts. Depending on the situation, TA may include some combination of re-training of initial providers, training of new staff, and providing emotional support. Early monitoring of implementation followed promptly by retraining has doubled the fidelity of implementation to over 85% for providers who were having initial difficulties (DuFrene et al. 2005; Greenwood et al. 2003). Staff turnover can jeopardize implementation, so contingencies for training new staff should be made. If collaboration and shared decision-making have characterized the diffusion process from the beginning, then providers should have fewer doubts about their competence, and will be able to find solutions to implementation roadblocks.

Relationships Among Factors

Most investigators have assessed factors affecting implementation in an isolated fashion by focusing on only a few variables at a time. This is understandable because researchers cannot study everything at once. Table 2 thus presents a complicated array of at least 23 contextual factors that merit attention in future research. However, this list will likely be reduced over time through further research because there appears to be considerable overlap among these factors.

The two innovation characteristics listed in Table 2 (i.e., adaptability and compatibility) are related to each other, and to the integration of new programming into an organization. The more compatible and adaptable a program is, the more it can be incorporated into an organization's procedures.⁶ Shared decision-making regarding program implementation also relates to the above three factors because mutual input into programming decisions often involves issues related to compatibility, adaptability, and integration into existing practices. Similarly, among the staff variables in Table 2, in some cases the functions related to leadership, a program champion, and a supportive supervisor could be supplied by the same person, so these variables may or may not be separate depending on the circumstances.

⁶ Based on their pioneering survey of school-based implementation, Berman and McLaughlin (1976) were the first to stress the importance of mutual adaptation, that is, the organization should adapt to the innovation at the same time as the innovation is adapted to fit the organization. To our knowledge, the extent to which this has occurred during the diffusion of prevention or promotion programs has not been assessed.

A few studies have begun to assess the relative influence of different factors on implementation, or the possible interactions that occur among contextual factors. Such studies offer useful directions for future research. For example, a regression analysis indicated that training and providers' sense of self-efficacy emerged as significant predictors of the implementation of an arson prevention program for children, whereas technical assistance and intervention complexity did not (Henderson et al. 2006). Kallestad and Olweus (2003) studied the implementation of school-wide anti-bullying programs in Norway. Using multilevel modeling techniques, these authors reported that both individual and school level variables predicted implementation (e.g., teachers' perception of the problem and their sense of self-efficacy, school climate, and leadership in the school regarding anti-bullying). Kam et al. (2003) found a significant main effect for principal support and a significant interaction between principal support and the fidelity of teachers' implementation on student outcomes in a school-based mental health program. When both of these factors were high, students improved significantly on all outcomes; when principal support was low, however, several negative changes were observed in students.

Finally, Riley et al. (2001) successfully used a path analytic model to predict nearly half the variance in implementation of Canadian health promotion programs. The findings highlighted the importance of variables such as a shared vision, integration of programming, and partnerships with other agencies. In this study, variables related to funding, staff experience, and managerial support were not part of the final path model predicting implementation. Although it is premature at this point to reach conclusions about which variables are more important in which situation, studies that compare the influence of different variables, particularly those from multiple categories (i.e., at individual, organizational and community levels), are encouraged.
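The kinds of analyses described above can be sketched briefly. The example below simulates teacher- and school-level data (all variable names and values are invented for illustration) and then fits (a) an ordinary regression with a support-by-fidelity interaction, in the spirit of the Kam et al. (2003) moderation question, and (b) a mixed-effects model with teachers nested in schools, in the spirit of the Kallestad and Olweus (2003) multilevel approach. It is a minimal sketch of the general modeling strategies, not a reproduction of either published analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_schools, per_school = 20, 8
school = np.repeat(np.arange(n_schools), per_school)

# Hypothetical predictors: principal support varies by school, fidelity by teacher.
support  = rng.uniform(0, 1, n_schools)[school]
fidelity = rng.uniform(0, 1, school.size)

# Hypothetical student outcome: benefits concentrate where support AND fidelity are high.
outcome = (0.2 + 0.3 * support + 0.3 * fidelity
           + 0.6 * support * fidelity
           + rng.normal(0, 0.2, school.size))

df = pd.DataFrame({"school": school, "support": support,
                   "fidelity": fidelity, "outcome": outcome})

# Moderation question (cf. Kam et al. 2003): does principal support moderate
# the effect of implementation fidelity on outcomes?
ols = smf.ols("outcome ~ support * fidelity", data=df).fit()
print(ols.params[["support", "fidelity", "support:fidelity"]])

# Multilevel question (cf. Kallestad and Olweus 2003): individual- and
# school-level predictors with teachers nested in schools.
mlm = smf.mixedlm("outcome ~ support + fidelity", data=df, groups="school").fit()
print(mlm.summary())
```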
Convergent Evidence from Other Reviews

Three other systematic narrative reviews identifying factors affecting implementation have recently appeared (Greenhalgh et al. 2005; Fixsen et al. 2005; Stith et al. 2006). We did not examine these other sources until our review was completed in order to assess how much correspondence existed between the results of these other reviews and our findings. There are important differences in the types of programs and target populations examined across reviews. Greenhalgh et al. (2005) identified implementation efforts in multiple disciplines (e.g., sociology, communications, marketing, medicine, health promotion, organizational and management development, and manufacturing) that had relevance for the conduct of health care
treatment and prevention programs for all ages. Fixsen multiple array of interacting ecological factors present at
et al. (2005) also reviewed various types of innovations for the individual, organizational and community level. The
children and adults from multiple research areas (e.g., specific factors as noted above that have been identified as
education, agriculture, business, medicine, and mental important to implementation across research areas and
health). Finally, Stith et al. (2006) reviewed factors disciplines bear particular scrutiny.
affecting the implementation of prevention programs ini-
tiated by community coalitions to combat child abuse and
neglect and domestic violence against adults. Discussion
Despite the diversity in scope and purpose of these other
reviews, the findings are consistent with ours in both The first major purpose of this review was to evaluate
general and specific ways. At the general level, each of the research on the hypothesized relationship between imple-
other reviews confirms the necessity of a multi level eco- mentation and program outcomes. Findings offered strong
logical framework for understanding implementation and support for the premise that effective implementation is
that such a framework should consider variables related to the characteristics of innovations, communities, and individuals, as well as those associated with the prevention delivery and support systems. Each of these reviews also concludes that factors interact to influence implementation. Moreover, there is substantial overlap regarding the specific factors that affect implementation. For example, 21 of the 23 factors identified in our review were also identified in some fashion by Greenhalgh et al. (2005), 13 were noted by Fixsen et al. (2005), and 15 were noted by Stith et al. (2006). The correspondence between the results of our review and the others is identified by superscripts next to each factor in Table 2.

Furthermore, all four reviews (including ours) agreed on the importance of 11 factors. These consisted of funding, a positive work climate, shared decision-making, coordination with other agencies, formulation of tasks, leadership, program champions, administrative support, providers' skill proficiency, training, and technical assistance.

At the same time, some reviews mentioned additional factors, such as the importance of having (a) an accurate monitoring and feedback system in place as implementation unfolds (Greenhalgh et al. 2005; Fixsen et al. 2005), and (b) an infrastructure that provides incentives supporting the work of individuals whose specific job and responsibilities relate to program implementation (Fixsen et al. 2005). The latter observation suggests that new career paths might become established and be appealing to those who are particularly interested in bridging the research-to-practice gap by devoting their attention to the effective diffusion of evidence-based programs. A coordinated infrastructure to support evidence-based prevention and promotion efforts is emphasized in the ISF model (Wandersman et al. 2007).

When independent researchers use different methods to examine different literatures but nevertheless reach similar conclusions, there is good convergent validity to the common findings. In sum, convergent evidence obtained from several fields confirms that implementation is a complex developmental process that can be affected by a variety of factors.

The evidence reviewed here indicates that the level of implementation achieved is associated with better outcomes. Data from nearly 500 studies evaluated in five meta-analyses indicate that mean effect sizes are at least two to three times higher when programs are carefully implemented and free from serious implementation problems than when these conditions are not met. Data from 59 additional quantitative studies confirm that higher levels of implementation are often associated with better outcomes, particularly when fidelity or dosage is assessed. Implementation has been important in research conducted on a wide variety of programs, providers, community settings, and outcomes. In sum, there is extensive and persuasive evidence confirming the powerful impact of implementation on outcomes. A major implication emanating from these findings is that the assessment of implementation is an absolute necessity in program evaluations. Evaluations that lack carefully collected information on implementation are flawed and incomplete. Without data on implementation, research cannot document precisely what program was conducted or how outcome data should be interpreted.
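To make this analytic step concrete, the minimal sketch below (in Python) shows one way an evaluation team might link implementation records to outcome data and estimate how strongly the two are related. All names and values here (provider identifiers, fidelity and dosage ratings, outcome gains) are hypothetical illustrations, not data or procedures from any study reviewed above.

```python
# Illustrative sketch only: linking hypothetical provider-level implementation
# data (fidelity, dosage) to outcome data so that results can be interpreted in
# light of what was actually delivered.
import pandas as pd
import statsmodels.formula.api as smf

implementation = pd.DataFrame({
    "provider_id": [1, 2, 3, 4, 5, 6],
    "fidelity":    [0.92, 0.55, 0.78, 0.40, 0.85, 0.63],   # share of core content delivered
    "dosage":      [10, 6, 9, 5, 10, 7],                   # sessions delivered (of 10 planned)
})
outcomes = pd.DataFrame({
    "provider_id":  [1, 2, 3, 4, 5, 6],
    "outcome_gain": [0.41, 0.10, 0.33, 0.05, 0.38, 0.18],  # pre-post gain for each provider's group
})

merged = implementation.merge(outcomes, on="provider_id")

# Simple model of outcome gains as a function of achieved implementation levels.
model = smf.ols("outcome_gain ~ fidelity + dosage", data=merged).fit()
print(model.params)
```

Even a rudimentary analysis of this kind documents what program was actually delivered and allows outcome findings to be qualified by the level of implementation that was achieved.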
The second purpose of this review was to identify factors that influence the implementation process. Guided by an ecological framework, the hypothesis was supported that factors affecting implementation fall into five categories. These categories consist of characteristics of innovations, individuals, and communities, and features associated with the prevention delivery and support systems. The latter three categories are explicit parts of the ISF model. Overall, 23 relevant factors were identified, and they support the conclusion that contextual factors must be considered when innovations are implemented in real world settings. Reviews conducted on other literatures confirm the importance of many of the specific factors that we identified.

The finding that shared decision-making (community participation, collaboration) enhances implementation is consistent with a prominent principle in community psychology: empowering community members can be an effective way to solve local problems. Shared decision-making empowers individuals to exercise some control over local services and recognizes the importance of matching program delivery to local needs, preferences, and cultural norms. As a result, community ownership of a program should be promoted. Moreover, as previously noted, community participation increases the likelihood that effective programs will be sustained (Shediac-Rizkallah and Bone 1998).

Finding the Right Mix of Fidelity and Adaptation

The important role that adaptation can play in program implementation might be the most provocative finding of this review, and it deserves extended discussion. There has been substantial debate about whether new interventions should be implemented with maximum fidelity or whether adaptation (reinvention) should be permitted or encouraged to suit local needs and preferences (see Backer 2002; Blakely et al. 1987). A high level of fidelity is possible under favorable circumstances (Fagan and Mihalic 2003). Some interventions are more conducive to fidelity because they are highly structured and have accompanying detailed manuals or lesson plans, but many interventions do not have these features.

Current research suggests that fidelity and adaptation frequently co-occur and that each can be important to outcomes. That is, providers often replicate some parts of programs but modify others. Several studies in Table 1 indicate that higher levels of fidelity are significantly related to program outcomes. However, fidelity levels do not reach 100%, leaving room for adaptation to have an effect. Several surveys and larger studies of diffusion indicate that providers frequently modify programs during implementation (Berman and McLaughlin 1976; Rogers 2003; Ringwalt et al. 2003). Ringwalt et al. (2003) offered the following observation based on their survey of school-based programs: "We can thus say now with confidence that some measure of adaptation is inevitable and that for curriculum developers to oppose it categorically, even for the best of conceptual or empirical reasons, would appear to be futile" (p. 387). Unfortunately, most researchers have considered program adaptation as an implementation failure (i.e., a failure to achieve fidelity) and have not assessed its possible contribution to outcomes.

Nevertheless, three quantitative studies have found that adaptations made by providers improved program outcomes (Blakely et al. 1987; McGraw et al. 1996; Kerr et al. 1985). Furthermore, data from several qualitative studies on factors such as program adaptability and shared decision-making suggest that better implementation occurs when providers can make some program adjustments. Actually, this should not come as much of a surprise. If providers are knowledgeable about their communities, they should be able to modify a program to make it more effective in a specific context. Researchers can thus learn from local practitioners how to improve interventions, if they carefully measure what is happening during implementation.

A few research groups are beginning to examine how the implementation of theoretically important program components (i.e., active ingredients, core elements, mechanisms of change) relates to outcomes, sometimes with surprising results. Mitchell (1983) found that the types of activities performed during a mentoring program were unrelated to outcomes, perhaps because the quality of the relationship formed between mentor and youth was more important. In mentoring, it may be not what you do but how you do it that counts. A large multisite study found that the implementation of two components of welfare-to-work programs was positively related to outcomes (i.e., quick entry into the workforce, and tailoring intervention procedures to client needs), but a third (close monitoring of clients) was negatively related (Bloom et al. 2003). It is noteworthy that one of the former factors involved program adaptation (i.e., modifying procedures to match each client's needs).

Stevens et al. (2001) reported that the implementation of only two of six components of school-wide anti-bullying programs was significantly related to outcomes, and Telzrow et al. (2000) found significant but modest correlations between the delivery of six of eight presumed core components and outcomes in a school-based program. More research identifying the core components of programs that are related to positive outcomes will help determine which program features should be executed with fidelity and which can be modified to suit local conditions.

In our opinion, the fidelity-adaptation debate is framed inappropriately in either-or terms, and it suffers from imprecision in the measurement of important constructs. The prime focus should be on finding the right mix of fidelity and adaptation (cf. Backer 2002), and this mix cannot be determined without measuring each of these dimensions during implementation. Unfortunately, it is unclear in most studies of implementation exactly which components are reproduced faithfully, or exactly how the intervention is being altered in its new context.

It is particularly important to specify the theoretically important components of interventions, and to determine how well these specific components are delivered or altered during implementation. This is because core program components should receive emphasis in terms of fidelity. Other, less central program features can be altered to achieve a good ecological fit. Although several authors have stressed the need to identify the core components of interventions and monitor their delivery during implementation (e.g., Backer 2002; Durlak 1998; Dusenbury et al. 2003a; Mowbray et al. 2003), researchers have been slow to respond, so the active ingredients of most programs are currently unknown.
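As one purely illustrative possibility, the sketch below (in Python) records delivery at the level of individual program components so that fidelity to core components and any adaptations are both documented rather than inferred after the fact. The component names, adherence ratings, and adaptation notes are hypothetical and are not drawn from any program cited in this review.

```python
# Minimal sketch, assuming a hypothetical component checklist completed after a session:
# each record notes whether a component was delivered, how faithfully, and what was changed.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ComponentRecord:
    name: str
    core: bool                 # theoretically essential ("active ingredient") or optional
    delivered: bool            # was the component delivered at all?
    adherence: float           # 0-1 rating of how faithfully it was delivered
    adaptations: List[str] = field(default_factory=list)  # brief notes on modifications

session_log = [
    ComponentRecord("skill instruction", core=True, delivered=True, adherence=0.9),
    ComponentRecord("role play", core=True, delivered=True, adherence=0.6,
                    adaptations=["shortened to fit the class period"]),
    ComponentRecord("homework review", core=False, delivered=False, adherence=0.0,
                    adaptations=["dropped; replaced with a group discussion"]),
]

core_records = [r for r in session_log if r.core]
mean_core_adherence = sum(r.adherence for r in core_records) / len(core_records)
adapted = [r.name for r in session_log if r.adaptations]

print(f"Mean adherence to core components: {mean_core_adherence:.2f}")
print(f"Components adapted or replaced: {adapted}")
```

Aggregated across sessions, providers, and sites, records of this kind would supply exactly the component-level fidelity and adaptation data that most of the studies discussed above lack.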
By the same token, whenever programs are adapted, it is crucial to determine how and to what extent original core program components are changed, whether new elements are added and what they are, and whether parts of intended programs are omitted entirely. In some cases, adaptations might improve outcomes, whereas in other cases, changes might undermine program success. Therefore, it is essential to monitor the types of adaptations that occur instead of treating them as failures of implementation. Future research that collects good data on the aspects of both fidelity and adaptation that usually co-occur during implementation will be valuable in understanding how interventions work in real world settings.

Current research has relied on naturally occurring events to assess factors related to implementation. Experimental studies that manipulate conditions potentially affecting implementation and assign providers to different levels of these conditions would offer stronger scientific support for the various contextual factors affecting implementation.

A second type of study is also needed. It would be extremely helpful to compare the results of innovations offered in one setting that were conducted with high fidelity with innovations that were modified by providers. It seems feasible that some providers would be amenable to these types of experiments: "Let's compare the program already developed with the modified program you are suggesting to see how effective each one is in your setting." These direct comparisons would shed important light on how programs should operate in different contexts for maximum cost effectiveness and, at the same time, help bridge the gap between research and practice. In the spirit of collaboration, providers would be encouraged to contribute actively to the scientific process, and researchers could learn from providers how to improve interventions. Accordingly, we re-emphasize that researchers should carefully specify the theoretically important components of programs and monitor the delivery of these components as well as any modifications made by providers during implementation. Repeated studies of this sort can clarify the appropriate combination of faithful replication and program modification that is necessary in different settings and for different innovations to achieve good outcomes.

A Future Research Agenda

Several research questions should be addressed in future studies. Which aspects of implementation are important for which innovations in order to achieve which types of outcomes for which participants? How much overlap exists in the currently identified contextual factors that affect implementation? Have any important factors been overlooked? What is the relative influence of different factors, and how do they interact to affect how a program is conducted in real world settings? Can the strong presence of some factors offset the absence of others that would ordinarily promote implementation? Why does the level of implementation affect many, but not necessarily all, program outcomes?

Previous comments have focused on several needed improvements in implementation research. These include clarifying the relative influence of fidelity and adaptation, monitoring comparison groups to detect their receipt of alternate services, the limited quantitative focus on some aspects of implementation (e.g., quality, adaptation, reach, program differentiation, and participant responsiveness), the importance of studying contextual variables at multiple levels of influence, the need for experimental studies that vary the conditions under which implementation occurs, and how the method of assessing implementation can affect the findings. There are nine additional research issues that deserve attention. These involve considerations related to measurement and general design issues; when implementation data should be collected; when programs are ready for evaluation; the importance of assessing each intervention component and multiple aspects of implementation in the same study; conducting subgroup analyses; program modifications made for cultural or ethnic reasons; and testing for threshold effects. Finally, expanded journal policies regarding implementation data and analyses are necessary. Each of these issues is now briefly discussed.
1. Science cannot study what it cannot measure accurately, and it cannot measure what it does not define. Therefore, it is essential that future authors develop consensus on the terminology and operational definitions of relevant constructs and use psychometrically sound assessment strategies to study implementation. Measurement of relevant constructs is one important aspect of a well-designed study, and useful design and measurement guidelines have been offered by several authors (Backer 2002; Bellg et al. 2004; Granner and Sharpe 2004; Mowbray et al. 2003; Nastasi and Schensul 2005). For example, qualitative studies should evaluate the convergent validity of their assessments by using multiple methods of data collection (e.g., interviews, observations, document analyses, and surveys). Both quantitative and qualitative work should employ theory-driven analytic procedures. Whenever possible, comparison groups should be used in lieu of one-group designs to strengthen confidence regarding the relationship between implementation and program outcomes. There is also a need for more measures of organizational and coalition functioning (Granner and Sharpe 2004). Given the multidimensional complexities of most innovations, it seems unlikely that standardized measures of all eight aspects of implementation can be developed that are applicable to all types of innovations. In each specific case, however, it is essential to document the reliability and validity of implementation measures.

2. Fixsen et al.'s (2005) recommendation to create a systematic monitoring and feedback system for implementation is particularly helpful because of the variability that has been observed in levels of implementation over time. Implementation is not a static event but a process that unfolds over time, so the timing of data collection is important. Several studies indicate that implementation can deteriorate over time. This has occurred in eight studies that examined the issue over periods ranging from 2-3 days to a year (Levenson-Gingiss et al. 1994; McCormick et al. 1994; Noell et al. 1997; Rohrbach et al. 1993; Smith et al. 1992; Story et al. 2000; Tappe et al. 1995; Vadasy et al. 1997). Data collected early in the intervention might easily overestimate the level of implementation delivered at the end of the program, indicating the need for data collection at multiple time points.

3. A related issue concerns when innovations are ready for evaluation. Several years ago, a popular advertising campaign emphasized how important it was to "serve no wine before its time," suggesting that wines must age properly before they are ready to be consumed. This admonition can be transposed into the current discussion by recommending that no program should be evaluated until sufficient time has been allotted for its effective implementation. For example, some researchers conduct pilot programs to determine how well a program does in a new setting in order to improve implementation, sometimes by making program adjustments to increase its ecological fit (e.g., Komro et al. 2006). How much time to allot for effective implementation varies with the complexity of the intervention. Fixsen et al. (2005) recommend at least 1 year, whereas Felner et al. (2001) suggest that at least 3 years are required for major school-wide reforms. In contrast to investigations indicating deterioration in implementation levels over time, some reports of large-scale, multiyear interventions have shown that implementation improves from year to year (e.g., Cook et al. 1999; Elder et al. 1996; Felner et al. 2001; Riley et al. 2001). It is reasonable to assume that complicated interventions require more time to be conducted properly. Program evaluations conducted before implementation is sufficiently established will not do justice to the true impact of the intervention. The decision about when a particular program is ready for evaluation has to be made on a case-by-case basis, but the collection of implementation data is important for making such determinations.

4. It is important to monitor implementation in each major innovation component. This is exemplified by findings from a physical health promotion program that contained classroom- and cafeteria-based components and a home component (Story et al. 2000). The implementation of the two school-based components was good and was related to outcomes measured in the school setting, but few parents participated in the home component and positive results were not obtained for this part of the program. Instead of dismissing the value of the home component, the authors discussed how to improve that component's implementation.

5. Because there are eight different aspects to implementation, it is beneficial to analyze multiple aspects of implementation for the same intervention. This point is nicely illustrated in an outcome study of the Early Risers program, in which the fidelity, quality, and reach of implementation of separate child and family components were assessed (August et al. 2006). The fidelity and quality of implementation for each component were considered acceptable based on observational data; however, participation rates for each component (reach) were less than desired. In this study, neither fidelity nor quality of implementation affected the outcomes, but reach did (see Table 1). More studies comparing the effects of different aspects of implementation will shed light on which aspects are more important for different types of interventions.

6. It is also necessary to relate implementation data to the gains achieved by different subgroups of participants. For example, some authors (e.g., Botvin et al. 1990; Elias et al. 1986) found more positive effects for girls than for boys in a school-based intervention under conditions of better implementation. Felner et al. (2001) found that at intermediate levels of implementation, high-risk students showed little or no benefit (i.e., the effect sizes for the outcome measures were near zero), but at high implementation levels this same group demonstrated substantial improvements (i.e., effect sizes between 0.50 and 0.75, depending on the outcome domain). The Felner et al. (2001) results also suggest that different implementation thresholds might exist for different participants (see below).

7. Cultural factors are not listed explicitly in Table 2 as affecting implementation; nevertheless, they are pervasive and fundamental considerations. For example, social scientists have recognized the value of modifying interventions to suit the needs, preferences, and values of specific racial and ethnic populations (American Psychological Association 2003; National Research Council and Institute of Medicine 2002). This is another example of the importance of adaptation. It remains an empirical issue how much impact interventions have when they are conducted in a similar fashion for all groups or when they are modified for different cultural groups (Miranda et al. 2005). Once again, the issue of whether core components are changed is an important one. Resnicow et al. (1999) offered some useful distinctions with respect to cultural adaptations in terms of what they call "surface and deep structure." Surface structure involves decisions regarding how messages or materials are changed to match the observable characteristics of a population (e.g., language and cultural symbols). Such modifications would usually not affect core components. However, deep structure refers to pivotal cultural, social, environmental, or psychological factors specific to a group, and incorporating these elements into the intervention is more likely to involve an intervention's core components. Researchers should clarify if and how core components are being affected when they modify innovations for use with different cultural or racial groups.

8. The variability achieved across studies in levels of implementation suggests the potential value of examining implementation threshold effects (a minimal analytic sketch follows this list). For example, although it might seem that "more is always better," it is possible that once a certain level of implementation is attained (e.g., in dosage or fidelity), higher levels may not lead to significantly better outcomes, particularly if the intervention's core components have already been effectively delivered. One research group (Botvin 2000) has used a 60% level to assess the impact of implementation, although whether other thresholds would produce similar results is unknown. While the results of the Felner et al. (2001) investigation indicate that threshold effects are possible for different subgroups of participants, findings from the School Health Education Evaluation have indicated threshold effects for different types of outcomes (Connell et al. 1985). For example, gains in students' knowledge appeared much more quickly than behavioral changes (after 15 h of intervention compared to 35 h, respectively), but the highest gains in each area were not stable until after 50 h of intervention. The cost-benefit effectiveness of evidence-based programs could be increased substantially if it were known which aspects and levels of implementation were necessary to achieve the best results for different target populations and for different outcomes.

9. Current data confirming the importance of implementation data for program evaluations have implications for what information should be present in research reports. A few journals now require authors evaluating interventions to describe the steps taken to ensure good implementation. In most cases, however, authors focus on fidelity, and they only say that implementation was effectively achieved without supplying any data. This is insufficient. Researchers should routinely examine more aspects of implementation than fidelity, and journal requirements should be expanded to require authors to present their implementation data and assess its relationship to different program outcomes. Otherwise, reviewers will continue to find only a small percentage of outcome studies with enough information to study implementation adequately.
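To illustrate issue 8 in concrete terms, the sketch below (in Python) probes whether outcomes continue to improve beyond a candidate implementation threshold. The site-level fidelity scores and effect sizes are hypothetical, and the 60% cut-point simply echoes the level used by Botvin (2000); any other threshold, aspect of implementation, or participant subgroup (issue 6) could be examined in the same way.

```python
# Illustrative sketch only: comparing a linear implementation-outcome model with one
# that allows the relationship to change at a hypothetical 60% fidelity threshold.
import pandas as pd
import statsmodels.formula.api as smf

sites = pd.DataFrame({
    "fidelity":    [0.35, 0.45, 0.55, 0.62, 0.70, 0.78, 0.85, 0.90, 0.95],
    "effect_size": [0.02, 0.05, 0.12, 0.35, 0.40, 0.42, 0.45, 0.44, 0.46],
})
sites["above_threshold"] = (sites["fidelity"] >= 0.60).astype(int)

linear = smf.ols("effect_size ~ fidelity", data=sites).fit()
threshold = smf.ols("effect_size ~ fidelity * above_threshold", data=sites).fit()

# Does allowing a change at the cut-point improve the fit appreciably?
print(round(linear.rsquared, 3), round(threshold.rsquared, 3))
```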
Limitations and Final Comments

Some of the limitations of the current review should be noted. It was a challenge to translate the terms used in different disciplines and emanating from different conceptual perspectives into a common language. We offered working definitions of the major constructs that we investigated and noted the different terminology used across reports. However, others might define these constructs or interpret the findings of studies differently. We concluded that a factor affected implementation if its importance was confirmed in at least five studies. However, it is possible that investigators have overlooked some important factors. Results were consistent in investigations using better quantitative and qualitative procedures, but judgment was involved in evaluating the quality of different studies. Although the co-authors only included information reached through discussion and final agreement, others might construe the original data differently. Finally, although many studies were reviewed, this literature represents, at best, only one-third of all the outcome research on prevention and promotion programs for children and adolescents. Data relevant to implementation are missing for a majority of youth programs.

Nevertheless, current data offer strong empirical support for the conclusions that implementation affects the outcomes of promotion and prevention programs, and that multiple ecological factors affect the implementation process. Hopefully, the results of this review will help future researchers understand how programs can be conducted effectively in new settings to achieve maximum impact.

Acknowledgement Preparation of this paper was supported in part by a grant awarded to the first author by the William T. Grant Foundation (Grant #2212).
References

*Abbott, R. D., O'Donnell, J., Hawkins, J. D., Hill, K. G., Kosterman, R., & Catalano, R. F. (1998). Changing teaching practices to promote achievement and bonding to school. American Journal of Orthopsychiatry, 68, 542–552.
*Aber, J. L., Jones, S. M., Brown, J. L., Chaudry, N., & Samples, F. (1998). Resolving conflict creatively: Evaluating the developmental effects of a school based violence prevention program in neighborhood and classroom context. Development and Psychopathology, 10, 187–213.
*Allison, K. R., Silverman, G., & Dignam, C. (1990). Effects on students of teacher training in use of a drug education curriculum. Journal of Drug Education, 20, 31–46.
Altschuld, J. W., Kumar, D. D., Smith, D. W., & Goodway, J. D. (1999). School-based educational innovations: Case illustrations of context sensitive evaluations. Family and Community Health, 22, 66–79.
American Psychological Association. (2003). Guidelines on multicultural education, training, research, practice and organizational change for psychologists. American Psychologist, 58, 377–402.
Ary, D. V., Biglan, A., Glasgow, R., Zoref, L., Black, C., Ochs, L., et al. (1990). The efficacy of social-influence prevention programs versus "standard care": Are new initiatives needed? Journal of Behavioral Medicine, 13, 281–296.
*August, G. J., Bloomquist, M. L., Lee, S. S., Realmuto, G. M., & Hektner, J. M. (2006). Can evidence-based prevention programs be sustained in community practice settings? The Early Risers' advanced-stage effectiveness trial. Prevention Science, 7, 151–165.
*August, G. J., Egan, E. A., Realmuto, G. M., & Hektner, J. M. (2003a). Parceling component effects of a multifaceted prevention program for disruptive elementary school children. Journal of Abnormal Child Psychology, 31, 515–527.
*August, G. J., Lee, S. S., Bloomquist, M. L., Realmuto, G. M., & Hektner, J. M. (2003b). Dissemination of an evidence-based preventive intervention for aggressive children living in diverse, urban neighborhoods. Prevention Science, 4, 271–286.
*Backer, T. E. (2002). Finding the balance: Program fidelity and adaptation in substance abuse prevention: A state-of-the-art review. Rockville, MD: Center for Substance Abuse Prevention, Substance Abuse and Mental Health Services Administration.
*Barr, J. E., Tubman, J. G., Montgomery, M. J., & Soza-Vento, R. M. (2002). Amenability and implementation in secondary school antitobacco programs. American Journal of Health Behavior, 26, 3–15.
*Basch, C. E., Sliepcevich, E. M., Gold, R. S., Duncan, D. F., & Kolbe, L. J. (1985). Avoiding type III errors in health education program evaluations: A case study. Health Education Quarterly, 12, 315–331.
*Basen-Engquist, K., O'Hara-Tompkins, N., Lovato, C. Y., Lewis, M. J., Parcel, G. S., & Gingiss, P. (1994). The effect of two types of teacher training on implementation of smart choices: A tobacco prevention curriculum. Journal of School Health, 64, 334–339.
*Battistich, V., Schaps, E., Watson, M., Solomon, D., & Lewis, C. (2000). Effects of the child development project on students' drug use and other problem behaviors. The Journal of Primary Prevention, 21, 75–99.
*Battistich, V., Schaps, E., & Wilson, N. (2004). Effects of an elementary school intervention on students' "connectedness" to school and social adjustment during middle school. Journal of Primary Prevention, 24, 243–262.
*Bauman, L., Stein, R., & Ireys, H. (1991). Reinventing fidelity: The transfer of social technology among settings. American Journal of Community Psychology, 19, 619–639.
*Bell, M. L., Kelley-Baker, T., Rider, R., & Ringwalt, C. (2005). Protecting You/Protecting me: Effects of an alcohol prevention and vehicle safety program on elementary students. Journal of School Health, 75, 171–177.
*Bellg, A., Borrelli, B., Resnick, B., Hecht, J., Minicucci, D. S., Ory, M., et al. (2004). Enhancing treatment fidelity in health behavior change studies: Best practices and recommendations from the NIH behavior change consortium. Health Psychology, 23, 443–451.
*Berman, P., & McLaughlin, M. W. (1976). Implementation of educational innovation. The Educational Forum, 40, 345–370.
*Binford, V. M., & Newwell, J. M. (1991). Richmond, Virginia's two decades of experience with Ira Gordon's approach to parent education. The Elementary School Journal, 91, 233–237.
*Blakely, C. H., Mayer, J. P., Gottschalk, R. G., Schmitt, N., Davidson, W. S., Roitman, D. B., et al. (1987). The fidelity-adaptation debate: Implications for the implementation of public sector social programs. American Journal of Community Psychology, 15, 253–268.
*Bloom, H. S., Hill, C. J., & Riccio, J. A. (2003). Linking program implementation and effectiveness: Lessons from a pooled sample of welfare-to-work experiments. Journal of Policy Analysis and Management, 22, 551–575.
Botvin, G. J. (2000). Preventing drug abuse in schools: Social and competence enhancement approaches targeting individual-level etiologic factors. Addictive Behaviors, 25, 887–897.
*Botvin, G. J., Baker, E., Dusenbury, L., Botvin, E. M., & Diaz, T. (1995). Long-term follow-up results of a randomized drug abuse prevention trial in a white middle-class population. Journal of the American Medical Association, 273, 1106–1112.
*Botvin, G. J., Baker, E., Dusenbury, L., Tortu, S., & Botvin, E. (1990). Preventing adolescent drug abuse through a multimodal cognitive-behavioral approach: Results of a 3 year study. Journal of Consulting and Clinical Psychology, 58, 437–446.
*Botvin, G. J., Baker, E., Filazzola, A. D., & Botvin, E. M. (1990). A cognitive-behavioral approach to substance abuse prevention: 1 Year follow-up. Addictive Behaviors, 15, 47–63.
*Botvin, G. J., Baker, E., Renick, N. L., Filazzola, A. D., & Botvin, E. M. (1994). A cognitive-behavioral approach to substance abuse prevention. Addictive Behaviors, 9, 137–147.
*Botvin, G. J., Dusenbury, L., Baker, E., James-Ortiz, S., Botvin, E. M., & Kerner, J. (1992). Smoking prevention among urban minority youth: Assessing effects on outcome and mediating variables. Health Psychology, 11, 290–299.
*Botvin, G. J., Dusenbury, L., Baker, E., James-Ortiz, S., & Kerner, J. (1989). A skills training approach to smoking prevention among Hispanic youth. Journal of Behavioral Medicine, 12, 279–296.
*Bush, P. J., Zuckerman, A. E., Taggart, V. S., Theiss, P. K., Peleg, E. O., & Smith, S. A. (1989). Cardiovascular risk factor prevention in black school children: The "Know Your Body" evaluation project. Health Education Quarterly, 16, 215–227.
Chinman, M., Hannah, G., Wandersman, A., Ebener, P. P., Hunter, S. B., Imm, P., et al. (2005). Developing a community science research agenda for building community capacity for effective preventive interventions. American Journal of Community Psychology, 35, 143–157.
*Chinman, M., Imm, P., & Wandersman, A. (2004). Getting to outcomes 2004: Promoting accountability through methods and tools for planning, implementation, and evaluation (Technical Report). Santa Monica, CA: RAND Corporation.
*Cho, H., Hallfors, D. D., & Sanchez, V. (2005). Evaluation of a high school peer group intervention for at-risk youth. Journal of Abnormal Child Psychology, 33, 363–374.
*Conduct Problems Prevention Research Group. (1999). Initial impact of the Fast Track prevention trial for conduct problems: II. Classroom effects. Journal of Consulting and Clinical Psychology, 67, 648–657.
Connell, D. B., Turner, R. R., & Mason, E. F. (1985). Summary of findings on the school health evaluation: Health promotion, effectiveness, implementation, and costs. Journal of School Health, 55, 316–321.
*Cook, T. D., Habib, F. N., Phillips, M., Settersten, R. A., Shagle, S. C., & Degirmencioglu, S. M. (1999). Comer's school development program in Prince George's County, Maryland: A theory-based evaluation. American Educational Research Journal, 36, 543–597.
*Cook, T. D., Murphy, R. F., & Hunt, H. D. (2000). Comer's school development program in Chicago: A theory-based evaluation. American Educational Research Journal, 37, 535–597.
*Cooke, M. (2000). The dissemination of a smoking cessation program: Predictors of program awareness, adoption and maintenance. Health Promotion International, 15, 113–124.
*Cullen, K. W., Baranowski, T., Baranowski, J., Herbert, D., deMoor, C., Hearn, M. D., et al. (1999). Influence of school organizational characteristics on the outcomes of a school health promotion program. Journal of School Health, 69, 376–380.
*Cunningham, P. B., & Henggeler, S. W. (2001). Implementation of an empirically based drug and violence prevention and intervention program in public school settings. Journal of Consulting and Clinical Psychology, 30, 221–232.
Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18, 23–45.
*Derzon, J. H., Sale, E., Springer, J. F., & Brounstein, P. (2005). Estimating intervention effectiveness: Synthetic projection of field evaluation results. The Journal of Primary Prevention, 26, 321–343.
Domitrovich, C. E., & Greenberg, M. T. (2000). The study of implementation: Current findings from effective programs that prevent mental disorders in school-aged children. Journal of Educational and Psychological Consultation, 11, 193–221.
*Dubas, J. S., Lynch, K. B., Galano, J., Geller, S., & Hunt, D. (1998). Preliminary evaluation of a resiliency-based preschool substance abuse and violence prevention project. Journal of Drug Education, 28, 235–255.
*DuBois, D. L., Holloway, B. E., Valentine, J. C., & Cooper, H. (2002). Effectiveness of mentoring programs for youth: A meta-analytic review. American Journal of Community Psychology, 30, 157–198.
*Dufrene, B. A., Noell, G. H., Gilbertson, D. N., & Duhon, G. J. (2005). Monitoring implementation of reciprocal peer tutoring: Identifying and intervening with students who do not maintain accurate implementation. School Psychology Review, 34, 74–86.
Durlak, J. A. (1997). Successful prevention programs for children and adolescents. New York: Plenum.
Durlak, J. A. (1985). School-based prevention programs for children and adolescents. Thousand Oaks, CA: Sage.
Durlak, J. A. (1998). Why program implementation is important. Journal of Prevention & Intervention in the Community, 17, 5–18.
Dusenbury, L., Brannigan, R., Falco, M., & Hansen, W. B. (2003a). A review of research on fidelity of implementation: Implications for drug abuse prevention in school settings. Health Education Research, 18, 237–256.
*Dusenbury, L., Brannigan, R., Falco, M., & Lake, A. (2003b). An exploration of fidelity of implementation in drug abuse prevention among five professional groups. Journal of Alcohol and Drug Education, 47, 4–19.
*Elder, J. P., Perry, C. L., Stone, E. J., Johnson, C. C., Yang, M., Edmundson, E. W., et al. (1996). Tobacco use measurement, prediction, and intervention in elementary schools in four states: The CATCH study. Preventive Medicine, 25, 489–494.
*Elias, M. J., Gara, M., Ubriaco, M., Rothbaum, P. A., Clabby, J. F., & Schuyler, T. (1986). Impact of a preventive social problem solving intervention on children's coping with middle-school stressors. American Journal of Community Psychology, 14, 259–275.
*Elias, M. J., Zins, J. E., Graczyk, P. A., & Weissberg, R. P. (2003). Implementation, sustainability, and scaling up of social-emotional and academic innovations in public schools. School Psychology Review, 32, 303–319.
*Engstrom, M., Jason, L. A., Townsend, S. M., Pokorny, S. B., & Curie, C. J. (2002). Community readiness for prevention: Applying stage theory to multi-community interventions. Journal of Prevention & Intervention in the Community, 24, 29–46.
*Epstein, A. S. (1993). Training for quality: Improving early childhood programs through systematic training. Monographs of the High/Scope Educational Research Foundation, No. 9. Ypsilanti, MI: The High/Scope Press.
*Fagan, A. A., & Mihalic, S. (2003). Strategies for enhancing the adoption of school-based prevention programs: Lessons learned from the blueprints for violence prevention replications of the life skills training program. Journal of Community Psychology, 31, 235–253.
Felner, R. D., Favazza, A., Shim, M., Brand, S., Gu, K., & Noonan, N. (2001). Whole school improvement and restructuring as prevention and promotion. Journal of School Psychology, 39, 177–202.
*Felner, R. D., Jackson, A., Kasak, D., Mulhall, P., Brand, S., & Flowers, N. (1997). The impact of middle school reform for the middle years: Longitudinal study of a network engaged in Turning Points-based comprehensive school transformation. Phi Delta Kappan, 78, 528–532.
Fixsen, D. L., Naoom, S. F., Blasé, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231). Retrieved November 1, 2006, from http://nirn.fmhi.usf.edu/resources/publications/Monograph/pdf/monograph_full.pdf.
*Forgatch, M. S., Patterson, G. R., & DeGarmo, D. S. (2005). Evaluating fidelity: Predictive validity for a measure of component adherence to the Oregon model of parent management training. Behavior Therapy, 36, 3–13.
*Fors, S. W., & Doster, M. E. (1985). Implications of results: Factors for success. Journal of School Health, 55, 332–334.
*Fullan, M., & Pomfret, A. (1977). Research on curriculum and instruction implementation. Review of Educational Research, 47, 335–397.
*Gerstenblith, S. A., Soule, D. A., Gottfredson, D. C., Lu, S., Kellstrom, M. A., Womer, S. C., et al. (2005). After-school programs, antisocial behavior, and positive youth development: An exploration of the relationship between program implementation and changes in youth behavior. In J. L. Mahoney, J. S. Eccles & R. W. Larson (Eds.), Organized activities as contexts of development: Extracurricular activities, after-school and community programs. Mahwah, NJ: Erlbaum.
*Gingiss, P. L. (1992). Enhancing program implementation and maintenance through a multi-phase approach to peer-based staff development. Journal of School Health, 62, 161–166.
*Gingiss, P. L., Gottlieb, N. H., & Brink, S. G. (1994). Increasing teacher receptivity toward use of tobacco prevention education programs. Journal of Drug Education, 24, 163–176.
*Gingiss, P. M., Roberts-Gray, C., & Boerm, M. (2006). Bridge-It: A system for predicting implementation fidelity for school-based tobacco prevention programs. Prevention Science, 7, 197–207.
*Goldman, K. D. (1994). Perceptions of innovations as predictors of implementation levels: The diffusion of a nationwide health education campaign. Health Education Quarterly, 21, 433–443.
*Gottfredson, D. C., & Gottfredson, G. D. (2002). Quality of school-based prevention programs: Results from a national survey. Journal of Research in Crime and Delinquency, 39, 3–35.
*Gottfredson, D. C., Gottfredson, G. D., & Hybl, L. G. (1993). Managing adolescent behavior: A multiyear, multischool study. American Educational Research Journal, 30, 179–215.
Granner, M. L., & Sharpe, P. A. (2004). Evaluating community coalition characteristics and functioning: A summary of measurement tools. Health Education Research, 19, 514–532.
Greenhalgh, T., Robert, G., Macfarlane, F., Bate, P., Kyriakidou, O., & Peacock, R. (2005). Diffusion of innovations in health service organizations: A systematic literature review. Oxford: Blackwell.
*Greenwood, C. R., Dinwiddie, G., Bailey, V., Carta, J. J., Dorsey, D., Kohler, F. W., et al. (1987). Field replication of classwide peer tutoring. Journal of Applied Behavior Analysis, 20, 151–160.
*Greenwood, C. R., Tapia, Y., Abbott, M., & Walton, C. (2003). A building-based case study of evidence-based literacy practices: Implementation, reading behavior, and growth in reading fluency, K-4. The Journal of Special Education, 37, 95–110.
*Greenwood, C. R., Terry, B., Arreaga-Mayer, C., & Finney, R. (1992). The classwide peer tutoring program: Implementation factors moderating students' achievement. Journal of Applied Behavior Analysis, 25, 101–116.
*Hahn, E. J., Noland, M. P., Rayens, M. K., & Christie, D. M. (2002). Efficacy of training and fidelity of implementation of the life skills training program. Journal of School Health, 72, 282–287.
*Hansen, W. B., Graham, J. W., Wolkenstein, B. H., & Rohrbach, L. A. (1991). Program integrity as a moderator of prevention program effectiveness: Results for fifth grade students in the adolescent alcohol prevention trial. Journal of Studies on Alcohol, 52, 568–579.
*Hansen, W. B., & McNeal, R. B. (1999). Drug education practice: Results of an observational study. Health Education Research, 14, 85–97.
*Harachi, T. W., Abbott, R. D., Catalano, R. F., Haggerty, K. P., & Fleming, C. B. (1999). Opening the black box: Using process evaluation measures to assess implementation and theory building. American Journal of Community Psychology, 27, 711–731.
*Henderson, J. L., MacKay, S., & Peterson-Badali, M. (2006). Closing the research-practice gap: Factors affecting adoption and implementation of a children's mental health program. Journal of Clinical Child and Adolescent Psychology, 35, 2–12.
*Hogan, J. A., Baca, I., Daley, C., Garcia, T., Jaker, J., Lowther, M., et al. (2003). Disseminating science-based prevention: Lessons learned from CSAP's CAPTs. Journal of Drug Education, 33, 233–244.
Hogue, A., Liddle, H. A., Singer, A., & Leckrone, J. (2005). Intervention fidelity in family-based prevention counseling for adolescent problem behaviors. Journal of Community Psychology, 33, 191–211.
*Hopkins, R. H., Mauss, A. L., Kearney, K. A., & Weisheit, R. A. (1988). Comprehensive evaluation of a model alcohol education curriculum. Journal of Studies on Alcohol, 49, 38–50.
*Hopper, C. A., Munoz, K. D., Gruber, M. B., MacConnie, S., Schonfeldt, B., & Shunk, T. (1996). A school-based cardiovascular exercise and nutrition program with parent participation: An evaluation study. Children's Health Care, 25, 221–235.
*Ialongo, N., Werthamer, L., Kellam, S. G., Brown, C. H., Wang, S., & Lin, Y. (1999). Proximal impact of two first-grade preventive interventions on the early risk behaviors for later substance abuse, depression, and antisocial behavior. American Journal of Community Psychology, 27, 599–641.
*James, S., Reddy, P., Ruiter, A. M., & van den Borne, B. (2006). The impact of an HIV and AIDS life skills program on secondary school students in Kwazulu-Natal, South Africa. AIDS Education and Prevention, 18, 281–294.
*Jason, L. A., Pokorny, S. B., Kunz, C., & Adams, M. (2004). Maintenance of community change: Enforcing youth access to tobacco laws. Journal of Drug Education, 34, 105–119.
*Kalafat, J., & Ryerson, D. M. (1999). The implementation and institutionalization of a school-based youth suicide prevention program. Journal of Primary Prevention, 19, 157–175.
*Kallestad, J. H., & Olweus, D. (2003). Predicting teachers' and schools' implementation of the Olweus bullying prevention program: A multilevel study. Prevention & Treatment, 6. Retrieved June 20, 2005.
*Kam, C.-M., Greenberg, M. T., & Walls, C. T. (2003). Examining the role of implementation quality in school-based prevention using the PATHS curriculum. Prevention Science, 4, 55–63.
Kazdin, A. E., Bass, D., Ayers, W. A., & Rodgers, A. (1990). Empirical and clinical focus of child and adolescent psychotherapy research. Journal of Consulting and Clinical Psychology, 58, 729–740.
*Kealey, K. A., Peterson, A. V., Gaul, M. A., & Dinh, K. T. (2000). Teacher training as a behavior change process: Principles and results from a longitudinal study. Health Education & Behavior, 27, 64–81.
*Kegler, M. C., Steckler, A., Malek, S. H., & McLeroy, K. (1998a). A multiple case study of implementation in 10 local project ASSIST coalitions in North Carolina. Health Education Research, 13, 225–238.
*Kegler, M. C., Steckler, A., McLeroy, K., & Malek, S. H. (1998b). Factors that contribute to effective community health promotion coalitions: A study of 10 project ASSIST coalitions in North Carolina. Health Education & Behavior, 25, 338–353.
*Kegler, M. C., & Wyatt, V. H. (2003). A multiple case study of neighborhood partnerships for positive youth development. American Journal of Health Behavior, 27, 156–169.
*Kelly, J. A., Somlai, A. M., DiFranceisco, W. J., Otto-Salaj, L. L., McAuliffe, T. L., Hackl, K. L., et al. (2000). Bridging the gap between the science and service of HIV prevention: Transferring effective research-based HIV prevention interventions to community AIDS service providers. American Journal of Public Health, 90, 1082–1088.
Kendrick, J. S., Zahniser, C., Miller, N., Salas, N., Stine, J., Gargiuillo, P. M., et al. (1995). Integrating smoking cessation into routine public prenatal care: The smoking cessation pregnancy project. American Journal of Public Health, 85, 217–222.
*Kerr, D. M., Kent, L., & Lam, T. C. M. (1985). Measuring program implementation with a classroom observation instrument: The interactive teaching map. Evaluation Review, 9, 461–482.
*Komro, K. A., Perry, C. L., Veblen-Mortenson, S., Bosma, L. M., Dudovitz, B. S., Williams, C. L., et al. (2004). Brief report: The adaptation of Project Northland for urban youth. Journal of Pediatric Psychology, 29, 457–466.
*Komro, K. A., Perry, C. L., Veblen-Mortenson, S., Farbakhsh, K., Kugler, K. C., Alfano, K. A., et al. (2006). Cross-cultural adaptation and evaluation of a home-based program for alcohol use prevention among urban youth: The "Slick Tracy Home Team Program". The Journal of Primary Prevention, 27, 135–154.
*Kramer, L., Laumann, G., & Brunson, L. (2000). Implementation and diffusion of the rainbows program in rural communities: Implications for school-based prevention programming. Journal of Educational and Psychological Consultation, 11, 37–64.
*Lapan, R. T., Gysbers, N. C., & Petroski, G. F. (2001). Helping seventh graders be safe and successful: A statewide study of the impact of comprehensive guidance and counseling programs. Journal of Counseling & Development, 79, 320–330.
*Levenson-Gingiss, P., Gottlieb, N. H., & Brink, S. G. (1994). Increasing teacher receptivity toward use of tobacco prevention education programs. Journal of Drug Education, 24, 163–176.
*Lillehoj, C. J. G., Griffin, K. W., & Spoth, R. (2004). Program provider and observer ratings of school-based preventive intervention implementation: Agreement and relation to youth outcomes. Health Education & Behavior, 31, 242–257.
*Lynch, K. B., Geller, S. R., Hunt, D. R., Galano, J., & Dubas, J. S. (1998). Successful program development using implementation evaluation. Journal of Prevention & Intervention in the Community, 17, 51–64.
*MacDonald, M. A., & Green, L. W. (2001). Reconciling concept and context: The dilemma of implementation in school-based health promotion. Health Education & Behavior, 28, 749–768.
*McCormick, L. K., Steckler, A., & McLeroy, K. R. (1994). Diffusion of innovation in schools: A study of adoption and implementation of school-based tobacco prevention curricula. American Journal of Health Promotion, 9, 210–219.
*McGraw, S., Sellers, D., Stone, E., Bebchuk, J., Edmundson, E., Johnson, C., et al. (1996). Using process data to explain outcomes: An illustration from the child and adolescent trial for cardiovascular health (CATCH). Evaluation Review, 20, 291–312.
*McKenzie, T. L., Nader, P. R., Strikmiller, P. K., Yang, M., Stone, E. J., Perry, C. L., et al. (1996). School physical education: Effect of the child and adolescent trial for cardiovascular health. Preventive Medicine, 25, 423–431.
*Mihalic, S., Irwin, K., Fagan, A., Ballard, D., & Elliott, D. (2004). Successful program implementation: Lessons from blueprints. Electronic report. U.S. Department of Justice, Office of Justice Programs. Retrieved August 10, 2006, from http://www.ojp.usdoj.gov/ojjdp.
Miranda, J., Bernal, G., Lau, A., Kohn, L., Hwang, W., & LaFromboise, T. (2005). State of the science on psychosocial interventions for ethnic minorities. Annual Review of Clinical Psychology, 1, 113–142.
Mitchell, C. M. (1983). The dissemination of a social intervention: Process and effectiveness of two types of paraprofessional change agents. American Journal of Community Psychology, 11, 723–738.
*Moskowitz, J. M., Schaps, E., & Malvin, J. H. (1982). Process and outcome evaluations in primary prevention: The magic circle program. Evaluation Review, 6, 775–788.
Mowbray, C. T., Holter, M. C., Teague, G. B., & Bybee, D. (2003). Fidelity criteria: Developmental, measurement, and validation. American Journal of Evaluation, 24, 315–340.
Nastasi, B. K., & Schensul, S. L. (2005). Contributions of qualitative research to the validity of intervention research. Journal of School Psychology, 43, 177–195.
National Research Council and Institute of Medicine. (2002). Community programs to promote youth development. Washington, DC: National Academy Press.
*Noell, G. H., Duhon, G. J., Gatti, S. L., & Connell, J. E. (2002). Consultation, follow-up, and implementation of behavior management interventions in general education. School Psychology Review, 31, 217–234.
*Noell, G. H., & Witt, J. C. (1999). When does consultation lead to intervention implementation? The Journal of Special Education, 33, 29–35.
*Noell, G. H., Witt, J. C., Gilbertson, D. N., Ranier, D. D., & Freeland, J. T. (1997). Increasing teacher intervention implementation in general education settings through consultation and performance feedback. School Psychology Quarterly, 12, 77–88.
*Noell, G. H., Witt, J. C., Slider, N. J., Connell, J. E., Gatti, S. L., Williams, K. L., et al. (2005). Treatment implementation following behavioral consultation in schools: A comparison of three follow-up strategies. School Psychology Review, 34, 87–106.
*Parcel, G. S., Ross, J. G., Lavin, A. T., Portnoy, B., Nelson, G. D., & Winters, F. (1991). Enhancing implementation of the teenage health teaching modules. Journal of School Health, 61, 35–38.
*Payne, A. A., Gottfredson, D. C., & Gottfredson, G. D. (2006). School predictors of the intensity of implementation of school-based prevention programs: Results from a national study. Prevention Science, 7, 225–237.
*Pentz, M. A., Trebow, E. A., Hansen, W. B., MacKinnon, D. B., Dwyer, J. H., Johnson, C. A., et al. (1990). Effects of program implementation on adolescent drug use behavior: The Midwestern Prevention Project (MPP). Evaluation Review, 14, 264–289.
*Perry, C. L., Murray, D. M., & Griffin, G. (1990). Evaluating the statewide dissemination of smoking prevention curricula: Factors in teacher compliance. Journal of School Health, 60, 501–504.
*Perry, C. L., Sellers, D. E., Johnson, C., Pedersen, S., Bachman, K. J., Parcel, G. S., et al. (1997). The Child and Adolescent Trial for Cardiovascular Health (CATCH): Intervention, implementation, and feasibility for elementary schools in the United States. Health Education and Behavior, 24, 717–735.
*Piper, D. L., King, M. J., & Moberg, D. P. (1993). Implementing a middle school health promotion research project: Lessons our textbook didn't teach us. Evaluation & Program Planning, 16, 171–180.
*Pluye, P., Potvin, L., Denis, J. L., Pelletier, J., & Mannoni, C. (2005). Program sustainability begins with the first events. Evaluation and Program Planning, 28, 123–137.
*Rankin, W., Tarnai, J., Fagan, N. J., Mauss, A. L., & Hopkins, R. H. (1978). An evaluation of workshops designed to prepare teachers in alcohol education. Journal of Alcohol & Drug Education, 23, 1–13.
Resnicow, K., Baranowski, T., Ahluwalia, J. S., & Braithwaite, R. L. (1999). Cultural sensitivity in public health: Defined and demystified. Ethnicity and Disease, 9, 10–21.
*Resnicow, K., Cohn, L., Reinhardt, J., Cross, D., Futterman, R., Kirschner, E., et al. (1992). A 3-year evaluation of the Know Your Body program in inner-city schoolchildren. Health Education Quarterly, 19, 463–480.
*Resnicow, K., Davis, M., Smith, M., Baranowski, T., Lin, L. S., Baranowski, J., et al. (1998a). Results of the TeachWell worksite wellness program. American Journal of Public Health, 88, 250–257.
*Resnicow, K., Davis, M., Smith, M., Lazarus-Yaroch, A., Baranowski, T., Baranowski, J., et al. (1998b). How best to measure implementation of school health curricula: A comparison of three measures. Health Education Research, 13, 239–250.
*Reynolds, K. D., Franklin, F. A., Leviton, L. C., Maloy, J., Harrington, K. F., Yaroch, A. L., et al. (2000). Methods, results, and lessons learned from process evaluation of the High 5 school-based nutrition intervention. Health Education & Behavior, 27, 177–186.
*Richard, L., Lehoux, P., Breton, E., Denis, J., Labrie, L., & Leonard, C. (2004). Implementing the ecological approach in tobacco control programs: Results of a case study. Evaluation and Program Planning, 27, 409–421.
*Riley, B. L. (2003). Dissemination of heart health promotion in the Ontario public health system: 1989–1999. Health Education Research, 18, 15–31.
*Riley, B. L., Taylor, S. M., & Elliott, S. J. (2003). Organizational capacity and implementation change: A comparative case study of heart health promotion in Ontario public health agencies. Health Education Research, 18, 754–769.
*Ringwalt, C. L., Ennett, S., Johnson, R., Rohrbach, L. A., Simons-Rudolph, A., Vincus, A., et al. (2003). Factors associated with fidelity to substance use prevention curriculum guides in the nation's middle schools. Health Education & Behavior, 30, 375–391.
*Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York: Free Press.
*Rohrbach, L. A., Graham, J. W., & Hansen, W. B. (1993). Diffusion of a school-based substance abuse prevention program: Predictors of program implementation. Preventive Medicine, 22, 237–260.
*Ross, J. G., Luepker, R. V., Nelson, G. D., Saavedra, P., & Hubbard, B. M. (1991). Teenage health teaching modules: Impact of teacher training on implementation and student outcomes. Journal of School Health, 61, 31–34.
Saunders, R. P., Ward, D., Felton, G. M., Dowda, M., & Pate, R. R. (2006). Examining the link between program implementation and behavior outcomes in the lifestyle education for activity program (LEAP). Evaluation and Program Planning, 29, 352–364.
*Scheirer, M. A. (1990). The life cycle of an innovation: Adoption versus discontinuation of the fluoride mouth rinse program in schools. Journal of Health and Social Behavior, 31, 203–215.
*Scheirer, M. A. (2005). Is sustainability possible? A review and commentary on empirical studies of program sustainability. American Journal of Evaluation, 26, 320–347.
*Scheirer, M. A., Shediac, M. C., & Cassady, C. E. (1995). Measuring the implementation of health promotion programs: The case of the Breast and Cervical Cancer Program in Maryland. Health Education Research, 10, 11–25.
Shediac-Rizkallah, M. C., & Bone, L. R. (1998). Planning for the sustainability of community-based health programs: Conceptual frameworks and future directions for research, practice and policy. Health Education Research, 13, 87–108.
*Smith, D. W., McCormick, L. K., Steckler, A. B., & McLeroy, K. R. (1993). Teachers' use of health curricula: Implementation of growing healthy, project SMART, and the teenage health teaching modules. Journal of School Health, 63, 349–354.
*Smith, D. W., Redican, K. J., & Olsen, L. K. (1992). The longevity of growing healthy: An analysis of the eight original sites implementing the school health curriculum project. Journal of School Health, 62, 83–87.
*Smith, J. D., Schneider, B. H., Smith, P. K., & Ananiadou, K. (2004). The effectiveness of whole-school antibullying programs: A synthesis of evaluation research. School Psychology Review, 33, 547–560.
*Solomon, D., Battistich, V., Watson, M., Schaps, E., & Lewis, C. (2000). A six-district study of educational change: Direct and mediated effects of the Child Development Project. Social Psychology of Education, 4, 3–51.
*Spoth, R., Guyll, M., Trudeau, L., & Goldberg-Lillehoj, C. (2002). Two studies of proximal outcomes and implementation quality of universal preventive interventions in a community-university collaboration context. Journal of Community Psychology, 30, 499–518.
*Steckler, A., McLeroy, K. R., Goodman, R. M., Smith, D., Dawson, L., & Howell, K. (1989). The importance of school district policies in the dissemination of tobacco use curricula in North Carolina schools. Family & Community Health, 12, 14–25.
*Sterling-Turner, H. E., Watson, T. S., & Moore, J. W. (2002). The effects of direct training and treatment integrity on treatment outcomes in school consultation. School Psychology Quarterly, 17, 47–77.
*Stevens, V., Van Oost, P., & De Bourdeaudhuij, I. (2001). Implementation process of the Flemish antibullying intervention and relation with program effectiveness. Journal of School Psychology, 39, 303–317.
Stith, S., Pruitt, I., Dees, J., Fronce, M., Green, N., Som, A., et al. (2006). Implementing community-based prevention programming: A review of the literature. Journal of Primary Prevention, 27, 599–617.
*Story, M., Mays, R. W., Bishop, D. B., Perry, C. L., Taylor, G., Smyth, M., et al. (2000). 5-a-day power plus: Process evaluation of a multicomponent elementary school program to increase fruit and vegetable consumption. Health Education & Behavior, 27, 187–200.
*Stotts, A. L., DiClemente, C. C., & Dolan-Mullen, P. (2002). One-to-one: A motivational intervention for resistant pregnant smokers. Addictive Behaviors, 27, 275–292.
*St. Pierre, T. L., & Kaltreider, D. L. (2001). Reflections on implementing a community agency-school prevention program. Journal of Community Psychology, 29, 107–116.
*St. Pierre, T. L., & Kaltreider, D. L. (2004). Tales of refusal, adoption, and maintenance: Evidence-based substance abuse prevention via school extension collaborations. American Journal of Evaluation, 25, 479–491.
*Taggart, V. S., Bush, P. J., Zuckerman, A. E., & Theiss, P. K. (1990). A process evaluation of the District of Columbia "Know Your Body" project. Journal of School Health, 60, 60–66.
*Tappe, M. K., Galer-Unti, R. A., & Bailey, K. C. (1995). Long-term implementation of the teenage health teaching modules by trained teachers: A case study. Journal of School Health, 65, 411–415.
*Telzrow, C. F., McNamara, K., & Hollinger, C. L. (2000). Fidelity of problem-solving implementation and relationship to student performance. School Psychology Review, 29, 443–461.
Tobler, N. S. (1986). Meta-analysis of 143 adolescent drug prevention programs: Quantitative outcome results of program participants compared to a control or comparison group. Journal of Drug Issues, 16, 537–567.
*Tortu, S., & Botvin, G. (1989). School-based smoking prevention: The teacher training process. Preventive Medicine, 18, 280–289.
*Vadasy, P. F., Jenkins, J. R., Antil, L. R., Phillips, N. B., & Pool, K. (1997). The research-to-practice ball game. Remedial and Special Education, 18, 143–156.
*Vincent, M. L., Paine-Andrews, A., Fisher, J., Devereaux, R. S., Gonyear Dolan, H., Harris, K. J., et al. (2000). Replication of a community-based multicomponent teen pregnancy prevention model: Realities and challenges. Family and Community Health, 23, 28–45.
*Wall, M. A., Severson, H. H., Andrews, J. A., Lichtenstein, E., & Zoref, L. (1995). Pediatric office-based smoking intervention: Impact on maternal smoking and relapse. Pediatrics, 96, 622–628.
Wandersman, A. (2003). Community science: Bridging the gap between science and practice with community-centered models. American Journal of Community Psychology, 31, 227–242.
Wandersman, A., Duffy, J., Flaspohler, P., Noonan, R., Lubell, K., Stillman, L., et al. (2008). Bridging the gap between prevention research and practice: The interactive systems framework for dissemination and implementation. American Journal of Community Psychology, 41 (this issue). doi:10.1007/s10464-008-9174-z
*Wandersman, A., Morissey, E., Davino, K., Seybolt, D., Crusto, C., Nation, M., et al. (1998). Comprehensive quality programming and accountability: Eight essential strategies for implementing successful prevention programs. The Journal of Primary Prevention, 19, 3–30.
Weisman, S. A., Womer, S. C., Kellstrom, M., Bryner, S., Kahler, A., Slocum, L. A., et al. (2003). Maryland after school grant program part 1: Report on the 2001–2002 school year evaluation of the phase 3 after school programs. Unpublished manuscript, University of Maryland, College Park.
*Wiecha, J. L., El Ayadi, A. M., Fuemmeler, B. F., Carter, J. E., Handler, S., Johnson, S., et al. (2004). Diffusion of an integrated health education program in an urban school system: Planet Health. Journal of Pediatric Psychology, 29, 467–474.
Wilson, S. J., Lipsey, M. W., & Derzon, J. H. (2003). The effects of school-based intervention programs on aggressive behavior: A meta-analysis. Journal of Consulting and Clinical Psychology, 71, 136–149.
*Winfree, L. T., Lynsky, D. P., & Maupin, J. R. (1999). Developing local police and federal law enforcement partnerships: G.R.E.A.T. as a case study of policy implementation. Criminal Justice Review, 24, 145–168.
Wolff, T. (2001). A practitioner's guide to successful coalitions. American Journal of Community Psychology, 29, 173–181.