
At the Intersection of Health, Health Care and Policy

Cite this article as:
Sandra J. Tanenbaum, "Evidence-Based Practice As Mental Health Policy: Three Controversies And A Caveat," Health Affairs 24, no. 1 (2005): 163–173. doi: 10.1377/hlthaff.24.1.163

The online version of this article is available at:
http://content.healthaffairs.org/content/24/1/163.full.html

Mental Health

Evidence-Based Practice As
Mental Health Policy: Three
Controversies And A Caveat
In mental health care, turning research findings into clinical directives
is fraught with difficulties.
by Sandra J. Tanenbaum

Sandra Tanenbaum ([email protected]) is an associate professor in the School of Public Health at the Ohio State University in Columbus.

ABSTRACT: Evidence-based practice (EBP) is the subject of vigorous controversy in the field of mental health. In this paper I discuss three distinct but interrelated controversies: how inclusive the mental health evidence base should be; whether mental health practice is a variety of applied science; and how, and by whom, the effectiveness goal in mental health is defined. I provide examples of evidence-based policy in mental health. These controversies pertain as well to general medicine. To the extent that they remain unresolved, evidence-based policy making may lead to ineffective and limited care.

Evidence-based practice (EBP) refers in a general way to the application of scientific research findings to the treatment of individual patients.
Evidence-based medicine (EBM) is one field of EBP; evidence-based mental
health care is another. EBP is ubiquitous. It has a powerful presence in the clinical
literature and in plans for improvement of professional education, health care
management, and health policy making. One commentator finds that physicians
“can’t kick over a bedpan without hearing the phrase ‘evidence-based medicine’
rattle out.”1 EBP is more than a version of health care practice, however. It is a
movement, like the outcomes movement before it, of scientists, public officials,
private payers, and advocacy groups that seek to establish a new knowledge re-
gime in health services.2 This involves not only the funding and dissemination of
clinical scientific research but also “epistemological politics,” by which some
knowledge—and some knowers—are privileged in the consulting room and
policy arena.3 One proponent of EBP portrays the movement as “a revolution…
which asserts the supremacy of data over authority and tradition.”4 EBP can also
be viewed as asserting which data are supreme and pursuing the movement’s own
authority in health care.
This paper focuses mostly on psychologists—who share the field of mental
health with psychiatrists, counselors, social workers, psychiatric nurses, and so forth—but not only as a concession to space. Rather, psychology is a primary locus of the EBP debate, and psychologists share many concerns with other professionals in the field. In psychology, EBP influences the research priorities of fund-
ing sources, the editorial policies of scholarly journals, the program agendas of
scholarly conferences, the content of approved treatment lists from professional
organizations and public agencies, and the tenor of intraprofessional discourse.
EBP has been described as the cause of “psychological warfare between therapists
and scientists.”5 Whereas one prominent psychologist issued a “Manifesto for a
Science of Clinical Psychology,” the president-elect of the American Psychological
Association (APA) considers it “fundamentally insane” to require the use of scien-
tific treatment manuals in psychotherapy.6
The EBP debate is far ranging, and the various issues tend to run together. It is
possible, however, to identify three distinct but interrelated and policy-relevant
controversies. This paper depicts these three areas and offers a caveat about policy
making in the face of these controversies not only in mental health but also in
medicine. EBP sets methodological standards that may delegitimize effective
treatments, and when those are incorporated into health policy making, patients
and the polity may be adversely affected.

Controversy 1: Defining ‘Evidence’


The first controversy is, How restrictive should the definition of “evidence” be;
that is, does the dominant definition inappropriately privilege some kinds of treat-
ment over others? Although a number of evidence hierarchies—for example, that
of David Sackett and colleagues—consider randomized controlled trials (RCTs)
to be the gold standard for clinical decision making, a recent reformulation of the
need for evidence calls for “practical clinical trials” (PCTs), which “select clini-
cally relevant interventions to compare, include a diverse population of study par-
ticipants, recruit participants from a variety of practice settings, and collect data
on a broad range of health outcomes.”7 This proposal notes the limitations of vari-
ous alternatives: Traditional RCTs measure efficacy but not effectiveness, and
nonexperimental research methods suffer from selection bias and confounding.
■ List of validated treatments. In psychology, perhaps the most influential evi-
dence hierarchy has been the one adopted by the APA’s Division 12 (Clinical Psy-
chology). In the early 1990s Division 12 charged the Task Force on the Promotion
and Dissemination of Psychological Procedures with creating a list of Empirically
Validated Treatments (EVTs) for dissemination to practitioners and educators.
These were treatments (more recently called “empirically supported” and then “evi-
dence based”) for which there existed sufficiently rigorous evidence of efficacy—at
least two RCTs or ten single-case experimental studies—with patients fitting the
specific diagnostic criteria of the American Psychiatric Association’s Diagnostic and
Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV). A precondition for
this efficacy research was that treatments were administered according to treatment manuals. Division 12 issued successive lists in the late 1990s. The Division 12
Web site maintains a list of “well established” treatments that meet the original evi-
dentiary criteria. Although the Web site concedes that some “beneficial psycho-
therapies” may not yet have been studied, potential patients are urged to undergo
those on the list because they have met “basic scientific standards of effectiveness.”8
The Division 12 list has survived nearly a decade of controversy and has influ-
enced mental health policy in a number of states. For example, in response to a
1999 consent decree, Hawaii established a Division 12–like panel to review the ef-
fectiveness of treatments for a range of childhood and adolescent mental health
conditions. The Empirical Basis to Services (EBS) Task Force searched and evalu-
ated controlled studies in childhood mental health; it included administrators and
parents as well as mental health professionals, but much like Division 12, the task
force made between-group experimental design the gold standard.9 An EBS com-
mittee continues to review the literature to decide the content of practice guide-
lines; these are then appended to requests for proposal (RFPs) issued by the Child
and Adolescent Mental Health Division of the Hawaii Department of Health to
service providers seeking contracts. The guidelines specify what services will and
will not be provided to the division’s clients.10
■ Controversies about the list. Critics of Division 12 raise several points. They
object that the list glosses over the difference between treatments that do not ap-
pear because they have been found ineffective and those that do not appear because
they have not had requisite study. Another common criticism is that RCTs measure
efficacy but not effectiveness—that they put reliability above validity. EBP propo-
nents have answered this critique in a number of ways, including the revision of ear-
lier evidence hierarchies to favor PCTs, as above. APA Division 29 (Psychotherapy)
responded to Division 12 by creating its own task force to evaluate the effectiveness
of mental health treatment. Division 29 took the position that much, if not most, of
the benefit from psychological services results from the relationship between psy-
chotherapist and patient rather than from specific psychotherapeutic techniques.
The task force reviewed psychotherapy research, including not only RCTs but also naturalistic and process-outcome studies, and compiled a list of psychotherapy relation-
ships that work.11 To be sure, most psychologists recognize that techniques are exe-
cuted within relationships, which in turn rely on relationship-building techniques.
Still, they continue to debate the respective contributions of technique and relation-
ship to the success of psychological treatment and the best way of framing the object
of clinical inquiry.12
This controversy is complex, because underlying the debate is the question of
whether the research methodology to which an activity is suited will determine
whether or not it is deemed effective. In other words, assuming that psychothera-
peutic relationships are less compatible with, say, PCTs than treatment tech-
niques are, what should be the consequences for the psychological evidence base?
Should the public compilation of approved treatments be organized around techniques, or around the kinds of relationships found to be as or more effective in less methodologically controlled studies? Many mental health interventions, as “so-
cially complex services,” necessarily violate the assumptions—precise protocols,
equivalent trial conditions, and more—of experimental research.13 Experimental
methodology is especially ill suited to psychodynamic and humanistic psycho-
therapies: These therapies do not focus solely on a disorder to be alleviated (rather,
on a relationship with an individual patient); they do not enlist a predetermined
treatment (rather, principles of therapeutic process); and they do not seek unifor-
mity among therapists (rather, each therapist’s adherence to a theoretical orienta-
tion and a set of techniques that are compatible with his or her persona and the
patient’s needs).14 Perhaps not surprisingly, RCT findings favor the behavioral and
cognitive psychotherapies, where technique is paramount and more easily codi-
fied in treatment manuals.15 On the other hand, high-quality naturalistic and pro-
cess-outcome studies attest to the effectiveness of psychodynamic and other ther-
apies. The weaknesses of nonexperimental research have been documented (and
disputed), but what to do if the object of knowledge—in this case, the value of
psychotherapeutic relationships—is not accessible to experimental study?16 One
clinical leader implores, “Can’t we have both relationship and technique?”17 A more
policy-relevant question might be: Can EBP in mental health commit itself to an
inclusive enough evidence hierarchy not to privilege technique unfairly over rela-
tionship? Can it do so without further stigmatizing psychology vis-à-vis medicine
(including psychopharmacology), thus undermining mental health care’s claim to
effectiveness worthy of funding?

Controversy 2: Applying Research Evidence


The second controversy is, Can, and if so, should, practice consist of the faithful
application of research evidence? Applying research to practice is the raison d’etre
of the EBP movement. Its goal is to change practitioners’ behavior by bridging the
distance from research to practice and securing a central place for research in the
consulting room. Proponents of EBP are intent on discovering how best to build
this bridge. Generally speaking, they apply a diffusion-of-technology model,
whereby experimental research is disseminated to clinicians, sometimes in the
form of guidelines or protocols. EBP includes a role (for example, in the Institute of Medicine [IOM] defi-
nition) for clinical judgment. The weight of the EBP literature, however, holds
that large numbers of uninformed (or unethical) practitioners are responsible for
inadequate clinical care.18 More research, not more judgment, is prescribed.
■ Dissemination and practice. The dissemination of research findings to prac-
titioners has not, for the most part, brought practice into line with research. There
is, therefore, a large body of work devoted to increasing practitioners’ uptake of
study findings. Decision-support tools such as manuals, protocols, and guidelines
are designed to specify practice with greater or lesser authority. The Cochrane Col-
laboration has built an electronic bridge with an accessible database of RCTs, meta-analyses, and systematic reviews. In mental health, the National Institute of Mental Health (NIMH) sponsors the assembly and distribution of “implementation
toolkits,” which contain information and training resources for creating evidence-
based practitioners in NIMH-funded programs.19 Simultaneously, a team of psychol-
ogists is calling for more basic theory on transportability of research to practice.20
■ Science versus practice. The difficulty of changing practitioners’ behavior
contributes to a larger controversy in mental health about whether practice is in fact
applied science. Some parties to the dispute argue that no matter how rigorous,
probabilistic research cannot say with certainty how to treat the next patient;
knowledge of the aggregate is simply not knowledge of an individual. In this view,
clinical care necessarily entails judgment, and the exemplary “reflective practitio-
ner” decides what to do based not only on research but also on disciplined consider-
ation of his clinical experience and in-depth knowledge of the patient.21 Thus, re-
search pertains, but the attempt to substitute research for discretion is futile.
Adherents to this position cite naturalistic studies of real-time expert decision mak-
ing; these find nothing of the formal decision analysis that is used to turn research
findings into clinical directives.22 One prominent psychologist protests that “science
and practice are not the same, and no monistic ideology can make them the same.”23
Psychologists who dispute the applied science model offer alternative models of
practice-relevant inquiry. According to Donald Peterson, mature psychological
practice requires “disciplined inquiry” wherein the practitioner brings a “guiding
conception” as well as evidence and experience to bear on patient assessment,
plan formulation, and psychological intervention. The clinician’s ongoing evalua-
tion may lead to reformulating the treatment plan, and the final evaluation will
feed back into the clinician’s experience base and guiding conception.24 Daniel
Fishman, in turn, offers a Pragmatic Case Study Method that uses a highly speci-
fied report structure (corresponding to the disciplined inquiry model) to create
large, searchable databases of case studies that allow for a practitioner’s inductive
generalization.25 Proponents of EBP argue that this kind of practice knowledge is
too weak methodologically to identify effective treatment the way experimental
studies do. The actual evaluation of EBP (as opposed to clinical trials of some
treatments) in mental health is as yet limited: How practitioners know remains a
matter of dispute.
■ Fidelity versus discretion. A recent report from the Milbank Memorial Fund
describes several mental health interventions that have proved successful in clinical
trials.26 The authors emphasize fidelity in the application of this research—for ex-
ample, the prescription not just of an effective antipsychotic medication but also of
the dosage specified in the research. Among the report’s eight “essential points,”
then, are that providers should be held accountable for delivering evidence-based
practices and that measures of program fidelity are available to further that account-
ability. The authors go on to make two other points, however, that speak to the other
side of the practice controversy. They conclude that “a wide variety of effective treatments” should be available because overall effectiveness may not translate into effectiveness in “significant subgroups.” Furthermore, “treatment choice and wide selection” are necessary to “maximize treatment response and adherence to treatment.”
Admittedly, the authors urge variety and choice among effective treatments, but
they also describe individual variation within an overall effect. They note that even
an effective treatment does not work for every patient and that individual patients’
treatment preferences help determine what is effective for them. How does this
square with maximal practitioner fidelity to research findings, especially when ef-
fective treatment but not all effective treatments are mandated? How should EBP
operationalize fidelity to allow enough but not too much discretion?
■ Managed mental health care. The effort to hold practitioners to evidence-
based directives is widespread. Roughly two-thirds of Americans with private
health insurance are enrolled in managed behavioral health care organizations
(MBHOs), and these have used efficacy research to limit types and duration of
care.27 In the public sector, the District of Columbia Department of Mental Health
(DMH) has proposed a policy regarding evidence-based psychotherapy in that sys-
tem.28 According to the draft policy, all psychotherapy services delivered to commu-
nity-based adult consumers will conform to a short list of EBPs. The list includes
twelve disorders, five of which have only one treatment option. Psychodynamic psy-
chotherapy does not appear on the list, but techniques such as eye movement desen-
sitization and reprocessing do. The chief clinical officer of the DMH, in consultation
with experts, will review the list annually. Providers may submit requests to expand
it, but they must also be specifically credentialed for any treatment they render.
Otherwise, “they should not attempt to provide that evidence-based psychother-
apy.” Credentialing criteria are vague but represent a major departure from profes-
sional practice. Psychologist psychotherapists, for example, are licensed profession-
als whose scope of practice is delineated by legislatures. With few exceptions, they
are not credentialed for one technique at a time.
The controversy over psychological practice as applied science has implications
for the health system as well as individual practitioners. In one well-developed
scenario, mental health practice adheres increasingly to research-based manuals
or guidelines, and most treatments are performed by clinicians with less training.
Highly trained practitioners will design systems, conduct research, manage qual-
ity assurance programs, and, when necessary, care for patients whose “manual-
ized” treatment did not succeed.29 There are obvious cost advantages to a system
of this kind; some MBHOs now enjoy these advantages.
It is unclear, however, whether cost containment is a felicitous side effect of
EBP or whether EBP has been used to legitimate less costly treatment. Most men-
tal health RCTs by design investigate manualizable treatment techniques. These,
then, become the psychological services deemed effective in the treatment of men-
tal illness. Proponents of this approach argue that if manualized services, deliv-
ered by less highly trained practitioners, are effective, it is inexcusably wasteful not to deliver them. Critics counter that more individualized therapy with a more
highly trained clinician is more effective, even if less demonstrably so. From this
perspective, the efficacy of manualized treatments is short-term symptom relief
based on formal diagnosis under study conditions, which will prove limited for
many mentally ill people.
As the first controversy questioned the evidence half of EBP, this second one
questions the practice half. As above, the issue is one of inclusiveness. The desir-
ability of more high-quality evidence for mental health practice is a given, but does
evidence include the quasi-experimentation suited to much of practice? The perti-
nence of high-quality research to mental health practice is also a given, but does
practice include pluralistic and inductive knowledge that benefits from, but is
not, experimental research? At stake in both cases is the rigor and perceived rigor
of the mental health enterprise, along with the necessary complexity of effective
mental health care.

Controversy 3: What Is ‘Effective’?


The third controversy is, To the extent that EBP is the means to effective health
care, what is meant by “effective,” and who decides? For the movement, EBP is a
moral imperative: Experimental research produces therapeutic efficacy and there-
fore better health, a largely uncontested moral good.30 In other words, EBP occu-
pies the moral high ground because its practitioners do “what works.” At the soci-
etal level, doing what works not only improves health care quality but presumably
allows for an efficient allocation of scarce health care resources.31
It is not altogether clear, however, what it means for an intervention to work,
and this is the occasion for a third controversy. What does effectiveness mean? EBP’s
definition of effectiveness (like its definitions of evidence and practice) privileges ob-
jectivity; it documents the quantifiable outcome of a standardized intervention.
Even RCTs, however, are designed by researchers who make judgments about
what effectiveness is, and these decisions are further interpreted by the clinicians
who implement the trials.32 Definitions of effectiveness may respond to the mission
of a funding agency or the data that are available for analysis, among other things.
One review of physiotherapy research found thirty-one trials using twelve differ-
ent outcome measures; only two were common enough to allow for meta-analy-
sis.33 Were these the best measures of effectiveness?
In the field of mental health, the meaning of effectiveness is especially conten-
tious. As noted earlier, Division 12 defines effectiveness first as symptom relief. Vari-
eties of functional disability also figure as treatment outcomes. Some schools of
psychotherapy, however, view symptoms as manifestations of underlying mental
health conditions. For them, it is possible to eliminate symptoms but not suffer-
ing, and this does not amount to effective treatment. One possible effectiveness
measure is the patient’s understanding of what has happened to him or her. Given
that many “effective” treatments produce only partial relief, understanding may help patients relate better to their remaining symptoms.34 A simpler example of
competing definitions is whether marital therapy is more effective when the part-
ners remain married or when they are most satisfied, and does it depend on
whether or not they have young children?
In Oregon, “evidence based” has itself come to mean “cost-effective,” although
the meaning of cost-effectiveness is not very clear. In August 2003 the legislature
passed Senate Bill 267, which requires that for the biennium beginning 1 July 2005,
a number of state agencies, including those that deal with addiction and mental
health, will spend 25 percent of their program budgets on evidence-based pro-
grams. The figure rises to 50 percent in 2007 and 75 percent in 2009. Agencies that
do not meet this requirement will face budget consequences in the following bien-
nium. According to the legislation, an evidence-based program is one that “(a) in-
corporates significant and relevant practices based on scientifically based re-
search; and (b) is cost effective.” The Oregon Office of Mental Health and
Addiction Services offers an operational definition of evidence-based practice: RCTs
are at the top of the evidence hierarchy, and except at the lowest level of accept-
able evidence, implementation must be measured by a fidelity tool.35 For Oregoni-
ans, then, EBP research must include relative cost in its definition of effective treatment.
Although effectiveness is a compelling goal, it is not the only one. Patients, for
example, may forgo “what works” to avoid inhumane side effects or to preserve the
personal meanings they give to health or illness. In mental health, especially, effec-
tiveness can be used to justify coercive treatment of problematic patients. Even
the psychiatric recovery movement parts ways with EBP. Recovering mental pa-
tients can benefit from information about effectiveness, but at least some see the
effectiveness criterion as a possible infringement on their freedom.36 Most men-
tally ill people want what other people have: houses, jobs, and friends. Effective
treatment may or may not be the best means of getting these, and even the most ef-
fective means of getting them may not include the value of finding one’s own way.
Finally, who gets to say what effectiveness is? EBP may create it, but what effec-
tiveness looks like is only specified in the research process. Some proponents of
EBP in mental health consider its adaptability to market and political goal setting
as a strength.37 So if, in this view, managed care sets as an effectiveness goal fewer
mental health episodes in the course of a year, EBP can identify the treatment to
achieve this. Mental health care suppliers, including drug companies, and public
agencies with large mental health caseloads also set effectiveness criteria. Their
agendas are more or less transparent, invite more or less criticism, and are set
through a more or less participatory process. In any event, EBP as effectiveness re-
search responds to someone’s definition of effectiveness, and although it is often treated as self-evident, that definition has profound consequences. Patients’ values and preferences are putatively part of EBP, but they come into play mostly after the fact; once the effectiveness data have been collected and analyzed, patients
may determine whether the “effective” treatment suits them individually. Perhaps
this third controversy deserves a mini-hearing whenever EBP researchers state
their outcome measures. They should say why they have chosen these measures,
and whose interests they represent.

Caveat: The EBP Policy Train Is Leaving The Station


EBP in mental health may be more controversial than EBM. Psychologists may
be more diverse or contentious than physicians. Managed mental health care may
be more intrusive and rile practitioners more vigorously. The poignant failures of
mental health policy may call more loudly for change. On the other hand, EBM is
also broadly and deeply debated, and each of the controversies described here per-
tains to EBM as well.38
First, despite the dominance of the “medical model,” medical practice is plural-
istic, especially across specialties, and in general medicine there is new interest in
the relational aspects of diagnosis and treatment. The same epistemological ques-
tion arises: Can clinical research be both rigorous and inclusive? Second, EBM re-
searchers do not easily effect behavioral change in medical practitioners, and this
is sometimes blamed on the insufficiency of the applied science model. Disagree-
ments about the nature and value of clinical judgment are joined. Third, the defi-
nition of medical effectiveness is in dispute. As in mental health, some physicians fo-
cus not on symptom relief or functional measures but on the nature of suffering.39
As in mental health, it is not always clear which effectiveness definition applies in a
given study or review of studies, or why. Furthermore, both EBM and EBP in men-
tal health participate in vital debate while the policy train is leaving the station.
Proponents of EBP lament the difficulty of making policy responsive to evidence,
and this is generally true. To the extent that EBP has not resolved its controversies,
however, how much and what kind of policy making is called for? One recent
policy analysis warned of the potential undesirability of introducing hospital
quality standards into malpractice litigation. The authors examine the conse-
quences of promulgating standards based on questionable evidence and say: “We
should require especially convincing evidence when the law steps in to demand
universal compliance with the standard.”40

Implementation of the policies described in this paper deserves close
and serious study. Suffice it to say that although they do not demand universal
compliance, these policies are authoritative about access to critical services by
vulnerable people. The evidence of effectiveness on which they rely is more or less
convincing, from an EBP perspective or from others. The District of Columbia’s
evidence-based psychotherapy policy permits only dialectical behavioral therapy (DBT) for people with borderline personality disorder (BPD). These patients rep-
resent 10–20 percent of all mentally ill people and have high rates of service use, in-
cluding hospitalization. There is, however, at least one high-quality RCT showing
psychoanalytically oriented psychotherapy to be no less effective than DBT at ter-
mination and more so afterward; a compilation of clinical guidelines for BPD con-
cludes that different interventions are most effective for different patients.41 Un-
der circumstances like these—which are surely not uncommon in clinical
science—what authority should accrue to evidence-based policy? In EBM as well
as in EBP in mental health, policy may risk unnecessary ineffectiveness and depri-
vation along with the political encumbrances of confusion and mistrust.

The author acknowledges the help and encouragement of the editors and anonymous referees.

NOTES
1. K. Patterson, “What Doctors Don’t Know (Almost Everything),” New York Times Magazine, 5 May 2002,
71–74.
2. A.M. Epstein, “The Outcomes Movement—Will It Get Us Where We Want to Go?” New England Journal of
Medicine 323, no. 4 (1990): 266–270.
3. S.J. Tanenbaum, “Knowing and Acting in Medical Practice: The Epistemological Politics of Outcomes Re-
search,” Journal of Health Politics, Policy and Law 19, no. 1 (1994): 27–44.
4. Patterson, “What Doctors Don’t Know.”
5. C. Tavris, “Mind Games: Psychological Warfare between Therapists and Scientists,” Chronicle of Higher Edu-
cation 29, no. 5 (2003): B7–B10.
6. R.M. McFall, “Manifesto for a Science of Clinical Psychology,” Clinical Psychologist 44, no. 6 (1991): 75–88;
and B. Carey, “For Psychotherapy’s Claims, Skeptics Demand Proof,” New York Times, 10 August 2004.
7. S.R. Tunis, D.B. Stryer, and C.M. Clancy, “Practical Clinical Trials: Increasing the Value of Clinical Re-
search for Decision Making in Clinical and Health Policy,” Journal of the American Medical Association 290, no.
12 (2003): 1624–1632.
8. American Psychological Association, Society of Clinical Psychology (Division 12), A Guide to Beneficial Psy-
chotherapy, www.apa.org/divisions/div12/rev_est/index.html (15 November 2004).
9. B.F. Chorpita et al., “Toward Large-Scale Implementation of Empirically Supported Treatments for Children: A Review and Observations by the Hawaii Empirical Basis to Services Task Force,” Clinical Psychology: Science and Practice 9, no. 2 (2002): 165–190.
10. Eric Daleiden, research and evaluation specialist, Child and Adolescent Mental Health Division, Hawaii
Department of Health, telephone interview, 12 July 2004.
11. J.C. Norcross, ed., Psychotherapy Relationships That Work: Therapist Contributions and Responsiveness to Patient Needs
(New York: Oxford University Press, 2002).
12. See, for example, L.E. Beutler, “The Empirically Supported Treatments Movement: A Scientist-Practitio-
ner’s Response,” Clinical Psychology: Science and Practice 11, no. 3 (2004): 225–229.
13. N. Wolff, “Using Randomized Controlled Trials to Evaluate Socially Complex Services: Problems, Challenges, and Recommendations,” Journal of Mental Health Policy and Economics 3, no. 2 (2000): 97–109.
14. A.C. Bohart, M. O’Hara, and L.M. Leitner, “Empirically Violated Treatments: Disenfranchisement of Hu-
manistic and Other Psychotherapies,” Psychotherapy Research 8, no. 2 (1998): 141–157.
15. D.L. Chambless et al., “An Update on Empirically Validated Therapies,” Clinical Psychologist 49, no. 2 (1996):
5–18.
16. J. Concato, N. Shah, and R.I. Horwitz, “Randomized Controlled Trials, Observational Studies, and the Hi-
erarchy of Research Designs,” New England Journal of Medicine 342, no. 25 (2000): 1887–1892.
17. Beutler, “The Empirically Supported Treatments Movement.”
18. E.A. McGlynn et al., “The Quality of Health Care Delivered to Adults in the United States,” New England Journal of Medicine 348, no. 26 (2003): 2635–2645; and K.W. Goodman, Ethics and Evidence-based Medicine: Fallibility and Responsibility in Clinical Science (New York: Cambridge University Press, 2003).
19. W. Torrey et al., “Implementing Evidence-based Practices for Persons with Severe Mental Illness,” Psychiat-
ric Services 52, no. 1 (2001): 45–50.
20. W.K. Silverman, W.M. Kurtines, and K. Hoagwood, “Research Progress on Effectiveness, Transportability,
and Dissemination of Empirically Supported Treatments: Integrating Theory and Research,” Clinical Psy-
chology: Science and Practice 11, no. 3 (2004): 295–299.
21. D.A. Schön, The Reflective Practitioner: How Professionals Think in Action (New York: Basic Books, 1983).
22. G. Klein, Sources of Power: How People Make Decisions (Cambridge: MIT Press, 1999).
23. D.R. Peterson, “Science, Scientism, and Professional Responsibility,” Clinical Psychology: Science and Practice 11,
no. 2 (2004): 196–210.
24. D.R. Peterson, “Connection and Disconnection of Research and Practice in the Education of Professional
Psychologists,” American Psychologist 46, no. 4 (1991): 422–429.
25. D.B. Fishman, The Case for Pragmatic Psychology (New York: New York University Press, 1999).
26. A.F. Lehman et al., Evidence-based Mental Health Treatments and Services: Examples to Inform Public Policy, June 2004, www.milbank.org/reports/2004lehman/2004lehman.html (18 October 2004).
27. M. Oss, E.L. Jardine, and M.J. Pesare, Open Minds Yearbook of Behavioral Health and Employee Assistance Program
Market Share in the United States, 2002–2003 (Gettysburg, Pa.: Open Minds, 2002); and N.A. Cummings, “The
First Decade of Managed Behavioral Health Care: What Went Right and What Went Wrong,” in Psycho-
Economics: Managed Care in Mental Health in the New Millennium, ed. R.D. Weitz (New York: Haworth, 2000),
19–38.
28. D.C. Department of Mental Health, “Policy No. 311.2: Evidence-based Psychotherapy,” 2004.
29. S.C. Hayes, D.H. Barlow, and R.O. Nelson-Gray, The Scientist-Practitioner: Research and Accountability in the Age of
Managed Care, 2d ed. (Boston: Allyn and Bacon, 1999).
30. M. Gupta, “A Critical Appraisal of Evidence-based Medicine: Some Ethical Considerations,” Journal of Eval-
uation in Clinical Practice 9, no. 2 (2003): 111–121.
31. Norcross, Psychotherapy Relationships That Work; and P.E. Nathan and J.M. Gorman, A Guide to Treatments That Work, 2d ed. (New York: Oxford University Press, 2002).
32. J.J. Gonzales, H.L. Ringeisen, and D.A. Chambers, “The Tangled and Thorny Path of Science to Practice: Tensions in Interpreting and Applying Evidence,” Clinical Psychology: Science and Practice 9, no. 2 (2002): 204–209; and M. Berg, Rationalizing Medical Work: Decision-Support Techniques and Medical Practices (Cambridge: MIT Press, 1997).
33. W.A. Rogers, “Evidence-based Medicine in Practice: Limiting or Facilitating Patient Choice?” Health Expec-
tations 5, no. 2 (2002): 95–103.
34. S.B. Messer, “Empirically Supported Treatments: What’s a Non-Behaviorist to Do?” in Critical Issues in Psy-
chotherapy: Translating New Ideas into Practice, ed. B.D. Slife, R.N. Williams, and S.H. Barlow (Thousand Oaks,
Calif.: Sage Publications, 2001), 3–19; and Bohart et al., “Empirically Violated Treatments.”
35. State of Oregon, “An Act: SB 267, Chapter 669 Oregon Laws” (2003), www.leg.state.or.us/orlaws/sess0600.dir/0669ses.htm (15 November 2004); and Oregon Office of Mental Health and Addiction Services, “Proposed Operational Definition for Evidence-based Practices, Final Draft,” 1 June 2004, www.dhs.state.or.us/mentalhealth/ebp/definition0722.pdf (10 November 2004).
36. F. Frese et al., “Integrating Evidence-based Practices and the Recovery Model,” Psychiatric Services 52, no. 11
(2001): 1462–1468.
37. Hayes et al., The Scientist-Practitioner.
38. See, for example, S.J. Tanenbaum, “What Physicians Know,” New England Journal of Medicine 329, no. 17
(1993): 1268–1271; and Tanenbaum, “Knowing and Acting in Medical Practice.”
39. E.J. Cassell, The Nature of Suffering and the Goals of Medicine (New York: Oxford University Press, 1991).
40. M.M. Mello, D.M. Studdert, and T.A. Brennan, “The Leapfrog Standards: Ready to Jump from Market-
place to Courtroom?” Health Affairs 22, no. 2 (2003): 46–59.
41. A. Bateman and P. Fonagy, “Treatment of Borderline Personality Disorder with Psychoanalytically Ori-
ented Partial Hospitalization: An Eighteen-Month Follow-up,” American Journal of Psychiatry 158, no. 1
(2001): 36–42; and M.H. Stone, “Clinical Guidelines for Psychotherapy for Patients with Borderline Per-
sonality Disorder,” Psychiatric Clinics of North America 23, no. 1 (2000): 193–210.
