Current Advocacy Evaluation Practice
Julia Coffman
Center for Evaluation Innovation
October 2009
This brief offers an overview of current practice in the new and now rapidly growing field of advocacy evaluation. It highlights the kinds of approaches being used, offers specific examples of how they are being used and who is using them, and identifies the advantages and disadvantages of each approach.
The brief is organized around a summary matrix, which identifies four key evaluation design questions and then offers common advocacy evaluation responses to those questions.1 The questions are: 1) Who will do the evaluation? 2) What will the evaluation measure? 3) When will the evaluation take place? 4) What methodology will the evaluation use?
For each question, three options or possible responses are given. Options are based on the experiences of advocates, evaluators, and funders who already have responded to these questions and are learning about the benefits and drawbacks of their choices. The options for each question are not necessarily mutually exclusive. For some design questions, evaluations can blend two or even all three options simultaneously.

The matrix describes each option in brief. Boxes contain shorthand labels, brief descriptions, the options' main advantages (pro) and disadvantages (con), and examples of existing evaluation efforts that feature each option. The pages that follow then describe the options and examples in more detail.

There is no one "right" approach or response to each design question. Some options fit certain advocacy efforts better than others, and different evaluation users will make different choices. In addition, the matrix is not an exhaustive list of the approaches being used. Rather, it highlights the approaches that are among the most common in the field.
1 The matrix was informed by Organizational Research Services' A Guide to Measuring Advocacy and Policy. Find the guide on their website at www.organizationalresearch.com.
Summary matrix (excerpt)

Focus: What will the evaluation measure?

Advocacy Capacity: The focus is on how the advocacy organization itself has changed. PRO: Targets an outcome that is critical to advocacy success. CON: Does not tell about the advocacy effort's success in the policy arena. Example: Advocacy Capacity Assessment Tools.

Progress: The focus is on what the advocacy effort is achieving tactically on the way to policy change. PRO: Safeguards against concluding failure if policy is not achieved; data inform strategy. CON: Audiences may be less interested in these data; transparency may be an issue. Example: Connect U.S. Fund.

Impact: The focus is on longer-term outcomes and making a case for advocacy's contribution to them. PRO: Targets outcomes in which funders and external audiences express more interest. CON: Impact can take a long time; outcomes often are hard to measure; contribution is hard to isolate. Example: National Committee for Responsive Philanthropy.

Timing: When will the evaluation take place? Options are Before, During/Prospective, and After/Retrospective. Before: evaluators or evaluative thinking inform the strategy before it is implemented (PRO: clarifies strategies).

Who will do the evaluation?
Just as in other fields, with advocacy evaluation, the individuals conducting the evaluation can be external evaluation consultants or internal advocacy organization staff members. A third option, the combination approach, blends both. It features external consultants facilitating the evaluation's design and start-up while building internal evaluation capacity so that advocacy organizations eventually can take over the evaluation's implementation.
Most formal advocacy evaluations so far have been conducted by external evaluation consultants (although more now are using the combination external-internal approach). In part, this is because larger foundations that fund advocacy efforts and tend to have more resources for external evaluation have been among the first to enter this emerging field. In addition, because advocacy is notoriously hard to measure and this field is new, funders and advocates have partnered with professional evaluators to tackle this formidable challenge.
However, because many advocacy organizations are small and resources often are limited, only about a quarter of advocacy organizations currently engage in some form of evaluation.2 The reality moving forward is that many advocates will need to become their own evaluators. As the advocacy evaluation field grows, it will be important to make sure that resource-efficient ideas and supports exist for smaller advocacy organizations that must do their own monitoring and evaluation.

External

External evaluators commonly are used when advocacy efforts are large-scale campaigns or when they involve a collaborative or coalition of multiple organizations working toward similar policy goals.
External evaluators are particularly useful when independence or objectivity is
a primary concern, or when specific technical expertise is needed (e.g., to assess
advocates’ influence with key audiences or constituencies such as policymakers,
media, business, or voters). A potential disadvantage of this approach is that some
evaluators are not well-versed in advocacy or the policy change process, and this
knowledge can be critical in ensuring that evaluations are both realistic and useful.
Harvard Family Research Project’s (HFRP) evaluation of the David and Lucile Packard
Foundation’s Preschool for California’s Children grantmaking program is an example
of an external advocacy evaluation. Since 2003, HFRP has been collecting data about
the program’s progress toward establishing state-level policies that would make
high-quality preschool available to all three- and four-year-olds in the state. The
evaluation’s primary audience is the Packard Foundation, and data collected and
provided in real time are intended to inform the grantmaking strategy as it evolves.
Because the Packard Foundation maintains a close relationship with its grantees,
HFRP’s evaluation does not focus on what the grantees are doing and achieving
individually. Rather, it focuses on the strategy’s influence with external audiences
who play an important role in the policy process—e.g., state and local policymakers.
2 For more on what advocates are doing and their capacity for evaluation, see Innovation Network's publication Speaking for Themselves: Advocates' Perspectives on Evaluation on their website at www.innonet.org/advocacy.
Internal
Internal evaluation is conducted by staff members or units from within organizations implementing advocacy efforts. For advocacy evaluation, internal evaluation tends to be conducted on a smaller scale than external evaluation, as resources available for evaluation generally are more limited, and the individuals responsible for data collection often have additional responsibilities within the organization. The key advantages of this approach are that internal evaluators bring important knowledge of the organization and of advocacy to the table and are positioned to develop recommendations that internal stakeholders are likely to commit to (and that internal evaluators can follow up on). The main disadvantage is that evaluation capacity within advocacy organizations often is not high, both in terms of the time and resources needed for the evaluation and in terms of specific methodological expertise.
3 Learn more about the Preschool for California's Children evaluation on Harvard Family Research Project's website at www.hfrp.org.
Combination
The combination approach mixes external and internal evaluation. This might involve, for example, integrating self-evaluation into an external evaluation, or using external facilitators to help design and facilitate internal evaluation. Currently within the advocacy evaluation field, the latter approach is most common.

This approach's main benefit is that it helps to develop internal evaluation skills and capacity that can be sustained over time. It also helps to build support for evaluation and its use. A potential disadvantage is that this approach can work better in theory than in practice. The process generally starts off well, but unless an advocacy organization has sufficient resources and supports to sustain the evaluation once it is designed, enthusiasm and commitment can fall off when implementation begins.
The Annie E. Casey Foundation uses a combination external-internal approach with its KIDS COUNT initiative. KIDS COUNT is a network of child advocates in all 50 states, the District of Columbia, Puerto Rico, and the Virgin Islands. The Foundation has invited several grantees to participate in a pilot project to develop evaluation strategies for their advocacy and policy change work. Organizational Research Services (ORS) is working with these grantees to develop their evaluation strategies, a process that includes the development of outcome maps.5 Once the evaluations are designed, the expectation is that advocates will implement them on their own. While this process is still underway, the evaluators, advocates, and the Foundation have found that the process of identifying outcomes and linking them to strategies surfaces a host of strategic issues, including consensus within the organization, transparency, real-time relevance, belief in the value of evaluation, and the interconnectedness among organizational strategies.

What will the evaluation measure?
Advocacy evaluations generally focus their data collection on three types of outcomes or results—advocacy capacity, progress toward policy goals, or an advocacy effort's impact. While some advocacy evaluations focus on just one area, more often they focus on more than one.
4 See The Brainerd Foundation's theory of change on their website at www.brainerd.org/strategy.
5 See Organizational Research Services' publication Orientation to Theory of Change on their website at www.organizationalresearch.com for an easy-to-follow overview of theory of change techniques and how theory of change development fits into other types of outcomes-based planning.
Advocacy Capacity

To support advocacy capacity assessment, the Alliance for Justice, with assistance from Mosaica and in partnership with The George Gund Foundation, developed an Advocacy Capacity Assessment Tool that helps advocates and their funders assess their ability to sustain effective advocacy efforts; develop a plan for building advocacy capacity; and determine appropriate advocacy plans based on the organization's advocacy resources. The tool is available both online and in print, and has been used in numerous advocacy evaluations.6

TCC Group also has worked on this issue and has developed an advocacy capacity framework and complementary assessment tool. The framework outlines and defines in detail the four capacities—leadership, adaptive, management, and technical—of an effective advocacy organization.7
Progress
Most advocacy evaluations emphasize the importance of tracking tactical progress on the way to achieving policy change. A focus on measuring progress ensures that advocates have data that signal if they are on the right track or if midcourse corrections are needed. It also ensures that the evaluation does not conclude unfairly that the whole advocacy effort was a failure if a policy was not achieved. For example, an advocacy organization might lose the battle for a specific legislative, regulatory, or judicial objective, but by motivating a large number of citizens to advocate on its issue, it may have built a more experienced grassroots coalition for the future.

The Connect U.S. Fund offers an example of an evaluation that includes a focus on tracking progress. Connect U.S. promotes responsible U.S. global engagement through grantmaking and operations that advance foreign policy objectives in the areas of human rights, climate change, nuclear weapons and proliferation, civil-military affairs, and trade and development. Continuous Progress Strategic Services, developer of the online Advocacy Progress Planner, is conducting the evaluation.8
6 Find Alliance for Justice's Advocacy Capacity Assessment Tool at www.advocacyevaluation.org.
7 See TCC Group's publication What Makes an Effective Advocacy Organization? on their website at www.tccgrp.com.
Impact
For traditional program evaluation, capturing impact generally means that an evaluation uses a rigorous evaluation design to determine if a causal relationship can be established between a program and its intended outcomes. For advocacy evaluation, the meaning is different. An advocacy evaluation that focuses on impact does one or more of the following:

1) Assesses the longer-term "big" outcomes that precede policy change (e.g., public will, political will, shifts in social norms)

2) Determines whether a plausible and defensible case can be made that an advocacy effort has impacted the policy process or contributed to a policy change

3) Documents the long-term impact of advocacy and policy change on people's lives (or on the environment, the economy, etc.).
Of these three approaches, the first two are most common. With the first approach, longer-term "big" outcomes typically refer to important shifts in how policy stakeholders are thinking about or acting on certain policy issues. For example, many evaluations that use this approach attempt to operationalize and measure changes in public will or political will surrounding an issue.
With the second approach, because advocacy work typically is collaborative and
complex and the policy process is affected by many variables, definitively isolating
whether a certain policy outcome would not have happened without an advocacy
effort is difficult at best. Therefore, the standard that has developed in advocacy
evaluation is a focus on contribution (using data to determine if a credible case can
be made that the advocacy effort contributed to a particular policy outcome), rather
than attribution (showing a causal connection between an advocacy effort and a
policy outcome).
8 See the online Advocacy Progress Planner at www.planning.continuousprogress.org.
When will the evaluation take place?

Evaluation and evaluative thinking can play a role before, during, or after an advocacy strategy's implementation. Based on the principle that evaluation use increases when organizations can apply it to their planning and strategies, most advocacy evaluation is occurring during strategy implementation. This approach is particularly useful with advocacy efforts, where strategy is constantly evolving and regular feedback can be valuable for informing next steps. But many evaluators also work with advocates before advocacy strategies are implemented (or early on in their implementation) to ensure strategies have realistic and measurable outcomes. In addition, some retrospective evaluations are occurring after advocacy outcomes are known to identify what can be learned from the advocacy strategy's implementation and success (or lack thereof).
Before
When engaged early on in an advocacy strategy’s development, evaluators can be
helpful resources or partners as a strategy is being shaped. Commonly, this comes
in the form of evaluators working with advocates on the development of a theory of
change or logic model to articulate and clarify their strategy.
A number of tools have been created for use during both advocacy planning and
evaluation. For example:
● The Advocacy and Policy Change Composite Logic Model and its online version, the Advocacy Progress Planner (mentioned earlier), were developed to facilitate advocacy theory of change or logic model development.10

● The Alliance for Justice Advocacy Evaluation Tool helps organizations identify and describe their specific advocacy achievements, both for pre-grant and post-grant assessment.11
9 Find both the New Mexico and North Carolina reports Strengthening Democracy, Increasing Opportunity on the National Committee for Responsive Philanthropy's website at www.ncrp.org.
10 Find the Advocacy and Policy Change Composite Logic Model on Innovation Network's resource database at www.innonet.org/advocacy. See the online Advocacy Progress Planner at www.planning.continuousprogress.org.
Evaluators and evaluative thinking also can be useful in other ways. For example,
some evaluators are working with advocates on developing contingency logic
models. Drawing on the concept of scenario planning, these models imagine that the
political or economic context has changed in an important way, or that parts of the
strategy do not go as planned. Contingency logic models identify how the strategy
will shift if those scenarios occur.
11 For more information on both Alliance for Justice tools, go to www.advocacyevaluation.org.
12 Klein, G. (2007). Performing a project premortem. Harvard Business Review, 85(9), 18-19.
13 Guthrie, K., Louie, J., David, T., & Foster, C.C. (2005). The challenge of assessing policy and advocacy activities: Strategies for a prospective evaluation approach. San Francisco, CA: Blueprint Research and Design. Find this publication at www.calendow.org.
After/Retrospective
While the emphasis in the advocacy evaluation field is on prospective evaluation that occurs while the advocacy effort is being implemented, retrospective evaluation also can be extremely valuable. Retrospective evaluations take place after an advocacy effort has occurred and the outcome already is known. They look backward and examine the factors that led to or affected that outcome, and therefore are extremely useful for learning purposes. The benefit of a retrospective approach is that hindsight is 20/20. Often, it is easier to see after the fact where things went well and where the strategy might have been improved for better effect.

Michael Quinn Patton's case study evaluation of a judicial advocacy effort designed to influence a Supreme Court decision is an example of a retrospective approach.15 Patton used the "general elimination method" to determine whether a plausible and defensible case could be made that the advocacy effort in fact had an impact. The general elimination method begins with an intervention (advocacy) and searches for an effect. It uses evidence to eliminate alternative or rival explanations until the most compelling explanation remains. Patton's conclusion, based on a thorough review of the campaign's activities, key informant interviews, and analysis of the Supreme Court decision, was that the advocacy campaign did in fact contribute significantly to the Court's decision.
What methodology will the evaluation use?

Evaluations can use many different approaches or models. One study, for example, identified at least 22 available approaches.16 Within the advocacy evaluation field, however, the list is smaller, as many traditional program evaluation approaches do not work well with advocacy. The three options listed in the matrix—tracking/monitoring, developmental evaluation, and case studies—are not the only approaches being used in the field, but they are among the most common.
Tracking/Monitoring
Tracking and monitoring refers to the practice of identifying indicators, benchmarks, or performance measures (usually quantitative) connected to advocacy outcomes and then tracking those indicators over time. Tracking examines progress and identifies where midcourse corrections might be needed. For example, by determining whether issues or messages are appearing more in targeted media outlets, media tracking can identify whether media outreach tactics are making headway. Tracking's main disadvantage is that it often tells little about why changes are occurring over time.

14 For more information on evaluating community organizing, see Alliance for Justice's online living library of resources, Resources for Evaluating Community Organizing, at www.afj.org/for-nonprofits-foundations/reco.
15 Patton, M.Q. (2008). Advocacy impact evaluation. Journal of Multidisciplinary Evaluation, 5(9), 1-10.
16 Stufflebeam, D. (2001). Evaluation models. New Directions for Evaluation, 89 (special issue).
Developmental Evaluation
Michael Quinn Patton coined the term "developmental evaluation" to describe an approach to evaluating complex or evolving efforts, like advocacy. "Developmental evaluation refers to long-term, partnering relationships between evaluators and those engaged in innovative initiatives and development…Evaluators become part of a team whose members collaborate to conceptualize, design and test new approaches in a long-term, ongoing process of continuous improvement, adaptation, and intentional change. The evaluator's primary function in the team is to elucidate team discussions with evaluative questions, data and logic, and to facilitate data-based assessments and decision-making in the unfolding and developmental processes of innovation."19 Developmental evaluation differs from traditional evaluation in that evaluators do not make definitive judgments about success or failure. Rather, as with prospective evaluation, they provide feedback, generate learning, and either support strategy decisions or affirm changes to them.

This approach is useful for advocacy efforts that are complex and constantly evolving. Developmental evaluation allows evaluators to be flexible, so that when strategies change or critical events occur, evaluators quickly become aware of those changes and can adjust the evaluation accordingly.
Since 2005, Innovation Network, with support from The Atlantic Philanthropies, has been using a developmental approach for its evaluation of the Coalition for Comprehensive Immigration Reform (CCIR)—a collaborative of immigrant advocacy, grassroots, and religious groups, labor organizations, and policy leaders on Capitol Hill and throughout the United States. For several years, Innovation Network has been documenting CCIR's work as it unfolds and is capturing best practices to inform other coalitions and the advocacy field. Because immigration reform activity fluctuates and has evolved over time, Innovation Network has been flexible and has experimented with different approaches to ensure the evaluation is both useful and not burdensome for advocates. The evaluation fosters continuous learning so CCIR leadership can act on evaluation findings and make real-time adjustments to their activities and strategies.
17 Find Asibey Consulting's Are We There Yet? A Communications Evaluation Guide on The Communication Network's website at www.comnetwork.org.
18 Download the eNonprofit Benchmarks Study at www.e-benchmarksstudy.com.
19 Patton, M.Q. (2006). Evaluation for the way we work. The Nonprofit Quarterly, 13(1), 28-33.
Case Studies

A key advantage of case studies is that they tell a full story about what happened, rather than provide isolated data points that tell only part of the story or do not incorporate context or the environment in which the advocacy effort occurred. A potential disadvantage is that because context plays such an important role in this approach, at times it can be difficult to extrapolate lessons to other advocacy or political circumstances.
Case studies recently completed by Colin Knox of the University of Ulster and supported by The Atlantic Philanthropies offer an example of this approach. This series of seven case studies chronicles advocacy efforts in post-conflict Northern Ireland in the areas of human rights, children and youth, and aging. The case studies provide insights and lessons about how advocates achieved traction and influenced policy agendas in complex and challenging political environments that were extremely resistant to change.
Conclusion
As this brief demonstrates, the past several years have been a period of tremendous creativity and growth in the advocacy evaluation field. Where few resources and little expertise existed before, multiple tools and a growing base of experience now exist. This growth has been fueled by a group of pioneering funders, evaluators, and advocates who share a strong dedication to the field and are committed to growing it through collaboration.

To be sure, there is much more happening that has not been captured here, and there is enormous opportunity for further growth and innovation. Although early work in this field has generated a great deal of momentum, there is much left to do. For example, the field must expand beyond eager innovators and reach out to the much larger majority of individuals and organizations who still know little about advocacy evaluation or remain skeptical about its value. In addition, the field must fill in some clear gaps in its infrastructure, particularly in the areas of outreach and training.

Opportunities to stay updated on new developments as the field continues to grow include:

● Innovation Network has a free online clearinghouse and newsletter dedicated to advocacy evaluation (www.innonet.org/advocacy).

● The American Evaluation Association has an Advocacy and Policy Change Topical Interest Group (www.eval.org).