
Overview of Current Advocacy Evaluation Practice

Julia Coffman
Center for Evaluation Innovation

October 2009

©2009 Center for Evaluation Innovation. All rights reserved.


May not be reproduced whole or in part without written permission from Center for Evaluation Innovation.

This brief offers an overview of current practice in the new and now rapidly growing field of advocacy evaluation. It highlights the kinds of approaches being used, offers specific examples of how they are being used and who is using them, and identifies the advantages and disadvantages of each approach.

The brief is organized around the summary matrix below, which identifies four key evaluation design questions and then offers common advocacy evaluation responses to those questions.1 The questions are: 1) Who will do the evaluation? 2) What will the evaluation measure? 3) When will the evaluation take place? 4) What methodology will the evaluation use?

For each question, three options or possible responses are given. The options are based on the experiences of advocates, evaluators, and funders who already have responded to these questions and are learning about the benefits and drawbacks of their choices. The options for each question are not necessarily mutually exclusive. For some design questions, evaluations can blend two or even all three options simultaneously.

The matrix describes each option in brief. Each entry gives a shorthand label, a brief description, the option's main advantages (pro) and disadvantages (con), and examples of existing evaluation efforts that feature that option. The pages that follow then describe the options and examples in more detail.

There is no one "right" approach or response to each design question. Some options fit certain advocacy efforts better than others, and different evaluation users will make different choices. In addition, the matrix is not an exhaustive list of the approaches being used. Rather, it highlights the approaches that are among the most common in the field.

In general, the approaches described reveal that advocacy evaluation is different from more conventional evaluation approaches in which the evaluator develops an evaluation design and then reports back when the data are all collected and analyzed. Advocacy evaluation focuses more on strategic and targeted data collection and analysis that can be used to refine or adjust the advocacy strategy while it is being implemented.

1 The matrix was informed by Organizational Research Services' A Guide to Measuring Advocacy and Policy. Find the guide on their website at www.organizationalresearch.com.



Evaluation Design Questions and Common Advocacy Evaluation Responses

1. Evaluator: Who will do the evaluation?

External: External evaluators conduct evaluations and provide data for advocates or their funders to use.
PRO: Evaluators have data expertise and capacity; objectivity. CON: Costly; time limited; not sustainable; uses evaluator vs. advocate lens.
Examples: Preschool for California's Children; Environmental Support Center

Internal: Advocacy staff members are responsible for data collection, analysis, and for facilitating use.
PRO: Advocates understand their evaluation needs best. CON: Advocates' evaluation capacity may be limited; objectivity risks.
Examples: Humane Society of the US; The Brainerd Foundation, Wilburforce Foundation, ONE/NW

Combination: External evaluators facilitate initial evaluation design and data collection, and then advocates take it over.
PRO: Builds sustainable evaluation capacity within organizations. CON: Advocates' evaluation capacity may be limited (budget, time, roles).
Example: KIDS COUNT

2. Focus: What will the evaluation measure?

Advocacy Capacity: The focus is on how the advocacy organization itself has changed.
PRO: Targets an outcome that is critical to advocacy success. CON: Does not tell about the advocacy effort's success in the policy arena.
Examples: Advocacy Capacity Assessment Tools

Progress: The focus is on what the advocacy effort is achieving tactically on the way to policy change.
PRO: Safeguards against concluding failure if policy is not achieved; data inform strategy. CON: Audiences may be less interested in these data; transparency may be an issue.
Example: Connect U.S.

Impact: The focus is on longer-term outcomes and making a case for advocacy's contribution to them.
PRO: Targets outcomes in which funders and external audiences often express more interest. CON: Impact can take a long time; outcomes hard to measure; hard to isolate contribution.
Example: National Committee for Responsive Philanthropy

3. Timing: When will the evaluation take place?

Before: Evaluators or evaluative thinking inform the strategy before it is implemented.
PRO: Clarifies strategies, including the timing of outcomes; assesses risks. CON: Evaluators can be less familiar with advocacy.
Example: Theory of Change/Composite Logic Model

During/Prospective: Evaluation occurs as the strategy is implemented to track progress and inform strategy over time.
PRO: Positions the evaluation to be as useful as possible. CON: Can be time consuming and a more costly approach.
Example: Communities for Public Education Reform

After/Retrospective: Evaluation occurs after outcomes are known to assess advocacy's contribution and lessons learned.
PRO: Useful for identifying lessons; takes advantage of hindsight. CON: Post-hoc analysis may be less useful to advocates.
Example: Supreme Court Case Study

4. Approach: What methodology will the evaluation use?

Tracking/Monitoring: Over time, the evaluation tracks specific indicators, benchmarks, or performance measures connected to advocacy outputs or outcomes.
PRO: Efficient; tracks progress and signals needs for corrections. CON: Reveals little about why changes are (or are not) occurring.
Examples: Communications Evaluation/Tracking Tools

Developmental Evaluation: (Used with external evaluators) Evaluators are embedded on the advocacy team and emphasize a collaborative process of continuous improvement.
PRO: Can monitor and respond to evolving strategy. CON: Can be time consuming and a more costly approach.
Example: Coalition for Comprehensive Immigration Reform

Case Studies: Detailed descriptions and analyses (often qualitative) are performed on individual advocacy strategies or organizations and their results.
PRO: Offers a full story with context; useful for generating lessons. CON: Can be difficult to extrapolate learning to other contexts.
Example: Northern Ireland Case Studies
Evaluator: Who will do the evaluation?

Just as in other fields, with advocacy evaluation, the individuals conducting the evaluation can be external evaluation consultants or internal advocacy organization staff members. A third option, the combination approach, blends both approaches. It features external consultants facilitating the evaluation's design and start-up while building internal evaluation capacity so that advocacy organizations eventually can take over the evaluation's implementation.

Most formal advocacy evaluations so far have been conducted by external evalua-
tion consultants (although more now are using the combination external-internal
approach). In part, this is because larger foundations that fund advocacy efforts and
tend to have more resources for external evaluation have been among the first to
enter this emerging field. In addition, because advocacy is notoriously hard to mea-
sure and this field is new, funders and advocates have partnered with professional
evaluators to tackle this formidable challenge.

However, because many advocacy organizations are small and resources often are limited, only about a quarter of advocacy organizations currently engage in some form of evaluation.2 The reality moving forward is that many advocates will need to become their own evaluators. As the advocacy evaluation field grows, it will be important to make sure that resource-efficient ideas and supports exist for smaller advocacy organizations that must do their own monitoring and evaluation.

External

External evaluators commonly are used when advocacy efforts are large-scale campaigns or when they involve a collaborative or coalition of multiple organizations working toward similar policy goals.

External evaluators are particularly useful when independence or objectivity is
a primary concern, or when specific technical expertise is needed (e.g., to assess
advocates’ influence with key audiences or constituencies such as policymakers,
media, business, or voters). A potential disadvantage of this approach is that some
evaluators are not well-versed in advocacy or the policy change process, and this
knowledge can be critical in ensuring that evaluations are both realistic and useful.

Harvard Family Research Project’s (HFRP) evaluation of the David and Lucile Packard
Foundation’s Preschool for California’s Children grantmaking program is an example
of an external advocacy evaluation. Since 2003, HFRP has been collecting data about
the program’s progress toward establishing state-level policies that would make
high-quality preschool available to all three- and four-year-olds in the state. The
evaluation’s primary audience is the Packard Foundation, and data collected and
provided in real time are intended to inform the grantmaking strategy as it evolves.
Because the Packard Foundation maintains a close relationship with its grantees,
HFRP’s evaluation does not focus on what the grantees are doing and achieving
individually. Rather, it focuses on the strategy’s influence with external audiences
who play an important role in the policy process—e.g., state and local policymakers

2 For more on what advocates are doing and their capacity for evaluation, see Innovation Network's publication Speaking for Themselves: Advocates' Perspectives on Evaluation on their website at www.innonet.org/advocacy.



and other policy influencers in the state such as media, the business community,
and other politically-important constituencies. The evaluation features two new
methods—the bellwether methodology and policymaker ratings—to capture the
advocacy strategy’s influence with these audiences.3

Mosaica's recent evaluation with the Environmental Support Center (ESC)—an organization focused on improving the effectiveness of nonprofits working on environmental and environmental justice issues—is another example of an external evaluation. This evaluation assessed ESC's programs to support the advocacy capacity of small organizations. It yielded lessons about what ESC could do to improve its efforts, as well as valuable learning about what small organizations are capable of in terms of both advocacy and evaluation.

Internal

Internal evaluation is conducted by staff members or units from within organizations implementing advocacy efforts. For advocacy evaluation, internal evaluation tends to be conducted on a smaller scale than external evaluation, as resources available for evaluation generally are more limited, and the individuals responsible for data collection often have additional responsibilities within the organization. The key advantages of this approach are that internal evaluators bring important knowledge of the organization and of advocacy to the table and are positioned to develop recommendations that internal stakeholders are likely to commit to (and internal evaluators can follow up on recommendations). The main disadvantage is that evaluation capacity within advocacy organizations often is not high, both in terms of the time and resources needed for the evaluation and in terms of specific methodological expertise.

The Humane Society of the United States is an example of an advocacy organization with an internal evaluation effort. Several years ago, the Humane Society—a national NGO with a budget exceeding $100 million and more than 400 staff—attempted to develop an approach for quantifying its policy and advocacy efforts and outcomes. Spearheaded by the organization's Director of Strategy and Performance Measurement, this effort resulted in a framework that identifies key outcome areas and indicators that can be used across the organization. Measurements focus on laws passed at the state and federal level; the enforcement of existing laws; and both formal and informal alliances with networks of policy enablers. Also, in a unique move, the framework includes scales that assign "weights" to different types of advocacy and policy outcomes (because some policies or outcomes have broader or larger-scale implications than others).
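To make the weighting idea concrete, here is a minimal sketch in Python of how differently scaled outcomes might be rolled into one comparable score. The categories, weights, and counts are invented for illustration; they are assumptions, not the Humane Society's actual framework.

    # Hypothetical illustration of weighted advocacy outcome scoring.
    # Categories, weights, and counts are invented for this sketch; they do
    # not reflect the Humane Society's actual framework.
    OUTCOME_WEIGHTS = {
        "federal_law_passed": 10,     # broad, durable impact
        "state_law_passed": 5,
        "regulation_enforced": 3,
        "formal_alliance_built": 2,
        "informal_alliance_built": 1,
    }

    def weighted_score(outcome_counts):
        """Sum outcome counts multiplied by their assigned weights."""
        return sum(OUTCOME_WEIGHTS[name] * count
                   for name, count in outcome_counts.items())

    # Example: a year with two state laws, one enforcement win, and three alliances.
    year_one = {"state_law_passed": 2, "regulation_enforced": 1,
                "formal_alliance_built": 1, "informal_alliance_built": 2}
    print(weighted_score(year_one))  # 2*5 + 1*3 + 1*2 + 2*1 = 17

The point of such a scheme is simply that a single statewide law can count for more than several informal alliances, so year-over-year totals remain comparable.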

The Brainerd Foundation, Wilburforce Foundation, and ONE/Northwest (a grantee of both foundations) also are working on internal evaluation. These three conservation-focused organizations are trying to develop resource-effective evaluation approaches that advocacy organizations can implement on their own. The Brainerd Foundation, for example, has articulated its strategic plan as a theory of change with clear advocacy and policy change outcomes (e.g., strengthened base of support; strengthened organizational capacity; and improved policies).4 The Foundation does not prescribe a specific evaluation process for its grantees; instead it promotes ongoing self-evaluation and reflection, particularly in areas aligned with the Foundation's outcomes. To this end, The Brainerd Foundation has developed a grantee reporting form that cultivates a culture of learning and is aimed at strengthening advocacy work on the ground.

3 Learn more about the Preschool for California's Children evaluation on Harvard Family Research Project's website at www.hfrp.org.

Combination
The combination approach mixes external and internal evaluation. This might
involve, for example, integrating self-evaluation into an external evaluation, or us-
ing external facilitators to help design and facilitate internal evaluation. Currently
within the advocacy evaluation field, the latter approach is most common.

This approach’s main benefit is that it helps to develop internal evaluation skills and
capacity that can be sustained over time. It also helps to build support for evalua-
tion and its use. A potential disadvantage is that this approach can work better in
theory than in practice. The process generally starts off well, but unless an advocacy
organization has sufficient resources and supports to sustain the evaluation once it
is designed, enthusiasm and commitment can fall off when implementation begins.
The Annie E. Casey Foundation uses a combination external-internal approach with its KIDS COUNT initiative. KIDS COUNT is a network of child advocates in all 50 states, the District of Columbia, Puerto Rico, and the Virgin Islands. The Foundation has invited several grantees to participate in a pilot project to develop evaluation strategies for their advocacy and policy change work. Organizational Research Services (ORS) is working with these grantees to develop their evaluation strategies, a process that includes the development of outcome maps.5 Once designed, the expectation is that advocates will implement their own evaluations. While this process is still underway, the evaluators, advocates, and the Foundation have found that the process of identifying outcomes and their linkage to strategies raises a host of strategic questions, including consensus within the organization, transparency, real-time relevance, belief in the value of evaluation, and the interconnectedness among organizational strategies.

Focus: What will the evaluation measure?

Advocacy is unique in that its end goals—typically whether policies or appropriations are achieved (or blocked)—are easy to measure. The much harder challenge is assessing what happens either before or after that goal is achieved.

Advocacy evaluations generally focus their data collection on three types of out-
comes or results—advocacy capacity, progress toward policy goals, or an advocacy
effort’s impact. While some advocacy evaluations focus on just one area, more often
they focus on more than one.

4 See The Brainerd Foundation's theory of change on their website at www.brainerd.org/strategy.
5 See Organizational Research Services' publication Orientation to Theory of Change on their website at www.organizationalresearch.com for an easy-to-follow overview of theory of change techniques and how theory of change development fits into other types of outcomes-based planning.



Advocacy Capacity

Advocacy capacity refers to the knowledge, skills, and systems an organization needs to implement and sustain effective advocacy work. From the very beginning, the advocacy evaluation field has recognized the critical importance that advocacy capacity plays in determining the effectiveness of an organization's policy change efforts. Often, advocacy's most visible results are in the form of increased capacity through, for example, stronger leadership and partnerships, improved media skills or infrastructure, or increased knowledge and skills needed to navigate complex legislative, judicial, executive branch, and election-related processes.

Because advocacy capacity plays such an important role in success, and because some advocacy funders are including resources specifically for advocacy capacity building, many evaluations are treating it as a key evaluation outcome. Capacity typically is measured at the evaluation's start, and the results are used to identify areas in which the organization might improve. Repeated assessments later determine whether changes have occurred.
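A minimal sketch in Python of that repeated-measurement logic follows. The capacity areas and the rating scale are illustrative assumptions, not the scales used by any particular assessment tool.

    # Illustrative comparison of a baseline capacity assessment with a
    # follow-up one. Capacity areas and the 1-4 rating scale are assumptions.
    CAPACITY_AREAS = ["leadership", "media_skills", "policy_knowledge", "partnerships"]

    def capacity_change(baseline, followup):
        """Return the change in each capacity area between two assessments."""
        return {area: followup[area] - baseline[area] for area in CAPACITY_AREAS}

    baseline = {"leadership": 2, "media_skills": 1, "policy_knowledge": 3, "partnerships": 2}
    followup = {"leadership": 3, "media_skills": 3, "policy_knowledge": 3, "partnerships": 4}

    for area, delta in capacity_change(baseline, followup).items():
        print(f"{area}: {'+' if delta >= 0 else ''}{delta}")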

To support advocacy capacity assessment, the Alliance for Justice, with assistance
from Mosaica and in partnership with The George Gund Foundation, developed
an Advocacy Capacity Assessment Tool that helps advocates and their funders
assess their ability to sustain effective advocacy efforts; develop a plan for building
advocacy capacity; and determine appropriate advocacy plans based on the organi-
zation’s advocacy resources. The tool is available both online and in print, and has
been used in numerous advocacy evaluations.6

TCC Group also has worked on this issue and has developed an advocacy capac-
ity framework and complementary assessment tool. The framework outlines and
defines in detail the four capacities—leadership, adaptive, management, technical—
of an effective advocacy organization.7

Progress

Most advocacy evaluations emphasize the importance of tracking tactical progress on the way to achieving policy change. A focus on measuring progress ensures that advocates have data that signal if they are on the right track or if midcourse corrections are needed. It also ensures that the evaluation does not conclude unfairly that the whole advocacy effort was a failure if a policy was not achieved. For example, an advocacy organization might lose the battle for a specific legislative, regulatory, or judicial objective, but by motivating a large number of citizens to advocate on its issue, may have built a more experienced grassroots coalition for the future.

The Connect U.S. Fund offers an example of an evaluation that includes a focus on tracking progress. Connect U.S. promotes responsible U.S. global engagement
through grantmaking and operations that advance foreign policy objectives in the
areas of human rights, climate change, nuclear weapons and proliferation, civil-
military affairs, and trade and development. Continuous Progress Strategic Services
6 Find Alliance for Justice's Advocacy Capacity Assessment Tool at www.advocacyevaluation.org.
7 See TCC Group's publication What Makes an Effective Advocacy Organization? on their website at www.tccgrp.com.



(CPSS) has been working with Connect U.S. to help its more than 20 advocacy grant-
ees establish evaluation objectives and benchmarks for tracking their progress
toward policy goals. CPSS used its online Advocacy Progress Planner (APP), developed
with support from The California Endowment, to work with grantees.8 The APP offers
a comprehensive menu of options that might go into an advocacy logic model or the-
ory of change. Users can click through these options to highlight their policy goals,
target audiences, assets, tactics, and benchmarks. Connect U.S. and CPSS found that
defining appropriate benchmarks was grantees’ single biggest challenge. While most
grantees had clear objectives and benchmarks to help them determine if they were
on course, others struggled to identify measurable benchmarks that would meaning-
fully indicate progress. From this experience, CPSS developed model benchmarks
that grantees can use to track common advocacy outcomes that generally precede
policy change.
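As a rough sketch of how a logic model's elements and benchmarks might be represented and checked, consider the Python fragment below. The fields, values, and the sample benchmark are assumptions for illustration and are not drawn from the Advocacy Progress Planner's actual data model.

    # Hypothetical representation of an advocacy logic model with measurable
    # benchmarks; not the Advocacy Progress Planner's actual structure.
    from dataclasses import dataclass, field

    @dataclass
    class Benchmark:
        description: str
        target: float        # the level that signals "on track"
        current: float = 0   # latest measured value

        def on_track(self):
            return self.current >= self.target

    @dataclass
    class LogicModel:
        policy_goal: str
        audiences: list = field(default_factory=list)
        tactics: list = field(default_factory=list)
        benchmarks: list = field(default_factory=list)

    model = LogicModel(
        policy_goal="State adopts universal preschool funding",
        audiences=["state legislators", "editorial boards"],
        tactics=["coalition building", "earned media"],
        benchmarks=[Benchmark("Legislators publicly supporting the bill",
                              target=30, current=22)],
    )

    for b in model.benchmarks:
        print(b.description, "- on track" if b.on_track() else "- needs attention")

The design choice the sketch illustrates is the one CPSS and Connect U.S. wrestled with: a benchmark is only useful if it names a measurable level that meaningfully signals progress toward the policy goal.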

Impact

For traditional program evaluation, capturing impact generally means that an evaluation uses a rigorous evaluation design to determine if a causal relationship can be established between a program and its intended outcomes. For advocacy evaluation, the meaning is different. An advocacy evaluation that focuses on impact does one or more of the following:

1) Assesses the longer-term "big" outcomes that precede policy change (e.g., public will, political will, shifts in social norms)

2) Determines whether a plausible and defensible case can be made that an advocacy effort has impacted the policy process or contributed to a policy change

3) Documents the long-term impact of advocacy and policy change on people's lives (or on the environment, the economy, etc.).

Of these three approaches, the first two are most common. With the first approach,
longer-term “big” outcomes typically refer to important shifts in how policy stake-
holders are thinking about or acting on certain policy issues. For example, many
evaluations that use this approach attempt to operationalize and measure changes
in public will or political will surrounding an issue.

With the second approach, because advocacy work typically is collaborative and
complex and the policy process is affected by many variables, definitively isolating
whether a certain policy outcome would not have happened without an advocacy
effort is difficult at best. Therefore, the standard that has developed in advocacy
evaluation is a focus on contribution (using data to determine if a credible case can
be made that the advocacy effort contributed to a particular policy outcome), rather
than attribution (showing a causal connection between an advocacy effort and a
policy outcome).

Although rarer, evaluations that address the third meaning of impact—documenting advocacy's long-term impact or return on investment—also exist. For example, the National Committee for Responsive Philanthropy (NCRP) recently studied the positive impacts of advocacy, community organizing, and civic engagement efforts in New Mexico (and also in North Carolina). This work, documented in the report Strengthening Democracy, Increasing Opportunity, found that for every dollar invested in the 14 advocacy and organizing groups studied, New Mexico's residents reaped more than $157 in benefits. That means the $16.6 million from foundations and other sources to support advocacy efforts translated into more than $2.6 billion in benefits to the broader public. The report also documents how New Mexico's overall economy has benefited from policy changes advocated for by local nonprofits, and highlights a range of successful advocacy efforts on issues such as economic security, environmental justice, civil and human rights, health, and education.9

8 See the online Advocacy Progress Planner at www.planning.continuousprogress.org.
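As a back-of-the-envelope consistency check on those reported figures (not NCRP's methodology, which involves much more than simple multiplication), the two totals line up:

    # Simple consistency check on the reported figures (illustrative only).
    investment = 16.6e6          # dollars invested in the 14 groups studied
    benefit_per_dollar = 157     # reported return per dollar invested
    print(investment * benefit_per_dollar)  # ~2.6e9, i.e., more than $2.6 billion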

Timing: When will the evaluation take place?

Evaluation and evaluative thinking can play a role before, during, or after an advocacy strategy's implementation. Based on the principle that evaluation use increases when organizations can apply it to their planning and strategies, most advocacy evaluation is occurring during strategy implementation. This approach is particularly useful with advocacy efforts, where strategy is constantly evolving and regular feedback can be valuable for informing next steps. But many evaluators also work with advocates before advocacy strategies are implemented (or early on in their implementation) to ensure strategies have realistic and measurable outcomes. In addition, some retrospective evaluations are occurring after advocacy outcomes are known to identify what can be learned from the advocacy strategy's implementation and success (or lack thereof).

Before
When engaged early on in an advocacy strategy’s development, evaluators can be
helpful resources or partners as a strategy is being shaped. Commonly, this comes
in the form of evaluators working with advocates on the development of a theory of
change or logic model to articulate and clarify their strategy.

A number of tools have been created for use during both advocacy planning and
evaluation. For example:

● The Advocacy and Policy Change Composite Logic Model and its online version, the Advocacy Progress Planner (mentioned earlier), were developed to facilitate advocacy theory of change or logic model development.10

● The Continuous Progress website (www.continuousprogress.org) helps advocates and funders collaboratively plan and evaluate advocacy efforts.

● The Alliance for Justice Advocacy Evaluation Tool helps organizations identify and describe their specific advocacy achievements, both for pre-grant and post-grant information. In addition, their Advocacy Capacity Assessment Tool (mentioned earlier) helps organizations identify ways to strengthen their advocacy capacity.11

9 Find both the New Mexico and North Carolina reports Strengthening Democracy, Increasing Opportunity on the National Committee for Responsive Philanthropy's website at www.ncrp.org.
10 Find the Advocacy and Policy Change Composite Logic Model on Innovation Network's resource database at www.innonet.org/advocacy. See the online Advocacy Progress Planner at www.planning.continuousprogress.org.

Evaluators and evaluative thinking also can be useful in other ways. For example,
some evaluators are working with advocates on developing contingency logic
models. Drawing on the concept of scenario planning, these models imagine that the
political or economic context has changed in an important way, or that parts of the
strategy do not go as planned. Contingency logic models identify how the strategy
will shift if those scenarios occur.

The advocacy premortem is another before-implementation approach that has utility for both planning and evaluation.12 The method is based on the concept of prospective hindsight, which involves imagining an event already has occurred. A premortem involves an exercise that assumes the effort has failed. Advocates and any other stakeholders involved in the advocacy effort are tasked with identifying possible reasons for the effort's failure. Stakeholders independently write down every possible reason that the effort might have failed. Each person then shares one reason from his or her list until all reasons have been recorded and a collective list is generated. The result is a comprehensive list of risks that an advocacy effort should be cognizant of and monitor. It also is a list that the evaluation can use later to guide its inquiry.
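The mechanics of that aggregation step are simple enough to sketch in Python; the participant roles and the reasons below are invented for illustration and are not drawn from any actual premortem.

    # Illustrative sketch of aggregating an advocacy premortem: each stakeholder
    # independently lists reasons the effort might have failed, and the lists
    # are merged into one deduplicated set of risks to monitor.
    submissions = {
        "organizer": ["key legislator loses committee seat",
                      "coalition partners disengage"],
        "communications lead": ["opposition reframes the issue first",
                                "coalition partners disengage"],
        "funder liaison": ["budget crisis crowds out the issue"],
    }

    def collect_risks(submissions):
        """Merge everyone's reasons into one ordered, deduplicated risk list."""
        risks = []
        for reasons in submissions.values():
            for reason in reasons:
                if reason not in risks:
                    risks.append(reason)
        return risks

    for i, risk in enumerate(collect_risks(submissions), start=1):
        print(f"{i}. {risk}")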
During/Prospective

Prospective evaluation occurs while an advocacy effort is being implemented. With this approach, evaluation regularly feeds back data to help advocates reflect, in real time, on their strategies to assess whether they're working and where midcourse corrections are needed. By more deeply integrating evaluation with implementation,
prospective evaluation provides advocates and funders with data on progress long
before policy change can be achieved, and collects insights that advocates can use
to continuously improve and refine their strategies.13

The main benefit of a prospective approach is that it positions the evaluation to be useful for both learning and accountability purposes. It delivers feedback to refine advocacy strategy and implementation, and encourages advocate engagement in the evaluation process.

Blueprint Research and Design is using a prospective approach with Communities for Public Education Reform (CPER), a partnership of local and national foundations using community organizing to improve educational opportunities and outcomes
for students in low-income communities. This participatory evaluation was designed
to ensure that findings serve as ongoing learning tools for all sites and for CPER as a
whole. Evaluation questions and data collection focus on the areas of policy change
and education reform; capacity building and leadership development; student, par-
ent, and community engagement; collaboration and coalition building; scaling up;

11 For more information on both Alliance for Justice tools, go to www.advocacyevaluation.org.
12 Klein, G. (2007). Performing a project premortem. Harvard Business Review, 85(9), 18-19.
13 Guthrie, K., Louie, J., David, T., & Crystal Foster, C. (2005). The challenge of assessing policy and advocacy activities: Strategies for a prospective evaluation approach. San Francisco, CA: Blueprint Research and Design. Find this publication at www.calendow.org.



and the value of and support for education organizing. The evaluation also is pro-
viding valuable lessons about how to assess outcomes associated with community
organizing.14

After/Retrospective

While the emphasis in the advocacy evaluation field is on prospective evaluation that occurs while the advocacy effort is being implemented, retrospective evaluation also can be extremely valuable. Retrospective evaluations take place after an advocacy effort has occurred and the outcome already is known. They look backward and examine the factors that led to or affected that outcome, and therefore are extremely useful for learning purposes. The benefit of a retrospective approach is that hindsight is 20/20. Often, it is easier to see after the fact where things went well and where the strategy might have improved for better effect.

Michael Quinn Patton's case study evaluation of a judicial advocacy effort designed to influence a Supreme Court decision is an example of a retrospective approach.15 Patton used the "general elimination method" to determine whether a plausible and defensible case could be made that the advocacy effort in fact had an impact. The general elimination method begins with an intervention (advocacy) and searches for an effect. It uses evidence to eliminate alternative or rival explanations until the most compelling explanation remains. Patton's conclusion, based on a thorough
review of the campaign’s activities, key informant interviews, and analysis of the
Supreme Court decision, was that the advocacy campaign did in fact contribute
significantly to the Court’s decision.
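In highly simplified form, the elimination logic can be sketched as follows. The rival explanations and evidence flags are hypothetical, and the actual method rests on careful qualitative judgment rather than boolean checks.

    # Highly simplified sketch of the general elimination method: rule out
    # rival explanations that the evidence contradicts, and see what remains.
    # The explanations and evidence flags are hypothetical.
    rival_explanations = {
        "advocacy campaign shaped the decision": {"contradicted_by_evidence": False},
        "decision followed existing precedent alone": {"contradicted_by_evidence": True},
        "unrelated political pressure drove the outcome": {"contradicted_by_evidence": True},
    }

    remaining = [name for name, checks in rival_explanations.items()
                 if not checks["contradicted_by_evidence"]]
    print("Most compelling remaining explanation(s):", remaining)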

Approach: What methodology will the evaluation use?

Evaluations can use many different approaches or models. One study, for example, identified at least 22 available approaches.16 Within the advocacy evaluation field, however, the list is smaller as many traditional program evaluation approaches do not work well with advocacy. The three options listed in the matrix—tracking/monitoring, developmental evaluation, and case studies—are not the only approaches being used in the field, but they are among the most common.

Tracking/Monitoring

Tracking and monitoring refers to the practice of identifying indicators, benchmarks, or performance measures (usually quantitative) connected to advocacy outcomes and then tracking those indicators over time. Tracking examines progress and identifies where midcourse corrections might be needed. For example, by determining whether issues or messages are appearing more in targeted media outlets, media tracking can identify whether media outreach tactics are making headway. Tracking's main disadvantage is that it often tells little about why changes are occurring over time.
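A minimal sketch in Python of indicator tracking, using a hypothetical monthly count of media mentions and an invented benchmark, might look like this:

    # Minimal sketch of tracking one advocacy indicator over time: monthly
    # media mentions of the campaign's framing. Numbers and the benchmark
    # are invented for illustration.
    monthly_mentions = {"Jan": 4, "Feb": 6, "Mar": 5, "Apr": 9, "May": 12}
    benchmark = 10  # hypothetical target for mentions per month

    for month, count in monthly_mentions.items():
        status = "meets benchmark" if count >= benchmark else "below benchmark"
        print(f"{month}: {count} mentions ({status})")

    # A simple trend signal: compare the latest month with the first.
    values = list(monthly_mentions.values())
    print("Trend:", "up" if values[-1] > values[0] else "flat or down")

As the brief notes, a tracker like this signals whether corrections are needed but says nothing about why the numbers are moving.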
14 For more information on evaluating community organizing, see Alliance for Justice's online living library of resources, Resources for Evaluating Community Organizing, at www.afj.org/for-nonprofits-foundations/reco.
15 Patton, M. Q. (2008). Advocacy impact evaluation. Journal of Multidisciplinary Evaluation, 5(9), 1-10.
16 Stufflebeam, D. (2001). Evaluation models. New Directions for Evaluation, 89 (special issue).



Several recently developed tools are helping the field understand how to track indicators associated with specific advocacy tactics. For example, Are We There Yet? A Communications Evaluation Guide, created by Asibey Consulting and published by The Communications Network, helps users create plans for monitoring and measuring their communications.17

In addition, M+R Strategic Services has done groundbreaking work on tracking electronic advocacy efforts. Their eNonprofit Benchmarks Study (completed initially in 2006 and updated in 2008 and 2009) analyzed online messaging, fundraising, and e-advocacy data from 21 leading nonprofit organizations. This work resulted in a set of key indicators and valuable benchmark data that can be used for tracking and interpreting nonprofit online communications.18

Developmental Evaluation

Michael Quinn Patton coined the term "developmental evaluation" to describe an approach to evaluating complex or evolving efforts, like advocacy. "Developmental evaluation refers to long-term, partnering relationships between evaluators and those engaged in innovative initiatives and development…Evaluators become part of a team whose members collaborate to conceptualize, design and test new approaches in a long-term, ongoing process of continuous improvement, adaptation, and intentional change. The evaluator's primary function in the team is to elucidate team discussions with evaluative questions, data and logic, and to facilitate data-based assessments and decision-making in the unfolding and developmental processes of innovation."19 Developmental evaluation is different from traditional evaluation in that evaluators do not make definitive judgments about success or failure. Rather, like with prospective evaluation, they provide feedback, generate learning, and either support strategy decisions or affirm changes to them.

This approach is useful for advocacy efforts that are complex and constantly evolve.
Developmental evaluation allows evaluators to be flexible, so that when strategies
change or critical events occur, evaluators quickly become aware of those changes
and can adjust the evaluation accordingly.

Since 2005, Innovation Network, with support from The Atlantic Philanthropies,
has been using a developmental approach for its evaluation of the Coalition for
Comprehensive Immigration Reform (CCIR)—a collaborative of immigrant advocacy,
grassroots, and religious groups, labor organizations, and policy leaders on Capitol
Hill and throughout the United States. For several years, Innovation Network has
been documenting CCIR’s work as it unfolds and is capturing best practices to inform
other coalitions and the advocacy field. Because immigration reform activity fluc-
tuates and has evolved over time, Innovation Network has been flexible and has
experimented with different approaches to ensure the evaluation is both useful and
not burdensome for advocates. The evaluation fosters continuous learning so CCIR
leadership can act on evaluation findings and make real-time adjustments to their
activities and strategies.

17 Find Asibey Consulting's Are We There Yet? A Communications Evaluation Guide on The Communications Network's website at www.comnetwork.org.
18 Download the eNonprofit Benchmarks Study at www.e-benchmarksstudy.com.
19 Patton, M. Q. (2006). Evaluation for the way we work. The Nonprofit Quarterly, 13(1), 28-33.



Case Studies
Case studies are used to collect descriptive data through the intensive examination
of a phenomenon in a particular individual, group, or situation. Case studies are
particularly useful for studying unique or complex phenomena, two descriptors that
apply to most advocacy efforts.

A key advantage of case studies is that they tell a full story about what happened,
rather than provide isolated data points that tell only part of the story or do not
incorporate context or the environment in which the advocacy effort occurred. A
potential disadvantage is that because context plays such an important role in this
approach, at times it can be difficult to extrapolate lessons to other advocacy or
political circumstances.

Case studies recently completed by Colin Knox of the University of Ulster and supported by The Atlantic Philanthropies offer an example of this approach. This series of seven case studies chronicles advocacy efforts in post-conflict Northern Ireland in the areas of human rights, children and youth, and aging. The case studies provide insights and lessons about how advocates achieved traction and influenced policy agendas in complex and challenging political environments that were extremely resistant to change.

Conclusion
As this brief demonstrates, the past several years have brought tremendous creativity and growth to the advocacy evaluation field. Where few resources and little expertise existed before, multiple tools and a growing base of experience now exist. This growth has been fueled by a group of pioneering funders, evaluators, and advocates who share a strong dedication to the field and are committed to growing it through collaboration.

To be sure, there is much more happening that has not been captured here, and there is enormous opportunity for further growth and innovation. Although early work in this field has generated a great deal of momentum, there is much left to do. For example, the field must expand beyond eager innovators and reach out to the much larger majority of individuals and organizations who still know little about advocacy evaluation or remain skeptical about its value. In addition, the field must fill in some clear gaps in its infrastructure, particularly in the areas of outreach and training.

Opportunities to stay updated on new developments as the field continues to grow include:

● Innovation Network has a free online clearinghouse and newsletter dedicated to advocacy evaluation (www.innonet.org/advocacy).

● The American Evaluation Association has an Advocacy and Policy Change Topical Interest Group (www.eval.org).

● The Foundation Review (www.foundationreview.org) has chosen advocacy and policy change as the theme for a 2009 issue.



About the Author
Julia Coffman is Director of the Center for Evaluation Innovation. She is an evaluator
with particular expertise in the evaluation of advocacy and policy change efforts.
Her approach emphasizes real-time learning that helps organizations adapt their
strategies and continuously improve. Julia also consults with nonprofit organizations
and foundations on evaluation and works with the Harvard Family Research Project
(HFRP), a research and evaluation organization at the Harvard Graduate School of
Education.

About the Center for Evaluation Innovation

The Center for Evaluation Innovation is pushing evaluation practice in new directions and into new arenas. The Center specializes in areas that are hard to measure and where fresh thinking and new approaches are required. This includes, for example, advocacy and policy change, communications, and systems change efforts. The Center works with other organizations to develop and then share new ideas and solutions to evaluation challenges through research, communications, training development, and convening. Find the Center at www.evaluationinnovation.org. Contact Julia at [email protected].

About Advocacy Evaluation Advances

In January 2009, 120 advocates, evaluators and funders gathered at The California Endowment's Center for Healthy Communities for two days of thought-provoking presentations and discussions on recent advocacy evaluation advances. The convening, sponsored by The California Endowment with support from The Atlantic Philanthropies and Annie E. Casey Foundation, focused on real-life experiences with advocacy evaluation and what has been learned from testing different tools and approaches in this emerging field over the last several years. It also focused on challenges that still must be addressed, and identified priorities for the field moving forward. The examples featured in this brief were presented and discussed during the convening. To access other convening and presenter resources, including many mentioned in this brief, visit the Advocacy Evaluation Advances web page at www.calendow.org/article.aspx?id=3774.

Acknowledgements

This brief was developed with generous support from The California Endowment. Special thanks to Astrid Hendricks at The California Endowment; Barbara Masters at MastersPolicy Consulting; Sue Hoechstetter at Alliance for Justice; and Jane Reisman and Anne Gienapp at Organizational Research Services for their comments on earlier versions.
