
BASIC IMPACT ASSESSMENT AT PROJECT LEVEL

CONTENTS

Summary
This paper examines the underlying principles and basic methods of assessing the
impact of development projects. It is often referred to in other texts on the site as
the "Core Text". Many of the methods and techniques outlined in this text are
examined in more detail in the Toolbox; they are also shown in application to
Enterprise Development activities in the Application Guidance Notes and the Case
Studies. All of these sections can be found under the main heading "Information
Resources".

The paper begins by outlining the principles of impact assessment and how these
relate to Enterprise Development interventions. It then develops a framework for
assessment at project level, and describes the practical application of this framework.
This is followed by issues to be considered when commissioning and conducting an
impact assessment, including the items to cover when drawing up Terms of Reference
for engaging external consultants. Finally, the paper looks in more detail at the main
methods of assessing impact, their advantages and disadvantages, and the kinds of
situation to which each is most suited.

1. WHAT IS IMPACT ASSESSMENT?

2. IMPACT ASSESSMENT FOR ENTERPRISE DEVELOPMENT

3. PRINCIPLES OF IMPACT ASSESSMENT

4. CONCEPTUAL FRAMEWORK FOR ENTERPRISE AND PROJECT LEVEL IMPACT ASSESSMENT

5. IMPACT ASSESSMENT IN PRACTICE

6. CONDUCTING AN IMPACT ASSESSMENT

7. COMMISSIONING AN IMPACT ASSESSMENT: TERMS OF REFERENCE FOR CONSULTANTS

8. IMPACT ASSESSMENT METHODS

This paper has been prepared by Colin Kirkpatrick and David Hulme, with contributions
from Linda Mayoux, Caroline Pinder, Tertia Gavin and Clive George.

1. WHAT IS IMPACT ASSESSMENT?

In its broadest sense, impact assessment is the process of identifying the anticipated or
actual impacts of a development intervention, on those social, economic and
environmental factors which the intervention is designed to affect or may inadvertently
affect. It may take place before approval of an intervention (ex ante), after completion (ex
post), or at any stage in between. Ex ante assessment forecasts potential impacts as part
of the planning, design and approval of an intervention. Ex post assessment identifies
actual impacts during and after implementation, to enable corrective action to be taken if
necessary, and to provide information for improving the design of future interventions. The
stages in the project cycle where impact assessment needs consideration are shown in
Figure 1.

A distinction can be made between two separate but interlinked levels:

• Internal monitoring and evaluation for ongoing learning, through for example the
integration of specific impact indicators into existing management information systems,
which makes information immediately available to staff;

• External impact assessment, often involving independent investigators. Such
assessments produce reports for specific purposes, such as poverty impact
assessment, regulatory impact assessment, social impact assessment or health impact
assessment. Certain types of ex ante assessment may be part of the approval process
for certain types of intervention, including environmental impact assessment and
economic impact assessment (cost-benefit analysis). These may contain their own ex
post monitoring activities. Separate ex post assessments may be undertaken or
commissioned for any particular intervention or set of interventions, to provide fuller
information than may be available from routine monitoring and evaluation.

In the context of sustainable development, the social, economic and environmental
impacts of an intervention are all interlinked. The various types of impact assessment may
therefore need to be combined in an integrated impact assessment, whose nature will vary
according to the type of intervention, and the aims and cost-effectiveness of the overall
impact assessment package.

For each impact assessment type a wide range of methodologies has been developed,
according to the precise purpose of the assessment, the types of question to be asked, the
organisational context, the socio-economic context, available budget, research capacity
and other factors. An impact assessment may include any or all of:

• Quantitative statistical methods involving baseline studies, the precise identification
of baseline conditions, definition of objectives, target setting, rigorous performance
evaluation and outcome measurement. Such methods can be costly, limited in the
types of impacts which can be accurately measured, and may pose difficulties for the
inference of cause and effect. Some degree of quantification may be necessary in
all impact assessments, in order to evaluate the success of the intervention and the
magnitude of any adverse effects (a minimal numerical sketch follows this list).
• Qualitative methods suitable for investigating more complex and/or sensitive types
of social impacts, e.g. intra-household processes, policy issues and investigation of
reasons for statistical relationships and policy implications. These methods
generally require high levels of skill, and may be relatively costly. Some degree of
qualitative interpretation may be necessary in all impact assessments, in order to
evaluate the causes of impacts which have been observed.

• Participatory approaches suitable for initial definition or refinement of the actual or
potential impacts which are of concern to stakeholders, questions to be asked, and
appropriate frameworks and indicators to be used. Such approaches can contribute
to all types of assessment, and are particularly suited to exploratory low budget
assessments and initial investigation of possible reasons for observed statistical
relationships. They offer a means of involving stakeholders in the research,
learning and decision-making processes. These methodologies also require a
certain level of skill, depending on the issues to be addressed and ways in which
they are integrated with other methods. Some degree of stakeholder participation is
likely to be necessary in all impact assessments, in order to achieve a good
understanding of stakeholder perceptions of impacts.
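To make the quantitative strand concrete, here is a minimal sketch of the kind of calculation a baseline-plus-follow-up design supports: a simple difference-in-differences comparison between participants and a comparison group. The group labels and figures are hypothetical illustrations, not data from any EDIAIS study, and a real assessment would add significance testing and checks that the two groups are genuinely comparable.

```python
# Minimal difference-in-differences sketch (hypothetical data).
# Impact is estimated as the change in an indicator for participants
# minus the change for a comparison group over the same period.

def mean(values):
    return sum(values) / len(values)

# Hypothetical monthly household incomes at baseline and follow-up.
participants_baseline = [52, 48, 61, 45, 58]
participants_followup = [67, 60, 74, 55, 70]
comparison_baseline = [50, 47, 63, 44, 57]
comparison_followup = [55, 50, 66, 47, 61]

participant_change = mean(participants_followup) - mean(participants_baseline)
comparison_change = mean(comparison_followup) - mean(comparison_baseline)

# The comparison group's change proxies what would have happened anyway.
estimated_impact = participant_change - comparison_change
print(f"Estimated impact on mean monthly income: {estimated_impact:+.1f}")
```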

Whatever mix of techniques is used, consideration should be given to:

• transparency and public accountability
• stakeholder involvement
• reliability of the information obtained
• reliability of inference for policy improvement
• cost and skill requirements

For detailed discussion of quantitative, qualitative and participatory approaches, see the
Toolbox sections on:
Programme Management Cycle
Impact Assessment and Stakeholder Analysis
Quantification Methods
Qualitative Methods
Participatory Methods

FIGURE 1. PROGRAMME CYCLE:
AREAS WHERE IMPACT ASSESSMENT NEEDS CONSIDERATION

IDENTIFICATION - Look at EDIAIS for lessons from other programmes

*PROJECT HEADER SHEET (PIMS, POM, PAM)
• Decision on markings gives an indication of which TSP checklists to consider (e.g. gender, poverty etc)

*CONCEPT NOTE
• Indicate consideration of the level and methodology of IA

DESIGN

*STAKEHOLDER ANALYSIS
• Which stakeholders will be involved in the proposed IA? Are these the appropriate stakeholders? What methodologies are best for securing their involvement?

*LOGFRAME
• Agree purpose of IA and indicators
• Agree methodologies and mix to fit
• Agree on baseline data required, and how this will be collected
• Agree frequency/timing of IA

APPRAISAL

*PROJECT MEMORANDUM (& Technical Annexes)
• Ensure IA is reflected throughout the document, including the budget
• Collate multi-disciplinary technical annexes (in particular economic, social and environmental); negotiate and agree the IA strategy for the project with multi-disciplinary Advisers/team members

APPROVAL

*MONITORING ANNEX
• Detail proposals as agreed at earlier stages, i.e. purpose, methods, frequency etc

IMPLEMENTATION AND MONITORING

*MONITORING
• DFID staff monitoring visits at least once a year
• Scoring for PRISM

*REPORTING
• Internally written reports by project staff required at least once a year for projects over £250 000, reporting against logframe indicators, activities and budget
• End-of-project report will include IA findings

REVIEW / IMPACT ASSESSMENT

*REVIEWING
• Mid-term review should include a mid-term IA
• Output-to-purpose review held 6 months before end of project; should include a full-term IA
• If an extension is likely, review the IA strategy and include it in the new PM (Project Memorandum)
2. IMPACT ASSESSMENT FOR ENTERPRISE DEVELOPMENT

Impact assessment for enterprise development (ED) has three different, but interrelated,
objectives:

• Accountability: to provide evidence about the achievements of ED interventions and
their costs

• Improving Programme/Project Effectiveness: to provide recommendations about the
means by which present and future programme/project performance could be improved

• Policy Development: to provide guidance about the ways in which government and
donor policies should be reformed so that the environment for enterprise development
becomes more favourable.

Each of these objectives is likely to shape the design of an IA in different directions.
Accountability encourages a quantitative focus and the comparison of inputs with outputs
and outcomes. Improving effectiveness encourages a focus on the processes by which
inputs are converted into outputs and outcomes; often this means extensive use of
qualitative research methods. Policy development encourages a focus on macro-level
contexts and often involves international comparisons.

Impact assessment for ED at DFID is particularly important in showing how ED work
contributes to poverty alleviation by benefiting the poor. Examples of this include support
for small and medium enterprises (to create jobs or lower-cost goods and services for the
poor) and creating an enabling environment for private sector development (to promote
pro-poor economic growth).

The breadth of ED interventions means that very different approaches to IA have to be
adopted, depending on the nature of the intervention. Methodologies are relatively well
advanced at the micro-level (especially for microfinance) but are less mature at the
macro-level of policy reform and institutional change.

While all of the partners with whom EDAs work should, as a matter of good practice, be
assessing the impacts of their activities, the Evaluation Department at DFID has a distinct
role. It provides independent assessments of the achievements of DFID investments,
including ED projects and programmes, and seeks to draw out policy lessons for DFID
from comparative international studies.

3. PRINCIPLES OF IMPACT ASSESSMENT

Impact assessments carried out as part of planning and approval of an intervention (ex
ante) are predictive in nature, and generally follow well established methodologies (e.g. for
environmental impact assessment, social impact assessment, health impact assessment,
economic appraisal).

Assessments carried out subsequently (ex post) aim to evaluate actual impacts. While
techniques vary according to the nature of the intervention and the purpose of the
assessment, the basic methodology is similar (Figure 2).

Figure 2 General impact assessment methodology

define the scope of the assessment
↓
define impact targets
↓
define indicators
↓
identify unplanned impacts
↓
identify stakeholders
↓
involve stakeholders
↓
assess impacts
↓
quantify impacts
↓
identify corrective actions
↓
identify policy lessons
↓
report
↓
dissemination of findings

• scope: determine which impacts should be investigated in the assessment
• targets: identify targets (where possible) for the impacts to be assessed, from the planning documents, or from widely accepted objectives appropriate for the type of intervention
• indicators: identify indicators which will allow each impact to be measured in relation to its target (if one has been identified), together with the methods of collecting data against these indicators and who will be responsible for collection (a minimal sketch of tracking an indicator against its target follows this list)
• unplanned impacts: within the scope of the assessment, identify potentially significant unplanned impacts
• stakeholder identification: identify those social groups likely to be affected by planned or unplanned impacts, and other stakeholders (e.g. government bodies) with a significant interest
• stakeholder involvement: decide how stakeholders will be involved in the assessment; this may include involving stakeholders in refining the scope of the assessment, and in the identification of unplanned impacts and targets for planned ones
• assessment of impacts: determine what impacts have occurred, their direct and indirect causes, and their importance in relation to targets
• quantification of impacts: assess impact magnitude where practicable, in relation to targets
• corrective action: define what steps can be taken to eliminate or reduce any significant adverse impacts or to compensate for them
• policy learning: identify lessons for the planning and design of future interventions
• reporting: document the findings of the assessment in a manner that is clearly understandable to those who will use them; identify uncertainties and the reliability of findings; establish means of public access to the report
• dissemination of findings: evaluation findings should be disseminated amongst stakeholders in a way that contributes to learning (e.g. by workshops, meetings, circulation of the report); obtain stakeholder agreement to the report and agree follow-up action
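As a minimal illustration of the indicator step (the field names and figures below are hypothetical, not drawn from any DFID template):

```python
# Sketch of one impact indicator tracked against its target
# (hypothetical fields and values).
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str          # what is measured
    baseline: float    # value before the intervention
    target: float      # value the intervention aims to reach
    current: float     # latest measured value
    collected_by: str  # who is responsible for collection

    def progress(self) -> float:
        """Share of the baseline-to-target distance achieved so far."""
        return (self.current - self.baseline) / (self.target - self.baseline)

jobs = Indicator(
    name="jobs created by supported enterprises",
    baseline=0, target=400, current=130,
    collected_by="partner MIS, quarterly",
)
print(f"{jobs.name}: {jobs.progress():.0%} of target reached")
```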

In many cases the assessment will be expected to assess impact on very broad goals,
such as poverty alleviation. Unless the intervention can be expected to have a direct
impact on such goals, it may be appropriate to identify relevant intermediary factors (e.g.
education), and limit the assessment to impacts on them. The linkages between
intermediary factors and broader goals can often be assessed reliably only through a
complex policy-level impact assessment. In general, the targets and indicators used in the
assessment will be those on which the intervention can be expected to have a direct
impact.

Whatever the precise scope of the assessment in relation to particular social, economic or
environmental objectives, consideration should be given to the following potential issues:

• time-dependency - might impacts that are small (or large) at the time of the
assessment increase (or decrease) with time?
• changing or abnormal conditions - how secure is an observed impact, in relation to
economic or environmental shocks and other conditions which may vary from those
pertaining at the time of the assessment?
• cumulative effects - would a small effect become significant if the intervention or its
effects were replicated?
• remote effects - might unplanned impacts be occurring beyond the boundaries of the
study area or community?
• second order effects and interactions - might unplanned impacts be occurring that are
not obviously associated with the intervention?

The last of these issues can entail a complex investigation of the interlinkages between
social, economic and environmental impacts. A fully integrated impact assessment of this
nature would be required if potentially important interactive effects are identified, within the
scope of the assessment or subsequently (Figure 3).

Figure 3 Types of Impact on Sustainable Development

[Diagram: an intervention generates economic, environmental and social impacts, which feed into sustainable development. The diagram distinguishes direct impacts, indirect (secondary) impacts, feedback impacts and regulatory impacts.]

4. CONCEPTUAL FRAMEWORK FOR ENTERPRISE AND PROJECT LEVEL
IMPACT ASSESSMENT

All impact assessment studies have an underlying conceptual framework. In well-planned
and well-resourced IAs with long ‘lead-in’ times such frameworks are usually explicitly
identified; by contrast, in many smaller scale exercises the framework is implicit and may
be seen as ‘common sense’. There are three main elements to a conceptual framework:

• a model of the impact chain that the study is to examine

• the specification of the unit(s) or levels at which impacts are assessed

• the specification of the types of impact that are to be assessed.

4.1 Impact Chains

Behind all aid-financed initiatives is the assumption that the intervention will change
behaviour and practice in ways that lead to the achievement (or raise the probability of
achievement) of desired outcomes. IAs assess the difference in the values of key
variables between the outcomes on ‘agents’ (individuals, enterprises, households,
populations, policymakers etc) which have experienced an intervention against the values
of those variables that would have occurred had there been no intervention (Figure 4).
The fact that no agent can both experience an intervention and at the same time not
experience an intervention generates many methodological problems. All changes are
influenced by mediating processes (specific characteristics of the agent and of the
economic, physical, social and political environment) that influence both behavioural
changes and the outcomes in ways that are difficult to predict.
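In symbols (a standard formulation of the counterfactual problem, added here for clarity rather than taken from the paper): if $Y_i(1)$ is the outcome for agent $i$ with the intervention and $Y_i(0)$ the outcome without it, the impact on $i$ is

$$\text{impact}_i = Y_i(1) - Y_i(0),$$

yet only one of the two terms is ever observable for any given agent. In practice the average impact is therefore estimated by comparing participants with a control or comparison group chosen to stand in for the missing counterfactual:

$$\widehat{\text{impact}} = \bar{Y}_{\text{participants}} - \bar{Y}_{\text{comparison}}.$$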

The impact chain is very simply depicted in Figure 4. A more detailed conceptualisation
would present a complex set of links as each ‘effect’ becomes a ‘cause’ in its own right
generating further effects. For example, in a conventional microfinance project a package
of technical assistance and capital changes the behaviour (and products) of a
microfinance institution (MFI). The MFI subsequently provides different services to a
client, most commonly in the form of a loan. These services lead to the client modifying
her/his microenterprise activities which in turn leads to increased or decreased
microenterprise income. The change in microenterprise income causes changes in
household income which in turn leads to greater or lesser household economic security.
The modified level of household economic security leads to changes in the morbidity and
mortality of household members, in educational and skill levels and in future economic and
social opportunities. Ultimately, perhaps, these changes lead to modifications in social
and political relations and structures.

Figure 4 The Conventional Model of the Impact Chain

[Diagram: an agent's behaviours and practices over a period of time lead to outcomes for the agent and/or other agents; with a programme intervention, modified behaviours and practices lead to modified outcomes for the agent and/or other agents. Mediating processes act on both paths, and the difference between the two sets of outcomes is the impact.]

The complexity of such chains provides the assessor with a range of choices about which
link (or links) to focus on. It is useful to distinguish between two main approaches with
regard to which link(s) in the chain to focus on. For convenience, these are termed the
‘intended beneficiary’ and the ‘intermediary’ approaches.

The intended beneficiary approach, building on the ideas of conventional evaluation, seeks
to get as far down the impact chain as is feasible (in terms of budgets and techniques) and
to assess the impact on intended beneficiaries (individuals or households). The
intermediary approach focuses purely on the beginning of the chain and in particular on
changes in project outputs or institutional outreach and sustainability. The link between the
intermediary process and the ultimate impact is often much wider than can be reasonably
assessed for any individual project. One may therefore wish to do an ‘intended beneficiary’
assessment infrequently on a broad basis, to evaluate the policy assumptions underlying
the design of individual interventions. For each intervention, one would only assess
intermediary impacts.

4.2 Units of assessment

Following on from the design of a model of the impact path comes the choice of the unit(s)
of assessment (or levels of assessment). Common units of assessment are the
household, the enterprise or the institutional environment within which agents operate.

The relative advantages and disadvantages of different units of assessment are
summarised in Box 1.

Box 1: Units of Assessment and Their Advantages and Disadvantages

Individual
  Advantages: easily defined and identified.
  Disadvantages: most interventions have impacts beyond the individual; difficulties of disaggregating group impacts and impacts on ‘relations’.

Enterprise
  Advantages: availability of analytical tools (profitability, return on investment etc).
  Disadvantages: definition and identification is difficult in microenterprises; much microfinance is used for other enterprises and/or consumption; links between enterprise performance and livelihoods need careful validation.

Household
  Advantages: relatively easily defined and identified; permits an appreciation of livelihood impacts; permits an appreciation of interlinkages of different enterprises and consumption.
  Disadvantages: sometimes exact membership is difficult to gauge; the assumption that what is good for a household in aggregate is good for all of its members individually is often invalid.

Community
  Advantages: permits major externalities of interventions to be captured.
  Disadvantages: quantitative data is difficult to gather; definition of its boundary is arbitrary.

Institutional Impacts
  Advantages: availability of data; availability of analytical tools (profitability, SDIs, transaction costs).
  Disadvantages: how valid are inferences about the outcomes produced by institutional activity?

Household Economic Portfolio (i.e. household, enterprise, individual and community)
  Advantages: comprehensive coverage of impacts; appreciation of linkages between different units.
  Disadvantages: complexity; high costs; demands sophisticated analytical skills; time consuming.

4.3 Types of impact

An almost infinite array of variables can be identified to assess impacts on different units.
To be of use, these must be definable with precision and measurable. Conventionally,
economic indicators have dominated, with assessors particularly keen to measure
changes in income despite the enormous problems this presents. Other popular variables
have been levels and patterns of expenditure, consumption and assets. A strong case can
be made that assets are a particularly useful indicator of impact because their level does
not fluctuate as greatly as other economic indicators and is not simply based on an annual
estimate.

The social indicators that became popular in the early 1980s (e.g. educational status,
access to health services, nutritional levels, anthropometric measures and contraceptive
use) have recently been extended into the socio-political arena in an attempt to assess
whether project interventions can promote empowerment. This has led to the
measurement of individual control over resources, involvement in household and
community decision-making, levels of participation in community activities and social
networks and electoral participation. The bulk of this work has focused on changes in
gender relations, but there are sometimes partially-formulated assessments of class
relations within it. These extensions do add, however, to the complexity of IA work and
require the skills of assessors who are experienced at making judgements on social
relations.

In addition, impact assessors should keep the set of variables they measure to a
manageable number and not be tempted to adopt a comprehensive approach that will
adversely affect data quality and study relevance.

5. IMPACT ASSESSMENT IN PRACTICE

Good practice impact assessments will be based on the principles identified in section 3.
They must also seek to achieve a ‘fit’ with the objectives that are set, the intervention type
and its goals, and the resources and time available (Figure 5). Inevitably, this entails
compromises and trade-offs (e.g. if results are required rapidly then levels of rigour may
need to be reduced).

Figure 5 Achieving ‘Fit’

[Diagram: the impact assessment activity sits at the centre of five factors with which it must fit: the objectives of the IA, the type and scale of the ED intervention, the targets for the intervention, the resources available for the IA, and the timescale.]

5.1 Objectives of the IA: the objective(s) of the IA will shape the foci and methods of
the study. An IA may focus on accountability, improving programme/project performance,
policy development or a combination of these three objectives (Section 2). The greater the
‘mix’ of objectives set, and the higher the level of rigour to be achieved, the greater will be
the need for resources and time.

5.2 Type and scale of ED intervention: ED intervention can take a variety of forms:

• Fair trade
• Microfinance
• Regulatory frameworks
• Business development services
• Business associations and market linkages
• Business and vocational skills development
• Rural development projects
• Appropriate and innovatory technology projects
• Social and environmental enterprises
• Informal sector development and support services
• Tourism initiatives
• Privatisation of state-owned enterprises and services

The type of intervention has a significant influence on how an IA should be designed and
who should conduct it. In addition, the greater the scale of the intervention, the greater will
be the costs for any given level of rigour and detail.

5.3 Targets for the intervention: different ED interventions have different goals. Good
practice demands that achievement is assessed against:

• targets identified in the project framework

• a broader set of targets - ideally the targets set out in DFID’s Target Strategy Papers
(Box 2).

Box 2 The DFID Target Strategy Papers are to be found at [Link]
then click on International Development Targets to obtain Adobe format of each TSP.

• Halving poverty: this is central to EDD’s work, incorporating its core enterprise
development support to the legal and regulatory environment for private sector activity,
business development services and microfinance
• Making government work for poor people: building stronger links between
government and the private sector is essential to ensure both fulfil their responsibility to
the public good
• Meeting the urban challenge: this provides an opportunity to explore partnership
arrangements with communities and enterprises which link physical improvements in
infrastructure with the creation of employment and small enterprises
• Poverty eradication and the empowerment of women: provides an opportunity for
EDD to develop specific gender skills and understanding of how to further the gains in
socio-economic empowerment, linked to EDD’s knowledge of ED processes
• Environmental sustainability and eliminating poverty: an opportunity to improve
understanding of the environmental impact of enterprise development and its scope for
enhancing business opportunities
• Human rights for poor people: greater use of participatory techniques to enable people
to better understand and exercise their rights
• Addressing the water crisis: EDD has a potential role to support enterprises as users,
suppliers or polluters of water
• Better health for poor people: two components of this TSP are particularly relevant for
EDD - health care financing, and the involvement of the private sector in health care
delivery
• Education for all: EDD has a role to play in developing a coherent approach towards
educating for enterprise

5.4 Resources for the IA: the rigour and quality of an IA are partly determined by the
level of funds devoted to the IA and the quality of personnel. At present there are no clear
guidelines on what levels of expenditure IA should attract, but good practice suggests that
the level of investment in an IA should rise with the significance of the anticipated findings
(e.g. (i) if a major expansion of a project is being considered then the initial project should
be assessed in detail; (ii) if there is clear evidence that a project is failing and the main
reasons are agreed upon by all key stakeholders then a low cost IA to verify this situation
and confirm project termination may suffice).

For many forms of intervention, and in many regions, the main resource constraint is
skilled IA personnel. This often makes it necessary for EDAs to:

• book IA consultants well in advance of the IA
• build IA timetables around IA consultant availability
• seek to attach less experienced consultants to experienced IA consultants to broaden
the human resource base.

(See Box 3 for components that need to be taken into account in budgeting for IA.)

5.5 Timescale: careful consideration needs to be given to the timescale for an IA.
Good practice suggests that the time allowed for an IA should increase with:

• the scale of the ED intervention
• the complexity of the impacts
• the degree to which claims of impacts are challenged by different stakeholders.

A common problem is that insufficient time is allowed for the IA to be conducted
satisfactorily. IAs will be more effective if an initial data collection exercise precedes the IA
itself. Ideally, all ED interventions should have a baseline survey (or baseline statement)
against which impact assessors can compare their data.

Box 3: Budgets for Impact Assessment

As many aspects of impact assessment should be part of the good practice
management of a project, it is not always meaningful to try and separate out costs.
However, in thinking through impact assessment in project design/implementation,
the following components should be considered and, as far as practical, costed.
These components and costs should be agreed with the partner organisation, whose
capacity to undertake the required level of IA (e.g. their existing MIS, staff
understanding of IA) should be part of the appraisal process.

INITIAL INFORMATION COLLECTION

Baseline data:
• What baseline data is required?
• How will this be collected, and by whom?
• Are there local skilled enumerators, or is local training needed, or are external consultants required?

DESIGN OF IMPACT ASSESSMENT

Stakeholder analysis:
• What meetings are required to undertake stakeholder analysis? At these, consider:
• Who should participate in the IA, and what methods of IA will be most effective in getting relevant information? What will be the costs to stakeholders of participation?
• Are external facilitators required? Can internal staff be trained to do it?
• What are the costs of these options? Which is likely to produce the better information?

ONGOING INFORMATION COLLECTION

• What steps are required to produce data that measures impact against indicators?
• Is there an existing MIS that will generate relevant data?
• Does the MIS require modification/elaboration?
• Do staff know how to use the MIS, or is training required?
• Is there a need for periodic large scale quantitative/qualitative exercises? Are there local personnel who can undertake these?

ANALYSIS OF INFORMATION
• Who is responsible for processing information and generating meaningful reports
of progress against indicators?

DISSEMINATION OF FINDINGS
• What processes are required to disseminate findings?
• Should there be a series of workshops to involve stakeholders?
• Should the report be translated into the local language?
• Who should the report be copied to?
• Should the report be synthesised for publication in academic journals, at conferences etc?

6. CONDUCTING AN IMPACT ASSESSMENT

The level of impact assessment required, who it should be undertaken by, and the timing
of any reports should be determined during the project design phase and set out in the
Project Memorandum. There should, however, be some scope for flexibility to take
account of any changes that may take place in the project's circumstances (e.g. major
social, political or economic changes). The Logframe should highlight special features to
be included in the impact assessment, for example vulnerable groups, the impact of a
particular training programme, or the relative effectiveness of male and female staff in
reaching female clients. Consideration of these issues at the design stage will enable a
much better assessment of the practical implications of the IA and of the learning it can
generate.

It is important to engage the stakeholders, principally the implementing partners and their
staff, from the beginning. Many project staff and managers feel threatened by impact
assessment, believing their work is being judged in a critical manner and seeing it as a
personal performance review. If they have been actively engaged in the design of the
project from the start, however, and participated in the design of the IA strategy, fully
understanding its objectives, they will be more supportive. They should be encouraged to
view it as a learning experience rather than a judgemental one. Whilst failure to meet targets due
to negligence should not be overlooked or ignored, where impacts have not been as
extensive as they might have been the priority should be to discover why rather than seek
to blame, and then to consider what action can be taken to improve impact in the future.

Depending on the nature and level of impact assessment required, it may be necessary at
the start of the project, or even before, to contract out the collection of some baseline data
on which to base future work. This may involve a range of qualitative and quantitative
methods which can be time consuming and may require different skills from those
possessed by project staff or EDAs. This should also identify which of these baseline
indicators can be incorporated into existing Management Information Systems and how
this should be done. It should also identify which types of information will require external
consultants/researchers, or whether special staff will be needed in M&E departments to
conduct the research.

For all their projects, EDAs are expected to oversee project reporting by project staff, and
to participate in routine monitoring and output-to-purpose reviews. Internal monitoring for
ongoing programme-level learning, however, should take place through the integration of
specific impact indicators into existing management information systems, which makes
information immediately available to staff and allows the project's management to act on
that information more timeously. Periodic updates of this data would then be required
(ideally undertaken by the same team) throughout the project.

Regular reports (usually either quarterly or six monthly) should be sent to EDAs by project
staff. These reports should cover progress against activities and against outputs, using the
impact assessment indicators specified in the project logical framework. This information
should be gathered routinely by project staff.

Periodic project monitoring visits are required by EDAs, the frequency of visits depending
on the complexity and progress of the project. Monitoring visits should be used as
opportunities to discuss the impact of the project with stakeholders and to verify the
accuracy of project reports.

Output to purpose reviews are also required for all projects over £500 000. These may
sometimes be done by EDAs and sometimes commissioned out, largely depending on the
resources EDAs have available. These reports should focus on reporting against impact
assessment indicators at the output level and at the purpose level.

The key findings of the impact assessments at each level should be shared with the
stakeholders involved to increase programme-level learning. Impact assessment should
be regarded as a dynamic process and not as a series of static reports. Impact
assessment can only contribute to lesson learning if the information is used as a basis for
asking intelligent questions about project implementation and how it can be enhanced.

The critical issues which the designer and manager of an impact assessment study will
need to take into account are:

• Costs and confidence

The design of an IA must be very closely related to the budget available: this may be a
platitude but overambitious designs continue to lead to poor quality studies or delays that
make findings irrelevant.

While rapid appraisal approaches may appear cheaper than large-scale surveys, rigorous
qualitative IAs will require the use of high calibre staff who are given time to prepare
properly, and the importance of engaging suitable staff should not be underestimated.

Between these two extremes lies a vast array of options. Limited investments in project
monitoring by programme staff, for example including the development of an appropriate
MIS in the project design and budget, can make moderate-cost impact assessment at
high levels of quality much more feasible, as less primary data collection is then
necessary.

• Availability of human resources for impact assessment

In many, if not most, developing countries recruiting IA personnel who have the skills and
qualities to interview, collate, analyse and write up findings is a key problem at both
consultant and fieldworker levels. Commonly, different studies find themselves competing
for the same small pool of people which, while it may usefully raise payments for scarce
skills, puts these individuals under great strain and does not appear to stimulate a ‘supply
side response’. This must be recognised as a key constraint and efforts to build ‘impact
assessment’ capacities professionally and institutionally should be a priority for
development agencies if they intend to continue to emphasize the need for IA.

• Respondents: motivation and representation

The issue of how to persuade respondents to spare the time for an interview, and provide
accurate and honest answers, is an important one that is rarely mentioned in IA
methodological statements. Different strategies are needed for different types of
respondent - program beneficiary, control group and program drop out.

Beneficiaries are the easiest group to approach as generally they accept ‘answering
questions’ as one of the unavoidable transaction costs of being in a program, particularly
one supported by a foreign donor agency. Motivation can be enhanced by having
interviewers introduced by program officers, but this has the danger of linking the
assessor with field level staff and encouraging the recounting of ‘the right answers’. For
both data quality and ethical reasons the personal introductions that interviewers make
prior to interview need to be carefully worked out so that respondents understand why they
are being interviewed and have an opportunity to ask their own questions before the
interview begins.

Motivation is a more difficult issue with control groups as, having by definition no
connection with a program, they have no incentive to cooperate. In many cases, however,
the novelty value of being interviewed is sufficient encouragement (though expatriates
should note that when they are working at a field site the willingness of people to be
interviewed may be higher than is the norm because of the rarity value of foreigners). The
problems of response increase significantly if longitudinal data is collected, as second and
third interviews have much less perceived value. In such cases rewarding interviewees
should be considered to promote data quality and for ethical reasons (what right have
impact assessors to assume that the opportunity costs of an interview, particularly for poor
people, are zero?). This can take the form of a social reward, such as bringing soda
waters and snacks to share with respondents (this works well in East and Southern Africa),
or ‘bribery’ where the interviewee is paid cash for surrendering her/his time.

Program drop-outs represent a particular problem, and a failure to pursue drop-outs may
have led to some IAs underestimating the negative impacts of projects. When the drop-
out is traceable then significant effort is merited to obtain an interview/re-interview. Where
drop-outs cannot be traced, or death has occurred, then a replacement respondent
sampled at random from the original population, and preferably from the same stratum,
should be interviewed.
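A minimal sketch of drawing such a replacement at random from the same stratum (the respondent identifiers and strata are hypothetical):

```python
# Sketch: replace an untraceable drop-out with a respondent drawn
# at random from the same stratum of the original sampling frame.
import random

# Hypothetical sampling frame: respondent id -> stratum.
frame = {
    "r01": "female, rural", "r02": "female, rural",
    "r03": "male, urban", "r04": "female, rural",
    "r05": "male, urban",
}
already_interviewed = {"r01", "r03"}

def replacement_for(drop_out: str) -> str:
    """Pick an uninterviewed respondent from the drop-out's stratum."""
    stratum = frame[drop_out]
    candidates = [r for r, s in frame.items()
                  if s == stratum and r != drop_out
                  and r not in already_interviewed]
    return random.choice(candidates)

print(replacement_for("r01"))  # prints "r02" or "r04"
```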

Participatory and rapid appraisal methods that work with groups generally manage to
muster respondents because of the social interaction they create. However, care needs to
be taken to observe who has turned up and, perhaps more significantly, who has not come
to the meeting, and why. It is not necessarily the case that participants in a PLA exercise
represent ‘the community’. Additional interviews or focus groups may be necessary to
collect information from people who do not turn up for communal PLA or RRA sessions.

• The problem of ‘low impact’ impact assessments

A final problem of IA concerns the impact of IAs on policy and practice. This depends in
part on the original objectives of a study. It applies to both ‘proving’ and ‘improving’ IAs.
The evaluation literature of the 1980s bemoans the limited influence of evaluation on
subsequent decision-making.

There are a number of ways this problem can be ameliorated:

(i) Impact assessors need to devote more time to the ‘use’ of their studies (and
perhaps a little less time to the product itself!). Their focus must go beyond ‘the
report’ into a dissemination strategy aimed at decision-makers. Bullet point
summaries, short user-friendly papers, snappy presentations and strategic cups of
coffee are the key to achieving this.

(ii) The timing of findings needs to be carefully considered. As a general rule of thumb,
the longer the time between data collection and the presentation of findings, the
lower the impact for IAs focused on ‘improving’ practice. The common response to
initial findings presented more than 9 months after completion of fieldwork is ‘our
program has already been redesigned so your findings have little relevance’.

(iii) Program managers often regard impact assessors as impractical people who have
lots of time on their hands. For high cost approaches pursuing the scientific method
this will be of only limited significance as the people to whom one’s results must be
credible are in Washington and European capitals. However, for the vast majority
of IA studies the issue of how to develop constructive relationships with program
staff requires careful thought and action. Efforts to achieve co-ownership of findings
by involving program staff in IA design, showing respect for their ideas and
opinions, and discussing interim findings are ways of making influence more
probable.

• How ‘robust’ have the findings got to be?

If an IA is to provide findings to a high degree of confidence (e.g. 95% confidence levels in
statistical tests) then in most cases a ‘complex approach’ will be needed. This will be
costly and time-consuming. By contrast, if an IA is required to provide corroboration of
programme impact and strengthen aspects of implementation then a ‘simple approach’
should be adopted.

The following approaches have been developed for micro-finance institutions, but are
largely applicable to other enterprise projects, although measurable assessment of outputs
and outcomes is harder to achieve in some other areas (e.g. impact of business
development services), and will need to be well supported by qualitative and participatory
studies.

Simple Approach

These are the most numerous forms of IAs. Reliability is moderate, at best (and based
mainly on triangulation), and the major objective is to test the existing understanding of
impacts and contribute to improvements in programme operation. The main audiences
are programme managers and donor ‘country-based’ staff. The central methodological
feature of such an approach is the use of a variety of methods. Usually this involves a
small scale client survey, compared with a comparison group that could be rapidly
identified (e.g. approved clients who have not yet received services) and cross-checked by
rapid or participatory appraisal methods. If a baseline study is not available then a recall
methodology would be utilised. The key variables to be studied would depend on
programme objectives, but for easily quantifiable variables (e.g. income and assets) the
focus would be on ordinal and nominal measurements. For programmes prioritising
empowerment goals and local institutional development, participatory methods would
be highlighted and the survey work might be dropped altogether.
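As an illustration of what ordinal measurement can look like in such a survey, the sketch below codes a household asset question on an ordered scale and compares clients with the comparison group using a rank-based test. The coding scheme and data are hypothetical, and the example assumes scipy is available:

```python
# Sketch: ordinal coding of an asset indicator and a rank-based
# comparison of clients against a comparison group (hypothetical data).
from scipy.stats import mannwhitneyu

# Ordered categories: 0 = no productive assets, 1 = hand tools only,
# 2 = small equipment, 3 = major equipment (e.g. a sewing machine).
clients = [2, 1, 3, 2, 2, 1, 3, 2]
comparison = [1, 1, 2, 0, 1, 2, 1, 1]

# The Mann-Whitney U test works on ranks, so it suits ordinal data
# where gaps between categories are not meaningful as numbers.
stat, p_value = mannwhitneyu(clients, comparison, alternative="greater")
print(f"U = {stat}, one-sided p = {p_value:.3f}")
```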

A Moderate Approach

The moderate approach would involve substantially more costs than the simple approach,
would yield higher levels of reliability (statistical inference rather than triangulation) but is
not likely to deliver findings for a period of 2 to 3 years. Its focus is on both proving impact
and improving programmes. Its audiences would include policymakers (looking for
reassurance about their agency’s investments) and the senior managers of programmes.
The ‘mix’ would centre on a significant survey that would stratify clients and compare them
with a carefully matched control group. The survey would involve at least two visits with a
minimum of 12 months between them and recall techniques would not be used.
Contextual and cross-checking materials would be produced by rapid appraisal
techniques, and carefully planned participant observation and case studies might also be
commissioned. The selection of variables would depend on programme objectives but is
likely to be more extensive than for the simple approach, and measurement would focus
on interval and nominal scales.

A Complex Approach

The complex approach focuses on ensuring high levels of reliability with regard to the
attribution of causality and has an exclusively ‘proving’ orientation. Its main audiences are
policymakers and researchers and it is likely to be 4 to 6 years after launch before findings
are available. The central method in such an approach is a large scale sample survey
very carefully constructed to represent all key features of the client population. This is
compared against a carefully selected control group, so that the number of households
surveyed is likely to be between 750 and 1500. At least 3 interviews will be conducted
with each household over a period of 2 to 3 years. A wide set of variables will be
measured and the focus will be on high precision through interval measurements. A set of
related studies on institutional performance would be conducted, but the heart of the study
would be the econometric analysis of survey findings.
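Sample sizes of this order are what standard power calculations produce for small effects. As a hedged illustration (the effect size, significance level and power below are conventional assumptions, not figures from the paper), using the usual normal-approximation formula for comparing two group means:

```python
# Sketch: households needed per group to detect a small standardised
# effect (Cohen's d = 0.2) with a two-sided 5% test at 80% power.
from scipy.stats import norm

alpha, power, effect_size = 0.05, 0.80, 0.2
z_alpha = norm.ppf(1 - alpha / 2)  # about 1.96
z_beta = norm.ppf(power)           # about 0.84

n_per_group = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2
print(f"about {n_per_group:.0f} per group, {2 * n_per_group:.0f} in total")
# Prints about 392 per group (785 in total) - the same order as the
# 750-1500 households above, before allowing for attrition across
# repeated interview rounds.
```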

• Summary

The key task for the IA designer is to select an approach that can meet the objectives of
the specific assessment at an acceptable level of rigor, that is compatible with the
program’s context, feasible in terms of costs, timing and human resource availability and
that avoids the problems identified in earlier sections. Wherever possible an IA
methodology should be piloted before full implementation.

The questions that s/he must answer can be summarised as follows:


• What are the objectives of the assessment?

• How is the information to be used and by whom?

• What level of reliability is required?

• How complex is the program, what type of program is it, what is already known about it?

• What resources (money, human and time) are available?

7. COMMISSIONING AN IMPACT ASSESSMENT: TERMS OF REFERENCE FOR
CONSULTANTS

EDAs may wish to commission external impact assessments where:

- a complex IA has been decided on at the design stage, involving a range of quantitative,
qualitative and participative methods
- a project is not achieving the stated outputs and a greater level of analysis is needed to
determine the reasons
- an EDA wishes to look at IA across a sector or theme (e.g. outreach to the urban poor)
to draw comparative conclusions.

Terms of Reference should be drawn up in collaboration with project staff and with
professional support from DFID colleagues in other sectors. Costs of the IA strategy
should have been included in the project's budget at the design stage; these should have
included capacity for the project to absorb the lessons that may emerge from the study, for
example to hold workshops with staff to disseminate the findings and negotiate changes in
the organisation's work that will lead to improved impact in the future.

Detailed guidance on the process for commissioning an impact assessment is contained in
DFID's Evaluation Guidelines.1 The following is a summary of the main points to be
included in the Terms of Reference, which can then be used as the EDA's guide for
steering the IA through to satisfactory completion.

Model Terms of Reference for Consultants

1 Project Title: Brief Description

2 Background

This should provide general background and context for the proposed consultancy:
• reasons for the consultancy (including details of problems/constraints faced by the
recipient)
• the level of local resources/capacity available to support the consultancy
• the proposed role of the consultants
• arrangements for working with local staff (including role of counterparts and/or local
task forces) and some idea of the existing MIS and what data is already available.

3 Overall Objectives

Clearly state the objectives and intended users of the impact assessment. Ideally the
objectives should include both justifying programme investments and improving
programmes.

4 Scope and Methods of the Work

This section should list in detail all of the tasks and activities to be carried out by the
Consultant in the sequence that they are expected to be undertaken.

1 Available from the Monitoring & Evaluation Dept (link to DFID M&E site).

- State the key research questions. These should be guided by the intended use of
the information that is to be gathered, and limited to those which it is most
important to address

- Identify the key partners in the IA process and the stakeholders to be included in
the sample – this will affect methodology.

- Determine the method or mix of methods envisaged for the IA. The choice of
methods should be based on the objectives of the impact assessment, the key
research questions, the availability of resources (time, money and people) and the
degree of generalisation and precision required. Involving project staff in the design
process is very important for improving the credibility and ultimate usefulness of an
IA. Their knowledge of the programme and the context can be invaluable in
informing many aspects of the research design.

Checklist:

• Are the methods proposed consistent with the time and resources available for the impact assessment?

• Will the methods provide the type and quality of impact assessment required by the stakeholders?

• Have specific questions or hypotheses relating to each impact assessment indicator been generated during the inception stage of the impact assessment?

• Will the methods to be used by the impact assessors provide valid and reliable information which will allow these questions to be answered?

• Are the methods to be used clearly described in the proposal and inception report?

5 Expected Outcome and Deliverables

The general outcome should be described. Against each of the tasks and activities set out
under the scope of work there should be a corresponding output. It is important that
consultation with stakeholders on the draft report is included, to assist in developing
ownership of the findings. Not all stakeholders will necessarily agree with the findings; this
should be reflected in the report. Dissemination of the report should also be included as a
key outcome.

There will be occasions when the partner organisation feels the outcome of the IA is not a
fair reflection of their achievements or did not take into account some special
circumstances. This can be a source of tension, even conflict. It is most likely to happen
when the project staff and management feel their views were not taken into account in
determining indicators and criteria for assessment, and confirms the need to engage them
in design of the IA strategy from the start.2

2 DFID’s Centre for Social Dimensions of Business Practice is developing guidelines for dealing with dissent (link required).

6 Competency and Expertise Requirements

Minimum requirements and preferences should be stated, and CVs requested for the
consultants put forward to carry out substantial parts of the work. You should expect them
to advise of their:

• professional expertise relating to the issue being evaluated
• knowledge of the country / region
• general development expertise and experience
• cross-disciplinary skills (e.g. social, economic and institutional)
• awareness of gender and cultural issues
• impact assessment experience, and with what methodologies
• any special language skills

7 Conduct of the Work

This should set out any useful information about the way the consultancy is expected to be
organised and implemented. In particular it should state what will be:

• the role of the consultancy team leader;
• the role of team members and the organisation of the team;
• the design and implementation of work programmes;
• target dates for completion of work programmes;
• the role of local counterpart staff in the conduct of the work;
• how the research process can benefit those involved;
• how the research process will contribute to the programme.

8 Reporting Requirements

This covers both routine reporting on the progress of the assignment and on the final
outcome/ conclusions of the work carried out.
Particular attention should be paid to:
• the scope and timing of progress reports for DFID and recipients
• the need for presentations/ workshops to discuss progress and conclusions with project
staff and management
• the coverage and timing of reports, setting out the results of the consultancy.

Impact assessment reports need to be short, clear, and easy to read if they are to be
accessible and effective. Authors should write for a general audience, and should bear in
mind that English will not be the first language of many readers. Technical and academic
jargon should be avoided. Long reports will not be read, and are expensive to translate.
At a minimum the report should include:

- A clear statement of the objective of the impact assessment
- A description of the programme including its objectives
- A description of the methodology used for qualitative and quantitative components,
especially the criteria and process for selecting participants
- Findings, interpretation of results, conclusions and, when appropriate,
recommendations for programme improvement

Dissemination of Findings

Impact assessment findings will need to be effectively disseminated within DFID and its
partners if the aim is policy lesson learning. Accountability requires that the impact
assessment findings are accessible to the public and participants in the UK and partner
countries.

For this to be the case:

• dissemination needs to be planned from the beginning, and included in the TOR.
• dissemination needs to be directed at, and tailored for, specific groups. Dissemination should be project / audience-led, not consultant- or product-led.
• reports (or at least summaries) should be translated into local languages where necessary (and the costs of this included in the project budget).
• findings need to be openly and widely disseminated throughout the process, not just at the end.
• the budget needs to contain sufficient resources for effective dissemination.

Dissemination methods need to be designed and implemented on a case by case basis.
Three primary methods of dissemination can be used:

• presentations, workshops and seminars
• impact assessment reports and summaries
• articles in general and academic publications

8. COMMON IMPACT ASSESSMENT METHODS

Table 1 (below) sets out the methods most often used in IAs, and Table 2 (on the next
page) summarises the comparative strengths and weaknesses of these methods.

Table 1: Common Impact Assessment Methods

Method: Sample Surveys
Key Features: Collect quantifiable data through questionnaires. Usually a random sample and a matched control group are used to measure predetermined indicators before and after the intervention.

Method: Rapid Appraisal
Key Features: A range of tools and techniques developed originally as rapid rural appraisal (RRA). It involves the use of focus groups, semi-structured interviews with key informants, case studies, participant observation and secondary sources.

Method: Participant Observation
Key Features: Extended residence in a program community by field researchers using qualitative techniques and mini-scale sample surveys.

Method: Case Studies
Key Features: Detailed studies of a specific unit (a group, locality, organisation) involving open-ended questioning and the preparation of ‘histories’.

Method: Participatory Learning and Action
Key Features: The preparation by the intended beneficiaries of a program of timelines, impact flow charts, village and resource maps, well-being and wealth ranking, seasonal diagrams, problem ranking and institutional assessments through group processes assisted by a facilitator.

Table 2: Comparative strengths and weaknesses of different methods

Each criterion below is rated for the five methods in this order:
Surveys | Rapid Appraisal | Participant Observation | Case Studies | Participatory Learning and Action

1. Coverage (scale of applicability): High | Medium | Low | Low | Medium

2. Representativeness: High | Medium | Low | Low | Medium

3. Ease of data standardisation, aggregation and synthesis (e.g. quantification):
High | Medium | Medium or low | Low | Medium or low

4. Ability to isolate and measure non-project causes of change: High | Low | Low | Low | Low

5. Ability to cope with the attribution problem: High | Medium | Medium | Medium | Medium

6. Ability to capture qualitative information: Low | High | High | High | High

7. Ability to capture causal processes: Low | High | High | Medium | High

8. Ability to understand complex processes (e.g. institution building):
Minimal | Medium | High | Medium | Medium

9. Ability to capture diversity of perceptions: Low | High | Very high | Medium | High

10. Ability to elicit views of women and disadvantaged groups:
Low | Medium (if targeted) | High | High | Medium

11. Ability to capture unexpected or negative impacts: Low | High | Very high | High | High

12. Ability to identify and articulate felt needs:
Low | High | High | Medium (due to low coverage) | High

13. Degree of participation encouraged by method: Low | High | Medium | Medium | Very high

14. Potential to contribute to stakeholder capacity building:
Low | High | Low | Medium to low | Very high

15. Probability of enhancing downwards accountability: Low | High | Medium | Medium | High

16. Human resource requirements: Specialist supervision of large numbers of less
qualified field workers | Highly skilled practitioners who are able to write up and
analyse results | Medium-skilled practitioners, with good supervision, who are
prepared to commit for a lengthy period | Medium-skilled practitioners with good
supervision | Highly skilled practitioners

17. Cost range: Very high to medium | High to medium | Medium to low | Medium | High to low

18. Timescale: Very high to medium | Medium to low | High | High to medium | Medium to low

Source: Adapted from Montgomery et al. (1996).

As mentioned in Section 1, IA methods fall into three broad categories: qualitative,
quantitative and participatory. Techniques from each of these categories can be combined
in any one IA, always bearing in mind the purpose of the IA.

Quantitative approach

This most closely resembles a 'scientific method', but even with careful design is still
subject to problems such as:
• sample selection bias
• mis-specification of underlying causal relationships, and
• respondent motivation

Selection bias may occur because of:

i. difficulties in finding a location at which the control group’s economic, physical and
social environment matches that of the treatment group

ii. the treatment group systematically possessing an ‘invisible’ attribute which the control
group lacks (most commonly identified as entrepreneurial drive and ability)

iii. receiving any form of intervention may result in a short-term positive response from the
treatment group

iv. the control group may become contaminated by contact with the treatment group.

Problems (i) and (iv) can be tackled by more careful selection of the control group. This
applies particularly to controlling for access to infrastructure (which has a key influence on
input and output prices, as well as on other variables) and to ensuring that the control group
is located far enough away from the treatment group to avoid contamination. Problems (ii)
and (iii) are more intractable, but in many cases they can be tackled by using
programme-accepted ‘clients-to-be’, who have not yet received the services, as the control
group (see the sketch below). It must be noted, however, that this approach will not be valid
when the take-up of services is based on diffusion through a heterogeneous population.
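
To make the pipeline comparison concrete, a minimal sketch is given below. This example
is purely illustrative and not part of the original guidance: the income figures, group sizes
and variable names are hypothetical, and a real IA would draw proper random samples and
apply statistical tests rather than a simple difference of means.

    # Minimal sketch (hypothetical data): comparing current clients with
    # programme-accepted 'clients-to-be' used as a pipeline control group.
    from statistics import mean

    # Monthly enterprise incomes as they might be recorded by a survey
    current_clients = [120, 95, 140, 110, 130]    # have received services
    pipeline_clients = [100, 90, 105, 95, 110]    # accepted, not yet served

    # Both groups passed the same programme screening, so the 'invisible'
    # attribute (e.g. entrepreneurial drive) should be similar in each,
    # reducing the selection bias described in problem (ii) above.
    estimated_impact = mean(current_clients) - mean(pipeline_clients)
    print(f"Estimated impact on mean monthly income: {estimated_impact:.1f}")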

The mis-specification of underlying causal relationships arises most commonly because of
the assumption that causality is a one-way process. Overcoming this can be enormously
demanding in terms of data requirements, technical expertise and costs, and is feasible
only on rare occasions. The main means of dealing with it are: (i) tracing dropouts from
both the treatment and control groups; (ii) conducting IAs only on relatively mature
programmes; (iii) interim impact monitoring activities to gather qualitative information
about the complexity of causality; and (iv) retrospective, in-depth interviews with clients.

Respondent motivation operates at two levels. Firstly, the people who take up the
services may be more strongly motivated to participate in the intervention than those in the
control group (similar to the ‘entrepreneurial factor’ referred to above). Secondly, their
motivation when taking part in the IA itself, whether in completing a survey form or in the
course of an interview, may be influenced by their desire to say what they think the
interviewer wants to hear, by how they relate to the interviewer, by the time they are able to
give to completing the form, and so on. Training interviewers to ask questions in a similar
manner, and careful design of forms, may help to some extent, but it is impossible to
overcome this problem completely.

Qualitative Approach

The qualitative approach attempts to resolve some of the problems described above by
seeking to provide an interpretation, with a high level of plausibility, of the processes
involved in the intervention and of the impacts achieved. It recognises that there are
usually different, and often conflicting, accounts of what has happened and of what has
been achieved by a programme. The validity of specific IAs adopting this approach has to
be judged by the reader on the basis of:

(i) the logical consistency of the arguments and materials presented;
(ii) the strength and quality of the evidence provided;
(iii) the degree of triangulation used to cross-check evidence;
(iv) the quality of the methodology; and
(v) the reputation of the researcher(s).

Although such work has been common in development studies for decades, it was only
during the 1980s that its relevance for IA was recognised. This recognition arose partly
because of the potential contribution of qualitative approaches (especially in
understanding changes in social relations, the nature of relations between programme
staff and beneficiaries, and fungibility), and partly because of the widespread recognition
that much IA survey work was based on inaccurate information collected by questionnaire
from biased samples. Low-budget and low-rigour IAs claiming to adopt the scientific
method were at best pseudo-science, but more often simply bad science, despite the
sophisticated analytical tools that were applied to poor datasets.

However, IAs with their roots in the humanities have considerable difficulties with regard to
the attribution of cause and effect. Such studies cannot usually demonstrate the causal
link, as they are not able to generate a ‘without programme’ control group (although at
times some researchers neglect to mention this to the reader and simply assume
causality). Instead, causality is inferred from information about the causal chain collected
from intended beneficiaries and key informants, and from comparisons with data from
secondary sources about changes in out-of-programme areas. Problems also arise
because, not infrequently, the labels ‘rapid appraisal’, ‘mini-survey’ and ‘case study’ are
applied to work which has been done in an ad hoc manner and does not achieve a
minimum professional standard in terms of informant selection and the rigour of data
collection and analysis. Examples of this include: (i) basing data collection only in
programme areas that are performing well, and surveying only the best clients; and
(ii) inferring that the data collected in one area apply to all clients, without explaining this
assumption.

While such studies cannot provide the degree of confidence in their conclusions that a fully
resourced scientific-method approach can yield, in some cases their conclusions may be
more valid than survey-based IA work that masquerades as science but has not collected
data with scientific rigour. In any case, it is becoming increasingly common to combine
‘scientific’ and ‘humanities’ approaches so as to check the validity of information and
provide added confidence in the findings. In future, dealing with attribution through
multi-method approaches seems the way forward.

Participatory Approaches: PLA (Participatory Learning and Action)

In the last five years, participatory approaches to development planning and management
have moved from being a fringe activity to centre stage. While many donor agencies have
simply added a little PLA to their existing procedures, it can be argued that this is
inappropriate, as participatory approaches conceptually challenge the validity and utility of
the scientific method as applied to developmental problems.

According to this line of argument the scientific/quantitative method fails as:

• it ignores the complexity, diversity and contingency of winning a livelihood;

• it reduces causality to simple unidirectional chains, rather than complex webs;

• it measures the irrelevant or pretends to measure the immeasurable; and,

• it empowers professionals, policymakers and elites, thus reinforcing the status quo and
directly retarding the achievement of development goals.

Supporters of participatory approaches do not believe that ultimately there is one objective
reality that must be understood. Rather, there are multiple realities and before any
analysis or action is taken the individuals concerned must ask themselves, ‘whose reality
counts?’. Their answer is that the perceived reality of the poor must take pride of place as,
if development is about ‘empowering the poor’ or ‘empowering women’ (as virtually all
development agencies now say), then the first step towards empowerment is ensuring that
‘the poor’ or ‘women’ take the lead in problem identification and analysis and knowledge
creation.

For impact assessment the purist PLA line is damning: conventional baseline surveys are
virtually useless for impact assessment, and the question now is how widely local
people can be enabled to identify their own indicators, establish their own participatory
baselines, monitor change, and evaluate causality. By this means two objectives may be
achieved:

(i) better impact assessments; and

(ii) the empowerment of intended beneficiaries through the research process itself.

In practice, the art of participatory impact assessment (PIA) is in its infancy and a
pragmatic rather than a purist approach has been common.

The reliability of participatory methods varies enormously, as with ‘scientific’ surveys,
depending largely on the motivation and skills of the facilitators and of those investigated,
and on the ways in which informants’ perceptions of the consequences of the research are
addressed. Nevertheless, it is argued that a number of rigorous comparative studies
have shown that, when well conducted, participatory methods can be more reliable
than conventional surveys.

To date, the literature on PLA and PIA has only partially addressed the issue of
attribution. From a scientific perspective PIA has grave problems because:

• its conceptualisations of impact are subjective;

• the data used to assess impact are subjective;

• the variables and measures used vary from case to case and do not permit
comparison;

• its pluralist approach may lead to a number of mutually conflicting accounts of
causality being generated; and

• the assumption that, because many people are taking part in an exercise, all are
able to ‘voice’ their concerns (so that opinions are representative) is naive about
the nature of local power relations.

From the perspective of a ‘new professional’, however, such a set of accounts is
unproblematic, as it reflects the complexity and contingency of causality in the real
world. In addition, it can be argued that PIA contributes to programme goals (perhaps
particularly in terms of empowering women and the poor) by not facilitating the
continued dominance of target groups by powerful outsiders.

Because of the different pattern of strengths and weaknesses offered by each
method of IA, there has been a growing consensus amongst impact assessors that
the central question is no longer ‘What is the optimal method for this study?’ but rather
‘What mix of methods is most appropriate for this study, and how should they be
combined?’

Depending on the level of resources available and the context, impact studies
increasingly seek to combine the strengths of different approaches: in particular, to
combine the advantages of sample survey and statistical approaches
(representativeness, quantification and attribution) with the advantages of humanities
or participatory approaches (the ability to uncover processes, to capture the diversity
of perceptions and the views of minorities, to detect unexpected impacts, etc.).

33
In well-resourced studies with long timescales, all of these different methods may be
utilised in a comprehensive fashion. In cases where a high degree of statistical
confidence is required (for example, when it is desired to ‘prove’ impact for policy or
major investment purposes), a large-scale, longitudinal sample survey must be
mounted, preferably supported and triangulated by the use of other methods on a
limited scale. By contrast, if an IA is required to provide independent corroboration
of the impact of a small-scale programme and to strengthen aspects of its
implementation, then a mix of rapid appraisal and a small-scale survey is likely to be
appropriate.
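
The attribution arithmetic at the core of such a longitudinal survey can be sketched
briefly. The example below is hypothetical and not drawn from the original text: the
indicator values are invented, and in practice the calculation would sit within a full
statistical analysis of a properly drawn sample.

    # Minimal difference-in-differences sketch (hypothetical data): the
    # control group's change estimates what would have happened without
    # the programme, so subtracting it addresses the attribution problem.
    from statistics import mean

    # Indicator values from a baseline survey and a follow-up survey
    treatment_before, treatment_after = [100, 90, 110], [130, 115, 140]
    control_before, control_after = [95, 105, 100], [105, 112, 108]

    change_treatment = mean(treatment_after) - mean(treatment_before)
    change_control = mean(control_after) - mean(control_before)

    # Impact attributed to the intervention
    impact = change_treatment - change_control
    print(f"Treatment group change: {change_treatment:.1f}")
    print(f"Control group change:   {change_control:.1f}")
    print(f"Attributed impact:      {impact:.1f}")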

Table 3 summarises the conditions in which key methods are, and are not,
appropriate.

Table 3: Conditions in which key methods are, and are not, appropriate

Sample Surveys are appropriate when:
• a project affects large numbers of beneficiaries;
• policymakers require accurate estimates of project impacts;
• statistical comparisons must be made between groups over time and/or between
locations;
• project delivery/implementation mechanisms are operating well, thereby justifying
investment in the assessment of impacts;
• the target population is heterogeneous and it is difficult to isolate the influence of
factors unrelated to the project (e.g. contextual variables, other programmes).

Sample Surveys are usually not appropriate when:
• a project affects small numbers of beneficiaries;
• policymakers are mainly concerned with project outcomes (Was infrastructure
completed on time and within budget? How many people use the health clinics?);
• project implementation is recent and untested, and it is likely that the way in which
the project is implemented will have little impact at the present time;
• the purpose of the assessment is to study and evaluate complex activities or
processes (e.g. the development and operation of community-based organisations,
or the qualitative use of skills as a result of training programmes);
• the purpose of the assessment is to document easily observable changes in the
physical environment or other tangibles (which can be assessed through simpler,
structured visits);
• the purpose of the assessment is to understand whether or not the project is
meeting the felt needs of project clientele.

Rapid Appraisal and/or PLA are appropriate when:
• the project is adopting or promoting participatory principles in (re-)planning,
implementation, monitoring and evaluation;
• an understanding of the motivations and perceptions of project clientele is a priority;
• one of the purposes of the study is to assess whether or not felt needs are being
addressed by the project;
• the impact of community-based organisations or other institution-building activities
is of importance;
• there is a need to understand the quality of other data collected through surveys;
• there is a need for contextual studies before designing more complex monitoring or
impact assessment exercises (e.g. case studies, or surveys).

Rapid Appraisal and/or PLA are not appropriate when:
• projects are relatively uncomplicated and bounded locations are not units of
analysis (e.g. health centres serving a wide catchment area, or large schools
serving a variety of communities);
• indicators of project impact are uncontroversial, and negative impacts are unlikely;
• standardised and statistically representative generalisations for a large and diverse
project population are regarded as the sole priority;
• participation of clientele is not a priority (e.g. in public administration or power
sector reform, or an organisational change programme; in these types of projects
more limited focus group discussions with staff may be more appropriate).

Participant Observation and/or Case Studies are appropriate when:
• an understanding of the motivations and perceptions of project clientele is a priority;
• other methods (surveys and rapid appraisals) are unlikely to capture the views of
minorities or women;
• one of the purposes of the study is to assess whether or not felt needs are being
addressed by the project;
• the impact of community-based organisations or other institution-building activities
is of importance;
• there is a need to understand the quality of other data collected through surveys or
rapid appraisals (e.g. causal processes);
• there is a need for contextual studies before designing more complex monitoring or
impact assessment exercises (e.g. before carrying out rapid appraisals, or before
designing a survey).

Participant Observation and/or Case Studies are usually not appropriate when:
• the project is small and ‘uncomplicated’, providing a specific service or limited
intervention which is unlikely to affect community dynamics beyond a few specific
effects (e.g. disease-specific health facilities or campaigns);
• bounded locations are not units of analysis (e.g. health centres serving a wide
catchment area, or large schools serving a variety of communities);
• indicators of project impact are clear and easily measurable or assessable (by
survey or rapid appraisal);
• indicators of project impact are uncontroversial, and negative impacts are unlikely;
• information is needed quickly, and standardised, representative (statistical)
generalisations are regarded as the sole priority.

Source: Adapted from Montgomery et al. (1996).

KEY REFERENCE DOCUMENTS:

DFID, Draft Evaluation Guidelines (First Draft, November 2000), Evaluation
Department, DFID, UK.

Gosling, L. and Edwards, M., Toolkits: A Practical Guide to Assessment, Monitoring,
Review and Evaluation, Save the Children Development Manual 5, SCF, London.
