
TECHNIQUES FOR M&E

OVERVIEW OF M&E
• Monitoring and Evaluation are closely related concepts that are distinct but complementary.
• Monitoring is the continuous collection of data on specified indicators to facilitate decision making on whether an intervention (project, programme or policy) is being implemented in line with its design, i.e. its activity schedules and budget.
• Evaluation is the periodic and systematic collection of data to assess the design, implementation and impact of an intervention in terms of the effectiveness, efficiency, distribution and sustainability of its outcomes and impacts.
Review of concepts
• Relevance – is what we are doing now a good
idea in terms of improving the situation at
hand? Is it dealing with the priorities of the
target group? Why or why not?
• Effectiveness – Have we done what we set out
to do? Why or why not? Is what we are doing
the best way to maximise impact?
• Efficiency – are resources being used in the best
way possible? Why or why not? What can we do
differently to improve this?
Review of concepts – cont’d
• Impact – to what extent has the project
contributed to making qualitative changes e.g.
poverty reduction, reduction in the number of
the homeless? What are the anticipated negative and positive consequences of the project? Why are they arising?
• Sustainability – will there be continued positive
impacts as a result of the project after the
project funds run out? Why or why not?
Review of concepts – cont’d
• There are very many other concepts that are
of relevance to M & E. Identify these in class
Why carry out M & E?
• Accountability: demonstrating to donors,
taxpayers, beneficiaries and implementing
partners that expenditure, actions and results
are as agreed or can reasonably be expected in
the situation.
• Operational management/Implementation:
provision of the information needed to co-
ordinate the human, financial and physical
resources committed to the project or
programme, and to improve performance.
Why carry out M & E? – cont’d
• Strategic management: provision of information
to inform setting and adjustment of objectives
and strategies.
• Capacity building: building the capacity, self-
reliance and confidence of beneficiaries and
implementing staff and partners to effectively
initiate and implement development initiatives.
• Organizational learning and adaptive
management.
What are the benefits?
Benefits at a sector level
• Improve project and programme design through
feedback provided from baseline, mid-term, terminal
and ex-post evaluations
• Inform and influence sector and country assistance
strategy through analysis of the outcomes and impact
of interventions, and the strengths and weaknesses of
their implementation, enabling governments and organizations to develop a knowledge base of the types of interventions that are successful (i.e. what works, what does not, and why).
• Provide the evidence basis for building consensus
between stakeholders
Benefits at the project level
• Provide regular feedback on project
performance and show any need for ‘mid-
course’ corrections
• Identify problems early and propose solutions
• Monitor access to project services and
outcomes by the target population;
• Evaluate achievement of project objectives
• Incorporate stakeholder views and promote
participation, ownership and accountability
Organisation for M & E

• Discussed in terms of: What to do? Who should do it? When to do it?
Organisation for M & E

What to evaluate?
• Processes
– Context evaluation – the environment within which the project is operating (beneficiaries, partners, project managers, external factors and risks)
– Implementation evaluation – intrinsic to the
project itself – answers questions regarding
project components, project implementation
process, organization structure
Organisation for M & E

What to evaluate? – cont’d


• Products
– What products are produced?
– Of what quality, why?
Organisation for M & E

What to evaluate? – cont’d


• Impacts
– Are they positive or negative?
– Have they been caused by the project?
Organisation for M & E

Who should do it?
• M & E can be carried out:
– Internally
– Externally
– Internally but with the use of consultants
– Independent evaluators
• What are the pros and cons in each case? Consider learning and action, accountability, objectivity, and cost in terms of logistics and money.
Organisation for M & E

When to do it?
• A continuous process (as already mentioned in
previous discussion)
Organisation for M & E

• In all cases, the following items should always be made clear before embarking on an M&E process:
– The purpose, scope and timeframe
– Expected deliverables
– Key research questions to be addressed
– Activities to be carried out and their financial and
temporal costs
• It should also be ensured that persons charged
with carrying out M&E are competent to do it!
Essentials of a functional M & E system
Projects do not exist in isolation from external factors such as government policies, the political environment and product competitors. In addition, projects are initiated to achieve specific purposes. A functional M&E system should be sensitive to these realities. As such it should:
• Have a clear linkage with the Organisational/ National Development
Strategies and policies relating to its area of concern. This
necessitates the need for situational analysis before designing M&E
systems
• Have clear statements and linkage with project objectives. Normally
there are several of these. These should be measurable.
• Have a structured set of indicators for the project inputs, processes,
outputs, outcomes, impact, and exogenous factors. These, in most cases, are the proxies through which M&E is carried out
Essentials of a functional M & E system
• Have clear data collection mechanisms in order to
remain consistent and objective
• Be capable of monitoring progress over time. It should
thus have ways of including the findings from baseline
surveys and a means for comparing progress and
achievements against targets.
• Have a clear mechanism and structures for reporting
and use of M&E results in decision-making.
• Since M&E is carried out within some institutional set-
up, it should have a sustainable organizational
arrangement for data collection, management,
analysis, and reporting.
Situational analysis
• Should be understood to be part and parcel of project
planning and management. Enables us to examine the
current situation of our project and its external
environment so that we can identify and agree on major
issues that affect how we plan for its future
• Within project M&E set-ups, situational analysis is
carried out to obtain the baseline conditions that will
enable future examination of:
– Discrepancies between actual implementation and
implementation schedule
– Actual resource use vs. planned resource use
– Development of performance indicators
– Project impacts
– Identification of assumptions of the project
– Population to target – from cross-sectional studies (cont'd on next slide)
Why carry out a situational analysis – cont’d
– Benefit enjoyment among the target population. Take, for instance, the case of an HIV testing & counselling centre:
• Are the service and facilities located where they are needed?
• Do people know that the service and facilities exist?
• Are the service and facilities available to everyone who needs them?
• Are the services offered available when needed?
• Are the services and facilities affordable?
• Do people actually use the facility?
– Etc
(All these feed into the logical framework, which forms an important part of M&E.)
Situational analysis
• Key information required to carry out a
situational analysis
– Largely determined by the profile of the
population within which a project is located
– Includes
• Socio-demographic factors within the project
environment e.g. the housing conditions
• Economic factors within and outside the project environment e.g. household incomes, poverty levels, etc.
• Socio-political factors which are both internal as well as
external to the project environment e.g. policies, law/
regulations, public participation etc.
Stakeholder analysis
• Who is a stakeholder? – any entity with a stake in a policy/ project at hand. Can be
organisations, individuals, groups (organised or otherwise)
• Most programmes/projects have stakeholders who can broadly fall under: i) International actors (e.g. donors), ii) National or political actors (e.g. legislators), iii) Public sector agencies, iv) Interest groups (e.g. trade unions), v) Commercial entities (e.g. contractors/housing developers), vi) Non-profit organizations (e.g. NGOs & foundations), vii) Civil society, and viii) Consumers
• How can stakeholders participate in a project?
– Consulted
– Manipulated
– Delegated power
– Partnership
– Giving information
– Learning together
– Contributing resources
– Making decisions/ controlling processes
Stakeholder analysis
• Stakeholder analysis is a systematic methodology of establishing
who the stakeholders in a project are and the stake they hold. The
overall aim is to find their interests, their social standing and
influence and therefore take them on-board a project in order to
ensure that policies adopted are politically realistic, acceptable and
sustainable
• What are the benefits of their participation?
– Inspiration to identify, manage and control own development
aspirations
– Ensure project goals and objectives stay relevant with respect to the
felt needs
– Ensuring that strategies are appropriate to the local circumstances
– Building partnerships, ownership and commitment needed for
effective implementation
Stakeholder analysis
• What then to consider in stakeholder analysis?
– Their position on issues addressed by the project
– Their level of influence in terms of the power they
wield or resources
– Their level of interest in the project
– The group they are likely to coalesce with against/
for the project
• ! NB:
– Remain positive and objective in carrying out a
stakeholder analysis
Stakeholder analysis
• Collect data from different sources in order to carry out a reliable stakeholder analysis. A successful analysis can lead you to categorise stakeholders into the following groups (see the sketch below):
• Apathetics: their actions cannot affect the implementation of the project, and they attach a low priority to it
• Latents: this category attaches a low priority to the project even though their actions can affect its implementation
• Defenders: attach a high priority to the project, but their actions cannot have an impact on its implementation
• Promoters: attach a high priority to the project, and their actions can have a positive impact on its implementation
• What are their impacts in terms of designing M&E?
• When should a stakeholder analysis be carried out? General rule: timing is an important factor in the implementation of stakeholder analysis to ensure the usefulness of the results for project design
• Exercise – identify a project of interest and carry out stakeholder
analysis, problem analysis and objective analysis.
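As an illustration only (not part of the original slides), the interest/influence categorisation above can be written down as a small sketch; the stakeholder names and judgements below are hypothetical.

```python
# Hypothetical sketch: place stakeholders into the four categories above
# using two judgements: priority (interest) and influence on implementation.

def categorise(high_priority: bool, high_influence: bool) -> str:
    """Return the stakeholder category implied by the two judgements."""
    if high_priority and high_influence:
        return "Promoter"    # high priority, actions can affect implementation
    if high_influence:
        return "Latent"      # can affect implementation, but low priority
    if high_priority:
        return "Defender"    # high priority, but little influence
    return "Apathetic"       # neither priority nor influence

# Example with made-up stakeholders:
stakeholders = {
    "Donor agency": (True, True),
    "Trade union": (False, True),
    "Beneficiary group": (True, False),
    "Uninvolved firm": (False, False),
}
for name, (priority, influence) in stakeholders.items():
    print(f"{name}: {categorise(priority, influence)}")
```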
Problem analysis
• Carried out through the problem tree analysis -
a tool used to analyse problems.

• Uses the concept of a tree


Problem analysis
• Enhances understanding of the problem by
examining the root causes and the effects of
the problem

• The focus should be on identifying the core problem

• It also forms a sound basis for formulation of goals and objectives for addressing the problem.
Problem analysis
• Under the root causes, look at the WHY questions

• Under the effects, look at the WHAT questions

• Carry out a simple problem tree analysis in class
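A problem tree can also be recorded as a simple nested structure: the core problem in the middle, root causes (the WHY questions) below it, and effects (the WHAT questions) above it. The sketch below is illustrative only; the example problem, causes and effects are hypothetical.

```python
# Hypothetical problem tree.
problem_tree = {
    "core_problem": "High drop-out rate among school girls",
    "root_causes": [           # WHY does the problem occur?
        "Early marriage",
        "Cost of schooling",
        "Long distance to school",
    ],
    "effects": [               # WHAT does the problem lead to?
        "Low female literacy",
        "Reduced lifetime household income",
    ],
}

# Objective analysis (next slides) turns each negative statement into a
# positive one, e.g. the core problem becomes the objective below.
print("Objective:", "Reduced drop-out rate among school girls")
```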


Objective analysis
• The problem tree analysis enables the
development of the goals, objectives and
strategies.
• In objective analysis, look at objectives to address
in a plan
• Revisit the problem tree analysis and convert the negative statements in your problem tree into positive statements
• The objectives set should be SMART. Only then
can they be monitored and evaluated.
Alternatives analysis
• Done through the use of CBA (cost-benefit analysis), EIA (environmental impact assessment) and SIA (social impact assessment)
• The goal is to identify an alternative among
many alternatives that best addresses the
problem at hand given the environmental,
financial and social constraints imposed by the
project circumstances.
The LogFrame

• A project planning and implementation tool that can also be useful in guiding the design of an M&E system.
• There is no one rigid format of the LogFrame – it can be
varied to enable ease of understanding by stakeholders
and thus effective participation in planning
• Nonetheless, it addresses the following areas:
– What a project should achieve – from the goals to the
specific activities
– The performance questions and indicators for monitoring
progress and overall achievement
– How the indicators will be monitored
– The assumptions
The Logical Framework
(LogFrame matrix figure. Source: IFAD, 2004)
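As an illustration only (not drawn from the IFAD source), a LogFrame row can be thought of as a record with the four usual columns: narrative summary, indicators, means of verification and assumptions. The entries below are hypothetical.

```python
# Hypothetical LogFrame rows: one per level, with the four usual columns.
logframe = [
    {
        "level": "Goal",
        "narrative": "Improved household food security",
        "indicators": ["% of households with year-round food access"],
        "means_of_verification": ["National household survey"],
        "assumptions": ["No major drought during the project period"],
    },
    {
        "level": "Output",
        "narrative": "Farmers trained in improved grain storage",
        "indicators": ["Number of farmers trained, by gender"],
        "means_of_verification": ["Training attendance registers"],
        "assumptions": ["Trained farmers apply the techniques"],
    },
]

for row in logframe:
    print(f'{row["level"]}: {row["narrative"]}')
```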


Application of the techniques at various stages
of the M&E design
Data collection
• A strategy has to be put in place for data
collection once the indicators are clear. This
strategy identifies ‘what data is needed’, ‘when
it is needed’, ‘how is it to be collected’ and
‘whose responsibility it is to collect it’
• There should be a flow between each indicator
and the strategies
• The subsequent slides discuss different methods and tools of data collection and analysis
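As an illustration of such a strategy (not part of the original slides), each indicator can be paired with the what/when/how/who questions above; the indicators and responsibilities below are hypothetical.

```python
# Hypothetical data-collection strategy: one record per indicator stating
# what data is needed, when, how, and whose responsibility it is.
collection_strategy = [
    {
        "indicator": "Number of farmers trained, by gender",
        "what": "Attendance lists from each training session",
        "when": "At the end of every session",
        "how": "Paper register entered into the project database",
        "responsible": "Field training officer",
    },
    {
        "indicator": "% of households with year-round food access",
        "what": "Household survey responses",
        "when": "Baseline, mid-term and end-line",
        "how": "Structured questionnaire administered by enumerators",
        "responsible": "M&E officer",
    },
]

for record in collection_strategy:
    print(f'{record["indicator"]} -> {record["responsible"]}')
```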
Sampling
• A sample is “a smaller (but hopefully
representative) collection of units from a
population used to determine truths about that
population” (Field, 2005)
• Why sample?
– Resources (time, money) and workload
– Gives results with known accuracy that can be
calculated mathematically
• The sampling frame is the list from which the
potential respondents are drawn
– Registrar’s office
– Class rosters
– Must assess sampling frame errors
Sampling
• What is your population of interest?
• To whom do you want to generalize your
results?
– All teachers
– School children
– All Kenyans
– Women aged 15-45 years
– Other
• Can you sample the entire population?


Sampling
• Three factors that influence sample representativeness:
• Sampling procedure
• Sample size
• Participation (response)

• When might you sample the entire population?


• When your population is very small
• When you have extensive resources
• When you don’t expect a very high response


Sampling
• Two general approaches to sampling are used in social science research.
With probability sampling, all elements (e.g., persons, households) in the
population have some opportunity of being included in the sample, and
the mathematical probability that any one of them will be selected can be
calculated. With nonprobability sampling, in contrast, population
elements are selected on the basis of their availability (e.g., because they
volunteered) or because of the researcher's personal judgment that they
are representative. The consequence is that an unknown portion of the
population is excluded (e.g., those who did not volunteer). One of the
most common types of nonprobability sample is called a convenience
sample – not because such samples are necessarily easy to recruit, but
because the researcher uses whatever individuals are available rather
than selecting from the entire population.

• Because some members of the population have no chance of being sampled, the extent to which a convenience sample – regardless of its size – actually represents the entire population cannot be known
Important statistical terms
Population:
a set which includes all
measurements of interest
to the researcher
(The collection of all responses,
measurements, or counts that
are of interest)

Sample:
A subset of the population
Why sampling?
• Get information about large populations
• Less cost
• Less field time
• More accuracy, i.e. can do a better job of data collection
• When it's impossible to study the whole population
Target Population:
The population to be studied/ to which the
investigator wants to generalize his results
Sampling Unit:
smallest unit from which sample can be
selected
Sampling frame
List of all the sampling units from which
sample is drawn
Sampling scheme
Method of selecting sampling units from
sampling frame
Population definition
• A population can be defined as including all people
or items with the characteristic one wishes to
understand.
• Because there is very rarely enough time or money
to gather information from everyone or everything
in a population, the goal becomes finding a
representative sample (or subset) of that
population.

Population definition…….
• Note also that the population from which the
sample is drawn may not be the same as the
population about which we actually want
information. Often there is large but not complete overlap between these two groups due to frame issues etc.
• Sometimes they may be entirely separate - for
instance, we might study rats in order to get a better
understanding of human health, or we might study
records from people born in 2008 in order to make
predictions about people born in 2009.
SAMPLING FRAME
• In the most straightforward case, such as the sentencing of a
batch of material from production (acceptance sampling by
lots), it is possible to identify and measure every single item
in the population and to include any one of them in our
sample. However, in the more general case this is not
possible. There is no way to identify all rats in the set of all
rats. Where voting is not compulsory, there is no way to
identify which people will actually vote at a forthcoming
election (in advance of the election)
• As a remedy, we seek a sampling frame which has the property that we can identify every single element and include any in our sample.
• The sampling frame must be representative of the
population


Sampling process
• While developing a sampling design, the researcher must pay attention to the
following points:
• (i) Type of universe: The first step in developing any sample design is to clearly define
the set of objects, technically called the Universe, to be studied. The universe can be
finite or infinite. In finite universe the number of items is certain, but in case of an
infinite universe the number of items is infinite, i.e., we cannot have any idea about
the total number of items. The population of a city, the number of workers in a factory
and the like are examples of finite universes, whereas the number of stars in the sky,
listeners of a specific radio programme, throwing of a dice etc. are examples of infinite
universes.
• (ii) Sampling unit: A decision has to be taken concerning a sampling unit before
selecting sample. Sampling unit may be a geographical one such as state, district,
village, etc., or a construction unit such as house, flat, etc., or it may be a social unit
such as family, club, school, etc., or it may be an individual. The researcher will have to
decide one or more of such units that he has to select for his study.
• (iii) Source list: It is also known as ‘sampling frame’ from which sample is to be drawn.
It contains the names of all items of a universe (in case of finite universe only). If
source list is not available, researcher has to prepare it. Such a list should be
comprehensive, correct, reliable and appropriate. It is extremely important for the
source list to be as representative of the population as possible.
Sampling process
• (iv) Size of sample: This refers to the number of items to be selected from the universe to constitute a
sample. This is a major problem for the researcher. The size of sample should neither be excessively large,
nor too small. It should be optimum. An optimum sample is one which fulfils the requirements of
efficiency, representativeness, reliability and flexibility. While deciding the size of sample, researcher
must determine the desired precision as also an acceptable confidence level for the estimate. The size of
population variance needs to be considered as in case of larger variance usually a bigger sample is
needed. The size of population must be kept in view for this also limits the sample size. The parameters
of interest in a research study must be kept in view, while deciding the size of the sample. Costs too
dictate the size of sample that we can draw. As such, budgetary constraint must invariably be taken into
consideration when we decide the sample size.
• (v) Parameters of interest: In determining the sample design, one must consider the question of the
specific population parameters which are of interest. For instance, we may be interested in estimating
the proportion of persons with some characteristic in the population, or we may be interested in knowing
some average or the other measure concerning the population. There may also be important sub-groups
in the population about whom we would like to make estimates. All this has a strong impact upon the
sample design we would accept.
• (vi) Budgetary constraint: Cost considerations, from practical point of view, have a major impact upon
decisions relating to not only the size of the sample but also to the type of sample. This fact can even
lead to the use of a non-probability sample.
• (vii) Sampling procedure: Finally, the researcher must decide the type of sample he will use i.e., he must
decide about the technique to be used in selecting the items for the sample. In fact, this technique or
procedure stands for the sample design itself. There are several sample designs (explained in the slides to
follow) out of which the researcher must choose one for his study. Obviously, he must select that design which, for a given sample size and for a given cost, has a smaller sampling error.
Sample size
A study is to be performed to determine the age
at which school girls drop out of school to get
married in a certain community. From a previous
study an SD of 40 was obtained. If a sampling error of up to 4 is to be accepted, how many subjects should be included in this study at the 99% level of confidence?

Solution
n = Z²σ² / D²
n = (2.58² × 40²) / 4² = 665.64 ≈ 666
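A quick check of this calculation, assuming the z-value of 2.58 for the 99% confidence level used above:

```python
import math

z = 2.58      # z-value for the 99% confidence level used above
sigma = 40    # standard deviation from the previous study
d = 4         # accepted sampling error

n = (z ** 2 * sigma ** 2) / d ** 2
print(round(n, 2), "->", math.ceil(n))   # 665.64 -> 666 subjects
```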
Sample size for stratified sample
• In stratified sampling, the method of proportional allocation under which
the sizes of the samples from the different strata are kept proportional to
the sizes of the strata is usually followed. For instance, if Pi represents the
proportion of population included in stratum i, and n represents the total
sample size, the number of elements selected from stratum i is n . Pi.

Illustration:
• Suppose that we want a sample of size n = 666 to be drawn from a
population of size N = 8000 which is divided into three strata of size N1 =
4000, N2 = 2400 and N3 = 1600.
• If we adopt proportional allocation, we shall get the following sample sizes for the different strata:
• For the stratum with N1 = 4000, we have P1 = 4000/8000 and hence n1 = n . P1 = 666 (4000/8000) = 333
• For the stratum with N2 = 2400, we have n2 = n . P2 = 666 (2400/8000) ≈ 200
• For the stratum with N3 = 1600, we have n3 = n . P3 = 666 (1600/8000) ≈ 133.
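The same allocation can be checked with a short sketch (n = 666 and the strata sizes given above):

```python
n = 666                                   # total sample size from the previous slides
strata_sizes = {"N1": 4000, "N2": 2400, "N3": 1600}
N = sum(strata_sizes.values())            # population size, 8000

# Proportional allocation: each stratum's sample is proportional to its size.
allocation = {name: round(n * size / N) for name, size in strata_sizes.items()}
print(allocation)                         # {'N1': 333, 'N2': 200, 'N3': 133}
```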

PROBABILITY SAMPLING
• A probability sampling scheme is one in which every unit in
the population has a chance (greater than zero) of being
selected in the sample, and this probability can be accurately
determined.

• When every element in the population does have the same probability of selection, this is known as an 'equal probability of selection' (EPS) design. Such designs are also referred to as 'self-weighting' because all sampled units are given the same weight.

PROBABILITY SAMPLING…….

• Probability sampling includes:


• Simple Random Sampling,
• Systematic Sampling,
• Stratified Random Sampling,
• Cluster Sampling
• Multistage Sampling.
• Multiphase sampling

NON PROBABILITY SAMPLING
• Any sampling method where some elements of
population have no chance of selection (these are
sometimes referred to as 'out of
coverage'/'undercovered'), or where the probability of
selection can't be accurately determined. It involves the
selection of elements based on assumptions regarding
the population of interest, which forms the criteria for
selection. Hence, because the selection of elements is
nonrandom, nonprobability sampling does not allow the estimation of sampling errors.

• Example: We visit every household in a given street, and interview the first person to open the door. In any household with more than one occupant, this is a nonprobability sample, because some people are more likely to answer the door (e.g. an unemployed person who spends most of their time at home is more likely to answer than an employed housemate who might be at work when the interviewer calls) and it's not practical to calculate these probabilities.
NONPROBABILITY SAMPLING…….

• Nonprobability sampling includes: Accidental Sampling, Quota Sampling and Purposive Sampling. In addition, nonresponse effects may turn any probability design into a nonprobability design if the characteristics of nonresponse are not well understood, since nonresponse effectively modifies each element's probability of being sampled.

SIMPLE RANDOM SAMPLING
• Applicable when population is small,
homogeneous & readily available
• All subsets of the frame are given an equal
probability. Each element of the frame thus has
an equal probability of selection.
• It provides for the greatest number of possible samples. This is done by assigning a number to each unit in the sampling frame.
• A table of random numbers or a lottery system is used to determine which units are to be selected.
SIMPLE RANDOM SAMPLING……..
• Estimates are easy to calculate.
• Simple random sampling is always an EPS design, but
not all EPS designs are simple random sampling.

• Disadvantages
• If the sampling frame is large, this method may be impracticable.
• Minority subgroups of interest in the population may not be present in the sample in sufficient numbers for study.
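A minimal sketch of simple random sampling using Python's standard library; the numbered frame here is hypothetical:

```python
import random

sampling_frame = list(range(1, 501))        # hypothetical frame of 500 numbered units
sample = random.sample(sampling_frame, 50)  # 50 units, each subset equally likely
print(sorted(sample))
```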

SYSTEMATIC SAMPLING
• Systematic sampling relies on arranging the target
population according to some ordering scheme and
then selecting elements at regular intervals through
that ordered list.
• Systematic sampling involves a random start and then
proceeds with the selection of every kth element from
then onwards. In this case, k=(population size/sample
size).
• It is important that the starting point is not
automatically the first in the list, but is instead
randomly chosen from within the first to the kth
element in the list.
• A simple example would be to select every 10th name
from the telephone directory (an 'every 10th' sample,
also referred to as 'sampling with a skip of 10').
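A minimal sketch of the random start plus fixed interval, with a hypothetical frame of 1000 units:

```python
import random

sampling_frame = list(range(1, 1001))    # hypothetical frame of 1000 units
sample_size = 100
k = len(sampling_frame) // sample_size   # sampling interval, k = N / n = 10

start = random.randint(0, k - 1)         # random start within the first k elements
sample = sampling_frame[start::k]        # every kth element from the random start
print("start:", start + 1, "sample size:", len(sample))
```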
SYSTEMATIC SAMPLING……
As described above, systematic sampling is an EPS method, because all
elements have the same probability of selection (in the example given, one in
ten). It is not 'simple random sampling' because different subsets of the same
size have different selection probabilities - e.g. the set {4,14,24,...,994} has a
one-in-ten probability of selection, but the set {4,13,24,34,...} has zero
probability of selection.

SYSTEMATIC SAMPLING……

• ADVANTAGES:
• Sample easy to select
• Suitable sampling frame can be identified easily
• Sample evenly spread over entire reference population
• DISADVANTAGES:
• Sample may be biased if hidden periodicity in population coincides
with that of selection.
• Difficult to assess precision of estimate from one survey.

STRATIFIED SAMPLING
Where population embraces a number of distinct
categories, the frame can be organized into separate
"strata." Each stratum is then sampled as an
independent sub-population, out of which individual
elements can be randomly selected.
• Every unit in a stratum has same chance of being
selected.
• Using same sampling fraction for all strata ensures
proportionate representation in the sample.
• Adequate representation of minority subgroups of
interest can be ensured by stratification & varying
sampling fraction between strata as required.

STRATIFIED SAMPLING……
• Finally, since each stratum is treated as an
independent population, different sampling
approaches can be applied to different strata.

• Drawbacks to using stratified sampling.


• First, sampling frame of entire population has to
be prepared separately for each stratum
• Second, when examining multiple criteria,
stratifying variables may be related to some, but
not to others, further complicating the design,
and potentially reducing the utility of the strata.
• Finally, in some cases (such as designs with a
large number of strata, or those with a specified
minimum sample size per group), stratified
sampling can potentially require a larger sample
than would other methods
STRATIFIED SAMPLING…….

Draw a sample from each stratum

POSTSTRATIFICATION

• Stratification is sometimes introduced after the sampling phase in a process called "poststratification".
• This approach is typically implemented due to a lack of prior knowledge of an appropriate stratifying variable or when the experimenter lacks the necessary information to create a stratifying variable during the sampling phase. Although the method is susceptible to the pitfalls of post hoc approaches, it can provide several benefits in the right situation. Implementation usually follows a simple random sample. In addition to allowing for stratification on an ancillary variable, poststratification can be used to implement weighting, which can improve the precision of a sample's estimates.
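A minimal sketch of such weighting (the population shares and sample counts below are hypothetical): each stratum's weight is its population share divided by its share of the achieved sample.

```python
# Hypothetical population shares and achieved sample counts, by sex.
population_share = {"female": 0.52, "male": 0.48}
sample_counts = {"female": 180, "male": 220}
n = sum(sample_counts.values())

# Poststratification weight = population share / sample share for each stratum.
weights = {
    stratum: population_share[stratum] / (count / n)
    for stratum, count in sample_counts.items()
}
print(weights)   # females up-weighted (~1.16), males down-weighted (~0.87)
```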
CLUSTER SAMPLING
• Cluster sampling is an example of 'two-stage sampling' .
• First stage a sample of areas is chosen;
• Second stage a sample of respondents within those
areas is selected.
• Population divided into clusters of homogeneous units,
usually based on geographical contiguity.
• Sampling units are groups rather than individuals.
• A sample of such clusters is then selected.
• All units from the selected clusters are studied.

CLUSTER SAMPLING…….

• Advantages :
• Cuts down on the cost of preparing a sampling
frame.
• This can reduce travel and other
administrative costs.
• Disadvantages: sampling error is higher than for a simple random sample of the same size.

CLUSTER SAMPLING…….
• Identification of clusters
– List all cities, towns, villages & wards of cities with
their population falling in target area under study.
– Calculate cumulative population & divide by 30, this
gives sampling interval.
– Select a random no. less than or equal to sampling
interval having same no. of digits. This forms 1st
cluster.
– Random no. + sampling interval = location of 2nd cluster.
– 2nd cluster + sampling interval = 3rd cluster, and so on.
– Last (30th) cluster = 29th cluster + sampling interval.
CLUSTER SAMPLING…….
Two types of cluster sampling methods.
One-stage sampling. All of the elements within
selected clusters are included in the sample.
Two-stage sampling. A subset of elements
within selected clusters are randomly selected
for inclusion in the sample.

CLUSTER SAMPLING…….
Area    Population   Cumulative   Cluster no(s).
I         2000    2000    1
II        3000    5000    2
III       1500    6500    –
IV        4000   10500    3
V         5000   15500    4, 5
VI        2500   18000    6
VII       2000   20000    7
VIII      3000   23000    8
IX        3500   26500    9
X         4500   31000    10
XI        4000   35000    11, 12
XII       4000   39000    13
XIII      5000   44000    14, 15
XIV       2000   46000    –
XV        3000   49000    16
XVI       3500   52500    17
XVII      4000   56500    18, 19
XVIII     4500   61000    20
XIX       4000   65000    21, 22
XX        4000   69000    23
XXI       2000   71000    24
XXII      2000   73000    –
XXIII     3000   76000    25
XXIV      3000   79000    26
XXV       5000   84000    27, 28
XXVI      2000   86000    29
XXVII     1000   87000    –
XXVIII    1000   88000    –
XXIX      1000   89000    30
XXX       1000   90000    –
Sampling interval = 90000 / 30 = 3000
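The assignment in the table can be reproduced with a short sketch (not from the original slides). The area populations are those listed above; a start value of 2000 is one random number that is consistent with the clusters shown.

```python
# Cumulative-population selection of 30 clusters (interval = 90000 / 30 = 3000).
populations = [2000, 3000, 1500, 4000, 5000, 2500, 2000, 3000, 3500, 4500,
               4000, 4000, 5000, 2000, 3000, 3500, 4000, 4500, 4000, 4000,
               2000, 2000, 3000, 3000, 5000, 2000, 1000, 1000, 1000, 1000]
n_clusters = 30
interval = sum(populations) // n_clusters   # 3000
start = 2000                                # random number <= interval; this value
                                            # happens to match the table above

# Running cumulative totals for areas I .. XXX.
cumulative, running = [], 0
for p in populations:
    running += p
    cumulative.append(running)

# Assign each selection point (start, start + interval, ...) to the first area
# whose cumulative population reaches it.
assignment = {}
for cluster_no in range(1, n_clusters + 1):
    point = start + (cluster_no - 1) * interval
    area = next(i + 1 for i, c in enumerate(cumulative) if c >= point)
    assignment.setdefault(area, []).append(cluster_no)

print(assignment)   # e.g. area 5 (V) -> clusters [4, 5]; area 3 (III) gets none
```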

Difference Between Strata and Clusters

• Although strata and clusters are both non-overlapping subsets of the population, they differ in several ways.
• All strata are represented in the sample; but only
a subset of clusters are in the sample.
• With stratified sampling, the best survey results
occur when elements within strata are internally
homogeneous. However, with cluster sampling,
the best results occur when elements within
clusters are internally heterogeneous

QUOTA SAMPLING

• The population is first segmented into mutually exclusive sub-groups, just as in stratified sampling.
• Then judgment is used to select subjects or units from each segment based on a specified proportion.
• For example, an interviewer may be told to sample 200 females and 300 males between the ages of 45 and 60.
• It is this second step which makes the technique one of non-probability sampling.
• In quota sampling the selection of the sample is non-random. For example, interviewers might be tempted to interview those who look most helpful. The problem is that these samples may be biased because not everyone gets a chance of selection. This non-random element is its greatest weakness, and quota versus probability sampling has been a matter of controversy for many years.
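A small sketch of quota filling (the quotas follow the 200 females / 300 males example above; the recruitment logic is illustrative only):

```python
# Hypothetical quotas: 200 females and 300 males aged 45-60.
quotas = {"female": 200, "male": 300}
filled = {"female": 0, "male": 0}

def try_recruit(sex: str, age: int) -> bool:
    """Recruit a respondent only if they fit an unfilled quota cell."""
    if 45 <= age <= 60 and filled.get(sex, 0) < quotas.get(sex, 0):
        filled[sex] += 1
        return True
    return False   # outside the target group, or that quota is already full

# The interviewer approaches whoever happens to be available, so who ends up
# in the sample is not random.
print(try_recruit("female", 52))   # True while the female quota is unfilled
print(try_recruit("male", 30))     # False: outside the 45-60 age bracket
```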

CONVENIENCE SAMPLING
• Sometimes known as grab or opportunity sampling or accidental
or haphazard sampling.
• A type of nonprobability sampling which involves the sample being
drawn from that part of the population which is close to hand.
That is, readily available and convenient.
• The researcher using such a sample cannot scientifically make
generalizations about the total population from this sample
because it would not be representative enough.
• For example, if the interviewer were to conduct a survey at a shopping center early in the morning on a given day, the people that he/she could interview would be limited to those present there at that given time. Their views would not represent those of other members of society in such an area who could have been reached had the survey been conducted at different times of day and several times per week.
• This type of sampling is most useful for pilot testing.
• In social science research, snowball sampling is a similar technique,
where existing study subjects are used to recruit more subjects
into the sample.

Judgmental sampling or Purposive sampling

• The researcher chooses the sample based on who they think would be appropriate for the study. This is used primarily when there is a limited number of people that have expertise in the area being researched.

PANEL SAMPLING

• Method of first selecting a group of participants through a random sampling method and then asking that group for the same information again several times over a period of time.
• Therefore, each participant is given the same survey or interview at two or more time points; each period of data collection is called a "wave".
• This sampling methodology is often chosen for large-scale or nation-wide studies in order to gauge changes in the population with regard to any number of variables, from chronic illness to job stress to weekly food expenditures.

What sampling method would you recommend?
• Determining proportion of undernourished five
year olds in a village.
• Investigating nutritional status of preschool
children.
• Selecting maternity records for the study of
previous abortions or duration of postnatal
stay.
• In estimation of immunization coverage in a
province, data on seven children aged 12-23
months in 30 clusters are used to determine
proportion of fully immunized children in the
province.
• Give reasons why cluster sampling is used in
this survey.
Quantitative methods – Qualitative methods
• Quantitative: surveys, questionnaires, tests, existing databases
• Qualitative: focus groups, unstructured interviews, unstructured observations
Often, it is better to use more than
one method….
Mixed methods for one program
• Log of activities and participation
• Self-administered questionnaires
completed after each workshop
• In-depth interviews with key informants
• Observation of workshops
• Survey of participants
Are the data reliable and valid?
• Validity: Are you measuring what you think
you are measuring?
– Example:
• Reliability: if something was measured again
using the same instrument, would it produce
the same (or nearly the same) results?
– Example:
“Trustworthy” and “credible” data
What do these words mean relative to your evaluation information?
How can you help ensure that your evaluation data are trustworthy and credible?
Common data collection methods
• Survey
• Case study
• Interview
• Observation
• Group assessment
• Expert or peer reviews
• Portfolio reviews
• Testimonials
• Tests
• Photographs, videotapes, slides
• Diaries, journals, logs
• Document review and analysis
When choosing methods, consider…
• The purpose of your evaluation − Will the method allow you to gather information that can be analyzed and presented in a way that will be credible and useful to you and others?
• The respondents − What is the most appropriate method, considering how the respondents can best be reached, how they might best respond, literacy, cultural considerations, etc.?
Consider…

• Resources available. Time, money, and staff to design, implement, and analyze the information. What can you afford?
• Type of information you need. Numbers, percents,
comparisons, stories, examples, etc.
• Interruptions to program or participants. Which
method is likely to be least intrusive?
• Advantages and disadvantages of each method.
• The need for credible and authentic evidence.
• The value of using multiple methods.
• The importance of ensuring cultural
appropriateness.
Quality criteria for methods
UTILITY
Will the data sources and collection
methods serve the information needs of
your primary users?
Quality criteria…
FEASIBILITY
Are your sources and methods practical
and efficient?
Do you have the capacity, time, and
resources?
Are your methods non-intrusive and non-
disruptive?
Quality criteria…
PROPRIETY
Are your methods respectful, legal, ethical,
and appropriate?
Does your approach protect and respect
the welfare of all those involved or
affected?
Quality criteria…
ACCURACY
Are your methods technically adequate to:
• answer your questions?
• measure what you intend to measure?
• reveal credible and trustworthy
information?
• convey important information?
There is no one right method of collecting data.
Each has a purpose, advantages, and challenges.
The goal is to obtain trustworthy, authentic, and credible evidence.
Often, a mix of methods is preferable.


Culturally appropriate evaluation
methods
• How appropriate is the method given the
culture of the respondent/the setting?
• Culture differences: nationality, ethnicity,
religion, region, gender, age, abilities, class,
economic status, language, sexual orientation,
physical characteristics, organizational affiliation
Is a written questionnaire culturally appropriate?

Things to consider:
• Literacy level
• Tradition of reading, writing
• Setting
• Not best choice for people with oral tradition
• Translation (more than just literal translation)
• How cultural traits affect response – response sets
• How to sequence the questions
• Pretest questionnaire may be viewed as intrusive
Are interviews culturally
appropriate?
Things to consider:
• Preferred by people with
an oral culture
• Language level proficiency;
verbal skill proficiency
• Politeness – responding to authority (thinking it’s
unacceptable to say “no”), nodding, smiling, agreeing
• Need to have someone present
• Relationship/position of interviewer
• May be seen as interrogation
• Direct questioning may be seen as impolite,
threatening, or confrontational
Are focus groups culturally
appropriate?
Things to consider:
• Issues of gender, age, class, clan differences
• Issues of pride, privacy, self-sufficiency, and
traditions
• Relationship to facilitator as prerequisite to
rapport
• Same considerations as for interview
Is observation culturally
appropriate?
Things to consider:
• Discomfort, threat of being observed
• Issue of being an “outsider”
• Observer effect
• Possibilities for
misinterpretations
Cultural issues related to use of
existing data/records
• Need careful translation of documents in
another language
• May have been written/compiled using
unknown standards or levels of aggregation
• May be difficult to get authorization to use
• Difficult to correct document errors if low
literacy level
Culturally appropriate informed
consent
How can we be culturally sensitive and
respectful and ensure the protection of
those involved in our evaluations?
– Children
– Marginalized, “less powerful” participants
Focus groups
Structured small group interviews
“Focused” in two ways:
– Persons being interviewed are similar in
some way (e.g. limited resource families,
family services professionals, or elected
officials).
– Information on a particular topic is guided
by a set of focused questions.
Focus groups
Focus groups are used...
• To solicit perceptions, views, and a range
of opinions (not consensus)
• When you wish to probe an issue or
theme in depth
Survey
A structured way to collect information
using questionnaires. Surveys are typically
conducted through the mail (electronic or
surface), phone, or internet.
Survey
Surveys are used…
• To collect standardized information from
large numbers of individuals
• When face-to-face meetings are
inadvisable
• When privacy is important or
independent opinions and responses
are needed
Steps in planning a survey
1. Decide who should be involved in the process.
2. Define survey content.
3. Identify your respondents.
4. Decide on the survey method.
5. Develop the questionnaire.
6. Pilot test the questionnaire and other materials.
7. Think about analysis.
8. Communicate about your survey and its results.
9. Develop a budget, timeline, and management process.
Response rate
The proportion of people who respond:
divide the number of returned surveys by
the total number of surveys distributed.
Example: If you distribute 50
questionnaires and you get 25
questionnaires back, your response rate is
50%.
Response rate
Response rate = (# that answered) / (# you contacted)
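As a quick check of the worked example above:

```python
returned = 25
distributed = 50
response_rate = returned / distributed
print(f"{response_rate:.0%}")   # 50%
```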
Response rate
• High response rate promotes confidence in results.
• Lower response rate increases the likelihood of biased results.
Response rate
• There is no standard response rate. "The higher, the better." Anything under 60% is a warning.
• Why is high return important? It's the only way to know if results are representative.
• Address low response. How are people who didn't respond different from those who did? Only describe your results in terms of who did respond.
How to increase response rate
• Generate positive publicity for your survey.
• Over sample.
• Ensure that respondents see the value of
participating.
• Use a combination of methods.
• Make (multiple) follow-up contacts.
• Provide incentives.
• Provide 1st class postage/return postage.
• Set return deadlines.
• Make the survey easy to complete.
If response rate is low…
Use language that is suggestive rather
than decisive.
Examples: “The data suggests” vs. “These
data show”; “It appears” vs. “We can
conclude”
• Don’t generalize findings to the entire
group.
• Clearly describe who the data
represents.
Document review
Using information that already exists in records,
receipts, meeting minutes, reports,
budgets…rather than collecting new data

There is a wealth of information available on the web.
CHECK − What information is already available?


Document review –
Advantages of using existing data
• Available – don’t have to collect data
• Low cost
• Minimum staff required
• Comparative or longitudinal data may be
available
Document review −
Issues in using existing data
• Missing or incomplete data
• Confidentiality issues
• Unknown, different, or changing definitions of
data make comparison difficult
• May not match what you need in terms of
geographic location, same time period, or
population – may be too aggregated
Observation…
• Is watching people, programs, events,
communities, etc.
• Involves all 5 senses: sight, hearing,
smell, touch, and taste
– observation includes more than just
“seeing”
Observation is used…
• To provide information about real-life
situations and circumstances
• To assess what is happening
• Because you cannot rely on participants’
willingness and ability to furnish
information
When is observation useful?
• When you want direct information
• When you are trying to understand an
ongoing behavior, process, unfolding
situation, or event
• When there is physical evidence,
products, or outcomes that can be
readily seen
• When written or other data collection
methods seem inappropriate
Observations
• Advantages
– Most direct measure of behavior
– Provides direct information
– Easy to complete, saves time
– Can be used in natural or experimental settings
• Disadvantages
– May require training
– Observer's presence may create artificial situation
– Potential for bias
– Potential to overlook meaningful aspects
– Potential for misinterpretation
– Difficult to analyze
Observation – Purpose, benefits
• Unobtrusive
• Can see things in their natural context
• Can see things that may escape conscious awareness,
things that are not seen by others
• Can discover things no one else has ever really paid attention to, things that are taken for granted
• Can learn about things people may be unwilling to talk
about
• Inconspicuous – least potential for generating observer
effects
• Least intrusive of all methods
• Can be totally creative – has flexibility to yield insight into
new realities or new ways of looking at old realities
Observation – Limitations
1. Potential for bias
• Effect of culture on what you observe and
interpret
2. Reliability
• Ease of categorization

Usually you do not rely on observation alone; combine your observations with another method to provide a more thorough account of your program.
Observation – Ethical issues
• Unobtrusiveness is its greatest strength;
also potential for abuse in invasion of
privacy
• Can venture into places and gather data
almost anywhere
• Covert – overt
– Always consider ethics and human subjects
protection.
Types of observation
• Structured – looking for
• Unstructured – looking at

Observing what does not happen may be as important as observing what does happen.
Steps in planning for observation
• Determine who/what will be observed.
• Determine aspects that will be observed
(characteristics, attributes, behaviors, etc.).
• Determine where and when observations will be
made.
• Develop the observation record sheet.
• Pilot test the observation record sheet.
• Train the observers and have them practice.
• Collect the information.
• Analyze and interpret the collected information.
• Write up and use your findings.
Who/what to observe
• People (individuals, groups,
communities)
– Characteristics
– Interactions
– Behaviors
– Reactions
• Physical settings
• Environmental features
• Products/physical artifacts
Observation – Example
If you want information about… → You would record…
• Who uses a particular service → Total number of users, broken down by gender, age, ethnicity, etc.
• Interactions between youth and adults → Number and types of questions asked by each
• Neighborhood safety → ???
What to observe − Example
Information needed:
Number of youth who visit the exhibit: age, gender,
cultural background

Can the information be observed accurately? E.g., gender may be more obvious than age or cultural background.
Will the observer affect the situation?
Example – Plans for observing
participation in an after school program
• Who: youth attending the program
• What:
– approximate age
– gender, cultural background
– length of time student stays in the program
• When: all hours the program is open for
one week each month during 2007
Recording your observations
Observations need to be recorded to be
credible. You might use:
– Observation guide
– Recording sheet
– Checklist
– Field note
– Picture
– Combination of the above
Observational rating scales
• Written descriptions – written
explanations of each gradation to
observe
• Photographs – series of photos that
demonstrate each of the grades on the
rating scale
• Drawings, sketches, etc. – other visual
representations of conditions to be
observed
Who are the observers?
• You – program staff
• Participants
• Stakeholders
• Colleagues
• Volunteers
• College students
Training observers
Training is often necessary:
– To learn what to look for
– To learn how to record observations
– To practice
– When want standardized observations
across sites: important that all observers
use same methods, rate same observation
in same way
Practice
For our workshop today, what
observational data could we collect that
would tell us …
– whether learning is occurring
– the characteristics of attendees
– whether the setting is conducive to
learning
– whether the materials are easy to use
Practice
Imagine you are sitting in a room where
ten youth are participating in a computer
demonstration. If you were looking for
indicators of student interest and learning
from the demonstration, what would you
look for?
(Remember to include verbal and nonverbal indicators.)
Interviewing is…
• Talking and listening to people
• Verbally asking program participants the
program evaluation questions and hearing the
participant’s point of view in his or her own
words. Interviews can be either structured or
unstructured, in person or over the
telephone.
• Done face-to-face or over the phone
• Individual; group
Interviews are useful…
• When the subject is sensitive
• When people are likely to be inhibited in
speaking about the topic in front of others
• When people have a low reading ability
• When bringing a group of people together
is difficult (e.g., in rural areas)
Interviews
Verbally asking program participants the
program evaluation questions and
hearing the participant’s point of view in
his or her own words.
Interviews can be either structured or
unstructured, in person or over the
telephone.
Interviews
• Advantages
– deep and free response
– flexible, adaptable
– glimpse into respondent's tone, gestures
– ability to probe, follow-up
• Disadvantages
– costly in time and personnel
– requires skill
– may be difficult to summarize responses
– possible biases: interviewer, respondent, situation
Types of interviewing

Structured Conversational
Type: Structured interview
• Uses script and questionnaire
• No flexibility in wording or order of
questions
• Closed response option
• Open response option
Type: Guided interview
• Outline of topics or issues to cover
• May vary wording or order of questions
• Fairly conversational and informal
Type: Conversational interview
• May not know that an interview is taking
place
• Spontaneous
• Questions emerge from the situation and
what is said
• Topics or questions are not
predetermined
• Individualized and relevant to situation
Probing
Interview question:
“What did you like best about this program?”
Response: “I liked everything.”
Probe 1: “What one thing stood out?”
R: “Being with my friends.”
Probe 2: “What about the program activities?”
R: “I liked it when we worked as a team.”
Probe 3: “How come?”
R: “It was neat to hear each other’s perspectives. I heard
some things I hadn’t considered before.”
Probe 4: “What is one thing that you learned?”
Interviewing tips
• Keep language pitched to that of respondent
• Avoid long questions
• Create comfort
• Establish time frame for interview
• Avoid leading questions
• Sequence topics
• Be respectful
• Listen carefully
Recording responses
• Write down response
• Tape record
• Key in on computer
• Work in pairs
• Complete notes after interview
Questionnaires are…
• Data collection instruments used to collect
standardized information that can be expressed
numerically or through short answers
• Basic instruments of surveys and structured
interviews
• Appropriate when…
– you want information from many people
– you have some understanding of the situation and
can ask meaningful questions
– information is sensitive or private − anonymous
questionnaires may reduce bias
Questionnaires
• Advantages
– can reach large numbers
– provide for anonymity
– relatively inexpensive
– easy to analyze
• Disadvantages
– might not get careful feedback
– wording can bias client's response
– response rate is often low
– literacy demands
When should a questionnaire be
used?
• Respondents can provide useful information
about the topic.
• You know what it is you want to know and are
reasonably sure that you can ask standardized
questions to get the information.
• Respondents can be relied upon to provide the
information you need (perhaps with
incentives). This means they can comprehend
the questions and respond properly, they are
truthful, and they are motivated enough to
respond carefully.
Good questionnaires are NOT EASY!
• Developing a good questionnaire takes time, time, and more time.
• Multiple (even a dozen!) drafts may be
involved before the questionnaire is
ready.
• It’s important to involve others in
writing the questionnaire.
Questionnaire design −
Considerations
• Kind of information: What do you want to
know? Is the information already available?
• Wording of questions and responses
• Formatting the questionnaire
• Pre-testing
• Cover letters and introductions
• When/where will the questionnaire be
distributed?
• How will returns be managed? How will the
data be analyzed?
• Who is responsible for each task?
Questionnaire design
• Is the information already available?
• Don’t ask a question unless
it has a use.
– Eliminate the “nice to know.”
• What will you do with each piece of
information gathered?
Questionnaire design
• Write questions through your
respondent’s eyes.
– Will the question be seen as reasonable?
– Will it infringe on the respondent’s
privacy?
– Will the respondent be able and willing to
answer the question?
• Be selective and realistic when writing
questions.
6 STEPS IN DEVELOPING EFFECTIVE
QUESTIONNAIRES
1. Decide what information you need.
2. Determine sample – respondents.
3. Develop accurate, user-friendly
questionnaire.
4. Develop plan for distribution, return,
and follow-up.
5. Provide clear instructions and a good
cover letter.
6. Pilot test.
Step 1: What information is needed?
• Be specific
• Need to know vs. would like to know
• Check to see if information exists
elsewhere
• What do you want to be able to say:
counts, percentages, relationships,
narratives
Step 2: Sample
• Who will complete the questionnaire?
• What do you know about their
preferences, abilities, and cultural
characteristics that may affect the way
they respond?
Step 3: Develop questionnaire
• Make sure questions cover information
needed.
• Word questions carefully.
• Consider cultural nuances.
• Sequence questions appropriately.
• Attend to formatting.
Step 3 continued
• Write clear, complete directions.
• Review to see if it is user-friendly;
consider the respondent.
• Make the questionnaire attractive.
• Work as a team.
• Plan on writing several draft
questionnaires.
Step 4: Plan distribution, return,
follow-up
Distribution: when, where
– At meetings, sites, through mail, email,
internet
Return: when, where
– Return to individual, collection box
– Return envelope addressed/stamped
– Return envelope addressed only
Follow-up
Step 5: Cover Letter − Explanation
• Purpose of questionnaire –
how information will be used
• Why they are being asked to fill it out
• Importance of their response
• How and when to respond
• Whether response will be anonymous or
confidential
• Your appreciation
• Promise results, if appropriate
• Signature − sponsorship
Step 6: Pilot test
• Always
• With people as similar to respondents as
possible
– Do they understand the questions? The
instructions?
– Do questions mean same thing to all?
– Do questions elicit the information you
want?
– How long does it take?
• Revise as necessary
Kinds of information –
What do you want to know?
• Knowledge − what people know, how
well they understand something
• Beliefs − attitudes, opinions
• Behaviors − what people do
• Attributes/Demographics − what people
are and what people have
Types of questions
• Open-ended questions − allow
respondents to provide their own answers
• Closed-ended questions − list answers
and respondents select either one or
multiple responses
Open-ended questions
• Do not provide any specific responses
from which the participant would choose.
• Allow respondents to express their own
ideas and opinions.
Open-ended questions
Pros:
• Can get unintended or unanticipated results
• Wide variety of answers
• Answers in participants' "voices"
Cons:
• More difficult to answer
• May be harder to categorize for interpretation
• More difficult for people who don't write much
Open-ended questions
Examples:
• What communication skills did you learn
in this workshop that you will use with
your children?
• What benefits do you receive from this
organization?
Closed-ended questions
• Provide specific answers from which the
participant must choose.
• Sometimes called “forced choice.”
• Response possibilities include: one best
answer, multiple responses, rating, or
ranking scale.
Closed-ended questions
Pros:
• Easy to analyze responses
• Stimulates recall
Cons:
• Chance of none of the choices being appropriate
• Biases response to what you're looking for
• Misses unintended outcomes
Closed-ended questions
Example − one best answer:
What does the word "nutrition" mean to you?
(Circle one number.)
1 Getting enough vitamins
2 The food you eat and how your body uses it
3 Having to eat foods I don't like
4 Having good health
Closed-ended questions
Example − multiple responses:
Of the communication skills taught in this
workshop, which will you use with your
children? (Check all that apply.)
___active listening
___acknowledge feelings
___ask more open-ended questions
___provide one-on-one time for discussion
___negotiation
___other_____________________
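A minimal analysis sketch, for illustration only: the field names and data below are hypothetical, and it assumes each "check all that apply" option is stored as its own 0/1 field, which is one common way to capture such items. It shows how responses to the multiple-response example above could be tallied as counts and percentages of respondents.

# Illustrative sketch only (hypothetical field names and data).
# Assumes each "check all that apply" option was stored as its own
# 0/1 field: 1 = checked, 0 = not checked.

responses = [
    {"active_listening": 1, "acknowledge_feelings": 0, "open_ended_questions": 1},
    {"active_listening": 0, "acknowledge_feelings": 1, "open_ended_questions": 1},
    {"active_listening": 1, "acknowledge_feelings": 1, "open_ended_questions": 0},
]

n = len(responses)
for option in responses[0]:
    count = sum(r[option] for r in responses)
    # Report percentages of respondents, not of total ticks,
    # since each respondent may tick several options.
    print(f"{option}: {count} of {n} respondents ({100 * count / n:.0f}%)")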
Closed-ended questions
Example − rating scale
To what extent do you agree or disagree
with the new zoning code? (Circle one.)
1 Strongly disagree
2 Mildly disagree
3 Neither agree nor disagree
4 Mildly agree
5 Strongly agree
When wording the questions,
consider…
• The particular people for whom the
questionnaire is being designed
• The particular purpose of the
questionnaire
• How questions will be placed in relation
to each other in the questionnaire
Use clear, specific, simple wording.
• Match vocabulary and reading skills of
your respondents.
• Are any words confusing? Do any
words have a double meaning?
• Avoid the use of abbreviations and
jargon.
Example:
Use clear, specific, simple wording.
• Avoid jargon or technical language.
Jargon:
What kind of post-litigation concerns have you
and your ex-spouse had?
Better:
Since having your visitation rights set by a
judge, what other concerns have you and your
ex-spouse had about visitation?
Include all necessary information.
• Avoid vague questions and answers.
• Avoid ambiguous words or phrases.
• Avoid questions that may be too specific.
• Avoid making assumptions.
Example: Vague questions
Vague:
How will this seminar help you?
Example: Vague questions
Better:
What skills did you learn in this
seminar that will help you follow the
child custody arrangements set by the
court?
____how to negotiate changes with my ex-spouse
____how to explain visitation arrangements to my children
____steps to requesting a change in arrangements from the
court
____how to separate child support from visitation disputes
Example: Avoid ambiguous words or
phrases.
Ambiguous:
How has your child demonstrated
improved communication skills since
participating in “Let’s
Communicate”?
Example: Avoid specificity that limits
the potential for reliable responses.
Too specific:
How many meals have you eaten as a
family during the past year?
___________ number of meals
Example: Avoid making
assumptions.
Question for teachers that makes
assumptions:
What practices have you used to get more
parents to read to their children?
Avoid leading questions.
• Biased questions
– Influence people to respond in a certain way
– Make assumptions about the respondent
– Use language that has strong positive or
negative appeal
Example: Leading questions
Leading:
Do you think this seminar will help
you stop fighting with your spouse
about the children?
Better:
How do you think this seminar will
help you work with your spouse to
address your children’s concerns?
Avoid double-barreled questions.
• Ask one question at a time.
• Avoid ambiguity − a single question should
not require more than one answer.
Example: Double-barreled question
Double:
How will this seminar help you communicate
better with your children and their grandparents
about your divorce?
Better:
How will this seminar help you communicate with
your children about your divorce?
How will this seminar help you communicate with
your children’s grandparents about their
relationship with their grandchildren?
Make the response categories clear,
logical, and mutually exclusive.
• Only one possible answer
• Similar-sized categories
• Responses in a logical order
Example: Clear, logical, and mutually
exclusive responses
Poor spacing and logic:
Children's Ages
0−1
1−3
3−6
7−12
13−18
Better spacing, logic, and mutually exclusive:
Children's Ages
under 1 year of age
1−3 years of age
4−6 years of age
7−9 years of age
10−12 years of age
13−15 years of age
16−18 years of age
Example: Vague quantifier
Vague:
How often did you attend an Extension-
sponsored workshop during the past
year?
a. Never
b. Rarely
c. Several times
d. Many times
Example: Vague quantifier
Better:
How often did you attend an Extension-
sponsored workshop during the past
year?
a. Not at all
b. One to two times
c. Three to five times
d. More than five times
Rating scales
• Ordered options to gauge difference of
opinion.
• Keep the order of choices the same
throughout the form.
• Odd number of options allows people to
select a middle option.
• Even number forces respondents to take
sides.
• Simpler is better.
Types of rating scales
Category scales
Numeric scales
Semantic differentials
Category/Rating scales
• Use words or phrases to express a range
of choices.
• The number of categories depends on
the amount of differentiation.
• Three, four, or five categories are most
common.
Category/Rating scales
• Balance the scale with an equal number
of positive and negative options.
• “No opinion” or “uncertain” are not part
of a scale. They are usually placed off to
the side or in a separate column.
• All choices should refer to the same
thing/concept.
Category/Rating scales − Example
Poor:
__Not worth my time
__Slightly interested
__Moderately interested
__Very interested
Better:
__Not at all interested
__Slightly interested
__Moderately interested
__Very interested
The "Poor" column mixes two concepts:
"worth" and "interest level."
Rating scales − Words
• Not much / Some / A great deal
• A little / Some / A lot
• Not much / Little / Somewhat / Much / A great deal
Rating scales − Words
• Never / Seldom / Often / Always
• Extremely poor / Below average / Average / Above average / Excellent
Rating scales − Words
• Strongly disagree / Disagree / Agree / Strongly agree / Uncertain
• Disagree / Neither agree nor disagree / Agree
• Completely disagree / Mostly disagree / Slightly disagree / Slightly agree / Mostly agree / Completely agree
Formatting considerations
• Overall appearance
• Length of the questionnaire
• Order of questions
• Demographic data collection
Formatting − Overall appearance
• Use an easy-to-read typeface.
• Leave plenty of white space.
• Separate different components of a
questionnaire by using different type
styles.
• Use arrows to show respondents where
to go.
Formatting − Length of the
questionnaire
• Shorter questionnaires usually generate
higher return.
• Include enough items to be thorough but
don’t over-burden the respondent.
• Length is not usually as important as other
formatting characteristics.
Formatting − Order of questions
• Introduction − Include questionnaire’s sponsor,
purpose, use, confidentiality, etc.
• Include instructions for how to answer the
questions (e.g., Circle one; Check all that
apply).
• Arrange questions so they flow naturally.
• Place demographic questions at the end of the
questionnaire.
• Be consistent with numbers, format, and scales.
Formatting − Order of questions
• Start with the easiest questions −
avoid controversial topics.
• Address important topics early.
• Move from specific questions to
general questions.
• Move from closed-ended to open-
ended questions.
Formatting − Demographic data
collection
• Only include questions about
demographic data that you will use.
• You may want to preface demographic
questions with the purpose for collecting
the information.
• You may need to state that providing this
information is optional and/or explain
how it affects program eligibility.
Formatting − Demographic data
collection
Age
Gender
Ethnicity
Marital status
Family size
Occupation
Education
Employment status
Residence
Previous contact with organization
Prior knowledge of topic
First-time participant vs. repeat
How you learned about the program
What you want to find out in a
pretest:
• Does each question measure what it is
supposed to measure?
• Are all the words understood?
• Are questions interpreted in the same way
by all respondents?
• Are all response options appropriate?
• Is there an answer that applies to each
respondent?
- Salant and Dillman (1994)
Source: Salant, P., & Dillman, D. A. (1994). How to conduct your own survey. New York: John Wiley & Sons, Inc.
Pre-testing questions
• Are the answers respondents can choose from
correct? Are some responses missing?
• Does the questionnaire create a positive
impression – does it motivate people to answer
it?
• Does any aspect of the questionnaire suggest
bias?
• Do respondents follow the directions?
• Is the cover letter clear?
- Salant and Dillman (1994)
Source: Salant, P., & Dillman, D. A. (1994). How to conduct your own survey. New York: John Wiley & Sons, Inc.
Pre-testing steps
1. Select reviewers who are similar to the
respondents and who will be critical.
(Also ask your colleagues to review it.)
2. Ask them to complete the questionnaire
as if it were “for real.”
3. Obtain feedback on the form and
content of the questionnaire and the
cover letter. Was anything confusing,
difficult to answer, de-motivating?
Pre-testing steps, continued
4. Assess whether the questions produce
the information you need.
5. Try the tabulation and analysis
procedures.
6. Revise.
7. If necessary, repeat these steps to pre-
test the revised version.
Revise and revise…
• A quality questionnaire is almost never
written in one sitting.
• A quality questionnaire goes through
multiple revisions (maybe a dozen!)
before it is ready.
• Remember – a list of questions is just
the starting point. There are many
factors that affect response.
Choices: Timing of data collection
When will data be collected?
– Before and after the program
– At one time
– At various times during the course of the
program
– Continuously through the program
– Over time − longitudinally
Data analysis
• Data can be analysed both quantitatively
and qualitatively; a small illustrative
sketch follows below
• Students are referred to research methods
and data analysis for a detailed discussion
of this topic
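As a pointer only, the sketch below (Python, with hypothetical data) illustrates the kind of quantitative summary − counts, percentages and a mean − that a closed-ended rating item such as the zoning-code example earlier lends itself to; open-ended answers would instead be coded into themes and analysed qualitatively.

from collections import Counter

# Hypothetical responses to a 5-point agreement item
# (1 = Strongly disagree ... 5 = Strongly agree).
ratings = [5, 4, 4, 2, 3, 5, 1, 4, 3, 4]

counts = Counter(ratings)
n = len(ratings)

for value in sorted(counts):
    # Frequency distribution: how many respondents chose each point.
    print(f"Rated {value}: {counts[value]} respondents ({100 * counts[value] / n:.0f}%)")

# A single summary statistic; interpret with care for ordinal scales.
print(f"Mean rating: {sum(ratings) / n:.1f}")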