Evaluation and monitoring for
summer pharmacy students
By
Feyera Gebissa ([Link], MHA)
Department of Public Health
UNIT VI: Evaluation
Evaluation
•is the systematic determination of the merit, worth and significance of
something or someone
•It is also the systematic acquisition and assessment of
information to provide useful feedback about something
•Utilizes information gathered by monitoring and other
methods to make judgments about the project or program.
•Is about using information to make changes and improvements
UNIT VI: Evaluation
• “…the systematic investigation of the merit, worth, or
significance of an ‘object’…”
Michael Scriven
• “…the systematic assessment of the operation and/or
outcomes of a program or policy, compared to a set
of explicit or implicit standards as a means of
contributing to the improvement of the program or
policy…”
Carol Weiss
• “A systematic way to determine the “value” of a
program, program components, or activity.”
Evaluation …
• Evaluation aims to answer agreed questions and to make a judgment
against specific criteria.
• For a good evaluation (as in research), data must be collected and
analyzed systematically, and their interpretation considered carefully.
• Assessing 'value' - or the worth of something - and then taking action
makes evaluation distinctive. The results of an evaluation are intended
to be used.
• There are many different perspectives and approaches to evaluation.
Answering
• 'Why are we doing it?'
• 'Who is the evaluation for?' and
• 'What are the key issues to address?'
will help decide whether to self-evaluate or to have an external evaluation.
For example, the key issues could be:
Your organizational structure and how it works?
How you carry out your services or activities?
How users experience the project?
What changes or benefits the project brings about?
Why Evaluation ?
• To gain insight - about a program and its operations
to see where we are going and where we are coming
from, and to find out what works and what doesn’t
• To improve practice – to modify or adapt practice to
enhance the success of activities
• To assess effects – to see how well we are meeting
objectives and goals, how the program benefits the
community, and to provide evidence of effectiveness
• To build capacity - increase funding, enhance skills,
strengthen accountability
What is monitoring?
• Is about collecting information that will help to answer questions about a
project.
• This information (data) is collected in a planned, organized and
routine way.
• This information can be used in reports on the project or to help
evaluate it.
• Because of this, some say that monitoring feeds evaluation.
• Information is collected routinely and systematically against a plan about
activities or services, users, or about outside factors affecting organization
or project.
• Monitoring is conducted at specific intervals: daily, monthly or
quarterly.
• Finally, monitoring data can be used to evaluate the project’s effect.
Importance of Monitoring and Evaluation
– Program improvement
– Accountability
– Results sharing with partners
– To assess the worth of a program
Comparison of monitoring and evaluation

Description      Monitoring           Evaluation
Frequency        Periodic, regular    Episodic
Basic purpose    Verification         Explanation
Undertaken by    Program managers     Same + external evaluators
Cost             Less expensive       Very expensive
Emphasis         Process/output       Outcome/impact
Focuses of Monitoring and Evaluation
1. Inputs: Resources used in the program. They
include financial, human or material resources.
Examples:
• Technical personnel for HIV testing
• HIV test kits
• ITN
2. Processes (activities): Program procedures that
are implemented to obtain the desired effects.
Examples:
• Training of human resources for counseling and testing
• Referring HIV positive clients to treatment services
• Selection of HIV positive patients for ART
Focuses of M and E…
3. Outputs: Are the immediate consequences of the
inputs utilized and the program activities conducted.
Examples:
• Number of appointments provided
• Number of clients counseled
• Number of ITNs distributed
• Number of users who received pre-testing counseling
• Number of HIV tests carried out
Focuses of M and E…
4. Outcomes: Effects upon the target population.
The effects include several types and may focus on
awareness, attitudes, behavior, utilization etc.
Examples:
• Increase of ITN use
• Improvement of quality of HIV/AIDS services
• Reduction of risk behaviors
Focuses of M and E…
5. Impacts: Are related to long-term accumulative effects
of the programs usually in the general population.
They are rarely attributed to a single program or
intervention.
Examples:
• Reduction of Malaria incidence
• Reduction of HIV/AIDS mortality
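The five focus areas above form a results chain running from resources to long-term effects. A minimal sketch of that chain, using the ITN and HIV examples from the text (all entries are illustrative, not real program data):

```python
# Hypothetical M&E results chain for an ITN program.
# Stage names follow the text; contents are illustrative only.
results_chain = {
    "inputs": ["ITNs procured", "budget", "field staff"],
    "processes": ["community distribution campaigns", "training on ITN use"],
    "outputs": ["number of ITNs distributed"],
    "outcomes": ["increase in ITN use"],
    "impacts": ["reduction in malaria incidence"],
}

def describe(chain):
    """Return one summary line per stage, in causal order."""
    order = ["inputs", "processes", "outputs", "outcomes", "impacts"]
    return [f"{stage}: {', '.join(chain[stage])}" for stage in order]
```

Monitoring tends to track the first three stages of such a chain; evaluation asks whether the last two actually followed from them.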
Common terms of M and E
1. Inputs- resources provided for a program
2. Processes- activities to transform inputs into outputs
3. Outputs- immediate results of a program
4. Effectiveness- measures the extent to which the objectives
are achieved
5. Efficiency- achieving effectiveness at the lowest practical cost
6. Outcome- peoples’ response to a program and how they are
doing things as a result of a program
7. Impact- effect of a program on the people and their
surroundings. It may be economical, social, organizational,
health, environmental, technical or other intended/unintended
results of a program.
Common terms of M and E…
8. Input/output Monitoring: Follow up of information
about inputs or resources and about outputs resulting
from the program activities.
Answers questions such as:
– Which services were provided?(output monitoring)
– Which resources were used?(input monitoring)
– What is the number of people who received
assistance?(output monitoring)
Examples:
– Input: Follow-up of the number of ITNs acquired every
quarter by the program.
– Output: Follow up of the number of people who received
assistance from the service every month.
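The input-monitoring example above (quarterly ITN counts) amounts to a planned-versus-actual comparison. A minimal sketch; all figures are hypothetical:

```python
# Hypothetical quarterly plan and actual ITN acquisitions.
planned = {"Q1": 500, "Q2": 500, "Q3": 500, "Q4": 500}
actual  = {"Q1": 480, "Q2": 510, "Q3": 450, "Q4": 500}

def coverage_vs_plan(planned, actual):
    """Percent of the planned quantity achieved in each period."""
    return {q: round(actual[q] / planned[q] * 100, 1) for q in planned}
```

A shortfall in any quarter (e.g. Q3 here) is exactly the kind of signal routine monitoring is meant to surface early.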
Common terms of M and E …
9. Process Evaluation: It is usually approached as equivalent to
implementation analysis.
- It supplements the monitoring of inputs and outputs with an
explanatory dimension, enabling the understanding of the
organizational context that affects the program.
Answers questions such as:
Was the intervention implemented according to what was expected
(compliance)?
What is the implementation degree of the program?
What program/context/users related factors may explain the observed
degree of implementation?
Are the planned actions reaching the targeted population?
Do the users have access to the intervention?
Example: Was the Project implemented as planned?
E.g. the capacity building was adequate; inputs were
available when needed, etc.
Common terms of M and E…
10. Outcome Monitoring: Follow-up of information
related to the program’s expected outcome, usually
over a defined period of time.
Answers questions such as:
What activities have been done to change behavior (the
intended outcome)?
What was the behavior change; did the expected outcome
happen?
Example: What is the change in ITN use after the
behavioral intervention?
Common terms of M and E…
11. Outcome Evaluation: Provides explanations about
why the program did or did not achieve its results.
– It gives emphasis to causal relations between
intervention and effect.
– Coverage is the usual indicator for outcome of a
program.
Answers question such as:
Does the intervention explain the observed effect on the
target population?
Example: Was the observed change in ITN use due to the
intervention? Why? How?
Common terms of M and E…
12. Impact Monitoring: Usually, it relates to the
follow up of disease trends in the general population.
Answers questions such as:
What effects do all interventions have upon HIV and AIDS
prevalence?
Example: Systematic follow up of HIV and AIDS prevalence rates
(HIV/AIDS surveillance)
13. Impact Evaluation: Analyzes the relationships between
disease trends, control programs and other associated
factors.
Answers questions such as:
How much of the change is due to the program?
Example: How much of the prevalence reduction was due to the
Program?
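Answering "how much of the change is due to the program?" requires comparing the observed change with what would have happened without the program. One common approach (not named in the text) is difference-in-differences, which compares the change in an intervention group with the change in a comparison group. A minimal sketch with illustrative figures:

```python
def difference_in_differences(treat_before, treat_after,
                              ctrl_before, ctrl_after):
    """Program effect = change in treated group minus change in
    a comparison (control) group over the same period."""
    return (treat_after - treat_before) - (ctrl_after - ctrl_before)

# Illustrative ITN-use percentages, before and after the intervention:
effect = difference_in_differences(30, 60, 32, 40)  # 22 percentage points
```

Subtracting the comparison group's change removes trends that would have occurred anyway, so only the program-attributable portion remains.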
Steps in Program Evaluation
• Step 1: Engage Stakeholders
• Step 2: Describe the Program
• Step 3: Focus the Evaluation Design
• Step 4: Gather Credible Evidence
• Step 5: Justify Conclusions
• Step 6: Ensure Use and Share Lessons Learned
Types of evaluation
1. Formative Evaluation:
• performed during the entire planning process and
program execution.
• Stakeholders should be involved
• It is equivalent to process or developmental
evaluation.
• Answers :
– How can the intervention be modified?
– Are there better solutions compared to those
proposed?
– How do the components relate amongst themselves?
Types of evaluation …
2. Normative Evaluation: Usually performed to provide managers
or users with a judgment about a program’s compliance with best
practices.
• Answers :
– Is the program following recommended guidelines?
– Does the program follow national guidelines?
– Does the program comply with prescribed norms?
3. Summative Evaluation: Provides a judgment about a program’s
worth and merit.
• Stakeholders may not be involved; it is typically conducted by evaluators.
• Answers questions such as:
• Is the program effective?
• Should the program be continued?
What to evaluate in health services
organizations?
1. Service achievement
2. Work progress
3. Use of resources
4. Staff performance
Examples of Indicators used in evaluation
1. Health policy indicators
• Political commitment
• Allocation of adequate resources
• Equity of distribution of resources
• Community involvement etc
2. Social and economic indicators
• Rate of population growth
• Gross national product
• Adequacy of housing
• Income distribution
• Adult literacy rate
3. Indicators of provision of health care
– Availability
– Utilization
– Accessibility (physical)
– Quality of care
– Economic and cultural accessibility
4. Indicators of PHC coverage
– Availability of safe water supply
– Adequate sanitary facilities
– Access of mothers and children to health care
– Availability of essential drugs etc… (PHC
components)
Indicators …
5. Health status indicators
– Percentage of newborns with birth weight ≥
2500 g
– Percentage of children whose weight-for-age
meets the standard
– IMR, CMR, morbidity rates, etc.
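Health status indicators such as the IMR are simple ratios. A minimal sketch of the standard definition, with illustrative figures:

```python
def infant_mortality_rate(infant_deaths, live_births):
    """IMR: deaths of infants under one year of age per 1,000
    live births in the same population and period."""
    return infant_deaths / live_births * 1000

# Illustrative figures only:
imr = infant_mortality_rate(45, 1000)  # 45.0 per 1,000 live births
```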
Appraising Staff Performance
• Measuring the actual contribution of each staff member’s work
output against the standards set by the team itself. The
selected standards must:
– achieve the desired results
– be attainable
– be measurable
– be understood by everybody
Staff performance appraisal in Ethiopia
The following criteria are used:
1)Personal traits and behavior such as
cooperativeness, dependability, attitude
2) Job dimension attributes such as quality and
quantity of work
• Scoring: Very Low (1), Low (2), Average (3), High (4), Very
high (5)
• The following appraisal scales are filled twice a year
1. Ability to apply knowledge
2. Punctuality
3. Cooperation on jobs
4. Quality and alertness
5. Quantity of work done
6. Handling and utilization of resources
7. Discipline
8. Innovation, acceptance and application of ability
9. Ability to accept responsibility
10. Planning and organizing ability
11. Leadership and control ability
12. Decision making ability
NB: Items 1–9 apply to all workers;
items 1–12 apply to those who hold managerial positions.
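Each appraisal item above is rated on the 1–5 scale. The text does not state how item ratings are aggregated, so this sketch simply averages them (an assumption); item ratings are illustrative:

```python
# Rating scale exactly as given in the text.
SCALE = {"Very Low": 1, "Low": 2, "Average": 3, "High": 4, "Very high": 5}

def appraisal_score(ratings):
    """Average a worker's item ratings on the 1-5 scale.
    Simple averaging is an assumption, not the official formula."""
    scores = [SCALE[r] for r in ratings]
    return sum(scores) / len(scores)

# Hypothetical ratings for items 1-9 (a non-managerial worker):
worker_ratings = ["High", "Average", "Very high", "High", "Average",
                  "High", "Average", "High", "High"]
```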
Indicators
1. Shorthand- an indicator standing in for something
that could be measured directly, but only at great
expense. E.g. health worker to population ratio for
measuring health service availability
2. Proxy- an indicator of something which is inherently
immeasurable. E.g. GNP can be used as a proxy
indicator of development
• Indicators can also be classified as:
1. Simple e.g. infant mortality rate
2. Composite e.g. disability adjusted life year (DALY)
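The DALY is a composite indicator: it sums years of life lost to premature death (YLL) and years lived with disability (YLD). A simplified sketch of that composition, ignoring refinements such as discounting and age-weighting:

```python
def years_of_life_lost(deaths, remaining_life_expectancy):
    """YLL = deaths x standard remaining life expectancy at age of death."""
    return deaths * remaining_life_expectancy

def years_lived_with_disability(cases, disability_weight, duration_years):
    """YLD = cases x disability weight (0-1) x average duration in years."""
    return cases * disability_weight * duration_years

def daly(yll, yld):
    """DALY = YLL + YLD: total healthy life years lost."""
    return yll + yld

# Illustrative figures only:
burden = daly(years_of_life_lost(10, 30),
              years_lived_with_disability(100, 0.2, 5))
```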
Usefulness of indicators
1. Validity---- Does the indicator actually
measure what it is intended to?
2. Reliability- Will the indicator provide the same
information under different situations?
3. Sensitivity- Does the indicator really show
changes in situation?
4. Specificity- Does the indicator show changes in
situation as specified?
Management audit
• The purpose of a management audit is to ensure future
success and improvement in the performance of
management/managers.
• It evaluates how efficiently the management works: a
self-evaluation of management success or failure.
• The management is questioned against a list of
questions covering all of its functions.
• It is conducted to critically evaluate the activities and
efficiency of the management on the basis of certain
specific objectives.
THE END