
PROGRAM EVALUATION

Evaluation Terms
EVALUAND
 “Evaluand,” a generic term coined by
Michael Scriven, may apply to any object of
an evaluation. It may be a person, program,
idea, policy, product, object, performance, or
any other entity being evaluated (Mathison,
2005, p. 139).
EVALUATORS

External evaluator
• An external evaluator is someone who conducts an evaluation and is not an employee of the organization that houses the object of the evaluation (e.g., a program).

Internal evaluator
• Internal evaluators are employees of the organization in which the evaluation is conducted.
FORMATIVE AND SUMMATIVE TYPES
OF EVALUATION
Formative evaluation
• Evaluation is considered to be formative when it is conducted during the development or delivery of a program or product with the intention of providing feedback to improve the evaluand.
• Formative evaluation may also focus on program plans or designs.

Summative evaluation
• A summative evaluation is one that is done at the end of or on completion of a program.
• Summative evaluations may be done internally or externally, typically for the purpose of decision making.
• Michael Scriven, the originator of the terms formative evaluation and summative evaluation, distinguishes summative evaluation’s aim as reporting “on” the program rather than “to” the program.
STAKEHOLDERS
 Stakeholders are people who have a stake or
a vested interest in the program, policy, or
product being evaluated (hereafter referred
to as “the program”) and therefore also have
a stake in the evaluation.
TYPES OF STAKEHOLDERS
• people who have decision[-making] authority over the program, including other policy makers, funders, and advisory boards;
• people who have direct responsibility for the program, including program developers, administrators in the organization implementing the program, program managers, and direct service staff;
• people who are the intended beneficiaries of the program, their families, and their communities;
• people disadvantaged by the program, as in lost funding opportunities.
GUBA AND LINCOLN (1989) CONCEPTUALIZED
FOUR GENERATIONS OF EVALUATION
 First generation: Measurement—testing of students
 Second generation: Description—objectives and tests (Tyler’s work, cited in Stufflebeam et al., 2000)
 Third generation: Judgment—the decision-based models, such as Stake (1983), Scriven (1967a), and Stufflebeam (1982)
 Fourth generation: Constructivist, heuristic evaluation
STANDARDS FOR CRITICALLY
EVALUATING PROGRAMS
 The Joint Committee on Standards for Educational Evaluation developed the Program Evaluation Standards (referred to hereafter as the Standards) (Yarbrough et al., 2011).
 The Joint Committee included members from three
organizations: the American Educational Research
Association (AERA), the American Psychological
Association (APA), and the National Council on
Measurement in Education (NCME).
 The representatives of these three organizations were
joined by members of 12 other professional organizations
(e.g., the American Evaluation Association, the American
Association of School Administrators, the Association for
Assessment in Counseling, and the National Education
Association) to develop a set of standards that would
guide the evaluation of educational and training
programs, projects, and materials in a variety of settings.
THE STANDARDS ARE ORGANIZED ACCORDING
TO FIVE MAIN ATTRIBUTES OF AN EVALUATION:

Utility
• how useful and appropriately used the evaluation is

Feasibility
• the extent to which the evaluation can be implemented successfully in a specific setting

Propriety
• how humane, ethical, moral, proper, legal, and professional the evaluation is
CONTD…

Accuracy
• how dependable, precise, truthful, and trustworthy the evaluation is

Meta-evaluation
• the extent to which the quality of the evaluation itself is assured and controlled
ETHICS AND EVALUATION: THE AEA’S
GUIDING PRINCIPLES

Systematic inquiry
• Evaluators conduct data-based inquiries that are thorough, methodical, and contextually relevant.

Competence
• Evaluators provide skilled professional services to stakeholders.

Integrity/honesty
• Evaluators behave with honesty and transparency in order to ensure the integrity of the evaluation.
CONTD…

Respect for people
• Evaluators honor the dignity, well-being, and self-worth of individuals and acknowledge the influence of culture within and across groups.

Common good and equity
• Evaluators strive to contribute to the common good and advancement of an equitable and just society.
ETHICS, EVALUATION, AND CULTURAL
COMPETENCE
 Mertens (2009, 2015a) and Kirkhart (2005) recognize that concerns about diversity and multiculturalism have pervasive implications for the quality of evaluation work.
 Kirkhart introduced the term “multicultural validity” in her presidential address at the 1994 AEA conference; she defined this as “the vehicle for organizing concerns about pluralism and diversity in evaluation, and as a way to reflect upon the cultural boundaries of our work” (Kirkhart, 1995, p. 1).
5 JUSTIFICATIONS FOR CONSIDERING VALIDITY
FROM A MULTICULTURAL PERSPECTIVE

Interpersonal—The quality of the interactions between and among participants in the evaluation process.

Consequential—The social consequences of understandings and judgments and the actions taken based upon them.

Experiential—Congruence with the lived experience of participants in the program and in the evaluation process.
CONTD…

Theoretical—The cultural
congruence of theoretical
perspectives underlying the
program, the evaluation, and the
assumptions of validity.

Methodological—The cultural
appropriateness of measurement
tools and cultural congruence of
design configurations.
