Section 4: Evaluation of Professional Development

Overview

An essential component of professional development activities involves ongoing and systematic evaluation procedures. Few efforts have been made to evaluate the results of professional development beyond the brief responses requested at the conclusion of workshops, which assess participant reaction to the session (see box). It is an especially critical time for the adult education field to emphasize the evaluation of professional development for at least two reasons:

•  Given the certainty of diminishing resources and competing priorities, the luxury of unfocused and unexamined professional development no longer exists. Increasing participation and financial support by non-educational partnerships are bringing to adult education new demands for accountability.

•  If adult education practices are to respond to rapidly changing technological and social structures, professional development is the primary vehicle for meeting that challenge. Sound information is needed to make thoughtful decisions on how to change directions.

[Box] In a meta-analysis of the results of professional development, Wade (1985) concludes: "few accounts present concrete evidence of its (professional development) effects on teachers and students." Likewise, Loucks and Melle (1982) note that "most staff development reports are simply statements of participant satisfaction."

The focus of this section is to examine methods and procedures for identifying what changes have taken place as a result of professional development and for determining whether intended goals have been achieved. This section also suggests specific and practical ongoing evaluation activities that should be incorporated within all professional development efforts. The information is designed to assist professional development coordinators, administrators at all levels, instructors, and other interested practitioners in developing ongoing evaluations of professional development activities. We present an evaluation framework that is appropriate for all approaches to professional development. The framework emphasizes that evaluation is continuous, not a single event that occurs at the end of professional development activities.



A Framework for Evaluating the Professional Development Process and Impact

Professional development is about CHANGE. The purpose of professional development is to improve learner outcomes by changing instructional behavior to achieve a pre-determined goal, whether in teaching adult students, administering programs, or designing professional development activities. While learning about such innovations may be relatively easy, applying them in a consistent and insightful manner is another matter. As Guskey (1986) notes, practitioners appear to be most motivated to change as they observe learner success and satisfaction, and this cannot occur immediately. Furthermore, for professional development, like learning, to be successful, it "must be adapted to the complex and dynamic characteristics of specific contexts" (Guskey, 1995). This change process takes time. Therefore, it is unreasonable to expect that individual professional development activities will immediately result in altered long-term instructional behavior, improved learner performance, or changed organizational structures and practices. The role of evaluation, then, is not only to provide information on the impact of professional development, but also to provide data for refining and adjusting professional development activities to ensure that services can be improved on an ongoing basis.
Evaluation of the impact of professional development activities must address the following two
questions:
1. Does professional development alter long-term instructional behavior?

2. How do we know that professional development activities do, in fact, improve learner
performance?

Evaluation of the process of professional development can tell program staff how well
professional development activities within the program are working. Five questions must be
considered when using evaluation as a mechanism to promote continuous program improvement:
1. What would we like to see happen? (Examine goals identified in needs assessments.
When correctly done, needs assessments detail the learning needs of participants, which
are then reflected in professional development activities. Such assessments should
provide a clear reading of the specific objectives of professional development activities.
Evaluation is a logical "next step" of needs assessments in that evaluation provides
information as to whether (and to what extent) goals identified through needs
assessments have been met.)

2. How can we make that happen? (Design a professional development plan that includes
information on delivery, timing, and use of professional development approaches, and
evaluation questions that need to be answered.)

3. How is it going? (Collect information and monitor progress on an ongoing basis.)


4. What are the results? (Assess the extent of both short and long-term changes.)

5. What should be done with the results? (Evaluate options and make decisions.)

The following exhibit shows how evaluation relates to professional development activities and
can inform continuous program improvement efforts by staff from professional development agencies
and state and local adult education programs. As shown by this figure, evaluation data are used in all
stages of the professional development process, including planning, implementing, and reviewing and
revising professional development activities. It emphasizes that evaluation is continuous, rather than a
single event that occurs at the end of professional development activities.
The professional development framework implies that time is required before professional
development activities can be expected to show success, and needs assessments are a critical
component of evaluation. Also, the framework is suitable for the different professional development
approaches detailed in Section 2 of the Guide: Workshop/Presentations, Inquiry/Practitioner
Research, Product/Program Development, and Observation/Feedback.



Exhibit: An Ongoing Professional Development Process (flowchart of the planning, implementing, and reviewing/revising stages; not reproduced here)



An Evaluation Framework
The next exhibit presents a framework for evaluating process and impact, based on
Kirkpatrick’s (1994) sequential levels of evaluation for training programs. While his evaluation
approach was developed primarily for evaluating business and industry training programs, consisting
largely of what we characterize in this Guide as the Workshop/Presentation approach, many of his
concepts and aspects of his design are applicable to a broader base of adult programs. The four stages
of evaluation are intended to measure: (1) reaction, (2) learning, (3) behavior and actions, and
(4) results.
•  Reaction: Measures how those who participate in professional development activities react to what has been presented. Although typically characterized as "the happiness quotient," participants need to have a positive reaction to a professional development activity if information is to be learned and behavior is to be changed.

•  Learning: Measures the extent to which professional development activities have improved participants' knowledge, increased their skills, and changed their attitudes. Changes in instructional behavior and actions cannot take place without these learning objectives being accomplished.

•  Behavior: Measures what takes place when the participant completes a professional development activity. It is important to understand, however, that instructors cannot change their behavior unless they have an opportunity to do so.

•  Results: Measures the final results that occurred because an instructor participated in professional development activities. Evaluating results represents the greatest challenge in evaluating professional development approaches.

As shown in the exhibit, these levels differ in their specific purposes and in the types of program decisions they can inform; they also become more time-consuming and expensive to conduct, especially when attempting to evaluate changed behaviors and results. Kirkpatrick emphasizes the importance of progressing through all four stages sequentially because, as he notes, if information and skills are not learned (Level 2), it is unlikely that instructors can change their instructional behaviors (Level 3) or that programs will change their procedures and learning gains will result (Level 4).
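
To make this sequential logic concrete, the brief sketch below (in Python; it is our illustration, not part of Kirkpatrick's work, and all names in it are hypothetical) shows how outcomes observed at higher levels stop being attributable to the activity once a lower level fails:

    # A minimal sketch of Kirkpatrick's sequential logic, assuming each
    # level's evaluation is summarized as a simple pass/fail judgment.
    # LevelResult and attributable_levels are hypothetical names.

    from dataclasses import dataclass

    @dataclass
    class LevelResult:
        level: int    # 1 = reaction, 2 = learning, 3 = behavior, 4 = results
        passed: bool  # did this level's evaluation meet its criterion?

    def attributable_levels(results: list[LevelResult]) -> list[int]:
        """Return the levels whose outcomes can be credited to the activity.

        If a level fails (e.g., information/skills were not learned at
        Level 2), outcomes observed at later levels can no longer be
        attributed to the professional development activity itself.
        """
        credited: list[int] = []
        for result in sorted(results, key=lambda r: r.level):
            credited.append(result.level)
            if not result.passed:
                break  # later levels are no longer attributable
        return credited

    # Example: positive reaction and learning, but no behavior change, so
    # any Level 4 results cannot be credited to the training.
    print(attributable_levels([
        LevelResult(1, True), LevelResult(2, True),
        LevelResult(3, False), LevelResult(4, True),
    ]))  # prints [1, 2, 3]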



Four Levels of Evaluation for Professional Development

LEVEL 1 (Reaction)

Purpose: Measures how those who participate in professional development programs react to it.

Benefits:
1. Helps improve future training.
2. Creates trust in participants.
3. Provides quantitative information useful to managers and others.
4. Establishes standards of performance (may need to change leaders, facilities, materials).

Link to Approaches: Useful following the Workshop/Presentation Approach. Also used at critical points during Observation/Feedback, Inquiry/Research, or Product/Program Development to determine the level of satisfaction with a product or process.

LEVEL 2 (Learning)

Purpose: Determines if the professional development program has changed attitudes, improved knowledge, or increased skills.

Benefits:
1. Measures effectiveness of instruction.
2. Measures specific learning (information, attitudes, skills).
3. Results: changes in instruction, instruments, other resources.

Link to Approaches: Pre/post tests of information or skills are appropriate with Workshop/Presentation and Observation/Feedback. Of minimal use for Inquiry/Research, as information or skills are more open and discoverable than prescribed.

LEVEL 3 (Change in Behavior: transfer of training)

Purpose: Determines the extent to which behavior has changed as a result of the professional development program. (Check to see if there are restraints that prevent change in behavior.)

Benefits:
1. Intrinsic rewards: self-esteem, empowerment if successful.
2. Extrinsic rewards: praise, promotion, salary.
3. Provides possible information to managers.
(If the program is continuing long-range, it is important to consider cost in relation to gains.)

Link to Approaches: Whereas Kirkpatrick recommends such devices as Management by Walking Around (MBWA) or self-report devices such as patterned interviews or survey questionnaires at spaced intervals, the Observation/Feedback Approach would seem more appropriate. It can measure continuous change (especially with behavior descriptors such as those found in the CIM; see Appendix).

LEVEL 4 (Results)

Purpose: What final results occurred because participants attended the professional development program? Tangible results (in the workplace) might include increased production or improved quality; less tangible results may include self-esteem, cross-cultural tolerance, or improved communication. (Level 4 is the greatest challenge.)

Benefits:
1. Measurable increases in quality: teamwork, morale, safety.
2. Be satisfied with "relationships" or evidence if "proof" is not available.
(It is also important to measure results against cost.)

Link to Approaches: Kirkpatrick notes that in the workplace it is nearly impossible to tie training directly to specific results (e.g., increased productivity, reduced costs); he suggests "evidence" is sufficient. In other adult programs, program change may be more easily linked with professional development. The Product/Program Development Approach can provide multiple forms of evidence (see examples in Section 2). Observation/Feedback can also provide evidence of the adoption of professional development practices.



Evaluation Devices
Evaluation devices are instruments for measuring outcomes and processes. Different devices
can be used within this evaluation framework. However, three questions need to be answered before
determining which devices to use:
1. What specific evaluation devices or types of instruments are most appropriate for the
different evaluation stages (i.e., reaction, learning, behavior and actions, and
results)?

2. What specific devices or instruments are most appropriate for which professional
development approach (i.e., Workshop/Presentations, Inquiry/Practitioner Research,
Product/Program Development, and Observation/Feedback)?

3. What specific devices or instruments are most appropriate for collecting data about
program factors and processes that influence the effectiveness of professional
development activities (i.e., administrative support and flexibility, adequate funding)?

Answering these questions is not always an easy task, and often there are many choices. The following exhibit¹ summarizes a number of possible evaluation devices as they relate to the different evaluation stages and professional development approaches. Each device has strengths and weaknesses. To select those procedures most suitable for adult education, we cite advantages and concerns for each device. To measure change as a result of professional development activities, some pre- and post-activity measure is necessary (it is assumed as a prerequisite in all of the examples). Like the approaches themselves, evaluation is most effective when a combination of devices is employed, each appropriate to specific goals. Such combinations can create a comprehensive and valid evaluation of professional development. Clearly, then, no one method of evaluating professional development is appropriate for all, or even any one, professional development approach. For example, Inquiry/Research may employ self-report, interview, and observation/feedback combinations. Product/Program Development may favor an evaluation of product use, evidence of leadership in professional development for that product, and self-report devices. Workshop/Presentation may choose Levels 1 and 2 (reports of satisfaction and content/skill assessment) followed by Observation/Feedback and self-report. The possible combinations are endless.
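
Because every device below presumes a pre/post measure, a minimal sketch (in Python, with hypothetical names and scores, not from the Guide) of computing the average pre-to-post change on one such paired measure may be useful:

    # A minimal sketch, assuming pre and post scores are paired by
    # participant (same index = same person). All numbers are hypothetical.

    from statistics import mean

    def mean_gain(pre: list[float], post: list[float]) -> float:
        """Average pre-to-post change across participants."""
        if len(pre) != len(post):
            raise ValueError("pre and post must be paired by participant")
        return mean(b - a for a, b in zip(pre, post))

    # Example: content-knowledge scores before and after a workshop series.
    pre_scores = [52.0, 60.0, 47.0, 70.0]
    post_scores = [68.0, 66.0, 59.0, 75.0]
    print(f"Mean gain: {mean_gain(pre_scores, post_scores):.1f} points")  # 9.8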

¹ The chart and the following discussion are adapted from Pennington and Young (1989). Their research has been adapted for professional development and the base broadened to adult education.



Professional Development Evaluation Devices

INTERVIEWS

Typically, interviews consist of directive and non-directive questions (sometimes rank-ordered) asked in private. Interviews can be used following any of the approaches suggested in this Guide. The question protocols are designed as appropriate to each.

Advantages:
• May get candid responses from participants, especially if non-directive.
• Allows participants to summarize for themselves.
• Allows the interviewer to check for miscommunication.
• Can have the additional benefit of building positive relations if successfully conducted.
• Allows for in-depth probes if answers are too general to be useful.
• Interviews focused on an observation tend to be most successful.

Disadvantages:
• Is time-consuming.
• Answers may reflect what the interviewer wants to hear.
• Probes may cause the person being interviewed to feel stress or be defensive.
• Is, after all, a self-report device that reflects the biases of the individual and may not reflect actual changes in behavior.

COMPETENCY TESTS*

Most appropriately used following some workshop/presentation approach where content or techniques are the focus of the workshop. (For example, the ESL Institute in California used tests of content and sequence to determine if participants understood training content.) Pre/post forms of a test can be used to measure growth in the content of the professional development topic.

Advantages:
• Helps to guarantee minimum standards of knowledge.
• Eliminates individual bias if objectively scored.
• Is logically defensible in a court of law.
• If well constructed, can have limited validity and reliability.

Disadvantages:
• Knowledge does not equal effective teaching.
• At best only samples behavior (as do all instruments).
• Has not been shown to have predictive validity (i.e., successful teaching).

* Some states also require pre-service competency tests for initial adult education credentials. Such tests frequently require basic competence in reading, writing and math.

STUDENT EVALUATIONS

Maintains that students are best able to evaluate change in instructional behavior because they are ever-present. It is a form of observation/feedback except that students are the observers. Can be done by a student committee responsible for communicating with the entire class or classes (Pennington 1989, p. 628).

Advantages:
• Provides an additional means of communication between students and instructor.*
• A standardized format can improve consistency.
• Research shows a positive correlation (.70) between student and peer ratings of instructional effectiveness (Aleamoni 1987).
• Data from this approach appears to have considerable validity and reliability (Aleamoni 1987).*
• Can be used effectively in conjunction with other evaluation data (e.g., peer observation in nonpunitive situations).

Disadvantages:
• Research shows a tendency for students in "required" courses to rate instructors more harshly; thus GED and some ESL or ABE instructors might be rated unfairly.
• ESL students traditionally tend to be uncomfortable with change in instructional patterns, especially if different from those previously experienced.
• Data from students is often subject to misinterpretation.
• Students may be reluctant to be critical of instructors (especially in ESL).

* If students view the teacher as "legitimate" and "expert."

STUDENT ACHIEVEMENT

Some advocates (Medley 1982) maintain that effective professional development should be tied directly to student achievement. That position states that the purpose of change in instruction is to improve student performance. Pre/post tests of student achievement, therefore, should serve as the principal measure of professional development (and instructor) effectiveness.

Advantages:
• Is seemingly a logical basis for evaluating the effects of professional development, as noted above.
• Would encourage instructors to focus on student achievement as well as instructional strategies.

Disadvantages:
• The reliability of student achievement as a measure of teaching effectiveness "has been low" (Pennington 1989; Darling-Hammond 1983).
• Teaching performance is one of many variables affecting student learning.
• Given inconsistent attendance and turnover in adult education, student achievement data would be highly suspect as a measure of teaching effectiveness.
• In beginning-level classes (especially those with low-level English skills) and for students with learning problems, this practice could produce misleading results.
• Individual learning styles also skew learning results from a given instructional strategy.
• Would rely heavily on short-term change, whereas language learning, for example, is a long-term process.
Professional Development Evaluation Devices (Continued)

CLASSROOM OBSERVATION

Assumes a "research-based approach whereby the observer collects descriptive data on a predetermined aspect of the instructor's performance" (McGreal 1983). That performance should be directly related to professional development activities.

Advantages:
• Allows instructors to demonstrate change in the actual situation where change takes place: the classroom.
• If used in conjunction with a prepared and agreed-upon format, the data gathered can be extremely reliable.
• With use of a pre/post instrument, the data can effectively show change in instructional behavior resulting from professional development.
• Evidence indicates that peer observations may provide the best data by avoiding the threat of employment decisions.

Disadvantages:
• Requires careful planning and focus, usually involving pre/post conferences and established performance criteria.
• Requires systematic and adequate sampling of instructional behavior which, in turn, requires administrative support.
• Can be seen as evaluating the instructor as a person rather than the effects of professional development efforts.
• Controversy surrounds whether visits should be scheduled or unannounced (Master 1983; Pennington 1989).
• Requires an "objective" observer who uses agreed-upon criteria, not just "the way I would do it."

SELF-EVALUATION/SELF-REPORT

Probably the most common procedure in adult education for evaluating the results of professional development. May take the form of interviews, written evaluations (such as portfolio anecdotes), or public testimony. A variation of this procedure adds an observation-type approach by using a self-made video of classroom instruction.

Advantages:
• Is ultimately the most motivating form of evaluation and often the most critical: "the only effective motive for change comes from within" (Brighton 1965, p. 28).
• Encourages a sense of responsibility and professionalism that is consistent with the notion of professional development.
• Helps educators focus on long-term goals rather than fleeting interests.
• May be most effective when combined with other modes of evaluation, such as peer observation.

Disadvantages:
• The procedure tends to lack reliability and objectivity (at least in the minds of those reviewing reports).
• Research shows that insecure instructors tend to overrate themselves; secure instructors tend to underrate themselves (Pennington 1989, p. 640).
• Training in self-evaluation would appear essential to improve validity.

PRODUCT/PROGRAM EVALUATION

In the case of curriculum development, for example, it is possible to judge the knowledge and skill of the developer by the resulting product. Likewise, a newly developed program can establish evaluation criteria such as size of continuing voluntary enrollment, degree of student retention, success in job or school placements, school and career advancement, and the like. If the program has positive results on each of the criteria established, the program developer could possibly be evaluated by those results.

Advantages:
• A product or program has concrete, observable characteristics that can be objectively evaluated by established criteria. The skill of the developer can likewise be evaluated.
• When the development is team-based, the collegial learning as a hands-on process has increased potential for retention and further application.
• The problem-solving nature of the task produces cognitive skill development useful to both classroom and collegial roles. The results can be observed as part of the evaluation process.
• Involvement in program or product development efforts often motivates participants to become leaders in other ways.

Disadvantages:
• Both program and product are likely to be developed by a team. It is difficult to assess whether all members benefitted or contributed equally.
• Discord among team members can affect the quality of the result and make evaluation difficult.
• Selection of participants is a problematical task. Neither volunteers nor administratively selected participants may be the most qualified to serve. Careful criteria and screening are required. If members are arbitrarily selected, there is potential for faculty dissension and unwillingness to use results. Evaluation of the product might not reflect that situation.
The following discussion briefly summarizes each evaluation device listed in the preceding
chart and links each with the appropriate professional development approaches cited in this Guide.
Interviews
Probably "the major advantage of the interview process is its confidential nature" (Pennington and Young 1989). On the other hand, the serious drawbacks of time, discomfort, untrained interviewers, and lack of focus make this approach questionable. However, if an agency is willing to invest in interview training of non-threatening, interactive coordinators, the development of appropriate criteria and protocols, and the time required to carry out this process (especially if accompanied by observations), the interview process has demonstrated considerable effectiveness. As such, this device can be used appropriately with any of the professional development approaches.
Competency Tests
Competency tests appear to be useful in assessing the extent to which participants have
mastered content and skill training. (See also Kirkpatrick's Level 2.) They can serve a role as one
component of a series of procedures designed to evaluate professional development. That series
should go beyond paper and pencil testing of content or skills. If a professional development approach
has a goal of increasing knowledge or skill, such tests are appropriate to ensure that those elements are
present before evaluating application of the knowledge or skills. This device could easily be a
component of the Workshop/Presentation Approach or the Observation/Feedback Approach.
Student Evaluations
While it is an intriguing notion that adult students who sit in day-to-day observance of instructional strategies are most qualified to evaluate the use of newly learned instructional strategies, this approach may not provide an accurate assessment of the adult education program. Not only do adult students have preconceived notions about appropriate strategies, they may have had negative experiences with them. In addition, the erratic attendance of adult students may prevent a sense of continuity, and feelings about instructors make an unbiased judgment difficult. On the other hand, this method, used in combination with other approaches such as peer observation (Observation/Feedback Approach), might provide some new insights into specific instructional behaviors that work well or could be made more effective. Likewise, student feedback is an important element of the Product Development Approach (e.g., new curriculum) and any Inquiry/Research Approach.
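
The devices chart cites a positive correlation (.70) between student and peer ratings (Aleamoni 1987). For readers who want to check such a figure against local data, a minimal sketch (entirely hypothetical ratings, not from the Guide) of computing a Pearson correlation:

    # Hypothetical per-instructor mean ratings. statistics.correlation
    # (Pearson r by default) requires Python 3.10 or later.

    from statistics import correlation

    student_ratings = [4.2, 3.8, 4.5, 2.9, 3.6]  # mean student rating per instructor
    peer_ratings = [4.0, 3.5, 4.4, 3.1, 3.9]     # mean peer rating per instructor

    print(f"Pearson r = {correlation(student_ratings, peer_ratings):.2f}")
    # prints "Pearson r = 0.91" for this toy data
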
Student Achievement
Because the reliability of test scores as a measure of teaching effectiveness is low, serious
questions must be raised about the efficacy of student achievement as an evaluation tool for
professional development programs. Further, instructors might be tempted to teach to the test in order
to validate their professional development efforts. In addition, little or no relationship has been found
between specific instructional approaches and performance on selected test items (Centra and Potter
1980).
Finally, because teaching performance is only one of many factors that predict student learning,
it should not be isolated in a single cause-effect relationship. At the same time, an obvious goal of
professional development is to assist in improving student achievement. If not by test scores alone,
attention must ultimately be paid to student learning, learning styles, and metacognitive strategies in
relation to instructional strategies. The relationship is obviously complex but one in need of study as
adult education programs begin to serve funders with more stringent accountability requirements.
Classroom Observation/Feedback
Research in K-12 programs linking the Workshop/Presentation approach with Observation/Feedback has received accolades (Joyce and Showers, 1981), with some cautionary admonitions (Wade 1984/85).
As noted by Pennington and Young (1989) in discussing evaluation approaches for ESL
faculty, “The observation method . . . may arguably be the most valid criterion for evaluation of
practicing teachers, i.e., classroom performance” (p. 636). To make this procedure valid, however,
requires following strict guidelines. Even then, such observer deficiencies as using subjective
standards, lack of content expertise, lack of training in observation methods, and insufficient sampling
can invalidate results.
A reliable and valid observation procedure can be established according to Pennington and
Young (1989) “only by employing highly trained, sensitive observers who themselves have experienced
teaching in the types of classes observed, and who conduct a number of observations under comparable
conditions in a variety of classes over a period of time” (p. 637). Competency-based program
development (Product/ Program Development Approach), the ESL Institute (Observation/Feedback
Approach) and many Inquiry/Research studies have successfully used peer coaching and
Observation/Feedback. In addition, it is frequently the content of a Workshop/ Presentation Approach.
Self-Evaluation/Self-Report
Advantages of this method of evaluation of professional development efforts are many:
increased likelihood of changing instructional behavior, increased sense of professionalism, and
improved goal-setting abilities. It is especially relevant to portfolio development as a reflective practice
activity (Inquiry/Research Approach). The lack of objectivity and reliability, however, must be noted.
Again a combination of this method with other approaches (such as Observation/Feedback) can
enhance both objectivity and reliability of the method yet maintain the advantages noted above. (See
also Kirkpatrick’s Levels 2 and 3.)
Product/Program Evaluation
A case can be made that the product or program developed reflects the success of professional development efforts. However, several factors make such a simple evaluation analogy difficult: Can the "growth" of the individual be documented without pre/post measures? How can we measure individual development if the product or program is a group effort? Do the results truly represent professional development levels, or did prior qualification, arbitrary selection, or group dissension affect the outcomes?

Surely product or program results are part of the evaluation process, but more comprehensive assessment and evaluation such as those discussed above should also be applied to this approach.
Evaluation Scenario
The scenario presented in the following exhibit incorporates components of the professional
development evaluation model described earlier in this section. Specifically, the scenario depicts how a
combination of evaluation devices can be applied to evaluating professional development. It must be
noted, however, that in this scenario, program and administrative factors are all supportive, enhancing
the likelihood that the professional development activity would be successful.
Professional Development Evaluation Scenario

1. PLANNING FOR PROFESSIONAL DEVELOPMENT

Stage: Three levels of needs assessment profiles reveal that several ESL instructors, the professional development coordinator, and the site administrator feel that ESL students are being "spoon-fed" by a number of well-meaning ESL instructors who want to protect their students from uncomfortable situations and honor student beliefs that the role of the instructor is to present "information" and the role of the student is to learn it. The issue comes up at most faculty meetings. To resolve the problem, the Professional Development Council, consisting of instructors, the professional development coordinator, the site administrator, and student representatives, decides to set up an action research project with beginning ESL students to see if students will, in fact, accept other instructional strategies and become more independent learners without sacrificing expected gains in English competence. Because there are differing perceptions of action research, the Council decides to hold a workshop series on "Action Research: Theory and Practice" open to all ESL faculty, including those participating in the action research project. Participants will establish guidelines and interventions, as well as monitoring and evaluation procedures.

Information gathering and evaluation procedures:
• Analysis of needs assessment profiles for instructors, the professional development coordinator, and the site administrator by the Professional Development Council.
• Identification of a specific problem in need of resolution.
• Decision to set up an Action Research Project (Inquiry/Research Approach), which establishes clear goals and evaluation questions:
  - Will students accept instructional strategies that require more self-direction?
  - Will students become more independent learners?
  - Will student gains be as great as or greater than expected in traditional classrooms?
• Decision to hold a workshop series to standardize procedures and inform other interested faculty. Procedures to include:
  - A pre/post survey on action research for the workshop series;
  - Pre/post English competency measures to show student gains;
  - A Level 1 evaluation form for each workshop session;
  - A 3-hour informal video of each ESL teacher's classroom (pre/post);
  - A portfolio anecdotal log (weekly).

2. IMPLEMENTING PROFESSIONAL DEVELOPMENT

Stage: The action research project is carried out following the steps illustrated in Section 2 of this Guide. The length of the project will encompass 75 instructional hours for each student. During the final week of the program, a second video is recorded in each ESL classroom. Post-tests are administered to students and post-surveys to instructors.

Information gathering and evaluation procedures:
• All pre-tests and surveys are administered.
• Pre classroom videos are recorded.
• Each Friday the coordinator facilitates a 90-minute meeting of participants (for which they are paid). Portfolios are reviewed, compared, and evaluated.
• Decisions are made to modify instructional strategies, change timelines, or make other needed changes.

3. REVIEWING PROFESSIONAL DEVELOPMENT

Stage: Results of all assessments are first analyzed by the professional development coordinator with an evaluation specialist. The data and their findings are then presented to the participating faculty and, lastly, to the Professional Development Council. The Council is pleased with the results, which show comparable student gains but great strides in independent learning and metacognitive strategies, as well as improved self-esteem among both students and instructors.

Information gathering and evaluation procedures:
• Each entity looked at the data to see if the original evaluation questions had been answered and to what extent goals were achieved. A report was compiled to present the findings, which were considered to be very favorable.
• The Council, with administrative concurrence, decides to have results presented to all ESL faculty, to the Board of Education, to other appropriate community organizations, and at the statewide adult education conference.
• In addition, faculty who participated have volunteered to "peer-coach" other interested faculty (Observation/Feedback Approach).
• It was also decided to conduct a new needs assessment following the faculty presentations to see if other faculty would like peer coaching in the metacognitive, problem-solving, and decision-making strategies used in the research project.
• The Council has indicated that, if peer coaching is successful, it will consider mandating the successful strategies throughout the ESL program.
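
The procedures above include a Level 1 evaluation form for each workshop session. A minimal sketch (hypothetical item names and ratings, not from the Guide) of tallying such forms into per-item means for the Council's review:

    # A minimal sketch, assuming each Level 1 form records 1-5 ratings for
    # a fixed set of items. Item names and responses are hypothetical.

    from statistics import mean

    forms = [  # one dict per returned workshop evaluation form
        {"content relevance": 5, "presenter clarity": 4, "useful materials": 4},
        {"content relevance": 4, "presenter clarity": 4, "useful materials": 3},
        {"content relevance": 5, "presenter clarity": 5, "useful materials": 4},
    ]

    for item in forms[0]:
        ratings = [form[item] for form in forms]
        print(f"{item}: mean {mean(ratings):.2f} (n={len(ratings)})")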

Thus, the evaluation cycle has come full-circle with a targeted needs assessment that will follow
the same steps illustrated above. During this targeted professional development activity, other
professional development activities should also be taking place to meet other needs or solve other
organizational problems. As Showers (1995) points out: "The entire [professional development] process must be embedded in the school culture to ensure a permanent ongoing inquiry into how to make the school better" (p. 6).
References

Baden, D. (1982). "A User's Guide to the Evaluation of Staff Development." In Assessing the Impact of Staff Development Programs. Syracuse, NY: National Council of States on Inservice Education.

Brighton, S. (1965). Increasing Your Accuracy in Teacher Evaluation. Englewood Cliffs, NJ: Prentice Hall.

Centra, J.A. and Potter, D.A. (1980). "School and Teacher Effects: An Interrelational Model." Review of Educational Research, 50, 273-291.

Darling-Hammond, L., Wise, A. and Pease, S. (1983). "Teacher Evaluation in the Organizational Context." Review of Educational Research, 53, 285-328.

Fenstermacher, G. and Berliner, D. (November 1983). A Conceptual Framework for the Analysis of Staff Development. Santa Monica, CA: Rand Corp.

Guskey, T. (May 1986). "Staff Development and the Process of Teacher Change." Educational Researcher, pp. 5-12.

Guskey, T. (Fall 1994). "Results-Oriented Professional Development: In Search of an Optimal Mix of Effective Practices." Journal of Staff Development, 15(4), 42-50.

Hord, S. and Loucks, S. (1980). A Concerns-Based Model for the Delivery of Inservice. Austin, TX: Research and Development Center for Teacher Education, University of Texas at Austin.

Joyce, B. and Showers, B. (1981). "Transfer of Training: The Contribution of Coaching." Journal of Education, 163, 163-172.

Kaufman, A. (1985). Implementing Problem-Based Medical Education. New York: Springer.

Kaufman, A., Mennin, S., Waterman, R. and Duban, S. (1989). "The New Mexico Experiment: Educational Innovation and Institutional Change." Academic Medicine, 64(6), 285-294.

Kirkpatrick, D. (1994). Evaluating Training Programs. San Francisco: Berrett-Koehler Publishers, Inc.

Loucks, S. and Melle, M. (1982). "Evaluation of Staff Development: How Do You Know It Took?" The Journal of Staff Development, 3, 102-117.

Master, P. (1983). "The Etiquette of Observing." TESOL Quarterly, 17, 497-501.

Medley, D.M. (1982). Teacher Competency Testing and the Teacher Educator. Charlottesville, VA: Association of Teacher Educators and the Bureau of Educational Research, School of Education, University of Virginia.

Millman, J. (1981). "Student Achievement As a Measure of Teacher Competence." In J. Millman (Ed.), Handbook of Teacher Evaluation (pp. 146-166). Beverly Hills, CA: Sage.

Pennington, M.C. (1989). "Faculty Development for Language Programs." In R.K. Johnson (Ed.), The Second Language Curriculum (pp. 91-110). Cambridge: Cambridge University Press.

Pennington, M.C. and Young, A.L. (December 1989). "Approaches to Faculty Evaluation for ESL." TESOL Quarterly, pp. 619-648.

Pennington, M.C. and Young, A.L. (in press). "Procedures and Instruments for Faculty Evaluation in ESL." In M.C. Pennington (Ed.), Evaluation of English Language Programs and Personnel. Washington, DC: National Association for Foreign Student Affairs.

Rickard, P., Stiles, R., Posey, V. and Equez, J. (May 1991). "The Essential Role of Assessment." Adult Learning, 2(7), 9-11.

Wade, R. (1985). "What Makes a Difference in Inservice Teacher Education? A Meta-Analysis of Research." Educational Leadership, 42(4), 48-54.

Wood, F., Thompson, S. and Russell, F. (1981). "Designing Effective Staff Development Programs." In B. Dillon-Peterson (Ed.), Staff Development/Organization Development. Alexandria, VA: Association for Supervision and Curriculum Development.
