E Planning, Resource and Assessment

This document discusses various types of assessment including formative, summative, diagnostic, and portfolio assessment. It describes designing valid assessment criteria and test items like direct, indirect, multiple choice, and gap fill questions. The document also covers preparing students for exams, marking tests reliably using assessment scales, and involving students in the assessment process.

E Planning, resource and assessment

88. Assessment and testing


Assessing students
 how well they have done
 how well they are doing, or how well they perform in standard (often national) exams.
Types of assessment
 FORMATIVE ASSESSMENT happens when we test students so that we can help
them to do better next time.
 CORRECTION = kind of mini formative assessment.
 SUMMATIVE ASSESSMENT happens when we want to see how well students have
done - testing their knowledge at the end of something (a semester or a year) or in
some PUBLIC EXAM.
 DIAGNOSTIC TESTS when we want to know how much they know so that we can
decide what to do next -like a doctor diagnosing a patient's symptoms: PLACEMENT
TEST.
 PROGRESS TESTS to see how students are getting on. Progress tests often happen at
the end of a week, a month or when a unit of work is finished.
 ACHIEVEMENT TESTS at the end of something, such as a semester or a year. We
want to know what they have learnt in the past few months - what they have
achieved.
 PROFICIENCY TESTS measure a student's language ability at a particular point in
time.
 PORTFOLIO ASSESSMENT is based on work that the students have done during a
course. They keep examples of their work and this forms the basis of our assessment.
 CONTINUOUS ASSESSMENT -where we keep a record of students' work, giving
marks for each piece of homework or mini-test, and we use this to decide a final
grade.
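As a concrete illustration of continuous assessment, a final grade might be computed from the recorded marks as in this minimal Python sketch. The function name and the 40/60 weighting between homework and mini-tests are hypothetical choices for illustration, not recommendations.

```python
# Minimal sketch: deriving a continuous-assessment grade from recorded marks.
# The 40/60 homework/test weighting is an illustrative choice.

def final_grade(homework_marks, test_marks, homework_weight=0.4):
    """Average each strand of marks (0-100), then combine with the weighting."""
    hw = sum(homework_marks) / len(homework_marks)
    tests = sum(test_marks) / len(test_marks)
    return homework_weight * hw + (1 - homework_weight) * tests

print(round(final_grade([70, 80, 90], [60, 80]), 2))  # 74.0
```

Keeping the per-piece marks (rather than only the running average) also lets us show students how their grade was reached.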
Designing and making tests
 we have to decide on our ASSESSMENT CRITERIA (we need to know exactly what it is we are testing).
 Tests need to have VALIDITY. This means that if we tell the students that we are
going to assess their writing, we shouldn't make it dependent on a lot of reading
because they were not expecting a reading test.
 achievement tests: we need to test things that the students have been learning
(grammar, vocabulary, etc.), and we have to be sure that we use the same kinds of test
items and tasks as the ones they have been using in their lessons.
 list exactly what it is we want to measure and how to do it. We can use SENTENCE
REORDERING items if we want to test SYNTAX (the order that words go in); we can
get the students to put pictures in order to test comprehension of a story.
 We have to decide on the balance of items in a test. Do we want all the questions to be
'discrete point' items (that is, only testing one thing - such as a verb tense - at a time) or
should we include more INTEGRATIVE items?
 write RUBRICS (instructions) that are easy for the students/candidates to understand.
 it is a very good idea to give them to colleagues (or students who are not going to do
the tests later) to try out (PILOT) first.

89. Test items and how to teach them


Types of test item
 A DIRECT TEST ITEM asks the candidate to perform the skill that is being tested
(for example, make an ORAL PRESENTATION).
 An INDIRECT TEST ITEM, on the other hand, examines the candidate's knowledge
of individual items of language.
Direct test items
 In tests of speaking, students can be asked to do such things as give an oral
presentation, do an INFORMATION-GAP ACTIVITY or take part in an
INTERVIEW.
 In tests of writing, students can be asked to do such things as write a letter or a report,
or compose a newspaper report or a BLOG entry.
 In tests of reading, students can be asked to transfer information from a written TEXT
to some kind of VISUAL ORGANISER (a pie chart, a graph, etc.) or match texts with
pictures and headlines.
 In tests of listening, students can be asked to transfer the information they hear to
some visual organiser (a pie chart, graph, etc.) or they can put pictures (or events) in
the right sequence, or choose between different written summaries of what they hear.
Indirect test items
 GAP FILLS, students have to write a word or words in BLANKS.
 In CLOZE texts, every sixth (or seventh, eighth, etc.) word is a blank. The students
have to understand the whole text in order to fill in the blanks.
 In MULTIPLE-CHOICE items, the students have to choose the correct (or perhaps the
best) answer from three or four alternatives. In TRUE/FALSE items we can also add a
third option, such as 'no information given in the text'.
 For JUMBLED SENTENCES, the students have to put sentences in the correct order
to make a coherent text.
 In SENTENCE-REORDERING tasks, the students have to put words in order to make
correct sentences.
 SENTENCE TRANSFORMATION exercises ask students to rewrite sentences in a
slightly different form.
 PROOFREADING exercises ask students to identify the mistakes in certain sentences.
For example: Underline the mistake in the following sentence.
 We can also use MATCHING tasks, and we can give the students DICTATIONS, which
test a range of competencies, such as listening, spelling, grammar, collocations, etc.
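Several of these indirect item types are mechanical enough to generate automatically. As one illustration, here is a minimal Python sketch of a CLOZE generator (the function name is hypothetical): it blanks every nth word and keeps the answers for marking.

```python
import re

def make_cloze(text, n=6):
    """Blank every nth word; return the gapped text and the answer list.
    n=6 matches the 'every sixth word' convention described above."""
    words = text.split()
    answers = []
    for i in range(n - 1, len(words), n):
        # Strip trailing punctuation so 'cat,' is blanked as 'cat' plus ','.
        m = re.match(r"(\w+)(\W*)$", words[i])
        if m:
            answers.append(m.group(1))
            words[i] = "_" * 8 + m.group(2)
    return " ".join(words), answers

gapped, answers = make_cloze("one two three four five six seven", 6)
print(gapped)   # one two three four five ________ seven
print(answers)  # ['six']
```

A real cloze for class use would still need checking by hand, since some blanked words may have several defensible answers.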
How to prepare students for tests
 Students are often highly MOTIVATED in exam classes because they have a clear
goal to aim for.
 We will give the students experience with the indirect test items that they are likely to
meet.
 strategies for dealing with MULTIPLE-CHOICE QUESTIONS.
 We will discuss with our students general exam skills, such as how to prepare, how to
use PROCESS WRITING techniques when writing and how to get exam timing right.
 We will let the students do MOCK EXAMS in real time.
 We have to be careful of exam WASHBACK (where teachers only teach the kind of
things that are in the test).
 We can get the students to ROLEPLAY oral interviews (one student plays the
examiner).
 Students can try to write their own exam items and give them to their
classmates.

90. Marking and grading tests


After the test
 Writing and giving tests is the first part of a complex process.
The trouble with marking and how to deal with it
 A good test has scorer RELIABILITY - in other words, whoever grades the test, the
student should get the same result. But this is not easy to achieve.
 It is easy to be SUBJECTIVE when we grade tests and exams, because that's what we
do in real life.
 A test is OBJECTIVE - that is, a candidate gets the same grade whoever is marking the
test. An objective test is RELIABLE, and we can be confident that (provided we have
designed it well) the test gives a clear picture of a candidate's real ability and achievement.
 For example, when candidates answer well-designed MULTIPLE-CHOICE
QUESTIONS, only one answer is correct, so we can have confidence that the grading
will be accurate.
 ASSESSMENT SCALES help here. Whatever kind of marking we are using, the grading
will always be more reliable if more than one person is involved.
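The point about objective marking can be shown with a short sketch: given a fixed answer key, a multiple-choice test produces the same grade no matter who (or what) marks it. The function name below is hypothetical.

```python
# Sketch of objective multiple-choice marking: with a fixed answer key,
# every scorer - human or computer - produces the same result.

def mark_mcq(answers, key):
    """Return the number of correct responses; unanswered items score zero."""
    return sum(1 for a, k in zip(answers, key) if a == k)

key = ["B", "A", "C", "C", "D"]
student = ["B", "A", "A", "C", None]  # None marks an unanswered item
print(mark_mcq(student, key))  # 3
```

Direct test items, by contrast, cannot be reduced to an answer key, which is exactly why they need assessment scales and more than one marker.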
Using assessment scales
 Once we have decided on our ASSESSMENT CRITERIA (exactly what skills and
sub-skills we want to assess), we can design an assessment scale (or scales).
Example 1: a basic five-level assessment scale for writing
1 Very difficult to understand, with poor spelling and many vocabulary and grammar
mistakes.
2 Quite a lot of grammar and vocabulary mistakes, but the meaning is mostly clear.
3 The meaning is clear although there are some grammar and vocabulary mistakes.
4 The meaning is clear and there are few mistakes of grammar and vocabulary.
5 The meaning is clear and the writing is almost mistake-free.
Example 2: an assessment scale for giving oral presentations (with a total of 25 possible
marks: five criteria, each scored from 5 down to 1)
Content
5 Enough informative, clear information to engage listeners
4 Quite interesting at times
3 Occasionally interesting, but unlikely to engage listeners
2 Listeners will find it difficult to find anything interesting here
Organisation
5 Excellent structure, clear and easy to follow
4 Mostly clear and easy to follow
3 Easy to follow
2 Rather poorly structured
1 Extremely difficult to follow
Pronunciation
5 Almost faultless pronunciation with no problems for the listener
4 Very good pronunciation with only occasional difficulties for the listener
3 Clearly intelligible, but some pronunciation problems make listening a little difficult
2 Quite a few pronunciation problems make this speaker difficult to understand
1 Very poor pronunciation, and very difficult to understand
Grammar
5 Use of varied grammar with no mistakes
4 Use of varied grammar with a few errors
3 Good grammar use with some mistakes
2 Often good grammar use, but quite a few mistakes
1 Many and varied grammar mistakes
Vocabulary
5 A wide use of appropriate vocabulary with no problems
4 A wide use of vocabulary with occasional problems
3 Good vocabulary use with some problems
2 A lot of vocabulary problems make it difficult to understand
1 So many vocabulary mistakes that it is very difficult to understand
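Scales like the 25-mark presentation scale in Example 2 lend themselves to simple tooling. As a minimal sketch (the function and criterion names are illustrative), totalling one 1-5 score per criterion gives the 25-mark maximum:

```python
# Sketch: totalling an oral-presentation scale where each of five criteria
# is scored from 1 to 5, for a maximum of 25 marks.

CRITERIA = ("content", "organisation", "pronunciation", "grammar", "vocabulary")

def total_marks(scores):
    """Sum one 1-5 score per criterion, validating the input first."""
    assert set(scores) == set(CRITERIA), "score every criterion exactly once"
    assert all(1 <= s <= 5 for s in scores.values()), "scores must be 1-5"
    return sum(scores.values())

print(total_marks({"content": 4, "organisation": 5, "pronunciation": 3,
                   "grammar": 4, "vocabulary": 4}))  # 20
```

Recording the per-criterion scores, not just the total, gives students formative feedback on exactly where they lost marks.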

Involving the students


 we can ask the students what grading system they would like us to use.
 Would they like A, B, C, D grades or would they prefer a simple pass, fail, distinction
marking system?
 Students can mark their own tests if we give them clear criteria for doing so.
 CAN-DO STATEMENTS, such as I can write a simple email about a meeting or I can
make a two-minute oral presentation about a scientific topic, let students assess their own
abilities. They can be included at the end of a week's or month's work, for example, or at
the end of a COURSEBOOK.
F. Teaching young learners
Why we test young learners
 because we want to see what they have achieved or how proficient they are (what they
know) = SUMMATIVE ASSESSMENT.
 because we want to see how well they are doing, in order to help them do better =
FORMATIVE ASSESSMENT.
 CONTINUOUS ASSESSMENT, (including PORTFOLIO ASSESSMENT ) where
we are continually evaluating the students' work and helping them to improve over the
semester or year.
Continuous assessment
 We can look at our students' work as evidence (imagining that we are detectives).
 We can then see what the evidence tells us so that we can take action to help our
students improve.
Assessing young learners
 when we look at a piece of student work, we can make a list of the strengths it has and
then say what the student can do to improve and build on those strengths.
 We can make a LEARNER LANGUAGE PROFILE (LLP) for each student. The
profile might consist of a number of CAN-DO STATEMENTS .
 Most teachers don't have time to fill in learner language profiles all the time, so it is
helpful to divide the class up into GROUPS.
Students assess themselves
 At the end of a SEQUENCE OF LESSONS (for example, two weeks' work or a unit in
a coursebook), we can ask them to say yes or no to statements such as I can give my
opinion about things with the expression 'I think that ...' or I can use more than two
adjectives (in the right order) to describe objects and people.
 students can do tasks similar to the ones they have been doing. When they have done
these tasks, they themselves say if they have been successful or not.
Types of test items
 we will write test items which are similar to things which the students have been doing
in their lessons.
 In listening tests for young learners who don't yet write well, we can ask the students
to point to the objects which we name or describe.
 In speaking tests for older children (perhaps eight years old or more), we can ask them
to FIND THE DIFFERENCES between two similar pictures when talking to an
examiner - or two children can do the activity and an examiner can listen and grade
their speaking to each other.
 INFORMATION-GAP ACTIVITIES or tell a story based on pictures that we give
them.
 In reading and writing tests for younger children who have just started to read, we can
ask them to put ticks and crosses (or yes or no) to say if statements are true (for
example, This is a crocodile).
 In reading and writing tests for older children (perhaps eight years or more), we can
ask them to do FILL-IN tasks by choosing words from a box which have
accompanying illustrations.
 We can ask them if statements about a picture (for example, There are three monkeys
on top of the car) are true or false.
EXERCISES:
88.
1 For questions 1–7, choose the best option (A, B or C) to complete each statement.
1 If we only want to find out how well our students have done we use
__SUMMATIVE________ assessment. A formative B informal C summative
2 If we want to evaluate people so we can decide how to help them in the future we use
___FORMATIVE_______ assessment. A formative B informal C summative
3 An entrance test (to decide what level class a student should go into) is
___A DIAGNOSTIC_______ test. A an achievement B a diagnostic C a proficiency
4 When we want to see if students have learnt what we have been teaching them in the last
two weeks or a month etc. we give them ____AN ACHIEVEMENT______ test. A an
achievement B a diagnostic C a proficiency
5 When students take an exam to see if they are at a particular level (often in a public exam)
we call it ___A PROFICIENCY_______ test. A an achievement B a diagnostic C a proficiency
6 When we get students to collect examples of their work over a period of time and use that
for evaluation we call it ____PORTFOLIO______ assessment. A formative B portfolio C
placement
7 If a test item tests things that the students are not supposed to know because they are not on
the syllabus they have been studying, we say that item is not VALID… A reliable. B
continuous. C valid.
Reflect 3
Some students seem to do very well in ‘all or nothing’ tests, but others feel that they don’t do
their best. What about you? Would you prefer to be graded on an ‘all or nothing’ final test, or
using continuous assessment? Why?
Some people just seem to do well in final tests; others don't. Every person is different. In my
case I would prefer an 'all or nothing' final test. I am a lazy person. During classes it can be
stressful to be examined through continuous assessment, so I prefer to focus all my attention
at the end. Continuous assessment means learning during classes and at home, which I find
less appealing. I want to be able to process the information and remember it at the end.
89.
1 For questions 1–9, match the descriptions with what they are describing A–J. There is one
extra option that you do not need to use. A Cloze test B Gap fill C Indirect test items D
Jumbled sentences E Mock exam F Multiple-choice items G Proofreading H Sentence
transformation I True/false items J Washback effect

1 Each sentence has a blank where students have to write the correct word. B
2 In the text every sixth or seventh word is a blank which students have to fill in. A
3 Most teachers decide how and what to teach on the basis of the content of the tests their
students are going to take. J
4 Students show that they know language by doing such things as completing sentences or
putting words in order – rather than using the language in a proper task. C
5 Students have to choose between options A, B, C (and D). F
6 Students have to put sentences in order to make a coherent sequence. D
7 Students have to re-write a sentence using a word that is given to them. Their sentence has
to mean the same as the original one. H
8 Students have to read sentences and find where the mistakes are. G
9 Students take an exam which is just like the one they are going to take in order to get some
practice. E
Reflect
3 Compare direct and indirect testing by answering the following questions.
1 Which are easier to write? Direct or indirect test items, in your opinion? 2 What do indirect
test items tell you about a student’s ability to use English?
Some indirect test items are quite difficult to write. You have to make sure only one answer is
possible, for example, or try to be sure that the alternatives in a multiple-choice item are
worth putting there. On the other hand, writing a good direct item – which really tests a
student's ability to DO something in English in a valid way – is also challenging. It is often
easier to grade indirect test items because (usually) only one answer is possible, whereas
when we grade direct test items we have to take many things into account at the same time.
Indirect test items can tell us about our students’ knowledge of grammar and vocabulary. The
big question is whether they tell us about the students’ ability to USE language. Most people
believe that direct test items are better for that.
90.
1 For questions 1–15, complete the text with words and phrases from the box. You will have
to use one word twice.
assessment criteria assessment scales can-do statements computers grade indirect
objective overlay peer evaluation reliability reliable scorer training scorers
subjectively

It is really important to mark tests properly. Scorer (1) ___RELIABILITY____________ is


one of the key issues here. It is difficult for most people to be (2)
_____OBJECTIVE__________ in their judgments – most of us tend to mark (3)
_______SUBJECTIVELY________ unless we have some training or unless we are given
proper (4) ____ASSESSMENT CRITERIA___________. This often takes the form of (5)
_______ASSESSMENT SCALES________, where there is a description of what the students
should be able to do for each task. That way, we know which mark to give. A good test has to
be (6) _______RELIABLE________. This means that if different people grade the same
test, they will all give it the same (7) _____GRADE__________. So everything we do – using
assessment scales, giving (8) ______SCORER TRAINING_________ etc. – is because we
want to be sure that the tests are (9) ____RELIABLE___________ in this way. Of course it is
easier to design reliable material for (10) ____INDIRECT___________ test items where only
one piece of language is tested at a time. With multiple-choice items, for example, we can use
an (11) _____OVERLAY__________, which you put over the questions so that you can see
at a glance if the student’s answers are correct. Multiple-choice items are now frequently
marked by (12) ______COMPUTERS_________. Not all tests have to be marked by teachers
or (13) _______SCORERS________. In (14) ____PEER EVALUATION___________
students grade each other’s tests. Students can also see how good their English is by using
(15) __________CAN-DO STATEMENTS_____ to see what they are capable of.
Reflect
3 Even when teachers use assessment scales, scorer reliability is not guaranteed. What can be
done to make marking and grading more reliable when more than one person is involved in
grading a test?
Even with assessment scales, different scorers interpret the descriptors differently. Scorer
training helps: markers can grade the same sample scripts and then discuss their marks until
they agree on how to apply the criteria. Having more than one person grade each test also
helps – the scorers can compare their marks, average them, or call in a third marker when
they disagree widely. Clear assessment criteria, well-designed assessment scales and, where
possible, objective items (marked with an overlay or by computer) all make the grading more
reliable.
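Double-marking, one common way of involving more than one scorer, can be sketched in a few lines of Python. The function name and the disagreement threshold of 3 marks are illustrative choices, not fixed rules.

```python
# Sketch of double-marking: two scorers grade each script; we average the two
# marks, and flag scripts where the scorers disagree widely so that a third
# marker can take a look. The threshold is an illustrative choice.

def reconcile(mark_a, mark_b, threshold=3):
    """Return (final_mark, needs_third_opinion)."""
    return (mark_a + mark_b) / 2, abs(mark_a - mark_b) > threshold

print(reconcile(18, 20))  # (19.0, False)
print(reconcile(12, 20))  # (16.0, True)
```

Tracking how often the flag is raised is itself useful: frequent wide disagreements suggest the assessment scale descriptors need clarifying or the scorers need more training.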
101.
1 For questions 1–9, match the descriptions with what they are describing A–J. There is one
extra option that you do not need to use.
A Can-do statements B Continuous assessment C Fill in D Formative assessment E
Information-gap activities F Learner language profile G Portfolio assessment H Proficiency
test I ‘Sudden death’ test J Summative assessment

1 These help students (and teachers) to describe their ability to use language. A
2 This is a kind of test item where students have to write words where there is a blank in a
sentence or paragraph. C
3 This is the kind of evaluation which tells us how good a student is, and whether they match
a certain pre-decided level. H
4 This is the name for a kind of test where 100% of a student’s final grade depends on one
exam. I
5 This is the kind of testing that takes place bit by bit over a semester or a year.B
6 This is the kind of testing we do when we want to see how the students are getting on so we
can help them to do better. D
7 This is the kind of testing we do when we want to see what the students have achieved. J
8 This is the kind of testing where we look at examples of work that the students have been
collecting over a semester or a year. G
9 This is where we describe a student’s ability in detail. F
Reflect
3 Some people think that young learners are tested too much in state school systems. Other
people believe that testing is important in order to know how well the education system
works. What is your opinion?
I think that we should give children tests, because tests are important for helping them gain
knowledge. But this should be done within limits. Too much testing can be discouraging, and
we need to motivate children rather than make them want to burn the school down. We can
also transform the whole learning process into a game. This is not always possible, but where
it is we can take full advantage of it.
