Classroom Assessment Strategies Guide


HANDOUT

IN
ED 302
Assessment in Learning

Submitted by:
ANGELES, BRYAN JAMES B.

Bachelor of Physical Education 3rd Year

DR. CRAIG N. REFUGIO

Section W - WEDNESDAY 5:30 PM – 8:30 PM


DepEd Order No. 8, s. 2015

1. Purpose of Classroom Assessment

 Integral to curriculum implementation.


 Tracks learner progress and informs instruction.
 Provides feedback to learners, parents, and teachers.
 Promotes higher-order thinking and 21st-century skills.
2. Types of Assessment

 Formative Assessment

 Assessment for learning (teacher adjusts instruction).


 Assessment as learning (students reflect on progress).
 Informal, ongoing, not part of grades.
 Provides immediate feedback, identifies strengths/weaknesses.

 Summative Assessment

 Assessment of learning (end of unit/quarter).


 Measures achievement of content and performance standards.
 Results are recorded and used for grading/reporting.
3. What is Assessed

 Content Standards → essential knowledge and understanding (“What should learners know?”).
 Performance Standards → abilities and skills learners must demonstrate (“What can
learners do with what they know?”).
 Learning Competencies → knowledge, skills, and attitudes demonstrated in lessons.
4. Cognitive Process Dimensions (Anderson & Krathwohl, 2001)

 Remembering → recall, list, repeat.


 Understanding → interpret, summarize, explain.
 Applying → use, demonstrate, solve.
 Analyzing → differentiate, compare, organize.
 Evaluating → judge, critique, defend.
 Creating → design, develop, produce.
5. Formative Assessment Across Lesson Phases

 Before Lesson → check prior knowledge, misconceptions, barriers.


 During Lesson → monitor progress, adjust strategies.
 After Lesson → evaluate achievement of objectives, plan remediation/enrichment.
6. Summative Assessment Components

 Written Work (WW) → quizzes, tests, essays, reports.


 Performance Tasks (PT) → projects, presentations, demonstrations.
 Quarterly Assessment (QA) → end-of-quarter test (objective or performance-based).
7. Grading System

 Standards- and competency-based.


 Grades based on weighted raw scores of summative assessments.
 Passing grade: 60 (transmuted to 75 in report card).
 Lowest grade shown: 60.
Weights for Grades 1–10 (Table 4):

 Written Work: 30–40% (depending on subject).


 Performance Tasks: 40–60%.
 Quarterly Assessment: 20%.
Weights for Senior High School (Table 5):

 Core subjects: WW 25%, PT 50%, QA 25%.


 Academic track (other subjects): WW 25%, PT 45%, QA 30%.
 TVL/Sports/Arts track: WW 20–35%, PT 60%, QA 20–25%.
8. Computing Grades

1. Add raw scores for each component.


2. Convert raw scores → Percentage Score (PS).

o Formula: PS = (learner’s total raw score ÷ highest possible score) × 100
3. Convert PS → Weighted Score (WS).
o Formula: WS = PS × weight of component
4. Sum WS = Initial Grade.
5. Transmute Initial Grade → Quarterly Grade (QG).
6. Record QG in report card.
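The computation above can be sketched in Python. The weights are illustrative (a Grades 1–10 Science/Math-style split of WW 40%, PT 40%, QA 20% is assumed), and the function names and sample scores are hypothetical:

```python
# Illustrative computation of a quarterly Initial Grade.
# The weights below assume a Grades 1-10 Science/Math subject
# (WW 40%, PT 40%, QA 20%); adjust per the DepEd Order 8 tables.
WEIGHTS = {"WW": 0.40, "PT": 0.40, "QA": 0.20}

def percentage_score(raw, highest_possible):
    """Step 2: PS = (total raw score / highest possible score) x 100."""
    return raw / highest_possible * 100

def weighted_score(ps, weight):
    """Step 3: WS = PS x component weight."""
    return ps * weight

def initial_grade(scores):
    """Steps 1-4: sum the weighted scores of WW, PT, and QA."""
    total = 0.0
    for component, (raw, highest) in scores.items():
        ps = percentage_score(raw, highest)
        total += weighted_score(ps, WEIGHTS[component])
    return round(total, 2)

# Hypothetical raw totals out of the highest possible score per component.
scores = {"WW": (68, 80), "PT": (85, 100), "QA": (40, 50)}
print(initial_grade(scores))  # 84.0 (Initial Grade, before transmutation)
```

The resulting Initial Grade would then be converted to the Quarterly Grade using the transmutation table in DepEd Order No. 8, which is not reproduced here.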

1. Shift of Educational Focus: Content to Learning Outcomes


 Outcome-Based Education (OBE): This represents the core paradigm shift in modern
education.
o The Shift: It moves away from the traditional focus on "what the teacher taught"
(content/input) to "what the student can actually do" (learning outcomes/output).
 Student-Centered Approach: The priority is the student's ability to actively demonstrate
knowledge, skills, and attitudes, rather than just passively receiving information.
 Constructive Alignment: To be effective, all three components must align:
1. Teaching activities.
2. Learning tasks.
3. Assessment methods.
o Note: All of these must match the intended learning outcomes.

 "Design Down, Deliver Up": A curriculum planning strategy where you design starting
with the end goal (the outcomes) and deliver instruction to build students up toward that
goal.
2. Determining Progress Towards Attainment of Learning Outcomes
These four terms are often confused but have distinct meanings:
 Measurement: The process of quantifying attributes. This implies assigning a numerical
value to performance (e.g., giving a score of 85/100).
 Assessment: The broader process of gathering evidence of student performance. This
includes giving quizzes, observing presentations, or checking projects.
 Evaluation: The act of interpreting the data to make judgments or decisions.
o Example: Determining if a student passes or fails the course based on their
accumulated scores.
 Indicators: Specific, observable behaviors or evidence that demonstrate a learning
outcome has been successfully achieved.
3. DepEd Order No. 8, s. 2015
(Policy Guidelines on Classroom Assessment for the K to 12 Basic Education Program)
 Theoretical Basis: Based on Lev Vygotsky’s Zone of Proximal Development (ZPD),
suggesting that assessment should facilitate learning and bridge the gap between what a
learner can do alone vs. with help.
 Core Principle: Assessment must be holistic, tracking the learner's progress through
three specific components:
o Written Work (WW): Assesses knowledge via quizzes, long tests, and essays.

o Performance Tasks (PT): Assesses skills demonstration via group projects, multimedia presentations, and practical application. Note: This often holds the highest weight.
o Quarterly Assessment (QA): Periodical exams typically given at the end of the
quarter.
 Transmutation: Raw scores are not the final grade; they are converted into a
Transmuted Grade to determine the final rating on the report card.
 Minimum Pass: A grade of 75 is required to pass a subject.
4. Effective Oral and Multimedia Assessment
A. Oral Assessment
 Purpose: To assess verbal communication skills, critical thinking, and immediate
understanding of a topic.
 Examples: Interviews, oral recitations, debates, and viva voce (defense).
 Rubrics: These are essential for grading oral tasks objectively. They usually assess
criteria such as clarity, content accuracy, and confidence.

B. Multimedia Assessment
 Purpose: To assess creativity and the ability to synthesize and present information using
technology.
 Examples: e-Portfolios, video presentations, podcasts, and digital storytelling.
 Key Criterion: The technology used should enhance the message, not distract from it.
5. Types of Assessment (Based on Function)
 Assessment FOR Learning (Formative):
o Done during instruction.

o Goal: To identify learning gaps and adjust teaching strategies immediately (e.g.,
seatwork, drafts, Q&A).
o Note: usually recorded but not graded for the final mark.

 Assessment OF Learning (Summative):


o Done after instruction.

o Goal: To certify mastery and assign grades (e.g., final exams, periodical tests).

 Assessment AS Learning (Self-Assessment):


o Students reflect on their own work to monitor their own progress.

o Examples: Self-reflection logs, peer reviews, rubrics checklists.

 Diagnostic Assessment:
o Done before instruction.

o Goal: To check prior knowledge and identify misconceptions (e.g., pre-tests).

6. Types of Exams (Item Analysis)


A. Selected-Response Types (Objective)
 Multiple Choice: Consists of a stem (the question) and options (distractors + one correct
key).
o Best Use: Testing higher order thinking if the items are well-designed.

 Matching Type: Consists of two columns (Premise and Response).


o Best Use: Testing associations (dates, terms, definitions).

o Rule: Ideally, the response column should have more items than the premise
column to prevent students from guessing by elimination.
 True or False: Binary choice.
o Weakness: Prone to 50% guessing.

o Rule: Avoid double negatives and vague qualifiers like "sometimes" or "always."

B. Supply Types (Subjective/Objective)


 Fill in the Blanks: Requires exact recall of facts.
o Rule: The blank should be near the end of the sentence.

o Rule: Avoid "mutilated text" (having too many blanks in one sentence), which
makes context impossible to understand.
7. Measures of Central Tendency and Dispersion
A. Measures of Central Tendency (Where the data centers)
 Mean ($\bar{x}$): The arithmetic average.
o Weakness: Very sensitive to outliers (extreme high or low scores).

 Median ($\tilde{x}$): The middle score when data is arranged in order.


o Best Use: When data is skewed (e.g., income distribution or class grades with a
few failing students).
 Mode ($\hat{x}$): The most frequent score.
o Types: A distribution can be unimodal (one peak), bimodal (two peaks), or
multimodal.
B. Measures of Dispersion/Variability (How spread out the data is)
 Range: The simplest measure; the difference between the highest score and the lowest
score.
 Variance ($s^2$): The average of the squared differences from the Mean.
 Standard Deviation (SD or $\sigma$): The square root of the variance. It tells you how
far scores typically are from the average.
o Small SD: Scores are clustered near the mean (Homogeneous group/performance
is similar).
o Large SD: Scores are spread far apart (Heterogeneous group/performance varies
widely).
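These measures can be illustrated with Python's standard statistics module. The score list is hypothetical; pvariance and pstdev are used because they divide by N, matching the "average of the squared differences" definition above:

```python
import statistics

scores = [60, 70, 75, 75, 80, 85, 90, 95]  # hypothetical class scores

mean = statistics.mean(scores)            # arithmetic average
median = statistics.median(scores)        # middle score of the sorted data
mode = statistics.mode(scores)            # most frequent score
score_range = max(scores) - min(scores)   # highest minus lowest

# Population variance/SD: average of the squared deviations from the mean.
variance = statistics.pvariance(scores)
sd = statistics.pstdev(scores)

print(mean, median, mode, score_range)  # 78.75 77.5 75 35
```

A small sd here would indicate a homogeneous group; comparing the sd of two sections computed this way shows which class performed more consistently.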

8. Grading Systems
General Grading Approaches
 Norm-Referenced Grading: Compares a student against other students.
o Example: "Top 10% of the class" or curved grading.

 Criterion-Referenced Grading: Compares a student against a fixed standard or criteria.


o Example: "Must get 75% to pass." DepEd uses this system.

DepEd Grading System (K-12 Weight Distribution)


The weights of assessment components change depending on the subject area. Performance
Tasks generally hold the highest weight because K-12 emphasizes skill application.

CHAPTER 5 (DISTINGUISHING AND CONSTRUCTING VARIOUS PAPER-AND-PENCIL TESTS)

 Assessment is not just about grading; it is about measuring learning accurately and fairly.
 Important Steps in Test Planning

1. Identify test objectives / learning outcomes


2. Decide on the type of test
3. Prepare a Table of Specifications (TOS)
4. Construct draft test items
5. Conduct try-out and validation

 A good test starts with clear objectives, not with writing questions immediately.
 Bloom’s Taxonomy Levels Covered in Tests
A comprehensive test should include various cognitive levels:

1. Knowledge / Remembering
o Identify facts, terms, or definitions
2. Comprehension / Understanding
o Explain or determine meaning
3. Application / Applying
o Use rules or concepts in new situations
4. Analysis / Analyzing
o Break down information into parts and relationships
5. Evaluation / Evaluating
o Judge correctness or validity
6. Synthesis / Creating
o Formulate rules, ideas, or solutions

 A test that measures only memorization is incomplete and unfair.


 What is a TOS?
1) A test blueprint or map
2) Guides teachers in constructing balanced tests
3) Ensures alignment between:
 Objectives
 Content
 Test items
 Purpose of TOS
o Balances lower-order and higher-order thinking skills
o Prevents overemphasis on one level (e.g., pure recall)

 Components of a Simple TOS


o Level of objective
o Statement of objective
o Item numbers
o Number and percentage of items
 TOS is essential for content validity.
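As a rough sketch of how a TOS balances cognitive levels, the item allocation can be computed from target percentages. The level percentages and the 40-item total below are hypothetical, not prescribed values:

```python
# Hypothetical sketch: allocate a 40-item test across cognitive levels
# according to target percentages, as a simple TOS would.

def allocate_items(total_items, level_percentages):
    """Return the number of items per level; percentages must sum to 100."""
    assert sum(level_percentages.values()) == 100
    return {level: round(total_items * pct / 100)
            for level, pct in level_percentages.items()}

tos = allocate_items(40, {
    "Remembering": 30,   # lower-order skills get more but not all items
    "Understanding": 25,
    "Applying": 20,
    "Analyzing": 15,
    "Evaluating": 5,     # higher-order skills are still represented
    "Creating": 5,
})
print(tos)
```

This guards against the overemphasis on pure recall that the TOS is meant to prevent.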
 Types of Paper-and-Pencil Test
1) Major Types
o Selected-Response Tests
o True-False
o Multiple Choice
o Matching Type
2) Supply-Type / Constructed Response
o Completion / Fill-in-the-blank
3) Essay Tests
o Restricted
o Non-restricted / Extended
 Each test type measures different learning outcomes.
 Selected-Response Tests
1) True–False Tests
o Characteristics
a. Two options only
b. 50% chance of guessing
Guidelines
 Avoid hints in the question
 Avoid words like always, never, all
 Keep statements short and clear
 Avoid trick questions
 Avoid quoting textbooks verbatim
 Balance true and false answers
 Avoid obvious patterns
 Best for lower order thinking skills.
2) Multiple Choice Tests
o Components
 Stem – question or incomplete statement
 Options – choices
 One correct answer
 Others are distractors

Guidelines for Good MCQs (Very Important)


o Use familiar words
o Avoid vague modifiers (usually, often)
o Avoid negatives and double negatives
o Keep stems short and clear
o Distractors must be plausible
o Ensure grammatical consistency
o Avoid giving clues through length or wording
o Avoid overlapping or synonymous options
o Avoid “All of the above” and misuse of “None of the above”
o Avoid unnecessary information
o Avoid revealing answers to other items
o A good multiple-choice item tests understanding, not reading tricks.
3) Matching Type Tests
o Description
 Modified multiple-choice
 Items on the left matched with options on the right
o Guidelines
 Match homogeneous items only
 Place longer statements on the left
 More options than stems
 Arrange options alphabetically or logically
 Give clear directions
 Indicate if answers may be repeated (imperfect matching)
 Mostly measures knowledge level only.
 Supply-Type / Constructed Response Tests
1. Completion / Fill-in-the-Blank
o Characteristics
 One correct answer only
 Measures recall (mostly)
o Guidelines
 Avoid over-mutilated sentences
 Avoid open-ended items
 Ask significant concepts, not trivial facts
 Make blanks uniform in length
 Provide enough context
 Strength:
Reduces guessing compared to multiple choice.
 Weakness:
Limited for higher order thinking unless carefully designed.
 Essay Tests
o General Characteristics
 Non-objective
 Measures higher order thinking skills
 Difficult to score consistently
o Skills Measured by Essays
 Comparing
 Cause-and-effect
 Justifying
 Summarizing
 Generalizing
 Inferring
 Classifying
 Applying
 Analyzing
 Evaluating
 Creating
 Essays assess depth of understanding, not memorization.
 Types of Essays
1. Restricted Essay
 Short, focused responses
 Clear limits on content and length
 Easier to score
2. Non-Restricted / Extended Essay
 Longer, complex responses
 Measures organization, creativity, and reasoning
 Requires rubrics for fair scoring
 Use rubrics to improve reliability in essay scoring.
 Valid tests are aligned with objectives
 TOS ensures fairness and balance
 No single test type fits all objectives
 Avoid trick questions—test learning, not confusion
 Ethical assessment reflects true student learning
 Good assessment is planned, objective-based, balanced, and ethical. It measures not just
what students remember, but what they understand, apply, and create.

Chapter 7 (MEASURES OF CENTRAL TENDENCY AND DISPERSION (VARIABILITY))

 Measures of Central Tendency - Measures of Central Tendency describe the typical or central value of a dataset.
1) Mean – The average of the data. It is computed by adding all values and dividing
the sum by the number of values.
2) Median – The middle value when the data are arranged from lowest to highest.
3) Mode – The value that appears most frequently in the dataset.

 Mean is affected by extreme scores, while median and mode are more resistant to
outliers.
 Types of Distributions - distributions describe how scores are spread across a dataset.
1) Score Distribution – Scores tend to cluster around the mean.
2) Standard Normal Distribution – Mean, median, and mode are equal.
3) Positively Skewed Distribution – Mean is greater than the median and mode (tail
extends to the right).
4) Negatively Skewed Distribution – Mean is less than the median and mode (tail
extends to the left).

 Skewness affects how the mean represents the data.
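The skew relationships above can be seen in a small example, using Python's statistics module with hypothetical score sets:

```python
import statistics

# Roughly symmetric scores: mean and median coincide.
symmetric = [75, 80, 85, 90, 95]

# One very high score drags the mean above the median:
# the tail extends to the right (positively skewed).
skewed = [70, 75, 80, 85, 140]

print(statistics.mean(symmetric), statistics.median(symmetric))  # 85 85
print(statistics.mean(skewed), statistics.median(skewed))        # 90 80
```

With the outlier present, the mean (90) overstates the typical score, while the median (80) still represents it, which is why the median is preferred for skewed data.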


 Outcome-Based Teaching - Outcome-based teaching emphasizes alignment between
instruction and assessment.
o Aligned content and assessment – Teachers reteach concepts until students
achieve mastery.
o Misaligned content and assessment – Students are poorly evaluated, leading to
low achievement and inaccurate results.

 What is taught must match what is tested.

 Measures of Variability - Measures of variability show how spread out or consistent the scores are.
o Variability – The degree to which scores differ from one another.
o Range – The difference between the highest and lowest scores.
o Variance – The average of the squared differences from the mean.
o Standard Deviation (SD) – The square root of the variance; shows how far
scores deviate from the mean.

 Steps in Computing Standard Deviation


o Compute the mean of the dataset.
o Subtract the mean from each score and square the difference.
o Compute the average of the squared differences (variance).
o Take the square root of the variance to get the standard deviation.
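The four steps above can be implemented directly. The score list is hypothetical, and the population formula (dividing by N) is used to match the definition of variance given here:

```python
import math

def standard_deviation(scores):
    # Step 1: compute the mean of the dataset.
    mean = sum(scores) / len(scores)
    # Step 2: subtract the mean from each score and square the difference.
    squared_diffs = [(x - mean) ** 2 for x in scores]
    # Step 3: compute the average of the squared differences (the variance).
    variance = sum(squared_diffs) / len(scores)
    # Step 4: take the square root of the variance.
    return math.sqrt(variance)

print(standard_deviation([2, 4, 4, 4, 5, 5, 7, 9]))  # 2.0
```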

 Small SD – Scores are consistent and close to the mean.


 Large SD – Scores are spread out and less consistent.
 SD is the most commonly used measure of variability.

Chapter 8 (GRADING SYSTEMS AND THE GRADING SYSTEM OF DEPED)

 Types of Grading Systems - Grading systems describe how student performance is evaluated.
o Norm-Referenced Grading – A student’s grade is compared with the
performance of other students.
o Criterion-Referenced Grading – Grades are based on fixed standards or
learning criteria.
 Norm-referenced compares students; criterion-referenced compares performance to
standards.

 DepEd Order No. 8, s. 2015 (K to 12 Grading System)


o Grading in the K to 12 Basic Education Program is based on three components:

1. Written Work (WW) – Quizzes, long tests, essays, and written outputs.
2. Performance Tasks (PT) – Projects, presentations, demonstrations, and
practical activities.
3. Quarterly Assessment (QA) – End-of-quarter examination (objective or
performance-based).

 These components reflect both knowledge and skills.

 Promotion Policy
o Promoted – A student with a final grade of 75 or higher in all learning areas.
o Not Yet Promoted – A student with at least one subject below 75; remedial
classes are required before promotion.

 Alternative Grading System


o Pass–Fail System – Used by some colleges and universities.
o Students receive either Pass or Fail instead of numerical grades.

 Focuses on mastery rather than ranking.

 Standardized Tests and Test Standardization


o Standardized Test – Administered and scored uniformly using a fixed scoring
system.
o Test Standardization – The process of ensuring a test is valid, reliable, and fair
for all examinees.

 Standardization allows fair comparison of results.

 Average System of Grading


o The final grade is computed by averaging all grades from each grading period.
o Consistent performance across quarters leads to a higher final grade.
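A minimal sketch of the averaging and promotion rules above, with hypothetical quarterly grades and assuming simple rounding to a whole-number final grade:

```python
def final_grade(quarterly_grades):
    """Average the quarterly grades, rounded to the nearest whole number."""
    return round(sum(quarterly_grades) / len(quarterly_grades))

def promoted(final_grades_by_subject):
    """Promoted only if the final grade in every learning area is 75 or higher."""
    return all(grade >= 75 for grade in final_grades_by_subject.values())

print(final_grade([82, 85, 84, 89]))  # 85
# One subject below 75 means "Not Yet Promoted" (remedial classes required).
print(promoted({"Math": 85, "Science": 78, "English": 74}))  # False
```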
