Unit 6 - VALIDITY Group 8

The document discusses different types of validity in assessment including face validity, content validity, criterion-related validity which has concurrent validity and predictive validity, and construct validity. It also discusses important considerations about validity, factors affecting validity of test items, and reasons that can reduce validity.


Pamintuan, Lovely Joy

Chapter 6
ESTABLISHING VALIDITY OF A TEST

According to Airasian (2000), validity in assessment means that the information obtained
from a test or assessment allows a teacher to make accurate decisions about a student's
learning. It is about ensuring that the test measures what it is intended to measure.

Types of Validity

1. Face Validity
is a superficial assessment of whether a measurement or assessment "looks like" it
measures what it claims to measure. It is based on subjective judgment.
2. Content Validity
is about ensuring that a test or questionnaire includes the right questions to
measure what it intends to measure.
3. Criterion-related Validity
also known as criterion validity, is a type of validity that assesses how well a
measure or test corresponds to a specific criterion or outcome. It evaluates the
extent to which scores on a test can accurately predict or correlate with
performance on a separate criterion measure.

2 Types of Criterion-related Validity
• Concurrent Validity
is a type of validity where the criterion (the thing you want to measure) and
the predictor (the test or measure you're using) data are collected at the same
time. It is useful when you want to assess a student's current status or diagnose
their condition.
• Predictive Validity
is a type of validation that tells us how well a test or measure can predict a
student's future performance or outcome based on their current test results. It is
used when we want to see if a student's scores on a test can help us accurately
estimate how they will perform or what their status will be at a later time.

In both concurrent and predictive validity, the goal is to establish a strong
relationship or correlation between the test scores and the criterion measure. A
high correlation indicates good criterion-related validity, suggesting that the test is
effective in measuring or predicting the intended outcome or criterion of interest.
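The correlation between test scores and the criterion measure is commonly computed as a Pearson correlation coefficient. As a sketch only (the data below are made up for illustration, and the chapter itself does not prescribe any implementation), this can be done as follows:

```python
# Illustrative sketch: estimating criterion-related validity as the
# Pearson correlation between test scores (predictor) and a separate
# criterion measure. All data here are hypothetical.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: entrance-test scores and later first-term grades
# for eight students (a predictive-validity setup).
test_scores = [62, 75, 80, 55, 90, 70, 85, 60]
grades      = [2.0, 2.8, 3.0, 1.8, 3.6, 2.5, 3.2, 2.1]

r = pearson_r(test_scores, grades)
print(f"validity coefficient r = {r:.2f}")
```

A value of r near 1 would indicate strong criterion-related validity; a value near 0 would indicate that the test tells us little about the criterion. In a concurrent-validity study the same computation applies, but the criterion data are collected at the same time as the test scores.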
4. Construct Validity
is a type of validation that helps us determine if a test or measurement instrument
is really measuring what it claims to measure. It focuses on assessing the accuracy
of the test in capturing the intended qualities or attributes of a specific concept or
construct.

3 Methods of Establishing Construct Validity
• Convergent validation is one method used to establish construct validity. It
involves comparing the scores of the test with scores from other established
measures that are believed to assess the same construct. If there is a strong
positive correlation between the scores of the test and the scores of the
established measures, it suggests that the test is accurately measuring the
intended construct.
• Divergent validation is another method used to establish construct validity.
It focuses on examining the relationship between the test and measures of
unrelated constructs. If there is a weak or negligible correlation between the
test scores and measures of unrelated constructs, it indicates that the test is
not measuring those unrelated constructs, supporting its construct validity.
• Factor analysis is a statistical technique used in construct validity studies.
It helps identify the underlying factors or dimensions of the construct.
Researchers analyze the patterns of responses from participants to see if the
items on the test are aligning with the expected factors related to the
construct.
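Convergent and divergent validation can both be expressed as correlations, as sketched below. The scales and data are hypothetical, used only to show the expected pattern: a high correlation with an established measure of the same construct, and a near-zero correlation with a measure of an unrelated construct.

```python
# Illustrative sketch of convergent and divergent validation.
# A new test should correlate strongly with an established measure of the
# SAME construct (convergent) and weakly with a measure of an UNRELATED
# construct (divergent). All data here are hypothetical.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

new_test    = [14, 18, 22, 25, 30, 33, 37, 40]  # new scale for a construct
same_trait  = [15, 17, 23, 24, 31, 32, 38, 41]  # established scale, same construct
other_trait = [52, 48, 55, 41, 60, 39, 50, 47]  # scale for an unrelated construct

r_convergent = pearson_r(new_test, same_trait)
r_divergent  = pearson_r(new_test, other_trait)

print(f"convergent r = {r_convergent:.2f}")  # high: supports construct validity
print(f"divergent  r = {r_divergent:.2f}")   # near zero: distinct from unrelated construct
```

Factor analysis goes beyond pairwise correlations and requires a statistical package, so it is not sketched here; the logic is the same idea extended to many items at once.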

Important things to remember about validity

• Validity refers to the decisions we make based on test results, not to the test
itself or to the measurement.
• Validity is not an all-or-nothing concept; it is never totally absent or absolutely
perfect.
• A validity estimate, called the validity coefficient, refers to a specific type of
validity; it ranges between 0 and 1.
• Validity can never be finally determined; it is specific to each administration of
the test.
Factors affecting the validity of a test item

• the test itself
• the administration and scoring of the test
• personal factors influencing how students respond to the test
• the group tested, since validity is always specific to a particular group

Reasons that reduce the validity of test items

• poorly constructed test items
• unclear directions
• ambiguous test items
• vocabulary that is too difficult
• complicated syntax
• inadequate time limit
• inappropriate level of difficulty
• unintended clues
• improper arrangement of test items
