Reliability Lecture (5)
• Split-half reliability is a useful measure of reliability when it is impractical to assess reliability with two tests or to administer a test twice, because of factors such as time and expense.
• Recommended for homogeneous tests.
• One common criticism of this technique is that the reliability estimate depends on where the test is split, that is, on how the items are divided within the measure.
• There is more than one way to split a test – but there are some ways you should
never split a test.
– Simply dividing the test in the middle is not recommended – it is likely that this procedure would spuriously raise or lower the reliability coefficient.
• (1). One acceptable way to split a test is to randomly assign items to one or the other half of the test.
• (2). Another acceptable way is to assign odd-numbered items to one half of the test and even-numbered items to the other half. This method is referred to as odd-even reliability (see the sketch below).
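As a rough illustration of the odd-even split, the sketch below correlates the two half-test scores and then applies the Spearman–Brown correction to estimate full-length reliability. The NumPy item-score matrix is a random placeholder (so the estimate will be near zero), and the Spearman–Brown step is standard practice but is not named in the slides.

    import numpy as np

    # Hypothetical item-response matrix: rows = examinees, columns = items (0/1 scored).
    # Random placeholder data, so the resulting estimate will be close to zero.
    rng = np.random.default_rng(0)
    scores = rng.integers(0, 2, size=(50, 20))

    # Odd-even split: 1st, 3rd, 5th, ... items vs 2nd, 4th, 6th, ... items.
    odd_half = scores[:, 0::2].sum(axis=1)
    even_half = scores[:, 1::2].sum(axis=1)

    # Correlation between the two half-test scores.
    r_half = np.corrcoef(odd_half, even_half)[0, 1]

    # Spearman-Brown correction: estimates reliability of the full-length test
    # from the half-test correlation (assumed here; not covered in the slides).
    r_full = 2 * r_half / (1 + r_half)
    print(f"half-test r = {r_half:.3f}, split-half reliability = {r_full:.3f}")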
Alpha Coefficient
• Internal consistency reliability – Coefficient alpha or Cronbach’s alpha.
• Widely reported measure of reliability (Hogan, Benjamin, & Brezinski, 2003).
• Similar to split half reliability as it also measures the internal consistency or
correlation between the items on a test.
• Main difference between split half and coefficient alpha – the entire test is used to
estimate the correlation between the items without splitting the test in half.
• Cronbach (1951) outlined coefficient alpha; a value of greater than or equal to 0.7 is generally considered acceptable.
• A very high Cronbach's alpha may indicate redundancy among the items.
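As a sketch of the computation, coefficient alpha can be obtained from an examinee-by-item score matrix as k/(k-1) times (1 minus the ratio of summed item variances to total-score variance). The function name, the Likert-style placeholder data, and the 0.7 comparison in the output line are illustrative assumptions, not material from the slides.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Coefficient alpha for an examinee-by-item score matrix."""
        k = items.shape[1]                         # number of items
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical 5-point responses from 100 examinees to 10 items
    # (random placeholder data, so alpha will be low).
    rng = np.random.default_rng(1)
    scores = rng.integers(1, 6, size=(100, 10)).astype(float)
    print(f"alpha = {cronbach_alpha(scores):.3f} (>= 0.7 is often treated as acceptable)")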
Inter-rater reliability
• Also known as inter-observer reliability or inter-judge reliability, it assesses the level of agreement or consistency between multiple raters or observers when they independently assess the same phenomenon, event, or data. It is used to determine the extent to which different raters, who may have different perspectives or judgments, provide similar assessments or insights.
• Measuring the consistency of ratings across different raters.
• High inter-rater reliability indicates that the judgments made by
different raters are in agreement.
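The sketch below shows one common way to quantify agreement between two raters: raw percent agreement and Cohen's kappa, which corrects for agreement expected by chance. The slides do not name a specific statistic, and the two rating vectors are hypothetical.

    import numpy as np

    # Hypothetical categorical ratings of the same 12 cases by two independent raters.
    rater_a = np.array(["yes", "yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes", "no", "no"])
    rater_b = np.array(["yes", "no", "no", "no", "yes", "no", "yes", "yes", "yes", "yes", "no", "no"])

    # Observed agreement: proportion of cases rated identically.
    p_o = np.mean(rater_a == rater_b)

    # Chance agreement: sum over categories of the product of the raters' marginal proportions.
    categories = np.union1d(rater_a, rater_b)
    p_e = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories)

    # Cohen's kappa corrects observed agreement for chance agreement.
    kappa = (p_o - p_e) / (1 - p_e)
    print(f"percent agreement = {p_o:.2f}, Cohen's kappa = {kappa:.2f}")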
Intra-rater reliability
• Also known as intra-observer reliability or test-retest reliability.
• It assesses the consistency of ratings or measurements made by the same rater or
observer on two or more occasions when assessing the same phenomenon or data.
• When a researcher examines the consistency of one particular individual's ratings at multiple points in time.
• The purpose of intra-rater reliability is to determine the stability of an individual's ratings at two different points in time.
• It is used to determine whether a single rater’s judgments or measurements are
consistent over time.
• High intra-rater reliability indicates that the rater provides consistent results when assessing the same thing on different occasions.
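One simple way to check this, sketched below, is to correlate the scores a single rater gave on two occasions. The Pearson correlation used here and the example scores are assumptions for illustration; an intraclass correlation is often preferred in practice, and the slides do not name a statistic.

    import numpy as np

    # Hypothetical scores one rater gave to the same 10 essays on two occasions.
    time_1 = np.array([7, 5, 8, 6, 9, 4, 7, 8, 5, 6], dtype=float)
    time_2 = np.array([7, 6, 8, 5, 9, 4, 6, 8, 5, 7], dtype=float)

    # Consistency of the single rater across occasions (Pearson correlation).
    r = np.corrcoef(time_1, time_2)[0, 1]
    print(f"intra-rater consistency (Pearson r) = {r:.3f}")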
Thank you