Screening - 2012
Do you agree?
If "early diagnosis" of disease is beneficial, then …
• screening is bound to be effective
• screening for lung cancer using sputum cytology or chest X-rays can reduce mortality from the disease

Screening can involve simple clinical examinations
• such as assessment of blood pressure, or a clinical breast examination
Screening: defined
"The presumptive identification
• of unrecognised disease or defect
• by the application of tests, examinations or other procedures
• that can be applied rapidly."
A screening test is not intended to be diagnostic
• Rather, a positive finding will have to be confirmed by special diagnostic procedures
• United States Commission on Chronic Illness (1957)
Receiver Operating Characteristic (ROC) Curve
[Figure: ROC curve plotting sensitivity (0–100) against 1 − specificity (0–100), with criterion values 10,000, 25,000 and 50,000 marked along the curve]
• Choose the criterion point closest to the top-left corner to maximize the discriminative ability of the test
ROC curve to determine the best cutoff point for Wilson risk-sum scoring to detect difficulty of endotracheal intubation
[Figure: ROC curve plotting sensitivity (0–100) against 1 − specificity (0–100), with numbered candidate cutoff points marked along the curve]
• The area under the curve represents the overall accuracy of the test
• Useful for comparing two tests
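The two slides above can be sketched in code. The scores and outcomes below are hypothetical (not the actual Wilson risk-sum data): the cutoff closest to the top-left corner is chosen, and the AUC is computed via its rank (Mann–Whitney) interpretation — the probability that a diseased subject scores higher than a disease-free one.

```python
def roc_points(scores, labels, cutoffs):
    """(1 - specificity, sensitivity, cutoff) per cutoff; test + if score >= cutoff."""
    pts = []
    for c in cutoffs:
        tp = sum(1 for s, d in zip(scores, labels) if d and s >= c)
        fn = sum(1 for s, d in zip(scores, labels) if d and s < c)
        fp = sum(1 for s, d in zip(scores, labels) if not d and s >= c)
        tn = sum(1 for s, d in zip(scores, labels) if not d and s < c)
        pts.append((fp / (fp + tn), tp / (tp + fn), c))
    return pts

scores = [1, 2, 2, 3, 3, 4, 5, 5, 6, 7]   # hypothetical risk scores
labels = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]   # 1 = disease present
pts = roc_points(scores, labels, cutoffs=[2, 3, 4, 5, 6])

# Best cutoff: the point closest to the top-left corner (0, 1)
best = min(pts, key=lambda p: p[0] ** 2 + (1 - p[1]) ** 2)
print("best cutoff:", best[2])

# AUC: P(score of diseased > score of disease-free), ties counting one half
pos = [s for s, d in zip(scores, labels) if d]
neg = [s for s, d in zip(scores, labels) if not d]
auc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))
print("AUC:", auc)
```

With these made-up data the cutoff of 5 gives the point (0, 0.8), nearest to the ideal corner (0, 1), and the AUC is 0.94.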
It is important that:
• the validity of the screening test to be used has been evaluated and its expected values stated
• there is an acceptable program of quality control to ensure that the stated levels of sensitivity and specificity are attained and maintained
2×2 tables: predictive values at two prevalences

Example 1: prevalence 2% (200 diseased among 10,000 screened)

            D+      D-      Total
Test +      180     490     670       PPV = 180/670  = 27%
Test -      20      9310    9330      NPV = 9310/9330 ≈ 99.8%
Total       200     9800    10000

Example 2: the same test (sensitivity 90%, specificity 95%), prevalence 20%

            D+      D-      Total
Test +      1800    400     2200      PPV = 1800/2200 = 82%
Test -      200     7600    7800      NPV = 7600/7800 = 97%
Total       2000    8000    10000
Relationship of disease prevalence and predictive values

General rule:
• low prevalence → low PPV, high NPV
• high prevalence → high PPV, low NPV
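The general rule can be checked numerically. A minimal sketch applying one fixed test (sensitivity 90%, specificity 95%, the values implied by the counts in the 2×2 tables above) to populations of 10,000 at two prevalences:

```python
def predictive_values(sens, spec, prevalence, n=10000):
    """Fill a 2x2 table from test characteristics and prevalence, return (PPV, NPV)."""
    diseased = n * prevalence
    healthy = n - diseased
    tp = sens * diseased          # true positives
    fn = diseased - tp            # false negatives
    tn = spec * healthy           # true negatives
    fp = healthy - tn             # false positives
    return tp / (tp + fp), tn / (tn + fn)

ppv_low, npv_low = predictive_values(0.90, 0.95, prevalence=0.02)
ppv_high, npv_high = predictive_values(0.90, 0.95, prevalence=0.20)
print(f"prevalence  2%: PPV = {ppv_low:.0%}, NPV = {npv_low:.1%}")
print(f"prevalence 20%: PPV = {ppv_high:.0%}, NPV = {npv_high:.1%}")
```

The PPV rises from 27% to 82% with no change in the test itself, only in the prevalence of disease in the screened population.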
Use of multiple tests

Sequential (two-stage) testing
• there is a loss in net sensitivity
• there is a gain in net specificity

Simultaneous testing
• gain in net sensitivity
• loss in net specificity
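Under the common simplifying assumption that the two tests err independently, the net values can be computed directly (a sketch; the 80/90 and 90/85 test characteristics are hypothetical):

```python
def sequential(se_a, sp_a, se_b, sp_b):
    """Two-stage testing: positive only if positive on BOTH tests."""
    net_se = se_a * se_b                     # must pass both -> sensitivity falls
    net_sp = sp_a + (1 - sp_a) * sp_b        # negative on either clears -> specificity rises
    return net_se, net_sp

def simultaneous(se_a, sp_a, se_b, sp_b):
    """Parallel testing: positive if positive on EITHER test."""
    net_se = 1 - (1 - se_a) * (1 - se_b)     # either can catch -> sensitivity rises
    net_sp = sp_a * sp_b                     # must be negative on both -> specificity falls
    return net_se, net_sp

net_seq = sequential(0.80, 0.90, 0.90, 0.85)
net_sim = simultaneous(0.80, 0.90, 0.90, 0.85)
print("sequential:  ", net_seq)   # sensitivity falls, specificity rises
print("simultaneous:", net_sim)   # sensitivity rises, specificity falls
```

With these example values, sequential testing gives net sensitivity 72% and net specificity 98.5%, while simultaneous testing gives 98% and 76.5%, matching the rules above.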
Reliability (Repeatability) of Tests
Results should be replicable when the test is repeated. Sources of variation:
• Intrasubject variation
• Intraobserver variation
• Interobserver variation
Measuring agreement
Kappa is a ratio:
• the numerator is the observed improvement over chance agreement (Ao − Ac)
• the denominator is the maximum possible improvement over chance agreement (N − Ac)
κ = (Ao − Ac) / (N − Ac)
Size of agreement (Landis & Koch):
0.81 to 1.00   Almost perfect
0.61 to 0.80   Substantial
0.41 to 0.60   Moderate
0.21 to 0.40   Fair
0.00 to 0.20   Slight
Below 0.00     Poor
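The kappa ratio described above can be sketched directly; the 2×2 agreement table below uses hypothetical counts for two observers reading the same 100 tests:

```python
def kappa(table):
    """Cohen's kappa: table[i][j] = subjects rated i by observer 1 and j by observer 2."""
    n = sum(sum(row) for row in table)
    ao = sum(table[i][i] for i in range(len(table)))          # observed agreements
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    ac = sum(row_totals[i] * col_totals[i] / n               # agreements expected by chance
             for i in range(len(table)))
    return (ao - ac) / (n - ac)                               # (Ao - Ac) / (N - Ac)

# Hypothetical counts: observers agree on 46 positives and 32 negatives out of 100
k = kappa([[46, 10],
           [12, 32]])
print(f"kappa = {k:.2f}")
```

Here the observers agree on 78 of 100 readings, but about 51 agreements would be expected by chance alone, so kappa credits only the improvement beyond chance.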