CRITICAL APPRAISAL
(INTRODUCTION)
Savitri Sayogo
Department of Nutrition FKUI
2012
1
Learning Outcomes
By the end of this session, students should:
Understand the principles of critical appraisal and why you should undertake it
Be able to appraise published research and judge its reliability
Be able to assess the relevance of published research to your own work
2
What is critical appraisal ?
Critical appraisal is the assessment of evidence by systematically reviewing its relevance, validity and results to specific situations
Chambers, R (1998)
3
Critical appraisal is:
A balanced assessment of the benefits and strengths of research against its flaws and weaknesses
An assessment of the research process and results
A consideration of the quantitative and qualitative aspects of research
To be undertaken by all health professionals as part of their work
4
The Problem
Vast and expanding literature.
Limited time to read.
Different reasons to read mean different strategies.
Keeping up to date.
Answering specific clinical questions.
Pursuing a research interest.
5
Specify Your Information Need.
What kind of reports do I want?
How much detail do I need?
How comprehensive do I need to be?
How far back should I search?
The answers to these questions should flow
from the reasons for reading.
6
Key Steps to Effective Critical Appraisal
Three broad questions:
Are the results valid?
What are the results?
How will these results help me work with my patients?
7
11 items
1. What is the research question ?
2. What is the study type ?
3. What are the outcome factors and how are they measured ?
4. What are the study factors and how are they measured ?
5. What important confounders are considered ?
6. What are the sampling frame and sampling method ?
7. In an experimental study, how were the subjects assigned to groups?
In a longitudinal study, how many reached final follow-up?
In a case-control study, are the controls appropriate? (Etc.)
8. Are statistical tests considered?
9. Are the results clinically/socially significant ?
10. Is the study ethical ?
11. What conclusions did the authors reach about the study question ?
8
Frame of a scientific paper
Title, authorship
Abstract
Introduction
Methods
Results
Discussion
Conclusion & recommendation
References
9
Title
Describing the outcome
Describing the relationship between risk factor and outcome
Clear and communicative
12-16 words
10
Authorship
Complete names of all authors
Name of institution
11
Abstract
English and Bahasa Indonesia
Overview summary of the work
Highlights of the results (objective, methods)
General statement of significant findings
Around 200-250 words
Key words (3-8)
12
Introduction
Rationale (background)
Magnitude of the problem
Impact of the outcome
Differences between previous results (risk factors and outcomes)
To identify specific risk factors and outcomes for a specific area or population
Relevant, up-to-date literature review
Purpose of the study
13
METHODS
1. Study design
2. Study Population :
Subject selection procedure
3. Methods of measurement
4. Description of statistical analysis
14
Results
General characteristics of the data (text, tables, graphs)
What happened?
Discussion
Strengths and limitations of the study
Meaning (implications of the results): significance, comparison with other studies
15
Conclusion
Answers the research problem and aim of the study
Acknowledgement
References
16
Study design: the epidemiologic study
Experimental studies (controlled assignment)
Randomized assignment: clinical trials, community trials
Not randomized assignment
Observational studies (uncontrolled assignment)
Sampling with regard to exposure, characteristic or cause: prospective studies
Sampling with regard to disease or effect: cross-sectional and/or retrospective studies
Exposure or characteristic at time of study: cross-sectional studies
History of exposure or characteristic (prior to time of study): retrospective studies
Fig. 1. The epidemiologic study
17
Questions to Ask
Is it of interest?
Why was it done?
How was it done?
What has been found?
What are the implications?
What else is of interest?
18
Is it of Interest ?
(title, abstract)
1. How relevant the topic is to the information needed
2. How interesting the results are likely to prove
19
WHY WAS IT DONE
(introduction)
1. Is sufficient evidence presented to justify the study?
2. Is the purpose of the study clearly stated?
3. Is the study hypothesis clearly stated?
4. Does the study address a question that has clinical relevance?
20
Introduction
Research problem
Magnitude of problem
Summary of current relevant literature
Highlight 'gaps' in knowledge, e.g.
No data on specific aspect of problem
Conflicting results
Previous studies have failed to adjust for important
confounders
Limitations of previous studies (sample size, bias, etc)
21
Introduction (cont.)
Purpose of the study :
Should follow the gaps in the literature
Should be clear how your study is better than previous
research
Study hypothesis : State clearly
Association to be assessed
Direction of the association
22
Questions to Ask (cont.)
How was it done?
(Methods.)
Brief but should include enough detail to enable one to
judge quality.
Must include who was studied and how they were
recruited.
Basic demographics must be there.
An important guide to the quality of the paper.
23
HOW WAS IT DONE
(methods)
a. Consider who was studied
b. Consider the study design
c. Consider the outcome variable
d. Consider the predictor variables
e. Consider the methods of analysis
f. Consider the possible sources of bias
24
WHO WERE STUDIED ?
(methods)
1. Is the population from which the study sample was drawn clearly stated?
2. Are the inclusion and exclusion criteria specified?
3. Do the criteria match the goals of the study?
4. Do the authors account for every eligible patient who does not enter the study?
5. Is the baseline comparability of the treatment and control groups documented?
25
Methods (cont.)
Study design (cross-sectional, case
control, cohort, etc).
Study population :
where the sample comes from
Inclusion/exclusion criteria
26
Methods (cont.)
Outcome
Source of measurement (e.g. interview, medical
record, etc.)
Instrument (questionnaire, scale, etc.)
Definition or diagnostic criteria
Type of variable used in analysis (e.g. continuous, categorical, etc.)
If longitudinal study, length of follow-up and
number of individuals lost to follow-up
27
CONSIDER THE METHODS OF ANALYSIS
(methods)
1. Are the statistical methods employed suitable for the types of variables in the study (nominal vs ordinal vs continuous)?
2. Is the sample size adequate to answer the research questions?
3. Etc.
28
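Question 2 on the slide above asks whether the sample size is adequate. As an illustration only (not from the original slides), here is a minimal sketch of a sample-size calculation for comparing two proportions with the usual normal-approximation formula; the proportions, significance level and power are hypothetical placeholders.

```python
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate subjects needed per group to detect a difference
    between two proportions (two-sided test, normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # about 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Hypothetical example: expecting 30% vs 15% of subjects with the outcome
print(round(n_per_group(0.30, 0.15)))   # roughly 118 per group
```

If a paper reports a much smaller sample without a justification of this kind, null findings in particular should be interpreted with caution.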
CONSIDER THE POSSIBLE SOURCES OF BIAS
(methods)
1. Is the method of selection of subjects likely to have biased the results?
2. Is the measurement of either the exposure or the disease likely to be biased?
3. Have the investigators considered whether confounders could account for the observed result?
29
Questions to Ask
What has it found?
(Results.)
The data should be there – not just statistics.
Are the aims in the introduction addressed in the results?
Look for illogical sequences and bland statements of results.
Flaws and inconsistencies?
All research has some flaws – the impact of the flaws needs to be assessed.
30
Questions to Ask (cont.)
What are the implications?
(Abstract/discussion)
The whole use of research is how far the results can be
generalised.
All authors will tend to think their work is more important than it is.
What is new here?
What does it mean for health care?
Is it relevant to my patients?
31
Questions to Ask (cont.)
What else is of interest ?
(Introduction/discussion.)
Useful references?
Important or novel ideas?
Even if the results are discounted it doesn’t mean there is
nothing of value.
32
Validity/accuracy
The degree to which a variable actually represents what it is supposed to represent
Best way to assess: comparison with a reference standard
Threatened by: systematic error (bias)
Contributed by:
Observer
Subject
Instrument
33
Reliability/precision
Definition: precision is the degree to which a variable has nearly the same value when measured several times
Best way to assess: comparison among repeated measures
Threatened by: random error (variance)
Contributed by:
Observer
Subject
Instrument
34
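Since precision is assessed by comparing repeated measurements, a rough sketch (with invented numbers, not from the slides) of how repeated weighings of the same subjects can be summarised as a within-subject standard deviation and coefficient of variation:

```python
import statistics

# Hypothetical repeated weight measurements (kg) for three subjects
repeated = {
    "subject_1": [62.1, 62.3, 61.9],
    "subject_2": [70.4, 70.6, 70.5],
    "subject_3": [55.0, 54.7, 55.2],
}

for subject, values in repeated.items():
    mean = statistics.mean(values)
    sd = statistics.stdev(values)   # within-subject standard deviation
    cv = 100 * sd / mean            # coefficient of variation (%)
    print(f"{subject}: mean = {mean:.1f} kg, SD = {sd:.2f} kg, CV = {cv:.2f}%")
```

The smaller the within-subject variation relative to the mean, the more precise (reliable) the measurement.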
Basic types of Error
The basic types of error may be divided into:
Random (chance) error
Systematic error
Random error is the by-chance error that makes observed values differ from the true value. This may occur through sampling variability or random fluctuation of the event of interest.
35
Basic types of Error (cont.)
Systematic error or bias is any difference between the true value and the observed value due to all causes other than random fluctuation and sampling variability. This type of error is generally more important, and harder to detect, e.g. over-estimating the body weight of every subject by 0.1 kilogram because of an inaccurate weighing scale.
36
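To make the distinction concrete, a small simulation sketch (illustrative only): random error scatters measurements around the true value and largely cancels out in the mean, while a systematic error such as the 0.1 kg scale offset above shifts every measurement, and therefore the mean, in the same direction.

```python
import random

random.seed(1)
true_weight = 60.0   # true body weight in kg
n = 1000

# Random error only: measurements scatter around the true value
random_only = [true_weight + random.gauss(0, 0.5) for _ in range(n)]

# Added systematic error: the scale over-reads every subject by 0.1 kg
biased = [w + 0.1 for w in random_only]

print(sum(random_only) / n)   # close to 60.0 -> random error averages out
print(sum(biased) / n)        # close to 60.1 -> the bias remains in the mean
```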
What one gets from a study !?!
OBSERVED VALUE = FACT + DISTORTION
Distortion arises from systematic error (bias) and from random error (chance).
Systematic error (bias):
Inherent differences between groups: selection bias, allocation bias, confounding
Differences in handling and evaluation between groups: information bias
Can be solved by: proper study design and analysis, and quality control
Random error (chance): addressed by statistical testing
Figure 2. Schematic presentation of common bias and error found in epidemiologic studies
37
Bias
Selection bias
Information bias
Confounding bias
38
Selection bias
Population → Sample
Methods of sample selection:
Random
Systematic
Multistage
Purposive
etc.
39
Information bias
Methods of data collection:
Source of data?
Instrument and media (reagents)?
Executors (who collects the data)?
40
To control bias
Selection: subjects are representative of the target population; most of the cases are included in the sample
Information: standardized data collection methods (diagnostics, questionnaires, human resources, etc.)
Confounding: identify all potential confounding factors; analyse all potential confounding factors
41
Evaluation of quality of measurement
Repeatability
Subject (biological) variation: random, systematic
Observer (measurement) variation:
Within observer (tends to be random)
Between observers (tends to be systematic)
Validity
Sensitivity (ability to identify true positives)
Specificity (ability to exclude true negatives)
42
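Validity of a measurement is judged against a reference standard. A minimal sketch, using an invented 2x2 table, of how sensitivity and specificity are computed from the counts of true and false positives and negatives:

```python
# Hypothetical comparison of a new test against a reference standard
true_positives = 80    # test positive, disease present
false_negatives = 20   # test negative, disease present
true_negatives = 180   # test negative, disease absent
false_positives = 20   # test positive, disease absent

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.0%}")  # 80% of diseased correctly detected
print(f"Specificity: {specificity:.0%}")  # 90% of non-diseased correctly excluded
```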
INTERNAL AND EXTERNAL VALIDITY
1. Sample patients: sampling bias, chance
2. Assemble patients into groups: selection bias, chance
3. Make measurements: measurement bias, chance
4. Analyze results: confounding bias
5. Draw conclusions from the sample and generalize to other patients: external validity
Steps 1-4 determine internal validity; step 5 concerns external validity.
Source: Amri Z, 2005
43
Sampling – chance
Assemble into groups – selection bias & chance
Make measurements – measurement bias
Analyze results – confounding
Conclusions from sample – generalization?
44
FINDINGS IN THE STUDY → (inference #1: internal validity) → TRUTH IN THE STUDY → (inference #2: external validity) → TRUTH IN THE UNIVERSE
Figure 3. The two inferences involved in drawing conclusions from the findings of a study and applying them to the universe outside
45
Drawing conclusions: FINDINGS IN THE STUDY → infer → TRUTH IN THE STUDY → infer → TRUTH IN THE UNIVERSE
Designing and implementing: RESEARCH QUESTION → design → STUDY PLAN → implement → ACTUAL STUDY
(EXTERNAL VALIDITY: study ↔ universe; INTERNAL VALIDITY: findings ↔ study)
Figure 4. The process of designing and implementing a research project sets the stage for the process of drawing conclusions from it
46
CRITICAL APPRAISAL
SURVEY & CASE-CONTROL STUDY
Savitri Sayogo
Dec 2007
47
Questions to ask when you read a scientific paper in a journal:
Is it of interest ? / title, abstract
Why was it done ? / introduction
How was it done ? / methods
What has it found ? / results
What are the implications ? / abstract,
discussion
What else is of interest ? / introduction,
discussion
48
Interpreting the results
Statistical significance
The play of chance
The logic of statistical tests
Confidence intervals
Bias
Confounding
49
Relative Risk   Confidence Interval   Comment
1.2             0.1 – 9               Not significant, imprecise result
1.2             0.9 – 1.4             Not significant, precise result
1.2             1.1 – 1.3             Significant, precise result
4               1.1 – 8               Significant, imprecise result
50
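As a rough sketch of where such figures come from (the counts below are invented, not taken from the slides): a relative risk and its approximate 95% confidence interval can be computed from a 2x2 table on the log scale, and the result is statistically significant at the 5% level only when the interval excludes 1.

```python
import math

# Hypothetical cohort: exposed vs unexposed counts of the outcome
a, b = 30, 70   # exposed: with outcome, without outcome
c, d = 20, 80   # unexposed: with outcome, without outcome

rr = (a / (a + b)) / (c / (c + d))

# Approximate 95% CI for the relative risk on the log scale
se_log_rr = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
# Here RR = 1.50 with 95% CI 0.92 to 2.46: the interval includes 1,
# so the association is not statistically significant.
```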
Confounding Variable
Mother's knowledge (independent variable) → Malnutrition (dependent variable)
Family income (confounding variable)
51
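One simple way to check whether family income confounds the association between mother's knowledge and malnutrition is to compare the crude relative risk with the relative risks within income strata. Below is a rough sketch with entirely invented counts; when the stratum-specific estimates agree with each other but differ from the crude estimate, confounding is the likely explanation.

```python
def relative_risk(a, b, c, d):
    """RR from a 2x2 table: exposed (a with, b without the outcome),
    unexposed (c with, d without the outcome)."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts: exposure = low maternal knowledge, outcome = malnutrition
low_income  = relative_risk(30, 30, 10, 10)   # RR = 1.0 within low-income families
high_income = relative_risk(4, 36, 8, 72)     # RR = 1.0 within higher-income families
crude       = relative_risk(34, 66, 18, 82)   # RR ~ 1.9 when strata are pooled

print(f"Crude RR: {crude:.2f}")
print(f"RR, low-income stratum: {low_income:.2f}")
print(f"RR, higher-income stratum: {high_income:.2f}")
# The crude association disappears within income strata, suggesting that
# family income, not mother's knowledge, explains the crude relative risk.
```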
The standard appraisal
questions
Are the aims clearly stated ?
Was the sample size justified ?
Are the measurements likely to be valid and
reliable ?
Are the statistical methods described ?
Do the numbers add up ?
Was statistical significance assessed?
What do the main findings mean ?
How are null findings interpreted ?
Are important effects overlooked ?
How do the results compare with previous reports?
What implications does the study have for your practice?
52
SURVEY
The essential questions:
Who was studied ?
How was the sample obtained ?
What was the response rate ?
53
The detailed questions
Are the aims clearly stated ?
Is the design appropriate to the stated objective?
Was the sample size justified?
Are the measurements likely to be valid and
reliable ?
Are the statistical methods described ?
Can the results be generalised?
54
Conduct :
Did untoward events occur during the study ?
Analysis :
Were the basic data adequately described?
Do the numbers add up ?
Was the statistical significance assessed ?
Were the findings serendipitous?
55
Interpretation :
What do the main findings mean ?
How could selection bias arise ?
How are null findings interpreted ?
Are important effects overlooked?
Can the results be generalised ?
How do the results compare with previous reports ?
What implications does the study have for your
practice ?
56
The complete list for the appraisal of case-control studies
57
The essential questions
How were the cases obtained ?
Is the control group appropriate ?
Were data collected in the same way for cases and controls?
58
The detailed questions :
Are the aims clearly stated ?
Is the method appropriate to the aims ?
Was the sample size justified?
Are the measurements likely to be valid and
reliable ?
Are the statistical methods described?
59
Conduct :
Did untoward events occur during the study?
Analysis :
Were the basic data adequately described?
Do the numbers add up ?
Was there data-dredging ?
Was statistical significance assessed?
60
Interpretation :
What do the main findings mean ?
Where are the biases?
Could there be confounding ?
How are null findings interpreted ?
Are important effects overlooked ?
How do the results compare with previous reports ?
What implications does the study have for your
practice ?
61
References
1. Djuwita R. Critical appraisal. SEAMEO.
2. Crombie IK. The pocket guide to critical appraisal.
3. Sastroasmoro S. Metodologi penelitian klinis.
63