Meta-Analysis - An Introduction
What is Meta-Analysis (MA)?
Term coined by Gene Glass in his 1976
AERA Presidential address
An alternative to the traditional literature
review
Allows the reviewer to quantitatively
combine and analyze the results from
multiple studies
What is Meta-Analysis (MA)?
Traditional literature review is based on
the reviewer's analysis and synthesis of
study themes or conclusions
MA collects the essential empirical
results from multiple studies and draws
conclusions about the overall effect
across studies, no matter what the
original study conclusions were
Thus an MA becomes a research study on
research studies, hence the term
"meta".
Growth and Development of MA
MA has developed substantially both in
methods and in applications (Larry Hedges,
Ingram Olkin, John Hunter, and Frank
Schmidt)
Literature review should be as systematic as
primary research, and study characteristics
and design should provide a context for
interpreting study results and conclusions
(Glass)
MA now widely used in many disciplines
(e.g., education, social sciences, medicine)
Conducting a Meta-Analysis
Researcher first collects studies on a
particular topic
Information about studies is then
collated and coded
Results of each study are translated into
a common metric, the study effect size
Analysis is then conducted to
summarize effect size across studies or
analyze relationships between
covariates and effect size
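As a rough sketch of the summarizing step above, one common approach (not spelled out on this slide) is a fixed-effect meta-analysis that pools the study effect sizes with an inverse-variance weighted average. The short Python example below illustrates this; the function name, effect sizes, and variances are made up for illustration.

import numpy as np

def fixed_effect_pool(effect_sizes, variances):
    # Inverse-variance weighted mean effect size (fixed-effect model).
    # Returns the pooled effect size and its standard error.
    es = np.asarray(effect_sizes, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                        # weight each study by 1/variance
    pooled = np.sum(w * es) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))      # standard error of the pooled estimate
    return pooled, se

# Hypothetical standardized mean differences and their variances from five studies
d = [0.30, 0.55, 0.10, 0.42, 0.25]
v = [0.04, 0.09, 0.02, 0.05, 0.03]
pooled, se = fixed_effect_pool(d, v)
print(f"pooled d = {pooled:.2f}, SE = {se:.2f}")

Analyses of relationships between covariates and effect size (moderator analyses) build on the same weighted effect sizes.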
Effects of MA
An important consequence of the
development of MA is the way it has changed
our thinking about research
Increased focus on a number of important
issues in science including publication biases
How to understand and summarize statistical
results
Importance of effect size and statistical power
Effect Size in MA
Effect size makes meta-analysis possible
it is the dependent variable
it standardizes findings across studies such
that they can be directly compared
Any standardized index can be an effect
size (e.g., standardized mean difference,
correlation coefficient, odds-ratio) as long as:
It is comparable across studies
It represents the magnitude and direction
of the relationship of interest
It is independent of sample size
Different meta-analyses may use different
effect size indices
Which Studies to Review?
Should be as inclusive as possible
Need to find all studies
Include unpublished studies
Apples and Oranges
A priori inclusion and exclusion criteria
Revision of criteria as MA proceeds
More than one sample of studies for different
purposes
Which Studies?
Significant findings are more likely to be
published than nonsignificant findings (File
drawer problem)
Critical to try to identify and retrieve all
studies that meet your eligibility criteria
Potential sources for identification of
documents
computerized bibliographic databases
authors working in the research domain
conference programs
dissertations
review articles
reference lists
hand searching relevant journals
government reports, bibliographies, clearinghouses
Strengths of Meta-Analysis
Imposes a discipline on the process of
summing up research findings
Represents findings in a more differentiated
and sophisticated manner than conventional
reviews
Capable of finding relationships across
studies that are obscured in other approaches
Protects against over-interpreting differences
across studies
Can handle a large number of studies (this
would overwhelm traditional approaches to
review)
Weaknesses of Meta-Analysis
Requires a good deal of effort
Mechanical aspects don't lend themselves to
capturing more qualitative distinctions between
studies
Apples and oranges; comparability of studies
is often in the eye of the beholder
Most meta-analyses include blemished studies
Selection bias poses a continual threat:
negative and null finding studies that you were unable
to find
outcomes for which there were negative or null findings
that were not reported
Analysis of between study differences is
fundamentally correlational
Examples of Different Types of Effect Sizes:
Standardized Mean Difference (continuous
outcome)
group contrast research
treatment groups
naturally occurring groups
Odds-Ratio (dichotomous outcome)
group contrast research
treatment groups
naturally occurring groups
Correlation Coefficient
association between variables research
The Standardized Mean Difference
ES = \frac{\bar{X}_1 - \bar{X}_2}{s_{pooled}}, \qquad
s_{pooled} = \sqrt{\frac{s_1^2 (n_1 - 1) + s_2^2 (n_2 - 1)}{n_1 + n_2 - 2}}
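A minimal Python sketch of this formula, assuming each group's mean, standard deviation, and sample size are available (the function name and example numbers are invented for illustration):

import math

def standardized_mean_difference(m1, s1, n1, m2, s2, n2):
    # ES = (mean1 - mean2) / pooled standard deviation
    s_pooled = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    return (m1 - m2) / s_pooled

# Hypothetical treatment vs. control summary statistics
print(standardized_mean_difference(10.5, 2.0, 30, 9.5, 2.2, 32))  # roughly 0.47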
Frequencies:

                  Success   Failure
Treatment Group      a         b
Control Group        c         d

ES = \frac{ad}{bc}
The Odds-Ratio is the odds of success in
the treatment group relative to the odds
of success in the control group.
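The computation itself is a one-liner; the sketch below (with invented counts) simply applies ES = ad/bc. In practice, meta-analysts usually analyze the log of the odds-ratio because its sampling distribution is closer to normal.

def odds_ratio(a, b, c, d):
    # ES = (a * d) / (b * c) for the 2x2 table above
    # a, b: treatment successes/failures; c, d: control successes/failures
    return (a * d) / (b * c)

# Hypothetical counts: treatment 40 successes / 10 failures, control 25 / 25
print(odds_ratio(40, 10, 25, 25))  # 4.0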
Converting results into a common metric
Can convert p-values, t, F, etc. into the
standardized effect size metric being used
in the meta-analysis (e.g., d, r)
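For example, two common textbook conversions take an independent-samples t statistic to d or to r; the sketch below assumes these standard formulas (the function names are mine, not from the slides):

import math

def d_from_t(t, n1, n2):
    # Standardized mean difference from an independent-samples t statistic
    return t * math.sqrt(1.0 / n1 + 1.0 / n2)

def r_from_t(t, df):
    # Correlation-type effect size from a t statistic and its degrees of freedom
    return math.sqrt(t ** 2 / (t ** 2 + df))

print(d_from_t(2.5, 30, 32))  # about 0.64
print(r_from_t(2.5, 60))      # about 0.31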
Interpreting Effect Size Results
Cohen's Rules-of-Thumb
standardized mean difference effect size
small = 0.20
medium = 0.50
large = 0.80
correlation coefficient
small = 0.10
medium = 0.25
large = 0.40
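A trivial helper (not from the slides) that maps an effect size onto these thresholds; as the next slide stresses, such labels ignore the context of the intervention:

def cohen_label(es, index="d"):
    # index: "d" for standardized mean difference, "r" for correlation
    cutoffs = {"d": (0.20, 0.50, 0.80), "r": (0.10, 0.25, 0.40)}
    small, medium, large = cutoffs[index]
    es = abs(es)
    if es >= large:
        return "large"
    if es >= medium:
        return "medium"
    if es >= small:
        return "small"
    return "below small"

print(cohen_label(0.55, "d"))  # medium
print(cohen_label(0.15, "r"))  # small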
Interpreting Effect Size Results
Rules-of-thumb do not take into
account the context of the intervention
a small effect may be highly meaningful
for an intervention that requires few
resources and imposes little on the
participants
small effects may be more meaningful for
serious and fairly intractable problems
Cohen's rules-of-thumb do, however,
correspond to the distribution of
effects across meta-analyses found by
Lipsey and Wilson (1993)
Interpreting Effect Size Results
Findings must be interpreted within the
bounds of the methodological quality of the
research base synthesized.
Studies often cannot simply be grouped
into good and bad studies.
Some methodological weaknesses may bias
the overall findings; others may merely add
noise to the distribution.
Traditional narrative reviews
Vote-counting