Evaluating and Using Observational Evidence - The Contrasting Views of Policy Makers and Epidemiologists
Lily O’Donoughue Jenkins1*, Paul M. Kelly2,3, Nicolas Cherbuin1 and Kaarin J. Anstey1
1 Centre for Research on Ageing, Health and Wellbeing, Australian National University, Canberra, ACT, Australia, 2 ACT Health Directorate, Canberra, ACT, Australia, 3 Australian National University Medical School, Canberra, ACT, Australia
Background: Currently, little is known about the types of evidence used by policy
makers. This study aimed to investigate how policy makers in the health domain use
and evaluate evidence and how this differs from academic epidemiologists. By having
a better understanding of how policy makers select, evaluate, and use evidence, aca-
demics can tailor the way in which that evidence is produced, potentially leading to more
effective knowledge translation.
Methods: An exploratory mixed-methods study design was used. Quantitative measures
were collected via an anonymous online survey (n = 28), with sampling from three health-
related government and non-government organizations. Semi-structured interviews with
policy makers (n = 20) and epidemiologists (n = 6) were conducted to gather qualitative data.
Results: Policy makers indicated systematic reviews were the preferred research resource (19%), followed closely by qualitative research (16%). Neither policy makers nor epidemiologists used grading instruments to evaluate evidence. In the web survey, policy makers reported that consistency and strength of evidence (93%), the quality of data (93%), bias in the evidence (79%), and recency of evidence (79%) were the most important factors taken into consideration when evaluating the available evidence. The same results were found in the qualitative interviews. Epidemiologists focused on the methodology used in the study. The most cited barriers to using robust evidence, according to policy makers, were political considerations (60%), time limitations (55%), funding (50%), and research not being applicable to current policies (50%).

Conclusion: The policy makers interviewed did not report a systematic approach to evaluating evidence. Although there was some overlap between what policy makers and epidemiologists identified as high-quality evidence, there were also some important differences. This suggests that the best scientific evidence may not routinely be used in the development of policy. In essence, the policy-making process relied on other jurisdictions’ policies and the opinions of internal staff members as primary evidence sources to inform policy decisions. Findings of this study suggest that efforts should be directed toward making scientific information more systematically available to policy makers.

Keywords: policy making, knowledge translation, evidence-based practice, government, mixed-methods research

Abbreviations: ACT, Australian Capital Territory; EBM, evidence-based medicine; HREC, Human Research Ethics Committee; KT, knowledge translation; RCT, randomized controlled trial; NGO, non-government organization.

Edited by:
Alastair James Fischer, NICE, UK

Reviewed by:
Emmanuel D. Jadhav, Ferris State University, USA
Janya McCalman, Central Queensland University, Australia

*Correspondence:
Lily O’Donoughue Jenkins, [email protected]

Specialty section: This article was submitted to Public Health Policy, a section of the journal Frontiers in Public Health.

Received: 09 September 2016; Accepted: 14 November 2016; Published: 06 December 2016

Citation: O’Donoughue Jenkins L, Kelly PM, Cherbuin N and Anstey KJ (2016) Evaluating and Using Observational Evidence: The Contrasting Views of Policy Makers and Epidemiologists. Front. Public Health 4:267. doi: 10.3389/fpubh.2016.00267
Given the importance of promptly incorporating new and robust scientific evidence into policy and the barriers to KT identified above, there is an urgent need to better understand how policy makers evaluate and use evidence. Therefore, the current study had two intersecting aims. The first aim was to gain an understanding of the role of research evidence in policy making. The second was to investigate how policy makers in the health domain select this evidence, whether they systematically assess evidence quality, and how this differs from academic epidemiologists.

MATERIALS AND METHODS

An exploratory mixed-methods study design was used in order to provide a deeper understanding. The design involved the collection and analysis of data from two sets of qualitative interviews (n = 13 and 6) and one quantitative survey (n = 28). Both interviews and survey are included in Supplementary Material. Written informed consent was obtained from all participants prior to involvement in the study. The Australian National University Human Research Ethics Committee (HREC) and the Australian Capital Territory (ACT) Health HREC Survey Resource Group approved the study.

Qualitative Interviews with Policy Makers

Participants and Recruitment
The first set of interviews focused on a purposive heterogeneous sample of 20 people who worked in policy. Thirteen participants were from the ACT Government Health Directorate, four participants were from non-governmental organizations, one from a national Australian government department, and two from Australia’s peak research funding body, the National Health and Medical Research Council. Individuals were invited to participate in the study if they had any previous experience contributing to the development and implementation of health policy or programs relating to risk factors for chronic disease, mental health, or aging. Executive Directors from ACT Health identified participants and invited them via email to participate. Individuals who responded and consented to participating were then contacted by the ANU researchers. Participants were selected irrespective of policy background, time spent in organizational roles, or seniority. Participants from ACT Health were from a wide range of policy units, including Women’s Youth and Child Health; Aboriginal and Torres Strait Islander; Alcohol and Other Drugs; Rehabilitation, Aged and Community Care; and Population Health.

Measures
One-on-one semi-structured interviews were conducted with participants focusing on understanding: (1) how policy makers locate and use evidence from observational and other research; (2) factors influencing their choice of evidence sources; (3) how policy makers deal with conflicting evidence from specific topics; (4) how policy makers evaluate the quality of research; and (5) how policy makers view researchers. The interviews also sought information on perceived barriers to KT. The interview questions were developed in consultation with research experts and senior staff from the health department, Alzheimer’s Australia, and the NHMRC. Interviews were conducted one-on-one, took approximately 1 h, and were recorded and transcribed.

Analysis
The transcribed interviews were uploaded into NVivo (30) and thematically analyzed. Themes included evidence sources, choice of evidence sources, confusion about policy, evaluating policy, grading evidence, policy drivers, policy process, policy maker concerns, and barriers affecting KT. Average and percentage calculations were also applied.

Qualitative Interviews with Epidemiologists

Participants and Recruitment
The second set of interviews focused on a purposive sample of chronic disease experts, known to the authors, who were approached to provide their views regarding the characteristics of high-quality observational research, their opinion about the currently available evidence rating systems, and the implications of grading observational research.

Measures
Participants were asked to answer seven open-ended questions in paper form seeking their views on: (1) their opinion of what constitutes high-quality observational research and how it compares with other types of research; (2) their understanding and use of current rating systems for grading evidence; and (3) the consequences of inappropriately rating observational research. Participants were provided with a one-page guide to grading instruments to clarify what the authors defined as a grading instrument (included in Supplementary Material).

Analysis
Thematic analysis using a step-by-step process was conducted to analyze the interviews. The interview transcripts were repeatedly screened in order to identify a list of recurring themes that appeared critical to evaluating evidence. These themes included evidence sources, choice of evidence sources, evaluating evidence, grading evidence, and observational research. Average and percentage calculations were also applied.

Quantitative Survey

Participants and Recruitment
An anonymous survey was compiled using the online survey tool Qualtrics (31). Senior staff from three health-related organizations, two government and one non-government, invited all policy makers via email to complete the survey. The selection of participants was not reliant on age, gender, or policy experience; however, the survey was only distributed to staff who were not involved in the qualitative interviews. Participants were not offered any incentives for completing the survey. The survey was accessible to participants for 6 months. In the time the survey was accessible, 58 participants began the survey, but only 28 participants provided responses to all questions.
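The survey analysis described in the Methods excluded missing data at the item level and summarized responses with descriptive statistics (the authors used Microsoft Excel). For readers who prefer a scripted equivalent, a minimal sketch in Python is given below; the question names and answer categories are invented for illustration only and do not come from the study instrument.

```python
# Hypothetical illustration of the survey's descriptive analysis:
# respondents with a missing answer are excluded at the item level
# (not dropped entirely), and response percentages are computed per
# question. Question names and answers below are invented examples.

from collections import Counter

responses = [
    {"q_barrier": "political", "q_source": "systematic review"},
    {"q_barrier": "time",      "q_source": None},  # missing: excluded for q_source only
    {"q_barrier": "political", "q_source": "qualitative"},
    {"q_barrier": None,        "q_source": "systematic review"},
]

def item_percentages(records, item):
    """Percentage of non-missing answers per category for one survey item."""
    answers = [r[item] for r in records if r[item] is not None]
    counts = Counter(answers)
    n = len(answers)
    return {a: round(100 * c / n) for a, c in counts.items()}

print(item_percentages(responses, "q_barrier"))
# prints {'political': 67, 'time': 33}
```

Excluding missing data per item rather than per respondent, as the study did, retains the partial responses of the 30 participants who did not answer every question of the survey they started.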
Measures
The focus of the survey was barriers to knowledge uptake, knowledge needs at the time of the survey, and the accessibility of information. The survey comprised 18 questions. Of these, six were multiple-choice, five were rating scales, six were open-ended, and one was a combination of multiple-choice and open-ended (included in Supplementary Material). The questions in the survey were developed after examination of surveys that had been used in related studies, consideration of results of the qualitative interviews, and after consultation with collaborators from the University, the Health Department, and Alzheimer’s Australia.

Analysis
Participants with missing data were excluded at the item level. Data were analyzed using Microsoft Excel. Descriptive statistics and bar graphs were used to illustrate response patterns to survey items.

RESULTS

Qualitative Interviews with Policy Makers
All but one participant (95%) reported that policies and program decisions were often based on work, including current programs or policies, which had been done in other jurisdictions or by other organizations that were presumed to have better resources for seeking evidence (e.g., work tendered to university researchers, larger organizations). In this context, respondents reported that greater emphasis was placed on the experience of running the program or implementing the policy than on the evidence base behind it, which was typically not systematically checked. As an example, one participant noted a program implemented in another state that was “taken up” and resulted in a lot of problems. Subsequent contact with those who had set up the original program revealed that they too had had a lot of problems but had not reported them.

No respondents identified a systematic approach to gathering evidence for policy. Fourteen participants (70%) mentioned that part of their research strategy included talking to experts, including academics and consultants. Eleven participants (55%) gained most of their information from consumer input and subscribed to publications by the Australian Healthcare and Hospitals Association. Academic journals and institutional research/library services were only used by five participants (25%).

Twelve participants (60%) mentioned politics or political agenda as a significant contributor to the policy formation process. The political agenda may drive what research is used and what is not, regardless of the quality of the research. For example, one participant said “… the politicians are wanting to say ‘we’ve made a decision, this is what we’re going to do’ … and if there are votes in it [the politicians] will do it regardless of the evidence” (Government Health Middle Manager). Eleven participants (55%) also cited consumer or community views as another policy driver.

Eight participants (40%) discussed that there was not a great understanding of what constitutes good or strong evidence. For example, one participant said “I think it’s a bit of an issue that we’ve seen in terms of being able to identify well what is good evidence, what’s real evidence, what evidence should you use for a policy … what evidence should you be using to back that up. Don’t just go to a website and copy something – that happens, you know, which is not very good but it happens” (Government Health Project Officer). Policy makers identified the following as the most common factors affecting evidence choice: the type of evidence (60%), the reputation of the evidence source (55%), the quality of the evidence (45%), and local applicability (40%).

Only three participants (15%) knew of grading systems, and they did not use grading systems to evaluate evidence. Two of these participants discussed the mismatch between grading systems and policy, with RCTs not necessarily being applicable in policy decision-making, but rather social research being more likely to inform a policy decision. One of the participants highlighted this mismatch and the use of systematic reviews, stating: “it’s hard to find any RCTs for the issues we’re after and whether they’re appropriate anyway in some contexts … in terms of policy what’s really good is a Cochrane review or something that’s looked at a bunch of things across everywhere and synthesized it and so then you can look at what the general opinion or picture looks like” (Government Health Middle Manager).

The most cited barriers to using robust evidence were political agenda (60%), time limits (55%), funding (50%), and research not being applicable to current policies (50%). For example, one participant stated “research takes time, as well as money and effort …. Policies have a different timeframe. So if a government is going to move in a particular area, or feels inclined or compelled that it needs to come up with something, it might not be able to wait for research” (Government Health Senior Manager). Two participants also stated that government department employees were risk averse and so would “perpetuate current practice” rather than suggesting and evaluating “original ideas” based on new research.

When policy makers were asked what could improve the use of evidence in developing policy, six participants (30%) stated that there should be more “links” or collaborations between government staff and researchers. According to one policy maker, these linkages “would make policy development a lot easier because you would have shown quite clearly due to the collaborative nature of the research that you’ve considered a large number of things and it would seem to provide a very solid finding because of that” (NGO Manager). Two participants (10%) stated that being able to access collated information would be helpful as it would reduce the amount of time spent looking for applicable research.

Qualitative Interviews with Epidemiologists
Seven epidemiologists were asked to participate; however, only six agreed and completed the interview. All interviewees had a post-doctoral degree, and all but one were researchers from an Australian university. There was an even number of male and female respondents.

All respondents reported that they had heard of grading systems but tended not to use them to evaluate research evidence; rather, they had their own ways of evaluating evidence. One participant stated that they evaluated studies from first principles (clearly defined research question, clear and appropriate methods, high participation rates, appropriate analysis, and conclusions), and
What Evidence Do Policy Makers Use in the Policy Process?
Systematic reviews were the preferred research method in the policy-making process. Observational research came last and was ranked as the lowest quality. Previous research has found that policy makers perceived systematic reviews as better suited to identifying gaps in existing research rather than providing answers to policy questions (32). This research also found that systematic reviews were useful only when they had been commissioned to support policy decisions that had already been made, rather than to inform the decision-making process of which policy option is most effective. Systematic reviews may be favored by policy actors because of their potential to save time and other resources and because they are seen as a credible source of information.

The information provided by policy makers about the use of academic resources in the policy process is inconsistent. In the web survey, all responding participants indicated that academic research was the most often utilized evidence source in the policy process. However, in the interviews, only one-quarter of participants stated that they referred to academic journals when gathering evidence for policy. Furthermore, participants from the web survey stated that academics and existing academic research were the most difficult evidence source to understand. This difference between responses may indicate that what policy makers think they are using, or what they should ideally be using, is not what they actually use, and that they may not fully understand the academic research that they are using. Studies with similar results found that respondents did not use academic literature because they did not have access to libraries or online journals, they were not trained in how to use academic search engines, and because they found academic literature complex and frequently contradictory (15).

Results from both the web survey and interviews found that the majority of policy makers used work which had been done by other jurisdictions or organizations as a base for policy and program decisions. The use of other jurisdictions’ programs/policies may be a feasible option as it fits with the policy environment and provides a sense of security that the intended outcomes will be achieved within the decided timelines. However, as participants pointed out, this transferability either may not be applicable to the adopting jurisdiction, or key information and supporting evidence may not be provided by the other jurisdiction. Given that respondents stated they usually did not check the quality of the evidence behind these programs, or the applicability of this evidence to the situation, the policy/program objective may not be met.

Respondents from both the interviews and web survey also mentioned that other internal staff members were one of the most frequently utilized sources of evidence, and that part of their research strategy included talking to others, such as experts or consultants. This has been found in previous studies and may be a way of gaining accurate information quickly (9, 14, 33).

Policy makers’ reliance on peers and other jurisdictions, rather than evidence, could indicate several possible characteristics of policy makers. First, this might suggest that respondents are assuming that someone else has checked and evaluated the evidence. Second, policy makers may lack the skills to evaluate the evidence themselves, or lack confidence in their own skills. A third possibility, cited by two participants, is that individuals within the government health department are risk averse. Individuals may feel that the culture within the public service discourages innovative programs and policies, as such evaluations may fail and result in damage to the government’s and individuals’ reputations.

Previous research has found that political opinion or targets influenced the adoption of particular policies or programs (13). Within this study, most policy makers mentioned politics or political agenda as a significant driver in the policy formation process, followed closely by consumer or community views.

Ritter’s study (15) found that the internet, notably “Google,” and statistical data were the third and fourth most frequently mentioned sources used by policy makers. Policy makers did not mention the use of the internet in our quantitative survey, and only one participant mentioned it in the qualitative interviews. The majority of respondents in our study indicated that they did not use their own departmental epidemiological reports. Our results may differ from Ritter’s because we did not explicitly ask about internet or statistical data use or because participants were hesitant to discuss their usage of these sources.

How Do Policy Makers Evaluate the Quality of Evidence and How Does It Compare to Epidemiologists’ Evaluation?
Just under half of participants in the interviews discussed that there was not a great understanding among policy makers of what makes good quality evidence. This has been found in previous studies (23). Although some respondents had heard of grading systems, neither the policy makers nor the epidemiologists whom we interviewed used them. Rather, policy makers and epidemiologists had their own ways of evaluating evidence. Although grading systems may not identify the most appropriate research methodology, their usage enables a standardized, comprehensive, transparent, and easily communicated way of rating the quality of evidence for different policy decisions and the strength of recommendations, and could improve decision-making processes (34).

Both parties agreed that RCTs, followed by systematic reviews, provided the highest quality evidence and that observational research was ranked the lowest. However, both policy makers and epidemiologists cited problems with using RCTs in their respective fields. For policy makers, RCTs were not applicable in the policy decision-making process, whereas epidemiologists had methodological issues with RCT designs (e.g., limited generalizability and bias). These findings are similar to previous research (20, 23).

Barriers to Use of Evidence in Policy Making
The most cited barriers to using robust evidence were political agenda and time limits. Previous research has also found that the short time periods, or need for action, within the policy making sphere meant that decisions were often made whether “strong”
evidence was there or not (13, 35). Research can take up to 3 years to be published following data collection, so by the time it is made available the information may be out of date or less useful to policy makers (4).

Half of our policy-maker participants stated that barriers to using research were lack of funding and research not being applicable to current policy. This has been found in previous research (35, 36). It has been suggested that in order to overcome these barriers there should be a dialog between researchers and policy makers before the study design is carried out. As policy decisions may be influenced by pragmatic considerations, such as cost (13), researchers should be made aware of these considerations and build research and recommendations that accommodate them.

Enablers to Use of Evidence in Policy Making
The establishment of more links, or collaborations, between policy makers and researchers was cited by one-third of policy makers as a way to improve the use of evidence in the policy-making process. This strategy has been frequently discussed in previous research (7, 35). Previous research has identified that policy makers use sources that are highly accessible and prefer summative information that uses plain language and clear data (7, 15). Two policy makers in our study discussed having access to evidence collated within a single source. We think this type of information source would not only reduce the amount of time policy makers spend gathering evidence but could also be used to help policy makers identify strong evidence, based on the methodological considerations discussed by epidemiologists in this study. The authors have developed a web-based tool designed to help policy makers find and evaluate evidence (37). This tool will integrate the views of epidemiologists and policy makers on observational evidence and provide policy makers with the skills needed to understand and critically appraise research, which is a specific practice of KT (3).

This study has some limitations. First, the sample size was small and only a few organizations within a single Australian provincial-level jurisdiction were surveyed; as such, the findings may not be more widely generalizable. Second, due to the survey design, we could not analyze how policy makers’ level of research training affected their use of scientific research. For example, it is possible that those with a specific health-related Masters or postgraduate degree are more likely to use peer-reviewed literature. Despite these limitations, this study gathered data on a process about which little is known or understood. Furthermore, it used different methodologies in order to gain a more comprehensive understanding of the issue, and different organizations from both government and non-government were involved.

Recommendations

For Policy Makers
To facilitate the use and assessment of academic research in the policy-making process, we have four recommendations. The first is to build policy makers’ capacity to appraise evidence, through strategies such as training and participation in internships. This recommendation is based on our finding that policy makers did not have a great understanding of what makes good quality evidence, nor did they use a standardized way of evaluating evidence. As only a small number of policy makers in this study referred to academic sources, the second recommendation is to ensure that policy makers can access robust sources of scientific evidence, for example online peer-reviewed journals. Third, because the policy and scientific processes occur on different time scales, which policy makers in this study cited as a barrier to using robust evidence, the sharing of evidence between researchers and public servants should be facilitated through new channels and ways of conducting business. This is particularly important for health issues for which the scientific data may vary substantially over time. Finally, we recommend developing mechanisms through which scientists with specific expertise are invited into a particular department for a “scientific chat” to openly discuss planned policies. This would be particularly useful in cases where commissioning new research would take too long but where substantial “soft” evidence is already available in the scientific field.

For Researchers
Based on our findings that policy makers cited researchers and existing academic research as one of the most difficult evidence sources to understand, and that a barrier to using robust evidence was research not being applicable to current policies, we have three recommendations for researchers. The first is to build awareness among researchers producing policy-relevant material that this information cannot be communicated exclusively through typical scientific dissemination processes (e.g., conference presentation, peer-reviewed publication). Furthermore, academic research with policy-relevant material should include a clearly identified policy-relevant section that can easily be identified by policy makers, and the language and statistics included should be tailored in a way that makes them usable by policy makers. Second, training on the production of, and effective ways to communicate, policy-relevant material in scientific research should be provided to researchers. Finally, forums where scientists and policy makers can interact should be established, and their viability and effectiveness demonstrated.

CONCLUSION
This study has found that neither policy makers nor epidemiologists are using grading systems to evaluate evidence; rather, each have their own ways of assessing the evidence. Both policy makers and epidemiologists recognized that RCTs were usually at the top of these hierarchies, but that RCTs were not always the most efficient or applicable evidence upon which to base population health policies and that there were some problems with RCT designs. Policy makers in this study demonstrated a good understanding that they need to have an evidence base, that it is an important part of the process, and that it justifies the policy. However, the time and resources to form that evidence base, as well as an understanding of what constitutes good evidence and
how to evaluate it, was lacking. This study is limited by its small sample size; however, by combining in-depth interviews with the web survey, it provides more, and often conflicting, information than previous research based on survey data alone. Finally, this study focused on the use of observational evidence and interviewed only one type of public health researcher, academic epidemiologists. By using this approach, the authors have not examined the use of intervention research, which provides direct evidence on how to produce change and which may be more relevant to policy makers (38). Findings from this study demonstrate that scientific information needs to be more systematically available to policy makers and that efforts should be directed toward increasing the communication between researchers and policy makers.

AUTHOR CONTRIBUTIONS
The study concept and design was done by KA, NC, and PK. All authors contributed to the analysis and interpretation of data and drafting of the manuscript. LJ conducted all statistical analysis. All authors have read the final paper and have agreed to be listed as authors.

ACKNOWLEDGMENTS
The authors wish to acknowledge the project partner from Alzheimer’s Australia, Dr Chris Hatherly, and Professor Gabriele Bammer for their contribution to the survey and interview questions. The authors also thank the study interviewers, Michelle Irving and Karemah Francois, and the study participants.

FUNDING
This study is supported by the Australian Research Council Centre of Excellence in Population Ageing Research (project number CE110001029) and the Australian Research Council Linkage Project (Project ID LP120200609). NC is funded by ARC Fellowship (#FT120100227). KA is funded by NHMRC Fellowship (#1002560).

SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at http://journal.frontiersin.org/article/10.3389/fpubh.2016.00267/full#supplementary-material.
26. DiPietro NA. Methods in epidemiology: observational study designs. Pharmacotherapy (2010) 30(10):973–84. doi:10.1592/phco.30.10.973

27. Tang JL, Griffiths S. Review paper: epidemiology, evidence-based medicine, and public health. Asia Pac J Public Health (2009) 21(3):244–51. doi:10.1177/1010539509335516

28. Davis FG, Peterson CE, Bandiera F, Carter-Pokras O, Brownson RC. How do we more effectively move epidemiology into policy action? Ann Epidemiol (2012) 22(6):413–6. doi:10.1016/j.annepidem.2012.04.004

29. Dreyer NA, Tunis SR, Berger M, Ollendorf D, Mattox P, Gliklich R. Why observational studies should be among the tools used in comparative effectiveness research. Health Aff (2010) 29(10):1818–25. doi:10.1377/hlthaff.2010.0666

30. QSR International Pty Ltd. NVivo Qualitative Data Analysis Software. 10 ed. Melbourne: QSR International (2012).

31. Qualtrics. Qualtrics Software. November 2013 Ed. Provo, UT: Qualtrics (2013).

32. Smith KE, Stewart E. ‘Black magic’ and ‘gold dust’: the epistemic and political uses of evidence tools in public health policy making. Evid Policy (2015) 11(3):415–38. doi:10.1332/174426415X14381786400158

33. Haynes AS, Gillespie JA, Derrick GE, Hall WD, Redman S, Chapman S, et al. Galvanizers, guides, champions, and shields: the many ways that policymakers use public health researchers. Milbank Q (2011) 89(4):564–98. doi:10.1111/j.1468-0009.2011.00643.x

34. Vogel JP, Oxman AD, Glenton C, Rosenbaum S, Lewin S, Gülmezoglu AM, et al. Policymakers’ and other stakeholders’ perceptions of key considerations for health system decisions and the presentation of evidence to inform those considerations: an international survey. Health Res Policy Syst (2013) 11(1):19. doi:10.1186/1478-4505-11-19

35. Campbell DM, Redman S, Jorm L, Cooke M, Zwi AB, Rychetnik L. Increasing the use of evidence in health policy: practice and views of policy makers and researchers. Aust New Zealand Health Policy (2009) 6(1):21. doi:10.1186/1743-8462-6-21

36. Samet JM, Lee NL. Bridging the gap: perspectives on translating epidemiologic evidence into policy. Am J Epidemiol (2001) 154(12 Suppl):S1–3. doi:10.1093/aje/154.12.S1

37. Centre for Research on Ageing Health and Wellbeing. Learning to Evaluate Evidence for Policy. Canberra: Australian National University (2015). Available from: http://leep.anu.edu.au

38. Sanson-Fisher RW, Campbell EM, Perkins JJ, Blunden SV, Davis BB. Indigenous health research: a critical review of outputs over time. Med J Aust (2006) 184(10):502–5.

Conflict of Interest Statement: PK is employed by the ACT Health Directorate as a policy maker, is a co-investigator in the study, and is an author. The other authors declare no conflict of interest.

Copyright © 2016 O’Donoughue Jenkins, Kelly, Cherbuin and Anstey. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.