Evaluating and Using Observational Evidence - The Contrasting Views of Policy Makers and Epidemiologists


Original Research
Published: 06 December 2016
doi: 10.3389/fpubh.2016.00267

Lily O'Donoughue Jenkins1*, Paul M. Kelly2,3, Nicolas Cherbuin1 and Kaarin J. Anstey1

1 Centre for Research on Ageing, Health and Wellbeing, Australian National University, Canberra, ACT, Australia
2 ACT Health Directorate, Canberra, ACT, Australia
3 Australian National University Medical School, Canberra, ACT, Australia

Background: Currently, little is known about the types of evidence used by policy makers. This study aimed to investigate how policy makers in the health domain use and evaluate evidence and how this differs from academic epidemiologists. By having a better understanding of how policy makers select, evaluate, and use evidence, academics can tailor the way in which that evidence is produced, potentially leading to more effective knowledge translation.

Methods: An exploratory mixed-methods study design was used. Quantitative measures were collected via an anonymous online survey (n = 28), with sampling from three health-related government and non-government organizations. Semi-structured interviews with policy makers (n = 20) and epidemiologists (n = 6) were conducted to gather qualitative data.

Results: Policy makers indicated systematic reviews were the preferred research resource (19%), followed closely by qualitative research (16%). Neither policy makers nor epidemiologists used grading instruments to evaluate evidence. In the web survey, policy makers reported that consistency and strength of evidence (93%), the quality of data (93%), bias in the evidence (79%), and recency of evidence (79%) were the most important factors taken into consideration when evaluating the available evidence. The same results were found in the qualitative interviews. Epidemiologists focused on the methodology used in the study. The most cited barriers to using robust evidence, according to policy makers, were political considerations (60%), time limitations (55%), funding (50%), and research not being applicable to current policies (50%).

Conclusion: Policy makers did not report a systematic approach to evaluating evidence. Although there was some overlap between what policy makers and epidemiologists identified as high-quality evidence, there were also some important differences. This suggests that the best scientific evidence may not routinely be used in the development of policy. In essence, the policy-making process relied on other jurisdictions' policies and the opinions of internal staff members as primary evidence sources to inform policy decisions. Findings of this study suggest that efforts should be directed toward making scientific information more systematically available to policy makers.

Keywords: policy making, knowledge translation, evidence-based practice, government, mixed-methods research

Edited by: Alastair James Fischer, NICE, UK
Reviewed by: Emmanuel D. Jadhav, Ferris State University, USA; Janya McCalman, Central Queensland University, Australia
*Correspondence: Lily O'Donoughue Jenkins, [email protected]
Specialty section: This article was submitted to Public Health Policy, a section of the journal Frontiers in Public Health
Received: 09 September 2016; Accepted: 14 November 2016; Published: 06 December 2016
Citation: O'Donoughue Jenkins L, Kelly PM, Cherbuin N and Anstey KJ (2016) Evaluating and Using Observational Evidence: The Contrasting Views of Policy Makers and Epidemiologists. Front. Public Health 4:267. doi: 10.3389/fpubh.2016.00267
Abbreviations: ACT, Australian Capital Territory; EBM, evidence-based medicine; HREC, Human Research Ethics Committee; KT, knowledge translation; RCT, randomized controlled trial; NGO, non-government organization.

Frontiers in Public Health | www.frontiersin.org 1 December 2016 | Volume 4 | Article 267


Jenkins et al. Evaluating and Using Observational Evidence

INTRODUCTION

There has been increasing discussion that, in order to improve public health outcomes, quality scientific research should be used throughout the development of health policies (1). The process of disseminating academic research to policy makers is referred to as knowledge translation (KT) or knowledge exchange (2). The process of KT involves many activities and specific practices, including producing synthesized research aimed at informing policy, writing plain language summaries of findings, and spending time with users to understand their context and research needs (3). It is believed that if KT is done effectively then the use of scientific evidence in policy and practice decisions will be increased (4).

In the "real world" of policy making, scientific research is just one of many types of information used (5). Policy makers interpret and "use" evidence in a broad sense (e.g., non-research data such as public health surveillance data and strategic needs assessments) (6). There is also a range of political, economic, and social drivers which affect decisions during policy development. In order to support a particular policy agenda, while also managing the competing interests of diverse stakeholders, policy makers may use specific information without giving consideration to all the available evidence (4, 7) or may not be able to directly translate the findings, or recommendations, from epidemiological research into action within their particular context (8).

Previous research has focused on the apparently low uptake of academic research by policy makers, with particular attention given to understanding how and under what circumstances policy makers access and use academic evidence (9). However, the needs and practices of policy makers are rarely the subject of rigorous study and are likely to be more complex and nuanced than can be captured in surveys (6). For example, three systematic reviews (10–12) discussed the facilitators and barriers to the use of evidence in policy making and identified that policy makers use a broad range of evidence. These studies could not find reliable evidence of how much policy makers use academic research in the policy-making process or how the definition of evidence by policy makers differs from the conceptualization of what is classified as evidence by researchers. As such, we require a clearer understanding of how policy makers define and use evidence (13).

Currently, little is known about what types of information and evidence are normally used as part of the policy development process or the extent to which political agendas and budgetary constraints influence the design and choice of policy options (6, 9, 14). In one of the few studies investigating the sources of research evidence that policy makers in government accessed when making a decision, academic literature was one of the least frequently used sources, along with internal expertise, policy documents, and employing a consultant (15). A study by Head et al. (9) found that the most valued source was the knowledge of their immediate colleagues (93%). Their study also found that over 40% of policy makers reported that academic research was used in informing policy and legitimizing policy choices. However, the majority also stated that policy making was overwhelmingly driven by budgetary considerations (83%), political acceptability of decisions (80%), and responding to urgent day-to-day issues rather than "long-term" thinking (75%) (9).

By investigating the type of research that policy makers use to inform policy decisions, how they identify evidence, and what other factors may influence policy decisions, we can identify what information is viewed as more relevant and timely (6). This may contribute to researchers better tailoring their research to policy makers' needs and thus improving KT processes and the take-up of scientific evidence in the policy development process.

There is also a need to investigate how policy makers select and evaluate the quality of evidence. One way of selecting and evaluating evidence is by using an "evidence hierarchy." This hierarchy may consider certain types of experimental research, for example randomized controlled trials (RCTs) and systematic reviews of RCTs, as highest in methodological quality (16). Researchers and clinicians use particular grading instruments to grade the quality of evidence. An example of such an instrument is Grading of Recommendations, Assessment, Development and Evaluation, which evaluates biomedical evidence based on risk, burden, and cost of intervention (17).

Although the use of evidence hierarchies and grading systems may provide an easier, or at least more streamlined, way of identifying high-quality evidence, in many situations RCTs may not be the most appropriate research methodology to answer specific policy questions, particularly in the sphere of public health. For example, findings from RCTs do not usually take into account the political, social, or economic context (18–20). RCTs may also not be a practical, or ethical, research option (e.g., research in smoking, HIV, or dementia) (21). Finally, the results of RCTs may not be easily applied to the general population or specific individuals (22). Due to these factors, policy makers often use a different hierarchy of evidence than researchers (23). For example, policy makers may consider the strongest evidence to be that from systematic reviews, as they provide an overview of scientific studies which meet explicit criteria. Yet single studies and evaluations are more commonly used to support policy than systematic reviews, possibly because systematic reviews are not available due to time constraints or lack of sufficient evidence (23).

Epidemiological data and research are typically valued highly as "objective" or "hard" data compared to qualitative data or case studies (24). Findings derived from epidemiological research are perceived to be the most relevant indicator of adverse effects in humans (25) and inform public health, such as health promotion and health policy and planning (26). Public health practice is mostly based on observational epidemiological research, such as cohort, case-control, and cross-sectional studies, rather than RCTs (27). Observational epidemiological research has multiple advantages, for example large sample sizes and longer follow-up periods. It can also provide a powerful argument for change by using local data and can impact policy to address emerging public health problems (28). However, epidemiological findings may not be in a form that is useful or easily understood by policy makers, for example lengthy research reports with data at a state or country level (20), or policy makers may be hesitant to use them due to the chance of bias and confounding (29).


Given the importance of promptly incorporating new and robust scientific evidence into policy and the barriers to KT identified above, there is an urgent need to better understand how policy makers evaluate and use evidence. Therefore, the current study had two intersecting aims. The first aim was to gain an understanding of the role of research evidence in policy making. The second was to investigate how policy makers in the health domain select this evidence and whether they systematically assess evidence quality, and how this differs from academic epidemiologists.

MATERIALS AND METHODS

An exploratory mixed-methods study design was used in order to provide a deeper understanding. The design involved the collection and analysis of two sets of qualitative interviews (n = 13 and 6) and one quantitative survey (n = 28). Both interviews and survey are included in Supplementary Material. Written informed consent was obtained from all participants prior to involvement in the study. The Australian National University Human Research Ethics Committee (HREC) and the Australian Capital Territory (ACT) Health HREC Survey Resource Group approved the study.

Qualitative Interviews with Policy Makers
Participants and Recruitment
The first set of interviews focused on a purposive heterogeneous sample of 20 people who worked in policy. Thirteen participants were from the ACT Government Health Directorate, four were from non-governmental organizations, one from a national Australian government department, and two from Australia's peak research funding body, the National Health and Medical Research Council. Individuals were invited to participate in the study if they had any previous experience contributing to the development and implementation of health policy or programs relating to risk factors for chronic disease, mental health, or aging. Executive Directors from ACT Health identified participants and invited them via email to participate. Individuals who responded and consented to participating were then contacted by the ANU researchers. Participants were selected irrespective of policy background, time spent in organizational roles, or seniority. Participants from ACT Health were from a wide range of policy units, including Women's Youth and Child Health; Aboriginal and Torres Strait Islander; Alcohol and Other Drugs; Rehabilitation, Aged and Community Care; and Population Health.

Measures
One-on-one semi-structured interviews were conducted with participants focusing on understanding: (1) how policy makers locate and use evidence from observational and other research; (2) factors influencing their choice of evidence sources; (3) how policy makers deal with conflicting evidence on specific topics; (4) how policy makers evaluate the quality of research; and (5) how policy makers view researchers. The interviews also sought information on perceived barriers to KT. The interview questions were developed in consultation with research experts and senior staff from the health department, Alzheimer's Australia, and NHMRC. Interviews were recorded and transcribed, and took approximately 1 h each.

Analysis
The transcribed interviews were uploaded into NVivo (30) and thematically analyzed. Themes included evidence sources, choice of evidence sources, confusion about policy, evaluating policy, grading evidence, policy drivers, policy process, policy maker concerns, and barriers affecting KT. Average and percentage calculations were also applied.

Qualitative Interviews with Epidemiologists
Participants and Recruitment
The second set of interviews focused on a purposive sample of chronic disease experts, known to the authors, who were approached to provide their views regarding the characteristics of high-quality observational research, their opinion about the currently available evidence rating systems, and the implications of grading observational research.

Measures
Participants were asked to answer seven open-ended questions in paper form seeking their views on: (1) what constitutes high-quality observational research and how it compares with other types of research; (2) their understanding and use of current rating systems for grading evidence; and (3) the consequences of inappropriately rating observational research. Participants were provided with a one-page guide to grading instruments to clarify what the authors defined as a grading instrument (included in Supplementary Material).

Analysis
Thematic analysis using a step-by-step process was conducted to analyze the interviews. The interview transcripts were repeatedly screened in order to identify a list of recurring themes that appeared critical to evaluating evidence. These themes included evidence sources, choice of evidence sources, evaluating evidence, grading evidence, and observational research. Average and percentage calculations were also applied.

Quantitative Survey
Participants and Recruitment
An anonymous survey was compiled using the online survey tool Qualtrics (31). Senior staff from three health-related organizations, two government and one non-government, invited all policy makers via email to complete the survey. Selection of participants was not reliant on age, gender, or policy experience; however, the survey was only distributed to staff who were not involved in the qualitative interviews. Participants were not offered any incentives for completing the survey. The survey was accessible to participants for 6 months. In that time, 58 participants began the survey, but only 28 provided responses to all questions.
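The "average and percentage calculations" applied to the coded interviews amount to simple tallies: for each theme, count how many participants were coded with it and divide by the sample size (e.g., "14 participants (70%)"). A minimal sketch of that arithmetic in Python — the theme labels, participant IDs, and counts below are hypothetical illustrations, not the study's data:

```python
# Tally coded themes across interview participants and report the share of
# participants mentioning each theme. All names and numbers are illustrative.

def theme_percentages(codings: dict[str, set[str]], n_participants: int) -> dict[str, int]:
    """Map each theme to the rounded percentage of participants coded with it."""
    return {theme: round(100 * len(ids) / n_participants)
            for theme, ids in codings.items()}

if __name__ == "__main__":
    # Hypothetical coding export: theme -> set of participant IDs.
    codings = {
        "talking to experts": {f"P{i}" for i in range(1, 15)},  # 14 of 20
        "consumer input": {f"P{i}" for i in range(1, 12)},      # 11 of 20
        "academic journals": {f"P{i}" for i in range(1, 6)},    # 5 of 20
    }
    for theme, pct in theme_percentages(codings, 20).items():
        print(f"{theme}: {pct}%")  # prints 70%, 55%, 25%
```

Coding software such as NVivo can export which participants were coded with each theme; the rounding convention here is an assumption, not something the paper specifies.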


Measures
The focus of the survey was barriers to knowledge uptake, knowledge needs at the time of the survey, and the accessibility of information. The survey comprised 18 questions. Of these, six were multiple-choice, five were rating scales, six were open-ended, and one was a combination of multiple-choice and open-ended (included in Supplementary Material). The questions in the survey were developed after examination of surveys that had been used in related studies, consideration of the results of the qualitative interviews, and consultation with collaborators from the University, the Health Department, and Alzheimer's Australia.

Analysis
Participants with missing data were excluded at the item level. Data were analyzed using Microsoft Excel. Descriptive statistics and bar graphs were used to illustrate response patterns to survey items.

RESULTS

Qualitative Interviews with Policy Makers
All but one participant (95%) reported that policy and program decisions were often based on work, including current programs or policies, which had been done in other jurisdictions or by other organizations that were presumed to have better resources for seeking evidence (e.g., work tendered to university researchers, larger organizations). In this context, respondents reported that greater emphasis was placed on the experience of running the program or implementing the policy than on the evidence base behind it, which was typically not systematically checked. As an example, one participant noted a program implemented in another state that was "taken up" and resulted in a lot of problems. Subsequent contact with those who had set up the original program revealed that they too had had a lot of problems but had not reported them.

No respondents identified a systematic approach to gathering evidence for policy. Fourteen participants (70%) mentioned that part of their research strategy included talking to experts, including academics and consultants. Eleven participants (55%) gained most of their information from consumer input and subscribed to publications by the Australian Healthcare and Hospitals Association. Academic journals and institutional research/library services were only used by five participants (25%).

Twelve participants (60%) mentioned politics or the political agenda as a significant contributor to the policy formation process. The political agenda may drive what research is used and what is not, regardless of the quality of the research. For example, one participant said "… the politicians are wanting to say 'we've made a decision, this is what we're going to do' … and if there are votes in it [the politicians] will do it regardless of the evidence" (Government Health Middle Manager). Eleven participants (55%) also cited consumer or community views as another policy driver.

Eight participants (40%) discussed that there was not a great understanding of what constitutes good or strong evidence. For example, one participant said "I think it's a bit of an issue that we've seen in terms of being able to identify well what is good evidence, what's real evidence, what evidence should you use for a policy … what evidence should you be using to back that up. Don't just go to a website and copy something – that happens, you know, which is not very good but it happens" (Government Health Project Officer). Policy makers identified the following as the most common factors affecting evidence choice: the type of evidence (60%), the reputation of the evidence source (55%), quality of the evidence (45%), and local applicability (40%).

Only three participants (15%) knew of grading systems, and they did not use them to evaluate evidence. Two of these participants discussed the mismatch between grading systems and policy, with RCTs not necessarily being applicable in policy decision-making, but rather social research being more likely to inform a policy decision. One of the participants highlighted this mismatch and the use of systematic reviews, stating: "it's hard to find any RCTs for the issues we're after and whether they're appropriate anyway in some contexts … in terms of policy what's really good is a Cochrane review or something that's looked at a bunch of things across everywhere and synthesized it and so then you can look at what the general opinion or picture looks like" (Government Health Middle Manager).

The most cited barriers to using robust evidence were the political agenda (60%), time limits (55%), funding (50%), and research not being applicable to current policies (50%). For example, one participant stated "research takes time, as well as money and effort …. Policies have a different timeframe. So if a government is going to move in a particular area, or feels inclined or compelled that it needs to come up with something, it might not be able to wait for research" (Government Health Senior Manager). Two participants also stated that government department employees were risk averse and so would "perpetuate current practice" rather than suggesting and evaluating "original ideas" based on new research.

When policy makers were asked what could improve the use of evidence in developing policy, six participants (30%) stated that there should be more "links" or collaborations between government staff and researchers. According to one policy maker these linkages "would make policy development a lot easier because you would have shown quite clearly due to the collaborative nature of the research that you've considered a large number of things and it would seem to provide a very solid finding because of that" (NGO Manager). Two participants (10%) stated that being able to access collated information would be helpful as it would reduce the amount of time spent looking for applicable research.

Qualitative Interviews with Epidemiologists
Seven epidemiologists were asked to participate; however, only six agreed and completed the interview. All interviewees had a post-doctoral degree, and all but one was a researcher from an Australian university. There was an even number of male and female respondents.

All respondents said that they had heard of grading systems but tended not to use them to evaluate research evidence; rather, they had their own ways of evaluating evidence. One participant stated that they evaluated studies from first principles (clearly defined research question, clear and appropriate methods, high participation rates, appropriate analysis, and conclusions), and


another admitted to giving more credibility to studies published in prestigious journals as they tended to undergo more rigorous peer review and methodological editing.

All respondents noted that, although RCTs are considered at the top of the hierarchy of evidence and observational research lower, RCTs are not necessarily the most efficient or applicable evidence. Respondents found several problems with using RCTs, including unsuitable research questions (e.g., environmental and health-related research questions), limited generalizability, and bias. All respondents argued that it is more important to look at the design and conduct of the study – for example, cohort size, duration, and evaluation of relevant covariates/confounders – than at what rating the evidence has.

Responses on what constituted high-quality observational research all focused on the rigor of the methodology. All respondents agreed that high-quality observational research should address bias and ensure that the data are valid. Four respondents also argued that the sample had to be large and representative of the target population.

Quantitative Survey with Policy Makers
The majority of respondents were aged between 35 and 44 years (32%), followed closely by 45–54 (29%) and 55–64 years (25%). Respondents were mostly female (71%) and had completed a postgraduate qualification (82%). Of the 28 participants who responded to all questions, 13 (46%) described their level within the organization as "middle management or project/policy officer with some management responsibilities."

When asked to indicate preferred research methods, respondents (19%) indicated that systematic reviews were the preferred research method. Qualitative research and RCTs followed with response rates of 16 and 13%, respectively. Only 7% of respondents indicated a preference for observational research.

The most easily understood sources of evidence were trusted organizations (96%), other internal staff (92%), consumer views (85%), policies from other jurisdictions (81%), and expert opinions (73%). The most difficult evidence sources to understand were researchers and existing academic research (42%) and internal statistical data (35%).

The most important factors taken into consideration when evaluating evidence are shown in Figure 1. When asked to identify how often evidence sources were utilized in the policy process, the subset of policy makers (40%) who responded to this question indicated that the most often used policy sources were: existing academic research (92%), other staff within the organization (92%), similar policy experience from other jurisdictions (85%), publications from trusted organizations (73%), and guidelines (58%). The majority of policy makers from the government health department (61%) indicated that they had not used their own departmental epidemiological reports in formulating new population health-relevant policy.

FIGURE 1 | The most important factors taken into consideration when evaluating evidence.

The relative ranking of specific research methods and data synthesis techniques, as indicated by policy makers, is shown in Figure 2. Policy makers' responses to the open-ended question of what (in their opinion) constitutes high-quality forms of evidence varied. Some responses included: articles published in reputable peer-reviewed journals; RCTs that can be related to and translated into practical clinical guidelines; systematic reviews; case studies (depending upon the research question); and sound methodology, clearly articulated, peer-reviewed research.

FIGURE 2 | The relative quality of specific research methods and data synthesis techniques.

DISCUSSION

The aims of this study were to gain an understanding of the role of research evidence in policy making, investigate how policy makers in the health domain select this evidence, and determine whether they systematically assess evidence quality. While use of evidence differs somewhat across policy makers, it appears that reliance on direct scientific evidence in the policy development process is low. Policy makers did not seem to have a methodical approach to evaluating evidence. Although there was some overlap between what policy makers and epidemiologists identified as high-quality evidence, there were also some important differences, which suggests that the best scientific evidence is not frequently used in the development of policy. Differences between epidemiologists and policy makers included the way evidence was evaluated and the importance placed on a study's methodology.
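The endorsement rates behind Figure 1 and similar results are multiple-response descriptive statistics: the share of respondents selecting each factor. The paper states these were computed in Microsoft Excel; the sketch below shows a rough pandas equivalent, where the column names and 0/1 responses are hypothetical illustrations, not the study's data:

```python
import pandas as pd

# One row per survey respondent; 1 = factor selected as important, 0 = not.
# Column names and values are illustrative, not the study's data.
responses = pd.DataFrame({
    "consistency_and_strength": [1, 1, 1, 0],
    "quality_of_data": [1, 1, 0, 1],
    "bias_in_evidence": [1, 0, 1, 0],
})

# Mean of a 0/1 column is the endorsement proportion; scale to percentages
# and sort so the largest bar comes first in a Figure 1-style chart.
pct = (responses.mean() * 100).sort_values(ascending=False)
print(pct)

# pct.plot.bar() would draw the corresponding bar graph (requires matplotlib).
```

With the four illustrative respondents above, the first two factors come out at 75% and the third at 50%; on real survey exports, item-level missing values would be dropped first, matching the paper's item-level exclusion.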


What Evidence Do Policy Makers Use in the Policy Process?
Systematic reviews were the preferred research method in the policy-making process. Observational research came last and was ranked as the lowest quality. Previous research has found that policy makers perceived systematic reviews as better suited to identifying gaps in existing research than to providing answers to policy questions (32). This research also found that systematic reviews were useful only when they had been commissioned to support policy decisions that had already been made, rather than to inform the decision-making process of which policy option is most effective. Systematic reviews may be favored by policy actors because of their potential to save time and other resources, and because they are seen as a credible source of information.

The information provided by policy makers about the use of academic resources in the policy process is inconsistent. In the web survey, all responding participants indicated that academic research was the most often utilized evidence source in the policy process. However, in the interviews, only one-quarter of participants stated that they referred to academic journals when gathering evidence for policy. Furthermore, participants in the web survey stated that academics and existing academic research were the most difficult evidence source to understand. This difference between responses may indicate that what policy makers think they are using, or what they should ideally be using, is not what they actually use, and that they may not fully understand the academic research that they are using. Studies with similar results found that respondents did not use academic literature because they did not have access to libraries or online journals, they were not trained in how to use academic search engines, and they found academic literature complex and frequently contradictory (15).

Results from both the web survey and interviews found that the majority of policy makers used work which had been done by other jurisdictions or organizations as a base for policy and pro- … the evidence themselves, or lack confidence in their own skills. A third possibility, cited by two participants, is that individuals within the government health department are risk averse. Individuals may feel that the culture within the public service discourages innovative programs and policies, as such evaluations may fail and result in damage to the government's and individuals' reputations.

Previous research has found that political opinion or targets influenced the adoption of particular policies or programs (13). Within this study, most policy makers mentioned politics or the political agenda as a significant driver in the policy formation process, followed closely by consumer or community views.

Ritter's study (15) found that the internet, notably "Google," and statistical data were the third and fourth most frequently mentioned sources used by policy makers. Policy makers did not mention the use of the internet in our quantitative survey, and only one participant mentioned it in the qualitative interviews. The majority of respondents in our study indicated that they did not use their own departmental epidemiological reports. Our results may differ from Ritter's because we did not explicitly ask about internet or statistical data use, or because participants were hesitant to discuss their usage of these sources.

How Do Policy Makers Evaluate the Quality of Evidence and How Does It Compare to Epidemiologists' Evaluations?
Just under half of participants in the interviews discussed that there was not a great understanding among policy makers of what makes good-quality evidence. This has been found in previous studies (23). Although some respondents had heard of grading systems, neither the policy makers nor the epidemiologists whom we interviewed used them. Rather, policy makers and epidemiologists had their own ways of evaluating evidence. Although grading systems may not identify the most appropri-
gram decisions. The use of other jurisdictions programs/policies ate research methodology, their usage enables a standardized,
may be a feasible option as it fits with the policy environment and comprehensive, transparent, and easily communicated way of
provides a sense of security that the intended outcomes will be rating the quality of evidence for different policy decisions and
achieved within the decided timelines. However, as participants the strength of recommendations and could improve decision-
pointed out, this transferability either may not be applicable to making processes (34).
the adopting jurisdiction or key information and supporting Both parties agreed that RCTs, followed by systematic reviews,
evidence may not be provided by the other jurisdiction. Given provided the highest quality evidence and that observational
that respondents stated they usually did not check the quality research was ranked the lowest. However, both policy makers
of the evidence to these programs, or the applicability of this and epidemiologists cited problems with using RCTs in their
evidence to the situation, then the policy/program objective may respective fields. For policy makers, RCTs were not applicable
not be met. in the policy decision-making process, whereas epidemiolo-
Respondents from both the interviews and web survey also gists had methodological issues with RCT designs (e.g., limited
mentioned that other internal staff members were one of the generalizability and bias). These findings are similar to previous
most frequently utilized source of evidence, and that part of their research (20, 23).
research strategy included talking to others, such as experts or
consultants. This has been found in previous studies and may be
a way of gaining accurate information quickly (9, 14, 33). Barriers to Use of Evidence in
Policy maker’s reliance on peers and other jurisdictions, Policy Making
rather than evidence, could indicate several possible characteris- The most cited barrier to using robust evidence was political
tics of policy makers. First, this might suggest that respondents agenda and time limits. Previous research has also found that the
are assuming that someone else has checked and evaluated the short time periods, or need for action, within the policy making
evidence. Second, policy makers may lack the skills to evaluate sphere meant that decisions were often made whether “strong”

Frontiers in Public Health | www.frontiersin.org 6 December 2016 | Volume 4 | Article 267


Jenkins et al. Evaluating and Using Observational Evidence

evidence was there or not (13, 35). Research can take up to 3 years to be published following data collection, so by the time it is made available the information may be out of date or less useful to policy makers (4).

Half of our policy-maker participants stated that barriers to using research were lack of funding and research not being applicable to current policy, as has been found in previous research (35, 36). It has been suggested that, to overcome these barriers, there should be a dialog between researchers and policy makers before the study design is carried out. As policy decisions may be influenced by pragmatic considerations, such as cost (13), researchers should be made aware of these considerations and build research and recommendations that accommodate them.

Enablers to Use of Evidence in Policy Making

The establishment of more links, or collaborations, between policy makers and researchers was cited by one-third of policy makers as a way to improve the use of evidence in the policy-making process. This strategy has been frequently discussed in previous research (7, 35). Previous research has identified that policy makers use sources that are highly accessible and prefer summative information that uses plain language and clear data (7, 15). Two policy makers in our study discussed having access to evidence collated within a single source. We think this type of information source would not only reduce the amount of time policy makers spend gathering evidence but could also help them identify strong evidence, based on the methodological considerations discussed by the epidemiologists in this study. The authors have developed a web-based tool designed to help policy makers find and evaluate evidence (37). This tool will integrate the views of epidemiologists and policy makers on observational evidence and provide policy makers with the skills needed to understand and critically appraise research, which is a specific practice of KT (3).

This study has some limitations. First, the sample size was small and only a few organizations within a single Australian provincial-level jurisdiction were surveyed, so the findings may not generalize more widely. Second, due to the survey design, we could not analyze how policy makers' level of research training affected their use of scientific research; for example, it is possible that those with a specific health-related Masters or postgraduate degree are more likely to use peer-reviewed literature. Despite these limitations, this study gathered data on a process about which little is known or understood. Furthermore, it used different methodologies to gain a more comprehensive understanding of the issue, and different organizations from both government and non-government sectors were involved.

Recommendations

For Policy Makers
To facilitate the use and assessment of academic research in the policy-making process, we have four recommendations. The first is to build policy makers' capacity to appraise evidence through strategies such as training and participation in internships. This recommendation is based on our finding that policy makers did not have a strong understanding of what makes good quality evidence, nor did they use a standardized way of evaluating evidence. As only a small number of policy makers in this study referred to academic sources, the second recommendation is to ensure that policy makers can access robust sources of scientific evidence, for example online peer-reviewed journals. Third, because the policy and scientific processes occur on different time scales, which policy makers in this study cited as a barrier to using robust evidence, the sharing of evidence between researchers and public servants should be facilitated through new channels and ways of conducting business. This is particularly important for health issues for which the scientific data may vary substantially over time. Finally, we recommend developing mechanisms through which scientists with specific expertise are invited into a particular department for a "scientific chat" to openly discuss planned policies. This would be particularly useful in cases where commissioning new research would take too long but substantial "soft" evidence is already available in the scientific field.

For Researchers
Based on our findings that policy makers cited researchers and existing academic research as one of the most difficult evidence sources to understand, and that research not being applicable to current policies was a barrier to using robust evidence, we have three recommendations for researchers. The first is to build awareness among researchers producing policy-relevant material that this information cannot be communicated exclusively through typical scientific dissemination processes (e.g., conference presentations, peer-reviewed publications). Furthermore, academic research with policy-relevant material should include a clearly identified policy-relevant section that policy makers can easily find, and the language and statistics included should be tailored in a way that makes them usable by policy makers. Second, training on the production and effective communication of policy-relevant material in scientific research should be provided to researchers. Finally, forums where scientists and policy makers can interact should be established, and their viability and effectiveness demonstrated.

CONCLUSION

This study found that neither policy makers nor epidemiologists use grading systems to evaluate evidence; rather, each group has its own way of assessing evidence. Both policy makers and epidemiologists recognized that RCTs usually sit at the top of these hierarchies, but that RCTs were not always the most efficient or applicable evidence upon which to base population health policies and that there were some problems with RCT designs. Policy makers in this study demonstrated a good understanding that they need an evidence base, that it is an important part of the process, and that it justifies the policy. However, the time and resources to form that evidence base, as well as an understanding of what constitutes good evidence and


how to evaluate it was lacking. This study is limited by its small sample size; however, by combining in-depth interviews with the web survey, it provides more, and often conflicting, information than previous research has obtained using survey data alone. Finally, this study focused on the use of observational evidence and interviewed only one type of public health researcher, academic epidemiologists. As a result, the authors have not examined the use of intervention research, which provides direct evidence on how to produce change and which may be more relevant to policy makers (38). Findings from this study demonstrate that scientific information needs to be more systematically available to policy makers and that efforts should be directed toward increasing communication between researchers and policy makers.

AUTHOR CONTRIBUTIONS

The study concept and design were developed by KA, NC, and PK. All authors contributed to the analysis and interpretation of data and drafting of the manuscript. LJ conducted all statistical analysis. All authors have read the final paper and have agreed to be listed as authors.

ACKNOWLEDGMENTS

The authors wish to acknowledge the project partner from Alzheimer's Australia, Dr Chris Hatherly, and Professor Gabriele Bammer for their contribution to the survey and interview questions. The authors also thank the study interviewers, Michelle Irving and Karemah Francois, and the study participants.

FUNDING

This study is supported by the Australian Research Council Centre of Excellence in Population Ageing Research (project number CE110001029) and the Australian Research Council Linkage Project (Project ID LP120200609). NC is funded by an ARC Fellowship (#FT120100227). KA is funded by an NHMRC Fellowship (#1002560).

SUPPLEMENTARY MATERIAL

The Supplementary Material for this article can be found online at http://journal.frontiersin.org/article/10.3389/fpubh.2016.00267/full#supplementary-material.

REFERENCES

1. Zardo P, Collie A. Measuring use of research evidence in public health policy: a policy content analysis. BMC Public Health (2014) 14(1):496. doi:10.1186/1471-2458-14-496
2. Graham ID, Logan J, Harrison MB, Straus SE, Tetroe J, Caswell W, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof (2006) 26(1):13–24. doi:10.1002/chp.47
3. Jacobson N, Butterill D, Goering P. Organizational factors that influence university-based researchers' engagement in knowledge transfer activities. Sci Commun (2004) 25(3):246–59. doi:10.1177/1075547003262038
4. Cvitanovic C, Hobday AJ, van Kerkhoff L, Wilson SK, Dobbs K, Marshall NA. Improving knowledge exchange among scientists and decision-makers to facilitate the adaptive governance of marine resources: a review of knowledge and research needs. Ocean Coastal Manag (2015) 112:25–35. doi:10.1016/j.ocecoaman.2015.05.002
5. Bowen S, Zwi A, Sainsbury P. What evidence informs government population health policy? Lessons from early childhood intervention policy in Australia. N S W Public Health Bull (2005) 16(11–12):180. doi:10.1071/NB05050
6. Oliver K, Lorenc T, Innvaer S. New directions in evidence-based policy research: a critical analysis of the literature. Health Res Policy Syst (2014) 12(1):34. doi:10.1186/1478-4505-12-34
7. Haynes AS, Derrick GE, Chapman S, Redman S, Hall WD, Gillespie J, et al. From "our world" to the "real world": exploring the views and behaviour of policy-influential Australian public health researchers. Soc Sci Med (2011) 72(7):1047–55. doi:10.1016/j.socscimed.2011.02.004
8. de Goede J, van Bon-Martens MJ, Putters K, van Oers HA. Looking for interaction: quantitative measurement of research utilization by Dutch local health officials. Health Res Policy Syst (2012) 10(1):9. doi:10.1186/1478-4505-10-9
9. Head B, Ferguson M, Cherney A, Boreham P. Are policy-makers interested in social research? Exploring the sources and uses of valued information among public servants in Australia. Policy Soc (2014) 33:89–101. doi:10.1016/j.polsoc.2014.04.004
10. Innvaer S, Vist G, Trommald M, Oxman A. Health policy-makers' perceptions of their use of evidence: a systematic review. J Health Serv Res Policy (2002) 7(4):239–44. doi:10.1258/135581902320432778
11. Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res (2014) 14(1):2. doi:10.1186/1472-6963-14-2
12. Orton L, Lloyd-Williams F, Taylor-Robinson D, O'Flaherty M, Capewell S. The use of research evidence in public health decision making processes: systematic review. PLoS One (2011) 6(7):e21704. doi:10.1371/journal.pone.0021704
13. Petticrew M, Whitehead M, Macintyre SJ, Graham H, Egan M. Evidence for public health policy on inequalities: 1: the reality according to policymakers. J Epidemiol Community Health (2004) 58(10):811–6. doi:10.1136/jech.2003.015289
14. Oliver KA, de Vocht F. Defining 'evidence' in public health: a survey of policymakers' uses and preferences. Eur J Public Health (2015) 1–6. doi:10.1093/eurpub/ckv082
15. Ritter A. How do drug policy makers access research evidence? Int J Drug Policy (2009) 20:70–5. doi:10.1016/j.drugpo.2007.11.017
16. Leigh A. What evidence should social policymakers use? Econ Round-up (2009) 1:27–43. doi:10.2139/ssrn.1415462
17. Atkins D, Best D, Briss PA, Eccles M, Falck-Ytter Y, Flottorp S, et al. Grading quality of evidence and strength of recommendations. BMJ (2004) 328(7454):1490–4. doi:10.1136/bmj.328.7454.1490
18. Bowen S, Zwi AB. Pathways to "evidence-informed" policy and practice: a framework for action. PLoS Med (2005) 2(7):e166. doi:10.1371/journal.pmed.0020166
19. Petticrew M, Roberts H. Evidence, hierarchies, and typologies: horses for courses. J Epidemiol Community Health (2003) 57(7):527–9. doi:10.1136/jech.57.7.527
20. Brownson RC, Royer C, Ewing R, McBride TD. Researchers and policymakers: travelers in parallel universes. Am J Prev Med (2006) 30(2):164. doi:10.1016/j.amepre.2005.10.004
21. West SG, Duan N, Pequegnat W, Gaist P, Des Jarlais DC, Holtgrave D, et al. Alternatives to the randomized controlled trial. Am J Public Health (2008) 98(8):1359–66. doi:10.2105/AJPH.2007.124446
22. Williams B. Perils of evidence-based medicine. Perspect Biol Med (2010) 53(1):106–20. doi:10.1353/pbm.0.0132
23. Brownson RC, Chriqui JF, Stamatakis KA. Understanding evidence-based public health policy. Am J Public Health (2009) 99(9):1576–83. doi:10.2105/AJPH.2008.156224
24. Marston G, Watts R. Tampering with the evidence: a critical appraisal of evidence-based policy-making. Drawing Board: Aust Rev Public Aff (2003) 3(3):143–63.
25. Matanoski GM. Conflicts between two cultures: implications for epidemiologic researchers in communicating with policy-makers. Am J Epidemiol (2001) 154(12 Suppl):S36–42. doi:10.1093/aje/154.12.S36


26. DiPietro NA. Methods in epidemiology: observational study designs. Pharmacotherapy (2010) 30(10):973–84. doi:10.1592/phco.30.10.973
27. Tang JL, Griffiths S. Review paper: epidemiology, evidence-based medicine, and public health. Asia Pac J Public Health (2009) 21(3):244–51. doi:10.1177/1010539509335516
28. Davis FG, Peterson CE, Bandiera F, Carter-Pokras O, Brownson RC. How do we more effectively move epidemiology into policy action? Ann Epidemiol (2012) 22(6):413–6. doi:10.1016/j.annepidem.2012.04.004
29. Dreyer NA, Tunis SR, Berger M, Ollendorf D, Mattox P, Gliklich R. Why observational studies should be among the tools used in comparative effectiveness research. Health Aff (2010) 29(10):1818–25. doi:10.1377/hlthaff.2010.0666
30. QSR International Pty Ltd. NVivo Qualitative Data Analysis Software. 10 ed. Melbourne: QSR International (2012).
31. Qualtrics. Qualtrics Software. November 2013 Ed. Provo, UT: Qualtrics (2013).
32. Smith KE, Stewart E. 'Black magic' and 'gold dust': the epistemic and political uses of evidence tools in public health policy making. Evid Policy (2015) 11(3):415–38. doi:10.1332/174426415X14381786400158
33. Haynes AS, Gillespie JA, Derrick GE, Hall WD, Redman S, Chapman S, et al. Galvanizers, guides, champions, and shields: the many ways that policymakers use public health researchers. Milbank Q (2011) 89(4):564–98. doi:10.1111/j.1468-0009.2011.00643.x
34. Vogel JP, Oxman AD, Glenton C, Rosenbaum S, Lewin S, Gülmezoglu AM, et al. Policymakers' and other stakeholders' perceptions of key considerations for health system decisions and the presentation of evidence to inform those considerations: an international survey. Health Res Policy Syst (2013) 11(1):19. doi:10.1186/1478-4505-11-19
35. Campbell DM, Redman S, Jorm L, Cooke M, Zwi AB, Rychetnik L. Increasing the use of evidence in health policy: practice and views of policy makers and researchers. Aust New Zealand Health Policy (2009) 6(1):21. doi:10.1186/1743-8462-6-21
36. Samet JM, Lee NL. Bridging the gap: perspectives on translating epidemiologic evidence into policy. Am J Epidemiol (2001) 154(12 Suppl):S1–3. doi:10.1093/aje/154.12.S1
37. Centre for Research on Ageing Health and Wellbeing. Learning to Evaluate Evidence for Policy. Canberra: Australian National University (2015). Available from: http://leep.anu.edu.au
38. Sanson-Fisher RW, Campbell EM, Perkins JJ, Blunden SV, Davis BB. Indigenous health research: a critical review of outputs over time. Med J Aust (2006) 184(10):502–5.

Conflict of Interest Statement: PK is employed by ACT Health Directorate as a policy maker, and is a co-investigator in the study as well as being an author. The other authors declare no conflict of interest.

Copyright © 2016 O'Donoughue Jenkins, Kelly, Cherbuin and Anstey. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
