An Introduction to Survey Research, Volume II
Carrying Out the Survey
Second Edition

Ernest L. Cowles • Edward Nelson

Quantitative Approaches to Decision Making Collection
Donald N. Stengel, Editor

Survey research is a powerful tool to help understand how and why individuals
behave the way they do. Properly conducted, surveys can provide accurate insights
into areas such as attitudes, opinions, motivations, and values, which serve as the
drivers of individual behavior. This two-volume book is intended to introduce
fundamentals of good survey research to students and practitioners of the survey
process as well as end-users of survey information. This second volume focuses on
carrying out a survey, including how to formulate survey questions, steps that
researchers must use when conducting the survey, and impacts of rapidly changing
technology on survey design and execution. The authors conclude with an important,
but often neglected aspect of surveys—the presentation of results in different
formats appropriate to different audiences.

Ernest L. Cowles is professor emeritus of sociology at California State University,
Sacramento. He served as the director of the Institute for Social Research for
8 years and continues as a senior research fellow. Before becoming director of the
Institute for Social Research, he directed research centers at the University of
Illinois for 10 years. Beyond his academic work, he has served as a public agency
administrator and consultant both within the United States and internationally.
In 2015, he was presented with the Distinguished Alumni Award by the College of
Criminology and Criminal Justice at Florida State University where he received his
PhD in criminology.

Edward Nelson is professor emeritus of sociology at California State University,
Fresno. He received his PhD in sociology from UCLA specializing in research
methods. He was the director of the Social Research Laboratory at California State
University, Fresno, from 1980 to 2013 and has directed more than 150 surveys. He
taught research methods, quantitative methods, critical thinking, and computer
applications. He has published books on observation in sociological research, and
using SPSS, a statistical computing package widely used in the social sciences.
An Introduction to Survey Research, Volume II, Second Edition: Carrying
Out the Survey
Copyright © Business Expert Press, LLC, 2019.

All rights reserved. No part of this publication may be reproduced, stored
in a retrieval system, or transmitted in any form or by any means—
electronic, mechanical, photocopy, recording, or any other except for
brief quotations, not to exceed 250 words, without the prior permission
of the publisher.

First published in 2019 by


Business Expert Press, LLC
222 East 46th Street, New York, NY 10017
www.businessexpertpress.com

ISBN-13: 978-1-94999-128-4 (paperback)


ISBN-13: 978-1-94999-129-1 (e-book)

Business Expert Press Quantitative Approaches to Decision Making
Collection

Collection ISSN: 2153-9515 (print)


Collection ISSN: 2163-9582 (electronic)

Cover and interior design by S4Carlisle Publishing Services Private Ltd.,
Chennai, India

First edition: 2015


Second Edition: 2019

10 9 8 7 6 5 4 3 2 1

Printed in the United States of America.


Dedication
As we complete this second edition of Introduction to Survey Research, I am
again indebted to the many people who have provided me with support,
insights, and patience along the way. My wife, Ellison, deserves a special
thank-you for having given up many opportunities for adventures because
I was sequestered at my computer for “just a little while longer.” I am also
grateful for her help in editing, reference checking, and other commonplace
aspects of writing the book, which plunged her deeper into survey research
than she probably would have liked. Finally, I am very fortunate to have
had Ed Nelson working with me, who, despite suffering a deep personal loss,
persevered in completing the manuscript. His work and dedication continue
to make him an invaluable colleague.

—Ernest L. Cowles
I dedicate this book to my wife, Elizabeth Nelson, and my children, Lisa
and David, for all their support over many years. Elizabeth and I were
both in the Sociology Department at California State University, Fresno,
for many years and shared so much both at work and at home with our
family. It has been a pleasure to work with my coauthor, Ernest Cowles,
on this book. Both of us were directors of survey research centers until our
retirements, and we have combined our years of experience in this project.

—Edward Nelson
Abstract
This two-volume work updates a previous edition of our book that was
intended to introduce the fundamentals of good survey research to students
and practitioners of the survey process as well as end users of survey informa-
tion. It describes key survey components needed to design, understand, and
use surveys effectively and avoid the pitfalls stemming from bad survey con-
struction and inappropriate methods. In the first volume, we first considered
the ways to best identify the information needed and how to structure the
best approach to getting that information. We then reviewed the processes
commonly involved in conducting a survey, such as the benefit of repre-
sentative sampling and the necessity of dealing with the types of errors that
commonly distort results. Volume I concluded with a chapter that examines
the elements to consider when developing a survey followed by a chapter
that acquaints the reader with the different modes of delivering a survey.
In this second volume, we focus on carrying out a survey. We begin
with a brief overview of the importance of research questions and the
research design and go on to discuss key elements in writing good ques-
tions. We then focus on the steps that researchers must go through when
conducting the survey. We next turn our attention to the impacts of rap-
idly changing technology on survey research, including rapidly evolving
mobile communication, specifically, online access, the expanded use of
web-based surveys, the use of survey panels, and the opportunities
and challenges provided by access to “Big Data.” We conclude with an
important, but often neglected, aspect of surveys: the presentation of re-
sults in different formats appropriate to different audiences. As with the
previous edition, each chapter concludes with a summary of important
points contained in it and an annotated set of references for readers who
want more information on chapter topics.

Keywords
Big Data; ethical issues; Internet surveys; interviewer training; mailed
surveys; mixed-mode surveys; mobile devices; sampling; surveys; survey
content; survey construction; survey panels; survey processes; survey
reports; survey technology; telephone surveys; web surveys
Contents
Preface...................................................................................................xi
Acknowledgments..................................................................................xiii

Chapter 1 Introduction......................................................................1
Chapter 2 Writing Good Questions....................................................7
Chapter 3 Carrying Out the Survey..................................................33
Chapter 4 Changing Technology and Survey Research......................53
Chapter 5 Presenting Survey Results.................................................73

Notes....................................................................................................99
References............................................................................................107
About the Author.................................................................................117
Index..................................................................................................119
Preface
Survey research is a widely used data collection method that involves
­getting information from people, typically by asking them questions and
collecting and analyzing the answers. Such data can then be used to un-
derstand individuals’ views, attitudes, and behaviors in a variety of areas,
including political issues, quality of life at both the community and the
individual levels, and satisfaction with services and products. Decision
makers in both the public and the private sectors use survey results to
understand past efforts and guide future direction. Yet there are many
misperceptions about what is required to conduct a good survey. Poorly
conceived, designed, and executed surveys often produce results that are
meaningless, at best, and misleading or inaccurate, at worst. The resultant
costs in both economic and human terms are enormous.
Our purpose in writing this two-volume edition is to provide an in-
troduction to and overview of survey research. In Volume I, we intro-
duced key elements of information gathering, specifically, identifying the
information needed and the best way to get that information. We then
explored the importance of representative sampling and identifying and
minimizing survey errors that can distort results. The remainder of the
first volume was focused on the practical issues to consider when develop-
ing, building, and carrying out a survey, including the various modes that
can be used to deliver a survey.
This volume (Volume II) focuses on carrying out the survey. We
­introduce survey implementation by first looking at the importance of
research questions in the research design. We also discuss key elements
in writing good questions for various types of surveys. We next take the
reader through the key steps that researchers must go through when con-
ducting the survey. We then highlight some of the major factors that
are influencing the way today's surveys are being delivered, such as the
rapidly changing technology that is transforming the construction
and presentation of surveys. The rapidly changing landscape of mobile
communication, including online access to web-based surveys, the cre-
ation of survey panels, and the use of “Big Data,” presents exciting new
opportunities for survey research, but also new obstacles and challenges
to its validity and reliability. We conclude with an important, but often
neglected, chapter dealing with the presentation of results in different
formats appropriate to different audiences. As with the previous edition,
each chapter concludes with a summary of important points contained
in the chapter and an annotated set of references indicating where readers
can find more information on chapter topics.
Acknowledgments
We would like to thank the S4Carlisle Publishing Services team for
their great work and contributions which have significantly improved
this work. We also acknowledge the help and support of Scott Isenberg,
the Executive Acquisitions Editor of Business Expert Press, and Donald
Stengel, the Collection Editor for a group of their books under a collec-
tion called Quantitative Approaches to Decision Making. Don also read
our manuscript and offered useful and valuable suggestions.
CHAPTER 1

Introduction

Research starts with a question. Sometimes these are why questions. Why
do some people vote Democrat and others vote Republican? Why do
some people purchase health insurance and others do not? Why do some
people buy a particular product and others buy different products? Why
do some people favor same-sex marriage and others oppose it? Why do
some people go to college and others do not? Other times they are how
questions. If you are a campaign manager, how can you get people to vote
for your candidate? How could we get more people to purchase health
insurance? How could you get customers to buy your product? How
could we convince more people to go to college? But regardless, research
starts with a question.
Have you thought about how we go about answering questions in
everyday life? Sometimes we rely on what people in authority tell us.
Other times we rely on tradition. Sometimes we use what we think is
our common sense. And still other times we rely on what our gut tells
us. But another way we try to answer questions is to use the scientific
approach.
Duane Monette et al. suggest that one of the characteristics of the
scientific approach is that science relies on systematic observations.1 We
often call these observations data and say that science is empirical. That
means it is data based. However, the scientific approach doesn’t help you
answer every question. For example, you might ask whether there is a
God, or you might ask whether the death penalty is right or wrong. These
types of questions can’t be answered empirically. But if you want to know
why some people vote Democrat and others vote Republican, the sci-
entific method is clearly the best approach. Relying on what people in
authority tell you or what tradition tells you or your gut won’t work.

Research Design
Your research design is your plan of action. It’s how you plan to answer
your research questions. Ben Jann and Thomas Hinz recognize the im-
portance of questions when they say that “surveys can generally be used
to study various types of research questions in the social sciences.”2 The
research design consists of four main parts—measurement, sampling,
data collection, and data analysis. Measurement is about how you will
measure each of the variables in your study. Sampling refers to how you
will select the cases for your study. Data collection is about how you
plan to collect the information that you will need to answer the research
questions. And data analysis is about how you plan to analyze the data.
You need to be careful to decide on your research design before you col-
lect your data.
In this book, we’re going to focus on data collection, specifically on
surveys. The book is organized in two volumes. In the first volume we
focused on the basics of doing surveys and talked about sampling, survey
error, factors to consider when planning a survey, and the different types
of surveys you might use. In the second volume we’ll focus on carrying
out the survey, and we’ll discuss writing good questions, the actual carry-
ing out of surveys, the impacts of current technology on survey research,
and survey reporting.

Questioning (Interviewing) as a Social Process


Surveys involve asking people questions. Usually, this is referred to as in-
terviewing, which is in some ways similar to the types of conversations we
engage in daily but in other ways very different. For example, the inter-
viewer takes the lead in asking the questions, and the respondent has little
opportunity to ask the interviewer questions. Once the respondent has
consented to be interviewed, the interviewer has more control over the
process than does the respondent. However, it is the respondent who has
control over the consent process, and it is the respondent who determines
whether and when to terminate the interview. We discussed nonresponse
in Chapter 3 (Volume I), “Total Survey Error,” and we’ll discuss it further
in Chapter 3 (Volume II), “Carrying Out the Survey.”

Raymond Gorden has provided a useful framework for viewing the
interview as a social process involving communication. Gorden says that
this communication process depends on three factors: “the interviewer,
the respondent, and the questions asked.”3 For example, the race and gen-
der of the interviewer relative to that of the respondent can influence
what people tell us, and we know that the wording and order of questions
can also influence what people tell us. We discussed these considerations
in Chapter 3 (Volume I), “Total Survey Error.”
Gorden goes on to suggest that the interaction of interviewer, respon-
dent, and questions exists within the context of the interview situation.4
For example, are we interviewing people one-on-one or in a group set-
ting? Many job interviews occur in a one-on-one setting, but one of the
authors recalls a time when he was among several job applicants who
were interviewed in a group setting involving other applicants. Rest as-
sured that this affected him and the other applicants. Are we interviewing
people in their homes or in another setting? Think of what happens in
court when witnesses are questioned in a courtroom setting. That clearly
affects their comfort level and what they say.
Gorden notes that the interview and the interview situation exist
within the context of the culture, the society, and the community.5 There
may be certain topics, such as religion and sexual behavior, that are diffi-
cult to talk about in certain cultures. Norms of reciprocity may vary from
culture to culture. Occupational subcultures, for example, the subcultures
of computer programmers and lawyers, often have their own language.
Norman Bradburn views “the interview as a microsocial system con-
sisting of two roles, that of the interviewer and that of the respondent.
The actors engage in an interaction around a common task, that of com-
pleting an interview.”6 He goes on to suggest that “there are general social
norms that govern interactions between strangers.”7 Two of those norms
are mutual respect (including privacy) and truthfulness.
It’s helpful to keep in mind that the interview can be viewed as a social
setting that is affected by other factors, as is the case with any other social
setting. In this book, we will be looking at many of the factors that affect
the interview. We’ll look at the research that has been done and how we
can use this research to better conduct our interviews.

Book Overview
Here’s a brief summary of what we covered in the first volume of this book.

• Chapter 1—Introduction—Interviewing isn't the only way we can
obtain information about the world around us. We can also ob-
serve behavior. We compared observation and interviewing as two
different methods of data collection. We also looked at a brief his-
tory of social surveys.
• Chapter 2—Sampling—What are samples and why are they used?
In this chapter, we discussed why we use sampling in survey re-
search, and why probability sampling is so important. Common
types of samples are discussed along with information on choosing
the correct sample size and survey approach.
• Chapter 3—Total Survey Error—Error is inevitable in every scien-
tific study. We discussed the four types of survey error—sampling,
coverage, nonresponse, and measurement error, focusing on how
we can best minimize it.
• Chapter 4—Factors to Consider When Thinking about Surveys—
In this chapter some of the fundamental considerations about sur-
veys were presented: the stakeholders and their roles in the survey
process; ethical issues that impact surveys; factors that determine
the scope of the survey; and how the scope, in turn, impacts the
time, effort, and cost of doing a survey.
• Chapter 5—Modes of Survey Delivery—There are four basic
modes of survey delivery—face-to-face, mailed, telephone, and
web delivery. We focused on the critical differences among these
different modes of delivery and the relative advantages and disad-
vantages of each. We also discussed mixed-mode surveys, which
combine two or more of these delivery modes.

Here’s what we are going to discuss in this second volume.

• Chapter 2—Writing Good Questions—Here we look at survey
questions from the perspective of the researchers and the survey
participants. We focus on the fundamentals of the design, format-
ting, and wording of open- and closed-ended questions and discuss
some of the most commonly used formats in survey instruments.
• Chapter 3—Carrying Out the Survey—Every survey goes through
different stages, including developing the survey, pretesting the
survey, administering the survey, processing and analyzing the
data, reporting the results, and making the data available to others.
Surveys administered by an interviewer must also pay particular
attention to interviewer training.
• Chapter 4—Changing Technology and Survey Research—The
chapter focuses on the impacts technology has had on survey
research. As computers became widely available to the general
public, survey platforms adapted to self-administered formats.
Likewise, as cell-phone technology replaced landline telephones,
survey researchers had to adapt to new issues in sampling method-
ology. Currently, rapid advances in mobile technology drive both
opportunities and challenges to those conducting surveys.
• Chapter 5—Presenting Survey Results—In this chapter we talk
about the last step in the survey process—presenting the survey
findings. Three major areas, the audience, content, and expression
(how we present the survey), which shape the style and format of
the presentation, are each discussed along with their importance
in the creation of the presentation. The chapter concludes with a
discussion on how different types of presentations such as reports,
executive summaries, and PowerPoints can be structured and how
survey data and results can be effectively presented.

Annotated Bibliography
Research Design

• Matilda White Riley's Sociological Research I: A Case Approach is an
early but excellent discussion of research design.8 Her paradigm of
the 12 decisions that must be made in constructing a research design
includes the alternative methods of collecting data—observation,
questioning, and the combined use of observation and questioning.

• Earl Babbie's The Practice of Social Research is a more recent intro-
duction to the process of constructing a research design.9
• Delbert Miller and Neil Salkind’s Handbook of Research Design &
Social Measurement provides many examples of the components of
the research design.10

Questioning (Interviewing) as a Social Process

• Raymond Gorden's Interviewing: Strategy, Techniques, and Tactics is
one of the clearest discussions of the communication process and
the factors that affect this process.11
• Norman Bradburn’s article “Surveys as Social Interactions” is an
excellent discussion of the interactions that occur in interviews.12
CHAPTER 2

Writing Good Questions

More than four decades ago, Warwick and Lininger indicated that

Survey research is marked by an unevenness of development in its
various subfields. On the one hand, the science of survey sampling
is so advanced that discussion of error often deals with fractions
of percentage points. By contrast, the principles of questionnaire
design and interviewing are much less precise. Experiments
suggest that the potential of error involved in sensitive or vague
opinion questions may be twenty or thirty rather than two or
three percentage points.1

While both sampling methodology and survey question development
have advanced significantly since Warwick and Lininger made that obser-
vation, the development of question methodology continues to be crucial
because of its relevance to measurement error (which is an important
component of the total survey error). Continuing research across differ-
ent disciplines is expanding our horizons to address subtle problems in
survey question construction. For example, a study by Fisher2 examined
the effect of survey question wording in a sensitive topic area involving
estimates of completed and attempted rape and verbal threats of rape.
The results of the study show significant differences between the two
sets of rape estimates from two national surveys: the “National Violence
against College Women” study and the “National College Women Sexual
Victimization” study, with the latter study's estimates ranging from 4.4 to
10.4 percent lower than that of the former. While Fisher attributes the
difference between the two surveys to four interrelated reasons, “the use of
behaviorally specific questions cannot be overemphasized, not necessarily
because they produce larger estimates of rape but because they use words
and phrases that describe to the respondent exactly what behavior is being
measured.”3 Fisher’s point, essentially, is that estimates coming from sur-
veys that use more specific language describing victims' experiences in
behavioral terms, such as “he put his penis into my vagina,” produce
more accuracy in the responses and thus improve the overall quality of
the survey.
In a different discipline, potential response bias resulting from racial
or ethnic cultural experience was found in research on health behavior by
Warnecke et al.4 Among the researchers’ findings was evidence in support
of differences in question interpretation related to respondent ethnicity.
They suggest that providing cues in the question that help respondents
better understand what is needed may address these problems.
Finally, in a third discipline, an economic study examining household
surveys asking individuals about their economic circumstances, financial
decisions, and expectations for the future conducted by Bruine de Bruin
et al.5 found that even slight changes in question wording can affect how
respondents interpret a question and generate their answer. Specifically,
the authors concluded that questions about “prices in general” and “prices
you pay” focused respondents more on personal price experiences than
did questions about “inflation.” They hypothesized that thoughts about
personal price experiences tend to be biased toward extremes, such as
large changes in gas prices, leading respondents to overestimate overall in-
flation. Essentially, what would be considered irrelevant changes in ques-
tion wording affected responses to survey questions.
These three examples from different disciplines serve as important il-
lustrations of the sensitivities continuing to be explored in relation to
the structure and format of survey questions. The goal of this chapter is
to acquaint you with some structural issues on question design and to
provide general guidelines on writing good survey questions. Before we
plunge into the topic, however, some context on survey questions will
help clarify some of the points we make later.
We begin by highlighting the distinction between the survey mode,
the survey instrument, and the survey questions. The mode, discussed in
Chapter 5 (Volume I), is the method of delivery for the survey. The survey
instruments, often referred to as questionnaires, can range from a traditional
paper-and-pencil mail-out form to a self-administered electronic online
survey with embedded audio and video and to the survey interview screen
seen by computer-assisted telephone interviewing (CATI) telephone in-
terviewers when they call interview participants and record the responses
into a computerized database. Essentially, the survey instrument is the
platform for the questions, while the questions are the expressions—a
word, phrase, sentence, or even image—used to solicit information from
a respondent. When talking about survey elements, the mode, the survey
instrument, and the survey questions are interwoven and frequently dis-
cussed simultaneously (see, for example, Snijkers et al.6). However, in our
discussion here, we have decided to focus on the questions rather than the
questionnaire for two reasons. First, as Dillman notes in his discussion
of the evolution from his Total Design Method (introduced in 1978)7
to the more recent developments in the Tailored Design Method,8 advances
in survey methodology, changes in culture, rapidly changing technology,
and greater emphasis on online surveys have created a need to move away
from a one-size-fits-all approach. This more formulaic approach has been
replaced by a more customized one that can be adapted to the situation
and participants. As Dillman et al. point out:

Rapid technological development in the past 15 years has changed
this situation substantially so that there are now many means for
contacting people and asking them to complete surveys. Web
and cellular telephone communication have undergone rapid
maturation as means of responding to surveys. In addition, voice
recognition, prerecorded phone surveys that ask for numerical
and/or voice recorded responses, fillable PDFs, smartphones,
tablets, and other devices have increasingly been used for data
collection. Yet, for many reasons traditional phone, mail, and
in-person contacts have not disappeared, and are often being used
in combination to maximize the potential of reaching people.
In addition, offering multiple ways of responding (e.g., web and
mail in the same survey) is common. It is no longer practical to
talk about a dominant mode of surveying, as in-person interviews
were described in the middle of the 20th century and telephone
was referred to from about 1980 to the late 1990s.9

The advantages of such mixed-mode surveys include a means to re-
duce total survey error within resource and time limitations.10 For ex-
ample, by using mixed-mode surveys, researchers can increase response
rates. A common way to mix modes to reduce costs is to collect as many
responses as possible in a cheaper mode before switching to a more expen-
sive mode to try to obtain additional responses. This strategy was used by
the U.S. Census Bureau for the 2010 Decennial Census. Paper question-
naires were first mailed to nearly every address in the United States, and
about 74 percent of them responded (U.S. Census Bureau n.d.). Only
then was the more expensive method of sending interviewers used to try to ob-
tain responses from households that did not respond by mail. The Census
Bureau was able to avoid considerable expense by getting most house-
holds to respond by mail and minimizing the number that would need to
be visited by in-person interviewers.11
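
To make the cost logic concrete, here is a minimal Python sketch of the mail-first
strategy described above. The 74 percent mail response rate comes from the passage;
the household count and the per-household dollar figures are purely illustrative
assumptions, not Census figures.

# Illustrative cost comparison for a mail-first, in-person follow-up design.
# The 0.74 mail response rate is taken from the text; the household count and
# all dollar figures are hypothetical assumptions used only for illustration.

HOUSEHOLDS = 1_000_000            # assumed number of sampled addresses
MAIL_RESPONSE_RATE = 0.74         # share responding to the mailed questionnaire
COST_MAIL_PER_HOUSEHOLD = 2.0     # assumed cost to mail and process one form
COST_VISIT_PER_HOUSEHOLD = 60.0   # assumed cost of one in-person follow-up


def total_cost(mail_first: bool) -> float:
    """Return the total data collection cost under the chosen strategy."""
    if mail_first:
        nonrespondents = HOUSEHOLDS * (1 - MAIL_RESPONSE_RATE)
        return (HOUSEHOLDS * COST_MAIL_PER_HOUSEHOLD
                + nonrespondents * COST_VISIT_PER_HOUSEHOLD)
    # Single-mode alternative: send an interviewer to every household.
    return HOUSEHOLDS * COST_VISIT_PER_HOUSEHOLD


mixed = total_cost(mail_first=True)
in_person_only = total_cost(mail_first=False)
print(f"Mail first, visits for nonrespondents: ${mixed:,.0f}")
print(f"In-person visits only:                 ${in_person_only:,.0f}")
print(f"Savings from mixing modes:             ${in_person_only - mixed:,.0f}")

Under these assumed costs the mixed-mode design is far cheaper, which is the same
logic, if not the same numbers, behind the expense the Census Bureau was able to avoid.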
While the survey field now embraces the idea of mixed-mode surveys
(as discussed in Chapter 5, Volume I), the ability to garner information
across different modes requires even greater attention to creating ques-
tions that can be formatted to fit into different modes. Further, while the
modes of survey delivery continue to change, with a greater emphasis on
self-administration and rapid electronic delivery and response, the funda-
mentals of good questions remain the same. In summary, it is essential to
understand the most basic element in survey design and administration—
the question—as we try to ensure compatibility across a range of delivery
modes and formats if we are to produce valid and reliable surveys. Even a
well-designed and formatted questionnaire cannot undo the damage to a
survey caused by questions that are poorly conceived, badly constructed,
or offensive to the respondent.

Begin at the Beginning


One of the themes you will see throughout this book is the importance
of having a clear idea of what information you need to get to answer your
research question(s). Before you ever begin to tackle the development of
your questions for the survey, you should have these research questions
and objectives written down, with a clear understanding and agreement
between the sponsor and the researchers as to what they are. At the end
of the survey project, the test of whether a survey was successful will be
whether those original research questions were answered.
Dillman notes three goals for writing good questions for self-
administered surveys so that every potential respondent will (1) interpret
the question the same way, (2) be able to respond accurately, and (3) be
willing to answer.12 Let's briefly take a look at Dillman's goals, which will
help frame the actual design of questions: to write a question that every
potential respondent will be willing to answer, will be able to respond to
accurately, and will interpret in the way the surveyor intends. We then
have to organize those questions into a questionnaire that can be admin-
istered to respondents by an interviewer or that respondents can process
on their own. We also have to find and convince sample members to
complete and return the questionnaire.

Validity and Reliability in Survey Questions

The first two question qualities that Dillman's goals highlight center
on two important concepts that must be addressed in our questions:
reliability and validity. Reliability and validity are two of the most
important concepts in research, generally, and much of the effort we put
into survey research is directed toward maximizing both to the greatest
extent possible. In surveys, reliability refers to the consistency in responses
across different respondents in the same situations.i Essentially, we should
see consistency of the measurement, either across similar respondents or
across different administrations of the survey. In a questionnaire, this
means that the same question elicits the same type of response across
similar respondents. To illustrate, the question “In what city do you cur-
rently live?” is an extremely reliable question. If we asked 100 people
this question, we would expect to see a very high percentage responding
to city of residence in a similar fashion, by naming the city in which they
are currently living. Likewise, if we were to ask similar groups this same
question in three successive decades, we would again expect the kind of
responses we would get to be very parallel across those years. By contrast,
if we asked 100 people, “How much money in dollars does it take to be
happy?” we would find a great deal of inconsistency in their responses.
Further, if we posed the same question to groups across three decades,
we would likely find a great deal of variation in their responses. One of
the major differences between the two questions is the degree to which
perceptions of key concepts are shared among participants. In the first
question, the concept of city of residence has a generally shared defini-
tion. By contrast, in the second question, the definition of the concept of
happiness is vague and not as widely shared. Many people, for example,
would probably have great difficulty in putting a monetary value on what
they view as essential to being happy.

i
It is important for researchers to recognize changes in the context of the situation,
which might affect the consistency of responses. For example, if a survey on school
safety was administered to children in a particular school district before and after a
major school shooting was reported in another part of the country, the situation of the
survey might appear to be the same, but the situation context would be substantially
different.

While perceptions of the question by survey participants affect reli-
ability on written survey instruments, when the questions are presented
in an interview format, we must also deal with the differences between
interviewers. Reliability can be impacted if the question is asked differ-
ently by different interviewers or if a single interviewer varies the way
the question is presented to different participants. Once the question has
been asked, reliability can also be impacted by how the response is re-
corded by the interviewer. Interviewer issues, including the importance of
interviewer training, are discussed later in Chapter 3 (Volume II).
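
As a rough illustration of what checking this kind of consistency can look like, the
sketch below computes simple percent agreement between the answers two interviewers
recorded for the same respondents. The data and the two-interviewer framing are
hypothetical, and percent agreement is only one of several possible reliability measures.

# Minimal sketch: percent agreement as a rough check on recording consistency.
# The recorded answers are invented for illustration; real studies use larger
# samples and often more refined statistics (for example, Cohen's kappa).

interviewer_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "no"]
interviewer_b = ["yes", "no", "yes", "no", "no", "yes", "no", "yes"]


def percent_agreement(a, b):
    """Share of cases where the two sets of recorded answers match."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)


print(f"Percent agreement: {percent_agreement(interviewer_a, interviewer_b):.0%}")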
Validity,ii on the other hand, refers to the extent that the measure
we are using accurately reflects the concept we are interested in, or, as
Maxfield and Babbie note, “Put another way, are you really measuring
what you say you are measuring?”13 Let’s revisit our first question: “In
what city do you currently live?” If we asked a group of high school stu-
dents that question, we would likely get accurate responses. On the other
hand, if we were trying to assess the intelligence of high school students
and asked them what their grade point average was and concluded, on

the basis of those grade point averages, that the percentage of students
with an A average was the percentage of very intelligent students, the
percentage of those with a B average was the percentage of students with
above-average intelligence, the percentage of those with a C average was
the percentage of average intelligence students, and so forth, such a con-
clusion would be invalid because grade point average isn't an accurate
measure of intelligence.

ii
There are four major areas of validity, namely, face, content, criterion, and construct,
which we will not discuss. The interested reader can go to any introductory statis-
tics or social research methodology text to learn more about these different types of
validity.

There is an interplay between reliability and validity in survey re-
search, and when creating survey questions, we must pay attention to
both. One of the common analogies used to help understand the relation-
ship between reliability and validity is the target shown in Figure 2.1. The
objective in our questions is to hit the bull’s-eye on the target.

Figure 2.1  Relationship of reliability and validity in question design

Willingness to Answer Questions

Being asked to complete a survey, whether it's conducted in person or on-
line, by mail, or on the phone, probably isn't on the top of most people's
list of things they most enjoy. The issue surrounding potential partici-
pants’ willingness to take part in a survey has become very visible in the
past couple of decades because of substantially declining survey participa-
tion rates. As a result, considerable attention has been paid to
the use of incentives as ways to improve participation, as well as the de-
sign factors that intrinsically make participation more likely.14 However,
research has consistently shown that incentives to reduce the burden on
respondents as well as rewarding them for their help have a significant im-
pact on improving responsiveness.15 Participants must see a value to their
involvement that outweighs the effort they need to expend by participat-
ing. If participants are not motivated to answer each question, if they see
no benefit from their effort, if a question is offensive or demeaning, if
they don’t understand a question, or if they believe that answering a ques-
tion will result in harm to them (such as a violation of their privacy), it is
likely they simply won’t answer the question.

Key Elements of Good Questions


So, what are the essential elements of good questions? A reading of the lit-
erature from general textbooks to highly specialized journal articles will pro-
vide a vast, almost alarming assortment of recommendations, cautions, and
directions—as Dillman puts it, “a mind-boggling array of generally good,
but often confusing and conflicting directions about how to do it.”16 To
avoid falling into this trap ourselves, we will use a slightly modified version
of attributes suggested by Alreck and Settle to attempt to distill these recom-
mendations down into three major areas: specificity, clarity, and brevity.17

Specificity, Clarity, and Brevity

By question specificity, we are referring to the notion that the question
addresses the content of the information sought as precisely as possible.
Does the information targeted by the question match the target of the
needed information? If a question does not, the results it produces will
have low validity in terms of addressing the research objective. A rough
analogy might be using a pair of binoculars to look at a distant object. If
you close first your left eye, then your right, each eye will see the object
independently. However, if you open both eyes and instead of seeing one
image with both, you see two images, then you know some adjustment to
the binoculars is needed. Similarly, there should be a high level of congru-
ence between the research objectives and the question(s) asked. If there
is not, some tweaking of the question(s) will be needed. For the question
to accurately address the research question(s), it must also be relevant to
the survey respondent. If you ask survey respondents about a topic with
which they are unfamiliar, the question may have high congruity between
its topic and the information needed but would do a poor job of getting
that information from respondents.

The second area, question clarity, is one of the biggest problems in
survey research, particularly when it's used with self-administered survey
instruments. Lack of clarity has a large impact on both question valid-
ity and reliability because the question must be equally understandable
to all respondents. The core vocabulary of the survey question should
be attuned to the level of understanding of the participants. There
is oftentimes a disparity between what the survey sponsors or the
researchers know about the question content and the respondents' level
of understanding. Frequently, this happens when technical terms or
professional jargon, very familiar to sponsors or researchers but totally
unknown to respondents, is used in survey questions. To illustrate, con-
sider the following question that was found on a consumer satisfac-
tion survey sent to one of the authors of this book: “How satisfied are
you with your ability to engage the safety lock-out mechanism?” The
response categories ranged from very satisfied to not satisfied at all.
problem was the author wasn’t aware there was a safety lock-out mecha-
nism on this product or how it was used! The reverse of this problem
can also hamper question clarity. This happens when sponsors or re-
searchers have such a superficial knowledge of the topic that they fail to
understand the intent of their question. For example, if a research firm
asked small businesses if they supported higher taxes for improved city
services, they might find respondents asking, “Which taxes?” “Which
services?”
The third area, brevity, has to do with the length of the question.
The length and complexity of questions affect the response rate of par-
ticipants as well as the validity and reliability of the responses.
Basically, questions should be stated in as straightforward and uncom-
plicated a manner as possible, using simple words rather than specialized
ones and as few words as possible to pose the question18 (although this last
caution may be more applicable to self-administered questionnaires than
for interview formats).19 More complex sentence structures should be
avoided. For example, compound sentences (two simple sentences joined
by a conjunction such as “and” or “or”) or compound–complex sentences
(those combining an independent and a dependent clause) should be bro-
ken down into two simpler questions.20

Avoiding Common Question Pitfalls

Before moving on to question types, let's look at some common question
pitfalls that apply equally to different question types and formats.

• Double-barrel questions: These are created when two different top-
ics are specified in the question, essentially asking the respondent
two questions in one sentence. This leaves the respondent puzzled
as to which part of the question to answer.
◦ Example. How would you assess the success of the Chamber
of Commerce in creating a favorable business climate and an
awareness of the negative impact of overtaxation on businesses?
Correction: The question should be split into two separate
questions:
1. How would you assess the success of the Chamber of Com-
merce in creating a favorable business climate?
2. How would you assess the success of the Chamber of Com-
merce in creating an awareness of the negative impact of
overtaxation on businesses?
• Loaded or leading questions: These originate when question wording di-
rects a respondent to a particular answer or position. As a result, the re-
sponses are biased and create false results. Political push polls, which are
sometimes unethically used in political campaigns, illustrate extreme
use of loaded questions.21 They create the illusion of asking legitimate
questions but really use the question to spread negative information by
typically using leading questions (see the second example).
◦ Example. Don't you see some problem in letting your children
consume sports drinks?
Correction: The question should be reworded to a neutral
statement.
1. Is letting your children consume sports drinks a problem?
◦ Example. Are you upset by Senator ____________'s wasteful
spending of your tax dollars on programs for illegal immigrants?
Correction: All negative references in the question should be
removed.
1. Should programs for undocumented immigrants be sup-
ported with tax dollars?

• Questions with built-in assumptions: Some questions contain
assumptions that must first be considered either true or false in
order to answer the second element of the question. These pose a
considerable problem as the respondent may feel disqualified from
answering the second part of the question, which is the real topic
focus.
◦ Example. In comparison with your last driving vacation, was
your new car more comfortable to ride in?
Correction: Potential respondents may hesitate to answer this
question because of an assumption contained in it: that the in-
dividual has taken a driving vacation. This question could be
split into two separate questions.
1. Have you previously taken a driving vacation?
2. If yes, in comparison with your last driving vacation, was
your new car more comfortable to ride in?
• Double-negative questions: Questions that include two negatives
not only confuse the respondent but may also create a level of frus-
tration resulting in nonresponse.
◦ Example. Please indicate whether you agree or disagree with the
following statement. A financial adviser should not be required
to disclose whether the adviser gets any compensation for cli-
ents who purchase any of the products that the financial adviser
recommends.
Correction: The “not be required” in the question adds a layer of
unnecessary complexity. The question should be worded as fol-
lows: “A financial adviser should be required to disclose whether
or not he or she gets any compensation for clients who purchase
any of the products that the adviser recommends.”

Question Types and Formats


The formatting of the survey question considers the research objectives,
the characteristics of the respondent, the survey instrument and mode
of delivery, and the type of analysis that will be needed to synthesize and
explain the survey's results. Broadly speaking, there are two principal types
of survey questions: unstructured and structured. Unstructured questions
are sometimes called open-ended because they do not restrict the possible
answers that the survey respondent may give. The second general question
type, structured, is commonly referred to as closed-ended because the survey
participant is limited to responses or response categories (pre)identified by
the researchers. For example, if we are doing a telephone survey of com-
munity needs, we might use either of the following two survey questions.

1. What do you like best about living in this city?


2. What do you like best about living in this city?
a. A good transportation system
b. A low crime rate
c. A lot of entertainment and recreational opportunities
d. A good school system
e. Good employment opportunities

The first question is open-ended, while the second is a closed-ended
question. Let's briefly look at the characteristics of both types.

Open-Ended Questions

In responding to the first question, which is open-ended, a respondent's
answer could obviously cover many different areas, including some the
researchers had not previously considered. In this respect, open-ended
questions are particularly well suited to exploring a topic or to gathering
information in an area that is not well known. With open-ended
questions, the scope of the response is very wide, so they generally work
best when the researcher wants to provide, in essence, a blank canvas to
the respondent. On the downside, the open-ended question may elicit a
­response that may be well outside the question’s focus: An answer such
as “My best friend lives here” or “I get to live rent-free with my parents”
doesn’t really address the community needs issue, which was the intent of
the question. Thus, the response to the question would have low validity.
When self-administered surveys use open-ended questions, the ability
to interactively engage in follow-up questions or probe to clarify answers
or get greater detail is limited. For this reason, considering the aspects
of specificity, clarity, and brevity in question design is especially impor-
tant. Wordy, ambiguous, or complex open-ended question formats not
only create difficulty in terms of the respondent's understanding of the
questions but may also present a visual image format that suggests to the
respondent that this question will be difficult and time consuming to
answer. For this reason, an open-ended question should never exceed one
page on a written survey or require respondents to read across different
screens in a computerized format. Similarly, if the question is provided in
a paper format, the space provided to answer the question should directly
follow the question rather than being placed on a separate page or after
additional questions. Simple formatting elements such as providing suf-
ficient space to allow the respondent to write or type in a narrative-type
response in a paper or online survey are very important.22
When open-ended survey questions are administered to the survey
participant in an interview, the response is often recorded verbatim or with
extensive notes, which is useful to later pick up on the nuances of the response,
such as how the person responding phrases an answer or the strength of a
­feeling they express in their response. Similarly, with in-person, telephone, or
interactive online interviews, the interviewer can use follow-up questions to
obtain more specific information or probe for more detail or seek explanation
of the open response. In some cases, these probes are anticipated and prepro-
grammed into the interview questionnaire, but in others they are spontane-
ously developed by the interviewer on the basis of answers that are not clear
or lack detail. Obviously, the experience and qualifications of the interviewers
have a major impact on the ability to follow up with conditional probes.
Open-ended questions can be time consuming for both the respon-
dent and the researcher. For the respondent, it requires the individual
to not only recall past experiences but also make a judgment as to how
best and with how much detail to answer. For the researcher, open-ended
questions often yield many different responses, which may require ad-
ditional coding and can complicate or even prevent the analysis. Also,
since responses to open-ended questions are typically in narrative format,
a qualitative rather than quantitative analysis of the information must
be anticipated. Such analysis, even when aided by computerized qualita-
tive analysis programs,23 typically requires more time and effort. Owing
to the increased effort on the part of the respondents and researchers
with these types of questions, from a practical perspective, the number
of open-ended questions must be held to a reasonably low number on a
survey instrument. It also prompts researchers to avoid using open-ended
questions when they are working with large samples.
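
To give a sense of the coding work that open-ended answers create, here is a minimal
Python sketch that sorts free-text responses into rough categories by keyword
matching. The categories, keywords, and responses are hypothetical, and real
qualitative coding is usually done by trained coders or dedicated software rather
than by a simple keyword match.

# Minimal sketch of keyword-based coding for open-ended responses.
# The codebook and the responses are invented examples for illustration only.

CODEBOOK = {
    "safety": ["crime", "safe", "police"],
    "transportation": ["bus", "transit", "traffic", "commute"],
    "recreation": ["park", "entertainment", "restaurant"],
}

responses = [
    "The low crime rate makes me feel safe walking at night.",
    "Great parks and plenty of restaurants downtown.",
    "My commute is short and the bus system works well.",
    "My best friend lives here.",
]


def code_response(text):
    """Return every category whose keywords appear in the response."""
    text = text.lower()
    hits = [category for category, words in CODEBOOK.items()
            if any(word in text for word in words)]
    return hits or ["uncoded"]  # flag answers that fit no category


for answer in responses:
    print(code_response(answer), "<-", answer)

Note that the last response, echoing the "My best friend lives here" example earlier
in the chapter, falls outside every category, which is exactly the kind of answer
that complicates analysis.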

Closed-Ended Questions

The closed-ended question format is more defined and has been stan-
dardized to a greater extent than the open-ended style. While both
open- and closed-ended questions require researchers to craft questions
carefully, closed-ended questions place an additional burden on research-
ers to carefully consider what responses are needed and are appropriate. As
can be seen in the second question, the closed-ended responses are restric-
tive and therefore must be precisely targeted to research questions. The
closed-ended format does allow researchers to impart greater uniformity
to the responses and to easily determine the consensus on certain items,
but only on those items that were specified by the answers provided. This,
in turn, can lead to another problem, as pointed out by Krosnick and Fab-
rigar;24 because of researcher specification with closed-ended questions,
open-ended questions are less subject to the effect of the researcher.
With closed-ended questions, the response choices should be both
exhaustive and mutually exclusive. This means that all potential responses
are listed within answer choices and that no answer choice is contained
within more than one response category. We should point out a distinc-
tion here between single- and multiple-response category questions. In
single-response questions, only one of the choices can be selected, and
therefore the response choices must be exhaustive and mutually exclusive.
However, some questions are worded in a way that a respondent may
choose more than one answer from the choices provided. In this situa-
tion, the response choices are each still unique, but the person responding
can select more than one choice (see Example 4).
Consider the following closed-ended question:

Example 1
In which of the following categories does your annual family
income fall?

a) Less than $20,000
b) $21,000–$40,000
c) $40,000–$60,000
d) $61,000–$80,000

Can you see a problem with the response set? If you said that it has
both nonmutually exclusive categories and does not provide an exhaustive
listing of possible family income levels, you are right. Both problems are
seen with the current response categories. If you look carefully, you will
notice that a respondent whose family income is between $20,000 and
$20,999 has no response category from which to choose. Similarly, if a
respondent’s family income level is $105,000 a year, the individual would
be in a similar quandary, as again there is no appropriate answer category.
A problem also arises if the respondent has an annual family income of
$40,000 a year. Which category would the individual choose—(b) or (c)?
Fortunately, these two problems are easy to fix. To solve the issue of non-
mutually exclusive categories, we would change response (c) to “$41,000
to $60,000.” To make the response set of
answers exhaustive, we could change the first response option (a) to
“less than $21,000” and add another response option at the end of the
current group, (e) “more than $80,000.”
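
To see the exhaustive and mutually exclusive requirement stated programmatically,
the following sketch (our own illustration, not part of the original example) assigns
incomes to the printed Example 1 brackets and surfaces both problems discussed above.

# Minimal sketch: assigning incomes to the Example 1 brackets to surface the
# gap and the overlap discussed in the text. Endpoints are written as plain
# numbers purely for illustration.

BRACKETS = {
    "a": (0, 20_000),        # Less than $20,000
    "b": (21_000, 40_000),   # $21,000-$40,000
    "c": (40_000, 60_000),   # $40,000-$60,000
    "d": (61_000, 80_000),   # $61,000-$80,000
}


def matching_categories(income):
    """Return every response category whose range contains the given income."""
    return [label for label, (low, high) in BRACKETS.items()
            if low <= income <= high]


print(matching_categories(20_500))   # [] -> gap: no category fits
print(matching_categories(40_000))   # ['b', 'c'] -> overlap: two categories fit
print(matching_categories(105_000))  # [] -> the set is not exhaustive at the top

After the corrections described above, the same check would show one and only one
match for incomes reported in whole thousands of dollars.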
One of the first considerations with closed-ended questions is the level of
precision needed in the response categories. Take, for example, the following
three questions that are essentially looking for the same type of information.

Example 2
When you go clothes shopping, which of the following colors do you
prefer?
(Please check your preference)

______ Bright colors ______ Dark colors

Example 3
When you go clothes shopping, which of the following colors do you
prefer?
(Please check your preference)

______ Bright colors   _____ Dark colors   _____ No preference



Example 4
When you go clothes shopping, which of the following colors do you
prefer?
(Please check each of your preferences)

  ____ Yellows  ____ Browns    ____ Reds     ____ Greens


____ Blues     ____ Pinks       ____ Blacks    ____ Whites
____ Oranges   ____ Purples   ____ Lavenders
____ No preference

Each of these questions provides a different level of precision. In the first question, the choices are limited to two broad, distinct color categories, which would be fine if researchers were looking for only general impressions. However, if the question were on a fashion survey, this level of detail wouldn't be sufficient. The second question also opens up another answer possibility, that is, a response that indicates the individual doesn't have a color preference. The third question, which could be expanded to any number of response categories, not only provides an indication of specific color choices but also allows the individual to select specific colors from both bright and dark areas. This question could be further enhanced by providing a visual image, such as a color wheel, that would let the respondents mark or check precise colors, thus ensuring greater reliability in the answers provided across different respondents.
Unfortunately, there is always a delicate balance in trying to get to
the greatest level of precision in question responses, on the one hand,
while not sacrificing the respondent’s ability to answer the question ac-
curately, on the other. With closed-ended questions, the formatting of
the response categories can impact important response dimensions, such
as the ability to accurately recall past events. More than 40 years ago, Seymour Sudman and Norman Bradburn described how memory errors affect recall.

There are two kinds of memory error that sometimes operate in opposite directions. The first is forgetting an episode entirely, whether it is a purchase of a product, a trip to the doctor, a law violation, or any other act. The second kind of error is compression of time (telescoping), where the event is remembered as occurring more recently than it did. Thus, a respondent who reports a trip to the doctor during the past seven days when the doctor's records show that it took place three weeks ago has made a compression-of-time error.25

The second problem arises when researchers make the response categories so precise that they become impossible to differentiate in the respondent's mind. It's a little bit like the average wine drinker trying to distinguish whether the sauvignon blanc promoted by the local wine shop actually has a fruit-forward taste with plum and cherry notes and a subtle flowery finish. In a survey question directed to office workers, the problem might look like this:

Example 5
If you spend more time responding to business e-mails and text messages this year compared with last year, please choose the category below that best describes the difference in the average amount of time per day you spend responding to business e-mails and text messages this year compared with last year.

a) _____ Less than 8 minutes


b) _____ 8–16 minutes more
c) _____ 17–25 minutes more
d) _____ 26–34 minutes more
e) _____ 35–43 minutes more
f ) _____ More than 43 minutes

As you can imagine, the average worker would likely have great difficulty trying to estimate time differences with this degree of specification. Essentially, the researchers are trying to create too fine a distinction among the categories. This concept is often referred to as the granularity of the response categories, that is, the level of detail they contain.
There are two basic formats for closed-ended questions: (a) unordered
or unscalar and (b) ordered or scalar.26 The first of these, the unordered or
unscalar response category, is generally used to obtain information or to
select items from simple dichotomous or multiple-choice lists. The data
obtained from this type of question are usually categorical, measured at
the nominal level, which means there are discrete categories but no value
is given to the categories. Here are some examples.

Example 6
Will your company be doing a significant amount of hiring in the
next year?

_____ Yes _____ No

Example 7
If your company plans to expand its workforce in the coming year,
which of the following best explains that expansion?

____ Rehiring from previous downsizing


____ New markets have created greater product demand
____ Expansion in existing markets has created greater product
demand
____ New products coming to market

Such unordered response categories are sometimes referred to as


forced-choice categories because the respondent can choose only one answer. Some forced-choice questions are used to help determine choices between areas that, on the surface, appear to have an equal likelihood of being selected.

Example 8
When buying a new home, which of the following do you consider
most important?

____ Cost   ____ Location    ____ Size    ____ Age


Unordered response categories may be partially opened by including an "other" category with a blank line, which allows a respondent to insert a response in addition to those listed.

Example 9
When buying a new home, which of the following do you consider
most important?

_____ Cost
_____ Location
_____ Size
_____ Age
_____ Other (please explain)

The ordered or scalar response format, as the name implies, arranges responses in an order by requiring the respondent to select a response that conveys some order of magnitude among the possible choices. These response choices are measured by ranking or rating the response on a scale
at the ordinal, interval, or ratio level. With ordinal ranking, the response
categories are sorted by relative size, but the actual degree of difference be-
tween the items cannot be determined. For example, consider commonly
seen scales that ask respondents to indicate whether they strongly agree,
agree, neither agree nor disagree, disagree, or strongly disagree. Thus, each
identified response category becomes a point along a continuum. One of
the first and most commonly used rating scales is the Likert Scale, which
was first published by psychologist Rensis Likert in 1932.27 The Likert
Scale presents respondents with a series of (attitude) dimensions, which
fall along a continuum. For each of the attitude dimensions, respondents
are asked whether, and how strongly, they agree or disagree, using one of
a number of positions on a five-point scale. Today, Likert and Likert-type
scales are used commonly in surveys to measure opinions or attitudes.
The following example shows a Likert-scale question and response set.

Question: Please rate the employees of this company for each of the areas listed below.
For each item below, please check the answer that best applies, using this scale:
Strongly disagree / Disagree / Neither agree nor disagree / Agree / Strongly agree

1. The employees in this company are hard working
2. The employees in this company have good job skills
3. The employees in this company are dependable
4. The employees in this company are loyal to the company
5. The employees of this company produce high-quality work

A variant of the Likert Scale is the Semantic Differential Scale, another


closed-ended format, which is used to gather data and interpret it on the basis of the connotative meaning of the respondent's answer. It uses a pair of clearly opposite words at the ends of a continuum, and the scale points between them can be either marked or unmarked. Here are some examples of a semantic differential scale.
Marked Semantic Differential Scale
Please answer based on your opinion regarding the product:

Very Slightly Neither Slightly Very


Inexpensive [] [] [] [] [] Expensive
Effective [] [] [] [] [] Ineffective
Useful [] [] [] [] [] Useless
Reliable [] [] [] [] [] Unreliable

Unmarked Semantic Differential Scale


The central line serves as the neutral point:

Inexpensive ________________________ ________________________ Expensive


Effective ________________________ ________________________ Ineffective
Useful ________________________ ________________________ Useless
Reliable ________________________ ________________________ Unreliable
Source: Sincero (2012).28

With interval scales, by contrast, the difference between the categories


is of equal distance and can be measured, but there is no true zero point.
A common example of an interval scale is calendar years: there is a specific distance, 100 years, between 1776 and 1876, yet it makes no sense to say that 1776 is 95 percent of the later year.
scales, which do have a true zero point, you can calculate the ratios be-
tween the amounts on the scale. For example, salaries measured on a
dollar scale can be compared in terms of true magnitude. A person who
makes $200,000 a year makes twice as much as someone whose salary is
$100,000.
In summary, question content, design, and format serve as the fun-
damental elements in building and executing good surveys. Research has
provided some guidance on best practices. For example, the following
general design and question order recommendations have emerged from
research: (a) order questions from easy to difficult, (b) place general ques-
tions before specific questions, (c) do not place sensitive questions at the
beginning of the survey, and (d) place demographics at the end of the
questionnaire to prevent boredom and to engage the participant early in
the survey.29
It is also recognized that survey responses can be affected by how
the question and response categories are presented, particularly ordinal
scale questions.30 There is considerable debate over many of the facets
of response sets and scales. For example, there is little agreement on the optimum number of points on a scale beyond a general sense that between 5 and 10 points works well, with 7 considered optimal by many researchers.31 Opinions also differ on whether extending the number of points to 10 or more increases the validity of the data.32
Similarly, while it seems there is general agreement on using an odd
number of categories, so as to have a defined midpoint in the scale, issues
such as the inclusion of don’t know categories33 remain controversial, as
some contend they are used when the respondent means no or doesn’t
want to make a choice.
Because the primary modes of delivery continue to change as technology reshapes the ways we communicate and interact, questions need to be adaptable across multiple platforms and to retain their validity and reliability in mixed-mode designs. While we have a solid research base, we are still learning how new technologies such as web-based surveys and smartphone applications (apps) change the dynamics of survey design and administration.

Summary

• The design, format, and wording of questions are extremely important in surveys.
  ○ Questions form the basic building blocks of surveys.
  ○ Questions have a major impact on measurement error (which is an important component of the total survey error).
  ○ Question construction has a major impact on how individuals respond.
  ○ Questions must be targeted to answer the research questions.
• The relationship between survey mode, survey instrument, and survey questions is important to consider.
  ○ The method of delivery of the survey (mode); the platform for the questions (survey instrument or questionnaire); and the expressions of words, phrases, images, and so forth used to solicit information (questions) are all interrelated and must be developed in concert.
  ○ Because mixed-mode surveys are becoming increasingly popular, questions must be designed to be adaptable across different instruments and modes.
• Addressing question reliability and validity
  ○ If a question is reliable, we see consistency in responses across different respondents in similar situations.
  ○ If a question is valid, it accurately measures what we say we are measuring.
  ○ Questions must have both reliability and validity.
• Addressing participants' willingness to answer questions
  ○ Declining participation rates have focused more attention on ways of improving respondents' willingness to answer questions.
  ○ Motivation to answer questions can be increased or decreased by several factors.
    ◾ Value of participation to the respondents
      ▸ Balance of effort needed to respond against benefit of responding
      ▸ Respect and courtesy shown to participants
      ▸ Providing incentives
    ◾ Avoiding questions that are offensive or demeaning
    ◾ Making questions understandable
    ◾ Assuring participants that they will not be put at risk of harm (such as violating privacy) by responding
• Key elements of good questions
  ○ Specificity—Addressing the content of information as precisely as possible
  ○ Clarity—Ensuring that question wording and concepts are understandable to the respondent
  ○ Brevity—Making the question as short, straightforward, and simply worded as possible
• Common question pitfalls
  ○ Double-barrel questions
  ○ Loaded or leading questions
  ○ Questions with built-in assumptions
  ○ Double-negative questions
• Open-ended questions
  ○ Can be administered through interviews, other interactive formats, or in self-administered forms
  ○ Are good for exploring topics or gathering information in areas that are not well known
  ○ Allow participants a blank canvas to respond, usually in narrative format
  ○ Have some problems with validity because responses may miss question intent
  ○ Are frequently used with follow-up questions or probes to get more detail or further information
  ○ Require more effort on the part of both respondents and researchers
• Closed-ended questions
  ○ They are more defined and standardized than open-ended questions.
  ○ They generally require less effort on the part of respondents and researchers.
  ○ Response categories are restricted and predetermined by researchers.
  ○ Wording and format of response categories must be carefully constructed to ensure that the information required to answer research questions is obtained.
    ◾ Response categories may be (a) ordered or scalar or (b) unordered or unscalar.
    ◾ Response categories may be measured at the nominal, ordinal, interval, or ratio level.
    ◾ Response categories must be mutually exclusive and exhaustive.

Annotated Bibliography
General

• See Chapters 7 and 8 in Robert Groves et al., Survey Methodology, 2nd ed.34

Characteristics of Good Questions

• The Dillman et al. text on surveys35 provides a great deal of information on the characteristics of good questions, including Dillman's 19 principles for good question design.

Reliability and Validity

• For a conversational and easy-to-understand overview of reliability


and validity in survey research, see "Understanding Evidence-Based Research Methods: Reliability and Validity Considerations in Survey Research" by Etchegaray and Fischer.36
• For an in-depth review of reliability in surveys, see Alwin’s Margins
of Error: A Study of Reliability in Survey Measurement.37

Question Type and Structuring Questions on a Survey

• Ian Brace38 provides a good overview of question types and the


importance of how questions are structured on survey instruments.
CHAPTER 3

Carrying Out the Survey

So far, we have talked about a number of different aspects of doing a


survey, including

• How to select the cases for your survey (Chapter 2—Volume I);
• The different types of error that can occur in a survey (Chapter 3—
Volume I);
• Things you need to think about when planning a survey
(Chapter 4—Volume I);
• Different ways of delivering the survey to your sample (Chapter 5—
Volume I); and
• Writing good questions (Chapter 2—Volume II).

In this chapter, we’re going to talk about how you carry out the survey.
We’re not going to get into the nuts and bolts of doing a survey. There
are lots of good books that will do this, and we’ll mention them in the
annotated bibliography at the end of this chapter. Rather we’re going to
describe the steps that every researcher must go through in carrying out
a survey.

Developing the Survey


Let’s assume that you want to do a survey of adults in your county to de-
termine their perception of the quality of life. You know that there are cer-
tain areas that you want to explore, including perceptions of crime and the
economy. You want to develop a survey that can be repeated on an annual or
biannual basis to track how perceived quality of life varies over time. You’re
aware of other quality-of-life surveys to which you would like to compare
your survey results. What should you do to begin developing your survey?

Looking at Other Surveys

It’s often helpful to look at the types of questions that other researchers
have used. One place to search is Google (http://google.com) and Google
Scholar (http://scholar.google.com). If you happen to be on a college
campus that subscribes to the Roper Center for Public Opinion Research
(http://www.ropercenter.cornell.edu), consider using iPOLL, which is a
database of over 700,000 survey questions. You can search all these search
engines by keywords. Entering the words quality and life will search for all
questions containing both words in the question. Often what others have
asked will give you ideas of what you might ask.

Focus Groups

Focus groups are another tool that you can use in developing your survey.
A focus group is a small group of individuals from your study population
who meet and discuss topics relevant to the survey.1 Typically, they are
volunteers who are paid to take part in the focus group. For example, if
your study deals with quality of life, you might explore with the focus
group what they think quality of life means and which issues, such as
crime and jobs, are critical to quality of life. A focus group gives you the
opportunity to discuss the types of information you want to get from your
survey with a group of people who are similar to those you will sample.

Cognitive Interviews

A cognitive interview is a survey administered to volunteers from your


study population that asks them to “think out loud”2 as they answer the
questions.3 Cognitive interviews give you the opportunity to try out the
questions and discover how respondents interpret them and what they
mean by their answers. Let’s say that one of the questions you want to ask
in your survey is “What is the most pressing problem facing the commu-
nity in which you live?” In a cognitive interview, you can ask respondents
how they interpret this question. What does “most pressing problem”
mean to them? And you can ask them to take you through their thought
processes as they think through the question and formulate an answer.

Behavior Coding and Interviewer Debriefing


Another way to pretest a survey is to conduct a pilot study, where you administer the survey to a small sample of respondents. Respondent behavior can be coded to help you identify problem questions. Gordon Willis suggests that you look for questions in which the following events occurred—"(1) Interrupts question reading (2) Requests repeat of question reading (3) Requests clarification of question meaning (4) Provides qualified response indicating uncertainty (5) Provides an uncodeable response (6) Answers with Don't Know/Refused."4 Interviewers can also be debriefed about problems they encountered while administering the survey.5

Asking Experts to Review the Survey

When you have a draft of the survey completed, ask survey experts to re-
view it and point out questions that might be confusing to respondents as
well as other types of problems. Most colleges and universities will have
someone who is trained in survey research and willing to review your draft.

Pretesting the Survey


When you think you are ready to try out your survey, select a small num-
ber (25–40) of respondents from your study population and have them
take the survey using the same procedures you will use in the actual sur-
vey. In other words, if you are using a telephone survey, then do your
pretest over the phone. If it’s a web survey, then your pretest should be over
the web. You probably won’t be using these responses as part of your data
since you are likely to make changes in the survey based on the pretest
results.
Here are some of the things that you ought to look for in your pretest:6

• How much variation is there in the answers to each question? Questions that don't have much variation will not be very useful when you analyze your data. For example, if you want to explore why some people are concerned about being a crime victim and others aren't and if almost everyone is concerned, then this question doesn't have much variation and there isn't anything to explain. Of course, you can point out that there is near-universal concern about being a victim of crime, but that's about all you will be able to say. You won't be able to explore why some are more concerned about being a victim than others, since there is little variation in how respondents answer this question. (A short sketch after this list shows how checks like this can be scripted.)
• How many respondents skip certain questions or say they don’t
know how to respond? No answers and don’t knows could be an
indication of a problem with the way the question is worded or it
could indicate that the question asks for information that respon-
dents can’t or don’t want to provide.
• Is there evidence of satisficing? Some questions require a lot of ef-
fort to answer, and sometimes respondents look for ways to reduce
the burden of answering certain questions. This is what is called
satisficing. For example, giving one-word answers to open-ended
questions can indicate satisficing. Asking people what is the most
pressing problem facing their community requires a lot of effort to
answer. Answering with one word such as “crime” or “education” is
one way to reduce the burden. We discussed satisficing in Chapter 3
(Volume I). You might want to refer back to that chapter.
• If you are asking respondents to skip particular questions based on
their answers to previous questions, did the skip patterns work as
you intended? For example, you could ask respondents if they are
very satisfied, somewhat satisfied, somewhat dissatisfied, or very
dissatisfied with their life in general. You might want to ask only
those who are dissatisfied to tell you why they are dissatisfied. This
requires a skip pattern in the questions. If you’re using a telephone
or web survey, you can program that skip into the software you are
using. If you are using a face-to-face survey, the interviewer will
have to be instructed when to skip to the next question. If you are
using a mailed survey, the instructions will have to be written in
the survey. However you build the skip pattern into your survey,
did it work as you intended? It’s important to check to make sure
that the skip patterns are working properly before you begin the
actual survey. The pretest is the place to check it out.

• How long did it take for the respondents to complete the survey?
Do you think respondents will be willing to spend that much time
on your survey? You can ask respondents in the pretest whether the
survey took too long to complete.
• If you are using an interviewer-administered survey, did the inter-
viewers report any problems during the survey? Be sure to debrief
your interviewers after the pretest.
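When the pretest responses are stored in a simple data file, the first two checks above (little variation and high rates of don't know or skipped answers) can be scripted in a few lines. Here is a minimal sketch in Python using the pandas library; the file name, column layout, and cutoff values are hypothetical and only meant to illustrate the idea.

    import pandas as pd

    # Hypothetical pretest file: one row per respondent, one column per question.
    pretest = pd.read_csv("pretest_responses.csv")

    for question in pretest.columns:
        shares = pretest[question].value_counts(normalize=True, dropna=False)
        top_share = shares.iloc[0]   # share of respondents giving the most common answer
        dk_share = shares.get("Don't know", 0) + shares.get("Refused", 0)
        if top_share > 0.90:
            print(f"{question}: little variation ({top_share:.0%} gave the same answer)")
        if dk_share > 0.10:
            print(f"{question}: many don't know/refused answers ({dk_share:.0%})")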

Pretesting is an essential step in preparing your survey so it is ready for


delivery to your sample. Here are some other suggestions for the pretest.

• There are two questions that are always important to ask when pre-
paring a survey. How are respondents interpreting the questions?
What do respondents mean by their answers? We talked about the
usefulness of cognitive interviews when you are developing your
survey. They are just as useful during the pretest. Howard Schuman
suggests the following probes. “Could you tell me why you say
that?” “Would you explain what you meant by _______?”7 George
Bishop suggests asking respondents to “think out loud” while an-
swering the question.8
• Ask respondents to tell you about the problems they encountered
while doing the pretest. Were there questions they had difficulty in
answering? Were there questions that were confusing?
• If it’s possible, record the pretests so you can go back over them
with the interviewers and talk about particular questions. These
could be audio or video recordings. Remember that you will need
to get the respondent’s permission to record the interviews.

Administering the Survey—Using Probe Questions


Administering the survey depends in part on your mode of survey delivery.
In Chapter 5 (Volume I), we talked about the four basic modes of survey
delivery—face-to-face, mailed, telephone, and web—and mixed-mode
surveys, which combine two or more of these delivery modes. You might
want to go back and look at this chapter again and at some of the refer-
ences mentioned in the annotated bibliography.

One of the most important tasks of survey administration is to clarify


the answers of respondents through follow-up questions. These types of
questions are referred to as probes. There are a number of different types
of probes. For example, we could ask respondents to “tell us more” or
what they meant by a particular answer. Patricia Gwartney suggests some
other probes.9

• Silence—Don’t be afraid of not saying anything for a few seconds.


This can encourage respondents to expand on what they told you.
• Repetition—We could repeat what respondents tell us in their own
words to encourage them to expand on their answers.
• Repeating the question—Another type of probe is to simply repeat
the question and the response categories.
• Asking for help—Saying that you don’t understand the respon-
dent’s answer and asking for help is a useful probe. Asking for help
can encourage respondents to work with you to clarify an answer.

Some questions are particularly likely to require a follow-up question


in order to clarify what respondents tell us. Here are some examples.

• Suppose we ask a respondent, “What is the most pressing problem


facing your community today?” and the respondent says, “Crime.”
We could probe by saying, “Could you tell me a little more about
that?”
• Researchers often want to know a person’s race and ethnicity. Often
we start with a question such as “Would you describe yourself as
being Hispanic or Latino?” This could be followed by “What race
do you consider yourself to be?” But what do you do if the person
says that he or she is German or Italian? One approach is to probe
by rereading the question, but this time asking them to select their
answer from among a set of categories, such as White, American
Indian, African American or Black, Asian or Pacific Islander, and
other. Many surveys allow respondents to select more than one
category. There also needs to be a category for “refusal.”10
• Sometimes we want to know what respondents do for a living. We
might start by asking them if they are currently employed and, if
they are, by asking “What is your current occupation (or job)?”


Some respondents may not give you the information you need.
Gwartney suggests the following probes: “What kind of work do
you do?” “What is your job title?” “What are your usual activities
or duties at your job?”11

Probing in Web Surveys

The way in which we probe depends in large part on the mode of survey delivery. Surveys that are interviewer-administered, such as face-to-face and telephone surveys, provide the interviewer with considerable control over the use of probe questions. Web surveys are not interviewer-administered, but technological advances give the researcher considerable control here as well.
There are some questions that you know will require a probe question.
For example, if you ask someone their job title, you will need to follow
that up with a question asking about the duties and activities of their job.
If you ask people what they think is the most pressing problem facing
their community, you might want to follow that up with a probe asking,
“Why do you feel that way?” This type of probe can easily be built into
any survey, including web surveys.
There are other types of probe questions that depend on what respondents tell you. Pamela Alreck and Robert Settle call these interactive or dynamic probes.12 For example, if respondents give you a one-word answer, such as "crime" or "drugs," to the most-pressing-problem question, you would want to ask them to "tell me a little more about that." That's more difficult to carry out in a web survey unless you can identify the specific keywords for which you want to ask a probe question. In addition, you need to be using web survey software that allows you to use this type of probe question.
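To make the idea of a keyword-triggered probe concrete, here is a minimal sketch in Python of the kind of branching logic such software might apply. The keyword list, the length cutoff, and the probe wording are our own illustrative choices; whether anything like this is possible depends on the features of the web survey package you are using.

    # Very short answers, or answers that hit a listed keyword, trigger a follow-up probe.
    PROBE_KEYWORDS = {"crime", "drugs", "education", "traffic", "jobs"}

    def needs_probe(answer):
        words = answer.strip().lower().split()
        return len(words) <= 2 or any(word in PROBE_KEYWORDS for word in words)

    def next_prompt(answer):
        if needs_probe(answer):
            return "Could you tell me a little more about that?"
        return None  # move on to the next question

    print(next_prompt("Crime"))  # probe is triggered
    print(next_prompt("The lack of affordable housing for young families"))  # no probe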

Probing in Mailed Surveys

Probing is more difficult in a mailed survey. Mailed surveys are not interactive. There is no contact between the interviewer and the respondent unless you provide the respondent with a telephone number or web address that they can use to contact you. Consequently, all instructions
and questions have to be written out in the survey. This limits you to
probes that can be anticipated in advance. If you are asking about the
respondent’s occupations or jobs, you can include a probe question asking
the respondents to tell you about their job’s duties and activities. If you
are asking about attitudes or opinions on some issue, you can ask them
to tell you why they feel that way. But there is no opportunity for follow-
ing up on respondents’ specific answers. If they tell you that their race is
Swedish, you can’t follow that up. You have to make your instructions
clear and specific enough to make sure that the respondents know what
you are asking.

Administering the Survey—Record Keeping


Another important part of survey administration is record keeping. It’s es-
sential to keep good records regardless of the survey delivery mode. But
the information that is available for your records will vary by the mode
of survey delivery. In an interviewer-administered survey, you might have
information about individuals you are unable to contact or who refuse to
be interviewed. Each time you attempt to reach a potential respondent, a
record must be kept of the result. These are often referred to as disposition
codes. You should be sure to record the following information.

• Was the respondent eligible to be part of the survey, ineligible based on whom you were trying to contact, or of unknown eligibility? If the respondent was ineligible or eligibility is unknown, why?
• Were you able to make contact with the respondent? If not, why?
• Was the interview completed? If not, why?

Patricia Gwartney has a detailed list of disposition codes for telephone


interviews, which could be adapted for face-to-face surveys.13 You can
also look at the disposition codes published by the American Association
for Public Opinion Research.14
Often respondents are unable to do the interview at the time you
reach them and the interview needs to be scheduled for a callback. This
should be recorded on a callback form. You should attach a call record to
each survey, which records each contact, the outcome, the date and time
of the contact, the interviewer’s name, and when to call back along with
any other information that the interviewer wants to convey to the next
interviewer. If you are doing a phone survey and are using CATI software,
the program will create this record for you.
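A call record does not need to be elaborate. The sketch below shows, in Python, one way such a record might be structured if you are keeping it yourself rather than relying on CATI software; the field names and disposition labels are illustrative and are not the standardized AAPOR codes mentioned above.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class ContactAttempt:
        timestamp: datetime
        interviewer: str
        disposition: str                       # e.g., "completed", "refused", "no answer", "callback"
        callback_time: Optional[datetime] = None
        notes: str = ""                        # anything the next interviewer should know

    @dataclass
    class CallRecord:
        case_id: str
        attempts: List[ContactAttempt] = field(default_factory=list)

    record = CallRecord(case_id="R-0042")
    record.attempts.append(ContactAttempt(
        timestamp=datetime(2019, 3, 5, 18, 30),
        interviewer="JW",
        disposition="callback",
        callback_time=datetime(2019, 3, 7, 10, 0),
        notes="Respondent prefers weekday mornings.",
    ))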
In a self-administered survey, you probably won’t have much informa-
tion about nonrespondents. You may only know that they didn’t respond.
However, sometimes the respondents will contact you and indicate why
they aren’t completing your survey. This could be because they have
moved and aren’t part of your study population or because they don’t have
the time or aren’t interested or because they have a problem about survey
confidentiality. Be sure to record this information. But at the very least,
you need to be able to report the response rate15 for your survey.
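As a simplified illustration of a response rate calculation (the AAPOR standard definitions distinguish several variants and treat cases of unknown eligibility more carefully), one common form divides completed interviews by all known-eligible cases. The counts below are hypothetical.

    # Hypothetical final disposition counts for a survey.
    completes = 412
    refusals = 388
    noncontacts = 205
    other_eligible_nonrespondents = 45

    response_rate = completes / (completes + refusals + noncontacts + other_eligible_nonrespondents)
    print(f"Response rate: {response_rate:.1%}")   # about 39.2%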
Another reason that good record keeping is so important is that it
provides a record of the way in which you carried out your survey. For
example, when you create a data file, you make decisions about how to
name your questions and how you record the responses to these ques-
tions. An example is a person’s age. You would probably name this ques-
tion as age and record the person’s age as a number. But what will you do
if a person refuses to answer this question? You might decide to use 98
for any person who is 98 years of age or older and use 99 for refusals. You
should record this decision in a permanent file so that you will remember
what you did when you come back to this data file after several years. Or
you might give someone else permission to use your data sometime in
the future, and he or she will need to know how you recorded age. There
needs to be a permanent record of the way in which the survey was carried
out to enable future use of this survey.
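When you (or a future user of the data) read the file back into an analysis package, that coding decision has to be applied in reverse. Here is a minimal sketch in Python using pandas, with a hypothetical file and column name.

    import pandas as pd

    data = pd.read_csv("county_survey.csv")

    # Per the codebook decision described above:
    #   98 means the respondent is 98 years of age or older (keep, but flag as top-coded)
    #   99 means the respondent refused (treat as missing)
    data["age_topcoded"] = data["age"] == 98
    data["age"] = data["age"].mask(data["age"] == 99)

    print(data["age"].describe())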

Administering the Survey—Linking


to Other Information
For some types of surveys, there are other administrative or organiza-
tional data that might be available. For example, if your population is
students at a university, the registrar will have information about the stu-
dents. If your population is employees in a large organization, there is
bound to be information on these employees, such as length of time at
the organization and salary. You might be able to link your survey to these
types of administrative or organizational data.
This, of course, raises a series of questions that you must consider and
answer before linking to these types of data. Here are just a few of these
questions.16

• Do you need the individual’s informed consent? How do you go


about getting this consent?
• Is it ethical and legal to access these data?
• What is the quality of the data?
• What is the cost of accessing these data?
• How do you access these data, and how much effort is it going to
take to do so?
• How accurate will your matching procedure be? In other words,
will you be able to accurately match your survey respondent with
the correct person in the organization’s records?
• How do you maintain the confidentiality of the respondents?

Processing the Data


Coding

If your survey includes open-ended questions, you will probably want to


code the responses into categories. Let’s consider the question we have
been using as an example: “What is the most pressing problem facing
your community today?” Responses to this question could be coded into
categories, such as the economy, crime, education, traffic and transporta-
tion, and so on. You will probably want to divide each of these catego-
ries into more specific categories, such as lack of jobs, violent crime, and
property crime. Once you have developed the categories, have two or
more people code the data independently so you can see if the coding
done by different individuals is consistent.
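As a simple illustration of that last step, the sketch below (in Python, with hypothetical categories and codes) compares two coders' independent decisions and reports simple percent agreement. More formal agreement statistics, such as Cohen's kappa, are often reported as well, but the basic idea is the same.

    # Category assigned by each coder for the same six open-ended answers.
    coder_a = ["crime", "economy", "education", "crime", "traffic", "economy"]
    coder_b = ["crime", "economy", "crime",     "crime", "traffic", "education"]

    agreements = sum(a == b for a, b in zip(coder_a, coder_b))
    print(f"Coders agree on {agreements} of {len(coder_a)} answers "
          f"({agreements / len(coder_a):.0%})")

    # Disagreements are worth reviewing together to refine the category definitions.
    for i, (a, b) in enumerate(zip(coder_a, coder_b)):
        if a != b:
            print(f"Answer {i + 1}: coder A coded '{a}', coder B coded '{b}'")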

Editing the Data

In addition to coding answers to open-ended questions, you will want to


review all the answers. For example, let’s say that you’re doing a mailed
survey and you ask an agree–disagree question with the following cat-
egories: strongly agree, agree, disagree, strongly disagree. What are you
going to do if someone selects more than one answer? With other survey
delivery modes, you have more control over the types of answers that
respondents give so you would be able to avoid this type of problem. But
you still need to edit the data to check for completeness and consistency.
You may need to have a category for uncodable, and you will definitely
need categories for people who say they don’t know or refuse to answer
questions.

Data Entry

There are several options for data entry. You could enter your data directly
into a program, such as Excel, or into a statistical package, such as SPSS.
If you are using CATI software or web survey software, such as Survey
Monkey or Qualtrics, the data can be exported into a number of statisti-
cal packages, such as SPSS or SAS, or into an Excel file.
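If you export the responses to a generic format such as a CSV file, most analysis environments can read it directly. For example, here is a minimal sketch in Python using pandas; the file name and column name are hypothetical, and the exact layout of the export depends on the survey package.

    import pandas as pd

    # Read a CSV file exported from the web survey or CATI software.
    responses = pd.read_csv("quality_of_life_export.csv")

    # Quick sanity checks before analysis.
    print(len(responses), "cases read")
    print(responses["most_pressing_problem"].value_counts().head())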

Data Analysis

Data analysis is beyond the scope of this book. There are many good
books on statistical analysis, and we’ll mention some of them in the an-
notated bibliography at the end of this chapter.

Writing the Report

Writing reports will be one of the topics covered in Chapter 5, Volume II.

Making the Data Available to Other Social Scientists


It has become commonplace for researchers to make their survey data ac-
cessible to other researchers by placing their data in archives, such as the
Inter-university Consortium for Political and Social Research (ICPSR)
at the University of Michigan and the Roper Center for Public Opinion
Research at Cornell University. Depending on the nature of your data,
you may or may not choose to make your survey data publicly available.

Regardless of your decision to make your data available, it’s important for
you to document how the data were collected. If your data are in a file that
can be read by a statistical program, such as SPSS, SAS, Stata, or R, you need
to document how that file was created. At some later point in time, you may
want to reanalyze your data or give it to another researcher for further analysis.
You may want to look in the archives of the ICPSR or the Roper
Center for data that you might be interested in accessing. Mary Vardigan
and Peter Granda provide an excellent introduction to data archiving,
documentation, and dissemination.17

Listening
In an interviewer-administered survey, it’s important for the interviewer
to be a good listener. Raymond Gorden talks about active listening and
suggests that interviewers ask themselves several questions as they are lis-
tening to the respondent.

• Is it clear what that means?


• Is that really relevant to the question?
• Is the answer complete?
• What does that tone of voice mean?
• Should I interrupt now to probe or should I wait till later?18

Gorden also suggests a number of keys to being a good listener.19

• “Know your objective.” Interviewers should understand what each


question is attempting to find out about the respondent and what
the purpose of the question is.
• “Pay attention from the beginning.” Don’t get distracted and miss
what the respondent is telling you.
• “Control your urge for self-expression.” Don’t interject your own
thoughts into the interview. Remember it’s not about what you
think; it’s about what the respondent thinks.
• “Listen actively.” As the respondents are talking, pay attention to what
they are saying. Think about possible probes that you may want to ask.
• “Be patient.” Don’t rush. Let the respondents tell you in their own
words.

Interviewer Training
In interviewer-administered surveys, interviewers need to be trained. It’s
unreasonable to expect them to pick up what they need to know through
on-the-job training. Here are some different training techniques. A good
training program will combine several of these approaches.

Providing Documentation

You will need to provide documentation for interviewers to study and to


have available for reference during interviews. These should include:

• Copies of the survey questions with skip patterns.


• List of questions that respondents might ask and suggestions for
answering these questions. Questions might include, for example:
How did you get my name and address or phone number? How
long will it take? Do I have to do it? What’s the survey about?
What’s the purpose of the survey? Who is the survey for? Is what
I tell you confidential? There are some excellent examples of handouts on answering respondents' questions in Don Dillman's and Patricia Gwartney's books on survey research.20
• Why people refuse and how you might respond to these refusals.
For example, people might respond by saying:
  ○ I don't have the time to do it.
  ○ I never do surveys.
  ○ I'm sick now.
  ○ I'm not interested.
  ○ It's nobody's business what I think or do.

For some of these reasons, there’s an easy response. For example, if


someone doesn’t have time to do it now or is sick, you should offer to call
back at a more convenient time. If someone says they never do surveys,
you should explain why this survey is important and worth their time.
Don Dillman and Patricia Gwartney also have examples of handouts on
how to handle refusals.21

• Interviewer manual including information on the following:i
  ○ Getting the respondents to participate
  ○ The structure of the interview
  ○ How to ask questions
  ○ How and when to probe
  ○ What to say when the respondent doesn't understand a question
  ○ Disposition codes that indicate the result of the contact
  ○ Scheduling callbacks
  ○ Time sheets to record hours worked
  ○ Getting paid

Practice Interviews

Interviewers should have the opportunity to practice the interview be-


fore actually starting data collection. A good place to start is to practice
interviewing themselves. Have them read through the questions and
think about how they would answer and what they might find confus-
ing. Then interviewers could pair off with another interviewer and take
turns interviewing each other. They could also interview friends and
family.
Role playing is often a useful training device. Have experienced in-
terviewers play the role of respondents, and simulate the types of prob-
lems interviewers might encounter. For example, problems often arise
when asking questions about race and ethnicity. Respondents often give
one-word answers to open-ended questions. These types of difficulties
could be simulated in a practice session.
Another useful training tool is to have experienced interviewers
work with new interviewers and coach them on how to handle dif-
ficult problems that arise. Experienced interviewers could listen to
practice interviews and then discuss with the new interviewers how
they might improve their interviewing technique. If it’s possible, re-
cord the practice interviews so you can review them and use them as
teaching tools.

i This list is not meant to be exhaustive. It is only meant to give examples.

Survey Participation
As has been mentioned in Chapters 3 and 5 (Volume I), a major concern
of survey researchers is the declining response rates that all modes of survey
delivery have experienced during the last 35 to 40 years.22 This has been
one of the factors that have led to the increased cost of doing surveys.
But the concern is not just over cost. The concern is also that this will
lead to increased nonresponse bias. Bias occurs when the people who do
not respond to the survey are systematically different from those who do
respond, and these differences are related to the questions we ask. Increas-
ing response does not necessarily decrease bias. Jeffrey Rosen et al. note
that increasing response rates among those who are underrepresented is
what is necessary to reduce nonresponse bias.23
We discussed survey participation in Chapter 3 (Volume I), so we’re
not going to repeat the discussion here. Rather we want to emphasize
that declining response to surveys is a serious potential problem since it
increases the possibility of nonresponse bias. Take a look at our discussion
in Chapter 3 (Volume I) of various theories of survey participation and
how you might increase response rates.
Robert Groves and Katherine McGonagle describe what they call a
“theory-guided interviewer training protocol regarding survey participa-
tion.”24 It starts with listing the types of concerns that respondents have
about participating in the survey and then organizing these concerns into
a smaller set of “themes.” Training consists of:

• “Learning the themes”;


• “Learning to classify sample person’s actual wording into these
themes”;
• “Learning desirable behavior to address these concerns”;
• “Learning to deliver . . . a set of statements relevant to their con-
cerns”; and
• “Increasing the speed of performance” so this process can be done
quickly.25

For example, if the respondent says, “I’m really busy right now!” the
interviewer might respond, “This will only take a few minutes of your
time.” Basically what the interviewer is doing is tailoring his or her ap-
proach and response to the respondent’s concerns.26

Summary
• Tools for developing the survey
  ○ Focus groups allow the researcher to get a sense of how people feel about the issues covered in the survey.
  ○ Cognitive interviewing is a way to find out how respondents interpret the questions and what they mean by their answers. One way to do this is to ask respondents to "think out loud" as they answer the questions.
  ○ Look at other surveys with a similar focus.
  ○ Survey experts can review your survey and point out problems.
• Pretesting the survey
  ○ Try out your survey on a small group of individuals from your survey population.
  ○ Ask respondents in your pretest to talk about the problems they had taking the survey.
  ○ In interviewer-administered surveys, ask the interviewers about the problems they had while administering the survey.
• Administering the survey
  ○ Probes are follow-up questions that elicit additional information or clarify what the respondent said.
  ○ There are many types of probes, including the following:
    ◾ Silence
    ◾ Repetition
    ◾ Asking respondents for help in understanding their response
• It's critical to keep good records of each attempt to conduct an interview and to keep an accurate record of the ways in which the survey is carried out.
• Sometimes you can link your survey data to other administrative records. However, this raises a number of ethical and logistical questions.
• Processing the data includes coding open-ended responses, editing the data, data entry, data analysis, and writing reports (covered in Chapter 5—Volume II).
• You may want to make your survey data accessible to other researchers. There are a number of archives which may be willing to house your data. Regardless of your decision to archive your data, you will want to keep good documentation of the process by which you created the data.
• When an interview is administered by an interviewer, it's essential for the interviewer to be a good listener. Being a good listener is something people can learn to do.
• There are several approaches to training interviewers for face-to-face and telephone surveys.
  ○ Providing copies of the survey and skip patterns, questions interviewers might be asked, how to respond to refusals, and interviewing manuals
  ○ Practice interviews
  ○ Coaching
• Survey participation
  ○ Survey response rates have been declining for the last 35 to 40 years.
  ○ This increases the possibility of nonresponse bias.
  ○ Increasing the overall response rate does not necessarily decrease bias unless you increase the response rate for those who are underrepresented in the survey.

Annotated Bibliography
• Developing the survey
  ○ Willis27 provides the fundamentals of good design.
  ○ Floyd Fowler's Survey Research Methods28 and Robert Groves et al.'s Survey Methodology29 discuss focus groups and cognitive interviews.
• Pretesting the survey
  ○ Earl Babbie's Survey Research Methods30 has a good discussion of pretesting.
  ○ Jean Converse and Stanley Presser's Survey Questions: Hand-crafting the Standardized Questionnaire31 is another excellent discussion of pretesting.
  ○ Gordon Willis32 has an excellent review of the various ways you can pretest your survey.
• Administering the survey
  There are a number of very good books on how to do various types of surveys. Here are some excellent sources.
  ○ Don Dillman's series of four books on survey research
    ◾ Mail and Telephone Surveys—The Total Design Method33
    ◾ Mail and Internet Surveys—The Tailored Design Method34
    ◾ Internet, Mail, and Mixed-Mode Surveys—The Tailored Design Method35
    ◾ Internet, Phone, Mail, and Mixed-Mode Surveys—The Tailored Design Method36
  ○ Patricia Gwartney—The Telephone Interviewer's Handbook37
  ○ Mick Couper—Designing Effective Web Surveys38
• Listening
  ○ An excellent discussion of how to be a good listener is Raymond Gorden's Basic Interviewing Skills.39
• Interviewer Training
  Here are some good references on training interviewers.
  ○ Floyd Fowler—Survey Research Methods40
  ○ Patricia Gwartney—The Telephone Interviewer's Handbook41
  ○ Robert Groves and Katherine McGonagle—"A Theory-guided Interviewer Training Protocol Regarding Survey Participation"42
• Nonresponse
  These are excellent discussions of nonresponse, nonresponse bias, and increasing response.
  ○ Herbert Weisberg—The Total Survey Error Approach43
  ○ Robert Groves et al.—Survey Methodology44
• Data Analysis
  Data analysis is beyond the scope of this book, but here are some excellent references.
  ○ Earl Babbie—The Practice of Social Research45
  ○ Jane Miller—The Chicago Guide to Writing about Multivariate Analysis46
  ○ Your favorite statistics book. If you don't have a favorite statistics book, take a look at Social Statistics for a Diverse Society47 by Chava Frankfort-Nachmias and Anna Leon-Guerrero and Working with Sample Data48 by Priscilla Chaffe-Stengel and Donald N. Stengel.
CHAPTER 4

Changing Technology
and Survey Research

In 2011, Robert Groves, a former Director of the U.S. Census Bureau


and one of the deans of U.S. survey research, highlighted the impacts of
changing technology in a discussion of the state of survey research.1

at this moment in survey research, uncertainty reigns. Participation rates in household surveys are declining throughout the developed world. Surveys seeking high response rates are experiencing crippling cost inflation. Traditional sampling frames that have been serviceable for decades are fraying at the edges. Alternative sources of statistical information, from volunteer data collections, administrative records, and Internet-based data, are propelled by new technology. To be candid, these issues are not new to 2011, but have been building over the past 30 years or so. However, it is not uncommon to have thoughtful survey researchers discussing what lies in the future and whether key components of the basic paradigm of sample surveys might be subject to rethinking.2

The Force of Modern Computing


The birth and development of modern computers is closely intertwined
with today’s survey techniques. A rapid development of general-use com-
puting began in 1951, when the U.S. Census Bureau signed a contract for
the first commercial computer in the United States. When UNIVAC—
the Universal Automatic Computer—was dedicated a few months later,
the New York Times called the machine “an eight-foot-tall mathematical
genius” that could in one-sixth of a second “classify an average citizen as to sex, marital status, education, residence, age group, birthplace, employment, income and a dozen other classifications.” The UNIVAC
was put to work to process parts of the 1950 census; then in 1954, it was
employed to handle the entire economic census.3 By the mid-1960s, the
world began to experience what might be called the age of the computer.
In the early days, general-use computing was dominated by huge, expen-
sive mainframe computers. These large machines were physically seques-
tered in secure, environmentally controlled facilities that were accessible
only to a select number of individuals in government, large corporations,
and universities, and thus survey research using computers was similarly
limited to large institutions with large budgets. Within a few short years,
however, the processing power and data storage capacity of computers
began to increase at an exponential rate. In 1965 Gordon Moore, who
cofounded the chip maker Intel, predicted that processor speeds, or over-
all processing power for computers (actually transistors on an affordable
CPU), would double every two years. His prediction, now termed Moore’s
Law, has held up for more than 50 years.4
In the nearly 60 years since, technology has provided an amazing ability to pack more and more data processing power and storage capacity into smaller and smaller units at a cheaper cost. To illustrate, consider that in 1956, the IBM RAMAC 305 (mainframe) had 5 MB of storage. The original 305 RAMAC computer system could be housed in a room of about 9 meters (30 ft) by 15 meters (50 ft), and its disk storage unit measured around 1.5 square meters (16 sq ft). Currie Munce, research vice president for Hitachi Global Storage Technologies (which has acquired IBM's hard disk drive business), stated in a Wall Street Journal interview5 that the RAMAC unit weighed over a ton, had to be moved around with forklifts, and was delivered via large cargo airplanes.6 It was hardly portable! Today, you can buy a PC external disk drive that can hold 8 Terabytes (TB) of data for around $160.00,7 which is equal to about 8,000 Gigabytes (GB) or 8,000,000 Megabytes (MB) of information, more than one and one-half million times the data storage of the RAMAC 305. These leaps in technology have resulted in the shrinking of mainframes into desktops, desktops into laptops, and laptops into tablets and notebooks. It was this transformation in computer technology to these smaller, yet more powerful computers that gave rise to the development of desktop and laptop computing, which, in turn, led directly to the development of a computer-assisted paradigm of survey research.
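The arithmetic behind that storage comparison is easy to verify. A few lines of Python, using the decimal (base-10) definitions of megabyte, gigabyte, and terabyte that the figures above imply:

    # RAMAC 305 storage versus a modern 8 TB external drive (decimal units).
    ramac_mb = 5
    drive_tb = 8
    drive_mb = drive_tb * 1_000_000    # 1 TB = 1,000,000 MB in decimal units

    ratio = drive_mb / ramac_mb
    print(f"{drive_mb:,} MB / {ramac_mb} MB = {ratio:,.0f} times the storage")
    # prints: 8,000,000 MB / 5 MB = 1,600,000 times the storage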

The Move to Wireless Telephones

As discussed in Chapter 5 (Volume I), this computer revolution popular-


ized computer-assisted telephone interviewing (CATI) in the mid-1970s
and computer-assisted personal interviewing (CAPI) in the 1980s and
1990s. By the 1990s, two other effects of the ubiquitous march of technology were beginning to have a fundamental impact on the larger world, including survey research. The first of these was the rapid miniaturization of memory chips with a simultaneous expansion of capacity, which ushered in a tremendous increase in the use of cell phones and other portable “smart” devices. In 1990, there were roughly 20 cell-phone users per 1,000 persons in the United States; by 2005 that number had grown to 683, and in 2009 it exceeded 900. By the second half of 2017, a study by the Centers for Disease Control and Prevention's National Center for Health Statistics reported that a majority of American homes had only wireless telephones.8
The linkages between advances in technology created rapid evolution
in survey methodology. As computer technology moved from personal
computers to smaller handheld devices, the CATI/CAPI survey followed.
Today there are MCATI and MCAPI surveys (with the M designating
the mobile device nature of these modalities), using platforms that have
moved interviewing onto mobile devices. However, while the advances in
technology enabling such platforms have provided greater elasticity in con-
ducting interviews, they have introduced their own set of coverage, sampling,
nonresponse, and measurement problems, as well as specific legal restric-
tions directed at mobile contact.9 For example, preliminary results from
the most recently available July to December 2017 National Health Inter-
view Survey (mentioned above) highlight the infusion of mobile commu-
nication technology into everyday life. The survey found that more than
one-half (53.9 percent) of American homes did not have a landline tele-
phone but did have at least one wireless telephone. Yet, a closer look at the
demographics of these cell-only households reveals unevenness in wireless
phone access among certain subgroups within the population on charac-
teristics such as income, race/ethnicity, age, and geographic location.10
The National Center for Health Statistics study authors, Stephen
Blumberg and Julian Luke, warn that these findings raise red flags,
because as the number of adults who are cell only has grown, the poten-
tial for bias in landline surveys that do not include cell-phone interviews
is also growing: “The potential for bias due to undercoverage remains
a real threat to health surveys that do not include sufficient represen-
tation of households with only wireless telephones.”11 The researchers
also indicated this undercoverage problem is made worse by the fact
that some households with landlines nevertheless still take their calls
on cells. Moreover, some people who live in households with landlines
cannot be reached on those landlines because they rely on wireless tele-
phones for all or almost all of their calls.12 Thus, to be methodologically
sound, sampling for such mobile device interviewing now must rely on
dual sampling frames, composed of both mobile phone and landline
phone numbers.
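To make the idea of a dual sampling frame concrete, the Python sketch below is our simplified illustration; the frame sizes, sample sizes, and the share of dual users are invented. It draws independent samples from a hypothetical landline frame and a hypothetical cell frame, then down-weights respondents reachable through either frame, a common simplification of the adjustment for their doubled chance of selection.

import random

random.seed(42)

# Hypothetical frames of telephone numbers (sizes are illustrative only).
landline_frame = [f"LL-{i:05d}" for i in range(5_000)]
cell_frame = [f"CELL-{i:05d}" for i in range(20_000)]

# Independent samples, one drawn from each frame.
sampled = [(num, "landline") for num in random.sample(landline_frame, 100)]
sampled += [(num, "cell") for num in random.sample(cell_frame, 400)]

# In practice, screening questions identify "dual users" reachable via
# either frame; here we simply simulate that with an assumed 35 percent.
dual_user = {num: random.random() < 0.35 for num, _ in sampled}

# Dual users had two chances of selection, so halve their weight
# (a simplification of composite dual-frame weighting).
weights = {num: 0.5 if dual_user[num] else 1.0 for num, _ in sampled}

print(sum(dual_user.values()), "of", len(sampled), "sampled numbers are dual users")
print("Sum of adjusted weights:", sum(weights.values()))

Real dual-frame designs go further (composite estimators, within-household selection), but the core logic of drawing from both frames and adjusting for overlap is the same.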

Internet Access and Survey Mobility

A second major impact was the rapid expansion of Internet access and
use. The Pew Research Center has documented this rise in Internet adop-
tion in a series of studies, finding that it grew from 14 percent of the U.S.
adult population in 1996 to 89 percent roughly two decades later.13 A sharp
growth of web-based survey research paralleled this rapid expansion of the
Internet. As a result, the platforms for the selection of probability samples
and survey data collection were rapidly adapted to the needs of many sur-
vey researchers. Today, the use of the web for data collection is occurring
in all sectors of the survey industry from the U.S. Census Bureau, which,
for example, now allows respondents to take the American Community
Survey14 on the Web, to political polling, where the cost and difficulty
of targeting phone calls geographically is prompting more organizations
to rely on online polls, to the marketing research community, which has
moved virtually all consumer surveys to the Web.
Moreover, just as mobile phones ushered in MCATI and MCAPI
survey interviewing, the rapid growth of web-based survey research was
transformed by the rapid extension of Internet access through mobile
technologies. The evolution in technology from simple cell phones to
Internet-accessible smartphones brought online mobility to the every-
day uses of the Internet for shopping, reading newspapers, participat-
ing in forums, completing surveys, communicating with friends and
making new ones, filing tax returns, getting involved in politics, and
purchasing things or looking for information before
purchasing offline.15 By 2015, a Pew Research Center study16 found
nearly two-thirds of Americans owned a smartphone. Such availabil-
ity and familiarity make the web-enabled smartphone or mobile de-
vice a prime modality for a wide range of survey data collection.17 As
Pinter et al. note, “Respondents in online surveys planned for a PC
environment may rather use mobile devices. Further, mobile devices
can be used independently in mobile internet-based surveys, in mobile
ethnography, in mobile diary, in location-based research or in passive
measurement.”18
At first glance, the increased availability of portable online access
would appear to solve one problem of online survey data collection be-
cause it permits researchers to reach more potential respondents with
survey applications, particularly populations that traditionally have had
limited or no access to the Internet, that is, the 22 percent of Americans
who are dependent on smartphones for their online access. This is a group
of individuals the Pew Research Center terms “smartphone dependent.”19
Some of those who rely on smartphones for online access at elevated lev-
els include:

• Younger adults—15 percent of Americans ages 18 to 29 are heavily
dependent on a smartphone for online access.
• Non-whites—12 percent of African Americans and 13 percent of
Latinos are smartphone-dependent, compared with 4 percent of
whites.
• Those with low household incomes and levels of educational
attainment:
◦ Some 13 percent of Americans with an annual household income
of less than $30,000 per year are smartphone-dependent.
◦ Just 1 percent of Americans from households earning more than
$75,000 per year rely on their smartphones to a similar degree
for online access.20
However, a closer look at the details within Pew Research Center's
study shows that mobile online access does not mean that the underrep-
resentation of individuals who might otherwise have limited or no access
to traditional PC-accessed online platforms necessarily disappears. For the
Pew Center study also revealed that the connections to online resources
that smartphones enable are often most tenuous for those users who rely
on those connections the most. Users who are dependent on their smart-
phone are subject to sporadic loss of access due to a blend of economic
and technical constraints.21 Thus, while the smartphone initially appears
to open access to web-based surveys for those with limited access other-
wise, there may be a hidden undercoverage and nonresponse error for
those very groups.
Further, access is only one of the problems that have been identi-
fied with the use of mobile devices with web-based surveys.22 Although
research is still lacking in this area,23 recent studies are now beginning to
illustrate the breadth of these issues.
First, just as Internet access itself (the so-called "digital divide") shows
variation by age, racial/ethnic background, education, and economic
status, which can affect coverage and response error (see the Pew Center
study above), research is beginning to show that the adoption of various
mobile device platforms may also be differentially affected by demographic
differences. For example, Christopher Antoun found not only that mobile
Internet use was unevenly distributed across demographic groups but also
that the usage divide is reflected in significant demographic differences
between those who use mostly their phones to go online and those who
use mostly their computers.23
Second, response times are greater with mobile survey applications than
with PCs. For example, in a well-controlled study of the differences in
survey response times between mobile and PC-based respondents, Ioannis
Andreadis found that smartphone users had longer response times. He
proposes that longer mobile response times may be due to respondents
completing the survey outside the home, an environment that creates more
distractions than desktop users face when they complete the survey in a
quieter room at home or in their office.24
Third, breakoff rates (the rates at which individuals stop respond-
ing to the survey before completion) in mobile web surveys are a key
challenge for survey researchers. In the introduction to their meta-analysis
of breakoff rates, Mavletova and Couper note studies showing breakoff
rates for commercial mobile web surveys ranging from slightly over
40 percent to as high as 84 percent, while breakoff rates for PC-based
surveys ranged from 17 to 24 percent.25 Their own meta-analysis of
14 studies of mobile surveys found breakoff rates ranging between
roughly 1 percent and about 30 percent.26 The results of their
meta-analysis led the researchers to conclude that optimizing web surveys
for mobile devices to minimize breakoffs among mobile respondents was
very important. They also found that e-mail invitations, shorter surveys,
prerecruitment, more reminders, a less complex design, and an oppor-
tunity to choose the preferred survey mode all decrease breakoff rates
in mobile web surveys.27
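Breakoff rates themselves are straightforward to compute from survey paradata. The hypothetical Python sketch below (ours, with invented records) simply tallies the share of started interviews abandoned before completion, separately for mobile and PC respondents.

# Hypothetical paradata: one record per started interview.
responses = [
    {"device": "mobile", "completed": False},
    {"device": "mobile", "completed": True},
    {"device": "mobile", "completed": True},
    {"device": "pc", "completed": True},
    {"device": "pc", "completed": True},
    {"device": "pc", "completed": False},
]

def breakoff_rate(records, device):
    """Share of started interviews on a given device left unfinished."""
    started = [r for r in records if r["device"] == device]
    broke_off = sum(1 for r in started if not r["completed"])
    return broke_off / len(started) if started else 0.0

for device in ("mobile", "pc"):
    print(device, f"breakoff rate: {breakoff_rate(responses, device):.0%}")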

Survey Panels
As mentioned in Chapter 4 (Volume I), another web-based survey option
gaining popularity, particularly because of low response rates to tradi-
tional telephone surveys and the growing availability of mobile Internet
access, is the survey panel. A survey panel is a sample of respondents
who have agreed to take part in multiple surveys over time. Within online
panel research, there are two distinct types: probability-based panels and
opt-in or access panels.28

Probability-based Panels

With probability-based panels, a random sample is drawn from the popu-
lation of interest, and the selected individuals are contacted and solicited
to join the panel. While such a probability-based sample is considered
the gold standard in terms of survey panels (as with other types of sur-
vey samples), it is not without problems. As might be inferred from the
discussion of online access earlier, one crucial factor with probability-
based panels is that those who do not have Internet access must be pro-
vided with it. If not, then the elimination of such individuals results in a
biased sample that may not reflect the target population. As mentioned
earlier in this chapter, the Pew Research Center estimates 89 percent of
U.S. adults self-identify as Internet users. The fact that such an online-
only survey panel would exclude roughly one-in-ten adults might
be considered unimportant, but the true share of the population that will
be excluded from a Web-only survey is actually larger than that esti-
mate suggests. In addition to respondents who are not Internet users, the
Pew Research Center identified other problems of web panel participation
in connection with its American Trends Panel (ATP).29 Some respondents
invited to participate in the Pew Research Center’s ATP either did not
have or were not willing to provide an e-mail address in order to facili-
tate participation online. In fact, a little less than half of the typical mail
sample of the Pew Center’s ATP consisted of Internet users who, for one
reason or another, declined to participate in the ATP via the Web, and
that share likely would have been higher if researchers had not stopped
inviting these respondents to join the panel partway through recruitment.
In total, the weighted share of panelists in the Pew study of the ATP who
took surveys by mail was around 18 percent.30

Opt-in Panels

On the other hand, in opt-in or access panels, individuals volunteer to
participate. If they do not have Internet access, they cannot be part of
the panel. This fact, in turn, raises the issue of how representative of
the target population different online panels truly are in terms not just
of socio-demographics but also of attitudinal variables.31 Because people
volunteer to participate in opt-in panels, there is also a risk of professional
respondents, that is, respondents who frequently participate in surveys
and are mainly doing so for incentives.32 Succinctly put, the key charac-
teristic of opt-in panels is that the participant pool is not constructed with
random selection. It is worth noting that the majority of online research
is based on such nonprobability panels.33
An interesting trend fostered by the growth in opt-in online panels has
been a corresponding growth in third-party vendors. These vendors vet
potential participants based on different background characteristics and
willingness to meet the participation requirements of the vendor. Such
vendors, then, market these panels to companies, as well as governmental,
nongovernmental, and academic entities. These panels, which can range
in size from 100 to over 1,000,000, are usually recruited to match certain
characteristics sought by the research sponsor. For example, American
Consumer Opinion, a company that provides online panels, advertises
for panel participants as follows:

You will never have to pay any money to be a member. Your
participation in our surveys is the only “cost” of membership. Join
our paid online survey panel and help evaluate new products, test
new advertising, and tell companies what you think. Make your
opinions count.34

Indeed, a quick Internet search will show dozens of online survey
recruitment sites with catchy come-ons such as:

“Get free gift cards for taking polls, answering surveys and so much
more!”35 and “Want to earn money taking online surveys? Here's
your chance. Always high payouts. Free to join. Get paid for your
opinion. Over $236 million Awarded.”36

An example of a large opt-in survey panel is YouGov, a company
that bills itself as a global public opinion and data company. It currently
claims:

An online panel of over 6 million panellists [sic] across 38 countries
covering the UK, USA, Europe, the Nordics, the Middle East and
Asia Pacific. These represent all ages, socio-economic groups and
other demographic types which allows us to create nationally
representative online samples and access hard to reach groups,
both consumer and professional. Our US panel has 2 million
respondents.37

YouGov provides a variety of “incentives”; basically participants earn
points for taking part in YouGov surveys, which they can turn into cash
or vouchers.
Another popular online opt-in survey panel called Audience is
operated by the SurveyMonkey company. SurveyMonkey's Audience
panel is currently comprised of 2.4 million people in the United States.
These individuals are recruited from among those who take one of Survey-
Monkey’s surveys. Individuals who volunteer from this pool of p ­ otential
participants are incentivized by the company making a fi ­ fty-cent con-
tribution to the participant’s preferred charity, which according to
SurveyMonkey provides better representativeness: “We use charitable
incentives—and ensure diversity and engagement—so you get trustwor-
thy market insights,” and “. . . attracts people who value giving back and
encourages thoughtful honest participation.”38 Exactly how such a selec-
tion process and incentive system might accomplish this is not explained.
Clearly the advantage to businesses of having such participant-
ready survey panels is their immediate availability; the risk is whether
the panel truly represents the target population. The rapid expansion of
online survey vendors attests to the popularity (and likely profitability) of
these approaches but also raises concerns about quality.39 Somewhat
ironically, there are even companies that provide rankings of different
survey panel opportunities for potential participants.40
To sum up, online panels offer five advantages:

1. Perhaps the most familiar use of panels is to track change in at-
titudes or behaviors of the same individuals over time. Whereas
independent samples can yield evidence about change, it is more
difficult to estimate exactly how much change is occurring—and
among whom it is occurring—without being able to track the
same individuals at two or more points in time.
2. Considerable information about the panelists can be accumulated
over time. Because panelists may respond to multiple surveys
on different topics, it is possible to build a much richer portrait
of the respondents than is feasible in a single survey interview,
which must be limited in length to prevent respondent fatigue.
3. Additional identifying information about respondents (such as
an address) is often obtained for panelists, and this information
can be used to help match externally available data, such as vot-
ing history, to the respondents. The information necessary to
make an accurate match is often somewhat sensitive and difficult
to obtain from respondents in a one-time interview.
4. Panels can provide a relatively efficient method of data collec-
tion compared with fresh samples because the participants have
already agreed to take part in more surveys.
5. It can be possible to survey members of a panel using different
interviewing modes at different points in time. Contact informa-
tion can be gathered from panelists (e.g., mailing addresses or
e-mail addresses) and used to facilitate a different interview mode
than the original one or to contact respondents in different ways
to encourage participation.

On the other hand, survey panels have limitations:

1. They can be expensive to create and maintain, requiring more
extensive technical skill and oversight than a single-shot survey.
2. Repeated questioning of the same individuals may yield dif-
ferent results from what we would obtain with independent or
“fresh” samples. If the same questions are asked repeatedly, re-
spondents may remember their answers and feel some pressure
to be consistent over time.
3. Survey panels comprise many different types of samples. A fun-
damental distinction is between panels built with probability
samples and those built with nonprobability, or “opt-in” samples.
While probability panels are purportedly built on probability
sampling, there has been an explosion of nonprobability or opt-
in sample strategies. Using techniques such as weighting back to
the population, some providers of opt-in panels indicate they can
achieve representative online samples (a simplified sketch of such
weighting follows this list).
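To show what "weighting back to the population" can mean in its simplest form, the Python sketch below is our illustration with made-up age-group shares. It post-stratifies an opt-in sample on a single variable: each respondent in an age group receives a weight equal to the group's population share divided by its sample share. Vendors typically weight on many variables at once (for example, with raking), and no amount of weighting can remove every selection bias from an opt-in sample.

from collections import Counter

# Made-up opt-in panel respondents, tagged only with an age group.
sample = ["18-34"] * 500 + ["35-54"] * 300 + ["55+"] * 200

# Assumed population shares for the same groups (illustrative only).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

n = len(sample)
sample_share = {group: count / n for group, count in Counter(sample).items()}

# Post-stratification weight = population share / sample share.
weight = {g: population_share[g] / sample_share[g] for g in population_share}

for group in population_share:
    print(f"{group}: sample {sample_share[group]:.0%}, "
          f"population {population_share[group]:.0%}, weight {weight[group]:.2f}")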

For many, the Web survey has become the pathway to solve the prob-
lems of using traditional methods in a rapidly changing technological
landscape. Yet in the haste to take advantage of this technology, scant at-
tention has been paid to some of the fundamental elements of probability
sampling. As Groves insightfully points out:

The Internet offers very low per-respondent costs relative to other
modes; it offers the same within-instrument consistency checks
that CATI and CAPI offer; it offers the promise of questions en-
hanced with video content; and it offers very, very fast turnaround
of data records. When timeliness and cost advantages are so clear,
the problems of the absence of a sampling frame are ignored by
those parts of the profession whose users demand fast, cheap
statistics.41

Big Data and Probability Surveys


The Internet and mobile technologies are producing large databases as
information is routinely captured in written, audio, and video form. This
constant and largely unseen capture of information about individuals is
providing virtual warehouses of data on people throughout the
world. Robert Groves captures the heart of such data accumulation as follows:

We’re entering a world where data will be the cheapest commodity
around, simply because society has created systems that automati-
cally track transactions of all sorts. For example, Internet search
engines build data sets with every entry; Twitter generates tweet
data continuously; traffic cameras digitally count cars; scanners
record purchases; radio frequency identification (RFID) tags feed
databases on the movement of packages and equipment; and
Internet sites capture and store mouse clicks.42

Today the term Big Data has become increasingly used to describe
the resultant amalgam of information that is available for access through
companies, organizations, government agencies, and universities. More
than 15 years ago, Doug Laney provided a discussion of the three major
characteristics that distinguished Big Data from other forms of data
collection.43 Today his precepts are referred to as the Three Vs of Big Data.
As characterized by the University of Wisconsin’s Data Science program,
today the Three Vs of Big Data include:

1. Volume (high volume): The unprecedented explosion of data means
that the digital universe will reach 180 zettabytes (180 followed by
21 zeroes) by 2025. Today, the challenge with data volume is not so
much storage as it is how to identify relevant data within gigantic
data sets and make good use of it.
2. Velocity (high velocity): Data is generated at an ever-accelerating
pace. Every minute, Google receives 3.8 million search queries.
E-mail users send 156 million messages. Facebook users upload
243,000 photos. The challenge for data scientists is to find ways
to collect, process, and make use of huge amounts of data as it
comes in.
3. Variety (high variety): Data comes in different forms. Structured
data is that which can be organized neatly within the columns of
a database. This type of data is relatively easy to enter, store, query,
and analyze. Unstructured data is more difficult to sort and extract
value from. Examples of unstructured data include e-mails; social
media posts; word-processing documents; audio, video, and photo
files; web pages; and more.44

The massive amount of Big Data collected through the Internet
enterprise has offered great promise for survey research but also comes
with distinct warnings. On the one hand, the ability to collect, store,
and analyze so-called Big Data clearly offers opportunities to examine
the relationships between variables (topics of interest) previously un-
available, and on a scale of populations rather than small samples. In so
doing, many of the concerns about sampling and sampling error pre-
sumably fall away. At its extreme, Big Data provides the possibility of
simple enumeration data, requiring nothing more complicated than ad-
dition, subtraction, multiplication, and division to summarize results.
Some see the use of Big Data as a pathway to the elimination or at
least great reduction of the need to do traditional probability sampling
surveys at all!
However, several large issues do not bode well for a blanket abandon-
ment of traditional surveys in favor of simple analysis of Big Data. One
of the most important of these for survey researchers is that Big Data are
often secondary data, intended for another primary use. As Lilli Japec
et al. indicate, “This means that Big Data are typically related to some
non-research purpose and then reused by researchers to make a social
observation.”45 Japec et al. relate this to Sean Taylor’s distinction between
“found vs. made” data. He argues that a key difference between Big Data
approaches and other social science approaches is that the data are not
being initially “made” through the intervention of some researcher.46
Japec et al. also highlight other problems related to the nature of such
found data that are of concern to survey researchers, including the fact
that there often are no informed consent policies surrounding their cre-
ation, leading to ethical concerns, and they raise statistical concerns with
respect to the representative nature of the data.47
Despite the drawbacks, it is clear that Big Data is likely the 800-pound
gorilla in the room of survey research. In Big Data in Survey Research:
AAPOR Task Force Report, Lilli Japec and her associates describe the na-
ture of this new type of data as being transformative.48 The traditional
statistical paradigm, in which researchers formulated a hypothesis, iden-
tified a population frame, designed a survey and a sampling technique,
and then analyzed the results,49 will give way to examining correlations
between data elements not possible before. Whatever the methods are
that become the standards for the incorporation of Big Data, it seems
that much of the focus in the analytic process is moving away from con-
centrated statistical efforts after data collection is complete to approaches
centered around collecting, organizing, and mining of information. As
Jules Berman puts it, “the fundamental challenge in every Big Data analy-
sis project: collecting the data and setting it up for analysis. The analysis
step itself is easy; preanalysis is the tricky part.”50
Today survey researchers are faced with many issues. Many of these are
driven by rapidly changing technology that creates a moving target in the
development of data collection approaches. Some of the solutions attempt to
take advantage of the new methodologies within a framework of the existing
sampling paradigm. The web has become an easily accessible and inexpen-
sive tool for survey delivery, even though a large number of web applications
use nonprobability sampling methods, such as certain survey panels, and
therefore are suspect in terms of generalizing back to a larger population of
interest. With these new technologies come problems that affect the repre-
sentativeness of sampling when they are simply layered over designs created
around different data collection methods. The creation of new platforms for
survey delivery requires an examination of alternative approaches.51

Summary
The birth and development of modern computers are closely intertwined
with today's survey techniques.

• In the early days, general-use computing was dominated by huge,
expensive, mainframe computers, which were physically seques-
tered in secure facilities that were accessible only to a select number
of individuals.
• Since the mid-1960s, computer technology has grown at an expo-
nential rate, providing an amazing ability to put more and more
data-processing and storage-capacity hardware into smaller and
smaller units at a cheaper cost.
◦ Transformation in computer technology to these smaller, yet
more powerful computers then gave rise to the development
of desktop and laptop computing, which, in turn, led directly
to the development of a computer-assisted paradigm of survey
research.
◦ The computer revolution popularized computer-assisted tele-
phone interviewing (CATI) in the mid-1970s and computer-
assisted personal interviewing (CAPI) in the 1980s and 1990s.

The rapid miniaturization of memory chips with simultaneous expan-
sion of capacity ushered in a tremendous increase in the use of cell phones
and other portable “smart” devices.

• As computer technology moved from personal computers to
smaller handheld devices, the CATI/CAPI survey followed. Today
there are MCATI and MCAPI (with the M designating the mo-
bile device nature of these modalities), using platforms that have
moved interviewing onto mobile devices.
• While advances in technology enabling such platforms have pro-
vided greater elasticity in conducting interviews and online sur-
veys, they have introduced their own set of coverage,
sampling, nonresponse, and measurement problems.
• Availability and familiarity make the web-enabled smartphone or
mobile device a prime modality for a wide range of survey data
collection.
• The survey panel as a web-based survey option is gaining popularity,
particularly because of low response rates to traditional telephone
surveys and the growing availability of mobile Internet access.
◦ Within online panel research, there are two distinct types:
probability-based panels and opt-in or access panels.
◦ The majority of online research is based on such nonprobability
panels.

Internet and mobile technologies are resulting in the capture and use of
large databases of information. The term Big Data has become increasingly
used to describe the resultant amalgam of information that is available for ac-
cess through companies, organizations, government agencies, and universities.

• Big Data problems identified as most important for survey researchers
include: (1) Big Data are often secondary data, intended for another
primary use (identified as found data); (2) often there are no informed
consent policies surrounding their creation, leading to ethical con-
cerns; and (3) there are statistical concerns with respect to the repre-
sentative nature of the data.
• Whatever the methods are that become the standards for the
incorporation of Big Data, it seems that much of the focus in
the analytic process will move away from concentrated statistical
efforts after data collection is complete to approaches centered on
collecting, organizing, and mining of information.

Annotated Bibliography
Survey Sampling and Technology

Some resources for the impacts of technology on sampling methodologies
and modalities include:

• See AAPOR’s (American Association for Public Opinion Research)
2014 report Mobile Technologies for Conducting, Augmenting
and Potentially Replacing Surveys.52
• Brick provides a good review of the forces now shaping survey sam-
pling in his Public Opinion Quarterly (75, no. 5, pp. 872–888) article
“The future of survey sampling.”53
• Similarly, Courtney Kennedy, Kyley McGeeney, and Scott Keeter
(2016) provide a recent discussion on the transformation of survey
interviewing as landlines disappear in “The twilight of landline
interviewing.” http://www.pewresearch.org/2016/08/01/
the-twilight-of-landline-interviewing/54 (accessed October 2, 2018).

There are many research studies that have compared the results of
survey administration using different survey platforms. Here are some
examples of the different avenues of this research:

• Melanie Revilla and Carlos Ochoa discuss differences in narrative
questions and responses using PCs and smartphones in their article
“Open narrative questions in PC and smartphones: Is the device
playing a role?”55
• For a good review and more global perspective, see Daniele Toninelli,
Robert Pinter, and Pablo de Pedraza's Mobile Research Methods:
Opportunities and Challenges of Mobile Research Methodologies.56
• Tom Wells, Justin Bailey, and Michael W. Link examine differences
between smartphone and online computer surveys in “Compari-
son of smartphone and online computer survey administration.”57
• Similarly, Yeager et al. explore quality differences between RDD
(telephone survey) and Internet surveys in “Comparing the accu-
racy of RDD telephone surveys and Internet surveys conducted
with probability and non-probability samples.”58

Online Surveys

In the past decade or so, there has been much discussion regarding online
surveys, particularly as they have migrated from PCs to mobile devices
(which are rapidly becoming the predominant way to access the web).

• Baumgardner et al., for example, examine the impacts of the Census
Bureau’s move to an online version of the American Community
Survey: The Effects of Adding an Internet Response Option to the
American Community Survey.59
• Mick Couper’s book, Designing Effective Web Surveys,60 provides
a nice overview of the fundamental issues surrounding the design
and implementation of online surveys.
• Roger Tourangeau et al. similarly explore online surveys with
additional insights gained since Couper's earlier volume in their
2013 book, The Science of Web Surveys.61

Survey Panels

Online survey panels have become one of the most ubiquitous and often
controversial elements of the transition of survey research to mobile
electronic platforms.

• Nearly a decade ago, the American Association for Public Opinion
Research took on the topic of online panels, exploring the strengths
and weaknesses of panel-type surveys, and discussed how such panels
would likely integrate into the larger realm of survey research.
AAPOR (American Association for Public Opinion Research). 2010.
AAPOR Report on Online Panels. Also see Baker, R., S. J. Blumberg,
M. P. Couper, et al. 2010.62 “AAPOR Report on Online Panels.” Public
Opinion Quarterly 74, no. 4, pp. 711–781.
• Similarly, Callegaro et al. revisited the online panel issue, focusing
on data quality, which has been one of the major concerns of panel
surveys, in their volume, Online Panel Research: A Data Quality
Perspective.63
• More data is being accumulated on panels as they migrate to
mobile devices. A good example of recent comparative research
is Peter Lugtig and Vera Toepoel's 2016 article: “The use of PCs,
smartphones, and tablets in a probability-based panel survey:
Effects on survey measurement error.”64

Big Data and Probability Sampling

The availability of massive quantities of secondary data gleaned from the
everyday use of the web has both excited and alarmed the survey research
community.

• In 2015, AAPOR (American Association for Public Opinion
Research) undertook an extensive review of the impacts of Big
Data on survey research, which can be found in AAPOR Report
on Big Data.65 Also see Lilli Japec et al.'s discussion: “Big Data in
Survey Research: AAPOR Task Force Report.”
• AAPOR (American Association for Public Opinion Research) also
examined the larger topic of nonprobability sampling in 2013 in
the Report of the AAPOR Task Force on Non-Probability Sampling.66
• Berman provides a more global review on Big Data in Principles
of Big Data: Preparing, Sharing, and Analyzing Complex Informa-
tion.67 (Note: Some of his discussion is presented at a fairly high
technical level.)
• Kreuter’s edited book provides an examination of information
becoming available through online transactions as it relates to survey
research: Improving Surveys with Paradata: Analytic Uses of Process
Information.68
• Lampe et al. presented an interesting paper dealing with the critical
issue of trust in the application of Big Data to social research at the
AAPOR meeting in 2014: “When are big data methods trustwor-
thy for social measurement?”69
• Finally, an intriguing look at the impacts of Big Data is provided
by Mayer-Schonberger and Cukier in Big Data: A Revolution That
Will Transform How We Live, Work, and Think (2013).70
CHAPTER 5

Presenting Survey Results

How we present our survey results is one of the most important aspects
of the entire survey effort. It doesn't matter how well the survey was con-
ceived, designed, or executed; if it's poorly or inaccurately presented,
none of the effort that went into the survey will be recognized. Moreover,
important findings may never get the visibility or attention they should.
Paul Hague and his colleagues echo this notion in their book Market-
ing Research: “These reports should be the crowning glory of the huge
amount of time and money that has been invested in the research and yet
so often the results are disastrous.”1
In this chapter, we try to provide you with some guidelines based on
our own experience and the expertise of others on how to ensure that
survey results are presented well.
In this discussion, we use the term survey presentations generically to
refer to all forms of reporting, including written materials; verbal presen-
tations; and visual materials such as PowerPoint, web graphics, and so
forth. As a memory tool and a means of organizing the discussion, we
frame survey presentations around a simple three-part acronym, ACE,
which highlights the three major considerations in developing a survey
presentation.

A—Audience: Who is the audience of the presentation?
C—Content: What are the key points that we need to convey?
E—Expression: How do we present the survey in a way that is clear,
understandable, and complete?

Too often, the expression or appearance of the presentations be-
comes the main focus in their development. And too often the dominant
themes in constructing presentations become centered on things such as
formatting questions: Is the blue background in the PowerPoint too dark?
or Do you think that the color graph will copy well in black and white?
Because these concerns should not be the drivers in the development of
presentations and because your material may be presented in several dif-
ferent formats and media types, we recommend that the first two key
components—the audience and the content—become the first areas to
get your attention. Once these areas are addressed, the expression of the
information will be a much easier task.

The Audience
You may recall that in Chapter 4 (Volume I) we discussed the three
primary stakeholder groups in the survey process: the sponsors, the re-
searchers, and the participants. The sponsor, in addition to identifying
the purpose of the survey, the population of interest, the timeline, and the
approximate resources available, should specifically indicate what project
deliverables are expected, such as a report, a presentation to a specific audi-
ence, or the submission of data files. There should be agreement on these
deliverables with regard to the presentation. Should there be a written
report? If so, is there a specification of the topics that are to be covered in
the report? Is there a requirement to make a live presentation to an execu-
tive or administrative group, or perhaps in a more public forum, such as
a public hearing or a shareholders’ meeting? Is there to be an online pre-
sentation, possibly being posted to the organization’s website, or a Twitter
or Facebook posting? Thus, how the sponsor wants those findings pre-
sented should be explicitly stated in the deliverables the sponsor provides
in the initial stages of the project. Sponsors sometimes don’t realize that
suddenly requesting a different type of presentation or multiple format
presentations at the conclusion of the project will take additional time
and resources that may not fit within the original timelines or budget. It is
important to remember that each different type of presentation not only
requires a different presentation format but also brings different audience
considerations into play.
Beyond the explicit conditions for the presentation detailed in the
deliverables, there are also implicit specifications for the presentation cen-
tered on the sponsor’s stated or unstated expectations of what is the most
important information and how that information should be presented.
Sometimes these implicit expectations closely match the formal specifica-
tion of deliverables, but in other situations, the sponsor’s real expectations
may be very different. For example, the deliverables may call for a detailed
report covering all aspects of the survey project, but in conversations with
the individuals in the sponsoring organization, they may indicate they’re
most interested in an executive summary and a 30-minute presentation,
with an opportunity to ask questions. In this case, if the researchers put
in a great deal of effort producing a massive tome on the survey but fail to
deliver a concise, understandable executive summary, or if they created a
presentation that was essentially just images grabbed from the report and
pasted into a PowerPoint, the sponsors would be very dissatisfied with the
presentation and see little value in the survey or its findings. Therefore,
in addition to being very familiar with the project’s stated deliverables,
it is critical for researchers to get to know their audience and what that
audience expects regarding the presentation of the survey and its findings.

Knowing Your Audience

Too often research presentations, especially reports and PowerPoint-type
presentations, are created with little thought about who will be reading
or viewing their content. Like compromising photos that ultimately
end up posted on the Internet, what they reveal may be expected by
one audience, misinterpreted by another, and totally misunderstood by a
third. Therefore, it is very important to know and understand your audi-
ence. Conversations with sponsors are an essential part of the develop-
ment process, not just to understand their expectations but to gauge how
best to present material to them.

The Hidden Lack of Understanding by the Audience

The increasing use of surveys to gather information on different popula-
tions in the academic, political, and business realms has created a con-
stant stream of survey results. The fact that surveys have become such
a common part of the landscape in information gathering has created a
familiarity with surveys to the point of sometimes promoting a false sense
of understanding them. As Lynn McAlevey and Charles Sullivan aptly
note, “The news media almost daily quote from them, yet they are widely
misused.”2
In a study focusing on the understanding of surveys, McAlevey and
Sullivan looked at students with prior managerial experience embarking
on an MBA program. What they found was that common sample sur-
vey results are misunderstood even by those managers who have previous
coursework in statistics. Those managers with some statistics
background fared no better than managers who had never studied sta-
tistics. McAlevey and Sullivan’s succinct conclusion was “In general, we
find no difference. Both groups misuse the information substantially.”3
McAlevey and Sullivan put the implications of this hidden lack of under-
standing about survey methodology into perspective this way:

For example, great statistical care may be used to take account of the
effects of complex survey design (e.g., stratification and clustering)i
on estimates of sampling error. But what is the practical value of this
if the primary users have gross misconceptions and misunderstand-
ings about sampling error?4

i These were discussed in Chapter 2 of Volume I, “Sampling.”

If a lack of understanding prevails in the audience, then presenta-
tion emphasis on methodological and design features important to the
researchers may simply be lost. Perhaps even more importantly,
such a disconnect could have a negative impact. Brunt,5 for example,
notes that sample surveys have counterintuitive properties for nonspecial-
ists. Thus, a lack of understanding by the audience creates mistrust of the
survey process and in the end a rejection of the findings, basically result-
ing in the proverbial “throwing the baby out with the bathwater.”
For these reasons, the conversations with the individuals to whom the
presentation is directed should not only focus on their expectations but
also assess their understanding of the survey process. This latter area can
be somewhat sensitive, particularly for individuals in upper-level man-
agement who may not want to appear unknowledgeable, especially if the
presentation will include subordinates. One way of approaching this is to
point out that every specialty has its own set of technical concepts and
jargon, and ask them which of these they think would be helpful to re-
view in the survey presentation. Another way is to split the audience into
homogeneous groups, as we discuss in the following section.

Types of Audiences

In her discussion of survey presentations, Arlene Fink classifies audi-
ences into three categories: nontechnical, technical, and mixed (both
nontechnical and technical).6 The nontechnical audience could best be
characterized as:

• Primarily interested in the survey findings;
• Wanting to know if the findings are important, and if so, what
makes them important;
• Wanting to know how to use the results;
• Not interested in the methodological details, including the data
collection processes; and
• Not understanding the details of the analysis, and not wanting to
see a lot of statistics or an emphasis on data presentation.

By contrast, the technical audience is interested in the details of the
study’s design and methodology. This audience is characterized as being
interested in:

• Why a particular design was used;
• The way the sample was selected;
• How the data collection was carried out;
• Characteristics of the respondents, information about response
rates, and details about nonresponse; and
• Details of the analysis, including a detailed data review, and even a
discussion of survey error.

The third audience is one composed of both nontechnical and
technical members. From the perspective of preparing or presenting infor-
mation on a survey project, the mixed audience can be a major problem,
but it can also be an asset. The problem is that nontechnical and technical
members of a mixed audience will come in with contrasting and sometimes
conflicting expectations. One option to deal with the conflicts ­between the
technical and nontechnical is to prepare separate presentations for each.
For example, reports can be separated into components (an idea we’ll ex-
plore later in the chapter) or two separate meetings can be held, one for the
technical audience and another for the nontechnical audience. This notion
can also be extended to other logical groupings. For example, separate pre-
sentations might be made to different groups based on positions, such as
departments or divisions within the sponsor’s organization. As Paul Hague
et al. point out, the different members of an audience can also vary by their
positions, which create different needs and expectations.

Typical audiences for [marketing] research reports consist of prod-
uct managers, marketing managers, sales managers, market research
managers, business development people, technical development
managers and of course the “C-suite” of top executives. A researcher
needs to balance the needs of these groups within the report.

The job responsibilities of the audience will strongly influence the
specific intelligence they look for from the report. Sales people
want to know specifics such as what each of their customers, and
especially potential customers, is thinking and doing. Communi-
cations managers are interested in different things, such as which
journals people read, which websites they visit and what messages
are effective. Technical staff is likely to be interested in which
product features are valued.7

The value of being confronted by a mixed audience is that it forces
those presenting the results to consider topic areas that would other-
wise be missed if only a single audience type was involved. For example,
let’s say that an online health care survey sent to a health care organiza-
tion’s members had a very low response rate for individuals in the 65- to
75-year-old age range; yet the overall survey results found that mem-
bers indicated they would benefit from online health care information.
If a presentation only included a broad overview of the results without
a more detailed view of the methodology and response rates, the deci-
sion makers might erroneously decide that going forward with an online
health awareness campaign would be a great idea, when in reality, such a
program would be of little value to this very important demographic in
the health organization’s membership.

The Content
The principal focus of the presentation will be to answer the research
questions and address the study’s objectives, which were first identified at
the beginning of the study. The content should lead the audience to those
results by addressing (1) why the study was undertaken (introduction
and background), (2) how it was designed and structured (methodol-
ogy), (3) how the information was gathered (data collection), (4) how the
data were examined (the analysis), (5) what the findings (results) were,
and (6) what the findings mean (summary and recommendations). If
this sounds familiar, it's because it follows the traditional structure for
conducting and reporting research—a standard format in academic
research. It is commonly presented as:

• Statement of the problem
• Review of the relevant literature
• Methodology
• Data collection
• Data analysis and findings
• Conclusions
• Summary and recommendations

To illustrate, the APA report style,8 developed by the American Psy-
chological Association and widely used throughout the social sciences,
typically divides research publications into seven major sections:

• Title page—The title of the paper, the names of authors, and the
affiliations of the authors.
• Abstract—A brief overview of the entire project of about 150 to
250 words.
• Introduction—The background and logic of the study, including
previous research that led to this project.
• Method—Minute details of how the study proceeded, including
descriptions of participants, apparatus, and materials, and what
researchers and participants actually did during the study.
• Results—A detailed statement of the statistics and other results of
the study.
• Discussion—What the results tell us about thought and behavior.
◦ Multiple experiments (if appropriate)
◦ Meta-analysis (if appropriate)
• References—Where to find the work cited in the paper that relates
to the presentation.9
◦ Footnotes—Explanatory reference notes to items discussed in
the text
◦ Appendixes and Supplemental Materials.ii

ii Note: The APA Style Manual is particularly oriented toward the publication of
research in peer-reviewed journal formats. Survey research published in these
journals would generally follow this format.

Of course, not every research report follows this formal format. Some
commonly found components may be combined or embedded within
another section; for example, a statement of the hypothesis may be in-
cluded as part of a statement of the problem section. Further, while some
survey research reports are structured in a very formal style, particularly
when they are presented in academic journals or in formal conference
settings, survey reports using a more informal structure are more com-
mon, especially when the reports are primarily intended for the sponsor’s
use. Some of the differences include the fact that in the more informal
presentations, a literature review will likely be omitted (unless specifi-
cally requested), or if presented, it will be in an abbreviated format, such
as quickly reviewing recent surveys similar to the current effort. More
informal survey reports and PowerPoint-type presentations also tend to
have a lower level of granularity in the information in the methodol-
ogy and results sections. By a lower level of granularity, we mean the
level of detail is less. For example, in more informal presentations, the
methodology section may contain only the key points of the design,
sampling approach, and data collection. It is also common, particularly
with surveys that employ sophisticated methodologies or use consistent
methodological approaches across similar surveys, to break the method-
ology section out into an entirely separate document or to place it in a
report appendix.10 Similarly, the results section will contain less detail
in the body of the report, again placing detailed information, such as
a comprehensive breakdown of the survey population, in an appendix
at the end of the report. Finally, in the more informal survey format,
an abstract is often not included. However, one component not typi-
cally included in more formal presentation formats, but commonplace
in today’s organizationally sponsored survey presentations, is an execu-
tive summary.

Executive Summaries

As the title might suggest, executive summaries are usually directed at ex-
ecutives or decision makers primarily because they might not have time
to attend a full presentation or read an entire survey report. Because the
executive summary may be the only exposure that some people may get
to the survey content and findings, it is important that it presents the sur-
vey approach and results information as accurately as possible and that it
captures all the important content of a larger report or presentation. Like
the report or live audience presentation, an executive summary should be
targeted to the audience. Executive summaries are typically one to three
pages long and frequently use a bulleted-type format rather than lengthy
narrative discussions. The executive summary usually contains the follow-
ing elements:

• Brief overview of the survey: Why the survey was conducted, who
or what organization (or department within a large organization)
initiated the survey, when and where it was conducted, and any
other key background points.
• Goals and objectives of the survey: These are lifted from the goals
and objectives that were stated at the beginning of the survey.
The discussion should include which goals were met (e.g., which
research questions were answered and which were not). For those
goals and objectives that were not met, a brief explanation should
be provided as to why not.
• Survey methodology: Again, this section of the executive summary will
be a much-truncated version of the material presented in a full report
or presentation to an audience. It should, however, contain all the es-
sential elements of the methodology, including who the participants
of the survey were, how they were selected (what type of sampling
was used), what the sample size and response rate were, what survey
mode (e.g., interviewing, online, mail-out questionnaire, mixed) and
data collection procedure were used, and what type of survey instru-
ment was used, including a brief description of the kinds of ques-
tions used. Any specific difficulties or unusual circumstances that
might affect the results should also be mentioned here, for example,
if a major blizzard occurred that affected the survey administration.
• Survey results: This section will briefly explain the major find-
ings of the survey. The research questions should specifically be
addressed here. Any significant unusual or unexpected findings
should be highlighted.
• Recommendations: This is an optional section. Some sponsors
want recommendations stemming from the survey to be included.
Others may prefer that recommendations be omitted.

Executive summaries are typically attached to the full report or are
disseminated in conjunction with the presentation or meeting with a live
audience. However, this isn’t always the case. In preparing an executive
summary, it is wise to consider the possibility that this may serve as a
standalone summary and may be the only document about the survey
that some may see. For this reason, it is good to have someone not famil-
iar with the survey read it to make sure it covers the important points.
In this regard, we have sometimes found it very useful to reference more
detailed materials in the executive summary that are available outside the
document, such as sections in a full report, including page numbers, so
those reading the executive summary will have access to more detailed
backup material, if it is desired.

Privacy, Confidentiality, and Proprietary Information

As we are constantly reminded by the news media, there is no such
thing as privacy. When conducting a survey, it is important to keep in
mind that the materials produced may have a much longer shelf-life
than was originally intended. Results of a consumer satisfaction survey
conducted years before may end up in a later product liability court
case. Similarly, the release of personal data about survey respondents
may violate not only ethical (and sometimes legal) standards but could
ultimately cause harm to the individuals who were willing to take part
in the survey. Thus, it is crucial to keep in mind that certain types of
information collected during a survey may be confidential, anonymous,
or proprietary.
There are federal regulations that protect individuals from harm during
the course of research. These regulations are referred to as human subject
protections. These protection regulations were developed, in part, due to
past abuses of individuals during research, such as the now infamous
Tuskegee syphilis clinical research.11 Federal regulations pertaining to
the protection of human subjects can be found online,12 and a review of
these regulations related to survey research can be found on the American
Association for Public Opinion Research (AAPOR) website.13
There are three points in the survey process and its presentation where
the privacy, confidentiality, and data proprietary issues become particu-
larly relevant. The first occurs in terms of who has access to the data col-
lected. If personal data, such as a respondent’s name, birth date, contact
information (address and phone number), social security number, and so
forth, are obtained, then there is a duty by the researchers to notify the re-
spondents that the data are being collected and how the data will be used,
and to make a diligent effort to keep that information from becoming
public. If the survey participants are told their information will be kept
in strictest confidence, then both the researchers and the sponsor have
an obligation to uphold this promise. In this regard, it is also important
to make sure the researchers and those employed by research firms have
signed data confidentiality agreements, which detail the requirements for
maintaining data confidentiality and the obligations of those who have
access to the data.
Second, when survey data are provided to sponsors as a deliverable, they should be de-identified, meaning that personal or confidential informa-
tion should be removed. That way, if the data somehow become public
at a later time, no one will be able to connect the specific respondent’s
personal information with that individual’s responses on the survey. This
safeguard is particularly important with surveys that focus on sensitive
topics, such as employee job satisfaction. Similarly, it is common practice
to aggregate individual survey responses in reporting results, so data are
only reported at the group rather than individual level. For example, in
reporting survey results, only the percentage of respondents who checked
a particular response category on a closed-ended question is reported.
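To make these two safeguards concrete, the following minimal Python sketch (ours, using the pandas library and an invented three-respondent file with hypothetical column names) shows one way de-identification and aggregation might be carried out before results are shared.

import pandas as pd

# Invented respondent-level file: identifying columns sit alongside the responses.
raw = pd.DataFrame({
    "name": ["A. Jones", "B. Smith", "C. Lee"],
    "phone": ["555-0101", "555-0102", "555-0103"],
    "satisfaction": ["Very satisfied", "Satisfied", "Very satisfied"],
})

# De-identify: drop the personal columns before the file leaves the research team.
pii_columns = ["name", "phone"]
deidentified = raw.drop(columns=pii_columns)

# Aggregate: report only the percentage choosing each response category,
# never the individual-level answers.
percentages = (
    deidentified["satisfaction"]
    .value_counts(normalize=True)
    .mul(100)
    .round(1)
)
print(percentages)

The resulting percentages are what would appear in a report or presentation; the respondent-level file itself stays behind, protected by the confidentiality agreements described above.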
Third, there is also the issue of the proprietary nature of the data
collected, the techniques used for data collection, and the methods of
analysis. Both the sponsor and the researchers have vested proprietary
interests at stake. The sponsors may have provided the researchers with
proprietary information, such as contact information, for clients taking
part in a social program, a customer database, or confidential information
about marketing a new product. If the researchers are in-house, this is
less of a problem than when the survey is being conducted under con-
tract. In Chapter 4 (Volume I), we indicated that the sponsor owns the
survey, but the extent and terms of this ownership need to be agreed
upon at the beginning of the process. If not addressed beforehand, these
problems can become major challenges in the presentation of the survey results.
For example, a sponsor may want a full disclosure of the sampling meth-
odology, including the algorithms used to weight the survey responses
in the final report. The researchers may decline to provide this because
they developed it by extracting and statistically manipulating population
data obtained from other sources. Similarly, if researchers were to include
proprietary information about a health care provider’s intent to expand
services in a particular geographic location when making a live audience
survey presentation, without first obtaining the sponsor’s permission, a
serious problem could emerge. Again, good communication between the
researchers and sponsors about the materials to be included in written
reports, executive summaries, live audience meetings, and online postings
of materials is essential to ensure that there is agreement about proprietary
content, timing of the presentation, and ultimate ownership of materials.
The Expression
We use the term expression to refer to the way in which the content of the
survey is presented or delivered. Just as the mode of delivery is the platform
for getting the survey instrument to participants (you may want to review
the discussion on mode of survey delivery in Chapter 5—­Volume I), the
expression of the survey findings is the platform upon which the content
is delivered. Before we turn to specifics of expression, there are three im-
portant reminders that should be kept in mind when preparing a survey
presentation. First, the presentation should focus on substance, not style.
If the content is not well defined or organized, no matter how well it is
presented, the audience will not get the intended information. Second,
it is important to remember that the content of the survey remains the
same irrespective of whether it’s presented in a formal report, an execu-
tive summary, a PowerPoint presentation, or in other venues. While each
of these different types of presentations shapes how the information is
formatted, the level of detail presented, and the length of the presenta-
tion, the fundamental elements to be presented are the same. As a checks-and-balances review of a survey report, PowerPoint deck, and so forth, it is useful to ask yourself, "What would I change if I were completing this
in a different format?” A third point to keep in mind is that a survey pre-
sentation reflects the researchers and sponsors. If it is poorly constructed,
has grammatical or spelling errors, has inappropriate language, or sets an
improper tone, those problems will be attributed not just to the survey
but also to the organizations and individuals who commissioned, created,
and carried it out.

Presenting Data

Presenting survey data is one of the most difficult parts of the presentation. Nothing will cause an audience's eyes to glaze over more quickly than
PowerPoint-type slides with row after row of numbers presented on large
tables. Similarly, a report with page after page of tables or figures display-
ing survey statistics is one of the best tonics around for sleeplessness. As
Hague and his colleagues note, “Data are the problem. Often there is
so much information it is difficult to work out what to include and ex-
clude, and making sense of it is not so easy.”14 Unfortunately, sometimes
researchers present massive tables filled with data simply because it’s eas-
ier than spending the time and effort to distill the information down to
a summary level. Summary presentation, of course, does not mean that
the detailed data should not be available (with a reference in the report
body or PowerPoint presentation to the appropriate report appendix or separate document where they can be found) for those who want
to dig deeper or verify the accuracy of the summary. However, if you
are confronted with statements in reports or meetings such as, “As you
can see from the percentages in the third column of the second page of
Table 2 . . .,” a red flag should go up in your mind. If the information
was important enough to point out in raw form, then why did those
presenting the material not take the time to synthesize and explain this
important information?

Text, Graphs, and Tables

Synthesizing information and providing commentary on statistics brings meaning to the data collected in surveys. "Writing commentary to ac-
company statistics should be approached as ‘statistical story-telling’ in
that the commentary needs to be engaging to the reader as it provides an
overview of the key messages that the statistics show.”15 It is this synthesis
and commentary that provides both context for the data presented and a
connection between different pieces of the results. What, then, is the best
way to present data? Basically, information can be presented in text, table,
or graphic form and will “generally include all three approaches as these
assist in ensuring that the wider user base is catered for.”16 Again, selecting
the approach to presenting the information should be based on the needs
of the audience.
Text is used for commentary on and summary of numerical or
quantitative results. It is a good way to point out particularly significant
statistical findings, which might be lost in the numbers reported in a
table or depicted in a graph. It might be considered the train that carries
the statistical story mentioned earlier. For example, a conclusion might
point out that “More than 95 percent of the respondents said they were
‘very satisfied’ with the service they received,” or “There were statistically
significant differences between men and women as to how satisfied they
were with the service they received.” However, using only text to express
large numbers is time consuming and requires a great deal of effort from the audience to decipher. For example, text saying, "The initial survey was mailed to six-thousand, five-hundred individuals," is much harder to grasp than saying, "The initial survey was mailed to 6,500 individuals." The APA style manual, mentioned earlier in this chapter, uses the convention of spelling out numbers below 10 and using Arabic numerals for 10 and above.17 Similarly, trying to relate a lot of numeric data in
a sentence will cause the point to become muddied and vague. Consider
the following text description:

According to the survey findings, almost no respondents (.9%) were "very dissatisfied" with the service they received, but ap-
proximately 35% of the respondents were “dissatisfied,” while
32% were “satisfied,” and another 32% reported they were “very
satisfied.”

It may take a couple of readings to pick up the idea that a negligible percentage of respondents said they were very dissatisfied with the program
and that there was little difference between the respondents on the re-
maining response categories. In this case, a graph might be a simpler and
more straightforward way to point out the similarities between responses
given in three categories, as illustrated in Figure 5.1.

Figure 5.1  Responses to survey question "Are you satisfied with the program?"
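Because the book's figure is not reproduced here, the short matplotlib sketch below only approximates Figure 5.1, using the hypothetical percentages quoted in the text; the actual chart type and styling in the original may differ.

import matplotlib.pyplot as plt

# Hypothetical response distribution taken from the text description above.
categories = ["Very dissatisfied", "Dissatisfied", "Satisfied", "Very satisfied"]
percentages = [0.9, 35, 32, 32]

fig, ax = plt.subplots(figsize=(7, 4))
ax.bar(categories, percentages)
ax.set_ylabel("Percent of respondents")
ax.set_title('Responses to "Are you satisfied with the program?"')
plt.tight_layout()
plt.show()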
It is important here that you not get the impression that all text in survey presentations must be in the briefest form possible. Being too brief can be just as damaging as providing too much information. Essentially, while the old expression "Brevity is the soul of wit" may be true, we would argue that "Too much brevity is the soul of a half-wit." The point we are trying to make with this rather bad joke is that if wording becomes too general or leaves out too much detail, it can become as meaningless and unintelligible as trying to describe a table's worth of data in a sentence. Take the following sentence, which might be found in a report, as a bullet point in an executive summary, or on a PowerPoint slide: "The change in customer satisfaction was tremendous!"
The sentence is meaningless because the terms used are vague and
abstract. Due to a lack of detail, the audience wouldn’t know whether
the change in satisfaction was up or down—a small detail that could have
major consequences. Similarly, the word tremendous, while having an
emotional connotation, doesn’t convey any real sense of magnitude. What
do you think should be done to improve the wording?
As the example presented earlier demonstrates, graphic representa-
tions of data can be used to present or relate complex findings as an alter-
native to descriptions using text, or to further illustrate summary points
made in the text. A simple visual chart or graph is an easy way not only
to provide precise data values but also to show relationships between dif-
ferent pieces of data. This feature is particularly useful in PowerPoint pre-
sentations, where reading extensive material on a slide may be difficult
and unproductive. While an extensive review of designing and creating graphics is
beyond the scope of our discussion here, there are a few basic ideas you
should keep in mind when considering graphs and charts in your presen-
tation. (There is a considerable amount of research available on this, and
we provide a couple of good references in the Annotated Bibliography at
the end of the chapter.)
One of the problems that we face in creating good graphics for pre-
sentations is that our graphic software has grown so sophisticated that it
is easy to create a graphic that is both attractive and bad at the same time.
The first fundamental rule of creating graphs and charts is that they
should be appropriate to the type of data they are displaying. For example,
categorical information is best displayed in graphics that capture distinct units, such as a bar chart. (Figure 5.2 provides a simple illustration.) A common mistake is connecting categorical data in a trend graph, which is more appropriate for showing change over time (see Figure 5.3). Similarly, data that represent percentages should be displayed in a format that shows portions of a whole (usually totaling 100 percent). Pie charts and stacked bar charts are effective graphs for this purpose (see Figures 5.4 and 5.5).

Figure 5.2  Employee satisfaction by position level (bar chart representation)

Figure 5.3  Employee satisfaction by position level (line graph representation)
Figure 5.4  Percent of college workforce

Figure 5.5  Comparison of four-year university and community colleges by position type
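As a rough illustration of matching the chart to the data (the workforce percentages and labels below are invented, not taken from Figures 5.4 and 5.5), percentage data that sum to a whole lend themselves to a pie chart or a stacked bar chart rather than a trend line:

import matplotlib.pyplot as plt

# Invented workforce composition percentages for two institution types.
position_types = ["Faculty", "Administration", "Support staff"]
four_year = [55, 20, 25]   # each list sums to 100 percent
community = [45, 15, 40]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

# Pie chart: shares of a single whole.
ax1.pie(four_year, labels=position_types, autopct="%1.0f%%")
ax1.set_title("Four-year university workforce")

# Stacked bars: the same parts-of-a-whole idea, compared across two groups.
bottoms = [0, 0]
for shares, label in zip(zip(four_year, community), position_types):
    ax2.bar(["Four-year", "Community"], shares, bottom=bottoms, label=label)
    bottoms = [b + s for b, s in zip(bottoms, shares)]
ax2.set_ylabel("Percent of workforce")
ax2.legend()

plt.tight_layout()
plt.show()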

Graphs should be simplified as much as possible. Years ago, Edward Tufte, one of the premier researchers in the field of the visual presentation of data, coined the term "chartjunk"18 to describe elements that appear in graphs and distract the reader from understanding their content. He suggested that, if you create a figure, you should maximize the amount of relevant information and minimize anything that would distract the reader. Tufte identified a principle he referred to as maximizing the data-ink ratio: graphs become more useful as the share of ink devoted to data grows and non-data ink shrinks.19 His criticisms of extraneous graphic objects or tools inserted into graphs included the use of unnecessary and distracting lines and background grids. He also objected to patterns and visual features, such as three-dimensional depictions of data that are only two dimensional.20 Figure 5.6 shows a hypothetical worst-case scenario in terms of the clutter that can make a graph unreadable, while Figure 5.7 displays the same information in a much cleaner graphic.

Figure 5.6  Premier healthcare—facility ratings

Figure 5.7  Premier healthcare—facility ratings
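The contrast between Figures 5.6 and 5.7 can be mimicked in code. The sketch below (our own construction, with invented facility ratings) applies a few of the decluttering moves just described to a plain matplotlib bar chart: no background grid, no heavy border, and the values labeled directly on the bars.

import matplotlib.pyplot as plt

# Invented facility ratings on a 1-to-5 scale.
facilities = ["Clinic A", "Clinic B", "Clinic C", "Clinic D"]
ratings = [4.2, 3.6, 4.8, 3.1]

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(facilities, ratings, color="gray")

# Strip non-data ink: no grid, no top or right spines, no tick marks.
ax.grid(False)
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.tick_params(length=0)

# Label the data directly rather than sending readers to the axis or a legend.
for i, value in enumerate(ratings):
    ax.text(i, value + 0.05, f"{value:.1f}", ha="center")

ax.set_ylabel("Average rating (1 to 5)")
ax.set_title("Premier healthcare: facility ratings")
plt.tight_layout()
plt.show()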

Julian Champkin perhaps captured the essence of creating good graphics best when he said, "The mark of a good graphic is that the user
reads the diagram, not the caption. Information should be in the picture,
not in the words underneath.”21
Presenting tables and presenting figures share much in common.
Tables are a convenient way to report lots of data in an organized and
succinct manner. They are particularly helpful in presenting the details of
the data. For example, numbers containing decimal points are difficult to
read in text, particularly if you are presenting a series of numeric values.
In Western countries, tables are commonly read top to bottom, left to
right. The fact that there are columns and rows in a table makes it conve-
nient to provide data both across and within different number sets. For
example, if you want to compare survey responses across three different
cities, then it is relatively easy to construct a table with three columns,
one for the responses from each city. By then looking across the rows of
the three columns, you could compare the responses to a particular sur-
vey question across the three cities. However, within the same table, you
could also look down a particular column and see how responses to differ-
ent question areas compared within a particular city. It is this matrix qual-
ity that allows easy access to specific pieces of data. In the illustration, the
intersection cell of the table would contain a specific value on a certain
survey item for a specific city. We typically only create tables in two di-
mensions for presentations, but hypothetically, if you had a good way to
display a third dimension (think Rubik’s cube), you could find a particu-
lar piece of data using the intersection of three different variables, say the
response (1) to a specific question, (2) by males, and (3) within a specific
city. However, because we typically only present two-dimensional tables,
what we usually do is create a step-down table to display this relationship.
By this we mean that we would take our original table that provided the
responses to a particular question by cities and then break it down into
two subtables, in this case, one for males and one for females. While this
works fairly well for tables containing only a small number of categories,
imagine the difficulty of creating such tables if you were looking at data
across some dimension such as income level or race and ethnicity. It is for
this reason that we typically move the presentation of such data into a
summary form using statistics. Those summary statistics can then be put
into the narrative in a report, or presented with a PowerPoint, with an ex-
planation. In our illustration, for example, our summary could then read
something along the lines of, "Males in Chicago had (statistically) higher
satisfaction levels with regard to service than males in either Atlanta or
Phoenix. However, there were no significant differences in satisfaction
levels among females in the three cities.”
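A cross-tabulation of the kind described here is straightforward to build with pandas. The sketch below uses a handful of invented respondent records; the step-down by gender is produced simply by splitting the data before tabulating.

import pandas as pd

# Invented respondent-level records: city, gender, and one satisfaction item.
responses = pd.DataFrame({
    "city": ["Chicago", "Chicago", "Atlanta", "Atlanta", "Phoenix", "Phoenix"],
    "gender": ["Male", "Female", "Male", "Female", "Male", "Female"],
    "satisfaction": ["Satisfied", "Very satisfied", "Dissatisfied",
                     "Satisfied", "Satisfied", "Very satisfied"],
})

# Two-dimensional table: response categories down the rows, one column per city.
by_city = pd.crosstab(responses["satisfaction"], responses["city"])
print(by_city)

# Step-down tables: the same table repeated for each gender, standing in for
# the third dimension discussed above.
for gender, subset in responses.groupby("gender"):
    print(f"\n{gender} respondents")
    print(pd.crosstab(subset["satisfaction"], subset["city"]))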
As with graphs and charts, the use of text in conjunction with tables is a very important part of the presentation of survey research. The emphasis in tables must be on clarity and simplicity. Granularity, in terms of the detail presented, is an important consideration in the inclusion of tables in presentations. Neither PowerPoint presentations nor executive summaries lend themselves to great detail, and for this reason, it is wise to confine tables to the written report, its appendixes, or a separate methodology and data report.
In summary, it is the integration of the presentation narrative,
graphics, and tables that tells the story of the survey results. The fun-
damental content is the foundation of the presentation, but if it is
delivered in a way that doesn’t meet the audience or members’ needs
and expectations, if it doesn’t interest them, and if it doesn’t create an
understanding of the survey, then all of the work and effort of the sur-
vey will likely just gather electronic dust on some hard drive or network
server.

Summary
• Presenting survey findings is one of the most critical aspects of the survey process, yet it is often given only passing attention.
  ○ Presentations reflect the amount of work that went into the survey.
  ○ If the presentation is poorly done, lacks clarity, or is not well organized, important information from the survey isn't conveyed to the sponsors and other intended audiences.
  ○ The sponsor should identify the deliverables at the onset of the survey project, which will determine what form the presentation takes and how it is to be delivered.
  ○ Commonly, survey results are provided in the form of reports, executive summaries, PowerPoint presentations to audiences, materials posted to websites, or feeds on social media. Details of the methodology, statistical analysis, and data are frequently placed in appendices to the full report or are presented in a separate methodology report.
• The acronym ACE can be a helpful tool in focusing on the three major components to be considered when preparing a presentation.
  ○ A—Audience: Who is the audience of the presentation?
  ○ C—Content: What are the key points that we need to convey?
  ○ E—Expression: How do we present the survey in a way that is clear, understandable, and complete?
• Knowing your audience is a critical part of developing an appropriate presentation. Maintaining contact with the sponsors and designated audiences helps you tailor the presentation both in content and form.
  ○ What are the audience's needs and expectations?
  ○ Is the audience technical, nontechnical, or mixed?
  ○ What level of understanding does the audience have of the survey process, methods, analysis, and so forth?
• The content serves as the foundation for the presentation. The content
should address the following:
1. Why the study was undertaken (introduction and background)
2. How it was designed and structured (methodology)
3. How the information was gathered (data collection)
4. How the data were examined (the analysis)
5. What the findings (results) were
6. What the findings mean (summary and recommendations)
• The content can be presented in a formal fashion or may be less formally presented depending on the audience's needs and expectations.
  ○ Today, presentation of survey results tends to be done in a less regimented structure, especially when surveys are commissioned by organizations for internal purposes.
    ◾ More informal styles usually mean omitting certain elements characteristically seen in formal reports, such as an abstract, literature review, and detailed presentation of the methodology. However, today's survey presentations typically contain an executive summary, which was not part of the traditional format.
    ◾ Executive summaries may be written in a narrative style but typically make extensive use of bulleted formats.
  ○ Contents presented as PowerPoint presentations or executive summaries have a lower level of granularity, meaning the level of detail is less.
• Privacy, confidentiality, and proprietary information
  ○ It is important to keep in mind that certain types of information collected during the course of a survey may be confidential, anonymous, or proprietary.
    ◾ There are federal regulations that protect individuals from harm during research, including both the collection and presentation of survey data. These regulations are referred to as human subject protections.
    ◾ Researchers and sponsors have an obligation to survey participants to ensure that commitments regarding the protection of their private and confidential information are kept and that participants are not exposed to risk because of the disclosure of such information.
  ○ Sponsors are considered the owners of the survey, but sponsors and researchers should have clear agreements made at the beginning of the project as to the extent and terms of this ownership, particularly regarding data and proprietary information.
• The expression component of the presentation refers to how the survey findings are disseminated.
  ○ The focus in the presentation should be on substance, not style.
  ○ Content remains the same irrespective of the way it's delivered.
  ○ The survey presentation is a reflection of the sponsors and researchers.
• Presenting survey data is one of the most difficult aspects of survey presentation.
  ○ Presenting survey data without commentary and summary is not good practice.
  ○ Synthesizing and providing commentary on survey statistics brings meaning to the collected data; it is essentially statistical story-telling.
  ○ Data can be presented in text, graphic, or table formats, but presentations usually include all three because this meets the needs and expectations of a broader audience.
• The strength of text or narrative presentation is in providing explanation, commentary, or summary of numerical or quantitative data, not in actually presenting the data. Text should be simple, clear, and brief, but not so brief that the wording becomes meaningless or unintelligible in terms of describing the data.
• Graphics can be used to present or relate complex findings as an
alternative to lengthy descriptions in text. Graphics should be
appropriate to the kind of data they’re displaying and should be
simplified as much as possible.
• Tables can be used to display a lot of data in an organized and suc-
cinct manner. Tables are particularly helpful in presenting detailed
numeric results but are not typically suitable for executive summa-
ries and PowerPoint presentations.

Annotated Bibliography
General

• Arlene Fink, as part of The Survey Kit series, provides a short book,
How to Report on Surveys,22 which gives an easy-to-understand
overview of important elements in the survey report.
• A very helpful ancillary resource is The Facts on File Guide to Re-
search23 by Jeff Lenburg. This book covers a wealth of research
sources as well as information on different writing formats.

Audience Considerations

• Patrick Forsyth provides a good discussion of the interface between the audience and the written research report.24

Presentation Elements

• For a very good perspective on the elements of technical report writing, see Lutz Hering and Heike Hering's How to Write Technical Reports.25
• For example, The American Psychological Association’s Publication
Manual provides a thorough review of areas such as the mechanics
of style and details of presenting results.26
Narrative, Charts, Graphs, and Tables

• See J. Miller’s The Chicago Guide to Writing About Numbers.27


• Pamela Alreck and Robert Settle provide information about formatting different types of tables and graphs to display survey question responses and show how to write around the visual presentation of data.28
• Edward Tufte’s classic works on the visual presentation provide ex-
cellent insights into some of the dos and don’ts of presenting data
in visual form.29
Notes

Chapter 1
1. Monette, Sullivan, and DeJong (1998, pp. 22–24).
2. Jann and Hinz (2016, p. 105)
3. Gorden (1987, p. 63).
4. Gorden (1987, p. 64).
5. Gorden (1987, p. 64).
6. Bradburn (2016, pp. 95–96).
7. Bradburn (2016, p. 96).
8. Riley (1963).
9. Babbie (2016).
10. Miller and Salkind (2002).
11. Gorden (1987).
12. Bradburn (2016).

Chapter 2
1. Warwick and Lininger (1975).
2. Fisher (2009).
3. Fisher (2009, p. 143).
4. Warnecke et al. (1997).
5. de Bruin et al. (2012).
6. Snijkers et al. (2013).
7. Dillman (1978).
8. Dillman, Smyth, and Christian (2014).
9. Dillman, Smyth, and Christian (2014, p. 10).
10. Dillman, Smyth, and Christian (2014, p. 10).
11. Dillman, Smyth, and Christian (2014, p. 13).
12. Dillman, Smyth, and Christian (2014, p. 94). Note: questions are presented
in a different order than originally covered by Dillman.
13. Maxfield and Babbie (1998, p. 109).
14. See, for example, de Leeuw and de Heer (2002), Singer and Ye (2013).
15. Over the years Don A. Dillman et al. (2009; 2014, see in particular chapter 2) have been particularly attentive to these issues and have built on the notion of using incentives as a consistent component of survey design. Also see Church's (1993) meta-analysis and Singer and Ye's review (2013).
16. Dillman (2009, p. 50).


17. Alreck and Settle (2004, pp. 89–92).
18. Dillman (2009, p. 33).
19. Bradburn (1983, pp. 304–6).
20. Alreck and Settle (2004, p. 94).
21. McCutcheon (2015).
22. Zuell, Menold, and Körber (2014) found that particularly large answer boxes should be avoided, however, because they reduce respondents' willingness to respond.
23. A number of companies market qualitative analysis software; two of the best known are ATLAS (Scientific Software Development GmbH 2014) and NVivo (QSR International 2018). Even companies traditionally known for their quantitative analysis software, such as IBM SPSS (2014), provide qualitative software.
24. Krosnick and Fabrigar (1997).
25. Sudman and Bradburn (1973, p. 805).
26. Dillman (2009, pp. 40–47).
27. Likert (1932).
28. Sincero (2012).
29. Dillman (2009).
30. Christian, Parsons, and Dillman (2009, p. 394).
31. Krosnick and Fabrigar (1997).
32. Brace (2013).
33. See, for example, Groothuis and Whitehead (2002).
34. Groves et al. (2009).
35. Dillman (2009).
36. Etchegaray and Fischer (2010).
37. Alwin (2007).
38. Brace (2013).

Chapter 3
1. See Alreck and Settle (2004); Fowler (2014, pp. 100–1); Groves et al.
(2004, pp. 243–5).
2. Bishop (1992).
3. See Fowler (2014, pp. 102–4); Groves et al. (2004, pp. 245–7).
4. Willis (2016), p. 367.
5. Willis (2016, pp. 366–7). This list is modeled after Fowler (2011).
6. See Converse and Presser (1986), pp. 54–65. See also Schaeffer and Presser
(2003).
7. Schuman (2008, p. 81).


8. Bishop (1992).
9. Gwartney (2007, pp. 207–12).
10. Gwartney (2007, pp. 212–16) has a very useful discussion of probing when
asking race and ethnicity questions.
11. Gwartney (2007, pp. 216–22).
12. Alreck and Settle (2004, p. 185).
13. Gwartney (2007, pp. 103–15).
14. AAPOR (2011, p. 3).
15. See Tourangeau and Plewes (2013, pp. 9–12) for various ways of calculat-
ing response rates suggested by the American Association for Public Opin-
ion Research (AAPOR). AAPOR defines response rate as "the number of
complete interviews with reporting units divided by the number of eligible
reporting units in the sample.” American Association for Public Opinion
Research (2011, p. 5).
16. See Lane (2010) for an excellent discussion of these and other questions.
See also Schnell (2016).
17. Vardigan and Granda (2010). See also Vardigan, Granda, and Holder
(2016).
18. Gorden (1992, pp. 82–93). Gorden includes other questions interviewers
might ask themselves as respondents are answering questions.
19. Gorden (1992, pp. 92–96). We picked out particular keys on which to
focus.
20. Dillman (1978, pp. 260–1); Gwartney (2007, pp. 86–88).
21. Dillman (1978, p. 262); Gwartney (2007, pp. 163–5).
22. Tourangeau and Plewes (2013, p. 24).
23. Rosen et al. (2014).
24. Groves and McGonagle (2001).
25. Groves and McGonagle (2001, p. 253).
26. See Dillman (2000); Dillman, Smyth, and Christian (2009, 2014) for a
systematic and thorough discussion of tailoring.
27. Willis (2016).
28. Fowler (2014).
29. Groves et al. (2004).
30. Babbie (2016).
31. Converse and Presser (1986).
32. Willis (2016).
33. Dillman (1978).
34. Dillman (2000).
35. Dillman, Smyth, and Christian (2009).
36. Dillman, Smyth, and Christian (2014).
37. Gwartney (2007).


38. Couper (2008).
39. Gorden (1992).
40. Fowler (2014).
41. Gwartney (2007).
42. Groves and McGonagle (2001).
43. Weisberg (2005).
44. Groves et al. (2004).
45. Babbie (2016).
46. Miller (2013).
47. Frankfort-Nachmias and Leon-Guerrero (2018).
48. Chaffe-Stengel and Stengel (2011).

Chapter 4
1. Groves (2011).
2. Groves (2011, p. 861).
3. Fabry (2016).
4. Sneed (May 19, 2015).
5. Gomes (August 22, 2006), cited by Wikipedia (2018).
6. Gomes, (August 22, 2006), cited by Wikipedia (2018).
7. See for example, Best Buy at https://www.bestbuy.com/site/hard-drives/
external-portable-hardrives/pcmcat186100050005.c?id=pcmcat1861000
50005&qp=harddrivesizerange_facetpercent3DStoragepercent20Capacity
~8TBpercent20-percent2011.9TB
8. Blumberg and Luke (2018).
9. Slavec and Toninelli (2015, pp. 41–62).
10. Blumberg and Luke (2018, p. 3).
11. Blumberg and Luke (2018, p. 3)
12. Blumberg and Luke (2018, p. 3).
13. Pew Research Center (2015a).
14. U.S. Census Bureau (2018a).
15. Pinter, Toninelli, and Pedraza (2015).
16. Pew Research Center (2015b).
17. Pinter, Toninelli, and Pedraza (2015).
18. Pinter, Toninelli, and Pedraza (2015, p. 1).
19. Pew Research Center (2015b).
20. Pew Research Center (2015b, p. 3).
21. Pew Research Center (2015b, p. 4).
22. Pinter, Toninelli, and Pedraza (2015).
23. Antoun (2015, p. 76).


24. Andreadis (2015).
25. Mavletova and Couper (2015, p. 82).
26. Mavletova and Couper (2015, p. 86).
27. Mavletova and Couper (2015).
28. See chapter 5.2 in Callegaro, Lozar and Vehovar (2015).
29. See Pew Research Center’s American Trends Panel (2015c) for a review of
the panel’s development and methodology.
30. Pew Research Center (2015c).
31. Matthijsse, de Leeuw, and Hox (2015).
32. AAPOR (American Association of Opinion Research) (2010). Also see
Baker et al. (2010).
33. AAPOR (American Association of Opinion Research) (2010). Also see
Baker et al. (2010).
34. American Consumer Opinion (2014).
35. Swagbucks (2018).
36. MyPoints (2018).
37. YouGov (2018).
38. SurveyMonkey (2018).
39. Stern, Bilgen, and Dillman (2014).
40. Survey Police (2014).
41. Groves (2011, p. 867).
42. Groves (2011, p. 868).
43. Laney (2001).
44. University of Wisconsin (2018).
45. AAPOR (American Association of Opinion Research) (2015). Also see
Japec et al. (2015, p. 844).
46. Taylor (2013).
47. AAPOR (American Association of Opinion Research) (2015). Also see
Japec et al. (2015, p. 843).
48. AAPOR (American Association of Opinion Research) (2015). Also see
Japec et al. (2015, p. 844).
49. Groves (2011).
50. Berman (2013, p. 129).
51. Brick (2011).
52. AAPOR (American Association of Opinion Research) (2014a, b).
53. Brick (2011).
54. Kennedy, McGeeney, and Keeter (2016).
55. Revilla and Ochoa (2016).
56. Toninelli, Pinter, and dePedraza (2015).
57. Wells, Bailey, and Link (2014).
58. Yeager et al. (2011).


59. Baumgardner, Griffin, and Raglin (2014).
60. Couper (2008).
61. Tourangeau, Conrad, and Couper (2013).
62. AAPOR (American Association of Opinion Research) (2010). Also see
Baker et al. (2010).
63. Callegaro, et al. (2014).
64. Lugtig and Toepoel (2016).
65. AAPOR (American Association of Opinion Research) (2015). Also see
Japec et al. (2015).
66. AAPOR (American Association of Opinion Research) (2013).
67. Berman (2013).
68. Kreuter (2013).
69. Lampe et al. (2014).
70. Mayer-Schonberger and Cukier (2013).

Chapter 5
1. Hague, Hague, and Morgan (2013, p. 197).
2. McAlevey and Sullivan (2010, p. 911).
3. McAlevey and Sullivan (2010, p. 912).
4. McAlevey and Sullivan (2010, p. 911).
5. Brunt (2001, p. 179).
6. Fink (2003).
7. Hague, Hague, and Morgan (2013, p. 197).
8. American Psychological Association (APA) (2010).
9. Beins (2012, p. 4).
10. U.S. Census Bureau (2014). The Census Bureau produces many separate
surveys whose components are accessible in a variety of formats. Because of
the size, complexity, and ongoing nature of its surveys such as the American
Community Survey, it provides a separate 2014 Design and Methodology Re-
port for the survey, which contains descriptions of the basic design of the
American Community Survey and details of the full set of methods and
procedures.
11. The Tuskegee syphilis experiment was an infamous clinical study conducted
between 1932 and 1972 by the U.S. Public Health Service to study the
natural progression of untreated syphilis in rural African American men
who thought they were receiving free healthcare from the U.S. government
(Wikipedia 2014).
12. U.S. Department of Health and Human Services (2018).
13. AAPOR (2018).
14. Hague, Hague, and Morgan (2013, p. 196).


15. Snijkers, Haraldsen, and Jones (2013, p. 536).
16. Snijkers, Haraldsen, and Jones (2013, p. 536).
17. APA (2010).
18. Tufte (1983, 2001).
19. Tufte (1983, 2001).
20. Tufte (2006).
21. Champkin (2011, p. 41).
22. Fink (2003).
23. Lenburg (2010).
24. Forsyth (2013).
25. Hering and Hering (2010).
26. American Psychological Association (2010).
27. Miller (2004).
28. Alreck and Settle (2004, pp. 341–85).
29. Tufte (1983, 2001, 2006).
References
AAPOR (American Association for Public Opinion Research). 2010. AAPOR
Report on Online Panels. Deerfield, IL: AAPOR.
AAPOR. 2011. Standard Definitions: Final Dispositions of Case Codes and Out-
come Rates for Surveys. 7th ed. Deerfield, IL: AAPOR.
AAPOR. 2015. AAPOR Report on Big Data. Deerfield, IL: AAPOR.
AAPOR. 2013. Report of the AAPOR Task Force on Non-Probability Sampling.
Deerfield, IL: AAPOR.
AAPOR. 2014a. Mobile Technologies for Conducting, Augmenting and Potentially
Replacing Surveys. Deerfield, IL: AAPOR
AAPOR. 2014b. Replacing Surveys: Report of the AAPOR Task Force on Emerging
Technologies in Public Opinion Research. Deerfield, IL: AAPOR.
AAPOR. 2018. “Institutional Review Boards.” https://www.aapor.org/Standards-
Ethics/Institutional-Review-Boards.aspx, (accessed August 26, 2018).
Alreck, P. L., and R. B. Settle. 2004. The Survey Research Handbook. 3rd ed.
­Boston, MA: McGraw-Hill Irwin.
Alwin, D. F. 2007. Margins of Error: A Study of Reliability in Survey Measurement.
Vol. 547. Hoboken, NJ: John Wiley & Sons.
American Consumer Opinion. 2014. “Information about Our Online Surveys.”
https://www.acop.com/how, (accessed August 25, 2018).
American Psychological Association. 2010. Publication Manual of the American
Psychological Association. 6th ed. Washington, DC: American Psychological
Association.
Andreadis, I. 2015. “Comparison of Response Times between Desktop and
Smartphone Users.” In Mobile Research Methods: Opportunities and ­Challenges
of Mobile Research Methodologies, eds. D. Toninelli, R. Pinter, and P. de
­Pedraza. London, England: Ubiquity Press, pp. 63–79.
Antoun, C. 2015. “Who Are the Internet Users, Mobile Internet Users, and
Mobile-Mostly Internet Users? Demographic Differences across Internet-
Use Subgroups in the U.S.” In Mobile Research Methods: Opportunities and
­Challenges of Mobile Research Methodologies, eds. D. Toninelli, R. Pinter, and
P. de Pedraza. London, England: Ubiquity Press, pp. 99–114.
Babbie, E. 2016. Survey Research Methods. 14th ed. Boston, MA: Cengage
Learning.
Baker, R., S. J. Blumberg, M. P. Couper, et al. 2010. “AAPOR Report on Online
Panels.” The Public Opinion Quarterly 74, no. 4, pp. 711–81.
Baumgardner, S., D. H. Griffin, and D. A. Raglin. 2014. The Effects of Adding an Internet Response Option to the American Community Survey. Washington, DC:
US Census Bureau. https://www.census.gov/library/working-papers/2014/
acs/2014_Baumgardner_04.html
Beins, B. C. 2012. "APA Style Simplified: Writing in Psychology, Education, Nursing, and Sociology." http://csus.eblib.com/patron/FullRecord.aspx?p=822040
Berman, J. J. 2013. Principles of Big Data Preparing, Sharing, and Analyzing Com-
plex Information. Waltham, MA: Morgan Kaufmann.
Best Buy. (2018). https://www.bestbuy.com/site/hard-drives/external-portable-
hardrives/pcmcat186100050005.c?id=pcmcat186100050005&qp=ha
rddrivesizerange_facetpercent3DStoragepercent20Capacity~8TBpercent
20-percent2011.9TB (accessed September 15, 2018).
Bishop, G. F. 1992. "Qualitative Analysis of Question-Order and Context Effects:
The Use of Think-Aloud Responses.” In Context Effects in Social and Psycho-
logical Research, eds. N. Schwarz and S. Sudman. New York, NY: Springer-
Verlag, pp. 149–62.
Blumberg, S. J., and J. V. Luke. June, 2018. “Wireless substitution: Early release
of estimates from the National Health Interview Survey, July–December
2017.” https://www.cdc.gov/nchs/nhis.htm, (accessed September 7, 2018).
Brace, I. 2013. Questionnaire Design: How to Plan, Structure and Write Survey Ma-
terial for Effective Market Research. London, England: Kogan Page Limited.
http://www.eblib.com
Bradburn, N. M. 1983. “Response Effects.” In Handbook of Survey Research, eds.
P. H. Rossi, J. D. Wright, and A. B. Anderson. Orlando, FL: Academic Press,
pp. 289–328.
Bradburn, N. M. 2016. “Surveys as Social Interactions.” Journal of Survey ­Statistics
and Methodology 4, no. 1, pp. 94–109.
Brick, J. M. 2011. “The Future of Survey Sampling.” Public Opinion Quarterly
75, no. 5, pp. 872–88.
Brunt, L. T. 2001. “The Advent of the Sample Survey in the Social Sciences.”
Journal of the Royal Statistical Society 50, pp. 179–89.
Callegaro, M., K. M. Lozar, and V. Vehovar. 2015. Web Survey Methodology. Los
Angeles, CA: Sage.
Callegaro, M., R. Baker, J. Bethlehem, A. S. Göritz, J. A. Krosnick, and
P. J. Lavrakas, eds. 2014. Online Panel Research: A Data Quality Perspective.
New York, NY: Wiley.
Chaffe-Stengel, P., and D. N. Stengel. 2011. Working with Sample Data-Exploration
and Inference. New York, NY: Business Expert Press.
Champkin, J. 2011. “Making Information Beautiful—and Clear.” Significance
8, no. 1, pp. 39–41.
Christian, L. M., N. L. Parsons, and D. A. Dillman. 2009. “Designing Scalar Ques-
tions for Web Surveys.” Sociological Methods & Research 37, no. 3, pp. 393–425.
Church, A. H. 1993. "Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis." Public Opinion Quarterly 57, no. 1, pp. 62–79.
Converse, J. M., and S. Presser. 1986. Survey Questions: Handcrafting the
­Standardized Questionnaire. Vol. 63. Beverly Hills, CA: Sage.
Couper, M. P. 2008. Designing Effective Web Surveys. Cambridge, MA: Cambridge
University Press.
de Bruin Bruine, W., W. van der Klaauw, G. Topa, J. S. Downs, B. Fischhoff,
and O. Armantier. 2012. “The Effect of Question Wording on Consumers’
Reported Inflation Expectations.” Journal of Economic Psychology 33, no. 4,
pp. 749–57.
de Leeuw, E. D., and W. de Heer. 2002. “Trends in Household Survey Non-
response.” In Survey Nonresponse, eds. R. M. Groves, D. A. Dillman, J. L.
Eltinge, and R. J. A. Little. New York, NY: John Wiley and Sons, pp. 41–54.
Dillman, D. A. 1978. Mail and Telephone Surveys: The Total Design Method.
New York, NY: John Wiley & Sons.
Dillman, D. A. 2000. Mail and Internet Surveys: The Tailored Design Method. 2nd
ed. New York, NY: John Wiley & Sons.
Dillman, D. A., J. D. Smyth, and L. M. Christian. 2009. Internet, Mail and
Mixed-Mode Surveys: The Tailored Design Method. 3rd ed. New York, NY:
John Wiley & Sons.
Dillman, D. A., J. D. Smyth, and L. M. Christian. 2014. Internet, Phone, Mail
and Mixed-Mode Surveys: The Tailored Design Method. 4th ed. New York, NY:
John Wiley & Sons.
Etchegaray, J. M., and W. G. Fischer. 2010. “Understanding Evidence-Based
­Research Methods: Reliability and Validity Considerations in Survey Re-
search.” Health Environments Research & Design Journal 4, no. 1, pp. 131–35.
Fabry, M. 2016. "The Story behind America's First Commercial Computer." http://time.com/4271506/census-bureau-computer-history, (accessed September 26, 2018).
Fink, A. 2003. How to Report on Surveys. 2nd ed. Vol. 10. Thousand Oaks, CA:
Sage Publications.
Fisher, B. S. 2009. “The Effects of Survey Question Wording on Rape Estimates:
Evidence from a Quasi-Experimental Design.” Violence against Women
15, no. 2, pp. 133–47.
Forsyth, P. 2013. “How to Write Reports and Proposals.” http://csus.eblib.com/
patron/FullRecord.aspx?p=1131620
Fowler, F. J. 2011. "Coding the Behavior of Interviewers and Respondents to Evaluate Survey Questions." In Question Evaluation Methods, eds. J. Madans, K. Miller, A. Maitland, and G. B. Willis. Hoboken, NJ: John Wiley & Sons.
Fowler, F. J., Jr. 2014. Survey Research Methods. 5th ed. Los Angeles, CA: Sage
Publications.
Frankfort-Nachmias, C., and A. Leon-Guerrero. 2018. Social Statistics for a Diverse Society. 8th ed. Thousand Oaks, CA: Sage Publications.
Gomes, L. August 22, 2006. “Talking Tech.” The Wall Street Journal. New York,
NY. Cited in Wikipedia. 2018. “IBM 305 RAMAC.” https://en.wikipedia.
org/wiki/IBM_305_RAMAC, (accessed September 7, 2018).
Gorden, R. L. 1987. Interviewing: Strategy, Techniques, and Tactics. 4th ed.
­Chicago, IL: Dorsey Press.
Gorden, R. L. 1992. Basic Interviewing Skills. Itasca, IL: F.E. Peacock Publishers.
Groothuis, P. A., and J. C. Whitehead. 2002. “Does Don’t Know Mean No?
Analysis of ‘Don’t Know’ Responses in Dichotomous Choice Contingent
Valuation Questions.” Applied Economics 34, no. 15, pp. 1935–40.
Groves, R. M. 2011. “Three Eras of Survey Research.” Public Opinion Quarterly
75, no. 5, pp. 861–71.
Groves, R. M., and K. A. McGonagle. 2001. “A Theory-Guided Interviewer
Training Protocol Regarding Survey Participation.” Journal of Official Statis-
tics 17, no. 2, pp. 249–65.
Groves, R. M., F. J. Fowler, M. P. Couper, J. M. Lepkowski, E. Singer, and
R. Tourangeau. 2004. Survey Methodology. Hoboken, NJ: Wiley-Interscience.
Groves, R. M., F. J. Fowler, M. P. Couper, J. M. Lepkowski, E. Singer, and
R. Tourangeau. 2009. Survey Methodology. 2nd ed. Hoboken, NJ: John Wiley.
Gwartney, P. A. 2007. The Telephone Interviewer’s Handbook: How to Conduct
Standardized Conversations. San Francisco, CA: Jossey-Bass.
Hague, P. N., N. Hague, and C. A. Morgan. 2013. “Market Research in Prac-
tice: How to Get Greater Insight from Your Market.” http://csus.eblib.com/
patron/FullRecord.aspx?p=1412261
Hering, L., and H. Hering. 2010. How to Write Technical Reports. http://csus
.eblib.com/patron/FullRecord.aspx?p=645771
IBM SPSS. 2014. “SPSS Homepage.” http://www-01.ibm.com/software/­
analytics/spss, (accessed September 20, 2014).
Jann, B., and T. Hinz. 2016. “Research Questions and Design for Survey Re-
search.” In The Sage Handbook of Survey Methodology, eds. C. Wolf, D. Joye,
T. W. Smith, and Y. Fu. Thousand Oaks, CA: Sage Publications, pp. 105–21.
Japec, L., F. Kreuter, M. Berg, P. Biemer, P. Decker, C. Lampe, J. Lane, C. O’Neil,
and A. Usher. 2015. “Big Data in Survey Research: AAPOR Task Force
­Report.” Public Opinion Quarterly 79, no. 4, pp. 839–80.
Kennedy, C., K. McGeeney, and S. Keeter. 2016. “The Twilight of Landline
­Interviewing.” http://www.pewresearch.org/2016/08/01/the-twilight-of-landline-
interviewing, (accessed October 2, 2018).
Kreuter, F., ed. 2013. Improving Surveys with Paradata: Analytic Uses of Process
Information. New York, NY: Wiley.
Krosnick, J. A., and L. R. Fabrigar. 1997. “Designing Rating Scales for Effec-
tive Measurement in Surveys.” In Survey Measurement and Process Quality,
eds. P. B. L. Lyberg, M. Collins, E. de Leeuw, C. Dippo, N. Schwarz, and
D. Trewin. New York, NY: John Wiley, pp. 141–64.
Lampe, C., J. Pasek, L. Guggenheim, F. Conrad, and M. Schober. 2014. When
are big data methods trustworthy for social measurement? Presented at Annual
Meetings American Association for Public Opinion Research, Anaheim, CA,
May 15–18.
Lane, J. 2010. “Linking Administrative and Survey Data.” In Handbook of ­Survey
Research, eds. V. Marsden and J. D. Wright. Bingley, England: Emerald
Group Publishing Limited, pp. 659–80.
Laney, D. 2001. “3D Data Management: Controlling Data Volume, Veloc-
ity and Variety,” Application Delivery Strategies. https://blogs.gartner.com/
doug-laney/files/2012/01/ad949-3D-Data-Management-Controlling-Data-­
Volume-Velocity-and-Variety.pdf, (accessed September 4, 2018).
Lenburg, J. 2010. “The Facts on File Guide to Research.” http://csus.eblib.com/
patron/FullRecord.aspx?p=592610
Likert, R. 1932. “A Technique for the Measurement of Attitudes.” Archives of
Psychology 140, pp. 5–55.
Lugtig, P., and V. Toepoel. 2016. “The Use of PCs, Smartphones, and Tablets
in a Probability-based Panel Survey: Effects on Survey Measurement Error.”
Social Science Computer Review 34, pp. 78–94.
Matthijsse, S. M., E. de Leeuw, and J. Hox. 2015. “Internet Panels, Professional
Respondents, and Data Quality.” Methodology 11, no. 3, pp. 81–88.
Mavletova, A., and M. P. Couper. 2015. "A Meta-Analysis of Breakoff Rates in Mobile Web Surveys." In Mobile Research Methods: Opportunities and Challenges of Mobile Research Methodologies, eds. D. Toninelli, R. Pinter, and P. de Pedraza. London, England: Ubiquity Press, pp. 81–98.
Maxfield, M. G., and E. Babbie. 1998. Research Methods for Criminal Justice and
Criminology. 2nd ed. Belmont, CA: Wadsworth Publishing.
Mayer-Schonberger, V., and K. Cukier. 2013. Big Data: A Revolution That Will
Transform How We Live, Work, and Think. New York, NY: Houghton Mifflin.
McAlevey, L., and C. Sullivan. 2010. “Statistical Literacy and Sample Survey
Results.” International Journal of Mathematical Education in Science and Tech-
nology 41, no. 7, pp. 911–20.
McCutcheon, C. 2015. "Political Polling: Do Polls Accurately Measure Public Attitudes?" CQ Researcher, pp. 121–44. http://library.cqpress.com.proxy.lib.csus.edu/cqresearcher/document.php?id=cqresrre2015020600, (accessed August 16, 2018).
Miller, D. C., and N. J. Salkind. 2002. Handbook of Research Design and Social
Measurement. 6th ed. Los Angeles, CA: Sage Publications.
Miller, J. E. 2013. The Chicago Guide to Writing about Multivariate Analysis.
­Chicago, IL: University of Chicago Press.
Miller, J. 2004. The Chicago Guide to Writing About Numbers (Chicago Guides to Writing, Editing, and Publishing). Chicago, IL: University of Chicago Press.
Monette, D. R., T. J. Sullivan, and C. R. DeJong. 1998. Applied Social Re-
search: Tool for the Human Services. 4th ed. Fort Worth, TX: Harcourt Brace
College.
Mypoints.com. 2018. “Welcome to MyPoints.” https://www.mypoints.com/
landingpage?cmp=1455&cxid=kwd-79989543057423:loc-190&aff_
sid=online%20surveys&msclkid=9b27e237185e121f016bb9b5a7d78f35,
(accessed August 26, 2016).
QSR International. 2018. NVivo Software. https://www.qsrinternational.com/
nvivo/home (accessed September 21, 2018).
Pew Research Center. 2015a. “Coverage Error in Internet Surveys.” http://www
.pewresearch.org/2015/09/22/coverage-error-in-internet-surveys/pm_2015-
09-22_coverage-error-02, (accessed November 12, 2015).
Pew Research Center. 2015b. "The Smartphone Difference." http://www.pewinternet.org/2015/04/01/us-smartphone-use-in-2015, (accessed September 8, 2018).
Pew Research Center. 2015c. “Building Pew Research Center’s American Trends
Panel.”  http://www.pewresearch.org/wp-content/uploads/2015/04/2015-
04-08_building-the-ATP_FINAL.pdf, (accessed October 2, 2018).
Pinter, R., D. Toninelli, and P. de Pedraza. 2015. “Mobile Research Methods:
Possibilities and Issues of a New Promising Way of Conducting Research.”
In Mobile Research Methods: Opportunities and Challenges of Mobile Research
Methodologies, eds. D. Toninelli, R. Pinter, and P. de Pedraza. London,
­England: Ubiquity Press, pp. 1–10.
Revilla, M., and C. Ochoa. 2016. “Open Narrative Questions in PC and Smart-
phones: Is the Device Playing a Role?” Quality & Quantity 50, pp. 2495–2513.
Riley, M. W. 1963. Sociological Research I: A Case Approach. New York, NY: Har-
court, Brace & World.
Rosen, J. A., J. Murphy, A. Peytchev, T. Holder, J. A. Dever, D. R. Herget, and
D. J. Pratt. 2014. “Prioritizing Low-propensity Sample Members in a Survey:
Implications for Nonresponse Bias.” Survey Practice 7, no. 1, pp. 1–8.
Scientific Software Development GmbH. 2014. “Atlas TI Homepage.” http://
www.atlasti.com/index.html, (accessed September 20, 2014).
Schaeffer, N. C., and S. Presser. 2003. "The Science of Asking Questions." Annual Review of Sociology 29, pp. 65–88.
Schnell, R. 2016. "Record Linkage." In The Sage Handbook of Survey Methodology, eds. C. Wolf, D. Joye, T. W. Smith, and Y. Fu. Thousand Oaks, CA: Sage Publications, pp. 662–69.
Schuman, H. 2008. Method and Meaning in Polls and Surveys. Cambridge, MA:
Harvard University Press.
Sincero, S. M. 2012. “Survey Response Scales.” https://explorable.com/survey-
response-scales, (accessed September 22, 2014).
Singer, E., and C. Ye. 2013. “The Use and Effects of Incentives in Surveys.”
­Annals of the American Academy of Political and Social Science 645, no. 1,
pp. 112–41.
Slavec, A., and D. Toninelli. 2015. “An Overview of Mobile CATI Issues in
­Europe.” In Mobile Research Methods: Opportunities and Challenges of ­Mobile
Research Methodologies, eds. D. Toninelli, R. Pinter, and P. de Pedraza.
­London, England: Ubiquity Press, pp. 41–62.
Sneed, A. 2015. “Moore’s Law Keeps Going, Defying Expectations.” http://www
.scientificamerican.com/article/moore-s-law-keeps-going-defying-expectations,
(accessed August 28, 2018).
Snijkers, G., G. Haraldsen, and J. Jones. 2013. Designing and Conducting Business
Surveys. Hoboken, NJ: John Wiley & Sons. http://www.eblib.com
Stern, M. J., I. Bilgen, and D. A. Dillman. 2014. “The State of Survey Method-
ology: Challenges, Dilemmas, and New Frontiers in the Era of the Tailored
Design.” Field Methods 26, no. 3, pp. 284–301.
Sudman, S., and N. Bradburn. 1973. “Effects of Time and Memory Factors
on Response in Surveys.” Journal of the American Statistical Association 68,
pp. 805–15.
SurveyMonkey. 2018. “Engaged Survey Respondents, Honest Feedback.” https://
www.surveymonkey.com/mp/audience/our-survey-respondents,  (accessed
­September 26, 2018).
SurveyPolice.com. 2018. “Survey Panel Rankings.” https://www.surveypolice
.com/rankings, (accessed August 26, 2018).
Swagbucks.com. 2018. “Let Brands Know What You’re Thinking . . .” https://
www.swagbucks.com/g/paid-surveys, (accessed August 26, 2018).
Taylor, S. J. 2013. “Real Scientists Make Their Own Data.” http://bit
.ly/15XAq5X, (accessed September 9, 2018).
Toninelli, D., R. Pinter, and P. de Pedraza, eds. 2015. Mobile Research Methods: Opportunities and Challenges of Mobile Research Methodologies. London, England: Ubiquity Press.
Tourangeau, R., F. G. Conrad, and M. P. Couper. 2013. The Science of Web
­Surveys. New York, NY: Oxford University Press.
Tourangeau, R., and T. J. Plewes, eds. 2013. Nonresponse in Social Science Surveys:
A Research Agenda. Washington, DC: National Academies Press.
Tufte, E. R. 1983. The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press.
Tufte, E. R. 2001. The Visual Display of Quantitative Information. 2nd ed.
Cheshire, CT: Graphics Press.
Tufte, E. R. 2006. Beautiful Evidence. Cheshire, CT: Graphics Press.
U.S. Census Bureau. 2018. “American Community Survey.” https://www.census
.gov/programs-surveys/acs, (accessed August 20, 2018).
U.S. Census Bureau. 2014. “American Community Survey: Design and Method-
ology Report.” https://www.census.gov/programs-surveys/acs/methodology/
design-and-methodology.html, (accessed August 29, 2018).
U.S. Department of Health and Human Services. 2018. “Health Information
Privacy.” https://www.hhs.gov/hipaa/index.html, (accessed August 29, 2018).
University of Wisconsin. 2018. “What Is Big Data?” https://datasciencedegree
.wisconsin.edu/data-science/what-is-big-data, (accessed September 9, 2018).
Vardigan, M. B., and P. Granda. 2010. “Archiving, Documentation, and
­Dissemination.” In Handbook of Survey Research, eds. V. Marsden and
J. D. Wright. London, England: Emerald Group Publishing Limited,
pp. 707–29.
Vardigan, M. B., P. Granda, and L. Holder. 2016, “Documenting Survey Data
across the Life Cycle.” In The Sage Handbook of Survey Methodology, eds.
C. Wolf, D. Joye, T. W. Smith, and Y. Fu. Thousand Oaks, CA: Sage Publica-
tions, pp. 443–59.
Warnecke, R. B., T. P. Johnson, N. Chávez, S. Sudman, D. P. O’Rourke, L. Lacey,
and J. Horm. 1997. “Improving Question Wording in Surveys of Culturally
Diverse Populations.” Annals of Epidemiology 7, no. 5, pp. 334–42.
Warwick, D. P., and C. A. Lininger. 1975. The Sample Survey: Theory and Practice.
New York, NY: McGraw-Hill.
Weisberg, H. F. 2005. The Total Survey Error Approach: A Guide to the New Science
of Survey Research. Chicago, IL: University of Chicago Press.
Wells, T., J. Bailey, and M. Link. 2014. “Comparison of Smartphone and Online
Computer Survey Administration.” Social Science Computer Review 32, no. 2,
pp. 238–55.
Wikipedia. 2014. “Tuskegee Syphilis Experiment.” https://en.wikipedia.org/
wiki/Tuskegee_syphilis_experiment, (accessed September 30, 2014).
Wikipedia. 2018. “IBM 305 RAMAC.” https://en.wikipedia.org/wiki/
IBM_305_RAMAC, (accessed September 7, 2018).
Willis, G. B. 2016. “Question Pretesting.” In The Sage Handbook of Survey
­Methodology, eds. C. Wolf, D. Joye, T. W. Smith, and Y. Fu. Thousand Oaks,
CA: Sage Publications, pp. 359–77.
Yeager, D. S., J. A. Krosnick, L. Chang, H. S. Javitz, M. S. Levendusky, A. Simpser,
and R. Wang. 2011. "Comparing the Accuracy of RDD Telephone Surveys and
Internet Surveys Conducted with Probability and Non-probability Samples." Public Opinion Quarterly 75, no. 4, pp. 709–47.
YouGov. 2018. “Our Panel.” https://today.yougov.com/about/about-the-yougov-
panel, (accessed September 26, 2018).
Zuell, C., N. Menold, and S. Körber. 2014. “The Influence of the Answer Box
Size on Item Nonresponse to Open-ended Questions in a Web Survey.”
Social Science Computer Review 33, no. 1, pp. 115–22.
About the Authors
Ernest L. Cowles is professor emeritus of sociology at California State
University, Sacramento. He served as the director of the Institute for
Social Research for 8 years and continues as a senior research fellow.
Before becoming director of the Institute for Social Research, he directed
research centers at the University of Illinois for 10 years. Beyond his
academic work, he has served as a public agency administrator and
consultant both within the United States and internationally. In 2015,
he was presented with the Distinguished Alumni Award by the College
of Criminology and Criminal Justice at Florida State University where he
received his PhD in criminology.

Edward Nelson is professor emeritus of sociology at California State
University, Fresno. He received his PhD in sociology from UCLA,
specializing in research methods. He was the director of the Social
Research Laboratory at California State University, Fresno, from 1980 to
2013 and has directed more than 150 surveys. He taught research methods,
quantitative methods, critical thinking, and computer applications. He has
published books on observation in sociological research and on using SPSS,
a statistical computing package widely used in the social sciences.
Index
AAPOR Task Force Report, 66
Active listening, 44
American Association for Public Opinion Researchers (AAPOR), 83
American Consumer Opinion, 61
American Trends Panel (ATP), 60
Answering questions, willingness to, 13–14
Audience, 74–75
  knowing, 75
  lack of understanding by, 75–77
  types of, 77–79
Behavior coding, 35
Big Data, 64–66
Breakoff rates, 58–59
Brevity, question, 14–15
Built-in assumptions questions, 17
Call record, 40–41
Callback, 40
Changing technology, and survey research, 53–71
Chartjunk, 90
Checks-and-balances, 85
City of residence, 11–12
Clarity, question, 14–15
Closed-ended questions, 20–28
Coding, 42
  behavior, 35
Cognitive interviews, 34
Computer-assisted personal interviewing (CAPI), 55
Computer-assisted telephone interviewing (CATI), 9, 55
Confidentiality, 83–84
Content, survey results, 79–81
  executive summaries, 81–82
  privacy, confidentiality, and proprietary information, 83–84
Data, 1
  coding, 42
  collection, 2
  confidentiality agreements, 83
  editing, 42–43
  making available to other social scientists, 43–44
  presenting, 85–86
Data analysis, 2, 43
Data entry, 43
Deliverables, 74–75
Digital divide, 58
Disposition codes, 40
Documentation, for interviewer, 45–46
Double-barrel questions, 16
Double-negative questions, 17
Dynamic probes, 39
Editing, data, 42–43
Empirical, 1
Ethical issues, 66–68
Executive summaries, 81–82
Experts, reviewing survey, 35
Expression, survey results, 85
  presenting data, 85–86
  text, graphs, and tables, 86–93
Face-to-face survey, 36, 39
Focus groups, 34
Good questions. See also Survey questions
  key elements of
    common question pitfalls, avoiding, 16–17
    specificity, clarity, and brevity, 14–15
  writing, 7–31
  goals for, 11
Google, 34
Google Scholar, 34
Granularity, 24
Graphs, 86–93
Human subject protections, 83
IBM RAMAC 305, 54
Inflation, 8
Inter-university Consortium for Political and Social Research (ICPSR), 43–44
Interactive probes, 39
Internet, 64
  access, 56–59
  -accessible smartphones, 57
Interviewer
  -administered survey, 44
  debriefing, 35
  manual, 46
  –respondent–questions interaction, 3
  training
    practice interviews, 46
    providing documentation for, 45–46
Interviewing. See Questioning
Level of granularity, 80
Likert Scale, 25–26
Listening, 44
Loaded/leading questions, 16
Mailed surveys, probing in, 39–40
Marketing Research, 73
MCAPI, 55–56
MCATI, 55–56
Measurement, 2
Mixed-mode surveys, 8–9
  advantages of, 10
Mobile technologies, 64. See also MCAPI; MCATI
Modern computing, 53–55
  internet access and survey mobility, 56–59
  wireless telephones, 55–56
Moore’s Law, 54
Narrative presentation. See Text
National Center for Health Statistics, 56
New York Times, 53
Nonresponse bias, 47
Nontechnical audience, 77–78
Online panels, 60–63
Online survey, 9, 19, 57, 61–62
Open-ended questions, 18–20
Opt-in panels, 60–64
Ordered/scalar response category, 25
Participants, 74
Pew Research Center, 56–57
Practice interviews, 46
Pretesting, 35–37
Privacy, 83–84
Probability-based panels, 59–60
Probability surveys, 64–66
Probes, defined, 38
Probing
  in mailed surveys, 39–40
  in web surveys, 39
Proprietary information, 83–84
Questioning, as social process, 2–3, 6
Questions, 1–6
Record keeping, 40–41
Refusals, 45
Reliability, 11–13
Research design, 2
Researchers, 74
Response rate, 41, 47
Role playing, 46
Sampling, 2, 7
Satisficing, 36
Scientific approach, 1
Self-administered survey, 41
Semantic Differential Scale, 26–28
Skip patterns, 36
Smartphones
  dependent, 57
  internet-accessible, 57
Social process, questioning as, 2–3, 6
Specificity, question, 14–15
Sponsors, 74
Statistical story-telling, 86
Structured questions. See Closed-ended questions
Survey mobility, 56–59
Survey panels, 59
  opt-in panels, 60–64
  probability-based panels, 59–60
Survey presentations, 73
Survey questions, 7–31
  good questions
    key elements of, 14–17
    writing, 7–31
  responses to, 87
  types and formats, 17–18
    closed-ended, 20–28
    open-ended, 18–20
  validity and reliability in, 11–13
  willingness to answer, 13–14
Survey reports
  format of, 79–80
  writing, 43
Survey results
  audience, 74–75
    knowing, 75
    lack of understanding by, 75–77
    types of, 77–79
  content, 79–81
    executive summaries, 81–82
    privacy, confidentiality, and proprietary information, 83–84
  expression, 85
    presenting data, 85–86
    text, graphs, and tables, 86–93
  presenting, 73–97
Survey technology, 53–71. See also specific technologies
SurveyMonkey, 61–62
Surveys, 33–51
  administering
    linking to other information, 41–42
    probe questions, 37–39
    probing in mailed surveys, 39–40
    probing in web surveys, 39
    record keeping, 40–41
  behavior coding and interviewer debriefing, 35
  developing, 7, 33–34
    cognitive interviews, 34
    focus groups, 34
    other surveys, 34
  interviewer training
    practice interviews, 46
    providing documentation for, 45–46
  listening, 44
  making data available to other social scientists, 43–44
  mode of delivery, 37
  participation, 47–48
  pretesting, 35–37
  processing data
    coding, 42
    data analysis, 43
    data entry, 43
    editing, 42–43
    writing reports, 43
Tables, 86–93
Tailored Design Method, 9
Technical audience, 77–78
Telephone survey, 35, 36, 39
  disposition codes for, 40
Telephones, wireless, 55–56
Text, 86–93
Total Design Method, 9
UNIVAC, 53–54
Universal Automatic Computer. See UNIVAC
Unstructured questions. See Open-ended questions
Validity, 11–13
Wall Street Journal, 54
Web surveys, probing in, 39
Wireless telephones, 55–56
YouGov, 61
OTHER TITLES IN QUANTITATIVE APPROACHES
TO DECISION MAKING COLLECTION
Donald N. Stengel, California State University, Fresno, Editor
• Regression Analysis: Understanding and Building Business and Economic Models Using
Excel, Second Edition by J. Holton Wilson, Barry P. Keating, and Mary Beal-Hodges
• Operations Methods: Managing Waiting Line Applications, Second Edition
by Kenneth A. Shaw
• Using Statistics for Better Business Decisions by Justin Bateh and Bert G. Wachsmuth
• Applied Regression and Modeling: A Computer Integrated Approach by Amar Sahay
• The Art of Computer Modeling for Business Analytics: Paradigms and Case Studies
by Gerald Feigin
• Data Visualization, Volume I: Recent Trends and Applications Using Conventional
and Big Data by Amar Sahay
• Data Visualization, Volume II: Uncovering the Hidden Pattern in Data Using Basic
and New Quality Tools by Amar Sahay
• Decision Analysis for Managers, Second Edition: A Guide for Making Better Personal
and Business Decisions by David Charlesworth
• Conducting Survey Research: A Practical Guide by John Fogli and Linda Herkenhoff
• MS Excel, Second Edition: Let’s Advance to the Next Level by Anurag Singal
• Business Decision Making, Second Edition: Streamlining the Process for More Effective
Results by Milan Frankl
• An Introduction to Survey Research, Second Edition, Volume I: The Basics of Survey
Research by Ernest L. Cowles and Edward Nelson

Announcing the Business Expert Press Digital Library


Concise e-books business students need for classroom and research

This book can also be purchased in an e-book collection by your library. The collection
• is a one-time purchase that is owned forever,
• allows for simultaneous readers,
• has no restrictions on printing, and
• can be downloaded as PDFs from within the library community.

Our digital library collections are a great solution to beat the rising cost of textbooks.
E-books can be loaded into course management systems or onto students’ e-book readers.

The Business Expert Press digital libraries are very affordable, with no obligation to buy in
future years. For more information, please visit www.businessexpertpress.com/librarians.
To set up a trial in the United States, please email [email protected]