Pangasinan State University
School of Advanced Studies
Urdaneta City Campus
DATA COLLECTION INSTRUMENT
Submitted by:
DAISY S. TABIOS
Submitted to:
Dr. Roy Ferrer and Dr. Irma Mirasol Ferrer
DATA COLLECTION
• Data collection is a systematic process of gathering observations or measurements. Whether you are
performing research for business, governmental or academic purposes, data collection allows you to
gain first-hand knowledge and original insights into your research problem.
While methods and aims may differ between fields, the overall process of data collection remains largely
the same. Before you begin collecting data, you need to consider:
• The aim of the research
• The type of data that you will collect
• The methods and procedures you will use to collect, store, and process the data
To collect high-quality data that is relevant to your purposes, follow these four steps:
1. Define the aim of your research
2. Choose your data collection method
3. Plan your data collection procedures
4. Collect the data
Step 1: Define the aim of your research
Before you start the process of data collection, you need to identify exactly what you want to achieve.
You can start by writing a problem statement: what is the practical or scientific issue that you want to
address and why does it matter?
Next, formulate one or more research questions that precisely define what you want to find out.
Depending on your research questions, you might need to collect quantitative or qualitative data:
Quantitative data is expressed in numbers and graphs and is analyzed through statistical methods.
Qualitative data is expressed in words and analyzed through interpretations and categorizations.
If your aim is to test a hypothesis, measure something precisely, or gain large-scale statistical insights,
collect quantitative data. If your aim is to explore ideas, understand experiences, or gain detailed insights
into a specific context, collect qualitative data. If you have several aims, you can use a mixed methods
approach that collects both types of data.
Examples of quantitative and qualitative research aims
You are researching employee perceptions of their direct managers in a large organization.
Your first aim is to assess whether there are significant differences in perceptions of managers across
different departments and office locations.
Your second aim is to gather meaningful feedback from employees to explore new ideas for how
managers can improve.
Step 2: Choose your data collection method
Based on the data you want to collect, decide which method is best suited for your research.
Experimental research is primarily a quantitative method.
Interviews/focus groups and ethnography are qualitative methods.
Surveys, observations, archival research and secondary data collection can be quantitative or qualitative
methods.
Step 3: Plan your data collection procedures
When you know which method(s) you are using, you need to plan exactly how you will
implement them. What procedures will you follow to make accurate observations or
measurements of the variables you are interested in?
For instance, if you’re conducting surveys or interviews, decide what form the questions will
take; if you’re conducting an experiment, make decisions about your experimental design.
Operationalization
Sometimes your variables can be measured directly: for example, you can collect data on the average
age of employees simply by asking for dates of birth. However, often you’ll be interested in collecting
data on more abstract concepts or variables that can’t be directly observed.
Operationalization means turning abstract conceptual ideas into measurable observations. When
planning how you will collect data, you need to translate the conceptual definition of what you want to
study into the operational definition of what you will actually measure.
Example of operationalization
You have decided to use surveys to collect quantitative data. The concept you want to measure is the
leadership of managers. You operationalize this concept in two ways:
You ask managers to rate their own leadership skills on 5-point scales assessing the ability to
delegate, decisiveness and dependability.
You ask their direct employees to provide anonymous feedback on the managers regarding the same
topics.
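To make this concrete, here is a minimal Python sketch of scoring the operationalized concept. The item names, the 1–5 ratings, and the composite (mean) scoring rule are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch: scoring the operationalized "leadership" concept from
# three hypothetical 5-point survey items. Item names, ratings, and the
# composite (mean) scoring rule are illustrative assumptions.

def leadership_score(responses: dict[str, int]) -> float:
    """Combine the operationalized items into one composite score."""
    items = ["delegation", "decisiveness", "dependability"]
    for item in items:
        if not 1 <= responses[item] <= 5:
            raise ValueError(f"{item} must be rated on a 1-5 scale")
    return sum(responses[item] for item in items) / len(items)

# A manager's self-rating and one employee's anonymous rating of that manager
self_rating = {"delegation": 4, "decisiveness": 5, "dependability": 4}
employee_rating = {"delegation": 3, "decisiveness": 4, "dependability": 4}

print(round(leadership_score(self_rating), 2))      # 4.33
print(round(leadership_score(employee_rating), 2))  # 3.67
```

Comparing the two scores side by side is what allows the self-ratings and employee ratings to be analyzed against each other later.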
Sampling
You may need to develop a sampling plan to obtain data systematically. This involves defining a
population (the group you want to draw conclusions about) and a sample (the group you will actually
collect data from).
Your sampling method will determine how you recruit participants or obtain measurements for your
study. To decide on a sampling method you will need to consider factors like the required sample
size, accessibility of the sample, and timeframe of the data collection.
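For example, a simple random sample can be drawn from a population frame as in the sketch below; the employee-ID frame, population size, sample size, and random seed are all illustrative assumptions.

```python
import random

# Minimal sketch of one sampling method (simple random sampling).
# The population frame and sample size are illustrative assumptions.
population = [f"EMP-{i:04d}" for i in range(1, 1201)]  # frame of 1,200 employees

random.seed(42)  # fixed seed so the draw can be reproduced and documented
sample = random.sample(population, k=300)  # draw 300 IDs without replacement

print(len(sample), sample[:3])
```

Recording the seed alongside the sampling plan makes the draw itself part of the documented, repeatable procedure.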
Standardizing procedures
• If multiple researchers are involved, write a detailed manual to standardize data collection procedures in
your study.
• This means laying out specific step-by-step instructions so that everyone in your research team collects
data in a consistent way – for example, by conducting experiments under the same conditions and using
objective criteria to record and categorize observations.
• This helps ensure the reliability of your data, and you can also use it to replicate the study in the future.
Creating a data management plan
• Before beginning data collection, you should also decide how you will organize and store your data.
• If you are collecting data from people, you will likely need to anonymize and safeguard the data to
prevent leaks of sensitive information (e.g. names or identity numbers).
• If you are collecting data via interviews or pencil-and-paper formats, you will need to perform
transcriptions or data entry in systematic ways to minimize distortion.
• You can prevent loss of data by having an organization system that is routinely backed up.
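As a sketch of the anonymization step, direct identifiers can be replaced with salted one-way hashes before storage. The salt value, the record fields, and the hash truncation below are illustrative assumptions; a real data management plan should also cover secret handling and backups.

```python
import hashlib

# Minimal sketch: pseudonymizing names before the data is stored.
# The salt and records are illustrative; keep a real salt secret and
# store it separately from the anonymized data.
SALT = "replace-with-a-secret-project-salt"

def pseudonym(name: str) -> str:
    """Replace a direct identifier with a salted, one-way hash."""
    return hashlib.sha256((SALT + name).encode("utf-8")).hexdigest()[:12]

records = [{"name": "Juan Dela Cruz", "rating": 4},
           {"name": "Maria Santos", "rating": 5}]

anonymized = [{"id": pseudonym(r["name"]), "rating": r["rating"]}
              for r in records]
print(anonymized)
```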
Step 4: Collect the data
• Finally, you can implement your chosen methods to measure or observe the variables you are interested
in.
Examples of collecting qualitative and quantitative data
• To collect data about perceptions of managers, you administer a survey with closed- and open-ended
questions to a sample of 300 company employees across different departments and locations.
• The closed-ended questions ask participants to rate their manager’s leadership skills on scales from 1–5.
The data produced is numerical and can be statistically analyzed for averages and patterns.
• The open-ended questions ask participants for examples of what the manager is doing well now and
what they can do better in the future. The data produced is qualitative and can be categorized through
content analysis for further insights.
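A minimal sketch of analyzing the closed-ended ratings for averages by group is shown below; the departments and ratings are illustrative assumptions.

```python
from statistics import mean

# Minimal sketch: averaging 1-5 closed-ended ratings by department.
# The response records are illustrative assumptions.
responses = [
    {"department": "Sales",   "rating": 4},
    {"department": "Sales",   "rating": 3},
    {"department": "Finance", "rating": 5},
    {"department": "Finance", "rating": 4},
]

by_department: dict[str, list[int]] = {}
for r in responses:
    by_department.setdefault(r["department"], []).append(r["rating"])

for department, ratings in by_department.items():
    print(department, round(mean(ratings), 2))  # average rating per department
```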
To ensure that high-quality data is recorded in a systematic way, here are some best practices:
• Record all relevant information as and when you obtain data. For example, note down whether or how
lab equipment is recalibrated during an experimental study.
• Double-check manual data entry for errors.
• If you collect quantitative data, you can assess the reliability and validity to get an indication of your
data quality.
CONSTRUCTION OF RESEARCH INSTRUMENT
The Instrument
• Instrument is the general term that researchers use for a measurement device (survey, test, questionnaire,
etc.). To help distinguish between instrument and instrumentation, consider that the instrument is the
device and instrumentation is the course of action (the process of developing, testing, and using the
device).
Validity
• Validity is the extent to which an instrument measures what it is supposed to measure and performs as it
is designed to perform. It is rare, if not impossible, for an instrument to be 100% valid, so validity is
generally measured in degrees. As a process, validation involves collecting and analyzing data to assess
the accuracy of an instrument. There are numerous statistical tests and measures to assess the validity of
quantitative instruments, and this assessment generally involves pilot testing. The remainder of this
discussion focuses on external validity and content validity.
• External validity is the extent to which the results of a study can be generalized from a sample to a
population. Establishing external validity for an instrument, then, follows directly from sampling. Recall
that a sample should be an accurate representation of a population, because the total population may not
be available. An instrument that is externally valid helps obtain population generalizability, or the
degree to which a sample represents the population.
• Content validity refers to the appropriateness of the content of an instrument. In other words, do the
measures (questions, observation logs, etc.) accurately assess what you want to know? This is
particularly important with achievement tests. Consider that a test developer wants to maximize the
validity of a unit test for 7th grade mathematics. This would involve taking representative questions
from each of the sections of the unit and evaluating them against the desired outcomes.
Reliability
• Reliability can be thought of as consistency. Does the instrument consistently measure what it is
intended to measure? Reliability cannot be calculated exactly; it can only be estimated. There are four
general estimators that you may encounter in reading research:
• Inter-Rater/Observer Reliability: The degree to which different raters/observers give consistent answers
or estimates.
• Test-Retest Reliability: The consistency of a measure evaluated over time.
• Parallel-Forms Reliability: The reliability of two tests constructed the same way, from the same content.
• Internal Consistency Reliability: The consistency of results across items, often measured with
Cronbach’s Alpha.
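Cronbach's alpha is computed as alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores), where k is the number of items. The sketch below implements this with the standard library; the 5-point ratings are illustrative assumptions.

```python
from statistics import variance

# Minimal sketch of Cronbach's alpha (internal consistency).
# Rows are respondents, columns are survey items; ratings are illustrative.
scores = [
    [4, 5, 4],
    [3, 4, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
]

k = len(scores[0])                                    # number of items
item_vars = [variance([row[i] for row in scores]) for i in range(k)]
total_var = variance([sum(row) for row in scores])    # variance of total scores

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))  # 0.929 here; values near 1 suggest high consistency
```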
Relating Reliability and Validity
• Reliability is directly related to the validity of the measure. There are several important principles. First,
a test can be reliable but not valid. Consider the SAT, used as a predictor of success in college. It is a
reliable test (scores are consistent across repeated administrations), though only a moderately valid
indicator of success, because factors outside the test (class attendance, parent-regulated study, and
sleeping habits) are each related to success.
• Second, validity is more important than reliability. Using the above example, college admissions may
consider the SAT a reliable test, but not necessarily a valid measure of other quantities colleges seek,
such as leadership capability, altruism, and civic involvement. The combination of these aspects,
alongside the SAT, is a more valid measure of the applicant’s potential for graduation, later social
involvement, and generosity (alumni giving) toward the alma mater.
• Finally, the most useful instrument is both valid and reliable. Proponents of the SAT argue that it is
both. It is a moderately reliable predictor of future success and a moderately valid measure of a student’s
knowledge in Mathematics, Critical Reading, and Writing.
CONCEPT OF VALIDITY AND RELIABILITY
• Reliability and validity are concepts used to evaluate the quality of research. They indicate how well a
method, technique or test measures something. Reliability is about the consistency of a measure, and
validity is about the accuracy of a measure.
• Reliability refers to how consistently a method measures something. If the same result can be
consistently achieved by using the same methods under the same circumstances, the measurement is
considered reliable.
• You measure the temperature of a liquid sample several times under identical conditions. The
thermometer displays the same temperature every time, so the results are reliable.
• A doctor uses a symptom questionnaire to diagnose a patient with a long-term medical condition.
Several different doctors use the same questionnaire with the same patient but give different diagnoses.
This indicates that the questionnaire has low reliability as a measure of the condition.
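In code, this notion of consistency can be summarized as the spread of repeated measurements; the readings and scores below are illustrative assumptions based on the two examples above.

```python
from statistics import stdev

# Minimal sketch: reliability as the spread of repeated measurements
# under identical conditions. The values are illustrative assumptions.
reliable_readings = [22.1, 22.1, 22.1, 22.1, 22.1]  # thermometer example
inconsistent_diagnoses = [1, 4, 2, 5, 1]            # questionnaire example

print(stdev(reliable_readings))      # 0.0 -> perfectly consistent, reliable
print(stdev(inconsistent_diagnoses)) # large spread -> low reliability
```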
What is validity?
• Validity refers to how accurately a method measures what it is intended to measure. If research has high
validity, that means it produces results that correspond to real properties, characteristics, and variations
in the physical or social world.
• High reliability is one indicator that a measurement is valid. If a method is not reliable, it probably isn’t
valid.
• If the thermometer shows different temperatures each time, even though you have carefully controlled
conditions to ensure the sample’s temperature stays the same, the thermometer is probably
malfunctioning, and therefore its measurements are not valid.
• If a symptom questionnaire results in a reliable diagnosis when answered at different times and with
different doctors, this indicates that it has high validity as a measurement of the medical condition.
• However, reliability on its own is not enough to ensure validity. Even if a test is reliable, it may not
accurately reflect the real situation.
• The thermometer that you used to test the sample gives reliable results. However, the thermometer has
not been calibrated properly, so the result is 2 degrees lower than the true value. Therefore, the
measurement is not valid.
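Numerically, this reliable-but-invalid case looks like the sketch below: the readings barely vary (reliable) but sit about 2 degrees below an assumed true value (not valid). The true value and readings are illustrative assumptions.

```python
from statistics import mean, stdev

# Minimal sketch: a reliable but invalid instrument. The true temperature
# and the miscalibrated readings are illustrative assumptions.
true_value = 24.0
readings = [22.0, 22.1, 21.9, 22.0, 22.0]    # miscalibrated thermometer

print(round(stdev(readings), 3))              # near 0 -> reliable (consistent)
print(round(mean(readings) - true_value, 1))  # -2.0 -> systematic bias, not valid
```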
• Validity is harder to assess than reliability, but it is even more important. To obtain useful results, the
methods you use to collect your data must be valid: the research must be measuring what it claims to
measure. This ensures that your discussion of the data and the conclusions you draw are also valid.
How are reliability and validity assessed?
• Reliability can be estimated by comparing different versions of the same measurement. Validity is
harder to assess, but it can be estimated by comparing the results to other relevant data or theory.
Methods of estimating reliability and validity are usually split up into different types.
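For instance, test-retest reliability can be estimated by correlating two administrations of the same measure, as in the sketch below; the scores are illustrative assumptions (statistics.correlation requires Python 3.10+).

```python
from statistics import correlation  # Pearson's r; Python 3.10+

# Minimal sketch: estimating test-retest reliability by correlating two
# administrations of the same test. The scores are illustrative assumptions.
first_administration = [12, 15, 9, 20, 17, 11]
second_administration = [13, 14, 10, 19, 18, 12]

r = correlation(first_administration, second_administration)
print(round(r, 3))  # values near 1 indicate high test-retest reliability
```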
REFERENCES
• https://www.scribbr.com/methodology/reliability-vs-validity/
• https://researchrundowns.com/quantitative-methods/instrument-validity-reliability/
• https://www.scribbr.com/methodology/data-collection/