GROUP-6-RESEARCH-NOTES
College of Engineering
Respondents of the Study: This describes the target population and the sample frame.
The population is composed of persons or objects that possess some common
characteristics that are of interest to the researcher. There are two groups of population:
the target population and the accessible population. The target population consists of the
entire group of people or objects to which the findings of the study generally apply.
Meanwhile, the accessible population is the specific study population.
A parameter is a numeric characteristic of a population. It is very impractical for
the researcher to get data from the entire population especially if it is very large; in this
case, a sample is derived. A sample is a subset of the entire population or a group of
individuals that represents the population and serves as the respondents of the study. A
statistic is a numeric characteristic of a sample. A single member of the sample is called an
element.
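As a rough sketch of these terms (hypothetical data, Python standard library only): a population of 500 ages, its mean as a parameter, a random sample of 50 elements, and the sample mean as a statistic.

```python
import random

random.seed(42)  # for a reproducible illustration

# Hypothetical accessible population: ages of 500 students
population = [random.randint(18, 25) for _ in range(500)]

# Parameter: a numeric characteristic of the population (here, its mean)
parameter_mean = sum(population) / len(population)

# Sample: a subset that represents the population; each member is an element
sample = random.sample(population, 50)

# Statistic: the same characteristic computed from the sample
statistic_mean = sum(sample) / len(sample)

print(round(parameter_mean, 2), round(statistic_mean, 2))
```

With a well-drawn sample, the statistic approximates the parameter without measuring the entire population.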
Instruments of the Study: It describes the specific type of research instrument that will
be used such as questionnaire, checklist, questionnaire-checklist, interview schedule,
teacher-made tests, and the like.
Most frequently used data collection techniques:
1. Documentary Analysis. This technique is used to analyse primary and secondary
sources that are available mostly in churches, schools, public or private offices, hospitals, or
in community, municipal, and city halls.
2. Interview. The instrument used in this method is the interview schedule. The skill of
the interviewer determines if the interviewee is able to express his/her thoughts clearly.
Data obtained from an interview may be recorded on audiotapes or videotapes. Today, cell
phones or smartphones can be used as recording devices.
Three Types of Interview:
a. Unstructured. This interview can be in the form of normal conversations or a
freewheeling exchange of ideas. The researcher must be skilled in conducting the interview
so that he/she can steer the course of conversation.
Cebu Technological University – Danao City Campus
College of Engineering
b. Structured. The conduct of questioning follows a particular sequence and has a well-
defined content. The interviewer does not ask questions that are not part of the
questionnaire but he/she can ask the interviewee to clarify his/her answers.
c. Semi-structured. There is a specific set of questions, but there are also additional
probes that may be done in an open-ended or closed-ended manner.
3. Observation. This process or technique enables the researcher to participate actively in
the conduct of the research. Observation must be done in a quiet and inconspicuous
manner so as to get realistic data.
Two Types of Observation:
a. Structured. The researcher uses a checklist as a data collection tool.
b. Unstructured. The researcher observes things as they happen. The researcher
conducts the observation without any preconceived ideas about what will be observed.
4. Physiological Measures. The technique applied for physiological measures involves the
collection of physical data from the subject. It is considered more accurate and objective
than other data-collection methods.
5. Psychological Tests. These include personality inventories and projective techniques.
Personality inventories are self-reported measures that assess the differences in personality
traits, needs, or values of people.
6. Questionnaire. It is the most commonly used instrument in research. It is a list of
questions about a particular topic, with spaces provided for the response to each question,
and intended to be answered by a number of persons (Good, 1984). The questionnaire can
be structured or unstructured. Structured questionnaires provide possible answers and
respondents just have to select from them. Unstructured questionnaires do not provide
options and the respondents are free to give whatever answer they want.
Establishing validity and reliability: The instrument must pass the validity and
reliability tests before it is utilized. Validity is the ability of an instrument to measure
what it intends to measure. When a study investigates the common causes of absences,
the content of the instrument must focus on these variables and indicators. Reliability
refers to the consistency of results. A reliable instrument yields the same results for
individuals who take the test more than once.
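One common way to check reliability is the test-retest method: administer the instrument twice to the same respondents and correlate the two sets of scores. A minimal sketch with the standard Pearson r computation (the scores below are invented):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical test-retest data: the same 8 respondents took the instrument twice
first_admin = [12, 15, 9, 20, 14, 11, 18, 16]
second_admin = [13, 14, 10, 19, 15, 11, 17, 16]

r = pearson_r(first_admin, second_admin)
print(round(r, 3))  # a value near 1 indicates consistent results
```

A coefficient close to 1 means the instrument yields nearly the same results on repeated administrations, i.e., it is reliable.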
Statistical Treatment: One of the many ways of establishing the objectivity of research
findings is by subjecting the data to different but appropriate statistical formulas and
processes. Statistical treatment is the culmination of the long process of formulating a
hypothesis, constructing the instrument, and collecting data. It is used to properly test the
hypothesis, answer the research questions, and present the results of the study in a clear and
understandable manner. In quantitative research, which deals more with numerical data, as in
most surveys and experiments, it is logical to use statistical treatment. Statistics is the body
of knowledge and techniques used in collecting, organizing, presenting, analysing, and
interpreting data. It is a prerequisite in any research that the researcher has sufficient
knowledge of various statistical techniques.
Two Branches of Statistics
1. Descriptive Statistics. It involves tabulating, depicting, and describing the collected
data. The data are summarized to reveal overall data patterns and make them manageable.
2. Inferential statistics. It involves making generalizations about the population through
a sample drawn from it. It also includes hypothesis testing and sampling. It is likewise
concerned with a higher degree of critical judgment and advanced mathematical models such as
parametric (interval and ratio scale) and non-parametric (nominal and ordinal) statistical tools.
2. Proportion. It is the frequency in each category divided by the total number of cases. It can be
derived from the frequency distribution.
4. Measures of central tendency. These indicate where the center of the distribution tends to be located.
A measure of central tendency refers to the typical or average score in a distribution.
a. Mode. It is the most frequently occurring score in a distribution.
b. Median. It is the middlemost value in a distribution, below or above which exactly 50% of the
cases fall.
c. Mean. It is the exact mathematical center of a distribution. It is equal to the sum of all scores
divided by the number of cases.
5. Variability or dispersion. It refers to the extent and manner in which the scores in a distribution
differ from each other.
a. Range. It is the difference between the highest value and the lowest value in the given
distribution.
b. Average deviation. It is the measure of variation that takes into consideration the deviations of
the individual scores from the mean.
c. Standard deviation. It is the square root of the quotient of the total squared deviations from the
mean and the total number of cases.
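The descriptive measures above can be computed directly. A small sketch using Python's standard library and invented scores:

```python
from collections import Counter
import statistics

scores = [88, 75, 90, 75, 82, 95, 75, 88, 90, 82]

# Frequency distribution and proportions (frequency per category / total cases)
freq = Counter(scores)
proportions = {score: count / len(scores) for score, count in freq.items()}

# Measures of central tendency
mode = statistics.mode(scores)      # most frequently occurring score
median = statistics.median(scores)  # middlemost value
mean = statistics.mean(scores)      # sum of all scores / number of cases

# Measures of variability
value_range = max(scores) - min(scores)
avg_deviation = sum(abs(x - mean) for x in scores) / len(scores)
std_dev = statistics.pstdev(scores)  # population standard deviation

print(mode, median, mean, value_range)
```

Here 75 is the mode (it occurs three times), the median of the ten sorted scores is 85.0, and the mean is 84.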
INFERENTIAL STATISTICS
1. Parametric tests. These tests require a normal distribution. The level of measurement must be either
interval or ratio.
a. T-test. This test is used to compare the means of two independent groups or the means of two
correlated samples (e.g., before and after a treatment). It may be used for samples composed of
fewer than 30 elements.
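A minimal sketch of the independent-samples t statistic (pooled-variance formula, hypothetical scores; in practice the computed t is compared against a critical value from a t table):

```python
import math

def independent_t(sample1, sample2):
    """t statistic for two independent samples (pooled variance)."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = sum(sample1) / n1, sum(sample2) / n2
    ss1 = sum((x - m1) ** 2 for x in sample1)
    ss2 = sum((x - m2) ** 2 for x in sample2)
    pooled_var = (ss1 + ss2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var * (1 / n1 + 1 / n2))

# Hypothetical scores of two independent groups
group_a = [85, 78, 92, 88, 75, 80, 84]
group_b = [70, 72, 68, 75, 80, 65, 74]

t = independent_t(group_a, group_b)
print(round(t, 3))  # a large |t| suggests the two group means differ
```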
b. Z-test. It is used to compare the sample mean and the perceived (hypothesized) population mean.
It may be used when the sample has 30 or more elements.
c. F-test. Also known as the analysis of variance (ANOVA), it is used when comparing the means
of two or more independent groups. One-way ANOVA is used when there is one variable involved, and
two-way ANOVA is used when there are two variables involved.
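The one-way ANOVA F statistic is the ratio of between-groups variance to within-groups variance. A sketch with invented scores from three independent groups:

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across independent groups."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    k = len(groups)

    # Between-groups sum of squares: group means vs. the grand mean
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: scores vs. their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    df_between = k - 1
    df_within = len(all_scores) - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical scores from three independent groups
f = one_way_anova_f([80, 85, 90], [70, 72, 75], [60, 65, 62])
print(round(f, 2))  # a large F suggests at least one group mean differs
```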
d. Pearson product-moment correlation coefficient (Pearson r). It measures the strength and
direction of the linear relationship between two variables measured on an interval or ratio scale.
e. Simple linear regression analysis. It is used when there is a significant relationship between
the x and y variables. It is used to predict the value of y, given the value of x.
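Simple linear regression fits the least-squares line y = a + bx and then uses it for prediction. A sketch with invented data (hours studied vs. exam score is only an illustration):

```python
def simple_linear_regression(x, y):
    """Least-squares slope and intercept for predicting y from x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical data: hours studied (x) and exam score (y)
hours = [1, 2, 3, 4, 5]
score = [52, 58, 65, 71, 79]

slope, intercept = simple_linear_regression(hours, score)
predicted = slope * 6 + intercept  # predicted score for 6 hours of study
print(round(slope, 2), round(intercept, 2), round(predicted, 1))
```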
2. Non-parametric tests. These tests do not require a normal distribution of scores. They are used
when the data are nominal or ordinal.
a. Chi-square test. This is a test of difference between the observed and the expected
frequencies. The chi-square test has three functions:
i. Test of goodness of fit. It is a test of difference between the observed and expected
frequencies.
ii. Test of homogeneity. It is concerned with two or more samples with only one
criterion variable. This test is used to determine if two or more populations are homogenous.
iii. Test of independence. The sample used in this test consists of members randomly
drawn from the same population. This test is used to determine whether two criterion variables
measured in a given population are independent of or associated with each other.
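The chi-square statistic itself is the sum of (O − E)²/E over the categories. A goodness-of-fit sketch with invented frequencies:

```python
def chi_square(observed, expected):
    """Chi-square statistic: sum of (O - E)^2 / E over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical goodness-of-fit test: are 120 respondents evenly split
# across four answer choices? Expected frequency per choice = 30.
observed = [40, 25, 35, 20]
expected = [30, 30, 30, 30]

stat = chi_square(observed, expected)
print(round(stat, 2))  # compare against the critical value for df = 3
```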
b. Spearman’s Rank Order Correlation Coefficient. This is the non-parametric version of the
Pearson product-moment correlation. This measures the strength and direction of association between
two ranked variables.
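Spearman's rho can be sketched with the rank-difference formula rho = 1 − 6Σd²/[n(n² − 1)], which is exact when there are no tied ranks (the data below are invented):

```python
def rank(values):
    """Rank positions (1 = lowest); ties share the average of their positions."""
    ordered = sorted(values)
    return [ordered.index(v) + (ordered.count(v) + 1) / 2 for v in values]

def spearman_rho(x, y):
    """Spearman's rho via 1 - 6*sum(d^2) / (n(n^2 - 1)); exact with no ties."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

# Hypothetical ranks given by two judges to five contestants
judge1 = [1, 2, 3, 4, 5]
judge2 = [2, 1, 4, 3, 5]
print(spearman_rho(judge1, judge2))  # 0.8: strong positive association
```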
In experimental research:
a. The researcher deliberately and directly determines what forms the independent variable will
take and which group will get which form.
b. The experimental group receives a treatment of some sort while the control group receives
no treatment.
c. After the treatment has been administered, researchers observe or measure the groups
receiving the treatments to see if they differ.
d. This enables the researcher to determine whether the treatment has had an effect or whether
one treatment is more effective than another.
1. True experimental design. A design is considered a true experiment when the following criteria are
present: the researcher manipulates the experimental variables, i.e., the researcher has control over
the independent variables as well as the treatment and the subjects; there must be one experimental
group and one comparison or control group; and the subjects are randomly assigned to either the
comparison or experimental group. The control group is a group that does not receive the treatment.
a. The experimental group receives the treatment while the control group does not.
b. Two of the groups (experimental group 1 and control group 1) are pretested.
c. The other two groups (experimental group 2 and control group 2) receive the routine
treatment or no treatment.
2. Quasi-experimental design. A design in which either there is no control group or the subjects are not
randomly assigned to groups.
a. Non-equivalent controlled group design. This design is similar to the pretest-posttest control
group design except that there is no random assignment of subjects to the experimental and control
groups.
3. Pre-experimental design. This experimental design is considered very weak because the researcher
has little control over the research.
a. One-shot case study. A single group is exposed to an experimental treatment and observed
after the treatment.
By including a control group to use as a point of comparison, researchers are better able to isolate the
effects of the treatment. Being able to report on the difference (or lack of difference) between the
control and experimental groups is very important to ensuring that conclusions drawn from the study
are valid.
Treatment and Control groups. In experimental research, some subjects are administered one or more
experimental stimulus called a treatment (the treatment group) while other subjects are not given such
a stimulus (the control group). The treatment may be considered successful if subjects in the treatment
group rate more favourably on outcome variables than control group subjects. Multiple levels of
experimental stimulus may be administered, in which case, there may be more than one treatment
group. For example, in order to test the effects of a new drug intended to treat a certain medical
condition like dementia, if a sample of dementia patients is randomly divided into three groups, with the
first group receiving a high dosage of the drug, the second group receiving a low dosage, and the third
group receiving a placebo such as a sugar pill (control group), then the first two groups are experimental
groups and the third group is a control group. After administering the drug for a period of time, if the
condition of the experimental group subjects improved significantly more than the control group
subjects, we can say that the drug is effective. We can also compare the conditions of the high and low
dosage experimental groups to determine if the high dose is more effective than the low dose.
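Random assignment of subjects to treatment and control groups, as in the dementia example above, can be sketched as a simple shuffle-and-split (the patient IDs are hypothetical):

```python
import random

random.seed(7)  # for a reproducible illustration

# Hypothetical sample: 30 patients to be randomly divided into three groups
patients = [f"P{i:02d}" for i in range(1, 31)]
random.shuffle(patients)

high_dose = patients[:10]    # experimental group 1
low_dose = patients[10:20]   # experimental group 2
placebo = patients[20:]      # control group

print(len(high_dose), len(low_dose), len(placebo))
```

Shuffling before splitting gives every patient an equal chance of landing in any group, which is the essence of random assignment.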
Treatment manipulation. Treatments are the unique feature of experimental research that sets this
design apart from all other research methods. Treatment manipulation helps control for the “cause” in
cause-effect relationships. Naturally, the validity of experimental research depends on how well the
treatment was manipulated. Treatment manipulation must be checked using pretests and pilot tests
prior to the experimental study. Any measurements conducted before the treatment is administered are
called pretest measures, while those conducted after the treatment are posttest measures.
In a no-treatment control condition, participants receive no treatment whatsoever. One problem with
this approach, however, is the existence of placebo effects. A placebo is a simulated treatment that lacks
any active ingredient or element that should make it effective, and a placebo effect is a positive effect of
such a treatment. Many folk remedies that seem to work—such as eating chicken soup for a cold or
placing soap under the bed sheets to stop nighttime leg cramps—are probably nothing more than
placebos. Although placebo effects are not well understood, they are probably driven primarily by
people’s expectations that they will improve.
A control group study can be managed in two different ways. In a single-blind study, the researcher will
know whether a particular subject is in the control group, but the subject will not know. In a double-
blind study, neither the subject nor the researcher will know which treatment the subject is receiving. In
many cases, a double-blind study is preferable to a single-blind study, since the researcher cannot
inadvertently affect the results or their interpretation by treating a control subject differently from an
experimental subject.
While data analysis in qualitative research can include statistical procedures, many times analysis
becomes an on-going iterative process where data is continuously collected and analyzed almost
simultaneously. Indeed, researchers generally analyse for patterns in observations through the entire
data collection phase (Savenye & Robinson, 2004). The form of the analysis is determined by the specific
qualitative approach taken (field study, ethnography, content analysis, oral history, biography,
unobtrusive research) and the form of the data (field notes, documents, audiotape, videotape).
ANALYSIS OF DATA
Numbers or figures simply presented will not be easily comprehended and the significance will
not be determined without a correct analysis. Analysis is the process of breaking a whole into parts. The
researcher must be critical in looking at details to prove or disprove a certain theory or claim. In
analyzing the data, the researcher should give attention to the following:
1. The highest numerical value such as scores, weighted means, percentages, variability, etc.
2. The lowest numerical value such as scores, weighted means, percentages, variability, etc.
3. The most common numerical values like the mode or values that appear repeatedly.
4. The final numerical value like the average weighted mean, total, chi-square value, correlation
index, etc.
INTERPRETATION OF DATA
The following are the levels of interpretation which are considered in organizing the discussion
of the results or findings (Ducut and Pangilinan, 2006):
1. Level 1. Data collected are compared and contrasted. Unexpected results, if any, may be
mentioned. The researcher is allowed to comment on certain shortcomings of the study but should not
concentrate too much on the flaws.
2. Level 2. The researcher should explain the internal validity of the results as well as their
consistency or reliability. The causes or factors that may have influenced the results may also be
described.
3. Level 3. The researcher should explain the external validity of the results, that is, their
generality or applicability to external conditions.
4. Level 4. The researcher should relate or connect the interpretation of data with theoretical
research or with the reviewed literature.
DISCUSSION OF DATA
1. The flow of the discussion of results or findings is based on how the problems are stated.
a. Discussion of the findings in relation to the results of previous studies cited in the
review of related literature and studies.
What is SPSS?
SPSS is short for Statistical Package for the Social Sciences, and it is used by various kinds of
researchers for complex statistical data analysis. SPSS is software mainly used by research scientists,
which helps them process critical data in simple steps. Working on data is a complex and time-consuming
process, but this software can easily handle and operate on information with the help of some techniques.
These techniques are used to analyse, transform, and produce characteristic patterns between
different data variables.
In addition, the output can be obtained through graphical representation so that a user can easily
understand the results. Officially dubbed IBM SPSS Statistics, most users still refer to it as SPSS. As a
widely used standard for social science data analysis, SPSS is valued for its straightforward,
English-like command language and impressively thorough user manual.
SPSS is used by market researchers, health researchers, survey companies, government entities,
education researchers, marketing organizations, data miners, and many more for the processing and
analysing of survey data. While Alchemer has powerful built-in reporting features, when it comes to
in-depth statistical analysis, researchers consider SPSS the best-in-class solution. Most top research
agencies use SPSS to analyse survey data and mine text data so that they can get the most out of their
research projects.
SPSS offers four programs that assist researchers with their complex data analysis needs.
Statistics Program
SPSS’s Statistics program provides a plethora of basic statistical functions, some of which include
frequencies, cross tabulation, and bivariate statistics.
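For comparison, the first two of those procedures can be sketched in plain Python with `collections.Counter` (the survey responses below are invented):

```python
from collections import Counter

# Hypothetical survey responses: (sex, answer) pairs
responses = [
    ("F", "Yes"), ("M", "No"), ("F", "Yes"), ("M", "Yes"),
    ("F", "No"), ("M", "No"), ("F", "Yes"), ("M", "Yes"),
]

# Frequencies of a single variable (analogous to SPSS's Frequencies procedure)
answer_freq = Counter(answer for _, answer in responses)

# Cross tabulation of two variables (analogous to SPSS's Crosstabs procedure)
crosstab = Counter(responses)

print(answer_freq["Yes"], crosstab[("F", "Yes")])
```

Each cell of the cross tabulation counts how often a combination of the two variables occurs, which is the raw material for the chi-square tests described earlier.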