
Cebu Technological University – Danao City Campus

College of Engineering

Course Code: ME 3212


Course Description: Methods of Research for ME
Names : Ariel Naquines
Christian Jude M. Odtohan
Noel L. Muring
Group No.: Group 6

5. Quantitative Research Methodologies

Quantitative research is defined as a systematic investigation of phenomena by gathering
quantifiable data and performing statistical, mathematical, or computational techniques.
Quantitative research collects information from existing and potential customers using
sampling methods and by sending out online surveys, online polls, questionnaires, etc., the
results of which can be depicted in numerical form. After carefully interpreting these
numbers, researchers can predict the future of a product or service and make changes accordingly.
Essential Elements of Research Methodology

 Research Design: It is a very important aspect of research methodology which describes
the research mode (whether it is qualitative research or quantitative research, or if the
researcher will use a specific research type, e.g., descriptive survey, historical, case, or
experimental).
Two major designs in Quantitative Research:
1. Experimental Designs: This design is concerned primarily with cause-and-effect relationships in
studies that involve manipulation or control of the independent variables (causes) and
measurement of the dependent variables (effects). This design utilizes the principle of research
known as the method of difference. This means that the effect of a single variable applied to the
situation can be assessed and the difference can be determined (Mill, as cited by Sevilla, 2003).
Types of Experimental Designs:
1. True Experimental Design
1.1. Pretest-posttest control design
1.2. Posttest only control group
1.3. Solomon four-group
2. Quasi-experimental Designs
2.1. Non-equivalent control group

2.2. Time Series


3. Pre-experimental Designs
3.1. One-shot case study
3.2. One group pretest-posttest

 Respondents of the Study: This describes the target population and the sample frame.
The population is composed of persons or objects that possess some common
characteristics that are of interest to the researcher. There are two groups of population:
the target population and the accessible population. The target population consists of the
entire group of people or objects to which the findings of the study generally apply.
Meanwhile, the accessible population is the specific study population.
A parameter is a numeric characteristic of a population. It is very impractical for
the researcher to get data from the entire population especially if it is very large; in this
case, a sample is derived. A sample is a subset of the entire population or a group of
individuals that represents the population and serves as the respondents of the study. A
statistic is a numeric characteristic of a sample. A single member of the sample is called an
element.
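To make these terms concrete, the short sketch below (a minimal Python example using an entirely hypothetical population of exam scores) draws a random sample from a population and contrasts a parameter, computed on the whole population, with a statistic, computed on the sample.

```python
# A minimal sketch (hypothetical data): drawing a sample from a population
# and contrasting a parameter (population value) with a statistic (sample value).
import random
import statistics

random.seed(42)

# Hypothetical target population: exam scores of 1,000 students.
population = [random.gauss(75, 10) for _ in range(1000)]

# Parameter: a numeric characteristic of the population.
population_mean = statistics.mean(population)

# Sample: a subset of the population that serves as the respondents of the study.
sample = random.sample(population, k=50)      # 50 elements drawn at random

# Statistic: a numeric characteristic of the sample.
sample_mean = statistics.mean(sample)

print(f"Parameter (population mean): {population_mean:.2f}")
print(f"Statistic (sample mean):     {sample_mean:.2f}")
```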

 Instruments of the Study: It describes the specific type of research instrument that will
be used such as questionnaire, checklist, questionnaire-checklist, interview schedule,
teacher-made tests, and the like.
Most frequently used data collection techniques:
1. Documentary Analysis. This technique is used to analyse primary and secondary
sources that are available mostly in churches, schools, public or private offices, hospitals, or
in community, municipal, and city halls.
2. Interview. The instrument used in this method is the interview schedule. The skill of
the interviewer determines if the interviewee is able to express his/her thoughts clearly.
Data obtained from an interview may be recorded on audiotapes or videotapes. Today, cell
phones or smartphones can be used as recording devices.
Three Types of Interview:
a. Unstructured. This interview can be in the form of normal conversations or a
freewheeling exchange of ideas. The researcher must be skilled in conducting the interview
so that he/she can steer the course of conversation.

b. Structured. The conduct of questioning follows a particular sequence and has a well-
defined content. The interviewer does not ask questions that are not part of the
questionnaire but he/she can ask the interviewee to clarify his/her answers.
c. Semi-structured. There is a specific set of questions, but there are also additional
probes that may be done in an open-ended or close-ended manner.
3. Observation. This process or technique enables the researcher to participate actively in
the conduct of the research. Observation must be done in a quiet and inconspicuous
manner so as to get realistic data.
Two Types of Observation:
a. Structured. The researcher uses a checklist as a data collection tool.
b. Unstructured. The researcher observes things as they happen. The researcher
conducts the observation without any preconceived ideas about what will be observed.
4. Physiological Measures. The technique applied for physiological measures involves the
collection of physical data from the subject. It is considered more accurate and objective
than other data-collection methods.
5. Psychological Tests. These include personality inventories and projective techniques.
Personality inventories are self-reported measures that assess the differences in personality
traits, needs, or values of people.
6. Questionnaire. It is the most commonly used instrument in research. It is a list of
questions about a particular topic, with spaces provided for the response to each question,
and intended to be answered by a number of persons (Good, 1984). The questionnaire can
be structured or unstructured. Structured questionnaires provide possible answers and
respondents just have to select from them. Unstructured questionnaires do not provide
options and the respondents are free to give whatever answer they want.

 Establishing validity and reliability: The instrument must pass the validity and
reliability tests before it is utilized. Validity is the ability of an instrument to measure
what it intends to measure. When a study investigates the common causes of absences,
the content of the instrument must focus on these variables and indicators. Reliability
refers to the consistency of results. A reliable instrument yields the same results for
individuals who take the test more than once.
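As a rough illustration of test-retest reliability, the sketch below (Python, with hypothetical scores invented for illustration) correlates the results of two administrations of the same instrument; a correlation close to 1 suggests the instrument yields consistent results.

```python
# A minimal sketch (hypothetical scores): estimating test-retest reliability
# as the correlation between two administrations of the same instrument.
import numpy as np

# Hypothetical scores of 10 respondents on the first and second administration.
first_test  = np.array([12, 15, 14, 10, 18, 16, 11, 13, 17, 14])
second_test = np.array([13, 15, 13, 11, 17, 16, 12, 13, 18, 15])

# Pearson correlation between the two administrations; values near 1
# indicate that the instrument yields consistent results.
reliability = np.corrcoef(first_test, second_test)[0, 1]
print(f"Test-retest reliability: {reliability:.2f}")
```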
 Statistical Treatment: One of the many ways of establishing the objectivity of research
findings is by subjecting the data to different but appropriate statistical formulas and
processes. Statistical treatment is the culmination of the long process of formulating a
hypothesis, constructing the instrument, and collecting data. It is used to properly test the
hypothesis, answer the research questions, and present the results of the study in a clear and
understandable manner. In quantitative research, which deals more with numerical data, as in
most surveys and experiments, it is logical to use statistical treatment. Statistics is the body
of knowledge and techniques used in collecting, organizing, presenting, analysing, and
interpreting data. It is therefore a prerequisite that in any research, the researcher has
sufficient knowledge of various statistical techniques.
Two Branches of Statistics
1. Descriptive Statistics. It involves tabulating, depicting, and describing the collected
data. The data are summarized to reveal overall data patterns and make them manageable.
2. Inferential statistics. It involves making generalizations about the population through
a sample drawn from it. It also includes hypothesis testing and sampling. It is concerned with a
higher degree of critical judgment and advanced mathematical models, such as
parametric (interval and ratio scale) and non-parametric (nominal and ordinal) statistical tools.

COMMON STATISTICAL TOOLS


DESCRIPTIVE STATISTICS
1. Frequency distribution. It is the record of the number of individuals or cases located in each category
on the scale of measurement.

2. Proportion. It is the frequency in each category divided by the total number of cases. It can be
derived from the frequency distribution.

3. Percentage. It is the proportion expressed in percentage (proportion x 100)

4. Measures of central tendency. These indicate where the center of the distribution tends to be located.
A measure of central tendency refers to the typical or average score in a distribution.

a. Mode. It refers to the most frequently occurring score in a distribution.

b. Median. It is the middlemost value in a distribution, below or above which exactly 50% of the
cases are found.

c. Mean. It is the exact mathematical center of a distribution. It is equal to the sum of all scores
divided by the number of cases.

5. Variability or dispersion. It refers to the extent and manner in which the scores in a distribution differ
from each other.

a. Range. It is the difference between the highest value and the lowest value in the given
distribution.

b. Average deviation. It is the measure of variation that takes into consideration deviations of
the individual scores from the mean.

c. Variance. It is the square of the standard deviation.

d. Standard deviation. It is the square root of the quotient of the total squared deviations from the
mean and the total number of cases.
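The sketch below (Python, using an invented set of scores purely for illustration) is a minimal demonstration of the descriptive tools listed above: frequency distribution, proportion, percentage, the measures of central tendency, and the measures of variability.

```python
# A minimal sketch (hypothetical scores) of the descriptive tools listed above.
from collections import Counter
import statistics

scores = [3, 4, 4, 5, 2, 4, 3, 5, 4, 3, 2, 4]   # hypothetical Likert-type responses

# Frequency distribution, proportion, and percentage per category
freq = Counter(scores)
n = len(scores)
for category, f in sorted(freq.items()):
    proportion = f / n
    print(f"Category {category}: f={f}, proportion={proportion:.2f}, "
          f"percentage={proportion * 100:.1f}%")

# Measures of central tendency
print("Mode:  ", statistics.mode(scores))
print("Median:", statistics.median(scores))
print("Mean:  ", round(statistics.mean(scores), 2))

# Measures of variability (population formulas, dividing by N)
mean = statistics.mean(scores)
print("Range:             ", max(scores) - min(scores))
print("Average deviation: ", round(sum(abs(x - mean) for x in scores) / n, 2))
print("Variance:          ", round(statistics.pvariance(scores), 2))
print("Standard deviation:", round(statistics.pstdev(scores), 2))
```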

INFERENTIAL STATISTICS
1. Parametric tests. These tests require a normal distribution. The level of measurement must be either
interval or ratio.

a. T-test. This test is used to compare the means of two independent samples or groups, or the
means of two correlated samples (e.g., before and after a treatment). It is typically used for samples
composed of fewer than 30 elements.

b. Z-test. It is used to compare the sample mean and the hypothesized population mean. It can be used
when the sample has 30 or more elements.

c. F-test. Also known as the analysis of variance (ANOVA), it is used when comparing the means
of two or more independent groups. One-way ANOVA is used when there is one variable involved, and
two-way ANOVA is used when there are two variables involved.

d. Pearson product-moment coefficient of correlation. It is an index of the relationship between
two variables.

e. Simple linear regression analysis. It is used when there is a significant relationship between
the x and y variables. It is used for predicting the value of y given the value of x.

f. Multiple regression analysis. It is used in predictions. The dependent variable can be
predicted given several independent variables.

2. Non-parametric tests. These do not require a normal distribution of scores. They are used when the
data are nominal or ordinal.

a. Chi-square test. This is a test of difference between the observed and the expected
frequencies. The chi-square test has three functions:

i. Test of goodness of fit. It is a test of difference between the observed and expected
frequencies

ii. Test of homogeneity. It is concerned with two or more samples with only one
criterion variable. This test is used to determine if two or more populations are homogenous.

iii. Test of independence. The sample used in this test consists of members randomly
drawn from the same population. This test is used when measures are taken to determine whether
two criterion variables are independent of or associated with one another in a given population.

b. Spearman’s Rank-Order Correlation Coefficient. This is the non-parametric version of the
Pearson product-moment correlation. It measures the strength and direction of the association between
two ranked variables.
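As a rough sketch of how these parametric and non-parametric tests can be computed in practice, the Python example below uses SciPy on entirely invented data; the group scores, the x values, and the contingency table are assumptions made only for illustration.

```python
# A minimal sketch (hypothetical data) of the inferential tools above using SciPy.
import numpy as np
from scipy import stats

group_a = np.array([78, 85, 90, 72, 88, 95, 81, 77])
group_b = np.array([70, 80, 75, 68, 84, 79, 73, 76])
group_c = np.array([65, 72, 70, 60, 74, 69, 71, 66])

# t-test: compare the means of two independent groups
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# F-test (one-way ANOVA): compare the means of three independent groups
f_stat, f_p = stats.f_oneway(group_a, group_b, group_c)

# Pearson r and simple linear regression between two interval-scale variables
x = np.array([1, 2, 3, 4, 5, 6, 7, 8])
r, r_p = stats.pearsonr(x, group_a)
slope, intercept, *_ = stats.linregress(x, group_a)

# Chi-square test of independence on a 2x2 contingency table
table = np.array([[30, 20],
                  [25, 35]])
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

# Spearman's rank-order correlation between two ranked variables
rho, rho_p = stats.spearmanr(x, group_b)

print(f"t = {t_stat:.2f} (p = {t_p:.3f}), F = {f_stat:.2f} (p = {f_p:.3f})")
print(f"Pearson r = {r:.2f}, regression y = {slope:.2f}x + {intercept:.2f}")
print(f"chi-square = {chi2:.2f} (p = {chi_p:.3f}), Spearman rho = {rho:.2f}")
```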

5.1 Characteristics of Experimental Research

Major Characteristics of Experimental Research

a. The researcher manipulates the independent variable.


b. The researcher decides the nature and the extent of the treatment.

c. After the treatment has been administered, researchers observe or measure the groups
receiving the treatments to see if they differ.

d. Experimental research enables researchers to go beyond description and prediction, and to
attempt to determine the cause of observed effects.

Essential Characteristics of Experimental Research

Comparison of Groups:

a. The experimental group receives a treatment of some sort while the control group receives
no treatment.

b. Enables the researcher to determine whether the treatment has had an effect or whether
one treatment is more effective than another.

Manipulation of the Independent Variable:

a. The researcher deliberately and directly determines what forms the independent variable will
take and which group will get which form.

5.2 Group Designs in Experimental Research


TYPES OF EXPERIMENTAL RESEARCH DESIGNS

1. True experimental design. A design is considered a true experiment when the following criteria are
present: the researcher manipulates the experimental variables, i.e., the researcher has control over the
independent variables as well as the treatment and the subjects; there must be at least one experimental
group and one comparison or control group; and the subjects are randomly assigned either to the
comparison or the experimental group. The control group is a group that does not receive the treatment.

a. Pretest-posttest control group design (a minimal simulation sketch appears after this list of designs)

1.a. Subjects are randomly assigned to groups.

2.a. A pretest is given to both groups.

3.a. The experimental group receives the treatment while the control group does not

4.a. A posttest is given to both groups.

b. Posttest-only control group design

1.b. Subjects are randomly assigned to groups.

2.b. The experimental group receives the treatment while the control group does not receive
the treatment.

3.b. A posttest is given to both groups.


c. Solomon four-group design. It is considered the most reliable and suitable experimental
design. It minimizes threats to both internal and external validity.

1.c. Subjects are randomly assigned to one of four groups.

2.c. Two of the groups (experimental group 1 and control group 1) are pretested.

3.c. The other two groups (experimental group 2 and control group 2) receive the routine
treatment or no treatment.

4.c. A posttest is given to all four groups.

2. Quasi-experimental design. A design in which either there is no control group or the subjects are not
randomly assigned to groups.

a. Non-equivalent control group design. This design is similar to the pretest-posttest control
group design except that there is no random assignment of subjects to the experimental and control
groups.

b. Time-series design. The researcher periodically observes or measures the subjects.

3. Pre-experimental design. This experimental design is considered very weak because the researcher has
little control over the research.

a. One-shot case study. A single group is exposed to an experimental treatment and observed
after the treatment.

b. One-group pretest-posttest design. It provides a comparative description of a group of
subjects before and after the experimental treatment.
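As referenced above, the following is a minimal simulation sketch of the pretest-posttest control group design in Python. The scores are invented and the treatment is assumed, for illustration only, to add about 8 points on average: subjects are randomly assigned to groups, both groups are pretested, only the experimental group receives the treatment, and both groups are posttested.

```python
# A minimal simulation sketch (invented scores) of the pretest-posttest
# control group design: random assignment, pretest, treatment, posttest.
import random
import statistics

random.seed(1)

subjects = list(range(20))            # 20 hypothetical subjects
random.shuffle(subjects)              # step 1: random assignment to groups
experimental, control = subjects[:10], subjects[10:]

# step 2: pretest for both groups (hypothetical baseline scores)
pretest = {s: random.gauss(50, 5) for s in subjects}

# step 3: only the experimental group receives the treatment
# (the treatment is assumed here to add about 8 points on average)
posttest = {}
for s in subjects:
    effect = 8 if s in experimental else 0
    posttest[s] = pretest[s] + effect + random.gauss(0, 3)

# step 4: posttest for both groups, then compare mean gains
gain_exp = statistics.mean(posttest[s] - pretest[s] for s in experimental)
gain_ctl = statistics.mean(posttest[s] - pretest[s] for s in control)
print(f"Mean gain, experimental group: {gain_exp:.2f}")
print(f"Mean gain, control group:      {gain_ctl:.2f}")
```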

5.3 Control of Treatments


Control groups are critical to the scientific method. Experimental research design depends on
the use of treatment and control groups to test a hypothesis. Without a control group, researchers
could report results specific to study participants who received a treatment, but they would have no
way of demonstrating that the treatment itself actually had any impact.

By including a control group to use as a point of comparison, researchers are better able to isolate the
effects of the treatment. Being able to report on the difference (or lack of difference) between the
control and experimental groups is very important to ensuring that conclusions drawn from the study
are valid.

Treatment and Control groups. In experimental research, some subjects are administered one or more
experimental stimuli, called a treatment (the treatment group), while other subjects are not given such
a stimulus (the control group). The treatment may be considered successful if subjects in the treatment
group rate more favourably on outcome variables than control group subjects. Multiple levels of
experimental stimulus may be administered, in which case, there may be more than one treatment
group. For example, in order to test the effects of a new drug intended to treat a certain medical
condition like dementia, if a sample of dementia patients is randomly divided into three groups, with the
first group receiving a high dosage of the drug, the second group receiving a low dosage, and the third
group receiving a placebo such as a sugar pill (control group), then the first two groups are experimental
groups and the third group is a control group. After administering the drug for a period of time, if the
condition of the experimental group subjects improved significantly more than the control group
subjects, we can say that the drug is effective. We can also compare the conditions of the high and low
dosage experimental groups to determine if the high dose is more effective than the low dose.
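The sketch below (Python, with purely invented improvement values) mirrors this multi-group setup: a hypothetical patient sample is randomly divided into high-dose, low-dose, and placebo groups, and the mean improvement of each group is then compared.

```python
# A minimal sketch (invented data) of the multi-group setup described above:
# a hypothetical patient sample is randomly divided into high-dose, low-dose,
# and placebo (control) groups, and mean improvement is compared per group.
import random
import statistics

random.seed(7)

patients = list(range(30))
random.shuffle(patients)
groups = {
    "high dose": patients[:10],
    "low dose":  patients[10:20],
    "placebo":   patients[20:],          # control group
}

# Assumed average improvements per condition, purely for illustration.
assumed_effect = {"high dose": 6.0, "low dose": 3.0, "placebo": 0.5}

for name, members in groups.items():
    improvements = [assumed_effect[name] + random.gauss(0, 1) for _ in members]
    print(f"{name:9s}: mean improvement = {statistics.mean(improvements):.2f}")
```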

Treatment manipulation. Treatments are the unique feature of experimental research that sets this
design apart from all other research methods. Treatment manipulation helps control for the “cause” in
cause-effect relationships. Naturally, the validity of experimental research depends on how well the
treatment was manipulated. Treatment manipulation must be checked using pretests and pilot tests
prior to the experimental study. Any measurements conducted before the treatment is administered are
called pretest measures, while those conducted after the treatment are called posttest measures.

Between-subjects experiments are often used to determine whether a treatment works. In
psychological research, a treatment is any intervention meant to change people’s behavior for the
better. This includes psychotherapies and medical treatments for psychological disorders but also
interventions designed to improve learning, promote conservation, reduce prejudice, and so on. To
determine whether a treatment works, participants are randomly assigned to either a treatment
condition, in which they receive the treatment, or a control condition, in which they do not receive the
treatment. If participants in the treatment condition end up better off than participants in the control
condition—for example, they are less depressed, learn faster, conserve more, express less prejudice—
then the researcher can conclude that the treatment works. In research on the effectiveness of
psychotherapies and medical treatments, this type of experiment is often called a randomized clinical
trial.

There are different types of control conditions:

In a no-treatment control condition, participants receive no treatment whatsoever. One problem with
this approach, however, is the existence of placebo effects. A placebo is a simulated treatment that lacks
any active ingredient or element that should make it effective, and a placebo effect is a positive effect of
such a treatment. Many folk remedies that seem to work—such as eating chicken soup for a cold or
placing soap under the bed sheets to stop nighttime leg cramps—are probably nothing more than
placebos. Although placebo effects are not well understood, they are probably driven primarily by
people’s expectations that they will improve.

A control group study can be managed in two different ways. In a single-blind study, the researcher will
know whether a particular subject is in the control group, but the subject will not know. In a double-
blind study, neither the subject nor the researcher will know which treatment the subject is receiving. In
many cases, a double-blind study is preferable to a single-blind study, since the researcher cannot
inadvertently affect the results or their interpretation by treating a control subject differently from an
experimental subject.

5.4 Analysis of the Study


Analysis of the study is a process of systematically applying statistical and/or logical techniques to
describe and illustrate, condense and recap, and evaluate data. According to Shamoo and Resnik (2003)
various analytic procedures “provide a way of drawing inductive inferences from data and distinguishing
the signal (the phenomenon of interest) from the noise (statistical fluctuations) present in the data”. In
short, it is a systematic way of analysing data.

While data analysis in qualitative research can include statistical procedures, many times analysis
becomes an on-going iterative process where data is continuously collected and analyzed almost
simultaneously. Indeed, researchers generally analyse for patterns in observations through the entire
data collection phase (Savenye and Robinson, 2004). The form of the analysis is determined by the specific
qualitative approach taken (field study, ethnography, content analysis, oral history, biography,
unobtrusive research) and the form of the data (field notes, documents, audiotape, videotape).

ANALYSIS OF DATA

Numbers or figures simply presented will not be easily comprehended and the significance will
not be determined without a correct analysis. Analysis is the process of breaking a whole into parts. The
researcher must be critical in looking at details to prove or disprove a certain theory or claim.

In analysing the data, the following must be considered:

1. The highest numerical value, such as scores, weighted means, percentages, variability, etc.

2. The lowest numerical value such as scores, weighted means, percentages, variability, etc.

3. The most common numerical values like mode or values that appear repeatedly.

4. The final numerical value, such as the average weighted mean, total, chi-square value, correlation
index, etc.
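As a small illustration of these points, the sketch below (Python, with invented 5-point survey responses) locates the highest, lowest, and most common values and computes a weighted mean.

```python
# A minimal sketch (invented survey responses): locating the highest, lowest,
# and most common values and computing a weighted mean, as listed above.
from collections import Counter

# Hypothetical 5-point Likert responses from 20 respondents
responses = [5, 4, 4, 3, 5, 2, 4, 3, 4, 5, 1, 4, 3, 4, 5, 2, 4, 3, 4, 4]

counts = Counter(responses)
weighted_mean = sum(value * freq for value, freq in counts.items()) / len(responses)

print("Highest value given:", max(responses))
print("Lowest value given: ", min(responses))
print("Most common value:  ", counts.most_common(1)[0][0])
print("Weighted mean:      ", round(weighted_mean, 2))
```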

INTERPRETATION OF DATA

The following are the levels of interpretation which are considered in organizing the discussion
of the results of findings (Ducut and Pangilinan, 2006):

1. Level 1. Data collected are compared and contrasted. Unexpected results, if any, may be
mentioned. The researcher is allowed to comment on certain shortcomings of the study but should not
concentrate too much on the flaws.

2. Level 2. The researcher should explain the internal validity of the results as well as their
consistency or reliability. The causes or factors that may have influenced the results may also be
described.

3. Level 3. The researcher should explain the external validity of the results, that is, their
generality or applicability to external conditions.
4. Level 4. The researcher should relate or connect the interpretation of data with theoretical
research or with the reviewed literature.

DISCUSSION OF DATA

The following must be considered in the discussion of data:

1. The flow of the discussion of results or findings is based on how the problems are stated.

2. The manner or sequence of discussion should include the following

a. Discussion of the findings in relation to the results of previous studies cited in the
review of related literature and studies.

b. Implications, inferences, and other important information

What is SPSS?

SPSS is short for Statistical Package for the Social Sciences, and it is used by various kinds of researchers
for complex statistical data analysis. SPSS is a software package mainly used by research scientists to
help them process critical data in simple steps. Working with data is a complex and time-consuming
process, but this software can easily handle and manipulate information with the help of various
techniques. These techniques are used to analyse, transform, and reveal characteristic patterns among
different data variables.

In addition, the output can be presented graphically so that a user can easily understand the results.
Officially dubbed IBM SPSS Statistics, the software is still referred to by most users simply as SPSS.
Widely used for social science data analysis, SPSS is valued for its straightforward, English-like
command language and thorough user manual.

SPSS is used by market researchers, health researchers, survey companies, government entities,
education researchers, marketing organizations, data miners, and many more for processing and
analysing survey data. While survey platforms such as Alchemer have powerful built-in reporting
features, researchers often consider SPSS a best-in-class solution for in-depth statistical analysis. Many
top research agencies use SPSS to analyse survey data and mine text data so that they can get the most
out of their research projects.

The Core Functions of SPSS

SPSS offers four programs that assist researchers with their complex data analysis needs.

Statistics Program

SPSS’s Statistics program provides a plethora of basic statistical functions, some of which include
frequencies, cross tabulation, and bivariate statistics.
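SPSS syntax itself is not reproduced here; as a rough analogue only, the Python/pandas sketch below (with invented survey data) carries out the three functions named above: frequencies, cross tabulation, and a bivariate correlation.

```python
# A rough pandas analogue (invented survey data) of the SPSS Statistics functions
# named above: frequencies, cross tabulation, and bivariate statistics.
import pandas as pd

survey = pd.DataFrame({
    "sex":         ["M", "F", "F", "M", "F", "M", "F", "M"],
    "course":      ["ME", "ME", "CE", "CE", "ME", "CE", "ME", "ME"],
    "study_hours": [2, 4, 3, 1, 5, 2, 4, 3],
    "exam_score":  [70, 85, 78, 65, 90, 72, 88, 80],
})

# Frequencies: counts per category of a single variable
print(survey["course"].value_counts())

# Cross tabulation: joint frequencies of two categorical variables
print(pd.crosstab(survey["sex"], survey["course"]))

# Bivariate statistics: Pearson correlation between two numeric variables
print(survey["study_hours"].corr(survey["exam_score"]))
```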
