
Simple Linear Regression

Chapter 14

Overview

 Simple Linear Regression Model
 Least Squares Method
 Coefficient of Determination
 Model Assumptions
 Testing for Significance
 Using the Estimated Regression Equation for Estimation and Prediction
 Residual Analysis
Simple Linear Regression

 Managerial decisions often are based on the relationship between two or more variables.
 Regression analysis can be used to develop an equation showing how the variables are related.
 The variable being predicted is called the dependent variable and is denoted by y.
 The variables being used to predict the value of the dependent variable are called the independent variables and are denoted by x.
Simple Linear Regression

 Simple linear regression involves one independent variable and one dependent variable.
 The relationship between the two variables is approximated by a straight line.
 Regression analysis involving two or more independent variables is called multiple regression.
Simple Linear Regression Model

 Regression Model, Regression Equation, and Estimated Regression Equation
Simple Linear Regression Model

 The equation that describes how y is related to x and an error term is called the regression model.
 The simple linear regression model is:

y = β0 + β1x + ε

where:
β0 and β1 are called parameters of the model,
ε is a random variable called the error term.
Simple Linear Regression Equation

 The simple linear regression equation is:

E(y) = β0 + β1x

• The graph of the regression equation is a straight line.
• β0 is the y-intercept of the regression line.
• β1 is the slope of the regression line.
• E(y) is the expected value of y for a given x value.
Simple Linear Regression Equation

 Positive Linear Relationship

[Graph: E(y) versus x; the regression line rises from the intercept β0, with a positive slope β1.]
Simple Linear Regression Equation

 Negative Linear Relationship

[Graph: E(y) versus x; the regression line falls from the intercept β0, with a negative slope β1.]
Simple Linear Regression Equation

 No Relationship

[Graph: E(y) versus x; the regression line is horizontal at the intercept β0, with slope β1 equal to 0.]
Estimated Simple Linear Regression Equation

 The estimated simple linear regression equation is:

ŷ = b0 + b1x

• The graph is called the estimated regression line.
• b0 is the y-intercept of the line.
• b1 is the slope of the line.
• ŷ is the estimated value of y for a given x value.
Estimation Process

Regression model: y = β0 + β1x + ε
Regression equation: E(y) = β0 + β1x
Unknown parameters: β0, β1

Sample data: (x1, y1), (x2, y2), …, (xn, yn)

Estimated regression equation: ŷ = b0 + b1x
Sample statistics: b0, b1

The sample statistics b0 and b1 provide estimates of β0 and β1.
Least Squares Method

Least Squares Method

 Least Squares Criterion

min Σ(yi − ŷi)²

where:
yi = observed value of the dependent variable for the ith observation
ŷi = estimated value of the dependent variable for the ith observation
Least Squares Method

 Slope for the Estimated Regression Equation

b1 = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)²

where:
xi = value of the independent variable for the ith observation
yi = value of the dependent variable for the ith observation
x̄ = mean value of the independent variable
ȳ = mean value of the dependent variable
Least Squares Method

 y-Intercept for the Estimated Regression Equation

b0 = ȳ − b1x̄
Simple Linear Regression

 Example: Reed Auto Sales

Reed Auto periodically has a special week-long sale. As part of the advertising campaign Reed runs one or more television commercials during the weekend preceding the sale. Data from a sample of 5 previous sales are shown on the next slide.
Simple Linear Regression

 Example: Reed Auto Sales

Number of TV Ads (x)   Number of Cars Sold (y)
         1                      14
         3                      24
         2                      18
         1                      17
         3                      27
      Σx = 10                Σy = 100
      x̄ = 2                  ȳ = 20
Estimated Regression Equation

 Slope for the Estimated Regression Equation

b1 = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)² = 20/4 = 5

 y-Intercept for the Estimated Regression Equation

b0 = ȳ − b1x̄ = 20 − 5(2) = 10

 Estimated Regression Equation

ŷ = 10 + 5x
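As a quick check on this arithmetic, the least squares formulas can be sketched in a few lines of standard-library Python (the variable names are illustrative):

```python
# Least squares estimates for the Reed Auto data.
x = [1, 3, 2, 1, 3]       # number of TV ads
y = [14, 24, 18, 17, 27]  # number of cars sold

n = len(x)
x_bar = sum(x) / n        # 2.0
y_bar = sum(y) / n        # 20.0

# b1 = sum (xi - x_bar)(yi - y_bar) / sum (xi - x_bar)^2
sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))  # 20
sxx = sum((xi - x_bar) ** 2 for xi in x)                        # 4
b1 = sxy / sxx            # 20 / 4 = 5.0
b0 = y_bar - b1 * x_bar   # 20 - 5(2) = 10.0

print(f"yhat = {b0} + {b1}x")  # yhat = 10.0 + 5.0x
```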
Coefficient of Determination

Coefficient of Determination

 Relationship Among SST, SSR, SSE

SST = SSR + SSE

Σ(yi − ȳ)² = Σ(ŷi − ȳ)² + Σ(yi − ŷi)²

where:
SST = total sum of squares
SSR = sum of squares due to regression
SSE = sum of squares due to error
Coefficient of Determination

 The coefficient of determination is:

r² = SSR/SST

where:
SSR = sum of squares due to regression
SST = total sum of squares
Coefficient of Determination

r² = SSR/SST = 100/114 = .8772

The regression relationship is very strong; 87.72% of the variability in the number of cars sold can be explained by the linear relationship between the number of TV ads and the number of cars sold.
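The three sums of squares, and r², can be verified directly from the fitted equation ŷ = 10 + 5x:

```python
# Sums of squares and r^2 for the Reed Auto fit (yhat = 10 + 5x).
x = [1, 3, 2, 1, 3]
y = [14, 24, 18, 17, 27]
y_bar = sum(y) / len(y)
y_hat = [10 + 5 * xi for xi in x]

sst = sum((yi - y_bar) ** 2 for yi in y)               # total: 114
ssr = sum((yh - y_bar) ** 2 for yh in y_hat)           # regression: 100
sse = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))  # error: 14

r2 = ssr / sst  # note SST = SSR + SSE holds: 114 = 100 + 14
print(round(r2, 4))  # 0.8772
```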
Sample Correlation Coefficient

rxy = (sign of b1)√(Coefficient of Determination)

rxy = (sign of b1)√r²

where:
b1 = the slope of the estimated regression equation ŷ = b0 + b1x
Sample Correlation Coefficient

rxy = (sign of b1)√r²

The sign of b1 in the equation ŷ = 10 + 5x is “+”.

rxy = +√.8772

rxy = +.9366
Model Assumptions

Assumptions About the Error Term ε

1. The error ε is a random variable with mean of zero.

2. The variance of ε, denoted by σ², is the same for all values of the independent variable x.

3. The values of ε are independent.

4. The error ε is a normally distributed random variable.
Testing for Significance

Testing for Significance

 To test for a significant regression relationship, we must conduct a hypothesis test to determine whether the value of β1 is zero.

 Two tests are commonly used: the t test and the F test.

 Both the t test and the F test require an estimate of σ², the variance of ε in the regression model.
Testing for Significance

 An Estimate of σ²

The mean square error (MSE) provides the estimate of σ², and the notation s² is also used.

s² = MSE = SSE/(n − 2)

where:

SSE = Σ(yi − ŷi)² = Σ(yi − b0 − b1xi)²
Testing for Significance

 An Estimate of σ
• To estimate σ we take the square root of s².
• The resulting s is called the standard error of the estimate.

s = √MSE = √(SSE/(n − 2))
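For the Reed Auto data these estimates work out as follows (stdlib Python sketch):

```python
import math

# Standard error of the estimate for the Reed Auto data (yhat = 10 + 5x).
x = [1, 3, 2, 1, 3]
y = [14, 24, 18, 17, 27]
n = len(x)
residuals = [yi - (10 + 5 * xi) for xi, yi in zip(x, y)]

sse = sum(e ** 2 for e in residuals)  # 14
mse = sse / (n - 2)                   # 14 / 3, the estimate of sigma^2
s = math.sqrt(mse)                    # standard error of the estimate

print(round(mse, 3), round(s, 5))  # 4.667 2.16025
```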
Testing for Significance: t Test

 Hypotheses

H0: β1 = 0
Ha: β1 ≠ 0

 Test Statistic

t = b1/sb1   where   sb1 = s/√Σ(xi − x̄)²
Testing for Significance: t Test

 Rejection Rule

Reject H0 if p-value < α
or t < −tα/2 or t > tα/2

where:
tα/2 is based on a t distribution with n − 2 degrees of freedom
Testing for Significance: t Test

1. Determine the hypotheses.

H0: β1 = 0
Ha: β1 ≠ 0

2. Specify the level of significance.

α = .05

3. Select the test statistic.

t = b1/sb1

4. State the rejection rule.

Reject H0 if p-value < .05 or |t| > 3.182 (with 3 degrees of freedom)
Testing for Significance: t Test

5. Compute the value of the test statistic.

t = b1/sb1 = 5/1.08 = 4.63

6. Determine whether to reject H0.

t = 4.541 provides an area of .01 in the upper tail. Hence, the p-value is less than .02. (Also, t = 4.63 > 3.182.) We can reject H0.
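The test statistic above can be reproduced directly from the data:

```python
import math

# t statistic for H0: beta1 = 0 on the Reed Auto data.
x = [1, 3, 2, 1, 3]
y = [14, 24, 18, 17, 27]
n = len(x)
x_bar = sum(x) / n
b1 = 5  # slope estimated earlier

residuals = [yi - (10 + 5 * xi) for xi, yi in zip(x, y)]
s = math.sqrt(sum(e ** 2 for e in residuals) / (n - 2))  # ~2.16025

# sb1 = s / sqrt(sum (xi - x_bar)^2)
sb1 = s / math.sqrt(sum((xi - x_bar) ** 2 for xi in x))
t = b1 / sb1

print(round(sb1, 2), round(t, 2))  # 1.08 4.63
```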
Confidence Interval for β1

 We can use a 95% confidence interval for β1 to test the hypotheses just used in the t test.
 H0 is rejected if the hypothesized value of β1 is not included in the confidence interval for β1.
Confidence Interval for β1

 The form of a confidence interval for β1 is:

b1 ± tα/2 sb1

where b1 is the point estimator and tα/2 sb1 is the margin of error; tα/2 is the t value providing an area of α/2 in the upper tail of a t distribution with n − 2 degrees of freedom.
Confidence Interval for β1

 Rejection Rule
Reject H0 if 0 is not included in the confidence interval for β1.

 95% Confidence Interval for β1
b1 ± tα/2 sb1 = 5 ± 3.182(1.08) = 5 ± 3.44, or 1.56 to 8.44

 Conclusion
0 is not included in the confidence interval. Reject H0.
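The interval arithmetic, using the slide's rounded values for sb1 and the t critical value:

```python
# 95% confidence interval for beta1 on the Reed Auto data.
b1 = 5
sb1 = 1.08      # estimated standard deviation of b1 (rounded, from the slide)
t_crit = 3.182  # t value for alpha/2 = .025 with n - 2 = 3 d.f.

margin = t_crit * sb1           # margin of error
lower = b1 - margin
upper = b1 + margin

# 0 is outside (1.56, 8.44), so H0: beta1 = 0 is rejected.
print(round(lower, 2), round(upper, 2))  # 1.56 8.44
```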
Testing for Significance: F Test

 Hypotheses

H0: β1 = 0
Ha: β1 ≠ 0

 Test Statistic

F = MSR/MSE
Testing for Significance: F Test

 Rejection Rule

Reject H0 if
p-value < α
or F > Fα

where:
Fα is based on an F distribution with 1 degree of freedom in the numerator and n − 2 degrees of freedom in the denominator
Testing for Significance: F Test

1. Determine the hypotheses.

H0: β1 = 0
Ha: β1 ≠ 0

2. Specify the level of significance.

α = .05

3. Select the test statistic.

F = MSR/MSE

4. State the rejection rule.

Reject H0 if p-value < .05 or F > 10.13 (with 1 d.f. in the numerator and 3 d.f. in the denominator)
Testing for Significance: F Test

5. Compute the value of the test statistic.

F = MSR/MSE = 100/4.667 = 21.43

6. Determine whether to reject H0.

F = 17.44 provides an area of .025 in the upper tail. Thus, the p-value corresponding to F = 21.43 is less than .025. Hence, we reject H0.

The statistical evidence is sufficient to conclude that we have a significant relationship between the number of TV ads aired and the number of cars sold.
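The F statistic follows from the sums of squares computed earlier:

```python
# F statistic for the Reed Auto regression: F = MSR/MSE.
x = [1, 3, 2, 1, 3]
y = [14, 24, 18, 17, 27]
n = len(x)
y_bar = sum(y) / n
y_hat = [10 + 5 * xi for xi in x]

ssr = sum((yh - y_bar) ** 2 for yh in y_hat)           # 100
sse = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))  # 14

msr = ssr / 1        # 1 d.f. in the numerator (one independent variable)
mse = sse / (n - 2)  # 3 d.f. in the denominator
f = msr / mse

print(round(f, 2))  # 21.43
```

For simple linear regression F = t² up to rounding (21.43 ≈ 4.63²), so the two tests lead to the same conclusion.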
Some Cautions about the Interpretation of Significance Tests

 Rejecting H0: β1 = 0 and concluding that the relationship between x and y is significant does not enable us to conclude that a cause-and-effect relationship is present between x and y.
 Just because we are able to reject H0: β1 = 0 and demonstrate statistical significance does not enable us to conclude that there is a linear relationship between x and y.
Using the Estimated Regression Equation for Estimation and Prediction

 Point Estimators, Confidence Interval, and Prediction Interval
Using the Estimated Regression Equation for Estimation and Prediction

 A confidence interval is an interval estimate of the mean value of y for a given value of x.
 A prediction interval is used whenever we want to predict an individual value of y for a new observation corresponding to a given value of x.
 The margin of error is larger for a prediction interval.
Using the Estimated Regression Equation for Estimation and Prediction

 Confidence Interval Estimate of E(y*)

ŷ* ± tα/2 sŷ*

 Prediction Interval Estimate of y*

ŷ* ± tα/2 spred

where:
the confidence coefficient is 1 − α and tα/2 is based on a t distribution with n − 2 degrees of freedom
Point Estimation

If 3 TV ads are run prior to a sale, we expect the mean number of cars sold to be:

ŷ = 10 + 5(3) = 25 cars
Confidence Interval for E(y*)

 Estimate of the Standard Deviation of ŷ*

sŷ* = s √(1/n + (x* − x̄)²/Σ(xi − x̄)²)

sŷ* = 2.16025 √(1/5 + (3 − 2)²/((1−2)² + (3−2)² + (2−2)² + (1−2)² + (3−2)²))

sŷ* = 2.16025 √(1/5 + 1/4) = 1.4491
Confidence Interval for E(y*)

The 95% confidence interval estimate of the mean number of cars sold when 3 TV ads are run is:

ŷ* ± tα/2 sŷ*

25 ± 3.1824(1.4491)
25 ± 4.61

20.39 to 29.61 cars
Prediction Interval for y*

 Estimate of the Standard Deviation of an Individual Value of y*

spred = s √(1 + 1/n + (x* − x̄)²/Σ(xi − x̄)²)

spred = 2.16025 √(1 + 1/5 + 1/4)

spred = 2.16025(1.20416) = 2.6013
Prediction Interval for y*

The 95% prediction interval estimate of the number of cars sold in one particular week when 3 TV ads are run is:

ŷ* ± tα/2 spred

25 ± 3.1824(2.6013)
25 ± 8.28

16.72 to 33.28 cars
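Both intervals at x* = 3 can be reproduced in one pass; the only difference between them is the extra 1 under the radical for the prediction interval (the t critical value 3.1824 is taken from the slides):

```python
import math

# 95% confidence and prediction intervals at x* = 3 TV ads.
x = [1, 3, 2, 1, 3]
y = [14, 24, 18, 17, 27]
n = len(x)
x_bar = sum(x) / n
x_star = 3
y_star = 10 + 5 * x_star  # point estimate: 25 cars

s = math.sqrt(sum((yi - (10 + 5 * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2))
sxx = sum((xi - x_bar) ** 2 for xi in x)
lev = 1 / n + (x_star - x_bar) ** 2 / sxx  # 1/5 + 1/4 = 0.45

s_yhat = s * math.sqrt(lev)      # ~1.4491 (mean response)
s_pred = s * math.sqrt(1 + lev)  # ~2.6013 (individual value)
t_crit = 3.1824                  # t_{.025} with 3 d.f.

print(round(t_crit * s_yhat, 2))  # 4.61 -> CI: 20.39 to 29.61 cars
print(round(t_crit * s_pred, 2))  # 8.28 -> PI: 16.72 to 33.28 cars
```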
Residual Analysis

 Residual Plot, Standardized Residuals, and Detecting Outliers
Residual Analysis

 If the assumptions about the error term ε appear questionable, the hypothesis tests about the significance of the regression relationship and the interval estimation results may not be valid.
 The residuals provide the best information about ε.
 Residual for Observation i

yi − ŷi

 Much of residual analysis is based on an examination of graphical plots.
Residual Plot Against x

 If the assumption that the variance of ε is the same for all values of x is valid, and the assumed regression model is an adequate representation of the relationship between the variables, then the residual plot should give an overall impression of a horizontal band of points.
Residual Plot Against x

[Plot of residuals y − ŷ against x: a good pattern — a horizontal band of points.]
Residual Plot Against x

[Plot of residuals y − ŷ against x: a nonconstant-variance pattern.]
Residual Plot Against x

[Plot of residuals y − ŷ against x: a pattern indicating the model form is not adequate.]
Residual Plot Against x

 Residuals

Observation   Predicted Cars Sold   Residual
     1                15               -1
     2                25               -1
     3                20               -2
     4                15                2
     5                25                2
Residual Plot Against x

[TV Ads residual plot: residuals (between −2 and +2) plotted against the number of TV ads (0 to 4), forming a horizontal band.]
Standardized Residuals

 Standardized Residual for Observation i

(yi − ŷi) / s(yi − ŷi)

where:
s(yi − ŷi) = s √(1 − hi)

hi = 1/n + (xi − x̄)²/Σ(xi − x̄)²
Standardized Residual Plot

 The standardized residual plot can provide insight about the assumption that the error term ε has a normal distribution.
 If this assumption is satisfied, the distribution of the standardized residuals should appear to come from a standard normal probability distribution.
Standardized Residual Plot

 Standardized Residuals

Observation   Predicted y   Residual   Standardized Residual
     1             15          -1            -0.5345
     2             25          -1            -0.5345
     3             20          -2            -1.0690
     4             15           2             1.0690
     5             25           2             1.0690
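The values in this table can be reproduced as Excel-style "standard residuals", which divide each residual by √(SSE/(n − 1)); note this is a simpler scaling than the leverage-adjusted s√(1 − hi) formula shown two slides earlier, which would give slightly different numbers:

```python
import math

# Residuals and Excel-style standard residuals for the Reed Auto data.
x = [1, 3, 2, 1, 3]
y = [14, 24, 18, 17, 27]
n = len(x)
residuals = [yi - (10 + 5 * xi) for xi, yi in zip(x, y)]  # [-1, -1, -2, 2, 2]

# Excel scales by sqrt(SSE / (n - 1)) = sqrt(14/4).
scale = math.sqrt(sum(e ** 2 for e in residuals) / (n - 1))
std_residuals = [round(e / scale, 4) for e in residuals]

print(std_residuals)  # [-0.5345, -0.5345, -1.069, 1.069, 1.069]
```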
Standardized Residual Plot

 Standardized Residual Plot

[Standardized residual plot against cars sold, with the Excel RESIDUAL OUTPUT table from the previous slide; all five standardized residuals lie between −1.5 and +1.5.]
Standardized Residual Plot

 All of the standardized residuals are between −1.5 and +1.5, indicating that there is no reason to question the assumption that ε has a normal distribution.
Outliers and Influential Observations

 Detecting Outliers
• An outlier is an observation that is unusual in comparison with the other data.
• Minitab classifies an observation as an outlier if its standardized residual value is < −2 or > +2.
• This standardized residual rule sometimes fails to identify an unusually large observation as being an outlier.
• This rule's shortcoming can be circumvented by using studentized deleted residuals.
• The |ith studentized deleted residual| will be larger than the |ith standardized residual|.
