Linear Regression
Chapter 1
Simple Regression
Introduction
What is regression?
Regression analysis is a statistical process for studying the
relationship between a set of independent variables (explanatory
variables) and a dependent variable (response variable). Through this
technique, it is possible to understand how the value of the response
variable changes when the explanatory variables are varied.
A regression analysis can have two objectives:
Explanatory analysis: to understand and weigh the effects of the
independent variables on the dependent variable according to a particular
theoretical model.
Predictive analysis: to find a linear combination of the independent
variables that optimally predicts the value assumed by the dependent variable.
Type of data
Economic data sets come in a variety of types. Whereas some
econometric methods can be applied with little or no
modification to many different kinds of data sets, the special
features of some data sets must be accounted for or should be
exploited. We next describe the most important data structures
encountered in applied work.
Cross-sectional data
Example: a cross-sectional data set on wages (the column labels are reconstructed; they appear to be observation number, hourly wage, years of education, years of experience, and two yes/no indicators such as female and married):

obsno   wage    educ   exper   female   married
1       3.10    11     2       Yes      No
2       3.24    12     22      Yes      Yes
…       …       …      …       …        …
525     11.56   16     5       No       No
526     3.50    14     5       Yes      No
Type of data
Time series data
Observations on economic variables over time
stock prices, money supply, CPI, GDP, annual homicide rates, automobile
sales
frequencies: daily, weekly, monthly, quarterly, annually
Unlike cross-sectional data, ordering is important here!
typically, observations cannot be considered independent across time →
they require more complex econometric techniques
Linear correlation (Pearson correlation)
The covariance
For a sample:
cov(x, y) = (1/n) Σᵢ xi yi − x̄ȳ
Estimate for the population:
ĉov(x, y) = (1/(n − 1)) Σᵢ (xi − x̄)(yi − ȳ) = (1/(n − 1)) Σᵢ xi yi − (n/(n − 1)) x̄ȳ
Linear correlation (Pearson correlation)
For a sample:  r_xy = s_xy / √(s_x² s_y²)
Estimate for the population:  ρ̂_xy = r_xy = s_xy / √(s_x² s_y²)
Linear correlation (Pearson correlation)
[Figure: scatterplots of X2 against X1 illustrating different strengths of correlation]
Linear correlation (Pearson correlation)
Conditions
Linearity
The relation between X and Y must be linear.
[Figure: a linear relationship vs a non-linear relationship]
Linear correlation (Pearson correlation)
Conditions
Normality
The probability distribution of the pair (X, Y) is a two-dimensional
(bivariate) normal distribution: in particular, for each value of X,
the values of Y are normally distributed, and vice versa.
[Figure: bivariate normal distributions with r = 0 and r = 0.8]
Linear correlation (Pearson correlation)
[Figure: scatterplots of fork length (FKLNGTH) vs AGE, and of log fork length (LFKL) vs log age (LAGE)]
Test of ρ = 0
Under H0:  t_obs = r √(n − 2) / √(1 − r²) ~ t_{n−2}
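A minimal R sketch on simulated data (all names are illustrative), computing this statistic by hand and checking it against cor.test():

set.seed(1)
x <- rnorm(30)
y <- 0.5 * x + rnorm(30)                    # simulated data with a linear relation

r <- cor(x, y)
n <- length(x)
t_obs <- r * sqrt(n - 2) / sqrt(1 - r^2)    # t statistic for H0: rho = 0
p_val <- 2 * pt(-abs(t_obs), df = n - 2)    # two-sided p-value on n - 2 df

c(t = t_obs, p = p_val)
cor.test(x, y)                              # reports the same t and p-value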
We assume:  y = f(x) = β0 + β1 x
X = independent variable (controlled)
Y = dependent variable (random)
Error: ℰi ~ N(0, σ)
[Figure: scatter of Y against X around the regression line]
Simple linear regression
Estimate E[y|x] (called the population regression function).
In other words, we need to find a “good” mathematical expression
for f in E[y|x] = f(x): The simplest model is E[y|x] = β0 + β1 x
Simple linear regression
2. Parameter estimation
• Intercept: β0 = E[y|x = 0]
• Slope: β1 = the change in E[y|x] for a one-unit increase in x
Simple linear regression
2. Parameter estimation
[Figure: data point Mi = (xi, yi), fitted point M′i = (xi, ŷi), and the error ℰi between them]
Yi = β0 + β1 Xi + ℰi,  with  ℰi = yi − (β0 + β1 xi)
Minimum of squared error:
min Σ (yi − ŷi)²
Simple linear regression
2. Parameter estimation
Least squares method: choose b0 and b1 to minimize Σ (yi − ŷi)². The solution is
b1 = Σ (xi − x̄)(yi − ȳ) / Σ (xi − x̄)² = cov(x, y)/s_x²
b0 = ȳ − b1 x̄
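A minimal R sketch on simulated data, computing b0 and b1 from these formulas and checking them against lm():

set.seed(2)
x <- runif(50, 0, 10)
y <- 1 + 2 * x + rnorm(50)                   # simulated data: beta0 = 1, beta1 = 2

b1 <- sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))^2)
b0 <- mean(y) - b1 * mean(x)

c(b0 = b0, b1 = b1)
coef(lm(y ~ x))                              # least-squares fit: should match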
Simple linear regression
3. Properties of least-squares estimators:
We assume that: Yi = β0 + β1 xi + ℰi
Normality of error
Residuals
Predicted values
Simple linear regression
[Figures: predicted values, a possible transformation, error structure, residuals]
4. Coefficient of determination
Decomposition of variation
SCE_T = Σᵢ (yi − ȳ)² = n s_y²
Simple linear regression
4. Coefficient of determination
Decomposition of variation
Σᵢ (yi − ȳ)² = Σᵢ (ŷi − ȳ)² + Σᵢ (yi − ŷi)²
(total variation = variation explained by the regression + residual variation)
Simple linear regression
4. Coefficient of determination: relation between r and r²
SCE_reg.lin. = Σᵢ (ŷi − ȳ)² = Σᵢ ((a + b xi) − (a + b x̄))² = b² Σᵢ (xi − x̄)² = b² n s_x² = b² SCE_x
so  r² = b² n s_x² / (n s_y²) = (cov(x, y)/s_x²)² (s_x²/s_y²) = (cov(x, y))² / (s_x² s_y²) = (r_xy)²
If r = 0 ⟺ r² = 0
Simple linear regression
5. Analysis of variance table (ANOVA table)
H0: b = 0:   b̂ / s_b̂ ~ T_{n−2},  where  s_b̂² = (1 − r²) s_y² / ((n − 2) s_x²)
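A small R check on simulated data that b̂/s_b̂ from this formula matches the slope t value from summary(lm()); the ratio s_y²/s_x² is the same under either variance-denominator convention, so var() can be used directly:

set.seed(3)
x <- runif(40)
y <- 2 + 3 * x + rnorm(40)
m <- lm(y ~ x)
r <- cor(x, y)
n <- length(x)

s_b <- sqrt((1 - r^2) * var(y) / ((n - 2) * var(x)))   # s_bhat from the slide formula
t_b <- coef(m)["x"] / s_b

c(manual = unname(t_b),
  from_lm = summary(m)$coefficients["x", "t value"])   # should agree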
Simple linear regression
Question
Explanation of Y by X:
Is there a relation? → correlation
What relation? → regression (simple linear model)
Model
Y = β0 + β1 x + ℰi
(X, Y) binormal ⇒ linearity of the regressions: for X = xi,  Yi ~ N(β0 + β1 xi, σ)
Chapter 2
Multiple Regression
Multiple linear regression
y = β0 + β1 x1 + β2 x2 + … + βp xp + ℰ
where:
β0, β1, β2, …, βp are the parameters, and ℰ is a
random variable called the error term
Multiple linear regression
Multiple regression model:  y = β0 + β1 x1 + β2 x2 + … + βp xp + ℰ
Multiple regression equation:  E(y) = β0 + β1 x1 + β2 x2 + … + βp xp
The unknown parameters β0, β1, β2, …, βp are estimated from sample data
(x1, x2, …, xp, y) by the sample statistics b0, b1, b2, …, bp, which give
the estimated multiple regression equation
ŷ = b0 + b1 x1 + b2 x2 + … + bp xp
Least Squares Method
min Σ (yi − ŷi)²
Example model:  y = β0 + β1 x1 + β2 x2 + ℰ
where
y = annual salary ($1000)
x1 = years of experience
x2 = score on programmer aptitude test
Solving for the Estimates of β0, β1, β2
Least squares: the input data (x1, x2, y) are fed to statistical software
(e.g., SPSS), which outputs b0, b1, b2, R², etc.

x1   x2    y
4    78    24
7    100   43
…    …     …
3    89    30

Results: b1 = 1.404, b2 = 0.251
Decomposition of the sum of squares:
Σᵢ (yi − ȳ)² = Σᵢ (ŷi − ȳ)² + Σᵢ (yi − ŷi)²
SST = SSR + SSE
where:
SST = total sum of squares
SSR = sum of squares due to regression
SSE = sum of squares due to error
Multiple Coefficient of Determination
ANOVA output:

             df   SS          MS         F          Significance F
Regression    2   500.3285    250.1643   42.76013   2.32774E-07
Residual     17    99.45697     5.85041
Total        19   599.7855

SSR = 500.3285, SST = 599.7855
R² = SSR/SST = 1 − SSE/SST
R² = 500.3285/599.7855 = .83418
Adjusted multiple coefficient of determination:
Ra² = 1 − (1 − R²)(n − 1)/(n − p − 1)
Ra² = 1 − (1 − .834179)(20 − 1)/(20 − 2 − 1) = .814671
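A quick R check of this arithmetic, using the sums of squares from the ANOVA table above:

SSR <- 500.3285
SSE <- 99.45697
SST <- SSR + SSE                                   # 599.7855
R2  <- SSR / SST                                   # 0.83418
n <- 20; p <- 2
R2_adj <- 1 - (1 - R2) * (n - 1) / (n - p - 1)     # 0.814671
c(R2 = R2, R2_adj = R2_adj)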
Assumptions About the Error Term ℰ
F test – hypotheses:
H0: β1 = β2 = … = βp = 0
Ha: one or more of the parameters is not equal to zero.
t test – hypotheses:
H0: βi = 0
Ha: βi ≠ 0
Example with a qualitative variable:
ŷ = annual salary ($1000)
x1 = years of experience
x2 = score on programmer aptitude test
x3 = 1 if the individual has a graduate degree, 0 if not
x3 is a dummy variable
Qualitative Independent Variables
Not significant
More Complex Qualitative Variables
A qualitative variable with three levels requires two dummy variables:

Highest Degree   x1   x2
Bachelor's        0    0
Master's          1    0
Ph.D.             0    1
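In R, a factor column generates such dummies automatically; a minimal sketch (hypothetical degree variable) showing the coding lm() would use:

degree <- factor(c("Bachelor's", "Master's", "Ph.D.", "Master's"),
                 levels = c("Bachelor's", "Master's", "Ph.D."))
model.matrix(~ degree)   # one dummy column per non-reference level;
                         # Bachelor's is the reference (all dummies = 0)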
Residual Analysis
[Figure: standardized residuals plotted against predicted salary, falling in a horizontal band between −2 and 2]
Chapter 3
Regression Diagnostics
L8: Heteroscedasticity
Feng Li
feng.li@cufe.edu.cn
Transform the model Yi = β1 + β2 Xi + ui by dividing through by σi:
Yi/σi = β1 (1/σi) + β2 (Xi/σi) + ui/σi,
which can be rewritten as
Yi* = β1 X0i* + β2 X1i* + ui*,
and ui* = ui/σi is the new error term for the new model.
§ Var(ui/σi) = 1 is now a constant. Why?
§ We call β̂1*, β̂2* the GLS estimators:
Ŷi* = β̂1* X0i* + β̂2* X1i*,
where wi = 1/σi².
When wi = w = 1/σ², the GLS estimator reduces to the OLS estimator. Verify this!
β̂1* = Ȳ* − β̂2* X̄*, where Ȳ* = (Σ wi Yi)/(Σ wi) and X̄* = (Σ wi Xi)/(Σ wi).
In matrix form: Y = Xβ + ε, E[ε|X] = 0, Var[ε|X] = Ω.
β̂_GLS = (X′ Ω̂_OLS⁻¹ X)⁻¹ X′ Ω̂_OLS⁻¹ y
The procedure can be iterated, and this estimation of Ω̂ can be iterated to convergence:
û_GLS = Y − X β̂_GLS
Ω̂_GLS = diag(σ̂²_GLS,1, σ̂²_GLS,2, …, σ̂²_GLS,n)
β̂_GLS = (X′ Ω̂_GLS⁻¹ X)⁻¹ X′ Ω̂_GLS⁻¹ y
Question: How do you calculate the degrees of freedom of the model? [Hint:
think about the hat matrix]
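A minimal R sketch of this feasible GLS recipe on simulated data; modeling the error variance by regressing log ûi² on x to build Ω̂ is an illustrative assumption, not the only choice:

set.seed(4)
n <- 200
x <- runif(n, 1, 5)
y <- 1 + 2 * x + rnorm(n, sd = 0.5 * x)       # error sd grows with x

m <- lm(y ~ x)                                # step 0: OLS
for (it in 1:5) {
  aux  <- lm(log(resid(m)^2) ~ x)             # model the variance (assumption)
  sig2 <- exp(fitted(aux))                    # sigma_i^2 estimates: Omega-hat = diag(sig2)
  m    <- lm(y ~ x, weights = 1 / sig2)       # WLS = GLS with this diagonal Omega-hat
}
coef(m)                                       # iterated FGLS estimates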
Suppose there is heteroscedasticity but we insist on using OLS. What will go
wrong? Whatever conclusions we draw may be misleading:
We cannot establish confidence intervals and test hypotheses with the usual t and
F tests.
The usual tests are likely to give a larger variance than the true variance.
The variance estimator of β̂ by OLS is a biased estimator of the true
variance.
The usual estimator of σ², namely Σ ûi²/(n − 2), is biased.
[Figure: normal Q–Q plot of the residuals (Sample Quantiles vs Theoretical Quantiles)]
White's test
H0: no heteroscedasticity.
Consider Yi = β1 + β2 X2i + β3 X3i + ui (other models are treated the same way).
Step 1: Run OLS and obtain the residuals ûi.
Step 2: Regress ûi² on the covariates, their squares, and their cross-products,
and obtain R².
Step 3: nR² ~ χ²(k − 1), where k is the number of unknown parameters in step 2.
Step 4: If χ²_obs(k − 1) > χ²_crit(k − 1), reject H0.
Question: How do you carry out White's test with the model
Yi = β1 + β2 X2i + β3 X3i + β4 X4i + ui ?
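A minimal R sketch on simulated data; one common way to run White's test is lmtest::bptest() with the squares and cross-products supplied explicitly:

library(lmtest)

set.seed(5)
n  <- 100
x2 <- runif(n)
x3 <- runif(n)
y  <- 1 + x2 + x3 + rnorm(n, sd = 1 + 2 * x2)      # heteroscedastic errors
m  <- lm(y ~ x2 + x3)

# auxiliary regressors: covariates, their squares, and their cross-product
bptest(m, ~ x2 + x3 + I(x2^2) + I(x3^2) + I(x2 * x3))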
Example: a regression of Y on X2 and X3, where Y = ratio of trade taxes
(import and export taxes) to total government revenue, X2 = ratio of the
sum of exports plus imports to GNP, and X3 = GNP per capita.
Applying White's heteroscedasticity test, we first obtain the residuals
from the regression. Then we run the auxiliary regression
ûi² = −5.8 + 2.5 ln X2i + 0.69 ln X3i − 0.4 (ln X2i)² − 0.04 (ln X3i)² + 0.002 ln X2i ln X3i
with R² = 0.1148.
Can you compute White's test statistic?
What is your conclusion about heteroscedasticity? (The 5% critical value of
χ² with df = 5 is 11.07, and the 10% critical value is 9.24.)
The Goldfeld–Quandt test statistic:
λ = (RSS₂/[(n − c)/2 − k]) / (RSS₁/[(n − c)/2 − k]) ~ F((n − c)/2 − k, (n − c)/2 − k),
where c is the number of omitted central observations and k is the number of
estimated parameters.
In R: gqtest().
Default: assume that the data are already ordered; split the sample in the
middle without omitting any central observations.

R> gqtest(jour_lm)
data: jour_lm
GQ = 1.7, df1 = 88, df2 = 88, p-value = 0.007
alternative hypothesis: variance increases from segment 1 to 2
Detecting heteroscedasticity (6)
Breusch–Pagan–Godfrey test

R> bptest(jour_lm)
data: jour_lm
BP = 11, df = 2, p-value = 0.004
Detecting heteroscedasticity (7)
Spearman's rank correlation test
1 The null hypothesis H0: no heteroscedasticity.
2 Obtain the residuals ûi from the regression.
3 Rank |ûi| and Xi (or Yi).
4 Compute Spearman's rank correlation coefficient
rs = 1 − 6 Σ di² / (n(n² − 1)),
where di are the differences between the ranks of |ûi| and Xi and n is the
number of individuals.
5 The significance of the sample rs can be tested by a t test:
t_obs = rs √(n − 2) / √(1 − rs²)
6 Decision rule: if |t_obs| > t_critical, reject H0 (evidence of
heteroscedasticity); otherwise there is no evidence of heteroscedasticity.
With multiple regressors, repeat this procedure for each regressor (see the
sketch below).
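A minimal R sketch of steps 2–5 on simulated data; cor.test() with method = "spearman" reports the same rank correlation:

set.seed(6)
n <- 50
x <- runif(n, 1, 10)
y <- 1 + 2 * x + rnorm(n, sd = x)            # heteroscedastic errors
m <- lm(y ~ x)

rs   <- cor(rank(abs(resid(m))), rank(x))    # Spearman's rank correlation
tobs <- rs * sqrt(n - 2) / sqrt(1 - rs^2)    # step 5 t statistic
c(rs = rs, tobs = tobs, tcrit = qt(0.975, n - 2))

cor.test(abs(resid(m)), x, method = "spearman")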
How to obtain estimators
with Yi = β1 + β2 Xi + ui when E(ui) = 0 and Var(ui) = σi²
When σi is known: use the WLS method to obtain BLUE estimators. pp. 4–5
When σi is not known:
§ If Var(ui) = σ² Xi², do OLS with the model
Yi/Xi = β2 + β1 (1/Xi) + ui/Xi,
and Var(ui/Xi) = σ². Why? (See the sketch below.)
§ If Var(ui) = σ² Xi (Xi > 0), do OLS with the model
Yi/√Xi = β2 √Xi + β1 (1/√Xi) + ui/√Xi,
and Var(ui/√Xi) = σ². Why?
§ If Var(ui) = σ² [E(Yi)]², do OLS with the model
Yi/Ŷi = β2 (Xi/Ŷi) + β1 (1/Ŷi) + ui/Ŷi,
and Var(ui/Ŷi) ≈ Var(ui)/[E(Yi)]² = σ².
§ Doing OLS with log-transformed data, ln Yi = β1 + β2 ln Xi + vi, can also
reduce heteroscedasticity.
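A minimal R sketch of the first case on simulated data, showing that OLS on the transformed model coincides with WLS using weights wi = 1/Xi²:

set.seed(7)
n <- 100
x <- runif(n, 1, 5)
y <- 1 + 2 * x + rnorm(n, sd = 0.5 * x)      # sd proportional to x, so Var(u) = sigma^2 x^2

m_trans <- lm(I(y / x) ~ I(1 / x))           # OLS on Y/X = beta2 + beta1 (1/X) + u/X
m_wls   <- lm(y ~ x, weights = 1 / x^2)      # WLS with w = 1/x^2

coef(m_trans)                                # (Intercept) = beta2, I(1/x) = beta1
coef(m_wls)                                  # same estimates, labels swapped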
Autocorrelation
Feng Li
feng.li@cufe.edu.cn
Consequences of ignoring autocorrelation: the confidence intervals are likely
wider than those from GLS.
[Figure: residuals plotted against time, showing a systematic pattern]
Carry out the runs test with the residuals given in the previous slide.
Which critical value are you looking for?
Why is this test called a nonparametric test?
The Durbin–Watson d statistic:
d = Σ_{t=2}^{n} (ût − û_{t−1})² / Σ_{t=1}^{n} ût²
Approximation of the d statistic:
d ≈ 2(1 − ρ̂),
where ρ̂ = (Σ ût û_{t−1}) / (Σ ût²) is the sample first-order coefficient of
autocorrelation of ût, i.e., the slope in ût = ρ̂ û_{t−1}.
When ρ → 1: positive autocorrelation;
When ρ → −1: negative autocorrelation;
When ρ → 0: no autocorrelation.
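A small R sketch on simulated AR(1) residuals, checking the approximation d ≈ 2(1 − ρ̂):

set.seed(8)
u <- as.vector(arima.sim(list(ar = 0.7), n = 200))    # AR(1) series, rho = 0.7

d   <- sum(diff(u)^2) / sum(u^2)                      # Durbin-Watson d
rho <- sum(u[-1] * u[-length(u)]) / sum(u^2)          # first-order autocorrelation
c(d = d, approx = 2 * (1 - rho))                      # the two should be close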
R> dwtest(consump1)
Durbin-Watson test
data: consump1
DW = 0.087, p-value <2e-16
alternative hypothesis: true autocorrelation is greater than 0
The comparison between the Durbin–Watson test and the runs test
The runs test does not require any assumption about the probability
distribution of the error term.
Warning: the d test is not valid if ui is not iid.
When n is large, √n (1 − d/2) ~ N(0, 1).
We can use this normal approximation when n is large regardless of the iid
assumption.
The Durbin–Watson statistic requires the covariates to be nonstochastic, which
is difficult to meet in econometrics. In this case, try the test on the next
slides.
The Breusch–Godfrey test: regress ût on the original covariates and the lagged
residuals û_{t−1}, …, û_{t−p}, and obtain R².
When n is large,
(n − p) R² ~ χ²(p).
Reject H0 if χ²_obs(p) > χ²_crit(p).
Question: Have you seen another test that is constructed in a similar way?
(A sketch follows below.)
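A minimal R sketch of this construction for p = 1 on simulated data; lmtest::bgtest() automates it (bgtest uses nR², so the statistics are close but not identical):

library(lmtest)

set.seed(9)
n <- 200
x <- rnorm(n)
u <- as.vector(arima.sim(list(ar = 0.6), n = n))   # autocorrelated errors
y <- 1 + 2 * x + u

m    <- lm(y ~ x)
uhat <- resid(m)
ulag <- c(0, head(uhat, -1))                       # u_{t-1}, with u_0 set to 0
aux  <- lm(uhat ~ x + ulag)                        # auxiliary regression
LM   <- (n - 1) * summary(aux)$r.squared           # (n - p) R^2 with p = 1
c(LM = LM, crit_5pct = qchisq(0.95, df = 1))

bgtest(m)                                          # should give a similar statistic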
R> bgtest(consump1)
Breusch-Godfrey test for serial correlation of order up
to 1
data: consump1
LM test = 190, df = 1, p-value <2e-16
Model misspecification and pure autocorrelation
STAT 512
Spring 2011
Background Reading: KNNL 3.1–3.6
What Do We Need To Check?
Main assumptions: errors are independent, normal random variables with common
variance σ².
What Do We Need To Check?
Are there "outlying" values of the predictor variables (X) that could unduly
influence the regression model?
Diagnostics for Predictors (X)
Dot plots, stem-and-leaf plots, box plots, and histograms can be useful for
identifying potential outlying observations in X. Note that just because an
observation is outlying does not mean it will create a problem in the
analysis; however, such a point will probably have higher influence over the
regression estimates.
Sequence plots can be useful for identifying potential problems with
independence.
Reminder – Scatterplot
[Figure: scatterplot of Y vs X]
UNIVARIATE Procedure (2)
Stem Leaf # Boxplot
78 00000 5 |
76 0000 4 |
74 0 1 |
72 000 3 |
70 000 3 +-----+
68 000 3 | |
66 0 1 | |
64 0000 4 | |
62 000 3 | |
60 0000 4 *--+--*
58 000 3 | |
56 0000 4 | |
54 000 3 | |
52 000 3 | |
50 0 1 | |
48 00 2 +-----+
46 0000 4 |
44 00 2 |
42 0000 4 |
40 000
----+----+----+----+
UNIVARIATE Procedure (3)
Diagnostics for Residuals (1)
Basic distributional assumptions on the errors
Model: Yi = β0 + β1 Xi + εi
o where εi ~ N(0, σ²), iid (i.e., the εi are independent and identically
distributed)
Checking Linearity
Plot Y vs X (check for linearity and outliers).
Checking Constant Variance
Plot e vs X (or Ŷ) – the residual plot.
Patterns suggest issues!
A megaphone shape indicates increasing/decreasing variance with X.
Other shapes can indicate non-linearity.
Outliers show up in an obvious way (see the sketch below).
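A minimal R sketch of these two diagnostic plots on simulated data with increasing variance:

set.seed(10)
x <- runif(100, 0, 10)
y <- 1 + 2 * x + rnorm(100, sd = 0.3 * x)    # variance grows with x
m <- lm(y ~ x)

par(mfrow = c(1, 2))
plot(x, y, main = "Y vs X")                  # check linearity and outliers
plot(fitted(m), resid(m),                    # megaphone shape => nonconstant variance
     xlab = "Fitted values", ylab = "Residuals",
     main = "Residual plot")
abline(h = 0, lty = 2)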
Checking for Normality
Plot the residuals in a normal probability plot:
o Compare the residuals to their expected values under normality (normal
quantiles).
o The plot should be linear IF the residuals are normal.
Plot the residuals in a histogram.
PROC UNIVARIATE is used for both of these.
The book shows a method for doing this by hand – you do not need to worry
about having to do that.
Normality Plot
Outliers show up in a quite obvious way.
Non-normal distributions can look very wacky.
Symmetric, heavy-tailed distributions show an "S" shape.
Skewed distributions show exponential-looking curves (see Figure 3.9).
[Figure: normal probability plot of the residuals]
Checking Independence
Plot the residuals in a sequence plot (time/collection order).
Additional Predictors
Residuals vs Age
[Figure: residuals plotted against age]
Plot looks great, right?
But what happens if we separate male and female?
Summary of Diagnostic Plots
You will have noticed that the same plots are
used for checking more than one assumption.
These are your basic tools.
o Plot Y vs. X (check for linearity, outliers)
o Plot Residuals vs. X (check for constant
variance, outliers, linearity)
o Normal Probability Plot and/or
Histogram of residuals (normality, outliers)
If it makes sense, consider also doing a
sequence plot of the residuals (independence)
Plots vs. Significance Tests
If you are uncertain what to conclude after
examining the plots, you may additionally wish
to perform hypothesis tests for model
assumptions (normality, homogeneity of
variance, independence).
These tests are not a replacement for the plots,
but rather a supplement to them.
Note of caution: plots are more likely to suggest a remedy, and significance
test results are very dependent on sample size.
Significance Tests for Model Assumptions
Constancy of variance:
o Brown–Forsythe (modified Levene)
o Breusch–Pagan
Normality:
o Kolmogorov–Smirnov, etc.
Independence of errors:
o Durbin–Watson test
Tests for Normality
PROC UNIVARIATE data=diag normal;
  var resid;
run;