
RSP

Assignment

Kush Bansal
BBA(FIA) 2A
18344
Question 1: Write a short note on the following –
a) Error term (of data and residuals; mention steps in EViews)
b) Correlogram
c) CLRM
d) Serial Correlation vs Autocorrelation
e) Graphs (brief meaning and steps in EViews, draw diagram also)

Answer:
a) Error Term
Normality of the data means that the sampling distribution of sample means in the data is normally distributed. This implies that the data, when plotted graphically, roughly fit a bell-shaped curve. Normality of the residuals means that the error terms, after the expected value of the dependent variable has been estimated for a given value of the independent variable, should be normally distributed around the estimated value. Normality implies that the data are symmetric, unimodal and asymptotic, and that the mean, median and mode are all equal. The assumption requiring a normal distribution applies only to the disturbance term, not to the independent variables, as is often believed. Each case in the sample actually has a different random variable which encompasses all the “noise” that accounts for the difference between the observed and predicted values produced by a regression equation, and it is the distribution of this disturbance term, or noise, for all cases in the sample that should be normally distributed.

Steps to check non-normality:

EViews provides tests for serial correlation, normality, heteroskedasticity, and autoregressive conditional heteroskedasticity in the residuals from your estimated equation.
The Histogram - Normality Test view displays a histogram and descriptive statistics of the residuals, including the Jarque-Bera statistic for testing normality. If the residuals are normally distributed, the histogram should be bell-shaped and the Jarque-Bera statistic should not be significant; see “Histogram and Stats” for a discussion of the Jarque-Bera test.
To display the histogram and Jarque-Bera statistic, select View/Residual Diagnostics/Histogram - Normality Test. The Jarque-Bera statistic follows a chi-squared distribution with two degrees of freedom under the null hypothesis of normally distributed errors.
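As an illustration outside EViews, the Jarque-Bera statistic can be computed by hand in Python with NumPy (a minimal sketch; the function name jarque_bera is our own, not an EViews or library call):

```python
import numpy as np

def jarque_bera(x):
    """Jarque-Bera statistic: JB = n/6 * (S^2 + (K - 3)^2 / 4),
    where S is sample skewness and K is sample kurtosis (normal K = 3)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    d = x - x.mean()
    var = np.mean(d**2)
    skew = np.mean(d**3) / var**1.5   # S: 0 for a symmetric sample
    kurt = np.mean(d**4) / var**2     # K: 3 under normality
    return n / 6.0 * (skew**2 + (kurt - 3.0)**2 / 4.0)

# A symmetric sample has zero skewness, so only the kurtosis term contributes.
print(jarque_bera([1, 2, 3, 4, 5]))   # ≈ 0.352
```

Values far from zero (compared against the chi-squared distribution with two degrees of freedom) lead to rejection of the normality null.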

b) Correlogram
A correlogram (also called an autocorrelation function (ACF) plot) is a visual way to show serial correlation in data that change over time (i.e. time series data). Serial correlation (also called autocorrelation) occurs when an error at one point in time carries over to a subsequent point in time. For example, you might overestimate the value of your stock market investments for the first quarter, leading to an overestimate of the values for the following quarters.
A correlogram gives a good idea of whether or not the data show autocorrelation, though by itself it is not a formal test of how large that autocorrelation is. The plot shows the correlation coefficient for the series lagged by one delay at a time: at lag 1 you might be comparing January to February, February to March, and so on. The horizontal axis is the time lag and the vertical axis is the autocorrelation coefficient (ACF). For spatial data, the plot is sometimes combined with a measure of autocorrelation such as Moran’s I, where values close to +1 indicate clustering and values close to -1 indicate dispersion.
[Figure: an example correlogram with relatively small autocorrelations (between about -0.2 and 0.35) and no consistent upward or downward pattern across the x-axis; such a series likely has no significant autocorrelation.]
When you select View/Correlogram… the Correlogram Specification dialog box appears. You may choose to plot the correlogram of the raw series (level) x, the first difference d(x) = x - x(-1), or the second difference d(x) - d(x(-1)) = x - 2x(-1) + x(-2) of the series.
You should also specify the highest order of lag to display in the correlogram by typing a positive integer in the field box. The series view then displays the correlogram and associated statistics.
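For illustration, the autocorrelation coefficients that a correlogram plots can be sketched in Python/NumPy as follows (the acf helper is a hypothetical name of our own, not an EViews function):

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation at lags 1..max_lag:
    r_k = sum_t (x_t - xbar)(x_{t+k} - xbar) / sum_t (x_t - xbar)^2."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    denom = np.sum(d**2)
    return [np.sum(d[:-k] * d[k:]) / denom for k in range(1, max_lag + 1)]

# A steadily trending series is strongly autocorrelated at short lags.
print(acf([1, 2, 3, 4, 5, 6, 7, 8], 3))
```

A correlogram is simply these r_k values plotted against the lag k.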

c) CLRM
CLRM refers to the classical linear regression model. The ordinary least squares (OLS) technique is the most popular method of performing regression analysis and estimating econometric models because, in standard situations (meaning the model satisfies a series of statistical assumptions), it produces optimal (the best possible) results.
The proof that OLS generates the best results is known as the Gauss-Markov theorem, but the proof requires several assumptions. These assumptions, known as the classical linear regression model (CLRM) assumptions, are the following:

• The model is linear in parameters, meaning the regression coefficients do not enter the function being estimated as exponents (although the variables can have exponents).

• The values of the independent variables are derived from a random sample of the population, and they contain variability.

• The explanatory variables have no perfect collinearity (that is, no independent variable can be expressed as a linear function of the other independent variables).

• The error term has zero conditional mean, meaning that the average error is zero at any specific value of the independent variable(s).

• The model has no heteroskedasticity (meaning the variance of the error is the same regardless of the independent variable’s value).

• The model has no autocorrelation (the error term does not exhibit a systematic relationship over time).
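As a quick sketch of the estimation that these assumptions justify, OLS can be run outside EViews with NumPy's least-squares solver; on data generated exactly from y = 2 + 3x it recovers the coefficients (an illustration with made-up data, not EViews output):

```python
import numpy as np

# Exact linear data: intercept 2, slope 3.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 + 3.0 * x

# Design matrix [1, x]; lstsq solves the OLS normal equations.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # ≈ [2. 3.]
```

Under the CLRM assumptions, the Gauss-Markov theorem guarantees these OLS estimates are the best linear unbiased estimators.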

d) Serial Correlation vs Autocorrelation
Serial correlation is the relationship between a variable and a lagged version of itself over various time intervals. Repeating patterns often show serial correlation when the level of a variable affects its future level; a variable that is serially correlated has a pattern and is not random. In finance, this correlation is used by technical analysts to determine how well the past price of a security predicts the future price. Serial correlation was originally used in engineering to determine how a signal, such as a computer signal or radio wave, varies compared with itself over time.
Autocorrelation represents the degree of similarity between a given time series and a lagged version of itself; it measures the relationship between a variable's current value and its past values. An autocorrelation of +1 represents a perfect positive correlation, while an autocorrelation of -1 represents a perfect negative correlation.
The two terms are often used interchangeably. A common distinction is that when the correlation occurs within the same series it is called autocorrelation, while correlation between one series and the lagged values of a different time series is called serial correlation.
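A minimal Python/NumPy sketch of the idea: correlate a series with its own first lag. For a perfectly linear trend the lag-1 correlation is +1 (an illustration with made-up data):

```python
import numpy as np

x = np.array([3.0, 5.0, 7.0, 9.0, 11.0])   # a trending series

# Pair each observation with its predecessor (drop the unmatched endpoints)
# and compute the Pearson correlation between the two aligned slices.
r = np.corrcoef(x[:-1], x[1:])[0, 1]
print(r)   # ≈ 1.0 for a perfectly linear trend
```

A value near zero instead would indicate the series carries no memory of its past at lag 1.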

e) Graphs
Graphing is an important part of the process of data analysis and presentation. EViews provides a powerful, user-friendly, full-featured set of tools that aid in graphically displaying information. EViews graph types can be grouped into four categories:
• Observation graphs – show data for each observation in the series or group
(covered in this tutorial).
• Analytical Graphs – show results obtained from analysis of the series or
group data (covered in tutorial Advanced Graphing).
• Auxiliary Graphs – show analytical graphs deriving from a
modification of observation graphs (i.e., regression line or kernel fit
line: covered in tutorial Advanced Graphing).
• Categorical Graphs – observation or analytical graphs created by dividing
data into categories defined by factor variables (covered in tutorial on
Advanced Graphing).
Steps to make a basic graph:

1. Click on page Monthly and open series SP500.
2. Click View → Graph.
3. This brings up the Graph Options dialog box. Under Option Pages, select the Basic type page.
4. Under the Graph type section, select Basic graph from the General category.
5. Select Line & Symbol from the Specific category.
6. Leave the Details settings as specified by the default options.
7. Click OK.

Steps to make a series graph:


There are a number of graph types and associated options that you can use to
plot single series in EViews.
If you open a single series, the following graph types appear: Line & Symbol,
Bar, Spike, Area, Dot Plot, Distribution, Quantile, Boxplot, Seasonal Graph

Let’s first create a group of the series that we will use.


1. Click on the Monthly page of the Data.wf1 workfile.
2. Select the following series: tbill3m, tbill6m, tnote2y, tnote10y (in that order).
3. Right-click and select Open → as Group. The group of series is now created.
4. Click Name to name the group (in this case, let's name it group01).
1. Open group01.
2. Click View → Graph.
3. This brings up the Graph Option dialog box. The default is set on Basic type
page; keep the default setting.
4. Under the Graph type section, select Basic graph from the General category.
5. Select Line & Symbol from the Specific category.
6. Notice that now under Details/Multiple Series you have a few options. Select Single Graph.
7. Click OK.
Question 2: Write the steps of the following in EViews
a) To convert simple linear regression into Log lin, Lin log and double log.
b) To convert Log lin into Lin log, double log.
c) To convert Lin log into log lin, double log.
d) To convert double log into log lin and lin log.

Answer:
a.) It has been assumed that the original data has variables in their linear form
Linear Regression:
1) Import the required excel file
2) Object -> New Object -> Equation
3) In the Equation Estimation Dialogue Box, specify
a) Dependent variable c Independent variable
b) The method of estimation; LS – Least Squares (NLS and ARMA)
c) The sample period, then click on OK
Linear to Log lin
1) Import the required excel file
2) Object -> New Object -> Equation
3) In the Equation Estimation Dialogue Box, specify
a) Log(Dependent variable) c Independent variable
b) The method of estimation; LS – Least Squares (NLS and ARMA)
c) The sample period, then click on OK
Linear to lin log
1) Import the required excel file
2) Object -> New Object -> Equation
3) In the Equation Estimation Dialogue Box, specify
a) Dependent variable c log(Independent variable)
b) The method of estimation; LS – Least Squares (NLS and ARMA)
c) The sample period, then click on OK
Linear to log log
1) Import the required excel file
2) Object -> New Object -> Equation
3) In the Equation Estimation Dialogue Box, specify
a) Log(Dependent variable) c log(Independent variable)
b) The method of estimation; LS – Least Squares (NLS and ARMA)
c) The sample period, then click on OK
b.) It has been assumed that the original data has dependent variable in log form
and independent variables in linear form
Log lin into lin log
1) Import the required excel file
2) Object -> New Object -> Equation
3) In the Equation Estimation Dialogue Box, specify
a) Exp(Dependent variable) c log(Independent variable)
b) The method of estimation; LS – Least Squares (NLS and ARMA)
c) The sample period, then click on OK
Log lin into log log
1) Import the required excel file
2) Object -> New Object -> Equation
3) In the Equation Estimation Dialogue Box, specify
a) Dependent variable c log(Independent variable)
b) The method of estimation; LS – Least Squares (NLS and ARMA)
c) The sample period, then click on OK

c.) It has been assumed that the original data has dependent variable in linear form
and independent variables in log form
Lin log into log lin
1) Import the required excel file
2) Object -> New Object -> Equation
3) In the Equation Estimation Dialogue Box, specify
a) Log(Dependent variable) c Exp(Independent variable)
b) The method of estimation; LS – Least Squares (NLS and ARMA)
c) The sample period, then click on OK

Lin log into log log


1) Import the required excel file
2) Object -> New Object -> Equation
3) In the Equation Estimation Dialogue Box, specify
a) Log(Dependent variable) c Independent variable
b) The method of estimation; LS – Least Squares (NLS and ARMA)
c) The sample period, then click on OK

d.) It has been assumed that the original data has variables in their log form
Log log into log lin
1) Import the required excel file
2) Object -> New Object -> Equation
3) In the Equation Estimation Dialogue Box, specify
a) Dependent variable c exp(Independent variable)
b) The method of estimation; LS – Least Squares (NLS and ARMA)
c) The sample period, then click on OK
Log log into lin log
1) Import the required excel file
2) Object -> New Object -> Equation
3) In the Equation Estimation Dialogue Box, specify
a) Exp(Dependent variable) c Independent variable
b) The method of estimation; LS – Least Squares (NLS and ARMA)
c) The sample period, then click on OK
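All of the steps above only change which transformed series enters the equation specification. The same idea can be sketched outside EViews in Python/NumPy (an illustration with made-up data, not an EViews procedure): estimating a log-lin model is just OLS with log(y) as the regressand, exactly as the specification "log(y) c x" does.

```python
import numpy as np

# Data generated from a log-lin relationship: log(y) = 1 + 0.5*x.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.exp(1.0 + 0.5 * x)

# Regress log(y) on a constant and x; OLS recovers the true coefficients.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
print(beta)   # ≈ [1.  0.5]
```

Lin-log and double-log forms work the same way, with log() applied to the regressor, or to both sides, before estimating.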
Question 3: Distinguish the following:
Answer:
a) Linear in parameters and linear in variables.
The difference between linear and nonlinear regression models is not as straightforward as it sounds. You would think that linear equations produce straight lines and nonlinear equations model curvature. Unfortunately, that is not correct. Both types of models can fit curves to data, so that is not the defining characteristic.
The difference between nonlinear and linear is the “non”. That sounds like a joke, but it is honestly the easiest way to understand the difference: first define what linear regression is, and then everything else must be nonlinear regression.

Linear Regression Equations

A linear regression model follows a very particular form. In statistics, a regression model is linear when all terms in the model are one of the following:

• The constant
• A parameter multiplied by an independent variable (IV)

Then, you build the equation by only adding the terms together. These rules limit
the form to just one type:

Dependent variable = constant + parameter * IV + … + parameter * IV

Statisticians say that this type of regression equation is linear in the parameters.
However, it is possible to model curvature with this type of model. While the
function must be linear in the parameters, you can raise an independent variable
by an exponent to fit a curve. For example, if you square an independent variable,
the model can follow a U-shaped curve.

While the independent variable is squared, the model is still linear in the
parameters. Linear models can also contain log terms and inverse terms to follow
different kinds of curves and yet continue to be linear in the parameters.

The regression example below models the relationship between body mass index (BMI) and body fat percentage. It is a linear model that uses a quadratic (squared) term to model the curved relationship.
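A minimal NumPy sketch of this point (made-up data, not the BMI example): the model below contains a squared variable yet remains linear in the parameters, so ordinary least squares recovers the coefficients exactly.

```python
import numpy as np

# y = 1 + 2*x + 3*x^2 is nonlinear in the variable x but linear in the
# parameters: x^2 is simply treated as another regressor column.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = 1.0 + 2.0 * x + 3.0 * x**2

X = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # ≈ [1. 2. 3.]
```

A genuinely nonlinear model, by contrast, would have a parameter inside a function or exponent (e.g. y = a * exp(b*x)) and could not be written as such a linear combination.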

b) Series and Autoseries.


Series:
A series is the basic dataset object we work with when developing an econometric model: it contains the numerical data for the variables involved in the statistical analyses. The series object is the main data object in EViews, represented by a yellow icon with a graph in it.
By nature, a series object contains one column of data (objects with more than one series are called “groups”). There are several ways to create a series:
1. Select Object -> New Object from the main menu and choose “Series”, or issue the “series” command in the command window.
2. Transform an existing series in the command window using the generate (genr) command (to obtain a first difference, log, etc.).
3. Transform an existing series using EViews functions, such as Generate Series by Equation via the Quick menu.
4. Transform an existing series using sample conditions (Quick -> Generate Series by Equation), such as an IF condition.
Various series statistics can also be used for different types of analyses. These can be accessed as follows:
1. Import the required data.
2. Go to Quick -> Series Statistics.

Autoseries
Autoseries is an alternative way of working with databases which allows you to
make direct use of the series contained in a database without first copying the
series. The advantage of this approach is that you need not go through the
process of importing the data every time the database is revised.
So, basically, autoseries is a quicker way of referring to and using series when performing various statistical analyses and tests in EViews.
In general, using auto-series directly from the database has the advantage that the
data will be completely up to date. If the series in the database are revised, you do
not need to repeat the step of importing the data into the workfile. You can simply
re-estimate the equation or model, and EViews will automatically retrieve new
copies of any data which are required.
There is one complication to this discussion which results from the rules which
regulate the updating and deletion of auto-series in general. If there is an existing
copy of an auto-series already in use in EViews, a second use of the same
expression will not cause the expression to be re-evaluated (in this case reloaded
from the database); it will simply make use of the existing copy. If the data in the
database have changed since the last time the auto-series was loaded, the new
expression will use the old data.

c) Dated – regular frequency, unstructured/undated and balanced panel.

1. EViews' design allows you to work with various types of data in an intuitive and convenient way. We start with the basic concepts of working with datasets using workfiles, and describe simple methods to get you started on creating and working with workfiles in EViews.
2. In the majority of cases you start your work in EViews with a workfile - a container
for EViews objects. Before you perform any tasks with EViews' objects you first
have to either create a new workfile or to load an existing workfile from the disc.
3. In order to create a new workfile you need to provide information about its structure. Select File/New/Workfile from the main menu to open the Workfile Create dialog. On the left side of the dialog is a combo box for describing the underlying structure of your dataset. You have to choose between three options regarding the structure of your data: the Dated - regular frequency, the Unstructured, and the Balanced Panel settings. Dated - regular frequency is normally used for simple time series data, Balanced Panel is used for a simple panel dataset, and the Unstructured option is used for all other cases.
4. For the Dated - regular frequency, you may choose among the following options:
Annual, Semi-annual, Quarterly, Monthly, Weekly, Daily - 5 day week, Daily - 7
day week and Integer date. EViews will also ask you to enter a Start date and End
date for your workfile. When you click on OK, EViews will create a regular frequency
workfile with the specified number of observations and the associated identifiers.
5. The Unstructured option simply uses integer identifiers instead of date identifiers. You would use this type of workfile when performing a cross-sectional analysis. Under this option you only need to enter the number of observations.
6. The Balanced Panel entry provides a method of describing a regular frequency panel data structure. Panel data is the term we use for data containing observations with both a group (cross-section) identifier and a time series identifier. This entry may be used when you wish to create a balanced structure in which every cross-section follows the same regular frequency with the same date observations. Under this option you should specify the desired Frequency, a Start and End date, and the Number of cross sections.
Question 4: Write the steps of the following in EViews
a) Create a lag variable
b) Basic boxplot graph
c) Generating a new workfile page
d) Perform frequency conversion from high frequency to low frequency data
e) Create a dummy variable equal to 1, if gpa>4, and sat> 110, and 0 otherwise

Answer:
(a) Use the code ‘genr a=b(-1)’ or ‘series a=b(-1)’ to create a lag variable ‘a’ for
variable ‘b’
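For illustration, the effect of b(-1) can be mimicked outside EViews in Python/NumPy (a sketch with made-up data; the first lagged value has no predecessor and becomes missing):

```python
import numpy as np

# b(-1) in EViews shifts the series down by one observation.
b = np.array([10.0, 20.0, 30.0, 40.0])
a = np.concatenate(([np.nan], b[:-1]))   # first observation is NA
print(a)   # [nan 10. 20. 30.]
```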

(b) Use the code ‘scat(ab=boxplot) x y z’ to create a basic boxplot graph for variables
x, y
and z

(c) Call up the new page menu by clicking on the tab labelled ‘New Page’ and
select ‘Specify by Frequency/Range...’, and EViews will display the ‘Workfile
Create’ dialog. Describe the structure of your workfile page as you would for a
new workfile, and enter OK.

(d)The steps are:


1. Right click on the required series and select Copy.
2. Click on the destination page and select Paste Special.
3. The Paste Special dialog comes up. Select the series under Paste as section,
and General match merge criteria under Merge by section.
4. Set Source ID and Destination ID.
5. Select Sum under Contraction Method. This means that annual GDP will be
created by summing the quarterly GDP data for each individual country over
a given year.
6. Click OK.
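A minimal sketch of the Sum contraction outside EViews, assuming eight quarterly observations spanning two years (made-up numbers, not the GDP data above):

```python
import numpy as np

# Quarterly values for two years; the Sum contraction method adds the
# four quarters belonging to each year to produce the annual figure.
quarterly = np.array([1.0, 2.0, 3.0, 4.0,      # year 1
                      10.0, 20.0, 30.0, 40.0])  # year 2
annual = quarterly.reshape(-1, 4).sum(axis=1)
print(annual)   # year totals: 10 and 100
```

Other contraction methods (Average, First, Last) would simply replace the sum with the corresponding aggregation.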
(e) The dummy must equal 1 only when both conditions hold (gpa greater than 4 and sat greater than 110), and 0 otherwise, so both conditions belong in a single if statement. Enter the following code:

smpl @all

series dummy1=0

smpl if gpa>4 and sat>110

series dummy1=1

smpl @all
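For comparison, the same AND-condition dummy can be sketched in Python/NumPy with made-up gpa and sat values (an illustration, not EViews output):

```python
import numpy as np

gpa = np.array([4.5, 3.9, 4.2, 4.8])
sat = np.array([120, 130, 100, 115])

# dummy1 = 1 only where BOTH conditions hold, 0 otherwise.
dummy1 = np.where((gpa > 4) & (sat > 110), 1, 0)
print(dummy1)   # [1 0 0 1]
```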

Question 5: Explain the purpose of these commands in EViews (1*10=10 marks)


a) Expand
b) Recode
c) smpl
d) wfopen
e) exit
f) isna(x)
g) @all
h) freeze
i) @first @last
j) wfclose

Answer:
a) Expand
The @expand expression may be added in estimation to indicate the use of one
or more automatically created dummy variables.
• @expand(ser1[, ser2, ser3, ...][, drop_spec])
creates a set of dummy variables that span the unique values of the input series
ser1, ser2, etc.

b) Recode
Recode is a function that returns one of two values depending on a condition.
• @recode(s,x,y) - recode by condition: returns x if condition s is true; otherwise returns y.

c) smpl
The smpl command sets the workfile sample to use for statistical operations
and series assignment expressions.
• smpl smpl_spec
• smpl sample_name
The smpl command also allows you to select observations on the basis of
conditions specified in an if statement. This enables you to use logical
operators to specify what observations to include in EViews’ procedures.
You can also use smpl to set the current observations to the contents of a
named sample object; put the name of the sample object after the keyword.
d) wfopen
Open a workfile. Reads in a previously saved workfile from disk, or reads the
contents of a foreign data source into a new workfile.
The opened workfile becomes the default workfile; existing workfiles in memory
remain on the desktop but become inactive.
• wfopen [path\]source_name
• wfopen(options) source_description [table_description]
[variables_description]
• wfopen(options) source_description [table_description]
[dataset_modifiers] where path is an optional local path or URL.
There are three basic forms of the wfopen command:
• the first form is used by EViews native (“EViews and MicroTSP”) and
time series database formats ( “Time Series Formats”).
• the second form used for raw data files—Excel, Lotus, ASCII text, and binary
files (
“Raw Data Formats”).
• the third form is used with the remaining source formats, which we term
dataset formats, since the data have already been arranged in named
variables (“Datasets”).

e) Exit
Exit from EViews (close the EViews application).
• You will be prompted to save objects and workfiles which have changed
since the last time they were saved to disk. Be sure to save your workfiles, if
desired, since all changes that you do not save to a disk file will be lost.

f) isna(x)
isna(x) is one of the operators and functions that may be used in series assignment and generation and, in many cases, in matrix operations or element evaluation. isna(x) returns 1 for observations where x is NA (missing) and 0 otherwise.

g) @all
@all is a sub-command used in smpl command which selects the entire workfile
range.

h) freeze
Creates graph, table, or text objects from a view.
• freeze(options, name) object_name.view_command
If you follow the keyword freeze with an object name but no view of the object,
freeze will use the default view for the object. You may provide a destination name
for the object containing the frozen view in parentheses.
i) @first @last
@first is a sub-command used in smpl command which selects the first
observation in the workfile.
@last is a sub-command used in smpl command which selects the last
observation in the workfile.

j) wfclose
Close the active or specified workfile.
• wfclose(options) [name]
wfclose allows you to close the currently active workfile. You may optionally
provide the name of a workfile if you do not wish to close the active workfile. If
more than one workfile is found with the same name, the one most recently
opened will be closed.
Q6) (i) Perform the different tests available in E-views to detect
Multicollinearity (write steps also) and interpret the output.
(ii) Perform the descriptive statistics in E-views (write steps also) and
interpret the output.

Answer:
Descriptive Statistics
A. Steps
1. Import the data in question in Eviews (File -> Import)
2. In the workfile, select the variables for which the descriptive statistics are needed
3. Right click on them, and open as a group (Right Click -> Open -> as Group)
4. In the group workfile now opened, click on View-> Descriptive Statistics->
Common / Individual samples as per the requirements

A result window displaying the descriptive statistics for the group will appear.

Alternatively,
1. Import the data in question in Eviews (File -> Import)
2. In the workfile, open the variable for which the descriptive statistics are needed
3. In the workfile opened, View-> Descriptive Statistics & Tests-> Histogram and
Stats
1) Mean - The average value or say the central point in the data set
2) Median - The central value of the data set or the value that divides the
dataset into two halves
3) Maximum - Largest value in the data set
4) Minimum - Smallest value in the data set
5) Standard Deviation - Average spread in the data set
6) Skewness - Asymmetry of the data set. A perfectly symmetrical data set will
have a skewness of 0. A positive value denotes a right-tailed data set,
and a negative value denotes a left-tailed data set. Here, some of the series
(X1, X2, X3) are slightly positively skewed, while others (X4, X5) are negatively skewed.
7) Kurtosis – Peakedness or tailedness of the data set; since all of our observed
kurtosis values are less than 3, the distributions are platykurtic.
8) Jarque-Bera- a test statistic for testing whether the series is normally
distributed. The test statistic measures the difference of the skewness and
kurtosis of the series with those from the normal distribution.
9) Probability- The reported Probability is the probability that a Jarque-Bera
statistic exceeds (in absolute value) the observed value under the null
hypothesis—a small probability value leads to the rejection of the null
hypothesis of a normal distribution.
10) Sum - Sum of all the values in the data set
11) Observations - Number of values in the data set
Multicollinearity
Steps
1. Run the regression by Quick-> Estimate Equation
2. Type the regressand and regressors
3. The regression result appears. Go to View-> Coefficient Diagnostics->
Variance Inflation Factors and the resultant screen appears

Primarily, we observe that all of the p-values are greater than 5% (individually insignificant coefficients) even though the R-squared is greater than 0.7; this combination suggests the presence of multicollinearity.

Alternatively:
Steps:
1. In the workfile, select the variables to be checked for multicollinearity
2. Right click on them, and open as a group (Right Click -> Open -> as Group)
3. In the group workfile now opened, click on Quick-> Group Statistics->
Correlations
In multiple pairs the correlation coefficient is greater than the standard 0.8 threshold, indicating multicollinearity.

There are two forms of the Variance Inflation Factor: centered and uncentered.
1) The centered VIF is the ratio of the variance of the coefficient estimate
from the original equation divided by the variance from a coefficient
estimate from an equation with only that regressor and a constant.
2) The uncentered VIF is the ratio of the variance of the coefficient estimate
from the original equation divided by the variance from a coefficient
estimate from an equation with only one regressor (and no constant). Note
that if your original equation did not have a constant only the uncentered
VIF will be displayed.
To interpret the derived VIF values as per the rule of thumb: a value of 1 means
that the predictor is not correlated with the other variables. The higher the value, the
greater the correlation of the variable with the other variables. Values of more than 4
or 5 are sometimes regarded as moderate to high, with values of 10 or more
regarded as very high.
Since the VIF values are high throughout here, it is difficult to disentangle the relative
importance of the predictors in the model, particularly when the standard errors are
regarded as being large.
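A minimal Python/NumPy sketch of the centered VIF computed from auxiliary regressions (the vif helper and the example data are our own, not EViews output): regress each column on the remaining regressors plus a constant and take 1/(1 - R^2).

```python
import numpy as np

def vif(X, j):
    """Centered VIF of regressor j: regress column j on the other
    regressors plus a constant and return 1 / (1 - R^2)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    Z = np.column_stack([np.ones(len(y)), others])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    r2 = 1.0 - resid @ resid / np.sum((y - y.mean())**2)
    return 1.0 / (1.0 - r2)

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([-1.0, 1.0, 1.0, -1.0])               # uncorrelated with x1
x3 = 2.0 * x1 + np.array([0.0, 0.01, 0.0, -0.01])   # nearly collinear with x1

print(vif(np.column_stack([x1, x2]), 0))   # ≈ 1 (no collinearity)
print(vif(np.column_stack([x1, x3]), 0))   # very large
```

The second VIF explodes because x3 is almost an exact linear function of x1, which is precisely the situation flagged by the rule of thumb above.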
