Understanding ARMA Models in Econometrics

This lecture discusses autoregressive moving average (ARMA) models and their applications. Key points covered include: the properties of moving average (MA) and autoregressive (AR) processes, including their moments, autocorrelation functions, and partial autocorrelation functions; mixed ARMA processes that combine AR and MA components, and how to identify their order using sample autocorrelations and partial autocorrelations; model selection via information criteria and specification tests; and examples of applying ARMA models to financial and macroeconomic data, including forecasting with estimated ARMA models.

Lecture 3: Autoregressive Moving Average (ARMA) Models and their Practical Applications
Dr. Francesco Rotondi

20192 – Financial Econometrics

Winter/Spring 2023
Overview
 Moving average processes
 Autoregressive processes: moments and the Yule-Walker
equations
 Wold’s decomposition theorem
 Moments, ACFs and PACFs of AR and MA processes
 Mixed ARMA(p, q) processes
 Model selection: SACF and SPACF vs. information criteria
 Model specification tests
 Forecasting with ARMA models
 A few examples of applications
Lecture 3: Autoregressive Moving Average (ARMA) Models – Dr. Rotondi 2
Moving Average Process

 MA(q) models are always stationary, as they are finite linear combinations of white noise processes
o Therefore an MA(q) process has a constant mean and variance, and autocovariances that differ from zero up to lag q but are zero afterwards


Moving Average Process: Examples

o Simulations are based on


Moving Average Process: Examples


Autoregressive Process
 An autoregressive (henceforth AR) process of order p is a process in which the series y_t is a weighted sum of p past values of the series (y_{t-1}, y_{t-2}, ..., y_{t-p}) plus a white noise error term, ε_t
o AR(p) models are simple univariate devices to capture the observed Markovian nature of financial and macroeconomic data, i.e., the fact that the series tends to be influenced by at most a finite number of its own past values, which is often also described as the series having only a finite memory (see below on this claim)
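As a quick illustration, an AR(p) series can be simulated by iterating the defining recursion; this is a minimal NumPy sketch, with hypothetical parameters φ0 = 1, φ1 = 0.5, φ2 = 0.3 chosen to give a stationary AR(2):

```python
import numpy as np

def simulate_ar(phi, n, phi0=0.0, sigma=1.0, burn=500, seed=0):
    """Simulate an AR(p) process y_t = phi0 + sum_i phi_i * y_{t-i} + eps_t."""
    rng = np.random.default_rng(seed)
    p = len(phi)
    y = np.zeros(n + burn)
    eps = rng.normal(0.0, sigma, n + burn)
    for t in range(p, n + burn):
        # weighted sum of the p most recent values, most recent first
        y[t] = phi0 + np.dot(phi, y[t - p:t][::-1]) + eps[t]
    return y[burn:]          # discard burn-in so start-up effects vanish

y = simulate_ar([0.5, 0.3], n=2000, phi0=1.0)
# sample mean should settle near phi0 / (1 - phi1 - phi2) = 5
```

The burn-in period is discarded so that the simulated path is effectively drawn from the stationary distribution rather than from the arbitrary zero starting values.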



The Lag and Difference Operators
 The lag operator, generally denoted by L, shifts the time index of a variable regularly sampled over time backward by one unit
o Therefore, applying the lag operator to a generic variable y_t, we obtain the value of the variable at time t - 1, i.e., L y_t = y_{t-1}
o Equivalently, applying L^k means lagging the variable k > 1 times, i.e., L^k y_t = L^{k-1}(L y_t) = L^{k-1} y_{t-1} = L^{k-2}(L y_{t-1}) = ... = y_{t-k}
 The difference operator, Δ, is used to express the difference between consecutive realizations of a time series, Δy_t = y_t - y_{t-1}
o With Δ we denote the first difference; with Δ² we denote the second-order difference, i.e., Δ²y_t = Δ(Δy_t) = Δ(y_t - y_{t-1}) = Δy_t - Δy_{t-1} = (y_t - y_{t-1}) - (y_{t-1} - y_{t-2}) = y_t - 2y_{t-1} + y_{t-2}, and so on
o Note that Δ²y_t ≠ y_t - y_{t-2}
o Δy_t can also be rewritten using the lag operator, i.e., Δy_t = (1 - L)y_t
o More generally, we can write a difference equation of any order, Δ^k y_t, as Δ^k y_t = (1 - L)^k y_t, k ≥ 1
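These identities are easy to verify numerically; the short series below is an arbitrary example:

```python
import numpy as np

y = np.array([2.0, 5.0, 4.0, 9.0, 7.0])

# First difference: (1 - L) y_t = y_t - y_{t-1}
d1 = np.diff(y)

# Second-order difference: (1 - L)^2 y_t = y_t - 2 y_{t-1} + y_{t-2}
d2 = np.diff(y, n=2)

# Note that Delta^2 y_t is NOT the two-period change y_t - y_{t-2}:
two_period = y[2:] - y[:-2]

# Check the expansion y_t - 2 y_{t-1} + y_{t-2} term by term
d2_manual = y[2:] - 2 * y[1:-1] + y[:-2]
```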
Stability and Stationarity of AR(p) Processes
 Because an AR(p) is a stochastic difference equation, it can be rewritten as
 y_t = φ0 + φ1 y_{t-1} + φ2 y_{t-2} + ... + φp y_{t-p} + ε_t
or, more compactly, as φ(L)y_t = φ0 + ε_t, where φ(L) is a polynomial of order p, φ(L) = 1 - φ1 L - φ2 L² - ... - φp L^p
 Replacing in the polynomial φ(L) the lag operator by a variable λ and setting it equal to 0, i.e., φ(λ) = 0, we obtain the characteristic equation associated with the difference equation φ(L)y_t = φ0 + ε_t
o A value of λ which satisfies the polynomial equation is called a root
o A polynomial of degree p has p roots, which are often complex numbers
 If the absolute values of all the roots of the characteristic equation are greater than one, the process is said to be stable
 A stable process is always weakly stationary
o Even though stability and stationarity are conceptually different, stability conditions are commonly referred to as stationarity conditions
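The root condition can be checked mechanically with a polynomial root finder; a small sketch (the coefficient values in the examples are hypothetical):

```python
import numpy as np

def is_stable(phi):
    """Check whether all roots of the AR characteristic polynomial
    1 - phi_1 z - ... - phi_p z^p lie outside the unit circle."""
    # np.roots expects coefficients ordered from highest degree down
    coeffs = np.r_[-np.array(phi, dtype=float)[::-1], 1.0]
    roots = np.roots(coeffs)
    return bool(np.all(np.abs(roots) > 1.0))

stable   = is_stable([0.5, 0.3])  # satisfies the AR(2) conditions -> stable
unstable = is_stable([1.2])       # AR(1) with |phi_1| > 1 -> root inside circle
```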
Wold’s Decomposition Theorem

 y_t − μ = ε_t + ψ1 ε_{t-1} + ψ2 ε_{t-2} + ... = Σ_{i=0}^∞ ψ_i ε_{t-i}, with ψ0 = 1
 An autoregressive process of order p with no constant and no other predetermined, fixed terms can be expressed as an infinite-order moving average process, MA(∞), and it is therefore linear
 If the process is stationary, the sum Σ_{i=0}^∞ ψ_i ε_{t-i} will converge
 The (unconditional) mean of an AR(p) model is
 E[y_t] = φ0 / (1 − φ1 − φ2 − ... − φp)
o The sufficient condition for the mean of an AR(p) process to exist and be finite is that the sum of the AR coefficients is less than one in absolute value, |φ1 + φ2 + ... + φp| < 1, see next
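For a stationary AR(1) the Wold weights are simply ψ_i = φ^i, which makes the MA(∞) representation easy to verify numerically; a sketch with an arbitrary φ = 0.7 and unit shock variance:

```python
import numpy as np

phi = 0.7
# MA(infinity) weights of a stationary AR(1): psi_i = phi**i (psi_0 = 1)
psi = phi ** np.arange(50)      # truncated, since the tail is negligible

# Variance implied by the Wold representation: sigma^2 * sum_i psi_i^2
var_wold = np.sum(psi ** 2)

# Closed-form AR(1) variance with sigma^2 = 1
var_closed = 1.0 / (1.0 - phi ** 2)
```

Because |φ| < 1, the truncated sum of ψ_i² already matches the closed-form variance to machine precision, illustrating the convergence claim above.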
Moments and ACFs of an AR(p) Process
 The (unconditional) variance of an AR(p) process is computed from the Yule-Walker equations written in recursive form (see below)
o In the AR(2) case, for instance, we have
 Var(y_t) = (1 − φ2)σ_ε² / [(1 + φ2)(1 − φ1 − φ2)(1 + φ1 − φ2)]
o For general AR(p) models, the characteristic polynomials are rather convoluted; it is infeasible to define simple restrictions on the AR coefficients that ensure covariance stationarity
o E.g., for an AR(2), the conditions are φ1 + φ2 < 1, φ1 − φ2 < 1, |φ2| < 1
 The autocovariance and autocorrelation functions of AR(p) processes can be computed by solving a set of simultaneous equations known as the Yule-Walker equations
o It is a system of K equations that we recursively solve to determine the ACF of the process, i.e., ρ_h for h = 1, 2, ...
o See the example concerning the AR(2) process given in the lectures and/or in the textbook
 For a stationary AR(p), the ACF decays gradually to zero
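For the AR(2) case the Yule-Walker recursion gives ρ1 = φ1/(1 − φ2) and ρ_h = φ1 ρ_{h-1} + φ2 ρ_{h-2} for h ≥ 2; a minimal sketch, with the same hypothetical coefficients used earlier:

```python
import numpy as np

def ar2_acf(phi1, phi2, nlags):
    """ACF of a stationary AR(2) from the Yule-Walker recursion."""
    rho = np.empty(nlags + 1)
    rho[0] = 1.0
    rho[1] = phi1 / (1.0 - phi2)            # first Yule-Walker equation
    for h in range(2, nlags + 1):
        # same recursion as the process itself, applied to correlations
        rho[h] = phi1 * rho[h - 1] + phi2 * rho[h - 2]
    return rho

rho = ar2_acf(0.5, 0.3, 10)   # decays gradually toward zero, as stated
```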
ACF and PACF of AR(p) Process
 The SACF and SPACF are of primary importance to identify the lag order p of a process
ACF and PACF of AR(p) and MA(q) Processes
 An AR(p) process is described by an ACF that may slowly tail off at infinity and a PACF that is zero for lags larger than p
 Conversely, the ACF of an MA(q) process cuts off after lag q, while the PACF of the process may slowly tail off at infinity
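These identification rules can be checked on simulated data. Below is a dependency-free sketch of the sample ACF and of the sample PACF via the Durbin-Levinson recursion, applied to a simulated MA(1); the coefficient θ = 0.8 and the sample size are arbitrary choices:

```python
import numpy as np

def sample_acf(x, nlags):
    """Sample autocorrelations up to lag nlags."""
    x = np.asarray(x) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([1.0] + [np.dot(x[h:], x[:-h]) / denom
                             for h in range(1, nlags + 1)])

def sample_pacf(x, nlags):
    """Sample partial autocorrelations via the Durbin-Levinson recursion."""
    rho = sample_acf(x, nlags)
    pacf = np.zeros(nlags + 1)
    pacf[0] = 1.0
    phi = np.zeros((nlags + 1, nlags + 1))
    for k in range(1, nlags + 1):
        num = rho[k] - np.dot(phi[k - 1, 1:k], rho[1:k][::-1])
        den = 1.0 - np.dot(phi[k - 1, 1:k], rho[1:k])
        phi[k, k] = num / den
        phi[k, 1:k] = phi[k - 1, 1:k] - phi[k, k] * phi[k - 1, 1:k][::-1]
        pacf[k] = phi[k, k]           # the k-th partial autocorrelation
    return pacf

rng = np.random.default_rng(1)
eps = rng.normal(size=5001)
y = eps[1:] + 0.8 * eps[:-1]          # MA(1): theoretical rho_1 = 0.8/1.64
acf = sample_acf(y, 5)                # cuts off: lags 2+ are near zero
pacf = sample_pacf(y, 5)              # tails off gradually instead
```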



ARMA(p,q) Processes
 In some applications, the empirical description of the dynamic structure of the data requires us to specify high-order AR or MA models with many parameters
 To overcome this problem, the literature has introduced the class of autoregressive moving average (ARMA) models, combinations of AR and MA models


ARMA(p,q) Processes
 We can also write the ARMA(p, q) process using the lag operator:
 φ(L) y_t = φ0 + θ(L) ε_t
 The ARMA(p, q) model will have a stable solution (seen as a deterministic difference equation) and will be covariance stationary if the roots of the polynomial φ(λ) = 0 lie outside the unit circle
 The statistical properties of an ARMA process will be a combination of those of its AR and MA components
 The unconditional expectation of an ARMA(p, q) is
 E[y_t] = φ0 / (1 − φ1 − φ2 − ... − φp)
o An ARMA(p, q) process has the same mean as the corresponding ARMA(p, 0), or AR(p), process
 The general variances and autocovariances can be found by solving the Yule-Walker equations, see the book
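The claim that the MA part does not affect the unconditional mean can be checked by simulation; a minimal ARMA(1,1) sketch with hypothetical parameters φ0 = 1, φ1 = 0.5, θ1 = 0.4:

```python
import numpy as np

def simulate_arma11(phi0, phi1, theta1, n, burn=500, seed=2):
    """Simulate y_t = phi0 + phi1*y_{t-1} + eps_t + theta1*eps_{t-1}."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(size=n + burn)
    y = np.zeros(n + burn)
    for t in range(1, n + burn):
        y[t] = phi0 + phi1 * y[t - 1] + eps[t] + theta1 * eps[t - 1]
    return y[burn:]

y = simulate_arma11(1.0, 0.5, 0.4, 20000)
# Unconditional mean depends only on the AR part: phi0/(1 - phi1) = 2,
# regardless of theta1
```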
ARMA(p,q) Processes
 For a general ARMA(p, q) model, beginning with lag q, the values of ρ_s will satisfy:
 ρ_s = φ1 ρ_{s-1} + φ2 ρ_{s-2} + ... + φp ρ_{s-p}, for s > q
o After the qth lag, the ACF of an ARMA model is geometrically declining, similarly to a pure AR(p) model
 The PACF is useful for distinguishing between an AR(p) process and an ARMA(p, q) process
o While both have geometrically declining autocorrelation functions, the former has a partial autocorrelation function which cuts off to zero after p lags, while the latter has a partial autocorrelation function which declines geometrically


ARMA(p,q) Processes
o As one would expect of an ARMA process, both the ACF and the PACF decline geometrically: the ACF as a result of the AR part and the PACF as a result of the MA part
o However, as the coefficient of the MA part is quite small, the PACF becomes insignificant after only two lags; the AR coefficient is instead higher (0.7), so the ACF dies away rather slowly, only after 9 lags
Model Selection: SACF and SPACF
 A first strategy compares the sample ACF and PACF with the theoretical, population ACF and PACF and uses them to identify the order of the ARMA(p, q) model
 Example: US CPI inflation
o The series appears to follow a process of some ARMA type, but it remains quite difficult to determine its precise order (especially the MA order)
Model Selection: Information Criteria
 The alternative is to use information criteria (often shortened to IC)
 They essentially trade off the goodness of (in-sample) fit against the parsimony of the model, and provide a (cardinal, even if specific to an estimation sample) summary measure
o We are interested in forecasting out-of-sample: using too many parameters, we will end up fitting noise rather than the dependence structure in the data, reducing the predictive power of the model (overfitting)
 Information criteria combine, in rather simple mathematical formulations, two terms: one which is a function of the sum of squared residuals (SSR), supplemented by a penalty for the loss of degrees of freedom from the number of parameters of the model
o Adding a new variable (or a lag of a shock or of the series itself) has two opposite effects on an information criterion: it reduces the residual sum of squares but increases the value of the penalty term
 The best performing (most promising in out-of-sample terms) model is the one that minimizes the information criteria
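The exact formulas are given on the next slide; one common textbook form writes each criterion as ln(SSR/T) plus a parameter penalty, which can be sketched as follows (the SSR and sample-size values below are hypothetical):

```python
import numpy as np

def info_criteria(ssr, T, k):
    """AIC, SBIC and HQIC in one common textbook form: log of the average
    squared residual plus a penalty growing in the parameter count k."""
    sigma2 = ssr / T
    aic  = np.log(sigma2) + 2 * k / T
    sbic = np.log(sigma2) + k * np.log(T) / T
    hqic = np.log(sigma2) + 2 * k * np.log(np.log(T)) / T
    return aic, sbic, hqic

# For the same fit, the penalties rank SBIC > HQIC > AIC (for T large enough)
s = info_criteria(ssr=100.0, T=200, k=5)
```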
Model Selection: Information Criteria
 The SBIC is the IC that imposes the strongest penalty (ln T, where T is the sample size) for each additional parameter that is included in the model
 The HQIC embodies a penalty that is somewhere in between the one typical of the AIC and that of the SBIC
o SBIC is a consistent criterion, i.e., it selects the true model asymptotically
o AIC asymptotically overestimates the order/complexity of a model with positive probability
o It is not uncommon that different criteria lead to different models
o Using the guidance derived from the inspection of the correlogram, we believe that an ARMA model is more likely, given that the SACF and SPACF show a similar behavior
o We could be inclined to conclude in favor of an ARMA(2,1) for the US monthly CPI inflation rate

Estimation Methods: OLS vs MLE
 The estimation of an AR(p) model is generally easy because it can be performed simply by (conditional) OLS
o Conditional on p starting values for the series
 When an MA(q) component is included, the estimation becomes more complicated and requires maximum likelihood
o Please review the Statistics prep-course and/or see the textbook
 However, this opposition is only apparent: conditional on the p starting values, under the assumptions of the classical regression model, OLS and MLE are identical for an AR(p)
o See course 20191 for the classical linear regression model
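Conditional OLS for an AR(p) is just a regression of y_t on a constant and its own lags, dropping the first p observations; a minimal sketch (the AR(2) used to generate the data is hypothetical):

```python
import numpy as np

def fit_ar_ols(y, p):
    """Conditional OLS for an AR(p): regress y_t on (1, y_{t-1}, ..., y_{t-p}),
    conditioning on the first p observations of the sample."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    X = np.column_stack([np.ones(T - p)] +
                        [y[p - i - 1:T - i - 1] for i in range(p)])
    beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return beta                        # [phi0_hat, phi1_hat, ..., phip_hat]

# Generate an AR(2) with phi0 = 1, phi1 = 0.5, phi2 = 0.3 and recover it
rng = np.random.default_rng(3)
eps = rng.normal(size=20500)
y = np.zeros(20500)
for t in range(2, 20500):
    y[t] = 1.0 + 0.5 * y[t - 1] + 0.3 * y[t - 2] + eps[t]
beta = fit_ar_ols(y[500:], 2)          # estimates close to [1.0, 0.5, 0.3]
```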
Estimation Methods: MLE
 The first step in deriving the MLE consists of defining the joint probability distribution of the observed data
 The joint density of the random variables in the sample may be written as a product of conditional densities, so that the log-likelihood function of an ARMA(p, q) process has the form
 L(θ) = Σ_t ln f(y_t | y_{t-1}, y_{t-2}, ...; θ)
o For instance, if y_t has a joint and marginal normal pdf (which must derive from the fact that ε_t has it), then each conditional density is Gaussian
o MLE can be applied to any parametric distribution, even when different from the normal
 Under general conditions, the resulting estimators are consistent and have an asymptotic normal distribution, which may be used for inference
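For the AR(1) case, the Gaussian conditional log-likelihood can be written out explicitly; a sketch (the data-generating parameters φ0 = 0.3, φ1 = 0.6, σ² = 1 are hypothetical), showing that the likelihood is higher at the true parameters than at a misspecified AR coefficient:

```python
import numpy as np

def ar1_cond_loglik(params, y):
    """Conditional Gaussian log-likelihood of an AR(1),
    treating the first observation as fixed."""
    phi0, phi1, sigma2 = params
    e = y[1:] - phi0 - phi1 * y[:-1]       # one-step prediction errors
    n = len(e)
    return -0.5 * n * np.log(2 * np.pi * sigma2) - np.dot(e, e) / (2 * sigma2)

rng = np.random.default_rng(4)
eps = rng.normal(size=5000)
y = np.zeros(5000)
for t in range(1, 5000):
    y[t] = 0.3 + 0.6 * y[t - 1] + eps[t]

ll_true = ar1_cond_loglik((0.3, 0.6, 1.0), y)   # at the true parameters
ll_off  = ar1_cond_loglik((0.3, 0.2, 1.0), y)   # misspecified phi1
```

Maximizing this function over (φ0, φ1, σ²) gives the conditional MLE, which, as noted above, coincides with conditional OLS for the AR coefficients.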
Example: ARMA(2,1) Model of US Inflation



Model Specification Tests
 If the model has been specified correctly, all the structure in the (mean of the) data ought to be captured, and the residuals should not exhibit any predictable patterns
 Most diagnostic checks involve the analysis of the residuals
 ① An intuitive way to identify potential problems with an ARMA model is to plot the residuals or, better, the standardized residuals, i.e., z_t = ε̂_t / σ̂_ε
o If the residuals are normally distributed with zero mean and unit variance, then approximately 95% of the standardized residuals should fall in an interval of ±2 around zero
o It is also useful to plot the squared (standardized) residuals: if the model is correctly specified, such a plot should not display any clusters, i.e., the tendency of high (low) squared residuals to be followed by other high (low) squared standardized residuals
 ② A more formal way to test for normality of the residuals is the Jarque-Bera test
Model Specification Tests: Jarque-Bera Test
o Because the normal distribution is symmetric, the third central moment, denoted by μ3, should be zero; and the fourth central moment, μ4, should satisfy μ4 = 3σ_ε⁴
o A typical index of asymmetry based on the third moment (skewness) of the distribution of the residuals, which we denote by Ŝ, is
 Ŝ = μ̂3 / σ̂_ε³
o The most commonly employed index of tail thickness based on the fourth moment (excess kurtosis), denoted by K̂, is
 K̂ = μ̂4 / σ̂_ε⁴ − 3
o If the residuals were normal, Ŝ and K̂ would have a zero-mean asymptotic distribution, with variances 6/T and 24/T, respectively
o The Jarque-Bera test concerns the composite null hypothesis that both the skewness and the excess kurtosis are zero


Model Specification Tests: Jarque-Bera Test
 Jarque and Bera prove that, because the square roots of the sample statistics λ1² = T Ŝ²/6 and λ2² = T K̂²/24 are N(0,1) distributed, the null consists of a joint test that λ1 and λ2 are zero, tested as H0: λ1 = λ2 = 0, where λ1² + λ2² ~ χ²₂ as T → ∞
 ③ Compute the sample autocorrelations of the residuals and perform tests of hypotheses to assess whether there is any linear dependence
o The same portmanteau tests based on the Q-statistic can be applied to test the null hypothesis that there is no autocorrelation at orders up to h
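The JB statistic T(Ŝ²/6 + K̂²/24) is straightforward to implement; a minimal sketch, using Student-t residuals as a hypothetical fat-tailed alternative to the normal:

```python
import numpy as np

def jarque_bera(resid):
    """Jarque-Bera statistic T*(S^2/6 + K^2/24), with K the excess kurtosis;
    asymptotically chi-squared with 2 degrees of freedom under normality."""
    e = np.asarray(resid) - np.mean(resid)
    T = len(e)
    m2 = np.mean(e ** 2)
    S = np.mean(e ** 3) / m2 ** 1.5        # sample skewness
    K = np.mean(e ** 4) / m2 ** 2 - 3.0    # sample excess kurtosis
    return T * (S ** 2 / 6.0 + K ** 2 / 24.0)

rng = np.random.default_rng(5)
jb_normal = jarque_bera(rng.normal(size=10000))        # small: do not reject
jb_fat    = jarque_bera(rng.standard_t(4, size=10000)) # large: reject normality
```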

Example: ARMA(2,1) Model of US Inflation



Example: ARMA(2,1) Model of US Inflation

(Left panel: residuals; right panel: squared residuals)



Forecasting with ARMA
 In-sample forecasts are those generated with reference to the same data that were used to estimate the parameters of the model
o The R-squared of the model is a measure of in-sample goodness of fit
o Yet, ARMA models are time series models in which the past of a series is used to explain the behavior of the series, so using the R-squared to quantify the quality of a model faces limitations
 We are more interested in how well the model performs when it is used to forecast out-of-sample, i.e., to predict the value of observations that were not used to specify and estimate the model
 Forecasts can be one-step-ahead, ŷ_t(1), or multi-step-ahead, ŷ_t(h)
 In order to evaluate the usefulness of a forecast, we need to specify a loss function that defines how concerned we are if our forecast were to be off relative to the realized value by a certain amount
 Convenient results obtain if one assumes a quadratic loss function, i.e., the minimization of E[(y_{t+h} − ŷ_t(h))²]
Forecasting with AR(p)
o This is known as the mean square forecast error (MSFE)
 It is possible to prove that the MSFE is minimized when ŷ_t(h) is equal to E[y_{t+h} | ℑ_t], where ℑ_t is the information set available at time t
 In words, the conditional mean of y_{t+h} given its past observations is the best forecast of y_{t+h} in terms of MSFE
 In the case of an AR(p) model, we have:
 ŷ_t(h) = φ0 + Σ_{i=1}^p φ_i ŷ_t(h − i)
where ŷ_t(h − i) = y_{t+h−i} whenever h − i ≤ 0
o For instance, ŷ_t(1) = φ0 + φ1 y_t + ... + φp y_{t−p+1}
o The forecast error is e_t(h) = y_{t+h} − ŷ_t(h)
o The h-step forecast can be computed recursively, see the textbook/class notes
 For a stationary AR(p) model, ŷ_t(h) converges to the mean E[y_t] as h grows: the mean reversion property
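The recursion and the mean-reversion property can be sketched in a few lines (the AR(2) parameters and the last two observations below are hypothetical):

```python
import numpy as np

def ar_forecast(y, phi0, phi, h):
    """Recursive h-step forecasts from an AR(p): y_hat(h) depends on earlier
    forecasts, with observed values used whenever the lag reaches the sample."""
    phi = np.asarray(phi, dtype=float)
    p = len(phi)
    hist = list(y[-p:])                 # last p observations, oldest first
    out = []
    for _ in range(h):
        yhat = phi0 + np.dot(phi, hist[::-1])   # phi_1 * most recent, etc.
        out.append(yhat)
        hist = hist[1:] + [yhat]        # plugged-in forecast replaces data
    return np.array(out)

# Long-horizon forecasts approach phi0/(1 - phi1 - phi2) = 1/0.2 = 5
f = ar_forecast([4.0, 6.0], 1.0, [0.5, 0.3], 50)
```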
Forecasting with MA(q)
 Because the model has a memory limited to q periods only, the point forecasts converge to the mean quickly, and they are forced to do so when the forecast horizon exceeds q periods
o E.g., for an MA(2), ŷ_t(1) = μ + θ1 ε_t + θ2 ε_{t−1}, because both shocks have been observed and are therefore known
o Because ε_{t+1} has not yet been observed at time t, and its expectation at time t is zero, then ŷ_t(2) = μ + θ2 ε_t
o By the same principle, ŷ_t(3) = μ, because ε_{t+3}, ε_{t+2}, and ε_{t+1} are not known at time t
 By induction, the forecasts of an ARMA(p, q) model can be obtained from
 ŷ_t(h) = φ0 + Σ_{i=1}^p φ_i ŷ_t(h − i) + Σ_{j=1}^q θ_j E_t[ε_{t+h−j}]
 How do we assess the forecasting accuracy of a model?



Forecasting US CPI Inflation with ARMA Models



Predictable Random Walks vs. Unpredictable White Noise
 By construction, a white noise process is completely unpredictable in a linear sense
 This means that knowledge of past values ε_{t−h} cannot be useful to forecast ε_t, and that ε_{t+h} contains no memory of ε_t, ∀h > 0
o Formally, Cov(ε_{t−h}, ε_t) = Cov(ε_t, ε_{t+h}) = 0 and E[ε_t | ε_{t−h}] = E[ε_{t+h} | ε_t] = 0, ∀h > 0, where it is well known that the covariance is a linear operator because it is based on expectations of products
 Even more extreme, if the time series ε_t were IID, then the process would be completely unpredictable in all possible ways and senses
o Formally, Cov(g(ε_{t−h}), h(ε_t)) = Cov(g(ε_t), h(ε_{t+h})) = 0, ∀h > 0 and any “smooth” functions g(·) and h(·)
o For instance, Cov(ε²_{t−h}, ε_t) = Cov(ε_t², ε²_{t−h}) = 0, ∀h > 0
 Instead, a random walk, y_{t+1} = μ + y_t + ε_{t+1} (with ε_{t+1} white noise), implies strong linear predictability, as
 E_t[y_{t+1}] = E[μ + y_t + ε_{t+1} | y_t] = μ + y_t + E[ε_{t+1} | y_t] = μ + y_t
which means that predictable + unpredictable ∼ predictable
