TSA Chapter 2
Key Difference:
● ACF shows how past values influence the present, including indirect effects.
● PACF isolates the direct impact of a specific lag by removing intermediate influences.
(See the sketch below for what this difference looks like on a simulated series.)
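To make the contrast concrete, here is a minimal sketch (my own illustration, not part of the original notes; it assumes numpy, matplotlib, and statsmodels are installed) that simulates the AR(1) process Yt = 0.75 Yt-1 + et used later in this chapter and plots both functions: the ACF decays slowly because indirect effects accumulate across lags, while the PACF shows a single spike at lag 1.

```python
# Sketch: ACF vs. PACF of a simulated AR(1) series.
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

rng = np.random.default_rng(42)

# Simulate an AR(1) process: Y_t = 0.75 * Y_{t-1} + e_t
n = 500
y = np.zeros(n)
e = rng.normal(size=n)
for t in range(1, n):
    y[t] = 0.75 * y[t - 1] + e[t]

fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(y, lags=20, ax=axes[0])   # slow geometric decay: indirect effects pile up
plot_pacf(y, lags=20, ax=axes[1])  # single spike at lag 1: only the direct effect remains
plt.tight_layout()
plt.show()
```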
Random walk
A random walk is a process in which each new value equals the previous value plus a random shock. Mathematically, it is written as:
Yt = Yt-1 + ϵt, for t > 1
where:
● Yt is the value at time t.
● Yt-1 is the value at the previous time step.
● ϵt is a random number (white noise).
PS: a random walk is similar to an arithmetic sequence, but instead of adding a fixed common difference r, we add a random value ϵt, which changes at every step.

The autocorrelation function of a random walk process:
- For a walk started at zero, Yt is the sum of t shocks, so Var(Yt) = t·σ² and Cov(Yt, Yt−k) = (t−k)·σ², where σ² = Var(ϵt); hence Corr(Yt, Yt−k) = √((t−k)/t) (the sketch below checks this numerically).
- Note that when k is close to 0, the autocorrelation Corr(Yt, Yt−k) is close to 1. That is, two observations Yt and Yt−k close together in time are likely to be close together, especially when t and t−k are both large.
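As a numerical check on that formula, this sketch (an illustration under the stated assumptions: Y0 = 0 and unit-variance Gaussian shocks) simulates many random walks and compares the sample correlation between Yt and Yt−k with √((t−k)/t):

```python
# Sketch: verify Corr(Y_t, Y_{t-k}) = sqrt((t - k) / t) for a random walk.
import numpy as np

rng = np.random.default_rng(0)

n_paths, n_steps = 20_000, 200
eps = rng.normal(size=(n_paths, n_steps))
walks = eps.cumsum(axis=1)          # Y_t = Y_{t-1} + eps_t, so Y_t = sum of shocks

t, k = 199, 20                      # zero-based index: compare Y at steps 200 and 180
sample_corr = np.corrcoef(walks[:, t], walks[:, t - k])[0, 1]
theory_corr = np.sqrt((t + 1 - k) / (t + 1))   # t+1 shocks have accumulated at index t

print(f"sample: {sample_corr:.4f}")
print(f"theory: {theory_corr:.4f}")   # both close to 1 because k << t
```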
Autoregressive (AR) process
An Autoregressive (AR) process is a fundamental time series model where the current value depends linearly on its past values and a random noise term. Autoregression means "regression on itself": essentially, we can envision "regressing" Yt on Yt-1.
Suppose that {et} is a zero-mean white noise process with Var(et) = σ². Consider the stochastic process defined by:
Yt = 0.75 Yt-1 + et
that is, Yt is directly related to the (down-weighted) previous value of the process, Yt-1, and the random error et (a "shock" or "innovation" that occurs at time t). This is called an autoregressive model.
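Since autoregression is literally "regression on itself," the chapter's example coefficient can be recovered from simulated data. A minimal sketch using statsmodels' AutoReg (the series and seed are invented for illustration):

```python
# Sketch: recover the AR(1) coefficient by regressing Y_t on Y_{t-1}.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(7)
n = 2000
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.75 * y[t - 1] + rng.normal()   # the chapter's example process

fit = AutoReg(y, lags=1).fit()
print(fit.params)   # [intercept, lag-1 coefficient]; lag-1 should be near 0.75
```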
Stationary vs. Non-Stationary
Property | Stationary series | Non-stationary series
Mean | Fluctuates around a steady state, without any deterministic trend | May drift or trend over time
Variance | Remains the same | May increase or decrease
Seasonality/Trends | Removed or absent | Present
Autocovariance | Depends only on lag | Changes over time

Common processes at a glance:
Process Type | Stationary?
White noise (random fluctuations) | Yes
Random walk (e.g., stock prices) | No
Moving average (MA) process | Yes
Autoregressive (AR) process | Depends on the coefficients

The Ljung-Box test
The Ljung-Box test is a statistical test used to check for the presence of autocorrelation and seasonality in a time series. It helps determine whether the residuals (errors) of a model are white noise (i.e., random and uncorrelated).
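A minimal sketch of the test in code, using statsmodels' acorr_ljungbox on synthetic residuals (the lag choice of 10 is an arbitrary illustration):

```python
# Sketch: Ljung-Box test on (simulated) model residuals.
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(1)
residuals = rng.normal(size=300)    # stand-in for a fitted model's residuals

# H0: the series is uncorrelated up to the tested lag (i.e., white noise).
result = acorr_ljungbox(residuals, lags=[10], return_df=True)
print(result)   # a large p-value means we cannot reject "residuals are white noise"
```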
Pearson Correlation Coefficient in Time Series Analysis
The Pearson correlation coefficient (r) measures the strength and direction of a linear relationship between two variables. In time series analysis, it is often used to assess the relationship between two time-dependent variables.
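A short sketch with scipy's pearsonr on two synthetic, linearly related series (the coefficients are invented for illustration):

```python
# Sketch: Pearson's r between two linearly related series.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = 0.8 * x + rng.normal(scale=0.5, size=200)   # y depends linearly on x

r, p_value = pearsonr(x, y)
print(f"r = {r:.3f}, p = {p_value:.3g}")        # r close to +1: strong positive linear link
```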
Transforming Non-Stationary Data
If a time series is non-stationary, we can often make it stationary by:
● Differencing: subtracting the previous value from the current value, i.e., ∇Yt = Yt − Yt-1, for t = 2, ..., T.
● Log transformation: taking the natural log of the values to stabilize the variance.
● Removing trends and seasonality, using techniques like decomposition.

Granger Causality Test
The Granger Causality Test is a statistical hypothesis test used to determine whether one time series can predict another. It does not imply direct causality in the traditional sense, but rather whether past values of one time series help forecast future values of another.
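A minimal sketch with statsmodels' grangercausalitytests on synthetic data in which past x genuinely helps predict y (the lag structure and maxlag=2 are illustrative choices; the function tests whether the second column helps predict the first):

```python
# Sketch: Granger causality check on a pair of synthetic series.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * x[t - 1] + rng.normal()   # past x drives present y

data = np.column_stack([y, x])             # test: does x Granger-cause y?
res = grangercausalitytests(data, maxlag=2)

# res[lag][0]['ssr_ftest'] is (F-stat, p-value, df_denom, df_num)
print({lag: round(res[lag][0]["ssr_ftest"][1], 6) for lag in res})
# Small p-values reject "past x gives no predictive help for y".
```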
Testing for stationarity
Unit-root tests such as the Augmented Dickey-Fuller (ADF) test use:
Null Hypothesis (H₀): The time series is non-stationary.
Alternative Hypothesis (H₁): The time series is stationary.
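A sketch combining differencing with this test: difference a simulated random walk and run the ADF test (adfuller from statsmodels) before and after.

```python
# Sketch: ADF test on a random walk, before and after first-differencing.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(4)
walk = rng.normal(size=500).cumsum()   # random walk: non-stationary

diff = np.diff(walk)                   # Y_t - Y_{t-1}

# H0 (ADF): the series is non-stationary (has a unit root).
for name, series in [("level", walk), ("differenced", diff)]:
    stat, p_value, *_ = adfuller(series)
    print(f"{name:>11}: ADF stat = {stat:.2f}, p = {p_value:.3f}")
# Expected: large p for the level, tiny p for the differenced series.
```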
The Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test reverses these hypotheses (it is useful for data sets that might not fit standard distributions):
Null Hypothesis (H₀): The time series is stationary.
Alternative Hypothesis (H₁): The time series is not stationary.
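The same kind of check with statsmodels' kpss; note that a small p-value now points to non-stationarity, the opposite reading from the ADF test (the trend slope and seed below are invented for illustration):

```python
# Sketch: KPSS test on a stationary series vs. a trending one.
import numpy as np
from statsmodels.tsa.stattools import kpss

rng = np.random.default_rng(5)
stationary = rng.normal(size=500)                         # white noise: stationary
trending = 0.05 * np.arange(500) + rng.normal(size=500)   # deterministic trend

# H0 (KPSS): the series is (level-)stationary.
for name, series in [("stationary", stationary), ("trending", trending)]:
    stat, p_value, _, _ = kpss(series, regression="c", nlags="auto")
    print(f"{name:>10}: KPSS stat = {stat:.3f}, p = {p_value:.3f}")
# Expected: large p for white noise; small p (reject H0) for the trending series.
```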
Why Does Stationarity Matter?
Stationarity is important because most time series models (like ARMA and ARIMA) assume that the data is stationary. If the series is not stationary, predictions may be unreliable. By ensuring stationarity, we can build more accurate and interpretable models.

Measuring the trend
- Method of least squares: the most commonly used mathematical method for measuring the trend. A straight line is fitted to the time series when its movements are linear.
- Moving averages: moving averages smooth out the time series. They are used to smooth out short-term fluctuations (for example, averaging each point across the most recent 3 times) and highlight long-term trends or patterns. They help reduce noise, making it easier to identify the underlying trend (see the sketch below).
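A closing sketch of that 3-point moving average, using pandas' rolling mean on an invented noisy linear trend:

```python
# Sketch: smoothing a noisy trend with a 3-point moving average.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
t = np.arange(120)
series = pd.Series(0.1 * t + rng.normal(scale=2.0, size=t.size))

smoothed = series.rolling(window=3).mean()   # average of the most recent 3 values

print(series.head(5).round(2).tolist())
print(smoothed.head(5).round(2).tolist())    # first 2 values are NaN (window not yet full)
```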