
ITSM for Windows

A User's Guide to Time Series
Modelling and Forecasting

Peter J. Brockwell    Richard A. Davis

With 63 Illustrations and 2 Diskettes

Written in collaboration with Rob J. Hyndman

Springer-Verlag
New York Berlin Heidelberg London Paris
Tokyo Hong Kong Barcelona Budapest
Peter J. Brockwell
Mathematics Department
Royal Melbourne Institute of Technology
Melbourne, Victoria 3001
Australia

Richard A. Davis
Department of Statistics
Colorado State University
Fort Collins, CO 80523
USA

Library of Congress Cataloging in Publication Data applied for.

Printed on acid-free paper.

© 1994 Springer-Verlag New York, Inc.

All rights reserved. This work may not be translated or copied in whole or in part without the
written permission of the publisher (Springer-Verlag New York, Inc., 175 Fifth Avenue, New
York, NY 10010, USA), except for brief excerpts in connection with reviews or scholarly
analysis. Use in connection with any form of information storage and retrieval, electronic
adaptation, computer software, or by similar or dissimilar methodology now known or hereafter
developed is forbidden.
The use of general descriptive names, trade names, trademarks, etc., in this publication, even
if the former are not especially identified, is not to be taken as a sign that such names, as
understood by the Trade Marks and Merchandise Marks Act, may accordingly be used freely
by anyone.

Production managed by Ellen Seham; manufacturing supervised by Jacqui Ashri.


Photocomposed copy prepared from the author's LaTeX files.

9 8 7 6 5 4 3 2 1

Additional material to this book can be downloaded from http://extras.springer.com.
ISBN-13: 978-0-387-94337-4    e-ISBN-13: 978-1-4612-2676-5
DOI: 10.1007/978-1-4612-2676-5
Preface

The package ITSM (Interactive Time Series Modelling) evolved from the
programs for the IBM PC written to accompany our book, Time Series:
Theory and Methods, published by Springer-Verlag. It owes its existence to
the many suggestions for improvements received from users of the earlier
programs. Since the release of ITSM Version 3.0 in 1991, a large number of
further improvements have been made and incorporated into the new ver-
sions, ITSM41 and ITSM50, both of which are included with this package.
The latter is capable of handling longer series but requires a PC 80386 or
later with 8 Mbytes of RAM and an EGA or VGA card. The earlier version
ITSM41 requires only a PC 80286 or later with EGA or VGA. (For precise
system requirements, see Section 1.2 on page 2.) The main new features of
the programs are summarized below.
• Addition of two new modules, BURG and LONGMEM for multivariate
and long-memory modelling respectively;
• Adaptation of the programs to run either under DOS or under Mi-
crosoft Windows (Version 3.1 or later);
• An extremely easy to use menu system in which selections can be
made either with arrow-keys, hot-keys or mouse;
• Development of Version 5.0 which permits the analysis of univariate
series of length up to 20,000 and multivariate series of length up to
10,000 with as many as 11 components (on computers with 8Mb of
RAM);
• Incorporation into the program PEST of a number of new features in-
cluding Hannan-Rissanen estimation of mixed ARMA models, Ljung-
Box and McLeod-Li diagnostic statistics, automatic AICC minimiza-
tion for Yule-Walker and Burg AR models and superposition of the
graphs of sample and model spectra and autocovariance functions;
• Incorporation into SMOOTH of a frequency-based smoother (which
eliminates high-frequency components from the Fourier transform of
the data) and automatic selection of the parameter for exponential
smoothing;

• Addition of new features (described in Appendix A) to the screen editor WORD6.

The package includes the screen editor WORD6 and eight programs,
PEST, SMOOTH, SPEC, TRANS, ARVEC, BURG, ARAR and LONGMEM,
whose functions are summarized in Chapter 1.
If you choose to install the smaller version, ITSM41, the corresponding
programs PEST, SPEC and SMOOTH can deal with time series of up to
2300 observations and ARVEC, BURG, ARAR, LONGMEM and TRANS
can handle series of lengths 700, 700, 1000, 1000 and 800 respectively. If
your PC meets the system requirements, you should load ITSM50, which
can handle much longer series (20,000 univariate or 10,000 multivariate
observations).
We are greatly indebted to many people associated with the develop-
ment of the programs and manual. Outstanding contributions were made
by Joe Mandarino, the architect of the original version of PEST, Rob Hynd-
man, who wrote the original version of the manual for PEST, and Anthony
Brockwell, who has given us constant support in all things computational,
providing WORD6, the graphics subroutines, the current menu system and
the expertise which made possible the development of Version 5.0. The first
version of the PEST manual was prepared for use in a short course given
by the Key Centre in Statistical Sciences at Royal Melbourne Institute of
Technology (RMIT) and The University of Melbourne. We are indebted to
the Key Centre for support and for permission to make use of that mate-
rial. We also wish to thank the National Science Foundation for support
of the research on which many of the algorithms are based, R. Schnabel
of the University of Colorado computer science department for permission
to use his optimization program, and Carolyn Cook for her assistance in
the final preparation of an earlier version of the manual. We are grateful
for the encouragement provided by Duane Boes and the excellent working
environments of Colorado State University, The University of Melbourne
and RMIT. The editors of Springer-Verlag have been a constant source of
support and encouragement and our families, as always, have played a key
role in maintaining our sanity.
Melbourne, Victoria P.J. Brockwell
Fort Collins, Colorado R.A. Davis
February, 1994
Contents

Preface v

1 Introduction 1
1.1 The Programs . 1
1.2 System Requirements 2
1.2.1 Installation .. 3
1.2.2 Running ITSM 7
1.2.3 Printing Graphs 7
1.3 Creating Data Files 8

2 PEST 9
2.1 Getting Started . 9
2.1.1 Running PEST 9
2.1.2 PEST Tutorial 10
2.2 Preparing Your Data for Modelling . 10
2.2.1 Entering Data 11
2.2.2 Filing Data . . . . . 12
2.2.3 Plotting Data . . . . 12
2.2.4 Transforming Data . 13
2.3 Finding a Model for Your Data 19
2.3.1 The ACF and PACF .. 19
2.3.2 Entering a Model . . . . 21
2.3.3 Preliminary Parameter Estimation 22
2.3.4 The AICC Statistic ........ 24
2.3.5 Changing Your Model ....... 26
2.3.6 Parameter Estimation; the Gaussian Likelihood . 27
2.3.7 Optimization Results . 31
2.4 Testing Your Model ......... 34
2.4.1 Plotting the Residuals . . . . 36
2.4.2 ACF/PACF of the Residuals 36
2.4.3 Testing for Randomness of the Residuals . 37
2.5 Prediction . . . . . . . . 41
2.5.1 Forecast Criteria . . . . . . . . . . . . . . 41

2.5.2 Forecast Results 41


2.5.3 Inverting Transformations . . . . . . . . . . . . . . . 42
2.6 Model Properties . . . . . . . . . . . . . . . . . . . . . . . . 44
2.6.1 ARMA Models . . . . . . . . . . . . . . . . . . . . . 45
2.6.2 Model ACF, PACF . . . . . . . . . . . . . . . . . . . 46
2.6.3 Model Representations . . . . . . . . . . . . . . . . . 47
2.6.4 Generating Realizations of a Random Series . . . . . 49
2.6.5 Model Spectral Density . . . . . . . . . . . . . . . . 50
2.7 Nonparametric Spectral Estimation. . . . . . . . . . . . . . 53
2.7.1 Plotting the Periodogram . . . . . . . . . . . . . . . 53
2.7.2 Plotting the Cumulative Periodogram . . . . . . . . 55
2.7.3 Fisher's Test . . . . . . . . . . . . . . . . . . . . . . 56
2.7.4 Smoothing to Estimate the Spectral Density ... . 57

3 SMOOTH 60
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.2 Moving Average Smoothing . . . . . . . . . . . . . . . . . . 61
3.3 Exponential Smoothing . . . . . . . . . . . . . . . . . . . . 62
3.4 Removing High Frequency Components . . . . . . . . . . . 64

4 SPEC 66
4.1 Introduction........................... 66
4.2 Bivariate Spectral Analysis . . . . . . . . . . . . . . . . . . 66
4.2.1 Estimating the Spectral Density of Each Series .. . 67
4.2.2 Estimating the Absolute Coherency Spectrum. . . . 69
4.2.3 Estimating the Phase Spectrum. . . . . . . . . . . . 70

5 TRANS 72
5.1 Introduction........................... 72
5.2 Computing Cross Correlations. . . . . . . . . . . . . . . .. 72
5.3 An Overview of Transfer Function Modelling . . . . . . . . 74
5.4 Fitting a Preliminary Transfer Function Model . . . . . .. 76
5.5 Calculating Residuals from a Transfer Function Model . .. 78
5.6 LS Estimation and Prediction with Transfer Function Models 80

6 ARVEC 86
6.1 Introduction........................... 86
6.1.1 Multivariate Autoregression. . . . . . . . . . . . .. 87
6.2 Model Selection with the AICC Criterion . . . . . . . . . . 89
6.3 Forecasting with the Fitted Model . . . . . . . . . . . . .. 89

7 BURG 91
7.1 Introduction . . . . . 91

8 ARAR 95
8.1 Introduction . . . . . . . . . . . . . . . . 95
8.1.1 Memory Shortening . . . . . . . 95
8.1.2 Fitting a Subset Autoregression. 97
8.2 Running the Program . . . . . . . . . . 98
9 LONGMEM 101
9.1 Introduction . . . . . . 101
9.2 Parameter Estimation 102
9.3 Prediction . . . . . . . 104
9.4 Simulation . . . . . . . 105
9.5 Plotting the Model and Sample ACVF . 106

Appendix A: The Screen Editor WORD6 108


A.1 Basic Editing . 108
A.2 Alternate Keys . . . . . . . 108
A.3 Printing a File . . . . . . . 109
A.4 Merging Two or More Files 109
A.5 Margins and Left and Centre Justification 109
A.6 Tab Settings ... 110
A.7 Block Commands . 110
A.8 Searching . . . . . 111
A.9 Special Characters 111
A.10 Function Keys .. 112
A.11 Editing Information 112

Appendix B: Data Sets 113

Index 116
1
Introduction
1.1 The Programs
The time series programs described in this manual are all included in the
package ITSM (Interactive Time Series Modelling) designed to accompany
the book Time Series: Theory and Methods by Peter Brockwell and Richard
Davis (Springer-Verlag, Second Edition, 1991). With this manual you will
find two versions of the package, ITSM41 and ITSM50 (each on a 3.5"
diskette). The system requirements for ITSM41 are less demanding than
those for ITSM50 (see Section 1.2); however, ITSM50 can handle larger data
sets (univariate series with up to 20000 observations and multivariate
series with up to 10000 observations of each of 11 components). Both
versions of the package
contain the programs listed below.
PEST is a program for the modelling, analysis and forecasting of uni-
variate time series. The name "PEST" is an abbreviation for Parameter
ESTimation.
SPEC is a program which performs non-parametric spectral estimation
for both univariate and bivariate time series.
SMOOTH permits the user to apply symmetric moving average, expo-
nential or low-pass smoothing operators to a given data set.
TRANS allows the calculation and plotting of sample cross-correlations
between two series of equal lengths, and the fitting of a transfer function
model to represent the relation between them.
ARVEC uses the Yule-Walker equations to fit vector autoregressive mod-
els to multivariate time series with up to 6 components (ITSM41) or 11 com-
ponents (ITSM50) and allows automatic order-selection using the AICC
criterion.
BURG uses Burg's algorithm to fit autoregressive models to multivariate
time series with up to 6 components (ITSM41) or 11 components (ITSM50)
and allows automatic order-selection using the AICC criterion.
ARAR is based on the ARARMA forecasting technique of Newton and
Parzen. For a univariate data set it first selects and applies (if necessary) a
memory-shortening transformation to the data. It then fits a subset autore-
gressive model to the memory-shortened series and uses the fitted model
to calculate forecasts.

LONGMEM can be used to simulate data from a specified fractionally
integrated ARMA model with zero mean. It can also be used to fit such
a model to a data set (by maximizing the Whittle approximation to the
Gaussian likelihood) and to forecast future values of the series.
This manual is designed to be a practical guide to the use of the programs.
For a more extensive discussion of time series modelling and the methods
used in ITSM, see the book Time Series: Theory and Methods, referred to
subsequently as BD. Information regarding the data sets included with the
package is contained in Appendix B. Further details, and in some cases an
analysis of the data, can be found in BD.

1.2 System Requirements


ITSM41 :

• IBM PC (286 or later) or compatible computer operating under MS-DOS;
to run the programs in WINDOWS, version 3.1 or later is required;

• at least 540 K of RAM available for applications (to determine your
available RAM use the DOS command mem and observe Largest ex-
ecutable program size); if you have DOS Version 6.0 or later you can
optimize your available RAM by running memmaker;
• a hard disk with at least 1.1 Mb of space available;
• an EGA or VGA card for graphics;
• a mathematics co-processor (recommended but not essential).
ITSM50:

• IBM PC (386 or later) or compatible computer operating under MS-DOS;
to run the programs in WINDOWS, version 3.1 or later is required;

• at least 8 Mb of RAM;
• a hard disk with at least 2.6 Mb of space available;

• an EGA or VGA card for graphics;


• a mathematics co-processor (recommended but not essential).
When booting the computer, the program ANSI.SYS should be loaded.
This is done by including the command DEVICE=ANSI.SYS in your
CONFIG.SYS file.

1.2.1 INSTALLATION
1. Select a version of ITSM suitable for your system configuration. To
install the programs and data on your hard disk in a directory called
C:\ITSMW, place the program disk in Drive A and type

C: <Enter>

A:UNZIP A:ITSMW <Enter>

(other drives may be substituted for A: and C:). The files on the disk
will then be copied into a directory C:\ITSMW.
2. We shall assume now that you have installed ITSM as in 1 above and
are in the directory C:\ITSMW.

PRELIMINARIES FOR DOS OPERATION


(If you plan to run the programs under Microsoft Windows go to 4 below.)
3. Before running ITSM you will need to load a graphics dump program
if you wish to print hard copies of the graphs. This can be done as
follows:
(a) To print graphs on an HP LaserJet printer connected as lpt1:,
type
HPDUMP 1 <Enter>

(b) To print graphs on an HP LaserJet printer connected as lpt2:,
type
HPDUMP 2 <Enter>

(c) To print graphs on an Epson dot matrix printer connected as
lpt1: type
EPSDUMP <Enter>

(d) To save graphs in a disk file FNAME first execute either a or c
above and then type
LPTX -0 FNAME -1 <Enter>

Subsequent output directed to lpt1: is then stored cumulatively
in the file FNAME. [To switch off this option, type LPTX -c
<Enter>.]
NOTES: If in steps a, b or c you get the message "highres already
loaded" it means that one of hpdump 1, hpdump 2 or epsdump has
already been loaded and you will need to reboot the computer if you
wish to load a different one. If you have an HP LaserJet III printer
with 1 Mb or more of optional memory installed, it is essential to set
Page Protection on the printer to the appropriate page size or you
will get the error message, 21 PRINT OVERRUN, and the bottom of
the printed graph will be cut off. (See the LaserJet III Printer User's
Manual, p. 4.25.)

PRELIMINARIES FOR OPERATION UNDER WINDOWS (3.1 OR LATER)
4. If you wish to run the programs under Microsoft Windows you will
first need to carry out the following steps:

(a) Type
C:\ITSMW\INVSCRN <Enter>
This loads the program invscrn which will be used for printing
graphs and other screen displays. It is convenient to bypass this
step by adding the line
C:\ITSMW\INVSCRN
to your autoexec.bat file. This can be done by typing
WORD6 C:\AUTOEXEC.BAT <Enter>
and inserting the required line. The modified file must then be
saved by holding down the <Alt> key while typing W and then
typing <Enter>. To exit from WORD6 hold down the <Alt> key and
type X. The program invscrn will then be automatically loaded
each time you boot your computer.
(If you have installed ITSM50, you must also add the line
DEVICE=C:\ITSMW\DOSXNT.386
immediately below the line
[386Enh]
of the system.ini file in the directory C:\WINDOWS. Do this by
typing
WORD6 C:\WINDOWS\SYSTEM.INI <Enter>
and proceeding as above.)
(b) Type
COPY C:\ITSMW\ITSMWIN.REC C:\WINDOWS <Enter>
COPY C:\ITSMW\*.ICO C:\WINDOWS <Enter>
(c) Run WINDOWS by typing WIN <Enter>
(d) Double click on the RECORDER icon in the ACCESSORIES
window
(e) Click on FILE in the RECORDER window
(f) Click on the option OPEN
(g) Click on the file name ITSMWIN.REC
(h) Click on OK
(i) Click on MACRO in the RECORDER window
(j) Click on RUN

You should now see a window labelled itsmw containing icons for the
ITSM modules PEST, SMOOTH, etc. and the screen editor WORD6.
To run any one of them, e.g. WORD6, double click on the appropriate
icon. To exit from the screen editor WORD6, hold down the <Alt> key
and press X. To terminate the RECORDER session, click on the icon
labelled RECORDER-ITSMWIN.REC and then click on CLOSE. In
case of difficulty running ITSMWIN.REC, the WINDOWS installa-
tion can be done manually as described below.
You may wish to resize and relocate the itsmw window. Once you
have done this, you can save the window display as follows. Click on
OPTIONS in the PROGRAM MANAGER window and click on the
SAVE SETTINGS ON EXIT option so that a check appears beside it.
Then exit from WINDOWS. When you next run WINDOWS by typ-
ing WIN you will see the same arrangement of windows, including the
itsmw window set up previously. To prevent inadvertently changing
this arrangement when you next exit from WINDOWS, click again on
OPTIONS in the PROGRAM MANAGER window and then click on
the SAVE SETTINGS ON EXIT option to remove the check mark.

MANUAL SETUP FOR OPERATION UNDER WINDOWS


In case you had trouble running the setup procedure in Step 4 above, here
is an alternative but less streamlined procedure to replace it:

(a) Load invscrn and (if you are installing ITSM50) modify your
system.ini file as described in 4(a)
(b) Copy the files as in 4(b) and run WINDOWS by typing WIN <Enter>
(c) Click on FILE in the PROGRAM MANAGER window
(d) Click on the option NEW
(e) Click on PROGRAM GROUP
(f) Click on OK
(g) After the heading DESCRIPTION type itsmw
(h) Click on OK (At this point a window will open with the heading
itsmw.)
(i) Click again on FILE in the PROGRAM MANAGER window
(j) Click on NEW
(k) Click on PROGRAM ITEM
(l) Click on OK
(m) After the heading DESCRIPTION type pest
(n) After the heading COMMAND LINE type C:\ITSMW\PEST.PIF
(replace PIF by EXE for ITSM41)

(o) After the heading WORKING DIRECTORY type C:\ITSMW


(p) Click on CHANGE ICON
(q) Click on OK
(r) Type PEST.ICO
(s) Click on OK
(t) Click on OK
(u) Click on OK

The itsmw window will now contain an icon labelled pest. To run pest,
double click on this icon and a title page will appear on the screen.
Follow the screen prompts to exit from PEST.
Repeat steps (i)-(u), replacing pest in (m), (n) and (r) by smooth. A
second icon will then appear in the itsmw window, labelled smooth.
Repeat steps (i)-(u) for each of the other modules, SPEC, TRANS,
ARVEC, BURG, ARAR, LONGMEM and WORD6, in each case re-
placing pest in (m), (n) and (r) by the appropriate module name.
You should then have nine icons in the itsmw window. Each module,
e.g. WORD6, is run by double clicking on the appropriate icon. To
exit from the screen editor WORD6 hold down the <Alt> key and
press X.
To save the window display, click on OPTIONS in the PROGRAM
MANAGER window and click on the SAVE SETTINGS ON EXIT
option so that a check appears beside it. Then exit from WINDOWS.
When you again run WINDOWS by typing WIN you will see the same
arrangement of windows, including the itsmw window set up previ-
ously. To prevent inadvertently changing this arrangement when you
next exit from WINDOWS, click on the SAVE SETTINGS ON EXIT
option again to remove the check mark.

It is usually advantageous, especially when saving or printing graphs, to
run the programs in full-screen mode. Holding down the <Alt> key and
pressing <Enter> toggles the programs between full-screen and window
modes.
NOTE. If after installation you select a module and nothing happens, it
is very likely that you do not have sufficient RAM available for applications
(to check your available RAM, use the DOS command mem). To run the
ITSM41 programs under WINDOWS you will need a Largest executable
program size of 537K. To run ITSM41 under DOS you will need 548K,
however the modules can also be run directly from the DOS prompt (by
typing PEST <Enter>, SMOOTH <Enter>, etc. instead of ITSM <Enter>) in which case 537K
of RAM will suffice. If you have DOS 6.0 or later you can optimize your
available RAM using the DOS program memmaker.

1.2.2 RUNNING ITSM


5. You are now ready to run ITSM. (The preliminary loading of hpdump
or epsdump for DOS operation or of invscrn for Windows operation
is required only when you boot the computer.)
If you are running the programs in DOS, change the directory to
ITSMW by typing
CD \ITSMW <Enter>
Making sure the <Num Lock> key is off, type
ITSM <Enter>

and select a module (e.g. SMOOTH) from the ITSM menu by high-
lighting your selection with the arrow keys and pressing <Enter>. (If you
have an activated mouse, the mouse pointer must be clear of the
menu choices before you can use the arrow keys. Depending on your
mouse driver, you may be able to use your mouse for menu selection.
In case of problems with the mouse, you should deactivate it and use
the arrow keys.)
If you are running the programs in WINDOWS, double click with
the mouse on one of the icons (e.g. SMOOTH) located in the itsmw
window.
When you see the module title enclosed in a box on the screen, press
any key to continue, selecting items from the menus as they appear.
After exiting from the module SMOOTH you can exit from ITSM (if
you are running in DOS) by pressing <F10>.

1.2.3 PRINTING GRAPHS


6. DOS: Assuming you have carried out the appropriate steps described
in 3, graphs or text which appear on the screen can be printed (or
filed) by pressing <Shift> <Prt Scr> when you see the required
screen display. If you chose to file graphics output in FNAME, you
can print the stored image after exiting from ITSM by typing, e.g.,
COPY /B FNAME LPT2:
(assuming you chose 3a and 3d above and now have an HP LaserJet
printer connected as lpt2:), or
COPY FNAME LPT2:
(if you chose 3c and 3d above and now have an Epson dot matrix
printer connected as lpt2:).
WINDOWS: To print any screen display from ITSM, first make sure
you are operating in full screen mode (by holding down the <Alt> key
and pressing <Enter> if necessary). Then hold down the <Shift> key

and press <Print Scrn>. Provided you have loaded the program
invscrn as described in step 4(a) above, this will cause the screen image
to be inverted to black on white. Then press the <Print Scrn> key
and the displayed text or graph will be copied to the CLIPBOARD.
You can transfer it to a document in (for example) Microsoft Write
using the commands EDIT then PASTE. The document containing
the graph can be printed on whatever printer you have set up to op-
erate under WINDOWS by using the commands FILE and PRINT
in Microsoft Write. To switch between applications in Windows (such
as ITSM and Microsoft Write), hold down the <Alt> key and press
<Tab>.

1.3 Creating Data Files


All data to be used in the programs (except those for ARVEC and BURG)
should be stored in standard ASCII files in column form. That is, each
value must be on a separate row. There must also be a blank line at the
end of each data file. The programs will read the first item of data from
each row. Most of the data sets used in BD (and a number of others) are
included on the diskettes in this form. Data sets for ARVEC and BURG
are multivariate, with the m components observed at time t stored in row
t of the file. (See for example the 150 observations of the bivariate series
contained in the file LS2.DAT.)
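The column format described above is easy to produce with other software as well. The following Python sketch (an illustration only, not part of ITSM; the file names TEST.DAT and TEST2.DAT are arbitrary) writes a univariate file and a bivariate file in the required form:

# Sketch: writing data files in the column format expected by ITSM.
univariate = [112.0, 118.0, 132.0, 129.0, 121.0]      # one value per row
with open("TEST.DAT", "w") as f:
    for x in univariate:
        f.write(str(x) + "\n")
    f.write("\n")                                      # blank line at the end

# Multivariate data (e.g. for ARVEC or BURG): the m components observed
# at time t are stored in row t.
bivariate = [(1.2, 0.7), (1.5, 0.9), (1.1, 0.4)]
with open("TEST2.DAT", "w") as f:
    for row in bivariate:
        f.write(" ".join(str(v) for v in row) + "\n")
    f.write("\n")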
All data files can be examined and edited using WORD6 - the screen
editor provided on the diskette. New data files can also be created using
WORD6. For example, to create a data file containing the numbers 1 to 5:
• Double click on the WORD6 icon (in WINDOWS) or type WORD6 <Enter>
(in DOS) to invoke the screen editor WORD6.

• Then type
1 <Enter> 2 <Enter> 3 <Enter> 4 <Enter> 5 <Enter>

• Hold down the <Alt> key and press W. You will be asked for a filename.
Type TEST.DAT <Enter>. Your new data file consisting of the column of
numbers 1 2 3 4 5 will then be stored on your disk under the name
TEST.DAT.

• To leave WORD6, hold down the <Alt> key again and press X.
• To read your new file, invoke WORD6 again as above. Then hold down
the <Alt> key and press R. You will be asked for a file name. Type
TEST.DAT <Enter>. Your new data file consisting of the column of numbers
1 2 3 4 5 will then be read into WORD6 and printed on the screen.
For further information on the use of WORD6 see Appendix A.
2
PEST
2.1 Getting Started
2.1.1 RUNNING PEST
Double click on the icon labelled pest in the itsmw window (or in DOS
type PEST <Enter> from the C:\ITSMW directory) and you should see the screen
displayed in Figure 2.1. Then press any key and you will see the Main Menu
of PEST as shown in Figure 2.2.
At this stage 7 options are available. Further options will appear in the
Main Menu after data are entered.
PEST is menu-driven so that you are required only to make choices be-
tween options specified by the program. For example, you can choose the
first option of the Main Menu [Data entry; statistics; transformations] by typ-
ing the highlighted letter D. (In the text, the letter corresponding to the
"hot" key for immediate selection of menu options will always be printed in
boldface.) This option can also be chosen by moving the highlight bar with
the mouse to the first row of the menu and clicking. A third alternative
is to move the mouse pointer out of the menu box, use the arrow keys to
move the highlight bar and then press <Enter>. After selecting this option you
will see the Data Menu, from which you can make a further selection, e.g.
Load new data set, in the same way. To return to the Main Menu, select
the last item of the data menu (e.g. by typing R). For the remainder of the
book we shall indicate selection of menu items by typing the highlighted
letter, but in all cases the other two methods of menu selection can equally
well be used.
There are several distinct functions of the program PEST. The first is
to plot, analyze and possibly transform time series data, the second is
to compute properties of time series models, and the third utilizes the
previous two in fitting models to data. The latter includes checking that
the properties of the fitted model match those of the data in a suitable sense.
Having found an appropriate model, we can (for example) then use it in
conjunction with the data to forecast future values of the series. Sections
2.2-2.5 and 2.7 of this manual deal with the modelling and analysis of data,
while Section 2.6 is concerned with model properties.
It is important to keep in mind the distinction between data and model
properties and not to confuse the data with the model. At any particular
time PEST typically stores one data set and one model (which can be
identified using the option [Current model and data file status] of the Main
Menu). Rarely (if ever) is a real time series generated by a model as simple

FIGURE 2.1. The title page of the program PEST for ITSM41

as those used for fitting purposes. Our aim is to develop a model which
mimics important features of the data, but is still simple enough to be
used with relative ease.

2.1.2 PEST TUTORIAL


The examples in this chapter constitute a tutorial session for PEST in
serialized form. They lead you through a complete analysis of the well-
known Airline Passenger Series of Box and Jenkins (see Appendix B).

2.2 Preparing Your Data for Modelling


Once the observed values of your time series are available in a single-column
ASCII file (see Section 1.3), you can begin model fitting with PEST. The
program will read your data from the file, plot it on the screen, compute
sample statistics and allow you to do a number of transformations designed
to make your transformed data representable as a realization of a zero-mean
stationary process.

EXAMPLE: To illustrate the analysis we shall use the data file
AIRPASS.DAT, which contains the number of international airline
passengers (in thousands) for each month from Jan '49 to Dec '60.

FIGURE 2.2. The Main Menu of PEST

2.2.1 ENTERING DATA


From the Main Menu of PEST select the first option (Data entry; statistics;
transformations) by typing D . The Data Menu will then appear. Choose
Option 1 and you will be asked to confirm that you wish to enter new data.
Respond by typing Y . A list of data files will then appear (in ITSM50 you
must first use the arrow keys to move the highlight bar to <DATA> and then
press <Enter> to see the data files). To select a data file for analysis, move the
highlight bar to the name of the required file and press <Enter>. The program
PEST will then read in your data and display on the screen the number of
observations in the data as well as the first three and last data points.
A new data file can always be imported using Option 1 of the Data
Menu. Note however that the previous data file is eliminated from PEST
each time a new file is read in.

EXAMPLE: Go through the above steps to read the airline pas-


senger data into PEST. The file name is AIRPASS.DAT. Once
the file has been read in, the screen should appear as in Figure
2.3.
FIGURE 2.3. The PEST screen after reading in the file AIRPASS.DAT (showing the
number of observations, the sample mean and variance, the standard error of the
mean, and the Data Menu)

2.2.2 FILING DATA


You may wish to change your data using PEST and then store it in another
file. At any time before or after transforming the data in PEST, the data
can be filed by choosing Option 10 from the Data Menu. Do not use the
name of a file that already exists or it will be overwritten.

2.2.3 PLOTTING DATA


The first step in the analysis of any time series is to plot the data. With
PEST the data can be plotted by selecting Option 2 from the Data Menu.
This will first produce a histogram of the data; pressing any key then causes
a graph of the data vs. time to appear on the screen.
Under the histogram several sample statistics are printed. These are de-
fined as follows:

Mean:  x̄ = (1/n) Σ_{t=1}^{n} x_t

Standard Deviation:  s = sqrt( (1/n) (Σ_{t=1}^{n} x_t² - n x̄²) )

Coefficient of Skewness:  (1/(n s³)) Σ_{t=1}^{n} (x_t - x̄)³

FIGURE 2.4. The histogram of the series AIRPASS.DAT
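These statistics are easy to check by hand. The following Python sketch (an illustration only, not part of PEST) computes them exactly as defined above:

import math

def sample_stats(x):
    """Mean, standard deviation and coefficient of skewness as defined above."""
    n = len(x)
    mean = sum(x) / n
    s = math.sqrt((sum(v * v for v in x) - n * mean * mean) / n)
    skew = sum((v - mean) ** 3 for v in x) / (n * s ** 3)
    return mean, s, skew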

EXAMPLE: Continuing with our analysis of the data file
AIRPASS.DAT, choose Option 2 from the Data Menu. The first
graph displayed is a histogram of the data, shown in Figure
2.4. Then press any key to obtain the time-plot shown in Fig-
ure 2.5. Finally press any key and type C to return to the Data
Menu.

2.2.4 TRANSFORMING DATA (BD Sections 1.4,9.2)


Transformations are applied in order to produce data which can be suc-
cessfully modelled as "stationary time series". In particular we need to
eliminate trend and cyclic components and to achieve approximate con-
stancy of level and variability with time.

EXAMPLE: The airline passenger data are clearly not stationary.


The level and variability both increase with time and there
appears to be a large seasonal component (with period 12).
FIGURE 2.5. The time-plot of the series AIRPASS.DAT

Non-stationary data must be transformed before attempting to fit a sta-
tionary model. PEST provides a number of transformations which are useful
for this purpose.

BOX-COX TRANSFORMATIONS (BD Section 9.2)


Box-Cox transformations can be carried out by selecting Option 5 of the
Data Menu. If the original observations are Y_1, Y_2, ..., Y_n, the Box-Cox
transformation f_λ converts them to f_λ(Y_1), f_λ(Y_2), ..., f_λ(Y_n), where

    f_λ(y) = (y^λ - 1)/λ,   λ ≠ 0,
    f_λ(y) = log(y),        λ = 0.

These transformations are useful when the variability of the data in-
creases or decreases with the level. By suitable choice of λ, the variability
can often be made nearly constant. In particular, for positive data whose
standard deviation increases linearly with level, the variability can be sta-
bilized by choosing λ = 0.
The choice of λ can be made by trial and error, using the graphs of
the transformed data which can be plotted using Option 2 of the Data
Menu. (After inspecting the graph for a particular λ you can invert the
transformation using Option 5 of the Data Menu, after which you can then
try another value of λ.) Very often it is found that no transformation is
needed or that the choice λ = 0 is satisfactory.
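The transformation itself is simple to reproduce outside the package. Here is a minimal Python sketch (not part of PEST) of f_λ as defined above:

import math

def box_cox(y, lam):
    """Box-Cox transformation f_lambda; the data y must be positive."""
    if lam == 0:
        return [math.log(v) for v in y]
    return [(v ** lam - 1.0) / lam for v in y]

# lam = 0 (natural logarithms) is the choice used for AIRPASS.DAT below.
logged = box_cox([112.0, 118.0, 132.0], 0)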
FIGURE 2.6. The series AIRPASS.DAT after taking logs

EXAMPLE: For the series AIRPASS.DAT the variability
increases with level and the data are strictly positive. Taking
natural logarithms (i.e. choosing a Box-Cox transformation with
λ = 0) gives the transformed data shown in Figure 2.6. You
can perform this transformation and plot the graph (starting in
the Data Menu with the data file AIRPASS.DAT) by typing 5
0 <Enter> (to transform the data), then 2 <Enter> (to plot the graph).

Notice how the variation no longer increases. The seasonal
effect remains, as does the upward trend. These will be removed
shortly. Since the log transformation has stabilized the variabil-
ity, it is not necessary to consider other values of A. Note that
the data stored in PEST now consists of the natural logarithms
of the original data.

CLASSICAL DECOMPOSITION (BD Section 1.4)


There are two methods provided in PEST for the elimination of trend and
seasonality. These are
(i) "classical decomposition" of the series into a trend component, a sea-
sonal component and a random residual component and
(ii) differencing.

FIGURE 2.7. The logged AIRPASS.DAT series after classical decomposition

Classical decomposition of the series {X_t} is based on the model

    X_t = m_t + s_t + Y_t,

where X_t is the observation at time t, m_t is a "trend component", s_t is
a "seasonal component" and Y_t is a "random noise component" which is
stationary with mean zero. The objective is to estimate the components m_t
and s_t and subtract them from the data to generate a sequence of residuals
(or estimated noise) which can then be modelled as a stationary time series.
To achieve this, select Option 6 then Option 7 from the Data Menu. (You
can also estimate trend only or seasonal component only by selecting the
appropriate option separately.)
The estimated noise sequence automatically replaces the previous data
stored in PEST.
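To make the idea concrete, the following Python sketch carries out one simple version of such a decomposition (a straight-line trend fitted by least squares, and a seasonal component formed from seasonal means of the detrended data). It is an illustration only; PEST's own estimates of m_t and s_t are computed differently.

def classical_decomposition(x, period):
    """Rough decomposition x_t = m_t + s_t + y_t (illustrative, not PEST's method)."""
    n = len(x)
    t = list(range(n))
    tbar, xbar = sum(t) / n, sum(x) / n
    beta = (sum((ti - tbar) * (xi - xbar) for ti, xi in zip(t, x))
            / sum((ti - tbar) ** 2 for ti in t))
    alpha = xbar - beta * tbar
    trend = [alpha + beta * ti for ti in t]                      # m_t
    detrended = [xi - mi for xi, mi in zip(x, trend)]
    means = [sum(detrended[i::period]) / len(detrended[i::period])
             for i in range(period)]
    overall = sum(means) / period
    seasonal = [means[i % period] - overall for i in range(n)]   # s_t (sums to ~0)
    residual = [xi - mi - si for xi, mi, si in zip(x, trend, seasonal)]
    return trend, seasonal, residual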

EXAMPLE: The logged airline passenger data has an apparent
seasonal component of period 12 (corresponding to the month
of the year) and an approximately linear trend. Remove these
by typing 6 12 <Enter> <Enter> 7 1 <Enter> (starting from the Data Menu).
Figure 2.7 shows the transformed data (or residuals) Y_t, ob-
tained by classical decomposition of the logged AIRPASS.DAT
series. {Y_t} shows no obvious deviations from stationarity and
it would now be reasonable to attempt to fit a stationary time

series model to this series. We shall not pursue this approach
any further in our tutorial, but turn instead to the differenc-
ing approach. (After completing the tutorial, you should have
no difficulty in returning to this point and completing the clas-
sical decomposition analysis by fitting a stationary time series
model to {yt}.)
Restore the original airline passenger data into PEST by us-
ing Option 1 of the Data Menu and reading in the file AIR-
PASS.DAT.

DIFFERENCING (BD Sections 1.4, 9.1, 9.6)


Differencing is a technique which can also be used to remove seasonal com-
ponents and trends. The idea is simply to consider the differences between
pairs of observations with appropriate time-separations. For example, to re-
move a seasonal component of period 12 from the series {X_t}, we generate
the transformed series

    Y_t = X_t - X_{t-12}.
It is clear that all seasonal components of period 12 are eliminated by this
transformation, which is called differencing at lag 12. A linear trend can
be eliminated by differencing at lag 1, and a quadratic trend by differencing
twice at lag 1 (i.e. differencing once to get a new series, then differencing
the new series to get a second new series). Higher-order polynomials can
be eliminated analogously. It is worth noting that differencing at lag 12
not only eliminates seasonal components with period 12 but also any linear
trend.
Repeated differencing can be done with PEST by selecting Option 8 from
the Data Menu.
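Differencing is a one-line operation; the sketch below (illustrative Python, not PEST's code) defines it and indicates the lag-12 followed by lag-1 differencing used in the example that follows.

def difference(x, lag):
    """Return the differenced series y_t = x_t - x_{t-lag}."""
    return [x[t] - x[t - lag] for t in range(lag, len(x))]

# Deseasonalize at lag 12, then remove the remaining linear trend at lag 1:
# y = difference(difference(logged_series, 12), 1)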

EXAMPLE: At this stage of the analysis we have restored the
original data set AIRPASS.DAT into PEST with the Data Menu
displayed on the screen. Type 5 0 <Enter> to replace the stored
observations by their natural logs. The transformed series can
now be deseasonalized by differencing at lag 12. To do this
type 8 12 <Enter>. Inspection of the graph of the deseasonalized
series suggests a further differencing at lag 1 to eliminate the
remaining trend. To do this type 8 1 <Enter>. Then type 2 <Enter>
and you should see the transformed and twice differenced series
shown in Figure 2.8.

SUBTRACTING THE MEAN


The term ARMA model is used in this manual (and in BD) to mean a
stationary zero mean process satisfying the defining difference equations
FIGURE 2.8. The series AIRPASS.DAT after taking logs and differencing at lags
12 and 1

in Section 2.6.1. In order to fit such a model to data, the sample mean
of the data should therefore be small. (An estimate of the standard error
of the sample mean is displayed on the screen just after reading in the
data file (see Figure 2.3).) Once the apparent deviations from stationarity
of the data have been removed, we therefore (in most cases) subtract the
sample mean of the transformed data from each observation to generate
a series to which we then fit a zero-mean stationary model. Effectively we
are estimating the mean of the model by the sample mean, then fitting
a (zero-mean) ARMA model to the "mean-corrected" transformed data.
If we know a priori that the observations are from a process with zero
mean then this process of mean correction is omitted. PEST keeps track
of all the transformations (including mean correction) which are made.
You can check these for yourself by going to Option 10 of the Main Menu.
When it comes time to predict the original series, PEST will invert all these
transformations automatically.

EXAMPLE: Subtract the mean of the transformed and twice


differenced AIRPASS.DAT series by typing 9. Type R to return
to the Main Menu, then C to check the status of the data and
model which currently reside in PEST. You will see in particular
that the default white noise model (ARMA(O,O)) with variance
1 is displayed since no model has yet been entered.

2.3 Finding a Model for Your Data


After transforming the data (if necessary) as described in Section 2.2.4,
we are now in a position to fit a zero-mean stationary time series model.
PEST restricts attention to ARMA models (see Section 2.6.1). These consti-
tute a very large class of zero-mean stationary time series. By appropriate
choice of the parameters of an ARMA process {X_t}, we can arrange for
the covariances Cov(X_{t+h}, X_t) to be arbitrarily close, for all h, to the cor-
responding covariances γ(h) of any stationary series with γ(0) > 0 and
lim_{h→∞} γ(h) = 0. But how do we find the most appropriate ARMA model
for a given series? PEST uses a variety of tools to guide us in the search.
These include the ACF (autocorrelation function), the PACF (partial au-
tocorrelation function) and the AICC statistic (a bias-corrected form of
Akaike's AIC statistic, see BD Section 9.3).

2.3.1 THE ACF AND PACF (BD Sections 1.3, 3.3, 3.4, 8.2)


The autocorrelation function (ACF) of the stationary time series {X_t}
is defined as

    ρ(h) = Corr(X_{t+h}, X_t)   for h = 0, ±1, ±2, ...

(Clearly ρ(h) = ρ(-h) if X_t is real-valued, as we assume throughout.)

The ACF is a measure of dependence between observations as a func-
tion of their separation along the time axis. PEST estimates this function
by computing the sample autocorrelation function, ρ̂(h), of the data

    ρ̂(h) = γ̂(h)/γ̂(0),   0 ≤ h < n,

where γ̂(·) is the sample autocovariance function,

    γ̂(h) = n^{-1} Σ_{t=1}^{n-h} (x_{t+h} - x̄)(x_t - x̄),   0 ≤ h < n.
Option 3 of the Data Menu can be used to compute and plot the sample
ACF for values of the lag h from 1 up to 40. Values which decay rapidly as
h increases indicate short term dependency in the time series, while slowly
decaying values indicate long term dependency. For ARMA fitting it is
desirable to have a sample ACF which decays fairly rapidly (see BD Chapter
9). A sample ACF which is positive and very slowly decaying suggests
that the data may have a trend. A sample ACF with very slowly damped
periodicity suggests the presence of a periodic seasonal component. In either
of these two cases you may need to transform your data before continuing
(see Section 2.2.4).
Another useful diagnostic tool is the sample partial autocorrelation
function or sample PACF.

The partial autocorrelation function (PACF) of the stationary time series
{X_t} is defined (at lag h > 0) as the correlation between the residuals of
X_{t+h} and X_t after linear regression on X_{t+1}, X_{t+2}, ..., X_{t+h-1}. This is a
measure of the dependence between X_{t+h} and X_t after removing the effect
of the intervening variables X_{t+1}, X_{t+2}, ..., X_{t+h-1}. The sample PACF is
estimated from the data x_1, ..., x_n as described in BD Section 3.4.
The sample ACF and PACF are computed and plotted by choosing Op-
tion 3 of the Data Menu. PEST will prompt you to specify the maximum
lag required. This is restricted by PEST to be less than or equal to 40. (As
a rule of thumb, the estimates are reliable only for lags which are small
relative to the sample size. It is clear from the definition of the sample ACF,
ρ̂(h), that it will be a very poor estimator of ρ(h) for h close to the sample size n.)
Once you have specified the maximum lag, M, the sample ACF and
PACF values will be plotted on the screen for lags h from 0 to M. The
horizontal lines on the graph display the bounds ±1.96/√n which are ap-
proximate 95% bounds for the autocorrelations of a white noise sequence.
If the data is a (large) sample from an independent white noise sequence,
approximately 95% of the sample autocorrelations should lie between these
bounds. Large or frequent excursions from the bounds suggest that we need
a model to explain the dependence and sometimes suggest the kind of model
we need (see below). Press any key and the numerical values of the sample
ACF and PACF will be printed below the graphs. Press any key again to
return to the Data Menu.
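For readers who wish to verify the computations, the following Python sketch (illustrative only, not PEST's code) computes the sample ACF from the formula above and the sample PACF by the Durbin-Levinson recursion, one standard method; the ±1.96/√n bounds are noted in the final comment.

import math

def sample_acf(x, max_lag):
    """Sample ACF rho_hat(h) = gamma_hat(h)/gamma_hat(0), h = 0, ..., max_lag."""
    n = len(x)
    xbar = sum(x) / n
    gamma = [sum((x[t + h] - xbar) * (x[t] - xbar) for t in range(n - h)) / n
             for h in range(max_lag + 1)]
    return [g / gamma[0] for g in gamma]

def sample_pacf(x, max_lag):
    """Sample PACF obtained by applying the Durbin-Levinson recursion to the sample ACF."""
    rho = sample_acf(x, max_lag)
    pacf, phi = [1.0], []
    for h in range(1, max_lag + 1):
        num = rho[h] - sum(phi[j] * rho[h - 1 - j] for j in range(h - 1))
        den = 1.0 - sum(phi[j] * rho[j + 1] for j in range(h - 1))
        phi_hh = num / den
        phi = [phi[j] - phi_hh * phi[h - 2 - j] for j in range(h - 1)] + [phi_hh]
        pacf.append(phi_hh)
    return pacf

# Approximate 95% bounds for a white noise sequence: +/- 1.96 / math.sqrt(len(x))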
The ACF and PACF may be filed for later use using Option 4.
The graphs of the sample ACF and PACF sometimes suggest an appro-
priate ARMA model for the data.
Suppose that the data x_1, ..., x_n are in fact observations of the MA(q)
process,

    X_t = Z_t + θ_1 Z_{t-1} + ... + θ_q Z_{t-q},

where {Z_t} is a sequence of uncorrelated random variables with mean 0 and
variance σ². The ACF of {X_t} vanishes for lags greater than q and so the
plotted sample ACF of the data should be negligible (apart from sampling
fluctuations) for lags greater than q. As a rough guide, if the sample ACF
falls between the plotted bounds ±1.96/√n for lags h > q then an MA(q)
model is suggested.
Analogously, suppose that the data are observations of the AR(p) process
defined by

    X_t = φ_1 X_{t-1} + ... + φ_p X_{t-p} + Z_t.

The PACF of {X_t} vanishes for lags greater than p and so the plotted
sample PACF of the data should be negligible (apart from sampling fluc-
tuations) for lags greater than p. As a rough guide, if the sample PACF
falls between the plotted bounds ±1.96/√n for lags h > p then an AR(p)
model is suggested.
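The cutoff behaviour described in the last two paragraphs is easy to observe by simulation. The following Python sketch (illustrative only) generates data from the defining MA(q) and AR(p) equations above; applying the sample ACF and PACF to the output should reproduce the rough guides just given.

import random

def simulate_ma(theta, n, seed=0):
    """Simulate X_t = Z_t + theta_1*Z_{t-1} + ... + theta_q*Z_{t-q} with Z_t ~ N(0,1)."""
    rng = random.Random(seed)
    q = len(theta)
    z = [rng.gauss(0.0, 1.0) for _ in range(n + q)]
    return [z[t + q] + sum(theta[j] * z[t + q - 1 - j] for j in range(q))
            for t in range(n)]

def simulate_ar(phi, n, seed=0, burn=200):
    """Simulate X_t = phi_1*X_{t-1} + ... + phi_p*X_{t-p} + Z_t (causal phi assumed)."""
    rng = random.Random(seed)
    p = len(phi)
    x = [0.0] * p
    for _ in range(n + burn):
        x.append(sum(phi[j] * x[-1 - j] for j in range(p)) + rng.gauss(0.0, 1.0))
    return x[-n:]

# For simulate_ma([0.8], 500) the sample ACF should be negligible beyond lag 1;
# for simulate_ar([0.7], 500) the sample PACF should be negligible beyond lag 1.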
FIGURE 2.9. Sample ACF and PACF of the transformed AIRPASS.DAT series

If neither the sample ACF nor PACF "cuts off" as in the previous two
paragraphs, a more refined model selection technique is required (see the
discussion of the AICC statistic in Section 2.3.4 below). Even if the sample
ACF or PACF does cut off at some lag, it is still advisable to explore models
other than those suggested by the sample ACF and PACF values.

EXAMPLE: Figure 2.9 shows the ACF and PACF for the AIR-
PASS.DAT series after taking logarithms, differencing at lags
12 and 1 and subtracting the mean. These graphs suggest we
consider an MA model of order 12 (or perhaps 23) with a large
number of zero coefficients, or alternatively an AR model of
order 12.

2.3.2 ENTERING A MODEL


To do any serious analysis with PEST, a model must be entered. This
can be done either by specifying an ARMA model directly using the option
[Entry of an ARMA(p,q) model] or (if the program contains a data file which
is to be modelled as an ARMA process) by using the option [Preliminary
estimation of ARMA parameters] of the Main Menu. If no model is entered,
PEST assumes the default ARMA(0,0) or white noise model,

    X_t = Z_t,

where {Z_t} is an uncorrelated sequence of random variables with mean zero
and variance one.
If you have data and no particular ARMA model in mind, it is best to let
PEST find the model by using the option [Preliminary estimation of ARM A
parameters].
Sometimes you may wish to try a model used in a previous session with
PEST or a model suggested by someone else. In that case use the option
[Entry of an ARMA(p,q) model].
A particularly useful feature of the latter option is the ability to import
a model stored in an earlier session. PEST can read the stored model,
saving you the trouble of repeating an optimization or entering the model
coefficient by coefficient.
To enter a model directly, specify the order of the autoregressive and
moving average polynomials as requested. You will then be required to
enter the coefficients. Initially PEST will set the white noise variance to 1.
To enter a model stored in a file, choose the autoregressive order to be -1.
After you have entered the model, you will see the Model Menu which
gives you the opportunity to make any required changes.
If you wish to alter a specific coefficient in the model, enter the number
of the coefficient. The autoregressive coefficients are numbered 1, 2, ..., p
and the moving average coefficients are numbered p + 1, p + 2, ..., p + q. For
example, to change the 2nd moving average coefficient in an ARMA(3,2)
model, type C to change a coefficient and then type 5 <Enter>.

2.3.3 PRELIMINARY PARAMETER ESTIMATION (BD Sections 8.1-8.5)
The option [preliminary estimation of ARMA parameters] of the Main Menu
contains fast (but somewhat rough) model-fitting algorithms. These are
useful for suggesting the most promising models for the data, but they
should be followed by the more refined maximum likelihood estimation
procedure in the option [ARMA parameter estimation] of the Main Menu.
The fitted preliminary model is generally used as an initial approximation
with which to start the non-linear optimization carried out in the course
of maximizing the (Gaussian) likelihood.
The AR and MA orders p and q of the model to be fitted must be entered
first (see Section 2.6.1). For pure AR models, the preliminary estimation
option of PEST offers you a choice between the Burg and Yule-Walker
estimates. The Burg estimates frequently give higher values of the Gaussian
likelihood than the Yule-Walker estimates. For the case q > 0, PEST will
also give you a choice between the two preliminary estimation methods
based on the Hannan-Rissanen procedure and the innovations algorithm.
If you choose the innovations option by typing I, a default value of m will
be displayed on the screen. This is a parameter required in the estimation
algorithm (discussed in BD Sections 8.3-8.4). The standard choice is the
default value of m computed by PEST.
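As an indication of what a preliminary pure-AR fit involves, the sketch below solves the Yule-Walker equations for mean-corrected data using the Durbin-Levinson recursion. It is a simplified stand-in written in Python, not PEST's implementation, and it omits the standard errors and other diagnostics that PEST reports.

def yule_walker(x, p):
    """Yule-Walker estimates (phi_1, ..., phi_p) and white noise variance for an
    AR(p), computed from the sample autocovariances by Durbin-Levinson."""
    n = len(x)
    xbar = sum(x) / n
    gamma = [sum((x[t + h] - xbar) * (x[t] - xbar) for t in range(n - h)) / n
             for h in range(p + 1)]
    phi, v = [], gamma[0]
    for h in range(1, p + 1):
        num = gamma[h] - sum(phi[j] * gamma[h - 1 - j] for j in range(h - 1))
        phi_hh = num / v
        phi = [phi[j] - phi_hh * phi[h - 2 - j] for j in range(h - 1)] + [phi_hh]
        v *= (1.0 - phi_hh ** 2)
    return phi, v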


Once the values of p, q and m have been entered, PEST will quickly
estimate the parameters of the specified model and display a number of
useful diagnostic statistics.
The estimated parameters are given with the ratio of each estimate to
1.96 times its standard error. The denominator (1.96 × standard error) is
the critical value for the coefficient. Thus if the ratio is greater than one
in absolute value, we may conclude (at level 0.05) that the corresponding
coefficient is different from zero. On the other hand, a ratio less than one
in absolute value suggests the possibility that the corresponding coefficient
in the model may be zero. (If the innovations option is chosen, the ratios
of estimates to 1.96 x standard error are displayed only when p = q.)
After the estimated coefficients are displayed on the screen, press any
key and PEST will then do one of two things depending on whether or not
the fitted model is causal (see Section 2.6.1).
If the model is causal, PEST will give an estimate σ̂² of the white
noise variance, Var(Z_t), and some further diagnostic statistics. These are
-2 ln L(φ̂, θ̂, σ̂²), where L denotes the Gaussian likelihood (see BD equation
(8.7.4)), and the AICC statistic,

    -2 ln L + 2(p + q + 1)n/(n - p - q - 2)

(see Section 2.3.4 below).
Our eventual aim is to find a model with as small an AICC value as
possible. Smallness of the AICC value computed in the option [Preliminary
estimation] is indicative of a good model, but should be used only as a
rough guide. Final decisions between models should be based on maximum
likelihood estimation computed in the option [ARMA parameter estimation],
since for fixed p and q, the values of φ, θ and σ² which minimize the
AICC statistic are the maximum likelihood estimates, not the preliminary
estimates. In the option [Preliminary estimation] of the Main Menu, it is
possible to minimize the AICC for pure autoregressive models fitted either
by Burg's algorithm or the Yule-Walker equations by entering -1 as the
selected autoregressive order. Autoregressions of all orders up to 26 will
then be fitted by the chosen algorithm and the model with smallest AICC
value will be selected.
If the preliminary fitted model is non-causal, PEST will set all coefficients
to .001 to generate a causal model with the specified values of p and q.
Further investigation of this model must then be done with the option
[ARMA parameter estimation].
After completing the preliminary estimation, PEST will store the fitted
model coefficients and white noise variance. The stored estimate of the
white noise variance is the sum of squares of the residuals (or one-step
prediction errors) divided by the number of observations.
At this point you can try a different model, file the current model or
return to the Main Menu. When you return to the Main Menu, the most

recently fitted preliminary model will be stored in PEST. You will now see
a large number of options available on the Main Menu.

EXAMPLE: Let us first find the minimum-AICC AR model for


the logged, differenced and mean-corrected AIRPASS.DAT se-
ries currently stored in PEST. From the Main Menu type P and
then type -1 <Enter> for the order of the autoregression. Type Y to
select the Yule-Walker estimation procedure. The minimum-
AICC AR model is of order 12 with an AICC value of -458.13.
Now let us fit a preliminary MA(25) model to the same data
set. Select the option [Try another model] and type 0 <Enter> for
the order of the autoregressive polynomial and 25 <Enter> for the
order of the moving average polynomial. Choose the Innova-
tions estimation procedure by typing I and type N to use the
default value for m, the number of autocovariances used in the
estimation procedure.
The ratios, (estimated coefficient)/(1.96 × standard error), in-
dicate that the coefficients at lags 1 and 12 are non-zero, as
we suspected from the ACF. The estimated coefficients at lags
3 and 23 also look substantial even though the corresponding
ratios are less than 1 in absolute value.
The displayed values are shown in Figure 2.10. Press any key
to see the value of the white noise variance.
Press <Enter> once again to display the values of -2 ln L and the
AICC. After pressing <Enter>, you can return to the Main Menu by
typing R with the fitted MA(25) model now stored in PEST.
Note that at this stage of the modelling process the fitted
AR(12) model has a smaller AICC value than the MA(25)
model. Later we shall find a subset MA(25) model which has
an even smaller AICC value.

2.3.4 THE AICC STATISTIC (BD Sections 9.2,9.3)


One measure of the "goodness of fit" of a model is the Gaussian likelihood
of the observations under the fitted model. (i.e. the joint probability den-
sity, evaluated at the observed values, of the random variables X_1, ..., X_n,
assuming that the fitted model is correct and the white noise is Gaussian.)
At first glance, maximization of the Gaussian likelihood seems a plausible
criterion for deciding between rival candidates for "best" model to repre-
sent a given data set. For fixed p and q, maximization of the (Gaussian)
likelihood is indeed a good criterion and is the primary method used for
estimation in the option [ARMA parameter estimation] of the Main Menu.
The problem with using the likelihood to choose between models of dif-
ferent orders is that for any given model, we can always find one with
[Screen display: the 25 estimated MA coefficients and the ratios of the coefficients to 1.96×(standard error).]

FIGURE 2.10. Coefficients of the preliminary MA(25) model

equal or greater likelihood by increasing either p or q. For example, given


the maximum likelihood AR(10) model for a given data set, we can find an
AR(20) model for which the likelihood is at least as great. Any improvement
in the likelihood however is offset by the additional estimation errors
introduced. The AICC statistic allows for this by introducing a penalty
for increasing the number of model parameters. The AICC statistic for the
model with parameters p, q, φ, θ, and σ² is defined as

AICC(φ, θ, σ²) = −2 ln L(φ, θ, σ²) + 2(p + q + 1)n/(n − p − q − 2),

and a model chosen according to the AICC criterion minimizes this statistic.
(The AICC value is a bias-corrected modification of the AIC statistic,
−2 ln L + 2(p + q + 1); see BD Section 9.3.)
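As a rough illustration of how the penalty behaves, the short Python sketch below (not part of ITSM; the −2 ln L values are purely hypothetical) evaluates the AICC for two candidate models fitted to a series of length n = 131.

    def aicc(neg2_loglik, p, q, n):
        """AICC = -2 ln L + 2(p + q + 1) n / (n - p - q - 2)."""
        return neg2_loglik + 2.0 * (p + q + 1) * n / (n - p - q - 2)

    n = 131                        # length of the transformed AIRPASS series
    print(aicc(-490.0, 12, 0, n))  # an AR(12) with an illustrative -2 ln L of -490
    print(aicc(-496.0, 0, 25, n))  # an MA(25) must improve -2 ln L substantially to win

With these hypothetical likelihoods the AR(12) has the smaller AICC, because the MA(25) pays a much larger penalty for its 25 parameters.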
Model selection statistics other than AICC are also available. A Bayesian
modification of the AIC statistic, known as the BIC statistic, is also computed
in the option [ARMA parameter estimation]. It is used in the same
way as the AICC.
An exhaustive search for a model with minimum AICC or BIC value
can be very slow. For this reason the sample ACF and PACF and the
preliminary estimation techniques described above are useful in narrowing
down the range of models to be considered more carefully in the maximum-
likelihood estimation stage of model fitting.
26 2.3. Finding a Model for Your Data

The datarile is AIRPASS.DAT ; Total data points= 131


Box-Cox transfor~ation applied with la~bda = .88
Difference lag
1 12
2 1
The subtracted ~ean is .8883
THE ARMA( 8,25) HODEL IS X(t> = Z(t)
+( - .357 )"Z(t- 1> +( .8&7)"Z(t- Z) +( -.1&3)*Z(t- 3) +( - .841>*Z(t- 4)
+( .1Z7)"Z(t- 5) +( .8Z6)"Z(t- 6) +( .1I2B) ..zet- 7) +e -.865)*Z(t- B)
+( .133)*Z(t- 9) +( -.1176) ..zet-ll1) +( -.11117)"zet-u) +( -.499)*Z(t-1Z)
+( . 17B)*Z(t-13) +( -.1132)*Z(t-14) +( . 14B)*zet-15) +( -.14&)*Z(t-16)
+( .844)*Z(t-17) +( - . IIZ3)*Z(t-18) +( -.1175)*Z(t-19) +( -.84&)*Z(t-28)
+( - .8ZII)*Z(t-Z1> +( -.11119)*Z(t-2Z) +( . ZII1> *Z(t-Z3) +( -.877)*Z(t-Z4)
+( -.879)*Z(t-25)
"I10DEL MOT IMIJERTIBLE*
White noise variance = . 115169E-8Z
AICC -.4489Z7E+83
(Press an~ ke~ to continue>

FIGURE 2.11. The PEST screen after choosing the option [Current model and
data file status)

2.3.5 CHANGING YOUR MODEL


The model currently stored by the program and the status of the data file
can be checked at any time using the option [Current model and data file
status] of the Main Menu. Any parameter can be changed with this option,
including the white noise variance, and the model can be filed for use at
some other time.

EXAMPLE: We shall now set some of the coefficients in the


current model to zero. To do this choose the option [Current
model and data file status] from the Main Menu by typing C.
The resulting screen display is shown in Figure 2.11.
The preliminary estimation in Section 2.3.3 suggested that the
most significant coefficients in the fitted MA(25) model were
those at lags 1, 3, 12 and 23. Let us therefore try setting all
the other coefficients to zero. To change the lag-2 coefficient,
select [Change a coefficient] from the menu and enter its number
followed by the new value, 0, i.e. type C 2↵ 0↵. Repeat for each
coefficient to be changed. The screen should then look like
Figure 2.12. Type R to return to the Main Menu.
[Screen display: the ARMA(0,25) model with only the MA coefficients at lags 1, 3, 12 and 23 non-zero, its white noise variance, and the menu for altering the stored model.]

FIGURE 2.12. The PEST screen after setting coefficients to zero

2.3.6 PARAMETER ESTIMATION; THE GAUSSIAN LIKELIHOOD (BD Section 8.7)
Once you have specified values of p and q and possibly set some coefficients
to zero, you can use the full power of PEST to estimate parameters. For
efficient parameter estimation you must use the option [ARMA parameter
estimation] of the Main Menu.
From the Main Menu type A to obtain the Estimation Menu displayed
in Figure 2.13.
Much of the information displayed in this menu concerns the optimiza-
tion settings. For most purposes you will need to use the default settings
only. (With the default settings, any coefficients which are set to zero will
be treated as fixed values and not as parameters. If you wish to include a
particular coefficient in the parameters to be optimized you must therefore
not set its initial value equal to zero.)
To find the maximum likelihood estimates of your parameters choose
the option [Optimize with current settings] by typing O. PEST will then
try to find the parameters which maximize the likelihood of your model
with respect to all the non-zero coefficients in the model currently stored
by PEST.
If you wish to compute the Gaussian likelihood (or one-step predictors
and residuals) without doing any optimization, type L to select the option
[Likelihood of Model (no optimization)].
[Screen display: the Estimation Menu — Help; Likelihood of Model (no optimization); Optimize with current settings; New Accuracy Parameter; Set the Maximum No. of Iterations; Constrain Optimized Coefficients (e.g. for Multiplicative Model); Alter Convergence Criterion for th(n,j); Method of Estimation; Return to Main Menu.]

FIGURE 2.13. The Estimation Menu

EXAMPLE: Let us find the maximum likelihood estimates of the
parameters in the current model for the logged, differenced and
mean-corrected airline passenger data stored in PEST. Starting
from the Main Menu type A and you will see the Estimation
Menu. Choose the default option by typing O. After a short
delay the iterations will cease and you will see the message
STOPPING VALUE 2 : WITHIN ACCURACY LEVEL
The stopping value of 2 indicates that the minimum of −2 ln L
has been located with the specified accuracy. The fitted model
is displayed in Figure 2.14. If you see the message
STOPPING VALUE 4 : ITERATION LIMIT EXCEEDED
then the minimum of −2 ln L could not be located with the
number of iterations (50) allowed. You can continue the search
(starting from the point at which the iterations were interrupted)
by typing C to return to the Estimation Menu, then
typing O as before.

CHANGING THE OPTIMIZATION SETTINGS


There are several options in the Estimation Menu which enable you to alter
the way in which the optimization is carried out. In particular, it is possible
to input a new accuracy parameter a (Option N), set the maximum number
[Screen display: the fitted subset MA model with coefficients −.355 (lag 1), −.281 (lag 3), −.523 (lag 12) and .242 (lag 23), its white noise variance, the BIC, −2 ln(likelihood) and AICC values, and the message STOPPING VALUE 2 : WITHIN ACCURACY LEVEL.]

FIGURE 2.14. The maximum likelihood estimates for the transformed AIRPASS.DAT series

of iterations m (Option S), alter the convergence criterion c (Option A) and
change the method of optimization (Option M). By far the most frequently
used option is O; however, it is a good idea to conclude the estimation with
a further optimization using a smaller accuracy parameter.
The following options on the Estimation Menu can be used to alter the
optimization settings.
The following options on the Estimation Menu can be used to alter the
optimization settings.
Help
This option prints several hints for using the optimizer.
New Accuracy Parameter
Allows you to enter a new accuracy parameter a between 0 and 1. Re-
ducing a gives more accurate optimization.
Set the Maximum No. of Iterations
Changes the maximum number of iterations m required before terminat-
ing the search. Reducing m terminates the search after fewer iterations.
Constrain Optimized Coefficients (e.g. for Multiplicative Model)
Unless you specify otherwise, PEST will optimize only the non-zero co-
efficients in the current model. Sometimes you may not want this. This
option enables you to specify constraints on the coefficients to be opti-
mized. Coefficients can be set to non-zero constant values and coefficients
which are currently zero can be treated as parameters and included in
the optimization. It is also possible to specify optimization subject to


multiplicative relationships between the parameters (see below under
Multiplicative Models).
Alter Convergence Criterion for th(n,j)
Changes the convergence criterion c. Setting c = 0 gives the exact like-
lihood but setting c to be small (say 0.0005) will usually give an almost
identical value with far less computation.
Method of Estimation
Toggles the method of optimization between Maximum Likelihood (the
default option) and Least Squares.

STOPPING NUMBER
Each time the optimizing iterations cease, the resulting model will be dis-
played on the screen with a "stopping number" indicating the conditions
under which the search was terminated. The stopping numbers have the
following meanings:
1. The relative gradient of the surface is close to zero.
2. Successive iterations did not change any of the optimized parameters
by more than the required accuracy parameter a.
3. The last step failed to locate a better point. Either the value is an
approximate local minimum or the model is too non-linear at this
point - perhaps because of a root near the unit circle. Try different
initial values.
4. The iteration limit (m) was reached. (Continue optimization by en-
tering S from the results screen then entering L again.)
5. The step-size of the search has grown too large. Try different initial
values.

EXAMPLE: In the optimization just completed, stopping num-


ber 2 appeared after 5 iterations.

MULTIPLICATIVE MODELS (BD Section 9.6)


The option [Constrain Optimized Coefficients] of the Estimation Menu al-
lows the imposition of more complicated constraints on the parameters.
Multiplicative models are handled by specifying multiplicative relationships
between the ARMA coefficients. For example the multiplicative ARMA
model,

(1 − aB)X_t = (1 + cB)(1 + dB^12)Z_t,
is fitted as follows. After entering the data (and transforming if necessary),
use the option [Entry of an ARMA(p,q) model] of the Main Menu to enter
the ARMA(1,13) model with all coefficients zero except the AR coefficient
and the MA coefficients at lags 1, 12 and 13. These may be set initially
to .001 (or some better non-zero initial values obtained for example from
the preliminary estimation option of PEST). We then choose the option
[ARMA parameter estimation] and enter the following sequence of letters
and numbers:
C (To constrain the optimized coefficients),
D (To define multiplicative relationships),
1↵ (The number of multiplicative relationships),
2↵ 13↵ 14↵ (To indicate that the 14th coefficient is constrained to
be the product of the 2nd and 13th),
R (To return to the Optimization Menu), and finally
O (To optimize with the current settings).
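To see where this constraint comes from, expand the moving average operator: (1 + cB)(1 + dB^12) = 1 + cB + dB^12 + cdB^13, so the MA(13) coefficients of the expanded model are θ_1 = c, θ_12 = d and θ_13 = cd. With the parameters numbered as the AR coefficient followed by θ_1, ..., θ_13 (coefficients 1 to 14), the constraint entered above (coefficient 14 = coefficient 2 × coefficient 13) is precisely the multiplicative relation θ_13 = θ_1 θ_12.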

HINTS FOR ADVANCED USERS


• You can optimize with respect to as many as 52 coefficients.
• If the optimization search takes the MA coefficients outside the in-
vertible region you can convert the model to an equivalent (from a
second-order point of view) invertible model. See Section 2.3.7 for fur-
ther information about switching to invertible models. PEST (unlike
some programs) has no difficulty in computing Gaussian likelihoods
and best linear predictors for non-invertible models.
• For complicated or high-order models be sure to try a variety of initial
values to check that you are not finding a local rather than a global
minimum of −2 ln L.
• You cannot keep constant the 3rd coefficient in a multiplicative relationship
(i.e. the product of the first 2).

2.3.7 OPTIMIZATION RESULTS


After running the optimization algorithm, PEST displays the model parameters
(coefficients and white noise variance), the values of −2 ln L, AICC,
and BIC, information regarding the computations, and the Results Menu.

EXAMPLE: Figure 2.14 shows the PEST screen after complet-


ing optimization for the logged, differenced and mean-corrected
series AIRPASS.DAT, using an MA(23) model with non-zero
coefficients at lags 1, 3, 12 and 23.
The next stage of the analysis is to consider a variety of com-
peting models and to select the most suitable. The following
table shows the AICC statistics for a variety of subset moving
average models of order less than 24.

    Lags               AICC
    1 3 12 23         -486.04
    1 3 12 13 23      -485.78
    1 3 5 12 23       -489.95
    1 3 12 13         -482.62
    1 12              -475.91

The best of these models from the point of view of AICC value
is the one with non-zero coefficients at lags 1, 3, 5, 12 and 23.
To substitute this model for the one currently stored in PEST
(starting from the Results Menu), type
C ↵ C C 1↵ 5↵ R ↵ O
and optimization will begin as before. You should obtain the
non-invertible model (BD Example 9.2.2),

X_t = Z_t − .439Z_{t−1} − .302Z_{t−3} + .242Z_{t−5} − .656Z_{t−12} + .348Z_{t−23},  {Z_t} ~ WN(0, .00103).

For future reference, store the model under the filename AIRPASS.MOD
using the option [Store the Model] of the Results Menu.

We have seen in our example how, when optimizing iterations cease,


the stopping code indicates whether or not more iterations are required.
The display of results which includes this information also contains a menu
which allows us to investigate properties of the fitted model, including its
goodness of fit. The options available in the Results Menu are described
below.
Store the Model
It is wise to file the fitted model, particularly if it was the result of a
time-consuming optimization as in the previous example.
File and Analyze the Residuals
The differences (suitably rescaled, see BD Section 9.4) between the ob-
servations and the corresponding one-step predictors are the residuals
from the model. If the fitted model were the true model, they would
constitute a white noise sequence. This allows us to check, by studying
the residuals, whether or not the model is a good fit to the data. Further
details of this option are given in Section 2.4.
Predict Future Data (and Return to Main Menu)
The fitted model can be used to compute best linear h-step predictors
[Screen display: the estimated MA coefficients at lags 1, 3, 5, 12 and 23 together with their standard errors.]

FIGURE 2.15. The standard errors of the coefficient estimators

for both the transformed and original series. This option is also available
directly from the Main Menu and is discussed later in Section 2.6.
Standard Errors of Estimated Coefficients
This option prints the standard errors (estimated standard deviations)
of the coefficient estimators on the screen. These standard errors are
evaluated by numerical determination of the Hessian matrix of −2 ln L
(BD Section 9.2).
EXAMPLE: Starting from the Results Menu, type t to display
the standard errors for the coefficient estimators in the model,
AIRPASS.MOD, just fitted to the logged, differenced and mean-
corrected AIRPASS.DAT series (see Figure 2.15).

Matrix of Correlations of the Estimated Coefficients


The estimated correlation matrix of the coefficient estimators can be
printed on the screen by typing M.

EXAMPLE: Type M to obtain the estimated correlations for the


coefficient estimators in the model for the logged, differenced
and mean-corrected AIRPASS.DAT series (see Figure 2.16).

NON-INVERTIBLE MODEL. Switch to Invertible Model


If PEST fits a non-invertible (Section 2.6.1) ARMA(p, q) model to your
[Screen display: the estimated correlation matrix of the estimators of the MA coefficients at lags 1, 3, 5, 12 and 23, printed one row at a time.]

FIGURE 2.16. The correlation matrix of the coefficient estimators

data set, you can convert to an equivalent invertible ARMA(p, q) model


by typing N. Equivalent here means from a second-order point of view.
Note however that a non-invertible subset ARMA(p, q) model will gener-
ally convert to an invertible ARMA(p, q) model with all q moving average
coefficients non-zero. Once the model is converted to an invertible model
(if it is important to do so), Options t and M (for computing standard
errors and correlations) disappear from the Results Menu. At this stage,
one should reoptimize (type C O) with the invertible model to get the
standard errors of the estimated parameters.

2.4 Testing Your Model (BD Section 9.4)


Once we have a model, it is important to check whether it is any good or
not. Typically this is judged by comparing observations with correspond-
ing predicted values obtained from the fitted model. If the fitted model is
appropriate then the prediction errors should behave in a manner which is
consistent with the model.
We define the residuals to be the rescaled one-step prediction errors,

Ŵ_t = (X_t − X̂_t)/√r_{t−1},

where X̂_t is the best linear mean-square predictor of X_t based on the ob-


[Screen display: the Residuals Menu, with options to plot the rescaled residuals, compute the ACF/PACF of the residuals, file the residuals, test the residuals for randomness, and return to the display of estimation results.]

FIGURE 2.17. The Residuals Menu

servations up to time t − 1, r_{t−1} = E(X_t − X̂_t)²/σ², and σ² is the white
noise variance of the fitted model.
If the data were truly generated by the fitted ARMA(p, q) model with
white noise sequence {Zt}, then for large samples the properties of {Wt }
should reflect those of {Zt} (see BD Section 9.4) . To check the appropriate-
ness of the model we can therefore examine the residual series {Wt}, and
check that it resembles a realization of a white noise sequence.
PEST provides a number of tests for doing this in the Residuals Menu
which is obtained by selecting the option [File and Analyze Residuals] of the
Results Menu.
To examine the residuals from a specified model without doing any op-
timization, enter the data and model, then use the option [Likelihood of
Model] of the Estimation Menu followed by [File and Analyze Residuals] of
the Results Menu.

EXAMPLE: Type F while in the Results Menu and you will see
the screen display shown in Figure 2.17.
2.4.1 PLOTTING THE RESIDUALS


The residuals Ŵ_t, t = 1, ..., n, were defined in Section 2.4. The rescaled
residuals are defined as

Ŵ_t^(r) = √n Ŵ_t / (Σ_{j=1}^n Ŵ_j²)^{1/2}.
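The rescaling is easy to reproduce outside PEST; in the Python sketch below (not PEST's own code) the residual series is simulated rather than read from a file such as AIRRES.DAT.

    import numpy as np

    def rescale_residuals(w):
        # sqrt(n) * w_t / (sum of squared residuals)^(1/2)
        w = np.asarray(w, dtype=float)
        n = len(w)
        return np.sqrt(n) * w / np.sqrt(np.sum(w ** 2))

    w = np.random.default_rng(0).normal(scale=0.03, size=131)  # stand-in residuals
    w_r = rescale_residuals(w)
    print(np.mean(w_r ** 2))   # equals 1 by construction

By construction the rescaled residuals have average squared value one, so departures from the N(0,1) shape in the histogram are easy to spot.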

From the Residuals Menu type P and you will see a histogram of the
rescaled residuals.
If the fitted model is appropriate, the histogram of the rescaled residuals
should have mean close to zero. If the fitted model is appropriate and the
data is Gaussian, this will be reflected in the shape of the histogram, which
should then resemble a normal density with mean zero and variance one.
Press any key after inspecting the histogram and you will see a graph
of Ŵ_t^(r) vs t. If the fitted model is appropriate this should resemble a
realization of a white noise sequence. Look for trends, cycles and non-constant
variance, any of which suggest that the fitted model is inappropriate. If
substantially more than 5% of the rescaled residuals lie outside the bounds
±1.96 or if there are rescaled residuals far outside these bounds, then the
fitted model should not be regarded as Gaussian.

EXAMPLE: After selecting the option [Plot rescaled residuals] of


the Residuals Menu you will see the histogram of the rescaled
residuals as shown in Figure 2.18. The mean is close to zero and
the shape suggests that the assumption of Gaussian white noise
is not unreasonable in our proposed model for the transformed
airline passenger data.
Press any key to see the graph shown in Figure 2.19. A few of
the rescaled residuals are greater in magnitude than 1.96 (as is
to be expected), but there are no obvious indications here that
the model is inappropriate. Press any key, then enter ↵ and
type C to return to the Residuals Menu.

2.4.2 ACF /PACF OF THE RESIDUALS (BD Section 9.4)


If we were to assume that our fitted model is the true process generating
the data, then the observed residuals would be realized values of a white
noise sequence. We can check the hypothesis that {Wt } is an independent
white noise sequence by examining the sample autocorrelations of the ob-
served residuals which should resemble observations of independent normal
random variables with mean 0 and variance lin (see BD Example 7.2.1).
In particular the sample ACF of the observed residuals should lie within
the bounds ±1.961Vn roughly 95% of the time. These bounds are displayed
on the graphs of the ACF and PACF. If substantially more than 5% of the
[Histogram display: horizontal scale in standard deviations; the sample mean of the rescaled residuals is close to zero and the sample standard deviation close to one.]

FIGURE 2.18. Histogram of the rescaled residuals from AIRPASS.MOD

correlations are outside these limits, or if there are a few very large values,
then we should look for a better-fitting model. (More precise bounds, due
to Box and Pierce, can be found in BD Section 9.4.)

EXAMPLE: Choose the option [ACF/PACF of residuals] of the
Residuals Menu. After entering ↵, the sample ACF and PACF
of the residuals will then appear as shown in Figure 2.20. No
correlations are outside the bounds in this case. They appear
to be compatible with the hypothesis that the residuals are in
fact observations of a white noise sequence. Type ↵ to return
to the Residuals Menu.

2.4.3 TESTING FOR RANDOMNESS OF THE RESIDUALS (BD Section 9.4)
The option [Tests of randomness of the residuals] in the Residuals Menu
provides six tests of the hypothesis that the residuals are observations from
an independent and identically distributed (iid) sequence.
[Time plot of the 131 rescaled residuals.]

FIGURE 2.19. Time plot of the rescaled residuals from AIRPASS.MOD

THE LJUNG-BOX PORTMANTEAU TEST

This test, due to Ljung and Box, pools the sample autocorrelations of the
residuals instead of looking at them individually. The statistic used is

Q = n(n + 2) Σ_{k=1}^h ρ̂_W²(k)/(n − k),

where ρ̂_W(k) is the sample autocorrelation of the residuals at lag k, and
h is to be specified. As a rule of thumb, h should be of the order of √n,
where n is the sample size (h = 20 is a commonly used value).
If the data had in fact been generated by the fitted ARMA(p, q) model,
then for large n, Q would have an approximate χ² distribution with h − p − q
degrees of freedom. The test rejects the proposed model at level α if the
observed value of Q is larger than the (1 − α) quantile of the χ²_{h−p−q}
distribution.
This test frequently fails to reject poorly fitting models. Care should be
taken not to accept a model on the basis of the portmanteau test alone.

THE MCLEOD-LI PORTMANTEAU TEST


This test is used for testing the hypothesis that the residuals are observa-
tions from an iid sequence of normally distributed random variables. It is
based on the same statistic used for the Ljung-Box test, except that the
[Plots and listed values of the sample ACF and PACF of the residuals up to lag 40.]

FIGURE 2.20. ACF/PACF of the residuals from AIRPASS.MOD

sample autocorrelations of the residuals, ρ̂_W(k), are replaced by the sample
autocorrelations of the squared residuals, ρ̂_WW(k), giving

Q = n(n + 2) Σ_{k=1}^h ρ̂_WW²(k)/(n − k).

The hypothesis of iid normal residuals is then rejected at level α if the
observed value of Q is larger than the (1 − α) quantile of the χ²_{h−p−q}
distribution.
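The two portmanteau statistics can be reproduced outside PEST as in the Python sketch below (a simulated residual series is used, and the degrees of freedom follow the convention described above; this is an illustration, not PEST's implementation).

    import numpy as np
    from scipy import stats

    def sample_acf(x, h):
        x = np.asarray(x, dtype=float) - np.mean(x)
        n = len(x)
        denom = np.sum(x ** 2)
        return np.array([np.sum(x[k:] * x[:n - k]) for k in range(1, h + 1)]) / denom

    def ljung_box(w, h, p=0, q=0):
        n = len(w)
        rho = sample_acf(w, h)
        Q = n * (n + 2) * np.sum(rho ** 2 / (n - np.arange(1, h + 1)))
        return Q, stats.chi2.sf(Q, df=h - p - q)       # statistic and p-value

    def mcleod_li(w, h, p=0, q=0):
        # same statistic applied to the squared residuals
        return ljung_box(np.asarray(w, dtype=float) ** 2, h, p, q)

    w = np.random.default_rng(1).normal(size=131)      # stand-in residual series
    print(ljung_box(w, 25))
    print(mcleod_li(w, 25))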

A TEST BASED ON TURNING POINTS

The statistic, T, used in this test is the number of turning points in the
sequence of residuals. It can be shown that for an iid sequence, T is
asymptotically normal with mean μ_T = 2(n − 2)/3 and variance σ_T² = (16n − 29)/90.
The hypothesis that the residuals constitute a sequence of iid observations
is rejected if

|T − μ_T|/σ_T > Φ_{1−α/2},

where Φ_{1−α/2} is the (1 − α/2) quantile of the standard normal distribution.

THE DIFFERENCE-SIGN TEST

Let S be the number of times the differenced residual series Ŵ_t − Ŵ_{t−1} is
positive. If {Ŵ_t} is an iid sequence it can be shown that S is asymptotically
normal with mean μ_S = (n − 1)/2 and variance σ_S² = (n + 1)/12.
The hypothesis that the residuals constitute a sequence of iid observations
is rejected if

|S − μ_S|/σ_S > Φ_{1−α/2}.

This test must be used with caution. If the residuals have a strong cyclic
component they will be likely to pass the difference-sign test since roughly
half of the differences will be positive.

THE RANK TEST

This test is particularly useful for detecting a linear trend in the residuals.
Let P be the number of pairs (i, j) such that Ŵ_j > Ŵ_i and j > i, i =
1, ..., n − 1. If the residuals are iid, the mean of P is μ_P = n(n − 1)/4, the
variance of P is σ_P² = n(n − 1)(2n + 5)/72 and P is asymptotically normal.
The hypothesis that the residuals constitute a sequence of iid observations
is rejected if

|P − μ_P|/σ_P > Φ_{1−α/2}.
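The three order-based tests just described are equally simple to compute by hand; the Python sketch below (not PEST's code, applied to a simulated residual series) returns the standardized statistics and compares them with the normal critical value.

    import numpy as np
    from scipy import stats

    def turning_point_test(w):
        w = np.asarray(w, dtype=float); n = len(w); mid = w[1:-1]
        T = np.sum(((mid > w[:-2]) & (mid > w[2:])) | ((mid < w[:-2]) & (mid < w[2:])))
        mu, var = 2 * (n - 2) / 3, (16 * n - 29) / 90
        return (T - mu) / np.sqrt(var)

    def difference_sign_test(w):
        w = np.asarray(w, dtype=float); n = len(w)
        S = np.sum(np.diff(w) > 0)
        mu, var = (n - 1) / 2, (n + 1) / 12
        return (S - mu) / np.sqrt(var)

    def rank_test(w):
        w = np.asarray(w, dtype=float); n = len(w)
        P = sum(np.sum(w[i + 1:] > w[i]) for i in range(n - 1))
        mu, var = n * (n - 1) / 4, n * (n - 1) * (2 * n + 5) / 72
        return (P - mu) / np.sqrt(var)

    w = np.random.default_rng(2).normal(size=131)      # stand-in residual series
    for z in (turning_point_test(w), difference_sign_test(w), rank_test(w)):
        print(abs(z) > stats.norm.ppf(0.975))          # True would reject iid at the 5% level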
THE MINIMUM-AICC AR MODEL
If the residuals are compatible with a sequence of iid observations, then
the minimum AICC autoregression fitted to the residuals should have order
p = O. PEST computes the AICC values for AR models of orders 0, 1, ... ,26
fitted to the residuals using the Yule-Walker equations, after which the
order p of the model with minimum-AICC value is displayed on the screen.
A value of p ≥ 1 suggests that some correlation remains in the residuals,
contradicting the hypothesis that the residuals are observations from a
white noise sequence.

EXAMPLE: Type T from the Residuals Menu to test the resid-


uals for randomness. You will then be prompted to input the
value of h, the total number of sample autocorrelations of the
residuals used to compute the Portmanteau statistics. After en-
tering the suggested value of h (in this case PEST suggests
using h = 25), the results of the six tests of randomness de-
scribed above are shown in Figure 2.21. Every test is easily
passed by our fitted model with α = .05. Observe that the order
of the minimum-AICC AR model for the residuals is p = 0.
Press ↵ to return to the Residuals Menu. For later use, save
the residuals under the filename AIRRES.DAT using the option
[File residuals].
[Screen display: the values of the Ljung-Box and McLeod-Li portmanteau statistics (with their chi-squared reference distributions), the turning point, difference-sign and rank statistics (with their normal approximations), and the order of the minimum-AICC AR model for the residuals.]

FIGURE 2.21. Tests of randomness for the residuals from AIRPASS.MOD

2.5 Prediction (BD Chapter 5, Section 9.5)

One of the main purposes of time series modelling is the prediction of future
observations. Once you have found a suitable model for your data, you can
predict future values using either the option [Forecasting] of the Main Menu
or (equivalently) the option [Predict Future Data] of the Results Menu.

2.5.1 FORECAST CRITERIA


Given observations X_1, ..., X_n of a series which we assume to be appropriately
modelled as an ARMA(p, q) process, PEST predicts future values X_{n+h}
of the series from the data and the model by computing the linear
combination P_n(X_{n+h}) of X_1, ..., X_n which minimizes the mean squared
error E(X_{n+h} − P_n(X_{n+h}))².
2.5.2 FORECAST RESULTS
Assuming that you have data stored in PEST which has been adequately
fitted by an ARMA(p,q) model, also stored in PEST, choose [Forecasting]
from the Main Menu, after which you will be asked for the number of future
values you wish to predict.
After the model is displayed on the screen you will be asked if you wish
to change the white noise variance. (This will not affect the predictors but
only their mean squared errors.) The predicted values of the fitted ARMA
process will then be displayed in the column labelled XHAT. In the column
labelled SQRT (MSE) you will see the square roots of the estimated mean
squared errors of the corresponding predictors. These are calculated under
the assumption that the observations are truly generated by the current
model. They measure the uncertainty of the corresponding forecasts. A
smaller value indicates a more reliable forecast. As is to be expected, the
mean squared error of Pn(Xn+h) increases with the lead time h of the
forecast.
Approximate 95% prediction bounds (BD Section 5.4) can be obtained
from each predicted value by adding and subtracting 1.96√MSE. These
are exact under the assumptions that the model is Gaussian and faithfully
represents the data. They should not be interpreted as 95% bounds if the
histogram of the residuals is decidedly non-Gaussian in appearance.
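As a quick illustration (the numbers below are hypothetical rather than taken from the AIRPASS output), the bounds are formed directly from the two displayed columns:

    import numpy as np

    xhat = np.array([0.0103, 0.0009, 0.0390])       # hypothetical XHAT values
    sqrt_mse = np.array([0.0342, 0.0364, 0.0364])   # hypothetical SQRT(MSE) values
    lower = xhat - 1.96 * sqrt_mse
    upper = xhat + 1.96 * sqrt_mse
    for h, (lo, up) in enumerate(zip(lower, upper), start=1):
        print(f"h = {h}: ({lo:.4f}, {up:.4f})")      # approximate 95% prediction bounds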
If your data was mean-corrected, the third column of the PEST output
will show the predicted values in Column 1 plus the previously subtracted
sample mean. If there has been no mean-correction, the third column will
be the same as the first.

2.5.3 INVERTING TRANSFORMATIONS


The predictors and mean squared errors calculated so far do not pertain
to your original time series unless you have made no data transformations
other than mean-correction (in which case the relevant predictors are those
in the third column). What we have found are predictors of your transformed
series. To predict the original series, you will need to invert all
the data transformations which you have made in order to fit a zero-mean
stationary model. PEST will do this for you. In fact one transformation,
mean-correction, has already been inverted to generate the predicted values
which were displayed in Column 3.
If you used differencing transformations, you will see the Prediction Menu
displayed following the printing of the ARMA predictors. Type U to select
the option [Undo differencing]. The predictors of the undifferenced data
(still Box-Coxed if you made such a transformation) will then be printed
on the screen together with the square roots of their mean squared errors.
Type ↵ and the undifferenced data will be plotted. Type ↵ again and
the predicted values will be added to the graph of the data. Type C and
you will be asked if you wish to invert the Box-Cox transformation (if you
made one). If so type Y and the original data will be plotted on the screen.
Type ↵ again and the predictors of the original series will be added to
the graph. Type C and you will be asked if you wish to file the predicted
values, then returned to the Prediction Menu.
If you used classical decomposition rather than differencing, PEST will
automatically add back the trend and/or seasonal component immediately
[Screen display: for each future time index, the predicted value XHAT, the corresponding SQRT(MSE), and XHAT plus the previously subtracted mean.]

FIGURE 2.22. Prediction of the transformed series, AIRPASS.DAT

after listing the ARMA predictors. The resulting data values and corresponding
predictors will then be plotted, after which you will again be
given the opportunity to invert the Box-Cox transformation (if any) as in
the previous paragraph.
EXAMPLE: We left our logged, differenced and mean-corrected
airline passenger data stored in PEST as AIRPASS.DAT along
with the fitted MA(23) model, AIRPASS.MOD. To predict the
next 24 values of the original series AIRPASS.DAT, return to
the Main Menu and select the option [Forecasting]. Type 24↵
to specify that 24 predicted values are required after the last
observation. After a brief delay and a ↵, you will be asked if
you wish to change the white noise variance. Type N and the
predictors will be displayed as in Figure 2.22. (Only the first 11
predicted points are shown.)
To obtain forecasts of the undifferenced series, choose the option
[Undo differencing] from the Prediction Menu and follow the
program prompts to obtain the graph shown in Figure 2.23.
Here the hollow squares represent the observations and the solid
squares represent the forecast values. Notice how the model has
captured the regular cyclic behaviour in the data.
To undo the Box-Cox transformation and recover the original
data and predictors, type ↵ C Y ↵ and a graph of the
[Plot of the undifferenced series with the forecast values appended.]

FIGURE 2.23. The forecast values with differencing inverted

original AIRPASS data will be plotted on the screen. Type ↵
and the 24 predicted values will be added, giving the graph
shown in Figure 2.24.

2.6 Model Properties


PEST can be used to analyze the properties of a specified ARMA process
without reference to any data set. This enables us in particular to compare
the properties of potential ARMA models for a given data set in order to
see which of them best reproduces particular features of the data.
PEST allows you to look at the autocorrelation function and spectral
density, to examine MA(∞) and AR(∞) representations and to generate
realizations for any specified ARMA process. The use of these options is
described in this section.

EXAMPLE: We shall illustrate the use of PEST for model anal-


ysis using the model AIRPASS.MOD which is currently stored
in the program.
[Plot of the original AIRPASS series with the 24 forecast values appended.]

FIGURE 2.24. The forecasts of the original AIRPASS data

2.6.1 ARMA MODELS (BD Chapter 3)


For modelling zero-mean stationary time series, PEST uses the class of
ARMA processes. The initials stand for AutoRegressive Moving Average.
PEST enables you to compute characteristics of specific ARMA models and
to find appropriate models for given data sets (assuming of course that
the data can be reasonably represented by such a model - preliminary
transformations of the data may be necessary to ensure this).
{X_t} is an ARMA(p, q) process with coefficients φ_1, ..., φ_p, θ_1, ..., θ_q
and white noise variance σ² if it is a stationary solution of the difference
equations,

X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + ... + φ_p X_{t−p} + Z_t + θ_1 Z_{t−1} + θ_2 Z_{t−2} + ... + θ_q Z_{t−q},

where {Z_t} ~ WN(0, σ²) (i.e. {Z_t} is an uncorrelated sequence of random
variables with mean 0 and variance σ², known as a white-noise sequence).
If p = 0 we call X_t an MA(q) (moving average of order q) process. In
this case,

X_t = Z_t + θ_1 Z_{t−1} + θ_2 Z_{t−2} + ... + θ_q Z_{t−q}.

If q = 0 we call X_t an AR(p) (autoregressive of order p) process. In this
case,

X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + ... + φ_p X_{t−p} + Z_t.
An ARMA model is said to be causal if X_t has the MA(∞) representation
in terms of {Z_t},

X_t = Σ_{j=0}^∞ ψ_j Z_{t−j},  t = 0, ±1, ±2, ...,

where Σ_{j=0}^∞ |ψ_j| < ∞ and ψ_0 := 1. If the AR polynomial, 1 − φ_1 z − ... −
φ_p z^p, and the MA polynomial, 1 + θ_1 z + ... + θ_q z^q, have no common
zeroes, then a necessary and sufficient condition for causality is that the
autoregressive polynomial has no zeroes inside or on the unit circle.
PEST works exclusively with causal ARMA models. It will not permit
you to enter a model for which 1 − φ_1 z − ... − φ_p z^p has a zero inside or
on the unit circle, nor does it generate fitted models with this property.
From the point of view of second order properties this represents no loss
of generality (BD Section 3.1). If you are trying to enter an ARMA(p, q)
model manually, the simplest way to ensure that your model is causal is
to set all the autoregressive coefficients close to zero (e.g. .001). PEST will
not accept a non-causal model.
An ARMA model is said to be invertible if Z_t can be written as

Z_t = Σ_{j=0}^∞ π_j X_{t−j},  t = 0, ±1, ±2, ...,

where Σ_{j=0}^∞ |π_j| < ∞ and π_0 := 1. This condition ensures that Z_t, the noise
at time t, is determined by the observations at times up to and including
t, or equivalently that {X_t} has an "AR(∞)" representation in terms of
{Z_t}.
PEST does not restrict models to be invertible; however, if the current
model is non-invertible, i.e. if the moving average polynomial, 1 + θ_1 z +
... + θ_q z^q, has a zero inside or on the unit circle, you will be informed
by the program. (You can check the model status by choosing the option
[Current model and data file status] of the Main Menu.) A non-invertible
model can always be converted to an invertible model with the same autocovariance
function by choosing [ARMA parameter estimation] of the Main
Menu, then option [Likelihood of Model] of the Estimation Menu, then option
[NONINVERTIBLE MODEL] from the Results Menu.
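These zero-location conditions are easy to check numerically. The Python sketch below (independent of PEST) tests causality and invertibility by computing the zeroes of the two polynomials; the MA(1) example at the end is chosen purely for illustration.

    import numpy as np

    def is_causal(phi):
        # zeroes of 1 - phi_1 z - ... - phi_p z^p must lie strictly outside the unit circle
        roots = np.roots([-c for c in phi[::-1]] + [1.0])
        return bool(np.all(np.abs(roots) > 1.0))

    def is_invertible(theta):
        # zeroes of 1 + theta_1 z + ... + theta_q z^q must lie strictly outside the unit circle
        roots = np.roots(list(theta[::-1]) + [1.0])
        return bool(np.all(np.abs(roots) > 1.0))

    print(is_causal([1.407, -0.713]))   # True for the AR(2) model quoted in Section 2.6.5
    print(is_invertible([-1.5]))        # False: 1 - 1.5z has its zero at 2/3, inside the unit circle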

2.6.2 MODEL ACF, PACF (BD Sections 3.3, 3.4)

See Section 2.3.1 for a definition of the ACF and PACF and the use of the
sample ACF and PACF in model fitting.
The model ACF and PACF can be obtained using the selection [Model
ACF/PACF. AR/MA infinity representations] of the Main Menu. They can
be calculated for lags up to 1150. Normally you should not need more than
about 40.
After typing M from the Main Menu, the Model ACF/PACF Menu consisting
of 7 items will appear. It allows you to reset the maximum lag (the
default value is 40), to plot the ACF and PACF, to file the values, to plot
the sample ACF and PACF with the model ACF and PACF, to change the
white noise variance and to compute the coefficients in the MA(∞) and
AR(∞) representations of the process (see Section 2.6.3).

EXAMPLE: Starting from the Main Menu, the ACF and PACF
for the current model AIRPASS.MOD may be plotted by typing
M A. To compare the sample ACF/PACF with the model
ACF/PACF, press ↵ to return to the Model ACF/PACF Menu
and type S. The graphs are shown in Figure 2.25. The vertical
lines represent the model ACF/PACF and the solid black
squares correspond to the sample ACF/PACF. These graphs
show that the data and the model ACF and PACF all have
large values at lag 12 while the sample and model partial autocorrelation
functions both tend to die away geometrically after
the peak at lag 12. The similarities between the graphs indicate
that the model is capturing some of the important features of
the data.

2.6.3 MODEL REPRESENTATIONS (BD Sections 3.1, 3.2)

As indicated in Section 2.6.1, if {X_t} is a causal ARMA process, then it
has an MA(∞) representation,

X_t = Σ_{j=0}^∞ ψ_j Z_{t−j},  t = 0, ±1, ±2, ...,

where Σ_{j=0}^∞ |ψ_j| < ∞ and ψ_0 = 1.
Similarly, if {X_t} is an invertible ARMA process, then it has an AR(∞)
representation,

Z_t = Σ_{j=0}^∞ π_j X_{t−j},  t = 0, ±1, ±2, ...,

where Σ_{j=0}^∞ |π_j| < ∞ and π_0 = 1.
[Plots and listed values of the model ACF and PACF of AIRPASS.MOD, overlaid with the sample ACF and PACF of the transformed data, up to lag 40.]

FIGURE 2.25. The ACF and PACF of AIRPASS.MOD together with the sample
ACF and PACF of the transformed AIRPASS.DAT series

For any specified ARMA model you can determine the coefficients in
these representations by selecting the option [MA or AR infinity representations]
from the Model ACF/PACF Menu. Starting from the Main Menu
type M M. You will then be asked to choose between the [MA-Infinity
Representation] and [AR-Infinity Representation] of the model. (If the model
is not invertible the AR(∞) choice will not be possible.) After entering the
maximum lag required, PEST will then print the desired coefficients (ψ_j or
π_j) on the screen. They can be stored either as they appear on the screen
or in model format for later use in PEST.
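The π_j can also be computed outside PEST by equating coefficients in θ(z)π(z) = φ(z); a minimal Python sketch of that recursion (not PEST's routine) follows, with a simple invertible MA(1) chosen so the answer is easy to check by hand.

    import numpy as np

    def ar_infinity(phi, theta, lag_max):
        # pi_0 = 1;  pi_j = -phi_j - sum_{k=1}^{min(j,q)} theta_k * pi_{j-k}
        pi = np.zeros(lag_max + 1)
        pi[0] = 1.0
        for j in range(1, lag_max + 1):
            phi_j = phi[j - 1] if j <= len(phi) else 0.0
            pi[j] = -phi_j - sum(theta[k - 1] * pi[j - k]
                                 for k in range(1, min(j, len(theta)) + 1))
        return pi

    # For X_t = Z_t + 0.5 Z_{t-1} the coefficients are pi_j = (-0.5)^j
    print(ar_infinity(phi=[], theta=[0.5], lag_max=5))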

EXAMPLE: AIRPASS.MOD does not have an AR(∞) representation
since it is not invertible. However, we can convert AIRPASS.MOD
to an equivalent invertible model and then find an
AR(∞) representation for it. To convert to an invertible model,
start from the Main Menu and type A ↵ L ↵ ↵ N ↵ ↵ R.
To find the AR(∞) representation, type M M A 50↵. This
gives 50 coefficients, the first 19 of which are shown in Figure
2.26. There is little point in using PEST to find the MA(∞)
representation of this model. What is it?
[Screen display: the AR(∞) coefficients π_j of the invertible model for j = 0, 1, ..., up to lag 50, with π_0 = 1.]

FIGURE 2.26. The AR(∞) representation of the invertible equivalent of AIRPASS.MOD

2.6.4 GENERATING REALIZATIONS OF A RANDOM SERIES


(BD Problem 8.17)
PEST can be used to generate realizations of a random time series defined
by the currently stored model.
To generate such a realization, select the option [Generation of simulated
data] from the Main Menu. You will be asked if you wish to continue with
the simulation (type Y) and if you wish to change the white noise variance.
Next you will be prompted for the number of data points you wish to
generate and then you will be asked to enter a random number seed. This
should be an integer with fewer than 10 digits. By using the same random
number seed you can reproduce the same realization of the process at any
other time.
Once the values of an ARMA process have been generated, you will be
given the opportunity to add any specified mean to the observations. If
you have previously mean-corrected a data set, the subtracted mean will
have been stored by PEST and it will be displayed so that you may choose
to add this value to the simulated ARMA data. (If you have previously
performed a classical decomposition on a data set you will also be given the
opportunity to add the stored trend and seasonal components. This allows
you to simulate the original data, not just the random noise component.
If, however, you transform your original data by differencing, PEST allows
[Plots and listed values of the sample ACF and PACF of the simulated series up to lag 40.]

FIGURE 2.27. The sample ACF and PACF of the generated data

you to simulate the differenced data only.)


The simulated data will be stored in PEST, overwriting any data previ-
ously stored in the program.
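For readers who want to experiment outside PEST, the Python sketch below generates a reproducible ARMA realization from a seed; the coefficient values are arbitrary and the burn-in is a simple device to reduce start-up effects (this is an illustration only, not PEST's generator).

    import numpy as np

    def simulate_arma(phi, theta, sigma2, n, seed, burn_in=100):
        rng = np.random.default_rng(seed)
        z = rng.normal(scale=np.sqrt(sigma2), size=n + burn_in)
        x = np.zeros(n + burn_in)
        p, q = len(phi), len(theta)
        for t in range(n + burn_in):
            ar = sum(phi[i] * x[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
            ma = sum(theta[i] * z[t - 1 - i] for i in range(q) if t - 1 - i >= 0)
            x[t] = ar + z[t] + ma
        return x[burn_in:]          # discard the burn-in portion

    x = simulate_arma(phi=[], theta=[-0.4, 0.0, -0.3], sigma2=0.001,
                      n=135, seed=1327)   # seed 1327 as in the example below
    print(x[:5])

Rerunning with the same seed reproduces the same realization, exactly as with PEST's random number seed.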
EXAMPLE: To generate 135 data points using the model AIRPASS.MOD,
type G from the Main Menu. Then type
G Y N 135↵ 1327↵ 0↵ N
(The number 1327 is the random number seed.) Plot the sample
ACF and PACF of the generated data using Option 3 of the
Data Menu (see Figure 2.27). Compare the graphs with those
a variety of different realizations you can get a feeling for the
magnitude of the random fluctuations in these functions.
The sample ACF and PACF of the transformed airline passenger
data (Figure 2.9) look just as compatible with the model
ACF and PACF (Figure 2.25) as do the sample ACF and PACF
of the simulated series. This reinforces our earlier decision that
the model provides a good representation of the data.

2.6.5 MODEL SPECTRAL DENSITY (BD Sections 4.1-4.4)


Just as we compared the sample ACF and PACF of the data with the ACF
and PACF of the fitted model, we can compare the estimated spectral
[Screen display: the Spectral Density Menu, with options to plot and file the spectral density, to plot and file ln(spectral density), to choose a new value for n, to change the white noise variance, and to return to the Main Menu.]

FIGURE 2.28. The Spectral Density Menu

density based on the data with the spectral density of the model. Spectral
density estimation is treated in Section 2.7. Here we consider only the
spectral density of the model. This is determined by selecting the option
[Spectral density of MODEL on (-pi,pi)] of the Main Menu. The Spectral
Density Menu is shown in Figure 2.28.
The spectral density of a stationary time series {X_t, t = 0, ±1, ...} with
absolutely summable autocovariances (in particular of an ARMA process)
can be written as

f(ω) = (1/2π) Σ_{k=−∞}^∞ γ(k) e^{−iωk},  −π ≤ ω ≤ π,

where γ(k) is the autocovariance at lag k and i = √−1.
The spectral representation of X_t decomposes the sequence into sinusoidal
components and f(ω) measures the relative contributions to the
variance of X_t from the components of different frequencies (measured in
radians per unit time). For real-valued series f(ω) = f(−ω), so it is necessary
only to plot f(ω), 0 ≤ ω ≤ π. A peak in the spectral density function
at frequency λ indicates a relatively large contribution to the variance from
frequencies near λ.
For example the maximum likelihood AR(2) model,

X_t = 1.407X_{t−1} − 0.713X_{t−2} + Z_t,
[Plot of the model spectral density on (0, π).]

FIGURE 2.29. The spectral density of AIRPASS.MOD

for the data file SUNSPOTS.DAT has a peak in the spectral density at
frequency .18π radians per year. This indicates that a relatively large part
of the variance of the series can be attributed to sinusoidal components
with period close to 2π/(.18π) = 11.1 years.
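The spectral density of an ARMA model can also be evaluated directly from the standard rational formula f(ω) = σ² |θ(e^{−iω})|² / (2π |φ(e^{−iω})|²); the Python sketch below (not PEST's code) uses it to locate the AR(2) peak just mentioned.

    import numpy as np

    def arma_spectral_density(phi, theta, sigma2, omega):
        z = np.exp(-1j * omega)
        num = np.abs(1 + sum(t * z ** (k + 1) for k, t in enumerate(theta))) ** 2
        den = np.abs(1 - sum(p * z ** (k + 1) for k, p in enumerate(phi))) ** 2
        return sigma2 / (2 * np.pi) * num / den

    omega = np.linspace(0, np.pi, 181)
    f = arma_spectral_density([1.407, -0.713], [], 1.0, omega)   # sigma^2 = 1 for illustration
    print(omega[np.argmax(f)] / np.pi)   # roughly 0.18, matching the peak at .18*pi noted above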

EXAMPLE: To plot the spectral density of AIRPASS.MOD,
start from the Main Menu and type S ↵ S. You will then
see the graph displayed in Figure 2.29. A notable feature of this
graph is that there are small values at each integer multiple of
π/6. These are due to our earlier differencing at lag 12 which
had the effect of removing period 12 components from the data.

The other options in the Spectral Density Menu allow you to file the
spectral density, and to plot and file the logarithm of the spectral density.
You can change the resolution of the spectral density graph using the op-
tion [New value for n]. The option [Change white noise variance] allows you
to specify a new white noise variance for the model.
[Screen display: the number of observations (131), a message that the Fourier transform is being computed, and the Spectral Analysis Menu, with options to plot the periodogram/(2*pi) and its logarithm, plot the cumulative periodogram (normalized), file the Fourier transform, apply Fisher's test, enter weights for a spectral window, overlay the periodogram/(2*pi) with the MODEL spectrum, overlay the cumulative periodogram with its MODEL analogue, and return to the Main Menu.]

FIGURE 2.30. The Spectral Analysis Menu

2.7 Nonparametric Spectral Estimation (BD Chapter 10)
Spectral analysis is typically concerned with two problems: the detection
of cyclical behavior in the data and the estimation of the spectral density.
Both of these problems may be addressed by selecting [Nonparametric spec-
tral estimation] from the Main Menu of PEST. After choosing this option,
the Spectral Analysis Menu (see Figure 2.30) will appear on the screen.

2.7.1 PLOTTING THE PERIODOGRAM

The periodogram and/or ln(periodogram) may be plotted by choosing [Plot
periodogram/(2*pi) and its logarithm] of the menu. The periodogram is defined
by

I(ω_j) = n^{−1} |Σ_{t=1}^n X_t e^{−itω_j}|²,

where ω_j = 2πj/n, j = 0, 1, ..., [n/2], are the Fourier frequencies in [0, π]
and [n/2] is the integer part of n/2. (The program actually plots the
rescaled periodogram I(ω_j)/(2π).) A large value of I(ω_j) suggests the presence
of a sinusoidal component in the data at frequency ω_j. The presence
ence of a sinusoidal component in the data at frequency Wj . The presence
of such a component may be tested using an analysis of variance table as
described in BD Section 10.1. Alternatively, one can test for hidden periodicities
in the data using the Kolmogorov-Smirnov test or Fisher's test
as described below. The periodogram is computed for nonzero Fourier frequencies
only, since the value at 0, I(0) = n|X̄_n|², depends on the sample
mean only and is generally not a useful quantity. The periodogram is computed
using the fast Fourier transform. The discrete Fourier transform of
the data, defined by

a_j = n^{−1/2} Σ_{t=1}^n X_t e^{−itω_j},  −[(n − 1)/2] ≤ j ≤ [n/2],

may be filed using the option [File Fourier transform]. This option will save
the coefficients {a_j, j = 0, ..., [n/2]} as an array of complex numbers.
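As a point of reference, the same quantities are easy to compute with any FFT routine; the Python sketch below (not PEST's code, applied to a simulated series) evaluates the periodogram at the nonzero Fourier frequencies.

    import numpy as np

    def periodogram(x):
        x = np.asarray(x, dtype=float)
        n = len(x)
        a = np.fft.fft(x) / np.sqrt(n)      # discrete Fourier transform a_j
        j = np.arange(1, n // 2 + 1)        # Fourier frequencies w_j = 2*pi*j/n in (0, pi]
        return 2 * np.pi * j / n, np.abs(a[j]) ** 2

    x = np.random.default_rng(3).normal(size=131)
    w, I = periodogram(x)
    print(w[:3], I[:3])                     # divide I by 2*pi to match PEST's plotted values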

EXAMPLE: Read in the stored data file AIRPASS.DAT, take
logarithms, difference at lags 12 and 1 and then subtract the
mean. From the Main Menu, type N P to plot the periodogram
divided by (2π). The option [Periodogram/(2*pi) with MODEL
spectrum] overlays the model spectral density with 1/(2π) times
the periodogram of the data. To choose this option, press ↵ to
return to the Spectral Analysis Menu, then type E to obtain
the graphs shown in Figure 2.31. Notice the similarity between
them. Now return to the Main Menu and read in the residuals
AIRRES.DAT which we filed earlier after fitting AIRPASS.MOD
to the transformed AIRPASS.DAT series. To check
the compatibility of the residuals with white noise, enter a white
noise model with variance .0010277 (the sample variance of
AIRRES.DAT) using the option [Entry of an ARMA(p,q) model]
in the Main Menu. (Starting from the Main Menu, the necessary
keystrokes to enter this model are E 0↵ 0↵ ↵ A .0010277↵ ↵ R.)

Now if we type N E to compute the periodogram/(2π) and
overlay it with the current model spectrum (which is constant),
we see (Figure 2.32) that there are no dominant frequency components,
so that in this respect the residual series resembles a
realization of white noise. For an iid sequence with variance σ²
the periodogram ordinates should be approximately iid exponential
variables with mean σ².
[Plot of the periodogram/(2π) overlaid with the model spectral density; horizontal scale in units of 2π/n, n = 131.]

FIGURE 2.31. Periodogram of the logged and twice differenced AIRPASS.DAT
series

2.7.2 PLOTTING THE CUMULATIVE PERIODOGRAM

Select [Cumulative periodogram (normalized)] from the Spectral Analysis
Menu to plot the standardized cumulative periodogram defined as

C(x) = 0 for x < 1,
C(x) = Y_i for i ≤ x < i + 1, i = 1, ..., q − 1,
C(x) = 1 for x ≥ q,

where q = [(n − 1)/2] and

Y_i = Σ_{k=1}^i I(ω_k) / Σ_{k=1}^q I(ω_k).

If {X_t} is Gaussian white noise, then Y_i, i = 1, ..., q − 1, are distributed
as the order statistics from a sample of q − 1 independent uniform(0,1)
random variables, and the standardized cumulative periodogram should be
approximately linear. The hypothesis of Gaussian white noise is rejected
at level .05 if C(x) exits from the boundaries

y = (x − 1)/(q − 1) ± 1.36(q − 1)^{−1/2},  1 ≤ x ≤ q.
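A minimal Python sketch of the standardized cumulative periodogram and the boundary check (independent of PEST; the input series here is simulated) is given below.

    import numpy as np

    def cumulative_periodogram(x):
        n = len(x)
        a = np.fft.fft(np.asarray(x, dtype=float)) / np.sqrt(n)
        I = np.abs(a[1:n // 2 + 1]) ** 2     # periodogram at the nonzero Fourier frequencies
        q = (n - 1) // 2
        return np.cumsum(I[:q]) / np.sum(I[:q])   # Y_1, ..., Y_q (Y_q = 1)

    x = np.random.default_rng(4).normal(size=131)
    y = cumulative_periodogram(x)
    q = len(y)
    grid = np.arange(1, q + 1)
    band = 1.36 / np.sqrt(q - 1)
    outside = np.abs(y - (grid - 1) / (q - 1)) > band
    print(outside.any())   # True would reject Gaussian white noise at level .05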
EXAMPLE: After returning to the Spectral Analysis Menu, type
U to plot the standardized cumulative periodogram with the
[Plot of the periodogram/(2π) of the residuals; horizontal scale in units of 2π/n, n = 131.]

FIGURE 2.32. Periodogram of the residuals from AIRPASS.DAT

model analogue (Figure 2.33) for AIRRES.DAT. Since the cur-


rent model in PEST is white noise the model analogue will be a
straight line. As can be seen from the figure, the function C(x)
lies well within the above boundaries (here q = [(131- 1)/2] =
65), supporting the hypothesis that the residuals are observa-
tions of independent white noise.

2.7.3 FISHER'S TEST

Fisher's test enables you to test the data for the presence of hidden periodicities
with unspecified frequency. If the test statistic defined by

ξ_q = max_{1≤i≤q} I(ω_i) / (q^{−1} Σ_{i=1}^q I(ω_i))

is large, then the hypothesis that the data is Gaussian white noise is rejected.
The option [Apply Fisher's test] of the Spectral Analysis Menu gives the
observed value of ξ_q and the p-value of the test (i.e. the probability that
ξ_q exceeds the observed value under the null hypothesis that the data is
Gaussian white noise).
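The ratio itself is easily reproduced outside PEST, as in the Python sketch below (the exact p-value formula of Fisher's test is not reproduced here, and the input series is simulated).

    import numpy as np

    def fisher_ratio(x):
        n = len(x)
        a = np.fft.fft(np.asarray(x, dtype=float)) / np.sqrt(n)
        I = np.abs(a[1:n // 2 + 1]) ** 2
        q = (n - 1) // 2
        I = I[:q]
        return I.max() / I.mean()     # max periodogram ordinate over the average

    x = np.random.default_rng(5).normal(size=131)
    print(fisher_ratio(x))   # compare with the ratio PEST reports, e.g. 3.5954 for AIRRES.DAT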
EXAMPLE: To apply Fisher's test to AIRRES.DAT, type A
starting from the Spectral Analysis Menu and you will see the
[Plot of the standardized cumulative periodogram of the residuals; horizontal scale in units of 2π/n, n = 131.]

FIGURE 2.33. Cumulative periodogram of the residuals

display,
Observed ratio of maximum periodogram to average = 3.5954
Probability (under Ho) of ratio larger than observed = 0.8840
Since the p-value is rather large, Fisher's test does not suggest
rejecting the hypothesis of iid residuals.

2.7.4 SMOOTHING TO ESTIMATE THE SPECTRAL DENSITY (BD Section 10.4)

The spectral density of a stationary process is estimated by smoothing the
periodogram. The weight function {W(j), |j| ≤ m} used for smoothing
is entered through the selection [Enter weights for spectral window] of the
menu. After typing N, you will be asked to enter a value for m. Type
-1↵ if a weight function is to be read from a file and type 0↵ if you
want to return to the Spectral Analysis Menu. For positive values of m you
will be requested to enter the weights W(0), W(1), ..., W(m), all of which
must be nonnegative. The program ensures that the weight function is
symmetric by defining W(−j) = W(j), j = 1, ..., m, and then rescales the
weights so that they add to one. After the weights have been entered, the
program returns to the Spectral Analysis Menu which now contains the
3 new items, [Weight function (plot and file)], [Spectrum density estimate
(windowed)], and [Windowed spectral estimate with MODEL spectrum].
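The smoothing operation amounts to a weighted moving average of the periodogram ordinates; the Python sketch below (not PEST's code, and with a simple edge treatment chosen purely for illustration) shows the idea using the weights of the example that follows.

    import numpy as np

    def smoothed_periodogram(I, weights):
        w = np.asarray(weights, dtype=float)        # W(0), W(1), ..., W(m)
        full = np.concatenate([w[:0:-1], w])        # W(-m), ..., W(m) by symmetry
        full = full / full.sum()                    # rescale so the weights add to one
        m = len(w) - 1
        padded = np.pad(I, m, mode="edge")          # simple treatment of the end points
        return np.convolve(padded, full, mode="valid")

    # periodogram of a simulated series, smoothed with the weights 3, 3, 3, 2, 1
    x = np.random.default_rng(6).normal(size=131)
    I = np.abs(np.fft.fft(x))[1:66] ** 2 / 131
    f_hat = smoothed_periodogram(I, [3, 3, 3, 2, 1])
    print(len(f_hat), f_hat[:3])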
[Plot of the smoothed periodogram of AIRRES.DAT together with the model spectral density; horizontal scale in units of 2π/n, n = 131.]

FIGURE 2.34. Smoothed spectrum estimate for AIRRES.DAT

EXAMPLE: Try estimating the spectral density of the data file AIRRES.DAT using the weight function W(0) = W(1) = W(2) = 3/21, W(3) = 2/21 and W(4) = 1/21. Begin by typing N 4↵ 3↵ 3↵ 3↵ 2↵ 1↵ . (The program automatically divides the weights entered by 21 so that they add to 1.) Plot the weight function by typing W. The entries ↵ C N return you to the Spectral Analysis Menu. Type I↵ to plot the smoothed periodogram together with the model spectral density (Figure 2.34). This can be plotted on a more natural scale by typing ↵ R↵ .0003↵ 0↵ (see Figure 2.35). Approximate 95% confidence bounds for the true spectral density, f(w_j), are given (BD Section 10.4) by

    f̂(w_j) ± 1.96 f̂(w_j) ( Σ_{|k|≤m} W(k)^2 )^{1/2}.

These bounds are compatible with the constant spectral density of white noise.
The estimate ln f̂(w_j) of the log spectrum can be plotted by typing ↵ C N Y. Approximate 95% confidence bounds for ln f(w_j) are given by

    ln f̂(w_j) ± 1.96 ( Σ_{|k|≤m} W(k)^2 )^{1/2},   or   ln f̂(w_j) ± .6921.

FIGURE 2.35. Rescaled spectrum estimate for AIRRES.DAT
It is often more convenient to make inferences for In f since the
widths of the confidence intervals are the same for all frequen-
cies.
3

SMOOTH
3.1 Introduction (BD Section 1.4)

To run the program SMOOTH, double click on the icon labelled smooth
from the itsmw window (or in DOS type SMOOTH↵ from the c:\ITSMW directory). After pressing ↵ to clear the screen of the title page, you will be
asked if you wish to [Enter data] or to [Exit SMOOTH]. Type E and se-
lect the name of the data file to be smoothed. After following the program
prompts, you will see the Smoothing Menu which provides a choice of three
smoothing methods for the series {Xt, t = 1, ... , n}.
Smooth the data using a symmetric moving average
The smoothed values are found from
    m_t = Σ_{j=-q}^{q} a(j) X_{t-j},   t = 1, ..., n,

where X_t := X_1 for t < 1 and X_t := X_n for t > n.


Exponentially smooth the data
The smoothed values are found from the recursions, m_1 = X_1 and

    m_t = a X_t + (1 - a) m_{t-1},   t = 2, ..., n,

where a is a specified smoothing constant (0 ≤ a ≤ 1).
Remove high frequency components
First the discrete Fourier transform,

    a_j = n^{-1/2} Σ_{t=1}^{n} X_t e^{-i t w_j},   w_j = 2πj/n,   -n/2 < j ≤ n/2,

is computed. Next the coefficients a_j corresponding to frequencies greater in absolute value than fπ (where f is a parameter between 0 and 1) are set equal to zero. The resulting transform is then inverted to produce the smoothed data,

    m_t = n^{-1/2} Σ_{|w_j| ≤ fπ} a_j e^{i t w_j},   t = 1, ..., n.
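A rough Python analogue of this low-pass operation, using numpy's FFT and simply zeroing the coefficients at frequencies above fπ, might look as follows. It is only a sketch of the idea, not the ITSM implementation.

    import numpy as np

    def lowpass_smooth(x, f):
        """Remove Fourier components with |w_j| > f*pi (illustrative sketch)."""
        x = np.asarray(x, dtype=float)
        a = np.fft.fft(x)
        w = np.abs(np.fft.fftfreq(len(x))) * 2 * np.pi   # |w_j| in [0, pi]
        a[w > f * np.pi] = 0.0                           # zero the high-frequency coefficients
        return np.fft.ifft(a).real                       # smoothed series m_t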

3.2 Moving Average Smoothing


If you select the option [Smooth the data using a symmetric moving av-
erage] you will be asked to enter the half-length q and the coefficients
a(O), a(l), ... , a(q) of the required moving average,
    m_t = Σ_{j=-q}^{q} a(j) X_{t-j},   t = 1, ..., n,

where a(j) = a( -j), j = 1, ... ,q.


The integer q can take any value greater than or equal to zero and
less than n/2.
You may enter any real numbers for the coefficients a(j),j = 0, ... ,q.
These will automatically be rescaled by the program so that a( 0) + 2a( 1) +
... + 2a(q) = 1. (This is achieved by dividing each entered coefficient by
the sum a(O) + 2a(1) + ... + 2a(q). The program therefore prevents you
from entering weights for which this sum is zero.)
Once the parameters q, a(O), ... , a(q) have been entered, the program
will print on the screen the square root of the average squared deviation of
the smoothed values from the original observations, i.e.
    SQRT(MSE) = ( n^{-1} Σ_{j=1}^{n} (m_j - X_j)^2 )^{1/2}.
It will then plot the original data. Typing +-' will cause the smoothed
values to be plotted on the same graph. When plotting has been completed
you will be given the option of filing the smoothed values. Finally you will
be returned to the menu and offered the four choices listed in Section 3.1.
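A compact Python version of this filter, with the same end-point convention (X_t = X_1 for t < 1 and X_t = X_n for t > n) and the same rescaling of the entered coefficients, is sketched below to clarify the computation; it is not the ITSM code itself. For instance, the five-term average used in the STRIKES.DAT example below would correspond to ma_smooth(x, [1, 1, 1]).

    import numpy as np

    def ma_smooth(x, a):
        """Symmetric moving-average smoother; `a` holds a(0), a(1), ..., a(q) (sketch)."""
        x = np.asarray(x, dtype=float)
        a = np.asarray(a, dtype=float)
        q = len(a) - 1
        a = a / (a[0] + 2 * a[1:].sum())        # rescale so a(0) + 2a(1) + ... + 2a(q) = 1
        xp = np.concatenate([np.full(q, x[0]), x, np.full(q, x[-1])])   # extend the ends
        weights = a[np.abs(np.arange(-q, q + 1))]                       # a(q),...,a(0),...,a(q)
        m = np.array([np.dot(weights, xp[t:t + 2 * q + 1]) for t in range(len(x))])
        rmse = np.sqrt(np.mean((m - x) ** 2))   # the SQRT(MSE) printed by SMOOTH
        return m, rmse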

EXAMPLE: To smooth the data set STRIKES.DAT using the


moving average with weights a(j) = .2, j = -2, -1,0,1,2, and
a(j) = 0, Ijl > 2, use the following sequence of entries (starting
from the Smoothing Menu):
S +-' 2+-' 1+-' 1+-' 1+-' +-'
At this point the screen will display the value
SQRT(MSE)=1956.142000
Typing +-' +-' will then produce the graph displayed in Figure
3.1. The points denoted by squares are the original data and
the points joined by a continuous line are the smoothed values.

To file the smoothed values under the file name SMST .DAT,
type ↵ C Y SMST.DAT ↵ . The smoothed values will then be
filed under the name specified and the screen will again display
the Smoothing Menu listed above in Section 3.1.
FIGURE 3.1. The series STRIKES.DAT with smoothed values obtained from a simple moving average of length 5

3.3 Exponential Smoothing


If you select the option [Exponentially smooth the data] you will be asked
to enter the parameter a in the smoothing recursions, m_1 = X_1 and

    m_t = a X_t + (1 - a) m_{t-1},   t = 2, ..., n.

The choice a = 1 gives no smoothing (m_t = X_t, t = 1, ..., n) while the choice a = 0 gives maximum smoothing (m_t = X_1, t = 1, ..., n). Enter -1
if you would like the program to select a value for a automatically. This
option is particularly useful if you plan to use the smoothed value mn as
the predictor of the next observation X nH . The automatic selection option
determines the value of a which minimizes the sum of squares,
    Σ_{j=2}^{n} (X_j - m_{j-1})^2,

of the prediction errors when each smoothed value mj-l is used as the
predictor of the next observation X j •
Once the parameter a has been entered (or automatically selected), the
program will display the square root of the average squared deviation of
the smoothed values from the original observations, i.e.
    SQRT(MSE) = ( n^{-1} Σ_{j=1}^{n} (m_j - X_j)^2 )^{1/2}.

FIGURE 3.2. The series STRIKES.DAT showing smoothed values obtained by exponential smoothing with parameter a = .68


It will then plot the original data. Typing ~ will cause the smoothed
values to be plotted on the same graph. When plotting has been completed
you will be given the option of filing the smoothed values. Finally you will
be returned to the menu and offered the four choices listed in Section 3.1.
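The following Python sketch mirrors the recursion and the automatic choice of a; the crude grid search over (0, 1] is an assumption made purely for illustration, since the manual does not specify how ITSM performs the minimization.

    import numpy as np

    def exp_smooth(x, a):
        """Exponential smoothing: m_1 = X_1, m_t = a*X_t + (1-a)*m_{t-1} (sketch)."""
        x = np.asarray(x, dtype=float)
        m = np.empty_like(x)
        m[0] = x[0]
        for t in range(1, len(x)):
            m[t] = a * x[t] + (1 - a) * m[t - 1]
        return m

    def auto_exp_smooth(x, grid=np.linspace(0.01, 1.0, 100)):
        """Choose a to minimize sum_{j>=2} (X_j - m_{j-1})^2, the one-step prediction errors."""
        x = np.asarray(x, dtype=float)
        best_a = min(grid, key=lambda a: np.sum((x[1:] - exp_smooth(x, a)[:-1]) ** 2))
        return best_a, exp_smooth(x, best_a)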
EXAMPLE: Continuing with the example in Section 3.2 (again
starting from the Smoothing Menu), exponential smoothing of
STRIKES.DAT with automatic selection of a can be achieved
by typing:
E ~-1~
At this point the screen will display the values

SQRT(MSE) 974.188000
Parameter a = .68

Typing ~ ~ will then produce the graph displayed in Figure


3.2. The points denoted by squares are the original data and
the points joined by a continuous line are the smoothed values.
To file the smoothed values under the file name SMST .DAT,
type ↵ C Y SMST.DAT ↵ . When filing is completed you will
be returned to the Smoothing Menu.
FIGURE 3.3. The series STRIKES.DAT showing smoothed values obtained by removing Fourier components with frequencies greater than fπ where f = .25

Exponential smoothing is sometimes used for forecasting. The forecasts


of all future data values X_{n+1}, X_{n+2}, ..., are given by m_n. For the strikes data, the forecasts of each X_n, n > 30 are m_30 = 4167.38. Forecasting
by exponential smoothing is a simple but crude procedure which uses very
little of the information contained in the data. Better forecasts can usually
be obtained from the programs PEST or ARAR.

3.4 Removing High Frequency Components


If you select the option [Remove high frequency components] you will be
asked to enter a smoothing parameter f between 0 and 1. The smaller
the value of f the more the series is smoothed, with maximum smoothing
occurring when f = 0.
Once the parameter f has been entered, the program will print on the
screen the square root of the average squared deviation of the smoothed
values from the original observations, i.e.
    SQRT(MSE) = ( n^{-1} Σ_{j=1}^{n} (m_j - X_j)^2 )^{1/2}.
It will then plot the original data. Typing ~ will cause the smoothed
values to be plotted on the same graph. When plotting has been completed
you will be given the option of filing the smoothed values. Finally you will

be returned to the Smoothing Menu.

EXAMPLE: Continuing with STRIKES.DAT and starting again


from the Smoothing Menu, we can remove the top 75 percent
of the frequency components of the data by typing
R .25~~~

The smoothed values are plotted in Figure 3.3. They can be filed
exactly as described above for moving average and exponential
smoothing.
4

SPEC
4.1 Introduction
To run the program SPEC, double click on the icon labelled spec in the
itsm window (or in DOS type SPEC~ from the c: \ ITSMW directory). After
pressing ~ to clear the title page, you will see a menu offering the choice of
spectral analysis for one or two data sets. For spectral analysis of a univari-
ate series, type 0 and select the name of the data file to be analyzed. Once
the data has been read in, a menu will appear which is practically identi-
cal to the Spectral Analysis Menu of PEST(in the option [Nonparametric
spectral estimation (SPEC)] of the Main Menu). For instructions on the use
of the univariate options available in this menu see Section 2.7.

4.2 Bivariate Spectral Analysis (BD Section 11.7)


If you select the bivariate option you will be asked to specify the file names
of the two series, {Xtl, t = 1, ... , n} and {Xt2' t = 1, ... , n}, to be analyzed.
(These are assumed to be stored in separate ASCII files.) Once the series
have been read in by the program, you will see the Bivariate Spectral
Analysis Menu shown in Figure 4.l.
At this point no weight function has been specified, so the estimated
spectral densities of the first and second series, f̂_11 and f̂_22, are just the respective periodograms divided by 2π, i.e.

    f̂_11(w_k) = (1/2π) I_11(w_k) = (1/2π) n^{-1} | Σ_{t=1}^{n} X_{t1} e^{-i t w_k} |^2,
    f̂_22(w_k) = (1/2π) I_22(w_k) = (1/2π) n^{-1} | Σ_{t=1}^{n} X_{t2} e^{-i t w_k} |^2,

where w_k = 2πk/n, k = 0, 1, ..., [n/2] are the Fourier frequencies in [0, π] and [n/2] is the integer part of n/2. The cross periodogram is defined as

    I_12(w_k) = n^{-1} ( Σ_{t=1}^{n} X_{t1} e^{-i t w_k} ) ( Σ_{t=1}^{n} X_{t2} e^{i t w_k} ).

Without smoothing, the estimated absolute coherency spectrum is


    |K̂_12(w_k)| = |f̂_12(w_k)| / ( f̂_11(w_k)^{1/2} f̂_22(w_k)^{1/2} ) = 1.
In order to find a meaningful estimate of the absolute coherency spectrum
(and better estimates of the marginal and phase spectra), it is necessary
to smooth the periodogram.
FIGURE 4.1. The Bivariate Spectral Analysis Menu

A weight function {W(j), |j| ≤ m} for smoothing the periodogram is entered by selecting the option [Enter weight function for smoothing] of the Bivariate Spectral Analysis Menu. After typing E ↵ you will be asked to enter a value for m. Type -1↵ if a weight function is to be read from a file and 0↵ if you wish to return to the Bivariate Spectral Analysis Menu without entering a weight function. If you enter a positive integer value for m, you will be asked to enter the weights W(0), W(1), ..., W(m), all of which must be nonnegative. The program ensures that the weight function is symmetric by defining W(-j) = W(j), j = 1, ..., m, and then rescales the weights so that they add to one. After the weights have been entered, the program returns to the menu which now contains the additional option, [Graph and file the weight function].

4.2.1 ESTIMATING THE SPECTRAL DENSITY OF EACH SERIES

After a weight function {W(k)} has been entered, the marginal and cross spectral densities are estimated as

    f̂_11(w_j) = (1/2π) Σ_{|k|≤m} W(k) I_11(w_j + w_k),
    f̂_22(w_j) = (1/2π) Σ_{|k|≤m} W(k) I_22(w_j + w_k),
    f̂_12(w_j) = (1/2π) Σ_{|k|≤m} W(k) I_12(w_j + w_k).
FIGURE 4.2. Smoothed spectrum estimate for DLEAD.DAT

The estimates of the marginal densities f̂_11 and f̂_22 are plotted by selecting the options [Plot estimated f11 for first series] and [Plot estimated f22 for
second series] of the Bivariate Spectral Analysis Menu.

EXAMPLE: Use PEST to difference the series LEAD.DAT and


SALES.DAT each once at lag 1. File the resulting series as
DLEAD.DAT and DSALES.DAT respectively. To conduct a bi-
variate spectral analysis of DLEAD.DAT and DSALES.DAT,
proceed as follows. Run SPEC as described above, press ~ to
clear the title page and type T to indicate that you wish to an-
alyze two data sets. Select DLEAD.DAT and DSALES.DAT as
the first and second series respectively. The Bivariate Spectral
Analysis Menu will then appear. To compute smoothed spec-
tral density estimates with weight function W(0) = W(1) = ... = W(6) = 1/13, the first step is to enter the weights by typing E 6↵ 1↵ 1↵ 1↵ 1↵ 1↵ 1↵ 1↵ (the program auto-
matically rescales the weights so that they add to 1). Plot the
weight function by typing G. The sequence of entries ~ C N
will then return you to the Main Menu. To plot the estimated
spectral density of DLEAD.DAT or DSALES.DAT type P or
L respectively (see Figures 4.2 and 4.3). Confidence bounds for
the spectral densities of DLEAD.DAT and DSALES.DAT are
computed as described in Section 2.7.4.
FIGURE 4.3. Smoothed spectrum estimate for DSALES.DAT

4.2.2 ESTIMATING THE ABSOLUTE COHERENCY SPECTRUM

The absolute coherency spectrum is estimated by

    |K̂_12(w_j)| = |f̂_12(w_j)| / ( f̂_11(w_j)^{1/2} f̂_22(w_j)^{1/2} ),

where f̂_12(·) is the estimate of the cross spectrum given by

    f̂_12(w_j) = (1/2π) Σ_{|k|≤m} W(k) I_12(w_j + w_k).
Roughly speaking, the absolute coherency at frequency λ is the absolute value of the correlation between the frequency-λ harmonic components in the two series (see BD Sections 11.6 and 11.7). An absolute coherency near
1 indicates a strong linear relationship between the sinusoidal components
in the two series.
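To make the definitions concrete, here is a small Python sketch that forms the smoothed estimates f̂_11, f̂_22 and f̂_12 from the two periodograms and returns the estimated absolute coherency and phase. The circular (wrap-around) handling of frequencies at the ends of the range is an illustrative choice rather than the ITSM convention.

    import numpy as np

    def bivariate_spectra(x1, x2, weights):
        """Smoothed spectral estimates, coherency and phase (illustrative sketch)."""
        x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
        n = len(x1)
        J1, J2 = np.fft.fft(x1), np.fft.fft(x2)      # sum_t X_t e^{-i t w_k}, k = 0,...,n-1
        I11, I22 = np.abs(J1) ** 2 / n, np.abs(J2) ** 2 / n
        I12 = J1 * np.conj(J2) / n                   # cross periodogram
        w = np.asarray(weights, float)
        W = np.concatenate([w[:0:-1], w])            # W(-m),...,W(m), rescaled to sum to one
        W = W / W.sum()
        m = len(w) - 1
        def smooth(I):                               # circular smoothing over neighbouring w_k
            return sum(Wk * np.roll(I, -k) for Wk, k in zip(W, range(-m, m + 1))) / (2 * np.pi)
        f11, f22, f12 = smooth(I11), smooth(I22), smooth(I12)
        coherency = np.abs(f12) / np.sqrt(f11 * f22)
        phase = np.angle(f12)                        # estimated phase spectrum arg(f12)
        return f11, f22, f12, coherency, phase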

EXAMPLE: For DLEAD.DAT and DSALES.DAT, select the option [Plot estimated abs. coherency |K12|] of the Bivariate Spectral Analysis Menu to plot the estimated absolute coherency (see Figure 4.4). For this example, the estimated absolute coherency is rather large for all Fourier frequencies. A 100(1-α)% confidence interval for |K_12(w_j)| is the intersection of [0, 1] with the interval given in BD equation (11.7.13), which depends on a_n^2 = Σ_{|k|≤m} W^2(k) and on the percentiles Φ_α of the standard normal distribution. For this example, a_n = 1/√13. The lower limit of the 95% confidence interval for |K̂_12(w_j)| is bounded well away from 0, which suggests that the absolute coherency is positive for all frequencies.

FIGURE 4.4. Estimated absolute coherency for (DLEAD.DAT, DSALES.DAT)

4.2.3 ESTIMATING THE PHASE SPECTRUM

The phase spectrum, φ_12(λ) ∈ [-π, π], is defined as arg(f_12(λ)). The phase spectrum is a measure of the phase lag of the frequency-λ component of {X_{t2}} behind that of {X_{t1}}. The derivative of φ_12(λ) can be interpreted as the time lag by which the frequency-λ component of X_{t2} follows that of X_{t1}. For example, if φ_12(λ) is piecewise linear with slope d, then X_{t2} lags d time units behind X_{t1}. The phase spectrum is estimated by

    φ̂_12(w_j) = arg(f̂_12(w_j)).
FIGURE 4.5. Estimated phase spectrum for (DLEAD.DAT, DSALES.DAT)

EXAMPLE: Continuing with the analysis of the bivariate se-


ries (DLEAD.DAT,DSALES.DAT), select the option [Plot esti-
mated phase spectrum PHI12] to plot the estimated phase spectrum (see Figure 4.5). The graph of φ̂_12(w_j) is roughly piece-
wise linear with slope 4.1 at the low frequencies and slope 2.7
at higher frequencies. This suggests that DSALES.DAT follows
DLEAD.DAT by approximately 3 time units. A transfer func-
tion model with input DLEAD.DAT and output DSALES.DAT
(see Chapter 5 and BD Section 13.1) and a bivariate AR model fitted to the series (DLEAD.DAT, DSALES.DAT) (see Chapter 6 and BD Section 11.5) both support this observation.
5
TRANS
5.1 Introduction (BD Section 13.1)

To run the program TRANS, double click on the icon labelled trans in the
itsm window (or in DOS type TRANS+-> from the c: \ITSMW directory). After
pressing +-> to clear the title page, you will see the Main Menu (Figure 5.1)
with five options and two explanatory statements.

5.2 Computing Cross Correlations (BD Section 11.2)

If you choose the option [Compute sample cross correlations of two series],
you will be asked to select the file names of the first series {Y1 (t), t =
1, ... , n} and the second series {Y2 (t), t = 1, ... , n}.
You may then calculate the sample cross correlations

    ρ̂_{Y1,Y2}(h) = γ̂_{Y1,Y2}(h) ( γ̂_{Y1,Y1}(0) γ̂_{Y2,Y2}(0) )^{-1/2},   |h| < n,

where γ̂_{Y1,Y2}(h) denotes the sample cross covariance of the two series at lag h.
Alternatively you may apply up to two differencing operators to the two


series (the same operators will be applied to both) and compute the sample
cross correlations of the differenced series. For example, if you select two
differencing operators with lags h and 12 , the program will compute the
sample cross correlations of {X1 (t)} and {X2 (t)}, where

X 1 (t) = (1-BIt)(1-Bh)Y1 (t), t=h+l2+ 1, ... ,n


X 2(t) = (1- Bit )(1 - Bh)Y2(t), t = h + l2 + 1, ... , n,
and B is the backward shift operator (i.e. B'Yi(t) = Yi(t -l)).
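For readers who want to check the numbers independently, a minimal Python version of the sample cross-correlation is sketched below. It uses the common convention that γ̂_{Y1,Y2}(h) pairs Y1(t+h) with Y2(t) and divides by n; ITSM's exact conventions may differ slightly.

    import numpy as np

    def cross_correlation(y1, y2, max_lag=30):
        """Sample cross-correlations rho_hat_{Y1,Y2}(h), h = -max_lag,...,max_lag (sketch)."""
        y1 = np.asarray(y1, float) - np.mean(y1)
        y2 = np.asarray(y2, float) - np.mean(y2)
        n = len(y1)
        denom = np.sqrt(np.mean(y1 ** 2) * np.mean(y2 ** 2))
        rho = {}
        for h in range(-max_lag, max_lag + 1):
            if h >= 0:
                gamma = np.sum(y1[h:] * y2[:n - h]) / n   # pairs Y1(t+h) with Y2(t)
            else:
                gamma = np.sum(y1[:n + h] * y2[-h:]) / n
            rho[h] = gamma / denom
        return rho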

EXAMPLE: To compute the sample cross correlations of the two


data sets Y1 =LEAD.DAT and Y2 =SALES.DAT, select the
data files as described above. Then type 0 +-> to indicate that
no differencing is required. Press +-> and you will see the graph
of cross correlations shown in Figure 5.2.
FIGURE 5.1. The Main Menu of TRANS.

When you have inspected the graph, press +-' and you will be
asked if you wish to list the sample autocorrelations on the
screen. Type Y and you will see a listing of ρ̂_{Y1,Y2}(h), h =
-30, -29, ... ,30 and be asked whether or not you wish to file
the cross correlations. Type N and you will then be returned
to the Main Menu.
Inspection of the graphs of the two data sets Y I =LEAD.DAT
and Y2 =SALES.DAT and their autocorrelations using PEST
suggests a single differencing at lag 1 to make the series sta-
tionary. If X I and X 2 denote the series

Xi(t) = (1 - B)Yi(t) = Yi(t) - Yi(t - 1), i = 1,2,

then the sample autocorrelation function of Xl and X 2 can be


computed using the following entries immediately after reading
in the two data files.
1+-' 1+-' +-' dlead +-' dsales +-'

At this point the screen will display the graph of cross corre-
lations shown in Figure 5.3. As before, you will be given the
options of listing and filing the cross correlations before being
returned to the Main Menu.
FIGURE 5.2. The cross correlations of LEAD.DAT and SALES.DAT

5.3 An Overview of Transfer Function Modelling


• Given observations of an "input" series {Yl (t)} and an "output" series
{Y2 (t)}, the steps in setting up a transfer function model relating Y2
to Y l begin with differencing and mean correction to generate trans-
formed input and output series Xl and X 2 which can be modelled as
zero mean stationary processes. Suitable differencing operators (up
to two are allowed by TRANS) can be found by examination of the
series YI and Y2 using PEST. The same differencing operations will
be applied to both series.
• An ARMA model is fitted to the transformed input series Xl using
PEST, and the residual series Rl is filed for later use. The same
ARMA filter is then applied to X 2 using the option [Likelihood of
Model (no optimization)] of the Estimation Menu of PEST (to reach
the Estimation Menu select [ARMA parameter estimation] from the
Main Menu). The residual series R2 is then filed.
• A preliminary transfer function model relating X 2 to Xl is found
using the option [Fit preliminary model] of TRANS. This model has
the form,
    X_2(t) = Σ_{j=0}^{m} t(j) X_1(t - j) + N(t),

where {N(t)} is a zero mean stationary noise sequence.

FIGURE 5.3. The cross correlations of DLEAD and DSALES, obtained by differencing LEAD.DAT and SALES.DAT at lag 1

• It is often convenient to replace the transfer function Σ_{j=0}^{m} t(j) B^j


by a rational function of B with fewer coefficients. For example, the
transfer function,

    2B + .22B^2 + .018B^3 + .002B^4,

could be approximated by the more parsimonious transfer function,

    T(B) = 2B / (1 - .1B).

• Given the series Xl and X 2 and given any rational transfer function
T(B), the option [Estimate residuals from preliminary transfer function
model] of TRANS calculates values of the noise series {N(t)} in the
model
X 2 (t) = T(B)Xl(t) + N(t).
• An ARMA model φ(B)N(t) = θ(B)W(t) is then fitted to the noise series {N(t)}. This gives the preliminary transfer function model,

    X_2(t) = T(B) X_1(t) + (θ(B)/φ(B)) W(t).

• The option [Transfer function modelling and prediction] of TRANS re-


quires that you enter the preliminary model just determined. It rees-
timates the coefficients in the preliminary model using least squares.
A Kalman filter representation of the model is used to determine
minimum mean squared error linear predictors of the output series.
Model selection can be made with the AICC statistic, which is com-
puted for each fitted model. Model checking can be carried out by
checking the residuals for whiteness and checking the cross correla-
tions of the input residuals and the transfer function residuals.

5.4 Fitting a Preliminary Transfer Function Model
The option [Fit preliminary model] of the Main Menu of TRANS is con-
cerned with the problem of providing rough estimates of the coefficients
t(0), t(1), ... in the following model for the relation between two zero-mean stationary time series X_1 and X_2:

    X_2(t) = Σ_{j=0}^{∞} t(j) X_1(t - j) + N(t),
where {N{t)} is a zero-mean stationary process, uncorrelated with the "in-
put" process Xl. (See BD Section 13.1 for more details.)
Before using this program it is necessary to have filed the residual series
Rl obtained from PEST after fitting an ARMA model to the series Xl. The
residual series R2, obtained by applying the same ARMA filter to the series
X 2 , is also needed. This is obtained by applying the option [Likelihood of
the Model] of the Estimation Menu in PEST to the data X 2 with the same
ARMA model which was fitted to the series X_1. The residuals so obtained
constitute the required series R2 •
When [Fit preliminary model] is selected from the Main Menu of TRANS,
you will be asked for the names of the files containing the "input residuals" ,
R l , and the "output residuals", R2. You will then be asked for the order of
the moving average relating X_2 to X_1. If you specify the order as m (< 31),
estimates will be printed on the screen of the coefficients in the relation,
    X_2(t) = Σ_{j=0}^{m} t(j) X_1(t - j) + N(t).
You may wish to print the estimated coefficients t(j) for later use.
To check which of the estimated coefficients are significantly different
from zero and to check the appropriateness of the model, we next plot the
sample cross correlations of R_2(t + h) and R_1(t) for h = -30, -29, ..., 30. These correlations ρ̂(h) are directly proportional to the estimates of t(h)

(see BD Section 13.1). Sample correlations which fall outside the plotted bounds (±1.96/√n) are significantly different from zero (with significance level approximately .05). The plotted values ρ̂(h) should therefore lie within the bounds for h < b, where b, the smallest non-negative integer such that |ρ̂(b)| > 1.96/√n, is our estimate of the delay parameter. Having identified the delay parameter b, the model previously printed on the screen is revised by setting t(j) = 0, j < b, giving

    X_2(t) = Σ_{j=b}^{m} t(j) X_1(t - j) + N(t).

After inspecting the graph and recording the estimated delay parameter
b and coefficients t(b), ... , t(m), press any key and you will be returned to
the Main Menu.
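The prewhitening idea behind this option can be illustrated with a short Python sketch. Under the usual prewhitening argument, t(h) is estimated by ρ̂_{R2,R1}(h) multiplied by the ratio of the residual standard deviations; this particular proportionality constant is offered as a plausible reconstruction of the computation (the manual only states that the coefficients are proportional to the correlations), not as the exact ITSM code.

    import numpy as np

    def preliminary_transfer(r1, r2, m=10):
        """Rough estimates t_hat(0),...,t_hat(m) from input/output residuals (sketch).

        Uses t_hat(h) = rho_hat_{R2,R1}(h) * sd(R2)/sd(R1), the usual prewhitening estimate."""
        r1 = np.asarray(r1, float) - np.mean(r1)
        r2 = np.asarray(r2, float) - np.mean(r2)
        n = len(r1)
        scale = np.std(r2) / np.std(r1)
        t_hat = []
        for h in range(m + 1):
            rho = np.sum(r2[h:] * r1[:n - h]) / (n * np.std(r1) * np.std(r2))   # rho_hat_{R2,R1}(h)
            t_hat.append(rho * scale)
        return np.array(t_hat)

    # The estimated delay b is the smallest h with |rho_hat(h)| > 1.96/sqrt(n).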

EXAMPLE: We shall illustrate the use of the option [Fit prelim-


inary model] with reference to the data sets YI =LEAD.DAT
and Y2 =SALES.DAT.
Analysis of these data sets using PEST suggests that differ-
encing at lag 1 and subtracting the means from each of the
resulting two series gives rise to series Xl and X2 which can be
well modelled as zero mean stationary series. The values of the
two series are

XI(t) = YI(t) - YI(t -1) - .0228, t = 2, ... ,150,


X 2 (t) = Y2 (t) - Y2 (t - 1) - .420, t = 2, ... ,150,
and the ARMA model fitted by PEST to X I is

    X_1(t) = Z(t) - .474 Z(t - 1),   {Z(t)} ~ WN(0, .0779).

The residuals RI computed from PEST have already been filed


under the file name LRES.DAT. Likewise the residuals R2 ob-
tained by applying the filter (1- .474B)-1 to the series X 2 have
been filed as SRES.DAT. (To generate the latter from PEST,
input the data set Y2 , difference at lag 1, subtract the mean,
input the MA(l) model X(t) = Z(t) - .474Z(t - 1), and use
the option [Likelihood of the Model] of the Estimation Menu to
compute and file the residuals.)
To find a preliminary transfer function model relating X 2 to
Xl, start from the point where the Main Menu of TRANS is
displayed upon the screen and type F. Select LRES.DAT and
SRES.DAT as the "input" and "output" residuals respectively.
Press ↵ and type 10↵ . At this point the estimated coefficients t(0), t(1), ..., t(10) will be displayed on the screen (see Figure 5.4).

FIGURE 5.4. The estimated coefficients in the transfer function model relating X_2 to X_1

On pressing ↵ ↵ , you will then see the sample cross correlations shown in Figure 5.5. It is clear from the graph that the correlations are negligible for lags h < 3 and that the estimated delay parameter is b = 3. The preliminary model is therefore,

    X_2(t) = t(3) X_1(t - 3) + ... + t(10) X_1(t - 10) + N(t),

where t(3), ..., t(10) are as shown in Figure 5.4.

5.5 Calculating Residuals from a Transfer Function Model
The option [Estimate residuals from preliminary transfer function model] of
TRANS uses observed values of X_1(t) and X_2(t) and a postulated transfer function model,

    X_2(t) = B^b (w(0) + w(1)B + ... + w(r)B^r)(1 - v(1)B - ... - v(s)B^s)^{-1} X_1(t) + N(t),

to generate estimated values N̂(t), t > m = max(r + b, s), of N(t). The estimates are evaluated from the preceding equation by setting N̂(t) = 0 for t ≤ m and solving for N̂(t), t > m.

FIGURE 5.5. The cross correlations of LRES.DAT and SRES.DAT
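A direct way to carry out this recursion in Python is sketched below. It assumes the model written above, with delay b, numerator coefficients w(0), ..., w(r) and denominator coefficients v(1), ..., v(s), and simply rearranges V(B)N(t) = V(B)X2(t) - B^b W(B)X1(t) for N(t). With the model fitted in the example below, transfer_residuals(x1, x2, b=3, w=[4.86], v=[.7]) would carry out the same recursion, although the numerical output may differ from NOISE.DAT in the program's internal conventions.

    import numpy as np

    def transfer_residuals(x1, x2, b, w, v):
        """Estimate N(t) in X2(t) = B^b W(B) V(B)^{-1} X1(t) + N(t) (illustrative sketch).

        w = [w(0),...,w(r)], v = [v(1),...,v(s)]; N(t) is set to 0 for t <= m = max(r+b, s)."""
        x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
        r, s = len(w) - 1, len(v)
        m = max(r + b, s)
        n = len(x1)
        N = np.zeros(n)
        for t in range(m, n):                 # 0-based index; corresponds to t > m in the text
            N[t] = (x2[t]
                    - sum(v[i - 1] * x2[t - i] for i in range(1, s + 1))
                    - sum(w[j] * x1[t - b - j] for j in range(r + 1))
                    + sum(v[i - 1] * N[t - i] for i in range(1, s + 1)))
        return N[m:]                          # estimated N(t), t > m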

EXAMPLE: Continuing with the example of Section 5.4, we


observe that the estimated moving average transfer function model relating X_2 to X_1 can be well approximated by a model with fewer coefficients, namely,

    X_2(t) = 4.86 B^3 (1 - .7B)^{-1} X_1(t) + N(t).

To generate estimated values of the noise, N̂(t), 3 < t ≤ 149, we


first generate the series Xl and X 2 by appropriate differencing
and mean correcting of the input series, LEAD.DAT, and the
output series, SALES.DAT. Again start from the Main Menu
and type E. After selecting LEAD.DAT and SALES.DAT as
the input and output series respectively, difference the data at
lag 1 by typing 1+---' 1+---' .
Next enter the transfer function 4.86B^3(1 - .7B)^{-1} by typing
↵ ↵ 3↵ 0↵ 4.86↵ 1↵ .7↵
You will then be asked for a file name under which to store
{N(t)}. The entries,
NOISE.DAT↵ ↵

will cause the 146 noise estimates, {N(t), t = 4, ... , 149}, to


be stored in the file NOISE.DAT and return you to the Main
Menu. Subsequent analysis of this series using PEST suggests
the model

    N(t) = (1 - .582B) W(t),   {W(t)} ~ WN(0, .0486),


for the noise in the transfer function model.

5.6 LS Estimation and Prediction with Transfer Function Models
The option [Transfer function modelling and prediction] requires specification
of a previously fitted ARMA model for the input process and a tentatively
specified transfer function (including a model for the noise {N(t)}). It then
estimates the parameters in the model by least squares. The exact Gaussian
likelihood is computed using a Kalman filter representation of the model, so
that different models can be compared on the basis of their AICC statistics.
The Kalman filter representation is also used to give exact best linear
predictors of the output series using the fitted model. The mean squared
errors of the predictors are estimated using a large-sample approximation
for the k-step mean squared error.
The first step is to read in the input and output series and to generate the
stationary zero mean series Xl and X 2 by performing up to two differencing
operations followed by mean correction.
The next step is to specify the ARMA model fitted to the series Xl using
PEST and to specify the delay parameter, b, the orders, r, s, q and p and
preliminary estimates of the coefficients in the transfer function model (BD
Section 13.1),

    X_2(t) = [B^b (w(0) + w(1)B + ... + w(r)B^r) / (1 - v(1)B - ... - v(s)B^s)] X_1(t)
             + [(1 + θ(1)B + ... + θ(q)B^q) / (1 - φ(1)B - ... - φ(p)B^p)] W(t).
When the model has been specified, the Estimation and Prediction Menu
will appear as in Figure 5.6.
The option [Least squares estimation] computes least squares estimators
of all the parameters in the model and prints out the parameters of the
fitted model. Optimization is typically done with gradually decreasing step-
sizes, e.g. .1 for the first optimization, then .01 when the first optimization
is complete, and .001 or .0001 for the final optimization.
Once the parameters in the model have been estimated, AICC calculation
(for comparison of alternative models) and prediction of future values of the
FIGURE 5.6. The Estimation and Prediction Menu

output series can both be done using the option [AICC value and prediction].
Estimated mean squared errors for the predictors are obtained from large-
sample approximations to the k-step prediction errors for the fitted model
(see BD Section 13.1).
To check the goodness of fit of the model, the residuals {W(t)} should
be examined to check that they resemble white noise and that they are
uncorrelated with the residuals from the model fitted to the input process.
The option [File residuals and plot cross-correlations] allows them to be filed
for further study and checks the cross correlations with the input residuals,
provided the latter have been stored in a file which is currently accessible.

EXAMPLE: Continuing with the example of Section 5.4, we note


that the tentative transfer function model we have found relat-
ing X2 to Xl can now be expressed as,

    X_2(t) = 4.86 B^3 (1 - .7B)^{-1} X_1(t) + (1 - .582B) W(t),
             {W(t)} ~ WN(0, .0486),

where

    X_1(t) = (1 - .474B) Z(t),   {Z(t)} ~ WN(0, .0779).

Starting from the screen display of the Main Menu, we first select the option [Transfer function modelling and prediction] and generate the series X_1 and X_2 by appropriate differencing and mean correcting of the input series, LEAD.DAT, and the output series, SALES.DAT.

FIGURE 5.7. The fitted model after using least squares with step-size .1
After the data has been successfully entered and differenced, the
model previously fitted to Xl and the orders and coefficients of
the tentative transfer function model found in Section 5.4 are
now entered as follows :
↵ ↵ 0↵ 1↵ -.474↵ .0779↵ 3↵
0↵ 4.86↵ 1↵ .7↵ 1↵ -.582↵ 0↵ ↵

The specified model will then be displayed on the screen. Press


any key to see the Estimation and Prediction Menu shown in
Figure 5.6.
To obtain least squares estimates of the transfer function coeffi-
cients, select the option [Least squares estimation] with step-size
.1 by typing
L .1~
There will be a short delay while optimization is performed. The
screen will then display the new fitted coefficients and white
noise variance, as shown in Figure 5.7.
To refine the estimates, optimize again with step-size .01 by
typing ↵ L .01↵ and again with step-size .001 by typing ↵ L .001↵ . The resulting fitted model is shown in Figure 5.8.

FIGURE 5.8. The fitted model after two further optimizations with step-sizes .01 and .001
Future values of the original output series SALES.DAT may be
predicted with the fitted model by selecting the option [AICC
value and prediction] of the Estimation and Prediction Menu. To
predict the next 10 values of SALES.DAT, type A 10↵ . (After
typing A in ITSM41, the following warning will be displayed
on your screen:

Some mathematics coprocessors will have underflow


problems in this option. If this occurs you will
need to exit from TRANS,switch off the coprocessor
and rerun this option. The DOS command required
to switch off the coprocessor is
SET n087=COPROCESSOR OFF
To switch it on again use the command
SET n087=

If you have not already filed the current model, it


may save time to do so now.

Do you wish to file the model (y/n)?


FIGURE 5.9. The sample cross correlations of the residual series W.DAT and Z.DAT

If this warning is applicable to your mathematics coprocessor, you must turn it off as described in the above message. Assuming that this is not necessary, continue by typing N 10↵ .)
After a short delay you will see the message

AICC value = .277041E+02

Typing +-> gives ten predicted values of SALES.DAT, together


with the estimated root mean squared errors. The mean squared
errors are computed from the large sample approximations de-
scribed in BD Section 13.1. Type +-> Y +-> and the original
output series will be plotted on the screen. Then press any key
and the predictors will also be plotted on the same graph. Type
↵ C N ↵ to return to the Estimation and Prediction Menu.
To check the goodness of fit of the model, the option [File resid-
uals and plot cross-correlations] of this menu allows you to file
the estimated residuals W(t) from the transfer function model
and to check for zero cross-correlations with the input residuals
R 1 . To do this type
F W.DAT+-> Y LRES.DAT+-> Z.DAT+-> +->
At this point the estimated residuals, W(t),3 < t :::; 149, will
have been stored under the filename W.DAT and the corre-

sponding 146 values of R 1 (t) under the filename Z.DAT. You


will see on the screen the sample cross-correlations of these two
sets of residuals. For a good fit, approximately 95% of the plot-
ted values should lie within the plotted bounds. Inspection of
the graph shown in Figure 5.9 indicates that the fitted model is
satisfactory from the point of view of residual cross correlations.
(The sample autocorrelations of the residuals filed in W.DAT
and Z.DAT are also found, using PEST, to be consistent with
those of white noise sequences.)
After inspecting the graph of sample cross correlations, type ~
~ and you will be returned to the Estimation and Prediction
Menu.
The option [Try a new model] allows you to input a different
preliminary model, for which the preceding analysis can be re-
peated. Different models can be compared on the basis of their
AICC statistics.
The option [Enter a new data set] allows you to input a new
data set.
The last option returns you to the Main Menu.
6

ARVEC
6.1 Introduction
The program ARVEC fits a multivariate autoregression of any specified or-
der p < 21 to a multivariate time series {Y_t = (Y_{t1}, ..., Y_{tm})', t = 1, ..., n}.
To run the program, double click on the icon arvec in the itsmw window
(or type ARVEC +-' from the DOS prompt) and you will see a title page
followed by a brief introductory statement describing the program. After
reading this statement, follow the program prompts, selecting the option
[Enter data] by typing the highlighted letter E. You will then be asked to
enter the dimension m ≤ 6 (m ≤ 11 for ITSM50) of Y_t and to select the
file containing the observations {Y t , t = 1, ... ,n}. For example, to model
the bivariate data set LS2.DAT you would enter the dimension m = 2 and
then select the file LS2.DAT from the list of data files. The data must
be stored as an ASCII file such that row t contains the m components,
Y_t = (Y_{t1}, ..., Y_{tm})', each separated by at least one blank space. (The
sample size n can be at most 700 for ITSM41 and 10000 for ITSM50.) The
value of n will then be printed on the screen and you will be given the
option of plotting the component series.
Examination of the graphs of the component series and their autocorre-
lations (which can be checked using PEST) indicates whether differencing
transformations should be applied to the series {Yt} before attempting
to fit an autoregressive model. After inspecting the graphs you will there-
fore be asked if you wish to difference the data and, if so, to enter the
number of differencing transformations required (0,1 or 2) and the corre-
sponding lags. If, for example, you request two differencing operations with
LAG(1)=1 and LAG(2)=12, then the series {Yt } will be transformed to
the differenced series, (1 - B)(1 - B^{12}) Y_t = Y_t - Y_{t-1} - Y_{t-12} + Y_{t-13}. The
resulting differenced data is then automatically mean-corrected to generate
the series {Xt }. To fit a multivariate autoregression to the series {Xt } you
can either specify the order of the autoregression to be fitted or select the
automatic minimum AICC option. The estimation algorithm is given in
the following section.

6.1.1 MULTIVARIATE AUTOREGRESSION (BD Sections 11.3-11.5)
An m-variate time series {X_t} is said to be a (causal) multivariate AR(p) process if it satisfies the recursions

    X_t = Φ_{p1} X_{t-1} + ... + Φ_{pp} X_{t-p} + Z_t,   {Z_t} ~ WN(0, V_p),

where Φ_{p1}, ..., Φ_{pp} are m x m coefficient matrices, V_p is the error covariance matrix, and det(I - z Φ_{p1} - ... - z^p Φ_{pp}) ≠ 0 for all |z| ≤ 1. (The first subscript p of Φ_{pj} represents the order of the autoregression.) The coefficient matrices and the error covariance matrix satisfy the multivariate Yule-Walker equations,

    Γ(i) = Σ_{j=1}^{p} Φ_{pj} Γ(i - j),   i = 1, ..., p,
    V_p = Γ(0) - Σ_{j=1}^{p} Φ_{pj} Γ(-j),

where Γ(h) denotes the covariance matrix of X_{t+h} and X_t.

Given observations x_1, ..., x_n of a zero-mean stationary m-variate time series, ARVEC determines (for a specified value of p) the AR(p) model defined by

    X_t = Φ̂_{p1} X_{t-1} + ... + Φ̂_{pp} X_{t-p} + Z_t,   {Z_t} ~ WN(0, V̂_p),

where Φ̂_{p1}, ..., Φ̂_{pp} and V̂_p satisfy the Yule-Walker equations above with Γ(h) replaced by the sample covariance matrix Γ̂(h), h = 0, 1, ..., p. The coefficient estimates are computed using Whittle's multivariate version of the Durbin-Levinson algorithm (BD Section 11.4).
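For concreteness, the Yule-Walker step can be illustrated by the following Python sketch, which solves the block system directly rather than via Whittle's recursive algorithm; the equations are the same, only the (less efficient) method of solution differs.

    import numpy as np

    def yule_walker_var(X, p):
        """Fit a VAR(p) by solving the multivariate Yule-Walker equations directly (sketch).

        X is an (n x m) array whose rows are the observations; returns the coefficient
        matrices Phi_{p1},...,Phi_{pp} and the error covariance estimate V_p."""
        X = np.asarray(X, float)
        X = X - X.mean(axis=0)                               # mean-correct
        n, m = X.shape
        G = [X[h:].T @ X[:n - h] / n for h in range(p + 1)]  # Gamma_hat(h) = n^{-1} sum X_{t+h} X_t'
        Gm = lambda h: G[h] if h >= 0 else G[-h].T           # Gamma_hat(-h) = Gamma_hat(h)'
        # equations: sum_j Phi_{pj} Gamma(i-j) = Gamma(i), i = 1,...,p
        C = np.block([[Gm(i - j) for i in range(1, p + 1)] for j in range(1, p + 1)])
        B = np.hstack([G[i] for i in range(1, p + 1)])       # [Gamma(1),...,Gamma(p)]
        Phi = np.linalg.solve(C.T, B.T).T                    # m x pm array [Phi_{p1},...,Phi_{pp}]
        Phis = [Phi[:, (j - 1) * m:j * m] for j in range(1, p + 1)]
        Vp = G[0] - sum(Phis[j - 1] @ Gm(-j) for j in range(1, p + 1))
        return Phis, Vp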

EXAMPLE: Let us now use ARVEC to model and forecast the bi-
variate leading indicator-sales data, {(Y_{t1}, Y_{t2})', t = 1, ..., 150}, contained in the ASCII file LS2.DAT. Double click on the arvec icon in the itsmw window and you will see the arvec title page.
Type +-' and you will see the introductory description of the
program. Then type
+-' E 2+-',
and select LS2.DAT from the list of data files by moving the
highlight bar over the entry LS2.DAT and pressing +-' . (In
ITSM50, you must first move the highlight bar over <DATA>
and press +-' to view the data files.) After the data has been
read into ARVEC, a menu will appear giving you the option
of plotting either of the component series. After inspecting the
graphs of the component series, type C to continue and you will
then be asked the question,
Do you wish to difference the data?

The graphs suggest that both series should be differenced at lag


1 to generate data which are more compatible with realizations
from a stationary process. To apply the differencing operator
1- B to {Yt }, type Y 1 ~ 1 ~. The program then computes
the mean-corrected series,

    [ X_{t1} ]   [ Y_{t1} - Y_{t-1,1} ]   [ .02275 ]
    [ X_{t2} ] = [ Y_{t2} - Y_{t-1,2} ] - [ .42013 ]


for t = 2, ... , 150. At this stage, you have the opportunity to
plot the differenced and mean-corrected series to check for any
obvious deviations from stationarity (after which you can also
change the differencing operations if necessary). In this exam-
ple, type N in response to the question
Try new differencing operations ?
since the single differencing at lag 1 appears to be satisfactory.
You will then be asked to choose between the options [Find min-
imum Alee model], [Specify order for fitted model] and [Exit from
ARVEC]. If you choose the second option by typing S you will
then be asked to specify the order p( < 21) of the multivariate
AR process to be fitted to {Xt }. Try fitting an AR(2) model
by typing 2~ . The screen will then display the estimated co-
efficient matrices Φ̂_{21}, Φ̂_{22} in the following format:

PHI( 1)
-.5096E+OO . 2645E-01
-.7227E+OO . 2809E+OO

PHI( 2)
-.1511E+OO -.1033E-01
-.2148E+01 .2045E+OO

<Press any key to continue>

Type ~ and you will see the estimated white noise covariance
matrix and the AICC statistic (for order selection). To return to
the point at which a new value of p may be entered, type ~ N
N Y. The choice p = 0 will result in a white noise fit to the
data. Selection of the option [Find minimum AICC model] will cause the program to find the model with the smallest AICC value (see Section 6.2 below).

6.2 Model Selection with the AICC Criterion (BD Section 11.5)

The Akaike information criterion (AIC) is a commonly used criterion for


choosing the order of a model. This criterion prevents overfitting of a model
by effectively assigning a cost to the introduction of each additional param-
eter. For an m-variate AR(p) process the AICC statistic (a bias-corrected
modification of the AIC) computed by the program is
2 2
AICC = -2lnL{~p1."" ~pp, Vp) + 2(pm + l)nm/{nm -
A A A

pm - 2),

where L is the Gaussian likelihood of the model based on the n observations,


and ~Pl"'" ~pp, Vp are the Yule-Walker estimates described in Section 6.1.
The order p of the model is chosen to minimize the AICC statistic.
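In code form the criterion is just a penalized likelihood. The sketch below assumes that -2 ln L has already been computed for the fitted model (for example by a Kalman-filter evaluation of the exact Gaussian likelihood); it only illustrates the penalty term.

    def aicc(neg2loglik, n, m, p):
        """AICC for an m-variate AR(p) fitted to n observations; neg2loglik = -2 ln L."""
        k = p * m * m + 1                                 # effective number of parameters
        return neg2loglik + 2 * k * n * m / (n * m - p * m * m - 2)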

EXAMPLE: For the differenced and mean-corrected LS2.DAT


series, the optimal order is found by selecting the option [Find
minimum AICC model] instead of the option [Specify order for
fitted model] chosen previously. For this example the optimal
order is 5 with AICC=109.49. The fact that the upper right
component of each of the coefficient estimates is near 0 suggests
that {Xtl} could be modelled independently of {Xt2 }. Also note
that the first large component in the bottom left corner of the
coefficient matrices occurs at lag 3. This suggests that {Xt2 }
lags 3 time units behind {Xtl} (see BD Example 11.5.1).

6.3 Forecasting with the Fitted Model (BD Sections 11.4, 11.5)

After the fitted model is displayed, the entries ~ Y 10~ will produce
forecasts of the next 10 values of X t . To examine the forecasts and the
corresponding standard errors (SQRT(MSE)) of a given component of the
series {Xt } or {Yt } proceed as in the following example.
EXAMPLE: From the point at which the AICC value of the
optimal AR(5) model is displayed on the screen, the forecasts of
sales for the next 10 time periods are found by typing ~ Y 10
C 2 (see Figure 6.1). The forecast of sales at time 153 is 263.4
with a standard error of .5640. Approximate 95% prediction
bounds based on the fitted AR(5) model and assuming that the
noise is Gaussian are therefore,

263.4 ± (1.96)(.564).
FIGURE 6.1. Forecasts of the next 10 sales values

To plot the sales data and the 10 predictors, type +-> Y +->
+-> To get the forecasts of the leading indicator series 10 steps
ahead, press any key and type C 1.

After escaping from the forecasting part of ARVEC, you will be given
the option to file the one-step prediction errors for {X_t},

    X_t - Φ̂_{p1} X_{t-1} - ... - Φ̂_{pp} X_{t-p},   t = p+1, ..., n,
and to fit a different model (i.e. one with a different value of p) to the series
{Xt }.
7
BURG
7.1 Introduction
Like ARVEC, the program BURG fits a multivariate autoregression (of order
p < 21) to a multivariate time series {Y_t = (Y_{t1}, ..., Y_{tm})', t = 1, ..., n}.
To run the program, double click on the icon burg in the itsmw window
(or type BURG +--> from the DOS prompt) and you will see a title page
followed by a brief introductory statement describing the program. After
reading this statement, follow the program prompts, selecting the option
[Enter data] by typing the highlighted letter E. You will then be asked to
enter the dimension m ≤ 6 (m ≤ 11 for ITSM50) of Y_t and to select the
file containing the observations {Yt , t = 1, ... , n}. For example, to model
the bivariate data set LS2.DAT you would enter the dimension m = 2 and
then select the file LS2.DAT from the list of data files. The data must
be stored as an ASCII file such that row t contains the m components,
Y_t = (Y_{t1}, ..., Y_{tm})', each separated by at least one blank space. (The
sample size n can be at most 700 for ITSM41 and 10000 for ITSM50.) The
value of n will then be printed on the screen and you will be given the
option of plotting the component series.
Examination of the graphs of the component series and their autocorre-
lations (which can be checked using PEST) indicates whether differencing
transformations should be applied to the series {Yt} before attempting
to fit an autoregressive model. After inspecting the graphs you will there-
fore be asked if you wish to difference the data and, if so, to enter the
number of differencing transformations required (0,1 or 2) and the corre-
sponding lags. If, for example, you request two differencing operations with LAG(1)=1 and LAG(2)=12, then the series {Y_t} will be transformed to the differenced series, (1 - B)(1 - B^{12}) Y_t = Y_t - Y_{t-1} - Y_{t-12} + Y_{t-13}. The
resulting differenced data is then automatically mean-corrected to generate
the series {Xt }. To fit a multivariate autoregression to the series {Xt } you
can either specify the order of the autoregression to be fitted or select the
automatic minimum AICC option.
The only difference between ARVEC and BURG lies in the fitting algo-
rithm, which for the latter is the multivariate version of the Burg algorithm
due to R.H. Jones. Details are given in the book Applied Time Series Anal-
ysis, ed. D. Findley, Academic Press, 1978. We shall therefore confine our-
selves here to a reanalysis, using BURG, of the example given in Chapter
6.

EXAMPLE: We shall use BURG to fit a multivariate AR(p)


model to the differenced leading indicator-sales series as was
done in Chapter 6 using ARVEC . Double click on the burg icon
in the itsmw window and you will see the burg title page. Type
~ and you will see the introductory description of the program.
After typing
~E 2~,

select LS2.DAT from the list of data files by moving the high-
light bar over the entry LS2.DAT and pressing ~ . (To view
the data files in ITSM50, you must first move the highlight bar
over <DATA> and press ~ .) Once the data has been read into
BURG, a menu will appear giving you the option of plotting
either of the component series. After inspecting the graphs of
the component series, type C to continue and you will then be
asked the question,
Do you wish to difference the data?
Inspection of the graphs of the component series suggests that
both series should be differenced at lag 1 to generate data which
are more compatible with realizations from a stationary process.
To apply the differencing operator 1-B to {Yt}, type Y 1 ~ 1
~ . The program then computes the mean-corrected series,

    [ X_{t1} ]   [ Y_{t1} - Y_{t-1,1} ]   [ .02275 ]
    [ X_{t2} ] = [ Y_{t2} - Y_{t-1,2} ] - [ .42013 ]

for t = 2, ... , 150. At this stage, you have the opportunity to


plot the differenced and mean-corrected series to check for any
obvious deviations from stationarity (after which you can also
change the differencing operations if necessary). In this exam-
ple, type N in response to the question
Try new differencing operations ?
since the single differencing at lag 1 appears to be satisfactory.
You will then be asked to choose between the options [Find
minimum AICC model], [Specify order for fitted model] and [Exit
from ARVEC]. If you choose the second option by typing S you
will be asked to specify the order p( < 21) of the multivariate
AR process to be fitted to {Xt }. Try fitting an AR(2) model
by typing 2 ~ . The screen will then display the estimated
coefficient matrices Φ̂_{21}, Φ̂_{22} in the following format:

PHI( 1)
-.5129E+OO . 2662E-01
-.7341E+OO . 2816E+OO

PHI( 2)
-.1526E+00 -.1055E-01
-.2168E+01 .2054E+00

<Press any key to continue>

Type +-> and you will see the estimated white noise covariance
matrix and the AICC statistic (for order selection). To return to
the point at which a new value of p may be entered, type +-> N
NY. The choice p = 0 will result in a white noise fit to the
data. Automatic order selection is obtained by selecting the op-
tion [Find minimum AICC model] instead of the option [Specify
order for fitted model] chosen previously. For this example the
minimum AICC BURG model has order 8 with AICC=56.32.
The first large component in the bottom left corner of the co-
efficient matrices occurs again at lag 3 suggesting that {Xt2 }
lags 3 time units behind {Xtl} (see BD Example 11.5.1).
From the point at which the AICC value of the AR(8) model
is displayed on the screen, the forecasts of sales for the next
10 time periods are found by typing +-> Y 10 C 2 (see Figure
7.1). The forecast of sales at time 153 is 263.5 with a standard
error of .2566. Approximate 95% prediction bounds based on
the fitted AR(8) model and assuming that the noise is Gaussian are therefore,

    263.5 ± (1.96)(.257).
The predicted value is very close to the value obtained from
ARVEC but the standard error (assuming the validity of the
BURG model) is smaller than for the ARVEC model. To plot
the sales data and the 10 predictors, type +-> Y +-> +-> . To
get the forecasts of the leading indicator series 10 steps ahead,
press any key and type C 1.

After escaping from the forecasting part of BURG, you will be given the
option to file the one-step prediction errors for {X_t},

    X_t - Φ̂_{p1} X_{t-1} - ... - Φ̂_{pp} X_{t-p},   t = p+1, ..., n,

and to fit a different model (i.e. one with a different value of p) to the
series {Xt }. The one-step prediction errors should resemble a multivariate
white noise sequence if the fitted model is appropriate. Goodness of fit
can therefore be tested by checking if the minimum AICC model for the
prediction errors has order p = O. This test can be carried out for our
current example as follows.
FIGURE 7.1. Forecasts of the next 10 sales values

EXAMPLE: Continuing from the displayed list of forecasts of


the leading indicator series, type ↵ N C Y res.dat ↵ . These commands will store the one-step prediction errors (or residuals) in a data file called RES.DAT. Then type N E 2↵ and read in the new data file RES.DAT using the highlight bar. Then type ↵ C N ↵ C N F. At this point you will see that the fitted minimum AICC model for RES.DAT has order p = 0, the only estimated parameter being the white noise covariance matrix. This lends support to the goodness of fit of the minimum AICC AR(8) model fitted by BURG to the series {X_t}.
8

ARAR
8.1 Introduction
To run the program ARAR, double click on the arar icon in the itsmw win-
dow (or type ARAR+-' from the DOS prompt) and press +-' . You will then
see a brief introductory statement. The program is an adaptation of the
ARARMA forecasting scheme of Newton and Parzen (see The Accuracy
of Major Forecasting Procedures, ed. Makridakis et al., John Wiley, 1984,
pp.267 - 287). The latter was found to perform extremely well in the fore-
casting competition of Makridakis, the results of which are described in the
book. The ARARMA scheme has a further advantage over most standard
forecasting techniques in being more readily automated.
On typing +-' you will be given the options [Enter a new data set] and
[Exit from ARAR]. Choose the first of these by typing E and you will see
the list of data files from which you can select by moving the highlight bar
over the desired filename with the arrow keys and pressing +-' . (To view
the data files in ITSM50, you must first move the highlight bar over <DATA>
and press +-' .) Once you have selected a data set and pressed +-' you will
see the Main Menu shown in Figure 8.1.

8.1.1 MEMORY SHORTENING


Given a data set {yt, t = 1,2, ... , n}, the first step is to decide whether or
not the process is "long-memory", and if so to apply a memory-shortening
transformation before attempting to fit an autoregressive model. The differ-
encing operations permitted by PEST are examples of memory-shortening
transformations, however the ones allowed by ARAR are more general.
There are two types allowed :

    Ỹ_t = Y_t - φ̂(τ̂) Y_{t-τ̂}                          (1)

and

    Ỹ_t = Y_t - φ̂_1 Y_{t-1} - φ̂_2 Y_{t-2}.              (2)
With the aid of the five-step algorithm described below, we shall classify {Y_t} and take one of the following three courses of action.

• L. Declare {Y_t} to be long-memory and form {Ỹ_t} using (1).

• M. Declare {Y_t} to be moderately long-memory and form {Ỹ_t} using (2).

[Screen display of the ARAR Main Menu, offering options to enter a new data set, plot the data, determine the memory-shortening polynomial and fit a subset AR model to the transformed data, bypass memory-shortening and fit a subset AR model to the original data, or exit from ARAR.]

FIGURE 8.1. The main menu of ARAR

• S. Declare {Y_t} to be short-memory.

If the alternatives L or M are chosen then the transformed series {Ỹ_t} is
again checked. If it is found to be long-memory or moderately long-memory,
then a further transformation is performed. The process continues until
the transformed series is classified as short-memory. The program ARAR
allows at most three memory-shortening transformations. It is very rare to
require more than two. The algorithm for deciding between L, M and S can
be described as follows (an illustrative code sketch follows the five steps):

1. For each τ = 1, 2, ..., 15, we find the value φ̂(τ) of φ which minimizes
   Σ_{t=τ+1}^n [Y_t − φ Y_{t−τ}]². We then define

      Err(τ) = Σ_{t=τ+1}^n [Y_t − φ̂(τ) Y_{t−τ}]² / Σ_{t=τ+1}^n Y_t²,

   and choose the lag τ̂ to be the value of τ which minimizes Err(τ).

2. If Err(τ̂) ≤ 8/n, go to L.

3. If φ̂(τ̂) ≥ .93 and τ̂ > 2, go to L.

4. If φ̂(τ̂) ≥ .93 and τ̂ = 1 or 2, determine the values φ̂_1 and φ̂_2 of φ_1
   and φ_2 which minimize

      Σ_{t=3}^n [Y_t − φ_1 Y_{t−1} − φ_2 Y_{t−2}]².

   Go to M.

5. If φ̂(τ̂) < .93, go to S.
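
The decision rule above can be sketched in a few lines of code. The Python/NumPy function below is only an illustration of steps 1-5 (it is not part of ITSM, and the function name and return convention are our own); ARAR itself applies the rule repeatedly, re-checking the transformed series after each L or M classification.

import numpy as np

def classify_memory(y, max_lag=15):
    # Classify a series as long- ('L'), moderately long- ('M') or
    # short-memory ('S'), following the five steps above.
    y = np.asarray(y, dtype=float)
    n = len(y)
    err = np.empty(max_lag + 1)
    phi = np.empty(max_lag + 1)
    for tau in range(1, max_lag + 1):
        num = np.dot(y[tau:], y[:-tau])            # sum of Y_t * Y_{t-tau}
        den = np.dot(y[:-tau], y[:-tau])           # sum of Y_{t-tau}^2
        phi[tau] = num / den                       # phi-hat(tau), least squares
        resid = y[tau:] - phi[tau] * y[:-tau]
        err[tau] = np.dot(resid, resid) / np.dot(y[tau:], y[tau:])
    tau_hat = int(np.argmin(err[1:]) + 1)          # lag minimizing Err(tau)

    # Steps 2 and 3: long-memory, use transformation (1).
    if err[tau_hat] <= 8.0 / n or (phi[tau_hat] >= 0.93 and tau_hat > 2):
        return 'L', {'tau': tau_hat, 'phi': phi[tau_hat]}
    # Step 4: moderately long-memory, use transformation (2).
    if phi[tau_hat] >= 0.93:                       # here tau_hat is 1 or 2
        X = np.column_stack([y[1:-1], y[:-2]])
        coef, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
        return 'M', {'phi1': coef[0], 'phi2': coef[1]}
    # Step 5: short-memory.
    return 'S', {}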

8.1.2 FITTING A SUBSET AUTOREGRESSION


Let {S_t, t = 1, ..., T} denote the memory-shortened series derived from
{Y_t} by the algorithm of the previous section and let S̄ denote the sample
mean of S_1, ..., S_T.
The next step in the modelling procedure is to fit an autoregressive
process to the mean-corrected series,

   X_t = S_t − S̄,   t = 1, ..., T.

The fitted model has the form

   X_t = φ_1 X_{t−1} + φ_{l1} X_{t−l1} + φ_{l2} X_{t−l2} + φ_{l3} X_{t−l3} + Z_t,

where {Z_t} ~ WN(0, σ²), and, for given lags, l1, l2 and l3, the coefficients
φ_j and the white noise variance σ² are found from the Yule-Walker equations,

   [ 1          ρ̂(l1−1)   ρ̂(l2−1)   ρ̂(l3−1) ] [ φ_1  ]   [ ρ̂(1)  ]
   [ ρ̂(l1−1)   1          ρ̂(l2−l1)  ρ̂(l3−l1)] [ φ_l1 ] = [ ρ̂(l1) ]
   [ ρ̂(l2−1)   ρ̂(l2−l1)  1          ρ̂(l3−l2)] [ φ_l2 ]   [ ρ̂(l2) ]
   [ ρ̂(l3−1)   ρ̂(l3−l1)  ρ̂(l3−l2)  1        ] [ φ_l3 ]   [ ρ̂(l3) ]

and

   σ² = γ̂(0)[1 − φ_1 ρ̂(1) − φ_{l1} ρ̂(l1) − φ_{l2} ρ̂(l2) − φ_{l3} ρ̂(l3)],

where γ̂(j) and ρ̂(j), j = 0, 1, 2, ..., are the sample autocovariances and
autocorrelations of the series {X_t}.
The program computes the coefficients φ_j for each set of lags such that

[Screen display of the search for the best memory-shortening polynomial: the best long-memory lag is found to be 12 with lagged AR coefficient .9779, after which memory shortening is complete; the coefficients of B^j, j = 0, 1, ..., in the memory-shortening polynomial are 1.0000 at j = 0 and -.9779 at j = 12, all others zero.]

FIGURE 8.2. Memory-shortening filter selected for DEATHS.DAT

1 < l1 < l2 < l3 ≤ m, where m can be chosen to be either 13 or 26. It then
selects the model for which the Yule-Walker estimate σ̂² is minimum and
prints out the lags, coefficients and white noise variance for the fitted model.
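
As an illustration of the Yule-Walker step (the helper functions below are our own and not part of ITSM), the following Python/NumPy sketch fits the four-coefficient model for one candidate lag set; the fast option simply repeats this over all lag sets 1 < l1 < l2 < l3 ≤ m and keeps the fit with the smallest estimated variance.

import numpy as np

def sample_acf(x, max_lag):
    # Sample autocovariances gamma(0..max_lag) and autocorrelations rho
    # of a series x that is assumed to be already mean-corrected.
    x = np.asarray(x, dtype=float)
    n = len(x)
    gamma = np.array([np.dot(x[h:], x[:n - h]) / n for h in range(max_lag + 1)])
    return gamma, gamma / gamma[0]

def yule_walker_subset(x, l1, l2, l3):
    # Fit X_t = phi_1 X_{t-1} + phi_l1 X_{t-l1} + phi_l2 X_{t-l2}
    #         + phi_l3 X_{t-l3} + Z_t by solving the Yule-Walker
    # equations of Section 8.1.2 for the lag set (1, l1, l2, l3).
    lags = [1, l1, l2, l3]
    gamma, rho = sample_acf(x, max(lags))
    R = np.array([[rho[abs(i - j)] for j in lags] for i in lags])
    r = np.array([rho[l] for l in lags])
    phi = np.linalg.solve(R, r)
    sigma2 = gamma[0] * (1.0 - np.dot(phi, r))   # gamma(0)[1 - sum phi_l rho(l)]
    return dict(zip(lags, phi)), sigma2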
A slower procedure chooses the lags and coefficients (computed from the
Yule-Walker equations as above) which maximize the Gaussian likelihood
of the observations. For this option the maximum lag m is 13.
The options are displayed in the Subset AR Menu (Figure 8.3) which
appears on the screen when memory-shortening has been completed (or
when you opt to by-pass memory shortening and fit a subset AR to the
original (mean-corrected) data).

8.2 Running the Program


To determine an ARAR model for the given data set {Y_t} and to use
it to forecast future values of the series, we first read in the data set.
Following the appearance on the screen of the Main Menu, we type D ↵
to select the option [Determine the memory-shortening polynomial ...] which
then finds the best memory-shortening filter. After a short time delay the
coefficients 1, ψ_1, ..., ψ_k of the chosen filter will be displayed on the screen.
The memory-shortened series is

   S_t = Y_t + ψ_1 Y_{t−1} + ... + ψ_k Y_{t−k}.

[Screen display showing the length of the short-memory series, the Subset AR Menu (find the 4-coefficient Yule-Walker model with minimum white noise variance estimate with maximum lag 13 or 26, find the 4-coefficient Yule-Walker model with maximum Gaussian likelihood with maximum lag 13, or return to the main menu), and the fitted model: optimal lags 1, 3, 12 and 13 with their coefficients, the estimated white noise variance, and the coefficients of the overall whitening filter.]

FIGURE 8.3. The four-coefficient autoregression fitted to the memory-shortened DEATHS.DAT series

Type ↵ and the Subset AR Menu will appear. The first option (selected
by typing F) fits an autoregression with four non-zero coefficients to the
mean-corrected series X_t = S_t − S̄, choosing the lags and coefficients which
minimize the Yule-Walker estimate of white noise variance. Type F and
the optimal lags and corresponding coefficients in the model

   X_t = φ_1 X_{t−1} + φ_{l1} X_{t−l1} + φ_{l2} X_{t−l2} + φ_{l3} X_{t−l3} + Z_t,

will be printed on the screen. The coefficients ξ_j of B^j in the overall whiten-
ing filter (B is the backward shift operator),

   ξ(B) = (1 + ψ_1 B + ... + ψ_k B^k)(1 − φ_1 B − φ_{l1} B^{l1} − φ_{l2} B^{l2} − φ_{l3} B^{l3}),

are also printed.
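
Since the overall whitening filter is just the product of the memory-shortening polynomial ψ(B) and the subset AR polynomial, its coefficients can be obtained by convolving the two coefficient sequences. The fragment below is a hedged illustration with made-up, rounded numbers of roughly the kind produced for DEATHS.DAT; it is not ITSM code.

import numpy as np

# psi(B) = 1 + psi_1 B + ... + psi_k B^k: here a lag-12 shortening filter.
psi = np.zeros(13)
psi[0], psi[12] = 1.0, -0.9779
# Subset AR polynomial 1 - phi_1 B - phi_l1 B^l1 - phi_l2 B^l2 - phi_l3 B^l3,
# illustrated with lags 1, 3, 12, 13 and hypothetical coefficients.
ar = np.zeros(14)
ar[0] = 1.0
for lag, phi in {1: 0.59, 3: 0.21, 12: -0.38, 13: 0.24}.items():
    ar[lag] = -phi

xi = np.convolve(psi, ar)   # xi[j] is the coefficient of B^j in the whitening filter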
Type ↵ again and you will be asked for the number of future values of
{Y_t} to be predicted. Enter the required number and type ↵ ↵ to see the
graph of the original data. Type ↵ again and the predicted values will be
added to the graph. Type ↵ C N and you will be asked if you wish to file
the predictors. Following this you will be returned to the Subset AR Menu,
from which you may either select one of the other fitting options or return
to the Main Menu from which you may leave the program.

EXAMPLE: To use the program ARAR to predict 24 values of


the data file DEATHS.DAT, proceed as follows (starting from

[Graph of the original series with the 24 predicted values appended.]

FIGURE 8.4. The data set DEATHS.DAT with 24 predicted values

the Main Menu, immediately after having read in the data file
DEATHS.DAT).
Type D and the coefficients of the selected memory-shortening
filter will appear. Figure 8.2 shows that the chosen filter is
(1 − .9779B^12).
To continue, type ↵ F. The program then fits a four-coefficient
AR model to the mean-corrected memory-shortened data, with
maximum lag 13, selecting the model with minimum estimated
white-noise variance. The fitted model is displayed in Figure
8.3.
To predict 24 future values of the series, type ↵ 24 ↵. At this
stage the screen will show the square root of the observed mean
squared error of the one-step predictors of the data itself. To
plot the predictors of future values type ↵ ↵ and you will see
the original data with the 24 predictors plotted on the same
graph as in Figure 8.4.
To exit from the program type ↵ C N R X.
9

LONGMEM
9.1 Introduction (BD Section 13.2)
The program LONGMEM is designed for simulation, model-fitting and pre-
diction with ARIMA(p, d, q) processes, where −.5 < d < .5. Such processes,
known as fractionally integrated ARMA processes, are stationary solutions
of difference equations of the form

   φ(B)(1 − B)^d X_t = θ(B) Z_t,

where φ(z) and θ(z) are polynomials of degrees p and q respectively, satisfying

   φ(z) ≠ 0 and θ(z) ≠ 0 for all z such that |z| ≤ 1,

B is the backward shift operator (B^k X_t = X_{t−k} and B^k Z_t = Z_{t−k}) and
{Z_t} is a white noise sequence with mean 0 and variance σ². The operator
(1 − B)^d is defined by the binomial expansion,

   (1 − B)^d = Σ_{j=0}^∞ π_j B^j,

where

   π_j = ∏_{0<k≤j} (k − 1 − d)/k,   j = 0, 1, ....
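
The coefficients π_j are conveniently computed by the recursion π_0 = 1, π_j = π_{j−1}(j − 1 − d)/j, which follows directly from the product formula above. A small Python sketch (not part of LONGMEM):

import numpy as np

def pi_coefficients(d, n_terms):
    # pi_j, j = 0..n_terms, in (1 - B)^d = sum_j pi_j B^j.
    pi = np.empty(n_terms + 1)
    pi[0] = 1.0
    for j in range(1, n_terms + 1):
        pi[j] = pi[j - 1] * (j - 1 - d) / j
    return pi

# e.g. pi_coefficients(0.3, 3) gives [1.0, -0.3, -0.105, -0.0595]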

The autocorrelation ρ(h) at lag h of an ARIMA(p, d, q) process with −.5 <
d < .5 is asymptotically (as h → ∞) of the form (const)h^{2d−1}. This con-
verges to zero as h → ∞ at a much slower rate than ρ(h) for an ARMA
process, which is bounded in absolute value by (const)r^h, with r < 1.
Consequently fractionally integrated ARMA processes are said to have
"long memory". In contrast, stationary processes whose ACF converges
to 0 rapidly, such as ARMA processes, are said to have "short memory".
To begin a session with the program LONGMEM, double click on the
icon labelled longmem in the itsm window (or in DOS type LONGMEM↵
from the c:\ITSMW directory). You will see a title screen followed by a brief
explanation of LONGMEM, and will then be given the choice of entering
both data and a model or just a model. If the option [Input data and model]
is selected, you will be asked to choose the data file and whether or not you
wish to subtract the sample mean from the data. Type S unless you wish
to assume that the data is generated by a zero-mean model. You must then

choose to [Enter model from keyboard] or [Read model from a file]. Choose
the latter if the required model has been entered previously and saved in a
file. To enter a model from the keyboard, type E and follow the program
prompts. These will ask for the parameter d (between −.5 and .5), the
order of the autoregression p, the AR coefficients φ_1, ..., φ_p, the order of
the moving average q, the MA coefficients θ_1, ..., θ_q and finally the white
noise variance σ². The following message will then be displayed on your
screen:
The model autocovariance function is calculated from

   GAMMA(h) = SUM [psi(j)*psi(k)*gamma(h+j-k)]

where gamma is the ACVF of fractionally integrated
WN with index d, and j and k run from 0 to N. The
default value of N is 50. You may wish to increase
N if you have an autoregressive zero very close to
the unit circle.
The exact autocovariance function of the model can be expressed as

   γ_X(h) = Σ_{j=0}^∞ Σ_{k=0}^∞ ψ_j ψ_k γ_Y(h + j − k),

where Σ_{j=0}^∞ ψ_j z^j = θ(z)/φ(z), |z| ≤ 1, and γ_Y(·) is the autocovariance
function of fractionally integrated white noise with parameters d and σ²
(see BD equations (13.2.8) and (13.2.9)). As the screen message indicates,
the upper limits of summation are replaced in the program by N, where N
has the default value of 50. If φ(z) has a zero close to the unit circle you may
wish to increase the value of N (up to 200) to reduce the truncation error.
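
A hedged sketch of the truncated computation just described, assuming the ψ-coefficients and the fractionally integrated white noise autocovariances have already been computed and are passed in as arrays (the function and argument names are ours, not ITSM's):

import numpy as np

def truncated_model_acvf(h, psi, gamma_y, N=50):
    # Approximates gamma_X(h) = sum_{j,k>=0} psi_j psi_k gamma_Y(h + j - k)
    # with both sums truncated at N, as in the screen message above.
    # psi[0..N] holds the power-series coefficients of theta(z)/phi(z);
    # gamma_y[m] holds gamma_Y(m) for m = 0..(h + N); gamma_Y(-m) = gamma_Y(m).
    total = 0.0
    for j in range(N + 1):
        for k in range(N + 1):
            total += psi[j] * psi[k] * gamma_y[abs(h + j - k)]
    return total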
After entering N press ↵ to view a summary of the current model stored
in the program and then press ↵ to see the Estimation and Prediction
Menu (Figure 9.1).

9.2 Parameter Estimation (BD p.527-532)

LONGMEM estimates the parameters β = (d, φ_1, ..., φ_p, θ_1, ..., θ_q)′ and
σ² of a fractionally integrated model by maximizing the Whittle approxi-
mation, L_W, to the likelihood function. This is equivalent to minimization
of

   −2 ln(L_W) = n ln(2π) + 2n ln σ + σ^{−2} Σ_j I_n(ω_j)/g(ω_j; β) + Σ_j ln g(ω_j; β),

where I_n is the periodogram, σ²g is the model spectral density and Σ_j de-
notes the sum over all non-zero Fourier frequencies ω_j = 2πj/n ∈ (−π, π].
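
The criterion is straightforward to evaluate from the periodogram once g(·; β) is available. The following Python/NumPy sketch (not ITSM's implementation; g is assumed to be supplied by the caller as a vectorized function of frequency) computes −2 ln(L_W) for a given parameter value:

import numpy as np

def neg2_log_whittle(x, g, sigma2):
    # -2 ln(L_W) = n ln(2 pi) + 2 n ln(sigma)
    #              + sigma^{-2} sum_j I_n(w_j)/g(w_j) + sum_j ln g(w_j),
    # summed over the non-zero Fourier frequencies w_j = 2 pi j / n in (-pi, pi].
    x = np.asarray(x, dtype=float)
    n = len(x)
    I = np.abs(np.fft.fft(x)) ** 2 / n          # periodogram at 2 pi k / n, k = 0..n-1
    k = np.arange(1, n)                         # drop the zero frequency
    w = 2 * np.pi * k / n
    w = np.where(w > np.pi, w - 2 * np.pi, w)   # map into (-pi, pi]
    gw = g(w)
    return (n * np.log(2 * np.pi) + n * np.log(sigma2)   # n ln(sigma^2) = 2 n ln(sigma)
            + np.sum(I[k] / gw) / sigma2 + np.sum(np.log(gw)))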

[Screen display of the Estimation and Prediction Menu, with options to file the current model, carry out ML estimation using Whittle's approximation, predict, graph the mean-corrected data, enter a new model, input a new data set, simulate, plot the model ACVF, plot the model and sample ACVF, or exit from LONGMEM.]

FIGURE 9.1. The Estimation and Prediction Menu of LONGMEM

The option [ML Estimation using Whittle's approximation] performs the
necessary optimization.

EXAMPLE: Start the program LONGMEM and press ↵ ↵ to
clear the first two displays. Type I to input both data and a
model. After selecting the data set E1321.DAT, subtract its
mean by typing S. Enter the ARIMA(0, .3, 1) model with θ_1 =
.2 and σ² = 1.0 by typing E .3↵ 0↵ 1↵ .2↵ 1.0↵. Then
type N to use the default value of N = 50. At this point the
model and the corresponding value of −2 ln(L_W) (where L_W
is the Whittle likelihood) will be displayed. Press ↵ to view
the Estimation and Prediction Menu and type M to estimate
the parameters of the model. You must then choose whether
to optimize with respect to all parameters or to keep d fixed.
The usual choice (unless d is known or has been otherwise esti-
mated) is [Optimize with respect to all parameters]. After typing
O, specify the optimization step size as 0.1 by typing .1↵.
Once the optimization with this (rather large) step size is com-
plete, optimization should be repeated with step sizes .01 and
then .001. This leads to the model displayed in Figure 9.2.

[Screen display of the current model parameters after optimization: the MA coefficient, the order of differencing d, the white noise variance, the value of -2 ln(L) (Whittle) and the Akaike AIC statistic.]

FIGURE 9.2. The model fitted by maximum likelihood

9.3 Prediction (BD p.533-534)

Select the option [Prediction] to forecast future values of the time se-
ries using the fitted model. Assuming that the observations X_1, ..., X_n
(X_1 − m, ..., X_n − m if you chose to subtract the sample mean m) were
generated by the fitted model, LONGMEM predicts X_{n+h} as the linear com-
bination P_n(X_{n+h}) of 1, X_1, ..., X_n which minimizes the mean squared er-
ror E(X_{n+h} − P_n(X_{n+h}))². The Durbin-Levinson algorithm (BD Section
5.2) is used to compute the predictors from the data and the model auto-
covariance function.
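
For reference, here is a hedged Python/NumPy sketch of the Durbin-Levinson recursion; it computes only the coefficients of the best linear one-step predictor and the associated mean squared errors from a given autocovariance sequence (LONGMEM's own code also produces the h-step predictors), and the function name is ours.

import numpy as np

def durbin_levinson(gamma):
    # Given autocovariances gamma(0..n), return the coefficients
    # phi_{n,1..n} of the best linear predictor of X_{n+1} in terms of
    # X_n, ..., X_1, and the one-step mean squared errors v_0..v_n.
    n = len(gamma) - 1
    v = np.empty(n + 1)
    v[0] = gamma[0]
    phi = np.zeros(n + 1)                  # phi[1..m] holds phi_{m,1..m} at stage m
    for m in range(1, n + 1):
        a = gamma[m] - np.dot(phi[1:m], gamma[m - 1:0:-1])
        phi_mm = a / v[m - 1]
        phi[1:m] = phi[1:m] - phi_mm * phi[m - 1:0:-1]
        phi[m] = phi_mm
        v[m] = v[m - 1] * (1.0 - phi_mm ** 2)
    return phi[1:], v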

EXAMPLE: Starting from the Estimation and Prediction Menu
with the mean-corrected data set and model just fitted to it,
type P 40↵ to predict the next 40 observations of E1321.DAT.
The first 20 predicted values and the corresponding values of
SQRT(MSE) will appear on the screen. Press ↵ to see the next
20. Type ↵ Y ↵ ↵ and you will see a graph of the origi-
nal data with the forty predicted values appended (Figure 9.3).
Notice that the predictors are converging to the sample mean
m = −.0434 quite slowly. If however we fit an ARMA(3,3)
model to the mean-corrected data, we find that the correspond-
ing predictors converge to m much more rapidly.

[Graph of the original data with the 40 predicted values appended.]

FIGURE 9.3. Predictors of E1321.DAT and the values of SQRT(MSE)

9.4 Simulation
The option [Simulation] of the Estimation and Prediction Menu of LONG-
MEM can be used for generating realizations of a fractionally integrated
ARMA process. To generate such a realization type S and then Y when
asked if you wish to continue with the simulation. The program will ask
you to enter the number of data points required (up to 1000 for ITSM41 or
20000 for ITSM50 ) and a random number seed (an integer with fewer than
10 digits). The simulated data set will be stored in LONGMEM , overwriting
any data previously stored in the program. You will not be given the option
of subtracting the sample mean since the simulated series is generated by
a model with zero mean.

EXAMPLE: To generate 200 data points from the model fitted
above to E1321.DAT, start in the Estimation and Prediction
Menu and type S Y 200↵ 7486↵. (You can get an indepen-
dent realization by choosing some number other than 7486 as
the random number seed.) Return to the Estimation and Pre-
diction Menu and type G to graph the simulated data (Figure
9.4).

FIGURE 9.4. Data generated from a fractionally integrated model

9.5 Plotting the model and sample ACVF


The option [Plot model ACVF] of the Estimation and Prediction Menu
allows you to plot the autocovariance function of the model. Provided data
have also been read in (or simulated), the model ACVF can be overlaid
with the sample ACVF using the option [Plot model and sample ACVF].
If the data have not been mean-corrected they are assumed to have come
from a zero-mean series and the sample ACVF is computed as

   γ̂(h) = (1/n) Σ_{t=1}^{n−h} X_{t+h} X_t,   h ≥ 0.

EXAMPLE: Continuing with the previous example, return to the
Estimation and Prediction Menu and type O. The model ACVF
should then appear on your screen. Press ↵ to superimpose the
sample ACVF (Figure 9.5).

[Graph of the model ACVF with the sample ACVF superimposed.]

FIGURE 9.5. The model ACVF and sample ACVF of the generated data
Appendix A
The Screen Editor WORD6
By Anthony E. Brockwell

A.1 Basic Editing


The cursor can be moved with the four cursor control keys. The <End> key
moves the cursor to the end of the line, and the <Home> key to the first
character of the line. <PgUp> and <PgDn> move the cursor 22 lines up and
22 lines down, respectively. <Ctrl>-<Home> and <Ctrl>-<End> move the
cursor to the beginning and end of the text respectively (<Ctrl>-<Home>
means holding down the <Ctrl> key while pressing the <Home> key).
The backspace key <←> (upper right of keyboard) deletes the character
at the cursor position and moves the cursor back one space. The <Del>
key deletes the character at the cursor position without moving the cursor.
To merge two lines, move the cursor to the far left of the screen (using
<Home> and then the left arrow) and press the <←> key. The line will then
be moved up and put on the end of the line above.
The <Ins> key toggles insert and overwrite modes. In insert mode charac-
ters will be inserted into the text at the current cursor position. In overwrite
mode they replace the old character and the <Enter> key moves the cursor
to the next line without inserting a new line. At the bottom of the screen,
a message shows whether you are in insert or overwrite mode.

A.2 Alternate Keys


To perform special functions, WORD6 makes use of the <Alt> key. The
<Alt> key works in the same way as the <Shift> key. To enter <Alt>-X,
for example, press the <Alt> key, and while still holding it down, press
X. Note - either X or x will do, as the computer does not differentiate
between upper and lower case alternate keys.
The two most essential <Alt>-keys are <Alt>-R and <Alt>-W. To read
an ASCII file into the editor, type <Alt>-R, and then enter the name of
the file to be read in. (Alternatively, you can type WORD6 FNAME to begin
editing an existing file called FNAME.) When you have finished editing or
creating a file, <Alt>-W can be used to write the file to disk.
To exit from WORD6, enter <Alt>-X. If you have edited a file without

saving it, you will be asked whether you really want to exit without saving
the file. To save the file, answer n and then use <Alt>-W.
<Alt>-D and <Alt>-I can be used to delete and insert large sections of
text quickly. <Alt>-D deletes the entire line at the current cursor position
and moves all the text below it up one line. <Alt>-I inserts a blank line
above the current cursor position.
If you have a color monitor, <Alt>-Z can be used to change the screen
color.

A.3 Printing a File


To print a file, use <Alt>-W as though writing a file. Then when prompted
for the file name, enter LPT1 (or possibly LPT2 if you have two printers).
This is a DOS filename which allows the printer to be treated as though it
were a file.
To make use of special printer control codes (for underlining, bold-face,
etc.) enter these codes directly into the document. Use <Alt>-O to redefine
<Alt>-(1-9) by ASCII code, and then any combination of control codes
can be sent to the printer.

A.4 Merging Two or More Files


The <Alt>-R command does not replace the old document with a new one.
It merges the new file into the current text. If there is no current text - as
after using <Alt>-N or just after entering WORD6 from the DOS prompt
- the new file will obviously not be merged. For example, to merge a fifty
line file between the tenth and eleventh lines of an old sixty line file, read in
the sixty line file, insert a blank line between its tenth and eleventh lines,
position the cursor anywhere on the blank line, and then read in the fifty
line file using <Alt>-R. To merge the new file onto the end of the old one,
just position the cursor at the end of the old one using <Ctrl>-<End>, press
↵ (optional), and read in the new one.

A.5 Margins and Left and Centre Justification


<Alt>-L and <Alt>-P set left and right margins, respectively. The margin
will be set at wherever the cursor is when the key is pressed. The shading
over the tab settings will change to show only what is included between
the left and right margin. Text will automatically wrap around to the left
margin on the next line if the cursor moves past the right margin on the
current line. To left-justify text, press <Ctrl>-L (like <Alt>-L, except use

the <Ctrl> key instead of the <Alt> key). The current line will be moved
so that it starts right on the left margin. <Ctrl>-M will centre-justify text
by placing it centrally between the right and left margins.

A.6 Tab Settings


<Alt>-T sets or removes a tab setting. If there is a tab setting at the current
cursor position, it will be removed; if there is no tab setting, one will be
added. Tab settings are indicated by little white hats at the bottom of the
screen. When the tab key is pressed, the cursor will automatically move to
the next tab position.
EXAMPLE: To get rid of the next tab setting, press the tab key
to move there, and then press <Alt>-T to remove the setting.
The hat marking that tab setting will disappear.

A.7 Block Commands


Large sections of text can be moved or erased as follows using the <Alt>-M
command. Move to the first line of the section to be marked and press
<Alt>-M. Then move to the last line and press <Alt>-M again. The en-
tire block between and including the two lines will change color to show
that it has been marked. After marking a block, the <Alt>-E and <Alt>-C
commands can be used. <Alt>-E deletes the entire block. <Alt>-C makes
a second copy of the block after the line at the current cursor position.
For instance, pressing <Ctrl>-<Home>, <Alt>-M, <Ctrl>-<End>, <Alt>-M
and then <Alt>-E will erase the entire text.
Note:
1. Only one block can exist at once. <Alt>-C makes a copy of the old
block and leaves it marked.
2. To unmark a block, press <Alt>-M. If a block already exists, <Alt>-M
removes the marking.
3. To move a section of text, mark it, move the cursor to the line before
the new desired position and press <Alt>-C, and then press <Alt>-E
to get rid of the old block.
If you wish to write only part of the text to a file, mark the required
block, and then press <Alt>-B. You will be prompted for the file name.
Vertical blocks may also be manipulated by using <Alt>-F. Mark each
end of the block by pressing <Alt>-F. To delete a marked block press
<Alt>-G. To move a marked block to the right or left, press <Alt>-U and
use the arrow keys. When the marked block is appropriately located press
↵. To unmark the block press <Alt>-F again.

A.8 Searching
To locate a certain word or set of characters in a file, use <Alt>-R to read
the file into WORD6. Then type <Alt>-S. You will be prompted for a string
to search for and what to replace it with. If you want to search and not
replace, just press ↵ when asked Replace with ?. You will then be asked
the question Ignore case (Y/N)? (If you answer N then a search for The will
not find the.) The cursor will then be moved to the first occurrence of the
string after the current cursor position. If the string is not found in the text,
the cursor will reappear at the end of the file. Searching and replacing is
always global, but can be aborted with the <Esc> key. Each time the string
is found, you will be prompted as to whether or not to replace it. If you
enter N or n, the search will go on to the next occurrence of the string.
Since a search always starts at the current cursor position, it is usually a
good idea to go to the beginning of the text using <Ctrl>-<Home> before
carrying out a search.

EXAMPLE: To replace every occurrence of this in the text with


that, go to the beginning of the text by pressing <Ctrl>-<Home>.
Then press <Alt>-S. Then enter this and then enter that.
WORD6 will then give the prompt Replace (Y/N)? for every
occurrence of this in the text. If you enter y or Y the this at the
cursor position will be changed to a that.
<Alt>-Q repeats the last search, and does not replace.

Instead of searching only within a file, you can search through specified
files in a directory with the <Alt>-J command.

EXAMPLE: To replace every occurrence of xaxis by yaxis in the


files with names ending in .for in the current directory, type
word6 from the DOS prompt and then type <Alt>-J. When
File specification is requested, type *.for. Then proceed
as in the previous example, typing xaxis (the string to be re-
placed) and yaxis (the replacement string) as required. Each
time the search reaches the end of a file you will be given the
opportunity to save the new file with the specified changes.

A.9 Special Characters


By making use of <Alt>-(1-9), WORD6 can access characters which can-
not normally be accessed from the keyboard. Each time WORD6 is run,
a set of some of the more useful Greek letters is loaded into the keys
<Alt>-1, <Alt>-2, ..., <Alt>-9. However these can be redefined by ASCII
code by pressing <Alt>-O.

A box can be created by using ASCII codes 192, 196, 217, 179, 218 and
191. These each display a different segment of the box. Press <Alt>-O and
enter these six numbers for six of the nine <Alt>-keys. Then by pressing
<Alt>-(1-9), these segments of the box can be put on the screen and
edited to the correct position.

A.10 Function Keys


Function keys can be defined to be any string of up to forty characters. It can
save time to redefine commonly used phrases as function keys (for instance
write(*,*) in Fortran.) Press <F10> to redefine a function key. When
asked which one to define, press the function key you wish to assign a string
to.¹ Then enter the string. You may define up to nine different function
keys at once.

A.11 Editing Information


At the bottom of the screen is a list of parameters. At the far left is a
message F1 = Help. Next to that is either Insert or Overwrite. This is the
current editing mode, which can be toggled using the <Ins> key. Next to
that a number displays the column number of the cursor (anywhere from
1 to 65535). At the far right are two numbers, separated by a slash. The
number on the left of the slash is the number of the line at which the
cursor is currently located. The number on the right of the slash is the
total number of lines in the document.
On the line above all this information, a series of hats may be displayed.
These are all the tab settings. In addition to the tab settings, this line is
shaded to show the left and right margins.

¹The tab key will appear as a small circle when used in the definition of a
function key, and will be decoded when the function key is pressed while editing.
Thus function key definitions including tabs will be placed on the screen as though
the tab key is pressed at the position it appears on the screen. It is not converted
into a set number of spaces to be put on the screen.
Appendix B
Data Sets
USPOP.DAT Population of United States at ten-year intervals, 1790-
1980 (U.S. Bureau of the Census). BD Example 1.1.2.
STRIKES.DAT Strikes in the U.S.A., 1951-1980 (Bureau of Labor Statis-
tics). BD Example 1.1.3.
SUNSPOTS.DAT The Wolfer sunspot numbers, 1770-1869. BD Example
1.1.5.
DEATHS.DAT Monthly accidental deaths in the U.S.A., 1973-1978 (Na-
tional Safety Council). BD Example 1.1.6.
AIRPASS.DAT International airline passenger monthly totals (in thou-
sands), Jan. 49 - Dec. 60. From Box and Jenkins (Time Series Analysis:
Forecasting and Control, 1970). BD Example 9.2.2.
E911.DAT 200 simulated values of an ARIMA(1,1,0) process. BD Example
9.1.1.
E921.DAT 200 simulated values of an AR(2) process. BD Example 9.2.1.
E923.DAT 200 simulated values of an ARMA(2,1) process. BD Example
9.2.3.
E951.DAT 200 simulated values of an ARIMA(1,2,1) process. BD Example
9.5.1.
E1021.DAT Sinusoid plus simulated Gaussian white noise. BD Example
10.2.1.
E1042.DAT 160 simulated values of an MA(1) process. BD Example 10.4.2.
E1062.DAT 400 simulated values of an MA(1) process. BD Example 10.6.2.
LEAD.DAT Leading Indicator Series from Box and Jenkins (Time Series
Analysis: Forecasting and Control, 1970). BD Example 11.2.2.
SALES.DAT Sales Data from Box and Jenkins (Time Series Analysis:
Forecasting and Control, 1970). BD Example 11.2.2.
E1321.DAT 200 values of a simulated fractionally differenced MA(1) se-
ries. BD Example 13.2.1.
E1331.DAT 200 values of a simulated MA(1) series with standard Cauchy
white noise. BD Example 13.3.2.

E1332.DAT 200 values of a simulated AR(1) series with standard Cauchy


white noise. BD Example 13.3.2.
APPA.DAT Lake level of Lake Huron in feet (reduced by 570), 1875-1972.
BD Appendix Series A.
APPB.DAT Dow Jones Utilities Index, Aug.28-Dec.18, 1972. BD Ap-
pendix Series B.
APPC.DAT Private Housing Units Started, U.S.A. (monthly). From the
Makridakis competition, series 922. BD Appendix Series C.
APPD.DAT Industrial Production, Austria (quarterly). From the Makri-
dakis competition, Series 337. BD Appendix Series D.
APPE.DAT Industrial Production, Spain (monthly). From the Makri-
dakis competition, Series 868. BD Appendix Series E.
APPF.DAT General Index of Industrial Production (monthly). From the
Makridakis competition, Series 904. BD Appendix, Series F.
APPG.DAT Annual Canadian Lynx Trappings, 1821-1934. BD Appendix
Series G.
APPH.DAT Annual Mink Trappings, 1848-1911. BD Appendix Series H.
APPI.DAT Annual Muskrat Trappings, 1848-1911. BD Appendix Series
I.
APPJ.DAT Simulated input series for transfer function model. BD Ap-
pendix Series J.
APPK.DAT Simulated output series for transfer function model. BD Ap-
pendix Series K.
LRES.DAT Whitened Leading Indicator Series obtained by fitting an
MA(1) to the mean-corrected differenced series LEAD.DAT. BD Section
13.1.
SRES.DAT Residuals obtained from the mean-corrected and differenced
SALES.DAT data when the filter used for whitening the mean-corrected
differenced LEAD.DAT series is applied. BD Section 13.1.
APPJK2.DAT The two series APPJ and APPK (see above) in bivariate
format for analysis by ARVEC and BURG.
LS2.DAT Lead-Sales data in bivariate format for analysis by ARVEC and
BURG.
GNFP.DAT Australian gross non-farm product at average 1984/5 prices
in millions of dollars. September quarter, 1959, through March quarter,
1990 (Australian Bureau of Statistics).

FINSERV.DAT Australian expenditure on financial services in millions


of dollars. September quarter, 1969, through March quarter, 1990 (Aus-
tralian Bureau of Statistics).
BEER.DAT Australian monthly beer production in megalitres, includ-
ing ale and stout and excluding beverages with alcohol percentage less
than 1.15. January, 1956, through April, 1990 (Australian Bureau of
Statistics) .
ELEC.DAT Australian monthly electricity production in millions of kilo-
watt hours. January, 1956, through April, 1990 (Australian Bureau of
Statistics) .
CHOCS.DAT Australian monthly chocolate-based confectionery produc-
tion in tonnes. July, 1957, through October, 1990 (Australian Bureau
of Statistics).
IMPORTS.DAT Australian imports of all goods and services in millions
of Australian dollars at average 1984/85 prices. September quarter,
1959, through December quarter, 1990 (Australian Bureau of Statis-
tics).
Index

absolute coherency 69
accuracy parameter for optimization 29
ACF 19, 36, 46, 50
AICC statistic 19, 23, 32, 40, 80, 93
AIC statistic 19
AR model 20, 45
   multivariate 87, 91
   subset 97
AR polynomial 46
AR(∞) representation 46, 47
ARARMA forecasting 95
ARMA model 19, 44
   simulation 49
autocorrelation function, see ACF
autoregression, see AR
BIC statistic 25
Box-Cox transformation 14
causal model 23, 46
classical decomposition 15
coherency 66, 69
CONFIG.SYS file 2
convergence criterion 30
correlation matrix 33
cross correlation 72
cross periodogram 66
cross spectral density 67
cumulative periodogram 55
data files 8, 11, 12, 113
data transformations, see transforming data
decomposition 15
delay parameter 77, 80
diagnostics 34
differencing 17
discrete Fourier transform 53
efficient estimation 27
entering a model 21
entering data 11
estimation 22, 26, 52, 57, 66, 80
exponential smoothing 60, 62
files
   data 8, 11, 12, 113
   model 21, 22, 26, 32
Fisher's test 56
forecasting, see prediction
frequency domain 50
generating a series 49
goodness of fit 24, 32, 34, 81
graphing data 12
iid 37
installation 3
invertible model 31, 34, 46, 47
   conversion to 46
iterations in optimization 29, 30
Kalman filter 76, 80
lead time 42
least squares estimation 30, 80
likelihood 23-25, 27, 89
linear trend 16, 17
Ljung-Box statistic 38
long-memory series 95, 101
long-term dependency 19
MA model 20, 45
MA polynomial 46
MA smoothing 60, 61
MA(∞) representation 45, 47
McLeod-Li statistic 39
maximum likelihood 27, 30
mean 17
mean squared error 41
memory shortening 95, 98
menus 7
   data 11
   estimation 27
   main 9
   model 22
   prediction 42
   residuals 35
   results 31
   spectral density 51
model 9, 21, 44
   ACF and PACF 46
   changing 26
   entering 21
   filing 26, 32
   representations 47
   spectral density 50
   testing, see residual analysis
moving average, see MA
multiplicative models 30
multivariate series 86, 91
optimization results 31
optimization settings 28
PACF 19, 36, 46, 50
parameter estimation 27
   preliminary 22
   optimization settings 28
   standard errors 33
   correlation matrix 33
partial autocorrelation function, see PACF
periodic components 51
periodogram 53, 66
phase spectrum 66, 70
plotting data 12
portmanteau test 38
prediction 32, 41, 80, 83, 104
   1-step 27, 35
preliminary estimation 22, 76
printing graphs 7
quadratic trend 17
residual analysis 34-40
   ACF and PACF 36
   histogram 36
   plotting residuals 36
   portmanteau test 38
   tests of randomness 37
residuals 32, 34, 36, 78
screen graphics 2
seasonal component 16, 19
short-memory series 96, 101
short-term dependency 19
simulating ARMA processes 49
simulating fractionally integrated ARMA processes 105
smoothing a periodogram 57, 66
smoothing data 60
spectral density 50, 57, 66
   estimation 52, 66
standard errors 33
stopping number 30
storing data 8, 11, 12
storing models 23, 26, 32
system requirements 2
transfer function 72, 74-85
transformation of data 13-18
   Box-Cox 14
   decomposition 15
   differencing 17
   inversion 18, 42
   subtracting the mean 17
trend component 16
tutorial 10
weight function 57, 66
white noise 21, 26, 35, 44, 56, 88, 94
Whittle approximation 102
WORD6 8, 108-112
Yule-Walker equations 87, 97
