Modelling Time Series With Neural Networks
Volker Tresp
Summer 2017
Modelling of Time Series
Neural Networks for Time-Series Modelling
• For simplicity, we assume that both z_t and x_t are scalar. The goal is the prediction of the next value of the time series
Neural Networks for Time-Series Modelling (cont’d)
• The neural network can be trained as before with simple backpropagation if in training all z_t and all x_t are known!
• This is a NARX model: Nonlinear Auto Regressive Model with external inputs. Another name: TDNN (time-delay neural network); a minimal sketch follows below.
• Predicting more than one time step into the future is not trivial
• The model noise needs to be properly considered in multiple-step prediction (for example, by a stochastic simulation); if possible, one could also simulate future inputs (multivariate prediction)
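• A minimal NARX/TDNN sketch in Python/NumPy (illustrative only; the toy series, the lag order p, and the small network are assumptions, not part of the lecture): the next value of a scalar series is predicted from its last p values plus an external input, and the feedforward network is trained with plain gradient descent

import numpy as np

# Minimal NARX / TDNN sketch (illustrative): predict x_{t+1} from the last p
# values of the series and an external input u_t, using a small feedforward
# network trained with simple gradient descent.
rng = np.random.default_rng(0)

# Toy data: a noisy sine wave x_t driven by an external input u_t (assumed example).
T = 500
u = rng.normal(size=T)
x = np.sin(0.1 * np.arange(T)) + 0.1 * u + 0.05 * rng.normal(size=T)

p = 5                                        # number of lagged values fed to the network
X = np.column_stack([x[i:T - p + i] for i in range(p)] + [u[p - 1:T - 1]])
y = x[p:]                                    # target: next value of the series

H = 10                                       # hidden units
V = 0.1 * rng.normal(size=(X.shape[1], H))   # input-to-hidden weights
w = 0.1 * rng.normal(size=H)                 # hidden-to-output weights
lr = 0.01

for epoch in range(2000):
    z = np.tanh(X @ V)                       # hidden layer
    y_hat = z @ w                            # linear output for regression
    err = y_hat - y
    grad_w = z.T @ err / len(y)              # gradient of the mean squared error
    grad_V = X.T @ ((err[:, None] * w) * (1 - z ** 2)) / len(y)
    w -= lr * grad_w
    V -= lr * grad_V

print("training MSE:", np.mean((np.tanh(X @ V) @ w - y) ** 2))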
Recurrent Neural Network
• Recurrent Neural Networks are powerful methods for time series and sequence modelling
Generic Recurrent Neural Network Architecture
• Consider a feedforward neural network where there are connections between the hidden units
z_{t,h} = sig(z_{t-1}^T a_h + x_t v_h)
and, as before,
ŷ_t = sig(z_t^T w)
• In Recurrent Neural Networks (RNNs), the next state of the neurons in the hidden layer depends on their last state, and neither is directly measured
• Note that in most applications, one is interested in the output y_t (and not in z_{t,h})
• The next figure shows an example. Only some of the recurrent connections are shown (blue). The blue connections also model a time lag. Without recurrent connections (a_h = 0, ∀h), we obtain a regular feedforward network
• Note that a recurrent neural network has an internal memory
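• A minimal sketch of one step of these update equations in Python/NumPy (illustrative; the matrix A stacks the recurrent weight vectors a_h, v collects the input weights v_h, and w is the output weight vector)

import numpy as np

# One step of the generic RNN above, in vectorized form (names are illustrative).
def sig(s):
    return 1.0 / (1.0 + np.exp(-s))

def rnn_step(z_prev, x_t, A, v, w):
    """z_prev: hidden state at t-1, shape (H,); x_t: scalar input at time t."""
    z_t = sig(A @ z_prev + v * x_t)   # z_{t,h} = sig(z_{t-1}^T a_h + x_t v_h)
    y_t = sig(z_t @ w)                # ŷ_t = sig(z_t^T w)
    return z_t, y_t

rng = np.random.default_rng(0)
H = 4
A = 0.5 * rng.normal(size=(H, H))     # recurrent connections (A = 0 gives a feedforward net)
v, w = rng.normal(size=H), rng.normal(size=H)
z, y = rnn_step(np.zeros(H), 1.0, A, v, w)
print(z, y)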
A Recurrent Neural Network Architecture unfolded in Time
• Consider that at each time-step a feedforward Neural Network predicts outputs based
on some inputs
• In addition, the hidden layer also receives input from the hidden layer of the previous
time step
• Without the nonlinearities in the transfer functions, this is a linear state-space model; thus an RNN is a nonlinear state-space model
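• A minimal sketch of the unfolding in Python/NumPy (illustrative; same state update as the previous sketch): running the update over a whole input sequence is a deep feedforward computation with the weights shared across time steps, and dropping the sig() nonlinearity would leave a linear state-space model

import numpy as np

# Unfolding the RNN in time: one "layer" per time step, weights shared across steps.
def sig(s):
    return 1.0 / (1.0 + np.exp(-s))

def rnn_unroll(x_seq, A, v, w):
    z = np.zeros(A.shape[0])            # initial hidden state z_0
    outputs = []
    for x_t in x_seq:                   # one unfolded layer per time step
        z = sig(A @ z + v * x_t)        # the hidden state carries the memory
        outputs.append(sig(z @ w))      # output ŷ_t at every step
    return np.array(outputs)

rng = np.random.default_rng(0)
H = 4
A, v, w = 0.5 * rng.normal(size=(H, H)), rng.normal(size=H), rng.normal(size=H)
print(rnn_unroll(rng.normal(size=10), A, v, w))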
Training of Recurrent Neural Network Architecture
Echo-State Network
• Training only the output weights, while keeping randomly drawn recurrent and input weights fixed, works surprisingly well; this is done in the Echo-State Network (ESN)
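• A minimal ESN sketch in Python/NumPy (illustrative; the toy series, the reservoir size, and the spectral-radius scaling are assumptions): the recurrent and input weights are random and fixed, and only the linear readout is trained, here by ridge regression

import numpy as np

# Minimal Echo-State Network sketch (illustrative).
rng = np.random.default_rng(0)
T, H = 400, 100
x = np.sin(0.2 * np.arange(T + 1))      # toy scalar series (assumed example)

A = rng.normal(size=(H, H))             # random reservoir weights, kept fixed
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))   # spectral radius < 1 ("echo state" property)
v = rng.uniform(-0.5, 0.5, size=H)      # random input weights, kept fixed

# Run the reservoir over the series and collect the hidden states.
Z = np.zeros((T, H))
z = np.zeros(H)
for t in range(T):
    z = np.tanh(A @ z + v * x[t])
    Z[t] = z

# Train only the readout: predict x_{t+1} from z_t with ridge regression.
lam = 1e-3
w = np.linalg.solve(Z.T @ Z + lam * np.eye(H), Z.T @ x[1:T + 1])
print("one-step training MSE:", np.mean((Z @ w - x[1:T + 1]) ** 2))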
Iterative Prediction
• Assume a trained model where the prediction is: ŷ_t → (x_t, y_t) → ŷ_{t+1}, ...
• Thus we predict (e.g., the DAX of the next day) and then obtain the measurement of the next day
• In probabilistic models, the measurement can change the hidden state estimates accordingly (HMM, Kalman filter, particle filter, ...)
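• A minimal sketch of this predict-then-measure cycle in Python (illustrative; the one-step predictor is a hypothetical placeholder, not a trained model)

# Iterative prediction sketch: predict ŷ_t, then feed the observed measurement
# back into the history before predicting ŷ_{t+1}.
def one_step_predict(history):
    # Hypothetical trained model; here simply a persistence forecast.
    return history[-1]

measurements = [1.0, 1.2, 1.1, 1.4, 1.3]   # e.g., daily index values (toy numbers)
history = [measurements[0]]
predictions = []
for y_t in measurements[1:]:
    predictions.append(one_step_predict(history))  # predict before y_t is observed
    history.append(y_t)                            # then incorporate the new measurement

print(predictions)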
Bidirectional RNNs
Long Short-Term Memory (LSTM)
• As a recurrent structure, the Long Short-Term Memory (LSTM) approach has been very successful
• Basic idea: at time T a newspaper announces that the Siemens stock is labelled as
“buy”. This information will influence the development of the stock in the next days.
A standard RNN will not remember this information for very long. One solution is to
define an extra input to represent that fact, which stays on as long as “buy” is valid. But
this is handcrafted and does not exploit the flexibility of the RNN. A flexible construct
which can hold the information is a long short-term memory (LSTM) block.
• The LSTM has been used very successfully for reading handwritten text and is the basis for many applications involving sequential data (NLP, translation of text, ...)
LSTM in Detail
• The LSTM block replaces one hidden unit z_h, together with its input weights a_h and v_h. In general, all H hidden units are replaced by H LSTM blocks. It produces one output z_h (in the figure it is called y)
• All inputs in the figure are weighted inputs
• Thus in the figure z would be the regular RNN-neuron output with a tanh transfer
function
• Three gates are used that control the information flow
• The input gate (one parameter) determines if z should be attenuated
• The forget gate (one parameter) determines if the last z should be added and with
which weight
• Then another tanh is applied, modulated by the output gate (see the sketch below)
• See http://www.wildml.com/2015/10/recurrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano/
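• A minimal sketch of one LSTM block in Python/NumPy (illustrative and not the exact formulation of the figure; the stacked weight layout is an assumption): the input gate scales the candidate z, the forget gate scales the previous cell state, and the output gate scales the tanh of the new cell state

import numpy as np

# One LSTM block (illustrative weight layout: candidate, input, forget, output stacked).
def sig(s):
    return 1.0 / (1.0 + np.exp(-s))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    H = h_prev.shape[0]
    a = W @ x_t + U @ h_prev + b          # all pre-activations at once, shape (4H,)
    z      = np.tanh(a[0:H])              # candidate (the "regular RNN neuron" with tanh)
    i_gate = sig(a[H:2 * H])              # input gate
    f_gate = sig(a[2 * H:3 * H])          # forget gate
    o_gate = sig(a[3 * H:4 * H])          # output gate
    c_t = f_gate * c_prev + i_gate * z    # cell state: kept old content plus gated new input
    h_t = o_gate * np.tanh(c_t)           # block output (called y in the figure)
    return h_t, c_t

rng = np.random.default_rng(0)
D, H = 3, 5
W, U, b = rng.normal(size=(4 * H, D)), rng.normal(size=(4 * H, H)), np.zeros(4 * H)
h, c = lstm_step(rng.normal(size=D), np.zeros(H), np.zeros(H), W, U, b)
print(h.shape, c.shape)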
LSTM Applications
• Wiki: LSTM achieved the best known results in unsegmented connected handwriting
recognition, and in 2009 won the ICDAR handwriting competition. LSTM networks
have also been used for automatic speech recognition, and were a major component
of a network that in 2013 achieved a record 17.7% phoneme error rate on the classic
TIMIT natural speech dataset
• Applications: Robot control, Time series prediction, Speech recognition, Rhythm learning, Music composition, Grammar learning, Handwriting recognition, Human action recognition, Protein Homology Detection
Gated Recurrent Units (GRUs)
• Some people found LSTMs too complicated and invented GRUs with fewer gates
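• A minimal GRU sketch in Python/NumPy for comparison with the LSTM block above (illustrative; the weight layout and gate convention are assumptions): only an update gate and a reset gate, and no separate cell state

import numpy as np

# One GRU step (illustrative weight layout: update gate, reset gate, candidate stacked).
def sig(s):
    return 1.0 / (1.0 + np.exp(-s))

def gru_step(x_t, h_prev, W, U, b):
    H = h_prev.shape[0]
    gates = sig(W[:2 * H] @ x_t + U[:2 * H] @ h_prev + b[:2 * H])
    u_gate, r_gate = gates[:H], gates[H:]               # update and reset gates
    h_cand = np.tanh(W[2 * H:] @ x_t + U[2 * H:] @ (r_gate * h_prev) + b[2 * H:])
    return (1 - u_gate) * h_prev + u_gate * h_cand      # blend old state and new candidate

rng = np.random.default_rng(0)
D, H = 3, 5
W, U, b = rng.normal(size=(3 * H, D)), rng.normal(size=(3 * H, H)), np.zeros(3 * H)
print(gru_step(rng.normal(size=D), np.zeros(H), W, U, b))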
Encoder-Decoder Architecture