
The Little Book

of
Deep Learning

François Fleuret
François Fleuret is a professor of computer sci-
ence at the University of Geneva, Switzerland.

The cover illustration is a schematic of the Neocognitron by Fukushima [1980], a key ancestor of deep neural networks.

This ebook is formatted to fit on a phone screen.


Contents

List of Figures
Foreword

I Foundations

1 Machine Learning
  1.1 Learning from data
  1.2 Basis function regression
  1.3 Under and overfitting
  1.4 Categories of models
2 Efficient computation
  2.1 GPUs, TPUs, and batches
  2.2 Tensors
3 Training
  3.1 Losses
  3.2 Autoregressive models
  3.3 Gradient descent
  3.4 Backpropagation
  3.5 The value of depth
  3.6 Training protocols
  3.7 The benefits of scale

II Deep models

4 Model components
  4.1 The notion of layer
  4.2 Linear layers
  4.3 Activation functions
  4.4 Pooling
  4.5 Dropout
  4.6 Normalizing layers
  4.7 Skip connections
  4.8 Attention layers
  4.9 Token embedding
  4.10 Positional encoding
5 Architectures
  5.1 Multi-Layer Perceptrons
  5.2 Convolutional networks
  5.3 Attention models

III Applications

6 Prediction
  6.1 Image denoising
  6.2 Image classification
  6.3 Object detection
  6.4 Semantic segmentation
  6.5 Speech recognition
  6.6 Text-image representations
  6.7 Reinforcement learning
7 Synthesis
  7.1 Text generation
  7.2 Image generation

The missing bits

Bibliography

Index
List of Figures

1.1 Kernel regression
1.2 Overfitting of kernel regression
3.1 Causal autoregressive model
3.2 Gradient descent
3.3 Backpropagation
3.4 Feature warping
3.5 Training and validation losses
3.6 Scaling laws
3.7 Model training costs
4.1 1D convolution
4.2 2D convolution
4.3 Stride, padding, and dilation
4.4 Receptive field
4.5 Activation functions
4.6 Max pooling
4.7 Dropout
4.8 Dropout 2D
4.9 Batch normalization
4.10 Skip connections
4.11 Attention operator interpretation
4.12 Complete attention operator
4.13 Multi-Head Attention layer
5.1 Multi-Layer Perceptron
5.2 LeNet-like convolutional model
5.3 Residual block
5.4 Downscaling residual block
5.5 ResNet-50
5.6 Transformer components
5.7 Transformer
5.8 GPT model
5.9 ViT model
6.1 Convolutional object detector
6.2 Object detection with SSD
6.3 Semantic segmentation with PSP
6.4 CLIP zero-shot prediction
6.5 DQN state value evolution
7.1 Few-shot prediction with a GPT
7.2 Denoising diffusion

Foreword

The current period of progress in artificial intelligence was triggered when Krizhevsky et al.
[2012] showed that an artificial neural network
with a simple structure, which had been known
for more than twenty years [LeCun et al., 1989],
could beat complex state-of-the-art image recog-
nition methods by a huge margin, simply by
being a hundred times larger and trained on a
dataset similarly scaled up.

This breakthrough was made possible thanks to Graphical Processing Units (GPUs), mass-
market, highly parallel computing devices de-
veloped for real-time image synthesis and repur-
posed for artificial neural networks.

Since then, under the umbrella term of “deep learning,” innovations in the structures of these
networks, the strategies to train them, and ded-
icated hardware have allowed for an exponen-
tial increase in both their size and the quantity
of training data they take advantage of [Sevilla
et al., 2022]. This has resulted in a wave of suc-
cessful applications across technical domains,
from computer vision and robotics to speech
and natural language processing.

Although the bulk of deep learning is not difficult to understand, it combines diverse components such as linear algebra, calculus, probabilities, optimization, signal processing, programming, algorithmics, and high-performance computing, making it complicated to learn.

Instead of trying to be exhaustive, this little book is limited to the background necessary to under-
stand a few important models. This proved to be
a popular approach, resulting in 250,000 down-
loads of the PDF file in the month following its
announcement on Twitter.

If you did not get this book from its official URL

https://fleuret.org/public/lbdl.pdf

please do so, so that I can estimate the number of readers.

François Fleuret,
June 23, 2023

PART I

Foundations

Chapter 1

Machine Learning

Deep learning belongs historically to the larger


field of statistical machine learning, as it funda-
mentally concerns methods that are able to learn
representations from data. The techniques in-
volved come originally from artificial neural net-
works, and the “deep” qualifier highlights that
models are long compositions of mappings, now
known to achieve greater performance.

The modularity, versatility, and scalability of


deep models have resulted in a plethora of spe-
cific mathematical methods and software devel-
opment tools, establishing deep learning as a
distinct and vast technical field.

1.1 Learning from data
The simplest use case for a model trained from
data is when a signal x is accessible, for instance,
the picture of a license plate, from which one
wants to predict a quantity y, such as the string
of characters written on the plate.

In many real-world situations where x is a high-


dimensional signal captured in an uncontrolled
environment, it is too complicated to come up
with an analytical recipe that relates x and y.

What one can do is to collect a large training


set 𝒟 of pairs (xn,yn), and devise a paramet-
ric model f . This is a piece of computer code
that incorporates trainable parameters w that
modulate its behavior, and such that, with the
proper values w∗, it is a good predictor. “Good”
here means that if an x is given to this piece
of code, the value ŷ = f (x;w ∗ ) it computes is
a good estimate of the y that would have been
associated with x in the training set had it been
there.

This notion of goodness is usually formalized


with a loss ℒ(w) which is small when f (· ;w) is
good on 𝒟. Then, training the model consists of
computing a value w∗ that minimizes ℒ(w∗).

Most of the content of this book is about the defi-
nition of f , which, in realistic scenarios, is a com-
plex combination of pre-defined sub-modules.

The trainable parameters that compose w are of-


ten called weights, by analogy with the synaptic
weights of biological neural networks. In addi-
tion to these parameters, models usually depend
on meta-parameters, which are set according to
domain prior knowledge, best practices, or re-
source constraints. They may also be optimized
in some way, but with techniques different from
those used to optimize w.

1.2 Basis function regression
We can illustrate the training of a model in a sim-
ple case where xn and yn are two real numbers,
the loss is the mean squared error:
$$\mathcal{L}(w) = \frac{1}{N}\sum_{n=1}^{N}\big(y_n - f(x_n;w)\big)^2, \qquad (1.1)$$

and f(·;w) is a linear combination of a pre-defined basis of functions f_1, ..., f_K, with w = (w_1, ..., w_K):

$$f(x;w) = \sum_{k=1}^{K} w_k f_k(x).$$

Since f(x_n;w) is linear with respect to the w_k's and ℒ(w) is quadratic with respect to f(x_n;w),

Figure 1.1: Given a basis of functions (blue curves) and a training set (black dots), we can compute an
optimal linear combination of the former (red curve)
to approximate the latter for the mean squared error.

the loss ℒ(w) is quadratic with respect to the w_k's, and finding w∗ that minimizes it boils down to solving a linear system. See Figure 1.1 for an example with Gaussian kernels as f_k.
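
To make this concrete, here is a minimal sketch in Python with NumPy. It assumes Gaussian kernels with illustrative centers and bandwidth (both hypothetical choices), and solves the quadratic problem as a linear least-squares system:

```python
import numpy as np

def gaussian_basis(x, centers, sigma=0.2):
    # One column per basis function f_k(x) = exp(-(x - c_k)^2 / (2 sigma^2)).
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))

# Toy training set (x_n, y_n) and K Gaussian kernels spread over [0, 1].
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, size=50)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.normal(size=50)
centers = np.linspace(0, 1, 10)

# The loss is quadratic in w, so the minimizer solves a least-squares system.
F = gaussian_basis(x_train, centers)              # N x K design matrix
w_star, *_ = np.linalg.lstsq(F, y_train, rcond=None)

# Prediction f(x; w*) on new points.
x_test = np.linspace(0, 1, 100)
y_hat = gaussian_basis(x_test, centers) @ w_star
```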

1.3 Under and overfitting
A key element is the interplay between the capac-
ity of the model, that is its flexibility and ability
to fit diverse data, and the amount and quality
of the training data. When the capacity is insuf-
ficient, the model cannot fit the data, resulting
in a high error during training. This is referred
to as underfitting.

On the contrary, when the amount of data is in-


sufficient, as illustrated in Figure 1.2, the model
will often learn characteristics specific to the
training examples, resulting in excellent perfor-
mance during training, at the cost of a worse

Figure 1.2: If the amount of training data (black dots)


is small compared to the capacity of the model, the em-
pirical performance of the fitted model during training
(red curve) reflects poorly its actual fit to the underly-
ing data structure (thin black curve), and consequently
its usefulness for prediction.

fit to the global structure of the data, and poor
performance on new inputs. This phenomenon
is referred to as overfitting.

So, a large part of the art of applied machine


learning is to design models that are not too
flexible yet still able to fit the data. This is done
by crafting the right inductive bias in a model,
which means that its structure corresponds to
the underlying structure of the data at hand.

Even though this classical perspective is relevant


for reasonably-sized deep models, things get con-
fusing with large ones that have a very large
number of trainable parameters and extreme ca-
pacity yet still perform well on prediction. We
will come back to this in § 3.6 and § 3.7.

1.4 Categories of models
We can organize the use of machine learning
models into three broad categories:

• Regression consists of predicting a


continuous-valued vector y ∈ RK , for instance,
a geometrical position of an object, given an
input signal X. This is a multi-dimensional
generalization of the setup we saw in § 1.2. The
training set is composed of pairs of an input
signal and a ground-truth value.

• Classification aims at predicting a value from


a finite set {1, ..., C}, for instance, the label Y of
an image X. As with regression, the training set
is composed of pairs of input signal and ground-
truth quantity, here a label from that set. The
standard way of tackling this is to predict one
score per potential class, such that the correct
class has the maximum score.

• Density modeling has as its objective to model


the probability density function of the data µX
itself, for instance, images. In that case, the train-
ing set is composed of values xn without associ-
ated quantities to predict, and the trained model
should allow for the evaluation of the probability
density function, or sampling from the distribu-
tion, or both.
Both regression and classification are generally
referred to as supervised learning, since the
value to be predicted, which is required as a
target during training, has to be provided, for in-
stance, by human experts. On the contrary, den-
sity modeling is usually seen as unsupervised
learning, since it is sufficient to take existing
data without the need for producing an associ-
ated ground-truth.

These three categories are not disjoint; for in-


stance, classification can be cast as class-score
regression, or discrete sequence density model-
ing as iterated classification. Furthermore, they
do not cover all cases. One may want to predict
compounded quantities, or multiple classes, or
model a density conditional on a signal.

Chapter 2

Efficient computation

From an implementation standpoint, deep learn-


ing is about executing heavy computations with
large amounts of data. The Graphical Processing
Units (GPUs) have been instrumental in the suc-
cess of the field by allowing such computations
to be run on affordable hardware.

The importance of their use, and the resulting


technical constraints on the computations that
can be done efficiently, force the research in the
field to constantly balance mathematical sound-
ness and implementability of novel methods.

2.1 GPUs, TPUs, and batches
Graphical Processing Units were originally de-
signed for real-time image synthesis, which re-
quires highly parallel architectures that happen
to be well suited for deep models. As their usage
for AI has increased, GPUs have been equipped
with dedicated tensor cores, and deep-learning
specialized chips such as Google’s Tensor Pro-
cessing Units (TPUs) have been developed.

A GPU possesses several thousand parallel units


and its own fast memory. The limiting factor
is usually not the number of computing units,
but the read-write operations to memory. The
slowest link is between the CPU memory and
the GPU memory, and consequently one should
avoid copying data across devices. Moreover,
the structure of the GPU itself involves multiple
levels of cache memory, which are smaller but
faster, and computation should be organized to
avoid copies between these different caches.

This is achieved, in particular, by organizing the


computation in batches of samples that can fit
entirely in the GPU memory and are processed
in parallel. When an operator combines a sample
and model parameters, both have to be moved
to the cache memory near the actual computing

units. Proceeding by batches allows for copying
the model parameters only once, instead of doing
it for each sample. In practice, a GPU processes
a batch that fits in memory almost as quickly as
it would process a single sample.

A standard GPU has a theoretical peak performance of 10¹³–10¹⁴ floating-point operations (FLOPs) per second, and its memory typically ranges from 8 to 80 gigabytes. The standard FP32 encoding of float numbers uses 32 bits, but empirical results show that using a 16-bit encoding, or even less for some operands, does not degrade performance.

We will come back in § 3.7 to the large size of


deep architectures.

2.2 Tensors
GPUs and deep learning frameworks such as Py-
Torch or JAX manipulate the quantities to be
processed by organizing them as tensors, which
are series of scalars arranged along several dis-
crete axes. They are elements of RN1×···×ND
that generalize the notion of vector and matrix.

Tensors are used to represent both the signals to


be processed, the trainable parameters of the
models, and the intermediate quantities they
compute. The latter are called activations, in
reference to neuronal activations.

For instance, a time series is naturally encoded


as a T × D tensor, or, for historical reasons, as a
D× T tensor, where T is its duration and D is
the dimension of the feature representation at
every time step, often referred to as the number
of channels. Similarly, a 2D-structured signal can
be represented as a D × H ×W tensor, where H
and W are its height and width. An RGB image
would correspond to D = 3, but the number of
channels can grow up to several thousands in
large models.

Adding more dimensions allows for the represen-


tation of series of objects. For example, fifty RGB
images of resolution 32 × 24 can be encoded as
a 50 × 3 × 24 × 32 tensor.
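
A small PyTorch sketch of these shape conventions, using the illustrative sizes mentioned above:

```python
import torch

# A batch of 50 RGB images of height 24 and width 32: shape (N, C, H, W).
images = torch.rand(50, 3, 24, 32)

# A batch of 50 time series of duration T = 100 with D = 16 channels.
series = torch.rand(50, 16, 100)

# Reshaping and transposing return views sharing the same storage,
# so no coefficients are copied.
flat = images.view(50, 3 * 24 * 32)           # (N, C*H*W)
channels_last = images.permute(0, 2, 3, 1)    # (N, H, W, C)
print(images.shape, flat.shape, channels_last.shape)
```
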
Deep learning libraries provide a large number
of operations that encompass standard linear
algebra, complex reshaping and extraction, and
deep-learning specific operations, some of which
we will see in Chapter 4. The implementation of
tensors separates the shape representation from
the storage layout of the coefficients in mem-
ory, which allows many reshaping, transposing,
and extraction operations to be done without
coefficient copying, hence extremely rapidly.

In practice, virtually any computation can be


decomposed into elementary tensor operations,
which avoids non-parallel loops at the language
level and poor memory management.

Besides being convenient tools, tensors are


instrumental in achieving computational effi-
ciency. All the people involved in the develop-
ment of an operational deep model, from the
designers of the drivers, libraries, and models
to those of the computers and chips, know that
the data will be manipulated as tensors. The
resulting constraints on locality and block de-
composability enable all the actors in this chain
to come up with optimal designs.

Chapter 3

Training

As introduced in § 1.1, training a model consists


of minimizing a loss ℒ(w) which reflects the
performance of the predictor f (· ;w) on a train-
ing set 𝒟.
Since models are usually extremely complex, and
their performance is directly related to how well
the loss is minimized, this minimization is a key
challenge, which involves both computational
and mathematical difficulties.

3.1 Losses
The example of the mean squared error from
Equation 1.1 is a standard loss for predicting a
continuous value.

For density modeling, the standard loss is the


likelihood of the data. If f (x;w) is to be inter-
preted as a normalized log-probability or log-
density, the loss is the opposite of the sum of its
values over training samples, which corresponds
to the likelihood of the data-set.

Cross-entropy
For classification, the usual strategy is that the
output of the model is a vector with one com-
ponent f (x;w)y per class y, interpreted as the
logarithm of a non-normalized probability, or
logit.

With X the input signal and Y the class to pre-


dict, we can then compute from f an estimate
of the posterior probabilities:

$$\hat{P}(Y = y \mid X = x) = \frac{\exp f(x;w)_y}{\sum_z \exp f(x;w)_z}.$$

This expression is generally called the softmax,


or more adequately, the softargmax, of the logits.

To be consistent with this interpretation, the
model should be trained to maximize the proba-
bility of the true classes, hence to minimize the
cross-entropy, expressed as:
$$\mathcal{L}_{\text{ce}}(w) = -\frac{1}{N}\sum_{n=1}^{N} \log \hat{P}(Y = y_n \mid X = x_n) = \frac{1}{N}\sum_{n=1}^{N} \underbrace{-\log \frac{\exp f(x_n;w)_{y_n}}{\sum_z \exp f(x_n;w)_z}}_{L_{\text{ce}}(f(x_n;w),\, y_n)}.$$
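
A minimal PyTorch sketch of this loss; torch.nn.functional.cross_entropy combines the softargmax and the negative log-likelihood, taking logits and class indices directly:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(8, 10)             # f(x_n; w) for a batch of 8 samples, 10 classes
targets = torch.randint(0, 10, (8,))    # true classes y_n

# Explicit computation: softargmax, then negative log-probability of the true class.
log_probs = logits.log_softmax(dim=1)
loss_manual = -log_probs[torch.arange(8), targets].mean()

# Equivalent built-in, numerically stable version.
loss = F.cross_entropy(logits, targets)
```
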
Contrastive loss
In certain setups, even though the value to be
predicted is continuous, the supervision takes
the form of ranking constraints. The typical do-
main where this is the case is metric learning,
where the objective is to learn a measure of dis-
tance between samples such that a sample xa
from a certain semantic class is closer to any
sample xb of the same class than to any sample
xc from another class. For instance, xa and xb
can be two pictures of a certain person, and xc a
picture of someone else.

The standard approach for such cases is to min-


imize a contrastive loss, in that case, for in-
stance, the sum over triplets (xa,xb,xc), such
that y_a = y_b ≠ y_c, of

$$\max\big(0,\, 1 - f(x_a, x_c; w) + f(x_a, x_b; w)\big).$$

This quantity will be strictly positive unless f(x_a, x_c; w) ≥ 1 + f(x_a, x_b; w).
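
A minimal sketch of such a triplet loss in PyTorch, assuming a hypothetical embedding model `embed` and taking f(x, x′; w) to be the Euclidean distance between embeddings; PyTorch also provides nn.TripletMarginLoss with this structure:

```python
import torch

def triplet_loss(embed, xa, xb, xc, margin=1.0):
    # embed is a stand-in for any model mapping a batch of samples to embeddings.
    d_ab = (embed(xa) - embed(xb)).pow(2).sum(dim=1).sqrt()   # distance to same-class sample
    d_ac = (embed(xa) - embed(xc)).pow(2).sum(dim=1).sqrt()   # distance to other-class sample
    # Zero once d_ac >= margin + d_ab, i.e. the negative is pushed far enough away.
    return torch.clamp(margin - d_ac + d_ab, min=0).mean()
```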

Engineering the loss


Usually, the loss minimized during training is
not the actual quantity one wants to optimize
ultimately, but a proxy for which finding the best
model parameters is easier. For instance, cross-
entropy is the standard loss for classification,
even though the actual performance measure is
a classification error rate, because the latter has
no informative gradient, a key requirement as
we will see in § 3.3.

It is also possible to add terms to the loss that


depend on the trainable parameters of the model
themselves to favor certain configurations.

The weight decay regularization, for instance,


consists of adding to the loss a term proportional
to the sum of the squared parameters. This can
be interpreted as having a Gaussian Bayesian
prior on the parameters, which favors smaller
values and thereby reduces the influence of the
data. This degrades performance on the train-
ing set, but reduces the gap between the per-
formance in training and that on new, unseen
data.
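
In practice this term is rarely added to the loss by hand, as optimizers expose it directly; a minimal sketch with a toy linear model:

```python
import torch
from torch import nn

model = nn.Linear(10, 2)

# weight_decay adds to the loss a term proportional to the sum of the squared
# parameters, shrinking them toward zero at every gradient step.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-4)
```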

3.2 Autoregressive models
A key class of methods, particularly for deal-
ing with discrete sequences in natural language
processing and computer vision, are the autore-
gressive models.

The chain rule for probabilities


Such models put to use the chain rule from prob-
ability theory:

$$P(X_1 = x_1, X_2 = x_2, \ldots, X_T = x_T) = P(X_1 = x_1)\, P(X_2 = x_2 \mid X_1 = x_1) \cdots P(X_T = x_T \mid X_1 = x_1, \ldots, X_{T-1} = x_{T-1}).$$

Although this decomposition is valid for a random sequence of any type, it is particularly efficient when the signal of interest is a sequence of tokens from a finite vocabulary {1, ..., K}.

With the convention that the additional token ∅ stands for an “unknown” quantity, we can represent the event {X_1 = x_1, ..., X_t = x_t} as the vector (x_1, ..., x_t, ∅, ..., ∅).

Then, a model

$$f : \{\varnothing, 1, \ldots, K\}^T \to \mathbb{R}^K,$$

which, given such an input, computes a vector l_t of K logits corresponding to

$$\hat{P}(X_t \mid X_1 = x_1, \ldots, X_{t-1} = x_{t-1}),$$

allows one to sample one token given the previous ones.

The chain rule ensures that by sampling T tokens x_t, one at a time given the previously sampled x_1, ..., x_{t−1}, we get a sequence that follows the joint distribution. This is an autoregressive generative model.
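
A minimal sampling loop sketching this procedure, assuming a hypothetical model f that maps a partially filled sequence (with 0 playing the role of ∅) to the K logits of the next token:

```python
import torch

def sample_sequence(f, T, K):
    # Start from the all-"unknown" sequence (∅, ..., ∅), with 0 standing for ∅.
    x = torch.zeros(T, dtype=torch.long)
    for t in range(T):
        logits = f(x)                                  # K logits for the next token
        probs = logits.softmax(dim=-1)
        x[t] = torch.multinomial(probs, 1).item() + 1  # tokens are 1..K, 0 remains ∅
    return x
```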

Training such a model can be done by minimizing the sum across training sequences and time steps of the cross-entropy loss

$$L_{\text{ce}}\big(f(x_1, \ldots, x_{t-1}, \varnothing, \ldots, \varnothing; w),\, x_t\big),$$

which is formally equivalent to maximizing the likelihood of the true x_t's.

The value that is classically monitored is not the


cross-entropy itself, but the perplexity, which is
defined as the exponential of the cross-entropy.
It corresponds to the number of values of a uni-
form distribution with the same entropy, which
is generally more interpretable.

Figure 3.1: An autoregressive model f is causal if a time step x_t of the input sequence modulates the predicted logits l_s only if s > t, as depicted by the blue arrows. This allows computing the distributions at all the time steps in one pass during training. During sampling, however, the l_t and x_t are computed sequentially, the latter sampled with the former, as depicted by the red arrows.

Causal models
The training procedure we described requires
a different input for each t, and the bulk of the
computation done for t < t′ is repeated for t′.
This is extremely inefficient since T is often of
the order of hundreds or thousands.

The standard strategy to address this issue is to


design a model f that predicts all the vectors of
logits l_1, ..., l_T at once, that is:

$$f : \{1, \ldots, K\}^T \to \mathbb{R}^{T \times K},$$

but with a computational structure such that the computed logits l_t for x_t depend only on the input values x_1, ..., x_{t−1}.

Such a model is called causal, since it corre-


sponds, in the case of temporal series, to not
letting the future influence the past, as illustrated
in Figure 3.1.

The consequence is that the output at every posi-


tion is the one that would be obtained if the input
were only available up to before that position.
During training, it allows one to compute the
output for a full sequence and to maximize the
predicted probabilities of all the tokens of that
same sequence, which again boils down to mini-
mizing the sum of the per-token cross-entropy.

Note that, for the sake of simplicity, we have


defined f as operating on sequences of a fixed
length T . However, models used in practice,
such as the transformers we will see in § 5.3, are
able to process sequences of arbitrary length.

Tokenizer
One important technical detail when dealing
with natural languages is that the representation
as tokens can be done in multiple ways, ranging
from the finest granularity of individual symbols
to entire words. The conversion to and from the
token representation is carried out by a separate
algorithm called a tokenizer.

A standard method is the Byte Pair Encoding


(BPE) [Sennrich et al., 2015] that constructs to-
kens by hierarchically merging groups of char-
acters, trying to get tokens that represent frag-
ments of words of various lengths but of similar
frequencies, allocating tokens to long frequent
fragments as well as to rare individual symbols.

3.3 Gradient descent
Except in specific cases like the linear regression
we saw in § 1.2, the optimal parameters w∗ do
not have a closed-form expression. In the general
case, the tool of choice to minimize a function is
gradient descent. It starts by initializing the pa-
rameters with a random w0, and then improves
this estimate by iterating gradient steps, each
consisting of computing the gradient of the loss
with respect to the parameters, and subtracting
a fraction of it:

$$w_{n+1} = w_n - \eta\, \nabla\mathcal{L}|_w(w_n). \qquad (3.1)$$

This procedure corresponds to moving the cur-


rent estimate a bit in the direction that locally
decreases ℒ(w) maximally, as illustrated in Fig-
ure 3.2.
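
Equation 3.1 amounts to a one-line update; a minimal NumPy sketch, assuming a function grad_loss that returns ∇ℒ|_w(w):

```python
import numpy as np

def gradient_descent(grad_loss, w0, lr=0.1, steps=1000):
    # w_{n+1} = w_n - eta * grad L(w_n)
    w = np.array(w0, dtype=float)
    for _ in range(steps):
        w = w - lr * grad_loss(w)
    return w

# Example: minimize L(w) = ||w||^2 / 2, whose gradient is w itself.
w_star = gradient_descent(lambda w: w, w0=[3.0, -2.0])
```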

Learning rate
The meta-parameter η is called the learning rate.
It is a positive value that modulates how quickly
the minimization is done, and must be chosen
carefully.

If it is too small, the optimization will be slow


at best, and may be trapped in a local minimum
early. If it is too large, the optimization may

Figure 3.2: At every point w, the gradient ∇ℒ|_w(w) is in the direction that maximizes the increase of ℒ, orthogonal to the level curves (top). The gradient descent minimizes ℒ(w) iteratively by subtracting a fraction of the gradient at every step, resulting in a trajectory that follows the steepest descent (bottom).

bounce around a good minimum and never de-
scend into it. As we will see in § 3.6, it can depend
on the iteration number n.

Stochastic Gradient Descent


All the losses used in practice can be expressed as
an average of a loss per small group of samples,
or per sample such as:
$$\mathcal{L}(w) = \frac{1}{N}\sum_{n=1}^{N} \ell_n(w),$$

where ℓ_n(w) = L(f(x_n;w), y_n) for some L, and


the gradient is then:

$$\nabla\mathcal{L}|_w(w) = \frac{1}{N}\sum_{n=1}^{N} \nabla \ell_n|_w(w). \qquad (3.2)$$

The resulting gradient descent would compute


exactly the sum in Equation 3.2, which is usu-
ally computationally heavy, and then update the
parameters according to Equation 3.1. However,
under reasonable assumptions of exchangeabil-
ity, for instance, if the samples have been prop-
erly shuffled, any partial sum of Equation 3.2
is an unbiased estimator of the full sum, albeit
noisy. So, updating the parameters from partial
sums corresponds to doing more gradient steps
for the same computational budget, with noisier
estimates of the gradient. Due to the redundancy
in the data, this happens to be a far more efficient
strategy.

We saw in § 2.1 that processing a batch of sam-


ples small enough to fit in the computing de-
vice’s memory is generally as fast as processing
a single one. Hence, the standard approach is to
split the full set 𝒟 into batches, and to update
the parameters from the estimate of the gradient
computed from each. This is called mini-batch
stochastic gradient descent, or stochastic gradi-
ent descent (SGD) for short.
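
A minimal PyTorch sketch of mini-batch SGD on a toy regression problem; names and sizes are purely illustrative:

```python
import torch
from torch import nn

x_all = torch.randn(1000, 16)                  # full training set
y_all = torch.randn(1000, 1)
model = nn.Linear(16, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

for epoch in range(10):
    perm = torch.randperm(1000)                # shuffle so partial sums are unbiased
    for i in range(0, 1000, 100):              # batches of 100 samples
        idx = perm[i:i + 100]
        loss = ((model(x_all[idx]) - y_all[idx]) ** 2).mean()
        optimizer.zero_grad()
        loss.backward()                        # gradient estimated from the batch only
        optimizer.step()                       # one noisy gradient step
```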

It is important to note that this process is ex-


tremely gradual, and that the number of mini-
batches and gradient steps are typically of the
order of several million.

As with many algorithms, intuition breaks down


in high dimensions, and although it may seem
that this procedure would be easily trapped in
a local minimum, in reality, due to the number
of parameters, the design of the models, and
the stochasticity of the data, its efficiency is far
greater than one might expect.

Plenty of variations of this standard strategy


have been proposed. The most popular one is
Adam [Kingma and Ba, 2014], which keeps run-
ning estimates of the mean and variance of each
component of the gradient, and normalizes them
automatically, avoiding scaling issues and differ-
ent training speeds in different parts of a model.

3.4 Backpropagation
Using gradient descent requires a technical means to compute ∇ℓ|_w(w) where ℓ = L(f(x;w); y). Given that f and L are
both compositions of standard tensor opera-
tions, as for any mathematical expression, the
chain rule from differential calculus allows us to
get an expression of it.

For the sake of making notation lighter, we will


not specify at which point gradients are com-
puted, since the context makes it clear.


Figure 3.3: Given a model f = f^(D) ∘ ··· ∘ f^(1), the forward pass (top) consists of computing the outputs x^(d) of the mappings f^(d) in order. The backward pass (bottom) computes the gradients of the loss with respect to the activation x^(d) and the parameters w_d backward by multiplying them by the Jacobians.

Forward and backward passes
Consider the simple case of a composition of
mappings:

$$f = f^{(D)} \circ f^{(D-1)} \circ \cdots \circ f^{(1)}.$$

The output of f (x;w) can be computed by start-


ing with x^(0) = x and applying iteratively:

$$x^{(d)} = f^{(d)}\big(x^{(d-1)}; w_d\big),$$

with x^(D) as the final value.

The individual scalar values of these interme-


diate results x(d) are traditionally called acti-
vations in reference to neuron activations, the
value D is the depth of the model, the individual
mappings f (d) are referred to as layers, as we
will see in § 4.1, and their sequential evaluation
is the forward pass (see Figure 3.3, top).

Conversely, the gradient ∇ℓ|_{x^(d−1)} of the loss with respect to the output x^(d−1) of f^(d−1) is the product of the gradient ∇ℓ|_{x^(d)} with respect to the output of f^(d) multiplied by the Jacobian J_{f^(d)}|_x of f^(d) with respect to its variable x. Thus, the gradients with respect to the outputs of all the f^(d)'s can be computed recursively backward, starting with ∇ℓ|_{x^(D)} = ∇L|_x.
And the gradient that we are interested in for training, that is ∇ℓ|_{w_d}, is the gradient with respect to the output of f^(d) multiplied by the Jacobian J_{f^(d)}|_w of f^(d) with respect to the parameters.

This iterative computation of the gradients with


respect to the intermediate activations, com-
bined with that of the gradients with respect
to the layers’ parameters, is the backward pass
(see Figure 3.3, bottom). The combination of
this computation with the procedure of gradient
descent is called backpropagation.

In practice, the implementation details of the


forward and backward passes are hidden from
programmers. Deep learning frameworks are
able to automatically construct the sequence of
operations to compute gradients.

A particularly convenient algorithm is Autograd


[Baydin et al., 2015], which tracks tensor opera-
tions and builds, on the fly, the combination of
operators for gradients. Thanks to this, a piece of
imperative programming that manipulates ten-
sors can automatically compute the gradient of
any quantity with respect to any other.
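
A minimal PyTorch illustration: any imperative computation on tensors flagged with requires_grad can be differentiated with a single call to backward():

```python
import torch

w = torch.tensor([1.0, -2.0], requires_grad=True)
x = torch.tensor([3.0, 0.5])

# The sequence of tensor operations is recorded on the fly.
loss = ((w * x).sum() - 1.0) ** 2

# backward() runs the backward pass and accumulates the gradient in w.grad.
loss.backward()
print(w.grad)   # dloss/dw = 2 * ((w * x).sum() - 1) * x, here [6.0, 1.0]
```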

Resource usage
Regarding the computational cost, as we will
see, the bulk of the computation goes into linear
operations, each requiring one matrix product
for the forward pass and two for the products by
the Jacobians for the backward pass, making the
latter roughly twice as costly as the former.

The memory requirement during inference is


roughly equal to that of the most demanding
individual layer. For training, however, the back-
ward pass requires keeping the activations com-
puted during the forward pass to compute the
Jacobians, which results in a memory usage that
grows proportionally to the model’s depth. Tech-
niques exist to trade the memory usage for com-
putation by either relying on reversible layers
[Gomez et al., 2017], or using checkpointing,
which consists of storing activations for some
layers only and recomputing the others on the fly
with partial forward passes during the backward
pass [Chen et al., 2016].

Vanishing gradient
A key historical issue when training a large net-
work is that when the gradient propagates back-
wards through an operator, it may be scaled by a

multiplicative factor, and consequently decrease
or increase exponentially when it traverses many
layers. A standard method to prevent it from
exploding is gradient norm clipping, which con-
sists of re-scaling the gradient to set its norm to
a fixed threshold if it is above it [Pascanu et al.,
2013].
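
In PyTorch this re-scaling is available as torch.nn.utils.clip_grad_norm_, applied between the backward pass and the optimizer step; a minimal sketch:

```python
import torch
from torch import nn

model = nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

x, y = torch.randn(32, 8), torch.randn(32, 1)
loss = ((model(x) - y) ** 2).mean()

optimizer.zero_grad()
loss.backward()
# Re-scale the gradient if its norm exceeds the threshold, here 1.0.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```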

When the gradient decreases exponentially, this


is called the vanishing gradient, and it may
make the training impossible, or, in its milder
form, cause different parts of the model to be
updated at different speeds, degrading their co-
adaptation [Glorot and Bengio, 2010].

As we will see in Chapter 4, multiple techniques


have been developed to prevent this from hap-
pening, reflecting a change in perspective that
was crucial to the success of deep-learning: in-
stead of trying to improve generic optimization
methods, the effort shifted to engineering the
models themselves to make them optimizable.

3.5 The value of depth
As the term “deep learning” indicates, useful
models are generally compositions of long se-
ries of mappings. Training them with gradient
descent results in a sophisticated co-adaptation
of the mappings, even though this procedure is
gradual and local.

We can illustrate this behavior with a simple


model ℝ² → ℝ² that combines eight layers, each
multiplying its input by a 2×2 matrix and ap-
plying Tanh per component, with a final linear
classifier. This is a simplified version of the stan-
dard Multi-Layer Perceptron that we will see in
§ 5.1.

If we train this model with SGD and cross-en-


tropy on a toy binary classification task (Figure
3.4, top left), the matrices co-adapt to deform the
space until the classification is correct, which
implies that the data have been made linearly
separable before the final affine operation (Fig-
ure 3.4, bottom right).
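
A sketch of such a model in PyTorch, assuming the structure described above (eight 2 × 2 affine layers each followed by Tanh, then a final affine classifier with one score per class):

```python
import torch
from torch import nn

layers = []
for _ in range(8):
    layers += [nn.Linear(2, 2), nn.Tanh()]   # 2x2 matrix (plus bias), then Tanh
layers.append(nn.Linear(2, 2))               # final affine classifier, one score per class

model = nn.Sequential(*layers)
logits = model(torch.randn(16, 2))           # scores for a batch of 16 points in R^2
```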

Such an example gives a glimpse of what a deep


model can achieve; however, it is partially mis-
leading due to the low dimension of both the sig-
nal to process and the internal representations.
Everything is kept in 2D here for the sake of

Figure 3.4: Each plot shows the deformation of the
space and the resulting positioning of the training
points in R2 after d layers of processing, starting with
the input to the model itself (top left). The oblique line
in the last plot (bottom right) shows the final affine
decision.

visualization, while real models take advantage
of representations in high dimensions, which, in
particular, facilitates the optimization by provid-
ing many degrees of freedom.

Empirical evidence accumulated over twenty


years demonstrates that state-of-the-art perfor-
mance across application domains necessitates
models with tens of layers, such as residual net-
works (see § 5.2) or Transformers (see § 5.3).

Theoretical results show that, for a fixed com-


putational budget or number of parameters, in-
creasing the depth leads to a greater complexity
of the resulting mapping [Telgarsky, 2016].

3.6 Training protocols
Training a deep network requires defining a pro-
tocol to make the most of computation and data,
and to ensure that performance will be good on
new data.

As we saw in § 1.3, the performance on the train-


ing samples may be misleading, so in the sim-
plest setup one needs at least two sets of samples:
one is a training set, used to optimize the model
parameters, and the other is a test set, to evaluate
the performance of the trained model.

Additionally, there are usually meta-parameters


to adapt, in particular, those related to the model
architecture, the learning rate, and the regular-
ization terms in the loss. In that case, one needs
a validation set that is disjoint from both the
training and test sets to assess the best configu-
ration.

The full training is usually decomposed into


epochs, each of which corresponds to going
through all the training examples once. The
usual dynamic of the losses is that the training
loss decreases as long as the optimization runs,
while the validation loss may reach a minimum
after a certain number of epochs and then start
to increase, reflecting an overfitting regime, as

Figure 3.5: As training progresses, a model’s perfor-


mance is usually monitored through losses. The train-
ing loss is the one driving the optimization process and
goes down, while the validation loss is estimated on
another set of examples to assess the overfitting of
the model. Overfitting appears when the model starts
to take into account random structures specific to the
training set at hand, resulting in the validation loss
starting to increase.

introduced in § 1.3 and illustrated in Figure 3.5.

Paradoxically, although they should suffer from


severe overfitting due to their capacity, large
models usually continue to improve as training
progresses. This may be due to the inductive
bias of the model becoming the main driver of
optimization when performance is near perfect

on the training set [Belkin et al., 2018].

An important design choice is the learning rate


schedule during training, that is, the specifica-
tion of the value of the learning rate at each iter-
ation of the gradient descent. The general policy
is that the learning rate should be initially large
to avoid having the optimization being trapped
in a bad local minimum early, and that it should
get smaller so that the optimized parameter val-
ues do not bounce around and reach a good min-
imum in a narrow valley of the loss landscape.
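
Deep learning frameworks implement such schedules directly; a minimal PyTorch sketch, using a step decay as one illustrative choice:

```python
import torch
from torch import nn

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# Divide the learning rate by 10 every 30 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(90):
    # ... one epoch of training, calling optimizer.step() on each batch ...
    scheduler.step()   # update the learning rate at the end of the epoch
```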

The training of extremely large models may take


months on thousands of powerful GPUs and
have a financial cost of several million dollars. At
this scale, the training may involve many man-
ual interventions, informed, in particular, by the
dynamics of the loss evolution.

3.7 The benefits of scale
There is an accumulation of empirical results
showing that performance, for instance, esti-
mated through the loss on test data, improves
with the amount of data according to remarkable
scaling laws, as long as the model size increases
correspondingly [Kaplan et al., 2020] (see Figure
3.6).

Benefiting from these scaling laws in the multi-


billion sample regime is possible in part thanks
to the structural plasticity of models, which al-
lows them to be scaled up arbitrarily, as we will
see, by increasing the number of layers or fea-
ture dimensions. But it is also made possible
by the distributed nature of the computation
implemented by these models and by stochas-
tic gradient descent, which requires only a tiny
fraction of the data at a time and can operate
with datasets whose size is orders of magnitude
greater than that of the computing device’s mem-
ory. This has resulted in an exponential growth
of the models, as illustrated in Figure 3.7.

Typical vision models have 10–100 million train-


able parameters and require 10¹⁸–10¹⁹ FLOPs
for training [He et al., 2015; Sevilla et al., 2022].
Language models have from 100 million to hun-


Figure 3.6: Test loss of a language model vs. the amount


of computation in petaflop/s-day, the dataset size in
tokens, that is fragments of words, and the model size
in parameters [Kaplan et al., 2020].

Dataset        Year   Nb. of images   Size
ImageNet       2012   1.2M            150Gb
Cityscape      2016   25K             60Gb
LAION-5B       2022   5.8B            240Tb

Dataset        Year   Nb. of books    Size
WMT-18-de-en   2018   14M             8Gb
The Pile       2020   1.6B            825Gb
OSCAR          2020   12B             6Tb

Table 3.1: Some examples of publicly available datasets.


The equivalent number of books is an indicative esti-
mate for 250 pages of 2000 characters per book.

dreds of billions of trainable parameters and re-


quire 10²⁰–10²³ FLOPs for training [Devlin et al.,
2018; Brown et al., 2020; Chowdhery et al., 2022;
Sevilla et al., 2022]. These latter models require
machines with multiple high-end GPUs.

Training these large models is impossible using


datasets with a detailed ground-truth costly to
produce, which can only be of moderate size.
Instead, it is done with datasets automatically
produced by combining data available on the
internet with minimal curation, if any. These
sets may combine multiple modalities, such as
text and images from web pages, or sound and
images from videos, which can be used for large-
scale supervised training.

(Plot: training cost in FLOP, on a logarithmic scale from about 10¹⁸ to beyond 10²⁴, versus year (2015–2020) for AlexNet, GoogLeNet, VGG16, ResNet, Transformer, GPT, BERT, GPT-2, AlphaGo, CLIP-ViT, ViT, AlphaZero, Whisper, GPT-3, LaMDA, and PaLM, with dashed energy lines at 1 KWh, 1 MWh, and 1 GWh.)
Figure 3.7: Training costs in number of FLOP of some
landmark models [Sevilla et al., 2023]. The colors in-
dicate the domains of application: Computer Vision
(blue), Natural Language Processing (red), or other
(black). The dashed lines correspond to the energy con-
sumption using A100s SXM in 16-bit precision. For
reference, the total electricity consumption in the US in
2021 was 3920TWh.

The most impressive current successes of artifi-
cial intelligence rely on the so-called Large Lan-
guage Models (LLMs), which we will see in § 5.3
and § 7.1, trained on extremely large text datasets
(see Table 3.1).

PART II

Deep models

Chapter 4

Model components

A deep model is nothing more than a complex


tensorial computation that can ultimately be
decomposed into standard mathematical oper-
ations from linear algebra and analysis. Over
the years, the field has developed a large collec-
tion of high-level modules with a clear semantic,
and complex models combining these modules,
which have proven to be effective in specific ap-
plication domains.

Empirical evidence and theoretical results show


that greater performance is achieved with deeper
architectures, that is, long compositions of map-
pings. As we saw in § 3.4, training such
a model is challenging due to the vanishing gra-
dient, and multiple important technical contri-
butions have mitigated this issue.

4.1 The notion of layer
We call layers standard complex compounded
tensor operations that have been designed and
empirically identified as being generic and effi-
cient. They often incorporate trainable param-
eters and correspond to a convenient level of
granularity for designing and describing large
deep models. The term is inherited from sim-
ple multi-layer neural networks, even though
modern models may take the form of a complex
graph of such modules, incorporating multiple
parallel pathways.
(Example diagram: an input X of size 32 × 32 feeds an operator f replicated ×K, followed by an operator g with meta-parameter n = 4, producing an output Y of size 4 × 4.)

In the following pages, I try to stick to the con-


vention for model depiction illustrated above:

• operators / layers are depicted as boxes,

• darker coloring indicates that they embed trainable parameters,

• non-default valued meta-parameters are added in blue on their right,

• a dashed outer frame with a multiplicative factor indicates that a group of layers is replicated in series, each with its own set of trainable parameters, if any, and

• in some cases, the dimension of their output is specified on the right when it differs from their input.

Additionally, layers that have a complex internal structure are depicted with a greater height.

4.2 Linear layers
The most important modules in terms of compu-
tation and number of parameters are the Linear
layers. They benefit from decades of research
and engineering in algorithmic and chip design
for matrix operations.

Note that the term “linear” in deep learning gen-


erally refers improperly to an affine operation,
which is the sum of a linear expression and a
constant bias.

Fully connected layers


The most basic linear layer is the fully connected
layer, parameterized by a trainable weight ma-
trix W of size D′ ×D and bias vector b of dimen-
sion D′. It implements an affine transformation
generalized to arbitrary tensor shapes, where
the supplementary dimensions are interpreted
as vector indexes. Formally, given an input X
of dimension D1 × · · · × DK × D, it computes an
output Y of dimension D1 × · · · × DK × D ′ with

$$\forall d_1, \ldots, d_K,\quad Y[d_1, \ldots, d_K] = W\, X[d_1, \ldots, d_K] + b.$$
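
In PyTorch this corresponds to nn.Linear, which applies the same affine mapping to the last dimension of an input of arbitrary shape; a minimal check of the shape convention above:

```python
import torch
from torch import nn

fc = nn.Linear(in_features=64, out_features=32)   # W of size 32 x 64, bias of size 32

X = torch.randn(5, 10, 64)     # D1 x D2 x D with D1 = 5, D2 = 10, D = 64
Y = fc(X)                      # same leading dimensions, last one becomes D' = 32
print(Y.shape)                 # torch.Size([5, 10, 32])
```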

While at first sight such an affine operation


seems limited to geometric transformations such
as rotations, symmetries, and translations, it can
in fact do more than that. In particular, projec-
tions for dimension reduction or signal filtering,
but also, from the perspective of the dot product
being a measure of similarity, a matrix-vector
product can be interpreted as computing match-
ing scores between the queries, as encoded by
the input vectors, and keys, as encoded by the
matrix rows.

As we saw in § 3.3, the gradient descent starts


with the parameters’ random initialization. If
this is done too naively, as seen in § 3.4, the net-
work may suffer from exploding or vanishing
activations and gradients [Glorot and Bengio,
2010]. Deep learning frameworks implement ini-
tialization methods that in particular scale the
random parameters according to the dimension
of the input to keep the variance of the activa-
tions constant and prevent pathological behav-
iors.

Convolutional layers
A linear layer can take as input an arbitrarily-
shaped tensor by reshaping it into a vector, as
long as it has the correct number of coefficients.
However, such a layer is poorly adapted to deal-

Figure 4.1: A 1D convolution (left) takes as input a D × T tensor X, applies the same affine mapping ϕ(·;w) to every sub-tensor of shape D × K, and stores the resulting D′ × 1 tensors into Y. A 1D transposed convolution (right) takes as input a D × T tensor, applies the same affine mapping ψ(·;w) to every sub-tensor of shape D × 1, and sums the shifted resulting D′ × K tensors. Both can process inputs of different sizes.

Figure 4.2: A 2D convolution (left) takes as input a
D × H × W tensor X, applies the same affine mapping
ϕ(·;w) to every sub-tensor of shape D × K × L, and
stores the resulting D′ × 1 × 1 tensors into Y . A 2D
transposed convolution (right) takes as input a D ×
H × W tensor, applies the same affine mapping ψ(·;w)
to every D × 1 × 1 sub-tensor, and sums the shifted
resulting D′ × K × L tensors into Y .

ing with large tensors, since the number of pa-


rameters and number of operations are propor-
tional to the product of the input and output
dimensions. For instance, to process an RGB
image of size 256×256 as input and compute a
result of the same size, it would require approxi-
mately 4 × 1010 parameters and multiplications.

Besides these practical issues, most of the high-


dimension signals are strongly structured. For
instance, images exhibit short-term correlations

Figure 4.3: Besides its kernel size and number of input
/ output channels, a convolution admits three meta-
parameters: the stride s (left) modulates the step size
when going through the input tensor, the padding p
(top right) specifies how many zero entries are added
around the input tensor before processing it, and the
dilation d (bottom right) parameterizes the index count
between coefficients of the filter.

and statistical stationarity with respect to trans-
lation, scaling, and certain symmetries. This
is not reflected in the inductive bias of a fully
connected layer, which completely ignores the
signal structure.

To leverage these regularities, the tool of choice


is convolutional layers, which are also affine, but
process time-series or 2D signals locally, with
the same operator everywhere.

A 1D convolution is mainly defined by three meta-parameters: its kernel size K, its number of input channels D, its number of output channels D′, and by the trainable parameters w of an affine mapping ϕ(·;w) : ℝ^(D×K) → ℝ^(D′×1).

It can process any tensor X of size D × T with T ≥ K, and applies ϕ(·;w) to every sub-tensor of size D × K of X, storing the results in a tensor Y of size D′ × (T − K + 1), as pictured in Figure 4.1 (left).

A 2D convolution is similar but has a K × L kernel and takes as input a D × H × W tensor (see Figure 4.2, left).

Both operators have for trainable parameters


those of ϕ that can be envisioned as D′ filters
of size D × K or D × K × L respectively, and a
bias vector of dimension D′.

Such a layer is equivariant to translation, mean-


ing that if the input signal is translated, the out-
put is similarly transformed. This property re-
sults in a desirable inductive bias when dealing
with a signal whose distribution is invariant to
translation.

They also admit three additional meta-parame-


ters, illustrated on Figure 4.3:

• The padding specifies how many zero coeffi-


cients should be added around the input tensor
before processing it, particularly to maintain the
tensor size when the kernel size is greater than
one. Its default value is 0.

• The stride specifies the step size used when go-


ing through the input, allowing one to reduce the
output size geometrically by using large steps.
Its default value is 1.

• The dilation specifies the index count between


the filter coefficients of the local affine opera-
tor. Its default value is 1, and greater values
correspond to inserting zeros between the coef-
ficients, which increases the filter / kernel size
while keeping the number of trainable parame-
ters unchanged.

Figure 4.4: Given an activation in a series of convolu-
tion layers, here in red, its receptive field is the area in
the input signal, in blue, that modulates its value. Each
intermediate convolutional layer increases the width
and height of that area by roughly those of the kernel.

Except for the number of channels, a convo-


lution’s output is usually smaller than its in-
put. In the 1D case without padding or di-
lation, if the input is of size T , the kernel of
size K, and the stride is S, the output is of size
T ′ = (T − K)/S + 1.
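
A quick check of this formula with PyTorch's Conv1d, using illustrative sizes (K = 5, S = 2, no padding or dilation):

```python
import torch
from torch import nn

conv = nn.Conv1d(in_channels=3, out_channels=8, kernel_size=5, stride=2)

X = torch.randn(1, 3, 101)     # a batch of one D x T tensor with D = 3, T = 101
Y = conv(X)
print(Y.shape)                 # torch.Size([1, 8, 49]) since (101 - 5) / 2 + 1 = 49
```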

Given an activation computed by a convolutional


layer, or the vector of values for all the channels
at a certain location, the portion of the input
signal that it depends on is called its receptive
field (see Figure 4.4). One of the H × W sub-tensors corresponding to a single channel of a D × H × W activation tensor is called an activation map.

Convolutions are used to recombine information,


generally to reduce the spatial size of the rep-
resentation, in exchange for a greater number
of channels, which translates into a richer local
representation. They can implement differential
operators such as edge-detectors, or template
matching mechanisms. A succession of such lay-
ers can also be envisioned as a compositional and
hierarchical representation [Zeiler and Fergus,
2014], or as a diffusion process in which infor-
mation can be transported by half the kernel size
when passing through a layer.

A converse operation is the transposed convo-


lution that also consists of a localized affine op-
erator, defined by similar meta and trainable pa-
rameters as the convolution, but which, for in-
stance, in the 1D case, applies an affine mapping

ψ(·;w) : ℝ^(D×1) → ℝ^(D′×K), to every D × 1 sub-tensor of the input, and sums the shifted D′ × K
resulting tensors to compute its output. Such an
operator increases the size of the signal and can
be understood intuitively as a synthesis process
(see Figure 4.1, right, and Figure 4.2, right).

A series of convolutional layers is the usual ar-

chitecture for mapping a large-dimension signal,
such as an image or a sound sample, to a low-
dimension tensor. This can be used, for instance,
to get class scores for classification or a com-
pressed representation. Transposed convolution
layers are used the opposite way to build a large-
dimension signal from a compressed representa-
tion, either to assess that the compressed repre-
sentation contains enough information to recon-
struct the signal or for synthesis, as it is easier
to learn a density model over a low-dimension
representation. We will revisit this in § 5.2.

4.3 Activation functions
If a network were combining only linear com-
ponents, it would itself be a linear operator,
so it is essential to have non-linear operations.
These are implemented in particular with activa-
tion functions, which are layers that transform
each component of the input tensor individually
through a mapping, resulting in a tensor of the
same shape.

There are many different activation functions,


but the most used is the Rectified Linear Unit
(ReLU) [Glorot et al., 2011], which sets nega-
tive values to zero and keeps positive values un-
changed (see Figure 4.5, top right):
$$\mathrm{relu}(x) = \begin{cases} 0 & \text{if } x < 0, \\ x & \text{otherwise.} \end{cases}$$

Given that the core training strategy of deep-


learning relies on the gradient, it may seem prob-
lematic to have a mapping that is not differen-
tiable at zero and constant on half the real line.
However, the main property gradient descent
requires is that the gradient is informative on
average. Parameter initialization and data nor-
malization make half of the activations positive

Figure 4.5: Activation functions: Tanh (top left), ReLU (top right), Leaky ReLU (bottom left), and GELU (bottom right).

when the training starts, ensuring that this is the case.

Before the generalization of ReLU, the standard


activation function was the hyperbolic tangent
(Tanh, see Figure 4.5, top left) which saturates
exponentially fast on both the negative and pos-
itive sides, aggravating the vanishing gradient.

Other popular activation functions follow the


same idea of keeping positive values unchanged
and squashing the negative values. Leaky ReLU
[Maas et al., 2013] applies a small positive multi-
plying factor to the negative values (see Figure
4.5, bottom left):
$$\mathrm{leakyrelu}(x) = \begin{cases} ax & \text{if } x < 0, \\ x & \text{otherwise.} \end{cases}$$

And GELU [Hendrycks and Gimpel, 2016] is de-


fined using the cumulative distribution function
of the Gaussian distribution, that is:

gelu(x) = xP (Z ≤ x),

where Z ∼ 𝒩(0, 1). It roughly behaves like a smooth ReLU (see Figure 4.5, bottom right).

The choice of an activation function, in partic-


ular among the variants of ReLU, is generally
driven by empirical performance.

4.4 Pooling
A classical strategy to reduce the signal size is to
use a pooling operation that combines multiple
activations into one that ideally summarizes the
information. The most standard operation of this
class is the max pooling layer, which, similarly
to convolution, can operate in 1D and 2D and is
defined by a kernel size.

In its standard form, this layer computes the


maximum activation per channel, over non-
overlapping sub-tensors of spatial size equal to
the kernel size. These values are stored in a re-
sult tensor with the same number of channels
as the input, and whose spatial size is divided
by the kernel size. As with the convolution, this
operator has three meta-parameters: padding,
stride, and dilation, with the stride being equal
to the kernel size by default. A smaller stride
results in a larger resulting tensor, following the
same formula as for convolutions (see § 4.2).
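
A minimal PyTorch illustration with a 1D max pooling of kernel size 2, whose stride defaults to the kernel size:

```python
import torch
from torch import nn

pool = nn.MaxPool1d(kernel_size=2)   # stride defaults to the kernel size

X = torch.randn(1, 4, 10)            # one D x T tensor with D = 4 channels, T = 10
Y = pool(X)
print(Y.shape)                       # torch.Size([1, 4, 5]): spatial size divided by 2
```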

The max operation can be intuitively interpreted


as a logical disjunction, or, when it follows a
series of convolutional layers that compute lo-
cal scores for the presence of parts, as a way
of encoding that at least one instance of a part
is present. It loses precise location, making it


Figure 4.6: A 1D max pooling takes as input a D × T tensor X, computes the max over non-overlapping 1 × L sub-tensors (in blue) and stores the resulting values (in red) in a D × (T/L) tensor Y.

invariant to local deformations.

A standard alternative is the average pooling


layer that computes the average instead of the
maximum over the sub-tensors. This is a linear
operation, whereas max pooling is not.
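A minimal sketch of both operations on a 1D signal, using the functional forms of PyTorch (the tensor sizes are arbitrary illustrations); with the default stride equal to the kernel size, the windows do not overlap:

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 3, 8)              # a batch of 2 samples, 3 channels, length 8

# Non-overlapping windows of size 2: the spatial size is divided by 2.
y_max = F.max_pool1d(x, kernel_size=2)
y_avg = F.avg_pool1d(x, kernel_size=2)

print(y_max.shape, y_avg.shape)       # both are 2 x 3 x 4
```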

4.5 Dropout
Some layers have been designed to explicitly
facilitate training or improve the learned repre-
sentations.

One of the main contributions of that sort was


dropout [Srivastava et al., 2014]. Such a layer
has no trainable parameters, but one meta-
parameter, p, and takes as input a tensor of arbi-
trary shape.

It is usually switched off during testing, in which


case its output is equal to its input. When it is ac-
tive, it has a probability p of setting to zero each
activation of the input tensor independently, and
it re-scales all the activations by a factor of 1/(1 − p)
to maintain the expected value unchanged (see
Figure 4.7).
The motivation behind dropout is to favor
meaningful individual activation and discourage
group representation. Since the probability that
a group of k activations remains intact through a dropout layer is (1 − p)^k, joint representations
become unreliable, making the training proce-
dure avoid them. It can also be seen as a noise
injection that makes the training more robust.
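A minimal sketch of this train-time behavior (it mirrors what a standard dropout layer does; the input tensor is an arbitrary illustration):

```python
import torch

def dropout(x, p, train=True):
    if not train:
        return x                                  # identity at test time
    keep = (torch.rand_like(x) > p).float()       # zero each activation w.p. p
    return x * keep / (1 - p)                     # re-scale to keep the expected value

x = torch.ones(4, 5)
print(dropout(x, p=0.5))
```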

When dealing with images and 2D tensors, the


Figure 4.7: Dropout can process a tensor of arbitrary
shape. During training (left), it sets activations at ran-
dom to zero with probability p and applies a multiply-
ing factor to keep the expected values unchanged. Dur-
ing test (right), it keeps all the activations unchanged.

short-term correlation of the signals and the re-


sulting redundancy negate the effect of dropout,
since activations set to zero can be inferred from
their neighbors. Hence, dropout for 2D tensors
sets entire channels to zero instead of individual
activations (see Figure 4.8).

Although dropout is generally used to improve


training and is inactive during inference, it can
be used in certain setups as a randomization
strategy, for instance, to estimate empirically
confidence scores [Gal and Ghahramani, 2015].

Figure 4.8: 2D signals such as images generally exhibit
strong short-term correlation and individual activa-
tions can be inferred from their neighbors. This redun-
dancy nullifies the effect of the standard unstructured
dropout, so the usual dropout layer for 2D tensors drops
entire channels instead of individual values.

4.6 Normalizing layers
An important class of operators to facilitate the
training of deep architectures are the normaliz-
ing layers, which force the empirical mean and
variance of groups of activations.

The main layer in that family is batch normal-


ization [Ioffe and Szegedy, 2015], which is the
only standard layer to process batches instead
of individual samples. It is parameterized by a
meta-parameter D and two series of trainable
scalar parameters β 1 ,...,β D and γ 1 ,...,γ D .

Given a batch of B samples x 1 ,...,x B of dimen-


sion D, it first computes for each of the D com-
ponents an empirical mean m̂ d and variance v̂d
across the batch:
$$\hat m_d = \frac{1}{B}\sum_{b=1}^{B} x_{b,d}, \qquad \hat v_d = \frac{1}{B}\sum_{b=1}^{B} \left(x_{b,d} - \hat m_d\right)^2,$$

from which it computes for every component


xb,d a normalized value zb,d, with empirical
mean 0 and variance 1, and from it the final
result value yb,d with mean βd and standard de-


Figure 4.9: Batch normalization (left) normalizes in


mean and variance each group of activations for a
given d, and scales/shifts that same group of activations
with learned parameters for each d. Layer normaliza-
tion (right) normalizes each group of activations for a
certain b, and scales/shifts each group of activations
for a given d,h,w with learned parameters indexed by
the same.

viation γd:

$$\forall b,\quad z_{b,d} = \frac{x_{b,d} - \hat m_d}{\sqrt{\hat v_d + \epsilon}}, \qquad y_{b,d} = \gamma_d\, z_{b,d} + \beta_d.$$

Because this normalization is defined across a


batch, it is done only during training. During
testing, the layer transforms individual samples
according to the m̂ d s and v̂d s estimated with a
moving average over the full training set, which
boils down to a fixed affine transformation per
component.

The motivation behind batch normalization was


to avoid that a change in scaling in an early layer
of the network during training impacts all the
layers that follow, which then have to adapt their
trainable parameters accordingly. Although the
actual mode of action may be more complicated
than this initial motivation, this layer consider-
ably facilitates the training of deep models.

In the case of 2D tensors, to follow the prin-


ciple of convolutional layers of processing all
locations similarly, the normalization is done
per-channel across all 2D positions, and β and
γ remain vectors of dimension D so that the
scaling/shift does not depend on the 2D posi-
tion. Hence, if the tensor to be processed is
of shape B × D × H × W, the layer computes (m̂d, v̂d), for d = 1,...,D, from the corresponding B × H × W slice, normalizes it accordingly,
and finally scales and shifts its components with
the trainable parameters βd and γd.

So, given a B×D tensor, batch normalization


normalizes it across b and scales/shifts it ac-
cording to d, which can be implemented as a
component-wise product by γ and a sum with
β. Given a B ×D × H ×W tensor, it normal-
izes across b,h,w and scales/shifts according to
d (see Figure 4.9, left).

This can be generalized depending on these di-


mensions. For instance, layer normalization [Ba
et al., 2016] computes moments and normalizes
across all components of individual samples, and
scales and shifts components individually (see
Figure 4.9, right). So, given a B× D tensor, it
normalizes across d and scales/shifts also accord-
ing to the same. Given a B× D× H × W tensor,
it normalizes it across d,h,w and scales/shifts
according to the same.

Contrary to batch normalization, since it pro-


cesses samples individually, layer normalization
behaves the same during training and testing.
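A minimal sketch of the two normalizations on a B × D tensor, following the formulas above (train-time statistics only; the moving averages used by batch normalization at test time are omitted):

```python
import torch

def batchnorm(x, gamma, beta, eps=1e-5):
    # x is B x D: normalize each component d across the batch ...
    m = x.mean(dim=0, keepdim=True)
    v = x.var(dim=0, unbiased=False, keepdim=True)
    # ... then scale/shift per component with the trainable gamma, beta.
    return gamma * (x - m) / torch.sqrt(v + eps) + beta

def layernorm(x, gamma, beta, eps=1e-5):
    # Normalize each sample b across its D components.
    m = x.mean(dim=1, keepdim=True)
    v = x.var(dim=1, unbiased=False, keepdim=True)
    return gamma * (x - m) / torch.sqrt(v + eps) + beta

x = torch.randn(8, 4)
gamma, beta = torch.ones(4), torch.zeros(4)
print(batchnorm(x, gamma, beta).shape, layernorm(x, gamma, beta).shape)
```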

4.7 Skip connections
Another technique that mitigates the vanishing
gradient and allows the training of deep archi-
tectures are skip connections [Long et al., 2014;
Ronneberger et al., 2015]. They are not layers
per se, but an architectural design in which out-
puts of some layers are transported as-is to other
layers further in the model, bypassing process-
ing in between. This unmodified signal can be
concatenated or added to the input of the layer
the connection branches into (see Figure 4.10). A
particular type of skip connections are the resid-
ual connections which combine the signal with
a sum, and usually skip only a few layers (see
Figure 4.10, right).

The most desirable property of this design is to


ensure that, even in the case of gradient-killing
processing at a certain stage, the gradient will
still propagate through the skip connections.
Residual connections, in particular, allow for the
building of deep models with up to several hun-
dred layers, and key models, such as the residual
networks [He et al., 2015] in computer vision
(see § 5.2), and the Transformers [Vaswani et al.,
2017] in natural language processing (see § 5.3),
are entirely composed of blocks of layers with
residual connections.

Figure 4.10: Skip connections, highlighted in red on this
figure, transport the signal unchanged across multiple
layers. Some architectures (center) that downscale and
re-upscale the representation size to operate at multiple
scales, have skip connections to feed outputs from the
early parts of the network to later layers operating at
the same scales [Long et al., 2014; Ronneberger et al.,
2015]. The residual connections (right) are a special
type of skip connections that sum the original signal
to the transformed one, and usually bypass at most a
handful of layers [He et al., 2015].

Their role can also be to facilitate multi-scale rea-
soning in models that reduce the signal size be-
fore re-expanding it, by connecting layers with
compatible sizes, for instance for semantic seg-
mentation (see § 6.4). In the case of residual
connections, they may also facilitate learning
by simplifying the task to finding a differential
improvement instead of a full update.
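A minimal sketch of a residual connection wrapping a small two-layer mapping (the dimensions and the mapping itself are arbitrary illustrations):

```python
import torch
from torch import nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # The processing f whose output is added to its unmodified input.
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        # The gradient can always propagate through the identity branch.
        return x + self.f(x)

x = torch.randn(2, 16)
print(ResidualBlock(16)(x).shape)
```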

4.8 Attention layers
In many applications, there is a need for an op-
eration able to combine local information at lo-
cations far apart in a tensor. For instance, this
could be distant details for coherent and realistic
image synthesis, or words at different positions
in a paragraph to make a grammatical or seman-
tic decision in natural language processing.

Fully connected layers cannot process large-


dimension signals, nor signals of variable size,
and convolutional layers are not able to prop-
agate information quickly. Strategies that ag-
gregate the results of convolutions, for instance,
by averaging them over large spatial areas, suf-
fer from mixing multiple signals into a limited
number of dimensions.

Attention layers specifically address this prob-


lem by computing an attention score for each
component of the resulting tensor to each com-
ponent of the input tensor, without locality con-
straints, and averaging the features across the
full tensor accordingly [Vaswani et al., 2017].

Even though they are substantially more com-


plicated than other layers, they have become a
standard element in many recent models. They
are, in particular, the key building block of Trans-


Figure 4.11: The attention operator can be inter-
preted as matching every query Qq with all the
keys K 1 ,...,K N KV to get normalized attention scores
Aq,1,...,Aq,NKV (left, and Equation 4.1), and then av-
eraging the values V 1 ,...,V N KV with these scores to
compute the resulting Yq (right, and Equation 4.2).

formers, the dominant architecture for Large


Language Models. See § 5.3 and § 7.1.

Attention operator
Given

• a tensor Q of queries of size N Q × DQK ,


• a tensor K of keys of size N KV × DQK , and
• a tensor V of values of size N KV × DV ,
the attention operator computes a tensor

Y = att(Q,K,V )

of dimension N Q × DV . To do so, it first com-


putes for every query index q and every key in-
dex k an attention score Aq,k as the softargmax
of the dot products between the query Qq and
the keys:

$$A_{q,k} = \frac{\exp\!\left(\tfrac{1}{\sqrt{D_{QK}}}\, Q_q \cdot K_k\right)}{\sum_l \exp\!\left(\tfrac{1}{\sqrt{D_{QK}}}\, Q_q \cdot K_l\right)}, \qquad (4.1)$$

where the scaling factor 1/√DQK keeps the range of values roughly unchanged even for large DQK.

Then a retrieved value is computed for each


query by averaging the values according to the
attention scores (see Figure 4.11):
$$Y_q = \sum_k A_{q,k} V_k. \qquad (4.2)$$

So if a query Qn matches one key Km far more


than all the others, the corresponding attention
score An,m will be close to one, and the retrieved
value Yn will be the value Vm associated to that
key. But, if it matches several keys equally, then
Yn will be the average of the associated values.

This can be implemented as

$$\mathrm{att}(Q,K,V) = \underbrace{\mathrm{softargmax}\!\left(\frac{QK^\top}{\sqrt{D_{QK}}}\right)}_{A}\, V.$$

Figure 4.12: The attention operator Y = att(Q,K,V )
computes first an attention matrix A as the per-query
softargmax of QK⊤, which may be masked by a con-
stant matrix M before the normalization. This atten-
tion matrix goes through a dropout layer before being
multiplied by V to get the resulting Y . This operator
can be made causal by taking M full of 1s below the
diagonal and zeros above.

This operator is usually extended in two ways,
as depicted in Figure 4.12. First, the attention
matrix can be masked by multiplying it before
the softargmax normalization by a Boolean ma-
trix M . This allows, for instance, to make the
operator causal by taking M full of 1s below the
diagonal and zero above, preventing Yq from de-
pending on keys and values of indices k greater
than q. Second, the attention matrix is processed
by a dropout layer (see § 4.5) before being multi-
plied by V , providing the usual benefits during
training.
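A minimal sketch of this operator, following Equations 4.1 and 4.2 and including the optional mask (the dropout on the attention matrix is left out for clarity):

```python
import math
import torch

def att(Q, K, V, mask=None):
    # Q: NQ x DQK, K: NKV x DQK, V: NKV x DV.
    A = Q @ K.t() / math.sqrt(Q.size(-1))
    if mask is not None:
        # Masked-out entries get -inf, hence a zero attention score.
        A = A.masked_fill(mask == 0, float("-inf"))
    A = torch.softmax(A, dim=-1)          # per-query softargmax
    return A @ V                          # NQ x DV

Q, K, V = torch.randn(5, 8), torch.randn(7, 8), torch.randn(7, 16)
print(att(Q, K, V).shape)                 # torch.Size([5, 16])
```

With NQ = NKV, passing a lower-triangular matrix of ones as mask makes the operator causal, as described above.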

Multi-head Attention Layer


This parameterless attention operator is the key
element in the Multi-Head Attention layer de-
picted in Figure 4.13. The structure of this layer
is defined by several meta-parameters: a number
H of heads, and the shapes of three series of H
trainable weight matrices

• W Q of size H × D × DQK ,
• W K of size H × D × DQK , and
• W V of size H × D × DV ,
to compute respectively the queries, the keys,
and the values from the input, and a final weight
matrix W O of size HD V × D to aggregate the
Figure 4.13: The Multi-head Attention layer applies
for each of its h = 1,...,H heads a parametrized lin-
ear transformation to individual elements of the input
sequences X Q ,X K ,X V to get sequences Q,K,V that
are processed by the attention operator to compute Yh.
These H sequences are concatenated along features,
and individual elements are passed through one last
linear operator to get the final result sequence Y .

per-head results.

It takes as input three sequences

• XQ of size N Q × D,
• XK of size N KV × D, and
• XV of size N KV × D,
from which it computes, for h = 1,...,H,

$$Y_h = \mathrm{att}\!\left(X^Q W^Q_h,\; X^K W^K_h,\; X^V W^V_h\right).$$

These sequences Y 1 ,...,Y H are concatenated


along the feature dimension and each individual
element of the resulting sequence is multiplied
by W O to get the final result:

Y = (Y1 | ··· | YH)W O .

As we will see in § 5.3 and in Figure 5.6, this


layer is used to build two model sub-structures:
self-attention blocks, in which the three input
sequences XQ , XK , and XV are the same, and
cross-attention blocks, where XK and XV are
the same.

It is noteworthy that the attention operator,


and consequently the multi-head attention layer
when there is no masking, is invariant to a per-
mutation of the keys and values, and equivariant
to a permutation of the queries, as it would per-
mute the resulting tensor similarly.

4.9 Token embedding
In many situations, we need to convert discrete
tokens into vectors. This can be done with an em-
bedding layer, which consists of a lookup table
that directly maps integers to vectors.

Such a layer is defined by two meta-parameters:


the number N of possible token values, and the
dimension D of the output vectors, and one train-
able N × D weight matrix M .
Given as input an integer tensor X of dimen-
sion D1 × · · · × DK and values in { 0,...,N − 1}
such a layer returns a real-valued tensor Y of
dimension D1 × · · · × DK × D with

$$\forall d_1,\dots,d_K,\quad Y[d_1,\dots,d_K] = M[X[d_1,\dots,d_K]].$$
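A minimal sketch: the layer is a plain lookup into the rows of M (this is what a standard embedding layer does; the sizes are arbitrary illustrations):

```python
import torch

N, D = 10, 4                               # number of token values, output dimension
M = torch.randn(N, D)                      # trainable N x D weight matrix

X = torch.tensor([[1, 3, 3], [7, 0, 2]])   # integer tensor, values in {0, ..., N-1}
Y = M[X]                                   # Y[d1, d2] = M[X[d1, d2]], shape 2 x 3 x D
print(Y.shape)
```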

4.10 Positional encoding
While the processing of a fully connected layer
is specific to both the positions of the features
in the input tensor and to the positions of the
resulting activations in the output tensor, con-
volutional layers and Multi-Head Attention lay-
ers are oblivious to the absolute position in the
tensor. This is key to their strong invariance and
inductive bias, which is beneficial for dealing
with a stationary signal.

However, this can be an issue in certain situ-


ations where proper processing has to access
the absolute positioning. This is the case, for
instance, for image synthesis, where the statis-
tics of a scene are not totally stationary, or in
natural language processing, where the relative
positions of words strongly modulate the mean-
ing of a sentence.

The standard way of coping with this problem


is to add or concatenate to the feature represen-
tation, at every position, a positional encoding,
which is a feature vector that depends on the po-
sition in the tensor. This positional encoding can
be learned as other layer parameters, or defined
analytically.

For instance, in the original Transformer model,


for a series of vectors of dimension D, Vaswani
et al. [2017] add an encoding of the sequence
index as a series of sines and cosines at various
frequencies:

$$\text{pos-enc}[t,d] = \begin{cases} \sin\!\left(\dfrac{t}{T^{d/D}}\right) & \text{if } d \in 2\mathbb{N},\\[6pt] \cos\!\left(\dfrac{t}{T^{(d-1)/D}}\right) & \text{otherwise,} \end{cases}$$

with T = 10^4.
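A minimal sketch of this encoding for a sequence of length t_max and an even feature dimension D:

```python
import torch

def pos_enc(t_max, D, T=10_000):
    t = torch.arange(t_max, dtype=torch.float).unsqueeze(1)   # t_max x 1
    d = torch.arange(0, D, 2, dtype=torch.float)              # even dimension indices
    angle = t / T ** (d / D)                                  # t_max x D/2
    pe = torch.zeros(t_max, D)
    pe[:, 0::2] = torch.sin(angle)     # even d: sine
    pe[:, 1::2] = torch.cos(angle)     # odd d: cosine at the preceding frequency
    return pe

print(pos_enc(50, 16).shape)           # torch.Size([50, 16])
```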

Chapter 5

Architectures

The field of deep learning has developed over


the years for each application domain multiple
deep architectures that exhibit good trade-offs
with respect to multiple criteria of interest: e.g.
ease of training, accuracy of prediction, memory
footprint, computational cost, scalability.

5.1 Multi-Layer Perceptrons
The simplest deep architecture is the Multi-Layer
Perceptron (MLP), which takes the form of a
succession of fully connected layers separated
by activation functions. See an example in Figure
5.1. For historical reasons, in such a model, the
number of hidden layers refers to the number of
linear layers, excluding the last one.

A key theoretical result is the universal approxi-


mation theorem [Cybenko, 1989] which states
that, if the activation function σ is continuous

Figure 5.1: This multi-layer perceptron takes as input
a one-dimensional tensor of size 50, is composed of
three fully connected layers with outputs of dimensions
respectively 25, 10, and 2, the two first followed by
ReLU layers.

and not polynomial, any continuous function f
can be approximated arbitrarily well uniformly
on a compact domain, which is bounded and
contains its boundary, by a model of the form
l2 ◦σ ◦l1 where l1 and l2 are affine. Such a model
is a MLP with a single hidden layer, and this
result implies that it can approximate anything
of practical value. However, this approximation
holds if the dimension of the first linear layer’s
output can be arbitrarily large.

In spite of their simplicity, MLPs remain an im-


portant tool when the dimension of the signal
to be processed is not too large.
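A minimal sketch of the MLP of Figure 5.1, written as a succession of fully connected layers separated by ReLU activations:

```python
import torch
from torch import nn

# Input dimension 50, hidden layers of sizes 25 and 10, output dimension 2.
mlp = nn.Sequential(
    nn.Linear(50, 25), nn.ReLU(),
    nn.Linear(25, 10), nn.ReLU(),
    nn.Linear(10, 2),
)

x = torch.randn(16, 50)    # a batch of 16 samples
print(mlp(x).shape)        # torch.Size([16, 2])
```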

5.2 Convolutional networks
The standard architecture for processing images
is a convolutional network, or convnet, that com-
bines multiple convolutional layers, either to re-
duce the signal size before it can be processed by
fully connected layers, or to output a 2D signal
also of large size.

LeNet-like
The original LeNet model for image classifica-
tion [LeCun et al., 1998] combines a series of 2D
convolutional layers and max pooling layers that
play the role of feature extractor, with a series of
fully connected layers which act as a MLP and
perform the classification per se (see Figure 5.2).

This architecture was the blueprint for many


models that share its structure and are simply
larger, such as AlexNet [Krizhevsky et al., 2012]
or the VGG family [Simonyan and Zisserman,
2014].

Residual networks
Standard convolutional neural networks that fol-
low the architecture of the LeNet family are not
easily extended to deep architectures and suffer
from the vanishing gradient problem. The resid-
Figure 5.2: Example of a small LeNet-like network for
classifying 28×28 grayscale images of handwritten
digits [LeCun et al., 1998]. Its first half is convolutional,
and alternates convolutional layers per se and max
pooling layers, reducing the signal dimension from
28 ×28 scalars to 256. Its second half processes this
256-dimensional feature vector through a one hidden
layer perceptron to compute 10 logit scores correspond-
ing to the ten possible digits.

Figure 5.3: A residual block.

ual networks, or ResNets, proposed by He et al.


[2015] explicitly address the issue of the vanish-
ing gradient with residual connections (see § 4.7),
which allow hundreds of layers. They have be-
come standard architectures for computer vision
applications, and exist in multiple versions de-
pending on the number of layers. We are going
to look in detail at the architecture of the ResNet-
50 for classification.

As other ResNets, it is composed of a series of

Figure 5.4: A downscaling residual block. It admits a
meta-parameter S, the stride of the first convolution
layer, which modulates the reduction of the tensor size.

residual blocks, each combining several convolu-


tional layers, batch norm layers, and ReLU layers,
wrapped in a residual connection. Such a block
is pictured in Figure 5.3.

A key requirement for high performance with


real images is to propagate a signal with a large
number of channels, to allow for a rich repre-
sentation. However, the parameter count of a

Figure 5.5: Structure of the ResNet-50 [He et al., 2015].

convolutional layer, and its computational cost,
are quadratic with the number of channels. This
residual block mitigates this problem by first re-
ducing the number of channels with a 1×1 con-
volution, then operating spatially with a 3× 3
convolution on this reduced number of chan-
nels, and then upscaling the number of channels,
again with a 1 × 1 convolution.
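A minimal sketch of such a bottleneck residual block (batch normalizations are omitted for brevity, and the channel counts are illustrative):

```python
import torch
from torch import nn

class BottleneckBlock(nn.Module):
    def __init__(self, C, reduction=2):
        super().__init__()
        c = C // reduction
        self.f = nn.Sequential(
            nn.Conv2d(C, c, kernel_size=1), nn.ReLU(),             # reduce channels
            nn.Conv2d(c, c, kernel_size=3, padding=1), nn.ReLU(),  # spatial processing
            nn.Conv2d(c, C, kernel_size=1),                        # expand back
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.f(x))    # residual connection around the block

x = torch.randn(1, 64, 56, 56)
print(BottleneckBlock(64)(x).shape)        # torch.Size([1, 64, 56, 56])
```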

The network reduces the dimensionality of the


signal to finally compute the logits for the clas-
sification. This is done thanks to an architec-
ture composed of several sections, each starting
with a downscaling residual block that halves
the height and width of the signal, and doubles
the number of channels, followed by a series
of residual blocks. Such a downscaling resid-
ual block has a structure similar to a standard
residual block, except that it requires a residual
connection that changes the tensor shape. This
is achieved with a 1×1 convolution with a stride
of two (see Figure 5.4).

The overall structure of the ResNet-50 is pre-


sented in Figure 5.5. It starts with a 7×7 convo-
lutional layer that converts the three-channel in-
put image to a 64-channel image of half the size,
followed by four sections of residual blocks. Sur-
prisingly, in the first section, there is no down-

scaling, only an increase of the number of chan-
nels by a factor of 4. The output of the last resid-
ual block is 2048× 7×7, which is converted to a
vector of dimension 2048 by an average pooling
of kernel size 7 × 7, and then processed through
a fully-connected layer to get the final logits,
here for 1000 classes.

5.3 Attention models
As stated in § 4.8, many applications, particu-
larly from natural language processing, benefit
greatly from models that include attention mech-
anisms. The architecture of choice for such tasks,
which has been instrumental in recent advances
in deep learning, is the Transformer proposed
by Vaswani et al. [2017].

Transformer
The original Transformer, pictured in Figure 5.7,
was designed for sequence-to-sequence transla-
tion. It combines an encoder that processes the
input sequence to get a refined representation,
and an autoregressive decoder that generates
each token of the result sequence, given the en-
coder’s representation of the input sequence and
the output tokens generated so far.

As the residual convolutional networks of § 5.2,


both the encoder and the decoder of the Trans-
former are sequences of compounded blocks
built with residual connections.

• The feed-forward block, pictured at the top of


Figure 5.6 is a one hidden layer MLP, preceded
by a layer normalization. It can update represen-
tations at every position separately.
Figure 5.6: Feed-forward block (top), self-attention
block (bottom left) and cross-attention block (bottom
right). These specific structures proposed by Radford
et al. [2018] differ slightly from the original architec-
ture of Vaswani et al. [2017], in particular by having
the layer normalization first in the residual blocks.


Figure 5.7: Original encoder-decoder Transformer


model for sequence-to-sequence translation [Vaswani
et al., 2017].

• The self-attention block, pictured on the bot-
tom left of Figure 5.6, is a Multi-Head Attention
layer (see § 4.8), that recombines information
globally, allowing any position to collect infor-
mation from any other positions, preceded by
a layer normalization. This block can be made
causal by using an adequate mask in the atten-
tion layer, as described in § 4.8.

• The cross-attention block, pictured on the bot-


tom right of Figure 5.6, is similar except that it
takes as input two sequences, one to compute
the queries and one to compute the keys and
values.

The encoder of the Transformer (see Figure


5.7, bottom), recodes the input sequence of dis-
crete tokens X 1 ,...X T with an embedding layer
(see § 4.9), and adds a positional encoding (see
§ 4.10), before processing it with several self-
attention blocks to generate a refined represen-
tation Z 1 ,...,Z T .

The decoder (see Figure 5.7, top), takes as in-


put the sequence Y 1 ,...,Y S −1 of result tokens
produced so far, similarly recodes them through
an embedding layer, adds a positional encoding,
and processes it through alternating causal self-
attention blocks and cross-attention blocks to


Figure 5.8: GPT model [Radford et al., 2018].

produce the logits predicting the next tokens.


These cross-attention blocks compute their keys
and values from the encoder’s result represen-
tation Z 1 ,...,Z T , which allows the resulting se-
quence to be a function of the original sequence
X 1 ,...,X T .

As we saw in § 3.2, being causal ensures that


such a model can be trained by minimizing the
cross-entropy summed across the full sequence.

Generative Pre-trained Transformer


The Generative Pre-trained Transformer (GPT)
[Radford et al., 2018, 2019], pictured in Figure 5.8
is a pure autoregressive model that consists of a
succession of causal self-attention blocks, hence
a causal version of the original Transformer en-
coder.

This class of models scales extremely well, up


to hundreds of billions of trainable parameters
[Brown et al., 2020]. We will come back to their
use for text generation in § 7.1.

Vision Transformer
Transformers have been put to use for image
classification with the Vision Transformer (ViT)
model [Dosovitskiy et al., 2020] (see Figure 5.9).

It splits the three-channel input image into M


patches of resolution P ×P , which are then flat-
tened to create a sequence of vectors X 1 ,...,X M
of shape M × 3P². This sequence is multiplied by a trainable matrix W E of shape 3P² × D to map it to an M × D sequence, to which is con-
catenated one trainable vector E0. The resulting
(M + 1)× D sequence E 0 ,...,E M is then pro-
cessed through multiple self-attention blocks.
See § 5.3 and Figure 5.6.

The first element Z0 in the resultant sequence,


which corresponds to E0 and is not associated
with any part of the image, is finally processed

Figure 5.9: Vision Transformer model [Dosovitskiy


et al., 2020].

by a two-hidden-layer MLP to get the final C
logits. Such a token, added for a readout of a
class prediction, was introduced by Devlin et al.
[2018] in the BERT model and is referred to as a
CLS token.

PART III

Applications

Chapter 6

Prediction

A first category of applications, such as face


recognition, sentiment analysis, object detection,
or speech recognition, requires predicting an un-
known value from an available signal.

6.1 Image denoising
A direct application of deep models to image
processing is to recover from degradation by
utilizing the redundancy in the statistical struc-
ture of images. The petals of a sunflower in a
grayscale picture can be colored with high confi-
dence, and the texture of a geometric shape such
as a table on a low-light, grainy picture can be
corrected by averaging it over a large area likely
to be uniform.

A denoising autoencoder is a model that takes


a degraded signal X̃ as input and computes an
estimate of the original signal X. For images, it
is a convolutional network that may integrate
skip-connections, in particular to combine repre-
sentations at the same resolution obtained early
and late in the model, as well as attention layers
to facilitate taking into account elements that
are far away from each other.

Such a model is trained by collecting a large num-


ber of clean samples paired with their degraded
inputs. The latter can be captured in degraded
conditions, such as low-light or inadequate fo-
cus, or generated algorithmically, for instance,
by converting the clean sample to grayscale, re-
ducing its size, or aggressively compressing it

with a lossy compression method.

The standard training procedure for denoising


autoencoders uses the MSE loss summed across
all pixels, in which case the model aims at com-
puting the best average clean picture, given the
degraded one, that is E[X | X̃]. This quantity
may be problematic when X is not completely
determined by X̃, in which case some parts
of the generated signal may be an unrealistic,
blurry average.

6.2 Image classification
Image classification is the simplest strategy for
extracting semantics from an image and consists
of predicting a class from a finite, predefined
number of classes, given an input image.

The standard models for this task are convolu-


tional networks, such as ResNets (see § 5.2), and
attention-based models such as ViT (see § 5.3).
These models generate a vector of logits with as
many dimensions as there are classes.

The training procedure simply minimizes the


cross-entropy loss (see § 3.1). Usually, perfor-
mance can be improved with data augmenta-
tion, which consists of modifying the training
samples with hand-designed random transfor-
mations that do not change the semantic content
of the image, such as cropping, scaling, mirror-
ing, or color changes.

6.3 Object detection
A more complex task for image understanding is
object detection, in which the objective is, given
an input image, to predict the classes and posi-
tions of objects of interest.

An object position is formalized as the four co-


ordinates (x1,y1,x2,y2) of a rectangular bound-
ing box, and the ground truth associated with
each training image is a list of such bounding
boxes, each labeled with the class of the object
contained therein.

The standard approach to solve this task, for in-


stance, by the Single Shot Detector (SSD) [Liu
et al., 2015], is to use a convolutional neural
network that produces a sequence of image
representations Zs of size Ds × Hs × Ws, s =
1,...,S, with decreasing spatial resolution Hs × Ws down to 1 × 1 for s = S (see Figure 6.1). Each
of these tensors covers the input image in full, so
the h,w indices correspond to a partitioning of
the image lattice into regular squares that gets
coarser when s increases.

As seen in § 4.2, and illustrated in Figure 4.4,


due to the succession of convolutional layers, a
feature vector (Zs[0,h,w],...,Zs[Ds − 1,h,w])
is a descriptor of an area of the image, called its

Figure 6.1: A convolutional object detector processes the


input image to generate a sequence of representations
of decreasing resolutions. It computes for every h,w, at
every scale s, a pre-defined number of bounding boxes
whose centers are in the image area corresponding to
that cell, and whose sizes are such that they fit in its
receptive field. Each prediction takes the form of the
estimates (x̂1 ,x̂2 ,ŷ1 ,ŷ2 ), represented by the red boxes
above, and a vector of C + 1 logits for the C classes of
interest, and an additional “no object” class.

Figure 6.2: Examples of object detection with the Single-
Shot Detector [Liu et al., 2015].

receptive field, that is larger than this square but
centered on it. This results in a non-ambiguous
matching of any bounding box (x1, x2, y1, y2) to a s,h,w, determined respectively by max(x2 − x1, y2 − y1), (y1 + y2)/2, and (x1 + x2)/2.

Detection is achieved by adding S convolutional


layers, each processing a Zs and computing, for
every tensor indices h,w, the coordinates of a
bounding box and the associated logits. If there
are C object classes, there are C + 1 logits, the
additional one standing for “no object.” Hence,
each additional convolution layer has 4 + C + 1
output channels. The SSD algorithm in particu-
lar generates several bounding boxes per s,h,w,
each dedicated to a hard-coded range of aspect
ratios.
Training sets for object detection are costly to
create, since the labeling with bounding boxes
requires a slow human intervention. To mitigate
this issue, the standard approach is to start with
a convolutional model that has been pre-trained
on a large classification dataset such as VGG-16
for the original SSD, and to replace its final fully-
connected layers with additional convolutional
ones. Surprisingly, models trained for classifica-
tion only learn feature representations that can
be repurposed for object detection, even though

that task involves the regression of geometric
quantities.

During training, every ground-truth bounding


box is associated with its s,h,w, and induces a
loss term composed of a cross-entropy loss for
the logits, and a regression loss such as MSE
for the bounding box coordinates. Every other
s,h,w free of bounding-box match induces a
cross-entropy only penalty to predict the class
“no object”.

6.4 Semantic segmentation
The finest-grain prediction task for image under-
standing is semantic segmentation, which con-
sists of predicting, for each pixel, the class of the
object to which it belongs. This can be achieved
with a standard convolutional neural network
that outputs a convolutional map with as many
channels as classes, carrying the estimated logits
for every pixel.

While a standard residual network, for instance,


can generate a dense output of the same reso-
lution as its input, as for object detection, this
task requires operating at multiple scales. This
is necessary so that any object, or sufficiently
informative sub-part, regardless of its size, is
captured somewhere in the model by the feature
representation at a single tensor position. Hence,
standard architectures for this task downscale
the image with a series of convolutional layers
to increase the receptive field of the activations,
and re-upscale it with a series of transposed con-
volutional layers, or other upscaling methods
such as bilinear interpolation, to make the pre-
diction at high resolution.

However, a strict downscaling-upscaling archi-


tecture does not allow for operating at a fine

Figure 6.3: Semantic segmentation results with the
Pyramid Scene Parsing Network [Zhao et al., 2016].

grain when making the final prediction, since all


the signal has been transmitted through a low-
resolution representation at some point. Models
that apply such downscaling-upscaling serially
mitigate these issues with skip connections from
layers at a certain resolution, before downscal-
ing, to layers at the same resolution, after upscal-
ing [Long et al., 2014; Ronneberger et al., 2015].
Models that do it in parallel, after a convolutional

backbone, concatenate the resulting multi-scale
representation after upscaling, before making
the final per-pixel prediction [Zhao et al., 2016].

Training is achieved with a standard cross-


entropy summed over all the pixels. As for ob-
ject detection, training can start from a network
pre-trained on a large-scale image classification
dataset to compensate for the limited availability
of segmentation ground truth.

6.5 Speech recognition
Speech recognition consists of converting a
sound sample into a sequence of words. There
have been plenty of approaches to this problem
historically, but a conceptually simple and recent
one proposed by Radford et al. [2022] consists of
casting it as a sequence-to-sequence translation
and then solving it with a standard attention-
based Transformer, as described in § 5.3.

Their model first converts the sound signal into a


spectrogram, which is a one-dimensional series
T× D, that encodes at every time step a vector
of energies in D frequency bands. The associ-
ated text is encoded with the BPE tokenizer (see
§ 3.2).

The spectrogram is processed through a few 1D


convolutional layers, and the resulting repre-
sentation is fed into the encoder of the Trans-
former. The decoder directly generates a discrete
sequence of tokens, that correspond to one of
the possible tasks considered during training.
Multiple objectives are considered: transcription
of English or non-English text, translation from
any language to English, or detection of non-
speech sequences, such as background music or
ambient noise.

This approach allows leveraging extremely large
datasets that combine multiple types of sound
sources with diverse ground truths.

It is noteworthy that even though the ultimate


goal of this approach is to produce a transla-
tion as deterministic as possible given the input
signal, it is formally the sampling of a text dis-
tribution conditioned on a sound sample, hence
a synthesis process. The decoder is, in fact, ex-
tremely similar to the generative model of § 7.1.

6.6 Text-image representations
A powerful approach to image understanding
consists of learning consistent image and text
representations, such that an image, or a textual
description of it, would be mapped to the same
feature vector.

The Contrastive Language-Image Pre-training


(CLIP) proposed by Radford et al. [2021] com-
bines an image encoder f , which is a ViT, and
a text encoder g, which is a GPT. See § 5.3 for
both.

To repurpose a GPT as a text encoder, instead of a


standard autoregressive model, they add an “end
of sentence” token to the input sequence, and use
the representation of this token in the last layer
as the embedding. Its dimension is between 512
and 1024, depending on the configuration.

Those two models are trained from scratch using


a dataset of 400 million image-text pairs (ik,tk)
collected from the internet. The training proce-
dure follows the standard mini-batch stochastic
gradient descent approach but relies on a con-
trastive loss. The embeddings are computed for
every image and every text of the N pairs in the
mini-batch, and a cosine similarity measure is
computed not only between text and image em-
beddings from each pair, but also across pairs, re-
sulting in an N × N matrix of similarity scores:

$$l_{m,n} = f(i_m) \cdot g(t_n), \quad m = 1,\dots,N,\; n = 1,\dots,N.$$

The model is trained with cross-entropy so that,


∀n, the values l1,n,...,lN,n interpreted as logit scores predict n, and similarly for ln,1,...,ln,N. This means that ∀n,m s.t. n ≠ m, the similarity ln,n is unambiguously greater than both ln,m and lm,n.
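A minimal sketch of this contrastive objective on one mini-batch of embeddings (the embeddings are random placeholders here, and the temperature scaling used in practice is omitted):

```python
import torch
import torch.nn.functional as F

N, D = 8, 32
f_i = F.normalize(torch.randn(N, D), dim=1)   # image embeddings f(i_1), ..., f(i_N)
g_t = F.normalize(torch.randn(N, D), dim=1)   # text embeddings g(t_1), ..., g(t_N)

l = f_i @ g_t.t()                  # N x N matrix of similarity scores
target = torch.arange(N)           # the matching pair sits on the diagonal

# Cross-entropy over rows (image -> text) and over columns (text -> image).
loss = 0.5 * (F.cross_entropy(l, target) + F.cross_entropy(l.t(), target))
print(loss)
```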

When it has been trained, this model can be used


to do zero-shot prediction, that is, classifying a
signal in the absence of training examples by
defining a series of candidate classes with text
descriptions, and computing the similarity of the
embedding of an image with the embedding of
each of those descriptions (see Figure 6.4).

Additionally, since the textual descriptions are


often detailed, such a model has to capture a
richer representation of images and pick up cues
beyond what is necessary for instance for classifi-
cation. This translates to excellent performance
on challenging datasets such as ImageNet Adver-
sarial [Hendrycks et al., 2019] which was specifi-
cally designed to degrade or erase cues on which
standard predictors rely.
Figure 6.4: The CLIP text-image embedding [Radford
et al., 2021] allows for zero-shot prediction by predicting
which class description embedding is the most consis-
tent with the image embedding.

6.7 Reinforcement learning
Many problems, such as strategy games or
robotic control, can be formalized with a discrete-
time state process St and reward process Rt that
can be modulated by choosing actions At. If
St is Markovian, meaning that it carries alone
as much information about the future as all the
past states until that instant, such an object is a
Markovian Decision Process (MDP).

Given an MDP, the objective is classically to find


a policy π such that At = π(St) maximizes the
expectation of the return, which is an accumu-
lated discounted reward:

$$\mathbb{E}\left[\sum_{t \geq 0} \gamma^t R_t\right],$$

for a discount factor 0 < γ < 1.

This is the standard setup of Reinforcement


Learning (RL), and it can be worked out by intro-
ducing the optimal state-action value function
Q(s,a) which is the expected return if we exe-
cute action a in state s, and then follow the opti-
mal policy. It provides a means to compute the
optimal policy as π(s) = argmaxa Q(s,a), and,
thanks to the Markovian assumption, it verifies

the Bellman equation:
$$Q(s,a) = \mathbb{E}\!\left[\, R_t + \gamma \max_{a'} Q(S_{t+1},a') \,\middle|\, S_t = s, A_t = a \right], \qquad (6.1)$$

from which we can design a procedure to train


a parametric model Q(· , · ; w).
To apply this framework to play classical Atari
video games, Mnih et al. [2015] use for St the con-
catenation of the frame at time t and the three
that precede, so that the Markovian assumption
is reasonable, and use for Q a model dubbed the
Deep Q-Network (DQN), composed of two con-
volutional layers and one fully connected layer
with one output value per action, following the
classical structure of a LeNet (see § 5.2).

Training is achieved by alternatively playing and


recording episodes, and building mini-batches of
tuples (sn, an, rn, s′n) ∼ (St, At, Rt, St+1) taken
across stored episodes and time steps, and mini-
mizing
$$\mathscr{L}(w) = \frac{1}{N}\sum_{n=1}^{N} \left(Q(s_n, a_n; w) - y_n\right)^2 \qquad (6.2)$$

with one iteration of SGD, where yn = rn if this


tuple is the end of the episode, and yn = rn +
γ maxa Q(s′ n ,a;w̄) otherwise.
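A minimal sketch of this loss on a mini-batch of stored tuples (the networks Q and Q_bar, with one output per action, and the tensor layout are assumptions for illustration):

```python
import torch

def dqn_loss(Q, Q_bar, s, a, r, s_next, done, gamma=0.99):
    # Q(s_n, a_n; w): pick the value of the action actually taken.
    q = Q(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # y_n = r_n at the end of an episode, r_n + gamma max_a Q_bar(s'_n, a) otherwise.
        y = r + gamma * Q_bar(s_next).max(dim=1).values * (1 - done)
    return ((q - y) ** 2).mean()
```

Computing the target inside torch.no_grad mirrors the fact that the gradient does not propagate through w̄.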

Figure 6.5: This graph shows the evolution of the state


value V (St) = maxa Q(St,a) during a game of Break-
out. The spikes at time points (1) and (2) correspond to
clearing a brick, at time point (3) it is about to break
through to the top line, and at (4) it does, which ensures
a high future reward [Mnih et al., 2015].

Here w̄ is a constant copy of w, i.e. the gradient


does not propagate through it to w. This is nec-
essary since the target value in Equation 6.1 is
the expectation of yn, while it is yn itself which
is used in Equation 6.2. Fixing w in yn results in
a better approximation of the desirable gradient.

A key issue is the policy used to collect episodes.


Mnih et al. [2015] simply use the ϵ-greedy strat-
egy, which consists of taking an action com-
pletely at random with probability ϵ, and the
optimal action argmaxa Q(s,a) otherwise. In-
jecting a bit of randomness is necessary to favor
exploration.

Training is done with ten million frames corre-


sponding to a bit less than eight days of game-
play. The trained network computes accurate
estimates of the state values (see Figure 6.5), and
reaches human performance on a majority of the
49 games used in the experimental validation.

Chapter 7

Synthesis

A second category of applications distinct from


prediction is synthesis. It consists of fitting a
density model to training samples and providing
means to sample from this model.

7.1 Text generation
The standard approach to text synthesis is to
use an attention-based, autoregressive model. A
very successful model proposed by Radford et al.
[2018], is the GPT which we described in § 5.3.

This architecture has been used to create very


large models, such as OpenAI’s 175-billion-
parameter GPT-3 [Brown et al., 2020]. It is com-
posed of 96 self-attention blocks, each with 96
heads, and processes tokens of dimension 12,288,
with a hidden dimension of 49,512 in the MLPs
of the attention blocks.

When such a model is trained on a very large


dataset, it results in a Large Language Model
(LLM), which exhibits extremely powerful prop-
erties. Besides the syntactic and grammatical
structure of the language, it has to integrate
very diverse knowledge, e.g. to predict the word
following “The capital of Japan is”, “if water is
heated to 100 Celsius degrees it turns into”, or
“because her puppy was sick, Jane was”.

This results in particular in the ability to solve


few-shot prediction, where only a handful of
training examples are available, as illustrated
in Figure 7.1. More surprisingly, when given a
carefully crafted prompt, it can exhibit abilities
I: I love apples, O: positive, I: music is my passion, O:
positive, I: my job is boring, O: negative, I: frozen pizzas
are awesome, O: positive,
I: I love apples, O: positive, I: music is my passion, O:
positive, I: my job is boring, O: negative, I: frozen pizzas
taste like cardboard, O: negative,
I: water boils at 100 degrees, O: physics, I: the square
root of two is irrational, O: mathematics, I: the set of
prime numbers is infinite, O: mathematics, I: gravity is
proportional to the mass, O: physics,
I: water boils at 100 degrees, O: physics, I: the square
root of two is irrational, O: mathematics, I: the set of
prime numbers is infinite, O: mathematics, I: squares
are rectangles, O: mathematics,

Figure 7.1: Examples of few-shot prediction with a 120


million parameter GPT model from Hugging Face. In
each example, the beginning of the sentence was given
as a prompt, and the model generated the part in bold.

for question answering, problem solving, and


chain-of-thought that appear eerily close to high-
level reasoning [Chowdhery et al., 2022; Bubeck
et al., 2023].

Due to these remarkable capabilities, these mod-


els are sometimes called foundation models
[Bommasani et al., 2021].

However, even though it integrates a very large


body of knowledge, such a model may be inad-
equate for practical applications, in particular
when interacting with human users. In many
situations, one needs responses that follow the
statistics of a helpful dialog with an assistant.
This differs from the statistics of available large
training sets, which combine novels, encyclope-
dias, forum messages, and blog posts.

This discrepancy is addressed by fine-tuning


such a language model. The current dominant
strategy is Reinforcement Learning from Human
Feedback (RLHF) [Ouyang et al., 2022], which
consists of creating small labeled training sets by
asking users to either write responses or provide
ratings of generated responses. The former can
be used as-is to fine-tune the language model,
and the latter can be used to train a reward net-
work that predicts the rating and use it as a target
to fine-tune the language model with a standard
Reinforcement Learning approach.

Due to the dramatic increase in the size of ar-


chitectures of language models, training a single
model can cost several million dollars (see Fig-
ure 3.7), and fine-tuning is often the only way to
achieve high performance on a specific task.

7.2 Image generation
Multiple deep methods have been developed to
model and sample from a high-dimensional den-
sity. A powerful approach for image synthesis
relies on inverting a diffusion process.

The principle consists of defining analytically


a process that gradually degrades any sample,
and consequently transforms the complex and
unknown density of the data into a simple and
well-known density such as a normal, and train-
ing a deep architecture to invert this degradation
process [Ho et al., 2020].

Given a fixed T , the diffusion process defines a


probability distribution over series of T + 1 im-
ages as follows: sample x0 uniformly from the
dataset, and then sequentially sample xt+1 ∼ p(xt+1 | xt), t = 0,...,T − 1, where the condi-
tional distribution p is defined analytically and
such that it gradually erases the structure that
was in x0. The setup should degrade the signal
so much that the distribution p(xT ) has a known
analytical form which can be sampled.

For instance, Ho et al. [2020] normalize the data


to have a mean of 0 and a variance of 1, and their
diffusion process consists of adding a bit of white
noise and re-normalizing the variance to 1. This

Figure 7.2: Image synthesis with denoising diffusion


[Ho et al., 2020]. Each sample starts as a white noise
xT (top), and is gradually de-noised by sampling iter-
atively xt−1 | xt ∼ 𝒩(xt + f(xt, t; w), σt).

process exponentially reduces the importance of
x0, and xt’s density can rapidly be approximated
with a normal.

The denoiser f is a deep architecture that


should model and allow sampling from
f(xt−1, xt, t; w) ≃ p(xt−1 | xt). It can be shown, thanks to a variational bound, that if this one-step reverse process is accurate enough, sampling xT ∼ p(xT) and denoising T steps
with f results in x0 that follows p(x0).
Training f can be achieved by generating a large
number of sequences x0^(n),...,xT^(n), picking a tn in each, and maximizing

$$\sum_n \log f\!\left(x^{(n)}_{t_n-1},\, x^{(n)}_{t_n},\, t_n; w\right).$$

Given their diffusion process, Ho et al. [2020]


have a denoising of the form:

$$x_{t-1} \mid x_t \sim \mathcal{N}\!\left(x_t + f(x_t, t; w),\, \sigma_t\right), \qquad (7.1)$$


where σ t is defined analytically.
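A minimal sketch of the resulting sampling loop, assuming a trained denoiser f(x, t) returning the mean shift of Equation 7.1 and a pre-computed noise schedule sigma (both are hypothetical placeholders here):

```python
import torch

def sample(f, sigma, T, shape):
    x = torch.randn(shape)                    # start from white noise x_T
    for t in reversed(range(1, T + 1)):
        mean = x + f(x, t)                    # learned mean of the one-step reverse process
        noise = torch.randn_like(x) if t > 1 else 0.0
        x = mean + sigma[t] * noise           # sample x_{t-1} given x_t
    return x                                  # approximately follows p(x_0)
```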

In practice, such a model initially hallucinates


structures by pure luck in the random noise, and
then gradually builds more elements that emerge
from the noise by reinforcing the most likely
continuation of the image obtained thus far.
This approach can be extended to text-
conditioned synthesis, to generate images
that match a description. For instance, Nichol
et al. [2021] add to the mean of the denoising
distribution of Equation 7.1 a bias that goes in
the direction of increasing the CLIP matching
score (see § 6.6) between the produced image
and the conditioning text description.

The missing bits

For the sake of concision, this volume skips many


important topics, in particular:

Recurrent Neural Networks


Before attention models showed greater perfor-
mance, Recurrent Neural Networks (RNN) were
the standard approach for dealing with temporal
sequences such as text or sound samples. These
architectures possess an internal hidden state
that gets updated each time a component of the
sequence is processed. Their main components
are layers such as LSTM [Hochreiter and Schmid-
huber, 1997] or GRU [Cho et al., 2014].

Training a recurrent architecture amounts to


unfolding it in time, which results in a long
composition of operators. This has historically
prompted the design of key techniques now used
for deep architectures such as rectifiers and gat-
ing, a form of skip connections which are modu-
lated dynamically.

Autoencoder
An autoencoder is a model that maps an input
signal, possibly of high dimension, to a low-
dimension latent representation, and then maps
it back to the original signal, ensuring that infor-
mation has been preserved. We saw it in § 6.1
for denoising, but it can also be used to auto-
matically discover a meaningful low-dimension
parameterization of the data manifold.

The Variational Autoencoder (VAE) proposed by


Kingma and Welling [2013] is a generative model
with a similar structure. It imposes, through
the loss, a pre-defined distribution on the latent
representation. This allows, after training, the
generation of new samples by sampling the la-
tent representation according to this imposed
distribution and then mapping back through the
decoder.

Generative Adversarial Networks


Another approach to density modeling is the
Generative Adversarial Networks (GAN) intro-
duced by Goodfellow et al. [2014]. This method
combines a generator, which takes a random in-

put following a fixed distribution as input and
produces a structured signal such as an image,
and a discriminator, which takes a sample as
input and predicts whether it comes from the
training set or if it was generated by the genera-
tor.

Training optimizes the discriminator to mini-


mize a standard cross-entropy loss, and the gen-
erator to maximize the discriminator’s loss. It
can be shown that, at equilibrium, the gener-
ator produces samples indistinguishable from
real data. In practice, when the gradient flows
through the discriminator to the generator, it
informs the latter about the cues that the dis-
criminator uses that need to be addressed.

Graph Neural Networks


Many applications require processing signals
which are not organized regularly on a grid. For
instance, proteins, 3D meshes, geographic loca-
tions, or social interactions are more naturally
structured as graphs. Standard convolutional
networks or even attention models are poorly
adapted to process such data, and the tool of
choice for such a task is Graph Neural Networks
(GNN) [Scarselli et al., 2009].

These models are composed of layers that com-
pute activations at each vertex by combining
linearly the activations located at its immediate
neighboring vertices. This operation is very sim-
ilar to a standard convolution, except that the
data structure does not reflect any geometrical
information associated with the feature vectors
they carry.

Self-supervised training
As stated in § 7.1, even though they are trained
only to predict the next word, Large Language
Models trained on large unlabeled datasets such
as GPT (see § 5.3) are able to solve various tasks,
such as identifying the grammatical role of a
word, answering questions, or even translating
from one language to another [Radford et al.,
2019].

Such models constitute one category of a larger


class of methods that fall under the name of self-
supervised learning, and try to take advantage
of unlabeled datasets [Balestriero et al., 2023].

The key principle of these methods is to define a


task that does not require labels but necessitates
feature representations which are useful for the
real task of interest, for which a small labeled

dataset exists. In computer vision, for instance,
image features can be optimized so that they are
invariant to data transformations that do not
change the semantic content of the image, while
being statistically uncorrelated [Zbontar et al.,
2021].

In both NLP and computer vision, a powerful


generic strategy is to train a model to recover
parts of the signal that have been masked [Devlin
et al., 2018; Zhou et al., 2021].

Bibliography

J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer


Normalization. CoRR, abs/1607.06450, 2016.
[pdf]. 82

R. Balestriero, M. Ibrahim, V. Sobal, et al. A


Cookbook of Self-Supervised Learning. CoRR,
abs/2304.12210, 2023. [pdf]. 148

A. Baydin, B. Pearlmutter, A. Radul, and


J. Siskind. Automatic differentiation in
machine learning: a survey. CoRR,
abs/1502.05767, 2015. [pdf]. 42

M. Belkin, D. Hsu, S. Ma, and S. Mandal. Rec-


onciling modern machine learning and the
bias-variance trade-off. CoRR, abs/1812.11118,
2018. [pdf]. 50

R. Bommasani, D. Hudson, E. Adeli, et al. On


the Opportunities and Risks of Foundation
Models. CoRR, abs/2108.07258, 2021. [pdf].
139
T. Brown, B. Mann, N. Ryder, et al. Lan-
guage Models are Few-Shot Learners. CoRR,
abs/2005.14165, 2020. [pdf]. 53, 112, 138

S. Bubeck, V. Chandrasekaran, R. Eldan, et al.


Sparks of Artificial General Intelligence:
Early experiments with GPT-4. CoRR,
abs/2303.12712, 2023. [pdf]. 139

T. Chen, B. Xu, C. Zhang, and C. Guestrin. Train-


ing Deep Nets with Sublinear Memory Cost.
CoRR, abs/1604.06174, 2016. [pdf]. 43

K. Cho, B. van Merrienboer, Ç. Gülçehre,


et al. Learning Phrase Representations using
RNN Encoder-Decoder for Statistical Machine
Translation. CoRR, abs/1406.1078, 2014. [pdf].
145

A. Chowdhery, S. Narang, J. Devlin, et al. PaLM:


Scaling Language Modeling with Pathways.
CoRR, abs/2204.02311, 2022. [pdf]. 53, 139

G. Cybenko. Approximation by superpositions


of a sigmoidal function. Mathematics of Con-
trol, Signals, and Systems, 2(4):303–314, De-
cember 1989. [pdf]. 98

J. Devlin, M. Chang, K. Lee, and K. Toutanova.


BERT: Pre-training of Deep Bidirectional
Transformers for Language Understanding.
151
CoRR, abs/1810.04805, 2018. [pdf]. 53, 114,
149

A. Dosovitskiy, L. Beyer, A. Kolesnikov, et al.


An Image is Worth 16x16 Words: Transform-
ers for Image Recognition at Scale. CoRR,
abs/2010.11929, 2020. [pdf]. 112, 113

K. Fukushima. Neocognitron: A self-organizing


neural network model for a mechanism of
pattern recognition unaffected by shift in po-
sition. Biological Cybernetics, 36(4):193–202,
April 1980. [pdf]. 2

Y. Gal and Z. Ghahramani. Dropout as


a Bayesian Approximation: Representing
Model Uncertainty in Deep Learning. CoRR,
abs/1506.02142, 2015. [pdf]. 77

X. Glorot and Y. Bengio. Understanding the dif-


ficulty of training deep feedforward neural
networks. In International Conference on Arti-
ficial Intelligence and Statistics (AISTATS), 2010.
[pdf]. 44, 61

X. Glorot, A. Bordes, and Y. Bengio. Deep Sparse


Rectifier Neural Networks. In International
Conference on Artificial Intelligence and Statis-
tics (AISTATS), 2011. [pdf]. 70

152
A. Gomez, M. Ren, R. Urtasun, and R. Grosse.
The Reversible Residual Network: Backprop-
agation Without Storing Activations. CoRR,
abs/1707.04585, 2017. [pdf]. 43

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza,


et al. Generative Adversarial Networks. CoRR,
abs/1406.2661, 2014. [pdf]. 146

K. He, X. Zhang, S. Ren, and J. Sun. Deep Resid-


ual Learning for Image Recognition. CoRR,
abs/1512.03385, 2015. [pdf]. 51, 83, 84, 102,
104

D. Hendrycks and K. Gimpel. Gaussian Error


Linear Units (GELUs). CoRR, abs/1606.08415,
2016. [pdf]. 72

D. Hendrycks, K. Zhao, S. Basart, et al. Natural


Adversarial Examples. CoRR, abs/1907.07174,
2019. [pdf]. 131

J. Ho, A. Jain, and P. Abbeel. Denoising Diffusion


Probabilistic Models. CoRR, abs/2006.11239,
2020. [pdf]. 141, 142, 143

S. Hochreiter and J. Schmidhuber. Long Short-


Term Memory. Neural Computation, 9(8):1735–
1780, 1997. [pdf]. 145

153
S. Ioffe and C. Szegedy. Batch Normalization: Ac-
celerating Deep Network Training by Reduc-
ing Internal Covariate Shift. In International
Conference on Machine Learning (ICML), 2015.
[pdf]. 79

J. Kaplan, S. McCandlish, T. Henighan, et al. Scal-


ing Laws for Neural Language Models. CoRR,
abs/2001.08361, 2020. [pdf]. 51, 52

D. Kingma and J. Ba. Adam: A Method for


Stochastic Optimization. CoRR, abs/1412.6980,
2014. [pdf]. 39

D. P. Kingma and M. Welling. Auto-Encoding


Variational Bayes. CoRR, abs/1312.6114, 2013.
[pdf]. 146

A. Krizhevsky, I. Sutskever, and G. Hinton. Ima-


geNet Classification with Deep Convolutional
Neural Networks. In Neural Information Pro-
cessing Systems (NIPS), 2012. [pdf]. 8, 100

Y. LeCun, B. Boser, J. S. Denker, et al. Back-


propagation applied to handwritten zip code
recognition. Neural Computation, 1(4):541–
551, 1989. [pdf]. 8

Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner.


Gradient-based learning applied to document

154
recognition. Proceedings of the IEEE, 86(11):
2278–2324, 1998. [pdf]. 100, 101

W. Liu, D. Anguelov, D. Erhan, et al. SSD: Single


Shot MultiBox Detector. CoRR, abs/1512.02325,
2015. [pdf]. 120, 122

J. Long, E. Shelhamer, and T. Darrell. Fully Con-


volutional Networks for Semantic Segmenta-
tion. CoRR, abs/1411.4038, 2014. [pdf]. 83, 84,
126

A. L. Maas, A. Y. Hannun, and A. Y. Ng. Rec-


tifier nonlinearities improve neural network
acoustic models. In proceedings of the ICML
Workshop on Deep Learning for Audio, Speech
and Language Processing, 2013. [pdf]. 71

V. Mnih, K. Kavukcuoglu, D. Silver, et al. Human-


level control through deep reinforcement
learning. Nature, 518(7540):529–533, February
2015. [pdf]. 134, 135

A. Nichol, P. Dhariwal, A. Ramesh, et al. GLIDE:


Towards Photorealistic Image Generation and
Editing with Text-Guided Diffusion Models.
CoRR, abs/2112.10741, 2021. [pdf]. 144

L. Ouyang, J. Wu, X. Jiang, et al. Training lan-


guage models to follow instructions with hu-

155
man feedback. CoRR, abs/2203.02155, 2022.
[pdf]. 140

R. Pascanu, T. Mikolov, and Y. Bengio. On the dif-


ficulty of training recurrent neural networks.
In International Conference on Machine Learn-
ing (ICML), 2013. [pdf]. 44

A. Radford, J. Kim, C. Hallacy, et al. Learn-


ing Transferable Visual Models From Natural
Language Supervision. CoRR, abs/2103.00020,
2021. [pdf]. 130, 132

A. Radford, J. Kim, T. Xu, et al. Robust Speech


Recognition via Large-Scale Weak Supervi-
sion. CoRR, abs/2212.04356, 2022. [pdf]. 128

A. Radford, K. Narasimhan, T. Salimans, and


I. Sutskever. Improving Language Understand-
ing by Generative Pre-Training, 2018. [pdf].
108, 111, 138

A. Radford, J. Wu, R. Child, et al. Language


Models are Unsupervised Multitask Learners,
2019. [pdf]. 111, 148

O. Ronneberger, P. Fischer, and T. Brox. U-Net:


Convolutional Networks for Biomedical Im-
age Segmentation. In Medical Image Comput-
ing and Computer-Assisted Intervention, 2015.
[pdf]. 83, 84, 126
156
F. Scarselli, M. Gori, A. C. Tsoi, et al. The Graph
Neural Network Model. IEEE Transactions
on Neural Networks (TNN), 20(1):61–80, 2009.
[pdf]. 147

R. Sennrich, B. Haddow, and A. Birch. Neural


Machine Translation of Rare Words with Sub-
word Units. CoRR, abs/1508.07909, 2015. [pdf].
34

J. Sevilla, L. Heim, A. Ho, et al. Compute Trends


Across Three Eras of Machine Learning. CoRR,
abs/2202.05924, 2022. [pdf]. 9, 51, 53

J. Sevilla, P. Villalobos, J. F. Cerón, et al. Param-


eter, Compute and Data Trends in Machine
Learning, May 2023. [web]. 54

K. Simonyan and A. Zisserman. Very Deep Con-


volutional Networks for Large-Scale Image
Recognition. CoRR, abs/1409.1556, 2014. [pdf].
100

N. Srivastava, G. Hinton, A. Krizhevsky, et al.


Dropout: A Simple Way to Prevent Neural
Networks from Overfitting. Journal of Ma-
chine Learning Research (JMLR), 15:1929–1958,
2014. [pdf]. 76

M. Telgarsky. Benefits of depth in neural net-


works. CoRR, abs/1602.04485, 2016. [pdf]. 47
157
A. Vaswani, N. Shazeer, N. Parmar, et al. Atten-
tion Is All You Need. CoRR, abs/1706.03762,
2017. [pdf]. 83, 86, 96, 107, 108, 109

J. Zbontar, L. Jing, I. Misra, et al. Barlow Twins:


Self-Supervised Learning via Redundancy Re-
duction. CoRR, abs/2103.03230, 2021. [pdf].
149

M. D. Zeiler and R. Fergus. Visualizing and Un-


derstanding Convolutional Networks. In Eu-
ropean Conference on Computer Vision (ECCV),
2014. [pdf]. 68

H. Zhao, J. Shi, X. Qi, et al. Pyramid Scene


Parsing Network. CoRR, abs/1612.01105, 2016.
[pdf]. 126, 127

J. Zhou, C. Wei, H. Wang, et al. iBOT: Im-


age BERT Pre-Training with Online Tokenizer.
CoRR, abs/2111.07832, 2021. [pdf]. 149

158
Index

1D convolution, 65
2D convolution, 65

activation, 23, 41
    function, 70, 98
    map, 68
Adam, 39
affine operation, 60
artificial neural network, 8, 11
attention operator, 87
autoencoder, 146
    denoising, 117
Autograd, 42
autoregressive model, see model, autoregressive
average pooling, 75

backpropagation, 42
backward pass, 42
basis function regression, 14
batch, 21, 38
batch normalization, 79, 103
Bellman equation, 134
bias vector, 60, 66
BPE, see Byte Pair Encoding
Byte Pair Encoding, 34, 128

cache memory, 21
capacity, 16
causal, 32, 89, 110
model, see model, causal
chain rule (derivative), 40
chain rule (probability), 30
channel, 23
checkpointing, 43
classification, 18, 26, 100, 119
CLIP, see Contrastive Language-Image
Pre-training
CLS token, 114
computational cost, 43
Contrastive Language-Image Pre-training, 130
contrastive loss, 27, 130
convnet, see convolutional network
convolution, 65
convolutional layer, see layer, convolutional
convolutional network, 100
cross-attention block, 92, 108, 110
cross-entropy, 27, 31, 45

data augmentation, 119


deep learning, 8, 11
Deep Q-Network, 134

denoising autoencoder, see autoencoder,
denoising
density modeling, 18
depth, 41
diffusion process, 141
dilation, 66, 73
discriminator, 147
downscaling residual block, 105
DQN, see Deep Q-Network
dropout, 76, 90

embedding layer, see layer, embedding


epoch, 48
equivariance, 66, 92

feed-forward block, 107, 108


few-shot prediction, 138
filter, 65
fine-tuning, 140
flops, 22
forward pass, 41
foundation model, 139
FP32, 22
framework, 23

GAN, see Generative Adversarial Networks


GELU, 72
Generative Adversarial Networks, 146
Generative Pre-trained Transformer, 111, 130,
138, 148
generator, 146
GNN, see Graph Neural Network
GPT, see Generative Pre-trained Transformer
GPU, see Graphical Processing Unit
gradient descent, 35, 37, 40, 45
gradient norm clipping, 44
gradient step, 35
Graph Neural Network, 147
Graphical Processing Unit, 8, 20
ground truth, 18

hidden layer, see layer, hidden


hidden state, 145
hyperbolic tangent, 71

image processing, 100


image synthesis, 86, 141
inductive bias, 17, 49, 65, 66, 95
invariance, 75, 92, 95, 149

kernel size, 65, 73


key, 87

Large Language Model, 55, 87, 138, 148


layer, 41, 58
    attention, 86
    convolutional, 65, 73, 86, 95, 100, 103, 120, 125, 128
    embedding, 94, 110
    fully connected, 60, 86, 95, 98, 100
    hidden, 98
    linear, 60
    Multi-Head Attention, 90, 95, 110
    normalizing, 79
    reversible, 43
layer normalization, 82, 107, 110
Leaky ReLU, 71
learning rate, 35, 50
learning rate schedule, 50
LeNet, 100, 101
linear layer, see layer, linear
LLM, see Large Language Model
local minimum, 35
logit, 26, 31
loss, 12

machine learning, 11, 17, 18


Markovian Decision Process, 133
Markovian property, 133
max pooling, 73, 100
MDP, see Markovian Decision Process
mean squared error, 14, 26
memory requirement, 43
memory speed, 21
meta parameter, see parameter, meta
metric learning, 27
MLP, see multi-layer perceptron
model, 12
    autoregressive, 30, 31, 138
    causal, 33, 90, 110, 111
    parametric, 12
    pre-trained, 123, 127
multi-layer perceptron, 45, 98–100, 107

natural language processing, 86


NLP, see natural language processing
non-linearity, 70
normalizing layer, see layer, normalizing

object detection, 120


overfitting, 17, 48

padding, 66, 73
parameter, 12
    meta, 13, 35, 48, 65, 66, 73, 90, 94
parametric model, see model, parametric
peak performance, 22
perplexity, 31
policy, 133
    optimal, 133
pooling, 73
positional encoding, 95, 110
posterior probability, 26
pre-trained model, see model, pre-trained
prompt, 138, 139

query, 87

random initialization, 61
receptive field, 67, 123
rectified linear unit, 70, 145
recurrent neural network, 145
regression, 18
Reinforcement Learning, 133, 140
Reinforcement Learning from Human Feedback,
140
ReLU, see rectified linear unit
residual
    block, 103
    connection, 83, 102
    network, 47, 83, 102
ResNet-50, 102
return, 133
reversible layer, see layer, reversible
RL, see Reinforcement Learning
RLHF, see Reinforcement Learning from Human
Feedback
RNN, see recurrent neural network

scaling laws, 51
self-attention block, 92, 108, 110
self-supervised learning, 148
semantic segmentation, 85, 125
SGD, see stochastic gradient descent
Single Shot Detector, 120
skip connection, 83, 126, 145
softargmax, 26, 88
softmax, 26
speech recognition, 128
SSD, see Single Shot Detector
stochastic gradient descent, 38, 45, 51
stride, 66, 73
supervised learning, 19

Tanh, see hyperbolic tangent


tensor, 23
tensor cores, 21
Tensor Processing Unit, 21
test set, 48
text synthesis, 138
token, 30
tokenizer, 34, 128
TPU, see Tensor Processing Unit
trainable parameter, 12, 23, 51
training, 12
training set, 12, 25, 48
Transformer, 47, 83, 87, 95, 107, 109, 128
transposed convolution, 68, 125

underfitting, 16
universal approximation theorem, 98
unsupervised learning, 19

VAE, see variational, autoencoder


validation set, 48
value, 87
vanishing gradient, 44, 57
variational
    autoencoder, 146
    bound, 143
Vision Transformer, 112, 130
ViT, see Vision Transformer
vocabulary, 30

weight, 13
    decay, 28
    matrix, 60

zero-shot prediction, 131

This book is licensed under the Creative Com-
mons BY-NC-SA 4.0 International License.

V1.1.1–September 20, 2023
