A Geometric Modeling of Occam's Razor in Deep Learning: How To Measure The Simplicity or The Complexity of A Model
Ke Sun
CSIRO’s Data61, Australia
The Australian National University
[email protected]
Frank Nielsen
Sony Computer Science Laboratories Inc. (Sony CSL)
Tokyo, Japan
[email protected]
Version: Jun 2024
Abstract
Why do deep neural networks (DNNs) benefit from very high dimensional parameter
spaces? The contrast between their huge parameter complexity and their stunning performance
in practice is all the more intriguing, and cannot be explained by the standard theory of model
selection for regular models. In this work, we propose a geometrically flavored information-theoretic
approach to study this phenomenon. Namely, we introduce the locally varying dimensionality
of the parameter space of neural network models by considering the number of significant
dimensions of the Fisher information matrix, and model the parameter space as a manifold
using the framework of singular semi-Riemannian geometry. We derive model complexity
measures which yield short description lengths for deep neural network models based on
their singularity analysis, thus explaining the good performance of DNNs despite their large
number of parameters.
1 Introduction
Deep neural networks (DNNs) are usually large models in terms of storage costs. In the classical
model selection theory, such models are not favored as compared to simple models with the same
training performance. For example, if one applies the Bayesian information criterion (BIC) [62]
to DNNs, a shallow neural network (NN) will be preferred over a deep NN due to the penalty
term with respect to (w.r.t.) the complexity. A basic principle in science is Occam's1 razor,
which favors simple models over complex ones that accomplish the same task. This raises the
fundamental question of how to measure the simplicity or the complexity of a model.
Formally, the preference of simple models has been studied in the area of minimum description
length (MDL) [22, 58, 59], also known in another thread of research as the minimum message
length (MML) [68].
Consider a parametric family of distributions M = {p(x | θ)} with θ ∈ Θ ⊂ R^D. The
distributions are mutually absolutely continuous, which guarantees all densities to have the same
support. Otherwise, many problems of non-regularity will arise as described by [24, 54].
∗This work first appeared under the former title “Lightlike Neuromanifolds, Occam’s Razor and Deep Learning”.
1William of Ockham (ca. 1287 – ca. 1347), a monk (friar) and philosopher.
The
Fisher information matrix (FIM) I(θ) is a D × D positive semi-definite (psd) matrix: I(θ) ⪰ 0.
The model is called regular if it is (i) identifiable [11] with (ii) a non-degenerate and finite Fisher
information matrix (i.e., I(θ) ≻ 0).
In a Bayesian setting, the description length of a set of N i.i.d. observations X = {xi}_{i=1}^N ⊂ X
w.r.t. M can be defined as the number of nats with the coding scheme of a parametric model
p(x | θ) and a prior p(θ). The code length of any xi is given by the cross entropy between the
empirical distribution δi(x) = δ(x − xi), where δ(·) denotes the Dirac delta function, and
p(x) = ∫ p(x | θ) p(θ) dθ. Therefore, the description length of X is
$$-\log p(X) = \sum_{i=1}^{N} h^{\times}(\delta_i : p) = -\sum_{i=1}^{N} \log \int p(x_i \mid \theta)\, p(\theta)\, d\theta, \tag{1}$$
where h×(p : q) := −∫ p(x) log q(x) dx denotes the cross entropy between p(x) and q(x), and log
denotes the natural logarithm throughout the paper. The code length means the cumulative loss of
the Bayesian mixture model p(x) w.r.t. the observations X.
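As a concrete illustration of eq. (1), the following minimal sketch estimates the description length of a data set under a Bayesian mixture by simple Monte Carlo; the 1D Gaussian model and the standard normal prior are assumed choices for this example only, not part of the setting above.

```python
import numpy as np

# Estimate -log p(X) = -sum_i log \int p(x_i | theta) p(theta) dtheta  (eq. (1))
# for an assumed toy model p(x | theta) = N(x; theta, 1) with prior theta ~ N(0, 1).
rng = np.random.default_rng(0)
X = rng.normal(loc=1.5, scale=1.0, size=50)                    # observed data
theta_samples = rng.normal(loc=0.0, scale=1.0, size=100_000)   # draws from the prior

def log_mixture_density(x, thetas):
    # log p(x) = log E_{p(theta)}[p(x | theta)], computed via log-mean-exp for stability
    log_lik = -0.5 * (x - thetas) ** 2 - 0.5 * np.log(2 * np.pi)
    m = log_lik.max()
    return m + np.log(np.mean(np.exp(log_lik - m)))

description_length = -sum(log_mixture_density(x, theta_samples) for x in X)
print(f"description length of X: {description_length:.2f} nats")
```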
By using Jeffreys'2 non-informative prior [3] as p(θ), the MDL in eq. (1) can be approximated
(see [7, 58, 59]) as
$$\chi = \underbrace{-\log p(X \mid \hat{\theta})}_{\text{fitness}} + \overbrace{\underbrace{\frac{D}{2}\log\frac{N}{2\pi}}_{\text{penalize high dof}} + \underbrace{\log\int\sqrt{|I(\theta)|}\,d\theta}^{}_{\text{model capacity}}}^{\text{geometric complexity}}, \tag{2}$$
where θ̂ ∈ Θ is the maximum likelihood estimate (MLE), or the projection [3] of X onto the
model, D = dim(Θ) is the model size, N is the number of observations, and | · | denotes the
matrix determinant. In this paper, the symbols χ and O and the term “razor” all refer to the
same concept, that is, the description length of the data X by the model M. The smaller those
quantities, the better.
The first term in eq. (2) is the fitness of the model to the observed data. The second and the
third terms measure the geometric complexity [43] and make χ favor simple models. The second
O(log N) term only depends on the number of parameters D and the number of observations N.
It penalizes large models with a high degree of freedom (dof). The third O(1) term is independent
of the observed data and measures the model capacity, or the total “number” of distinguishable
distributions [43] in the model.
Unfortunately, this razor χ in eq. (2) does not fit straightforwardly into DNNs, which are
high-dimensional singular models. The FIM I(θ) is a large singular matrix (not full rank), and
the last term may be difficult to evaluate. Based on the second term on the right-hand side
(RHS), a DNN can have very high complexity and therefore is less favored than a shallow
network. This contradicts the good generalization of DNNs as compared to shallow NNs. These
issues call for a new analysis of the MDL in the DNN setting.
In this direction, we make the following contributions in this paper:
– New concepts and methodologies from singular semi-Riemannian geometry [36] to analyze
the space of neural networks;
– A definition of the local dimensionality, that is, the amount of non-singularity, with a bounding
analysis;
2 Sir Harold Jeffreys (1891–1989), a British statistician.
– A new MDL formulation, which explains how singularities contribute to the “negative
complexity” of DNNs: that is, the model becomes simpler as the number of parameters grows.
The rest of this paper is organized as follows. Section 2 reviews singularities in information
geometry. In the setting of a DNN, section 3 introduces its singular parameter manifold and
bounds the number of singular dimensions. Sections 4 to 7 derive our MDL criterion based on two
different priors, and discuss how model complexity is affected by the singular geometry. We
discuss related work in section 8 and conclude in section 9.
metric tensor, ∇ a torsion-free affine connection, and C is a symmetric covariant tensor of order 3.
4Using the cyclic property of the matrix trace, we have ds² = tr(I(θ) dθ dθ⊺) = dθ⊺ I(θ) dθ.
Figure 1: A null curve (equivalent models) in the neuromanifold, passing through the points θ and θ′, with frame directions ∂θi and ∂θi′.
The tangent bundle T M is the manifold obtained by combining all tangent spaces for all θ ∈ M. A vector field is a smooth
mapping from M to T M such that each point θ ∈ M is attached a tangent vector originating
from itself. Vector fields are cross-sections of the tangent bundle. In a local coordinate chart θ,
the vector fields along the frame are denoted as ∂θi . A distribution (not to be confused with
probability distributions which are points on M) means a vector subspace of the tangent bundle
spanned by several independent vector fields, such that each point θ ∈ M is associated with a
subspace of Tθ (M) and those subspaces vary smoothly with θ. Its dimensionality is defined by
the dimensionality of the subspace, i.e., the number of vector fields that span the distribution.
In a lightlike manifold [15, 36] M, I(θ) can be degenerate. The tangent space Tθ (M) is
a vector space with a kernel subspace, i.e., a nullspace. A null vector field is formed by null
vectors, whose lengths measured according to the Fisher metric tensor are all zero. The radical5
distribution Rad(T M) is the distribution spanned by the null vector fields. Locally at θ ∈ M,
the tangent vectors in Tθ (M) which span the kernel of I(θ) are denoted as Radθ (T M). In a
local coordinate chart, Rad(T M) is well defined if these Radθ (T M) form a valid distribution.
We write T M = Rad(T M) ⊕ S(T M), where “⊕” is the direct sum, and the screen distribution
S(T M) is complementary to the radical distribution Rad(T M) and has a non-degenerate induced
metric. See fig. 1 for an illustration of the concept of radical distribution.
We can find a local coordinate frame (a frame is an ordered basis) (θ1 , · · · , θd , θd+1 , · · · , θD ),
where the first d dimensions θ s = (θ1 , · · · , θd ) correspond to the screen distribution, and the
remaining d¯ := D − d dimensions θ r = (θd+1 , · · · , θD ) correspond to the radical distribution. The
local inner product ⟨·, ·⟩I satisfies
where δij = 1 if and only if (iff) i = j, and δij = 0 otherwise. Unfortunately, this frame is
5Radical stems from Latin and means root.
not unique [14]. We will abuse I to denote both the FIM of θ and the FIM of θ s . One has to
remember that I(θ) ⪰ 0, while I(θ s ) ≻ 0 is a proper Riemannian metric. Hence, both I −1 (θ s )
and log |I(θ s )| are well-defined.
Remark 1. Notice that the Fisher information matrix is covariant under reparameterization.
That is, let θ(λ) be an invertible smooth reparameterization of λ. Then the FIM rewrites in the
θ-parameterization as
$$I(\theta) = J_{\theta\to\lambda}^{\top}\, I(\lambda(\theta))\, J_{\theta\to\lambda}, \tag{4}$$
where J_{θ→λ} is the full-rank Jacobian matrix.
The natural gradient flows with respect to λ and θ coincide, but the natural gradient descent
methods do not, because of the non-zero learning step sizes.
Furthermore, the ranks of I(θ) and I(λ) as well as the dimensions of the screen and radical
distributions coincide. Hence, the notion of singularities is intrinsic and independent of the
smooth reparameterization.
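The following short sketch checks eq. (4) and the rank invariance numerically for a categorical model parameterized by its logits, whose FIM is diag(p) − p p⊺; the linear reparameterization used here is an assumption made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5

def softmax(h):
    e = np.exp(h - h.max())
    return e / e.sum()

def fim_logits(h):
    # FIM of a categorical distribution w.r.t. its logits: diag(p) - p p^T (rank m - 1)
    p = softmax(h)
    return np.diag(p) - np.outer(p, p)

theta = rng.standard_normal(m)
A = rng.standard_normal((m, m))        # full-rank Jacobian of theta -> lambda = A theta
lam = A @ theta

I_lam = fim_logits(lam)                # FIM in the lambda-parameterization
I_theta = A.T @ I_lam @ A              # eq. (4) with J_{theta -> lambda} = A
# The ranks (and hence the screen/radical dimensions) coincide:
print(np.linalg.matrix_rank(I_theta), np.linalg.matrix_rank(I_lam))   # 4 4
```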
3 Local Dimensionality
This section instantiates the concepts of section 2 for a simple DNN structure.
We consider a deep feed-forward network with L layers, uniform width M except the last layer
which has m output units (m < M ), input z ∈ Z with dim(Z) = M , pre-activations hl of size M
(except that in the last layer, hL has m elements), post-activations z l of size M , weight matrices
W l and bias vectors bl (1 ≤ l ≤ L). The layers are given by
$$z^{l} = \phi(h^{l}), \qquad h^{l} = W^{l} z^{l-1} + b^{l}, \qquad z^{0} = z, \qquad y \sim \mathrm{Multinomial}\big(\mathrm{SoftMax}(h^{L})\big). \tag{5}$$
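A minimal NumPy sketch of the forward pass in eq. (5) is given below; the tanh activation and the 1/√M weight scale are assumptions made for the example only.

```python
import numpy as np

def softmax(h):
    e = np.exp(h - h.max())
    return e / e.sum()

def forward(z, weights, biases, phi=np.tanh):
    """One pass through the feed-forward network of eq. (5).

    `weights` and `biases` hold (W^l, b^l) for l = 1..L; the last layer has m output
    units and its logits h^L parameterize the categorical output. The activation phi
    (here tanh, an assumed choice) is applied to all layers except the last one.
    """
    for l, (W, b) in enumerate(zip(weights, biases), start=1):
        h = W @ z + b                               # pre-activation h^l
        z = phi(h) if l < len(weights) else h       # post-activation z^l (no phi on h^L)
    return softmax(z)                               # class probabilities p(y | z, theta)

# Toy usage with L = 3 layers, width M = 8, output size m = 3.
rng = np.random.default_rng(0)
M, m = 8, 3
sizes = [(M, M), (M, M), (m, M)]
weights = [rng.standard_normal(s) / np.sqrt(M) for s in sizes]
biases = [np.zeros(s[0]) for s in sizes]
print(forward(rng.standard_normal(M), weights, biases))
```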
The description length is then defined w.r.t. the Bayesian mixture p(x) = p(z) ∫ p(y | z, θ) p(θ) dθ and the observed pairs (zi, yi); the smaller the code length, the better.
By considering the neural network weights and biases as random variables satisfying a
prescribed prior distribution [30, 53], this I(θ) can be regarded as a random matrix [42] depending
on the structure of the DNN and the prior. The empirical density of I(θ) is the empirical
distribution of its eigenvalues {λi}_{i=1}^D, that is, ρ_D(λ) = (1/D) Σ_{i=1}^D δ(λ − λi). If, in the limit D → ∞,
the empirical density converges to a probability density function (pdf), we denote this limiting
spectral density by ρI(λ).
The local dimensionality d(θ) is the number of degrees of freedom at θ ∈ M which can change
the probabilistic model p(y | z, θ) in terms of information theory. One can find a reparameterized
DNN with d(θ) parameters, which is locally equivalent to the original DNN with D parameters.
Recall the dimensionality of the tangent bundle is two times the dimensionality of the manifold.
Remark 2. The dimensionality of the screen distribution S(T M) at θ is 2 d(θ).
By definition, the FIM as the singular semi-Riemannian metric of M must be psd. Therefore
it only has positive and zero eigenvalues, and the number of positive eigenvalues d(θ) is not
constant as θ varies in general.
Remark 3. The local metric signature (number of positive, negative, zero eigenvalues of the
FIM) of the neuromanifold M is (d(θ), 0, D − d(θ)), where d(θ) is the local dimensionality.
For DNN, we assume that
(A1) At the MLE θ̂, the prediction SoftMax(hL (zi )) perfectly recovers (tending to be one-hot
vectors) the training target yi , for all the training samples (zi , yi ).
In this case, the negative Hessian of the average log-likelihood,
$$J(\theta) := -\frac{1}{N}\frac{\partial^2 \log p(X\mid\theta)}{\partial\theta\,\partial\theta^{\top}} = -\frac{1}{N}\sum_{i=1}^{N}\frac{\partial^2 \log p(y_i\mid z_i,\theta)}{\partial\theta\,\partial\theta^{\top}},$$
is called the observed FIM (sample-based FIM). In our notation, the FIM I depends on the
true distribution p(z) and does not depend on the observed samples (unless p(z) = p̂(z), in which
case I becomes Î). The observed FIM J depends on the samples zi. If p(z) = p̂(z), the observed
FIM coincides with the FIM at the MLE θ̂ and J(θ̂) = Î(θ̂).7 For general statistical models,
there is a residual term between these two matrices which scales with the training error (see
e.g. Eq. 6.19 in section 6 of [4], or eq. (19) in the appendix).
The local dimensionality d(θ) depends on the specific choice of p(z). If p(z) = p̂(z), then
d(θ) = d̂(θ) = rank(Î(θ)). On the other hand, one can use the rank of the negative Hessian
J(θ) (i.e., the observed rank) to get an approximation of the local dimensionality, d(θ) ≈ rank(J(θ)).
At the MLE θ̂, this approximation becomes accurate. We simply write d and d̂, instead of d(θ)
and d̂(θ), if θ is clear from the context.
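The sketch below illustrates this rank-based approximation: it estimates the local dimensionality as the numerical rank of an empirical Fisher matrix built from per-sample score vectors (a rough proxy; the threshold and the toy data are assumptions of the example).

```python
import numpy as np

def local_dimensionality(scores, eps=1e-8):
    """Estimate d(theta) as the numerical rank of an empirical Fisher matrix.

    `scores` is an (N, D) array whose i-th row is the gradient of
    log p(y_i | z_i, theta) w.r.t. the D parameters (the per-sample score).
    Eigenvalues below `eps` times the largest one are treated as zero,
    i.e. as epsilon-singular dimensions.
    """
    F = scores.T @ scores / scores.shape[0]   # D x D, positive semi-definite
    eigvals = np.linalg.eigvalsh(F)
    return int(np.sum(eigvals > eps * eigvals.max()))

# Toy usage: D = 50 parameters but the scores only span a 10-dimensional subspace,
# so the estimated local dimensionality is 10 rather than 50.
rng = np.random.default_rng(0)
basis = rng.standard_normal((10, 50))
scores = rng.standard_normal((200, 10)) @ basis
print(local_dimensionality(scores))   # -> 10
```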
We first show that the lightlike dimensions of M do not affect the neural network model in
eq. (5).
Lemma 1. If (θ, Σ_j αj ∂θj) ∈ Rad(T M), i.e. ⟨Σ_j αj ∂θj, Σ_j αj ∂θj⟩_{I(θ)} = 0, then almost surely we have
$$\frac{\partial h^{L}(z)}{\partial\theta}\,\alpha = \lambda(z)\mathbf{1},$$
where λ(z) ∈ R, and 1 is a vector of ones.
By lemma 1, the Jacobian ∂h^L(z)/∂θ is the local linear approximation of the map θ → h^L. The
dynamic α (the coordinates of a tangent vector) on M causes a uniform increment on the output
h^L, which, after the SoftMax function, does not change the neural network map z → y.
Then, we have the following bounds.
Proposition 2. ∀θ ∈ M, d̂(θ) ≤ min(D, (m − 1)N).
Remark 4. While the total number D of free parameters is unbounded in DNNs, the local
dimensionality estimated by d̂(θ) grows at most linearly w.r.t. the sample size N, given fixed m
(the size of the last layer). If both N and m are fixed, then d̂(θ) is bounded even when the network
width M → ∞ and/or the depth L → ∞.
7 There has been a discussion on different variants of the FIM [35] used in machine learning. To clarify this
confusion in terminologies is out of the scope of this paper. Here, J(θ) refers to the observed FIM (the negative
Hessian of the log-likelihood) usually evaluated at the MLE, while I(θ) refers to the FIM.
To understand d(θ), one can parameterize the DNN, locally, with only d(θ) free parameters
while maintaining the same predictive model. The log-likelihood is a function of these d(θ)
parameters, and therefore its Hessian has at most rank d(θ). In theory, one can only reparameterize
M so that at one single point θ̂, the screen and radical distributions are separated based on
the coordinate chart. Such a chart may neither exist locally (in a neighborhood around θ̂) nor
globally.
The local dimensionality is not constant and may vary with θ. The global topology of the
neuromanifold is therefore like a stratifold [5, 16]. As θ has a large dimensionality in DNNs,
singularities are more likely to occur in M. Compared to the notion of intrinsic dimensionality [38],
our d(θ) is well-defined mathematically rather than based on empirical evaluations. One can
regard our local dimensionality as an upper bound of the intrinsic dimensionality, because a
very small singular value of I still counts towards the local dimensionality. Notice that random
matrices have full rank with probability 1 [17]. On the other hand, if we regard small singular
values (below a prescribed threshold ε > 0) as ε-singular dimensions, the spectral density ρI
(probability distribution of the eigenvalues of I(θ)) affects the expected local dimensionality of
M. On the support of ρI, the higher the probability of the region [0, ε), the more likely M
is singular. By the Cramér–Rao lower bound, the variance of an unbiased 1D estimator θ̂ must
satisfy
$$\mathrm{var}(\hat{\theta}) \ge I(\theta)^{-1} \ge \frac{1}{\varepsilon}.$$
Therefore the ε-singular dimensions lead to a large variance of the estimator θ̂: a single observation
xi carries little or no information regarding θ, and it requires a large number of observations to
achieve the same precision. The notion of thresholding eigenvalues close to zero may depend on
the parameterization but the intrinsic ranks given by the local dimensionality are invariant.
In a DNN, there are several typical sources of singularities:
• First, if the neuron is saturated and gives constant output regardless of the input sample
zi , then all dynamics of its input and output connections are in Rad(T M).
• Second, two neurons in the same layer can have linearly dependent output, e.g. when they
share the same weight vector and bias. They can be merged into one single neuron, as there
exists redundancy in the original reparametrization.
• Third, if the activation function ϕ(·) is homogeneous, e.g. ReLU, then any neuron in the
DNN induces a reparametrization by multiplying the input links by α and the output links by
1/α^k (k is the degree of homogeneity); see the sketch after this list. This reparametrization
corresponds to a null curve in the neuromanifold parameterized by α.
• Fourth, certain structures such as recurrent neural networks (RNNs) suffer from vanishing
gradient [20]. As the FIM is the variance of the gradient of the log-likelihood (known as
variance of the score in statistics), its scale goes to zero along the dimensions associated
with such structures.
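The following toy sketch illustrates the third source of singularities: rescaling the incoming and outgoing weights of a ReLU neuron leaves the network output unchanged, so the corresponding parameter direction is a null direction. The 2-layer architecture and the random weights are assumptions of the example.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(W1, b1, W2, b2, z):
    # z -> h1 = relu(W1 z + b1) -> output = W2 h1 + b2
    return W2 @ relu(W1 @ z + b1) + b2

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)
z = rng.standard_normal(3)

# Rescale the incoming weights/bias of neuron 0 by alpha and its outgoing weights
# by 1/alpha (ReLU is positively homogeneous of degree k = 1):
alpha = 3.7
W1s, b1s, W2s = W1.copy(), b1.copy(), W2.copy()
W1s[0, :] *= alpha
b1s[0] *= alpha
W2s[:, 0] /= alpha

# The output is unchanged; moving along alpha traces a null curve in parameter space.
print(np.allclose(forward(W1, b1, W2, b2, z), forward(W1s, b1s, W2s, b2, z)))  # True
```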
It is meaningful to formally define the notion of a “lightlike neuromanifold”. Using these geometric
tools, related analyses become invariant w.r.t. neural network reparametrization. Moreover, the
connection between the neuromanifold and singular semi-Riemannian geometry, which is used in
general relativity, is not yet widely adopted in machine learning. For example, the textbook [69]
on singular statistics mainly uses tools from algebraic geometry, which is a different field.
Notice that the Fisher-Rao distance along a null curve is undefined because there the FIM is
degenerate and there is no arc-length reparameterization along null curves [32].
4 General Formulation of Our Razor
In this section, we derive a new formula of MDL for DNNs, aiming to explain how a
high-dimensional DNN structure can nonetheless yield a short code length for the given data. Notice that
this work focuses on the concept of model complexity and not on generalization bounds. We
argue that the DNN model is intrinsically simple because it can be described with a short code. The
theoretical connection between generalization power and MDL is studied in PAC-Bayesian theory
and PAC-MDL (see [21, 25, 46] and references therein). This is beyond the scope of this paper.
We derive a simple asymptotic formula for the case of large sample size and large network
size. Therefore crude approximations are taken and the low-order terms are ignored, which are
common practices in deriving information criteria [1, 62].
In the following, we will abuse p(x | θ) to denote the DNN model p(y | z, θ) for shorter
equations and to be consistent with the introduction. Assume
(A2) The absolute values of the third-order derivatives of log p(x | θ) w.r.t. θ are bounded by
some constant.
(A3) ∀i, |θi − θ̂i| = O(1/√M), where O(·) is the Bachmann–Landau big-O notation.
Recall that M is the width of the neural network. We consider neural network weights of order
O(1/√M). For example, if the input of a neuron follows the standard Gaussian distribution, then
weights of order O(1/√M) guarantee that the output is O(1). In practice, this constraint can be
guaranteed by clipping the weight vector to a prescribed range.
We rewrite the code length in eq. (1) based on the Taylor expansion of log p(X | θ) at θ = θ̂
up to the second order:
$$-\log p(X) = -\log \int_{M} p(\theta)\,\exp\Big(\log p(X\mid\hat{\theta}) - \frac{N}{2}(\theta-\hat{\theta})^{\top} J(\hat{\theta})(\theta-\hat{\theta}) + O\big(N\|\theta-\hat{\theta}\|^{3}\big)\Big)\,d\theta. \tag{9}$$
Notice that the first order term vanishes because θ̂ is a local optimum of log p(X | θ), and in the
second order term, −N J(θ̂) is the Hessian matrix of the likelihood function log p(X | θ) evaluated
at θ̂. At the MLE, J(θ̂) ⪰ 0, while in general the Hessian of the loss of a DNN evaluated at θ ̸= θ̂
can have a negative spectrum [2, 60].
Through a change of variable φ := √N (θ − θ̂), the density of φ is p(φ) = (1/√N) p(φ/√N + θ̂) so
that ∫_M p(φ) dφ = 1. In the integration in eq. (9), the term −(N/2)(θ − θ̂)⊺ J(θ̂)(θ − θ̂) has an order
of O(‖φ‖²). The cubic remainder term has an order of O(‖φ‖³/√N). If N is sufficiently large,
this remainder can be ignored. Therefore we can write
$$-\log p(X) \approx -\log p(X\mid\hat{\theta}) - \log \mathrm{E}_{p}\,\exp\Big(-\frac{N}{2}(\theta-\hat{\theta})^{\top} J(\hat{\theta})(\theta-\hat{\theta})\Big). \tag{10}$$
On the RHS, the first term measures the error of the model w.r.t. the observed data X. The
second term measures the model complexity. We have the following bound.
Proposition 3.
$$0 \le -\log \mathrm{E}_{p}\,\exp\Big(-\frac{N}{2}(\theta-\hat{\theta})^{\top} J(\hat{\theta})(\theta-\hat{\theta})\Big) \le \frac{N}{2}\,\mathrm{tr}\Big(J(\hat{\theta})\big[(\mu(\theta)-\hat{\theta})(\mu(\theta)-\hat{\theta})^{\top} + \mathrm{cov}(\theta)\big]\Big),$$
where μ(θ) and cov(θ) denote the mean and covariance matrix of the prior p(θ), respectively.
Therefore the complexity is always non-negative and its scale is bounded by the prior p(θ).
The model has low complexity when θ̂ is close to the mean of p(θ) and/or when the variance of
p(θ) is small.
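A quick Monte Carlo sanity check of the two-sided bound in proposition 3 is sketched below; the Gaussian prior, the rank-deficient J(θ̂), and all numerical values are assumptions of this example.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 5, 100
A = rng.standard_normal((D, 3))
J = A @ A.T / N                      # a psd, rank-deficient "observed FIM"
theta_hat = 0.5 * rng.standard_normal(D)
mu, sigma = np.zeros(D), 0.3         # assumed prior mean and standard deviation

theta = mu + sigma * rng.standard_normal((200_000, D))      # samples from the prior
diff = theta - theta_hat
quad = 0.5 * N * np.einsum('nd,de,ne->n', diff, J, diff)
complexity = -np.log(np.mean(np.exp(-quad)))                # -log E_p exp(-N/2 (.)^T J (.))

upper = 0.5 * N * np.trace(J @ (np.outer(mu - theta_hat, mu - theta_hat)
                                + sigma ** 2 * np.eye(D)))
print(0.0 <= complexity <= upper)    # True (up to Monte Carlo noise)
```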
Consider the prior p(θ) = κ(θ)/∫_M κ(θ) dθ, where κ(θ) > 0 is a positive measure on M so
that 0 < ∫_M κ(θ) dθ < ∞. Based on the above approximation of − log p(X), we arrive at a
general formula
$$O := -\log p(X\mid\hat{\theta}) + \log\int_{M}\kappa(\theta)\,d\theta - \log\int_{M}\kappa(\theta)\,\exp\Big(-\frac{N}{2}(\theta-\hat{\theta})^{\top} J(\hat{\theta})(\theta-\hat{\theta})\Big)\,d\theta, \tag{11}$$
where “O” stands for Occam’s razor. Compared with previous formulations of MDL [7, 58, 59],
eq. (11) relies on a quadratic approximation of the log-likelihood function and can be instantiated
based on different assumptions of κ(θ).
Informally, the term ∫_M κ(θ) dθ gives the total capacity of models in M specified by the
improper prior κ(θ), up to constant scaling. For example, if κ(θ) is uniform on a subregion of
M, then ∫_M κ(θ) dθ corresponds to the size of this region w.r.t. the base measure dθ. The term
∫_M κ(θ) exp(−(N/2)(θ − θ̂)⊺ J(θ̂)(θ − θ̂)) dθ gives the model capacity specified by the posterior
p(θ | X) ∝ p(θ) p(X | θ) ∝ κ(θ) exp(−(N/2)(θ − θ̂)⊺ J(θ̂)(θ − θ̂)). It shrinks to zero when the
number N of observations increases. The last two terms in eq. (11) are the log-ratio between the
model capacity w.r.t. the prior and the capacity w.r.t. the posterior. A large log-ratio means
there are many distributions on M which have a relatively large value of κ(θ) but a small
value of κ(θ) exp(−(N/2)(θ − θ̂)⊺ J(θ̂)(θ − θ̂)). The associated model is considered to have a high
complexity, meaning that only a small “percentage” of the models are helpful to describe the
given data.
DNNs have a large amount of symmetry: the parameter space consists of many pieces that look
exactly the same. This can be caused, e.g., by permuting the neurons in the same layer. This
is a non-local property, different from singularity, which is a local differential property. Our O is
not affected by the model size caused by symmetry, because these symmetric models are counted
in both the prior and the posterior, and the log-ratio in eq. (11) cancels out symmetric models.
Formally, suppose M has ζ symmetric pieces denoted by M1, · · · , Mζ. Note that any MLE on Mi is mirrored
on those ζ pieces. Then both integrations on the RHS of eq. (11) are multiplied by a factor of ζ.
Therefore O is invariant to symmetry.
The f-mean, also known as the quasi-arithmetic mean, was studied in [33, 44]; such means are
also called Kolmogorov–Nagumo means [34]. By definition, the image of Mf(T) under f is the
arithmetic mean of the image of T under the same mapping, that is, Mf(t1, · · · , tn) = f⁻¹((1/n) Σ_i f(ti)). Therefore, Mf(T) is in between the
smallest and largest elements of T. If f(x) = x, then Mf becomes the arithmetic mean, which we
denote as T̄. We have the following bound.
Lemma 4. Given a real matrix T = (tij)_{n×m}, we use ti to denote the i'th row of T, and t_{:,j} to
denote the j'th column of T. If f(t) = exp(−t), then
$$M_f(T) \;\le\; \frac{1}{m}\sum_{j=1}^{m} M_f(t_{:,j}) \;\le\; M_f\big(\bar{t}_1,\ldots,\bar{t}_n\big) \;\le\; \bar{T},$$
where Mf(T) is the f-mean of all n × m elements of T, t̄i is the arithmetic mean of the i'th row, and T̄ is their overall arithmetic mean.
In the above inequality, if the arithmetic mean of each row is first evaluated, and then
their f-mean is evaluated, we get an upper bound of the arithmetic mean of the f-mean of the
columns. In simple terms, the f-mean of arithmetic means is lower bounded by the arithmetic
mean of f-means. The proof is straightforward from Jensen's inequality, and by noting that
− log Σ_i exp(−ti) is a concave function of t. The last “≤” leads to a proof of the upper bound in
proposition 3.
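The inequality chain can also be checked numerically; the sketch below uses a random matrix T as an assumed example.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.exponential(size=(6, 4))     # an arbitrary n x m matrix of non-negative entries

def f_mean(t):
    # M_f(t) = -log(mean(exp(-t))) for f(t) = exp(-t)
    return -np.log(np.mean(np.exp(-np.asarray(t))))

a = f_mean(T.ravel())                                        # f-mean of all n*m entries
b = np.mean([f_mean(T[:, j]) for j in range(T.shape[1])])    # mean of column f-means
c = f_mean(T.mean(axis=1))                                   # f-mean of the row means
d = T.mean()                                                 # overall arithmetic mean
print(a <= b <= c <= d)                                      # True
```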
Remark 5. The second complexity term on the RHS of eq. (10) is the f-mean of the quadratic
term (N/2)(θ − θ̂)⊺ J(θ̂)(θ − θ̂) w.r.t. the prior p(θ), where f(t) = exp(−t).
Based on the spectrum decomposition J(θ̂) = Σ_{j=1}^D λj vj vj⊺, where the eigenvalues λj := λj(θ̂)
and the eigenvectors vj := vj(θ̂) depend on the MLE θ̂, we further write this term as
$$\frac{N}{2}(\theta-\hat{\theta})^{\top} J(\hat{\theta})(\theta-\hat{\theta}) = \sum_{j=1}^{D} \frac{\lambda_j}{\mathrm{tr}(J(\hat{\theta}))} \cdot \frac{N}{2}\,\mathrm{tr}(J(\hat{\theta}))\,\langle\theta-\hat{\theta}, v_j\rangle^{2}.$$
By lemma 4, we have
$$-\log \mathrm{E}_{p}\,\exp\Big(-\frac{N}{2}(\theta-\hat{\theta})^{\top} J(\hat{\theta})(\theta-\hat{\theta})\Big) \;\ge\; \sum_{j=1}^{D} \frac{\lambda_j}{\mathrm{tr}(J(\hat{\theta}))}\Big(-\log \mathrm{E}_{p}\,\exp\Big(-\frac{N}{2}\,\mathrm{tr}(J(\hat{\theta}))\,\langle\theta-\hat{\theta}, v_j\rangle^{2}\Big)\Big),$$
where the f-mean and the weighted mean w.r.t. λj/tr(J(θ̂)) are swapped on the RHS.
Denote φj = ⟨θ − θ̂, vj⟩. Then φ = V⊺(θ − θ̂) serves as a new coordinate system of M, where V
is a D × D unitary matrix whose j'th column is vj. The prior of φ is given by p(V φ + θ̂). Then
$$-\log \mathrm{E}_{p}\,\exp\Big(-\frac{N}{2}\,\mathrm{tr}(J(\hat{\theta}))\,\langle\theta-\hat{\theta}, v_j\rangle^{2}\Big) = -\log \mathrm{E}_{p(\varphi_j)}\,\exp\Big(-\frac{N}{2}\,\mathrm{tr}(J(\hat{\theta}))\,\varphi_j^{2}\Big). \tag{12}$$
Therefore, the model complexity has a lower bound, which is determined by the quantity
(N/2) tr(J(θ̂)) φj² after evaluating the f-mean and a weighted mean, where φj is an orthogonal
transformation of the local coordinates θi based on the spectrum of J(θ̂). Recall that the trace of
the observed FIM J(θ̂) measures the overall amount of information a random observation contains
w.r.t. the underlying model. Given the same sample size N, the larger tr(J(θ̂)) is, the more
complex the model is likely to be.
As θ̂ is the MLE, we have J(θ̂) = I(θ̂). Recall from eq. (6) that the FIM I(θ̂) is a numerical
average over all observed samples. We can have another lower bound of the model complexity
based on lemma 4:
$$-\log \mathrm{E}_{p}\,\exp\Big(-\frac{N}{2}(\theta-\hat{\theta})^{\top} J(\hat{\theta})(\theta-\hat{\theta})\Big) \;\ge\; \frac{1}{N}\sum_{i=1}^{N}\Big(-\log \mathrm{E}_{p}\,\exp\Big(-\frac{N}{2}(\theta-\hat{\theta})^{\top} \frac{\partial h^{L}(z_i)}{\partial\theta}^{\top} C_i\, \frac{\partial h^{L}(z_i)}{\partial\theta}\,(\theta-\hat{\theta})\Big)\Big), \tag{13}$$
where the f-mean and the numerical average over the samples are swapped on the RHS. Therefore
the model complexity can be bounded by the average scale of the vector (∂h^L(zi)/∂θ)(θ − θ̂), where
θ ∼ p(θ). Note that ∂h^L(zi)/∂θ is the parameter-output Jacobian matrix, or a linear approximation
of the neural network mapping θ → h^L. The complexity lower bound on the RHS of eq. (13)
measures how a local parameter change (θ − θ̂) w.r.t. the prior p(θ) affects the output. If the
prior p(θ) is chosen so that the output is sensitive to the parameter variations, then the model is
considered to have high complexity. As our model complexity is in the form of an f-mean, one
can derive meaningful bounds and study its intuitive meanings.
We now instantiate eq. (11) with a Gaussian-type measure κ(θ) = exp(−(1/2) θ⊺ diag(1/σ) θ) (see
appendix E), where diag(·) means a diagonal matrix constructed with the given entries, and σ > 0 (elementwise).
Equivalently, pG(θ) = G(θ | 0, diag(σ)), meaning a Gaussian distribution with mean 0 and
covariance matrix diag(σ). We further assume
(A4) M has a global coordinate chart and M is homeomorphic to R^D.
By assumption (A5), the MLE θ̂ has a non-zero probability under the Gaussian prior.
From eq. (11), we get a closed-form expression (see appendix E for the derivations) of the
razor
$$O_G := -\log p(X\mid\hat{\theta}) + \frac{\mathrm{rank}\,J(\hat{\theta})}{2}\log N + \frac{1}{2}\sum_{i=1}^{\mathrm{rank}(J(\hat{\theta}))}\log\Big(\lambda_i^{+}\big(J(\hat{\theta})\,\mathrm{diag}(\sigma)\big) + \frac{1}{N}\Big) + O(1), \tag{14}$$
where λi⁺(J(θ̂) diag(σ)) denotes the i'th positive eigenvalue of J(θ̂) diag(σ). Notice that
J(θ̂) diag(σ) and diag(√σ) J(θ̂) diag(√σ) share the same set of non-zero eigenvalues, and
the latter is psd with rank(J(θ̂)) positive eigenvalues.
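A minimal sketch of evaluating the complexity terms of eq. (14) from the positive eigenvalues of J(θ̂) diag(σ) is given below; the rank-deficient toy J(θ̂), the prior variances, and the sample size are assumptions of this example. It illustrates how these terms can stay far below a naive (D/2) log N penalty.

```python
import numpy as np

rng = np.random.default_rng(0)
D, r, N = 100, 10, 1000                       # number of parameters, rank, sample size
L = rng.standard_normal((D, r))
J = L @ L.T / D                               # observed FIM of rank r << D
sigma = np.full(D, 0.05)                      # assumed prior variances

eigs = np.linalg.eigvals(J @ np.diag(sigma)).real
pos = np.sort(eigs)[::-1][:np.linalg.matrix_rank(J)]        # the r positive eigenvalues

complexity = 0.5 * len(pos) * np.log(N) + 0.5 * np.sum(np.log(pos + 1.0 / N))
bic_like = 0.5 * D * np.log(N)                # a penalty that scales with D instead
print(f"complexity terms of O_G: {complexity:.1f}   vs   (D/2) log N: {bic_like:.1f}")
```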
In our razor expressions, all terms that do not scale with the sample size N or the number
of parameters D are discarded. The first two terms on the RHS are similar to BIC [62] up to
scaling. To see the meaning of the third term on the RHS, we have
$$\sum_{i=1}^{\mathrm{rank}(J(\hat{\theta}))}\log\Big(\sigma_{\min}\,\lambda_i^{+}(J(\hat{\theta})) + \frac{1}{N}\Big) \;\le\; \sum_{i=1}^{\mathrm{rank}(J(\hat{\theta}))}\log\Big(\lambda_i^{+}\big(J(\hat{\theta})\,\mathrm{diag}(\sigma)\big) + \frac{1}{N}\Big) \;\le\; \sum_{i=1}^{\mathrm{rank}(J(\hat{\theta}))}\log\Big(\sigma_{\max}\,\lambda_i^{+}(J(\hat{\theta})) + \frac{1}{N}\Big),$$
where σmax and σmin denote the largest and smallest element of σ, respectively. Therefore the
term can be bounded based on the spectrum of J(θ̂). If D is large, we can also write the razor in
terms of the spectrum density ρI (λ), which is straightforward and omitted here for brevity.
The complexity terms (the second and third terms on the RHS of eq. (14)) do not scale with D
but are bounded by the rank of the Hessian, or the observed FIM. In other words, the radical
distribution associated with the zero eigenvalues of J(θ̂) does not affect the model complexity. This
is different from previous formulations of MDL [7, 58, 59] and BIC [62]. For example, the 2nd
term on the RHS of eq. (2) increases linearly with D. Interestingly, if λi⁺(J(θ̂)) < (1/σmax)(1 − 1/N),
the corresponding summand in the third term is negative and offsets part of the
penalty in the term (rank(J(θ̂))/2) log N. In other words, the corresponding parameter is added for free
(without increasing the model complexity). Informally, we call terms that help decrease the
complexity while contributing to model flexibility the negative complexity.
The Gaussian prior pG is helpful to give simple and intuitive expressions of OG. However, the
problem in choosing pG is twofold. First, it is not invariant: under a reparametrization (e.g.
normalization or centering techniques), the Gaussian prior in the new parameter system does
not correspond to the original prior. Second, it double counts equivalent models. Because of
the many singularities of the neuromanifold, a small dynamic in the parameter system may not
change the prediction model. However, the Gaussian prior is defined in a real vector space and
may not fit this singular semi-Riemannian structure. Gaussian distributions can be defined on
Riemannian manifolds [61], which leads to potential extensions of the discussed prior pG(θ).
Under Jeffreys' prior p(θ) ∝ √|I(θ)|, no neural network model θ1 is prioritized over any other model θ2. It is invariant to the choice of the
coordinate system. Under a reparameterization θ → η,
$$\sqrt{|I(\eta)|}\,d\eta = \sqrt{\Big|\frac{\partial\theta}{\partial\eta}^{\top} I(\theta)\,\frac{\partial\theta}{\partial\eta}\Big|}\;d\eta = \sqrt{|I(\theta)|}\cdot\Big|\frac{\partial\theta}{\partial\eta}\Big|\,d\eta = \sqrt{|I(\theta)|}\,d\theta,$$
showing that the Riemannian volume element is the same in different coordinate systems.
Unfortunately, Jeffreys' prior is not well defined on the lightlike neuromanifold M, where
the metric I(θ) is degenerate and √|I(θ)| becomes zero. The stratifold structure of M, where
d(θ) varies with θ ∈ M, makes it difficult to properly define the base measure dθ and to integrate
functions as in eq. (11). From a mathematical standpoint, one has to integrate on the screen
distribution S(T M), which has a Riemannian structure. We refer the reader to [29, 65] for other
extensions of Jeffreys' prior.
In this paper, we take a simple approach by examining a submanifold of M, denoted as M̃
and parameterized by ξ, which has a Riemannian metric I(ξ) ≻ 0 that is induced by the FIM
I(θ) ⪰ 0. The dimensionality of M̃ is upper-bounded by the local dimensionality d(θ). Any
infinitesimal dynamic on M̃ means a change of neural network parameters that leads to
a non-zero change of the global predictive model z → y. Therefore, the following results are
constrained to the choice of the submanifold M̃.
In eq. (11), let κ(ξ) = √|I(ξ)|. We further assume
(A6) 0 < ∫_{M̃} √|I(ξ)| dξ < ∞.
Then eq. (11) becomes
$$O_J(\xi) := -\log p(X\mid\hat{\xi}) + \log\int_{\widetilde{M}}\sqrt{|I(\xi)|}\,d\xi - \log\int_{\widetilde{M}}\exp\Big(-\frac{N}{2}(\xi-\hat{\xi})^{\top} J(\hat{\xi})(\xi-\hat{\xi})\Big)\sqrt{|I(\xi)|}\,d\xi. \tag{15}$$
Let us examine the meaning of OJ(ξ). As I(ξ) is the Riemannian metric of M̃ based
on information geometry, |I(ξ)|^{1/2} dξ is a Riemannian volume element (volume form). In the
second term on the RHS of eq. (15), the integral ∫_{M̃} |I(ξ)|^{1/2} dξ is the information volume, or
the total “number” of different DNN models [43] on M̃. In the last (third) term, because
ω(ξ) := exp(−(N/2)(ξ − ξ̂)⊺ J(ξ̂)(ξ − ξ̂)) ≤ 1, the integral on the LHS of
$$\int_{\widetilde{M}}\exp\Big(-\frac{N}{2}(\xi-\hat{\xi})^{\top} J(\hat{\xi})(\xi-\hat{\xi})\Big)\sqrt{|I(\xi)|}\,d\xi \;\le\; \int_{\widetilde{M}}\sqrt{|I(\xi)|}\,d\xi$$
means a “weighted volume” of M̃, where the weights ω(ξ) are determined by the observed FIM
J(ξ̂) and satisfy 0 < ω(ξ) ≤ 1. Combining these two terms, the model complexity is the log-ratio
between the unweighted volume and the weighted volume, and is lower bounded by 0.
Assume the spectrum decomposition J(ξ̂) = Q diag(λi(J(ξ̂))) Q⊺, where Q has orthonormal
columns, and λi(J(ξ̂)) are the eigenvalues of J(ξ̂). Equation (15) becomes
$$O_J(\zeta) = -\log p(X\mid\hat{\zeta}) + \log\int_{\widetilde{M}}\sqrt{|I(\zeta)|}\,d\zeta - \log\int_{\widetilde{M}}\exp\Big(-\frac{N}{2}\sum_{i=1}^{\mathrm{rank}(J(\hat{\xi}))}\lambda_i^{+}\big(J(\hat{\xi})\big)(\zeta_i-\hat{\zeta}_i)^{2}\Big)\sqrt{|I(\zeta)|}\,d\zeta, \tag{16}$$
If J(ξ̂) has full rank, we can further write
$$O_J(\widetilde{M}) = -\log p(X\mid\hat{\xi}) + \frac{\dim(\widetilde{M})}{2}\log\frac{N}{2\pi} + \log\int_{\widetilde{M}}\sqrt{|I(\xi)|}\,d\xi - \log\int_{\widetilde{M}} G\Big(\xi \,\Big|\, \hat{\xi},\, \frac{1}{N}J^{-1}(\hat{\xi})\Big)\,\frac{|I(\xi)|^{1/2}}{|J(\xi)|^{1/2}}\,d\xi. \tag{17}$$
By assumption (A6), the RHS of eq. (15) is well defined, while the RHS of eq. (17) is only
meaningful for a full rank J(ξ̂). If J(ξ̂) is not invertible, one can consider the limit case when the
zero eigenvalues of J(ξ̂) are replaced by a small ϵ > 0 and still enjoy the expression in eq. (17).
One has to note that
$$\int_{\widetilde{M}} G\Big(\xi \,\Big|\, \hat{\xi},\, \frac{1}{N}J^{-1}(\hat{\xi})\Big)\,d\xi \le 1,$$
and, for large N, the Gaussian concentrates around ξ̂ so that
$$-\log\int_{\widetilde{M}} G\Big(\xi \,\Big|\, \hat{\xi},\, \frac{1}{N}J^{-1}(\hat{\xi})\Big)\,\frac{|I(\xi)|^{1/2}}{|J(\xi)|^{1/2}}\,d\xi \;\approx\; \frac{1}{2}\log\frac{|J(\hat{\xi})|}{|I(\hat{\xi})|}. \tag{18}$$
Under this approximation, eq. (17) gives the MDL criterion discussed in [7, 43]. We therefore
consider the spectrum of both matrices I(ξ) and J(ξ), noting that in the large sample limit
N → ∞, they become identical. Because of the finite N , the observed FIM J(ξ̂) is singular in
potentially many directions. As a result, the log-ratio in eq. (18) serves as a negative complexity
term and explains how singularities of J(ξ̂) correspond to the simplicity of DNNs.
Compared with OG, OJ is based on a more accurate geometric modeling; however, it is hard
to compute numerically. Despite their different expressions, their preference for
model dimensions with small Fisher information (as in DNNs) is similar.
Hence, we can conclude that the intrinsic complexity of a DNN is affected by the singularity
and spectral properties of the Fisher information matrix.
8 Related Work
The dynamics of supervised learning of a DNN describes a trajectory on the parameter space of
the DNN, geometrically modeled as a manifold when endowed with the FIM (e.g., ordinary/natural
gradient descent learning the parameters of an MLP). Singular regions of the neuromanifold [70]
correspond to non-identifiable parameters with a rank-deficient FIM, and the learning trajectory
typically exhibits chaotic patterns [4] near the singularities, which translate into slowdown plateau
phenomena when plotting the loss function value against time. By building an elementary
singular DNN, [4] (and references therein) show that GD learning dynamics yields a Milnor-type
attractor with both attractor/repulser subregions, where the learning trajectory is attracted to
the attractor region, then stays a long time there before escaping through the repulser region. The
natural gradient is shown to be free of critical slowdowns. Furthermore, although DNNs have
potentially many singular regions, it is shown that the interaction of elementary units cancels
out the Milnor-type attractors. It was shown [48] that skip connections are helpful to reduce the
effect of singularities. However, a full understanding of the learning dynamics [71] for generic
DNN architectures with multiple output values or recurrent DNNs is yet to be investigated.
The MDL criterion has undergone several fundamental revisions, such as the original crude
MDL [58] and refined MDL [8, 59]. We refer the reader to the book [22] for a comprehensive
introduction to this area and [21] for a recent review. We should also mention that the relationship
between MDL and generalization is not fully understood yet. See [21] for related remarks.
Our derivations based on a Taylor expansion of the log-likelihood are similar to [7]. This
technique is also used for deriving natural gradient optimization for deep learning [4, 40, 50].
Recently MDL has been ported to deep learning [9] focusing on variational methods. MDL-
related methods include weight sharing [18], binarization [27], model compression [12], etc.
In the deep learning community, there is a large body of literature on the theory of deep
learning, for example, based on PAC-Bayes theory [46], statistical learning theory [72], algorithmic
information theory [67], information geometry [39], the geometry of the DNN mapping [55], or through
defining an intrinsic dimensionality [38] that is much smaller than the network size. Our analysis
depends on J(θ̂) and is therefore related to the flatness/sharpness of local minima [13, 25].
Investigations have been performed on the spectrum of the input-output Jacobian matrix [52], the
Hessian matrix w.r.t. the neural network weights [51], and the FIM [23, 30, 31, 49, 53].
9 Conclusion
We considered mathematical tools from singular semi-Riemannian geometry to study the locally
varying intrinsic dimensionality of a deep learning model. These models fall into the category of
non-identifiable parameterizations. We took a meaningful step to quantify geometric singularity
through the notion of local dimensionality d(θ), yielding a singular semi-Riemannian neuromanifold
with a varying metric signature. We showed that d(θ) grows at most linearly with the sample size
N. Recent findings show that the spectrum of the Fisher information matrix shifts towards 0+, with
a large number of small eigenvalues. We showed that these singular dimensions help to reduce
the model complexity. As a result, we contributed a simple and general MDL for deep learning.
It provides theoretical justifications for the description length of DNNs. DNNs benefit from a
high-dimensional parameter space in that the singular dimensions contribute a negative complexity
to describe the data, which can be seen in our derivations based on Gaussian-type and Jeffreys-type priors.
A more careful analysis of the FIM’s spectrum, e.g. through considering higher-order terms, could
give more practical formulations of the proposed criterion. We leave empirical studies as potential
future work.
where OneHot(y) is the binary vector with the same dimensionality as hL (zi ), with the y’th bit
set to 1 and the rest bits set to 0. Therefore,
$$\frac{\partial \log p(y_i\mid z_i,\theta)}{\partial\theta} = \frac{\partial h^{L}}{\partial\theta}^{\top}\big(\mathrm{OneHot}(y_i) - \mathrm{SoftMax}(h^{L}(z_i))\big).$$
Therefore,
$$\frac{\partial^2 \log p(y_i\mid z_i,\theta)}{\partial\theta\,\partial\theta^{\top}} = \sum_j \frac{\partial^2 h^{L}_j}{\partial\theta\,\partial\theta^{\top}}\big(\mathrm{OneHot}(y_i) - \mathrm{SoftMax}(h^{L}(z_i))\big)_j - \frac{\partial h^{L}}{\partial\theta}^{\top}\cdot C_i\cdot\frac{\partial h^{L}}{\partial\theta}, \tag{19}$$
where
$$C_i = \frac{\partial\,\mathrm{SoftMax}(h^{L}(z_i))}{\partial h^{L}(z_i)} = \mathrm{diag}(o_i) - o_i o_i^{\top}, \qquad o_i = \mathrm{SoftMax}(h^{L}(z_i)).$$
By (A1), at the MLE θ̂,
$$\forall i, \quad \mathrm{SoftMax}(h^{L}(z_i)) = \mathrm{OneHot}(y_i).$$
Therefore
$$\forall i, \quad -\frac{\partial^2 \log p(y_i\mid z_i,\theta)}{\partial\theta\,\partial\theta^{\top}} = \frac{\partial h^{L}}{\partial\theta}^{\top}\cdot C_i\cdot\frac{\partial h^{L}}{\partial\theta}.$$
Taking the sample average on both sides, we get J(θ̂) = I(θ̂).
where oj (z) > 0 is the j’th element of o(z). Hence, almost surely
$$\frac{\partial h^{L}(z)}{\partial\theta}\,\alpha = \lambda(z)\mathbf{1}.$$
Remark. α is associated with a tangent vector in Rad(T M), meaning a dynamic along the
lightlike dimensions. The Jacobian ∂h^L(z)/∂θ is the local linear approximation of the mapping
θ → h^L(z). By lemma 1, with probability 1 such a dynamic leads to uniform increments in
the output units, meaning h^L(z) → h^L(z) + λ(z)1, and therefore the output distribution
SoftMax(h^L(z)) is not affected. In summary, we have verified that the radical distribution does
not affect the predictive model.
where ℓ is the log-likelihood, and ℓi = log p(yi | zi, θ). We write the analytical form of the
elementwise Hessian
$$\frac{\partial^2\ell_i}{\partial\theta\,\partial\theta^{\top}} = \sum_{j=1}^{m}\frac{\partial^2 h^{L}_j(z_i)}{\partial\theta\,\partial\theta^{\top}}\big(\mathrm{OneHot}_j(y) - \mathrm{SoftMax}_j(h^{L})\big) - I(\theta),$$
where OneHot(·) denotes the one-hot vector associated with the given target label y. Therefore
$$\alpha^{\top}\frac{\partial^2\ell_i}{\partial\theta\,\partial\theta^{\top}}\alpha = \sum_{j=1}^{m}\Big(\alpha^{\top}\frac{\partial^2 h^{L}_j(z_i)}{\partial\theta\,\partial\theta^{\top}}\alpha\Big)\big(\mathrm{OneHot}_j(y) - \mathrm{SoftMax}_j(h^{L})\big) - \alpha^{\top} I(\theta)\,\alpha.$$
Because of the first term on the RHS, the kernels of the two matrices J(θ) and Î(θ) are different,
and thus their ranks are also different.
Appendix D Proof of Proposition 3
Proof. As θ̂ is the MLE, we have J(θ̂) ⪰ 0, and ∀θ ∈ M,
$$-\frac{N}{2}(\theta-\hat{\theta})^{\top} J(\hat{\theta})(\theta-\hat{\theta}) \le 0.$$
Hence,
$$\mathrm{E}_{p}\,\exp\Big(-\frac{N}{2}(\theta-\hat{\theta})^{\top} J(\hat{\theta})(\theta-\hat{\theta})\Big) \le 1,$$
and therefore
$$-\log \mathrm{E}_{p}\,\exp\Big(-\frac{N}{2}(\theta-\hat{\theta})^{\top} J(\hat{\theta})(\theta-\hat{\theta})\Big) \ge 0.$$
This proves the first “≤”.
As − log(x) is convex, by Jensen's inequality, we get
$$\begin{aligned}
-\log \mathrm{E}_{p}\,\exp\Big(-\frac{N}{2}(\theta-\hat{\theta})^{\top} J(\hat{\theta})(\theta-\hat{\theta})\Big)
&\le \mathrm{E}_{p}\Big(-\log\exp\Big(-\frac{N}{2}(\theta-\hat{\theta})^{\top} J(\hat{\theta})(\theta-\hat{\theta})\Big)\Big) \\
&= \mathrm{E}_{p}\Big(\frac{N}{2}(\theta-\hat{\theta})^{\top} J(\hat{\theta})(\theta-\hat{\theta})\Big) \\
&= \frac{N}{2}\,\mathrm{tr}\Big(\mathrm{E}_{p}\big(J(\hat{\theta})(\theta-\hat{\theta})(\theta-\hat{\theta})^{\top}\big)\Big) \\
&= \frac{N}{2}\,\mathrm{tr}\Big(J(\hat{\theta})\big[(\mu(\theta)-\hat{\theta})(\mu(\theta)-\hat{\theta})^{\top} + \mathrm{cov}(\theta)\big]\Big).
\end{aligned}$$
This proves the second “≤”.
Appendix E Derivations of OG
We recall the general formulation in eq. (11):
$$O := -\log p(X\mid\hat{\theta}) + \log\int_{M}\kappa(\theta)\,d\theta - \log\int_{M}\kappa(\theta)\,\exp\Big(-\frac{N}{2}(\theta-\hat{\theta})^{\top} J(\hat{\theta})(\theta-\hat{\theta})\Big)\,d\theta.$$
The second term on the RHS is
$$\begin{aligned}
\log\int_{M}\kappa(\theta)\,d\theta &= \log\int_{M}\exp\Big(-\frac{1}{2}\theta^{\top}\mathrm{diag}\Big(\frac{1}{\sigma}\Big)\theta\Big)\,d\theta \\
&= \frac{D}{2}\log 2\pi + \frac{1}{2}\log|\mathrm{diag}(\sigma)| + \log\int_{M}\exp\Big(-\frac{D}{2}\log 2\pi - \frac{1}{2}\log|\mathrm{diag}(\sigma)| - \frac{1}{2}\theta^{\top}\mathrm{diag}\Big(\frac{1}{\sigma}\Big)\theta\Big)\,d\theta \\
&= \frac{D}{2}\log 2\pi + \frac{1}{2}\log|\mathrm{diag}(\sigma)|.
\end{aligned}$$
The third (last) term on the RHS is
$$\begin{aligned}
-\log\int_{M}\kappa(\theta)\,\exp\Big(-\frac{N}{2}(\theta-\hat{\theta})^{\top} J(\hat{\theta})(\theta-\hat{\theta})\Big)\,d\theta
&= -\log\int_{M}\exp\Big(-\frac{1}{2}\theta^{\top}\mathrm{diag}\Big(\frac{1}{\sigma}\Big)\theta - \frac{N}{2}(\theta-\hat{\theta})^{\top} J(\hat{\theta})(\theta-\hat{\theta})\Big)\,d\theta \\
&= -\log\int_{M}\exp\Big(-\frac{1}{2}\theta^{\top} A\,\theta + b^{\top}\theta + c\Big)\,d\theta,
\end{aligned}$$
where
$$A = N J(\hat{\theta}) + \mathrm{diag}\Big(\frac{1}{\sigma}\Big) \succ 0, \qquad b = N J(\hat{\theta})\hat{\theta}, \qquad c = -\frac{N}{2}\hat{\theta}^{\top} J(\hat{\theta})\hat{\theta}.$$
Then,
$$\begin{aligned}
-\log\int_{M}\kappa(\theta)\,\exp\Big(-\frac{N}{2}(\theta-\hat{\theta})^{\top} J(\hat{\theta})(\theta-\hat{\theta})\Big)\,d\theta
&= -\log\int_{M}\exp\Big(-\frac{1}{2}(\theta-\bar{\theta})^{\top} A(\theta-\bar{\theta}) + c + \frac{1}{2}\bar{\theta}^{\top} A\bar{\theta}\Big)\,d\theta \\
&= -\frac{D}{2}\log 2\pi + \frac{1}{2}\log|A| - c - \frac{1}{2}\bar{\theta}^{\top} A\bar{\theta} \\
&\quad - \log\int_{M}\exp\Big(-\frac{D}{2}\log 2\pi + \frac{1}{2}\log|A| - \frac{1}{2}(\theta-\bar{\theta})^{\top} A(\theta-\bar{\theta})\Big)\,d\theta \\
&= -\frac{D}{2}\log 2\pi + \frac{1}{2}\log|A| - c - \frac{1}{2}\bar{\theta}^{\top} A\bar{\theta},
\end{aligned}$$
where A θ̄ = b. To sum up,
$$\begin{aligned}
O_G &= -\log p(X\mid\hat{\theta}) + \frac{D}{2}\log 2\pi + \frac{1}{2}\log|\mathrm{diag}(\sigma)| - \frac{D}{2}\log 2\pi + \frac{1}{2}\log|A| - c - \frac{1}{2}\bar{\theta}^{\top} A\bar{\theta} \\
&= -\log p(X\mid\hat{\theta}) + \frac{1}{2}\log|\mathrm{diag}(\sigma)| + \frac{1}{2}\log|A| - c - \frac{1}{2}\bar{\theta}^{\top} A\bar{\theta} \\
&= -\log p(X\mid\hat{\theta}) + \frac{1}{2}\log|\mathrm{diag}(\sigma)| + \frac{1}{2}\log\Big|N J(\hat{\theta}) + \mathrm{diag}\Big(\frac{1}{\sigma}\Big)\Big| \\
&\quad + \frac{N}{2}\hat{\theta}^{\top} J(\hat{\theta})\hat{\theta} - \frac{1}{2}\big(N J(\hat{\theta})\hat{\theta}\big)^{\top}\Big(N J(\hat{\theta}) + \mathrm{diag}\Big(\frac{1}{\sigma}\Big)\Big)^{-1}\big(N J(\hat{\theta})\hat{\theta}\big) \\
&= -\log p(X\mid\hat{\theta}) + \frac{1}{2}\log\big|N J(\hat{\theta})\,\mathrm{diag}(\sigma) + I\big| + \frac{1}{2}\hat{\theta}^{\top} J(\hat{\theta})\Big(J(\hat{\theta}) + \frac{1}{N}\mathrm{diag}\Big(\frac{1}{\sigma}\Big)\Big)^{-1}\mathrm{diag}\Big(\frac{1}{\sigma}\Big)\hat{\theta} \\
&= -\log p(X\mid\hat{\theta}) + \frac{1}{2}\log\big|N J(\hat{\theta})\,\mathrm{diag}(\sigma) + I\big| + \frac{1}{2}\hat{\theta}^{\top} J(\hat{\theta})\Big(\mathrm{diag}(\sigma)\,J(\hat{\theta}) + \frac{1}{N}I\Big)^{-1}\hat{\theta}.
\end{aligned}$$
The last term does not scale with N and has a smaller order as compared to the other terms. Indeed,
as N → ∞, (J(θ̂) + (1/N) diag(1/σ))⁻¹ → J(θ̂)⁺, the Moore–Penrose inverse of J(θ̂). Hence,
$$\frac{1}{2}\hat{\theta}^{\top} J(\hat{\theta})\Big(\mathrm{diag}(\sigma)\,J(\hat{\theta}) + \frac{1}{N}I\Big)^{-1}\hat{\theta} \;\to\; \frac{1}{2}\hat{\theta}^{\top} J(\hat{\theta})\,J(\hat{\theta})^{+}\,\mathrm{diag}\Big(\frac{1}{\sigma}\Big)\hat{\theta} \;\le\; \frac{1}{2}\hat{\theta}^{\top}\mathrm{diag}\Big(\frac{1}{\sigma}\Big)\hat{\theta}.$$
By assumption (A5), the RHS is O(1). This term is therefore dropped. We get
$$O_G = -\log p(X\mid\hat{\theta}) + \frac{1}{2}\log\big|N J(\hat{\theta})\,\mathrm{diag}(\sigma) + I\big| + O(1).$$
Note that rank(J(θ̂)) ≤ D, and the matrix J(θ̂) diag(σ) has the same rank as J(θ̂). We can
write J(θ̂) = L(θ̂) L(θ̂)⊺, where L(θ̂) has shape D × rank(J(θ̂)). We abuse I to denote both
the identity matrix of shape D × D and the identity matrix of shape rank(J(θ̂)) × rank(J(θ̂)).
By the Weinstein–Aronszajn identity,
$$\begin{aligned}
O_G &= -\log p(X\mid\hat{\theta}) + \frac{1}{2}\log\big|N L(\hat{\theta})L(\hat{\theta})^{\top}\mathrm{diag}(\sigma) + I\big| + O(1) \\
&= -\log p(X\mid\hat{\theta}) + \frac{1}{2}\log\big|N L(\hat{\theta})^{\top}\mathrm{diag}(\sigma)L(\hat{\theta}) + I\big| + O(1) \\
&= -\log p(X\mid\hat{\theta}) + \frac{\mathrm{rank}\,J(\hat{\theta})}{2}\log N + \frac{1}{2}\log\Big|L(\hat{\theta})^{\top}\mathrm{diag}(\sigma)L(\hat{\theta}) + \frac{1}{N}I\Big| + O(1).
\end{aligned}$$
Note that L(θ̂)⊺ diag(σ) L(θ̂) has the same set of non-zero eigenvalues as L(θ̂)L(θ̂)⊺ diag(σ) =
J(θ̂) diag(σ), which we denote as λi⁺(J(θ̂) diag(σ)). Then,
$$O_G = -\log p(X\mid\hat{\theta}) + \frac{\mathrm{rank}\,J(\hat{\theta})}{2}\log N + \frac{1}{2}\sum_{i=1}^{\mathrm{rank}(J(\hat{\theta}))}\log\Big(\lambda_i^{+}\big(J(\hat{\theta})\,\mathrm{diag}(\sigma)\big) + \frac{1}{N}\Big) + O(1).$$
Hence,
$$\frac{1}{2}\log\Big|L(\hat{\theta})^{\top}\mathrm{diag}(\sigma)L(\hat{\theta}) + \frac{1}{N}I\Big| \le \frac{1}{2}\log\Big|\sigma_{\max}\,L(\hat{\theta})^{\top}L(\hat{\theta}) + \frac{1}{N}I\Big| = \frac{1}{2}\sum_{i=1}^{\mathrm{rank}(J(\hat{\theta}))}\log\Big(\sigma_{\max}\,\lambda_i^{+}(J(\hat{\theta})) + \frac{1}{N}\Big).$$
Similarly,
$$\frac{1}{2}\log\Big|L(\hat{\theta})^{\top}\mathrm{diag}(\sigma)L(\hat{\theta}) + \frac{1}{N}I\Big| \ge \frac{1}{2}\sum_{i=1}^{\mathrm{rank}(J(\hat{\theta}))}\log\Big(\sigma_{\min}\,\lambda_i^{+}(J(\hat{\theta})) + \frac{1}{N}\Big).$$
If σ = σ1, then σmax = σmin = σ. Both “≤” and “≥” in the above inequalities become tight.
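The determinant identity used above can be verified numerically; the factor L and the vector σ below are arbitrary assumed inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
D, r, N = 30, 5, 200
L = rng.standard_normal((D, r))               # D x rank factor of J = L L^T
sigma = rng.uniform(0.1, 1.0, size=D)

# Weinstein-Aronszajn / Sylvester: |N L L^T diag(sigma) + I_D| = |N L^T diag(sigma) L + I_r|
lhs = np.linalg.det(N * L @ L.T @ np.diag(sigma) + np.eye(D))
rhs = np.linalg.det(N * L.T @ np.diag(sigma) @ L + np.eye(r))
print(np.isclose(lhs, rhs))                   # True
```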
where dE θ is the Euclidean volume element. We artificially shift I(θ) to be positive definite and
define the volume element as
$$d\theta := \sqrt{|I(\theta) + \varepsilon_1 I|}\; d\theta_1 \wedge d\theta_2 \wedge \cdots \wedge d\theta_D = \sqrt{|I(\theta) + \varepsilon_1 I|}\; d_E\theta, \tag{21}$$
where ε1 > 0 is a very small value as compared to the scale of I(θ) given by (1/D) tr(I(θ)), i.e. the
average of its eigenvalues. Notice this element will vary with θ: different coordinate systems will
yield different volumes. Therefore it depends on how θ can be uniquely specified. This is roughly
guaranteed by our A1: the θ-coordinates correspond to the input coordinates (weights and biases)
up to an orthogonal transformation. Although eq. (21) is a loose mathematical definition, it
makes intuitive sense and is convenient for making derivations. Then, we can integrate functions
$$\int_{M} f(\theta)\,d\theta = \int_{M} f(\theta)\sqrt{|I(\theta) + \varepsilon_1 I|}\; d_E\theta, \tag{22}$$
“razor” of the model Ms. However, we will instead use a Gaussian-like prior, because Jeffreys'
prior is not well defined on M. Moreover, the integral ∫_{Ms} √|I(θ^s)| dE θ^s is likely to diverge
based on our revised volume element in eq. (21). If the parameter space is real-valued, one can
easily check that the volume based on eq. (21) along the lightlike dimensions will diverge. The
zero-centered Gaussian prior corresponds to a better code, because it is commonly acknowledged
that one can achieve the same training error and generalization without using large weights. For
example, regularizing the norm of the weights is widely used in deep learning. By using such an
informative prior, one can have the same training error in the first term in eq. (2), while having a
smaller “complexity” in the rest of the terms, because we only encode models with constrained
weights. Given the DNN, we define an informative prior on the lightlike neuromanifold
$$p(\theta) = \frac{1}{V}\exp\Big(-\frac{1}{2\varepsilon_2^2}\|\theta\|^2\Big)\sqrt{|I(\theta) + \varepsilon_1 I|}. \tag{24}$$
Here, the base measure is the Euclidean volume element dE θ, as √|I(θ) + ε1 I| already appears
in p(θ). Keep in mind, again, that this p(θ) is defined in a special coordinate system, and is not
invariant to re-parametrization. By A1, this distribution is also isotropic in the input coordinate
system, which agrees with initialization techniques8.
This bi-parametric prior connects Jeffreys’ prior (that is widely used in MDL) and a Gaussian
prior (that is widely used in deep learning). If ε2 → ∞, ε1 → 0, it coincides with Jeffreys’ prior (if
it is well defined and I(θ) has full rank); if ε1 is large, the metric (I(θ) + ε1 I) becomes spherical,
and eq. (24) becomes a Gaussian prior. We refer the reader to [29, 65] for other extensions of
Jeffreys’ prior.
The normalizing constant of eq. (24) is an information volume measure of M, given by
$$V := \int_{M}\exp\Big(-\frac{1}{2\varepsilon_2^2}\|\theta\|^2\Big)\,d\theta. \tag{25}$$
Unlike Jeffreys' prior, whose information volume (the 3rd term on the RHS of eq. (2)) can be
unbounded, this volume is better bounded by the following result.
Theorem 5.
$$\big(\sqrt{2\pi\varepsilon_1}\,\varepsilon_2\big)^{D} \;\le\; V \;\le\; \big(\sqrt{2\pi(\varepsilon_1+\lambda_m)}\,\varepsilon_2\big)^{D}, \tag{26}$$
where λm is the largest eigenvalue of the FIM I(θ).
Notice λm may not exist, as the integration is taken over θ ∈ M. Intuitively, V is a weighted
volume w.r.t. a Gaussian-like prior distribution on M, while the 3rd term on the RHS of eq. (2) is
an unweighted volume. The larger the radius ε2, the more “number” or possibilities of DNNs are
included; the larger the parameter ε1, the larger the local volume element in eq. (21) is measured,
and therefore the larger the total volume is measured. log V is an O(D) term, meaning the volume
grows with the number of dimensions.
By (A1), θ is an orthogonal transformation of the neural network weights and biases, and
therefore θ ∈ R^D. We have
$$\sqrt{|I(\theta) + \varepsilon_1 I|} \ge \sqrt{|\varepsilon_1 I|} = \varepsilon_1^{D/2}.$$
8 Different layers, or weights and biases, may use different variance in their initialization. This minor issue can
Hence
$$V \ge \int \exp\Big(-\frac{1}{2\varepsilon_2^2}\|\theta\|^2\Big)\varepsilon_1^{D/2}\,d_E\theta = (2\pi)^{D/2}\varepsilon_2^{D}\varepsilon_1^{D/2}\int \exp\Big(-\frac{D}{2}\log 2\pi - \frac{1}{2}\log|\varepsilon_2^2 I| - \frac{1}{2\varepsilon_2^2}\|\theta\|^2\Big)\,d_E\theta = (2\pi)^{D/2}\varepsilon_2^{D}\varepsilon_1^{D/2} = \big(\sqrt{2\pi\varepsilon_1}\,\varepsilon_2\big)^{D}.$$
On the other hand, by the inequality of arithmetic and geometric means,
$$\sqrt{|I(\theta) + \varepsilon_1 I|} = \Big(\prod_{i=1}^{D}(\lambda_i + \varepsilon_1)\Big)^{1/2} \le \Big(\frac{1}{D}\mathrm{tr}(I(\theta)) + \varepsilon_1\Big)^{D/2}.$$
Therefore
$$V \le \big(\sqrt{2\pi}\,\varepsilon_2\big)^{D}\Big(\frac{1}{D}\mathrm{tr}(I(\theta)) + \varepsilon_1\Big)^{D/2}.$$
If one applies (1/D) tr(I(θ)) ≤ λm to the RHS, the upper bound is further relaxed as
$$V \le \big(\sqrt{2\pi}\,\varepsilon_2\big)^{D}(\lambda_m + \varepsilon_1)^{D/2} = \big(\sqrt{2\pi(\varepsilon_1+\lambda_m)}\,\varepsilon_2\big)^{D}.$$
In the last term on the RHS, the expression inside the parentheses is a quadratic function w.r.t. θ. However,
the integration is w.r.t. the non-Euclidean volume element dθ and therefore does not have a closed
form. We assume √|I(θ) + ε1 I| ≈ √|I(θ̂) + ε1 I|, so that 1/V, √|I(θ̂) + ε1 I|, and
exp(log p(X | θ̂)) = p(X | θ̂) can all be taken out of the integration as constant scalers, as they do
not depend on θ. The main difficulty is to perform the integration
$$\begin{aligned}
\int \exp\Big(-\frac{\|\theta\|^2}{2\varepsilon_2^2} - \frac{N}{2}(\theta-\hat{\theta})^{\top} J(\hat{\theta})(\theta-\hat{\theta})\Big)\,d_E\theta
&= \int \exp\Big(-\frac{1}{2}\theta^{\top} A\theta + b^{\top}\theta + c\Big)\,d_E\theta \\
&= \int \exp\Big(-\frac{1}{2}(\theta - A^{-1}b)^{\top} A(\theta - A^{-1}b) + \frac{1}{2}b^{\top} A^{-1}b + c\Big)\,d_E\theta \\
&= \exp\Big(\frac{1}{2}b^{\top} A^{-1}b + c\Big)\int \exp\Big(-\frac{1}{2}(\theta - A^{-1}b)^{\top} A(\theta - A^{-1}b)\Big)\,d_E\theta \\
&= \exp\Big(\frac{1}{2}b^{\top} A^{-1}b + c\Big)\exp\Big(\frac{D}{2}\log 2\pi - \frac{1}{2}\log|A|\Big) \\
&= \exp\Big(\frac{1}{2}b^{\top} A^{-1}b + c + \frac{D}{2}\log 2\pi - \frac{1}{2}\log|A|\Big),
\end{aligned}$$
where
$$A = N J(\hat{\theta}) + \frac{1}{\varepsilon_2^2}I, \qquad b = N J(\hat{\theta})\hat{\theta}, \qquad c = -\frac{1}{2}\hat{\theta}^{\top} N J(\hat{\theta})\hat{\theta}.$$
The rest of the derivations are straightforward. Note R = −c − (1/2) b⊺ A⁻¹ b.
After derivations and simplifications, we get
$$-\log p(X) \approx -\log p(X\mid\hat{\theta}) + \frac{D}{2}\log\frac{N}{2\pi} + \log V + \frac{1}{2}\log\Big|J(\hat{\theta}) + \frac{1}{N\varepsilon_2^2}I\Big| - \frac{1}{2}\log\big|I(\hat{\theta}) + \varepsilon_1 I\big| + R. \tag{27}$$
We need to analyze the order of this R term. Assume the largest eigenvalue of J(θ̂) is λm; then
$$|R| \le \frac{N\lambda_m}{\varepsilon_2^2(N\lambda_m + 1)}\,\|\hat{\theta}\|^2. \tag{29}$$
We assume
(A7) The ratio of the scale of each dimension of the MLE θ̂ to ε2, i.e. θ̂i/ε2 (i = 1, · · · , D), is of order O(1).
Intuitively, the scale parameter ε2 in our prior p(θ) in eq. (24) is chosen to “cover” the good
models. Therefore, the order of R is O(D). As N becomes large, R will be dominated by the 2nd,
O(D log N), term. We will therefore discard R for simplicity; it could be useful for a more delicate
analysis. In conclusion, we arrive at the following expression
$$O := -\log p(X\mid\hat{\theta}) + \frac{D}{2}\log\frac{N}{2\pi} + \log V + \frac{1}{2}\log\frac{\big|J(\hat{\theta}) + \frac{1}{N\varepsilon_2^2}I\big|}{\big|I(\hat{\theta}) + \varepsilon_1 I\big|}. \tag{30}$$
Figure 2: A: a model far from the truth (the underlying distribution of the observed data); B: close to
the truth but sensitive to the parameters; C (deep learning): close to the truth with many good local
optima.
Notice the similarity with eq. (2), where the first two terms on the RHS are exactly the same.
The 3rd term is an O(D) term, similar to the 3rd term in eq. (2). It is bounded according to
theorem 5, while the 3rd term in eq. (2) could be unbounded. Our last term is in a similar form to
the last term in eq. (2), except that it is well defined on the lightlike manifold. If we let ε2 → ∞, ε1 → 0,
we get exactly eq. (2), and in this case O = χ. As the number of parameters D becomes large, both
the 2nd and 3rd terms grow linearly w.r.t. D, meaning that they contribute positively to
the model complexity. Interestingly, the fourth term is a “negative complexity”. Regard 1/(N ε2²) and
ε1 as small positive values. The fourth term essentially is a log-ratio from the observed FIM to
the true FIM. For small models, they coincide, because the sample size N is large relative to the
model size. In this case, the effect of this term is minor. For DNNs, the sample size N is very
limited relative to the huge model size D. Along a dimension θi, J(θ) is likely to be singular, as
stated in proposition 2, even if I has a very small positive value. In this case, their log-ratio will
be negative. Therefore, the razor O favors DNNs with their Fisher spectrum clustered around 0.
In fig. 2, model C displays the concept of a DNN, where there are many good local optima. The
performance is not sensitive to the specific values of the model parameters. On the lightlike neuromanifold
M, there are many directions that are very close to being lightlike. When a DNN model varies
along these directions, the model slightly changes in terms of I(θ), but its predictions on the
samples, measured by J(θ), are invariant. These directions count negatively towards the complexity,
because these extra freedoms (dimensions of θ) occupy almost zero volume in the geometric sense,
and are helpful to give a shorter code to future unseen samples.
To obtain a simpler expression, we consider the case that I(θ) ≡ I(θ̂) is both constant and
diagonal in the region of interest defined by eq. (24). In this case,
$$\log V \approx \frac{D}{2}\log 2\pi + D\log\varepsilon_2 + \frac{1}{2}\log|I(\hat{\theta}) + \varepsilon_1 I|. \tag{31}$$
On the other hand, as D → ∞, the spectrum of the FIM I(θ) will follow the density ρI(λ). We
plug these expressions into eq. (30), discard all lower-order terms, and get a simplified version of
the razor
$$O \approx -\log p(X\mid\hat{\theta}) + \frac{D}{2}\log N + \frac{D}{2}\int_{0}^{\infty}\rho_I(\lambda)\log\Big(\lambda + \frac{1}{N\varepsilon_2^2}\Big)\,d\lambda, \tag{32}$$
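The sketch below evaluates the per-dimension complexity integral of eq. (32) under an assumed spectral density ρI that places most of its mass near 0+ (the mixture form and all numerical values are illustrative assumptions).

```python
import numpy as np

N, eps2 = 10_000, 1.0
lam = np.linspace(1e-6, 5.0, 200_000)
dlam = lam[1] - lam[0]

# An assumed spectral density: most eigenvalues tiny, a few of order one.
rho = 0.95 * np.exp(-lam / 0.01) / 0.01 + 0.05 * np.exp(-lam / 1.0)
rho /= np.sum(rho) * dlam                    # normalize to a pdf on the grid

per_dim = 0.5 * np.sum(rho * np.log(lam + 1.0 / (N * eps2 ** 2))) * dlam
print(per_dim)   # negative: a spectrum clustered near 0+ yields a negative complexity term
```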
References
[1] Hirotugu Akaike. A new look at the statistical model identification. IEEE Trans. Automat.
Contr., 19(6):716–723, 1974.
[2] Guillaume Alain, Nicolas Le Roux, and Pierre-Antoine Manzagol. Negative eigenvalues of
the Hessian in deep neural networks. In ICLR’18 workshop, 2018. arXiv:1902.02366 [cs.LG].
[3] Shun-ichi Amari. Information Geometry and Its Applications, volume 194 of Applied Mathe-
matical Sciences. Springer, Japan, 2016.
[4] Shun-ichi Amari, Tomoko Ozeki, Ryo Karakida, Yuki Yoshida, and Masato Okada. Dynamics
of learning in MLP: Natural gradient and singularity revisited. Neural Computation, 30(1):1–
33, 2018.
[5] Toshiki Aoki and Katsuhiko Kuribayashi. On the category of stratifolds. Cahiers de
Topologie et Géométrie Différentielle Catégoriques, LVIII(2):131–160, 2017. arXiv:1605.04142
[math.CT].
[6] Oguzhan Bahadir and Mukut Mani Tripathi. Geometry of lightlike hypersurfaces of a
statistical manifold, 2019. arXiv:1901.09251 [math.DG].
[7] Vijay Balasubramanian. MDL, Bayesian inference and the geometry of the space of probability
distributions. In Advances in Minimum Description Length: Theory and Applications, pages
81–98. MIT Press, Cambridge, Massachusetts, 2005.
[8] A. Barron, J. Rissanen, and Bin Yu. The minimum description length principle in coding
and modeling. IEEE Transactions on Information Theory, 44(6):2743–2760, 1998.
[9] Léonard Blier and Yann Ollivier. The description length of deep learning models. In Advances
in Neural Information Processing Systems 31, pages 2216–2226. Curran Associates, Inc., NY
12571, USA, 2018.
[10] Ovidiu Calin. Deep learning architectures. Springer, London, 2020.
[11] Ovidiu Calin and Constantin Udrişte. Geometric modeling in probability and statistics.
Springer, Cham, 2014.
[12] Yu Cheng, Duo Wang, Pan Zhou, and Tao Zhang. Model compression and acceleration for
deep neural networks: The principles, progress, and challenges. IEEE Signal Processing
Magazine, 35(1):126–136, 2018.
[13] Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can
generalize for deep nets. In International Conference on Machine Learning, volume 70 of
Proceedings of Machine Learning Research, pages 1019–1028, 2017.
[14] Krishan Duggal. A review on unique existence theorems in lightlike geometry. Geometry,
2014, 2014. Article ID 835394.
[15] Krishan Duggal and Aurel Bejancu. Lightlike Submanifolds of Semi-Riemannian Manifolds
and Applications, volume 364 of Mathematics and Its Applications. Springer, Netherlands,
1996.
[16] Pascal Mattia Esser and Frank Nielsen. Towards modeling and resolving singular parameter
spaces using stratifolds. arXiv preprint arXiv:2112.03734, 2021.
[17] Xinlong Feng and Zhinan Zhang. The rank of a random matrix. Applied Mathematics and
Computation, 185(1):689–694, 2007.
[18] Adam Gaier and David Ha. Weight agnostic neural networks. In Advances in Neural
Information Processing Systems 32, pages 5365–5379. Curran Associates, Inc., NY 12571,
USA, 2019.
[19] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In
International Conference on Artificial Intelligence and Statistics, volume 15 of Proceedings
of Machine Learning Research, pages 315–323, 2011.
[20] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, Cambridge,
Massachusetts, 2016.
[21] Peter Grünwald and Teemu Roos. Minimum description length revisited. International
Journal of Mathematics for Industry, 11(01), 2020.
[22] Peter D. Grünwald. The Minimum Description Length Principle. Adaptive Computation
and Machine Learning series. The MIT Press, Cambridge, Massachusetts, 2007.
[23] Tomohiro Hayase and Ryo Karakida. The spectrum of Fisher information of deep networks
achieving dynamical isometry. In International Conference on Artificial Intelligence and
Statistics, pages 334–342, 2021.
[24] Masahito Hayashi. Large deviation theory for non-regular location shift family. Annals of
the Institute of Statistical Mathematics, 63(4):689–716, 2011.
[25] Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1–42,
1997.
[26] Harold Hotelling. Spaces of statistical parameters. Bull. Amer. Math. Soc, 36:191, 1930.
[27] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio.
Binarized neural networks. In Advances in Neural Information Processing Systems 29, pages
4107–4115. Curran Associates, Inc., NY 12571, USA, 2016.
[28] Varun Jain, Amrinder Pal Singh, and Rakesh Kumar. On the geometry of lightlike submani-
folds of indefinite statistical manifolds, 2019. arXiv:1903.07387 [math.DG].
[29] Ruichao Jiang, Javad Tavakoli, and Yiqiang Zhao. Weyl prior and Bayesian statistics.
Entropy, 22(4), 2020.
[30] Ryo Karakida, Shotaro Akaho, and Shun-ichi Amari. Universal statistics of Fisher information
in deep neural networks: Mean field approach. In International Conference on Artificial
Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, pages
1032–1041, 2019.
[31] Ryo Karakida, Shotaro Akaho, and Shun-ichi Amari. Pathological Spectra of the Fisher In-
formation Metric and Its Variants in Deep Neural Networks. Neural Computation, 33(8):2274–
2307, 2021.
[32] David C. Kay. Schaum's Outline of Theory and Problems of Tensor Calculus. McGraw-Hill,
New York, 1988.
[33] Andreĭ Nikolaevich Kolmogorov. Sur la notion de la moyenne. G. Bardi, tip. della R. Accad.
dei Lincei, Rome, Italy, 1930.
[34] Osamu Komori and Shinto Eguchi. A unified formulation of k-Means, fuzzy c-Means and
Gaussian mixture model by the Kolmogorov–Nagumo average. Entropy, 23(5):518, 2021.
[35] Frederik Kunstner, Philipp Hennig, and Lukas Balles. Limitations of the empirical Fisher
approximation for natural gradient descent. In Advances in Neural Information Processing
Systems 32, pages 4158–4169. Curran Associates, Inc., NY 12571, USA, 2019.
[36] Demir N. Kupeli. Singular Semi-Riemannian Geometry, volume 366 of Mathematics and Its
Applications. Springer, Netherlands, 1996.
[39] Tengyuan Liang, Tomaso Poggio, Alexander Rakhlin, and James Stokes. Fisher-Rao metric,
geometry, and complexity of neural networks. In International Conference on Artificial
Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, pages
888–896, 2019.
[40] Wu Lin, Valentin Duruisseaux, Melvin Leok, Frank Nielsen, Mohammad Emtiyaz Khan, and
Mark Schmidt. Simplifying momentum-based positive-definite submanifold optimization
with applications to deep learning. In International Conference on Machine Learning, pages
21026–21050. PMLR, 2023.
[41] David J.C. MacKay. Bayesian methods for adaptive models. PhD thesis, California Institute
of Technology, 1992.
[42] James A. Mingo and Roland Speicher. Free Probability and Random Matrices, volume 35 of
Fields Institute Monographs. Springer, 2017.
[43] In Jae Myung, Vijay Balasubramanian, and Mark A. Pitt. Counting probability distributions:
Differential geometry and model selection. Proceedings of the National Academy of Sciences,
97(21):11170–11175, 2000.
[44] Mitio Nagumo. Über eine Klasse der Mittelwerte. In Japanese Journal of Mathematics:
Transactions and Abstracts, volume 7, pages 71–79. The Mathematical Society of Japan, 1930.
[45] Naomichi Nakajima and Toru Ohmoto. The dually flat structure for singular models.
Information Geometry, 4(1):31–64, 2021.
[46] Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nati Srebro. Exploring
generalization in deep learning. In Advances in Neural Information Processing Systems 30,
pages 5947–5956. Curran Associates, Inc., NY 12571, USA, 2017.
[47] Katsumi Nomizu and Takeshi Sasaki. Affine Differential Geometry: Geometry of Affine
Immersions. Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge,
United Kingdom, 1994.
[48] A Emin Orhan and Xaq Pitkow. Skip connections eliminate singularities. In International
Conference on Learning Representations (ICLR), 2018.
[49] Vardan Papyan. Traces of class/cross-class structure pervade deep learning spectra. Journal
of Machine Learning Research, 21(252):1–64, 2020.
[50] Razvan Pascanu and Yoshua Bengio. Revisiting natural gradient for deep networks. In
International Conference on Learning Representations (ICLR), 2014.
[51] Jeffrey Pennington and Yasaman Bahri. Geometry of neural network loss surfaces via random
matrix theory. In International Conference on Machine Learning, volume 70 of Proceedings
of Machine Learning Research, pages 2798–2806, 2017.
[52] Jeffrey Pennington, Samuel Schoenholz, and Surya Ganguli. The emergence of spectral
universality in deep networks. In International Conference on Artificial Intelligence and
Statistics, volume 84 of Proceedings of Machine Learning Research, pages 1924–1932, 2018.
[53] Jeffrey Pennington and Pratik Worah. The spectrum of the Fisher information matrix of a
single-hidden-layer neural network. In Advances in Neural Information Processing Systems
31, pages 5410–5419. Curran Associates, Inc., NY 12571, USA, 2018.
[54] David Pollard. A note on insufficiency and the preservation of Fisher information. In From
Probability to Statistics and Back: High-Dimensional Models and Processes–A Festschrift in
Honor of Jon A. Wellner, pages 266–275. Institute of Mathematical Statistics, Beachwood,
Ohio, 2013.
[55] Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha Sohl-Dickstein. On
the expressive power of deep neural networks. In International Conference on Machine
Learning, volume 70 of Proceedings of Machine Learning Research, pages 2847–2854, 2017.
[56] Calyampudi Radhakrishna Rao. Information and the accuracy attainable in the estimation
of statistical parameters. Bulletin of the Calcutta Mathematical Society, 37(3):81–91, 1945.
[57] Calyampudi Radhakrishna Rao. Information and the accuracy attainable in the estimation
of statistical parameters. In Breakthroughs in statistics, pages 235–247. Springer, New York,
NY, 1992.
[58] Jorma Rissanen. Modeling by shortest data description. Automatica, 14(5):465–471, 1978.
[59] Jorma Rissanen. Fisher information and stochastic complexity. IEEE Transactions on
Information Theory, 42(1):40–47, 1996.
[60] Levent Sagun, Utku Evci, V. Ugur Guney, Yann Dauphin, and Léon Bottou. Empirical
analysis of the Hessian of over-parametrized neural networks. In ICLR'18 workshop, 2018.
arXiv:1706.04454 [cs.LG].
[61] Salem Said, Hatem Hajri, Lionel Bombrun, and Baba C. Vemuri. Gaussian distributions
on Riemannian symmetric spaces: statistical learning with structured covariance matrices.
IEEE Transactions on Information Theory, 64(2):752–772, 2017.
[62] Gideon Schwarz. Estimating the dimension of a model. Annals of Statistics, 6(2):461–464,
1978.
[63] Alexander Soen and Ke Sun. On the variance of the Fisher information for deep learning. In
Advances in Neural Information Processing Systems 34, pages 5708–5719, NY 12571, USA,
2021. Curran Associates, Inc.
[64] Ke Sun and Frank Nielsen. Relative Fisher information and natural gradient for learning
large modular models. In International Conference on Machine Learning, volume 70 of
Proceedings of Machine Learning Research, pages 3289–3298, 2017.
[65] Jun-ichi Takeuchi and Shun-ichi Amari. α-parallel prior and its properties. IEEE Transactions
on Information Theory, 51(3):1011–1023, 2005.
[66] Philip Thomas. GeNGA: A generalization of natural gradient ascent with positive and negative
convergence results. In International Conference on Machine Learning, volume 32 (2) of
Proceedings of Machine Learning Research, pages 1575–1583, 2014.
[67] Guillermo Valle-Pérez, Chico Q. Camargo, and Ard A. Louis. Deep learning generalizes
because the parameter-function map is biased towards simple functions. In International
Conference on Learning Representations (ICLR), 2019.
[68] Christopher Stewart Wallace and David M. Boulton. An information measure for classification.
Computer Journal, 11(2):185–194, 1968.
[69] Sumio Watanabe. Algebraic Geometry and Statistical Learning Theory, volume 25 of Cam-
bridge Monographs on Applied and Computational Mathematics. Cambridge University Press,
Cambridge, United Kingdom, 2009.
[70] Haikun Wei, Jun Zhang, Florent Cousseau, Tomoko Ozeki, and Shun-ichi Amari. Dynamics
of learning near singularities in layered networks. Neural Computation, 20(3):813–843, 2008.
[71] Yuki Yoshida, Ryo Karakida, Masato Okada, and Shun-ichi Amari. Statistical mechanical
analysis of learning dynamics of two-layer perceptron with multiple output units. Journal of
Physics A: Mathematical and Theoretical, 2019.
[72] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Under-
standing deep learning requires rethinking generalization. In International Conference on
Learning Representations (ICLR), 2017.