(Ebook) Metrics of Curves in Shape Optimization and Analysis by Mennucci A. ISBN 9780769523729, 0769523722 New Release 2025
Metrics of curves in shape optimization and
analysis
Andrea C. G. Mennucci
Key words: Shape space, shape optimization, shape analysis, computer vision,
Riemannian geometry, manifold of curves, hyperspace of compact sets, edge
detection, image segmentation, visual tracking, curve evolution, gradient flow,
active contour, Sobolev active contour.
Introduction
In these lecture notes we will explore the mathematics of the space of immersed
curves, as is nowadays used in applications in computer vision. In this field,
the space of curves is employed as a “shape space”; for this reason, we will also
define and study the space of geometric curves, which are immersed curves up
to reparameterizations. To develop the uses of this space, we will consider
the space of curves as an infinite dimensional differentiable manifold; we will
then deploy an effective form of calculus and analysis, comprising tools such
as a Riemannian metric, so as to be able to perform standard operations such
as minimizing a goal functional by gradient descent, or computing the distance
between two curves. Along this path of mathematics, we will also present some
current literature results. (Another common and interesting example of "shape
space" is the space of all compact subsets of ℝⁿ; we will briefly discuss this
option as well, and relate it to the aforementioned theory.)
These lecture notes aim to be as self-contained as possible, so as to be acces-
sible to young mathematicians and non-mathematicians as well. For this reason,
many examples are intermixed with the definitions and proofs; in presenting
advanced and complex mathematical ideas, the rigorous mathematical definitions
and proofs were sometimes sacrificed and replaced with an intuitive description.
These lecture notes are organized as follows. Section 1 introduces the
definitions and some basic concepts related to immersed and geometric curves.
Section 2 overviews the realm of applications for a shape space in computer
vision, which we divide into the fields of "shape optimization" and "shape analysis",
and highlights features and problems of those theories as they were studied up to a
few years ago, so as to identify the needs and obstacles to further developments.
Section 3 contains a summary of all mathematical concepts that are needed for
the rest of the notes. Section 4 coalesces all the above in more precise defini-
tions of spaces of curves to be used as “shape spaces”, and sets mathematical
requirements and goals for applications in computer vision. Section 5 indexes
examples of "shape spaces" from the current literature, inserting them into a common
paradigm of "representation of shape"; some of this literature is then elaborated
upon in the following Sections 6-9, containing two examples of metrics of
compact subsets of ℝⁿ, two examples of Finsler metrics of curves, two examples
of Riemannian metrics of curves "up to pose", and four examples of Riemannian
metrics of immersed curves. The last such example is the family of Sobolev-type
Riemannian metrics of immersed curves, whose properties are studied in Section
10, with applications and numerical examples. Section 11 presents advanced
mathematical topics regarding the Riemannian spaces of immersed and geometric
curves.
I gratefully acknowledge that a part of the theory and many of the numerical
experiments presented here were developed in joint work with Prof. Yezzi (GaTech)
and Prof. Sundaramoorthi (UCLA); other numerical experiments were by A.
Duci and myself. I also deeply thank the organizers for inviting me to Cetraro
to give the lectures that were the basis for these lecture notes.
1.1 Shapes
A wide interest in the study of shape spaces has arisen in recent years, in
particular inside the computer vision community. Some examples of shape spaces
are as follows.
• The family of all collections of k points in ℝⁿ.
• The family of all non-empty compact subsets of ℝⁿ.
There are two different (but interconnected) fields of applications for a good
shape space in computer vision:
shape optimization where we want to find the shape that best satisfies a
design goal; a topic of interest in engineering at large;
shape analysis where we study a family of shapes for purposes of statistics,
(automatic) cataloging, probabilistic modeling, among others, and possibly
to create an a-priori model for better shape optimization.
1.2 Curves
S¹ = {x ∈ ℝ² | |x| = 1} is the circle in the plane. It is the template for all
possible closed curves. (Open curves will be called paths, to avoid confusion).
• An immersed curve is a C¹ curve c such that c′(θ) ≠ 0 at all points
θ ∈ S¹,
c : S¹ → c(S¹) , θ ↦ c(θ) .
Note that, in our terminology, the "curve" is the function c, and not just the
image c(S¹) inside ℝⁿ.
Most of the following theory will be developed for curves in ℝⁿ, when this
does not complicate the math. We will call planar curves those whose image
is in ℝ².
The class of immersed curves is a differentiable manifold. For the purposes
of this introduction, we present a simple, intuitive definition.
Definition 1.2 The manifold of (parametric) curves M is the set of all
closed immersed curves. Suppose that c ∈ M, c : S¹ → ℝⁿ is a closed immersed
curve.
• A deformation of c is a function h : S¹ → ℝⁿ.
1.3 Geometric curves and functionals
Shapes are usually considered to be geometric objects. Representing a curve
using c : S 1 → lRn forces a choice of parameterization, that is not really part
of the concept of “shape”. To get rid of this, we first summarily present what
reparameterizations are. (We will provide more detailed definitions and properties
in Section 3.8).
B = M/Diff(S 1 )
1.4 Curve-related quantities
A good way to specify the design goal for shape optimization is to define an
objective function (a.k.a. energy) F : M → ℝ that attains its minimum at the
curve that is most fit for the task.
When designing our F, we will want it to be geometric; this is easily
accomplished if we build it from geometric quantities. We now list the most
important such quantities.
In the following, given v, w ∈ ℝⁿ we will write |v| for the standard Euclidean
norm, and ⟨v, w⟩ or (v · w) for the standard scalar product. We will again
write c′(θ) := ∂θ c(θ).
Definition 1.7 (Derivations) If the curve c is immersed, we can define the
derivation with respect to the arc parameter
∂s = (1/|c′|) ∂θ .
(We will sometimes also write Ds instead of ∂s.)
Definition 1.8 (Tangent) At all points where c′(θ) ≠ 0, we define the tangent
vector
T(θ) = c′(θ)/|c′(θ)| = ∂s c(θ) .
(At the points where c′ = 0 we may define T = 0).
It is easy to prove (and quite natural for our geometric intuition) that T is a
geometric quantity: if c̃ = c ∘ φ and T̃ is its tangent, then T̃ = T ∘ φ.
Definition 1.9 (Length) The length of the curve c is
len(c) := ∫_{S¹} |c′(θ)| dθ .   (1.9.∗)
[Figure 1: a planar curve, with the normal N and the mean curvature vector H
drawn at points where κ > 0 and at points where κ < 0.]
1.4.1 Curvature
Suppose moreover that the curve is immersed and is C² regular (that means that
it is twice differentiable, and the second derivative is continuous); in this case
we may define curvatures, which indicate how much the curve bends. There are
two different definitions of curvature of an immersed curve: the mean curvature H,
and the signed curvature κ, which is defined when c is planar. H and κ are extrinsic
curvatures: they are properties of the embedding of c into ℝⁿ.
Definition 1.11 (H) If c is C² regular and immersed, we can define the mean
curvature H of c as
H = ∂s ∂s c = ∂s T
where ∂s = (1/|c′|) ∂θ is the derivation with respect to the arc parameter. It
enjoys the following properties.
Definition 1.13 (N) When the curve c is immersed and planar, we can define
a normal vector N to the curve, by requiring that |N| = 1, N ⊥ T and N is
rotated by π/2 anticlockwise with respect to T.
Definition 1.14 (κ) If c is planar and C² regular, then we can define a signed
scalar curvature κ = ⟨H, N⟩, so that
∂s T = κN = H and ∂s N = −κT .
See Fig. 1.
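The quantities just defined have straightforward discrete analogues. The sketch below is ours (an illustration, not code from the notes): it approximates T, len(c) and κ for a closed planar curve sampled at m uniformly spaced parameter values, using periodic central differences for ∂θ.

```python
import numpy as np

def curve_quantities(c):
    """Discrete T, len(c) and kappa for a sampled closed planar curve.

    c is an (m, 2) array of points c(theta_k), theta_k = 2*pi*k/m."""
    m = len(c)
    dtheta = 2 * np.pi / m
    # periodic central difference approximates c'(theta)
    dc = (np.roll(c, -1, axis=0) - np.roll(c, 1, axis=0)) / (2 * dtheta)
    speed = np.linalg.norm(dc, axis=1)            # |c'(theta)|
    T = dc / speed[:, None]                       # unit tangent (Def. 1.8)
    length = speed.sum() * dtheta                 # len(c) = int |c'| dtheta (1.9.*)
    # H = d_s T = (1/|c'|) d_theta T  (Def. 1.11)
    dT = (np.roll(T, -1, axis=0) - np.roll(T, 1, axis=0)) / (2 * dtheta)
    H = dT / speed[:, None]
    N = np.stack([-T[:, 1], T[:, 0]], axis=1)     # T rotated by pi/2 (Def. 1.13)
    kappa = np.einsum('ij,ij->i', H, N)           # kappa = <H, N> (Def. 1.14)
    return T, length, kappa

# sanity check on a circle of radius 2: len = 4*pi, kappa = 1/2 everywhere
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = 2.0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
T, L, kappa = curve_quantities(circle)
```

Since the circle is traversed anticlockwise, N points toward the center and κ comes out positive, consistent with ∂sT = κN.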
2 Shapes in applications
A number of methods have been proposed in shape analysis to define distances
between shapes, averages of shapes and statistical models of shapes. At the
same time, there has been much previous work in shape optimization, for ex-
ample image segmentation via active contours, 3D stereo reconstruction via
deformable surfaces; in these latter methods, many authors have defined energy
functionals F(c) on curves (or on surfaces) whose minima represent the desired
segmentation/reconstruction, and have then utilized the calculus of variations to derive
curve evolutions to search for minima of F(c), often referring to these evolutions
as gradient flows. The reference to these flows as gradient flows implies a certain
Riemannian metric on the space of curves; but this fact has been largely
overlooked. We call this metric H⁰, and properly define it in eqn. (2.9).
[Figure: the distance functions u_A, u_B and the signed distance functions
b_A, b_B of two sets A, B in the (x1, x2) plane.]
2.1.1 Shape distances
A variety of distances have been proposed for measuring the difference between
two given shapes. Two examples follow.
Definition 2.2 The Hausdorff distance
d_H(A, B) := sup_{x∈A} u_B(x) ∨ sup_{x∈B} u_A(x)   (2.2.∗)
If c1, c2 are immersed and planar, we may otherwise use d_H(c̊1, c̊2), where
c̊1, c̊2 denote the internal regions enclosed by c1, c2.
A∆B := (A \ B) ∪ (B \ A)
d(A, B) = |A∆B| .
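For finite point sets, the two suprema in (2.2.∗) can be read off the pairwise distance matrix. The sketch below is a hypothetical illustration for small finite sets (to apply it to curves or regions one would sample them first):

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between finite point sets A (m, d) and B (n, d).

    Here u_B(x) = dist(x, B), and
    d_H(A, B) = max( sup_{x in A} u_B(x), sup_{x in B} u_A(x) )."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise |x - y|
    return max(D.min(axis=1).max(),   # sup over A of dist(., B)
               D.min(axis=0).max())   # sup over B of dist(., A)

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 3.0]])
```

Note the asymmetry of the two one-sided terms: here sup_{x∈A} u_B(x) = 1 while sup_{x∈B} u_A(x) = 3, and the Hausdorff distance takes the larger.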
• More in general, we can define the signed distance level set averaging
by using as a representative of a shape its signed distance function, and
computing the average shape by
c̄ = { x | b̄(x) = 0 } ,  where  b̄(x) = (1/N) Σ_{n=1}^N b_{c_n}(x)
where Si are constant vectors, and Yi are uncorrelated real random variables
with zero mean, and with decreasing variance. Si is known as the i-th mode of
principal variation.
The PCA is possible in general in any finite dimensional linear space equipped
with an inner product. In infinite dimensional spaces, or equivalently in case of
random processes, the PCA is obtained by the Karhunen-Loève theorem.
Given a finite number of samples, it is also possible to define an empirical
principal component analysis, by estimating the expectation and variance of the
data.
In many practical cases, the variances of Yi decrease quite fast: it is then
sensible to replace X by a simplified variable
X̃ := X̄ + Σ_{i=1}^k Y_i S_i
1 Due to Fréchet, 1948; but also attributed to Karcher.
with k < n.
So, if shapes are represented by some "linear" representation, the PCA is
a valuable tool to study their statistics, and it has then become popular for
imposing shape priors in various shape optimization problems. For more general
manifold representations of shapes, we may use the exponential map (defined in
3.30), replacing the S_i by tangent vectors and following geodesics to perform the
summation.
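As a sketch of the empirical construction just described (hypothetical code of ours, with a plain eigendecomposition of the empirical covariance standing in for the Karhunen-Loève expansion):

```python
import numpy as np

def empirical_pca(X):
    """Empirical PCA of N samples (rows of X) in a finite-dimensional space.

    Returns the empirical mean, the modes S_i (columns of S) and the
    estimated variances of the uncorrelated coefficients Y_i, in
    decreasing order."""
    mean = X.mean(axis=0)
    C = np.cov(X - mean, rowvar=False)     # empirical covariance
    var, S = np.linalg.eigh(C)             # eigenvalues in ascending order
    order = np.argsort(var)[::-1]          # sort modes by decreasing variance
    return mean, S[:, order], var[order]

def truncate(mean, S, Y, k):
    # simplified variable: X~ = mean + sum_{i<=k} Y_i S_i
    return mean + S[:, :k] @ Y[:k]

rng = np.random.default_rng(0)
samples = rng.normal(size=(200, 5))
mean, S, var = empirical_pca(samples)
```

Keeping only the first k columns of S is exactly the truncation X̃ above: the remaining modes carry the smallest variances.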
We present here an example in applications.
Figure 2: PCA of plane shapes. (From Tsai et al. [60] © 2001 IEEE. Reproduced with
permission).
Figure 3: Segmentation of occluded plane shape. (From Tsai et al. [60] © 2001 IEEE.
Reproduced with permission).
2.2 Shape optimization & active contours
2.2.1 A short history of active contours
In the late 20th century, the most common approach to image segmentation
was a combination of edge detection and contour continuation. With edge
detection [5], small edge elements would be identified by a local analysis of
the image; then a continuation method would be employed to reconstruct
full contours. The methods suffered from two drawbacks: edge detection was
too sensitive to noise and to photometric features (such as sharp shadows and
reflections) that were not related to the physical structure of the image; and
continuation was in essence an NP-complete problem.
Active contours, introduced by Kass et al. [26], have been widely used for
the segmentation problem. The idea is to minimize an energy F (c) (where the
variable c is a contour i.e. a curve), that contains an edge-based attraction
term and a smoothness term, which becomes large when the curve is irregular.
An evolution is derived to minimize the energy based on principles from the
calculus of variations.
An unjustified feature of the model of [26] is that the evolution is dependent
on the way the contour is parameterized. Hence there have been geometric
evolutions similar to the idea of [26] in Caselles et al. [6] and Malladi et al. [33],
which can be implemented by the level set method of Osher and Sethian [44].
Thereafter, Kichenassamy et al. [27] and Caselles et al. [7] considered minimizing
a geometric energy, which is a generalization of Euclidean arc length, defined
on curves for the edge detection problem. The authors derived the gradient
descent flow in order to minimize the geometric energy.
In contrast to the edge-based approaches for active contours (mentioned
above), region-based energies for active contours have been proposed in Ronfard
[46], Zhu et al. [69], Yezzi et al. [64], and Chan and Vese [8]. In these approaches, an
energy is designed to be minimized when the curve partitions the image into
statistically distinct regions. This kind of energy has provided many desirable
features; for example, it provides less sensitivity to noise, better ability to capture
concavities of objects, more dependence on global features of the image, and less
sensitivity to the initial contour placement.
In Mumford and Shah [42, 43], the authors introduce and rigorously study a
region-based energy that is designed both to extract the boundaries of distinct
regions and to smooth the image within these regions. Subsequently,
Tsai et al. [61] and Vese and Chan [63] gave a curve evolution implementation of
minimizing the energy functional considered by Mumford and Shah in a level set
method; the gradient descent flows are calculated to minimize these energies.
• the region based energies:
F(c) = ∫_R f_in(x) dx + ∫_{R^c} f_out(x) dx
F(c) = ∫_R ( I(x) − avg_R(I) )² dx + ∫_{R^c} ( I(x) − avg_{R^c}(I) )² dx
where avg_R I = (1/|R|) ∫_R I(x) dx is the average of I on the region R.
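The second (Chan-Vese-type) energy above is easy to evaluate on a discrete image once the contour is represented by the region R it encloses. A hypothetical sketch of ours, with R given as a boolean pixel mask:

```python
import numpy as np

def region_energy(I, mask):
    """F(c) = int_R (I - avg_R I)^2 dx + int_{R^c} (I - avg_{R^c} I)^2 dx,
    with the region R inside the contour given as a boolean pixel mask."""
    inside, outside = I[mask], I[~mask]
    return (((inside - inside.mean()) ** 2).sum()
            + ((outside - outside.mean()) ** 2).sum())

# on a two-intensity image the energy vanishes exactly on the true partition
I = np.zeros((8, 8)); I[:, :4] = 1.0
true_mask = np.zeros((8, 8), bool); true_mask[:, :4] = True
wrong_mask = np.zeros((8, 8), bool); wrong_mask[:4, :] = True
```

This makes the "statistically distinct regions" idea concrete: any mask that mixes the two intensities leaves nonzero within-region variance, hence a larger energy.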
2.3 Geodesic active contour method
2.3.1 The “geodesic active contour” paradigm
The general procedure for geodesic active contours goes as follows:
1. Choose an appropriate geometric energy functional, E.
2. Compute the directional derivative (a.k.a. Gâteaux differential)
DE(c)(h) = (d/dt) E(c + th) |_{t=0}
where c is a curve and h is an arbitrary perturbation of c.
3. Manipulate DE(c)(h) into the form
∫_c h(s) · v(s) ds .
2.3.3 Implicit assumption of H⁰ inner product
We have made a critical assumption in going from the directional derivative
DE(c)(h) = ∫_c h(s) · v(s) ds
∂C/∂t = ∂s ∂s C
In the H⁰ inner product, this is the gradient descent for curve length.
This flow has been studied deeply by the mathematical community, and is
known to enjoy the following properties.
Properties 2.11 • Embedded planar curves remain embedded.
• Embedded planar curves converge to a circular point.
• Comparison principle: if two embedded curves are used as initialization,
and the first contains the second, then it will contain the second for all
time of evolution. This is important for level set methods.
• The flow is well posed only for positive times, that is, for increasing t
(similarly to the usual heat equation).
For the proofs, see Gage and Hamilton [21], Grayson [23].
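A naive discretization makes the flow concrete: represent the curve as a closed polygon and take explicit Euler steps of ∂C/∂t = ∂s∂sC. The sketch below is ours, with an ad-hoc arc-length-weighted finite difference for ∂s²; it is only an illustration, not the viscosity/level-set schemes used in practice.

```python
import numpy as np

def shorten_step(c, dt):
    """One explicit Euler step of dC/dt = d_s d_s C for a closed polygon.

    c is an (m, 2) array of vertices; returns the moved vertices."""
    e_fwd = np.roll(c, -1, axis=0) - c          # forward edges
    e_bwd = c - np.roll(c, 1, axis=0)           # backward edges
    l_fwd = np.linalg.norm(e_fwd, axis=1)[:, None]
    l_bwd = np.linalg.norm(e_bwd, axis=1)[:, None]
    # discrete d_s d_s c = kappa N: difference of unit tangents over
    # the average of the adjacent edge lengths
    kappaN = (e_fwd / l_fwd - e_bwd / l_bwd) / ((l_fwd + l_bwd) / 2)
    return c + dt * kappaN

def length(c):
    return np.linalg.norm(np.roll(c, -1, axis=0) - c, axis=1).sum()

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ellipse = np.stack([2 * np.cos(theta), np.sin(theta)], axis=1)
c = ellipse
for _ in range(50):
    c = shorten_step(c, dt=0.001)
```

Since the flow is the H⁰ gradient descent for length, the polygon's length decreases monotonically for a sufficiently small dt (the explicit scheme is only conditionally stable, roughly dt ≲ h²/2 for mesh size h).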
then the Gâteaux differential of avg_c(g(c)) is
∂avg_c(g(c))/∂h = (1/L) ∫_c [ ∇g(c) · h + g(c) ⟨∂s h · T⟩ ] ds −
− (1/L²) ∫_c g(c) ds ∫_c ⟨∂s h · T⟩ ds =
= (1/L) ∫_c [ ∇g(c) · h + (g(c) − avg_c(g(c))) ⟨∂s h · T⟩ ] ds .   (2.13)
In the above, we omit the argument (s) for brevity; ∇g is the gradient of g w.r.to
x.
If the curve is planar, we define N and T as by 1.13 and 1.14; then,
integrating by parts, the above becomes
∂avg_c(g(c))/∂h = (1/L) ∫_{S¹} [ ∇g(c) · h − ⟨∇g(c) · T⟩⟨h · T⟩ −
− (g(c) − avg_c(g(c))) ⟨h · Ds²c⟩ ] ds =
= (1/L) ∫_{S¹} [ ⟨∇g(c) · N⟩ − κ (g(c) − avg_c(g(c))) ] ⟨h · N⟩ ds .   (2.14)
where in turn (by eqn. (2.13) and (2.14))
D(c)(h) = ∫ [ h + (c − c̄)(Ds h · Ds c) ] ds =   (2.15.†)
= ∫ [ h − Ds c (h · Ds c) − (c − c̄)(h · Ds²c) ] ds
h − Ds c (h · Ds c) = N (h · N)
so
DE(c)(h) = ∫ [ ⟨c̄ − v, N⟩(h · N) − ⟨c̄ − v, c − c̄⟩ κ (h · N) ] ds .
2.4.6 Conclusions
More recent works that use active contours for segmentation are not only based
on information from the image to be segmented (edge-based or region-based),
but also prior information, that is information known about the shape of the
desired object to be segmented. The work of Leventon et al. [31] showed how to
incorporate prior information into the active contour paradigm. Subsequently,
there have been a number of works, for example Tsai et al. [62], Rousson and
Paragios [47], Chen et al. [12], Cremers and Soatto [13], Raviv et al. [45], which
design energy functionals that incorporate prior shape information of the desired
object. In these works, the main idea is to design a novel energy term that is
small when the curve is close, in some sense, to a pre-specified shape.
The need for prior information terms arose from several factors, such as
• the fact that some images contain limited information,
• the energy functionals considered previously could not incorporate complex
information,
• the energies had too many extraneous local minima, and the gradient flows
to minimize these energies allowed for arbitrary deformations that gave
rise to unlikely shapes; as in the example in Fig. 4 of segmentation using
the Chan-Vese energy, where the flowing contour gets trapped in noise.
of regions, the H⁰ gradient still depends on local data. These facts imply
that the H⁰ gradient descent flow is sensitive to noise and local features.
3. The H⁰ norm gives non-preferential treatment to arbitrary deformations,
regardless of whether the deformations are global motions (which do not change
the shape of the curve) such as translations, rotations and scalings, or
more local deformations.
4. Many geometric active contours (such as edge- and region-based active
contours) require that the unit normal to the evolving curve be defined. As
such, the evolution does not make sense for polygons. Moreover, since in
general an H⁰ active contour does not remain smooth, one needs special
numerical schemes, based on viscosity theory in a level set framework, to
define the flow.
5. Many simple and apparently sound energies cannot be implemented for
shape optimization tasks:
• some energies generate ill-posed H⁰ flows;
• if an energy integrand uses derivatives of the curve of order up to d,
then the PDE driving the flow has order 2d; but high-order derivatives of
the curve are noise sensitive and difficult to implement numerically; and
• the active contours method works best when implemented using the
level set method, but this is difficult for flow PDEs of order higher
than 2.
In conclusion, if one wishes to have a consistent view of the geometry of the
space of curves in both shape optimization and shape analysis, then one should
use the H⁰ metric when computing distances, averages and morphs between
shapes. Unfortunately, H⁰ does not yield a well-defined metric structure, since
the associated distance is identically zero. So to achieve our goal, we will need
to devise new metrics.
The topology is the simplest and most general way to define what the
"convergent sequences of points" and the "continuous functions" are. We will not
provide details regarding topological spaces, since in the following we will mostly
deal with normed spaces and metric spaces, where the topology is induced by a
norm or a metric. We just recall the definition of a distance on a set M: a function
d : M × M → [0, ∞]
such that
1. d(x, x) = 0,
2. if d(x, y) = 0 then x = y,
3. d(x, y) = d(y, x) (d is symmetric),
4. d(x, z) ≤ d(x, y) + d(y, z) (the triangle inequality).
Definition 3.4 A metric space (M, d) is complete iff, for any given sequence
(c_n) ⊂ M, the fact that
lim_{m,n→∞} d(c_m, c_n) = 0
implies that (c_n) converges to a point of M.
Example 3.5 • ℝⁿ is usually equipped with the Euclidean distance
d(x, y) = |x − y| = ( Σ_{i=1}^n (x_i − y_i)² )^{1/2} ;
A norm on a vector space E is a function
‖·‖ : E → [0, ∞]
such that
1. ‖·‖ is convex,
2. if ‖x‖ = 0 then x = 0,
3. ‖λx‖ = |λ| ‖x‖ for λ ∈ ℝ;
it induces the distance d(x, y) := ‖x − y‖.
3.2.1 Examples of spaces of functions
We present some examples and definitions.
Definition 3.7 A locally-convex topological vector space E (shortened
to l.c.t.v.s. in the following) is a vector space equipped with a collection of
seminorms ‖·‖_k (with k ∈ K an index set); the seminorms induce a topology,
such that c_n → c iff ‖c_n − c‖_k →_n 0 for all k, and all vector space operations
are continuous w.r.to this topology.
The simplest example of a l.c.t.v.s. is obtained when there is only one norm; this
gives rise to two renowned examples of spaces.
Definition 3.8 (Banach and Hilbert spaces) • A Banach space is a
vector space E with a norm ‖·‖ defining a distance d(x, y) := ‖x − y‖ such
that E is metrically complete.
• A Hilbert space is a space with an inner product ⟨f, g⟩, which defines a
norm ‖f‖ := √⟨f, f⟩ such that E is metrically complete.
(Note that a Hilbert space is also a Banach space).
Example 3.9 Let I ⊂ ℝᵏ be open; let p ∈ [1, ∞]. A standard example of a
Banach space is the Lᵖ space of functions f : I → ℝⁿ with norm
‖f‖_{Lᵖ} := ( ∫_I |f(x)|ᵖ dx )^{1/p} for p ∈ [1, ∞) ,   ‖f‖_{L∞} := sup_I |f(x)| ;
those spaces contain all functions such that |f|ᵖ is Lebesgue integrable (resp.
f ∈ L∞ when |f| is bounded, on I \ N where N is a set of measure zero). If
p = 2, L² is a Hilbert space with the inner product
⟨f, g⟩ := ∫_I f(x) · g(x) dx .
Note that, in these spaces, by definition, f = g iff the set {f ≠ g} has zero
Lebesgue measure.
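Numerically, the Lᵖ norms can be approximated from samples of f on a grid; the sketch below is a hypothetical helper of ours using a plain Riemann sum.

```python
import numpy as np

def lp_norm(f, dx, p):
    """Approximate ||f||_{L^p} = (int_I |f|^p dx)^(1/p) from samples of f
    on a uniform grid of spacing dx; p = np.inf gives the sup norm."""
    if np.isinf(p):
        return np.abs(f).max()
    return (np.sum(np.abs(f) ** p) * dx) ** (1.0 / p)

# f(x) = x on I = (0, 1): ||f||_{L^2} = 1/sqrt(3), ||f||_{L^inf} = 1
x = np.linspace(0.0, 1.0, 100001)
dx = x[1] - x[0]
f = x
```

The same samples give every p at once, illustrating how the norms order: for f on a bounded domain of measure 1, ‖f‖_{Lᵖ} is nondecreasing in p.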
lim_{m,n→∞} ‖c_m − c_n‖_k = 0
Hausdorff when, for any given c, if ‖c‖_k = 0 for all k then c = 0;
metrizable when there are countably many seminorms associated to E.
The reason for the last definition is that, if E is metrizable, then we can define a
distance
d(x, y) := Σ_{k=0}^∞ 2^{−k} ‖x − y‖_k / (1 + ‖x − y‖_k)
that generates the same topology as the family of seminorms ‖·‖_k; and the vice
versa is true as well, see [48].
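The construction of d from the seminorms is direct to compute; a minimal sketch (ours, with a finite family of toy coordinate seminorms standing in for the countable one):

```python
def seminorm_distance(x, y, seminorms):
    """d(x, y) = sum_k 2^{-k} ||x - y||_k / (1 + ||x - y||_k).

    Each term is bounded by 2^{-k}, so the series converges, and
    d(x, y) is small iff every ||x - y||_k is small -- which is why d
    generates the same topology as the seminorm family."""
    return sum(2.0 ** (-k) * n(x, y) / (1.0 + n(x, y))
               for k, n in enumerate(seminorms))

# two coordinate seminorms on R^2, as a toy stand-in
seminorms = [lambda x, y, i=i: abs(x[i] - y[i]) for i in range(2)]
d = seminorm_distance((0.0, 0.0), (1.0, 3.0), seminorms)
```

Here d = 1·(1/2) + (1/2)·(3/4) = 0.875; note that d is always bounded (by 2 for this family) even when some seminorm is huge.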
where f^{(i)} is the i-th derivative. In this space f_n → f iff f_n^{(i)} → f^{(i)}
uniformly for all i ≤ j.
where f^{(k)} is the k-th derivative. In this space, f_n → f iff all the derivatives
f_n^{(k)} converge uniformly to the derivatives f^{(k)}.
This last is one of the strongest topologies among those usually associated
to spaces of functions.
5 The derivatives are computed in the distributional sense, and must exist as Lebesgue
integrable functions.
3.2.4 Dual spaces.
Definition 3.12 Given a l.c.t.v.s. E, the dual space E∗ is the space of all
linear functions L : E → ℝ.
The biggest problem when dealing with Fréchet spaces is that the dual of
a Fréchet space is not in general a Fréchet space, since it often fails to be
metrizable. (In most cases, the duals of Fréchet spaces are "quite wide" spaces;
a classical example being the dual of the space of smooth functions, which is
the space of distributions). So given F, G Fréchet spaces, we cannot easily work
with "the space L(F, G) of linear functions between F and G".
As a workaround, given an auxiliary space H, we will consider "indexed
families of linear maps" L : F × H → G, where L(·, h) is linear, and L is jointly
continuous; but we will not consider L as a map
h ↦ (f ↦ L(f, h)) ,   H → L(F, G) .   (3.13)
3.2.5 Derivatives
An example is the Gâteaux differential.
Definition 3.14 We say that a continuous map P : U → G, where F, G are
Fréchet spaces and U ⊂ F is open, is Gâteaux differentiable if for any h ∈ F
the limit
DP(f)(h) := lim_{t→0} ( P(f + th) − P(f) ) / t   (3.14.∗)
exists. The map DP(f) : F → G is the Gâteaux differential.
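Numerically, the limit in (3.14.∗) is just a difference quotient. A hypothetical sketch of ours (a central difference is used for accuracy):

```python
import numpy as np

def gateaux(P, f, h, t=1e-6):
    # central-difference approximation of
    # DP(f)(h) = lim_{t->0} (P(f + t h) - P(f)) / t
    return (P(f + t * h) - P(f - t * h)) / (2 * t)

# example: P(f) = |f|^2 on R^n has DP(f)(h) = 2 <f, h>
f = np.array([1.0, 2.0])
h = np.array([0.5, -1.0])
D = gateaux(lambda v: float((v ** 2).sum()), f, h)
```

For this quadratic P the central difference is exact up to rounding: D ≈ 2⟨f, h⟩ = −3.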
Also, the implicit function theorem holds, in the form due to Nash and Moser;
see again [24] for details.
3.2.6 Troubles in calculus
But some important parts of calculus are instead lost.
A consequence (that we will discuss more later on) is that the exponential
map in Riemannian manifolds may fail to be locally surjective. See [24, Sec.
5.5.2].
3.3 Manifold
To build a manifold of curves, we have two main definitions of "manifold"
to choose from.
3.3.1 Submanifold
Definition 3.20 Suppose A, B are open subsets of two linear spaces. A diffeo-
morphism is an invertible differentiable function φ : A → B, whose inverse
φ−1 is again differentiable.
[Figure: a chart for the manifold M: the map φ1 sends the open set U1 ⊂ U
to a neighbourhood V1 of c in M, with x ∈ U1 mapped to c; Tc M is the tangent
space at c.]
T M := {(c, h) | c ∈ M, h ∈ Tc M } ⊂ X × X .
The tangent bundle T M is itself a differentiable manifold; its charts are of the
form (φk , Dφk ), where φk are the charts for M .
charts φ1⁻¹ ∘ φ2 is smooth.
Some objects that we may find useful in the following are Fréchet manifolds.
C∞(S; ℝ)/Diff(S)
So the theory of Fréchet spaces seems apt to define and operate on the manifold
of geometric curves.
3.6.2 Finsler metric, length
Definition 3.25 We define a Finsler metric to be a function F : T M → ℝ⁺,
such that
• F is continuous and,
3.6.3 Distance
Definition 3.26 The distance d(c0, c1) is the infimum of Len(γ) over all
C¹ paths γ connecting c0, c1 ∈ M.
Remark 3.27 In the following chapter, we will define some differentiable
manifolds M of curves, and add a Riemannian (or Finsler) metric G on those;
there are two different choices for the model space:
• suppose we model the differentiable manifold M on a Hilbert space U , with
scalar product h, iU ; this implies that M has a topology τ associated to
it, and this topology, through the charts φ, is the same as that of U . Let
now G be a Riemannian metric; since the derivative of a chart Dφ(c)
maps U onto Tc M , one natural hypothesis will be to assume that h, iU and
h, iG,c be locally equivalent (uniformly w.r.to small movements of c); as
a consequence, the topology generated by the Riemannian distance d will
coincide with the original topology τ . A similar request will hold for the
case of a Finsler metric G, in this case U will be a Banach space with a
norm equivalent to that defined by G on Tc M .
• We will though find out that, for technical reasons, we will initially model
the spaces of curves on the Fréchet space C∞; but in this case there cannot
be a norm on Tc M that generates the same original topology (for the proof,
see I.1.46 in [48]).
3.6.4 Minimal geodesics
Definition 3.28 If there is a path γ∗ providing the minimum of Len(γ) over
all paths connecting c0, c1 ∈ M, then γ∗ is called a minimal geodesic.
The minimal geodesic is also the minimum of the action (up to reparameteri-
zation).
Proposition 3.29 Let ξ∗ provide the minimum min_γ E(γ) in the class of all
paths γ in M connecting x to y. Then ξ∗ is a minimal geodesic and its speed
|ξ̇∗| is constant.
Vice versa, let γ∗ be a minimal geodesic; then there is a reparameterization
ξ∗ = γ∗ ∘ φ s.t. ξ∗ provides the minimum min_γ E(γ).
3.6.6 Hopf–Rinow theorem
Theorem 3.32 (Hopf–Rinow) Suppose M is a finite dimensional Rieman-
nian or Finsler manifold. The following are equivalent:
• (M, d) is metrically complete;
Example 3.35 (Atkin [3]) There exists an infinite dimensional complete Hilbert
smooth manifold M and x, y ∈ M such that there is no critical geodesic connect-
ing x to y.
That is,
• (M, d) is complete ⇏ expc is surjective,
• and (M, d) is complete ⇏ minimal geodesics exist.