

Dual methods for the minimization of the total variation

Rémy Abergel
Supervisor: Lionel Moisan

MAP5 - CNRS UMR 8145

– Different Learning Seminar, LTCI –


Thursday 21st April 2016


Plan

1 Introduction

2 Convex optimization
    Generalities
    Differentiable framework
    Dual methods

3 Application to image restoration
    Denoising (dual approach)
    Inverse problems (primal-dual approach)

4 Conclusion


Mathematical framework

A gray-level image is represented as a function

u : Ω → R

where Ω denotes

Continuous framework: a bounded open set of R^2.
Discrete framework: a rectangular subset of Z^2.

In both cases, we will write u ∈ R^Ω.


Total variation (continuous framework)

We will focus on image restoration processes involving the total variation functional, which is defined by

∀u ∈ W^{1,1}(Ω),   TV(u) = ∫_Ω ‖∇u(x)‖_2 dx ,

or, more generally,

∀u ∈ BV(Ω),   TV(u) = sup_{φ ∈ C_c^∞(Ω;R^2), ∀x∈Ω ‖φ(x)‖_2 ≤ 1}  − ∫_Ω u(x) div φ(x) dx ,

where BV(Ω) = { u ∈ L^1_loc(Ω); TV(u) < +∞ }.
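
A classical example (added here for illustration, not on the slide) shows why the BV formulation is more general: the indicator u = 1_D of a disk D ⊂⊂ Ω of radius r is not in W^{1,1}(Ω), yet integrating by parts gives −∫_Ω u div φ dx = −∫_{∂D} φ·ν dH^1 ≤ H^1(∂D) for every admissible φ, with near-equality when φ ≈ −ν on ∂D, so TV(1_D) = Per(D) = 2πr < +∞.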





Total variation (discrete framework)

Let Ω = {0, . . . , M−1} × {0, . . . , N−1} denote a discrete rectangular domain, and u ∈ R^Ω a discrete image. We generally adapt the continuous definition of TV(u) as follows:

TV(u) = ‖∇u‖_{1,2} := Σ_{(x,y)∈Ω} ‖∇u(x,y)‖_2 ,

where ∇ denotes a finite difference operator.
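
A minimal numpy sketch of this discrete TV (assuming forward differences with Neumann boundary conditions, a common but not mandatory choice for ∇):

import numpy as np

def grad(u):
    # Forward differences, replicating the last row/column (Neumann boundary).
    gx = np.zeros_like(u); gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy = np.zeros_like(u); gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def tv(u):
    # TV(u) = sum over all pixels of the Euclidean norm of the gradient vector.
    gx, gy = grad(u)
    return np.sum(np.sqrt(gx**2 + gy**2))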


Optimization problem

We are interested in the computation of û ∈ E, a minimizer of a given cost function J over a subset C ⊂ E (the constraint set). Such a problem is usually written

û ∈ argmin_{u∈C} J(u)

where

J denotes a function from E to R̄ := R ∪ {±∞},
E denotes (for the sake of simplicity) a Hilbert space,
in general, C = {u ∈ E; g(u) ≤ 0, h(u) = 0},
where g is called the inequality constraint,
and h is called the equality constraint.


Differentiable and unconstrained framework

Theorem (first-order necessary condition for optimality)

If û achieves a minimum of J over E, then ∇J(û) = 0.

This condition becomes sufficient when the cost function J is convex.

Theorem (sufficient condition for the existence of a minimizer)

If J : E → R̄ is a proper, continuous and coercive function, then the unconstrained problem admits at least one solution.

If moreover J is strictly convex, the problem admits exactly one solution.
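
A simple illustration (not on the slide): J(u) = ½‖u − u0‖_2^2 is proper, continuous, coercive and strictly convex, so it admits exactly one minimizer; solving ∇J(û) = û − u0 = 0 gives û = u0, as expected.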


Example of a solution algorithm (C = E = R^n)

Algorithm (gradient descent)

1. Initialization:
   choose u^0 ∈ R^n, α_0 > 0 and ε > 0.
2. Iteration k:
   compute ∇J(u^k),
   compute α_k,
   set u^{k+1} = u^k − α_k ∇J(u^k).
3. Example of stopping criterion:
   if |J(u^{k+1}) − J(u^k)| < ε, STOP;
   otherwise, set k = k + 1 and go back to 2.

Remark: a first-order Taylor expansion of J(u^k − α_k ∇J(u^k)) at the point u^k helps to understand that J(u^{k+1}) ≤ J(u^k) as soon as α_k is small enough.
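
A minimal numpy sketch of this scheme, with a fixed step α instead of the adaptive α_k (a simplifying assumption to keep the code short); the quadratic J is a stand-in example:

import numpy as np

def gradient_descent(J, grad_J, u0, alpha=0.1, eps=1e-10, max_iter=10000):
    # u^{k+1} = u^k - alpha * grad_J(u^k), stopped when |J(u^{k+1}) - J(u^k)| < eps.
    u = u0.copy()
    for _ in range(max_iter):
        u_new = u - alpha * grad_J(u)
        if abs(J(u_new) - J(u)) < eps:
            return u_new
        u = u_new
    return u

# Stand-in example: J(u) = 0.5 * ||u - b||^2, whose gradient is u - b.
b = np.array([1.0, -2.0, 3.0])
u_hat = gradient_descent(lambda u: 0.5 * np.sum((u - b)**2),
                         lambda u: u - b,
                         np.zeros(3))  # converges to b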


Differentiable and constrained framework

Theorems can be adapted (in the convex setting), leading to the so-called Karush-Kuhn-Tucker conditions.

A solution of the constrained problem can be numerically computed using the projected gradient algorithm, which simply consists in replacing

u^{k+1} = u^k − α_k ∇J(u^k)

by

u^{k+1} = Proj_C (u^k − α_k ∇J(u^k))

in the gradient descent algorithm (see the sketch below).
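
A minimal sketch of the projected step (the box constraint C = [0, 1]^n and its clipping projection are illustrative choices, not from the slides):

import numpy as np

def projected_gradient(grad_J, proj_C, u0, alpha=0.1, n_iter=1000):
    # Same descent step as before, re-projected onto C at each iteration.
    u = u0.copy()
    for _ in range(n_iter):
        u = proj_C(u - alpha * grad_J(u))
    return u

# Example: minimize 0.5 * ||u - b||^2 over C = [0, 1]^n (projection = clip).
b = np.array([1.0, -2.0, 3.0])
u_hat = projected_gradient(lambda u: u - b,
                           lambda u: np.clip(u, 0.0, 1.0),
                           np.zeros(3))  # converges to clip(b, 0, 1)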


Legendre-Fenchel transform

Let E denote a finite-dimensional Hilbert space, E* its dual space, and ⟨·, ·⟩ the bilinear mapping over E* × E defined by

∀φ ∈ E*, ∀u ∈ E,   ⟨φ, u⟩ = φ(u) .

Definition (affine continuous applications)

An affine continuous application is a function of the type

A : u ↦ ⟨φ, u⟩ + α

where φ ∈ E* is called the slope of A, and α is a real number, called the constant term of A.


Legendre-Fenchel transform

Q. Under which condition(s) does the affine continuous application A, with slope φ ∈ E* and constant term α ∈ R, lower-bound J everywhere on E?

    ∀u ∈ E, A(u) ≤ J(u)
⇔  ∀u ∈ E, ⟨φ, u⟩ + α ≤ J(u)
⇔  ∀u ∈ E, ⟨φ, u⟩ − J(u) ≤ −α
⇔  sup_{u∈E} { ⟨φ, u⟩ − J(u) } ≤ −α
⇔  J*(φ) ≤ −α
⇔  −J*(φ) ≥ α


Legendre-Fenchel transform

Definition (Legendre-Fenchel transform)

Let J : E → R̄. The Legendre-Fenchel transform of J is the application J* : E* → R̄ defined by

∀φ ∈ E*,   J*(φ) = sup_{u∈E} { ⟨φ, u⟩ − J(u) } .

Geometrical intuition: −J*(φ) is the largest constant term α that an affine continuous function with slope φ can have while remaining below J everywhere on E.
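
A standard worked example (added for illustration): for J(u) = ½‖u‖_2^2, the supremum in J*(φ) = sup_u ⟨φ, u⟩ − ½‖u‖_2^2 is attained at u = φ (set the gradient φ − u to zero), giving J*(φ) = ½‖φ‖_2^2: the quadratic is its own Legendre-Fenchel transform.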


Legendre-Fenchel transform

By definition of J*, we have

∀φ ∈ E*,   J*(φ) = sup_{u∈E} { ⟨φ, u⟩ − J(u) } .

We remark that

J*(0_{E*}) = − inf_{u∈E} J(u) ;

we retrieve here a link between “null slope” and “infimum of J”.


Subdifferentiability

Definition (exact applications)

Let u ∈ E and φ ∈ E*. The affine continuous application

A : v ↦ ⟨φ, v − u⟩ + J(u)

satisfies A(u) = J(u). We say that A is exact at u.

Definition (subdifferentiability & subgradient)

A function J : E → R̄ is said to be subdifferentiable at the point u ∈ E if it admits at least one lower-bounding affine continuous function which is exact at u.
The slope φ of such an affine function is then called a subgradient of J at the point u.
The set of all subgradients of J at u is denoted ∂J(u).

Subdifferentiability

Basic properties:

φ ∈ ∂J(u)  ⇔  ∀v ∈ E, ⟨φ, v − u⟩ + J(u) ≤ J(v)

0 ∈ ∂J(û)  ⇔  û ∈ argmin_{u∈E} J(u)
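
A concrete example (added for illustration): for J(u) = |u| on E = R, a slope φ satisfies φ·(v − 0) + |0| ≤ |v| for all v exactly when |φ| ≤ 1, so ∂J(0) = [−1, 1], while ∂J(u) = {sign(u)} for u ≠ 0; since 0 ∈ ∂J(0), the origin is indeed the minimizer.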

Remark: transformation of a constrained problem into an unconstrained one:

argmin_{u∈C} J(u) = argmin_{u∈E} J(u) + ı_C(u)

where ı_C(u) = 0 if u ∈ C, and ı_C(u) = +∞ if u ∉ C.


Properties & subdifferential calculus

Any convex and lower semi-continuous (l.s.c.) function is subdifferentiable over the interior of its domain.

If J is convex and differentiable at u, then ∂J(u) = {∇J(u)}.

∀u ∈ E, ∂(J1 + J2)(u) ⊃ ∂J1(u) + ∂J2(u). The converse inclusion is satisfied under some additional (but weak) hypotheses on J1 and J2.

If J is convex and lower semi-continuous, then

φ ∈ ∂J(u)  ⇔  u ∈ ∂J*(φ) .

If J is convex and lower semi-continuous, then

J**(u) = J(u) .

Legendre-Fenchel transform of the discrete TV

Theorem (Legendre-Fenchel transform of the discrete TV)

The Legendre-Fenchel transform of TV is the indicator function of the convex set C = div B, where

B = { p ∈ R^Ω × R^Ω ; ‖p‖_{∞,2} ≤ 1 } ,

and ‖·‖_{∞,2} := p ↦ max_{(x,y)∈Ω} ‖p(x,y)‖_2 is the dual norm of the ‖·‖_{1,2} norm.

In other words:

TV*(φ) = ı_C(φ) = 0 if ∃p ∈ B such that φ = div p, and +∞ otherwise.

Proof: this result is easy to prove using the convex analysis tools presented before (see the proof in the appendix).

The ROF (Rudin, Osher, Fatemi) model


We are interested in the computation of

û_MAP = argmin_{u∈R^Ω} J(u) := ½ ‖u − u0‖_2^2 + λ TV(u) .

Thanks to the previous properties, we have

    û_MAP = argmin_{u∈R^Ω} ½ ‖u − u0‖_2^2 + λ TV(u)
⇔  0 ∈ û_MAP − u0 + λ ∂TV(û_MAP)
⇔  û_MAP ∈ ∂TV*( (u0 − û_MAP)/λ )
⇔  u0/λ ∈ (u0 − û_MAP)/λ + (1/λ) ∂TV*( (u0 − û_MAP)/λ )


The ROF (Rudin, Osher, Fatemi) model

Dual formulation of the ROF problem: let ŵ = (u0 − û_MAP)/λ. We have

0 ∈ ŵ − u0/λ + (1/λ) ∂TV*(ŵ) .

Thus,

ŵ = argmin_{w∈R^Ω} ½ ‖w − u0/λ‖_2^2 + (1/λ) TV*(w) .

Last, since TV*(w) = ı_C(w), we have

ŵ = argmin_{w∈C} ‖w − u0/λ‖_2^2 = Proj_C(u0/λ) ,

and thus û_MAP = u0 − λ Proj_C(u0/λ).
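
A minimal numpy sketch of this dual denoising. Computing Proj_C(u0/λ) with C = div B is itself done iteratively; the projected-gradient scheme on p ∈ B below is one standard way to do it (in the spirit of Chambolle's 2004 projection algorithm — an assumption on our part, not stated on the slides; τ ≤ 1/8 is the usual step bound for this discretization of ∇):

import numpy as np

def grad(u):
    gx = np.zeros_like(u); gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy = np.zeros_like(u); gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Discrete divergence, defined as the negative adjoint of grad above.
    dx = np.zeros_like(px)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy = np.zeros_like(py)
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def rof_denoise(u0, lam, tau=0.125, n_iter=200):
    # u_hat = u0 - lam * Proj_C(u0/lam), where Proj_C(g) = div(p_hat) with
    # p_hat minimizing ||div p - g||^2 over the ball B (projected gradient).
    g = u0 / lam
    px, py = np.zeros_like(u0), np.zeros_like(u0)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - g)
        px, py = px + tau * gx, py + tau * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2))  # pointwise Proj_B
        px, py = px / norm, py / norm
    return u0 - lam * div(px, py)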


Inverse problems (primal-dual approach)

Given a linear operator A : R^Ω → R^ω, we now consider

û_MAP = argmin_{u∈R^Ω} ½ ‖Au − u0‖_2^2 + λ TV(u) .

Primal-dual formulation: let us use F** = F (valid as soon as F is convex and lower semi-continuous).

TV(u) = TV**(u) yields a dual formulation (also called weak formulation) of the discrete TV:

TV(u) = max_{p∈B} ⟨∇u, p⟩ .

½ ‖Au − u0‖_2^2 = f(Au) = f**(Au) = max_{q∈R^ω} ⟨q, Au⟩ − f*(q) ,

and we can easily show that f*(q) = ½ ‖q + u0‖_2^2 − ½ ‖u0‖_2^2 .
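
Indeed (a short check of this last claim): with f(v) = ½‖v − u0‖_2^2, the supremum in f*(q) = sup_v ⟨q, v⟩ − ½‖v − u0‖_2^2 is attained at v = u0 + q, which gives f*(q) = ⟨q, u0⟩ + ½‖q‖_2^2 = ½‖q + u0‖_2^2 − ½‖u0‖_2^2.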


Inverse problems (primal-dual approach)


By replacing these two terms in the initial problem, we get a primal-dual reformulation:

û_MAP = argmin_{u∈R^Ω} max_{p∈B, q∈R^ω} ⟨(λ∇u, Au), (p, q)⟩ − ½ ‖q + u0‖_2^2 .

Such a problem can be handled using the Chambolle-Pock algorithm (2011), which boils down to the numerical scheme

p^{n+1} = Proj_B ( p^n + σλ∇ū^n )
q^{n+1} = ( q^n + σ(Aū^n − u0) ) / (1 + σ)
u^{n+1} = u^n + τλ div p^{n+1} − τA*q^{n+1}
ū^{n+1} = u^{n+1} + θ (u^{n+1} − u^n)

The convergence of the iterates (u^n, p^n, q^n) toward a solution of the primal-dual problem is ensured for θ = 1 and τσ |||K|||^2 < 1, noting K = u ↦ (λ∇u, Au).
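
A minimal numpy sketch of this scheme (assumptions: grad/div as in the earlier sketches, |||A||| ≤ 1 for the default step sizes, and A = identity in the usage example, which reduces the problem to ROF denoising):

import numpy as np

def grad(u):
    gx = np.zeros_like(u); gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy = np.zeros_like(u); gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    dx = np.zeros_like(px)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy = np.zeros_like(py)
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_pock(u0, A, At, lam, n_iter=300, theta=1.0):
    # Iterates of the scheme above; A and At are the operator and its adjoint.
    # |||K|||^2 <= 8*lam^2 + |||A|||^2 (here |||A||| <= 1 is assumed), and
    # tau = sigma = 0.9/sqrt(...) enforces tau * sigma * |||K|||^2 < 1.
    L2 = 8.0 * lam**2 + 1.0
    tau = sigma = 0.9 / np.sqrt(L2)
    u = np.zeros_like(At(u0)); u_bar = u.copy()
    px, py = np.zeros_like(u), np.zeros_like(u)
    q = np.zeros_like(u0)
    for _ in range(n_iter):
        gx, gy = grad(u_bar)
        px, py = px + sigma * lam * gx, py + sigma * lam * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2))   # pointwise Proj_B
        px, py = px / norm, py / norm
        q = (q + sigma * (A(u_bar) - u0)) / (1.0 + sigma)
        u_new = u + tau * lam * div(px, py) - tau * At(q)
        u_bar = u_new + theta * (u_new - u)
        u = u_new
    return u

# With A = identity, this solves the ROF model of the previous slides:
# u_hat = chambolle_pock(u0, lambda v: v, lambda v: v, lam=0.2)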

Conclusion

The tools presented here are based on very simple notions.

They are useful to reformulate a (convex) problem into a dual (or primal-dual) one, which can sometimes be much simpler than the initial problem.

What is the right framework for using these tools?

The cost function must be convex and lower semi-continuous (the class Γ(E)). When this is not the case, it may be replaced by a convex approximation (Γ-regularization, Moreau-Yosida envelope, surrogate functions, etc.).

A dual reformulation often starts with the computation of the Legendre-Fenchel transform of a part of the cost function (which is particularly easy in the case of ℓ^p norms).

The dual variables are easy to manipulate when E is a Hilbert space.

Bibliography

I. Ekeland and R. Témam, “Convex Analysis and Variational Problems”, 1999.
A. Chambolle and T. Pock, “A first-order primal-dual algorithm for convex problems with applications to imaging”, 2011.
A. Chambolle, V. Caselles, D. Cremers, M. Novaga and T. Pock, “An introduction to total variation for image analysis”, 2010.
R. T. Rockafellar, “Convex Analysis”, 1997.
S. Boyd and L. Vandenberghe, “Convex Optimization”, 2009.
L. I. Rudin, S. Osher and E. Fatemi, “Nonlinear total variation based noise removal algorithms”, 1992.


Appendix (Computation of TV*)

Lemma (Legendre-Fenchel transform of a norm)

Let E denote a Hilbert space, endowed with a norm ‖·‖ and a scalar product ⟨·, ·⟩. We have

∀v ∈ E,   ‖·‖*(v) = ı_{B∗}(v) := 0 if ‖v‖_∗ ≤ 1, and +∞ otherwise,

where ‖·‖_∗ = v ↦ sup_{u∈E, ‖u‖≤1} ⟨v, u⟩ denotes the dual norm (the superscript * denotes the Legendre-Fenchel transform, the subscript ∗ the dual norm).

In other words, ‖·‖* is the indicator function of the closed unit ball B∗ for the dual norm ‖·‖_∗.

Proof. We have ı_{B∗}*(u) = sup_{v∈E, ‖v‖_∗≤1} ⟨v, u⟩ = ‖u‖_∗∗ = ‖u‖ for any u ∈ E. Thus ‖·‖* = ı_{B∗}** = ı_{B∗}, since ı_{B∗} ∈ Γ(E).


Appendix (Computation of TV*)

Lemma (dual norm of the ‖·‖_{1,2} norm)

The two norms ‖·‖_{1,2} and ‖·‖_{∞,2} over the Hilbert space E := R^Ω × R^Ω are dual to each other.

Proof. Since E is reflexive, we just need to show that one norm is the dual of the other. Let us show that ‖·‖_{1,2} is the dual norm of ‖·‖_{∞,2}. For any p ∈ E, we have

sup_{q∈E, ‖q‖_{∞,2}≤1} ⟨p, q⟩_E = sup_{q∈E, ∀x∈Ω ‖q(x)‖_2≤1} Σ_{x∈Ω} ⟨p(x), q(x)⟩_{R^2}
                               = Σ_{x∈Ω} sup_{q(x)∈R^2, ‖q(x)‖_2≤1} ⟨p(x), q(x)⟩_{R^2}
                               = Σ_{x∈Ω} ‖p(x)‖_2
                               = ‖p‖_{1,2} .

Appendix (Computation of TV*)

Theorem (Legendre-Fenchel transform of TV)

TV* = ı_C, where C = div B and B = { p ∈ E; ‖p‖_{∞,2} ≤ 1 }.

Proof.

Since the two norms ‖·‖_{1,2} and ‖·‖_{∞,2} are dual to each other, we have ‖·‖_{1,2}* = ı_B, and thus ‖·‖_{1,2} = ‖·‖_{1,2}** = ı_B*.

Besides, for all u ∈ R^Ω, we have

ı_C*(u) = sup_{v∈C} ⟨u, v⟩ = sup_{p∈B} ⟨u, div p⟩ = sup_{p∈B} ⟨∇u, p⟩ = ı_B*(∇u) ,

where the third equality uses ⟨u, div p⟩ = −⟨∇u, p⟩ together with the symmetry of the ball B.

Therefore, ı_C*(u) = ı_B*(∇u) = ‖∇u‖_{1,2} = TV(u) for any u. Thus TV = ı_C*, and finally TV* = ı_C** = ı_C.
