
Optimization and Algorithms

February 6, 2023

Write your name:

Write your student number:

Write your answers (A, B, C, D, E, or F) to problems 1 to 3 in this box

Your answer to problem 1:

Your answer to problem 2:

Your answer to problem 3:

Exam
1. Deconflicted trajectories. A trajectory T of duration T in Rd is a sequence of T points in Rd, denoted as T = {x(1), x(2), . . . , x(T)}, with x(t) ∈ Rd for 1 ≤ t ≤ T. Note that t denotes discrete time; thus t is an integer (such as t = 0, 1, 2, 3, . . .).
Let T1 = {x1(1), x1(2), . . . , x1(T)} and T2 = {x2(1), x2(2), . . . , x2(T)} be two trajectories of duration T in Rd. We say that T1 and T2 are space-deconflicted if ∥x1(t) − x2(s)∥₂ > ε for 1 ≤ t, s ≤ T, where ε is a given positive number. We say that T1 and T2 are time-deconflicted if ∥x1(t) − x2(t)∥₂ > ε for 1 ≤ t ≤ T.
Consider the following two controlled dynamic linear systems. The state of system
1 at time t is denoted by x1 (t) ∈ Rd , for 1 ≤ t ≤ T and obeys the recursion

x1 (t + 1) = A1 x1 (t) + B1 u1 (t), 0 ≤ t ≤ T − 1,

where A1 ∈ Rd×d and B1 ∈ Rd×p are given matrices, x1 (0) ∈ Rd is a given initial
state and u1 (t) ∈ Rp is the control input of system 1 at time t, for 0 ≤ t ≤ T − 1.
Note that the trajectory T1 depends on the inputs u1 (t), 0 ≤ t ≤ T − 1.
Similarly, for system 2 we have

x2 (t + 1) = A2 x2 (t) + B2 u2 (t), 0 ≤ t ≤ T − 1.

Note that the trajectory T2 depends on the inputs u2 (t), 0 ≤ t ≤ T − 1.


Finally, let Tref = {r(1), r(2), . . . , r(T )} be a given, fixed reference trajectory of
duration T in Rd .
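
For intuition only (this is not part of the exam statement), here is a minimal NumPy sketch that rolls out the recursion above for both systems from made-up matrices and inputs, and then checks the time- and space-deconfliction conditions numerically; every numerical value in it is a placeholder.

```python
import numpy as np

def rollout(A, B, x0, U):
    """Roll out x(t+1) = A x(t) + B u(t) and return {x(1), ..., x(T)}."""
    xs, x = [], x0
    for u in U:                              # u(0), ..., u(T-1)
        x = A @ x + B @ u
        xs.append(x)
    return np.array(xs)                      # shape (T, d)

def time_deconflicted(T1, T2, eps):
    # ||x1(t) - x2(t)||_2 > eps for every t
    return bool(np.all(np.linalg.norm(T1 - T2, axis=1) > eps))

def space_deconflicted(T1, T2, eps):
    # ||x1(t) - x2(s)||_2 > eps for every pair (t, s)
    dists = np.linalg.norm(T1[:, None, :] - T2[None, :, :], axis=2)
    return bool(np.all(dists > eps))

# Made-up data (d = 2, p = 1, T = 10), purely for illustration.
rng = np.random.default_rng(0)
A1 = A2 = np.eye(2)
B1 = B2 = np.array([[1.0], [0.5]])
U1, U2 = rng.normal(size=(10, 1)), rng.normal(size=(10, 1))
traj1 = rollout(A1, B1, np.zeros(2), U1)
traj2 = rollout(A2, B2, np.ones(2), U2)
print(time_deconflicted(traj1, traj2, eps=0.1),
      space_deconflicted(traj1, traj2, eps=0.1))
```
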
We want to design the control inputs u1 (t) (0 ≤ t ≤ T − 1) and u2 (t) (0 ≤ t ≤ T − 1)
so that:

• the final state x1(T) of system 1 is as close as possible to a given, desired state p1 ∈ Rd;

• the final state x2(T) of system 2 is as close as possible to a given, desired state p2 ∈ Rd;
• the trajectories T1 and T2 are time-deconflicted;
• the trajectories T1 and Tref are space-deconflicted;
• the trajectories T2 and Tref are space-deconflicted.

One of the following problem formulations is suitable for the given context.

(A)

minimize over {x1(t), u1(t), x2(t), u2(t)}:
    ∥x1(T) − p1∥₂² + ∥x2(T) − p2∥₂²                      (1)
subject to:
    x1(t + 1) = A1 x1(t) + B1 u1(t),  0 ≤ t ≤ T − 1
    x2(t + 1) = A2 x2(t) + B2 u2(t),  0 ≤ t ≤ T − 1
    min{∥x1(t) − x2(s)∥₂ : 1 ≤ t, s ≤ T} < ε
    min{∥x1(t) − r(s)∥₂ : 1 ≤ t, s ≤ T} > ε
    min{∥x2(t) − r(s)∥₂ : 1 ≤ t, s ≤ T} > ε

(B)

minimize over {x1(t), u1(t), x2(t), u2(t)}:
    ∥x1(T) − p1∥₂² + ∥x2(T) − p2∥₂²                      (2)
subject to:
    x1(t + 1) = A1 x1(t) + B1 u1(t),  0 ≤ t ≤ T − 1
    x2(t + 1) = A2 x2(t) + B2 u2(t),  0 ≤ t ≤ T − 1
    max{∥x1(t) − x2(t)∥₂ : 1 ≤ t ≤ T} > ε
    max{∥x1(t) − r(s)∥₂ : 1 ≤ t, s ≤ T} > ε
    max{∥x2(t) − r(s)∥₂ : 1 ≤ t, s ≤ T} > ε

(C)

minimize over {x1(t), u1(t), x2(t), u2(t)}:
    ∥x1(T) − p1∥₂² + ∥x2(T) − p2∥₂²                      (3)
subject to:
    x1(t + 1) = A1 x1(t) + B1 u1(t),  0 ≤ t ≤ T − 1
    x2(t + 1) = A2 x2(t) + B2 u2(t),  0 ≤ t ≤ T − 1
    min{∥x1(t) − x2(t)∥₂ : 1 ≤ t ≤ T} < ε
    min{∥x1(t) − r(s)∥₂ : 1 ≤ t, s ≤ T} < ε
    min{∥x2(t) − r(s)∥₂ : 1 ≤ t, s ≤ T} < ε

(D)

minimize over {x1(t), u1(t), x2(t), u2(t)}:
    ∥x1(T) − p1∥₂² + ∥x2(T) − p2∥₂²                      (4)
subject to:
    x1(t + 1) = A1 x1(t) + B1 u1(t),  0 ≤ t ≤ T − 1
    x2(t + 1) = A2 x2(t) + B2 u2(t),  0 ≤ t ≤ T − 1
    max{∥x1(t) − x2(t)∥₂ : 1 ≤ t ≤ T} < ε
    max{∥x1(t) − r(s)∥₂ : 1 ≤ t, s ≤ T} < ε
    max{∥x2(t) − r(s)∥₂ : 1 ≤ t, s ≤ T} < ε

(E)

minimize over {x1(t), u1(t), x2(t), u2(t)}:
    ∥x1(T) − p1∥₂² + ∥x2(T) − p2∥₂²                      (5)
subject to:
    x1(t + 1) = A1 x1(t) + B1 u1(t),  0 ≤ t ≤ T − 1
    x2(t + 1) = A2 x2(t) + B2 u2(t),  0 ≤ t ≤ T − 1
    min{∥x1(t) − x2(t)∥₂ : 1 ≤ t ≤ T} > ε
    min{∥x1(t) − r(s)∥₂ : 1 ≤ t, s ≤ T} > ε
    min{∥x2(t) − r(s)∥₂ : 1 ≤ t, s ≤ T} > ε

(F)

minimize over {x1(t), u1(t), x2(t), u2(t)}:
    ∥x1(T) − p1∥₂² + ∥x2(T) − p2∥₂²                      (6)
subject to:
    x1(t + 1) = A1 x1(t) + B1 u1(t),  0 ≤ t ≤ T − 1
    x2(t + 1) = A2 x2(t) + B2 u2(t),  0 ≤ t ≤ T − 1
    min{∥x1(t) − x2(s)∥₂ : 1 ≤ t, s ≤ T} > ε
    min{∥x1(t) − r(s)∥₂ : 1 ≤ t, s ≤ T} > ε
    min{∥x2(t) − r(s)∥₂ : 1 ≤ t, s ≤ T} > ε

Which one?
Write your answer (A, B, C, D, E, or F) in the box at the top of page 1

2. Unconstrained optimization. Consider the optimization problem

minimize over x ∈ R:
    e^(x−a) + e^(−x) + x² − 2x + x₊                      (7)

where x₊ denotes max{x, 0}.

The point x⋆ = 0 is a global minimizer of (7) for one of the following choices of a:

(A) a = −2
(B) a = −1
(C) a = 0
(D) a = 1
(E) a = 2
(F) a = 3

Which one?
Write your answer (A, B, C, D, E, or F) in the box at the top of page 1
Hint: the numerical values log(2) ≈ 0.7 and log(3) ≈ 1.1 might be useful.
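
A quick numerical sanity check (not expected in the exam, and no substitute for the analysis) is to minimize the objective of (7) for a candidate value of a and see where the minimizer lands; the sketch below uses scipy.optimize.minimize_scalar with a placeholder value of a.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def objective(x, a):
    # e^(x - a) + e^(-x) + x^2 - 2x + x_+, with x_+ = max(x, 0)
    return np.exp(x - a) + np.exp(-x) + x**2 - 2*x + max(x, 0.0)

a = 0.0  # placeholder: substitute each of the candidate values (A)-(F)
res = minimize_scalar(lambda x: objective(x, a), bounds=(-5.0, 5.0), method="bounded")
print(f"a = {a}: numerical minimizer ~ {res.x:.4f}")
```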

3. Gradient descent algorithm. Consider the function f : R2 → R given by

    f(a, b) = (1/2)a² + (a − b)².

Suppose we do one iteration of the gradient descent algorithm (applied to f) starting from the point x0 = (1, 1) and using the stepsize 1.
Which of the following points is the next iterate x1?

(A) (1, 0)
(B) (−1, 1)
(C) (0, 1)
(D) (0, −1)
(E) (−1, 0)
(F) (1, −1)

Write your answer (A, B, C, D, E, or F) in the box at the top of page 1
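
For reference, one step of gradient descent is just x1 = x0 − α ∇f(x0); the NumPy sketch below spells out that update for the f and starting point of problem 3, with the gradient written out by hand (treat it as an illustration of the update rule, not as the intended solution method).

```python
import numpy as np

def grad_f(x):
    # f(a, b) = 0.5*a**2 + (a - b)**2, so
    # df/da = a + 2*(a - b)  and  df/db = -2*(a - b)
    a, b = x
    return np.array([a + 2*(a - b), -2*(a - b)])

x0 = np.array([1.0, 1.0])
alpha = 1.0                       # stepsize
x1 = x0 - alpha * grad_f(x0)      # one gradient descent iteration
print(x1)
```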

4. Signal-denoising as a least-squares problem. Consider the function f : Rn → R, f(r) = rᵀDr, where D is a given n × n diagonal matrix with positive diagonal entries,

    D = diag(d1, d2, . . . , dn),

with di > 0 for 1 ≤ i ≤ n.
Consider the following optimization problem

minimize over s, v:
    ∥s − s̄∥₂² + f(v)                                     (8)
subject to:
    y = As + v,

where the variables to optimize are s ∈ Rp and v ∈ Rn; the matrix A ∈ Rn×p and the vectors y ∈ Rn and s̄ ∈ Rp are given. This problem can be interpreted as a signal-denoising problem: we observe y and want to decompose it as the sum of a signal of interest s and noise v; we know that s should be close to the nominal signal s̄ and that v should be close to zero (the larger the di, the more confident we are that the component vi should be close to zero).
Problem (8) can be reduced to a least-squares problem involving only the variable
s, that is, it can be reduced to a problem of the form

minimize over s:
    ∥Ãs − β∥₂²                                           (9)

for some matrix Ã and vector β.
Give Ã and β in terms of the constants D, y, A, and s̄.
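
The reduction asked for above typically rests on the stacking identity ∥F1 s − g1∥₂² + ∥F2 s − g2∥₂² = ∥[F1; F2]s − [g1; g2]∥₂²; the sketch below illustrates that identity with made-up blocks F1, g1, F2, g2 (placeholders, not the answer to this problem) and solves the stacked problem with np.linalg.lstsq.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 5
# Placeholder blocks of a generic stacked least-squares problem.
F1, g1 = np.eye(p), rng.normal(size=p)
F2, g2 = rng.normal(size=(n, p)), rng.normal(size=n)

# ||F1 s - g1||^2 + ||F2 s - g2||^2  ==  ||[F1; F2] s - [g1; g2]||^2
A_stack = np.vstack([F1, F2])
beta = np.concatenate([g1, g2])
s_hat, *_ = np.linalg.lstsq(A_stack, beta, rcond=None)

lhs = np.sum((F1 @ s_hat - g1)**2) + np.sum((F2 @ s_hat - g2)**2)
rhs = np.sum((A_stack @ s_hat - beta)**2)
print(np.isclose(lhs, rhs))   # True: the two forms of the objective agree
```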

5. A simple optimization problem. Consider the function f : R2 → R, f(x) = (1/2)xᵀMx, where

    M = [ a  b
          b  a ].

The constants a and b satisfy 0 < a < b.
Solve in closed form the optimization problem

maximize over x:
    f(x)                                                  (10)
subject to:
    1ᵀx = 1,

where 1 denotes the vector 1 = (1, 1).
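
A numerical cross-check (no substitute for the closed-form solution requested) is to note that every feasible point can be written as x = (t, 1 − t) and scan over t; the values of a and b below are arbitrary placeholders satisfying 0 < a < b.

```python
import numpy as np

a, b = 1.0, 2.0                        # placeholders with 0 < a < b
M = np.array([[a, b], [b, a]])

def f(x):
    return 0.5 * x @ M @ x

ts = np.linspace(-5.0, 5.0, 200001)    # feasible points are x = (t, 1 - t)
vals = np.array([f(np.array([t, 1.0 - t])) for t in ts])
i = int(np.argmax(vals))
print("best t ~", ts[i], "  max value ~", vals[i])
```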

6. A convex optimization problem. Consider the following optimization problem

minimize over x1, x2, . . . , xn:
    g(x1 − c1) + g(x2 − c2) + · · · + g(xn − cn)          (11)
subject to:
    a1 x1 + a2 x2 + · · · + an xn = b,

where the variables to optimize are xi ∈ R, for 1 ≤ i ≤ n. The vectors ai ∈ Rp, 1 ≤ i ≤ n, and b ∈ Rp are given. The constants ci, 1 ≤ i ≤ n, are also given. The function g : R → R is defined as follows:

    g(x) = x²   if x ≥ 0,
    g(x) = −x   if x < 0.

Show that (11) is a convex optimization problem.
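
As an informal illustration (not the proof being asked for), one can spot-check the convexity inequality g(θx + (1 − θ)y) ≤ θ g(x) + (1 − θ) g(y) for this particular g on random points:

```python
import numpy as np

def g(x):
    # g(x) = x^2 for x >= 0 and g(x) = -x for x < 0
    return x**2 if x >= 0 else -x

rng = np.random.default_rng(0)
violations = 0
for _ in range(100_000):
    x, y = rng.uniform(-10.0, 10.0, size=2)
    theta = rng.uniform(0.0, 1.0)
    lhs = g(theta * x + (1.0 - theta) * y)
    rhs = theta * g(x) + (1.0 - theta) * g(y)
    violations += lhs > rhs + 1e-9
print("convexity violations found:", int(violations))   # expected: 0
```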

7. A convex function based on a worst-case representation. Show that the function f : R → R,

    f(x) = max{ ∥(a + u)x − b∥₂ : ∥u∥₂ = r }              (12)

is convex, where the vectors a, b ∈ Rn and the constant r > 0 are given.
In words: f takes as input a number x and returns as output the largest value of the expression ∥(a + u)x − b∥₂ as u ranges over the sphere centered at the origin and with radius r.
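
To build intuition for f (again, not part of the exam), one can approximate the maximum over the sphere {u : ∥u∥₂ = r} by sampling many such u and taking the largest norm; the dimension, r, a, and b below are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 4, 0.5                                   # placeholder dimension and radius
a, b = rng.normal(size=n), rng.normal(size=n)   # placeholder vectors

# Sample points on the sphere of radius r.
U = rng.normal(size=(5000, n))
U = r * U / np.linalg.norm(U, axis=1, keepdims=True)

def f_approx(x):
    # Approximate max over ||u||_2 = r of ||(a + u) x - b||_2 using the samples.
    return float(np.max(np.linalg.norm((a + U) * x - b, axis=1)))

for x in (-1.0, 0.0, 0.5, 1.0, 2.0):
    print(x, round(f_approx(x), 3))
```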
