Solutions To Exam in 2E1262 Nonlinear Control, Apr 16, 2004

The document contains solutions to an exam in nonlinear control. It addresses 5 problems: 1) Analyzes the stability of two nonlinear systems using Lyapunov methods. 2) Examines the stability of an equilibrium point and defines an invariant set. 3) Matches nonlinear systems to descriptions of their stability properties. 4) Derives the describing function for a system with a nonlinear element. 5) Formulates an optimal control problem, defines the Hamiltonian, and derives the adjoint equations.

1. (a) The equilibria are given by \bar{x} = (\bar{x}_1, \bar{x}_2) = \frac{k\pi}{2}(1, -1), k \in \mathbb{Z}. The linearized system about \bar{x} is given by \dot{z} = Az with

A = \begin{pmatrix} 1 & 1 \\ (-1)^k & (-1)^{k+1} \end{pmatrix},

which is unstable for every k, since \det(sI - A) = (s - 1)\bigl(s - (-1)^{k+1}\bigr) + (-1)^{k+1} is an unstable polynomial.

(b) The system is on strict feedback form because it can be written as

\dot{x}_1 = f_1(x_1) + g_1(x_1)\, x_2
\dot{x}_2 = f_2(x_1, x_2) + g_2(x_1, x_2)\, u,

see Lecture 10.

(c) Using the notation of Lecture 10, we can choose \phi_1(x_1) = -2x_1 and V_1(x_1) = x_1^2/2, and thus

u_1 = \frac{d\phi_1}{dx_1}(x_1 + x_2) - \frac{dV_1}{dx_1} - \bigl(x_2 - \phi_1(x_1)\bigr) = -5x_1 - 3x_2.

Then, we choose

u = u_1 - \sin(x_1 - x_2) = -5x_1 - 3x_2 - \sin(x_1 - x_2).

(d) Consider the Lyapunov function candidate

V(x) = V_2(x) = \frac{x_1^2}{2} + \frac{(2x_1 + x_2)^2}{2},

as suggested by the backstepping lemma. Then, with the control as in (c), we have

\dot{V} = \frac{dV}{dx} f(x, u) = -5x_1^2 - 4x_1 x_2 - x_2^2 = -x_1^2 - (2x_1 + x_2)^2 < 0, \quad x \neq 0,

so, since V is positive definite, the system is asymptotically stable.
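As a sanity check (not part of the original solutions), the linearization in (a) and the Lyapunov computation in (d) can be verified symbolically. The sketch below assumes sympy and uses the closed loop obtained in (c), where the control cancels the sine term so that \dot{x}_1 = x_1 + x_2, \dot{x}_2 = -5x_1 - 3x_2:

```python
# Sketch: symbolic check of 1(a) and 1(d) with sympy.
import sympy as sp

x1, x2 = sp.symbols("x1 x2", real=True)
s = sp.symbols("s")

# (a) characteristic polynomial of A = [[1, 1], [(-1)^k, (-1)^(k+1)]] for even/odd k
for k in (0, 1):
    A = sp.Matrix([[1, 1], [(-1)**k, (-1)**(k + 1)]])
    # k = 0: s**2 - 2 (roots +-sqrt(2)); k = 1: s**2 - 2*s + 2 (roots 1 +- i) -> unstable
    print(k, sp.expand((s * sp.eye(2) - A).det()), A.eigenvals())

# (d) Lyapunov function candidate and its derivative along the closed loop from (c)
V = x1**2 / 2 + (2*x1 + x2)**2 / 2
f = sp.Matrix([x1 + x2, -5*x1 - 3*x2])
Vdot = (sp.Matrix([V]).jacobian([x1, x2]) * f)[0]
# Prints 0, confirming Vdot = -x1^2 - (2*x1 + x2)^2
print(sp.simplify(Vdot + x1**2 + (2*x1 + x2)**2))
```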

2. (a) The equilibria are (0, 0) and (1, 1), with linearized systems given by

\dot{z} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} z, \qquad \dot{z} = \begin{pmatrix} -3 & 1 \\ 6 & -3 \end{pmatrix} z,

respectively. Hence, the origin is not locally stable, while (1, 1) is locally (asymptotically) stable.

(b) Consider the banana-shaped set \Omega (draw a picture). For each initial point x(0) on the left boundary of \Omega, we have x_2(0) = x_1^2(0), and thus

\dot{x}_1(0) = -x_1^3(0) + x_2(0) = -x_1^3(0) + x_1^2(0) > 0
\dot{x}_2(0) = x_1^6(0) - x_2^3(0) = 0,

so the trajectory is directed into \Omega. For each initial point x(0) on the right boundary of \Omega, we have x_2(0) = x_1^3(0), and thus

\dot{x}_1(0) = -x_1^3(0) + x_2(0) = 0
\dot{x}_2(0) = x_1^6(0) - x_2^3(0) = x_1^6(0) - x_1^9(0) > 0,

so the trajectory is again directed into \Omega. Hence, \Omega is invariant.
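The claims in (a) and (b) can also be checked numerically. The sketch below (not part of the original solutions) assumes numpy and uses the vector field appearing in the boundary computations above, f(x) = (-x_1^3 + x_2, x_1^6 - x_2^3):

```python
# Sketch: numerical check of 2(a) and 2(b).
import numpy as np

def f(x1, x2):
    return np.array([-x1**3 + x2, x1**6 - x2**3])

def jac(x1, x2):
    return np.array([[-3*x1**2, 1.0],
                     [6*x1**5, -3*x2**2]])

# (a) eigenvalues of the linearizations at the equilibria (0, 0) and (1, 1)
print(np.linalg.eigvals(jac(0.0, 0.0)))   # [0, 0]: double zero eigenvalue at the origin
print(np.linalg.eigvals(jac(1.0, 1.0)))   # both negative: (1, 1) locally stable

# (b) the vector field on the two boundary curves of Omega, for 0 < x1 < 1
for x1 in np.linspace(0.1, 0.9, 5):
    # expect (positive, ~0) on x2 = x1^2 and (~0, positive) on x2 = x1^3
    print(f(x1, x1**2), f(x1, x1**3))
```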

(c) Sketch a trajectory illustrating how a solution starting at a point x(0) close to the origin tends to the point (1, 1).

3. (a) (i) corresponds to (b), because the origin is a stable focus for (i). (ii) corresponds to (d), because (ii) has an unstable equilibrium in the origin. (iii) corresponds to (a), because the linearization of (iii) has a marginally stable equilibrium in the origin (the linearized system has eigenvalues \pm i). (iv) corresponds to (c), because (iv) has no equilibrium in the origin.

(b) For (i)-(iii), we have the linearized systems

\dot{z} = \begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix} z, \qquad \dot{z} = \begin{pmatrix} 0 & 1 \\ -1 & 1 \end{pmatrix} z, \qquad \dot{z} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} z.
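A short numerical check of the three linearizations listed in (b), as reconstructed above (a sketch assuming numpy, not part of the original solutions):

```python
# Sketch: eigenvalues of the three linearizations in 3(b).
import numpy as np

A_i   = np.array([[0, 1], [-1, -1]])   # (i):   complex pair, negative real part -> stable focus
A_ii  = np.array([[0, 1], [-1,  1]])   # (ii):  complex pair, positive real part -> unstable
A_iii = np.array([[0, 1], [-1,  0]])   # (iii): eigenvalues +/- i -> marginally stable
for name, A in [("(i)", A_i), ("(ii)", A_ii), ("(iii)", A_iii)]:
    print(name, np.linalg.eigvals(A))
```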

(c) The closed-loop system consists of a system with gain less than or equal to a and a linear system with gain equal to one. The Small Gain Theorem hence gives the result.

4. (a) The describing function is given by N_f(A) = (b_1 + i a_1)/A, so we need to show that a_1 = 0. Recall that

a_1 = \frac{1}{\pi} \int_0^{2\pi} y(\phi) \cos\phi \, d\phi,

where y(\phi) = f(A\sin\phi) is the output when the input is u(\phi) = A\sin\phi. Since f and \sin are odd functions, we have

a_1 = \frac{1}{\pi} \left( \int_0^{\pi} y(\phi)\cos\phi \, d\phi + \int_{\pi}^{2\pi} y(\phi)\cos\phi \, d\phi \right)
    = \frac{1}{\pi} \left( \int_0^{\pi} f(A\sin\phi)\cos\phi \, d\phi - \int_0^{\pi} f(A\sin\phi)\cos\phi \, d\phi \right) = 0,

where the second integral has been rewritten with the change of variables \phi \mapsto 2\pi - \phi, under which \cos is unchanged while \sin (and hence the odd function f(A\sin\phi)) changes sign.

(b) The describing function is given by N_f(A) = (b_1 + i a_1)/A, where a_1 = 0, see (a), and

b_1 = \frac{1}{\pi} \int_0^{2\pi} y(\phi) \sin\phi \, d\phi = \frac{A^5}{\pi} \int_0^{2\pi} \sin^6\phi \, d\phi = \frac{5A^5}{8},

so that N_f(A) = b_1/A = 5A^4/8.

(c) The describing function represents an amplitude-dependent gain N(A). A rough sketch is shown below:

[Figure: rough sketch of the amplitude-dependent gain N(A) as a function of A.]
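The coefficients a_1 and b_1 can also be evaluated by quadrature. The sketch below (not part of the original solutions) assumes scipy/numpy and the odd nonlinearity f(u) = u^5 implied by the sin^6 integral in (b); the helper name is ad hoc:

```python
# Sketch: numerically evaluate a1 and b1 for f(u) = u**5 and compare with 5*A**5/8.
import numpy as np
from scipy.integrate import quad

def describing_fcn_coeffs(A, f=lambda u: u**5):
    y = lambda phi: f(A * np.sin(phi))
    a1 = quad(lambda phi: y(phi) * np.cos(phi), 0.0, 2*np.pi)[0] / np.pi
    b1 = quad(lambda phi: y(phi) * np.sin(phi), 0.0, 2*np.pi)[0] / np.pi
    return a1, b1

for A in (0.5, 1.0, 2.0):
    a1, b1 = describing_fcn_coeffs(A)
    # a1 should be ~0 and b1 should match 5*A**5/8, so that N(A) = b1/A = 5*A**4/8
    print(f"A={A}: a1={a1:.1e}, b1={b1:.4f}, 5A^5/8={5*A**5/8:.4f}, N(A)={b1/A:.4f}")
```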

5. (a) The optimal control problem (in generalized form) is given by

\min_u \int_0^{t_f} L \, dt = \min_u t_f \qquad (\text{i.e. } L = 1),

with

\dot{z}_1 = z_2
\dot{z}_2 = -z_2 - \frac{dg}{dx}(z_1) + u
\dot{z}_3 = u^2,

and \psi(z(t_f)) = 0, where \psi_1(z) = z_1 - 89 and \psi_2(z) = z_3 - 100. Here z(0) = 0.

(b) The Hamiltonian is given by

H = n_0 L + \lambda^T f = n_0 + \lambda_1 z_2 + \lambda_2 \left( -z_2 - \frac{dg}{dx}(z_1) + u \right) + \lambda_3 u^2.

(c) The adjoint equations are given by

\dot{\lambda}(t) = -\frac{\partial H}{\partial z}\bigl(z^*(t), u^*(t), \lambda(t), n_0\bigr),
\lambda^T(t_f) = n_0 \frac{\partial \phi}{\partial z}\bigl(t_f, z^*(t_f)\bigr) + \mu^T \frac{\partial \psi}{\partial z}\bigl(t_f, z^*(t_f)\bigr),

where \phi = 0, \mu = (\mu_1, \mu_2)^T, and

\frac{\partial \psi}{\partial z} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}.

Hence,

\dot{\lambda}_1 = \lambda_2 \frac{d^2 g}{dx^2}(z_1)
\dot{\lambda}_2 = \lambda_2 - \lambda_1
\dot{\lambda}_3 = 0,

with

\lambda_1(t_f) = \mu_1, \quad \lambda_2(t_f) = 0, \quad \lambda_3(t_f) = \mu_2.
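The adjoint equations in (c) follow mechanically from the Hamiltonian in (b); the sketch below (not part of the original solutions) reproduces them with sympy, leaving g as an unspecified function and using the sign conventions written out above:

```python
# Sketch: derive lambda_dot = -dH/dz from the Hamiltonian in 5(b).
import sympy as sp

z1, z2, z3, u, n0 = sp.symbols("z1 z2 z3 u n0")
lam = sp.symbols("lambda1:4")            # (lambda1, lambda2, lambda3)
g = sp.Function("g")

H = n0 + lam[0]*z2 + lam[1]*(-z2 - sp.diff(g(z1), z1) + u) + lam[2]*u**2
for i, zi in enumerate((z1, z2, z3)):
    print(f"lambda{i+1}_dot =", sp.simplify(-sp.diff(H, zi)))
# Expected output:
#   lambda1_dot = lambda2*Derivative(g(z1), (z1, 2))
#   lambda2_dot = -lambda1 + lambda2
#   lambda3_dot = 0
```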
