
Chapter Three

Special Matrices and Determinants in Economics

Partial derivatives and the Jacobian determinant
Matrix Recap: for systems of linear equations

 For matrices arising from systems of linear equations, the determinant
helps us to
• Test for consistency
• Test for functional independence
• Conclude whether a unique solution exists
• Test for singularity
But how do we
• test for consistency,
• test for functional independence,
• conclude whether a unique solution exists, and
• test for singularity
in systems of non-linear equations? This is where the concept of the
Jacobian comes in.
3.1 The Jacobian Determinant

 Comparative static analysis studies how the equilibrium value of an


endogenous variable will change when there is a change in any of
the exogenous variables or parameters.

 Partial derivatives can provide a means of testing whether there


exists functional dependence among a set of m functions in n
variables. This is related to the notion of Jacobian determinants
(named after Jacobi).

 Partial derivatives can also be used to test for linear or non-linear
dependency/consistency in a system of equations expressed in matrix form.
 If we have m differentiable functions in n variables, not
necessarily linear,

y1 = f¹(x1, x2, …, xn)
y2 = f²(x1, x2, …, xn)
…
ym = fᵐ(x1, x2, …, xn)

 where the symbol fᵐ denotes the mth function (and not the function
raised to the mth power), we can derive a total of m × n first-order
partial derivatives. Together they give rise to the Jacobian matrix.
 The Jacobian matrix is associated with a system of equations and is
the matrix composed of all first-order partial derivatives, arranged
in ordered sequence:

J = ∂(y1, …, ym)/∂(x1, …, xn) = [∂yi/∂xj]   (an m × n matrix)
 Notice that the elements of the ith row are the partial
derivatives of the ith function with respect to each of the
independent variables.
 The Jacobian determinant (often simply called the
Jacobian) is the determinant of the Jacobian matrix.

 A Jacobian test for the existence of functional
dependence among a set of m functions is provided by
the following theorem: the Jacobian |J| = 0 for all
values of x1, x2, …, xn if and only if the m functions are
functionally (linearly or non-linearly) dependent.
Example 1. Consider

y1 = 2x1 + 3x2
y2 = 4x1² + 12x1x2 + 9x2²

If we take all the first-order partial derivatives,

|J| = | 2            3           |
      | 8x1 + 12x2   12x1 + 18x2 | = 2(12x1 + 18x2) − 3(8x1 + 12x2) = 0

 That is, the Jacobian vanishes for all values of x1 and
x2. Therefore, according to the theorem, the two
functions must be dependent: y2 = (y1)² → non-linear
dependence.
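As a quick cross-check, here is a minimal sympy sketch, assuming the two reconstructed functions above:

```python
# A minimal sympy sketch verifying that |J| = 0 for Example 1
# (assuming the reconstructed forms y1 = 2x1 + 3x2 and y2 = (y1)^2).
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
y1 = 2*x1 + 3*x2
y2 = 4*x1**2 + 12*x1*x2 + 9*x2**2

J = sp.Matrix([y1, y2]).jacobian([x1, x2])   # 2 x 2 Jacobian matrix
print(J.det().simplify())                    # 0 -> functionally dependent
```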
 We have earlier learnt that the rows of the coefficient matrix A of a linear
equation system Ax = d

are linearly dependent if and only if the determinant |A| = 0. This result
can now be interpreted as a special application of the Jacobian criterion
of functional dependence. For systems of linear equations, |A| = |J|.
Example 2. Test for functional dependency using the
Jacobian matrix.

Solution:
Solution of Non-linear Equations by the Jacobian
Matrix Method
 A system of linear equations Ax = d can be solved using x = A⁻¹d.
 How about a system of non-linear equations?
 To solve non-linear equations using the matrix method, we must linearize
the equations. The coefficients of the linearized equations (specifically
the partial derivatives taken with respect to the unknown variables)
are gathered to form the Jacobian matrix.
 For example, for two non-linear equations F and G, the Jacobian matrix
is

J = | ∂F/∂x   ∂F/∂y |
    | ∂G/∂x   ∂G/∂y |
 Generally, let Δ be the vector of unknown corrections, J be
the first-order Jacobian matrix and K be the vector of
constants (the residuals); then we can form the matrix equation
JΔ = K.

 How to solve the equation?

i. Begin with a set of initial approximations.
ii. Form the matrices J and K.
iii. Δ = J⁻¹K is computed using the matrix methods.
iv. Having updated the unknowns, the matrices are formed
again and the solution for Δ is computed again.
v. This procedure is iterated until convergence is achieved.
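The slide's own system is not reproduced below, so the following is a hedged sketch of steps i–v for a hypothetical system F = x² + y − 3 = 0, G = x + y² − 5 = 0:

```python
# A sketch of the iteration for a hypothetical system (the slide's own
# system and starting values are assumptions here).
import numpy as np

def F(v):
    x, y = v
    return np.array([x**2 + y - 3.0, x + y**2 - 5.0])

def J(v):                                   # first-order Jacobian matrix
    x, y = v
    return np.array([[2*x, 1.0],
                     [1.0, 2*y]])

v = np.array([1.0, 1.0])                    # i. initial approximations
for _ in range(20):                         # ii-v. iterate to convergence
    delta = np.linalg.solve(J(v), -F(v))    # corrections from J * delta = -F
    v = v + delta
    if np.max(np.abs(delta)) < 1e-10:
        break
print(v)                                    # -> approximately [1., 2.]
```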
Example: Find the solution of the given non-linear system of equations,
starting from the given initial approximation.

Solution:

i. Set up the required matrices:

 1st iteration:

Since the corrections are positive, we add them to the initial values of
the unknowns and iterate until both corrections are zero.

 2nd iteration:

Since the corrections are now zero, the solution has converged and the
iteration stops. The values of the unknowns are the final updated values.
3.2 The Hessian Determinant (for unconstrained
functions)
 It is associated with a single equation: optimizing an unconstrained
function.
 Let z = f(x, y) be a real function of two real variables.
 The FOC and SOC for an extremum can be expressed in terms of
derivatives and/or total differentials.
 Derivative form: FOC: fx = fy = 0.
SOC (min): fxx > 0, fyy > 0 and fxx·fyy > (fxy)² for a minimum;
SOC (max): fxx < 0, fyy < 0 and fxx·fyy > (fxy)² for a maximum.
 A convenient test for this second-order condition is the (plain)
Hessian. The Hessian |H| is a determinant composed of all the
second-order partial derivatives, with the second-order direct
partials on the principal diagonal and the second-order cross partials
off the principal diagonal. Thus,

|H| = | fxx   fxy |
      | fyx   fyy |
 Cross (mixed) partial derivatives measure the rate of change
of one first-order partial derivative with respect to the other
variable.
 Young's theorem states that the cross partial derivatives are
identical with each other (fxy = fyx) as long as the two cross
partials are continuous.
 Let A be a symmetric n × n matrix. A minor of A of order k is
principal if it is obtained by deleting n − k rows and the n − k
columns with the same numbers. The leading principal minor
of A of order k is the minor of order k obtained by deleting the
last n − k rows and columns.
 If the first element on the principal diagonal, the first leading
principal minor |H1| = fxx, is positive and the second leading
principal minor |H2| = |H| = fxx·fyy − (fxy)² is also positive, the
second-order condition for a minimum is met. If |H1| > 0 and
|H2| > 0, the Hessian |H| is called positive definite. A positive
definite Hessian fulfils the
second-order conditions for a minimum.


 If |H1| < 0 and |H2| > 0, the second-order condition for a
maximum is met and the Hessian |H| is called
negative definite. A negative definite Hessian fulfils the
second-order conditions for a maximum.

 If |H| < 0, the critical point is a saddle point; if |H| = 0, the
test is inconclusive.

 In general, if all leading principal minors are positive, the
Hessian is positive definite, but if the signs
of the successive leading principal minors alternate between negative and
positive (starting negative), the Hessian is negative definite.
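A minimal sketch of this two-variable test, assuming numerical values for the second-order partials at a critical point:

```python
# A minimal sketch of the 2 x 2 Hessian sign test described above.
def classify_hessian_2x2(fxx, fxy, fyy):
    H1 = fxx                      # first leading principal minor
    H2 = fxx * fyy - fxy ** 2     # second leading principal minor = |H|
    if H1 > 0 and H2 > 0:
        return "positive definite -> minimum"
    if H1 < 0 and H2 > 0:
        return "negative definite -> maximum"
    if H2 < 0:
        return "saddle point"
    return "|H| = 0 -> inconclusive"

# Hypothetical partials at a critical point:
print(classify_hessian_2x2(fxx=-2, fxy=1, fyy=-1))   # negative definite -> maximum
```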
3rd-order Hessians
 For a three-variable function z = f(x1, x2, x3), the Hessian will be the
3 × 3 matrix of second-order partial derivatives

H = | f11  f12  f13 |
    | f21  f22  f23 |
    | f31  f32  f33 |

and the determinants of the three leading principal minors will be
|H1| = f11, |H2| = | f11 f12; f21 f22 |, |H3| = |H|.
 The SOC conditions for unconstrained optimization of a three-variable
function are:
a) For a maximum: |H1| < 0, |H2| > 0, |H3| < 0 (Hessian is negative definite)
b) For a minimum: |H1| > 0, |H2| > 0, |H3| > 0 (Hessian is positive definite)
c) If |H3| ≠ 0 but neither sign pattern holds, the critical point is a saddle point.
Exercise:
Show that (0, 0, 0) is a global minimum point of the given function.
Higher-order Hessians

• Although you will not be asked to use the Hessian to tackle
any problems in this text that involve more than three
variables, for your future reference the general SOC
conditions that apply to a Hessian of any order are (see the
sketch after this list):
A. Maximum
 Principal minors alternate in sign, starting with |H1| < 0
(negative definite)
 Thus a principal minor |Hi| of order i should have the sign (−1)^i.

B. Minimum
All principal minors |Hi| > 0 (positive definite)
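A sketch of the general leading-principal-minor test; the 3 × 3 matrix at the bottom is purely illustrative:

```python
# Leading-principal-minor test for a Hessian of any order, using numpy
# determinants of the top-left k x k submatrices.
import numpy as np

def leading_principal_minors(H):
    H = np.asarray(H, dtype=float)
    return [np.linalg.det(H[:k, :k]) for k in range(1, H.shape[0] + 1)]

def classify(H):
    minors = leading_principal_minors(H)
    if all(m > 0 for m in minors):                   # all |Hi| > 0
        return "positive definite (minimum)"
    if all(m < 0 if k % 2 else m > 0                 # sign of |Hi| is (-1)^i
           for k, m in enumerate(minors, start=1)):
        return "negative definite (maximum)"
    return "fits neither sign pattern"

# Hypothetical 3 x 3 Hessian, for illustration only:
print(classify([[2, 1, 0], [1, 2, 0], [0, 0, 2]]))   # positive definite (minimum)
```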
Economic Application of Hessian Optimization

1. A firm produces two goods sold in two separate markets, where the
average revenues are given as:

a. Find the values of the two outputs where total profit is maximized,
using Cramer's rule for the FOC.
b. Check the SOC to see whether a minimum or a maximum is achieved.

 Solution: this is an optimization problem, as it asks for the values of
the choice variables that maximize profit (the objective function).
1. Define the objective function.
2. Find the turning points.

 FOC = 0: set the first-order partial derivatives of the profit
function equal to zero.

 Using Cramer's rule, solve the resulting system for the two outputs.

3. Check the SOC: in the linear case the second-order partials are
constants, so form the Hessian and check its leading principal
minors. Here the Hessian is negative definite, and profit is
therefore maximized.
Bordered Hessian (optimization for constrained functions)
 If second-order partial derivatives are taken for a Lagrange constrained optimization
objective function and put into a matrix format, this will give what is known as the
bordered Hessian.
For example, to maximize a utility function U(X1, X2) subject to the budget constraint
M − P1X1 − P2X2 = 0,
 the Lagrange equation will be G = U(X1, X2) + λ(M − P1X1 − P2X2).
 Taking first-order derivatives and setting them equal to zero, we get the first-order
conditions:
G1 = U1 − λP1 = 0 (1)
G2 = U2 − λP2 = 0 (2)
Gλ = M − P1X1 − P2X2 = 0 (3)
 These are used to solve for the optimum values of X1 and X2 when actual values are
specified for the parameters.
 Differentiating (1), (2) and (3) again with respect to X1, X2 and λ gives the bordered
Hessian matrix of second-order partial derivatives:
HB = | U11   U12   −P1 |
     | U21   U22   −P2 |
     | −P1   −P2    0  |

 You can see that the bordered Hessian HB has one more row and one
more column than the ordinary Hessian. Its borders in the last row
and last column, apart from the 0 element in the bottom-right
position, are the first-order partial derivatives of the constraint g.

 Although it is possible to use the Lagrange method to tackle


constrained optimization problems with several constraints, we will
only consider problems with one constraint here.

 The second-order conditions for optimization of a Lagrangian with one
constraint require that, for
Maximization:
 If there are two variables in the objective function (i.e. HB is 3×3), then
the determinant of the bordered principal minor |HB| > 0, and the Hessian
is negative definite.
 If there are three variables in the objective function (i.e. HB is 4×4), then
the determinant |HB| < 0 and the determinant of the naturally ordered
principal minor of HB > 0. (The naturally ordered principal minor is the
matrix remaining when the last row and column have been eliminated
from HB.)
Minimization:
 If there are two variables in the objective function (i.e. HB is 3×3), the
determinant of the bordered principal minor |HB| < 0, and the Hessian is
positive definite.
 If there are three variables in the objective function, then the determinant
|HB| < 0 and the determinant of the naturally ordered principal minor of
HB < 0.
Constrained optimization with any number of variables and
constraints
 Second-order condition requirements for optimization in the general case,
with n variables xi in the objective function and r constraints, are that the
naturally ordered border-preserving principal minors of dimension m of HB
must have the sign:

 For a maximum: (−1)^(m−r), i.e. the minors alternate in sign.

 For a minimum: all have the same sign as (−1)^r.

 If neither sign pattern holds, the point is neither a local minimum nor a
local maximum.

 'Border preserving' means not eliminating the borders added to the basic
Hessian, i.e. the last column and the bottom row, which typically show the
prices of the variables.
 These requirements only apply to the principal minors of order ≥ (1 + 2r). For example, if
the problem was to maximize a utility function U = U(X1, X2, X3) subject to the budget
constraint M = P1X1 + P2X2 + P3X3 then, as there is only one constraint, r = 1. Therefore
we would just need to consider the principal minors of order greater than or equal to three,
since (1 + 2r) = (1 + 2) = 3.

 As the full bordered Hessian in this example with three variables is of 4th order, only HB
itself plus the 3rd-order naturally ordered principal minor need be considered, as these are
the only principal minors with order equal to or greater than 3.

The second-order conditions will therefore require that, for a maximum:

 For the full bordered Hessian, |HB| must have the sign (−1)^(4−1) = −1 < 0, and the
determinant of the 3rd-order naturally ordered principal minor of HB must have the sign
(−1)^(3−1) = +1 > 0. These are the same as the basic rules for the three-variable case
stated earlier.
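A sketch of the 3 × 3 bordered-Hessian sign check for the two-variable utility problem above, with hypothetical values for the second-order partials and prices:

```python
# Bordered-Hessian SOC check for two variables, one constraint.
# The Uij and prices below are assumptions, for illustration only.
import numpy as np

U11, U12, U22 = -2.0, 1.0, -2.0     # second-order partials of U (assumed)
P1, P2 = 1.0, 2.0                   # prices (assumed)

HB = np.array([[U11, U12, -P1],
               [U12, U22, -P2],
               [-P1, -P2, 0.0]])

print(np.linalg.det(HB))            # 14.0 > 0 -> SOC for a constrained maximum
```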
Numerical examples

1. Optimize

Subject to:

2. Find the maximizer of the objective function f(w; x; y; z) =


subject to the following constraints:
g(w; x; y; z) = 4w - 3y + z + 15 = 0
h(w; x; y; z) = -2x - y + z + 5 = 0
In this problem, we would define the Lagrangian function to be

L(w; x; y; z; λ; µ) = f + λg + µh
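A sympy sketch of problem 2's first-order conditions. The slide does not reproduce f, so a hypothetical objective f = w² + x² + y² + z² stands in; g and h are the constraints given above:

```python
# Setting up L = f + lambda*g + mu*h and solving the FOC symbolically.
import sympy as sp

w, x, y, z, lam, mu = sp.symbols('w x y z lam mu')
f = w**2 + x**2 + y**2 + z**2            # hypothetical objective (assumption)
g = 4*w - 3*y + z + 15
h = -2*x - y + z + 5

L = f + lam*g + mu*h
foc = [sp.diff(L, v) for v in (w, x, y, z, lam, mu)]
print(sp.solve(foc, [w, x, y, z, lam, mu]))
```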
3.3 Eigenvalues and Eigenvectors

 The eigenvalue problem is a problem of considerable theoretical interest and
wide-ranging application. For example, this problem is crucial in solving
systems of differential equations, analysing population growth models, and
calculating powers of matrices.

 Linear equations come from steady-state problems. Eigenvalues have
their greatest importance in dynamic problems. A good model comes from
the powers A, A², A³, … of a matrix. Suppose you need the hundredth power A¹⁰⁰.

 Almost all vectors change direction when they are multiplied by A. Certain
exceptional vectors x are in the same direction as Ax. Those are the
"eigenvectors". Multiply an eigenvector by A, and the vector Ax is a number
λ times the original x.


The basic equation is Ax = λx. The number λ is an eigenvalue of A.

 The eigenvalue could be zero! Then Ax = 0x = 0 means that this eigenvector x is in the
nullspace of A.

 For a given matrix A of order n, what are the vectors x that satisfy the equation Ax = λx
for some scalar λ?

 The vector x = 0 (that is, the vector whose elements are all zero) satisfies this equation.
With such a trivial answer, we might ask the question again in another way: for a given
matrix A, what are the nonzero vectors x that satisfy the equation Ax = λx for some scalar
λ?

 To answer this question, we first perform some algebraic manipulations on the equation
Ax = λx. We note first that, if I = In, then we can write
Ax = λx ⇔ Ax − λx = 0
⇔ Ax − λIx = 0
⇔ (A − λI)x = 0.
 Remember that we are looking for nonzero x that satisfy this last equation. But A −
λI is an n × n matrix and, should its determinant be nonzero, this last equation will
have exactly one solution (the trivial one), namely x = 0.

 Thus our question above has the following answer: the equation Ax = λx has
nonzero (or nontrivial) solutions for the vector x if and only if the matrix A − λI has
zero determinant (is singular, i.e. not invertible).

 For a given matrix A there are only a few special values of the scalar λ for which A −
λI will have zero determinant, and these special values are called the eigenvalues of
the matrix A. We do allow for the possibility that λ = 0.
 A scalar λ is an eigenvalue of an n × n matrix A if and only if λ
satisfies the characteristic equation det(A − λI) = 0. It can be shown
that if A is an n × n matrix, then det(A − λI) is a polynomial in the
variable λ of degree n. We call this polynomial the characteristic
polynomial of A.

 If det(A − λI) = 0, then the equation (A − λI)x = b has either no
solutions or infinitely many solutions. When we take b = 0, however, it
is clear from the existence of the solution x = 0 that there are infinitely
many solutions (i.e., we may rule out the "no solution" case). A
homogeneous system of linear equations with more unknowns than
equations always has a non-trivial solution (indeed, infinitely many).
 Corresponding to an eigenvalue, any non-trivial solution of
the system is an eigenvector; hence the eigenvector
corresponding to an eigenvalue is not unique.
 For each eigenvalue λ, we find eigenvectors x by solving
the linear system (A − λI)x = 0.
 The multiplicity of an eigenvalue α is the number of times
the factor λ − α appears in the characteristic polynomial.
 We only consider real roots of the characteristic equation.
 If the eigenvalues of a matrix are distinct, then the
associated eigenvectors are linearly independent.
 The eigenvalues of A are the solutions of the characteristic
equation p(λ) = 0 (i.e., the roots of the characteristic
polynomial).
• Consider a matrix A and three column vectors X1, X2 and X3
for which AX1 = 0·X1, AX2 = −4·X2 and AX3 = 3·X3.
In this case we say 0, −4 and 3 are eigenvalues of the matrix A,
and X1, X2 and X3 are eigenvectors of A.
Method of Finding Eigenvalues and Eigenvectors

 To find the eigenvalues and eigenvectors of a given
matrix A:
1. Form the matrix A − λI, that is, subtract λ from each
diagonal element of A.
2. Solve the characteristic equation |A − λI| = 0 for λ.
3. Take each value of λ in turn, substitute it into (A −
λiI)X = 0 and solve the resulting homogeneous
system for X using Gaussian elimination (convert
the augmented matrix to row echelon form, and
solve the resulting linear system by back
substitution). Note that, since the determinant of
the coefficient matrix is zero, row reduction of the
augmented matrix will always produce at least one
zero row, leaving at least one free variable.
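The same three steps can be done numerically; a sketch with numpy, assuming the 2 × 2 matrix reconstructed in E.g. 1 below:

```python
# numpy.linalg.eig returns eigenvalues and (normalised) eigenvectors at once.
import numpy as np

A = np.array([[5.0, 4.0],
              [1.0, 2.0]])          # matrix assumed from E.g. 1 below

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                   # [6. 1.] (order may vary)
print(eigenvectors)                  # columns are the eigenvectors
```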


E.g. 1 Find the eigenvalues and eigenvectors of

A = | 5  4 |
    | 1  2 |

|A − λI| = 0: (5 − λ)(2 − λ) − 4 = 0 → λ² − 7λ + 6 = 0 → λ = 1, 6

• The eigenvector corresponding to any λ is given by (A − λI)X
= 0.

• When λ = 1, the eigenvector is given by the system
4x1 + 4x2 = 0
x1 + x2 = 0
• So the augmented matrix of the system is

| 4  4  0 |    | 1  1  0 |
| 1  1  0 | ≈  | 0  0  0 |

• x1 = −x2, taking x2 as the free (arbitrary, as opposed to leading) variable.
Note that it is possible to take any of the variables as the
free variable. Let x2 = t; then the eigenvector
corresponding to λ1 = 1 is t(−1, 1)'.
 Thus, the eigenvector corresponding to λ1 = 1 is t(−1, 1)'. This tells
us that the eigenvectors corresponding to the eigenvalue 1
are precisely the set of scalar multiples of the vector (−1, 1)'.
 When λ2 = 6, the eigenvector is given by the system
−x1 + 4x2 = 0
x1 − 4x2 = 0
 So the augmented matrix of the system reduces to give x1 = 4x2.

 Therefore, the eigenvector corresponding to λ2 = 6 is t(4, 1)'.
E.g. 2 Find the eigenvalues and eigenvectors of the given 3 × 3 matrix,
whose eigenvalues turn out to be λ = −2, 3 and 6.
When λ = −2: x1 + 7x2 + x3 = 0 → x1 = −7x2 − x3 → x1 = −x3
−20x2 = 0 → x2 = 0
 Here we have three unknowns with two equations. In
principle, we're finished with the problem in the sense that
we have the solution in hand. But it's customary to rewrite
the solution in vector form so that its properties are more
evident. Let x3 = t; then x1 = −t, x2 = 0. Hence, the eigenvector
corresponding to λ1 = −2 is t(−1, 0, 1)'.
 When λ = 3: x1 + 2x2 + x3 = 0
5x2 + 5x3 = 0 → x2 = −x3, x1 = x3
Therefore, the eigenvector corresponding to λ2 = 3 is t(1, −1, 1)'.
 When λ = 6, the analogous reduced system gives the
eigenvector corresponding to λ3 = 6.
 If a reduced system of n equations in m unknowns,
where n < m, has p entirely zero rows in the reduced
augmented matrix, then there are m − n + p
free variables in the solution set.
E.g. 3 Find the eigenvalues and associated eigenvectors of the given matrix.

E.g. 4 Find the eigenvalues and eigenvectors of the given matrix.

 Therefore, the eigenvalues of A are λ = 4, −2 (λ = −2 is a repeated root of
the characteristic equation).
• When λ = 4, solve (A − 4I)X = 0.
• When λ = −2, solve (A + 2I)X = 0.
Properties of Eigenvalues
 A square matrix A and its transpose have the same
eigenvalues.
 If |A| ≠ 0, then all eigenvalues are nonzero.
 The sum of the eigenvalues of a matrix A is equal to
the sum of the principal diagonal elements (the trace) of A.
 The product of the eigenvalues of a matrix A is equal
to |A|.
 If λ1, λ2, . . . , λn are the eigenvalues of a matrix A,
then
a. kλ1, kλ2, . . . , kλn are the eigenvalues of the matrix
kA.
 The eigenvalues of a triangular matrix are the entries of the main
diagonal. A triangular matrix has the property that either all of its
entries below the main diagonal are 0 or all of its entries above the
main diagonal are 0.

Ex. For a triangular matrix with diagonal entries 0, 2 and 3: λ = 0, 2, or 3.

 The eigenvalues of a real symmetric matrix are real.

 The eigenvectors corresponding to distinct eigenvalues of a real
symmetric matrix are orthogonal.
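A quick numerical check of the sum and product properties, reusing the E.g. 1 matrix (eigenvalues 1 and 6):

```python
# Sum of eigenvalues = trace; product of eigenvalues = determinant.
import numpy as np

A = np.array([[5.0, 4.0], [1.0, 2.0]])
lam = np.linalg.eigvals(A)

print(np.isclose(lam.sum(), np.trace(A)))         # True: sum = trace = 7
print(np.isclose(lam.prod(), np.linalg.det(A)))   # True: product = |A| = 6
```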
3.4 Quadratic Forms
 The FOC and SOC for an extremum can be expressed in terms of
derivatives and/or total differentials.
 Derivative form: FOC: fx = fy = 0.
SOC: fxx > 0, fyy > 0 and fxx·fyy > (fxy)² for a minimum;
fxx < 0, fyy < 0 and fxx·fyy > (fxy)² for a maximum.
 Total differential form: FOC: dz = 0 for arbitrary dx and dy, not both zero.
SOC: d²z > 0 for a minimum; d²z < 0 for a maximum.
 d²z = fxx dx² + 2fxy dx dy + fyy dy², where dz = fx dx + fy dy.

 For any values of dx and dy, not both zero, the SOSC is:

For a maximum of z: d²z < 0, iff fxx < 0 and fxx·fyy − (fxy)² > 0.

For a minimum of z: d²z > 0, iff fxx > 0 and fxx·fyy − (fxy)² > 0.

 d²z exemplifies what are known as quadratic forms, for which there exist
established criteria for determining whether their signs are always positive,
negative, non-positive or non-negative for any values of dx and dy, not both
zero.

 Since the second-order condition for an extremum pivots directly on the sign of
d²z, those criteria are of direct interest.
 The special case of polynomials where each term has a
uniform degree (i.e., where the sum of the exponents in
each term is the same) is called a form.
E.g. 4x − 9y + z is a linear form, while 4x² − 9xy + z² is a quadratic form.
Given z = f(x, y), find dz and d²z.

 If we consider the differentials dx and dy as variables and
the partial derivatives as coefficients (i.e., if we let u = dx,
v = dy, a = zxx, b = zyy, h = zxy = zyx), then the above second-order
total differential can be identified as a quadratic form q in
two variables u and v:

q = au² + 2huv + bv²
 Note that in this quadratic form dx = u and dy = v
are cast in the role of variables, whereas the
second partial derivatives are treated as
constants, i.e., they will assume specific values at
the points we are examining as possible
extremum points.
 The major question now becomes: what restrictions
must be placed upon a, h and b, when u and v are
allowed to take any values, in order to ensure a
definite sign for q?
 A quadratic form has a critical point at x = 0,
where it takes on the value 0. Therefore, we can
classify quadratic forms by whether x = 0 is a
maximum, a minimum, or neither.
Determinant Test for Sign Definiteness
 This test happens to be more easily applicable to positive and
negative definiteness.
 For the two-variable case, determinant conditions for sign
definiteness of q are relatively easy to derive.

The device that will do the trick of making u and v appear only in
some squares is that of completing the square:

q = a(u + (h/a)v)² + ((ab − h²)/a)v²
 We can now predicate the sign of q entirely on the
values of the coefficients a, b and h, as follows:
q is positive definite iff a > 0 and ab − h² > 0;
q is negative definite iff a < 0 and ab − h² > 0.

q can be expressed using determinants:

q = [u  v] | a  h | | u |
           | h  b | | v |

 with the squared terms placed on the main diagonal
and with the 2huv term split into two equal parts and
placed off the main diagonal. The coefficients now form
a symmetric matrix, with a and b on the principal
diagonal and h off the diagonal.
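A sympy sketch confirming that this symmetric-matrix product reproduces q = au² + 2huv + bv²:

```python
# Expanding x'Dx symbolically recovers the original quadratic form.
import sympy as sp

a, b, h, u, v = sp.symbols('a b h u v')
D = sp.Matrix([[a, h], [h, b]])
x = sp.Matrix([u, v])

print(sp.expand((x.T * D * x)[0, 0]))   # a*u**2 + 2*h*u*v + b*v**2
```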
 The determinant of the 2×2 coefficient matrix, |D| = | a h; h b | = ab − h², is referred to as
the discriminant of the quadratic form q. Thus, the above condition
for definiteness can be alternatively expressed as:

q is positive definite iff a > 0 and |D| > 0; q is negative definite iff a < 0 and |D| > 0.

The determinant |a| = a is the first leading principal minor of |D| and the
determinant |D| itself is the second leading principal minor, and their signs
serve to determine the positive or negative definiteness of q.
 We can test for the definiteness of a matrix A in the following fashion:
1. A is positive definite iff all of its n leading principal minors are strictly
positive.
2. A is negative definite iff all of its n leading principal minors alternate
in sign, the kth having the sign (−1)^k.
3. If some kth-order leading principal minor of A is nonzero but does not
fit either of the above sign patterns, then A is indefinite.
 If the matrix A would meet the criterion for positive or negative
definiteness if we relaxed the strict inequalities to weak inequalities
(i.e. we allow zero to fit into the pattern), then although the matrix
is not positive or negative definite, it may be positive or negative
semi-definite.
 In this case, we employ the following tests:
1. A is positive semi-definite iff every principal minor of A is ≥ 0.
2. A is negative semi-definite iff every principal minor of A of odd
order is ≤ 0 and every principal minor of even order is ≥ 0.
 Notice that for determining semi-definiteness, we can no longer
check just the leading principal minors; we must check all
principal minors. What a pain!
 The cases of positive and negative definiteness of d²z are related to
the second-order sufficient conditions for a minimum and a
maximum respectively. The cases of semi-definiteness, on the other
hand, relate to second-order necessary conditions.
E.g. 1 Is q = 5u² + 3uv + 2v² positive or negative definite? The
discriminant of q is | 5 1.5; 1.5 2 |, with principal minors 5 > 0 and
5(2) − (1.5)² = 7.75 > 0. Therefore, q is positive definite.
E.g. 2 Given fxx = −2, fxy = 1 and fyy = −1 at a certain
point on the function z = f(x, y), does d²z have a definite
sign at that point regardless of the values of dx and
dy? The discriminant of the quadratic form d²z is | −2 1; 1 −1 |,
with principal minors −2 < 0 and (−2)(−1) − 1² = 1 > 0. Thus, d²z is
negative definite.
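A numerical check of E.g. 1 and E.g. 2 (E.g. 1 assumes the reconstructed form q = 5u² + 3uv + 2v², consistent with the minors 5 and 7.75 quoted above):

```python
# First leading principal minor and discriminant for both examples.
import numpy as np

D1 = np.array([[5.0, 1.5], [1.5, 2.0]])     # discriminant matrix of E.g. 1
D2 = np.array([[-2.0, 1.0], [1.0, -1.0]])   # discriminant matrix of E.g. 2

for D in (D1, D2):
    print(D[0, 0], round(np.linalg.det(D), 4))
# 5.0 7.75  -> positive definite
# -2.0 1.0  -> negative definite
```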
E.g. 3 Determine whether the given three-variable quadratic form is positive
or negative definite. The discriminant of q has leading principal minors
1 > 0, 5 > 0 and 11 > 0. Thus, the quadratic form is positive definite.

E.g. 4 Determine whether the given quadratic form is positive or negative
definite. The discriminant has leading principal minors 2 > 0 and −3 < 0.
Thus, q is neither positive nor negative definite.
Determine the definiteness of the following
matrices:
1.
2.
3.
4.
5.

6.

• Express the quadratic form q = x'Ax as a matrix product involving a symmetric
coefficient matrix A.

• Check the following:


i. Every diagonal matrix whose diagonal elements are all positive is
positive definite.
ii. Every diagonal matrix whose diagonal elements are all negative is
negative definite.
iii. Every diagonal matrix whose diagonal elements are all positive or
zero is positive semi-definite.
iv. Every diagonal matrix whose diagonal elements are all negative or
zero is negative semi-definite.
v. All other diagonal matrices are indefinite.
 POSITIVE DEFINITE: Matrix A is positive definite if all
eigenvalues are greater than 0, in which case q(x) is
positive for all nonzero x, and the determinants of all
principal submatrices will be greater than 0.
 NEGATIVE DEFINITE: Matrix A is negative definite if all
eigenvalues are less than 0, in which case q(x) is negative
for all nonzero x.
 INDEFINITE: Matrix A is indefinite if there are both negative and
positive eigenvalues, in which case q(x) may also take
negative and positive values.
 What about eigenvalues which include 0? The definition
here varies among authors; a sketch using one common
convention follows.
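A sketch of the eigenvalue test, labelling the zero-eigenvalue boundary cases as semi-definite (one common convention, per the caveat above):

```python
# Classify a symmetric matrix by the signs of its eigenvalues.
import numpy as np

def definiteness(A, tol=1e-12):
    lam = np.linalg.eigvalsh(np.asarray(A, dtype=float))  # symmetric solver
    if np.all(lam > tol):
        return "positive definite"
    if np.all(lam < -tol):
        return "negative definite"
    if np.all(lam >= -tol):
        return "positive semi-definite"
    if np.all(lam <= tol):
        return "negative semi-definite"
    return "indefinite"

print(definiteness([[5, 1.5], [1.5, 2]]))   # positive definite
print(definiteness([[-2, 1], [1, -1]]))     # negative definite
```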
