Linear Algebra

The document outlines a comprehensive course on Linear Algebra, covering topics such as matrix definitions, operations, determinants, and methods for solving systems of linear equations. It includes discussions on various matrix types, properties, and advanced concepts like the Jacobian and Hessian determinants, as well as eigenvalues and eigenvectors. The course aims to provide a foundational understanding of linear algebra's applications in solving complex equations and systems.

Uploaded by

tolinan123
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PPTX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
11 views

Linear Algebra

The document outlines a comprehensive course on Linear Algebra, covering topics such as matrix definitions, operations, determinants, and methods for solving systems of linear equations. It includes discussions on various matrix types, properties, and advanced concepts like the Jacobian and Hessian determinants, as well as eigenvalues and eigenvectors. The course aims to provide a foundational understanding of linear algebra's applications in solving complex equations and systems.

Uploaded by

tolinan123
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PPTX, PDF, TXT or read online on Scribd
You are on page 1/ 122

Course Outline
Chapter One: Linear Algebra
I. Introduction
• Linear algebra:
• Permits expression of a complicated system of equations in a succinct, simplified way
• Provides a shorthand method to determine whether a solution exists before it is attempted
• Furnishes the means of solving the equation system
• Can be applied to systems of linear equations (many economic relationships can be approximated by linear equations; if not, they can often be converted to linear form)
I. Introduction: Definition and Terms
• A matrix is a rectangular array of numbers, parameters, or variables, each of which has a carefully ordered place within the matrix.
• The numbers (parameters, or variables) are referred to as the elements of the matrix. The elements in a horizontal line form a row; the elements in a vertical line form a column.
• The number of rows r and the number of columns c define the dimensions of the matrix (r × c), which is read "r by c". The row number always precedes the column number. In a square matrix, the number of rows equals the number of columns (r = c).
• If the matrix is composed of a single column, so that its dimensions are r × 1, it is a column vector; if the matrix is a single row, with dimensions 1 × c, it is a row vector.
• The matrix formed by converting the rows of A to columns and the columns of A to rows is called the transpose of A and is designated A′ or Aᵀ.
I. Introduction: Definition and Terms
• For example, the transpose of a 2 × 3 matrix A is the 3 × 2 matrix A′ whose rows are the columns of A and whose columns are the rows of A.
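A minimal NumPy sketch of these terms; the matrix values are illustrative, not taken from the slides:

```python
import numpy as np

# A 2 x 3 matrix: 2 rows, 3 columns
A = np.array([[1, 2, 3],
              [4, 5, 6]])

print(A.shape)   # (2, 3) -- the dimensions r x c
print(A.T)       # the transpose A': a 3 x 2 matrix whose rows are A's columns
```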
I. Introduction: Addition and Subtraction of Matrices
• Two matrices can be added or subtracted only if they have the same dimensions; corresponding elements are added or subtracted.
I. Introduction: Scalar Multiplication
• Multiplying a matrix by a scalar multiplies every element of the matrix by that scalar.
I. Introduction: Vector Multiplication
• The product of a 1 × n row vector and an n × 1 column vector is a scalar: the sum of the products of the corresponding elements, called the inner or dot product.
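A short NumPy sketch of these three operations, with illustrative values:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A + B)   # element-wise addition (same dimensions required)
print(3 * A)   # scalar multiplication: every element times 3

u = np.array([1, 2, 3])   # row vector
v = np.array([4, 5, 6])   # column vector
print(u @ v)   # inner (dot) product: 1*4 + 2*5 + 3*6 = 32
```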
I. Introduction: Multiplication of Matrices
• The matrices must be conformable: the number of columns of the first (lead) matrix must equal the number of rows of the second (lag) matrix.
• Each row vector in the lead matrix is then multiplied by each column vector of the lag matrix.
• The row-column products, called inner products or dot products, are then used as elements in the formation of the product matrix, such that each element cij of the product matrix C is a scalar derived from the multiplication of the ith row of the lead matrix and the jth column of the lag matrix.
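A NumPy sketch of conformability and the row-column rule, with illustrative values:

```python
import numpy as np

# Lead matrix A is 2 x 3 and lag matrix B is 3 x 2:
# conformable, since A has 3 columns and B has 3 rows.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[1, 0],
              [0, 1],
              [2, 2]])

C = A @ B                  # the 2 x 2 product matrix
print(C)

# Element c11 is the inner product of row 1 of A and column 1 of B:
print(A[0, :] @ B[:, 0])   # equals C[0, 0]
```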
I. Commutative, Associative, and Distributive Laws in Matrix Algebra
• Matrix addition is commutative (A + B = B + A) and associative [(A + B) + C = A + (B + C)].
• Matrix multiplication is associative [(AB)C = A(BC)] and distributive [A(B + C) = AB + AC], but in general it is not commutative: AB ≠ BA.
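A quick NumPy check of non-commutativity, with illustrative matrices:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])

print(A @ B)   # [[2, 1], [4, 3]]
print(B @ A)   # [[3, 4], [1, 2]] -- a different matrix: AB != BA
```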
I. Identity and Null Matrices
• An identity matrix (I) is a square matrix which has 1 for every element on the principal diagonal from left to right and 0 everywhere else.
• When a subscript is used, as in In, n denotes the dimensions of the matrix (n × n).
• The identity matrix is like the number 1 in ordinary algebra: multiplication of a matrix by an identity matrix leaves the original matrix unchanged (AI = IA = A).
• Multiplication of an identity matrix by itself leaves the identity matrix unchanged: I × I = I² = I.
• Any matrix for which A = Aᵀ is a symmetric matrix.
• A symmetric matrix for which A × A = A is an idempotent matrix. The identity matrix is symmetric and idempotent.
I. Identity and Null Matrices
• A null matrix is composed entirely of 0s and can be of any dimension; it is not necessarily square.
• Addition or subtraction of the null matrix leaves the original matrix unchanged.
• Multiplication by a null matrix produces a null matrix.
Matrix Expression of a System of Linear Equations
• Matrix algebra allows concise expression of a system of linear equations as AX = B.
• A is the coefficient matrix, X is the solution vector, and B is the vector of constant terms.
• X and B will always be column vectors.
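As a sketch, an illustrative two-equation system written in the AX = B form:

```python
import numpy as np

# The system  2x + 3y = 8
#              x -  y = 1
# in matrix form AX = B:
A = np.array([[2, 3],
              [1, -1]])    # coefficient matrix
B = np.array([[8],
              [1]])        # column vector of constant terms
# X = [[x], [y]] is the unknown solution column vector
```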
Triangular Matrices
• These matrices have entries on the diagonal and either entirely above or entirely below it, while the rest of the matrix consists of zeros.
• A matrix whose nonzero entries lie on and above the diagonal is an upper triangular matrix; one whose nonzero entries lie on and below the diagonal is a lower triangular matrix.
Determinant of a Matrix
• The determinant |A| of a 2 × 2 matrix (a second-order determinant) is derived by taking the product of the two elements on the principal diagonal and subtracting from it the product of the two elements off the principal diagonal.
• Given a 2 × 2 matrix A with first row a11, a12 and second row a21, a22, the determinant is
|A| = a11a22 − a12a21
• The determinant is a single number or scalar and is found only for square matrices.
• If the determinant of a matrix is equal to zero, the determinant is said to vanish and the matrix is termed singular.
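A minimal NumPy sketch of the second-order determinant, with illustrative values:

```python
import numpy as np

A = np.array([[4, 2],
              [3, 1]])

# Second-order determinant: a11*a22 - a12*a21
manual = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]   # 4*1 - 2*3 = -2
print(manual)
print(np.linalg.det(A))   # -2.0 (computed in floating point)
```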
Determinant of a Matrix
• A singular matrix is one in which there exists linear dependence between at least two rows or columns.
• If |A| ≠ 0, matrix A is nonsingular and all its rows and columns are linearly independent.
• If linear dependence exists in a system of equations (that is, if |A| = 0), the system as a whole will have an infinite number of possible solutions: no unique solution exists.
• If |A| ≠ 0, the matrix is nonsingular and there is no linear dependence among the equations; a unique solution can be found.
Determinant of a Matrix: Rank
• The rank ρ(A) of a matrix is defined as the maximum number of linearly independent rows or columns in the matrix.
• The rank of a matrix also allows for a simple test of linear dependence, which follows immediately.
• For a square matrix of order n: if ρ(A) = n, then |A| ≠ 0 and all rows and columns are linearly independent; if ρ(A) < n, then |A| = 0 and linear dependence exists.
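An illustrative NumPy check of the rank test on a linearly dependent matrix:

```python
import numpy as np

A = np.array([[1, 2],
              [2, 4]])   # row 2 = 2 * row 1: linear dependence

print(np.linalg.matrix_rank(A))   # 1, less than the order n = 2
print(np.linalg.det(A))           # 0.0: A is singular
```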
Third-order Determinants
• The determinant of a 3 × 3 matrix is called a third-order determinant and is the summation of three products.
• To derive the three products, each element of the first row is multiplied by the second-order determinant that remains after the row and column containing that element are deleted:
|A| = a11(a22a33 − a23a32) − a12(a21a33 − a23a31) + a13(a21a32 − a22a31)
Minors and Cofactors
• The elements of a matrix remaining after the deletion process described above form a sub-determinant of the matrix called a minor.
• The minor |Mij| is the determinant of the submatrix formed by deleting the ith row and jth column of the matrix.
• Here |M11| is the minor of a11, |M12| is the minor of a12, and |M13| is the minor of a13.
• Thus, the determinant of the previous 3 × 3 matrix equals
|A| = a11|M11| − a12|M12| + a13|M13|
Minors and Cofactors
• A cofactor |Cij| is a minor with a prescribed sign.
• The rule for the sign of the cofactor is |Cij| = (−1)^(i+j)|Mij|: if the sum of the row and column indices is even, the cofactor keeps the sign of the minor; if it is odd, the sign is reversed.
• Example: for a 3 × 3 matrix, the cofactors of the first row are |C11| = +|M11|, |C12| = −|M12|, and |C13| = +|M13|.
Laplace Expansion and Higher-order Determinants
• Laplace expansion is a method for evaluating determinants in terms of cofactors.
• It thus simplifies matters by permitting higher-order determinants to be established in terms of lower-order determinants.
• The Laplace expansion of a third-order determinant along the first row can be expressed as
|A| = a11|C11| + a12|C12| + a13|C13|
Laplace Expansion and Higher-order Determinants
• Laplace expansion permits evaluation of a determinant along any row or
column.
• Selection of a row or column with more zeros than others simplifies
evaluation of the determinant by eliminating terms.
• Laplace expansion also serves as the basis for evaluating determinants of
orders higher than three.
Laplace Expansion and Higher-order Determinants
• The Laplace expansion of a fourth-order determinant along the first row is
|A| = a11|C11| + a12|C12| + a13|C13| + a14|C14|
• The cofactors are third-order subdeterminants, which in turn can be reduced to second-order subdeterminants, as above.
• Fifth-order determinants and higher are treated in similar fashion.
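A compact sketch of Laplace expansion along the first row, written recursively so the same rule handles any order (matrix values illustrative):

```python
import numpy as np

def det_laplace(A):
    """Evaluate a determinant by Laplace expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0
    for j in range(n):
        # Minor |M1j|: delete row 0 and column j
        M = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        # Cofactor sign (-1)^(i+j) with i = 0
        total += (-1) ** j * A[0, j] * det_laplace(M)
    return total

A = np.array([[2, 3, 1],
              [4, 1, 2],
              [5, 3, 4]])
print(det_laplace(A))       # -15
print(np.linalg.det(A))     # -15.0, up to rounding
```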
Properties of a Determinant
• Seven properties of determinants provide ways in which a matrix can be manipulated to simplify its elements, or to reduce part of them to zero, before the determinant is evaluated.
Partitioning a Matrix
• Partitioning splits a matrix into smaller submatrices (blocks) by drawing dividing lines between selected rows and columns.
Operations with Partitioned/Block Matrices
• Partitioned matrices are added, subtracted, and multiplied block by block, provided the blocks are conformable.
• One view of matrix multiplication: the columns of AB are A multiplied by the columns of B.
• Another view of matrix multiplication: the rows of AB are the rows of A multiplied by B.
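A NumPy sketch of block multiplication on an illustrative 4 × 4 partition, confirming that it matches ordinary multiplication:

```python
import numpy as np

A = np.arange(16.0).reshape(4, 4)
B = np.arange(16.0, 32.0).reshape(4, 4)

# Partition each matrix into four 2 x 2 blocks
A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

# Block multiplication follows the same row-by-column rule as scalars
C = np.block([[A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
              [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22]])

print(np.allclose(C, A @ B))   # True
```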
Cofactor and Adjoint Matrices
• A cofactor matrix is the matrix in which every element aij of A is replaced by its cofactor |Cij|.
• The adjoint of A, written Adj A, is the transpose of the cofactor matrix.
Inverse of a Matrix
• The inverse of a square matrix A is the matrix A⁻¹ for which A·A⁻¹ = A⁻¹·A = I.
• An inverse exists only if A is square and nonsingular (|A| ≠ 0).
• The inverse can be computed from the adjoint: A⁻¹ = (1/|A|)·Adj A.
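A minimal sketch of the adjoint formula for an illustrative 2 × 2 matrix, checked against NumPy's built-in inverse:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [3.0, 1.0]])

det = np.linalg.det(A)      # -2.0: nonzero, so A is nonsingular
# Adjoint of a 2 x 2 matrix: swap the diagonal, negate the off-diagonal
adj = np.array([[ A[1, 1], -A[0, 1]],
                [-A[1, 0],  A[0, 0]]])
inv_manual = adj / det      # A^-1 = (1/|A|) Adj A

print(inv_manual)
print(np.linalg.inv(A))                          # same result
print(np.allclose(A @ inv_manual, np.eye(2)))    # True: A A^-1 = I
```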
Solving Simultaneous Equations Using Matrices
• An inverse matrix can be used to solve matrix equations.
• If AX = B and A is nonsingular, multiplying both sides by A⁻¹ gives X = A⁻¹B.
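Continuing the illustrative system from the AX = B sketch above:

```python
import numpy as np

# 2x + 3y = 8
#  x -  y = 1
A = np.array([[2.0, 3.0],
              [1.0, -1.0]])
B = np.array([[8.0],
              [1.0]])

X = np.linalg.inv(A) @ B        # X = A^-1 B
print(X)                        # [[2.2], [1.2]]
print(np.linalg.solve(A, B))    # same answer; preferred in practice
```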
CRAMER’S RULE FOR MATRIX SOLUTIONS
• Cramer’s rule provides a simplified method of solving a system of equations through the use of determinants.
• It states that xi = |Ai| / |A|
• where xi is the ith unknown variable in a series of equations, |A| is the determinant of the coefficient matrix, and |Ai| is the determinant of a special matrix formed from the original coefficient matrix by replacing the column of coefficients of xi with the column vector of constants.
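A minimal sketch of Cramer’s rule; the helper function cramer is hypothetical, written here only for illustration:

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule (A square, |A| != 0)."""
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                          # replace column i with the constants
        x[i] = np.linalg.det(Ai) / det_A      # x_i = |A_i| / |A|
    return x

A = np.array([[2.0, 3.0],
              [1.0, -1.0]])
b = np.array([8.0, 1.0])
print(cramer(A, b))   # [2.2, 1.2], agreeing with np.linalg.solve(A, b)
```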
Gauss Elimination Method
• The idea is to add multiples of one equation to the others
in order to eliminate a variable and to continue this
process until one variable is left.
• Once this final variable is determined, its value is
substituted back into the other equations in order to
evaluate the remaining unknowns.
• This method, characterized by step‐by‐step elimination of
the variables, is called Gaussian elimination.
Gauss Elimination Method
• Since the coefficient matrix has been transformed into echelon form, the
“forward” part of Gaussian elimination is complete
• What remains now is to use the third row to evaluate the third unknown, then to
back‐substitute into the second row to evaluate the second unknown, and,
finally, to back‐substitute into the first row to evaluate the first unknown

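A compact sketch of the forward and backward parts; pivoting is omitted for clarity, so nonzero pivots are assumed, and the example system is illustrative:

```python
import numpy as np

def gauss_eliminate(A, b):
    """Forward elimination to echelon form, then back-substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward part: add multiples of each pivot row to the rows below it
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]       # multiplier (assumes nonzero pivot)
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back-substitution, from the last row upward
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, 1.0],
              [4.0, -6.0, 0.0],
              [-2.0, 7.0, 2.0]])
b = np.array([5.0, -2.0, 9.0])
print(gauss_eliminate(A, b))   # [1. 1. 2.], matching np.linalg.solve(A, b)
```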
Gauss Jordan Method
• Reduce (or eliminate entirely) the computations involved in back-substitution by performing additional row operations to transform the matrix from echelon to reduced echelon form.
• A matrix is in reduced echelon form when, in addition to being in echelon form, each column that contains a leading non-zero entry (usually made to be 1) has zeros not just below that entry but also above it.
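A matching sketch of the Gauss-Jordan variant: the same illustrative system, reduced all the way so no back-substitution is needed (again assuming nonzero pivots):

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce the augmented matrix [A | b] to reduced echelon form."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = A.shape[0]
    for k in range(n):
        M[k] /= M[k, k]            # scale the pivot row so the pivot is 1
        for i in range(n):
            if i != k:             # clear entries below AND above the pivot
                M[i] -= M[i, k] * M[k]
    return M[:, -1]                # the solution appears in the last column

A = np.array([[2.0, 1.0, 1.0],
              [4.0, -6.0, 0.0],
              [-2.0, 7.0, 2.0]])
b = np.array([5.0, -2.0, 9.0])
print(gauss_jordan(A, b))          # [1. 1. 2.]
```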
Homogeneous Systems of Linear Equations
• The constant term in every equation is equal to zero; no equation in such a system has a constant term in it.
• A homogeneous system with n unknowns has the matrix form AX = 0.
• A homogeneous linear system may have exactly one solution or infinitely many, but it always has at least one solution.
Homogeneous Systems of Linear Equations
• A homogeneous system may have two types of solutions: trivial solutions and nontrivial solutions.
• Since there is no constant term present in a homogeneous system, (x₁, x₂, ..., xₙ) = (0, 0, ..., 0) is obviously a solution and is called the trivial solution (the most obvious solution).
• For example, the system formed by the three equations x + y + z = 0, y - z = 0, and x + 2y = 0 has the trivial solution (x, y, z) = (0, 0, 0). But it may (or may not) have solutions other than the trivial one; these are called nontrivial solutions.
• We can find them using the matrix method, applying row operations, as in the sketch below.
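A SymPy sketch of exactly this example system, using the null space to expose the nontrivial solutions:

```python
import sympy as sp

# Coefficient matrix of  x + y + z = 0,  y - z = 0,  x + 2y = 0
A = sp.Matrix([[1, 1, 1],
               [0, 1, -1],
               [1, 2, 0]])

print(A.det())        # 0: A is singular, so nontrivial solutions exist
print(A.nullspace())  # [Matrix([[-2], [1], [1]])]: every solution is a
                      # scalar multiple of (x, y, z) = (-2, 1, 1)
```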
Special Determinants and Matrices
THE JACOBIAN
• The Jacobian determinant permits testing for functional dependence, both linear and non-linear.
• A Jacobian determinant |J| is composed of all the first-order partial derivatives of a system of equations, arranged in ordered sequence.
• Given a system of three functions y1, y2, y3 of three independent variables x1, x2, x3, |J| is the determinant of the 3 × 3 matrix of partial derivatives ∂yi/∂xj.
THE JACOBIAN
• The elements of each row are the partial derivatives of one function yi with respect to each of the independent variables x1, x2, x3, and the elements of each column are the partial derivatives of each of the functions y1, y2, y3 with respect to one of the independent variables xj.
• If |J| = 0, the equations are functionally dependent.
• If |J| ≠ 0, the equations are functionally independent.
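A SymPy sketch of the dependence test on an illustrative pair of functions; y2 is constructed as the square of y1, so |J| should vanish:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
y1 = x1 + 2 * x2
y2 = (x1 + 2 * x2) ** 2     # functionally dependent on y1

J = sp.Matrix([y1, y2]).jacobian([x1, x2])
print(J)                         # the 2 x 2 matrix of first-order partials
print(sp.simplify(J.det()))      # 0: |J| = 0, so the equations are dependent
```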
THE HESSIAN
• A convenient test for the second-order condition of an optimization problem is the Hessian.
• A Hessian |H| is a determinant composed of all the second-order partial derivatives, with the second-order direct partials on the principal diagonal and the second-order cross partials off the principal diagonal.
THE HESSIAN
• Thus, for a function z = f(x, y), the Hessian has zxx and zyy on the principal diagonal and zxy = zyx off it.
• If |H1| = zxx > 0 and |H2| = |H| > 0, the Hessian is positive definite and the second-order condition for a relative minimum is met; if zxx < 0 and |H| > 0, it is negative definite and the condition for a relative maximum is met.
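A SymPy sketch on an illustrative function, using the built-in hessian helper:

```python
import sympy as sp

x, y = sp.symbols('x y')
z = 3 * x**2 - x * y + 2 * y**2    # an illustrative function

H = sp.hessian(z, (x, y))
print(H)         # Matrix([[6, -1], [-1, 4]]): direct partials on the diagonal
print(H.det())   # 23 > 0, and zxx = 6 > 0: positive definite, so z meets
                 # the second-order condition for a relative minimum
```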
Higher Order HESSIANS
• For functions of three or more variables, the test is applied to the successive principal minors |H1|, |H2|, |H3|, ... of the Hessian.
• The conditions for a relative maximum are |H1| < 0, |H2| > 0, |H3| < 0, etc. (the principal minors alternate in sign); for a relative minimum, all the principal minors must be positive.
THE BORDERED HESSIAN FOR CONSTRAINED OPTIMIZATION
• To optimize a function f(x, y) subject to a constraint g(x, y) = k, a new function can be formed, F(x, y, λ) = f(x, y) + λ[k − g(x, y)], where the first-order conditions are Fx = Fy = Fλ = 0.
THE BORDERED HESSIAN FOR CONSTRAINED OPTIMIZATION
• The bordered Hessian |H̄| is the Hessian of F bordered by the first derivatives of the constraint.
• If |H̄2| > 0, |H̄3| < 0, |H̄4| > 0, etc., the bordered Hessian is negative definite, and a negative definite bordered Hessian always meets the sufficient condition for a relative maximum.
Eigenvalues and Eigenvectors
• An eigenvalue is a number λ which, when subtracted from the diagonal of a square matrix A, makes the matrix singular.
• A matrix is singular when its inverse does not exist; this means the determinant is 0 and the rank is less than n.
• Consider, for example, a 3 × 3 matrix from which subtracting 2 along the diagonal leaves all three columns identical.
Eigenvalues and Eigenvectors
• The resulting matrix is singular, since all of its columns are exactly the same, and its rank is therefore less than 3.
• Therefore, the number 2 is an eigenvalue of this matrix.
Eigenvectors
• Eigenvectors are vectors that correspond to a certain eigenvalue λ and can be found by solving the equation (A − λI)x = 0.
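A NumPy sketch on an illustrative 2 × 2 matrix; eig returns the eigenvalues together with a matrix whose columns are the corresponding eigenvectors:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # 5.0 and 2.0 (order may vary)
print(eigenvectors)   # columns are the corresponding eigenvectors

# Check the defining property (A - lambda*I) x = 0, i.e. A x = lambda x:
lam, x = eigenvalues[0], eigenvectors[:, 0]
print(np.allclose(A @ x, lam * x))   # True
```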