Linear Algebra
Course Outline
Chapter One: Linear Algebra
I. Introduction
Linear Algebra:
• Permits expression of a complicated system of equations in a succinct, simplified way
• Provides a shorthand method to determine whether a solution exists before it is attempted
• Furnishes the means of solving the equation system
• Can be applied to systems of linear equations (many economic relationships can be approximated by linear equations; if not, they can often be converted to linear form)
I. Introduction: Definition and Terms
• A matrix is a rectangular array of numbers, parameters, or variables, each of which has a carefully ordered place within the matrix.
• The numbers (parameters, or variables) are referred to as the elements of the matrix. The numbers in a horizontal line form a row; the numbers in a vertical line form a column.
• The number of rows r and columns c defines the dimensions of the matrix (r×c), which is read "r by c". The row number always precedes the column number. In a square matrix, the number of rows equals the number of columns (r = c).
• If the matrix is composed of a single column, so that its dimensions are r×1, it is a column vector; if the matrix is a single row, with dimensions 1×c, it is a row vector.
• A matrix which converts the rows of A to columns and the columns of A to rows is called the transpose of A and is designated by A′ or Aᵀ.
I. Introduction: Definition and Terms
• The transpose of A is formed by interchanging the rows and columns of A.
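Transposition can be sketched in code. This is a minimal illustration using plain Python lists as matrices; the matrix A below is an example of my own, not the one from the text.

```python
# Illustrative 2x3 matrix (not the text's example): transposing moves
# element a[i][j] to position [j][i], so a 2x3 matrix becomes 3x2.
A = [[1, 2, 3],
     [4, 5, 6]]

def transpose(M):
    """Return M' : rows of M become columns and columns become rows."""
    return [[M[i][j] for i in range(len(M))] for j in range(len(M[0]))]

At = transpose(A)   # a 3x2 matrix
```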
I. Introduction: Addition and Subtraction of Matrices
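Addition and subtraction can be sketched as follows: both operate element by element and require the two matrices to have identical dimensions. The matrices in the test are illustrative.

```python
def mat_add(A, B):
    """Element-wise sum; A and B must have the same dimensions."""
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def mat_sub(A, B):
    """Element-wise difference; A and B must have the same dimensions."""
    return [[A[i][j] - B[i][j] for j in range(len(A[0]))] for i in range(len(A))]
```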
I. Introduction: Scalar Multiplication
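Scalar multiplication multiplies every element of the matrix by the scalar; a minimal sketch:

```python
def scalar_mul(k, M):
    """Multiply every element of the matrix M by the scalar k."""
    return [[k * x for x in row] for row in M]
```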
I. Introduction: Vector Multiplication
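Multiplying a 1×n row vector by an n×1 column vector yields a single scalar, the inner (dot) product: the sum of the pairwise products of corresponding elements. A minimal sketch with illustrative vectors:

```python
def dot(u, v):
    """Inner (dot) product of a 1xn row vector and an nx1 column vector."""
    return sum(ui * vi for ui, vi in zip(u, v))

# e.g. dot([1, 2, 3], [4, 5, 6]) = 1*4 + 2*5 + 3*6
```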
I. Introduction: Multiplication of Matrices
• The matrices must be conformable: the number of columns of the first (lead) matrix must equal the number of rows of the second (lag) matrix.
• Each row vector in the lead matrix is then multiplied by each column vector of the lag matrix.
• The row-column products, called inner products or dot products, are then used as the elements of the product matrix, such that each element cij of the product matrix C is a scalar derived from the multiplication of the ith row of the lead matrix and the jth column of the lag matrix.
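The rule above can be sketched directly: each entry of C is the dot product of a row of the lead matrix and a column of the lag matrix, after checking conformability. The matrices in the test are illustrative.

```python
def mat_mul(A, B):
    """C[i][j] = dot product of row i of lead matrix A and column j of lag matrix B."""
    assert len(A[0]) == len(B), "not conformable: columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]
```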
I. Commutative, Associative, and Distributive Laws in Matrix Algebra
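The laws can be checked numerically: matrix multiplication is associative and distributive over addition, but in general not commutative (AB ≠ BA). A sketch with illustrative matrices of my own choosing:

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [0, 2]]

commutes     = mul(A, B) == mul(B, A)                         # generally False
associative  = mul(mul(A, B), C) == mul(A, mul(B, C))          # always True
distributive = mul(A, add(B, C)) == add(mul(A, B), mul(A, C))  # always True
```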
I. Identity and Null Matrices
• An identity matrix (I) is a square matrix which has 1 for every element on the principal diagonal and 0 everywhere else.
• When a subscript is used, as in In, n denotes the dimensions of the matrix (n×n).
• The identity matrix is like the number 1 in ordinary algebra: multiplication of a matrix by an identity matrix leaves the original matrix unchanged (AI = IA = A).
• Multiplication of an identity matrix by itself leaves the identity matrix unchanged: I × I = I² = I.
• Any matrix for which A = A′ is a symmetric matrix.
• A symmetric matrix for which A × A = A is an idempotent matrix. The identity matrix is both symmetric and idempotent.
I. Identity and Null Matrices
• A null matrix is composed entirely of 0s and can be of any dimension; it is not necessarily square.
• Addition or subtraction of the null matrix leaves the original matrix unchanged.
• Multiplication by a null matrix produces a null matrix.
Matrix Expression of a System of Linear Equations
• Matrix algebra allows concise expression of a system of linear equations as Ax = b, where A is the coefficient matrix, x is the column vector of unknowns, and b is the column vector of constants.
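As a sketch of the matrix form Ax = b, take a small hypothetical system (my own example, not one from the text): 2x + 3y = 12 and 4x − y = 10. Multiplying A by the solution vector must reproduce b:

```python
# Hypothetical system:  2x + 3y = 12
#                       4x -  y = 10
A = [[2, 3], [4, -1]]   # coefficient matrix
b = [12, 10]            # column vector of constants
x = [3, 2]              # the solution (x = 3, y = 2)

# A times x should give back b
check = [sum(a * v for a, v in zip(row, x)) for row in A]
```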
• The determinant |A| is a single number (scalar) associated with a square matrix A; it is defined only for square matrices.
Third-Order Determinants
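A third-order determinant can be computed by the standard expansion along the first row; a minimal sketch (the matrix in the test is illustrative):

```python
def det3(m):
    """Third-order determinant, expanded along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
```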
Minors and Cofactors
• The elements of a matrix remaining after the deletion process described above form a sub-determinant of the matrix called a minor.
• The minor |Mij| is the determinant of the submatrix formed by deleting the ith row and jth column of the matrix.
• Here |M11| is the minor of a11, |M12| is the minor of a12, and |M13| is the minor of a13.
• Thus, the determinant of the previous 3×3 matrix equals |A| = a11|M11| − a12|M12| + a13|M13|.
Minors and Cofactors
• A cofactor is a minor with a prescribed sign.
• The rule for the sign of the cofactor is |Cij| = (−1)^(i+j) |Mij|: the cofactor is positive when i + j is even and negative when i + j is odd.
Minors and Cofactors
• Example: The cofactors (1) |C11| , (2) |C12| , and (3) |C13| for the matrix
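Minors and cofactors of a 3×3 matrix can be sketched as follows; the matrix M in the test is an illustrative example of my own, and indices are 0-based in code (so code index (0, 0) corresponds to the text's subscript 11).

```python
def minor(M, i, j):
    """Determinant of the 2x2 submatrix left after deleting row i and column j
    of a 3x3 matrix (0-based indices)."""
    sub = [row[:j] + row[j + 1:] for r, row in enumerate(M) if r != i]
    return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]

def cofactor(M, i, j):
    """A cofactor is the minor with the prescribed sign (-1)**(i+j)."""
    return (-1) ** (i + j) * minor(M, i, j)
```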
Laplace Expansion and Higher-order Determinants
• Laplace expansion is a method for evaluating determinants in terms of
cofactors.
• It thus simplifies matters by permitting higher-order determinants to
be established in terms of lower-order determinants.
• Laplace expansion of a third-order determinant, expanding along the first row, can be expressed as |A| = a11|C11| + a12|C12| + a13|C13|.
Laplace Expansion and Higher-order Determinants
• Laplace expansion permits evaluation of a determinant along any row or
column.
• Selection of a row or column with more zeros than others simplifies
evaluation of the determinant by eliminating terms.
• Laplace expansion also serves as the basis for evaluating determinants of
orders higher than three.
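Because each cofactor is itself a (lower-order) determinant, Laplace expansion lends itself to a recursive sketch that evaluates a determinant of any order; the expansion here is along the first row, and the test matrices are illustrative.

```python
def det(M):
    """Determinant of an n x n matrix by Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # delete row 0 and column j to form the submatrix of the minor
        sub = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(sub)   # cofactor sign (-1)**(0+j)
    return total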
Laplace Expansion and Higher-order Determinants
• Example
Laplace Expansion and Higher-order Determinants
• Laplace expansion for a fourth-order determinant, expanding along the first row, is |A| = a11|C11| + a12|C12| + a13|C13| + a14|C14|.
Solving Simultaneous Equations Using Matrices
CRAMER’S RULE FOR MATRIX SOLUTIONS
• Cramer’s rule provides a simplified method of
solving a system of equations through the use of
determinants.
• It states that xi = |Ai| / |A|
• where xi is the ith unknown variable in a series of equations, |A| is the determinant of the coefficient matrix, and |Ai| is the determinant of a special matrix formed from the original coefficient matrix by replacing the column of coefficients of xi with the column vector of constants.
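Cramer's rule can be sketched directly from that statement: for each unknown, replace the corresponding column of A with b and take the ratio of determinants. The system in the test (2x + 3y = 12, 4x − y = 10) is an illustrative example of my own.

```python
def det(M):
    """Determinant by Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def cramer(A, b):
    """Solve A x = b via x_i = |A_i| / |A| (assumes |A| != 0)."""
    d = det(A)
    xs = []
    for i in range(len(A)):
        # A_i: column i of A replaced by the column vector of constants b
        Ai = [row[:i] + [b[r]] + row[i + 1:] for r, row in enumerate(A)]
        xs.append(det(Ai) / d)
    return xs
```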
Gauss Elimination Method
• The idea is to add multiples of one equation to the others
in order to eliminate a variable and to continue this
process until one variable is left.
• Once this final variable is determined, its value is
substituted back into the other equations in order to
evaluate the remaining unknowns.
• This method, characterized by step‐by‐step elimination of
the variables, is called Gaussian elimination.
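The two stages described above (forward elimination, then back-substitution) can be sketched as follows. This minimal version assumes the pivots encountered are non-zero (no row interchanges); the test system is an illustrative example of my own.

```python
def gauss_solve(A, b):
    """Gaussian elimination: forward elimination to echelon form,
    then back-substitution. Assumes non-zero pivots (no pivoting)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix [A | b]
    for col in range(n):                               # forward elimination
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [M[r][k] - factor * M[col][k] for k in range(n + 1)]
    x = [0.0] * n                                      # back-substitution
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x
```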
Gauss Elimination Method
• Since the coefficient matrix has been transformed into echelon form, the
“forward” part of Gaussian elimination is complete
• What remains now is to use the third row to evaluate the third unknown, then to
back‐substitute into the second row to evaluate the second unknown, and,
finally, to back‐substitute into the first row to evaluate the first unknown
Gauss Jordan Method
• Reduce (or eliminate entirely) the computations involved in back-substitution by performing additional row operations to transform the matrix from echelon to reduced echelon form.
• A matrix is in reduced echelon form when, in addition to being in echelon form, each column that contains a leading non-zero entry (usually made to be 1) has zeros not just below that entry but also above it.
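A sketch of the Gauss-Jordan procedure: each pivot is scaled to 1, and entries both above and below it are zeroed, so the solution can be read straight off the last column with no back-substitution. Non-zero pivots are assumed, and the test system is an illustrative example of my own.

```python
def gauss_jordan(A, b):
    """Reduce the augmented matrix [A | b] to reduced echelon form;
    the solution is then the last column. Assumes non-zero pivots."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        M[col] = [v / M[col][col] for v in M[col]]     # make the pivot 1
        for r in range(n):
            if r != col:                               # zero out above AND below
                factor = M[r][col]
                M[r] = [M[r][k] - factor * M[col][k] for k in range(n + 1)]
    return [M[r][n] for r in range(n)]
```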
Homogeneous System of Linear Equations
• The constant term in every equation is equal to zero (0); no equation in such a system has a constant term in it.
• For a system Ax = 0 with n unknowns, the trivial solution x1 = x2 = … = xn = 0 always exists; a non-trivial solution exists only when |A| = 0.
The Jacobian
• The Jacobian matrix collects the partial derivatives of a system of functions: the elements of each row are the partial derivatives of one function yi with respect to each of the independent variables x1, x2, x3, and the elements of each column are the partial derivatives of each of the functions y1, y2, y3 with respect to one of the independent variables.
• Example
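A Jacobian can be approximated numerically by forward differences; the two functions below (y1 = x1 + 2·x2 and y2 = 3·x1·x2) are hypothetical examples of my own, not the text's.

```python
def jacobian(fs, x, h=1e-6):
    """Numerical Jacobian: row i holds the partial derivatives of
    y_i = fs[i](x) with respect to each x_j (forward differences)."""
    J = []
    for f in fs:
        row = []
        for j in range(len(x)):
            xp = x[:]
            xp[j] += h
            row.append((f(xp) - f(x)) / h)
        J.append(row)
    return J

# Hypothetical system: y1 = x1 + 2*x2, y2 = 3*x1*x2, evaluated at (1, 2);
# the exact Jacobian there is [[1, 2], [6, 3]].
fs = [lambda x: x[0] + 2 * x[1], lambda x: 3 * x[0] * x[1]]
J = jacobian(fs, [1.0, 2.0])
```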
Higher Order HESSIANS
• If |H3| < 0, |H4| > 0, etc., with the signs continuing to alternate, the bordered Hessian is negative definite, and a negative definite bordered Hessian always meets the sufficient condition for a relative maximum.
Eigenvalues and Eigenvectors
• An eigenvalue is a number λ which, when subtracted from the diagonal of a square matrix A, makes the matrix singular.
• We say that a matrix is singular when the inverse of the matrix does not exist. This means the determinant is 0 and the rank is less than n.
• Consider the following matrix
Eigenvalues and Eigenvectors
• With the eigenvalue subtracted from the diagonal, the matrix is singular: all columns are exactly the same, so the rank of the matrix is less than 3.
• Therefore, the number 2 is an eigenvalue of this matrix.
Eigenvalues and Eigenvectors
• Example :
Eigenvectors
• Eigenvectors are vectors that correspond to a given eigenvalue λ and can be found by solving the equation (A − λI)x = 0.
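For the 2×2 case the whole procedure can be sketched by hand: the eigenvalues are the roots of the characteristic equation λ² − (trace)λ + |A| = 0, and an eigenvector for λ solves (A − λI)x = 0. The matrix in the test is an illustrative example of my own; real eigenvalues and a non-zero a12 entry are assumed.

```python
def eig2(A):
    """Eigenvalues of a 2x2 matrix from the characteristic equation
    lam**2 - trace*lam + det = 0 (assumes real roots)."""
    tr = A[0][0] + A[1][1]
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = (tr * tr - 4 * d) ** 0.5
    return (tr + disc) / 2, (tr - disc) / 2

def eigvec2(A, lam):
    """A non-zero vector x satisfying (A - lam*I) x = 0 (assumes A[0][1] != 0)."""
    return [A[0][1], lam - A[0][0]]
```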