Solution For Linear Systems
• LU Decomposition
• Summary
Summary
1. Rank of a matrix: The rank of a matrix is the order r of the largest non-
vanishing minor of the matrix.
Method I:
Echelon Form: Transform the given matrix to an echelon form using
elementary transformations. The rank of the matrix is equal to the number
of non-zero rows of echelon form.
Method II:
Canonical Form OR Normal Form:
Reduce the given matrix A to one of the normal forms
[I_r 0; 0 0], [I_r; 0], [I_r 0], or I_r
using elementary transformations. Then rank of A = r.
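A quick numerical cross-check of the rank definition is easy in Python (a sketch using NumPy; the matrix A below is an arbitrary illustration, not from the notes):

    import numpy as np

    A = np.array([[1, 2, 3],
                  [2, 4, 6],    # this row is 2x the first row
                  [1, 0, 1]])
    # the largest non-vanishing minor has order 2, so rank = 2
    print(np.linalg.matrix_rank(A))   # 2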
5. Gauss elimination:
Conclusions:
i) The system AX = 0 is always consistent, since the trivial solution x1 = x2 = x3 = … = xn = 0 always exists.
ii) If rank of (A/B) = rank of A = n, [│A│ ≠ 0], then the trivial solution is the only solution.
iii) If rank of (A/B) = rank of A = r < n, [│A│ = 0], the system has an infinite number of non-trivial solutions involving (n − r) arbitrary constants.
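The elimination procedure itself can be written in a few lines of Python (an illustrative sketch with partial pivoting, assuming │A│ ≠ 0 so a unique solution exists; the function name gauss_eliminate is ours):

    def gauss_eliminate(A, B):
        n = len(B)
        M = [row[:] + [b] for row, b in zip(A, B)]    # augmented matrix (A|B)
        for k in range(n):
            # partial pivoting: move the largest pivot into row k
            p = max(range(k, n), key=lambda i: abs(M[i][k]))
            M[k], M[p] = M[p], M[k]
            for i in range(k + 1, n):                 # eliminate entries below the pivot
                f = M[i][k] / M[k][k]
                for j in range(k, n + 1):
                    M[i][j] -= f * M[k][j]
        X = [0.0] * n
        for i in range(n - 1, -1, -1):                # back substitution
            s = sum(M[i][j] * X[j] for j in range(i + 1, n))
            X[i] = (M[i][n] - s) / M[i][i]
        return X

    print(gauss_eliminate([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))   # [0.8, 1.4]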
UNIT-II
• Diagonalization of a matrix
• Summary
Summary
1. Eigenvalues & eigenvectors: Let A = (aij)n×n.
(a) The characteristic equation of A is given by │A − λI│ = 0.
(b) The roots of this equation are λ1, λ2, λ3, …, λn. They are called the eigenvalues of A.
(c) A non-zero vector X = [x1, x2, x3, …, xn]^T which satisfies the relation [A − λI]X = 0 (or AX = λX) is called the eigenvector of A corresponding to λ. Thus each eigenvalue has an eigenvector.
If A = PDP^(-1), then A^m = (PDP^(-1))^m = P D^m P^(-1)
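This is easy to verify numerically (a sketch with NumPy; the matrix A below is an arbitrary diagonalizable example):

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    vals, P = np.linalg.eig(A)        # columns of P are eigenvectors of A
    m = 5
    Dm = np.diag(vals ** m)           # D^m: raise each eigenvalue to the m-th power
    Am = P @ Dm @ np.linalg.inv(P)    # A^m = P D^m P^-1
    print(np.allclose(Am, np.linalg.matrix_power(A, m)))   # True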
UNIT-III
LINEAR TRANSFORMATIONS
• Summary
Summary
1. Definitions and properties of some real and complex matrices are given below.
2. The quadratic form 'q' of rank r and index s in n variables is said to be positive definite if r = s = n, negative definite if r = n and s = 0, positive semidefinite if r < n and s = r, negative semidefinite if r < n and s = 0, and indefinite in all other cases.
3. Transformations:
(a) Index: The number of positive terms in the canonical form q1 is called the index 's' of the quadratic form 'q'.
(b) The number of negative terms = r − s, where r is the rank of the form.
(c) Signature = s − (r − s) = 2s − r.
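The rank, index, and signature can be read off from the eigenvalues of the symmetric matrix of the form (a sketch with NumPy; the example matrix and the 1e-12 zero-threshold are our choices):

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],          # symmetric matrix of a quadratic form
                  [1.0, 2.0, 0.0],
                  [0.0, 0.0, -1.0]])
    vals = np.linalg.eigvalsh(A)            # real eigenvalues of a symmetric matrix
    s = sum(v > 1e-12 for v in vals)        # index: number of positive terms
    r = sum(abs(v) > 1e-12 for v in vals)   # rank: number of non-zero terms
    print(r, s, 2 * s - r)                  # rank, index, signature -> 3 2 1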
Symmetric matrix
In linear algebra, a symmetric matrix is a square matrix that is equal to its own transpose (A^T = A). The entries of a symmetric matrix are symmetric with respect to the main diagonal (top left to bottom right). So if the entries are written as A = (aij), then aij = aji for all indices i and j.
Skew-symmetric matrix
In linear algebra, a skew-symmetric (or antisymmetric or antimetric) matrix is a square matrix A whose transpose is also its negative; that is, it satisfies the equation A^T = −A, or in component form aij = −aji. Compare this with a symmetric matrix, whose transpose is the same as the matrix itself.
Every diagonal matrix is symmetric, since all off-diagonal entries are zero. Similarly,
each diagonal element of a skew-symmetric matrix must be zero, since each is its own
negative.
Orthogonal matrix
In linear algebra, an orthogonal matrix is a square matrix with real entries whose columns (or rows) are orthogonal unit vectors (i.e., orthonormal). Because the columns are unit vectors in addition to being orthogonal, some people use the term orthonormal to describe such matrices. Equivalently, Q is orthogonal if Q^T Q = Q Q^T = I, or alternatively Q^T = Q^(-1).
"Adjoint matrix" redirects here. An adjugate matrix is sometimes called a "classical adjoint
matrix".
where the subscripts denote the i,j-th entry, for 1 ≤ i ≤ n and 1 ≤ j ≤ m, and the overbar denotes a
scalar complex conjugate. (The complex conjugate of a + bi, where a and b are reals, isa − bi.)
This can also be written as A* = conj(A^T) = (conj A)^T, where A^T denotes the transpose and conj A denotes the matrix with complex conjugated entries.
Other names for the conjugate transpose of a matrix are Hermitian conjugate or transjugate. The conjugate transpose of a matrix A can be denoted by any of these symbols: A*, A^H, or A^†.
In some contexts, A* denotes only the matrix with complex conjugated entries, and the conjugate transpose is then denoted by A*^T or (A^T)*.
EXAMPLE:
If A = [[1, 2 + i], [3 − 2i, i]], then A* = [[1, 3 + 2i], [2 − i, −i]].
Hermitian matrix
A Hermitian matrix (or self-adjoint matrix) is a square matrix with complex entries which is equal to its own conjugate transpose; that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j:
aij = conj(aji)
If the conjugate transpose of a matrix is denoted by A*, then the Hermitian property can be written concisely as A = A*.
Hermitian matrices can be understood as the complex extension of a real symmetric matrix.
For example,
[[2, 2 + i], [2 − i, 3]]
is a Hermitian matrix.
Skew-Hermitian matrix
In linear algebra, a square matrix with complex entries is said to be skew-Hermitian or antihermitian if its conjugate transpose is equal to its negative. That is, the matrix A is skew-Hermitian if it satisfies the relation
A* = −A
where A* denotes the conjugate transpose of the matrix. In component form, this means that
aij = −conj(aji)
for all i and j, where aij is the i,j-th entry of A and conj denotes complex conjugation.
Unitary matrix
In mathematics, a unitary matrix is an n-by-n complex matrix U satisfying the condition
U* U = U U* = I_n
where I_n is the identity matrix in n dimensions and U* is the conjugate transpose (also called the Hermitian adjoint) of U. Note this condition says that a matrix U is unitary if and only if it has an inverse which is equal to its conjugate transpose: U^(-1) = U*.
A unitary matrix in which all entries are real is an orthogonal matrix. Just as an orthogonal matrix G preserves the (real) inner product of two real vectors, so a unitary matrix U satisfies
<Ux, Uy> = <x, y>
for all complex vectors x and y, where <·, ·> stands now for the standard inner product on C^n.
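All of the defining relations above reduce to one-line checks in NumPy (a sketch; the three matrices are illustrative examples, not from the notes):

    import numpy as np

    H = np.array([[2, 2 + 1j],
                  [2 - 1j, 3]])           # Hermitian: H = H*
    print(np.allclose(H, H.conj().T))     # True

    S = np.array([[1j, 2 + 1j],
                  [-2 + 1j, 0]])          # skew-Hermitian: S* = -S
    print(np.allclose(S.conj().T, -S))    # True

    U = np.array([[1, 1j],
                  [1j, 1]]) / np.sqrt(2)  # unitary: U* U = I
    print(np.allclose(U.conj().T @ U, np.eye(2)))   # True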
UNIT-IV
1. Bisection Method
2. Method of False Position
3. The Iteration Method
4. Newton Raphson Method
• Interpolation
- Finite Differences
- Forward Differences
- Backward Differences
- Central Differences
• Summary
Summary
(i) Bisection method: If a function f(x) is continuous between a and b, and f(a) & f(b) are of opposite signs, then there exists at least one root between a and b. The approximate value of the root between them is x0 = (a + b)/2. Repeat the same process till the root is obtained to the desired accuracy.
(iii) Iteration method: We can use this method if we can express f(x) = 0 as x = Φ(x) such that │Φ′(x0)│ < 1. Then the successive approximate roots are given by
xn = Φ(xn−1), n = 1, 2, …
(iv) Newton-Raphson method: The successive approximate roots are given by
xn+1 = xn − f(xn)/f′(xn), n = 0, 1, 2, …
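Both iterations are short enough to sketch in Python (illustrative code; applied here to f(x) = x² − 2, whose positive root is √2 ≈ 1.414214):

    def bisection(f, a, b, tol=1e-6):
        # assumes f(a) and f(b) have opposite signs
        while (b - a) > tol:
            x0 = (a + b) / 2.0
            if f(a) * f(x0) < 0:
                b = x0
            else:
                a = x0
        return (a + b) / 2.0

    def newton_raphson(f, df, x, n=10):
        for _ in range(n):                # x_{n+1} = x_n - f(x_n)/f'(x_n)
            x = x - f(x) / df(x)
        return x

    f = lambda x: x * x - 2
    print(bisection(f, 1.0, 2.0))                   # ~1.414214
    print(newton_raphson(f, lambda x: 2 * x, 1.0))  # ~1.414214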
2. Interpolation
(ii) Newton's backward interpolation formula:
yp = yn + p ∇yn + p(p+1)/2! ∇²yn + p(p+1)(p+2)/3! ∇³yn + …
where p = [x − xn]/h
(iii) Gauss forward interpolation formula:
Using central differences, with δ as the operator,
the Gauss forward interpolation formula is
yp = y0 + p δy1/2 + p(p−1)/2! δ²y0 + (p+1)p(p−1)/3! δ³y1/2 + (p+1)p(p−1)(p−2)/4! δ⁴y0 + …
where p = [x − x0]/h
FINITE DIFFERENCES:
Forward difference: ∆y0 = y1 − y0
Backward difference: ∇y1 = y1 − y0
Central difference: δy1/2 = y1 − y0
9. δ = E^(1/2) − E^(−1/2)
10. ∆ = E∇ = ∇E = δE^(1/2)
11. δ² = ∆∇ = ∇∆
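Identity 11 can be checked numerically on any table of values (a sketch on y = x² with unit spacing; all three expressions for δ²yi agree):

    y = [x * x for x in range(6)]           # y0..y5, h = 1

    fwd = lambda i: y[i + 1] - y[i]         # forward difference, delta y_i
    bwd = lambda i: y[i] - y[i - 1]         # backward difference, nabla y_i

    i = 2
    central_sq = y[i + 1] - 2 * y[i] + y[i - 1]   # delta^2 y_i directly
    print(central_sq,
          bwd(i + 1) - bwd(i),              # (Delta Nabla) y_i
          fwd(i) - fwd(i - 1))              # (Nabla Delta) y_i -> all print 2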
2. Gauss backward:
yp = y0 + p ∆y−1 + (p+1)p/2! ∆²y−1 + (p+1)p(p−1)/3! ∆³y−2 + (p+2)(p+1)p(p−1)/4! ∆⁴y−2 + …
where p = (x − x0)/h. The odd-order differences used are ∆y−1, ∆³y−2, ∆⁵y−3, ∆⁷y−4, …
3. Stirling's:
yp = y0 + p(∆y0 + ∆y−1)/2 + p²/2! ∆²y−1 + p(p²−1)/3! · (∆³y−1 + ∆³y−2)/2 + p²(p²−1)/4! ∆⁴y−2 + …
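Stirling's formula translates directly into code (a sketch keeping terms up to the fourth difference; stirling is our own helper, and x0 is taken as the middle tabulated point):

    def stirling(xs, ys, x):
        n, m = len(ys), len(ys) // 2        # m indexes the central value y0
        h = xs[1] - xs[0]                   # assumes equally spaced xs
        p = (x - xs[m]) / h
        d = [list(ys)]                      # d[k][i] = k-th forward difference of y_i
        for k in range(1, n):
            d.append([d[k - 1][i + 1] - d[k - 1][i] for i in range(n - k)])
        return (d[0][m]
                + p * (d[1][m] + d[1][m - 1]) / 2
                + p**2 / 2 * d[2][m - 1]
                + p * (p**2 - 1) / 6 * (d[3][m - 1] + d[3][m - 2]) / 2
                + p**2 * (p**2 - 1) / 24 * d[4][m - 2])

    xs = [0, 1, 2, 3, 4]
    ys = [x**3 for x in xs]                 # a cubic, so the formula is exact
    print(stirling(xs, ys, 2.5))            # 15.625 = 2.5^3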
UNIT-V
• Curve Fitting
• Trapezoidal Rule
• Gaussian Integration
• Summary
Summary
1. Curve Fitting
2. Interpolation
4. Trapezoidal Rule
I = (h/2)[(y0 + yn) + 2(y1 + y2 + … + yn−1)]
where y0, y1, …, yn, i.e., yi = ƒ(xi), are the values corresponding to the arguments x0 = a, x1 = x0 + h, …, xn = x0 + nh = b.
This rule can be applied when the given interval (a, b) is divided into any number of subintervals of length 'h'.
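The rule is one line of arithmetic once the ordinates are tabulated (a sketch; tested on the integral of sin x from 0 to π, whose exact value is 2):

    import math

    def trapezoidal(f, a, b, n):
        h = (b - a) / n
        ys = [f(a + i * h) for i in range(n + 1)]              # y0 .. yn
        return (h / 2) * (ys[0] + ys[-1] + 2 * sum(ys[1:-1]))

    print(trapezoidal(math.sin, 0.0, math.pi, 100))            # ~1.99984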
6. Gaussian Integration
∫ ƒ(x) dx (from a to b) ≈ ∑ wi ƒ(xi), i from 1 to n, where the xi are the nodes and the wi the corresponding weights.
Least-squares normal equations:
(a) For the straight line y = a + bx:
∑y = na + b∑x
∑xy = a∑x + b∑x²
(b) For the parabola y = a + bx + cx²:
∑y = na + b∑x + c∑x²
∑xy = a∑x + b∑x² + c∑x³
∑x²y = a∑x² + b∑x³ + c∑x⁴
(c) For the curve y = a + bx²:
∑y = na + b∑x²
∑x²y = a∑x² + b∑x⁴
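For the straight line, the two normal equations can be solved in closed form (a sketch; fit_line is our own helper):

    def fit_line(xs, ys):
        n = len(xs)
        sx, sy = sum(xs), sum(ys)
        sxx = sum(x * x for x in xs)
        sxy = sum(x * y for x, y in zip(xs, ys))
        # solve:  sy = n*a + b*sx   and   sxy = a*sx + b*sxx
        b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        a = (sy - b * sx) / n
        return a, b

    print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))   # (1.0, 2.0), i.e. y = 1 + 2x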
(NUMERICAL DIFFERENTIATION)
1. Newton Forward:
y = y0 + p∆y0 + p(p−1)/2! ∆²y0 + p(p−1)(p−2)/3! ∆³y0 + …
2. Newton Backward:
3. Stirling's:
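Differentiating Newton's forward formula (item 1 above) and putting p = 0 gives the standard series dy/dx at x0 ≈ (1/h)(∆y0 − ∆²y0/2 + ∆³y0/3 − …). A sketch (derivative_at_x0 is our own helper, tested on y = e^x so the exact answer at x = 0 is 1):

    import math

    def derivative_at_x0(ys, h):
        d, k, s, sign = list(ys), 1, 0.0, 1
        while len(d) > 1:
            d = [d[i + 1] - d[i] for i in range(len(d) - 1)]   # next difference column
            s += sign * d[0] / k                               # +D1/1 - D2/2 + D3/3 ...
            sign, k = -sign, k + 1
        return s / h

    ys = [math.exp(0.1 * i) for i in range(6)]   # e^x at x = 0, 0.1, ..., 0.5
    print(derivative_at_x0(ys, 0.1))             # ~1.000002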
(NUMERICAL INTEGRATION)
∫ y dx = (2h/45)[7y0 + 32y1 + 12y2 + 32y3 + 14y4 + 32y5 + 12y6 + …]   (Boole's rule)
Note: the number of subintervals should be a multiple of 4.
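A sketch of this rule (the coefficient pattern 7, 32, 12, 32, 14, 32, 12, … repeats every four ordinates; tested on the integral of sin x from 0 to π, exact value 2):

    import math

    def booles(f, a, b, n):
        # n must be a multiple of 4
        h = (b - a) / n
        y = [f(a + i * h) for i in range(n + 1)]
        s = 7 * (y[0] + y[-1])
        for i in range(1, n):
            s += (32 if i % 2 == 1 else (12 if i % 4 == 2 else 14)) * y[i]
        return 2 * h / 45 * s

    print(booles(math.sin, 0.0, math.pi, 8))   # ~2.000000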
• Picard’s method
• Euler’s method
• Summary
Summary
2. Picard's method
3. Euler's method: the successive approximations are given by yn+1 = yn + h f(xn, yn), i.e.
y1 = y0 + h f(x0, y0)
y2 = y1 + h f(x1, y1)
y3 = y2 + h f(x2, y2), …
4. Runge-Kutta Order 4:
y1 = y0 + (1/6)[k1 + 2k2 + 2k3 + k4]
where k1 = h f(x0, y0), k2 = h f(x0 + h/2, y0 + k1/2),
k3 = h f(x0 + h/2, y0 + k2/2), k4 = h f(x0 + h, y0 + k3)
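The four slopes k1…k4 give the following sketch (illustrative code; applied to dy/dx = x + y, y(0) = 1, whose exact solution is y = 2e^x − x − 1, so y(1) ≈ 3.43656):

    def rk4(f, x, y, h, steps):
        for _ in range(steps):
            k1 = h * f(x, y)
            k2 = h * f(x + h / 2, y + k1 / 2)
            k3 = h * f(x + h / 2, y + k2 / 2)
            k4 = h * f(x + h, y + k3)
            y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
            x += h
        return y

    print(rk4(lambda x, y: x + y, 0.0, 1.0, 0.1, 10))   # ~3.43656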
5. Milne's Predictor-Corrector:
FOURIER SERIES
• Periodic Functions
• Fourier Series
• Euler’s Formulae
Summary
1. Periodic functions
4. Dirichlet's conditions:
(i) ƒ(x) and its integral are finite and single-valued
(ii) ƒ(x) has a finite number of discontinuities
(iii) ƒ(x) has a finite number of maxima and minima.
Then the Fourier series converges to ƒ(x) at all points where ƒ(x) is continuous. Also, the series converges to the average of the left limit and the right limit of ƒ(x) at each point of discontinuity of ƒ(x).
(i) If ƒ(x) is an even function in (0, 2π) or (−π, π), the Fourier series for ƒ(x) contains only cosine terms:
ƒ(x) = a0/2 + ∑ an cos nx
(ii) If ƒ(x) is an odd function in (0, 2π) or (−π, π), the Fourier series for ƒ(x) contains only sine terms:
ƒ(x) = ∑ bn sin nx
where n runs from 1 to ∞.
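The coefficients can be checked numerically from Euler's formulae (a sketch; bn = (1/π) ∫ ƒ(x) sin nx dx over (−π, π) is evaluated here by the trapezoidal rule, and for the odd function ƒ(x) = x the exact values are bn = 2(−1)^(n+1)/n):

    import math

    def fourier_bn(f, n, N=2000):
        h = 2 * math.pi / N
        xs = [-math.pi + i * h for i in range(N + 1)]
        g = [f(x) * math.sin(n * x) for x in xs]
        integral = h * (sum(g) - 0.5 * (g[0] + g[-1]))   # trapezoidal rule
        return integral / math.pi

    print(fourier_bn(lambda x: x, 1))   # ~2.0
    print(fourier_bn(lambda x: x, 2))   # ~-1.0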
• Laplace’s equation
Summary
1. Formation of partial differential equations by the elimination of arbitrary constants and arbitrary functions.
∂f/∂y + (∂f/∂z)(∂z/∂y) = 0, i.e. ∂f/∂y + q ∂f/∂z = 0 …………..(3)
Differentiating Φ(u, v) = 0 partially with respect to x:
(∂Φ/∂u)(∂u/∂x + (∂u/∂z)(∂z/∂x)) + (∂Φ/∂v)(∂v/∂x + (∂v/∂z)(∂z/∂x)) = 0
and with respect to y:
(∂Φ/∂u)(∂u/∂y + (∂u/∂z)(∂z/∂y)) + (∂Φ/∂v)(∂v/∂y + (∂v/∂z)(∂z/∂y)) = 0
where p = ∂z/∂x and q = ∂z/∂y.
Eliminating ∂Φ/∂u and ∂Φ/∂v gives Lagrange's linear equation Pp + Qq = R …………(1)
where
P = (∂u/∂y)(∂v/∂z) − (∂u/∂z)(∂v/∂y)
Q = (∂u/∂z)(∂v/∂x) − (∂u/∂x)(∂v/∂z)
R = (∂u/∂x)(∂v/∂y) − (∂u/∂y)(∂v/∂x)
To solve (1),
first write Lagrange's auxiliary equations (subsidiary equations)
dx/P = dy/Q = dz/R ………………….(2)
∂Φ/∂a = 0 ……………..(3)
∂Φ/∂b = 0 ………............(4)
Elimination of 'a' and 'b' from (2), (3), (4) gives an equation of the form f(x, y, z) = 0, which is called the singular integral.
Since a = ∂z/∂x and b = ∂z/∂y, replacing p = ∂z/∂x = a and q = ∂z/∂y = b in the given P.D.E. gives
F(a, b) = 0, so that b = Φ(a).
Substituting in the given equation,
p = Φ(z) ………………(3)
and from (2), (3): q = a Φ(z) ……………….(4)
Then dz = p dx + q dy gives
dz = Φ(z) dx + a Φ(z) dy
dz/Φ(z) = dx + a dy
which on integration gives the complete integral.
0 = x + ∂f/∂a ……………..(3)
0 = y + ∂f/∂b ……………..(4)
The elimination of a and b from (3) and (4) gives the singular integral.
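As a quick illustration (a standard worked example for Clairaut's type, not from the original notes): for z = px + qy + pq the complete integral is z = ax + by + ab; then (3) gives 0 = x + b and (4) gives 0 = y + a, so a = −y and b = −x, and substituting back gives the singular integral z = −xy, i.e. z + xy = 0.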