
UNIT-I

SOLUTION FOR LINEAR SYSTEMS


• Elementary row (and column) transformations

• Rank of a matrix-Echelon form-Normal form

• Solution of linear systems-Direct methods

• LU Decomposition

• LU Decomposition from Gauss elimination

• Solution of Tridiagonal system

• Summary
Summary
1. Rank of a matrix: The rank of a matrix is the order r of the largest non-
vanishing minor of the matrix.

2. Elementary transformations of a matrix:

a) Row transformations:
i) Interchange of the ith and jth rows ……. Rij
ii) Multiplication of the ith row by a non-zero scalar l ……. Ri(l)
iii) Addition of l times the elements of the jth row to the corresponding
elements of the ith row ……. Rij(l)
b) Column transformations are defined as in (a), with R replaced by C.

3. Computation of the rank of a matrix:

Method I:
Echelon form: Transform the given matrix to echelon form using
elementary transformations. The rank of the matrix equals the number
of non-zero rows of the echelon form.

Method II:
Canonical form (normal form):
Reduce the given matrix A, using elementary transformations, to one of the normal forms

    [Ir 0]    [Ir]
    [0  0],   [0 ],   [Ir 0]   or   Ir.

Then rank of A = r.

4. Simultaneous linear equations - methods of solution:

1. A system of m linear equations in n unknowns can be written in matrix
form as AX = B, where

    A = [a11 a12 ... a1n]     X = [x1]     B = [b1]
        [a21 a22 ... a2n]         [x2]         [b2]
        [ .   .  ...  . ]         [..]         [..]
        [am1 am2 ... amn],        [xn],        [bm]

If all bi = 0, i.e. B = 0, the system is homogeneous; otherwise it is
non-homogeneous.

2. Condition for consistency: A system of linear equations AX = B is

consistent iff the rank of A is equal to the rank of the augmented matrix
[A|B].
3. Solution of AX = B - working rule:

i) Find r(A) and r(A|B) by applying elementary transformations.

ii) If r(A) = r(A|B) = n (n being the number of unknowns), the system is
consistent and has a unique solution. [For square A this is the case |A| ≠ 0.]
iii) If r(A) = r(A|B) < n, the system is consistent and has infinitely many solutions.
iv) If r(A) ≠ r(A|B), the system is inconsistent and has no solution.
(A quick numerical check of this rule is sketched below.)
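As a numerical illustration (not part of the original notes), here is a minimal
Python sketch assuming numpy is available; matrix_rank plays the role of r(·):

import numpy as np

def classify_system(A, b):
    """Classify AX = B by comparing r(A) with r(A|B), as in the working rule."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    n = A.shape[1]                                   # number of unknowns
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.hstack([A, b]))   # augmented matrix [A|b]
    if rA != rAb:
        return "inconsistent: no solution"
    return "unique solution" if rA == n else "infinitely many solutions"

# x + y = 3 and 2x + 2y = 6: consistent, r(A) = 1 < n = 2
print(classify_system([[1, 1], [2, 2]], [3, 6]))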

4. Other methods: Let A be a 3x3 matrix. Then,

i) Matrix inverse method: AX = B has the solution X = A^-1 B (if |A| ≠ 0).

ii) Cramer's rule (method of determinants): Let Δ = |A| ≠ 0.
Form three more determinants Δ1, Δ2, Δ3 from the matrices obtained by
replacing the 1st, 2nd and 3rd columns of A, respectively, by the column
matrix B of the system.
Then x1 = Δ1/Δ, x2 = Δ2/Δ, x3 = Δ3/Δ.
iii) Gauss-Jordan method: Reduce the augmented matrix (A|B) to the form
[I3 X], where I3 is the unit matrix. Then X = [x1 x2 x3]' is the solution.

5. Gauss elimination:

Step 1: Eliminate the unknowns x1, x2, ..., x(n-1) successively to
obtain an upper triangular system.
Step 2: The last equation of this system gives the value of xn.
Step 3: Back substitution into the remaining equations gives
the other unknowns.

6. LU decomposition method: solution of AX = B.

i) Let all leading principal minors of A be non-zero.

ii) Take (for a 3x3 system)

    L = [1   0   0]        U = [u11 u12 u13]
        [l21 1   0],           [0   u22 u23],
        [l31 l32 1 ]           [0   0   u33]

so that A = LU and LUX = B.

iii) Let Y = UX and LY = B, where Y = [y1 y2 y3]'.
iv) From LU = A, find both L and U.
v) From LY = B, find y1, y2, y3 by forward substitution; then from
UX = Y, find x1, x2, x3 by back substitution. (A sketch follows.)
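A minimal Python sketch of steps (ii)-(v), assuming numpy is available and no
pivoting is needed (all leading principal minors non-zero); the test system is
illustrative:

import numpy as np

def lu_solve(A, b):
    """Doolittle LU decomposition (unit diagonal in L), then
    forward substitution LY = B and back substitution UX = Y."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(A)
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(n):
        for j in range(k, n):                     # row k of U
            U[k, j] = A[k, j] - L[k, :k] @ U[:k, j]
        for i in range(k + 1, n):                 # column k of L
            L[i, k] = (A[i, k] - L[i, :k] @ U[:k, k]) / U[k, k]
    y = np.zeros(n)                               # LY = B
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)                               # UX = Y
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

print(lu_solve([[2, 1, 1], [4, -6, 0], [-2, 7, 2]], [5, -2, 9]))  # [1, 1, 2]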
7. Tridiagonal matrix:

Matrices of the type

    [a11 a12 0   0  ]
    [a21 a22 a23 0  ]
    [0   a32 a33 a34]
    [0   0   a43 a44]

(non-zero entries only on the main diagonal and the two adjacent
diagonals) are tridiagonal matrices.

8. Solution of a tridiagonal system: the procedure is the LU method above,
which simplifies considerably; a sketch of the simplified form
(often called the Thomas algorithm) follows.
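A minimal Python sketch (numpy assumed), with sub-diagonal a, main diagonal b
and super-diagonal c; the 4x4 test system is illustrative:

import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a is the sub-diagonal (length n-1),
    b the main diagonal (length n), c the super-diagonal (length n-1),
    d the right-hand side (length n)."""
    n = len(b)
    b, d = np.array(b, dtype=float), np.array(d, dtype=float)
    for i in range(1, n):                  # forward elimination
        m = a[i - 1] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = np.zeros(n)                        # back substitution
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

print(thomas([1, 1, 1], [4, 4, 4, 4], [1, 1, 1], [5, 6, 6, 5]))  # [1, 1, 1, 1]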

9. Homogeneous linear equations: AX = 0, where A = (aij) is m x n and
X = [x1, x2, ..., xn]'. The system AX = 0 can be solved by elementary
transformations.

Conclusions:

i) The system AX = 0 is always consistent, since the trivial solution
x1 = x2 = ... = xn = 0 always exists.
ii) If rank of A = n, the trivial solution is the only solution.
[For square A this is the case |A| ≠ 0.]
iii) If rank of A = r < n, the system has infinitely many non-trivial
solutions involving (n - r) arbitrary constants. [For square A this is
the case |A| = 0.]
UNIT-II

EIGEN VALUES & EIGEN VECTORS


& THEIR APPLICATIONS

• Eigen values, eigen vectors - Properties

• Cayley-Hamilton theorem-Inverse & powers of matrix by Cayley-


Hamilton theorem

• Diagonalization of a matrix

• Calculations of powers of a matrix-modal & spectral matrices

• Summary
Summary
1. Eigen values & eigen vectors: Let A = (aij) be an n x n matrix.
(a) The characteristic equation of A is |A - λI| = 0.
(b) The roots λ1, λ2, λ3, ..., λn of this equation are
called the eigen values of A.
(c) A non-zero vector X = [x1, x2, x3, ..., xn]' which satisfies
the relation [A - λI]X = 0 (or AX = λX) is called an eigen vector
of A corresponding to λ. Thus each eigen value has an eigen
vector.

2. Properties of eigen values & eigen vectors:

1. The sum of the eigen values of a square matrix A is its trace, and
their product is |A|.
2. The eigen values of A and its transpose A' are equal.
3. If A is a non-singular matrix and λ is an eigen value of A,
then 1/λ is an eigen value of A^-1.
4. If λ is an eigen value of A, then μλ is an eigen value of
μA, where μ is a non-zero scalar.
5. If λ is an eigen value of A, then λ^m is an eigen value of A^m,
m being any positive integer.
6. The eigen values of a diagonal matrix are its diagonal elements.
7. If B is a non-singular matrix and A, B are matrices of the same
order, then A and B^-1 A B have the same eigen values.
8. λ is a characteristic root of a square matrix A iff there exists a
non-zero vector X such that AX = λX.
9. If X is an eigen vector of A corresponding to the eigen value λ,
then cX is also an eigen vector of A corresponding to λ, c
being any non-zero scalar.
10. An eigen vector X of a square matrix A cannot correspond to
more than one eigen value of A.
11. Zero is an eigen value of a matrix iff the matrix is singular.
12. If λ is an eigen value of a non-singular matrix A, then
|A|/λ is an eigen value of Adj A.

3. Cayley-Hamilton theorem:

'Every square matrix satisfies its own characteristic equation.'

4. To find the inverse of a square matrix A using the C-H theorem:
Let A be a square matrix and let
λ^n + a1 λ^(n-1) + a2 λ^(n-2) + ... + an = 0 ....(1)
be its characteristic equation (ai, i = 1 to n, are constants).
Then the C-H theorem gives A^n + a1 A^(n-1) + ... + an I = 0 ....(2)
Multiplying (2) by A^-1: A^(n-1) + a1 A^(n-2) + ... + a(n-1) I + an A^-1 = 0,
so A^-1 = (-1/an)[A^(n-1) + a1 A^(n-2) + ... + a(n-1) I].
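A minimal numpy sketch of this computation (not part of the original notes;
np.poly returns the characteristic polynomial coefficients [1, a1, ..., an]):

import numpy as np

def inverse_by_cayley_hamilton(A):
    """A^-1 = -(1/an)[A^(n-1) + a1 A^(n-2) + ... + a(n-1) I]."""
    A = np.asarray(A, dtype=float)
    n = len(A)
    coeffs = np.poly(A)              # [1, a1, ..., an] from |A - λI| = 0
    B = np.eye(n)                    # Horner evaluation of the bracket
    for a in coeffs[1:-1]:           # a1 ... a(n-1)
        B = A @ B + a * np.eye(n)
    return -B / coeffs[-1]           # an ≠ 0 since A is non-singular

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(inverse_by_cayley_hamilton(A) @ A)   # ≈ identity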

5. To find positive integral powers of A using the C-H theorem:

Let m ≥ n be a positive integer.
Multiplying (2) by A^(m-n): A^m + a1 A^(m-1) + ... + an A^(m-n) = 0,
from which A^m can be found in terms of lower powers of A.

6. Diagonalization of a square matrix:

Let A be a square matrix of order n having n linearly
independent eigen vectors. Then there exists a non-singular
matrix P such that P^-1 A P = D is a diagonal matrix, with
D = Diag[λ1, λ2, ..., λn].

7. Working rule to diagonalise A = (aij), n x n:

Step 1: Find the eigen values λi (i = 1, 2, ..., n) of A.
Step 2: Find the eigen vectors Xi corresponding to the λi (assumed
distinct).
Step 3: Form the matrix P = [X1 X2 ... Xn] whose column
vectors Xi are the eigen vectors.
(The matrix P is known as the modal matrix of A.)
Step 4: Find D = P^-1 A P = Diag[λ1 λ2 ... λn]. This is the
diagonalisation of A.
(The matrix D is known as the spectral matrix of A.)

Computation of positive powers of A:
If m is a positive integer, then A^m = (P D P^-1)^m = P D^m P^-1,
as in the sketch below.
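A minimal numpy sketch of this power computation, assuming A has n independent
eigen vectors so that eig yields an invertible modal matrix P:

import numpy as np

def matrix_power_by_diagonalization(A, m):
    """A^m = P D^m P^-1: power the eigen values, not the matrix."""
    lam, P = np.linalg.eig(np.asarray(A, dtype=float))
    Dm = np.diag(lam ** m)                      # D^m: spectral matrix powered
    return (P @ Dm @ np.linalg.inv(P)).real    # .real drops round-off imaginaries

print(matrix_power_by_diagonalization([[2, 1], [1, 2]], 5))
# compare: np.linalg.matrix_power([[2, 1], [1, 2]], 5)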
UNIT-III

LINEAR TRANSFORMATIONS

• Real Matrices – Symmetric, Skew-symmetric, Orthogonal

• Linear transformations- Orthogonal Transformation

• Complex Matrices- Hermitian, Skew-Hermitian and Unitary

• Eigen Values and Eigen Vectors of Complex matrices and Their


Properties

• Quadratic forms- Reduction to Canonical Form

• Rank- Positive, Negative Definite; Semi definite – Index, Signature-


Sylvester Law

• A Summary
Summary
1. Definitions and properties of some real and complex matrices are
given in the notes following this summary.

2. Properties of eigen values of real and complex matrices:

1. If λ is a characteristic root of an orthogonal matrix, then 1/λ is also a
characteristic root.
2. The eigen values of an orthogonal matrix are of unit modulus.
3. The eigen values of a Hermitian matrix are all real.
4. The eigen values of a real symmetric matrix are all real.
5. The eigen values of a skew-Hermitian matrix are either purely
imaginary or zero.
6. The eigen values of a real skew-symmetric matrix are purely
imaginary or zero.
7. The eigen values of a unitary matrix are of unit modulus.
8. If A is a nilpotent matrix, then 0 is the only eigen value of A.
9. If A is an involutory matrix (A² = I), its possible eigen values are 1 and -1.
10. If A is an idempotent matrix (A² = A), its possible eigen values are 0 and 1.

3. Transformations:

(a) The transformation X = AY, where A = (aij) is n x n, X = [x1 x2 ... xn]'
and Y = [y1 y2 ... yn]', transforms the vector Y to the vector X via the
matrix A. The transformation is linear.

(b) Non-singular transformation:

(i) If A is non-singular (|A| ≠ 0), then Y = AX is a non-singular
transformation.
(ii) Then X = A^-1 Y is the inverse transformation of Y = AX.

(c) Orthogonal transformation: If A is an orthogonal matrix, then
Y = AX is an orthogonal transformation.
A orthogonal means A' = A^-1, so Y'Y = X'X;
i.e., Y = AX transforms (x1² + x2² + ... + xn²) to
(y1² + y2² + ... + yn²).
4. Quadratic forms: A homogeneous polynomial of 2nd degree in n
variables x1, x2, ..., xn is called a quadratic form.

Thus q = Σ Σ aij xi xj, with i, j running from 1 to n,

(or) q = a11 x1² + a22 x2² + ... + ann xn² + (a12 + a21) x1 x2 +
(a13 + a31) x1 x3 + ...

is a quadratic form in the n variables x1, x2, ..., xn.

5. Matrix of a quadratic form q: If A is a symmetric matrix,

q = X'AX is the matrix representation of q, and A is the matrix
of q, where aij + aji = 2aij is the coefficient of xi xj
[i.e. aij = aji = 1/2 (coefficient of xi xj) for i ≠ j].
Then q = X'AX = [x1 x2 ... xn] A [x1 x2 ... xn]'.

6. Rank of a quadratic form: If q = X'AX, then the rank of A is the rank of
the quadratic form q.

(a) If rank of A = r = n, q is a non-singular form.

(b) If r < n, q is singular.

7. Canonical form (normal form) of q: A real quadratic form q in
which the product terms are missing (i.e. all terms are square terms)
is called a canonical form.
i.e. q = a1 x1² + a2 x2² + ... + an xn² is a canonical form.

8. Reduction to canonical form: If D = Diag[d1, d2, ..., dr] is the
diagonalization of A, then q1 = d1 x1² + d2 x2² + ... + dr xr²
(where r = rank of A) is the canonical form of q = X'AX.

9. Nature of a quadratic form:

1. If q = X'AX is the given quadratic form (in n variables) of rank r,
then q1 = d1 x1² + d2 x2² + ... + dr xr² is the canonical form of q
[each di is +ve, -ve, or zero].

(a) Index: The number of +ve terms in q1 is called the index s of
the quadratic form q.
(b) The number of -ve terms = r - s.
(c) Signature = s - (r - s) = 2s - r.

2. The quadratic form q is said to be

(a) +ve definite if r = n and s = n
(b) -ve definite if r = n and s = 0
(c) +ve semi-definite if r < n and s = r
(d) -ve semi-definite if r < n and s = 0
(e) indefinite in all other cases

3. To find the nature of q with the help of principal minors:

Let q = X'AX be the given quadratic form and let M1, M2, M3, ...
be the leading principal minors of A.

(a) q is +ve definite iff Mj > 0 for every j ≤ n.

(b) q is -ve definite iff M1, M3, M5, ... are all -ve and M2, M4, M6, ...
are all +ve (i.e. the Mj alternate in sign, starting with M1 < 0).
(c) q is +ve semi-definite if Mj ≥ 0 for every j ≤ n and at least one
Mj = 0.
(d) q is -ve semi-definite if the sign pattern of (b) holds with some
Mj = 0.
(e) In all other cases q is indefinite.

4. To find the nature of q by examining the eigen values of A:

If q = X'AX is a quadratic form in n variables, then it is

a. +ve definite iff all eigen values of A are +ve
b. -ve definite iff all eigen values are -ve
c. +ve semi-definite if all eigen values are ≥ 0 and at least one
eigen value = 0
d. -ve semi-definite if all eigen values are ≤ 0 and at least one
eigen value = 0
e. indefinite if A has both +ve and -ve eigen values
10. Methods of reduction of a quadratic form to canonical form:

(a) Lagrange's method: the quadratic form is reduced to a canonical
form by completing squares.

(b) Diagonalization method: Write A = I3 A I3 [if A = (aij) is 3x3]. Apply
elementary row transformations on the L.H.S. and on the prefactor of the
R.H.S., and the corresponding column transformations on the L.H.S.
as well as on the post-factor of the R.H.S. Continue this process until
the equation is reduced to the form D = P'AP, where D is the diagonal matrix

    D = [d1 0  0 ]
        [0  d2 0 ]
        [0  0  d3]

Then the canonical form is q1 = Y'(P'AP)Y = d1 y1² + d2 y2² + d3 y3²,
where Y = [y1 y2 y3]'; i.e., if q = X'AX with X = [x1 x2 x3]',
then X = PY is the corresponding transformation.

(c) Orthogonal reduction of q = X'AX:

(i) Find the eigen values λi and corresponding eigen vectors Xi
(i = 1, 2, ..., n) of A.
(ii) Form the modal matrix B = [X1 X2 ... Xn].
(iii) Normalize each column vector Xi of B by dividing it by its
magnitude, and write the normalized modal matrix P, which is
orthogonal (i.e. P' = P^-1).
(iv) Then X = PY reduces q to q1,
where q1 = λ1 y1² + λ2 y2² + ... + λn yn² = Y'(P'AP)Y.
(X = PY is known as an orthogonal transformation; a sketch follows.)
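A minimal numpy sketch of the orthogonal reduction (eigh handles symmetric A
and already returns unit eigen vectors, i.e. the normalized modal matrix P);
the quadratic form below is illustrative:

import numpy as np

def orthogonal_reduction(A):
    """Return eigen values and the normalized modal matrix P of a
    symmetric A, so that X = PY gives q1 = λ1 y1² + ... + λn yn²."""
    lam, P = np.linalg.eigh(np.asarray(A, dtype=float))
    return lam, P

# q = 2x1² + 2x2² + 2x1x2  ->  A = [[2, 1], [1, 2]]
lam, P = orthogonal_reduction([[2, 1], [1, 2]])
print(lam)        # [1, 3]: canonical form q1 = y1² + 3y2²
print(P.T @ P)    # ≈ identity, confirming P' = P^-1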

11. Sylvester's law of inertia: The signature of a real quadratic form is

invariant under all normal reductions.
Symmetric matrix
In linear algebra, a symmetric matrix is a square matrix A that is equal to its
transpose: A = A^T.

The entries of a symmetric matrix are symmetric with respect to the main diagonal (top
left to bottom right). So if the entries are written as A = (aij), then aij = aji
for all indices i and j. For example, the following 3x3 matrix is symmetric:

    [1 7 3]
    [7 4 5]
    [3 5 6]

A matrix is called skew-symmetric or antisymmetric if its transpose is the same as its
negative. For example, the following 3x3 matrix is skew-symmetric:

    [ 0  2 -1]
    [-2  0 -4]
    [ 1  4  0]

Skew-symmetric matrix
In linear algebra, a skew-symmetric (or antisymmetric or antimetric) matrix is a
square matrix A whose transpose is also its negative; that is, it satisfies the
equation A^T = -A, or in component form, aji = -aij for all i and j.

Compare this with a symmetric matrix, whose transpose is the same as the matrix
(A^T = A), or an orthogonal matrix, the transpose of which is equal to its
inverse (A^T = A^-1).

The following matrix is neither symmetric nor skew-symmetric:

    [1 2]
    [0 1]

Every diagonal matrix is symmetric, since all off-diagonal entries are zero. Similarly,
each diagonal element of a skew-symmetric matrix must be zero, since each is its own
negative.

Orthogonal matrix
In linear algebra, an orthogonal matrix is a square matrix with real entries whose
columns (and rows) are orthogonal unit vectors (i.e., orthonormal). Because the columns
are unit vectors in addition to being orthogonal, some people use the term orthonormal
to describe such matrices.

Equivalently, a matrix Q is orthogonal if its transpose is equal to its inverse:
Q^T = Q^-1; alternatively, Q^T Q = Q Q^T = I.

(OR)

Definition: An n x n matrix A is called an orthogonal matrix whenever A^T A = I.

EXAMPLES:

    [-1  0]    [1  0]    [-1 0]    [cosθ -sinθ]
    [ 0 -1],   [0 -1],   [ 0 1],   [sinθ  cosθ]
Conjugate transpose

In mathematics, the conjugate transpose, Hermitian transpose, or adjoint matrix of an m-by-
n matrix A with complex entries is the n-by-m matrix A* obtained from A by taking
the transpose and then taking the complex conjugate of each entry (i.e. negating the imaginary
parts but not the real parts). The conjugate transpose is formally defined by

    (A*)ij = conj(Aji),

where the subscripts denote the i,j-th entry, for 1 ≤ i ≤ n and 1 ≤ j ≤ m, and conj denotes the
scalar complex conjugate. (The complex conjugate of a + bi, where a and b are real, is a - bi.)

This definition can also be written as A* = (conj(A))^T = conj(A^T), where A^T denotes the
transpose and conj(A) denotes the matrix with complex conjugated entries.

Other names for the conjugate transpose of a matrix are Hermitian conjugate and transjugate.
The conjugate transpose of a matrix A can be denoted by any of these symbols:

• A* or A^H, commonly used in linear algebra

• A† (sometimes pronounced "A dagger"), universally used in quantum mechanics
• A^+, although this symbol is more commonly used for the Moore-Penrose
pseudoinverse

In some contexts, A* denotes only the matrix with complex conjugated entries, and the
conjugate transpose is then denoted by (A*)^T or (A^T)*.

EXAMPLE:

    If A = [1   2+i]    then A* = [1    -3i]
           [3i  4  ],             [2-i   4 ]
Hermitian matrix

A Hermitian matrix (or self-adjoint matrix) is a square matrix with complex entries which is
equal to its own conjugate transpose - that is, the element in the ith row and jth column is equal
to the complex conjugate of the element in the jth row and ith column, for all indices i and j:

    aij = conj(aji).

If the conjugate transpose of a matrix is denoted by A*, then the Hermitian property can be
written concisely as A = A*.

Hermitian matrices can be understood as the complex extension of real symmetric matrices.

For example,

    [2    2+i]
    [2-i  3  ]

is a Hermitian matrix.

Skew-Hermitian matrix
In linear algebra, a square matrix with complex entries is said to be skew-Hermitian or
antihermitian if its conjugate transpose is equal to its negative. That is, the matrix A is
skew-Hermitian if it satisfies the relation

    A* = -A,

where A* denotes the conjugate transpose of A. In component form, this means that

    conj(aji) = -aij

for all i and j, where aij is the i,j-th entry of A and conj denotes complex
conjugation.

Skew-Hermitian matrices can be understood as the complex versions of real skew-
symmetric matrices, or as the matrix analogue of the purely imaginary numbers.

Unitary matrix
In mathematics, a unitary matrix is an n by n complex matrix U satisfying the condition

    U* U = U U* = I_n,

where I_n is the identity matrix in n dimensions and U* is the conjugate transpose (also
called the Hermitian adjoint) of U. Note this condition says that a matrix U is unitary if
and only if it has an inverse which is equal to its conjugate transpose: U^-1 = U*.

A unitary matrix in which all entries are real is an orthogonal matrix. Just as an
orthogonal matrix G preserves the (real) inner product of two real vectors,

    <Gx, Gy> = <x, y>,

so also a unitary matrix U satisfies

    <Ux, Uy> = <x, y>

for all complex vectors x and y, where <·,·> now stands for the standard inner product
on C^n.
UNIT-IV

SOLUTIONS OF NON-LINEAR SYSTEMS


• Solution of Algebraic and Transcendental Equations

1. Bisection Method
2. Method of False Position
3. The Iteration Method
4. Newton Raphson Method

• Interpolation

- Finite Differences
- Forward Differences
- Backward Differences
- Central Differences

• Newton’s Forward Interpolation Formula

• Newton’s Backward Interpolation Formula

• Gauss Forward Interpolation Formula

• Gauss Backward Interpolation Formula

• Lagrange’s Interpolation Formula

• Spline Interpolation and Cubic Splines

• Summary
Summary

Solution of algebraic and transcendental equations

1. The numerical methods to find the roots of f(x)=0

(i) Bisection method:

If a function f(x) is continuous between a and b, and
f(a) & f(b) are of opposite sign, then there exists at least one root
between a and b. The approximate value of the root between them is
x0 = (a + b)/2.

If f(x0) = 0, then x0 is the exact root of f(x) = 0.

If f(x0) ≠ 0, then the root lies either in [a, (a+b)/2] or in
[(a+b)/2, b], depending on which subinterval shows a sign change.
Bisect that interval and repeat the same process until the
root is obtained to the desired accuracy. (A sketch follows.)
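A minimal Python sketch of the bisection loop (the test equation, bracket and
tolerance are illustrative):

def bisect(f, a, b, tol=1e-8):
    """Bisection: f continuous on [a, b] with f(a), f(b) of opposite sign."""
    assert f(a) * f(b) < 0, "root not bracketed"
    while b - a > tol:
        x0 = (a + b) / 2              # midpoint approximation
        if f(a) * f(x0) <= 0:         # sign change in [a, x0]
            b = x0
        else:                         # sign change in [x0, b]
            a = x0
    return (a + b) / 2

print(bisect(lambda x: x**3 - x - 1, 1, 2))   # root ≈ 1.3247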

(ii) Method of false position (regula falsi method):
This is another method to find a root of f(x) = 0. In this method
we choose two points a and b with f(a), f(b) of opposite sign, and take
the point where the chord joining (a, f(a)) and (b, f(b)) crosses the
x-axis (set y = 0) as the approximate root:
x1 = [a f(b) - b f(a)] / [f(b) - f(a)].

Repeat the same process until the root is obtained to the desired
accuracy.
(iii) Iteration method:
If f(a) and f(b) are of opposite sign, a root lies between a and b, and
x0 = (a + b)/2 can serve as the initial approximation.
We can use this method if f(x) = 0 can be expressed as
x = Φ(x) with |Φ'(x)| < 1 near the root. Then
the successive approximate roots are given by
xn = Φ(x(n-1)), n = 1, 2, ...
(iv) Newton-Raphson method: The successive approximate roots are
given by x(n+1) = xn - f(xn)/f'(xn), n = 0, 1, 2, ...,

provided that the initial approximation x0 is chosen sufficiently

close to a root of f(x) = 0. (A sketch follows.)
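A minimal Python sketch of the Newton-Raphson iteration (the test equation and
starting point are illustrative):

def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """x_{n+1} = x_n - f(x_n)/f'(x_n), starting near the root."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# root of x³ - x - 1 = 0, with f'(x) = 3x² - 1, starting at x0 = 1.5
print(newton_raphson(lambda x: x**3 - x - 1, lambda x: 3*x**2 - 1, 1.5))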

2. Interpolation

(i) Newton's forward interpolation formula:

Let y = f(x) be the function which takes the values y0, y1, y2, ..., yn
corresponding to the equally spaced values x0, x1, x2, ..., xn of x,
with h as the interval length between two consecutive points.
Newton's forward interpolation formula is

f(x0 + ph) = yp = y0 + pΔy0 + [p(p-1)]/2! Δ²y0 + [p(p-1)(p-2)]/3! Δ³y0 + ...
+ [p(p-1)(p-2)...(p-n+1)]/n! Δⁿy0,

where x = x0 + ph, i.e. p = (x - x0)/h.

This is also called the Newton-Gregory forward interpolation formula;
a small sketch follows.
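A minimal Python sketch of the formula, building the forward difference table
with numpy (the sample data are illustrative):

import numpy as np

def newton_forward(xs, ys, x):
    """Newton-Gregory forward interpolation for equally spaced xs."""
    ys = np.array(ys, dtype=float)
    n = len(ys)
    h = xs[1] - xs[0]
    p = (x - xs[0]) / h
    diffs = [ys]                            # column k holds Δ^k y0, y1, ...
    for _ in range(n - 1):
        diffs.append(np.diff(diffs[-1]))
    result, term = diffs[0][0], 1.0
    for k in range(1, n):
        term *= (p - (k - 1)) / k           # builds p(p-1)...(p-k+1)/k!
        result += term * diffs[k][0]        # ... times Δ^k y0
    return result

xs, ys = [0, 1, 2, 3], [1, 2, 9, 28]        # samples of y = x³ + 1
print(newton_forward(xs, ys, 1.5))           # 4.375, exact for a cubic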
(ii) Newton's backward interpolation formula:

yp = yn + p∇yn + [p(p+1)]/2! ∇²yn + [p(p+1)(p+2)]/3! ∇³yn + ...,

where p = (x - xn)/h.
(iii) Gauss forward interpolation formula:
Using central differences (δ as operator), the Gauss forward
interpolation formula is
yp = y0 + p δy(1/2) + [p(p-1)]/2! δ²y0 + [(p+1)p(p-1)]/3! δ³y(1/2)
+ [(p+1)p(p-1)(p-2)]/4! δ⁴y0 + ...,

where p = (x - x0)/h.

(iv) Gauss backward interpolation formula:

yp = y0 + pΔy(-1) + [(p+1)p]/2! Δ²y(-1) + [(p+1)p(p-1)]/3! Δ³y(-2)

+ [(p+2)(p+1)p(p-1)]/4! Δ⁴y(-2) + ...
(v) Lagrange's interpolation formula:
Let y0, y1, y2, ..., yn be the values of y = f(x) corresponding to
x0, x1, x2, ..., xn (not necessarily equispaced).

Lagrange's interpolation formula is

y = f(x) = [(x-x1)(x-x2)...(x-xn)] / [(x0-x1)(x0-x2)...(x0-xn)] y0
         + [(x-x0)(x-x2)...(x-xn)] / [(x1-x0)(x1-x2)...(x1-xn)] y1
         + ...
         + [(x-x0)(x-x1)...(x-x(n-1))] / [(xn-x0)(xn-x1)...(xn-x(n-1))] yn.
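A minimal Python sketch of Lagrange's formula (it works for unequally spaced
points; the data below are illustrative):

def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)   # i-th Lagrange basis factor
        total += term
    return total

print(lagrange([0, 1, 4], [0, 1, 16], 2.0))    # y = x² sampled unevenly -> 4.0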
3. Spline interpolation and cubic splines

Let the given interval [a, b] be subdivided into n subintervals

[x0, x1], [x1, x2], ..., [x(n-1), xn], where a = x0 < x1 < x2 < ... < xn = b;
the points x0, x1, ..., xn are called nodes.
(i) Spline function:
A spline function of degree n with nodes x0, x1, x2, ..., xn is a
function F(x) satisfying the following properties:

(a) F(xi) = f(xi), i = 0, 1, 2, ..., n (interpolation conditions)

(b) in each subinterval [x(i-1), xi], 1 ≤ i ≤ n, F(x) is a polynomial of
degree n
(c) F(x) and its first (n-1) derivatives are continuous on [a, b]

(ii) A cubic spline function has the following properties:

(a) F(xi) = f(xi), i = 0, 1, ..., n

(b) in each subinterval [x(i-1), xi], 1 ≤ i ≤ n, F(x) is a
third degree (cubic) polynomial
(c) F(x), F'(x), F''(x) are continuous on [a, b]



OR

1) Bisection or Bolzano's method:

i) f(a) and f(b) of opposite sign ⇒ a root c ∈ (a, b); take x0 = (a+b)/2.
ii) If f(x0) = 0, then x0 is the root; otherwise the root lies in whichever
of (a, x0) and (x0, b) has end values of opposite sign.
iii) The second approximation is accordingly x1 = (a+x0)/2 or x1 = (x0+b)/2,
and so on, until the end values repeat to the desired accuracy.

2) Regula falsi method (false position method):

i) f(a) = +ve, f(b) = -ve ⇒ c ∈ (a, b)
ii) Let x1 = [a f(b) - b f(a)] / [f(b) - f(a)].
a) If f(x1) and f(a) are of opposite sign, then
   x2 = [a f(x1) - x1 f(a)] / [f(x1) - f(a)].
b) If f(x1) and f(a) are of the same sign, then
   x2 = [x1 f(b) - b f(x1)] / [f(b) - f(x1)].
Continue until the root repeats to the desired accuracy.

3) Iteration or successive approximation method:

i) f(a) = +ve, f(b) = -ve ⇒ c ∈ (a, b)
ii) Write f(x) = 0 as x = Φ(x), with |Φ'(x)| < 1.
Let x0 = (a+b)/2; then x1 = Φ(x0), x2 = Φ(x1), x3 = Φ(x2),
and so on, until successive values repeat to the desired accuracy.

4) Newton-Raphson method (method of tangents):

i) f(a) = +ve, f(b) = -ve ⇒ c ∈ (a, b)
ii) Let x0 = (a+b)/2; then
x1 = x0 - f(x0)/f'(x0),
x2 = x1 - f(x1)/f'(x1),
and so on, until successive values repeat to the desired accuracy.

FINITE DIFFERENCES:

1. Forward difference operator: Δf(x) = f(x+h) - f(x) ⇒ Δy0 = y1 - y0

2. Backward difference operator: ∇f(x) = f(x) - f(x-h) ⇒ ∇y1 = y1 - y0

3. Central difference operator: δf(x) = f(x+h/2) - f(x-h/2) ⇒ δy(1/2) = y1 - y0

4. Shift operator: Ef(x) = f(x+h) ⇒ E yx = y(x+h)

5. Inverse shift operator: E⁻¹f(x) = f(x-h)

6. Averaging/mean operator: μf(x) = [f(x+h/2) + f(x-h/2)]/2
   ⇒ μyx = 1/2 [y(x+h/2) + y(x-h/2)]

7. E = 1 + Δ

8. μ = 1/2 [E^(1/2) + E^(-1/2)]

9. δ = E^(1/2) - E^(-1/2)

10. Δ = E∇ = ∇E = δE^(1/2)

11. δ² = Δ∇ = ∇Δ

12. 1 = (1+Δ)(1-∇)

(INTERPOLATION WITH EQUAL & UNEQUAL INTERVALS)

I. 1. Newton-Gregory forward interpolation formula:

y = f(x) = y0 + pΔy0 + [p(p-1)]/2! Δ²y0 + [p(p-1)(p-2)]/3! Δ³y0 + ...,
where p = (x - x0)/h.

2. Newton-Gregory backward interpolation formula:

y = f(x) = yn + p∇yn + [p(p+1)]/2! ∇²yn + [p(p+1)(p+2)]/3! ∇³yn + ...,
where p = (x - xn)/h.
II. Central difference interpolation formulas:

1. Gauss forward:

yp = y0 + pΔy0 + [p(p-1)]/2! Δ²y(-1) + [(p+1)p(p-1)]/3! Δ³y(-1)
+ [(p+1)p(p-1)(p-2)]/4! Δ⁴y(-2) + ...,
where p = (x - x0)/h.
(This uses the differences y0, Δy0, Δ²y(-1), Δ³y(-1), Δ⁴y(-2), ... along a
zig-zag path through the difference table starting at y0.)

2. Gauss backward:

yp = y0 + pΔy(-1) + [(p+1)p]/2! Δ²y(-1) + [(p+1)p(p-1)]/3! Δ³y(-2)
+ [(p+2)(p+1)p(p-1)]/4! Δ⁴y(-2) + ...,
where p = (x - x0)/h.
(This uses the differences Δy(-1), Δ²y(-1), Δ³y(-2), Δ⁴y(-2), ...)

3. Stirling's:

yp = y0 + p [Δy0 + Δy(-1)]/2 + p²/2! Δ²y(-1)
+ [p(p²-1)]/3! [Δ³y(-1) + Δ³y(-2)]/2 + [p²(p²-1)]/4! Δ⁴y(-2) + ...
(Stirling's formula is the mean of the Gauss forward and backward formulas.)

4. Lagrange's interpolation (unequal intervals):

y = f(x) = [(x-x1)(x-x2)...(x-xn)] / [(x0-x1)(x0-x2)...(x0-xn)] f(x0)
         + [(x-x0)(x-x2)...(x-xn)] / [(x1-x0)(x1-x2)...(x1-xn)] f(x1)
         + ...
         + [(x-x0)(x-x1)...(x-x(n-1))] / [(xn-x0)(xn-x1)...(xn-x(n-1))] f(xn)

UNIT-V

Curve Fitting & Numerical Integration

• Curve Fitting

1. Fitting a straight line

2. Fitting Quadratic Polynomial or parabola

• Numerical Differentiation and Numerical Integration

• Trapezoidal Rule

• Simpson’s 1/3 Rule and Simpson’s 3/8 Rule

• Gaussian Integration

• Summary
Summary

1. Curve Fitting

(i) Fitting a straight line

Let Y(x) = ax + b be the straight-line approximation for the data.

The normal equations are

a∑xi² + b∑xi = ∑xiyi

a∑xi + bn = ∑yi, with sums taken over i = 1 to n.

Solving these equations gives a and b.

(ii) Fitting a quadratic polynomial (parabola)

Let Y(x) = ax² + bx + c be the quadratic polynomial.

The normal equations are

a∑xi⁴ + b∑xi³ + c∑xi² = ∑xi²yi

a∑xi³ + b∑xi² + c∑xi = ∑xiyi

a∑xi² + b∑xi + cn = ∑yi, with sums over i = 1 to n.

Solving these equations gives the values of a, b, c. (A sketch follows.)
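A minimal numpy sketch that assembles and solves these three normal equations
(the test data are illustrative):

import numpy as np

def fit_parabola(x, y):
    """Fit y = a x² + b x + c by the normal equations above."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    S = lambda k: np.sum(x**k)                 # power sums Σ x^k
    M = np.array([[S(4), S(3), S(2)],
                  [S(3), S(2), S(1)],
                  [S(2), S(1), len(x)]])
    rhs = np.array([np.sum(x**2 * y), np.sum(x * y), np.sum(y)])
    return np.linalg.solve(M, rhs)             # (a, b, c)

x = np.array([0, 1, 2, 3, 4])
print(fit_parabola(x, 2*x**2 - 3*x + 1))       # ≈ [2, -3, 1]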

2. Numerical differentiation

Derivatives using Newton's forward difference interpolation formula:

(i) [dy/dx] at x = x0 = (1/h)[Δy0 - (1/2)Δ²y0 + (1/3)Δ³y0 - (1/4)Δ⁴y0 + ...]

(ii) [d²y/dx²] at x = x0 = (1/h²)[Δ²y0 - Δ³y0 + (11/12)Δ⁴y0 + ...]

3. Derivatives using Newton's backward interpolation formula:

(i) [dy/dx] at x = xn = (1/h)[∇yn + (1/2)∇²yn + (1/3)∇³yn + ...]

(ii) [d²y/dx²] at x = xn = (1/h²)[∇²yn + ∇³yn + (11/12)∇⁴yn + (5/6)∇⁵yn + ...]

4. Trapezoidal Rule

For the integral I = ∫f(x)dx from a to b,

I = (h/2)[(y0 + yn) + 2(y1 + y2 + ... + y(n-1))],
where yi = f(xi) are the values at the arguments
x0 = a, x1 = x0 + h, ..., xn = x0 + nh = b.

5. Simpson's 1/3 Rule

For the integral I = ∫f(x)dx = ∫y dx from a to b,

I = (h/3)[(y0 + yn) + 4(y1 + y3 + ... + y(n-1)) + 2(y2 + y4 + ... + y(n-2))].

This rule can be applied only when the given interval (a, b) is divided into
an even number of subintervals of length h. (A sketch of both rules follows.)
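A minimal numpy sketch of both composite rules (the integrand is illustrative;
its exact integral on [0, 1] is π/4):

import numpy as np

def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n subintervals (odd or even)."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return (h / 2) * (y[0] + y[-1] + 2 * y[1:-1].sum())

def simpson13(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    assert n % 2 == 0, "Simpson's 1/3 needs an even number of subintervals"
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return (h / 3) * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

f = lambda x: 1 / (1 + x**2)
print(trapezoidal(f, 0, 1, 8), simpson13(f, 0, 1, 8))   # both ≈ 0.785398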

6. Gaussian Integration

The definite integral I = ∫f(x)dx from a to b is expressed as

I = w1 f(x1) + w2 f(x2) + ... + wn f(xn) = ∑ wi f(xi), i from 1 to n,

which is called the Gaussian integration formula. The wi are called

weights and the xi are called abscissae. The weights and abscissae are
symmetric with respect to the midpoint of the interval.

OR

(PART-A) (CURVE FITTING)

1. Fitting a straight line (y = a + bx):

∑y = na + b∑x

∑xy = a∑x + b∑x²

2. i) Parabola (y = a + bx + cx²):

∑y = na + b∑x + c∑x²
∑xy = a∑x + b∑x² + c∑x³

∑x²y = a∑x² + b∑x³ + c∑x⁴

ii) Parabola (y = a + bx²):

∑y = na + b∑x²

∑x²y = a∑x² + b∑x⁴

3. y = ae^(bx): log y = log a + bx log e ⇒ Y = A + Bx,

where Y = log y, A = log a, B = b log e.

4. y = ab^x: log y = log a + x log b ⇒ Y = A + Bx,

where Y = log y, A = log a, B = log b.

5. y = ax^b: log y = log a + b log x ⇒ Y = A + bX,

where X = log x, Y = log y, A = log a.

Weighted least squares approximation:

1. Straight line (y = a0 + a1x): ∑Wy = a0∑W + a1∑Wx

∑Wxy = a0∑Wx + a1∑Wx²

2. Parabola (y = a0 + a1x + a2x²): ∑Wy = a0∑W + a1∑Wx + a2∑Wx²

∑Wxy = a0∑Wx + a1∑Wx² + a2∑Wx³

∑Wx²y = a0∑Wx² + a1∑Wx³ + a2∑Wx⁴

(NUMERICAL DIFFERENTIATION)
1. Newton forward:

y = y0 + pΔy0 + [p(p-1)]/2! Δ²y0 + [p(p-1)(p-2)]/3! Δ³y0 + ...

y' = (1/h)[Δy0 + (2p-1)/2 Δ²y0 + (3p²-6p+2)/6 Δ³y0
+ (4p³-18p²+22p-6)/24 Δ⁴y0 + ...]

y'' = (1/h²)[Δ²y0 + (p-1)Δ³y0 + (6p²-18p+11)/12 Δ⁴y0 + ...]

y''' = (1/h³)[Δ³y0 - (3/2)Δ⁴y0 + (7/4)Δ⁵y0 - ...], where p = (x - x0)/h.

2. Newton backward:

y = yn + p∇yn + [p(p+1)]/2! ∇²yn + [p(p+1)(p+2)]/3! ∇³yn + ...

y' = (1/h)[∇yn + (2p+1)/2 ∇²yn + (3p²+6p+2)/6 ∇³yn + ...]

y'' = (1/h²)[∇²yn + (p+1)∇³yn + (6p²+18p+11)/12 ∇⁴yn + ...],
where p = (x - xn)/h.

3. Stirling's:

y' = (1/h)[(Δy0 + Δy(-1))/2 + pΔ²y(-1) + (3p²-1)/6 (Δ³y(-1) + Δ³y(-2))/2
+ (2p³-p)/12 Δ⁴y(-2) + ...]

(NUMERICAL INTEGRATION)

General numerical integration formula (Newton-Cotes quadrature formula):

∫y dx over [x0, x0 + nh] =
nh[y0 + (n/2)Δy0 + n(2n-3)/12 Δ²y0 + n(n-2)²/24 Δ³y0
+ (n⁴/5 - 3n³/2 + 11n²/3 - 3n)/4! Δ⁴y0
+ (n⁵/6 - 2n⁴ + 35n³/4 - 50n²/3 + 12n)/5! Δ⁵y0
+ (n⁶/7 - 15n⁵/6 + 17n⁴ - 225n³/4 + 274n²/3 - 60n)/6! Δ⁶y0 + ...]

1. Trapezoidal rule (n=1):

∫y dx = (h/2)[(y0 + yn) + 2(y1 + y2 + ... + y(n-1))]

Note: the number of subintervals may be odd or even.

2. Simpson 1/3 Rule (n=2): -

∫ y dx = (h/3) [( y0+yn) + 4(y1+y3+y5+….) + 2(y2+y4+y6+……)]


Note: Number of subintervals should be even.

3. Simpson 3/8 Rule (n=3)

∫ y dx = (3h/8) [ (y0+yn) + 3(y1+y2+y4+y5+y7+y8+……)+2(y3+y6+y9+….)]


Note: Sub intervals should be multiples of 3.

4. Boole’s Rule (n=4): -

∫ y dx = (2h/45) [7y0+32y1+12y2+32y3+14y4+32y5+12y6+……….]
Note: Subintervals should be multiples of 4.

5. Weddle's rule (n=6):

∫y dx = (3h/10)[(y0 + yn) + (y2 + y4 + y8 + y10 + y14 + ... + y(n-4) + y(n-2))
+ 5(y1 + y5 + y7 + y11 + ... + y(n-5) + y(n-1)) + 6(y3 + y9 + y15 + ... + y(n-3))
+ 2(y6 + y12 + ... + y(n-6))]

Note: the number of subintervals should be a multiple of 6.


UNIT-VI

Numerical solutions of Initial Value Problems in


Ordinary Differential Equations

• Numerical Solution of Ordinary Differential equations

• Taylor’s series method

• Picard’s method

• Euler’s method

• Modified Euler’s method

• Runge – kutta method


• Predictor – corrector method

• Adams-Bashforth method

• Summary

Summary

The most important methods of solving ordinary differential equations
numerically are
1. Taylor's series method

2. Picard's method

3. Euler's (modified) method

4. Runge-Kutta method

5. Predictor-corrector methods

1. Taylor's series method: The numerical solution of the differential
equation

dy/dx = f(x,y), with the given initial condition y(x0) = y0, is

y(n+1) = yn + (h/1!)yn' + (h²/2!)yn'' + (h³/3!)yn''' + ...

2. Picard's method

To solve the differential equation dy/dx = f(x,y), y(x0) = y0,

Picard's method of successive approximations uses
y(x) = y0 + ∫ from x0 to x of f(x,y) dx, which is called an integral equation.

It is solved by a process of successive approximations y(1)(x), y(2)(x), ...

The first approximation: y(1)(x) = y0 + ∫ from x0 to x of f(x, y0) dx

The second approximation: y(2)(x) = y0 + ∫ from x0 to x of f(x, y(1)) dx

OR

Consider dy/dx = f(x,y) with the initial condition y(x0) = y0.

1. Taylor's: y(x) = y0 + (x-x0)y0' + [(x-x0)²/2!] y0'' + [(x-x0)³/3!] y0''' + ...

2. Picard's: y1 = y0 + ∫ f(x, y0) dx

y2 = y0 + ∫ f(x, y1) dx

y3 = y0 + ∫ f(x, y2) dx

Similarly yn = y0 + ∫ f(x, y(n-1)) dx

3. Euler’s :- y1 = y0 +h f(x0, y0)

y2 = y1+ h f(x1,y1)

y3 = y2+h f(x2,y2)

Similarly yn+ 1 = yn + h f(xn,yn)

4. Runge-Kutta order 4:

y1 = y0 + (1/6)[k1 + 2k2 + 2k3 + k4],
where k1 = h f(x0, y0), k2 = h f(x0 + h/2, y0 + k1/2),

k3 = h f(x0 + h/2, y0 + k2/2), k4 = h f(x0 + h, y0 + k3).

In general, y(n+1) = yn + (1/6)[k1 + 2k2 + 2k3 + k4],

where k1 = h f(xn, yn), k2 = h f(xn + h/2, yn + k1/2),

k3 = h f(xn + h/2, yn + k2/2), k4 = h f(xn + h, yn + k3), n = 0, 1, 2, ...
(A runnable sketch follows.)
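A minimal Python sketch of the order-4 Runge-Kutta step (the test problem
dy/dx = y, y(0) = 1 is illustrative; y(1) = e):

def rk4(f, x0, y0, h, steps):
    """Advance dy/dx = f(x, y) from (x0, y0) by 'steps' steps of size h."""
    x, y = x0, y0
    for _ in range(steps):
        k1 = h * f(x, y)
        k2 = h * f(x + h/2, y + k1/2)
        k3 = h * f(x + h/2, y + k2/2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2*k2 + 2*k3 + k4) / 6
        x += h
    return y

print(rk4(lambda x, y: y, 0.0, 1.0, 0.1, 10))   # ≈ 2.71828 (= e)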

5. Milne's predictor-corrector:

Predictor: y4 = y0 + (4h/3)[2y1' - y2' + 2y3'], where yk' = f(xk, yk), k = 0, 1, 2, ...

Corrector: y4 = y2 + (h/3)[y2' + 4y3' + y4'], where yk' = f(xk, yk), k = 0, 1, 2, ...

6. Adams-Moulton predictor-corrector:

Predictor: y4 = y3 + (h/24)[55y3' - 59y2' + 37y1' - 9y0'],

where yk' = f(xk, yk), k = 0, 1, 2, 3, ...

Corrector: y4 = y3 + (h/24)[9y4' + 19y3' - 5y2' + y1'],

where yk' = f(xk, yk), k = 0, 1, 2, 3, ...


UNIT-VII

FOURIER SERIES

• Periodic Functions

• Even and odd Function

• Fourier Series

• Euler’s Formulae

• Fourier Series in an arbitrary Interval (change of interval)

• Fourier Series of Even and odd Functions

• Half-Range Fourier Sine and Cosine Series


• Summary

Summary
1. Periodic functions

Definition: A function f: R → R is said to be periodic if there exists a positive

number T such that f(x+T) = f(x) for all x in R; T is called a period of
f(x).

2. Even and odd functions

(i) A function ƒ (x) is said to be even if ƒ (-x)= ƒ (x)


(ii) A function ƒ (x) is said to be odd if ƒ (-x)= -ƒ (x)

3. Definition: The Fourier series for f(x) in the interval (c, c+2π) is

f(x) = a0/2 + ∑ [an cos nx + bn sin nx], n from 1 to ∞,

where a0 = (1/π) ∫ f(x) dx, limits from c to c+2π,

an = (1/π) ∫ f(x) cos nx dx, from c to c+2π,

bn = (1/π) ∫ f(x) sin nx dx, from c to c+2π, where c is a constant.

a0, an, bn are called the Fourier coefficients (Fourier constants);

these formulae are called Euler's formulae.
Note:

(i) If c = 0, then the interval becomes (0, 2π).

The Fourier coefficients are

a0 = (1/π) ∫ f(x) dx, limits from 0 to 2π

an = (1/π) ∫ f(x) cos nx dx, limits from 0 to 2π

bn = (1/π) ∫ f(x) sin nx dx, limits from 0 to 2π

(ii) If c = -π, then the interval becomes (-π, π).

The Fourier coefficients are

a0 = (1/π) ∫ f(x) dx, limits from -π to π

an = (1/π) ∫ f(x) cos nx dx, limits from -π to π

bn = (1/π) ∫ f(x) sin nx dx, limits from -π to π

(These coefficients can also be evaluated numerically, as in the sketch below.)
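A minimal Python sketch evaluating Euler's formulae on (-π, π) numerically,
assuming scipy is available; the test function f(x) = x is illustrative (it is
odd, so a0 = an = 0 and bn = 2(-1)^(n+1)/n):

import numpy as np
from scipy.integrate import quad

def fourier_coefficients(f, n_max):
    """Numerically evaluate a0, an, bn on (-π, π)."""
    a0 = quad(f, -np.pi, np.pi)[0] / np.pi
    a = [quad(lambda x, n=n: f(x) * np.cos(n * x), -np.pi, np.pi)[0] / np.pi
         for n in range(1, n_max + 1)]
    b = [quad(lambda x, n=n: f(x) * np.sin(n * x), -np.pi, np.pi)[0] / np.pi
         for n in range(1, n_max + 1)]
    return a0, a, b

a0, a, b = fourier_coefficients(lambda x: x, 3)
print(a0, a)    # ≈ 0 and [0, 0, 0]
print(b)        # ≈ [2, -1, 0.667]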

4. Dirichlet's conditions:

A function f(x) defined in the interval a1 ≤ x ≤ a2 can be
represented as a Fourier series if f(x) satisfies the following
conditions in the interval:

(i) f(x) is single valued and finite
(ii) f(x) has a finite number of discontinuities
(iii) f(x) has a finite number of maxima and minima.
Then the Fourier series converges to f(x) at all points where f(x) is
continuous, and converges to the average of the left and right limits
of f(x) at each point of discontinuity of f(x).

5. Change of interval (arbitrary interval):

If a function f(x) is defined in (c, c+2l), the Fourier expansion of
f(x) is

f(x) = a0/2 + ∑ [an cos(nπx/l) + bn sin(nπx/l)], n from 1 to ∞,

a0 = (1/l) ∫ f(x) dx, limits from c to c+2l

an = (1/l) ∫ f(x) cos(nπx/l) dx, limits from c to c+2l

bn = (1/l) ∫ f(x) sin(nπx/l) dx, limits from c to c+2l

Note: If c = 0, the interval becomes (0, 2l);

if c = -l, the interval becomes (-l, l).

6. Fourier series of even and odd functions:

(i) If f(x) is an even function in (-π, π), the Fourier series of
f(x) is

f(x) = a0/2 + ∑ [an cos nx], n from 1 to ∞,

where a0 = (2/π) ∫ f(x) dx, limits from 0 to π,

an = (2/π) ∫ f(x) cos nx dx, from 0 to π.

Here, since f(x) is even, the Fourier coefficients bn = 0.

(ii) If f(x) is an odd function in (-π, π), the Fourier series of
f(x) is

f(x) = ∑ [bn sin nx],

where bn = (2/π) ∫ f(x) sin nx dx, limits from 0 to π.

Here the coefficients a0 = 0, an = 0.

The interval (-l, l) is handled similarly with cos(nπx/l) and sin(nπx/l).

7. Half range Fourier series:

(i) Half range Fourier sine series for f(x) in (0, π):

f(x) = ∑ [bn sin nx],

where bn = (2/π) ∫ f(x) sin nx dx, limits from 0 to π.

(ii) Half range Fourier sine series for f(x) in (0, l):

f(x) = ∑ [bn sin(nπx/l)],

where bn = (2/l) ∫ f(x) sin(nπx/l) dx, limits from 0 to l.

(iii) Half range Fourier cosine series for f(x) in (0, π):

f(x) = a0/2 + ∑ [an cos nx], n from 1 to ∞,

where a0 = (2/π) ∫ f(x) dx, limits from 0 to π,

an = (2/π) ∫ f(x) cos nx dx, from 0 to π.

(iv) Half range Fourier cosine series for f(x) in (0, l):

f(x) = a0/2 + ∑ [an cos(nπx/l)], n from 1 to ∞,

where a0 = (2/l) ∫ f(x) dx, limits from 0 to l,

an = (2/l) ∫ f(x) cos(nπx/l) dx, limits from 0 to l.


UNIT-VIII

PARTIAL DIFFERENTIAL EQUATIONS

• Formation of Partial Differential Equation by eliminating arbitrary


constants and arbitrary functions

• First Order Linear (Lagrange’s) Equations

• Non-Linear (Standard Types) Equations

• Method of Separation of variables for second order

• One Dimensional Wave equation

• One Dimensional Heat equation

• Laplace’s equation

• Two Dimensional Wave equation


• Summary

Summary
1. Formation of partial differential equations by the elimination of
arbitrary constants and arbitrary functions.

(a) Elimination of arbitrary constants:

Let f(x, y, z, a, b) = 0 ......(1)

be the equation, where a, b are arbitrary constants.
Differentiating partially w.r.t. x and y (writing p = ∂z/∂x, q = ∂z/∂y):

∂f/∂x + (∂f/∂z)(∂z/∂x) = 0, i.e. ∂f/∂x + p ∂f/∂z = 0 ......(2)

∂f/∂y + (∂f/∂z)(∂z/∂y) = 0, i.e. ∂f/∂y + q ∂f/∂z = 0 ......(3)

Eliminating the two constants a, b from (1), (2), (3) gives an

equation of the form Φ(x, y, z, p, q) = 0, which is a first order
P.D.E.

If the number of constants is more than the number of

independent variables, then the result of eliminating the
constants is a P.D.E. of order higher than the first.

(b) Elimination of arbitrary functions:

Let Φ(u, v) = 0 ......(1)
be the equation, where u, v are functions of x, y, z and Φ is an
arbitrary function.
Differentiating (1) partially with respect to x and y gives two
relations in ∂Φ/∂u and ∂Φ/∂v. Eliminating ∂Φ/∂u and ∂Φ/∂v from
these (with p = ∂z/∂x, q = ∂z/∂y) yields

Pp + Qq = R, the required P.D.E.,

where

P = (∂u/∂y)(∂v/∂z) - (∂u/∂z)(∂v/∂y)

Q = (∂u/∂z)(∂v/∂x) - (∂u/∂x)(∂v/∂z)

R = (∂u/∂x)(∂v/∂y) - (∂u/∂y)(∂v/∂x)

2. Lagrange's linear P.D.E.

The P.D.E. Pp + Qq = R ......(1),

where P, Q, R are functions of x, y, z, is called Lagrange's first order
partial differential equation.

To solve (1),
first write Lagrange's auxiliary (subsidiary) equations

dx/P = dy/Q = dz/R ......(2)

The auxiliary equations give two independent solutions u = c1 and v = c2,

where u, v are functions of x, y, z.
From these two solutions, the general solution is Φ(u, v) = 0.

3. Non-linear partial differential equations of order one:

(i) Complete integral:

If F(x, y, z, p, q) = 0 ......(1)
is a non-linear partial differential equation of first order, then a
solution
Φ(x, y, z, a, b) = 0 ......(2)
which contains as many arbitrary constants as there are independent
variables is called a complete integral of (1).

(ii) Particular integral: A particular integral of (1) is obtained by

giving particular values to the constants a, b in the complete integral.

(iii) Singular integral: Differentiate the complete integral
Φ(x, y, z, a, b) = 0 ......(2)
partially w.r.t. a and b, and equate to zero:

∂Φ/∂a = 0 ......(3)
∂Φ/∂b = 0 ......(4)

Elimination of a and b from (2), (3), (4) gives an equation of the
form f(x, y, z) = 0, called the singular integral.

There are four standard forms of non-linear first order partial

differential equations.

(i) Standard Form I:

An equation of the form f(p, q) = 0
(i.e. an equation in terms of p and q only) is called standard type I.
The solution is of the form z = ax + by + c ......(1),
since then p = ∂z/∂x = a and q = ∂z/∂y = b.
Replacing p by a and q by b in the given P.D.E.:

f(a, b) = 0 ⇒ b = Φ(a)

Substituting b = Φ(a) in (1),

z = ax + Φ(a)y + c is the complete integral.

(ii) Standard Form II:

An equation of the form f(x, y, p, q) = 0 ......(1) is called standard
type II.

Arrange (1) in the separable form f1(x, p) = f2(y, q) = a (constant).

From these two equations we get p = Φ1(x, a) and q = Φ2(y, a).

Substituting in dz = p dx + q dy and integrating,

z = ∫Φ1(x, a) dx + ∫Φ2(y, a) dy + c is the complete integral.


(iii) Standard Form III:
An equation of the form f(z, p, q) = 0 ......(1).
Substitute q = ap ......(2)
in (1) and solve for p:

p = Φ(z) ......(3)
From (2), (3): q = aΦ(z) ......(4)

Substituting these values of p, q in

dz = p dx + q dy:
dz = Φ(z) dx + aΦ(z) dy,

so dz/Φ(z) = dx + a dy.

Integrating, F(z) = x + ay + c is the complete integral.

(iv) Standard Form IV (Clairaut's equation):

A P.D.E. of the form z = px + qy + f(p, q) ......(1)

is called Clairaut's equation.

The complete integral of (1) is

z = ax + by + f(a, b) ......(2)

To find the singular integral, differentiate (2) partially with respect
to a and b:

0 = x + ∂f/∂a ......(3)
0 = y + ∂f/∂b ......(4)

Eliminating a, b from (2), (3), (4) gives the singular integral;
a small symbolic example follows.
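For instance, for the illustrative Clairaut equation z = px + qy + pq, this
minimal sympy sketch recovers the singular integral z = -xy:

import sympy as sp

x, y, a, b = sp.symbols('x y a b')
z = a*x + b*y + a*b                 # complete integral of z = px + qy + pq
# equations (3) and (4): dz/da = 0 and dz/db = 0
sol = sp.solve([sp.diff(z, a), sp.diff(z, b)], [a, b])   # a = -y, b = -x
print(z.subs(sol))                  # -x*y, the singular integral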

4. Applications of P.D.E.s (method of separation of variables):

(1) One-dimensional wave equation: ∂²u/∂t² = c² ∂²u/∂x²
(2) Two-dimensional wave equation:
∂²u/∂x² + ∂²u/∂y² = (1/c²) ∂²u/∂t²
(3) One-dimensional heat equation: ∂u/∂t = c² ∂²u/∂x²
(4) Laplace's equation: ∂²u/∂x² + ∂²u/∂y² = 0
Problems which must satisfy certain initial and boundary conditions are
called boundary value problems. A suitable method for solving such
problems is the method of separation of variables, also known as the
product method.
