Scilab 5 B
By
Gilberto E. Urroz, Ph.D., P.E.
Distributed by
i nfoClearinghouse.com
A "zip" file containing all of the programs in this document (and other
SCILAB documents at InfoClearinghouse.com) can be downloaded at the
following site:
http://www.engineering.usu.edu/cee/faculty/gurro/Software_Calculators/Scil
ab_Docs/ScilabBookFunctions.zip
Sparse matrices
Creating sparse matrices
Getting information about a sparse matrix
Sparse matrix with unit entries
Sparse identity matrices
Sparse matrix with random entries
Sparse matrices with zero entries
Visualizing sparse matrices
Factorization of sparse matrices
Solution to system of linear equations involving sparse matrices
Solution to system of linear equations using the inverse of a sparse matrix
Download at InfoClearinghouse.com
X = -1, Y = 1, Z = 2.

In matrix form, Ax = b, the system is written with

    A = [ 2   4   6 ]        [ X ]        [ 14 ]
        [ 3  -2   1 ],   x = [ Y ],   b = [ -3 ].
        [ 4   2  -1 ]        [ Z ]        [ -4 ]
To obtain a solution to the system matrix equation using Gaussian elimination, we first create
what is known as the augmented matrix corresponding to A, i.e.,
    Aaug = [ 2   4   6   14 ]
           [ 3  -2   1   -3 ].
           [ 4   2  -1   -4 ]
The matrix Aaug is nothing more than the original matrix A with a new column, corresponding to the
elements of the vector b, added (i.e., augmented) to the right of the rightmost column of A.
Once the augmented matrix is put together, we can proceed to perform row operations on it
that will reduce the original A matrix into an upper-triangular matrix similar to what we did
with the system of equations shown earlier. If you perform the forward elimination by hand,
you would write the following:
    Aaug = [ 2   4   6   14 ]   [ 2   4    6   14 ]   [ 2   4   6   14 ]
           [ 3  -2   1   -3 ] ≅ [ 0  -8   -8  -24 ] ≅ [ 0  -8  -8  -24 ].
           [ 4   2  -1   -4 ]   [ 0  -6  -13  -32 ]   [ 0   0  -7  -14 ]
The symbol ≅ (is congruent to) indicates that what follows is equivalent to the previous
matrix with some row (or column) operations involved.
After the augmented matrix is reduced as shown above, we can proceed to perform the
backward substitution by converting the augmented matrix into a system of equations and
solving for x3, x2, and x1 in that order, as performed earlier.
    A = [ a11   a12   ...   a1m ]        [ x1 ]        [ b1 ]
        [ a21   a22   ...   a2m ]    x = [ x2 ],   b = [ b2 ]
        [  :     :    ...    :  ]        [  : ]        [  : ]
        [ an1   an2   ...   anm ],       [ xm ]        [ bn ]

    Aaug = [ a11   a12   ...   a1m   b1 ]
           [ a21   a22   ...   a2m   b2 ]
           [  :     :    ...    :     : ].
           [ an1   an2   ...   anm   bn ]
Thus, the augmented matrix can be referred to as (Aaug)nx(n+1) = [aij], with ai,n+1 = bi, for
i = 1, 2, ..., n.
__________________________________________________________________________________
Augmenting a matrix in SCILAB
In SCILAB augmenting a matrix A by a column vector b is straightforward. For example, for the
linear system introduced earlier, namely,
    A = [ 2   4   6 ]        [ 14 ]
        [ 3  -2   1 ],   b = [ -3 ],
        [ 4   2  -1 ]        [ -4 ]
-->A = [2,4,6;3,-2,1;4,2,-1], b = [14;-3;-4]
 A  =
!   2.    4.    6. !
!   3.  - 2.    1. !
!   4.    2.  - 1. !
 b  =
!   14. !
! -  3. !
! -  4. !

-->A_aug = [A b]
 A_aug  =
!   2.    4.    6.    14. !
!   3.  - 2.    1.  -  3. !
!   4.    2.  - 1.  -  4. !
__________________________________________________________________________________
Algorithm for first step in forward elimination
After the augmented matrix has been created the first elimination pass will fill with zeros
those values in the first column below a11, i.e.,
    [ a11   a12    ...   a1m    a1,n+1  ]
    [  0    a*22   ...   a*2m   a*2,n+1 ]
    [  :     :     ...    :       :     ].
    [  0    a*n2   ...   a*nm   a*n,n+1 ]
-->a = [A b]
 a  =
!   2.    4.    6.    14. !
!   3.  - 2.    1.  -  3. !
!   4.    2.  - 1.  -  4. !
Next, we define n as 3 and implement the first step in the forward elimination through the use
of for..end constructs:
--> n = 3;
-->for i=2:n, for j=2:n+1, a(i,j)=a(i,j)-a(1,j)*a(i,1)/a(1,1); end; a(i,1) = 0;
end;
-->a
 a  =
!   2.    4.     6.    14. !
!   0.  - 8.  -  8.  - 24. !
!   0.  - 6.  - 13.  - 32. !
__________________________________________________________________________________
Algorithm for second step in forward elimination
After the second elimination pass, the generic augmented matrix is:
    [ a11   a12    a13    ...   a1n     a1,n+1   ]
    [  0    a*22   a*23   ...   a*2n    a*2,n+1  ]
    [  0     0     a**33  ...   a**3n   a**3,n+1 ]
    [  :     :      :     ...     :        :     ].
    [  0     0     a**n3  ...   a**nn   a**n,n+1 ]
-->a
 a  =
!   2.    4.    6.    14. !
!   0.  - 8.  - 8.  - 24. !
!   0.    0.  - 7.  - 14. !
__________________________________________________________________________________
After the (n-1) elimination passes are completed, the generic augmented matrix takes the
upper-triangular form

    [ a11   a12     a13     ...   a1,n-1          a1n           a1,n+1         ]
    [  0    a(1)22  a(1)23  ...   a(1)2,n-1       a(1)2n        a(1)2,n+1      ]
    [  0     0      a(2)33  ...   a(2)3,n-1       a(2)3n        a(2)3,n+1      ]
    [  :     :       :      ...     :               :              :           ]
    [  0     0       0      ...   a(n-2)n-1,n-1   a(n-2)n-1,n   a(n-2)n-1,n+1 ]
    [  0     0       0      ...    0              a(n-1)n,n     a(n-1)n,n+1   ]

where a superscript (k) indicates a coefficient modified during the k-th elimination pass.
If we use the array aij to store the modified coefficients, we can simply write:

    aij = aij - akj*aik/akk;   i = k+1, k+2, ..., n;   j = k+1, k+2, ..., n+1,
    aij = 0;                   i = k+1, k+2, ..., n;   j = k,

for k = 1, 2, ..., n-1.
Note: By using the array aij to store the newly calculated coefficients you lose the information
stored in the original array aij. Therefore, when implementing the algorithm in SCILAB, if you
need to keep the original array available for any reason, you may want to copy it into a
different array as we did in the SCILAB example presented earlier.
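The elimination recurrence above translates almost line-for-line into other array languages; the following Python/NumPy sketch (an illustrative translation, not part of the original SCILAB material) applies it to the example system used throughout this section:

```python
import numpy as np

def forward_eliminate(aug):
    """Naive forward elimination (no pivoting) on an augmented matrix:
    a[i,j] = a[i,j] - a[k,j]*a[i,k]/a[k,k], then zero the entries below a[k,k]."""
    a = np.array(aug, dtype=float)
    n = a.shape[0]
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = a[i, k] / a[k, k]
            a[i, k + 1:] -= factor * a[k, k + 1:]
            a[i, k] = 0.0           # the eliminated entry
    return a

# Augmented matrix of the example system [2 4 6 | 14; 3 -2 1 | -3; 4 2 -1 | -4]
aug = np.array([[2., 4., 6., 14.],
                [3., -2., 1., -3.],
                [4., 2., -1., -4.]])
print(forward_eliminate(aug))
```

The result reproduces the upper-triangular augmented matrix [2 4 6 14; 0 -8 -8 -24; 0 0 -7 -14] obtained in the SCILAB session.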
__________________________________________________________________________________
-->a = [2,4,6,14;3,-2,1,-3;4,2,-1,-4]
 a  =
!   2.    4.    6.    14. !
!   3.  - 2.    1.  -  3. !
!   4.    2.  - 1.  -  4. !
-->n=3;
-->for k=1:n-1, for i=k+1:n, for j=k+1:n+1, a(i,j)=a(i,j)-a(k,j)*a(i,k)/a(k,k);
end; for j = 1:k, a(i,j) = 0; end; end; end;
-->a
a =
!   2.    4.    6.    14. !
!   0.  - 8.  - 8.  - 24. !
!   0.    0.  - 7.  - 14. !
__________________________________________________________________________________
Algorithm for backward substitution
The next step in the algorithm is to calculate the solution x1, x2,..., xn, by back substitution,
starting with
xn = an,n+1/ann.
and continuing with
xn-1 = (an-1,n+1-an-1,nxn)/an-1,n-1
    xn-2 = (an-2,n+1 - an-2,n-1*xn-1 - an-2,n*xn)/an-2,n-2
    .
    .
    x1 = (a1,n+1 - Σ(k=2..n) a1k*xk)/a11.

In general, the algorithm starts by calculating
xn = an,n+1/ann.
and, then, calculating
    xi = (ai,n+1 - Σ(k=i+1..n) aik*xk)/aii,
for i = n-1, n-2, ..., 2,1 (i.e., counting backwards from (n-1) to 1).
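The same backward-substitution recurrence, sketched in Python/NumPy for illustration (a hypothetical helper; the input is a forward-eliminated augmented matrix):

```python
import numpy as np

def back_substitute(a):
    """Back substitution on an n x (n+1) upper-triangular augmented matrix."""
    n = a.shape[0]
    x = np.zeros(n)
    x[n - 1] = a[n - 1, n] / a[n - 1, n - 1]       # xn = an,n+1/ann
    for i in range(n - 2, -1, -1):
        sumk = a[i, i + 1:n] @ x[i + 1:n]          # sum of aik*xk, k = i+1..n
        x[i] = (a[i, n] - sumk) / a[i, i]
    return x

a = np.array([[2., 4., 6., 14.],
              [0., -8., -8., -24.],
              [0., 0., -7., -14.]])
print(back_substitute(a))     # recovers the solution -1, 1, 2
```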
__________________________________________________________________________________
SCILAB example for calculating the unknown x's
Using the current value of the matrix a that resulted from the forward elimination, we can
calculate the unknowns in the linear system as follows. First, the last unknown is:
-->x(n) = a(n,n+1)/a(n,n);
The remaining unknowns are calculated, in reverse order, with:

-->for i = n-1:-1:1, sumk=0; for k=i+1:n, sumk=sumk+a(i,k)*x(k); end; x(i)=(a(i,n+1)-sumk)/a(i,i); end;
_________________________________________________________________________________
SCILAB function for Gaussian elimination
The following SCILAB function implements the solution of the system of linear equations,
Ax = b. The function's arguments are an nxn matrix A and an nx1 vector b. The function returns
the solution x.
function [x] = gausselim(A,b)
//This function obtains the solution to the system of
//linear equations A*x = b, given the matrix of coefficients A
//and the right-hand side vector, b
[nA,mA] = size(A)
[nb,mb] = size(b)
if nA<>mA then
  error('gausselim - Matrix A must be square');
  abort;
elseif mA<>nb then
  error('gausselim - incompatible dimensions between A and b');
  abort;
end;
a = [A b];      //Matrix augmentation
n = nA;
//Forward elimination
for k=1:n-1
  for i=k+1:n
    for j=k+1:n+1
      a(i,j)=a(i,j)-a(k,j)*a(i,k)/a(k,k);
    end;
  end;
end;
//Backward substitution
x(n) = a(n,n+1)/a(n,n);
for i = n-1:-1:1
  sumk=0
  for k=i+1:n
    sumk=sumk+a(i,k)*x(k);
  end;
  x(i)=(a(i,n+1)-sumk)/a(i,i);
end;
//End function
Note: In this function we did not include the statements that produce the zero values in the
lower triangular part of the augmented matrix. These terms are not involved in the solution
at all, and were used earlier only to illustrate the effects of the Gaussian elimination.
Application of the function gausselim to the problem under consideration produces:
-->A = [2,4,6;3,-2,1;4,2,-1]; b = [14;-3;-4];
-->getf('gausselim')
-->gausselim(A,b)
 ans  =
! - 1. !
!   1. !
!   2. !

-->x = gausselim(A,b)
 x  =
! - 1. !
!   1. !
!   2. !
Pivoting
If you look carefully at the row operations in the examples shown above, you will notice that
many of those operations divide a row by its corresponding element in the main diagonal. This
element is called a pivot element, or simply, a pivot. In many situations it is possible that the
pivot element becomes zero, in which case a division by zero occurs. Also, to improve the
numerical solution of a system of equations using Gaussian or Gauss-Jordan elimination, it is
recommended that the pivot be the element with the largest absolute value in a given column.
This operation is called partial pivoting. To follow this recommendation it is often necessary
to exchange rows in the augmented matrix while performing a Gaussian or Gauss-Jordan
elimination.
While performing pivoting in a matrix elimination procedure, you can improve the numerical
solution even more by selecting as the pivot the element with the largest absolute value in the
column and row of interest. This operation may require exchanging not only rows, but also
columns, in some pivoting operations. When row and column exchanges are allowed in
pivoting, the procedure is known as full pivoting.
When exchanging rows and columns in partial or full pivoting, it is necessary to keep track of
the exchanges because the order of the unknowns in the solution is altered by those exchanges.
One way to keep track of column exchanges in partial or full pivoting mode is to create a
permutation matrix P = Inxn at the beginning of the procedure. Any row or column exchange
required in the augmented matrix Aaug is also registered as a row or column exchange,
respectively, in the permutation matrix. When the solution is achieved, then, we multiply the
permutation matrix by the unknown vector x to obtain the order of the unknowns in the
solution. In other words, the final solution is given by Px = b, where b is the last column of
the augmented matrix after the solution has been found.
A function such as lu in SCILAB automatically takes care of using partial or full pivoting in the
solution of linear systems. No special provision is necessary in the call to this function to
activate pivoting. The function lu takes care of reincorporating the permutation matrix into
the final result.
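To see what partial pivoting accomplishes, here is an illustrative Python/NumPy solver (a sketch, not SCILAB's lu) that swaps in the largest-magnitude pivot before each elimination pass; it copes with a zero pivot that would defeat the naive gausselim:

```python
import numpy as np

def gauss_solve_pp(A, b):
    """Gaussian elimination with partial pivoting, then back substitution."""
    a = np.hstack([np.asarray(A, float), np.asarray(b, float).reshape(-1, 1)])
    n = a.shape[0]
    for k in range(n - 1):
        p = k + np.argmax(np.abs(a[k:, k]))    # row with the largest pivot candidate
        if p != k:
            a[[k, p]] = a[[p, k]]              # row exchange
        for i in range(k + 1, n):
            a[i, k:] -= (a[i, k] / a[k, k]) * a[k, k:]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (a[i, n] - a[i, i + 1:n] @ x[i + 1:n]) / a[i, i]
    return x

# A(1,1) = 0 would cause a division by zero without the row exchanges
print(gauss_solve_pp([[0, 2, 3], [2, 0, 3], [8, 16, -1]], [7, 13, -3]))   # 2, -1, 3
```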
__________________________________________________________________________________
Consider, as an example, the linear system shown below:
          2Y +  3Z =  7,
    2X        +  3Z = 13,
    8X + 16Y  -   Z = -3.
The augmented matrix is:
    Aaug = [ 0    2    3     7 ]
           [ 2    0    3    13 ].
           [ 8   16   -1    -3 ]
8 16 1 3
The zero in element (1,1) will not allow the simple Gaussian elimination to proceed since a
division by zero will be required. Trying a solution with function gausselim, described earlier,
produces an error:
-->A = [0,2,3;2,0,3;8,16,-1], b = [7;13;-3]
 A  =
!   0.    2.     3. !
!   2.    0.     3. !
!   8.   16.  -  1. !
 b  =
!    7. !
!   13. !
! -  3. !
-->getf('gausselim')
-->x = gausselim(A,b)
       !--error    27
division by zero...
at line      26 of function gausselim called by :
x = gausselim(A,b)
The system can be solved, instead, through LU decomposition with partial pivoting, using
SCILAB's function lu:

-->[L,U,P] = lu(A)
 P  =
!   0.    0.    1. !
!   0.    1.    0. !
!   1.    0.    0. !
 U  =
!   8.   16.  - 1.    !
!   0.  - 4.    3.25  !
!   0.    0.    4.625 !
 L  =
!   1.      0.    0. !
!   .25     1.    0. !
!   0.    - .5    1. !

-->c = P*b
 c  =
! - 3.  !
!  13.  !
!   7.  !

The solution to the system of linear equations through LU decomposition proceeds in two parts:

-->y = L\c
 y  =
! - 3.     !
!  13.75   !
!  13.875  !

-->x = U\y
 x  =
!   2. !
! - 1. !
!   3. !
[nA,mA] = size(A)
[nb,mb] = size(b)
if nA<>mA then
  error('gausselim - Matrix A must be square');
  abort;
elseif mA<>nb then
  error('gausselim - incompatible dimensions between A and b');
  abort;
end;
a = [A b];     // Augmented matrix
n = nA;        // Matrix size
Application of the function gausselimPP for the case under consideration is shown next:
-->A = [0,2,3;2,0,3;8,16,-1]; b = [7;13;-3];
-->getf('gausselimPP')
-->gausselimPP(A,b)
 ans  =
!   2. !
! - 1. !
!   3. !
 x1 + 2x2 + 3x3 =  9,
3x1 - 2x2 +  x3 = -5,
4x1 + 2x2 -  x3 = 19.
We can write the three systems of equations as a single matrix equation: AX = B, where
        [ 1   2   3 ]        [ x11  x12  x13 ]        [ 14    9   -2 ]
    A = [ 3  -2   1 ],   X = [ x21  x22  x23 ],   B = [  2   -5    2 ].
        [ 4   2  -1 ]        [ x31  x32  x33 ]        [  5   19   12 ]
In the unknown matrix X, the first sub-index identifies the unknown, as in the original single
system, while the second one identifies the system of linear equations to which a particular
variable belongs.
The solution to the system, AX = B, can be found in SCILAB using the backward slash operator,
i.e., X = A\B, or the inverse matrix, X = A^(-1)*B. For the matrices defined above we can write:
-->A = [1,2,3;3,-2,1;4,2,-1], B=[14,9,-2;2,-5,2;5,19,12]
 A  =
!   1.    2.    3. !
!   3.  - 2.    1. !
!   4.    2.  - 1. !
 B  =
!   14.    9.  - 2. !
!    2.  - 5.    2. !
!    5.   19.   12. !
-->X = A\B
 X  =
!   1.    2.    2. !
!   2.    5.    1. !
!   3.  - 1.  - 2. !

-->X = inv(A)*B
 X  =
!   1.    2.    2. !
!   2.    5.    1. !
!   3.  - 1.  - 2. !
i.e., the solution matrix is

    X = [ 1   2   2 ]
        [ 2   5   1 ].
        [ 3  -1  -2 ]
The function gausselimm generalizes gausselim to a matrix right-hand side B, operating on the
augmented matrix a = [A B]. The backward-substitution portion of the function, which solves for
each column j of X, ends with:

    for k=i+1:n
      sumk = sumk + a(i,k)*x(k,j);
    end;
    x(i,j) = (a(i,n+j)-sumk)/a(i,i);
  end;
end;
//End function
Next, function gausselimm is applied to the matrices A and B defined earlier:
-->A = [1,2,3;3,-2,1;4,2,-1]; B=[14,9,-2;2,-5,2;5,19,12];
-->getf('gausselimm')
-->gausselimm(A,B)
 ans  =
!   1.    2.    2. !
!   2.    5.    1. !
!   3.  - 1.  - 2. !

The same function can be used to produce the inverse of A by taking B as an identity matrix:

-->gausselimm(A,eye(3,3))
 ans  =
!   0.       .1428571     .1428571 !
!   .125   - .2321429     .1071429 !
!   .25      .1071429   - .1428571 !

You can check that this result is the same as that obtained from SCILAB's function inv:

-->inv(A)
 ans  =
!   0.       .1428571     .1428571 !
!   .125   - .2321429     .1071429 !
!   .25      .1071429   - .1428571 !
In other words, after completing the forward elimination of a naive Gaussian
elimination, the determinant of an nxn matrix A = [aij] is calculated as

    det(A) = Π(k=1..n) akk.
If partial pivoting is included, however, the sign of the determinant changes according to the
number of row switches included in the pivoting process. In such a case, if N represents the
number of row exchanges, the determinant is given by

    det(A) = (-1)^N * Π(k=1..n) akk.
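The pivot-product formula is easy to exercise directly; the following Python/NumPy sketch (an illustrative helper, separate from the SCILAB function below) counts the row exchanges and multiplies the diagonal after elimination:

```python
import numpy as np

def det_by_elimination(A):
    """det(A) = (-1)^N * product of the diagonal after forward elimination,
    where N is the number of row exchanges made by partial pivoting."""
    a = np.asarray(A, float).copy()
    n = a.shape[0]
    nswitch = 0
    for k in range(n - 1):
        p = k + np.argmax(np.abs(a[k:, k]))
        if p != k:
            a[[k, p]] = a[[p, k]]
            nswitch += 1                      # one row exchange
        for i in range(k + 1, n):
            a[i, k:] -= (a[i, k] / a[k, k]) * a[k, k:]
    return (-1) ** nswitch * np.prod(np.diag(a))

A = [[2, 4, 6], [3, -2, 1], [4, 2, -1]]
print(det_by_elimination(A))                  # approximately 112
```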
The calculation of the determinant is included in the Gaussian elimination function called
gausselimd. The call to this function includes return variables for the solution X to the system
AX = B as well as for the determinant of A. A listing of the function is shown below:
function [x,detA] = gausselimd(A,B)
//This function obtains the solution to the system of
//linear equations A*X = B, given the nxn matrix of coefficients A
//and the nxm right-hand side matrix, B. Matrix X is nxm.
[nA,mA] = size(A);
[nB,mB] = size(B);
if nA<>mA then
  error('gausselim - Matrix A must be square');
  abort;
elseif mA<>nB then
  error('gausselim - incompatible dimensions between A and b');
  abort;
end;
a = [A B];     // Augmented matrix
n = nA;        // Number of rows and columns in A, rows in B
m = mB;        // Number of columns in B
//Forward elimination with partial pivoting
nswitch = 0;
for k=1:n-1
  kpivot = k; amax = abs(a(k,k));      //Pivoting
  for i=k+1:n
    if abs(a(i,k))>amax then
      kpivot = i; amax = abs(a(i,k));
    end;
  end;
  if kpivot<>k then
    temp = a(kpivot,:); a(kpivot,:) = a(k,:); a(k,:) = temp;
    nswitch = nswitch+1;               //Count actual row exchanges
  end;
  for i=k+1:n                          //Forward elimination
    for j=k+1:n+m
      a(i,j)=a(i,j)-a(k,j)*a(i,k)/a(k,k);
    end;
  end;
end;
-->A = [3,5,-1;2,2,3;1,1,2]
 A  =
!   3.    5.  - 1. !
!   2.    2.    3. !
!   1.    1.    2. !

-->b = [-4;17;11]
 b  =
! - 4.  !
!  17.  !
!  11.  !

-->getf('gausselimd')
-->[x,detA] = gausselimd(A,b)
 detA  =
  - 2.
 x  =
!   2. !
! - 1. !
!   5. !
-->A = [2,3,1;4,6,2;1,1,2]
 A  =
!   2.    3.    1. !
!   4.    6.    2. !
!   1.    1.    2. !
-->[x,detA] = gausselimd(A,b)
       !--error 9999
gausselimd - singular matrix
at line     53 of function gausselimd called by :
[x,detA] = gausselimd(A,b)
Gauss-Jordan elimination
Gauss-Jordan elimination consists of continuing the row operations in the augmented matrix
that results from the forward elimination of a Gaussian elimination process, until an identity
matrix is obtained in place of the original A matrix. For example, for the following augmented
matrix, the forward elimination procedure results in:
    Aaug = [ 2   4   6   14 ]   [ 2   4    6   14 ]   [ 2   4   6   14 ]
           [ 3  -2   1   -3 ] ≅ [ 0  -8   -8  -24 ] ≅ [ 0  -8  -8  -24 ].
           [ 4   2  -1   -4 ]   [ 0  -6  -13  -32 ]   [ 0   0  -7  -14 ]
We can continue performing row operations until the augmented matrix gets reduced to:
1 2 3 7 1 2 3 7 1 2 0 1 1 0 0 1
0 1 1 3 0 1 1 1 0 1 0 1 0 1 0 1 .
0 0 1 2 0 0 1 2 0 0 1 2 0 0 1 2
The first matrix is the same as the one obtained before at the end of the forward elimination
process, with the exception that all rows have been divided by the corresponding diagonal
term, i.e., row 1 was divided by 2, row 2 was divided by -8, and row 3 was divided by -7. The
final matrix is equivalent to the equations: X = -1, Y = 1, Z = 2, which is the solution to the
original system of equations.
SCILAB provides the function rref (row-reduced echelon form) that can be used to obtain a
solution to a system of linear equations using Gauss-Jordan elimination. The function requires
as argument an augmented matrix, for example:
-->A = [2,4,6;3,-2,1;4,2,-1]; b = [14;-3;-4];
-->A_aug = [A b]
 A_aug  =
!   2.    4.    6.    14. !
!   3.  - 2.    1.  -  3. !
!   4.    2.  - 1.  -  4. !
-->rref(A_aug)
 ans  =
!   1.    0.    0.  - 1. !
!   0.    1.    0.    1. !
!   0.    0.    1.    2. !
The last result indicates that the solution to the system is x1 = -1, x2 = 1, and x3 = 2.
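The Gauss-Jordan reduction behind rref can be sketched in Python/NumPy as follows (an illustrative version that assumes a square, nonsingular coefficient block and uses partial pivoting for stability):

```python
import numpy as np

def gauss_jordan(aug):
    """Reduce an augmented matrix to row-reduced echelon form."""
    a = np.array(aug, dtype=float)
    n = a.shape[0]
    for k in range(n):
        p = k + np.argmax(np.abs(a[k:, k]))
        a[[k, p]] = a[[p, k]]          # bring the largest pivot up
        a[k] /= a[k, k]                # make the pivot equal to 1
        for i in range(n):
            if i != k:
                a[i] -= a[i, k] * a[k] # clear the rest of column k
    return a

aug = np.array([[2., 4., 6., 14.],
                [3., -2., 1., -3.],
                [4., 2., -1., -4.]])
print(gauss_jordan(aug))    # identity block plus the solution column -1, 1, 2
```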
To find the inverse of the matrix

    A = [ 1   2   3 ]
        [ 3  -2   1 ],
        [ 4   2  -1 ]

we would write this augmented matrix as

    Aaug = [ 1   2   3   1   0   0 ]
           [ 3  -2   1   0   1   0 ].
           [ 4   2  -1   0   0   1 ]
The SCILAB commands needed to obtain the inverse using Gauss-Jordan elimination are:
-->A = [1,2,3;3,-2,1;4,2,-1]        //Original matrix
 A  =
!   1.    2.    3. !
!   3.  - 2.    1. !
!   4.    2.  - 1. !

-->A_aug = [A eye(3,3)]             //Augmented matrix
 A_aug  =
!   1.    2.    3.    1.    0.    0. !
!   3.  - 2.    1.    0.    1.    0. !
!   4.    2.  - 1.    0.    0.    1. !
-->rref(A_aug)
 ans  =
!   1.    0.    0.    0.       .1428571     .1428571 !
!   0.    1.    0.    .125   - .2321429     .1071429 !
!   0.    0.    1.    .25      .1071429   - .1428571 !

-->A_inv_1 = ans(:,4:6)
 A_inv_1  =
!   0.       .1428571     .1428571 !
!   .125   - .2321429     .1071429 !
!   .25      .1071429   - .1428571 !
This exercise is presented to illustrate the calculation of inverse matrices through Gauss-Jordan
elimination. In practice, you should use the function inv in SCILAB to obtain inverse matrices.
For the case under consideration, for example, the inverse is obtained from:
-->A_inv_2 = inv(A)
 A_inv_2  =
!   0.       .1428571     .1428571 !
!   .125   - .2321429     .1071429 !
!   .25      .1071429   - .1428571 !
The result is exactly the same as found above using Gauss-Jordan elimination.
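The [A | I] reduction generalizes directly; here is a Python/NumPy sketch of the inverse-by-Gauss-Jordan idea (illustrative only; in practice numpy.linalg.inv plays the role of SCILAB's inv):

```python
import numpy as np

def inverse_gauss_jordan(A):
    """Invert A by reducing [A | I] to [I | A^-1] with Gauss-Jordan steps."""
    A = np.asarray(A, float)
    n = A.shape[0]
    a = np.hstack([A, np.eye(n)])      # the augmented matrix [A | I]
    for k in range(n):
        p = k + np.argmax(np.abs(a[k:, k]))
        a[[k, p]] = a[[p, k]]
        a[k] /= a[k, k]
        for i in range(n):
            if i != k:
                a[i] -= a[i, k] * a[k]
    return a[:, n:]                    # the right half now holds A^-1

A = [[1, 2, 3], [3, -2, 1], [4, 2, -1]]
print(inverse_gauss_jordan(A))         # first row is 0, 1/7, 1/7 (i.e., 0, .1428571, .1428571)
```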
(A - λI)x = 0.

This equation will have a non-trivial solution only if the characteristic matrix, (A - λI), is
singular, i.e., if

det(A - λI) = 0.

The last equation generates an algebraic equation involving a polynomial of order n for a
square matrix Anxn. The resulting equation is known as the characteristic polynomial of matrix
A. Solving the characteristic polynomial produces the eigenvalues of the matrix.
Using SCILAB we can obtain the characteristic matrix, characteristic polynomial, and
eigenvalues of a matrix as shown below for a symmetric matrix
    A = [  3  -2   5 ]
        [ -2   3   6 ].
        [  5   6   4 ]
First, we define the matrix A:
-->A = [3,-2,5;-2,3,6;5,6,4]
 A  =
!   3.  - 2.    5. !
! - 2.    3.    6. !
!   5.    6.    4. !

The characteristic matrix, A - lam*I, can be built with a polynomial variable lam:

-->lam = poly(0,'lam'); charMat = A - lam*eye(3,3)
 charMat  =
!   3 - lam   - 2          5       !
! - 2           3 - lam    6       !
!   5           6          4 - lam !
The characteristic equation can be determined using function poly with matrix A and the
variable name lam as arguments, i.e.,
-->charPoly = poly(A,'lam')
charPoly =
    283 - 32lam - 10lam^2 + lam^3
which, although not exactly zero, is close enough to zero to ensure singularity.
Because the characteristic matrix is singular, there is no unique solution to the problem
(A - λ1I)x = 0. The equivalent system of linear equations is:

    8.4409348x1 -         2x2 +         5x3 = 0
   -        2x1 + 8.4409348x2 +         6x3 = 0
            5x1 +         6x2 + 9.4409348x3 = 0
The three equations are linearly dependent. This means that we can select an arbitrary value
of one of the solutions, say, x3 = 1, and solve two of the equations for the other two solutions,
x1 and x2. We could try, for example, to solve the first two equations (with x3 = 1), re-written
as:
    8.4409348x1 -         2x2 = -5
   -        2x1 + 8.4409348x2 = -6
Using SCILAB we can get the solution to this system as follows:
-->C1 = B1(1:2,1:2)
 C1  =
!   8.4409348   - 2.        !
! - 2.            8.4409348 !

-->b1 = -B1(1:2,3)
 b1  =
! - 5. !
! - 6. !

-->C1\b1
 ans  =
! - .8060249 !
! - .9018017 !
The eigenvectors are assembled, one eigenvalue at a time, by the loop:

x = [];
for k = 1:n
  B = A - lam(k)*eye(n,n);     //Characteristic matrix
  C = B(1:n-1,1:n-1);          //Coeff. matrix for reduced system
  b = -B(1:n-1,n);             //RHS vector for reduced system
  y = C\b;                     //Solution for reduced system
  y = [y;1];                   //Complete eigenvector
  y = y/norm(y);               //Make unit eigenvector
  x = [x y];                   //Add eigenvector to matrix
end;
//End of function
Applying function eigenvectors to the matrix A used earlier produces the following eigenvalues
and eigenvectors:
-->getf('eigenvectors')
-->[x,lam] = eigenvectors(A)
 lam  =
! - 5.4409348    4.9650189    10.475916 !
 x  =
! - .5135977    .7711676    .3761887 !
! - .5746266  - .6347298    .5166454 !
!   .6371983    .0491799    .7691291 !

-->x1 = x(:,1), x2 = x(:,2), x3 = x(:,3)
 x1  =
! - .5135977 !
! - .5746266 !
!   .6371983 !
 x2  =
!   .7711676 !
! - .6347298 !
!   .0491799 !
 x3  =
!   .3761887 !
!   .5166454 !
!   .7691291 !
The eigenvectors corresponding to a symmetric matrix are orthogonal to each other, i.e.,
xi'*xj = 0, for i<>j. Checking these results for the eigenvectors found above:
-->x1'*x2, x1'*x3, x2'*x3
 ans  =
  - 3.678E-16
 ans  =
  - 5.551E-17
 ans  =
    9.576E-16
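The same symmetric eigenproblem can be cross-checked with NumPy (numpy.linalg.eigh, which returns the eigenvalues in ascending order, matching the order obtained above):

```python
import numpy as np

A = np.array([[3., -2., 5.],
              [-2., 3., 6.],
              [5., 6., 4.]])
lam, x = np.linalg.eigh(A)     # eigh is specialized for symmetric matrices
print(lam)                     # close to -5.4409348, 4.9650189, 10.475916
# orthogonality of the eigenvectors, as in the SCILAB check:
print(x[:, 0] @ x[:, 1], x[:, 0] @ x[:, 2], x[:, 1] @ x[:, 2])
```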
The function also works for non-symmetric matrices, e.g.,

-->A = [3,2,-1;1,1,2;5,5,-2]
 A  =
!   3.    2.  - 1. !
!   1.    1.    2. !
!   5.    5.  - 2. !

-->[x,lam] = eigenvectors(A)
 lam  =
!   .9234927    4.1829571  - 3.1064499 !
 x  =
! - .5974148    .3437456    .2916085 !
!   .7551776    .5746662  - .4752998 !
!   .2698191    .7426962    .8300931 !
For a singular matrix with complex eigenvalues, the function issues a warning:

-->A = [2,2,-3;3,3,-2;1,1,1]
 A  =
!   2.    2.  - 3. !
!   3.    3.  - 2. !
!   1.    1.    1. !

-->[x,lam] = eigenvectors(A)
warning
matrix is close to singular or badly scaled.
results may be inaccurate. rcond =    7.6862E-17

 lam  =
!   4.646E-16    3. - i    3. + i !
 x  =
! - .7071068    .25 - .25i    .25 + .25i !
!   .7071068    .75 - .25i    .75 + .25i !
!   2.826E-16   .5            .5         !
p = [p1 p2 ... pn+1].

The characteristic equation is given by

    p1 + p2*lam + p3*lam^2 + ... + pn*lam^(n-1) + pn+1*lam^n = 0.
Here is a listing of the function:
function [p]=chreq(A)
//This function generates the coefficients of the characteristic
//equation for a square matrix A
[m n]=size(A);
if (m <> n) then
  error('matrix is not square.')
  abort
end;
I = eye(n,n);       //Identity matrix
p = zeros(1,n);     //Matrix (1xn) filled with zeroes
p(1,n+1) = -1.0;
B = A;
p(1,n) = trace(B);
for j = n-1:-1:1,
  B = A*(B - p(1,j+1)*I),
  p(1,j) = trace(B)/(n-j+1),
end;
p = (-1)^n*p;
p = poly(p,"lmbd","coeff");
//end function chreq
In this function we used the function poly to generate the final result. In the call to
poly shown above, we use the vector of coefficients p as the first argument, "lmbd" as the name
of the independent variable, and "coeff" to indicate that the vector p represents the
coefficients of the polynomial. The following commands show you how to load the function and
run it for a particular matrix:
-->getf('chreq')
-->A = [1 3 1;2 5 -1;2 7 -1]
 A  =
!   1.    3.    1. !
!   2.    5.  - 1. !
!   2.    7.  - 1. !
-->CheqA = chreq(A)
CheqA =
    - 6 - 2lmbd - 5lmbd^2 + lmbd^3
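The recursion used by chreq is the Faddeev-LeVerrier method. An illustrative Python/NumPy version (a hypothetical helper, with coefficients stored low-degree first as in chreq) reproduces this result:

```python
import numpy as np

def char_poly_coeffs(A):
    """Faddeev-LeVerrier recursion: coefficients [c0, ..., cn] such that
    det(x*I - A) = c0 + c1*x + ... + cn*x^n."""
    A = np.asarray(A, float)
    n = A.shape[0]
    c = np.zeros(n + 1)
    c[n] = 1.0
    M = np.zeros_like(A)                      # M0 = 0
    for k in range(1, n + 1):
        M = A @ M + c[n - k + 1] * np.eye(n)  # Mk = A*M(k-1) + c(n-k+1)*I
        c[n - k] = -np.trace(A @ M) / k
    return c

print(char_poly_coeffs([[1, 3, 1], [2, 5, -1], [2, 7, -1]]))   # [-6, -2, -5, 1]
```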
x = []; n = nA;
for k = 1:n
  BB = A - lam(k)*B;           //Characteristic matrix
  CC = BB(1:n-1,1:n-1);        //Coeff. matrix for reduced system
  bb = -BB(1:n-1,n);           //RHS vector for reduced system
  y = CC\bb;                   //Solution for reduced system
  y = [y;1];                   //Complete eigenvector
  y = y/norm(y);               //Make unit eigenvector
  x = [x y];                   //Add eigenvector to matrix
end;
//End of function
-->A = [4,1,6;8,5,8;1,5,5]
 A  =
!   4.    1.    6. !
!   8.    5.    8. !
!   1.    5.    5. !
-->B = [3,9,7;3,3,2;9,3,4]
 B  =
!   3.    9.    7. !
!   3.    3.    2. !
!   9.    3.    4. !
-->getf('geigenvectors')
-->[x,lam] = geigenvectors(A,B)
 lam  =
! - .2055713 - 1.1759636i  - .2055713 + 1.1759636i  - 1.5333019 !
 x  =
!   .2828249 + .0422024i    .2828249 - .0422024i    .0202307 !
!   .5392352 - .2691247i    .5392352 + .2691247i    .7437197 !
!   .7450009                .7450009                .6681854 !
Sparse matrices
Sparse matrices are those that have a large percentage of zero elements. When a matrix is
defined as sparse in SCILAB, only those non-zero elements are stored. The regular definition
of a matrix, also referred to as a full matrix, implies that SCILAB stores all elements of the
matrix, zero or otherwise.
For example, for the matrix

-->A = [2,0,-1;0,1,0;0,0,2]
 A  =
!   2.    0.  - 1. !
!   0.    1.    0. !
!   0.    0.    2. !

-->As = sparse(A)
 As  =
(    3,    3) sparse matrix

(    1,    1)        2.
(    1,    3)      - 1.
(    2,    2)        1.
(    3,    3)        2.
Notice that SCILAB reports the size of the matrix (3,3), and those non-zero elements only.
These are the only elements stored in memory. Thus, sparse matrices are useful in minimizing
memory storage particularly when large-size matrices, with relatively small density of non-zero
elements, are involved.
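The storage idea itself is simple to sketch in plain Python: keep only the non-zero entries, keyed by their (row, column) position (an illustration of the concept, not SCILAB's internal representation):

```python
def to_sparse(A):
    """Store a dense matrix (list of rows) as {(row, col): value}, zeros omitted."""
    return {(i, j): v
            for i, row in enumerate(A)
            for j, v in enumerate(row)
            if v != 0}

A = [[2, 0, -1],
     [0, 1, 0],
     [0, 0, 2]]
S = to_sparse(A)
print(len(S), S)    # 4 stored entries instead of 9
```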
Alternatively, a call to sparse of the form

A_sparse = sparse(index,values)

can be used, where index is an nx2 matrix and values is an nx1 vector, such that values(i,1)
represents the element in row index(i,1) and column index(i,2) of the sparse matrix. For example,
-->row = [2, 2, 3, 3, 6, 6, 10]
 row  =
!   2.    2.    3.    3.    6.    6.    10. !

-->col = [1, 2, 2, 3, 1, 4, 2]
 col  =
!   1.    2.    2.    3.    1.    4.    2. !
-->val = [-0.5, 0.3, 0.2, 1.5, 4.2, -1.1, 2.0] //values of non-zero elements
 val  =
! - .5    .3    .2    1.5    4.2  - 1.1    2. !

-->index = [row' col']
 index  =
!   2.     1. !
!   2.     2. !
!   3.     2. !
!   3.     3. !
!   6.     1. !
!   6.     4. !
!   10.    2. !
-->As = sparse(index,val)
 As  =
(    10,    4) sparse matrix

(    2,     1)      - .5
(    2,     2)        .3
(    3,     2)        .2
(    3,     3)        1.5
(    6,     1)        4.2
(    6,     4)      - 1.1
(    10,    2)        2.
The function full converts a sparse matrix into a full matrix. For example,
-->A =full(As)
 A  =
!   0.      0.     0.      0.    !
! - .5      .3     0.      0.    !
!   0.      .2     1.5     0.    !
!   0.      0.     0.      0.    !
!   0.      0.     0.      0.    !
!   4.2     0.     0.    - 1.1   !
!   0.      0.     0.      0.    !
!   0.      0.     0.      0.    !
!   0.      0.     0.      0.    !
!   0.      2.     0.      0.    !
The following call to sparse includes defining the row and column dimensions of the sparse
matrix besides providing indices and values of the non-zero values:
A_sparse = sparse(index,values,dim)
Here, dim is a 1x2 vector with the row and column dimensions of the sparse matrix. As an
example, we can try:
-->As = sparse(index,val,[10,12])
 As  =
(    10,    12) sparse matrix

(    2,     1)      - .5
(    2,     2)        .3
(    3,     2)        .2
(    3,     3)        1.5
(    6,     1)        4.2
(    6,     4)      - 1.1
(    10,    2)        2.

The indices and values stored in a sparse matrix can be retrieved with the function spget:

-->[index,values] = spget(As)
 values  =
! - .5  !
!   .3  !
!   .2  !
!   1.5 !
!   4.2 !
! - 1.1 !
!   2.  !
 index  =
!   2.     1. !
!   2.     2. !
!   3.     2. !
!   3.     3. !
!   6.     1. !
!   6.     4. !
!   10.    2. !
-->A1 = spones(As)
 A1  =
(    10,    12) sparse matrix

(    2,     1)        1.
(    2,     2)        1.
(    3,     2)        1.
(    3,     3)        1.
(    6,     1)        1.
(    6,     4)        1.
(    10,    2)        1.

The function speye produces a sparse identity matrix, e.g.,

-->speye(6,6)
 ans  =
(    6,    6) sparse matrix

(    1,    1)        1.
(    2,    2)        1.
(    3,    3)        1.
(    4,    4)        1.
(    5,    5)        1.
(    6,    6)        1.
To produce a sparse identity matrix with the same dimensions of an existing matrix As use:
-->speye(As)
ans =
(
10,
(
(
(
(
(
(
(
(
(
(
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
1)
2)
3)
4)
5)
6)
7)
8)
9)
10)
1.
1.
1.
1.
1.
1.
1.
1.
1.
1.
The function sprand generates a sparse matrix with random entries, taking the numbers of rows
and columns as the first two arguments and the density of non-zero elements as the third
argument. The density of non-zero values is given as a number between zero and one, for example:
-->As = sprand(6,5,0.2)
 As  =
(    6,    5) sparse matrix

(    1,    3)        .0500420
(    2,    1)        .9931210
(    2,    4)        .7485507
(    3,    2)        .6488563
(    3,    5)        .4104059
(    4,    2)        .9923191

-->full(As)
 ans  =
!   0.         0.         .0500420   0.         0.       !
!   .9931210   0.         0.         .7485507   0.       !
!   0.         .6488563   0.         0.         .4104059 !
!   0.         .9923191   0.         0.         0.       !
!   0.         0.         0.         0.         0.       !
!   0.         0.         0.         0.         0.       !
The function spzeros can be used, for example, in programming SCILAB functions that require
sparse matrices to reserve a matrix name for future use.
The function requires as input the name of the sparse matrix and it requests the number of the
graphics window where the plot is to be displayed. The following is an example that uses this
function to visualize a sparse matrix of dimensions 40x40 with a density of non-zero numbers of
0.2:
-->As = sprand(40,40,0.2);
-->getf('spplot')
-->spplot(As)
Enter graphics window number:
--> 2
[hand,rk] = lufact(As)
where hand is the handle or pointer, rk is the rank of sparse matrix As. Be aware that hand is
a pointer to locate the LU factors in memory. SCILAB produces no display for hand.
The call to function luget has the general form:
[P,L,U,Q] = luget(hand)
where P and Q are permutation matrices and L and U are the LU factors of the sparse matrix
that generated hand through a call to function lufact. Matrices P, Q, L, and U are related to
matrix As by P*L*U*Q = As.
After a LU factorization is completed, it is necessary to call function ludel to clear the pointer
generated in the call to function lufact, i.e., use
ludel(hand)
The following example shows the LU factorization of a 5x5 randomly-generated sparse matrix
with a density of non-zero numbers of 0.5:

-->As = sprand(5,5,0.5);
-->full(As)
 ans  =
!   .7019967    .7354560   0.          0.          .5098139 !
!   .5018194    0.         .6522234    .2921187    .3776263 !
!   0.          .4732295   .9388094    .6533222    0.       !
!   .7860680    0.         0.          .9933566    0.       !
!   0.          .9456872   .2401141    .4494063    0.       !

-->[hand,rk] = lufact(As)
 rk  =
    5.
 hand  =
The rank of the matrix is 5. Notice that SCILAB shows nothing for the handle or pointer hand.
The second step in the LU factorization is to get the matrices P, L, U, and Q, such that P*L*U*Q
= As, i.e.,
-->[P,L,U,Q] = luget(hand);

-->full(P)
 ans  =
!   0.    0.    1.    0.    0. !
!   0.    0.    0.    1.    0. !
!   0.    1.    0.    0.    0. !
!   1.    0.    0.    0.    0. !
!   0.    0.    0.    0.    1. !

-->full(L)
 ans  =
!   .9933566    0.          0.          0.           0.       !
!   .6533222    .9388094    0.          0.           0.       !
!   0.          0.          .7019967    0.           0.       !
!   .2921187    .6522234    .6298297  - .9886182     0.       !
!   .4494063    .2401141  - .2233988    1.0586985    .0768072 !

-->full(U)
 ans  =
!   1.    0.    .7913251    0.          0.       !
!   0.    1.    .5506871    .5040741    0.       !
!   0.    0.    1.          1.047663    .726234  !
!   0.    0.    0.          1.          .0806958 !
!   0.    0.    0.          0.          1.       !

-->full(Q)
 ans  =
!   0.    0.    0.    1.    0. !
!   0.    0.    1.    0.    0. !
!   1.    0.    0.    0.    0. !
!   0.    1.    0.    0.    0. !
!   0.    0.    0.    0.    1. !
To check if the product P*L*U*Q is indeed equal to the original sparse matrix As, use:
-->full(P*L*U*Q-As)
ans =
!
!
!
!
!
0.
0.
0.
0.
0.
1.110E-16
0.
0.
0.
1.110E-16
0.
0.
0.
0.
0.
0.
0.
0.
0.
0.
0.
0.
0.
0.
0.
!
!
!
!
!
The resulting matrix has a couple of small non-zero elements. These can be cleared by using
the function clean as follows:
-->clean(full(P*L*U*Q-As))
 ans  =
!   0.    0.    0.    0.    0. !
!   0.    0.    0.    0.    0. !
!   0.    0.    0.    0.    0. !
!   0.    0.    0.    0.    0. !
!   0.    0.    0.    0.    0. !
At this point we can use function ludel to clear up the handle or pointer used in the
factorization:
--> ludel(hand)
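The factor bookkeeping of lufact/luget can be mimicked with a dense analogue in Python/NumPy (a sketch, not SCILAB's sparse machinery): with partial pivoting only a row permutation is needed, so a single matrix P with P*A = L*U replaces the P-and-Q pair; the matrix reused here is the one from the pivoting example:

```python
import numpy as np

def lu_pp(A):
    """LU factorization with partial pivoting: returns P, L, U with P @ A = L @ U."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))
        if p != k:                              # record the row exchange in P
            U[[k, p]] = U[[p, k]]
            P[[k, p]] = P[[p, k]]
            L[[k, p], :k] = L[[p, k], :k]       # move already-stored multipliers
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
            U[i, k] = 0.0
    return P, L, U

A = np.array([[0., 2., 3.], [2., 0., 3.], [8., 16., -1.]])
P, L, U = lu_pp(A)
print(np.allclose(P @ A, L @ U))    # True
```

For this matrix the factors coincide with those obtained earlier with lu: U = [8 16 -1; 0 -4 3.25; 0 0 4.625] and L = [1 0 0; .25 1 0; 0 -.5 1].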
-->b=rand(6,1);
First, we get the rank and handle for the LU factorization of matrix A:
-->[hand,rk] = lufact(A)
rk =
5.
hand =
Next, we use function lusolve with the handle hand and right-hand side vector b to obtain the
solution to the system of linear equations:
-->x = lusolve(hand,b)
 x  =
! - .8137973 !
! - .5726203 !
!   .8288245 !
!   .9152823 !
!   .4984395 !
!   0.       !
Function lusolve also allows for the direct solution of the system of linear equations without
having to use lufact first. For the case under consideration the SCILAB command is:
-->x = lusolve(A,b)
       !--error    19
singular matrix
However, matrix A for this case is singular (earlier we found that its rank was 5, i.e., smaller
than the number of rows or columns, thus indicating a singular matrix), and no solution is
obtained. The use of lufact combined with lusolve, as shown earlier, forces a solution by
making one of the values equal to zero and solving for the remaining five values.
After completing the solution we need to clear the handle for the LU factorization by using:
-->ludel(hand)
Because the matrix has full rank (i.e., its rank is the same as the number of rows or columns),
it is possible to find the solution by using:
-->x = lusolve(A,b)
 x  =
! - .2813594  !
!   .5590971  !
!   1.0143263 !
! - .1810458  !
! - 1.1233045 !
Do not forget to clear the handle generated with lufact by using: -->ludel(hand)
Solution to system of linear equations using the inverse of a sparse matrix

The function inv can also be applied to a sparse matrix, producing a sparse inverse that can
be multiplied by the right-hand side vector to solve the system. For the matrix of the previous
example:

-->Ainv = inv(A)
 Ainv  =
(    5,    5) sparse matrix

(    1,    1)        .7235355
(    1,    2)        .7404212
(    1,    3)        .3915968
(    1,    4)        2.0631067
(    1,    5)        .1957261
(    2,    1)        1.4075482
(    2,    2)        1.2851264
(    2,    3)      - .6179589
(    2,    4)        .0607043
(    2,    5)      - .5012662
(    3,    1)        2.9506345
(    4,    1)      - 2.1683689
(    4,    3)        1.8737551
(    5,    1)      - 2.4138658
(    5,    2)      - .0298745
(    5,    3)      - .9542841
(    5,    4)      - 2.1488971
(    5,    5)        2.5469151

-->x = Ainv*b
 x  =
! - .2813594  !
!   .5590971  !
!   1.0143263 !
! - .1810458  !
! - 1.1233045 !
    [ a11   a12    0     0    ...    0          0          0      ] [ x1   ]   [ b1   ]
    [ a21   a22   a23    0    ...    0          0          0      ] [ x2   ]   [ b2   ]
    [  0    a32   a33   a34   ...    0          0          0      ] [ x3   ]   [ b3   ]
    [  :     :     :     :    ...    :          :          :      ] [  :   ] = [  :   ]
    [  0     0     0     0    ...  an-2,n-2   an-2,n-1     0      ] [ xn-2 ]   [ bn-2 ]
    [  0     0     0     0    ...  an-1,n-2   an-1,n-1   an-1,n   ] [ xn-1 ]   [ bn-1 ]
    [  0     0     0     0    ...    0        an,n-1      ann     ] [ xn   ]   [ bn   ]
This system can also be written as Ax = b, where the n×n matrix A and the n×1 vectors x
and b are easily identified from the previous expression.
Since each row of the matrix of coefficients contains at most three nonzero elements, we can
enter the data as the elements of an n×3 matrix,
        |  0           a11          a12       |
        |  a21         a22          a23       |
    A = |  a32         a33          a34       |
        |  :            :            :        |
        |  a(n-1,n-2)  a(n-1,n-1)  a(n-1,n)   |
        |  a(n,n-1)    a(n,n)        0        |

in which column 1 stores the sub-diagonal, column 2 the main diagonal, and column 3 the
super-diagonal of the original matrix. The entries A(1,1) and A(n,3) are not used and are set
to zero.
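As an illustration (this snippet is not from the original text), the compact n×3 matrix can be extracted from a full tri-diagonal matrix with a few lines of SCILAB code; the matrix T below is a hypothetical example:

```scilab
// Sketch: build the compact nx3 storage from a full tri-diagonal
// matrix T (hypothetical example, following the layout shown above)
T = [4 -1 0 0; -1 4 -1 0; 0 -1 4 -1; 0 0 -1 4];
n = size(T,1);
A = zeros(n,3);
A(1,2) = T(1,1); A(1,3) = T(1,2);      // first row: diagonal, super-diagonal
for i = 2:n-1
    A(i,1) = T(i,i-1);                 // sub-diagonal
    A(i,2) = T(i,i);                   // main diagonal
    A(i,3) = T(i,i+1);                 // super-diagonal
end;
A(n,1) = T(n,n-1); A(n,2) = T(n,n);    // last row: sub-diagonal, diagonal
```

For this T the result is the same 4×3 compact matrix used in the examples that follow.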
The Thomas algorithm for the solution of the tri-diagonal system of linear equations is an
adaptation of the Gaussian elimination procedure. It consists of a forward elimination
accomplished through the recurrence formulas:

a(i,2) = a(i,2) - a(i,1)·a(i-1,3)/a(i-1,2),   b(i) = b(i) - a(i,1)·b(i-1)/a(i-1,2),   for i = 2, 3, ..., n.

The backward substitution step is performed through the following equations:

x(n) = b(n)/a(n,2),

and

x(i) = (b(i) - a(i,3)·x(i+1))/a(i,2),   for i = n-1, n-2, ..., 1.
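The recurrence formulas above can be collected into a short SCILAB function; the name thomas and the code below are a sketch based on those formulas, not a listing from the original text:

```scilab
function [x] = thomas(A,b)
//Solves a tri-diagonal system stored in the compact nx3 form
//(column 1 = sub-diagonal, 2 = diagonal, 3 = super-diagonal).
//Sketch based on the recurrence formulas in the text.
[n m] = size(A);
for i = 2:n                            // forward elimination
    A(i,2) = A(i,2) - A(i,1)*A(i-1,3)/A(i-1,2);
    b(i)   = b(i)   - A(i,1)*b(i-1)/A(i-1,2);
end;
x(n,1) = b(n)/A(n,2);                  // backward substitution
for i = n-1:-1:1
    x(i,1) = (b(i) - A(i,3)*x(i+1,1))/A(i,2);
end;
//end function thomas
```

The function overwrites its local copies of A and b during the forward sweep, so the caller's data is not modified.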
!   44.976077 !
!   29.904306 !
!   54.641148 !
!   38.660287 !
function [index,val] = tritosparse(A)
//Builds the index matrix and value vector required by function
//sparse() out of the compact nx3 tri-diagonal matrix A
[n m] = size(A);
krow = 1; kcol = 1; kval = 1;
row(krow) = 1; col(kcol) = 1; val(kval) = A(1,2);
krow = krow+1; kcol = kcol+1; kval = kval+1;
row(krow) = 1; col(kcol) = 2; val(kval) = A(1,3);
krow = krow+1; kcol = kcol+1; kval = kval+1;
for i = 2:n-1
    row(krow) = i; col(kcol) = i-1; val(kval) = A(i,1);
    krow = krow+1; kcol = kcol+1; kval = kval+1;
    row(krow) = i; col(kcol) = i;   val(kval) = A(i,2);
    krow = krow+1; kcol = kcol+1; kval = kval+1;
    row(krow) = i; col(kcol) = i+1; val(kval) = A(i,3);
    krow = krow+1; kcol = kcol+1; kval = kval+1;
end;
row(krow) = n; col(kcol) = n-1; val(kval) = A(n,1);
krow = krow+1; kcol = kcol+1; kval = kval+1;
row(krow) = n; col(kcol) = n;   val(kval) = A(n,2);
index = [row col];
//end function tritosparse
An application, using the compact tri-diagonal matrix A presented earlier, is shown below:
First, we load the function:
-->getf('tritosparse')
A call to the function with argument A produces the values and index for a sparse matrix:
-->[index,val] = tritosparse(A)
val  =

!   4. !
! - 1. !
! - 1. !
!   4. !
! - 1. !
! - 1. !
!   4. !
! - 1. !
! - 1. !
!   4. !

index  =

!   1.   1. !
!   1.   2. !
!   2.   1. !
!   2.   2. !
!   2.   3. !
!   3.   2. !
!   3.   3. !
!   3.   4. !
!   4.   3. !
!   4.   4. !
Next, we put together the sparse matrix using the matrix index and the vector val:
-->As = sparse(index,val)
As  =

(    4,    4) sparse matrix

(    1,    1)        4.
(    1,    2)      - 1.
(    2,    1)      - 1.
(    2,    2)        4.
(    2,    3)      - 1.
(    3,    2)      - 1.
(    3,    3)        4.
(    3,    4)      - 1.
(    4,    3)      - 1.
(    4,    4)        4.
The full form of the matrix can be verified by using function full:

-->full(As)
ans  =

!   4.  - 1.    0.    0. !
! - 1.    4.  - 1.    0. !
!   0.  - 1.    4.  - 1. !
!   0.    0.  - 1.    4. !
For the right-hand side vector previously defined we can use function lusolve to obtain the
solution of the tri-diagonal system of linear equations:
-->x = lusolve(As,b)
x  =

!   44.976077 !
!   29.904306 !
!   54.641148 !
!   38.660287 !
.
.
.
an1x1 + an2x2 + an3x3 + ... + an,n-1xn-1 + annxn = bn,

can be rewritten as

x1 = (b1 - a12x2 - a13x3 - ... - a1nxn)/a11,
x2 = (b2 - a21x1 - a23x3 - ... - a2nxn)/a22,
.
.
.
xn = (bn - an1x1 - an2x2 - ... - an,n-1xn-1)/ann.
function [x] = Nextx(x,R,A)
//Updates the solution x of the system A*x = b by adding
//the correction R(i)/A(i,i), where R is the residual vector
[n m] = size(A);
for i = 1:n
    x(i,1) = x(i,1) + R(i,1)/A(i,i)
end;
x
//end function Nextx
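A possible way to use function Nextx is sketched below; the matrix, right-hand side, and iteration count are hypothetical, chosen only to illustrate the residual-correction (Jacobi-type) iteration:

```scilab
// Hypothetical driver for Nextx: repeated residual corrections
A = [4 -1 0; -1 4 -1; 0 -1 4];   // diagonally dominant matrix
b = [100; 200; 100];
x = zeros(3,1);                  // initial guess
for k = 1:25
    R = b - A*x;                 // residual of the current estimate
    x = Nextx(x,R,A);            // correct x using R(i)/A(i,i)
end;
```

The iteration converges when the matrix is diagonally dominant, as in this example; for a general matrix convergence is not guaranteed.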