DETERMINANTS
MIRUTHULA L
CLASS: XII ‘A’
ROLL NO: 21
INTRODUCTION
In mathematics, the determinant is a scalar value that is a certain function of
the entries of a square matrix. The determinant of a matrix A is commonly
denoted det(A), det A, or |A|. Its value characterizes some properties of the
matrix and the linear map represented, on a given basis, by the matrix. In
particular, the determinant is nonzero if and only if the matrix is invertible and
the corresponding linear map is an isomorphism. The determinant of a product
of matrices is the product of their determinants. The determinant of
an n × n matrix can be defined in several equivalent ways, the most common
being the Leibniz formula, which expresses the determinant as a sum of n!
(the factorial of n) signed products of matrix entries. It can be computed by
the Laplace expansion, which expresses the determinant as a linear
combination of determinants of submatrices, or with Gaussian elimination,
which expresses the determinant as the product of the diagonal entries of
a diagonal matrix that is obtained by a succession of elementary row operations.
Determinants can also be defined by some of their properties. Namely, the
determinant is the unique function defined on the n × n matrices that has the
four following properties:
1. The determinant of the identity matrix is 1.
2. The exchange of two rows multiplies the determinant by −1.
3. Multiplying a row by a number multiplies the determinant by this number.
4. Adding to a row a multiple of another row does not change the determinant.
The above properties relating to rows (properties 2–4) may be replaced by the
corresponding statements with respect to columns.
Determinants occur throughout mathematics. For example, a matrix is often
used to represent the coefficients in a system of linear equations, and
determinants can be used to solve these equations (Cramer's rule), although
other methods of solution are computationally much more efficient.
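To make Cramer's rule concrete, here is a minimal Python sketch for a 2 × 2 system (the function and variable names are our own, chosen for illustration, not part of the original text): each unknown is a ratio of determinants, the numerator being the coefficient determinant with one column replaced by the constants.

```python
# Cramer's rule for the 2x2 system: a1*x + b1*y = c1, a2*x + b2*y = c2.
# Illustrative sketch; helper name solve_2x2 is hypothetical.
def solve_2x2(a1, b1, c1, a2, b2, c2):
    det = a1 * b2 - b1 * a2          # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("no unique solution: determinant is zero")
    # Replace each coefficient column in turn by the constants column.
    x = (c1 * b2 - b1 * c2) / det
    y = (a1 * c2 - c1 * a2) / det
    return x, y

# 2x + y = 5, x - y = 1  ->  x = 2, y = 1
print(solve_2x2(2, 1, 5, 1, -1, 1))  # (2.0, 1.0)
```

As the text notes, this is instructive rather than efficient: Gaussian elimination solves the same system with far fewer operations for large n.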
Determinants are used for defining the characteristic polynomial of a square
matrix, whose roots are the eigenvalues. This is used in calculus with exterior
differential forms and the Jacobian determinant, in particular for changes of
variables in multiple integrals.
HISTORY
Historically, determinants were used long before matrices: A determinant was
originally defined as a property of a system of linear equations. The determinant
"determines" whether the system has a unique solution (which occurs precisely
if the determinant is non-zero). Determinants proper originated separately from
the work of Seki Takakazu in 1683 in Japan and, in parallel, of Leibniz in
1693. Cramer (1750) stated Cramer's rule without proof.[27] Both Cramer and
Bézout (1779) were led to determinants by the question of plane
curves passing through a given set of points.
Vandermonde (1771) first recognized determinants as independent functions.[24]
Laplace (1772) gave the general method of expanding a determinant in terms
of its complementary minors: Vandermonde had already given a special
case. Immediately following, Lagrange (1773) treated determinants of the
second and third order and applied them to questions of elimination theory; he
proved many special cases of general identities.
Gauss (1801) made the next advance. Like Lagrange, he made much use of
determinants in the theory of numbers. He introduced the word "determinant"
(Laplace had used "resultant"), though not in the present signification, but rather
as applied to the discriminant of a quantic. Gauss also arrived at the notion of
reciprocal (inverse) determinants, and came very near the multiplication
theorem.
Jacobi (1841) used the functional determinant which Sylvester later called
the Jacobian. In his memoirs in Crelle's Journal for 1841 he specially treats this
subject, as well as the class of alternating functions which Sylvester has
called alternants. About the time of Jacobi's last memoirs, Sylvester (1839)
and Cayley began their work. Cayley (1841) introduced the modern notation for
the determinant using vertical bars.
The study of special forms of determinants has been the natural result of the
completion of the general theory. Axisymmetric determinants have been studied
by Lebesgue, Hesse, and Sylvester; persymmetric determinants by Sylvester
and Hankel; circulants by Catalan, Spottiswoode, Glaisher, and Scott; skew
determinants and Pfaffians, in connection with the theory of orthogonal
transformation, by Cayley; continuants by Sylvester; Wronskians (so called
by Muir) by Christoffel and Frobenius; compound determinants by Sylvester,
Reiss, and Picquet; Jacobians and Hessians by Sylvester; and symmetric gauche
determinants by Trudi. Of the textbooks on the subject Spottiswoode's was the
first. In America, Hanus (1886), Weld (1893), and Muir/Metzler (1933)
published treatises.
GEOMETRICAL MEANING
If the matrix entries are real numbers, a 2 × 2 matrix A with rows (a, b) and
(c, d) can be used to represent two linear maps: one that maps the standard basis
vectors to the rows of A, and one that maps them to the columns of A. In either
case, the images of the basis vectors form a parallelogram that represents the
image of the unit square under the mapping. The parallelogram defined by the
rows of this matrix is the
one with vertices at (0, 0), (a, b), (a + c, b + d), and (c, d), as shown in the
accompanying diagram.
The absolute value of ad − bc is the area of the parallelogram, and thus
represents the scale factor by which areas are transformed by A. (The
parallelogram formed by the columns of A is in general a different
parallelogram, but since the determinant is symmetric with respect to rows and
columns, the area will be the same.)
The determinant itself (the absolute value together with the sign) is
the oriented area of the parallelogram. The oriented area is the same as the
usual area, except that it is negative when the angle from the first to the second
vector defining the parallelogram turns in a clockwise direction (which is
opposite to the direction one would get for the identity matrix).
To show that ad − bc is the signed area, one may consider a matrix containing
two vectors u ≡ (a, b) and v ≡ (c, d) representing the parallelogram's sides. The
signed area can be expressed as |u| |v| sin θ for the angle θ between the vectors,
which is simply base times height, the length of one vector times the
perpendicular component of the other. Due to the sine this already is the signed
area, yet it may be expressed more conveniently using the cosine of the
complementary angle to a perpendicular vector, e.g. u⊥ = (−b, a), so that |u⊥| |v|
cos θ′ becomes the signed area in question, which can be determined by the
pattern of the scalar product to be equal to ad − bc according to the following
equations:
|u⊥| |v| cos θ′ = u⊥ · v = (−b, a) · (c, d) = (−b)c + ad = ad − bc
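The identity |u| |v| sin θ = ad − bc can be checked numerically; a minimal Python sketch (the helper name signed_area is our own, not from the source):

```python
import math

# Signed area of the parallelogram spanned by u = (a, b) and v = (c, d):
# the 2x2 determinant ad - bc.
def signed_area(u, v):
    return u[0] * v[1] - u[1] * v[0]

u, v = (3.0, 1.0), (1.0, 2.0)
theta = math.atan2(v[1], v[0]) - math.atan2(u[1], u[0])  # angle from u to v
lhs = math.hypot(*u) * math.hypot(*v) * math.sin(theta)  # |u| |v| sin(theta)
print(signed_area(u, v), lhs)  # both 5.0 (up to floating-point rounding)
```

Swapping u and v reverses the turning direction and therefore the sign, matching the oriented-area description above.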
Thus the determinant gives the scaling factor and the orientation induced by the
mapping represented by A.
The object known as the bivector is related to these ideas. In 2D, it can be
interpreted as an oriented plane segment formed by imagining two vectors, each
with origin (0, 0), and coordinates (a, b) and (c, d). The bivector magnitude
(denoted by (a, b) ∧ (c, d)) is the signed area, which is also the
determinant ad − bc.
DEFINITION
Let A be a square matrix with n rows and n columns, so that it can be written as
A = (a_ij), where a_ij denotes the entry in the i-th row and j-th column
(1 ≤ i, j ≤ n). The entries a_ij are, for many purposes, real or complex numbers. As discussed
below, the determinant is also defined for matrices whose entries are in
a commutative ring.
The determinant of A is denoted by det(A), or it can be denoted directly in terms
of the matrix entries by writing enclosing bars instead of brackets, as in |A|.
There are various equivalent ways to define the determinant of a square
matrix A, i.e. one with the same number of rows and columns: the determinant
can be defined via the Leibniz formula, an explicit formula involving sums of
products of certain entries of the matrix. The determinant can also be
characterized as the unique function depending on the entries of the matrix
satisfying certain properties. This approach can also be used to compute
determinants by simplifying the matrices in question.
The Leibniz formula for the determinant of a 3 × 3 matrix A = (a_ij) is the following:
det(A) = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a13 a22 a31 − a12 a21 a33 − a11 a23 a32
In this expression, each term has one factor from each row, all in different
columns, arranged in increasing row order. The signs are determined by how
many transpositions of factors are necessary to arrange the factors in increasing
order of their columns: positive for an even number of transpositions and
negative for an odd number.
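The Leibniz formula can be implemented directly by summing over all permutations; a minimal Python sketch (function names are our own, and the n! terms make this practical only for small matrices):

```python
from itertools import permutations

# Leibniz formula: det(A) = sum over permutations p of sign(p) * prod_i A[i][p(i)].
def perm_sign(p):
    # Sign via counting inversions: +1 for an even count, -1 for an odd count.
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det_leibniz(A):
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = perm_sign(p)
        for i in range(n):
            term *= A[i][p[i]]  # one factor from each row, columns given by p
        total += term
    return total

print(det_leibniz([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```

Each permutation contributes one signed product, matching the description of the 3 × 3 formula above (3! = 6 terms).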
PROPERTIES OF DETERMINANTS
i) Characterisation of the determinant
The determinant can be characterized by the following three key properties. To
state these, it is convenient to regard an n × n matrix A as being composed of
its n columns, so denoted as A = (a_1, ..., a_n),
where the column vector a_i (for each i) is composed of the entries of the matrix
in the i-th column. The three properties are:
The determinant is unital: det(I) = 1, where I is the identity matrix.
The determinant is multilinear: if the j-th column of a matrix A is written as
a linear combination a_j = r · v + w of two column vectors v and w and a
number r, then the determinant of A is expressible as a similar linear
combination:
det(A) = r · det(a_1, ..., v, ..., a_n) + det(a_1, ..., w, ..., a_n)
The determinant is alternating: whenever two columns of a matrix are identical,
its determinant is 0.
Other consequences:
The determinant is a homogeneous function, i.e., det(c · A) = c^n · det(A) for
an n × n matrix A and a scalar c.
Interchanging any pair of columns of a matrix multiplies its determinant
by −1. This follows from the determinant being multilinear and
alternating (properties 2 and 3 above).
If some column can be expressed as a linear combination of
the other columns (i.e. the columns of the matrix form a linearly
dependent set), the determinant is 0. As a special case, this includes: if
some column is such that all its entries are zero, then the determinant of
that matrix is 0.
Adding a scalar multiple of one column to another column does not
change the value of the determinant. This is a consequence of
multilinearity and of the determinant being alternating: by multilinearity the
determinant changes by a multiple of the determinant of a matrix with two
equal columns, and that determinant is 0, since the determinant is alternating.
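These column-operation consequences can be checked numerically in the 2 × 2 case; a minimal Python sketch (det2 is our own helper name, treating a 2 × 2 determinant as a function of its two columns):

```python
# det of the 2x2 matrix with columns u = (a, b) and v = (c, d): ad - bc.
def det2(u, v):
    return u[0] * v[1] - u[1] * v[0]

u, v = (2, 1), (5, 3)
assert det2(v, u) == -det2(u, v)                         # swap flips the sign
assert det2((3 * u[0], 3 * u[1]), v) == 3 * det2(u, v)   # scaling one column
w = (u[0] + 4 * v[0], u[1] + 4 * v[1])
assert det2(w, v) == det2(u, v)                          # add multiple of v to u
assert det2(u, u) == 0                                   # equal columns give 0
print("all column-operation properties hold")
```

The same identities hold for every size n; the 2 × 2 case just keeps the check short.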
These characterizing properties and their consequences listed above are both
theoretically significant and useful for computing determinants of concrete
matrices. In fact, Gaussian elimination can be applied to bring any matrix into
upper triangular form, and the steps in this algorithm affect the determinant in
a controlled way: exchanging two rows multiplies it by −1, scaling a row scales
it by the same factor, and adding a multiple of one row to another leaves it
unchanged. The determinant of the resulting triangular matrix is the product of
its diagonal entries, and combining these equalities gives the determinant of
the original matrix.
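The elimination method just described can be sketched in Python as follows (a minimal sketch with our own function name, using partial pivoting for robustness; not the project's own algorithm):

```python
# Determinant by Gaussian elimination: reduce to upper triangular form,
# multiply the diagonal, and flip the sign once per row exchange.
def det_gauss(A):
    A = [row[:] for row in A]  # work on a copy
    n = len(A)
    det = 1.0
    for k in range(n):
        # Choose the largest pivot in column k to avoid dividing by zero.
        pivot = max(range(k, n), key=lambda r: abs(A[r][k]))
        if A[pivot][k] == 0:
            return 0.0  # no nonzero pivot: the matrix is singular
        if pivot != k:
            A[k], A[pivot] = A[pivot], A[k]
            det = -det  # a row exchange multiplies the determinant by -1
        for r in range(k + 1, n):
            factor = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= factor * A[k][c]  # row operation: det unchanged
        det *= A[k][k]  # accumulate the diagonal of the triangular form
    return det

print(det_gauss([[-2, -1, 2], [2, 1, 4], [-3, 3, -1]]))  # 54.0 (up to rounding)
```

This runs in about n³ operations, which is why elimination is preferred over the n!-term Leibniz formula for anything but tiny matrices.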
ii) Multiplicativity and matrix groups
The determinant is a multiplicative map, i.e., for square matrices A and B of
equal size, the determinant of a matrix product equals the product of their
determinants: det(AB) = det(A) · det(B).
A matrix A with entries in a field is invertible precisely if its determinant is
nonzero. This follows from the multiplicativity of the determinant and the
formula for the inverse involving the adjugate matrix mentioned below. In this
event, the determinant of the inverse matrix is given by det(A⁻¹) = 1/det(A).
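Both identities can be verified on a small example; a minimal Python sketch for 2 × 2 matrices (det2x2 and matmul2 are our own helper names):

```python
# Checks det(AB) = det(A) * det(B) and det(A^-1) = 1/det(A) for 2x2 matrices.
def det2x2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 6]]
assert det2x2(matmul2(A, B)) == det2x2(A) * det2x2(B)

d = det2x2(A)  # -2, nonzero, so A is invertible
A_inv = [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]
assert det2x2(A_inv) == 1 / d
print("multiplicativity and inverse-determinant identities hold")
```

The inverse formula used here, A⁻¹ = adj(A)/det(A), is exactly the adjugate construction discussed in the next subsection.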
iii) Adjugate matrix
The adjugate matrix adj(A) is the transpose of the matrix of the cofactors, that
is, its (i, j) entry is the cofactor (−1)^(j+i) · M_ji, where M_ji is the minor
obtained by deleting the j-th row and the i-th column of A.
For every matrix, one has adj(A) · A = A · adj(A) = det(A) · I.
Thus the adjugate matrix can be used for expressing the inverse of a nonsingular
matrix: A⁻¹ = adj(A) / det(A).
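The adjugate construction can be sketched for a 3 × 3 matrix as follows (a minimal Python sketch; the helper names are our own, and the check A · adj(A) = det(A) · I uses an example matrix with determinant 1):

```python
# Adjugate of a 3x3 matrix: transpose of the cofactor matrix.
def det2(a, b, c, d):
    return a * d - b * c

def adjugate3(A):
    C = [[0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            rows = [r for r in range(3) if r != i]
            cols = [c for c in range(3) if c != j]
            minor = det2(A[rows[0]][cols[0]], A[rows[0]][cols[1]],
                         A[rows[1]][cols[0]], A[rows[1]][cols[1]])
            C[i][j] = (-1) ** (i + j) * minor  # cofactor of entry (i, j)
    return [[C[j][i] for j in range(3)] for i in range(3)]  # transpose

A = [[1, 2, 3], [0, 1, 4], [5, 6, 0]]
adj = adjugate3(A)
# Verify A * adj(A) = det(A) * I; here det(A) = 1, so the product is I.
prod = [[sum(A[i][k] * adj[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
print(prod)  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

Since det(A) = 1 here, adj(A) is exactly A⁻¹, illustrating the inverse formula above.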
iv) Laplace expansion
The Laplace expansion expresses the determinant of A as a linear combination
of its minors: for any fixed row i,
det(A) = Σ_j (−1)^(i+j) · a_ij · M_ij (sum over j = 1, ..., n),
where M_ij is the determinant of the submatrix obtained by deleting the i-th
row and the j-th column of A; this is called the Laplace expansion along the
i-th row. For example, for a 3 × 3 matrix the Laplace expansion along the first
row (i = 1) gives the following formula:
det(A) = a11 · M11 − a12 · M12 + a13 · M13
Unwinding the determinants of these 2 × 2 minors gives back the Leibniz
formula mentioned above. Similarly, the Laplace expansion along the j-th
column is the equality det(A) = Σ_i (−1)^(i+j) · a_ij · M_ij (sum over i = 1, ..., n).
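The row expansion translates directly into a recursive procedure; a minimal Python sketch (our own function name; like the Leibniz formula, the cost grows factorially, so this is for small matrices only):

```python
# Recursive Laplace expansion along the first row:
# det(A) = sum_j (-1)^j * A[0][j] * det(minor deleting row 0 and column j).
def det_laplace(A):
    n = len(A)
    if n == 1:
        return A[0][0]  # base case: 1x1 determinant
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 0, col j
        total += (-1) ** j * A[0][j] * det_laplace(minor)
    return total

print(det_laplace([[2, 1, 0], [1, 3, 2], [0, 1, 1]]))  # 2
```

Wait for the recursion to bottom out at 1 × 1 blocks and the signed products that accumulate are exactly the Leibniz terms, as the text notes.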
CONCLUSION
From this project, I learned the various methods of approaching a determinant,
its significance in mathematics, its evolution through the work of various
mathematicians, and a few of the many properties of a determinant. It is the
historical precursor of the concept of matrices, another interconnected,
essential topic in the field of mathematics.
Determinants are also used in plotting graphs, statistics, scientific studies, and
research in different areas, where they help represent data such as population
figures and death rates.
From Laplace to Gauss, every law or property given by each of these
mathematicians has shaped the path of determinants to what the subject is
today, and it stands out as one of the most important topics to be covered in
school as well as university level mathematics.
BIBLIOGRAPHY
Websites:
[Link]
[Link]
[Link]
Books:
Pure Mathematics: Determinants and Matrices, Volume 9, by Anthony
Nicolaides
Determinants and Matrices, by A. C. Aitken