Dissimilarities in Matrix Algebra

The document discusses the differences between matrix algebra and elementary algebra, highlighting key properties such as non-commutativity and the existence of matrices that defy typical algebraic rules. It outlines several 'Matrix Oddities' that demonstrate these differences, including cases where the product of matrices can be zero despite both matrices being non-zero. Additionally, it includes exercises to explore these concepts further.

5.4 Matrix Oddities

5.4.1 Dissimilarities with elementary algebra
We have seen that matrix algebra is similar in many ways to elementary algebra. Indeed, if we want to solve the matrix equation AX = B for the unknown X, we imitate the procedure used in elementary algebra for solving the equation ax = b. One assumption we need is that A is a square matrix that has an inverse. Notice how exactly the same properties are used in the following detailed solutions of both equations.
Table 5.4.1.

| Equation in the algebra of real numbers | Property used        | Equation in matrix algebra   |
|-----------------------------------------|----------------------|------------------------------|
| ax = b                                  |                      | AX = B                       |
| a⁻¹(ax) = a⁻¹b if a ≠ 0                 |                      | A⁻¹(AX) = A⁻¹B if A⁻¹ exists |
| (a⁻¹a)x = a⁻¹b                          | Associative Property | (A⁻¹A)X = A⁻¹B               |
| 1x = a⁻¹b                               | Inverse Property     | IX = A⁻¹B                    |
| x = a⁻¹b                                | Identity Property    | X = A⁻¹B                     |

Certainly the solution process for solving AX = B is the same as that of solving ax = b.

The solution of xa = b is x = ba⁻¹ = a⁻¹b. In fact, we usually write the solution of both equations as x = b/a. In matrix algebra, the solution of XA = B is X = BA⁻¹, which is not necessarily equal to A⁻¹B. So in matrix algebra, since the commutative law (under multiplication) is not true, we have to be more careful in the methods we use to solve equations.
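The distinction can be seen numerically. Below is a minimal sketch in pure Python with 2 × 2 matrices as nested lists; the particular A and B are arbitrary invertible examples chosen for illustration, not matrices from the text.

```python
# Sketch: for XA = B the solution is X = B A^-1, and A^-1 B is
# generally a different matrix that does NOT solve XA = B.

def matmul(P, Q):
    """Product of two 2x2 matrices."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inverse(P):
    """Inverse of a 2x2 matrix via the adjugate formula (det assumed nonzero)."""
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    return [[ P[1][1] / det, -P[0][1] / det],
            [-P[1][0] / det,  P[0][0] / det]]

A = [[2, 1], [1, 1]]
B = [[1, 0], [1, 1]]

X = matmul(B, inverse(A))       # X = B A^-1 solves XA = B ...
print(matmul(X, A) == B)        # ... as multiplying back by A confirms
print(matmul(inverse(A), B))    # A^-1 B is a different matrix entirely
```

The order of the factors matters precisely because matrix multiplication is not commutative.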

It is clear from the above that if we wrote the solution of AX = B as X = B/A, we would not know how to interpret B/A. Does it mean A⁻¹B or BA⁻¹? Because of this, A⁻¹ is never written as I/A.

Observation 5.4.2. Matrix Oddities. Some of the main dissimilarities between matrix algebra and elementary algebra are that in matrix algebra:

1. AB may be different from BA.

2. There exist matrices A and B such that AB = 0 (the zero matrix), and yet A ≠ 0 and B ≠ 0.

3. There exist matrices A where A ≠ 0, and yet A² = 0.

4. There exist matrices A where A² = A with A ≠ I and A ≠ 0.

5. There exist matrices A where A² = I, where A ≠ I and A ≠ −I.
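Each oddity can be witnessed by small concrete matrices. The sketch below, in pure Python, shows one possible set of 2 × 2 witnesses (these are my own choices; many others work):

```python
# Concrete 2x2 witnesses for the five Matrix Oddities.

def matmul(P, Q):
    """Product of two 2x2 matrices."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
Z = [[0, 0], [0, 0]]                   # the zero matrix

A = [[1, 1], [0, 0]]
B = [[1, 0], [1, 0]]
print(matmul(A, B) != matmul(B, A))    # oddity 1: AB != BA

C = [[0, 1], [0, 0]]
D = [[1, 0], [0, 0]]
print(matmul(C, D) == Z)               # oddity 2: CD = 0, yet C != 0, D != 0
print(matmul(C, C) == Z)               # oddity 3: C^2 = 0, yet C != 0
print(matmul(D, D) == D)               # oddity 4: D^2 = D, D != I, D != 0

E = [[0, 1], [1, 0]]
print(matmul(E, E) == I)               # oddity 5: E^2 = I, E != I, E != -I
```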

5.4.2 Exercises

1. Discuss each of the “Matrix Oddities” with respect to elementary algebra.

2. Determine 2 × 2 matrices which show that each of the “Matrix Oddities” is true.

3. Prove or disprove the following implications.

a. A² = A and det A ≠ 0 ⇒ A = I.

b. A² = I and det A ≠ 0 ⇒ A = I or A = −I.

4. Let Mₙ×ₙ(ℝ) be the set of real n × n matrices. Let P ⊆ Mₙ×ₙ(ℝ) be the subset of matrices defined by A ∈ P if and only if A² = A. Let Q ⊆ P be defined by A ∈ Q if and only if det A ≠ 0.

a. Determine the cardinality of Q.

b. Consider the special case n = 2 and prove that a sufficient condition for A ∈ P ⊆ M₂×₂(ℝ) is that A has a zero determinant (i.e., A is singular) and tr(A) = 1, where tr(A) = a₁₁ + a₂₂ is the sum of the main diagonal elements of A.

c. Is the condition of part b a necessary condition?
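The sufficient condition in part b can be checked numerically before proving it: for 2 × 2 matrices the Cayley-Hamilton identity gives A² = tr(A)·A − det(A)·I, so det A = 0 and tr A = 1 force A² = A. A sketch in pure Python, using exact rational arithmetic; the helper singular_trace_one and the sample values are my own construction, not from the text:

```python
# Check: every 2x2 matrix with tr(A) = 1 and det(A) = 0 is idempotent.
from fractions import Fraction as F

def matmul(P, Q):
    """Product of two 2x2 matrices."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def singular_trace_one(t, b):
    """A 2x2 matrix with tr(A) = 1 and det(A) = 0 (b must be nonzero)."""
    c = t * (1 - t) / b              # forces det = t(1 - t) - b*c = 0
    return [[t, b], [c, 1 - t]]

for t, b in [(F(1), F(2)), (F(3), F(5)), (F(-1), F(7))]:
    A = singular_trace_one(t, b)
    assert matmul(A, A) == A         # idempotent in every sampled case
print("every singular, trace-1 sample is idempotent")
```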

5. Write each of the following systems in the form AX = B, and then solve the systems using matrices.

a. 2x₁ + x₂ = 3
   x₁ − x₂ = 1

b. 2x₁ − x₂ = 4
   x₁ − x₂ = 0

c. 2x₁ + x₂ = 1
   x₁ − x₂ = 1

d. 2x₁ + x₂ = 1
   x₁ − x₂ = −1

e. 3x₁ + 2x₂ = 1
   6x₁ + 4x₂ = −1
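As a sketch of part a, the system 2x₁ + x₂ = 3, x₁ − x₂ = 1 becomes AX = B with A = [[2, 1], [1, −1]] and B = [3, 1], and X = A⁻¹B gives the solution. The helper solve2 below is a hypothetical name for illustration:

```python
# Solve a 2x2 system AX = B by multiplying B by the inverse of A.

def solve2(A, B):
    """Return X = A^-1 B for a 2x2 matrix A (assumes det A != 0)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x1 = ( A[1][1] * B[0] - A[0][1] * B[1]) / det   # rows of A^-1 times B
    x2 = (-A[1][0] * B[0] + A[0][0] * B[1]) / det
    return [x1, x2]

print(solve2([[2, 1], [1, -1]], [3, 1]))   # x1 = 4/3, x2 = 1/3
```

Note that this method does not apply to part e, where det A = 3·4 − 2·6 = 0, so A⁻¹ does not exist; that system is in fact inconsistent.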

6. For those who know calculus:

a. Write the series expansion for e^a centered around a = 0.

b. Use the idea of part (a) to write what would be a plausible definition of e^A, where A is an n × n matrix.

c. If A = [[1, 1], [0, 0]] and B = [[0, −1], [0, 0]], use the series in part (b) to show that e^A = [[e, e − 1], [0, 1]] and e^B = [[1, −1], [0, 1]].

d. Show that e^A e^B ≠ e^B e^A.

e. Show that e^(A+B) = [[e, 0], [0, 1]].

f. Is e^A e^B = e^(A+B)?
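The series definition in part (b) can be explored numerically. Below is a sketch in pure Python that truncates the series I + A + A²/2! + A³/3! + ⋯; the helper name expm and the number of terms are my own choices:

```python
# Truncated matrix exponential for 2x2 matrices, applied to exercise 6.
import math

def matmul(P, Q):
    """Product of two 2x2 matrices."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=20):
    """Approximate e^A by the partial sum of A^k / k! for k < terms."""
    result = [[1.0, 0.0], [0.0, 1.0]]     # running sum, starts at I
    power = [[1.0, 0.0], [0.0, 1.0]]      # running power A^k
    for k in range(1, terms):
        power = matmul(power, A)
        result = [[result[i][j] + power[i][j] / math.factorial(k)
                   for j in range(2)] for i in range(2)]
    return result

A = [[1, 1], [0, 0]]
B = [[0, -1], [0, 0]]
print(expm(A))                        # approx [[e, e-1], [0, 1]]
print(expm(B))                        # approx [[1, -1], [0, 1]]
print(expm([[1, 0], [0, 0]]))         # A + B = [[1, 0], [0, 0]], so e^(A+B)
print(matmul(expm(A), expm(B)))       # differs from e^(A+B): the answer to (f)
```

Because A and B here do not commute, e^A e^B, e^B e^A, and e^(A+B) are three different matrices.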

