SUMS87 Algebras and Representation Theory, Karin Erdmann, Thorsten Holm (2018) PDF
Karin Erdmann
Thorsten Holm
Algebras and
Representation
Theory
Springer Undergraduate Mathematics Series
Advisory Board
M.A.J. Chaplain, University of St. Andrews
A. MacIntyre, Queen Mary University of London
S. Scott, King’s College London
N. Snashall, University of Leicester
E. Süli, University of Oxford
M.R. Tehranchi, University of Cambridge
J.F. Toland, University of Bath
More information about this series at http://www.springer.com/series/3423
Karin Erdmann • Thorsten Holm
Karin Erdmann
Mathematical Institute
University of Oxford
Oxford, United Kingdom

Thorsten Holm
Fakultät für Mathematik und Physik
Institut für Algebra, Zahlentheorie und Diskrete Mathematik
Leibniz Universität Hannover
Hannover, Germany
Mathematics Subject Classification (2010): 16-XX, 16G10, 16G20, 16D10, 16D60, 16G60, 20CXX
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Contents

Introduction   v

1 Algebras   1
  1.1 Definition and Examples   1
    1.1.1 Division Algebras   4
    1.1.2 Group Algebras   6
    1.1.3 Path Algebras of Quivers   6
  1.2 Subalgebras, Ideals and Factor Algebras   9
  1.3 Algebra Homomorphisms   13
  1.4 Some Algebras of Small Dimensions   19
2 Modules and Representations   29
  2.1 Definition and Examples   29
  2.2 Modules for Polynomial Algebras   33
  2.3 Submodules and Factor Modules   35
  2.4 Module Homomorphisms   40
  2.5 Representations of Algebras   47
    2.5.1 Representations of Groups vs. Modules for Group Algebras   51
    2.5.2 Representations of Quivers vs. Modules for Path Algebras   53
3 Simple Modules and the Jordan–Hölder Theorem   61
  3.1 Simple Modules   61
  3.2 Composition Series   63
  3.3 Modules of Finite Length   69
  3.4 Finding All Simple Modules   71
    3.4.1 Simple Modules for Factor Algebras of Polynomial Algebras   73
    3.4.2 Simple Modules for Path Algebras   75
    3.4.3 Simple Modules for Direct Products   77
  3.5 Schur’s Lemma and Applications   79

Index   297
Chapter 1
Algebras
Axiom (R7) also implies that 1R is not the zero element. In particular, a ring has
at least two elements.
We list some common examples of rings.
(1) The integers Z form a ring. Every field is also a ring, such as the rational
numbers Q, the real numbers R, the complex numbers C, or the residue classes
Zp of integers modulo p where p is a prime number.
(2) The n × n-matrices Mn (K), with entries in a field K, form a ring with respect
to matrix addition and matrix multiplication.
(3) The ring K[X] of polynomials over a field K where X is a variable. Similarly,
the ring of polynomials in two or more variables, such as K[X, Y ].
Examples (2) and (3) are not just rings but also vector spaces. There are many more
rings which are vector spaces, and this has led to the definition of an algebra.
Definition 1.1.
(i) An algebra A over a field K (or a K-algebra) is a ring, with addition and
multiplication
which is also a vector space over K, with the above addition and with scalar
multiplication
(λ, a) → λ · a for λ ∈ K, a ∈ A,
EndK (V ) := {α : V → V | α is K-linear}.
In Example 1.3 where A = C and K = R, taking the basis {1C , i} one gets the
usual multiplication rules for complex numbers.
For the n × n-matrices Mn (K) in Example 1.3, it is convenient to take the basis
of matrix units {Eij | 1 ≤ i, j ≤ n} since products of two such matrices are either
zero, or some other matrix of the same form, see Exercise 1.10.
Given some algebras, there are several general methods to construct new ones. We
describe two such methods.
Definition 1.5. If A1 , . . . , An are K-algebras, their direct product is defined to be
the algebra with underlying space
A1 × . . . × An := {(a1 , . . . , an ) | ai ∈ Ai for i = 1, . . . , n}
a ∗ b := ba for a, b ∈ A.
A commutative ring is a field precisely when every non-zero element has an inverse
with respect to multiplication. More generally, there are algebras in which every
non-zero element has an inverse, and they need not be commutative.
Definition 1.7. An algebra A (over a field K) is called a division algebra if every
non-zero element a ∈ A is invertible, that is, there exists an element b ∈ A such
that ab = 1A = ba. If so, we write b = a −1 . Note that if A is finite-dimensional
and ab = 1A then it follows that ba = 1A ; see Exercise 1.8.
Division algebras occur naturally, we will see this later. Clearly, every field is a
division algebra. There is a famous example of a division algebra which is not a
field, this was discovered by Hamilton.
i² = j² = k² = −1

and

ij = k,  ji = −k,  jk = i,  kj = −i,  ki = j,  ik = −j
and extending to linear combinations. That is, an arbitrary element of H has the
form a + bi + cj + dk with a, b, c, d ∈ R, and the product of two elements in H is
given by
(a1 + b1 i + c1 j + d1 k) · (a2 + b2 i + c2 j + d2 k) =
(a1 a2 − b1 b2 − c1 c2 − d1 d2 ) + (a1 b2 + b1 a2 + c1 d2 − d1 c2 )i
+ (a1 c2 − b1 d2 + c1 a2 + d1 b2 )j + (a1 d2 + b1 c2 − c1 b2 + d1 a2 )k.
ūu = (a² + b² + c² + d²) · 1 = uū.

This is non-zero for any u ≠ 0, and from this, one can write down the inverse of any
non-zero element u.
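The multiplication formula and the norm identity above can be checked mechanically. A small sketch in Python (the 4-tuple encoding of a + bi + cj + dk as (a, b, c, d) is our own convention, not notation from the text):

```python
# Quaternion arithmetic following the product formula of Example 1.8.
def qmul(u, v):
    a1, b1, c1, d1 = u
    a2, b2, c2, d2 = v
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def conj(u):                      # u-bar = a - bi - cj - dk
    a, b, c, d = u
    return (a, -b, -c, -d)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)   # ij = k, ji = -k
u = (1.0, 2.0, -3.0, 0.5)
# u-bar * u = (a^2 + b^2 + c^2 + d^2) * 1:
assert qmul(conj(u), u) == (1 + 4 + 9 + 0.25, 0.0, 0.0, 0.0)
```

Since ūu is a positive real multiple of 1 whenever u ≠ 0, the inverse is u⁻¹ = (a² + b² + c² + d²)⁻¹ ū, as stated in the text.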
Remark 1.9. We use the notation i, and this is justified: The subspace
{a + bi | a, b ∈ R} of H really is C, indeed from the multiplication rules in H
we get
i(j · 1) = ij = k ≠ −k = ji = j(i · 1).
The subset {±1, ±i, ±j, ±k} of H forms a group under multiplication, this is known
as the quaternion group.
Let G be a group and K a field. We define a vector space over K which has basis
the set {g | g ∈ G}, and we call this vector space KG. This space becomes a
K-algebra if one defines the product on the basis by taking the group multiplication,
and extends it to linear combinations. We call this algebra KG the group algebra.
Thus an arbitrary element of KG is a finite linear combination of the form
Σ_{g∈G} αg g with αg ∈ K. We can write down a formula for the product of
two elements, following the recipe in Remark 1.4. Let α = Σ_{g∈G} αg g and
β = Σ_{h∈G} βh h be two elements in KG; then their product has the form

αβ = Σ_{x∈G} ( Σ_{gh=x} αg βh ) x.
Since the multiplication in the group is associative, it follows that the multiplication
in KG is associative. Furthermore, one checks that the multiplication in KG is
distributive. The identity element of the group algebra KG is given by the identity
element of G.
Note that the group algebra KG is finite-dimensional if and only if the group G
is finite, in which case the dimension of KG is equal to the order of the group G.
The group algebra KG is commutative if and only if the group G is abelian.
Example 1.10. Let G be the cyclic group of order 3, generated by y, so that
G = {1G, y, y²} and y³ = 1G. Then we have

(a0 1G + a1 y + a2 y²)(b0 1G + b1 y + b2 y²) = c0 1G + c1 y + c2 y²,

with

c0 = a0 b0 + a1 b2 + a2 b1,   c1 = a0 b1 + a1 b0 + a2 b2,   c2 = a0 b2 + a1 b1 + a2 b0.
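The coefficient formulas of Example 1.10 are an instance of convolution on the cyclic group, and can be spot-checked numerically; a quick sketch (the coefficient-list encoding of Σ ai y^i as [a0, …, a_{n−1}] is our own convention):

```python
# Product in the group algebra K C_n for the cyclic group generated by y, y^n = 1.
def cyclic_product(a, b):
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] += a[i] * b[j]   # y^i * y^j = y^((i+j) mod n)
    return c

a, b = [1, 2, 3], [4, 5, 6]
c = cyclic_product(a, b)
# Compare with the explicit formulas of Example 1.10 (n = 3):
assert c[0] == a[0]*b[0] + a[1]*b[2] + a[2]*b[1] == 31
assert c[1] == a[0]*b[1] + a[1]*b[0] + a[2]*b[2] == 31
assert c[2] == a[0]*b[2] + a[1]*b[1] + a[2]*b[0] == 28
```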
Path algebras of quivers are a class of algebras with an easy multiplication formula,
and they are extremely useful for calculating examples. They also have connections
to other parts of mathematics. The underlying basis of a path algebra is the set of
paths in a finite directed graph. It is customary in representation theory to call such
a graph a quiver. We assume throughout that a quiver has finitely many vertices and
finitely many arrows.
Definition 1.11. A quiver Q is a finite directed graph. We sometimes write
Q = (Q0 , Q1 ), where Q0 is the set of vertices and Q1 is the set of arrows.
We assume that Q0 and Q1 are finite sets. For any arrow α ∈ Q1 we denote by
s(α) ∈ Q0 its starting point and by t (α) ∈ Q0 its end point.
A non-trivial path in Q is a sequence p = αr . . . α2 α1 of arrows αi ∈ Q1 such
that t (αi ) = s(αi+1 ) for all i = 1, . . . , r − 1. Note that our convention is to read
paths from right to left. The number r of arrows is called the length of p, and we
denote by s(p) = s(α1 ) the starting point, and by t (p) = t (αr ) the end point of p.
For each vertex i ∈ Q0 we also need to have a trivial path of length 0, which we
call ei , and we set s(ei ) = i = t (ei ).
We call a path p in Q an oriented cycle if p has positive length and s(p) = t (p).
Definition 1.12. Let K be a field and Q a quiver. The path algebra KQ of the
quiver Q over the field K has underlying vector space with basis given by all paths
in Q.
The multiplication in KQ is defined on the basis by concatenation of paths (if
possible), and extended linearly to linear combinations. More precisely, for two
paths p = αr · · · α1 and q = βs · · · β1 in Q we set

p · q = αr · · · α1 βs · · · β1   if t(βs) = s(α1),   and   p · q = 0   otherwise.

Note that for the trivial paths ei, where i is a vertex in Q, we have that p · ei = p
for i = s(p) and p · ei = 0 for i ≠ s(p); similarly ei · p = p for i = t(p) and 0
otherwise. In particular we have ei · ei = ei.
The multiplication in KQ is associative since the concatenation of paths is
associative, and it is distributive, by definition of products for arbitrary linear
combinations. We claim that the identity element of KQ is given by the sum of
trivial paths, that is
1KQ = Σ_{i∈Q0} ei,

and by distributivity it follows that α · (Σ_{i∈Q0} ei) = α = (Σ_{i∈Q0} ei) · α for
every α ∈ KQ.
Example 1.13.
(1) We consider the quiver Q of the form 1 ←− 2, with a single arrow α from
vertex 2 to vertex 1. The path algebra KQ has dimension 3, the basis consisting
of paths is {e1, e2, α}. The multiplication table
for KQ is given by

  ·  |  e1   α   e2
 ----+--------------
  e1 |  e1   α   0
  α  |  0    0   α
  e2 |  0    0   e2
(2) Let Q be the one-loop quiver with one vertex v and one arrow α with
s(α) = v = t (α), that is
Then the path algebra KQ has as basis the set {1, α, α², α³, . . .}, and it is not
finite-dimensional.
(3) A quiver can have multiple arrows between two vertices. This is the case for the
Kronecker quiver
(4) Examples of quivers where more than two arrows start or end at a vertex are the
three-subspace quiver
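The multiplication table of Example 1.13(1) can be reproduced mechanically from the concatenation rule; a minimal sketch, with basis paths stored as (start, end) pairs (an encoding of our own choosing, not notation from the text):

```python
# Basis paths of KQ for the quiver 1 <--alpha-- 2, stored as (start, end).
paths = {"e1": (1, 1), "e2": (2, 2), "alpha": (2, 1)}   # alpha goes 2 -> 1

def mul(p, q):
    """p.q on basis paths: concatenation if t(q) = s(p), otherwise zero ('0')."""
    if paths[q][1] != paths[p][0]:      # end point of q must be start of p
        return "0"
    if p.startswith("e"):               # trivial paths act as identities
        return q
    if q.startswith("e"):
        return p
    return p + "*" + q                  # genuine concatenation (absent here)

# Reproduce the multiplication table of Example 1.13(1):
assert mul("e1", "alpha") == "alpha" and mul("alpha", "e2") == "alpha"
assert mul("alpha", "e1") == "0" and mul("alpha", "alpha") == "0"
assert mul("e1", "e1") == "e1" and mul("e2", "e2") == "e2"
assert mul("e1", "e2") == "0" == mul("e2", "e1")
```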
(2) The n×n-matrices Mn (Z) ⊆ Mn (R) over the integers are closed under addition
and multiplication, but Mn (Z) is not an R-subalgebra of Mn (R) since it is not
an R-subspace.
(3) The subset { (0 b; 0 0) | b ∈ K } ⊆ T2(K), the matrix written row by row,
is not a K-subalgebra of T2(K) since it does not contain the identity element.
(4) Let A be a K-algebra. For any element a ∈ A define Aa to be the K-span of
{1A , a, a 2 , . . .}. That is, Aa is the space of polynomial expressions in a. This
is a K-subalgebra of A, and it is always commutative. Note that if A is finite-
dimensional then so is Aa .
(5) Let A = A1 × A2 , the direct product of two algebras. Then A1 × {0} is not a
subalgebra of A since it does not contain the identity element of A.
(6) Let H be a subgroup of a group G. Then the group algebra KH is a subalgebra
of the group algebra KG.
(7) Let KQ be the path algebra of the quiver 1 ←− 2 ←− 3, with arrows α : 2 → 1
and β : 3 → 2. We can consider the ‘subquiver’ Q′ given by 1 ←− 2 (the arrow α
alone). The path algebra KQ′ is not a subalgebra of KQ since it does not contain
the identity element 1KQ = e1 + e2 + e3 of KQ.
Exercise 1.4. Verify that the three-subspace algebra from Example 1.16 is a
subalgebra of M4 (K), hence is a K-algebra.
In addition to subalgebras, there are also ideals, and they are needed when one
wants to define factor algebras.
Since akℓ ≠ 0 we conclude that Ers ∈ I for all r, s and hence I = Mn (K).
(5) Consider the K-algebra Tn (K) of upper triangular matrices. The K-subspace
of strict upper triangular matrices I1 := span{Eij | 1 ≤ i < j ≤ n}
forms a two-sided ideal. More generally, for any d ∈ N the subspace
(a + I ) + (b + I ) := a + b + I, (a + I )(b + I ) := ab + I
for a, b ∈ A. Note that these operations are well-defined, that is, they are
independent of the choice of the representatives of the cosets, because I is a two-
sided ideal. Moreover, the assumption I ≠ A is needed to ensure that the factor ring
has an identity element; see Axiom (R7) in Sect. 1.1.
For K-algebras we have some extra structure on the factor rings.
Lemma 1.20. Let A be a K-algebra. Then the following holds.
(a) Every left (or right) ideal I of A is a K-subspace of A.
(b) If I is a proper two-sided ideal of A then the factor ring A/I is a K-algebra,
the factor algebra of A with respect to I .
Proof. (a) Let I be a left ideal. By definition, (I, +) is an abelian group. We need
to show that if λ ∈ K and x ∈ I then λx ∈ I . But λ1A ∈ A, and we obtain
λx = λ(1A x) = (λ1A )x ∈ I,
since I is a left ideal. The same argument works if I is a right ideal, by axiom (Alg)
in Definition 1.1.
(b) We have already recalled above that the cosets A/I form a ring. Moreover, by
part (a), I is a K-subspace and hence A/I is also a K-vector space with (well-
defined) scalar multiplication λ(a +I ) = λa +I for all λ ∈ K and a ∈ A. According
to Definition 1.1 it only remains to show that axiom (Alg) holds. But this property
is inherited from A; explicitly, let λ ∈ K and a, b ∈ A, then
Similarly, using that λ(ab) = a(λb) by axiom (Alg), one shows that
λ((a + I )(b + I )) = (a + I )(λ(b + I )).
Example 1.21. Consider the algebra K[X] of polynomials in one variable over
a field K. Recall from a course on basic algebra that every non-zero ideal I of
K[X] is of the form K[X]f = (f ) for some non-zero polynomial f ∈ K[X]
(that is, K[X] is a principal ideal domain). The factor algebra A/I = K[X]/(f )
is finite-dimensional. More precisely, we claim that it has dimension equal to the
degree d of the polynomial f and that a K-basis of K[X]/(f ) is given by the
cosets 1 + (f ), X + (f ), . . . , Xd−1 + (f ). In fact, if g ∈ K[X] then division with
remainder in K[X] (polynomial long division) yields g = qf + r with polynomials
q, r ∈ K[X] and r has degree less than d, the degree of f . Hence
On the other hand, considering degrees one checks that this spanning set of
K[X]/(f ) is also linearly independent.
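The division-with-remainder argument of Example 1.21 is easy to try out; a sketch using exact rational arithmetic, with polynomials as coefficient lists, lowest degree first (our own encoding):

```python
from fractions import Fraction

# Remainder of g on division by f in Q[X]; works for any f with non-zero
# leading coefficient.  Sketches the reduction step of Example 1.21.
def poly_mod(g, f):
    r = [Fraction(x) for x in g]
    d = len(f) - 1                       # degree of f, leading coeff f[d] != 0
    while len(r) > d:
        if r[-1] != 0:
            q = r[-1] / f[d]             # cancel the leading term of r
            shift = len(r) - len(f)
            for i in range(len(f)):
                r[shift + i] -= q * f[i]
        r.pop()                          # leading coefficient is now zero
    return r

# g = X^4 + 1, f = X^2 + 1: g = (X^2 - 1) f + 2, so the coset of g is 2 + (f),
# expressed in the basis 1 + (f), X + (f) of K[X]/(f).
assert poly_mod([1, 0, 0, 0, 1], [1, 0, 1]) == [2, 0]
```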
As with any algebraic structure (like vector spaces, groups, rings) one needs to
define and study maps between algebras which ‘preserve the structure’.
Definition 1.22. Let A and B be K-algebras. A map φ : A → B is a K-algebra
homomorphism (or homomorphism of K-algebras) if
(i) φ is a K-linear map of vector spaces,
(ii) φ(ab) = φ(a)φ(b) for all a, b ∈ A,
(iii) φ(1A ) = 1B .
The map φ : A → B is a K-algebra isomorphism if it is a K-algebra
homomorphism and is in addition bijective. If so, then the K-algebras A and B
are said to be isomorphic, and one writes A ≅ B. Note that the inverse of an algebra
isomorphism is also an algebra isomorphism, see Exercise 1.14.
Remark 1.23.
(1) To check condition (ii) of Definition 1.22, it suffices to take for a, b any two
elements in some fixed basis. Then it follows for arbitrary elements of A as
long as φ is K-linear.
(2) Note that the definition of an algebra homomorphism requires more than just
being a homomorphism of the underlying rings. Indeed, a ring homomorphism
between K-algebras is in general not a K-algebra homomorphism.
For instance, consider the complex numbers C as a C-algebra. Let
φ : C → C, φ(z) = z̄ be the complex conjugation map. By the usual rules
for complex conjugation φ satisfies axioms (ii) and (iii) from Definition 1.22,
that is, φ is a ring homomorphism. But φ does not satisfy axiom (i), since for
example
φ(rz) = r̄ z̄ ≠ r z̄ = rφ(z)   (for r ∉ R and z ≠ 0).
We list a few examples, some of which will occur frequently. For each of these,
we recommend checking that the axioms of Definition 1.22 are indeed satisfied.
Example 1.24. Let K be a field.
(1) Let Q be the one-loop quiver with one vertex v and one arrow α such
that s(α) = v = t (α). As pointed out in Example 1.13, the path algebra
KQ has a basis consisting of 1, α, α², . . .. The multiplication is given by
α^i · α^j = α^(i+j). From this we can see that the polynomial algebra K[X] and
KQ are isomorphic, via the homomorphism defined by sending Σ_i λi X^i to
Σ_i λi α^i. That is, we substitute α into the polynomial Σ_i λi X^i.
(2) Let A be a K-algebra. For every element a ∈ A we consider the ‘evaluation
map’
ϕa : K[X] → A ,   Σ_i λi X^i ↦ Σ_i λi a^i.
πi : A → Ai , (a1 , . . . , an ) → ai
ιi : Ai → A , ai → (0, . . . , 0, ai , 0, . . . , 0)
are not K-algebra homomorphisms when n ≥ 2 since the identity element 1Ai
is not mapped to the identity element 1A = (1A1 , . . . , 1An ).
τ : Mn (K) → Mn (K)op , m → mt .
From linear algebra it is known that ψ is K-linear and that it preserves the
multiplication, that is, M(β)M(α) = M(β ◦ α). The map ψ is also injective.
Suppose M(α) = 0, then by definition α maps the fixed basis to zero, but then
α = 0. The map ψ is surjective, because every n × n-matrix defines a linear
transformation of V .
(8) We consider the algebra T2 (K) of upper triangular 2 × 2-matrices. This
algebra is of dimension 3 and has a basis of matrix units E11 , E12 and E22 .
Their products can easily be computed (for instance using the formula in
Exercise 1.10), and they are collected in the multiplication table below. Let
us now compare the algebra T2 (K) with the path algebra KQ for the quiver
1 ←− 2 (arrow α : 2 → 1), which has appeared already in Example 1.13. The
multiplication tables
for these two algebras are given as follows
From this it easily follows that the assignment E11 → e1 , E12 → α, and
E22 → e2 defines a K-algebra isomorphism T2 (K) → KQ.
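The evaluation map ϕa of Example 1.24(2) can also be tried out concretely, say with A = M2(K): substituting a matrix a into a polynomial gives ϕa, and one can spot-check that it is multiplicative. A sketch (the list-of-lists matrix encoding is our own convention):

```python
# Evaluation of a polynomial at a 2x2 matrix, sketching phi_a for A = M2(K).
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def evaluate(coeffs, a):
    """phi_a(sum_i coeffs[i] X^i) = sum_i coeffs[i] a^i (lowest degree first)."""
    result = [[0, 0], [0, 0]]
    power = [[1, 0], [0, 1]]                 # a^0 = identity matrix
    for c in coeffs:
        result = [[result[i][j] + c * power[i][j] for j in range(2)]
                  for i in range(2)]
        power = mat_mul(power, a)
    return result

a = [[0, 1], [0, 0]]                         # here a^2 = 0
assert evaluate([2, 3, 5], a) == [[2, 3], [0, 2]]    # 2*I + 3*a, since a^2 = 0
# multiplicative on a spot-check: phi_a(X^2) = phi_a(X) * phi_a(X)
assert evaluate([0, 0, 1], a) == mat_mul(evaluate([0, 1], a), evaluate([0, 1], a))
```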
Remark 1.25. The last example can be generalized. For every n ∈ N the K-algebra
Tn (K) of upper triangular n × n-matrices is isomorphic to the path algebra of the
quiver
1 ←− 2 ←− . . . ←− n − 1 ←− n
Proof. From linear algebra we know that the kernel ker(φ) = {a ∈ A | φ(a) = 0} is
a subspace of A, and ker(φ) = A since φ(1A ) = 1B (see Definition 1.22). If a ∈ A
and x ∈ ker(φ) then we have
Hence φ̄ is well defined, and its image is obviously equal to im(φ). It remains to
check that φ̄ is an algebra homomorphism. It is known from linear algebra (and
easy to check) that the map is K-linear. It takes the identity 1A + I to the identity
element of B since φ is an algebra homomorphism. To see that it preserves products,
let a, b ∈ A; then
R[X]/(X² + 1) ≅ C

as R-algebras.
(2) Let G be a cyclic group of order n, generated by the element a ∈ G. For a field
K consider the group algebra KG; this is a K-algebra of dimension n. Similar
to the previous example we consider the surjective evaluation homomorphism
from K[X] onto KG; the isomorphism theorem gives that K[X] modulo the kernel
of this homomorphism is isomorphic to the image KG. The kernel is the ideal
generated by X^n − 1, hence

K[X]/(X^n − 1) ≅ KG.
ma := X^(t+1) − Σ_{i=0}^{t} λi X^i ∈ K[X]

and

K[X]/(ma) ≅ Aa.
of strict upper triangular matrices and the image is the K-algebra Dn (K) of
diagonal matrices. Hence the isomorphism theorem yields that
Tn(K)/Un(K) ≅ Dn(K)
Using the formula from Example 1.8 for the product of two elements from H
one can check that this map is an R-algebra homomorphism, see Exercise 1.11.
Looking at the first row of the matrices in the image, it is immediate that the
map is injective. Therefore, the algebra H is isomorphic to its image, a subalgebra
of M4(R). Since we know (from linear algebra) that matrix multiplication is
associative and that the distributivity law holds in M4 (R), we can now conclude
with no effort that the multiplication in H is associative and distributive.
Exercise 1.7. Explain briefly how examples (1) and (2) in Example 1.27 are special
cases of (3).
One might like to know how many K-algebras there are of a given dimension, up
to isomorphism. In general there might be far too many different algebras, but for
small dimensions one can hope to get a complete overview. We fix a field K, and we
consider K-algebras of dimension at most 2. For these, there are some restrictions.
Proposition 1.28. Let K be a field.
(a) Every 1-dimensional K-algebra is isomorphic to K.
(b) Every 2-dimensional K-algebra is commutative.
Proof. (a) Let A be a 1-dimensional K-algebra. Then A must contain the scalar
multiples of the identity element, giving a subalgebra U := {λ1A | λ ∈ K} ⊆ A.
Then U = A, since A is 1-dimensional. Moreover, according to axiom (Alg) from
Definition 1.1 the product in U is given by (λ1A )(μ1A ) = (λμ)1A and hence the
map A → K, λ1A → λ, is an isomorphism of K-algebras.
X² − δX − γ = (X − δ/2)² − (γ + (δ/2)²).
Note that {1A, b′} is still an R-vector space basis of A. Then we rescale this basis
by setting

b̃ := |ρ|^(−1/2) b′   if ρ ≠ 0,   and   b̃ := b′   if ρ = 0.

Then the set {1A, b̃} also is an R-vector space basis of A, and now we have
b̃² ∈ {0, ±1A}.
This leaves only three possible forms for the algebra A. We write Aj for the
algebra in which b̃2 = j 1Aj for j = 0, 1, −1. We want to show that no two of these
three algebras are isomorphic. For this we use Exercise 1.15.
(1) The algebra A0 has a non-zero element with square zero, namely b̃. By
Exercise 1.15, any algebra isomorphic to A0 must also have such an element.
(2) The algebra A1 does not have a non-zero element whose square is zero: Suppose
a 2 = 0 for a ∈ A1 and write a = λ1A1 + μb̃ with λ, μ ∈ R. Then, using that
b̃2 = 1A1 , we have
Since 1A1 and b̃ are linearly independent, it follows that 2λμ = 0 and λ2 = −μ2 .
So λ = 0 or μ = 0, which immediately forces λ = μ = 0, and therefore a = 0, as
claimed.
and since α and β are not both zero, we can write down the inverse of the above
non-zero element with respect to multiplication.
Clearly A0 and A1 are not fields, since they have zero divisors, namely b̃ for A0 ,
and (b̃ − 1)(b̃ + 1) = 0 in A1 . So A−1 is not isomorphic to A0 or A1 , again by
Exercise 1.15.
We can list a ‘canonical representative’ for each of the three isomorphism classes
of algebras. For j ∈ {0, ±1} consider the R-algebra
R[X]/(X² − 1) ≅ R[X]/(X − 1) × R[X]/(X + 1)
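This Chinese-Remainder decomposition is realised by evaluating a coset representative at the roots 1 and −1 of X² − 1; a numerical spot-check (the coefficient-list encoding, lowest degree first, is our own convention):

```python
# Sketch of R[X]/(X^2 - 1) -> R x R, g + (X^2 - 1) |-> (g(1), g(-1)).
def reduce_mod(g):
    """Reduce modulo X^2 - 1 by replacing X^2 with 1."""
    out = [0, 0]
    for i, c in enumerate(g):
        out[i % 2] += c                 # X^i = X^(i mod 2) in the quotient
    return out

def crt(g):
    return (sum(g), sum(c * (-1) ** i for i, c in enumerate(g)))  # (g(1), g(-1))

g, h = [1, 2, 3], [4, -1]               # 1 + 2X + 3X^2 and 4 - X
prod = [0] * (len(g) + len(h) - 1)
for i, x in enumerate(g):
    for j, y in enumerate(h):
        prod[i + j] += x * y            # ordinary polynomial product
# The map respects multiplication (componentwise on the right-hand side):
assert crt(reduce_mod(prod)) == tuple(u * v for u, v in zip(crt(g), crt(h)))
```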
Remark 1.32. In this book we focus on algebras over fields. One can also define
algebras where for K one takes a commutative ring, instead of a field. With this,
large parts of the constructions in this chapter work as well, but generally the
situation is more complicated. Therefore we will not discuss these here.
In addition, as we have mentioned, our algebras are associative algebras, that is,
for any a, b, c in the algebra we have
(ab)c = a(bc).
The properties of such Lie algebras are rather different; see the book by Erdmann
and Wildon in this series for a thorough treatment of Lie algebras at an
undergraduate level.
EXERCISES
1.10. Let K be a field and let Eij ∈ Mn(K) be the matrix units as defined in
Example 1.3, that is, Eij has an entry 1 at position (i, j) and all other entries
are 0. Show that for all i, j, k, ℓ ∈ {1, . . . , n} we have

Eij Ekℓ = δjk Eiℓ = { Eiℓ   if j = k,
                    { 0     if j ≠ k,
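The formula in Exercise 1.10 can be verified exhaustively for small n; a sketch for n = 3 (the list-of-lists matrix encoding is ours, and this is of course not a substitute for the requested proof):

```python
# Exhaustive check of E_ij E_kl = delta_jk E_il in M_3(K), 1-based indices.
def E(i, j, n=3):
    """Matrix unit: entry 1 at position (i, j), all other entries 0."""
    return [[1 if (r, c) == (i - 1, j - 1) else 0 for c in range(n)]
            for r in range(n)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

zero = [[0] * 3 for _ in range(3)]
for i in range(1, 4):
    for j in range(1, 4):
        for k in range(1, 4):
            for l in range(1, 4):
                expected = E(i, l) if j == k else zero
                assert mat_mul(E(i, j), E(k, l)) == expected
```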
1 ←− 2 ←− . . . ←− n − 1 ←− n
(b) Find subalgebras of M3 (K) which are isomorphic to the path algebra of
the quiver 1 −→ 2 ←− 3.
1.20. Let K be a field. Consider the following set of upper triangular matrices

A := { (a 0 c; 0 a d; 0 0 b) | a, b, c, d ∈ K } ⊆ T3(K),

where the matrix is written row by row.
(a) The three-subspace quiver is the quiver in Example 1.13 where all arrows
point towards the branch vertex. Show that the path algebra of the three-
subspace quiver is isomorphic to the three-subspace algebra. (Hint: It
might be convenient to label the branch vertex as vertex 1.)
(b) Determine the opposite algebra Aop . Is A isomorphic to Aop ? Is it
isomorphic to the path algebra of some quiver?
1.22. This exercise gives a criterion by which one can sometimes deduce that a
certain algebra cannot be isomorphic to the path algebra of a quiver.
(a) An element e in a K-algebra A is called an idempotent if e2 = e. Note
that 0 and 1A are always idempotent elements. Show that if φ : A → B
is an isomorphism of K-algebras and e ∈ A is an idempotent, then
φ(e) ∈ B is also an idempotent.
(b) Suppose that A is a K-algebra of dimension > 1 which has only 0 and 1A
as idempotents. Then A is not isomorphic to the path algebra of a quiver.
(Hint: consider the trivial paths in the quiver.)
(c) Show that every division algebra A has no idempotents other than 0 and
1A ; deduce that if A has dimension > 1 then A cannot be isomorphic to
the path algebra of a quiver. In particular, this applies to the R-algebra H
of quaternions.
(d) Show that the factor algebra K[X]/(X2 ) is not isomorphic to the path
algebra of a quiver.
1.23. Let K be a field and A a K-algebra. Recall from Definition 1.6 that the
opposite algebra Aop has the same underlying vector space as A, but a new
multiplication
where on the right-hand side the product is given by the multiplication
in A.
(a) Show that Aop is again a K-algebra.
(b) Let H be the R-algebra of quaternions (see Example 1.8). Show that the
map
ϕ : H → Hop , a + bi + cj + dk → a − bi − cj − dk
is an R-algebra isomorphism.
(c) Let G be a group and KG the group algebra. Show that the K-algebras
KG and (KG)op are isomorphic.
(d) Let Q be a quiver and KQ its path algebra. Show that the opposite
algebra (KQ)op is isomorphic to the path algebra KQ, where Q is the
quiver obtained from Q by reversing all arrows.
1.24. Consider the following 2-dimensional R-subalgebras of M2(R) and
determine to which algebra of Proposition 1.29 they are isomorphic:
(i) D2(R), the diagonal matrices;
(ii) A := { (a b; 0 a) | a, b ∈ R };
(iii) B := { (a b; −b a) | a, b ∈ R },
where the 2 × 2 matrices are written row by row.
1.25. Let K be any field, and let A be a 2-dimensional K-algebra with basis {1A , a}.
Hence A = Aa as in Example 1.16. Let a 2 = γ 1A + δa, where γ , δ ∈ K. As
in Example 1.27, the element a has minimal polynomial ma := X2 − δX − γ ,
and
A = Aa ≅ K[X]/(ma).
(a) By applying the Chinese Remainder Theorem, show that if ma has two
distinct roots in K, then A is isomorphic to the direct product K × K of
K-algebras.
(b) Show also that if ma = (X − λ)2 for λ ∈ K, then A is isomorphic to the
algebra K[T ]/(T 2 ).
(c) Show that if ma is irreducible in K[X], then A is a field, containing K as
a subfield.
(d) Explain briefly why the algebras in (a) and (b) are not isomorphic, and
also not isomorphic to any of the algebras in (c).
1.26. Show that there are precisely two 2-dimensional algebras over C, up to
isomorphism.
1.27. Consider 2-dimensional algebras over Q. Show that the algebras
Q[X]/(X2 − p) and Q[X]/(X2 − q) are not isomorphic if p and q are
distinct prime numbers.
1.28. Let K = Z2 , the field with two elements.
(a) Let B be the set of matrices

B = { (a b; b a+b) | a, b ∈ Z2 },

where the 2 × 2 matrices are written row by row.
Representation theory studies how algebras can act on vector spaces. The
fundamental notion is that of a module, or equivalently (as we shall see), that
of a representation. Perhaps the most elementary way to think of modules is to view
them as generalizations of vector spaces, where the role of scalars is played by
elements in an algebra, or more generally, in a ring.
R × M → M, (r, m) → r · m
Exercise 2.1. Let R be a ring (with zero element 0R and identity element 1R ) and
M an R-module with zero element 0M . Show that the following holds for all r ∈ R
and m ∈ M:
(i) 0R · m = 0M ;
(ii) r · 0M = 0M ;
(iii) −(r · m) = (−r) · m = r · (−m), in particular −m = (−1R ) · m.
Remark 2.2. Completely analogous to Definition 2.1 one can define right
R-modules, using a map M × R → M, (m, r) → m · r. When the ring R
is not commutative the behaviour of left modules and of right modules can be
different; for an illustration see Exercise 2.22. We will consider only left modules,
since we are mostly interested in the case when the ring is a K-algebra, and scalars
are usually written to the left.
Before dealing with elementary properties of modules we consider a few
examples.
Example 2.3.
(1) When R = K is a field, then R-modules are exactly the same as K-vector
spaces. Thus, modules are a true generalization of the concept of a vector space.
(2) Let R = Z, the ring of integers. Then every abelian group can be viewed as
a Z-module: If n ≥ 1 then n · a is set to be the sum of n copies of a, and
(−n) · a := −(n · a), and 0Z · a = 0. With this, conditions (i) to (iv) in
Definition 2.1 hold in any abelian group.
(3) Let R be a ring (with 1). Then every left ideal I of R is an R-module, with
R-action given by ring multiplication. First, as a left ideal, (I, +) is an abelian
group. The properties (i)–(iv) hold even for arbitrary elements in R.
(4) A very important special case of (3) is that every ring R is an R-module, with
action given by ring multiplication.
(5) Suppose M1 , . . . , Mn are R-modules. Then the cartesian product
M1 × . . . × Mn := {(m1 , . . . , mn ) | mi ∈ Mi }
The module axioms follow immediately from the fact that they hold in
M1 , . . . , Mn .
We will almost always study modules when the ring is a K-algebra A. In this
case, there is a range of important types of A-modules, which we will now introduce.
A × V → V , (ϕ, v) → ϕ · v := ϕ(v).
ϕ · (ψ · v) = ϕ(ψ(v)) = (ϕψ) · v
and extended linearly to the entire group algebra KG. The module axioms are
trivially satisfied.
A × M → M , (a, m) → a · m := ϕ(a)m
In this section we will completely describe the modules for algebras of the form
K[X]/(f ) where f ∈ K[X] is a polynomial. We first recall the situation for the
case f = 0, that is, modules for the polynomial algebra K[X].
Definition 2.7. Let K be a field and V a K-vector space. For any K-linear map
α : V → V we can use this and turn V into a K[X]-module by setting
g · v := g(α)(v) = Σ_i λi α^i(v)   (for g = Σ_i λi X^i ∈ K[X] and v ∈ V).

Here α^i = α ∘ · · · ∘ α is the i-fold composition of maps. We denote this
K[X]-module by Vα.
Checking the module axioms (i)–(iv) from Definition 2.1 is straightforward. For
example, consider condition (iii),
Verifying the other axioms is similar and is left as an exercise. Note that, to define
a K[X]-module structure on a vector space, one only has to specify the action of X,
see Remark 2.6.
Example 2.8. Let K = R, and take V = R2 , the space of column vectors. Let
α : V → V be the linear map with matrix

    0 1
    0 0

with respect to the standard basis of unit vectors of R2 . According to Definition 2.7,
V becomes a module for R[X] by setting X · v := α(v). Here α^2 = 0, so if
g = Σi λi X^i ∈ R[X] is a general polynomial then

    g · v = g(α)(v) = Σi λi α^i (v) = λ0 v + λ1 α(v).
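This computation is easy to verify numerically; the sketch below (using numpy, with a sample polynomial g that is not in the text) checks that α² = 0 and that only λ0 and λ1 contribute to g · v.

```python
import numpy as np

# Matrix of alpha with respect to the standard basis of R^2.
alpha = np.array([[0.0, 1.0],
                  [0.0, 0.0]])

def act(coeffs, v):
    """Action of g = sum_i coeffs[i] * X^i on v in V_alpha: g . v = g(alpha)(v)."""
    result = np.zeros_like(v)
    power = np.eye(2)                 # alpha^0
    for c in coeffs:
        result = result + c * (power @ v)
        power = power @ alpha         # next power of alpha
    return result

v = np.array([3.0, 5.0])
g = [7.0, 2.0, 4.0, 1.0]              # sample polynomial 7 + 2X + 4X^2 + X^3

assert np.allclose(alpha @ alpha, 0)                      # alpha^2 = 0
assert np.allclose(act(g, v), 7 * v + 2 * (alpha @ v))    # only lambda_0, lambda_1 matter
```

Any other choice of coefficients shows the same pattern, since all powers α^i with i ≥ 2 vanish.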
The definition of Vα is more than just an example. We will now show that every
K[X]-module is equal to Vα for some K-linear map α.
Proposition 2.9. Let K be a field and let V be a K[X]-module. Then V = Vα ,
where α : V → V is the K-linear map given by α(v) := X · v for v ∈ V .
Proof. We first want to show that the map α defined in the statement is K-linear. In
fact, for every λ, μ ∈ K and v, w ∈ V we have
(Note that this is a special case of Example 2.4, for the algebra homomorphism
K[X] → A, g → g + I .) Then as a K[X]-module, V = Vα , where α is the linear
map v → X · v, by Proposition 2.9. It follows that for every v ∈ V we have
f (α)(v) = f · v = (f + I ) · v = 0 · v = 0
Rm := {r · m | r ∈ R}
g · v = g(α)(v) (v ∈ V , g ∈ K[X]).
(3) Let A = Tn (K), the algebra of upper triangular matrices. We can also consider
K n as an A-module. Then K n has non-trivial submodules, for example there is a
1-dimensional submodule, spanned by (1, 0, . . . , 0)t . Exercise 2.14 determines
all Tn (K)-submodules of K n .
(4) Let Q be a quiver and A = KQ, the path algebra of Q. For any r ≥ 1, let
A≥r be the subspace of A spanned by paths of length ≥ r. Then A≥r is a
submodule of the A-module A. We have also seen the submodules Aei , for i a
vertex of Q. Then we also have the submodule (Aei )≥r := Aei ∩ A≥r ,
by Example 2.13.
(5) Consider the 2-dimensional R-algebra A0 = span{1A0 , b̃} with b̃^2 = 0 as
in Sect. 1.4, as an A0 -module. The 1-dimensional subspace spanned by b̃ is
an A0 -submodule of A0 . Alternatively, A0 ≅ R[X]/(X^2 ) and the subspace
span{X + (X^2 )} is an R[X]/(X^2 )-submodule of R[X]/(X^2 ).
On the other hand, consider the algebra A1 = span{1A1 , b̃} with b̃^2 = 1A1 in
the same section; then the subspace spanned by b̃ is not a submodule, but the
space spanned by b̃ − 1A1 is a submodule. Alternatively, A1 ≅ R[X]/(X^2 − 1);
here the subspace U1 := span{X + (X^2 − 1)} is not a submodule, since
X · (X + (X^2 − 1)) = X^2 + (X^2 − 1) = 1 + (X^2 − 1) ∉ U1 .
Exercise 2.4. Let A = R[X]/(X2 + 1) (which is the algebra A−1 in Sect. 1.4). Why
does A as an A-module not have any submodules except 0 and A?
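A hint, sketched in code (an addition, not part of the exercise): under the identification of A with C sending the coset of X to i, every nonzero coset a + bX has inverse (a − bX)/(a² + b²), so any nonzero submodule contains 1_A and hence is all of A. The arithmetic can be modelled with exact rationals:

```python
from fractions import Fraction

def mult(p, q):
    # Multiply cosets a + bX and c + dX in R[X]/(X^2 + 1); X^2 is replaced by -1.
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

def inverse(p):
    # Inverse of a nonzero coset a + bX: (a - bX)/(a^2 + b^2), as for complex numbers.
    a, b = p
    n = a * a + b * b
    return (a / n, -b / n)

p = (Fraction(3), Fraction(-2))               # the coset 3 - 2X
one = (Fraction(1), Fraction(0))
assert mult(p, inverse(p)) == one             # every nonzero element is invertible
# Hence any nonzero submodule contains 1_A and therefore equals A.
```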
Sometimes a module can be broken up into ‘smaller pieces’; the fundamental
notion for such phenomena is that of the direct sum of submodules. Very often we
will only need finite direct sums, but when dealing with semisimple modules we
will also need arbitrary direct sums. For clarity, we will give the definition in both
situations separately.
Definition 2.15. Let R be a ring and let M be an R-module.
(a) Let U1 , U2 , . . . , Ut be R-submodules of M. We say that M is the direct sum of
U1 , . . . , Ut , denoted M = U1 ⊕ U2 ⊕ . . . ⊕ Ut , if the following two conditions
are satisfied:
(i) M = U1 + U2 + . . . + Ut , that is, every element of M can be expressed as
a sum of elements from the submodules Ui .
(ii) For every j with 1 ≤ j ≤ t we have Uj ∩ (Σi≠j Ui ) = 0.
A = C1 ⊕ C2 ⊕ . . . ⊕ Cn
M/U := {m + U | m ∈ M}
φ : R → M, φ(r) = rm.
ψ : R n → M, ψ(r1 , r2 , . . . , rn ) = r1 m1 + r2 m2 + . . . + rn mn .
πi : M → Mi , (m1 , . . . , mr ) → mi
ιi : Mi → M , mi → (0, . . . , 0, mi , 0, . . . , 0)
(3) Let G = Cn be a cyclic group of order n. Then we have seen in Example 2.11
that the group algebra CG ≅ C[X]/(X^n − 1) has n one-dimensional modules
given by multiplication with the scalars (e^{2πi/n})^j for j = 0, 1, . . . , n−1. These
are pairwise non-isomorphic, by part (2) above. Thus, CG has precisely n one-
dimensional modules, up to isomorphism.
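This count can be illustrated numerically, say for n = 6 (a sketch; the scalars below model the action of the coset of X on each 1-dimensional module):

```python
import cmath

# The group algebra of the cyclic group C_6 over C is C[X]/(X^6 - 1); the
# scalar by which the coset of X acts on the j-th 1-dimensional module:
n = 6
scalars = [cmath.exp(2j * cmath.pi * j / n) for j in range(n)]

# Each scalar is an n-th root of unity, so X . v := scalar * v is compatible
# with the relation X^n = 1 in C[X]/(X^n - 1).
assert all(abs(s**n - 1) < 1e-9 for s in scalars)

# The scalars are pairwise distinct, so the n modules are pairwise non-isomorphic.
assert all(abs(scalars[j] - scalars[k]) > 1e-9
           for j in range(n) for k in range(n) if j != k)
```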
Exercise 2.8. Let A = KQ, where Q is a quiver. We have seen that A has for each
vertex i of Q a 1-dimensional module Si := Aei /(Aei )≥1 (see Example 2.19). Show
that if i ≠ j then Si and Sj are not isomorphic.
In analogy to the isomorphism theorem for linear maps, or group homomor-
phisms, there are also isomorphism theorems for module homomorphisms.
Theorem 2.24 (Isomorphism Theorems). Let R be a ring. Then the following
hold.
(a) Suppose φ : M → N is an R-module homomorphism. Then the kernel ker(φ) is
an R-submodule of M and the image im(φ) is an R-submodule of N. Moreover,
we have an isomorphism of R-modules
M/ker(φ) ≅ im(φ).
(b) Suppose U, V are submodules of an R-module M, then the sum U + V and the
intersection U ∩ V are also R-submodules of M. Moreover,
U/(U ∩ V ) ≅ (U + V )/V
(c) Suppose that U ⊆ V are R-submodules of M. Then V /U is an R-submodule
of M/U and

(M/U )/(V /U ) ≅ M/V
φ(rm) = rφ(m) = r · 0 = 0
and rm ∈ ker(φ). Similarly one checks that the image im(φ) is a submodule of N.
For the second statement we consider the map
and
(b) We have already seen in Example 2.13 that U + V and U ∩ V are submodules.
Then we consider the map
ψ : U → (U + V )/V , u → u + V .
From the addition and R-action on factor modules being defined on representatives,
it is easy to check that ψ is an R-module homomorphism. Since every coset in
(U + V )/V is of the form u + v + V = u + V , the map ψ is surjective. Moreover,
it follows directly from the definition that ker(ψ) = U ∩ V . So part (a) implies that
U/(U ∩ V ) ≅ (U + V )/V .
(c) That V /U is an R-submodule of M/U follows directly from the fact that V is
an R-submodule of M. We then consider the map
ψ : M/U → M/V , m + U → m + V .
Note that this map is well-defined since U ⊆ V by assumption. One checks that ψ is
an R-module homomorphism. By definition, ψ is surjective, and the kernel consists
precisely of the cosets of the form m + U with m ∈ V , that is, ker(ψ) = V /U . So
part (a) implies that
(M/U )/(V /U ) ≅ M/V ,
as claimed.
Example 2.25. Let R be a ring and M an R-module. For any m ∈ M consider the
R-module homomorphism
φ : R → M , φ(r) = rm
AnnR (m) := {r ∈ R | rm = 0}
R/AnnR (m) ≅ im(φ) = Rm,
that is, the factor module is actually isomorphic to the submodule of M generated
by m; this has appeared already in Example 2.13.
In the isomorphism theorem we have seen that factor modules occur very natu-
rally in the context of module homomorphisms. We now describe the submodules
of a factor module. This so-called submodule correspondence is very useful, as it
allows one to translate between factor modules and modules. This is based on the
following observation:
Proposition 2.26. Let R be a ring and φ : M → N an R-module homomorphism.
Then for every R-submodule W ⊆ N the preimage φ −1 (W ) := {m ∈ M | φ(m) ∈ W }
is an R-submodule of M, which contains the kernel of φ.
Proof. We first show that φ −1 (W ) is a subgroup. It contains the zero element since
φ(0) = 0 ∈ W . Moreover, if m1 , m2 ∈ φ −1 (W ), then
W̃ := π^{−1}(W ) = {m ∈ M | m + U ∈ W }.
Note that the K[X]-action on A is the same as the A-action on A (both given
by multiplication in K[X]); thus the K[X]-submodules of A are the same as the
A-submodules of A and the above list gives precisely the submodules of A as an
A-module.
Alternatively, we know that as a K[X]-module, A = Vα , where V = A
as a vector space, and α is the linear map which is given as multipli-
cation by X (see Sect. 2.2). The matrix of α with respect to the basis
{1 + (X^n ), X + (X^n ), . . . , X^{n−2} + (X^n ), X^{n−1} + (X^n )} is the Jordan block Jn (0).
A submodule is a subspace which is invariant under this matrix. With this, we also
get the above description of submodules.
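For small n this description can be tested directly; the sketch below builds the matrix of multiplication by X on K[X]/(X^n) for n = 4 (with the ones placed according to the basis ordering above) and checks that the spans of the cosets of X^k, . . . , X^{n−1} are invariant subspaces:

```python
import numpy as np

n = 4
# Matrix of 'multiplication by X' on K[X]/(X^n) with respect to the ordered
# basis 1, X, ..., X^{n-1}: basis vector i is sent to i+1, and X^{n-1} to 0.
M = np.zeros((n, n))
for i in range(n - 1):
    M[i + 1, i] = 1.0

# X^n acts as zero, as it must in K[X]/(X^n).
assert np.allclose(np.linalg.matrix_power(M, n), 0)

# The span of the cosets of X^k, ..., X^{n-1} (the last n-k coordinates)
# is invariant under M, i.e. it is a submodule.
for k in range(n):
    for i in range(k, n):
        e = np.zeros(n)
        e[i] = 1.0
        assert np.allclose((M @ e)[:k], 0)
```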
Exercise 2.9. Check the details in the above example.
Often, properties of module homomorphisms already give rise to direct sum
decompositions. We give an illustration of this, which will actually be applied twice
later.
Lemma 2.30. Let A be a K-algebra, and let M and N be non-zero A-modules.
Suppose there are A-module homomorphisms j : N → M and π : M → N such
that the composition π ◦ j : N → N is an isomorphism. Then j is injective and π
is surjective, and M is the direct sum of two submodules,
M = im(j ) ⊕ ker(π).
Proof. The first part is clear. We must show that im(j ) ∩ ker(π) = 0 and that
M = im(j ) + ker(π), see Definition 2.15.
Suppose w ∈ im(j ) ∩ ker(π), so that w = j (n) for some n ∈ N and π(w) = 0.
Then 0 = π(w) = (π ◦ j )(n) from which it follows that n = 0 since π ◦ j is
injective. Clearly, then also w = 0, as desired. This proves that the intersection
im(j ) ∩ ker(π) is zero.
Let φ : N → N be the inverse of π ◦ j , so that we have π ◦ j ◦ φ = idN . Take
w ∈ M then
w = (j ◦ φ ◦ π)(w) + (w − (j ◦ φ ◦ π)(w)).
The first summand belongs to im(j ), and the second summand is in ker(π) since
π(w − (j ◦ φ ◦ π)(w)) = π(w) − (π ◦ j ◦ φ)(π(w)) = π(w) − π(w) = 0.
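A concrete numerical illustration of Lemma 2.30 (with sample maps chosen for this sketch, not taken from the text): let M = R³ and N = R², let j embed R² into R³ and let π project back so that π ∘ j is invertible.

```python
import numpy as np

# Sample choice of maps: j : R^2 -> R^3 and pi : R^3 -> R^2 with pi o j invertible.
j  = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 1.0]])
pi = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])

phi = np.linalg.inv(pi @ j)        # phi = (pi o j)^{-1}

w = np.array([2.0, -1.0, 5.0])
p = j @ (phi @ (pi @ w))           # summand in im(j)
q = w - p                          # summand in ker(pi)

assert np.allclose(p + q, w)       # w decomposes as p + q
assert np.allclose(pi @ q, 0)      # q lies in ker(pi)

# im(j) and ker(pi) together span R^3: the columns of j plus a spanning
# vector of ker(pi) = span{(0,0,1)^t} have full rank.
basis = np.column_stack([j, np.array([0.0, 0.0, 1.0])])
assert np.linalg.matrix_rank(basis) == 3
```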
In basic group theory one learns that a group action of a group G on a set Ω is ‘the
same’ as a group homomorphism from G into the group of all permutations of the
set Ω.
In analogy, let A be a K-algebra; as we will see, an A-module V ‘is the same’ as
an algebra homomorphism φ : A → EndK (V ), that is, a representation of A.
Definition 2.31. Let K be a field and A a K-algebra.
(a) A representation of A over K is a K-vector space V together with a K-algebra
homomorphism θ : A → EndK (V ).
(b) A matrix representation of A is a K-algebra homomorphism θ : A −→ Mn (K),
for some n ≥ 1.
(c) Suppose in (a) that V is finite-dimensional. We may fix a basis in V and
write linear maps of V as matrices with respect to such a basis. Then the
representation of A becomes a K-algebra homomorphism θ : A → Mn (K),
that is, a matrix representation of A.
Example 2.32. Let K be a field.
(1) Let A be a K-subalgebra of Mn (K), the algebra of n × n-matrices, then the
inclusion map is an algebra homomorphism and hence is a matrix representation
of A. Similarly, let A be a subalgebra of the K-algebra EndK (V ) of K-linear
maps on V , where V is a vector space over K. Then the inclusion map from A
to EndK (V ) is an algebra homomorphism, hence it is a representation of A.
(2) Consider the polynomial algebra K[X], and take a K-vector space V together
with a fixed linear transformation α. We have seen in Example 1.24 that
evaluation at α is an algebra homomorphism. Hence we have a representation
of K[X] given by θ : K[X] → EndK (V ), θ (f ) = f (α).
(3) Let A = KG, the group algebra of a finite group. Define θ : A → M1 (K) = K
by mapping each basis vector g ∈ G to 1, and extend to linear combinations.
This is a representation of A: by Remark 2.6 it is enough to check the conditions
on a basis. This is easy, since θ (g) = 1 for g ∈ G.
In Sect. 2.1 we introduced modules for an algebra as vector spaces on which the
algebra acts by linear transformations. The following crucial result observes that
modules and representations of an algebra are the same. Going from one notion to
the other is a formal matter, nothing is ‘done’ to the modules or representations and
it only describes two different views of the same thing.
that is, θ (λa + μb) = λθ (a) + μθ (b). Next, for any a, b ∈ A and v ∈ V we get
which holds for all v ∈ V , hence θ (ab) = θ (a) ◦ θ (b). Finally, it is immediate from
the definition that θ (1A ) = idV .
(b) This is analogous to the proof of part (a), where each argument can be
reversed.
Example 2.34. Let K be a field.
(1) Let V be a K-vector space and A ⊆ EndK (V ) a subalgebra. As observed in
Example 2.32 the inclusion map θ : A → EndK (V ) is a representation. When
V is interpreted as an A-module we obtain precisely the ‘natural module’ from
Example 2.4 with A-action given by A × V → V , (ϕ, v) → ϕ(v).
(2) The representation θ : K[X] → EndK (V ), θ (f ) = f (α) for any K-linear map
α on a K-vector space V when interpreted as a K[X]-module is precisely the
K[X]-module Vα studied in Sect. 2.2.
(3) Let A be a K-algebra, then A is an A-module with A-action given by the
multiplication in A. The corresponding representation of A is then given by
ψ ◦ θ1 (a) = θ2 (a) ◦ ψ
θ (a + I ) = θ (a) (a ∈ A)
(a + I )v = av for all a ∈ A, v ∈ V .
We want to illustrate the above inflation procedure with an example from linear
algebra.
Example 2.39. Let A = K[X] be the polynomial algebra. We take a representation
θ : K[X] → EndK (V ) and let θ (X) =: α. The kernel is an ideal of K[X], so it
is of the form ker(θ ) = K[X]m = (m) for some polynomial m ∈ K[X]. Assume
that is, ρθ (g) = θ (g) ∈ GL(V ) is indeed an invertible linear map. Moreover, for
every g, h ∈ G we have
The group algebra KG has basis the elements of the group G, which allows us to
relate KG-modules and representations of G. The path algebra KQ of a quiver has
basis the paths of Q. In analogy, we can define a representation of a quiver Q, which
allows us to relate KQ-modules and representations of Q. We will introduce this
now, and later we will study it in more detail.
Roughly speaking, a representation is as follows. A quiver consists of vertices
and arrows, and if we want to realize it in the setting of vector spaces, we represent
vertices by vector spaces, and arrows by linear maps, so that when arrows can be
composed, the corresponding maps can also be composed.
Definition 2.44. Let Q = (Q0 , Q1 ) be a quiver. A representation V of Q over a
field K is a set of K-vector spaces {V (i) | i ∈ Q0 } together with K-linear maps
V (α) : V (i) → V (j ) for each arrow α from i to j . We sometimes also write V as a
tuple V = ((V (i))i∈Q0 , (V (α))α∈Q1 ).
Example 2.45.
(1) Let Q be the one-loop quiver as in Example 1.13 with one vertex 1, and one
arrow α with starting and end point 1. Then a representation V of Q consists of
a K-vector space V (1) and a K-linear map V (α) : V (1) → V (1).
(2) Let Q be the quiver 1 −α→ 2. Then a representation V consists of two K-vector
spaces V (1) and V (2) and a K-linear map V (α) : V (1) → V (2).
Consider the second example. We can construct from this a module for the path
algebra KQ = span{e1 , e2 , α}. Take as the underlying space V := V (1) × V (2),
a direct product of K-vector spaces (a special case of a direct product of modules
as in Example 2.3 and Definition 2.17). Let ei act as the projection onto V (i) with
kernel V (j ) for j ≠ i. Then define the action of α on V by
α((v1 , v2 )) := V (α)(v1 )
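This construction can be made concrete with matrices. In the sketch below we assume V(1) = V(2) = K and V(α) = id_K, so V = K²; then e1, e2 act as coordinate projections, α sends the first coordinate into the second component, and the path-algebra relations of Q hold:

```python
import numpy as np

# Assume V(1) = V(2) = K and V(alpha) = id_K, so V = K^2 (a sample choice).
e1 = np.array([[1.0, 0.0],
               [0.0, 0.0]])       # projection onto V(1) with kernel V(2)
e2 = np.array([[0.0, 0.0],
               [0.0, 1.0]])       # projection onto V(2) with kernel V(1)
al = np.array([[0.0, 0.0],
               [1.0, 0.0]])       # alpha: (v1, v2) |-> (0, V(alpha)(v1))

assert np.allclose(e1 + e2, np.eye(2))   # e1 + e2 = 1_KQ acts as the identity
assert np.allclose(e1 @ e1, e1)          # trivial paths are idempotent
assert np.allclose(e2 @ e2, e2)
assert np.allclose(al @ e1, al)          # alpha . e1 = alpha (alpha starts at 1)
assert np.allclose(e2 @ al, al)          # e2 . alpha = alpha (alpha ends at 2)
assert np.allclose(al @ al, 0)           # Q has no paths of length 2
```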
V (1) := e1 V , V (2) = e2 V
V (i) = ei V = {ei · v | v ∈ V };
for any arrow i −α→ j in Q1 we set
Proof. (a) We check that the module axioms from Definition 2.1 are satisfied. Let
p = αr . . . α1 and q be paths in Q. Since the KQ-action is defined on the basis of
KQ and then extended by linearity, the distributivity (p + q) · v = p · v + q · v holds
by definition. Moreover, let v, w ∈ V ; then
since all the maps V (αi ) are K-linear. Since the multiplication in KQ is defined by
concatenation of paths (see Sect. 1.1.3), it is immediate that p · (q · v) = (pq) · v for
all v ∈ V and all paths p, q in Q, and then by linearity also for arbitrary elements
in KQ. Finally, the identity element is 1KQ = Σi∈Q0 ei , the sum of all trivial paths
(see Sect. 1.1.3); by definition, ei acts by picking out the ith component, then for all
v ∈ V we have
1KQ · v = Σi∈Q0 ei · v = v.
(b) According to Definition 2.44 we have to confirm that the V (i) = ei V are
K-vector spaces and that the maps V (α) are K-linear. The module axioms for V
imply that for every v, w ∈ V and λ ∈ K we have
ei · v + ei · w = ei · (v + w) ∈ ei V = V (i)
and also
Example 2.47. We consider the quiver 1 −α→ 2. The 1-dimensional vector space
span{e2 } is a KQ-module (more precisely, a KQ-submodule of the path algebra
KQ). We interpret it as a representation of the quiver Q by 0 −0→ span{e2 }. Also
the two-dimensional vector space span{e1 , α} is a KQ-module. As a representation
of Q this takes the form span{e1 } −V (α)→ span{α}, where V (α)(e1 ) = α. Often, the
vector spaces are only considered up to isomorphism; then the latter representation
takes the more concise form K −idK→ K.
EXERCISES
U ∩ (V + W ) = (U ∩ V ) + (U ∩ W ).
(b) Show that U \ V is never a submodule. Show also that the union U ∪ V
is a submodule if and only if U ⊆ V or V ⊆ U .
2.11. Let A = M2 (K), the K-algebra of 2 × 2-matrices over a field K. Take
the A-module M = A, and for i = 1, 2 define Ui to be the subset of
matrices where all entries not in the i-th column are zero. Moreover, let

    U3 := { (a a; b b) | a, b ∈ K },

the matrices with rows (a, a) and (b, b).
(a) Check that each Ui is an A-submodule of M.
(b) Verify that for i = j , the intersection Ui ∩ Uj is zero.
(c) Show that M is not the direct sum of U1 , U2 and U3 .
2.12. For a field K we consider the factor algebra A = K[X]/(X4 − 2). In each
of the following cases find the number of 1-dimensional A-modules (up to
isomorphism); moreover, describe explicitly the action of the coset of X on
the module.
F := (M1 × M2 )/C,
(Hint: Show that the linear map w → (β1 (w), β2 (w)) from W to C is an
isomorphism.)
(c) Assume that W, M1 , M2 are A-modules where A is some K-algebra and
that β1 , β2 are A-module homomorphisms. Show that then C and hence
F are A-modules.
2.18. Let E be the pull-back as in Exercise 2.16 and assume M = im(α1 )+im(α2 ).
Now take the push-out F as in Exercise 2.17 where W = E with the same
M1 , M2 , and where βi : W → Mi are the maps
satisfy the defining relations for G, hence give rise to a group representation
ρ : G → GL2 (C), and a 2-dimensional CG-module.
2.21. Let K be a field. We consider the quiver Q given by 1 ←α− 2 ←β− 3 and the
path algebra KQ as a KQ-module.
(a) Let V := span{e2 , α} ⊆ KQ. Explain why V = KQe2 and hence V is a
KQ-submodule of KQ.
(b) Find a K-basis of the KQ-submodule W := KQβ generated by β.
(c) Express the KQ-modules V and W as a representation of the quiver Q.
Are V and W isomorphic as KQ-modules?
2.22. Let A = KQ where Q is the following quiver: 1 −α→ 2 ←β− 3. This exercise
illustrates that A as a left module and A as a right module have different
properties.
(a) As a left module A = Ae1 ⊕ Ae2 ⊕ Ae3 (see Exercise 2.6). For each Aei ,
find a K-basis, and verify that each of Ae1 and Ae3 is 2-dimensional, and
Ae2 is 1-dimensional.
(b) Show that the only 1-dimensional A-submodule of Ae1 is span{α}.
Deduce that Ae1 cannot be expressed as Ae1 = U ⊕ V where U and
V are non-zero A-submodules.
(c) Explain briefly why the same holds for Ae3 .
(d) As a right A-module, A = e1 A ⊕ e2 A ⊕ e3 A (by the same reasoning as
in Exercise 2.6). Verify that e1 A and e3 A are 1-dimensional.
2.23. Assume A = K[X] and let f = gh where g and h are polynomials in A.
Then Af = (f ) ⊆ (g) = Ag and the factor module is
In this section we introduce simple modules for algebras over a field. Simple
modules can be seen as the building blocks of arbitrary modules. We will make this
precise by introducing and studying composition series, in particular we will prove
the fundamental Jordan–Hölder theorem. This shows that it is an important problem
to classify, if possible, all simple modules of a (finite-dimensional) algebra. We
discuss tools to find and to compare simple modules of a fixed algebra. Furthermore,
we determine the simple modules for algebras of the form K[X]/(f ) where f is a
non-constant polynomial, and also for finite-dimensional path algebras of quivers.
where r is the rotation by π/2 and s the reflection about the x-axis. The
corresponding A-module is V = R2 , the action of every g ∈ D4 is given by
applying the matrix ρ(g) to (column) vectors.
We claim that V = R2 is a simple RD4 -module. Suppose, for a contradiction,
that V has a non-zero submodule U with U ≠ V . Then U is 1-dimensional,
say U is spanned by a vector u. Then ρ(r)u = λu for some λ ∈ R, which means
that u ∈ R2 is an eigenvector of ρ(r). But the matrix ρ(r) does not have a real
eigenvalue, a contradiction.
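The decisive point, that ρ(r) has no real eigenvalue, is quickly confirmed numerically; the sketch below assumes the standard matrix of the rotation by π/2:

```python
import numpy as np

# Standard matrix of the rotation by pi/2 (assumed for rho(r) in this sketch).
rho_r = np.array([[0.0, -1.0],
                  [1.0,  0.0]])

eigenvalues = np.linalg.eigvals(rho_r)

# The eigenvalues are +-i: no real eigenvalue, hence no eigenvector in R^2,
# and therefore no 1-dimensional RD4-submodule of R^2.
assert all(abs(ev.real) < 1e-9 for ev in eigenvalues)
assert all(abs(abs(ev.imag) - 1.0) < 1e-9 for ev in eigenvalues)
```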
Since GL2 (R) is a subgroup of GL2 (C), we may view ρ(g) for g ∈ D4
also as elements in GL2 (C). This gives the 2-dimensional module V = C2 for
the group algebra CD4 . In Exercise 3.13 it will be shown that this module is
simple. Note that this does not follow from the argument we used in the case of
the RD4 -module R2 .
(4) Let K be a field and D a division algebra over K, see Definition 1.7. We view D
as a D-module (with action given by multiplication in D). Then D is a simple
D-module: In fact, let 0 ≠ U ⊆ D be a D-submodule, and take an element
0 ≠ u ∈ U . Then 1D = u^{−1} u ∈ U and hence if d ∈ D is arbitrary, we have
that d = d1D ∈ U . Therefore U = D, and D is a simple D-module.
We describe now a method which allows us to show that a given A-module V is
simple. For any element v ∈ V set Av := {av | a ∈ A}. This is an A-submodule of
V , the submodule of V generated by v, see Example 2.13.
Lemma 3.3. Let A be a K-algebra and let V be a non-zero A-module. Then V is
simple if and only if for each v ∈ V \ {0} we have Av = V .
Proof. First suppose that V is simple, and take an arbitrary element 0 ≠ v ∈ V . We
know that Av is a submodule of V , and it contains v = 1A v, and so Av is non-zero
and therefore Av = V since V is simple.
Conversely, suppose U is a non-zero submodule of V . Then there is some non-
zero u ∈ U . Since U is a submodule, we have Au ⊆ U , but by the hypothesis,
V = Au ⊆ U ⊆ V and hence U = V .
A module isomorphism takes a simple module to a simple module, see Exer-
cise 3.3; this is perhaps not surprising.
We would like to understand when a factor module of a given module is simple.
This can be answered by using the submodule correspondence (see Theorem 2.28).
0 = V0 ⊂ V1 ⊂ V2 ⊂ . . . ⊂ Vn = V
such that the factor modules Vi /Vi−1 are simple, for all 1 ≤ i ≤ n. The length of
the composition series is n, the number of factor modules appearing. We refer to the
Vi as the terms of the composition series.
Example 3.7.
(1) The zero module has a composition series 0 = V0 = 0 of length 0. If V is a
simple module then 0 = V0 ⊂ V1 = V is a composition series, of length 1.
(2) Assume we have a composition series as in Definition 3.6. If Vk is one of the
terms, then Vk ‘inherits’ the composition series
0 = V0 ⊂ V1 ⊂ . . . ⊂ Vk .
(3) Let K = R and take A to be the 2-dimensional algebra over R, with basis
{1A , β} such that β 2 = 0 (see Proposition 1.29); an explicit realisation would be
A = R[X]/(X2 ). Take the A-module V = A, and let V1 be the space spanned
by β, then V1 is a submodule. Since V1 and V /V1 are 1-dimensional, they are
simple (see Example 3.2). Hence V has a composition series
0 = V0 ⊂ V1 ⊂ V2 = V .
(4) Let A = Mn (K) and take the A-module V = A. In Exercise 2.5 we have
considered the A-submodules Ci consisting of the matrices with zero entries
outside the i-th column, where 1 ≤ i ≤ n. In Exercise 3.1 it is shown that every
A-module Ci is isomorphic to the natural A-module K n . In particular, each A-
module Ci is simple (see Example 3.2). On the other hand we have a direct sum
decomposition A = C1 ⊕ C2 ⊕ . . . ⊕ Cn and therefore we have a finite chain
of submodules
0 ⊂ C1 ⊂ C1 ⊕ C2 ⊂ . . . ⊂ C1 ⊕ . . . ⊕ Cn−1 ⊂ A.
Each factor module is simple: By the isomorphism theorem (see Theorem 2.24)
0 ⊂ span{e2 } ⊂ span{e2 , α} ⊂ V .
0 ⊂ span{α} ⊂ span{α, e1 } ⊂ V .
In each case, the factor modules are 1-dimensional and hence are simple A-
modules.
Exercise 3.1. Let A = Mn (K), and let Ci ⊆ A be the space of matrices which
are zero outside the i-th column. Show that Ci is isomorphic to the natural module
V = K n of column vectors. Hint: Show that placing v ∈ V into the i-th column of
a matrix and extending by zeros is a module homomorphism V → Ci .
Remark 3.8. Not every module has a composition series. For instance, take A = K
to be the 1-dimensional algebra over a field K. Then A-modules are K-vector
spaces, and A-submodules are K-subspaces. Therefore it follows from Defini-
tion 3.1 that the simple K-modules are precisely the 1-dimensional K-vector spaces.
This means that an infinite-dimensional K-vector space does not have a composition
series since a composition series is by definition a finite chain of submodules, see
Definition 3.6. On the other hand, we will now see that for any algebra, finite-
dimensional modules have a composition series.
Lemma 3.9. Let A be a K-algebra. Every finite-dimensional A-module V has a
composition series.
Proof. This is proved by induction on the dimension of V . If dimK V = 0 or
dimK V = 1 then we are done by Example 3.7.
So assume now that dimK V > 1. If V is simple then, again by Example 3.7,
V has a composition series. Otherwise, V has proper non-zero submodules. So we
can choose a proper submodule 0 ≠ U ⊂ V of largest possible dimension. Then U
must be a maximal submodule of V and hence the factor module V /U is a simple
A-module, by Lemma 3.4. Since dimK U < dimK V , by the induction hypothesis
U has a composition series, say
0 = U0 ⊂ U1 ⊂ U2 ⊂ . . . ⊂ Uk = U.
0 = U0 ⊂ U1 ⊂ U2 ⊂ . . . ⊂ Uk = U ⊂ V
is a composition series of V .
In Example 3.7 we have seen that a term of a composition series inherits a
composition series. This is a special case of the following result, which holds for
arbitrary submodules.
Proposition 3.10. Let A be a K-algebra, and let V be an A-module. If V has a
composition series, then every submodule U ⊆ V also has a composition series.
Proof. Take any composition series for V , say
0 = V0 ⊂ V1 ⊂ V2 ⊂ . . . ⊂ Vn−1 ⊂ Vn = V .
0 = V0 ∩ U ⊆ V1 ∩ U ⊆ V2 ∩ U ⊆ . . . ⊆ Vn−1 ∩ U ⊆ Vn ∩ U = U. (3.1)
Note that terms of this chain can be equal; that is, (3.1) is in general not a
composition series. However, if we remove any repetition, so that each module
occurs precisely once, then we get a composition series for U : Consider the factors
(Vi ∩ U )/(Vi−1 ∩ U ). Using that Vi−1 ⊂ Vi and applying the isomorphism theorem
(see Theorem 2.24) we obtain
Since Vi /Vi−1 is simple the factor modules (Vi ∩ U )/(Vi−1 ∩ U ) occurring in (3.1)
are either zero or simple.
In general, a module can have many composition series, even infinitely many
different composition series; see Exercise 3.11. The Jordan–Hölder Theorem shows
that any two composition series of a module have the same length and the same
factors up to isomorphism and up to order.
Theorem 3.11 (Jordan–Hölder Theorem). Let A be a K-algebra. Suppose an
A-module V has two composition series
0 = V0 ⊂ V1 ⊂ V2 ⊂ . . . ⊂ Vn−1 ⊂ Vn = V (I)
0 = W0 ⊂ W1 ⊂ W2 ⊂ . . . ⊂ Wm−1 ⊂ Wm = V . (II)
Vn−1 /D ≅ V /Wm−1 and Wm−1 /D ≅ V /Vn−1 ,
If the first inclusion were an equality then Wm−1 ⊆ Vn−1 ⊂ V ; but both Vn−1
and Wm−1 are maximal submodules of V (since V /Vn−1 and V /Wm−1 are simple,
see Lemma 3.4). Thus Vn−1 = Wm−1 , a contradiction. Since Vn−1 is a maximal
submodule of V , we conclude that Vn−1 + Wm−1 = V .
Now we apply the isomorphism theorem (Theorem 2.24), and get
0 = D0 ⊂ D1 ⊂ . . . ⊂ Dt = D.
(III) 0 = D0 ⊂ D1 ⊂ . . . ⊂ Dt = D ⊂ Vn−1 ⊂ V
(IV) 0 = D0 ⊂ D1 ⊂ . . . ⊂ Dt = D ⊂ Wm−1 ⊂ V
since, by Lemma 3.13, the quotients Vn−1 /D and Wm−1 /D are simple. Moreover,
by Lemma 3.13, the composition series (III) and (IV) are equivalent since only the
two top factors are interchanged, up to isomorphism.
Next, we claim that m = n. The module Vn−1 inherits a composition series of
length n − 1 from (I). So by the inductive hypothesis, all composition series of
Vn−1 have length n − 1. But the composition series which is inherited from (III) has
length t + 1 and hence n − 1 = t + 1. Similarly, the module Wm−1 inherits from
(IV) a composition series of length t + 1 = n − 1, so by the inductive hypothesis all
composition series of Wm−1 have length n − 1. In particular, the composition series
inherited from (II) does, and therefore m − 1 = n − 1 and m = n.
Now we show that the composition series (I) and (III) are equivalent. By the
inductive hypothesis, the composition series of Vn−1 inherited from (I) and (III) are
equivalent, that is, there is a permutation of n − 1 letters, γ say, such that
V /Vn−1 = Vn /Vn−1 ≅ Vγ(n) /Vγ(n)−1 ,
0 = V0 ⊂ V1 ⊂ . . . ⊂ Vn−1 ⊂ Vn = V .
Wj := Cn ⊕ Cn−1 ⊕ . . . ⊕ Cn−j +1 ,
0 = W0 ⊂ W1 ⊂ . . . ⊂ Wn−1 ⊂ Wn = V .
0 = V0 ⊂ V1 ⊂ . . . ⊂ Vn−1 ⊂ Vn = K n .
0 ⊂ S1 ⊂ A
for each a ∈ A. We take a = (1, 0), then a(x, 0) = (x, 0) but a(0, y) = 0 for
all (0, y) ∈ S2 . That is, φ = 0.
Because of the Jordan–Hölder theorem we can define the length of a module. This
is a very useful natural generalization of the dimension of a vector space.
Definition 3.15. Let A be a K-algebra. For every A-module V the length ℓ(V ) is
defined as the length of a composition series of V (see Definition 3.6), if it exists;
otherwise we set ℓ(V ) = ∞. An A-module V is said to be of finite length if ℓ(V ) is
finite, that is, when V has a composition series.
Note that the length of a module is well-defined because of the Jordan–Hölder
theorem which in particular says that all composition series of a module have the
same length.
Note that in this series terms can be equal. Using the isomorphism theorem
(Theorem 2.24) we analyze the factor modules
where the equality in the second step holds since Vi−1 ⊂ Vi . We also have
Vi−1 ⊆ Vi−1 + U and therefore Vi−1 ⊆ (Vi−1 + U ) ∩ Vi ⊆ Vi . But Vi−1 is a
maximal submodule of Vi and therefore the factor module Vi /((Vi−1 + U ) ∩ Vi ) is
either zero or is simple. We omit terms where the factor in the series (3.2) is zero,
and we obtain a composition series for V /U , as required.
(b) By assumption V has a composition series. Then by Proposition 3.10 and by
part (a) the modules U and V /U have composition series. Take a composition series
for U ,
0 = U0 ⊂ U1 ⊂ . . . ⊂ Ut −1 ⊂ Ut = U.
0 = V0 /U ⊂ V1 /U ⊂ . . . ⊂ Vr−1 /U ⊂ Vr /U = V /U.
0 = U0 ⊂ . . . ⊂ Ut ⊂ V1 ⊂ . . . ⊂ Vr = V .
In this series all factor modules are simple. This is clear for Ui /Ui−1 ; and further-
more, by the isomorphism theorem (Theorem 2.24) Vj /Vj−1 ≅ (Vj /U )/(Vj−1 /U )
is simple. Therefore, we have constructed a composition series for V in which
U = Ut appears as one of the terms.
For the lengths we get ℓ(V ) = t + r = ℓ(U ) + ℓ(V /U ), as claimed.
(c) By part (b) we have ℓ(U ) = ℓ(V ) − ℓ(V /U ) ≤ ℓ(V ). Moreover, if U ≠ V then
V /U is non-zero; so ℓ(V /U ) > 0 and ℓ(U ) = ℓ(V ) − ℓ(V /U ) < ℓ(V ).
The Jordan–Hölder theorem shows that every module which has a composition
series can be built from simple modules. Therefore, it is a fundamental problem of
representation theory to understand what the simple modules of a given algebra are.
Recall from Example 2.25 the following notion. Let A be a K-algebra and V an
A-module. Then for every v ∈ V we set AnnA (v) = {a ∈ A | av = 0}, and call
this the annihilator of v in A. We have seen in Example 2.25 that for every v ∈ V
there is an isomorphism of A-modules A/AnnA (v) ≅ Av. In the context of simple
modules this takes the following form, which we restate here for convenience.
Lemma 3.18. Let A be a K-algebra and S a simple A-module. Then for every
non-zero s ∈ S we have that S ≅ A/AnnA (s) as A-modules.
Proof. As in Example 2.25 we consider the A-module homomorphism
ψ : A → S , ψ(a) = as. Since S is simple and s non-zero, this map is surjective by
Lemma 3.3, and by definition the kernel is AnnA (s). So the isomorphism theorem
yields A/AnnA (s) ≅ im(ψ) = As = S.
This implies in particular that if an algebra has a composition series, then it can
only have finitely many simple modules:
Theorem 3.19. Let A be a K-algebra which has a composition series as an A-
module. Then every simple A-module occurs as a composition factor of A. In
particular, there are only finitely many simple A-modules, up to isomorphism.
Proof. By Lemma 3.18 we know that if S is a simple A-module then S ≅ A/I for
some A-submodule I of A. By Proposition 3.17 there is some composition series
of A in which I is one of the terms. Since A/I is simple there are no further A-
submodules between I and A (see Lemma 3.4). This means that I can only appear as
the penultimate entry in this composition series, and S ≅ A/I , so it is a composition
factor of A.
For finite-dimensional algebras we have an interesting consequence.
Corollary 3.20. Let A be a finite-dimensional K-algebra. Then every simple A-
module is finite-dimensional.
Proof. Suppose S is a simple A-module, then by Lemma 3.18, we know that S is
isomorphic to a factor module of A. Hence if A is finite-dimensional, so is S.
Remark 3.21.
(a) In Theorem 3.19 the assumption that A has a composition series as an A-module
is essential. For instance, consider the polynomial algebra A = K[X] when
K is infinite. There are infinitely many simple A-modules which are pairwise
non-isomorphic. In fact, take a one-dimensional vector space V = span{v}
and make it into a K[X]-module Vλ by setting X · v = λv for λ ∈ K. For
λ = μ the modules Vλ and Vμ are not isomorphic, see Example 2.23; however
they are 1-dimensional and hence simple. In particular, we can conclude from
Theorem 3.19 that A = K[X] cannot have a composition series as an A-module.
(b) In Corollary 3.20 the assumption on A is essential. Infinite-dimensional algebras
can have simple modules of infinite dimension. For instance, let Q be the two-
loop quiver with one vertex and two loops,
and let A = KQ be the path algebra. Exercise 3.6 constructs for each n ∈ N
a simple A-module of dimension n and even an infinite-dimensional simple A-
module.
Example 3.22. Let A = Mn (K), the algebra of n × n-matrices over K. We have
seen a composition series of A in Example 3.7, in which every composition factor is
isomorphic to the natural module K n . So by Theorem 3.19 the algebra Mn (K) has
precisely one simple module, up to isomorphism, namely the natural module K n of
dimension n.
3.4 Finding All Simple Modules
We will now determine the simple modules for an algebra A of the form K[X]/I
where I is a non-zero ideal with I ≠ K[X]; hence I = (f ) where f is a polynomial
of positive degree. Note that this does not require us to know a composition series
of A, in fact we could have done this already earlier, after Lemma 3.4.
Proposition 3.23. Let A = K[X]/(f ) with f ∈ K[X] of positive degree.
(a) The simple A-modules are up to isomorphism precisely the A-modules
K[X]/(h) where h is an irreducible polynomial dividing f .
(b) Write f = f1^{a1} . . . fr^{ar} , with ai ∈ N, as a product of irreducible polynomials
fi ∈ K[X] which are pairwise coprime. Then A has precisely r simple modules,
up to isomorphism, namely K[X]/(f1 ), . . . , K[X]/(fr ).
Proof. (a) First, let h ∈ K[X] be an irreducible polynomial dividing f . Then
K[X]/(h) is an A-module, by Exercise 2.23, with A-action given by

(g + (f )) · (k + (h)) = gk + (h) for g, k ∈ K[X].

Since h is irreducible, the ideal (h) is maximal, and hence K[X]/(h) is a simple
A-module, by Lemma 3.4.
Conversely, let S be any simple A-module. By Lemmas 3.18 and 3.4 we know that
S is isomorphic to A/U where U is a maximal submodule of A. By the submodule
correspondence, see Theorem 2.28, we know U = W/(f ) where W is an ideal of
K[X] containing (f ), that is, W = (h) where h ∈ K[X] and h divides f . Applying
the isomorphism theorem yields

A/U = (K[X]/(f ))/(W/(f )) ≅ K[X]/W.

Isomorphisms preserve simple modules (see Exercise 3.3), so with A/U the module
K[X]/W is also simple. This means that W = (h) is a maximal ideal of K[X] and
then h is an irreducible polynomial.
(b) By part (a), every simple A-module is isomorphic to one of
K[X]/(f1 ), . . . , K[X]/(fr ) (use that K[X] has the unique factorization property,
hence f1 , . . . , fr are the unique irreducible divisors of f , up to multiplication by
units). On the other hand, these A-modules are pairwise non-isomorphic: suppose
ψ : K[X]/(fi ) → K[X]/(fj ) is an A-module homomorphism; we show that for
i ≠ j it is not injective. Write ψ(1 + (fi )) = g + (fj ) and consider the coset
fj + (fi ). Since fi and fj are irreducible and coprime, this coset is not the zero
element in K[X]/(fi ). But it is in the kernel of ψ, since
ψ(fj + (fi )) = ψ((fj + (fi ))(1 + (fi ))) = (fj + (fi ))ψ(1 + (fi ))
= (fj + (fi ))(g + (fj )) = fj g + (fj ),

which is the zero element of K[X]/(fj ). Hence ψ is not injective, and so it is not an isomorphism.
Remark 3.24. We can use this to find a composition series of the algebra
A = K[X]/(f ) as an A-module: Let f = f1 f2 . . . ft with fi ∈ K[X] irreducible,
we allow repetitions (that is, the fi are not necessarily pairwise coprime). This gives
a series of submodules of A

A = K[X]/(f ) ⊃ (f1 )/(f ) ⊃ (f1 f2 )/(f ) ⊃ . . . ⊃ (f1 f2 . . . ft )/(f ) = 0

so that all factor modules are simple. Hence we have found a composition series
of A.
Of course, one would also get other composition series by changing the order of
the irreducible factors. Note that the factorisation of a polynomial into irreducible
factors depends on the field K.
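The count of simple modules in Proposition 3.23(b) can be illustrated computationally. The sketch below (Python, with helper names of our own; the choice f = X^4 − 1 over Q is our example) multiplies out the irreducible factors X − 1, X + 1 and X^2 + 1 and checks that they are pairwise coprime, so Q[X]/(X^4 − 1) has exactly three simple modules.

```python
from fractions import Fraction

# Polynomials over Q as coefficient lists, lowest degree first.
# f = X^4 - 1 = (X - 1)(X + 1)(X^2 + 1), with X^2 + 1 irreducible over Q.

def pmul(p, q):
    """Product of two polynomials."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def pmod(p, q):
    """Remainder of p divided by q (q nonzero)."""
    p = [Fraction(c) for c in p]
    while len(p) >= len(q) and any(p):
        while p and p[-1] == 0:
            p.pop()
        if len(p) < len(q):
            break
        factor = p[-1] / q[-1]
        shift = len(p) - len(q)
        for i, c in enumerate(q):
            p[shift + i] -= factor * c
        p.pop()
    return p

def pgcd(p, q):
    """Euclidean algorithm; returns a gcd up to a unit."""
    while any(q):
        p, q = q, pmod(p, q)
    return p

f1 = [-1, 1]        # X - 1
f2 = [1, 1]         # X + 1
f3 = [1, 0, 1]      # X^2 + 1
f = pmul(pmul(f1, f2), f3)
assert f == [-1, 0, 0, 0, 1]           # X^4 - 1

# pairwise coprime: each gcd is a nonzero constant
for p, q in [(f1, f2), (f1, f3), (f2, f3)]:
    g = pgcd(p, q)
    assert len([c for c in g if c != 0]) == 1 and g[-1] != 0
```

By Proposition 3.23(b), the three factors give the three simple modules Q[X]/(X − 1), Q[X]/(X + 1) and Q[X]/(X^2 + 1).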
Example 3.25.
(1) Over the complex numbers, every polynomial f ∈ C[X] of positive degree
splits into linear factors. Hence every simple C[X]/(f )-module is one-
dimensional.
The same works more generally for K[X]/(f ) when K is algebraically
closed. (Recall that a field K is algebraically closed if every non-constant
polynomial in K[X] is a product of linear factors.)
We will see later, in Corollary 3.38, that this is a special case of a more
general result about commutative algebras over algebraically closed fields.
(2) As an explicit example, let G = g be a cyclic group of order n, and let
A = CG be the group algebra over C. Then A is isomorphic to the factor
algebra C[X]/(X^n − 1), see Example 1.27. The polynomial X^n − 1 has n distinct
roots in C, namely e^{2kπi/n} where 0 ≤ k ≤ n − 1, so it splits into linear factors
of the form

X^n − 1 = ∏_{k=0}^{n−1} (X − e^{2kπi/n}).
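This splitting can be confirmed numerically; the short Python sketch below (our illustration) expands the product over the 6th roots of unity and compares coefficients with X^6 − 1.

```python
import cmath

# Expand prod_{k=0}^{n-1} (X - e^{2*pi*i*k/n}) for n = 6 and compare the
# coefficients (lowest degree first) with those of X^n - 1.

n = 6
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

coeffs = [1 + 0j]                 # the constant polynomial 1
for r in roots:
    shifted = [0j] + coeffs                   # X * p
    scaled = [r * c for c in coeffs] + [0j]   # r * p, padded to same length
    coeffs = [s - t for s, t in zip(shifted, scaled)]   # (X - r) * p

expected = [-1] + [0] * (n - 1) + [1]         # X^6 - 1
assert all(abs(c - e) < 1e-9 for c, e in zip(coeffs, expected))
```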
where P denotes the set of paths in Q. The A-modules S1 , . . . , Sn are pairwise non-
isomorphic. In fact, let ϕ : Si → Sj be an A-module homomorphism for some
i ≠ j . Then there exists a scalar λ ∈ K such that ϕ(ei + Ji ) = λej + Jj . Hence we
get
s = 1 A s = e1 s + e2 s + . . . + en s
and there is some i such that ei s ≠ 0. By Lemma 3.3 we know that S = Aei s. We
have the A-module homomorphism

ψ : Aei → S , ψ(a) = as,

which is surjective; hence by the isomorphism theorem

S ≅ Aei /ker(ψ).
1 ←− 2 ←− . . . ←− n − 1 ←− n
Theorem 3.26 shows that KQ, and hence Tn (K), has precisely n simple modules, up
to isomorphism. However, we have already seen n pairwise non-isomorphic simple
Tn (K)-modules in Example 3.14. Thus, these are all simple Tn (K)-modules, up to
isomorphism.
In this section we will describe the simple modules for direct products
A = A1 × . . . × Ar of algebras. We will show that the simple A-modules are
precisely the simple Ai -modules, viewed as A-modules by letting the other factors
act as zero. We have seen a special case in Example 3.14.
Let A = A1 × . . . × Ar . The algebra A contains εi := (0, . . . , 0, 1Ai , 0, . . . , 0)
for 1 ≤ i ≤ r, and εi commutes with all elements of A. Moreover, εi εj = 0 for
i ≠ j and also εi² = εi ; and we have
1A = ε1 + . . . + εr .
We also have the projections πi : A → Ai , πi (a1 , . . . , ar ) = ai .
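The stated relations between the εi are purely componentwise computations, as the following Python sketch illustrates (an illustration of ours, modelling each factor Ai by the field K inside Python; the names mul, add, eps are not from the text).

```python
# Model A = A_1 x ... x A_r (r = 3, each A_i a copy of K) by tuples with
# componentwise operations, and check that the eps_i are orthogonal
# idempotents summing to 1_A.

r = 3

def mul(a, b):
    return tuple(x * y for x, y in zip(a, b))

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

one = (1,) * r
zero = (0,) * r
eps = [tuple(1 if j == i else 0 for j in range(r)) for i in range(r)]

for i in range(r):
    assert mul(eps[i], eps[i]) == eps[i]          # idempotent: eps_i^2 = eps_i
    for j in range(r):
        if i != j:
            assert mul(eps[i], eps[j]) == zero    # orthogonal: eps_i eps_j = 0

total = zero
for e in eps:
    total = add(total, e)
assert total == one                               # 1_A = eps_1 + ... + eps_r
```

The identical componentwise computation proves the relations for arbitrary factors Ai .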
Proof. By the above, we only have to show that S is also simple as an A-module.
This is a special case of Lemma 3.5.
We will now show that every simple A-module is of the form as in Proposi-
tion 3.29. For this we will more generally describe A-modules and we use the
elements εi and their properties.
Lemma 3.30. Let K be a field and A = A1 × . . . × Ar a direct product of
K-algebras. Moreover, let εi := (0, . . . , 0, 1Ai , 0, . . . , 0) for 1 ≤ i ≤ r. Then
for every A-module M the following holds.
(a) Let Mi := εi M, then Mi is an A-submodule of M, and M = M1 ⊕ . . . ⊕ Mr ,
the direct sum of these submodules.
(b) If M is a simple A-module then there is precisely one i ∈ {1, . . . , r} such that
Mi ≠ 0 and this Mi is a simple A-module.
Proof. (a) We have Mi = {εi m | m ∈ M}. Since εi commutes with all elements of
A, we see that if a ∈ A then a(εi m) = εi am, therefore each Mi is an A-submodule
of M.
To prove M is a direct sum, we first see that M = M1 + . . . + Mr . In fact, for
every m ∈ M we have
m = 1A m = (ε1 + . . . + εr )m = ε1 m + . . . + εr m ∈ M1 + . . . + Mr .
Secondly, we have to check that Mi ∩ (∑_{j≠i} Mj ) = 0 for each i ∈ {1, . . . , r}. To
this end, suppose x := εi m = ∑_{j≠i} εj mj ∈ Mi ∩ (∑_{j≠i} Mj ). Since εi εi = εi and
εi εj = 0 for j ≠ i we then have

x = εi x = εi (∑_{j≠i} εj mj ) = ∑_{j≠i} εi εj mj = 0.
The Jordan–Hölder Theorem shows that simple modules are the ‘building blocks’
for arbitrary finite-dimensional modules. So it is important to understand simple
modules. The first question one might ask is, given two simple modules, how can
we find out whether or not they are isomorphic? This is answered by Schur’s lemma,
which we will now present. Although it is elementary, Schur’s lemma has many
important applications.
Theorem 3.33 (Schur’s Lemma). Let A be a K-algebra where K is a field.
Suppose S and T are simple A-modules and φ : S −→ T is an A-module
homomorphism. Then the following holds.
(a) Either φ = 0, or φ is an isomorphism. In particular, for every simple A-module
S the endomorphism algebra EndA (S) is a division algebra.
(b) Suppose S = T , and S is finite-dimensional, and let K be algebraically closed.
Then φ = λ idS for some scalar λ ∈ K.
Proof. (a) Suppose φ is non-zero. The kernel ker(φ) is an A-submodule of S and
ker(φ) ≠ S since φ ≠ 0. But S is simple, so ker(φ) = 0 and φ is injective.
Similarly, the image im(φ) is an A-submodule of T , and T is simple. Since
φ ≠ 0, we know im(φ) ≠ 0 and therefore im(φ) = T . So φ is also surjective,
and we have proved that φ is an isomorphism.
The second statement is just a reformulation of the first one, using the definition
of a division algebra (see Definition 1.7).
Exercise 3.2. Let A be a K-algebra. Show that the centre Z(A) is a subalgebra of A.
Example 3.36.
(1) By definition, a K-algebra A is commutative if and only if Z(A) = A. So in
some sense, the size of the centre provides a ‘measure’ of how far an algebra is
from being commutative.
(2) For any n ∈ N the centre of the matrix algebra Mn (K) has dimension 1, it is
spanned by the identity matrix. The proof of this is Exercise 3.16.
Lemma 3.37. Let K be an algebraically closed field, and let A be a K-algebra.
Suppose that S is a finite-dimensional simple A-module. Then for every z ∈ Z(A)
there is some scalar λz ∈ K such that zs = λz s for all s ∈ S.
Proof. We consider the map ρ : S → S defined by ρ(s) = zs. One checks that it is a
K-linear map. Moreover, it is an A-module homomorphism: using that z commutes
with every element a ∈ A we have ρ(as) = z(as) = (za)s = (az)s = a(zs) = aρ(s).
The assumptions allow us to apply part (b) of Schur’s lemma, giving some λz ∈ K
such that ρ = λz idS , that is, zs = λz s for all s ∈ S.
Corollary 3.38. Let K be an algebraically closed field and let A be a commutative
algebra over K. Then every finite-dimensional simple A-module S is 1-dimensional.
Proof. Since A is commutative we have A = Z(A), so by Lemma 3.37, every a ∈ A
acts by scalar multiplication on S. For every 0 ≠ s ∈ S this implies that span{s}
is a (1-dimensional) A-submodule of S. But S is simple, so S = span{s} and S is
1-dimensional.
Remark 3.39. Both assumptions in Corollary 3.38 are needed.
(1) If the field is not algebraically closed, simple modules of a commutative algebra
need not be 1-dimensional. For example, let A = R[X]/(X2 + 1), a 2-
dimensional commutative R-algebra. The polynomial X2 +1 is irreducible over
R, and hence by Proposition 3.23, we know that A is simple as an A-module,
so A has a 2-dimensional simple module.
(2) The assumption that S is finite-dimensional is needed. As an example, consider
the commutative C-algebra A = C(X) as an A-module, as in Remark 3.34.
This is a simple module, but clearly not 1-dimensional.
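Point (1) can be made concrete: in the basis {1 + (X² + 1), X + (X² + 1)} of A = R[X]/(X² + 1), multiplication by X is the rotation matrix J below, which satisfies J² = −I and has no real eigenvalue, so no 1-dimensional R-subspace is invariant. A small Python sketch of this computation (ours, not from the text):

```python
# X acts on the 2-dimensional R-vector space R[X]/(X^2 + 1) by the matrix J.

J = [[0, -1],
     [1, 0]]

def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

# J^2 = -I, mirroring X^2 = -1 in R[X]/(X^2 + 1)
assert matmul(J, J) == [[-1, 0], [0, -1]]

# characteristic polynomial t^2 - tr(J) t + det(J) = t^2 + 1 has negative
# discriminant, hence J has no real eigenvalue
trace = J[0][0] + J[1][1]
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
assert trace * trace - 4 * det < 0
```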
We will see more applications of Schur’s lemma later. In particular, it will be
crucial for the proof of the Artin–Wedderburn structure theorem for semisimple
algebras.
EXERCISES
is a composition series of W .
3.4. Find a composition series for A as an A-module, where A is the 3-subspace
algebra
A := { ⎛ a1 b1 b2 b3 ⎞
       ⎜ 0  a2 0  0  ⎟
       ⎜ 0  0  a3 0  ⎟  |  ai , bj ∈ K } ⊆ M4 (K)
       ⎝ 0  0  0  a4 ⎠
3.5. Let A be the ring

A = ⎛ C C ⎞
    ⎝ 0 R ⎠ ,

that is, A consists of all upper triangular matrices in M2 (C) with (2, 2)-entry in R.
(a) Show that A is an algebra over R (but not over C). What is its dimension
over R?
(b) Consider A as an A-module. Check that

⎛ C 0 ⎞      and      ⎛ 0 C ⎞
⎝ 0 0 ⎠               ⎝ 0 0 ⎠

are A-submodules of A. Show that they are simple A-modules and that they
are isomorphic.
(c) Find a composition series of A as an A-module.
(d) Determine all simple A-modules, up to isomorphism, and their dimen-
sions over R.
3.6. Let Q be the quiver with one vertex and two loops denoted x and y. For any
field K consider the path algebra KQ. Note that for any choice of n × n-
matrices X, Y over K, taking x, y ∈ KQ to X, Y in Mn (K) extends to an
algebra homomorphism, hence a representation of KQ, that is, one gets a
KQ-module.
(a) Let V3 := K 3 be the 3-dimensional KQ-module on which x and y act
via the matrices
    ⎛ 0 1 0 ⎞             ⎛ 0 0 0 ⎞
X = ⎜ 0 0 1 ⎟   and   Y = ⎜ 1 0 0 ⎟ .
    ⎝ 0 0 0 ⎠             ⎝ 0 1 0 ⎠
3.16. Let D be a division algebra over K. Let A be the K-algebra Mn (D) of all
n × n-matrices with entries in D. Find the centre Z(A) of A.
3.17. Suppose A is a finite-dimensional algebra over a finite field K, and S is a
(finite-dimensional) simple A-module. Let D := EndA (S). Show that then D
must be a field. More generally, let D be a finite-dimensional division algebra
over a finite field K. Then D must be commutative, hence is a field.
3.18. Let A be a K-algebra and M an A-module of finite length ℓ(M). Show that
ℓ(M) is the maximal length of a chain
M0 ⊂ M1 ⊂ . . . ⊂ Mr−1 ⊂ Mr = M
0 ⊂ M1 ⊂ M1 ⊕ M2 ⊂ . . . ⊂ M1 ⊕ M2 ⊕ . . . ⊕ Mn−1 ⊂ M.
(M1 ⊕ . . . ⊕ Mj )/(M1 ⊕ . . . ⊕ Mj −1 ) ≅ Mj
as A-modules.
(b) Explain briefly how to construct a composition series for M if one is
given a composition series of Mj for each j .
3.21. Let Q be a quiver without oriented cycles, so the path algebra A = KQ is
finite-dimensional. Let Q0 = {1, 2, . . . , n}.
(a) For a vertex i of Q, let r be the maximal length of a path in Q with
starting vertex i. Recall Aei has a sequence of submodules
In the previous chapter we have seen that simple modules are the ‘building
blocks’ for arbitrary finite-dimensional modules. One would like to understand how
modules are built up from simple modules. In this chapter we study modules which
are direct sums of simple modules; this leads to the theory of semisimple modules.
If an algebra A, viewed as an A-module, is a direct sum of simple modules, then
surprisingly, every A-module is a direct sum of simple modules. In this case, A
is called a semisimple algebra. We will see later that semisimple algebras can be
described completely; this is the famous Artin–Wedderburn theorem. Semisimple
algebras (and hence semisimple modules) occur in many places in mathematics;
for example, as we will see in Chap. 6, many group algebras of finite groups are
semisimple.
In this chapter, as an exception, we deal with arbitrary direct sums of modules, as
introduced in Definition 2.15. The results have the same formulation, independent
of whether we take finite or arbitrary direct sums, and this is an opportunity to
understand a result which does not have finiteness assumptions. The only new tool
necessary is Zorn’s lemma.
This section deals with modules which can be expressed as a direct sum of simple
submodules. Recall Definition 2.15 for the definition of direct sums.
We assume throughout that K is a field.
Definition 4.1. Let A be a K-algebra. An A-module V ≠ 0 is called semisimple if
V is the direct sum of simple submodules, that is, there exist simple submodules Si ,
i ∈ I , such that V = ⊕i∈I Si .
Example 4.2.
(1) Every simple module is semisimple, by definition.
(2) Consider the field K as a 1-dimensional algebra A = K. Then A-modules are
the same as K-vector spaces and submodules are the same as K-subspaces.
Recall from linear algebra that every vector space V has a basis. Take a basis
{bi | i ∈ I } of V where I is some index set which may or may not be finite,
and set Si := span{bi }. Then Si is a simple A-submodule of V since it is 1-
dimensional, and we have V = ⊕i∈I Si , since every element of V has a unique
expression as a (finite) linear combination of the basis vectors. This shows that
when the algebra is the field K then every non-zero K-module is semisimple.
(3) Let A = Mn (K) and consider V = A as an A-module. We know from
Exercise 2.5 that V = C1 ⊕ C2 ⊕ . . . ⊕ Cn , where Ci is the space of matrices
which are zero outside the i-th column. We have also seen in Exercise 3.1 that
each Ci is isomorphic to K n and hence is a simple A-module. So A = Mn (K)
is a semisimple A-module.
(4) Consider again the matrix algebra Mn (K), and the natural module V = K n . As
we have just observed, V is a simple Mn (K)-module, hence also a semisimple
Mn (K)-module.
However, we can also consider V = K n as a module for the alge-
bra of upper triangular matrices A = Tn (K). Then by Exercise 2.14 the
A-submodules of K n are given by the subspaces Vi for i = 0, 1, . . . , n, where
Vi = {(x1 , . . . , xi , 0, . . . , 0)^t | x1 , . . . , xi ∈ K}. Hence the A-submodules of V form a
chain
0 = V0 ⊂ V1 ⊂ . . . ⊂ Vn−1 ⊂ Vn = K n .
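That each Vi really is a Tn (K)-submodule is a one-line matrix computation; here is a Python sketch for n = 3 (our illustration, with an arbitrary upper triangular matrix).

```python
# Check that an upper triangular matrix maps a vector supported in the first
# i coordinates to another such vector, so each V_i is a T_n(K)-submodule.

n = 3

def apply(a, v):
    """Matrix-vector product."""
    return [sum(a[r][c] * v[c] for c in range(n)) for r in range(n)]

upper = [[1, 2, 3],
         [0, 4, 5],
         [0, 0, 6]]   # an element of T_3(K)

for i in range(n + 1):
    v = [1] * i + [0] * (n - i)                  # a vector in V_i
    w = apply(upper, v)
    assert all(w[r] == 0 for r in range(i, n))   # the image stays in V_i
```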
¹ P. J. Cameron, Sets, Logic and Categories. Springer Undergraduate Mathematics Series. Springer-Verlag London, Ltd., London, 1999. x+180 pp.
Am = Am ∩ V = Am ∩ (U ⊕ C) = U ⊕ (Am ∩ C),
where the last equality holds since U is contained in Am. It follows now by the
isomorphism theorem that Am/U ≅ Am ∩ C, which is simple and is also a
submodule of Am.
Proof of Theorem 4.3 in general. (1) ⇒ (2). Consider families of simple submod-
ules of V whose sum is a direct sum. We set

M := { (Si )i∈I | Si ⊆ V simple, ∑i∈I Si = ⊕i∈I Si }.
We say that (Si )i∈I ≤ (Tj )j ∈J if every simple module Si appears in the family (Tj )j ∈J . This is a partial order. To
apply Zorn’s lemma, we must show that any chain in M has an upper bound in
M. We can assume that the index sets of the sequences in the chain are also totally
ordered by inclusion. Let I˜ denote the union of the index sets of the families in the
chain. Then the family (Si )i∈I˜ is an upper bound of the chain in M: Suppose (for
a contradiction) that (Si )i∈I˜ does not lie in M, that is, ∑i∈I˜ Si is not a direct sum.
Then for some k ∈ I˜ we have Sk ∩ ∑i≠k Si ≠ 0. This means that there exists a non-
zero element s ∈ Sk which can be expressed as a finite(!) sum s = si1 + . . . + sir
with sij ∈ Sij for some i1 , . . . , ir ∈ I˜. Since I˜ is a union of index sets, the finitely
many indices k, i1 , . . . , ir must appear in some index set I which is an index set
for some term of the chain in M. But then ∑i∈I Si ≠ ⊕i∈I Si , contradicting the
assumption that (Si )i∈I ∈ M. So we have shown that every chain in the partially
ordered set M has an upper bound in M. Now Zorn’s lemma implies
that M has
a maximal element (Sj )j ∈J . In particular, U := ∑j ∈J Sj = ⊕j ∈J Sj . Now we
continue as in the first version of the proof: By (1) there is a submodule C of V
such that V = U ⊕ C. If C is non-zero then by Lemma 4.6, it contains a simple
submodule S. Since U ∩ C = 0, we have U ∩ S = 0 and hence U + S = U ⊕ S.
This means that the family (Sj )j ∈J ∪ {S} lies in M, contradicting the maximality
of the family (Sj )j ∈J . Therefore, C = 0 and V = U = ⊕j ∈J Sj is a direct sum of simple
submodules, that is, (2) holds.
(3) ⇒ (1). Let U ⊆ V be a submodule of V . Consider the set
By part (a), each ϕ(Si ) is either zero, or is a simple A-module. We can ignore the
ones which are zero, and get that W is a sum of simple A-modules and hence is
semisimple, using again Theorem 4.3.
Part (b) follows now, by applying (∗) to ϕ and also to the inverse isomorphism
ϕ −1 .
(c) Suppose that V is a semisimple A-module, and U ⊆ V an A-submodule. We
start by dealing with the factor module V /U , we must show that if V /U is non-zero
then it is semisimple. Let π be the canonical A-module homomorphism
π : V → V /U , π(v) = v + U.
C ⊆ V such that V = U ⊕ C. This implies that U ≅ V /C. But the non-zero factor
module V /C is semisimple by the first part of (c), and then by (b) we deduce that
U is also semisimple.
(d) Write V := ⊕i∈I Vi and consider the inclusion maps ιi : Vi → V . These
are injective A-module homomorphisms; in particular, Vi ≅ im(ιi ) ⊆ V are A-
submodules.
Suppose that V is semisimple. Then by parts (b) and (c) each Vi is semisimple,
as it is isomorphic to the non-zero submodule im(ιi ) of the semisimple module V .
Conversely, suppose that each Vi , i ∈ I , is a semisimple A-module. By
Theorem 4.3 we can write Vi as a sum of simple submodules, say Vi = ∑j ∈Ji Sij
(for some index sets Ji ). On the other hand we have that V = ∑i∈I ιi (Vi ),
since every element of the direct sum has only finitely many non-zero entries, see
Definition 2.17. Combining these, we obtain that

V = ∑i∈I ιi (Vi ) = ∑i∈I ιi ( ∑j ∈Ji Sij ) = ∑i∈I ∑j ∈Ji ιi (Sij )
and V is a sum of simple A-submodules (the ιi (Sij ) are simple by part (a)). Hence
V is semisimple by Theorem 4.3.
In Example 4.2 we have seen that for the 1-dimensional algebra A = K, every
non-zero A-module is semisimple. We would like to describe algebras for which
all non-zero modules are semisimple. If A is such an algebra, then in particular
A viewed as an A-module is semisimple. Surprisingly, the converse holds, as we
will see soon: If A as an A-module is semisimple, then all non-zero A-modules are
semisimple. Therefore, we make the following definition.
Definition 4.8. A K-algebra A is called semisimple if A is semisimple as an A-
module.
We have already seen some semisimple algebras.
Example 4.9. Every matrix algebra Mn (K) is a semisimple algebra, see Exam-
ple 4.2.
Remark 4.10. By definition, a semisimple algebra A is a direct sum A = ⊕i∈I Si
of simple A-submodules. Luckily, in this situation the index set I must be finite.
Indeed, the identity element can be expressed as a finite sum 1A = ∑i∈I si
with si ∈ Si . This means that there is a finite subset {i1 , . . . , ik } ⊆ I such that
1A ∈ Si1 ⊕ . . . ⊕ Sik . Then A = A1A ⊆ Si1 ⊕ . . . ⊕ Sik , that is, A = Si1 ⊕ . . . ⊕ Sik
is a direct sum of finitely many simple A-submodules. In particular, a semisimple
algebra has finite length as an A-module, and every simple A-module is isomorphic
to one of
the modules Si1 , . . . , Sik which appear in the direct sum decomposition of A (see
Theorem 3.19).
When A is a semisimple algebra, then we can understand arbitrary non-zero A-
modules; they are just direct sums of simple modules, as we will now show.
Theorem 4.11. Let A be a K-algebra. Then the following assertions are equiva-
lent.
(i) A is a semisimple algebra.
(ii) Every non-zero A-module is semisimple.
Proof. The implication (ii) ⇒ (i) follows by Definition 4.8.
Conversely, suppose that A is semisimple as an A-module. Take an arbitrary
non-zero A-module V . We have to show that V is a semisimple A-module. As a
K-vector space V has a basis, say {vi | i ∈ I }. With the same index set I , we take
the A-module

⊕i∈I A := {(ai )i∈I | ai ∈ A, only finitely many ai are non-zero},

the direct sum of copies of A (see Definition 2.17). We consider the map

ψ : ⊕i∈I A → V , (ai )i∈I ↦ ∑i∈I ai vi .
a · m = ϕ(a)m (for m ∈ M, a ∈ A)
by Example 2.4. Since A is semisimple, the A-module M can be written as
M = ∑i∈I Si where the Si are simple A-modules. We are done if we show that
each Si is a B-submodule of M and that Si is simple as a B-module.
First, Si is a non-zero subspace of M. Let b ∈ B and v ∈ Si , we must show that
bv ∈ Si . Since ϕ is surjective we have b = ϕ(a) for some a ∈ A, and then
a · v = ϕ(a)v = bv
(a + I )v = av (a ∈ A, v ∈ V ).
The following shows that with this correspondence, semisimple modules correspond
to semisimple modules.
Theorem 4.17. Let A be a K-algebra, I ⊂ A a two-sided ideal of A with I ≠ A,
and let B = A/I be the factor algebra. The following are equivalent for any B-
module V .
(i) V is a semisimple B-module.
(ii) V is a semisimple A-module with I V = 0.
Proof. First, suppose that (i) holds. By Theorem 4.3, V = ∑j ∈J Sj , the sum of
simple B-submodules of V . By Lemma 2.37, we can also view the Sj as A-modules
with I Sj = 0. Moreover, they are also simple as A-modules, by Lemma 3.5. This
shows that V is a sum of simple A-modules, and therefore it is a semisimple A-
module, by Theorem 4.3.
Conversely, suppose that (ii) holds, that is, V is a semisimple A-module with
I V = 0. By Theorem 4.3 we know V = ∑j ∈J Sj , a sum of simple A-submodules
of V . Then I Sj ⊆ I V = 0, and Lemma 2.37 says that we can view the Sj as
B-modules. One checks that these are also simple as B-modules (with the same
reasoning as in Lemma 3.5). So as a B-module, V = ∑j ∈J Sj , a sum of simple
B-modules, and hence is a semisimple B-module by Theorem 4.3.
Corollary 4.18. Let A1 , . . . , Ar be finitely many K-algebras. Then the direct
product A1 ×. . .×Ar is a semisimple algebra if and only if each Ai for i = 1, . . . , r
is a semisimple algebra.
Proof. Set A = A1 × . . . × Ar . Suppose first that A is semisimple. For any
i ∈ {1, . . . , r}, the projection πi : A → Ai is a surjective algebra homomorphism.
By Corollary 4.12 each Ai is a semisimple algebra.
Conversely, suppose that all algebras A1 , . . . , Ar are semisimple. We want to
use Theorem 4.11, that is, we have to show that every non-zero A-module is
semisimple. Let M ≠ 0 be an A-module. We use Lemma 3.30, which gives that
M = M1 ⊕ M2 ⊕ . . . ⊕ Mr , where Mi = εi M, with εi = (0, . . . , 0, 1Ai , 0, . . . , 0),
and Mi is an A-submodule of M. Then Mi is also an Ai -module, since the kernel of
πi annihilates Mi (using Lemma 2.37). We can assume that Mi ≠ 0; otherwise we
can ignore this summand in M = M1 ⊕ M2 ⊕ . . . ⊕ Mr . Then by assumption and
Theorem 4.11, Mi is semisimple as a module for Ai , and then by Theorem 4.17 it
is also semisimple as an A-module. Now part (d) of Corollary 4.7 shows that M is
semisimple as an A-module.
Example 4.19. Let K be a field. We have already seen that matrix algebras Mn (K)
are semisimple (see Example 4.9). Corollary 4.18 now shows that arbitrary finite
direct products Mn1 (K) × . . . × Mnr (K) of matrix algebras are semisimple algebras.
J (A) = ⋂_{i=1}^{r} (fi )/(f ) = (f1 f2 . . . fr )/(f ).
I J = span{xy | x ∈ I, y ∈ J }
and this is also a left ideal of A. In particular, for any left ideal I of A we define
powers inductively by setting I 0 = A, I 1 = I and I k = I k−1 I for all k ≥ 2. Thus
for every left ideal I we get a chain of left ideals of the form
A ⊇ I ⊇ I2 ⊇ I3 ⊇ . . .
is a two-sided ideal of A.
Theorem 4.23. Let K be a field and A a K-algebra which has a composition series
as an A-module (that is, A has finite length as an A-module). Then the following
holds for the Jacobson radical J (A).
(a) J (A) is the intersection of finitely many maximal left ideals.
(b) We have that

J (A) = ⋂_{S simple} AnnA (S),
that is, J (A) consists of those a ∈ A such that aS = 0 for every simple A-
module S.
(c) J (A) is a two-sided ideal of A.
(d) J (A) is a nilpotent ideal; we have J (A)n = 0 where n is the length of a
composition series of A as an A-module.
(e) The factor algebra A/J (A) is a semisimple algebra.
(f) Let I ⊆ A be a two-sided ideal with I ≠ A such that the factor algebra A/I is
semisimple. Then J (A) ⊆ I .
(g) A is a semisimple algebra if and only if J (A) = 0.
Remark 4.24. The example of a polynomial algebra K[X] shows that the assump-
tion of finite length in the theorem is needed. We have seen in Example 4.22
that J (K[X]) = 0. However, K[X] is not semisimple, see Example 4.13. So, for
instance, part (g) of Theorem 4.23 is not valid for K[X].
Proof. (a) Suppose that M1 , . . . , Mr are finitely many maximal left ideals of A.
Hence we have that J (A) ⊆ M1 ∩ . . . ∩ Mr . If we have equality then we are done.
Otherwise there exists another maximal left ideal Mr+1 such that
M1 ∩ . . . ∩ Mr ⊃ M1 ∩ . . . ∩ Mr ∩ Mr+1
and this is a proper inclusion. Repeating the argument gives a sequence of left ideals
of A,
A ⊃ M1 ⊃ M1 ∩ M2 ⊃ M1 ∩ M2 ∩ M3 ⊃ . . .
Each quotient is non-zero, so the process must stop at the latest after n steps where n
is the length of a composition series of A, see Exercise 3.18. This means that J (A)
is the intersection of finitely many maximal left ideals.
(b) We first prove that the intersection of the annihilators of simple modules is
contained in J (A). Take an element a ∈ A such that aS = 0 for all simple A-
modules S. We want to show that a belongs to every maximal left ideal of A.
Suppose M is a maximal left ideal, then A/M is a simple A-module. Therefore
by assumption a(A/M) = 0, so that a + M = a(1A + M) = 0 and hence a ∈ M.
Since M is arbitrary, this shows that a is in the intersection of all maximal left ideals,
that is, a ∈ J (A).
Assume (for a contradiction) that the inclusion is not an equality; then there is a
simple A-module S such that J (A)S ≠ 0. Let s ∈ S with J (A)s ≠ 0. Then J (A)s
is an A-submodule of S (since J (A) is a left ideal), and it is non-zero. Because
S is simple we get that J (A)s = S. In particular, there exists an x ∈ J (A) such
that xs = s, that is, x − 1A ∈ AnnA (s). Now, AnnA (s) is a maximal left ideal
(since A/AnnA (s) ≅ S, see Lemma 3.18). Hence J (A) ⊆ AnnA (s). Therefore we
have x ∈ AnnA (s) and x − 1A ∈ AnnA (s) and it follows that 1A ∈ AnnA (s), a
contradiction since s ≠ 0.
(c) This follows directly from part (b), together with Exercise 4.2.
(d) Take a composition series of A as an A-module, say
0 = V0 ⊂ V1 ⊂ . . . ⊂ Vn−1 ⊂ Vn = A.
We will show that J (A)n = 0. For each i with 1 ≤ i ≤ n, the factor module
Vi /Vi−1 is a simple A-module. By part (b) it is therefore annihilated by J (A), and
this implies that J (A)Vi ⊆ Vi−1 for all i = 1, . . . , n. Hence J (A)V1 ⊆ V0 = 0,
and inductively we see that J (A)r Vr = 0 for all r. In particular, J (A)n A = 0, and
this implies J (A)n = 0, as required.
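The nilpotency in part (d) can be seen concretely for the upper triangular matrices A = T3 (K): the strictly upper triangular matrices form an ideal N with N³ = 0 (this N turns out to be J (A); compare Exercise 4.12, though here we only verify the nilpotency). A Python sketch of ours:

```python
# N = strictly upper triangular 3x3 matrices: N^2 is nonzero but N^3 = 0.

def matmul(a, b):
    n = len(a)
    return [[sum(a[r][k] * b[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

N = [[0, 1, 2],
     [0, 0, 3],
     [0, 0, 0]]

N2 = matmul(N, N)
N3 = matmul(N2, N)
assert any(any(row) for row in N2)                   # N^2 is not yet zero
assert N3 == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]       # N^3 = 0
```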
(e) By Definition 4.8 we have to show that A/J (A) is semisimple as an A/J (A)-
module. According to Theorem 4.17 this is the same as showing that A/J (A) is
semisimple as an A-module. From part (a) we know that J (A) = M1 ∩ . . . ∩ Mr for
finitely many maximal left ideals Mi ; moreover, we can assume that for each i
we have ⋂_{j≠i} Mj ⊈ Mi (otherwise we may remove Mi from the intersection
M1 ∩ . . . ∩ Mr ). We then consider the map

Φ : A/J (A) → A/M1 ⊕ . . . ⊕ A/Mr , Φ(a + J (A)) = (a + M1 , . . . , a + Mr ).

This map is well-defined since J (A) ⊆ Mi for all i, and it is injective since
J (A) = M1 ∩ . . . ∩ Mr . Moreover, it is an A-module homomorphism since the action
on the direct sum is componentwise. It remains to show that Φ is surjective, and hence
an isomorphism; then the claim in part (e) follows since each A/Mi is a simple A-
module. To prove that Φ is surjective, it suffices to show that for each i the element
(0, . . . , 0, 1A + Mi , 0, . . . , 0) is in the image of Φ. Fix some i. By our assumption
we have that Mi is a proper subset of Mi + (⋂_{j≠i} Mj ). Since Mi is maximal this
implies that Mi + (⋂_{j≠i} Mj ) = A. So there exist mi ∈ Mi and y ∈ ⋂_{j≠i} Mj such
that 1A = mi + y. Therefore Φ(y + J (A)) = (0, . . . , 0, 1A + Mi , 0, . . . , 0), as desired.
(f) We have by assumption, A/I = S1 ⊕ . . . ⊕ Sr with finitely many simple
A/I -modules Si , see Remark 4.10. The Si can also be viewed as simple A-modules
(see the proof of Theorem 4.17). From part (b) we get J (A)Si = 0, which implies
J (A)(A/I ) = J (A)(S1 ⊕ . . . ⊕ Sr ) = 0,
Taking the intersection of all these, we get precisely J (A) = ⋂_{i=1}^{n} Ji , which is the
span of all paths in Q of length ≥ 1.
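For a quiver without oriented cycles the nilpotency of this ideal is visible in the arrow-count matrix: its powers count paths of a given length and eventually vanish. A Python sketch for the quiver 1 ←− 2 ←− 3 (the encoding is our own):

```python
# C[i][j] = number of arrows from vertex i+1 to vertex j+1 in Q: 1 <- 2 <- 3.
# (C^L)[i][j] counts paths of length L, and C^3 = 0 since Q has 3 vertices
# and no oriented cycles.

def matmul(a, b):
    n = len(a)
    return [[sum(a[r][k] * b[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

C = [[0, 0, 0],
     [1, 0, 0],
     [0, 1, 0]]

C2 = matmul(C, C)
C3 = matmul(C2, C)
assert sum(sum(row) for row in C2) == 1                  # one path: 3 -> 2 -> 1
assert C3 == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]           # no paths of length 3
```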
Corollary 4.27. Let KQ be a finite-dimensional path algebra. Then KQ is a
semisimple algebra if and only if Q has no arrows, that is, Q is a union of vertices.
In particular, the semisimple path algebras KQ are isomorphic to direct products
K × . . . × K of copies of the field K.
Proof. By Theorem 4.23, KQ is semisimple if and only if J (KQ) = 0. Then the
first statement directly follows from Proposition 4.26. The second statement is easily
verified by mapping each vertex of Q to one of the factors of the direct product.
EXERCISES
4.11. (a) Show that there exist infinitely many irreducible polynomials in K[X].
(Hint: try a variation of Euclid’s famous proof that there are infinitely
many prime numbers.)
(b) Deduce that the Jacobson radical of K[X] is zero.
4.12. For each of the following subalgebras of M3 (K), find the Jacobson radical.
$$A_1 = \begin{pmatrix} * & 0 & 0 \\ 0 & * & 0 \\ 0 & 0 & * \end{pmatrix}, \qquad A_2 = \left\{ \begin{pmatrix} x & y & z \\ 0 & x & 0 \\ 0 & 0 & x \end{pmatrix} \,\middle|\, x, y, z \in K \right\},$$
$$A_3 = \left\{ \begin{pmatrix} x & y & 0 \\ 0 & z & 0 \\ 0 & 0 & x \end{pmatrix} \,\middle|\, x, y, z \in K \right\}, \qquad A_4 = \begin{pmatrix} * & * & 0 \\ 0 & * & 0 \\ 0 & * & * \end{pmatrix}.$$
A = M1 ⊕ M2 ⊕ . . . ⊕ Mr .
εi = εi 1A = εi ε1 + εi ε2 + . . . + εi εr
and therefore
$$\varepsilon_i - \varepsilon_i^2 = \sum_{j \neq i} \varepsilon_i \varepsilon_j.$$
The left-hand side belongs to M_i, and the right-hand side belongs to ⊕_{j≠i} M_j. The
sum A = M_1 ⊕ M_2 ⊕ … ⊕ M_r is direct, therefore M_i ∩ (⊕_{j≠i} M_j) = 0. So ε_i² = ε_i,
which proves part of (a). Moreover, this implies
0 = εi ε1 + . . . + εi εi−1 + εi εi+1 + . . . + εi εr ,
where the summands ε_i ε_j are in M_j for each j ≠ i. Since we have a direct sum,
each of these summands must be zero, and this completes the proof of (a).
(b) We show now that Aεi = Mi . Since εi ∈ Mi and Mi is an A-module, it follows
that Aε_i ⊆ M_i. For the converse, take some m ∈ M_i; then m = m 1_A = mε_1 + … + mε_r, and since mε_j ∈ M_i ∩ M_j = 0 for j ≠ i, this gives m = mε_i ∈ Aε_i.
A = S1 ⊕ S2 ⊕ . . . ⊕ Sr .
Writing a ∈ A as a = α_1 ε_1 + … + α_r ε_r with α_i ∈ K, we define
ψ(a) := (α_1, α_2, …, α_r).
We now show that ψ is an algebra isomorphism. From the definition one sees that
ψ is K-linear. It is also surjective, since ψ(εi ) = (0, . . . , 0, 1, 0, . . . , 0) for each
i. Moreover, it is injective: if ψ(a) = 0, so that all αi are zero, then a = 0, by
definition. It only remains to show that the map ψ is an algebra homomorphism. For
any a, b ∈ A suppose that ψ(a) = (α1 , α2 , . . . , αr ) and ψ(b) = (β1 , β2 , . . . , βr );
then we have
$$ab = \Bigl(\sum_{i} \alpha_i \varepsilon_i\Bigr)\Bigl(\sum_{j} \beta_j \varepsilon_j\Bigr) = \sum_{i,j} (\alpha_i \varepsilon_i)(\beta_j \varepsilon_j) = \sum_{i,j} \alpha_i \beta_j \, \varepsilon_i \varepsilon_j = \sum_{i} \alpha_i \beta_i \, \varepsilon_i = \sum_{i} \beta_i \alpha_i \, \varepsilon_i,$$
where we have used axiom (Alg) from Definition 1.1 and that ε_i ε_j = 0 for i ≠ j, and the last equality
holds since the α_i and β_i are in K and hence commute. This implies that ψ(ab) = (α_1β_1, …, α_rβ_r) = ψ(a)ψ(b).
Finally, it follows from the definition that ψ(1A ) = (1, 1, . . . , 1) = 1K×...×K . This
proves that ψ : A → K × K × . . . × K is an isomorphism of algebras.
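The idempotents ε_i in this proof can be computed explicitly. As a small computational sketch (not part of the text; the setup is an ad hoc example), take K = Q and f = X(X−1)(X−2), whose roots already lie in K, and build the Lagrange interpolation idempotents ε_i = ∏_{j≠i}(X − λ_j)/(λ_i − λ_j) in A = K[X]/(f); the code then checks that they are orthogonal idempotents summing to 1, exactly as used above:

```python
from fractions import Fraction

# A = Q[X]/(f) with f = X(X-1)(X-2); coefficients are ascending lists.
roots = [Fraction(0), Fraction(1), Fraction(2)]
f = [Fraction(0), Fraction(2), Fraction(-3), Fraction(1)]   # X^3 - 3X^2 + 2X

def polymul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def reduce_mod_f(p):
    # long division by the monic polynomial f, then pad to degree < 3
    p = list(p)
    while len(p) >= len(f):
        c, shift = p[-1], len(p) - len(f)
        for i, x in enumerate(f):
            p[shift + i] -= c * x
        p.pop()
    return p + [Fraction(0)] * (3 - len(p))

def eps(i):
    # Lagrange idempotent: eps_i takes value 1 at roots[i] and 0 at the others
    p = [Fraction(1)]
    for j, r in enumerate(roots):
        if j != i:
            p = [c / (roots[i] - r) for c in polymul(p, [-r, Fraction(1)])]
    return reduce_mod_f(p)

e = [eps(i) for i in range(3)]
one = [Fraction(1), Fraction(0), Fraction(0)]

idempotent = all(reduce_mod_f(polymul(e[i], e[i])) == e[i] for i in range(3))
orthogonal = all(reduce_mod_f(polymul(e[i], e[j])) == [0, 0, 0]
                 for i in range(3) for j in range(3) if i != j)
total = reduce_mod_f([e[0][k] + e[1][k] + e[2][k] for k in range(3)])
```

Here A ≅ Q × Q × Q via a ↦ (a(0), a(1), a(2)), mirroring the map ψ of the proposition.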
Remark 5.3.
(1) Proposition 5.2 need not hold if K is not algebraically closed. For example,
consider the commutative R-algebra A = R[X]/(X2 + 1). Since X2 + 1 is
irreducible in R[X], we know from Proposition 3.23 that A as an A-module is
simple, and it is a semisimple algebra, by Proposition 4.14. However, A ≇ R × R,
since A ≅ C is a field, whereas R × R contains non-zero zero divisors.
(2) Proposition 5.2 does not hold for infinite-dimensional algebras, even if the field
K is algebraically closed. We have seen in Remark 3.34 that C(X) is a simple
C(X)-module. In particular, C(X) is a semisimple C-algebra. As in (1), the field
C(X) cannot be isomorphic to a product of copies of C.
106 5 The Structure of Semisimple Algebras: The Artin–Wedderburn Theorem
We want to classify semisimple algebras. The input for this is more general: the first
ingredient is to relate any algebra A to its algebra of A-module endomorphisms. The
second ingredient is the observation that one can view the endomorphisms of a direct
sum of A-modules as an algebra of matrices, where the entries are homomorphisms
between the direct summands. We will discuss these now.
Let A be a K-algebra. For any A-module V we denote by EndA (V ) the K-
algebra of A-module homomorphisms from V to V (see Exercise 2.7). Recall
from Definition 1.6 the definition of the opposite algebra: For any K-algebra B,
the opposite algebra B op has the same K-vector space structure as B and has
multiplication ∗ defined by b ∗ b′ := b′b for any b, b′ ∈ B. The following result
compares an algebra with its endomorphism algebra.
Lemma 5.4. Let K be a field and let A be a K-algebra. Then A is isomorphic to
EndA (A)op as K-algebras.
Proof. For any a ∈ A we consider the right multiplication map
$$r_a : A \to A, \qquad r_a(x) := xa.$$
One sees that this is an A-module homomorphism, so that r_a ∈ End_A(A) for every
a ∈ A.
Conversely, we claim that every element in EndA (A) is of this form, that is, we
have EndA (A) = {ra | a ∈ A}. In fact, let ϕ ∈ EndA (A) and let a := ϕ(1A ); then
for every x ∈ A we have ϕ(x) = ϕ(x 1_A) = x ϕ(1_A) = xa = r_a(x), that is, ϕ = r_a.
We therefore define the map ψ : A → End_A(A) by ψ(a) := r_a.
Then ψ is surjective, as we have just seen. It is also injective: if ψ(a) = ψ(a′) then
for all x ∈ A we have xa = xa′, and taking x = 1_A shows a = a′.
We will now complete the proof of the lemma, by showing that ψ is a
homomorphism of K-algebras. First, the map ψ is K-linear: For every λ, μ ∈ K
and a, b, x ∈ A we have
rλa+μb (x) = x(λa + μb) = λ(xa) + μ(xb) = λra (x) + μrb (x) = (λra + μrb )(x)
(where the second equality uses axiom (Alg) from Definition 1.1). Therefore ψ(λa + μb) = r_{λa+μb} = λ r_a + μ r_b = λ ψ(a) + μ ψ(b).
Moreover, it is clear that ψ(1A ) = idA . Finally, we show that ψ preserves the
multiplication. For every a, b, x ∈ A we have
$$r_{ab}(x) = x(ab) = (xa)b = r_b(r_a(x)) = (r_b \circ r_a)(x),$$
and hence ψ(ab) = r_{ab} = r_b ∘ r_a = ψ(a) ∗ ψ(b) in End_A(A)^op. So ψ : A → End_A(A)^op is an isomorphism of K-algebras.
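The reversal of products under ψ can be checked concretely. The following sketch (not part of the text; A = M_2(Q) and the matrices a, b are ad hoc) encodes each right multiplication r_a as a 4 × 4 matrix with respect to the basis of matrix units and verifies that R(a)R(b) = R(ba), i.e. a ↦ r_a is a homomorphism into the opposite algebra:

```python
n = 2  # A = M_2 over the integers (exact arithmetic)

def matmul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

def vec(X):
    # flatten a 2x2 matrix into a length-4 coordinate vector
    return [X[i][j] for i in range(n) for j in range(n)]

def unit(k):
    # k-th matrix unit E_ij (row k//n, column k%n)
    E = [[0] * n for _ in range(n)]
    E[k // n][k % n] = 1
    return E

def R(a):
    # 4x4 matrix of the right-multiplication map r_a : x -> x a;
    # its columns are the images of the matrix units
    cols = [vec(matmul(unit(k), a)) for k in range(n * n)]
    return [[cols[k][i] for k in range(n * n)] for i in range(n * n)]

a = [[1, 2], [3, 4]]
b = [[0, 1], [5, -2]]

# (r_a o r_b)(x) = (x b) a = x (b a), so R(a) R(b) = R(b a): products reverse
anti_hom = (matmul(R(a), R(b)) == R(matmul(b, a)))
identity_ok = (R([[1, 0], [0, 1]]) ==
               [[1 if i == j else 0 for j in range(4)] for i in range(4)])
```

The check identity_ok corresponds to ψ(1_A) = id_A in the proof above.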
Lemma 5.5. Let A be a K-algebra and let U_1, …, U_r be A-modules. Let E be the set of r × r matrices whose (i, j)-entry is an element of Hom_A(U_j, U_i). Then E becomes a K-algebra with respect to matrix addition and matrix multiplication, where the product of two matrix entries is composition of maps.
Proof. It is clear that matrix addition and scalar multiplication turn E into a K-
vector space (where homomorphisms are added pointwise as usual). Furthermore,
matrix multiplication induces a multiplication on E. To see this, consider the
product of two elements ϕ = (ϕ_{ij}) and ψ = (ψ_{ij}) from E. The product ϕψ has
as its (i, j)-entry the homomorphism Σ_{ℓ=1}^{r} ϕ_{iℓ} ∘ ψ_{ℓj}, which is indeed an element
of Hom_A(U_j, U_i), as needed. The identity element in E is the diagonal matrix
with diagonal entries id_{U_1}, …, id_{U_r}. All axioms follow from the usual rules for
matrix addition and matrix multiplication.
In linear algebra one identifies the algebra of endomorphisms of an n-
dimensional K-vector space with the algebra Mn (K) of n × n-matrices over
K. In analogy, we can identify the algebra of endomorphisms of a direct sum of
A-modules with the matrix algebra as introduced in Lemma 5.5.
Lemma 5.6. Let A be a K-algebra, suppose U_1, …, U_r are A-modules and
V := U_1 ⊕ … ⊕ U_r their direct sum. Then the algebra E from Lemma 5.5 is
isomorphic as a K-algebra to the endomorphism algebra End_A(V).
Proof. Denote by κ_j : U_j → V the inclusion maps and by π_i : V → U_i the projection maps; these are A-module homomorphisms. Define
$$\Theta : \operatorname{End}_A(V) \to E, \qquad \Theta(\beta) := (\pi_i \circ \beta \circ \kappa_j)_{i,j}.$$
For a, b ∈ K and β, γ ∈ End_A(V) the (i, j)-entry of Θ(aβ + bγ) is π_i ∘ (aβ + bγ) ∘ κ_j = a(π_i ∘ β ∘ κ_j) + b(π_i ∘ γ ∘ κ_j),
which is equal to the sum of the (i, j)-entries of the matrices aΘ(β) and bΘ(γ).
Thus, Θ is a K-linear map. Furthermore, it is clear from the definition that
Θ(1_{End_A(V)}) = Θ(id_V) = 1_E,
the diagonal matrix with identity maps on the diagonal, since π_i ∘ κ_j = 0 for i ≠ j
and π_i ∘ κ_i = id_{U_i} for all i. Next we show that Θ is multiplicative. The (i, j)-entry
in the product Θ(β)Θ(γ) is given by
$$\sum_{\ell=1}^{r} \beta_{i\ell} \circ \gamma_{\ell j} = \sum_{\ell=1}^{r} \pi_i \circ \beta \circ \kappa_\ell \circ \pi_\ell \circ \gamma \circ \kappa_j = \pi_i \circ \beta \circ \Bigl(\sum_{\ell=1}^{r} \kappa_\ell \circ \pi_\ell\Bigr) \circ \gamma \circ \kappa_j = \pi_i \circ (\beta \circ \mathrm{id}_V \circ \gamma) \circ \kappa_j = \bigl(\Theta(\beta \circ \gamma)\bigr)_{ij}.$$
To show that Θ is injective, suppose that Θ(γ) = 0, that is, γ_{ij} := π_i ∘ γ ∘ κ_j = 0 for all i, j. Then
$$\gamma = \mathrm{id}_V \circ \gamma \circ \mathrm{id}_V = \Bigl(\sum_{i=1}^{r} \kappa_i \circ \pi_i\Bigr) \circ \gamma \circ \Bigl(\sum_{j=1}^{r} \kappa_j \circ \pi_j\Bigr) = \sum_{i=1}^{r} \sum_{j=1}^{r} \kappa_i \circ (\pi_i \circ \gamma \circ \kappa_j) \circ \pi_j = \sum_{i=1}^{r} \sum_{j=1}^{r} \kappa_i \circ \gamma_{ij} \circ \pi_j = 0.$$
To show that Θ is surjective, let λ ∈ E be an arbitrary element, with (i, j)-entry λ_{ij} ∈ Hom_A(U_j, U_i).
We have to find a preimage under Θ. To this end, we define
$$\gamma := \sum_{k=1}^{r} \sum_{\ell=1}^{r} \kappa_k \circ \lambda_{k\ell} \circ \pi_\ell \in \operatorname{End}_A(V).$$
Then the (i, j)-entry of Θ(γ) is
$$\gamma_{ij} = \pi_i \circ \gamma \circ \kappa_j = \pi_i \circ \Bigl(\sum_{k=1}^{r} \sum_{\ell=1}^{r} \kappa_k \circ \lambda_{k\ell} \circ \pi_\ell\Bigr) \circ \kappa_j = \sum_{k=1}^{r} \sum_{\ell=1}^{r} \pi_i \circ \kappa_k \circ \lambda_{k\ell} \circ \pi_\ell \circ \kappa_j = \lambda_{ij},$$
since π_i ∘ κ_k = 0 unless k = i and π_ℓ ∘ κ_j = 0 unless ℓ = j. Hence Θ(γ) = λ, and Θ is surjective.
$$\operatorname{End}_A(V) \cong M_{n_1}(D_1) \times \ldots \times M_{n_r}(D_r).$$
Proof. The crucial input is Schur’s lemma (see Theorem 3.33) which we recall now.
If Si and Sj are simple A-modules then HomA (Si , Sj ) = 0 if Si and Sj are not
isomorphic, and otherwise we have that EndA (Si ) is a division algebra over K.
We label the summands of the module V so that isomorphic ones are grouped
together, explicitly we take
$$S_1 \cong \ldots \cong S_{n_1}, \quad S_{n_1+1} \cong \ldots \cong S_{n_1+n_2}, \quad \ldots, \quad S_{n_1+\ldots+n_{r-1}+1} \cong \ldots \cong S_{n_1+\ldots+n_r} = S_t,$$
and there are no other isomorphisms. That is, we have r different isomorphism types
amongst the S_i, and they come with multiplicities n_1, …, n_r. Define D_i := End_A(S_{n_1+…+n_{i−1}+1}), the endomorphism algebra of a simple module from the i-th isomorphism class;
these are division algebras, by Schur's lemma. Then Lemma 5.6 and Schur's lemma
show that the endomorphism algebra of V can be written as a matrix algebra, with
block matrices:
$$\operatorname{End}_A(V) \;\cong\; (\operatorname{Hom}_A(S_j, S_i))_{i,j} \;\cong\; \begin{pmatrix} M_{n_1}(D_1) & & 0 \\ & \ddots & \\ 0 & & M_{n_r}(D_r) \end{pmatrix} \;\cong\; M_{n_1}(D_1) \times \ldots \times M_{n_r}(D_r).$$
We will now consider the algebras in Theorem 5.7 in more detail. In particular,
we want to show that they are semisimple K-algebras. This is part of the Artin–
Wedderburn theorem, which will come in the next section.
Example 4.19 is a special case, and we have seen that for every field K the
algebras Mn1 (K) × . . . × Mnr (K) are semisimple. The proof for the algebras in
Theorem 5.7 is essentially the same.
Lemma 5.8.
(a) Let D be a division algebra over K. Then for every n ∈ N the matrix algebra
Mn (D) is a semisimple K-algebra. Moreover, the opposite algebra Mn (D)op is
isomorphic to Mn (D op ), as a K-algebra.
(b) Let D1 , . . . , Dr be division algebras over K. Then for any n1 , . . . , nr ∈ N the
direct product Mn1 (D1 ) × . . . × Mnr (Dr ) is a semisimple K-algebra.
Proof. (a) Let A = Mn (D), and let D n be the natural A-module. We claim that this
is a simple module. The proof of this is exactly the same as the proof for D = K in
Example 2.14, since there we only used that non-zero elements have inverses (and
not that elements commute). As for the case D = K, we see that A as an A-module
is the direct sum A = C1 ⊕ C2 ⊕ . . . ⊕ Cn , where Ci consists of the matrices
in A which are zero outside the i-th column. As for the case D = K, each Ci is
isomorphic to the natural module D n , as an A-module. This shows that A is a direct
sum of simple submodules and hence is a semisimple algebra.
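The column decomposition used in this argument is easy to replay by direct computation. A small sketch (not part of the text; columns are indexed from 0, and the matrices are ad hoc) checking that each column space C_i of M_3(K) is closed under left multiplication and that every matrix is the sum of its column components:

```python
n = 3  # A = M_3 over the integers

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def col_part(x, i):
    # component of x in C_i: keep column i, set all other entries to zero
    return [[x[r][c] if c == i else 0 for c in range(n)] for r in range(n)]

a = [[1, 2, 0], [0, 3, -1], [4, 0, 2]]                 # an arbitrary element of A
x = col_part([[5, 7, 1], [2, 0, 3], [-1, 4, 6]], 1)    # an element of C_1

# C_1 is closed under left multiplication by A, so it is an A-submodule
closed = (matmul(a, x) == col_part(matmul(a, x), 1))

# every matrix is the sum of its column components: A = C_0 + C_1 + C_2,
# and the components have disjoint supports, so the sum is direct
m = [[1, -2, 3], [0, 4, 5], [6, 7, -8]]
parts = [col_part(m, i) for i in range(n)]
decomposes = all(m[r][c] == sum(p[r][c] for p in parts)
                 for r in range(n) for c in range(n))
```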
We show now that the opposite algebra Mn (D)op is isomorphic to Mn (D op ).
Note that both algebras have the same underlying K-vector space. Let τ be the
map which takes an n × n-matrix to its transpose. That is, if a = (aij ) ∈ Mn (D)
then τ(a) is the matrix with (s, t)-entry equal to a_{ts}. Then τ defines a K-linear
isomorphism on the vector space Mn (D). We show that τ is an algebra isomorphism
Mn (D)op → Mn (D op ). The identity elements in both algebras are the identity
matrices, and τ takes the identity to the identity. It remains to show that for
a, b ∈ Mn (D)op we have τ (a ∗ b) is equal to τ (a)τ (b).
(i) We have τ (a ∗ b) = τ (ba), and this has (s, t)-entry equal to the (t, s)-entry of
the matrix product ba, which is
$$\sum_{j=1}^{n} b_{tj} a_{js}.$$
5.3 The Artin–Wedderburn Theorem 111
(ii) Now we write τ (a) = (âij ), where âij = aj i , and similarly let τ (b) = (b̂ij ).
We compute τ (a)τ (b) in Mn (D op ). This has (s, t)-entry equal to
$$\sum_{j=1}^{n} \hat{a}_{sj} * \hat{b}_{jt} = \sum_{j=1}^{n} a_{js} * b_{tj} = \sum_{j=1}^{n} b_{tj} a_{js}$$
(where in the first step we removed the ˆ and in the second step we removed the
∗.) This holds for all s, t, hence τ (a ∗ b) = τ (a)τ (b).
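The computation in steps (i) and (ii) can be tested on a genuinely noncommutative division algebra. The following sketch (not part of the text; the matrices are ad hoc) takes D to be the quaternions with integer coefficients and verifies τ(a ∗ b) = τ(a)τ(b) for 2 × 2 quaternionic matrices, where the right-hand side multiplies entries in the opposite order, i.e. in D^op:

```python
# quaternions as 4-tuples (a, b, c, d) = a + bi + cj + dk
def qadd(p, q):
    return tuple(x + y for x, y in zip(p, q))

def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def mmul(A, B, entry_mul):
    # 2x2 matrix product whose entries are multiplied by entry_mul
    C = [[(0, 0, 0, 0)] * 2 for _ in range(2)]
    for i in range(2):
        for j in range(2):
            s = (0, 0, 0, 0)
            for k in range(2):
                s = qadd(s, entry_mul(A[i][k], B[k][j]))
            C[i][j] = s
    return C

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

a = [[(1, 2, 0, -1), (0, 1, 1, 0)], [(2, 0, 3, 1), (1, -1, 0, 2)]]
b = [[(0, 1, 0, 0), (1, 0, -2, 1)], [(3, 1, 1, 0), (0, 0, 1, -1)]]

# sanity check that D really is noncommutative: i * j = k
ij_is_k = (qmul((0, 1, 0, 0), (0, 0, 1, 0)) == (0, 0, 0, 1))

# tau(a * b), with a * b := ba computed in M_2(D)
lhs = transpose(mmul(b, a, qmul))
# tau(a) tau(b) computed in M_2(D^op): entries multiplied in reversed order
rhs = mmul(transpose(a), transpose(b), lambda p, q: qmul(q, p))
agrees = (lhs == rhs)
```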
(b) By part (a) we know that Mn (D) is a semisimple algebra. Now part (b) follows
directly using Corollary 4.18, which shows that finite direct products of semisimple
algebras are semisimple.
We have seen that any K-algebra M_{n_1}(D_1) × … × M_{n_r}(D_r) is semisimple, where
D_1, …, D_r are division algebras over K. The Artin–Wedderburn theorem shows
that up to isomorphism, every semisimple K-algebra is of this form.
Theorem 5.9 (Artin–Wedderburn Theorem). Let K be a field and A a semisim-
ple K-algebra. Then there exist positive integers r and n1 , . . . , nr , and division
algebras D1 , . . . , Dr over K such that
$$A \cong M_{n_1}(D_1) \times \ldots \times M_{n_r}(D_r).$$
Conversely, each K-algebra of the form Mn1 (D1 ) × . . . × Mnr (Dr ) is semisimple.
We will refer to this direct product as the Artin–Wedderburn decomposition of
the semisimple algebra A.
Proof. The last statement has been proved in Lemma 5.8.
Suppose that A is a semisimple K-algebra. By Remark 4.10, A as an A-module
is a finite direct sum A = S1 ⊕ . . . ⊕ St with simple A-submodules S1 , . . . , St . Now
Theorem 5.7 implies that there exist positive integers r and n_1, …, n_r and division
algebras D̃_1, …, D̃_r over K such that
$$\begin{aligned}
A &\cong \operatorname{End}_A(A)^{op} && \text{(by Lemma 5.4)} \\
  &\cong \bigl(M_{n_1}(\tilde{D}_1) \times \ldots \times M_{n_r}(\tilde{D}_r)\bigr)^{op} \\
  &= M_{n_1}(\tilde{D}_1)^{op} \times \ldots \times M_{n_r}(\tilde{D}_r)^{op} \\
  &\cong M_{n_1}(\tilde{D}_1^{op}) \times \ldots \times M_{n_r}(\tilde{D}_r^{op}). && \text{(by Lemma 5.8)}
\end{aligned}$$
We set D_i := D̃_i^op (note that this is also a division algebra, since reversing the order
in the multiplication does not affect whether elements are invertible).
Remark 5.10. Note that a matrix algebra Mn (D) is commutative if and only if
n = 1 and D is a field. Therefore, let A be a commutative semisimple K-algebra.
Then the Artin–Wedderburn decomposition of A has the form
$$A \cong M_1(D_1) \times \ldots \times M_1(D_r) \cong D_1 \times \ldots \times D_r,$$
where Di are fields containing K. From the start, Di is the endomorphism algebra
of a simple A-module. Furthermore, taking Di as the i-th factor in the above product
decomposition, it is a simple A-module, and hence this simple module is identified
with its endomorphism algebra.
In the rest of this section we will derive some consequences from the Artin–
Wedderburn theorem and we also want to determine the Artin–Wedderburn decom-
position for some classes of semisimple algebras explicitly.
We will now see that one can read off the number and the dimensions of simple
modules of a semisimple algebra from the Artin–Wedderburn decomposition. This
is especially nice when the underlying field is algebraically closed, such as the field
of complex numbers.
Corollary 5.11.
(a) Let D1 , . . . , Dr be division algebras over K, and let n1 , . . . , nr be positive
integers. The semisimple K-algebra Mn1 (D1 ) × . . . × Mnr (Dr ) has precisely
r simple modules, up to isomorphism. The K-vector space dimensions of these
simple modules are n_1 dim_K D_1, …, n_r dim_K D_r. (Note that these dimensions
of simple modules need not be finite.)
(b) Suppose the field K is algebraically closed, and that A is a finite-dimensional
semisimple K-algebra. Then there exist positive integers n1 , . . . , nr such that
$$A \cong M_{n_1}(K) \times \ldots \times M_{n_r}(K).$$
$$A \cong M_1(K[X]/(f_1)) \times \ldots \times M_1(K[X]/(f_r)) \cong K[X]/(f_1) \times \ldots \times K[X]/(f_r).$$
$$A \cong K[X]/(X - \lambda_1) \times \ldots \times K[X]/(X - \lambda_r) \cong K \times \ldots \times K.$$
$$f = f_1 \cdot \ldots \cdot f_r \cdot g_1 \cdot \ldots \cdot g_s$$
$$A \cong \underbrace{\mathbb{R} \times \ldots \times \mathbb{R}}_{r} \times \underbrace{\mathbb{C} \times \ldots \times \mathbb{C}}_{s}.$$
EXERCISES
(a) For all i show that (X + I )εi = λi εi in A, that is, εi is an eigenvector for
the action of the coset of X.
(b) Deduce from (a) that ε_i ε_j = 0 for i ≠ j. Moreover, show that ε_i² = ε_i.
(c) Show that ε1 + . . . + εr = 1A .
5.12. Let K = Zp and let f = Xp − X. Consider the algebra A = K[X]/(f ) as an
A-module. Explain how Exercise 5.11 can be applied to express A as a direct
sum of 1-dimensional modules. (Hint: The roots of Xp − X are precisely the
elements of Zp , by Lagrange’s theorem from elementary group theory. Hence
Xp − X factors into p distinct linear factors in Zp [X].)
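The hint can be verified directly. A short computational sketch (not part of the text), taking p = 5 and checking both that every element of Z_p is a root of X^p − X and that the product of the p distinct linear factors recovers X^p − X in Z_p[X]:

```python
p = 5  # any prime; arithmetic below is in Z_p

def polymul(f, g):
    # polynomial product over Z_p, ascending coefficient lists
    r = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            r[i + j] = (r[i + j] + a * b) % p
    return r

# product of the linear factors (X - a) over all a in Z_p
prod = [1]
for a in range(p):
    prod = polymul(prod, [(-a) % p, 1])

# X^p - X has coefficient -1 = p-1 at X^1 and coefficient 1 at X^p
target = [0, p - 1] + [0] * (p - 2) + [1]

factors_completely = (prod == target)
all_roots = all(pow(a, p, p) == a % p for a in range(p))  # a^p = a in Z_p
```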
Chapter 6
Semisimple Group Algebras
and Maschke’s Theorem
Let G be a finite group and let K be a field. The main idea of the proof of Maschke’s
theorem is more general: Given any K-linear map between KG-modules, one can
construct a KG-module homomorphism, by ‘averaging over the group’.
Lemma 6.1. Let G be a finite group and let K be a field. Suppose M and N are
KG-modules, and f : M → N is a K-linear map. Define
$$T(f) : M \to N, \qquad m \mapsto \sum_{x \in G} x\bigl(f(x^{-1} m)\bigr).$$
$$T(f)(\alpha m_1 + \beta m_2) = \sum_{x \in G} x f\bigl(x^{-1}(\alpha m_1 + \beta m_2)\bigr) = \sum_{x \in G} x\bigl(\alpha f(x^{-1} m_1) + \beta f(x^{-1} m_2)\bigr) = \alpha\, T(f)(m_1) + \beta\, T(f)(m_2).$$
and similarly
So the linear map T(f) is, with respect to the standard basis {1, g}, given by the
matrix
$$\begin{pmatrix} a+d & b+c \\ b+c & a+d \end{pmatrix}.$$
One checks that it commutes with $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, hence T(f) is indeed a CG-module
homomorphism.
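This 2 × 2 computation is easily replayed by machine. A sketch (not part of the text; the integer entries a, b, c, d are an arbitrary choice) averaging a linear map f over G = C_2, where g acts by the matrix S below and S^{-1} = S:

```python
def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

S = [[0, 1], [1, 0]]          # action of the non-identity element g
a, b, c, d = 3, -1, 4, 2
F = [[a, b], [c, d]]          # an arbitrary K-linear map f

# T(f) = sum over x in G of x f x^{-1} = F + S F S
Tf = madd(F, mm(mm(S, F), S))

matches_book = (Tf == [[a + d, b + c], [b + c, a + d]])
is_hom = (mm(Tf, S) == mm(S, Tf))   # T(f) commutes with the action of g
```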
6.1 Maschke’s Theorem 119
We will now state and prove Maschke’s theorem. This is an easy and completely
general criterion to decide when a group algebra of a finite group is semisimple.
Theorem 6.3 (Maschke’s Theorem). Let K be a field and G a finite group. Then
the group algebra KG is semisimple if and only if the characteristic of K does not
divide the order of G.
Proof. Assume first that the characteristic of K does not divide the group order |G|.
By definition the group algebra KG is semisimple if and only if KG is semisimple
as a KG-module. So let W be a submodule of KG, then by Theorem 4.3 we must
show that W has a complement, that is, there is a KG-submodule C of KG such
that W ⊕ C = KG.
Considered as K-vector spaces there is certainly a K-subspace V such that
W ⊕ V = KG (this is the standard result from linear algebra that every linearly
independent subset can be completed to a basis). Let f : KG → W be the
projection onto W with kernel V ; note that this is just a K-linear map. By
assumption, |G| is invertible in K and using Lemma 6.1 we can define
$$\gamma : KG \to W, \qquad \gamma := \frac{1}{|G|}\, T(f).$$
$$(\gamma \circ j)(w) = \gamma(w) = \frac{1}{|G|} \sum_{g \in G} g f(g^{-1} w) = \frac{1}{|G|} \sum_{g \in G} w = w,$$
since g^{-1}w ∈ W and f is the identity on W.
w − λ|G|w ∈ U ∩ C = 0.
$$\mathbb{C}G \cong M_{n_1}(\mathbb{C}) \times M_{n_2}(\mathbb{C}) \times \ldots \times M_{n_k}(\mathbb{C})$$
$$\mathbb{C}G \cong M_{n_1}(\mathbb{C}) \times M_{n_2}(\mathbb{C}) \times \ldots \times M_{n_k}(\mathbb{C})$$
Remark 6.5. The statements of Theorem 6.4 need not hold if the field is not
algebraically closed. For instance, consider the group algebra RC3 where C3 is the
cyclic group of order 3. This algebra is isomorphic to the algebra R[X]/(X3 − 1),
see Example 1.27. In R[X], we have the factorization X3 −1 = (X−1)(X2 +X+1)
into irreducible polynomials. Hence, by Example 5.14 the algebra has Artin–
Wedderburn decomposition
$$\mathbb{R}C_3 \cong M_1(\mathbb{R}) \times M_1(\mathbb{C}) \cong \mathbb{R} \times \mathbb{C}.$$
$$6 = 1 + 1 + \sum_{i=3}^{k} n_i^2.$$
The only possible solution for this is 6 = 1 + 1 + 2², that is, the group algebra CS_3
has three simple modules, up to isomorphism, of dimensions 1, 1, 2.
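This counting argument can be automated. A small sketch (not part of the text; the function name is ad hoc) that enumerates all ways of writing |G| as a sum of squares containing a summand 1 (the trivial module) and at least one summand greater than 1 (G non-abelian):

```python
import math

def dimension_patterns(n):
    # multisets of positive integers whose squares sum to n, containing the
    # entry 1 and at least one entry > 1
    out = set()
    def rec(remaining, largest, acc):
        if remaining == 0:
            out.add(tuple(sorted(acc)))
            return
        for k in range(min(largest, math.isqrt(remaining)), 0, -1):
            rec(remaining - k * k, k, acc + [k])
    rec(n, math.isqrt(n), [])
    return sorted(t for t in out if 1 in t and any(x > 1 for x in t))

s3 = dimension_patterns(6)    # |S_3| = 6: expect only (1, 1, 2)
a4 = dimension_patterns(12)   # |A_4| = 12: three candidate patterns
```

For |S_3| = 6 the pattern is forced, matching the text; for |A_4| = 12 the three candidates are exactly the possibilities listed later in this chapter, so further information (Corollary 6.8) is needed to single one out.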
Exercise 6.2. Let G = Sn , the group of permutations of {1, 2, . . . , n}. Recall that
every permutation g ∈ G is either even or odd. Define
$$\sigma(g) = \begin{cases} 1 & g \text{ is even} \\ -1 & g \text{ is odd.} \end{cases}$$
Deduce that this defines a representation σ : G → GL1 (C), usually called the sign
representation. Describe the corresponding CG-module.
In Example 6.7 we found the dimensions of the simple modules for the group
algebra CS3 from the numerical data coming from the Artin–Wedderburn decom-
position as in Theorem 6.4, and knowing that the group algebra is not commutative.
In general, one needs further information if one wants to find the dimensions of
simple modules for a group algebra CG. For instance, take the alternating group
A4 . We ask for the integers ni in Theorem 6.4, that is the sizes of the matrix blocks
in the Artin–Wedderburn decomposition of CA4 . That is, we must express 12 as a
sum of squares of integers, not all equal to 1 (but at least one summand equal to
1, coming from the trivial module). The possibilities are 12 = 1 + 1 + 1 + 3², or
12 = 1 + 1 + 1 + 1 + 2² + 2², or also 12 = 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 2². Fortunately,
there is further general information on the number of 1-dimensional (simple) CG-
modules, which uses the group theoretic description of the largest abelian factor
group of G.
We briefly recall a notion from elementary group theory. For a finite group G, the
commutator subgroup G′ is defined as the subgroup of G generated by all elements
of the form [x, y] := xyx^{-1}y^{-1} for x, y ∈ G; then we have:
(i) G′ is a normal subgroup of G.
(ii) Let N be a normal subgroup of G. Then the factor group G/N is abelian if and
only if G′ ⊆ N. In particular, G/G′ is abelian.
Details can be found, for example, in the book by Smith and Tabachnikova in this
series.1
This allows us to determine the number of one-dimensional simple modules for
a group algebra CG, that is, the number of factors C in the Artin–Wedderburn
decomposition.
Corollary 6.8. Let G be a finite group. Then the number of 1-dimensional simple
CG-modules (up to isomorphism) is equal to the order of the factor group G/G′, in
particular it divides the order of G.
Proof. Let V be a 1-dimensional CG-module, say V = span{v}. We claim that
every element n ∈ G′ acts trivially on V. It is enough to prove this for an element
n = [x, y] with x, y ∈ G. There exist scalars α, β ∈ C such that x · v = αv and
y · v = βv. Then
The only possibility is that k = 4 and n4 = 3. Hence, CA4 has four simple
modules (up to isomorphism), of dimensions 1, 1, 1, 3, and its Artin–Wedderburn
decomposition is
$$\mathbb{C}A_4 \cong \mathbb{C} \times \mathbb{C} \times \mathbb{C} \times M_3(\mathbb{C}).$$
Example 6.10. Let G be the dihedral group of order 10, as in Exercise 2.20. Then
by Exercise 3.14, the dimension of any simple CG-module is at most 2. The trivial
module is 1-dimensional, and by Theorem 6.4 there must be a simple CG-module
of dimension 2 since G is not abelian. From Theorem 6.4 we have now
10 = a · 1 + b · 4
with positive integers a and b. By Corollary 6.8, the number a divides 10, the order
of G. The only solution is that a = 2 and b = 2. So there are two non-isomorphic
2-dimensional simple CG-modules. The Artin–Wedderburn decomposition has the
form
$$\mathbb{C}G \cong \mathbb{C} \times \mathbb{C} \times M_2(\mathbb{C}) \times M_2(\mathbb{C}).$$
Proposition 6.11. Let G be a finite group, and let K be an arbitrary field. Then the
class sums C̄ := Σ_{g∈C} g, as C varies through the conjugacy classes of G, form a
K-basis of the centre Z(KG) of the group algebra KG.
Proof. We begin by showing that each class sum C̄ is contained in the centre of
KG. It suffices to show that xC̄ = C̄x for all x ∈ G. Note that with g also xgx^{-1}
varies through all elements of the conjugacy class C. Then we have
$$x \bar{C} x^{-1} = \sum_{g \in C} x g x^{-1} = \sum_{y \in C} y = \bar{C},$$
The group elements form a basis of KG, so comparing coefficients we deduce that
$$\alpha_x = \alpha_{g^{-1} x g} \quad \text{for all } g, x \in G,$$
that is, the coefficients αx are constant on conjugacy classes. So we can write each
element w ∈ Z(KG) in the form
$$w = \sum_{C} \alpha_C \bar{C},$$
where the sum is over the different conjugacy classes C of G. Hence the class sums
span the centre Z(KG) as a K-vector space; they are linearly independent since their supports are disjoint.
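Proposition 6.11 can be verified by direct computation in a small group. A sketch (not part of the text) for G = S_3, representing group algebra elements as dictionaries from permutations to coefficients; it checks that every class sum is central and that S_3 has three conjugacy classes, matching the three matrix blocks found for CS_3 above:

```python
from itertools import permutations

G = list(permutations(range(3)))   # S_3, permutations as tuples of images

def comp(p, q):                    # group product: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    r = [0, 0, 0]
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def amul(u, v):                    # product in the group algebra K S_3
    w = {}
    for g, x in u.items():
        for h, y in v.items():
            k = comp(g, h)
            w[k] = w.get(k, 0) + x * y
    return {g: c for g, c in w.items() if c != 0}

# conjugacy classes of S_3, as orbits under conjugation
classes, seen = [], set()
for g in G:
    if g not in seen:
        cls = frozenset(comp(comp(x, g), inv(x)) for x in G)
        seen |= cls
        classes.append(cls)

class_sums = [{g: 1 for g in cls} for cls in classes]
central = all(amul({x: 1}, cs) == amul(cs, {x: 1})
              for cs in class_sums for x in G)
num_classes = len(classes)
```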
We now return to the Artin–Wedderburn decomposition of CG, and relate the
number of matrix blocks occurring there to the number of conjugacy classes of G.
Theorem 6.12. Let G be a finite group and let CG ≅ M_{n_1}(C) × … × M_{n_k}(C) be
the Artin–Wedderburn decomposition of the group algebra CG. Then the following
are equal:
(i) The number k of matrix blocks.
(ii) The number of conjugacy classes of G.
(iii) The number of simple CG-modules, up to isomorphism.
Proof. By Theorem 6.4, the numbers in (i) and (iii) are equal. In order to prove
the equality of the numbers in (i) and (ii) we consider the centres of the algebras.
The centre of CG has dimension equal to the number of conjugacy classes of G, by
Proposition 6.11. On the other hand, the centre of Mn1 (C) × . . . × Mnk (C) is equal
to Z(Mn1 (C)) × . . . × Z(Mnk (C)) and this has dimension equal to k, the number of
matrix blocks, by Exercise 3.16.
Example 6.13. We consider the symmetric group S4 on four letters. As we have
mentioned, the conjugacy classes of symmetric groups are determined by the cycle
type. There are five cycle types for elements of S4 , we have the identity, 2-cycles, 3-
cycles, 4-cycles and products of two disjoint 2-cycles. Hence, by Theorem 6.12,
there are five matrix blocks in the Artin–Wedderburn decomposition of CS4 , at
least one of them has size 1. One can see directly that there is a unique solution
for expressing 24 = |S4 | as a sum of five squares where at least one of them is
equal to 1. Furthermore, we can see from Remark 6.6 and the fact that S4 /V4 is
not abelian, that the commutator subgroup of S4 is A4 . So CS4 has precisely two
1-dimensional simple modules (the trivial and the sign module), by Corollary 6.8.
From Theorem 6.4 (b) we get
$$\mathbb{C}S_4 \cong \mathbb{C} \times \mathbb{C} \times M_2(\mathbb{C}) \times M_3(\mathbb{C}) \times M_3(\mathbb{C}).$$
EXERCISES
6.3. Let G = Dn be the dihedral group of order 2n (that is, the symmetry group
of a regular n-gon). Determine the Artin–Wedderburn decomposition of the
group algebra CDn . (Hint: Apply Exercise 3.14.)
6.4. Let G = S3 be the symmetric group of order 6 and A = CG. We consider
the elements σ = (1 2 3) and τ = (1 2) in S3 . We want to show directly
that this group algebra is a direct product of three matrix algebras. (We know
from Example 6.7 that there should be two blocks of size 1 and one block of
size 2.)
(a) Let e_± := (1/6)(1 ± τ)(1 + σ + σ²); show that e_± are idempotents in the
centre of A, and that e+ e− = 0.
(b) Let f = (1/3)(1 + ω^{-1}σ + ωσ²) where ω ∈ C is a primitive 3rd root of
unity. Let f1 := τf τ −1 . Show that f and f1 are orthogonal idempotents,
and that
f + f1 = 1A − e− − e+ .
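Part (a) of this exercise can be checked by machine; since e_± have rational coefficients, exact arithmetic over Q suffices (the element f of part (b) involves a complex root of unity and is omitted from this sketch, which is not part of the text):

```python
from fractions import Fraction
from itertools import permutations

G = list(permutations(range(3)))    # S_3 as tuples of images

def comp(p, q):                     # group product: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def amul(u, v):                     # product in the group algebra Q S_3
    w = {}
    for g, x in u.items():
        for h, y in v.items():
            k = comp(g, h)
            w[k] = w.get(k, Fraction(0)) + x * y
    return {g: c for g, c in w.items() if c != 0}

def aadd(u, v):
    w = dict(u)
    for g, c in v.items():
        w[g] = w.get(g, Fraction(0)) + c
    return {g: c for g, c in w.items() if c != 0}

def ascale(c, u):
    return {g: c * x for g, x in u.items()}

e = (0, 1, 2)                       # identity
s = (1, 2, 0)                       # the 3-cycle sigma = (1 2 3)
t = (1, 0, 2)                       # the transposition tau = (1 2)
one = {e: Fraction(1)}

cyc = aadd(aadd(one, {s: Fraction(1)}), {comp(s, s): Fraction(1)})  # 1+s+s^2
e_plus = ascale(Fraction(1, 6), amul(aadd(one, {t: Fraction(1)}), cyc))
e_minus = ascale(Fraction(1, 6), amul(aadd(one, {t: Fraction(-1)}), cyc))

idempotent = (amul(e_plus, e_plus) == e_plus and
              amul(e_minus, e_minus) == e_minus)
orthogonal = (amul(e_plus, e_minus) == {})
central = all(amul({g: Fraction(1)}, u) == amul(u, {g: Fraction(1)})
              for u in (e_plus, e_minus) for g in G)
```

Here e_+ is the idempotent of the trivial module and e_− that of the sign module, the two blocks of size 1 in the Artin–Wedderburn decomposition of CS_3.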
6.7. (a) Let G1 , G2 be two abelian groups of the same order. Explain why CG1
and CG2 have the same Artin–Wedderburn decomposition, and hence are
isomorphic as C-algebras.
(b) Let G be any non-abelian group of order 8. Show that there is a unique
possibility for the Artin–Wedderburn decomposition of CG.
6.8. Let G = {±1, ±i, ±j, ±k} be the quaternion group, as defined in
Remark 1.9.
(i) Show that the commutator subgroup of G is the cyclic group generated
by the element −1.
(ii) Determine the number of simple CG-modules (up to isomorphism) and
their dimensions.
(iii) Compare the Artin–Wedderburn decomposition of CG with that of the
group algebra of the dihedral group D4 of order 8 (that is, the symmetry
group of the square). Are the group algebras CG and CD4 isomorphic?
6.9. In each of the following cases, does there exist a finite group G such that the
Artin–Wedderburn decomposition of the group algebra CG has the following
form?
(i) M3 (C),
(ii) C × M2 (C),
(iii) C × C × M2 (C),
(iv) C × C × M3 (C).
Chapter 7
Indecomposable Modules
We have seen that for a semisimple algebra, any non-zero module is a direct sum of
simple modules (see Theorem 4.11). We investigate now how this generalizes when
we consider finite-dimensional modules. If the algebra is not semisimple, one needs
to consider indecomposable modules instead of just simple modules, and then one
might hope that any finite-dimensional module is a direct sum of indecomposable
modules. We will show that this is indeed the case. In addition, we will show that
a direct sum decomposition into indecomposable summands is essentially unique;
this is known as the Krull–Schmidt Theorem.
In Chap. 3 we have studied simple modules, which are building blocks for
arbitrary modules. They might be thought of as analogues of ‘elementary particles’,
and then indecomposable modules could be viewed as analogues of ‘molecules’.
Throughout this chapter, A is a K-algebra where K is a field.
given by the action of the coset of X, and note that α^t = 0. Let V_α be the
2-dimensional A-module where α has matrix
$$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$
this sum is direct: if x ∈ ε(M) ∩ (idM − ε)(M) then ε(m) = x = (idM − ε)(n) with
m, n ∈ M, and then
Similarly, since b = be1 we have ε(b) = λb. We have proved that ε = λ · idM .
Now, ε2 = ε and therefore λ2 = λ and hence λ = 0 or λ = 1. That is, ε is the
zero map, or the identity map.
(2) Let N be the A-module with basis {v_1, v_1′, v_2}, where e_1N has basis {v_1, v_1′} and
e_2N has basis {v_2}, and where the action of a and b is defined by
We would like to have criteria which tell us when a given module is indecomposable.
Obviously Definition 7.1 is not so helpful; we would need to inspect all submodules
of a given module. One criterion is Lemma 7.3; in this section we will look for
further information from linear algebra.
Given a linear transformation of a finite-dimensional vector space, one gets a
direct sum decomposition of the vector space, in terms of the kernel and the image
of some power of the linear transformation.
Lemma 7.7 (Fitting’s Lemma I). Let K be a field. Assume V is a finite-
dimensional K-vector space, and θ : V → V is a linear transformation. Then
there is some n ≥ 1 such that the following hold:
(i) For all k ≥ 0 we have ker(θ n ) = ker(θ n+k ) and im(θ n ) = im(θ n+k ).
(ii) V = ker(θ n ) ⊕ im(θ n ).
Proof. This is elementary linear algebra, but since it is important, we give the proof.
(i) We have inclusions of subspaces
$$\ker(\theta) \subseteq \ker(\theta^2) \subseteq \ker(\theta^3) \subseteq \ldots$$
and
$$\operatorname{im}(\theta) \supseteq \operatorname{im}(\theta^2) \supseteq \operatorname{im}(\theta^3) \supseteq \ldots .$$
Hence the sum ker(θ n )+im(θ n ) is equal to V since it is a subspace whose dimension
is equal to dimK V .
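Fitting's Lemma is easy to watch in action for a concrete matrix. A numerical sketch (not part of the text; θ is an ad hoc example with a nilpotent block and an invertible block), checking that the ranks of the powers of θ stabilize at n = 2 and that θⁿ is injective on its image, which together with rank–nullity gives V = ker(θⁿ) ⊕ im(θⁿ):

```python
from fractions import Fraction

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def rank(M):
    # Gaussian elimination over the rationals (exact)
    A = [[Fraction(x) for x in row] for row in M]
    n, m = len(A), len(A[0])
    r = 0
    for c in range(m):
        piv = next((i for i in range(r, n) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(n):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

# theta = (nilpotent 2x2 Jordan block) + (invertible 2x2 block) on V = K^4
theta = [[0, 1, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 2, 0],
         [0, 0, 0, 3]]

t2 = matmul(theta, theta)
t3 = matmul(t2, theta)
t4 = matmul(t2, t2)

ranks = [rank(theta), rank(t2), rank(t3)]   # expect the chain to stabilize
# ker(theta^2) meets im(theta^2) trivially iff theta^2 is injective on its
# image, i.e. rank(theta^4) = rank(theta^2)
direct_sum = (rank(t4) == rank(t2))
```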
(−a)y = 1A − ax.
We know that ax does not have a left inverse, and therefore, using (ii) we deduce that
(−a)y has a left inverse. But then y has a left inverse, and y ∈ N, a contradiction.
We have now shown that N is a left ideal of A.
Now assume (i) holds; we prove that this implies (ii). Assume a ∈ A does not
have a left inverse in A. We have to show that then 1A − a has a left inverse in A.
If this is false then both a and 1A − a belong to N. By assumption (i), N is closed
under addition, therefore 1_A ∈ N, which is not true. This contradiction shows that
1_A − a must have a left inverse.
Definition 7.12. A K-algebra A is called a local algebra (or just local) if it satisfies
the equivalent conditions from Theorem 7.11.
Exercise 7.1. Let A be a local K-algebra. Show that the left ideal N in Theo-
rem 7.11 is a maximal left ideal of A, and that it is the only maximal left ideal
of A.
Remark 7.13. Let A be a local K-algebra. By Exercise 7.1 the left ideal N in
Theorem 7.11 is then precisely the Jacobson radical as defined and studied in
Sect. 4.3 (see Definition 4.21). In particular, if A is finite-dimensional then this
unique maximal left ideal is even a two-sided ideal (see Theorem 4.23).
Lemma 7.14.
(a) Assume A is a local K-algebra. Then the only idempotents in A are 0 and 1A .
(b) Assume A is a finite-dimensional algebra. Then A is local if and only if the only
idempotents in A are 0 and 1A .
Proof. (a) Let ε ∈ A be an idempotent. If ε has no left inverse, then by
Theorem 7.11 we know that 1A − ε has a left inverse, say a(1A − ε) = 1A for
some a ∈ A. Then it follows that ε = 1_A ε = a(1_A − ε)ε = aε − aε² = 0.
On the other hand, if ε has a left inverse, say bε = 1A for some b ∈ A, then
ε = 1_A ε = bε² = bε = 1_A.
(b) We must show the converse of (a). Assume that 0 and 1A are the only
idempotents in A. We will verify condition (ii) of Theorem 7.11, that is, let a ∈ A,
then we show that at least one of a and 1A − a has a left inverse in A. Consider the
map θ : A → A defined by θ (x) := xa. This is an A-module homomorphism if we
by Xⁿ. But K[X] ≠ ker(θⁿ) ⊕ im(θⁿ), since ker(θⁿ) = 0 and im(θⁿ) = (Xⁿ) is a proper subspace. So Lemma 7.7 fails for A. Exercise 7.11
contains further illustrations.
M1 ⊕ . . . ⊕ Mr = M = N1 ⊕ . . . ⊕ Ns
$$\mathrm{id}_{N_1} = \nu_1 \circ \kappa_1 = \nu_1 \circ \mathrm{id}_M \circ \kappa_1 = \sum_{i=1}^{r} \nu_1 \circ e_i \circ \kappa_1. \qquad (*)$$
φ := ν1 ◦ e1 ◦ κ1 = ν1 ◦ ι1 ◦ μ1 ◦ κ1 .
M1 = im(μ1 ◦ κ1 ) ⊕ ker(ν1 ◦ ι1 ).
γ := idM − f1 + e1 ◦ f1 .
0 = f_1(0) = (f_1 ∘ γ)(x) = f_1(x) − f_1²(x) + (f_1 ∘ e_1 ∘ f_1)(x) = (f_1 ∘ e_1 ∘ f_1)(x).
(3) We complete the proof of the Krull–Schmidt Theorem. Note that an isomorphism
takes a direct sum decomposition to a direct sum decomposition, see Exercise 7.3.
By (2) we have
$$M_2 \oplus \ldots \oplus M_r \cong M/M_1 = (M_1 \oplus N_2 \oplus \ldots \oplus N_s)/M_1 \cong N_2 \oplus \ldots \oplus N_s.$$
To apply the induction hypothesis, we need two direct sum decompositions of the
same module. Let M′ := M_2 ⊕ … ⊕ M_r. We have obtained an isomorphism
ψ : M′ → N_2 ⊕ … ⊕ N_s. Let N_i′ := ψ^{-1}(N_i); this is a submodule of M′, and we
have, again by Exercise 7.3, the direct sum decomposition
M′ = N_2′ ⊕ … ⊕ N_s′.
EXERCISES
where ei denotes the i-th standard basis vector. Recall from Exercise 2.14
that V0 , V1 , . . . , Vn are the only Tn (K)-submodules of K n , and that
V_{i,j} := V_i/V_j (for 0 ≤ j < i ≤ n) are n(n+1)/2 pairwise non-isomorphic
T_n(K)-modules.
(a) Determine the endomorphism algebra EndTn (K) (Vi,j ) for all
0 ≤ j < i ≤ n.
(b) Deduce that each Vi,j is an indecomposable Tn (K)-module.
7.5. Recall that for any K-algebra A and every element a ∈ A the map
θa : A → A, b → ba, is an A-module homomorphism.
(a) Show that for λ = μ the KQ-modules Vλ and Vμ are not isomorphic.
(b) Show that for all λ ∈ K the KQ-module Vλ is indecomposable.
7.11. Let K[X] be the polynomial algebra.
(a) Show that K[X] is indecomposable as a K[X]-module.
(b) Show that the equivalence in the second version of Fitting’s Lemma
(Corollary 7.9) does not hold, by giving a K[X]-module endomorphism
of K[X] which is neither invertible nor nilpotent.
(c) Show that the equivalence in the third version of Fitting’s Lemma
(Corollary 7.16) also does not hold for K[X].
7.12. (a) By applying the Artin–Wedderburn theorem, characterize which
semisimple K-algebras are local algebras.
(b) Let G be a finite group such that the group algebra KG is semisimple
(that is, the characteristic of K does not divide |G|, by Maschke’s
theorem). Deduce that KG is not a local algebra, except for the group
G with one element.
Chapter 8
Representation Type
Remark 8.2.
(1) One sometimes alternatively defines finite representation type for an algebra A
by requesting that there are only finitely many indecomposable A-modules of
finite length, up to isomorphism.
For a finite-dimensional algebra A the two versions are the same: That
is, an A-module has finite length if and only if it is finite-dimensional. This
follows since in this case every simple A-module is finite-dimensional, see
Corollary 3.20.
(2) Isomorphic algebras have the same representation type. To see this, let
Φ : A → B be an isomorphism of K-algebras. Then every B-module
M becomes an A-module by setting a · m = Φ(a)m and conversely,
every A-module becomes a B-module by setting b · m = Φ^{−1}(b)m.
This correspondence preserves dimensions of modules and it preserves
isomorphisms, and moreover indecomposable A-modules correspond to
indecomposable B-modules. We shall use this tacitly from now on.
Example 8.3.
(1) Every semisimple K-algebra has finite representation type.
In fact, suppose A is a semisimple K-algebra. By Remark 7.2, an A-module
M is indecomposable if and only if it is simple. By Remark 4.10, the algebra
A = S1 ⊕ . . . ⊕ Sk is a direct sum of finitely many simple A-modules. In
particular, A has finite length as an A-module and then every indecomposable
(that is, simple) A-module is isomorphic to one of the finitely many A-modules
S1 , . . . , Sk , by Theorem 3.19. In particular, there are only finitely many finite-
dimensional indecomposable A-modules, and A has finite representation type.
Note that in this case it may happen that the algebra A is not finite-
dimensional, for example it could just be an infinite-dimensional division
algebra.
(2) The polynomial algebra K[X] has infinite representation type.
To see this, take some integer m ≥ 2 and consider the (finite-dimensional)
K[X]-module Wm = K[X]/(X^m). This is indecomposable: In Remark 7.2
we have seen that it is indecomposable as a module for the factor algebra
K[X]/(X^m), and the argument we gave works here as well: every non-zero
K[X]-submodule of Wm must contain the coset X^{m−1} + (X^m). So Wm cannot
be expressed as a direct sum of two non-zero submodules. The module Wm has
dimension m, and hence different Wm are not isomorphic.
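This containment can be checked numerically: in the basis 1, X, . . . , X^{m−1} of Wm the action of X is a nilpotent Jordan block, and every non-zero element generates a submodule containing the coset of X^{m−1}. A small sketch (the helper names are ours, not from the text):

```python
import numpy as np

def x_action(m):
    """Matrix of multiplication by X on W_m = K[X]/(X^m),
    in the basis 1, X, ..., X^(m-1): a nilpotent Jordan block."""
    N = np.zeros((m, m))
    for i in range(m - 1):
        N[i + 1, i] = 1.0          # X * X^i = X^(i+1)
    return N

def submodule_generated_by(v, N):
    """Span of v, Nv, N^2 v, ... (the K[X]-submodule generated by v)."""
    vecs, w = [], v.astype(float)
    while np.linalg.norm(w) > 1e-12:
        vecs.append(w)
        w = N @ w
    return np.array(vecs)

m = 5
N = x_action(m)
top = np.zeros(m); top[m - 1] = 1.0   # coset of X^(m-1)
rng = np.random.default_rng(0)
for _ in range(20):
    v = rng.integers(-3, 4, size=m).astype(float)
    if not v.any():
        continue
    S = submodule_generated_by(v, N)
    # the coset of X^(m-1) lies in every non-zero submodule:
    residual = top - S.T @ np.linalg.lstsq(S.T, top, rcond=None)[0]
    assert np.linalg.norm(residual) < 1e-8
```

The loop verifies the key point of the argument: if the lowest non-zero coefficient of v sits in degree k, then N^{m−1−k}v is a non-zero multiple of the coset of X^{m−1}.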
(3) For any n ∈ N the algebra A := K[X]/(X^n) has finite representation type.
Recall that a finite-dimensional A-module is of the form Vα where V is a
finite-dimensional K-vector space, and α is a linear transformation of V such
that α^n = 0 (here α describes the action of the coset of X). Since α^n = 0,
the only eigenvalue of α is 0, so the field K contains all eigenvalues of α. This
means that there exists a Jordan canonical form for α over K. That is, V is the
direct sum V = V1 ⊕ . . . ⊕ Vr , where each Vi is invariant under α, and each Vi
has a basis such that the matrix of α on Vi is a Jordan block matrix of the form

⎛ 0         ⎞
⎜ 1 0       ⎟
⎜   1 ⋱     ⎟
⎜     ⋱ ⋱   ⎟
⎝       1 0 ⎠
where ci ∈ K. Then the set of vectors {b, α(b), . . . , α d−1 (b)} is a K-basis of Vα .
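The Jordan canonical form used above can be illustrated with sympy; note that sympy places the 1s above the diagonal, i.e. it uses the transpose of the block displayed above (a convention choice only):

```python
from sympy import Matrix

# A nilpotent linear map alpha on V = K^4 with alpha^3 = 0, alpha^2 != 0.
A = Matrix([[0, 1, 1, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 0],
            [0, 0, 1, 0]])
assert A**3 == Matrix.zeros(4, 4) and A**2 != Matrix.zeros(4, 4)

P, J = A.jordan_form()          # A = P * J * P**-1
assert A == P * J * P.inv()
# J is a direct sum of Jordan blocks for the single eigenvalue 0
# (here one block of size 3 and one of size 1).
print(J)
```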
¹ T.S. Blyth, E.F. Robertson, Further Linear Algebra. Springer Undergraduate Mathematics
Series. Springer-Verlag London, Ltd., 2002.
8.1 Definition and Examples 147
sum of cyclic A-modules. Then the minimal polynomial of a cyclic summand must
divide g^m and hence is of the form g^t with t ≤ m.
We apply these results to Vα. Since Vα is assumed to be indecomposable we see
that Vα must be cyclic and α has minimal polynomial g^t where g is an irreducible
polynomial such that g^t divides f. By the remark preceding the lemma, we know
that the matrix of α with respect to some basis is of the form C(g^t).
(b) By part (a), every finite-dimensional indecomposable A-module is of the form
Vα, where α has matrix C(g^t) with respect to a suitable basis, and where g is
irreducible and g^t divides f. Note that two such modules for the same factor g^t
are isomorphic (see Example 2.23 for modules over the polynomial algebra, but the
argument carries over immediately to A = K[X]/(f)).
There are only finitely many factors of f of the form g^t with g an irreducible
polynomial in K[X]. Hence A has only finitely many finite-dimensional indecom-
posable A-modules, that is, A has finite representation type.
One may ask what a small algebra of infinite representation type might look like.
The following is an example:
Lemma 8.5. For any field K, the 3-dimensional commutative K-algebra

A := K[X, Y]/(X^2, Y^2, XY)

has infinite representation type.

is a Jordan block of size n for the eigenvalue λ. One checks that αX and αY satisfy
the equations in (8.1), that is, this defines an A-module Vn(λ).
We will now show that Vn (λ) is an indecomposable A-module; note that because
of the different dimensions, for a fixed λ, the Vn (λ) are pairwise non-isomorphic.
(Even more, for fixed n, the modules Vn (λ) and Vn (μ) are not isomorphic for
different λ, μ in K; see Exercise 8.5.)
By Lemma 7.3 it suffices to show that the only idempotents in the endomorphism
algebra EndA (Vn (λ)) are the zero and the identity map. So let ϕ ∈ EndA (Vn (λ)) be
an idempotent element. In particular, ϕ is a K-linear map of Vn(λ), so we can write
it as a block matrix

ϕ̃ = ⎛ A1  A2 ⎞
     ⎝ A3  A4 ⎠

where A1, A2, A3, A4 are n × n-matrices over K. Then ϕ is an A-module
homomorphism if and only if this matrix commutes with the matrices αX and αY.
By using that ϕ̃αX = αX ϕ̃ we deduce that A2 = 0 and A1 = A4. Moreover, since
ϕ̃αY = αY ϕ̃, we get that A1 Jn(λ) = Jn(λ)A1. So

ϕ̃ = ⎛ A1  0  ⎞
     ⎝ A3  A1 ⎠

where A1 commutes with the Jordan block Jn(λ).
Assume ϕ̃^2 = ϕ̃; then in particular A1^2 = A1. We exploit that A1 commutes with
Jn(λ), namely we want to apply Exercise 8.1. Take f = (X − λ)^n; then A1 is an
endomorphism of the K[X]/(f)-module Vα, where α is given by Jn(λ). This is a
cyclic K[X]/(f)-module (generated by the first basis vector). We have A1^2 = A1,
therefore by Exercise 8.1, A1 is the zero or the identity matrix. In both cases, since
ϕ̃^2 = ϕ̃ it follows that A3 = 0 and hence ϕ̃ is zero or is the identity. This means that
0 and idVn(λ) are the only idempotents in EndA(Vn(λ)).
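The computation with block matrices can be replayed numerically. We take the standard choice αX = (0 0; I 0) and αY = (0 0; Jn(λ) 0) as 2n × 2n block matrices — an assumption here, since the defining equations (8.1) are not reproduced above — and verify the relations of A and the commuting behaviour:

```python
import numpy as np

n, lam = 4, 2.0
I = np.eye(n)
J = lam * np.eye(n) + np.diag(np.ones(n - 1), -1)   # Jordan block J_n(lam)
Z = np.zeros((n, n))

# One standard choice (an assumption here) for the action of the cosets of
# X and Y on the 2n-dimensional module V_n(lam):
aX = np.block([[Z, Z], [I, Z]])
aY = np.block([[Z, Z], [J, Z]])

# The defining relations of A = K[X,Y]/(X^2, Y^2, XY) are satisfied:
for P in (aX @ aX, aY @ aY, aX @ aY, aY @ aX):
    assert not P.any()

# A module endomorphism commuting with aX and aY must have block form
# [[A1, 0], [A3, A1]] with A1 commuting with J; we check a sample phi:
A1 = np.linalg.matrix_power(J, 3)      # any polynomial in J commutes with J
A3 = np.eye(n)
phi = np.block([[A1, Z], [A3, A1]])
assert np.allclose(phi @ aX, aX @ phi)
assert np.allclose(phi @ aY, aY @ phi)
```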
In general, it may be a difficult problem to determine the representation type
of a given algebra. There are some methods which reduce this problem to smaller
algebras, one of these is the following.
Proposition 8.6. Let A be a K-algebra and I ⊂ A a two-sided ideal with I ≠ A.
If the factor algebra A/I has infinite representation type then A has infinite
representation type.
Proof. Note that if I = 0 then there is nothing to do.
We have seen in Lemma 2.37 that the A/I -modules are in bijection with those
A-modules M such that I M = 0. Note that under this bijection, the underlying
K-vector spaces remain the same, and the actions are related by (a + I )m = am
for all a ∈ A and m ∈ M. From this it is clear that for any such module M,
the A/I -submodules are the same as the A-submodules. In particular, M is an
indecomposable A/I -module if and only if it is an indecomposable A-module, and
M has finite dimension as an A/I -module if and only if it has finite dimension as
an A-module. Moreover, two such modules are isomorphic as A/I -modules if and
only if they are isomorphic as A-modules, roughly since they are not changed but
just viewed differently. (Details for this are given in Exercise 8.8.) By assumption
there are infinitely many pairwise non-isomorphic indecomposable A/I -modules of
finite dimension. By the above remarks they also yield infinitely many pairwise non-
isomorphic indecomposable A-modules of finite dimension, hence A has infinite
representation type.
8.2 Representation Type for Group Algebras 149
Example 8.7.
(1) Consider the commutative 4-dimensional K-algebra A = K[X, Y]/(X^2, Y^2).
Let I be the ideal of A generated by the coset of XY. Then A/I is isomorphic
to the algebra K[X, Y]/(X^2, Y^2, XY); this has infinite representation type by
Lemma 8.5. Hence A has infinite representation type by Proposition 8.6.
(2) More generally, consider the commutative K-algebra A = K[X, Y]/(X^r, Y^r)
for r ≥ 2; this has dimension r^2. Let I be the ideal generated by the
cosets of X^2, Y^2 and XY. Then again A/I is isomorphic to the algebra
K[X, Y]/(X^2, Y^2, XY) and hence A has infinite representation type, as in
part (1).
The representation type of a direct product of algebras can be determined from
the representation type of its factors.
Proposition 8.8. Let A = A1 × . . . × Ar be the direct product of K-algebras
A1 , . . . , Ar . Then A has finite representation type if and only if all the algebras
A1 , . . . , Ar have finite representation type.
Proof. We have seen that every A-module M can be written as a direct sum
M = M1 ⊕ . . . ⊕ Mr where Mi = εi M, with εi = (0, . . . , 0, 1Ai , 0, . . . , 0), and Mi
is an A-submodule of M (see Lemma 3.30).
Now, assume that M is an indecomposable A-module. So there exists a unique i
such that M = Mi and Mj = 0 for j ≠ i. Let I = A1 × . . . × Ai−1 × 0 × Ai+1 × . . . × Ar;
this is an ideal of A and A/I ≅ Ai. The ideal I acts as zero on M. Hence M is
the inflation of an Ai -module (see Remark 2.38). By Lemma 2.37 the submodules
of M as an A-module are the same as the submodules as an Ai -module. Hence the
indecomposable A-module M is also an indecomposable Ai -module.
Conversely, every indecomposable Ai -module M clearly becomes an inde-
composable A-module by inflation. Again, since A-submodules are the same as
Ai -submodules, M is indecomposable as an A-module.
So we have shown that the indecomposable A-modules are in bijection with the
union of the sets of indecomposable Ai -modules, for 1 ≤ i ≤ r. Moreover, one sees
that under this bijection, isomorphic modules correspond to isomorphic modules,
and modules of finite dimension correspond to modules of finite dimension.
Therefore (see Definition 8.1), A has finite representation type if and only if each
Ai has finite representation type, as claimed.
K(Cp × Cp) ≅ K[X1, X2]/(X1^p − 1, X2^p − 1).

K(Cp × Cp) ≅ K[X, Y]/(X^p, Y^p).
and linear extension to arbitrary polynomials. One checks that this map is an algebra
homomorphism. Moreover, each Xi^p − 1 is contained in the kernel of Φ, since
gi^p = 1 in the cyclic group Cp. Hence, I ⊆ ker(Φ). On the other hand, Φ is clearly
surjective, so the isomorphism theorem for algebras (see Theorem 1.26) implies that

K[X1, X2]/ker(Φ) ≅ K(Cp × Cp).
Now we compare the dimensions (as K-vector spaces). The group algebra on the
right has dimension p^2. The factor algebra K[X1, X2]/I also has dimension p^2;
the cosets of the monomials X1^{a1} X2^{a2} with 0 ≤ ai ≤ p − 1 form a basis. But since
I ⊆ ker(Φ) these equal dimensions force I = ker(Φ), which proves the desired
isomorphism.
(b) Let K have characteristic p > 0. Then by the binomial formula we have
Xi^p − 1 = (Xi − 1)^p for i = 1, 2. This means that substituting X1 − 1 for X
and X2 − 1 for Y yields a well-defined isomorphism

K[X, Y]/(X^p, Y^p) → K[X1, X2]/(X1^p − 1, X2^p − 1).

By part (a) the algebra on the right-hand side is isomorphic to the group algebra
K(Cp × Cp) and this completes the proof of the claim in (b).
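The identity Xi^p − 1 = (Xi − 1)^p in characteristic p, the key point of part (b), is easy to confirm with sympy over the prime fields GF(p):

```python
from sympy import symbols, Poly, GF

X = symbols('X')
for p in (2, 3, 5, 7):
    # Over a field of characteristic p the binomial formula collapses,
    # since p divides every binomial coefficient C(p, k) for 0 < k < p:
    lhs = Poly(X**p - 1, X, domain=GF(p))
    rhs = Poly((X - 1)**p, X, domain=GF(p))
    assert lhs == rhs
```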
As a first step towards our main goal we can now find the representation type in
the case of finite p-groups; the answer is easy to describe:
Theorem 8.11. Let p be a prime number, K a field of characteristic p and let G be
a finite p-group. Then the group algebra KG has finite representation type if and
only if G is cyclic.
To prove this, we will use the following property, which characterizes when a
p-group is not cyclic.
Lemma 8.12. If a finite p-group G is not cyclic then it has a factor group which is
isomorphic to Cp × Cp .
Proof. In the case when G is abelian, we can deduce this from the general
description of a finite abelian group. Indeed, such a group can be written as the
direct product of cyclic groups, and if there are at least two factors, both necessarily
p-groups, then there is a factor group as in the lemma. For the proof in the general
case, we refer to the worked Exercise 8.6.
Proof of Theorem 8.11. Suppose first that G is cyclic, that is, G = C_{p^a} for some
a ∈ N. Then by Lemma 8.9, we have KG ≅ K[T]/(T^{p^a}). We have seen in
Example 8.3 that this algebra has finite representation type. So KG also has finite
representation type, by Remark 8.2.
Conversely, suppose G is not cyclic. We must show that KG has infinite
representation type. By Lemma 8.12, the group G has a normal subgroup N such
that the factor group Ḡ := G/N is isomorphic to Cp ×Cp . We construct a surjective
algebra homomorphism ψ : KG → K Ḡ, by taking ψ(g) := gN and extending this
to linear combinations. This is an algebra homomorphism, that is, it is compatible
with products since (g1 N)(g2 N) = g1 g2 N in the factor group G/N. Clearly ψ is
surjective. Let I = ker(ψ); then KG/I ≅ K Ḡ by the isomorphism theorem of
algebras (see Theorem 1.26). We have seen in Lemma 8.10 that

K Ḡ ≅ K(Cp × Cp) ≅ K[X, Y]/(X^p, Y^p).
By Example 8.7, the latter algebra is of infinite representation type. Then the
isomorphic algebras K Ḡ and KG/I also have infinite representation type, by
Remark 8.2. Since the factor algebra KG/I has infinite representation type, by
Proposition 8.6 the group algebra KG also has infinite representation type.
In order to determine precisely which group algebras have finite representation
type, we need new tools to relate modules of a group to modules of a subgroup.
They are known as ‘restriction’ and ‘induction’, and they are used extensively in
the representation theory of finite groups.
The setup is as follows. Assume G is a finite group, and H is a subgroup
of G. Take a field K. Then the group algebra KH is a subalgebra of KG (see
Example 1.16). One would therefore like to relate KG-modules and KH -modules.
(Restriction) If M is any KG-module then by restricting the action of KG to the
subalgebra KH , the space M becomes a KH -module, called the restriction of M
to KH .
(Induction) There is also the process of induction, this is described in detail in
Chap. A, an appendix on induced modules. We briefly sketch the main construction.
Let W be a KH -module. Then we can form the tensor product KG ⊗K W of
vector spaces; this becomes a KG-module, called the induced module, by setting
x · (g ⊗ w) = xg ⊗ w for all x, g ∈ G and w ∈ W (and extending linearly). We
then consider the K-subspace

𝓗 := span{gh ⊗ w − g ⊗ hw | g ∈ G, h ∈ H, w ∈ W}.

The factor space

KG ⊗H W := (KG ⊗K W)/𝓗

is called the KG-module induced from the KH-module W. For convenience one
writes

g ⊗H w := g ⊗ w + 𝓗 ∈ KG ⊗H W.
{t ⊗H wi | t ∈ T , i = 1, . . . , m};
μ : KG ⊗H M → M,   g ⊗H m ↦ gm
{t ⊗H wi | t ∈ T , i = 1, . . . , m},
implies that
h · (t ⊗H wi ) = ht ⊗H wi = s h̃ ⊗H wi = s ⊗H h̃wi
and then one checks that W1 and ⊕_{t ∈ T\{1}} Wt are KH-submodules of KG ⊗H W.
Since these are spanned by elements of a basis of KG ⊗H W we obtain a direct
sum decomposition of KH-modules

KG ⊗H W = W1 ⊕ ( ⊕_{t ∈ T\{1}} Wt ).
σ : M → KG ⊗H M,   m ↦ Σ_{i=1}^{r} gi ⊗H gi^{−1}m.
(The reader might wonder how to get the idea to use this map. It is not
too hard to see that we have an injective KH-module homomorphism
i : M → KG ⊗H M, m ↦ 1 ⊗H m; the details are given in Proposition A.6 in
the appendix. To make it a KG-module homomorphism, one mimics the averaging
formula from the proof of Maschke’s theorem, see Lemma 6.1, leading to the above
map σ .)
Σ_{i=1}^{r} gi hi ⊗H hi^{−1}gi^{−1}m = Σ_{i=1}^{r} gi hi hi^{−1} ⊗H gi^{−1}m = Σ_{i=1}^{r} gi ⊗H gi^{−1}m = σ(m),
where for the first equation we have used the defining relations in the induced
module (coming from the subspace H in the definition of KG ⊗H M).
Next, we show that σ is a KG-module homomorphism. In fact, for every g ∈ G
and m ∈ M we have
σ(gm) = Σ_{i=1}^{r} gi ⊗H gi^{−1}gm = Σ_{i=1}^{r} gi ⊗H (g^{−1}gi)^{−1}m.
Setting g̃i := g −1 gi we get another set of left coset representatives g̃1 , . . . , g̃r . This
implies that the above sum can be rewritten as
σ(gm) = Σ_{i=1}^{r} gi ⊗H (g^{−1}gi)^{−1}m = Σ_{i=1}^{r} g g̃i ⊗H g̃i^{−1}m = g ( Σ_{i=1}^{r} g̃i ⊗H g̃i^{−1}m ) = gσ(m),
where the last equation holds since we have seen above that σ is independent of the
choice of coset representatives.
For the composition μ ◦ σ : M → M we obtain for all m ∈ M that
(μ ◦ σ)(m) = μ( Σ_{i=1}^{r} gi ⊗H gi^{−1}m ) = Σ_{i=1}^{r} gi gi^{−1}m = rm = |G : H| m.
So far we have not used our assumption that the index |G : H| is invertible in the
field K, but now it becomes crucial. We set κ := (1/|G : H|) σ : M → KG ⊗H M.
Then from the above computation we deduce that μ ◦ κ = idM.
As the final step we can now apply Lemma 2.30 to get a direct sum decomposi-
tion KG ⊗H M = im(κ) ⊕ ker(μ). The map κ is injective (since μ ◦ κ = idM ), so
im(κ) ≅ M and the claim follows.
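The splitting argument at the end is pure linear algebra: whenever μ ◦ κ = id, the ambient space is the direct sum im(κ) ⊕ ker(μ). A numerical sketch with random matrices standing in for κ and μ (all names ours):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 7                               # dim M = 3, ambient dimension 7
kappa = rng.standard_normal((n, m))       # an injective map M -> big space
mu = np.linalg.pinv(kappa)                # a left inverse: mu @ kappa = I_m
assert np.allclose(mu @ kappa, np.eye(m))

# ker(mu) via SVD: the last n - m right singular vectors of mu span it
_, _, Vt = np.linalg.svd(mu)
ker_basis = Vt[m:]                        # (n - m) x n
assert np.allclose(mu @ ker_basis.T, 0, atol=1e-10)

# the columns of kappa (spanning im(kappa)) together with ker(mu)
# span the whole space, so it is the direct sum im(kappa) ⊕ ker(mu):
combined = np.vstack([kappa.T, ker_basis])
assert np.linalg.matrix_rank(combined) == n
```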
(ii) By assumption, KH has finite representation type; let W1 , . . . , Ws be the
finite-dimensional indecomposable KH -modules, up to isomorphism. It suffices to
show that every finite-dimensional indecomposable KG-module M is isomorphic
to a direct summand of one of the KG-modules KG ⊗H Wi with 1 ≤ i ≤ s.
In fact, since these finitely many finite-dimensional modules have only finitely
many indecomposable summands, it then follows that there are only finitely many
possibilities for M (up to isomorphism), that is, KG has finite representation type.
M ≅ (W1 ⊕ . . . ⊕ W1) ⊕ . . . ⊕ (Ws ⊕ . . . ⊕ Ws)   (with a1 copies of W1, . . . , as copies of Ws)
as a KH -module. Since tensor products commute with finite direct sums (see the
worked Exercise 8.9 for a proof in the special case of induced modules used here),
we obtain
KG ⊗H M ≅ (KG ⊗H W1)^{⊕a1} ⊕ . . . ⊕ (KG ⊗H Ws)^{⊕as}.
So we can view M as a module for the group algebra K(G/H). Since G/H has
order 2, it is isomorphic to the subgroup ⟨s⟩ of S3 generated by s. So we can also
view M as a module for the group algebra K⟨s⟩. As such, it is the direct sum of
two 1-dimensional submodules, with K-basis (1 + s) ⊗H w and (1 − s) ⊗H w,
respectively. Thus, from M = KG ⊗H W we obtain two 1-dimensional (hence
simple) KG-modules
{1 ⊗H b1 , 1 ⊗H b2 , s ⊗H b1 , s ⊗H b2 },
see Proposition A.5. One checks that KG ⊗H W2 has the two KG-submodules
an indecomposable KH -module).
We consider the submodules generated by (1 + s) ⊗H 1 and (1 − s) ⊗H 1. More
precisely, again using that rs = sr^{−1} in S3 one can check that

U3 := KG((1 + s) ⊗H 1)
    = span{u := (1 + s) ⊗H 1, v := 1 ⊗H r + s ⊗H r^2, w := 1 ⊗H r^2 + s ⊗H r}
and
V3 := KG((1 − s) ⊗H 1)
    = span{x := (1 − s) ⊗H 1, y := 1 ⊗H r − s ⊗H r^2, z := 1 ⊗H r^2 − s ⊗H r}.
su = u , sv = w , sw = v and ru = v , rv = w , rw = u
sx = −x , sy = −z , sz = −y and rx = y , ry = z , rz = x.
From this one checks that U3 and V3 each have a unique 1-dimensional KG-
submodule, namely span{u + v + w} ≅ U1 for U3 and span{x + y + z} ≅ V1 for
V3. From this one deduces that there is a direct sum decomposition KG = U3 ⊕ V3.
Moreover, we claim that U3 and V3 are indecomposable KG-modules. For this
it suffices to show that they are indecomposable when considered as KH -modules
(with the restricted action); in fact, any direct sum decomposition as KG-modules
would also be a direct sum decomposition as KH -modules.
Note that U3 as a KH-module is isomorphic to KH (an isomorphism is given
on basis elements by u ↦ 1, v ↦ r and w ↦ r^2) and this is an indecomposable
KH-module. Hence U3 is an indecomposable KG-module. Similarly, V3 is an
indecomposable KG-module.
Finally, U3 and V3 are not isomorphic; in fact, an isomorphism would yield
an isomorphism between the unique 1-dimensional submodules, but these are
isomorphic to U1 and V1 , respectively.
According to the proof of Theorem 8.15 we have now shown that the group
algebra KS3 for a field K of characteristic 3 has precisely six indecomposable mod-
ules up to isomorphism, two 1-dimensional modules, two 2-dimensional modules
and two 3-dimensional modules. Among these only the 1-dimensional modules are
simple (we have found that each of the other indecomposable modules has a 1-
dimensional submodule).
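The action formulas for U3 can also be checked by machine: reading off the matrices of r and s in the basis {u, v, w}, one verifies the defining relations of S3 = ⟨r, s | r^3 = s^2 = 1, srs = r^{−1}⟩ and the invariant line spanned by u + v + w. (The relations hold over any field; characteristic 3 only matters for indecomposability.) A sketch with integer matrices, names ours:

```python
import numpy as np

# Action of r and s on the basis {u, v, w} of U3, read off from the formulas
# ru = v, rv = w, rw = u and su = u, sv = w, sw = v (columns are images):
R = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
S = np.array([[1, 0, 0],
              [0, 0, 1],
              [0, 1, 0]])
I = np.eye(3, dtype=int)

# These matrices really define a representation of S3:
assert (np.linalg.matrix_power(R, 3) == I).all()
assert (S @ S == I).all()
assert (S @ R @ S == np.linalg.matrix_power(R, 2)).all()   # srs = r^-1 = r^2

# The vector u + v + w spans a 1-dimensional submodule (the trivial module U1):
one = np.array([1, 1, 1])
assert (R @ one == one).all() and (S @ one == one).all()
```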
EXERCISES
form

⎛ M1(a)    0   ⎞
⎝   0    M2(a) ⎠ .
A = K[X, Y]/(X^2, Y^2, XY).
(a) Show that Z(G) has order at least p. (Hint: note that Z(G) consists of
the elements whose conjugacy class has size 1, and that the number of
elements with this property must be divisible by p.)
(b) Show that if Z(G) ≠ G (that is, G is not abelian) then G/Z(G) cannot
be cyclic.
(c) Suppose G is not cyclic. Show that then G has a normal subgroup N such
that G/N is isomorphic to Cp ×Cp . When G is abelian, this follows from
the structure of finite abelian groups. Prove the general case by induction
on n.
8.7. Let A be a K-algebra. Suppose f : M′ → M and g : M → M′′ are A-
module homomorphisms between A-modules such that f is injective, g is
surjective, and im(f) = ker(g). This is called a short exact sequence and it
is written as

         f        g
0 → M′ −→ M −→ M′′ → 0.
Show that for such a short exact sequence the following statements are
equivalent.
KG ⊗H (V ⊕ W) ≅ (KG ⊗H V) ⊕ (KG ⊗H W).
8.10. Let G = {±1, ±i, ±j, ±k} be the quaternion group, as defined in Remark 1.9
(see also Exercise 6.8).
(a) Determine a normal subgroup N of G of order 2 and show that G/N is
isomorphic to C2 × C2 .
(b) For which fields K does the group algebra KG have finite representation
type?
8.11. For which fields K does the group algebra KG have finite representation type
where G is:
(a) the alternating group G = A5 of even permutations on five letters,
(b) the dihedral group G = Dn of order 2n where n ≥ 2, that is, the
symmetry group of the regular n-gon.
Chapter 9
Representations of Quivers
We have seen representations of a quiver in Chap. 2 and we have also seen how to
relate representations of a quiver Q over a field K and modules for the path algebra
KQ, and that quiver representations and KQ-modules are basically the same
(see Sect. 2.5.2). For some tasks, quiver representations are more convenient than
modules. In this chapter we develop the theory further and study representations of
quivers in detail. In particular, we want to exploit properties which come from the
graph structure of the quiver Q.
   α      β      γ
1 −→ 2 −→ 3 −→ 4

  idK
K −→ K −→ 0 −→ K^2.
Note that the maps starting or ending at a space which is zero can only be zero maps
and there is no need to write this down.
In Sect. 2.5.2 we have seen how to view a representation of a quiver Q over the
field K as a module for the path algebra KQ, and conversely how to view a module
for KQ as a representation of Q over K. We recall these constructions here, using
the quiver Q as in Example 9.2.
Example 9.3. Let Q be the quiver as in Example 9.2.
(a) Let M be as in Example 9.2. We translate the representation M of Q into
a KQ-module M. The underlying vector space is 4-dimensional; indeed,
according to Proposition 2.46 (a) we take M = ⊕_{i=1}^{4} M(i) = K^4, where

e1M = {(x, 0, 0, 0) | x ∈ K}
e2M = {(0, y, 0, 0) | y ∈ K}
e4M = {(0, 0, z, w) | z, w ∈ K}.
Then α(x, y, z, w) = (0, x, 0, 0) and β, γ act as zero maps. Note that M is the
direct sum of two KQ-submodules, the summands are e1 M ⊕ e2 M and e4 M.
(b) We give now an example which starts with a module and constructs from this
a quiver representation. Start with the KQ-module P = (KQ)e2 , that is, the
submodule of KQ generated by the trivial path e2 . It has basis {e2 , β, γβ}.
According to Proposition 2.46 (b), the representation P of Q corresponding
to the KQ-module P has the following shape. For each i = 1, 2, 3, 4 we
set P (i) = ei P = ei (KQ)e2 . A basis of this K-vector space is given by
paths in Q starting at vertex 2 and ending at vertex i. So we get P (1) = 0,
9.1 Definitions and Examples 165
     idK    idK
0 −→ K −→ K −→ K.
The close relation between modules for path algebras and representations of
quivers means that our definitions and results on modules can be translated to
representations of quivers. In particular, module homomorphisms become the
following.
Definition 9.4. Let Q = (Q0 , Q1 ) be a quiver, and let M and N be representations
of Q over K. A homomorphism ϕ : M → N of representations consists of a tuple
(ϕi )i∈Q0 of K-linear maps ϕi : M(i) → N(i) for each vertex i ∈ Q0 such that for
α
each arrow i −→ j in Q1 , the following diagram commutes:
that is,
ϕj ◦ M(α) = N(α) ◦ ϕi .
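The commuting condition ϕj ◦ M(α) = N(α) ◦ ϕi is straightforward to test by machine; here is a small checker (the data layout and names are our own, not from the text), applied to the quiver 1 → 2:

```python
import numpy as np

def is_hom(phi, M_maps, N_maps, arrows):
    """Check phi_j ∘ M(alpha) = N(alpha) ∘ phi_i for every arrow alpha: i -> j.
    phi: dict vertex -> matrix; M_maps/N_maps: dict arrow -> matrix;
    arrows: dict arrow -> (i, j)."""
    return all(
        np.allclose(phi[j] @ M_maps[a], N_maps[a] @ phi[i])
        for a, (i, j) in arrows.items()
    )

# Quiver 1 --alpha--> 2, representations M: K --id--> K and N: K --0--> K.
arrows = {'alpha': (1, 2)}
M_maps = {'alpha': np.array([[1.0]])}
N_maps = {'alpha': np.array([[0.0]])}

# (id, 0) is a homomorphism M -> N, but (id, id) is not:
assert is_hom({1: np.eye(1), 2: np.zeros((1, 1))}, M_maps, N_maps, arrows)
assert not is_hom({1: np.eye(1), 2: np.eye(1)}, M_maps, N_maps, arrows)
```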
M(α1) : K → K^2,  x ↦ (x, 0)
M(α2) : K → K^2,  x ↦ (0, x)
M(α3) : K → K^2,  x ↦ (x, x).
then we have
Using the other arrows we similarly find ϕ4 (0, x) = (0, c2 x), and
ϕ4 (x, x) = (c3 x, c3 x). Using linearity,
ϕ4 (x, y) = ϕ4 (x, 0) + ϕ4 (0, y) = (cx, 0) + (0, cy) = (cx, cy) = c(x, y),
(i) For each vertex i ∈ Q0 , the vector space U (i) is a subspace of M(i).
α
(ii) For each arrow i −→ j in Q, the linear map U (α) : U (i) −→ U (j ) is the
restriction of M(α) to the subspace U (i).
(b) A non-zero representation S of Q is called simple if its only subrepresentations
are O and S.
Example 9.7. Let Q be a quiver.
(1) Every representation M of Q has the trivial subrepresentations O and M.
(2) For each vertex j ∈ Q0 we have a representation Sj of Q over K given by
Sj(i) = { K  if i = j
        { 0  if i ≠ j
(b) A subquiver Q′ of Q as above is called a full subquiver if for any two vertices
              α
    i, j ∈ Q′0 all arrows i −→ j of Q are also arrows in Q′.
Note that since Q′ must be a quiver, it is part of the definition that in a
subquiver the starting and end points of any arrow are also in the subquiver (see
Definition 1.11). Thus one cannot choose arbitrary subsets Q′1 ⊆ Q1 in the above
definition.
Example 9.14. Let Q be the quiver
We determine the subquivers Q′ of Q with vertex set Q′0 = {1, 2}. For the arrow set
we have the following possibilities: Q′1 = ∅, Q′1 = {α}, Q′1 = {β} and Q′1 = {α, β}.
Of these, only the last quiver is a full subquiver. However, by the preceding remark
we cannot choose Q′1 = {α, γ} since the vertex 3 is not in Q′0.
Given a quiver Q with a subquiver Q′, we want to relate representations of Q
with representations of Q′. For our purposes two constructions will be particularly
useful. We first present the ‘restriction’ of a representation of Q to a representation
of Q′. Starting with a representation of Q′, we then introduce the ‘extension by
zero’ which produces a representation of Q.
Definition 9.15. Let Q = (Q0, Q1) be a quiver and Q′ = (Q′0, Q′1) a subquiver of
Q. If

M = ((M(i))i∈Q0, (M(α))α∈Q1)

is a representation of Q, then restricting it to Q′ yields a representation of Q′,
namely

M′ := ((M(i))i∈Q′0, (M(α))α∈Q′1).
   β      γ
2 −→ 3 −→ 4

K −→ 0 −→ K^2.
is compatible with the direct sum decomposition, and the same holds if α is in Q′.
This shows that M = U ⊕ V.
Assume now that M is an indecomposable representation of Q. By the above,
we have M = U ⊕ V, therefore one of U or V must be the zero representation.
Say U is the zero representation; that is, M is the extension by zero of M′, a
representation of Q′. We claim that M′ must be indecomposable: Suppose we
had M′ = X′ ⊕ Y′ with subrepresentations X′ and Y′. Then we extend X′
and Y′ by zero and obtain a direct sum decomposition M = X ⊕ Y. Since
M is indecomposable, one of X or Y is the zero representation. But since these
are obtained as extensions by zero this implies that one of X′ or Y′ is the zero
representation. Therefore, M′ is indecomposable, as claimed.
There are further methods to relate representations of different quivers. We will now
present a general construction which will be very useful later. This construction
works for quivers without loops; for simplicity we consider from now on only
quivers without oriented cycles. Recall that the corresponding path algebras are then
finite-dimensional, see Exercise 1.2.
Consider two quivers Q and Q̃, where Q̃ is obtained from Q by replacing one
                                                        γ
vertex i of Q by two vertices i1, i2 and one arrow i1 −→ i2, and by distributing
the arrows adjacent to i between i1 and i2. The following definition makes this
construction precise.
Definition 9.20. Let Q be a quiver without oriented cycles and i a fixed vertex. Let
T be the set of all arrows adjacent to i, and suppose T = T1 ∪ T2 , a disjoint union.
Define Q̃ to be the quiver obtained from Q as follows.
γ
(i) Replace vertex i by i1 −→ i2 (where i1 , i2 are different vertices);
(ii) Join the arrows in T1 to i1 ;
(iii) Join the arrows in T2 to i2 .
In (ii) and (iii) we keep the original orientation of the arrows. We call the new
quiver Q̃ a stretch of Q.
By assumption, Q does not have loops, so any arrow adjacent to i either starts at
i or ends at i but not both, and it belongs either to T1 or to T2 . Note that if T is large
then there are many possible stretches of a quiver Q at a given vertex i, coming from
different choices of the sets T1 and T2 .
We illustrate the general construction from Definition 9.20 with several
examples.
9.3 Stretching Quivers and Representations 173
Example 9.21.
(1) Let Q be the quiver 1 −→ 2. We stretch this quiver at vertex 2 and take T2 = ∅,
and we get the quiver
           γ
1 −→ 21 −→ 22 .
There are several stretches of Q at the vertex i, for example we can get the
quiver
or the quiver
Exercise 9.3. Let Q be a quiver with vertex set {1, . . . , n} and n − 1 arrows such
that for each i with 1 ≤ i ≤ n − 1, there is precisely one arrow between vertices
i and i + 1, with arbitrary orientation. That is, the underlying graph of Q has the
shape
Show that one can get the quiver Q by a finite number of stretches starting with the
one-vertex quiver, that is, the quiver with one vertex and no arrows.
Exercise 9.4. Let Q̃ be a quiver with vertex set {1, . . . , n + 1} such that there is one
arrow between vertices i and i + 1 for all i = 1, . . . , n and an arrow between n + 1
and 1. That is, the underlying graph of Q̃ has a circular shape
Suppose that the arrows of Q̃ are oriented so that Q̃ is not an oriented cycle. Show
that one can obtain Q̃ by a finite number of stretches starting with the Kronecker
quiver.
So far, stretching a quiver as in Definition 9.20 is a combinatorial construction
which produces new quivers from given ones. We can similarly stretch representa-
tions. Roughly speaking, we replace the vector space M(i) in a representation M
of Q by two copies of M(i) with the identity map between them, distributing the
M(α) with α adjacent to i so that we get a representation of Q̃, and keeping the rest
as it is.
Definition 9.22. Let Q be a quiver without oriented cycles and let Q̃ be the quiver
                                                                       γ
obtained from Q by stretching at a fixed vertex i, with a new arrow i1 −→ i2 and
where the arrows adjacent to i are the disjoint union T = T1 ∪ T2, see Definition 9.20.
Given a representation M of Q, define M̃ to be the representation of Q̃ by

M̃(i1) = M(i) = M̃(i2),   M̃(j) = M(j)   (for j ≠ i)

M̃(γ) = idM(i),   M̃(α) = M(α)   (for α any arrow of Q).
Note that if α is in T1 then M̃(α) must start or end at vertex i1, and similarly for α
in T2.
Example 9.23.
(1) As in Example 9.21 we consider the quiver Q of the form 1 −→ 2, and the
    stretched quiver Q̃ : 1 −→ 21 −→ 22 . Moreover, let M be the representation
      idK
    K −→ K of Q. Then the stretched representation M̃ of Q̃ has the form
      idK    idK
    K −→ K −→ K. As another example, let N be the representation K −→ 0
    of Q. Then the stretched representation Ñ of Q̃ has the form K −→ 0 −→ 0.
(2) Let Q be the Kronecker quiver and let M be the representation
For the stretched quivers appearing in Example 9.21 we then obtain the
following stretched representations:
and

ϕ̃i2 ◦ M̃(γ) = Ñ(γ) ◦ ϕ̃i1.

But M̃(γ) and Ñ(γ) are identity maps by Definition 9.22, and hence ϕ̃i1 = ϕ̃i2.
This means that we can define a homomorphism ϕ : M → N of representations
by setting ϕi := ϕ̃i1 = ϕ̃i2 and ϕj := ϕ̃j for j ≠ i. One checks that the
relevant diagrams as in Definition 9.4 commute; this follows since the corresponding
diagrams for ϕ̃ commute, and since ϕ̃i1 = ϕ̃i2.
With this preliminary observation we will now prove the two assertions.
(a) Consider the case M = N. To show that M̃ is indecomposable it suffices by
Lemma 9.11 to show that if ϕ̃^2 = ϕ̃ then ϕ̃ is zero or the identity. By the above
definition of the homomorphism ϕ we see that if ϕ̃^2 = ϕ̃ then also ϕ^2 = ϕ. By
assumption, M is indecomposable and hence, again by Lemma 9.11, ϕ is zero or
the identity. But then it follows directly from the definition of ϕ that ϕ̃ is also zero
or the identity.
(b) Assume M and N are not isomorphic. Suppose for a contradiction that ϕ̃ : M̃ → Ñ is an isomorphism of representations, that is, all linear maps ϕ̃_j : M̃(j) → Ñ(j) are isomorphisms. Then all linear maps ϕ_j : M(j) → N(j) are also isomorphisms and hence ϕ : M → N is an isomorphism, a contradiction.
9.4 Representation Type of Quivers 177
vector space to this vertex. Any vector space has a basis, hence it is a direct
sum of 1-dimensional subspaces; each subspace is a representation of the
one-vertex quiver. So there is just one indecomposable representation of the
one-vertex quiver, it is 1-dimensional. In particular, the one-vertex quiver has
finite representation type.
(2) Let Q be the quiver 1 −→ 2 with arrow α. We will determine explicitly its indecomposable representations. This will show that Q has finite representation type.
Let X be an arbitrary representation of Q, that is, we have two
finite-dimensional vector spaces X(1) and X(2), and a linear map
T = X(α) : X(1) → X(2). We exploit the proof of the rank-nullity theorem
from linear algebra.
Choose a basis {b1 , . . . , bn } of the kernel ker(T ), and extend it to a basis of
X(1), say by {c1 , . . . , cr }. Then the image im(T ) has basis {T (c1 ), . . . , T (cr )},
by the proof of the rank-nullity theorem. Extend this set to a basis of X(2),
say by {d1 , . . . , ds }. With this, we aim at expressing X as a direct sum of
subrepresentations.
For each basis vector bi of the kernel of T we get a subrepresentation Bi of
X of the form
span{bi } −→ 0.
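The bookkeeping in this argument can be checked numerically. The following sketch (the helper name is ours, not from the text) records the multiplicities with which the three indecomposable representations of 1 −→ 2 occur in X, in terms of dim X(1), dim X(2) and the rank of T = X(α):

```python
def decompose(dim1, dim2, rank_T):
    """Multiplicities of the indecomposables of the quiver 1 --> 2 in a
    representation X with T = X(alpha) of the given rank: the basis
    vectors b_i of ker(T) span copies of K -> 0, the c_i give copies of
    K -> K, and the d_i extending a basis of im(T) give copies of 0 -> K."""
    return {"K->0": dim1 - rank_T,   # dim ker(T) copies of span{b_i} -> 0
            "K->K": rank_T,          # copies of span{c_i} -> span{T(c_i)}
            "0->K": dim2 - rank_T}   # copies of 0 -> span{d_i}

# e.g. a map T : K^3 -> K^2 of rank 1
print(decompose(3, 2, 1))   # {'K->0': 2, 'K->K': 1, '0->K': 1}
```

The three multiplicities always sum to dim X(1) + dim X(2) − rank T, matching the rank–nullity count.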
and similarly each basis vector di gives a subrepresentation 0 −→ span{di }.
and that
Since Cλ (α) and Cμ (α) are identity maps, we have ϕ1 = ϕ2 . We also have the
commutative diagram
Since C(α) is the identity map, it follows that ϕ1 = ϕ2 . We also have a commutative
diagram of K-linear maps
and
appearing in Example 9.21 have infinite representation type over any field K.
EXERCISES
9.5. Consider the representation defined in Example 9.2. Show that it is the direct
sum of three indecomposable representations.
9.6. Let Q = (Q0 , Q1 ) be a quiver and let M be a representation of Q. Suppose
that M = U ⊕ V is a direct sum of subrepresentations. For each vertex
i ∈ Q0 let ϕi : M(i) = U (i) ⊕ V (i) → U (i) be the linear map given by
projecting onto the first summand, and let ψi : U (i) → M(i) = U (i) ⊕ V (i)
be the inclusion of U (i) into M(i). Show that ϕ = (ϕi )i∈Q0 : M → U and
ψ = (ψi )i∈Q0 : U → M are homomorphisms of representations.
9.7. (This exercise gives an outline of an alternative proof of Theorem 9.8.) Let
Q = (Q0 , Q1 ) be a quiver without oriented cycles. For each vertex j ∈ Q0
let Sj be the simple representation of Q defined in Example 9.7.
(i) Show that for j ≠ k ∈ Q0 the only homomorphism Sj → Sk of
representations is the zero homomorphism. In particular, the different
Sj are pairwise non-isomorphic.
Let M be a simple representation of Q.
(ii) Show that there exists a vertex k ∈ Q0 such that M(k) ≠ 0 and
M(α) = 0 for all arrows α ∈ Q1 starting at k.
(iii) Let k ∈ Q0 be as in (ii). Deduce that M has a subrepresentation U with
U (k) = M(k) and U (i) = 0 for i ≠ k.
(iv) Show that M is isomorphic to the simple representation Sk .
9.8. Let Q be a quiver. Let M be a representation of Q such that for a fixed vertex
j of Q we have M(i) = 0 for all i ≠ j. Show that M is isomorphic to a direct
sum of dimK M(j) many copies of the simple representation Sj.
Conversely, check that if a representation M of Q is isomorphic to a direct
sum of copies of Sj then M(i) = 0 for all i ≠ j.
9.9. Let Q be a quiver and j a sink of Q, that is, no arrow of Q starts at j . Let
α1 , . . . , αt be all the arrows ending at j . Let M be a representation of Q.
(a) Show that M is a direct sum of subrepresentations, M = X ⊕ Y,
where
(i) Y satisfies Y(k) = M(k) for k ≠ j, and Y(j) = Σ_{i=1}^{t} im(M(αi)) is
the sum of the images of the maps M(αi) : M(i) → M(j),
(ii) X is isomorphic to the direct sum of copies of the simple
representation Sj, and the number of copies is equal to
dimK M(j) − dimK Y(j).
(b) If M has a direct summand isomorphic to Sj then Σ_{i=1}^{t} im(M(αi)) is a
proper subspace of M(j).
9.10. Let Q be a quiver and j a source of Q, that is, no arrow of Q ends at j. Let
β1, . . . , βt be the arrows starting at j. Let N be a representation of Q.
9.4 Representation Type of Quivers 183
(a) Consider the subspace X(j) := ∩_{i=1}^{t} ker(N(βi)) of N(j). As a K-vector space we can decompose N(j) = X(j) ⊕ Y(j) for some subspace Y(j). Show that N is a direct sum of subrepresentations, N = X ⊕ Y, where
(i) Y satisfies Y(k) = N(k) for k ≠ j, and Y(j) is as above,
(ii) X is isomorphic to the direct sum of dimK X(j) many copies of the
simple representation Sj.
(b) If N has a direct summand isomorphic to Sj then ∩_{i=1}^{t} ker(N(βi)) is a
non-zero subspace of N(j).
9.11. Let K be a field and let Q = (Q0 , Q1 ) be a quiver. For each vertex i ∈ Q0
consider the KQ-module Pi = KQei generated by the trivial path ei .
(i) Interpret the KQ-module Pi as a representation Pi of Q. In particular,
describe bases for the vector spaces Pi (j ) for j ∈ Q0 . (Hint: Do it first
for the last quiver in (4) of Example 9.21 and use this as an illustration
for the general case.)
(ii) Suppose that Q has no oriented cycles. Show that the representation Pi
of Q is indecomposable.
9.12. Let Q = (Q0 , Q1 ) be a quiver without oriented cycles and suppose that
M = ((M(i))i∈Q0 , (M(α))α∈Q1 ) is a representation of Q. Show that the
following holds.
(a) The representation M is semisimple (that is, a direct sum of simple
subrepresentations) if and only if M(α) = 0 for each arrow α ∈ Q1 .
(b) For each vertex i ∈ Q0 we set

    socM(i) = ∩_{s(α)=i} ker(M(α))

(where s(α) denotes the starting vertex of the arrow α). Then
Gabriel’s theorem (which will be proved in the next chapter) states that a connected
quiver has finite representation type if and only if the underlying graph is one of
the Dynkin diagrams of types An for n ≥ 1, Dn for n ≥ 4, E6 , E7 , E8 , which we
define in Fig. 10.1.
We have seen some small special cases of Gabriel’s theorem earlier in the
book. Namely, a quiver of type A1 (that is, the one-vertex quiver) has only
one indecomposable representation by Example 9.28; in particular, it is of finite
representation type. Moreover, also in Example 9.28 we have shown that the quiver
1 −→ 2 has finite representation type; note that this quiver has as underlying graph
a Dynkin diagram of type A2 .
To deal with the case when the underlying graph is not a Dynkin diagram, we will only need a small list of graphs. These are the Euclidean diagrams, sometimes also called extended Dynkin diagrams. They are shown in Fig. 10.2, and are denoted by Ãn for n ≥ 1, D̃n for n ≥ 4, and Ẽ6, Ẽ7, Ẽ8. For example, the Kronecker quiver is a quiver with underlying graph a Euclidean diagram of type Ã1; and we have seen already in Example 9.30 that the Kronecker quiver has infinite representation type.
We refer to graphs in Fig. 10.1 as graphs of type A, D, or E. We say that a quiver
has Dynkin type if its underlying graph is one of the graphs in Fig. 10.1. Similarly,
we say that a quiver has Euclidean type if its underlying graph belongs to the list in
Fig. 10.2.
In analogy to the definition of a subquiver in Definition 9.13, a subgraph Γ′ = (Γ′0, Γ′1) of a graph Γ is a graph which consists of a subset Γ′0 ⊆ Γ0 of the vertices of Γ and a subset Γ′1 ⊆ Γ1 of the edges of Γ.
The following result shows that we might not need any other graphs than Dynkin
and Euclidean diagrams.
Lemma 10.1. Assume Γ is a connected graph. If Γ is not a Dynkin diagram then Γ has a subgraph which is a Euclidean diagram.
Proof. Assume Γ does not have a Euclidean diagram as a subgraph; we will show that then Γ is a Dynkin diagram.
The Euclidean diagrams of type Ãn are just the cycles; so Γ does not contain a cycle; in particular, it does not have a multiple edge. Since Γ is connected by assumption, it must then be a tree.
10.1 Dynkin Diagrams and Euclidean Diagrams 187
Fig. 10.2 The Euclidean diagrams of types Ã, D̃, Ẽ. The index plus 1 gives the number of vertices in each diagram
The graph Γ does not have a subgraph of type D̃4 and hence every vertex in Γ is adjacent to at most three other vertices. Moreover, since there is no subgraph of type D̃n for n ≥ 5, at most one vertex in Γ is adjacent to three other vertices. In total, this means that the graph Γ is a star with at most three arms, containing r, s and t further vertices, say with r ≤ s ≤ t (figure omitted).
By assumption, there is no subgraph in Γ of type Ẽ6 and hence r ≤ 1. If r = 0 then the graph is a Dynkin diagram of the form A_{s+t+1}, and we are done. So assume now that r = 1. There is also no subgraph of type Ẽ7 and therefore s ≤ 2. If s = 1 then the graph is a Dynkin diagram of type D_{t+3}, and again we are done.
So assume s = 2. Since Γ also does not have a subgraph of type Ẽ8 we get t ≤ 4. If t = 2 the graph is a Dynkin diagram of type E6, next if t = 3 we have the Dynkin diagram E7 and for t = 4 we have E8. This shows that the graph Γ is indeed a Dynkin diagram.
Exercise 10.1. Let Γ be a graph of Euclidean type D̃n (so Γ has n + 1 vertices). Show that any subgraph with n vertices is a disjoint union of Dynkin diagrams.
Let Γ be a graph, and assume that it does not contain loops, that is, edges with the same starting and end point.
In this section we will define a bilinear form and analyze the corresponding quadratic form for such a graph Γ. These two forms are defined on Zn, by using the standard basis vectors εi which form a Z-basis of Zn. We refer to εi as a 'unit vector'; it has a 1 in position i and is zero otherwise.
Definition 10.2. Let Γ = (Γ0, Γ1) be a graph without loops and label the vertices by Γ0 = {1, 2, . . . , n}.
(a) For any vertices i, j ∈ Γ0 let dij be the number of edges in Γ between i and j. Note that dij = dji (since edges are unoriented).
(b) We define a symmetric bilinear form (−, −)Γ : Zn × Zn → Z on the unit vectors by

    (εi, εj)Γ = −dij if i ≠ j,   and   (εi, εj)Γ = 2 if i = j.

(c) For each vertex j ∈ Γ0 we define the reflection map

    sj : Zn → Zn,   sj(a) = a − (a, εj)Γ εj.
Remark 10.3. We can extend the definition of sj to a map on Rn , and then we can
write down a matrix with respect to the standard basis of Rn . But for our application
it is important that sj preserves Zn , and we work mostly with Zn .
We record some properties of the above reflection maps, which also justify why they are called reflections. Let j be a vertex of the graph Γ.
10.2 The Bilinear Form and the Quadratic Form 189
Compute a formula for the reflections s1 and s2. Check also that their matrices with respect to the standard basis of R2 are

    s1 = ( −1 2 ; 0 1 ),   s2 = ( 1 0 ; 2 −1 )

(rows separated by semicolons).
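These matrices can be checked mechanically. The sketch below (helper names are ours) builds the Gram matrix of a loopless graph as in Definition 10.2 and reads off the matrix of sj in the column-vector convention; for the double edge of type Ã1 it reproduces s1 and s2 above.

```python
def gram_matrix(n, edges):
    """Gram matrix of (-,-)_Gamma: diagonal entries 2, off-diagonal
    entries -d_ij, where d_ij = number of edges between i and j."""
    G = [[2 if i == j else 0 for j in range(n)] for i in range(n)]
    for i, j in edges:                  # one pair per edge, 0-based vertices
        G[i][j] -= 1
        G[j][i] -= 1
    return G

def reflection(G, j):
    """Matrix of s_j(a) = a - (a, eps_j) eps_j on column vectors:
    only row j changes, since s_j(eps_k) = eps_k - G[k][j] eps_j."""
    n = len(G)
    return [[(1 if i == k else 0) - (G[k][j] if i == j else 0)
             for k in range(n)] for i in range(n)]

# type A~1: two vertices joined by a double edge (d_12 = 2)
G = gram_matrix(2, [(0, 1), (0, 1)])
print(reflection(G, 0), reflection(G, 1))
# [[-1, 2], [0, 1]] [[1, 0], [2, -1]]
```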
Example 10.4. We compute explicitly the Gram matrices of the bilinear forms
corresponding to Dynkin diagrams of type A, D and E, defined in Fig. 10.1.
Note that the bilinear forms depend on the numbering of the vertices of the graph.
It is convenient to fix some ‘standard labelling’. For later use, we also fix an
orientation of the arrows; but note that the bilinear form (−, −)Γ is independent of the orientation.
Type An
Type Dn
190 10 Diagrams and Roots
Type E8
Then, for E6 we take the subquiver with vertices 1, 2, . . . , 6 and similarly for E7 .
With this fixed standard labelling of the Dynkin diagrams, the Gram matrices of
the bilinear forms are as follows.
Type An : the n × n tridiagonal matrix with all diagonal entries 2, entries −1 on the first sub- and superdiagonal, and 0 elsewhere:

    (  2 −1  0  ⋯   0 )
    ( −1  2 −1       ⋮ )
    (  0 −1  2  ⋱   0 )
    (  ⋮     ⋱  ⋱  −1 )
    (  0  ⋯  0 −1   2 )
Type Dn : the n × n matrix

    (  2  0 −1  0  ⋯   0 )
    (  0  2 −1  0  ⋯   0 )
    ( −1 −1  2 −1       ⋮ )
    (  0  0 −1  2  ⋱   0 )
    (  ⋮        ⋱  ⋱  −1 )
    (  0  ⋯     0 −1   2 )

where rows 1 and 2 each have a single off-diagonal entry −1 in column 3, and rows 3 to n form the tridiagonal pattern of type A.
The matrices for E6 and E7 are then obtained by removing the last two rows and
columns for E6 and the last row and column for E7 .
Remark 10.5. In the above example we have chosen a certain labelling for each of the Dynkin diagrams. In general, let Γ be a graph without loops, and let Γ̃ be the graph obtained from Γ by choosing a different labelling of the vertices. Choosing a different labelling means permuting the unit vectors ε1, . . . , εn, and hence the rows and columns of the Gram matrix GΓ are permuted accordingly. In other words, there is a permutation matrix P (that is, a matrix with precisely one entry 1 in each row and column, and zero entries otherwise) describing the basis transformation coming from the permutation of the unit vectors, and such that P GΓ P⁻¹ is the Gram matrix of the graph Γ̃. Note that any permutation matrix P is orthogonal, hence P⁻¹ = Pᵗ, the transposed matrix, and P GΓ P⁻¹ = P GΓ Pᵗ.
Given any bilinear form, there is an associated quadratic form. We want to write down explicitly the quadratic form associated to the above bilinear form (−, −)Γ for a graph Γ.
Definition 10.6. Let Γ = (Γ0, Γ1) be a graph without loops, and let Γ0 = {1, . . . , n}.
(a) If GΓ is the Gram matrix of the bilinear form (−, −)Γ as defined in Definition 10.2, the associated quadratic form is given as follows:

    qΓ : Zn → Z,   qΓ(x) = ½ (x, x)Γ = ½ x GΓ xᵗ = Σ_{i=1}^{n} xi² − Σ_{i<j} dij xi xj.

(b) An element x ∈ Zn is called a root (of qΓ) if qΓ(x) = 1.
    qΓ̃(xP⁻¹) = ½ (xP⁻¹) GΓ̃ (xP⁻¹)ᵗ = ½ (xP⁻¹)(P GΓ Pᵗ)(xP⁻¹)ᵗ
             = ½ x (P⁻¹P) GΓ (Pᵗ (P⁻¹)ᵗ) xᵗ = ½ x GΓ xᵗ = qΓ(x)
192 10 Diagrams and Roots
    qΓ(x) = Σ_{i=1}^{n} xi² − Σ_{i=1}^{n−1} xi xi+1,

and hence

    2qΓ(x) = x1² + Σ_{i=1}^{n−1} (xi − xi+1)² + xn².
We want to determine the set of roots, that is, we have to find all x ∈ Zn such that 2qΓ(x) = 2. If so, then |xi − xi+1| ≤ 1 for 1 ≤ i ≤ n − 1, and |x1| and |xn| also are ≤ 1 (recall that the xi are integers). Precisely two of the numbers |xi − xi+1|, |x1|, |xn| are equal to 1 and all others are zero.
Let r ∈ {1, . . . , n} be minimal such that xr ≠ 0 (this exists since x cannot be the zero vector, otherwise qΓ(x) = 0). So xr = ±1 and |xr−1 − xr| = |xr| = 1. Then among the |xi − xi+1| with r ≤ i ≤ n − 1, and |xn|, precisely one further 1 appears. So the only possibilities are x = εr + εr+1 + . . . + εs or x = −εr − εr+1 − . . . − εs for some s ∈ {r, . . . , n}.
Thus we have shown that the roots of a Dynkin diagram of type An are given by the set {±(εr + εr+1 + . . . + εs) | 1 ≤ r ≤ s ≤ n}.
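This description is easy to confirm by brute force for small n; the sketch below (helper names are ours) evaluates qΓ for type An and enumerates all roots in a small box, which suffices since the analysis above shows that the entries of a root lie in {−1, 0, 1}.

```python
from itertools import product

def q_A(x):
    """Quadratic form q_Gamma for Dynkin type A_n, standard labelling."""
    return (sum(v * v for v in x)
            - sum(x[i] * x[i + 1] for i in range(len(x) - 1)))

n = 4
roots = [x for x in product(range(-2, 3), repeat=n) if q_A(x) == 1]
# every root is +-(eps_r + ... + eps_s), so there are n(n+1) of them
print(len(roots))                                # 20 for n = 4
assert all(set(x) <= {-1, 0, 1} for x in roots)  # entries are 0 or +-1
```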
Proof. We will show that sj preserves the bilinear form (−, −)Γ: for x ∈ Zn we have

    (sj(x), sj(x))Γ = (x − (x, εj)Γ εj, x − (x, εj)Γ εj)Γ
                    = (x, x)Γ − 2 (x, εj)Γ² + (x, εj)Γ² (εj, εj)Γ = (x, x)Γ.

For the last equality we have used that the bilinear form is symmetric and that (εj, εj)Γ = 2 by Definition 10.2. For the corresponding quadratic form we get

    qΓ(sj(x)) = ½ (sj(x), sj(x))Γ = ½ (x, x)Γ = qΓ(x).

Hence if x is a root, that is qΓ(x) = 1, then qΓ(sj(x)) = 1 and sj(x) is a root.
We want to show that there are only finitely many roots if Γ is a Dynkin diagram. To do so, we will prove that qΓ is positive definite, and we want to use tools from linear algebra. Therefore, we extend the bilinear form (−, −)Γ and the quadratic form qΓ to Rn. That is, for the standard basis we take the same formulae as in Definitions 10.2 and 10.6, and we apply them to arbitrary x ∈ Rn.
Recall from linear algebra that a quadratic form q : Rn → R is called positive definite if q(x) > 0 for any non-zero x ∈ Rn. Suppose the quadratic form comes from a symmetric bilinear form as in our case, where (see Definition 10.6)

    qΓ(x) = ½ (x, x)Γ = ½ x GΓ xᵗ.
Then the quadratic form is positive definite if and only if, for some labelling,
the Gram matrix of the symmetric bilinear form is positive definite. Recall from
linear algebra that a symmetric real matrix is positive definite if and only if all its
leading principal minors are positive. The leading principal k-minor of an n × n-
matrix is the determinant of the submatrix obtained by deleting rows and columns
k + 1, k + 2, . . . , n. This is what we will use in the proof of the following result.
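This criterion is straightforward to test numerically. The sketch below (helper names are ours) computes the leading principal minors of the type A Gram matrix with exact rational arithmetic; they come out as k + 1 for the k-minor, which is exactly the value d(Ak) computed in the proof below.

```python
from fractions import Fraction

def det(M):
    """Determinant by Gaussian elimination over the rationals."""
    A = [[Fraction(v) for v in row] for row in M]
    n, d = len(A), Fraction(1)
    for c in range(n):
        p = next((r for r in range(c, n) if A[r][c] != 0), None)
        if p is None:
            return Fraction(0)
        if p != c:                       # row swap flips the sign
            A[c], A[p] = A[p], A[c]
            d = -d
        d *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return d

def gram_A(k):
    """Gram matrix of Dynkin type A_k (tridiagonal 2 / -1)."""
    return [[2 if i == j else (-1 if abs(i - j) == 1 else 0)
             for j in range(k)] for i in range(k)]

print([int(det(gram_A(k))) for k in range(1, 7)])   # [2, 3, 4, 5, 6, 7]
```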
Proposition 10.10. Assume Γ is a Dynkin diagram. Then the quadratic form qΓ is positive definite.
Proof. We have seen in Remarks 10.5 and 10.7 how the quadratic forms change when the labelling of the vertices is changed. With a different labelling, one only permutes the coordinates of an element in Rn, but this does not affect the condition of whether qΓ(x) > 0 for all non-zero x ∈ Rn, that is, the condition of whether qΓ is positive definite. So we can take the standard labelling as in Example 10.4, and it suffices to show that the Gram matrices given in Example 10.4 are positive definite.
(1) We start with the Gram matrix in type An. Then the leading principal k-minor is the determinant of the Gram matrix of type Ak, and there is a recursion formula: Write d(Ak) for the determinant of the matrix of type Ak. Then we have d(A1) = 2 and d(A2) = 3. Expanding the determinant by the last row of the matrix we find

    d(Ak) = 2 d(Ak−1) − d(Ak−2).

It follows by induction on n that d(An) = n + 1 for all n ∈ N, and hence all leading principal minors are positive.
(2) Next, consider the Gram matrix of type Dn for n ≥ 4. Again, the leading principal k-minor for k ≥ 4 is the determinant of the Gram matrix of type Dk. When k = 2 we write D2 for the submatrix obtained by removing rows and columns with labels ≥ 3, and similarly we define D3. We write d(Dk) for the determinant of the matrix Dk for k ≥ 2. We see directly that d(D2) = 4 and d(D3) = 4. For k ≥ 4 the same expansion of the determinant as for type A gives the recursion

    d(Dk) = 2 d(Dk−1) − d(Dk−2),

and by induction d(Dk) = 4 for all k, so all leading principal minors are positive.
(3) A similar expansion for the types E6, E7 and E8 gives d(En) = 9 − n (see also Remark 10.11). Hence for n = 6, 7, 8 all leading principal minors of the Gram matrix for types E6, E7 and E8 are positive and hence the associated quadratic form qΓ is positive definite.
Exercise 10.5. Let Γ be the graph of type Ã1, as in Exercise 10.2. Verify that the quadratic form is

    qΓ(x) = x1² − 2x1x2 + x2² = (x1 − x2)²,

hence it is not positive definite. However qΓ(x) ≥ 0 for all x ∈ R2 (that is, qΓ is positive semidefinite).
Remark 10.11.
(1) Alternatively one could prove Proposition 10.10 by finding a suitable formula for qΓ(x) as a sum of squares. We have used this strategy for Dynkin type An in Example 10.8. The formula there,

    2qΓ(x) = x1² + Σ_{i=1}^{n−1} (xi − xi+1)² + xn²,

implies easily that qΓ(x) > 0 for all non-zero x ∈ Rn, that is, the quadratic form qΓ is positive definite for Dynkin type An. Similarly, one can find suitable formulae for the other Dynkin types. See Exercise 10.6 for type Dn.
(2) Usually, the quadratic form of a graph is not positive definite. If Γ is as in Exercise 10.5 then obviously qΓ(x) = 0 for x = (a, a) and arbitrary a. We can see another example if we enlarge the E8-diagram by more vertices and obtain En-diagrams for n > 8; then the computation in the above proof still gives d(En) = 9 − n, but this means that the quadratic form is not positive definite for n > 8.
(3) The previous remarks and Proposition 10.10 are a special case of a very nice result which characterises Dynkin and Euclidean diagrams by the associated quadratic forms. Namely, let Γ be a connected graph (without loops). Then the quadratic form qΓ is positive definite if and only if Γ is a Dynkin diagram. Moreover, the quadratic form is positive semidefinite, but not positive definite, if and only if Γ is a Euclidean diagram. This is not very difficult to prove, but we do not need it for the proof of Gabriel's theorem.
Exercise 10.6. Let Γ be the Dynkin diagram of type Dn with standard labelling as in Example 10.4. Show that for the quadratic form qΓ we have

    4qΓ(x) = (2x1 − x3)² + (2x2 − x3)² + 2 Σ_{i=3}^{n−1} (xi − xi+1)² + 2xn².
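The identity in Exercise 10.6 can be checked exhaustively on a box of integer vectors; the helper names below are ours.

```python
from itertools import product

def q_D(x):
    """q_Gamma for type D_n, standard labelling: vertices 1 and 2 are
    joined to vertex 3, and 3 - 4 - ... - n is a path (0-based here)."""
    n = len(x)
    return (sum(v * v for v in x) - x[0] * x[2] - x[1] * x[2]
            - sum(x[i] * x[i + 1] for i in range(2, n - 1)))

def rhs(x):
    """Right-hand side of the sum-of-squares identity for 4*q."""
    n = len(x)
    return ((2 * x[0] - x[2]) ** 2 + (2 * x[1] - x[2]) ** 2
            + 2 * sum((x[i] - x[i + 1]) ** 2 for i in range(2, n - 1))
            + 2 * x[n - 1] ** 2)

# check 4*q(x) = rhs(x) for all x in {-2,...,2}^5 (type D_5)
assert all(4 * q_D(x) == rhs(x) for x in product(range(-2, 3), repeat=5))
print("identity verified for D_5")
```

Since every term on the right is a square, this gives the positive semidefiniteness of qΓ directly; positive definiteness follows because the right-hand side vanishes only for x = 0.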
    qΓ(x) = ½ x GΓ xᵗ = ½ (xP) D (xP)ᵗ    (10.1)

and we want to show that there are at most finitely many roots of qΓ, that is, solutions with x ∈ Zn such that qΓ(x) = 1 (see Definition 10.6).
Suppose qΓ(x) = 1 and write xP = (ξ1, . . . , ξn). Then Equation (10.1) becomes

    2 = 2qΓ(x) = Σ_{i=1}^{n} λi ξi²,

where λ1, . . . , λn are the diagonal entries of D. Since qΓ is positive definite, each λi is positive, and hence ξi² ≤ 2/λi for each i. So with R := max{2/λ1, . . . , 2/λn} we get

    Σ_{i=1}^{n} ξi² ≤ nR.

Since the matrix P is orthogonal it preserves lengths, and therefore

    Σ_{i=1}^{n} xi² = |x|² = |xP|² = Σ_{i=1}^{n} ξi² ≤ nR.
Hence there are at most finitely many solutions (x1, . . . , xn) ∈ Zn with qΓ(x) = 1, that is, there are only finitely many roots for qΓ.
In Example 10.8 we have determined the (finite) set of roots for a Dynkin
diagram of type An . Exercise 10.14 asks to find the roots for a Dynkin diagram
of type Dn . For most graphs, there are infinitely many roots.
Exercise 10.7. Consider the graph Γ of type Ã1 as in Exercises 10.2 and 10.5, which is the underlying graph of the Kronecker quiver. Show that the set of roots is

    {(a, a ± 1) | a ∈ Z}.
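For this graph qΓ(x) = (x1 − x2)², so the roots are exactly the integer pairs at distance 1; a quick check (helper names are ours):

```python
def q(x1, x2):
    # q_Gamma for type A~1: x1^2 + x2^2 - 2*x1*x2 = (x1 - x2)^2
    return x1 * x1 + x2 * x2 - 2 * x1 * x2

roots = [(a, b) for a in range(-5, 6) for b in range(-5, 6) if q(a, b) == 1]
assert all(b == a - 1 or b == a + 1 for a, b in roots)
print(len(roots))   # 20 roots (a, a +- 1) inside the box -5 <= a, b <= 5
```

In particular there are infinitely many roots here, in contrast to the Dynkin case.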
For the Dynkin diagrams we refine the set of roots, namely we divide the roots into 'positive' and 'negative' roots.
10.3 The Coxeter Transformation 197
(here the εi are the unit vectors). Moreover, by the definition of qΓ (see Definition 10.6) we have

    qΓ(x) = ½ (x, x)Γ = ½ (x⁺ + x⁻, x⁺ + x⁻)Γ = (x⁺, x⁻)Γ + qΓ(x⁺) + qΓ(x⁻)

(for the last equality we used the definition of (−, −)Γ, see Definition 10.2).
Since Γ is one of the Dynkin diagrams, the quadratic form qΓ is positive definite by Proposition 10.10. In particular, qΓ(x⁺) ≥ 0 and qΓ(x⁻) ≥ 0. But since x = x⁺ + x⁻ is non-zero, at least one of x⁺ and x⁻ is non-zero and then qΓ(x⁺) + qΓ(x⁻) > 0, again by positive definiteness.
In summary, we get

    1 = qΓ(x) = (x⁺, x⁻)Γ + qΓ(x⁺) + qΓ(x⁻) ≥ qΓ(x⁺) + qΓ(x⁻) > 0.

Since the quadratic form has integral values, precisely one of qΓ(x⁺) and qΓ(x⁻) is 1 and the other is 0. Since qΓ is positive definite, x⁺ = 0 or x⁻ = 0, that is, x = x⁻ or x = x⁺, which proves the claim.
Let Γ be one of the Dynkin diagrams, with standard labelling. We have seen in Lemma 10.9 that each reflection sj, where j is a vertex of Γ, preserves the set of roots. Then the set of roots is also preserved by arbitrary products of reflections, that is, by any element in the group WΓ, the subgroup of the automorphism group Aut(Zn) generated by the reflections sj. The Coxeter transformation is an element of WΓ and it has special properties.
Definition 10.14. Assume Γ is a Dynkin diagram with standard labelling as in Example 10.4. Let sj : Zn → Zn, sj(x) = x − (x, εj)Γ εj be the reflections as in Definition 10.2. The Coxeter transformation CΓ is the map

    CΓ = sn ∘ sn−1 ∘ . . . ∘ s2 ∘ s1 : Zn → Zn.

The Coxeter matrix is the matrix of CΓ with respect to the standard basis of Rn.
Example 10.15 (Coxeter Transformation in Dynkin Type A). Let Γ be the Dynkin diagram of type An with standard labelling. We describe the Coxeter transformation and its action on the roots of qΓ. To check some of the details, see Exercise 10.8 below. Let sj be the reflection, as defined in Definition 10.2. Explicitly, we have for x = (x1, x2, . . . , xn) ∈ Rn that

    sj(x) = (−x1 + x2, x2, . . . , xn)                              if j = 1,
    sj(x) = (x1, . . . , xj−1, xj−1 − xj + xj+1, xj+1, . . . , xn)  if 2 ≤ j ≤ n − 1,
    sj(x) = (x1, . . . , xn−1, xn−1 − xn)                           if j = n.
Consider the action of CΓ on the set of roots. Recall from Example 10.8 that for the Dynkin diagram of type An the set of roots is given by

    {±αr,s | 1 ≤ r ≤ s ≤ n},

where αr,s = εr + εr+1 + . . . + εs. Consider the root CΓ(αr,s). One checks the following formula:

    CΓ(αr,s) = αr−1,s−1 if r > 1,   and   CΓ(α1,s) = −αs,n.

Since also CΓ(−x) = −CΓ(x), we see that CΓ permutes the elements of the set of roots (in fact, this already follows from the fact that this holds for each reflection sj). We also see that CΓ can take positive roots to negative roots.
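The formulas in this example are easy to check by composing the reflections directly; the sketch below (helper names are ours) does this for n = 5 and also illustrates the statement of Exercise 10.10(c) that CΓ^{n+1} is the identity.

```python
def s(j, x):
    """Reflection s_j for type A_n (1-based j), as in Example 10.15."""
    n = len(x)
    x = list(x)
    left = x[j - 2] if j >= 2 else 0
    right = x[j] if j <= n - 1 else 0
    x[j - 1] = left - x[j - 1] + right   # only coordinate j changes
    return tuple(x)

def coxeter(x):
    """C = s_n o ... o s_1 (apply s_1 first)."""
    for j in range(1, len(x) + 1):
        x = s(j, x)
    return x

def alpha(r, t, n):
    """The root eps_r + ... + eps_t."""
    return tuple(1 if r <= i <= t else 0 for i in range(1, n + 1))

n = 5
assert coxeter(alpha(3, 4, n)) == alpha(2, 3, n)                     # r > 1
assert coxeter(alpha(1, 2, n)) == tuple(-v for v in alpha(2, n, n))  # r = 1
y = (1, 2, 3, 4, 5)
for _ in range(n + 1):       # C has order n + 1
    y = coxeter(y)
print(y)   # (1, 2, 3, 4, 5)
```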
    2qΓ(y) = (y, y)Γ = Σ_{i=1}^{n} yi (y, εi)Γ = 0,

and since the quadratic form qΓ is positive definite (by Proposition 10.10), it will follow that y = 0.
So suppose that CΓ(y) = y. Since sn² is the identity this implies sn−1 ∘ . . . ∘ s1(y) = sn(y). Since the reflection sj only changes the j-th coordinate, the n-th coordinate of sn−1 ∘ . . . ∘ s1(y) is yn, and the n-th coordinate of sn(y) is yn − (y, εn)Γ. So we have (y, εn)Γ = 0.
    y := Σ_{r=0}^{h−1} CΓ^r(x) ∈ Zn.
EXERCISES
10.10. Let CΓ be the Coxeter transformation for Dynkin type An (with standard labelling), see Example 10.15; this permutes the set of roots.
(a) Find the CΓ-orbit of εn, show that it contains each εi, and that it has size n + 1.
(b) Show that each orbit contains a unique root of the form αt,n for 1 ≤ t ≤ n, compute its orbit, and verify that it has size n + 1.
(c) Deduce that CΓ^{n+1} is the identity map of Rn.
10.11. Let Γ be a Dynkin diagram with standard labelling. The Coxeter number of Γ is the smallest positive integer h such that CΓ^h is the identity map of Rn. Using the previous Exercise 10.10 show that the Coxeter number of the Dynkin diagram of type An is equal to n + 1.
10.12. Assume CΓ is the Coxeter transformation for the Dynkin diagram of type An with standard labelling. Show by using the formula in Example 10.15 that CΓ(y) = y for y ∈ Zn implies that y = 0.
10.13. Consider the Coxeter transformation CΓ for a Dynkin diagram of type An, with standard labelling. Show that its matrix with respect to the standard basis is given by

    Cn := ( −1  1  0  ⋯  0 )
          ( −1  0  1  ⋯  0 )
          (  ⋮        ⋱  ⋮ )
          ( −1  0  ⋯  0  1 )
          ( −1  0  ⋯  0  0 )

that is, all entries of the first column are −1, the entries directly above the diagonal are 1, and all other entries are 0.
    fn(x) = (−1)^n (x^{n+1} − 1)/(x − 1).
(d) Deduce from this that Cn does not have an eigenvalue equal to 1. Hence deduce that CΓ(y) = y implies y = 0.
10.14. (Roots in Dynkin type D) Let Γ be the Dynkin diagram of type Dn, where n = 4 and n = 5, with standard labelling as in Example 10.4. Use the formula for the quadratic form qΓ given in Exercise 10.6 to determine all roots of qΓ. (Hint: In total there are 2n(n − 1) roots.)
10.15. Compute the reflections and the Coxeter transformation for a Dynkin diagram of type D5 with standard labelling.
(a) Verify that

    CΓ(x) = (x3 − x1, x3 − x2, x3 + x4 − x1 − x2, x3 + x5 − x1 − x2, x3 − x1 − x2).
    si(εi) = −εi,
    si(εi+1) = εi+1 + εi,
    si(εi−1) = εi−1 + εi.
Assume Q is a quiver without oriented cycles, then for any field K the path algebra
KQ is finite-dimensional (see Exercise 1.2). We want to know when Q is of finite
representation type; this is answered completely by Gabriel’s theorem. Let Q̄ be the
underlying graph of Q, which is obtained by ignoring the orientation of the arrows.
Gabriel’s theorem states that KQ is of finite representation type if and only if Q̄
is the disjoint union of Dynkin diagrams of type A, D and E. The relevant Dynkin
diagrams are listed in Fig. 10.1. So the representation type of Q does not depend on
the orientation of the arrows. Note that Gabriel’s theorem holds, and is proved here,
for an arbitrary field K.
Theorem 11.1 (Gabriel’s Theorem). Assume Q is a quiver without oriented
cycles, and K is a field. Then Q has finite representation type if and only if the
underlying graph Q̄ is the disjoint union of Dynkin diagrams of types An for n ≥ 1,
or Dn for n ≥ 4, or E6 , E7 , E8 .
Moreover, if a quiver Q has finite representation type, then the indecomposable
representations are parametrized by the set of positive roots (see Definition 10.6),
associated to the underlying graph of Q. Dynkin diagrams and roots play a central
role in Lie theory, and Gabriel’s theorem connects representation theory with Lie
theory.
Gabriel's theorem states implicitly that the representation type of a quiver depends only on the underlying graph but not on the orientation of the arrows. To prove this, we will use 'reflection maps', which relate representations of two quivers with the same underlying graph but where some arrows have different orientation. This construction will show that any two quivers with the same underlying graph Γ have the same representation type, if Γ is an arbitrary finite tree.
Throughout this chapter let K be an arbitrary field.
Definition 11.2. Let Q be a quiver. A vertex j of Q is called a sink if no arrows in
Q start at j . A vertex k of Q is a source if no arrows in Q end at k.
For example, consider the quiver 1 −→ 2 ←− 3 ←− 4. Then vertices 1 and 4
are sources, vertex 2 is a sink and vertex 3 is neither a sink nor a source.
Exercise 11.1. Let Q be a quiver without oriented cycles. Show that Q contains a
sink and a source.
Definition 11.3. Let Q be a quiver and let j be a vertex in Q which is a sink or a
source. We define a new quiver σj Q, this is the quiver obtained from Q by reversing
all arrows adjacent to j , and keeping everything else unchanged. We call σj Q the
reflection of Q at the vertex j . Note that if a vertex j is a sink of Q then j is a
source of σj Q, and if j is a source of Q then it is a sink of σj Q. We also have that
σj σj Q = Q.
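The reflection σj is a purely combinatorial operation, so it can be sketched in a few lines (helper names are ours): represent a quiver as a list of arrows (source, target) and reverse the arrows adjacent to j.

```python
def sigma(Q, j):
    """Reflect the quiver Q (a list of arrows (source, target)) at the
    vertex j, which is assumed to be a sink or a source: reverse every
    arrow adjacent to j and keep all other arrows unchanged."""
    return [(t, s) if s == j or t == j else (s, t) for (s, t) in Q]

# the quiver 1 --> 2 <-- 3 <-- 4: vertex 2 is a sink
Q = [(1, 2), (3, 2), (4, 3)]
print(sigma(Q, 2))                    # [(2, 1), (2, 3), (4, 3)]
assert sigma(sigma(Q, 2), 2) == Q     # sigma_j sigma_j Q = Q
```

Note that if j is a sink, every adjacent arrow ends at j, so reversing them makes j a source, and vice versa, exactly as stated in Definition 11.3.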
Example 11.4. Consider all quivers whose underlying graph is the Dynkin diagram
of type A4 . Up to labelling of the vertices, there are four possible quivers,
Q1 : 1 ←− 2 ←− 3 ←− 4
Q2 : 1 −→ 2 ←− 3 ←− 4
Q3 : 1 ←− 2 −→ 3 ←− 4
Q4 : 1 ←− 2 ←− 3 −→ 4
and

We write down Q and Q′, obtained by removing vertex 5 and the adjacent arrow. Then we have σ4σ1Q = Q′. We extend the sequence to Q̃ and we see that we must take twice a reflection at vertex 5, and get σ5σ4σ5σ1(Q̃) = Q̃′.
Starting with a quiver Q where vertex j is a sink or a source, we have obtained
a new reflected quiver σj Q. We want to compare the representation type of
these quivers, and want to construct from a representation M of Q a ‘reflected
representation’ of the quiver σj Q.
    α1 : 1 −→ j   and   ᾱ1 : 1 ←− j.
11.1 Reflecting Quivers and Representations 207
(1) Let t = 1, so the two quivers are as above. A representation M of Q is given by a linear map

    M(α1) : M(1) −→ M(j).

For a representation of σjQ we need a linear map

    M⁺(ᾱ1) : M⁺(j) −→ M(1),

and this should only use information from M. There is not much choice: we take M⁺(j) := ker(M(α1)), which is a subspace of M(1), and we take M⁺(ᾱ1) to be the inclusion map. This defines a representation Σj⁺(M) of σjQ.
(2) Let t = 2, and take the quivers Q and σjQ as follows:

    1 −→ j ←− 2 (arrows α1, α2)   and   1 ←− j −→ 2 (arrows ᾱ1, ᾱ2).

A representation of Q consists of linear maps

    M(1) −→ M(j) ←− M(2)   (maps M(α1) and M(α2)).
Here we can use the construction of the pull-back, which was introduced in
Chap. 2 (see Exercise 2.16). This takes two linear maps to a fixed vector space
and constructs from this a new space E, explicitly,
    M(1) ←− E −→ M(2)   (maps π1 and π2).
    M⁺(j) = {(m1, m2, m3) ∈ ⊕_{i=1}^{3} M(i) | M(α1)(m1) + M(α2)(m2) + M(α3)(m3) = 0}.
    M(α1) : K → K², x ↦ (x, 0)
    M(α2) : K → K², x ↦ (0, x)
    M(α3) : K → K², x ↦ (x, x).
Then
If γ is an arrow of Q which does not end at the sink j then we set M + (γ ) = M(γ ).
For i = 1, . . . , t we define M + (ᾱi ) : M + (j ) → M + (i) to be the projection onto
M(i), that is, M + (ᾱi )(m1 , . . . , mt ) = mi .
To compare the representation types of the quivers Q and σjQ, we need to keep track of direct sum decompositions. Fortunately, the construction of Σj⁺ is compatible with taking direct sums:
Lemma 11.10. Let Q be a quiver and let j be a sink in Q. Let M be a representation of Q such that M = X ⊕ Y for subrepresentations X and Y. Then we have

    Σj⁺(M) ≅ Σj⁺(X) ⊕ Σj⁺(Y).
We will prove this later, in Sect. 12.1.1, since the proof is slightly technical. With
this lemma, we can focus on indecomposable representations of Q. We consider a
small example.
Example 11.11. We consider the quiver 1 −→ 2 with arrow α. In Example 9.28 we have seen that it has precisely three indecomposable representations (up to isomorphism), which are listed in the left column of the table below. We now reflect the quiver at the sink 2, and we compute the reflected representations Σ2⁺(M), using Example 11.8. The representations Σ2⁺(M) are listed in the right column of the following table.
    M                     Σ2⁺(M)
    K −→ 0                K ←− K (identity map)
    K −→ K (identity map) K ←− 0
    0 −→ K                0 ←− 0
We see that Σ2⁺ permutes the indecomposable representations other than the simple representation S2. Moreover, it takes S2 to the zero representation, and S2 does not appear as Σ2⁺(M) for any M.
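On the level of dimensions the table can be reproduced by a one-line computation: at the sink 2 the new space is ker(M(α)) ⊆ M(1), whose dimension is dim M(1) − rank M(α). A sketch (the helper name is ours):

```python
def reflect_dims(dim1, dim2, rank_alpha):
    """Dimension vector of Sigma_2^+(M) for the quiver 1 --> 2:
    M(1) is unchanged, and the space at vertex 2 becomes ker(M(alpha)),
    of dimension dim M(1) - rank M(alpha)."""
    return (dim1, dim1 - rank_alpha)

print(reflect_dims(1, 0, 0))   # (1, 1): K --> 0        goes to  K <-- K
print(reflect_dims(1, 1, 1))   # (1, 0): K --> K (id)   goes to  K <-- 0
print(reflect_dims(0, 1, 0))   # (0, 0): 0 --> K = S_2  goes to  0 <-- 0
```

In particular the dimension at the sink drops to zero exactly for the simple representation S2, matching the observation above.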
We can generalize the last observation in the example:
Proposition 11.12. Assume Q is a quiver and j is a sink in Q. Let M be a
representation of Q.
210 11 Gabriel’s Theorem
(a) Σj⁺(M) is the zero representation if and only if M(r) = 0 for all vertices r ≠ j; equivalently, M is isomorphic to a direct sum of copies of the simple representation Sj.
(b) Σj⁺(M) has no subrepresentation isomorphic to the simple representation Sj.
Proof. (a) Assume first that M(r) = 0 for all r ≠ j; then it follows directly from Definition 11.9 that Σj⁺(M) is the zero representation. Conversely, if Σj⁺(M) is the zero representation then for r ≠ j we have 0 = M⁺(r) = M(r). This condition means that M is isomorphic to a direct sum of copies of Sj, by Exercise 9.8.
(b) Suppose for a contradiction that Σj⁺(M) has a subrepresentation isomorphic to Sj. Then we have a non-zero element m := (m1, . . . , mt) ∈ M⁺(j) with M⁺(ᾱi)(m) = 0 for i = 1, . . . , t. But by definition, the map M⁺(ᾱi) takes (m1, . . . , mt) to mi. Therefore mi = 0 for i = 1, . . . , t and hence m = 0, a contradiction.
Remark 11.13. Let j be a sink of Q. In Definition 11.7 we take distinct arrows
ending at j , but we are not excluding that some of these may start at the same vertex.
For example, take the Kronecker quiver with two arrows α1 , α2 : 1 −→ 2. Then for
a representation M of this quiver, to define 2+ (M) we must take the kernel of the
map M(1) × M(1) → M(2), (m1 , m2 ) ↦ M(α1 )(m1 ) + M(α2 )(m2 ), so the space
M(1) occurs once for each arrow.
We will not introduce extra notation for multiple arrows, since the only time we
have multiple arrows is for examples using the Kronecker quiver.
Example 11.15.
(1) Let t = 1, and take the quivers Q ′ and σj Q ′ as follows:
1 ←−β1−− j and 1 −−β̄1−→ j.
A representation N of Q ′ is of the form
N(1) ←−N(β1 )−− N(j ),
and we must construct from it a representation of σj Q ′ of the form
N(1) −−N − (β̄1 )−→ N − (j ),
and this should only use information from N . There is not much choice: we take
N − (j ) := N(1)/im(N(β1 )), which is a quotient space of N(1), and we take
N − (β̄1 ) to be the canonical surjection. This defines the representation j− (N )
of σj Q ′.
(2) Let t = 2, and take the quivers Q ′ and σj Q ′ as follows:
1 ←−β1− j −β2−→ 2 and 1 −β̄1−→ j ←−β̄2− 2.
A representation N of Q ′ is of the form
N(1) ←−N(β1 )−− N(j ) −−N(β2 )−→ N(2).
Here we can use the construction of the push-out, which was introduced in
Chap. 2 (see Exercise 2.17). This takes two linear maps starting at the same
vector space and constructs from this a new space F , explicitly,
N(1) −μ1−→ F ←−μ2− N(2).
(3) For t = 3 we take, analogously, N − (j ) := (N(1) × N(2) × N(3))/CN ,
where CN := {(N(β1 )(x), N(β2 )(x), N(β3 )(x)) | x ∈ N(j )}. As the required
linear map N − (β̄1 ) : N(1) → N − (j ) we take the canonical map
x ↦ (x, 0, 0) + CN .
For instance, take N(j ) = K 2 and N(i) = K for i = 1, 2, 3, with
N(β1 )(x1 , x2 ) := x1 , N(β2 )(x1 , x2 ) := x2 , N(β3 )(x1 , x2 ) := x1 + x2 .
Then
CN = {(x1 , x2 , x1 + x2 ) | (x1 , x2 ) ∈ K 2 }.
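As a sanity check on this instance, one can enumerate CN over the two-element field F2 (chosen only so that the spaces become finite sets); the dimension count dim N − (j ) = 3 − dim CN = 1 comes out as expected:

```python
from itertools import product

# C_N = {(x1, x2, x1 + x2)} inside F_2^3, for the maps N(beta_i) above
F2 = (0, 1)
CN = {(x1, x2, (x1 + x2) % 2) for x1, x2 in product(F2, F2)}

# (x1, x2) -> (x1, x2, x1 + x2) is injective, so dim C_N = dim N(j) = 2;
# over F_2 a subspace of dimension d has exactly 2^d elements
assert len(CN) == 2 ** 2

# N^-(j) = K^3 / C_N therefore has dimension 3 - 2 = 1
dim_N_minus_j = 3 - 2
print(dim_N_minus_j)
```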
Next, define N − (γ ) = N(γ ) if γ is an arrow which does not start at j , and for
1 ≤ i ≤ t define the linear map N − (β̄i ) : N(i) → N − (j ) by setting
N − (β̄i )(x) = (0, . . . , 0, x, 0, . . . , 0) + CN , with x in the i-th coordinate.
As for j+ , this construction is compatible with direct sums (Lemma 11.17 (a)):
if N = X ⊕ Y for subrepresentations X and Y, then
j− (N ) ∼= j− (X ) ⊕ j− (Y).
This lemma will be proved in Sect. 12.1.1. Note that part (b) is a direct
application of Exercise 9.13, and part (c) follows from parts (a) and (b). So it remains
to prove part (a) of the lemma, and this is done in Sect. 12.1.1.
With this result, we focus on indecomposable representations, and we consider a
small example.
Example 11.18. We consider the quiver 1 −α→ 2, and we reflect at the source 1.
Recall that the quiver has three indecomposable representations, they are listed
in the left column of the table below. We compute the reflected representations
1− (N ), using Example 11.15. The representations 1− (N ) are listed in the right
column of the following table.
N                      1− (N )
K −→ 0                 0 ←− 0
K −−→ K  (idK )        0 ←− K
0 −→ K                 K ←−− K   (the map is idK )
We see that 1− permutes the indecomposable representations other than the simple
representation S1 . Moreover, it takes S1 to the zero representation, and S1 does not
appear as 1− (N ) for some N .
We can generalize the last observation in this example.
Proposition 11.19. Assume Q ′ is a quiver, and j is a source of Q ′. Let N be a
representation of Q ′.
(a) j− (N ) is the zero representation if and only if N(i) = 0 for all i ≠ j , equiva-
lently, N is isomorphic to a direct sum of copies of the simple representation Sj .
(b) j− (N ) has no direct summand isomorphic to the simple representation Sj .
Proof. (a) First, if N(i) = 0 for all i ≠ j then it follows directly from
Definition 11.16 that j− (N ) is the zero representation. Conversely, assume that
j− (N ) is the zero representation, that is, N − (i) = 0 for each vertex i. In particular,
for i ≠ j we have N(i) = N − (i) = 0. For the last part, see Exercise 9.8.
(b) Assume for a contradiction that j− (N ) = X ⊕ Y, where X is isomorphic to Sj .
Then N(i) = N − (i) = X(i) ⊕ Y (i) = Y (i) for i ≠ j , and N − (j ) = X(j ) ⊕ Y (j )
with N − (j ) ≠ Y (j ) since X(j ) is non-zero. We get a contradiction if we show that
Y (j ) is equal to N − (j ).
By definition Y (j ) ⊆ N − (j ). Conversely, take an element in N − (j ), it is of the
form (v1 , . . . , vt ) + CN with vi ∈ N(i). We can write it as
Similarly, the vertex 1 is a source in Q and hence a sink in σ1 Q. For the composition
1+ 1− we get the following, using the table in Example 11.18,
We observe in the first table that if M is not the simple representation S2 then
2− 2+ (M) is isomorphic to M. Similarly in the second table, if N is not the simple
representation S1 then 1+ 1− (N ) is isomorphic to N . We will see that this is not a
coincidence.
Proposition 11.22. Assume j is a sink of a quiver Q and let α1 , . . . , αt be the
arrows in Q ending at j . Suppose M is a representation of Q such that the linear
map
(M(α1 ), . . . , M(αt )) : M(1) × . . . × M(t) −→ M(j ) , (m1 , . . . , mt ) ↦ M(α1 )(m1 ) + . . . + M(αt )(mt ),
is surjective. Then j− j+ (M) ∼= M.
M(1) ←−π1− E −π2−→ M(2),
where E is the pull-back as in Exercise 2.16 and π1 , π2 are the projection maps
from E onto M(1) and M(2), respectively.
If N = j− j+ (M) then by Example 11.15 this representation has the form
M(1) −μ1−→ F ←−μ2− M(2).
M ∼= j− j+ (M) ∼= j− (U) ⊕ j− (V).
We will now prove that if the underlying graph of Q is not a union of Dynkin
diagrams then Q has infinite representation type. This is one direction of Gabriel’s
theorem. As we have seen in Lemma 9.27, it is enough to consider connected
quivers, and we should deal with the smallest connected quivers whose underlying
graph is not a Dynkin diagram (see Lemma 9.26).
Proposition 11.27. Assume Q is a connected quiver with no oriented cycles. If the
underlying graph of Q is not a Dynkin diagram, then Q has infinite representation
type.
The proof of Proposition 11.27 will take the entire section.
By Lemma 10.1 we know that a connected quiver Q whose underlying graph is not
a Dynkin diagram must have a subquiver Q ′ whose underlying graph is a Euclidean
diagram. By Lemma 9.26, it suffices to show that the subquiver Q ′ has infinite
representation type.
and Mλ (β) = M(β) for any arrow β ≠ α, while Mλ (α) : Mλ (ω) → M(k),
that is, from K to K 2 , is the map x ↦ (x, λx). We want to show that Mλ is
indecomposable, and that if λ ≠ μ then Mλ ≇ Mμ .
Let ϕ : Mλ → Mμ be a homomorphism of representations. Then the restriction
of ϕ to vertices in Q is a homomorphism from M to M. By the assumption, this is
a scalar multiple of the identity. In particular, at the vertex k we have ϕk = c idK 2
for some c ∈ K. Now, the space at vertex ω is the one-dimensional space K, so ϕω
is also a scalar multiple of the identity, say ϕω = d idK with d ∈ K. Consider the
commutative diagram
Recall that for the representation M we have M(i) = K for i ≠ 4 and M(4) = K 2 .
Now we take the extended quiver as in Lemma 11.29. This is of the form
and the underlying graph is a Euclidean diagram of type D̃4 . Using the representa-
tion M of Q from Lemma 9.5 we find by Lemma 11.29 pairwise non-isomorphic
indecomposable representations Mλ of it for each λ ∈ K. In particular, this quiver
has infinite representation type over any infinite field K.
However, we want to prove Gabriel’s theorem for arbitrary fields. We will
therefore construct indecomposable representations of Q of arbitrary dimension,
which then shows that Q always has infinite representation type, independent of the
field. Roughly speaking, to construct these representations we take direct sums of
the special representation, and glue them together at vertex 4 using a ‘Jordan block’
matrix. To set our notation, we denote by Jm the m × m matrix
       ⎛ 1           ⎞
       ⎜ 1  1        ⎟
Jm  =  ⎜    1  ⋱     ⎟
       ⎜       ⋱  ⋱  ⎟
       ⎝          1 1⎠
with all diagonal entries and all entries directly below the diagonal equal to 1, and
all other entries 0; this is a Jordan block for the eigenvalue 1, with entries in the
field K. Then take
V (4) = {(v1 , v2 )t | vi ∈ V } = K 2m .
and for this to be contained in V (ω) we must have AJm v = Jm Av for all v ∈ K m ;
and if this holds for all v then AJm = Jm A.
We can use the same argument as in the proof of Lemma 8.5 and also in
Example 9.30, that is, we apply Exercise 8.1. The matrix A is an endomorphism
of the module Vβ for the algebra K[X]/(f ), where f = (X − 1)m and where β
is given by Jm . This is a cyclic module, generated by the first basis element. By
Exercise 8.1, if A2 = A then A is zero or the identity.
Therefore the only idempotent endomorphisms ϕ : Vm → Vm are zero and the
identity. As mentioned at the beginning of the proof, Lemma 9.11 then gives that
the representation Vm is indecomposable.
Note that if we had taken any non-zero λ ∈ K as the eigenvalue of the Jordan
block above we would still have obtained indecomposable representations. We
chose λ = 1 since this lies in any field.
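The claim used here, that an idempotent matrix commuting with Jm must be zero or the identity, can be confirmed exhaustively for small m over F2 (a brute-force sketch, not the book's argument, which works over any field via Exercise 8.1):

```python
from itertools import product

def mul(A, B, p=2):
    """Multiply two square matrices (tuples of row tuples) over F_p."""
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n)) % p
                       for j in range(n)) for i in range(n))

def only_trivial_idempotents(m, p=2):
    # J_m: 1 on the diagonal and on the subdiagonal, 0 elsewhere
    J = tuple(tuple(1 if i == j or i == j + 1 else 0 for j in range(m))
              for i in range(m))
    I = tuple(tuple(int(i == j) for j in range(m)) for i in range(m))
    Z = tuple(tuple(0 for _ in range(m)) for _ in range(m))
    found = set()
    # enumerate all m x m matrices over F_p and keep the idempotents
    # that commute with J_m
    for entries in product(range(p), repeat=m * m):
        A = tuple(tuple(entries[i * m + j] for j in range(m)) for i in range(m))
        if mul(A, J, p) == mul(J, A, p) and mul(A, A, p) == A:
            found.add(A)
    return found == {Z, I}

print(only_trivial_idempotents(2), only_trivial_idempotents(3))
```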
The above Lemma 11.32 shows that the quiver Q has infinite representation type
over any field K. Indeed, the indecomposable representations Vm are pairwise non-
isomorphic since they have different dimensions. We will now use this to show
that every quiver whose underlying graph is of type D̃n for n ≥ 4 has infinite
representation type. Recall that we may choose the orientation as we like, by
Corollary 11.26. For example, for type D̃4 it is enough to deal with the above
quiver Q.
Proposition 11.33. Every quiver whose underlying graph is a Euclidean diagram
of type D̃n for some n ≥ 4 has infinite representation type.
Proof. Assume first that n = 4. Then Q has infinite representation type by
Lemma 11.32. Indeed, if m1 ≠ m2 then Vm1 and Vm2 cannot be isomorphic, as they
have different dimensions. By Corollary 11.26, any quiver with underlying graph
D̃4 has infinite representation type.
Now assume n > 4. Any quiver of type D̃n can be obtained from the above
quiver Q by a finite sequence of stretches in the sense of Definition 9.20. When
n = 5 this is Example 9.21, and for n ≥ 6 one may replace the branching vertex of
the D̃4 quiver by a quiver whose underlying graph is a line with the correct number
of vertices. By Lemma 9.31, the stretched quiver has infinite representation type,
and then by Corollary 11.26, every quiver of type D̃n has infinite representation
type.
So far we have shown that every quiver without oriented cycles whose underlying
graph is a Euclidean diagram of type Ãn (n ≥ 1) or D̃n (n ≥ 4) has infinite
representation type over any field K; see Propositions 11.28 and 11.33.
The only Euclidean diagrams in Fig. 10.2 we have not yet dealt with are quivers
whose underlying graphs are of types Ẽ6 , Ẽ7 , and Ẽ8 . The proof that these have
infinite representation type over any field K will follow the same strategy as for
type D̃n above. However, the proofs are longer and more technical. Therefore, they
are postponed to Sect. 12.2.
Taking these proofs for granted we have now completed the proof of Proposi-
tion 11.27 which shows that every quiver whose underlying graph is not a union of
Dynkin diagrams has infinite representation type.
The next task is to show that any quiver whose underlying graph is a union of
Dynkin diagrams has finite representation type. Recall that by Lemma 9.27 we
need only to look at connected quivers. At the same time we want to parametrize
the indecomposable representations. The appropriate invariants for this are the
dimension vectors, which we will now define.
Again, we fix a field K and all representations are over K.
Definition 11.34. Let Q be a quiver and assume M is a representation of Q.
Suppose Q has n vertices; we label the vertices by 1, 2, . . . , n. The dimension vector
of the representation M is defined to be
dimM = (dimK M(1), . . . , dimK M(n)) ∈ Zn .
Note that by definition the dimension vector depends on the labelling of the
vertices.
Example 11.35.
(1) Let Q be a quiver without oriented cycles. By Theorem 9.8, the simple
representations of Q correspond to the vertices of Q. The simple representation
Sj labelled by vertex j has dimension vector εj , the unit vector.
(2) In Example 9.28 we have classified the indecomposable representations of
the quiver 1 −→ 2, with underlying graph a Dynkin diagram A2 . We have
seen that the three indecomposable representations have dimension vectors
ε1 = (1, 0), ε2 = (0, 1) and (1, 1).
Remark 11.36. Given two isomorphic representations M ∼= N of a quiver Q, then
for each vertex i the spaces M(i) and N(i) are isomorphic, so they have the same
dimension and hence dimM = dimN . That is, M and N have the same dimension
vector. We will prove soon that for a Dynkin quiver the dimension vector actually
determines the indecomposable representation.
However, this is not true for arbitrary quivers, and we have seen this already:
in Example 9.29 we have constructed indecomposable representations Cλ for the
Kronecker quiver. They all have dimension vector (1, 1), but as we have shown, the
representations Cλ for different values of λ ∈ K are not isomorphic.
We consider quivers with a fixed underlying graph Γ. We will now analyse how
dimension vectors of representations change if we apply the reflections defined in
Sect. 11.1 to representations. We recall the definition of the bilinear form of Γ,
and the definition of a reflection of Zn , from Definition 10.2. The bilinear form
(−, −) : Zn × Zn → Z is defined on unit vectors by
(εi , εj ) = −dij if i ≠ j, and (εi , εj ) = 2 if i = j,
where dij is the number of edges between vertices i and j , and then extended
bilinearly. Soon we will focus on the case when Γ is a Dynkin diagram, and then
dij = 0 or 1, but we will also take Γ obtained from the Kronecker quiver, where
d12 = 2.
For each vertex j we have the reflection
sj : Zn → Zn , sj (a) = a − (a, εj ) εj .
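The bilinear form and the reflections sj are straightforward to implement from the edge counts dij. The following sketch (with our own helper names) encodes exactly the definitions above and evaluates s2 for the graph A2:

```python
def make_form(edges, n):
    """The symmetric bilinear form (-,-) on Z^n of a graph:
    (eps_i, eps_j) = -d_ij for i != j and 2 for i = j, extended bilinearly.
    Vertices are numbered 0..n-1; edges is a list of pairs, one per edge."""
    d = [[0] * n for _ in range(n)]
    for i, j in edges:
        d[i][j] += 1
        d[j][i] += 1
    def form(a, b):
        return sum((2 if i == j else -d[i][j]) * a[i] * b[j]
                   for i in range(n) for j in range(n))
    return form

def make_reflection(form, j, n):
    """s_j(a) = a - (a, eps_j) eps_j."""
    eps = tuple(int(k == j) for k in range(n))
    def s(a):
        c = form(a, eps)
        return tuple(a[k] - c * eps[k] for k in range(n))
    return s

# the Dynkin diagram A_2 (one edge); vertex 1 (0-indexed) is the sink 2
form = make_form([(0, 1)], 2)
s2 = make_reflection(form, 1, 2)
print(s2((1, 0)), s2((1, 1)), s2((0, 1)))   # (1, 1) (1, 0) (0, -1)
```

Note that applying s2 twice returns the original vector, as a reflection should.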
We will now see that the reflection maps precisely describe how the dimension
vectors are changed when a representation is reflected.
Proposition 11.37.
(a) Let Q be a quiver with underlying graph Γ. Assume j is a sink of Q and
α1 , . . . , αt are the arrows in Q ending at j . Let M be a representation
of Q. If im(M(α1 )) + . . . + im(M(αt )) = M(j ) then dim j+ (M) = sj (dimM). In
particular, this holds if M is indecomposable and not isomorphic to the simple
representation Sj .
(b) Let Q ′ be a quiver with underlying graph Γ. Assume j is a source of Q ′ and
β1 , . . . , βt are the arrows in Q ′ starting at j . Let N be a representation of Q ′. If
ker(N(β1 )) ∩ . . . ∩ ker(N(βt )) = 0 then dim j− (N ) = sj (dimN ). In particular, this holds
if N is indecomposable and not isomorphic to the simple representation Sj .
Proof. The second parts of the statements of (a) and (b) are part of Exercise 11.2;
we include a worked solution in the appendix.
(a) We compare the entries in the vectors dim j+ (M) and sj (dimM), respectively.
For vertices i ≠ j we have M + (i) = M(i) (see Definition 11.9). So the i-th
entry in dim j+ (M) is equal to dimK M(i). On the other hand, the i-th entry in
sj (dimM) also equals dimK M(i) because sj only changes the j -th coordinate (see
Definition 10.2).
Now let i = j . Then, by Definition 11.9, M + (j ) is the kernel of the linear map
M(1) × . . . × M(t) → M(j ) , (m1 , . . . , mt ) ↦ M(α1 )(m1 ) + . . . + M(αt )(mt ).
We set ai = dimK M(i) for abbreviation. Since α1 , . . . , αt are the only arrows
adjacent to the sink j , we have drj = 0 if a vertex r is not the starting point of
one of the arrows α1 , . . . , αt . Recall that some of these arrows can start at the same
vertex, so if r is the starting point of one of these arrows then we have drj arrows
from r to j . So we can write
t ⎛ ⎞
dimK M + (j ) = ai − aj = ⎝ drj ar ⎠ − aj .
i=1 r∈Q0 \{j }
This is the j -th coordinate of the vector dim j+ (M). We compare this with the j -th
coordinate of sj (dimM). By Definition 10.2 this is
aj − (dimM, εj ) = aj − ( Σr∈Q0 \{j } (−drj ) ar + 2aj ) = Σr∈Q0 \{j } drj ar − aj ,
which agrees with the formula above.
(b) For the j -th coordinate, consider the linear map N(j ) → N(1) × . . . × N(t),
y ↦ (N(β1 )(y), . . . , N(βt )(y)). By our assumption this linear map is injective, hence
the image has dimension equal to dimK N(j ). We set bi = dimK N(i) for abbreviation
and get
dimK N − (j ) = (b1 + . . . + bt ) − bj = Σr∈Q0 \{j } drj br − bj ,
which is the same formula as in part (a) and by what we have seen there, this is the
j -th coordinate of sj (dim N ).
We illustrate the above result with an example.
Example 11.38. As in Example 11.11 we consider the quiver Q of the form 1 −→ 2
and reflect at the sink 2. The corresponding reflection map is equal to
s2 (a1 , a2 ) = (a1 , a1 − a2 ),
and applying it to the dimension vectors in the table of Example 11.11 gives
s2 (1, 0) = (1, 1), s2 (1, 1) = (1, 0) and s2 (0, 1) = (0, −1).
This confirms what we have proved in Proposition 11.37. We also see that excluding
the simple representation S2 in Proposition 11.37 is necessary: s2 (dimS2 ) = (0, −1)
is not the dimension vector of a representation.
11.4 Finite Representation Type for Dynkin Quivers
Recall from Chap. 10 the set of roots of the quadratic form qΓ ,
Δ = {x ∈ Zn | qΓ (x) = 1},
and we have proved that it is finite (see Proposition 10.12). We have also seen that a
root x is either positive or negative, see Lemma 10.13. Recall that a non-zero x ∈ Zn
is positive if xi ≥ 0 for all i, and it is negative if xi ≤ 0 for all i.
In this section we will prove the following, which will complete the proof of
Gabriel’s Theorem:
Theorem 11.39. Assume Q is a quiver whose underlying graph is a union of
Dynkin diagrams of type An (n ≥ 1) or Dn (n ≥ 4), or E6 , E7 , E8 . Then the
following hold.
(1) If M is an indecomposable representation of Q then dimM is in the set of roots Δ.
(2) Every positive root is equal to dimM for a unique indecomposable representa-
tion M of Q.
In particular, Q has finite representation type.
Before starting with the proof, we consider some small examples.
Example 11.40.
(1) Let Q be the quiver 1 −→ 2 with underlying graph the Dynkin diagram A2 . In
Example 9.28 we have seen that this has three indecomposable representations,
with dimension vectors ε1 , ε2 and (1, 1). We see that these are precisely the
positive roots as described in Example 10.8.
(2) Let Q be the quiver 1 −→ 2 ←− 3. The Exercises 11.8 and 11.9 prove using
elementary linear algebra that the above theorem holds for Q. By applying
reflections 1± or 2± (see Exercise 11.11), one deduces that the theorem holds
for any quiver with underlying graph a Dynkin diagram of type A3 .
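Since qΓ is positive definite for a Dynkin diagram, the roots can be found by a finite search. The following sketch does this for A3, confirming that there are 12 roots, 6 of them positive, and that every root is positive or negative as recalled above (the search box used below is an assumption, justified by positive definiteness):

```python
from itertools import product

def q_A3(x):
    # Tits form of the Dynkin diagram A_3 (the line 1 - 2 - 3)
    x1, x2, x3 = x
    return x1 * x1 + x2 * x2 + x3 * x3 - x1 * x2 - x2 * x3

roots = [x for x in product(range(-3, 4), repeat=3) if q_A3(x) == 1]
positive = [x for x in roots if all(c >= 0 for c in x)]
negative = [x for x in roots if all(c <= 0 for c in x)]

print(len(roots), len(positive))   # 12 roots, 6 of them positive
# every root is either positive or negative, and negatives are -(positives)
assert len(roots) == len(positive) + len(negative)
assert sorted(negative) == sorted(tuple(-c for c in x) for x in positive)
```

The six positive roots found are exactly ε1, ε2, ε3, (1, 1, 0), (0, 1, 1) and (1, 1, 1), matching the count n(n + 1)/2 = 6.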
To prove Theorem 11.39 we first show that it suffices to prove it for connected
quivers. This will follow from the lemma below, suitably adapted to finite unions of
Dynkin diagrams.
Lemma 11.41. Let Q = Q ′ ∪ Q ′′ be a disjoint union of two quivers such that
the underlying graph Γ = Γ ′ ∪ Γ ′′ is a union of two Dynkin diagrams. Then
Theorem 11.39 holds for Q if and only if it holds for Q ′ and for Q ′′.
Proof. We label the vertices of Q as {1, . . . , n′ } ∪ {n′ + 1, . . . , n′ + n′′ }, where
{1, . . . , n′ } are the vertices of Q ′ and {n′ + 1, . . . , n′ + n′′ } are the vertices of Q ′′.
So we can write every dimension vector in the form (x ′ , x ′′ ) with x ′ ∈ Zn′ and
x ′′ ∈ Zn′′ .
(see Definition 10.6). Since there are no edges between vertices of Γ ′ and Γ ′′, we
see that for x = (x ′ , x ′′ ) ∈ Zn′ +n′′ we have
qΓ (x) = qΓ ′ (x ′ ) + qΓ ′′ (x ′′ ). (11.1)
is analogous (we leave this for Exercise 11.4). Suppose j is a sink. We assume that
(1) and (2) hold for Q.
First we show that (1) holds for σj Q. By Theorem 11.25 any indecomposable
representation of σj Q is either isomorphic to the simple representation Sj , or
is of the form j+ (M) where M is an indecomposable representation of Q
not isomorphic to Sj . The dimension vector of Sj is εj , which is in Δ (see
Exercise 10.3). Moreover, in the second case, the dimension vector of j+ (M) is
sj (dimM), by Proposition 11.37. The vector dimM is in Δ by assumption, and by
Lemma 10.9, sj takes roots to roots. Moreover, sj (dimM) is positive since it is the
dimension vector of a representation.
We show now that (2) holds for σj Q. Let x ∈ Δ be a positive root. If x = εj
then x = dimSj and clearly this is the only possibility. So let x ≠ εj ; we must show
that there is a unique indecomposable representation of σj Q with dimension vector
x. Since x ≠ εj , we have y := sj (x) ≠ −εj , and this is also a root. (Note also that
y ≠ εj , since x, being positive, is not equal to −εj = sj (εj ).) It is a positive
root: Since x is positive and not equal to εj , there is some k ≠ j such that xk > 0
(in fact, since qΓ (λεj ) = λ2 qΓ (εj ), the only scalar multiples of εj which are in Δ
are ±εj ; see also Proposition 12.16). The reflection map sj changes only the j -th
coordinate, therefore yk = xk > 0, and y is a positive root.
By assumption, there is a unique indecomposable representation M of Q,
not isomorphic to Sj , such that dimM = y. Let N := j+ (M). This is an
indecomposable representation, and dimN = sj (y) = x. To prove uniqueness,
let N ′ be an indecomposable representation of σj Q with dimN ′ = x. Then
N ′ ≇ Sj , hence by Theorem 11.25 there is a unique indecomposable representation
M ′ of Q with j+ (M ′ ) = N ′. Then we have sj (dimM ′ ) = x and hence
dimM ′ = sj (x) = y. Since (2) holds for Q we have M ′ ∼= M and then N ′ ∼= N .
This proves that (2) also holds for σj Q.
Exercise 11.4. Write down the details for the proof of Lemma 11.43 when the
vertex j is a source of Q, analogous to the case when the vertex is a sink.
Assume from now that Q has standard labelling, as in Example 10.4. Then we
have the following properties:
(i) Vertex 1 is a sink of Q, and for 2 ≤ j ≤ n, vertex j is a sink of the quiver
σj −1 . . . σ1 Q;
(ii) σn σn−1 . . . σ1 Q = Q.
Note that the corresponding sequence of reflections sn ◦ sn−1 ◦ . . . ◦ s1 is the Coxeter
transformation CΓ , where Γ is the underlying graph of Q (see Definition 10.14).
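For A3 with standard labelling one can compute this Coxeter transformation CΓ = s3 ◦ s2 ◦ s1 directly and observe that it has order 4, the Coxeter number of A3. A small sketch (the helper names are ours):

```python
def s(j, a):
    # reflection s_j for the graph A_3 with edges {1,2} and {2,3}; vertices 1,2,3
    neighbours = {1: (2,), 2: (1, 3), 3: (2,)}
    a = list(a)
    c = 2 * a[j - 1] - sum(a[k - 1] for k in neighbours[j])
    a[j - 1] -= c        # s_j(a) = a - (a, eps_j) eps_j
    return tuple(a)

def coxeter(a):
    # C_Gamma = s_3 o s_2 o s_1, the composition s_n o ... o s_1 from the text
    for j in (1, 2, 3):
        a = s(j, a)
    return a

v = (1, 2, 3)
w, order = coxeter(v), 1
while w != v:
    w = coxeter(w)
    order += 1
print(order)   # 4: the Coxeter transformation of A_3 has order 4
```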
Exercise 11.5. Verify (i) and (ii) in detail when Q is of type An as in Example 10.4.
By Theorem 11.25, 1+ takes an indecomposable representation M of Q
which is not isomorphic to S1 to an indecomposable representation of σ1 Q not
isomorphic to S1 . Similarly, j+ takes an indecomposable representation M of
σj −1 . . . σ1 Q which is not isomorphic to Sj to an indecomposable representation
of σj σj −1 . . . σ1 Q not isomorphic to Sj .
τ := sj ◦ sj −1 ◦ . . . ◦ s1 ◦ CΓ^{r−1}
1 ←− 2 ←− 3
According to Theorem 11.39, these positive roots are in bijection with the indecom-
posable representations of any quiver Q with underlying graph Γ. In particular, any
such quiver of Dynkin type An has precisely n(n + 1)/2 indecomposable representations
(up to isomorphism).
We can write down an indecomposable representation with dimension vector
αr,s . This should exist independent of the orientation of Q. In fact, we can see this
directly. Take the representation M where
M(i) = K if r ≤ i ≤ s, and M(i) = 0 otherwise.
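The description of the positive roots of An as interval vectors αr,s can be verified by brute force for small n (a sketch; the bound on the entries used in the search is an assumption, justified because qΓ is positive definite):

```python
from itertools import product

def q_An(x):
    # Tits form of the Dynkin diagram A_n (the line 1 - 2 - ... - n)
    return sum(c * c for c in x) - sum(x[i] * x[i + 1] for i in range(len(x) - 1))

def is_interval(x):
    # the vector alpha_{r,s}: entries equal to 1 exactly on a set {r, ..., s}
    ones = [i for i, c in enumerate(x) if c != 0]
    return all(c in (0, 1) for c in x) and ones == list(range(ones[0], ones[-1] + 1))

for n in range(1, 6):
    pos = [x for x in product(range(0, 3), repeat=n) if any(x) and q_An(x) == 1]
    assert len(pos) == n * (n + 1) // 2     # the count stated above
    assert all(is_interval(x) for x in pos) # each positive root is an interval
print("positive roots of A_n are exactly the interval vectors alpha_{r,s}")
```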
1 ←− 2 3 ←− 4 ←− 5
Recall the Gram matrix of Γ, see Definition 10.2. This has block form
GΓ =
⎛ G1  0 ⎞
⎝ 0  G2 ⎠
where
G1 =
⎛  2 −1 ⎞
⎝ −1  2 ⎠
and
G2 =
⎛  2 −1  0 ⎞
⎜ −1  2 −1 ⎟
⎝  0 −1  2 ⎠ .
For x ∈ Z5 , we get qΓ (x) = 1 if and only if one of q1 (x1 , x2 ) and q2 (x3 , x4 , x5 )
is equal to 1 and the other is zero, since the quadratic forms are positive definite (see
Proposition 10.10). Hence a root of Γ is either a root of the A2 -component or a root
of the A3 -component, extended by zeros.
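For this disconnected example the argument can be checked by enumeration: every root of Γ is supported on a single component. A sketch over a small search box (sufficient here, again by positive definiteness):

```python
from itertools import product

def q_pair(x):
    # the two summands of q_Gamma for the disconnected graph with an A_2
    # component (vertices 1,2) and an A_3 component (vertices 3,4,5)
    q1 = x[0] ** 2 + x[1] ** 2 - x[0] * x[1]
    q2 = x[2] ** 2 + x[3] ** 2 + x[4] ** 2 - x[2] * x[3] - x[3] * x[4]
    return q1, q2

roots = [x for x in product(range(-2, 3), repeat=5) if sum(q_pair(x)) == 1]
for x in roots:
    q1, q2 = q_pair(x)
    # positive definiteness forces one summand to be 1 and the other 0,
    # i.e. every root is supported on a single component
    assert (q1, q2) in {(1, 0), (0, 1)}
    assert x[:2] == (0, 0) or x[2:] == (0, 0, 0)
print(len(roots))   # 6 roots from the A_2 part plus 12 from the A_3 part
```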
Exercise 11.7. Explain briefly why this holds in general. That is, if Q is any
quiver where all connected components are of Dynkin type A, D or E then the
indecomposable representations of Q are in bijection with the positive roots of the
quadratic form qΓ of Q.
EXERCISES
(b) Determine all M such that M(1) = 0, by using the classification of inde-
composable representations for Dynkin type A2 . Similarly determine all
M such that M(3) = 0.
(c) Assume M(1) and M(3) are non-zero, deduce that then M(2) is non-
zero.
The following exercise uses the linear algebra proof of the dimension
formula, dimK (X + Y ) = dimK X + dimK Y − dimK (X ∩ Y ) where X, Y
are subspaces of some finite-dimensional vector space.
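The dimension formula can be illustrated concretely over the two-element field, where subspaces are finite sets that can be enumerated (the helper names below are ours):

```python
from math import log2

def span(gens, dim):
    """All vectors of the F_2-span of the given generators inside F_2^dim.
    Over F_2 each generator contributes with coefficient 0 or 1, so repeatedly
    closing under addition of each generator produces the whole span."""
    vecs = {tuple(0 for _ in range(dim))}
    for g in gens:
        vecs |= {tuple((v[i] + g[i]) % 2 for i in range(dim)) for v in vecs}
    return vecs

def dim_of(space):
    # a subspace of dimension d over F_2 has exactly 2^d elements
    return int(log2(len(space)))

V = 4
X = span([(1, 0, 0, 0), (0, 1, 0, 0)], V)
Y = span([(0, 1, 1, 0), (0, 0, 0, 1)], V)
X_plus_Y = span(list(X | Y), V)
X_cap_Y = X & Y

# dim(X + Y) = dim X + dim Y - dim(X intersect Y)
assert dim_of(X_plus_Y) == dim_of(X) + dim_of(Y) - dim_of(X_cap_Y)
print(dim_of(X), dim_of(Y), dim_of(X_plus_Y), dim_of(X_cap_Y))   # 2 2 4 0
```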
11.9. Let Q be as in Exercise 11.8. Take a representation M of Q which satisfies
the conditions in part (a) of Exercise 11.8. Let D := im(M(α1 ))∩im(M(α2 )),
a subspace of M(2).
(a) Explain why M(2) has a basis B={x1 , . . . , xd ; v1 , . . . , vm ; w1 , . . . , wn }
such that
(i) {x1 , . . . , xd } is a basis of D;
(ii) {x1 , . . . , xd ; v1 , . . . , vm } is a basis of im(M(α1 ));
(iii) {x1 , . . . , xd ; w1 , . . . , wn } is a basis of im(M(α2 )).
(b) Explain why M(1) has a basis of the form {a1 , . . . , ad , a1′ , . . . , am′ },
where M(α1 )(ai ) = xi , and M(α1 )(aj′ ) = vj . Similarly, explain why
M(3) has a basis {b1 , . . . , bd , b1′ , . . . , bn′ } such that M(α2 )(bi ) = xi ,
and M(α2 )(bj′ ) = wj .
(c) Show that each xi gives rise to an indecomposable direct summand of M
of the form K −→ K ←− K. Moreover, show that each vj gives rise to
an indecomposable direct summand of M of the form K −→ K ←− 0.
Similarly each wj gives rise to an indecomposable direct summand of
M of the form 0 −→ K ←− K.
11.10. Let Q be the quiver of Dynkin type A3 with the orientation as in Exer-
cise 11.8. Explain how Exercises 11.8 and 11.9 classify the indecomposable
representations of this quiver of type A3 . Confirm that the dimension vectors
of the indecomposable representations are precisely the positive roots for the
Dynkin diagram A3 .
11.11. Consider quivers whose underlying graph is the Dynkin diagram of type A3 .
We have classified the indecomposable representations for the quiver
α1 α2
1 −→ 2 ←− 3
in the previous exercises. Explain how the general results on reflection maps
j± imply Gabriel’s theorem for the other two possible orientations.
11.12. The following shows that j− does not take subrepresentations to subrepre-
sentations in general. Let Q be the quiver of type A3 with labelling
β1 β2
1 ←− j −→ 2
and define the maps by N(β1 )(e1 ) = f1 and N(β1 )(e2 ) = 0, and moreover
N(β2 )(e1 ) = 0 and N(β2 )(e2 ) = f2 .
(a) Show that CN = N(1) × N(2) and hence that N − (j ) = 0.
(b) Let U be the subrepresentation of N given by
(1, 1, 2, 2, . . . , 2, 1, 1, . . . , 1, 0, . . . , 0),
with a entries equal to 2, then b entries equal to 1 and c entries equal to 0
following the initial pair (1, 1), for any positive integers a, b and any
non-negative integer c such that a + b + c = n − 2.
In the following exercises, we take the Kronecker quiver Q of the form
A := K[X, Y ]/(X2 , Y 2 ).
In this chapter we will give the proofs, and fill in the details, which were postponed
in Chap. 11. First we will prove the results on reflection maps from Sect. 11.1,
namely the compatibility of j+ and j− with direct sums (see Lemmas 11.10
and 11.17) and the fact that j+ and j− compose to the identity map (under certain
assumptions), see Propositions 11.22 and 11.24. This is done in Sect. 12.1.
Secondly, in Sect. 11.2 we have shown that every connected quiver whose
underlying graph is not a Dynkin diagram has infinite representation type. The
crucial step is to prove this for a quiver whose underlying graph is a Euclidean
diagram. We have done this in detail in Sect. 11.2 for types Ãn and D̃n . In Sect. 12.2
below we will provide the technically more involved proofs that quivers of types Ẽ6 ,
Ẽ7 and Ẽ8 have infinite representation type.
Finally, we have two sections containing some background and outlook. We give
a brief account of root systems as they occur in Lie theory, and we show that the
set of roots, as defined in Chap. 10, is in fact a root system in this sense. Then we
provide an informal account of Morita equivalence.
In this section we give details for the results on the reflection maps j± . Recall that
we define these when j is a sink or a source in a quiver Q. We use the notation
as before: If j is a sink of Q then we label the arrows ending at j by α1 , . . . , αt
(see Definition 11.7), and we let αi : i → j for 1 ≤ i ≤ t. As explained in
Remark 11.13, there may be multiple arrows, and if so then we identify the relevant
vertices at which they start (rather than introducing more notation). If j is a source of
Q then we label the arrows starting at vertex j by β1 , . . . , βt (see Definition 11.14),
where βi : j → i, and we make the same convention as above in the case of multiple
arrows, see also Remark 11.20.
(V (α1 ), . . . , V (αt )) : V (1) × . . . × V (t) → V (j ) , (v1 , . . . , vt ) ↦ V (α1 )(v1 ) + . . . + V (αt )(vt ).
M(α1 )(x1 ) + . . . + M(αt )(xt ) = X(α1 )(x1 ) + . . . + X(αt )(xt ) = 0,
that is, w ∈ M + (j ). Moreover, X+ (ᾱi )(w) = xi = M + (ᾱi )(w), that is, X+ (ᾱi ) is
the restriction of M + (ᾱi ) to X+ (j ). We have now proved (1).
(2) We show now that j+ (M) is the direct sum of j+ (X ) and j+ (Y),
that is, verify Definition 9.9. We must show that for each vertex r, we have
M + (r) = X+ (r) ⊕ Y + (r) as vector spaces. This is clear for r ≠ j . For r = j ,
take (m1 , . . . , mt ) ∈ M + (j ) and write mi = xi + yi with xi ∈ X(i) and
yi ∈ Y (i); then, since M = X ⊕ Y, we have
0 = Σ_{i=1}^{t} M(αi )(mi ) = Σ_{i=1}^{t} M(αi )(xi + yi ) = Σ_{i=1}^{t} (M(αi )(xi ) + M(αi )(yi ))
= Σ_{i=1}^{t} M(αi )(xi ) + Σ_{i=1}^{t} M(αi )(yi ) = Σ_{i=1}^{t} X(αi )(xi ) + Σ_{i=1}^{t} Y (αi )(yi ).
Now, because X and Y are subrepresentations, we know that Σ_{i=1}^{t} X(αi )(xi ) lies in
X(j ) and that Σ_{i=1}^{t} Y (αi )(yi ) lies in Y (j ). We assume that M(j ) = X(j ) ⊕ Y (j ),
so the intersection of X(j ) and Y (j ) is zero. It follows that both Σ_{i=1}^{t} X(αi )(xi ) = 0
and Σ_{i=1}^{t} Y (αi )(yi ) = 0. This means that (m1 , . . . , mt ) decomposes as
(x1 , . . . , xt ) ∈ X+ (j ) and (y1 , . . . , yt ) ∈ Y + (j ), and hence
M + (j ) ⊆ X+ (j ) + Y + (j ).
The other inclusion follows from part (1) since X+ (j ) and Y + (j ) are subspaces of
M + (j ).
We now give the proof of the analogous result for the reflection map j− . This
is Lemma 11.17, which has several parts. We explained right after Lemma 11.17
that it only remains to prove part (a) of that lemma, which we also restate here for
convenience.
Lemma (Lemma 11.17 (a)). Suppose Q ′ is a quiver with a source j , and N is a
representation of Q ′. Assume N = X ⊕ Y is a direct sum of subrepresentations; then
j− (N ) is isomorphic to the direct product j− (X ) × j− (Y) of representations.
Proof. We use the notation as in Definition 11.14. We recall from Definition 11.16
how the reflection j− (V) is defined for any representation V of Q ′. For
vertices r ≠ j we set V − (r) = V (r), and V − (j ) is the factor space
V − (j ) = (V (1) × . . . × V (t))/CV , where CV is the image of the linear map
In this proof, we will take the direct product of the representations j− (X ) and
j− (Y), see Exercise 9.13.
We will first construct an isomorphism of vector spaces N − (j ) ∼= X− (j ) × Y − (j ),
and then use this to prove the lemma. Throughout, we write nr for an element in
N(r), and we use that nr = xr + yr for unique elements xr ∈ X(r) and yr ∈ Y (r).
xi + yi = ni = N(βi )(n) = N(βi )(x) + N(βi )(y) = X(βi )(x) + Y (βi )(y).
ni = xi + yi = X(βi )(x) + Y (βi )(y) = N(βi )(x) + N(βi )(y) = N(βi )(x + y)
For each r, the map θr is a vector space isomorphism; for r ≠ j this holds because
(ii) This leaves us to deal with an arrow β̄i : i → j , hence we must show that we
have θj ◦ N − (β̄i ) = (X− (β̄i ), Y − (β̄i )) ◦ θi . Let ni ∈ N − (i) = N(i) = X(i) ⊕ Y (i),
so that ni = xi + yi with xi ∈ X(i) and yi ∈ Y (i). Then
as required.
In this section we will give the proofs for the results on compositions of the
reflection maps j± . More precisely, we have stated in Propositions 11.22 and 11.24
that under certain assumptions on the representations M and N we have that
j− j+ (M) ∼= M and j+ j− (N ) ∼ = N , respectively. For our purposes the most
important case is when M and N are indecomposable (and not isomorphic to
the simple representation Sj ), and then the assumptions are always satisfied (see
Exercise 11.2). The following two propositions are crucial for the proof that j+
and j− give mutually inverse bijections as described in Theorem 11.25.
We start by proving Proposition 11.22, which we restate here.
Proposition (Proposition 11.22). Assume j is a sink of a quiver Q and let
α1 , . . . , αt be the arrows in Q ending at j . Suppose M is a representation of Q
such that the linear map
(M(α1 ), . . . , M(αt )) : M(1) × . . . × M(t) → M(j ), (m1 , . . . , mt ) ↦ M(α1 )(m1 ) + . . . + M(αt )(mt ),
is surjective. Then j− j+ (M) ∼= M.
Proof. We set N = j+ (M) = M+ and N − = j− (N ), and we want to show that
N − is isomorphic to M.
(1) We claim that CN is equal to M + (j ). By definition, CN is the image of the
map (N(ᾱ1 ), . . . , N(ᾱt )) on N(j ) = M + (j ), and an element of M + (j ) has the form
y = (m1 , . . . , mt ) ∈ M(1) × . . . × M(t) with M(α1 )(m1 ) + . . . + M(αt )(mt ) = 0.
The map N(ᾱi ) is the projection onto the i-th coordinate, therefore CN = M + (j ),
as claimed. Next, we define a linear map ϕj : N − (j ) → M(j ) by
ϕj ((m1 , . . . , mt ) + CN ) = Σ_{i=1}^{t} M(αi )(mi ) ∈ M(j ).
Then ϕj is well-defined: if (m1 , . . . , mt ) ∈ CN = M + (j ) then by definition
we have Σ_{i=1}^{t} M(αi )(mi ) = 0. Moreover, ϕj is injective: indeed, if
Σ_{i=1}^{t} M(αi )(mi ) = 0 then (m1 , . . . , mt ) ∈ M + (j ) = CN . Furthermore, ϕj is
surjective, by assumption, and we have shown that ϕj is an isomorphism.
Finally, we check that ϕ is a homomorphism of representations. If γ : r → s
is an arrow not adjacent to j then N − (γ ) = M(γ ), and both ϕr and ϕs are the
identity maps, so the relevant square commutes. This leaves us to consider the maps
corresponding to the arrows αi : i → j . They are the maps in the diagram
Since ϕi is the identity map of M(i) and the maps N − (αi ) are induced by inclusion
maps, we have
(M(αi )◦ϕi )(mi ) = M(αi )(mi ) = ϕj ((0, . . . , 0, mi , 0, . . . , 0)+CN ) = (ϕj ◦N − (αi ))(mi ).
M+(j) = {(n1, . . . , nt) ∈ N(1) × . . . × N(t) | ∑_{i=1}^{t} M(β̄i)(ni) = 0}.

∑_{i=1}^{t} M(β̄i)(ni) = (n1, . . . , nt) + CN.
Each of these linear maps is an isomorphism, and we are left to check that ϕ is a
homomorphism of representations.
Recall that M + (βi ) is the projection onto the i-th component, and ϕi is the identity,
and so we get for any y ∈ N(j ) that
(M+(βi) ∘ ϕj)(y) = M+(βi)(N(β1)(y), . . . , N(βt)(y)) = N(βi)(y) = (ϕi ∘ N(βi))(y), so the relevant square commutes.
We have already proved in Sect. 11.2 that quivers with underlying graphs of type Ãn and of type D̃n have infinite representation type, over any field K. We will now deal with the three missing Euclidean diagrams Ẽ6, Ẽ7 and Ẽ8. Recall from
Corollary 11.26 that the orientation of the arrows does not affect the representation
type. So it suffices in each case to consider a quiver with a fixed chosen orientation
of the arrows. We take the labelling as in Example 10.4. However, we do not take
the orientation as in 10.4. Instead, we take the orientation so that the branch vertex
is the only sink. This will make the notation easier (then we can always take the
maps to be inclusions).
As a strategy, in each case we will first construct a special representation as in
Lemma 11.29 (see Definition 11.30). This will already imply infinite representation
type if the underlying field is infinite. This is not yet sufficient for our purposes
since we prove Gabriel’s theorem for arbitrary fields. Thus we then construct
representations of arbitrary dimensions over an arbitrary field and show that they
are indecomposable. The details for the general case are analogous to those in the
construction of the special representation.
12.2 All Euclidean Quivers Are of Infinite Representation Type
Let Q be the quiver of Dynkin type E6 with the following labelling and orientation:
Lemma 12.1. The quiver Q has a special representation M with dimK M(3) = 2.
Proof. We define M to be a representation for which all maps are inclusion maps, so
we do not need to specify names for the arrows. We take M(4) to be a 3-dimensional
space, and all other M(i) are subspaces.
M(4) = span{e, f, g}
M(1) = span{g}
M(2) = span{f, g}
M(3) = span{e + f, f + g}
M(5) = span{e, f }
M(6) = span{e}.
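These containments can be checked mechanically. The following numpy sketch is an illustration only: the identification of e, f, g with the standard basis of K³, and the arrow set 1 → 2, 2 → 4, 3 → 4, 5 → 4, 6 → 5 with the branch vertex 4 as the only sink, are our assumptions about the chosen labelling. It verifies that each arrow's source space lies inside its target space, and that the dimension vector is (1, 2, 2, 3, 2, 1).

```python
import numpy as np

# Basis vectors e, f, g of M(4) = K^3, taken as the standard basis
# (an assumed coordinate choice; the lemma fixes M(4) only up to basis).
e, f, g = np.eye(3)

# Spanning sets of the subspaces from Lemma 12.1.
M = {
    4: [e, f, g],
    1: [g],
    2: [f, g],
    3: [e + f, f + g],
    5: [e, f],
    6: [e],
}

def rank(vectors):
    return int(np.linalg.matrix_rank(np.array(vectors)))

def contained(small, big):
    # span(small) <= span(big) iff appending small does not increase the rank
    return rank(list(big) + list(small)) == rank(big)

# Assumed arrows, with vertex 4 the only sink: 1->2, 2->4, 3->4, 5->4, 6->5.
arrows = [(1, 2), (2, 4), (3, 4), (5, 4), (6, 5)]
dims = {i: rank(M[i]) for i in M}
```

All five containments hold, so taking every map to be an inclusion does define a representation.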
ϕ4 (e + f ) = ϕ4 (e) + ϕ4 (f ) = c2 e + c3 f.
V (4) = span{e1 , . . . , em , f1 , . . . , fm , g1 , . . . , gm }
V (1) = span{g1 , . . . , gm }
V (2) = span{f1 , . . . , fm , g1 , . . . , gm }
V (3) = span{e1 + f1 , . . . , em + fm , f1 + g1 , . . . , fm + gm }
V (5) = span{e1 , . . . , em , f1 , . . . , fm }
V (6) = span{e1 , . . . , em }
V (ω) = span{e1 + f1 , (e2 + f2 ) + (f1 + g1 ), . . . , (em + fm ) + (fm−1 + gm−1 )}.
ϕ(e1 ) = 0 = ϕ(f1 ).
Since ϕ preserves the spaces spanned by the ej and the fj , this element now belongs
to V (5)∩V (ω) = span{e1 +f1 }. Hence ϕ(ek+1 +fk+1 ) = λ(e1 +f1 ) for some scalar
λ ∈ K. Now using our assumption ϕ² = ϕ and that ϕ(e1 + f1) = ϕ(e1) + ϕ(f1) = 0
by induction hypothesis, it follows that ϕ(ek+1 + fk+1 ) = 0. From this we deduce
by linearity that
Let Q be the quiver of Dynkin type E7 with the following labelling and orientation:
Lemma 12.5. The quiver Q has a special representation M with dimK M(1) = 2.
Proof. We define a representation M of this quiver for which all maps are inclusion
maps, so we do not specify names for the arrows. We take M(4) to be a 4-
dimensional space, and all other spaces are subspaces.
M(4) = span{e, f, g, h}
M(1) = span{f − g, e + h}
M(2) = span{e + f, e + g, e + h}
M(3) = span{g, h}
M(5) = span{e, f, g}
M(6) = span{e, f }
M(7) = span{e}.
Note that indeed for each arrow in Q the space corresponding to the starting
vertex is a subspace of the space corresponding to the end vertex. The only arrow
for which this is not immediate is 1 −→ 2, and here we have M(1) ⊆ M(2) since
f − g = (e + f ) − (e + g).
ϕ4 (f ) = ϕ4 (e + f ) − ϕ4 (e) = c3 (e + f ) − c1 e
ϕ4 (f ) = ϕ4 (f − g) + ϕ4 (g) = c4 (f − g) + c2 g.
Lemma 12.7. For every m ∈ N, the representation Vm of the quiver Q̃ is indecomposable.
Proof. We write briefly V = Vm . To prove that V is indecomposable, we use
Lemma 9.11. So let ϕ : V → V be a homomorphism with ϕ² = ϕ. Then we
have to show that ϕ is zero or the identity.
Since all maps in V are inclusions, the morphism ϕ is given by a single linear
map, which we also denote by ϕ, on V (4) such that all subspaces V (i), where
i ∈ {1, . . . , 7, ω}, are invariant under ϕ (see Definition 9.4).
First we note that V (ω) ∩ V (5) = span{f1 − g1 }. Therefore f1 − g1 must be an
eigenvector of ϕ. Again, we may assume that ϕ(f1 − g1) = 0 (since ϕ² = ϕ the
eigenvalue is 0 or 1, and if necessary we may replace ϕ by idV − ϕ). Then
Thus, there exists a scalar μ ∈ K such that ϕ(fk+1 − gk+1 ) = μ(f1 − g1 ). Since
ϕ² = ϕ and ϕ(f1) = 0 = ϕ(g1) we conclude that ϕ(fk+1 − gk+1) = 0. But then
we have
It follows that ϕ = 0 and then Lemma 9.11 implies that the representation V = Vm
is indecomposable.
Now we consider the quiver Q of type E8 with labelling and orientation as follows:
Lemma 12.8. The quiver Q has a special representation M with dimK M(8) = 2.
Proof. We define a representation M of this quiver for which all maps are inclusion
maps, so we do not specify names of the arrows. We take M(4) to be a 6-dimensional
space, and all other spaces are subspaces.
M(4) = span{e, f, g, h, k, l}
M(1) = span{e, l}
M(2) = span{e, f, g, l}
M(3) = span{h + l, e + g + k, e + f + h}
M(5) = span{e, f, g, h, k}
M(6) = span{f, g, h, k}
M(7) = span{g, h, k}
M(8) = span{h, k}.
ϕ4 (e + g + k) = c2 e + c1 g + (ck + zh).
But this must lie in M(3). Since l and f do not occur in the above expression, it
follows from the definition of M(3) that ϕ4 (e + g + k) must be a scalar multiple of
e + g + k and hence z = 0 and c1 = c2 = c. In particular, ϕ4 (k) = ck.
We may write ϕ4 (l) = ue + vl with u, v ∈ K since it is in M(1), and
ϕ4 (h) = rh + sk with r, s ∈ K since it is in M(8). We have ϕ4 (h + l) ∈ M(3). In
ϕ4 (h) + ϕ4 (l), basis vectors g and f do not occur, and it follows that ϕ4 (h + l) is a
scalar multiple of h + l. Hence
Since the basis vectors l and k do not occur in this expression, it follows that
ϕ4 (e + f + h) is a scalar multiple of e + f + h. So b = 0 and a = c2 = r; in
particular, we have ϕ4 (f ) = af .
In total we have now seen that c = c1 = c2 = a = r = v, so all six basis vectors
of M(4) are mapped to the same scalar multiple of themselves. This proves that ϕ4
is a scalar multiple of the identity, and then so is ϕ.
We extend now the quiver Q by a new vertex ω and a new arrow ω −→ 8. Hence we consider the following quiver Q̃, whose underlying graph is a Euclidean diagram of type Ẽ8:
The result in Lemma 12.8 together with Lemma 11.29 already yields that Q̃ has infinite representation type over every infinite field K. But since we prove Gabriel’s theorem for arbitrary fields, we need to show that Q̃ has infinite representation type over any field K. To this end we now define representations of arbitrarily large dimensions and afterwards show that they are indeed indecomposable. The
construction is inspired by the special representation of Q just considered; in fact,
the restriction to the subquiver Q is a direct sum of copies of the above special
representation.
Definition 12.9. Fix an integer m ∈ N. We will define a representation V = Vm of the above quiver Q̃, where all maps are inclusions, and all spaces V(i) are subspaces of V(4), a 6m-dimensional vector space over K. We give the bases of the spaces.
V (4) = span{ei , fi , gi , hi , ki , li | 1 ≤ i ≤ m}
V (1) = span{ei , li | 1 ≤ i ≤ m}
V (2) = span{ei , fi , gi , li | 1 ≤ i ≤ m}
V (3) = span{hi + li , ei + gi + ki , ei + fi + hi | 1 ≤ i ≤ m}
V (5) = span{ei , fi , gi , hi , ki | 1 ≤ i ≤ m}
V (6) = span{fi , gi , hi , ki | 1 ≤ i ≤ m}
V (7) = span{gi , hi , ki | 1 ≤ i ≤ m}
V (8) = span{hi , ki | 1 ≤ i ≤ m}
V (ω) = span{h1 , h2 + k1 , h3 + k2 , . . . , hm + km−1 }.
Lemma 12.10. For every m ∈ N, the representation Vm of the quiver Q̃ is indecomposable.
Proof. We briefly write V := Vm . To show that V is indecomposable we use the
criterion in Lemma 9.11, that is, we show that the only endomorphisms ϕ : V → V
of representations satisfying ϕ² = ϕ are zero and the identity.
As before, since all maps are given by inclusions, any endomorphism ϕ on V is given by a linear map V(4) → V(4), which we also denote by ϕ, such that ϕ(V(i)) ⊆ V(i) for all vertices i of Q̃.
The space V (4) is the direct sum of six subspaces, each of dimension m, spanned
by the basis vectors with the same letter. We write E for the span of the set
{e1 , . . . , em }, and similarly we define subspaces F, G, H, K and L.
(1) We show that ϕ leaves each of these six subspaces of V (4) invariant:
(i) We have ϕ(ei ) ∈ V (1) ∩ V (5) = E. Similarly, ϕ(gi ) ∈ V (7) ∩ V (2) = G.
(ii) We show now that ϕ(hi ) is in H and that ϕ(li ) is in L: To do so, we compute
ϕ(hi + li ) = ϕ(hi ) + ϕ(li ). First, ϕ(hi ) is in V (8), which is H ⊕ K. Moreover, ϕ(li )
is in V (1), that is, in E ⊕ L. Therefore ϕ(hi + li ) is in H ⊕ K ⊕ E ⊕ L. Secondly,
since hi + li ∈ V (3), its image ϕ(hi + li ) is also in V (3). If expressed in terms of
the basis of V (3), there cannot be any ej + gj + kj occurring since this involves a
non-zero element in G. Similarly no basis vector ei + fi + hi can occur. It follows
that ϕ(hi + li ) is in H ⊕ L. This implies that ϕ(hi ) cannot involve any non-zero
element in K, so it must lie in H . Similarly ϕ(li ) must lie in L.
In the following steps, the strategy is similar to that in (ii).
(iii) We show that ϕ(ki ) is in K. To prove this, we compute
We know ϕ(ei +gi ) ∈ E⊕G and ϕ(ki ) ∈ V (8) = H ⊕K and therefore ϕ(ei +gi +ki )
lies in E ⊕ G ⊕ H ⊕ K. On the other hand, ϕ(ei + gi + ki ) lies in V (3). It cannot
involve a basis element in which some lj occurs, or some fj , and it follows that it
must be in the span of the elements of the form ej + gj + kj . Therefore it follows
that ϕ(ki ) ∈ K.
(iv) We claim that ϕ(fi ) is in F . It lies in V (6) ∩ V (2), so it is in F ⊕ G. We
compute ϕ(ei + fi + hi ) = ϕ(ei + hi ) + ϕ(fi ). By parts (i) and (ii), we know
ϕ(ei + hi ) is in E ⊕ H and therefore ϕ(ei + hi ) + ϕ(fi ) is in E ⊕ H ⊕ F ⊕ G. On
the other hand, it lies in V (3) and since it cannot involve a basis vector with a kj or
lj we deduce that it must be in E ⊕ F ⊕ H . Therefore ϕ(fi ) cannot involve any gj
and hence it belongs to F .
(2) Consider ϕ(h1 ). It belongs to H and also to V (ω), so it is a scalar multiple of h1 ,
that is, h1 is an eigenvector of ϕ. Since ϕ² = ϕ, the eigenvalue is 0 or 1. As before,
we may assume that ϕ(h1 ) = 0, otherwise we replace ϕ by idV (4) − ϕ.
(3) We show that ϕ(l1 ) = 0 = ϕ(e1 ) = ϕ(f1 ) and ϕ(g1 ) = ϕ(k1 ) = 0. First we
have ϕ(h1 + l1 ) = ϕ(h1 ) + ϕ(l1 ) = ϕ(l1 ) ∈ L ∩ V (3) = 0. Next, we have
sv(x) = x − (2(x, v)/(v, v)) v for every x ∈ E.

Then sv(v) = −v, and if (y, v) = 0 then sv(y) = y. Write ⟨x, v⟩ := 2(x, v)/(v, v).
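For the standard inner product on Rⁿ these formulas are easy to experiment with. The following sketch is purely illustrative; it implements sv and the bracket ⟨x, v⟩ and confirms that sv(v) = −v and that sv fixes any y with (y, v) = 0.

```python
import numpy as np

def reflect(x, v):
    # s_v(x) = x - (2(x, v)/(v, v)) v, the reflection fixing the hyperplane orthogonal to v
    return x - (2 * np.dot(x, v) / np.dot(v, v)) * v

def bracket(x, v):
    # <x, v> := 2(x, v)/(v, v)
    return 2 * np.dot(x, v) / np.dot(v, v)

v = np.array([1.0, 1.0, 0.0])
y = np.array([1.0, -1.0, 2.0])  # (y, v) = 0
```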
Definition 12.11. A subset R of E is a root system if it satisfies the following:
(R1) R is finite, it spans E and 0 ∉ R.
(R2) If α ∈ R, the only scalar multiples of α in R are ±α.
(R3) If α ∈ R then the reflection sα permutes the elements of R.
(R4) If α, β ∈ R then ⟨β, α⟩ ∈ Z.
The elements of the root system R are called roots.
Remark 12.12. Condition (R4) is closely related to the possible angles between two
roots. If α, β ∈ R and θ is the angle between α and β then
⟨α, β⟩ · ⟨β, α⟩ = 4 (α, β)² / (|α|² |β|²) = 4 cos²(θ) ≤ 4

and this is an integer by (R4). So there are only finitely many possibilities for the numbers ⟨β, α⟩.
Definition 12.13. Let R be a root system in E. A base of R is a subset B of R such
that
(i) B is a vector space basis of E.
(ii) Every β ∈ R can be written as

β = ∑_{α∈B} kα α

with kα ∈ Z and where all non-zero coefficients kα have the same sign.
One can show that every root system has a base. With this, R = R + ∪ R − , where
R+ is the set of all β where the signs are positive, and R − is the set of all β where
signs are negative. Call R+ the set of ‘positive roots’, and R− the set of ‘negative roots’.
We fix a base B = {α1 , . . . , αn } of the root system R. Note that the cardinality n
of B is the vector space dimension of E. The Cartan matrix of R is the (integral)
matrix with (i, j)-entry ⟨αi, αj⟩.
Root systems are classified by their Cartan matrices. We consider root systems
whose Cartan matrices are symmetric, known as ‘simply laced’ root systems. One
can show that for these root systems, if i ≠ j then ⟨αi, αj⟩ is equal to 0 or to −1.
Definition 12.14. Let R be a simply laced root system and let B = {α1 , . . . , αn } be
a base of R. The Dynkin diagram of R is the graph ΓR, with vertices {1, 2, . . . , n}, and there is an edge between vertices i and j if ⟨αi, αj⟩ ≠ 0.
The classification of root systems via Dynkin diagrams then takes the following
form. Recall that the Dynkin diagrams of type A, D, E were given in Fig. 10.1.
Theorem 12.15. The Dynkin diagrams for simply laced root systems are the unions
of Dynkin diagrams of type An (for n ≥ 1), Dn (for n ≥ 4) and E6 , E7 and E8 .
Now we relate this to quivers. Let Q be a connected quiver without oriented
cycles, with underlying graph Γ. We have defined the symmetric bilinear form (−, −)Γ on Zn × Zn, see Definition 10.2. With the same Gram matrix we get a symmetric bilinear form on Rn × Rn.
Let Γ be a union of Dynkin diagrams. Then the quadratic form qΓ corresponding to the above symmetric bilinear form is positive definite; in fact, Proposition 10.10 shows this for a single Dynkin diagram, and the general case can be deduced from the formula (11.1) in the proof of Lemma 11.41.
Hence, (−, −)Γ is an inner product. So we consider the vector space E = Rn with the inner product (−, −)Γ, where n is the number of vertices of Γ.
Proposition 12.16. Let Q be a quiver whose underlying graph Γ is a union of Dynkin diagrams of type A, D, or E. Let qΓ be the quadratic form associated to Γ, and let ΔΓ = {x ∈ Zn | qΓ(x) = 1} be the set of roots, as in Definition 10.6. Then ΔΓ is a root system in E = Rn, as in Definition 12.11. It has a base (as in Definition 12.13) consisting of the unit vectors of Zn. The associated Cartan matrix is the Gram matrix of (−, −)Γ.
Proof. (R1) We have seen that ΔΓ is finite (see Proposition 10.12 and Remark 11.42). Since the unit vectors are roots (see Exercise 10.3), the set ΔΓ spans E = Rn. From the definition of qΓ (see Definition 10.6) we see that qΓ(0) = 0, that is, the zero vector is not in ΔΓ.
(R2) Let x ∈ ΔΓ and λ ∈ R such that λx ∈ ΔΓ. Then we have qΓ(λx) = λ²qΓ(x). They are both equal to 1 if and only if λ = ±1.
(R3) We have proved in Lemma 10.9 that a reflection si permutes the elements of ΔΓ, but a similar computation shows that for any y ∈ ΔΓ we have qΓ(sy(x)) = qΓ(x): Since y ∈ ΔΓ we have (y, y)Γ = 2qΓ(y) = 2; then

sy(x) = x − (2(x, y)Γ/(y, y)Γ) y = x − (x, y)Γ y for every x ∈ E.
It follows that

(sy(x), sy(x))Γ = (x, x)Γ − 2(x, y)Γ² + (x, y)Γ² (y, y)Γ = (x, x)Γ,

since (y, y)Γ = 2.
Thus, qΓ(sy(x)) = (1/2)(sy(x), sy(x))Γ = (1/2)(x, x)Γ = qΓ(x) and hence sy permutes the elements in ΔΓ.
(R4) We have for x, y ∈ ΔΓ that (y, y)Γ = 2qΓ(y) = 2 and hence ⟨x, y⟩ = (x, y)Γ ∈ Z.
This proves that ΔΓ satisfies the axioms of a root system.
We now show that the unit vectors form a base of the root system ΔΓ (in the sense of Definition 12.13): the unit vectors clearly form a vector space basis of E = Rn; moreover, they satisfy the property (ii) as in Definition 12.13, as we have seen in Lemma 10.13.
As noted, for unit vectors εi, εj we have that ⟨εi, εj⟩ is equal to (εi, εj)Γ; this says that the Cartan matrix of the root system ΔΓ is the same as the Gram matrix in Definition 10.2.
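As a concrete illustration of Proposition 12.16, one can enumerate the roots for a small Dynkin graph. The sketch below assumes the Gram matrix convention of Definition 10.2 (diagonal entries 2, entry −1 for each edge) and computes ΔΓ for Γ = A3; the root system of type An is known to have n(n + 1) elements, so here we expect 12 roots.

```python
import itertools
import numpy as np

# Gram matrix of (-, -)_Gamma for Gamma = A3 (the path 1 - 2 - 3).
G = np.array([[2, -1, 0],
              [-1, 2, -1],
              [0, -1, 2]])

def q(x):
    # quadratic form q_Gamma(x) = (1/2) (x, x)_Gamma
    x = np.array(x)
    return x @ G @ x // 2

# For A3 all roots have coordinates in {-1, 0, 1}, so a small search box suffices.
roots = [x for x in itertools.product(range(-2, 3), repeat=3) if q(x) == 1]
```

The unit vectors are roots (the base), and ΔΓ is stable under x ↦ −x, as (R2) predicts.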
F (M) := eM,
The condition that I contains all paths of length ≥ m for some m makes sure
that KQ/I is finite-dimensional, and one can show that Q is unique by using that
I is contained in the span of paths of length ≥ 2. The ideal I in this theorem is not
unique.
Example 12.21.
(1) Let A = K[X]/(Xʳ) for r ≥ 1. We have seen that K[X] is isomorphic to the path algebra KQ where Q is the quiver with one vertex and a loop α, hence A is isomorphic to KQ/I where I = (αʳ) is the ideal generated by αʳ. Moreover, A is basic, see Proposition 3.23.
(2) Let A = KG where G is the symmetric group S3 , with elements r = (1 2 3)
and s = (1 2).
(i) Assume that K has characteristic 3. Then by Example 8.20 we know that
the simple A-modules are 1-dimensional. They are the trivial module and
the sign module. Therefore A is basic.
(ii) Now assume that K = C, then we have seen in Example 6.7 that the
simple A-modules have dimensions 1, 1, 2 and hence A is not basic. In
Exercise 6.4 we obtained the Artin–Wedderburn decomposition of CS3 ,
with orthogonal idempotents e+ , e− , f and f1 . We can take as isomorphism
classes of simple A-modules Ae+ and Ae− (the trivial and the sign
representation) and in addition Af . So we can write down the basic algebra
eAe of A by taking e = e+ + e− + f .
Example 12.22. As before, let A = KG be the group algebra for the symmetric
group G = S3 and assume that K has characteristic 3. We describe the quiver
Q and a factor algebra KQ/I isomorphic to A. Note that the group algebra K⟨s⟩ of the subgroup ⟨s⟩ of order 2 is a subalgebra, and it is semisimple (by Maschke’s theorem). It has two orthogonal idempotents ε1, ε2 with 1A = ε1 + ε2. Namely, take ε1 = −1 − s and ε2 = −1 + s (use that −2 = 1 in K). These are also idempotents
of A, still orthogonal. Then one checks that A as a left A-module is the direct sum
A = Aε1 ⊕ Aε2. For the following, we leave the checking of the details as an
exercise.
For i = 1, 2 the vector space Aεi has a basis of the form {εi, rεi, r²εi}. We can write Aε1 = ε1Aε1 ⊕ ε2Aε1 as vector spaces, and Aε2 = ε2Aε2 ⊕ ε1Aε2. One checks that for i = 1, 2 the space εiAεi has basis {εi, (1 + r + r²)εi}. Moreover,
ε2 Aε1 is 1-dimensional and spanned by ε2 rε1 , and ε1 Aε2 also is 1-dimensional and
spanned by ε1 rε2 . Furthermore, one checks that
(i) (ε1rε2)(ε2rε1) = (1 + r + r²)ε1 and (ε2rε1)(ε1rε2) = (1 + r + r²)ε2
(ii) (εi rεj )(εj rεi )(εi rεj ) = 0 for {i, j } = {1, 2}.
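All of these identities can be verified by a direct computation in F3S3. The sketch below is an illustration: permutations are encoded as tuples of images of {0, 1, 2}, with r a 3-cycle and s a transposition, and elements of the group algebra as dictionaries of coefficients mod 3. It checks that ε1, ε2 are orthogonal idempotents with ε1 + ε2 = 1, together with the relations (i) and (ii).

```python
# Elements of F_3 S_3 as dicts: permutation tuple -> coefficient mod 3.
def compose(p, q):
    # group product pq: apply q first, then p
    return tuple(p[q[i]] for i in range(3))

def mul(a, b):
    out = {}
    for p, cp in a.items():
        for q, cq in b.items():
            pq = compose(p, q)
            out[pq] = (out.get(pq, 0) + cp * cq) % 3
    return {p: c for p, c in out.items() if c}

def add(a, b):
    out = dict(a)
    for p, c in b.items():
        out[p] = (out.get(p, 0) + c) % 3
    return {p: c for p, c in out.items() if c}

one = (0, 1, 2)               # identity permutation
r, s = (1, 2, 0), (1, 0, 2)   # a 3-cycle and a transposition

e1 = {one: 2, s: 2}           # eps_1 = -1 - s  (note -1 = 2 in F_3)
e2 = {one: 2, s: 1}           # eps_2 = -1 + s
r_elt = {r: 1}
omega = add({one: 1}, add({r: 1}, {compose(r, r): 1}))  # 1 + r + r^2
```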
We take the quiver Q defined as
Note that the path algebra KQ is infinite-dimensional since Q has oriented cycles.
Then we have a surjective algebra homomorphism ψ : KQ → A where we let
and extend this to linear combinations of paths in KQ. Let I be the kernel of ψ. One
shows, using (i) and (ii) above, that I = (αβα, βαβ), the ideal generated by αβα
and βαβ. Hence I is an admissible ideal of KQ, and KQ/I is isomorphic to A.
Appendix A
Induced Modules for Group Algebras
V ⊗K W := span{vi ⊗ wj | 1 ≤ i ≤ n, 1 ≤ j ≤ m}.
Remark A.2.
(1) By definition, for the dimensions we have dimK (V ⊗K W) = nm = (dimK V) · (dimK W).
(2) Every element of V ⊗K W can be written in the form ∑_{i,j} λij (vi ⊗ wj). Addition and scalar multiplication in V ⊗K W are given by
by
(∑_{i,j} λij (vi ⊗ wj)) + (∑_{i,j} μij (vi ⊗ wj)) = ∑_{i,j} (λij + μij)(vi ⊗ wj)

and

λ (∑_{i,j} λij (vi ⊗ wj)) = ∑_{i,j} λλij (vi ⊗ wj),
respectively.
The next result collects some fundamental ‘calculation rules’ for tensor products
of vector spaces, in particular, it shows that the tensor product is bilinear. Moreover
it makes clear that the vector space V ⊗K W does not depend on the choice of the
bases used in the definition.
Proposition A.3. We keep the notation from Definition A.1. For arbitrary elements v ∈ V and w ∈ W, expressed in the given bases as v = ∑_{i} λi vi and w = ∑_{j} μj wj, we set

v ⊗ w := ∑_{i,j} λi μj (vi ⊗ wj) ∈ V ⊗K W. (A.1)
(∑_{i=1}^{r} xi) ⊗ (∑_{j=1}^{s} yj) = ∑_{i=1}^{r} ∑_{j=1}^{s} xi ⊗ yj.
= (λv) ⊗ w.
(b) We write the elements in the given bases as xi = ∑_{k} λik vk ∈ V and yj = ∑_{ℓ} μjℓ wℓ ∈ W. Then, again by (A.1), we have
(∑_{i} xi) ⊗ (∑_{j} yj) = (∑_{k} (∑_{i} λik) vk) ⊗ (∑_{ℓ} (∑_{j} μjℓ) wℓ)
= ∑_{k,ℓ} (∑_{i} λik)(∑_{j} μjℓ)(vk ⊗ wℓ)
= ∑_{i,j} ∑_{k,ℓ} λik μjℓ (vk ⊗ wℓ)
= ∑_{i,j} ((∑_{k} λik vk) ⊗ (∑_{ℓ} μjℓ wℓ)) = ∑_{i,j} xi ⊗ yj.
(c) We express the elements of the given bases in the new bases as vi = ∑_{k} λik bk and wj = ∑_{ℓ} μjℓ cℓ. Applying parts (b) and (a) we obtain

vi ⊗ wj = ∑_{k,ℓ} (λik bk ⊗ μjℓ cℓ) = ∑_{k,ℓ} λik μjℓ (bk ⊗ cℓ).
KG ⊗K W = span{g ⊗ wi | g ∈ G, i = 1, . . . , m}.
x · (g ⊗ w) = xg ⊗ w, (A.2)
H := span{gh ⊗ w − g ⊗ hw | g ∈ G, h ∈ H, w ∈ W }.
Definition A.4. We keep the notations from above. The factor module
KG ⊗H W := (KG ⊗K W )/H
is called the KG-module induced from the KH -module W . For short we write
g ⊗H w := g ⊗ w + H ∈ KG ⊗H W ;
{t ⊗H wi | t ∈ T , i = 1, . . . , m}
where |G : H | is the index of the subgroup H in G (that is, the number of cosets).
Proof. By definition, G is a disjoint union G = ⊔_{t∈T} tH of left cosets. So every
g ∈ G can be written as g = t h̃ for some t ∈ T and h̃ ∈ H . Hence every element
g ⊗H wi is of the form g ⊗H wi = t h̃ ⊗H wi = t ⊗H h̃wi , by (A.3). Since the
elements g ⊗H wi for g ∈ G, i = 1, . . . , m, clearly form a spanning set of the vector
space KG ⊗H W , we conclude that {t ⊗H wi | t ∈ T , i = 1, . . . , m} also forms a
spanning set and that
α = gh ⊗ w − g ⊗ hw = t h̃h ⊗ w − t h̃ ⊗ hw
= t h̃h ⊗ w − t ⊗ h̃hw + t ⊗ h̃hw − t h̃ ⊗ hw.
{ty ⊗ wi − t ⊗ ywi | t ∈ T , y ∈ H, i = 1, . . . , m}
0 = i(w) = 1 ⊗H w = ∑_{j=1}^{m} λj (1 ⊗H wj).
We can choose our system T of coset representatives to contain the identity element,
then Proposition A.5 in particular says that the elements 1⊗H wj , for j = 1, . . . , m,
are part of a basis of KG ⊗H W . So in the equation above we can deduce that all
λj = 0. This implies that w = 0 and hence i is injective.
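For a concrete instance of the dimension formula, take G = S3 and H = ⟨s⟩ a subgroup of order 2 (the choice of s as a fixed transposition is an assumption for illustration). The sketch below lists the left cosets tH and confirms that an induced module KG ⊗H W has dimension |G : H| · dimK W = 3 dimK W.

```python
from itertools import permutations

def compose(p, q):
    # group product pq in S_3: apply q first, then p
    return tuple(p[q[i]] for i in range(3))

G = list(permutations(range(3)))   # the symmetric group S_3, of order 6
s = (1, 0, 2)                      # a transposition
H = [(0, 1, 2), s]                 # the subgroup <s> of order 2

# distinct left cosets tH, collected as frozensets so duplicates coincide
cosets = {frozenset(compose(g, h) for h in H) for g in G}
index = len(cosets)                # |G : H|

def induced_dim(dim_w):
    # dim_K (KG (x)_H W) = |G : H| * dim_K W, by Proposition A.5
    return index * dim_w
```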
It is easy to check that i is a K-linear map. Moreover, using (A.3), the following
holds for every h ∈ H and w ∈ W :
Appendix B
Solutions to Selected Exercises

Usually there are many different ways to solve a problem. The following are
possible approaches, but are not unique.
Chapter 1
which does not lie in the first subalgebra. The fifth subset is also not a subalgebra
since it does not contain the identity matrix, which is the identity element of M3 (K).
The other four subsets are subalgebras: they contain the identity element of M3 (K),
and one checks that they are closed under taking products.
Definition 1.22.
(i) Let λ, μ ∈ K. We must show that ψ(λb + μb ) = λψ(b) + μψ(b ). In fact,
In the second step we have used that φ commutes with taking products.
(iii) We have ψ(1B ) = ψ(φ(1A )) = ψ ◦ φ(1A ) = idA (1A ) = 1A .
Exercise 1.15. (i) If a 2 = 0 then φ(a)2 = φ(a 2 ) = φ(0) = 0, using that φ is an
algebra homomorphism. Conversely, if φ(a 2 ) = φ(a)2 = 0, then a 2 is in the kernel
of φ which is zero since φ is injective.
(ii) Let a be a (left) zero divisor, that is, there exists an a′ ∈ A \ {0} with aa′ = 0. It follows that 0 = φ(0) = φ(aa′) = φ(a)φ(a′). Moreover, φ(a′) ≠ 0 since φ is injective. Hence φ(a) is a zero divisor. Conversely, let φ(a) be a (left) zero divisor, then there exists a b ∈ B \ {0} such that φ(a)b = 0. Since φ is surjective there exists an a′ ∈ A with b = φ(a′); note that a′ ≠ 0 because b ≠ 0. Then 0 = φ(a)b = φ(a)φ(a′) = φ(aa′). This implies aa′ = 0 since φ is injective, and hence a is a (left) zero divisor. The same proof works for right zero divisors.
(iii) Let A be commutative, and let b, b′ ∈ B. We must show that bb′ − b′b = 0. Since φ is surjective, there are a, a′ ∈ A such that b = φ(a) and b′ = φ(a′). Therefore
That is, b has inverse φ(a′) ∈ B. We have proved that if A is a field then so is B. For the
converse, interchange the roles of A and B, and use the inverse φ −1 of φ.
Exercise 1.16. (i) One checks that each Ai is a K-subspace of M3 (K), contains
the identity matrix, and that each Ai is closed under multiplication; hence it
is a K-subalgebra of M3 (K). Moreover, some direct computations show that
A1 , A2 , A3 , A5 are commutative. But A4 is not commutative, for example
⎛1 0 0⎞ ⎛1 1 0⎞   ⎛1 1 0⎞ ⎛1 0 0⎞
⎜0 0 0⎟ ⎜0 0 0⎟ ≠ ⎜0 0 0⎟ ⎜0 0 0⎟ .
⎝0 0 1⎠ ⎝0 0 1⎠   ⎝0 0 1⎠ ⎝0 0 1⎠
φ(1Tn(K)) = φ(∑_{i=1}^{n} Eii) = ∑_{i=1}^{n} φ(Eii) = ∑_{i=1}^{n} ei = 1KQ.
Exercise 1.24. Recall that in the proof of Proposition 1.29 the essential part was to find a basis {1, b̃} for which b̃ squares to either 0 or ±1. For D2(R) we can choose b̃ as a diagonal matrix with entries 1 and −1; clearly, b̃² is the identity matrix and hence D2(R) ≅ R[X]/(X² − 1). For A choose b̃ = (0 1; 0 0) (writing 2 × 2 matrices row by row); then b̃² = 0 and hence A ≅ R[X]/(X²). Finally, for B we choose b̃ = (0 1; −1 0); then b̃² equals the negative of the identity matrix and hence B ≅ R[X]/(X² + 1). In particular, there are no isomorphisms between any of the three algebras.
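The three squares of b̃ used here are immediate to confirm numerically; a sketch for illustration:

```python
import numpy as np

I = np.eye(2)
b_diag = np.diag([1.0, -1.0])    # for D_2(R):  b~^2 = I
b_nil = np.array([[0.0, 1.0],
                  [0.0, 0.0]])   # for A:       b~^2 = 0
b_rot = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])  # for B:       b~^2 = -I
```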
Exercise 1.26. One possibility is to imitate the proof of Proposition 1.29; it still gives the three possibilities A0 = C[X]/(X²), A1 = C[X]/(X² − 1) and A−1 = C[X]/(X² + 1). Furthermore, the argument that A0 ≇ A1 also works for C. However, the C-algebras A1 and A−1 are isomorphic. In fact, the map
Exercise 1.29. (a) We use the axioms of a K-algebra from Definition 1.1.
(i) For every x, y ∈ A and λ, μ ∈ K we have
By parts (b) and (c), the path algebra KQ is isomorphic to the subalgebra of M3 (K)
consisting of all K-linear combinations of the matrices corresponding to le1 , le2 and
lα , that is, to the subalgebra
⎧⎛a 0 0⎞            ⎫
⎨⎜0 b 0⎟ | a, b, c ∈ K⎬ .
⎩⎝0 c a⎠            ⎭
Chapter 2
Note that if j < s then Ej+1,j+1 eℓ = 0 for all ℓ = s + 1, . . . , r. Thus, if j < s then ej+1 + Vj ∈ ker(φ) and φ is not an isomorphism.
Completely analogously, there can’t be an isomorphism φ if j > s (since then
φ −1 is not injective, by the argument just given). So we have shown that if φ is
an isomorphism, then j = s. Moreover, if φ is an isomorphism, the dimensions of
Vi,j and Vr,s must agree, that is, we have i − j = r − s = r − j and hence also
i = r. This shows that the Tn(K)-modules Vi,j with 0 ≤ j < i ≤ n are pairwise non-isomorphic. An easy counting argument shows that there are n(n + 1)/2 such modules.
(c) The annihilator AnnTn (K) (ei ) consists precisely of those upper triangular
matrices with i-th column equal to zero. So the factor module Tn (K)/AnnTn (K)(ei )
is spanned by the cosets E1i + AnnTn (K)(ei ), . . . , Eii + AnnTn (K) (ei ). Then one
checks that the map
Exercise 2.16. (a) E is non-empty since (0, 0) ∈ E. Let (m1 , m2 ), (n1 , n2 ) ∈ E and
λ, μ ∈ K. Then
α1 (λm1 + μn1 ) + α2 (λm2 + μn2 ) = λα1 (m1 ) + μα1 (n1 ) + λα2 (m2 ) + μα2 (n2 )
= λ(α1 (m1 ) + α2 (m2 )) + μ(α1 (n1 ) + α2 (n2 ))
= λ · 0 + μ · 0 = 0,
λ(β1 (w), β2 (w)) + μ(β1 (v), β2 (v)) = (λβ1 (w) + μβ1 (v), λβ2 (w) + μβ2 (v))
= (β1 (λw + μv), β2 (λw + μv)),
(c) By part (a), C is a K-subspace. For every a ∈ A and (β1 (w), β2 (w)) ∈ C we
have
a(β1(w), β2(w)) = (aβ1(w), aβ2(w)) = (β1(aw), β2(aw)) ∈ C,
(b) By construction
Exercise 2.22. We prove part (b) (all other parts are straightforward). Let W ⊆ Ae1 be a 1-dimensional submodule, and take 0 ≠ w ∈ W. We express it in terms of the basis of Ae1, as w = ce1 + dα with c, d ∈ K. Then e1w and e2w are both scalar multiples of w, and e1w = ce1 and e2w = dα. It follows that one of c, d must be zero and the other is non-zero. If c ≠ 0 then αw = cα ≠ 0 and is not a scalar multiple of w. Hence c = 0 and w is a non-zero scalar multiple of α.
Now suppose we have non-zero submodules U and V of Ae1 such that
Ae1 = U ⊕ V . Then U and V must be 1-dimensional. By part (b) it follows that
U = span{α} = V and we do not have a direct sum, a contradiction.
Exercise 2.23. We apply Example 2.22 with R = A. We have seen that a map from
A to a module taking r ∈ A to rm for m a fixed element in a module is an A-module
homomorphism. Hence we have the module homomorphism
Chapter 3
Exercise 3.3. (a) It suffices to prove one direction; the other one then follows by
using the inverse of the isomorphism. So suppose that V is a simple A-module, and
let U ⊆ W be an A-submodule. By Proposition 2.26 the preimage φ −1 (U ) is an
A-submodule of V . We assume V is simple, and we conclude that φ −1 (U ) = 0 or
Let z = ∑_{i,j=1}^{n} zij Eij ∈ Z(A). Then for every k, ℓ ∈ {1, . . . , n} we have on the one hand

Ekℓ z = ∑_{i,j=1}^{n} zij Ekℓ Eij = ∑_{j=1}^{n} zℓj Ekj

and on the other hand

z Ekℓ = ∑_{i,j=1}^{n} zij Eij Ekℓ = ∑_{i=1}^{n} zik Eiℓ.
Note that the first matrix has the ℓ-th row of z in row k, and the second one has the k-th column of z in column ℓ. However, the two matrices must be equal since z ∈ Z(A). So we get zkℓ = 0 for all ℓ ≠ k and zℓℓ = zkk for all k, ℓ. This just means that z is a multiple of the identity matrix by a ‘scalar’ from D. However, D need not be commutative so only the Z(D)-multiples of the identity matrix form the centre.
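The row and column behaviour of multiplication by matrix units, which drives this argument, can be checked numerically. The sketch below is illustrative only (over R rather than a general division algebra D); it also confirms that a scalar matrix commutes with all Ekℓ while a matrix with non-zero off-diagonal entries does not.

```python
import numpy as np

n = 3

def E(k, l):
    # the matrix unit E_{kl}: 1 in position (k, l), zero elsewhere
    m = np.zeros((n, n))
    m[k, l] = 1.0
    return m

rng = np.random.default_rng(0)
z = rng.integers(1, 10, size=(n, n)).astype(float)  # all entries non-zero

def commutes_with_all_units(m):
    return all(np.allclose(E(k, l) @ m, m @ E(k, l))
               for k in range(n) for l in range(n))
```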
Exercise 3.18. We apply Proposition 3.17 to the chain
M0 ⊂ M1 ⊂ . . . ⊂ Mr−1 ⊂ Mr = M. Since all inclusions are strict we obtain for
the lengths that
Chapter 4
Exercise 4.3. By the division algorithm of polynomials, since g and h are coprime,
there are polynomials q1 , q2 such that 1K[X] = gq1 + hq2 and therefore we have
and it follows that K[X]/(f ) is the sum of the two submodules (g)/(f ) and
(h)/(f ). We show that the intersection is zero: Suppose r is a polynomial such
that r + (f ) is in the intersection of (g)/(f ) and (h)/(f ). Then
r + (f ) = gp1 + (f ) = hp2 + (f )
ũ + (1 2)ũ = β(v1 + v2 + v3 ) ∈ U ∩ Ũ .
Chapter 5
Exercise 5.1. According to the Artin–Wedderburn theorem there are precisely four
such algebras: C9 , C5 × M2 (C), M3 (C) and C × M2 (C) × M2 (C).
Exercise 5.2. Here, again by the Artin–Wedderburn theorem there are five possible
algebras, up to isomorphism, namely: R⁴, M2(R), R × R × C, C × C and H.
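Over C every finite-dimensional division algebra is C itself, so counting the possible Artin–Wedderburn decompositions of a given dimension d amounts to counting multisets of positive integers ni with ∑ ni² = d. A short sketch (illustrative; the real case of Exercise 5.2 is more subtle because R, C and H can all occur):

```python
def matrix_size_multisets(d, max_n=None):
    """Multisets (as non-increasing tuples) of positive n_i with sum of n_i^2 = d."""
    if max_n is None:
        max_n = int(d ** 0.5)
    if d == 0:
        return [()]
    out = []
    for n in range(min(max_n, int(d ** 0.5)), 0, -1):
        for rest in matrix_size_multisets(d - n * n, n):
            out.append((n,) + rest)
    return out
```

For d = 9 this recovers the four algebras listed in Exercise 5.1.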
Exercise 5.4. (a) By the Artin–Wedderburn theorem, A ≅ Mn1(D1) × . . . × Mnr(Dr) with positive integers ni and division algebras Di. For the centres we get Z(A) ≅ ∏_{i=1}^{r} Z(Mni(Di)). We have seen in Exercise 3.16 that the centre of a matrix algebra Mni(Di) consists precisely of the Z(Di)-multiples of the identity matrix, that is Z(Mni(Di)) ≅ Z(Di) as K-algebras. Note that Z(Di) is by definition
x := (x1, . . . , xr) ∈ K1 × . . . × Kr

can only be nilpotent (that is, satisfy xℓ = 0 for some ℓ ∈ N) if all xi = 0, that is, if x = 0. Clearly, this property is invariant under isomorphisms, so holds for Z(A) as well.
Exercise 5.11. (a) By definition of εi we have that (X − λi + I)εi is a scalar multiple of ∏_{i=1}^{r} (X − λi) + I, which is zero in A.
(b) The product εi εj for i ≠ j has a factor ∏_{t=1}^{r} (X − λt) + I and hence is zero in A. Moreover, using part (a), we have

εi² = (1/ci) ∏_{j≠i} (X − λj + I) εi = (1/ci) ∏_{j≠i} (λi − λj) εi = εi.
(c) It follows from (b) that the elements ε1 , . . . , εr are linearly independent over
K (if λ1 ε1 + . . . + λr εr = 0 then multiplication by εi yields λi εi = 0 and hence
λi = 0). Observe that dimK A = r and hence ε1 , . . . , εr are a K-basis for A. So we
can write 1A = b1 ε1 + . . . + br εr for bi ∈ K. Now use εi = εi · 1A and deduce that
bi = 1 for all i.
Chapter 6
Exercise 6.8. (i) Since −1 commutes with every element in G, the cyclic group ⟨−1⟩ generated by −1 is a normal subgroup in G. The factor group G/⟨−1⟩ is of order 4, hence abelian. This implies that G′ ⊆ ⟨−1⟩. On the other hand, as G is not abelian, G′ ≠ 1, and we get G′ = ⟨−1⟩.
(ii) The number of one-dimensional simple CG-modules is equal to |G/G′| = 4.
Then the only possible solution of

8 = |G| = ∑_{i=1}^{k} ni² = 1 + 1 + 1 + 1 + ∑_{i=5}^{k} ni²

is k = 5 and n5 = 2. Hence

CG ≅ C × C × C × C × M2(C).
This is actually the same as for the group algebra of the dihedral group D4 of order
8. Thus, the group algebras CG and CD4 are isomorphic algebras (but, of course,
the groups G and D4 are not isomorphic).
Exercise 6.9. (i) No. Every group G has the 1-dimensional trivial CG-module;
hence there is always at least one factor C in the Artin–Wedderburn decomposition
of CG.
(ii) No. Such a group would have order |G| = 12 +22 = 5. But every group of prime
order is cyclic (by Lagrange’s theorem from elementary group theory), and hence
abelian. But then every simple CG-module is 1-dimensional (see Theorem 6.4), so
a factor M2 (C) cannot occur.
(iii) Yes. G = S3 is such a group, see Example 6.7.
(iv) No. Such a group would have order |G| = 1² + 1² + 3² = 11. On the
other hand, the number of 1-dimensional CG-modules divides the group order (see
Corollary 6.8). But this would give that 2 divides 11, a contradiction.
Chapter 7
(ii) Now let x ∈ γ(Ui) ∩ ∑_{j≠i} γ(Uj); then x = γ(ui) = ∑_{j≠i} γ(uj) for elements
ui ∈ Ui and uj ∈ Uj. We have

0 = −γ(ui) + ∑_{j≠i} γ(uj) = γ(−ui + ∑_{j≠i} uj).

Since γ is injective, it follows that −ui + ∑_{j≠i} uj = 0, and then ui is in the
intersection of Ui and ∑_{j≠i} Uj. This is zero, by the definition of a direct sum.
Then also x = γ(ui) = 0. This shows that γ(M) = γ(U1) ⊕ . . . ⊕ γ(Ur).
Exercise 7.4. (a) Analogous to the argument in the proof of Exercise 2.14 (b) one
shows that for any fixed t with j + 1 ≤ t ≤ i we have φ(et + Vj) = λt(et + Vj),
where λt ∈ K. If we use the action of Ej+1,t similarly, we deduce that λt = λj+1,
and hence φ is a scalar multiple of the identity. This gives End_{Tn(K)}(Vi,j) ≅ K, the
1-dimensional K-algebra.
(b) By part (a), the endomorphism algebra of Vi,j is a local algebra. Then Fitting’s
Lemma (see Corollary 7.16) shows that each Vi,j is an indecomposable Tn (K)-
module.
Exercise 7.6. (a) The left ideals of A = K[X]/(f ) are given by (h)/(f ), where h
divides f . In particular, the maximal left ideals are given by the irreducible divisors
h of f . By definition, A is a local algebra if and only if A has a unique maximal left
ideal. By the previous argument this holds if and only if f has only one irreducible
divisor (up to scalars), that is, if f = gᵐ is a power of an irreducible polynomial g.
(b) Applying part (a) gives the following answers: (i) No; (ii) Yes (since
Xᵖ − 1 = (X − 1)ᵖ in characteristic p); (iii) Yes (since
X³ − 6X² + 12X − 8 = (X − 2)³).
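Both factorisations are easy to confirm by expanding the products; a quick Python check (coefficient lists, lowest degree first):

```python
def polymul(a, b):
    # multiply polynomials given as integer coefficient lists (lowest degree first)
    r = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] += x * y
    return r

# (iii): (X - 2)^3 = X^3 - 6X^2 + 12X - 8
cube = [1]
for _ in range(3):
    cube = polymul(cube, [-2, 1])
assert cube == [-8, 12, -6, 1]

# (ii): (X - 1)^5 = X^5 - 1 over a field of characteristic p = 5
p = 5
pw = [1]
for _ in range(p):
    pw = polymul(pw, [-1, 1])
print([c % p for c in pw])  # [4, 0, 0, 0, 0, 1], i.e. X^5 - 1 mod 5
```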
Exercise 7.9. As a local algebra, A has a unique maximal left ideal (see Exercise 7.1)
which then is the Jacobson radical J (A) (see Definition 4.21). Now let S be a simple
A-module. So S = As for any non-zero s ∈ S (see Lemma 3.3), and by Lemma 3.18
we know that S ≅ A/AnnA(s). Then AnnA(s) is a maximal left ideal of A (by
the submodule correspondence), thus AnnA(s) = J(A) as observed above, since
A is local. Hence any simple A-module is isomorphic to A/J (A), which proves
uniqueness.
Exercise 7.12. (a) The Artin–Wedderburn theorem gives that up to isomorphism the
semisimple algebra A has the form

A ≅ Mn1(D1) × . . . × Mnr(Dr).

For each 1 ≤ i ≤ r and 1 ≤ j ≤ ni set

Ii,j := ∏_{ℓ=1}^{i−1} Mnℓ(Dℓ) × Uj × ∏_{ℓ=i+1}^{r} Mnℓ(Dℓ),
B Solutions to Selected Exercises 287
where Uj ⊆ Mni (Di ) is the left ideal of all matrices with zero entries in the j -th
column. Clearly, each Ii,j is a left ideal of A, that is, an A-submodule, with factor
module A/Ii,j ≅ Di^{ni}. The latter is a simple A-module (see Lemma 5.8, and its
proof), and thus each Ii,j is a maximal left ideal of A. However, if A is also a local
algebra, then it must have a unique maximal left ideal. So in the Artin–Wedderburn
decomposition above we must have r = 1 and n1 = 1, that is, A = D1 is a division
algebra over K.
Conversely, any division algebra over K is a local algebra, see Example 7.15.
(b) For the group G with one element we have KG ≅ K, which is local and
semisimple. So let G have at least two elements and let 1 ≠ g ∈ G. If m is the
order of g then gᵐ = 1 in G, which implies that in KG we have

(g − 1)(g^{m−1} + . . . + g² + g + 1) = 0,
Chapter 8
Exercise 8.1. (a) (i) Recall that the action of the coset of X on Vα is given by
the linear map α, and T is a different name for this map. Since the coset of X
commutes with every element of A, applying α commutes with the action of an
arbitrary element in A. Hence T is an A-module homomorphism. Since T = α,
it has minimal polynomial gᵗ, that is, gᵗ(T) = 0, and if h is a polynomial with
h(T) = 0 then gᵗ divides h.
(ii) By assumption, Vα is cyclic, so we can fix a generator w of Vα as an A-module.
The cosets of the monomials Xⁱ span A, and they take w to αⁱ(w); therefore Vα is
spanned by elements of the form αⁱ(w). Since Vα is finite-dimensional, there exists
an m ∈ N such that Vα has a K-basis of the form {αⁱ(w) | 0 ≤ i ≤ m}.
Let φ : Vα → Vα be an A-module homomorphism. Then there are scalars ai ∈ K
such that

φ(w) = ∑_{i=0}^{m} ai αⁱ(w) = ∑_{i=0}^{m} ai Tⁱ(w) = (∑_{i=0}^{m} ai Tⁱ)(w).

Set h := ∑_{i=0}^{m} ai Xⁱ, so that φ(w) = h(T)(w). The element w gener-
ates Vα and it follows that φ = h(T). (To make this explicit: an arbitrary element in
Vα is of the form zw for z ∈ A. Then φ(zw) = zφ(w) = z(h(T)(w)) = h(T)(zw),
and therefore the two maps φ and h(T) are equal.)
(iii) By (ii) we have φ = h(T), where h is a polynomial. Assume φ² = φ; then
h(T)(h(T) − idV) is the zero map on Vα. By part (i), the map T has minimal
polynomial gᵗ, and therefore gᵗ divides the polynomial h(h − 1). We have that
g is irreducible and clearly h and h − 1 are coprime; since K[X] is a unique
factorisation domain, it follows that either gᵗ divides h or gᵗ divides h − 1. In the
first case, h(T) is the zero map on Vα and φ = 0; in the second case h(T) − idV
is the zero map on Vα and φ = idV.
We have shown that the zero map and the identity map are the only idempotents
in the endomorphism algebra of Vα . Then Lemma 7.3 implies that Vα is indecom-
posable as an A-module.
(b) By assumption, the n × n Jordan block Jn(λ), with entries λ on the main
diagonal, 1 on the subdiagonal, and 0 elsewhere,
is the matrix of α with respect to some K-basis {w1 , . . . , wn } of V . This means that
α(wi ) = λwi + wi+1 for 1 ≤ i ≤ n − 1 and α(wn ) = λwn . Since α describes the
action of the coset of X, this implies that Aw1 contains w1 , . . . , wn and hence Vα is
a cyclic A-module, generated by w1. The minimal polynomial of Jn(λ), and hence
of α, is gⁿ, where g = X − λ. This is irreducible, so we can apply part (a), and Vα is
indecomposable by (iii).
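The claim that the minimal polynomial of Jn(λ) is (X − λ)ⁿ can be checked numerically for a sample case, say n = 3 and λ = 2 (values chosen here purely for illustration):

```python
n, lam = 3, 2  # hypothetical sample size and eigenvalue

# Jordan block as in the solution: lam on the diagonal, 1 on the subdiagonal
J = [[lam if i == j else (1 if i == j + 1 else 0) for j in range(n)]
     for i in range(n)]
# N = J - lam * I is nilpotent
N = [[J[i][j] - (lam if i == j else 0) for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

N2 = matmul(N, N)
N3 = matmul(N2, N)
assert any(any(row) for row in N2)             # (J - lam*I)^2 != 0
assert all(c == 0 for row in N3 for c in row)  # (J - lam*I)^3 == 0
```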
Exercise 8.6. (a) Recall (from basic algebra) that the conjugacy class of an element g
in some group G is the set {xgx −1 | x ∈ G}, and that the size of the conjugacy class
divides the order of G. If |G| = pⁿ then each conjugacy class has size some power
of p. Since G is the disjoint union of conjugacy classes, the number of conjugacy
classes of size 1 must be a multiple of p. It is non-zero since the identity element
is in such a class. Hence there must be at least one non-identity element g with
conjugacy class of size 1. Then g is in the centre Z(G), hence part (a) holds.
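For a concrete instance of part (a), the following Python sketch computes the conjugacy classes of the quaternion group Q8, realised inside the quaternions as 4-tuples (this choice of example group is ours, for illustration):

```python
def qmul(x, y):
    # Hamilton product of quaternions written as (a, b, c, d) = a + bi + cj + dk
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def inv(x):
    a, b, c, d = x
    return (a, -b, -c, -d)  # all elements have norm 1, so conjugate = inverse

Q8 = [(s, 0, 0, 0) for s in (1, -1)] + \
     [(0, s, 0, 0) for s in (1, -1)] + \
     [(0, 0, s, 0) for s in (1, -1)] + \
     [(0, 0, 0, s) for s in (1, -1)]

classes = {frozenset(qmul(qmul(x, g), inv(x)) for x in Q8) for g in Q8}
sizes = sorted(len(c) for c in classes)
print(sizes)  # [1, 1, 2, 2, 2]: two central classes, so |Z(Q8)| = 2
```

The two size-1 classes confirm part (a): the number of central elements (here 2) is a positive multiple of p = 2.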
(b) Assume Z(G) is a proper subgroup of G; it is normal, so we have the factor
group G/Z(G). Assume (for a contradiction) that it is cyclic, say generated by the coset
xZ(G) for some x ∉ Z(G). Then every element of G belongs to a coset xʳZ(G) for some
r. Take elements y1, y2 ∈ G; then y1 = xʳz1 and y2 = xˢz2 for some r, s ∈ N0 and
z1, z2 ∈ Z(G). We see directly that y1 and y2 commute. So G is abelian, and then
Z(G) = G, a contradiction.
(c) Since G is not cyclic, we must have n ≥ 2. Assume first that n = 2: then G
must be abelian. Indeed, otherwise the centre of G would be a subgroup of order
p, by (a) and Lagrange's theorem; then the factor group G/Z(G) would have order p
and be cyclic, and this contradicts (b). So if G of order p² is not cyclic then it is
abelian and can only be Cp × Cp by the structure of finite abelian groups. Thus the
claim holds, with N = {1} the trivial normal subgroup.
As an inductive hypothesis, we assume the claim is true for any group of order
pᵐ where m ≤ n. Now take a group G of order pⁿ⁺¹ and assume G is not cyclic. If
G is abelian then it is a direct product of cyclic groups with at least two factors, and
then we can see directly that the claim holds. So assume G is not abelian. Then by
(a) we have that G/Z(G) has order pᵏ for some k ≤ n, and the factor group is not cyclic, by
(b). By the inductive hypothesis it has a factor group N̄ = (G/Z(G))/(N/Z(G))
isomorphic to Cp × Cp . By the isomorphism theorem for groups, this group is
isomorphic to G/N, where N is normal in G, that is, we have proved (c).
Exercise 8.9. Let {v1 , . . . , vr } be a basis of V and {w1 , . . . , ws } be a basis of W as
K-vector spaces. Then the elements (v1 , 0), . . . , (vr , 0), (0, w1 ), . . . , (0, ws ) form
a K-vector space basis of V ⊕ W . (Recall that we agreed in Sect. 8.2 to use the
symbol ⊕ also for external direct sums, that is, elements of V ⊕ W are written as
tuples, see Definition 2.17.)
Let T be a system of representatives of the left cosets of H in G. Then we get K-
vector space bases for the modules involved from Proposition A.5 in the appendix.
Indeed, a basis for KG ⊗H (V ⊕ W) is given by the elements t ⊗H (vi, 0) and
t ⊗H (0, wj) for t ∈ T, 1 ≤ i ≤ r and 1 ≤ j ≤ s; similarly, the elements
(t ⊗H vi, 0) and (0, t ⊗H wj) form a basis of (KG ⊗H V) ⊕ (KG ⊗H W).
From linear algebra it is well known that mapping a basis onto a basis and extending
linearly yields a bijective K-linear map. If we choose to map

t ⊗H (vi, 0) → (t ⊗H vi, 0) for 1 ≤ i ≤ r

and

t ⊗H (0, wj) → (0, t ⊗H wj) for 1 ≤ j ≤ s,

then extending linearly yields the desired isomorphism
KG ⊗H (V ⊕ W) ≅ (KG ⊗H V) ⊕ (KG ⊗H W).
Chapter 9
For each arrow, the corresponding map in the representation is given by left
multiplication with this arrow; for instance, Pi(β) is given by γ ↦ βγ.
In general, for each vertex j of Q, define Pi (j ) to be the span of all paths from
i to j (which may be zero). Assume β : j → t is an arrow in Q. Then define
Pi (β) : Pi (j ) → Pi (t) by mapping each element p in Pi (j ) to βp, which lies in
Pi (t).
(ii) Assume Q has no oriented cycles. We give three alternative arguments for
proving indecomposability.
Let Pi = U ⊕ V for subrepresentations U and V. The space Pi (i) = span{ei }
is only 1-dimensional, by assumption, and must be equal to U (i) ⊕ V (i), and it
follows that one of U (i) and V (i) must be zero, and the other is spanned by ei . Say
U (i) = span{ei } and V (i) = 0. Take some vertex t. Then Pi (t) is the span of all
paths from i to t. Suppose Pi (t) is non-zero, then any path p in Pi (t) is equal to
pei . By the definition of the maps in Pi , such a path belongs to U (t). This means
that U (t) = Pi (t), and then V (t) = 0 since by assumption Pi (t) = U (t) ⊕ V (t).
This is true for all vertices t and hence V is the zero representation.
Alternatively we can also show that KQei is an indecomposable KQ-module:
Suppose KQei = U ⊕ V for submodules U and V. Then ei = ui + vi for unique
ui ∈ U and vi ∈ V. Since ei² = ei we have ui + vi = ei = ei² = ei ui + ei vi. Note that
ei ui ∈ U and ei vi ∈ V since U and V are submodules. By uniqueness, ei ui = ui
and ei vi = vi; these are elements of ei KQei, which is 1-dimensional (since Q
has no oriented cycles). If ui and vi were non-zero then they would be linearly
independent (they lie in different summands), a contradiction. So say vi = 0. Then
ei = ui belongs to U and since ei generates the module, U = KQei and V = 0.
As a second alternative, we can also prove indecomposability by applying our
previous work. By Exercise 5.9 we know that EndKQ (KQei ) is isomorphic to
(ei KQei)^op. Since there are no oriented cycles, this is just the span of ei. Then
we see directly that any endomorphism ϕ of Pi = KQei with ϕ² = ϕ is zero or the
identity. Hence the module is indecomposable, by Lemma 7.3.
Chapter 10
Exercise 10.10. (a) Using the explicit formula for C in Example 10.15, we find
that the orbit of εn is {εn , εn−1 , . . . , ε1 , −α1,n }.
(b) Using the formula for C (αr,s ) in Example 10.15 we see that the orbit of αr,n is
given by
This has n + 1 elements and αr,n is the unique root of the form αt,n . Moreover, there
are n such orbits, each containing n + 1 elements. Since in total there are n(n + 1)
roots for Dynkin type An (see Example 10.8), these are all orbits and the claim
follows.
(c) We have Cⁿ⁺¹(εi) = εi for all i = 1, . . . , n (as one sees from the calculation
for (a)). The εi form a basis for Rⁿ, and C is linear. It follows that Cⁿ⁺¹ fixes each
element of Rⁿ.
Exercise 10.11. In Exercise 10.10 we have seen that Cⁿ⁺¹ is the identity. Moreover,
for 1 ≤ k < n + 1 the map Cᵏ is not the identity map, as we see from the
computation in the previous exercise, namely all C-orbits have size n + 1.
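The order of C can also be checked by direct matrix computation. A sketch assuming the linearly oriented A_n quiver and the common convention C = −E⁻¹Eᵀ for the Coxeter matrix, with E the Euler matrix; the book's convention may differ from this by inversion, which does not affect the order:

```python
n = 4  # sample rank; the orbit count predicts order n + 1 = 5

I = [[int(i == j) for j in range(n)] for i in range(n)]
# Euler matrix of the linearly oriented A_n quiver: 1s on the diagonal,
# -1 for each arrow i -> i + 1
E = [[1 if i == j else (-1 if j == i + 1 else 0) for j in range(n)]
     for i in range(n)]
Einv = [[1 if j >= i else 0 for j in range(n)] for i in range(n)]  # E^{-1}

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

ET = [[E[j][i] for j in range(n)] for i in range(n)]
C = [[-x for x in row] for row in matmul(Einv, ET)]  # Coxeter matrix -E^{-1}E^T

P, order = C, 1
while P != I:
    P, order = matmul(P, C), order + 1
print(order)  # 5
```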
Exercise 10.14. (a) The positive roots for D4 are
εi , (1, 0, 1, 0), (0, 1, 1, 0), (0, 0, 1, 1), (1, 1, 1, 0), (0, 1, 1, 1), (1, 0, 1, 1),
(1, 1, 1, 1), (1, 1, 2, 1).
(b) The positive roots for type D5 are as follows: first, append a zero to each of the
roots for D4 . Then any other positive root for D5 has 5-th coordinate equal to 1. One
gets
ε5 , (0, 0, 0, 1, 1), (0, 0, 1, 1, 1), (0, 1, 1, 1, 1), (1, 0, 1, 1, 1), (1, 1, 1, 1, 1),
(1, 1, 2, 1, 1), (1, 1, 2, 2, 1).
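A consistency check on these lists: for a simply laced diagram the roots are exactly the integer vectors x with q(x) = 1, where q is the Tits quadratic form. The sketch below enumerates them for D4, taking the central vertex as the third coordinate (matching the root (1, 1, 2, 1) above):

```python
from itertools import product

# D4: vertices 1, 2, 4 are each joined to the central vertex 3 (0-indexed: 2)
edges = [(0, 2), (1, 2), (3, 2)]

def q(x):
    # Tits quadratic form of the D4 diagram
    return sum(t * t for t in x) - sum(x[i] * x[j] for i, j in edges)

# roots are the integer vectors with q(x) = 1; coefficients lie in [-2, 2]
roots = [x for x in product(range(-2, 3), repeat=4) if q(x) == 1]
positive = [x for x in roots if all(t >= 0 for t in x)]
print(len(positive))  # 12 positive roots, matching the list for D4
```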
Chapter 11
S2, S3 and 0 −→ K ←− K (where the map K → K is the identity).
S1, S2 and K −→ K ←− 0 (where the map K → K is the identity).
Exercise 11.18. If one of conditions (i) or (ii) of Exercise 11.16 does not hold then
the representation is decomposable, by Exercise 11.16. So assume now that both
(i) and (ii) hold. We only have to show that the hypotheses of Exercise 11.17 are
satisfied. By (i) we know
and by rank-nullity
by Proposition 11.37. Note that s1s2 = (s2s1)⁻¹ = C⁻¹, and this is given by the
matrix

( 3  −2 )
( 2  −1 ).

Then one checks that the dimension vector of M is of the form
(a, a − 1); indeed, if N = S1 then dim M = (2r + 1, 2r) and if N = S2 then
dim M = (2r + 2, 2r + 1).
Finally, we get uniqueness by using the argument as in Proposition 11.46.
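As a sanity check on these dimension vectors, one can iterate the matrix of C⁻¹ from the solution on dim S1 = (1, 0); a small sketch, where the starting vector is our reading of the case N = S1:

```python
def apply(m, v):
    # multiply a 2x2 integer matrix with a column vector
    return (m[0][0] * v[0] + m[0][1] * v[1], m[1][0] * v[0] + m[1][1] * v[1])

Cinv = ((3, -2), (2, -1))  # the matrix of s1 s2 = C^{-1} from the solution

orbit = [(1, 0)]  # dim S1
for _ in range(4):
    orbit.append(apply(Cinv, orbit[-1]))
print(orbit)  # [(1, 0), (3, 2), (5, 4), (7, 6), (9, 8)]
```

Every vector after the first is of the form (a, a − 1), consistent with the claim above.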
Exercise 11.22. (a) This is straightforward.
(b) Let ϕ : VM → VM be an A-module homomorphism such that ϕ 2 = ϕ;
according to Lemma 7.3 we must show that ϕ is the zero map or the identity. Write
ϕ as a matrix, in block form, as

T = ( T1  T′ )
    ( T″  T2 )
by (i), that is, T″ = 0. Recall that ϕ² = ϕ; since T″ = 0 we deduce from this that
T1² = T1 and T2² = T2. Let τ := (T1, T2); by (ii) this defines a homomorphism of
representations M → M and we have τ 2 = τ . We assume M is indecomposable,
so by Lemma 9.11 it follows that τ is zero or the identity. If τ = 0 then we see that
T² = 0. But we should have T² = T since ϕ² = ϕ, and therefore T = 0 and ϕ = 0.
If τ = idM then T² = T implies that 2T′ = T′, and therefore T′ = 0 and then
ϕ = idVM. We have proved that VM is indecomposable.
(c) Assume ϕ : VM → VN is an isomorphism and write it as a matrix in block
form, as
T = ( T1  T′ )
    ( T″  T2 ).
We write xM and yM for the matrices describing the action of X and Y on VM and
similarly xN and yN for the action on VN . Since ϕ is an A-module homomorphism
we have xN T = T xM , and yN T = T yM . That is, the following two conditions
are satisfied for i = 1, 2:
(i) N(αi)T′ = 0 and T″M(αi) = 0;
(ii) N(αi)T1 = T2M(αi).
Say M(α1) is surjective; then it follows by the argument as in part (b) that T″ = 0.
Using this, we see that since T is invertible, we must have that both T1 and T2
are invertible. Therefore τ = (T1 , T2 ) gives an isomorphism of representations
τ : M → N . This proves part (c).
Exercise 11.24. (a) An A-module M can be viewed as a module over KQ, by inflation.
So we get from this a representation M of Q as usual. That is, M(i) = ei M and the
maps are given by multiplication with the arrows. Since βγ is zero in A we have
M(β) ◦ M(γ ) is zero, similarly for αγ and δγ .
(b) (i) Vertex 5 is a source, so we can apply Exercise 9.10, which shows that
M = X ⊕ Y where X(5) = ker(M(γ )), and X is isomorphic to a direct sum
of copies of the simple representation S5 .
(ii) Let U be the subrepresentation with U (5) = M(5) and U (3) = im(M(γ )),
and where U (γ ) = M(γ ). This is a subrepresentation of M since M(α), M(β)
and M(δ) map the image of M(γ ) to zero, by part (a). The dimension vector of this
subrepresentation is (0, 0, d, 0, d) since M(γ ) is injective. From the construction
it is clear that U decomposes as a direct sum of d copies of a representation with
dimension vector (0, 0, 1, 0, 1), and that this representation is indecomposable (it is
the extension by zero of an indecomposable representation for a quiver of type A2 ).
Now choose a subspace C of M(3) such that C ⊕ U (3) = M(3), and then we
have a subrepresentation V of M with V (i) = M(i) for i = 1, 2, 4 and V (3) = C,
V (5) = 0 and where V (ω) = M(ω) for ω = α, β, δ. Then M = U ⊕ V.
(c) Let M be an indecomposable A-module, and let M be the corresponding
indecomposable representation of Q, satisfying the relations as in (a).
Suppose first that M(5) = 0. Then M is the extension by zero of an
indecomposable representation of a quiver of Dynkin type D4 , so there are finitely
many of these, by Gabriel’s theorem.
Suppose now that M(5) ≠ 0. If M(γ) is not injective then M ≅ S5, by part
(b) (i) (and since M is indecomposable by assumption). So we can assume now that
M(γ ) is injective. Then by part (b) (ii) (and because M is indecomposable), M has
dimension vector (0, 0, 1, 0, 1), and hence is unique up to isomorphism.
In total we have finitely many indecomposable representations of Q satisfying
the relations in A, and hence we have finitely many indecomposable A-modules.
(d) Gabriel’s theorem is a result about path algebras of quivers; it does not make any
statement about representations of a quiver where the arrows satisfy relations.
Index