
Springer Undergraduate Mathematics Series

Karin Erdmann 
Thorsten Holm

Algebras and Representation Theory
Springer Undergraduate Mathematics Series

Advisory Board
M.A.J. Chaplain, University of St. Andrews
A. MacIntyre, Queen Mary University of London
S. Scott, King’s College London
N. Snashall, University of Leicester
E. Süli, University of Oxford
M.R. Tehranchi, University of Cambridge
J.F. Toland, University of Bath
More information about this series at http://www.springer.com/series/3423
Karin Erdmann • Thorsten Holm

Algebras and Representation Theory

Karin Erdmann
Mathematical Institute
University of Oxford
Oxford, United Kingdom

Thorsten Holm
Institut für Algebra, Zahlentheorie und Diskrete Mathematik
Fakultät für Mathematik und Physik
Leibniz Universität Hannover
Hannover, Germany

ISSN 1615-2085 ISSN 2197-4144 (electronic)


Springer Undergraduate Mathematics Series
ISBN 978-3-319-91997-3 ISBN 978-3-319-91998-0 (eBook)
https://doi.org/10.1007/978-3-319-91998-0

Library of Congress Control Number: 2018950191

Mathematics Subject Classification (2010): 16-XX, 16G10, 16G20, 16D10, 16D60, 16G60, 20CXX

© Springer International Publishing AG, part of Springer Nature 2018


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, express or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Introduction

Representation theory is a beautiful subject which has numerous applications in mathematics and beyond. Roughly speaking, representation theory investigates how
algebraic systems can act on vector spaces. When the vector spaces are finite-
dimensional this allows one to explicitly express the elements of the algebraic
system by matrices, and hence one can exploit basic linear algebra to study abstract
algebraic systems. For example, one can study symmetry via group actions, but
more generally one can also study processes which cannot be reversed. Algebras
and their representations provide a natural framework for this. The idea of letting
algebraic systems act on vector spaces is so general that there are many variations.
A large part of these fall under the heading of representation theory of associative
algebras, and this is the main focus of this text.
Examples of associative algebras which already appear in basic linear algebra
are the spaces of n × n-matrices with coefficients in some field K, with the usual
matrix operations. Another example is provided by polynomials over some field,
but there are many more. In general, roughly speaking, an associative algebra A is
a ring which also is a vector space over some field K such that scalars commute
with all elements of A. We start by introducing algebras and basic concepts, and
describe many examples. In particular, we discuss group algebras, division algebras,
and path algebras of quivers. Next, we introduce modules and representations of
algebras and study standard concepts, such as submodules, factor modules and
module homomorphisms.
A module is simple (or irreducible) if it does not have any submodules, except
zero and the whole space. The first part of the text is motivated by simple modules.
They can be seen as building blocks for arbitrary modules; this is made precise by the Jordan–Hölder theorem. It is therefore a fundamental problem to understand all
simple modules of an algebra. The next question is then how the simple modules
can be combined to form new modules. For some algebras, every module is a direct
sum of simple modules, and in this case, the algebra is called semisimple. We study
these in detail. In addition, we introduce the Jacobson radical of an algebra, which,
roughly speaking, measures how far an algebra is away from being semisimple.
The Artin–Wedderburn theorem completely classifies semisimple algebras. Given an arbitrary algebra, in general it is difficult to decide whether or not it is semisimple.
However, when the algebra is the group algebra of a finite group G, Maschke’s
theorem answers this, namely the group algebra KG is semisimple if and only if the
characteristic of the field K does not divide the order of the group. We give a proof,
and we discuss some applications.
If an algebra is not semisimple, one has to understand indecomposable modules
instead of just simple modules. The second half of the text focusses on these.
Any finite-dimensional module is a direct sum of indecomposable modules. Even more, such a direct sum decomposition is essentially unique; this is known as the
Krull–Schmidt theorem. It shows that it is enough to understand indecomposable
modules of an algebra. This suggests the definition of the representation type of an
algebra. This is said to be finite if the algebra has only finitely many indecomposable
modules, and is infinite otherwise. In general it is difficult to determine which
algebras have finite representation type. However for group algebras of finite groups,
there is a nice answer which we present.
The rest of the book studies quivers and path algebras of quivers. The main goal
is to classify quivers whose path algebras have finite representation type. This is
Gabriel’s theorem, proved in the 1970s, which has led to a wide range of new
directions, in algebra and also in geometry and other parts of mathematics. Gabriel
proved that a path algebra KQ of a quiver Q has finite representation type if and
only if the underlying graph of Q is the disjoint union of Dynkin diagrams of types
A, D or E. In particular, this theorem shows that the representation type of KQ is
independent of the field K and is determined entirely by the underlying graph of
Q. Our aim is to give an elementary account of Gabriel’s theorem, and this is done
in several chapters. We introduce representations of quivers; they are the same as
modules for the path algebra of the quiver, but representations carry additional combinatorial information. We devote one chapter to the description of
the graphs relevant for the proof of Gabriel’s theorem, and to the development of
further tools related to these graphs. Returning to representations, we introduce
reflections of quivers and of representations, which are crucial to show that the
representation type does not depend on the orientation of arrows. Combining the
various tools allows us to prove Gabriel’s theorem, for arbitrary fields.
This text is an extended version of third year undergraduate courses which we
gave at Oxford, and at Hannover. The aim is to give an elementary introduction; we assume knowledge of results from linear algebra, and of basic properties of rings
and groups. Apart from this, we have tried to make the text self-contained. We have
included a large number of examples, and also exercises. We are convinced that they
are essential for the understanding of new mathematical concepts. In each section,
we give sample solutions for some of the exercises, which can be found in the
appendix. We hope that this will help to make the text also suitable for independent
self-study.

Oxford, UK Karin Erdmann
Hannover, Germany Thorsten Holm
2018
Contents

1 Algebras   1
1.1 Definition and Examples   1
1.1.1 Division Algebras   4
1.1.2 Group Algebras   6
1.1.3 Path Algebras of Quivers   6
1.2 Subalgebras, Ideals and Factor Algebras   9
1.3 Algebra Homomorphisms   13
1.4 Some Algebras of Small Dimensions   19
2 Modules and Representations   29
2.1 Definition and Examples   29
2.2 Modules for Polynomial Algebras   33
2.3 Submodules and Factor Modules   35
2.4 Module Homomorphisms   40
2.5 Representations of Algebras   47
2.5.1 Representations of Groups vs. Modules for Group Algebras   51
2.5.2 Representations of Quivers vs. Modules for Path Algebras   53
3 Simple Modules and the Jordan–Hölder Theorem   61
3.1 Simple Modules   61
3.2 Composition Series   63
3.3 Modules of Finite Length   69
3.4 Finding All Simple Modules   71
3.4.1 Simple Modules for Factor Algebras of Polynomial Algebras   73
3.4.2 Simple Modules for Path Algebras   75
3.4.3 Simple Modules for Direct Products   77
3.5 Schur’s Lemma and Applications   79
4 Semisimple Modules and Semisimple Algebras   85
4.1 Semisimple Modules   85
4.2 Semisimple Algebras   91
4.3 The Jacobson Radical   96
5 The Structure of Semisimple Algebras: The Artin–Wedderburn Theorem   103
5.1 A Special Case   104
5.2 Towards the Artin–Wedderburn Theorem   106
5.3 The Artin–Wedderburn Theorem   111
6 Semisimple Group Algebras and Maschke’s Theorem   117
6.1 Maschke’s Theorem   117
6.2 Some Consequences of Maschke’s Theorem   120
6.3 One-Dimensional Simple Modules and Commutator Groups   122
6.4 Artin–Wedderburn Decomposition and Conjugacy Classes   124
7 Indecomposable Modules   129
7.1 Indecomposable Modules   129
7.2 Fitting’s Lemma and Local Algebras   133
7.3 The Krull–Schmidt Theorem   137
8 Representation Type   143
8.1 Definition and Examples   143
8.2 Representation Type for Group Algebras   149
9 Representations of Quivers   163
9.1 Definitions and Examples   163
9.2 Representations of Subquivers   169
9.3 Stretching Quivers and Representations   172
9.4 Representation Type of Quivers   177
10 Diagrams and Roots   185
10.1 Dynkin Diagrams and Euclidean Diagrams   185
10.2 The Bilinear Form and the Quadratic Form   188
10.3 The Coxeter Transformation   197
11 Gabriel’s Theorem   203
11.1 Reflecting Quivers and Representations   203
11.1.1 The Reflection Σj+ at a Sink   206
11.1.2 The Reflection Σj− at a Source   210
11.1.3 Compositions Σj− Σj+ and Σj+ Σj−   215
11.2 Quivers of Infinite Representation Type   218
11.3 Dimension Vectors and Reflections   223
11.4 Finite Representation Type for Dynkin Quivers   227
12 Proofs and Background   239
12.1 Proofs on Reflections of Representations   239
12.1.1 Invariance Under Direct Sums   240
12.1.2 Compositions of Reflections   243
12.2 All Euclidean Quivers Are of Infinite Representation Type   246
12.2.1 Quivers of Type Ẽ6 Have Infinite Representation Type   247
12.2.2 Quivers of Type Ẽ7 Have Infinite Representation Type   250
12.2.3 Quivers of Type Ẽ8 Have Infinite Representation Type   253
12.3 Root Systems   257
12.4 Morita Equivalence   260
A Induced Modules for Group Algebras   265
B Solutions to Selected Exercises   271
Index   297
Chapter 1
Algebras

We begin by defining associative algebras over fields, and we give a collection of typical examples. Roughly speaking, an algebra is a ring which is also a vector space
such that scalars commute with everything. One example is given by the n × n-
matrices over some field, with the usual matrix addition and multiplication. We will
introduce many more examples of algebras in this section.

1.1 Definition and Examples

We start by recalling the definition of a ring: A ring is a non-empty set R together with an addition + : R × R → R, (r, s) → r + s and a multiplication · : R × R → R, (r, s) → r · s such that the following axioms are satisfied for all r, s, t ∈ R:
(R1) (Associativity of +) r + (s + t) = (r + s) + t.
(R2) (Zero element) There exists an element 0R ∈ R such that r+0R = r = 0R +r.
(R3) (Additive inverses) For every r ∈ R there is an element −r ∈ R such that
r + (−r) = 0R .
(R4) (Commutativity of +) r + s = s + r.
(R5) (Distributivity) r · (s + t) = r · s + r · t and (r + s) · t = r · t + s · t.
(R6) (Associativity of ·) r · (s · t) = (r · s) · t.
(R7) (Identity element) There is an element 1R ∈ R \ {0} such that
1R · r = r = r · 1R .
Moreover, a ring R is called commutative if r · s = s · r for all r, s ∈ R.
As usual, the multiplication in a ring is often just written as rs instead of r · s;
we will follow this convention from now on.
Note that axioms (R1)–(R4) say that (R, +) is an abelian group. We assume by
Axiom (R7) that all rings have an identity element; usually we will just write 1 for 1R . Axiom (R7) also implies that 1R is not the zero element. In particular, a ring has
at least two elements.
We list some common examples of rings.
(1) The integers Z form a ring. Every field is also a ring, such as the rational
numbers Q, the real numbers R, the complex numbers C, or the residue classes
Zp of integers modulo p where p is a prime number.
(2) The n × n-matrices Mn (K), with entries in a field K, form a ring with respect
to matrix addition and matrix multiplication.
(3) The ring K[X] of polynomials over a field K where X is a variable. Similarly,
the ring of polynomials in two or more variables, such as K[X, Y ].
Examples (2) and (3) are not just rings but also vector spaces. There are many more
rings which are vector spaces, and this has led to the definition of an algebra.
Definition 1.1.
(i) An algebra A over a field K (or a K-algebra) is a ring, with addition and
multiplication

(a, b) → a + b and (a, b) → ab for a, b ∈ A

which is also a vector space over K, with the above addition and with scalar
multiplication

(λ, a) → λ · a for λ ∈ K, a ∈ A,

and satisfying the following axiom

λ · (ab) = (λ · a)b = a(λ · b) for all λ ∈ K, a, b ∈ A. (Alg)

(ii) The dimension of a K-algebra A is the dimension of A as a K-vector space. The K-algebra A is finite-dimensional if A is finite-dimensional as a K-vector space.
(iii) The algebra is commutative if it is commutative as a ring.
Remark 1.2.
(1) The condition (Alg) relating scalar multiplication and ring multiplication
roughly says that scalars commute with everything.
(2) Spelling out the various axioms, we have already listed above the axioms for
a ring. To say that A is a vector space over K means that for all a, b ∈ A and
λ, μ ∈ K we have
(i) λ · (a + b) = λ · a + λ · b;
(ii) (λ + μ) · a = λ · a + μ · a;
(iii) (λμ) · a = λ · (μ · a);
(iv) 1K · a = a.
(3) Strictly speaking, in Definition 1.1 we should say that A is an associative algebra, as the underlying multiplication in the ring is associative. There are
other types of algebras, which we discuss at the end of Chap. 2.
(4) Since A is a vector space, and 1A is a non-zero vector, it follows that the map
λ → λ · 1A from K to A is injective. We use this map to identify K as a subset
of A. Similar to the convention for ring multiplication, for scalar multiplication
we will usually also just write λa instead of λ · a.
We describe now some important algebras which will appear again later.
Example 1.3.
(1) The field K is a commutative K-algebra, of dimension 1.
(2) The field C is also an algebra over R, of dimension 2, with R-vector space basis
{1, i}, where i² = −1. More generally, if K is a subfield of a larger field L, then
L is an algebra over K where addition and (scalar) multiplication are given by
the addition and multiplication in the field L.
(3) The space of n×n-matrices Mn (K) with matrix addition and matrix multiplica-
tion form a K-algebra. It has dimension n²; the matrix units Eij for 1 ≤ i, j ≤ n
form a K-basis. Here Eij is the matrix which has entry 1 at position (i, j ), and
all other entries are 0. This algebra is not commutative for n ≥ 2. For example
we have E11 E12 = E12 but E12 E11 = 0.
(4) Polynomial rings K[X], or K[X, Y ], are commutative K-algebras. They are not
finite-dimensional.
(5) Let V be a K-vector space, and consider the K-linear maps on V

EndK (V ) := {α : V → V | α is K-linear}.

This is a K-algebra, if one takes as multiplication the composition of maps, and where the addition and scalar multiplication are pointwise, that is
where the addition and scalar multiplication are pointwise, that is

(α + β)(v) = α(v) + β(v) and (λα)(v) = λ(α(v))

for all α, β ∈ EndK (V ), λ ∈ K and v ∈ V .


Exercise 1.1. For the n × n-matrices Mn (K) in Example 1.3 (3) check that the
axioms for a K-algebra from Definition 1.1 are satisfied.
Remark 1.4. In order to perform computations in a K-algebra A, very often one
fixes a K-vector space basis, say {b1 , b2 , . . .}. It then suffices to know the products

br bs (r, s ≥ 1). (*)

Then one expresses arbitrary elements a and a′ in A as finite linear combinations of elements in this basis, and uses the distributive laws to compute the product of a and a′. We refer to (*) as ‘multiplication rules’.

In Example 1.3 where A = C and K = R, taking the basis {1C , i} one gets the
usual multiplication rules for complex numbers.
For the n × n-matrices Mn (K) in Example 1.3, it is convenient to take the basis
of matrix units {Eij | 1 ≤ i, j ≤ n} since products of two such matrices are either
zero, or some other matrix of the same form, see Exercise 1.10.
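To illustrate, these multiplication rules are easy to check computationally. The following is a minimal sketch in Python (an ad hoc illustration; the helper E and the use of numpy are assumptions of the sketch, not part of the text):

    # Matrix units multiply by the rule: E_ij E_kl = E_il if j = k, else 0.
    import numpy as np

    def E(i, j, n):
        m = np.zeros((n, n))
        m[i - 1, j - 1] = 1.0        # entry 1 at position (i, j), 1-based
        return m

    n = 3
    print(np.array_equal(E(1, 1, n) @ E(1, 2, n), E(1, 2, n)))  # E11 E12 = E12: True
    print(np.all(E(1, 2, n) @ E(1, 1, n) == 0))                 # E12 E11 = 0: True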
Given some algebras, there are several general methods to construct new ones. We
describe two such methods.
Definition 1.5. If A1 , . . . , An are K-algebras, their direct product is defined to be
the algebra with underlying space

A1 × . . . × An := {(a1 , . . . , an ) | ai ∈ Ai for i = 1, . . . , n}

and where addition and multiplication are componentwise. It is a straightforward exercise to check that the axioms of Definition 1.1 are satisfied.
Definition 1.6. If A is any K-algebra, then the opposite algebra Aop of A has the
same underlying K-vector space as A, and the multiplication in Aop , which we
denote by ∗, is given by reversing the order of the factors, that is

a ∗ b := ba for a, b ∈ A.

This is again a K-algebra, and (Aop )op = A.


There are three types of algebras which will play a special role in the following,
namely division algebras, group algebras, and path algebras of quivers. We will
study these now in more detail.

1.1.1 Division Algebras

A commutative ring is a field precisely when every non-zero element has an inverse
with respect to multiplication. More generally, there are algebras in which every
non-zero element has an inverse, and they need not be commutative.
Definition 1.7. An algebra A (over a field K) is called a division algebra if every
non-zero element a ∈ A is invertible, that is, there exists an element b ∈ A such
that ab = 1A = ba. If so, we write b = a −1 . Note that if A is finite-dimensional
and ab = 1A then it follows that ba = 1A ; see Exercise 1.8.
Division algebras occur naturally, as we will see later. Clearly, every field is a division algebra. There is a famous example of a division algebra which is not a field; it was discovered by Hamilton.
Example 1.8. The algebra H of quaternions is the 4-dimensional algebra over R with basis elements 1, i, j, k and with multiplication defined by

i² = j² = k² = −1

and

ij = k,  ji = −k,  jk = i,  kj = −i,  ki = j,  ik = −j

and extending to linear combinations. That is, an arbitrary element of H has the
form a + bi + cj + dk with a, b, c, d ∈ R, and the product of two elements in H is
given by

(a1 + b1 i + c1 j + d1 k) · (a2 + b2 i + c2 j + d2 k) =
(a1 a2 − b1 b2 − c1 c2 − d1 d2 ) + (a1 b2 + b1 a2 + c1 d2 − d1 c2 )i
+ (a1 c2 − b1 d2 + c1 a2 + d1 b2 )j + (a1 d2 + b1 c2 − c1 b2 + d1 a2 )k.

It might be useful to check this formula, see Exercise 1.11.


One can check directly that the multiplication in H is associative, and that it
satisfies the distributive law. But this will follow more easily later from a different
construction of H, see Example 1.27.
This 4-dimensional R-algebra H is a division algebra. Indeed, take a general
element of H, of the form u = a + bi + cj + dk with a, b, c, d ∈ R. Let
ū := a − bi − cj − dk ∈ H, then we compute ūu, using the above formula, and get

ūu = (a² + b² + c² + d²) · 1 = uū.

This is non-zero for any u ≠ 0, and from this, one can write down the inverse of any
non-zero element u.
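For readers who want to experiment, the multiplication formula can also be verified by a short computation. The following is a minimal sketch in Python (an ad hoc illustration; the 4-tuple encoding and the helper names qmul and conj are assumptions of the sketch); it confirms ūu = (a² + b² + c² + d²) · 1 = uū on a sample element:

    # A quaternion a + bi + cj + dk is encoded as the 4-tuple (a, b, c, d).
    def qmul(p, q):
        a1, b1, c1, d1 = p
        a2, b2, c2, d2 = q
        return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
                a1*b2 + b1*a2 + c1*d2 - d1*c2,
                a1*c2 - b1*d2 + c1*a2 + d1*b2,
                a1*d2 + b1*c2 - c1*b2 + d1*a2)

    def conj(p):                     # ū = a − bi − cj − dk
        a, b, c, d = p
        return (a, -b, -c, -d)

    u = (1, 2, -1, 3)                # u = 1 + 2i − j + 3k
    print(qmul(conj(u), u))          # (15, 0, 0, 0), since 1 + 4 + 1 + 9 = 15
    print(qmul(u, conj(u)))          # the same tuple, so ūu = uū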
Remark 1.9. We use the notation i, and this is justified: The subspace
{a + bi | a, b ∈ R} of H really is C, indeed from the multiplication rules in H
we get

(a1 + b1 i) · (a2 + b2 i) = a1 a2 − b1 b2 + (a1 b2 + b1 a2 )i.

So the multiplication in H agrees with the one in C.


However, H is not a C-algebra, since axiom (Alg) from Definition 1.1 is not
satisfied, for example we have

i(j 1) = ij = k ≠ −k = ji = j (i1).

The subset {±1, ±i, ±j, ±k} of H forms a group under multiplication; this is known as the quaternion group.

1.1.2 Group Algebras

Let G be a group and K a field. We define a vector space over K which has basis
the set {g | g ∈ G}, and we call this vector space KG. This space becomes a
K-algebra if one defines the product on the basis by taking the group multiplication,
and extends it to linear combinations. We call this algebra KG the group algebra.
Thus an arbitrary element of KG is a finite linear combination of the form Σg∈G αg g with αg ∈ K. We can write down a formula for the product of two elements, following the recipe in Remark 1.4. Let α = Σg∈G αg g and β = Σh∈G βh h be two elements in KG; then their product has the form

αβ = Σx∈G ( Σgh=x αg βh ) x.

Since the multiplication in the group is associative, it follows that the multiplication
in KG is associative. Furthermore, one checks that the multiplication in KG is
distributive. The identity element of the group algebra KG is given by the identity
element of G.
Note that the group algebra KG is finite-dimensional if and only if the group G
is finite, in which case the dimension of KG is equal to the order of the group G.
The group algebra KG is commutative if and only if the group G is abelian.
Example 1.10. Let G be the cyclic group of order 3, generated by y, so that
G = {1G , y, y²} and y³ = 1G . Then we have

(a0 1G + a1 y + a2 y²)(b0 1G + b1 y + b2 y²) = c0 1G + c1 y + c2 y²,

with

c0 = a0 b0 + a1 b2 + a2 b1 ,   c1 = a0 b1 + a1 b0 + a2 b2 ,   c2 = a0 b2 + a1 b1 + a2 b0 .
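The product formula above translates directly into code. Here is a minimal sketch in Python (an ad hoc illustration; storing an element of KG as its list of coefficients is an assumption of the sketch), specialised to a cyclic group of order n as in Example 1.10:

    # An element a_0·1 + a_1·y + ... + a_{n-1}·y^{n-1} of KG, for G cyclic
    # of order n, is stored as its coefficient list [a_0, ..., a_{n-1}].
    def kg_product(alpha, beta):
        n = len(alpha)
        gamma = [0] * n
        for g in range(n):
            for h in range(n):
                # y^g · y^h = y^{g+h}, and y^n = 1
                gamma[(g + h) % n] += alpha[g] * beta[h]
        return gamma

    # With n = 3: (1 + 2y)(y + y^2) = 2·1 + y + 3y^2
    print(kg_product([1, 2, 0], [0, 1, 1]))   # [2, 1, 3]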

1.1.3 Path Algebras of Quivers

Path algebras of quivers are a class of algebras with an easy multiplication formula,
and they are extremely useful for calculating examples. They also have connections
to other parts of mathematics. The underlying basis of a path algebra is the set of
paths in a finite directed graph. It is customary in representation theory to call such
a graph a quiver. We assume throughout that a quiver has finitely many vertices and
finitely many arrows.
Definition 1.11. A quiver Q is a finite directed graph. We sometimes write
Q = (Q0 , Q1 ), where Q0 is the set of vertices and Q1 is the set of arrows.

We assume that Q0 and Q1 are finite sets. For any arrow α ∈ Q1 we denote by
s(α) ∈ Q0 its starting point and by t (α) ∈ Q0 its end point.
A non-trivial path in Q is a sequence p = αr . . . α2 α1 of arrows αi ∈ Q1 such
that t (αi ) = s(αi+1 ) for all i = 1, . . . , r − 1. Note that our convention is to read
paths from right to left. The number r of arrows is called the length of p, and we
denote by s(p) = s(α1 ) the starting point, and by t (p) = t (αr ) the end point of p.
For each vertex i ∈ Q0 we also need to have a trivial path of length 0, which we
call ei , and we set s(ei ) = i = t (ei ).
We call a path p in Q an oriented cycle if p has positive length and s(p) = t (p).
Definition 1.12. Let K be a field and Q a quiver. The path algebra KQ of the
quiver Q over the field K has underlying vector space with basis given by all paths
in Q.
The multiplication in KQ is defined on the basis by concatenation of paths (if
possible), and extended linearly to linear combinations. More precisely, for two
paths p = αr . . . α1 and q = βs . . . β1 in Q we set

p · q = αr . . . α1 βs . . . β1   if t (βs ) = s(α1 ),   and   p · q = 0   otherwise.

Note that for the trivial paths ei , where i is a vertex in Q, we have that p · ei = p for i = s(p) and p · ei = 0 for i ≠ s(p); similarly ei · p = p for i = t (p) and 0
otherwise. In particular we have ei · ei = ei .
The multiplication in KQ is associative since the concatenation of paths is
associative, and it is distributive, by definition of products for arbitrary linear
combinations. We claim that the identity element of KQ is given by the sum of
trivial paths, that is

1KQ = Σi∈Q0 ei .

In fact, for every path p in Q we have

p · ( Σi∈Q0 ei ) = Σi∈Q0 p · ei = p · es(p) = p = et (p) · p = ( Σi∈Q0 ei ) · p,

and by distributivity it follows that α · ( Σi∈Q0 ei ) = α = ( Σi∈Q0 ei ) · α for every α ∈ KQ.
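This multiplication rule is easy to implement; the following minimal sketch in Python (an ad hoc illustration; the encoding of a basis path as a triple (source, target, list of arrow names), with None standing for the zero product, is an assumption of the sketch) reproduces the multiplication table of Example 1.13 (1) below:

    # A basis path is (source, target, [arrow names]); e_i is (i, i, []).
    def compose(p, q):
        # Product p · q in KQ: concatenate if t(q) = s(p), otherwise zero.
        (sp, tp, ap), (sq, tq, aq) = p, q
        if tq != sp:
            return None               # the product of the basis paths is 0
        return (sq, tp, ap + aq)      # run first along q, then along p

    # The quiver 1 <--alpha-- 2:
    e1, e2, alpha = (1, 1, []), (2, 2, []), (2, 1, ['alpha'])
    print(compose(e1, alpha))         # (2, 1, ['alpha']) : e1 · alpha = alpha
    print(compose(alpha, e2))         # (2, 1, ['alpha']) : alpha · e2 = alpha
    print(compose(alpha, e1))         # None              : alpha · e1 = 0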
Example 1.13.
α
(1) We consider the quiver Q of the form 1 ←− 2. The path algebra KQ has
dimension 3, the basis consisting of paths is {e1 , e2 , α}. The multiplication table for KQ is given by

· e1 α e2
e1 e1 α 0
α 0 0 α
e2 0 0 e2

(2) Let Q be the one-loop quiver with one vertex v and one arrow α with s(α) = v = t (α), a single loop at v. Then the path algebra KQ has as basis the set {1, α, α², α³, . . .}, and it is not finite-dimensional.
(3) A quiver can have multiple arrows between two vertices. This is the case for the Kronecker quiver, which has two vertices joined by two parallel arrows.
(4) Examples of quivers where more than two arrows start or end at a vertex are the three-subspace quiver, which has one central vertex and three further vertices, each joined to the centre by a single arrow, or similarly with a different orientation of the arrows.
(5) Quivers can have oriented cycles, for example a quiver with two vertices and one arrow in each direction between them.


Exercise 1.2. Let K be a field.


(a) Show that the path algebra KQ in part (5) of Example 1.13 is not finite-
dimensional.
(b) Show that the path algebra KQ of a quiver Q is finite-dimensional if and only
if Q does not contain any oriented cycles.

1.2 Subalgebras, Ideals and Factor Algebras

In analogy to the ‘subspaces’ of vector spaces, and ‘subgroups’ of groups, we should define the notion of a ‘subalgebra’. Suppose A is a K-algebra, then, roughly
speaking, a subalgebra B of A is a subset of A which is itself an algebra with respect
to the operations in A. This is made precise in the following definition.
Definition 1.14. Let K be a field and A a K-algebra. A subset B of A is called a
K-subalgebra (or just subalgebra) of A if the following holds:
(i) B is a K-subspace of A, that is, for every λ, μ ∈ K and b1 , b2 ∈ B we have
λb1 + μb2 ∈ B.
(ii) B is closed under multiplication, that is, for all b1 , b2 ∈ B, the product b1 b2
belongs to B.
(iii) The identity element 1A of A belongs to B.
Exercise 1.3. Let A be a K-algebra and B ⊆ A a subset. Show that B is a
K-subalgebra of A if and only if B itself is a K-algebra with the operations induced
from A.
Remark 1.15. Suppose B is given as a subset of some algebra A, with the same
addition, scalar multiplication, and multiplication. To decide whether or not B is
an algebra, there is no need to check all the axioms of Definition 1.1. Instead, it is
enough to verify conditions (i) to (iii) of Definition 1.14.
We consider several examples.
Example 1.16. Let K be a field.
(1) The K-algebra Mn (K) of n × n-matrices over K has many important subalge-
bras.
(i) The upper triangular matrices

Tn (K) := {a = (aij ) ∈ Mn (K) | aij = 0 for i > j }

form a subalgebra of Mn (K). Similarly, one can define lower triangular matrices and they also form a subalgebra of Mn (K).

(ii) The diagonal matrices

Dn (K) := {a = (aij ) ∈ Mn (K) | aij = 0 for i ≠ j }

form a subalgebra of Mn (K).


(iii) The three-subspace algebra is the subalgebra of M4 (K) consisting of all matrices

⎛ a1 b1 b2 b3 ⎞
⎜ 0  a2 0  0  ⎟
⎜ 0  0  a3 0  ⎟        with ai , bj ∈ K.
⎝ 0  0  0  a4 ⎠

(iv) There are also subalgebras such as the set of all matrices

⎛ a b 0 0 ⎞
⎜ c d 0 0 ⎟
⎜ 0 0 x y ⎟        with a, b, c, d, x, y, z, u ∈ K,
⎝ 0 0 z u ⎠

inside M4 (K).

(2) The n×n-matrices Mn (Z) ⊆ Mn (R) over the integers are closed under addition
and multiplication, but Mn (Z) is not an R-subalgebra of Mn (R) since it is not
an R-subspace.
(3) The subset consisting of the matrices

⎛ 0 b ⎞
⎝ 0 0 ⎠        with b ∈ K

is not a K-subalgebra of T2 (K) since it does not contain the identity element.
(4) Let A be a K-algebra. For any element a ∈ A define Aa to be the K-span of
{1A , a, a², . . .}. That is, Aa is the space of polynomial expressions in a. This
is a K-subalgebra of A, and it is always commutative. Note that if A is finite-
dimensional then so is Aa .
(5) Let A = A1 × A2 , the direct product of two algebras. Then A1 × {0} is not a
subalgebra of A since it does not contain the identity element of A.
(6) Let H be a subgroup of a group G. Then the group algebra KH is a subalgebra
of the group algebra KG.
(7) Let KQ be the path algebra of the quiver 1 ←α− 2 ←β− 3. We can consider the ‘subquiver’ Q′ given by 1 ←α− 2. The path algebra KQ′ is not a subalgebra of KQ since it does not contain the identity element 1KQ = e1 + e2 + e3 of KQ.
Exercise 1.4. Verify that the three-subspace algebra from Example 1.16 is a
subalgebra of M4 (K), hence is a K-algebra.
In addition to subalgebras, there are also ideals, and they are needed when one
wants to define factor algebras.

Definition 1.17. If A is a K-algebra then a subset I is a left ideal of A provided (I, +) is a subgroup of (A, +) such that ax ∈ I for all x ∈ I and a ∈ A. Similarly,
I is a right ideal of A if (I, +) is a subgroup such that xa ∈ I for all x ∈ I and
a ∈ A. A subset I of A is a two-sided ideal (or just ideal) of A if it is both a left
ideal and a right ideal.
Remark 1.18.
(1) The above definition works verbatim for rings instead of algebras.
(2) For commutative algebras A the notions of left ideal, right ideal and two-sided
ideal clearly coincide. However, for non-commutative algebras, a left ideal
might not be a right ideal, and vice versa; see the examples below.
(3) An ideal I (left or right or two-sided) of an algebra A is by definition closed
under multiplication, and also a subspace, see Lemma 1.20 below. However, an
ideal is in general not a subalgebra, since the identity element need not be in I .
Actually, a (left or right or two-sided) ideal I ⊆ A contains 1A if and only if
I = A. In addition, a subalgebra is in general not an ideal.
Exercise 1.5. Assume B is a subalgebra of A. Show that B is a left ideal (or right
ideal) if and only if B = A.
Example 1.19.
(1) Every K-algebra A has the trivial two-sided ideals {0} and A. In the sequel, we
will usually just write 0 for the ideal {0}.
(2) Let A be a K-algebra. For every z ∈ A the subset Az = {az | a ∈ A} is a left
ideal of A. Similarly, zA = {za | a ∈ A} is a right ideal of A for every z ∈ A.
These are called the principal (left or right) ideals generated by z. As a notation
for principal ideals in commutative algebras we often use Az = (z).
(3) For non-commutative algebras, Az need not be a two-sided ideal. For instance,
let A = Mn (K) for some n ≥ 2 and consider the matrix unit z = Eii for some
i ∈ {1, . . . , n}. Then the left ideal Az consists of the matrices with non-zero
entries only in column i, whereas the right ideal zA consists of those matrices
with non-zero entries in row i. In particular, Az ≠ zA, and both are not two-
sided ideals of Mn (K).
(4) The only two-sided ideals in Mn (K) are the trivial ideals. In fact, suppose that I ≠ 0 is a two-sided ideal in Mn (K), and let a = (aij ) ∈ I \ {0}, say akℓ ≠ 0. Then for every r, s ∈ {1, . . . , n} we have that Erk a Eℓs ∈ I since I is a two-sided ideal. On the other hand,

Erk a Eℓs = Erk ( Σi,j aij Eij ) Eℓs = ( Σj akj Erj ) Eℓs = akℓ Ers .

Since akℓ ≠ 0 we conclude that Ers ∈ I for all r, s and hence I = Mn (K); a computational check of this calculation is sketched after this example.
(5) Consider the K-algebra Tn (K) of upper triangular matrices. The K-subspace
of strict upper triangular matrices I1 := span{Eij | 1 ≤ i < j ≤ n}
forms a two-sided ideal. More generally, for any d ∈ N the subspace
Id := span{Eij | d ≤ j − i} is a two-sided ideal of Tn (K). Note that by definition, Id = 0 for d ≥ n.
(6) Let Q be a quiver and A = KQ be its path algebra over a field K. Take the
trivial path ei for a vertex i ∈ Q0 . Then by Example (2) above we have the left
ideal Aei . This is spanned by all paths in Q with starting vertex i. Similarly, the
right ideal ei A is spanned by all paths in Q ending at vertex i.
A two-sided ideal of KQ is for instance given by the subspace (KQ)≥1
spanned by all paths in Q of non-zero length (that is, by all paths in Q except
the trivial paths). More generally, for every d ∈ N the linear combinations of
all paths in Q of length at least d form a two-sided ideal (KQ)≥d .
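The matrix unit computation in (4) above can be confirmed numerically; the following is a minimal sketch in Python (an ad hoc illustration; the helper E and the 1-based indices are assumptions of the sketch):

    # Check E_rk · a · E_ls = a_kl · E_rs for a sample matrix a.
    import numpy as np

    def E(i, j, n):
        m = np.zeros((n, n))
        m[i - 1, j - 1] = 1.0
        return m

    n = 3
    a = np.arange(1.0, 10.0).reshape(n, n)   # here a_kl = a[k-1, l-1]
    r, s, k, l = 1, 2, 2, 3                  # so a_kl = 6
    lhs = E(r, k, n) @ a @ E(l, s, n)
    print(np.array_equal(lhs, a[k - 1, l - 1] * E(r, s, n)))   # True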
Exercise 1.6. Let Q be a quiver, i a vertex in Q and A = KQ the path algebra (over
some field K). Find a condition on Q so that the left ideal Aei is a two-sided ideal.
Let A be a K-algebra and let I ⊂ A be a proper two-sided ideal. Recall from basic
algebra that the cosets A/I := {a + I | a ∈ A} form a ring, the factor ring, with
addition and multiplication defined by

(a + I ) + (b + I ) := a + b + I, (a + I )(b + I ) := ab + I

for a, b ∈ A. Note that these operations are well-defined, that is, they are
independent of the choice of the representatives of the cosets, because I is a two-
sided ideal. Moreover, the assumption I ≠ A is needed to ensure that the factor ring
has an identity element; see Axiom (R7) in Sect. 1.1.
For K-algebras we have some extra structure on the factor rings.
Lemma 1.20. Let A be a K-algebra. Then the following holds.
(a) Every left (or right) ideal I of A is a K-subspace of A.
(b) If I is a proper two-sided ideal of A then the factor ring A/I is a K-algebra,
the factor algebra of A with respect to I .
Proof. (a) Let I be a left ideal. By definition, (I, +) is an abelian group. We need
to show that if λ ∈ K and x ∈ I then λx ∈ I . But λ1A ∈ A, and we obtain

λx = λ(1A x) = (λ1A )x ∈ I,

since I is a left ideal. The same argument works if I is a right ideal, by axiom (Alg)
in Definition 1.1.
(b) We have already recalled above that the cosets A/I form a ring. Moreover, by
part (a), I is a K-subspace and hence A/I is also a K-vector space with (well-
defined) scalar multiplication λ(a +I ) = λa +I for all λ ∈ K and a ∈ A. According
to Definition 1.1 it only remains to show that axiom (Alg) holds. But this property
is inherited from A; explicitly, let λ ∈ K and a, b ∈ A, then

λ((a + I )(b + I )) = λ(ab + I ) = λ(ab) + I = (λa)b + I


= (λa + I )(b + I ) = (λ(a + I ))(b + I ).

Similarly, using that λ(ab) = a(λb) by axiom (Alg), one shows that
λ((a + I )(b + I )) = (a + I )(λ(b + I )). □

Example 1.21. Consider the algebra K[X] of polynomials in one variable over
a field K. Recall from a course on basic algebra that every non-zero ideal I of
K[X] is of the form K[X]f = (f ) for some non-zero polynomial f ∈ K[X]
(that is, K[X] is a principal ideal domain). The factor algebra A/I = K[X]/(f )
is finite-dimensional. More precisely, we claim that it has dimension equal to the
degree d of the polynomial f and that a K-basis of K[X]/(f ) is given by the
cosets 1 + (f ), X + (f ), . . . , X^(d−1) + (f ). In fact, if g ∈ K[X] then division with
remainder in K[X] (polynomial long division) yields g = qf + r with polynomials
q, r ∈ K[X] and r has degree less than d, the degree of f . Hence

g + (f ) = r + (f ) ∈ span{1 + (f ), X + (f ), . . . , X^(d−1) + (f )}.

On the other hand, considering degrees one checks that this spanning set of
K[X]/(f ) is also linearly independent.
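Computing in K[X]/(f ) thus amounts to division with remainder. Here is a minimal sketch in Python using sympy (an ad hoc illustration; the particular polynomials f and g are our choice, anticipating Example 1.27):

    # The coset g + (f) equals r + (f), where g = q·f + r, deg(r) < deg(f).
    from sympy import symbols, div

    X = symbols('X')
    f = X**2 + 1
    g = X**3 + 2*X**2 + 3
    q, r = div(g, f, X)     # polynomial long division
    print(r)                # the remainder 1 - X, so g + (f) = (1 - X) + (f)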

1.3 Algebra Homomorphisms

As with any algebraic structure (like vector spaces, groups, rings) one needs to
define and study maps between algebras which ‘preserve the structure’.
Definition 1.22. Let A and B be K-algebras. A map φ : A → B is a K-algebra
homomorphism (or homomorphism of K-algebras) if
(i) φ is a K-linear map of vector spaces,
(ii) φ(ab) = φ(a)φ(b) for all a, b ∈ A,
(iii) φ(1A ) = 1B .
The map φ : A → B is a K-algebra isomorphism if it is a K-algebra
homomorphism and is in addition bijective. If so, then the K-algebras A and B are said to be isomorphic, and one writes A ≅ B. Note that the inverse of an algebra isomorphism is also an algebra isomorphism, see Exercise 1.14.
Remark 1.23.
(1) To check condition (ii) of Definition 1.22, it suffices to take for a, b any two
elements in some fixed basis. Then it follows for arbitrary elements of A as
long as φ is K-linear.
(2) Note that the definition of an algebra homomorphism requires more than just
being a homomorphism of the underlying rings. Indeed, a ring homomorphism
between K-algebras is in general not a K-algebra homomorphism.
For instance, consider the complex numbers C as a C-algebra. Let
φ : C → C, φ(z) = z̄ be the complex conjugation map. By the usual rules
for complex conjugation φ satisfies axioms (ii) and (iii) from Definition 1.22,

that is, φ is a ring homomorphism. But φ does not satisfy axiom (i), since for
example

φ(i · i) = φ(i²) = φ(−1) = −1 ≠ 1 = i(−i) = iφ(i).

So φ is not a C-algebra homomorphism. However, if one considers C as an algebra over R, then complex conjugation is an R-algebra homomorphism. In
fact, for r ∈ R and z ∈ C we have

φ(rz) = rz = r̄ z̄ = r z̄ = rφ(z).

We list a few examples, some of which will occur frequently. For each of these,
we recommend checking that the axioms of Definition 1.22 are indeed satisfied.
Example 1.24. Let K be a field.
(1) Let Q be the one-loop quiver with one vertex v and one arrow α such that s(α) = v = t (α). As pointed out in Example 1.13, the path algebra KQ has a basis consisting of 1, α, α², . . .. The multiplication is given by α^i · α^j = α^(i+j) . From this we can see that the polynomial algebra K[X] and KQ are isomorphic, via the homomorphism defined by sending Σi λi X^i to Σi λi α^i . That is, we substitute α into the polynomial Σi λi X^i .
(2) Let A be a K-algebra. For every element a ∈ A we consider the ‘evaluation
map’
 
ϕa : K[X] → A ,   Σi λi X^i → Σi λi a^i .

This is a homomorphism of K-algebras.


(3) Let A be a K-algebra and I ⊂ A a proper two-sided ideal, with factor algebra
A/I . Then the canonical map π : A → A/I , a → a + I is a surjective
K-algebra homomorphism.
(4) Let A = A1 × . . . × An be a direct product of K-algebras. Then for each
i ∈ {1, . . . , n} the projection map

πi : A → Ai , (a1 , . . . , an ) → ai

is a surjective K-algebra homomorphism. However, note that the embeddings

ιi : Ai → A , ai → (0, . . . , 0, ai , 0, . . . , 0)

are not K-algebra homomorphisms when n ≥ 2 since the identity element 1Ai
is not mapped to the identity element 1A = (1A1 , . . . , 1An ).

(5) Let A = Tn (K) be the algebra of upper triangular matrices. Denote by B = K × . . . × K the direct product of n copies of K. Define φ : A → B by setting

    ⎛ a11       ∗  ⎞
φ   ⎜      ⋱       ⎟   =  (a11 , . . . , ann ).
    ⎝  0       ann ⎠

Then φ is a homomorphism of K-algebras.


(6) Consider the matrix algebra Mn (K) where n ≥ 1. Then the opposite algebra
Mn (K)op (as defined in Definition 1.6) is isomorphic to the algebra Mn (K). In
fact, consider the map given by transposition of matrices

τ : Mn (K) → Mn (K)op ,   m → m^t .

Clearly, this is a K-linear map, and it is bijective (since τ² is the identity).


Moreover, for all matrices m1 , m2 ∈ Mn (K) we have (m1 m2 )^t = m2^t m1^t by a
standard result from linear algebra, that is,

τ (m1 m2 ) = (m1 m2 )^t = m2^t m1^t = τ (m1 ) ∗ τ (m2 ).

Finally, τ maps the identity matrix to itself, hence τ is an algebra homomor-


phism.
(7) When writing linear transformations of a finite-dimensional vector space as
matrices with respect to a fixed basis, one basically proves that the algebra of
linear transformations is isomorphic to the algebra of square matrices. We recall
the proof, partly as a reminder, but also since we will later need a generalization.
Suppose V is an n-dimensional vector space over the field K. Then the
K-algebras EndK (V ) and Mn (K) are isomorphic.
In fact, we fix a K-basis of V . Suppose α is a linear transformation of V , let
M(α) be the matrix of α with respect to the fixed basis. Then define a map

ψ : EndK (V ) → Mn (K), ψ(α) := M(α).

From linear algebra it is known that ψ is K-linear and that it preserves the
multiplication, that is, M(β)M(α) = M(β ◦ α). The map ψ is also injective.
Suppose M(α) = 0, then by definition α maps the fixed basis to zero, but then
α = 0. The map ψ is surjective, because every n × n-matrix defines a linear
transformation of V .
(8) We consider the algebra T2 (K) of upper triangular 2 × 2-matrices. This
algebra is of dimension 3 and has a basis of matrix units E11 , E12 and E22 .
Their products can easily be computed (for instance using the formula in
Exercise 1.10), and they are collected in the multiplication table below. Let
us now compare the algebra T2 (K) with the path algebra KQ for the quiver
1 ←α− 2, which has appeared already in Example 1.13. The multiplication tables
for these two algebras are given as follows

·     E11   E12   E22             ·     e1    α     e2
E11   E11   E12   0               e1    e1    α     0
E12   0     0     E12             α     0     0     α
E22   0     0     E22             e2    0     0     e2

From this it easily follows that the assignment E11 → e1 , E12 → α, and
E22 → e2 defines a K-algebra isomorphism T2 (K) → KQ.
Remark 1.25. The last example can be generalized. For every n ∈ N the K-algebra
Tn (K) of upper triangular n × n-matrices is isomorphic to the path algebra of the
quiver

1 ←− 2 ←− . . . ←− n − 1 ←− n

See Exercise 1.18.


In general, homomorphisms and isomorphisms are very important when com-
paring different algebras. Exactly as for rings we have an isomorphism theorem for
algebras. Note that the kernel and the image of an algebra homomorphism are just
the kernel and the image of the underlying K-linear map.
Theorem 1.26 (Isomorphism Theorem for Algebras). Let K be a field and let
A and B be K-algebras. Suppose φ : A → B is a K-algebra homomorphism. Then
the kernel ker(φ) is a two-sided ideal of A, and the image im(φ) is a subalgebra of
B. Moreover:
(a) If I is an ideal of A and I ⊆ ker(φ) then we have a surjective algebra
homomorphism

φ̄ : A/I → im(φ), φ̄(a + I ) = φ(a).

(b) The map φ̄ is injective if and only if I = ker(φ). Hence φ induces an isomorphism

φ̄ : A/ ker(φ) → im(φ) ,   a + ker(φ) → φ(a).

Proof. From linear algebra we know that the kernel ker(φ) = {a ∈ A | φ(a) = 0} is a subspace of A, and ker(φ) ≠ A since φ(1A ) = 1B (see Definition 1.22). If a ∈ A
and x ∈ ker(φ) then we have

φ(ax) = φ(a)φ(x) = φ(a) · 0 = 0 = 0 · φ(a) = φ(x)φ(a) = φ(xa),

that is, ax ∈ ker(φ) and xa ∈ ker(φ) and ker(φ) is a two-sided ideal of A.



We check that im(φ) is a subalgebra of B (see Definition 1.14). Since φ is a K-linear map, the image im(φ) is a subspace. It is also closed under multiplication
and contains the identity element; in fact, we have

φ(a)φ(b) = φ(ab) ∈ im(φ) and 1B = φ(1A ) ∈ im(φ)

since φ is an algebra homomorphism.


(a) If a + I = a′ + I then a − a′ ∈ I and since I ⊆ ker(φ) we have

0 = φ(a − a′ ) = φ(a) − φ(a′ ).

Hence φ̄ is well defined, and its image is obviously equal to im(φ). It remains to
check that φ̄ is an algebra homomorphism. It is known from linear algebra (and
easy to check) that the map is K-linear. It takes the identity 1A + I to the identity
element of B since φ is an algebra homomorphism. To see that it preserves products,
let a, b ∈ A; then

φ̄((a + I )(b + I )) = φ̄(ab + I ) = φ(ab) = φ(a)φ(b) = φ̄(a + I )φ̄(b + I ).

(b) We see directly that ker(φ̄) = ker(φ)/I . The homomorphism φ̄ is injective if and only if its kernel is zero, that is, ker(φ) = I . The last part follows.

Example 1.27.
(1) Consider the following evaluation homomorphism of R-algebras (see Example 1.24),

Φ : R[X] → C ,   Φ(f ) = f (i)

where i² = −1. In order to apply the isomorphism theorem we have to determine the kernel of Φ. Clearly, the ideal (X² + 1) = R[X](X² + 1) generated by X² + 1 is contained in the kernel. On the other hand, if g ∈ ker(Φ),
then division with remainder in R[X] yields polynomials h and r such that
g = h(X² + 1) + r, where r is of degree ≤ 1. Evaluating at i gives r(i) = 0, but r has degree ≤ 1, so r is the zero polynomial, that is, g ∈ (X² + 1). Since Φ is surjective by definition, the isomorphism theorem gives

R[X]/(X² + 1) ≅ C

as R-algebras.
(2) Let G be a cyclic group of order n, generated by the element a ∈ G. For a field
K consider the group algebra KG; this is a K-algebra of dimension n. Similar
to the previous example we consider the surjective evaluation homomorphism

Φ : K[X] → KG ,   Φ(f ) = f (a).


From the isomorphism theorem we know that

K[X]/ ker(Φ) ≅ im(Φ) = KG.

The ideal generated by the polynomial X^n − 1 is contained in the kernel of Φ, since a^n is the identity element in G (and then also in KG). Thus, the algebra on the left-hand side has dimension at most n, see Example 1.21. On the other hand, KG has dimension n. From this we can conclude that ker(Φ) = (X^n − 1) and that as K-algebras we have

K[X]/(X^n − 1) ≅ KG.

(3) Let A be a finite-dimensional K-algebra and a ∈ A, and let Aa be the subalgebra of A as in Example 1.16. Let t ∈ N0 be such that {1, a, a², . . . , a^t } is linearly independent and a^(t+1) = λ0 + λ1 a + . . . + λt a^t for λi ∈ K. The polynomial

ma := X^(t+1) − (λ0 + λ1 X + . . . + λt X^t ) ∈ K[X]

is called the minimal polynomial of a. It is the (unique) monic polynomial of smallest degree such that the evaluation map (see Example 1.24) at a is zero. By the same arguments as in the first two examples we have

K[X]/(ma ) ≅ Aa .

(4) Suppose A = A1 × . . . × Ar , the direct product of K-algebras A1 , . . . , Ar . Then


for every i ∈ {1, . . . , r} the projection πi : A → Ai , (a1 , . . . , ar ) → ai , is
an algebra homomorphism, and it is surjective. By definition the kernel has the
form

ker(πi ) = A1 × . . . × Ai−1 × 0 × Ai+1 × . . . × Ar .

By the isomorphism theorem we have A/ ker(πi ) ≅ Ai . We also see from this that A1 × . . . × Ai−1 × 0 × Ai+1 × . . . × Ar is a two-sided ideal of A.
(5) We consider the following map on the upper triangular matrices

Φ : Tn (K) → Tn (K) ,
⎛ a11       ∗  ⎞      ⎛ a11       0  ⎞
⎜      ⋱       ⎟  →   ⎜      ⋱       ⎟ ,
⎝  0       ann ⎠      ⎝  0       ann ⎠

which sets all non-diagonal entries to 0. It is easily checked that Φ is a homomorphism of K-algebras. By definition, the kernel is the two-sided ideal

Un (K) := {a = (aij ) ∈ Mn (K) | aij = 0 for i ≥ j }


of strict upper triangular matrices and the image is the K-algebra Dn (K) of diagonal matrices. Hence the isomorphism theorem yields that

Tn (K)/Un (K) ≅ Dn (K)

as K-algebras. Note that, moreover, we have that Dn (K) ≅ K × . . . × K, the
n-fold direct product of copies of K.
(6) We want to give an alternative description of the R-algebra H of quaternions from Sect. 1.1.1. To this end, consider the map

Ψ : H → M4 (R) ,   a + bi + cj + dk  →
⎛  a   b   c   d ⎞
⎜ −b   a  −d   c ⎟
⎜ −c   d   a  −b ⎟ .
⎝ −d  −c   b   a ⎠

Using the formula from Example 1.8 for the product of two elements from H one can check that Ψ is an R-algebra homomorphism, see Exercise 1.11. Looking at the first row of the matrices in the image, it is immediate that Ψ is injective. Therefore, the algebra H is isomorphic to the subalgebra im(Ψ) of M4 (R). Since we know (from linear algebra) that matrix multiplication is associative and that the distributivity law holds in M4 (R), we can now conclude with no effort that the multiplication in H is associative and distributive.
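A quick numerical check of the homomorphism property is also possible. The following minimal sketch in Python (an ad hoc illustration; the names Psi and qmul are assumptions of the sketch) verifies Ψ(uv) = Ψ(u)Ψ(v) on sample quaternions:

    import numpy as np

    def Psi(a, b, c, d):             # the matrix from the display above
        return np.array([[ a,  b,  c,  d],
                         [-b,  a, -d,  c],
                         [-c,  d,  a, -b],
                         [-d, -c,  b,  a]], dtype=float)

    def qmul(p, q):                  # quaternion product, as in Example 1.8
        a1, b1, c1, d1 = p
        a2, b2, c2, d2 = q
        return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
                a1*b2 + b1*a2 + c1*d2 - d1*c2,
                a1*c2 - b1*d2 + c1*a2 + d1*b2,
                a1*d2 + b1*c2 - c1*b2 + d1*a2)

    u, v = (1, 2, 3, 4), (5, -1, 0, 2)
    print(np.allclose(Psi(*qmul(u, v)), Psi(*u) @ Psi(*v)))   # True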
Exercise 1.7. Explain briefly how examples (1) and (2) in Example 1.27 are special
cases of (3).

1.4 Some Algebras of Small Dimensions

One might like to know how many K-algebras there are of a given dimension, up
to isomorphism. In general there might be far too many different algebras, but for
small dimensions one can hope to get a complete overview. We fix a field K, and we
consider K-algebras of dimension at most 2. For these, there are some restrictions.
Proposition 1.28. Let K be a field.
(a) Every 1-dimensional K-algebra is isomorphic to K.
(b) Every 2-dimensional K-algebra is commutative.
Proof. (a) Let A be a 1-dimensional K-algebra. Then A must contain the scalar
multiples of the identity element, giving a subalgebra U := {λ1A | λ ∈ K} ⊆ A.
Then U = A, since A is 1-dimensional. Moreover, according to axiom (Alg) from
Definition 1.1 the product in U is given by (λ1A )(μ1A ) = (λμ)1A and hence the
map A → K, λ1A → λ, is an isomorphism of K-algebras.
(b) Let A be a 2-dimensional K-algebra. We can choose a basis which contains the identity element of A, say {1A , b} (use from linear algebra that every linearly
independent subset can be extended to a basis). The basis elements clearly commute;
but then also any linear combinations of basis elements commute, and therefore A
is commutative. □

We consider now algebras of dimension 2 over the real numbers R. The aim is to
classify these, up to isomorphism. The method will be to find suitable bases, leading
to ‘canonical’ representatives of the isomorphism classes. It will turn out that there
are precisely three R-algebras of dimension 2, see Proposition 1.29 below.
So we take a 2-dimensional R-algebra A, and we choose a basis of A containing the identity, say {1A , b}, as in the above proof of Proposition 1.28. Then b² must be a linear combination of the basis elements, so there are scalars γ , δ ∈ R such that b² = γ 1A + δb. We consider the polynomial X² − δX − γ ∈ R[X] and we complete squares,

X² − δX − γ = (X − δ/2)² − (γ + (δ/2)²).

Let b′ := b − (δ/2)1A , this is an element in the algebra A, and we also set ρ = γ + (δ/2)², which is a scalar from R. Then we have

(b′)² = b² − δb + (δ/2)² 1A = γ 1A + (δ/2)² 1A = ρ 1A .

Note that {1A , b′ } is still an R-vector space basis of A. Then we rescale this basis by setting

b̃ := |ρ|^(−1/2) b′   if ρ ≠ 0,   and   b̃ := b′   if ρ = 0.

Then the set {1A , b̃} also is an R-vector space basis of A, and now we have b̃² ∈ {0, ±1A }.
This leaves only three possible forms for the algebra A. We write Aj for the
algebra in which b̃2 = j 1Aj for j = 0, 1, −1. We want to show that no two of these
three algebras are isomorphic. For this we use Exercise 1.15.
(1) The algebra A0 has a non-zero element with square zero, namely b̃. By
Exercise 1.15, any algebra isomorphic to A0 must also have such an element.
(2) The algebra A1 does not have a non-zero element whose square is zero: Suppose
a 2 = 0 for a ∈ A1 and write a = λ1A1 + μb̃ with λ, μ ∈ R. Then, using that
b̃2 = 1A1 , we have

0 = a 2 = (λ2 + μ2 )1A1 + 2λμb̃.

Since 1A1 and b̃ are linearly independent, it follows that 2λμ = 0 and λ2 = −μ2 .
So λ = 0 or μ = 0, which immediately forces λ = μ = 0, and therefore a = 0, as
claimed.

This shows that the algebra A1 is not isomorphic to A0 , by Exercise 1.15.


(3) Consider the algebra A−1 . This occurs in nature, namely C is such an R-algebra,
taking b̃ = i. In fact, we can see directly that A−1 is a field; take an arbitrary non-
zero element α1A−1 + β b̃ ∈ A−1 with α, β ∈ R. Then we have

(α1A−1 + β b̃)(α1A−1 − β b̃) = (α 2 + β 2 )1A−1

and since α and β are not both zero, we can write down the inverse of the above
non-zero element with respect to multiplication.
Clearly A0 and A1 are not fields, since they have zero divisors, namely b̃ for A0 ,
and (b̃ − 1)(b̃ + 1) = 0 in A1 . So A−1 is not isomorphic to A0 or A1 , again by
Exercise 1.15.
We can list a ‘canonical representative’ for each of the three isomorphism classes
of algebras. For j ∈ {0, ±1} consider the R-algebra

R[X]/(X2 − j ) = span{1 + (X2 − j ), X̃ := X + (X2 − j )}

and observe that R[X]/(X2 − j ) ∼= Aj for j ∈ {0, ±1}, where an isomorphism
maps X̃ to b̃.
To summarize, we have proved the following classification of 2-dimensional R-
algebras.
Proposition 1.29. Up to isomorphism, there are precisely three 2-dimensional
algebras over R. Any 2-dimensional algebra over R is isomorphic to precisely one
of

R[X]/(X2 ), R[X]/(X2 − 1), R[X]/(X2 + 1).

Example 1.30. Any 2-dimensional R-algebra will be isomorphic to one of these.
For example, which one is isomorphic to the 2-dimensional R-algebra R × R? The
element (1, −1) ∈ R × R is not a scalar multiple of the identity and squares to the
identity element, so R × R is isomorphic to A1 ∼= R[X]/(X2 − 1).
There is an alternative explanation for those familiar with the Chinese Remainder
Theorem for polynomial rings; in fact, this yields

R[X]/(X2 − 1) ∼= R[X]/(X − 1) × R[X]/(X + 1)

and for any scalar λ, R[X]/(X − λ) ∼= R: apply the isomorphism theorem with the
evaluation map ϕλ : R[X] → R.
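
A minimal computational sketch of this (assuming the Python library sympy; the
function name phi is ours): the Chinese Remainder map g + (X2 − 1) ↦ (g(1), g(−1))
is multiplicative.

    import sympy as sp

    X = sp.symbols('X')

    def phi(g):
        # evaluate at the two roots 1 and -1 of X^2 - 1
        return (g.subs(X, 1), g.subs(X, -1))

    g, h = X + 2, 3*X - 1
    gh = sp.rem(sp.expand(g*h), X**2 - 1, X)   # multiply, then reduce mod (X^2 - 1)
    # phi(gh) equals the componentwise product in R x R:
    assert phi(gh) == (phi(g)[0]*phi(h)[0], phi(g)[1]*phi(h)[1])
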
Remark 1.31. One might ask what happens for different fields. For the complex
numbers we can adapt the above proof and see that there are precisely two
2-dimensional C-algebras, up to isomorphism, see Exercise 1.26. However, the
situation for the rational numbers is very different: there are infinitely many non-
isomorphic 2-dimensional algebras over Q, see Exercise 1.27.

Remark 1.32. In this book we focus on algebras over fields. One can also define
algebras where for K one takes a commutative ring, instead of a field. With this,
large parts of the constructions in this chapter work as well, but generally the
situation is more complicated. Therefore we will not discuss these here.
In addition, as we have mentioned, our algebras are associative algebras, that is,
for any a, b, c in the algebra we have

(ab)c = a(bc).

There are other kinds of algebras which occur in various contexts.


Sometimes one defines an ‘algebra’ to be a vector space V over a field, together
with a product (v, w) → vw : V × V → V which is bilinear. Then one can impose
various identities, and obtain various types of algebras. Perhaps the most common
algebras, apart from associative algebras, are the Lie algebras, which are algebras as
above where the product is written as a bracket [v, w] and where one requests that
for any v ∈ V , the product [v, v] = 0, and in addition that the ‘Jacobi identity’ must
hold, that is for any v, w, z ∈ V

[[v, w], z] + [[w, z], v] + [[z, v], w] = 0.

The properties of such Lie algebras are rather different; see the book by Erdmann
and Wildon in this series for a thorough treatment of Lie algebras at an undergrad-
uate level.1

EXERCISES

1.8. Assume A is a finite-dimensional algebra over K. If a, b ∈ A and ab = 1A
then also ba = 1A . That is, left inverses are automatically two-sided inverses.
(Hint: Observe that S(x) = ax and T (x) = bx for x ∈ A define linear maps
of A and that S ◦ T = idA . Apply a result from linear algebra.)
1.9. Let A = EndK (V ) be the algebra of K-linear maps of the infinite-
dimensional K-vector space V with basis {vi | i ∈ N}. Define elements
a, b in A on the basis, and extend to finite linear combinations, as follows:
Take b(vi ) = vi+1 for i ≥ 1, and

a(vi ) :=  vi−1   if i > 1,
           0      if i = 1.

Check that a ◦ b = 1A and show also that b ◦ a is not the identity of A.

1 K. Erdmann, M.J. Wildon, Introduction to Lie Algebras. Springer Undergraduate Mathematics

Series. Springer-Verlag London, Ltd., London, 2006. x+251 pp.



1.10. Let K be a field and let Eij ∈ Mn (K) be the matrix units as defined in
Example 1.3, that is, Eij has an entry 1 at position (i, j ) and all other entries
are 0. Show that for all i, j, k, ℓ ∈ {1, . . . , n} we have

Eij Ekℓ = δj k Eiℓ =  Eiℓ   if j = k,
                      0     if j ≠ k,

where δj k is the Kronecker symbol, that is, δj k = 1 if j = k and 0 otherwise.


1.11. Let H be the R-algebra of quaternions from Sect. 1.1.1.
(a) Compute the product of two arbitrary elements a1 + b1 i + c1 j + d1 k and
a2 +b2 i +c2 j +d2 k in H, that is, verify the formula given in Example 1.8.
(b) Verify that the following set of matrices
     ⎧⎛ a   b   c   d ⎞                  ⎫
     ⎪⎜−b   a  −d   c ⎟                  ⎪
A := ⎨⎜−c   d   a  −b ⎟ | a, b, c, d ∈ R ⎬ ⊆ M4 (R)
     ⎪⎝−d  −c   b   a ⎠                  ⎪
     ⎩                                   ⎭

forms a subalgebra of the R-algebra M4 (R).


(c) Prove that the algebra A from part (b) is isomorphic to the algebra H of
quaternions, that is, fill in the details in Example 1.27.
1.12. Let H be the R-algebra of quaternions. By using the formula in Example 1.8,
determine all roots in H of the polynomial X2 + 1. In particular, verify that
there are infinitely many such roots.
1.13. Let K be a field. Which of the following subsets of M3 (K) are K-
subalgebras? Each asterisk indicates an arbitrary entry from K, not neces-
sarily the same.
⎛ ⎞ ⎛ ⎞ ⎛ ⎞
∗∗0 ∗∗∗ ∗0∗
⎝0 ∗ ∗⎠ , ⎝0 ∗ 0 ⎠ , ⎝0 ∗ 0 ⎠ ,
00∗ 00∗ 00∗
⎛ ⎞ ⎛ ⎞ ⎛ ⎞
∗∗∗ ∗∗∗ ∗0∗
⎝0 ∗ 0⎠ , ⎝0 ∗ ∗⎠ , ⎝0 ∗ 0⎠ .
0∗∗ 000 ∗0∗

1.14. Assume A and B are K-algebras, and φ : A → B is a K-algebra isomor-
phism. Show that then the inverse φ −1 of φ is a K-algebra isomorphism from
B to A.
1.15. Suppose φ : A → B is an isomorphism of K-algebras. Show that the
following holds.
(i) If a ∈ A then a 2 = 0 if and only if φ(a)2 = 0.
(ii) a ∈ A is a zero divisor if and only if φ(a) ∈ B is a zero divisor.

(iii) A is commutative if and only if B is commutative.


(iv) A is a field if and only if B is a field.
1.16. For a field K we consider the following 3-dimensional subspaces of M3 (K):
     ⎧⎛x 0 0⎞              ⎫        ⎧⎛x y z⎞              ⎫
A1 = ⎨⎜0 y 0⎟ | x, y, z ∈ K⎬ ; A2 = ⎨⎜0 x 0⎟ | x, y, z ∈ K⎬ ;
     ⎩⎝0 0 z⎠              ⎭        ⎩⎝0 0 x⎠              ⎭

     ⎧⎛x y z⎞              ⎫        ⎧⎛x y 0⎞              ⎫
A3 = ⎨⎜0 x y⎟ | x, y, z ∈ K⎬ ; A4 = ⎨⎜0 z 0⎟ | x, y, z ∈ K⎬ ;
     ⎩⎝0 0 x⎠              ⎭        ⎩⎝0 0 x⎠              ⎭

     ⎧⎛x 0 y⎞              ⎫
A5 = ⎨⎜0 x z⎟ | x, y, z ∈ K⎬ .
     ⎩⎝0 0 x⎠              ⎭

(i) For each i, verify that Ai is a K-subalgebra of M3 (K). Is Ai commutative?
(ii) For each i, determine the set Āi := {α ∈ Ai | α 2 = 0}.
(iii) For each pair of algebras above, determine whether they are isomorphic,
or not. Hint: Apply Exercise 1.15.
1.17. Find all quivers whose path algebras have dimension at most 5 (up to an
obvious notion of isomorphism). For this, note that quivers don’t have to be
connected and that they are allowed to have multiple arrows.
1.18. Show that for every n ∈ N the K-algebra Tn (K) of upper triangular n × n-
matrices is isomorphic to the path algebra of the quiver

1 ←− 2 ←− . . . ←− n − 1 ←− n

1.19. Let K be a field.


(a) Consider the following sets of matrices in M3 (K) (where each asterisk
indicates an arbitrary entry from K, not necessarily the same)
     ⎛∗ ∗ 0⎞        ⎛∗ 0 0⎞
A := ⎜0 ∗ 0⎟ , B := ⎜∗ ∗ 0⎟ .
     ⎝0 ∗ ∗⎠        ⎝∗ 0 ∗⎠

Show that A and B are subalgebras of M3 (K).


If possible, for each of the algebras A and B find an isomorphism to
the path algebra of a quiver.

(b) Find subalgebras of M3 (K) which are isomorphic to the path algebra of
the quiver 1 −→ 2 ←− 3.
1.20. Let K be a field. Consider the following set of upper triangular matrices
     ⎧⎛a 0 c⎞                 ⎫
A := ⎨⎜0 a d⎟ | a, b, c, d ∈ K⎬ ⊆ T3 (K).
     ⎩⎝0 0 b⎠                 ⎭

(a) Show that A is a subalgebra of T3 (K).


(b) Show that A is isomorphic to the path algebra of the Kronecker quiver,
as defined in Example 1.13.
1.21. For a field K let A be the three-subspace algebra as in Example 1.16, that is,
    ⎧⎛a1  b1  b2  b3 ⎞              ⎫
    ⎪⎜ 0  a2   0   0 ⎟              ⎪
A = ⎨⎜ 0   0  a3   0 ⎟ | ai , bj ∈ K⎬ .
    ⎪⎝ 0   0   0  a4 ⎠              ⎪
    ⎩                               ⎭

(a) The three-subspace quiver is the quiver in Example 1.13 where all arrows
point towards the branch vertex. Show that the path algebra of the three-
subspace quiver is isomorphic to the three-subspace algebra. (Hint: It
might be convenient to label the branch vertex as vertex 1.)
(b) Determine the opposite algebra Aop . Is A isomorphic to Aop ? Is it
isomorphic to the path algebra of some quiver?
1.22. This exercise gives a criterion by which one can sometimes deduce that a
certain algebra cannot be isomorphic to the path algebra of a quiver.
(a) An element e in a K-algebra A is called an idempotent if e2 = e. Note
that 0 and 1A are always idempotent elements. Show that if φ : A → B
is an isomorphism of K-algebras and e ∈ A is an idempotent, then
φ(e) ∈ B is also an idempotent.
(b) Suppose that A is a K-algebra of dimension > 1 which has only 0 and 1A
as idempotents. Then A is not isomorphic to the path algebra of a quiver.
(Hint: consider the trivial paths in the quiver.)
(c) Show that every division algebra A has no idempotents other than 0 and
1A ; deduce that if A has dimension > 1 then A cannot be isomorphic to
the path algebra of a quiver. In particular, this applies to the R-algebra H
of quaternions.
(d) Show that the factor algebra K[X]/(X2 ) is not isomorphic to the path
algebra of a quiver.

1.23. Let K be a field and A a K-algebra. Recall from Definition 1.6 that the
opposite algebra Aop has the same underlying vector space as A, but a new
multiplication

∗ : Aop × Aop → Aop , a ∗ b = ba

where on the right-hand side the product is given by the multiplication
in A.
(a) Show that Aop is again a K-algebra.
(b) Let H be the R-algebra of quaternions (see Example 1.8). Show that the
map

ϕ : H → Hop , a + bi + cj + dk → a − bi − cj − dk

is an R-algebra isomorphism.
(c) Let G be a group and KG the group algebra. Show that the K-algebras
KG and (KG)op are isomorphic.
(d) Let Q be a quiver and KQ its path algebra. Show that the opposite
algebra (KQ)op is isomorphic to the path algebra KQ̄, where Q̄ is the
quiver obtained from Q by reversing all arrows.
1.24. Consider the following 2-dimensional R-subalgebras of M2 (R) and deter-
mine to which algebra of Proposition 1.29 they are isomorphic:
(i) D2 (R), the diagonal matrices;

(ii) A := ⎧⎛a b⎞ | a, b ∈ R⎫ ;
          ⎩⎝0 a⎠           ⎭

(iii) B := ⎧⎛ a  b⎞ | a, b ∈ R⎫ .
           ⎩⎝−b  a⎠           ⎭
1.25. Let K be any field, and let A be a 2-dimensional K-algebra with basis {1A , a}.
Hence A = Aa as in Example 1.16. Let a 2 = γ 1A + δa, where γ , δ ∈ K. As
in Example 1.27, the element a has minimal polynomial ma := X2 − δX − γ ,
and

A = Aa ∼= K[X]/(ma ).

(a) By applying the Chinese Remainder Theorem, show that if ma has two
distinct roots in K, then A is isomorphic to the direct product K × K of
K-algebras.
(b) Show also that if ma = (X − λ)2 for λ ∈ K, then A is isomorphic to the
algebra K[T ]/(T 2 ).
(c) Show that if ma is irreducible in K[X], then A is a field, containing K as
a subfield.

(d) Explain briefly why the algebras in (a) and (b) are not isomorphic, and
also not isomorphic to any of the algebras in (c).
1.26. Show that there are precisely two 2-dimensional algebras over C, up to
isomorphism.
1.27. Consider 2-dimensional algebras over Q. Show that the algebras
Q[X]/(X2 − p) and Q[X]/(X2 − q) are not isomorphic if p and q are
distinct prime numbers.
1.28. Let K = Z2 , the field with two elements.
(a) Let B be the set of matrices
  
B = ⎧⎛a    b  ⎞ | a, b ∈ Z2 ⎫ .
    ⎩⎝b  a + b⎠             ⎭

Show that B is a subalgebra of M2 (Z2 ). Moreover, show that B is a field
with four elements.
(b) Find all 2-dimensional Z2 -algebras, up to isomorphism. (Hint: there are
three different algebras.)
1.29. This exercise shows that every finite-dimensional algebra is isomorphic to a
subalgebra of some matrix algebra. Assume A is an n-dimensional algebra
over a field K.
(a) For a ∈ A, define the map la : A → A, la (x) = ax.
(i) Show that la is K-linear, hence is an element of EndK (A).
(ii) Show that lλa+μb = λla + μlb for λ, μ ∈ K and a, b ∈ A.
(iii) Show that lab = la ◦ lb for a, b ∈ A.
(b) By part (a), the map ψ : A → EndK (A), where ψ(a) = la , is an algebra
homomorphism. Show that it is injective.
(c) Fix a K-basis of A and write each la as a matrix with respect to such a
fixed basis. Deduce that A is isomorphic to a subalgebra of Mn (K).
(d) Let Q be the quiver 1 ←− 2 with a single arrow α, so that the path algebra
KQ is 3-dimensional. Identify KQ with a subalgebra of M3 (K).
Chapter 2
Modules and Representations

Representation theory studies how algebras can act on vector spaces. The
fundamental notion is that of a module, or equivalently (as we shall see), that
of a representation. Perhaps the most elementary way to think of modules is to view
them as generalizations of vector spaces, where the role of scalars is played by
elements in an algebra, or more generally, in a ring.

2.1 Definition and Examples

A vector space over a field K is an abelian group V together with a scalar
multiplication K × V → V , satisfying the usual axioms. If one replaces the field K
by a ring R, then one gets the notion of an R-module. Although we mainly deal with
algebras over fields in this book, we slightly broaden the perspective in this chapter
by defining modules over rings. We always assume that rings contain an identity
element.
Definition 2.1. Let R be a ring with identity element 1R . A left R-module (or just
R-module) is an abelian group (M, +) together with a map

R × M → M, (r, m) → r · m

such that for all r, s ∈ R and all m, n ∈ M we have


(i) (r + s) · m = r · m + s · m;
(ii) r · (m + n) = r · m + r · n;
(iii) r · (s · m) = (rs) · m;
(iv) 1R · m = m.


Exercise 2.1. Let R be a ring (with zero element 0R and identity element 1R ) and
M an R-module with zero element 0M . Show that the following holds for all r ∈ R
and m ∈ M:
(i) 0R · m = 0M ;
(ii) r · 0M = 0M ;
(iii) −(r · m) = (−r) · m = r · (−m), in particular −m = (−1R ) · m.
Remark 2.2. Completely analogous to Definition 2.1 one can define right
R-modules, using a map M × R → M, (m, r) → m · r. When the ring R
is not commutative the behaviour of left modules and of right modules can be
different; for an illustration see Exercise 2.22. We will consider only left modules,
since we are mostly interested in the case when the ring is a K-algebra, and scalars
are usually written to the left.
Before dealing with elementary properties of modules we consider a few
examples.
Example 2.3.
(1) When R = K is a field, then R-modules are exactly the same as K-vector
spaces. Thus, modules are a true generalization of the concept of a vector space.
(2) Let R = Z, the ring of integers. Then every abelian group can be viewed as
a Z-module: If n ≥ 1 then n · a is set to be the sum of n copies of a, and
(−n) · a := −(n · a), and 0Z · a = 0. With this, conditions (i) to (iv) in
Definition 2.1 hold in any abelian group.
(3) Let R be a ring (with 1). Then every left ideal I of R is an R-module, with
R-action given by ring multiplication. First, as a left ideal, (I, +) is an abelian
group. The properties (i)–(iv) hold even for arbitrary elements in R.
(4) A very important special case of (3) is that every ring R is an R-module, with
action given by ring multiplication.
(5) Suppose M1 , . . . , Mn are R-modules. Then the cartesian product

M1 × . . . × Mn := {(m1 , . . . , mn ) | mi ∈ Mi }

is an R-module if one defines the addition and the action of R componentwise,
so that

r · (m1 , . . . , mn ) := (rm1 , . . . , rmn ) for r ∈ R and mi ∈ Mi .

The module axioms follow immediately from the fact that they hold in
M1 , . . . , Mn .
We will almost always study modules when the ring is a K-algebra A. In this
case, there is a range of important types of A-modules, which we will now introduce.

Example 2.4. Let K be a field.


(1) If A is a subalgebra of the algebra of n × n-matrices Mn (K), or a subalgebra of
the algebra EndK (V ) of K-linear maps on a vector space V (see Example 1.3),
then A has a natural module, which we will now describe.
(i) Let A be a subalgebra of Mn (K), and let V = K n , the space of column
vectors, that is, of n × 1-matrices. By properties of matrix multiplication,
multiplying an n × n-matrix by an n × 1-matrix gives an n × 1-matrix,
and this satisfies axioms (i) to (iv). Hence V is an A-module, the natural
A-module. Here A could be all of Mn (K), or the algebra of upper triangular
n × n-matrices, or any other subalgebra of Mn (K).
(ii) Let V be a vector space over the field K. Assume that A is a subalgebra of
the algebra EndK (V ) of all K-linear maps on V (see Example 1.3). Then
V becomes an A-module, where the action of A is just applying the linear
maps to the vectors, that is, we set

A × V → V , (ϕ, v) → ϕ · v := ϕ(v).

To check the axioms, let ϕ, ψ ∈ A and v, w ∈ V , then we have

(ϕ + ψ) · v = (ϕ + ψ)(v) = ϕ(v) + ψ(v) = ϕ · v + ψ · v

by the definition of the sum of two maps, and similarly

ϕ · (v + w) = ϕ(v + w) = ϕ(v) + ϕ(w) = ϕ · v + ϕ · w

since ϕ is K-linear. Moreover,

ϕ · (ψ · v) = ϕ(ψ(v)) = (ϕψ) · v

since the multiplication in EndK (V ) is given by composition of maps, and
clearly we have 1A · v = idV (v) = v.
(2) Let A = KQ be the path algebra of a quiver Q. For a fixed vertex i, let M be
the span of all paths in Q starting at i. Then M = Aei , which is a left ideal of
A and hence is an A-module (see Example 2.3).
(3) Let A = KG be the group algebra of a group G. The trivial KG-module has
underlying vector space K, and the action of A is defined by

g · x = x for all g ∈ G and x ∈ K

and extended linearly to the entire group algebra KG. The module axioms are
trivially satisfied.

(4) Let B be an algebra and A a subalgebra of B. Then every B-module M can
be viewed as an A-module with respect to the given action. The axioms are
then satisfied since they even hold for elements in the larger algebra B. We
have already used this, when describing the natural module for subalgebras of
Mn (K), or of EndK (V ).
(5) Let A, B be K-algebras and suppose ϕ : A → B is a K-algebra homomor-
phism. If M is a B-module, then M also becomes an A-module by setting

A × M → M , (a, m) → a · m := ϕ(a)m

where on the right we use the given B-module structure on M. It is straightfor-
ward to check the module axioms.
Exercise 2.2. Explain briefly why example (4) is a special case of example (5).
We will almost always focus on the case when the ring is an algebra over a field
K. However, for some of the general properties it is convenient to see these for rings.
In this chapter we will write R and M if we are working with an R-module for a
general ring, and we will write A and V if we are working with an A-module where
A is a K-algebra.
Assume A is a K-algebra, then we have the following important observation,
namely all A-modules are automatically vector spaces.
Lemma 2.5. Let K be a field and A a K-algebra. Then every A-module V is a
K-vector space.
Proof. Recall from Remark 1.2 that we view K as a subset of A, by identifying
λ ∈ K with λ1A ∈ A. Restricting the action of A on V gives us a map K × V → V .
The module axioms (i)–(iv) from Definition 2.1 are then just the K-vector space
axioms for V . 

Remark 2.6. Let A be a K-algebra. To simplify the construction of A-modules, or to
check the axioms, it is usually enough to deal with elements of a fixed K-basis of A,
recall Remark 1.4. Sometimes one can simplify further. For example if A = K[X],
it has basis Xr for r ≥ 0. Because of axiom (iii) in Definition 2.1 it suffices to define
and to check the action of X as this already determines the action of arbitrary basis
elements.
Similarly, since an A-module V is always a K-vector space, it is often convenient
(and enough) to define actions on a K-basis of V and also check axioms using a
basis.

2.2 Modules for Polynomial Algebras

In this section we will completely describe the modules for algebras of the form
K[X]/(f ) where f ∈ K[X] is a polynomial. We first recall the situation for the
case f = 0, that is, modules for the polynomial algebra K[X].
Definition 2.7. Let K be a field and V a K-vector space. For any K-linear map
α : V → V we can use this and turn V into a K[X]-module by setting
 
g · v := g(α)(v) = ∑i λi α i (v)    (for g = ∑i λi Xi ∈ K[X] and v ∈ V ).

Here α i = α◦. . .◦α is the i-fold composition of maps. We denote this K[X]-module
by Vα .
Checking the module axioms (i)–(iv) from Definition 2.1 is straightforward. For
example, consider condition (iii),

g · (h · v) = g(α)(h · v) = g(α)(h(α)(v)) = ((gh)(α))(v) = (gh) · v.

Verifying the other axioms is similar and is left as an exercise. Note that, to define
a K[X]-module structure on a vector space, one only has to specify the action of X,
see Remark 2.6.
Example 2.8. Let K = R, and take V = R2 , the space of column vectors. Let
α : V → V be the linear map with matrix
 
⎛0 1⎞
⎝0 0⎠

with respect to the standard basis of unit vectors of R2 . According to Definition 2.7,
V becomes a module for R[X] by setting X · v := α(v). Here α 2 = 0, so if
g = ∑i λi Xi ∈ R[X] is a general polynomial then

g · v = g(α)(v) = ∑i λi α i (v) = λ0 v + λ1 α(v).
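
A minimal numerical sketch of this action, assuming numpy (the helper act is
ours and simply spells out g · v = ∑i λi α i (v)):

    import numpy as np

    alpha = np.array([[0., 1.],
                      [0., 0.]])   # the matrix of alpha from the example

    def act(coeffs, v):
        # g . v for g = sum_i coeffs[i] * X^i
        out, power = np.zeros_like(v), np.eye(2)
        for lam in coeffs:
            out = out + lam * (power @ v)
            power = alpha @ power
        return out

    v = np.array([2., 5.])
    # since alpha^2 = 0, only lambda_0 and lambda_1 contribute:
    print(act([7., 3., 4., 9.], v))   # [29. 35.], i.e. 7*v + 3*alpha(v)
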

The definition of Vα is more than just an example. We will now show that every
K[X]-module is equal to Vα for some K-linear map α.
Proposition 2.9. Let K be a field and let V be a K[X]-module. Then V = Vα ,
where α : V → V is the K-linear map given by α(v) := X · v for v ∈ V .

Proof. We first want to show that the map α defined in the statement is K-linear. In
fact, for every λ, μ ∈ K and v, w ∈ V we have

α(λv + μw) = X · (λv + μw) = X · (λv) + X · (μw)
           = X · (λ1K[X] · v) + X · (μ1K[X] · w)
= (Xλ1K[X] ) · v + (Xμ1K[X] ) · w
= (λ1K[X] X) · v + (μ1K[X] X) · w
= λ(X · v) + μ(X · w) = λα(v) + μα(w).

To prove that V = Vα it remains to check that for any polynomial g ∈ K[X] we
have g · v = g(α)(v). In fact, by induction on r one sees that Xr · v = α r (v) for
each r, and then one uses linearity to prove the claim. 

The following result relates K[X]-modules with modules for factor algebras
K[X]/I . This is important since for I = 0, these factor algebras are finite-
dimensional, and many finite-dimensional algebras occurring ‘in nature’ are of this
form. Therefore, this result will be applied many times throughout the book. Recall
from basic algebra that K[X] is a principal ideal domain, so that every ideal of
K[X] can be generated by one element. Consider now an ideal of K[X] of the form
(f ) = K[X]f for some polynomial f ∈ K[X]. We assume that f has positive
degree, to ensure that K[X]/(f ) is an algebra (that is, contains a non-zero identity
element).
Theorem 2.10. Let K be a field and f ∈ K[X] a non-constant polynomial, and let
A be the factor algebra A = K[X]/(f ). Then there is a bijection between the set of
A-modules and the set of those K[X]-modules Vα which satisfy f (α) = 0.
Proof. We set I = (f ). Suppose first that V is an A-module. Then we can view V
as a module for K[X] by setting

g · v = (g + I ) · v for all v ∈ V and g ∈ K[X].

(Note that this is a special case of Example 2.4, for the algebra homomorphism
K[X] → A, g → g + I .) Then as a K[X]-module, V = Vα , where α is the linear
map v → X · v, by Proposition 2.9. It follows that for every v ∈ V we have

f (α)(v) = f · v = (f + I ) · v = 0 · v = 0

since f ∈ I , that is, f (α) = 0, as desired.


Conversely, consider a K[X]-module Vα where f (α) = 0. Then we view Vα as
a module for A, by setting

(g + I ) · v := g(α)(v) for all g ∈ K[X] and v ∈ V .



This is well-defined, that is, independent of the representatives of the cosets: if
g + I = h + I then g − h ∈ I = (f ) and thus g − h = pf for some polynomial
p ∈ K[X]. Therefore (g − h)(α) = (pf )(α) = p(α)f (α) = 0 by assumption, and
hence g(α) = h(α). Finally, one checks that the above action satisfies the module
axioms from Definition 2.1.
Since we only changed the point of view, and did not do anything to the vector
space and the action as such, the two constructions above are inverse to each other
and hence give bijections as claimed in the theorem. 

Example 2.11. Let K be a field.
(1) Consider the K-algebras K[X]/(Xn ), where n ∈ N. By Theorem 2.10, the
K[X]/(Xn )-modules are in bijection with those K[X]-modules Vα such that
α n = 0. In particular, the 1-dimensional K[X]/(Xn )-modules correspond to
scalars α ∈ K such that α n = 0. But for every field K this has only one
solution α = 0, that is, there is precisely one 1-dimensional K[X]/(Xn )-
module V0 = span{v} with zero action X · v = 0. Note that this is independent
of the field K.
(2) Let A = K[X]/(Xn − 1), where n ≥ 1. By Theorem 2.10, we have that
A-modules are in bijection with the K[X]-modules Vα such that α n = idV .
In particular, a 1-dimensional A-module is given by the K[X]-modules Vα ,
where V = span{v} ∼= K is a 1-dimensional vector space and α : K → K
satisfies α n = idK . That is, α ∈ K is an n-th root of unity. Hence the number of
1-dimensional A-modules depends on how many n-th roots of unity K contains.
When K = C, we have n possibilities, namely α = (e2πi/n )j for
j = 0, 1, . . . , n − 1.
Assume K = R, then we only get one or two 1-dimensional A-modules. If n
is even then α = 1 or −1, and if n is odd we only have α = 1.
(3) In general, the K-algebra K[X]/(f ) has a 1-dimensional module if and only if
the polynomial f has a root in K. So, for example, R[X]/(X2 + 1) does not
have a 1-dimensional module.
(4) Let Cn be a cyclic group of order n. We have seen in Example 1.27 that the
group algebra KCn is isomorphic to the factor algebra K[X]/(Xn − 1). So we
get information on KCn from example (2).
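
The dependence on the field can be seen concretely with a small computation,
for instance in Python with sympy (our choice of tool); it lists the n-th roots of
unity in C and picks out the real ones:

    import sympy as sp

    a = sp.symbols('a')
    for n in (5, 6):
        roots = sp.solve(a**n - 1, a)            # the n-th roots of unity in C
        real = [r for r in roots if r.is_real]   # those that lie in R
        print(n, len(roots), real)
    # over C there are always n one-dimensional modules; over R only
    # alpha = 1 survives for odd n, and alpha = 1 or -1 for even n
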

2.3 Submodules and Factor Modules

In analogy to the vector space constructions, we expect there to be ‘submodules’
and ‘factor modules’ of R-modules. A submodule of an R-module M will be a
subset U of M which itself is a module with the operations inherited from M. If so,
then one can turn the factor group M/U into an R-module.

Definition 2.12. Let R be a ring and let M be some R-module. An R-submodule U
of M is a subgroup (U, +) which is closed under the action of R, that is, r · u ∈ U
for all r ∈ R and u ∈ U .
Example 2.13.
(1) Let R = K be a field. Then K-modules are just K-vector spaces, and
K-submodules are just K-subspaces.
(2) Let R = Z be the ring of integers. Then Z-modules are nothing but abelian
groups, and Z-submodules are just (abelian) subgroups.
(3) Let R be a ring, considered as an R-module, with action given by ring
multiplication. Then the R-submodules of R are precisely the left ideals of R.
(4) Every R-module M has the trivial submodules {0} and M. We often just write
0 for the trivial submodule consisting only of the zero element.
(5) Let R be a ring and M an R-module. For every m ∈ M the subset

Rm := {r · m | r ∈ R}

is an R-submodule of M, the submodule generated by m.


(6) Assume M is an R-module, and U, V are R-submodules of M. Then the inter-
section U ∩ V is a submodule of M. Furthermore, let
U + V := {u + v | u ∈ U, v ∈ V }. Then U + V is a submodule of M.
Exercise 2.3. Prove the statements in Example 2.13 (6).
For modules of K-algebras, we have some specific examples of submodules.
Example 2.14. Let K be a field.
(1) Assume A = K[X]. We consider submodules of an A-module Vα . Recall V
is a vector space, and α is a linear transformation of V , and the action of A is
given by

g · v = g(α)(v) (v ∈ V , g ∈ K[X]).

Then an A-submodule of Vα is the same as a subspace U of V such that
α(U ) ⊆ U . For example, the kernel U = ker(α), and the image U = im(α),
are A-submodules of Vα .
(2) Consider the matrix algebra Mn (K), and let K n be the natural A-module, see
Example 2.4. We claim that the only Mn (K)-submodules of K n are the trivial
submodules 0 and K n . In fact, let U ⊆ K n be a non-zero Mn (K)-submodule,
then there is some u = (u1 , . . . , un )t ∈ U \ {0}, say uj ≠ 0 for some j . For
every i ∈ {1, . . . , n} consider the matrix unit Eij ∈ Mn (K) with entry 1 at
position (i, j ) and 0 otherwise. Then (uj )−1 Eij · u is the ith unit vector and it is
in U since U is an Mn (K)-submodule. So U contains all unit vectors and hence
U = K n , as claimed.

(3) Let A = Tn (K), the algebra of upper triangular matrices. We can also consider
K n as an A-module. Then K n has non-trivial submodules, for example there is a
1-dimensional submodule, spanned by (1, 0, . . . , 0)t . Exercise 2.14 determines
all Tn (K)-submodules of K n .
(4) Let Q be a quiver and A = KQ, the path algebra of Q. For any r ≥ 1, let
A≥r be the subspace of A spanned by paths of length ≥ r. Then A≥r is a
submodule of the A-module A. We have seen Aei for i a vertex of Q; this is also
a submodule of A. Then we also have the submodule (Aei )≥r := Aei ∩ A≥r ,
by Example 2.13.
(5) Consider the 2-dimensional R-algebra A0 = span{1A0 , b̃} with b̃2 = 0 as
in Sect. 1.4, as an A-module. The 1-dimensional subspace spanned by b̃ is
an A0 -submodule of A0 . Alternatively, A0 ∼= R[X]/(X2 ) and the subspace
span{X + (X2 )} is an R[X]/(X2 )-submodule of R[X]/(X2 ).
On the other hand, consider the algebra A1 = span{1A1 , b̃} with b̃2 = 1A1 in
the same section, then the subspace spanned by b̃ is not a submodule. But the
space spanned by b̃ − 1A1 is a submodule. Alternatively, A1 ∼= R[X]/(X2 − 1);
here the subspace U1 := span{X + (X2 − 1)} is not a submodule since

X · (X + (X2 − 1)) = 1 + (X2 − 1) ∉ U1 ,

but the subspace U2 := span{X − 1 + (X2 − 1)} is a submodule since

X · (X − 1 + (X2 − 1)) = 1 − X + (X2 − 1) ∈ U2 .
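
The two reductions modulo (X2 − 1) above can also be checked mechanically,
for instance with sympy (a sketch; the helper red is ours):

    import sympy as sp

    X = sp.symbols('X')

    def red(g):
        # reduce a polynomial modulo the ideal (X^2 - 1)
        return sp.rem(sp.expand(g), X**2 - 1, X)

    print(red(X * X))        # 1     : not a scalar multiple of X, so U1 is no submodule
    print(red(X * (X - 1)))  # 1 - X : equals -(X - 1), so U2 is a submodule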

Exercise 2.4. Let A = R[X]/(X2 + 1) (which is the algebra A−1 in Sect. 1.4). Why
does A as an A-module not have any submodules except 0 and A?
Sometimes a module can be broken up into ‘smaller pieces’; the fundamental
notion for such phenomena is that of the direct sum of submodules. Very often we
will only need finite direct sums, but when dealing with semisimple modules we
will also need arbitrary direct sums. For clarity, we will give the definition in both
situations separately.
Definition 2.15. Let R be a ring and let M be an R-module.
(a) Let U1 , U2 , . . . , Ut be R-submodules of M. We say that M is the direct sum of
U1 , . . . , Ut , denoted M = U1 ⊕ U2 ⊕ . . . ⊕ Ut , if the following two conditions
are satisfied:
(i) M = U1 + U2 + . . . + Ut , that is, every element of M can be expressed as
a sum of elements from the submodules Ui .
(ii) For every j with 1 ≤ j ≤ t we have Uj ∩ ∑i≠j Ui = 0.

(b) We say that M is the direct sum of a family (Ui )i∈I of R-submodules (where I
is an arbitrary index set), denoted M = ⊕i∈I Ui , if the following two conditions
are satisfied:

(i) M = ∑i∈I Ui , that is, every element of M can be expressed as a finite sum
of elements from the submodules Ui .
(ii) For every j ∈ I we have Uj ∩ ∑i≠j Ui = 0.
Remark 2.16.
(1) When the ring is a K-algebra A, conditions (i) and (ii) in the above definition
state that M is, as a vector space, the direct sum of the subspaces U1 , . . . , Ut (or
Ui where i ∈ I ). One should keep in mind that each Ui must be an A-submodule
of M.
(2) Note that for R = K a field, every K-module (that is, K-vector space)
can be decomposed as a direct sum of 1-dimensional K-submodules (that is,
K-subspaces); this is just another formulation of the fact that every K-vector
space has a basis.
Given an algebra A, expressing A as an A-module as a direct sum of submodules
has numerous applications, we will see this later. The following exercises are
examples.
Exercise 2.5. Let A = Mn (K), the algebra of n × n-matrices over a field K,
considered as an A-module. For any i ∈ {1, . . . , n} we define Ci ⊆ A to be the set
of matrices which are zero outside the i-th column. Show that Ci is an A-submodule
of A, and that

A = C1 ⊕ C2 ⊕ . . . ⊕ Cn

is a direct sum of these submodules.


Exercise 2.6. Let A = KQ, the path algebra of some quiver. Suppose the vertices of
Q are 1, 2, . . . , n. Recall that the trivial paths ei ∈ KQ are orthogonal idempotents
(that is, ei2 = ei and ei ej = 0 for i ≠ j ), and that e1 + e2 + . . . + en = 1A . Show
that as an A-module, A = Ae1 ⊕ Ae2 ⊕ . . . ⊕ Aen , the direct sum of A-submodules.
Recall that we have seen direct products of finitely many modules for a ring (in
Example 2.3). These products are necessary to construct a new module from given
modules, which may not be related in any way. We will later need arbitrary direct
products, and also some important submodules therein.
Definition 2.17. Let R be a ring and let (Mi )i∈I be a family of R-modules, where
I is some index set.
(a) The cartesian product

∏i∈I Mi = {(mi )i∈I | mi ∈ Mi for all i ∈ I }

becomes an R-module with componentwise addition and R-action. This
R-module is called the direct product of the family (Mi )i∈I of R-modules.

(b) The subset



⊕i∈I Mi = {(mi )i∈I | mi ∈ Mi for all i ∈ I , only finitely many mi non-zero}

is an R-submodule of the direct product ∏i∈I Mi , called the direct sum of the
family (Mi )i∈I of R-modules.
Note that we now have two notions of a direct sum. In Definition 2.15 it is
assumed that in M = ⊕i∈I Ui the Ui are submodules of the given R-module M,
whereas in Definition 2.17 the modules Mi in ⊕i∈I Mi are not related and a priori
are not contained in some given module. (Some books distinguish between these
two constructions, calling them an ‘internal’ and ‘external’ direct sum.)
We now come to the important construction of factor modules. Suppose U is a
submodule of an R-module M, then one knows from basic algebra that the cosets

M/U := {m + U | m ∈ M}

form an abelian group, with addition (m + U ) + (n + U ) = m + n + U . In the case
when M is an R-module and U is an R-submodule, this is actually an R-module in
a natural way.
Proposition 2.18. Let R be a ring, let M be an R-module and U an R-submodule
of M. Then the cosets M/U form an R-module if one defines

r · (m + U ) := rm + U for all r ∈ R and m ∈ M.

This module is called the factor module of M modulo U .


Proof. One has to check that the R-action is well-defined, that is, independent
of the choice of representatives. If m + U = m′ + U then m − m′ ∈ U and
then also rm − rm′ = r(m − m′ ) ∈ U since U is an R-submodule. Therefore
rm + U = rm′ + U . Finally, the module axioms are inherited from those for M.

Example 2.19.
(1) Let R be a ring and I some left ideal of R, then R/I is a factor module of the
R-module R. For example, if R = Z then every ideal is of the form I = Zd for
d ∈ Z, and this gives the well-known Z-modules Z/Zd of integers modulo d.
(2) Let A = KQ, the path algebra of a quiver Q, and let M = Aei where i is a
vertex of Q. This has the submodule (Aei )≥1 = Aei ∩ A≥1 (see Example 2.14).
The factor module Aei /(Aei )≥1 is 1-dimensional, spanned by the coset of ei .
Note that Aei may be infinite-dimensional.

2.4 Module Homomorphisms

We have introduced modules as a generalization of vector spaces. In this section
we introduce the suitable maps between modules; these ‘module homomorphisms’
provide a direct generalization of the concept of a K-linear map of vector spaces.
Definition 2.20. Suppose R is a ring, and M and N are R-modules. A map
φ : M → N is an R-module homomorphism if for all m, m1 , m2 ∈ M and r ∈ R
we have
(i) φ(m1 + m2 ) = φ(m1 ) + φ(m2 );
(ii) φ(rm) = rφ(m).
An isomorphism of R-modules is an R-module homomorphism which is also
bijective. The R-modules M and N are then called isomorphic; notation: M ∼= N.
Note that if φ is an isomorphism of modules then so is the inverse map; the proof is
analogous to Exercise 1.14.
Remark 2.21. Assume that the ring in Definition 2.20 is a K-algebra A (for some
field K).
(1) The above definition also says that every A-module homomorphism φ must be
K-linear. Indeed, recall that we identified scalars λ ∈ K with elements λ1A in
A. Then we have for λ, μ ∈ K and m1 , m2 ∈ M that

φ(λm1 + μm2 ) = φ((λ1A )m1 + (μ1A )m2 )
              = (λ1A )φ(m1 ) + (μ1A )φ(m2 )
              = λφ(m1 ) + μφ(m2 ).

(2) It suffices to check condition (ii) in Definition 2.20 for elements r ∈ A in a
fixed basis, or just for elements which generate A, such as r = X in the case
when A = K[X].
Exercise 2.7. Suppose V is an A-module where A is a K-algebra. Let EndA (V ) be
the set of all A-module homomorphisms V → V , which is by Remark 2.21 a subset
of EndK (V ). Check that it is a K-subalgebra.
Example 2.22. Let R be a ring.
(1) Suppose U is a submodule of an R-module M, then the ‘canonical map’
π : M → M/U defined by π(m) = m + U is an R-module homomorphism.
(2) Suppose M is an R-module, and m ∈ M. Then there is an R-module
homomorphism

φ : R → M, φ(r) = rm.

This is very common, and we call it a ‘multiplication homomorphism’. There is
a general version of this, also very common. Namely, suppose m1 , m2 , . . . , mn
are given elements in M. Now take the R-module R n := R × R × . . . × R, and
define

ψ : R n → M, ψ(r1 , r2 , . . . , rn ) = r1 m1 + r2 m2 + . . . + rn mn .

You are encouraged to check that this is indeed an R-module homomorphism.


(3) Suppose M = M1 × . . . × Mr , the direct product of R-modules M1 , . . . , Mr .
Then for every i ∈ {1, . . . , r} the projection map

πi : M → Mi , (m1 , . . . , mr ) → mi

is an R-module homomorphism. Similarly, the inclusion maps into the i-th
coordinate

ιi : Mi → M , mi → (0, . . . , 0, mi , 0, . . . , 0)

are R-module homomorphisms. We recommend to check this as well.


Similarly, if M = U ⊕ V , the direct sum of two submodules U and V , then
the projection maps, and the inclusion maps, are R-module homomorphisms.
Now consider the case when the ring is an algebra, then we have several other
types of module homomorphisms.
Example 2.23. Let K be a field.
(1) Consider the polynomial algebra A = K[X]. For a K-linear map between
A-modules to be an A-module homomorphism it suffices that it commutes
with the action of X. We have described the A-modules (see Sect. 2.2), so let
Vα and Wβ be A-modules. An A-module homomorphism is a K-linear map
θ : V → W such that θ (Xv) = Xθ (v) for all v ∈ V . On Vα , the element X
acts by α, and on Wβ , the action of X is given by β. So this means

θ (α(v)) = β(θ (v)) for all v ∈ V .

This holds for all v, so we have θ ◦ α = β ◦ θ.


Conversely, if θ : V → W is a linear map such that θ ◦ α = β ◦ θ , then
θ defines a module homomorphism. In particular, Vα ∼= Wβ are isomorphic
A-modules if and only if there is an invertible linear map θ such that
θ −1 ◦ β ◦ θ = α. This condition means that α and β are similar as linear maps
(see the sketch after this example).
(2) Let A = K[X] and let Vα , Wβ be 1-dimensional A-modules. In this case, α and
β are scalar multiples of the identity map. Then by the previous result, Vα and
Wβ are isomorphic if and only if α = β. This follows from the fact that, since
β is a scalar multiple of the identity map, it commutes with θ .

(3) Let G = Cn be a cyclic group of order n. Then we have seen in Example 2.11
that the group algebra CG ∼= C[X]/(Xn − 1) has n one-dimensional modules
given by multiplication with the scalars (e2πi/n )j for j = 0, 1, . . . , n−1. These
are pairwise non-isomorphic, by part (2) above. Thus, CG has precisely n one-
dimensional modules, up to isomorphism.
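
Here is the sketch referred to in part (1), a minimal check assuming numpy: any
invertible θ conjugating α into β intertwines the two actions, so Vα and Wβ are
isomorphic K[X]-modules.

    import numpy as np

    alpha = np.array([[1., 1.],
                      [0., 1.]])
    theta = np.array([[2., 1.],
                      [1., 1.]])                  # any invertible matrix
    beta = theta @ alpha @ np.linalg.inv(theta)   # beta is similar to alpha
    # theta(X . v) = X . theta(v), i.e. theta o alpha = beta o theta:
    assert np.allclose(theta @ alpha, beta @ theta)
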
Exercise 2.8. Let A = KQ, where Q is a quiver. We have seen that A has for each
vertex i of Q a 1-dimensional module Si := Aei /(Aei )≥1 (see Example 2.19). Show
that if i = j then Si and Sj are not isomorphic.
In analogy to the isomorphism theorem for linear maps, or group homomor-
phisms, there are also isomorphism theorems for module homomorphisms.
Theorem 2.24 (Isomorphism Theorems). Let R be a ring. Then the following
hold.
(a) Suppose φ : M → N is an R-module homomorphism. Then the kernel ker(φ) is
an R-submodule of M and the image im(φ) is an R-submodule of N. Moreover,
we have an isomorphism of R-modules

M/ker(φ) ∼= im(φ).

(b) Suppose U, V are submodules of an R-module M, then the sum U + V and the
intersection U ∩ V are also R-submodules of M. Moreover,

U/(U ∩ V ) ∼= (U + V )/V

are isomorphic as R-modules.


(c) Let M be an R-module. Suppose U ⊆ V ⊆ M are R-submodules, then V /U is
an R-submodule of M/U , and

(M/U )/(V /U ) ∼= M/V

are isomorphic as R-modules.


Proof. Readers who have seen the corresponding theorem for abelian groups may
apply this, and just check that all maps involved are R-module homomorphisms.
But for completeness, we give a slightly more detailed proof here.
(a) Since φ is in particular a homomorphism of (abelian) groups, we know (or can
easily check) that ker(φ) is a subgroup of M. Moreover, for every m ∈ ker(φ) and
r ∈ R we have

φ(rm) = rφ(m) = r · 0 = 0

and rm ∈ ker(φ). Similarly one checks that the image im(φ) is a submodule of N.
For the second statement we consider the map

ψ : M/ker(φ) → im(φ), m + ker(φ) → φ(m).

This map is well-defined and injective: in fact, for any m1 , m2 ∈ M we have

m1 + ker(φ) = m2 + ker(φ) ⇐⇒ m1 − m2 ∈ ker(φ) ⇐⇒ φ(m1 ) = φ(m2 ).

By definition, ψ is surjective. It remains to check that this map is an R-module
homomorphism. For every m1 , m2 , m ∈ M and r ∈ R we have

ψ((m1 + ker(φ)) + (m2 + ker(φ))) = φ(m1 + m2 ) = φ(m1 ) + φ(m2 )
                                 = ψ(m1 + ker(φ)) + ψ(m2 + ker(φ)),

and

ψ(r(m + ker(φ))) = φ(rm) = rφ(m) = rψ(m + ker(φ)).

(b) We have already seen in Example 2.13 that U + V and U ∩ V are submodules.
Then we consider the map

ψ : U → (U + V )/V , u → u + V .

From the addition and R-action on factor modules being defined on representatives,
it is easy to check that ψ is an R-module homomorphism. Since every coset in
(U + V )/V is of the form u + v + V = u + V , the map ψ is surjective. Moreover,
it follows directly from the definition that ker(ψ) = U ∩ V . So part (a) implies that

U/(U ∩ V ) ∼= (U + V )/V .

(c) That V /U is an R-submodule of M/U follows directly from the fact that V is
an R-submodule of M. We then consider the map

ψ : M/U → M/V , m + U → m + V .

Note that this map is well-defined since U ⊆ V by assumption. One checks that ψ is
an R-module homomorphism. By definition, ψ is surjective, and the kernel consists
precisely of the cosets of the form m + U with m ∈ V , that is, ker(ψ) = V /U . So
part (a) implies that

(M/U )/(V /U ) ∼= M/V ,

as claimed. 


Example 2.25. Let R be a ring and M an R-module. For any m ∈ M consider the
R-module homomorphism

φ : R → M , φ(r) = rm

from Example 2.22. The kernel

AnnR (m) := {r ∈ R | rm = 0}

is called the annihilator of m in R. By the isomorphism theorem we have that

R/AnnR (m) ∼= im(φ) = Rm,

that is, the factor module is actually isomorphic to the submodule of M generated
by m; this has appeared already in Example 2.13.
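
A toy illustration, with made-up numbers: in the Z-module Z/6Z, take m to be the
coset of 2. The short Python check below (our sketch) finds AnnZ (m) = 3Z and
confirms that Zm has |Z/3Z| = 3 elements.

    # Z-module M = Z/6Z, element m = 2 + 6Z:
    M, m = 6, 2
    ann = [r for r in range(M) if (r * m) % M == 0]    # residues giving Ann_Z(m)
    orbit = sorted({(r * m) % M for r in range(M)})    # the submodule Zm
    print(ann, orbit)   # [0, 3] and [0, 2, 4]
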
In the isomorphism theorem we have seen that factor modules occur very natu-
rally in the context of module homomorphisms. We now describe the submodules
of a factor module. This so-called submodule correspondence is very useful, as it
allows one to translate between factor modules and modules. This is based on the
following observation:
Proposition 2.26. Let R be a ring and φ : M → N an R-module homomorphism.
Then for every R-submodule W ⊆ N the preimage φ −1 (W ) := {m ∈ M | φ(m) ∈ W }
is an R-submodule of M, which contains the kernel of φ.
Proof. We first show that φ −1 (W ) is a subgroup. It contains the zero element since
φ(0) = 0 ∈ W . Moreover, if m1 , m2 ∈ φ −1 (W ), then

φ(m1 ± m2 ) = φ(m1 ) ± φ(m2 ) ∈ W

since W is a subgroup of N, that is, m1 ± m2 ∈ φ −1 (W ). Finally, let r ∈ R and
m ∈ φ −1 (W ). Then φ(rm) = rφ(m) ∈ W since W is an R-submodule, that is,
rm ∈ φ −1 (W ). The kernel of φ is mapped to zero, and 0 ∈ W , hence the kernel of
φ is contained in φ −1 (W ). 

Example 2.27. Let R be a ring and M an R-module, and let U be a submod-
ule of M with factor module M/U . Then we have the natural homomorphism
π : M → M/U, π(m) = m + U. Proposition 2.26 with φ = π shows that for
every submodule W of M/U , the preimage under π is a submodule of M containing
U . Explicitly, this is the module

W̃ := π −1 (W ) = {m ∈ M | m + U ∈ W }.

This construction leads to the following submodule correspondence:


Theorem 2.28. Let R be a ring. Suppose M is an R-module and U is an R-
submodule of M. Then the map W → W̃ induces an inclusion-preserving bijection
between the set F of R-submodules of M/U and the set S of R-submodules of M
that contain U . Its inverse takes V in S to V /U ∈ F .
Proof. If W is a submodule of M/U then W̃ belongs to the set S, by Example 2.27.
Take a module V which belongs to S, then V /U is in F , by part (c) of Theorem 2.24.
To show that this gives a bijection, we must check that

(i) if W is in F then W̃ /U = W ,
(ii) if V is in S then π −1 (V /U ) = V .

To prove (i), let m ∈ W̃ , then by definition m + U ∈ W so that W̃ /U ⊆ W .
Conversely if w ∈ W then w = m + U for some m ∈ M. Then m ∈ W̃ and
therefore w ∈ W̃ /U .
To prove (ii), note that π −1 (V /U ) = {m ∈ M | m + U ∈ V /U }. First, let
m ∈ π −1 (V /U ), that is, m + U ∈ V /U . Then m + U = v + U for some v ∈ V ,
and therefore m − v ∈ U . But U ⊆ V and therefore m − v ∈ V , and it follows
that m = (m − v) + v ∈ V . This shows that π −1 (V /U ) ⊆ V . Secondly, if v ∈ V
then v + U ∈ V /U , so that v ∈ π −1 (V /U ). This completes the proof of (ii). It is
clear from the constructions that they preserve inclusions. 

Example 2.29. For a field K, we consider the K-algebra A = K[X]/(Xn ) as an
A-module. We apply Theorem 2.28 in the case R = M = K[X] and U = (Xn );
then the K[X]-submodules of A are in bijection with the K[X]-submodules of
K[X] containing the ideal (Xn ). The K[X]-submodules of K[X] are precisely
the (left) ideals. Since K[X] is a principal ideal domain every ideal is of the
form (g) = K[X]g for some polynomial g ∈ K[X]; moreover, (g) contains the
ideal (Xn ) if and only if g divides Xn . Hence there are precisely n + 1 different
K[X]-submodules of A = K[X]/(Xn ), namely all

(Xj )/(Xn ) = {Xj h + (Xn ) | h ∈ K[X]} (0 ≤ j ≤ n).

Note that the K[X]-action on A is the same as the A-action on A (both given
by multiplication in K[X]); thus the K[X]-submodules of A are the same as the
A-submodules of A and the above list gives precisely the submodules of A as an
A-module.
Alternatively, we know that as a K[X]-module, A = Vα , where V = A
as a vector space, and α is the linear map which is given as multipli-
cation by X (see Sect. 2.2). The matrix of α with respect to the basis
{1 + (Xn ), X + (Xn ), . . . , Xn−2 + (Xn ), Xn−1 + (Xn )} is the Jordan block Jn (0)

of size n with zero diagonal entries, that is,


         ⎛0  0  ⋯  ⋯  0⎞
         ⎜1  0        ⋮⎟
Jn (0) = ⎜0  ⋱  ⋱     ⋮⎟ .
         ⎜⋮     ⋱  0  0⎟
         ⎝0  ⋯  0  1  0⎠

A submodule is a subspace which is invariant under this matrix. With this, we also
get the above description of submodules.
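
A small numpy sketch (names ours) makes the submodule lattice visible:
multiplication by X acts by the Jordan block, shifting each basis vector to the next
one.

    import numpy as np

    n = 4
    J = np.eye(n, k=-1)            # J_n(0): ones on the subdiagonal
    e = np.eye(n)                  # column e[:, j] stands for the coset of X^j
    for j in range(n):
        print(j, J @ e[:, j])      # X . X^j = X^(j+1), and X . X^(n-1) = 0
    # so span{X^j, ..., X^(n-1)} is invariant for each j, giving n + 1 submodules
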
Exercise 2.9. Check the details in the above example.
Often, properties of module homomorphisms already give rise to direct sum
decompositions. We give an illustration of this, which will actually be applied twice
later.
Lemma 2.30. Let A be a K-algebra, and let M, N, N ′ be non-zero A-modules.
Suppose there are A-module homomorphisms j : N → M and π : M → N ′ such
that the composition π ◦ j : N → N ′ is an isomorphism. Then j is injective and π
is surjective, and M is the direct sum of two submodules,

M = im(j ) ⊕ ker(π).

Proof. The first part is clear. We must show that im(j ) ∩ ker(π) = 0 and that
M = im(j ) + ker(π), see Definition 2.15.
Suppose w ∈ im(j ) ∩ ker(π), so that w = j (n) for some n ∈ N and π(w) = 0.
Then 0 = π(w) = (π ◦ j )(n) from which it follows that n = 0 since π ◦ j is
injective. Clearly, then also w = 0, as desired. This proves that the intersection
im(j ) ∩ ker(π) is zero.
Let φ : N ′ → N be the inverse of π ◦ j , so that we have π ◦ j ◦ φ = idN ′ . Take
w ∈ M then

w = (j ◦ φ ◦ π)(w) + (w − (j ◦ φ ◦ π)(w)).

The first summand belongs to im(j ), and the second summand is in ker(π) since

π(w − (j ◦ φ ◦ π)(w)) = π(w) − (π ◦ j ◦ φ ◦ π)(w) = π(w) − π(w) = 0.

This proves that w ∈ im(j ) + ker(π), and hence M = im(j ) + ker(π). 




2.5 Representations of Algebras

In basic group theory one learns that a group action of a group G on a set Ω is ‘the
same’ as a group homomorphism from G into the group of all permutations of the
set Ω.
In analogy, let A be a K-algebra; as we will see, an A-module V ‘is the same’ as
an algebra homomorphism φ : A → EndK (V ), that is, a representation of A.
Definition 2.31. Let K be a field and A a K-algebra.
(a) A representation of A over K is a K-vector space V together with a K-algebra
homomorphism θ : A → EndK (V ).
(b) A matrix representation of A is a K-algebra homomorphism θ : A −→ Mn (K),
for some n ≥ 1.
(c) Suppose in (a) that V is finite-dimensional. We may fix a basis in V and
write linear maps of V as matrices with respect to such a basis. Then the
representation of A becomes a K-algebra homomorphism θ : A → Mn (K),
that is, a matrix representation of A.
Example 2.32. Let K be a field.
(1) Let A be a K-subalgebra of Mn (K), the algebra of n × n-matrices, then the
inclusion map is an algebra homomorphism and hence is a matrix representation
of A. Similarly, let A be a subalgebra of the K-algebra EndK (V ) of K-linear
maps on V , where V is a vector space over K. Then the inclusion map from A
to EndK (V ) is an algebra homomorphism, hence it is a representation of A.
(2) Consider the polynomial algebra K[X], and take a K-vector space V together
with a fixed linear transformation α. We have seen in Example 1.24 that
evaluation at α is an algebra homomorphism. Hence we have a representation
of K[X] given by

θ : K[X] → EndK (V ), θ (f ) = f (α).

(3) Let A = KG, the group algebra of a finite group. Define θ : A → M1 (K) = K
by mapping each basis vector g ∈ G to 1, and extend to linear combinations.
This is a representation of A: by Remark 2.6 it is enough to check the conditions
on a basis. This is easy, since θ (g) = 1 for g ∈ G.
In Sect. 2.1 we introduced modules for an algebra as vector spaces on which the
algebra acts by linear transformations. The following crucial result observes that
modules and representations of an algebra are the same. Going from one notion to
the other is a formal matter, nothing is ‘done’ to the modules or representations and
it only describes two different views of the same thing.

Theorem 2.33. Let K be a field and let A be a K-algebra.


(a) Suppose V is an A-module, with A-action A × V → V , (a, v) → a · v. Then
we have a representation of A given by

θ : A → EndK (V ) , θ (a)(v) = a · v for all a ∈ A, v ∈ V .

(b) Suppose θ : A → EndK (V ) is a representation of A. Then V becomes an
A-module by setting

A × V → V , a · v := θ (a)(v) for all a ∈ A, v ∈ V .

Roughly speaking, the representation θ corresponding to an A-module V
describes how each element a ∈ A acts linearly on the vector space V , and vice
versa.
Proof. The proof basically consists of translating the module axioms from Defini-
tion 2.1 to the new language of representations from Definition 2.31, and vice versa.
(a) We first have to show that θ (a) ∈ EndK (V ) for all a ∈ A. Recall from
Lemma 2.5 that every A-module is also a K-vector space; then for all λ, μ ∈ K
and v, w ∈ V we have

θ (a)(λv + μw) = θ (a)(λ1A · v + μ1A · w) = (aλ1A ) · v + (aμ1A ) · w
               = λ(a · v) + μ(a · w) = λθ (a)(v) + μθ (a)(w),

where we again used axiom (Alg) from Definition 1.1.


It remains to check that θ is an algebra homomorphism. First, it has to be a
K-linear map; for any λ, μ ∈ K, a, b ∈ A and v ∈ V we have

θ (λa + μb)(v) = (λa + μb) · v = λ(a · v) + μ(b · v) = (λθ (a) + μθ (b))(v),

that is, θ (λa + μb) = λθ (a) + μθ (b). Next, for any a, b ∈ A and v ∈ V we get

θ (ab)(v) = (ab) · v = a · (b · v) = (θ (a) ◦ θ (b))(v),

which holds for all v ∈ V , hence θ (ab) = θ (a) ◦ θ (b). Finally, it is immediate from
the definition that θ (1A ) = idV .
(b) This is analogous to the proof of part (a), where each argument can be
reversed. 

Example 2.34. Let K be a field.
(1) Let V be a K-vector space and A ⊆ EndK (V ) a subalgebra. As observed in
Example 2.32 the inclusion map θ : A → EndK (V ) is a representation. When
V is interpreted as an A-module we obtain precisely the ‘natural module’ from
Example 2.4 with A-action given by A × V → V , (ϕ, v) → ϕ(v).

(2) The representation θ : K[X] → EndK (V ), θ (f ) = f (α) for any K-linear map
α on a K-vector space V when interpreted as a K[X]-module is precisely the
K[X]-module Vα studied in Sect. 2.2.
(3) Let A be a K-algebra, then A is an A-module with A-action given by the
multiplication in A. The corresponding representation of A is then given by

θ : A → EndK (A) , θ (a)(x) = ax for all a, x ∈ A.

This representation θ is called the regular representation of A. (See also
Exercise 1.29.)
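
As a concrete instance, here is a sympy sketch (our computation) of the regular
representation of the 2-dimensional algebra A = K[X]/(X2 ) in the basis {1, X}:

    import sympy as sp

    X, a0, a1, x0, x1 = sp.symbols('X a0 a1 x0 x1')
    # theta(a)(x) = a x for a = a0 + a1*X and x = x0 + x1*X, reduced mod X^2:
    prod = sp.rem(sp.expand((a0 + a1*X) * (x0 + x1*X)), X**2, X)
    print(sp.Poly(prod, X).all_coeffs())   # [a0*x1 + a1*x0, a0*x0]
    # reading off (x0, x1) -> (a0*x0, a1*x0 + a0*x1) shows that theta(a) has
    # matrix [[a0, 0], [a1, a0]], identifying A with a subalgebra of M_2(K)
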
We now want to transfer the notion of module isomorphism (see Definition 2.20)
to the language of representations. It will be convenient to have the following notion:
Definition 2.35. Let K be a field. Suppose we are given two representations θ1 , θ2
of a K-algebra A where θ1 : A → EndK (V1 ) and θ2 : A → EndK (V2 ). Then θ1
and θ2 are said to be equivalent if there is a vector space isomorphism ψ : V1 → V2
such that

θ1 (a) = ψ −1 ◦ θ2 (a) ◦ ψ for all a ∈ A. (∗)

If θ1 : A → Mn1 (K) and θ2 : A → Mn2 (K) are matrix representations of A, this
means that θ1 (a) and θ2 (a) should be simultaneously similar, that is, there exists an
invertible n2 × n2 -matrix C (independent of a) such that θ1 (a) = C −1 θ2 (a)C for all
a ∈ A. (Note that for equivalent representations the dimensions of the vector spaces
must agree, that is, we have n1 = n2 .)
For example, let A = K[X], and assume V1 , V2 are finite-dimensional. Then (∗)
holds for all a ∈ K[X] if and only if (∗) holds for a = X, that is, if and only if the
matrices θ1 (X) and θ2 (X) are similar.
Proposition 2.36. Let K be a field and A a K-algebra. Then two representations
θ1 : A → EndK (V1 ) and θ2 : A → EndK (V2 ) of A are equivalent if and only if the
corresponding A-modules V1 and V2 are isomorphic.
Proof. Suppose first that θ1 and θ2 are equivalent via the vector space isomorphism
ψ : V1 → V2 . We claim that ψ is also an A-module homomorphism (and then it is
an isomorphism). By definition ψ : V1 → V2 is K-linear. Moreover, for v ∈ V1 and
a ∈ A we have

ψ(a · v) = ψ(θ1 (a)(v)) = θ2 (a)(ψ(v)) = a · ψ(v).

Conversely, suppose ψ : V1 → V2 is an A-module isomorphism. Then for all a ∈ A
and v ∈ V1 we have

ψ(θ1 (a)(v)) = ψ(a · v) = a · ψ(v) = θ2 (a)(ψ(v)).



This is true for all v ∈ V1 , that is, for all a ∈ A we have

ψ ◦ θ1 (a) = θ2 (a) ◦ ψ

and hence the representations θ1 and θ2 are equivalent. 



Focussing on representations (rather than modules) gives some new insight.
Indeed, an algebra homomorphism has a kernel which quite often is non-trivial.
Factoring out the kernel, or an ideal contained in the kernel, gives a representation
of a smaller algebra.
Lemma 2.37. Let A be a K-algebra, and I a proper two-sided ideal of A, with
factor algebra A/I . Then the representations of A/I are in bijection with the
representations of A whose kernel contains I . The bijection takes a representation
ψ of A/I to the representation ψ ◦ π where π : A → A/I is the natural
homomorphism.
Translating this to modules, there is a bijection between the set of A/I -modules
and the set of those A-modules V for which xv = 0 for all x ∈ I and v ∈ V .
Proof. We prove the statement about the representations. The second statement then
follows directly by Theorem 2.33.
(i) Let ψ : A/I → EndK (V ) be a representation of the factor algebra A/I , then
the composition ψ ◦ π : A → EndK (V ) is an algebra homomorphism and hence a
representation of A. By definition, I = ker(π) and hence I ⊆ ker(ψ ◦ π).
(ii) Conversely, assume θ : A → EndK (V ) is a representation of A such that
I ⊆ ker(θ ). By Theorem 1.26, we have a well-defined algebra homomorphism
θ̄ : A/I → EndK (V ) defined by

θ̄ (a + I ) = θ (a) (a ∈ A)

and moreover θ = θ̄ ◦ π. Then θ̄ is a representation of A/I .
In (i) we define a map between the two sets of representations, and in (ii) we
show that it is surjective. Finally, let ψ, ψ  be representations of A/I such that
ψ ◦ π = ψ  ◦ π. The map π is surjective, and therefore ψ = ψ  . Thus, the map in
(i) is also injective. □
Remark 2.38. In the above lemma, the A/I -module V is called inflation when it is
viewed as an A-module. The two actions in this case are related by

(a + I )v = av for all a ∈ A, v ∈ V .

We want to illustrate the above inflation procedure with an example from linear
algebra.
Example 2.39. Let A = K[X] be the polynomial algebra. We take a representation
θ : K[X] → EndK (V ) and let θ (X) =: α. The kernel is an ideal of K[X], so it
is of the form ker(θ ) = K[X]m = (m) for some polynomial m ∈ K[X]. Assume
m ≠ 0; then we can take it to be monic. Then m is the minimal polynomial of α, as
it is studied in linear algebra. That is, we have m(α) = 0, and f (α) = 0 if and only
if f is a multiple of m in K[X].
By the above Lemma 2.37, the representation θ can be viewed as a representation
of the algebra K[X]/I whenever I is an ideal of K[X] contained in ker(θ ) = (m),
that is, I = (f ) and f is a multiple of m.
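For a concrete linear map the minimal polynomial is easy to check in code. The
following Python sketch using NumPy is our own toy illustration: for a 3 × 3 Jordan
block α with eigenvalue 2, linear algebra gives m = (X − 2)^3, and indeed m(α) = 0
while no proper divisor of m annihilates α.

    import numpy as np

    # alpha: a 3x3 Jordan block with eigenvalue 2; its minimal polynomial
    # is m = (X - 2)^3
    alpha = np.array([[2.0, 1.0, 0.0],
                      [0.0, 2.0, 1.0],
                      [0.0, 0.0, 2.0]])
    I = np.eye(3)

    m_of_alpha = np.linalg.matrix_power(alpha - 2 * I, 3)
    f_of_alpha = np.linalg.matrix_power(alpha - 2 * I, 2)   # a proper divisor of m

    print(np.allclose(m_of_alpha, 0))   # True:  m(alpha) = 0
    print(np.allclose(f_of_alpha, 0))   # False: no proper divisor annihilates alpha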

2.5.1 Representations of Groups vs. Modules for Group Algebras

In this book we focus on representations of algebras. However, we explain in this
section how the important theory of representations of groups can be interpreted in
this context. Representations of groups have historically been the starting point for
representation theory at the end of the 19th century. The idea of a representation of a
group is analogous to that for algebras: one lets a group act by linear transformations
on a vector space in a way compatible with the group structure. Group elements
are invertible, therefore the linear transformations by which they act must also be
invertible. The invertible linear transformations of a vector space form a group
GL(V ) with respect to composition of maps, and it consists of the invertible
elements in EndK (V ). For an n-dimensional vector space, if one fixes a basis, one
gets the group GLn (K), consisting of the invertible matrices in Mn (K), a group with
respect to matrix multiplication.
Definition 2.40. Let G be a group. A representation of G over a field K is a
homomorphism of groups ρ : G → GL(V ), where V is a vector space over K.
If V is finite-dimensional one can choose a basis and obtain a matrix representa-
tion, that is, a group homomorphism ρ : G → GLn (K).
The next result explains that group representations are basically the same
as representations for the corresponding group algebras (see Sect. 1.1.2). By
Theorem 2.33 this can be rephrased by saying that a representation of a group
is nothing but a module for the corresponding group algebra. Sometimes group
homomorphisms are more useful, and sometimes it is useful that one can also use
linear algebra.
Proposition 2.41. Let G be a group and K a field.
(a) Every representation ρ : G → GL(V ) of the group G over K extends to
a representation θρ : KG → EndK (V ) of the group algebra KG, given by

∑_{g∈G} αg g → ∑_{g∈G} αg ρ(g).
(b) Conversely, given a representation θ : KG → EndK (V ) of the group algebra
KG, then the restriction ρθ : G → GL(V ) of θ to G is a representation of the
group G over K.
Proof. (a) Given a representation, that is, a group homomorphism ρ : G → GL(V ),


we must show that θρ is an algebra homomorphism from KG to EndK (V ). First,
θρ is linear, by definition, and maps into EndK (V ). Next, we must check that
θρ (ab) = θρ (a)θρ (b). As mentioned in Remark 1.23, it suffices to take a, b in a
basis of KG, so we can take a, b ∈ G and then also ab ∈ G, and we have

θρ (ab) = ρ(ab) = ρ(a)ρ(b) = θρ (a)θρ (b).

In addition θρ (1KG ) = ρ(1G ) = 1GL(V ) = idV .


(b) The elements of G form a subset (even a vector space basis) of the group algebra
KG. Thus we can restrict θ to this subset and get a map

ρθ : G → GL(V ) , ρθ (g) = θ (g) for all g ∈ G.

In fact, every g ∈ G has an inverse g −1 in the group G; since θ is an algebra


homomorphism it follows that

θ (g)θ (g −1 ) = θ (gg −1 ) = θ (1KG ) = idV ,

that is, ρθ (g) = θ (g) ∈ GL(V ) is indeed an invertible linear map. Moreover, for
every g, h ∈ G we have

ρθ (gh) = θ (gh) = θ (g)θ (h) = ρθ (g)ρθ (h)

and ρθ is a group homomorphism, as required. □
Example 2.42. We consider a square in the plane, with corners (±1, ±1). Recall
that the group of symmetries of the square is by definition the group of orthogonal
transformations of R2 which leave the square invariant. This group is called dihedral
group of order 8 and we denote it by D4 (some texts call it D8 ). It consists of four
rotations (including the identity), and four reflections. Let r be the anti-clockwise
rotation by π/2, and let s be the reflection in the x-axis. One can check that
D4 = {s i r j | 0 ≤ i ≤ 1, 0 ≤ j ≤ 3}. We define this group as a subgroup of
GL(R2 ). Therefore the inclusion map D4 −→ GL(R2 ) is a group homomorphism,
hence is a representation. We can write the elements in this group as matrices with
respect to the standard basis. Then

ρ(r) = ( 0 −1 )   and   ρ(s) = ( 1  0 ) .
       ( 1  0 )                ( 0 −1 )

Then the above group homomorphism translates into a matrix representation

ρ : D4 → GL2 (R) , ρ(s i r j ) = ρ(s)i ρ(r)j .



By Proposition 2.41 this yields a representation θρ of the group algebra RD4 .


Interpreting representations of algebras as modules (see Theorem 2.33) this means
that V = R2 becomes an RD4 -module where each g ∈ D4 acts on V by applying
the matrix ρ(g) to a vector.
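The relations in this example can be checked directly on a computer. A small
Python sketch with NumPy (our own verification, not part of the text):

    import numpy as np

    r = np.array([[0, -1],
                  [1,  0]])          # rotation by pi/2
    s = np.array([[1,  0],
                  [0, -1]])          # reflection in the x-axis

    I2 = np.eye(2, dtype=int)
    print(np.array_equal(np.linalg.matrix_power(r, 4), I2))        # r^4 = id
    print(np.array_equal(s @ s, I2))                               # s^2 = id
    print(np.array_equal(s @ r @ s, np.linalg.matrix_power(r, 3))) # s r s^{-1} = r^{-1}

    # the eight matrices s^i r^j, 0 <= i <= 1, 0 <= j <= 3, are pairwise distinct
    elements = {tuple((np.linalg.matrix_power(s, i) @ np.linalg.matrix_power(r, j)).flatten())
                for i in range(2) for j in range(4)}
    print(len(elements))                                           # 8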
Assume N is a normal subgroup of G with factor group G/N, then it is reason-
able to expect that representations of G/N may be related to some representations
of G. Indeed, we have the canonical map π : G → G/N where π(g) = gN, which
is a group homomorphism. In fact, using this we have an analogue of inflation for
algebras, described in Lemma 2.37.
Lemma 2.43. Let G be a group and N a normal subgroup of G with factor group
G/N, and consider representations over the field K. Then the representations of
G/N are in bijection with the representations of G whose kernel contains N.
The bijection takes a representation θ of G/N to the representation θ ◦ π where
π : G → G/N is the canonical map.
Translating this to modules, this gives a bijection between the set of K(G/N)-
modules and the set of those KG-modules V for which n · v = v for all n ∈ N and
v ∈ V.
Proof. This is completely analogous to the proof of Lemma 2.37, so we leave it as
an exercise. In fact, Exercise 2.24 shows that it is the same. □
2.5.2 Representations of Quivers vs. Modules for Path Algebras

The group algebra KG has basis the elements of the group G, which allows us to
relate KG-modules and representations of G. The path algebra KQ of a quiver has
basis the paths of Q. In analogy, we can define a representation of a quiver Q, which
allows us to relate KQ-modules and representations of Q. We will introduce this
now, and later we will study it in more detail.
Roughly speaking, a representation is as follows. A quiver consists of vertices
and arrows, and if we want to realize it in the setting of vector spaces, we represent
vertices by vector spaces, and arrows by linear maps, so that when arrows can be
composed, the corresponding maps can also be composed.
Definition 2.44. Let Q = (Q0 , Q1 ) be a quiver. A representation V of Q over a
field K is a set of K-vector spaces {V (i) | i ∈ Q0 } together with K-linear maps
V (α) : V (i) → V (j ) for each arrow α from i to j . We sometimes also write V as a
tuple V = ((V (i))i∈Q0 , (V (α))α∈Q1 ).
Example 2.45.
(1) Let Q be the one-loop quiver as in Example 1.13 with one vertex 1, and one
arrow α with starting and end point 1. Then a representation V of Q consists of
a K-vector space V (1) and a K-linear map V (α) : V (1) → V (1).
(2) Let Q be the quiver 1 −α→ 2. Then a representation V consists of two K-vector
spaces V (1) and V (2) and a K-linear map V (α) : V (1) → V (2).
Consider the second example. We can construct from this a module for the path
algebra KQ = span{e1 , e2 , α}. Take as the underlying space V := V (1) × V (2),
a direct product of K-vector spaces (a special case of a direct product of modules
as in Example 2.3 and Definition 2.17). Let ei act as the projection onto V (i) with
kernel V (j ) for j ≠ i. Then define the action of α on V by

α((v1 , v2 )) := (0, V (α)(v1 ))

(where vi ∈ V (i)). Conversely, if we have a KQ-module V then we can turn it into


a representation of Q by setting

V (1) := e1 V , V (2) := e2 V

and V (α) : e1 V → e2 V is given by (left) multiplication by α.
The following result shows how this can be done in general, it says that
representations of a quiver Q over a field K are ‘the same’ as modules for the path
algebra KQ.
Proposition 2.46. Let K be a field and Q a quiver, with vertex set Q0 and arrow
set Q1 .
(a) Let V = ((V (i))i∈Q0 , (V (α))α∈Q1 ) be a representation of Q over K. Then
the direct product V := ∏_{i∈Q0} V (i) becomes a KQ-module as follows: let
v = (vi )i∈Q0 ∈ V and let p = αr . . . α1 be a path in Q starting at vertex
s(p) = s(α1 ) and ending at vertex t (p) = t (αr ). Then we set

p · v = (0, . . . , 0, V (αr ) ◦ . . . ◦ V (α1 )(vs(p) ), 0, . . . , 0)

where the (possibly) non-zero entry is in position t (p). In particular, if r = 0,
then ei · v = (0, . . . , 0, vi , 0, . . . , 0). This action is extended to all of KQ by
linearity.
(b) Let V be a KQ-module. For any vertex i ∈ Q0 we set

V (i) = ei V = {ei · v | v ∈ V };
for any arrow α : i −→ j in Q1 we set

V (α) : V (i) → V (j ) , ei · v → α · (ei · v) = α · v.

Then V = ((V (i))i∈Q0 , (V (α))α∈Q1 ) is a representation of the quiver Q
over K.
(c) The constructions in (a) and (b) are inverse to each other.
Proof. (a) We check that the module axioms from Definition 2.1 are satisfied. Let
p = αr . . . α1 and q be paths in Q. Since the KQ-action is defined on the basis of
KQ and then extended by linearity, the distributivity (p + q) · v = p · v + q · v holds
by definition. Moreover, let v, w ∈ V ; then

p · (v + w) = (0, . . . , 0, V (αr ) ◦ . . . ◦ V (α1 )(vs(p) + ws(p) ), 0, . . . , 0)
            = p · v + p · w

since all the maps V (αi ) are K-linear. Since the multiplication in KQ is defined by
concatenation of paths (see Sect. 1.1.3), it is immediate that p · (q · v) = (pq) · v for
all v ∈ V and all paths p, q in Q, and then by linearity also for arbitrary elements
in KQ. Finally, the identity element is 1KQ = ∑_{i∈Q0} ei , the sum of all trivial paths
(see Sect. 1.1.3); by definition, ei acts by picking out the ith component, then for all
v ∈ V we have

1KQ · v = ∑_{i∈Q0} ei · v = v.

(b) According to Definition 2.44 we have to confirm that the V (i) = ei V are
K-vector spaces and that the maps V (α) are K-linear. The module axioms for V
imply that for every v, w ∈ V and λ ∈ K we have

ei · v + ei · w = ei · (v + w) ∈ ei V = V (i)

and also

λ(ei · v) = (λ1KQ ei ) · v = (ei λ1KQ ) · v = ei · (λv) ∈ ei V .

So V (i) is a K-vector space for every i ∈ Q0 .


Finally, let us check that V (α) is a K-linear map for every arrow α : i −→ j in Q.
Note first that αei = α, so that the map V (α) is indeed given by ei · v → α · v. Then
for all λ, μ ∈ K and all v, w ∈ V (i) we have

V (α)(λv + μw) = α · (λv + μw) = α · (λ1KQ · v) + α · (μ1KQ · w)


= (αλ1KQ ) · v + (αμ1KQ ) · w = (λ1KQ α) · v + (μ1KQ α) · w
= λ1KQ · (α · v) + μ1KQ · (α · w) = λV (α)(v) + μV (α)(w).

So V (α) is a K-linear map.


(c) It is straightforward to check from the definitions that the two constructions are
inverse to each other; we leave the details to the reader. □


Example 2.47. We consider the quiver 1 −α→ 2. The 1-dimensional vector space
span{e2 } is a KQ-module (more precisely, a KQ-submodule of the path algebra
KQ). We interpret it as a representation of the quiver Q by 0 −→ span{e2 }. Also
the two-dimensional vector space span{e1 , α} is a KQ-module. As a representation
of Q this takes the form span{e1 } −V (α)→ span{α}, where V (α)(e1 ) = α. Often the
vector spaces are only considered up to isomorphism; then the latter representation
takes the more concise form K −idK→ K.
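The correspondence of Proposition 2.46 can be made concrete in code for this
quiver. The following Python sketch (using NumPy; the spaces V (1) = R2 ,
V (2) = R and the map V (α) are our own choices) realizes the resulting KQ-module
structure on V (1) × V (2):

    import numpy as np

    # A representation of the quiver 1 --alpha--> 2 over the reals:
    # V(1) = R^2, V(2) = R^1, and V(alpha) given by a 1x2 matrix.
    V_alpha = np.array([[1.0, 0.0]])

    def act_e1(v):                 # trivial path e1: project onto V(1)
        return (v[0], np.zeros(1))

    def act_e2(v):                 # trivial path e2: project onto V(2)
        return (np.zeros(2), v[1])

    def act_alpha(v):              # arrow alpha: (v1, v2) -> (0, V(alpha) v1)
        return (np.zeros(2), V_alpha @ v[0])

    v = (np.array([3.0, 4.0]), np.array([5.0]))
    print(act_alpha(v))            # (array([0., 0.]), array([3.]))
    print(act_alpha(act_e1(v)))    # alpha * e1 = alpha: same result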

EXERCISES

2.10. Let R be a ring. Suppose M is an R-module with submodules U, V and W .


We have seen in Exercise 2.3 that the sum U + V := {u + v | u ∈ U, v ∈ V }
and the intersection U ∩ V are submodules of M.
(a) Show by means of an example that it is not in general the case that

U ∩ (V + W ) = (U ∩ V ) + (U ∩ W ).

(b) Show that U \ V is never a submodule. Show also that the union U ∪ V
is a submodule if and only if U ⊆ V or V ⊆ U .
2.11. Let A = M2 (K), the K-algebra of 2 × 2-matrices over a field K. Take
the A-module M = A, and for i = 1, 2 define Ui to be the subset of
matrices where all entries not in the i-th column are zero. Moreover, let

U3 := { ( a a ) | a, b ∈ K } .
        ( b b )
(a) Check that each Ui is an A-submodule of M.
(b) Verify that for i ≠ j , the intersection Ui ∩ Uj is zero.
(c) Show that M is not the direct sum of U1 , U2 and U3 .
2.12. For a field K we consider the factor algebra A = K[X]/(X4 − 2). In each
of the following cases find the number of 1-dimensional A-modules (up to
isomorphism); moreover, describe explicitly the action of the coset of X on
the module.

(i) K = Q; (ii) K = R; (iii) K = C; (iv) K = Z3 ; (v) K = Z7 .

2.13. Let A = Zp [X]/(Xn − 1), where p is a prime number. We investigate


1-dimensional A-modules, that is, the roots of Xn − 1 in Zp (see Theo-
rem 2.10).
(a) Show that 1̄ ∈ Zp is always a root.

(b) Show that the number of 1-dimensional A-modules is d where d is


the greatest common divisor of n and p − 1. (Hint: You might use
that the non-zero elements in Zp are precisely the roots of the polyno-
mial Xp−1 − 1.)
2.14. For a field K we consider the natural Tn (K)-module K n , where Tn (K) is the
algebra of upper triangular n × n-matrices.
(a) In K n consider the K-subspaces

Vi := {(λ1 , . . . , λn )t | λj = 0 for all j > i},

where 0 ≤ i ≤ n. Show that these are precisely the Tn (K)-submodules


of K n .
(b) For any 0 ≤ j < i ≤ n consider the factor module Vi,j := Vi /Vj . Show
that these give n(n + 1)/2 pairwise non-isomorphic Tn (K)-modules.
(c) For each standard basis vector ei ∈ K n , where 1 ≤ i ≤ n, determine
the annihilator AnnTn (K)(ei ) (see Example 2.25) and identify the factor
module Tn (K)/AnnTn (K) (ei ) with a Tn (K)-submodule of K n , up to
isomorphism.
2.15. Let R be a ring and suppose M = U × V , the direct product of R-modules U
and V . Check that Ũ := {(u, 0) | u ∈ U } is a submodule of M, isomorphic
to U . Write down a similar submodule Ṽ of M isomorphic to V , and show
that M = Ũ ⊕ Ṽ , the direct sum of submodules.
2.16. Let K be a field. Assume M, M1 , M2 are K-vector spaces and αi : Mi → M
are K-linear maps. The pull-back of (α1 , α2 ) is defined to be

E := {(m1 , m2 ) ∈ M1 × M2 | α1 (m1 ) + α2 (m2 ) = 0}.

(a) Check that E is a K-subspace of M1 × M2 .


(b) Assume that M, M1 , M2 are finite-dimensional K-vector spaces and that
M = im(α1 ) + im(α2 ). Show that then for the vector space dimensions
we have dimK E = dimK M1 + dimK M2 − dimK M. (Hint: Show that
the map (m1 , m2 ) → α1 (m1 )+α2 (m2 ) from M1 ×M2 to M is surjective,
and has kernel E.)
(c) Now assume that M, M1 , M2 are A-modules where A is some K-algebra,
and that α1 and α2 are A-module homomorphisms. Show that then E is
an A-submodule of M1 × M2 .
2.17. Let K be a field. Assume W, M1 , M2 are K-vector spaces and βi : W → Mi
are K-linear maps. The push-out of (β1 , β2 ) is defined to be

F := (M1 × M2 )/C,

where C := {(β1 (w), β2 (w)) | w ∈ W }.


58 2 Modules and Representations

(a) Check that C is a K-subspace of M1 × M2 , hence F is a K-vector space.


(b) Assume W, M1 , M2 are finite-dimensional and ker(β1 ) ∩ ker(β2 ) = 0.
Show that then

dimK F = dimK M1 + dimK M2 − dimK W.

(Hint: Show that the linear map w → (β1 (w), β2 (w)) from W to C is an
isomorphism.)
(c) Assume that W, M1 , M2 are A-modules where A is some K-algebra and
that β1 , β2 are A-module homomorphisms. Show that then C and hence
F are A-modules.
2.18. Let E be the pull-back as in Exercise 2.16 and assume M = im(α1 )+im(α2 ).
Now take the push-out F as in Exercise 2.17 where W = E with the same
M1 , M2 , and where βi : W → Mi are the maps

β1 (m1 , m2 ) := m1 , β2 (m1 , m2 ) := m2 (where (m1 , m2 ) ∈ W = E).

(a) Check that ker(β1 ) ∩ ker(β2 ) = 0.


(b) Show that the vector space C in the construction of the pushout is equal
to E.
(c) Show that the pushout F is isomorphic to M. (Hint: Consider the map
from M1 × M2 to M defined by (m1 , m2 ) → α1 (m1 ) + α2 (m2 ).)
2.19. Let K be a field and KG be the group algebra where G is a finite group.
Recall from Example 2.4 that the trivial KG-module is the 1-dimensional
module with action g · x = x for all g ∈ G and x ∈ K, linearly extended to
all of KG. Show that the corresponding representation θ : KG → EndK (K)
satisfies θ (a) = idK for all a ∈ KG. Check that this is indeed a
representation.
2.20. Let G be the group of symmetries of a regular pentagon, that is, the group of
orthogonal transformations of R2 which leave the pentagon invariant. That is,
G is the dihedral group of order 10, a subgroup of GL(R2 ). As a group, G is
generated by the (counterclockwise) rotation by 2π/5, which we call r, and a
reflection s; the defining relations are r 5 = idR2 , s 2 = idR2 and s −1 rs = r −1 .
Consider the group algebra CG of G over the complex numbers, and suppose
that ω ∈ C is some 5-th root of unity. Show that the matrices

ρ(r) = ( ω  0  )   and   ρ(s) = ( 0 1 )
       ( 0 ω⁻¹ )                ( 1 0 )

satisfy the defining relations for G, hence give rise to a group representation
ρ : G → GL2 (C), and a 2-dimensional CG-module.
2.21. Let K be a field. We consider the quiver Q given by 1 ←α− 2 ←β− 3 and the
path algebra KQ as a KQ-module.
(a) Let V := span{e2 , α} ⊆ KQ. Explain why V = KQe2 and hence V is a
KQ-submodule of KQ.
(b) Find a K-basis of the KQ-submodule W := KQβ generated by β.
(c) Express the KQ-modules V and W as a representation of the quiver Q.
Are V and W isomorphic as KQ-modules?
2.22. Let A = KQ where Q is the following quiver: 1 −α→ 2 ←β− 3. This exercise
illustrates that A as a left module and A as a right module have different
properties.
(a) As a left module A = Ae1 ⊕ Ae2 ⊕ Ae3 (see Exercise 2.6). For each Aei ,
find a K-basis, and verify that each of Ae1 and Ae3 is 2-dimensional, and
Ae2 is 1-dimensional.
(b) Show that the only 1-dimensional A-submodule of Ae1 is span{α}.
Deduce that Ae1 cannot be expressed as Ae1 = U ⊕ V where U and
V are non-zero A-submodules.
(c) Explain briefly why the same holds for Ae3 .
(d) As a right A-module, A = e1 A ⊕ e2 A ⊕ e3 A (by the same reasoning as
in Exercise 2.6). Verify that e1 A and e3 A are 1-dimensional.
2.23. Assume A = K[X] and let f = gh where g and h are polynomials in A.
Then Af = (f ) ⊆ (g) = Ag and the factor module is

(g)/(f ) = {rg + (f ) | r ∈ K[X]}.

Show that (g)/(f ) is isomorphic to K[X]/(h) as a K[X]-module.


2.24. Let G be a group and N a normal subgroup of G, and let A = KG, the group
algebra over the field K.
(a) Show that the space I = span{g1 − g2 | g1 g2−1 ∈ N} is a 2-sided ideal of
A.
(b) Show that the A-modules on which I acts as zero are precisely the
A-modules V such that n · v = v for all n ∈ N and v ∈ V .
(c) Explain briefly how this connects Lemmas 2.37 and 2.43.
Chapter 3
Simple Modules and the Jordan–Hölder
Theorem

In this section we introduce simple modules for algebras over a field. Simple
modules can be seen as the building blocks of arbitrary modules. We will make this
precise by introducing and studying composition series, in particular we will prove
the fundamental Jordan–Hölder theorem. This shows that it is an important problem
to classify, if possible, all simple modules of a (finite-dimensional) algebra. We
discuss tools to find and to compare simple modules of a fixed algebra. Furthermore,
we determine the simple modules for algebras of the form K[X]/(f ) where f is a
non-constant polynomial, and also for finite-dimensional path algebras of quivers.

3.1 Simple Modules

Throughout this chapter, K is a field. Let A be a K-algebra.


Definition 3.1. An A-module V is called simple if V is non-zero, and if it does not
have any A-submodules other than 0 and V .
We start by considering some examples; in fact, we have already seen simple
modules in the previous chapter.
Example 3.2.
(1) Every A-module V such that dimK V = 1 is a simple module. In fact, it does
not have any K-subspaces except for 0 and V and therefore it cannot have any
A-submodules except for 0 or V .
(2) Simple modules can have arbitrarily large dimensions. As an example, let
A = Mn (K), and take V = K n to be the natural module. Then we have
seen in Example 2.14 that the only Mn (K)-submodules of K n are the trivial
submodules 0 and K n . That is, K n is a simple Mn (K)-module, and it is
n-dimensional.

© Springer International Publishing AG, part of Springer Nature 2018 61


K. Erdmann, T. Holm, Algebras and Representation Theory, Springer
Undergraduate Mathematics Series, https://doi.org/10.1007/978-3-319-91998-0_3
62 3 Simple Modules and the Jordan–Hölder Theorem

However, if we consider V = K n as a module for the upper triangular matrix
algebra Tn (K) then V is not simple when n ≥ 2, see Exercise 2.14.
(3) Consider the symmetry group D4 of a square, that is, the dihedral group of order
8. Let A = RD4 be the group algebra over the real numbers. We have seen in
Example 2.42 that there is a matrix representation ρ : D4 → GL2 (R) such that

ρ(r) = ( 0 −1 )   and   ρ(s) = ( 1  0 ) ,
       ( 1  0 )                ( 0 −1 )

where r is the rotation by π/2 and s the reflection about the x-axis. The
corresponding A-module is V = R2 , the action of every g ∈ D4 is given by
applying the matrix ρ(g) to (column) vectors.
We claim that V = R2 is a simple RD4 -module. Suppose, for a contradic-
tion, that V has a non-zero submodule U and U ≠ V . Then U is 1-dimensional,
say U is spanned by a vector u. Then ρ(r)u = λu for some λ ∈ R, which means
that u ∈ R2 is an eigenvector of ρ(r). But the matrix ρ(r) does not have a real
eigenvalue, a contradiction.
Since GL2 (R) is a subgroup of GL2 (C), we may view ρ(g) for g ∈ D4
also as elements in GL2 (C). This gives the 2-dimensional module V = C2 for
the group algebra CD4 . In Exercise 3.13 it will be shown that this module is
simple. Note that this does not follow from the argument we used in the case of
the RD4 -module R2 .
(4) Let K be a field and D a division algebra over K, see Definition 1.7. We view D
as a D-module (with action given by multiplication in D). Then D is a simple
D-module: In fact, let 0 ≠ U ⊆ D be a D-submodule, and take an element
0 ≠ u ∈ U . Then 1D = u−1 u ∈ U and hence if d ∈ D is arbitrary, we have
that d = d1D ∈ U . Therefore U = D, and D is a simple D-module.
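The eigenvalue argument in (3) is easy to verify numerically; the following short
Python sketch (using NumPy; our own illustration) confirms that ρ(r) has no real
eigenvalue:

    import numpy as np

    rho_r = np.array([[0.0, -1.0],
                      [1.0,  0.0]])    # rotation by pi/2

    # A 1-dimensional submodule of R^2 would be spanned by a real
    # eigenvector of rho(r), but the eigenvalues are +-i.
    print(np.linalg.eigvals(rho_r))    # [0.+1.j  0.-1.j]: no real eigenvalue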
We describe now a method which allows us to show that a given A-module V is
simple. For any element v ∈ V set Av := {av | a ∈ A}. This is an A-submodule of
V , the submodule of V generated by v, see Example 2.13.
Lemma 3.3. Let A be a K-algebra and let V be a non-zero A-module. Then V is
simple if and only if for each v ∈ V \ {0} we have Av = V .
Proof. First suppose that V is simple, and take an arbitrary element 0 ≠ v ∈ V . We
know that Av is a submodule of V , and it contains v = 1A v, and so Av is non-zero
and therefore Av = V since V is simple.
Conversely, suppose U is a non-zero submodule of V . Then there is some non-
zero u ∈ U . Since U is a submodule, we have Au ⊆ U , but by the hypothesis,
V = Au ⊆ U ⊆ V and hence U = V . □
A module isomorphism takes a simple module to a simple module, see Exer-
cise 3.3; this is perhaps not surprising.
We would like to understand when a factor module of a given module is simple.
This can be answered by using the submodule correspondence (see Theorem 2.28).
Lemma 3.4. Let A be a K-algebra. Suppose V is an A-module and U is an A-
submodule of V with U ≠ V . Then the following conditions are equivalent:
(i) The factor module V /U is simple.
(ii) U is a maximal submodule of V , that is, if U ⊆ W ⊆ V are A-submodules then
W = U or W = V .
Proof. This follows directly from the submodule correspondence in Theorem 2.28. □

Suppose A is an algebra and I is a two-sided ideal of A with I ≠ A. Then we
have the factor algebra B = A/I . According to Lemma 2.37, any B-module M can
be viewed as an A-module, with action am = (a + I )m for m ∈ M; this A-module
is called the inflation of M to A, see Remark 2.38. This can be used as a method to
find simple A-modules:
Lemma 3.5. Let A be a K-algebra, and let B = A/I where I is an ideal of A with
I ≠ A. If S is a simple B-module, then the inflation of S to A is a simple A-module.
Proof. The submodules of S as an A-module are inflations of submodules of S
as a B-module, since the inflation only changes the point of view. Since S has no
submodules other than S and 0 as a B-module, it also has no submodules other than
S and 0 as an A-module, hence is a simple A-module. □
3.2 Composition Series

Roughly speaking, a composition series of a module breaks the module up into


‘simple pieces’. This will make precise in what sense the simple modules are the
building blocks for arbitrary modules.
Definition 3.6. Let A be a K-algebra and V an A-module. A composition series of
V is a finite chain of A-submodules

0 = V0 ⊂ V1 ⊂ V2 ⊂ . . . ⊂ Vn = V

such that the factor modules Vi /Vi−1 are simple, for all 1 ≤ i ≤ n. The length of
the composition series is n, the number of factor modules appearing. We refer to the
Vi as the terms of the composition series.
Example 3.7.
(1) The zero module has a composition series 0 = V0 = 0 of length 0. If V is a
simple module then 0 = V0 ⊂ V1 = V is a composition series, of length 1.
(2) Assume we have a composition series as in Definition 3.6. If Vk is one of the
terms, then Vk ‘inherits’ the composition series

0 = V0 ⊂ V1 ⊂ . . . ⊂ Vk .

(3) Let K = R and take A to be the 2-dimensional algebra over R, with basis
{1A , β} such that β 2 = 0 (see Proposition 1.29); an explicit realisation would be
A = R[X]/(X2 ). Take the A-module V = A, and let V1 be the space spanned
by β, then V1 is a submodule. Since V1 and V /V1 are 1-dimensional, they are
simple (see Example 3.2). Hence V has a composition series

0 = V0 ⊂ V1 ⊂ V2 = V .

(4) Let A = Mn (K) and take the A-module V = A. In Exercise 2.5 we have
considered the A-submodules Ci consisting of the matrices with zero entries
outside the i-th column, where 1 ≤ i ≤ n. In Exercise 3.1 it is shown that every
A-module Ci is isomorphic to the natural A-module K n . In particular, each A-
module Ci is simple (see Example 3.2). On the other hand we have a direct sum
decomposition A = C1 ⊕ C2 ⊕ . . . ⊕ Cn and therefore we have a finite chain
of submodules

0 ⊂ C1 ⊂ C1 ⊕ C2 ⊂ . . . ⊂ C1 ⊕ . . . ⊕ Cn−1 ⊂ A.

Each factor module is simple: By the isomorphism theorem (see Theorem 2.24)

(C1 ⊕ . . . ⊕ Ck )/(C1 ⊕ . . . ⊕ Ck−1 ) ∼= Ck /(Ck ∩ (C1 ⊕ . . . ⊕ Ck−1 )) = Ck /{0} ∼= Ck .
This shows that the above chain is a composition series.


(5) Let A = T2 (K), the 2 × 2 upper triangular matrices over K, and consider the
A-module V = A. One checks that the sets

V1 := { ( a 0 ) | a ∈ K }   and   V2 := { ( a b ) | a, b ∈ K }
        ( 0 0 )                           ( 0 0 )

are A-submodules of V . The chain 0 = V0 ⊂ V1 ⊂ V2 ⊂ V is a composition


series: each factor module Vi /Vi−1 is 1-dimensional and hence simple.
(6) Let A = KQ be the path algebra of the quiver 1 −α→ 2 and let V be the
A-module V = A. Then the following chains are composition series for V :

0 ⊂ span{e2 } ⊂ span{e2 , α} ⊂ V .

0 ⊂ span{α} ⊂ span{α, e1 } ⊂ V .

In each case, the factor modules are 1-dimensional and hence are simple A-
modules.
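The submodules in (5) lend themselves to a quick computational spot check. The
following Python sketch (using NumPy; the random sampling is our own
illustration, not a proof) verifies that left multiplication by upper triangular matrices
preserves V1 :

    import numpy as np

    rng = np.random.default_rng(0)
    v1 = np.array([[1.0, 0.0],
                   [0.0, 0.0]])                    # generator of V1

    for _ in range(100):
        t = np.triu(rng.standard_normal((2, 2)))   # random element of T2(R)
        tv = t @ v1
        # t * v1 stays in V1: every entry except the top-left one vanishes
        assert np.allclose([tv[0, 1], tv[1, 0], tv[1, 1]], 0.0)
    print("V1 is stable under T2(R) (spot check passed)")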
Exercise 3.1. Let A = Mn (K), and let Ci ⊆ A be the space of matrices which
are zero outside the i-th column. Show that Ci is isomorphic to the natural module
V = K n of column vectors. Hint: Show that placing v ∈ V into the i-th column of
a matrix and extending by zeros is a module homomorphism V → Ci .

Remark 3.8. Not every module has a composition series. For instance, take A = K
to be the 1-dimensional algebra over a field K. Then A-modules are K-vector
spaces, and A-submodules are K-subspaces. Therefore it follows from Defini-
tion 3.1 that the simple K-modules are precisely the 1-dimensional K-vector spaces.
This means that an infinite-dimensional K-vector space does not have a composition
series since a composition series is by definition a finite chain of submodules, see
Definition 3.6. On the other hand, we will now see that for any algebra, finite-
dimensional modules have a composition series.
Lemma 3.9. Let A be a K-algebra. Every finite-dimensional A-module V has a
composition series.
Proof. This is proved by induction on the dimension of V . If dimK V = 0 or
dimK V = 1 then we are done by Example 3.7.
So assume now that dimK V > 1. If V is simple then, again by Example 3.7,
V has a composition series. Otherwise, V has proper non-zero submodules. So we
can choose a proper submodule 0 ≠ U ⊂ V of largest possible dimension. Then U
must be a maximal submodule of V and hence the factor module V /U is a simple
A-module, by Lemma 3.4. Since dimK U < dimK V , by the induction hypothesis
U has a composition series, say

0 = U0 ⊂ U1 ⊂ U2 ⊂ . . . ⊂ Uk = U.

Since V /U is simple, it follows that

0 = U0 ⊂ U1 ⊂ U2 ⊂ . . . ⊂ Uk = U ⊂ V

is a composition series of V . □
In Example 3.7 we have seen that a term of a composition series inherits a
composition series. This is a special case of the following result, which holds for
arbitrary submodules.
Proposition 3.10. Let A be a K-algebra, and let V be an A-module. If V has a
composition series, then every submodule U ⊆ V also has a composition series.
Proof. Take any composition series for V , say

0 = V0 ⊂ V1 ⊂ V2 ⊂ . . . ⊂ Vn−1 ⊂ Vn = V .

Taking intersections with the submodule U yields a chain of A-submodules of U ,

0 = V0 ∩ U ⊆ V1 ∩ U ⊆ V2 ∩ U ⊆ . . . ⊆ Vn−1 ∩ U ⊆ Vn ∩ U = U. (3.1)

Note that terms of this chain can be equal; that is, (3.1) is in general not a
composition series. However, if we remove any repetition, so that each module
occurs precisely once, then we get a composition series for U : Consider the factors

(Vi ∩ U )/(Vi−1 ∩ U ). Using that Vi−1 ⊂ Vi and applying the isomorphism theorem
(see Theorem 2.24) we obtain

(Vi ∩ U )/(Vi−1 ∩ U ) = (Vi ∩ U )/(Vi−1 ∩ (Vi ∩ U )) ∼= (Vi−1 + (Vi ∩ U ))/Vi−1 ⊆ Vi /Vi−1 .

Since Vi /Vi−1 is simple the factor modules (Vi ∩ U )/(Vi−1 ∩ U ) occurring in (3.1)
are either zero or simple. □
In general, a module can have many composition series, even infinitely many
different composition series; see Exercise 3.11. The Jordan–Hölder Theorem shows
that any two composition series of a module have the same length and the same
factors up to isomorphism and up to order.
Theorem 3.11 (Jordan–Hölder Theorem). Let A be a K-algebra. Suppose an
A-module V has two composition series

0 = V0 ⊂ V1 ⊂ V2 ⊂ . . . ⊂ Vn−1 ⊂ Vn = V (I)

0 = W0 ⊂ W1 ⊂ W2 ⊂ . . . ⊂ Wm−1 ⊂ Wm = V . (II)

Then n = m, and there is a permutation σ of {1, 2, . . . , n} such that

Vi /Vi−1 ∼= Wσ (i) /Wσ (i)−1 for each i = 1, . . . , n.
Before starting with the proof of this theorem, we give a definition, and we will
also deal with a special setting as a preparation.
Definition 3.12.
(a) The composition series (I) and (II) in Theorem 3.11 are called equivalent.
(b) If an A-module V has a composition series, the simple factor modules Vi /Vi−1
are called the composition factors of V . By the Jordan–Hölder theorem, they
only depend on V , and not on the composition series.
For the proof of the Jordan–Hölder Theorem, which will be given below, we
need to compare two given composition series. We consider now the case when
Vn−1 is different from Wm−1 . Observe that this shows why the simple quotients of
two composition series can come in different orders.
Lemma 3.13. With the notation as in the Jordan–Hölder Theorem, suppose
Vn−1 ≠ Wm−1 and consider the intersection D := Vn−1 ∩ Wm−1 . Then the
following holds:

Vn−1 /D ∼= V /Wm−1 and Wm−1 /D ∼= V /Vn−1 ,

and these factor modules are simple A-modules.



Proof. We first observe that Vn−1 + Wm−1 = V . In fact, we have

Vn−1 ⊆ Vn−1 + Wm−1 ⊆ V .

If the first inclusion was an equality then Wm−1 ⊆ Vn−1 ⊂ V ; but both Vn−1
and Wm−1 are maximal submodules of V (since V /Vn−1 and V /Wm−1 are simple,
see Lemma 3.4). Thus Vn−1 = Wm−1 , a contradiction. Since Vn−1 is a maximal
submodule of V , we conclude that Vn−1 + Wm−1 = V .
Now we apply the isomorphism theorem (Theorem 2.24), and get

V /Wm−1 = (Vn−1 + Wm−1 )/Wm−1 ∼


= Vn−1 /(Vn−1 ∩ Wm−1 ) = Vn−1 /D.

Similarly one shows that V /Vn−1 ∼= Wm−1 /D. □

Proof (of the Jordan–Hölder Theorem). We proceed by induction on n (the length
of the composition series (I)).
The zero module is the only module with a composition series of length n = 0
(see Example 3.7), and the statement of the theorem clearly holds in this case.
Let n = 1. Then V is simple, so W1 = V (since there is no non-zero submodule
except V ), and m = 1. Clearly, the factor modules are the same in both series.
Now suppose n > 1. The inductive hypothesis is that the theorem holds for
modules which have a composition series of length ≤ n − 1.
Assume first that Vn−1 = Wm−1 =: U , say. Then the module U inherits a
composition series of length n − 1, from (I). By the inductive hypothesis, any two
composition series of U have length n − 1. So the composition series of U inherited
from (II) also has length n − 1 and therefore m − 1 = n − 1 and m = n. Moreover,
by the inductive hypothesis, there is a permutation σ of {1, . . . , n − 1} such that
Vi /Vi−1 ∼= Wσ (i) /Wσ (i)−1 . We also have Vn /Vn−1 = V /Vn−1 = Wn /Wn−1 . So
if we view σ as a permutation of {1, . . . , n} fixing n then we have the required
permutation.
Now assume Vn−1 ≠ Wm−1 . We define D := Vn−1 ∩ Wm−1 as in Lemma 3.13.
Take a composition series of D (it exists by Proposition 3.10), say

0 = D0 ⊂ D1 ⊂ . . . ⊂ Dt = D.

Then V has composition series

(III) 0 = D0 ⊂ D1 ⊂ . . . ⊂ Dt = D ⊂ Vn−1 ⊂ V
(IV) 0 = D0 ⊂ D1 ⊂ . . . ⊂ Dt = D ⊂ Wm−1 ⊂ V

since, by Lemma 3.13, the quotients Vn−1 /D and Wm−1 /D are simple. Moreover,
by Lemma 3.13, the composition series (III) and (IV) are equivalent since only the
two top factors are interchanged, up to isomorphism.
Next, we claim that m = n. The module Vn−1 inherits a composition series of
length n − 1 from (I). So by the inductive hypothesis, all composition series of

Vn−1 have length n − 1. But the composition series which is inherited from (III) has
length t + 1 and hence n − 1 = t + 1. Similarly, the module Wm−1 inherits from
(IV) a composition series of length t + 1 = n − 1, so by the inductive hypothesis all
composition series of Wm−1 have length n − 1. In particular, the composition series
inherited from (II) does, and therefore m − 1 = n − 1 and m = n.
Now we show that the composition series (I) and (III) are equivalent. By the
inductive hypothesis, the composition series of Vn−1 inherited from (I) and (III) are
equivalent, that is, there is a permutation of n − 1 letters, γ say, such that

Di /Di−1 ∼= Vγ (i) /Vγ (i)−1 (i ≠ n − 1), and Vn−1 /D ∼= Vγ (n−1) /Vγ (n−1)−1 .

We view γ as a permutation of n letters (fixing n), and then also

V /Vn−1 = Vn /Vn−1 ∼= Vγ (n) /Vγ (n)−1 ,

which proves that (I) and (III) are equivalent.


Similarly one shows that (II) and (IV) are equivalent. We have already seen that
(III) and (IV) are equivalent as well. Therefore, it follows that (I) and (II) are also
equivalent. □

Example 3.14. Let K be a field.
(1) Let A = Mn (K) and V = A as an A-module. We consider the A-submodules
Ci of A (where 1 ≤ i ≤ n) consisting of matrices which are zero outside the
i-th column. Define Vi := C1 ⊕ C2 ⊕ . . . ⊕ Ci for 0 ≤ i ≤ n. Then we have a
chain of submodules

0 = V0 ⊂ V1 ⊂ . . . ⊂ Vn−1 ⊂ Vn = V .

This is a composition series, as we have seen in Example 3.7.


On the other hand, the A-module V also has submodules

Wj := Cn ⊕ Cn−1 ⊕ . . . ⊕ Cn−j +1 ,

for any 0 ≤ j ≤ n, and this gives us a series of submodules

0 = W0 ⊂ W1 ⊂ . . . ⊂ Wn−1 ⊂ Wn = V .

This also is a composition series, since Wj /Wj −1 ∼= Cn−j +1 ∼= K n which


is a simple A-module. As predicted by the Jordan–Hölder theorem, both
composition series have the same length n. Since all composition factors are
isomorphic as A-modules, we do not need to worry about a permutation.
(2) Let A = Tn (K) be the upper triangular matrix algebra and V = K n the natural
A-module. By Exercise 2.14, V has Tn (K)-submodules Vi for 0 ≤ i ≤ n
consisting of those vectors in K n with non-zero entries only in the first i

coordinates; in particular, each Vi has dimension i, with V0 = 0 and Vn = K n .


This gives a series of Tn (K)-submodules

0 = V0 ⊂ V1 ⊂ . . . ⊂ Vn−1 ⊂ Vn = K n .

Each factor module Vi /Vi−1 is 1-dimensional and hence simple, so this is a


composition series. Actually, this is the only composition series of the Tn (K)-
module K n , since by Exercise 2.14 the Vi are the only Tn (K)-submodules
of K n .
Moreover, also by Exercise 2.14, the factor modules Vi /Vi−1 are pair-
wise non-isomorphic. We will see in Example 3.28 that these n simple
Tn (K)-modules of dimension 1 are in fact all simple Tn (K)-modules, up to
isomorphism.
(3) This example also shows that a module can have non-isomorphic composition
factors. Let A := K × K, the direct product of K-algebras, and view A as an
A-module. Let S1 := {(x, 0) | x ∈ K} and S2 := {(0, y) | y ∈ K}. Then each
Si is an A-submodule of A. It is 1-dimensional and therefore it is simple. We
have a series

0 ⊂ S1 ⊂ A

and by the isomorphism theorem, A/S1 ∼= S2 . This is therefore a composition


series. We claim that S1 and S2 are not isomorphic: Let φ : S1 → S2 be an
A-module homomorphism, then

φ(a(x, 0)) = aφ((x, 0)) ∈ S2

for each a ∈ A. We take a = (1, 0), then a(x, 0) = (x, 0) but a(0, y) = 0 for
all (0, y) ∈ S2 . That is, φ = 0.

3.3 Modules of Finite Length

Because of the Jordan–Hölder theorem we can define the length of a module. This
is a very useful natural generalization of the dimension of a vector space.
Definition 3.15. Let A be a K-algebra. For every A-module V the length ℓ(V ) is
defined as the length of a composition series of V (see Definition 3.6), if it exists;
otherwise we set ℓ(V ) = ∞. An A-module V is said to be of finite length if ℓ(V ) is
finite, that is, when V has a composition series.
Note that the length of a module is well-defined because of the Jordan–Hölder
theorem which in particular says that all composition series of a module have the
same length.

Example 3.16. Let K be a field.


(1) Let A = K, the 1-dimensional K-algebra. Then A-modules are just K-vector
spaces and simple A-modules are precisely the 1-dimensional K-vector spaces.
In particular, the length of a composition series is just the dimension, that is, for
every K-vector space V we have ℓ(V ) = dimK V .
(2) Let A be a K-algebra and V an A-module. By Example 3.7 we have that
ℓ(V ) = 0 if and only if V = 0, the zero module, and that ℓ(V ) = 1 if and only
if V is a simple A-module. In addition, we have seen there that for A = Mn (K),
the natural module V = K n has ℓ(V ) = 1, and that for the A-module A we
have ℓ(A) = n. Roughly speaking, the length gives a measure of how far a
module is away from being simple.
We now collect some fundamental properties of the length of modules. We first
prove a result analogous to Proposition 3.10, but now also including factor modules.
It generalizes properties from linear algebra: Let V be a finite-dimensional vector
space over K and U a subspace, then U and V /U also are finite-dimensional and
dimK V = dimK U + dimK V /U . Furthermore, dimK U < dimK V if U ≠ V .
Proposition 3.17. Let A be a K-algebra and let V be an A-module which has a
composition series. Then for every A-submodule U ⊆ V the following holds.
(a) The factor module V /U has a composition series.
(b) There exists a composition series of V in which U is one of the terms. Moreover,
for the lengths we have ℓ(V ) = ℓ(U ) + ℓ(V /U ).
(c) We have ℓ(U ) ≤ ℓ(V ). If U ≠ V then ℓ(U ) < ℓ(V ).
Proof. (a) Let 0 = V0 ⊂ V1 ⊂ . . . ⊂ Vn−1 ⊂ Vn = V be a composition series for
V . We wish to relate this to a sequence of submodules of V /U . In general, U need
not be related to any of the modules Vi , but we have a series of submodules of V /U
(using the submodule correspondence) of the form

0 = (V0 + U )/U ⊆ (V1 + U )/U ⊆ . . . ⊆ (Vn−1 + U )/U ⊆ (Vn + U )/U = V /U.   (3.2)

Note that in this series terms can be equal. Using the isomorphism theorem
(Theorem 2.24) we analyze the factor modules

((Vi + U )/U )/((Vi−1 + U )/U ) ∼= (Vi + U )/(Vi−1 + U )
= (Vi−1 + U + Vi )/(Vi−1 + U )

= Vi /((Vi−1 + U ) ∩ Vi )

where the equality in the second step holds since Vi−1 ⊂ Vi . We also have
Vi−1 ⊆ Vi−1 + U and therefore Vi−1 ⊆ (Vi−1 + U ) ∩ Vi ⊆ Vi . But Vi−1 is a
maximal submodule of Vi and therefore the factor module Vi /(Vi−1 + U ) ∩ Vi is

either zero or is simple. We omit terms where the factor in the series (3.2) is zero,
and we obtain a composition series for V /U , as required.
(b) By assumption V has a composition series. Then by Proposition 3.10 and by
part (a) the modules U and V /U have composition series. Take a composition series
for U ,

0 = U0 ⊂ U1 ⊂ . . . ⊂ Ut −1 ⊂ Ut = U.

By the submodule correspondence (see Theorem 2.28) every submodule of V /U


has the form Vi /U for some submodule Vi ⊆ V containing U , hence a composition
series of V /U is of the form

0 = V0 /U ⊂ V1 /U ⊂ . . . ⊂ Vr−1 /U ⊂ Vr /U = V /U.

We have that Ut = U = V0 ; and by combining the two series we obtain

0 = U0 ⊂ . . . ⊂ Ut ⊂ V1 ⊂ . . . ⊂ Vr = V .

In this series all factor modules are simple. This is clear for Ui /Ui−1 ; and further-
more, by the isomorphism theorem (Theorem 2.24) Vj /Vj −1 ∼= (Vj /U )/(Vj −1 /U )
is simple. Therefore, we have constructed a composition series for V in which
U = Ut appears as one of the terms.
For the lengths we get ℓ(V ) = t + r = ℓ(U ) + ℓ(V /U ), as claimed.
(c) By part (b) we have ℓ(U ) = ℓ(V ) − ℓ(V /U ) ≤ ℓ(V ). Moreover, if U ≠ V then
V /U is non-zero; so ℓ(V /U ) > 0 and ℓ(U ) = ℓ(V ) − ℓ(V /U ) < ℓ(V ). □

3.4 Finding All Simple Modules

The Jordan–Hölder theorem shows that every module which has a composition
series can be built from simple modules. Therefore, it is a fundamental problem of
representation theory to understand what the simple modules of a given algebra are.
Recall from Example 2.25 the following notion. Let A be a K-algebra and V an
A-module. Then for every v ∈ V we set AnnA (v) = {a ∈ A | av = 0}, and call
this the annihilator of v in A. We have seen in Example 2.25 that for every v ∈ V
there is an isomorphism of A-modules A/AnnA (v) ∼= Av. In the context of simple
modules this takes the following form, which we restate here for convenience.
Lemma 3.18. Let A be a K-algebra and S a simple A-module. Then for every
non-zero s ∈ S we have that S ∼= A/AnnA (s) as A-modules.
Proof. As in Example 2.25 we consider the A-module homomorphism
ψ : A → S , ψ(a) = as. Since S is simple and s non-zero, this map is surjective by
Lemma 3.3, and by definition the kernel is AnnA (s). So the isomorphism theorem
yields A/AnnA (s) ∼= im(ψ) = As = S. □


This implies in particular that if an algebra has a composition series, then it can
only have finitely many simple modules:
Theorem 3.19. Let A be a K-algebra which has a composition series as an A-
module. Then every simple A-module occurs as a composition factor of A. In
particular, there are only finitely many simple A-modules, up to isomorphism.
Proof. By Lemma 3.18 we know that if S is a simple A-module then S ∼= A/I for
some A-submodule I of A. By Proposition 3.17 there is some composition series
of A in which I is one of the terms. Since A/I is simple there are no further A-
submodules between I and A (see Lemma 3.4). This means that I can only appear as
the penultimate entry in this composition series, and S ∼= A/I , so it is a composition
factor of A. □

For finite-dimensional algebras we have an interesting consequence.
Corollary 3.20. Let A be a finite-dimensional K-algebra. Then every simple A-
module is finite-dimensional.
Proof. Suppose S is a simple A-module, then by Lemma 3.18, we know that S is
isomorphic to a factor module of A. Hence if A is finite-dimensional, so is S. □
Remark 3.21.
(a) In Theorem 3.19 the assumption that A has a composition series as an A-module
is essential. For instance, consider the polynomial algebra A = K[X] when
K is infinite. There are infinitely many simple A-modules which are pairwise
non-isomorphic. In fact, take a one-dimensional vector space V = span{v}
and make it into a K[X]-module Vλ by setting X · v = λv for λ ∈ K. For
λ ≠ μ the modules Vλ and Vμ are not isomorphic, see Example 2.23; however
they are 1-dimensional and hence simple. In particular, we can conclude from
Theorem 3.19 that A = K[X] cannot have a composition series as an A-module.
(b) In Corollary 3.20 the assumption on A is essential. Infinite-dimensional algebras
can have simple modules of infinite dimension. For instance, let Q be the two-
loop quiver with one vertex and two loops,

and let A = KQ be the path algebra. Exercise 3.6 constructs for each n ∈ N
a simple A-module of dimension n and even an infinite-dimensional simple A-
module.
Example 3.22. Let A = Mn (K), the algebra of n × n-matrices over K. We have
seen a composition series of A in Example 3.7, in which every composition factor is
isomorphic to the natural module K n . So by Theorem 3.19 the algebra Mn (K) has
precisely one simple module, up to isomorphism, namely the natural module K n of
dimension n.

3.4.1 Simple Modules for Factor Algebras of Polynomial Algebras

We will now determine the simple modules for an algebra A of the form K[X]/I
where I is a non-zero ideal with I = K[X]; hence I = (f ) where f is a polynomial
of positive degree. Note that this does not require us to know a composition series
of A, in fact we could have done this already earlier, after Lemma 3.4.
Proposition 3.23. Let A = K[X]/(f ) with f ∈ K[X] of positive degree.
(a) The simple A-modules are up to isomorphism precisely the A-modules
K[X]/(h) where h is an irreducible polynomial dividing f .
(b) Write f = f1a1 . . . frar , with ai ∈ N, as a product of irreducible polynomials
fi ∈ K[X] which are pairwise coprime. Then A has precisely r simple modules,
up to isomorphism, namely K[X]/(f1 ), . . . , K[X]/(fr ).
Proof. (a) First, let h ∈ K[X] be an irreducible polynomial dividing f . Then
K[X]/(h) is an A-module, by Exercise 2.23, with A-action given by

(g1 + (f ))(g2 + (h)) = g1 g2 + (h).

Since h is irreducible, the ideal (h) is maximal, and hence K[X]/(h) is a simple
A-module, by Lemma 3.4.
Conversely, let S be any simple A-module. By Lemmas 3.18 and 3.4 we know that
S is isomorphic to A/U where U is a maximal submodule of A. By the submodule
correspondence, see Theorem 2.28, we know U = W/(f ) where W is an ideal of
K[X] containing (f ), that is, W = (h) where h ∈ K[X] and h divides f . Applying
the isomorphism theorem yields

A/U = (K[X]/(f ))/(W/(f )) ∼= K[X]/W.

Isomorphisms preserve simple modules (see Exercise 3.3), so with A/U the module
K[X]/W is also simple. This means that W = (h) is a maximal ideal of K[X] and
then h is an irreducible polynomial.
(b) By part (a), every simple A-module is isomorphic to one of
K[X]/(f1 ), . . . , K[X]/(fr ) (use that K[X] has the unique factorization property,
hence f1 , . . . , fr are the unique irreducible divisors of f , up to multiplication by
units). On the other hand, these A-modules are pairwise non-isomorphic: suppose
ψ : K[X]/(fi ) → K[X]/(fj ) is an A-module homomorphism, we show that for
i ≠ j it is not injective. Write ψ(1 + (fi )) = g + (fj ) and consider the coset
fj + (fi ). Since fi and fj are irreducible and coprime, this coset is not the zero
element in K[X]/(fi ). But it is in the kernel of ψ, since

ψ(fj + (fi )) = ψ((fj + (fi ))(1 + (fi ))) = (fj + (fi ))ψ(1 + (fi ))
= (fj + (fi ))(g + (fj )) = fj g + (fj ),

which is the zero element in K[X]/(fj ). In particular, ψ is not an isomorphism. □

Remark 3.24. We can use this to find a composition series of the algebra
A = K[X]/(f ) as an A-module: Let f = f1 f2 . . . ft with fi ∈ K[X] irreducible,
we allow repetitions (that is, the fi are not necessarily pairwise coprime). This gives
a series of submodules of A

0 ⊂ I1 /(f ) ⊂ I2 /(f ) ⊂ . . . ⊂ It −1 /(f ) ⊂ A

where Ij is the ideal of K[X] generated by fj +1 . . . ft . Then I1 /(f ) ∼= K[X]/(f1 )
(see Exercise 2.23) and

(Ij /(f ))/(Ij −1 /(f )) ∼= Ij /Ij −1 ∼= K[X]/(fj ),

so that all factor modules are simple. Hence we have found a composition series
of A.
Of course, one would also get other composition series by changing the order of
the irreducible factors. Note that the factorisation of a polynomial into irreducible
factors depends on the field K.
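Proposition 3.23 and Remark 3.24 reduce the study of K[X]/(f ) to factoring f .
For K = Q this is easy to explore with SymPy; the polynomial below is our own
example, not one from the text:

    import sympy as sp

    X = sp.symbols('X')
    f = (X**2 + 1) * (X - 1)**2        # an example polynomial over Q

    const, factors = sp.factor_list(f) # irreducible factors over the rationals
    for g, mult in factors:
        print(g, '-> simple module of dimension', sp.degree(g, X))
    # X - 1    -> simple module of dimension 1
    # X**2 + 1 -> simple module of dimension 2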
Example 3.25.
(1) Over the complex numbers, every polynomial f ∈ C[X] of positive degree
splits into linear factors. Hence every simple C[X]/(f )-module is one-
dimensional.
The same works more generally for K[X]/(f ) when K is algebraically
closed. (Recall that a field K is algebraically closed if every non-constant
polynomial in K[X] is a product of linear factors.)
We will see later, in Corollary 3.38, that this is a special case of a more
general result about commutative algebras over algebraically closed fields.
(2) As an explicit example, let G = g be a cyclic group of order n, and let
A = CG be the group algebra over C. Then A is isomorphic to the factor
algebra C[X]/(Xn −1), see Example 1.27. The polynomial Xn −1 has n distinct
roots in C, namely e2kπi/n where 0 ≤ k ≤ n − 1, so it splits into linear factors
of the form


n−1
Xn − 1 = (X − e2kπi/n ).
k=0

According to Proposition 3.23 the algebra C[X]/(Xn − 1) has precisely


n simple modules, up to isomorphism, each 1-dimensional, namely
Sk := C[X]/(X − e2kπi/n ) for k = 0, 1, . . . , n − 1. The structure of the module
Sk is completely determined by the action of the coset of X in C[X]/(Xn − 1),
which is clearly given by multiplication by e2kπi/n .
(3) Now we consider the situation over the real numbers. We first observe that every
irreducible polynomial g ∈ R[X] has degree 1 or 2. In fact, if g ∈ R[X]

has a non-real root z ∈ C, then the complex conjugate z̄ is also a root: write
g = ∑_i ai X^i , where ai ∈ R. Taking complex conjugates in 0 = g(z) then gives

0 = 0̄ = ∑_i āi z̄^i = ∑_i ai z̄^i = g(z̄).

Then the product of the two linear factors

(X − z)(X − z̄) = X2 − (z + z̄)X + zz̄

is a polynomial with real coefficients, and a divisor of g.


This has the following immediate consequence for the simple modules of
algebras of the form R[X]/(f ): Every simple R[X]/(f )-module has dimension
1 or 2.
As an example of a two-dimensional module consider the algebra
A = R[X]/(X2 + 1), which is a simple A-module since X2 + 1 is irreducible
in R[X].
(4) Over the rational numbers, simple Q[X]/(f )-modules can have arbitrarily large
dimensions. For example, consider the algebra Q[X]/(Xp − 1), where p is a
prime number. We have the factorisation

Xp − 1 = (X − 1)(Xp−1 + Xp−2 + . . . + X + 1).

Since p is prime, the second factor is an irreducible polynomial in Q[X] (this
follows by applying the Eisenstein criterion from basic algebra). So Proposi-
tion 3.23 shows that Q[X]/(Xp−1 + . . . + X + 1) is a simple Q[X]/(Xp − 1)-
module, it has dimension p − 1.
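The dependence on the field in Example 3.25 can be explored computationally.
The following Python sketch (using SymPy and the standard library module cmath;
our own illustration with p = 5) factors Xp − 1 over Q and lists the complex roots
of unity by which the coset of X acts in case (2):

    import cmath
    import sympy as sp

    X = sp.symbols('X')
    p = 5
    # over Q: exactly two irreducible factors, as in (4)
    print(sp.factor_list(X**p - 1))
    # e.g. (1, [(X - 1, 1), (X**4 + X**3 + X**2 + X + 1, 1)])

    # over C: the coset of X acts on the k-th simple module of C[X]/(X^p - 1)
    # by the scalar e^{2 k pi i / p}, as in (2)
    zs = [cmath.exp(2j * cmath.pi * k / p) for k in range(p)]
    print(all(abs(z**p - 1) < 1e-12 for z in zs))   # True: all are roots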

3.4.2 Simple Modules for Path Algebras

In this section we completely describe the simple modules for finite-dimensional


path algebras of quivers.
Let Q be a quiver without oriented cycles, then for any field K, the path algebra
A = KQ is finite-dimensional (see Exercise 1.2). We label the vertices of Q by
Q0 = {1, . . . , n}. Recall that for every vertex i ∈ Q0 there is a trivial path ei of
length 0. We consider the A-module Aei generated by ei ; as a vector space this
module is spanned by the paths which start at i. The A-module Aei has an A-
submodule Ji := Aei≥1 spanned by all paths of positive length starting at vertex i.
Hence we get n one-dimensional (hence simple) A-modules as factor modules of
the form

Si := Aei /Ji = span{ei + Ji },



for i = 1, . . . , n. The A-action on these simple modules is given by

ei (ei + Ji ) = ei + Ji and p(ei + Ji ) = 0 for all p ∈ P \ {ei },

where P denotes the set of paths in Q. The A-modules S1 , . . . , Sn are pairwise non-
isomorphic. In fact, let ϕ : Si → Sj be an A-module homomorphism for some
i ≠ j . Then there exists a scalar λ ∈ K such that ϕ(ei + Ji ) = λej + Jj . Hence we
get

ϕ(ei + Ji ) = ϕ(ei2 + Ji ) = ϕ(ei (ei + Ji )) = ei (λej + Jj ) = λ(ei ej + Jj ) = 0,

since ei ej = 0 for i ≠ j . In particular, ϕ is not an isomorphism.


We now show that this gives all simple KQ-modules, up to isomorphism.
Theorem 3.26. Let K be a field and let Q be a quiver without oriented cycles.
Assume the vertices of Q are denoted Q0 = {1, . . . , n}. Then the finite-dimensional
path algebra A = KQ has precisely the simple modules (up to isomorphism) given
by S1 , . . . , Sn , where Si = Aei /Ji for i ∈ Q0 . In particular, all simple A-modules
are one-dimensional, and they are labelled by the vertices of Q.
To identify simple modules, we first determine the maximal submodules of Aei .
This will then be used in the proof of Theorem 3.26.
Lemma 3.27. With the above notation the following hold:
(a) For each vertex i ∈ Q0 , the space ei Aei is 1-dimensional, and is spanned by ei .
(b) The only maximal submodule of Aei is Ji = Aei≥1, and ei Ji = 0.
Proof. (a) The elements in ei Aei are linear combinations of paths from vertex i to
vertex i. The path of length zero, that is ei , is the only path from i to i since Q has
no oriented cycle.
(b) First, Ji is a maximal submodule of Aei , since the factor module is 1-
dimensional, hence simple. Furthermore, there are no paths of positive length from i
to i and therefore ei Ji = 0. Next, let U be any submodule of Aei with U ≠ Aei ; we
must show that U is contained in Ji . If not, then U contains an element u = cei + u′
where 0 ≠ c ∈ K and u′ ∈ Ji . Then also ei u ∈ U , but ei u = c(ei2 ) + ei u′ = cei + 0
(because ei Ji = 0), and hence ei ∈ U . It follows that U = Aei , a contradiction.
Hence if U is maximal then U = Ji . □

Proof of Theorem 3.26. Each Si is 1-dimensional, hence is simple. We have already
seen that for i = j , the modules Si and Sj are not isomorphic.
Now let S be any simple A-module and take 0 ≠ s ∈ S. Then

s = 1 A s = e1 s + e2 s + . . . + en s

and there is some i such that ei s ≠ 0. By Lemma 3.3 we know that S = Aei s. We
have the A-module homomorphism

ψ : Aei → S, ψ(aei ) = aei s.

It is surjective and hence, by the isomorphism theorem, we have

S∼
= Aei /ker(ψ).

Since S is simple, the kernel of ψ is a maximal submodule of Aei , by the


submodule correspondence theorem. The only maximal submodule of Aei is Ji ,
by Lemma 3.27. Hence S is isomorphic to Si . □

Example 3.28. We have seen in Remark 1.25 (see also Exercise 1.18) that the
algebra Tn (K) of upper triangular matrices is isomorphic to the path algebra of
the quiver Q

1 ←− 2 ←− . . . ←− n − 1 ←− n

Theorem 3.26 shows that KQ, and hence Tn (K), has precisely n simple modules, up
to isomorphism. However, we have already seen n pairwise non-isomorphic simple
Tn (K)-modules in Example 3.14. Thus, these are all simple Tn (K)-modules, up to
isomorphism.
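The labelling of the simple Tn (K)-modules by vertices can also be checked by hand: sending an upper triangular matrix to its i-th diagonal entry is an algebra homomorphism Tn (K) → K, and each of these maps realises one of the n one-dimensional simple modules of Example 3.14. The following small Python/NumPy sketch (our own illustration, with K = R; it is not part of the original text) verifies the multiplicativity numerically:

    import numpy as np

    n = 3
    rng = np.random.default_rng(0)

    def random_upper(n):
        # a random element of T_n(K) for K = R: zero below the diagonal
        return np.triu(rng.standard_normal((n, n)))

    # d_i : T_n(K) -> K, a |-> a_ii is an algebra homomorphism, so K becomes
    # a 1-dimensional T_n(K)-module where a acts as multiplication by a_ii;
    # these are the n pairwise non-isomorphic simple modules S_1, ..., S_n.
    for i in range(n):
        a, b = random_upper(n), random_upper(n)
        assert np.isclose((a @ b)[i, i], a[i, i] * b[i, i])  # d_i(ab) = d_i(a)d_i(b)
    print("each diagonal entry yields a 1-dimensional simple T_n(K)-module")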

3.4.3 Simple Modules for Direct Products

In this section we will describe the simple modules for direct products
A = A1 × . . . × Ar of algebras. We will show that the simple A-modules are
precisely the simple Ai -modules, viewed as A-modules by letting the other factors
act as zero. We have seen a special case in Example 3.14.
Let A = A1 × . . . × Ar . The algebra A contains εi := (0, . . . , 0, 1Ai , 0, . . . , 0)
for 1 ≤ i ≤ r, and εi commutes with all elements of A. Moreover, εi εj = 0 for
i ≠ j and also εi² = εi ; and we have

1A = ε1 + . . . + εr .

Each Ai is isomorphic to a factor algebra of A, via the projection map

πi : A → Ai , πi (a1 , . . . , ar ) = ai .

This is convenient for computing with inflations of modules. Indeed, if M is an
Ai -module and we view it as an A-module by the usual inflation (see
Remark 2.38), the formula for the action of A is

(a1 , . . . , ar ) · m := ai m for (a1 , . . . , ar ) ∈ A, m ∈ M.
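These formulas are easy to watch in action for matrix algebras: realising A1 × A2 as block-diagonal matrices, the elements εi become block identity matrices. A minimal NumPy sketch (our own set-up, with A1 = M2 (R) and A2 = M3 (R); not from the text):

    import numpy as np

    # realise A = M_2(R) x M_3(R) inside M_5(R) as block-diagonal matrices
    def embed(a1, a2):
        m = np.zeros((5, 5))
        m[:2, :2], m[2:, 2:] = a1, a2
        return m

    eps1 = embed(np.eye(2), np.zeros((3, 3)))   # eps_1 = (1_{A_1}, 0)
    eps2 = embed(np.zeros((2, 2)), np.eye(3))   # eps_2 = (0, 1_{A_2})

    assert np.allclose(eps1 @ eps1, eps1)        # eps_i^2 = eps_i
    assert np.allclose(eps1 @ eps2, 0)           # eps_1 eps_2 = 0
    assert np.allclose(eps1 + eps2, np.eye(5))   # 1_A = eps_1 + eps_2

    # inflation of the natural M_2(R)-module R^2: the second factor acts as 0
    a = embed(np.diag([2.0, 3.0]), np.ones((3, 3)))
    v = np.array([1.0, 1.0, 0.0, 0.0, 0.0])      # R^2 sitting inside R^5
    print((a @ v)[:2])                           # only the A_1-component acts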



Proposition 3.29. Let K be a field and let A = A1 × . . . × Ar be a direct product


of K-algebras. Then for i ∈ {1, . . . , r}, any simple Ai -module S becomes a simple
A-module by setting

(a1 , . . . , ar ) · s = ai s for all (a1 , . . . , ar ) ∈ A and s ∈ S.

Proof. By the above, we only have to show that S is also simple as an A-module.
This is a special case of Lemma 3.5. 

We will now show that every simple A-module is of the form as in Proposi-
tion 3.29. For this we will more generally describe A-modules and we use the
elements εi and their properties.
Lemma 3.30. Let K be a field and A = A1 × . . . × Ar a direct product of
K-algebras. Moreover, let εi := (0, . . . , 0, 1Ai , 0, . . . , 0) for 1 ≤ i ≤ r. Then
for every A-module M the following holds.
(a) Let Mi := εi M, then Mi is an A-submodule of M, and M = M1 ⊕ . . . ⊕ Mr ,
the direct sum of these submodules.
(b) If M is a simple A-module then there is precisely one i ∈ {1, . . . , r} such that
Mi ≠ 0, and this Mi is a simple A-module.
Proof. (a) We have Mi = {εi m | m ∈ M}. Since εi commutes with all elements of
A, we see that if a ∈ A then a(εi m) = εi am, therefore each Mi is an A-submodule
of M.
To prove M is a direct sum, we first see that M = M1 + . . . + Mr . In fact, for
every m ∈ M we have


m = 1A m = (∑_{j=1}^r εj )m = ∑_{j=1}^r εj m ∈ M1 + . . . + Mr .


Secondly, we have to check that Mi ∩ (∑_{j≠i} Mj ) = 0 for each i ∈ {1, . . . , r}. To
this end, suppose x := εi m = ∑_{j≠i} εj mj ∈ Mi ∩ (∑_{j≠i} Mj ). Since εi εi = εi and
εi εj = 0 for j ≠ i we then have

x = εi x = εi (∑_{j≠i} εj mj ) = ∑_{j≠i} εi εj mj = 0.

This shows that M = M1 ⊕ . . . ⊕ Mr , as claimed.


(b) By part (a) we have the direct sum decomposition M = M1 ⊕ . . . ⊕ Mr , where
the Mi are A-submodules of M. If M is a simple A-module, then precisely one of
these submodules Mi must be non-zero. In particular, M ≅ Mi and Mi is a simple
A-module. 

We can now completely describe all simple modules for a direct product.

Corollary 3.31. Let K be a field and A = A1 × . . . × Ar a direct product of


K-algebras. Then the simple A-modules are precisely the inflations of the simple
Ai -modules, for i = 1, . . . , r.
Proof. We have seen in Proposition 3.29 that the inflation of any simple Ai -module
is a simple A-module.
Conversely, if M is a simple A-module then by Lemma 3.30, M ≅ Mi for
precisely one i ∈ {1, . . . , r}. The A-action on Mi = εi M is given by the Ai -action
and the zero action for all factors Aj with j = i, since εj εi = 0 for j = i. Note that
Mi is also simple as an Ai -module. In other words, M is the inflation of the simple
Ai -module Mi . 

Example 3.32. For a field K let A = Mn1 (K) × . . . × Mnr (K) for some
natural numbers ni . By Example 3.22, the matrix algebra Mni (K) has only one
simple module, which is the natural module K ni (up to isomorphism). Then by
Corollary 3.31, the algebra A has precisely r simple modules, given by the inflations
of the natural Mni (K)-modules. In particular, the r simple modules of A have
dimensions n1 , . . . , nr .

3.5 Schur’s Lemma and Applications

The Jordan–Hölder Theorem shows that simple modules are the ‘building blocks’
for arbitrary finite-dimensional modules. So it is important to understand simple
modules. The first question one might ask is, given two simple modules, how can
we find out whether or not they are isomorphic? This is answered by Schur’s lemma,
which we will now present. Although it is elementary, Schur’s lemma has many
important applications.
Theorem 3.33 (Schur’s Lemma). Let A be a K-algebra where K is a field.
Suppose S and T are simple A-modules and φ : S −→ T is an A-module
homomorphism. Then the following holds.
(a) Either φ = 0, or φ is an isomorphism. In particular, for every simple A-module
S the endomorphism algebra EndA (S) is a division algebra.
(b) Suppose S = T , and S is finite-dimensional, and let K be algebraically closed.
Then φ = λ idS for some scalar λ ∈ K.
Proof. (a) Suppose φ is non-zero. The kernel ker(φ) is an A-submodule of S and
ker(φ) ≠ S since φ ≠ 0. But S is simple, so ker(φ) = 0 and φ is injective.
Similarly, the image im(φ) is an A-submodule of T , and T is simple. Since
φ ≠ 0, we know im(φ) ≠ 0 and therefore im(φ) = T . So φ is also surjective,
and we have proved that φ is an isomorphism.
The second statement is just a reformulation of the first one, using the definition
of a division algebra (see Definition 1.7).

(b) Since K is algebraically closed, the K-linear map φ on the finite-dimensional


vector space S has an eigenvalue, say λ ∈ K. That is, there is some non-zero vector
v ∈ S such that φ(v) = λv. The map λ idS is also an A-module homomorphism,
and so is φ − λ idS . The kernel of φ − λ idS is an A-submodule and is non-zero (it
contains v). Since S is simple, it follows that ker(φ − λ idS ) = S, so that we have
φ = λ idS . 
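For the natural module of a matrix algebra, part (b) can be verified by direct computation: an M2 (C)-module endomorphism of C² is a matrix T with T X = XT for all X ∈ M2 (C), and solving these linear equations forces T to be a scalar multiple of the identity. A short SymPy sketch of this (our own illustration, not from the text):

    import sympy as sp

    t11, t12, t21, t22 = sp.symbols('t11 t12 t21 t22')
    T = sp.Matrix([[t11, t12], [t21, t22]])

    # T is an M_2(C)-module endomorphism of C^2 iff TX = XT for all X;
    # it suffices to test the elementary matrices E_kl, which span M_2(C).
    eqs = []
    for k in range(2):
        for l in range(2):
            E = sp.zeros(2, 2)
            E[k, l] = 1
            eqs.extend(list(T * E - E * T))   # all entries of the commutator

    print(sp.solve(eqs, [t12, t21, t22], dict=True))
    # [{t12: 0, t21: 0, t22: t11}] -- so T = t11 * Id, as Schur's lemma predicts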

Remark 3.34. The two assumptions in part (b) of Schur’s lemma are both needed
for the endomorphism algebra to be 1-dimensional.
To see that one needs that K is algebraically closed, consider K = R and A = H,
the quaternions, from Example 1.8. We have seen that H is a division algebra over R,
and hence it is a simple H-module (see Example 3.2). But right multiplication by any
fixed element in H is an H-module endomorphism, and for example multiplication
by i ∈ H is not of the form λidH for any λ ∈ R.
Even if K is algebraically closed, part (b) of Schur’s lemma can fail for a simple
module S which is not finite-dimensional. For instance, let K = C and A = C(X),
the field of rational functions. Since it is a field, it is a division algebra over C,
hence C(X) is a simple C(X)-module. For example, left multiplication by X is a
C(X)-module endomorphism which is not of the form λidC(X) for any λ ∈ C.
One important application of Schur’s lemma is that elements in the centre of
a K-algebra A act as scalars on finite-dimensional simple A-modules when K is
algebraically closed.
Definition 3.35. Let K be a field and let A be a K-algebra. The centre of A is
defined to be

Z(A) := {z ∈ A | za = az for all a ∈ A}.

Exercise 3.2. Let A be a K-algebra. Show that the centre Z(A) is a subalgebra of A.
Example 3.36.
(1) By definition, a K-algebra A is commutative if and only if Z(A) = A. So in
some sense, the size of the centre provides a ‘measure’ of how far an algebra is
from being commutative.
(2) For any n ∈ N the centre of the matrix algebra Mn (K) has dimension 1, it is
spanned by the identity matrix. The proof of this is Exercise 3.16.
Lemma 3.37. Let K be an algebraically closed field, and let A be a K-algebra.
Suppose that S is a finite-dimensional simple A-module. Then for every z ∈ Z(A)
there is some scalar λz ∈ K such that zs = λz s for all s ∈ S.
Proof. We consider the map ρ : S → S defined by ρ(s) = zs. One checks that it is a
K-linear map. Moreover, it is an A-module homomorphism: using that z commutes
with every element a ∈ A we have

ρ(as) = z(as) = (za)s = (az)s = a(zs) = aρ(s).



The assumptions allow us to apply part (b) of Schur’s lemma, giving some λz ∈ K
such that ρ = λz idS , that is, zs = λz s for all s ∈ S.

Corollary 3.38. Let K be an algebraically closed field and let A be a commutative
algebra over K. Then every finite-dimensional simple A-module S is 1-dimensional.
Proof. Since A is commutative we have A = Z(A), so by Lemma 3.37, every a ∈ A
acts by scalar multiplication on S. For every 0 ≠ s ∈ S this implies that span{s}
is a (1-dimensional) A-submodule of S. But S is simple, so S = span{s} and S is
1-dimensional. 

Remark 3.39. Both assumptions in Corollary 3.38 are needed.
(1) If the field is not algebraically closed, simple modules of a commutative algebra
need not be 1-dimensional. For example, let A = R[X]/(X² + 1), a 2-
dimensional commutative R-algebra. The polynomial X² + 1 is irreducible over
R, and hence by Proposition 3.23, we know that A is simple as an A-module,
so A has a 2-dimensional simple module.
(2) The assumption that S is finite-dimensional is needed. As an example, consider
the commutative C-algebra A = C(X) as an A-module, as in Remark 3.34.
This is a simple module, but clearly not 1-dimensional.
We will see more applications of Schur’s lemma later. In particular, it will be
crucial for the proof of the Artin–Wedderburn structure theorem for semisimple
algebras.

EXERCISES

3.3. Let A be a K-algebra. Suppose V and W are A-modules and φ : V → W is


an A-module isomorphism.
(a) Show that V is simple if and only if W is simple.
(b) Suppose 0 = V0 ⊂ V1 ⊂ . . . ⊂ Vn = V is a composition series of V .
Show that then

0 = φ(0) ⊂ φ(V1 ) ⊂ . . . ⊂ φ(Vn ) = W

is a composition series of W .
3.4. Find a composition series for A as an A-module, where A is the 3-subspace
algebra
      ⎧ ⎛ a1 b1 b2 b3 ⎞                ⎫
      ⎪ ⎜ 0  a2 0  0  ⎟                ⎪
A :=  ⎨ ⎜ 0  0  a3 0  ⎟ | ai , bj ∈ K  ⎬ ⊆ M4 (K)
      ⎪ ⎝ 0  0  0  a4 ⎠                ⎪
      ⎩                                ⎭

introduced in Example 1.16.



 
3.5. Let A be the ring A = ⎛ C C ⎞ , that is, A consists of all upper triangular
                           ⎝ 0 R ⎠
matrices in M2 (C) with (2, 2)-entry in R.
(a) Show that A is an algebra over R (but not over C). What is its dimension
over R?
(b) Consider A as an A-module. Check that
        ⎛ C 0 ⎞         ⎛ 0 C ⎞
        ⎝ 0 0 ⎠   and   ⎝ 0 0 ⎠
are A-submodules of A. Show that they are simple A-modules and that they
are isomorphic.
(c) Find a composition series of A as an A-module.
(d) Determine all simple A-modules, up to isomorphism, and their dimen-
sions over R.
3.6. Let Q be the quiver with one vertex and two loops denoted x and y. For any
field K consider the path algebra KQ. Note that for any choice of n × n-
matrices X, Y over K, taking x, y ∈ KQ to X, Y in Mn (K) extends to an
algebra homomorphism, hence a representation of KQ, that is, one gets a
KQ-module.
(a) Let V3 := K 3 be the 3-dimensional KQ-module on which x and y act
via the matrices
        ⎛ 0 1 0 ⎞             ⎛ 0 0 0 ⎞
    X = ⎜ 0 0 1 ⎟   and   Y = ⎜ 1 0 0 ⎟ .
        ⎝ 0 0 0 ⎠             ⎝ 0 1 0 ⎠

Show that V3 is a simple KQ-module.


(b) For every n ∈ N find a simple KQ-module of dimension n.
(c) Construct an infinite-dimensional simple KQ-module.
3.7. Let V be a 2-dimensional vector space over a field K, and let A be a
subalgebra of EndK (V ). Recall that V is then an A-module (by applying
linear transformations to vectors). Show that V is not simple as an A-module
if and only if there is some 0 ≠ v ∈ V which is an eigenvector for all α ∈ A.
More generally, show that this also holds for a 2-dimensional A-module V ,
by considering the representation ψ : A → EndK (V ).
3.8. We consider subalgebras A of M2 (K) and the natural A-module V = K².
Show that in each of the following situations the A-module V is simple, and
determine the endomorphism algebra EndA (V ).

(a) K = R,   A = { ⎛ a  b ⎞ | a, b ∈ R } .
                   ⎝ −b a ⎠

(b) K = Z2 , A = { ⎛ a  b   ⎞ | a, b ∈ Z2 } .
                   ⎝ b  a+b ⎠

3.9. (a) Find all simple R[X]/(X⁴ − 1)-modules, up to isomorphism.
(b) Find all simple Q[X]/(X³ − 2)-modules, up to isomorphism.
3.10. Consider the R-algebra A = R[X]/(f ) for a non-constant polynomial
f ∈ R[X]. Show that for each simple A-module S the endomorphism algebra
EndR (S) is isomorphic to R or to C.
3.11. Let A = M2 (R), and V be the A-module V = A. The following will show
that V has infinitely many different composition series.
(a) Let ε ∈ A \ {0, 1} with ε² = ε. Show that then Aε = {aε | a ∈ A} is an
A-submodule of A, and that 0 ⊂ Aε ⊂ A is a composition series of A.
(You may apply the Jordan–Hölder Theorem.)

(b) For λ ∈ R let ελ := ⎛ 1 λ ⎞ . Show that for λ ≠ μ the A-modules Aελ
                        ⎝ 0 0 ⎠
and Aεμ are different. Hence deduce that V has infinitely many different
composition series.
3.12. (a) Let A = K[X]/(X^n). Show that A as an A-module has a unique
composition series.
(b) Find all composition series of A as an A-module where A is the K-
algebra K[X]/(X³ − X²).
(c) Let f ∈ K[X] be a non-constant polynomial and let f = f1^{a1} . . . fr^{ar}
be the factorisation into irreducible polynomials in K[X] (where the
different fi are pairwise coprime). Determine the number of different
composition series of A as an A-module where A is the K-algebra
K[X]/(f ).
3.13. Consider the group algebra A = CD4 where D4 is the group of symmetries of
the square. Let V = C2 , which is an A-module if we take the representation
as in Example 3.2. Show that V is a simple CD4 -module.
3.14. For any natural number n ≥ 3 let Dn be the dihedral group of order 2n,
that is, the symmetry group of a regular n-gon. This group is by definition
a subgroup of GL(R2 ), generated by the rotation r by an angle 2π/n and a
reflection s. The elements satisfy rⁿ = id, s² = id and s⁻¹rs = r⁻¹. (For
n = 5 this group appeared in Exercise 2.20.) Let CDn be the group algebra
over the complex numbers.
(a) Prove that every simple CDn -module has dimension at most 2. (Hint: If
v is an eigenvector of the action of r, then sv is also an eigenvector for
the action of r.)
(b) Find all 1-dimensional CDn -modules, up to isomorphism. (Hint: Use the
identities for r, s; the answers will be different depending on whether n
is even or odd.)
3.15. Find all simple modules (up to isomorphism) and their dimensions for the
following K-algebras:
(i) K[X]/(X2 ) × K[X]/(X3 ).
(ii) M2 (K) × M3 (K).
(iii) K[X]/(X2 − 1) × K.

3.16. Let D be a division algebra over K. Let A be the K-algebra Mn (D) of all
n × n-matrices with entries in D. Find the centre Z(A) of A.
3.17. Suppose A is a finite-dimensional algebra over a finite field K, and S is a
(finite-dimensional) simple A-module. Let D := EndA (S). Show that then D
must be a field. More generally, let D be a finite-dimensional division algebra
over a finite field K. Then D must be commutative, hence is a field.
3.18. Let A be a K-algebra and M an A-module of finite length ℓ(M). Show that
ℓ(M) is the maximal length of a chain

M0 ⊂ M1 ⊂ . . . ⊂ Mr−1 ⊂ Mr = M

of A-submodules of M with Mi ≠ Mi+1 for all i.


3.19. Let A be a K-algebra and M an A-module of finite length. Show that for all
A-submodules U and V of M we have

ℓ(U + V ) = ℓ(U ) + ℓ(V ) − ℓ(U ∩ V ).

3.20. Assume A is a K-algebra, and M is an A-module which is a direct sum of


submodules, M = M1 ⊕M2 ⊕. . .⊕Mn . Consider the sequence of submodules

0 ⊂ M1 ⊂ M1 ⊕ M2 ⊂ . . . ⊂ M1 ⊕ M2 ⊕ . . . ⊕ Mn−1 ⊂ M.

(a) Apply the isomorphism theorem and show that

(M1 ⊕ . . . ⊕ Mj )/(M1 ⊕ . . . ⊕ Mj−1 ) ≅ Mj

as A-modules.
(b) Explain briefly how to construct a composition series for M if one is
given a composition series of Mj for each j .
3.21. Let Q be a quiver without oriented cycles, so the path algebra A = KQ is
finite-dimensional. Let Q0 = {1, 2, . . . , n}.
(a) For a vertex i of Q, let r be the maximal length of a path in Q with
starting vertex i. Recall Aei has a sequence of submodules

0 ⊂ Aei≥r ⊂ Aei≥r−1 ⊂ . . . ⊂ Aei≥2 ⊂ Aei≥1 ⊂ Aei .

Show that Aei≥t /Aei≥t +1 for t ≤ r is a direct sum of simple modules


(spanned by the cosets of paths starting at i of length t). Hence describe
a composition series of Aei .
(b) Recall that A = Ae1 ⊕ . . . ⊕ Aen as an A-module (see Exercise 2.6).
Apply the previous exercise and part (a) and describe a composition
series for A as an A-module.
Chapter 4
Semisimple Modules and Semisimple Algebras

In the previous chapter we have seen that simple modules are the ‘building
blocks’ for arbitrary finite-dimensional modules. One would like to understand how
modules are built up from simple modules. In this chapter we study modules which
are direct sums of simple modules, this leads to the theory of semisimple modules.
If an algebra A, viewed as an A-module, is a direct sum of simple modules, then
surprisingly, every A-module is a direct sum of simple modules. In this case, A
is called a semisimple algebra. We will see later that semisimple algebras can be
described completely, this is the famous Artin–Wedderburn theorem. Semisimple
algebras (and hence semisimple modules) occur in many places in mathematics;
for example, as we will see in Chap. 6, many group algebras of finite groups are
semisimple.
In this chapter, as an exception, we deal with arbitrary direct sums of modules, as
introduced in Definition 2.15. The results have the same formulation, independent
of whether we take finite or arbitrary direct sums, and this is an opportunity to
understand a result which does not have finiteness assumptions. The only new tool
necessary is Zorn’s lemma.

4.1 Semisimple Modules

This section deals with modules which can be expressed as a direct sum of simple
submodules. Recall Definition 2.15 for the definition of direct sums.
We assume throughout that K is a field.
Definition 4.1. Let A be a K-algebra. An A-module V ≠ 0 is called semisimple if
V is the direct sum of simple submodules, that is, there exist simple submodules Si ,


for i ∈ I an index set, such that

V = ⊕_{i∈I} Si .

Example 4.2.
(1) Every simple module is semisimple, by definition.
(2) Consider the field K as a 1-dimensional algebra A = K. Then A-modules are
the same as K-vector spaces and submodules are the same as K-subspaces.
Recall from linear algebra that every vector space V has a basis. Take a basis
{bi | i ∈ I } of V where I is some index set which may or may not be finite,
and set Si := span{bi }. Then Si is a simple A-submodule of V since it is 1-
dimensional, and we have V = ⊕_{i∈I} Si , since every element of V has a unique
expression as a (finite) linear combination of the basis vectors. This shows that
when the algebra is the field K then every non-zero K-module is semisimple.
(3) Let A = Mn (K) and consider V = A as an A-module. We know from
Exercise 2.5 that V = C1 ⊕ C2 ⊕ . . . ⊕ Cn , where Ci is the space of matrices
which are zero outside the i-th column. We have also seen in Exercise 3.1 that
each Ci is isomorphic to K n and hence is a simple A-module. So A = Mn (K)
is a semisimple A-module.
(4) Consider again the matrix algebra Mn (K), and the natural module V = K n . As
we have just observed, V is a simple Mn (K)-module, hence also a semisimple
Mn (K)-module.
However, we can also consider V = K n as a module for the alge-
bra of upper triangular matrices A = Tn (K). Then by Exercise 2.14 the
A-submodules of K n are given by the subspaces Vi for i = 0, 1, . . . , n, where
Vi = {(x1 , . . . , xi , 0, . . . , 0)^t | x1 , . . . , xi ∈ K}. Hence the A-submodules of V form a
chain

0 = V0 ⊂ V1 ⊂ . . . ⊂ Vn−1 ⊂ Vn = K n .

In particular, Vi ∩ Vj ≠ 0 for every i, j ≠ 0 and hence V cannot be the direct


sum of A-submodules if n ≥ 2. Thus, for n ≥ 2, the natural Tn (K)-module K n
is not semisimple.
(5) As a similar example, consider the algebra A = K[X]/(X^t) for t ≥ 2, as
an A-module. By Theorem 2.10, this module is of the form Vα where V has
dimension t and α is the linear map which comes from multiplication with the
coset of X. We see αᵗ = 0 and αᵗ⁻¹ ≠ 0. We have seen in Proposition 3.23
that A has a unique simple module, which is isomorphic to K[X]/(X). This
is a 1-dimensional space, spanned by an eigenvector for the coset of X with
eigenvalue 0. Suppose A is semisimple as an A-module, then it is a direct sum
of simple modules, each is spanned by a vector which is mapped to zero by the
coset of X, that is, by α. Then the matrix for α is the zero matrix and it follows
that t = 1. Hence for t > 1, A is not semisimple as an A-module.
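For t = 3 this failure is easy to see concretely: in the basis given by the cosets of 1, X, X², the map α is a nilpotent Jordan block, and the vectors annihilated by α (which would have to span any simple summand) form only a 1-dimensional subspace. A brief NumPy check of these two facts (our own illustration, not part of the text):

    import numpy as np

    t = 3
    # multiplication by the coset of X on K[X]/(X^t): 1 -> X -> X^2 -> 0
    alpha = np.zeros((t, t))
    for i in range(t - 1):
        alpha[i + 1, i] = 1.0

    assert np.allclose(np.linalg.matrix_power(alpha, t), 0)          # alpha^t = 0
    assert not np.allclose(np.linalg.matrix_power(alpha, t - 1), 0)  # alpha^{t-1} != 0

    # every simple submodule is spanned by a vector killed by alpha, but
    kernel_dim = t - np.linalg.matrix_rank(alpha)
    print("dim ker(alpha) =", kernel_dim)   # 1, so A is not a sum of simples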

Given some A-module V , how can we decide whether or not it is semisimple?


The following result provides several equivalent criteria, and each of them has its
advantages.
Theorem 4.3. Let A be a K-algebra and let V be a non-zero A-module. Then the
following statements are equivalent.
(1) For every A-submodule U of V there exists an A-submodule C of V such that
V = U ⊕ C.
(2) V is a direct sum of simple submodules (that is, V is semisimple).
(3) V is a sum of simple submodules, that is, there exist simple A-submodules
Si , i ∈ I , such that V = ∑_{i∈I} Si .
The module C in (1) such that V = U ⊕ C is called a complement to U in V .
Condition (1) shows that every submodule U is also isomorphic to a factor module
(namely V /C) and every factor module, V /U , is also isomorphic to a submodule
(namely C).
The implication (2) ⇒ (3) is obvious. So it suffices to show the implications
(1) ⇒ (2) and (3) ⇒ (1). We will first prove these when V is finite-dimensional (or
when V has a composition series) and then we will introduce Zorn’s lemma, and
give a general proof.
Proof when V is finite-dimensional. (1) ⇒ (2). Assume every submodule of V has
a complement. We want to show that V is a direct sum of simple submodules.
Let M be the set of submodules of V which can be expressed as a direct sum of
simple submodules. We assume for the moment that V is finite-dimensional, and so
is every submodule of V . By Lemma 3.9 every submodule of V has a composition
series, hence every non-zero submodule of V has a simple submodule.
In particular, the set M is non-empty. Choose U in M of largest possible
dimension. We claim that then U = V .
We assume (1) holds, so there is a submodule C of V such that V = U ⊕ C.
Suppose (for a contradiction) that U ≠ V , then C is non-zero. Then the module
C has a simple submodule, S say. The intersection of U and S is zero, since
U ∩ S ⊆ U ∩ C = 0. Hence U + S = U ⊕ S. Since U is a direct sum of simple
modules, U ⊕S is also a direct sum of simple modules. But dimK U < dimK (U ⊕S)
(recall a simple module is non-zero by definition), which contradicts the choice of
U . Therefore we must have C = 0 and V = U ∈ M is a direct sum of simple
submodules.
(3) ⇒ (1). Assume V is the sum of simple submodules. We claim that then every
submodule of V has a complement.
Let U be a submodule of V , we may assume U ≠ V . Let M be the set of
submodules W of V such that U ∩ W = 0; clearly M is non-empty since it contains
the zero submodule. Again, any such W has dimension at most dimK V . Take C in
M of largest possible dimension. We claim that U ⊕ C = V .
Since U ∩ C = 0, we have U + C = U ⊕ C. Suppose that U ⊕ C is a proper
submodule of V . Then there must be a simple submodule, S say, of V which is not
contained in U ⊕ C (indeed, if all simple submodules of V were contained in U ⊕ C

then since V is a sum of simple submodules, we would have that V ⊆ U ⊕ C).


Then the intersection of S with U ⊕ C is zero (if not, the intersection is a non-zero
submodule of S and since S is simple, it is equal to S). Then we have a submodule
of V of the form U ⊕ (C ⊕ S) and dimK C < dimK (C ⊕ S) which contradicts the
choice of C. Therefore we must have U ⊕ C = V . 

Note that if we only assume V has a composition series then the proof is identical,
by replacing ‘dimension’ by ‘length’ (of a composition series).
If V is not finite-dimensional, then we apply Zorn’s lemma. This is a more
general statement about partially ordered sets. Let P be a non-empty set and ≤
some partial order on P. A chain (Ui )i∈I in P is a linearly ordered subset. An upper
bound of such a chain is an element U ∈ P such that Ui ≤ U for all i ∈ I . Zorn’s
lemma states that if every chain in P has an upper bound then P has at least one
maximal element. For a discussion of this, we refer to the book by Cameron in this
series.1
Perhaps one of the most important applications of Zorn’s lemma in representation
theory is that a non-zero cyclic module always has a maximal submodule.
Definition 4.4. Let R be a ring. An R-module M is called cyclic if there exists an
element m ∈ M such that M = Rm = {rm | r ∈ R}, that is, the module M is
generated by a single element.
Lemma 4.5. Assume R is a ring and M = Rm, a non-zero cyclic R-module. Then
M has a maximal submodule, and hence has a simple factor module.
Proof. Let M be the set of submodules of M which do not contain m, this set is
partially ordered by inclusion. Then M is not empty (the zero submodule belongs
to M). Take any chain (Ui )i∈I in M, and let U := ∪i∈I Ui , this is a submodule of
M and it does not contain m. So U ∈ M, and Ui ⊆ U , that is, U is an upper bound
in M for the chain. By Zorn’s lemma, the set M has a maximal element, N say.
We claim that N is a maximal submodule of M: Indeed, let N ⊆ W ⊆ M, where
W is a submodule of M. If m ∈ W then M = Rm ⊆ W and W = M. On the
other hand, if m ∉ W then W belongs to M, and by maximality of N it follows that
N = W . Then M/N is a simple R-module, by the submodule correspondence (see
Theorem 2.28). 

We return to the proof of Theorem 4.3 in the general case. One ingredient in the
proof of (1) ⇒ (2) is that assuming (1) holds for V then every non-zero submodule
of V must have a simple submodule. We can prove this for general V as follows.
Lemma 4.6. Let A be a K-algebra. Assume V is an A-module such that every
submodule U of V has a complement. Then every non-zero submodule of V has a
simple submodule.

1 P. J. Cameron, Sets, Logic and Categories. Springer Undergraduate Mathematics Series. Springer-Verlag London, Ltd., London, 1999. x+180 pp.

Proof. It is enough to show that if Am is a non-zero cyclic submodule of V then


Am has a simple submodule. By Lemma 4.5, the module Am has a maximal
submodule, U say, and then Am/U is a simple A-module. By the assumption, there
is a submodule C of V such that V = U ⊕ C. Then we have

Am = Am ∩ V = Am ∩ (U ⊕ C) = U ⊕ (Am ∩ C),

where the last equality holds since U is contained in Am. It follows now by the
isomorphism theorem that Am/U ≅ Am ∩ C, which is simple and is also a
submodule of Am.

Proof of Theorem 4.3 in general. (1) ⇒ (2). Consider families of simple submod-
ules of V whose sum is a direct sum. We set
 
M := {(Si )i∈I | Si ⊆ V simple, ∑_{i∈I} Si = ⊕_{i∈I} Si }.

By assumption in Theorem 4.3, V ≠ 0. We assume that (1) holds, then by


Lemma 4.6, V has a simple submodule, so M is non-empty. We consider the
‘refinement order’ on M, that is, we define

(Si )i∈I ≤ (Tj )j ∈J

if every simple module Si appears in the family (Tj )j ∈J . This is a partial order. To
apply Zorn’s lemma, we must show that any chain in M has an upper bound in
M. We can assume that the index sets of the sequences in the chain are also totally
ordered by inclusion. Let I˜ denote the union of the index sets of the families in the
chain. Then the family (Si )i∈Ĩ is an upper bound of the chain in M: Suppose (for
a contradiction) that (Si )i∈Ĩ does not lie in M, that is, ∑_{i∈Ĩ} Si is not a direct sum.
Then for some k ∈ Ĩ we have Sk ∩ ∑_{i≠k} Si ≠ 0. This means that there exists a non-
zero element s ∈ Sk which can be expressed as a finite(!) sum s = si1 + . . . + sir
with sij ∈ Sij for some i1 , . . . , ir ∈ Ĩ. Since Ĩ is a union of index sets, the finitely
many indices k, i1 , . . . , ir must appear in some index set I′ which is an index set
for some term of the chain in M. But then ∑_{i∈I′} Si ≠ ⊕_{i∈I′} Si , contradicting the
assumption that (Si )i∈I′ ∈ M. So we have shown that every chain in the partially
ordered set M has an upper bound in M. Now Zorn's lemma implies that M has
a maximal element (Sj )j∈J . In particular, U := ∑_{j∈J} Sj = ⊕_{j∈J} Sj . Now we
S
continue as in the first version of the proof: By (1) there is a submodule C of V
such that V = U ⊕ C. If C is non-zero then by Lemma 4.6, it contains a simple
submodule S. Since U ∩ C = 0, we have U ∩ S = 0 and hence U + S = U ⊕ S.
This means that the family (Sj )j∈J ∪ {S} ∈ M, contradicting the maximality of the
family (Sj )j∈J . Therefore, C = 0 and V = U = ⊕_{j∈J} Sj is a direct sum of simple
submodules, that is, (2) holds.
(3) ⇒ (1). Let U ⊆ V be a submodule of V . Consider the set

S := {W | W ⊆ V an A-submodule such that U ∩ W = 0}.



Then S is non-empty (the zero module is in S), it is partially ordered by inclusion


and for each chain in S the union of the submodules is an upper bound in S.
So Zorn’s lemma gives a maximal element C ∈ S. Now the rest of the proof is
completely analogous to the above proof for the finite-dimensional case. 

The important Theorem 4.3 has many consequences; we collect a few which will
be used later.
Corollary 4.7. Let A be a K-algebra.
(a) Let ϕ : S → V be an A-module homomorphism, where S is a simple A-module.
Then ϕ = 0 or the image of ϕ is a simple A-module isomorphic to S.
(b) Let ϕ : V → W be an isomorphism of A-modules. Then V is semisimple if and
only if W is semisimple.
(c) All non-zero submodules and all non-zero factor modules of semisimple A-
modules are again semisimple. 
(d) Let (Vi )i∈I be a family of non-zero A-modules. Then the direct sum ⊕_{i∈I} Vi
(see Definition 2.17) is a semisimple A-module if and only if all the modules
Vi , i ∈ I , are semisimple A-modules.
Proof. (a) The kernel ker(ϕ) is an A-submodule of S. Since S is simple, there are
only two possibilities: if ker(ϕ) = S, then ϕ = 0; otherwise ker(ϕ) = 0, then by the
isomorphism theorem, im(ϕ) ≅ S/ker(ϕ) ≅ S and is simple.
Towards (b) and (c), we prove:
(∗) If ϕ : V → W is a non-zero surjective A-module homomorphism, and if V is
semisimple, then so is W .
So assume that V is semisimple. By Theorem 4.3, V is a sum of simple
submodules, V = ∑_{i∈I} Si say. Then

W = ϕ(V ) = ϕ(∑_{i∈I} Si ) = ∑_{i∈I} ϕ(Si ).

By part (a), each ϕ(Si ) is either zero, or is a simple A-module. We can ignore the
ones which are zero, and get that W is a sum of simple A-modules and hence is
semisimple, using again Theorem 4.3.
Part (b) follows now, by applying (∗) to ϕ and also to the inverse isomorphism
ϕ −1 .
(c) Suppose that V is a semisimple A-module, and U ⊆ V an A-submodule. We
start by dealing with the factor module V /U , we must show that if V /U is non-zero
then it is semisimple. Let π be the canonical A-module homomorphism

π : V → V /U , π(v) = v + U.

If V /U is non-zero then π is non-zero, and by (∗) we get that V /U is semisimple.


Next, assume U is non-zero, we must show that then U is semisimple. By
Theorem 4.3 we know that there exists a complement to U , that is, an A-submodule

C ⊆ V such that V = U ⊕ C. This implies that U ≅ V /C. But the non-zero factor
module V /C is semisimple by the first part of (c), and then by (b) we deduce that
U is also semisimple.

(d) Write V := ⊕_{i∈I} Vi and consider the inclusion maps ιi : Vi → V . These
are injective A-module homomorphisms; in particular, Vi ≅ im(ιi ) ⊆ V are A-
submodules.
Suppose that V is semisimple. Then by parts (b) and (c) each Vi is semisimple,
as it is isomorphic to the non-zero submodule im(ιi ) of the semisimple module V .
Conversely, suppose that each Vi , i ∈ I , is a semisimple A-module. By
Theorem 4.3 we can write Vi as a sum of simple submodules, say Vi = ∑_{j∈Ji} Sij
(for some index sets Ji ). On the other hand we have that V = ∑_{i∈I} ιi (Vi ),
since every element of the direct sum has only finitely many non-zero entries, see
Definition 2.17. Combining these, we obtain that

V = ∑_{i∈I} ιi (Vi ) = ∑_{i∈I} ιi (∑_{j∈Ji} Sij ) = ∑_{i∈I} ∑_{j∈Ji} ιi (Sij )

and V is a sum of simple A-submodules (the ιi (Sij ) are simple by part (a)). Hence
V is semisimple by Theorem 4.3. 

4.2 Semisimple Algebras

In Example 4.2 we have seen that for the 1-dimensional algebra A = K, every
non-zero A-module is semisimple. We would like to describe algebras for which
all non-zero modules are semisimple. If A is such an algebra, then in particular
A viewed as an A-module is semisimple. Surprisingly, the converse holds, as we
will see soon: If A as an A-module is semisimple, then all non-zero A-modules are
semisimple. Therefore, we make the following definition.
Definition 4.8. A K-algebra A is called semisimple if A is semisimple as an A-
module.
We have already seen some semisimple algebras.
Example 4.9. Every matrix algebra Mn (K) is a semisimple algebra, see Exam-
ple 4.2.

Remark 4.10. By definition, a semisimple algebra A is a direct sum A = ⊕_{i∈I} Si
of simple A-submodules. Luckily, in this situation the index set I must be finite.
Indeed, the identity element can be expressed as a finite sum 1A = ∑_{i∈I} si
with si ∈ Si . This means that there is a finite subset {i1 , . . . , ik } ⊆ I such that
1A ∈ ⊕_{j=1}^k Sij . Then A = A1A ⊆ ⊕_{j=1}^k Sij , that is, A = ⊕_{j=1}^k Sij is a direct
sum of finitely many simple A-submodules. In particular, a semisimple algebra has
finite length as an A-module, and every simple A-module is isomorphic to one of

the modules Si1 , . . . , Sik which appear in the direct sum decomposition of A (see
Theorem 3.19).
When A is a semisimple algebra, then we can understand arbitrary non-zero A-
modules; they are just direct sums of simple modules, as we will now show.
Theorem 4.11. Let A be a K-algebra. Then the following assertions are equiva-
lent.
(i) A is a semisimple algebra.
(ii) Every non-zero A-module is semisimple.
Proof. The implication (ii) ⇒ (i) follows by Definition 4.8.
Conversely, suppose that A is semisimple as an A-module. Take an arbitrary
non-zero A-module V . We have to show that V is a semisimple A-module. As a
K-vector space V has a basis, say {vi | i ∈ I }. With the same index set I , we take
the A-module

A := {(ai )i∈I | ai ∈ A, only finitely many ai are non-zero}
i∈I

the direct sum of copies of A (see Definition 2.17). We consider the map
 
ψ: A → V , (ai )i∈I → ai vi .
i∈I i∈I

One checks that ψ is an A-module homomorphism. Since the vi form a K-basis of


V , the map ψ is surjective. By the isomorphism theorem,
 

(⊕_{i∈I} A)/ker(ψ) ≅ im(ψ) = V .

By assumption, A is a semisimple A-module, and hence ⊕_{i∈I} A is also a semisim-
ple A-module by Corollary 4.7. But then Corollary 4.7 implies that the non-zero
factor module (⊕_{i∈I} A)/ker(ψ) is also semisimple. Finally, Corollary 4.7 gives
that V ≅ (⊕_{i∈I} A)/ker(ψ) is a semisimple A-module. 

This theorem has many consequences, in particular we can use it to show that
factor algebras of semisimple algebras are semisimple, and hence also that an
algebra isomorphic to a semisimple algebra is again semisimple:
Corollary 4.12. Let A and B be K-algebras. Then the following holds:
(a) Let ϕ : A → B be a surjective algebra homomorphism. If A is a semisimple
algebra then so is B.
(b) If A and B are isomorphic then A is semisimple if and only if B is semisimple.
(c) Every factor algebra of a semisimple algebra is semisimple.

Proof. (a) Let ϕ : A → B be a surjective algebra homomorphism. By Theorem 4.11


it suffices to show that every non-zero B-module is semisimple. Suppose M ≠ 0 is
a B-module, then we can view it as an A-module, with action

a · m = ϕ(a)m (for m ∈ M, a ∈ A)

by Example 2.4. Since A is semisimple, the A-module M can be written as
M = ⊕_{i∈I} Si where the Si are simple A-modules. We are done if we show that
each Si is a B-submodule of M and that Si is simple as a B-module.
First, Si is a non-zero subspace of M. Let b ∈ B and v ∈ Si , we must show that
bv ∈ Si . Since ϕ is surjective we have b = ϕ(a) for some a ∈ A, and then

a · v = ϕ(a)v = bv

and by assumption a · v ∈ Si . Hence Si is a B-submodule of M.


Now let U be a non-zero B-submodule of Si , then with the above formula for the
action of A, we see that U is a non-zero A-submodule of Si . Since Si is simple as
an A-module, it follows that U = Si . Hence Si is simple as a B-module.
(b) Suppose ϕ : A → B is an isomorphism. Then (b) follows by applying (a) to ϕ
and to the inverse isomorphism ϕ −1 .
(c) Assume A is semisimple. Let I ⊂ A be a two-sided ideal with I ≠ A,
and consider the factor algebra A/I . We have the canonical surjective algebra
homomorphism ϕ : A → A/I . By (a) the algebra A/I is semisimple. 

Example 4.13.
(1) The algebra A = Tn (K) of upper triangular matrices is not semisimple when
n ≥ 2 (for n = 1 we have A = K, which is semisimple). In fact, the
natural A-module K n is not a semisimple module, see Example 4.2. Therefore
Theorem 4.11 implies that Tn (K) is not a semisimple algebra.
(2) The algebra A = K[X]/(X^t) is not a semisimple algebra for t ≥ 2. Indeed, we
have seen in Example 4.2 that A is not semisimple as an A-module.
It follows that also the polynomial algebra K[X] is not a semisimple
algebra: indeed, apply Corollary 4.12 to the surjective algebra homomorphism
K[X] → K[X]/(X^t), f ↦ f + (X^t).
We can answer precisely which factor algebras of K[X] are semisimple. We
recall from basic algebra that the polynomial ring K[X] over a field K is a unique
factorization domain, that is, every polynomial can be (uniquely) factored as a
product of irreducible polynomials.
Proposition 4.14. Let K be a field, f ∈ K[X] a non-constant polynomial and
f = f1^{a1} · . . . · fr^{ar} its factorization into irreducible polynomials, where ai ∈ N and
f1 , . . . , fr are pairwise coprime. Then the following statements are equivalent.
(i) K[X]/(f ) is a semisimple algebra.
(ii) We have ai = 1 for all i = 1, . . . , r, that is, f = f1 · . . . · fr is a product of
pairwise coprime irreducible polynomials.

Proof. We consider the algebra A = K[X]/(f ) as an A-module. Recall that the A-


submodules of A are given precisely by (g)/(f ) where the polynomial g divides f .
(i) ⇒ (ii). We assume A is semisimple. Assume for a contradiction that (say) a1 ≥ 2.
We will find a non-zero A-module which is not semisimple (and this contradicts
Theorem 4.11). Consider M := K[X]/(f1²). Then M is a K[X]-module which is
annihilated by f , and hence is an A-module (see for example Lemma 2.37). The
A-submodules of M are of the form (g)/(f1²) for polynomials g dividing f1². Since
f1 is irreducible, M has only one non-trivial A-submodule, namely (f1 )/(f1²). In
particular, M cannot be the direct sum of simple A-submodules, that is, M is not
semisimple as an A-module.
(ii) ⇒ (i). Assume f is a product of pairwise coprime irreducible polynomials. We
want to show that A is a semisimple algebra. It is enough to show that every A-
submodule of A has a complement (see Theorem 4.3). Every submodule of A is
of the form (g)/(f ) where g divides f . Write f = gh then by our assumption,
the polynomials g and h are coprime. One shows now, with basic algebra, that
A = (g)/(f ) ⊕ (h)/(f ), which is a direct sum of A-modules; see the worked
Exercise 4.3. 
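Over Q, R or C, condition (ii) is an effective squarefreeness test: f has no repeated irreducible factor exactly when gcd(f, f′) = 1, where f′ is the formal derivative (over fields of prime characteristic this criterion needs extra care because of inseparability). A hedged SymPy sketch (our own illustration, not from the text):

    import sympy as sp

    X = sp.symbols('X')

    def is_semisimple_quotient(f):
        # K[X]/(f) is semisimple iff f is squarefree; over Q, R, C this
        # is equivalent to gcd(f, f') = 1 (Proposition 4.14)
        return sp.gcd(f, sp.diff(f, X)) == 1

    print(is_semisimple_quotient(X**2 - 1))      # True:  (X-1)(X+1), coprime factors
    print(is_semisimple_quotient(X**2))          # False: repeated factor X
    print(is_semisimple_quotient(X**3 - X**2))   # False: f = X^2 (X - 1)

    # over Z_p one can factor directly, e.g. X^p - 1 = (X - 1)^p for p = 5:
    print(sp.factor_list(X**5 - 1, X, modulus=5))   # (1, [(X - 1, 5)])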

Example 4.15.
(1) By Proposition 4.14 we see that A = K[X]/(X^t) is a semisimple algebra if and
only if t = 1 (see Example 4.13).
(2) In Sect. 1.4 we have seen that up to isomorphism there are precisely
three 2-dimensional R-algebras, namely R[X]/(X²), R[X]/(X² − 1) and
R[X]/(X² + 1) ≅ C. Using Proposition 4.14 we can now see which of these
are semisimple. The algebra R[X]/(X²) is not semisimple, as we have just
seen. The other two algebras are semisimple because X² − 1 = (X − 1)(X + 1),
the product of two coprime polynomials, and X² + 1 is irreducible in R[X].
(3) Let p be a prime number and denote by Zp the field with p elements. Consider
the algebra A = Zp [X]/(X^p − 1). We have X^p − 1 = (X − 1)^p in Zp [X], hence
by Proposition 4.14 the algebra A is not semisimple.
Remark 4.16. Note that subalgebras of semisimple algebras are not necessarily
semisimple. For example, the algebra of upper triangular matrices Tn (K) for n ≥ 2
is not semisimple, by Example 4.13; but it is a subalgebra of the semisimple algebra
Mn (K), see Example 4.9.
More generally, in Exercise 1.29 we have seen that every finite-dimensional
algebra is isomorphic to a subalgebra of a matrix algebra Mn (K), hence to a
subalgebra of a semisimple algebra, and we have seen many algebras which are
not semisimple.
On the other hand, the situation is different for factor algebras: we have already
seen in Corollary 4.12 that every factor algebra of a semisimple algebra is again
semisimple.
If B = A/I , where I is an ideal of A with I ≠ A, then in Lemma 2.37 we have
seen that a B-module can be viewed as an A-module on which I acts as zero, and
conversely any A-module V with I V = 0 (that is, I acts as zero on V ), can be

viewed as a B-module. The actions are related by the formula

(a + I )v = av (a ∈ A, v ∈ V ).

The following shows that with this correspondence, semisimple modules correspond
to semisimple modules.
Theorem 4.17. Let A be a K-algebra, I ⊂ A a two-sided ideal of A with I ≠ A,
and let B = A/I the factor algebra. The following are equivalent for any B-
module V .
(i) V is a semisimple B-module.
(ii) V is a semisimple A-module with I V = 0.

Proof. First, suppose that (i) holds. By Theorem 4.3, V = ∑_{j∈J} Sj , the sum of
simple B-submodules of V . By Lemma 2.37, we can also view the Sj as A-modules
with I Sj = 0. Moreover, they are also simple as A-modules, by Lemma 3.5. This
shows that V is a sum of simple A-modules, and therefore it is a semisimple A-
module, by Theorem 4.3.
Conversely, suppose that (ii) holds, assume V is a semisimple A-module with
I V = 0. By Theorem 4.3 we know V = ∑_{j∈J} Sj , a sum of simple A-submodules
of V . Then I Sj ⊆ I V = 0, and Lemma 2.37 says that we can view the Sj as
B-modules. One checks that these are also simple as B-modules (with the same
reasoning as in Lemma 3.5). So as a B-module, V = ∑_{j∈J} Sj , a sum of simple
B-modules, and hence is a semisimple B-module by Theorem 4.3. 

Corollary 4.18. Let A1 , . . . , Ar be finitely many K-algebras. Then the direct
product A1 ×. . .×Ar is a semisimple algebra if and only if each Ai for i = 1, . . . , r
is a semisimple algebra.
Proof. Set A = A1 × . . . × Ar . Suppose first that A is semisimple. For any
i ∈ {1, . . . , r}, the projection πi : A → Ai is a surjective algebra homomorphism.
By Corollary 4.12 each Ai is a semisimple algebra.
Conversely, suppose that all algebras A1 , . . . , Ar are semisimple. We want to
use Theorem 4.11, that is, we have to show that every non-zero A-module is
semisimple. Let M ≠ 0 be an A-module. We use Lemma 3.30, which gives that
M = M1 ⊕ M2 ⊕ . . . ⊕ Mr , where Mi = εi M, with εi = (0, . . . , 0, 1Ai , 0, . . . , 0),
and Mi is an A-submodule of M. Then Mi is also an Ai -module, since the kernel of
πi annihilates Mi (using Lemma 2.37). We can assume that Mi ≠ 0; otherwise we
can ignore this summand in M = M1 ⊕ M2 ⊕ . . . ⊕ Mr . Then by assumption and
Theorem 4.11, Mi is semisimple as a module for Ai , and then by Theorem 4.17 it
is also semisimple as an A-module. Now part (d) of Corollary 4.7 shows that M is
semisimple as an A-module. 

Example 4.19. Let K be a field. We have already seen that matrix algebras Mn (K)
are semisimple (see Example 4.9). Corollary 4.18 now shows that arbitrary finite

direct products

Mn1 (K) × . . . × Mnr (K)

are semisimple algebras.

4.3 The Jacobson Radical

In this section we give an alternative characterisation of semisimplicity of algebras.


We will introduce the Jacobson radical J (A) of an algebra A. We will see that this
is an ideal which measures how far away A is from being semisimple.
Example 4.20. Let A = R[X]/(f ) where f = (X − 1)^a (X + 1)^b for a, b ≥ 1. By
Proposition 3.23 this algebra has two simple modules (up to isomorphism), which
we can take as S1 := A/M1 and S2 = A/M2 , where M1 , M2 are the maximal left
ideals of A given by M1 = (X − 1)/(f ) and M2 = (X + 1)/(f ). Since X − 1 and
X + 1 are coprime we observe that

M1 ∩ M2 = ((X − 1)(X + 1))/(f ).

We know that A is semisimple if and only if a = b = 1 (see Proposition 4.14). This


is the same as M1 ∩ M2 = 0. This motivates the definition of the Jacobson radical
below.
Exercise 4.1. Assume A is a semisimple algebra, say A = S1 ⊕ . . . ⊕ Sn , where the
Si are simple A-submodules of A (see Remark 4.10). Then Si ≅ A/Mi , where Mi is
a maximal left ideal of A (see Lemma 3.18). Show that the intersection M1 ∩. . .∩Mn
is zero. (Hint: Let a be in the intersection, show that aSi = 0 for 1 ≤ i ≤ n.)
Definition 4.21. Let A be a K-algebra. The Jacobson radical J (A) of A is defined
to be the intersection of all maximal left ideals of A. In other words, J (A) is the
intersection of all maximal A-submodules of A.
Example 4.22.
(1) Assume K is an infinite field, then the polynomial algebra A = K[X] has
Jacobson radical J (A) = 0: In fact, for each λ ∈ K the left ideal generated by
X − λ is maximal (the factor module K[X]/(X − λ) has dimension 1; then use
the submodule correspondence, Theorem 2.28). Then J (A) ⊆ ∩_{λ∈K} (X − λ).
We claim this is zero. Suppose 0 ≠ f is in the intersection, then f is a
polynomial of degree n (say), and X − λ divides f for each λ ∈ K, but there
are infinitely many such factors, a contradiction. It follows that J (A) = 0.
In general, J (K[X]) = 0 for arbitrary fields K. One can adapt the above
proof by using that K[X] has infinitely many irreducible polynomials. The
proof is Exercise 4.11.

(2) Consider a factor algebra of a polynomial algebra, A = K[X]/(f ), where f is a


non-constant polynomial, and write f = f1^{a1} . . . fr^{ar} , where the fi are pairwise
coprime irreducible polynomials in K[X]. We have seen before that the left
ideals of A are of the form (g)/(f ) where g is a divisor of f . The maximal left
ideals are those where g is irreducible. One deduces that


J (A) = ∩_{i=1}^r (fi )/(f ) = (∏_{i=1}^r fi )/(f ).

In particular, J (A) = 0 if and only if f = f1 · . . . · fr , that is, f is the product


of pairwise coprime irreducible polynomials.
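In the same computational spirit as the squarefreeness test after Proposition 4.14, a generator of J (A) is the 'squarefree part' of f, that is, f divided by gcd(f, f′), which again is reliable over Q, R and C. A small SymPy sketch (our own illustration):

    import sympy as sp

    X = sp.symbols('X')

    def radical_generator(f):
        # a generator f_1 ... f_r of J(K[X]/(f)): divide out repeated factors
        return sp.quo(f, sp.gcd(f, sp.diff(f, X)), X)

    f = (X - 1)**2 * (X + 1)**3
    print(sp.factor(radical_generator(f)))   # (X - 1)*(X + 1)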
The following theorem collects some of the basic properties of the Jacobson
radical.
Recall that for any left ideals I, J of an algebra A, the product is defined as

I J = span{xy | x ∈ I, y ∈ J }

and this is also a left ideal of A. In particular, for any left ideal I of A we define
powers inductively by setting I⁰ = A, I¹ = I and I^k = I^{k−1} I for all k ≥ 2. Thus
for every left ideal I we get a chain of left ideals of the form

A ⊇ I ⊇ I² ⊇ I³ ⊇ . . .

The ideal I is called nilpotent if there is some r ≥ 1 such that I^r = 0.


We also need the annihilator of a simple A-module S, this is defined as

AnnA (S) := {a ∈ A | as = 0 for every s ∈ S}.

This is contained in AnnA (s) whenever S = As for s ∈ S, and since
S ≅ A/AnnA (s) (see Lemma 3.18) we know that AnnA (s) is a maximal left
ideal of A.
Exercise 4.2. Let A be a K-algebra. Show that for any A-module M we have

AnnA (M) := {a ∈ A | am = 0 for every m ∈ M}

is a two-sided ideal of A.
Theorem 4.23. Let K be a field and A a K-algebra which has a composition series
as an A-module (that is, A has finite length as an A-module). Then the following
holds for the Jacobson radical J (A).
(a) J (A) is the intersection of finitely many maximal left ideals.
(b) We have that

J (A) = ∩_{S simple} AnnA (S),

that is, J (A) consists of those a ∈ A such that aS = 0 for every simple A-
module S.
(c) J (A) is a two-sided ideal of A.
(d) J (A) is a nilpotent ideal: we have J (A)^n = 0, where n is the length of a
composition series of A as an A-module.
(e) The factor algebra A/J (A) is a semisimple algebra.
(f) Let I ⊆ A be a two-sided ideal with I = A such that the factor algebra A/I is
semisimple. Then J (A) ⊆ I .
(g) A is a semisimple algebra if and only if J (A) = 0.
Remark 4.24. The example of a polynomial algebra K[X] shows that the assump-
tion of finite length in the theorem is needed. We have seen in Example 4.22
that J (K[X]) = 0. However, K[X] is not semisimple, see Example 4.13. So, for
instance, part (g) of Theorem 4.23 is not valid for K[X].
Proof. (a) Suppose that M1 , . . . , Mr are finitely many maximal left ideals of A.
Hence we have that J (A) ⊆ M1 ∩ . . . ∩ Mr . If we have equality then we are done.
Otherwise there exists another maximal left ideal Mr+1 such that

M1 ∩ . . . ∩ Mr ⊃ M1 ∩ . . . ∩ Mr ∩ Mr+1

and this is a proper inclusion. Repeating the argument gives a sequence of left ideals
of A,

A ⊃ M1 ⊃ M1 ∩ M2 ⊃ M1 ∩ M2 ∩ M3 ⊃ . . .

Each quotient is non-zero, so the process must stop at the latest after n steps where n
is the length of a composition series of A, see Exercise 3.18. This means that J (A)
is the intersection of finitely many maximal left ideals.
(b) We first prove that the intersection of the annihilators of simple modules is
contained in J (A). Take an element a ∈ A such that aS = 0 for all simple A-
modules S. We want to show that a belongs to every maximal left ideal of A.
Suppose M is a maximal left ideal, then A/M is a simple A-module. Therefore
by assumption a(A/M) = 0, so that a + M = a(1A + M) = 0 and hence a ∈ M.
Since M is arbitrary, this shows that a is in the intersection of all maximal left ideals,
that is, a ∈ J (A).
Assume (for a contradiction) that the inclusion is not an equality, then there is a
simple A-module S such that J (A)S = 0; let s ∈ S with J (A)s = 0. Then J (A)s
is an A-submodule of S (since J (A) is a left ideal), and it is non-zero. Because
S is simple we get that J (A)s = S. In particular, there exists an x ∈ J (A) such
that xs = s, that is, x − 1A ∈ AnnA (s). Now, AnnA (s) is a maximal left ideal
(since A/AnnA (s) ≅ S, see Lemma 3.18). Hence J (A) ⊆ AnnA (s). Therefore we
have x ∈ AnnA (s) and x − 1A ∈ AnnA (s) and it follows that 1A ∈ AnnA (s), a
contradiction since s = 0.

(c) This follows directly from part (b), together with Exercise 4.2.
(d) Take a composition series of A as an A-module, say

0 = V0 ⊂ V1 ⊂ . . . ⊂ Vn−1 ⊂ Vn = A.

We will show that J (A)^n = 0. For each i with 1 ≤ i ≤ n, the factor module
Vi /Vi−1 is a simple A-module. By part (b) it is therefore annihilated by J (A), and
this implies that J (A)Vi ⊆ Vi−1 for all i = 1, . . . , n. Hence

J (A)V1 = 0, J (A)² V2 ⊆ J (A)V1 = 0

and inductively we see that J (A)^r Vr = 0 for all r. In particular, J (A)^n A = 0, and
this implies J (A)^n = 0, as required.
(e) By Definition 4.8 we have to show that A/J (A) is semisimple as an A/J (A)-
module. According to Theorem 4.17 this is the same as showing that A/J (A) is
semisimple as an A-module. From part (a) we know that J (A) = ∩_{i=1}^r Mi for
finitely many maximal left ideals Mi ; moreover, we can assume that for each i
we have ∩_{j≠i} Mj ⊈ Mi (otherwise we may remove Mi from the intersection
M1 ∩ . . . ∩ Mr ). We then consider the following map

ψ : A/J (A) → A/M1 ⊕ . . . ⊕ A/Mr , x + J (A) ↦ (x + M1 , . . . , x + Mr ).

This map is well-defined since J (A) ⊆ Mi for all i, and it is injective since
J (A) = ∩_{i=1}^r Mi . Moreover, it is an A-module homomorphism since the action on
the direct sum is componentwise. It remains to show that ψ is surjective, and hence
an isomorphism; then the claim in part (e) follows since each A/Mi is a simple A-
module. To prove that ψ is surjective, it suffices to show that for each i the element
(0, . . . , 0, 1A + Mi , 0, . . . , 0) is in the image of ψ. Fix some i. By our assumption
we have that Mi is a proper subset of Mi + (∩_{j≠i} Mj ). Since Mi is maximal this
implies that Mi + (∩_{j≠i} Mj ) = A. So there exist mi ∈ Mi and y ∈ ∩_{j≠i} Mj such
that 1A = mi + y. Therefore ψ(y) = (0, . . . , 0, 1A + Mi , 0, . . . , 0), as desired.
(f) We have by assumption, A/I = S1 ⊕ . . . ⊕ Sr with finitely many simple
A/I -modules Si , see Remark 4.10. The Si can also be viewed as simple A-modules
(see the proof of Theorem 4.17). From part (b) we get J (A)Si = 0, which implies

J (A)(A/I ) = J (A)(S1 ⊕ . . . ⊕ Sr ) = 0,

that is, J (A) = J (A)A ⊆ I .


(g) If A is semisimple then J (A) = 0 by part (f), taking I = 0. Conversely, if
J (A) = 0 then A is semisimple by part (e).

Remark 4.25. We now obtain an alternative proof of Proposition 4.14 which
characterizes which algebras A = K[X]/(f ) are semisimple. Let f ∈ K[X]
be a non-constant polynomial and write f = f1^{a1} . . . fr^{ar} with pairwise coprime
irreducible polynomials f1 , . . . , fr ∈ K[X]. We have seen in Example 4.22 that

J (A) = 0 if and only if ai = 1 for all i = 1, . . . , r. By Theorem 4.23 this is exactly


the condition for the algebra A = K[X]/(f ) to be semisimple.
We will now describe the Jacobson radical for a finite-dimensional path algebra.
Let A = KQ where Q is a quiver, recall A is finite-dimensional if and only if Q
does not have an oriented cycle of positive length, see Exercise 1.2.
Proposition 4.26. Let K be a field, and let Q be a quiver without oriented cycles,
so the path algebra A = KQ is finite-dimensional. Then the Jacobson radical J (A)
is the subspace of A spanned by all paths in Q of positive length.
Note that this result does not generalize to infinite-dimensional path algebras. For
example, consider the path algebra of the one loop quiver, which is isomorphic to
the polynomial algebra K[X]. Its Jacobson radical is therefore zero but the subspace
generated by the paths of positive length is non-zero (even infinite-dimensional).
Proof. We denote the vertices of Q by {1, . . . , n}. We apply part (b) of Theo-
rem 4.23. The simple A-modules are precisely the modules Si := Aei /Ji for
1 ≤ i ≤ n, and Ji is spanned by all paths in Q of positive length starting at i,
see Theorem 3.26. Therefore we see directly that

AnnA (Si ) = Ji ⊕ (⊕_{j≠i} Aej ).

Taking the intersection of all these, we get precisely J (A) = ⊕_{i=1}^n Ji , which is the
span of all paths in Q of length ≥ 1.
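For the linear quiver of Example 3.28 this description is very concrete: under the isomorphism KQ ≅ Tn (K), the paths of positive length correspond to the strictly upper triangular matrices, and Theorem 4.23 (d) then says that any product of n such matrices vanishes. A quick NumPy illustration of this nilpotency (our identification, for n = 4; not part of the text):

    import numpy as np

    n = 4
    rng = np.random.default_rng(1)

    def radical_element(n):
        # strictly upper triangular: the image of the positive-length paths
        # under KQ = T_n(K) for the linear quiver 1 <- 2 <- ... <- n
        return np.triu(rng.standard_normal((n, n)), k=1)

    prod = np.eye(n)
    for _ in range(n):
        prod = prod @ radical_element(n)

    assert np.allclose(prod, 0)   # J(A)^n = 0, as in Theorem 4.23 (d)
    print("a product of", n, "radical elements is zero")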

Corollary 4.27. Let KQ be a finite-dimensional path algebra. Then KQ is a
semisimple algebra if and only if Q has no arrows, that is, Q is a union of vertices.
In particular, the semisimple path algebras KQ are isomorphic to direct products
K × . . . × K of copies of the field K.
Proof. By Theorem 4.23, KQ is semisimple if and only if J (KQ) = 0. Then the
first statement directly follows from Proposition 4.26. The second statement is easily
verified by mapping each vertex of Q to one of the factors of the direct product. 

EXERCISES

4.3. Let f ∈ K[X] be the product of two non-constant coprime polynomials,


f = gh. Show that then K[X]/(f ) = (g)/(f ) ⊕ (h)/(f ).
4.4. Let A = Tn (K), the K-algebra of upper triangular matrices. Let R ⊆ A be
the submodule of all matrices which are zero outside the first row; similarly,
let C ⊆ A be the submodule of all matrices which are zero outside the n-th
column. For each of R and C, determine whether or not it is a semisimple
A-module.

4.5. Let G = S3 be the symmetric group of order 6. Consider the 3-dimensional


KS3 -module V = span{v1 , v2 , v3 } on which S3 acts by permuting the basis
vectors, that is, σ · vi = vσ (i) for all σ ∈ S3 and where the action is extended
to arbitrary linear combinations in KS3 .
(a) Verify that U := span{v1 + v2 + v3 } is a KS3 -submodule of V .
(b) Assume the characteristic of K is not equal to 3. Show that V is a
semisimple KS3 -module by expressing V as a direct sum of two simple
KS3 -submodules.
(c) Suppose that K has characteristic 3. Show that then V is not a semisimple
KS3 -module.
4.6. Let A be a K-algebra. Let I be a left ideal of A which is nilpotent, that is,
there exists an r ∈ N such that I^r = 0. Show that I is contained in the
Jacobson radical J (A).
4.7. Which of the following subalgebras of M3 (K) are semisimple? (Each asterisk
stands for an arbitrary element from K.)
     ⎛ ∗ 0 0 ⎞        ⎛ ∗ ∗ ∗ ⎞        ⎛ ∗ 0 ∗ ⎞        ⎛ ∗ 0 ∗ ⎞
A1 = ⎜ 0 ∗ 0 ⎟   A2 = ⎜ 0 ∗ 0 ⎟   A3 = ⎜ 0 ∗ 0 ⎟   A4 = ⎜ 0 ∗ 0 ⎟
     ⎝ 0 0 ∗ ⎠        ⎝ 0 0 ∗ ⎠        ⎝ 0 0 ∗ ⎠        ⎝ ∗ 0 ∗ ⎠

4.8. Which of the following K-algebras K[X]/(f ) are semisimple?


(i) K[X]/(X³ − X² + X − 1) for K = C, R and Q,
(ii) C[X]/(X³ + X² − X − 1),
(iii) R[X]/(X³ − 3X² + 4),
(iv) Q[X]/(X² − 1),
(v) Q[X]/(X⁴ − X² − 2),
(vi) Z2 [X]/(X⁴ + X + 1).
4.9. (a) Let f ∈ R[X] be a non-constant polynomial. Show that the R-algebra
R[X]/(f ) is semisimple if and only if the C-algebra C[X]/(f ) is
semisimple.
(b) Let f ∈ Q[X] be a non-constant polynomial. Either prove the following,
or find a counterexample: the Q-algebra Q[X]/(f ) is semisimple if and
only if the R-algebra R[X]/(f ) is semisimple.
4.10. Suppose A is an algebra and N is some A-module. We define a subquotient
of N to be a module Y/X where X, Y are submodules of N such that
0 ⊆ X ⊆ Y ⊆ N.
Suppose N has composition length 3, and assume that every subquotient
of N which has composition length 2 is semisimple. Show that then N must
be semisimple. (Hint: Choose a simple submodule X of N and show that there are submodules U1 ≠ U2 of N, both containing X, of composition length 2.
Then show that U1 + U2 is the direct sum of three simple modules.)
4.11. (a) Show that there exist infinitely many irreducible polynomials in K[X].
(Hint: try a variation of Euclid’s famous proof that there are infinitely
many prime numbers.)
(b) Deduce that the Jacobson radical of K[X] is zero.
4.12. For each of the following subalgebras of M3 (K), find the Jacobson radical.
A1 = (∗ 0 0; 0 ∗ 0; 0 0 ∗),   A2 = { (x y z; 0 x 0; 0 0 x) | x, y, z ∈ K },
A3 = { (x y 0; 0 z 0; 0 0 x) | x, y, z ∈ K },   A4 = (∗ ∗ 0; 0 ∗ 0; 0 ∗ ∗)

(matrices written row by row, with rows separated by semicolons).

4.13. Let ϕ : A → B be a surjective K-algebra homomorphism. Prove the following statements.
(a) For the Jacobson radicals we have ϕ(J (A)) ⊆ J (B).
(b) If ker(ϕ) ⊆ J (A) then ϕ(J (A)) = J (B).
(c) For all K-algebras A we have J (A/J (A)) = 0.
4.14. Assume A is a commutative K-algebra of dimension n. Show that if A
has n pairwise non-isomorphic simple modules then A must be semisimple.
(Hint: Consider a map φ as in the proof of Theorem 4.23 (e), from A to A/M1 ⊕ . . . ⊕ A/Mn, where the A/Mi are the distinct simple modules. Show that φ must be an isomorphism.)
4.15. Which of the following commutative algebras over C are semisimple? Note
that the algebras in (i) have dimension 2, and the others have dimension 4.
(i) C[X]/(X² − X), C[X]/(X²), C[X]/(X² − 1),
(ii) C[X1]/(X1² − X1) × C[X2]/(X2² − X2),
(iii) C[X1, X2]/(X1² − X1, X2² − X2),
(iv) C[X1]/(X1²) × C[X2]/(X2²),
(v) C[X1, X2]/(X1², X2²).
Chapter 5. The Structure of Semisimple Algebras: The Artin–Wedderburn Theorem

In this chapter we will prove the fundamental Artin–Wedderburn Theorem, which


completely classifies semisimple K-algebras. We have seen in Example 4.19 that
finite direct products Mn1 (K) × . . .× Mnr (K) of matrix algebras are semisimple K-
algebras. When the field K is algebraically closed the Artin–Wedderburn theorem
shows that in fact every semisimple algebra is isomorphic to such an algebra. In
general, a semisimple algebra is isomorphic to a direct product of matrix algebras
but where the matrix coefficients in the matrix blocks are elements of some division
algebras over K.
Our starting point is a direct sum decomposition, as an A-module, of the algebra
A. The first observation is how to obtain information on the algebra just from such a
direct sum decomposition. This is in fact more general, in the following lemma the
algebra need not be semisimple.
Lemma 5.1. Assume a K-algebra A as an A-module is the direct sum of non-zero
submodules,

A = M1 ⊕ M2 ⊕ . . . ⊕ Mr .

Write the identity element of A as 1A = ε1 + ε2 + . . . + εr with εi ∈ Mi. Then

(a) εiεj = 0 for i ≠ j, and εi² = εi;
(b) Mi = Aεi and εi ≠ 0.
Proof. (a) We have

εi = εi 1A = εi ε1 + εi ε2 + . . . + εi εr

and therefore

εi − εi² = εiε1 + . . . + εiεi−1 + εiεi+1 + . . . + εiεr.
The left-hand side belongs to Mi, and the right-hand side belongs to Σ_{j≠i} Mj. The sum A = M1 ⊕ M2 ⊕ . . . ⊕ Mr is direct, therefore Mi ∩ (Σ_{j≠i} Mj) = 0. So εi² = εi, which proves part of (a). Moreover, this implies

0 = εi ε1 + . . . + εi εi−1 + εi εi+1 + . . . + εi εr ,

where the summands εiεj lie in Mj for each j ≠ i. Since we have a direct sum, each of these summands must be zero, and this completes the proof of (a).
(b) We show now that Aεi = Mi . Since εi ∈ Mi and Mi is an A-module, it follows
that Aεi ⊆ Mi . For the converse, take some m ∈ Mi , then

m = m1A = mε1 + . . . + mεi + . . . + mεr .



Now m − mεi ∈ Mi ∩ (Σ_{j≠i} Mj), which is zero, and therefore m = mεi ∈ Aεi. Since we assume Mi ≠ 0, it follows that εi ≠ 0. □
Elements ε ∈ A with ε² = ε are called idempotents; the properties in (a) say that 1A = ε1 + . . . + εr is an orthogonal idempotent decomposition of the identity. (Such a decomposition for a path algebra has already appeared in Exercise 2.6.)
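For instance, in the algebra A = K × K, written as the direct sum of the submodules M1 = K × 0 and M2 = 0 × K, the identity decomposes as 1A = (1, 0) + (0, 1); the elements ε1 = (1, 0) and ε2 = (0, 1) satisfy ε1ε2 = 0 = ε2ε1, εi² = εi and Mi = Aεi, exactly as in Lemma 5.1.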

5.1 A Special Case

In this section we classify finite-dimensional commutative semisimple K-algebras


where K is algebraically closed. This particularly nice result is a special case of the
Artin–Wedderburn theorem.
Proposition 5.2. Let K be an algebraically closed field. Suppose A is a finite-
dimensional commutative K-algebra. Then A is semisimple if and only if A is
isomorphic to a direct product of copies of K, that is, as algebras we have A ≅ K × K × . . . × K.
Proof. The direct product of copies of K is semisimple, even for arbitrary fields,
see Example 4.19.
Conversely, assume A is a finite-dimensional semisimple commutative K-
algebra. By Remark 4.10, as an A-module, A is the direct sum of finitely many
simple submodules,

A = S1 ⊕ S2 ⊕ . . . ⊕ Sr .

We apply Lemma 5.1 with Mi = Si , so we get an orthogonal idempotent


decomposition of the identity element of A, as 1A = ε1 +ε2 +. . .+εr , and Si = Aεi .
Since A is finite-dimensional, every simple A-module is finite-dimensional by
Corollary 3.20. Moreover, since K is algebraically closed and A commutative,
Corollary 3.38 implies that each simple A-module Si is 1-dimensional, hence {εi} is a basis of Si.
We will now construct a map ψ : A → K × K × . . . × K (with r factors in


the direct product) and show that it is an algebra isomorphism. Take an arbitrary
element a ∈ A. Then aεi ∈ Si for i = 1, . . . , r. Since εi is a basis for Si , there exist
unique αi ∈ K such that aεi = αi εi . It follows that we have

a = a1A = aε1 + aε2 + . . . + aεr = α1 ε1 + α2 ε2 + . . . + αr εr .

Define a map ψ : A → K × K × . . . × K by setting

ψ(a) := (α1 , α2 , . . . , αr ).

We now show that ψ is an algebra isomorphism. From the definition one sees that
ψ is K-linear. It is also surjective, since ψ(εi ) = (0, . . . , 0, 1, 0, . . . , 0) for each
i. Moreover, it is injective: if ψ(a) = 0, so that all αi are zero, then a = 0, by
definition. It only remains to show that the map ψ is an algebra homomorphism. For
any a, b ∈ A suppose that ψ(a) = (α1 , α2 , . . . , αr ) and ψ(b) = (β1 , β2 , . . . , βr );
then we have

(ab)1A = a(b1A) = a(β1 ε1 + β2 ε2 + . . . + βr εr )


= aβ1 ε1 + aβ2 ε2 + . . . + aβr εr = β1 (aε1 ) + β2 (aε2) + . . . + βr (aεr )
= β1 α1 ε1 + β2 α2 ε2 + . . . + βr αr εr = α1 β1 ε1 + α2 β2 ε2 + . . . + αr βr εr ,

where the fourth equality uses axiom (Alg) from Definition 1.1 and the last equality
holds since the αi and βi are in K and hence commute. This implies that

ψ(a)ψ(b)=(α1 , α2 , . . . , αr )(β1 , β2 , . . . , βr ) = (α1 β1 , α2 β2 , . . . , αr βr ) = ψ(ab).

Finally, it follows from the definition that ψ(1A ) = (1, 1, . . . , 1) = 1K×...×K . This
proves that ψ : A → K × K × . . . × K is an isomorphism of algebras. 

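To see the proof in action, take A = C[X]/(X² − X) with simple summands S1 = Aε1 and S2 = Aε2, where ε1 = X + (X² − X) and ε2 = 1 − X + (X² − X) form an orthogonal idempotent decomposition of 1A. For a coset a = g(X) + (X² − X) one computes aε1 = g(1)ε1 and aε2 = g(0)ε2, so here ψ(a) = (g(1), g(0)), giving an explicit isomorphism A ≅ C × C.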
Remark 5.3.
(1) Proposition 5.2 need not hold if K is not algebraically closed. For example,
consider the commutative R-algebra A = R[X]/(X2 + 1). Since X2 + 1 is
irreducible in R[X], we know from Proposition 3.23 that A as an A-module is
simple, and it is a semisimple algebra, by Proposition 4.14. However, A ≇ R × R, since A ≅ C is a field, whereas R × R contains non-zero zero divisors.
(2) Proposition 5.2 does not hold for infinite-dimensional algebras, even if the field
K is algebraically closed. We have seen in Remark 3.34 that C(X) is a simple
C(X)-module. In particular, C(X) is a semisimple C-algebra. As in (1), the field
C(X) cannot be isomorphic to a product of copies of C.
5.2 Towards the Artin–Wedderburn Theorem

We want to classify semisimple algebras. The input for this is more general: the first
ingredient is to relate any algebra A to its algebra of A-module endomorphisms. The
second ingredient is the observation that one can view the endomorphisms of a direct
sum of A-modules as an algebra of matrices, where the entries are homomorphisms
between the direct summands. We will discuss these now.
Let A be a K-algebra. For any A-module V we denote by EndA (V ) the K-
algebra of A-module homomorphisms from V to V (see Exercise 2.7). Recall
from Definition 1.6 the definition of the opposite algebra: for any K-algebra B, the opposite algebra B^op has the same K-vector space structure as B, and multiplication ∗ defined by b ∗ b′ := b′b for all b, b′ ∈ B. The following result compares an algebra with its endomorphism algebra.
Lemma 5.4. Let K be a field and let A be a K-algebra. Then A is isomorphic to
EndA (A)op as K-algebras.
Proof. For any a ∈ A we consider the right multiplication map,

ra : A → A , ra (x) = xa for all x ∈ A.

One checks that this is an A-module homomorphism: for all a, b, x ∈ A we have ra(bx) = (bx)a = b(xa) = b ra(x). Thus ra ∈ EndA(A) for every a ∈ A.
Conversely, we claim that every element in EndA (A) is of this form, that is, we
have EndA (A) = {ra | a ∈ A}. In fact, let ϕ ∈ EndA (A) and let a := ϕ(1A ); then
for every x ∈ A we have

ϕ(x) = ϕ(x1A ) = xϕ(1A ) = xa = ra (x),

that is ϕ = ra , proving the claim. We define therefore a map

ψ : A → EndA (A)op , ψ(a) = ra .

Then ψ is surjective, as we have just seen. It is also injective: if ψ(a) = ψ(a′) then for all x ∈ A we have xa = xa′, and taking x = 1A shows a = a′.
We will now complete the proof of the lemma, by showing that ψ is a
homomorphism of K-algebras. First, the map ψ is K-linear: For every λ, μ ∈ K
and a, b, x ∈ A we have

rλa+μb (x) = x(λa + μb) = λ(xa) + μ(xb) = λra (x) + μrb (x) = (λra + μrb )(x)

(where the second equality uses axiom (Alg) from Definition 1.1). Therefore,

ψ(λa + μb) = rλa+μb = λra + μrb = λψ(a) + μψ(b).


Moreover, it is clear that ψ(1A ) = idA . Finally, we show that ψ preserves the
multiplication. For every a, b, x ∈ A we have

rab (x) = x(ab) = (xa)b = rb (ra (x)) = (ra ∗ rb )(x)

and hence

ψ(ab) = rab = ψ(a) ∗ ψ(b).

Note that here we used the multiplication in the opposite algebra. □
As promised, we will now analyse endomorphisms of direct sums of A-modules.
In analogy to the definition of matrix algebras in linear algebra, if we start with
a direct sum of A-modules, we can define a matrix algebra where the entries are
homomorphisms between the direct summands. If U, W are A-modules, then we
write HomA (U, W ) for the vector space of all A-module homomorphisms U → W ,
a subspace of the K-linear maps from U to W .
Lemma 5.5. Let A be a K-algebra. Given finitely many A-modules U1 , . . . , Ur ,
we consider r × r-matrices whose (i, j )-entry is an A-module homomorphism from
Uj to Ui ,
Δ := { (ϕij)1≤i,j≤r | ϕij ∈ HomA(Uj, Ui) }.

Then Δ becomes a K-algebra with respect to matrix addition and matrix multiplication, where the product of two matrix entries is composition of maps.
Proof. It is clear that matrix addition and scalar multiplication turn Δ into a K-vector space (where homomorphisms are added pointwise as usual). Furthermore, matrix multiplication induces a multiplication on Δ. To see this, consider the product of two elements ϕ = (ϕij) and ψ = (ψij) from Δ. The product ϕψ has as its (i, j)-entry the homomorphism Σ_{ℓ=1}^{r} ϕiℓ ◦ ψℓj, which is indeed an element of HomA(Uj, Ui), as needed. The identity element in Δ is the diagonal matrix with diagonal entries idU1, . . . , idUr. The algebra axioms follow from the usual rules for matrix addition and matrix multiplication. □
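A useful special case: if U1 = . . . = Ur = U for a single A-module U, then every entry ϕij lies in EndA(U), and Δ is just the matrix algebra Mr(EndA(U)) with entries in the endomorphism algebra of U. This is the shape in which Δ will reappear in the proof of Theorem 5.7.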

In linear algebra one identifies the algebra of endomorphisms of an n-
dimensional K-vector space with the algebra Mn (K) of n × n-matrices over
K. In analogy, we can identify the algebra of endomorphisms of a direct sum of
A-modules with the matrix algebra as introduced in Lemma 5.5.
Lemma 5.6. Let A be a K-algebra, suppose U1, . . . , Ur are A-modules and V := U1 ⊕ . . . ⊕ Ur their direct sum. Then the algebra Δ from Lemma 5.5 is isomorphic as a K-algebra to the endomorphism algebra EndA(V).
Proof. For every i ∈ {1, . . . , r} we consider the projections πi : V → Ui and the embeddings κi : Ui → V. These are A-module homomorphisms, and they satisfy Σ_{i=1}^{r} κi ◦ πi = idV.
Now let γ ∈ EndA (V ) be an arbitrary element. Then for every i, j ∈ {1, . . . , r}
we define

γij := πi ◦ γ ◦ κj ∈ HomA (Uj , Ui ).

This leads us to define the following map:

Φ : EndA(V) → Δ ,  γ → (γij)1≤i,j≤r ∈ Δ.

We are going to show that Φ is an isomorphism of K-algebras, thus proving the lemma. Let β, γ ∈ EndA(V). For every pair of scalars a, b ∈ K the (i, j)-entry in Φ(aβ + bγ) is

πi ◦ (aβ + bγ) ◦ κj = a(πi ◦ β ◦ κj) + b(πi ◦ γ ◦ κj),

which is equal to the sum of the (i, j)-entries of the matrices aΦ(β) and bΦ(γ). Thus, Φ is a K-linear map. Furthermore, it is clear from the definition that

Φ(1EndA(V)) = Φ(idV) = 1Δ,

the diagonal matrix with identity maps on the diagonal, since πi ◦ κj = 0 for i ≠ j and πi ◦ κi = idUi for all i. Next we show that Φ is multiplicative. The (i, j)-entry in the product Φ(β)Φ(γ) is given by

Σ_{ℓ=1}^{r} βiℓ ◦ γℓj = Σ_{ℓ=1}^{r} πi ◦ β ◦ κℓ ◦ πℓ ◦ γ ◦ κj
                     = πi ◦ β ◦ (Σ_{ℓ=1}^{r} κℓ ◦ πℓ) ◦ γ ◦ κj
                     = πi ◦ (β ◦ idV ◦ γ) ◦ κj = (β ◦ γ)ij.

Thus, Φ(β ◦ γ) = Φ(β)Φ(γ).


It remains to show that Φ is bijective. For injectivity suppose that Φ(γ) = 0, that is, γij = 0 for all i, j. Then we get

γ = idV ◦ γ ◦ idV = (Σ_{i=1}^{r} κi ◦ πi) ◦ γ ◦ (Σ_{j=1}^{r} κj ◦ πj)
  = Σ_{i=1}^{r} Σ_{j=1}^{r} κi ◦ (πi ◦ γ ◦ κj) ◦ πj = Σ_{i=1}^{r} Σ_{j=1}^{r} κi ◦ γij ◦ πj = 0.

To show that Φ is surjective, let λ ∈ Δ be an arbitrary element, with (i, j)-entry λij. We have to find a preimage under Φ. To this end, we define

γ := Σ_{k=1}^{r} Σ_{ℓ=1}^{r} κk ◦ λkℓ ◦ πℓ ∈ EndA(V).

Using that πi ◦ κk = 0 for i ≠ k and πi ◦ κi = idUi, we then obtain for every i, j ∈ {1, . . . , r} that

γij = πi ◦ γ ◦ κj = πi ◦ (Σ_{k=1}^{r} Σ_{ℓ=1}^{r} κk ◦ λkℓ ◦ πℓ) ◦ κj
    = Σ_{k=1}^{r} Σ_{ℓ=1}^{r} πi ◦ κk ◦ λkℓ ◦ πℓ ◦ κj = λij.

Thus, Φ(γ) = λ and Φ is surjective. □
Theorem 5.7. Let A be a K-algebra and let V = S1 ⊕. . .⊕St , where S1 , . . . , St are
simple A-modules. Then there exist positive integers r and n1 , . . . , nr , and division
algebras D1 , . . . , Dr over K such that

EndA(V) ≅ Mn1(D1) × . . . × Mnr(Dr).

Proof. The crucial input is Schur’s lemma (see Theorem 3.33) which we recall now.
If Si and Sj are simple A-modules then HomA (Si , Sj ) = 0 if Si and Sj are not
isomorphic, and otherwise we have that EndA (Si ) is a division algebra over K.
We label the summands of the module V so that isomorphic ones are grouped
together, explicitly we take

S1 ≅ . . . ≅ Sn1,  Sn1+1 ≅ . . . ≅ Sn1+n2,  . . . ,  Sn1+...+nr−1+1 ≅ . . . ≅ Sn1+...+nr = St,

and there are no other isomorphisms. That is, we have r different isomorphism types amongst the Si, and they come with multiplicities n1, . . . , nr. Define

D1 := EndA (S1 ), D2 := EndA (Sn1 +1 ), . . . , Dr := EndA (Sn1 +...+nr−1 +1 );

these are division algebras, by Schur’s lemma. Then Lemma 5.6 and Schur’s lemma
show that the endomorphism algebra of V can be written as a matrix algebra, with
block matrices:

EndA(V) ≅ Δ = (HomA(Sj, Si))i,j,

which is block diagonal, since HomA(Sj, Si) = 0 whenever Si ≇ Sj; the diagonal blocks are Mn1(D1), . . . , Mnr(Dr). Hence

EndA(V) ≅ Mn1(D1) × . . . × Mnr(Dr).  □
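For example, if V = S ⊕ S ⊕ T with S, T simple finite-dimensional modules and S ≇ T, then r = 2, n1 = 2, n2 = 1, and EndA(V) ≅ M2(D1) × D2 with D1 = EndA(S) and D2 = EndA(T); if in addition K is algebraically closed, this is just M2(K) × K by Schur's lemma.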
We will now consider the algebras in Theorem 5.7 in more detail. In particular,
we want to show that they are semisimple K-algebras. This is part of the Artin–
Wedderburn theorem, which will come in the next section.
Example 4.19 is a special case, and we have seen that for every field K the
algebras Mn1 (K) × . . . × Mnr (K) are semisimple. The proof for the algebras in
Theorem 5.7 is essentially the same.
Lemma 5.8.
(a) Let D be a division algebra over K. Then for every n ∈ N the matrix algebra
Mn (D) is a semisimple K-algebra. Moreover, the opposite algebra Mn (D)op is
isomorphic to Mn (D op ), as a K-algebra.
(b) Let D1 , . . . , Dr be division algebras over K. Then for any n1 , . . . , nr ∈ N the
direct product Mn1 (D1 ) × . . . × Mnr (Dr ) is a semisimple K-algebra.
Proof. (a) Let A = Mn(D), and let D^n be the natural A-module. We claim that this is a simple module. The proof is exactly the same as the proof for D = K in Example 2.14, since there we only used that non-zero elements have inverses (and not that elements commute). As in the case D = K, we see that A as an A-module is the direct sum A = C1 ⊕ C2 ⊕ . . . ⊕ Cn, where Ci consists of the matrices in A which are zero outside the i-th column. Again as in the case D = K, each Ci is isomorphic to the natural module D^n as an A-module. This shows that A is a direct sum of simple submodules and hence is a semisimple algebra.
We show now that the opposite algebra Mn (D)op is isomorphic to Mn (D op ).
Note that both algebras have the same underlying K-vector space. Let τ be the
map which takes an n × n-matrix to its transpose; that is, if a = (aij) ∈ Mn(D) then τ(a) is the matrix with (s, t)-entry equal to ats.
isomorphism on the vector space Mn (D). We show that τ is an algebra isomorphism
Mn (D)op → Mn (D op ). The identity elements in both algebras are the identity
matrices, and τ takes the identity to the identity. It remains to show that for
a, b ∈ Mn (D)op we have τ (a ∗ b) is equal to τ (a)τ (b).
(i) We have τ(a ∗ b) = τ(ba), and this has (s, t)-entry equal to the (t, s)-entry of the matrix product ba, which is

Σ_{j=1}^{n} btj ajs.

(ii) Now we write τ(a) = (âij), where âij = aji, and similarly let τ(b) = (b̂ij). We compute τ(a)τ(b) in Mn(D^op). This has (s, t)-entry equal to

Σ_{j=1}^{n} âsj ∗ b̂jt = Σ_{j=1}^{n} ajs ∗ btj = Σ_{j=1}^{n} btj ajs

(where in the first step we removed the hats, and in the second step we used the definition of the multiplication ∗ in D^op). This holds for all s, t, hence τ(a ∗ b) = τ(a)τ(b).
(b) By part (a) we know that Mn (D) is a semisimple algebra. Now part (b) follows
directly using Corollary 4.18, which shows that finite direct products of semisimple
algebras are semisimple. □
5.3 The Artin–Wedderburn Theorem

We have seen that any K-algebra Mn1(D1) × . . . × Mnr(Dr) is semisimple, where D1, . . . , Dr are division algebras over K. The Artin–Wedderburn theorem shows that, up to isomorphism, every semisimple K-algebra is of this form.
Theorem 5.9 (Artin–Wedderburn Theorem). Let K be a field and A a semisim-
ple K-algebra. Then there exist positive integers r and n1 , . . . , nr , and division
algebras D1 , . . . , Dr over K such that

A ≅ Mn1(D1) × . . . × Mnr(Dr).

Conversely, each K-algebra of the form Mn1 (D1 ) × . . . × Mnr (Dr ) is semisimple.
We will refer to this direct product as the Artin–Wedderburn decomposition of
the semisimple algebra A.
Proof. The last statement has been proved in Lemma 5.8.
Suppose that A is a semisimple K-algebra. By Remark 4.10, A as an A-module
is a finite direct sum A = S1 ⊕ . . . ⊕ St with simple A-submodules S1 , . . . , St . Now
Theorem 5.7 implies that there exist positive integers r and n1, . . . , nr and division algebras D̂1, . . . , D̂r over K such that

EndA(A) ≅ Mn1(D̂1) × . . . × Mnr(D̂r).

We can now deduce the structure of the algebra A, namely

A ≅ EndA(A)^op  (by Lemma 5.4)
  ≅ (Mn1(D̂1) × . . . × Mnr(D̂r))^op
  = Mn1(D̂1)^op × . . . × Mnr(D̂r)^op
  ≅ Mn1(D̂1^op) × . . . × Mnr(D̂r^op).  (by Lemma 5.8)
We set Di := D̂i^op (note that this is also a division algebra, since reversing the order in the multiplication does not affect whether elements are invertible). □

Remark 5.10. Note that a matrix algebra Mn (D) is commutative if and only if
n = 1 and D is a field. Therefore, let A be a commutative semisimple K-algebra.
Then the Artin–Wedderburn decomposition of A has the form

A ≅ M1(D1) × . . . × M1(Dr) ≅ D1 × . . . × Dr,

where the Di are fields containing K. From the start, each Di is the endomorphism algebra of a simple A-module. Furthermore, taking Di as the i-th factor in the above product decomposition, it is itself a simple A-module, and hence this simple module is identified with its endomorphism algebra.
In the rest of this section we will derive some consequences from the Artin–
Wedderburn theorem and we also want to determine the Artin–Wedderburn decom-
position for some classes of semisimple algebras explicitly.
We will now see that one can read off the number and the dimensions of simple
modules of a semisimple algebra from the Artin–Wedderburn decomposition. This
is especially nice when the underlying field is algebraically closed, such as the field
of complex numbers.
Corollary 5.11.
(a) Let D1 , . . . , Dr be division algebras over K, and let n1 , . . . , nr be positive
integers. The semisimple K-algebra Mn1 (D1 ) × . . . × Mnr (Dr ) has precisely
r simple modules, up to isomorphism. The K-vector space dimensions of these simple modules are n1 dimK D1, . . . , nr dimK Dr. (Note that these dimensions of simple modules need not be finite.)
(b) Suppose the field K is algebraically closed, and that A is a finite-dimensional
semisimple K-algebra. Then there exist positive integers n1 , . . . , nr such that

A ≅ Mn1(K) × . . . × Mnr(K).

Then A has precisely r simple modules, up to isomorphism, of dimensions n1, . . . , nr.
Proof. (a) We apply Corollary 3.31 to describe the simple modules: Let
A = A1 ×. . .×Ar be a direct product of K-algebras. Then the simple A-modules are
given precisely by the simple Ai-modules (where 1 ≤ i ≤ r) such that the factors Aj for j ≠ i act as zero. In our situation, A = Mn1(D1) × . . . × Mnr(Dr) and the simple A-modules are given by the simple Mni(Di)-modules for i = 1, . . . , r. As observed in the proof of Lemma 5.8, each matrix algebra Mni(Di) is a direct sum of submodules, each isomorphic to the natural module Di^{ni}; then Theorem 3.19 implies that Mni(Di) has a unique simple module, namely Di^{ni}. Therefore, A has precisely r simple modules, up to isomorphism, as claimed. Clearly, the dimensions of these simple A-modules are dimK(Di^{ni}) = ni dimK Di.
(b) Since A is finite-dimensional by assumption, every simple A-module is finite-


dimensional, see Corollary 3.20. By the assumption on K, Schur’s lemma (see
Theorem 3.33) shows that EndA(S) ≅ K for every simple A-module S. These are the division algebras appearing in the Artin–Wedderburn decomposition, hence A ≅ Mn1(K) × . . . × Mnr(K). The statements on the number and the dimensions of simple modules follow from part (a). □
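For a concrete instance of part (a), the R-algebra M2(H) × M3(R), with H the quaternions, has exactly two simple modules up to isomorphism, namely the natural modules H² and R³, of R-dimensions 2 · dimR H = 8 and 3 · dimR R = 3.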
Remark 5.12. Note that in the proof above we described explicit versions of the simple modules for the algebra Mn1(D1) × . . . × Mnr(Dr): for each i, take the natural module Di^{ni} as a module for the i-th component, and let the j-th component for j ≠ i act as zero.
We will now find the Artin–Wedderburn decomposition for semisimple algebras
of the form A = K[X]/(f ). Recall from Proposition 4.14 that A is semisimple
if and only if f = f1 · . . . · fr with pairwise coprime irreducible polynomials
f1 , . . . , fr .
Proposition 5.13. Let A = K[X]/(f ), where f ∈ K[X] is a non-constant
polynomial and f = f1 · . . . · fr is a product of pairwise coprime irreducible
polynomials f1 , . . . , fr ∈ K[X]. Then the Artin–Wedderburn decomposition of the
semisimple algebra A has the form

A ≅ M1(K[X]/(f1)) × . . . × M1(K[X]/(fr)) ≅ K[X]/(f1) × . . . × K[X]/(fr).

Moreover, A has r simple modules, up to isomorphism, of dimensions deg(f1), . . . , deg(fr) (where deg denotes the degree of a polynomial).
Note that this recovers the Chinese Remainder Theorem in this case. Note also
that this Artin–Wedderburn decomposition depends on the field K. We will consider
the cases K = R and K = C explicitly at the end of the section.
Proof. By Proposition 4.23 the simple A-modules are up to isomorphism the
modules K[X]/(fi ) for 1 ≤ i ≤ r. We may apply Remark 5.10, which shows that
A is isomorphic to D1 × . . . × Dr where Di are fields. If we label the components
so that Di is the endomorphism algebra of K[X]/(fi ) then by the remark again, this
simple module can be identified with Di, and the claim follows. □

Example 5.14.
(1) Let K be an algebraically closed field, for example K = C. Then the
irreducible polynomials in K[X] are precisely the polynomials of degree 1.
So the semisimple algebras A = K[X]/(f ) appear for polynomials of the form
f = (X − λ1 ) · . . . · (X − λr ) for pairwise different λ1 , . . . , λr ∈ K. (Note that
we can assume f to be monic since (f ) = (uf ) for every non-zero u ∈ K.)
Then Proposition 5.13 gives the Artin–Wedderburn decomposition

A ≅ K[X]/(X − λ1) × . . . × K[X]/(X − λr) ≅ K × . . . × K.
In particular, A has precisely r simple modules, up to isomorphism, and they


are all 1-dimensional.
Note that the Artin–Wedderburn decomposition in this situation also follows
from Proposition 5.2; in Exercise 5.11 it is shown how to construct the
orthogonal idempotent decomposition of the identity of A, which then gives
an explicit Artin–Wedderburn decomposition.
(2) If K is arbitrary and f is a product of pairwise coprime irreducible polynomials
of degree 1, then K[X]/(f ) also has the Artin–Wedderburn decomposition as
in (1). For example, this applies when K = Zp and f(X) = X^p − X: this polynomial has as roots precisely the elements of Zp, and therefore it is a product of pairwise coprime linear factors.
(3) Let K = R. The irreducible polynomials in R[X] have degree 1 or 2 (where a degree 2 polynomial is irreducible if and only if it has no real root). In fact, for any non-constant polynomial in R[X] take a root z ∈ C; if z is not real, then (X − z)(X − z̄) ∈ R[X] is an irreducible degree 2 factor. Thus we consider semisimple algebras R[X]/(f) where

f = f1 · . . . · fr · g1 · . . . · gs

with pairwise coprime irreducible polynomials fi of degree 1 and gj of degree 2. Set Si := R[X]/(fi) for i = 1, . . . , r and Tj := R[X]/(gj) for
j = 1, . . . , s. By Exercise 3.10 we have that EndA(Si) ≅ R for i = 1, . . . , r and EndA(Tj) ≅ C for j = 1, . . . , s. So, by Remark 5.10, the Artin–Wedderburn decomposition takes the form

A ≅ R × . . . × R × C × . . . × C,

with r copies of R and s copies of C.

In particular, A has precisely r + s simple modules, up to isomorphism, r of them of dimension 1 and s of dimension 2.

EXERCISES

5.1. Find all semisimple C-algebras of dimension 9, up to isomorphism.


5.2. Find all 4-dimensional semisimple R-algebras, up to isomorphism. You may
use without proof that there are no 3-dimensional division algebras over R
and that the quaternions H form the only 4-dimensional real division algebra,
up to isomorphism.
5.3. Find all semisimple Z2 -algebras, up to isomorphism (where Z2 is the field
with two elements).
5.4. Let K be a field and A a semisimple K-algebra.
(a) Show that the centre Z(A) of A is isomorphic to a direct product of


fields; in particular, the centre of a semisimple algebra is a commutative,
semisimple algebra.
(b) Suppose A is a finite-dimensional semisimple algebra over K. Suppose x
is an element in the centre Z(A). Show that if x is nilpotent then x = 0.
5.5. Let K be a field and A a K-algebra. We consider the matrix algebra Mn (A)
with entries in A; note that this is again a K-algebra.
(a) For A = Mm (K) show that Mn (Mm (K)) ∼ = Mnm (K) as K-algebras.
(b) Let A = A1 × . . . × Ar be a direct product of K-algebras. Show that then
Mn (A) ∼
= Mn (A1 ) × . . . × Mn (Ar ) as K-algebras.
(c) Show that if A is a semisimple K-algebra then Mn (A) is also a
semisimple K-algebra.
5.6. Let A = A1 × . . . × Ar be a direct product of K-algebras. Show that all
two-sided ideals of A are of the form I1 × . . . × Ir where Ij is a two-sided
ideal of Aj for all j = 1, . . . , r.
5.7. (a) Let D be a division algebra over K. Show that the matrix algebra Mn (D)
only has the trivial two-sided ideals 0 and Mn (D).
(b) Let A be a semisimple K-algebra. Determine all two-sided ideals of A
and in particular give the number of these ideals.
5.8. Consider the C-algebra A := C[X, Y ]/(X2 , Y 2 , XY ) (note that A is
commutative and 3-dimensional). Find all two-sided ideals of A which have
dimension 1 as a C-vector space. Deduce from this (using the previous
exercise) that A is not a semisimple algebra.
5.9. Let A be a K-algebra, and ε ∈ A an idempotent, that is, ε² = ε.
(a) Verify that εAε = {εaε | a ∈ A} is a K-algebra.
(b) Consider the A-module Aε and show that every A-module homomor-
phism Aε → Aε is of the form x → xb for some b = εbε ∈ εAε.
(c) By following the strategy of Lemma 5.4, show that εAε ∼
= EndA (Aε)op
as K-algebras.
5.10. Let A be a commutative semisimple K-algebra, where A = S1 ⊕ . . . ⊕ Sr
with Si simple A-modules. By Lemma 5.1 we know Si = Aεi , where εi are
idempotents of A. Deduce from Exercise 5.9 that the endomorphism algebra
EndA(Aεi) can be identified with Aεi.

5.11. Consider the K-algebra A = K[X]/I where I = (f) and f = ∏_{i=1}^{r} (X − λi) with pairwise distinct λi ∈ K. For i = 1, . . . , r we define elements

ci = ∏_{j≠i} (λi − λj) ∈ K  and  εi = (1/ci) ∏_{j≠i} (X − λj) + I ∈ A.

(a) For all i show that (X + I )εi = λi εi in A, that is, εi is an eigenvector for
the action of the coset of X.
(b) Deduce from (a) that εiεj = 0 for i ≠ j. Moreover, show that εi² = εi.
(c) Show that ε1 + . . . + εr = 1A .
5.12. Let K = Zp and let f = X^p − X. Consider the algebra A = K[X]/(f) as an A-module. Explain how Exercise 5.11 can be applied to express A as a direct sum of 1-dimensional modules. (Hint: The roots of X^p − X are precisely the elements of Zp, by Lagrange's theorem from elementary group theory. Hence X^p − X factors into p distinct linear factors in Zp[X].)
Chapter 6. Semisimple Group Algebras and Maschke's Theorem

The Artin–Wedderburn theorem gives a complete description of the structure of


semisimple algebras. We will now investigate when a group algebra of a finite
group is semisimple, this is answered by Maschke’s theorem. If this is the case,
then we will see how the Artin–Wedderburn decomposition of the group algebra
gives information about the group, and vice versa.

6.1 Maschke’s Theorem

Let G be a finite group and let K be a field. The main idea of the proof of Maschke’s
theorem is more general: Given any K-linear map between KG-modules, one can
construct a KG-module homomorphism, by ‘averaging over the group’.
Lemma 6.1. Let G be a finite group and let K be a field. Suppose M and N are
KG-modules, and f : M → N is a K-linear map. Define

T(f) : M → N ,  m → Σ_{x∈G} x(f(x⁻¹m)).

Then T (f ) is a KG-module homomorphism.


Proof. We check that T (f ) is K-linear. Let α, β ∈ K and m1 , m2 ∈ M. Using that
multiplication by elements in KG is linear, and that f is linear, we get

T(f)(αm1 + βm2) = Σ_{x∈G} x(f(x⁻¹(αm1 + βm2)))
                = Σ_{x∈G} x(f(x⁻¹αm1 + x⁻¹βm2))
                = Σ_{x∈G} x(αf(x⁻¹m1) + βf(x⁻¹m2))
                = α T(f)(m1) + β T(f)(m2).

To see that T(f) is indeed a KG-module homomorphism it suffices to check, by Remark 2.21, that the action of T(f) commutes with the action of elements in the group basis of KG. So take y ∈ G; then we have for all m ∈ M that

T(f)(ym) = Σ_{x∈G} x(f(x⁻¹ym)) = Σ_{x∈G} y(y⁻¹x)(f((y⁻¹x)⁻¹m)).

But for a fixed y ∈ G, as x varies through all the elements of G, so does y⁻¹x. Hence we get from above that

T(f)(ym) = y Σ_{x̃∈G} x̃(f(x̃⁻¹m)) = y T(f)(m),

which shows that T(f) is a KG-module homomorphism. □



Example 6.2. As an illustration of the above lemma, consider a cyclic group G of order 2, generated by an element g, and let M = N be the regular CG-module (that is, CG with left multiplication). Any C-linear map f : CG → CG is given by a 2 × 2-matrix (a b; c d) ∈ M2(C) (written row by row), with respect to the standard basis {1, g} of CG. Then f defines a CG-module homomorphism if and only if this matrix commutes with the matrix describing the action of g with respect to the same basis, which is (0 1; 1 0). We compute the map T(f). Since g = g⁻¹ in G we have

T(f)(1) = f(1) + gf(g⁻¹) = a + cg + g(b + dg) = (a + d) + (b + c)g

and similarly

T(f)(g) = f(g) + gf(g⁻¹g) = (b + c) + (a + d)g.

So the linear map T(f) is, with respect to the standard basis {1, g}, given by the matrix

(a + d  b + c; b + c  a + d).

One checks that this commutes with the matrix (0 1; 1 0) for the action of g, hence T(f) is indeed a CG-module homomorphism.
We will now state and prove Maschke’s theorem. This is an easy and completely
general criterion to decide when a group algebra of a finite group is semisimple.
Theorem 6.3 (Maschke’s Theorem). Let K be a field and G a finite group. Then
the group algebra KG is semisimple if and only if the characteristic of K does not
divide the order of G.
Proof. Assume first that the characteristic of K does not divide the group order |G|.
By definition the group algebra KG is semisimple if and only if KG is semisimple
as a KG-module. So let W be a submodule of KG, then by Theorem 4.3 we must
show that W has a complement, that is, there is a KG-submodule C of KG such
that W ⊕ C = KG.
Considered as K-vector spaces there is certainly a K-subspace V such that
W ⊕ V = KG (this is the standard result from linear algebra that every linearly
independent subset can be completed to a basis). Let f : KG → W be the
projection onto W with kernel V ; note that this is just a K-linear map. By
assumption, |G| is invertible in K and using Lemma 6.1 we can define

γ : KG → W ,  γ := (1/|G|) T(f).

Note that γ is a KG-module homomorphism by Lemma 6.1. So C := ker(γ ) ⊆ KG


is a KG-submodule.
We apply Lemma 2.30, with M = KG, and with N = N′ = W, and where γ : KG → W is the map π of the lemma and j : W → KG is the inclusion map.
We will show that γ ◦ j is the identity map on W , hence is an isomorphism. Then
Lemma 2.30 shows that KG = W ⊕ C is a direct sum of KG-submodules, that is,
W has a complement.
Let w ∈ W; then for all g ∈ G we have g⁻¹w ∈ W and so f(g⁻¹w) = g⁻¹w. Therefore gf(g⁻¹w) = w and

(γ ◦ j)(w) = γ(w) = (1/|G|) Σ_{g∈G} gf(g⁻¹w) = (1/|G|) Σ_{g∈G} w = w.

Hence, γ ◦ j is the identity map, as required.


For the converse of Maschke's Theorem, suppose KG is a semisimple algebra. We have to show that the characteristic of the field K does not divide the order of G. Consider the element w := Σ_{g∈G} g ∈ KG, and observe that

xw = w for all x ∈ G.  (6.1)

Therefore, the 1-dimensional K-subspace U = span{w} is a KG-submodule of


KG. We assume that KG is a semisimple algebra, hence there is a KG-submodule
C of KG such that KG = U ⊕ C (see Theorem 4.3). Write the identity element
in the form 1KG = u + c with u ∈ U and c ∈ C. Then u is non-zero, since
otherwise we would have KG = KGc ⊆ C, hence C = KG, a contradiction. Now,
u = λw with 0 ≠ λ ∈ K. By (6.1), we deduce that w² = |G|w, but also w = w1KG = w(λw) + wc = λ|G|w + wc. We have wc ∈ C since C is a KG-submodule, and it follows that

w − λ|G|w ∈ U ∩ C = 0.

Now, w ≠ 0, so 1 − λ|G| = 0 in K; in particular, |G| must be non-zero in K. □



Note that when the field K has characteristic 0 the condition in Maschke’s
theorem is satisfied for every finite group G and hence KG is semisimple.
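For instance, take K = Z2 and G = C2 = {1, g}, so that char K = 2 divides |G| = 2. Indeed, Z2C2 is not semisimple: the element w = 1 + g satisfies w² = 1 + 2g + g² = 0, so span{w} is a non-zero nilpotent ideal; it lies in the Jacobson radical by Exercise 4.6, hence J(Z2C2) ≠ 0 and Z2C2 is not semisimple by Theorem 4.23.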

6.2 Some Consequences of Maschke’s Theorem

Suppose G is a finite group, then by Maschke’s Theorem the group algebra


CG is semisimple. We can therefore apply the Artin–Wedderburn theorem (see
Corollary 5.11 (b)) and obtain that

CG ≅ Mn1(C) × Mn2(C) × . . . × Mnk(C)

for some positive integers n1 , . . . , nk . This Artin–Wedderburn decomposition of the


group algebra CG gives us new information.
Theorem 6.4. Let G be a finite group, and let

CG ≅ Mn1(C) × Mn2(C) × . . . × Mnk(C)

be the Artin–Wedderburn decomposition of the group algebra CG. Then the


following hold:
(a) The group algebra CG has precisely k simple modules, up to isomorphism, and the dimensions of these simple modules are n1, n2, . . . , nk.
(b) We have |G| = Σ_{i=1}^{k} ni².
(c) The group G is abelian if and only if all simple CG-modules are of dimension 1.
Proof. (a) This follows directly from Corollary 5.11.
(b) This follows from comparing C-vector space dimensions on both sides in the
Artin–Wedderburn decomposition.
(c) If G is abelian then the group algebra CG is commutative. Therefore, by
Proposition 5.2 we have CG ∼ = C ×C ×. . .×C. By part (a), all simple CG-modules
are 1-dimensional.
Conversely, if all simple CG-modules are 1-dimensional, then again by part (a)
the Artin–Wedderburn decomposition has the form CG ∼ = C × C × . . . × C. Hence
CG is commutative and therefore G is abelian. □
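As a quick illustration of part (b), take the cyclic group C3: here CC3 ≅ C × C × C, in accordance with |C3| = 3 = 1² + 1² + 1², and all three simple modules are 1-dimensional, as part (c) predicts for an abelian group.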
Remark 6.5. The statements of Theorem 6.4 need not hold if the field is not algebraically closed. For instance, consider the group algebra RC3, where C3 is the cyclic group of order 3. This algebra is isomorphic to the algebra R[X]/(X³ − 1), see Example 1.27. In R[X] we have the factorization X³ − 1 = (X − 1)(X² + X + 1) into irreducible polynomials. Hence, by Example 5.14 the algebra has Artin–Wedderburn decomposition

RC3 ≅ M1(R) × M1(C) ≅ R × C.

Moreover, it has two simple modules, up to isomorphism, of dimensions 1 and 2, by Proposition 3.23. Thus all parts of Theorem 6.4 fail in this case.
Exercise 6.1. Consider the group algebra RC4 of the cyclic group C4 of order 4 over
the real numbers. Determine the Artin–Wedderburn decomposition of this algebra
and also the number and the dimensions of the simple modules.
Remark 6.6. We would like to explore examples of Artin–Wedderburn decompo-
sitions for groups which are not abelian. We do this using subgroups of small
symmetric groups. Recall that Sn is the group of all permutations of {1, 2, . . . , n}; it contains n! elements. It has as a normal subgroup the alternating group An of even permutations, of order n!/2, and the factor group Sn/An has two elements. A permutation is even if it can be expressed as the product of an even number of 2-cycles. For n ≥ 5, the group An is the only normal subgroup of Sn except the identity subgroup and Sn itself.
For n = 4, the non-trivial normal subgroups of S4 are A4 and also the Klein
4-group. This is the subgroup of S4 which consists of the identity together with all
elements of the form (a b)(c d) where {a, b, c, d} = {1, 2, 3, 4}. We denote it by
V4 . It is contained in A4 and therefore it is also a normal subgroup of A4 .
Example 6.7. We can sometimes find the dimensions of simple modules from the
numerical data obtained from the Artin–Wedderburn decomposition. Let G = S3 be
the symmetric group on three letters. Apart from the trivial module there is another
1-dimensional CG-module given by the sign of a permutation, see Exercise 6.2.
Moreover, by Theorem 6.4 (c) there must exist a simple CG-module of dimension
> 1 since S3 is not abelian. From Theorem 6.4 (b) we get

6 = 1 + 1 + Σ_{i=3}^{k} ni².

The only possible solution for this is 6 = 1 + 1 + 2², that is, the group algebra CS3 has three simple modules, up to isomorphism, of dimensions 1, 1, 2.
Exercise 6.2. Let G = Sn , the group of permutations of {1, 2, . . . , n}. Recall that
every permutation g ∈ G is either even or odd. Define

σ(g) = 1 if g is even,  σ(g) = −1 if g is odd.

Deduce that this defines a representation σ : G → GL1(C), usually called the sign representation. Describe the corresponding CG-module.

6.3 One-Dimensional Simple Modules and Commutator Groups

In Example 6.7 we found the dimensions of the simple modules for the group
algebra CS3 from the numerical data coming from the Artin–Wedderburn decom-
position as in Theorem 6.4, and knowing that the group algebra is not commutative.
In general, one needs further information if one wants to find the dimensions of
simple modules for a group algebra CG. For instance, take the alternating group
A4 . We ask for the integers ni in Theorem 6.4, that is the sizes of the matrix blocks
in the Artin–Wedderburn decomposition of CA4. That is, we must express 12 as a sum of squares of integers, not all equal to 1 (but at least one summand equal to 1, coming from the trivial module). The possibilities are 12 = 1 + 1 + 1 + 3², or 12 = 1 + 1 + 1 + 1 + 2² + 2², or also 12 = 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 2². Fortunately,
there is further general information on the number of 1-dimensional (simple) CG-
modules, which uses the group theoretic description of the largest abelian factor
group of G.
We briefly recall a notion from elementary group theory. For a finite group G, the commutator subgroup G′ is defined as the subgroup of G generated by all elements of the form [x, y] := xyx⁻¹y⁻¹ for x, y ∈ G; then we have:
(i) G′ is a normal subgroup of G.
(ii) Let N be a normal subgroup of G. Then the factor group G/N is abelian if and only if G′ ⊆ N. In particular, G/G′ is abelian.
Details can be found, for example, in the book by Smith and Tabachnikova in this
series.1
This allows us to determine the number of one-dimensional simple modules for
a group algebra CG, that is, the number of factors C in the Artin–Wedderburn
decomposition.

1 G. Smith, O. Tabachnikova, Topics in Group Theory. Springer Undergraduate Mathematics Series. Springer-Verlag London, Ltd., London, 2000. xvi+255 pp.


Corollary 6.8. Let G be a finite group. Then the number of 1-dimensional simple CG-modules (up to isomorphism) is equal to the order of the factor group G/G′; in particular, it divides the order of G.
Proof. Let V be a 1-dimensional CG-module, say V = span{v}. We claim that every element n ∈ G′ acts trivially on V. It is enough to prove this for an element n = [x, y] with x, y ∈ G. There exist scalars α, β ∈ C such that x · v = αv and y · v = βv. Then

[x, y] · v = (xyx⁻¹y⁻¹) · v = αβα⁻¹β⁻¹v = v

and the claim follows.
The bijection in Lemma 2.43 (with N = G′) preserves dimensions, so we get a bijection between 1-dimensional representations of C(G/G′) and 1-dimensional CG-modules on which G′ acts as identity, that is, by what we have seen above, with all 1-dimensional CG-modules.
The group G/G′ is abelian, so the 1-dimensional C(G/G′)-modules are precisely the simple C(G/G′)-modules, by Corollary 3.38 (note that by Corollary 3.20 every simple C(G/G′)-module is finite-dimensional). By Theorem 6.4 the number of these is |G/G′|. □

Example 6.9. We return to the example at the beginning of this section. Consider the alternating group A4 of order 12; we determine its commutator subgroup A4′. As we have mentioned, the Klein 4-group V4 = {id, (1 2)(3 4), (1 3)(2 4), (1 4)(2 3)} is a normal subgroup of A4. The factor group has order 3, hence is cyclic and in particular abelian. Thus we obtain A4′ ⊆ V4. On the other hand, every element in V4 is a commutator in A4, for example (1 2)(3 4) = [(1 2 3), (1 2 4)]. Thus, A4′ = V4, and Corollary 6.8 shows that CA4 has precisely |A4/V4| = 3 one-dimensional simple modules. This information is sufficient to determine the number and the dimensions of the simple CA4-modules. By Theorem 6.4 we know that

12 = |A4| = 1² + 1² + 1² + n4² + . . . + nk²  with ni ≥ 2.

The only possibility is that k = 4 and n4 = 3. Hence, CA4 has four simple modules (up to isomorphism), of dimensions 1, 1, 1, 3, and its Artin–Wedderburn decomposition is

CA4 ≅ C × C × C × M3(C).

Example 6.10. Let G be the dihedral group of order 10, as in Exercise 2.20. Then
by Exercise 3.14, the dimension of any simple CG-module is at most 2. The trivial
module is 1-dimensional, and by Theorem 6.4 there must be a simple CG-module
of dimension 2 since G is not abelian. From Theorem 6.4 we have now

10 = a · 1 + b · 4
with positive integers a and b. By Corollary 6.8, the number a divides 10, the order
of G. The only solution is that a = 2 and b = 2. So there are two non-isomorphic
2-dimensional simple CG-modules. The Artin–Wedderburn decomposition has the
form

CG ≅ C × C × M2(C) × M2(C).

6.4 Artin–Wedderburn Decomposition and Conjugacy Classes

The number of matrix blocks in the Artin–Wedderburn decomposition of the group


algebra CG for G a finite group also has an interpretation in terms of the group G.
Namely it is equal to the number of conjugacy classes of the group. We will see this
by showing that both numbers are equal to the dimension of the centre of CG.
Let K be an arbitrary field, and let G be a finite group. Recall that the conjugacy
class of an element x ∈ G is the set {gxg −1 | g ∈ G}. For example, if G is the
symmetric group Sn then the conjugacy class of an element g ∈ G consists precisely
of all elements which have the same cycle type. For each conjugacy class C of the
group G we consider its ‘class sum’

C̄ := Σ_{g∈C} g ∈ KG.

Proposition 6.11. Let G be a finite group, and let K be an arbitrary field. Then the class sums C̄ = Σ_{g∈C} g, as C varies through the conjugacy classes of G, form a K-basis of the centre Z(KG) of the group algebra KG.
Proof. We begin by showing that each class sum C̄ is contained in the centre of KG. It suffices to show that xC̄ = C̄x for all x ∈ G. Note that with g also xgx⁻¹ varies through all elements of the conjugacy class C. Then we have

xC̄x⁻¹ = Σ_{g∈C} xgx⁻¹ = Σ_{y∈C} y = C̄,

which is equivalent to xC̄ = C̄x.


Since each g ∈ G occurs in precisely one conjugacy class, it is clear that the class sums C̄ for the different conjugacy classes C are linearly independent over K.
It remains to show that the class sums span the centre. Let w = Σ_{x∈G} αx x be an element in the centre Z(KG). Then for every g ∈ G we have

w = gwg⁻¹ = Σ_{x∈G} αx gxg⁻¹ = Σ_{y∈G} α_{g⁻¹yg} y.
The group elements form a basis of KG; comparing coefficients, we deduce that

αx = α_{g⁻¹xg} for all g, x ∈ G,

that is, the coefficients αx are constant on conjugacy classes. So we can write each element w ∈ Z(KG) in the form

w = Σ_C αC C̄,

where the sum is over the different conjugacy classes C of G. Hence the class sums span the centre Z(KG) as a K-vector space. □

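For example, for G = S3 the conjugacy classes are {id}, the three transpositions, and the two 3-cycles. Hence Z(KS3) has K-basis

id,  (1 2) + (1 3) + (2 3),  (1 2 3) + (1 3 2),

and in particular dimK Z(KS3) = 3.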
We now return to the Artin–Wedderburn decomposition of CG, and relate the
number of matrix blocks occurring there to the number of conjugacy classes of G.

Theorem 6.12. Let G be a finite group and let CG ≅ Mn1(C) × . . . × Mnk(C) be the Artin–Wedderburn decomposition of the group algebra CG. Then the following are equal:
(i) The number k of matrix blocks.
(ii) The number of conjugacy classes of G.
(iii) The number of simple CG-modules, up to isomorphism.
Proof. By Theorem 6.4, the numbers in (i) and (iii) are equal. In order to prove
the equality of the numbers in (i) and (ii) we consider the centres of the algebras.
The centre of CG has dimension equal to the number of conjugacy classes of G, by
Proposition 6.11. On the other hand, the centre of Mn1 (C) × . . . × Mnk (C) is equal
to Z(Mn1 (C)) × . . . × Z(Mnk (C)) and this has dimension equal to k, the number of
matrix blocks, by Exercise 3.16. □

Example 6.13. We consider the symmetric group S4 on four letters. As we have
mentioned, the conjugacy classes of symmetric groups are determined by the cycle
type. There are five cycle types for elements of S4 , we have the identity, 2-cycles, 3-
cycles, 4-cycles and products of two disjoint 2-cycles. Hence, by Theorem 6.12,
there are five matrix blocks in the Artin–Wedderburn decomposition of CS4 , at
least one of them has size 1. One can see directly that there is a unique solution
for expressing 24 = |S4 | as a sum of five squares where at least one of them is
equal to 1. Furthermore, we can see from Remark 6.6 and the fact that S4 /V4 is
not abelian, that the commutator subgroup of S4 is A4 . So CS4 has precisely two
1-dimensional simple modules (the trivial and the sign module), by Corollary 6.8.
From Theorem 6.4 (b) we get

24 = |S4| = 1² + 1² + n3² + n4² + n5²  with ni ≥ 2.

The only possible solution (up to labelling) is n3 = 2 and n4 = n5 = 3. So


CS4 has five simple modules, of dimensions 1, 1, 2, 3, 3, and the Artin–Wedderburn
decomposition has the form

CS4 ≅ C × C × M2(C) × M3(C) × M3(C).

Remark 6.14. The consequences of Maschke’s Theorem as discussed also hold if


the field K is not C but is some algebraically closed field whose characteristic does
not divide the order of G.

EXERCISES

6.3. Let G = Dn be the dihedral group of order 2n (that is, the symmetry group
of a regular n-gon). Determine the Artin–Wedderburn decomposition of the
group algebra CDn . (Hint: Apply Exercise 3.14.)
6.4. Let G = S3 be the symmetric group of order 6 and A = CG. We consider
the elements σ = (1 2 3) and τ = (1 2) in S3 . We want to show directly
that this group algebra is a direct product of three matrix algebras. (We know
from Example 6.7 that there should be two blocks of size 1 and one block of
size 2.)
(a) Let e± := (1/6)(1 ± τ)(1 + σ + σ²); show that e± are idempotents in the centre of A, and that e+e− = 0.
(b) Let f = (1/3)(1 + ω⁻¹σ + ωσ²), where ω ∈ C is a primitive 3rd root of unity. Let f1 := τf τ⁻¹. Show that f and f1 are orthogonal idempotents,
and that

f + f1 = 1A − e− − e+ .

(c) Show that span{f, f τ, τf, f1 } is an algebra, isomorphic to M2 (C).


(d) Apply the above calculations, and show directly that A is isomorphic to
a direct product of three matrix algebras.
6.5. Let Sn be the symmetric group on n letters, where n ≥ 2. It acts on the C-vector space V := {(v1, . . . , vn) ∈ C^n | v1 + . . . + vn = 0} by permuting the coordinates, that is, σ · (v1, . . . , vn) = (vσ(1), . . . , vσ(n)) for all σ ∈ Sn (and
extension by linearity). Show that V is a simple CSn -module.
6.6. (a) Let An be the alternating group on n letters. Consider the CSn -module V
from Exercise 6.5, and view it as a CAn -module, by restricting the action.
Show that V is also simple as a module for CAn .
(b) Show that the group A5 has five conjugacy classes.
(c) By applying (a) and (b), and Theorem 6.4, and using the known fact that A5′ = A5, find the dimensions of the simple CA5-modules and describe the Artin–Wedderburn decomposition of the group algebra CA5.
6.7. (a) Let G1 , G2 be two abelian groups of the same order. Explain why CG1
and CG2 have the same Artin–Wedderburn decomposition, and hence are
isomorphic as C-algebras.
(b) Let G be any non-abelian group of order 8. Show that there is a unique
possibility for the Artin–Wedderburn decomposition of CG.
6.8. Let G = {±1, ±i, ±j, ±k} be the quaternion group, as defined in
Remark 1.9.
(i) Show that the commutator subgroup of G is the cyclic group generated
by the element −1.
(ii) Determine the number of simple CG-modules (up to isomorphism) and
their dimensions.
(iii) Compare the Artin–Wedderburn decomposition of CG with that of the
group algebra of the dihedral group D4 of order 8 (that is, the symmetry
group of the square). Are the group algebras CG and CD4 isomorphic?
6.9. In each of the following cases, does there exist a finite group G such that the
Artin–Wedderburn decomposition of the group algebra CG has the following
form?
(i) M3 (C),
(ii) C × M2 (C),
(iii) C × C × M2 (C),
(iv) C × C × M3 (C).
Chapter 7. Indecomposable Modules

We have seen that for a semisimple algebra, any non-zero module is a direct sum of
simple modules (see Theorem 4.11). We investigate now how this generalizes when
we consider finite-dimensional modules. If the algebra is not semisimple, one needs
to consider indecomposable modules instead of just simple modules, and then one
might hope that any finite-dimensional module is a direct sum of indecomposable
modules. We will show that this is indeed the case. In addition, we will show that
a direct sum decomposition into indecomposable summands is essentially unique;
this is known as the Krull–Schmidt Theorem.
In Chap. 3 we have studied simple modules, which are building blocks for
arbitrary modules. They might be thought of as analogues of ‘elementary particles’,
and then indecomposable modules could be viewed as analogues of ‘molecules’.
Throughout this chapter, A is a K-algebra where K is a field.

7.1 Indecomposable Modules

In this section we define indecomposable modules and discuss several examples.


In addition, we will show that every finite-dimensional module is a direct sum of
indecomposable modules.
Definition 7.1. Let A be a K-algebra, and assume M is a non-zero A-module. Then
M is called indecomposable if it cannot be written as a direct sum M = U ⊕ V for
non-zero submodules U and V . Otherwise, M is called decomposable.
Remark 7.2.
(1) Every simple module is indecomposable.
(2) Consider the algebra A = K[X]/(X^t) for some t ≥ 2; recall that the A-modules are of the form Vα, with α the linear map of the underlying vector space V given by the action of the coset of X, and note that α^t = 0. Let Vα be the 2-dimensional A-module where α has matrix

(0 1; 0 0)

(written row by row)
with respect to some basis. It has a unique one-dimensional submodule


(spanned by the first basis vector). So it is not simple, and it is indecomposable,
since otherwise it would be a direct sum of two 1-dimensional submodules.
(3) Let A = K[X]/(X^t) where t ≥ 2, and let M = A as an A-module. By the submodule correspondence, every non-zero submodule of M is of the form (g)/(X^t), where g divides X^t but g is not a scalar multiple of X^t. That is, we can take g = X^r for some r < t. We see that any such submodule contains the element X^{t−1} + (X^t). This means that any two non-zero submodules of M have a non-zero intersection. Therefore M must be indecomposable.
(4) Let A be a semisimple K-algebra. Then an A-module is simple if and only
if it is indecomposable. Indeed, by (1) we know that a simple module is
indecomposable. For the converse, let M be an indecomposable A-module and
let U ⊆ M be a non-zero submodule; we must show that U = M. Since A
is a semisimple algebra, the module M is semisimple (see Theorem 4.11). So
by Theorem 4.3, the submodule U has a complement, that is, M = U ⊕ C for
some A-submodule C of M. But M is indecomposable and U ≠ 0, so C = 0 and then U = M.
This means that to study indecomposable modules, we should focus on algebras
which are not semisimple.
Suppose A is a K-algebra and M an A-module such that M = U ⊕ V with
U, V non-zero submodules of M. Then in particular, this is a direct sum of K-
vector spaces. In linear algebra, having a direct sum decomposition M = U ⊕ V
of a non-zero vector space M is the same as specifying a projection, ε say, which
maps V to zero, and is the identity on U . If U and V are A-submodules of M
then ε is an A-module homomorphism, for example by observing that ε = ι ◦ π
where π : M → U is the canonical surjection, and ι : U → M is the inclusion
homomorphism (see Example 2.22). Then U = ε(M) and V = (idM − ε)(M), and ε² = ε, and (idM − ε)² = idM − ε.
Recall that an idempotent of an algebra is an element ε of this algebra such that ε² = ε. We see that from the above direct sum decomposition of M as an A-module we get an idempotent ε in the endomorphism algebra EndA(M).
Lemma 7.3. Let A be a K-algebra, and let M be a non-zero A-module. Then M is
indecomposable if and only if the endomorphism algebra EndA (M) does not contain
any idempotents except 0 and idM .
Proof. Assume first that M is indecomposable. Suppose ε ∈ EndA(M) is an idempotent; then since idM = ε + (idM − ε) we have M = ε(M) + (idM − ε)(M). Moreover, this sum is direct: if x ∈ ε(M) ∩ (idM − ε)(M), then ε(m) = x = (idM − ε)(n) with m, n ∈ M, and then

x = ε(m) = ε²(m) = ε((idM − ε)(n)) = ε(n) − ε²(n) = 0.

Therefore, M = ε(M) ⊕ (idM − ε)(M) is a direct sum of A-submodules. Since


M is indecomposable, ε(M) = 0 or (idM − ε)(M) = 0. That is, ε = 0 or else
idM − ε = 0, which means ε = idM .
For the converse, if M = U ⊕ V , where U and V are submodules of M, then as
we have seen above, the projection ε : M → M with ε(u+v) = u for u ∈ U, v ∈ V
is an idempotent in EndA (M). By assumption, ε is zero or the identity, which means
that U = 0 or V = 0. Hence M is indecomposable. 

Example 7.4. Submodules, or factor modules of indecomposable modules need not
be indecomposable. As an example, consider the path algebra A := KQ of the
Kronecker quiver as in Example 1.13 (two arrows a, b from vertex 1 to vertex 2).

(1) We consider the A-submodule M := Ae1 = span{e1 , a, b} of A, and


U := span{a, b}. Each element in the basis {e1 , e2 , a, b} of A acts on U by
scalars, and hence U and every subspace of U is an A-submodule, and U is the
direct sum of non-zero A-submodules

U = span{a, b} = span{a} ⊕ span{b}.

However, M is indecomposable. We will prove this using the criterion of


Lemma 7.3. Let ε : M → M be an A-module homomorphism with ε2 = ε,
then we must show that ε = idM or ε = 0. We have ε(e1 M) = e1 ε(M) ⊆ e1 M,
but e1 M is spanned by e1 , so ε(e1 ) = λe1 for some scalar λ ∈ K. Next, note
a = ae1 and therefore

ε(a) = ε(ae1) = aε(e1 ) = a(λe1 ) = λ(ae1 ) = λa.

Similarly, since b = be1 we have ε(b) = λb. We have proved that ε = λ · idM .
Now, ε2 = ε and therefore λ2 = λ and hence λ = 0 or λ = 1. That is, ε is the
zero map, or the identity map.
(2) Let N be the A-module with basis {v1, v1′, v2} where e1 N has basis {v1, v1′} and
e2 N has basis {v2} and where the action of a and b is defined by

av1 = v2, av1′ = 0, av2 = 0 and bv1 = 0, bv1′ = v2, bv2 = 0.

We see that U := span{v2} is an A-submodule of N. The factor module
N/U has basis consisting of the cosets of v1, v1′, and from the definition,
each element in the basis {e1, e2, a, b} of A acts by scalar multiplication on
N/U. As before, this implies that N/U is a direct sum of two 1-dimensional
modules. On the other hand, we claim that N is indecomposable; to show
this we want to use Lemma 7.3, as in (1). So let ε ∈ EndA (N), then
ε(v2) = ε(e2 v2) = e2 ε(v2) ∈ e2 N and hence ε(v2) = λv2, for some λ ∈ K.
Moreover, ε(v1) = ε(e1 v1) = e1 ε(v1) ∈ e1 N, so ε(v1) = μv1 + ρv1′ for some
μ, ρ ∈ K. Using that bv1 = 0 and bv1′ = v2 this implies that ρ = 0. Then
λv2 = ε(v2) = ε(av1) = aε(v1) = μv2, hence λ = μ and ε(v1) = λv1.
Similarly, one shows ε(v1′) = λv1′, that is, we get ε = λ · idN. If ε^2 = ε then
ε = 0 or ε = idN.
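For readers who wish to check such endomorphism computations by machine, here is a minimal sketch in Python (using sympy). It encodes the action of the basis {e1, e2, a, b} on M = Ae1 from part (1) as 3 × 3 matrices (a choice of encoding, not part of the text), computes the general A-module endomorphism, and confirms that the only idempotents are 0 and the identity.

```python
import sympy as sp

# Action of the basis {e1, e2, a, b} of the Kronecker algebra on
# M = A e1 = span{e1, a, b}; columns are the images of e1, a, b.
E1 = sp.Matrix([[1, 0, 0], [0, 0, 0], [0, 0, 0]])
E2 = sp.Matrix([[0, 0, 0], [0, 1, 0], [0, 0, 1]])
A  = sp.Matrix([[0, 0, 0], [1, 0, 0], [0, 0, 0]])
B  = sp.Matrix([[0, 0, 0], [0, 0, 0], [1, 0, 0]])

# An A-module endomorphism is a matrix commuting with all four actions.
syms = sp.symbols('p0:9')
phi = sp.Matrix(3, 3, syms)
eqs = []
for X in (E1, E2, A, B):
    eqs.extend(phi * X - X * phi)        # entries of the commutator
sol = sp.solve(eqs, syms, dict=True)[0]
phi_gen = phi.subs(sol)
print(phi_gen)                           # a scalar multiple of the identity

# Imposing phi^2 = phi forces the scalar to be 0 or 1 (Lemma 7.3).
lam = phi_gen[0, 0]
print(sp.solve(sp.Eq(lam**2, lam), lam))  # [0, 1]
```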
We will now show that every non-zero finite-dimensional module can be written as
a direct sum of indecomposable modules.
Theorem 7.5. Let A be a K-algebra, and let M be a non-zero finite-dimensional A-
module. Then M can be expressed as a direct sum of finitely many indecomposable
A-submodules.
Proof. We use induction on the vector space dimension dimK M. If dimK M = 1,
then M is a simple A-module, hence is an indecomposable A-module, and we
are done. So let dimK M > 1. If M is indecomposable, there is nothing to do.
Otherwise, M = U ⊕ V with non-zero A-submodules U and V of M. Then both U
and V have strictly smaller dimension than M. Hence by the inductive hypothesis,
each of U and V can be written as a direct sum of finitely many indecomposable
A-submodules. Since M = U ⊕ V , it follows that M can be expressed as a direct
sum of finitely many indecomposable A-submodules. 

Remark 7.6. There is a more general version of Theorem 7.5. Recall from Sect. 3.3
that an A-module M is said to be of finite length if M has a composition series;
in this case, the length (M) of M is defined as the length of a composition series
of M (which is uniquely determined by the Jordan–Hölder theorem). If we replace
‘dimension’ in the above theorem and its proof by ‘length’ then everything works
the same, noting that proper submodules of a module of finite length have strictly
smaller lengths, by Proposition 3.17. We get therefore that any non-zero module
of finite length can be expressed as a direct sum of finitely many indecomposable
submodules.
There are many modules which cannot be expressed as a finite direct sum
of indecomposable modules. For example, let K = Q, and let R be the set
of real numbers. Then R is a vector space over Q, hence is a Q-module. The
indecomposable Q-modules are the 1-dimensional Q-vector spaces. Since R has
infinite dimension over Q, it cannot be a finite direct sum of indecomposable
Q-modules. This shows that the condition that the module is finite-dimensional (or
has finite length) in Theorem 7.5 cannot be removed.

7.2 Fitting’s Lemma and Local Algebras

We would like to have criteria which tell us when a given module is indecomposable.
Obviously Definition 7.1 is not so helpful; we would need to inspect all submodules
of a given module. One criterion is Lemma 7.3; in this section we will look for
further information from linear algebra.
Given a linear transformation of a finite-dimensional vector space, one gets a
direct sum decomposition of the vector space, in terms of the kernel and the image
of some power of the linear transformation.
Lemma 7.7 (Fitting’s Lemma I). Let K be a field. Assume V is a finite-
dimensional K-vector space, and θ : V → V is a linear transformation. Then
there is some n ≥ 1 such that the following hold:
(i) For all k ≥ 0 we have ker(θ n ) = ker(θ n+k ) and im(θ n ) = im(θ n+k ).
(ii) V = ker(θ n ) ⊕ im(θ n ).
Proof. This is elementary linear algebra, but since it is important, we give the proof.
(i) We have inclusions of subspaces

ker(θ ) ⊆ ker(θ 2 ) ⊆ ker(θ 3 ) ⊆ . . . ⊆ V (7.1)

and

V ⊇ im(θ ) ⊇ im(θ 2 ) ⊇ im(θ 3 ) ⊇ . . . (7.2)

Since V is finite-dimensional, the ascending chain (7.1) cannot contain


infinitely many strict inequalities, so there exists some n1 ≥ 1 such that
ker(θ n1 ) = ker(θ n1 +k ) for all k ≥ 0. Similarly, for the descending chain (7.2)
there is some n2 ≥ 1 such that im(θ n2 ) = im(θ n2 +k ) for all k ≥ 0. Setting n as the
maximum of n1 and n2 proves (i).
(ii) We show ker(θ n ) ∩ im(θ n ) = 0 and ker(θ n ) + im(θ n ) = V . Let
x ∈ ker(θ n ) ∩ im(θ n ), that is, θ n (x) = 0 and x = θ n (y) for some y ∈ V .
We substitute, and then we have that 0 = θ n (x) = θ 2n (y). This implies
y ∈ ker(θ 2n ) = ker(θ n ) by part (i), and thus x = θ n (y) = 0. Hence
ker(θ n ) ∩ im(θ n ) = 0. Now, by the rank-nullity theorem we have

dimK V = dimK ker(θ n ) + dimK im(θ n )


= dimK ker(θ n ) + dimK im(θ n ) − dimK (ker(θ n ) ∩ im(θ n ))
= dimK (ker(θ n ) + im(θ n )).

Hence the sum ker(θ n )+im(θ n ) is equal to V since it is a subspace whose dimension
is equal to dimK V . 
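As an illustration, the following Python sketch (using sympy; the map θ is a sample choice, a nilpotent Jordan block together with an invertible block) shows the kernels and images stabilising, and checks the resulting decomposition V = ker(θ^n) ⊕ im(θ^n).

```python
import sympy as sp

# theta = J_3(0) ⊕ U with U invertible; Fitting's Lemma predicts that
# kernels and images stabilise (here at n = 3) and that V = ker ⊕ im.
J = sp.zeros(3, 3)
J[1, 0] = J[2, 1] = 1                     # nilpotent Jordan block J_3(0)
U = sp.Matrix([[1, 1], [0, 1]])           # invertible 2x2 block
theta = sp.diag(J, U)

for n in range(1, 6):
    Tn = theta**n
    print(n, Tn.rank(), len(Tn.nullspace()))   # ranks 4,3,2,2,2

T = theta**3
vectors = T.nullspace() + T.columnspace()      # bases of kernel and image
print(sp.Matrix.hstack(*vectors).rank())       # 5, so the sum is direct
```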

Corollary 7.8. Let A be a K-algebra. Assume M is a finite-dimensional A-module,


and θ : M → M is an A-module homomorphism. Then there is some n ≥ 1 such
that M = ker(θ n ) ⊕ im(θ n ), a direct sum of A-submodules of M.
Proof. This follows directly from Lemma 7.7. Indeed, θ is in particular a linear
transformation. The map θ n is an A-module homomorphism and therefore its kernel
and its image are A-submodules of M. 

Corollary 7.9 (Fitting’s Lemma II). Let A be a K-algebra and M a non-zero
finite-dimensional A-module. Then the following statements are equivalent:
(i) M is an indecomposable A-module.
(ii) Every homomorphism θ ∈ EndA (M) is either an isomorphism, or is nilpotent.
Proof. We first assume that statement (i) holds. By Corollary 7.8, for every
θ ∈ EndA (M) we have M = ker(θ n ) ⊕ im(θ n ) as A-modules, for some n ≥ 1.
But M is indecomposable by assumption, so we conclude that ker(θ n ) = 0 or
im(θ n ) = 0. In the second case we have θ n = 0, that is, θ is nilpotent. In the
first case, θ n and hence also θ are injective, and moreover, M = im(θ n ), therefore
θ n and hence θ are surjective. So θ is an isomorphism.
Conversely, suppose that (ii) holds. To show that M is indecomposable, we
apply Lemma 7.3. So let ε be an endomorphism of M such that ε2 = ε. By
assumption, ε is either nilpotent, or is an isomorphism. In the first case ε = 0
since ε = ε2 = . . . = εn for all n ≥ 1. In the second case, im(ε) = M
and ε is the identity on M: If m ∈ M then m = ε(y) for some y ∈ M, hence
ε(m) = ε2 (y) = ε(y) = m. 
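A concrete instance can be checked numerically. In the following Python sketch (using numpy; the sample module is Vα for A = K[X]/(X^3) with α a nilpotent Jordan block, and the two endomorphisms are polynomials in α, cf. Exercise 8.1), one endomorphism is invertible and the other nilpotent, as Corollary 7.9 predicts for an indecomposable module.

```python
import numpy as np

# V_alpha for A = K[X]/(X^3), with alpha the Jordan block J_3(0);
# its endomorphisms are polynomials in alpha (cf. Exercise 8.1).
alpha = np.diag(np.ones(2), -1)        # 3x3, ones below the diagonal
I = np.eye(3)

theta1 = 2*I + alpha + 3*(alpha @ alpha)   # constant term 2: invertible
theta2 = alpha - 5*(alpha @ alpha)         # constant term 0: nilpotent

print(abs(np.linalg.det(theta1)) > 1e-9)                  # True
print(np.allclose(np.linalg.matrix_power(theta2, 3), 0))  # True
```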

Remark 7.10. Corollary 7.9 shows that the endomorphism algebra E := EndA (M)
of an indecomposable A-module M has the following property: if a ∈ E is not
invertible, then 1E − a ∈ E is invertible. In fact, if a ∈ E is not invertible then it is
nilpotent, by Corollary 7.9, say a n = 0. Then we have

(1E + a + a 2 + . . . + a n−1 )(1E − a) = 1E = (1E − a)(1E + a + a 2 + . . . + a n−1 ),

that is, 1E − a is invertible.


Note that in a non-commutative algebra one has to be slightly careful when
speaking of invertible elements. More precisely, one should speak of elements which
have a left inverse or a right inverse, respectively. If for some element both a left
inverse and a right inverse exist, then they coincide (since invertible elements form
a group).
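The geometric series computation above is easy to verify for a sample nilpotent matrix; the following short Python sketch (using numpy) does this.

```python
import numpy as np

# If a^n = 0 then (1 + a + ... + a^{n-1}) is a two-sided inverse
# of (1 - a); here a is a sample nilpotent matrix with a^3 = 0.
a = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
I = np.eye(3)
geo = I + a + a @ a

print(np.allclose(geo @ (I - a), I))   # True
print(np.allclose((I - a) @ geo, I))   # True
```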
Algebras (or more generally rings) with the property in Remark 7.10 appear
naturally in many places in mathematics. We now study them in some more detail.
Theorem 7.11. Assume A is any K-algebra, then the following are equivalent:
(i) The set N of elements x ∈ A which do not have a left inverse is a left ideal of A.
(ii) For all a ∈ A, at least one of a and 1A − a has a left inverse in A.
We observe the following: Let N be as in (i). If x ∈ N and a ∈ A then ax cannot


have a left inverse in A, since otherwise x would have a left inverse. That is, we
have ax ∈ N, so that AN ⊆ N.
Proof. Assume (ii) holds, we show that then N is a left ideal of A. By the above
observation, we only have to show that N is an additive subgroup of A. Clearly
0 ∈ N. Now let x, y ∈ N, assume (for a contradiction) that x − y is not in N. Then
there is some a ∈ A such that a(x − y) = 1A , so that

(−a)y = 1A − ax.

We know that ax does not have a left inverse, and therefore, using (ii) we deduce that
(−a)y has a left inverse. But then y has a left inverse, and y ∈ N, a contradiction.
We have now shown that N is a left ideal of A.
Now assume (i) holds, we prove that this implies (ii). Assume a ∈ A does not
have a left inverse in A. We have to show that then 1A − a has a left inverse in A.
If this is false then both a and 1A − a belong to N. By assumption (i), N is closed
under addition, therefore 1A ∈ N, which is not true. This contradiction shows that
1A − a must belong to N. 

Definition 7.12. A K-algebra A is called a local algebra (or just local) if it satisfies
the equivalent conditions from Theorem 7.11.
Exercise 7.1. Let A be a local K-algebra. Show that the left ideal N in Theo-
rem 7.11 is a maximal left ideal of A, and that it is the only maximal left ideal
of A.
Remark 7.13. Let A be a local K-algebra. By Exercise 7.1 the left ideal N in
Theorem 7.11 is then precisely the Jacobson radical as defined and studied in
Sect. 4.3 (see Definition 4.21). In particular, if A is finite-dimensional then this
unique maximal left ideal is even a two-sided ideal (see Theorem 4.23).
Lemma 7.14.
(a) Assume A is a local K-algebra. Then the only idempotents in A are 0 and 1A .
(b) Assume A is a finite-dimensional algebra. Then A is local if and only if the only
idempotents in A are 0 and 1A .
Proof. (a) Let ε ∈ A be an idempotent. If ε has no left inverse, then by
Theorem 7.11 we know that 1A − ε has a left inverse, say a(1A − ε) = 1A for
some a ∈ A. Then it follows that ε = 1A ε = a(1A − ε)ε = aε − aε2 = 0.
On the other hand, if ε has a left inverse, say bε = 1A for some b ∈ A, then
ε = 1A ε = bε2 = bε = 1A .
(b) We must show the converse of (a). Assume that 0 and 1A are the only
idempotents in A. We will verify condition (ii) of Theorem 7.11, that is, let a ∈ A,
then we show that at least one of a and 1A − a has a left inverse in A. Consider the
map θ : A → A defined by θ (x) := xa. This is an A-module homomorphism if we
view A as a left A-module. By Corollary 7.8 we have A = ker(θ n ) ⊕ im(θ n ) for


some n ≥ 1. So we have a unique expression

1A = ε1 + ε2 with ε1 ∈ ker(θ n ), ε2 ∈ im(θ n ).

By Lemma 5.1, the εi are idempotents. By assumption, ε1 = 0 or ε1 = 1A .


Furthermore, by Lemma 5.1 we have A = Aε1 ⊕ Aε2 . If ε1 = 0 then
A = im(θ n ) = Aa n and then a has a left inverse in A (since 1A = ba n for
some b ∈ A). Otherwise, ε1 = 1A and then A = ker(θ n ), that is, a n = 0. A
computation as in Remark 7.10 then shows that 1A − a has a left inverse, namely
1A + a + a 2 + . . . + a n−1 . 

We will now investigate some examples.
Example 7.15.
(1) Let A = K, the one-dimensional K-algebra. Then for every a ∈ A, at least one
of a or 1A − a has a left inverse in A and hence A is local, by Theorem 7.11 and
Definition 7.12. The same argument works to show that every division algebra
A = D over the field K (see Definition 1.7) is local.
(2) Let A = Mn (K) where n ≥ 2. Let a := E11 , then a and 1A − a do not have
a left inverse. Hence A is not local. (We have M1 (K) = K, which is local, by
(1).)
(3) Consider the factor algebra A = K[X]/(f ) for a non-constant polynomial
f ∈ K[X]. Then A is a local algebra if and only if f = g m for some m ≥ 1,
where g ∈ K[X] is an irreducible polynomial. The proof of this is Exercise 7.6.
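The claim in (2) can be seen at once for n = 2: in the following Python sketch (using numpy), both a = E11 and 1A − a are singular, so neither has an inverse; moreover a is an idempotent different from 0 and 1A, which also rules out locality by Lemma 7.14.

```python
import numpy as np

# Example 7.15(2) for n = 2: a = E_11 and 1 - a = E_22 are both singular.
a = np.array([[1.0, 0.0], [0.0, 0.0]])
I = np.eye(2)
print(np.linalg.det(a), np.linalg.det(I - a))   # 0.0 0.0: no inverses
print(np.allclose(a @ a, a))                    # True: a is an idempotent
```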
Exercise 7.2. Assume A = KQ, where Q is a quiver with no oriented cycles. Show
that A is local if and only if Q consists just of one vertex.
For finite-dimensional modules, we can now characterize indecomposability in
terms of local endomorphism algebras.
Corollary 7.16 (Fitting’s Lemma III). Let A be a K-algebra and let M be a non-
zero finite-dimensional A-module. Then M is an indecomposable A-module if and
only if the endomorphism algebra EndA (M) is a local algebra.
Proof. Let E := EndA (M). Note that E is finite-dimensional (it is contained in the
space of all K-linear maps M → M, which is finite-dimensional by elementary
linear algebra). By Lemma 7.3, the module M is indecomposable if and only if the
algebra E does not have idempotents other than 0 and idM . By Lemma 7.14 this is
true if and only if E is local. 

Remark 7.17. All three versions of Fitting’s Lemma have a slightly more general
version, if one replaces ‘finite dimensional’ by ‘finite length’, see Remark 7.6.
The assumption that M is finite-dimensional, or has finite length, cannot be
omitted. For instance, consider the polynomial algebra K[X] as a K[X]-module.
Multiplication by X defines a K[X]-module homomorphism θ : K[X] → K[X].
For every n ∈ N we have ker(θ n ) = 0 and im(θ n ) = (Xn ), the ideal generated
by Xn . But K[X] = ker(θ n ) ⊕ im(θ n ). So Lemma 7.7 fails for A. Exercise 7.11
contains further illustrations.

7.3 The Krull–Schmidt Theorem

We have seen in Theorem 7.5 that every non-zero finite-dimensional module


can be decomposed into a finite direct sum of indecomposable modules. The
fundamental Krull–Schmidt Theorem states that such a decomposition is unique
up to isomorphism and up to reordering indecomposable summands. This is one of
the most important results in representation theory.
Theorem 7.18 (Krull–Schmidt Theorem). Let A be a K-algebra, and let M
be a non-zero finite-dimensional A-module. Suppose there are two direct sum
decompositions

M1 ⊕ . . . ⊕ Mr = M = N1 ⊕ . . . ⊕ Ns

of M into indecomposable A-submodules Mi (1 ≤ i ≤ r) and Nj (1 ≤ j ≤ s).


Then r = s, and there is a permutation σ such that Mi ≅ Nσ(i) for all i = 1, . . . , r.
Before starting with the proof, we introduce some notation we will use for the
canonical homomorphisms associated to a direct sum decomposition, similar to the
notation used in Lemma 5.6. Let μi : M → Mi be the homomorphism defined
by μi(m1 + . . . + mr) = mi, and let ιi : Mi → M be the inclusion map. Then
μi ◦ ιi = idMi and hence if ei := ιi ◦ μi : M → M then ei is the projection with
image Mi and kernel the direct sum of all Mj for j ≠ i. Then the ei are orthogonal
idempotents in EndA (M) with idM = e1 + . . . + er.
Similarly let νt : M → Nt be the homomorphism defined by
νt (n1 + . . . + ns ) = nt , and let κt : Nt → M be the inclusion. Then νt ◦ κt
is the identity map of Nt , and if ft := κt ◦ νt : M → M then ft is the projection
with image Nt and kernel the direct sum of all Nj for j ≠ t. In addition, the ft are
orthogonal idempotents in EndA (M) whose sum is the identity idM .
In the proof below we sometimes identify Mi with ιi (Mi ) and Nt with κt (Nt ), to
ease notation.
Proof. We use induction on r, the number of summands in the first decomposition.
When r = 1 we have M1 = M. This means that M is indecomposable, and we
conclude that s = 1 and N1 = M = M1 . Assume now that r > 1.
(1) We will find an isomorphism between N1 and some Mi . We have

idN1 = ν1 ◦ κ1 = ν1 ◦ idM ◦ κ1 = (ν1 ◦ e1 ◦ κ1) + . . . + (ν1 ◦ er ◦ κ1).    (∗)
The module N1 is indecomposable and finite-dimensional, so by Corollary 7.9,


every endomorphism of N1 is either nilpotent, or is an isomorphism. Assume for a
contradiction that each summand in the above sum (∗) is nilpotent, and hence does
not have a left inverse. We have that EndA (N1 ) is a local algebra, by Corollary 7.16.
Hence by Theorem 7.11, the set of elements with no left inverse is closed under
addition, so it follows that the sum also has no left inverse. But the sum is the
identity of N1 which has a left inverse, a contradiction.
Hence at least one of the summands in (∗) is an isomorphism, and we may relabel
the Mi and assume that φ is an isomorphism N1 → M1 , where

φ := ν1 ◦ e1 ◦ κ1 = ν1 ◦ ι1 ◦ μ1 ◦ κ1 .

Now we apply Lemma 2.30 with M = M1 , and N = N′ = N1 , and j = μ1 ◦ κ1 and
π = ν1 ◦ ι1 . We obtain

M1 = im(μ1 ◦ κ1 ) ⊕ ker(ν1 ◦ ι1 ).

Now, since M1 is indecomposable and μ1 ◦κ1 is non-zero we have M1 = im(μ1 ◦κ1 )


and ker(ν1 ◦ ι1 ) = 0. Hence the map μ1 ◦ κ1 : N1 → M1 is surjective. It is also
injective (since φ = ν1 ◦ ι1 ◦ μ1 ◦ κ1 is injective). This shows that μ1 ◦ κ1 is an
isomorphism N1 → M1 .
(2) As a tool for the inductive step, we construct an A-module isomorphism
γ : M → M such that γ (N1 ) = M1 and γ (Nj ) = Nj for 2 ≤ j ≤ s. Define

γ := idM − f1 + e1 ◦ f1 .

We first show that γ : M → M is an isomorphism. It suffices to show that γ


is injective, by dimensions. Let γ (x) = 0 for some x ∈ M. Using that f1 is an
idempotent we have

0 = f1 (0) = (f1 ◦ γ )(x) = f1 (x) − f1^2(x) + (f1 ◦ e1 ◦ f1 )(x) = (f1 ◦ e1 ◦ f1 )(x).

By definition, f1 ◦ e1 ◦ f1 = κ1 ◦ ν1 ◦ ι1 ◦ μ1 ◦ κ1 ◦ ν1 = κ1 ◦ φ ◦ ν1 with the


isomorphism φ : N1 → M1 from (1). Since κ1 and φ are injective, it follows from
(f1 ◦ e1 ◦ f1 )(x) = 0 that ν1 (x) = 0. Then also f1 (x) = (κ1 ◦ ν1 )(x) = 0 and this
implies x = γ (x) = 0, as desired.
We now show that γ (N1 ) = M1 and γ (Nj ) = Nj for 2 ≤ j ≤ s. From (1) we
have the isomorphism μ1 ◦ κ1 : N1 → M1 , and this is viewed as a homomorphism
e1 ◦ f1 : M → M, by e1 ◦ f1 = ι1 ◦ (μ1 ◦ κ1 ) ◦ ν1 , noting that ν1 is the identity
on N1 and ι1 is the identity on M1 . Furthermore, idM − f1 maps N1 to zero, and in
total we see that γ (N1 ) = M1 .
Moreover, if x ∈ Nj and j ≥ 2 then x = fj (x) and f1 ◦ fj = 0 and it follows
that γ (x) = x. This proves γ (Nj ) = Nj .
(3) We complete the proof of the Krull–Schmidt Theorem. Note that an isomorphism
takes a direct sum decomposition to a direct sum decomposition, see Exercise 7.3.
By (2) we have

M = γ (M) = γ (N1 ) ⊕ γ (N2 ) ⊕ . . . ⊕ γ (Ns ) = M1 ⊕ N2 ⊕ . . . ⊕ Ns .

Using the isomorphism theorem, we have

M2 ⊕ . . . ⊕ Mr ≅ M/M1 = (M1 ⊕ N2 ⊕ . . . ⊕ Ns )/M1 ≅ N2 ⊕ . . . ⊕ Ns .

To apply the induction hypothesis, we need two direct sum decompositions of the
same module. Let M′ := M2 ⊕ . . . ⊕ Mr . We have obtained an isomorphism
ψ : M′ → N2 ⊕ . . . ⊕ Ns . Let Ni′ := ψ^{−1}(Ni ); this is a submodule of M′ , and we
have, again by Exercise 7.3, the direct sum decomposition

M′ = N2′ ⊕ . . . ⊕ Ns′ .

By the induction hypothesis, r − 1 = s − 1 and there is a permutation σ of
{1, 2, 3, . . . , r} with Mi ≅ N′σ(i) ≅ Nσ(i) for i ≥ 2 (and σ(1) = 1). This completes
the proof.
the proof. 

EXERCISES

7.3. Let A be a K-algebra, and M a non-zero A-module. Assume M = U1 ⊕ . . . ⊕ Ur


is a direct sum of A-submodules, and assume that γ : M → M is an A-
module isomorphism. Show that then M = γ (M) = γ (U1 ) ⊕ . . . ⊕ γ (Ur ).
7.4. Let Tn (K) be the K-algebra of upper triangular n × n-matrices. In the natural
Tn (K)-module K n we consider for 0 ≤ i ≤ n the submodules

Vi := {(λ1 , . . . , λn )t | λj = 0 for all j > i} = span{e1 , . . . , ei },

where ei denotes the i-th standard basis vector. Recall from Exercise 2.14
that V0 , V1 , . . . , Vn are the only Tn (K)-submodules of K n , and that
Vi,j := Vi /Vj (for 0 ≤ j < i ≤ n) are n(n + 1)/2 pairwise non-isomorphic
Tn (K)-modules.
(a) Determine the endomorphism algebra EndTn (K) (Vi,j ) for all
0 ≤ j < i ≤ n.
(b) Deduce that each Vi,j is an indecomposable Tn (K)-module.
7.5. Recall that for any K-algebra A and every element a ∈ A the map
θa : A → A, b → ba, is an A-module homomorphism.
We consider the algebra A = Tn (K) of upper triangular n × n-matrices.


Determine for each of the following elements a ∈ Tn (K) the minimal d ∈ N
such that ker(θa^d) = ker(θa^{d+j}) and im(θa^d) = im(θa^{d+j}) for all j ∈ N.
Moreover, give explicitly the decomposition Tn (K) = ker(θa^d) ⊕ im(θa^d)
(which exists by Fitting's Lemma, see Corollary 7.8):
(i) a = E11 ;
(ii) a = E12 + E23 + . . . + En−1,n ;
(iii) a = E1n + E2n + . . . + Enn .
7.6. (a) Let A = K[X]/(f ) with a non-constant polynomial f ∈ K[X]. Show
that A is a local algebra if and only if f = g^m for some irreducible
polynomial g ∈ K[X] and some m ∈ N (up to multiplication by a non-
zero scalar in K).
(b) Which of the following algebras are local?
(i) Q[X]/(Xn − 1), where n ≥ 2;
(ii) Zp [X]/(Xp − 1) where p is a prime number;
(iii) K[X]/(X3 − 6X2 + 12X − 8).
7.7. Which of the following K-algebras A are local?
(i) A = Tn (K), the algebra of upper triangular n × n-matrices;
(ii) A = {a = (aij ) ∈ Tn (K) | aii = ajj for all i, j }.

7.8. For a field K let K[[X]] := {∑_{i≥0} λi X^i | λi ∈ K} be the set of formal power
series. On K[[X]] define the following operations:

– addition: (∑_i λi X^i) + (∑_i μi X^i) = ∑_i (λi + μi) X^i,
– scalar multiplication: λ · (∑_i λi X^i) = ∑_i λ λi X^i,
– multiplication: (∑_i λi X^i) · (∑_j μj X^j) = ∑_k (∑_{i+j=k} λi μj) X^k.

(a) Verify that K[[X]] with these operations becomes a commutative


K-algebra.
(b) Determine the invertible elements in K[[X]].
(c) Show that K[[X]] is a local algebra.
7.9. Let K be a field and A a K-algebra of finite length (as an A-module).
Show that if A is a local algebra then A has only one simple module, up
to isomorphism.
7.10. For a field K consider the path algebra KQ of the Kronecker quiver
(two arrows a, b from vertex 1 to vertex 2).

For λ ∈ K we consider a 2-dimensional K-vector space Vλ = span{v1 , v2 }.
This becomes a KQ-module via the representation Θλ : KQ → EndK (Vλ ),
where

$$\Theta_\lambda(e_1) = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad
\Theta_\lambda(e_2) = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \quad
\Theta_\lambda(a) = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad
\Theta_\lambda(b) = \begin{pmatrix} 0 & 0 \\ \lambda & 0 \end{pmatrix}.$$

(a) Show that for λ ≠ μ the KQ-modules Vλ and Vμ are not isomorphic.
(b) Show that for all λ ∈ K the KQ-module Vλ is indecomposable.
7.11. Let K[X] be the polynomial algebra.
(a) Show that K[X] is indecomposable as a K[X]-module.
(b) Show that the equivalence in the second version of Fitting’s Lemma
(Corollary 7.9) does not hold, by giving a K[X]-module endomorphism
of K[X] which is neither invertible nor nilpotent.
(c) Show that the equivalence in the third version of Fitting’s Lemma
(Corollary 7.16) also does not hold for K[X].
7.12. (a) By applying the Artin–Wedderburn theorem, characterize which
semisimple K-algebras are local algebras.
(b) Let G be a finite group such that the group algebra KG is semisimple
(that is, the characteristic of K does not divide |G|, by Maschke’s
theorem). Deduce that KG is not a local algebra, except for the group
G with one element.
Chapter 8
Representation Type

We have seen in the previous chapter that understanding the finite-dimensional


modules of an algebra reduces to understanding all indecomposable modules.
However, we will see some examples which show that this may be too ambitious
in general: An algebra can have infinitely many indecomposable modules, and
classifying them appears not to be feasible. Algebras can roughly be divided into
those that have only finitely many indecomposable modules (finite representation
type), and those that have infinitely many indecomposable modules (infinite repre-
sentation type). In this chapter we will introduce the notion of representation type,
consider some classes of algebras, and determine whether they have finite or infinite
representation type.

8.1 Definition and Examples

We have seen that any finite-dimensional module of a K-algebra A can be expressed


as a direct sum of indecomposable modules, and moreover, the indecomposable
summands occurring are unique up to isomorphism (see Theorem 7.18). Therefore,
for a given algebra A, it is natural to ask how many indecomposable A-modules it
has, and if possible, to give a complete description up to isomorphism. The notion
for this is the representation type of an algebra, we introduce this now and give
several examples, and some reduction methods.
Throughout K is a field.
Definition 8.1. A K-algebra A has finite representation type if there are only
finitely many finite-dimensional indecomposable A-modules, up to isomorphism.
Otherwise, A is said to be of infinite representation type.


Remark 8.2.
(1) One sometimes alternatively defines finite representation type for an algebra A
by requesting that there are only finitely many indecomposable A-modules of
finite length, up to isomorphism.
For a finite-dimensional algebra A the two versions are the same: That
is, an A-module has finite length if and only if it is finite-dimensional. This
follows since in this case every simple A-module is finite-dimensional, see
Corollary 3.20.
(2) Isomorphic algebras have the same representation type. To see this, let
Φ : A → B be an isomorphism of K-algebras. Then every B-module
M becomes an A-module by setting a · m = Φ(a)m and conversely,
every A-module becomes a B-module by setting b · m = Φ^{−1}(b)m.
This correspondence preserves dimensions of modules and it preserves
isomorphisms, and moreover indecomposable A-modules correspond to
indecomposable B-modules. We shall use this tacitly from now on.
Example 8.3.
(1) Every semisimple K-algebra has finite representation type.
In fact, suppose A is a semisimple K-algebra. By Remark 7.2, an A-module
M is indecomposable if and only if it is simple. By Remark 4.10, the algebra
A = S1 ⊕ . . . ⊕ Sk is a direct sum of finitely many simple A-modules. In
particular, A has finite length as an A-module and then every indecomposable
(that is, simple) A-module is isomorphic to one of the finitely many A-modules
S1 , . . . , Sk , by Theorem 3.19. In particular, there are only finitely many finite-
dimensional indecomposable A-modules, and A has finite representation type.
Note that in this case it may happen that the algebra A is not finite-
dimensional, for example it could just be an infinite-dimensional division
algebra.
(2) The polynomial algebra K[X] has infinite representation type.
To see this, take some integer m ≥ 2 and consider the (finite-dimensional)
K[X]-module Wm = K[X]/(Xm ). This is indecomposable: In Remark 7.2
we have seen that it is indecomposable as a module for the factor algebra
K[X]/(Xm ), and the argument we gave works here as well: every non-zero
K[X]-submodule of Wm must contain the coset X^{m−1} + (X^m). So Wm cannot
be expressed as a direct sum of two non-zero submodules. The module Wm has
dimension m, and hence different Wm are not isomorphic.
(3) For any n ∈ N the algebra A := K[X]/(Xn ) has finite representation type.
Recall that a finite-dimensional A-module is of the form Vα where V is a
finite-dimensional K-vector space, and α is a linear transformation of V such
that α n = 0 (here α describes the action of the coset of X). Since α n = 0,
the only eigenvalue of α is 0, so the field K contains all eigenvalues of α. This
means that there exists a Jordan canonical form for α over K. That is, V is the
direct sum V = V1 ⊕ . . . ⊕ Vr , where each Vi is invariant under α, and each Vi
has a basis such that the matrix of α on Vi is a Jordan block matrix of the form

$$\begin{pmatrix} 0 & & & & \\ 1 & 0 & & & \\ & 1 & \ddots & & \\ & & \ddots & \ddots & \\ & & & 1 & 0 \end{pmatrix}.$$

Since Vi is invariant under α, it is an A-submodule. So if Vα is indecomposable


then there is only one such direct summand. The matrix of α on Vα is then
just one such Jordan block. Such a module Vα is indeed indecomposable (see
Exercise 8.1). The Jordan block has size m with m ≤ n since α n = 0, and each
m with 1 ≤ m ≤ n occurs. The summands for different m are not isomorphic,
since they have different dimensions. This shows that there are precisely n
indecomposable A-modules, up to isomorphism.
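The decomposition described in (3) can be computed explicitly; the following Python sketch (using sympy, with a sample nilpotent matrix α for A = K[X]/(X^3)) finds the Jordan form and hence the indecomposable summands. Note that sympy places the 1s above the diagonal, the transpose of the convention drawn above.

```python
import sympy as sp

# A sample K[X]/(X^3)-module: alpha is nilpotent with alpha^3 = 0.
alpha = sp.Matrix([[0, 1, 1, 0],
                   [0, 0, 0, 1],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(alpha**3 == sp.zeros(4, 4))   # True, so X^3 acts as zero

P, J = alpha.jordan_form()
print(J)   # one block of size 3 and one of size 1: two indecomposables
```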
More generally, we can determine the representation type of any algebra
K[X]/(f ) when f ∈ K[X] is a non-constant polynomial. We consider a module
of the form Vα where α is a linear transformation with f (α) = 0. The following
exercise shows that if the minimal polynomial of α is a power g t of some irreducible
polynomial g ∈ K[X] and Vα is a cyclic A-module (see Definition 4.4), then Vα is
indecomposable. A solution can be found in the appendix.
Exercise 8.1. Assume A = K[X]/(f ), where f ∈ K[X] is a non-constant
polynomial, and let Vα be a finite-dimensional A-module.
(a) Assume that Vα is a cyclic A-module, and assume the minimal polynomial
of α is equal to g t , where g is an irreducible polynomial in K[X].
(i) Explain why the map T : Vα → Vα , v → α(v), is an A-module
homomorphism, and why this also has minimal polynomial g t .
(ii) Let φ : Vα → Vα be an A-module homomorphism. Show that φ is a
polynomial in T , that is, φ can be written in the form φ = ∑_i ai T^i with
ai ∈ K.
(iii) Show that if φ 2 = φ then φ = 0 or φ = idV . Deduce that Vα is an
indecomposable A-module.
(b) Suppose that with respect to some basis, α has matrix Jn (λ), the Jordan block
of size n with eigenvalue λ. Explain why the previous implies that Vα is
indecomposable.
Let A = K[X]/(f ) as above and assume that Vα is a cyclic A-module, generated
by a vector b, say. We recall from linear algebra how to describe such a module. Let
h ∈ K[X] be the minimal polynomial of α; we can write it as h = X^d + ∑_{i=0}^{d−1} ci X^i
where ci ∈ K. Then the set of vectors {b, α(b), . . . , α^{d−1}(b)} is a K-basis of Vα .
The matrix of α with respect to this basis is given explicitly by

$$C(h) := \begin{pmatrix}
0      & \dots &        & 0      & -c_0    \\
1      & 0     &        & 0      & -c_1    \\
0      & 1     & \ddots & \vdots & -c_2    \\
\vdots &       & \ddots & 0      & \vdots  \\
0      & \dots & 0      & 1      & -c_{d-1}
\end{pmatrix}.$$

This is sometimes called ‘the companion matrix’ of h. More generally, one


defines the companion matrix for an arbitrary (not necessarily irreducible) monic
polynomial h in the same way. Then the minimal polynomial of C(h) is equal to h,
by linear algebra.
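The following Python sketch (using sympy; the monic polynomial h = (X² + 1)² over Q is a sample choice) builds C(h) exactly as above and confirms that no proper divisor of h annihilates it, so the minimal polynomial of C(h) is h.

```python
import sympy as sp

X = sp.symbols('X')
h = sp.expand((X**2 + 1)**2)                    # sample h = g^t, g = X^2 + 1
c = sp.Poly(h, X).all_coeffs()[::-1][:-1]       # c_0, ..., c_{d-1}
d = len(c)

C = sp.zeros(d, d)
for i in range(d - 1):
    C[i + 1, i] = 1                             # 1's on the subdiagonal
for i in range(d):
    C[i, d - 1] = -c[i]                         # last column: -c_i

print(sp.factor(C.charpoly(X).as_expr()))       # (X**2 + 1)**2
gC = C**2 + sp.eye(d)                           # g(C)
print(gC == sp.zeros(d, d), gC**2 == sp.zeros(d, d))   # False True
```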
We can now describe all indecomposable modules for the algebras of the form
K[X]/(f ).
Lemma 8.4. Assume A = K[X]/(f ), where f ∈ K[X] is a non-constant
polynomial.
(a) The finite-dimensional indecomposable A-modules are, up to isomorphism,
precisely given by Vα , where α has matrix C(g t ) with respect to a suitable
basis, and where g ∈ K[X] is an irreducible polynomial such that g t divides f
for some t ∈ N.
(b) A has finite representation type.
Proof. (a) Assume Vα is an A-module such that α has matrix C(g t ) with respect
to some basis, and where g is an irreducible polynomial such that g t divides f .
Then from the shape of C(g t ) we see that Vα is a cyclic A-module, generated by the
first vector in this basis. As remarked above, the minimal polynomial of C(g t ), and
hence of α, is equal to g t . By Exercise 8.1 we then know that Vα is indecomposable.
For the converse, take a finite-dimensional indecomposable A-module Vα , so
V is a finite-dimensional K-vector space and α is a linear transformation with
f (α) = 0. We apply two decomposition theorems from linear algebra, a good
reference for these is the book by Blyth and Robertson in this series.1 First, the
Primary Decomposition Theorem (see for example §3 in the book by Blyth and
Robertson) shows that any non-zero A-module can be written as direct sum of
submodules, such that the restriction of α to a summand has minimal polynomial of
the form g m with g an irreducible polynomial and g m divides f . Second, the Cyclic
Decomposition Theorem (see for example §6 in the book by Blyth and Robertson)
shows that an A-module whose minimal polynomial is g^m as above must be a direct
sum of cyclic A-modules. Then the minimal polynomial of a cyclic summand must
divide g^m and hence is of the form g^t with t ≤ m.

1 T.S. Blyth, E.F. Robertson, Further Linear Algebra. Springer Undergraduate Mathematics Series. Springer-Verlag London, Ltd., 2002.
We apply these results to Vα . Since Vα is assumed to be indecomposable we see
that Vα must be cyclic and α has minimal polynomial g t where g is an irreducible
polynomial such that g t divides f . By the remark preceding the lemma, we know
that the matrix of α with respect to some basis is of the form C(g t ).
(b) By part (a), every finite-dimensional indecomposable A-module is of the form
Vα , where α has matrix C(g t ) with respect to a suitable basis, and where g is
irreducible and g t divides f . Note that two such modules for the same factor g t
are isomorphic (see Example 2.23 for modules over the polynomial algebra, but the
argument carries over immediately to A = K[X]/(f )).
There are only finitely many factors of f of the form g t with g an irreducible
polynomial in K[X]. Hence A has only finitely many finite-dimensional indecom-
posable A-modules, that is, A has finite representation type. 
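Lemma 8.4 makes the classification completely algorithmic; the following Python sketch (using sympy, for a sample polynomial f over Q) lists the prime-power divisors g^t of f, one for each indecomposable K[X]/(f)-module up to isomorphism.

```python
import sympy as sp

X = sp.symbols('X')
f = X**3 * (X**2 + 1)                    # sample non-constant polynomial

# factor_list returns the irreducible factors g with multiplicities m;
# the indecomposables correspond to the prime powers g^t with t <= m.
factors = sp.factor_list(f, X)[1]
prime_powers = [g**t for (g, m) in factors for t in range(1, m + 1)]
print(prime_powers)   # four prime powers: X, X**2, X**3, X**2 + 1
```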

One may ask what a small algebra of infinite representation type might look like.
The following is an example:
Lemma 8.5. For any field K, the 3-dimensional commutative K-algebra

A := K[X, Y ]/(X2 , Y 2 , XY )

has infinite representation type.

Proof. Any A-module V is specified by two K-linear maps αX , αY : V → V ,


describing the action on V of the cosets of X and Y in A, respectively. These
K-linear maps must satisfy the equations

αX^2 = 0, αY^2 = 0 and αX αY = 0 = αY αX .    (8.1)

For any n ∈ N and λ ∈ K, we define a 2n-dimensional A-module Vn (λ) as follows.


Take the vector space K 2n , and let the cosets of X and Y act by the K-linear maps
given by the block matrices

$$\alpha_X := \begin{pmatrix} 0 & 0 \\ E_n & 0 \end{pmatrix} \quad \text{and} \quad \alpha_Y := \begin{pmatrix} 0 & 0 \\ J_n(\lambda) & 0 \end{pmatrix}$$

where En is the n × n identity matrix, and

$$J_n(\lambda) = \begin{pmatrix} \lambda & & & & \\ 1 & \lambda & & & \\ & 1 & \ddots & & \\ & & \ddots & \ddots & \\ & & & 1 & \lambda \end{pmatrix}$$
is a Jordan block of size n for the eigenvalue λ. One checks that αX and αY satisfy
the equations in (8.1), that is, this defines an A-module Vn (λ).

We will now show that Vn (λ) is an indecomposable A-module; note that because
of the different dimensions, for a fixed λ, the Vn (λ) are pairwise non-isomorphic.
(Even more, for fixed n, the modules Vn (λ) and Vn (μ) are not isomorphic for
different λ, μ in K; see Exercise 8.5.)
By Lemma 7.3 it suffices to show that the only idempotents in the endomorphism
algebra EndA (Vn (λ)) are the zero and the identity map. So let ϕ ∈ EndA (Vn (λ)) be
an idempotent element. In particular, ϕ is a K-linear map of Vn (λ), so we can write
it as a block matrix ϕ̃ = $\begin{pmatrix} A_1 & A_2 \\ A_3 & A_4 \end{pmatrix}$ where A1 , A2 , A3 , A4 are n × n-matrices over
K. Then ϕ is an A-module homomorphism if and only if this matrix commutes
with the matrices αX and αY . By using that ϕ̃αX = αX ϕ̃ we deduce that A2 = 0
and A1 = A4 . Moreover, since ϕ̃αY = αY ϕ̃, we get that A1 Jn (λ) = Jn (λ)A1 . So
ϕ̃ = $\begin{pmatrix} A_1 & 0 \\ A_3 & A_1 \end{pmatrix}$, where A1 commutes with the Jordan block Jn (λ).
Assume ϕ̃^2 = ϕ̃; then in particular A1^2 = A1 . We exploit that A1 commutes with
Jn (λ), namely we want to apply Exercise 8.1. Take f = (X − λ)^n , then A1 is an
endomorphism of the K[X]/(f )-module Vα , where α is given by Jn (λ). This is a
cyclic K[X]/(f )-module (generated by the first basis vector). We have A1^2 = A1 ,
therefore by Exercise 8.1, A1 is the zero or the identity matrix. In both cases, since
ϕ̃^2 = ϕ̃ it follows that A3 = 0 and hence ϕ̃ is zero or is the identity. This means that
0 and idVn (λ) are the only idempotents in EndA (Vn (λ)). 
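The relations (8.1) for the matrices above are a routine check; the following Python sketch (using numpy, with sample values n = 2 and λ = 5) carries it out.

```python
import numpy as np

n, lam = 2, 5.0                                       # sample values
En = np.eye(n)
Jn = lam * np.eye(n) + np.diag(np.ones(n - 1), -1)    # Jordan block J_n(lam)
Z = np.zeros((n, n))

aX = np.block([[Z, Z], [En, Z]])
aY = np.block([[Z, Z], [Jn, Z]])

# All four products required by (8.1) vanish:
for prod in (aX @ aX, aY @ aY, aX @ aY, aY @ aX):
    print(np.allclose(prod, 0))                       # True, four times
```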

In general, it may be a difficult problem to determine the representation type
of a given algebra. There are some methods which reduce this problem to smaller
algebras, one of these is the following.
Proposition 8.6. Let A be a K-algebra and I ⊂ A a two-sided ideal with I = A.
If the factor algebra A/I has infinite representation type then A has infinite
representation type.
Proof. Note that if I = 0 then there is nothing to do.
We have seen in Lemma 2.37 that the A/I -modules are in bijection with those
A-modules M such that I M = 0. Note that under this bijection, the underlying
K-vector spaces remain the same, and the actions are related by (a + I )m = am
for all a ∈ A and m ∈ M. From this it is clear that for any such module M,
the A/I -submodules are the same as the A-submodules. In particular, M is an
indecomposable A/I -module if and only if it is an indecomposable A-module, and
M has finite dimension as an A/I -module if and only if it has finite dimension as
an A-module. Moreover, two such modules are isomorphic as A/I -modules if and
only if they are isomorphic as A-modules, roughly since they are not changed but
just viewed differently. (Details for this are given in Exercise 8.8.) By assumption
there are infinitely many pairwise non-isomorphic indecomposable A/I -modules of
finite dimension. By the above remarks they also yield infinitely many pairwise non-
isomorphic indecomposable A-modules of finite dimension, hence A has infinite
representation type. 


Example 8.7.
(1) Consider the commutative 4-dimensional K-algebra A = K[X, Y ]/(X2 , Y 2 ).
Let I be the ideal of A generated by the coset of XY . Then A/I is isomorphic
to the algebra K[X, Y ]/(X2 , Y 2 , XY ); this has infinite representation type by
Lemma 8.5. Hence A has infinite representation type by Proposition 8.6.
(2) More generally, consider the commutative K-algebra A = K[X, Y ]/(Xr , Y r )
for r ≥ 2, this has dimension r 2 . Let I be the ideal generated by the
cosets of X2 , Y 2 and XY . Then again A/I is isomorphic to the algebra
K[X, Y ]/(X2 , Y 2 , XY ) and hence A has infinite representation type, as in
part (1).
The representation type of a direct product of algebras can be determined from
the representation type of its factors.
Proposition 8.8. Let A = A1 × . . . × Ar be the direct product of K-algebras
A1 , . . . , Ar . Then A has finite representation type if and only if all the algebras
A1 , . . . , Ar have finite representation type.
Proof. We have seen that every A-module M can be written as a direct sum
M = M1 ⊕ . . . ⊕ Mr where Mi = εi M, with εi = (0, . . . , 0, 1Ai , 0, . . . , 0), and Mi
is an A-submodule of M (see Lemma 3.30).
Now, assume that M is an indecomposable A-module. So there exists a unique i
such that M = Mi and Mj = 0 for j ≠ i. Let I = A1 × . . . × Ai−1 × 0 × Ai+1 × . . . × Ar ,
this is an ideal of A and A/I ≅ Ai . The ideal I acts as zero on M. Hence M is
the inflation of an Ai -module (see Remark 2.38). By Lemma 2.37 the submodules
of M as an A-module are the same as the submodules as an Ai -module. Hence the
indecomposable A-module M is also an indecomposable Ai -module.
Conversely, every indecomposable Ai -module M clearly becomes an inde-
composable A-module by inflation. Again, since A-submodules are the same as
Ai -submodules, M is indecomposable as an A-module.
So we have shown that the indecomposable A-modules are in bijection with the
union of the sets of indecomposable Ai -modules, for 1 ≤ i ≤ r. Moreover, one sees
that under this bijection, isomorphic modules correspond to isomorphic modules,
and modules of finite dimension correspond to modules of finite dimension.
Therefore (see Definition 8.1), A has finite representation type if and only if each
Ai has finite representation type, as claimed. 

8.2 Representation Type for Group Algebras

In this section we give a complete characterization of group algebras of finite


groups which have finite representation type. This is a fundamental result in the
representation theory of finite groups. It turns out that the answer is completely
determined by the structure of the Sylow p-subgroups of the group. More precisely,
a group algebra KG over a field K has finite representation type if and only if K
has characteristic 0 or K has prime characteristic p > 0 and a Sylow p-subgroup


of G is cyclic. We recall what we need about Sylow p-subgroups of G. Assume the
order of G is |G| = pa m, where p does not divide m. Then a Sylow p-subgroup of
G is a subgroup of G of order pa . Sylow’s Theorem states that such subgroups exist
and any two such subgroups are conjugate in G and in particular are isomorphic.
For example, let G be the symmetric group on three letters, it can be generated by
s, r where s = (1 2) and r = (1 2 3) and we have s 2 = 1, r 3 = 1 and srs = r 2 . The
order of G is 6. There is one Sylow 3-subgroup, which is the cyclic group generated
by r. There are three Sylow 2-subgroups, each of them is generated by a 2-cycle.
If the field K has characteristic zero then the group algebra KG is semisimple,
by Maschke’s theorem (see Theorem 6.3). But semisimple algebras have finite
representation type, see Example 8.3.
We will now deal with group algebras over fields with characteristic p > 0. We
need to analyse in more detail a few group algebras of p-groups, that is, of groups
whose order is a power of p. We start with cyclic p-groups.
Lemma 8.9. Let G be the cyclic group G = C_{p^a} of order p^a where p is prime and
a ≥ 1, and let K be a field of characteristic p. Then KG ≅ K[T ]/(T^{p^a}).

Proof. Recall from Example 1.27 that for an arbitrary field K the group algebra KG
is isomorphic to K[X]/(X^{p^a} − 1). If K has characteristic p then X^{p^a} − 1 = (X − 1)^{p^a};
this follows from the usual binomial expansion. If we substitute T = X − 1 then
KG ≅ K[T ]/(T^{p^a}).
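The binomial identity used in the proof is quickly confirmed by computer; the following Python sketch (using sympy, with sample values p = 3 and a = 2) compares both sides over the field with p elements.

```python
import sympy as sp

X = sp.symbols('X')
p, a = 3, 2                                        # sample prime power
lhs = sp.Poly((X - 1)**(p**a), X, modulus=p)
rhs = sp.Poly(X**(p**a) - 1, X, modulus=p)
print(lhs == rhs)                                  # True in characteristic p
```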

Next, we consider the product of two cyclic groups of order p.
Lemma 8.10. Let K be a field.
(a) The group algebra of the direct product Cp × Cp has the form

K(Cp × Cp ) ≅ K[X1 , X2 ]/(X1^p − 1, X2^p − 1).

(b) Suppose K has characteristic p > 0. Then we have

K(Cp × Cp ) ≅ K[X, Y ]/(X^p , Y^p ).

Proof. (a) We choose g1 , g2 ∈ Cp × Cp generating the two factors of the direct
product. Moreover, we set I := (X1^p − 1, X2^p − 1), the ideal generated by the
polynomials X1^p − 1 and X2^p − 1. Then we consider the map Ψ defined by

Ψ : K[X1 , X2 ] → K(Cp × Cp ), X1^{a1} X2^{a2} → (g1^{a1} , g2^{a2} )

and linear extension to arbitrary polynomials. One checks that this map is an algebra
homomorphism. Moreover, each Xi^p − 1 is contained in the kernel of Ψ, since
gi^p = 1 in the cyclic group Cp . Hence, I ⊆ ker(Ψ). On the other hand, Ψ is clearly
surjective, so the isomorphism theorem for algebras (see Theorem 1.26) implies that

K[X1 , X2 ]/ker(Ψ) ≅ K(Cp × Cp ).
Now we compare the dimensions (as K-vector spaces). The group algebra on the
right has dimension p^2 . The factor algebra K[X1 , X2 ]/I also has dimension p^2 ,
the cosets of the monomials X1^{a1} X2^{a2} with 0 ≤ ai ≤ p − 1 form a basis. But since
I ⊆ ker(Ψ) these equal dimensions force I = ker(Ψ), which proves the desired
isomorphism.
(b) Let K have characteristic p > 0. Then by the binomial formula we have
Xi^p − 1 = (Xi − 1)^p for i = 1, 2. This means that substituting X1 − 1 for X
and X2 − 1 for Y yields a well-defined isomorphism

K[X, Y ]/(X^p , Y^p ) → K[X1 , X2 ]/(X1^p − 1, X2^p − 1).

By part (a) the algebra on the right-hand side is isomorphic to the group algebra
K(Cp × Cp ) and this completes the proof of the claim in (b). 

As a first step towards our main goal we can now find the representation type in
the case of finite p-groups, the answer is easy to describe:
Theorem 8.11. Let p be a prime number, K a field of characteristic p and let G be
a finite p-group. Then the group algebra KG has finite representation type if and
only if G is cyclic.
To prove this, we will use the following property, which characterizes when a
p-group is not cyclic.
Lemma 8.12. If a finite p-group G is not cyclic then it has a factor group which is
isomorphic to Cp × Cp .
Proof. In the case when G is abelian, we can deduce this from the general
description of a finite abelian group. Indeed, such a group can be written as the
direct product of cyclic groups, and if there are at least two factors, both necessarily
p-groups, then there is a factor group as in the lemma. For the proof in the general
case, we refer to the worked Exercise 8.6. 

Proof of Theorem 8.11. Suppose first that G is cyclic, that is, G = C_{p^a} for some
a ∈ N. Then by Lemma 8.9, we have KG ≅ K[T ]/(T^{p^a}). We have seen in
Example 8.3 that this algebra has finite representation type. So KG also has finite
representation type, by Remark 8.2.
Conversely, suppose G is not cyclic. We must show that KG has infinite
representation type. By Lemma 8.12, the group G has a normal subgroup N such
that the factor group Ḡ := G/N is isomorphic to Cp ×Cp . We construct a surjective
algebra homomorphism ψ : KG → K Ḡ, by taking ψ(g) := gN and extending this
to linear combinations. This is an algebra homomorphism, that is, it is compatible
with products since (g1 N)(g2 N) = g1 g2 N in the factor group G/N. Clearly ψ is
surjective. Let I = ker(ψ), then KG/I ≅ K Ḡ by the isomorphism theorem of
algebras (see Theorem 1.26). We have seen in Lemma 8.10 that

K Ḡ ≅ K(Cp × Cp ) ≅ K[X, Y ]/(X^p , Y^p ).

By Example 8.7, the latter algebra is of infinite representation type. Then the
isomorphic algebras K Ḡ and KG/I also have infinite representation type, by
Remark 8.2. Since the factor algebra KG/I has infinite representation type, by
Proposition 8.6 the group algebra KG also has infinite representation type. 

In order to determine precisely which group algebras have finite representation
type, we need new tools to relate modules of a group to modules of a subgroup.
They are known as ‘restriction’ and ‘induction’, and they are used extensively in
the representation theory of finite groups.
The setup is as follows. Assume G is a finite group, and H is a subgroup
of G. Take a field K. Then the group algebra KH is a subalgebra of KG (see
Example 1.16). One would therefore like to relate KG-modules and KH -modules.
(Restriction) If M is any KG-module then by restricting the action of KG to the
subalgebra KH , the space M becomes a KH -module, called the restriction of M
to KH .
(Induction) There is also the process of induction, this is described in detail in
Chap. A, an appendix on induced modules. We briefly sketch the main construction.
Let W be a KH -module. Then we can form the tensor product KG ⊗K W of
vector spaces; this becomes a KG-module, called the induced module, by setting
x · (g ⊗ w) = xg ⊗ w for all x, g ∈ G and w ∈ W (and extending linearly). We
then consider the K-subspace

𝓗 = span{gh ⊗ w − g ⊗ hw | g ∈ G, h ∈ H, w ∈ W }

and this is a KG-submodule of KG ⊗K W . The factor module

KG ⊗H W := (KG ⊗K W )/𝓗

is called the KG-module induced from the KH -module W . For convenience one
writes

g ⊗H w := g ⊗ w + 𝓗 ∈ KG ⊗H W.

The action of KG on this induced module is given by x · (g ⊗H w) = xg ⊗H w and


by the choice of 𝓗 we have

gh ⊗H w = g ⊗H hw for all g ∈ G, h ∈ H and w ∈ W.

This can be made explicit. We take a system T of representatives of the left
cosets of H in G, that is, G = ⋃_{t∈T} tH is a disjoint union; and we can assume
that the identity element 1 of the group G is contained in T . Suppose W is finite-
dimensional, and let {w1 , . . . , wm } be a basis of W as a K-vector space. Then, as a
K-vector space, the induced module KG ⊗H W has a basis

{t ⊗H wi | t ∈ T , i = 1, . . . , m};

a proof of this fact is given in Proposition A.5 in the appendix.



Example 8.13. Let G = S3 , the group of permutations of {1, 2, 3}. It can be


generated by s, r where s = (1 2) and r = (1 2 3). Let H be the subgroup generated
by r, so that H is cyclic of order 3. As a system of representatives, we can take
T = {1, s}. Let W be a 1-dimensional KH -module, and fix a non-zero element
w ∈ W . Then rw = αw with α ∈ K and α 3 = 1. By the above, the induced module
M := KG ⊗H W has dimension 2. It has basis {1 ⊗H w, s ⊗H w}. We can write
down the matrices for the action of s and r on M with respect to this basis. Noting
that rs = sr 2 we have
   
01 α 0
s → , r → .
10 0 α2

We now collect a few properties of restricted and induced modules, and we


assume for the rest of this chapter that all modules are finite-dimensional (to identify
the representation type, this is all we need, see Definition 8.1).
For the rest of this section we do not distinguish between direct sums and
direct products (or external direct sums) of modules and always write ⊕, to avoid
notational overload (see Definition 2.17 for direct sums of modules which are not a
priori submodules of a given module).
Lemma 8.14. Let K be a field, let G be a finite group and H a subgroup of G.
(a) If M and N are finite-dimensional KG-modules then the restriction of the direct
sum M ⊕ N to KH is the direct sum of the restrictions of M and of N to KH .
(b) If W is some finite-dimensional KH -module then it is isomorphic to a direct
summand of the restriction of KG ⊗H W to KH .
(c) If M is any finite-dimensional KG-module, then the multiplication map

μ : KG ⊗H M → M, g ⊗H m → gm

is a well-defined surjective homomorphism of KG-modules.


Proof. (a) This is clear since both M and N are invariant under the action of KH .
(b) As we have seen above, the induced module KG ⊗H W has a basis

{t ⊗H wi | t ∈ T , i = 1, . . . , m},

where T is a system of representatives of the left cosets of H in G with 1 ∈ T , and


where {w1 , . . . , wm } is a K-basis of W .
We now collect suitable basis elements which span KH -submodules. Namely,
for t ∈ T we set Wt := span{t ⊗H wi | i = 1, . . . , m}. Recall that the KG-action
on the induced module is by multiplication on the first factor. When we restrict this
action to KH we get for any h ∈ H that h · (t ⊗H wi ) = ht ⊗H wi . The element
ht ∈ G appears in precisely one left coset, that is, there exist unique s ∈ T and
h̃ ∈ H such that ht = s h̃; note that for the identity element t = 1 also s = 1. This
implies that

h · (t ⊗H wi ) = ht ⊗H wi = s h̃ ⊗H wi = s ⊗H h̃wi

and then one checks that W1 and ⊕_{t∈T∖{1}} Wt are KH -submodules of KG ⊗H W .
Since these are spanned by elements of a basis of KG ⊗H W we obtain a direct
sum decomposition of KH -modules

$$KG \otimes_H W = W_1 \oplus \Big( \bigoplus_{t \in T \setminus \{1\}} W_t \Big).$$

Moreover, W1 = span{1 ⊗H wi | i = 1, . . . , m} is as a KH -module isomorphic to


W , an isomorphism W → W1 is given by mapping wi → 1 ⊗H wi (and extending
linearly).
Thus, W is isomorphic to a direct summand (namely W1 ) of the restriction of
KG ⊗H W to KH (that is, of KG ⊗H W considered as a KH -module), as claimed.
(c) Recall the definition of the induced module, namely KG ⊗H M = (KG ⊗K M)/𝓗.
If {m1 , . . . , mr } is a K-basis of M then a basis of the tensor product of vector spaces
KG ⊗K M is given by {g ⊗ mi | g ∈ G, i = 1, . . . , r} (see Definition A.1). As
is well-known from linear algebra, one can uniquely define a linear map on a
basis and extend it linearly. In particular, setting g ⊗ mi → gmi defines a linear
map KG ⊗K M → M. To see that this extends to a well-defined linear map
KG ⊗H M → M one has to verify that the subspace 𝓗 is mapped to zero; indeed,
for the generators of H we have gh ⊗ m − g ⊗ hm → (gh)m − g(hm), which is
zero because of the axioms of a module. This shows that we have a well-defined
linear map μ : KG ⊗H M → M as given in the lemma. The map μ is clearly
surjective, since for every m ∈ M we have μ(1 ⊗H m) = 1 · m = m. Finally, μ is a
KG-module homomorphism since for all x, g ∈ G and m ∈ M we have

μ(x · (g ⊗H m)) = μ(xg ⊗H m) = (xg)m = x(gm) = xμ(g ⊗H m).

This completes the proof of the lemma. 



We can now state our first main result in this section, relating the representation
types of the group algebra of a group G and the group algebra of a subgroup H .
Theorem 8.15. Let K be a field and let H be a subgroup of a finite group G. Then
the following holds:
(a) If KG has finite representation type then KH also has finite representation
type.
(b) Suppose that the index |G : H | is invertible in K. Then we have the following.

(i) Every finite-dimensional KG-module M is isomorphic to a direct summand


of the induced KG-module KG ⊗H M.
(ii) If KH has finite representation type then KG also has finite representation
type.
Remark 8.16. Note that the statement in (ii) above does not hold without the
assumption that the index is invertible in the field K. As an example take a field K
of prime characteristic p > 0 and consider the cyclic group H = Cp as a subgroup
of the direct product G = Cp × Cp . Then KCp has finite representation type, but
K(Cp × Cp ) is of infinite representation type, by Theorem 8.11.
Proof. (a) By assumption, KG has finite representation type, so let M1 , . . . , Mt be
representatives for the isomorphism classes of finite-dimensional indecomposable
KG-modules. When we restrict these to KH , each of them can be expressed as a
direct sum of indecomposable KH -modules. From this we obtain a list of finitely
many indecomposable KH -modules.
We want to show that any finite-dimensional indecomposable KH -module
occurs in this list (up to isomorphism). So let W be a finite-dimensional indecom-
posable KH -module. By Lemma 8.14 we know that W is isomorphic to a direct
summand of the restriction of KG ⊗H W to KH . We may write the KG-module
KG⊗H W as Mi1 ⊕. . .⊕Mis , where each Mij is isomorphic to one of M1 , . . . , Mt .
We know W is isomorphic to a summand of Mi1 ⊕ . . . ⊕ Mis restricted to KH ,
and this is the direct sum of the restrictions of the Mij to KH , by the first part
of Lemma 8.14. Since W is an indecomposable KH -module, the Krull–Schmidt
theorem (Theorem 7.18) implies that there is some j such that W is isomorphic to
a direct summand of Mij restricted to KH . Hence W is one of the modules in our
list, as required.
(b) We take as in the proof of Lemma 8.14 a system T = {g1 , . . . , gr } of
representatives for the left cosets of H in G, and 1 ∈ T .
(i) Let M be a finite-dimensional KG-module. By part (c) of Lemma 8.14,
the multiplication map μ : KG ⊗H M → M is a surjective KG-module
homomorphism. The idea for the proof is to construct a suitable KG-module
homomorphism κ : M → KG ⊗H M such that μ ◦ κ = idM and then to apply
Lemma 2.30. We first consider the following map


r
σ : M → KG ⊗H M, m → gi ⊗H gi−1 m.
i=1

(The reader might wonder how to get the idea to use this map. It is not
too hard to see that we have an injective KH -module homomorphism
i : M → KG ⊗H M , m → 1 ⊗H m; the details are given in Proposition A.6 in
the appendix. To make it a KG-module homomorphism, one mimics the averaging
formula from the proof of Maschke’s theorem, see Lemma 6.1, leading to the above
map σ .)

We first show that σ is independent of the choice of the coset representatives


g1 , . . . , gr . In fact, any other set of representatives has the form g1 h1 , . . . , gr hr for
some h1 , . . . , hr ∈ H . Then the above image of m under σ reads

$$\sum_{i=1}^{r} g_i h_i \otimes_H h_i^{-1} g_i^{-1} m = \sum_{i=1}^{r} g_i h_i h_i^{-1} \otimes_H g_i^{-1} m = \sum_{i=1}^{r} g_i \otimes_H g_i^{-1} m = \sigma(m),$$

where for the first equation we have used the defining relations in the induced
module (coming from the subspace 𝓗 in the definition of KG ⊗H M).
Next, we show that σ is a KG-module homomorphism. In fact, for every g ∈ G
and m ∈ M we have

$$\sigma(gm) = \sum_{i=1}^{r} g_i \otimes_H g_i^{-1} g m = \sum_{i=1}^{r} g_i \otimes_H (g^{-1} g_i)^{-1} m.$$

Setting g̃i := g^{−1} gi we get another set of left coset representatives g̃1 , . . . , g̃r . This
implies that the above sum can be rewritten as

$$\sigma(gm) = \sum_{i=1}^{r} g_i \otimes_H (g^{-1} g_i)^{-1} m = \sum_{i=1}^{r} g \tilde{g}_i \otimes_H \tilde{g}_i^{-1} m = g \Big( \sum_{i=1}^{r} \tilde{g}_i \otimes_H \tilde{g}_i^{-1} m \Big) = g \sigma(m),$$

where the last equation holds since we have seen above that σ is independent of the
choice of coset representatives.
For the composition μ ◦ σ : M → M we obtain for all m ∈ M that

(μ ◦ σ)(m) = μ(∑_{i=1}^{r} gi ⊗H gi⁻¹m) = ∑_{i=1}^{r} gi gi⁻¹m = rm = |G : H| m.

So far we have not used our assumption that the index |G : H| is invertible in the field K, but now it becomes crucial. We set κ := (1/|G : H|) σ : M → KG ⊗H M. Then from the above computation we deduce that μ ◦ κ = idM.
As the final step we can now apply Lemma 2.30 to get a direct sum decomposi-
tion KG ⊗H M = im(κ) ⊕ ker(μ). The map κ is injective (since μ ◦ κ = idM ), so
im(κ) ∼ = M and the claim follows.
(ii) By assumption, KH has finite representation type; let W1 , . . . , Ws be the
finite-dimensional indecomposable KH -modules, up to isomorphism. It suffices to
show that every finite-dimensional indecomposable KG-module M is isomorphic
to a direct summand of one of the KG-modules KG ⊗H Wi with 1 ≤ i ≤ s.
In fact, since these finitely many finite-dimensional modules have only finitely
many indecomposable summands, it then follows that there are only finitely many
possibilities for M (up to isomorphism), that is, KG has finite representation type.
Consider the KG-module M as a KH-module. We can express it as a direct sum of indecomposable KH-modules, that is, there exist a1, . . . , as ∈ N0 such that

M ≅ (W1 ⊕ . . . ⊕ W1) ⊕ . . . ⊕ (Ws ⊕ . . . ⊕ Ws)  (with a1 copies of W1, . . . , as copies of Ws)

as a KH -module. Since tensor products commute with finite direct sums (see the
worked Exercise 8.9 for a proof in the special case of induced modules used here),
we obtain

KG ⊗H M ≅ (KG ⊗H W1)^{⊕a1} ⊕ . . . ⊕ (KG ⊗H Ws)^{⊕as}.

By part (i) we know that M is isomorphic to a direct summand of this induced module KG ⊗H M. Since M is indecomposable, the Krull–Schmidt theorem (see Theorem 7.18) implies that M is isomorphic to a direct summand of KG ⊗H Wi for some i ∈ {1, . . . , s}.

We apply this now, taking for H a Sylow p-subgroup of G, and we come to the
main result of this section.
Theorem 8.17. Let K be a field and let G be a finite group. Then the following
statements are equivalent.
(i) The group algebra KG has finite representation type.
(ii) Either K has characteristic 0, or K has characteristic p > 0 and G has a
cyclic Sylow p-subgroup.
Proof. We first prove that (i) implies (ii). So suppose that KG has finite represen-
tation type. If K has characteristic zero, there is nothing to prove. So suppose that
K has characteristic p > 0, and let H be a Sylow p-subgroup of G. Since KG has
finite representation type, KH also has finite representation type by Theorem 8.15.
But H is a p-group, so Theorem 8.11 implies that H is a cyclic group, which proves
part (ii).
Conversely, suppose (ii) holds. If K has characteristic zero then the group algebra
KG is semisimple by Maschke’s theorem (see Theorem 6.3). Hence KG is of finite
representation type by Example 8.3.
So we are left with the case that K has characteristic p > 0. By assumption
(ii), G has a cyclic Sylow p-subgroup H . The group algebra KH has finite
representation type, by Theorem 8.11. Since H is a Sylow p-subgroup, we have
|G| = pᵃm and |H| = pᵃ, and the index m = |G : H| is not divisible by p and
hence invertible in K. Then by Theorem 8.15 (part (b)) we conclude that KG has
finite representation type. 

Remark 8.18. Note that in positive characteristic p the condition in part (ii) of
the theorem is also satisfied when p does not divide the group order (a Sylow p-
subgroup is then the trivial group, which is clearly cyclic). In this case the group
algebra KG is semisimple, by Maschke's theorem, which also implies that KG has finite representation type.
Example 8.19.
(1) Consider the symmetric group G = S3 on three letters. Since |G| = 6, the
group algebra KS3 is semisimple whenever the characteristic of K is not 2
or 3. In characteristic 2, a Sylow 2-subgroup is given by ⟨(1 2)⟩, the subgroup generated by (1 2), which is clearly cyclic. Similarly, in characteristic 3, a Sylow 3-subgroup is given by ⟨(1 2 3)⟩, which is also cyclic. Hence KS3 has finite
representation type for all fields K.
(2) Consider the alternating group G = A4 of order |A4| = 12 = 2² · 3. If the field K has characteristic neither 2 nor 3 then KA4 is semisimple and hence has finite representation type. Suppose K has characteristic 3; a Sylow 3-subgroup is given by ⟨(1 2 3)⟩, hence it is cyclic and KA4 is of finite representation type. However, a Sylow 2-subgroup of A4 is the Klein four group ⟨(1 2)(3 4), (1 3)(2 4)⟩ ≅ C2 × C2, which is not cyclic. Hence, KA4 has infinite representation type if K has characteristic 2.
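The criterion of Theorem 8.17 is easy to check by machine. The sketch below redoes both examples using SymPy's permutation groups; the helper kg_finite_type is our own, and we assume SymPy's sylow_subgroup() and is_cyclic methods, which are available in recent SymPy versions.

# Sketch: testing the criterion of Theorem 8.17 for S3 and A4, assuming
# SymPy's PermutationGroup methods sylow_subgroup() and is_cyclic.
from sympy.combinatorics.named_groups import SymmetricGroup, AlternatingGroup

def kg_finite_type(G, p):
    """True if KG has finite representation type over a field of characteristic p."""
    if p == 0 or G.order() % p != 0:
        return True                          # KG is semisimple (Maschke's theorem)
    return G.sylow_subgroup(p).is_cyclic     # Theorem 8.17

for name, G in [("S3", SymmetricGroup(3)), ("A4", AlternatingGroup(4))]:
    print(name, {p: kg_finite_type(G, p) for p in (0, 2, 3)})
# Expected: S3 has finite type in every characteristic; A4 fails exactly in characteristic 2.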
The strategy of the proof of Theorem 8.15 can be extended to construct
all indecomposable modules for a group algebra KG from the indecomposable
modules of a group algebra KH when H is a subgroup of G such that the index
|G : H | is invertible in the field K. We illustrate this important method with an
example.
Example 8.20. Assume G is the symmetric group S3 ; we use the notation as in
Example 8.13. Assume also that K has characteristic 3. A Sylow 3-subgroup of
G has order 3, in fact there is only one in this case, which is H generated by
r = (1 2 3). Recall from the proof of Theorem 8.15 that every indecomposable
KG-module appears as a direct summand of an induced module KG ⊗H W , for
some indecomposable KH -module W .
The group algebra KH is isomorphic to K[T]/(T³), see Lemma 8.9. Recall from the proof of Lemma 8.9 that the isomorphism takes the generator r of H to T + 1. The algebra K[T]/(T³) has three indecomposable modules, of dimensions 1, 2, 3,
and the action of the coset of T is given by Jordan blocks Jn (0) of sizes n = 1, 2, 3
for the eigenvalue 0, see Example 8.3. If we transfer these indecomposable modules
to indecomposable modules for the isomorphic algebra KH , the action of r is given
by the matrices Jn (0)+En = Jn (1), for n = 1, 2, 3, where En is the identity matrix.
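As a quick sanity check, a few lines of Python confirm that these matrices really define actions of r in characteristic 3: Jn(1)³ is the identity and (Jn(1) − En)³ is zero modulo 3 (our own verification, using numpy).

# Sketch: over GF(3) the matrix J_n(1) = J_n(0) + E_n satisfies J_n(1)^3 = E_n,
# so it defines an action of the generator r of H = C_3, for n = 1, 2, 3.
import numpy as np

for n in (1, 2, 3):
    J = np.eye(n, dtype=int) + np.eye(n, k=-1, dtype=int)   # J_n(1), lower triangular
    assert (np.linalg.matrix_power(J, 3) % 3 == np.eye(n, dtype=int)).all()
    N = J - np.eye(n, dtype=int)                            # image of the coset of T
    assert (np.linalg.matrix_power(N, 3) % 3 == 0).all()    # T^3 = 0 in K[T]/(T^3)
print("J_n(1) defines a C_3-action over GF(3) for n = 1, 2, 3")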
(1) Let W = span{w} be the 1-dimensional indecomposable KH -module. The
action of r on W is given by the matrix J1 (1) = E1 , that is, W is the trivial
KH -module. From this we see that H acts trivially on the induced KG-module
M = KG ⊗H W . In fact, for this it suffices to show that r acts trivially
on M: recall from Example 8.13 that M = span{1 ⊗H w, s ⊗H w}. Clearly
r(1 ⊗H w) = r ⊗H w = 1 ⊗H rw = 1 ⊗H w. Note that rs = sr −1 in S3 , so
r(s ⊗H w) = (sr −1 ⊗H w) = s ⊗H r −1 w = s ⊗H w.
So we can view M as a module for the group algebra K(G/H). Since G/H has order 2, it is isomorphic to the subgroup ⟨s⟩ of S3 generated by s. So we can also view M as a module for the group algebra K⟨s⟩. As such, it is the direct sum of
two 1-dimensional submodules, with K-basis (1 + s) ⊗H w and (1 − s) ⊗H w,
respectively. Thus, from M = KG ⊗H W we obtain two 1-dimensional (hence
simple) KG-modules

U1 := span{(1 + s) ⊗H w} and V1 := span{(1 − s) ⊗H w},

where U1 is the trivial KG-module and on V1 the element s acts by multiplication


with −1. These two KG-modules are not isomorphic (due to the different actions
of s).
(2) Let W2 be the 2-dimensional indecomposable KH-module. The action of r on W2 is given by the matrix

J2(1) = ( 1 0 )
        ( 1 1 ),

so we take a basis for W2 as {b1, b2}, where rb1 = b1 + b2 and rb2 = b2.
The induced module KG ⊗H W2 has dimension 4, and a K-basis is given by

{1 ⊗H b1 , 1 ⊗H b2 , s ⊗H b1 , s ⊗H b2 },

see Proposition A.5. One checks that KG ⊗H W2 has the two KG-submodules

U2 := span{(1 + s) ⊗ b1 , (1 − s) ⊗ b2} and V2 := span{(1 − s) ⊗ b1, (1 + s) ⊗ b2 }.

From the above basis it is clear that KG ⊗H W2 = U2 + V2. Moreover, one checks that U2 and V2 each have a unique 1-dimensional submodule, namely
span{(1−s)⊗H b2 } ∼ = V1 for U2 and span{(1+s)⊗H b2 } ∼ = U1 for V2 (where U1 and
V1 are the simple KG-modules from (1)). Since these are different it follows that
U2 ∩V2 = 0 and hence we have a direct sum decomposition KG⊗H W2 = U2 ⊕V2 .
Moreover, it also implies that U2 and V2 are indecomposable KG-modules
since the only possible direct sum decomposition would be into two different 1-
dimensional submodules.
Finally, U2 and V2 are not isomorphic; in fact, an isomorphism would yield
an isomorphism between the unique 1-dimensional submodules, but these are
isomorphic to V1 and U1 , respectively.
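These claims can also be verified by explicit matrix computations over GF(3). In the sketch below (our own encoding, not from the book) the matrices R and S describe the action of r and s on KG ⊗H W2 in the basis {1 ⊗H b1, 1 ⊗H b2, s ⊗H b1, s ⊗H b2}, derived from the relations rs = sr⁻¹ and s² = 1.

# Sketch (our own matrix encoding): the action of r and s on KG ⊗_H W2 in the
# basis {1⊗b1, 1⊗b2, s⊗b1, s⊗b2} over GF(3), derived from rs = sr^{-1}.
import numpy as np

R = np.array([[1,0,0,0],
              [1,1,0,0],
              [0,0,1,0],
              [0,0,2,1]])          # columns: images of the four basis vectors under r
S = np.array([[0,0,1,0],
              [0,0,0,1],
              [1,0,0,0],
              [0,1,0,0]])          # s swaps 1⊗b_i and s⊗b_i

u1, u2 = np.array([1,0,1,0]), np.array([0,1,0,2])   # basis of U2: (1+s)⊗b1, (1-s)⊗b2
w1, w2 = np.array([1,0,2,0]), np.array([0,1,0,1])   # basis of V2: (1-s)⊗b1, (1+s)⊗b2

eq = lambda x, y: ((x - y) % 3 == 0).all()
assert eq(R @ u1, u1 + u2) and eq(R @ u2, u2)       # U2 is r-invariant
assert eq(S @ u1, u1) and eq(S @ u2, 2*u2)          # U2 is s-invariant
assert eq(R @ w1, w1 + w2) and eq(R @ w2, w2)       # V2 is r-invariant
assert eq(S @ w1, 2*w1) and eq(S @ w2, w2)          # V2 is s-invariant
print("KG ⊗_H W2 = U2 ⊕ V2 verified over GF(3)")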
(3) This leaves us to compute KG ⊗H W3 where W3 is the 3-dimensional indecomposable KH-module. Then we can take W3 = KH (in fact, KH ≅ K[T]/(T³) and A = K[T]/(T³) is an indecomposable A-module, see Remark 7.2, so KH is an indecomposable KH-module).
We consider the submodules generated by (1 + s) ⊗H 1 and (1 − s) ⊗H 1. More
precisely, again using that rs = sr −1 in S3 one can check that

U3 := KG((1 + s) ⊗H 1)
= span{u := (1 + s) ⊗H 1, v := 1 ⊗H r + s ⊗H r 2 , w := 1 ⊗H r 2 + s ⊗H r}
and

V3 := KG((1 − s) ⊗H 1)
= span{x := (1 − s) ⊗H 1, y := 1 ⊗H r − s ⊗H r 2 , z := 1 ⊗H r 2 − s ⊗H r}.

Note that the KG-action on U3 is given on basis elements by

su = u , sv = w , sw = v and ru = v , rv = w , rw = u

and similarly the KG-action on V3 is given by

sx = −x , sy = −z , sz = −y and rx = y , ry = z , rz = x.

From this one checks that U3 and V3 each have a unique 1-dimensional KG-
submodule, namely span{u + v + w} ∼ = U1 for U3 and span{x + y + z} ∼ = V1 for
V3 . From this one deduces that there is a direct sum decomposition KG = U3 ⊕ V3 .
Moreover, we claim that U3 and V3 are indecomposable KG-modules. For this
it suffices to show that they are indecomposable when considered as KH -modules
(with the restricted action); in fact, any direct sum decomposition as KG-modules
would also be a direct sum decomposition as KH -modules.
Note that U3 as a KH -module is isomorphic to KH (an isomorphism is given
on basis elements by u → 1, v → r and w → r 2 ) and this is an indecomposable
KH -module. Hence U3 is an indecomposable KG-module. Similarly, V3 is an
indecomposable KG-module.
Finally, U3 and V3 are not isomorphic; in fact, an isomorphism would yield
an isomorphism between the unique 1-dimensional submodules, but these are
isomorphic to U1 and V1 , respectively.
According to the proof of Theorem 8.15 we have now shown that the group
algebra KS3 for a field K of characteristic 3 has precisely six indecomposable mod-
ules up to isomorphism, two 1-dimensional modules, two 2-dimensional modules
and two 3-dimensional modules. Among these only the 1-dimensional modules are
simple (we have found that each of the other indecomposable modules has a 1-
dimensional submodule).

EXERCISES

8.2. Let A = K[X]/(f), where f ∈ K[X] is non-constant. Write f = f1^{a1} f2^{a2} · · · fr^{ar}, where the fi are pairwise coprime irreducible polynomials, and ai ≥ 1. Find the number of indecomposable A-modules, up to isomorphism.
8.3. Let A be a K-algebra. Assume M is a finite-dimensional A-module with basis
B = B1 ∪ B2 , where B1 and B2 are non-empty and disjoint. Assume that for
each a ∈ A, the matrix of the action of a with respect to this basis has block
form

( M1(a)    0    )
(   0    M2(a) ).

Show that then M is the direct sum M = M1 ⊕ M2, where Mi is the space with basis Bi, and the action of a on Mi is given by the matrix Mi(a).
8.4. For any field K, let A = K[X, Y]/(X², Y², XY) be the 3-dimensional
commutative algebra in Lemma 8.5. Find all ideals of A. Show that any
proper factor algebra of A has dimension one or two, and is of finite
representation type.
8.5. As in the previous exercise, consider the 3-dimensional commutative K-
algebra

A = K[X, Y]/(X², Y², XY).

For n ∈ N and λ ∈ K let Vn(λ) be the 2n-dimensional A-module defined in the proof of Lemma 8.5. Show that for λ ≠ μ ∈ K the A-modules Vn(λ) and Vn(μ) are not isomorphic. (Hint: You might try the case n = 1 first.)
8.6. Assume G is a group of order pⁿ where p is a prime number and n ≥ 1.
Consider the centre of G, that is,

Z(G) := {g ∈ G | gx = xg for all x ∈ G}.

(a) Show that Z(G) has order at least p. (Hint: note that Z(G) consists of
the elements whose conjugacy class has size 1, and that the number of
elements with this property must be divisible by p.)
(b) Show that if Z(G) ≠ G (that is, G is not abelian) then G/Z(G) cannot be cyclic.
(c) Suppose G is not cyclic. Show that then G has a normal subgroup N such
that G/N is isomorphic to Cp ×Cp . When G is abelian, this follows from
the structure of finite abelian groups. Prove the general case by induction
on n.
8.7. Let A be a K-algebra. Suppose f : M′ → M and g : M → M″ are A-module homomorphisms between A-modules such that f is injective, g is surjective, and im(f) = ker(g). This is called a short exact sequence and it is written as

0 → M′ −f→ M −g→ M″ → 0.

Show that for such a short exact sequence the following statements are
equivalent.
(i) There exists an A-submodule N ⊆ M such that M = ker(g) ⊕ N.


(ii) There exists an A-module homomorphism σ : M″ → M with g ◦ σ = idM″.
(iii) There exists an A-module homomorphism τ : M → M′ such that τ ◦ f = idM′.
A short exact sequence satisfying the equivalent conditions (i)–(iii) is called
a split short exact sequence.
8.8. Assume A is an algebra and I is a two-sided ideal of A with I ≠ A.
Suppose M is an A/I -module, recall that we can view it as an A-module
by am = (a + I )m for a ∈ A and m ∈ M. Assume also that N is an A/I -
module. Show that a map f : M → N is an A-module homomorphism if
and only if it is an A/I -module homomorphism. Deduce that M and N are
isomorphic as A-modules if and only if they are isomorphic as A/I -modules.
8.9. Let K be a field, and G a finite group with a subgroup H. Show that for all finite-dimensional KH-modules V and W, we have a KG-module isomorphism

KG ⊗H (V ⊕ W) ≅ (KG ⊗H V) ⊕ (KG ⊗H W).

8.10. Let G = {±1, ±i, ±j, ±k} be the quaternion group, as defined in Remark 1.9
(see also Exercise 6.8).
(a) Determine a normal subgroup N of G of order 2 and show that G/N is
isomorphic to C2 × C2 .
(b) For which fields K does the group algebra KG have finite representation
type?
8.11. For which fields K does the group algebra KG have finite representation type
where G is:
(a) the alternating group G = A5 of even permutations on five letters,
(b) the dihedral group G = Dn of order 2n where n ≥ 2, that is, the
symmetry group of the regular n-gon.
Chapter 9
Representations of Quivers

We have seen representations of a quiver in Chap. 2 and we have also seen how to
relate representations of a quiver Q over a field K and modules for the path algebra
KQ, and that quiver representations and KQ-modules are basically the same
(see Sect. 2.5.2). For some tasks, quiver representations are more convenient than
modules. In this chapter we develop the theory further and study representations of
quivers in detail. In particular, we want to exploit properties which come from the
graph structure of the quiver Q.

9.1 Definitions and Examples

Throughout we fix a field K.


We fix a quiver Q, that is, a finite directed graph (see Definition 1.11). We will
often assume that Q does not have oriented cycles; recall from Exercise 1.2 that this
is equivalent to the path algebra KQ being finite-dimensional.
We consider representations of Q over K and recall the definition from Defi-
nition 2.44. From now on we restrict to finite-dimensional representations, so the
following is slightly less general than Definition 2.44.
Definition 9.1. Let Q be a quiver with vertex set Q0 and arrow set Q1 .
(a) A representation M of Q over K is given by the following data:
– a finite-dimensional K-vector space M(i) for each vertex i ∈ Q0 ,
– a K-linear map M(α) : M(i) → M(j) for each arrow α ∈ Q1, where α : i −→ j.
We write such a representation M as a tuple M = ((M(i))i∈Q0 , (M(α))α∈Q1 ),
or just M = ((M(i))i , (M(α))α ).

(b) The zero representation O of Q is the representation where to each vertex i ∈ Q0 we assign the zero vector space. A representation M of Q is called non-zero if M(i) ≠ 0 for at least one vertex i ∈ Q0.
Example 9.2. Let Q be the quiver

1 −α→ 2 −β→ 3 −γ→ 4

and define a representation M of Q over K by

M(1) = K, M(2) = K, M(3) = 0, M(4) = K², M(α) = idK, M(β) = 0, M(γ) = 0.

We write M using quiver notation as

K −idK→ K −→ 0 −→ K².

Note that the maps starting or ending at a space which is zero can only be zero maps
and there is no need to write this down.
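Representations of a quiver are also convenient to manipulate by computer: one stores a dimension for each vertex and a matrix for each arrow. The sketch below (our own encoding, using numpy) records the representation M of Example 9.2 and checks that every matrix has the shape required by Definition 9.1.

# Sketch: a quiver representation as plain data, following the conventions of
# Definition 9.1; the matrix for an arrow i -> j has shape (dim M(j), dim M(i)).
import numpy as np

quiver = {"vertices": [1, 2, 3, 4],
          "arrows": {"alpha": (1, 2), "beta": (2, 3), "gamma": (3, 4)}}

dims = {1: 1, 2: 1, 3: 0, 4: 2}                  # the representation M of Example 9.2
maps = {"alpha": np.array([[1.0]]),              # M(alpha) = id_K
        "beta":  np.zeros((0, 1)),               # only the zero map into M(3) = 0
        "gamma": np.zeros((2, 0))}               # only the zero map out of M(3) = 0

def is_representation(quiver, dims, maps):
    """Check that every arrow carries a linear map with matching dimensions."""
    return all(maps[a].shape == (dims[j], dims[i])
               for a, (i, j) in quiver["arrows"].items())

print(is_representation(quiver, dims, maps))     # True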
In Sect. 2.5.2 we have seen how to view a representation of a quiver Q over the
field K as a module for the path algebra KQ, and conversely how to view a module
for KQ as a representation of Q over K. We recall these constructions here, using
the quiver Q as in Example 9.2.
Example 9.3. Let Q be the quiver as in Example 9.2.
(a) Let M be as in Example 9.2. We translate the representation M of Q into a KQ-module M. The underlying vector space is 4-dimensional; indeed, according to Proposition 2.46 (a) we take M = M(1) ⊕ M(2) ⊕ M(3) ⊕ M(4) = K⁴, where

e1 M ={(x, 0, 0, 0) | x ∈ K}
e2 M ={(0, y, 0, 0) | y ∈ K}
e4 M ={(0, 0, z, w) | z, w ∈ K}.

Then α(x, y, z, w) = (0, x, 0, 0) and β, γ act as zero maps. Note that M is the
direct sum of two KQ-submodules, the summands are e1 M ⊕ e2 M and e4 M.
(b) We give now an example which starts with a module and constructs from this
a quiver representation. Start with the KQ-module P = (KQ)e2 , that is, the
submodule of KQ generated by the trivial path e2 . It has basis {e2 , β, γβ}.
According to Proposition 2.46 (b), the representation P of Q corresponding
to the KQ-module P has the following shape. For each i = 1, 2, 3, 4 we
set P (i) = ei P = ei (KQ)e2 . A basis of this K-vector space is given by
paths in Q starting at vertex 2 and ending at vertex i. So we get P (1) = 0,
P(2) = span{e2}, P(3) = span{β} and P(4) = span{γβ}. Moreover, P(β) maps e2 → β and P(γ) maps β → γβ. To arrive at a picture as in Example 9.2, we identify each of the one-dimensional spaces with K, and then we can take the above linear maps as the identity map:

0 −→ K −idK→ K −idK→ K.

The close relation between modules for path algebras and representations of
quivers means that our definitions and results on modules can be translated to
representations of quivers. In particular, module homomorphisms become the
following.
Definition 9.4. Let Q = (Q0 , Q1 ) be a quiver, and let M and N be representations
of Q over K. A homomorphism ϕ : M → N of representations consists of a tuple
(ϕi )i∈Q0 of K-linear maps ϕi : M(i) → N(i) for each vertex i ∈ Q0 such that for
each arrow α : i −→ j in Q1, the corresponding square of linear maps commutes, that is,

ϕj ◦ M(α) = N(α) ◦ ϕi .

Such a homomorphism ϕ = (ϕi )i∈Q0 of representations is called an endomorphism


if M = N . It is called an isomorphism if all ϕi are vector space isomorphisms, and
if so, the representations M and N are said to be isomorphic.
Let M be a representation of Q, then there is always the endomorphism ϕ where
ϕi is the identity map, for each vertex i. We call this the identity of M; if M is
translated into a module of the algebra KQ then ϕ corresponds to the identity map.
As an illustration, we compute the endomorphisms of an explicit representation.
This will also be used later. It might be surprising that this representation does not
have any endomorphisms except scalar multiples of the identity, given that the space
at vertex 4 is 2-dimensional.
Lemma 9.5. Let Q be the quiver with four vertices 1, 2, 3, 4 and three arrows αi : i −→ 4 for i = 1, 2, 3.
Moreover, let M be the representation of Q with M(i) = K for 1 ≤ i ≤ 3 and M(4) = K², and define

M(α1) : K → K², x → (x, 0)
M(α2) : K → K², x → (0, x)
M(α3) : K → K², x → (x, x).

Then every homomorphism ϕ : M → M is a scalar multiple of the identity.


Proof. Take a homomorphism of representations ϕ : M → M, that is,
ϕ = (ϕi )1≤i≤4 where each ϕi : M(i) → M(i) is a K-linear map. For i = 1, 2, 3
the map ϕi : K → K is multiplication by a scalar, say ϕi (x) = ci x for ci ∈ K.
Consider the commutative diagram corresponding to the arrow α1; then we have

(c1 x, 0) = (M(α1 ) ◦ ϕ1 )(x) = (ϕ4 ◦ M(α1 ))(x) = ϕ4 (x, 0).

Using the other arrows we similarly find ϕ4 (0, x) = (0, c2 x), and
ϕ4 (x, x) = (c3 x, c3 x). Using linearity,

(c3 x, c3 x) = ϕ4 (x, x) = ϕ4 (x, 0) + ϕ4 (0, x) = (c1 x, 0) + (0, c2 x) = (c1 x, c2 x).

Hence for all x ∈ K we have c1 x = c3 x = c2 x and therefore c1 = c2 = c3 =: c.


Now we deduce for all x, y ∈ K that

ϕ4 (x, y) = ϕ4 (x, 0) + ϕ4 (0, y) = (cx, 0) + (0, cy) = (cx, cy) = c(x, y),

so ϕ4 = c·idK². Thus we have proved that every homomorphism ϕ : M → M is a


scalar multiple of the identity. 
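The computation in this proof is a small linear system and can be replayed mechanically: the unknowns are c1, c2, c3 and the four entries of ϕ4, and the commutativity constraints force all of them onto one line. A sketch with sympy (our own setup, not from the book):

# Sketch: solving phi4 ∘ M(alpha_i) = M(alpha_i) ∘ c_i for the representation
# of Lemma 9.5; the solution space should be spanned by the identity.
import sympy as sp

c1, c2, c3, a, b, c, d = sp.symbols('c1 c2 c3 a b c d')
phi4 = sp.Matrix([[a, b], [c, d]])
A1 = sp.Matrix([1, 0])   # M(alpha_1): x -> (x, 0)
A2 = sp.Matrix([0, 1])   # M(alpha_2): x -> (0, x)
A3 = sp.Matrix([1, 1])   # M(alpha_3): x -> (x, x)

eqs = []
for Ai, ci in [(A1, c1), (A2, c2), (A3, c3)]:
    eqs += list(phi4 * Ai - ci * Ai)   # entries of phi4∘M(alpha_i) - ci·M(alpha_i)

sol = sp.solve(eqs, [c1, c2, c3, a, b, c, d], dict=True)[0]
print(sol)   # all unknowns are expressed by a single free parameter:
             # the off-diagonal entries vanish and c1 = c2 = c3 = a = d,
             # so every endomorphism is a scalar multiple of the identity.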

The analogue of a submodule is a ‘subrepresentation’.
Definition 9.6. Let M = ((M(i))i∈Q0 , (M(α))α∈Q1 ) be a representation of a
quiver Q.
(a) A representation U = ((U (i))i∈Q0 , (U (α))α∈Q1 ) of Q is a subrepresentation
of M if the following holds:
(i) For each vertex i ∈ Q0 , the vector space U (i) is a subspace of M(i).
(ii) For each arrow α : i −→ j in Q, the linear map U(α) : U(i) −→ U(j) is the
restriction of M(α) to the subspace U (i).
(b) A non-zero representation S of Q is called simple if its only subrepresentations
are O and S.
Example 9.7. Let Q be a quiver.
(1) Every representation M of Q has the trivial subrepresentations O and M.
(2) For each vertex j ∈ Q0 we have a representation Sj of Q over K given by

Sj(i) = K if i = j, and Sj(i) = 0 if i ≠ j,

and Sj(α) = 0 for all α ∈ Q1. Then Sj is a simple representation.


(3) We consider the quiver 1 −→ 2 (with one arrow α) and the representation M given by
M(1) = K = M(2) and M(α) = idK . We write this representation in the
form
K −idK→ K.

Consider the representation U given by U(1) = 0, U(2) = K, U(α) = 0. This is a subrepresentation of M. But there is no subrepresentation T given by
T (1) = K, T (2) = 0, T (α) = 0, because the map T (α) is not the restriction of
M(α) to T (1).
Exercise 9.1. Let Q be the quiver in Example 9.2, and let M be the representation

K −idK→ K −idK→ K −idK→ K.

Find all subrepresentations of M. How many of them are simple?


Translating the description of simple modules for finite-dimensional path alge-
bras (see Sect. 3.4.2, in particular Theorem 3.26) we get the following result.
Theorem 9.8. Let Q be a quiver without oriented cycles. Then every simple
representation of Q is isomorphic to one of the simple representations Sj , with
j ∈ Q0 , as defined in Example 9.7 above. Moreover, the representations Sj , with
j ∈ Q0 , are pairwise non-isomorphic.
Instead of translating Theorem 3.26, one could also prove the above theorem
directly in the language of representations of quivers, see Exercise 9.7.
For modules of an algebra we have seen in Chap. 7 that the building blocks are
the indecomposable modules. So we should spell out what this corresponds to for
representations of quivers.
Definition 9.9. Let Q be a quiver and let K be a field.


(1) Let M = ((M(i))i , (M(α))α ) be a representation of Q over K, and assume
U and V are subrepresentations of M. Then M is the direct sum of U and V
if for each i ∈ Q0 we have M(i) = U (i) ⊕ V (i) as vector spaces. We write
M = U ⊕ V.
(2) A non-zero representation M of Q is called indecomposable if it cannot be
expressed as the direct sum M = U ⊕ V with non-zero subrepresentations
U and V of M.
Note that since U and V are required to be subrepresentations of M in the above
definition, for each arrow α : i −→ j in Q, the linear map M(α) takes U(i) into U(j),
and it takes V (i) into V (j ) (see Definition 9.6).
Remark 9.10. There is a related notion of a direct product (or external direct sum)
of representations of a quiver Q, see Exercise 9.13.
Exercise 9.2. Let Q be the quiver 1 −→ 2 with one arrow α.
(i) Consider the representation M of Q as in Example 9.7, so M(1) = M(2) = K,
and M(α) = idK . Show that M is indecomposable.
(ii) Let N be the representation of Q with N(1) = N(2) = K and N(α) = 0.
Show that N decomposes as the direct sum of the subrepresentations K −→ 0
and 0 −→ K.
In Chap. 7 we have studied indecomposable modules in detail. In particular, we
have established methods to check whether a certain module is indecomposable. We
translate some of these to the language of representations of quivers.
The first result is a restatement of Lemma 7.3.
Lemma 9.11. Let Q be a quiver and M a non-zero representation of Q over K.
Then the representation M is indecomposable if and only if the only homomor-
phisms of representations ϕ : M → M with ϕ 2 = ϕ are the zero homomorphism
and the identity.
The condition in this lemma is satisfied in particular if all endomorphisms of M
are scalar multiples of the identity. This gives the following, which is also a conse-
quence of Corollary 7.16 translated into the language of quiver representations. It
is the special case where the endomorphism algebra is isomorphic to the field K,
hence is a local algebra.
Lemma 9.12. Let Q be a quiver and M a non-zero representation of Q over K.
Suppose that every homomorphism of representations ϕ : M → M is a scalar
multiple of the identity. Then the representation M is indecomposable.
9.2 Representations of Subquivers

When studying representations of quivers it is often useful to relate representations


of a quiver to representations of a ‘subquiver’, a notion we now define.
Definition 9.13. Assume Q is a quiver with vertex set Q0 and arrow set Q1 .
(a) A subquiver of Q is a quiver Q′ = (Q′0, Q′1) such that Q′0 ⊆ Q0 and Q′1 ⊆ Q1.
(b) A subquiver Q′ of Q as above is called a full subquiver if for any two vertices i, j ∈ Q′0 all arrows α : i −→ j of Q are also arrows in Q′.
Note that since Q′ must be a quiver, it is part of the definition that in a subquiver the starting and end points of any arrow are also in the subquiver (see Definition 1.11). Thus one cannot choose arbitrary subsets Q′1 ⊆ Q1 in the above definition.
Example 9.14. Let Q be the quiver with vertices 1, 2, 3, two parallel arrows α, β between vertices 1 and 2, and an arrow γ between vertices 2 and 3.
We determine the subquivers Q′ of Q with vertex set Q′0 = {1, 2}. For the arrow set we have the following possibilities: Q′1 = ∅, Q′1 = {α}, Q′1 = {β} and Q′1 = {α, β}. Of these, only the last quiver is a full subquiver. However, by the preceding remark we cannot choose Q′1 = {α, γ} since the vertex 3 is not in Q′0.
Given a quiver Q with a subquiver Q′, we want to relate representations of Q with representations of Q′. For our purposes two constructions will be particularly useful. We first present the 'restriction' of a representation of Q to a representation of Q′. Starting with a representation of Q′, we then introduce the 'extension by zero' which produces a representation of Q.
Definition 9.15. Let Q = (Q0, Q1) be a quiver and Q′ = (Q′0, Q′1) a subquiver of Q. If

M = ((M(i))i∈Q0 , (M(α))α∈Q1 )

is a representation of Q then we can restrict M to Q′. That is, we define a representation M′ of Q′ by

M′ := ((M(i))i∈Q′0 , (M(α))α∈Q′1 ).

The representation M′ is called the restriction of M to Q′.


For example, if Q and M are as in Example 9.2, and Q′ is the subquiver

2 −β→ 3 −γ→ 4

then M′ is the representation

K −→ 0 −→ K².

Conversely, suppose we have a representation of a subquiver Q′ of a quiver Q. Then we can extend it to a representation of Q, by assigning arbitrary vector spaces and linear maps to the vertices and arrows which are not in the subquiver Q′. So there are many ways to extend a representation of Q′ to one of Q (if Q′ ≠ Q). Perhaps the easiest construction is to extend 'by zero'.
Definition 9.16. Suppose Q is a quiver, which has a subquiver Q′ = (Q′0, Q′1). Suppose

M′ = ((M′(i))i∈Q′0 , (M′(α))α∈Q′1 )

is a representation of Q′. Then we define a representation M of Q by

M(i) := M′(i) if i ∈ Q′0 and M(i) := 0 if i ∉ Q′0,
M(α) := M′(α) if α ∈ Q′1 and M(α) := 0 if α ∉ Q′1.

This defines a representation of Q, which we call the extension by zero of M′.


Remark 9.17. Let Q be a quiver and let Q′ be a subquiver of Q. Suppose that M′ is a representation of Q′ and let M be its extension by zero to Q. It follows directly from Definitions 9.15 and 9.16 that the restriction of M to Q′ is equal to M′. In other words, extension by zero followed by restriction acts as identity on representations of Q′.
But in general, first restricting a representation M of Q to Q′ and then extending it by zero does not give back M, because on vertices which are not in Q′ non-zero vector spaces in M are replaced by 0.
As we have seen, indecomposable representations are the building blocks for
arbitrary representations of quivers (just translate the results on modules for path
algebras in Chap. 7 to the language of representations of quivers). The extension by
zero is particularly useful because it preserves indecomposability.
Lemma 9.18. Assume Q′ is a subquiver of a quiver Q.
(a) Suppose M′ is a representation of Q′ and M is its extension by zero to Q. If M′ is indecomposable then so is M.
(b) Suppose M′ and N′ are non-isomorphic indecomposable representations of Q′. Let M and N be their extensions by zero. Then M and N are non-isomorphic representations of Q.
Proof. (a) Suppose we had M = U ⊕ V with subrepresentations U, V of M. According to Definition 9.9 we have to show that one of U and V is the zero representation.
Note that the restriction of M to Q′ is M′ (see Remark 9.17). We call U′ the restriction of U to Q′ and similarly define V′. Then M′ = U′ ⊕ V′ because direct sums are compatible with restriction. But M′ is indecomposable, therefore one of U′ or V′ is the zero representation, that is, U(i) = 0 for all i ∈ Q′0 or V(i) = 0 for all i ∈ Q′0.
On the other hand, since U and V are subrepresentations and M(i) = 0 for i ∉ Q′0 (by Definition 9.16), also U(i) = 0 and V(i) = 0 for all i ∉ Q′0.
In total, we get that U(i) = 0 for all i ∈ Q0 or V(i) = 0 for all i ∈ Q0, that is, one of U or V is the zero representation and hence M is indecomposable.
(b) Assume for a contradiction that there is an isomorphism ϕ : M → N of representations. Then, again using Remark 9.17, if we restrict to vertices and arrows in Q′ we get a homomorphism ϕ′ : M′ → N′. Moreover, since each ϕi : M(i) → N(i) is an isomorphism, ϕ′i is also an isomorphism for each vertex i of Q′. This means that ϕ′ is an isomorphism of representations, and M′ is isomorphic to N′, a contradiction.
There is a very useful reduction: When studying representations of a quiver, it is
usually enough to study quivers which are connected. This is a consequence of the
following:
Lemma 9.19. Assume Q is a quiver which can be expressed as a disjoint union Q = Q′ ∪ Q″ of subquivers with no arrows between Q′ and Q″. Then the indecomposable representations of Q are precisely the extensions by zero of the indecomposable representations of Q′ and the indecomposable representations of Q″.
Proof. Let M′ be an indecomposable representation of Q′. We extend it by zero and then get an indecomposable representation of Q, by Lemma 9.18. Similarly any indecomposable representation of Q″ extends to one for Q.
Conversely, let M be any representation of Q. We show that M can be expressed as a direct sum. By restriction of M (see Definition 9.15) we get representations M′ of Q′ and M″ of Q″. Now let U be the extension by zero of M′, a representation of Q, and let V be the extension by zero of M″, also a representation of Q. We claim that M = U ⊕ V.
Take a vertex i in Q. If i ∈ Q′ then U(i) = M′(i) = M(i) and V(i) = 0 and therefore M(i) = U(i) ⊕ V(i). Similarly, if i ∈ Q″ we have U(i) = 0 and V(i) = M″(i) = M(i) and M(i) = U(i) ⊕ V(i). Moreover, if α is an arrow of Q then it is either in Q′ or in Q″, since by assumption there are no arrows between Q′ and Q″. If it is in Q′ then U(α) = M′(α) = M(α) and V(α) = 0, so the map M(α) is compatible with the direct sum decomposition, and the same holds if α is in Q″. This shows that M = U ⊕ V.
Assume now that M is an indecomposable representation of Q. By the above, we have M = U ⊕ V, therefore one of U or V must be the zero representation. Say U is the zero representation, that is, M is the extension by zero of M″, a representation of Q″. We claim that M″ must be indecomposable: Suppose we had M″ = X″ ⊕ Y″ with subrepresentations X″ and Y″. Then we extend X″ and Y″ by zero and obtain a direct sum decomposition M = X ⊕ Y. Since M is indecomposable, one of X or Y is the zero representation. But since these are obtained as extensions by zero this implies that one of X″ or Y″ is the zero representation. Therefore, M″ is indecomposable, as claimed.

9.3 Stretching Quivers and Representations

There are further methods to relate representations of different quivers. We will now
present a general construction which will be very useful later. This construction
works for quivers without loops; for simplicity we consider from now on only
quivers without oriented cycles. Recall that the corresponding path algebras are then
finite-dimensional, see Exercise 1.2.
Consider two quivers Q and Q̃ where Q̃ is obtained from Q by replacing one vertex i of Q by two vertices i1, i2 and one arrow γ : i1 −→ i2, and by distributing the arrows adjacent to i between i1 and i2. The following definition makes this construction precise.
Definition 9.20. Let Q be a quiver without oriented cycles and i a fixed vertex. Let
T be the set of all arrows adjacent to i, and suppose T = T1 ∪ T2 , a disjoint union.
Define Q̃ to be the quiver obtained from Q as follows.
(i) Replace vertex i by i1 −γ→ i2 (where i1, i2 are different vertices);
(ii) Join the arrows in T1 to i1;
(iii) Join the arrows in T2 to i2.
In (ii) and (iii) we keep the original orientation of the arrows. We call the new quiver Q̃ a stretch of Q.
By assumption, Q does not have loops, so any arrow adjacent to i either starts at
i or ends at i but not both, and it belongs either to T1 or to T2 . Note that if T is large
then there are many possible stretches of a quiver Q at a given vertex i, coming from
different choices of the sets T1 and T2 .
We illustrate the general construction from Definition 9.20 with several
examples.
Example 9.21.
(1) Let Q be the quiver 1 −→ 2. We stretch this quiver at vertex 2 and take T2 = ∅,
and we get the quiver
1 −→ 21 −γ→ 22 .

If we take T1 = ∅ then we obtain


1 −→ 22 ←γ− 21 .

(2) Let Q be the quiver 1 −→ 2 −→ 3. Stretching Q at vertex 2, and choosing T2 = ∅, we obtain the quiver with arrows 1 −→ 21 , γ : 21 −→ 22 and 21 −→ 3.
(3) Let Q be the quiver

There are several stretches of Q at the vertex i, for example we can get the
quiver
or the quiver

(4) Let Q be the Kronecker quiver, with two vertices 1, 2 and two parallel arrows α, β : 1 −→ 2. Stretching the Kronecker quiver at vertex 1 and choosing T1 = ∅ and T2 = {α, β} gives the stretched quiver with arrows γ : 11 −→ 12 and α, β : 12 −→ 2. Alternatively, if we stretch the Kronecker quiver at vertex 1 and choose T1 = {α}, T2 = {β} then we obtain the triangle-shaped quiver with arrows γ : 11 −→ 12 , α : 11 −→ 2 and β : 12 −→ 2.
Exercise 9.3. Let Q be a quiver with vertex set {1, . . . , n} and n − 1 arrows such
that for each i with 1 ≤ i ≤ n − 1, there is precisely one arrow between vertices
i and i + 1, with arbitrary orientation. That is, the underlying graph of Q has the shape

1 —— 2 —— · · · —— n.

Show that one can get the quiver Q by a finite number of stretches starting with the one-vertex quiver, that is, the quiver with one vertex and no arrows.
Exercise 9.4. Let Q̃ be a quiver with vertex set {1, . . . , n + 1} such that there is one arrow between vertices i and i + 1 for all i = 1, . . . , n and an arrow between n + 1 and 1. That is, the underlying graph of Q̃ is a circle with n + 1 vertices.
Suppose that the arrows of Q̃ are oriented so that Q̃ is not an oriented cycle. Show
that one can obtain Q̃ by a finite number of stretches starting with the Kronecker
quiver.
So far, stretching a quiver as in Definition 9.20 is a combinatorial construction which produces new quivers from given ones. We can similarly stretch representations. Roughly speaking, we replace the vector space M(i) in a representation M of Q by two copies of M(i) with the identity map between them, distributing the M(α) with α adjacent to i so that we get a representation of Q̃, and keeping the rest as it is.
Definition 9.22. Let Q be a quiver without oriented cycles and let Q̃ be the quiver obtained from Q by stretching at a fixed vertex i, with a new arrow γ : i1 −→ i2 and where the arrows adjacent to i are the disjoint union T = T1 ∪ T2, see Definition 9.20. Given a representation M of Q, define M̃ to be the representation of Q̃ by

M̃(i1) = M(i) = M̃(i2), M̃(j) = M(j) (for j ≠ i)

M̃(γ) = idM(i), M̃(α) = M(α) (for α any arrow of Q).

Note that if α is in T1 then M̃(α) must start or end at vertex i1, and similarly for α in T2.
Example 9.23.
(1) As in Example 9.21 we consider the quiver Q of the form 1 −→ 2, and the stretched quiver Q̃ : 1 −→ 21 −→ 22 . Moreover, let M be the representation K −idK→ K of Q. Then the stretched representation M̃ of Q̃ has the form K −idK→ K −idK→ K. As another example, let N be the representation K −→ 0 of Q. Then the stretched representation Ñ of Q̃ has the form K −→ 0 −→ 0.
(2) Let Q be the Kronecker quiver and let M be the representation

For the stretched quivers appearing in Example 9.21 we then obtain the
following stretched representations:
and

Our main focus is on indecomposable representations. The following result is very useful because it shows that stretching representations preserves indecomposability.
Lemma 9.24. Let Q be a quiver without oriented cycles and Q̃ a quiver obtained by stretching Q. For any representation M of Q we denote by M̃ the representation of Q̃ obtained by stretching M (as in Definition 9.22). Then the following holds:
(a) If M is an indecomposable representation then M̃ is also indecomposable.
(b) Suppose M and N are representations of Q which are not isomorphic. Then M̃ and Ñ are not isomorphic.
Proof. Assume Q̃ is obtained from Q by replacing vertex i by i1 −γ→ i2. Take two representations M and N of Q and a homomorphism ϕ̃ : M̃ → Ñ of the stretched representations. Since ϕ̃ is a homomorphism of representations we have (see Definition 9.4):

ϕ̃i2 ◦ M̃(γ) = Ñ(γ) ◦ ϕ̃i1 .

But M̃(γ) and Ñ(γ) are identity maps by Definition 9.22, and hence ϕ̃i1 = ϕ̃i2. This means that we can define a homomorphism ϕ : M → N of representations by setting ϕi := ϕ̃i1 = ϕ̃i2 and ϕj := ϕ̃j for j ≠ i. One checks that the relevant diagrams as in Definition 9.4 commute; this follows since the corresponding diagrams for ϕ̃ commute, and since ϕ̃i1 = ϕ̃i2.
With this preliminary observation we will now prove the two assertions.
(a) Consider the case M = N. To show that M̃ is indecomposable it suffices by Lemma 9.11 to show that if ϕ̃² = ϕ̃ then ϕ̃ is zero or the identity. By the above definition of the homomorphism ϕ we see that if ϕ̃² = ϕ̃ then also ϕ² = ϕ. By assumption, M is indecomposable and hence, again by Lemma 9.11, ϕ is zero or the identity. But then it follows directly from the definition of ϕ that ϕ̃ is also zero
(b) Assume M and N are not isomorphic. Suppose for a contradiction that ϕ̃ : M̃ → Ñ is an isomorphism of representations, that is, all linear maps ϕ̃j : M̃(j) → Ñ(j) are isomorphisms. Then all linear maps ϕj : M(j) → N(j) are also isomorphisms and hence ϕ : M → N is an isomorphism, a contradiction.
9.4 Representation Type of Quivers

We translate modules over the path algebra to representations of quivers, and the Krull–Schmidt theorem translates as well. That is, every (finite-dimensional)
representation of a quiver Q is a direct sum of indecomposable representations,
unique up to isomorphism and labelling. Therefore it makes sense to define the
representation type of a quiver.
Recall that we have fixed a field K and that we consider only finite-dimensional
representations of quivers over K, see Definition 9.1. Moreover, we assume
throughout that quivers have no oriented cycles; this allows us to apply the results
of Sect. 9.3.
Definition 9.25. A quiver Q is said to be of finite representation type over
K if there are only finitely many indecomposable representations of Q, up to
isomorphism. Otherwise, we say that the quiver has infinite representation type
over K.
By our Definition 9.1, a representation of Q always corresponds to a finite-
dimensional KQ-module. In addition, we assume Q has no oriented cycles and
hence KQ is finite-dimensional. Therefore the representation type of Q is the same
as the representation type of the path algebra KQ, as in Definition 8.1.
In most situations, our arguments will not refer to a particular field K, so we often
just speak of the representation type of a quiver, without mentioning the underlying
field K explicitly.
For determining the representation type of quivers there are some reductions
which follow from the work done in previous sections.
Given a quiver Q, since we have seen in Sect. 9.2 that we can relate indecom-
posable representations of its subquivers to indecomposable representations of Q,
we might expect that there should be a connection between the representation type
of subquivers with that of Q.
Lemma 9.26. Assume Q′ is a subquiver of a quiver Q. If Q′ has infinite representation type then Q also has infinite representation type.
Proof. This follows directly from Lemma 9.18. 

Furthermore, to identify the representation type, it is enough to consider con-
nected quivers.
Lemma 9.27. Assume a quiver Q is the disjoint union of finitely many subquivers
Q(1) , . . . , Q(r) . Then Q has finite representation type if and only if all subquivers
Q(1) , . . . , Q(r) have finite representation type.
Proof. This follows from Lemma 9.19, by induction on r. 

Example 9.28.
(1) The smallest connected quiver without oriented cycles consists of just one ver-
tex. A representation of this quiver is given by assigning a (finite-dimensional)
vector space to this vertex. Any vector space has a basis, hence it is a direct
sum of 1-dimensional subspaces; each subspace is a representation of the
one-vertex quiver. So there is just one indecomposable representation of the
one-vertex quiver, it is 1-dimensional. In particular, the one-vertex quiver has
finite representation type.
(2) Let Q be the quiver 1 −→ 2 with one arrow α. We will determine explicitly its indecomposable
representations. This will show that Q has finite representation type.
Let X be an arbitrary representation of Q, that is, we have two
finite-dimensional vector spaces X(1) and X(2), and a linear map
T = X(α) : X(1) → X(2). We exploit the proof of the rank-nullity theorem
from linear algebra.
Choose a basis {b1 , . . . , bn } of the kernel ker(T ), and extend it to a basis of
X(1), say by {c1 , . . . , cr }. Then the image im(T ) has basis {T (c1 ), . . . , T (cr )},
by the proof of the rank-nullity theorem. Extend this set to a basis of X(2),
say by {d1 , . . . , ds }. With this, we aim at expressing X as a direct sum of
subrepresentations.
For each basis vector bi of the kernel of T we get a subrepresentation Bi of
X of the form

span{bi } −→ 0.

This is a subrepresentation, since the restriction of T = X(α) to span{bi }


maps bi to zero. Each of these subrepresentations is isomorphic to the simple
representation S1 as defined in Example 9.7.
For each basis vector ci we get a subrepresentation Ci of X of the form

span{ci } −→ span{T (ci )}

where the map is given by T . Each of these representations is indecomposable,


in fact it is isomorphic to the indecomposable representation M which we
considered in Exercise 9.2.
For each basis vector di we get a subrepresentation Di of X of the form

0 −→ span{di }

which is isomorphic to the simple representation S2 .


From the choice of our basis vectors we know that

X(1) = span{b1 , . . . , bn } ⊕ span{c1 , . . . , cr }

and that

X(2) = span{T (c1 ), . . . , T (cr )} ⊕ span{d1 , . . . , ds }.


This implies that we have a decomposition

X = (B1 ⊕ . . . ⊕ Bn) ⊕ (C1 ⊕ . . . ⊕ Cr) ⊕ (D1 ⊕ . . . ⊕ Ds)

of X as a direct sum of subrepresentations.


Now assume that X is indecomposable, then there is only one summand in total, and hence X is isomorphic to one of Bi ≅ S1, Ci ≅ M or Di ≅ S2.
By Theorem 9.8, the representations S1 and S2 are not isomorphic. By
comparing dimensions, M is not isomorphic to S1 or S2 . So we have proved
that the quiver Q has precisely three indecomposable representations, up to
isomorphism.
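The argument in (2) is completely algorithmic: the multiplicities of S1, M and S2 in a representation X(1) → X(2) are read off from the rank and the dimensions. A sketch with sympy (the helper decompose is our own):

# Sketch: decomposing a representation X(1) --T--> X(2) of the quiver 1 -> 2
# into copies of S1, M and S2, following the rank-nullity argument above.
import sympy as sp

def decompose(T):
    """Return the multiplicities (n, r, s) of S1, M, S2 in the representation T."""
    T = sp.Matrix(T)
    n = len(T.nullspace())            # dim ker(T): number of summands B_i ≅ S1
    r = T.rank()                      # dim im(T): number of summands C_i ≅ M
    s = T.rows - r                    # codimension of im(T) in X(2): summands D_i ≅ S2
    return n, r, s

print(decompose([[1, 0], [0, 0], [0, 0]]))   # a map K^2 -> K^3 of rank 1: (1, 1, 2)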
Example 9.29. Let Q be the Kronecker quiver, with two vertices 1, 2 and two parallel arrows α, β : 1 −→ 2.

We fix an element λ ∈ K, and we define a representation Cλ of Q as follows:


Cλ (1) = K, Cλ (2) = K, Cλ (α) = idK and Cλ (β) = λ · idK .
We claim that the following holds for these representations of the Kronecker
quiver.
(a) Cλ is isomorphic to Cμ if and only if λ = μ.
(b) For any λ ∈ K, the representation Cλ is indecomposable.
We start by proving (a). Let ϕ : Cλ → Cμ be a homomorphism of representations,
then we have a commutative diagram of K-linear maps

Since Cλ (α) and Cμ (α) are identity maps, we have ϕ1 = ϕ2 . We also have the
commutative diagram
Since we already know that ϕ1 = ϕ2 we obtain that

λϕ1 = λϕ2 = ϕ2 ◦ Cλ (β) = Cμ (β) ◦ ϕ1 = μϕ1 .

If λ ≠ μ then ϕ1 = 0 and therefore we cannot have an isomorphism Cλ → Cμ. This


proves claim (a).
We now prove (b). Let λ = μ, then by the above we have computed an arbitrary
homomorphism of representations ϕ : Cλ → Cλ . Indeed, we have ϕ1 = ϕ2 and
this is a scalar multiple of the identity (since Cλ (1) = K = Cλ (2)). Hence every
homomorphism of Cλ is a scalar multiple of the identity homomorphism. This shows
that Cλ is indecomposable, by Lemma 9.12.
The previous example shows already that if the field K is infinite, the Kronecker
quiver Q has infinite representation type. However, Q has infinitely many inde-
composable representations for arbitrary fields, as we will now show in the next
example.
Example 9.30. For any n ≥ 1, we define a representation C of the Kronecker quiver as follows. We take C(1) = Kⁿ and C(2) = Kⁿ, and we define C(α) = idKⁿ, and C(β) is the linear map given by Jn(1), the Jordan block of size n with eigenvalue 1.
For simplicity, below we identify the maps with their matrices.
We will show that C is indecomposable. Since the above representations have
different dimensions for different n, they are not isomorphic, and hence this will
prove that the Kronecker quiver Q has infinite representation type for arbitrary
fields.
Let ϕ : C → C be a homomorphism of representations, with K-linear maps ϕ1 and ϕ2 on Kⁿ corresponding to the vertices of Q (see Definition 9.4).
Then we have a commutative diagram of K-linear maps

Since C(α) is the identity map, it follows that ϕ1 = ϕ2 . We also have a commutative
diagram of K-linear maps
and this gives

Jn (1) ◦ ϕ1 = ϕ2 ◦ Jn (1) = ϕ1 ◦ Jn (1).

That is, ϕ1 is a linear transformation of V = Kⁿ which commutes with Jn(1).
We assume that ϕ² = ϕ, that is, ϕ1² = ϕ1, and we want to show that ϕ is either zero or the identity map; then C is indecomposable by Lemma 9.11, and we are done.
We want to apply Exercise 8.1. Take f = (X − 1)ⁿ ∈ K[X], then V_{C(β)} becomes a K[X]/(f)-module (see Theorem 2.10); in fact, one checks that f(Jn(1)) = Jn(0)ⁿ is the zero matrix. Note that this is a cyclic K[X]/(f)-module (generated by the first basis vector). Now, since ϕ1 commutes with Jn(1), the linear map ϕ1 is even a K[X]/(f)-module homomorphism of V_{C(β)}. We have ϕ1² = ϕ1, therefore by Exercise 8.1, ϕ1 = 0 or ϕ1 = idKⁿ. This shows that ϕ is either zero or is the identity, and C is indecomposable, as observed above.
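The key facts of this example can be double-checked by computer algebra: (Jn(1) − En)ⁿ is zero, and the matrices commuting with Jn(1) form an n-dimensional space of polynomials in Jn(1), so the endomorphism algebra is local. A sketch with sympy for n = 4 (our own verification, not from the book):

# Sketch: the matrices commuting with the Jordan block J_n(1) form an
# n-dimensional space spanned by the powers of N = J_n(1) - E_n; in particular
# the endomorphism algebra of C is local, as used in the argument above.
import sympy as sp

n = 4
N = sp.Matrix(n, n, lambda i, j: 1 if i == j + 1 else 0)   # nilpotent part, N^n = 0
J = sp.eye(n) + N                                          # J_n(1), lower triangular
assert (J - sp.eye(n))**n == sp.zeros(n, n)                # f(J_n(1)) = J_n(0)^n = 0

P = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'p_{i}_{j}'))
sol = sp.solve(list(P*J - J*P), list(P.free_symbols), dict=True)[0]
C = P.subs(sol)               # the general matrix commuting with J_n(1)
print(len(C.free_symbols))    # n free parameters: C = a·E + b·N + c·N^2 + d·N^3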
Suppose we know that a quiver Q has infinite representation type. Recall that in
Sect. 9.3 we have defined how to stretch quivers and representations. We can exploit
this and show that the stretch of Q also has infinite representation type.
Lemma 9.31. Let Q be a quiver without oriented cycles and let Q̃ be a quiver which is a stretch of Q (as in Definition 9.20). If Q has infinite representation type then Q̃ also has infinite representation type.
Proof. This follows immediately from Lemma 9.24. 

Example 9.32. We have seen in Example 9.30 that the Kronecker quiver has infinite
representation type over any field K. So by Lemma 9.31 every stretch of the
Kronecker quiver also has infinite representation type. In particular, the two stretched quivers appearing in Example 9.21 (4) have infinite representation type over any field K.
EXERCISES

9.5. Consider the representation defined in Example 9.2. Show that it is the direct
sum of three indecomposable representations.
9.6. Let Q = (Q0 , Q1 ) be a quiver and let M be a representation of Q. Suppose
that M = U ⊕ V is a direct sum of subrepresentations. For each vertex
i ∈ Q0 let ϕi : M(i) = U (i) ⊕ V (i) → U (i) be the linear map given by
projecting onto the first summand, and let ψi : U (i) → M(i) = U (i) ⊕ V (i)
be the inclusion of U (i) into M(i). Show that ϕ = (ϕi )i∈Q0 : M → U and
ψ = (ψi )i∈Q0 : U → M are homomorphisms of representations.
9.7. (This exercise gives an outline of an alternative proof of Theorem 9.8.) Let
Q = (Q0 , Q1 ) be a quiver without oriented cycles. For each vertex j ∈ Q0
let Sj be the simple representation of Q defined in Example 9.7.
(i) Show that for j ≠ k ∈ Q0 the only homomorphism Sj → Sk of
representations is the zero homomorphism. In particular, the different
Sj are pairwise non-isomorphic.
Let M be a simple representation of Q.
(ii) Show that there exists a vertex k ∈ Q0 such that M(k) ≠ 0 and M(α) = 0 for all arrows α ∈ Q1 starting at k.
(iii) Let k ∈ Q0 be as in (ii). Deduce that M has a subrepresentation U with U(k) = M(k) and U(i) = 0 for i ≠ k.
(iv) Show that M is isomorphic to the simple representation Sk .
9.8. Let Q be a quiver. Let M be a representation of Q such that for a fixed vertex j of Q we have M(i) = 0 for all i ≠ j. Show that M is isomorphic to a direct sum of dimK M(j) many copies of the simple representation Sj.
Conversely, check that if a representation M of Q is isomorphic to a direct sum of copies of Sj then M(i) = 0 for all i ≠ j.
9.9. Let Q be a quiver and j a sink of Q, that is, no arrow of Q starts at j . Let
α1 , . . . , αt be all the arrows ending at j . Let M be a representation of Q.
(a) Show that M is a direct sum of subrepresentations, M = X ⊕ Y,
where

(i) Y satisfies Y(k) = M(k) for k ≠ j, and Y(j) = ∑_{i=1}^{t} im(M(αi)) is the sum of the images of the maps M(αi) : M(i) → M(j),
(ii) X is isomorphic to the direct sum of copies of the simple
representation Sj , and the number of copies is equal to
dimK M(j ) − dimK Y (j ).

(b) If M has a direct summand isomorphic to Sj then ∑_{i=1}^{t} im(M(αi)) is a proper subspace of M(j).
9.10. Let Q be a quiver and j a source of Q , that is, no arrow of Q ends at j . Let
β1 , . . . , βt be the arrows starting at j . Let N be a representation of Q .
(a) Consider the subspace X(j) := ∩_{i=1}^{t} ker(N(βi)) of N(j). As a K-
vector space we can decompose N(j ) = X(j ) ⊕ Y (j ) for some subspace
Y (j ). Show that N is a direct sum of subrepresentations, N = X ⊕ Y,
where
(i) Y satisfies Y(k) = N(k) for k ≠ j, and Y(j) is as above,
(ii) X is isomorphic to the direct sum of dimK X(j ) many copies of the
simple representation Sj .

(b) If N has a direct summand isomorphic to Sj then ∩_{i=1}^{t} ker(N(βi)) is a non-zero subspace of N(j).
9.11. Let K be a field and let Q = (Q0 , Q1 ) be a quiver. For each vertex i ∈ Q0
consider the KQ-module Pi = KQei generated by the trivial path ei .
(i) Interpret the KQ-module Pi as a representation Pi of Q. In particular,
describe bases for the vector spaces Pi (j ) for j ∈ Q0 . (Hint: Do it first
for the last quiver in (4) of Example 9.21 and use this as an illustration
for the general case.)
(ii) Suppose that Q has no oriented cycles. Show that the representation Pi
of Q is indecomposable.
9.12. Let Q = (Q0 , Q1 ) be a quiver without oriented cycles and suppose that
M = ((M(i))i∈Q0 , (M(α))α∈Q1 ) is a representation of Q. Show that the
following holds.
(a) The representation M is semisimple (that is, a direct sum of simple
subrepresentations) if and only if M(α) = 0 for each arrow α ∈ Q1 .
(b) For each vertex i ∈ Q0 we set

socM(i) = ∩_{s(α)=i} ker(M(α))

(where s(α) denotes the starting vertex of the arrow α). Then

SocM = ((socM (i))i∈Q0 , (S(α) = 0)α∈Q1 )

is a semisimple subrepresentation of M, called the socle of M.


(c) Every semisimple subrepresentation U of M is a subrepresentation of
the socle SocM .
9.13. This exercise is an analogue of Exercise 2.15, in the language of representa-
tions of quivers.
Let M and N be representations of a quiver Q. The direct product
(or external direct sum) P = M × N is the representation of Q with
P (i) = M(i) × N(i) for each vertex i of Q (this is the direct product of
vector spaces, that is, the cartesian product with componentwise addition
and scalar multiplication). For every arrow α in Q from i to j we set
P(α) : P(i) → P(j), (m, n) → (M(α)(m), N(α)(n)), and we sometimes also denote this map by P(α) = M(α) × N(α).
(i) Verify that P is a representation of Q.
(ii) Check that the following defines a subrepresentation M̃ of P. For every vertex i we set M̃(i) = {(m, 0) | m ∈ M(i)} and for every arrow α from i to j we set M̃(α) : M̃(i) → M̃(j), (m, 0) → (M(α)(m), 0). Similarly we get a subrepresentation Ñ from N.
(iii) Show that P = M̃ ⊕ Ñ is a direct sum of the subrepresentations.
(iv) Consider M̃ and Ñ as representations of Q. Show that M̃ is isomorphic to M and that Ñ is isomorphic to N.
Chapter 10
Diagrams and Roots

Our aim is to determine when a quiver Q with no oriented cycles is of finite


representation type. This is answered completely by Gabriel’s celebrated theorem,
which he proved in the 1970s, and the answer is given in terms of the underlying
graph of Q. This graph is obtained by forgetting the orientation of the arrows in the
quiver. In this chapter, we describe the relevant graphs and their properties we need;
these graphs are known as Dynkin diagrams and Euclidean diagrams, and they occur
in many parts of mathematics. We discuss further tools which we will use to prove
Gabriel’s theorem, such as the Coxeter transformations. The content of this chapter
mainly involves basic combinatorics and linear algebra.
We fix a graph Γ; later this will be the underlying graph of a quiver. We sometimes write Γ = (Γ0, Γ1), where Γ0 is the set of vertices and Γ1 is the set of (unoriented) edges of Γ. All graphs are assumed to be finite, that is, Γ0 and Γ1 are finite sets.

10.1 Dynkin Diagrams and Euclidean Diagrams

Gabriel's theorem (which will be proved in the next chapter) states that a connected quiver has finite representation type if and only if the underlying graph Γ is one of the Dynkin diagrams of types An for n ≥ 1, Dn for n ≥ 4, E6, E7, E8, which we define in Fig. 10.1.
We have seen some small special cases of Gabriel’s theorem earlier in the
book. Namely, a quiver of type A1 (that is, the one-vertex quiver) has only
one indecomposable representation by Example 9.28; in particular, it is of finite
representation type. Moreover, also in Example 9.28 we have shown that the quiver
1 −→ 2 has finite representation type; note that this quiver has as underlying graph
a Dynkin diagram of type A2 .

Fig. 10.1 Dynkin diagrams of types A, D, E. The index gives the number of vertices in each diagram.

To deal with the case when Γ is not a Dynkin diagram, we will only need a small list of graphs. These are the Euclidean diagrams, sometimes also called extended Dynkin diagrams. They are shown in Fig. 10.2, and are denoted by Ãn for n ≥ 1, D̃n for n ≥ 4, and Ẽ6, Ẽ7, Ẽ8. For example, the Kronecker quiver is a quiver with underlying graph a Euclidean diagram of type Ã1; and we have seen already in Example 9.30 that the Kronecker quiver has infinite representation type.
We refer to graphs in Fig. 10.1 as graphs of type A, D, or E. We say that a quiver
has Dynkin type if its underlying graph is one of the graphs in Fig. 10.1. Similarly,
we say that a quiver has Euclidean type if its underlying graph belongs to the list in
Fig. 10.2.
In analogy to the definition of a subquiver in Definition 9.13, a subgraph Γ′ = (Γ′0, Γ′1) of a graph Γ is a graph which consists of a subset Γ′0 ⊆ Γ0 of the vertices of Γ and a subset Γ′1 ⊆ Γ1 of the edges of Γ.
The following result shows that we might not need any other graphs than Dynkin
and Euclidean diagrams.
Lemma 10.1. Assume Γ is a connected graph. If Γ is not a Dynkin diagram then Γ has a subgraph which is a Euclidean diagram.
Proof. Assume Γ does not have a Euclidean diagram as a subgraph; we will show that then Γ is a Dynkin diagram.
The Euclidean diagrams of type Ãn are just the cycles; so Γ does not contain a cycle; in particular, it does not have a multiple edge. Since Γ is connected by assumption, it must then be a tree.
Fig. 10.2 The Euclidean diagrams of types Ã, D̃, Ẽ. The index plus 1 gives the number of vertices in each diagram
The graph Γ does not have a subgraph of type D̃4 and hence every vertex in Γ is adjacent to at most three other vertices. Moreover, since there is no subgraph of type D̃n for n ≥ 5, at most one vertex in Γ is adjacent to three other vertices. In total, this means that the graph Γ is of the form of a star: three 'arms' attached to a central vertex, where we denote the numbers of vertices in the three arms by r, s, t ≥ 0 (the central vertex is not counted here, so the total number of vertices is r + s + t + 1). We may assume that r ≤ s ≤ t.
By assumption, there is no subgraph in Γ of type Ẽ6 and hence r ≤ 1. If r = 0 then the graph Γ is a Dynkin diagram of the form A_{s+t+1}, and we are done. So assume now that r = 1. There is also no subgraph of type Ẽ7 and therefore s ≤ 2. If s = 1 then the graph is a Dynkin diagram of type D_{t+3}, and again we are done.
So assume s = 2. Since Γ also does not have a subgraph of type Ẽ8 we get t ≤ 4. If t = 2 the graph Γ is a Dynkin diagram of type E6, next if t = 3 we have the Dynkin diagram E7 and for t = 4 we have E8. This shows that the graph Γ is indeed a Dynkin diagram. □

Exercise 10.1. Let Γ be a graph of Euclidean type D̃n (so Γ has n + 1 vertices). Show that any subgraph with n vertices is a disjoint union of Dynkin diagrams.
10.2 The Bilinear Form and the Quadratic Form
Let Γ be a graph and assume that it does not contain loops, that is, edges with the same starting and end point.
In this section we will define a bilinear form and analyze the corresponding quadratic form for such a graph Γ. These two forms are defined on Zn, by using the standard basis vectors εi which form a Z-basis of Zn. We refer to εi as a 'unit vector'; it has a 1 in position i and is zero otherwise.
Definition 10.2. Let Γ = (Γ0, Γ1) be a graph without loops and label the vertices by Γ0 = {1, 2, . . . , n}.
(a) For any vertices i, j ∈ Γ0 let dij be the number of edges in Γ between i and j. Note that dij = dji (since edges are unoriented).
(b) We define a symmetric bilinear form (−, −)Γ : Zn × Zn → Z on the unit vectors by

(εi, εj)Γ = 2 if i = j, and (εi, εj)Γ = −dij if i ≠ j,

and extend it bilinearly to arbitrary elements in Zn × Zn. The n × n-matrix GΓ with (i, j)-entry equal to (εi, εj)Γ is called the Gram matrix of the bilinear form (−, −)Γ.
(c) For each vertex j of Γ we define the reflection map sj by

sj : Zn → Zn, sj(a) = a − (a, εj)Γ εj.

Remark 10.3. We can extend the definition of sj to a map on Rn, and then we can write down a matrix with respect to the standard basis of Rn. But for our application it is important that sj preserves Zn, and we work mostly with Zn.
We record some properties of the above reflection maps, which also justify why they are called reflections. Let j be a vertex of the graph Γ.
(i) The map sj is Z-linear.
(ii) When applied to some vector a ∈ Zn, the map sj only changes the j-th coordinate.
(iii) sj(εj) = −εj.
(iv) sj²(a) = a for each a ∈ Zn.
(v) If there is no edge between different vertices i and j then sj(εi) = εi.
Exercise 10.2.
(i) Let Γ be the graph of Dynkin type A3. Write down the Gram matrix for the bilinear form (−, −)Γ. Compute the reflection s2: show that s2(a1, a2, a3) = (a1, a1 − a2 + a3, a3).
(ii) Let Γ be the graph of Euclidean type Ã1, that is, the graph with two vertices and two edges. The Gram matrix is

 2 −2
−2  2

Compute a formula for the reflections s1 and s2. Check also that their matrices with respect to the standard basis of R² are

s1 =
 −1  2
  0  1

s2 =
  1  0
  2 −1
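For readers who want to experiment, the following short Python sketch (our addition, not part of the text; the function names are ours) builds the Gram matrix of Definition 10.2 from an edge list and realises the reflections sj as matrices acting on row vectors. It reproduces the formula for s2 in part (i) and the Gram matrix in part (ii).

import numpy as np

def gram_matrix(n, edges):
    """Vertices 0, ..., n-1; edges is a list of pairs (i, j) with i != j.
    Multiple edges are counted with multiplicity d_ij."""
    G = 2 * np.eye(n, dtype=int)
    for i, j in edges:
        G[i, j] -= 1
        G[j, i] -= 1
    return G

def reflection_matrix(G, j):
    """Matrix S with s_j(a) = a @ S for row vectors a (Definition 10.2(c))."""
    S = np.eye(G.shape[0], dtype=int)
    S[:, j] -= G[:, j]        # only the j-th coordinate of a @ S changes
    return S

# Dynkin type A3 (vertices relabelled 0, 1, 2): the reflection s_2 of part (i)
G = gram_matrix(3, [(0, 1), (1, 2)])
a = np.array([5, -7, 11])
print(a @ reflection_matrix(G, 1))        # [ 5 23 11] = (a1, a1 - a2 + a3, a3)

# Euclidean type A~1 (a double edge): the Gram matrix of part (ii)
print(gram_matrix(2, [(0, 1), (0, 1)]))   # [[ 2 -2]  [-2  2]]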
Example 10.4. We compute explicitly the Gram matrices of the bilinear forms corresponding to Dynkin diagrams of type A, D and E, defined in Fig. 10.1. Note that the bilinear forms depend on the numbering of the vertices of the graph. It is convenient to fix some 'standard labelling'. For later use, we also fix an orientation of the arrows; but note that the bilinear form (−, −)Γ is independent of the orientation.

Type An: [figure: standard labelling; the underlying graph is the path 1 - 2 - · · · - n]

Type Dn: [figure: standard labelling; the underlying graph has edges 1 - 3, 2 - 3 and the path 3 - 4 - · · · - n]

Type E8: [figure: standard labelling; the underlying graph has edges 1 - 2, 2 - 4, 3 - 4, 4 - 5, 5 - 6, 6 - 7, 7 - 8]

Then, for E6 we take the subquiver with vertices 1, 2, . . . , 6 and similarly for E7.
With this fixed standard labelling of the Dynkin diagrams, the Gram matrices of the bilinear forms are as follows.
Type An:

  2 −1  0  ⋯  ⋯  0
 −1  2 −1  0     ⋮
  0 −1  2 −1  ⋱  ⋮
  ⋮  ⋱  ⋱  ⋱  ⋱  0
  0  ⋯  0 −1  2 −1
  0  ⋯  ⋯  0 −1  2
Type Dn:

  2  0 −1  0  ⋯  ⋯  0
  0  2 −1  0  ⋯  ⋯  0
 −1 −1  2 −1  0     ⋮
  0  0 −1  2 −1  ⋱  ⋮
  ⋮     ⋱  ⋱  ⋱  ⋱  0
  0  ⋯     0 −1  2 −1
  0  ⋯  ⋯     0 −1  2
Type En: We write down the Gram matrix for type E8, which is

  2 −1  0  0  0  0  0  0
 −1  2  0 −1  0  0  0  0
  0  0  2 −1  0  0  0  0
  0 −1 −1  2 −1  0  0  0
  0  0  0 −1  2 −1  0  0
  0  0  0  0 −1  2 −1  0
  0  0  0  0  0 −1  2 −1
  0  0  0  0  0  0 −1  2
The matrices for E6 and E7 are then obtained by removing the last two rows and
columns for E6 and the last row and column for E7 .
Remark 10.5. In the above example we have chosen a certain labelling for each of the Dynkin diagrams. In general, let Γ be a graph without loops, and let Γ̃ be the graph obtained from Γ by choosing a different labelling of the vertices. Choosing a different labelling means permuting the unit vectors ε1, . . . , εn, and hence the rows and columns of the Gram matrix GΓ are permuted accordingly. In other words, there is a permutation matrix P (that is, a matrix with precisely one entry 1 in each row and column, and zero entries otherwise) describing the basis transformation coming from the permutation of the unit vectors, and such that P GΓ P⁻¹ is the Gram matrix of the graph Γ̃. Note that any permutation matrix P is orthogonal, hence P⁻¹ = Pᵗ, the transposed matrix, and P GΓ P⁻¹ = P GΓ Pᵗ.
Given any bilinear form, there is an associated quadratic form. We want to write down explicitly the quadratic form associated to the above bilinear form (−, −)Γ for a graph Γ.
Definition 10.6. Let Γ = (Γ0, Γ1) be a graph without loops, and let Γ0 = {1, . . . , n}.
(a) If GΓ is the Gram matrix of the bilinear form (−, −)Γ as defined in Definition 10.2, the associated quadratic form is given as follows:

qΓ : Zn → Z,  qΓ(x) = (1/2)(x, x)Γ = (1/2) x GΓ xᵗ = ∑_{i=1}^{n} xi² − ∑_{i<j} dij xi xj.

Here x = (x1, x2, . . . , xn) ∈ Zn is taken as a row vector, that is, a 1 × n-matrix, and then x GΓ xᵗ is a matrix product.
(b) The elements of the set ΔΓ := {x ∈ Zn | qΓ(x) = 1} are called the roots of qΓ.
Exercise 10.3.
(a) Verify that the different expressions for qΓ(x) in Definition 10.6 are equal.
(b) Show that the unit vectors are roots, that is, εi ∈ ΔΓ for all i = 1, . . . , n.
Remark 10.7. Let Γ̃ be the graph obtained from Γ by choosing a different labelling of the vertices. For the Gram matrices we have GΓ̃ = P GΓ P⁻¹ with a permutation matrix P, see Remark 10.5. The formulas for the corresponding quadratic forms qΓ and qΓ̃ are different; however the roots of qΓ̃ are obtained from the roots of qΓ by permuting coordinates. Namely, there is a bijection ΔΓ → ΔΓ̃ given by x ↦ xP⁻¹ (recall that we consider x as a row vector). In fact, for x ∈ Zn we have

qΓ̃(xP⁻¹) = (1/2)(xP⁻¹) GΓ̃ (xP⁻¹)ᵗ = (1/2)(xP⁻¹) P GΓ P⁻¹ (xP⁻¹)ᵗ
          = (1/2) x (P⁻¹P) GΓ (P⁻¹(P⁻¹)ᵗ) xᵗ = (1/2) x GΓ xᵗ = qΓ(x),
where we have used that (P⁻¹)ᵗ = P since P is an orthogonal matrix. In particular, x is a root of qΓ if and only if xP⁻¹ is a root of qΓ̃.
Example 10.8 (Roots in Dynkin Type A). Let Γ be the Dynkin diagram of type An, with standard labelling as in Example 10.4. We will compute the set ΔΓ and show that |ΔΓ| = n(n + 1).
The quadratic form is (see Definition 10.6)

qΓ(x) = ∑_{i=1}^{n} xi² − ∑_{i=1}^{n−1} xi xi+1.

We complete squares; this gives

2 qΓ(x) = x1² + ∑_{i=1}^{n−1} (xi − xi+1)² + xn².

We want to determine the set of roots ΔΓ, that is, we have to find all x ∈ Zn such that 2qΓ(x) = 2. If so, then |xi − xi+1| ≤ 1 for 1 ≤ i ≤ n − 1, and |x1| and |xn| also are ≤ 1 (recall that the xi are integers). Precisely two of the numbers |xi − xi+1|, |x1|, |xn| are equal to 1 and all others are zero.
Let r ∈ {1, . . . , n} be minimal such that xr ≠ 0 (this exists since x cannot be the zero vector, otherwise qΓ(x) = 0). Then xr = ±1, and |xr−1 − xr| = |xr| = 1 if r > 1, respectively |x1| = 1 if r = 1. Among the remaining numbers |xi − xi+1| with r ≤ i ≤ n − 1, and |xn|, precisely one further 1 appears. So the only possibilities are x = εr + εr+1 + . . . + εs or x = −εr − εr+1 − . . . − εs for some s ∈ {r, . . . , n}.
Thus we have shown that the roots of a Dynkin diagram of type An are given by

ΔΓ = {±(εr + εr+1 + . . . + εs) | 1 ≤ r ≤ s ≤ n}.

In particular, the total number of roots in Dynkin type An is

|ΔΓ| = 2(n + (n − 1) + . . . + 2 + 1) = n(n + 1).
Exercise 10.4. Write down the roots for Dynkin type A3 .
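Since 2qΓ(x) = 2 forces every |xi| ≤ 1 in type A, a brute-force search over the box {−1, 0, 1}^n finds all roots. The following small Python check (ours, not part of the text) confirms |ΔΓ| = n(n + 1) for n = 3 and lists the positive roots.

import itertools
import numpy as np

n = 3
G = 2 * np.eye(n, dtype=int)
for i in range(n - 1):
    G[i, i + 1] = G[i + 1, i] = -1            # Gram matrix of type A_n

def q(x):
    x = np.array(x)
    return (x @ G @ x) // 2                   # the quadratic form q_Gamma

roots = [x for x in itertools.product((-1, 0, 1), repeat=n) if q(x) == 1]
print(len(roots))                             # 12 = n(n + 1) for n = 3
print([r for r in roots if all(c >= 0 for c in r)])   # the positive roots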
The set of roots has a lot of symmetry; among the many nice properties we show here that the set of roots is invariant under the reflections defined in Definition 10.2.
Lemma 10.9. Let Γ = (Γ0, Γ1) be a graph without loops and let j be a vertex of Γ. If x ∈ Zn is a root of qΓ then sj(x) is also a root of qΓ.
Proof. We will show that sj preserves the bilinear form (−, −)Γ: for x ∈ Zn we have

(sj(x), sj(x))Γ = (x − (x, εj)Γ εj, x − (x, εj)Γ εj)Γ
               = (x, x)Γ − 2(x, εj)Γ (εj, x)Γ + (x, εj)Γ² (εj, εj)Γ
               = (x, x)Γ.

For the last equality we have used that the bilinear form is symmetric and that (εj, εj)Γ = 2 by Definition 10.2. For the corresponding quadratic form we get

qΓ(sj(x)) = (1/2)(sj(x), sj(x))Γ = (1/2)(x, x)Γ = qΓ(x).

Hence if x is a root, that is qΓ(x) = 1, then qΓ(sj(x)) = 1 and sj(x) is a root. □
We want to show that there are only finitely many roots if Γ is a Dynkin diagram. To do so, we will prove that qΓ is positive definite, and we want to use tools from linear algebra. Therefore, we extend the bilinear form (−, −)Γ and the quadratic form qΓ to Rn. That is, for the standard basis we take the same formulae as in Definitions 10.2 and 10.6, and we apply them to arbitrary x ∈ Rn.
Recall from linear algebra that a quadratic form q : Rn → R is called positive definite if q(x) > 0 for any non-zero x ∈ Rn. Suppose the quadratic form comes from a symmetric bilinear form as in our case, where (see Definition 10.6)

qΓ(x) = (1/2)(x, x)Γ = (1/2) x GΓ xᵗ.

Then the quadratic form is positive definite if and only if, for some labelling, the Gram matrix of the symmetric bilinear form is positive definite. Recall from linear algebra that a symmetric real matrix is positive definite if and only if all its leading principal minors are positive. The leading principal k-minor of an n × n-matrix is the determinant of the submatrix obtained by deleting rows and columns k + 1, k + 2, . . . , n. This is what we will use in the proof of the following result.
Proposition 10.10. Assume Γ is a Dynkin diagram. Then the quadratic form qΓ is positive definite.
Proof. We have seen in Remarks 10.5 and 10.7 how the quadratic forms change when the labelling of the vertices is changed. With a different labelling, one only permutes the coordinates of an element in Rn, but this does not affect the condition of whether qΓ(x) > 0 for all non-zero x ∈ Rn, that is, the condition of whether qΓ is positive definite. So we can take the standard labelling as in Example 10.4, and it suffices to show that the Gram matrices given in Example 10.4 are positive definite.
(1) We start with the Gram matrix in type An. Then the leading principal k-minor is the determinant of the Gram matrix of type Ak, and there is a recursion formula: Write d(Ak) for the determinant of the matrix of type Ak. Then we have d(A1) = 2
and d(A2) = 3. Expanding the determinant by the last row of the matrix we find

d(Ak) = 2 d(Ak−1) − d(Ak−2)   for all k ≥ 3.

It follows by induction on n that d(An) = n + 1 for all n ∈ N, and hence all leading principal minors are positive.
(2) Next, consider the Gram matrix of type Dn for n ≥ 4. Again, the leading principal k-minor for k ≥ 4 is the determinant of the Gram matrix of type Dk. When k = 2 we write D2 for the submatrix obtained by removing rows and columns with labels ≥ 3, and similarly we define D3. We write d(Dk) for the determinant of the matrix Dk for k ≥ 2. We see directly that d(D2) = 4 and d(D3) = 4. For k ≥ 4 the same expansion of the determinant as for type A gives the recursion

d(Dk) = 2 d(Dk−1) − d(Dk−2).

By induction, we find that for k ≥ 4 we have d(Dk) = 4. In particular, all leading principal minors of the Gram matrix of type Dn are positive and hence the quadratic form is positive definite.
(3) Now consider the Gram matrix of type En for any n = 6, 7, 8. In Example 10.4 we have given the Gram matrix for E8; the matrices for E6 or E7 are obtained by removing the last two rows and columns, or the last row and column. Direct calculations show that the values of the first five leading principal minors of the Gram matrix of type E8 are 2, 3, 6, 5 and 4. Let n ≥ 6. We expand using the first row and find, after one more step, that

d(En) = 2 d(Dn−1) − d(An−2).

Using the above calculations in (1) and (2) this gives

d(En) = 2 · 4 − (n − 1) = 9 − n   for all n ≥ 6.

Hence for n = 6, 7, 8 all leading principal minors of the Gram matrix for types E6, E7 and E8 are positive and hence the associated quadratic form qΓ is positive definite. □
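The determinant values used in this proof are easy to cross-check numerically. The following Python sketch (our addition; the edge lists encode the standard labelling of Example 10.4, shifted to start at 0) verifies d(An) = n + 1, d(Dn) = 4 and d(En) = 9 − n.

import numpy as np

def gram(edges, n):
    G = 2 * np.eye(n)
    for i, j in edges:
        G[i, j] = G[j, i] = -1
    return G

def A(n):  # path 0 - 1 - ... - (n-1)
    return gram([(i, i + 1) for i in range(n - 1)], n)

def D(n):  # vertices 0 and 1 joined to 2, then a path 2 - 3 - ... - (n-1)
    return gram([(0, 2), (1, 2)] + [(i, i + 1) for i in range(2, n - 1)], n)

def E(n):  # standard labelling as in Example 10.4: 0-1, 1-3, 2-3, 3-4, ...
    return gram([(0, 1), (1, 3), (2, 3)] + [(i, i + 1) for i in range(3, n - 1)], n)

print([round(np.linalg.det(A(n))) for n in range(1, 9)])   # 2, 3, ..., 9
print([round(np.linalg.det(D(n))) for n in range(4, 9)])   # 4, 4, 4, 4, 4
print([round(np.linalg.det(E(n))) for n in (6, 7, 8, 9)])  # 3, 2, 1, 0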
Exercise 10.5. Let Γ be the graph of type Ã1, as in Exercise 10.2. Verify that the quadratic form is

qΓ(x) = x1² + x2² − 2x1x2 = (x1 − x2)²,

hence it is not positive definite. However, qΓ(x) ≥ 0 for all x ∈ R² (that is, qΓ is positive semidefinite).
Remark 10.11.
(1) Alternatively one could prove Proposition 10.10 by finding a suitable formula for qΓ(x) as a sum of squares. We have used this strategy for Dynkin type An in Example 10.8. The formula there,

2 qΓ(x) = x1² + ∑_{i=1}^{n−1} (xi − xi+1)² + xn²,

implies easily that qΓ(x) > 0 for all non-zero x ∈ Rn, that is, the quadratic form qΓ is positive definite for Dynkin type An. Similarly, one can find suitable formulae for the other Dynkin types. See Exercise 10.6 for type Dn.
(2) Usually, the quadratic form of a graph is not positive definite. If Γ is as in Exercise 10.5 then obviously qΓ(x) = 0 for x = (a, a) and arbitrary a. We can see another example if we enlarge the E8-diagram by more vertices and obtain En-diagrams for n > 8; then the computation in the above proof still gives d(En) = 9 − n, but this means that the quadratic form is not positive definite for n > 8.
(3) The previous remarks and Proposition 10.10 are a special case of a very nice result which characterises Dynkin and Euclidean diagrams by the associated quadratic forms. Namely, let Γ be a connected graph (without loops). Then the quadratic form qΓ is positive definite if and only if Γ is a Dynkin diagram. Moreover, the quadratic form is positive semidefinite, but not positive definite, if and only if Γ is a Euclidean diagram. This is not very difficult but we do not need it for the proof of Gabriel's theorem.
Exercise 10.6. Let Γ be the Dynkin diagram of type Dn with standard labelling as in Example 10.4. Show that for the quadratic form qΓ we have

4 qΓ(x) = (2x1 − x3)² + (2x2 − x3)² + 2 ∑_{i=3}^{n−1} (xi − xi+1)² + 2xn².

Deduce that the quadratic form qΓ is positive definite.
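The identity in Exercise 10.6 can also be verified symbolically. Here is a small sympy check (ours, not part of the text), for n = 6.

import sympy as sp

n = 6
x = sp.symbols(f'x1:{n + 1}')                    # x1, ..., x6
q = (sum(xi**2 for xi in x)
     - x[0]*x[2] - x[1]*x[2]                     # edges 1-3 and 2-3
     - sum(x[i]*x[i + 1] for i in range(2, n - 1)))   # path 3-4-...-n
rhs = ((2*x[0] - x[2])**2 + (2*x[1] - x[2])**2
       + 2*sum((x[i] - x[i + 1])**2 for i in range(2, n - 1))
       + 2*x[n - 1]**2)
print(sp.expand(4*q - rhs) == 0)                 # True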
We want to show that for a Dynkin diagram, the quadratic form has only finitely many roots. We have seen this already for Dynkin type An (with standard labelling) in Example 10.8, but now we give a unified proof for all Dynkin types. The crucial input is that the quadratic forms are positive definite, as shown in Proposition 10.10.
Proposition 10.12. Let Γ be a Dynkin diagram. Then the set ΔΓ of roots is finite.
Proof. We have seen in Remark 10.7 that changing the labelling of the vertices only permutes the entries of the roots. So the statement on finiteness of ΔΓ does not depend on the labelling of the diagram, and we can use the standard labelling, as in Example 10.4.
We view the Gram matrix GΓ as a real matrix. It is symmetric, therefore, by linear algebra, all its eigenvalues are real and it is diagonalizable by an orthogonal matrix. Since qΓ, and hence GΓ, is positive definite, the eigenvalues must be positive. It follows that GΓ = P D Pᵗ, where P is orthogonal and D is diagonal with positive real diagonal entries. Using the definition of qΓ we can thus write

qΓ(x) = (1/2) x GΓ xᵗ = (1/2)(xP) D (xP)ᵗ   (10.1)

and we want to show that there are at most finitely many roots of qΓ, that is, solutions with x ∈ Zn such that qΓ(x) = 1 (see Definition 10.6).
Suppose qΓ(x) = 1 and write xP = (ξ1, . . . , ξn). Then Equation (10.1) becomes

2 = 2 qΓ(x) = ∑_{i=1}^{n} ξi² λi,

where the λi are the positive real diagonal entries of D.
Then the square of the length of (ξ1, . . . , ξn) is bounded; for example, we must have ξi² ≤ 2λi⁻¹ for each i = 1, . . . , n. Now take R = max{2λi⁻¹ | 1 ≤ i ≤ n}; then we have

∑_{i=1}^{n} ξi² ≤ nR.

Since the matrix P is orthogonal, it preserves lengths, so we know

∑_{i=1}^{n} xi² = |x|² = |xP|² = ∑_{i=1}^{n} ξi² ≤ nR.

Hence there are at most finitely many solutions for (x1, . . . , xn) ∈ Zn with qΓ(x) = 1, that is, there are only finitely many roots for qΓ. □
In Example 10.8 we have determined the (finite) set of roots for a Dynkin diagram of type An. Exercise 10.14 asks to find the roots for a Dynkin diagram of type Dn. For most graphs, there are infinitely many roots.
Exercise 10.7. Consider the graph Γ of type Ã1 as in Exercises 10.2 and 10.5, which is the underlying graph of the Kronecker quiver. Show that

ΔΓ = {(a, a ± 1) | a ∈ Z}.
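A tiny exhaustive check (ours) of this description inside a small box:

import itertools

def q(x1, x2):
    return x1**2 + x2**2 - 2*x1*x2            # = (x1 - x2)^2

box = range(-3, 4)
roots = sorted(x for x in itertools.product(box, repeat=2) if q(*x) == 1)
print(all(abs(x1 - x2) == 1 for x1, x2 in roots), len(roots))   # True 12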
For the Dynkin diagrams we refine the set ΔΓ of roots, namely we divide the roots into 'positive' and 'negative' roots.
Lemma 10.13. Let Γ be a Dynkin diagram. Suppose x ∈ Zn is a root of qΓ. Then x ≠ 0, and either xt ≥ 0 for all t, or xt ≤ 0 for all t. In the first case we say that x is a positive root, and in the second case we call x a negative root.
Proof. Assume x = (x1, . . . , xn) ∈ Zn and x ∈ ΔΓ is a root, that is, qΓ(x) = 1. Note that we have x ≠ 0 since qΓ(x) = 1. We can sort positive and negative entries in x and write x = x⁺ + x⁻, where

x⁺ = ∑_{s, xs>0} xs εs  and  x⁻ = ∑_{t, xt<0} xt εt

(here the εi are the unit vectors). Moreover, by definition of qΓ (see Definition 10.6) we have

qΓ(x) = (1/2)(x, x)Γ = (1/2)(x⁺ + x⁻, x⁺ + x⁻)Γ = (x⁺, x⁻)Γ + qΓ(x⁺) + qΓ(x⁻).

Using the definition of x⁺ and x⁻ we expand (x⁺, x⁻)Γ and obtain

(x⁺, x⁻)Γ = ∑_{s, xs>0} ∑_{t, xt<0} xs xt (εs, εt)Γ = ∑_{s, xs>0} ∑_{t, xt<0} xs xt (−dst) ≥ 0

(for the last equality we used the definition of (−, −)Γ, see Definition 10.2).
Since Γ is one of the Dynkin diagrams, the quadratic form qΓ is positive definite by Proposition 10.10. In particular, qΓ(x⁺) ≥ 0 and qΓ(x⁻) ≥ 0. But since x = x⁺ + x⁻ is non-zero, at least one of x⁺ and x⁻ is non-zero and then qΓ(x⁺) + qΓ(x⁻) > 0, again by positive definiteness.
In summary, we get

1 = qΓ(x) = (x⁺, x⁻)Γ + qΓ(x⁺) + qΓ(x⁻) ≥ qΓ(x⁺) + qΓ(x⁻) > 0.

Since the quadratic form has integral values, precisely one of qΓ(x⁺) and qΓ(x⁻) is 1 and the other is 0. Since qΓ is positive definite, x⁺ = 0 or x⁻ = 0, that is, x = x⁻ or x = x⁺, which proves the claim. □
10.3 The Coxeter Transformation
In this section we will introduce a particular map, the Coxeter transformation, associated to a Dynkin diagram with standard labelling as in Example 10.4. This map will later be used to show that for Dynkin diagrams positive roots parametrize indecomposable representations.
Let Γ be one of the Dynkin diagrams, with standard labelling. We have seen in Lemma 10.9 that each reflection sj, where j is a vertex of Γ, preserves the set ΔΓ of roots. Then the set ΔΓ of roots is also preserved by arbitrary products of reflections, that is, by any element in the group WΓ, the subgroup of the automorphism group Aut(Zn) generated by the reflections sj. The Coxeter transformation is an element of WΓ and it has special properties.
Definition 10.14. Assume Γ is a Dynkin diagram with standard labelling as in Example 10.4. Let sj : Zn → Zn, sj(x) = x − (x, εj)Γ εj be the reflections as in Definition 10.2. The Coxeter transformation CΓ is the map

CΓ = sn ◦ sn−1 ◦ . . . ◦ s2 ◦ s1 : Zn → Zn.

The Coxeter matrix is the matrix of CΓ with respect to the standard basis of Rn.
Example 10.15 (Coxeter Transformation in Dynkin Type A). Let Γ be the Dynkin diagram of type An with standard labelling. We describe the Coxeter transformation and its action on the roots of qΓ. To check some of the details, see Exercise 10.8 below. Let sj be the reflection, as defined in Definition 10.2. Explicitly, we have for x = (x1, x2, . . . , xn) ∈ Rn that

sj(x) = (−x1 + x2, x2, . . . , xn)  if j = 1,
sj(x) = (x1, . . . , xj−1, −xj + xj−1 + xj+1, xj+1, . . . , xn)  if 2 ≤ j ≤ n − 1,
sj(x) = (x1, . . . , xn−1, xn−1 − xn)  if j = n.

With this, we compute the Coxeter transformation

CΓ(x) = sn ◦ sn−1 ◦ . . . ◦ s2 ◦ s1(x) = (−x1 + x2, −x1 + x3, . . . , −x1 + xn, −x1).

Consider the action of CΓ on the set of roots. Recall from Example 10.8 that for the Dynkin diagram of type An the set of roots is given by

ΔΓ = {±αr,s | 1 ≤ r ≤ s ≤ n},

where αr,s = εr + εr+1 + . . . + εs. Consider the root CΓ(αr,s). One checks the following formula:

CΓ(αr,s) = αr−1,s−1  if r > 1,  and  CΓ(α1,s) = −αs,n.

Since also CΓ(−x) = −CΓ(x), we see that CΓ permutes the elements in ΔΓ (in fact, this already follows from the fact that this holds for each reflection sj). We also see that CΓ can take positive roots to negative roots.
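The following Python sketch (our addition, using the row-vector convention from the earlier snippet) computes the Coxeter matrix of type An as the product of the reflection matrices, and confirms that CΓ permutes the roots and that CΓ^{n+1} is the identity (compare Exercises 10.10 and 10.11), here for n = 4.

import itertools
import numpy as np

n = 4
G = 2 * np.eye(n, dtype=int)
for i in range(n - 1):
    G[i, i + 1] = G[i + 1, i] = -1            # Gram matrix of type A_n

def refl(j):
    S = np.eye(n, dtype=int)
    S[:, j] -= G[:, j]
    return S

# row-vector convention: C(x) = x @ C, where C = S_1 S_2 ... S_n
C = np.eye(n, dtype=int)
for j in range(n):
    C = C @ refl(j)

roots = {x for x in itertools.product((-1, 0, 1), repeat=n)
         if np.array(x) @ G @ np.array(x) == 2}        # 2 q(x) = 2
image = {tuple(int(v) for v in np.array(x) @ C) for x in roots}
assert image == roots                                  # C permutes the roots
assert np.array_equal(np.linalg.matrix_power(C, n + 1), np.eye(n, dtype=int))
print(np.array([1, 0, 0, 0]) @ C)                      # [-1 -1 -1 -1] = C(eps_1)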
Exercise 10.8. Check the details when n = 4 in Example 10.15.
Exercise 10.9. Let Γ be the graph of type Ã1 as in Exercise 10.2. Define C := s2 ◦ s1. Show that with respect to the standard basis of R² this has matrix

 −1  2
 −2  3

Show by induction on k ≥ 1 that C^k has matrix

 −(2k − 1)  2k
 −2k        2k + 1
For a vector z = (z1, . . . , zn) ∈ Zn we say that z ≥ 0 if zi ≥ 0 for all 1 ≤ i ≤ n, and otherwise we write z ≱ 0.
Lemma 10.16. Assume Γ is a Dynkin diagram with standard labelling, and let CΓ be the Coxeter transformation. Then the following holds.
(a) If y ∈ Zn and CΓ(y) = y then y = 0.
(b) There is a positive number h ∈ N such that CΓ^h is the identity map on Zn.
(c) For every 0 ≠ x ∈ Zn there is some r ≥ 0 such that CΓ^r(x) ≱ 0.
Part (a) of this lemma says that the Coxeter transformation has no fixed points on Zn except zero. For Dynkin type A this can be deduced from the formulae in Example 10.15, see Exercise 10.12. In principle, one could prove this lemma case-by-case. But below, we give a unified proof which works for all Dynkin types. As we have noted, CΓ preserves ΔΓ and the elements of ΔΓ are non-zero. Hence, by part (a), CΓ does not fix any root.
Proof. We begin with some preliminary observations. Recall from Definition 10.2 that for each vertex i of Γ we have si(y) = y − (y, εi)Γ εi. So (y, εi)Γ = 0 if and only if si(y) = y. Recall also from Remark 10.3 that si² is the identity.
(a) We will show that CΓ(y) = y implies that (y, εi)Γ = 0 for all i. Then for y = ∑_{i=1}^{n} yi εi we get

2 qΓ(y) = (y, y)Γ = ∑_{i=1}^{n} yi (y, εi)Γ = 0

and since the quadratic form qΓ is positive definite (by Proposition 10.10), it will follow that y = 0.
So suppose that CΓ(y) = y. Since sn² is the identity this implies sn−1 ◦ . . . ◦ s1(y) = sn(y). Since the reflection sj only changes the j-th coordinate, the n-th coordinate of sn−1 ◦ . . . ◦ s1(y) is yn, and the n-th coordinate of sn(y) is yn − (y, εn)Γ. So we have (y, εn)Γ = 0.
Now we proceed inductively. Since (y, εn)Γ = 0, then also sn(y) = y by the introductory observation. So we have now that sn−1 ◦ . . . ◦ s1(y) = y. Then applying sn−1 and equating the (n − 1)-th coordinate we get yn−1 = yn−1 − (y, εn−1)Γ and hence (y, εn−1)Γ = 0.
Repeating the argument we eventually get (y, εi)Γ = 0 for all i and then y = 0 as explained above.
(b) By Lemma 10.9 we know that each reflection sj permutes the set ΔΓ of roots. Hence CΓ = sn ◦ . . . ◦ s1 also permutes the roots. Since Γ is a Dynkin diagram, the set of roots is finite by Proposition 10.12. Hence there is some integer h ≥ 1 such that CΓ^h fixes each root. Then in particular CΓ^h(εi) = εi for all i, by Exercise 10.3. The εi form a basis for Zn, and CΓ^h is Z-linear (see Remark 10.3), therefore CΓ^h is the identity map on Zn.
(c) If x ≱ 0 then we can take r = 0. So assume now that x ≥ 0. Let h be the minimal positive integer as in part (b) such that CΓ^h is the identity. Then we set

y := ∑_{r=0}^{h−1} CΓ^r(x) ∈ Zn.

For this particular vector we have

CΓ(y) = CΓ(x) + CΓ²(x) + . . . + CΓ^h(x) = CΓ(x) + CΓ²(x) + . . . + CΓ^{h−1}(x) + x = y.

By part (a) we deduce y = 0. Now, x ≥ 0 and x is non-zero. If we had CΓ^r(x) ≥ 0 for all r then it would follow that 0 = y ≥ x, and hence x = 0, a contradiction. So there must be some r ≥ 1 such that CΓ^r(x) ≱ 0, as required. □
The properties in Lemma 10.16 depend crucially on the fact that Γ is a Dynkin diagram. For the Coxeter transformation in Exercise 10.9, each part of the lemma fails. Indeed, C − id is obviously singular, so part (a) does not hold. Furthermore, from the matrix of C^k in Exercise 10.9 one sees that no power of C can be the identity, so part (b) does not hold. In addition, for all k ≥ 0 the matrix of C^k yields C^k(ε2) ≥ 0, so part (c) does not hold.
EXERCISES

10.10. Let CΓ be the Coxeter transformation for Dynkin type An (with standard labelling), see Example 10.15; this permutes the set ΔΓ of roots.
(a) Find the CΓ-orbit of εn, show that it contains each εi, and that it has size n + 1.
(b) Show that each orbit contains a unique root of the form αt,n for 1 ≤ t ≤ n, compute its orbit, and verify that it has size n + 1.
(c) Deduce that CΓ^{n+1} is the identity map of Rn.
10.11. Let Γ be a Dynkin diagram with standard labelling. The Coxeter number of Γ is the smallest positive integer h such that CΓ^h is the identity map of Rn. Using the previous Exercise 10.10 show that the Coxeter number of the Dynkin diagram of type An is equal to n + 1.
10.12. Assume CΓ is the Coxeter transformation for the Dynkin diagram Γ of type An with standard labelling. Show by using the formula in Example 10.15 that CΓ(y) = y for y ∈ Zn implies that y = 0.
10.13. Consider the Coxeter transformation CΓ for a Dynkin diagram of type An, with standard labelling. Show that its matrix with respect to the standard basis is given by

Cn :=
 −1  1  0  ⋯  ⋯  0
 −1  0  1  0  ⋯  0
  ⋮  ⋮  ⋱  ⋱  ⋱  ⋮
 −1  0  ⋯  0  1  0
 −1  0  ⋯  ⋯  0  1
 −1  0  ⋯  ⋯  ⋯  0

Let fn(x) = det(Cn − xEn), the characteristic polynomial of Cn.
(a) Check that f1(x) = −x − 1 and f2(x) = x² + x + 1.
(b) By using the expansion formula for the last row of the matrix Cn − xEn, show that

fn(x) = (−1)ⁿ − x fn−1(x)   (for n ≥ 3).

(c) By induction on n, deduce a formula for fn(x). Hence show that

fn(x) = (−1)ⁿ (x^{n+1} − 1)/(x − 1).

(d) Deduce from this that Cn does not have an eigenvalue equal to 1. Hence deduce that CΓ(y) = y implies y = 0.
10.14. (Roots in Dynkin type D) Let Γ be the Dynkin diagram of type Dn, where n = 4 and n = 5, with standard labelling as in Example 10.4. Use the formula for the quadratic form qΓ given in Exercise 10.6 to determine all roots of qΓ. (Hint: In total there are 2n(n − 1) roots.)
10.15. Compute the reflections and the Coxeter transformation for a Dynkin diagram Γ of type D5 with standard labelling.
(a) Verify that

CΓ(x) = (x3 − x1, x3 − x2, x3 + x4 − x1 − x2, x3 + x5 − x1 − x2, x3 − x1 − x2).
(b) The map CΓ permutes the roots. Find the orbits.
(c) Show that the Coxeter number of Γ (defined in Exercise 10.11) is equal to 8.
10.16. Let Γ be the Dynkin diagram of type An with standard labelling, and let WΓ be the subgroup of Aut(Zn) generated by s1, s2, . . . , sn. We want to show that WΓ is isomorphic to the symmetric group Sn+1. The group Sn+1 can be defined by generators and relations, where the generators are the transpositions τ1, τ2, . . . , τn, where τi interchanges i and i + 1, and the relations are τi² = 1 and the 'braid relations'

τi τi+1 τi = τi+1 τi τi+1 (for 1 ≤ i < n),   τi τj = τj τi (for |i − j| > 1).

(a) Check that for the unit vectors we have

si(εi) = −εi,   si(εi+1) = εi+1 + εi,   si(εi−1) = εi−1 + εi,

and that si(εj) = εj for j ∉ {i, i ± 1}.
(b) Use (a) to show that the si satisfy the braid relations.
(c) By (b) and since si² = 1, there is a group homomorphism ρ : Sn+1 → WΓ such that ρ(τi) = si. The kernel of ρ is a normal subgroup of Sn+1. Using the fact that the only normal subgroups of symmetric groups are the alternating groups, and the Klein 4-group when n + 1 = 4, show that the kernel of ρ must be the identity group. Hence ρ is an isomorphism.
Chapter 11
Gabriel’s Theorem
Assume Q is a quiver without oriented cycles, then for any field K the path algebra
KQ is finite-dimensional (see Exercise 1.2). We want to know when Q is of finite
representation type; this is answered completely by Gabriel’s theorem. Let Q̄ be the
underlying graph of Q, which is obtained by ignoring the orientation of the arrows.
Gabriel’s theorem states that KQ is of finite representation type if and only if Q̄
is the disjoint union of Dynkin diagrams of type A, D and E. The relevant Dynkin
diagrams are listed in Fig. 10.1. So the representation type of Q does not depend on
the orientation of the arrows. Note that Gabriel’s theorem holds, and is proved here,
for an arbitrary field K.
Theorem 11.1 (Gabriel’s Theorem). Assume Q is a quiver without oriented
cycles, and K is a field. Then Q has finite representation type if and only if the
underlying graph Q̄ is the disjoint union of Dynkin diagrams of types An for n ≥ 1,
or Dn for n ≥ 4, or E6 , E7 , E8 .
Moreover, if a quiver Q has finite representation type, then the indecomposable
representations are parametrized by the set of positive roots (see Definition 10.6),
associated to the underlying graph of Q. Dynkin diagrams and roots play a central
role in Lie theory, and Gabriel’s theorem connects representation theory with Lie
theory.
11.1 Reflecting Quivers and Representations
Gabriel’s theorem states implicitly that the representation type of a quiver depends
only on the underlying graph but not on the orientation of the arrows. To prove
this, we will use ‘reflection maps’, which relate representations of two quivers with
the same underlying graph but where some arrows have different orientation. This
construction will show that any two quivers with the same underlying graph Γ have the same representation type, if Γ is an arbitrary finite tree.
Throughout this chapter let K be an arbitrary field.
Definition 11.2. Let Q be a quiver. A vertex j of Q is called a sink if no arrows in
Q start at j . A vertex k of Q is a source if no arrows in Q end at k.
For example, consider the quiver 1 −→ 2 ←− 3 ←− 4. Then vertices 1 and 4
are sources, vertex 2 is a sink and vertex 3 is neither a sink nor a source.
Exercise 11.1. Let Q be a quiver without oriented cycles. Show that Q contains a
sink and a source.
Definition 11.3. Let Q be a quiver and let j be a vertex in Q which is a sink or a
source. We define a new quiver σj Q; this is the quiver obtained from Q by reversing
all arrows adjacent to j , and keeping everything else unchanged. We call σj Q the
reflection of Q at the vertex j . Note that if a vertex j is a sink of Q then j is a
source of σj Q, and if j is a source of Q then it is a sink of σj Q. We also have that
σj σj Q = Q.
Example 11.4. Consider all quivers whose underlying graph is the Dynkin diagram
of type A4 . Up to labelling of the vertices, there are four possible quivers,

Q1 : 1 ←− 2 ←− 3 ←− 4
Q2 : 1 −→ 2 ←− 3 ←− 4
Q3 : 1 ←− 2 −→ 3 ←− 4
Q4 : 1 ←− 2 ←− 3 −→ 4

Then σ1 Q1 = Q2 and σ2 σ1 Q1 = Q3; and moreover σ3 σ2 σ1 Q1 = Q4. Hence each Qi can be obtained from Q1 by applying reflections.
This observation is more general: We will now show that if we start with a quiver
Q whose underlying graph is a tree, and repeatedly apply such reflections, we can
get all quivers with the same underlying graph. That is, we can choose a vertex i1
which is a sink or source of Q, then we get the quiver σi1 Q. Then one can find a
sink or source i2 of the quiver σi1 Q, and hence we get the quiver σi2 σi1 Q, and so
on. If the underlying graph is a tree one can arrange this, and get an arbitrary quiver
Q with the same underlying graph. The idea is to organize this properly.
Proposition 11.5. Let Q and Q′ be quivers with the same underlying graph and assume that this graph is a tree. Then there exists a sequence i1, . . . , ir of vertices of Q such that the following holds:
(i) The vertex i1 is a sink or source in Q.
(ii) For each j such that 1 < j < r, the vertex ij is a sink or source in the quiver σij−1 . . . σi1 Q obtained from Q by successively reflecting at the vertices in the sequence.
(iii) We have Q′ = σir . . . σi1 Q.
Proof. We prove this by induction on the number n of vertices of Q. Let Γ be the underlying graph. For n = 1 or n = 2 the statement is clear. So let n ≥ 3 and assume, as an inductive hypothesis, that the statement holds for quivers with fewer than n vertices. Since Γ is a tree, it must have a vertex which is adjacent to only one other vertex (in fact, if each vertex has at least two neighbours one can find a cycle in the graph). We choose and fix such a vertex, and then we label the vertices so that this vertex is n and its unique neighbour is n − 1.
We remove the vertex n and the (unique) adjacent arrow, that is, the arrow between n and n − 1, from Q and Q′; this gives quivers Q̃ and Q̃′, each with n − 1 vertices and which have the same underlying graph Γ̃, which is also a tree.
By the inductive hypothesis, there exists a sequence i1, . . . , it of vertices of Q̃ such that for each j, the vertex ij is a sink or source in σij−1 . . . σi1 Q̃ and such that Q̃′ = σit . . . σi1 Q̃. We want to extend this to Q, but we have to be careful in cases when the vertex ij is equal to n − 1.
At the first step, we have two cases: either i1 is a sink or source in Q, or i1 = n − 1 but is not a sink or source in Q. In the second case, i1 must be a sink or source in the quiver σn Q. We set Q(1) := σi1 Q in the first case, and in the second case we set Q(1) := σi1 σn Q. Note that if we remove vertex n and the adjacent arrow from Q(1) we get σi1 Q̃.
If i2 is a sink or source in Q(1) then set Q(2) := σi2 Q(1). Otherwise, i2 = n − 1 and it is a sink or source of σn Q(1). In this case we set Q(2) := σi2 σn Q(1). We note that if we remove vertex n and the adjacent arrow from Q(2) then we get σi2 σi1 Q̃.
We repeat this until we have after t steps a quiver Q(t), so that if we remove vertex n and the adjacent arrow from Q(t), then we get the quiver σit . . . σi1 Q̃ = Q̃′. Then either Q(t) = Q′, or if not then σn Q(t) = Q′. In total we have obtained a sequence of reflections in sinks or sources which takes Q to Q′. The parameter r in the statement is then equal to t plus the number of times we have inserted the reflection σn. □
Example 11.6. We illustrate the proof of Proposition 11.5. Let Q and Q′ be the quivers

[figure: two quivers with five vertices and the same underlying tree]

We write down Q̃ and Q̃′, obtained by removing vertex 5 and the adjacent arrow.

[figure: the quivers Q̃ and Q̃′]

Then we have σ4 σ1 Q̃ = Q̃′. We extend the sequence to Q and we see that we must take twice a reflection at vertex 5, and get σ5 σ4 σ5 σ1(Q) = Q′.
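The reflections σj are easy to model on a computer. In the following small Python sketch (ours, not the book's notation) a quiver is simply a list of arrows (source, target), and we retrace Example 11.4.

def is_sink(Q, j):
    return all(s != j for s, t in Q)

def is_source(Q, j):
    return all(t != j for s, t in Q)

def sigma(Q, j):
    """Reflection of Definition 11.3: reverse every arrow adjacent to j."""
    assert is_sink(Q, j) or is_source(Q, j), "j must be a sink or a source"
    return [(t, s) if j in (s, t) else (s, t) for s, t in Q]

# Example 11.4: Q1 is 1 <- 2 <- 3 <- 4; an arrow is a pair (source, target)
Q1 = [(2, 1), (3, 2), (4, 3)]
Q2 = sigma(Q1, 1)        # 1 -> 2 <- 3 <- 4
Q3 = sigma(Q2, 2)        # 1 <- 2 -> 3 <- 4
Q4 = sigma(Q3, 3)        # 1 <- 2 <- 3 -> 4
print(Q4)                # [(2, 1), (3, 2), (3, 4)]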
Starting with a quiver Q where vertex j is a sink or a source, we have obtained
a new reflected quiver σj Q. We want to compare the representation type of
these quivers, and want to construct from a representation M of Q a ‘reflected
representation’ of the quiver σj Q.
11.1.1 The Reflection Σj⁺ at a Sink

We assume that j is a sink of the quiver Q. For every representation M of Q we will construct from M a representation of σj Q, denoted by Σj⁺(M). The idea is to keep the vector space M(r) as it is for any vertex r ≠ j, and also to keep the linear map M(β) as it is for any arrow β which does not end at j. We want to find a vector space M⁺(j), and for each arrow αi : i → j in Q, we want to define a linear map M⁺(ᾱi) from M⁺(j) to M(i), to be constructed using only data from M. We first fix some notation, and then we study small examples.
Definition 11.7. Let j be a sink in the quiver Q. We label the distinct arrows ending
at j by α1 , α2 , . . . , αt , say αi : i → j . Then we write ᾱi : j → i for the arrows of
σj Q obtained by reversing the αi .
Note that with this convention we have t distinct arrows ending at j , but if the
quiver contains multiple arrows then some starting vertices of the arrows αi may
coincide; see also Remark 11.13.
Example 11.8.
(1) Let t = 1, and take the quivers Q and σj Q as follows:
1 −−α1−→ j   and   1 ←−ᾱ1−− j.
We start with a representation M of Q,

M(1) −−M(α1)−→ M(j),

and we want to define a representation of σj Q, that is,

M(1) ←−M⁺(ᾱ1)−− M⁺(j),

and this should only use information from M. There is not much choice: we take M⁺(j) := ker(M(α1)), which is a subspace of M(1), and we take M⁺(ᾱ1) to be the inclusion map. This defines a representation Σj⁺(M) of σj Q.
(2) Let t = 2, and take the quivers Q and σj Q as follows:

1 −−α1−→ j ←−α2−− 2   and   1 ←−ᾱ1−− j −−ᾱ2−→ 2.

We start with a representation M of Q,

M(1) −−M(α1)−→ M(j) ←−M(α2)−− M(2).

Here we can use the construction of the pull-back, which was introduced in Chap. 2 (see Exercise 2.16). This takes two linear maps to a fixed vector space and constructs from this a new space E, explicitly,

E = {(m1, m2) ∈ M(1) × M(2) | M(α1)(m1) + M(α2)(m2) = 0}.

Denote the projection maps on E by π1(m1, m2) = m1, and π2(m1, m2) = m2. We define M⁺(j) := E and M⁺(ᾱ1) := π1 and M⁺(ᾱ2) := π2. That is, the representation Σj⁺(M) is then

M(1) ←−π1−− E −−π2−→ M(2).
(3) Let t = 3. We take the following quiver Q:

[figure: three vertices 1, 2, 3 with arrows α1, α2, α3 all ending at a central vertex j]

Then the quiver σj Q is equal to

[figure: the same underlying graph with all three arrows reversed]

Suppose M is a representation of Q. We want to construct from M a new representation Σj⁺(M) of the quiver σj Q. We modify the idea of the previous example.
We set M⁺(i) = M(i) for i = 1, 2, 3, and we define

M⁺(j) = {(m1, m2, m3) ∈ ∏_{i=1}^{3} M(i) | M(α1)(m1) + M(α2)(m2) + M(α3)(m3) = 0}.

As the required linear map M⁺(ᾱi) : M⁺(j) → M(i) we take the i-th projection, that is,

M⁺(ᾱi)((m1, m2, m3)) = mi for i = 1, 2, 3.

Then Σj⁺(M) is a representation of the quiver σj Q.
We consider two explicit examples for this construction.
(i) Let M be the representation of Q with M(i) = K for 1 ≤ i ≤ 3 and M(j) = K², with maps M(αi) given by

M(α1) : K → K², x ↦ (x, 0)
M(α2) : K → K², x ↦ (0, x)
M(α3) : K → K², x ↦ (x, x).

Then

M⁺(j) = {(m1, m2, m3) ∈ K³ | (m1, 0) + (0, m2) + (m3, m3) = 0} = {(−x, −x, x) | x ∈ K},

and M⁺(ᾱi)((−x, −x, x)) is equal to −x for i = 1, 2, and to x for i = 3.
(ii) Let M be the representation of Q with M(1) = M(2) = M(3) = 0 and M(j) = K; then Σj⁺(M) is the zero representation. Note that M = Sj, the simple representation for the vertex j (see Example 9.7).
The definition of Σj⁺(M) in general when j is a sink of a quiver Q is essentially the same as in the examples. Recall our notation, as in Definition 11.7. We also write (M(α1), . . . , M(αt)) for the linear map from ∏_{i=1}^{t} M(i) to M(j) defined as

(m1, . . . , mt) ↦ M(α1)(m1) + . . . + M(αt)(mt).
Definition 11.9. Let Q be a quiver and assume j is a sink in Q. For any representation M of Q we define a representation Σj⁺(M) of the quiver σj Q as follows. We set

M⁺(r) := M(r)  if r ≠ j,
M⁺(j) := {(m1, . . . , mt) ∈ ∏_{i=1}^{t} M(i) | (M(α1), . . . , M(αt))(m1, . . . , mt) = 0}.

If γ is an arrow of Q which does not end at the sink j then we set M⁺(γ) = M(γ). For i = 1, . . . , t we define M⁺(ᾱi) : M⁺(j) → M⁺(i) to be the projection onto M(i), that is, M⁺(ᾱi)(m1, . . . , mt) = mi.
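Concretely, M⁺(j) is the kernel of the combined map (M(α1), . . . , M(αt)). The following sympy sketch (our addition, with matrices acting on column vectors) computes it for Example 11.8(3)(i).

import sympy as sp

def sigma_plus_space(maps):
    """Basis of M^+(j): kernel of the combined map (M(alpha_1), ..., M(alpha_t)).
    maps: sympy Matrices M(alpha_i) : M(i) -> M(j), all with the same codomain."""
    A = sp.Matrix.hstack(*maps)   # (m_1, ..., m_t) |-> sum of M(alpha_i)(m_i)
    return A.nullspace()

# Example 11.8(3)(i): three maps K -> K^2
M1 = sp.Matrix([[1], [0]])        # x |-> (x, 0)
M2 = sp.Matrix([[0], [1]])        # x |-> (0, x)
M3 = sp.Matrix([[1], [1]])        # x |-> (x, x)
print(sigma_plus_space([M1, M2, M3]))   # one basis vector: (-1, -1, 1)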
To compare the representation types of the quivers Q and σj Q, we need to keep track of direct sum decompositions. Fortunately, the construction of Σj⁺ is compatible with taking direct sums:
Lemma 11.10. Let Q be a quiver and let j be a sink in Q. Let M be a representation of Q such that M = X ⊕ Y for subrepresentations X and Y. Then we have

Σj⁺(M) = Σj⁺(X) ⊕ Σj⁺(Y).

We will prove this later, in Sect. 12.1.1, since the proof is slightly technical. With this lemma, we can focus on indecomposable representations of Q. We consider a small example.
Example 11.11. We consider the quiver 1 −α→ 2. In Example 9.28 we have seen that it has precisely three indecomposable representations (up to isomorphism), which are listed in the left column of the table below. We now reflect the quiver at the sink 2, and we compute the reflected representations Σ2⁺(M), using Example 11.8. The representations Σ2⁺(M) are listed in the right column of the following table.

M                  Σ2⁺(M)
K −→ 0             K ←−idK−− K
K −−idK−→ K        K ←− 0
0 −→ K             0 ←− 0

We see that Σ2⁺ permutes the indecomposable representations other than the simple representation S2. Moreover, it takes S2 to the zero representation, and S2 does not appear as Σ2⁺(M) for some M.
We can generalize the last observation in the example:
Proposition 11.12. Assume Q is a quiver and j is a sink in Q. Let M be a representation of Q.
(a) Σj⁺(M) is the zero representation if and only if M(r) = 0 for all vertices r ≠ j, equivalently M is isomorphic to a direct sum of copies of the simple representation Sj.
(b) Σj⁺(M) has no subrepresentation isomorphic to the simple representation Sj.
Proof. (a) Assume first that M(r) = 0 for all r ≠ j; then it follows directly from Definition 11.9 that Σj⁺(M) is the zero representation. Conversely, if Σj⁺(M) is the zero representation then for r ≠ j we have 0 = M⁺(r) = M(r). This condition means that M is isomorphic to a direct sum of copies of Sj, by Exercise 9.8.
(b) Suppose for a contradiction that Σj⁺(M) has a subrepresentation isomorphic to Sj. Then we have a non-zero element m := (m1, . . . , mt) ∈ M⁺(j) with M⁺(ᾱi)(m) = 0 for i = 1, . . . , t. But by definition, the map M⁺(ᾱi) takes (m1, . . . , mt) to mi. Therefore mi = 0 for i = 1, . . . , t and m = 0, a contradiction. □
Remark 11.13. Let j be a sink of Q; then we take in Definition 11.7 distinct arrows ending at j, but we are not excluding that some of these may start at the same vertex. For example, take the Kronecker quiver

[figure: the Kronecker quiver, with two arrows α1, α2 from vertex 1 to vertex j]

Then for a representation M of this quiver, to define Σj⁺(M) we must take

M⁺(j) = {(m, m′) ∈ M(1) × M(1) | M(α1)(m) + M(α2)(m′) = 0}.

We will not introduce extra notation for multiple arrows, since the only time we have multiple arrows is for examples using the Kronecker quiver.
11.1.2 The Reflection Σj⁻ at a Source

We assume that j is a source of the quiver Q′. For every representation N of Q′ we will construct from N a representation of σj Q′, denoted by Σj⁻(N). The idea is to keep the vector space N(r) as it is, for any vertex r ≠ j, and also to keep the linear map N(γ) as it is, for any arrow γ which does not start at j. We want to find a vector space N⁻(j), and for each arrow βi : j → i, we want to define a linear map N⁻(β̄i) from N(i) to N⁻(j), to be constructed using only data from N. We first fix some notation, and then we study small examples.
Definition 11.14. Let j be a source in the quiver Q′. We label the distinct arrows starting at j by β1, β2, . . . , βt, say βi : j → i. Then we write β̄i : i → j for the arrows of σj Q′ obtained by reversing the βi.
Example 11.15.
(1) Let t = 1, and take the quivers Q′ and σj Q′ as follows:

1 ←−β1−− j   and   1 −−β̄1−→ j.

We start with a representation N of Q′,

N(1) ←−N(β1)−− N(j),

and we want to define a representation of σj Q′, that is,

N(1) −−N⁻(β̄1)−→ N⁻(j),

and this should only use information from N. There is not much choice: we take N⁻(j) := N(1)/im(N(β1)), which is a quotient space of N(1), and we take N⁻(β̄1) to be the canonical surjection. This defines the representation Σj⁻(N) of σj Q′.
(2) Let t = 2, and take the quivers Q′ and σj Q′ as follows:

1 ←−β1−− j −−β2−→ 2   and   1 −−β̄1−→ j ←−β̄2−− 2.

We start with a representation N of Q′,

N(1) ←−N(β1)−− N(j) −−N(β2)−→ N(2).

Here we can use the construction of the push-out, which was introduced in Chap. 2 (see Exercise 2.17). This takes two linear maps starting at the same vector space and constructs from this a new space F, explicitly,

F = (N(1) × N(2))/C, where C = {(N(β1)(x), N(β2)(x)) | x ∈ N(j)}.

We have then a canonical linear map μ1 : N(1) → F defined by μ1(n1) = (n1, 0) + C, and similarly μ2 : N(2) → F is the map μ2(n2) = (0, n2) + C. We define N⁻(j) := F and N⁻(β̄1) := μ1, and N⁻(β̄2) := μ2. That is, the representation Σj⁻(N) is then

N(1) −−μ1−→ F ←−μ2−− N(2).
(3) Let t = 3. We take the following quiver Q′:

[figure: a central vertex j with arrows β1, β2, β3 to vertices 1, 2, 3]

Then σj Q′ is the quiver

[figure: the same underlying graph with all three arrows reversed]

Let N be a representation of Q′; we want to construct from N a representation Σj⁻(N) of σj Q′. We modify the idea of the previous example. We set N⁻(i) := N(i) for i = 1, 2, 3, and we define N⁻(j) to be the quotient space

N⁻(j) := (N(1) × N(2) × N(3))/C_N,

where C_N := {(N(β1)(x), N(β2)(x), N(β3)(x)) | x ∈ N(j)}. As the required linear map N⁻(β̄1) : N(1) → N⁻(j) we take the canonical map

x ↦ (x, 0, 0) + C_N,

and similarly we define N⁻(β̄i) for i = 2, 3. Then Σj⁻(N) is a representation of the quiver σj Q′.
We consider two explicit examples.
(i) Let N be the representation of Q′ with N(i) = K for i = 1, 2, 3 and with N(j) = K². Take

N(β1)(x1, x2) := x1
N(β2)(x1, x2) := x2
N(β3)(x1, x2) := x1 + x2.

Then N⁻(j) = (K × K × K)/C_N and

C_N = {(x1, x2, x1 + x2) | (x1, x2) ∈ K²}.

We see that C_N is 2-dimensional and N⁻(j) is therefore 1-dimensional, spanned for instance by (1, 0, 0) + C_N.
(ii) Let N be the representation of Q′ with N(i) = 0 for i = 1, 2, 3, and N(j) = K; then Σj⁻(N) is the zero representation. Note that N = Sj, the simple representation for the vertex j.
The definition of Σj⁻(N) in general, where j is a source of a quiver Q′, is essentially the same as in the examples. Recall our notation, as it is fixed in Definition 11.14.
Definition 11.16. Let Q′ be a quiver and assume j is a source of Q′. For any representation N of Q′ we define a representation Σj⁻(N) of σj Q′ as follows. Given N, then Σj⁻(N) is the representation such that

N⁻(r) = N(r)  if r ≠ j,
N⁻(j) = (N(1) × . . . × N(t))/C_N,

where

C_N := {(N(β1)(x), . . . , N(βt)(x)) | x ∈ N(j)}.

Next, define N⁻(γ) = N(γ) if γ is an arrow which does not start at j, and for 1 ≤ i ≤ t define the linear map N⁻(β̄i) : N(i) → N⁻(j) by setting

N⁻(β̄i)(ni) := (0, . . . , 0, ni, 0, . . . , 0) + C_N ∈ N⁻(j)

with ni ∈ N(i) in the i-th coordinate.
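Dually to the sketch after Definition 11.9, N⁻(j) is the cokernel of the map x ↦ (N(β1)(x), . . . , N(βt)(x)). A companion sympy sketch (ours) computes its dimension for Example 11.15(3)(i).

import sympy as sp

def sigma_minus_dim(maps):
    """dim N^-(j) = dim(N(1) x ... x N(t)) - dim C_N, where C_N is the image
    of x |-> (N(beta_1)(x), ..., N(beta_t)(x)).
    maps: sympy Matrices N(beta_i) : N(j) -> N(i), all with the same domain."""
    B = sp.Matrix.vstack(*maps)
    return B.rows - B.rank()

# Example 11.15(3)(i): three maps K^2 -> K
N1 = sp.Matrix([[1, 0]])          # (x1, x2) |-> x1
N2 = sp.Matrix([[0, 1]])          # (x1, x2) |-> x2
N3 = sp.Matrix([[1, 1]])          # (x1, x2) |-> x1 + x2
print(sigma_minus_dim([N1, N2, N3]))    # 1, as computed above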
To compare the representation types of the quivers Q′ and σj Q′, we need to keep track of direct sum decompositions. The construction of Σj⁻ is compatible with taking direct sums, but the situation is slightly more complicated than for Σj⁺ in Lemma 11.10, since in general Σj⁻ does not take subrepresentations to subrepresentations; see Exercise 11.12 for an example.
The direct product of representations has been defined in Exercise 9.13, as an analogue of the direct product of modules.
Lemma 11.17. Let Q′ be a quiver and assume j is a source in Q′.
(a) Assume N is a representation of Q′ and N = X ⊕ Y with subrepresentations X and Y. Then Σj⁻(N) is isomorphic to the direct product Σj⁻(X) × Σj⁻(Y) of representations.
(b) Let V = Σj⁻(X) × Σj⁻(Y) be the direct product representation in part (a). Then V has subrepresentations X̃ and Ỹ where V = X̃ ⊕ Ỹ, and as representations of σj Q′ we have X̃ ≅ Σj⁻(X) and Ỹ ≅ Σj⁻(Y).
(c) Using the isomorphisms in (b) as identifications, we have

Σj⁻(N) ≅ Σj⁻(X) ⊕ Σj⁻(Y).
This lemma will be proved in Sect. 12.1.1. Note that part (b) is a direct application of Exercise 9.13, and part (c) follows from parts (a) and (b). So it remains to prove part (a) of the lemma, and this is done in Sect. 12.1.1.
With this result, we focus on indecomposable representations, and we consider a small example.
Example 11.18. We consider the quiver 1 −α→ 2, and we reflect at the source 1. Recall that the quiver has three indecomposable representations; they are listed in the left column of the table below. We compute the reflected representations Σ1⁻(N), using Example 11.15. The representations Σ1⁻(N) are listed in the right column of the following table.

N                  Σ1⁻(N)
K −→ 0             0 ←− 0
K −−idK−→ K        0 ←− K
0 −→ K             K ←−idK−− K

We see that Σ1⁻ permutes the indecomposable representations other than the simple representation S1. Moreover, it takes S1 to the zero representation, and S1 does not appear as Σ1⁻(N) for some N.
We can generalize the last observation in this example.
Proposition 11.19. Assume Q′ is a quiver, and j is a source of Q′. Let N be a representation of Q′.
(a) Σj⁻(N) is the zero representation if and only if N(i) = 0 for all i ≠ j; equivalently, N is isomorphic to a direct sum of copies of the simple representation Sj.
(b) Σj⁻(N) has no direct summand isomorphic to the simple representation Sj.
Proof. (a) First, if N(i) = 0 for all i ≠ j then it follows directly from Definition 11.16 that Σj⁻(N) is the zero representation. Conversely, assume that Σj⁻(N) is the zero representation, that is, N⁻(i) = 0 for each vertex i. In particular, for i ≠ j we have N(i) = N⁻(i) = 0. For the last part, see Exercise 9.8.
(b) Assume for a contradiction that Σj⁻(N) = X ⊕ Y, where X is isomorphic to Sj. Then N(i) = N⁻(i) = X(i) ⊕ Y(i) = Y(i) for i ≠ j, and N⁻(j) = X(j) ⊕ Y(j) with N⁻(j) ≠ Y(j) since X(j) is non-zero. We get a contradiction if we show that Y(j) is equal to N⁻(j).
By definition Y(j) ⊆ N⁻(j). Conversely, take an element in N⁻(j); it is of the form (v1, . . . , vt) + C_N with vi ∈ N(i). We can write it as

((v1, 0, . . . , 0) + C_N) + ((0, v2, 0, . . . , 0) + C_N) + . . . + ((0, . . . , 0, vt) + C_N).
Now (0, . . . , 0, vi, 0, . . . , 0) + C_N = N⁻(β̄i)(vi) = Y(β̄i)(vi) since vi ∈ N(i) = Y(i), and this element lies in Y(j) since Y is a subrepresentation. This holds for all i = 1, . . . , t, and then the sum of these elements is also in Y(j). We have proved N⁻(j) = Y(j), and have the required contradiction. □
Remark 11.20. Let j be a source of the quiver Q′. In Definition 11.14 we take distinct arrows starting at j, but we have not excluded that some of these may end at the same vertex. For example, let Q′ be the Kronecker quiver with two arrows β1, β2 from vertex j to vertex 1; then for a representation N of this quiver, to define Σj⁻(N) we must take N⁻(j) = (N(1) × N(1))/C_N with C_N = {(N(β1)(x), N(β2)(x)) | x ∈ N(j)}. As for the case of a sink, we will not introduce extra notation since we only have a multiple arrow in examples with the Kronecker quiver but not otherwise.
11.1.3 Compositions Σj⁻Σj⁺ and Σj⁺Σj⁻

Assume j is a sink of a quiver Q. Then j is a source in the reflected quiver σj Q. If M is a representation of Q then Σj⁺(M), as in Definition 11.9, is a representation of σj Q. So it makes sense to apply Σj⁻ to this representation. We get

Σj⁻Σj⁺(M) := Σj⁻(Σj⁺(M)),

which is a representation of Q (since σj σj Q = Q).
Similarly, if j is a source in a quiver Q′ and N is a representation of Q′ then we define Σj⁺Σj⁻(N), which is a representation of Q′.
Example 11.21. We consider the quiver Q given by 1 −α→ 2. The vertex 2 is a sink in Q and hence a source in σ2 Q. From the table in Example 11.11 we can find the composition Σ2⁻Σ2⁺:

M                  Σ2⁺(M)             Σ2⁻Σ2⁺(M)
K −→ 0             K ←−idK−− K        K −→ 0
K −−idK−→ K        K ←− 0             K −−idK−→ K
0 −→ K             0 ←− 0             0 −→ 0

Similarly, the vertex 1 is a source in Q and hence a sink in σ1 Q. For the composition Σ1⁺Σ1⁻ we get the following, using the table in Example 11.18:

N                  Σ1⁻(N)             Σ1⁺Σ1⁻(N)
K −→ 0             0 ←− 0             0 −→ 0
K −−idK−→ K        0 ←− K             K −−idK−→ K
0 −→ K             K ←−idK−− K        0 −→ K
We observe in the first table that if M is not the simple representation S2 then Σ2⁻Σ2⁺(M) is isomorphic to M. Similarly in the second table, if N is not the simple representation S1 then Σ1⁺Σ1⁻(N) is isomorphic to N. We will see that this is not a coincidence.
Proposition 11.22. Assume j is a sink of a quiver Q and let α1, . . . , αt be the arrows in Q ending at j. Suppose M is a representation of Q such that the linear map

(M(α1), . . . , M(αt)) : ∏_{i=1}^{t} M(i) −→ M(j)

is surjective. Then the representation Σj⁻Σj⁺(M) is isomorphic to M.
This will be proved in the last chapter, see Sect. 12.1.2. The surjectivity condition is necessary. Otherwise M would have a direct summand isomorphic to Sj, by Exercise 9.9, but Σj⁻Σj⁺(M) does not have such a summand, by Proposition 11.19.
Example 11.23. Let Q be the quiver of the form

1 −−α1−→ j ←−α2−− 2.

Suppose we take a representation M which satisfies the assumption of the above proposition. In Example 11.8 we have seen that Σj⁺(M) is the representation

M(1) ←−π1−− E −−π2−→ M(2)

where

M⁺(j) = E = {(m1, m2) ∈ M(1) × M(2) | M(α1)(m1) + M(α2)(m2) = 0}

is the pull-back as in Exercise 2.16 and π1, π2 are the projection maps from E onto M(1) and M(2), respectively.
If N = Σj⁻Σj⁺(M) then by Example 11.15 this representation has the form

M(1) −−μ1−→ F ←−μ2−− M(2)

where N⁻(j) = F = (M(1) × M(2))/E, the push-out as in Exercise 2.17, and μ1 is given by m1 ↦ (m1, 0) + E, and similarly for μ2.
In Exercise 2.18 we have seen that F is isomorphic to M(j), where an isomorphism ϕ : F → M(j) is given by (m1, m2) + E ↦ M(α1)(m1) + M(α2)(m2). Then one checks that the tuple (idM(1), ϕ, idM(2)) is a homomorphism of representations, and since all maps are isomorphisms, the representations Σj⁻Σj⁺(M) and M are isomorphic.
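A small dimension count (our own sketch, for a toy representation of the quiver 1 → j ← 2 over the rationals; the matrices are our choice) illustrates why the surjectivity assumption makes the composition recover M(j).

import sympy as sp

Ma = sp.Matrix([[1], [0]])            # M(alpha_1) : M(1) = Q -> M(j) = Q^2
Mb = sp.Matrix([[0, 1], [1, 1]])      # M(alpha_2) : M(2) = Q^2 -> M(j) = Q^2
A = sp.Matrix.hstack(Ma, Mb)          # the combined map on M(1) x M(2)
assert A.rank() == 2                  # surjective onto M(j)

dim_E = len(A.nullspace())            # dim of the pull-back E = M^+(j)
dim_F = (1 + 2) - dim_E               # dim of the push-out F = N^-(j)
print(dim_E, dim_F)                   # 1 2 : F has dimension dim M(j) = 2 again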
Returning to Example 11.21, we will now show that the observation on Σ1⁺Σ1⁻(N) is not a coincidence.
Proposition 11.24. Assume j is a source of a quiver Q′ and let β1, . . . , βt be the arrows in Q′ starting at j. Suppose N is a representation of Q′ such that ⋂_{i=1}^{t} ker(N(βi)) = 0. Then the representations Σj⁺Σj⁻(N) and N are isomorphic.
This is analogous to Proposition 11.22, and we will also prove it later in the final chapter, see Sect. 12.1.2. Again, the condition on the intersection of the kernels is necessary. If it does not hold then by Exercise 9.10, N has a direct summand isomorphic to Sj. On the other hand, by Proposition 11.12, the representation Σj⁺Σj⁻(N) does not have such a direct summand.
The following exercise shows that if we are dealing with indecomposable (but not simple) representations, the assumptions of Propositions 11.22 and 11.24 are always satisfied.
Exercise 11.2. Assume Q is a quiver and vertex j of Q is a sink (or a source). Let α1, . . . , αt be the arrows in Q ending (or starting) in j. Suppose M is an indecomposable representation of Q which is not isomorphic to the simple representation Sj.
(a) Assume j is a sink. Show that then M(j) = ∑_{i=1}^{t} im(M(αi)).
(b) Assume j is a source. Show that then ⋂_{i=1}^{t} ker(M(αi)) = 0.
Hint: Apply Exercises 9.9 and 9.10.
For a worked solution of Exercise 11.2, see the Appendix.
We can now completely describe the action of the reflections Σj+ and Σj− on
indecomposable representations.
Theorem 11.25. Let Q be a quiver.
(a) Assume j is a sink of Q and M is an indecomposable representation of Q not
isomorphic to Sj. Then Σj+(M) is indecomposable and not isomorphic to Sj.
(b) Assume j is a source in Q and N is an indecomposable representation of Q not
isomorphic to Sj. Then Σj−(N) is indecomposable and not isomorphic to Sj.
(c) Assume j is a sink or a source of Q. The reflections Σj± give mutually inverse
bijections between the indecomposable representations not isomorphic to Sj of
Q and of σjQ.
(d) Assume j is a sink or a source of Q. Then Q has finite representation type if
and only if σjQ does.
Proof. (a) From the assumption and Exercise 11.2 (a) it follows that
M(j) = Σ_{i=1}^t im(M(αi)), where α1, . . . , αt are the arrows ending at the sink
j. We apply Proposition 11.22 and obtain that Σj−Σj+(M) ≅ M. If we had
Σj+(M) = U ⊕ V for non-zero subrepresentations U and V then it would follow
from Lemma 11.17 that

    M ≅ Σj−Σj+(M) ≅ Σj−(U) ⊕ Σj−(V).

But M is indecomposable by assumption, so one of the summands, say Σj−(V),
is the zero representation. By Proposition 11.19, if V is not the zero representation
then it is isomorphic to a direct sum of copies of Sj. On the other hand, V is a direct
summand of Σj+(M) and therefore, by Proposition 11.12, it does not have a direct
summand isomorphic to Sj. It follows that V must be the zero representation, and
we have a contradiction. This shows that Σj+(M) is indecomposable. The last part
also follows from Proposition 11.12.
Part (b) is proved similarly; see Exercise 11.3 below.
(c) Say j is a sink of the quiver Q, and Q′ = σjQ. If M is an indecomposable
representation of Q which is not isomorphic to Sj then Exercise 11.2 and Proposi-
tion 11.22 give that M ≅ Σj−Σj+(M). If N is an indecomposable representation
of Q′ not isomorphic to Sj then by Exercise 11.2 and Proposition 11.24 we have
N ≅ Σj+Σj−(N). Part (c) is proved.
(d) This follows directly from (c). □

Exercise 11.3. Write out a proof of part (b) of Theorem 11.25.
The following consequence of Theorem 11.25 shows that the representation
type of a quiver does not depend on the orientation of the arrows, as long as the
underlying graph is a tree.
Corollary 11.26. Let Γ be a graph which is a tree. Then any two quivers with
underlying graph Γ have the same representation type.
Proof. By Proposition 11.5 we know that any two quivers with underlying graph Γ
are related by a sequence of reflections in sinks or sources, so the corollary follows
directly from Theorem 11.25. □

11.2 Quivers of Infinite Representation Type

We will now prove that if the underlying graph of Q is not a union of Dynkin
diagrams then Q has infinite representation type. This is one direction of Gabriel’s
theorem. As we have seen in Lemma 9.27, it is enough to consider connected
quivers, and we should deal with smallest connected quivers whose underlying
graph is not a Dynkin diagram (see Lemma 9.26).
Proposition 11.27. Assume Q is a connected quiver with no oriented cycles. If the
underlying graph of Q is not a Dynkin diagram, then Q has infinite representation
type.
The proof of Proposition 11.27 will take the entire section.
By Lemma 10.1 we know that a connected quiver Q whose underlying graph is not
a Dynkin diagram must have a subquiver Q′ whose underlying graph is a Euclidean
diagram. By Lemma 9.26, it suffices to show that the subquiver Q′ has infinite
representation type. We will do this case-by-case, going through the Euclidean
diagrams listed in Fig. 10.2.
We start with Euclidean diagrams of type Ãn, which has almost been done
already.
Proposition 11.28. Assume Q is a quiver without oriented cycles whose underlying
graph is a Euclidean diagram of type Ãn. Then Q is of infinite representation
type.
Proof. If n = 1 then Q is the Kronecker quiver, and we have seen in
Example 9.30 that it has infinite representation type. Now assume n > 1. We
will stretch the Kronecker quiver repeatedly as described in Definition 9.20; and
Exercise 9.4 shows that Q can be obtained from the Kronecker quiver by finitely
many stretches. Now Lemma 9.31 implies that Q has infinite representation type.


We will now deal with quivers whose underlying graphs are other Euclidean
diagrams as listed in Fig. 10.2. We observe that each of them is a tree. Therefore, by
Corollary 11.26, in each case we only need to show that it has infinite representation
type just for one orientation, which we can choose as we like.
We will use a more general tool. This is inspired by the indecomposable
representation of the quiver with underlying graph a Dynkin diagram D4 where
the space at the branch vertex is 2-dimensional, which we have seen a few times.
In Lemma 9.5 we have proved that for this representation, any endomorphism is a
scalar multiple of the identity. The following shows that this actually can be used to
produce many representations of a new quiver obtained by just adding one vertex
and one arrow.
Lemma 11.29. Assume Q is a quiver and let K be a field. Suppose Q has a
representation M such that
(a) every endomorphism of M is a scalar multiple of the identity,
(b) there is a vertex k of Q such that dimK M(k) = 2.
Let Q̃ be the quiver obtained from Q by adjoining a new vertex ω together with
a new arrow α : ω −→ k. Then for each λ ∈ K there is an indecomposable
representation Mλ of Q̃ which extends M (that is, M is the restriction of Mλ
in the sense of Definition 9.15), and such that Mλ and Mμ are not isomorphic for
λ ≠ μ.
Definition 11.30. We call a representation M satisfying (a) and (b) of
Lemma 11.29 a special representation.
Proof. We fix a basis for the vector space M(k) and identify M(k) with K². For any
λ ∈ K we extend M to a representation Mλ of Q̃ defined as follows:

    Mλ(i) = M(i) if i ≠ ω, and Mλ(ω) = K,

and Mλ(β) = M(β) for any arrow β ≠ α, while Mλ(α) : Mλ(ω) → M(k),
that is, from K to K², is the map x ↦ (x, λx). We want to show that Mλ is
indecomposable, and that Mλ ≇ Mμ if λ ≠ μ.
Let ϕ : Mλ → Mμ be a homomorphism of representations. Then the restriction
of ϕ to vertices in Q is a homomorphism from M to M. By the assumption, this is
a scalar multiple of the identity. In particular, at the vertex k we have ϕk = c idK 2
for some c ∈ K. Now, the space at vertex ω is the one-dimensional space K, so ϕω
is also a scalar multiple of the identity, say ϕω = d idK with d ∈ K. Consider the
commutative diagram

    Mλ(ω) = K  −−Mλ(α)−→  Mλ(k) = K²
       |ϕω                    |ϕk
       ↓                      ↓
    Mμ(ω) = K  −−Mμ(α)−→  Mμ(k) = K²
This gives that for x ∈ Mλ (ω) = K we have

c(x, λx) = (ϕk ◦ Mλ (α))(x) = (Mμ (α) ◦ ϕω )(x) = d(x, μx).

So for all x ∈ Mλ (ω) = K we have cx = dx and cλx = dμx.


If λ = μ then we get just the one condition cx = dx for all x ∈ K, and this clearly
implies c = d. But since the restriction of ϕ to Q is already a scalar multiple of the
identity we deduce that every homomorphism ϕ : Mλ → Mλ is a scalar multiple
of the identity. This implies that Mλ is indecomposable, by Lemma 9.12.
If λ ≠ μ then the above two conditions combine to

    cλx = dμx = cμx and λdx = λcx = dμx,

that is, (λ − μ)cx = 0 and (λ − μ)dx = 0. Since λ − μ ≠ 0 it follows that cx = 0
and dx = 0 for all x ∈ K, and hence c = d = 0. In other words, for λ ≠ μ the only
homomorphism ϕ : Mλ → Mμ is the zero homomorphism. In particular, there is
no isomorphism, and Mλ ≇ Mμ. □

We consider now quivers whose underlying graph is a Euclidean diagram of type
D̃n. The main work will be for the case n = 4. We take the special representation
M of the quiver Q with underlying graph of type D4 as in Lemma 9.5; here Q has
three arrows, from the vertices 1, 2 and 3 to the branch vertex 4.
Recall that for the representation M we have M(i) = K for i ≠ 4 and M(4) = K².
Now we take the extended quiver Q̃ as in Lemma 11.29, obtained by adjoining a
new vertex ω together with an arrow ω −→ 4; the underlying graph of Q̃ is then a
Euclidean diagram of type D̃4. Using the representation M of Q from Lemma 9.5
we find by Lemma 11.29 pairwise non-isomorphic indecomposable representations
Mλ of Q̃ for each λ ∈ K. In particular, the quiver Q̃ has infinite representation type
over any infinite field K.
However, we want to prove Gabriel’s theorem for arbitrary fields. We will
therefore construct indecomposable representations of Q̃ of arbitrary dimension,
which then shows that Q̃ always has infinite representation type, independent of the
field. Roughly speaking, to construct these representations we take direct sums of
the special representation, and glue them together at vertex 4 using a ‘Jordan block’
matrix. To set our notation, we denote by
Jm the m × m Jordan block matrix with eigenvalue 1, that is, the lower triangular
matrix with all diagonal entries equal to 1, all entries on the subdiagonal (directly
below the diagonal) equal to 1, and all other entries equal to 0.
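In code, Jm is easy to produce; here is a one-line numpy construction (our own
sketch, not part of the original text):

    import numpy as np

    def jordan_block(m, eigenvalue=1.0):
        """The m x m lower triangular Jordan block: eigenvalue on the diagonal,
        entries 1 directly below the diagonal, and 0 elsewhere."""
        return eigenvalue * np.eye(m) + np.diag(np.ones(m - 1), k=-1)

    print(jordan_block(4))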


In the following, when writing K^m or similar, we always mean column vectors;
for typographical convenience we sometimes write column vectors as transposed
row vectors. We will write linear maps as matrices, applying them to the left.
Definition 11.31. Let K be a field and consider the quiver Q̃ of type D̃4 given
above. Fix an integer m ≥ 1 and take V = K^m, an m-dimensional vector space over
K. Then take

    V(4) = {(v1, v2)^t | vi ∈ V} = K^{2m}.

Then we take each of the other spaces as subspaces of V(4):

    V(1) = {(v, 0)^t | v ∈ V}
    V(2) = {(0, v)^t | v ∈ V}
    V(3) = {(v, v)^t | v ∈ V}
    V(ω) = {(v, Jm v)^t | v ∈ V}.

The linear maps corresponding to the arrows of Q̃ are all taken as inclusions. This
defines a representation Vm of the quiver Q̃.
Lemma 11.32. The representation Vm of the quiver Q̃ is indecomposable.
Proof. To prove this, we use the criterion from Lemma 9.11, that is, we show that
the only endomorphisms ϕ : Vm → Vm of representations such that ϕ² = ϕ are the
zero and the identity homomorphism.
Since the spaces V(i) are defined as subspaces of V(4) = K^{2m} and all maps as
inclusions, any endomorphism ϕ : Vm → Vm of representations is given by a linear
map V(4) → V(4) which takes each V(i) into V(i), for i = 1, 2, 3, ω. We take ϕ
as a linear map ϕ : K^{2m} → K^{2m} and write it as a matrix in block form as

    ( A  B )
    ( C  D )

with m × m-matrices A, B, C, D. The linear map ϕ maps V (1) to V (1), therefore C


is the zero matrix. Since it maps V (2) to V (2), the matrix B is also the zero matrix.
Moreover, it maps V (3) to V (3), which implies A = D. It must also map V (ω) to
V (ω), and this is the case if and only if A commutes with Jm . In fact, we have
    
    ( A  0 ) (  v   )   (   Av   )
    ( 0  A ) ( Jm v ) = ( A Jm v )

and for this to be contained in V(ω) we must have A Jm v = Jm A v for all v ∈ K^m;
and if this holds for all v then A Jm = Jm A.
We can use the same argument as in the proof of Lemma 8.5 and also in
Example 9.30, that is, we apply Exercise 8.1. The matrix A is an endomorphism
of the module Vβ for the algebra K[X]/(f), where f = (X − 1)^m and where β
is given by Jm. This is a cyclic module, generated by the first basis element. By
Exercise 8.1, if A² = A then A is zero or the identity.
Therefore the only idempotent endomorphisms ϕ : Vm → Vm are zero and the
identity. As mentioned at the beginning of the proof, Lemma 9.11 then gives that
the representation Vm is indecomposable. 
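The key step above is that a matrix A commuting with Jm must be a polynomial in
Jm, because Jm is a cyclic (non-derogatory) matrix; so the algebra of matrices
commuting with Jm has dimension exactly m. This can be double-checked
numerically: A Jm = Jm A is the linear condition (Jm^T ⊗ Im − Im ⊗ Jm)vec(A) = 0,
and one can compute the dimension of its solution space. A sketch of such a check
(our own illustration, assuming numpy):

    import numpy as np

    m = 4
    J = np.eye(m) + np.diag(np.ones(m - 1), -1)    # the Jordan block J_m

    # vec(A J) = (J^T kron I) vec(A) and vec(J A) = (I kron J) vec(A),
    # so A J = J A exactly when vec(A) lies in the null space below.
    L = np.kron(J.T, np.eye(m)) - np.kron(np.eye(m), J)
    dim_commutant = m * m - np.linalg.matrix_rank(L)
    print(dim_commutant)   # prints m = 4: the commutant is {polynomials in J_m}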


Note that if we had taken any non-zero λ ∈ K as the eigenvalue of the Jordan
block above we would still have obtained indecomposable representations. We
chose λ = 1 since this lies in any field.
The above Lemma 11.32 shows that the quiver Q̃ has infinite representation type
over any field K. Indeed, the indecomposable representations Vm are pairwise non-
isomorphic since they have different dimensions. We will now use this to show
that every quiver whose underlying graph is of type D̃n for n ≥ 4 has infinite
representation type. Recall that we may choose the orientation as we like, by
Corollary 11.26. For example, for type D̃4 it is enough to deal with the above
quiver Q̃.
Proposition 11.33. Every quiver whose underlying graph is a Euclidean diagram
of type D̃n for some n ≥ 4 has infinite representation type.
Proof. Assume first that n = 4. Then Q̃ has infinite representation type by
Lemma 11.32. Indeed, if m1 ≠ m2 then Vm1 and Vm2 cannot be isomorphic, as they
have different dimensions. By Corollary 11.26, any quiver with underlying graph
D̃4 has infinite representation type.
Now assume n > 4. Any quiver of type D̃n can be obtained from the above
quiver Q̃ by a finite sequence of stretches in the sense of Definition 9.20. When
n = 5 this is Example 9.21, and for n ≥ 6 one may replace the branching vertex of
the D̃4 quiver by a quiver whose underlying graph is a line with the correct number
of vertices. By Lemma 9.31, the stretched quiver has infinite representation type,
and then by Corollary 11.26, every quiver of type D̃n has infinite representation
type. □

So far we have shown that every quiver without oriented cycles whose underlying
graph is a Euclidean diagram of type Ãn (n ≥ 1) or D̃n (n ≥ 4) has infinite
representation type over any field K; see Propositions 11.28 and 11.33.
The only Euclidean diagrams in Fig. 10.2 we have not yet dealt with are quivers
whose underlying graphs are of types Ẽ6, Ẽ7, and Ẽ8. The proof that these have
infinite representation type over any field K will follow the same strategy as for
type D̃n above. However, the proofs are longer and more technical. Therefore, they
are postponed to Sect. 12.2.
Taking these proofs for granted we have now completed the proof of Proposi-
tion 11.27 which shows that every quiver whose underlying graph is not a union of
Dynkin diagrams has infinite representation type.

11.3 Dimension Vectors and Reflections

The next task is to show that any quiver whose underlying graph is a union of
Dynkin diagrams has finite representation type. Recall that by Lemma 9.27 we
need only to look at connected quivers. At the same time we want to parametrize

the indecomposable representations. The appropriate invariants for this are the
dimension vectors, which we will now define.
Again, we fix a field K and all representations are over K.
Definition 11.34. Let Q be a quiver and assume M is a representation of Q.
Suppose Q has n vertices; we label the vertices by 1, 2, . . . , n. The dimension vector
of the representation M is defined to be

    dim M := (dimK M(1), . . . , dimK M(n)) ∈ Z^n.

Note that by definition the dimension vector depends on the labelling of the
vertices.
Example 11.35.
(1) Let Q be a quiver without oriented cycles. By Theorem 9.8, the simple
representations of Q correspond to the vertices of Q. The simple representation
Sj labelled by vertex j has dimension vector εj , the unit vector.
(2) In Example 9.28 we have classified the indecomposable representations of
the quiver 1 −→ 2, with underlying graph a Dynkin diagram A2 . We have
seen that the three indecomposable representations have dimension vectors
ε1 = (1, 0), ε2 = (0, 1) and (1, 1).
Remark 11.36. Given two isomorphic representations M ≅ N of a quiver Q, then
for each vertex i the spaces M(i) and N(i) are isomorphic, so they have the same
dimension and hence dim M = dim N. That is, M and N have the same dimension
vector. We will prove soon that for a Dynkin quiver the dimension vector actually
determines the indecomposable representation.
However, this is not true for arbitrary quivers, and we have seen this already:
in Example 9.29 we have constructed indecomposable representations Cλ for the
Kronecker quiver. They all have dimension vector (1, 1), but as we have shown, the
representations Cλ for different values of λ ∈ K are not isomorphic.
We consider quivers with a fixed underlying graph Γ. We will now analyse how
dimension vectors of representations change if we apply the reflections defined in
Sect. 11.1 to representations. We recall the definition of the bilinear form of Γ,
and the definition of a reflection of Z^n, from Definition 10.2. The bilinear form
(−, −) : Z^n × Z^n → Z is defined on unit vectors by

    (εi, εj) = −dij if i ≠ j, and (εi, εj) = 2 if i = j,

where dij is the number of edges between vertices i and j, and then extended
bilinearly. Soon we will focus on the case when Γ is a Dynkin diagram, and then
dij = 0 or 1, but we will also take Γ obtained from the Kronecker quiver, where
d12 = 2.

For any vertex j of Γ the reflection map is defined as

    sj : Z^n → Z^n, sj(a) = a − (a, εj) εj.

We will now see that the reflection maps precisely describe how the dimension
vectors are changed when a representation is reflected.
Proposition 11.37.
(a) Let Q be a quiver with underlying graph Γ. Assume j is a sink of Q and
α1, . . . , αt are the arrows in Q ending at j. Let M be a representation
of Q. If Σ_{i=1}^t im(M(αi)) = M(j) then dim Σj+(M) = sj(dim M). In
particular, this holds if M is indecomposable and not isomorphic to the simple
representation Sj.
(b) Let Q′ be a quiver with underlying graph Γ. Assume j is a source of Q′ and
β1, . . . , βt are the arrows in Q′ starting at j. Let N be a representation of Q′. If
⋂_{i=1}^t ker(N(βi)) = 0 then dim Σj−(N) = sj(dim N). In particular, this holds
if N is indecomposable and not isomorphic to the simple representation Sj.
Proof. The second parts of the statements of (a) and (b) are part of Exercise 11.2;
we include a worked solution in the appendix.
(a) We compare the entries in the vectors dim Σj+(M) and sj(dim M), respectively.
For vertices i ≠ j we have M+(i) = M(i) (see Definition 11.9). So the i-th
entry in dim Σj+(M) is equal to dimK M(i). On the other hand, the i-th entry in
sj(dim M) also equals dimK M(i) because sj only changes the j-th coordinate (see
Definition 10.2).
Now let i = j; then, by Definition 11.9, M+(j) is the kernel of the linear map

    M(1) × . . . × M(t) → M(j), (m1, . . . , mt) ↦ Σ_{i=1}^t M(αi)(mi).

By rank-nullity, and using the assumption, we have

    dimK M+(j) = dimK (M(1) × . . . × M(t)) − dimK M(j)
               = (Σ_{i=1}^t dimK M(i)) − dimK M(j).

We set ai = dimK M(i) for abbreviation. Since α1, . . . , αt are the only arrows
adjacent to the sink j, we have drj = 0 if a vertex r is not the starting point of
one of the arrows α1, . . . , αt. Recall that some of these arrows can start at the same
vertex, so if r is the starting point of one of these arrows then we have drj arrows
from r to j. So we can write

    dimK M+(j) = (Σ_{i=1}^t ai) − aj = (Σ_{r∈Q0\{j}} drj ar) − aj.

This is the j-th coordinate of the vector dim Σj+(M). We compare this with the j-th
coordinate of sj(dim M). By Definition 10.2 this is

    aj − (dim M, εj) = aj − ((Σ_{r∈Q0\{j}} −drj ar) + 2aj),

which is the same.


(b) Now let j be a source. Similarly to part (a) it can be seen that for i ≠ j the i-th
coordinates coincide in the two relevant vectors.
So it remains to consider the j -th coordinates. By Definition 11.16, the vector
space N − (j ) is the cokernel of the map

N(j ) → N(1) × . . . × N(t) , x → (N(β1 )(x), . . . , N(βt )(x)).

By our assumption this linear map is injective, hence the image has dimension equal
to dimK N(j). We set bi = dimK N(i) for abbreviation and get

    dimK N−(j) = (Σ_{i=1}^t dimK N(i)) − dimK N(j) = (Σ_{r∈Q0\{j}} drj br) − bj,

which is the same formula as in part (a) and by what we have seen there, this is the
j -th coordinate of sj (dim N ). 

We illustrate the above result with an example.
Example 11.38. As in Example 11.11 we consider the quiver Q of the form 1 −→ 2
and reflect at the sink 2. The corresponding reflection map is equal to

    s2 : Z² → Z², a = (a1, a2) ↦ a − (a, ε2)ε2 = a − (−a1 + 2a2)ε2 = (a1, a1 − a2).

In Example 11.11 we have listed the reflections of the three indecomposable
representations of Q. In the following table we compare their dimension vectors
with the vectors s2(dim M).

    M                  dim M      s2(dim M)     Σ2+(M)
    K −→ 0             (1, 0)     (1, 1)        K ←−(idK) K
    K −→(idK) K        (1, 1)     (1, 0)        K ←− 0
    0 −→ K             (0, 1)     (0, −1)       0 ←− 0

This confirms what we have proved in Proposition 11.37. We also see that excluding
the simple representation S2 in Proposition 11.37 is necessary, observing that
s2(dim S2) is not a dimension vector of a representation.
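The reflection maps sj are completely combinatorial, so they lend themselves to
a quick implementation. The following Python sketch (our own, not part of the
original text, using 0-indexed vertices; d[i][j] is the number of edges between i
and j) reproduces the column s2(dim M) of the table above.

    def reflect(a, j, d):
        """s_j(a) = a - (a, eps_j) eps_j, where
        (a, eps_j) = 2 a_j - sum_{i != j} d[i][j] a_i."""
        pairing = 2 * a[j] - sum(d[i][j] * a[i] for i in range(len(a)) if i != j)
        b = list(a)
        b[j] -= pairing
        return tuple(b)

    d = [[0, 1],
         [1, 0]]                       # Dynkin diagram A2: a single edge 1 -- 2
    for x in [(1, 0), (1, 1), (0, 1)]:
        print(x, "->", reflect(x, 1, d))   # vertex 2 is index 1 here
    # (1, 0) -> (1, 1),  (1, 1) -> (1, 0),  (0, 1) -> (0, -1)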

11.4 Finite Representation Type for Dynkin Quivers

Assume Q is a quiver with underlying graph Γ. We assume Γ is a Dynkin diagram
of one of the types An, Dn, E6, E7, E8 (as in Fig. 10.1). Let qΓ be the quadratic
form associated to Γ, see Definition 10.2. We have studied the set of roots,

    ΔΓ = {x ∈ Z^n | qΓ(x) = 1},

and we have proved that it is finite (see Proposition 10.12). We have also seen that a
root x is either positive or negative, see Lemma 10.13. Recall that a non-zero x ∈ Z^n
is positive if xi ≥ 0 for all i, and it is negative if xi ≤ 0 for all i.
In this section we will prove the following, which will complete the proof of
Gabriel’s Theorem:
Theorem 11.39. Assume Q is a quiver whose underlying graph Γ is a union of
Dynkin diagrams of type An (n ≥ 1) or Dn (n ≥ 4), or E6, E7, E8. Then the
following hold.
(1) If M is an indecomposable representation of Q then dim M is in ΔΓ.
(2) Every positive root is equal to dim M for a unique indecomposable representa-
tion M of Q.
In particular, Q has finite representation type.
Before starting with the proof, we consider some small examples.
Example 11.40.
(1) Let Q be the quiver 1 −→ 2 with underlying graph the Dynkin diagram A2 . In
Example 9.28 we have seen that this has three indecomposable representations,
with dimension vectors ε1 , ε2 and (1, 1). We see that these are precisely the
positive roots as described in Example 10.8.
(2) Let Q be the quiver 1 −→ 2 ←− 3. The Exercises 11.8 and 11.9 prove, using
elementary linear algebra, that the above theorem holds for Q. By applying
reflections Σ1± or Σ2± (see Exercise 11.11), one deduces that the theorem holds
for any quiver with underlying graph a Dynkin diagram of type A3.
To prove Theorem 11.39 we first show that it suffices to prove it for connected
quivers. This will follow from the lemma below, suitably adapted to finite unions of
Dynkin diagrams.
Lemma 11.41. Let Q = Q′ ∪ Q′′ be a disjoint union of two quivers such that
the underlying graph Γ = Γ′ ∪ Γ′′ is a union of two Dynkin diagrams. Then
Theorem 11.39 holds for Q if and only if it holds for Q′ and for Q′′.
Proof. We label the vertices of Q as {1, . . . , n′} ∪ {n′ + 1, . . . , n′ + n′′} where
{1, . . . , n′} are the vertices of Q′ and {n′ + 1, . . . , n′ + n′′} are the vertices of Q′′.
So we can write every dimension vector in the form (x′, x′′) with x′ ∈ Z^{n′} and
x′′ ∈ Z^{n′′}.

We have seen in Lemma 9.19 that the indecomposable representations of Q are
precisely the extensions by zero of the indecomposable representations of Q′ and of
Q′′. This means that the dimension vectors of indecomposable representations of Q
are all of the form (x′, 0, . . . , 0) or (0, . . . , 0, x′′). The quadratic form for Γ is

    qΓ(x) = Σ_{i=1}^{n′+n′′} xi² − Σ_{i<j} dij xi xj

(see Definition 10.6). Since there are no edges between vertices of Γ′ and Γ′′, we
see that for x = (x′, x′′) ∈ Z^{n′+n′′} we have

    qΓ(x) = qΓ′(x′) + qΓ′′(x′′).    (11.1)

In particular, x = (x′, x′′) is a root of Γ if and only if qΓ′(x′) + qΓ′′(x′′) = 1
(see Definition 10.6). By assumption, Γ′ and Γ′′ are Dynkin diagrams. Then by
Proposition 10.10 the quadratic forms qΓ′ and qΓ′′ are positive definite. Since the
quadratic forms have integral values, the condition qΓ′(x′) + qΓ′′(x′′) = 1 thus holds
if and only if qΓ′(x′) = 1 and qΓ′′(x′′) = 0, or vice versa. Thus, x = (x′, x′′) is a
root of Γ if and only if x = (x′, 0, . . . , 0) with x′ a root of Γ′ or x = (0, . . . , 0, x′′)
with x′′ a root of Γ′′.
Together with the above description of indecomposable representations of Q this
shows that statement (1) of Theorem 11.39 holds for Q if and only if it holds for Q′
and Q′′.
For statement (2) note that the positive roots of Γ are precisely the vectors of the
form x = (x′, 0, . . . , 0) with x′ a positive root of Γ′ or x = (0, . . . , 0, x′′) with x′′
a positive root of Γ′′. From this and Lemma 9.19 one sees that (2) holds for Q if
and only if it holds for Q′ and Q′′. □

Remark 11.42. Observe that on the way the above lemma has proved that if
Γ = Γ′ ∪ Γ′′ is a disjoint union of Dynkin diagrams then the set of roots ΔΓ can be
interpreted as a union of ΔΓ′ and ΔΓ′′, where the roots in ΔΓ′ and ΔΓ′′ are extended
by zeros.
Let Γ be a Dynkin diagram as in Fig. 10.1. We do not have to deal with an
arbitrary quiver, instead it suffices to prove Theorem 11.39 for a fixed orientation
of the arrows and a fixed labelling of the vertices. That is, we have the following
reduction:
Lemma 11.43. To prove (1) and (2) of Theorem 11.39, it is enough to take a quiver
Q with standard labelling (as in Example 10.4).
Proof. By Proposition 11.5, we know that any two quivers with underlying graph Γ
a Dynkin diagram are related by a sequence of reflections in sinks or sources. So it
suffices to show: if j is a sink or source of Q and the theorem holds for Q, then it
also holds for σjQ. We will write down the details when j is a sink, the other case

is analogous (we leave this for Exercise 11.4). Suppose j is a sink. We assume that
(1) and (2) hold for Q.
First we show that (1) holds for σjQ. By Theorem 11.25 any indecomposable
representation of σjQ is either isomorphic to the simple representation Sj, or
is of the form Σj+(M) where M is an indecomposable representation of Q
not isomorphic to Sj. The dimension vector of Sj is εj, which is in ΔΓ (see
Exercise 10.3). Moreover, in the second case, the dimension vector of Σj+(M) is
sj(dim M), by Proposition 11.37. We have that dim M is in ΔΓ by assumption, and by
Lemma 10.9, sj takes roots to roots. Moreover, sj(dim M) is positive since it is the
dimension vector of a representation.
We show now that (2) holds for σjQ. Let x ∈ ΔΓ be a positive root. If x = εj
then x = dim Sj and clearly this is the only possibility. So let x ≠ εj; we must show
that there is a unique indecomposable representation of σjQ with dimension vector
x. Since x ≠ εj, we have y := sj(x) ≠ εj, and this is also a root. It is a positive
root: since x is positive and not equal to εj, there is some k ≠ j such that xk > 0
(in fact, since qΓ(λεj) = λ²qΓ(εj), the only scalar multiples of εj which are in ΔΓ
are ±εj; see also Proposition 12.16). The reflection map sj changes only the j-th
coordinate, therefore yk = xk > 0, and y is a positive root.
By assumption, there is a unique indecomposable representation M of Q,
not isomorphic to Sj, such that dim M = y. Let N := Σj+(M). This is an
indecomposable representation, and dim N = sj(y) = x. To prove uniqueness,
let N′ be an indecomposable representation of σjQ with dim N′ = x; then
N′ ≇ Sj, hence by Theorem 11.25 there is a unique indecomposable representation
M′ of Q with Σj+(M′) = N′. Then we have sj(dim M′) = x and hence
dim M′ = sj(x) = y. Since (2) holds for Q we have M′ ≅ M and then N′ ≅ N.
This proves that (2) also holds for σjQ. □

Exercise 11.4. Write down the details for the proof of Lemma 11.43 when the
vertex j is a source of Q, analogous to the case when the vertex is a sink.
Assume from now on that Q has standard labelling, as in Example 10.4. Then we
have the following properties:
(i) Vertex 1 is a sink of Q, and for 2 ≤ j ≤ n, vertex j is a sink of the quiver
σj−1 . . . σ1Q;
(ii) σnσn−1 . . . σ1Q = Q.
Note that the corresponding sequence of reflections sn ◦ sn−1 ◦ . . . ◦ s1 is the Coxeter
transformation CΓ, where Γ is the underlying graph of Q (see Definition 10.14).
Exercise 11.5. Verify (i) and (ii) in detail when Q is of type An as in Example 10.4.
By Theorem 11.25, Σ1+ takes an indecomposable representation M of Q
which is not isomorphic to S1 to an indecomposable representation of σ1Q not
isomorphic to S1. Similarly, Σj+ takes an indecomposable representation M of
σj−1 . . . σ1Q which is not isomorphic to Sj to an indecomposable representa-
tion of σjσj−1 . . . σ1Q not isomorphic to Sj. Moreover, if x = dim M then
sj(x) = dim Σj+(M), by Proposition 11.37.
We also want to consider compositions of these reflections. Let Σ = Σn+ . . . Σ1+.
If M is an indecomposable representation of Q and Σ(M) is non-zero then it is
again an indecomposable representation of Q with dimension vector equal to CΓ(x)
where CΓ is the Coxeter transformation.
The following result proves part (1) of Gabriel’s Theorem 11.39.
Proposition 11.44. Let Q be a quiver whose underlying graph Γ is a union of
Dynkin diagrams. Assume M is an indecomposable representation of Q. Then
dim M belongs to ΔΓ.
Proof. By Lemma 11.41 we can assume that Γ is connected, and by Lemma 11.43
we can assume that Q has standard labelling.
Let M have dimension vector x ∈ Z^n. We apply the Coxeter transformation
C = CΓ to x. By Lemma 10.16, there is some r ≥ 0 such that C^r(x) ≱ 0. We must
have r ≥ 1 (since x ≥ 0), and we take r minimal with this property. Then there is
some j such that for

    τ := sj ◦ sj−1 ◦ . . . ◦ s1 ◦ C^{r−1}

we have τ(x) ≱ 0; we also take j minimal. Then sj−1 ◦ . . . ◦ s1 ◦ C^{r−1}(x) ≥ 0 but
applying sj gives τ(x) ≱ 0. Let

    M′ := Σ_{j−1}+ . . . Σ1+ (Σ^{r−1}(M)).

By minimality of r and j and by Proposition 11.12, this representation is non-
zero. (If at some intermediate step we obtained the zero representation then the
corresponding dimension vector would be a multiple of the unit vector, and the
associated reflection would produce a vector with non-positive entries, contradicting
the minimality of r and j.) But the reflections take indecomposable representations
to indecomposable representations, or to zero, see Theorem 11.25. That is, M′
must be indecomposable. Then sj takes the dimension vector of M′ to τ(x) ≱ 0.
This cannot be the dimension vector of a representation. Therefore Theorem 11.25
implies that M′ ≅ Sj, and hence M′ has dimension vector εj ∈ ΔΓ, which is a
root. Then dim M = (s1 s2 . . . sn)^{r−1} s1 . . . s_{j−1}(εj) and hence it belongs to ΔΓ,
by Lemma 10.9. □

Example 11.45. We give an example to illustrate the proof of Proposition 11.44.
Consider the quiver Q given by

    1 ←− 2 ←− 3

We have computed a formula for CΓ of type An in Example 10.15, which in this
case is

    CΓ(x1, x2, x3) = (−x1 + x2, −x1 + x3, −x1).

By definition Σ = Σ3+Σ2+Σ1+. Consider an indecomposable representation M of
Q with dimension vector x = (0, x2, x3) and x2, x3 ≥ 1. We have

    CΓ(x) = (x2, x3, 0) ≥ 0,   CΓ²(x) = (−x2 + x3, −x2, −x2) ≱ 0.

We know therefore that Σ(M) =: N is a non-zero indecomposable representation
of Q with dimension vector (x2, x3, 0) and Σ(N) = 0. Since N ≇ S1
we can say that Σ1+(N) must be indecomposable. This has dimension vector
s1(x2, x3, 0) = (x3 − x2, x3, 0).
We have s2(x3 − x2, x3, 0) = (x3 − x2, −x2, 0) ≱ 0. This is not a dimension
vector, and it follows that Σ2+(Σ1+(N)) = 0 and therefore Σ1+(N) ≅ S2. Hence
(x3 − x2, x3, 0) = ε2 and x3 = 1, x2 = 1, and M has dimension vector (0, 1, 1).
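The arithmetic in this example is easy to mechanise. A small Python sketch (our
own, not part of the original text; the starting vector (0, 2, 3) is a hypothetical
choice just to show the iteration):

    def coxeter(x):
        # Coxeter transformation for 1 <-- 2 <-- 3, as in Example 10.15
        x1, x2, x3 = x
        return (-x1 + x2, -x1 + x3, -x1)

    x = (0, 2, 3)
    print(coxeter(x))            # (2, 3, 0), still >= 0
    print(coxeter(coxeter(x)))   # (1, -2, -2), not >= 0
    # As in the example, the argument then forces x2 = x3 = 1, i.e. dim M = (0, 1, 1).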
The following proposition proves part (2) of Gabriel’s theorem, thus completing
the proof of Theorem 11.39 (recall that part (1) has been proved in Proposi-
tion 11.44).
Proposition 11.46. Let Q be a quiver whose underlying graph Γ is a union of
Dynkin diagrams. For every positive root x ∈ ΔΓ there is a unique indecomposable
representation M of Q whose dimension vector is equal to x.
Proof. By Lemma 11.41 we can assume that Γ is connected, and by Lemma 11.43
we can assume that Q has standard labelling.
As in the proof of Proposition 11.44 there are r ≥ 1 and j with 1 ≤ j ≤ n such
that sj ◦ . . . ◦ s1 ◦ C^{r−1}(x) ≱ 0 and where r, j are minimal with these properties.
Let y := sj−1 ◦ . . . ◦ s1 ◦ C^{r−1}(x), so that y ≥ 0 but sj(y) ≱ 0.
Now, y and sj(y) are in ΔΓ and it follows that y = εj (in fact, y is the
dimension vector of a representation, and this must be Sj since sj(y) ≱ 0). Take
the representation Sj of the quiver σj−1 . . . σ1Q and let

    M := (Σ^{−1})^{r−1} Σ1− . . . Σ_{j−1}− (Sj).

This is an indecomposable representation of Q and has dimension vector x.
If M′ is also an indecomposable representation of Q with dimension vector x
then Σ_{j−1}+ . . . Σ1+ Σ^{r−1}(M′) has dimension vector εj, hence is isomorphic to Sj,
and it follows that M′ is isomorphic to M. □

Gabriel’s theorem allows one to determine all indecomposable representations of
a quiver of Dynkin type, by knowing the roots of the associated quadratic form. In
particular, this works for all possible orientations of the arrows of the quiver (see
Lemma 11.43).
We illustrate this with an example.
Example 11.47. We have calculated the roots for the Dynkin diagram Γ of type An
in Example 10.8. The positive roots are

    αr,s = εr + εr+1 + . . . + εs for 1 ≤ r ≤ s ≤ n.


According to Theorem 11.39, these positive roots are in bijection with the indecom-
posable representations of any quiver Q with underlying graph Γ. In particular, any
such quiver of Dynkin type An has precisely n(n + 1)/2 indecomposable
representations (up to isomorphism).
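This count is easy to confirm by brute force: since qΓ is positive definite, the
positive roots lie in a small box of integer vectors, and one can simply enumerate.
A Python sketch for type An (our own illustration, not part of the original text):

    from itertools import product

    def q_An(x):
        """Tits form of the Dynkin diagram A_n: sum x_i^2 - sum x_i x_{i+1}."""
        return sum(v * v for v in x) - sum(x[i] * x[i + 1] for i in range(len(x) - 1))

    n = 4
    positive_roots = [x for x in product(range(4), repeat=n) if q_An(x) == 1]
    print(len(positive_roots), n * (n + 1) // 2)   # 10 10
    # The roots found are exactly the vectors eps_r + ... + eps_s with 1 <= r <= s <= n.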
We can write down an indecomposable representation with dimension vector
αr,s. This should exist independent of the orientation of Q. In fact, we can see this
directly. Take the representation M where

    M(i) = K if r ≤ i ≤ s, and M(i) = 0 otherwise.

Suppose α is an arrow i → j of Q, so that j = i ± 1. Then set

    M(α) = idK if r ≤ i, j ≤ s, and M(α) = 0 otherwise.

Then these representations are a complete set of indecomposable representations of
a Dynkin quiver of type An.
Exercise 11.6. Let Q be the quiver of type An with standard labelling. Verify
directly that the representation with dimension vector αr,s (defined above) is
indecomposable.
Let Q be a quiver which is the disjoint union of quivers Qt for 1 ≤ t ≤ m,
and where each Qt has an underlying graph which is a Dynkin diagram of
type A, D or E. By Lemma 9.19 the indecomposable representations of Q are
precisely the extensions by zero of the indecomposable representations of the Qt.
Now by Remark 11.42, the observation following Lemma 11.41, we deduce that
the dimension vectors of these indecomposable representations are precisely the
positive roots of the associated quadratic form.
We illustrate this with an example.
Example 11.48. Let Q be the (non-connected) quiver

1 ←− 2 3 ←− 4 ←− 5

which is equal to Q1 ∪ Q2 where Q1 is 1 ←− 2 and Q2 is 3 ←− 4 ←− 5. So the
underlying graph Γ = Γ1 ∪ Γ2 is a union of a Dynkin diagram of type A2 and a
Dynkin diagram of type A3.
By Lemma 9.19 the indecomposable representations of Q are precisely the
extensions by zero (see Definition 9.16) of the indecomposable representations of
Q1 and Q2, and by Remark 11.42 the set of roots of Γ can be interpreted as the
union of the set of roots of Γ1 and of Γ2.
Recall the Gram matrix of Γ, see Definition 10.2. This has block form

    GΓ = ( GΓ1   0  )
         (  0   GΓ2 )

where

    GΓ1 = (  2  −1 )      GΓ2 = (  2  −1   0 )
          ( −1   2 ),            ( −1   2  −1 )
                                 (  0  −1   2 ).

This shows that the quadratic form qΓ is equal to

    qΓ(x1, x2, x3, x4, x5) = qΓ1(x1, x2) + qΓ2(x3, x4, x5).

For x ∈ Z^5, we get qΓ(x) = 1 if and only if one of qΓ1(x1, x2) and qΓ2(x3, x4, x5)
is equal to 1 and the other is zero, since the quadratic forms are positive definite (see
Proposition 10.10). Hence a root of Γ is either a root of Γ1 or a root of Γ2.
Exercise 11.7. Explain briefly why this holds in general. That is, if Q is any
quiver where all connected components are of Dynkin type A, D or E then the
indecomposable representations of Q are in bijection with the positive roots of the
quadratic form qΓ of Q.

EXERCISES

11.8. Let Q be the quiver of Dynkin type A3 with orientation


α1 α2
1 −→ 2 ←− 3

Assume M is an indecomposable representation of Q which is not simple.


(a) Show that M(α1 ) and M(α2 ) must be injective, and that

M(2) = im(M(α1 )) + im(M(α2 )).

(b) Determine all M such that M(1) = 0, by using the classification of inde-
composable representations for Dynkin type A2 . Similarly determine all
M such that M(3) = 0.
(c) Assume M(1) and M(3) are non-zero, deduce that then M(2) is non-
zero.

The following exercise uses the linear algebra proof of the dimension
formula, dimK (X + Y ) = dimK X + dimK Y − dimK (X ∩ Y ) where X, Y
are subspaces of some finite-dimensional vector space.
11.9. Let Q be as in Exercise 11.8. Take a representation M of Q which satisfies
the conditions in part (a) of Exercise 11.8. Let D := im(M(α1 ))∩im(M(α2 )),
a subspace of M(2).
(a) Explain why M(2) has a basis B={x1 , . . . , xd ; v1 , . . . , vm ; w1 , . . . , wn }
such that
(i) {x1 , . . . , xd } is a basis of D;
(ii) {x1 , . . . , xd ; v1 , . . . , vm } is a basis of im(M(α1 ));
(iii) {x1 , . . . , xd ; w1 , . . . , wn } is a basis of im(M(α2 )).
(b) Explain why M(1) has a basis of the form {a1, . . . , ad, a′1, . . . , a′m},
where M(α1)(ai) = xi and M(α1)(a′j) = vj. Similarly, explain why
M(3) has a basis {b1, . . . , bd, b′1, . . . , b′n} such that M(α2)(bi) = xi
and M(α2)(b′j) = wj.
(c) Show that each xi gives rise to an indecomposable direct summand of M
of the form K −→ K ←− K. Moreover, show that each vj gives rise to
an indecomposable direct summand of M of the form K −→ K ←− 0.
Similarly each wj gives rise to an indecomposable direct summand of
M of the form 0 −→ K ←− K.
11.10. Let Q be the quiver of Dynkin type A3 with the orientation as in Exer-
cise 11.8. Explain how Exercises 11.8 and 11.9 classify the indecomposable
representations of this quiver of type A3 . Confirm that the dimension vectors
of the indecomposable representations are precisely the positive roots for the
Dynkin diagram A3 .
11.11. Consider quivers whose underlying graph is the Dynkin diagram of type A3 .
We have classified the indecomposable representations for the quiver
α1 α2
1 −→ 2 ←− 3

in the previous exercises. Explain how the general results on the reflection maps
Σj± imply Gabriel’s theorem for the other two possible orientations.
11.12. The following shows that Σj− does not take subrepresentations to subrepre-
sentations in general. Let Q be the quiver of type A3 with labelling

β1 β2
1 ←− j −→ 2

Define a representation N of Q as follows

span{f1 } ←− span{e1 , e2 } −→ span{f2 }



and define the maps by N(β1 )(e1 ) = f1 and N(β1 )(e2 ) = 0, and moreover
N(β2 )(e1 ) = 0 and N(β2 )(e2 ) = f2 .
(a) Show that CN = N(1) × N(2) and hence that N − (j ) = 0.
(b) Let U be the subrepresentation of N given by

span{f1 } ←− span{e1 + e2 } −→ span{f2 }.

Compute CU , and show that U − (j ) is non-zero.


11.13. Let Q be a quiver and j a sink in Q. We denote by α1, . . . , αt the arrows of
Q ending in j. Let M be an indecomposable representation of Q. Show that
the following statements are equivalent.
(i) We have Σ_{i=1}^t im(M(αi)) = M(j); equivalently, the map
(M(α1), . . . , M(αt)) is surjective.
(ii) Σj−Σj+(M) ≅ M.
(iii) dim Σj+(M) = sj(dim M).
(iv) M is not isomorphic to the simple representation Sj.
(v) Σj+(M) is indecomposable.
11.14. Formulate and verify an analogue of the previous exercise for the case when
j is a source in a quiver Q′.
11.15. Consider a quiver Q with underlying graph Γ a Dynkin diagram of type Dn;
assume that Q has standard labelling as in Example 10.4. For n = 4 we
have seen in Exercise 10.14 that (1, 1, 2, 1) is a root of Γ; a corresponding
indecomposable representation has been constructed in Lemma 9.5 (but note
that the labelling in Lemma 9.5 is not the standard labelling).
Use the stretching method from Chap. 9, and show that a quiver of type
Dn for n ≥ 4 with standard labelling has indecomposable representations
with dimension vectors

    (1, 1, 2, . . . , 2, 1, . . . , 1, 0, . . . , 0)

with a entries equal to 2, followed by b entries equal to 1 and c entries
equal to 0, for any positive integers a, b and any non-negative integer c
such that a + b + c = n − 2.
In the following exercises, we take the Kronecker quiver Q with two arrows
α1, α2 : 1 −→ 2, to illustrate the various constructions and proofs in this
chapter.



11.16. Assume M is an indecomposable representation of the Kronecker quiver Q


which is not simple. Show that then
(i) ker(M(α1 )) ∩ ker(M(α2 )) = 0,
(ii) im(M(α1 )) + im(M(α2 )) = M(2).
(Hint: Apply Exercises 9.9 and 9.10.)
11.17. Assume M is a representation of the Kronecker quiver Q which satisfies

M(1) = ker(M(α1 )) ⊕ ker(M(α2 )) and M(2) = im(M(α1 )) = im(M(α2 )).

(a) Show that the restriction of M(α1) to ker(M(α2)) is an isomorphism
from ker(M(α2)) to M(2), and also that the restriction of M(α2) is an
isomorphism from ker(M(α1)) to M(2).
(b) Show that M is decomposable as follows. Take a basis {b1 , . . . , br } of
M(2) and take v1 , . . . , vr in ker(M(α2 )) such that M(α1 )(vi ) = bi .
Choose also w1 , . . . , wr in ker(M(α1 )) such that M(α2 )(wi ) = bi .
(i) Show that {v1, . . . , vr, w1, . . . , wr} is a basis for M(1).
(ii) Deduce that M is a direct sum ⊕_{i=1}^r Ni of r representations,
where Ni(1) = span{vi, wi} and Ni(2) = span{bi}.
(iii) Show that N1 is indecomposable, and each Ni is isomorphic to N1.
11.18. Assume M is a representation of the Kronecker quiver such that
dimK M(1) = 4 and dimK M(2) = 2. Show by applying the previous
exercises that M is decomposable.
With the methods of this chapter we can also classify indecomposable
representations for the Kronecker quiver Q. Note that we must use the
reflections Σj− in the next exercise since vertex 1 is a source in Q (and vertex
2 is a source in σ1Q).
11.19. Let Q be the Kronecker quiver as above, and Γ its underlying graph,
the Euclidean diagram of type Ã1. The reflections s1, s2 and the Coxeter
transformation CΓ were computed in Exercises 10.2 and 10.9. Assume M is
an indecomposable representation of Q which is not simple.
(a) Explain briefly why Σ2−Σ1−(M) is either zero, or is an indecomposable
representation of Q, and if it is non-zero, show that it has dimension
vector CΓ(a, b)^t where (a, b) = dim M.
(b) Show also that Σ2−Σ1−(M) = 0 for M not simple precisely when M
has dimension vector (2, 1). Construct such an M, either directly, or by
exploiting that Σ1−(M) ≅ S2.
11.20. The aim is to classify the indecomposable representations M of the Kro-
necker quiver Q with dimension vector (a, b), where a > b ≥ 0.
(a) If a > b ≥ 0, let (a1, b1)^t := CΓ(a, b)^t. Check that b1 < a1 < a and that
a1 − b1 = a − b.
(b) Define a sequence (ak, bk)^t by (ak, bk)^t = CΓ^k(a, b)^t. Then
a > a1 > a2 > . . . and ak > bk. Hence there is a largest r ≥ 0 such
that ar > br ≥ 0.
(c) Let Σ = Σ2−Σ1−. Explain why the representation N = Σ^r(M)
is indecomposable with dimension vector (ar, br)^t and Σ(N) is the zero
representation.
(d) Deduce that either N = S1, or Σ1−(N) = S2. Hence deduce that M
has dimension vector (a, a − 1) with a ≥ 1. Note that this is a root
of Γ. Explain briefly why this gives a classification of indecomposable
representations of Q with dimension vector of the form (a, a − 1).
11.21. Suppose M is an indecomposable representation of the Kronecker
quiver Q with dimension vector (a, b), where 0 ≤ a < b. Show that
(a, b) = (a, a + 1) and hence is also a root. (Hint: this is analogous to
Exercise 11.20.)
11.22. Let A be the 4-dimensional algebra

A := K[X, Y ]/(X2 , Y 2 ).

It has basis the cosets of 1, X, Y, XY ; we identify 1, X, Y, XY with their


cosets in A. Let M be a representation of the Kronecker quiver Q as above,
we construct an A-module VM with underlying space VM = M(1) × M(2).
We define the action of X and of Y via matrices, written in block form

    x = (   0      0 )      y = (   0      0 )
        ( M(α1)    0 ),         ( M(α2)    0 ).

(a) Check that x² = 0, y² = 0 and that xy = yx, that is, VM is an A-
module.
(b) Assume M is an indecomposable representation, and at least one of
the M(αi ) is surjective. Show that then VM is an indecomposable A-
module.
(c) Let M and N be representations of Q and let ϕ : VM → VN be an
isomorphism. Assume at least one of the M(αi ) is surjective. Show that
then M is isomorphic to N .
11.23. Let A be as in Exercise 11.22. Assume that V is an indecomposable A-
module on which XY acts as zero. Let U := {v ∈ V | Xv = 0 and Y v = 0}.
We assume 0 ≠ U ≠ V.
(a) Show that U is an A-submodule of V .
(b) Write V = W ⊕ U for some subspace W of V . Check that X(W ) ⊆ U
and Y (W ) ⊆ U .

(c) Fix a basis of V which is a union of bases of W and of U , write the


action of X and Y as matrices with respect to this basis. Check that
these matrices have block form

    x ↦ (  0   0 )      y ↦ (  0   0 )
        ( A1   0 ),         ( A2   0 ).

Define M = (M(1), M(2), M(α1 ), M(α2 )) to be the representation


of the Kronecker quiver where M(1) = W and M(2) = U , and
M(αi ) = Ai .
(d) Show that M is an indecomposable representation.
11.24. Let Q be the quiver of type D̃4 with branch vertex 3, with arrows α, β, δ
from the branch vertex 3 to the three outer vertices 1, 2, 4, and one further
arrow γ : 5 → 3.
Let A = KQ/I where I is the ideal spanned by {βγ, αγ, δγ}. Identify the
trivial path ei with the coset ei + I, and identify each arrow of Q with its
coset in A. Then A is an algebra of dimension 9, where the product of any
two arrows is zero in A.
(a) Explain briefly why A-modules are the same as representations M of Q
which satisfy the paths in I , so that, for example, M(β) ◦ M(γ ) = 0.
(b) Let M be such a representation corresponding to an A-module.
(i) Show that the kernel of M(γ ) gives rise to direct summands
isomorphic to the simple representation S5 .
(ii) Assume that M(γ ) is injective. Show that M is the direct sum of
two representations, where one has dimension vector (0, 0, d, 0, d),
with d = dimK M(5), and the other is zero at vertex 5. Moreover,
the direct summand with dimension vector (0, 0, d, 0, d) is the
direct sum of d copies of an indecomposable representation with
dimension vector (0, 0, 1, 0, 1).
(c) Deduce that A has finite representation type.
(d) Why does this not contradict Gabriel’s theorem?
Chapter 12
Proofs and Background

In this chapter we will give the proofs, and fill in the details, which were postponed
in Chap. 11. First we will prove the results on reflection maps from Sect. 11.1,
namely the compatibility of Σj+ and Σj− with direct sums (see Lemmas 11.10
and 11.17) and the fact that Σj+ and Σj− compose to the identity map (under certain
assumptions), see Propositions 11.22 and 11.24. This is done in Sect. 12.1.
Secondly, in Sect. 11.2 we have shown that every connected quiver whose
underlying graph is not a Dynkin diagram has infinite representation type. The
crucial step is to prove this for a quiver whose underlying graph is a Euclidean
diagram. We have done this in detail in Sect. 11.2 for types Ãn and D̃n. In Sect. 12.2
below we will provide the technically more involved proofs that quivers of types Ẽ6,
Ẽ7 and Ẽ8 have infinite representation type.
Finally, we have two sections containing some background and outlook. We give
a brief account of root systems as they occur in Lie theory, and we show that the
set of roots, as defined in Chap. 10, is in fact a root system in this sense. Then we
provide an informal account of Morita equivalence.

12.1 Proofs on Reflections of Representations

In this section we give details for the results on the reflection maps Σj±. Recall that
we define these when j is a sink or a source in a quiver Q. We use the notation
as before: If j is a sink of Q then we label the arrows ending at j by α1, . . . , αt
(see Definition 11.7), and we let αi : i → j for 1 ≤ i ≤ t. As explained in
Remark 11.13, there may be multiple arrows, and if so then we identify the relevant
vertices at which they start (rather than introducing more notation). If j is a source of
Q′ then we label the arrows starting at vertex j by β1, . . . , βt (see Definition 11.14),
where βi : j → i, and we make the same convention as above in the case of multiple
arrows, see also Remark 11.20.


12.1.1 Invariance Under Direct Sums

The following is Lemma 11.10, which we restate for convenience.

Lemma (Lemma 11.10). Suppose Q is a quiver with a sink j, and M is a
representation of Q. Assume M = X ⊕ Y is a direct sum of subrepresentations;
then Σj+(M) = Σj+(X) ⊕ Σj+(Y).
Proof. We use the notation as in Definition 11.7. Recall that for any representation
V of Q, the representation Σj+(V) is defined as follows (see Definition 11.9): we
have V+(r) = V(r) for all vertices r ≠ j, and V+(j) is the kernel of the linear
map

    (V(α1), . . . , V(αt)) : ⊕_{i=1}^t V(i) → V(j), (v1, . . . , vt) ↦ Σ_{i=1}^t V(αi)(vi).

Moreover, if γ is an arrow not adjacent to j then we set V+(γ) = V(γ); finally,
for i = 1, . . . , t the linear map V+(ᾱi) : V+(j) → V+(i) = V(i) is defined by
projecting onto the i-th coordinate. We apply this for V = M and for V = X or Y.
(1) We show that Σj+(X) and Σj+(Y) are subrepresentations of Σj+(M):
We prove it for Σj+(X); the proof for Σj+(Y) is analogous. For the definition of
subrepresentation, see Definition 9.6.
If r ≠ j then X+(r) = X(r) is a subspace of M(r) = M+(r), since X is a
subrepresentation of M. If γ is an arrow of Q not adjacent to j, say γ : r → s,
then we claim that X+(γ) is the restriction of M+(γ) to X+(r) and that it maps into
X+(s): by definition X+(γ) = X(γ) and, since X is a subrepresentation, it is the
restriction of M(γ) to X(r) and maps into X(s). In addition, M+(γ) = M(γ) and
X+(r) = X(r) and X+(s) = X(s), hence the claim holds.
Let r = j, and take w ∈ X+(j); then w = (x1, . . . , xt) with xi ∈ X(i),
such that Σ_{i=1}^t X(αi)(xi) = 0. Since X is a subrepresentation of M we have
xi ∈ X(i) ⊆ M(i), and

    Σ_{i=1}^t M(αi)(xi) = Σ_{i=1}^t X(αi)(xi) = 0,

that is, w ∈ M+(j). Moreover, X+(ᾱi)(w) = xi = M+(ᾱi)(w), that is, X+(ᾱi) is
the restriction of M+(ᾱi) to X+(j). We have now proved (1).
(2) We show now that Σj+(M) is the direct sum of Σj+(X) and Σj+(Y),
that is, we verify Definition 9.9. We must show that for each vertex r, we have
M+(r) = X+(r) ⊕ Y+(r) as vector spaces. This is clear for r ≠ j: since
M = X ⊕ Y, we have

    M+(r) = M(r) = X(r) ⊕ Y(r) = X+(r) ⊕ Y+(r).


Let r = j. If w ∈ X+(j) ∩ Y+(j) then w = (x1, . . . , xt) = (y1, . . . , yt) with
xi ∈ X(i) and yi ∈ Y(i) for each i. So xi = yi ∈ X(i) ∩ Y(i) = 0 and w = 0.
Hence X+(j) ∩ Y+(j) = 0.
We are left to show that M+(j) = X+(j) + Y+(j). Take an element
(m1, . . . , mt) ∈ M+(j); then mi ∈ M(i) and Σ_{i=1}^t M(αi)(mi) = 0. Since
M(i) = X(i) ⊕ Y(i) we have mi = xi + yi with xi ∈ X(i) and yi ∈ Y(i). Therefore


    0 = Σ_{i=1}^t M(αi)(mi) = Σ_{i=1}^t M(αi)(xi + yi) = Σ_{i=1}^t (M(αi)(xi) + M(αi)(yi))
      = Σ_{i=1}^t M(αi)(xi) + Σ_{i=1}^t M(αi)(yi) = Σ_{i=1}^t X(αi)(xi) + Σ_{i=1}^t Y(αi)(yi).


Now, because X and Y are subrepresentations, we know that Σ_{i=1}^t X(αi)(xi) lies in
X(j) and that Σ_{i=1}^t Y(αi)(yi) lies in Y(j). We have M(j) = X(j) ⊕ Y(j),
so the intersection of X(j) and Y(j) is zero. It follows that both Σ_{i=1}^t X(αi)(xi) = 0
and Σ_{i=1}^t Y(αi)(yi) = 0. This means that (x1, . . . , xt) ∈ X+(j) and
(y1, . . . , yt) ∈ Y+(j), and hence

    (m1, . . . , mt) = (x1, . . . , xt) + (y1, . . . , yt) ∈ X+(j) + Y+(j).

The other inclusion follows from part (1) since X+(j) and Y+(j) are subspaces of
M+(j). □
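At the sink j the lemma is also visible at the level of matrices: the structure map of
a direct sum X ⊕ Y is block diagonal, so its kernel, which is (X ⊕ Y)+(j), splits
as the direct sum of the two kernels. A quick numerical illustration (our own sketch
with hypothetical matrices, not part of the original text, assuming numpy):

    import numpy as np

    def kernel_dim(mat):
        return mat.shape[1] - (np.linalg.matrix_rank(mat) if mat.size else 0)

    X = np.array([[1., 0.]])          # data of X at the sink: a map K^2 -> K
    Y = np.array([[0., 0., 1.]])      # data of Y at the sink: a map K^3 -> K
    XY = np.block([[X, np.zeros((1, 3))],
                   [np.zeros((1, 2)), Y]])   # block diagonal map for X + Y

    print(kernel_dim(XY), kernel_dim(X) + kernel_dim(Y))   # 3 3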

We now give the proof of the analogous result for the reflection map Σj−. This
is Lemma 11.17, which has several parts. We explained right after Lemma 11.17
that it only remains to prove part (a) of that lemma, which we also restate here for
convenience.
Lemma (Lemma 11.17 (a)). Suppose Q′ is a quiver with a source j, and N is a
representation of Q′. Assume N = X ⊕ Y is a direct sum of subrepresentations;
then Σj−(N) is isomorphic to the direct product Σj−(X) × Σj−(Y) of representations.
Proof. We use the notation as in Definition 11.14. We recall from Definition 11.16
how the reflection Σj−(V) is defined for any representation V of Q′. For
vertices r ≠ j we set V−(r) = V(r), and V−(j) is the factor space
V−(j) = (V(1) × . . . × V(t))/CV, where CV is the image of the linear map

    V(j) → V(1) × . . . × V(t), v ↦ (V(β1)(v), . . . , V(βt)(v)).

In this proof, we will take the direct product of the representations Σj−(X) and
Σj−(Y), see Exercise 9.13.
We will first construct an isomorphism of vector spaces N−(j) ≅ X−(j) × Y−(j),
and then use this to prove the lemma. Throughout, we write nr for an element in
N(r), and we use that nr = xr + yr for unique elements xr ∈ X(r) and yr ∈ Y(r).

(1) We define a linear map ψ : N − (j ) → X− (j ) × Y − (j ). We set

ψ((n1 , . . . , nt ) + CN ) := ((x1 , . . . , xt ) + CX , (y1 , . . . , yt ) + CY ).

This is well-defined: If (n1 , . . . , nt ) ∈ CN then there is an n ∈ N(j ) such that

(n1 , . . . , nt ) = (N(β1 )(n), . . . , N(βt )(n)).

Since N = X ⊕ Y we can write n = x + y and ni = xi + yi with x ∈ X(j ) and


xi ∈ X(i), and y ∈ Y (j ) and yi ∈ Y (i). Moreover, for each i we have

xi + yi = ni = N(βi )(n) = N(βi )(x) + N(βi )(y) = X(βi )(x) + Y (βi )(y).

We get (x1 , . . . , xt ) = (X(β1 )(x), . . . , X(βt )(x)) ∈ CX and similarly


(y1 , . . . , yt ) ∈ CY .
The map ψ is an isomorphism: Suppose ψ((n1 , . . . , nt ) + CN ) = 0 then
(x1 , . . . , xt ) ∈ CX , that is, it is equal to (X(β1 )(x), . . . , X(βt )(x)) for some
x ∈ X(j ), and similarly we have (y1 , . . . , yt ) = (Y (β1 )(y), . . . , Y (βt )(y)) for
some y ∈ Y (j ). It follows that

ni = xi + yi = X(βi )(x) + Y (βi )(y) = N(βi )(x) + N(βi )(y) = N(βi )(x + y)

and (n1 , . . . , nt ) = (N(β1 )(n), . . . , N(βt )(n)) for n = x + y, hence it lies in CN .


This shows that ψ is injective. The map ψ is clearly surjective, and we have now
shown that it is an isomorphism.
(2) We define a homomorphism of representations
θ := (θr) : Σj−(N) → Σj−(X) × Σj−(Y) as follows. Let n ∈ N−(r), and if
r ≠ j write n = x + y for x ∈ X(r), y ∈ Y(r); then

    θr(n) := (x, y) if r ≠ j, and θr(n) := ψ(n) if r = j.

For each r, the map θr is a vector space isomorphism; for r ≠ j this holds because

    N−(r) = N(r) = X(r) ⊕ Y(r) = X−(r) ⊕ Y−(r)

and for r = j it holds by (1).


We are left to check that the relevant diagrams commute (see Definition 9.4).
(i) Let γ : r → s be an arrow of Q which is not adjacent to j , then we must show
that θs ◦ N − (γ ) = (X− (γ ), Y − (γ )) ◦ θr . Take n ∈ N − (r) = N(r), then

(θs ◦ N − (γ ))(n) = (θs ◦ N(γ ))(n) = θs (N(γ ))(x + y)


= θs (X(γ )(x) + Y (γ )(y)) = (X(γ )(x), Y (γ )(y))
= (X(γ ), Y (γ ))(x, y) = ((X− (γ ), Y − (γ )) ◦ θr )(n).

(ii) This leaves us to deal with an arrow β̄i : i → j , hence we must show that we
have θj ◦ N − (β̄i ) = (X− (β̄i ), Y − (β̄i )) ◦ θi . Let ni ∈ N − (i) = N(i) = X(i) ⊕ Y (i),
so that ni = xi + yi with xi ∈ X(i) and yi ∈ Y (i). Then

(θj ◦ N − (β̄i ))(ni ) = θj ((0, . . . , 0, ni , 0, . . . , 0) + CN )


= ψ((0, . . . , 0, ni , 0, . . . , 0) + CN )
= ((0, . . . , 0, xi , 0, . . . , 0) + CX , (0, . . . , 0, yi , 0, . . . , 0) + CY )
= (X− (β̄i )(xi ), Y − (β̄i )(yi ))
= (X− (β̄i ), Y − (β̄i ))(xi , yi )
= ((X− (β̄i ), Y − (β̄i )) ◦ θi )(ni ),

as required. 

12.1.2 Compositions of Reflections

In this section we will give the proofs for the results on compositions of the
reflection maps Σj±. More precisely, we have stated in Propositions 11.22 and 11.24
that under certain assumptions on the representations M and N we have that
Σj−Σj+(M) ≅ M and Σj+Σj−(N) ≅ N, respectively. For our purposes the most
important case is when M and N are indecomposable (and not isomorphic to
the simple representation Sj), and then the assumptions are always satisfied (see
Exercise 11.2). The following two propositions are crucial for the proof that Σj+
and Σj− give mutually inverse bijections as described in Theorem 11.25.
We start by proving Proposition 11.22, which we restate here.
Proposition (Proposition 11.22). Assume j is a sink of a quiver Q and let
α1, . . . , αt be the arrows in Q ending at j. Suppose M is a representation of Q
such that the linear map

    (M(α1), . . . , M(αt)) : ⊕_{i=1}^t M(i) → M(j), (m1, . . . , mt) ↦ M(α1)(m1) + . . . + M(αt)(mt)

is surjective. Then the representation Σj−Σj+(M) is isomorphic to M.

Proof. We set N = Σj+(M) = M+ and N− = Σj−(N), and we want to show that
N− is isomorphic to M.
(1) We claim that CN is equal to M + (j ). By definition, we have

CN = {(N(ᾱ1 )(y), . . . , N(ᾱt )(y)) | y ∈ N(j )}.



Now, N(j) = M+(j), and an element y ∈ M+(j) is in particular of the form

y = (m1, . . . , mt) ∈ ⊕_{i=1}^t M(i).

The map N(ᾱi ) is the projection onto the i-th coordinate, therefore

(N(ᾱ1 )(y), . . . , N(ᾱt )(y)) = (m1 , . . . , mt ) = y.

This shows that CN = M + (j ).


(2) We define a homomorphism of representations ϕ = (ϕr) : N− → M with linear maps ϕr : N−(r) → M(r), as follows.
If r ≠ j then N−(r) = N(r) = M+(r) = M(r) and we take ϕr to be the identity map, which is an isomorphism. Now let r = j; then
N−(j) = (⊕_{i=1}^t N(i))/CN = (⊕_{i=1}^t M(i))/M+(j)

by part (1), and since N(i) = M+(i) = M(i). We define ϕj : N−(j) → M(j) by


ϕj((m1, . . . , mt) + CN) = Σ_{i=1}^t M(αi)(mi) ∈ M(j).

Then ϕj is well-defined: if (m1, . . . , mt) ∈ CN = M+(j) then by definition we have Σ_{i=1}^t M(αi)(mi) = 0. Moreover, ϕj is injective: indeed, if Σ_{i=1}^t M(αi)(mi) = 0 then (m1, . . . , mt) ∈ M+(j) = CN. Furthermore, ϕj is surjective, by assumption, and we have shown that ϕj is an isomorphism.
Finally, we check that ϕ is a homomorphism of representations. If γ : r → s is an arrow not adjacent to j then N−(γ) = M(γ), and both ϕr and ϕs are the identity maps, so the relevant square commutes. This leaves us to consider the maps corresponding to the arrows αi : i → j, that is, the squares formed by N−(αi), ϕi, ϕj and M(αi).

Since ϕi is the identity map of M(i) and the maps N − (αi ) are induced by inclusion
maps, we have

(M(αi )◦ϕi )(mi ) = M(αi )(mi ) = ϕj ((0, . . . , 0, mi , 0, . . . , 0)+CN ) = (ϕj ◦N − (αi ))(mi ).

Thus, ϕ : N− = Σj−Σj+(M) → M is an isomorphism of representations, as claimed.
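The kernel construction used in this proof is easy to experiment with in a computer algebra system. The following Python (sympy) sketch uses made-up toy matrices, not data from the text: for a sink j with two arrows, it computes M+(j) = ker(M(α1), M(α2)) and confirms the dimension count dim M+(j) = Σi dim M(i) − dim M(j), which holds under the surjectivity hypothesis of the proposition.

```python
import sympy as sp

# toy data (assumed for illustration): a sink j with two arrows a1, a2 : i -> j,
# where M(1) = M(2) = K^2 and M(j) = K^2
M_a1 = sp.Matrix([[1, 0], [0, 1]])
M_a2 = sp.Matrix([[1, 1], [0, 1]])

# the map (M(a1), M(a2)) : M(1) ⊕ M(2) -> M(j)
total = sp.Matrix.hstack(M_a1, M_a2)
assert total.rank() == total.rows          # the surjectivity hypothesis

kernel = total.nullspace()                 # a basis of M+(j) = ker(M(a1), M(a2))
print(len(kernel), total.cols - total.rows)   # both 2: dim M+(j) = 2 + 2 - 2
```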

We will now prove the analogous result for the other composition. We also restate this proposition for convenience.
Proposition (Proposition 11.24). Assume j is a source of a quiver Q and let β1, . . . , βt be the arrows in Q starting at j. Suppose N is a representation of Q such that ∩_{i=1}^t ker(N(βi)) = 0. Then the representations Σj+Σj−(N) and N are isomorphic.
Proof. Let M = Σj−(N); then we want to show that M+ = Σj+(M) is isomorphic to N.
(1) We assume that the intersection of the kernels of the N(βi ) is zero, therefore the
linear map

ϕj : N(j ) → CN , y → (N(β1 )(y), . . . , N(βt )(y))

is injective. It is surjective by definition of CN , hence it is an isomorphism.


(2) We claim that M+(j) = CN: By definition

M+(j) = {(n1, . . . , nt) ∈ N(1) × . . . × N(t) | Σ_{i=1}^t M(β̄i)(ni) = 0}.

Now, M(β̄i )(ni ) = N − (β̄i )(ni ) = (0, . . . , 0, ni , 0, . . . , 0)+CN ∈ M(j ) = N − (j ).


So

Σ_{i=1}^t M(β̄i)(ni) = (n1, . . . , nt) + CN.

This is zero precisely when (n1 , . . . , nt ) ∈ CN . Hence the claim M + (j ) = CN


holds.
(3) Now we define a homomorphism of representations ϕ : N → M+ by setting ϕr as the identity map on N(r) = M(r) if r ≠ j and

ϕj : N(j) → M+(j) = CN, ϕj(y) = (N(β1)(y), . . . , N(βt)(y)).

Each of these linear maps is an isomorphism, and we are left to check that ϕ is a
homomorphism of representations.

If γ : r → s is an arrow which is not adjacent to j then ϕr and ϕs are identity


maps, and N(γ ) = M + (γ ), and the relevant diagram commutes.
Now let βi : j → i be an arrow; we require that the square formed by ϕj, ϕi, N(βi) and M+(βi) commutes.

Recall that M + (βi ) is the projection onto the i-th component, and ϕi is the identity,
and so we get for any y ∈ N(j ) that

(M + (βi )◦ϕj )(y) = M + (βi )(N(β1 )(y), . . . , N(βt )(y)) = N(βi )(y) = (ϕi ◦N(βi ))(y),

as required. Thus, ϕ : N → M+ = Σj+Σj−(N) is an isomorphism of representations.


12.2 All Euclidean Quivers Are of Infinite Representation Type

We have already proved in Sect. 11.2 that quivers with underlying graphs of type Ãn and of type D̃n have infinite representation type, over any field K. We will now deal with the three missing Euclidean diagrams Ẽ6, Ẽ7 and Ẽ8. Recall from Corollary 11.26 that the orientation of the arrows does not affect the representation type. So it suffices in each case to consider a quiver with a fixed chosen orientation of the arrows. We take the labelling as in Example 10.4. However, we do not take the orientation as in 10.4. Instead, we take the orientation so that the branch vertex is the only sink. This will make the notation easier (then we can always take the maps to be inclusions).
As a strategy, in each case we will first construct a special representation as in
Lemma 11.29 (see Definition 11.30). This will already imply infinite representation
type if the underlying field is infinite. This is not yet sufficient for our purposes
since we prove Gabriel’s theorem for arbitrary fields. Thus we then construct
representations of arbitrary dimensions over an arbitrary field and show that they
are indecomposable. The details for the general case are analogous to those in the
construction of the special representation.

12.2.1 Quivers of Type Ẽ6 Have Infinite Representation Type

Let Q be the quiver of Dynkin type E6 with the following labelling and orientation: the branch vertex 4 is the unique sink, with arrows 1 → 2 → 4, 6 → 5 → 4 and 3 → 4.

Lemma 12.1. The quiver Q has a special representation M with dimK M(3) = 2.

Proof. We define M to be a representation for which all maps are inclusion maps, so
we do not need to specify names for the arrows. We take M(4) to be a 3-dimensional
space, and all other M(i) are subspaces.

M(4) = span{e, f, g}
M(1) = span{g}
M(2) = span{f, g}
M(3) = span{e + f, f + g}
M(5) = span{e, f }
M(6) = span{e}.

According to the definition of a special representation (see Definition 11.30) we


must check that any endomorphism of M is a scalar multiple of the identity. Let
ϕ : M → M be a homomorphism of representations. Since all maps are inclusion
maps, it follows that for each i, the map ϕi is the restriction of ϕ4 to M(i).
Therefore ϕ4 (g) ∈ M(1) and hence ϕ4 (g) = c1 g for some c1 ∈ K.
Similarly ϕ4 (e) ∈ M(6) and ϕ4 (e) = c2 e for some c2 ∈ K. Furthermore,
ϕ4 (f ) ∈ M(2) ∩ M(5) = span{f } and therefore ϕ4 (f ) = c3 f for some c3 ∈ K.
We have ϕ4 (e + f ) ∈ M(3) so it must be of the form
ϕ4 (e + f ) = a(e + f ) + b(f + g) for a, b ∈ K. On the other hand, by linearity

ϕ4 (e + f ) = ϕ4 (e) + ϕ4 (f ) = c2 e + c3 f.

Now, e, f, g are linearly independent, and we can equate coefficients. We get b = 0


and a = c2 = c3 .
Similarly ϕ4(f + g) = a′(e + f) + b′(f + g) with a′, b′ ∈ K, but it is equal to c3f + c1g. We deduce a′ = 0 and b′ = c3 = c1.
Combining the two calculations above we see that c1 = c2 = c3 . But this means
that ϕ4 = c1 idM(4) and hence ϕ is a scalar multiple of the identity. 
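The linear algebra in this proof can also be checked mechanically. The following Python (sympy) sketch is only a verification of Lemma 12.1 with the bases stated above: it solves for all 3×3 matrices ϕ4 mapping each of the subspaces M(1), M(2), M(3), M(5), M(6) into itself, and prints the general solution, a scalar multiple of the identity.

```python
import sympy as sp

e, f, g = (sp.Matrix(v) for v in ([1, 0, 0], [0, 1, 0], [0, 0, 1]))

# bases of M(1), M(2), M(3), M(5), M(6) from the proof of Lemma 12.1
subspaces = [[g], [f, g], [e + f, f + g], [e, f], [e]]

A = sp.Matrix(3, 3, sp.symbols('a:9'))     # an unknown endomorphism phi_4
eqs = []
for basis in subspaces:
    B = sp.Matrix.hstack(*basis)
    # A maps span(B) into itself iff C*A*B = 0, where the rows of C
    # span the linear forms vanishing on span(B)
    C = sp.Matrix.vstack(*[w.T for w in B.T.nullspace()])
    eqs.extend(C * A * B)

sol = sp.solve(eqs, list(A.free_symbols), dict=True)[0]
print(A.subs(sol))     # a scalar multiple of the 3x3 identity matrix
```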


We have therefore found a special representation of Q, with M(3) two-dimensional. So if we extend the quiver by a new vertex ω and a new arrow ω → 3 we get an extended quiver Q̃. This quiver Q̃ has underlying graph a Euclidean diagram of type Ẽ6. By Lemma 11.29 we can construct pairwise non-isomorphic indecomposable representations Mλ of Q̃ for λ ∈ K. In particular, the quiver Q̃ has infinite representation type if the field K is infinite.
We will now define representations of arbitrarily large dimension for the above quiver Q̃ of type Ẽ6. Then we will show that these representations are indecomposable.
Definition 12.2. We fix m ∈ N, and we define a representation V = Vm of Q̃,
where V (4) has dimension 3m and all other spaces are subspaces, and all the linear
maps are inclusions. That is, let

V (4) = span{e1 , . . . , em , f1 , . . . , fm , g1 , . . . , gm }

and define the other spaces as follows,

V (1) = span{g1 , . . . , gm }
V (2) = span{f1 , . . . , fm , g1 , . . . , gm }
V (3) = span{e1 + f1 , . . . , em + fm , f1 + g1 , . . . , fm + gm }
V (5) = span{e1 , . . . , em , f1 , . . . , fm }
V (6) = span{e1 , . . . , em }
V (ω) = span{e1 + f1 , (e2 + f2 ) + (f1 + g1 ), . . . , (em + fm ) + (fm−1 + gm−1 )}.
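As a quick plausibility check of this definition, the following Python (sympy) sketch — an illustration with m = 3, not part of the construction itself — verifies that V(ω) is contained in V(3), so that the inclusion along the new arrow ω → 3 makes sense, and that dimK V(3) = 2m.

```python
import sympy as sp

m = 3
def unit(k):
    v = sp.zeros(3 * m, 1); v[k] = 1; return v

# basis of V(4) = K^{3m}, ordered e_1..e_m, f_1..f_m, g_1..g_m
e = [unit(i) for i in range(m)]
f = [unit(m + i) for i in range(m)]
g = [unit(2 * m + i) for i in range(m)]

V3 = [e[i] + f[i] for i in range(m)] + [f[i] + g[i] for i in range(m)]
Vw = [e[0] + f[0]] + [e[i] + f[i] + f[i - 1] + g[i - 1] for i in range(1, m)]

B3 = sp.Matrix.hstack(*V3)
print(B3.rank())   # 6 = 2m, so dim V(3) = 2m
# V(ω) ⊆ V(3): adjoining any spanning vector of V(ω) keeps the rank
print(all(sp.Matrix.hstack(B3, v).rank() == B3.rank() for v in Vw))   # True
```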

Remark 12.3. If we look at the restriction of Vm to the subquiver Q of type E6 then


we see that it is the direct sum of m copies of the special representation defined in
the proof of Lemma 12.1. The proof that the representation Vm is indecomposable
is thus similar to the proof of Lemma 11.29.
Lemma 12.4. For every m ∈ N, the representation Vm of Q̃ is indecomposable.

Proof. To prove that the representation is indecomposable, we use the criterion in


Lemma 9.11. So we take a homomorphism ϕ : Vm → Vm of representations and
assume ϕ 2 = ϕ. Then it suffices to show that ϕ is the zero homomorphism or the
identity.
Since all maps in the representation Vm are inclusion maps, any homomorphism of representations ϕ : Vm → Vm is given by a linear map ϕ4 : V(4) → V(4) such that ϕ4(V(j)) ⊆ V(j) for all vertices j of Q̃. We just write ϕ instead of ϕ4, to simplify notation.
We observe that V(5) ∩ V(ω) = span{e1 + f1}. Since V(5) and V(ω) are invariant under ϕ, it follows that e1 + f1 is an eigenvector of ϕ. Moreover, since ϕ² = ϕ by assumption, the corresponding eigenvalue is 0 or 1. We may assume that ϕ(e1 + f1) = 0, since otherwise we can replace ϕ by idVm − ϕ. This gives by linearity that ϕ(e1) = −ϕ(f1), and this element lies in V(6) ∩ V(2), which is zero, so that

ϕ(e1 ) = 0 = ϕ(f1 ).

Consider now ϕ(g1 ), this lies in V (1), and we also have

ϕ(g1 ) = ϕ(f1 ) + ϕ(g1 ) = ϕ(f1 + g1 ) ∈ V (3).

But V (1) ∩ V (3) = 0, therefore ϕ(g1 ) = 0. In total we know that


ϕ(e1 ) = ϕ(f1 ) = ϕ(g1 ) = 0.
To prove that ϕ is the zero map, we show by induction that
ϕ(ek ) = ϕ(fk ) = ϕ(gk ) = 0 for all 1 ≤ k ≤ m.
The case k = 1 has been shown above. For the inductive step we have by linearity
of ϕ that

ϕ(ek+1 ) + ϕ(fk+1 ) = ϕ(ek+1 + fk+1 )

= ϕ(ek+1 + fk+1 ) + ϕ(fk ) + ϕ(gk ) (by the induction hypothesis)

= ϕ((ek+1 + fk+1 ) + (fk + gk )).

Since ϕ preserves the spaces spanned by the ej and the fj , this element now belongs
to V (5)∩V (ω) = span{e1 +f1 }. Hence ϕ(ek+1 +fk+1 ) = λ(e1 +f1 ) for some scalar
λ ∈ K. Now using our assumption ϕ 2 = ϕ and that ϕ(e1 +f1 ) = ϕ(e1 )+ϕ(f1 ) = 0
by induction hypothesis, it follows that ϕ(ek+1 + fk+1 ) = 0. From this we deduce
by linearity that

ϕ(ek+1 ) = −ϕ(fk+1 ) ∈ V (6) ∩ V (2) = 0.

Furthermore, we can then deduce that

ϕ(gk+1 ) = ϕ(fk+1 ) + ϕ(gk+1 ) = ϕ(fk+1 + gk+1 ) ∈ V (1) ∩ V (3) = 0.



We have shown that ϕ(ek+1 ) = ϕ(fk+1 ) = ϕ(gk+1 ) = 0. This completes the


inductive step.
So we have proved that ϕ is the zero homomorphism and hence that the
representation Vm is indecomposable. 

Since they have different dimensions, the representations Vm for different m are pairwise non-isomorphic, and hence the quiver Q̃ has infinite representation type, over any field K. By the arguments at the beginning of this section, this also shows that every quiver with underlying graph of type Ẽ6 has infinite representation type, over any field K.

12.2.2 Quivers of Type Ẽ7 Have Infinite Representation Type

Let Q be the quiver of Dynkin type E7 with the following labelling and orientation: the branch vertex 4 is the unique sink, with arrows 1 → 2 → 4, 3 → 4 and 7 → 6 → 5 → 4.

Lemma 12.5. The quiver Q has a special representation M with dimK M(1) = 2.

Proof. We define a representation M of this quiver for which all maps are inclusion
maps, so we do not specify names for the arrows. We take M(4) to be a 4-
dimensional space, and all other spaces are subspaces.

M(4) = span{e, f, g, h}
M(1) = span{f − g, e + h}
M(2) = span{e + f, e + g, e + h}
M(3) = span{g, h}
M(5) = span{e, f, g}
M(6) = span{e, f }
M(7) = span{e}.

Note that indeed for each arrow in Q the space corresponding to the starting
vertex is a subspace of the space corresponding to the end vertex. The only arrow
for which this is not immediate is 1 −→ 2, and here we have M(1) ⊆ M(2) since
f − g = (e + f ) − (e + g).

According to Definition 11.30 we have to show that every endomorphism of


M is a scalar multiple of the identity. So let ϕ : M → M be a homomorphism
of representations. Then since all maps for M are inclusions it follows that
ϕi : M(i) → M(i) is the restriction of ϕ4 to M(i), for each i = 1, . . . , 7.
First, ϕ4 (e) ∈ M(7) and hence ϕ4 (e) = c1 e for some c1 ∈ K. Next
ϕ4 (g) ∈ M(5) ∩ M(3), which is spanned by g, so ϕ4 (g) = c2 g for some c2 ∈ K.
Furthermore, ϕ4 (e+f ) ∈ M(6)∩M(2) = span{e+f }, so ϕ4 (e+f ) = c3 (e+f )
for some c3 ∈ K. Similarly we get ϕ4 (f − g) ∈ M(5) ∩ M(1) = span{f − g} and
hence ϕ4 (f − g) = c4 (f − g) with c4 ∈ K. Finally,
ϕ4 (g − h) ∈ M(2) ∩ M(3) = span{g − h} and thus ϕ4 (g − h) = c5 (g − h)
for some c5 ∈ K.
Now consider ϕ4 (f ). Using linearity we have two expressions

ϕ4 (f ) = ϕ4 (e + f ) − ϕ4 (e) = c3 (e + f ) − c1 e
ϕ4 (f ) = ϕ4 (f − g) + ϕ4 (g) = c4 (f − g) + c2 g.

Since e, f, g are linearly independent, we can equate coefficients and get


c3 − c1 = 0, c3 = c4 and c2 − c4 = 0. Thus, we have c1 = c2 = c3 = c4 .
This already implies that the basis elements e, g, f are mapped by ϕ4 to the same
scalar multiple of themselves.
It remains to deal with the fourth basis element h. We observe that

ϕ4 (e + h) ∈ M(1) ∩ (M(7) + M(3)) = span{e + h},

so ϕ4 (e + h) = c6 (e + h) with c6 ∈ K. Again using linearity this gives us two


expressions for ϕ4 (h), namely

ϕ4 (h) = −ϕ4 (g − h) + ϕ4 (g) = (c2 − c5 )g + c5 h


ϕ4 (h) = ϕ4 (e + h) − ϕ4 (e) = (c6 − c1 )e + c6 h.

By equating coefficients we obtain c6 − c1 = 0, c2 − c5 = 0 and c5 = c6 . But this


means that h is mapped by ϕ4 to the same scalar multiple of itself as e, f, g. Hence
the homomorphism ϕ : M → M is a scalar multiple of the identity, as claimed. 
We now have a special representation M of the quiver Q with M(1) two-dimensional. So if we extend the quiver by a new vertex ω and a new arrow ω → 1 then we obtain the quiver Q̃.

The underlying graph of Q̃ is a Euclidean diagram of type Ẽ7. By Lemma 11.29, the quiver Q̃ has infinite representation type if the field K is infinite.
We will now show that Q̃ has infinite representation type, for arbitrary fields. We construct representations of arbitrary dimension and show that they are indecomposable. Recall from Corollary 11.26 that then every quiver with underlying graph Ẽ7 has infinite representation type.
Definition 12.6. Fix an integer m ∈ N and define a representation V = Vm of Q̃, where all maps are inclusions, and all spaces V(i) are subspaces of the 4m-dimensional K-vector space

V(4) = span{ei, fi, gi, hi | 1 ≤ i ≤ m}.


The subspaces assigned to the other vertices of Q̃ are defined as follows:

V (ω) = span{f1 − g1 , (fi − gi ) + (ei−1 + hi−1 ) | 2 ≤ i ≤ m}


V (1) = span{fi − gi , ei + hi | 1 ≤ i ≤ m}
V (2) = span{ei + fi , ei + gi , ei + hi | 1 ≤ i ≤ m}
V (3) = span{gi , hi | 1 ≤ i ≤ m}
V (5) = span{ei , fi , gi | 1 ≤ i ≤ m}
V (6) = span{ei , fi | 1 ≤ i ≤ m}
V (7) = span{ei | 1 ≤ i ≤ m}.

Lemma 12.7. For every m ∈ N, the representation Vm of the quiver Q̃ is indecomposable.
Proof. We write briefly V = Vm . To prove that V is indecomposable, we use
Lemma 9.11. So let ϕ : V → V be a homomorphism with ϕ 2 = ϕ. Then we
have to show that ϕ is zero or the identity.
Since all maps in V are inclusions, the morphism ϕ is given by a single linear
map, which we also denote by ϕ, on V (4) such that all subspaces V (i), where
i ∈ {1, . . . , 7, ω}, are invariant under ϕ (see Definition 9.4).
First we note that V (ω) ∩ V (5) = span{f1 − g1 }. Therefore f1 − g1 must be an
eigenvector of ϕ. Again, we may assume that ϕ(f1 − g1 ) = 0 (since ϕ 2 = ϕ the
eigenvalue is 0 or 1, and if necessary we may replace ϕ by idV − ϕ). Then

ϕ(f1 ) = ϕ(g1 ) ∈ V (6) ∩ V (3) = 0.

From this we can deduce that

ϕ(e1 ) = ϕ(e1 ) + ϕ(f1 ) = ϕ(e1 + f1 ) ∈ V (7) ∩ V (2) = 0

and similarly that ϕ(h1 ) = ϕ(e1 + h1 ) ∈ V (3) ∩ V (1) = 0.



Then we prove by induction on k that if ϕ(ek ) = ϕ(fk ) = ϕ(gk ) = ϕ(hk ) = 0


and k < m then also ϕ(ek+1 ) = ϕ(fk+1 ) = ϕ(gk+1 ) = ϕ(hk+1 ) = 0.
By the inductive hypothesis and by linearity we have

ϕ(fk+1 − gk+1 ) = ϕ(fk+1 − gk+1 + ek + hk ) ∈ V (5) ∩ V (ω) = span{f1 − g1 }.

Thus, there exists a scalar μ ∈ K such that ϕ(fk+1 − gk+1 ) = μ(f1 − g1 ). Since
ϕ 2 = ϕ and ϕ(f1 ) = 0 = ϕ(g1 ) we conclude that ϕ(fk+1 − gk+1 ) = 0. But then
we have

ϕ(fk+1 ) = ϕ(gk+1 ) ∈ V (6) ∩ V (3) = 0.

Now we proceed as above, that is, we have

ϕ(ek+1 ) = ϕ(ek+1 ) + ϕ(fk+1 ) = ϕ(ek+1 + fk+1 ) ∈ V (7) ∩ V (2) = 0

and then also

ϕ(hk+1 ) = ϕ(ek+1 + hk+1 ) ∈ V (3) ∩ V (1) = 0.

It follows that ϕ = 0 and then Lemma 9.11 implies that the representation V = Vm
is indecomposable.


12.2.3 Quivers of Type Ẽ8 Have Infinite Representation Type

Now we consider the quiver Q of type E8 with labelling and orientation as follows: the branch vertex 4 is the unique sink, with arrows 1 → 2 → 4, 3 → 4 and 8 → 7 → 6 → 5 → 4.

Lemma 12.8. The quiver Q has a special representation M with dimK M(8) = 2.

Proof. We define a representation M of this quiver for which all maps are inclusion
maps, so we do not specify names of the arrows. We take M(4) to be a 6-dimensional
space, and all other spaces are subspaces.

M(4) = span{e, f, g, h, k, l}
M(1) = span{e, l}
M(2) = span{e, f, g, l}
M(3) = span{h + l, e + g + k, e + f + h}
M(5) = span{e, f, g, h, k}
M(6) = span{f, g, h, k}
M(7) = span{g, h, k}
M(8) = span{h, k}.

According to Definition 11.30 we have to show that every endomorphism of M


is a scalar multiple of the identity. So let ϕ : M → M be a homomorphism of
representations. Then as before each linear map ϕi : M(i) → M(i) is the restriction
of ϕ4 to M(i). We again use that each subspace M(i) is invariant under ϕ4 to get
restrictions for ϕ4 .
First, ϕ4 (g) = c1 g for some c1 ∈ K since ϕ4 (g) ∈ M(2) ∩ M(7) = span{g}.
Similarly, ϕ4 (e) ∈ M(1) ∩ M(5) = span{e}, so ϕ4 (e) = c2 e for some c2 ∈ K.
Moreover, we have ϕ4 (k) ∈ M(8), thus ϕ4 (k) = ck + zh for c, z ∈ K. With this,
we have by linearity

ϕ4 (e + g + k) = c2 e + c1 g + (ck + zh).

But this must lie in M(3). Since l and f do not occur in the above expression, it
follows from the definition of M(3) that ϕ4 (e + g + k) must be a scalar multiple of
e + g + k and hence z = 0 and c1 = c2 = c. In particular, ϕ4 (k) = ck.
We may write ϕ4 (l) = ue + vl with u, v ∈ K since it is in M(1), and
ϕ4 (h) = rh + sk with r, s ∈ K since it is in M(8). We have ϕ4 (h + l) ∈ M(3). In
ϕ4 (h) + ϕ4 (l), basis vectors g and f do not occur, and it follows that ϕ4 (h + l) is a
scalar multiple of h + l. Hence

ϕ4 (l) = vl, ϕ4 (h) = rh, and v = r.

We can write ϕ4 (f )=af +bg with a, b ∈ K since it is in M(2)∩M(6)=span{f, g}.


Now we have by linearity that

ϕ4 (e + f + h) = c2 e + (af + bg) + rh ∈ M(3).

Since the basis vectors l and k do not occur in this expression, it follows that
ϕ4 (e + f + h) is a scalar multiple of e + f + h. So b = 0 and a = c2 = r; in
particular, we have ϕ4 (f ) = af .
In total we have now seen that c = c1 = c2 = a = r = v, so all six basis vectors
of M(4) are mapped to the same scalar multiple of themselves. This proves that ϕ4
is a scalar multiple of the identity, and then so is ϕ. 


We extend now the quiver Q by a new vertex ω and a new arrow ω → 8. Hence we consider the quiver Q̃, whose underlying graph is a Euclidean diagram of type Ẽ8.

The result in Lemma 12.8 together with Lemma 11.29 already yields that Q̃ has infinite representation type over every infinite field K. But since we prove Gabriel's theorem for arbitrary fields, we need to show that Q̃ has infinite representation type over any field K. To this end we now define representations of arbitrarily large dimensions and afterwards show that they are indeed indecomposable. The construction is inspired by the special representation of Q just considered; in fact, the restriction to the subquiver Q is a direct sum of copies of the above special representation.
Definition 12.9. Fix an integer m ∈ N. We will define a representation V = Vm of the above quiver Q̃, where all maps are inclusions, and all spaces V(i) are subspaces of V(4), a 6m-dimensional vector space over K. We give the bases of the spaces.

V (4) = span{ei , fi , gi , hi , ki , li | 1 ≤ i ≤ m}
V (1) = span{ei , li | 1 ≤ i ≤ m}
V (2) = span{ei , fi , gi , li | 1 ≤ i ≤ m}
V (3) = span{hi + li , ei + gi + ki , ei + fi + hi | 1 ≤ i ≤ m}
V (5) = span{ei , fi , gi , hi , ki | 1 ≤ i ≤ m}
V (6) = span{fi , gi , hi , ki | 1 ≤ i ≤ m}
V (7) = span{gi , hi , ki | 1 ≤ i ≤ m}
V (8) = span{hi , ki | 1 ≤ i ≤ m}
V (ω) = span{h1 , h2 + k1 , h3 + k2 , . . . , hm + km−1 }.

Lemma 12.10. For every m ∈ N, the representation Vm of the quiver Q̃ is indecomposable.
Proof. We briefly write V := Vm . To show that V is indecomposable we use the
criterion in Lemma 9.11, that is, we show that the only endomorphisms ϕ : V → V
of representations satisfying ϕ 2 = ϕ are zero and the identity.
As before, since all maps are given by inclusions, any endomorphism ϕ on V
is given by a linear map V (4) → V (4), which we also denote by ϕ, such that

ϕ(V(i)) ⊆ V(i) for all vertices i of Q̃.

The space V (4) is the direct sum of six subspaces, each of dimension m, spanned
by the basis vectors with the same letter. We write E for the span of the set
{e1 , . . . , em }, and similarly we define subspaces F, G, H, K and L.
(1) We show that ϕ leaves each of these six subspaces of V (4) invariant:
(i) We have ϕ(ei ) ∈ V (1) ∩ V (5) = E. Similarly, ϕ(gi ) ∈ V (7) ∩ V (2) = G.
(ii) We show now that ϕ(hi ) is in H and that ϕ(li ) is in L: To do so, we compute
ϕ(hi + li ) = ϕ(hi ) + ϕ(li ). First, ϕ(hi ) is in V (8), which is H ⊕ K. Moreover, ϕ(li )
is in V (1), that is, in E ⊕ L. Therefore ϕ(hi + li ) is in H ⊕ K ⊕ E ⊕ L. Secondly,
since hi + li ∈ V (3), its image ϕ(hi + li ) is also in V (3). If expressed in terms of
the basis of V (3), there cannot be any ej + gj + kj occurring since this involves a
non-zero element in G. Similarly no basis vector ei + fi + hi can occur. It follows
that ϕ(hi + li ) is in H ⊕ L. This implies that ϕ(hi ) cannot involve any non-zero
element in K, so it must lie in H . Similarly ϕ(li ) must lie in L.
In the following steps, the strategy is similar to that in (ii).
(iii) We show that ϕ(ki ) is in K. To prove this, we compute

ϕ(ei + gi + ki ) = ϕ(ei + gi ) + ϕ(ki ).

We know ϕ(ei +gi ) ∈ E⊕G and ϕ(ki ) ∈ V (8) = H ⊕K and therefore ϕ(ei +gi +ki )
lies in E ⊕ G ⊕ H ⊕ K. On the other hand, ϕ(ei + gi + ki ) lies in V (3). It cannot
involve a basis element in which some lj occurs, or some fj , and it follows that it
must be in the span of the elements of the form ej + gj + kj . Therefore it follows
that ϕ(ki ) ∈ K.
(iv) We claim that ϕ(fi ) is in F . It lies in V (6) ∩ V (2), so it is in F ⊕ G. We
compute ϕ(ei + fi + hi ) = ϕ(ei + hi ) + ϕ(fi ). By parts (i) and (ii), we know
ϕ(ei + hi ) is in E ⊕ H and therefore ϕ(ei + hi ) + ϕ(fi ) is in E ⊕ H ⊕ F ⊕ G. On
the other hand, it lies in V (3) and since it cannot involve a basis vector with a kj or
lj we deduce that it must be in E ⊕ F ⊕ H . Therefore ϕ(fi ) cannot involve any gj
and hence it belongs to F.
(2) Consider ϕ(h1 ). It belongs to H and also to V (ω), so it is a scalar multiple of h1 ,
that is, h1 is an eigenvector of ϕ. Since ϕ 2 = ϕ, the eigenvalue is 0 or 1. As before,
we may assume that ϕ(h1 ) = 0, otherwise we replace ϕ by idV (4) − ϕ.
(3) We show that ϕ(l1 ) = 0 = ϕ(e1 ) = ϕ(f1 ) and ϕ(g1 ) = ϕ(k1 ) = 0. First we
have ϕ(h1 + l1 ) = ϕ(h1 ) + ϕ(l1 ) = ϕ(l1 ) ∈ L ∩ V (3) = 0. Next, we have

ϕ(e1 + f1 + h1 ) = ϕ(e1 + f1 ) + ϕ(h1 ) = ϕ(e1 + f1 ) ∈ (E ⊕ F ) ∩ V (3) = 0,

hence ϕ(e1 ) = −ϕ(f1 ) ∈ E ∩ F = 0. Similarly

ϕ(e1 + g1 + k1 ) = ϕ(e1 ) + ϕ(g1 + k1 ) = ϕ(g1 + k1 ) ∈ (G ⊕ K) ∩ V (3) = 0

and furthermore ϕ(g1 ) = −ϕ(k1 ) ∈ G ∩ K = 0.


By (2) and (3) we have that all basis vectors with index 1 are mapped to zero
by ϕ.

(4) Now one proves by induction on r that if ϕ maps each of hr, lr, er, fr, gr and kr to zero, and if r < m, then it also maps all basis vectors hr+1, lr+1, er+1, fr+1, gr+1 and kr+1 to zero. The arguments are the same as in step (3), and similar to those in the proofs for Ẽ6 and Ẽ7, so we will omit the details.
Combining (2), (3) and (4) shows that ϕ = 0. We have therefore proved that Vm
is indecomposable. 

Since they have different dimensions, the indecomposable representations in Lemma 12.10 are pairwise non-isomorphic. This shows that the quiver Q̃ has infinite representation type over any field K. In Corollary 11.26 we have seen that the representation type is independent of the orientation of the arrows, as long as the underlying graph is a tree, as is the case here. Therefore we have proved that every quiver with underlying graph Ẽ8 has infinite representation type, over any field K.

12.3 Root Systems

Root systems were first discovered in Lie algebras, as collections of eigenvec-


tors of certain diagonalizable linear maps; they form a key tool in the famous
Cartan–Killing classification of simple Lie algebras over the complex numbers.
Subsequently, root systems have been found in many other contexts, and it is
therefore convenient to use an axiomatic definition.
We will only give a brief account of root systems here; for more details we refer
to the book by Erdmann and Wildon in this series.1
Let E be a finite-dimensional real vector space with an inner product (−, −).
Given a non-zero vector v ∈ E, let sv be the reflection in the hyperplane
perpendicular to v. It is given by the formula

sv(x) = x − (2(x, v)/(v, v)) v for every x ∈ E.

Then sv(v) = −v, and if (y, v) = 0 then sv(y) = y. Write ⟨x, v⟩ := 2(x, v)/(v, v).
Definition 12.11. A subset R of E is a root system if it satisfies the following:
(R1) R is finite, it spans E and 0 ∉ R.
(R2) If α ∈ R, the only scalar multiples of α in R are ±α.
(R3) If α ∈ R then the reflection sα permutes the elements of R.
(R4) If α, β ∈ R then ⟨β, α⟩ ∈ Z.
The elements of the root system R are called roots.

1 K. Erdmann, M.J. Wildon, Introduction to Lie algebras. Springer Undergraduate Mathematics

Series. Springer-Verlag London, Ltd., London, 2006. x+251 pp.



Remark 12.12. Condition (R4) is closely related to the possible angles between two roots. If α, β ∈ R and θ is the angle between α and β then

⟨α, β⟩ · ⟨β, α⟩ = 4(α, β)²/(|α|²|β|²) = 4 cos²(θ) ≤ 4

and this is an integer by (R4). So there are only finitely many possibilities for the numbers ⟨β, α⟩.
Definition 12.13. Let R be a root system in E. A base of R is a subset B of R such that
(i) B is a vector space basis of E.
(ii) Every β ∈ R can be written as

β = Σ_{α∈B} kα α

with kα ∈ Z and where all non-zero coefficients kα have the same sign.
One can show that every root system has a base. With this, R = R+ ∪ R−, where R+ is the set of all β where the signs are positive, and R− is the set of all β where the signs are negative. Call R+ the set of 'positive roots', and R− the set of 'negative roots'.
We fix a base B = {α1, . . . , αn} of the root system R. Note that the cardinality n of B is the vector space dimension of E. The Cartan matrix of R is the (integral) matrix with (i, j)-entry ⟨αi, αj⟩.
Root systems are classified by their Cartan matrices. We consider root systems whose Cartan matrices are symmetric, known as 'simply laced' root systems. One can show that for these root systems, if i ≠ j then ⟨αi, αj⟩ is equal to 0 or to −1.
Definition 12.14. Let R be a simply laced root system and let B = {α1, . . . , αn} be a base of R. The Dynkin diagram of R is the graph ΓR, with vertices {1, 2, . . . , n}, and there is an edge between vertices i and j if ⟨αi, αj⟩ ≠ 0.
The classification of root systems via Dynkin diagrams then takes the following
form. Recall that the Dynkin diagrams of type A, D, E were given in Fig. 10.1.
Theorem 12.15. The Dynkin diagrams for simply laced root systems are the unions
of Dynkin diagrams of type An (for n ≥ 1), Dn (for n ≥ 4) and E6 , E7 and E8 .
Now we relate this to quivers. Let Q be a connected quiver without oriented cycles, with underlying graph Γ. We have defined the symmetric bilinear form (−, −)Γ on Zn × Zn, see Definition 10.2. With the same Gram matrix we get a symmetric bilinear form on Rn × Rn.
Let Γ be a union of Dynkin diagrams. Then the quadratic form qΓ corresponding to the above symmetric bilinear form is positive definite; in fact, Proposition 10.10 shows this for a single Dynkin diagram, and the general case can be deduced from the formula (11.1) in the proof of Lemma 11.41.
Hence, (−, −)Γ is an inner product. So we consider the vector space E = Rn with the inner product (−, −)Γ, where n is the number of vertices of Γ.
Proposition 12.16. Let Q be a quiver whose underlying graph Γ is a union of Dynkin diagrams of type A, D, or E. Let qΓ be the quadratic form associated to Γ, and let ΔΓ = {x ∈ Zn | qΓ(x) = 1} be the set of roots, as in Definition 10.6. Then ΔΓ is a root system in E = Rn, as in Definition 12.11. It has a base (as in Definition 12.13) consisting of the unit vectors of Zn. The associated Cartan matrix is the Gram matrix of (−, −)Γ.
Proof. (R1) We have seen that ΔΓ is finite (see Proposition 10.12 and Remark 11.42). Since the unit vectors are roots (see Exercise 10.3), the set ΔΓ spans E = Rn. From the definition of qΓ (see Definition 10.6) we see that qΓ(0) = 0, that is, the zero vector is not in ΔΓ.
(R2) Let x ∈ ΔΓ and λ ∈ R such that λx ∈ ΔΓ. Then we have qΓ(λx) = λ²qΓ(x). They are both equal to 1 if and only if λ = ±1.
(R3) We have proved in Lemma 10.9 that a reflection si permutes the elements of ΔΓ; but a similar computation shows that for any y ∈ ΔΓ we have qΓ(sy(x)) = qΓ(x): Since y ∈ ΔΓ we have (y, y)Γ = 2qΓ(y) = 2; then

sy(x) = x − (2(x, y)Γ/(y, y)Γ) y = x − (x, y)Γ y for every x ∈ E.

It follows that

(sy(x), sy(x))Γ = (x − (x, y)Γ y, x − (x, y)Γ y)Γ
= (x, x)Γ − 2(x, y)Γ² + (x, y)Γ² (y, y)Γ
= (x, x)Γ.

Thus, qΓ(sy(x)) = ½(sy(x), sy(x))Γ = ½(x, x)Γ = qΓ(x) and hence sy permutes the elements in ΔΓ.
(R4) We have for x, y ∈ ΔΓ that (y, y)Γ = 2qΓ(y) = 2 and hence ⟨x, y⟩ = (x, y)Γ ∈ Z.
This proves that ΔΓ satisfies the axioms of a root system.
We now show that the unit vectors form a base of the root system ΔΓ (in the sense of Definition 12.13): the unit vectors clearly form a vector space basis of E = Rn; moreover, they satisfy the property (ii) as in Definition 12.13, as we have seen in Lemma 10.13.
As noted, for unit vectors εi, εj we have that ⟨εi, εj⟩ is equal to (εi, εj)Γ; this says that the Cartan matrix of the root system ΔΓ is the same as the Gram matrix in Definition 10.2.
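For a small graph Γ, the set ΔΓ can simply be enumerated by machine. The following Python sketch is an illustration for Γ = A3; the search box {−3, . . . , 3}³ is an assumption, ample for this small case. It counts the roots and recovers the familiar value n(n + 1) for type An.

```python
import itertools

# q_Gamma for Gamma = A_3: q(x) = sum of x_i^2 minus sum over edges {i,j} of x_i*x_j
edges = [(0, 1), (1, 2)]

def q(x):
    return sum(t * t for t in x) - sum(x[i] * x[j] for i, j in edges)

box = itertools.product(range(-3, 4), repeat=3)
roots = [x for x in box if q(x) == 1]
print(len(roots))   # 12 = n(n + 1) for n = 3, the number of roots of A_3
```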


12.4 Morita Equivalence

A precise definition of Morita equivalence needs the setting of categories and


functors (and it can be found in more advanced texts). Here, we will give an informal
account, for finite-dimensional algebras over a field K, and we give some examples
as illustrations. Roughly speaking, two algebras A and B over the same field K are
Morita equivalent ‘if they have the same representation theory’. That is, there should
be a canonical correspondence between finite-dimensional A-modules and finite-
dimensional B-modules, say M → F (M), and between module homomorphisms:
If M and N are A-modules and f : M → N is an A-module homomorphism then
F (f ) is a B-module homomorphism F (M) → F (N), such that
(i) it preserves (finite) direct sums, and takes indecomposable A-modules to
indecomposable B-modules, and simple A-modules to simple B-modules,
(ii) every B-module is isomorphic to F (M) for some A-module M, where M is
unique up to isomorphism,
(iii) the correspondence between homomorphisms gives a vector space isomor-
phism

HomA(M, N) ≅ HomB(F(M), F(N))

for arbitrary A-modules M, N.


Clearly, isomorphic algebras are Morita equivalent. But there are many non-
isomorphic algebras which are Morita equivalent, and we will illustrate this by
considering some examples which appeared in this book.
Example 12.17.
(1) Let A = Mn (K), the algebra of n × n-matrices over the field K. This is
a semisimple algebra, hence every A-module is a direct sum of simple A-
modules. Moreover, up to isomorphism there is only one simple A-module,
namely the natural module K n . Precisely the same properties hold for K-
modules, that is, K-vector spaces; every K-module is a direct sum of the only
simple K-module, which is K itself. So one might ask whether A is Morita
equivalent to the K-algebra K. This is indeed the case.
Consider the matrix unit e = E11 ∈ A. This is an idempotent, and the algebra
eAe is one-dimensional, spanned by E11 ; in particular, eAe is isomorphic to K.
If M is an A-module, one can take

F (M) := eM,

which is an eAe-module, by restricting the given action. If f : M → N is an


A-module homomorphism then F (f ) is the restriction of f to eM, and this is
a homomorphism of eAe-modules eM → eN.

If M is an A-module and M = U ⊕ V is a direct sum of A-submodules U


and V then one checks that eM = eU ⊕ eV , a direct sum of eAe-submodules.
Inductively, one can also see that F preserves arbitrary finite direct sums. If M
is indecomposable, then, since A is semisimple, M is simple, and isomorphic
to the natural module K n . We have e(K n ) = {(x, 0, . . . , 0)t | x ∈ K} so this
is 1-dimensional and is indecomposable as a K-module. Hence property (i) is
satisfied.
One can see that (ii) holds: an eAe-module is just a vector space, and if it has dimension d then it is isomorphic to eM, where M is the direct sum of d copies of the natural module K n of A. Moreover, since every finite-dimensional
A-module is a direct sum of copies of K n , we see that such M is unique up to
isomorphism.
Similarly one can see that (iii) holds.
With a precise definition one can show that Morita equivalence is an
equivalence relation, and therefore any two matrix algebras Mn (K) and Mm (K)
over a fixed field are Morita equivalent.
This example also shows that dimensions of simple modules are not
preserved by a Morita equivalence. In fact, it is an advantage for calculations to
have smaller simple modules.
(2) If A = Mn(D) where D is a division algebra over K then similarly A is Morita equivalent to D, with the same correspondence as in (1). But note that if D ≇ K, then K and D are not Morita equivalent. The simple K-module has an endomorphism algebra which is 1-dimensional over K, but the endomorphism algebra of the simple D-module, that is of D, is not 1-dimensional over K; so condition (iii) fails.
(3) Consider a finite-dimensional semisimple algebra. If A = Mn (K) × Mr (K),
then one can take F (M) = eM where e is the idempotent

e = e1 + e2 , e1 = (E11 , 0) and e2 = (0, E11 ).

Then eAe is Morita equivalent to A. Again, A is semisimple, so indecomposable


modules are the same as simple modules. We can see that F gives a bijection
between simple modules: The algebra A has simple modules S1 := K n × 0 and
S2 := 0 × K r (up to isomorphism), and eAe is isomorphic to K × K and the
simple modules are K × 0 = F (S1 ) and 0 × K = F (S2 ). This generalizes to
the product of finitely many matrix algebras, and also where the factors are of
the form as in (2).
Example (3) is a special case of a more general fact. Namely, if A and B are
Morita equivalent and C and D are Morita equivalent, then A × C and B × D are
Morita equivalent. By Lemma 3.30, an indecomposable module of a direct product
is an indecomposable module for one of the factors, and the other factor acts as zero.
Using this, the two given Morita equivalences can be used to construct one between
the direct products.
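The idempotent construction of Example 12.17 is transparent in coordinates. The following numpy sketch is an illustration for A = M3(R), with a generic matrix standing in for an arbitrary element: it shows that eAe with e = E11 is one-dimensional and that eM is one-dimensional for the natural module M = K³.

```python
import numpy as np

n = 3
e = np.zeros((n, n)); e[0, 0] = 1.0           # the matrix unit E11, an idempotent
a = np.arange(1.0, n * n + 1).reshape(n, n)   # a generic element of M_n(K)

print(np.allclose(e @ e, e))   # True: e is idempotent
print(e @ a @ e)               # only the (1,1) entry survives: eAe = K·E11
M = np.eye(n)                  # columns spanning the natural module K^n
print(e @ M)                   # e·K^n is the line spanned by the first unit vector
```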

Given any K-algebra A, we describe a method for constructing algebras Morita


equivalent to A (without proofs). Assume that A has n simple modules S1 , . . . , Sn
(up to isomorphism). Suppose e is an idempotent of A such that the modules eSi are
all non-zero. Then one can show that eAe is Morita equivalent to A. (It is easy to
check that eSi is a simple module for eAe.)
A special case of this method is used to construct an algebra which is Morita
equivalent to A and is called a basic algebra of A. Namely, start with the algebra
Ā := A/J (A). This is semisimple, see Theorem 4.23, so it is isomorphic to a
product of finitely many matrix algebras. Take an idempotent ε of Ā as we have
described in Example 12.17 (3). Then εĀε is a direct product of finitely many
division algebras Di , and the modules for εĀε are then as vector spaces also equal
to the Di . One can show that there is an idempotent e of A such that e + J (A) = ε,
and that the idempotent e produces such a basic algebra of A. Note that then the
simple eAe-modules are the same as the simple εĀε-modules.
In the case when Ā is a direct product Mn1 (K) × . . . × Mnr (K), then we have
that all simple eAe-modules are 1-dimensional. This is the nicest setting, and it has
inspired the following definition:
Definition 12.18. A finite-dimensional K-algebra B is said to be basic if all simple
B-modules are 1-dimensional. Equivalently, the factor algebra B/J (B) (which is
semisimple) is isomorphic to a direct product of copies of K.
These conditions are equivalent, recall that by Theorem 4.23 the simple B-
modules are the same as the simple B/J (B)-modules. Since B/J (B) is semisimple,
it is a direct product of matrix algebras over K, and the simple modules are 1-
dimensional if and only if all blocks have size 1 and are just K.
Example 12.19.
(1) Let A be the product of matrix algebras over division algebras. Then the basic
algebra of A is basic in the above sense if and only if all the division algebras
are equal to K.
(2) Let A = KQ where Q is any quiver without oriented cycles. Then A is basic,
see Theorem 3.26.
To summarize, a basic algebra of an algebra is Morita equivalent to A, and in the
nicest setting it has all simple modules 1-dimensional. There is a theorem of Gabriel
which gives a completely general description of such algebras. This theorem might
also give another reason why quivers play an important role.
Recall that if Q is a quiver then KQ≥t is the span of all paths of length ≥ t. We
call an ideal I in a path algebra KQ admissible if KQ≥m ⊆ I ⊆ KQ≥2 for some
m ≥ 2.
Theorem 12.20 (Gabriel). Assume A is a finite-dimensional K-algebra which is basic. Then A ≅ KQ/I, where Q is a quiver, and I is an admissible ideal of KQ. Moreover, the quiver Q is unique.

The condition that I contains all paths of length ≥ m for some m makes sure
that KQ/I is finite-dimensional, and one can show that Q is unique by using that
I is contained in the span of paths of length ≥ 2. The ideal I in this theorem is not
unique.
Example 12.21.
(1) Let A = K[X]/(Xr ) for r ≥ 1. We have seen that K[X] is isomorphic to the
path algebra KQ where Q is the quiver with one vertex and a loop α, hence A
is isomorphic to KQ/I where I = (α r ) is the ideal generated by α r . Moreover,
A is basic, see Proposition 3.23.
(2) Let A = KG where G is the symmetric group S3 , with elements r = (1 2 3)
and s = (1 2).
(i) Assume that K has characteristic 3. Then by Example 8.20 we know that
the simple A-modules are 1-dimensional. They are the trivial module and
the sign module. Therefore A is basic.
(ii) Now assume that K = C, then we have seen in Example 6.7 that the
simple A-modules have dimensions 1, 1, 2 and hence A is not basic. In
Exercise 6.4 we obtained the Artin–Wedderburn decomposition of CS3 ,
with orthogonal idempotents e+ , e− , f and f1 . We can take as isomorphism
classes of simple A-modules Ae+ and Ae− (the trivial and the sign
representation) and in addition Af . So we can write down the basic algebra
eAe of A by taking e = e+ + e− + f .
Example 12.22. As before, let A = KG be the group algebra for the symmetric
group G = S3 and assume that K has characteristic 3. We describe the quiver
Q and a factor algebra KQ/I isomorphic to A. Note that the group algebra K⟨s⟩ of the subgroup ⟨s⟩ of order 2 is a subalgebra, and it is semisimple (by Maschke's
theorem). It has two orthogonal idempotents ε1 , ε2 with 1A = ε1 + ε2 . Namely, take
ε1 = −1 − s and ε2 = −1 + s (use that −2 = 1 in K). These are also idempotents
of A, still orthogonal. Then one checks that A as a left A-module is the direct sum
A = Aε1 ⊕ Aε2. For the following, we leave the checking of the details as an
exercise.
For i = 1, 2 the vector space Aεi has a basis of the form {εi , rεi , r 2 εi }. We can
write Aε1 = ε1 Aε1 ⊕ ε2 Aε1 as vector spaces, and Aε2 = ε2 Aε2 ⊕ ε1 Aε2 . One
checks that for i = 1, 2 the space εi Aεi has basis {εi , (1 + r + r 2 )εi }. Moreover,
ε2 Aε1 is 1-dimensional and spanned by ε2 rε1 , and ε1 Aε2 also is 1-dimensional and
spanned by ε1 rε2 . Furthermore, one checks that
(i) (ε1 rε2 )(ε2 rε1 ) = (1 + r + r 2 )ε1 and (ε2 rε1 )(ε1 rε2 ) = (1 + r + r 2 )ε2
(ii) (εi rεj )(εj rεi )(εi rεj ) = 0 for {i, j } = {1, 2}.
We take the quiver Q with two vertices 1, 2 and two arrows α : 1 → 2 and β : 2 → 1.

Note that the path algebra KQ is infinite-dimensional since Q has oriented cycles.
Then we have a surjective algebra homomorphism ψ : KQ → A where we let

ψ(ei ) = εi , ψ(α) = ε2 rε1 , ψ(β) = ε1 rε2

and extend this to linear combinations of paths in KQ. Let I be the kernel of ψ. One
shows, using (i) and (ii) above, that I = (αβα, βαβ), the ideal generated by αβα
and βαβ. Hence I is an admissible ideal of KQ, and KQ/I is isomorphic to A.
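The identities for ε1 and ε2 left to the reader above can be confirmed by a short computation in KS3 over a field with three elements. In the following self-contained Python sketch — an illustration, with permutations stored as tuples of images and group algebra elements as coefficient dictionaries mod 3 — the two idempotents come out idempotent and orthogonal, as claimed.

```python
from itertools import product

def compose(g, h):                       # (g ∘ h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))

def mult(a, b):                          # product in KS3, coefficients mod 3
    c = {}
    for (g, x), (h, y) in product(a.items(), b.items()):
        gh = compose(g, h)
        c[gh] = (c.get(gh, 0) + x * y) % 3
    return {g: x for g, x in c.items() if x}

one, s = (0, 1, 2), (1, 0, 2)            # identity and the transposition s = (1 2)
eps1 = {one: 2, s: 2}                    # -1 - s, with coefficients written mod 3
eps2 = {one: 2, s: 1}                    # -1 + s

print(mult(eps1, eps1) == eps1, mult(eps2, eps2) == eps2)   # True True
print(mult(eps1, eps2))                                     # {}: orthogonal
```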
Appendix A
Induced Modules for Group Algebras

In this appendix we provide the details of the construction of induced modules


for group algebras of finite groups, as used in Sect. 8.2 for determining the
representation type of group algebras. Readers familiar with tensor products over
algebras can entirely skip this appendix. When defining induced modules we will
take a hands-on approach and only develop the theory as far as we need it in this
book to make it self-contained. For more details on induced modules for group
algebras we refer the reader to textbooks on group representation theory.
We start by introducing tensor products of vector spaces. For simplicity we
restrict to finite-dimensional vector spaces (however, the construction carries over
almost verbatim to arbitrary vector spaces).
Definition A.1. Let K be a field and let V and W be two finite-dimensional K-
vector spaces. We choose bases {v1 , . . . , vn } of V and {w1 , . . . , wm } of W , and
introduce new symbols vi ⊗ wj for 1 ≤ i ≤ n and 1 ≤ j ≤ m. The tensor product
of V and W over K is the K-vector space having the symbols vi ⊗ wj as a basis,
that is,

V ⊗K W := span{vi ⊗ wj | 1 ≤ i ≤ n, 1 ≤ j ≤ m}.

Remark A.2.
(1) By definition, for the dimensions we have

dimK (V ⊗K W ) = (dimK V )(dimK W ).


 
(2) An arbitrary element of V ⊗K W has the form Σ_{i=1}^n Σ_{j=1}^m λij (vi ⊗ wj) with scalars λij ∈ K. To ease notation, we often abbreviate the double sum to Σ_{i,j} λij (vi ⊗ wj). Addition and scalar multiplication in V ⊗K W are given by
  
(Σ_{i,j} λij (vi ⊗ wj)) + (Σ_{i,j} μij (vi ⊗ wj)) = Σ_{i,j} (λij + μij)(vi ⊗ wj)

and

λ (Σ_{i,j} λij (vi ⊗ wj)) = Σ_{i,j} λλij (vi ⊗ wj),

respectively.
The next result collects some fundamental ‘calculation rules’ for tensor products
of vector spaces, in particular, it shows that the tensor product is bilinear. Moreover
it makes clear that the vector space V ⊗K W does not depend on the choice of the
bases used in the definition.
Proposition A.3. We keep the notation from Definition A.1. For arbitrary elements v ∈ V and w ∈ W, expressed in the given bases as v = Σ_i λi vi and w = Σ_j μj wj, we set

v ⊗ w := Σ_{i,j} λi μj (vi ⊗ wj) ∈ V ⊗K W.     (A.1)
i,j

Then the following holds.


(a) For all v ∈ V , w ∈ W and λ ∈ K we have v ⊗ (λw) = (λv) ⊗ w = λ(v ⊗ w).
(b) Let x1, . . . , xr ∈ V and y1, . . . , ys ∈ W be arbitrary elements. Then we have

(Σ_{i=1}^r xi) ⊗ (Σ_{j=1}^s yj) = Σ_{i=1}^r Σ_{j=1}^s xi ⊗ yj.

(c) Let {b1, . . . , bn} and {c1, . . . , cm} be arbitrary bases of V and W, respectively. Then {bk ⊗ cℓ | 1 ≤ k ≤ n, 1 ≤ ℓ ≤ m} is a basis of V ⊗K W. In particular, the vector space V ⊗K W does not depend on the choice of the bases of V and W.
Proof. (a) Since the scalars from the field K commute we get from (A.1) that

v ⊗ (λw) = (Σ_i λi vi) ⊗ (Σ_j λμj wj) = Σ_{i,j} λi λμj (vi ⊗ wj)
= Σ_{i,j} λλi μj (vi ⊗ wj) = (Σ_i λλi vi) ⊗ (Σ_j μj wj)
= (λv) ⊗ w.

Completely analogously one shows that (λv) ⊗ w = λ(v ⊗ w).




(b) We write the elements in the given bases as xi = Σ_k λik vk ∈ V and yj = Σ_ℓ μjℓ wℓ ∈ W. Then, again by (A.1), we have

(Σ_i xi) ⊗ (Σ_j yj) = (Σ_k (Σ_i λik) vk) ⊗ (Σ_ℓ (Σ_j μjℓ) wℓ)
= Σ_{k,ℓ} (Σ_i λik)(Σ_j μjℓ)(vk ⊗ wℓ)
= Σ_{i,j} Σ_{k,ℓ} λik μjℓ (vk ⊗ wℓ)
= Σ_{i,j} ((Σ_k λik vk) ⊗ (Σ_ℓ μjℓ wℓ)) = Σ_{i,j} xi ⊗ yj.


(c) We express the elements of the given bases in the new bases as vi = Σ_k λik bk and wj = Σ_ℓ μjℓ cℓ. Applying parts (b) and (a) we obtain

vi ⊗ wj = Σ_{k,ℓ} (λik bk ⊗ μjℓ cℓ) = Σ_{k,ℓ} λik μjℓ (bk ⊗ cℓ).

This means that {bk ⊗ cℓ | 1 ≤ k ≤ n, 1 ≤ ℓ ≤ m} is a spanning set for the vector space V ⊗K W (since these elements generate all basis elements vi ⊗ wj). On the other hand, this set contains precisely nm = dimK(V ⊗K W) elements, so it must be a basis.
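In coordinates, formula (A.1) is exactly the Kronecker product of the two coordinate vectors; the following small numpy sketch (an illustration with arbitrarily chosen coordinates) makes this concrete.

```python
import numpy as np

v = np.array([1, 2])        # coordinates of v = 1·v1 + 2·v2
w = np.array([3, 0, 1])     # coordinates of w = 3·w1 + 0·w2 + 1·w3

# by (A.1) the coefficient of vi ⊗ wj in v ⊗ w is λ_i · μ_j
print(np.kron(v, w))        # [3 0 1 6 0 2], of length 2·3 = dim(V ⊗ W)
```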

Now we can start with the actual topic of this section, namely defining induced
modules for group algebras.
Let K be a field, G be a finite group and H ⊆ G a subgroup. Suppose we are
given a finite-dimensional KH -module W , with basis {w1 , . . . , wm }. Our aim is to
construct from W a (finite-dimensional) KG-module in a way which extends the
given KH -action on W .
First we can build the tensor product of the K-vector spaces KG and W , thus

KG ⊗K W = span{g ⊗ wi | g ∈ G, i = 1, . . . , m}.

We can turn KG ⊗K W into a KG-module by setting

x · (g ⊗ w) = xg ⊗ w, (A.2)

for every x, g ∈ G and w ∈ W , and then extending linearly. (We leave it as an


exercise to verify that the axioms for a module are indeed satisfied.) However, note
that this is not yet what we want, since the given KH -action on W is completely
ignored. Therefore, in KG ⊗K W we consider the K-subspace

H := span{gh ⊗ w − g ⊗ hw | g ∈ G, h ∈ H, w ∈ W }.

It is immediate from (A.2) that H ⊆ KG ⊗K W is a KG-submodule.



Definition A.4. We keep the notations from above. The factor module

KG ⊗H W := (KG ⊗K W )/H

is called the KG-module induced from the KH -module W . For short we write

g ⊗H w := g ⊗ w + H ∈ KG ⊗H W ;

by definition we then have

gh ⊗H w = g ⊗H hw for all g ∈ G, h ∈ H and w ∈ W. (A.3)

We now want to collect some elementary properties of induced modules. We do


not aim at giving a full picture but restrict to the few facts which we use in the main
text of the book.
Proposition A.5. We keep the above notations. Let T ⊆ G be a system of
representatives of the left cosets of H in G. Then the set

{t ⊗H wi | t ∈ T , i = 1, . . . , m}

is a basis of the K-vector space KG ⊗H W . In particular, for the dimension of the


induced module we have

dimK (KG ⊗H W ) = |G : H | · dimK W,

where |G : H | is the index of the subgroup H in G (that is, the number of cosets).
Proof. By definition, G is a disjoint union G = ⋃_{t∈T} tH of left cosets. So every
g ∈ G can be written as g = t h̃ for some t ∈ T and h̃ ∈ H . Hence every element
g ⊗H wi is of the form g ⊗H wi = t h̃ ⊗H wi = t ⊗H h̃wi , by (A.3). Since the
elements g ⊗H wi for g ∈ G, i = 1, . . . , m, clearly form a spanning set of the vector
space KG ⊗H W , we conclude that {t ⊗H wi | t ∈ T , i = 1, . . . , m} also forms a
spanning set and that

dimK (KG ⊗H W ) ≤ |T | · dimK W = |G : H | · dimK W.

Conversely, we claim that dimK (KG ⊗H W ) ≥ |G : H | · dimK W.


Note that this is sufficient for proving the entire proposition since then
dimK (KG ⊗H W ) = |G : H | · dimK W and hence the spanning set
{t ⊗H wi | t ∈ T , i = 1, . . . , m} from above must be linearly independent.
To prove the claim, we exhibit a generating set for the subspace H. Take an
arbitrary generating element α := gh ⊗ w − g ⊗ hw from H. By linearity of the
tensor product in both arguments (see Proposition A.3) we can assume that w = wi

is one of the basis elements of W . Again write g = t h̃ with t ∈ T and h̃ ∈ H . Then

α = gh ⊗ w − g ⊗ hw = t h̃h ⊗ w − t h̃ ⊗ hw
= t h̃h ⊗ w − t ⊗ h̃hw + t ⊗ h̃hw − t h̃ ⊗ hw.

But this implies that the set

{ty ⊗ wi − t ⊗ ywi | t ∈ T , y ∈ H, i = 1, . . . , m}

is a spanning set of the subspace H of KG ⊗K W . Moreover, for y = 1 the identity


element of the subgroup H the expression becomes 0 and hence can be left out. This
means that the vector space H can be spanned by at most |T | · (|H | − 1) · dimK W
elements. For the dimension of the induced module we thus obtain

dimK (KG ⊗H W ) = dimK (KG ⊗K W )/H


= dimK (KG ⊗K W ) − dimK H
≥ |G| · dimK W − |T | · (|H | − 1) · dimK W
= |G : H | · dimK W,

where the last equation follows from Lagrange’s theorem |G| = |G : H | · |H |


from elementary group theory. As remarked above, this completes the proof of the
proposition. 
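The coset bookkeeping in this proof can be made concrete. The following Python sketch — an illustration for G = S3 and H = ⟨s⟩ with the trivial one-dimensional module W, all choices assumed for the example — picks left coset representatives and recovers dimK(KG ⊗H W) = |G : H| · dimK W.

```python
from itertools import permutations

def compose(g, h):                        # (g ∘ h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))

G = list(permutations(range(3)))          # the symmetric group S_3
H = [(0, 1, 2), (1, 0, 2)]                # the subgroup generated by s = (1 2)

T, covered = [], set()                    # left coset representatives
for g in G:
    if g not in covered:
        T.append(g)
        covered.update(compose(g, h) for h in H)

dim_W = 1                                 # e.g. the trivial KH-module
print(len(T) * dim_W)                     # 3 = |G : H| · dim_K W
```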

We collect some further properties needed in the main text of the book in the
following proposition.
Proposition A.6. We keep the notations from above.
(a) The map i : W → KG ⊗H W , w → 1 ⊗H w, is an injective KH -module
homomorphism.
(b) Identifying W with the image under the map i from part (a), the induced module
KG ⊗H W has the desired property that the given KH -module action on W is
extended.
(c) In the case G = H there is an isomorphism W ≅ KG ⊗G W of KG-modules.
Proof. (a) Let w = Σ_{j=1}^m λj wj ∈ ker(i), written as a linear combination of a basis {w1, . . . , wm} of W. Then we have

0 = i(w) = 1 ⊗H w = Σ_{j=1}^m λj (1 ⊗H wj).

We can choose our system T of coset representatives to contain the identity element,
then Proposition A.5 in particular says that the elements 1⊗H wj , for j = 1, . . . , m,

are part of a basis of KG ⊗H W . So in the equation above we can deduce that all
λj = 0. This implies that w = 0 and hence i is injective.
It is easy to check that i is a K-linear map. Moreover, using (A.3), the following
holds for every h ∈ H and w ∈ W :

i(hw) = 1 ⊗H hw = h ⊗H w = h(1 ⊗H w) = hi(w). (A.4)

Thus i is a KH -module homomorphism.


(b) This follows directly from (A.4).
(c) By part (a), the map i : W → KG ⊗G W is an injective KG-module homomor-
phism. On the other hand, dimK (KG ⊗G W ) = |G : G| · dimK W = dimK W by
Proposition A.5; so i is also surjective and hence an isomorphism. 

Appendix B
Solutions to Selected Exercises

Usually there are many different ways to solve a problem. The following are
possible approaches, but are not unique.

Chapter 1

Exercise 1.12. Let u = a + bi + cj + dk ∈ H where a, b, c, d ∈ R. The product


formula given in Example 1.8 yields that

u² = (a² − b² − c² − d²) + 2abi + 2acj + 2adk.

So u is a root of X² + 1 if and only if a² − b² − c² − d² = −1 and 2ab = 2ac = 2ad = 0. There are two cases. If a ≠ 0 then b = c = d = 0 and hence a² = −1, contradicting a ∈ R. If a = 0 then the only condition remaining is b² + c² + d² = 1. There are clearly infinitely many solutions; in fact, the solutions correspond bijectively to the points on the unit sphere in R³.
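One can also spot-check this numerically by realising H as 2×2 complex matrices; the sketch below (an illustration using the standard matrix model of the quaternions, with one arbitrarily chosen point on the unit sphere) verifies that u² = −1.

```python
import numpy as np

I2 = np.eye(2)
qi = np.array([[1j, 0], [0, -1j]])              # the standard matrix model of H
qj = np.array([[0, 1], [-1, 0]], dtype=complex)
qk = qi @ qj

b, c, d = 0.6, 0.8, 0.0       # any point with b^2 + c^2 + d^2 = 1
u = b * qi + c * qj + d * qk
print(np.allclose(u @ u, -I2))   # True: u is a root of X^2 + 1
```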
Exercise 1.13. We check the three conditions in Definition 1.14. By definition all subsets are K-subspaces of M3(K). The first subset is not a subalgebra since it is not closed under multiplication; for example

(0 1 0; 0 0 1; 0 0 0)² = (0 0 1; 0 0 0; 0 0 0),

which does not lie in the first subset. The fifth subset is also not a subalgebra since it does not contain the identity matrix, which is the identity element of M3(K). The other four subsets are subalgebras: they contain the identity element of M3(K), and one checks that they are closed under taking products.


Exercise 1.14. Let ψ = φ⁻¹, so that ψ is a map from B to A and ψ ◦ φ = idA and φ ◦ ψ = idB.
Let b, b′ ∈ B; then b = φ(a) for a unique a ∈ A, and b′ = φ(a′) for a unique a′ ∈ A, and then ψ(b) = a and ψ(b′) = a′. We check the three conditions in Definition 1.22.
(i) Let λ, μ ∈ K. We must show that ψ(λb + μb′) = λψ(b) + μψ(b′). In fact,

ψ(λb + μb′) = ψ(λφ(a) + μφ(a′)) = ψ(φ(λa + μa′))
= (ψ ◦ φ)(λa + μa′) = idA(λa + μa′)
= λa + μa′ = λψ(b) + μψ(b′),

where we have used in the second step that φ is K-linear.


(ii) To check that ψ commutes with taking products:

ψ(bb′) = ψ(φ(a)φ(a′)) = ψ(φ(aa′)) = (ψ ◦ φ)(aa′)
= idA(aa′) = aa′ = ψ(b)ψ(b′).

In the second step we have used that φ commutes with taking products.
(iii) We have ψ(1B ) = ψ(φ(1A )) = ψ ◦ φ(1A ) = idA (1A ) = 1A .
Exercise 1.15. (i) If a 2 = 0 then φ(a)2 = φ(a 2 ) = φ(0) = 0, using that φ is an
algebra homomorphism. Conversely, if φ(a 2 ) = φ(a)2 = 0, then a 2 is in the kernel
of φ which is zero since φ is injective.
(ii) Let a be a (left) zero divisor, that is, there exists an a′ ∈ A \ {0} with aa′ = 0. It follows that 0 = φ(0) = φ(aa′) = φ(a)φ(a′). Moreover, φ(a′) ≠ 0 since φ is injective. Hence φ(a) is a zero divisor. Conversely, let φ(a) be a (left) zero divisor; then there exists a b ∈ B \ {0} such that φ(a)b = 0. Since φ is surjective there exists an a′ ∈ A with b = φ(a′); note that a′ ≠ 0 because b ≠ 0. Then 0 = φ(a)b = φ(a)φ(a′) = φ(aa′). This implies aa′ = 0 since φ is injective, and hence a is a (left) zero divisor. The same proof works for right zero divisors.
(iii) Let A be commutative, and let b, b′ ∈ B. We must show that bb′ − b′b = 0. Since φ is surjective, there are a, a′ ∈ A such that b = φ(a) and b′ = φ(a′). Therefore

bb′ − b′b = φ(a)φ(a′) − φ(a′)φ(a) = φ(aa′ − a′a) = φ(0) = 0

using that φ is an algebra homomorphism, and that A is commutative.


For the converse, interchange the roles of A and B, and use the inverse φ −1 of φ
(which is also an algebra isomorphism, see Exercise 1.14).
(iv) Assume A is a field, then B is commutative by part (iii). We must show that
every non-zero element in B is invertible. Take 0 = b ∈ B, then b = φ(a) for
B Solutions to Selected Exercises 273

a ∈ A and a is non-zero. Since A is a field, there is an a  ∈ A such that aa  = 1A ,


and then

bφ(a  ) = φ(a)φ(a  ) = φ(aa  ) = φ(1A ) = 1B .

That is, b has inverse φ(a  ) ∈ B. We have proved if A is a field then so is B. For the
converse, interchange the roles of A and B, and use the inverse φ −1 of φ.
Exercise 1.16. (i) One checks that each Ai is a K-subspace of M3(K), contains the identity matrix, and that each Ai is closed under multiplication; hence it is a K-subalgebra of M3(K). Moreover, some direct computations show that A1, A2, A3, A5 are commutative. But A4 is not commutative; for example

(1 0 0; 0 0 0; 0 0 1)(1 1 0; 0 0 0; 0 0 1) ≠ (1 1 0; 0 0 0; 0 0 1)(1 0 0; 0 0 0; 0 0 1).

(ii) We compute Ā1 = 0,

Ā2 = {(0 y z; 0 0 0; 0 0 0) | y, z ∈ K},  Ā3 = {(0 0 z; 0 0 0; 0 0 0) | z ∈ K},
Ā4 = {(0 y 0; 0 0 0; 0 0 0) | y ∈ K},  Ā5 = {(0 0 y; 0 0 z; 0 0 0) | y, z ∈ K}.

(iii) By part (iii) of Exercise 1.15, A4 is not isomorphic to any of A1 , A2 , A3 , A5 .


Note that the sets Āi computed in (ii) are even K-subspaces; hence we can compare
dimensions. By part (ii) of Exercise 1.15 we conclude that A1 is not isomorphic to
A2 , A3 , A5 , that A2 is not isomorphic to A3 and that A3 is not isomorphic to A5 .
The remaining algebras A2 and A5 are isomorphic; in fact the map

φ : A2 → A5, (x y z; 0 x 0; 0 0 x) ↦ (x 0 y; 0 x z; 0 0 x)

is bijective by definition, and one checks that it is a K-algebra homomorphism.


Exercise 1.18. Tn (K) has a K-basis given by matrix units {Eij | 1 ≤ i ≤ j ≤ n}.
On the other hand, we denote by αi the arrow from vertex i + 1 to i (where
1 ≤ i ≤ n − 1). Then the path algebra KQ has a K-basis

{ei | 1 ≤ i ≤ n} ∪ {αi αi+1 . . . αj −1 | 1 ≤ i < j ≤ n}.



We define a bijective K-linear map φ : Tn (K) → KQ by mapping a basis to a


basis as follows: φ(Eii ) = ei for 1 ≤ i ≤ n and φ(Eij ) = αi αi+1 . . . αj −1 for all
1 ≤ i < j ≤ n. It remains to check that φ is a K-algebra homomorphism. To show
that φ preserves products, it suffices to check it on basis elements, see Remark 1.23.
Note that φ(Eij) is a path starting at vertex j, and φ(Ekℓ) is a path ending at vertex k. Thus, if j ≠ k then

φ(Eij)φ(Ekℓ) = 0 = φ(0) = φ(Eij Ekℓ).

So suppose j = k; we then have to show that φ(Eij)φ(Ejℓ) = φ(Eij Ejℓ). One checks that this formula holds if i = j or j = ℓ. Otherwise, we have

φ(Eij)φ(Ejℓ) = αi αi+1 . . . αj−1 αj αj+1 . . . αℓ−1 = φ(Eiℓ) = φ(Eij Ejℓ).
Finally, to check condition (iii) of Definition 1.22, we have


n 
n 
n
φ(1Tn (K) ) = φ( Eii ) = φ(Eii ) = ei = 1KQ .
i=1 i=1 i=1

Exercise 1.24. Recall that in the proof of Proposition 1.29 the essential part was to
find a basis {1, b̃} for which b̃ squares to either 0 or ±1. For D2(R) we can choose
b̃ as a diagonal matrix with entries 1 and −1; clearly, b̃² is the identity matrix and
hence D2(R) ≅ R[X]/(X² − 1). For A choose b̃ = $\begin{pmatrix}0&1\\0&0\end{pmatrix}$; then b̃² = 0 and
hence A ≅ R[X]/(X²). Finally, for B we choose b̃ = $\begin{pmatrix}0&1\\-1&0\end{pmatrix}$; then b̃² equals the
negative of the identity matrix and hence B ≅ R[X]/(X² + 1). In particular, there
are no isomorphisms between any of the three algebras.
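The three choices of b̃ can be sanity-checked numerically; a quick sketch of ours, using the matrices chosen above:

```python
import numpy as np

b_D2 = np.diag([1.0, -1.0])                  # for D2(R): squares to I
b_A = np.array([[0.0, 1.0], [0.0, 0.0]])     # for A: squares to 0
b_B = np.array([[0.0, 1.0], [-1.0, 0.0]])    # for B: squares to -I

I = np.eye(2)
assert np.allclose(b_D2 @ b_D2, I)
assert np.allclose(b_A @ b_A, 0 * I)
assert np.allclose(b_B @ b_B, -I)
print("b~ squared is I, 0 and -I respectively")
```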
Exercise 1.26. One possibility is to imitate the proof of Proposition 1.29; it
still gives the three possibilities A0 = C[X]/(X²), A1 = C[X]/(X² − 1) and
A−1 = C[X]/(X² + 1). Furthermore, the argument that A0 ≇ A1 also works for C.
However, the C-algebras A1 and A−1 are isomorphic. In fact, the map

C[X]/(X² − 1) → C[X]/(X² + 1),  g(X) + (X² − 1) ↦ g(iX) + (X² + 1)

is well-defined (that is, independent of the coset representatives) and a bijective C-
algebra homomorphism.
Alternatively, we apply Exercise 1.25. In each of (a) and (b) we get a unique
algebra (they are A−1 and A0 with the previous notation). Furthermore, we know
that there are no irreducible polynomials of degree 2 in C[X] and therefore there is
no C-algebra of dimension 2 in (c).

Exercise 1.27. We fix the basis {1_A, α} of A = Q[X]/(X² − p) where 1_A is the
coset of 1 and α is the coset of X. That is, α² = p·1_A. Similarly we fix the basis of
B = Q[X]/(X² − q) consisting of 1_B and β with β² = q·1_B. Suppose ψ : A → B
is a Q-algebra isomorphism, then

ψ(1_A) = 1_B,   ψ(α) = c·1_B + dβ,

where c, d ∈ Q and d ≠ 0 (otherwise ψ is not injective). We then have

p·1_B = pψ(1_A) = ψ(p·1_A) = ψ(α²) = ψ(α)²
      = (c·1_B + dβ)² = c²·1_B + d²β² + 2cdβ
      = (c² + d²q)·1_B + 2cdβ.

Comparing coefficients yields 2cd = 0 and p = c² + d²q. Since d ≠ 0 we must
have c = 0. Then it follows that p = d²q. Now write d = m/n with coprime m, n ∈ Z.
Then we have n²p = m²q, which forces p = q (otherwise p divides m², and then
since p is prime, p also divides n², contradicting m and n being coprime). But we
assumed that p and q are different, so this shows that A and B are not isomorphic.

Exercise 1.28. (a) The elements of B are

$$\begin{pmatrix}0&0\\0&0\end{pmatrix}, \quad \begin{pmatrix}1&0\\0&1\end{pmatrix}, \quad \begin{pmatrix}0&1\\1&1\end{pmatrix}, \quad \begin{pmatrix}1&1\\1&0\end{pmatrix}.$$

It is then easily checked that B is a Z2-subspace of M2(Z2) and that it is closed
under matrix multiplication. Moreover, every non-zero matrix in B has determinant
1 and hence is invertible. Furthermore, B is 2-dimensional and therefore is
commutative (see Proposition 1.28). That is, B is a field.
(b) Let A be a 2-dimensional Z2-algebra. One can imitate the proof of Proposi-
tion 1.29 (we will not write this down). Alternatively, we apply Exercise 1.25. For
each of (a) and (b) we get a unique algebra, namely Z2[X]/(X(X + 1)) ≅ Z2 × Z2
in (a) and Z2[X]/(X²) in (b). Then to get an algebra in (c) we need to look for
irreducible polynomials over Z2 of degree 2. By inspection, we see that there is
just one such polynomial, namely X² + X + 1. In total we have proved that every
2-dimensional Z2-algebra is isomorphic to precisely one of

Z2[X]/(X² + X), Z2[X]/(X²) or Z2[X]/(X² + X + 1).
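The inspection step can be cross-checked by brute force (a sketch of ours): a monic quadratic over Z2 is irreducible exactly when it has no root.

```python
def roots_mod2(b, c):   # roots of X^2 + bX + c over Z2
    return [x for x in (0, 1) if (x * x + b * x + c) % 2 == 0]

irreducible = [(b, c) for b in (0, 1) for c in (0, 1) if not roots_mod2(b, c)]
print(irreducible)      # [(1, 1)], i.e. only X^2 + X + 1
```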

Exercise 1.29. (a) We use the axioms of a K-algebra from Definition 1.1.
(i) For every x, y ∈ A and λ, μ ∈ K we have

la (λx + μy) = a(λx + μy) = a(λx) + a(μy)


= λ(ax) + μ(ay) = λla (x) + μla (y).

(ii) For every x ∈ A we have

lλa+μb (x) = (λa + μb)x = λ(ax) + μ(bx)


= λla (x) + μlb (x) = (λla + μlb )(x).

Since x ∈ A was arbitrary it follows that lλa+μb = λla + μlb .


(iii) For every x ∈ A we have lab(x) = (ab)x = a(bx) = la(lb(x)), hence
lab = la ◦ lb.
(b) If la is the zero map for some a ∈ A, then ax = la (x) = 0 for every x ∈ A. In
particular, a = a1A = 0, hence ψ is injective.
(c) Choosing a fixed basis of A yields an algebra isomorphism EndK (A) ∼= Mn (K),
see Example 1.24. So from part (b) we get an injective algebra homomorphism
ψ̃ : A → Mn (K), that is, A is isomorphic to the image of ψ̃, a subalgebra of
Mn (K) (see Theorem 1.26).
(d) We fix the basis {e1, e2, α} of KQ. From the multiplication table of KQ in
Example 1.13 one works out the matrices of the maps le1, le2 and lα as

$$l_{e_1} \leftrightarrow \begin{pmatrix}1&0&0\\0&0&0\\0&0&1\end{pmatrix}, \qquad l_{e_2} \leftrightarrow \begin{pmatrix}0&0&0\\0&1&0\\0&0&0\end{pmatrix}, \qquad l_{\alpha} \leftrightarrow \begin{pmatrix}0&0&0\\0&0&0\\0&1&0\end{pmatrix}.$$

By parts (b) and (c), the path algebra KQ is isomorphic to the subalgebra of M3(K)
consisting of all K-linear combinations of the matrices corresponding to le1, le2 and
lα, that is, to the subalgebra

$$\left\{\begin{pmatrix}a&0&0\\0&b&0\\0&c&a\end{pmatrix} \,\middle|\; a, b, c \in K\right\}.$$
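One can also verify symbolically that this span is closed under multiplication, re-confirming that it is a subalgebra; a small sympy sketch of ours, with primed parameters written a1, b1, c1:

```python
import sympy as sp

a, b, c, a1, b1, c1 = sp.symbols('a b c a1 b1 c1')

def m(u, v, w):   # generic element of the displayed subalgebra
    return sp.Matrix([[u, 0, 0], [0, v, 0], [0, w, u]])

print((m(a, b, c) * m(a1, b1, c1)).expand())
# Matrix([[a*a1, 0, 0], [0, b*b1, 0], [0, a*c1 + b1*c, a*a1]]) -- same shape
```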

Chapter 2

Exercise 2.12. According to Theorem 2.10 the 1-dimensional A-modules are in
bijection with scalars α ∈ K satisfying α⁴ = 2.
(i) For K = Q there is no such scalar, that is, there is no 1-dimensional
Q[X]/(X⁴ − 2)-module.
(ii) For K = R there are precisely two 1-dimensional R[X]/(X⁴ − 2)-modules,
given by $X \cdot v = \pm\sqrt[4]{2}\,v$ (where v is a basis vector of the 1-dimensional module).
(iii) For K = C there are four 1-dimensional C[X]/(X⁴ − 2)-modules, given by
$X \cdot v = \pm\sqrt[4]{2}\,v$ and $X \cdot v = \pm i\sqrt[4]{2}\,v$.
(iv) There is no α ∈ Z3 with α⁴ = 2; hence there is no 1-dimensional
Z3[X]/(X⁴ − 2)-module.
(v) In Z7 = {0̄, 1̄, 2̄, 3̄, 4̄, 5̄, 6̄} there are precisely two elements with fourth
power equal to 2̄, namely 2̄ and 5̄. So there are precisely two 1-dimensional
Z7[X]/(X⁴ − 2)-modules, given by X · v = ±2̄v.
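The counts in (iv) and (v) are easy to confirm by brute force (a quick check of ours):

```python
# fourth roots of 2 in Z3 and Z7
for p in (3, 7):
    print(p, [a for a in range(p) if pow(a, 4, p) == 2 % p])
# 3 [] and 7 [2, 5]
```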
Exercise 2.14. (a) Suppose that U ⊆ K^n is a Tn(K)-submodule. If U = 0 then
U = V0 and we are done. If U ≠ 0 we set

i := max{k | ∃ (u1, . . . , un)^t ∈ U with uk ≠ 0}.

We now show that U = Vi (proving the claim in (a)). By definition of i we have
U ⊆ Vi. Conversely, we take an element u = (u1, . . . , ui, 0, . . . , 0)^t ∈ U with
ui ≠ 0 (which exists by definition of i). Then for every 1 ≤ r ≤ i we have that
e_r = u_i^{-1} E_{ri} u ∈ U since U is a Tn(K)-submodule. So Vi = span{e1, . . . , ei} ⊆ U.
(b) Let φ : Vi,j → Vr,s be a Tn(K)-module homomorphism. Note that we
have Vi,j = span{e_{j+1} + Vj, . . . , e_i + Vj} and similarly
Vr,s = span{e_{s+1} + Vs, . . . , e_r + Vs}. Then we consider the image of the first
basis vector, $\varphi(e_{j+1} + V_j) = \sum_{\ell=s+1}^{r} \lambda_\ell (e_\ell + V_s)$, and we get

$$\varphi(e_{j+1} + V_j) = \varphi(E_{j+1,j+1}(e_{j+1} + V_j)) = E_{j+1,j+1}\,\varphi(e_{j+1} + V_j) = \sum_{\ell=s+1}^{r} \lambda_\ell E_{j+1,j+1}(e_\ell + V_s).$$

Note that if j < s then E_{j+1,j+1} e_ℓ = 0 for all ℓ = s + 1, . . . , r. Thus, if j < s then
e_{j+1} + Vj ∈ ker(φ) and φ is not an isomorphism.
Completely analogously, there can't be an isomorphism φ if j > s (since then
φ⁻¹ is not injective, by the argument just given). So we have shown that if φ is
an isomorphism, then j = s. Moreover, if φ is an isomorphism, the dimensions of
Vi,j and Vr,s must agree, that is, we have i − j = r − s = r − j and hence also
i = r. This shows that the Tn(K)-modules Vi,j with 0 ≤ j < i ≤ n are pairwise
non-isomorphic. An easy counting argument shows that there are n(n + 1)/2 such
modules.
(c) The annihilator Ann_{Tn(K)}(ei) consists precisely of those upper triangular
matrices with i-th column equal to zero. So the factor module Tn(K)/Ann_{Tn(K)}(ei)
is spanned by the cosets E_{1i} + Ann_{Tn(K)}(ei), . . . , E_{ii} + Ann_{Tn(K)}(ei). Then one
checks that the map

ψ : Vi → Tn(K)/Ann_{Tn(K)}(ei),   e_j ↦ E_{ji} + Ann_{Tn(K)}(ei)

(for 1 ≤ j ≤ i) is a Tn(K)-module isomorphism.



Exercise 2.16. (a) E is non-empty since (0, 0) ∈ E. Let (m1 , m2 ), (n1 , n2 ) ∈ E and
λ, μ ∈ K. Then

α1 (λm1 + μn1 ) + α2 (λm2 + μn2 ) = λα1 (m1 ) + μα1 (n1 ) + λα2 (m2 ) + μα2 (n2 )
= λ(α1 (m1 ) + α2 (m2 )) + μ(α1 (n1 ) + α2 (n2 ))
= λ · 0 + μ · 0 = 0,

that is, λ(m1 , m2 ) + μ(n1 , n2 ) = (λm1 + μn1 , λm2 + μn2 ) ∈ E.


(b) By definition, the map ϕ : M1 × M2 → M, (m1 , m2 ) → α1 (m1 ) + α2 (m2 )
has kernel E, and it is surjective since M = im(α1 ) + im(α2 ) by assumption. The
rank-nullity theorem from linear algebra then gives

dimK M1 + dimK M2 = dimK M1 × M2


= dimK im(ϕ) + dimK ker(ϕ)
= dimK M + dimK E,

proving the claim.


(c) By part (a), E is a K-subspace. Moreover, for a ∈ A and (m1 , m2 ) ∈ E we have

α1 (am1 ) + α2 (am2) = aα1 (m1 ) + aα2 (m2 )


= a(α1 (m1 ) + α2 (m2 )) = a · 0 = 0,

that is, a(m1 , m2 ) = (am1 , am2 ) ∈ E and E is an A-submodule.


Exercise 2.17. (a) Let (β1 (w), β2 (w)) ∈ C and (β1 (v), β2 (v)) ∈ C, with w, v ∈ W .
For any λ, μ ∈ K we have

λ(β1 (w), β2 (w)) + μ(β1 (v), β2 (v)) = (λβ1 (w) + μβ1 (v), λβ2 (w) + μβ2 (v))
= (β1 (λw + μv), β2 (λw + μv)),

which is in C since W is a K-vector space, that is, λw + μv ∈ W . Then F is the


factor space, which is known from linear algebra to be a K-vector space.
(b) The map ψ : W → C, w → (β1 (w), β2 (w)) is surjective by definition of C,
and it is injective since ker(β1 ) ∩ ker(β2 ) = 0 by assumption. For the dimensions
we obtain

dimK F = dimK M1 × M2 − dimK C = dimK M1 + dimK M2 − dimK W.

(c) By part (a), C is a K-subspace. For every a ∈ A and (β1 (w), β2 (w)) ∈ C we
have

a((β1 (w), β2 (w)) = (aβ1 (w), aβ2 (w)) = (β1 (aw), β2 (aw)) ∈ C,

hence C is an A-submodule. Then F is the factor module as in Proposition 2.18.



Exercise 2.18. (a) By definition of the maps βi we have

ker(β1 ) ∩ ker(β2 ) = {(m1 , m2 ) ∈ E | m1 = 0, m2 = 0} = {(0, 0)}.

(b) By construction

C = {(β1 (w), β2 (w)) | w = (m1 , m2 ) ∈ E} = {(m1 , m2 ) | (m1 , m2 ) ∈ E} = E.

(c) Let ϕ : M1 × M2 → M be the map (m1 , m2 ) → α1 (m1 ) + α2 (m2 ), as in


Exercise 2.16. Since M = im(α1 ) + im(α2 ), this map is surjective. The kernel is
E, by definition of E. We have proved in part (b) that E = C and therefore by the
isomorphism theorem of vector spaces,

F = (M1 × M2)/C = (M1 × M2)/ker(ϕ) ≅ im(ϕ) = M.

Exercise 2.22. We prove part (b) (all other parts are straightforward). Let W ⊆ Ae1
be a 1-dimensional submodule, and take 0 ≠ w ∈ W. We express it in terms of the
basis of Ae1, as w = ce1 + dα with c, d ∈ K. Then e1w and e2w are both scalar
multiples of w, and e1w = ce1 but e2w = dα. It follows that one of c, d must be
zero and the other is non-zero. If c ≠ 0 then αw = cα ≠ 0 and is not a scalar
multiple of w. Hence c = 0 and w is a non-zero scalar multiple of α.
Now suppose we have non-zero submodules U and V of Ae1 such that
Ae1 = U ⊕ V. Then U and V must be 1-dimensional. By part (b) it follows that
U = span{α} = V and we do not have a direct sum, a contradiction.
Exercise 2.23. We apply Example 2.22 with R = A. We have seen that a map from
A to a module taking r ∈ A to rm for m a fixed element in a module is an A-module
homomorphism. Hence we have the module homomorphism

ψ : K[X] → (g)/(f ), ψ(r) = r(g + (f )) = rg + (f ),

which is surjective. We have ψ(r) = 0 if and only if rg belongs to (f ),


that is, rg is a multiple of f = gh. This holds if and only if r is a multi-
ple of h (recall that the ring K[X] is a UFD). By the isomorphism theorem,
K[X]/(h) = K[X]/ker(ψ) ≅ im(ψ) = (g)/(f).

Chapter 3

Exercise 3.3. (a) It suffices to prove one direction; the other one then follows by
using the inverse of the isomorphism. So suppose that V is a simple A-module, and
let U ⊆ W be an A-submodule. By Proposition 2.26 the preimage φ −1 (U ) is an
A-submodule of V. We assume V is simple, and we conclude that φ⁻¹(U) = 0 or
φ⁻¹(U) = V. Since φ is surjective this implies that U = 0 or U = W. Hence W is


a simple A-module.
(b) For i ∈ {0, 1, . . . , n − 1} let ψ : Vi+1 → φ(Vi+1 )/φ(Vi ) be the
composition of the restriction φ|Vi+1 : Vi+1 → φ(Vi+1 ) with the canonical
map onto φ(Vi+1 )/φ(Vi ); this is a surjective A-module homomorphism and
ker(ψ) = Vi (since φ|Vi+1 is an isomorphism). By the isomorphism theorem
we have Vi+1 /Vi ∼
= φ(Vi+1 )/φ(Vi ). Since Vi+1 /Vi is simple, φ(Vi+1 )/φ(Vi ) is
also a simple A-module by part (a). Hence we have a composition series of W , as
claimed.
Exercise 3.6. (a)/(b) The statement in (a) is a special case of the following solution
of (b). Let Vn := span{v1 , . . . , vn } be an n-dimensional vector space on which we
let x and y act as follows: x · v1 = 0, x · vi = vi−1 for 2 ≤ i ≤ n, and y · vi = vi+1
for 1 ≤ i ≤ n − 1, y · vn = 0. (This action is extended to a basis of KQ by letting
products of powers of x and y act by successive action of x and y.) Let 0 ≠ U ⊆ Vn
be a KQ-submodule, and $\alpha = \sum_{i=1}^{n} \lambda_i v_i \in U \setminus \{0\}$. Set m = max{i | λi ≠ 0}.
Then v1 = λm^{-1} x^{m-1} · α ∈ U. But then for every j ∈ {1, . . . , n} we also get that
vj = y^{j-1} v1 ∈ U, that is, U = Vn and Vn is simple.


(c) We consider an infinite-dimensional vector space V∞ := span{vi | i ∈ N} and
make this a KQ-module by setting x · v1 = 0, x · vi = vi−1 for all i ≥ 2 and
y · vi = vi+1 for all i ∈ N. Then the argument in (b) carries over verbatim (in part
(b) we have not used that y · vn = 0). Thus, V∞ is a simple KQ-module.
Exercise 3.10. By Example 3.25, every simple A-module S has dimension at most 2.
If S is 1-dimensional, then the endomorphism algebra is also 1-dimensional, hence
EndR(S) ≅ R, by Proposition 1.28. Now let S be of dimension 2. Then for every
v ∈ S \ {0} we have S = span{v, Xv} (where we write X also for its coset in A); in
fact, otherwise span{v} is a non-trivial A-submodule of S. So the action of X on S
is given by a matrix $\begin{pmatrix}0&a\\1&b\end{pmatrix}$ for some a, b ∈ R. Writing endomorphisms as
matrices we have

$$\mathrm{End}_R(S) \cong \left\{\varphi \in M_2(\mathbb{R}) \;\middle|\; \varphi\begin{pmatrix}0&a\\1&b\end{pmatrix} = \begin{pmatrix}0&a\\1&b\end{pmatrix}\varphi\right\}.$$

By direct calculation, ϕ is of the form $\varphi = \begin{pmatrix}\alpha & a\gamma\\ \gamma & \alpha + b\gamma\end{pmatrix}$, where α, γ ∈ R;
in particular, EndR(S) has dimension 2. On the other hand, Schur's lemma (cf.
Theorem 3.33) implies that EndR(S) is a division algebra. In the classification of
2-dimensional R-algebras in Proposition 1.29 there is only one division algebra,
namely R[X]/(X² + 1) ≅ C.

Exercise 3.16. We use the matrix units Eij where 1 ≤ i, j ≤ n. Every element of
Mn(D) can be written as $\sum_{i,j=1}^{n} z_{ij} E_{ij}$ with zij ∈ D. Suppose that
$z = \sum_{i,j=1}^{n} z_{ij} E_{ij} \in Z(A)$. Then for every k, ℓ ∈ {1, . . . , n} we have on the one
hand

$$E_{k\ell}\, z = \sum_{i,j=1}^{n} z_{ij} E_{k\ell} E_{ij} = \sum_{j=1}^{n} z_{\ell j} E_{kj}$$

and on the other hand

$$z\, E_{k\ell} = \sum_{i,j=1}^{n} z_{ij} E_{ij} E_{k\ell} = \sum_{i=1}^{n} z_{ik} E_{i\ell}.$$

Note that the first matrix has the ℓ-th row of z in row k, the second one has the k-th
column of z in column ℓ. However, the two matrices must be equal since z ∈ Z(A).
So we get z_{ℓk} = 0 for all ℓ ≠ k and z_{ℓℓ} = z_{kk} for all k, ℓ. This just means that z
is a multiple of the identity matrix by a 'scalar' from D. However, D need not be
commutative, so only the Z(D)-multiples of the identity matrix form the centre.
Exercise 3.18. We apply Proposition 3.17 to the chain
M0 ⊂ M1 ⊂ . . . ⊂ Mr−1 ⊂ Mr = M. Since all inclusions are strict we obtain for
the lengths that

0 ≤ ℓ(M0) < ℓ(M1) < . . . < ℓ(Mr−1) < ℓ(Mr) = ℓ(M),

and hence r ≤ ℓ(M).

Chapter 4

Exercise 4.3. By the division algorithm of polynomials, since g and h are coprime,
there are polynomials q1 , q2 such that 1K[X] = gq1 + hq2 and therefore we have

1K[X] + (f ) = (gq1 + (f )) + (hq2 + (f ))

and it follows that K[X]/(f ) is the sum of the two submodules (g)/(f ) and
(h)/(f ). We show that the intersection is zero: Suppose r is a polynomial such
that r + (f ) is in the intersection of (g)/(f ) and (h)/(f ). Then

r + (f ) = gp1 + (f ) = hp2 + (f )

for polynomials p1 , p2 . Since r + (f ) = gp1 + (f ) there is some polynomial k


such that r − gp1 = f k = ghk and therefore r = g(p1 + hk), that is, g divides r.
Similarly h divides r. Since K[X] is a UFD, and since g, h are coprime it follows
that gh divides r, and f = gh and therefore r + (f ) is zero in K[X]/(f ).

Exercise 4.5. (a) This holds since v1 + v2 + v3 is fixed by every g ∈ G.


(b) Consider the 2-dimensional subspace C := span{v1 − v2, v2 − v3}. One
checks that for any field K this is a KS3-submodule of V. We claim that
C is even a simple module. In fact, let C̃ ⊆ C be a non-zero submodule,
and c := α(v1 − v2) + β(v2 − v3) ∈ C̃ \ {0}, where α, β ∈ K. Then
C̃ ∋ c − (1 3)c = (α + β)(v1 − v3). So if α ≠ −β then v1 − v3 ∈ C̃, but then also
(1 2)(v1 − v3) = v2 − v3 ∈ C̃ and hence C̃ = C. So we can suppose that α = −β;
then C̃ ∋ c − (1 2)c = 3α(v1 − v2). Since K has characteristic ≠ 3 (and α ≠ 0,
otherwise c = 0) this implies that v1 − v2 ∈ C̃ and then C̃ = C as above.
Finally, we show that V = U ⊕ C. For dimension reasons it suffices to check
that U ∩ C = 0; let α(v1 + v2 + v3 ) = β(v1 − v2 ) + γ (v2 − v3 ) ∈ U ∩ C, with
α, β, γ ∈ K. Comparing coefficients yields α = β = −γ = −β + γ ; this implies
that 3α = 0, hence α = 0 (since K has characteristic different from 3) and then
also β = γ = 0. So we have shown that V = U ⊕ C is a direct sum of simple
KS3 -submodules, that is, V is a semisimple KS3 -module.
(c) Let K have characteristic 3, so −2 = 1 in K. Then
v1 + v2 + v3 = (v1 − v2 ) − (v2 − v3 ) ∈ C (recall that C is a submodule for every
field). Hence U ⊆ C. Suppose (for a contradiction) that V is semisimple. Then the
submodule C is also semisimple, by Corollary 4.7. But then, by Theorem 4.3, U
must have a complement, that is, a KS3 -module Ũ such that C = U ⊕ Ũ ; note that
Ũ must be 1-dimensional, say Ũ = span{ũ} where ũ = α(v1 − v2 ) + β(v2 − v3 )
with α, β ∈ K. Since Ũ is a KS3 -submodule we have

ũ + (1 2)ũ = β(v1 + v2 + v3 ) ∈ U ∩ Ũ .

But U ∩ Ũ = 0, so β = 0. Similarly, ũ + (2 3)ũ = −α(v1 + v2 + v3 ) yields α = 0.


So ũ = 0, a contradiction. Thus the KS3 -module V is not semisimple if K has
characteristic 3.
Exercise 4.6. Let x ∈ I, and suppose (for a contradiction) that x ∉ J(A). Since
J(A) is the intersection over all maximal left ideals of A, there exists a maximal left
ideal M with x ∉ M. Since M is maximal, we have M + Ax = A. In particular, we
can write 1_A = m + ax with m ∈ M and a ∈ A, so 1_A − ax = m ∈ M. On the
other hand, ax ∈ I since I is a left ideal, and hence (ax)^r = 0 by assumption on I.
It follows that

(1_A + ax + (ax)² + . . . + (ax)^{r−1})(1_A − ax) = 1_A,

that is, 1_A − ax is invertible in A. But from above 1_A − ax = m ∈ M, so also
1_A ∈ M and thus M = A, a contradiction since M is a maximal left ideal of A.
Exercise 4.7. A1 is isomorphic to K × K × K, hence is semisimple (see Exam-
ple 4.19). For A2 and A3, consider the left ideal

$$I = \begin{pmatrix}0&0&*\\0&0&0\\0&0&0\end{pmatrix};$$

this is nilpotent, namely I² = 0, and therefore I ⊆ J(A2) and I ⊆ J(A3) by
Exercise 4.6. In particular, J(A2) ≠ 0 and J(A3) ≠ 0, and then by Theorem 4.23 (g)
the algebras A2 and A3 are not semisimple. We have a map from A4 to the algebra

$$B = \begin{pmatrix}*&0&0\\0&*&*\\0&*&*\end{pmatrix}$$

defined by

$$\begin{pmatrix}a&0&b\\0&c&0\\d&0&e\end{pmatrix} \mapsto \begin{pmatrix}c&0&0\\0&a&b\\0&d&e\end{pmatrix}.$$

One checks that this is an algebra isomorphism.
The algebra B is isomorphic to K × M2(K), which is semisimple (see Example 4.19),
hence so is A4.
Exercise 4.9. (a) According to Proposition 4.14 we have to check that f is square-
free in R[X] if and only if f is square-free in C[X]. Recall that every irreducible
polynomial in R[X] has degree at most 2; let

f = (X − a1) · · · (X − ar)(X − z1)(X − z̄1) · · · (X − zs)(X − z̄s)

with ai ∈ R and zi ∈ C \ R. This is the factorisation in C[X]; the one in R[X] is
obtained from it by taking together (X − zi)(X − z̄i) ∈ R[X] for each i. Then f is
square-free in R[X] if and only if the ai and zj are pairwise different, if and only if
f is square-free in C[X].
(b) This is also true. It is clear that if f is square-free in R[X] then f is also square-
free in Q[X]. For the converse, let f = f1 . . . fr be square-free with pairwise
coprime irreducible polynomials fi ∈ Q[X]. Let z ∈ C be a root of some fi
(here we use the fundamental theorem of algebra). Then the minimal polynomial
of z (over Q) divides fi . Since fi is irreducible in Q[X] it follows that fi is the
minimal polynomial of z (up to a non-zero scalar multiple). This means in particular
that different fi cannot have common factors in R[X]. Moreover, since we are in
characteristic 0, an irreducible polynomial in Q[X] cannot have multiple roots, thus
no fi itself can produce multiple factors in R[X]. So f is square-free in R[X].
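Computationally, square-freeness in characteristic 0 can be tested via gcd(f, f′) = 1, and this gcd is the same over Q, R and C, which reflects both statements of this exercise. A sympy sketch of ours, with example polynomials of our choosing:

```python
import sympy as sp

X = sp.symbols('X')
f = (X**2 + 1) * (X - 2)        # square-free over Q, R and C
g = (X**2 + 1)**2 * (X - 2)     # square-free over none of them

for h in (f, g):
    print(sp.gcd(h, sp.diff(h, X)))   # 1 for f, X**2 + 1 for g
```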

Chapter 5

Exercise 5.1. According to the Artin–Wedderburn theorem there are precisely four
such algebras: C⁹, C⁵ × M2(C), M3(C) and C × M2(C) × M2(C).
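Since over C the only finite-dimensional division algebra is C itself, the count amounts to listing multisets of positive integers n_i with Σ n_i² = 9; a brute-force sketch of ours (helper name is an assumption):

```python
def decompositions(total, max_part):
    # multisets of positive integers (non-increasing) with sum of squares = total
    if total == 0:
        return [[]]
    result = []
    for n in range(min(max_part, int(total ** 0.5)), 0, -1):
        result += [[n] + rest for rest in decompositions(total - n * n, n)]
    return result

print(decompositions(9, 9))
# [[3], [2, 2, 1], [2, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1]] -- four algebras
```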
Exercise 5.2. Here, again by the Artin–Wedderburn theorem, there are five possible
algebras up to isomorphism, namely: R⁴, M2(R), R × R × C, C × C and H.
Exercise 5.4. (a) By the Artin–Wedderburn theorem, A ≅ Mn1(D1) × . . . × Mnr(Dr)
with positive integers ni and division algebras Di. For the centres we get
Z(A) ≅ $\prod_{i=1}^{r} Z(M_{n_i}(D_i))$. We have seen in Exercise 3.16 that the centre of a
matrix algebra Mni(Di) consists precisely of the Z(Di)-multiples of the identity
matrix, that is, Z(Mni(Di)) ≅ Z(Di) as K-algebras. Note that Z(Di) is by definition
a commutative algebra, and it is still a division algebra (the inverse of an element in
Z(Di) also commutes with all elements in Di); thus Z(Di) is a field. Hence we get
that Z(A) ≅ $\prod_{i=1}^{r} Z(D_i)$ is isomorphic to a direct product of fields.
(b) By part (a), there are fields K1, . . . , Kr (containing K), such that
Z(A) ≅ K1 × . . . × Kr. Since fields do not contain any non-zero zero divisors, an
element

x := (x1, . . . , xr) ∈ K1 × . . . × Kr

can only be nilpotent (that is, satisfy x^ℓ = 0 for some ℓ ∈ N) if all xi = 0, that is, if
x = 0. Clearly, this property is invariant under isomorphisms, so it holds for Z(A) as
well.
Exercise 5.11. (a) By definition of εi we have that (X − λi + I)εi is a scalar multiple
of $\prod_{i=1}^{r}(X - \lambda_i) + I$, which is zero in A.
(b) The product εiεj for i ≠ j has a factor $\prod_{t=1}^{r}(X - \lambda_t) + I$ and hence is zero in
A. Moreover, using part (a), we have

$$\varepsilon_i^2 = (1/c_i)\prod_{j \neq i}(X - \lambda_j + I)\,\varepsilon_i = (1/c_i)\prod_{j \neq i}(\lambda_i - \lambda_j)\,\varepsilon_i = \varepsilon_i.$$

(c) It follows from (b) that the elements ε1, . . . , εr are linearly independent over
K (if λ1ε1 + . . . + λrεr = 0 then multiplication by εi yields λiεi = 0 and hence
λi = 0). Observe that dimK A = r and hence ε1, . . . , εr are a K-basis for A. So we
can write 1_A = b1ε1 + . . . + brεr for bi ∈ K. Now use εi = εi · 1_A and deduce that
bi = 1 for all i.
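For concrete pairwise distinct scalars λi (chosen here for illustration; this sketch and its names are ours, not part of the printed solution), the relations can be verified with sympy:

```python
import sympy as sp

X = sp.symbols('X')
lams = [0, 1, 2, 5]
f = sp.prod([X - l for l in lams])

def eps(i):   # (1/c_i) * prod_{j != i} (X - lambda_j)
    c = sp.prod([sp.Integer(lams[i] - l) for j, l in enumerate(lams) if j != i])
    return sp.prod([X - l for j, l in enumerate(lams) if j != i]) / c

e = [sp.expand(eps(i)) for i in range(len(lams))]
for i in range(len(e)):
    assert sp.rem(e[i]**2 - e[i], f, X) == 0        # idempotent mod f
    for j in range(i + 1, len(e)):
        assert sp.rem(e[i] * e[j], f, X) == 0       # pairwise orthogonal mod f
assert sp.expand(sum(e)) == 1                       # they sum to 1
print("eps_i are orthogonal idempotents summing to 1")
```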

Chapter 6

Exercise 6.5. Let U ⊆ V be a non-zero CSn-submodule. Take a non-zero element
u ∈ U; by a suitable permutation of the coordinates we can assume that u is of
the form (0, . . . , 0, ui, . . . , un) where uj ≠ 0 for all j = i, i + 1, . . . , n. Since
the coordinates must sum to zero, u must have at least two non-zero entries; so
there exists an index j > i such that uj ≠ 0. Then we consider the element
ũ := u − (ui/uj)(i j)u, which is in U (since U is a submodule). By construction, the
i-th coordinate of ũ is zero, that is, ũ ∈ U has fewer non-zero entries than u. We repeat
the argument until we reach an element in U with precisely two non-zero entries.
Since the coordinates sum to 0, this element is of the form (0, . . . , 0, un−1, −un−1).
By scaling and then permuting coordinates we conclude that the n − 1 linearly
independent vectors of the form (0, . . . , 0, 1, −1, 0, . . . , 0) are in U. Since V has
dimension n − 1 this implies that U = V and hence V is simple.

Exercise 6.8. (i) Since −1 commutes with every element in G, the cyclic group
⟨−1⟩ generated by −1 is a normal subgroup in G. The factor group G/⟨−1⟩ is of
order 4, hence abelian. This implies that G′ ⊆ ⟨−1⟩. On the other hand, as G is not
abelian, G′ ≠ {1}, and we get G′ = ⟨−1⟩.
(ii) The number of one-dimensional simple CG-modules is equal to |G/G′| = 4.
Then the only possible solution of

$$8 = |G| = \sum_{i=1}^{k} n_i^2 = 1 + 1 + 1 + 1 + \sum_{i=5}^{k} n_i^2$$

is that k = 5 and n5 = 2. So CG has five simple modules, of dimensions 1, 1, 1, 1, 2.
(iii) By part (ii), the Artin–Wedderburn decomposition of CG has the form

CG ≅ C × C × C × C × M2(C).

This is actually the same as for the group algebra of the dihedral group D4 of order
8. Thus, the group algebras CG and CD4 are isomorphic algebras (but, of course,
the groups G and D4 are not isomorphic).
Exercise 6.9. (i) No. Every group G has the 1-dimensional trivial CG-module;
hence there is always at least one factor C in the Artin–Wedderburn decomposition
of CG.
(ii) No. Such a group would have order |G| = 1² + 2² = 5. But every group of prime
order is cyclic (by Lagrange’s theorem from elementary group theory), and hence
abelian. But then every simple CG-module is 1-dimensional (see Theorem 6.4), so
a factor M2 (C) cannot occur.
(iii) Yes. G = S3 is such a group, see Example 6.7.
(iv) No. Such a group would have order |G| = 1² + 1² + 3² = 11. On the
other hand, the number of 1-dimensional CG-modules divides the group order (see
Corollary 6.8). But this would give that 2 divides 11, a contradiction.

Chapter 7

Exercise 7.3. Let M = U1 ⊕. . .⊕Ur , and let γ : M → M be an isomorphism. Then


γ (M) = M since γ is surjective. We verify the conditions in Definition 2.15. (i) By
Theorem 2.24, each γ (Ui ) is a submodule of γ (M), and then γ (U1 ) + . . . + γ (Ur )
is also a submodule of γ (M), using Exercise 2.3 (and induction on r). To prove
equality, take an element in γ (M), that is, some γ (m) with m ∈ M. Then
m = u1 + . . . + ur with ui ∈ Ui and we have

γ (m) = γ (u1 ) + . . . + γ (ur ) ∈ γ (U1 ) + . . . + γ (Ur ).



 
(ii) Now let x ∈ γ(Ui) ∩ $\sum_{j \neq i} \gamma(U_j)$, then x = γ(ui) = $\sum_{j \neq i} \gamma(u_j)$ for elements
ui ∈ Ui and uj ∈ Uj. We have

$$0 = -\gamma(u_i) + \sum_{j \neq i} \gamma(u_j) = \gamma\Big({-u_i} + \sum_{j \neq i} u_j\Big).$$

Since γ is injective, it follows that $-u_i + \sum_{j \neq i} u_j = 0$ and then ui is in the
intersection of Ui and $\sum_{j \neq i} U_j$. This is zero, by the definition of a direct sum.
Then also x = γ(ui) = 0. This shows that γ(M) = γ(U1) ⊕ . . . ⊕ γ(Ur).
Exercise 7.4. (a) Analogous to the argument in the proof of Exercise 2.14 (b) one
shows that for any fixed t with j + 1 ≤ t ≤ i we have φ(et + Vj ) = λt (et + Vj ),
where λt ∈ K. If we use the action of Ej +1,t similarly, we deduce that λt = λj +1
and hence φ is a scalar multiple of the identity. This gives EndTn(K)(Vi,j) ≅ K, the
1-dimensional K-algebra.
(b) By part (a), the endomorphism algebra of Vi,j is a local algebra. Then Fitting’s
Lemma (see Corollary 7.16) shows that each Vi,j is an indecomposable Tn (K)-
module.
Exercise 7.6. (a) The left ideals of A = K[X]/(f ) are given by (h)/(f ), where h
divides f . In particular, the maximal left ideals are given by the irreducible divisors
h of f . By definition, A is a local algebra if and only if A has a unique maximal left
ideal. By the previous argument this holds if and only if f has only one irreducible
divisor (up to scalars), that is, if f = g^m is a power of an irreducible polynomial.
(b) Applying part (a) gives the following answers: (i) No; (ii) Yes (since
X^p − 1 = (X − 1)^p in characteristic p); (iii) Yes (since
X³ − 6X² + 12X − 8 = (X − 2)³).
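The factorisations used in (b) can be confirmed with sympy; the third line below is our own stand-in non-example, since the polynomial of case (i) is not restated here.

```python
import sympy as sp

X = sp.symbols('X')
print(sp.factor(X**3 - 6*X**2 + 12*X - 8))   # (X - 2)**3, so local
print(sp.factor(X**5 - 1, modulus=5))        # (X - 1)**5 over Z_5, so local
print(sp.factor(X**2 - 1))                   # (X - 1)*(X + 1), so not local
```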
Exercise 7.9. As a local algebra, A has a unique maximal left ideal (see Exercise 7.1)
which then is the Jacobson radical J (A) (see Definition 4.21). Now let S be a simple
A-module. So S = As for any non-zero s ∈ S (see Lemma 3.3) and by Lemma 3.18
we know that S ∼ = A/AnnA (s). Then AnnA (s) is a maximal left ideal of A (by
the submodule correspondence), thus AnnA (s) = J (A) as observed above, since
A is local. Hence any simple A-module is isomorphic to A/J (A), which proves
uniqueness.
Exercise 7.12. (a) The Artin–Wedderburn theorem gives that up to isomorphism the
semisimple algebra A has the form

A ≅ Mn1(D1) × . . . × Mnr(Dr)

with division algebras Di over K. For each 1 ≤ i ≤ r and 1 ≤ j ≤ ni we consider
the left ideal

$$I_{i,j} := \prod_{\ell=1}^{i-1} M_{n_\ell}(D_\ell) \times U_j \times \prod_{\ell=i+1}^{r} M_{n_\ell}(D_\ell),$$

where Uj ⊆ Mni(Di) is the left ideal of all matrices with zero entries in the j-th
column. Clearly, each Ii,j is a left ideal of A, that is, an A-submodule, with factor
module A/Ii,j ≅ $D_i^{n_i}$. The latter is a simple A-module (see Lemma 5.8, and its
proof), and thus each Ii,j is a maximal left ideal of A. However, if A is also a local
algebra, then it must have a unique maximal left ideal. So in the Artin–Wedderburn
decomposition above we must have r = 1 and n1 = 1, that is, A = D1 is a division
algebra over K.
Conversely, any division algebra over K is a local algebra, see Example 7.15.
(b) For the group G with one element we have KG ≅ K, which is local and
semisimple. So let G have at least two elements and let 1 ≠ g ∈ G. If m is the
order of g then g^m = 1 in G, which implies that in KG we have

(g − 1)(g^{m−1} + . . . + g² + g + 1) = 0,

that is, g − 1 is a non-zero zero divisor. In particular, g − 1 is not invertible and
hence KG is not a division algebra. But then part (a) implies that KG is not local.

Chapter 8

Exercise 8.1. (a) (i) Recall that the action of the coset of X on Vα is given by
the linear map α, and T is a different name for this map. Since the coset of X
commutes with every element of A, applying α commutes with the action of an
arbitrary element in A. Hence T is an A-module homomorphism. Since T = α,
it has minimal polynomial g^t, that is, g^t(T) = 0, and if h is a polynomial with
h(T) = 0 then g^t divides h.
(ii) By assumption, Vα is cyclic, so we can fix a generator w of Vα as an A-module.
The cosets of monomials X^i span A, and they take w to α^i(w) and therefore Vα is
spanned by elements of the form α^i(w). Since Vα is finite-dimensional, there exists
an m ∈ N such that Vα has a K-basis of the form {α^i(w) | 0 ≤ i ≤ m}.
Let φ : Vα → Vα be an A-module homomorphism, then

$$\varphi(w) = \sum_{i=0}^{m} a_i \alpha^i(w) = \sum_{i=0}^{m} a_i T^i(w) = \Big(\sum_{i=0}^{m} a_i T^i\Big)(w)$$

for unique elements ai ∈ K. But the A-module homomorphisms φ and
h(T) := $\sum_{i=0}^{m} a_i T^i$ are both uniquely determined by the image of w since w
generates Vα, and it follows that φ = h(T). (To make this explicit: an arbitrary
element in Vα is of the form zw for z ∈ A. Then φ(zw) = zφ(w) = z(h(T)(w)) = h(T)(zw)
and therefore the two maps φ and h(T) are equal.)
(iii) By (ii) we have φ = h(T), where h is a polynomial. Assume φ² = φ, then
h(T)(h(T) − idV) is the zero map on Vα. By part (i), the map T has minimal
polynomial g^t, and therefore g^t divides the polynomial h(h − 1). We have that
g is irreducible and clearly h and h − 1 are coprime; then since K[X] is a unique
factorisation domain, it follows that either g^t divides h or g^t divides h − 1. In the
first case, h(T) is the zero map on Vα and φ = 0, and in the second case h(T) − idV
is the zero map on Vα and φ = idV.
We have shown that the zero map and the identity map are the only idempotents
in the endomorphism algebra of Vα . Then Lemma 7.3 implies that Vα is indecom-
posable as an A-module.
(b) By assumption,

$$J_n(\lambda) = \begin{pmatrix} \lambda & & & \\ 1 & \lambda & & \\ & \ddots & \ddots & \\ & & 1 & \lambda \end{pmatrix}$$

is the matrix of α with respect to some K-basis {w1, . . . , wn} of V. This means that
α(wi) = λwi + wi+1 for 1 ≤ i ≤ n − 1 and α(wn) = λwn. Since α describes the
action of the coset of X, this implies that Aw1 contains w1, . . . , wn and hence Vα is
a cyclic A-module, generated by w1. The minimal polynomial of Jn(λ), and hence
of α, is g^n, where g = X − λ. This is irreducible, so we can apply part (a) and Vα is
indecomposable by (iii).
Exercise 8.6. (a) Recall (from basic algebra) that the conjugacy class of an element g
in some group G is the set {xgx −1 | x ∈ G}, and that the size of the conjugacy class
divides the order of G. If |G| = p^n then each conjugacy class has size some power
of p. Since G is the disjoint union of conjugacy classes, the number of conjugacy
classes of size 1 must be a multiple of p. It is non-zero since the identity element
is in such a class. Hence there must be at least one non-identity element g with
conjugacy class of size 1. Then g is in the centre Z(G), hence part (a) holds.
(b) Assume Z(G) is a proper subgroup of G; it is normal, so we have the factor
group. Assume (for a contradiction) that it is cyclic, say generated by the coset
xZ(G) for x ∉ Z(G). So every element of G belongs to a coset x^r Z(G) for some
r. Take elements y1, y2 ∈ G, then y1 = x^r z1 and y2 = x^s z2 for some r, s ∈ N0 and
z1, z2 ∈ Z(G). We see directly that y1 and y2 commute. So G is abelian, and then
Z(G) = G, a contradiction.
(c) Since G is not cyclic, we must have n ≥ 2. Assume first that n = 2: Then G
must be abelian. Indeed, otherwise the centre of G would be a subgroup of order
p, by (a) and Lagrange’s theorem. Then the factor group G/Z(G) has order p and
must be cyclic, and this contradicts (b). So if G of order p² is not cyclic then it is
abelian and can only be Cp × Cp by the structure of finite abelian groups. Thus the
claim holds, with N = {1} the trivial normal subgroup.
As an inductive hypothesis, we assume the claim is true for any group of order
p^m where m ≤ n. Now take a group G of order p^{n+1} and assume G is not cyclic. If
G is abelian then it is a direct product of cyclic groups with at least two factors, and
then we can see directly that the claim holds. So assume G is not abelian. Then by
(a) we have that G/Z(G) has order p^k for k ≤ n, and the factor group is not cyclic, by
(b). By the inductive hypothesis it has a factor group N̄ = (G/Z(G))/(N/Z(G))
isomorphic to Cp × Cp. By the isomorphism theorem for groups, this group is
isomorphic to G/N, where N is normal in G, that is, we have proved (c).
Exercise 8.9. Let {v1 , . . . , vr } be a basis of V and {w1 , . . . , ws } be a basis of W as
K-vector spaces. Then the elements (v1 , 0), . . . , (vr , 0), (0, w1 ), . . . , (0, ws ) form
a K-vector space basis of V ⊕ W . (Recall that we agreed in Sect. 8.2 to use the
symbol ⊕ also for external direct sums, that is, elements of V ⊕ W are written as
tuples, see Definition 2.17.)
Let T be a system of representatives of the left cosets of H in G. Then we get K-
vector space bases for the modules involved from Proposition A.5 in the appendix.
Indeed, a basis for KG ⊗H (V ⊕ W ) is given by the elements

t ⊗H (v1 , 0), . . . , t ⊗H (vr , 0), t ⊗H (0, w1 ), . . . , t ⊗H (0, ws ).

On the other hand, a basis for (KG ⊗H V ) ⊕ (KG ⊗H W ) is given by

(t ⊗H v1 , 0), . . . , (t ⊗H vr , 0), (0, t ⊗H w1 ), . . . , (0, t ⊗H ws ).

From linear algebra it is well known that mapping a basis onto a basis and extending
linearly yields a bijective K-linear map. If we choose to map

t ⊗H (vi , 0) → (t ⊗H vi , 0) for 1 ≤ i ≤ r

and

t ⊗H (0, wj ) → (0, t ⊗H wj ) for 1 ≤ j ≤ s

then we obtain a bijective K-linear map of the form

KG ⊗H (V ⊕ W) → (KG ⊗H V) ⊕ (KG ⊗H W),  g ⊗H (v, w) ↦ (g ⊗H v, g ⊗H w)

for all g ∈ G, v ∈ V and w ∈ W . Since the KG-action on induced modules is


by multiplication on the first factor and on direct sums diagonally, it is readily seen
that the above map is also a KG-module homomorphism. So we have the desired
isomorphism of KG-modules.

Chapter 9

Exercise 9.8. Let v ∈ M(j ) be a non-zero element. Then we define a representation


Tv of Q by setting Tv(j) = span{v} and Tv(i) = 0 for all i ≠ j. Moreover, for
every arrow γ of Q we set Tv (γ ) = 0. Then Tv clearly is a subrepresentation of M
and it is isomorphic to the simple representation Sj .
Now let v1 , . . . , vd be a K-basis of M(j ), that is,
M(j ) = span{v1 } ⊕ . . . ⊕ span{vd } is a direct sum of K-vector spaces. Then
M = Tv1 ⊕ . . . ⊕ Tvd ∼ = Sj ⊕ . . . ⊕ Sj is a direct sum of subrepresentations.
The last statement on the converse is clear.
Exercise 9.9. (a) (i) To define Y, take the spaces as given in the question. If γ is
any arrow of Q then, by assumption, γ starts at a vertex different from j and we
take Y(γ) = M(γ). If γ = αi for some i then the image of Y(γ) is in Y(j), by the
definition of Y(j). Otherwise, if γ ends at a vertex k ≠ j then Y(k) = M(k) and
Y(γ) maps into Y(k). This shows that Y is a subrepresentation of M.
To construct X, for X(j) we choose a subspace of M(j) such that
X(j) ⊕ Y(j) = M(j). Note that this may be zero. For k ≠ j we take
X(k) = 0 and for each arrow γ of Q we take X(γ) to be the zero map. Then
X is a subrepresentation of M. With these, for each vertex k of Q we have
M(k) = X(k) ⊕ Y(k) as vector spaces. This proves the claim.
(ii) The representation X satisfies the conditions of Exercise 9.8. Hence it is
isomorphic to a direct sum of copies of Sj; the number of copies is equal to
dimK X(j) = dimK M(j) − dimK Y(j), by basic linear algebra.
(b) Suppose M = X ⊕ Sj for a subrepresentation X of M. Then for each i, we
have Sj(αi) = 0 and the map M(αi) has image contained in X(j). So
$\sum_{i=1}^{t} \operatorname{im}(M(\alpha_i)) \subseteq X(j)$, which is a proper subspace of
M(j) = X(j) ⊕ Sj(j) = X(j) ⊕ K.
Exercise 9.10. (a) (i) Take Y with Y(k) as defined in the question. Let γ be an arrow.
It does not end at j, by assumption. If γ : k → l and k ≠ j then take Y(γ) = N(γ);
it maps into Y(l) = N(l). If k = j then we take Y(γ) to be the restriction of N(γ)
to Y(j); it maps to Y(l) = N(l). So Y is a subrepresentation of N. Define X with
X(j) as in the question, and X(i) = 0 for i ≠ j, and take X(γ) to be the zero map,
for all arrows γ. This is a subrepresentation of N, and moreover, N = X ⊕ Y.
(ii) This follows by applying Exercise 9.8.
(b) Let N = U ⊕ V, where V ≅ Sj; then V(j) = K is contained in the kernel of
each N(βi), so the intersection of these kernels is non-zero.
Exercise 9.11. (i) Take the last quiver in Example 9.21, with i = 1₁. Then we set

Pi(i) = span{ei},  Pi(1₂) = span{γ},  Pi(2) = span{α, βγ}.

For each arrow, the corresponding map in the representation is given by left
multiplication with this arrow, for instance Pi (β) is given by γ → βγ .

In general, for each vertex j of Q, define Pi (j ) to be the span of all paths from
i to j (which may be zero). Assume β : j → t is an arrow in Q. Then define
Pi (β) : Pi (j ) → Pi (t) by mapping each element p in Pi (j ) to βp, which lies in
Pi (t).
(ii) Assume Q has no oriented cycles. We give three alternative arguments for
proving indecomposability.
Let Pi = U ⊕ V for subrepresentations U and V. The space Pi (i) = span{ei }
is only 1-dimensional, by assumption, and must be equal to U (i) ⊕ V (i), and it
follows that one of U (i) and V (i) must be zero, and the other is spanned by ei . Say
U (i) = span{ei } and V (i) = 0. Take some vertex t. Then Pi (t) is the span of all
paths from i to t. Suppose Pi (t) is non-zero, then any path p in Pi (t) is equal to
pei . By the definition of the maps in Pi , such a path belongs to U (t). This means
that U (t) = Pi (t), and then V (t) = 0 since by assumption Pi (t) = U (t) ⊕ V (t).
This is true for all vertices t and hence V is the zero representation.
Alternatively we can also show that KQei is an indecomposable KQ-module:
Suppose KQei = U ⊕ V for submodules U and V . Then ei = ui + vi for unique
ui ∈ U and vi ∈ V. Since ei² = ei we have ui + vi = ei = ei² = ei ui + ei vi. Note that
ei ui ∈ U and ei vi ∈ V since U and V are submodules. By uniqueness, ei ui = ui
and ei vi = vi ; these are elements of ei KQei which is 1-dimensional (since Q
has no oriented cycles). If ui and vi were non-zero then they would be linearly
independent (they lie in different summands), a contradiction. So say vi = 0. Then
ei = ui belongs to U and since ei generates the module, U = KQei and V = 0.
As a second alternative, we can also prove indecomposability by applying our
previous work. By Exercise 5.9 we know that EndKQ (KQei ) is isomorphic to
(ei KQei )op . Since there are no oriented cycles, this is just the span of ei . Then
we see directly that any endomorphism ϕ of Pi = KQei with ϕ² = ϕ is zero or the
identity. Hence the module is indecomposable, by Lemma 7.3.

Chapter 10

Exercise 10.10. (a) Using the explicit formula for C in Example 10.15, we find
that the orbit of εn is {εn, εn−1, . . . , ε1, −α1,n}.
(b) Using the formula for C(αr,s) in Example 10.15 we see that the orbit of αr,n is
given by

{αr,n, αr−1,n−1, . . . , α1,n−r+1, −αn−r+1,n, −αn−r,n−1, . . . , −α1,r}.

This has n + 1 elements and αr,n is the unique root of the form αt,n. Moreover, there
are n such orbits, each containing n + 1 elements. Since in total there are n(n + 1)
roots for Dynkin type An (see Example 10.8), these are all orbits and the claim
follows.
(c) We have C^{n+1}(εi) = εi for all i = 1, . . . , n (as one sees from the calculation
for (a)). The εi form a basis for R^n, and C is linear. It follows that C^{n+1} fixes each
element in R^n.
Exercise 10.11. In Exercise 10.10 we have seen that C^{n+1} is the identity. Moreover,
for 1 ≤ k < n + 1 the map C^k is not the identity map, as we see from the
computation in the previous exercise, namely all C-orbits have size n + 1.
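These two exercises can be cross-checked numerically. The sketch below (ours) builds a Coxeter transformation of type A_n as a product of the simple reflections, one standard convention; the book fixes its own formula in Example 10.15, but all Coxeter elements are conjugate, so the order does not depend on this choice.

```python
import numpy as np

def coxeter_An(n):
    # Gram matrix of A_n: (e_i, e_i) = 2, (e_i, e_j) = -1 for neighbours
    G = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    C = np.eye(n)
    for i in range(n):          # reflection s_i(x) = x - (x, e_i) e_i
        s = np.eye(n)
        s[i, :] -= G[i, :]
        C = s @ C
    return C

for n in range(1, 7):
    C = coxeter_An(n)
    powers = [np.allclose(np.linalg.matrix_power(C, k), np.eye(n))
              for k in range(1, n + 2)]
    assert powers == [False] * n + [True]   # order is exactly n + 1
print("the Coxeter transformation of A_n has order n + 1 for n = 1..6")
```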
Exercise 10.14. (a) The positive roots for D4 are

εi , (1, 0, 1, 0), (0, 1, 1, 0), (0, 0, 1, 1), (1, 1, 1, 0), (0, 1, 1, 1), (1, 0, 1, 1),
(1, 1, 1, 1), (1, 1, 2, 1).

(b) The positive roots for type D5 are as follows: first, append a zero to each of the
roots for D4 . Then any other positive root for D5 has 5-th coordinate equal to 1. One
gets

ε5 , (0, 0, 0, 1, 1), (0, 0, 1, 1, 1), (0, 1, 1, 1, 1), (1, 0, 1, 1, 1), (1, 1, 1, 1, 1),
(1, 1, 2, 1, 1), (1, 1, 2, 2, 1).

We get 20 roots in total.
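These root lists can be verified by brute force: the roots are exactly the non-zero integer vectors with q(x) = 1, where $q(x) = \sum_i x_i^2 - \sum_{\text{edges}} x_i x_j$, and the positive roots are those with non-negative entries. In the sketch below (ours) the edge lists encode D4 with branch vertex 3 and D5 with vertex 5 attached to vertex 4, matching the coordinates used above.

```python
from itertools import product

def positive_roots(n, edges):
    roots = []
    for x in product(range(3), repeat=n):   # entries of these roots are <= 2
        if any(x):
            q = sum(t * t for t in x) - sum(x[i - 1] * x[j - 1] for i, j in edges)
            if q == 1:
                roots.append(x)
    return roots

d4 = positive_roots(4, [(1, 3), (2, 3), (3, 4)])
d5 = positive_roots(5, [(1, 3), (2, 3), (3, 4), (4, 5)])
print(len(d4), len(d5))   # 12 and 20
```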

Chapter 11

Exercise 11.2. By Exercises 9.9 and 9.10 we have that M = X ⊕ Y is a direct
sum of subrepresentations. Since M is indecomposable, one of X and Y must be
the zero representation. Suppose that Y is the zero representation, then M = X,
but as shown in Exercises 9.9 and 9.10, the representation X is isomorphic to a
direct sum of copies of the simple representation Sj. This is a contradiction to the
assumption that M is indecomposable and not isomorphic to Sj. Therefore, X is
the zero representation and M = Y. For part (a), by definition in Exercise 9.9 we
then have M(j) = Y(j) = $\sum_{i=1}^{t} \operatorname{im}(M(\alpha_i))$, as claimed. For part (b), by definition
in Exercise 9.10 we then have 0 = X(j) = $\bigcap_{i=1}^{t} \ker(M(\alpha_i))$, as claimed.
Exercise 11.8. (a) Vertex 1 is a source, so by Exercise 9.10 we can write
M = X ⊕ Y where X(1) = ker(M(α1 )) and X is a direct sum of copies of
the simple representation S1 . We assume M is indecomposable and not simple.
It follows that X = 0, that is, M(α1 ) is injective. Similarly M(α2 ) is injective.
The vertex 2 is a sink, so by Exercise 9.9 we can write M = X ⊕ Y where
Y (2) = im(M(α1 )) + im(M(α2 )), and where X is isomorphic to a direct sum of
copies of S2 . Since M is indecomposable and not simple, it follows that X = 0,
and therefore M(2) = Y (2) = im(M(α1 )) + im(M(α2 )).

(b) An indecomposable representation M with M(1) = 0 is the extension by zero


of an indecomposable representation of the subquiver of Dynkin type A2 obtained
by removing vertex 1. The indecomposable representations of a quiver of type A2
are the simple representations and one other with dimension vector (1, 1). Therefore
the indecomposable representations M of Q with M(1) = 0 are

S2, S3 and $0 \longrightarrow K \overset{\mathrm{id}}{\longleftarrow} K$.

Similarly the indecomposable representations M of Q with M(3) = 0 are

S1, S2 and $K \overset{\mathrm{id}}{\longrightarrow} K \longleftarrow 0$.
(c) This follows directly from part (a).


Exercise 11.10. Exercise 11.8 describes the indecomposable representations with
M(1) = 0 or M(3) = 0; moreover, we have seen that if M(1) and M(3) are
both non-zero then M(2) must also be non-zero. In Exercise 11.9 we see that up to
isomorphism there is only one indecomposable representation of Q with M(i) ≠ 0
for i = 1, 2, 3. In total we have six indecomposable representations, and their
dimension vectors are in fact the positive roots of the Dynkin diagram of type A3
(which we have computed in Example 10.8).
Exercise 11.13. (i) ⇒ (ii): This holds by Proposition 11.22.
(ii) ⇒ (i): We show that if (i) does not hold, then also (ii) does not hold.
Assume $\sum_{i=1}^{t} \operatorname{im}(M(\alpha_i))$ is a proper subspace of M(j). By Exercise 9.9 we
have M = X ⊕ Y, where X is isomorphic to a direct sum of copies of Sj,
and Y(j) = $\sum_{i=1}^{t} \operatorname{im}(M(\alpha_i))$. By our assumption, X is non-zero, and since
M is indecomposable, it follows that M ≅ Sj. But then Σj⁺(M) = 0 (by
Proposition 11.12) and Σj⁻Σj⁺(M) = 0 ≇ M, so (ii) does not hold.
(i) ⇒ (iii): This holds by Proposition 11.37.
(iii) ⇒ (iv): Assume that (iv) does not hold, that is, M ≅ Sj. Then we have
sj(dim M) = sj(dim Sj) = −εj; on the other hand, Σj⁺(M) = 0 and therefore
dim Σj⁺(M) = 0 ≠ −εj. So (iii) does not hold.
(iv) ⇒ (v): This is Theorem 11.25 part (a) (use that M is indecomposable by
assumption).
(v) ⇒ (iv): Assume Σj⁺(M) is indecomposable. Suppose we had M ≅ Sj; then
Σj⁺(M) = 0, a contradiction. So M is not isomorphic to Sj.
(iv) ⇒ (i): This holds by Exercise 9.9.
Exercise 11.14. We only formulate the analogue; the proof is very similar to the
proof of Exercise 11.13 and is left to the reader.
Let Q′ be a quiver and j a source in Q′. We denote by β1, . . . , βt the arrows of
Q′ starting at j. Let N be an indecomposable representation of Q′. Show that the
following statements are equivalent:

(i) $\bigcap_{i=1}^{t} \ker(N(\beta_i)) = 0$.
(ii) Σj⁺Σj⁻(N) ≅ N.
(iii) dim Σj⁻(N) = sj(dim N).
(iv) N is not isomorphic to the simple representation Sj.
(v) Σj⁻(N) is indecomposable.

Exercise 11.18. If one of conditions (i) or (ii) of Exercise 11.16 does not hold then
the representation is decomposable, by Exercise 11.16. So assume now that both
(i) and (ii) hold. We only have to show that the hypotheses of Exercise 11.17 are
satisfied. By (i) we know

dimK ker(M(α1 )) + dimK ker(M(α2 )) ≤ dimK M(1) = 4

and by rank-nullity

4 − dimK ker(M(αi )) = dimK M(1) − dimK ker(M(αi )) = dimK im(M(αi )) ≤ 2

for i = 1, 2. Hence 2 ≤ dimK ker(M(αi )) for i = 1, 2. These force

2 = dimK ker(M(αi )) = dimK im(M(αi ))

and also M(1) = ker(M(α1 )) ⊕ ker(M(α2 )).


Exercise 11.19. (a) We observe that σ2 σ1 Q = Q, and the statements follow from
Theorem 11.25 and Proposition 11.37; note that the properties of Σj± hold not just
for Dynkin quivers, but also for the Kronecker quiver Q.
(b) Since M is not simple, it is not isomorphic to S1 and hence Σ1⁻(M) is not
zero. So Σ2⁻Σ1⁻(M) is zero precisely for Σ1⁻(M) = S2. Let (a, b) = dim M. The
dimension vector of Σ1⁻(M) is s1(dim M) = (−a + 2b, b) and this is equal to
(0, 1) precisely for (a, b) = (2, 1). Explicitly, we can take M to have M(1) = K²
and M(2) = K, and for x = (x1, x2) ∈ K² the maps are M(α1)(x) = x1 and
M(α2)(x) = x2, that is, the projections onto the first and second coordinate.
 
Exercise 11.20. (a) Recall from Exercise 10.9 that C has matrix $\begin{pmatrix}-1&2\\-2&3\end{pmatrix}$ with
respect to the standard basis of R². Then a1 = −a + 2b and b1 = −2a + 3b and
therefore we have a1 − b1 = a − b > 0. Furthermore, a − a1 = 2(a − b) > 0.
(b) This follows by iterating the argument in (a).
(c) This follows by applying Exercise 11.19.
(d) Clearly, either N = S1, or otherwise Σ2⁻(N) = S2. By Theorem 11.25 we can
apply (Σ1⁺Σ2⁺)^r to N and get M, that is, we see that such a representation exists.
Then

dim M = dim (Σ1⁺Σ2⁺)^r(N) = (s1s2)^r dim N

by Proposition 11.37. Note that s1s2 = (s2s1)⁻¹ = C⁻¹, and this is given by the
matrix $\begin{pmatrix}3&-2\\2&-1\end{pmatrix}$. Then one checks that the dimension vector of M is of the form
(a, a − 1); indeed, if N = S1 then dim M = (2r + 1, 2r), and if Σ2⁻(N) = S2 then
dim M = (2r + 2, 2r + 1).
Finally, we get uniqueness by using the argument as in Proposition 11.46.
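The dimension vectors in (d) can be checked numerically (a sketch of ours, with the two possible dim N read off from above):

```python
import numpy as np

Cinv = np.array([[3, -2], [2, -1]])   # the matrix of s1 s2 = C^{-1}
for r in range(6):
    Mr = np.linalg.matrix_power(Cinv, r)
    assert (Mr @ [1, 0] == [2 * r + 1, 2 * r]).all()
    assert (Mr @ [2, 1] == [2 * r + 2, 2 * r + 1]).all()
print("dim M is (2r+1, 2r) or (2r+2, 2r+1), as claimed")
```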
Exercise 11.22. (a) This is straightforward.
(b) Let ϕ : VM → VM be an A-module homomorphism such that ϕ² = ϕ;
according to Lemma 7.3 we must show that ϕ is the zero map or the identity. Write
ϕ as a matrix, in block form, as

$$T = \begin{pmatrix} T_1 & T' \\ T'' & T_2 \end{pmatrix}$$

so that Ti is the matrix of a linear transformation of M(i), and we may view T′ as a
linear map M(2) → M(1). This is an A-module endomorphism of VM if and only
if T commutes with x and y, equivalently, if and only if for i = 1, 2 we have
(i) T′M(αi) = 0 and M(αi)T′ = 0
(ii) T2M(αi) = M(αi)T1.
Assuming these, we claim that T′ = 0. Say M(α1) is surjective, then

T′(M(2)) = T′(M(α1)(M(1))) = T′M(α1)(M(1)) = 0

by (i), that is, T′ = 0. Recall that ϕ² = ϕ; since T′ = 0 we deduce from this that
T1² = T1 and T2² = T2. Let τ := (T1, T2); by (ii) this defines a homomorphism of
representations M → M and we have τ² = τ. We assume M is indecomposable,
so by Lemma 9.11 it follows that τ is zero or the identity. If τ = 0 then we see that
T² = 0. But we should have T² = T since ϕ² = ϕ and therefore T = 0 and ϕ = 0.
If τ = idM then T² = T implies that 2T″ = T″ and therefore T″ = 0 and then
ϕ = idVM. We have proved that VM is indecomposable.
(c) Assume ϕ : VM → VN is an isomorphism and write it as a matrix in block
form, as

$$T = \begin{pmatrix} T_1 & T' \\ T'' & T_2 \end{pmatrix}.$$

We write xM and yM for the matrices describing the action of X and Y on VM and
similarly xN and yN for the action on VN. Since ϕ is an A-module homomorphism
we have xN T = T xM and yN T = T yM. That is, the following two conditions
are satisfied for i = 1, 2:
(i) N(αi)T′ = 0 and T′M(αi) = 0
(ii) N(αi)T1 = T2M(αi).
Say M(α1) is surjective, then it follows by the argument as in part (b) that T′ = 0.
Using this, we see that since T is invertible, we must have that both T1 and T2
are invertible. Therefore τ = (T1, T2) gives an isomorphism of representations
τ : M → N. This proves part (c).
Exercise 11.24. (a) An A-module M can be viewed as a module of KQ, by inflation.
So we get from this a representation M of Q as usual. That is, M(i) = ei M and the
maps are given by multiplication with the arrows. Since βγ is zero in A we have
M(β) ◦ M(γ ) is zero, similarly for αγ and δγ .
(b) (i) Vertex 5 is a source, so we can apply Exercise 9.10, which shows that
M = X ⊕ Y where X(5) = ker(M(γ )), and X is isomorphic to a direct sum
of copies of the simple representation S5 .
(ii) Let U be the subrepresentation with U (5) = M(5) and U (3) = im(M(γ )),
and where U (γ ) = M(γ ). This is a subrepresentation of M since M(α), M(β)
and M(δ) map the image of M(γ ) to zero, by part (a). The dimension vector of this
subrepresentation is (0, 0, d, 0, d) since M(γ ) is injective. From the construction
it is clear that U decomposes as a direct sum of d copies of a representation with
dimension vector (0, 0, 1, 0, 1), and that this representation is indecomposable (it is
the extension by zero of an indecomposable representation for a quiver of type A2 ).
Now choose a subspace C of M(3) such that C ⊕ U (3) = M(3), and then we
have a subrepresentation V of M with V (i) = M(i) for i = 1, 2, 4 and V (3) = C,
V (5) = 0 and where V (ω) = M(ω) for ω = α, β, δ. Then M = U ⊕ V.
(c) Let M be an indecomposable A-module, and let M be the corresponding
indecomposable representation of Q, satisfying the relations as in (a).
Suppose first that M(5) = 0. Then M is the extension by zero of an
indecomposable representation of a quiver of Dynkin type D4 , so there are finitely
many of these, by Gabriel’s theorem.
Suppose now that M(5) ≠ 0. If M(γ) is not injective then M ≅ S5, by part
(b) (i) (and since M is indecomposable by assumption). So we can assume now that
M(γ ) is injective. Then by part (b) (ii) (and because M is indecomposable), M has
dimension vector (0, 0, 1, 0, 1), and hence is unique up to isomorphism.
In total we have finitely many indecomposable representations of Q satisfying
the relations in A, and hence we have finitely many indecomposable A-modules.
(d) Gabriel’s theorem is a result about path algebras of quivers; it does not make any
statement about representations of a quiver where the arrows satisfy relations.
Index

algebra, 2 direct sum, 37, 38


commutative, 2 of modules, 39
finite-dimensional, 2 of representations, 168
local, 135 division algebra, 4, 109
semisimple, 91 Dynkin diagram, 186, 258
alternating group, 126
annihilator, 44, 71, 97
Artin–Wedderburn decomposition, 120 endomorphism algebra, 106, 130, 134, 136
Artin–Wedderburn theorem, 111 Euclidean diagram, 187
extension by zero, 170

basic algebra, 262


bilinear form (associated to a graph), 188 factor algebra, 12
factor module, 39
Fitting’s lemma, 133, 134, 136
Cartan matrix, 258
centre, 80 Gabriel’s theorem, 203
commutator subgroup, 122 Gram matrix, 188
composition factor, 66 of Dynkin diagrams, 189
composition series, 63 group algebra, 6
equivalent, 66
length, 63
conjugacy class, 124 homomorphism, 13
Coxeter matrix, 198 of algebras, 13
Coxeter number, 201 of modules, 40
Coxeter transformation, 198 of representations, 165
in Dynkin type A, 198
in Dynkin type D, 201
ideal, 11
idempotent, 25, 135
dihedral group, 52, 58, 83, 123, 126 indecomposable, 129
dimension vector, 224 induced module (for group algebras), 268
of reflected representations, 225 isomorphism theorem, 16
direct product, 4, 78, 95 for algebras, 16
of modules, 39 for modules, 42


Jacobson radical, 96 indecomposable, 168


Jordan–Hölder theorem, 66 of a quiver, 53, 163
regular, 49
simple, 167
Kronecker quiver, 8, 25, 131, 174, 179 special, 219, 247, 250, 253
Krull–Schmidt theorem, 137 representation type, 143
of direct products, 149
length of a module, 69 of Euclidean quivers, 219, 223, 246
of factor algebras, 148
finite, 143
Maschke’s theorem, 119 of group algebras, 157
matrix representation, 47, 51 infinite, 143
matrix units, 3, 23 of quivers, 177
minimal polynomial, 18 of reflected quivers, 217
module, 29 of stretched quivers, 181
cyclic, 88, 145 of subquivers, 177
finite length, 69 of unions of subquivers, 177
indecomposable, 129 restriction, 169
natural, 31 ring, 1
semisimple, 85 commutative, 1
simple, 61 roots, 191
trivial, 31 of Dynkin diagrams, 195
Morita equivalence, 260 in Dynkin type A, 192
in Dynkin type D, 201
positive, 197
one-loop quiver, 14, 53 root system, 257
opposite algebra, 4, 26
Schur’s lemma, 79, 109
path, 7 semisimple, 85, 91
length, 7 short exact sequence, 161
trivial, 7 sink, 204
path algebra, 7 socle, 183
pull-back, 57 source, 204
push-out, 57 stretch (of a quiver), 172
subalgebra, 9
subgraph, 186
quadratic form, 191 submodule, 36
of Dynkin diagrams, 193 submodule correspondence, 45
positive definite, 193 subquiver, 169
quaternion group, 5, 127, 162, 285 full, 169
quaternions, 5, 19, 23 subrepresentation, 166
quiver, 6 symmetric group, 125, 126

reflection, 204 tensor product, 265


of a quiver, 204 three-subspace algebra, 10, 25
of a representation, 209 three-subspace quiver, 8
at a sink, 209 two-loop quiver, 72
at a source, 213
reflection map, 188
representation, 47 upper triangular matrices Tn (K), 9, 16, 57, 68,
of an algebra, 47 77, 139, 277
equivalent, 49
of a group, 51 Zorn’s lemma, 88
