
KING’S COLLEGE LONDON

Department of Mathematics

ASSOCIATIVE ALGEBRAS AND


SCHUR-WEYL DUALITY

Rodrigo Lope Prieto

under the supervision of


Professor Nicholas Shepherd-Barron FRS

A dissertation presented for the degree of Bachelor of Science

2019-2020

Abstract

This dissertation is divided into two parts. Part I mainly concerns associative algebras, focusing on finite-dimensional associative algebras over a ring R. The first chapters include a full proof of the Frobenius Theorem on real division algebras and of the Wedderburn-Artin Theorem on the structure of semisimple R-algebras. Later chapters discuss key results on Central Simple Algebras and the Brauer group Br(F) of a field F, introducing the concept of a splitting field for a Central Simple F-algebra.
Part II of this dissertation uses some of the ideas from Part I to relate finite-dimensional, irreducible representations of the Symmetric Group Sn and the general linear group GL(V), with V a complex vector space, using elements from combinatorics and Lie Theory. In particular, we prove the Schur-Weyl Duality Theorem for GL(V), and an equivalent statement for irreducible representations of the Lie algebra gl(V). The essay ends with a closer look at Schur functors.
Contents

I Associative Algebras

1 Generalities on Associative Algebras
  1.1 Endomorphism Algebras
  1.2 Quaternion Algebras

2 Modules
  2.1 Structure of Simple and Semisimple Modules
  2.2 The Radical

3 Wedderburn-Artin Theorem
  3.1 Simple and Semisimple Algebras
  3.2 Homomorphism Matrices
  3.3 The Proof of the Wedderburn-Artin Theorem

4 Indecomposable and Projective Modules
  4.1 Local Algebras
  4.2 The Krull-Schmidt Theorem
  4.3 Projective Modules
  4.4 Basic Algebras

5 Tensor Products
  5.1 Motivation and Construction
  5.2 Tensor Product of Modules
  5.3 Tensor Product of Algebras
  5.4 Tensor Products of Modules over Algebras

6 Central Simple Algebras
  6.1 The Density Theorem
  6.2 Central Simple Algebras
  6.3 Brauer Groups
  6.4 The Double Centraliser Theorem

7 Splitting Fields
  7.1 Maximal Subfields of Simple Algebras
  7.2 Splitting Fields
  7.3 Algebraic Splitting Fields
  7.4 Splitting Fields and Galois Extensions

II Schur-Weyl Duality

8 Schur-Weyl Duality
  8.1 Representations of Finite Groups
  8.2 Character Theory of Finite Groups
  8.3 Irreducible Representations of Sn
    8.3.1 The Group Sn
    8.3.2 Specht Modules
  8.4 Lie Groups and Lie Algebras
    8.4.1 Analytic Manifolds
    8.4.2 Representations of Lie Algebras
  8.5 Symmetric Polynomials
  8.6 Schur-Weyl Duality for GL(V)
  8.7 Schur Functors

9 Conclusions

Acknowledgements
Part I

Associative Algebras

Chapter 1

Generalities on Associative Algebras

Definition 1.0.1. Let R be a ring. An algebra over R, or R-algebra, is a right R-module A equipped with a bilinear product A × A → A, written (x, y) ↦ x · y, such that:
i) A contains an identity element 1_A with respect to the bilinear product.
ii) The product · is associative, i.e. x · (y · z) = (x · y) · z for all x, y, z ∈ A.
iii) The product · satisfies the right and left distributive laws.
iv) (x · y)a = x · (ya) = (xa) · y for all x, y ∈ A and for all a ∈ R.

Evidently there is an analogous definition when A happens to be a left R-module. Note that for the rest of this essay, we will omit the dot when denoting the product of two elements of an R-algebra A, i.e. x · y ≡ xy. From the definition, we see that A has a (unital) ring structure and an R-module structure. Hence, given a (unital) ring S, if we can define an R-module structure on S which satisfies condition iv), then S becomes an R-algebra, where the ring multiplication acts as the algebra bilinear product. Furthermore, if we pick R = F to be a field, then A is a ring and an F-vector space. An important type of algebra over a field is the following.

Definition 1.0.2. Let F be a field and D be an F-algebra. We say D is a division algebra if every nonzero element of D has an inverse with respect to the bilinear product on D.

In other words, for any given x ∈ D and any nonzero y ∈ D, there exists a unique r ∈ D such that x = yr and a unique s ∈ D such that x = sy. Therefore, an F-division algebra D has the structure of a division ring (a "skew field": all the field axioms hold except possibly commutativity of the product). Division algebras will be fundamental throughout the course of this dissertation.

Definition 1.0.3. Let R be a ring and A, B be R-algebras. An R-algebra homomorphism φ : A → B is a map which is concurrently an R-module homomorphism and a ring homomorphism. In other words:
i) φ(xa + yb) = φ(x)a + φ(y)b for all x, y ∈ A and for all a, b ∈ R.
ii) φ(xy) = φ(x)φ(y) for all x, y ∈ A.
iii) φ(1_A) = 1_B.

As usual, if φ is an R-module isomorphism and a ring isomorphism then φ is an R-algebra isomorphism, and we will write A ≅ B.
Again let A be an R-algebra. The ring structure of A allows us to define modules over A, and the definition of an A-module is exactly the same as the definition of a module over any other arbitrary ring. Moreover, if M is a right A-module, it also inherits the R-module structure of A, since for u ∈ M and a ∈ R we can define ua = u(1_A a). An analogous definition applies if M is a left A-module. Moreover, the concept of a bimodule is exactly the same for modules over algebras, i.e. if M is a left A-module and a right B-module such that for all u ∈ M, a ∈ R, x ∈ A and y ∈ B one has (xu)y = x(uy) and ua = au, then the concept of M being an (A, B)-bimodule is precisely the same as a ring bimodule.

Now if I ⊆ A is an ideal of A, then it automatically is an A-module and an R-submodule of A. If I is a two-sided ideal, then we can introduce the notion of a quotient algebra A/I, which is both a quotient ring of A and a quotient R-module. For instance, taking the integers Z as an algebra (and thus as a module) over itself, the ring Z/6Z is a perfectly fine Z-algebra. Note that all well-known facts about rings and modules apply to algebras. For example, Z/6Z ≅ Z/2Z ⊕ Z/3Z as Z-algebras (Chinese Remainder Theorem), or A/ker φ ≅ B if φ : A → B is a surjective algebra homomorphism. Another important tool which we will use during this dissertation is the following well-known fact.

Correspondence Theorem: Let A be a ring with unity and let I ⊆ A be a two-sided ideal of A. There is a one-to-one correspondence between the ideals J with I ⊆ J ⊆ A and the ideals of A/I. In particular, the ideals of A/I are of the form J/I.

To summarise, rather than an overcomplication of the concepts of ring and module, the algebra structure is a powerful tool which allows us to use properties of different algebraic structures at the same time. There are endless examples of associative algebras, though in this chapter we will exemplify two that are of vital relevance.
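The Z/6Z example mentioned above can be checked by brute force. The following sketch (an illustration added here, not part of the dissertation) verifies that x ↦ (x mod 2, x mod 3) is a bijection respecting both ring operations, which is exactly the Chinese Remainder isomorphism Z/6Z ≅ Z/2Z ⊕ Z/3Z.

```python
# Checking Z/6Z ≅ Z/2Z ⊕ Z/3Z element by element via x ↦ (x mod 2, x mod 3).

def crt_map(x):
    """The candidate isomorphism Z/6Z -> Z/2Z x Z/3Z."""
    return (x % 2, x % 3)

# The map is a bijection: all 6 pairs are hit exactly once.
bijective = len({crt_map(x) for x in range(6)}) == 6

# It respects addition, computed componentwise on the right-hand side.
additive = all(
    crt_map((x + y) % 6) == ((a + c) % 2, (b + d) % 3)
    for x in range(6) for y in range(6)
    for (a, b) in [crt_map(x)] for (c, d) in [crt_map(y)]
)

# It respects multiplication as well, so it is a ring isomorphism.
multiplicative = all(
    crt_map((x * y) % 6) == ((a * c) % 2, (b * d) % 3)
    for x in range(6) for y in range(6)
    for (a, b) in [crt_map(x)] for (c, d) in [crt_map(y)]
)
```

The same finite check works for any Z/mnZ with gcd(m, n) = 1.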

1.1 Endomorphism Algebras

Suppose R is any ring and A is an R-algebra. Let M, N be right A-modules. Recall that the set Hom_A(M, N) of A-module homomorphisms between M and N has an R-module structure, with internal sum and scalar multiplication defined as expected. Moreover, if M = N then composition of functions, (φψ)(u) = φ(ψ(u)), is an associative bilinear product on Hom_A(M, N). The set Hom_A(M, M) with function composition as a bilinear product is called the Endomorphism Algebra of the A-module M, which we will denote by End_A(M). Note that the action of End_A(M) on M defines a left End_A(M)-module structure on M, and therefore M can be considered as an (End_A(M), A)-bimodule: for φ ∈ End_A(M), x ∈ A and any u ∈ M we see that φ(ux) = φ(u)x since φ is an A-module homomorphism, and if a ∈ R we have au = (id_M a)(u) = (id_M(u))a = ua. The units of End_A(M) are by definition all isomorphisms M → M, which are called automorphisms; the group they form is denoted Aut_A(M). If V is an n-dimensional vector space over a field F, then the automorphism group of V is denoted GL(V), which is precisely the General Linear Group GL_n(F) once a basis of V is fixed. This algebraic structure will be of vital importance in Part II of this dissertation, when we discuss representations of GL(V) on the n-th tensor power V^⊗n. Nevertheless, for the rest of Part I our main result on endomorphism algebras is the following.

Proposition 1.1.1. Let A be an R-algebra. Then A_A ≅ End A_A, where A_A denotes the algebra A as a right module over itself, and therefore as an algebra over itself.

Proof. Let L : A_A → End A_A be defined as L(a) = λ_a, where λ_a(x) = a·x for all x ∈ A. We first check that indeed λ_a ∈ End A_A:

λ_a(x + y) = a(x + y) = a·x + a·y = λ_a(x) + λ_a(y)   for all x, y ∈ A_A,
λ_a(x·b) = a·(x·b) = (ax)·b = λ_a(x)·b   for all x ∈ A_A and all b ∈ A.     (1.1)

So indeed λ_a ∈ End A_A. Next, we examine whether L itself is a homomorphism:

L(a + b)(x) = λ_{a+b}(x) = (a + b)·x = a·x + b·x = λ_a(x) + λ_b(x),
L(a·b)(x) = λ_{ab}(x) = (a·b)x = a(b·x) = λ_a(λ_b(x)) = (L(a)L(b))(x).     (1.2)

Lastly, we prove L is bijective, and hence an isomorphism. Let a, a′ ∈ A_A and suppose L(a) = L(a′). That means λ_a = λ_{a′}, and so λ_a(1) = λ_{a′}(1), implying a = a′.

Now suppose φ ∈ End A_A and set a = φ(1). Then L(a)(x) = λ_a(x) = a·x = φ(1)·x = φ(1·x) = φ(x). Therefore L is surjective, and hence an isomorphism.

1.2 Quaternion Algebras

Definition 1.2.1. Let F be a field and a, b ∈ F. Let A be the four-dimensional F-vector space with basis {1, i, j, k} subject to the relations

i² = a,   j² = b,   ij = −ji = k,

where elements of A are of the form α + βi + γj + δk with α, β, γ, δ ∈ F. We call A a quaternion algebra, and denote it A = (a,b / F).

The discovery of quaternion algebras was a major turning point in the history of Algebra, and in the history of Mathematics in general. It was on the 16th of October of 1843 that Sir William Rowan Hamilton came up with the idea of the so-called Hamilton's Quaternions H = (−1,−1 / R) ≅ R ⊕ iR ⊕ jR ⊕ kR, and famously used his knife to carve his formula into the stone of Broom Bridge (Brougham Bridge) in Dublin. Hamilton's discovery marked the beginning of noncommutative algebra, even preceding the concept of matrices, introduced by Arthur Cayley in 1855. As we will prove later, Hamilton's Quaternions are, alongside C and R itself, the only associative R-division algebras up to isomorphism. In fact, those three together with the octonions are the only real division algebras up to isomorphism, with the octonions being nonassociative. The concept of octonions is deeply connected to that of quaternions, with the octonions being 8-dimensional. For a deep insight into octonions see [CS03].

Example 1.2.1. Let F be any field and consider the quaternion algebra (a,1 / F). The map

       ( 1 0 )        ( 0 1 )        ( 1  0 )        ( 0 -1 )
  1 ↦ ( 0 1 ),  i ↦ ( a 0 ),  j ↦ ( 0 -1 ),  k ↦ ( a  0 )

defines an isomorphism (a,1 / F) ≅ M₂(F). In particular, taking F = R we see that up to isomorphism the only R-quaternion algebras are H and M₂(R). Similarly, the map

       ( 1 0 )        ( 0 1 )        ( b  0 )        ( 0 -b )
  1 ↦ ( 0 1 ),  i ↦ ( a 0 ),  j ↦ ( 0 -b ),  k ↦ ( ab  0 )

defines an isomorphism (a,b² / F) ≅ M₂(F). So in particular, taking F = C, where every scalar is a square, the only C-quaternion algebra is M₂(C).
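As a quick sanity check (an illustration added here, not part of the original text), the following sketch verifies that the first matrix assignment really satisfies the defining relations of (a,1 / F), for the concrete choice a = −1 over the integers; the minus signs in the matrices are the ones that make the relations work out.

```python
# Verifying the relations i² = a·1, j² = 1, ij = k, ji = -k
# for the candidate isomorphism (a,1 / F) → M₂(F), with a = -1.

def mat_mul(A, B):
    """2x2 integer matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a = -1
I2  = [[1, 0], [0, 1]]
i_m = [[0, 1], [a, 0]]      # image of i
j_m = [[1, 0], [0, -1]]     # image of j
k_m = [[0, -1], [a, 0]]     # image of k = ij

aI    = [[a, 0], [0, a]]
neg_k = [[0, 1], [-a, 0]]

rel_i  = mat_mul(i_m, i_m) == aI     # i² = a
rel_j  = mat_mul(j_m, j_m) == I2     # j² = 1
rel_ij = mat_mul(i_m, j_m) == k_m    # ij = k
rel_ji = mat_mul(j_m, i_m) == neg_k  # ji = -k
```

The same check with i, j, k replaced by the second set of matrices verifies the relations of (a,b² / F).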

Lemma 1.2.1. Let a, b ∈ F be nonzero. The quaternion algebra (a,b / F) is simple and its centre is F.

Remark. We say an algebra is simple if it has no two-sided ideals other than 0 and the algebra itself. We will eventually discuss simple algebras in more depth, but for now this definition is enough.

Theorem 1.2.2 (Frobenius). The only associative R-division algebras up to isomorphism are R, C and Hamilton's Quaternions H.

Proof. Let D be a real division algebra and assume dim_R D ≥ 2, since otherwise we already get D ≅ R. Note that for any α ∈ D ∖ R, we have R[α] ≅ C, since R[α] is a proper finite field extension of R.

Now fix a copy C ↪ D and view D as a complex vector space. Consider the following C-subspaces of D:

D⁺ = {d ∈ D : di = id},   D⁻ = {d ∈ D : di = −id},

where i = √−1 as usual. We have D⁺ ∩ D⁻ = {0}. We claim that indeed D⁺ ⊕ D⁻ = D. Let a ∈ D and consider the following two elements:

d₊ := ia + ai ∈ D⁺,   d₋ := ia − ai ∈ D⁻.

We have d₊ + d₋ = 2ia and therefore

a = (2i)⁻¹(d₊ + d₋) ∈ D⁺ + D⁻.

So indeed D = D⁺ ⊕ D⁻. Now consider any λ₊ ∈ D⁺. Then C[λ₊] is a finite (algebraic) field extension of C, hence equal to C; therefore D⁺ = C. If D⁻ ≠ 0, then fix a nonzero z ∈ D⁻ and consider the C-linear map ξ : D⁻ → D⁺ defined by x ↦ xz for all x ∈ D⁻. This map is injective, and thus dim_C D⁺ = dim_C D⁻ = 1. Therefore, dim_R D = 4.

Now z is algebraic over R, so z² ∈ R ⊕ zR. But also z² = ξ(z) ∈ D⁺ = C, and hence

z² ∈ C ∩ (R + zR) = R.

So z² is a real number. Now if z² > 0 in the real numbers, then z² = r² for some nonzero r ∈ R. But then this leads to z = ±r ∈ R, which is a contradiction since z ∉ D⁺ ⊇ R. Thus z² < 0 in R, i.e. z² = −r² for some nonzero r ∈ R. Thus, letting j = z/r we get j² = i² = −1 and ji = −ij. Hence, we have a decomposition

D ≅ C ⊕ Cj ≅ R ⊕ Ri ⊕ Rj ⊕ Rij,

which is equivalent to D being isomorphic to Hamilton's Quaternions H.
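The relations extracted in this proof can be witnessed concretely. The sketch below (added for illustration; the particular matrices are the standard 2×2 complex model of H, which the text does not spell out) checks that i, j, k = ij satisfy i² = j² = k² = −1 and ij = −ji = k.

```python
# Hamilton's quaternions inside M₂(C): i ↦ diag(i, -i), j ↦ [[0,1],[-1,0]], k = ij.

def mul(A, B):
    """2x2 complex matrix product, on tuples of tuples."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

neg_one = ((-1, 0), (0, -1))       # the matrix -1
i_q = ((1j, 0), (0, -1j))          # image of i
j_q = ((0, 1), (-1, 0))            # image of j
k_q = ((0, 1j), (1j, 0))           # image of k

# i² = j² = k² = -1, exactly the quaternion relations used in the proof.
squares_ok = mul(i_q, i_q) == mul(j_q, j_q) == mul(k_q, k_q) == neg_one

# ij = k and ji = -k, i.e. j anticommutes with i as an element of D⁻.
ij_ok = mul(i_q, j_q) == k_q
ji_ok = mul(j_q, i_q) == mul(neg_one, k_q)
```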


Chapter 2

Modules

Let A be an R-algebra. Recall that A has a ring structure, and thus we can consider A itself as both a left and a right A-module, and therefore as an algebra over itself. We will use the notation A_A to denote A as a right A-module and as an algebra over itself, as we did in the previous chapter. Analogously, we write _A A for A considered as a left A-module.

Let A and B be R-algebras and let φ : A → B be an R-algebra homomorphism. Let M be a right B-module. We can see M as a right A-module by setting the scalar operation to be ux = uφ(x) for all x ∈ A. Similarly, if B is a quotient algebra B = A/I for some ideal I ⊆ A and π : A → B is the natural projection, we can see any B-module M as an A-module by defining ux = u(x + I) = uπ(x) for any u ∈ M and x ∈ A.

2.1 Structure of Simple and Semisimple Modules

Definition 2.1.1. An R-module M is said to be Noetherian if its submodules satisfy the ascending chain
condition. On the other hand, M is said to be Artinian if the submodules of M satisfy the descending chain
condition.

Remark. Equivalently, a module M is Noetherian if all its submodules are finitely generated.

Definition 2.1.2. A nonzero (right or left) module N is called simple if its only submodules are 0 and N itself. A module M is semisimple if it can be decomposed into a direct sum of simple modules.

From this definition, it easily follows that a right ideal I of an R-algebra A is a simple A-module if and only if it is a minimal right ideal. Moreover, by the Correspondence Theorem, if A_A/I is a simple module, then I is a maximal right ideal, and conversely if I is a maximal right ideal then A_A/I is a simple right A-module. In particular:

Proposition 2.1.1. Let N be a nonzero right A-module. The following are equivalent:
i) N is simple.
ii) For all nonzero n ∈ N, we have nA = N.
iii) N ≅ A_A/M for some maximal right ideal M of A.

Proof. We firstly show that i) ⟺ ii). Let N be simple and n ∈ N be a nonzero element. Then nA is a nonzero submodule of N (note n = n·1_A ∈ nA). But since N is simple, we must have nA = N. Conversely, if N is nonzero and ii) holds, then any nonzero submodule of N contains some nonzero n, hence contains nA = N; so N itself is the only nonzero submodule of N.

Now iii) ⟹ i) is a direct consequence of the Correspondence Theorem. Thus, to finish the proof, it suffices to show that ii) ⟹ iii). Let n ∈ N be nonzero. Assuming ii), the mapping x ↦ nx is a surjective module homomorphism A_A → N whose kernel, say M, is a right ideal of A. But ii) and i) are equivalent statements, so A_A/M ≅ N is a simple A-module. Therefore, again using the Correspondence Theorem, M is a maximal right ideal of A.

Lemma 2.1.2 (Schur's Lemma). Let M and N be A-modules, and let φ : M → N be a nonzero homomorphism. Then:
i) If M is simple, φ is injective.
ii) If N is simple, φ is surjective.

Proof. If φ : M → N is a homomorphism, then ker φ ⊆ M is a submodule of M. Therefore, if M is simple, then either ker φ = {0} or ker φ = M, but since by assumption φ is nonzero, ker φ must be zero; equivalently, φ is injective.

Similarly, since Im φ ⊆ N is a submodule, and by assumption N is simple and φ is nonzero, we get Im φ = N, so φ is surjective.

As a consequence of Schur's Lemma, if M and N are simple modules, then either Hom(M, N) = 0 or M ≅ N. Another consequence of Schur's Lemma is the following Corollary, which will be used numerous times during the next few chapters.

Corollary 2.1.2.1. Let N be a simple A-module. Then End_A(N) is a division algebra.

Proof. By Schur's Lemma, any nonzero f ∈ End_A(N) must be injective and surjective. Therefore, every nonzero endomorphism of N is an isomorphism, or equivalently, any nonzero endomorphism of N has a multiplicative inverse.
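For a toy illustration of this corollary (added here; it is not part of the text), take the simple Z-module Z/5Z: its Z-module endomorphisms are exactly the maps x ↦ cx mod 5, and the sketch below checks that every nonzero one is invertible, so End_Z(Z/5Z) is a division algebra (in fact the field Z/5Z).

```python
# Schur's corollary for the simple Z-module Z/5Z: every nonzero
# endomorphism is an isomorphism.

n = 5
# Z-module endomorphisms of Z/nZ are exactly multiplication maps x -> c*x mod n.
endos = [lambda x, c=c: (c * x) % n for c in range(n)]

def is_zero(f):
    return all(f(x) == 0 for x in range(n))

def is_bijective(f):
    return len({f(x) for x in range(n)}) == n

# Each endomorphism is either zero or invertible: no other possibility.
schur_holds = all(is_zero(f) or is_bijective(f) for f in endos)
```

Replacing 5 by a composite number such as 6 breaks the conclusion, precisely because Z/6Z is not a simple Z-module.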

Proposition 2.1.3. Let M be a right A-module. The following are equivalent:
i) M is semisimple, with M = ⊕_{i∈J} N_i and each N_i simple.
ii) M = Σ{N ⊆ M : N is simple}, the sum of all simple submodules of M.
iii) Every submodule of M has a complement (within the submodule lattice of M).
iv) If N is a simple right A-module and φ : N → M a nonzero homomorphism, then there is a j ∈ J such that M = φ(N) ⊕ (⊕_{i≠j} N_i).

Now let {N_i : i ∈ J} be a set of representatives of all the isomorphism classes of simple A-modules, with J being a nonempty indexing set. We will write M(i) to denote the submodule of M defined by

M(i) = Σ{N ⊆ M : N ≅ N_i},

where N_i is one of the representatives in {N_i : i ∈ J}. The most interesting characteristic of this construction is the following Proposition.
Proposition 2.1.4. Let M be a semisimple right A-module. Then M = ⊕_i M(i).

Proof. By assumption, M is semisimple, i.e. M = ⊕_i M_i with each M_i = ⊕_j N_ij and the N_ij ≅ N_i. Thus, M_i ⊆ M(i).

Conversely, let N ⊆ M be a submodule with N ≅ N_i. We can decompose M as M = M_i ⊕ M_i′, where M_i′ = ⊕_{j≠i} M_j. Let π : M → M_i′ be the usual projection. Composing π with the projections onto the simple summands of M_i′ gives homomorphisms N → N_jk with j ≠ i; by Schur's Lemma each of these is zero, since N_jk ≅ N_j ≇ N_i ≅ N. Hence π(N) = 0, i.e. N ⊆ M_i. But N was an arbitrary submodule of M with N ≅ N_i, and M(i) is the sum of all such submodules. Thus, M(i) ⊆ M_i.

Lemma 2.1.5. Let M and M′ be semisimple right A-modules. If φ : M → M′ is a homomorphism, then φ(M(i)) ⊆ M′(i) for all i ∈ J.

Proof. Let N ⊆ M be a simple submodule such that N ≅ N_i. Let φ be a homomorphism and φ̃ the restriction φ̃ : N → φ(N) defined by φ̃(n) = φ(n) for all n ∈ N. This homomorphism is clearly surjective, and since N is simple, by Schur's Lemma either φ(N) = 0 or φ̃ is injective.

If φ(N) ≠ 0 then φ(N) ≅ N ≅ N_i. Moreover, φ(N) ⊆ M′ and φ(N) ≅ N_i, so we must have φ(N) ⊆ M′(i). But since this is true for all simple N ⊆ M(i), and such N generate M(i), we get φ(M(i)) ⊆ M′(i).

If φ = 0 then the Lemma is also true, since by definition {0} ⊆ M′(i) (and in general {0} is a submodule of any module).

Roughly speaking, M(i) is a sum of copies of N_i; each individual φ(N) is isomorphic to N_i or equal to zero, and so the image φ(M(i)) is again a sum of copies of N_i, hence contained in M′(i).

We are more interested in the following corollary than in the Lemma itself.

Corollary 2.1.5.1. Let M₁ = ⊕ⁿN₁ and M₂ = ⊕ʳN₂ be direct sums of copies of simple modules N₁ and N₂. If N₁ ≇ N₂, then Hom_A(M₁, M₂) = 0.

Proof. Consider again the class of representatives {N_i : i ∈ J}. Then there exist i₀, j₀ ∈ J such that N₁ ≅ N_{i₀} and N₂ ≅ N_{j₀}.

Suppose φ : M₁ → M₂ is a module homomorphism. Then we have φ(M₁) = φ(M₁(i₀)) ⊆ M₂(i₀) by the Lemma, since by construction M₁(i₀) is M₁ itself. However, also by construction, M₂ = ⊕ʳN_{j₀} with N_{i₀} ≇ N_{j₀}. Hence M₂(i₀) = 0 and therefore φ(M₁) = 0.

2.2 The Radical


Definition 2.2.1. Let M be an A-module. The radical of M is rad(M) = ∩{N ⊆ M : M/N is simple}.

The following are basic facts about the radical, which can be easily deduced using the Correspondence Theorem.

Proposition 2.2.1. Let A be an R-algebra and M be an A-module.
i) rad(M) is a submodule of M.
ii) If N ⊆ M is a submodule of M and rad(M/N) = 0, then N contains rad(M).

Lemma 2.2.2. If M is a semisimple A-module, then rad(M) = 0.

Proof. Let M be decomposed as ⊕_{i∈I} N_i with all the N_i simple. Write O_j = Σ_{i≠j} N_i. Then M/O_j ≅ N_j is simple, and thus rad(M) ⊆ ∩_{j∈I} O_j = 0.

Definition 2.2.2. The Jacobson Radical J(A) = rad(A_A) of the R-algebra A is defined as

J(A) = ∩{M_i ⊆ A_A : A/M_i is simple} = ∩{I ⊆ A_A : I is a maximal right ideal}.

Proposition 2.2.3. The Jacobson radical J(A) is a two-sided ideal of A, and can be equivalently described as follows:
i) J(A) = ∩{maximal right ideals of A}.
ii) J(A) = ∩{maximal left ideals of A}.
iii) J(A) = {x ∈ A : 1 + xy ∈ A^× for all y ∈ A}.
iv) J(A) = {x ∈ A : 1 + yx ∈ A^× for all y ∈ A}.
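The characterisation in iii) can be tested on a small example (an illustration added here, not part of the text): in the Z-algebra Z/8Z the units are the odd classes, and the set {x : 1 + xy is a unit for all y} comes out to be exactly the unique maximal ideal 2Z/8Z.

```python
# Computing J(Z/8Z) from characterisation iii) and comparing it with 2Z/8Z.

n = 8
# Units of Z/8Z: elements with a multiplicative inverse mod 8.
units = {x for x in range(n) if any((x * y) % n == 1 for y in range(n))}

# Characterisation iii): x ∈ J(A) iff 1 + xy is a unit for every y.
radical = {x for x in range(n)
           if all((1 + x * y) % n in units for y in range(n))}

# The unique maximal ideal of Z/8Z is 2Z/8Z = {0, 2, 4, 6}.
two_Z = {x for x in range(n) if x % 2 == 0}
```

This also illustrates Corollary 2.2.3.2, since every element of 2Z/8Z is nilpotent.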

Corollary 2.2.3.1. If M is a (right or left) ideal of A such that 1 + x ∈ A^× for all x ∈ M, then M ⊆ J(A), with equality holding if rad(A/M) = 0.

Proof. For x ∈ M and y ∈ A we have xy ∈ M, so the assumption gives 1 + xy ∈ A^× for all y; the proposition then yields M ⊆ J(A). Furthermore, by Proposition 2.2.1, if rad(A/M) = 0, then we also have the inclusion J(A) ⊆ M.

Corollary 2.2.3.2. Let M be a (right or left) ideal of A in which every element is nilpotent. Then M ⊆ J(A).
Chapter 3

Wedderburn-Artin Theorem

3.1 Simple and Semisimple Algebras

Definition 3.1.1. An R-algebra A is called semisimple if it is semisimple as a right A-module. An R-algebra


B is called simple if the only two sided ideals of B are 0 and B itself.

L
More precisely, A is semisimple if A = i2I Ni where J is some index set and the Ni are simple right
A-modules, and therefore minimal right ideals. Moreover, if A is semisimple, then any right A-module is
semisimple

Proposition 3.1.1. Let A be a semisimple algebra and A₁ ⊕ ··· ⊕ A_r its decomposition into simple right A-modules. Then, every simple A-module is isomorphic to one of the A_i.

Proof. Let N be a simple right A-module, and let n ∈ N be nonzero. By Prop 2.1.1 we have nA = N. Now consider the surjective homomorphism φ : A → N defined by a ↦ na, and its restrictions φ_i : A_i → N. Since φ is nonzero, there exists an i such that φ_i is nonzero, and by Schur's Lemma (both A_i and N are simple) this φ_i is an isomorphism.

Corollary 3.1.1.1. If A is simple, then all simple right A-modules are isomorphic.

Remark. As we have stated before, there is no distinction between minimal right ideals and simple submodules of A_A.

3.2 Homomorphism Matrices

Let us introduce some more abstract notation: let A be an R-algebra and let (M₁, ..., Mₙ) be a sequence of right A-modules. Then we denote by [Hom_A(M_j, M_i)] the set of all n × n matrices [φ_ij] whose (i, j) entry is a homomorphism φ_ij ∈ Hom_A(M_j, M_i). This set has an R-algebra structure, with addition, scalar multiplication and matrix multiplication defined in the usual way, using composition as the product of entries.

Proposition 3.2.1. [Hom_A(M_j, M_i)] is isomorphic to End_A(M₁ ⊕ ··· ⊕ Mₙ).

Proof. Denote M = ⊕_{i=1}^n M_i, and let π_j : M → M_j and ι_j : M_j → M be the projection and injection homomorphisms. Then

Σ_{j=1}^n ι_j π_j = ι₁π₁ + ··· + ιₙπₙ = id_M,     (3.1)

π_i ι_j = 0 (i ≠ j),   π_j ι_j = id_{M_j}.     (3.2)

Define α : End_A(M) → [Hom_A(M_j, M_i)] and β : [Hom_A(M_j, M_i)] → End_A(M) as follows:

α(φ) = [π_i φ ι_j],   β([φ_ij]) = Σ_{i,j=1}^n ι_i φ_ij π_j.

We aim to show that α is an invertible homomorphism by showing β = α⁻¹.

α(φ + ψ) = [π_i (φ + ψ) ι_j] = [π_i φ ι_j + π_i ψ ι_j] = [π_i φ ι_j] + [π_i ψ ι_j] = α(φ) + α(ψ),     (3.3)

α(φψ) = [π_i φψ ι_j] = [π_i φ (Σ_{k=1}^n ι_k π_k) ψ ι_j] = [Σ_{k=1}^n (π_i φ ι_k)(π_k ψ ι_j)] = α(φ)α(ψ).     (3.4)

A similar calculation for scalar multiplication shows that α is a homomorphism. Now let us show it is invertible:

βα(φ) = β([π_i φ ι_j]) = Σ_{i,j=1}^n ι_i π_i φ ι_j π_j = (Σ_i ι_i π_i) φ (Σ_j ι_j π_j) = id_M φ id_M = φ,     (3.5)

and a symmetric computation using (3.2) gives αβ = id. Therefore α is an isomorphism, and so End_A(⊕_{i=1}^n M_i) ≅ [Hom_A(M_j, M_i)].

We shall not use this proposition directly to prove the Wedderburn-Artin theorem. However, we do need two of its corollaries.

Corollary 3.2.1.1. There is an isomorphism End_A(⊕ⁿM) ≅ Mₙ(End_A(M)), where ⊕ⁿM denotes the direct sum of n copies of M.

This follows directly from the proposition by taking M_i = M for all i. The following is slightly less trivial:

Corollary 3.2.1.2. Let A be an R-algebra and M = Aⁿ the free A-module of rank n. Then End_A(M) ≅ Mₙ(A).

Proof. By construction of M, we must have End_A(M) ≅ End_A(⊕ⁿA_A). But then:

End_A(M) ≅ End_A(⊕ⁿA_A)
         ≅ Mₙ(End_A(A_A))   by Corollary 3.2.1.1
         ≅ Mₙ(A)            by Proposition 1.1.1.

Finally, we show a crucial result for the proof of our main theorem.

Corollary 3.2.1.3. If A is an R-algebra and M₁, ..., Mₙ are (right) A-modules such that Hom_A(M_i, M_j) = 0 if i ≠ j, then End_A(⊕_{i=1}^n M_i) ≅ End_A(M₁) ⊕ ··· ⊕ End_A(Mₙ).

Proof. By the proposition, [Hom_A(M_j, M_i)] ≅ End_A(M₁ ⊕ ··· ⊕ Mₙ). But since Hom_A(M_i, M_j) = 0 whenever i ≠ j, the homomorphism matrix is diagonal, with (i, i) entry End_A(M_i) and all off-diagonal entries zero; hence

[Hom_A(M_j, M_i)] ≅ ⊕_{i=1}^n End_A(M_i).

3.3 The Proof of the Wedderburn-Artin Theorem

We only need one more thing to be ready to prove our main result.

Proposition 3.3.1. Let D be a division algebra and n ∈ N. Then Mₙ(D) is a semisimple algebra.

Proof. Let M_j be the Mₙ(D)-module of column vectors, that is, matrices which are zero everywhere except possibly in the j-th column. Clearly, M_j ≅ Dⁿ, and there is an isomorphism Mₙ(D) → ⊕_{j=1}^n M_j sending a matrix (a_ij) to the n-tuple of its columns. Moreover, each M_j is a simple Mₙ(D)-module: since every nonzero element of D has a multiplicative inverse, any nonzero column vector generates all of M_j under left multiplication by matrices. Hence Mₙ(D) is a direct sum of simple modules, making it semisimple.

Remark. Column vectors form a left Mₙ(D)-module. The same proof holds taking the right Mₙ(D)-module of row vectors.
3.3. THE PROOF OF WEDDERBURN-ARTIN THEOREM 17

Theorem 3.3.2 (Wedderburn-Artin). Let A be a semisimple R-algebra. Then, 9 n1 , . . . , nr 2 N and division


algebras D1 , . . . ,Dr such that
A⇠
= Mn1 (D1 ) ··· Mnr (Dr )

Conversely, if n1 , . . . , nr 2 N and D1 , . . . ,Dr are division algebras, then A ⇠


= Mn1 (D1 ) ··· Mnr (Dr ) ir
a (right or left) semisimple R-algebra.

Proof. Firstly, by definition since A is semisimple we must have

AA ⇠
= M1 ··· Mr
L
where each Mi is a semisimple right A-module. We can rearrange the direct summands so that Mi = ni N i

with each Ni a simple right AA -module (and hence a minimal right ideal of A) such that Ni =
6 Nj if i 6= j.
Therefore, by Corollary 2.1.5.1.. we must have Hom(Mi , Mj ) = 0. Thus:

A⇠
= End(AA ) by P roposition 1.1.1.

= EndA (M1 ··· Mr ) by construction of AA
r
M

= EndA (Mi ) by Corollary 3.2.1.3.
i=1
Mr ⇣M ⌘

= EndA ni N i by construction of Mi
i=1
Mr

= Mni (EndA (Ni )) by Corollary 3.2.1.1.
i=1

By Corollary 2.1.2.1. since each Ni is simple, the R-algebra EndA (Ni ) is a division algebra. Hence, setting
Di = EndA (Ni ) we get

AA ⇠
= Mn1 (D1 ) ··· Mnr (Dr )

Conversely, by Proposition 3.3.1. the algebra of n ⇥ n matrices over a division algebra is semisimple for all
n 2 N. Hence for n1 , . . . , nr 2 N and D1 , . . . ,Dr division algebras, the direct sum Mn1 (D1 ) ··· Mnr (Dr ) is
semisimple.

Remark. In fact, the pairs (n1 , D1 ), · · · , (nr , Dr ) are uniquely determined by A. An more detailed explanation
of this fact can be found in Section 1.3 of [Lam13].
Chapter 4

Indecomposable and Projective Modules

4.1 Local algebras

Definition 4.1.1. Let A be an R-algebra and let N be an A-module. We say N is indecomposable if N ≠ 0 and the only nonzero direct summand of N is N itself. We say N is decomposable if N = M_1 ⊕ M_2, where M_1, M_2 are nonzero A-modules.

Definition 4.1.2. An R-algebra A is local if A/J(A) is a division algebra.

Now we formulate an equivalent, useful definition of a local algebra.

Proposition 4.1.1. An R-algebra A is local if and only if it has a unique maximal ideal.

Proof. If A is local, then every nonzero x ∈ A/J(A) is a unit. Therefore A/J(A) is simple, and hence by the Correspondence Theorem there are no proper ideals of A containing J(A) besides J(A) itself; thus J(A) must be the only maximal ideal of A.

Proposition 4.1.2. The following statements are equivalent:

i) A is local.
ii) A ∖ A^× ⊆ J(A).
iii) A ∖ A^× is closed under addition.

Proof. (i ⇒ ii): Let A/J(A) be a division algebra. Then, for any x ∈ A ∖ J(A) and its projection x + J(A) ∈ A/J(A), we can find an element y ∈ A such that xy + J(A) = 1 + J(A). Hence xy − 1 ∈ J(A), and by Proposition 2.2.3 we find 1 + (xy − 1) = xy ∈ A^×, so x has a right inverse. Using the analogous left-sided argument, y is also a left inverse of x. Therefore x ∈ A^×, which shows A ∖ A^× ⊆ J(A).

(ii ⇒ iii): Now assume A ∖ A^× ⊆ J(A). Since J(A) is a proper ideal of A, no unit belongs to J(A), so in fact A ∖ A^× = J(A). But this means A ∖ A^× is an ideal of A, and thus it is closed under addition.

(iii ⇒ i): Lastly, let A ∖ A^× be closed under addition and suppose x ∈ A ∖ J(A). By the characterisation of J(A), there exist y, z ∈ A such that 1 + xy, 1 + zx ∉ A^×. Hence xy, zx ∈ A^×, for otherwise 1 = (1 + xy) − xy would exhibit 1 as a sum of two non-units, giving 1 ∈ A ∖ A^×, since A ∖ A^× is closed under addition by assumption. Hence x has a left and a right inverse, so x ∈ A^×, giving A ∖ J(A) ⊆ A^×. Therefore every a ∈ A with a ∉ J(A) has a two-sided inverse, so when we construct A/J(A) all non-invertible elements lie in the zero class 0 + J(A) and every other class a + J(A) is invertible; that is, A/J(A) is a division algebra.

Corollary 4.1.2.1. If every non-unit of A is nilpotent, then A is a local algebra.

Proof. Let 0 ≠ x ∈ A ∖ A^×. Then x^k = 0 for some minimal k ∈ ℕ. Moreover, for any y ∈ A we must have xy ∈ A ∖ A^×, since otherwise xy ∈ A^× and x^{k−1}(xy) = x^k y = 0 would imply x^{k−1} = 0, contradicting the minimality of k. Therefore every element of xA is nilpotent, and so x ∈ xA ⊆ J(A) by Corollary 2.2.3.2. This implies A ∖ A^× ⊆ J(A), so by Proposition 4.1.2, A is a local algebra.
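To make the corollary concrete, consider the quotient algebra F_2[x]/(x^3) (an illustrative example of my own, not from the text): the non-units are exactly the elements with zero constant term, each of which cubes to zero, so the algebra is local with maximal ideal (x). A brute-force check in Python:

```python
from itertools import product

# Work in F_2[x]/(x^3): elements are coefficient triples (c0, c1, c2) mod 2.
def mul(a, b):
    c = [0, 0, 0]
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < 3:  # x^3 = 0 in the quotient
                c[i + j] = (c[i + j] + ai * bj) % 2
    return tuple(c)

elements = list(product((0, 1), repeat=3))
one = (1, 0, 0)
units = {a for a in elements if any(mul(a, b) == one for b in elements)}
nonunits = [a for a in elements if a not in units]

# Every non-unit has zero constant term, and each is nilpotent: its cube is 0.
assert all(a[0] == 0 for a in nonunits)
assert all(mul(mul(a, a), a) == (0, 0, 0) for a in nonunits)
```

Since every non-unit is nilpotent, Corollary 4.1.2.1 applies and the algebra is local, its unique maximal ideal being the set of the four non-units above.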

Corollary 4.1.2.2. Let N be an A-module such that EndA (N ) is a local algebra. Then, N is indecomposable.

Proof. If End_A(N) is local, then id_N ≠ 0, so N ≠ 0. Suppose N = P ⊕ Q and consider the projections π_1 : N → P and π_2 : N → Q. Then π_1 + π_2 : N → N is just id_N. But End_A(N) ∖ Aut_A(N) is closed under addition by Proposition 4.1.2, and π_1 + π_2 = id_N is a unit, so either π_1 or π_2 is a unit (and hence an isomorphism). But then N ≅ P or N ≅ Q, implying that the other summand is zero. Hence N is indecomposable.

4.2 The Krull-Schmidt Theorem

Main Result. If M is a right A-module that is both Artinian and Noetherian, then M can be expressed as a finite direct sum of indecomposable A-modules, and this decomposition is unique up to isomorphism and reordering.

To prove this statement we first need to state and prove several useful results and algebraic tools.

Proposition 4.2.1. Let M be an A-module that is either Artinian or Noetherian. Then M can be written as a finite direct sum of indecomposable modules.

Proof. If M = 0 the claim holds by convention (the empty direct sum). Now let M ≠ 0. If M is already indecomposable, the proof finishes here. If not, then M has at least one indecomposable summand, say N_1 (see the remark below). In that case, M = N_1 ⊕ M_1. Again, if M_1 is already indecomposable the proof finishes here. If not, repeat the process to find M = N_1 ⊕ (N_2 ⊕ M_2), and so on:

M = N_1 ⊕ M_1 = N_1 ⊕ (N_2 ⊕ M_2) = N_1 ⊕ N_2 ⊕ (N_3 ⊕ M_3) = ⋯   (4.1)

where each N_i is indecomposable, and

M_1 ⊃ M_2 ⊃ M_3 ⊃ ⋯   (4.2)
N_1 ⊂ N_1 ⊕ N_2 ⊂ N_1 ⊕ N_2 ⊕ N_3 ⊂ ⋯   (4.3)

But since M is either Artinian or Noetherian, either the DCC or the ACC applies, forcing (4.2) or (4.3) to terminate. For (4.2), there exists k ∈ ℕ such that M_k = M_{k+1} = ⋯. But then, if M_k were decomposable, M_k = N_{k+1} ⊕ M_{k+1} = N_{k+1} ⊕ M_k, forcing N_{k+1} to be zero. Hence M = N_1 ⊕ ⋯ ⊕ N_k ⊕ M_k with all summands indecomposable. Similarly, for (4.3): if the chain N_1 ⊂ N_1 ⊕ N_2 ⊂ ⋯ terminates, then at some point N_k = 0, which means the process stopped because M_{k−1} = N_k ⊕ M_k = 0 ⊕ M_k = M_k was already indecomposable. Thus we can write M = N_1 ⊕ ⋯ ⊕ N_{k−1} ⊕ M_{k−1} with all summands indecomposable.

Remark. The fact that M has at least one indecomposable summand follows from M being Artinian or Noetherian. If M is Artinian, pick N minimal among the nonzero direct summands of M; such an N is clearly indecomposable. If M is Noetherian, it suffices to pick the complement of a maximal proper direct summand.

This proposition already establishes the existence of a decomposition into indecomposable modules for any A-module M that is Artinian or Noetherian. To prepare for the uniqueness statement of the Krull-Schmidt Theorem we need a bit of extra background.

Theorem 4.2.2 (Fitting's Lemma). Let M be an A-module that is both Artinian and Noetherian, and let φ ∈ End_A(M). Then there is a decomposition M = P ⊕ Q such that:
i) φ(P) ⊆ P and φ(Q) ⊆ Q;
ii) the restriction φ|_P : P → P is an automorphism;
iii) the restriction φ|_Q : Q → Q is nilpotent.

Corollary 4.2.2.1. If the A-module M is both Artinian and Noetherian, then M is indecomposable if and only
if EndA (M ) is a local algebra.

Proof. If End_A(M) is a local algebra, then by Corollary 4.1.2.2, M is indecomposable.

Conversely, let M be indecomposable. By Fitting's Lemma, every φ ∈ End_A(M) is either a unit or nilpotent: in the decomposition M = P ⊕ Q, indecomposability forces P = M (so φ is an automorphism) or Q = M (so φ is nilpotent). By Corollary 4.1.2.1, if every non-unit of End_A(M) is nilpotent, then End_A(M) is a local algebra.

Lastly, we state a proposition which will play a substantial role in the proof of the Krull-Schmidt Theorem.

Proposition 4.2.3. Let A be an R-algebra, and let M, N be right A-modules with M = ⊕_{i=1}^{r} M_i and N = ⊕_{i=1}^{s} N_i, where all End_A(M_i) and End_A(N_i) are local algebras. If M ≅ N, then r = s and there exists a permutation σ such that M_i ≅ N_{σ(i)}.

A full proof of this statement can be found in Section 5.4 of [Pie82], but it is rather long and somewhat
tedious. That being said, we are now fully equipped to prove the main Theorem of this section.

Theorem 4.2.4 (Krull-Schmidt). If M is a right A-module that is both Artinian and Noetherian, then M can be expressed as a finite direct sum of indecomposable A-modules, and this decomposition is unique up to isomorphism and reordering.

Proof. Let M be an A-module that is both Artinian and Noetherian. By Proposition 4.2.1, M can be written as a direct sum M = M_1 ⊕ ⋯ ⊕ M_r, where each M_i is indecomposable.

Now suppose there exists another decomposition into indecomposable modules:

M = M_1 ⊕ ⋯ ⊕ M_r = M′_1 ⊕ ⋯ ⊕ M′_s   (4.4)

By Corollary 4.2.2.1, each End_A(M_i) and End_A(M′_j) is a local algebra. Moreover, trivially we have ⊕_{i=1}^{r} M_i ≅ ⊕_{j=1}^{s} M′_j.

These two facts combined allow us to apply Proposition 4.2.3: we must have r = s, and there exists a permutation σ : {1, …, r} → {1, …, r} such that M_i ≅ M′_{σ(i)}.

Hence, both decompositions are essentially (up to isomorphism and rearrangement) the same.
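A familiar special case (an illustrative aside, not from the text): over A = ℤ, a module that is both Artinian and Noetherian is exactly a finite abelian group, and the Krull-Schmidt Theorem recovers the uniqueness part of their classification.

```latex
% Example over A = \mathbb{Z}: the finite abelian group of order 12 decomposes as
\mathbb{Z}/12\mathbb{Z} \;\cong\; \mathbb{Z}/4\mathbb{Z} \oplus \mathbb{Z}/3\mathbb{Z},
% and the indecomposable summands (the cyclic groups of prime-power order)
% are determined up to isomorphism and reordering, as the theorem predicts.
```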

4.3 Projective Modules

Definition 4.3.1. Let A be an R-algebra. We say an A-module P is projective if it is isomorphic to a direct summand of a free A-module.

For the rest of this section, let A denote a right Artinian R-algebra and P a projective right A-module, unless otherwise stated.

Proposition 4.3.1. There is an isomorphism

End_A(P)/J(End_A(P)) ≅ End_{A/J(A)}(P/PJ(A))

The proof of this Proposition is best phrased in the language of Category Theory. Broadly speaking, if (P, Q) is a pair of A-modules and I is an ideal of A, there is an R-module homomorphism θ(P, Q) : Hom_A(P, Q) → Hom_{A/I}(P/PI, Q/QI). Given an ideal I of A, this construction behaves functorially from the category of A-modules to the category of A/I-modules. In particular, θ(P, P) is an R-algebra homomorphism End_A(P) → End_{A/I}(P/PI). Taking I = J(A) and applying the Isomorphism Theorem yields the result. Another useful application of the functoriality of θ is the following result.

Corollary 4.3.1.1. Let P and Q be projective right A-modules. Then P ≅ Q if and only if P/PJ(A) ≅ Q/QJ(A).

Proof. Let φ : P → Q be an isomorphism of A-modules. Using the functoriality of θ, θ(φ^{-1}) = θ(φ)^{-1}. Hence θ(φ) : P/PJ(A) → Q/QJ(A) is an isomorphism.

Conversely, let P/PJ(A) ≅ Q/QJ(A). Since P and Q are projective, this isomorphism and its inverse lift to homomorphisms α : P → Q and β : Q → P such that id_P − βα ∈ ker θ = J(End_A(P)) and id_Q − αβ ∈ J(End_A(Q)). Then, by Proposition 2.2.3, βα = id_P − (id_P − βα) ∈ End_A(P)^×, and thus α has a left inverse. By a similar argument, α has a right inverse. Thus α is an isomorphism.

Lemma 4.3.2. A direct summand P of A_A is indecomposable if and only if P/PJ(A) is a simple A/J(A)-module.

Proof. Note that since A is Artinian, P may be assumed to be both Artinian and Noetherian. Hence, by Corollary 4.2.2.1 (of Fitting's Lemma), P is indecomposable if and only if End_A(P) is a local algebra; that is, if and only if End_A(P)/J(End_A(P)) is a division algebra.

But by Proposition 4.3.1, End_A(P)/J(End_A(P)) ≅ End_{A/J(A)}(P/PJ(A)). Therefore, by Schur's Lemma, End_{A/J(A)}(P/PJ(A)) is a division algebra if and only if P/PJ(A) is simple as a right A/J(A)-module.

Definition 4.3.2. An indecomposable right A-module P is said to be principal if it is a direct summand of A_A.

Proposition 4.3.3. The mapping P ↦ P/PJ(A) defines a bijective correspondence between isomorphism classes of principal indecomposable right A-modules and isomorphism classes of simple right A/J(A)-modules.

Proof. If P is a principal indecomposable A-module, then by the Lemma, P/PJ(A) is a simple right A/J(A)-module. By Corollary 4.3.1.1, P ≅ Q if and only if P/PJ(A) ≅ Q/QJ(A), hence the mapping is injective.

Now, if A = P_1 ⊕ ⋯ ⊕ P_n then, by the Lemma, there is a decomposition A/J(A) = P_1/P_1J(A) ⊕ ⋯ ⊕ P_n/P_nJ(A) where each P_i/P_iJ(A) is simple. But, in general, the direct summands of any semisimple algebra S represent all isomorphism classes of simple S-modules. In particular, {P_1/P_1J(A), …, P_n/P_nJ(A)} is a set of representatives of all isomorphism classes of simple right A/J(A)-modules, so the mapping P ↦ P/PJ(A) is surjective.

Theorem 4.3.4 (Structure Theorem for Projective Modules). If A is a right Artinian R-algebra, then every
projective right A-module is isomorphic to a unique direct sum of principal indecomposable A-modules.

Proof. Let P be a projective right A-module. As in the previous proposition, since A is Artinian, A/J(A) is semisimple; in particular, so is P/PJ(A), so P/PJ(A) ≅ ⊕_{i=1}^{r} N_i where each N_i is simple. By the proposition, there exist principal indecomposable modules P_i such that N_i ≅ P_i/P_iJ(A) for all i ≤ r. Hence:

P/PJ(A) ≅ ⊕_{i=1}^{r} P_i/P_iJ(A) ≅ (⊕_{i=1}^{r} P_i) / (⊕_{i=1}^{r} P_i)J(A)

So by Corollary 4.3.1.1, P ≅ ⊕_{i=1}^{r} P_i where each P_i is principal indecomposable.
To prove uniqueness, suppose there is an isomorphism φ such that

⊕_{i=1}^{r} P_i ≅ ⊕_{j=1}^{s} Q_j

with all P_i, Q_j principal indecomposable. This implies

P̃ := ⊕_{i=1}^{r} P_i/P_iJ(A) ≅ ⊕_{j=1}^{s} Q_j/Q_jJ(A) =: Q̃

where each P_i/P_iJ(A) and Q_j/Q_jJ(A) is a simple A/J(A)-module. Now for each Q_j/Q_jJ(A) there must be a submodule S_j ⊆ P̃ such that Q_j/Q_jJ(A) = φ(S_j). Then φ(S_j) is a simple A/J(A)-module with S_j ≅ φ(S_j). Thus we have a decomposition

Q̃ ≅ ⊕_{j=1}^{s} Q_j/Q_jJ(A) ≅ ⊕_{j=1}^{s} φ(S_j) ≅ ⊕_{j=1}^{s} S_j

Since S_j ≠ 0, there must be at least one P_i/P_iJ(A) with P_i/P_iJ(A) ∩ S_j ≠ 0. Let ψ : S_j → P_i/P_iJ(A) be the projection of P̃ onto this summand, restricted to S_j. Then ψ is not the zero map, and so by Schur's Lemma, ψ must be an isomorphism. But then S_j ≅ Q_j/Q_jJ(A) ≅ P_i/P_iJ(A), and thus

P̃ ≅ ⊕_{i=1}^{r} P_i/P_iJ(A) ≅ ⊕_{j=1}^{s} S_j ≅ Q̃

and therefore r = s. Lastly, the choice of i and j above was arbitrary in deducing Q_j/Q_jJ(A) ≅ P_i/P_iJ(A), so there must be a permutation σ such that Q_j/Q_jJ(A) ≅ P_{σ(j)}/P_{σ(j)}J(A); by Corollary 4.3.1.1 this gives Q_j ≅ P_{σ(j)}.

4.4 Basic Algebras

Definition 4.4.1. An R-algebra B is called reduced if B/J(B) is a finite product of division algebras.
Lemma 4.4.1. Let P_1, …, P_n be principal indecomposable A-modules and define P = ⊕_{i=1}^{n} P_i. Then End_A(P) is reduced if and only if P_i ≇ P_j for all i ≠ j.

Proof. Firstly, note that by Proposition 4.3.1:

End_A(P)/J(End_A(P)) ≅ End_{A/J(A)}(P/PJ(A)) ≅ End_{A/J(A)}(⊕_{i=1}^{n} P_i/P_iJ(A))

where, by Lemma 4.3.2, each P_i/P_iJ(A) is a simple A/J(A)-module. Suppose P_i ≇ P_j for all i ≠ j. Each P_i/P_iJ(A) is simple, and hence, by Schur's Lemma, Hom_{A/J(A)}(P_i/P_iJ(A), P_j/P_jJ(A)) = 0. But then, by Corollary 3.2.1.3, we have

End_{A/J(A)}(⊕_{i=1}^{n} P_i/P_iJ(A)) ≅ ⊕_{i=1}^{n} End_{A/J(A)}(P_i/P_iJ(A))

Again by Schur's Lemma, since the P_i/P_iJ(A) are simple, the algebras End_{A/J(A)}(P_i/P_iJ(A)) are division algebras, so End_A(P) is reduced.

Conversely, let End_A(P) be reduced. From Lemma 4.3.2, each P_i/P_iJ(A) is simple. Hence, by Schur's Lemma, either Hom(P_i/P_iJ(A), P_j/P_jJ(A)) = 0 or P_i/P_iJ(A) ≅ P_j/P_jJ(A). But since by assumption End_A(P)/J(End_A(P)) ≅ End_{A/J(A)}(P/PJ(A)) is a product of division algebras, the latter possibility is ruled out for i ≠ j. By Corollary 4.3.1.1, since P_i/P_iJ(A) ≇ P_j/P_jJ(A), we must have P_i ≇ P_j.
Lemma 4.4.2. Let P be a direct summand of A_A, and write P = ⊕_{j=1}^{r} P_j with each P_j indecomposable. Then the following conditions are equivalent:
i) P/PJ(A) is a torsion-free right A/J(A)-module.
ii) AP = A.
iii) Every principal indecomposable A-module is isomorphic to at least one of the P_j.

Proposition 4.4.3. Let P be a right ideal of A. The following conditions are equivalent:
i) P is a direct summand of A_A.
ii) AP = A.
iii) End_A(P) is a reduced R-algebra.
Moreover, such a P is unique up to isomorphism.

Proof. Let P be a direct summand of A_A. By the two previous Lemmas, conditions ii) and iii) hold if and only if P = P_1 ⊕ ⋯ ⊕ P_r where {P_1, …, P_r} is a set of representatives of the distinct isomorphism classes of principal indecomposable right A-modules. As we have seen before, we can indeed construct such a P, and since P_i ≇ P_j for i ≠ j, the decomposition is unique up to reordering and isomorphism.

Definition 4.4.2. If P is a right A-module satisfying the above conditions, we call B = End_A(P) the basic algebra of A, and write B = A^{basic}.

Example 4.4.1. Let us illustrate this definition by directly constructing the basic algebra of a semisimple algebra. Suppose the R-algebra A is semisimple. By the Wedderburn-Artin Theorem, A ≅ A_1 ⊕ ⋯ ⊕ A_r where A_i ≅ M_{n_i}(D_i), with each D_i a division algebra. In fact, D_i ≅ End_A(P_i) where P_i is a simple right A-module that is a direct summand of A_i. Let P = ⊕_{i=1}^{r} P_i. Then P is a direct summand of A_A, and so AP = A and End_A(P) ≅ ⊕_{i=1}^{r} D_i is reduced. Therefore, A^{basic} = End_A(P) ≅ ⊕_{i=1}^{r} D_i.
i=1
Chapter 5

Tensor Products

5.1 Motivation and Construction

Let R be a ring and suppose M, N and T are R-modules. A map β : M × N → T is called bilinear if, for every fixed m ∈ M and n ∈ N, the restrictions β(m, −) : N → T and β(−, n) : M → T are module homomorphisms.

A useful property of bilinearity is that it is preserved under composition with linear maps: if β is a bilinear map as above and λ : T → Q is a linear map, then the composite λ ∘ β is a bilinear map M × N → Q.

This suggests a universal problem: given M × N, construct an R-module T and a bilinear map b : M × N → T such that every bilinear map from M × N to any other R-module P is the composite of b and a unique linear map l : T → P.

Definition 5.1.1. Let M, N be R-modules. A tensor product of M and N is an R-module M ⊗_R N together with a bilinear map M × N → M ⊗_R N, written (u, v) ↦ u ⊗ v, such that:

i) M ⊗_R N is generated by {u ⊗ v : u ∈ M, v ∈ N};

ii) if β : M × N → P is bilinear, then there exists a homomorphism β̄ : M ⊗_R N → P such that β̄(u ⊗ v) = β(u, v).


The map (u, v) ↦ u ⊗ v being bilinear implies the following identities:

u ⊗ (v_1 a + v_2 b) = (u ⊗ v_1)a + (u ⊗ v_2)b
(u_1 a + u_2 b) ⊗ v = (u_1 ⊗ v)a + (u_2 ⊗ v)b
u ⊗ 0 = 0 ⊗ u = 0
ua ⊗ v = (u ⊗ v)a = u ⊗ (va)
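For finite-dimensional real vector spaces these identities can be checked numerically, identifying u ⊗ v with the Kronecker product of coordinate vectors (an illustrative sketch of my own, using numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
u, u1, u2 = rng.random(3), rng.random(3), rng.random(3)
v, v1, v2 = rng.random(4), rng.random(4), rng.random(4)
a, b = 2.0, -3.0

# u ⊗ (v1·a + v2·b) = (u ⊗ v1)·a + (u ⊗ v2)·b
assert np.allclose(np.kron(u, a * v1 + b * v2),
                   a * np.kron(u, v1) + b * np.kron(u, v2))
# (u1·a + u2·b) ⊗ v = (u1 ⊗ v)·a + (u2 ⊗ v)·b
assert np.allclose(np.kron(a * u1 + b * u2, v),
                   a * np.kron(u1, v) + b * np.kron(u2, v))
# (u·a) ⊗ v = u ⊗ (v·a)
assert np.allclose(np.kron(a * u, v), np.kron(u, a * v))
```

Here `np.kron` plays the role of the bilinear map (u, v) ↦ u ⊗ v into ℝ^{12} ≅ ℝ^3 ⊗ ℝ^4.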

This rather terse definition does not convey the complexity these algebraic structures enclose. To understand the construction of the tensor product, first consider M × N as a set, and define the free R-module generated by all its elements:

F_R(M × N) = ⊕_{(m,n) ∈ M×N} e_{(m,n)} R

So, if for instance M = N = R = ℝ, then every pair (x, y) ∈ ℝ² is a basis element, and F_ℝ(ℝ × ℝ) consists of ℝ²-many copies of ℝ. Thus F_R(M × N) is a huge module, immensely bigger than M × N: in general, dim(M × N) = dim M + dim N, so in our example dim(ℝ²) = 2, whereas dim F_ℝ(ℝ²) = 2^{ℵ₀}.

Once we have defined F_R(M × N), construct the submodule G_R(M × N) ⊆ F_R(M × N) generated by the elements of the form

(u, v_1 a + v_2 b) − (u, v_1)a − (u, v_2)b
(u_1 a + u_2 b, v) − (u_1, v)a − (u_2, v)b

This motivates the definition:

Definition 5.1.2. The tensor product of M and N is the quotient module F_R(M × N)/G_R(M × N).

5.2 Tensor Product of Modules

Definition 5.2.1. Let φ : M_1 → M_2 and ψ : N_1 → N_2 be R-module homomorphisms. Then the tensor homomorphism φ ⊗ ψ is the unique module homomorphism

(φ ⊗ ψ) : M_1 ⊗_R N_1 → M_2 ⊗_R N_2

defined, for all u ∈ M_1 and v ∈ N_1, by

(φ ⊗ ψ)(u ⊗ v) = φ(u) ⊗ ψ(v)

Lemma 5.2.1 (Identities for Tensor Products). Let M, M_1, M_2, N, N_1, N_2 and P be R-modules. Then:
i) M ⊗_R (N_1 ⊕ N_2) ≅ (M ⊗_R N_1) ⊕ (M ⊗_R N_2) via u ⊗ (v_1, v_2) ↦ (u ⊗ v_1, u ⊗ v_2);
ii) M ⊗_R (N ⊗_R P) ≅ (M ⊗_R N) ⊗_R P via u ⊗ (v ⊗ w) ↦ (u ⊗ v) ⊗ w;
iii) M ⊗_R N ≅ N ⊗_R M via u ⊗ v ↦ v ⊗ u.
Now suppose φ : M_1 → M_2 and ψ : N_1 → N_2 are R-module homomorphisms, and consider the tensor homomorphism φ ⊗ ψ : M_1 ⊗_R N_1 → M_2 ⊗_R N_2. Then:
i) (φ ⊗ ψ)(φ′ ⊗ ψ′) = φφ′ ⊗ ψψ′;
ii) id_M ⊗ id_N = id_{M ⊗ N};
iii) φ ⊗ (ψ_1 a + ψ_2 b) = (φ ⊗ ψ_1)a + (φ ⊗ ψ_2)b, and similarly (φ_1 a + φ_2 b) ⊗ ψ = (φ_1 ⊗ ψ)a + (φ_2 ⊗ ψ)b;
iv) φ ⊗ 0 = 0 ⊗ ψ = 0 and (φa) ⊗ ψ = φ ⊗ (ψa) = (φ ⊗ ψ)a.

Proposition 5.2.2. Let F be a field and let M and N be F-spaces with bases {u_i : i ≤ m} and {v_j : j ≤ n}. Then {u_i ⊗ v_j : i ≤ m, j ≤ n} is a basis of M ⊗_F N.

Proof. M ⊗_F N = (⊕_{i=1}^{m} u_i F) ⊗_F (⊕_{j=1}^{n} v_j F) ≅ ⊕_{i,j} (u_i F ⊗_F v_j F) ≅ ⊕_{i,j} (u_i ⊗ v_j)F
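In coordinates this proposition says dim(M ⊗_F N) = mn, with the basis tensors u_i ⊗ v_j realized as outer products of basis vectors; a numerical sketch of my own:

```python
import numpy as np

m, n = 2, 3
# Arbitrary bases of F^m and F^n (rows are basis vectors; any invertible choice works).
U = np.array([[1.0, 1.0], [0.0, 1.0]])
V = np.array([[1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [1.0, 1.0, 1.0]])

# u_i ⊗ v_j realized as the outer product u_i v_j^T, flattened to a vector in F^{mn}.
tensors = [np.outer(U[i], V[j]).ravel() for i in range(m) for j in range(n)]
B = np.stack(tensors)

# The mn vectors are linearly independent, hence a basis of F^{mn} ≅ M ⊗_F N.
assert B.shape == (m * n, m * n)
assert np.linalg.matrix_rank(B) == m * n
```

The stacked matrix B is exactly the Kronecker product of the two change-of-basis matrices, which is invertible whenever both factors are.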

Proposition 5.2.3. Let M_1 →^φ M_2 →^ψ M_3 → 0 be an exact sequence of R-modules. Then, for any R-module N, the sequence

M_1 ⊗_R N →^η M_2 ⊗_R N →^θ M_3 ⊗_R N → 0

is also exact, where η = φ ⊗ id_N and θ = ψ ⊗ id_N.

Proof. We know that ψ is surjective, so for every m_3 ∈ M_3 there exists m_2 ∈ M_2 such that m_3 = ψ(m_2). Therefore, for any m_3 ⊗ n ∈ M_3 ⊗_R N,

m_3 ⊗ n = ψ(m_2) ⊗ n = ψ(m_2) ⊗ id_N(n) = (ψ ⊗ id_N)(m_2 ⊗ n) = θ(m_2 ⊗ n)   (5.1)

Hence θ is surjective as well. Now let us show Im η = ker θ. Since ψφ = 0:

θη = (ψ ⊗ id_N)(φ ⊗ id_N) = (ψφ ⊗ id_N) = (0 ⊗ id_N) = 0   (5.2)

and therefore Im η ⊆ ker θ. Conversely, let us show that ker θ ⊆ Im η.

Consider the projection π : M_2 ⊗_R N → Coker η = (M_2 ⊗_R N)/Im η. Since ker ψ = Im φ, tensors of the form k ⊗ n with k ∈ ker ψ lie in Im η = ker π. Hence we may define a bilinear map

β : M_3 × N → (M_2 ⊗_R N)/Im η = Coker η
(m_3, n) ↦ π(ψ^{-1}(m_3) ⊗ n)

which is well defined because any two preimages of m_3 under ψ differ by an element of ker ψ. Thus, by the universal mapping property of the tensor product, there exists a homomorphism γ : M_3 ⊗_R N → Coker η with

γ(m_3 ⊗ n) = β(m_3, n) = π(ψ^{-1}(m_3) ⊗ n)   (5.3)

Hence

γ(ψ(m_2) ⊗ n) = π(ψ^{-1}(ψ(m_2)) ⊗ n) = π(m_2 ⊗ n)   (5.4)

and so γθ = π. Hence ker θ ⊆ ker(γθ) = ker π = Im η.

Thus ker θ = Im η, and so the sequence is exact.
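The proposition is one-sided: tensoring is right exact but not left exact. A standard counterexample (not from the text above):

```latex
% Apply - \otimes_{\mathbb{Z}} \mathbb{Z}/2\mathbb{Z} to the exact sequence
0 \to \mathbb{Z} \xrightarrow{\times 2} \mathbb{Z} \to \mathbb{Z}/2\mathbb{Z} \to 0.
% The result is
\mathbb{Z}/2\mathbb{Z} \xrightarrow{\times 2 \,=\, 0} \mathbb{Z}/2\mathbb{Z}
  \to \mathbb{Z}/2\mathbb{Z} \to 0,
% which is exact in the sense of the proposition, but the first map is zero,
% hence no longer injective: the leading "0 \to" cannot be kept in general.
```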

Corollary 5.2.3.1. If the sequence Σ : 0 → M_1 → M_2 → M_3 → 0 is split exact, then Σ ⊗_R N : 0 → M_1 ⊗_R N → M_2 ⊗_R N → M_3 ⊗_R N → 0 is also split exact.

The algebra used to prove the previous results is rather abstract. A more concrete and practical illustration of the tensor product is the following theorem.

Theorem 5.2.4. Let a, b ∈ ℤ and d = gcd(a, b). Then there is a ℤ-module isomorphism ℤ/aℤ ⊗_ℤ ℤ/bℤ ≅ ℤ/dℤ.

Proof. Since 1 spans ℤ/aℤ and ℤ/bℤ, the element 1 ⊗ 1 spans ℤ/aℤ ⊗_ℤ ℤ/bℤ. Now a(1 ⊗ 1) = a ⊗ 1 = 0 ⊗ 1 = 0 and b(1 ⊗ 1) = 1 ⊗ b = 1 ⊗ 0 = 0; writing d = ua + vb (Bézout) then gives d(1 ⊗ 1) = 0. Hence #(ℤ/aℤ ⊗_ℤ ℤ/bℤ) ≤ d.

Now consider the following map:

B : ℤ/aℤ × ℤ/bℤ → ℤ/dℤ
(x mod a, y mod b) ↦ xy mod d

This is well defined (since d ∣ a and d ∣ b) and bilinear, and hence by the universal mapping property it induces a homomorphism of ℤ-modules

γ : ℤ/aℤ ⊗_ℤ ℤ/bℤ → ℤ/dℤ

such that γ(x ⊗ y) = xy mod d. In particular, γ(x ⊗ 1) = x mod d for any x ∈ ℤ/dℤ, so γ is surjective.

This implies #(ℤ/aℤ ⊗_ℤ ℤ/bℤ) ≥ d, and therefore #(ℤ/aℤ ⊗_ℤ ℤ/bℤ) = #(ℤ/dℤ). But any surjective map between two finite sets of the same cardinality must be bijective, so γ is an isomorphism.
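The Bézout step used in the counting argument can be sanity-checked numerically; a small sketch of my own (with illustrative values a = 12, b = 18), using the extended Euclidean algorithm to produce coefficients u, v with d = ua + vb:

```python
from math import gcd

def extended_gcd(a, b):
    """Return (d, u, v) with d = gcd(a, b) = u*a + v*b."""
    if b == 0:
        return a, 1, 0
    d, u, v = extended_gcd(b, a % b)
    return d, v, u - (a // b) * v

a, b = 12, 18
d, u, v = extended_gcd(a, b)
assert d == gcd(a, b) == u * a + v * b

# In Z/aZ ⊗ Z/bZ we have a·(1⊗1) = 0 and b·(1⊗1) = 0,
# so d·(1⊗1) = (u·a + v·b)·(1⊗1) = 0: the order of 1⊗1 divides d.
```

Here the theorem predicts ℤ/12ℤ ⊗_ℤ ℤ/18ℤ ≅ ℤ/6ℤ, a cyclic group of order d = 6.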

Theorem 5.2.5. Let R be an integral domain with field of fractions Frac(R) = K, and let V be a K-vector space. Then K ⊗_R V ≅ V.

Proof. Consider the bilinear map B : K × V → V defined by (x, v) ↦ xv. Again, this implies the existence of a homomorphism f : K ⊗_R V → V with f(x ⊗ v) = B(x, v) = xv. Let x ⊗ v ∈ K ⊗_R V and write x = a/b for some a, b ∈ R with b ≠ 0. Then:

x ⊗ v = (a/b) ⊗ v = (1/b) ⊗ av = (1/b) ⊗ b(a/b)v = (1/b)b ⊗ (a/b)v = 1 ⊗ xv   (5.5)

This shows that all elementary tensors in K ⊗_R V have the form 1 ⊗ w for some w ∈ V; since 1 ⊗ w_1 + 1 ⊗ w_2 = 1 ⊗ (w_1 + w_2), every element of K ⊗_R V has this form. Now we aim to show that f is injective: let t = 1 ⊗ v ∈ K ⊗_R V. If f(t) = f(1 ⊗ v) = v = 0, then t = 1 ⊗ 0 = 0. Thus ker f = 0, and hence f is injective; as f is clearly surjective, K ⊗_R V ≅ V.

Remark. This Theorem can be extended to any nonzero R-submodule M ⊆ K: letting V be any K-vector space, there is an isomorphism M ⊗_R V ≅ V as R-modules. In particular, if I ⊆ R is a nonzero ideal, then I ⊗_R K ≅ K.

The proof of this result may seem rather straightforward, but the consequences we can draw from it are surprising. For instance, taking V = K we get an isomorphism K ⊗_R K ≅ K as R-modules, e.g. ℚ ⊗_ℤ ℚ ≅ ℚ. Further examples are ℚ ⊗_ℤ ℝ ≅ ℝ as a ℤ-module, or ℝ ⊗_ℝ ℂ ≅ ℂ as an ℝ-module.

On the other hand, the remark is an even stronger statement. Take R = ℤ[√10] and I = (2, √10). Clearly (2, √10) ≇ ℤ[√10]. However, (2, √10) ⊗_{ℤ[√10]} ℚ(√10) ≅ ℤ[√10] ⊗_{ℤ[√10]} ℚ(√10), since both are isomorphic to ℚ(√10). In general, taking R an integral domain, K = Frac(R) its field of fractions and I ⊆ R a nonzero ideal, there is an isomorphism I ⊗_R K ≅ R ⊗_R K.

Theorem 5.2.6. Let M_1, M_2 be finite free R-modules and let S be an R-algebra. Then S ⊗_R Hom_R(M_1, M_2) ≅ Hom_S(S ⊗_R M_1, S ⊗_R M_2).

Proof. Denote φ_S = 1_S ⊗ φ. Consider the mapping Ψ : S × Hom_R(M_1, M_2) → Hom_S(S ⊗_R M_1, S ⊗_R M_2) defined by Ψ(s, φ) = sφ_S. Let us show that it is bilinear.

Fixing φ, consider Ψ(−, φ) : S → Hom_S(S ⊗_R M_1, S ⊗_R M_2):

Ψ(s_1 + s_2, φ) = (s_1 + s_2)(1_S ⊗ φ) = s_1(1_S ⊗ φ) + s_2(1_S ⊗ φ) = Ψ(s_1, φ) + Ψ(s_2, φ)

Fixing s, consider Ψ(s, −) : Hom_R(M_1, M_2) → Hom_S(S ⊗_R M_1, S ⊗_R M_2):

Ψ(s, (φ_1 + φ_2)r) = s(1_S ⊗ (φ_1 + φ_2)r) = s(1_S ⊗ φ_1)r + s(1_S ⊗ φ_2)r = Ψ(s, φ_1)r + Ψ(s, φ_2)r

Therefore Ψ is bilinear, and by the universal mapping property it factors through the tensor product: there is an R-module homomorphism L : S ⊗_R Hom_R(M_1, M_2) → Hom_S(S ⊗_R M_1, S ⊗_R M_2) with L(s ⊗ φ) = sφ_S.

Now we have to show that L is an isomorphism. Let {e_i} and {e′_j} be bases of M_1 and M_2 respectively. Then {1 ⊗ e_i} and {1 ⊗ e′_j} are bases of S ⊗_R M_1 and S ⊗_R M_2. The functions φ_{ij} sending e_i ↦ e′_j and every other basis element e_k ↦ 0 form a basis of Hom_R(M_1, M_2), and so {1_S ⊗ φ_{ij}} is a basis of S ⊗_R Hom_R(M_1, M_2). But then:

L(1_S ⊗ φ_{ij})(1 ⊗ e_i) = (1 ⊗ φ_{ij})(1 ⊗ e_i) = 1 ⊗ φ_{ij}(e_i) = 1 ⊗ e′_j

So L sends a basis to a basis, and hence it is an isomorphism.

5.3 Tensor Product of Algebras

Consider two R-algebras A, B. Their R-module structure allows us to define the tensor product A ⊗_R B.

Definition 5.3.1. If A, B are R-algebras, then A ⊗_R B is an R-algebra whose multiplication satisfies (x_1 ⊗ y_1)·(x_2 ⊗ y_2) = x_1x_2 ⊗ y_1y_2. Moreover, 1_{A ⊗_R B} = 1_A ⊗ 1_B, and all identities established in Section 5.2 apply.

Lemma 5.3.1. For the homomorphisms ι_A : A → A ⊗_R B defined by ι_A(x) = x ⊗ 1_B and ι_B : B → A ⊗_R B defined by ι_B(x) = 1_A ⊗ x, the following properties hold:

i) ι_A(A) ∪ ι_B(B) generates A ⊗_R B;

ii) ι_A(x)ι_B(y) = ι_B(y)ι_A(x);

iii) if A, B are F-algebras, where F is a field, then both ι_A and ι_B are injective;

iv) if {x_i : i ∈ I} is a basis of A and {y_j : j ∈ J} is a basis of B, then {ι_A(x_i)ι_B(y_j) : (i, j) ∈ I × J} is a basis of A ⊗_R B.

Definition 5.3.2. Let X ⊆ A be a subset of the R-algebra A. The centraliser of X in A is

C_A(X) = {y ∈ A : xy = yx for all x ∈ X}

Proposition 5.3.2. Let A, B and E be R-algebras. If β : B → A and ε : E → A are homomorphisms such that ε(E) ⊆ C_A(β(B)), then there exists an algebra homomorphism θ : B ⊗_R E → A satisfying θ(x ⊗ y) = β(x)ε(y).

Proof. Since β and ε are R-algebra homomorphisms, there is a bilinear map Θ : B × E → A with (x, y) ↦ β(x)ε(y). Therefore, by the universal mapping property, there exists an R-module homomorphism θ : B ⊗_R E → A such that θ(x ⊗ y) = Θ(x, y) = β(x)ε(y).

It remains to check multiplicativity: θ((x_1 ⊗ y_1)(x_2 ⊗ y_2)) = θ(x_1x_2 ⊗ y_1y_2) = β(x_1x_2)ε(y_1y_2) = β(x_1)β(x_2)ε(y_1)ε(y_2). But since ε(E) ⊆ C_A(β(B)):

θ((x_1 ⊗ y_1)(x_2 ⊗ y_2)) = β(x_1)β(x_2)ε(y_1)ε(y_2) = β(x_1)ε(y_1)β(x_2)ε(y_2) = θ(x_1 ⊗ y_1)θ(x_2 ⊗ y_2)

So θ is indeed an algebra homomorphism.

In particular, consider B : B ! B ⌦R E and E : E ! B ⌦R E defined as in the Lemma. Then B (B) =


B ⌦R 1E and so it follows that ✓B (B) = ✓(B ⌦R 1E ) = (B) (1) = (B). Hence, = ✓B (B). In a similar way,
✓E = . Hence, the following diagrams both commute

B A E A

B E
✓ ✓

B ⌦R E B ⌦R E

5.4 Tensor Product Modules over Algebras

Definition 5.4.1. Let M be a right A-module and let N be a right B-module. Then M ⊗_R N is a right A ⊗_R B-module, with scalar multiplication defined for all x ∈ A, y ∈ B, u ∈ M, v ∈ N by:

(u ⊗ v)(x ⊗ y) = ux ⊗ vy

Proposition 5.4.1. Let M_1, M_2 be right A-modules and N_1, N_2 be right B-modules. Then:

i) if φ ∈ Hom_A(M_1, M_2) and ψ ∈ Hom_B(N_1, N_2), then φ ⊗ ψ ∈ Hom_{A ⊗_R B}(M_1 ⊗ N_1, M_2 ⊗ N_2);
ii) there is an R-module homomorphism θ : Hom_A(M_1, M_2) ⊗_R Hom_B(N_1, N_2) → Hom_{A ⊗_R B}(M_1 ⊗ N_1, M_2 ⊗ N_2) induced by (φ, ψ) ↦ φ ⊗ ψ.

Proof. Since by assumption φ, ψ are module homomorphisms:

(φ ⊗ ψ)((u ⊗ v)(x ⊗ y)) = φ(ux) ⊗ ψ(vy) = φ(u)x ⊗ ψ(v)y = [(φ ⊗ ψ)(u ⊗ v)](x ⊗ y)

so φ ⊗ ψ is a homomorphism of A ⊗_R B-modules, proving i).

For ii), the mapping (φ, ψ) ↦ φ ⊗ ψ from Hom_A(M_1, M_2) × Hom_B(N_1, N_2) to Hom_{A ⊗_R B}(M_1 ⊗ N_1, M_2 ⊗ N_2) is bilinear, so by the universal mapping property the induced homomorphism θ exists.

In general, such θ is neither injective nor surjective. However, for any R-algebra A and any A-module M we can define an isomorphism γ_A : Hom_A(A, M) → M via γ_A(φ) = φ(1_A). Now let B be another R-algebra and N a B-module. Taking tensor products, we have analogous isomorphisms γ_{A⊗B} : Hom_{A ⊗_R B}(A ⊗_R B, M ⊗ N) → M ⊗ N and γ_A ⊗ γ_B : Hom_A(A, M) ⊗_R Hom_B(B, N) → M ⊗_R N. These fit into a commutative triangle γ_{A⊗B} ∘ θ = γ_A ⊗ γ_B, which forces the map θ : Hom_A(A, M) ⊗_R Hom_B(B, N) → Hom_{A ⊗_R B}(A ⊗_R B, M ⊗ N) to be an isomorphism.

Corollary 5.4.1.1. Let M_1, M_2 be A-modules and N_1, N_2 be B-modules, with M_1 and N_1 free. Then there is an isomorphism

Hom_A(M_1, M_2) ⊗_R Hom_B(N_1, N_2) ≅ Hom_{A⊗B}(M_1 ⊗ N_1, M_2 ⊗ N_2)

Proof. Since M_1 and N_1 are free, M_1 ≅ ⊕_n A and N_1 ≅ ⊕_m B. Hence

Hom_A(M_1, M_2) ⊗_R Hom_B(N_1, N_2) ≅ Hom_A(⊕_n A, M_2) ⊗_R Hom_B(⊕_m B, N_2)
  ≅ (⊕_n Hom_A(A, M_2)) ⊗_R (⊕_m Hom_B(B, N_2))
  ≅ ⊕_{mn} (Hom_A(A, M_2) ⊗_R Hom_B(B, N_2))
  ≅ ⊕_{mn} Hom_{A⊗B}(A ⊗ B, M_2 ⊗ N_2)
  ≅ Hom_{A⊗B}(M_1 ⊗ N_1, M_2 ⊗ N_2)

Corollary 5.4.1.2. M_m(A) ⊗_R M_n(B) ≅ M_{mn}(A ⊗_R B)

Proof.

M_m(A) ⊗_R M_n(B) ≅ End_A(⊕_m A) ⊗_R End_B(⊕_n B)
  ≅ End_{A ⊗_R B}(⊕_{mn} (A ⊗_R B))
  ≅ M_{mn}(A ⊗_R B)
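For A = B = F = ℝ this isomorphism is realized concretely by the Kronecker product: a numerical sketch of my own, checking that `numpy.kron` carries M_2(ℝ) ⊗ M_3(ℝ) into M_6(ℝ) compatibly with the multiplication rule (X_1 ⊗ Y_1)(X_2 ⊗ Y_2) = X_1X_2 ⊗ Y_1Y_2.

```python
import numpy as np

rng = np.random.default_rng(1)
X1, X2 = rng.random((2, 2)), rng.random((2, 2))  # elements of M_2(R)
Y1, Y2 = rng.random((3, 3)), rng.random((3, 3))  # elements of M_3(R)

# The map X ⊗ Y ↦ kron(X, Y) lands in M_6(R) and respects multiplication
# (the "mixed-product property" of the Kronecker product):
lhs = np.kron(X1, Y1) @ np.kron(X2, Y2)
rhs = np.kron(X1 @ X2, Y1 @ Y2)
assert lhs.shape == (6, 6)
assert np.allclose(lhs, rhs)
```

The identity element is also respected: kron(I_2, I_3) = I_6, matching 1_A ⊗ 1_B ↦ 1.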
Chapter 6

Central Simple Algebras

6.1 The Density Theorem

Lemma 6.1.1. Let M be a semisimple right A-module. Denote D = End_A(M) and consider M as a (D, A)-bimodule. If φ ∈ End_D(M) and u_1, …, u_n ∈ M, then there exists an x ∈ A such that φ(u_i) = u_i x for all i.

Proof. Since M is semisimple, N = ⊕_n M is also a semisimple A-module. Let w = (u_1, …, u_n) ∈ N. By Proposition 2.1.3, there is a submodule P ⊆ N such that N = wA ⊕ P. Let π ∈ End_A(N) be the projection N → wA. By Corollary 3.2.1.2, End_A(N) ≅ M_n(End_A(M)) ≅ M_n(D). The diagonal map φ·1_n : N → N commutes with every matrix in M_n(D), since φ is D-linear; in particular it commutes with π. Thus

(φ(u_1), …, φ(u_n)) = (φ·1_n)(w) = (φ·1_n)(π(w)) = π((φ·1_n)(w)) ∈ wA

That is, there exists x ∈ A such that (φ(u_1), …, φ(u_n)) = wx = (u_1 x, …, u_n x). Hence φ(u_i) = u_i x.

Theorem 6.1.2 (Jacobson). Let M be a simple right A-module. Then D = End_A(M) is a division algebra, and M may be considered as a left D-space. If u_1, …, u_n ∈ M are linearly independent over D and w_1, …, w_n ∈ M, then there exists an x ∈ A such that u_i x = w_i for all i.

Proof. Since D is a division algebra, M is free (and semisimple) as a left D-space. By Proposition 2.1.3, since u_1, …, u_n are linearly independent, there exists an N ⊆ M such that M = Du_1 ⊕ ⋯ ⊕ Du_n ⊕ N. Define φ ∈ End_D(M) by φ(u_i) = w_i and φ(N) = 0. By the Lemma, there is an x ∈ A such that w_i = φ(u_i) = u_i x.

6.2 Central Simple Algebras

Definition 6.2.1. Let F be a field. A simple F -algebra A such that Z(A) = F is called central simple.

Lemma 6.2.1. Let B, C ⊂ A be F-algebras such that C ⊂ C_A(B), and let B be central simple. Let x_1, …, x_n be linearly independent elements of B and y_1, …, y_n ∈ C. Then Σ_{i=1}^{n} x_i y_i = 0 implies that y_1 = ⋯ = y_n = 0.

Lemma 6.2.2. Let B, C be F-algebras. The following statements hold.

i) If B ⊗_F C is simple, then both B and C are simple.
ii) If B is central simple and C is simple, then B ⊗_F C is simple.

Lemma 6.2.3. Let B, C be F-algebras and denote A := B ⊗_F C. The following statements hold.
i) C_A(B ⊗_F F) = Z(B) ⊗_F C.
ii) Z(A) = Z(B) ⊗_F Z(C).


These lemmas will be essential tools for the remainder of this section.

Proposition 6.2.4. Let B, C ⊂ A be finite-dimensional F-algebras with C ⊆ C_A(B) and B central simple. The following are equivalent.
i) A = BC.
ii) dim_F A = (dim_F B)(dim_F C).
iii) The inclusions B ↪ A and C ↪ A induce an isomorphism B ⊗_F C ≅ A.

Proof. Let x_1, …, x_n be an F-basis of B and y_1, …, y_m a basis of C. Suppose Σ_{i=1}^{n} Σ_{j=1}^{m} a_{ij} x_i y_j = 0 for some a_{ij} ∈ F. Then, by Lemma 6.2.1, all a_{ij} = 0. Thus the set

X = {x_i y_j : i ≤ n, j ≤ m}

is a linearly independent set with #(X) = mn. Hence each of i) and ii) implies that X is a basis for A. Now, by Proposition 5.2.2, {x_i ⊗ y_j} is a basis for B ⊗_F C. Hence x_i ⊗ y_j ↦ x_i y_j defines a homomorphism sending a basis to a basis, so it is an isomorphism. Therefore i), ii) ⇒ iii).

Conversely, let B ⊗_F C ≅ A via the inclusions. Then, also by Proposition 5.2.2, B ⊗_F C ≅ ⊕_{i,j} (x_i ⊗ y_j)F, so dim_F A = (dim_F B)(dim_F C), giving ii); and the image of the basis {x_i ⊗ y_j} is {x_i y_j}, which spans A, so A = BC, giving i).

We finish this section by stating a classical theorem in the theory of central simple algebras.

Theorem 6.2.5 (Noether-Skolem). Let A be a central simple algebra and let B ⊆ A be a simple subalgebra. Let ξ : B → A be an F-algebra homomorphism. Then there is a unit u ∈ A^× such that ξ(y) = u^{-1}yu for all y ∈ B.

6.3 Brauer Groups

Again, let F be a field. We will denote the set of isomorphism classes of central simple F-algebras by G(F).

Lemma 6.3.1. Let A, B ∈ G(F). The following are equivalent.

i) A^{basic} ≅ B^{basic}.
ii) There exist a division algebra D ∈ G(F) and m, n ∈ ℕ such that A ≅ M_n(D) and B ≅ M_m(D).
iii) There exist r, s ∈ ℕ such that A ⊗_F M_r(F) ≅ B ⊗_F M_s(F).

Proof. i) ⇒ ii): By the Wedderburn-Artin Theorem, A ≅ M_n(D_1) and B ≅ M_m(D_2), where D_1, D_2 ∈ G(F) since A and B are central simple. Therefore, by Example 4.4.1, D_1 ≅ A^{basic} and D_2 ≅ B^{basic}, and hence D_1 ≅ D_2.

ii) ⇒ iii): If A ≅ M_n(D) and B ≅ M_m(D), then we can construct isomorphisms A ⊗_F M_m(F) ≅ M_n(D) ⊗_F M_m(F) and B ⊗_F M_n(F) ≅ M_m(D) ⊗_F M_n(F). But as we have seen before (Corollary 5.4.1.2),

M_n(D) ⊗_F M_m(F) ≅ M_{mn}(D) ≅ M_m(D) ⊗_F M_n(F)

Hence the statement is proved, with r = m and s = n.


iii) ⇒ i)
Assume A ⊗F Mr(F) ≅ B ⊗F Ms(F). Again, using the Wedderburn-Artin Theorem, we can construct

A ⊗F Mr(F) ≅ Mn(D1) ⊗F Mr(F)        B ⊗F Ms(F) ≅ Mm(D2) ⊗F Ms(F)

Composing these with the isomorphisms Mn(D1) ⊗F Mr(F) ≅ Mrn(D1) and Mm(D2) ⊗F Ms(F) ≅ Msm(D2), we obtain

Msm(D2) ≅ Mrn(D1)

Hence, by the uniqueness statement of the Wedderburn-Artin Theorem, D1 ≅ D2. Since D1 ≅ A^basic and D2 ≅ B^basic by Example 4.4.1, we must have A^basic ≅ B^basic.
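The isomorphism Mn(D) ⊗F Mm(F) ≅ Mmn(D) used in the proof above is realised concretely by the Kronecker product of matrices. The following numerical sketch (purely illustrative, for F = D = ℝ with n = 2 and m = 3) checks that the Kronecker product is multiplicative, hence an algebra homomorphism, and that dimensions multiply:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random elements of M_2(R) and M_3(R); their Kronecker products land in M_6(R).
A1, A2 = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
B1, B2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

# kron realises the map induced by the tensor product: it is bilinear and
# multiplicative, (A1 (x) B1)(A2 (x) B2) = (A1 A2) (x) (B1 B2).
lhs = np.kron(A1, B1) @ np.kron(A2, B2)
rhs = np.kron(A1 @ A2, B1 @ B2)
multiplicative = np.allclose(lhs, rhs)

# Dimensions multiply: dim M_2 * dim M_3 = 4 * 9 = 36 = dim M_6.
dims_match = np.kron(A1, B1).shape == (6, 6)
```

The mixed-product property shown here is exactly what makes the map Mn(D) ⊗F Mm(F) → Mmn(D) an isomorphism of algebras rather than merely of vector spaces.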

Definition 6.3.1. Let A, B ∈ G(F). We say A and B are Morita equivalent, and write A ∼ B, if they satisfy the conditions of the Lemma.

Alternatively, A and B are Morita equivalent if they have equivalent module categories. We shall not discuss category theory just yet, but this approach is widely covered in Bass' book on algebraic K-theory [Bas68].
Denote by [A] the equivalence class of A in G(F) under the Morita relation.

Definition 6.3.2. The set Br(F) = {[A] : A ∈ G(F)} = G(F)/∼ is an abelian group under the operation [A].[B] = [A ⊗F B]. This group is called the Brauer group of the field F. The identity in Br(F) is [F] and inverses are given by [A]⁻¹ = [A^opp].

Definition 6.3.3. Let A be a central simple F -algebra. The exponent of A is the order of [A] in Br(F ).

Theorem 6.3.2 (Merkurjev-Suslin). Every central simple algebra of exponent two is Morita equivalent to a
tensor product of quaternion algebras.

The proof of this theorem is beyond the scope of this essay; a full and detailed proof can be found in [GS17]. The Merkurjev-Suslin Theorem is the culmination of work started by Brauer, Noether, Hasse and Albert in the early 20th century. In fact, this result is a special case of the Norm Residue Isomorphism Theorem, which relates Milnor K-theory and Galois cohomology, but that again lies well beyond our scope. Nevertheless, it is adequate to state the following consequence of the Merkurjev-Suslin Theorem.

Corollary 6.3.2.1. Denote by Br2(F) the subgroup of 2-torsion elements in Br(F). The equivalence classes of quaternion algebras generate Br2(F).

Proposition 6.3.3. Let F be a field.

i) If A, B ∈ G(F), then A ≅ B if and only if [A] = [B] and dimF A = dimF B.
ii) Every equivalence class in Br(F) is represented by a unique division algebra (up to isomorphism).

Proof. If A ≅ B, it is trivial that [A] = [B] and dimF A = dimF B. Conversely, if [A] = [B], by our definition of Morita equivalence we must have A ≅ Mn(D) and B ≅ Mm(D) for some division algebra D and positive integers m, n. If dimF A = dimF B, then m = n and therefore A ≅ B.
For ii), let [A] ∈ Br(F). By the Wedderburn-Artin Theorem we have an isomorphism A ≅ Mn(D) for some division algebra D ∈ G(F). Thus, [A] = [D]. To prove uniqueness of such D, suppose that D1, D2 are division algebras with [D1] = [D2]. Then, again by definition of Morita equivalence, Mn(D1) ≅ Mm(D2) for some n, m, and by the uniqueness statement of the Wedderburn-Artin Theorem, D1 ≅ D2.

Corollary 6.3.3.1. If F is algebraically closed, then Br(F ) is trivial.



Proof. Let D be any division algebra over F. Take any d ∈ D. Then, F[d]/F is a well defined algebraic extension of F. But since F is algebraically closed, the only algebraic extension is F itself. Hence, d ∈ F, so D = F. Therefore, every A ∈ G(F) is isomorphic to some Mn(F), so [A] = [F]. Since [F] is the identity in Br(F), we must have Br(F) = {1}.

Let φ : F → E be a field homomorphism. Then φ induces φ* : Br(F) → Br(E) via φ*([A]) = [A ⊗F E]. The correspondence

F ↦ Br(F)        φ ↦ φ*

defines a functor Fields → Ab, the category of abelian groups.

6.4 The Double Centraliser Theorem

Definition 6.4.1. If A is any algebra and X ⊆ A a subset, then the centraliser of X in A is

CA(X) = {y ∈ A : xy = yx for all x ∈ X} ⊇ Z(A)

Remark. If B ⊆ A, then B ⊆ CA(CA(B)).

Definition 6.4.2. Let A and B be R-algebras and M an (A, B)-bimodule. For each element x ∈ A define an endomorphism λx : M → M by u ↦ xu for all u ∈ M. Let λ : A → EndB(M) be the homomorphism defined by x ↦ λx for all x ∈ A. This is called the left regular representation of A. An analogous definition applies for the right regular representation of B.

The term Double Centraliser Theorem can refer to one of many similar results relating subalgebras B of an algebra A to their double centralisers. Subtle changes to any such statement lead to a variety of results in algebra, representation theory, operator theory, functional analysis and theoretical physics. Nevertheless, in this essay we are only interested in applications to algebra and representation theory, for which we just need the statements below. Note that for the rest of this dissertation, whenever the context makes the choice obvious, we will use "Double Centraliser Theorem" to refer to any of the following statements.

Lemma 6.4.1. Let A be an F-algebra, and B ⊆ A a subalgebra of A. Consider A as a right B^opp ⊗F A-module via the homomorphism B^opp ⊗F A → A^opp ⊗F A := A^e. Then, the left regular representation λ induces an isomorphism from CA(B) onto End_{B^opp⊗F A}(A).

Proof. If x ∈ B, y ∈ CA(B) and z, w ∈ A, then

λy(w(x ⊗ z)) = yxwz = xywz = λy(w)(x ⊗ z)

Hence, λ(CA(B)) ⊆ End_{B^opp⊗F A}(A). Conversely, let f ∈ End_{B^opp⊗F A}(A) and set y = f(1). For any x ∈ B,

xy = f(1)(x ⊗ 1) = f(1(x ⊗ 1)) = f(x) = f(1(1 ⊗ x)) = f(1)(1 ⊗ x) = yx

so y ∈ CA(B). Moreover, f(w) = f(1(1 ⊗ w)) = yw = λy(w) for all w ∈ A, so f = λy. Thus End_{B^opp⊗F A}(A) ⊆ λ(CA(B)), and the restriction λ : CA(B) → End_{B^opp⊗F A}(A) is surjective.
Now suppose y1, y2 ∈ CA(B) and λ(y1) = λ(y2). Then λy1(1A) = λy2(1A), implying y1 = y2. Hence λ is injective, and therefore

CA(B) ≅ End_{B^opp⊗F A}(A)

Theorem 6.4.2. Let A ∈ G(F) and let B ⊆ A be a simple subalgebra of A.

i) CA(B) is simple.
ii) (dimF B)(dimF CA(B)) = dimF A
iii) CA(CA(B)) = B
iv) If B is central simple, then so is CA(B). Furthermore, A ≅ B ⊗F CA(B).

Proof. By Lemma 6.2.2, B^opp ⊗F A is simple. Let P be a minimal right ideal of B^opp ⊗F A. By the Wedderburn-Artin Theorem, B^opp ⊗F A ≅ Mn(D), where D is the division algebra End_{B^opp⊗F A}(P). Hence B^opp ⊗F A ≅ ⊕nP and P ≅ ⊕nD. Therefore

dim(B^opp ⊗F A) = (dimF B)(dimF A) = n²(dimF D)        (6.1)

Since A is a finite B^opp ⊗F A-module, we have A ≅ ⊕kP for some natural number k. Hence, by the lemma,

CA(B) ≅ End_{B^opp⊗F A}(A) ≅ End_{B^opp⊗F A}(⊕kP) ≅ Mk(D)

and so CA(B) is a simple algebra. Moreover

dimF A = k(dimF P) = kn(dimF D)        (6.2)

dimF CA(B) = k²(dimF D)        (6.3)

We can therefore use (6.1), (6.2) and (6.3) to eliminate k, n and dimF D:

(dimF B)(dimF CA(B)) = k²n²(dimF D)²/dimF A = k²n²(dimF D)²/(kn(dimF D)) = kn(dimF D) = dimF A

Thus i) and ii) are proved. Now, substituting CA(B) for B in ii),

(dimF CA(B))(dimF CA(CA(B))) = dimF A = (dimF B)(dimF CA(B))

and thus dimF B = dimF CA(CA(B)). Since B ⊆ CA(CA(B)), we must have B = CA(CA(B)), so iii) also holds. Lastly, if B ∈ G(F) then A ≅ B ⊗F CA(B) by Proposition 6.2.4 and ii). Moreover, F = Z(B ⊗F CA(B)) = F ⊗F Z(CA(B)), and so CA(B) ∈ G(F).
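The elimination of k, n and dimF D at the end of the proof is routine but easy to get wrong; the following exact-arithmetic sketch (purely illustrative, not part of the formal development) checks the identity for a range of parameter values:

```python
from fractions import Fraction

# Relations from the proof, for positive integers k, n and d = dim_F D:
#   (6.1)  (dim B)(dim A) = n^2 d    (6.2)  dim A = k n d    (6.3)  dim C_A(B) = k^2 d
# Eliminating k, n and d should give (dim B)(dim C_A(B)) = dim A.
checks = []
for k in range(1, 5):
    for n in range(1, 5):
        for d in (1, 2, 4):                        # e.g. dims of R, C, H over R
            dim_A = k * n * d                      # relation (6.2)
            dim_B = Fraction(n * n * d, dim_A)     # solve (6.1) for dim B
            dim_C = k * k * d                      # relation (6.3)
            checks.append(dim_B * dim_C == dim_A)
all_hold = all(checks)
```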

Theorem 6.4.3. Let V be a finite dimensional vector space over a field F, A a semisimple subalgebra of End(V), and B = EndA(V). Then
i) B is semisimple
ii) A = EndB(V)
iii) As an A ⊗F B-module, there is a decomposition

V ≅ ⊕_i Ni ⊗F HomA(Ni, V)

where the Ni are all the simple modules of A.

This version of the DCT has remarkable consequences within the framework of Representation Theory, and will be one of the basic steps in our discussion of Schur-Weyl Duality in Chapter 8.
Chapter 7

Splitting Fields

7.1 Maximal Subfields of Simple Algebras

A subfield of an F-algebra A is a subalgebra E ⊆ A such that E is a field. In particular, E ⊇ 1A·F, so E is a field extension of F with [E : F] ≤ dimF A. If there is no subfield K ⊆ A such that E ⊊ K, then E is said to be a maximal subfield.

Lemma 7.1.1. If B is an F-algebra with dimF B = k < ∞, and n ∈ ℕ with k | n, then B is isomorphic to some subalgebra of Mn(F).

Lemma 7.1.2. Let D be an F-division algebra. If α ∈ D, then there exists a subfield E ⊆ D such that α ∈ E. In particular, if dimF D < ∞ then F[α] = {φ(α) : φ(X) ∈ F[X]} is a subfield of D.

Proof. Firstly, let dimF D < ∞. Since F ⊆ Z(D), the set F[α] ⊆ D is a commutative subalgebra of D. Let θ : F[X] → F[α] be the homomorphism defined by φ ↦ φ(α). Then θ is surjective, and therefore F[X]/ker θ ≅ F[α]. Note that since by assumption dimF D < ∞, we have ker θ ≠ 0. Since D has no zero divisors, it follows that F[α] is an integral domain, and ker θ is a prime ideal. But F[X] is a principal ideal domain, so ker θ is also a maximal ideal, and therefore F[X]/ker θ is a field. It follows that F[α] is a field.
If dimF D = ∞, then the set {φ(α)ψ(α)⁻¹ : φ, ψ ∈ F[X], ψ(α) ≠ 0} is a subfield of D that contains α.

Remark. If dimF D < ∞, we could have also used the following approach: as before, let θ : F[X] → F[α] be the homomorphism defined by φ ↦ φ(α). Let mα(X) be the minimal polynomial of α. Then ker θ = (mα(X)), the principal ideal generated by mα(X) in F[X]. By definition of the minimal polynomial, mα(X) is irreducible, so the principal ideal it generates is a maximal ideal (in a principal ideal domain).

Corollary 7.1.2.1. Let D ∈ G(F), i.e. D is a central simple F-division algebra. Then every subalgebra B ⊆ D is a division algebra.

Definition 7.1.1. Let F be a field and n a positive integer. We say F is n-closed if there exists no proper extension E/F such that [E : F] = n.

Every field is 1-closed. If F is algebraically closed, then F is n-closed for every n ∈ ℕ.

Lemma 7.1.3. Let A be a simple, finite dimensional F-algebra. Suppose F is also a maximal subfield of A. Then A ≅ Mn(F) and F is n-closed.


Proof. If A is simple and finite dimensional, then by the Wedderburn Structure Theorem A ≅ Mn(D) for some division algebra D. In fact, D = F, since otherwise by Lemma 7.1.2 there is a subfield E ⊆ D with F ⊊ E, contradicting the maximality of F. Hence A ≅ Mn(F) and dimF A = n².
Now, suppose F is not n-closed. Then there is a proper extension E/F such that [E : F] | n. In such a case, Mn(F) contains a subfield isomorphic to E by Lemma 7.1.1, also contradicting the maximality of F.

Theorem 7.1.4. Let A ∈ G(F). Then dimF A = m² for some m ∈ ℕ. In particular, for any subfield E ⊆ A, [E : F] divides m.

We will call the positive integer m the degree of A, and denote it by Deg A. In other words, Deg A = √(dimF A). It follows that if E ⊆ A is a subfield (and as always [E : F] ≤ Deg A) and the equality [E : F] = Deg A holds, then E is a maximal subfield. Note that the converse of this statement is not true in general: if F is n-closed, then F is maximal in Mn(F), but Deg Mn(F) = n ≠ 1 = [F : F].
If E ⊆ A is a subfield such that [E : F] = Deg A, we say that E is strictly maximal.

Lemma 7.1.5. A subfield E ⊆ A is strictly maximal if and only if CA(E) = E. If A is a division algebra, every maximal subfield of A is strictly maximal.

Proposition 7.1.6. Let A be a central simple F-algebra and E ⊆ A a subfield. The following are equivalent.
i) [A : F] = [E : F]²
ii) E is its own centraliser
iii) E is a maximal commutative subring of A.

Proof. The statements i) and ii) are equivalent by the Double Centraliser Theorem. The statements i) and iii) are equivalent by the remark following the previous theorem.

Corollary 7.1.6.1. Every maximal subfield E ⊆ A satisfies [A : F] = [E : F]².

Theorem 7.1.7. Let F be a field with char(F) ≠ 2. If A ∈ G(F) has degree 2, then A is isomorphic to a quaternion algebra.

Proof. Let E ⊆ A be a maximal subfield. If E = F, then A ≅ M2(F) ≅ (1,1/F) by Lemma 7.1.3. If E ≠ F, then E/F is a quadratic extension, and since char(F) ≠ 2 we have E = F(α) where α ∉ F and α² = a ∈ F^×. The map α ↦ −α extends to an F-automorphism of E, so by the Noether-Skolem Theorem there is β ∈ A^× such that βαβ⁻¹ = −α. Hence β ∈ A ∖ E, and therefore dimF(F ⊕ αF ⊕ βF ⊕ αβF) = 4 = dimF A. Note that βα = −αβ, so β²α = αβ² and β² commutes with both α and β. This implies β² ∈ Z(A) = F. Call β² = b ∈ F^×. Then the map 1 ↦ 1, α ↦ i, β ↦ j, αβ ↦ k extends to an isomorphism A ≅ (a,b/F).

Corollary 7.1.7.1 (Frobenius). The only finite dimensional, noncommutative R-division algebra is H = (−1,−1/R).

Proof. Let D be as stated. The only nontrivial algebraic extension of R is C, so either Z(D) = R or Z(D) = C. If Z(D) = C, then D ∈ G(C); but Br(C) = {1} by Corollary 6.3.3.1, so the only C-division algebra is C itself, contradicting noncommutativity. Hence D ∈ G(R). Now let E ⊆ D be a maximal subfield of D. Then we must have Deg D = [E : R] = [C : R] = 2. Thus, D must be isomorphic to the Hamilton quaternions H by Theorem 7.1.7.

As mentioned in the proof of the corollary, the centre of C as an R-division algebra is C itself. Hence, C ∉ G(R). Therefore, using Frobenius' Theorem and Proposition 6.3.3, the Brauer group of the real numbers must be Br(R) = {[R], [H]} ≅ Z/2Z.
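The defining relations of H can be checked concretely in the standard embedding of the quaternions into M2(C); the sketch below is illustrative only, and the particular matrices chosen for i and j are an assumption of this example rather than notation from the text:

```python
import numpy as np

# A standard embedding of the Hamilton quaternions into M_2(C).
one = np.eye(2, dtype=complex)
i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, 1], [-1, 0]], dtype=complex)
k = i @ j

# Defining relations of (-1,-1 / R): i^2 = j^2 = -1 and ij = -ji.
rel_i = np.allclose(i @ i, -one)
rel_j = np.allclose(j @ j, -one)
anti = np.allclose(i @ j, -(j @ i))

# A nonzero quaternion q = a + bi + cj + dk has det(q) = a^2 + b^2 + c^2 + d^2 > 0
# in this embedding, so q is invertible: H is a division algebra.
q = 1 * one + 2 * i + 3 * j + 4 * k
invertible = abs(np.linalg.det(q)) > 1e-9
```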

7.2 Splitting Fields

Definition 7.2.1. Let F be a field and A ∈ G(F). Let E/F be a field extension. We say E is a splitting field for A if A ⊗F E ≅ Mn(E) as E-algebras, where n = Deg A.

Equivalently, E is a splitting field for A if the class [A] ∈ Br(F) maps to the identity under the map ι* : Br(F) → Br(E) induced by the inclusion F ↪ E. For simplicity, we shall say E splits A.

We will see how splitting fields play a key role in the study of central simple algebras and their Brauer groups, as illustrated by the following theorem [Ami55].

Theorem 7.2.1 (Amitsur). If A1 and A2 are central simple algebras with the same splitting fields, then [A1] and [A2] generate the same subgroup of Br(F).

Let us now give alternative characterisations of a splitting field of an algebra, which we shall use for the rest of this chapter.

Proposition 7.2.2. Let A ∈ G(F) have degree n, and let E be a field extension of F. The following statements are equivalent.
i) E splits A
ii) There is an F-algebra homomorphism φ : A → Mn(E).
iii) There is an F-algebra homomorphism φ : A → Mn(E) such that φ(A) spans Mn(E) as an E-vector space.

Indeed, if we find a homomorphism φ : A → Mm(E) for any m ∈ ℕ (not necessarily the degree of A), then E splits A. Now we proceed to illustrate the relation between extensions of fields and splitting fields of central simple algebras via the following theorem.

Theorem 7.2.3. Let A be a central simple F-algebra and E/F a field extension. Then E splits A if and only if there exists a finite dimensional central simple algebra B, Morita equivalent to A, such that E ⊆ B and [B : F] = [E : F]².

Therefore, combining this result with Corollary 7.1.6.1, we find that if A is a finite dimensional central simple F-algebra, then every maximal subfield of A is a splitting field.

Definition 7.2.2. Let E/F be a field extension and ι : F ↪ E the inclusion map. Then ι induces a homomorphism ι* : Br(F) → Br(E), and ker ι* =: Br(E/F) is called the relative Brauer group of the field extension E/F.

Therefore, if A ∈ G(F) and E/F is a field extension, E splits A if and only if [A] ∈ Br(E/F). Now, if F ⊆ E ⊆ K is a chain of field extensions with corresponding inclusions ι1 : F ↪ E and ι2 : E ↪ K, then the composition ι2 ∘ ι1 is the inclusion map F ↪ K. These maps induce homomorphisms of the corresponding Brauer groups

ι1* : Br(F) → Br(E)        ι2* : Br(E) → Br(K)

with Br(E/F) = ker ι1* and Br(K/F) = ker(ι2 ∘ ι1)* = ker(ι2* ∘ ι1*).

Evidently Br(E/F) = ker ι1* ⊆ ker(ι2* ∘ ι1*) = Br(K/F). Therefore, we obtain the following useful consequence.

Lemma 7.2.4. If E splits A ∈ G(F), then every field extension of E splits A.

7.3 Algebraic Splitting Fields

Lemma 7.3.1. Let A ∈ G(F). If E is a subfield of A, then CA(E) ∈ G(E) and CA(E) ∼ AE := A ⊗F E as E-algebras.

Proof. Let E be a subfield of A. As in Lemma 6.4.1, we can view A as a right AE-module, and by the same lemma CA(E) ≅ End_{AE}(A). Since AE is simple, there is a unique isomorphism class of simple AE-modules. Let P be a representative of this isomorphism class and write D for the division algebra End_{AE}(P). Then there exist positive integers n, m such that

CA(E) ≅ End_{AE}(A) ≅ End_{AE}(⊕mP) ≅ Mm(D) ∼ Mn(D) ≅ End_{AE}(⊕nP) ≅ End_{AE}(AE) ≅ AE

In particular, CA(E) ≅ Mm(D) is simple and Morita equivalent to AE.
Moreover, since every element of CA(E) commutes with E by definition, we have E ⊆ Z(CA(E)). Conversely, Z(CA(E)) ⊆ CA(CA(E)) = E by the Double Centraliser Theorem. Hence Z(CA(E)) = E, and CA(E) ∈ G(E).

Proposition 7.3.2. Let A ∈ G(F). For a subfield E ⊆ A, the following conditions are equivalent.
i) E is a splitting field for A
ii) CA(E) ≅ Mk(E), where k[E : F] = Deg(A).
iii) A = B ⊗F C, where B ∈ G(F), C ≅ Mk(F), and E is a strictly maximal subfield of B.

This proposition follows from the Lemma, the DCT and Lemma 7.1.5. Furthermore, these equivalent statements give an intuition for the remark made after Theorem 7.2.3: for any F-algebra A, every maximal subfield splits A. For an alternative proof, see [Pie82] or [dJea].
Now we can proceed to prove the main result of this section.

Theorem 7.3.3. Let A ∈ G(F). Let E/F be a field extension. The following statements are equivalent.
i) E is a splitting field for A.
ii) There is an algebra B ∈ G(F) such that B ∼ A and E is a strictly maximal subfield of B.
iii) There is an algebra B ∈ G(F) such that B ∼ A and E is a maximal subfield of B.

Proof. The implication ii) ⇒ iii) is trivial. Now assume E splits A. By Lemma 7.1.1 we can assume that E is a subfield of Mn(F), where n = [E : F]. By the Proposition, we have A = B ⊗F Mk(F), where B ∈ G(F) and E is a strictly maximal subfield of B. But B ⊗F Mk(F) ∼ B, so i) ⇒ ii). The implication iii) ⇒ i) follows from the Proposition and the definition of the relative Brauer group: an extension E/F is a splitting field for A if and only if [A] ∈ Br(E/F).

7.4 Splitting Fields and Galois Extensions

We first need to introduce some basic notions of Galois Theory.



Definition 7.4.1. Let F be a field and f ∈ F[X]. Let E/F be a field extension. We say E is a splitting field for the polynomial f if f factors completely into linear factors in E[X] and does not factor completely into linear factors over any proper subfield of E containing F.

Definition 7.4.2. Let E/F be a field extension. We say that E/F is a normal extension if for every α ∈ E, with minimal polynomial mα(x) ∈ F[X], every root of mα(x) lies in E.

Definition 7.4.3. Let E/F be a field extension. An element α ∈ E with minimal polynomial mα(x) ∈ F[X] is called separable if mα(x) has distinct roots in its splitting field; E/F is separable if every α ∈ E is separable.
We say an extension E/F is purely inseparable if for all α ∈ E there exists a power q = (char F)^r (r a positive integer) such that α^q ∈ F.

Note that individual elements of a field extension can be separable, or have minimal polynomials splitting in E; the extension itself is called separable and/or normal only if this holds for all of its elements.
A field extension that is both separable and normal is called a Galois extension. Note that we will assume that all field extensions with which we are working are finite dimensional.

Lemma 7.4.1. Let D ∈ G(F) be a division algebra. If every subfield of D is purely inseparable over F, then D = F.

This lemma is very useful in showing how Morita equivalent F-algebras can be used to construct Galois extensions of F. We will need one last tool.

Proposition 7.4.2. If D ∈ G(F) is a division algebra and K is a maximal subfield of D such that K/F is separable, then K is strictly maximal.

Proof. We know by Lemma 7.3.1 and Corollary 7.1.2.1 that CD(K) ∈ G(K) is a division algebra. Since K is maximal in D and K/F is separable (and a separable extension of a separable extension is again separable), every subfield of CD(K) is purely inseparable over K, so by the Lemma CD(K) = K. Therefore K is strictly maximal in D by Lemma 7.1.5.

Theorem 7.4.3. Let A ∈ G(F). Then there is a B ∈ G(F) and a strictly maximal subfield E ⊆ B such that B ∼ A under Morita equivalence and E/F is a Galois extension.

Proof. Let A ∼ D ∈ G(F), with D a division algebra. By the Proposition, D has a strictly maximal subfield K such that K/F is separable. Let E be a Galois extension of F containing K (for instance, the Galois closure of K/F). Since K splits D, E splits D as well by Lemma 7.2.4. Then, by Theorem 7.3.3, E must be a maximal subfield of some B ∈ G(F) with B ∼ D ∼ A.

Corollary 7.4.3.1. Let F be a field. Then, the Brauer group Br(F ) is equal to the union of all the subgroups
Br(E/F ) such that E/F is a Galois extension.
Part II

Schur-Weyl Duality

Chapter 8

Schur-Weyl Duality

For the rest of this dissertation, we will focus on Representation Theory. In particular, we will embrace the task of proving the Schur-Weyl Duality Theorem, one of the fundamental theorems in Representation Theory and Invariant Theory. This remarkable result connects the representation theories of the symmetric group Sn and the general linear group GL(V) of a complex vector space V via their images in the tensor space V^⊗n, which are mutually commuting. In particular, Schur-Weyl duality gives a decomposition of V^⊗n under the action of Sn × GL(V).

Indeed, the same statement is true for representations of the Lie algebra gl(V): we can decompose V^⊗n into representations of Sn and representations of gl(V) in the same way we do with GL(V). Moreover, we will see that the representations of Sn, GL(V) and gl(V) appearing in such decompositions are actually irreducible.
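The mutual commutation of the two actions is easy to verify numerically in the smallest nontrivial case. The following sketch (illustrative only, with dim V = 2 and n = 2) checks that the flip of tensor factors commutes with the diagonal action g ⊗ g of an arbitrary g ∈ GL(V):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2                                  # V = C^2, tensor square V (x) V

# The flip P swapping the two tensor factors, as a d^2 x d^2 matrix:
# it sends e_a (x) e_b to e_b (x) e_a.
P = np.zeros((d * d, d * d))
for a in range(d):
    for b in range(d):
        P[b * d + a, a * d + b] = 1

# A generic invertible g in GL(V), acting diagonally as g (x) g.
g = rng.standard_normal((d, d)) + np.eye(d)
gg = np.kron(g, g)

# The S_2 action and the GL(V) action commute on V (x) V.
commute = np.allclose(P @ gg, gg @ P)
```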

This phenomenon was discovered in the late 1920s by Issai Schur, one of the pioneers of the representation theory of Lie algebras. It was later popularised by the German mathematical physicist Hermann Weyl. As a matter of fact, Schur-Weyl Duality has many applications in particle physics and quantum information theory, but such topics do not fall within our matters of concern.

This chapter will start with a general introduction to the representation theory of finite groups, including some results in Character Theory. We will then proceed to develop the necessary tools to prove Schur-Weyl Duality from Section 8.3 onward. In the last section, we will give an insight into the theory of Schur functors.

8.1 Representations of Finite Groups

Definition 8.1.1. A representation of a finite group G on a finite dimensional vector space V is a homomorphism ρ : G → GL(V).

It is common to simply say that V is a representation of G. Moreover, for g ∈ G and v ∈ V, it is also frequent to write gv instead of ρ(g)v.
For future reference, recall that a projection operator is a linear map π : V → V such that π ∘ π = π.

Definition 8.1.2. A subspace W ⊆ V is said to be a subrepresentation or invariant subspace of V if g.w ∈ W for all w ∈ W and for all g ∈ G.
A representation is said to be irreducible if the only subrepresentations of V are {0} and V itself. A representation is called completely reducible if V = ⊕_{i=1}^{m} Wi for subrepresentations Wi, such that each Wi is irreducible.

Theorem 8.1.1. The number of conjugacy classes of a finite group G equals the number of irreducible repre-
sentations of G.

Lemma 8.1.2. Let V be a representation of G and W a subrepresentation. Let π : V → V be a projection operator with π(V) = W. Suppose that π has the property π(g.v) = g.π(v) for all g ∈ G and v ∈ V. Then W′ := ker π is an invariant complement of W.

Any map φ : V → V such that φ(g.v) = g.φ(v) is called G-equivariant.

Theorem 8.1.3 (Maschke). Let G be a finite group and K a field such that char(K) does not divide #G. Let V be a finite dimensional representation of G over K. Then any invariant subspace W has an invariant complement W′, i.e. there exists a subrepresentation W′ such that V = W ⊕ W′.

Proof. Take a basis of W and extend it to a basis of V. Construct a linear map π′ : V → V which acts as the identity on W and sends the other basis elements of V to zero. Now define

π(v) = (1/#G) Σ_{g∈G} g.π′(g⁻¹v)

This can be checked to be an equivariant projection operator. Moreover, for w ∈ W, each g⁻¹w lies in W, so

π(w) = (1/#G) Σ_{g∈G} g.π′(g⁻¹w) = (1/#G) Σ_{g∈G} g.(g⁻¹w) = (1/#G)(#G)w = w

so π still acts as the identity on W. Therefore, by the Lemma, W′ := ker π is an invariant complement to W.
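The averaging trick in this proof can be watched in action for G = S2 acting on C² by swapping coordinates, with W spanned by e1 + e2. The particular non-equivariant projection π′ below is an assumption of this illustration (one choice of extended basis):

```python
import numpy as np

# G = S_2 acting on C^2 by swapping coordinates; W = span{e1 + e2}.
e = np.eye(2)
s = np.array([[0., 1.], [1., 0.]])
G = [e, s]

# A non-equivariant projection pi0 onto W: identity on e1 + e2, kills e2.
pi0 = np.array([[1., 0.], [1., 0.]])

# Average over the group, as in the proof: pi = (1/#G) sum_g g pi0 g^{-1}.
pi = sum(g @ pi0 @ np.linalg.inv(g) for g in G) / len(G)

is_projection = np.allclose(pi @ pi, pi)
equivariant = all(np.allclose(g @ pi, pi @ g) for g in G)
fixes_W = np.allclose(pi @ np.array([1., 1.]), np.array([1., 1.]))
```

Here the averaged map is the familiar orthogonal projection onto the diagonal, and its kernel span{e1 − e2} is the promised invariant complement.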

Now we proceed to reformulate a familiar result in terms of representations of finite groups.

Lemma 8.1.4 (Representation Theoretic Schur's Lemma). Let f : V → W be a nonzero G-map. Then
i) If V is irreducible, then f is injective.
ii) If W is irreducible, then f is surjective.
iii) If both V and W are irreducible, then V ≅ W and f = tI for some t ∈ C.

It follows from Schur's Lemma that any representation V of a finite group G can be uniquely written as

V ≅ V1^⊕n1 ⊕ ··· ⊕ Vk^⊕nk

where the Vi are distinct irreducible representations of G.

Definition 8.1.3. The group algebra K[G] of a group G is the set of all formal linear combinations of elements of G with scalars in a field K. Indeed, K[G] is a K-algebra.

In particular, any representation of G can be extended to a representation of C[G], and any representation of C[G] can be restricted to a representation of G. However, the structures of G and C[G] are not identical algebraically, although one can find substantial similarities, as illustrated by the table below.

G representation C[G]-module
Subrepresentation Submodule
Irreducible representation Simple module
G-map C[G]-homomorphism

Moreover, for any finite group G, the C-algebra C[G] is semisimple. This will follow from the Duality Theorem later on, but a full proof without using Schur-Weyl Duality can be found in [Yaf16]. As a direct consequence of this:

Proposition 8.1.5. As C-algebras, there is an isomorphism C[G] ≅ ⊕_i End(Vi), where the Vi are all the distinct irreducible representations of G.

Proof. Using the fact that C[G] is semisimple, this follows directly from the Wedderburn-Artin Theorem.

8.2 Character Theory of Finite Groups

Definition 8.2.1. The character of a representation V of G is the mapping χV : G → C defined by χV(g) = tr(ρV(g)).

One can deduce identities for these functions, such as

χ_{V⊕W} = χV + χW        χ_{V⊗W} = χV χW        χ_{V*} = χ̄V

where χ̄ denotes the complex conjugate of χ.
Now, a function α : G → C such that α(hgh⁻¹) = α(g) for all g, h ∈ G is called a class function. In particular, χ(g) = χ(hgh⁻¹), and therefore χ is a class function. A property of such functions that we will use later is that, if α is a class function, then the map f = (1/#G) Σ_{g∈G} α(g)g is in EndG(V), i.e. it is a G-map.

Denote by F(G) the set of all class functions G → C, endowed with the inner product

⟨α, β⟩ = (1/#G) Σ_{g∈G} α(g) β̄(g)

Lemma 8.2.1. Let G be a finite group and suppose V and W are irreducible representations of G, with characters χV and χW. Then χV(g) χ̄W(g) = χ_{Hom(V,W)}(g).

Theorem 8.2.2. The characters χV of the irreducible representations of G form an orthonormal basis of F(G) with respect to the inner product defined before.

Proof. Let V and W be irreducible representations of a finite group G. By the Lemma, we have χV(g) χ̄W(g) = χ_{Hom(V,W)}(g). Hence

⟨χV, χW⟩ = (1/#G) Σ_{g∈G} χV(g) χ̄W(g) = (1/#G) Σ_{g∈G} χ_{Hom(V,W)}(g) = tr(π|_{Hom(V,W)})

where π is the projection Hom(V, W) → HomG(V, W) defined by φ ↦ (1/#G) Σ_{g∈G} g.φ.

Claim. dim(HomG(V, W)) = ⟨χV, χW⟩

Proof of Claim. Consider the projection π : Hom(V, W) → HomG(V, W). By definition π ∘ π = π, so its eigenvalues are 0 and 1. Furthermore, we can decompose Hom(V, W) into eigenspaces

Hom(V, W) ≅ HomG(V, W) ⊕ ker π

Hence tr(π|_{Hom(V,W)}) is the number of eigenvalues equal to 1 (the zero eigenvalues corresponding to ker π), which is dim HomG(V, W). Therefore

⟨χV, χW⟩ = (1/#G) Σ_{g∈G} χ_{Hom(V,W)}(g) = dim HomG(V, W)

Therefore, by Schur's Lemma

⟨χV, χW⟩ = tr(π|_{Hom(V,W)}) = dim HomG(V, W) = 1 if V ≅ W, and 0 if V ≇ W.

So characters are orthonormal, and thus linearly independent. Now we need to prove that they indeed form a basis. It suffices to show that if α ∈ F(G) and ⟨α, χV⟩ = 0 for every irreducible V, then α = 0. Since χ̄V = χ_{V*} and V* is again irreducible, this amounts to showing that

0 = (1/#G) Σ_{g∈G} α(g) χV(g) = (1/#G) Σ_{g∈G} α(g) tr(g|V) = tr(f|V)

for every irreducible V forces α = 0, where f = (1/#G) Σ_{g∈G} α(g)g is in EndG(V) as defined before. Thus, by Schur's Lemma, f = tI for some t ∈ C, and since tr(f|V) = 0, we must have t = 0. Hence f acts as zero on every irreducible representation of G, so f = 0 in C[G], and therefore α = 0.

Corollary 8.2.2.1. χV = χW if and only if V ≅ W. Furthermore, V is irreducible if and only if ⟨χV, χV⟩ = 1.

Corollary 8.2.2.2. The number of irreducible representations of a group G equals the number of conjugacy classes of G.

Proof. We know that the irreducible characters of G form a basis for the space of class functions F(G). Thus, it suffices to show that the dimension of F(G) equals the number of conjugacy classes of G. Let h1, ..., hN be representatives of the conjugacy classes of G, and write Ci = {g hi g⁻¹ : g ∈ G} for the conjugacy class of hi. Then the maps

f ↦ (f(h1), ..., f(hN))        (z1, ..., zN) ↦ the class function f with f(Ci) = {zi}

are mutually inverse linear isomorphisms between F(G) and C^N. In particular, dim F(G) = dim C^N = N, which is the number of conjugacy classes of G.
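These orthogonality statements can be checked concretely on the character table of S3 (trivial, sign and standard characters against the classes of the identity, the transpositions and the 3-cycles). The table values below are standard facts supplied by this illustration, not taken from the text:

```python
import numpy as np

# Character table of S_3: rows are the irreducible characters (trivial, sign,
# standard); columns are the conjugacy classes {e}, transpositions, 3-cycles.
sizes = np.array([1, 3, 2])               # class sizes, summing to #G = 6
table = np.array([[1,  1,  1],
                  [1, -1,  1],
                  [2,  0, -1]], dtype=float)

def inner(a, b):
    # <a, b> = (1/#G) sum_g a(g) conj(b(g)), grouped by conjugacy class.
    return (sizes * a * np.conj(b)).sum() / sizes.sum()

gram = np.array([[inner(a, b) for b in table] for a in table])
orthonormal = np.allclose(gram, np.eye(3))

# The squares of the dimensions (values at the identity) sum to #G,
# matching the decomposition of C[G] in Proposition 8.1.5.
dims_sq = int((table[:, 0] ** 2).sum())
```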

The theorem and both corollaries are relevant results in Representation Theory; in particular, Corollary 8.2.2.2 will be fundamental for our proof of Specht's Theorem in the next section.

8.3 Irreducible Representations of Sn

8.3.1 The Group Sn

Recall that the symmetric group Sn is the group of permutations of n elements, i.e. the set of all bijections {1, ..., n} → {1, ..., n}. Every α ∈ Sn can be written as a product of disjoint cycles α = (m1, ..., mr)(mr+1, ..., ms) ··· where mi ↦ mi+1 under α. If λ1, ..., λl are the lengths of these cycles, ordered so that λ1 ≥ ··· ≥ λl, then the sequence λ = (λ1, ..., λl) is called the cycle type of α. In particular, Σ_{i=1}^{l} λi = n, so λ defines a partition of n.
Indeed, λ determines the conjugacy classes in Sn.
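The passage from a permutation to its cycle type is purely mechanical; a small sketch (using 0-based permutations given as tuples of images, an implementation choice of this example):

```python
def cycle_type(perm):
    """Cycle type of a permutation of {0, ..., n-1} given as a tuple of images,
    returned as a weakly decreasing tuple of cycle lengths (a partition of n)."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        length, current = 0, start
        while current not in seen:        # walk the cycle containing `start`
            seen.add(current)
            current = perm[current]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

# The permutation (0 1 2 3)(4 5)(6)(7) in S_8 has cycle type (4, 2, 1, 1),
# the partition used in the Young diagram example below.
example = cycle_type((1, 2, 3, 0, 5, 4, 6, 7))
```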

Theorem 8.3.1. Let σ ∈ Sn have cycle type (λ1, ..., λl). If a distinct τ ∈ Sn has the same cycle type, then σ is conjugate to τ.

The main subject of study in this section is representations of Sn, so before proceeding to prove further results on this matter, let us recall a few examples.

Example 8.3.1. Let n be a positive integer and let {e1, ..., en} be a basis of Cⁿ. Then there is an action of Sn on Cⁿ

τ · (α1 e1 + ··· + αn en) = α1 eτ(1) + ··· + αn eτ(n)

for τ ∈ Sn and α ∈ Cⁿ. Essentially, this action permutes the basis vectors, defining an automorphism of Cⁿ. This is called the permutation representation of Sn. It has an irreducible subrepresentation spanned by v = e1 + ··· + en. By Maschke's theorem this has an invariant complement, namely W = {(x1, ..., xn) : x1 + ··· + xn = 0}, which is also an irreducible subrepresentation. This W is named the standard representation of Sn.
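Both invariant subspaces of the permutation representation can be checked directly for n = 3; the sketch below (illustrative only) builds all six permutation matrices and verifies the invariance claims:

```python
import numpy as np
from itertools import permutations

n = 3
# Permutation matrices for all of S_3 acting on C^3 by permuting basis vectors.
mats = []
for p in permutations(range(n)):
    M = np.zeros((n, n))
    for idx, tgt in enumerate(p):
        M[tgt, idx] = 1                 # e_idx -> e_{p(idx)}
    mats.append(M)

v = np.ones(n)                          # spans the trivial subrepresentation
fixes_v = all(np.allclose(M @ v, v) for M in mats)

# W = {x : x_1 + ... + x_n = 0} is invariant: permuting coordinates
# preserves the coordinate sum.
w = np.array([1., -1., 0.])
w_stays_in_W = all(abs((M @ w).sum()) < 1e-12 for M in mats)
```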

Example 8.3.2. Consider the action of Sn on C defined by τ · v = v if τ is even and τ · v = −v if τ is odd. This extends to a linear representation, called the alternating representation.

8.3.2 Specht Modules

Let λ = (λ1, ..., λl) be the cycle type of some α ∈ Sn, which is the same as a partition of n. We can visualise λ through a diagram made of n cells in tabular form, arranged so that the ith row is composed of λi cells. For instance, if α ∈ S8 has cycle type (4, 2, 1, 1) (which is a partition 4 + 2 + 1 + 1 = 8), then the diagram has a first row of four cells, a second row of two cells, and two final rows of a single cell each.

This is called the Young diagram of the partition λ. We could also assign an integer in {1, ..., n} to each one of the cells, as illustrated below. Such a labelling is called a tableau (plural: tableaux).

4 2 6 1
8 3
7
5

Definition 8.3.1. A standard Young tableau is a Young tableau whose values strictly increase towards the right along each row, and strictly increase downwards along each column.

All the Young diagrams below are equipped with a standard Young tableau:

1 2 3 4    1 2 5    1    1 2 5 6
5 6 7      3 4      2    3 4
8                   3    7 8

Now we can define an equivalence relation on the different tableaux of a Young diagram of shape λ. If T and T′ are two different tableaux of λ, we say they are equivalent if corresponding rows contain the same set of numbers, i.e.

6 7 8 3    3 6 8 7
2 1        1 2
5 4        5 4

Definition 8.3.2. Fix a partition λ of n (in other words, a cycle type of an element of Sₙ). The set of Young tableaux on the Young diagram of λ up to equivalence is a basis for a complex vector space M^λ, which is called the tabloid representation of Sₙ. The action of σ ∈ Sₙ on M^λ is defined by [T] ↦ [σ · T], i.e. permuting the labels.

Remark. Each equivalence class has a unique representative in which the entries in each row increase from left
to right.

Example 8.3.3. For instance, consider the partition λ = (2, 2) of n = 4. Then the following Young diagrams with their respective tableaux form an equivalence class, and therefore a basis vector of M^(2,2):

[T] = { 1 2    2 1    1 2    2 1 }
      { 3 4    3 4    4 3    4 3 }

Indeed, dim M^(2,2) = 6, since there are 4! = 24 possible tableaux for λ = (2, 2), gathered into six different equivalence classes of cardinality 4. The following are representatives of each of the equivalence classes which define the basis vectors of M^(2,2):

[T₁] = 1 2    [T₂] = 1 3    [T₃] = 1 4
       3 4           2 4           2 3

[T₄] = 2 3    [T₅] = 2 4    [T₆] = 3 4
       1 4           1 3           1 2

Example 8.3.4. Consider the partition λ = (3, 2) of n = 5. The Young diagram for this partition with a tableau T₁ looks like this

1 2 3
4 5

The cardinality of the equivalence class [T₁] is 12, because there are 3! = 6 ways to shuffle the values {1, 2, 3} in the upper row and 2! = 2 ways to shuffle the values {4, 5} in the bottom row. Clearly this is true in general, i.e. if we assign a tableau Tᵢ

i j k
l m

to the Young diagram of λ, we have 3! = 6 ways to shuffle {i, j, k} and 2! = 2 ways to shuffle {l, m}, giving a total of 12 different tableaux in which the set of values assigned to each row is invariant. Hence, there are 5! = 120 possible tableaux for λ = (3, 2), gathered into equivalence classes of size 12, and so dim M^(3,2) = 10. An example of an equivalence class in M^(3,2) would be
       { 1 2 3    2 1 3    2 3 1    1 3 2
       { 4 5      4 5      4 5      4 5

[T₁] = { 3 2 1    3 1 2    1 2 3    2 1 3
       { 4 5      4 5      5 4      5 4

       { 2 3 1    1 3 2    3 2 1    3 1 2 }
       { 5 4      5 4      5 4      5 4   }

And a list of representatives of each one of the 10 equivalence classes is

[T₁] = 1 2 3    [T₂] = 1 2 4    [T₃] = 1 2 5    [T₄] = 1 4 3
       4 5             3 5             3 4             2 5

[T₅] = 1 5 3    [T₆] = 4 2 3    [T₇] = 5 2 3
       4 2             1 5             1 4

[T₈] = 4 5 3    [T₉] = 4 2 5    [T₁₀] = 4 5 1
       1 2             1 3              2 3

where each one of the [Tᵢ] is a basis vector for the 10-dimensional complex vector space M^(3,2).
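The counting in these two examples generalises: the equivalence classes have cardinality λ₁!···λ_l!, so dim M^λ = n!/(λ₁!···λ_l!). A short Python sketch (function names ours) confirms this by brute force:

```python
from itertools import permutations
from math import factorial

def tabloid_dimension(shape):
    """dim M^lambda = n! / (lambda_1! * ... * lambda_l!)."""
    d = factorial(sum(shape))
    for part in shape:
        d //= factorial(part)
    return d

def tabloid(shape, filling):
    """Rows of the tableau filling the shape, remembered only as sets."""
    rows, pos = [], 0
    for part in shape:
        rows.append(frozenset(filling[pos:pos + part]))
        pos += part
    return tuple(rows)

shape = (3, 2)
classes = {tabloid(shape, p) for p in permutations(range(1, 6))}
print(len(classes), tabloid_dimension(shape))   # 10 10
```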

Definition 8.3.3. Let λ be a partition of n. Fix a Young tableau (λ, T). We define the following subgroups of Sₙ:

P(λ, T) = {σ ∈ Sₙ : σ preserves each row of (λ, T)}
Q(λ, T) = {σ ∈ Sₙ : σ preserves each column of (λ, T)}

Note that if T and T′ are equivalent, then P(λ, T) = P(λ, T′).

Remark. It is clear that P(λ, T) and Q(λ, T) are indeed subgroups of Sₙ.

We will associate to each of these subgroups an element of the group algebra C[Sₙ]:

a(λ, T) = Σ_{σ ∈ P(λ,T)} σ        b(λ, T) = Σ_{σ ∈ Q(λ,T)} sign(σ) σ

where sign(σ) gives the parity of the permutation σ.

Definition 8.3.4. The Young symmetriser of the partition λ is c(λ, T) := a(λ, T)b(λ, T) ∈ C[Sₙ].
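The symmetriser can be built directly in C[Sₙ] as a dictionary from permutations to coefficients. The sketch below (helper names ours; permutations are 0-indexed tuples composed left-to-right, so individual terms may differ from a right-to-left convention by inversion) constructs c(λ, T) for the tableau 1 2 / 3 of λ = (2, 1):

```python
from itertools import permutations

def compose(p, q):
    """Apply p first, then q."""
    return tuple(q[p[x]] for x in range(len(p)))

def sign(p):
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def block_group(blocks, n):
    """All permutations of {0..n-1} preserving each block setwise."""
    return [p for p in permutations(range(n))
            if all({p[x] for x in b} == set(b) for b in blocks)]

def young_symmetriser(rows, cols, n):
    """c = a * b in C[S_n], stored as {permutation: coefficient}."""
    c = {}
    for p in block_group(rows, n):            # a(lambda, T): sum over P
        for q in block_group(cols, n):        # b(lambda, T): signed sum over Q
            r = compose(p, q)
            c[r] = c.get(r, 0) + sign(q)
    return c

# tableau 1 2 / 3: rows {1,2},{3} and columns {1,3},{2}, written 0-indexed
c = young_symmetriser([{0, 1}, {2}], [{0, 2}, {1}], 3)
print({k: v for k, v in c.items() if v})      # four terms, coefficients +-1
```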

Lemma 8.3.2. σ · c(λ, T) = c(λ, σ · T) for all σ ∈ Sₙ.

Sketch of proof: a permutation q ∈ Q(λ, T) that preserves the columns of T corresponds to the permutation q′ = σqσ⁻¹ ∈ Q(λ, σ · T) that preserves the columns of σ · T, and likewise for row-preserving permutations in P(λ, T); carrying this correspondence through the sums defining a, b and c gives the identity.
Now that we have constructed both the Young symmetriser and the tabloid representation of Sₙ, we are ready to deal with the main object of study of this section.

Proposition 8.3.3. The image V_λ = C[Sₙ] · c(λ, T) ⊆ M^λ is a C[Sₙ]-submodule of M^λ, or equivalently, V_λ is a subrepresentation of M^λ. Indeed, V_λ is independent of the choice of tableau, i.e. picking any other T′ we still have V_λ = C[Sₙ] · c(λ, T′) = C[Sₙ] · c(λ, T).

Proof. We show that V_λ is invariant under the action of Sₙ. Every element of V_λ is of the form Σ_T α_T c(λ, T), where α_T ∈ C and the sum runs through all tableaux T of λ. Then

σ · Σ_T α_T c(λ, T) = Σ_T α_T (σ · c(λ, T)) = Σ_T α_T c(λ, σ · T) ∈ V_λ

by the previous lemma, since evidently σ · T is still a tableau on λ. Thus V_λ is a subrepresentation of M^λ.

Now fix a tableau T and consider a distinct T′. We can write T′ = σ₀ · T, where σ₀ sends the value of each box in T to its analogue in T′. Therefore

Σ_{T′} α_{T′} c(λ, T′) = (Σ_{T′} α_{T′} σ₀) · c(λ, T) ∈ C[Sₙ] · c(λ, T)

Definition 8.3.5. Let n be a positive integer and λ a partition of n. We call V_λ a Specht module.

For simplicity, we will often refer to a(λ, T), b(λ, T) and the Young symmetriser c(λ, T) as simply a_λ, b_λ and c_λ, since any choice of tableau generates the same Specht module. As we will show later, every irreducible representation of Sₙ is isomorphic to a Specht module, so one could simply identify Specht modules with such representations. Moreover, this implies in particular that Specht modules are simple C[Sₙ]-submodules of M^λ. But for the time being, let us just compute some examples.

Example 8.3.5. Consider the group S₃. The three possible partitions of n = 3 are λ = (3), λ = (2, 1) and λ = (1, 1, 1), as illustrated below

1 2 3   λ = (3)      1 2   λ = (2, 1)      1   λ = (1, 1, 1)
                     3                     2
                                           3

i) For λ = (3) we have P₍₃₎ = S₃ and Q₍₃₎ = {1}. Hence c₍₃₎ = a₍₃₎b₍₃₎ = Σ_{σ∈S₃} σ. However, for any τ ∈ S₃ we obtain τc₍₃₎ = c₍₃₎, since c₍₃₎ runs through every permutation of S₃. Therefore V₍₃₎ is trivial.

Furthermore, M^(3) itself is generated by the equivalence class of the tableau of (3) above. So dim V₍₃₎ = 1 and dim M^(3) = 1.

ii) For λ = (2, 1) we have P₍₂,₁₎ = {1, (12)} and Q₍₂,₁₎ = {1, (13)}. Hence c₍₂,₁₎ = a₍₂,₁₎b₍₂,₁₎ = 1 + (12) − (13) − (132). It follows that V₍₂,₁₎ = C[S₃]c₍₂,₁₎ = span{c₍₂,₁₎, (13)c₍₂,₁₎}. Thus dim V₍₂,₁₎ = 2 and dim M^(2,1) = 3, since there are 3 distinct equivalence classes.

iii) For λ = (1, 1, 1) we have P₍₁,₁,₁₎ = {1} and Q₍₁,₁,₁₎ = S₃. Thus c₍₁,₁,₁₎ = Σ_{σ∈S₃} sign(σ)σ and therefore, for all τ ∈ S₃, we have τc₍₁,₁,₁₎ = sign(τ)c₍₁,₁,₁₎. Hence dim V₍₁,₁,₁₎ = 1 and dim M^(1,1,1) = 6.

Example 8.3.6. In general, for the partition λ = (n−1, 1) of n, the vector space M^(n−1,1) is spanned by the equivalence classes [Tᵢ] in which the label of the isolated cell is i

···
i

We will see later why, in general, we have a decomposition V₍ₙ₋₁,₁₎ ⊕ V₍ₙ₎ ≅ M^(n−1,1).

Nevertheless, explicitly computing Specht modules for large n via the explicit calculation of the submodule of M^λ generated by the Young symmetriser would require an immense amount of time. That is why the following results are essential for dealing with potentially high-dimensional Specht modules.

Theorem 8.3.4 (Hook-Length Formula). Let λ be a partition of n and fix a cell (i, j) in its Young diagram. Let h(i, j) be the number of cells strictly below (i, j), plus the number of cells strictly to the right of (i, j), plus one, as illustrated by the diagram below

X O O O        h(1, 1) = 6
O X O          h(2, 2) = 2
O

Then, we have

dim V_λ = n! / Π_{i,j} h(i, j)

where h(i, j) runs through all cells in the Young diagram of λ.
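The formula is easy to implement. A Python sketch (helper names ours) reproduces the dimensions worked out for S₃ and S₄ in the surrounding examples:

```python
from math import factorial

def hook_lengths(shape):
    """h(i, j) = cells to the right + cells below + 1, for every cell."""
    conj = [sum(1 for part in shape if part > j) for j in range(shape[0])]
    return [[(shape[i] - j - 1) + (conj[j] - i - 1) + 1
             for j in range(shape[i])] for i in range(len(shape))]

def specht_dimension(shape):
    prod = 1
    for row in hook_lengths(shape):
        for h in row:
            prod *= h
    return factorial(sum(shape)) // prod

for shape in [(3,), (2, 1), (1, 1, 1), (4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]:
    print(shape, specht_dimension(shape))
# e.g. dim V_(2,1) = 2, dim V_(3,1) = 3, dim V_(2,2) = 2
```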

The proof of the Hook-Length Formula is not at all easy or short. The original proof was published in 1954 by Frame, Robinson and Thrall. For a slightly simplified version, see [GN04]. The Hook-Length Formula gives us the dimension of the Specht module, but we will try to go a step further and specify a particular basis. In order to do so, we need the following theorem.

Theorem 8.3.5. Let λ be a partition of n and {T₁, ···, Tₘ} the set of all standard Young tableaux on the Young diagram of λ, up to equivalence class. Then the elements b_λ · [Tᵢ] span C[Sₙ] · c_λ, i.e. the elements b_λ · [Tᵢ] form a basis of the Specht module V_λ.

Example 8.3.7. Let us look at S₄. The following are the Young diagrams for all possible partitions of 4, each equipped with a standard Young tableau

λ = (4)      λ = (3, 1)    λ = (2, 2)    λ = (1, 1, 1, 1)
1 2 3 4      1 2 3         1 2           1
             4             3 4           2
                                         3
                                         4

λ = (2, 1, 1)
1 2
3
4

i) For λ = (4): There is only one equivalence class of Young tableaux, and therefore dim M^(4) = dim V₍₄₎ = 1. The Young symmetriser is just c₍₄₎ = a₍₄₎ = Σ_{σ∈S₄} σ, i.e. V₍₄₎ = C[S₄] · a₍₄₎, and the only basis vector in terms of Young diagrams is the one above.

ii) For λ = (3, 1): There are four equivalence classes of Young tableaux for this partition, thus dim M^(3,1) = 4. By the Hook-Length Formula, the dimension of V₍₃,₁₎ is equal to 3.

iii) For λ = (2, 2): We have seen before that dim M^(2,2) = 6. By the Hook-Length Formula, dim V₍₂,₂₎ = 2. The two basis vectors correspond to the two possible standard Young tableaux for this partition, namely

1 2    1 3
3 4    2 4

The Young symmetriser for the first tableau is

c_{λ₁} = (1 + (12) + (34) + (12)(34))(1 − (13) − (24) + (13)(24))
       = 1 + (12) + (34) + (12)(34) − (13) − (24) + (13)(24) + (14)(23)
         − (123) − (142) − (134) − (243) + (1324) + (1423) − (1234) − (1432)

and the Young symmetriser for the second tableau is c_{λ₂} ≠ c_{λ₁}. But, as we have proved, they both generate the same Specht module V₍₂,₂₎, i.e. we have C[S₄] · c_{λ₁} = C[S₄] · c_{λ₂}. Finally, to be more specific, the two basis elements of V₍₂,₂₎ are

1 2  −  2 3  −  1 4  +  3 4
3 4     1 4     2 3     1 2

1 3  −  2 3  −  1 4  +  2 4
2 4     1 4     2 3     1 3

iv) For λ = (1, 1, 1, 1): Here dim M^(1,1,1,1) = 24, but the Hook-Length Formula tells us that again V₍₁,₁,₁,₁₎ is one-dimensional.

v) For λ = (2, 1, 1): There is a total of 4! = 24 possible combinations of the integers {1, 2, 3, 4}, and equivalence classes for λ = (2, 1, 1) have cardinality two (i.e. swapping the two integers in the top cells). Thus dim M^(2,1,1) = 12. By the Hook-Length Formula, dim V₍₂,₁,₁₎ = 3. The basis vectors correspond to the three standard Young tableaux

1 2    1 3    1 4
3      2      2
4      4      3

The Young symmetrisers are

c_{λ₁} = 1 + (12) − (13) − (14) − (34) + (134) + (143) − (123) − (124) + (1234) + (1243) − (12)(34)

c_{λ₂} = 1 + (13) − (12) − (14) − (24) + (124) + (142) − (132) − (134) − (13)(24) + (1324) + (1342)

c_{λ₃} = 1 + (14) − (12) − (13) − (23) + (123) + (132) − (142) − (143) − (14)(23) + (1423) + (1432)

which again give rise to C[S₄] · c_{λᵢ} ≅ V₍₂,₁,₁₎. Lastly, an explicit basis for V₍₂,₁,₁₎ is

1 2     2 4     2 3     2 3     2 4     1 2
3    +  1    +  4    −  1    −  3    −  4
4       3       1       4       1       3

1 3     2 3     3 4     2 3     3 4     1 3
2    +  4    +  1    −  1    −  2    −  4
4       1       2       4       1       2

1 4     2 4     3 4     2 4     3 4     1 4
2    +  3    +  1    −  1    −  2    −  3
3       1       2       3       1       2

Definition 8.3.6. Let λ = (λ₁, ···, λ_r) and µ = (µ₁, ···, µ_s) be two partitions of n. We say λ > µ if λᵢ > µᵢ for some i, and λ_k = µ_k for all k < i.

Lemma 8.3.6. If λ > µ are distinct partitions of n, then a_λ C[Sₙ] b_µ = 0.

Proof. We will specify tableaux of λ and µ, so for clarity use again the notation a(λ, T), b(λ, T) ≡ a_λ, b_λ and a(µ, S), b(µ, S) ≡ a_µ, b_µ.
It suffices to show that for g ∈ Sₙ there is a transposition t ∈ P(λ, T) such that g⁻¹tg ∈ Q(µ, S), since in such case

a(λ, T) g b(µ, S) = (Σ_{σ∈P(λ,T)} σ) g (Σ_{τ∈Q(µ,S)} sign(τ)τ)    (8.1)
  = (Σ_{σ∈P(λ,T)} σ) t g (g⁻¹tg) sign(g⁻¹tg) (Σ_{τ∈Q(µ,S)} sign(τ)τ)    (8.2)
  = −(Σ_{σ∈P(λ,T)} σ) g (Σ_{τ∈Q(µ,S)} sign(τ)τ) = −a(λ, T) g b(µ, S)    (8.3)

and so a(λ, T) g b(µ, S) = 0; note that we have used the equalities Σ_{σ∈P(λ,T)} σ = π Σ_{σ∈P(λ,T)} σ if π ∈ P(λ, T), and Σ_{τ∈Q(µ,S)} sign(τ)τ = sign(π) π Σ_{τ∈Q(µ,S)} sign(τ)τ if π ∈ Q(µ, S).
We claim that there are two (distinct) integers lying in the same row of T which also both lie in the same column of gS; then t may be taken to be the transposition swapping them, since t ∈ P(λ, T) and g⁻¹tg ∈ Q(µ, S). If λ₁ > µ₁ then this follows directly by the pigeonhole principle: there are λ₁ integers in the first row of (λ, T) which can lie in only µ₁ columns of gS.
Otherwise, we can find elements p₁ ∈ P(λ, T) and q₁ ∈ Q(µ, S) such that p₁T and q₁gS have the same first row. We can continue this argument until λᵢ > µᵢ (which we will reach after i − 1 iterations). Then, again using the pigeonhole principle, there have to exist two (distinct) integers lying in the same row of p_{i−1}···p₁T which also lie in the same column of q₁···q_{i−1}gS.

Corollary 8.3.6.1. For λ ≠ µ, we have c_λ C[Sₙ] c_µ = 0.

Proof. Assume without loss of generality that λ > µ. Then c_λ C[Sₙ] c_µ = a_λ(b_λ C[Sₙ] a_µ)b_µ ⊆ a_λ C[Sₙ] b_µ = 0 by the previous Lemma.

Lemma 8.3.7. There is a t_λ ∈ C[Sₙ]* (the dual vector space) such that c(λ, T) g c(λ, T) = t_λ(g) c(λ, T) for all g ∈ C[Sₙ].

Lemma 8.3.8. We have c_λ c_λ = n_λ c_λ with n_λ = n!/dim V_λ.

Proof. From the previous Lemma, we have c_λ c_λ = n_λ c_λ for some n_λ ∈ C. Consider the map Φ_λ : C[Sₙ] → V_λ defined by x ↦ x c_λ. We notice that Φ_λ multiplies elements of V_λ by n_λ, and acts by zero on ker Φ_λ. Therefore tr Φ_λ = n_λ dim V_λ. Moreover, tr Φ_λ = dim C[Sₙ] = n!, so we get the given identity.
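For λ = (2, 1) we have n! = 6 and dim V_λ = 2, so the lemma predicts c_λc_λ = 3c_λ. This can be verified by multiplying out in C[S₃]; a Python sketch under our conventions (permutations as 0-indexed tuples, composed left-to-right):

```python
def compose(p, q):
    """Apply p first, then q."""
    return tuple(q[p[x]] for x in range(3))

def sign(p):
    inv = sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))
    return -1 if inv % 2 else 1

# Young symmetriser of the tableau 1 2 / 3 (rows {1,2},{3}; columns {1,3},{2})
P = [(0, 1, 2), (1, 0, 2)]      # row-preserving permutations
Q = [(0, 1, 2), (2, 1, 0)]      # column-preserving permutations
c = {}
for p in P:
    for q in Q:
        r = compose(p, q)
        c[r] = c.get(r, 0) + sign(q)

# square c in the group algebra C[S_3]
cc = {}
for g, x in c.items():
    for h, y in c.items():
        r = compose(g, h)
        cc[r] = cc.get(r, 0) + x * y

assert all(cc.get(g, 0) == 3 * x for g, x in c.items())
assert all(y == 3 * c.get(g, 0) for g, y in cc.items())
print("c*c = 3*c, as n!/dim V = 6/2 = 3")
```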

Indeed, n_λ is real and, by the Hook-Length Formula, it is equal to the product of the hook lengths. The last two Lemmas are essential to show the following remarkable statement, proved by Wilhelm Specht in 1935.

Theorem 8.3.9 (Specht). Every irreducible representation of Sₙ is isomorphic to a Specht module V_λ for some partition λ of n.

Proof. Firstly, we will show that V_λ is an irreducible representation of Sₙ for each partition λ of n. By Lemma 8.3.7 we have c_λ V_λ = C c_λ, and since c_λ c_λ ≠ 0 by Lemma 8.3.8, we have c_λ c_λ ∈ V_λ. Now, let W ⊆ V_λ be an irreducible (sub)representation. There are two cases: if c_λ W = 0, then W · W ⊆ V_λ W = C[Sₙ] c_λ W = 0, and therefore W = 0. On the other hand, if c_λ W ≠ 0 then c_λ W = C c_λ, and so

V_λ = C[Sₙ] · c_λ = C[Sₙ] · C c_λ = C[Sₙ](c_λ W) = (C[Sₙ] c_λ)W ⊆ W

since W is a subrepresentation of Sₙ. Hence V_λ is irreducible.

Now, partitions of n enumerate all conjugacy classes of Sₙ, and conjugacy classes of Sₙ enumerate all irreducible representations of Sₙ. Thus, we have to prove that if λ ≠ µ are two distinct partitions of n, then V_λ ≇ V_µ. Let λ and µ be given and assume without loss of generality that λ > µ. Then c_λ V_λ = C c_λ. However, c_λ V_µ = c_λ C[Sₙ] c_µ = 0 by Corollary 8.3.6.1. Hence, suppose that there is an isomorphism Φ : V_µ → V_λ. Then

C c_λ = c_λ V_λ = c_λ Φ(V_µ) = c_λ Φ(C[Sₙ] c_µ) = Φ(c_λ C[Sₙ] c_µ) = 0

which is a contradiction.
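Since the V_λ exhaust the irreducible representations of Sₙ, the general identity Σ_λ (dim V_λ)² = |Sₙ| = n! must hold; combined with the Hook-Length Formula this gives a quick consistency check (helper names ours):

```python
from math import factorial

def partitions(n, largest=None):
    """All partitions of n in weakly decreasing order."""
    if n == 0:
        yield ()
        return
    largest = largest if largest is not None else n
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def specht_dimension(shape):
    """Hook-length formula for dim V_lambda."""
    conj = [sum(1 for part in shape if part > j) for j in range(shape[0])]
    prod = 1
    for i in range(len(shape)):
        for j in range(shape[i]):
            prod *= (shape[i] - j - 1) + (conj[j] - i - 1) + 1
    return factorial(sum(shape)) // prod

for n in (3, 4, 5):
    total = sum(specht_dimension(p) ** 2 for p in partitions(n))
    print(n, total, factorial(n))   # the last two columns agree
```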

Example 8.3.8. Let us come back to our example S₃. Evidently V₍₃₎ is isomorphic to the trivial representation, whereas V₍₂,₁₎ is isomorphic to the standard representation; lastly, V₍₁,₁,₁₎ is isomorphic to the alternating representation. So indeed we have a decomposition M^(2,1) ≅ V₍₂,₁₎ ⊕ V₍₃₎.

Example 8.3.9. Let us associate the Specht modules of S₄ derived before to its irreducible representations. V₍₄₎ is again isomorphic to the trivial representation, whereas V₍₁,₁,₁,₁₎ is isomorphic to the alternating representation. V₍₃,₁₎ is isomorphic to the standard representation, and again we have a decomposition M^(3,1) ≅ V₍₃,₁₎ ⊕ V₍₄₎. The irreducible representations associated to V₍₂,₂₎ and V₍₂,₁,₁₎ are not as mainstream. The irreducible representation isomorphic to V₍₂,₂₎ is the homomorphism ρ₍₂,₂₎ : S₄ → GL₂(C) defined by

ρ(e) = [ 1  0 ]    ρ(12) = ρ(34) = [ 1  0 ]    ρ(23) = [ 0  1 ]
       [ 0  1 ]                    [−1 −1 ]             [ 1  0 ]

extended multiplicatively, since every element of S₄ can be expressed as a product of the transpositions (12), (34), (23), and hence every matrix of the representation as a product of ρ(12) and ρ(23).
The irreducible representation isomorphic to V₍₂,₁,₁₎ is the homomorphism ρ : S₄ → GL₃(C) defined by

ρ(e) = [ 1  0  0 ]    ρ(12) = [ 1  0  0 ]    ρ(23) = [ 0 −1  0 ]    ρ(34) = [−1  0  0 ]
       [ 0  1  0 ]            [ 1 −1  0 ]            [−1  0  0 ]            [ 0  0 −1 ]
       [ 0  0  1 ]            [ 1  0 −1 ]            [ 0  0 −1 ]            [ 0 −1  0 ]

In both examples we claimed to have a decomposition M^(n−1,1) ≅ V₍ₙ₋₁,₁₎ ⊕ V₍ₙ₎. This is due to the well-known fact that the trivial and standard representations are complements of each other. Moreover, as discussed in Part I of this essay, modules over semisimple algebras are always semisimple. Since C[Sₙ] is semisimple, we expect M^λ to be semisimple, and in particular it must be a direct sum of some of the Specht submodules of M^λ. By Maschke's theorem, every irreducible subrepresentation has an invariant complement when working over C, and by the Hook-Length Formula the dimension of V₍ₙ₋₁,₁₎ is always equal to n − 1, while the dimension of the trivial representation is evidently one.

8.4 Lie Groups and Lie Algebras

8.4.1 Analytic Manifolds

Let X be a topological space and K be a complete field. A chart on X is a triple c = (U, φ, n) such that n is a positive integer, U ⊆ X is an open set, and φ : U → φ(U) ⊂ Kⁿ is a homeomorphism.

We say two charts c, c′ are compatible if, for V = U ∩ U′, the transition maps

φ′ ∘ φ⁻¹ : φ(V) → φ′(V)   and   φ ∘ φ′⁻¹ : φ′(V) → φ(V)

are analytic. An atlas on X is a collection of charts A = {cᵢ}_{i∈I} which cover X such that for all i, j ∈ I, cᵢ and c_j are compatible. Two atlases A, A′ are compatible if A ∪ A′ is an atlas.

Definition 8.4.1. Let X be a topological space. An analytic manifold structure on X is an equivalence class
of compatible atlases on X.

Definition 8.4.2. A topological group is a group G endowed with a topology T, where the group operation and the map x ↦ x⁻¹ are continuous functions with respect to T.

Now we are ready to give the main definitions of this section, which will be crucial to prove the Schur-Weyl
Duality theorem.

Definition 8.4.3. Let G be a topological group with an analytic manifold structure over a complete field K. We say G is a Lie group or analytic group if the group operation and x ↦ x⁻¹ are analytic (on top of continuous).

Definition 8.4.4. A Lie algebra g is a vector space with a bilinear map [·, ·] : g × g → g satisfying
i) [X, Y] = −[Y, X];
ii) [X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = 0 (Jacobi identity).

Homomorphisms of Lie groups and Lie algebras are defined as expected: a homomorphism of Lie groups is a smooth group homomorphism, and a homomorphism of Lie algebras is a linear map φ preserving the bracket operator, that is, [φ(x), φ(y)] = φ([x, y]). Furthermore, it is an important result that Lie groups are generated by the elements of any open neighbourhood of the identity, as shown in [Ser09] and [Bel16].

One of the fundamental ideas of Lie theory is that every Lie group has an associated Lie algebra. However, geometric objects such as manifolds are inherently nonlinear in general, i.e. one cannot measure deformations linearly. Instead, we will use linear approximation on the manifold structure of a Lie group to define its associated Lie algebra.

Let M be a manifold, and fix m ∈ M. A point derivation at m is a linear map θ : C^∞(M) → R (where C^∞(M) denotes the set of all analytic functions M → R) such that θ(fg) = θ(f)g(m) + f(m)θ(g). One can deduce that the set of point derivations at m ∈ M forms an R-vector space. We shall not get into specific properties of these maps, but rather give the relevant definition:

Definition 8.4.5. Let M be a manifold and m ∈ M. The tangent space at m is the vector space T_m M of point derivations at m.

We can now introduce the notion of the Lie algebra g associated to a Lie group G, via linear approximation at the identity e ∈ G, that is, via the tangent space T_e G.
We have seen that Lie groups are generated by the elements of any open neighbourhood of the identity. In particular, if f : G → H is a smooth homomorphism of Lie groups, then f depends on its behaviour on such open neighbourhoods of the identity. Hence, taking smaller open sets around e, one eventually reaches the tangent space T_e G and the differential d_e f : T_e G → T_e H.

Let us try to make this a bit more formal. For all g ∈ G we have a map Ad(g) ∈ Aut(G) defined by Ad(g)(h) = ghg⁻¹. As one could have guessed, this is called the adjoint map. Differentiating as before, we obtain commutative diagrams

G ──f──> H              T_e G ──d_e f──> T_e H
│Ad(g)   │Ad(f(g))      │d_e Ad(g)      │d_e Ad(f(g))
v        v              v               v
G ──f──> H              T_e G ──d_e f──> T_e H

Nevertheless, if we want to check whether d_e f is actually the differential of some smooth homomorphism, we would also need to study the value of f(g) for g ≠ e, since the map d_e Ad(f(g)) depends on f(g) also for elements different from the identity. To bypass this problem, we shall consider Ad as a map G × G → G defined by (g, h) ↦ ghg⁻¹. The commutative diagram would now be

G × G ──f×f──> H × H
│Ad            │Ad
v              v
G ─────f────>  H

Definition 8.4.6. Let M, N, K be manifolds and f : M × N → K a bilinear map. The bidifferential b₍m,n₎f at (m, n) is a bilinear map T_m M × T_n N → T_{f(m,n)} K.

Specifics of the construction of this map can be found in [Bel16]. However, we are interested in the case (m, n) = (e, e), as given by the diagram below. This will allow us to define the Lie algebra associated to a Lie group.

T_e G × T_e G ──d_e f × d_e f──> T_e H × T_e H
│b₍e,e₎Ad                        │b₍e,e₎Ad
v                                v
T_e G ──────────d_e f──────────> T_e H

Definition 8.4.7. [·, ·]_G := b₍e,e₎Ad : g × g → g is called the Lie bracket operator.

Lemma 8.4.1. The map b₍e,e₎Ad preserves the Lie bracket operator; that is, b₍e,e₎Ad([X, Y]_G) = [b₍e,e₎Ad(X), b₍e,e₎Ad(Y)], where the bracket on the right-hand side is the commutator in End(T_e G).

Proposition 8.4.2. Let G be a Lie group with identity e. Then (T_e G, [·, ·]_G) is a Lie algebra.

Proof. By construction, [·, ·]_G is bilinear, so we only need to show that it is antisymmetric and that the Jacobi identity holds. To show antisymmetry, it suffices to show that for all X ∈ T_e G we have [X, X]_G = 0.
So let X ∈ T_e G. We can write X = γ′(0) for some curve γ : (−ε, ε) → G with γ(0) = e. Let Y = ν′(0) be another element of T_e G. First of all, we have

Ad(γ(t))(g) = γ(t) g γ(t)⁻¹,   t ∈ (−ε, ε)

Therefore, taking g = ν(s),

d_e Ad(γ(t))(Y) = d/ds|_{s=0} γ(t)ν(s)γ(t)⁻¹   ⟹   b₍e,e₎Ad(X)(Y) = d/dt|_{t=0} d/ds|_{s=0} γ(t)ν(s)γ(t)⁻¹

Therefore, we conclude

[X, X] = b₍e,e₎Ad(X)(X) = d/dt|_{t=0} [d/ds|_{s=0} γ(t)γ(s)γ(t)⁻¹] = d/dt|_{t=0} [d/ds|_{s=0} γ(s)] = d/dt|_{t=0} X = 0

Now we have to prove the Jacobi identity [X, [Y, Z]] − [Y, [X, Z]] = [[X, Y], Z]. Since b₍e,e₎Ad(X)(Y) = [X, Y], we can rewrite this identity as [b₍e,e₎Ad(X) ∘ b₍e,e₎Ad(Y) − b₍e,e₎Ad(Y) ∘ b₍e,e₎Ad(X)](Z) = b₍e,e₎Ad([X, Y])(Z) in End(T_e G), so it holds provided that b₍e,e₎Ad(X) ∘ b₍e,e₎Ad(Y) − b₍e,e₎Ad(Y) ∘ b₍e,e₎Ad(X) = b₍e,e₎Ad([X, Y]). But by the Lemma, b₍e,e₎Ad indeed preserves the commutator [A, B] = AB − BA, and thus the proof is finished.

Theorem 8.4.3. Let G and H be Lie groups with G simply connected, and let µ : Lie(G) → Lie(H) be a Lie algebra homomorphism. Then there exists a unique Lie group homomorphism f : G → H such that µ = d_e f.

8.4.2 Representations of Lie Algebras

Definition 8.4.8. A representation of a Lie group G on a complex vector space W is a Lie group homomorphism ρ : G → GL(W).
A representation of a Lie algebra g on a complex vector space W is a Lie algebra homomorphism ρ : g → gl(W).

Remark. We use gl(W) to denote End(W) considered as a Lie algebra under the commutator.

From the previous section, one can deduce that if G is simply connected, there is a correspondence between representations of G and representations of its associated Lie algebra g ≡ Lie(G).

Recall that for any finite group G we have an associative algebra C[G] whose representations are in one-to-one correspondence with the representations of G. More generally, for any associative C-algebra A, one has

Hom_Alg(C[G], A) ≅ Hom_Grp(G, A^×)

Analogously, for a Lie algebra g we have an associative algebra Ug, called the universal enveloping algebra of g, with the parallel property

Hom_Alg(Ug, A) ≅ Hom_LieAlg(g, L(A))

where L(A) is the set A considered as a Lie algebra with the commutator as Lie bracket. Note that L defines a functor L : {Associative Algebras} → {Lie Algebras}.

Construction of Ug

Let g be a Lie algebra. Consider the algebra Tg = ⊕_{n≥0} g^{⊗n}, where we set g^{⊗0} := C. Multiplication on Tg is given by the isomorphisms g^{⊗n} ⊗ g^{⊗m} → g^{⊗(n+m)}. Let Ig ⊆ Tg be the ideal generated by the elements of the form

X ⊗ Y − Y ⊗ X − [X, Y]   for all X, Y ∈ g ⊆ Tg

Then, the universal enveloping algebra of g is the quotient algebra Tg/Ig.

Equivalently, we could define the universal enveloping algebra as follows.

Definition 8.4.9. Let g be a Lie algebra. The universal enveloping algebra of g is an associative algebra E ≡ Ug together with a Lie algebra homomorphism ι_g : g → L(E) such that, for any other associative algebra A and Lie algebra homomorphism φ : g → L(A), there exists a unique associative algebra homomorphism ψ : E → A such that the following diagram commutes

g ──ι_g──> L(E)
  \          │
   φ         │ L(ψ)
    \        v
     `────> L(A)

where L(ψ) is ψ considered as a Lie algebra homomorphism.


One can notice the similarities between the construction of Ug and the construction of the tensor product algebra A ⊗_R B, where A and B are two associative algebras over an arbitrary ring R.
We are almost ready to prove the Schur-Weyl Duality Theorem, although we still need one more ingredient for the proof.

8.5 Symmetric Polynomials

Definition 8.5.1. An n-symmetric polynomial is a polynomial f(x₁, ···, xₙ) such that for every permutation σ ∈ Sₙ one has the equality f(x₁, ···, xₙ) = f(x_{σ(1)}, ···, x_{σ(n)}).

A special type of such polynomials are the n-elementary symmetric polynomials, which are defined as follows:

Π₁(x₁, ···, xₙ) = Σ_{1≤i≤n} xᵢ
Π₂(x₁, ···, xₙ) = Σ_{1≤i<j≤n} xᵢxⱼ
Π₃(x₁, ···, xₙ) = Σ_{1≤i<j<k≤n} xᵢxⱼx_k
⋮
Πₙ(x₁, ···, xₙ) = Π_{1≤i≤n} xᵢ

Definition 8.5.2. Let p be a positive integer. The power sum symmetric polynomial is

PS_p(x₁, ···, xₙ) = Σ_{k=1}^{n} x_k^p

In fact, the elementary symmetric polynomials Π₁, ···, Πₙ and the power sums PS₁, ···, PSₙ determine each other via the recursion

p Π_p = Σ_{i=1}^{p} (−1)^{i−1} Π_{p−i} PS_i,   Π₀ = 1

This is known as the Newton-Girard Formula, and it yields the following result.

Theorem 8.5.1 (Fundamental Theorem of Symmetric Polynomials). Every n-elementary symmetric polynomial can be expressed as a polynomial in the power sums PS₁(x₁, ···, xₙ), ···, PSₙ(x₁, ···, xₙ).
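A numerical sketch of the Newton-Girard recursion (function names ours): the elementary symmetric polynomials recovered from power sums agree with a direct computation.

```python
from itertools import combinations
from math import prod

def elementary_from_power_sums(xs):
    """Recover e_1, ..., e_n from p_1, ..., p_n via k e_k = sum (-1)^{i-1} e_{k-i} p_i."""
    n = len(xs)
    p = [None] + [sum(x ** k for x in xs) for k in range(1, n + 1)]
    e = [1.0]
    for k in range(1, n + 1):
        e.append(sum((-1) ** (i - 1) * e[k - i] * p[i] for i in range(1, k + 1)) / k)
    return e[1:]

xs = [2.0, -1.0, 3.0, 0.5]
direct = [sum(prod(c) for c in combinations(xs, k)) for k in range(1, len(xs) + 1)]
recovered = elementary_from_power_sums(xs)
assert all(abs(a - b) < 1e-9 for a, b in zip(direct, recovered))
print(recovered)   # [4.5, 3.0, -5.5, -3.0]
```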

We will use this theorem to prove some lemmas preliminary to the Schur-Weyl Duality Theorem.

8.6 Schur-Weyl Duality for GL(V )


Let V be a complex vector space, and consider the tensor product V^{⊗n}. Since there are n factors, there is a natural action of Sₙ:

σ(v₁ ⊗ ··· ⊗ vₙ) = v_{σ(1)} ⊗ ··· ⊗ v_{σ(n)}

Moreover, letting g ∈ GL(V),

g(v₁ ⊗ ··· ⊗ vₙ) = g(v₁) ⊗ ··· ⊗ g(vₙ)

These two actions commute with each other and, as we will show, the spans of the images of Sₙ and GL(V) in End(V^{⊗n}) are centralisers of each other. This is the key to connecting the representation theory of both groups.
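The commuting of the two actions can be verified directly on small matrices. In the NumPy sketch below (the helper perm_op is ours), P_σ permutes the tensor factors of (C²)^{⊗3} and G = g ⊗ g ⊗ g is the diagonal action of an arbitrary g:

```python
import numpy as np
from itertools import permutations

d, n = 2, 3

def perm_op(p):
    """Operator on (C^d)^{tensor n}: sends e_{i_1} x ... x e_{i_n} to the basis
    vector whose k-th factor is e_{i_{p(k)}}."""
    P = np.zeros((d ** n, d ** n))
    for idx in np.ndindex(*(d,) * n):
        src = np.ravel_multi_index(idx, (d,) * n)
        dst = np.ravel_multi_index(tuple(idx[p[k]] for k in range(n)), (d,) * n)
        P[dst, src] = 1.0
    return P

rng = np.random.default_rng(0)
g = rng.standard_normal((d, d))
G = np.kron(np.kron(g, g), g)      # the diagonal GL(V) action on V x V x V
for p in permutations(range(n)):
    assert np.allclose(perm_op(p) @ G, G @ perm_op(p))
print("the S_n action and the GL(V) action commute")
```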

Lemma 8.6.1. The image of Ugl(V) in End(V^{⊗n}) is End_{C[Sₙ]}(V^{⊗n}).

Proof. The action of X ∈ gl(V) on v₁ ⊗ ··· ⊗ vₙ ∈ V^{⊗n} is given by

X(v₁ ⊗ ··· ⊗ vₙ) = Σ_{i=1}^{n} v₁ ⊗ ··· ⊗ Xvᵢ ⊗ ··· ⊗ vₙ

Therefore, the image of X in End(V^{⊗n}) is the operator

Πₙ(X) := Σ_{i=1}^{n} id ⊗ ··· ⊗ X ⊗ ··· ⊗ id

where X is in the iᵗʰ position. This operator commutes with every permutation of the factors, so the image of gl(V), and hence of Ugl(V), is contained in End_{C[Sₙ]}(V^{⊗n}).
Conversely, from the previous section we know that elementary symmetric polynomials can be expressed as polynomials in the power sums, so that

X ⊗ ··· ⊗ X = Q(Πₙ(X), Πₙ(X²), ···, Πₙ(Xⁿ))

for a suitable polynomial Q. So the elements X^{⊗n} for X ∈ End(V) are generated by images of elements of Ugl(V). But elements of the form X^{⊗n} span

(End(V)^{⊗n})^{Sₙ} ≅ (End(V^{⊗n}))^{Sₙ} = End_{C[Sₙ]}(V^{⊗n})

Thus, End_{C[Sₙ]}(V^{⊗n}) is contained in the image of Ugl(V).
Proposition 8.6.2. The spans of the images of C[Sₙ] and Ugl(V) in End(V^{⊗n}) are centralisers of each other.

Proof. Consider the image of C[Sₙ] in End(V^{⊗n}). Indeed, Im C[Sₙ] is a subalgebra of End(V^{⊗n}), and by Maschke's theorem it is semisimple. Hence, using the previous Lemma and the Double Centraliser Theorem,

C_{End(V^{⊗n})}(Im C[Sₙ]) ≅ End_{Im C[Sₙ]}(V^{⊗n}) ≅ Im Ugl(V)

Conversely, consider the image of Ugl(V) in End(V^{⊗n}). We already know that C_{End(V^{⊗n})}(Im C[Sₙ]) ≅ Im Ugl(V). But again, by the Double Centraliser Theorem,

C_{End(V^{⊗n})}(Im Ugl(V)) ≅ C_{End(V^{⊗n})}(C_{End(V^{⊗n})}(Im C[Sₙ])) ≅ Im C[Sₙ]

Proposition 8.6.3. The spans of the images of Ugl(V) and GL(V) in End(V^{⊗n}) are the same.

Proof. The actions of Sₙ and GL(V) on V^{⊗n} commute. Therefore the span of Im GL(V) must be contained in End_{C[Sₙ]}(V^{⊗n}) = Im Ugl(V).
Conversely, since span{g^{⊗n} : g ∈ GL(V)} = span{X^{⊗n} : X ∈ End(V)} would give the claim, it suffices to show that any X ∈ End(V) is in the span of {g : g ∈ GL(V)}. But for all but finitely many α ∈ C, the matrix X + αI is invertible. Therefore, we can write X = (X + αI) − αI, which is a linear combination of elements of GL(V).

Therefore, using the Double Centraliser Theorem, we arrive at our main result.

Theorem 8.6.4 (General Schur-Weyl Duality). Let V be a complex vector space and n a positive integer. Then, there is a decomposition

V^{⊗n} ≅ ⊕_{|λ|=n} V_λ ⊗_C S_λV

as a representation of Sₙ × GL(V), where the Specht modules V_λ run through all irreducible representations of Sₙ and each S_λV := Hom_{Sₙ}(V_λ, V^{⊗n}) is either an irreducible representation of GL(V) or zero.
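For V = C² and n = 3 the bookkeeping of the decomposition can be checked numerically: dim V^{⊗3} = 8, dim V₍₃₎ = dim V₍₁,₁,₁₎ = 1 and dim V₍₂,₁₎ = 2, while S₍₃₎V = Sym³C² has dimension 4 and S₍₁,₁,₁₎V = ∧³C² is zero, forcing dim S₍₂,₁₎V = 2. A NumPy sketch (helper names ours) confirms this via the ranks of the symmetriser and antisymmetriser:

```python
import numpy as np
from itertools import permutations

d, n = 2, 3

def perm_op(p):
    """Permutation of the tensor factors of (C^d)^{tensor n}."""
    P = np.zeros((d ** n, d ** n))
    for idx in np.ndindex(*(d,) * n):
        src = np.ravel_multi_index(idx, (d,) * n)
        dst = np.ravel_multi_index(tuple(idx[p[k]] for k in range(n)), (d,) * n)
        P[dst, src] = 1.0
    return P

def sgn(p):
    inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
    return -1.0 if inv % 2 else 1.0

ops = {p: perm_op(p) for p in permutations(range(n))}
sym = sum(ops.values()) / 6                        # projector onto Sym^3 C^2
alt = sum(sgn(p) * M for p, M in ops.items()) / 6  # projector onto Wedge^3 C^2
r_sym = np.linalg.matrix_rank(sym)
r_alt = np.linalg.matrix_rank(alt)
print(r_sym, r_alt, (d ** n - r_sym - r_alt) // 2)   # 4 0 2
```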

Remark. Indeed, Proposition 8.6.2 also yields an (Im C[Sₙ] ⊗_C Im Ugl(V))-module decomposition

V^{⊗n} ≅ ⊕_{|λ|=n} V_λ ⊗_C L_λ

where the V_λ are the Specht modules for Sₙ and the L_λ are distinct irreducible representations of gl(V) or zero.

The reasoning behind this remark is the following: we have shown that the spans of Im Ugl(V) and Im GL(V) are indeed the same in End(V^{⊗n}). Call this subspace B. We also have that S_λV is a simple B-module, by the Double Centraliser Theorem. Both GL(V) and Ugl(V) act on V^{⊗n} via B, and therefore each S_λV is naturally a GL(V)-representation and a Ugl(V)-representation.
We see that S_λV remains simple both as a GL(V)-representation and as a Ugl(V)-representation: by definition, B ⊆ End(V^{⊗n}) is spanned by the images of either GL(V) or Ugl(V), so if S_λV were not simple over Ugl(V), it would not be simple over GL(V) (and vice versa).
Now, the underlying vector space structure of a Ugl(V)-representation is the same as that of the corresponding gl(V)-representation, due to the correspondence intrinsic to the definition of the universal enveloping algebra,

Hom_Alg(Ug, A) ≅ Hom_LieAlg(g, L(A))

In this case, the correspondence Hom(Ugl(V), −) ≅ Hom(gl(V), L(−)) yields our assertion, and a simple Ugl(V)-module corresponds to a simple gl(V)-module.

8.7 Schur Functors

Recall from previous sections the definitions of P_λ, Q_λ ⊆ Sₙ and a_λ, b_λ, and the Young symmetriser c_λ := a_λ b_λ.

Definition 8.7.1. The complex nᵗʰ symmetric subalgebra of V, denoted Symⁿ V, is the subspace of V^{⊗n} spanned by

{ Σ_{σ∈Sₙ} v_{σ(1)} ⊗ ··· ⊗ v_{σ(n)} : vᵢ ∈ V }

Definition 8.7.2. The complex nᵗʰ alternating subalgebra of V, denoted ∧ⁿ(V), is the subspace of V^{⊗n} spanned by

{ Σ_{σ∈Sₙ} sign(σ) v_{σ(1)} ⊗ ··· ⊗ v_{σ(n)} : vᵢ ∈ V }

Lemma 8.7.1. Consider P_λ, Q_λ ⊆ Sₙ. Let k be the number of rows of the Young diagram of a partition λ = (λ₁, ···, λ_k) of n ∈ N, let λ′ = (λ′₁, ···, λ′_{k′}) be the conjugate partition, and let k′ be the number of columns. Then P_λ ≅ S_{λ₁} × ··· × S_{λ_k} and Q_λ ≅ S_{λ′₁} × ··· × S_{λ′_{k′}}.

Proof. The stabiliser of a row of the Young diagram of λ is a subgroup of Sₙ, and such stabilisers generate P_λ. Similarly, the stabilisers of the columns generate Q_λ. The isomorphisms arise naturally from this fact.

Lemma 8.7.2. Let V be a finite-dimensional C-vector space, λ = (λ₁, ···, λ_k) a partition of n, and consider a_λ and b_λ. We have decompositions

a_λ(V^{⊗n}) ≅ Sym^{λ₁} V ⊗ ··· ⊗ Sym^{λ_k} V    (8.4)
b_λ(V^{⊗n}) ≅ ∧^{λ′₁}(V) ⊗ ··· ⊗ ∧^{λ′_{k′}}(V)    (8.5)

Let us explicitly compute examples of such decompositions.


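For V = C² and λ = (2, 1), both sides of (8.4) and (8.5) can also be compared numerically: dim(Sym²C² ⊗ C²) = 3 · 2 = 6 and dim(∧²C² ⊗ C²) = 1 · 2 = 2. A NumPy sketch (the helper perm_op is ours) checks the ranks of a_λ and b_λ on (C²)^{⊗3}:

```python
import numpy as np

d, n = 2, 3

def perm_op(p):
    """Permutation of the tensor factors of (C^d)^{tensor n}."""
    P = np.zeros((d ** n, d ** n))
    for idx in np.ndindex(*(d,) * n):
        src = np.ravel_multi_index(idx, (d,) * n)
        dst = np.ravel_multi_index(tuple(idx[p[k]] for k in range(n)), (d,) * n)
        P[dst, src] = 1.0
    return P

I = np.eye(d ** n)
a = I + perm_op((1, 0, 2))          # a_lambda: symmetrise factors 1 and 2
b = I - perm_op((2, 1, 0))          # b_lambda: antisymmetrise factors 1 and 3
print(np.linalg.matrix_rank(a))     # 6 = dim Sym^2(C^2) * dim C^2
print(np.linalg.matrix_rank(b))     # 2 = dim Wedge^2(C^2) * dim C^2
```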

Example 8.7.1. Let V = C, n = 3 and λ = (2, 1). Explicitly, the action of a_λ on any v₁ ⊗ v₂ ⊗ v₃ ∈ C^{⊗3} is

a_λ(v) = a_λ(v₁ ⊗ v₂ ⊗ v₃) = v₁ ⊗ v₂ ⊗ v₃ + v₂ ⊗ v₁ ⊗ v₃

By the Lemma, we must have an isomorphism

a_λ(C^{⊗3}) ≅ Sym² C ⊗_C C

To show this, we define a canonical isomorphism φ : C^{⊗3} → C^{⊗2} ⊗ C, given by v₁ ⊗ v₂ ⊗ v₃ ↦ (v₁ ⊗ v₂) ⊗ v₃, and the embedding ι : Sym²(C) ⊗ C ↪ C^{⊗2} ⊗ C:

φ⁻¹(ι(v₁·v₂ ⊗ v₃)) = φ⁻¹(((v₁ ⊗ v₂) + (v₂ ⊗ v₁)) ⊗ v₃)
  = v₁ ⊗ v₂ ⊗ v₃ + v₂ ⊗ v₁ ⊗ v₃
  = a_λ(v₁ ⊗ v₂ ⊗ v₃)

Hence, we can deduce the existence of an isomorphism Ω : a_λ(C^{⊗3}) → Sym²(C) ⊗ C defined by Ω(a_λ(v₁ ⊗ v₂ ⊗ v₃)) = v₁·v₂ ⊗ v₃, as given by the following commutative diagram

Sym²(C) ⊗ C ──ι──> C^{⊗2} ⊗ C
     ^                  │ φ⁻¹ (≅)
     │Ω (≅)             v
a_λ(C^{⊗3}) <──a_λ──  C^{⊗3}
Example 8.7.2. Let us do the same now for $b_\lambda$. For $v_1 \otimes v_2 \otimes v_3 \in \bigotimes^3 \mathbb{C}$ we have the action
$$b_\lambda(v_1 \otimes v_2 \otimes v_3) = v_1 \otimes v_2 \otimes v_3 - v_3 \otimes v_2 \otimes v_1$$
Again, we have a (not so) canonical isomorphism $\varphi : \bigotimes^3 \mathbb{C} \to \mathbb{C}^{\otimes 2} \otimes \mathbb{C}$ defined by $v_1 \otimes v_2 \otimes v_3 \mapsto (v_1 \otimes v_3) \otimes v_2$, and an imbedding $\iota : \bigwedge^2(\mathbb{C}) \otimes \mathbb{C} \hookrightarrow \mathbb{C}^{\otimes 2} \otimes \mathbb{C}$. So again, based on these two maps, we will define an isomorphism $\Psi : b_\lambda(\bigotimes^3 \mathbb{C}) \to \bigwedge^2(\mathbb{C}) \otimes \mathbb{C}$ by $b_\lambda(v_1 \otimes v_2 \otimes v_3) \mapsto (v_1 \wedge v_3) \otimes v_2$. Then
$$\varphi^{-1}(\iota((v_1 \wedge v_3) \otimes v_2)) = \varphi^{-1}\big((v_1 \otimes v_3 - v_3 \otimes v_1) \otimes v_2\big) = v_1 \otimes v_2 \otimes v_3 - v_3 \otimes v_2 \otimes v_1 = b_\lambda(v_1 \otimes v_2 \otimes v_3)$$
Thus, we get another commutative diagram
$$\begin{array}{ccc}
\bigwedge^2(\mathbb{C}) \otimes \mathbb{C} & \xrightarrow{\ \iota\ } & \mathbb{C}^{\otimes 2} \otimes \mathbb{C} \\
{\scriptstyle \Psi}\,\uparrow\,\cong & & \cong\,\downarrow\,{\scriptstyle \varphi^{-1}} \\
b_\lambda\big(\bigotimes^3 \mathbb{C}\big) & \xleftarrow{\ b_\lambda\ } & \bigotimes^3 \mathbb{C}
\end{array}$$
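The two examples above can also be verified in coordinates on a genuinely higher-dimensional space. In the sketch below (my own check, not from the dissertation; `perm_op` builds the matrix of a place permutation), the operators $a_\lambda = e + (1\,2)$ and $b_\lambda = e - (1\,3)$ for $\lambda = (2,1)$ act on $(\mathbb{C}^2)^{\otimes 3}$, and their ranks match the dimensions predicted by Lemma 8.7.2: $\dim(\mathrm{Sym}^2\mathbb{C}^2 \otimes \mathbb{C}^2) = 6$ and $\dim(\bigwedge^2\mathbb{C}^2 \otimes \mathbb{C}^2) = 2$.

```python
import itertools
import numpy as np

def perm_op(sigma, d, n):
    """Matrix of the place permutation sigma on (C^d)^{tensor n}."""
    dim = d ** n
    M = np.zeros((dim, dim))
    for idx in itertools.product(range(d), repeat=n):
        src = sum(i * d ** k for k, i in enumerate(idx))
        out = tuple(idx[sigma[k]] for k in range(n))
        dst = sum(i * d ** k for k, i in enumerate(out))
        M[dst, src] = 1.0
    return M

d, n = 2, 3
e, s12, s13 = (0, 1, 2), (1, 0, 2), (2, 1, 0)
# For lambda = (2,1): P_lambda = {e, (12)}, Q_lambda = {e, (13)}.
a = perm_op(e, d, n) + perm_op(s12, d, n)
b = perm_op(e, d, n) - perm_op(s13, d, n)

rank_a = np.linalg.matrix_rank(a)   # dim Sym^2(C^2) (x) C^2 = 3 * 2 = 6
rank_b = np.linalg.matrix_rank(b)   # dim Wedge^2(C^2) (x) C^2 = 1 * 2 = 2
print(rank_a, rank_b)
```

The ranks are exactly the dimensions of the right-hand sides of (8.4) and (8.5) for this $\lambda$.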

Having seen these general notions, we can now move on to a characterisation and definition of the Schur functor on a general $\mathbb{C}$-tensor space $\bigotimes^n V$.

Theorem 8.7.3. The image of the Young symmetriser $c_\lambda$ on $\bigotimes^n V$ is $S_\lambda V$.

Proof.
$$S_\lambda V := \mathrm{Hom}_{S_n}(V_\lambda, V^{\otimes n}) \cong (V_\lambda)^* \otimes_{\mathbb{C}[S_n]} V^{\otimes n} \cong V_\lambda \otimes_{\mathbb{C}[S_n]} \bigotimes\nolimits^{n} V$$
$$\cong \mathbb{C}[S_n] \cdot c_\lambda \otimes_{\mathbb{C}[S_n]} \bigotimes\nolimits^{n} V \cong \mathbb{C}[S_n] \otimes_{\mathbb{C}[S_n]} V^{\otimes n} \cdot c_\lambda \cong \bigotimes\nolimits^{n} V \cdot c_\lambda$$
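A concrete instance of the theorem (again my own numerical check, not from the text): for $\lambda = (2,1)$ acting on $(\mathbb{C}^2)^{\otimes 3}$, the image of $c_\lambda = a_\lambda b_\lambda$ should be $S_\lambda(\mathbb{C}^2)$, whose dimension is 2, the number of semistandard tableaux of shape $(2,1)$ with entries in $\{1,2\}$.

```python
import itertools
import numpy as np

def perm_op(sigma, d, n):
    """Matrix of the place permutation sigma on (C^d)^{tensor n}."""
    dim = d ** n
    M = np.zeros((dim, dim))
    for idx in itertools.product(range(d), repeat=n):
        src = sum(i * d ** k for k, i in enumerate(idx))
        out = tuple(idx[sigma[k]] for k in range(n))
        dst = sum(i * d ** k for k, i in enumerate(out))
        M[dst, src] = 1.0
    return M

def compose(p, q):
    """Group multiplication: (p o q)(k) = p(q(k))."""
    return tuple(p[k] for k in q)

d, n = 2, 3
e, s12, s13 = (0, 1, 2), (1, 0, 2), (2, 1, 0)
# c_lambda = a_lambda * b_lambda = (e + (12))(e - (13)) for lambda = (2,1):
# expanding gives e - (13) + (12) - (12)(13).
terms = {e: 1, s13: -1, s12: 1, compose(s12, s13): -1}
C = sum(coef * perm_op(g, d, n) for g, coef in terms.items())

rank_c = np.linalg.matrix_rank(C)   # dim S_{(2,1)}(C^2) = 2
print(rank_c)
```

Working the $8 \times 8$ matrix out by hand, the image is spanned by $e_0 \otimes e_0 \otimes e_1 - e_1 \otimes e_0 \otimes e_0$ and $e_0 \otimes e_1 \otimes e_1 - e_1 \otimes e_1 \otimes e_0$, confirming the rank.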

Thus, we are now fully equipped to explain the functorial properties of $S_\lambda$. Let $V, W, U$ be $\mathbb{C}$-vector spaces (or $\mathbb{C}$-algebras) and $\xi : V \to W$ and $\eta : W \to U$ be linear maps (or $\mathbb{C}$-algebra homomorphisms). We can define $S_\lambda \xi : S_\lambda V \to S_\lambda W$ as the restriction of $\xi^{\otimes n}$, and similarly $S_\lambda \eta$ as the restriction of $\eta^{\otimes n}$. Thus we get a commutative diagram
$$\begin{array}{ccccc}
\bigotimes^n V & \xrightarrow{\ \xi^{\otimes n}\ } & \bigotimes^n W & \xrightarrow{\ \eta^{\otimes n}\ } & \bigotimes^n U \\
\downarrow b_\lambda & & \downarrow b_\lambda & & \downarrow b_\lambda \\
\bigotimes_{i=1}^{k'} \bigwedge^{\lambda'_i}(V) & \longrightarrow & \bigotimes_{i=1}^{k'} \bigwedge^{\lambda'_i}(W) & \longrightarrow & \bigotimes_{i=1}^{k'} \bigwedge^{\lambda'_i}(U) \\
\downarrow a_\lambda & & \downarrow a_\lambda & & \downarrow a_\lambda \\
c_\lambda\big(\bigotimes^n V\big) & \longrightarrow & c_\lambda\big(\bigotimes^n W\big) & \longrightarrow & c_\lambda\big(\bigotimes^n U\big) \\
\| & & \| & & \| \\
S_\lambda V & \xrightarrow{\ S_\lambda(\xi)\ } & S_\lambda W & \xrightarrow{\ S_\lambda(\eta)\ } & S_\lambda U
\end{array}$$
where the middle rows carry the identifications $b_\lambda(\bigotimes^n V) \cong \bigotimes_{i=1}^{k'} \bigwedge^{\lambda'_i}(V)$ of Lemma 8.7.2.

Therefore, $S_\lambda : \mathbf{Vect}_{\mathbb{C}} \to \mathbf{Vect}_{\mathbb{C}}$ is a functor from the category of complex vector spaces to itself, mapping $V \mapsto S_\lambda V$ and, for (say) $f$ a $\mathbb{C}$-linear map, $f \mapsto S_\lambda(f)$ as defined before. Using a similar argument, one could show that (if equipped with a bilinear product) $S_\lambda$ is also a functor $S_\lambda : \mathbb{C}\text{-}\mathbf{Alg} \to \mathbb{C}\text{-}\mathbf{Alg}$.
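The reason $S_\lambda \xi$ is well defined is that $\xi^{\otimes n}$ commutes with the $S_n$-action by place permutations, hence with $c_\lambda$, and so maps the image $S_\lambda V$ into $S_\lambda W$. This commutation is easy to confirm numerically (my own sketch, not from the text; `perm_op` builds the matrix of a place permutation):

```python
import itertools
import numpy as np

def perm_op(sigma, d, n):
    """Matrix of the place permutation sigma on (C^d)^{tensor n}."""
    dim = d ** n
    M = np.zeros((dim, dim))
    for idx in itertools.product(range(d), repeat=n):
        src = sum(i * d ** k for k, i in enumerate(idx))
        out = tuple(idx[sigma[k]] for k in range(n))
        dst = sum(i * d ** k for k, i in enumerate(out))
        M[dst, src] = 1.0
    return M

d, n = 2, 3
rng = np.random.default_rng(0)
f = rng.standard_normal((d, d))
F = np.kron(np.kron(f, f), f)   # f^{tensor 3} acting on (C^2)^{tensor 3}

# f^{tensor n} commutes with every place permutation, hence with c_lambda,
# so it restricts to the image S_lambda V -- this restriction is S_lambda(f).
commutes = all(np.allclose(F @ perm_op(s, d, n), perm_op(s, d, n) @ F)
               for s in itertools.permutations(range(n)))
print(commutes)
```

Since all three Kronecker factors are the same matrix $f$, the index convention of `np.kron` and of `perm_op` agree entry by entry, and the commutation holds exactly.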
Example 8.7.3. Let $\lambda_1 = (1, \cdots, 1)$. Then $c_{\lambda_1} = \sum_{\sigma \in S_n} \mathrm{sign}(\sigma)\,\sigma = b_{\lambda_1}$. Thus $S_{\lambda_1} V \cong \bigwedge^n(V)$. Similarly, for $\lambda_2 = (n)$ we have $c_{\lambda_2} = \sum_{\sigma \in S_n} \sigma = a_{\lambda_2}$. Therefore $S_{\lambda_2} V \cong \mathrm{Sym}^n V$. Therefore, as a representation of $GL(V)$, Schur-Weyl Duality gives us a decomposition
$$V^{\otimes n} \cong \bigoplus_{|\lambda| = n} S_\lambda V = \bigwedge\nolimits^{n}(V) \oplus \mathrm{Sym}^n V \oplus \bigoplus_{\text{other}} S_\lambda V$$
where by "other" we mean any partition of $n$ except for the ones corresponding to the trivial and alternating representations of $S_n$.
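The decomposition can be checked by dimension count for $V = \mathbb{C}^2$ and $n = 3$ (my own check, not from the text; recall from the Schur-Weyl theorem that each $S_\lambda V$ occurs with multiplicity $\dim V_\lambda$, the number of standard tableaux of shape $\lambda$): $1 \cdot 4 + 2 \cdot 2 + 1 \cdot 0 = 8 = 2^3$.

```python
import itertools
import numpy as np

def perm_op(sigma, d, n):
    """Matrix of the place permutation sigma on (C^d)^{tensor n}."""
    dim = d ** n
    M = np.zeros((dim, dim))
    for idx in itertools.product(range(d), repeat=n):
        src = sum(i * d ** k for k, i in enumerate(idx))
        out = tuple(idx[sigma[k]] for k in range(n))
        dst = sum(i * d ** k for k, i in enumerate(out))
        M[dst, src] = 1.0
    return M

def parity(sigma):
    return (-1) ** sum(sigma[i] > sigma[j]
                       for i in range(len(sigma)) for j in range(i + 1, len(sigma)))

d, n = 2, 3
S3 = list(itertools.permutations(range(n)))

a_triv = sum(perm_op(s, d, n) for s in S3)              # c_{(3)}: full symmetriser
b_sign = sum(parity(s) * perm_op(s, d, n) for s in S3)  # c_{(1,1,1)}: antisymmetriser
e, s12, s13 = (0, 1, 2), (1, 0, 2), (2, 1, 0)
g = tuple(s12[k] for k in s13)                          # group product (12)(13)
c_std = (perm_op(e, d, n) - perm_op(s13, d, n)
         + perm_op(s12, d, n) - perm_op(g, d, n))       # c_{(2,1)}

dims = [int(np.linalg.matrix_rank(m)) for m in (a_triv, c_std, b_sign)]
mults = [1, 2, 1]   # dim V_lambda = number of standard tableaux of shape lambda
total = sum(m * r for m, r in zip(mults, dims))
print(dims, total)   # [4, 2, 0], total = 8 = 2**3
```

Note that $\bigwedge^3(\mathbb{C}^2) = 0$, so the sign partition contributes nothing here, exactly as the rank computation shows.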
Chapter 9

Conclusions

In the first part of this dissertation we proved several key results in the theory of associative algebras: we deduced that $\mathbb{R}$, $\mathbb{C}$, and the Hamilton quaternions $\mathbb{H}$ are the only finite-dimensional real division algebras, and proved that every semisimple algebra is isomorphic to a direct sum of matrix algebras over division algebras. We then moved on to show structure theorems for indecomposable and projective modules.

Later chapters focused their attention on two essential concepts in modern algebra: central simple algebras and Brauer groups. We started with a general discussion of well-known theorems by some of the greatest algebraists of the last century, such as the Density Theorem (Jacobson), the Noether-Skolem Theorem and the Double Centraliser Theorem, to subsequently define the Brauer group $\mathrm{Br}(F)$ of a field $F$ and the Morita equivalence relation. In the last chapter of Part I we introduced two powerful concepts in the theory of central simple algebras: splitting fields and relative Brauer groups for extensions $E/F$, which arise as subgroups of $\mathrm{Br}(F)$. We showed that a field extension $E/F$ is a splitting field for a central simple $F$-algebra $A$ if and only if $A$ is Morita equivalent to another central simple algebra $B$ such that $E$ is a maximal subfield of $B$. Furthermore, we used these concepts to prove several results in Galois theory: notably, we proved that if $A$ is a central simple algebra, then $A$ is Morita equivalent to another central simple algebra $B$ which contains a maximal subfield $E \subset B$ such that $E/F$ is a Galois extension. As a corollary, the Brauer group $\mathrm{Br}(F)$ of a field $F$ is equal to the union of all the relative Brauer groups $\mathrm{Br}(E/F)$ such that $E/F$ is a Galois extension.

In Part II we proved the Schur-Weyl Duality Theorem, a remarkable result which links the representation theory of the groups $S_n$ and $GL(V)$. In particular, as a representation of $GL(V) \times S_n$, there is a decomposition of $\bigotimes^n V$ whose summands range over all irreducible representations of $S_n$ tensored with irreducible representations of $GL(V)$. We showed that an equivalent statement holds for the Lie algebra $\mathfrak{gl}(V)$.

To prove this theorem we provided all necessary background in Representation Theory, Combinatorics and Lie Theory, and proved some key results: we proved Maschke's Theorem and showed that the irreducible characters form a basis for the algebra $\mathcal{F}(G)$ of class functions. Then we proceeded to introduce the concepts of a Young tableau and a Specht module, and proved the notable statement that every irreducible representation of $S_n$ is isomorphic to a Specht module $V_\lambda$ for some partition $\lambda$ of $n$. We then presented the concepts of an analytic manifold and a Lie group, and constructed the associated Lie algebra of a Lie group.

Lastly, we discussed some properties of the Schur functor $S_\lambda$, a functor from the category of complex vector spaces (or complex algebras) to itself. In particular, we showed that $S_\lambda V$ is the image of the Young symmetriser $c_\lambda$ on $\bigotimes^n V$.

Acknowledgements

Firstly, I would like to express my immense gratitude towards Professor Shepherd-Barron for his advice and
patience during the last months. I have learnt an immense amount of Mathematics due to his recommendations
and comments. It is incredibly motivational to know that such a remarkable mathematician believes in me.
Furthermore, I would also like to thank my Representation Theory tutor Ashwin Iyengar for helpful advice on
representations of Lie algebras.

I am deeply grateful to all of my friends here in London, with whom I have experienced some of the best
years of my life, for their constant support.

And indeed, I am forever indebted to my family and my mates back in Madrid, especially my brother Gonzalo and the Bron lads. I know that no matter what happens, I will always have a home to come back to.

Bibliography

[Ami55] Shimshon A. Amitsur. Generic splitting fields of central simple algebras. Annals of Mathematics, 62(1):8–43, 1955.

[Bas68] Hyman Bass. Algebraic K-Theory. W.A. Benjamin, INC., 1968.

[Bec16] Karim Johannes Becher. Splitting fields of central simple algebras of exponent two. Journal of Pure
and Applied Algebra, 220(10):3450–3453, 2016.

[Bel16] Gwyn Bellamy. Lie groups, Lie algebras and their representations. University of Glasgow Lecture Notes, 2016. Available at https://www.maths.gla.ac.uk/~gbellamy/lie.pdf.

[BSC17] Ben Blum-Smith and Samuel Coskey. The fundamental theorem on symmetric polynomials: History's first whiff of Galois theory. The College Mathematics Journal, 48(1):18–29, 2017.

[BT13] Tsvi Benson-Tilsen. Notes on Specht modules. 2013. Available at https://www.researchgate.net/publication/258920325_NOTES_ON_SPECHT_MODULES.

[Cla12] Pete L. Clark. Non-commutative algebra. Lecture notes, 2012. Available at http://math.uga.edu/~pete/noncommutativealgebra.pdf.

[Coa05] Tom Coates. The tensor product of vector spaces. Harvard University Lecture Notes, 2005. Available at http://abel.math.harvard.edu/archive/25b_spring_05/tensor.pdf.

[Cona] Keith Conrad. Tensor products 1. University of Connecticut Lecture Notes. Available at https://kconrad.math.uconn.edu/blurbs/linmultialg/tensorprod.pdf.

[Conb] Keith Conrad. Tensor products 2. University of Connecticut Lecture Notes. Available at https://kconrad.math.uconn.edu/blurbs/linmultialg/tensorprod2.pdf.

[CS03] John H Conway and Derek A Smith. On quaternions and octonions. CRC Press, 2003.

[DF04] David Steven Dummit and Richard M Foote. Abstract algebra, volume 3. Wiley Hoboken, 2004.

[dJea] Johan de Jong et al. The Stacks Project, chapter 11.8 Splitting Fields. Available at https://stacks.math.columbia.edu/download/brauer.pdf#nameddest=074X.

[EGH+ 11] Pavel I Etingof, Oleg Golberg, Sebastian Hensel, Tiankai Liu, Alex Schwendner, Dmitry Vaintrob,
and Elena Yudovina. Introduction to representation theory, volume 59. American Mathematical
Soc., 2011.

[FH13] William Fulton and Joe Harris. Representation theory: a first course, volume 129. Springer Science
& Business Media, 2013.


[GN04] Kenneth Glass and Chi-Keung Ng. A simple proof of the hook-length formula. The American
Mathematical Monthly, 111(8):700–704, 2004.

[GS17] Philippe Gille and Tamás Szamuely. Central simple algebras and Galois cohomology, volume 165.
Cambridge University Press, 2017.

[GW09] Roe Goodman and Nolan R Wallach. Symmetry, representations, and invariants, volume 255.
Springer, 2009.

[Jam76] G.D. James. The irreducible representations of the symmetric groups. Bulletin of the London Mathematical Society, 8(3):229–232, 1976.

[KJ08] Alexander Kirillov Jr. An introduction to Lie groups and Lie algebras, volume 113. Cambridge
University Press, 2008.

[Lam13] Tsit-Yuen Lam. A first course in noncommutative rings, volume 131. Springer Science & Business
Media, 2013.

[Lew06] David W. Lewis. Quaternion algebras and the algebraic legacy of Hamilton's quaternions. Irish Math. Soc. Bull., 57:41–64, 2006.

[Pie82] R.S. Pierce. Associative Algebras. Springer-Verlag, 1982.

[Sai15] Manjil Saikia. Representations of the symmetric group. Thesis at the Abdus Salam International
Centre for Theoretical Physics, 2015.

[Ser09] Jean-Pierre Serre. Lie algebras and Lie groups: 1964 lectures given at Harvard University. Springer,
2009.

[Spe35] Wilhelm Specht. Die irreduziblen Darstellungen der symmetrischen Gruppe. Mathematische Zeitschrift, 39(1):696–711, 1935.

[Ste16] James Stevens. Schur-Weyl duality. University of Chicago REU, 2016.

[Van] R. Vandermolen. Schur functors. University of South Carolina. Available at http://people.math.sc.edu/robertv/schur.pdf.

[Voi17] John Voight. Quaternion algebras. 2017.

[Wes11] Quinton Westrich. Young’s natural representations of S4 . arXiv preprint arXiv:1112.0687, 2011.

[Yaf16] Andrei Yafaev. Group algebras. UCL Lecture Notes, 2016. Available at https://www.ucl.ac.uk/~ucahaya/GroupAlgebras.pdf.

[Zha08] Yufei Zhao. Young tableaux and the representations of the symmetric group. dimension, 3(1):3,
2008.
