Part II — Representation Theory
Lent 2016
These notes are not endorsed by the lecturers, and I have modified them (often
significantly) after lectures. They are nowhere near accurate representations of what
was actually lectured, and in particular, all errors are almost surely mine.
Character theory
Determination of a representation by its character. The group algebra, conjugacy classes,
and orthogonality relations. Regular representation. Permutation representations and
their characters. Induced representations and the Frobenius reciprocity theorem.
Mackey’s theorem. Frobenius’s Theorem. [12]
Tensor products
Tensor products of representations and products of characters. The character ring.
Tensor, symmetric and exterior algebras. [3]
1
Contents II Representation Theory
Contents
0 Introduction
1 Group actions
2 Basic definitions
4 Schur's lemma
5 Character theory
6 Proof of orthogonality
7 Permutation representations
11 Frobenius groups
12 Mackey theory
14 Burnside's theorem
0 Introduction
The course studies how groups act as groups of linear transformations on vector
spaces. Hopefully, you understand all the words in this sentence. If so, this is a
good start.
In our case, groups are usually either finite groups or topological compact
groups (to be defined later). Topological compact groups are typically subgroups
of the general linear group over some infinite field. It turns out the tools we
have for finite groups often work well for these particular kinds of infinite groups.
The vector spaces are always finite-dimensional, and usually over C.
Prerequisites of this course include knowledge of group theory (as much as the
IB Groups, Rings and Modules course), linear algebra, and, optionally, geometry,
Galois theory and metric and topological spaces. There is one lemma where
we must use something from Galois theory, but if you don’t know about Galois
theory, you can just assume the lemma to be true without bothering yourself
too much about the proofs.
1 Group actions
We start by reviewing some basic group theory and linear algebra.
Given θ ∈ GL(V ) and a choice of basis of V , write Aθ for the matrix of θ. Then
Aθ ∈ GLn (F), the general linear group of invertible n × n matrices. It is easy to see the
following:
Proposition. As groups, GL(V ) ≅ GLn (F), with the isomorphism given by
θ ↦ Aθ .
Of course, picking a different basis of V gives a different isomorphism to
GLn (F), but we have the following fact:
Proposition. Matrices A1 , A2 represent the same element of GL(V ) with respect
to different bases if and only if they are conjugate, namely there is some X ∈
GLn (F) such that
A2 = XA1 X −1 .
Recall that tr(A) = ∑ᵢ aᵢᵢ , where A = (aᵢⱼ) ∈ Mₙ(F), is the trace of A. A
nice property of the trace is that it doesn’t notice conjugacy:
Proposition.
tr(XAX −1 ) = tr(A).
Hence we can define the trace of an operator tr(θ) = tr(Aθ ), which is
independent of our choice of basis. This is an important result. When we study
representations, we will have matrices flying all over the place, which are scary.
Instead, we often just look at the traces of these matrices. This reduces our
problem of studying matrices to plain arithmetic.
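As a quick numerical sanity check (my own illustration, not from the lectures), here is a sketch verifying tr(XAX⁻¹) = tr(A) for randomly chosen matrices; the choice of matrices is arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    X = rng.standard_normal((3, 3))
    while abs(np.linalg.det(X)) < 1e-6:      # ensure X is invertible
        X = rng.standard_normal((3, 3))

    # The trace does not notice conjugacy.
    assert abs(np.trace(X @ A @ np.linalg.inv(X)) - np.trace(A)) < 1e-9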
Cm = ⟨x : x^m = 1⟩.
This also occurs naturally as Z/mZ under addition, and as the group of mth roots
of unity in C. We can view this as a subgroup of GL1 (C) ≅ C^×. Alternatively,
this is the group of rotation symmetries of a regular m-gon in R², and can be
viewed as a subgroup of GL2 (R).
Definition (Dihedral group D2m ). The dihedral group D2m of order 2m is
D2m = ⟨x, y : x^m = y² = 1, yxy⁻¹ = x⁻¹⟩.
This is the symmetry group of a regular m-gon. The xⁱ are the rotations and
the xⁱy are the reflections. For example, in D8 , x is rotation by π/2 and y is a
reflection.
This group can be viewed as a subgroup of GL2 (R), but since it also acts on
the vertices, it can be viewed as a subgroup of Sm .
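To make this concrete, here is a small sketch (my own, assuming the standard realisation of D2m by rotation and reflection matrices) checking the defining relations inside GL2(R):

    import numpy as np

    def rotation(m, i):
        """Matrix of x^i: rotation by 2*pi*i/m."""
        t = 2 * np.pi * i / m
        return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

    def reflection(m, i):
        """Matrix of x^i y: reflect in the x-axis, then rotate by 2*pi*i/m."""
        y = np.array([[1.0, 0.0], [0.0, -1.0]])
        return rotation(m, i) @ y

    m = 4                                   # D8: symmetries of the square
    x, y = rotation(m, 1), reflection(m, 0)
    I = np.eye(2)
    # Defining relations: x^m = y^2 = 1 and y x y^{-1} = x^{-1}.
    assert np.allclose(np.linalg.matrix_power(x, m), I)
    assert np.allclose(y @ y, I)
    assert np.allclose(y @ x @ np.linalg.inv(y), np.linalg.inv(x))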
Recall that the centralizer of g in G is CG (g) = {x ∈ G : xg = gx}, and that an
action of G on a set X assigns to each g ∈ G and x ∈ X an element gx ∈ X such that
(i) 1x = x
(ii) g(hx) = (gh)x
The group action can also be characterised in terms of a homomorphism.
Lemma. Given an action of G on X, we obtain a homomorphism θ : G →
Sym(X), where Sym(X) is the group of all permutations of X.
Proof. For g ∈ G, define θ(g) = θg ∈ Sym(X) as the function X → X by x 7→ gx.
This is indeed a permutation of X because θg−1 is an inverse.
Moreover, for any g1 , g2 ∈ G, we get θg1 g2 = θg1 θg2 , since (g1 g2 )x = g1 (g2 x).
2 Basic definitions
We now start doing representation theory. We boringly start by defining a
representation. In fact, we will come up with several equivalent definitions of a
representation. As always, G will be a finite group and F will be a field, usually
C.
Definition (Representation). Let V be a finite-dimensional vector space over
F. A (linear) representation of G on V is a group homomorphism
ρ = ρV : G → GL(V ).
i.e. no matter which way we go from V (top left) to V ′ (bottom right), we still
get the same map.
We say ϕ intertwines ρ and ρ0 . We write HomG (V, V 0 ) for the F-space of all
these maps.
Definition (G-isomorphism). A G-homomorphism is a G-isomorphism if ϕ is
bijective.
ρ′ = ϕρϕ⁻¹. (†)
ρⱼ(xⁱ) = ω^{ij},
gϕ(v) = ϕ(gv).
ρg (W ) ≤ W
for all g ∈ G.
Obviously, {0} and V are G-subspaces. These are the trivial G-subspaces.
Definition (Irreducible/simple representation). A representation ρ is irreducible
or simple if there are no proper non-zero G-subspaces.
Example. Any 1-dimensional representation of G is necessarily irreducible, but
the converse does not hold, or else life would be very boring. We will later see
that D8 has a two-dimensional irreducible complex representation.
3 Complete reducibility and Maschke's theorem
This is in some sense an averaging operator, averaging over what ρ(g) does. Here
we need the field to have characteristic zero so that 1/|G| is well-defined. In fact,
this theorem holds as long as char F ∤ |G|.
For simplicity of expression, we drop the ρ's, and simply write
q̄ : v ↦ (1/|G|) ∑_{g∈G} g q(g⁻¹v).
We first claim that q̄ has image in W . This is true since for v ∈ V , q(g −1 v) ∈ W ,
and gW ≤ W . So this is a little bit like a projection.
Next, we claim that for w ∈ W , we have q̄(w) = w. This follows from the
fact that q itself fixes W . Since W is G-invariant, we have g −1 w ∈ W for all
w ∈ W . So we get
q̄(w) = (1/|G|) ∑_{g∈G} g q(g⁻¹w) = (1/|G|) ∑_{g∈G} g g⁻¹w = (1/|G|) ∑_{g∈G} w = w.
We now put g′ = hg. Since h is invertible, summing over all g is the same as
summing over all g′. So we get
(1/|G|) ∑_{g′∈G} g′ q(g′⁻¹(hv)) = q̄(hv).
We are pretty much done. We finally show that ker q̄ is G-invariant. If v ∈ ker q̄
and h ∈ G, then q̄(hv) = hq̄(v) = 0. So hv ∈ ker q̄.
Thus
V = im q̄ ⊕ ker q̄ = W ⊕ ker q̄
is a G-subspace decomposition.
The crux of the whole proof is the definition of q̄. Once we have that,
everything else follows easily.
Yet, for the whole proof to work, we need 1/|G| to exist, which in particular
means G must be a finite group. There is no obvious way to generalize this to
infinite groups. So let's try a different proof.
The second proof uses inner products, and hence we must take F = C. This
can be generalized to infinite compact groups, as we will later see.
Recall the definition of an inner product:
Definition (Hermitian inner product). For V a complex space, ⟨ · , · ⟩ is a
Hermitian inner product if
(i) ⟨v, w⟩ = \overline{⟨w, v⟩} (Hermitian)
(ii) ⟨v, λ₁w₁ + λ₂w₂⟩ = λ₁⟨v, w₁⟩ + λ₂⟨v, w₂⟩ (sesquilinear)
(iii) ⟨v, v⟩ > 0 if v ≠ 0 (positive definite)
Definition (G-invariant inner product). An inner product h · , · i is in addition
G-invariant if
hgv, gwi = hv, wi.
⟨gv, gw⟩ = 0
for all g ∈ G, w ∈ W.
Thus for all w′ ∈ W, pick w = g⁻¹w′ ∈ W, and this shows ⟨gv, w′⟩ = 0.
Hence gv ∈ W^⊥.
Hence if there is a G-invariant inner product on any complex G-space V ,
then we get another proof of Maschke’s theorem.
Theorem (Weyl’s unitary trick). Let ρ be a complex representation of a finite
group G on the complex vector space V . Then there is a G-invariant Hermitian
inner product on V .
Recall that the unitary group of V (with respect to ⟨ · , · ⟩) consists of those
f ∈ GL(V ) satisfying ⟨f(v), f(w)⟩ = ⟨v, w⟩ for all v, w ∈ V.
FG = ⟨e_g : g ∈ G⟩,
It is not hard to see this is a G-homomorphism. We are now going to exploit the
fact that V is irreducible. Thus, since im θ is a G-subspace of V and non-zero,
we must have im θ = V . Also, ker θ is a G-subspace of FG. Now let W be
the G-complement of ker θ in FG, which exists by Maschke’s theorem. Then
W ≤ FG is a G-subspace and
FG = ker θ ⊕ W.
W ≅ FG/ker θ ≅ im θ = V.
More generally, G doesn’t have to just act on the vector space generated by
itself. If G acts on any set, we can take that space and create a space acted on
by G.
Definition (Permutation representation). Let F be a field, and let G act on a
set X. Let FX = ⟨e_x : x ∈ X⟩ with a G-action given by
g ∑_x a_x e_x = ∑_x a_x e_{gx}.
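Permutation representations are easy to realise concretely as permutation matrices. Below is a small sketch (my own illustration; the group and action are arbitrary choices): G = S3 acting on X = {0, 1, 2}.

    import numpy as np
    from itertools import permutations

    def perm_matrix(sigma):
        """Matrix of rho(sigma) on FX: sends e_x to e_{sigma(x)}."""
        n = len(sigma)
        M = np.zeros((n, n))
        for x in range(n):
            M[sigma[x], x] = 1
        return M

    # All of S3, acting on X = {0, 1, 2}.
    reps = {sigma: perm_matrix(sigma) for sigma in permutations(range(3))}

    # Homomorphism check: rho(sigma tau) = rho(sigma) rho(tau),
    # where (sigma tau)(x) = sigma(tau(x)).
    for s in reps:
        for t in reps:
            st = tuple(s[t[x]] for x in range(3))
            assert np.array_equal(reps[st], reps[s] @ reps[t])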
4 Schur’s lemma
The topic of this chapter is Schur’s lemma, an easy yet extremely useful lemma
in representation theory.
Theorem (Schur’s lemma).
(i) Assume V and W are irreducible G-spaces over a field F. Then any
G-homomorphism θ : V → W is either zero or an isomorphism.
(ii) If F is algebraically closed, and V is an irreducible G-space, then any
G-endomorphism V → V is a scalar multiple of the identity map ιV .
Proof.
(i) Let θ : V → W be a G-homomorphism between irreducibles. Then ker θ is
a G-subspace of V , and since V is irreducible, either ker θ = 0 or ker θ = V .
Similarly, im θ is a G-subspace of W , and as W is irreducible, we must
have im θ = 0 or im θ = W . Hence either ker θ = V , in which case θ = 0,
or ker θ = 0 and im θ = W , i.e. θ is a bijection.
(ii) Since F is algebraically closed, θ has an eigenvalue λ. Then θ − λιV is a
singular G-endomorphism of V . So by (i), it must be the zero map. So
θ = λιV .
Recall that the F-space HomG (V, W ) is the space of all G-homomorphisms
V → W . If V = W , we write EndG (V ) for the G-endomorphisms of V .
Corollary. If V, W are irreducible complex G-spaces, then
(
1 V, W are G-isomorphic
dimC HomG (V, W ) =
0 otherwise
Proof. If V and W are not isomorphic, then the only possible map between
them is the zero map by Schur’s lemma.
Otherwise, suppose V ∼ = W and let θ1 , θ2 ∈ HomG (V, W ) be both non-
zero. By Schur’s lemma, they are isomorphisms, and hence invertible. So
θ2−1 θ1 ∈ EndG (V ). Thus θ2−1 θ1 = λιV for some λ ∈ C. Thus θ1 = λθ2 .
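For a concrete representation, HomG(V, V) is the solution space of the linear equations ρ(g)M = Mρ(g), so the corollary can be checked by brute force. Here is a sketch (my own, using the two-dimensional complex representation of D8 mentioned earlier, realised by explicit matrices):

    import numpy as np

    # Generators of the 2-dimensional complex representation of D8:
    # r = rotation by pi/2, s = a reflection.
    r = np.array([[0, -1], [1, 0]], dtype=complex)
    s = np.array([[1, 0], [0, -1]], dtype=complex)

    def dim_endomorphisms(gens):
        """dim of {M : gM = Mg for all generators g}, via a null space."""
        n = gens[0].shape[0]
        rows = []
        for g in gens:
            # gM - Mg = 0 is linear in the entries of M (row-major vec).
            rows.append(np.kron(g, np.eye(n)) - np.kron(np.eye(n), g.T))
        A = np.vstack(rows)
        return n * n - np.linalg.matrix_rank(A)

    print(dim_endomorphisms([r, s]))     # 1, as Schur's lemma predicts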
σ : Z(G) → C×
z 7→ µg
θg : V → V
v 7→ gv
gv = λg v
       1    x    x²   x³
ρ1     1    1    1    1
ρ2     1    i   −1   −i
ρ3     1   −1    1   −1
ρ4     1   −i   −1    i
(ii) Consider the Klein four group G = V4 = ⟨x1⟩ × ⟨x2⟩. The irreducible
representations are
       1    x1   x2   x1x2
ρ1     1    1    1    1
ρ2     1    1   −1   −1
ρ3     1   −1    1   −1
ρ4     1   −1   −1    1
These are also known as character tables, and we will spend quite a lot of
time computing these for non-abelian groups.
Note that there is no "natural" one-to-one correspondence between the
elements of G and the representations of G (for G finite abelian). If we choose
an isomorphism G ≅ C_{n1} × · · · × C_{nr}, then we can identify the two sets, but it
depends on the choice of the isomorphism (while the decomposition is unique,
we can pick a different generator of, say, C_{n1} and get a different isomorphism to
the same decomposition).
Isotypical decompositions
Recall that we proved we can decompose any G-representation into a sum of
irreducible representations. Is this decomposition unique? If it isn’t, can we say
anything about, say, the size of the irreducible representations, or the number of
factors in the decomposition?
We know any diagonalizable endomorphism α : V → V for a vector space V
gives us a vector space decomposition
V = ⊕_λ V(λ),
where
V (λ) = {v ∈ V : α(v) = λv}.
This is canonical in that it depends on α alone, and nothing else.
If V is moreover a G-representation, how does this tie in to the decomposition
of V into the irreducible representations?
Let’s do an example.
Example. Consider G = D6 ≅ S3 = ⟨r, s : r³ = s² = 1, rs = sr⁻¹⟩. We have
previously seen that each irreducible representation has dimension at most 2.
We spot at least three irreducible representations:
1   trivial          r ↦ 1,   s ↦ 1
S   sign             r ↦ 1,   s ↦ −1
W   2-dimensional
The last representation is the action of D6 on R2 in the natural way, i.e. the
rotations and reflections of the plane that corresponds to the symmetries of the
triangle. It is helpful to view this as a complex representation in order to make
the matrix look nice. The 2-dimensional representation (ρ, W ) is defined by
W = C², where r and s act on W as
ρ(r) = [ω 0; 0 ω²],   ρ(s) = [0 1; 1 0],
and ω = e^{2πi/3} is a primitive cube root of unity. We will now show that these are indeed
all the irreducible representations, by decomposing any representation into sum
of these.
So let's decompose an arbitrary representation. Let (ρ′, V ) be any complex
representation of G. Since ρ′(r) has order dividing 3, it is diagonalizable with
eigenvalues among 1, ω, ω². We diagonalize ρ′(r), and then V splits as a vector space into
the eigenspaces
V = V(1) ⊕ V(ω) ⊕ V(ω²).
Since srs⁻¹ = r⁻¹, we know ρ′(s) preserves V(1) and interchanges V(ω) and
V(ω²).
Now we decompose V(1) into ρ′(s) eigenspaces, with eigenvalues ±1. Since r
has to act trivially on these eigenspaces, V(1) splits into sums of copies of the
irreducible representations 1 and S.
For the remaining mess, choose a basis v₁, · · · , vₙ of V(ω), and let vⱼ′ =
ρ′(s)vⱼ. Then ρ′(s) acts on the two-dimensional space ⟨vⱼ, vⱼ′⟩ as [0 1; 1 0], while
ρ′(r) acts as [ω 0; 0 ω²]. This means V(ω) ⊕ V(ω²) decomposes into copies
of W.
We did this for D6 by brute force. How do we generalize this? We first have
the following lemma:
Lemma. Let V, V1 , V2 be G-vector spaces over F. Then
(i) HomG (V, V1 ⊕ V2 ) ≅ HomG (V, V1 ) ⊕ HomG (V, V2 )
(ii) HomG (V1 ⊕ V2 , V ) ≅ HomG (V1 , V ) ⊕ HomG (V2 , V ).
Proof. The proof is to write down the obvious homomorphisms and inverses.
Define the projection map
πi : V1 ⊕ V2 → Vi ,
|{j : Vⱼ ≅ S}| = dim HomG (S, V ).
5 Character theory
In topology, we want to classify spaces. To do so, we come up with invariants of
spaces, like the number of holes. Then we know that a torus is not the same
as a sphere. Here, we want to attach invariants to a representation ρ of a finite
group G on V .
One thing we might want to do is to look at the matrix coefficients of ρ(g).
However, this is bad, since this is highly highly highly basis dependent. It is not
a true invariant. We need to do something better than that.
Let F = C, and G be a finite group. Let ρ = ρV : G → GL(V ) be a
representation of G. The clue is to look at the characteristic polynomial of the
matrix. The coefficients are functions of the eigenvalues — on one extreme, the
determinant is the product of all eigenvalues; on the other, the trace is the sum
of all of them. Surprisingly, it is the trace that works. We don't
have to bother ourselves with the other coefficients.
Definition (Character). The character of a representation ρ : G → GL(V ),
written χρ = χ_V = χ, is defined as
χ(g) = tr ρ(g).
χV (hgh−1 ) = χV (g)
(iii) χV (g⁻¹) = \overline{χV (g)}.
(iv) For two representations V, W , we have
χV ⊕W = χV + χW .
These results, despite being rather easy to prove, are very useful, since they
save us a lot of work when computing the characters of representations.
Proof.
(i) Obvious since ρV (1) = idV .
(ii) Let Rg be the matrix representing g. Then
Proof. Fix g, and pick a basis of eigenvectors of ρ(g). Then the matrix of ρ(g)
is diagonal, say
ρ(g) = diag(λ₁, · · · , λₙ).
Hence
|χ(g)| = |∑ λᵢ| ≤ ∑ |λᵢ| = ∑ 1 = dim V = χ(1).
In the triangle inequality, we have equality if and only if all the λi ’s are equal,
to λ, say. So ρ(g) = λI. Since all the λi ’s are roots of unity, so is λ.
And, if χ(g) = χ(1), then since ρ(g) = λI, taking the trace gives χ(g) = λχ(1).
So λ = 1, i.e. ρ(g) = I. So g ∈ ker ρ.
It should be clear (especially using the original formula) that ⟨χ, χ′⟩ = \overline{⟨χ′, χ⟩}.
So when restricted to characters, this is a real symmetric form.
The main result is the following theorem:
Theorem (Completeness of characters). The complex irreducible characters of
G form an orthonormal basis of C(G), namely
(i) If ρ : G → GL(V ) and ρ0 : G → GL(V 0 ) are two complex irreducible
representations affording characters χ, χ0 respectively, then
⟨χ, χ′⟩ = 1 if ρ and ρ′ are isomorphic representations, and 0 otherwise.
χ = m₁χ₁ + · · · + m_kχ_k,
mⱼ = ⟨χ, χⱼ⟩.
This is not true for infinite groups. For example, if G = Z, then the
representations
1 ↦ [1 0; 0 1],   1 ↦ [1 1; 0 1]
are non-isomorphic, but have the same character 2.
Corollary (Irreducibility criterion). If ρ : G → GL(V ) is a complex repre-
sentation of G affording the character χ, then ρ is irreducible if and only if
⟨χ, χ⟩ = 1.
But ⟨χ, χ⟩ = 1. So exactly one of the mⱼ is 1, while the others are all zero, and
χ = χⱼ. So χ is irreducible.
Theorem. Let ρ1 , · · · , ρk be the irreducible complex representations of G, and
let their dimensions be n1 , · · · , nk . Then
|G| = ∑ᵢ nᵢ².
Recall that for abelian groups, each irreducible character has dimension 1,
and there are |G| representations. So this is trivially satisfied.
aⱼ = ⟨π_reg, χⱼ⟩ = (1/|G|) ∑_{g∈G} π_reg(g) \overline{χⱼ(g)} = (1/|G|) · |G| χⱼ(1) = χⱼ(1).
Then we get
|G| = π_reg(1) = ∑ⱼ aⱼ χⱼ(1) = ∑ⱼ χⱼ(1)² = ∑ⱼ nⱼ².
From this proof, we also see that each irreducible representation is a subrep-
resentation of the regular representation.
Proof. The irreducible characters and the characteristic functions of the conju-
gacy classes are both bases of C(G).
Corollary. Two elements g1 , g2 are conjugate if and only if χ(g1 ) = χ(g2 ) for
all irreducible characters χ of G.
Proof. If g1 , g2 are conjugate, since characters are class functions, we must have
χ(g1 ) = χ(g2 ).
For the other direction, let δ be the characteristic function of the class of g1 .
Then since δ is a class function, we can write
δ = ∑ⱼ mⱼ χⱼ,
Alternatively, we can view this as S3 and use the fact that two elements are
conjugate if and only if they have the same cycle types. We have found the
three representations: 1, the trivial representation; S, the sign; and W, the
two-dimensional representation. In W, each reflection sr^j acts by a matrix with
eigenvalues ±1. So the sum of eigenvalues is 0, hence χ_W(sr^j) = 0. It is also
not hard to see
r^m ↦ [cos(2mπ/3)  −sin(2mπ/3); sin(2mπ/3)  cos(2mπ/3)].
So χ_W(r^m) = 2cos(2mπ/3) = −1 for m = 1, 2.
Fortunately, after developing some theory, we will not need to find all the
irreducible representations in order to compute the character table.
1 C2 C3
1 1 1 1
S 1 −1 1
χw 2 0 −1
We see that the sum of the squares of the first column is 1² + 1² + 2² = 6 = |D6|,
as expected. We can also check that W is genuinely an irreducible representation.
Noting that the centralizers of elements in C1, C2 and C3 have sizes 6, 2, 3, the inner
product is
⟨χ_W, χ_W⟩ = 2²/6 + 0²/2 + (−1)²/3 = 1,
as expected.
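The same check can be done by machine. A small sketch (my own, assuming the table above with class sizes 1, 3, 2 for the classes of 1, the reflections, and the rotations):

    # Character table of D6 = S3; columns: class of 1, a reflection, a rotation.
    class_sizes = [1, 3, 2]
    group_order = sum(class_sizes)             # 6
    chi_triv = [1, 1, 1]
    chi_sign = [1, -1, 1]
    chi_W    = [2, 0, -1]

    def inner(chi, psi):
        """<chi, psi> = (1/|G|) sum_g chi(g) conj(psi(g)), summed by class."""
        return sum(s * x * complex(y).conjugate()
                   for s, x, y in zip(class_sizes, chi, psi)) / group_order

    assert inner(chi_W, chi_W) == 1            # chi_W is irreducible
    assert inner(chi_triv, chi_W) == 0         # distinct irreducibles are orthogonal
    assert inner(chi_sign, chi_sign) == 1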
So we now need to prove orthogonality.
6 Proof of orthogonality
We will do the proof in parts.
Theorem (Row orthogonality relations). If ρ : G → GL(V ) and ρ0 : G →
GL(V 0 ) are two complex irreducible representations affording characters χ, χ0
respectively, then
⟨χ, χ′⟩ = 1 if ρ and ρ′ are isomorphic representations, and 0 otherwise.
Proof. We fix a basis of V and of V 0 . Write R(g), R0 (g) for the matrices of ρ(g)
and ρ0 (g) with respect to these bases respectively. Then by definition, we have
⟨χ′, χ⟩ = (1/|G|) ∑_{g∈G} χ′(g⁻¹) χ(g)
        = (1/|G|) ∑_{g∈G} ∑_{1≤i≤n′, 1≤j≤n} R′(g⁻¹)ᵢᵢ R(g)ⱼⱼ.
For any linear map ϕ : V → V′, write ϕ̃ = (1/|G|) ∑_{g∈G} ρ′(g⁻¹) ϕ ρ(g); one
checks directly that ϕ̃ ∈ HomG(V, V′).
(i) Now we first consider the case where ρ, ρ0 is not isomorphic. Then by
Schur’s lemma, we must have ϕ̃ = 0 for any linear ϕ : V → V 0 .
We now pick a very nice ϕ, where everything disappears. We let ϕ = εαβ ,
the operator having matrix Eαβ with entries 0 everywhere except 1 in the
(α, β) position.
Then ε̃_{αβ} = 0. So for each i, j, we have
(1/|G|) ∑_{g∈G} (R′(g⁻¹) E_{αβ} R(g))ᵢⱼ = 0.
Taking α = i and β = j, and summing over all i and j, gives ⟨χ′, χ⟩ = 0.
(ii) Now suppose ρ, ρ0 are isomorphic. So we might as well take χ = χ0 , V = V 0
and ρ = ρ0 . If ϕ : V → V is linear, then ϕ̃ ∈ EndG (V ).
We first claim that tr ϕ̃ = tr ϕ. To see this, we have
tr ϕ̃ = (1/|G|) ∑_{g∈G} tr(ρ(g⁻¹)ϕρ(g)) = (1/|G|) ∑_{g∈G} tr ϕ = tr ϕ,
using the fact that traces don’t see conjugacy (and ρ(g −1 ) = ρ(g)−1 since
ρ is a group homomorphism).
By Schur's lemma, we know ϕ̃ = λι_V for some λ ∈ C (which depends on
ϕ). Then if n = dim V, we get
λ = (1/n) tr ϕ.
Let ϕ = ε_{αβ}. Then tr ϕ = δ_{αβ}. Hence
ε̃_{αβ} = (1/|G|) ∑_g ρ(g⁻¹) ε_{αβ} ρ(g) = (1/n) δ_{αβ} ι.
After learning about tensor products and duals later on, one can provide a
shorter proof of this result as follows:
Alternative proof. Consider two representation spaces V and W . Then
⟨χ_W, χ_V⟩ = (1/|G|) ∑_{g∈G} \overline{χ_W(g)} χ_V(g) = (1/|G|) ∑_{g∈G} χ_{V⊗W*}(g).
dim HomG (V, W ) copies of the trivial representation. By Schur's lemma, this
number is 1 if V ≅ W, and 0 if V ≇ W.
So it suffices to show that if χ is a non-trivial irreducible character, then
∑_{g∈G} χ(g) = 0.
But if ρ affords χ, then any element in the image of ∑_{g∈G} ρ(g) is fixed by G.
By irreducibility, the image must be trivial. So ∑_{g∈G} ρ(g) = 0.
What we have done is show the orthogonality of the rows. There is a similar
one for the columns:
Theorem (Column orthogonality relations). We have
∑_{i=1}^k \overline{χᵢ(gⱼ)} χᵢ(g_ℓ) = δ_{jℓ} |CG(g_ℓ)|.
This is analogous to the fact that if a matrix has orthonormal rows, then it
also has orthonormal columns. We get the extra factor of |CG (g` )| because of
the way we count.
This has an easy corollary, which we have already shown previously using
the regular representation:
Corollary.
|G| = ∑_{i=1}^k χᵢ(1)².
Proof of column orthogonality. Consider the character table X = (χᵢ(gⱼ)). We
know
δᵢⱼ = ⟨χᵢ, χⱼ⟩ = ∑_ℓ (1/|CG(g_ℓ)|) χᵢ(g_ℓ) \overline{χⱼ(g_ℓ)}.
Then
X D⁻¹ X̄ᵀ = I_{k×k},
where
D = diag(|CG(g₁)|, · · · , |CG(g_k)|).
Since X is square, it follows that D⁻¹X̄ᵀ is the inverse of X. So X̄ᵀX = D,
which is exactly the theorem.
The proof requires that X is square, i.e. there are k many irreducible repre-
sentations. So we need to prove the last part of the completeness of characters.
Theorem. Each class function of G can be expressed as a linear combination
of irreducible characters of G.
So λ = 0, i.e. ∑_g f(g)ρ(g) = 0, the zero endomorphism on V. This is valid for
any irreducible representation, and hence for every representation, by complete
reducibility.
In particular, take ρ = ρ_reg, where ρ_reg(g) : e₁ ↦ e_g for each g ∈ G. Hence
∑_g f(g)ρ_reg(g) : e₁ ↦ ∑_g f(g)e_g.
Since this is zero, it follows that we must have ∑_g f(g)e_g = 0. Since the e_g's are
linearly independent, we must have f(g) = 0 for all g ∈ G, i.e. f = 0.
7 Permutation representations
In this chapter, we are going to study permutation representations in a bit more
detail, since we can find some explicit formulas for the character of them. This
is particularly useful if we are dealing with the symmetric group.
Let G be a finite group acting on a finite set X = {x1 , · · · , xn } (sometimes
known as a G-set). We define CX to be the complex vector space with basis
{e_{x₁}, · · · , e_{xₙ}}. More explicitly,
CX = {∑ⱼ aⱼ e_{xⱼ} : aⱼ ∈ C}.
We get a representation
ρ_X : G → GL(CX)
g ↦ ρ(g),   where ρ(g) : e_x ↦ e_{gx}.
If the action of G on X has orbits X₁, · · · , X_ℓ, then the permutation character decomposes as
π_X = π_{X₁} + · · · + π_{X_ℓ},
definition, we have
⟨π_X, 1⟩ = (1/|G|) ∑_g π_X(g)
        = (1/|G|) |{(g, x) ∈ G × X : gx = x}|
        = (1/|G|) ∑_{x∈X} |G_x|,
So done.
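This is just orbit counting: the multiplicity of the trivial character in π_X is the number of orbits. A small sketch (my own example: C4 rotating 2-colourings of the vertices of a square):

    from itertools import product

    def rotate(colouring, k):
        return tuple(colouring[(i - k) % 4] for i in range(4))

    X = list(product([0, 1], repeat=4))
    group = range(4)                   # rotation by k positions, k = 0, 1, 2, 3
    order = 4

    # <pi_X, 1> = (1/|G|) |{(g, x) : gx = x}|, which equals the number of orbits.
    fixed_pairs = sum(1 for k in group for x in X if rotate(x, k) == x)
    orbits = {frozenset(rotate(x, k) for k in group) for x in X}
    assert fixed_pairs / order == len(orbits) == 6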
Lemma. Let G act on the sets X1 , X2 . Then G acts on X1 × X2 by
For the last representation we can find the dimension by computing 24 − (1² +
1² + 3² + 3²) = 2². So it has dimension 2. To obtain the whole of χ5, we can
use column orthogonality — for example, we let the entry in the second column
be x. Then column orthogonality says 1 + 1 − 3 − 3 + 2x = 0. So x = 2. In the
end, we find
              1   3           8        6           6
              1   (1 2)(3 4)  (1 2 3)  (1 2 3 4)   (1 2)
trivial  χ1   1   1           1        1           1
sign     χ2   1   1           1       −1          −1
πX − 1G  χ3   3  −1           0       −1           1
χ3 χ2    χ4   3  −1           0        1          −1
         χ5   2   2          −1        0           0
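The finished table can be double-checked programmatically. A sketch (my own, assuming the class sizes 1, 3, 8, 6, 6 in the column order above):

    # Character table of S4; columns: 1, (12)(34), (123), (1234), (12).
    sizes = [1, 3, 8, 6, 6]
    G = sum(sizes)                              # 24
    table = [
        [1,  1,  1,  1,  1],                    # trivial
        [1,  1,  1, -1, -1],                    # sign
        [3, -1,  0, -1,  1],                    # chi_3
        [3, -1,  0,  1, -1],                    # chi_4
        [2,  2, -1,  0,  0],                    # chi_5
    ]

    # Row orthogonality: <chi_i, chi_j> = delta_ij.
    for i, chi in enumerate(table):
        for j, psi in enumerate(table):
            val = sum(s * a * b for s, a, b in zip(sizes, chi, psi)) / G
            assert val == (1 if i == j else 0)

    # Column orthogonality: sum_i chi_i(g_j) chi_i(g_l) = delta_jl |C_G(g_l)|.
    centralisers = [G // s for s in sizes]      # 24, 8, 3, 4, 4
    for j in range(5):
        for l in range(5):
            val = sum(row[j] * row[l] for row in table)
            assert val == (centralisers[l] if j == l else 0)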
We know 𝒞_{Sn}(g) ⊇ 𝒞_{An}(g), but we need not have equality, since the elements needed
to conjugate g to h ∈ 𝒞_{Sn}(g) might not be in An. For example, consider
σ = (1 2 3) ∈ A3. We have 𝒞_{A3}(σ) = {σ} and 𝒞_{S3}(σ) = {σ, σ⁻¹}.
We know An is an index 2 subgroup of Sn. So the centralizer C_{An}(g) = C_{Sn}(g) ∩ An
has index 1 or 2 in C_{Sn}(g). If the index is 1 (the centralizers agree), then
|𝒞_{An}(g)| = ½|𝒞_{Sn}(g)|, i.e. the Sn-class splits into two An-classes. Otherwise, the class sizes are equal.
A useful criterion for determining which case happens is the following:
8 Normal subgroups and lifting
We say χ̃ lifts to χ.
Proof. Since a representation of G is just a homomorphism G → GL(V ), and
the composition of homomorphisms is a homomorphism, it follows immediately
that ρ as defined in the lemma is a representation.
(i) We can compute
⟨χ, χ⟩ = (1/|G|) ∑_{g∈G} χ(g) \overline{χ(g)}
       = (1/|G|) ∑_{gN∈G/N} ∑_{k∈N} χ(gk) \overline{χ(gk)}
       = (1/|G|) ∑_{gN∈G/N} ∑_{k∈N} χ̃(gN) \overline{χ̃(gN)}
       = (1/|G|) ∑_{gN∈G/N} |N| χ̃(gN) \overline{χ̃(gN)}
       = (1/|G/N|) ∑_{gN∈G/N} χ̃(gN) \overline{χ̃(gN)}
       = ⟨χ̃, χ̃⟩.
So hχ, χi = 1 if and only if hχ̃, χ̃i = 1. So ρ is irreducible if and only if ρ̃
is irreducible.
(ii) We can directly compute
χ(g) = tr ρ(g) = tr(ρ̃(gN )) = χ̃(gN )
for all g ∈ G.
(iii) To see that χ and χ̃ have the same degree, we just notice that χ(1) = χ̃(N).
Alternatively, to show they have the same dimension, just note that ρ and
ρ̃ map to the general linear group of the same vector space.
(iv) To show this is a bijection, suppose χ̃ is a character of G/N and χ is its
lift to G. We need to show the kernel contains N . By definition, we know
χ̃(N ) = χ(1). Also, if k ∈ N , then χ(k) = χ̃(kN ) = χ̃(N ) = χ(1). So
N ≤ ker χ.
Now let χ be a character of G with N ≤ ker χ. Suppose ρ : G → GL(V )
affords χ. Define
ρ̃ : G/N → GL(V )
gN 7→ ρ(g)
G′ = ⟨[a, b] : a, b ∈ G⟩,
where [a, b] = aba⁻¹b⁻¹, is the unique minimal normal subgroup of G such that
G/G′ is abelian. So if G/N is abelian, then G′ ≤ N.
Moreover, G has precisely ℓ = |G : G′| representations of dimension 1, all
with kernel containing G′, and they are obtained by lifting from G/G′.
In particular, by Lagrange's theorem, ℓ | |G|.
Proof. Consider [a, b] = aba⁻¹b⁻¹ ∈ G′. Then for any h ∈ G, we have
h(aba⁻¹b⁻¹)h⁻¹ = ((ha)b(ha)⁻¹b⁻¹)(bhb⁻¹h⁻¹) = [ha, b][b, h] ∈ G′,
h[a₁, b₁][a₂, b₂] · · · [aₙ, bₙ]h⁻¹ = (h[a₁, b₁]h⁻¹)(h[a₂, b₂]h⁻¹) · · · (h[aₙ, bₙ]h⁻¹),
So by symmetry, all 3-cycles are in Sn′. Since the 3-cycles generate An, we know
we must have Sn′ = An.
Since Sn/An ≅ C2, we get ℓ = 2. So Sn has exactly two linear
characters, namely the trivial character and the sign.
Lemma. G is not simple if and only if χ(g) = χ(1) for some irreducible character
χ ≠ 1_G and some 1 ≠ g ∈ G. Any normal subgroup of G is the intersection of
the kernels of some of the irreducible characters of G, i.e. N = ∩ ker χᵢ.
9 Dual spaces and tensor products
Thus we get
ρ*(g) εⱼ = λⱼ⁻¹ εⱼ.
So
χ_{ρ*}(g) = ∑ⱼ λⱼ⁻¹ = χ_ρ(g⁻¹).
{vi ⊗ wj : 1 ≤ i ≤ m, 1 ≤ j ≤ n}.
Thus
V ⊗ W = {∑_{i,j} λᵢⱼ vᵢ ⊗ wⱼ : λᵢⱼ ∈ F},
with the "obvious" addition and scalar multiplication.
If
v = ∑ᵢ αᵢvᵢ ∈ V,   w = ∑ⱼ βⱼwⱼ,
we define
v ⊗ w = ∑_{i,j} αᵢβⱼ (vᵢ ⊗ wⱼ).
Note that not all elements of V ⊗ W are of this form. Some are genuine linear
combinations. For example, v₁ ⊗ w₁ + v₂ ⊗ w₂ cannot be written as a tensor
product of an element in V and another in W.
We can imagine our formula for the tensor of two elements as expanding
(∑ᵢ αᵢvᵢ) ⊗ (∑ⱼ βⱼwⱼ),
Proof.
(i) Let v = ∑ αᵢvᵢ and w = ∑ βⱼwⱼ. Then
(λv) ⊗ w = ∑_{i,j} (λαᵢ)βⱼ vᵢ ⊗ wⱼ,
λ(v ⊗ w) = λ ∑_{i,j} αᵢβⱼ vᵢ ⊗ wⱼ,
v ⊗ (λw) = ∑_{i,j} αᵢ(λβⱼ) vᵢ ⊗ wⱼ,
φ:V ×W →V ⊗W
(v, w) 7→ v ⊗ w.
Proof. Writing
vₖ = ∑ᵢ αᵢₖ eᵢ,   w_ℓ = ∑ⱼ βⱼ_ℓ fⱼ,
we have
vₖ ⊗ w_ℓ = ∑_{i,j} αᵢₖ βⱼ_ℓ eᵢ ⊗ fⱼ.
ρ ⊗ ρ′ : G → GL(V ⊗ V′)
by
(ρ ⊗ ρ′)(g) : ∑ λᵢⱼ vᵢ ⊗ wⱼ ↦ ∑ λᵢⱼ (ρ(g)vᵢ) ⊗ (ρ′(g)wⱼ)
for all g ∈ G.
ρ(g)vᵢ = λᵢvᵢ,   ρ′(g)wⱼ = µⱼwⱼ.
Then
χ_{ρ⊗ρ′}(g) = ∑_{i,j} λᵢµⱼ = (∑ᵢ λᵢ)(∑ⱼ µⱼ) = χ_ρ(g)χ_{ρ′}(g).
S 2 V = {x ∈ V ⊗2 : τ (x) = x}
Λ2 V = {x ∈ V ⊗2 : τ (x) = −x}.
The exterior square is also known as the anti-symmetric square and wedge power.
Lemma. For any G-space V , S 2 V and Λ2 V are G-subspaces of V ⊗2 , and
V ⊗2 = S 2 V ⊕ Λ2 V.
{vi vj = vi ⊗ vj + vj ⊗ vi : 1 ≤ i ≤ j ≤ n},
ρ(g)vi = λi vi .
We’ll be lazy and just write gvi instead of ρ(g)vi . Then, acting on Λ2 V , we get
g(vi ∧ vj ) = λi λj vi ∧ vj .
Thus
χ_{Λ²V}(g) = ∑_{1≤i<j≤n} λᵢλⱼ.
Since the answer involves the square of the character, let's write that down:
(χ(g))² = (∑ λᵢ)² = ∑ λᵢ² + 2 ∑_{i<j} λᵢλⱼ = χ(g²) + 2 ∑_{i<j} λᵢλⱼ.
Rearranging, we get
χ_{Λ²V}(g) = ½(χ(g)² − χ(g²)),   and similarly   χ_{S²V}(g) = ½(χ(g)² + χ(g²)).
              1   3           8        6           6
              1   (1 2)(3 4)  (1 2 3)  (1 2 3 4)   (1 2)
trivial  χ1   1   1           1        1           1
sign     χ2   1   1           1       −1          −1
πX − 1G  χ3   3  −1           0       −1           1
χ3 χ2    χ4   3  −1           0        1          −1
         χ5
           1   3           8        6           6
           1   (1 2)(3 4)  (1 2 3)  (1 2 3 4)   (1 2)
χ3²        9   1           0        1           1
χ3(g²)     3   3           0       −1           3
S²χ3       6   2           0        0           2
Λ²χ3       3  −1           0        1          −1
                     1   3           8        6           6
                     1   (1 2)(3 4)  (1 2 3)  (1 2 3 4)   (1 2)
trivial         χ1   1   1           1        1           1
sign            χ2   1   1           1       −1          −1
πX − 1G         χ3   3  −1           0       −1           1
χ3 χ2           χ4   3  −1           0        1          −1
S²χ3 − 1 − χ3   χ5   2   2          −1        0           0
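The rule χ_{Λ²}(g) = ½(χ(g)² − χ(g²)) and χ_{S²}(g) = ½(χ(g)² + χ(g²)) used above is easy to automate. A small sketch (my own) reproducing the table for χ3; the map g ↦ g² sends the classes of 1, (1 2)(3 4), (1 2 3), (1 2 3 4), (1 2) to the classes indexed 0, 0, 2, 1, 0 respectively.

    chi3 = [3, -1, 0, -1, 1]          # columns: 1, (12)(34), (123), (1234), (12)
    square_class = [0, 0, 2, 1, 0]    # index of the class containing g^2

    chi3_sq     = [x * x for x in chi3]
    chi3_of_gsq = [chi3[j] for j in square_class]
    sym2 = [(a + b) // 2 for a, b in zip(chi3_sq, chi3_of_gsq)]
    alt2 = [(a - b) // 2 for a, b in zip(chi3_sq, chi3_of_gsq)]

    assert sym2 == [6, 2, 0, 0, 2]
    assert alt2 == [3, -1, 0, 1, -1]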
9.4 Characters of G × H
We have looked at characters of direct products a bit before, when we decomposed
an abelian group into a product of cyclic groups. We will now consider this in
the general case.
Proposition. Let G and H be two finite groups with irreducible characters
χ1 , · · · , χk and ψ1 , · · · , ψr respectively. Then the irreducible characters of the
direct product G × H are precisely
{χi ψj : 1 ≤ i ≤ k, 1 ≤ j ≤ r},
where
(χi ψj )(g, h) = χi (g)ψj (h).
Proof. Take ρ : G → GL(V ) affording χ, and ρ′ : H → GL(W ) affording ψ.
Then define
ρ ⊗ ρ′ : G × H → GL(V ⊗ W )
(g, h) ↦ ρ(g) ⊗ ρ′(h),
where
(ρ(g) ⊗ ρ′(h))(vᵢ ⊗ wⱼ) = ρ(g)vᵢ ⊗ ρ′(h)wⱼ.
This is a representation of G × H on V ⊗ W, and χ_{ρ⊗ρ′} = χψ. The proof is
similar to the case where ρ, ρ′ are both representations of G, and we will not
repeat it here.
⟨χᵢψⱼ, χᵣψₛ⟩_{G×H} = ⟨χᵢ, χᵣ⟩_G ⟨ψⱼ, ψₛ⟩_H = δᵢᵣ δⱼₛ.
So it follows that the {χᵢψⱼ} are distinct and irreducible. We need to show this is
complete. We can consider
∑_{i,j} (χᵢψⱼ)(1)² = ∑_{i,j} χᵢ(1)²ψⱼ(1)² = (∑ᵢ χᵢ(1)²)(∑ⱼ ψⱼ(1)²) = |G||H| = |G × H|.
So done.
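In terms of tables, the character table of G × H is obtained by multiplying the two tables entrywise over pairs of classes. A sketch (my own; the helper name is made up for the example) building the table of S3 × C2 and checking row orthogonality:

    def product_table(table_G, sizes_G, table_H, sizes_H):
        """Character table and class sizes of G x H from those of G and H."""
        sizes = [a * b for a in sizes_G for b in sizes_H]
        table = [[x * y for x in chi for y in psi]
                 for chi in table_G for psi in table_H]
        return table, sizes

    S3_table, S3_sizes = [[1, 1, 1], [1, -1, 1], [2, 0, -1]], [1, 3, 2]
    C2_table, C2_sizes = [[1, 1], [1, -1]], [1, 1]

    table, sizes = product_table(S3_table, S3_sizes, C2_table, C2_sizes)
    G = sum(sizes)                                     # 12
    for i, chi in enumerate(table):
        for j, psi in enumerate(table):
            val = sum(s * a * b for s, a, b in zip(sizes, chi, psi)) / G
            assert val == (1 if i == j else 0)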
TⁿV = V^{⊗n} = V ⊗ · · · ⊗ V   (n times).
First of all, there is an obvious Sₙ-action on this space, permuting the factors
of V^{⊗n}. For any σ ∈ Sₙ, we can define an action σ : V^{⊗n} → V^{⊗n} given by
v₁ ⊗ · · · ⊗ vₙ ↦ v_{σ(1)} ⊗ · · · ⊗ v_{σ(n)}.
Note that this is a left action only if we compose permutations right-to-left. If
we compose left-to-right instead, we have to use σ⁻¹ instead of σ (or use σ and
obtain a right action). In fact, this defines a linear representation of Sₙ on V^{⊗n}.
If V itself is a G-space, then we also get a G-action on V^{⊗n}. Let ρ : G →
GL(V ) be a representation. Then we obtain the action of G on V^{⊗n} by the
diagonal action
g · (v₁ ⊗ · · · ⊗ vₙ) = ρ(g)v₁ ⊗ · · · ⊗ ρ(g)vₙ.
Staring at this carefully, it is clear that these two actions commute with each other.
This rather simple innocent-looking observation is the basis of an important
theorem by Schur, and has many many applications. However, that would be
for another course.
Getting back on track, since the two actions commute, we can decompose
V ⊗n as an Sn -module, and each isotypical component is a G-invariant subspace
of V ⊗n . We don’t really need this, but it is a good generalization of what we’ve
had before.
The dimension is
dim ΛⁿV = 0 if n > d, and the binomial coefficient C(d, n) if n ≤ d, where d = dim V.
Details are left for the third example sheet.
T^•(V) = T(V) = ⊕_{n≥0} TⁿV,
with T⁰V = F by convention.
This is a vector space over F with the obvious addition and multiplication by
scalars. T (V ) is also a (non-commutative) (graded) ring with product x·y = x⊗y.
This is graded in the sense that if x ∈ T n V and y ∈ T m V , x·y = x⊗y ∈ T n+m V .
So exactly one of the nᵢ is ±1, while the others are all zero. So α = ±χᵢ for some i.
Finally, since α(1) > 0 and also χᵢ(1) > 0, we must have nᵢ = +1. So α = χᵢ.
10 Induction and restriction
⟨Res^G_H χ, ψ⟩ ≠ 0.
⟨Res^G_H π_reg, ψ⟩ = (|G|/|H|) ψ(1) ≠ 0.
If this sum has to be non-zero, then there must be some i such that ⟨Res^G_H χᵢ, ψ⟩ ≠ 0.
Lemma. Let χ be an irreducible character of G, and let
Res^G_H χ = ∑ᵢ cᵢ χᵢ,
Proof. We have
⟨Res^G_H χ, Res^G_H χ⟩_H = ∑ cᵢ².
However, by definition, we also have
⟨Res^G_H χ, Res^G_H χ⟩_H = (1/|H|) ∑_{h∈H} |χ(h)|².
1 = ⟨χ, χ⟩_G
  = (1/|G|) ∑_{g∈G} |χ(g)|²
  = (1/|G|) (∑_{h∈H} |χ(h)|² + ∑_{g∈G∖H} |χ(g)|²)
  = (|H|/|G|) ∑ cᵢ² + (1/|G|) ∑_{g∈G∖H} |χ(g)|²
  ≥ (|H|/|G|) ∑ cᵢ².
Ind^G_H ψ(g) = (1/|H|) ∑_{x∈G} ψ̊(x⁻¹gx),
where
ψ̊(y) = ψ(y) if y ∈ H, and 0 if y ∉ H.
We now write y = x⁻¹gx. Then summing over g is the same as summing over y.
Since ϕ is a class function on G, this becomes
= (1/(|G||H|)) ∑_{x,y∈G} ϕ(y) ψ̊(y)
= (1/|H|) ∑_{y∈H} ϕ(y) ψ(y)   (the sum over x contributes a factor of |G|, and ψ̊ vanishes outside H)
= ⟨ϕ_H, ψ⟩_H.
⟨Ind^G_H ψ, χ⟩ = ⟨ψ, Res^G_H χ⟩.
Since ψ and Res^G_H χ are characters, the right-hand side lies in Z≥0. Hence Ind^G_H ψ
is a linear combination of irreducible characters with non-negative coefficients,
and is hence a character.
Recall we denote the conjugacy class of g ∈ G by 𝒞_G(g), while the centralizer
is C_G(g). If we take a conjugacy class 𝒞_G(g) in G and restrict it to H, then
the result 𝒞_G(g) ∩ H need not be a conjugacy class, since the element needed
to conjugate x, y ∈ 𝒞_G(g) ∩ H need not be present in H, as is familiar from the
case of An ≤ Sn. However, we know that 𝒞_G(g) ∩ H is a union of conjugacy
classes in H, since elements conjugate in H are also conjugate in G.
Proposition. Let ψ be a character of H ≤ G, and let g ∈ G. Let
𝒞_G(g) ∩ H = ∪_{i=1}^m 𝒞_H(xᵢ),
a disjoint union of H-conjugacy classes with representatives x₁, · · · , xₘ. Then
Ind^G_H ψ(g) = |C_G(g)| ∑_{i=1}^m ψ(xᵢ)/|C_H(xᵢ)|.
This is all just group theory. Note that some people will think this proof
is excessive — everything shown is “obvious”. In some sense it is. Some steps
seem very obvious to certain people, but we are spelling out all the details so
that everyone is happy.
Proof. If m = 0, then {x ∈ G : x⁻¹gx ∈ H} = ∅. So ψ̊(x⁻¹gx) = 0 for all x. So
Ind^G_H ψ(g) = 0 by definition.
Now assume m > 0. For each i, we let
Xᵢ = {x ∈ G : x⁻¹gx ∈ 𝒞_H(xᵢ)}.
We fix some 1 ≤ i ≤ m. Choose some gᵢ ∈ G such that gᵢ⁻¹ggᵢ = xᵢ. This exists
by definition of xᵢ. So for every c ∈ C_G(g) and h ∈ H, we have
(cgᵢh)⁻¹ g (cgᵢh) = h⁻¹gᵢ⁻¹c⁻¹g c gᵢh.
We now use the fact that c commutes with g, since c ∈ C_G(g), to get
(cgᵢh)⁻¹ g (cgᵢh) = h⁻¹gᵢ⁻¹g gᵢh = h⁻¹xᵢh ∈ 𝒞_H(xᵢ). Hence
C_G(g)gᵢH ⊆ Xᵢ.
Conversely, if x ∈ Xᵢ, then x⁻¹gx = h⁻¹xᵢh = h⁻¹(gᵢ⁻¹ggᵢ)h for some h ∈ H. Thus
xh⁻¹gᵢ⁻¹ ∈ C_G(g), and so x ∈ C_G(g)gᵢh ⊆ C_G(g)gᵢH.
So we conclude
Xᵢ = C_G(g)gᵢH.
Thus, using some group theory magic, which we shall not prove, we get
|Xᵢ| = |C_G(g)gᵢH| = |C_G(g)||H| / |H ∩ gᵢ⁻¹C_G(g)gᵢ|.
Finally, we note
gᵢ⁻¹C_G(g)gᵢ = C_G(gᵢ⁻¹ggᵢ) = C_G(xᵢ).
Thus
|Xᵢ| = |H||C_G(g)| / |H ∩ C_G(xᵢ)| = |H||C_G(g)| / |C_H(xᵢ)|.
Dividing, we get
|Xᵢ|/|H| = |C_G(g)|/|C_H(xᵢ)|.
So done.
To clarify matters, if H, K ≤ G, then a double coset of H, K in G is a set
HxK = {hxk : h ∈ H, k ∈ K}
But tᵢHtᵢ⁻¹ is the stabilizer in G of the coset tᵢH ∈ X. So this is equal to
|fix_X(g)| = π_X(g).
By Frobenius reciprocity, we know
⟨π_X, 1_G⟩_G = ⟨Ind^G_H 1_H, 1_G⟩_G = ⟨1_H, 1_H⟩_H = 1.
So the degree of Ind^G_H(α) is 6. Also, the elements (1 2) and (1 2 3) are not
conjugate to anything in C4. So the character vanishes on those classes.
For (1 2)(3 4), only one of its three S4-conjugates lies in H (namely
(1 3)(2 4)). So, using the sizes of the centralizers, we obtain
Ind^G_H α((1 2)(3 4)) = 8 · (−1)/4 = −2.
For (1 2 3 4), it is conjugate to 6 elements of S4, two of which are in C4, namely
(1 2 3 4) and (1 4 3 2). So
Ind^G_H α((1 2 3 4)) = 4 · (i/4 + (−i)/4) = 0.
So we get the following character table:
              1   6       8        3            6
              1   (1 2)   (1 2 3)  (1 2)(3 4)   (1 2 3 4)
Ind^G_H(α)    6   0       0       −2            0
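These values can be double-checked by brute force from the definition Ind^G_H ψ(g) = (1/|H|) ∑_{x∈G} ψ̊(x⁻¹gx). A sketch (my own, using permutations of {0, 1, 2, 3} and α((0 1 2 3)) = i):

    from itertools import permutations

    def compose(s, t):                 # (s t)(x) = s(t(x))
        return tuple(s[t[x]] for x in range(4))

    def inverse(s):
        inv = [0] * 4
        for i, v in enumerate(s):
            inv[v] = i
        return tuple(inv)

    G = list(permutations(range(4)))
    c4 = (1, 2, 3, 0)                  # the 4-cycle (0 1 2 3)
    H = [(0, 1, 2, 3), c4, compose(c4, c4), compose(c4, compose(c4, c4))]
    alpha = {h: 1j ** k for k, h in enumerate(H)}   # faithful character of C4

    def induced(g):
        total = 0
        for x in G:
            y = compose(inverse(x), compose(g, x))  # x^{-1} g x
            if y in alpha:                          # psi-ring: 0 outside H
                total += alpha[y]
        return total / len(H)

    # Class representatives: 1, (0 1), (0 1 2), (0 1)(2 3), (0 1 2 3).
    reps = [(0, 1, 2, 3), (1, 0, 2, 3), (1, 2, 0, 3), (1, 0, 3, 2), (1, 2, 3, 0)]
    vals = [induced(g) for g in reps]
    assert all(abs(v - e) < 1e-9 for v, e in zip(vals, [6, 0, 0, -2, 0]))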
Induced representations
We have constructed a class function, and showed it is indeed the character of
some representation. However, we currently have no idea what this corresponding
representation is. Here we will try to construct such a representation explicitly.
This is not too enlightening, but it's nice to know it can be done. We will also
need this explicit construction when proving Mackey’s theorem later.
Let H ≤ G have index n, and let 1 = t₁, · · · , tₙ be a left transversal. Let W be
an H-space. Define the vector space
V = Ind^G_H W = W ⊕ (t₂ ⊗ W) ⊕ · · · ⊕ (tₙ ⊗ W),
where
tᵢ ⊗ W = {tᵢ ⊗ w : w ∈ W}.
So dim V = n dim W .
To define the G-action on V, let g ∈ G. Then for every i, there is a unique j
such that tⱼ⁻¹gtᵢ ∈ H, i.e. gtᵢH = tⱼH. We define
g(tᵢ ⊗ w) = tⱼ ⊗ ((tⱼ⁻¹gtᵢ)w).
Writing tᵢw for tᵢ ⊗ w, this reads
g(tᵢw) = tⱼ(tⱼ⁻¹gtᵢw).
t_ℓ((t_ℓ⁻¹g₁tⱼ)(tⱼ⁻¹g₂tᵢ)w) = t_ℓ((t_ℓ⁻¹(g₁g₂)tᵢ)w) = (g₁g₂)(tᵢw),
Ind^G_H W = W ⊕ t₂ ⊗ W ⊕ · · · ⊕ tₙ ⊗ W.
Thus we get
Ind^G_H ψ(g) = ∑_{i=1}^n ψ̊(tᵢ⁻¹gtᵢ).
Note that this construction is rather ugly. It could be made much nicer if we
knew a bit more algebra — we can write the induced module simply as
Ind^G_H W = FG ⊗_{FH} W,
11 Frobenius groups
We will now use character theory to prove some major results in finite group
theory. We first do Frobenius' theorem here. Later, we will prove Burnside's p^a q^b
theorem.
This is a theorem with lots and lots of proofs, but all of these proofs involve
some sort of representation theory — it seems like representation theory is an
unavoidable ingredient of it.
Theorem (Frobenius' theorem (1891)). Let G be a transitive permutation
group on a finite set X, with |X| = n. Assume that each non-identity element
of G fixes at most one element of X. Then the set of fixed-point-free elements
("derangements"), together with the identity,
K = {1} ∪ {g ∈ G : gα ≠ α for all α ∈ X},
is a normal subgroup of G of order n.
On the face of it, it is not clear K is even a subgroup at all. It turns out
normality isn’t really hard to prove — the hard part is indeed showing it is a
subgroup.
Note that we did not explicitly say G is finite. But these conditions imply
G ≤ Sn , which forces G to be finite.
θ = ψ^G − ψ(1)(1_H)^G + ψ(1)1_G.
Note that we chose the coefficients exactly so that the final property of θ
holds. This is a matter of computation:
                 1        h ∈ H ∖ {1}    K ∖ {1}
ψ^G              nψ(1)    ψ(h)           0
ψ(1)(1_H)^G      nψ(1)    ψ(1)           0
ψ(1)1_G          ψ(1)     ψ(1)           ψ(1)
θ                ψ(1)     ψ(h)           ψ(1)
The less obvious part is that θ is a character. From the way we wrote it, we
already know it is a virtual character. We then compute the inner product
⟨θ, θ⟩_G = (1/|G|) ∑_{g∈G} |θ(g)|²
        = (1/|G|) (∑_{g∈K} |θ(g)|² + ∑_{g∈G∖K} |θ(g)|²)
        = (1/|G|) (n|ψ(1)|² + n ∑_{1≠h∈H} |ψ(h)|²)
        = (1/|G|) · n ∑_{h∈H} |ψ(h)|²
        = (1/|G|) (n|H| ⟨ψ, ψ⟩_H)
        = 1.
Now let ψ₁, · · · , ψ_t be the irreducible characters of H, and let Θ = ∑ᵢ ψᵢ(1)θᵢ,
where θᵢ is the character constructed above from ψᵢ. Then we have
Θ(g) = |H| if g ∈ K, and 0 if g ∉ K.
From this, it follows that the kernel of the representation affording Θ is K, and
in particular K is a normal subgroup of G.
This is again a computation using column orthogonality. For 1 ≠ h ∈ H, we
have
Θ(h) = ∑_{i=1}^t ψᵢ(1)ψᵢ(h) = 0,
Note that J. Thompson (in his 1959 thesis) proved any finite group having
a fixed-point free automorphism of prime order is nilpotent. This implies K is
nilpotent, which means K is a direct product of its Sylow subgroups.
12 Mackey theory
We work over C. We wish to describe the restriction to a subgroup K ≤ G of an
induced representation Ind^G_H W, i.e. the representation Res^G_K Ind^G_H W. In general, K
and H can be unrelated, but in many applications, we have K = H, in which
case we can characterize when Ind^G_H W is irreducible.
It is quite helpful to first look at a special case, where W = 1_H is the trivial
representation. Thus Ind^G_H 1_H is the permutation representation of G on G/H
(the action on the left cosets of H in G).
Recall that by the orbit-stabilizer theorem, if G is transitive on the set X,
and H = G_α for some α ∈ X, then the action of G on X is isomorphic to the action
on G/H, namely the correspondence
gα ↔ gH
is a well-defined bijection, and commutes with the G-action (i.e. x(gα) =
(xg)α ↔ x(gH) = (xg)H).
We now consider the action of G on G/H and let K ≤ G. Then K also acts
on G/H, and G/H splits into K-orbits. The K-orbit of gH contains precisely
kgH for k ∈ K. So it is the double coset
KgH = {kgh : k ∈ K, h ∈ H}.
The set of double cosets K∖G/H partitions G, and the number of double cosets
is
|K∖G/H| = ⟨π_{G/K}, π_{G/H}⟩.
We don't need this, but it is true.
If it happens that H = K, and H is normal, then we just have K\G/H =
G/H.
What about stabilizers? Clearly, GgH = gHg −1 . Thus, restricting to the
action of K, we have KgH = gHg −1 ∩ K. We call Hg = KgH .
So by our correspondence above, the action of K on the orbit containing
gH is isomorphic to the action of K on K/(gHg −1 ∩ K) = K/Hg . From this,
and using the fact that Ind^G_H 1_H = C(G/H), we get the special case of Mackey's
theorem:
Proposition. Let G be a finite group and H, K ≤ G. Let g₁, · · · , g_k be
representatives of the double cosets K∖G/H. Then
Res^G_K Ind^G_H 1_H ≅ ⊕_{i=1}^k Ind^K_{gᵢHgᵢ⁻¹ ∩ K} 1.
For g ∈ G, write H_g = gHg⁻¹ ∩ K ≤ K, and let W_g denote the H_g-space with the same
underlying vector space as W, where x ∈ H_g acts by ρ(g⁻¹xg); here h = g⁻¹xg ∈ H by construction.
This is clearly well-defined. Since H_g ≤ K, we obtain an induced representation Ind^K_{H_g} W_g.
Theorem (Mackey's restriction formula). Let G be finite, H, K ≤ G, and W be an H-space. Then
Res^G_K Ind^G_H W = ⊕_{g∈S} Ind^K_{H_g} W_g,
where S is a set of representatives of the double cosets K∖G/H.
We will defer the proof of this for a minute or two. We will first derive some
corollaries of this, starting with the character version of the theorem.
Corollary. Let ψ be a character of a representation of H. Then
Res^G_K Ind^G_H ψ = ∑_{g∈S} Ind^K_{H_g} ψ_g,
Proof. We take K = H ⊴ G. So the double cosets are just left cosets. Also,
H_g = H for all g. Moreover, W_g is irreducible since W is irreducible.
So, by Mackey's irreducibility criterion, Ind^G_H W is irreducible precisely if
W ≇ W_g for all g ∈ G ∖ H. This is equivalent to ψ ≠ ψ_g.
Note that again we could check the conditions on a set of representatives of
(double) cosets. In fact, the isomorphism class of Wg (for g ∈ G) depends only
on the coset gH.
We now prove Mackey’s theorem.
Theorem (Mackey's restriction formula). In general, for K, H ≤ G, we let
S = {1, g₁, · · · , g_r} be a set of double coset representatives, so that
G = ∪ᵢ KgᵢH.
For g ∈ S, write H_g = gHg⁻¹ ∩ K and let W_g be the H_g-space defined as above, where
x ∈ H_g acts via h = g⁻¹xg ∈ H. Since H_g ≤ K, we obtain an induced representation
Ind^K_{H_g} W_g.
Let G be finite, H, K ≤ G, and W be an H-space. Then
Res^G_K Ind^G_H W = ⊕_{g∈S} Ind^K_{H_g} W_g.
This is possibly the hardest and most sophisticated proof in the course.
The idea is to “coarsen” this direct sum decomposition using double coset
representatives, by collecting together the t ⊗ W ’s with t ∈ KgH. We define
M
V (g) = t ⊗ W.
t∈KgH∩T
k · (t ⊗ w) = t0 ⊗ (ρ(t0−1 kt)w),
where t′⁻¹kt ∈ H.
Viewing V as a K-space (forgetting its whole G-structure), we have
M
ResGKV = V (g).
g∈S
The left hand side is what we want, but the right hand side looks absolutely
nothing like Ind^K_{H_g} W_g. So we need to show
V(g) = ⊕_{t∈KgH∩T} t ⊗ W ≅ Ind^K_{H_g} W_g,
as required.
as required.
Example. Let G = Sn and H = An. Consider a σ ∈ Sn. Then its conjugacy
class in Sn is determined by its cycle type.
If the class of σ splits into two classes in An, and χ is an irreducible character
of An that takes different values on the two classes, then by the irreducibility
criterion, Ind^{Sn}_{An} χ is irreducible.
13 Integrality in the group algebra
The claim is now that the class sums Cⱼ = ∑_{g∈𝒞ⱼ} g live in the center of CG (note that the center of the
group algebra is different from the group algebra of the center). Moreover, they
form a basis of Z(CG):
Indeed, suppose z = ∑_{g∈G} α_g g ∈ Z(CG). By definition, this commutes with all elements of CG. So for all h ∈ G, we must
have
α_{h⁻¹gh} = α_g.
So the function g ↦ α_g is constant on conjugacy classes of G. So we can write
αⱼ = α_g for g ∈ 𝒞ⱼ. Then
z = ∑_{j=1}^k αⱼ Cⱼ.
Since the Cⱼ span Z(CG), we can write
Cᵢ Cⱼ = ∑_ℓ a_{ijℓ} C_ℓ
for some complex numbers a_{ijℓ}. The claim is that a_{ijℓ} ∈ Z≥0 for all i, j, ℓ. To
see this, we fix g_ℓ ∈ 𝒞_ℓ. Then by definition of multiplication, we know
a_{ijℓ} = |{(x, y) ∈ 𝒞ᵢ × 𝒞ⱼ : xy = g_ℓ}|,
which is clearly a non-negative integer.
Let z ∈ Z(CG). Then ρ(z) commutes with all ρ(g) for g ∈ G. Hence, by
Schur’s lemma, we can write ρ(z) = λz I for some λz ∈ C. We then obtain the
algebra homomorphism
ω_ρ = ω_χ = ω : Z(CG) → C
z ↦ λ_z.
By definition, we have ρ(Cᵢ) = ω(Cᵢ)I. Taking traces of both sides, we know
∑_{g∈𝒞ᵢ} χ(g) = χ(1)ω(Cᵢ).
Hence, picking any gᵢ ∈ 𝒞ᵢ,
ω(Cᵢ) = |𝒞ᵢ| χ(gᵢ)/χ(1).
Why should we care about this? The thing we are looking at is in fact an
algebraic integer.
Lemma. The values of
ω_χ(Cᵢ) = |𝒞ᵢ| χ(gᵢ)/χ(1)
are algebraic integers.
where we sum over the irreducible characters of G. We will neither prove it nor
use it. But if you want to try, the key idea is to use column orthogonality.
Finally, we get to the main result:
Theorem. χⱼ(1) | |G| for every irreducible character χⱼ of G.
|G|/χ(1) = (1/χ(1)) ∑_{g∈G} χ(g)χ(g⁻¹)
         = (1/χ(1)) ∑_{i=1}^k |𝒞ᵢ| χ(gᵢ)χ(gᵢ⁻¹)
         = ∑_{i=1}^k (|𝒞ᵢ|χ(gᵢ)/χ(1)) χ(gᵢ⁻¹).
Now we notice
|𝒞ᵢ|χ(gᵢ)/χ(1)
is an algebraic integer, by the previous lemma. Also, χ(gᵢ⁻¹) is an algebraic
integer. So the whole mess is an algebraic integer, since algebraic integers are
closed under addition and multiplication.
But we also know |G|/χ(1) is rational. So it must be an integer!
14 Burnside’s theorem
Finally, we get to Burnside’s theorem.
Theorem (Burnside's p^a q^b theorem). Let p, q be primes, and let |G| = p^a q^b,
where a, b ∈ Z≥0 , with a + b ≥ 2. Then G is not simple.
Note that if a + b = 1 or 0, then the group is trivially simple.
In fact even more is true, namely that G is soluble, but this follows easily
from above by induction.
This result is the best possible in the sense that |A5| = 60 = 2² · 3 · 5, and
A5 is simple. So we cannot allow for three different prime factors (in fact, there
are exactly 8 simple groups whose order has exactly three prime factors). Also, if a = 0 or b = 0,
then G is a p-group, and p-groups have non-trivial center. So these cases are immediate.
Later, in 1963, Feit and Thompson proved that every group of odd order is
soluble. The proof was 255 pages long. We will not prove this.
In 1972, H. Bender found the first proof of Burnside’s theorem without the
use of representation theory, but the proof is much more complicated.
This theorem follows from two lemmas, and one of them involves some Galois
theory, and is hence non-examinable.
Lemma. Suppose
α = (1/m) ∑_{j=1}^m λⱼ
is an algebraic integer, where λⱼⁿ = 1 for all j and some n. Then either α = 0 or
|α| = 1.
Proof (non-examinable). Observe α ∈ F = Q(ε), where ε = e^{2πi/n} (since λⱼ ∈ F
for all j). We let G = Gal(F/Q). Then
|χ(g)| = χ(1) or 0.
Since χ(g) is the sum of deg χ = χ(1) many roots of unity, it suffices to show
that α is an algebraic integer.
By Bézout's theorem, there exist a, b ∈ Z such that
aχ(1) + b|𝒞| = 1.
So we can write
α = χ(g)/χ(1) = aχ(g) + b (χ(g)/χ(1))|𝒞|.
Since χ(g) and (χ(g)/χ(1))|𝒞| are both algebraic integers, we know α is.
−1/p = ∑_{χ≠1} (χ(1)/p) χ(g).
But this is both an algebraic integer and a rational number that is not an integer.
This is a contradiction.
We can now prove Burnside's theorem.
We can now prove Burnside’s theorem.
Theorem (Burnside's p^a q^b theorem). Let p, q be primes, and let |G| = p^a q^b,
where a, b ∈ Z≥0 , with a + b ≥ 2. Then G is not simple.
Proof. Let |G| = p^a q^b. If a = 0 or b = 0, then the result is trivial. Suppose
a, b > 0. We let Q ∈ Syl_q(G). Since Q is a non-trivial q-group, we know Z(Q) is non-trivial.
Hence there is some 1 ≠ g ∈ Z(Q). By definition of center, we know Q ≤ C_G(g).
Also, C_G(g) is not the whole of G, since the center of G is trivial. So
15 Representations of compact groups
S¹ = U(1) = {g ∈ C^× : |g| = 1} ≅ R/Z,
where the last isomorphism is an abelian group isomorphism via the map
x ↦ e^{2πix} : R/Z → S¹.
What happens if we just view S 1 as an abstract group, and not a topological
group? We can view R as a vector space over Q, and this has a basis (by Hamel’s
basis theorem), say, A ⊆ R. Moreover, we can assume the basis is ordered, and
we can assume 1 ∈ A.
As abelian groups, we then have
R ≅ Q ⊕ ⊕_{α∈A∖{1}} Qα,
R/Z ≅ Q/Z ⊕ ⊕_{α∈A∖{1}} Qα.
ρ : z 7→ z n
for some n ∈ Z.
This is a nice countable family of representations. To prove this, we need
two lemmas from real analysis.
ϕ(x) = e^{icx}
for some c ∈ R.
Proof. Let ε : (R, +) → S 1 be defined by x 7→ eix . This homomorphism wraps
the real line around S 1 with period 2π.
We now claim that given any continuous function ϕ : R → S 1 such that
ϕ(0) = 1, there exists a unique continuous lifting homomorphism ψ : R → R
such that
ε ◦ ψ = ϕ, ψ(0) = 0.
The lifting is constructed by starting with ψ(0) = 0, and then extending a small
interval at a time to get a continuous map R → R. We will not go into the
details. Alternatively, this follows from the lifting criterion from IID Algebraic
Topology.
That has taken nearly an hour, and we’ve used two not-so-trivial facts from
analysis. So we actually had to work quite hard to get this. But the result is
good. We have a complete list of representations, and we don’t have to fiddle
with things like characters.
Our next objective is to repeat this for SU(2), and this time we cannot just
get away with doing some analysis. We have to do representation theory properly.
So we study the general theory of representations of complex spaces.
In studying finite groups, we often took the "average" over the group via the
operation (1/|G|) ∑_{g∈G} (something). Of course, we can't do this now, since the
group is infinite. As we all know, the continuous analogue of summing is called
integration. Informally, we want to be able to write something like ∫_G dg.
This is actually a measure, called the “Haar measure”, and always exists as
long as you are compact and Hausdorff. However, we will not need or use this
result, since we can construct it by hand for S 1 and SU(2).
Definition (Haar measure). Let G be a compact topological group. A Haar measure
on G is a linear functional f ↦ ∫_G f(g) dg on continuous functions G → C that is
normalized, ∫_G 1 dg = 1, and translation-invariant, ∫_G f(xg) dg = ∫_G f(g) dg
for all x ∈ G.
Example. For a finite group G, we can define
∫_G f(g) dg = (1/|G|) ∑_{g∈G} f(g).
The division by the group order is the thing we need to make the normalization
hold.
Example. Let G = S¹. Then we can have
∫_G f(g) dg = (1/2π) ∫_0^{2π} f(e^{iθ}) dθ.
Again, division by 2π is the normalization required.
ρₙ : z ↦ zⁿ.
Any finite-dimensional representation V of S¹ has character
χ_V = ∑_{n∈Z} aₙ χₙ,
where aₙ ∈ Z≥0, and only finitely many aₙ are non-zero (since we are assuming
finite-dimensionality).
Actually, aₙ is the number of copies of ρₙ in the decomposition of V. We
can find the value of aₙ by computing
aₙ = ⟨χₙ, χ_V⟩ = (1/2π) ∫_0^{2π} e^{−inθ} χ_V(e^{iθ}) dθ.
Hence we know
χ_V(e^{iθ}) = ∑_{n∈Z} ((1/2π) ∫_0^{2π} χ_V(e^{iθ′}) e^{−inθ′} dθ′) e^{inθ}.
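Concretely, the multiplicities aₙ are just Fourier coefficients, and can be recovered by numerical integration. A sketch (my own; the representation is an arbitrary choice with χ_V = χ_{−1} + 2χ₃, so a_{−1} = 1 and a₃ = 2):

    import numpy as np

    theta = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
    chi_V = np.exp(-1j * theta) + 2 * np.exp(3j * theta)    # character of V

    def a(n):
        """a_n = (1/2pi) * integral of exp(-in theta) chi_V(e^{i theta})."""
        return np.mean(np.exp(-1j * n * theta) * chi_V)

    assert abs(a(-1) - 1) < 1e-9
    assert abs(a(3) - 2) < 1e-9
    assert abs(a(0)) < 1e-9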
Note that if
A = [a b; c d] ∈ SU(2),
then since det A = 1, we have
A⁻¹ = [d −b; −c a].
aā + bb̄ = 1.
Topologically, we know G ≅ S³ ⊆ C² ≅ R⁴.
Instead of thinking about C² in the usual vector space way, we can think of
it as embedded in M₂(C) via
H = {[z w; −w̄ z̄] : w, z ∈ C} ≤ M₂(C).
This is known as Hamilton's quaternion algebra. Then H is a 4-dimensional
Euclidean space (two components from z and two components from w), with a
norm on H given by
‖A‖² = det A.
We now see that SU(2) ≤ H is exactly the unit sphere ‖A‖² = 1 in H. If
A ∈ SU(2) and x ∈ H, then ‖Ax‖ = ‖x‖ since ‖A‖ = 1. So elements of G act
as isometries on the space.
After normalization (by 1/(2π²), the surface area of S³), the usual integration of functions on S³ defines
a Haar measure on G. It is an exercise on the last example sheet to write this
out explicitly.
We now look at conjugacy in G. We let
T = {[a 0; 0 ā] : a ∈ C, |a|² = 1} ≅ S¹.
This is a maximal torus in G, and it plays a fundamental role, since we happen
to know about S¹. We also have a favorite element
s = [0 1; −1 0] ∈ SU(2).
then {µ, µ−1 } = {λ, λ−1 }. Also, we can get the second matrix from the
first by conjugating with s.
has matrix
[a b; c d]
with respect to the standard basis {x, y} of V₁ = C².
More interestingly, when n = 2, we know
ρ₂([a b; c d])
has matrix
[a²    ab       b²
 2ac   ad + bc  2bd
 c²    cd       d²]
with respect to the basis x², xy, y² of V₂ = C³. We obtain the matrix by
computing, e.g.
for n ∈ Z.
Proof. If V is a representation of SU(2), then Res^{SU(2)}_T V is a representation
of T, and its character Res^{SU(2)}_T χ_V is the restriction of χ_V to T. But every
representation of T has its character of the given form. So done.
Notation. We write N[z, z⁻¹] for the set of all Laurent polynomials with
natural-number coefficients, i.e.
N[z, z⁻¹] = {∑ aₙ zⁿ : aₙ ∈ N, only finitely many aₙ non-zero}.
We further write
χₙ([z 0; 0 z⁻¹]) = zⁿ + z^{n−2} + · · · + z^{−n} = (z^{n+1} − z^{−(n+1)})/(z − z⁻¹),
where the last expression is valid unless z = ±1.
We can now state the result we are aiming for:
Theorem. The representations ρn : SU(2) → GL(Vn ) of dimension n + 1 are
irreducible for n ∈ Z≥0 .
Again, we get a complete set (completeness proven later). A complete list of
all irreducible representations, given in a really nice form. This is spectacular.
Proof. Let 0 6= W ≤ Vn be a G-invariant subspace, i.e. a subrepresentation of
Vn . We will show that W = Vn .
All we know about W is that it is non-zero. So we take some non-zero vector
of W .
Claim. Let
0 ≠ w = ∑_{j=0}^n rⱼ x^{n−j} yʲ ∈ W.
Since this is non-zero, there is some i such that rᵢ ≠ 0. The claim is that
x^{n−i} yⁱ ∈ W.
rⱼ(z^{n−2j} − z^{n−2i}).
x^{n−j} yʲ ∈ W
This is the Weyl integration formula, which computes the Haar integral for SU(2). Then
if χₙ is the character of Vₙ, we can use this to show that ⟨χₙ, χₙ⟩ = 1. Hence
χₙ is irreducible. We will not go into the details of this construction.
χ₀ = 1
χ₁ = z + z⁻¹
χ₂ = z² + 1 + z⁻²
...
form a basis of Q[z, z⁻¹]^{ev}, which is an infinite-dimensional vector space over
Q. Hence we can write
χ_V = ∑ₙ aₙ χₙ,
Res^{SU(2)}_T V ≅ Res^{SU(2)}_T W,
then in fact
V ≅ W.
This gives us the following result:
Proposition. Let G = SU(2) or G = S 1 , and V, W are representations of G.
Then
χV ⊗W = χV χW .
ρ(z)eᵢ = z^{nᵢ}eᵢ,   ρ(z)fⱼ = z^{mⱼ}fⱼ
Example. We have
χ₁² = (z + z⁻¹)² = z² + 2 + z⁻² = χ₂ + χ₀.
So we have
V₁ ⊗ V₁ ≅ V₂ ⊕ V₀.
Similarly, we can compute
χ₁χ₂ = (z + z⁻¹)(z² + 1 + z⁻²) = z³ + 2z + 2z⁻¹ + z⁻³ = χ₃ + χ₁.
So we get
V₁ ⊗ V₂ ≅ V₃ ⊕ V₁.
Proposition (Clebsch–Gordan rule). For n, m ∈ N, we have
Vₙ ⊗ Vₘ ≅ V_{n+m} ⊕ V_{n+m−2} ⊕ · · · ⊕ V_{|n−m|+2} ⊕ V_{|n−m|}.
Proof. We just check this works for characters. Without loss of generality, we
assume n ≥ m. We can compute
(χₙχₘ)(z) = ((z^{n+1} − z^{−n−1})/(z − z⁻¹)) (zᵐ + z^{m−2} + · · · + z^{−m})
          = ∑_{j=0}^m (z^{n+m+1−2j} − z^{2j−n−m−1})/(z − z⁻¹)
          = ∑_{j=0}^m χ_{n+m−2j}(z).
Note that the condition n ≥ m ensures there are no cancellations in the sum.
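The character identity behind the Clebsch–Gordan rule is easy to check mechanically by treating each χₙ as a multiset of exponents of z. A small sketch (my own) verifying it for several pairs (n, m):

    from collections import Counter

    def chi(n):
        """Character of V_n on the torus, as a multiset of exponents of z."""
        return Counter(range(-n, n + 1, 2))     # z^n + z^(n-2) + ... + z^(-n)

    def multiply(f, g):
        out = Counter()
        for a, x in f.items():
            for b, y in g.items():
                out[a + b] += x * y
        return out

    def clebsch_gordan(n, m):
        out = Counter()
        for k in range(abs(n - m), n + m + 1, 2):
            out += chi(k)
        return out

    for n in range(6):
        for m in range(6):
            assert multiply(chi(n), chi(m)) == clebsch_gordan(n, m)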
(i) SO(3) ≅ SU(2)/{±I} = PSU(2)
(ii) SO(4) ≅ (SU(2) × SU(2))/{±(I, I)}
(iii) U(2) ≅ (U(1) × SU(2))/{±(I, I)}
All maps are group isomorphisms, but in fact also homeomorphisms. To show
this, we can use the fact that a continuous bijection from a compact space to a
Hausdorff space is automatically a homeomorphism.
Assuming this is true, we obtain the following corollary:
Corollary. The irreducible representations of SO(3) are exactly those ρₙ of SU(2)
with n even, viewed as representations of SO(3) = SU(2)/{±I}. Indeed, ρₙ factors
through SU(2)/{±I} exactly when ρₙ(−I) = I, and
ρₙ(−I) = diag((−1)ⁿ, (−1)^{n−2}, · · · , (−1)^{−n}) = (−1)ⁿ I,
which is the identity precisely when n is even.
‖A‖² = det A.
This gives a nice 3-dimensional Euclidean space, and SU(2) acts as isometries
on H0 by conjugation, i.e.
X · A = XAX −1 ,
giving a group homomorphism
ϕ : SU(2) → O(3),
and the kernel of this map is Z(SU(2)) = {±I}. We also know that SU(2) is
compact, and O(3) is Hausdorff. Hence the continuous group isomorphism
ϕ̄ : SU(2)/{±I} → im ϕ
is a homeomorphism onto its image.
So
[e^{iθ} 0; 0 e^{−iθ}]
acts on R⟨i, j, k⟩ by a rotation in the (j, k)-plane through an angle 2θ. We can
check that
[cos θ  sin θ; −sin θ  cos θ],   [cos θ  i sin θ; i sin θ  cos θ]
act by rotation of 2θ in the (i, k)-plane and (i, j)-plane respectively. So done.
We can adapt this proof to prove the other isomorphisms. However, it is
slightly more difficult to get the irreducible representations, since it involves
taking some direct products. We need a result about products G × H of two
compact groups. Similar to the finite case, we get the complete list of irreducible
representations by taking the tensor products V ⊗ W , where V and W range
over the irreducibles of G and H independently.
We will just assert the results.
Proposition. The complete list of irreducible representations of SO(4) is ρm ×ρn ,
where m, n > 0 and m ≡ n (mod 2).
Proposition. The complete list of irreducible representations of U(2) is
det^{⊗m} ⊗ ρₙ,
where m ∈ Z and n ∈ Z≥0.