CS 252, Lecture 6: Spectral Graph Theory
Guest Lecture by Pravesh Kothari
1 Introduction
In this lecture, we give an overview of spectral graph theory, wherein we use tools from linear algebra to
study graphs. We demonstrate how we can “read off” combinatorial properties of a graph from its
linear-algebraic properties. In particular, we focus on the problem of graph connectivity, and study how we
can compute the number of connected components of a graph by reading the eigenvalues of its Laplacian
matrix.
We denote a graph G by G = (V, E), where V is the set of vertices and E is the set of edges. In this
lecture, we assume all graphs are undirected and unweighted, with no multiple edges or self-loops.
We also assume that the graphs we study are d-regular, i.e., every vertex has degree d. Let n = |V|
denote the number of vertices in the graph.
We start by defining the adjacency matrix and the Laplacian of a graph.
Definition 1 (Adjacency matrix). Given a graph G, the adjacency matrix AG is defined as follows:

    AG(i, j) = 1 if (i, j) ∈ E, and AG(i, j) = 0 otherwise.
The Laplacian matrix of a graph is defined similarly to the adjacency matrix; it is easier to work with,
and it generalizes well to non-regular and weighted graphs.
Definition 2 (Laplacian of a graph). Given a graph G, the Laplacian matrix LG is defined as dIn×n − AG,
where In×n is the n × n identity matrix. In other words, we have

    LG(i, j) = d if i = j, −1 if (i, j) ∈ E, and 0 otherwise.
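To make these definitions concrete, here is a minimal sketch in Python with NumPy that builds both matrices. The helper names and the 4-cycle running example are our own illustration, not part of the lecture.

import numpy as np

def adjacency_matrix(n, edges):
    # Adjacency matrix AG of an undirected graph on vertices 0..n-1.
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = 1
        A[j, i] = 1  # undirected, so AG is symmetric
    return A

def laplacian(A, d):
    # Laplacian of a d-regular graph: L = d*I - AG (Definition 2).
    return d * np.eye(A.shape[0]) - A

# Running example: the 4-cycle C_4, a 2-regular graph.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
A = adjacency_matrix(4, edges)
L = laplacian(A, d=2)
print(L)  # 2 on the diagonal, -1 across edges, 0 elsewhere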
2 Linear Algebra Background
Consider a real symmetric matrix M ∈ Rn×n, i.e., Mi,j = Mj,i for all i ≠ j.
Definition 3 (Eigenvalue, Eigenvector). A scalar λ is called an eigenvalue of M if there exists a vector x ≠ 0
such that Mx = λx. The corresponding vector x is called an eigenvector.
Fact 4. The following are standard facts about eigenvalues of a real symmetric matrix M :
1. M has n real eigenvalues (including repetitions) λ1 ≤ λ2 ≤ · · · ≤ λn.
2. There exist eigenvectors v1, v2, . . . , vn such that Mvi = λivi for all 1 ≤ i ≤ n, and the vectors
v1, v2, . . . , vn are linearly independent. Eigenvectors corresponding to distinct eigenvalues are
orthogonal.
Henceforth, we let λ1 ≤ λ2 ≤ · · · ≤ λn denote the eigenvalues of M. The corresponding eigenvectors are
denoted by v1, v2, . . . , vn. We assume that these n vectors are unit vectors and are mutually orthogonal,
i.e., ⟨vi, vj⟩ = 0 for all i ≠ j.
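These facts are easy to check numerically. The sketch below, which reuses the C_4 Laplacian from the earlier snippet, calls np.linalg.eigh, NumPy's eigensolver for real symmetric matrices; it returns the eigenvalues in ascending order and orthonormal eigenvectors as the columns of V.

import numpy as np

# Laplacian of the 4-cycle C_4, as in the earlier snippet.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
A = np.zeros((4, 4))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = 2 * np.eye(4) - A  # d = 2

lam, V = np.linalg.eigh(L)
print(lam)                                   # lambda_1 <= ... <= lambda_n, all real
print(np.allclose(V.T @ V, np.eye(4)))       # unit vectors, <v_i, v_j> = 0 for i != j
print(np.allclose(L @ V, V @ np.diag(lam)))  # M v_i = lambda_i v_i for each column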
Definition 5 (Quadratic form). The quadratic form of a matrix M is the function M(v) = v^T M v, which
takes a vector v ∈ Rn as input and outputs a scalar.
Lemma 6. Let M be a real symmetric matrix. Then v^T M v ≥ 0 for all vectors v ∈ Rn if and only if all
the eigenvalues of M are non-negative.
Proof. Suppose that v^T M v ≥ 0 for all v ∈ Rn. Then, we claim that all the eigenvalues are non-negative.
Suppose for contradiction that there is λ < 0 and a vector x ≠ 0 such that Mx = λx. We get

    x^T M x = λ x^T x = λ (∑_{i=1}^n xi^2) < 0,

contradicting the fact that v^T M v ≥ 0 for all vectors v.
Conversely, suppose that all the eigenvalues of M are non-negative. For an arbitrary vector v, write
v = α1v1 + α2v2 + · · · + αnvn. Such an expression exists for every vector v since the vectors v1, v2, . . . , vn
span all of Rn. Using Mvi = λivi and the orthonormality of v1, v2, . . . , vn, we have
v^T M v = ∑_{i=1}^n αi^2 λi ≥ 0.
A real symmetric matrix with all eigenvalues non-negative is called a positive semidefinite
matrix.
Lemma 7 (Rayleigh quotient). We have

    λ1 = min_{v≠0} (v^T M v)/(v^T v)

and

    λ2 = min_{v≠0, ⟨v,v1⟩=0} (v^T M v)/(v^T v).
Proof. We only prove the first part; the proof of the second part follows along the same lines.
As λ1 = (v1^T M v1)/(v1^T v1), we get λ1 ≥ min_{v≠0} (v^T M v)/(v^T v). Furthermore, let
v = α1v1 + α2v2 + · · · + αnvn. We have

    (v^T M v)/(v^T v) = (∑_{i=1}^n αi^2 λi)/(∑_{i=1}^n αi^2) ≥ λ1 for all v ≠ 0.
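As a numerical sanity check on Lemma 7, the sketch below evaluates the Rayleigh quotient on the bottom eigenvector and on random vectors, using the C_4 Laplacian as the symmetric matrix M (our own example, not from the lecture).

import numpy as np

# Laplacian of the 4-cycle, as before.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    A[i, j] = A[j, i] = 1
L = 2 * np.eye(4) - A

def rayleigh_quotient(M, v):
    # v^T M v / v^T v for a nonzero vector v.
    return (v @ M @ v) / (v @ v)

lam, V = np.linalg.eigh(L)
rng = np.random.default_rng(0)

# The quotient at the bottom eigenvector v_1 equals lambda_1 ...
print(np.isclose(rayleigh_quotient(L, V[:, 0]), lam[0]))
# ... and no vector does better, as the lemma predicts.
samples = [rayleigh_quotient(L, rng.standard_normal(4)) for _ in range(1000)]
print(min(samples) >= lam[0] - 1e-9)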
Recall the Laplacian LG of the graph G. For simplicity, we write L for LG. For a vector
u = (u1, u2, . . . , un), we have

    (Lu)i = d·ui − ∑_{j:(i,j)∈E} uj = ∑_{j:(i,j)∈E} (ui − uj)

and

    u^T L u = ∑_{(i,j)∈E} (ui − uj)^2.    (1)
Using Equation (1), we can observe that L is a positive semidefinite matrix: the quadratic form u^T L u
is a sum of squares and hence non-negative for every u.
When we embed the vertices of G into the real line so that vertex i is mapped to ui, the
quadratic form u^T L u is the sum of squared distances between neighboring vertices of G.
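Equation (1) is also easy to verify numerically; here is a minimal sketch, reusing the C_4 example with a random vector u:

import numpy as np

# 4-cycle Laplacian, as before.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
A = np.zeros((4, 4))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = 2 * np.eye(4) - A

# Equation (1): the matrix quadratic form agrees with the combinatorial
# sum of squared differences across the edges.
rng = np.random.default_rng(1)
u = rng.standard_normal(4)
lhs = u @ L @ u
rhs = sum((u[i] - u[j]) ** 2 for i, j in edges)
print(np.isclose(lhs, rhs))  # True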
3 Graph Connectivity
Let G be an undirected unweighted d-regular graph and let L be the Laplacian matrix of G. Let 0 ≤ λ1 ≤
λ2 ≤ . . . ≤ λn be the eigenvalues of L.
The smallest eigenvalue of L is always zero:
Lemma 8. λ1 = 0.
Proof. As L is a positive semidefinite matrix, λ1 ≥ 0. Setting u = (1, 1, . . . , 1), we have Lu = (0, 0, . . . , 0),
since each coordinate of Lu equals d − d = 0 (every vertex has exactly d neighbors). Thus 0 is an
eigenvalue of L.
We can deduce whether the graph G is connected by reading the next eigenvalue of L! We prove this
fact below:
Lemma 9. λ2 > 0 if and only if the graph is connected. More generally, the number of zero eigenvalues
of L is equal to the number of connected components of G.
Proof. Let k be the number of connected components of G and S1 , S2 , . . . , Sk be the connected components
of G.
First, we will prove that the number of zero eigenvalues is at least the number of connected components.
Towards this, we define the following set of k vectors u(1), u(2), . . . , u(k), where u(i) is defined as follows:

    u(i)(j) = 1 if j ∈ Si, and u(i)(j) = 0 otherwise.
Note the following:
1. ⟨u(i), u(j)⟩ = 0 for all i ≠ j.
2. Lu(i) = 0 for all i ∈ {1, 2, . . . , k}.
Thus, there are k mutually orthogonal vectors that are all eigenvectors of L corresponding to the eigenvalue
0. In other words, the number of zero eigenvalues of L is at least k.
Next, we will prove that the number of zero eigenvalues is at most k. From Equation (1), we can
deduce that u^T L u = 0 if and only if ui = uj for all (i, j) ∈ E, i.e., if and only if u takes a constant value
on each connected component. Thus, if Lu = 0, then u^T L u = 0, which implies that there exist
scalars α1, α2, . . . , αk such that u = ∑_{i=1}^k αi u(i). This implies that every eigenvector of L corresponding to
the eigenvalue 0 is contained in the subspace spanned by {u(1), u(2), . . . , u(k)}. Thus, there are at most k
linearly independent eigenvectors of L corresponding to the eigenvalue 0. This proves that the eigenvalue
0 of L is repeated at most k times.
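Lemma 9 immediately suggests an algorithm: compute the spectrum of L and count the (numerically) zero eigenvalues. Below is a minimal sketch, with a tolerance threshold and a two-triangle example of our own choosing.

import numpy as np

def laplacian_from_edges(n, d, edges):
    # Laplacian d*I - AG of a d-regular graph on vertices 0..n-1.
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1
    return d * np.eye(n) - A

def num_components(L, tol=1e-9):
    # Lemma 9 as an algorithm: count the (numerically) zero eigenvalues.
    return int(np.sum(np.linalg.eigvalsh(L) < tol))

# Two disjoint triangles: a 2-regular graph with two components.
L = laplacian_from_edges(6, 2, [(0, 1), (1, 2), (2, 0),
                                (3, 4), (4, 5), (5, 3)])
print(num_components(L))  # 2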
4 Expansion of Graphs
We have seen that λ2 > 0 if and only if the graph is connected. Can we say something more general? Does
the magnitude of λ2 have any meaning? It turns out that the magnitude of λ2 indicates how
“robustly connected” G is. If λ2 is large, the graph is “very” connected, i.e., for every set S ⊆ V, a good
fraction of the edges originating in S leave S.
We first start with a few definitions.
Definition 10 (Cut/Boundary). For a set S ⊆ V, the boundary of S, denoted by δ(S), is the number of
edges with one endpoint in S and the other outside S. That is,

    δ(S) = |{(i, j) ∈ E : i ∈ S, j ∉ S}|.
Definition 11 (Conductance/Isoperimetric ratio of a set). For a set S ⊆ V, a scaled version of the
fraction of edges originating in S that cross it is called the conductance of S, denoted by
Θ(S). To be precise,

    Θ(S) = δ(S) / (d · (|S|/n) · |V \ S|).
Note that, when |S| ≤ n/2, the above definition is within a factor of 2 of the fraction of the edges
originating in S that go out of S: that fraction is δ(S)/(d|S|), and 1 ≤ n/|V \ S| ≤ 2.
Definition 12 (Conductance of a graph). The conductance of a graph G is defined as Θ(G) = min_{∅≠S⊊V} Θ(S).
The conductance of a graph is directly related to the second eigenvalue λ2 of the Laplacian of the graph:
Theorem 13 (Cheeger's inequality). λ2/(2d) ≤ Θ(G) ≤ √(2λ2/d).
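On graphs small enough to enumerate all vertex subsets, both sides of Cheeger's inequality can be checked directly. Here is a minimal sketch using the 6-cycle as our own example; the brute-force search is exponential in n, so this is only a sanity check.

import numpy as np
from itertools import combinations

def conductance(n, d, edges):
    # Brute-force Theta(G) over all nonempty proper subsets S.
    best = float("inf")
    for size in range(1, n):
        for S in combinations(range(n), size):
            S = set(S)
            delta = sum(1 for i, j in edges if (i in S) != (j in S))
            best = min(best, delta / (d * len(S) * (n - len(S)) / n))
    return best

# The 6-cycle C_6 (2-regular).
n, d = 6, 2
edges = [(i, (i + 1) % n) for i in range(n)]
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
lam2 = np.linalg.eigvalsh(d * np.eye(n) - A)[1]

theta = conductance(n, d, edges)  # 2/3, attained by half the cycle
print(lam2 / (2 * d) <= theta <= np.sqrt(2 * lam2 / d))  # True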
Finally, we define expander graphs: graphs that are sparse yet well connected.
Definition 14 (Expander graphs). A sequence of d-regular graphs G1, G2, . . . is called an expander if
there exists an absolute constant C > 0 such that λ2(Gi)/d ≥ C for all i.
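To see the two requirements, sparsity and a spectral gap, pull against each other, one can compare the normalized gap λ2(G)/d of cycles and of complete graphs; the sketch below is our own illustration.

import numpy as np

def spectral_gap(A, d):
    # Normalized gap lambda_2(G)/d of a d-regular graph with adjacency A.
    n = A.shape[0]
    return np.linalg.eigvalsh(d * np.eye(n) - A)[1] / d

for n in [10, 50, 250]:
    # Cycle C_n: sparse (d = 2), but its gap vanishes as n grows.
    cycle = np.zeros((n, n))
    for i in range(n):
        cycle[i, (i + 1) % n] = cycle[(i + 1) % n, i] = 1
    # Complete graph K_n: constant gap, but dense (d = n - 1).
    complete = np.ones((n, n)) - np.eye(n)
    print(n, spectral_gap(cycle, 2), spectral_gap(complete, n - 1))
# Cycles fail the expander condition, and K_n is not sparse; expanders
# achieve both constant degree and a constant spectral gap.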
Expanders have various applications, both in theoretical computer science and beyond, ranging from
fault-tolerant networks to the construction of error-correcting codes.