Graph Theory
1. Introduction
Graph theory is a branch of mathematics that studies graphs, which are mathematical
structures used to model pairwise relations between objects. A graph consists of a set of
vertices (or nodes) and a set of edges (or arcs) that connect pairs of vertices. Graph theory
provides a framework for modeling and solving problems in various fields, such as computer
science, biology, social sciences, engineering, and many others.
2. Basic Terminology
Before diving into the concepts, let's first define the basic elements of a graph:
Vertex (plural: Vertices): A point in the graph. Each vertex represents an entity in the
system. For example, in a social network, a vertex could represent a person.
Degree of a Vertex: The degree of a vertex is the number of edges incident to it.
In-degree: The number of edges directed towards a vertex (in directed graphs).
Out-degree: The number of edges emanating from a vertex (in directed graphs).
Path: A sequence of edges that connect a sequence of vertices in the graph. A simple
path does not repeat any vertices, while a cycle is a path that starts and ends at the
same vertex.
Connected Graph: A graph is connected if there is a path between any pair of vertices.
In a directed graph, a graph is strongly connected if there is a directed path from any
vertex to any other vertex.
Subgraph: A graph formed from a subset of the vertices and edges of another graph.
Weighted Graph: A graph where each edge is assigned a weight or cost, which could
represent distance, time, or other metrics.
3. Types of Graphs
Undirected Graph: In an undirected graph, edges have no direction. The edge between
two vertices (u, v) is the same as the edge (v, u).
Simple Graph: A simple graph has at most one edge between any two vertices, and no
loops (edges from a vertex to itself).
Multigraph: A multigraph allows multiple edges between the same pair of vertices and
may have loops.
Weighted Graph: In a weighted graph, each edge has a weight or cost associated with it.
Unweighted Graph: In an unweighted graph, all edges are considered to have the same
weight or cost.
Complete Graph: A graph in which there is an edge between every pair of distinct
vertices.
Bipartite Graph: A graph whose vertices can be divided into two disjoint sets, such that
every edge connects a vertex in one set to a vertex in the other set.
4. Graph Representation
To analyze and work with graphs computationally, we need to represent them in a way that
can be easily processed. Some of the most common methods for representing graphs
include:
Adjacency Matrix: A square matrix used to represent a graph, where the element at row
i and column j is non-zero (or contains the weight) if there is an edge between vertex i
and vertex j.
Example for an undirected graph with edges A-B, B-C, A-D, B-E, C-E, and D-E:

A - B - C
|   |  /
D --- E

  A B C D E
A 0 1 0 1 0
B 1 0 1 0 1
C 0 1 0 0 1
D 1 0 0 0 1
E 0 1 1 1 0
Adjacency List: An array (or list) where each index represents a vertex, and the value at
each index is a list of vertices connected by edges. This representation is more space-
efficient for sparse graphs.
A: [B, D]
B: [A, C, E]
C: [B, E]
D: [A, E]
E: [B, C, D]
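Both representations above can be built directly from an edge list. A minimal Python sketch, using the same example graph:

```python
# The example graph: vertices A-E, edges A-B, A-D, B-C, B-E, C-E, D-E.
edges = [("A", "B"), ("A", "D"), ("B", "C"), ("B", "E"), ("C", "E"), ("D", "E")]
vertices = ["A", "B", "C", "D", "E"]
index = {v: i for i, v in enumerate(vertices)}

# Adjacency matrix: n x n, entry 1 iff an edge exists (symmetric for undirected graphs).
n = len(vertices)
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[index[u]][index[v]] = 1
    matrix[index[v]][index[u]] = 1

# Adjacency list: vertex -> sorted list of neighbours (space-efficient when sparse).
adj = {v: [] for v in vertices}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)
adj = {v: sorted(ns) for v, ns in adj.items()}

print(matrix[index["A"]])  # [0, 1, 0, 1, 0] -- the row for A
print(adj["B"])            # ['A', 'C', 'E']
```

The matrix uses O(n²) space regardless of how many edges exist, while the list uses space proportional to the number of edges, which is why the list form is preferred for sparse graphs.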
5. Applications of Graph Theory
Graph theory provides a powerful way to model and solve real-world problems. Some
common applications of graph theory include:
Biology: Graphs are used in biological networks, such as protein-protein interaction
networks or the spread of diseases.
6. Example: Social Network Analysis
Consider a social network where individuals are represented by vertices and friendships
between them as edges. The degree of a vertex indicates how many friends a person has.
We might use this graph to identify central individuals (those with high degree) or detect
communities (sets of vertices with many internal edges and fewer connections to the rest of
the graph).
Two other common centrality measures are:
Betweenness centrality: The number of shortest paths that pass through a vertex.
Closeness centrality: The reciprocal of the average distance from a vertex to all other
vertices in the graph, so vertices that are closer to everything score higher.
7. Conclusion
Graph theory provides a versatile and powerful framework for modeling and solving
problems involving relationships between objects. Whether in computer science, biology, or
social science, graphs are an essential tool for understanding complex systems and
networks. As we proceed through this course, we will explore various types of graphs, their
properties, and algorithms designed to work with them, all of which are grounded in the
foundational principles we have introduced today.
Before diving into the more advanced concepts, let's define the basic notions of paths,
cycles, and trails.
Path: A path in a graph is a sequence of vertices such that each vertex is connected to
the next one by an edge. Two important special cases:
A simple path does not repeat any vertex, i.e., all vertices are distinct.
A closed path (or cycle) starts and ends at the same vertex, forming a loop.
Cycle: A cycle is a closed path, meaning the first and last vertex are the same. A simple
cycle is a cycle with no repeated vertices other than the starting and ending vertex.
Trail: A trail is a walk that repeats no edge, although vertices may be revisited. Every
path is a trail, but not every trail is a path.
2. Petersen Graph
The Petersen graph is a well-known graph in the study of graph theory and has several
interesting properties. It is a 3-regular graph (each vertex has degree 3) with 10 vertices
and 15 edges. The Petersen graph is often used as a counterexample in graph theory
because it is:
Non-Hamiltonian: It contains no Hamiltonian cycle (a cycle that visits every vertex
exactly once), even though it is highly connected and looks as though it should.
The Petersen graph can be constructed by taking the 2-element subsets of a 5-element set
as vertices and joining two subsets whenever they are disjoint. It is usually visualized as an
outer pentagon and an inner pentagram (a five-pointed star), with a "spoke" edge joining
each outer vertex to the corresponding inner vertex.
It has 10 vertices and 15 edges, and it is non-planar, meaning it cannot be drawn in the
plane without edge crossings.
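The standard pentagon-pentagram-and-spokes construction can be checked in a few lines. A sketch, with outer vertices numbered 0-4 and inner vertices 5-9 (this labeling is an assumption; any labeling works):

```python
# Build the Petersen graph: outer 5-cycle, inner pentagram (skip-2 cycle), spokes.
edges = set()
for i in range(5):
    edges.add(frozenset((i, (i + 1) % 5)))          # outer pentagon edge
    edges.add(frozenset((5 + i, 5 + (i + 2) % 5)))  # inner pentagram edge
    edges.add(frozenset((i, i + 5)))                # spoke to matching inner vertex

# Verify the counts and 3-regularity claimed above.
degree = {v: 0 for v in range(10)}
for e in edges:
    for v in e:
        degree[v] += 1

print(len(edges), set(degree.values()))  # 15 edges, every vertex has degree 3
```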
3. Connection in Graphs
A graph is said to be connected if there is a path between every pair of vertices. More
formally:
A graph is connected if for every pair of vertices u and v , there exists a path from u to v .
A graph is disconnected if there is at least one pair of vertices for which no path exists.
Types of Connectivity:
Edge-Connectivity: The minimum number of edges that must be removed from a graph
to disconnect it.
Vertex-Connectivity: The minimum number of vertices that must be removed to
disconnect the graph (or reduce it to a single vertex).
4. Bipartite Graphs
A bipartite graph is a graph whose set of vertices can be divided into two disjoint sets such
that every edge in the graph connects a vertex in one set to a vertex in the other set. This
means there are no edges within the same set.
Formally, a graph G = (V , E) is bipartite if there exists a partition of the vertex set V into
two disjoint sets V1 and V2 such that every edge in E connects a vertex in V1 to a vertex in
V2 .
No odd-length cycles: A graph is bipartite if and only if it contains no odd-length cycles.
This can be shown by attempting to color the graph with two colors so that adjacent
vertices always receive different colors: around an odd cycle the colors must alternate an
odd number of times, which is impossible, so a graph containing an odd cycle cannot be
two-colored and is therefore not bipartite.
Bipartite Matching: In bipartite graphs, matching refers to finding a set of edges such
that no two edges share a common vertex. Perfect matching means that every vertex in
one set is connected to exactly one vertex in the other set.
Example: A bipartite graph can be represented as two sets of vertices. Let the sets be V1 =
{A, B} and V2 = {C, D}, with edges (A, C), (A, D), (B, C). This graph is bipartite because
every edge joins a vertex of V1 to a vertex of V2.
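The two-coloring argument translates directly into a breadth-first search test. A sketch (the helper name `is_bipartite` is ours):

```python
from collections import deque

# A graph is bipartite iff BFS can 2-colour it so that adjacent
# vertices always receive different colours (i.e., no odd cycle exists).
def is_bipartite(adj):
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]  # opposite colour to its neighbour
                    queue.append(v)
                elif color[v] == color[u]:
                    return False             # two adjacent same-colour vertices: odd cycle
    return True

# The example graph: V1 = {A, B}, V2 = {C, D}, edges (A,C), (A,D), (B,C).
example = {"A": ["C", "D"], "B": ["C"], "C": ["A", "B"], "D": ["A"]}
triangle = {"X": ["Y", "Z"], "Y": ["X", "Z"], "Z": ["X", "Y"]}
print(is_bipartite(example))   # True
print(is_bipartite(triangle))  # False -- a 3-cycle is an odd cycle
```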
5. Applications of Bipartite Graphs
Matching Problems: Bipartite graphs are used in problems where two sets must be
paired. For instance, in job assignment problems, one set could represent workers and
the other set could represent jobs.
Graph Coloring: Bipartite graphs are often used in problems involving two-coloring,
where the vertices can be colored using two colors such that no two adjacent vertices
share the same color.
Network Flow: Bipartite graphs are used in network flow problems, such as the
maximum flow problem, where one set represents sources and the other set represents
sinks.
6. Conclusion
In this lecture, we introduced key concepts related to paths, cycles, and trails, along with
important types of graphs such as the Petersen graph and bipartite graphs. Understanding
these foundational elements is crucial for analyzing the structure and properties of graphs,
and they form the basis for many applications and advanced topics in graph theory. Next, we
will explore more properties of special types of graphs and the algorithms used to analyze
them.
In this lecture, we cover Eulerian circuits, vertex degrees, and basic counting and extremal
problems in graph theory. These topics are essential for understanding the structural
properties of graphs and solving various problems involving graph traversal and connectivity.
1. Eulerian Circuits
An Eulerian circuit (or Eulerian cycle) is a closed walk in a graph that traverses every edge
exactly once and returns to the starting vertex. Euler's theorem provides necessary and
sufficient conditions for the existence of an Eulerian circuit in an undirected graph:
Eulerian Circuit Theorem: A connected undirected graph has an Eulerian circuit if and
only if every vertex has an even degree.
Key Points:
A connected graph means there is a path between any two vertices in the graph.
For an Eulerian circuit to exist, each vertex must have an even degree. This ensures that,
when traversing through the graph, you can enter and exit each vertex an equal number
of times, enabling the path to return to the starting vertex after covering all edges.
Eulerian Path: A path that visits every edge exactly once but may not return to the
starting vertex. A graph has an Eulerian path if it has exactly zero or two vertices of odd
degree.
If a graph has exactly two vertices with an odd degree, it has an Eulerian path but
not an Eulerian circuit.
If all vertices have an even degree, it has both an Eulerian path and an Eulerian
circuit.
If more than two vertices have an odd degree, the graph has neither an Eulerian
path nor an Eulerian circuit.
Examples:
Graph 1: If a graph has 4 vertices all of even degree (say, degree 2), then it has an
Eulerian circuit.
Graph 2: A graph with two vertices of odd degree and others of even degree has an
Eulerian path but no Eulerian circuit.
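The case analysis above reduces to counting odd-degree vertices. A minimal sketch, assuming the graph is already known to be connected (the function name is ours):

```python
# Classify a connected undirected graph from its degree list:
# 0 odd-degree vertices -> Eulerian circuit; 2 -> Eulerian path; otherwise neither.
def eulerian_type(degrees):
    odd = sum(1 for d in degrees if d % 2 == 1)
    if odd == 0:
        return "circuit"
    if odd == 2:
        return "path"
    return "neither"

print(eulerian_type([2, 2, 2, 2]))  # circuit -- Graph 1 above
print(eulerian_type([3, 3, 2, 2]))  # path    -- Graph 2 above
print(eulerian_type([3, 3, 3, 3]))  # neither
```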
Euler’s Theorem for Directed Graphs: In a directed graph, an Eulerian circuit exists
if and only if:
The graph is strongly connected (i.e., there is a directed path between any pair
of vertices), and
Every vertex has its in-degree equal to its out-degree.
2. Vertex Degrees
The degree of a vertex in a graph is the number of edges incident to it. The degree of a
vertex is an important parameter because it characterizes the connectivity of a vertex in the
graph.
Handshaking Lemma: In any undirected graph, the sum of the degrees of all vertices is
twice the number of edges. This can be formally expressed as:

∑_{v∈V} deg(v) = 2∣E∣

where deg(v) is the degree of vertex v and ∣E∣ is the number of edges in the graph.
This result is called the Handshaking Lemma because it can be interpreted as follows:
each edge contributes 1 to the degree of two vertices (one at each endpoint), so the total
degree count must be even.
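The lemma is easy to verify on any edge list, since each edge bumps exactly two degree counts:

```python
# Check the Handshaking Lemma on a small arbitrary graph.
edges = [(0, 1), (0, 2), (1, 2), (1, 3)]
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1  # each edge contributes 1 to each endpoint
    degree[v] = degree.get(v, 0) + 1

print(sum(degree.values()), 2 * len(edges))  # 8 8 -- equal, as the lemma states
```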
Vertex Degree and Connectivity: The degree of a vertex can give us insight into the
structure and connectivity of a graph:
High-degree vertices often play central roles in graphs (e.g., hubs in social
networks).
2.1 Degree Sequences
Total Degree of a Graph: The total degree of a graph is simply the sum of the degrees of
all its vertices. According to the Handshaking Lemma, this is twice the number of edges
in the graph.
The number of paths between two vertices can vary depending on the structure of
the graph (e.g., in a complete graph, there are many more paths than in a tree).
Graphical Counting:
A complete graph Kn is the graph that contains all possible edges between n
vertices; it therefore has n(n − 1)/2 edges.
4. Extremal Problems
Extremal problems in graph theory involve determining the maximum or minimum value of
some parameter in a graph subject to certain constraints. These problems often relate to the
structure and connectivity of graphs.
Turán's Theorem: This theorem gives an upper bound on the number of edges in a
graph that does not contain a complete subgraph Kr (a clique of size r). It states that
among n-vertex graphs with no Kr subgraph, the number of edges is maximized by the
Turán graph T(n, r − 1), the complete (r − 1)-partite graph whose part sizes are as equal
as possible.
Ramsey Theory: Ramsey theory deals with finding conditions under which certain
structures must appear in graphs. For example, a classic result is that any graph with
sufficiently many vertices must contain either a clique of size r or an independent set of
size s, where r and s are fixed constants.
Extremal Function: The extremal function ex(n, H) represents the maximum number
of edges in a graph with n vertices that does not contain a subgraph isomorphic to H .
This function is important for understanding the boundaries of graph properties.
5. Conclusion
In this lecture, we explored Eulerian circuits and paths, learned about vertex degrees and
their significance, and covered basic counting techniques and extremal problems in graph
theory. These foundational concepts are essential for understanding more advanced topics
in graph theory, including traversability, graph coloring, network theory, and combinatorial
optimization. Next, we will delve into more specialized graph structures and algorithms that
build on these basic principles.
1. The Chinese Postman Problem
The Chinese Postman Problem (CPP) is an optimization problem that asks for the shortest
possible route that visits every edge of a given graph at least once and returns to the starting
point. The problem gets its name from a scenario where a postman needs to deliver mail to
each street in a neighborhood and return to the post office.
Key Concepts:
Eulerian Circuit: If the graph has an Eulerian circuit, which visits every edge
exactly once and returns to the starting point, that circuit is an optimal postman route.
An undirected graph has an Eulerian circuit if and only if it is connected, and every vertex
has an even degree (as discussed in the previous lecture).
In general, however, the graph will not have an Eulerian circuit, because not all vertices will
have even degree. When there are vertices of odd degree, it is impossible to form a true
Eulerian circuit, and some edges must be traversed more than once.
Solution to CPP:
Step 1: Identify the vertices with odd degree. These vertices must be paired in such a
way that the resulting graph has an Eulerian circuit.
Step 2: Find the shortest path between each pair of odd-degree vertices. This path
should minimize the total length or cost, depending on the context (e.g., distance, time).
Step 3: Duplicate the edges used in the shortest paths, effectively "evening out" the
degrees of the odd-degree vertices.
Step 4: Once all degrees are even, the graph now has an Eulerian circuit, and a traversal
of the graph that visits every edge at least once and returns to the starting point can be
constructed.
Given a graph G = (V, E), let the set of vertices with odd degree be O ⊆ V. By the
Handshaking Lemma, ∣O∣ is always even, so the odd-degree vertices can always be paired
up, and we proceed to find the shortest paths between all pairs of odd-degree vertices.
The solution is to pair up the odd-degree vertices optimally by minimizing the total length of
the added edges, thus making all degrees even. This is a well-known minimum weight
perfect matching problem.
Example:
Consider a graph where vertices A, B, C, and D have degrees 3, 3, 2, and 2, respectively. The
odd-degree vertices are A and B. To solve this, find the shortest path between A and B,
duplicate its edges in the graph, and then construct an Eulerian circuit traversal of the
modified graph.
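Steps 1-3 amount to a minimum-weight pairing of the odd-degree vertices. A brute-force sketch, assuming the pairwise shortest-path distances `dist` have already been computed (e.g. by Dijkstra's algorithm between each pair); the distances below are hypothetical:

```python
# Find the cheapest way to pair up all odd-degree vertices, given their
# pairwise shortest-path distances. Exponential, fine for small instances.
def min_pairing_cost(odd, dist):
    if not odd:
        return 0
    first = odd[0]
    best = float("inf")
    for i in range(1, len(odd)):
        rest = odd[1:i] + odd[i + 1:]      # pair `first` with odd[i], recurse on the rest
        best = min(best, dist[first][odd[i]] + min_pairing_cost(rest, dist))
    return best

# Hypothetical shortest-path distances between four odd-degree vertices:
dist = {"A": {"B": 2, "C": 5, "D": 4},
        "B": {"A": 2, "C": 3, "D": 6},
        "C": {"A": 5, "B": 3, "D": 1},
        "D": {"A": 4, "B": 6, "C": 1}}
print(min_pairing_cost(["A", "B", "C", "D"], dist))  # 3 -- pair A-B (2) and C-D (1)
```

For larger instances one would replace the recursion with a minimum-weight perfect matching algorithm, as the text notes.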
Applications:
Route planning: In logistics and delivery systems, determining the most efficient way to
visit every road or route at least once.
Maintenance: In networks (e.g., electrical grids), ensuring that all connections are
checked or maintained at least once with minimal travel distance.
2. Graphic Sequences
The Havel-Hakimi algorithm provides a way to determine whether a given degree sequence
is graphical, meaning whether there exists a simple graph whose degree sequence is exactly
the sequence given.
Definition: A degree sequence is graphical if there exists a simple graph (no loops or
multiple edges) such that the degree of each vertex in the graph matches the degree
sequence.
1. Step 1: Sort the degree sequence in nonincreasing order.
2. Step 2: Remove the first (largest) degree d1 from the sequence and decrease the next d1
degrees by 1. If d1 is greater than the number of remaining vertices, the sequence is not
graphical.
3. Step 3: Re-sort and repeat until one of two things happens:
The degree sequence becomes all zeros (indicating the sequence is graphical).
The degree sequence contains negative numbers (indicating the sequence is not
graphical).
Example: Consider the sequence [3, 3, 2, 2, 1, 1].
Step 1: The sequence is already sorted in nonincreasing order.
Step 2: Remove the first degree (3) and decrease the next 3 degrees (3, 2, 2) by 1.
The new sequence, re-sorted, is [2, 1, 1, 1, 1].
Step 3: Repeat the process: removing the 2 gives [1, 1, 0, 0], and removing the leading 1
gives [0, 0, 0]. The sequence ends in all zeros, so the sequence is graphical.
This means that a simple graph exists that has the given degree sequence.
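The procedure above is short enough to implement directly:

```python
# Havel-Hakimi test: is a degree sequence realizable by a simple graph?
def is_graphical(seq):
    seq = sorted(seq, reverse=True)
    while seq and seq[0] > 0:
        d = seq.pop(0)                 # remove the largest degree
        if d > len(seq):
            return False               # not enough remaining vertices to connect to
        for i in range(d):
            seq[i] -= 1                # connect to the next d largest-degree vertices
            if seq[i] < 0:
                return False           # negative degree: not graphical
        seq.sort(reverse=True)
    return True                        # reduced to all zeros

print(is_graphical([3, 3, 2, 2, 1, 1]))  # True  -- the worked example
print(is_graphical([3, 3, 3, 1]))        # False -- even sum, yet not graphical
```

The second call shows that an even degree sum alone is not sufficient: three vertices of degree 3 in a 4-vertex simple graph would force the fourth vertex to have degree 3 as well.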
Erdős–Gallai Theorem: A nonincreasing sequence of nonnegative integers d1 ≥ d2 ≥ ⋯ ≥ dn
is graphical if and only if:

∑_{i=1}^{n} di is even, and for every k (1 ≤ k ≤ n): ∑_{i=1}^{k} di ≤ k(k − 1) + ∑_{i=k+1}^{n} min(di, k)

This is a more advanced criterion and is typically used in more formal proofs or in
algorithmic implementations.
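The inequalities can be checked directly, giving an alternative to Havel-Hakimi. A sketch (the function name is ours):

```python
# Erdos-Gallai test: check the parity condition and the k inequalities.
def erdos_gallai(seq):
    d = sorted(seq, reverse=True)
    n = len(d)
    if sum(d) % 2 != 0:
        return False
    for k in range(1, n + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(di, k) for di in d[k:])
        if lhs > rhs:
            return False
    return True

print(erdos_gallai([3, 3, 2, 2, 1, 1]))  # True
print(erdos_gallai([3, 3, 3, 1]))        # False (fails the k = 2 inequality)
```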
3. Conclusion
In this lecture, we discussed the Chinese Postman Problem, an optimization problem that
leverages Eulerian circuits to find the shortest route that covers every edge in a graph. We
also explored graphic sequences and the criteria for determining whether a given sequence
of degrees corresponds to a simple graph. The tools and theorems covered, such as the
Havel-Hakimi and Erdős-Gallai theorems, are crucial for understanding the structure of
graphs and solving problems related to degree sequences and graph construction. These
topics form the basis for solving more complex network design and optimization problems in
graph theory.
Lecture 5: Trees and Distance
In this lecture, we will delve into trees, one of the fundamental types of graphs, and explore
their basic properties. Additionally, we will discuss the concept of distance in trees and
general graphs, which plays a key role in various graph-related problems, including network
design, shortest path algorithms, and more.
1. Introduction to Trees
A tree is a special type of graph that is connected and acyclic. Trees are essential structures
in graph theory because they provide a simple yet powerful way to represent hierarchical
relationships and are fundamental in many algorithms and data structures.
Acyclic: A tree contains no cycles. In other words, there are no closed loops in the
structure.
Connected: A tree is connected, meaning there is a path between every pair of vertices.
Edge Count: A tree with n vertices has exactly n − 1 edges. This is a key property of
trees. If a graph with n vertices has more than n − 1 edges, it must contain at least one
cycle. Similarly, if a graph has fewer than n − 1 edges, it is not connected.
Rooted Trees: A rooted tree is a tree in which one vertex is designated as the root. The
other vertices are organized into a hierarchy based on their distance from the root. The
root is the topmost node, and the tree branches out from it.
Leaf: A leaf (or terminal node) is a vertex in a tree that has degree 1, i.e., it is connected
to only one other vertex.
Subtree: A subtree of a tree is any connected subgraph of the tree that includes a vertex
and all its descendants (if the tree is rooted).
Types of Trees:
Binary Tree: A rooted tree where each vertex has at most two children, often referred to
as the left child and right child.
Spanning Tree: A spanning tree of a graph is a subgraph that includes all the vertices of
the graph and is itself a tree. It contains no cycles and has n − 1 edges, where n is the
number of vertices.
Example:
Consider a graph with 4 vertices and 3 edges, connected as follows: {A, B, C, D} with
edges (A, B), (B, C), (B, D). This is a tree because:
It is connected.
It is acyclic, with exactly n − 1 = 3 edges for its n = 4 vertices.
2. Distance in Trees and Graphs
The concept of distance in a graph refers to the length of a shortest path between two
vertices, where length is measured by the number of edges traversed.
In a tree, there is exactly one unique path between any pair of vertices due to the tree's
acyclic and connected nature. As such, the distance between two vertices is simply the
number of edges in the unique path that connects them.
Distance Between Two Vertices: The distance d(u, v) between two vertices u and v in a
tree is the number of edges in the path from u to v .
In a general graph (not necessarily a tree), the distance between two vertices is still defined
as the shortest path between them. However, a graph may contain cycles, meaning there
could be multiple paths between two vertices. In such cases, we use shortest path
algorithms (such as Dijkstra's or Bellman-Ford) to find the path with the fewest edges (or
least cost, if edge weights are assigned).
Unweighted Graph: If the graph is unweighted (i.e., each edge has the same weight),
the shortest path between two vertices is the path with the fewest edges.
Weighted Graph: If edges have weights (such as distances, costs, or times), the shortest
path is the one with the minimum total weight, and algorithms like Dijkstra’s algorithm
are used to compute this.
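In the unweighted case, breadth-first search already yields shortest-path distances; only the weighted case needs Dijkstra or Bellman-Ford. A sketch on the tree example from above:

```python
from collections import deque

# BFS from a source vertex gives shortest-path distances (edge counts)
# in an unweighted graph.
def bfs_distances(adj, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:          # first visit is along a shortest path
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# The tree {A, B, C, D} with edges (A,B), (B,C), (B,D):
adj = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B"], "D": ["B"]}
print(bfs_distances(adj, "A"))  # {'A': 0, 'B': 1, 'C': 2, 'D': 2}
```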
Height: The height of a tree is the length of the longest path from the root to a leaf. It is
a measure of the tree's "depth" and is useful for understanding the tree's structure and
performance in algorithms like search or traversal.
Depth: The depth of a vertex v is the number of edges from the root to v . It indicates
how far v is from the root.
Example: For a rooted tree with root A and vertices B, C, D , where edges are
(A, B), (A, C), (C, D), the depth of B is 1, the depth of C is 1, and the depth of D is 2.
The height of the tree is 2 because the longest path from the root A to a leaf (vertex D ) has
2 edges.
Radius: The radius of a tree is the minimum eccentricity over all vertices, where the
eccentricity of a vertex is its maximum distance to any other vertex. A vertex achieving
this minimum is a center, so the radius indicates how "central" the most central vertex is.
Diameter: The diameter of a tree is the longest distance between any two vertices in the
tree, i.e., the maximum eccentricity. This is useful for understanding the "spread" of the
tree.
Example: For a tree with the vertices A, B, C, D and edges (A, B), (A, C), (C, D): the
eccentricities of A, B, C, D are 2, 3, 2, 3 respectively, so the radius is 2 and the diameter is
3 (the path B-A-C-D).
Farthest Vertex: The vertex that is at the maximum distance from a given vertex.
Nearest Vertex: The vertex that is at the minimum distance from a given vertex.
These concepts are used in applications where we need to find the most distant or closest
point in a network, such as in communication networks or transportation systems.
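Radius and diameter follow directly from per-vertex eccentricities; running a BFS from every vertex is enough for small graphs. A sketch, using the example tree with edges (A, B), (A, C), (C, D):

```python
from collections import deque

def bfs_dist(adj, source):
    # Unweighted shortest-path distances from `source`.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def radius_and_diameter(adj):
    # Eccentricity of v = max distance from v; radius/diameter = min/max eccentricity.
    ecc = {v: max(bfs_dist(adj, v).values()) for v in adj}
    return min(ecc.values()), max(ecc.values())

adj = {"A": ["B", "C"], "B": ["A"], "C": ["A", "D"], "D": ["C"]}
print(radius_and_diameter(adj))  # (2, 3)
```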
Network Design: Trees are often used in network design to minimize the number of
edges while maintaining connectivity, as seen in spanning tree algorithms (e.g., Prim’s
and Kruskal’s algorithms).
File Systems: Trees represent hierarchical file systems where directories are nodes, and
files are leaves. The distance in the tree helps determine the file depth and search
efficiency.
Shortest Path Algorithms: In a general graph, shortest path algorithms use distance
concepts to compute the minimum path between nodes, which is important in routing,
logistics, and transportation planning.
5. Conclusion
In this lecture, we explored the fundamental properties of trees, including their structure,
number of edges, and types of trees. We also discussed the concept of distance in trees and
general graphs, including how distance measures such as height, depth, radius, and
diameter provide insights into the graph's structure. These concepts have broad applications
in network design, optimization problems, and algorithm design. Understanding the
distance in trees and graphs is crucial for efficient graph traversal and solving real-world
problems.
1. Spanning Trees
A spanning tree of a graph G is a subgraph that:
Includes all the vertices of G,
Is connected,
Contains no cycles (and therefore has exactly n − 1 edges, where n is the number of
vertices).
A graph may have multiple spanning trees, and finding all spanning trees of a graph is a
significant problem in graph theory.
Any graph with n vertices has at least one spanning tree (provided it is connected).
If a graph is not connected, it does not have a spanning tree, because no single acyclic
subgraph can connect all of the vertices.
Example:
Vertices: V = {A, B, C, D}
Edges: E = {(A, B), (A, C), (B, D), (C, D)}
A spanning tree for this graph could be {(A, B), (A, C), (B, D)}, which connects all the
vertices without forming any cycles.
2. Prüfer Code
The Prüfer code is a unique sequence associated with a labeled tree, and it provides a
compact way to represent a tree structure. It is particularly useful for enumerating spanning
trees.
For a labeled tree with n vertices, the Prüfer code is a sequence of n − 2 integers.
To construct the Prüfer code of a labeled tree:
1. Identify the leaf (vertex with degree 1) with the smallest label.
2. Record the label of its unique neighbor, then delete the leaf from the tree.
3. Repeat until only two vertices remain.
The resulting sequence of recorded labels forms the Prüfer code for the tree.
Example:
Consider a tree with 4 vertices labeled {1, 2, 3, 4} and edges (1, 2), (1, 3), (3, 4).
1. The smallest-labeled leaf is 2; record its neighbor 1 and remove vertex 2, leaving the
tree with edges (1, 3), (3, 4).
2. Now the smallest-labeled leaf is 1; record its neighbor 3 and remove vertex 1, leaving
the edge (3, 4).
Only two vertices remain, so the Prüfer code is (1, 3).
Properties:
There are exactly n^(n−2) different Prüfer codes for trees with n vertices.
The Prüfer code is helpful in counting the number of spanning trees for a complete
graph.
It provides a bijection between the set of labeled trees on n vertices and the set of
sequences of length n − 2 from the set of vertex labels.
3. Cayley’s Formula
Cayley’s formula is a famous result in combinatorics that gives the number of spanning
trees in a complete graph Kn , where n is the number of vertices.
T(Kn) = n^(n−2)
This formula provides a direct way to calculate the number of spanning trees in a complete
graph.
Example:
T(K4) = 4^(4−2) = 4^2 = 16
Derivation (Brief):
Cayley’s formula can be derived using several techniques, including the Prüfer code or
matrix methods. It is a powerful result because it gives a direct count of the number of
spanning trees without requiring an exhaustive search.
The Matrix Tree Theorem provides a method for counting the number of spanning trees in a
general graph, not necessarily a complete graph. This theorem relates the number of
spanning trees to the Laplacian matrix of the graph.
The Matrix Tree Theorem states that the number of spanning trees in a graph G is given by
any cofactor of the Laplacian matrix L, i.e., the determinant of the matrix obtained by
removing any row and any column from L.
Formula:
Let L′ be the matrix obtained by deleting any row and column from the Laplacian matrix L,
then the number of spanning trees T (G) is given by:
T (G) = det(L′ )
Example:
For a graph with vertices V = {1, 2, 3} and edges E = {(1, 2), (1, 3), (2, 3)}, the Laplacian
matrix is:

    ⎡  2 −1 −1 ⎤
L = ⎢ −1  2 −1 ⎥
    ⎣ −1 −1  2 ⎦

Removing the first row and first column, we get the matrix:

L′ = [  2 −1 ]
     [ −1  2 ]

so T(G) = det(L′) = 2·2 − (−1)·(−1) = 3: the triangle has 3 spanning trees, in agreement
with Cayley's formula for K3.
Applications:
Counting the number of spanning trees in large and complex graphs, especially when
the graph is not a complete graph.
Analyzing networks and electrical circuits, where spanning trees play a key role in
network flow and resistance calculations.
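The theorem can be checked numerically by building the Laplacian from an edge list and taking an exact cofactor determinant. A sketch using exact rational arithmetic (function names are ours):

```python
from fractions import Fraction

def det(M):
    # Determinant by Gaussian elimination with exact Fraction arithmetic.
    M = [row[:] for row in M]
    n = len(M)
    result = Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if M[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            result = -result           # row swap flips the sign
        result *= M[i][i]
        for r in range(i + 1, n):
            factor = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= factor * M[i][c]
    return result

def count_spanning_trees(n, edges):
    # Laplacian: degree on the diagonal, -1 for each adjacent pair.
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    cofactor = [row[1:] for row in L[1:]]  # delete first row and column
    return int(det(cofactor))

triangle = [(0, 1), (0, 2), (1, 2)]
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(count_spanning_trees(3, triangle))  # 3
print(count_spanning_trees(4, k4))        # 16, matching Cayley: 4^(4-2)
```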
5. Conclusion
In this lecture, we covered several important methods for counting the number of spanning
trees in a graph. We began with the Prüfer code, which provides a compact representation
of labeled trees and is useful for enumerating spanning trees. We then discussed Cayley’s
formula, which gives the number of spanning trees for a complete graph. Finally, we
explored the Matrix Tree Theorem, which provides a general method for counting spanning
trees in any graph using the Laplacian matrix. These tools are essential for solving problems
in combinatorics, network theory, and graph theory.
Lecture 7: Matchings and Covers
In this lecture, we will explore the concept of matchings in graphs, a fundamental topic in
combinatorics and graph theory. We will define key types of matchings such as perfect
matchings, maximal matchings, and maximum matchings. We will also discuss important
concepts such as M-alternating paths, M-augmenting paths, symmetric differences, Hall’s
Matching condition, and vertex covers, which are closely related to matchings in graphs.
1. Matching in Graphs
A matching M in a graph is a set of edges such that no two edges in M share a common
vertex. In other words, each vertex can be involved in at most one edge of the matching.
2. Types of Matchings
A perfect matching is a matching where every vertex in the graph is incident to exactly one
edge from the matching. In other words, a perfect matching covers all vertices of the graph.
Condition: A perfect matching can exist only if the number of vertices n is even, and it
then consists of exactly n/2 edges.
2.2 Maximal Matching
A maximal matching is a matching that cannot be extended by adding any more edges. In
other words, it is a matching where no additional edges can be added to the matching
without violating the condition that no two edges share a common vertex.
A maximum matching is a matching that contains the largest possible number of edges. It
is the matching with the maximum cardinality, i.e., the matching that maximizes the number
of edges, while still satisfying the condition that no two edges share a common vertex.
Condition: A maximum matching is the one that maximizes the number of matched
pairs of vertices.
An M-alternating path with respect to a matching M is a path in the graph where the edges
alternate between being in the matching M and not being in M .
An M-augmenting path is an alternating path that begins and ends with unmatched
vertices. Augmenting a matching refers to adding or removing edges along such a path to
increase the number of matched edges in the graph.
Condition: If there is an M-augmenting path, we can increase the size of the matching
by flipping the matching status (matching or non-matching) of the edges along the path.
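This augmenting step is the engine of the classical bipartite matching algorithm (often called Kuhn's algorithm, not named in the text). A sketch that repeatedly searches for an augmenting path from each unmatched left vertex:

```python
# Grow a bipartite matching by finding M-augmenting paths via DFS.
# `adj` maps each left vertex to its list of right-side neighbours.
def max_bipartite_matching(adj):
    match = {}  # right vertex -> left vertex currently matched to it

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere:
            if v not in match or try_augment(match[v], visited):
                match[v] = u    # flip the edges along the augmenting path
                return True
        return False

    size = 0
    for u in adj:
        if try_augment(u, set()):
            size += 1
    return size, match

# X = {A, B}, Y = {C, D}; edges (A,C), (A,D), (B,C):
size, match = max_bipartite_matching({"A": ["C", "D"], "B": ["C"]})
print(size)  # 2 -- B takes C, pushing A along an augmenting path to D
```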
4. Symmetric Difference
The symmetric difference M1 △ M2 of two matchings M1 and M2 is the set of
edges that are in either M1 or M2, but not in both. This concept is important when working
with alternating and augmenting paths, as it helps in the process of adjusting matchings.
Example: If M1 = {(A, B), (C, D)} and M2 = {(B, C), (A, D)}, the symmetric
difference is {(A, B), (C, D), (B, C), (A, D)}, which forms the cycle A-B-C-D-A.
5. Hall’s Matching Condition
Hall’s Theorem provides a necessary and sufficient condition for the existence of a
matching that saturates one side of a bipartite graph. It is formulated as follows:
Hall’s Theorem: Let G be a bipartite graph with parts X and Y. Then G has a matching
that saturates every vertex of X if and only if ∣N(S)∣ ≥ ∣S∣ for every subset S ⊆ X, where
N(S) denotes the set of all neighbors of vertices in S.
Interpretation: Hall’s condition ensures that there is enough connectivity between the
two sets X and Y to form such a matching.
Example: Consider a bipartite graph with X = {A, B} and Y = {C, D, E}, and edges
E = {(A, C), (A, D), (B, D), (B, E)}. For any subset S ⊆ X , N (S) must satisfy the
condition ∣N (S)∣ ≥ ∣S∣. If this is satisfied, a perfect matching exists.
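For small examples, Hall's condition can be checked by brute force over all subsets of X. A sketch (exponential in |X|, so only for illustration; the function name is ours):

```python
from itertools import combinations

# Check |N(S)| >= |S| for every nonempty subset S of X.
# `adj_x` maps each vertex of X to its set of neighbours in Y.
def hall_condition(adj_x):
    xs = list(adj_x)
    for k in range(1, len(xs) + 1):
        for S in combinations(xs, k):
            neighbourhood = set().union(*(adj_x[u] for u in S))
            if len(neighbourhood) < len(S):
                return False   # S has too few neighbours: no saturating matching
    return True

# The example: X = {A, B}, Y = {C, D, E}, edges (A,C), (A,D), (B,D), (B,E):
print(hall_condition({"A": {"C", "D"}, "B": {"D", "E"}}))  # True
print(hall_condition({"A": {"C"}, "B": {"C"}}))            # False: N({A,B}) = {C}
```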
6. Vertex Covers
A vertex cover of a graph is a set of vertices such that every edge in the graph is incident to
at least one vertex in the cover. A minimum vertex cover is the smallest possible vertex
cover.
Example: In a bipartite graph, the vertex cover might include vertices from both sets
such that all edges are covered. For instance, in a graph with sets X = {A, B} and
Y = {C, D}, the vertex cover could be {B, C}, covering all edges.
7. Conclusion
In this lecture, we discussed the fundamental concepts related to matchings and covers in
graphs. We defined various types of matchings, including perfect matchings, maximal
matchings, and maximum matchings, and explored the concepts of M-alternating paths
and M-augmenting paths. Additionally, we introduced Hall’s matching condition and the
relationship between matchings and vertex covers, providing a comprehensive
understanding of how these concepts relate to each other in graph theory. These ideas are
crucial in various applications such as network design, resource allocation, and
combinatorial optimization.
1. Independent Sets
An independent set in a graph is a set of vertices such that no two vertices in the set are
adjacent. In other words, for every pair of vertices u, v in the independent set S , there is no
edge between them in the graph.
Properties:
The complement of an independent set is a clique (a set of vertices where every pair
of vertices is adjacent).
Independent sets are useful in various optimization problems and are related to
graph coloring problems.
2. Covers in Graphs
Covers are key concepts related to matchings in graphs, particularly in the context of vertex
covers and edge covers.
A vertex cover is a set of vertices such that every edge in the graph is incident to at least one
vertex in the set. In other words, every edge has at least one endpoint in the vertex cover.
Minimum Vertex Cover: A minimum vertex cover is the smallest possible vertex cover in
a graph.
An edge cover is a set of edges such that every vertex in the graph is incident to at least one
edge in the set. It is similar to a vertex cover, but instead of covering the edges with vertices,
we cover the vertices with edges.
Minimum Edge Cover: A minimum edge cover is the smallest possible set of edges that
covers all the vertices of the graph.
Example: In the graph with V = {A, B, C, D} and E = {(A, B), (B, C), (C, D)},
one possible edge cover is {(A, B), (C, D)}, as every vertex is incident to at
least one edge in this set.
3. König-Egerváry Theorem
The König-Egerváry theorem is a fundamental result in combinatorics and graph theory that
applies to bipartite graphs. It establishes a relationship between the maximum matching
and the minimum vertex cover in bipartite graphs.
For a bipartite graph G= (X ∪ Y , E), the size of the maximum matching M is equal to the
size of the minimum vertex cover C . Formally:
∣M ∣ = ∣C∣
where ∣M ∣ is the number of edges in a maximum matching, and ∣C∣ is the number of
vertices in a minimum vertex cover.
Implications:
This theorem provides an efficient way to compute the size of the maximum matching in
bipartite graphs by finding the minimum vertex cover.
It is also useful for establishing the optimality of algorithms that find maximum
matchings and vertex covers in bipartite graphs.
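The theorem can be verified by brute force on a small bipartite graph (the graph below is assumed for illustration):

```python
from itertools import combinations

def max_matching_size(edges):
    """Brute force: largest set of edges with no shared endpoint."""
    for k in range(len(edges), 0, -1):
        for subset in combinations(edges, k):
            endpoints = [v for e in subset for v in e]
            if len(endpoints) == len(set(endpoints)):  # no vertex repeated
                return k
    return 0

def min_vertex_cover_size(vertices, edges):
    """Brute force: smallest vertex set touching every edge."""
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            s = set(subset)
            if all(u in s or v in s for u, v in edges):
                return k

X, Y = ["A", "B"], ["C", "D"]
E = [("A", "C"), ("A", "D"), ("B", "D")]
print(max_matching_size(E), min_vertex_cover_size(X + Y, E))  # 2 2
```

Here {(A, C), (B, D)} is a maximum matching and {A, D} is a minimum vertex cover, so both quantities equal 2, as König-Egerváry predicts for bipartite graphs.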
Example: For the bipartite graph with X = {A, B}, Y = {C, D}, and edges
{(A, C), (A, D), (B, D)}, a maximum matching is {(A, C), (B, D)} and a minimum
vertex cover is {A, D}, so ∣M ∣ = ∣C∣ = 2, as the theorem predicts.
4. Maximum Bipartite Matching
A maximum bipartite matching is a matching that contains the largest number of edges in
a bipartite graph, such that no two edges share a vertex.
Key Concepts:
Bipartite Graph: A graph is bipartite if its vertex set can be divided into two disjoint sets
X and Y , such that every edge connects a vertex in X to a vertex in Y , and no edge
connects two vertices within the same set.
Algorithm:
The Augmenting Path Algorithm is used to find the maximum matching in a bipartite
graph. It works by searching for augmenting paths, which are paths that alternate between
matched and unmatched edges and start and end with unmatched vertices.
1. Start with an initial matching: This can be an empty matching or a partial matching.
2. Find an augmenting path: Search for a path that alternates between edges in the
matching and edges not in the matching, and starts and ends with unmatched vertices.
3. Augment the matching: Flip the edges along the augmenting path to increase the size
of the matching.
4. Repeat: Repeat the process of finding and augmenting paths until no more augmenting
paths can be found.
Termination: The algorithm terminates when no more augmenting paths can be found,
at which point the current matching is maximum.
Example:
Consider a bipartite graph with sets X = {A, B, C} and Y = {1, 2, 3} and edges E =
{(A, 1), (B, 2), (C, 3)}. Suppose we start with an empty matching. The algorithm finds
augmenting paths, such as A → 1 and B → 2, and updates the matching by adding these
edges.
After finding no more augmenting paths, the algorithm stops, and the maximum matching is
obtained.
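The augmenting-path procedure above can be sketched in Python. This is a minimal DFS-based version (often called Kuhn's algorithm) that augments one path at a time; the adjacency lists below are an assumed example, chosen so that one augmentation actually re-matches a vertex:

```python
def max_bipartite_matching(adj):
    """adj maps each left vertex to the right vertices it may be matched with."""
    match_right = {}  # right vertex -> left vertex currently matched to it

    def try_augment(u, visited):
        # DFS for an augmenting path starting at left vertex u
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or v's current partner can be re-matched elsewhere
            if v not in match_right or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    size = 0
    for u in adj:
        if try_augment(u, set()):
            size += 1
    return size, match_right

adj = {"A": ["1", "2"], "B": ["1"], "C": ["2", "3"]}
size, matching = max_bipartite_matching(adj)
print(size)  # 3
```

Here B can only take 1, so the search re-matches A from 1 to 2 along the augmenting path B → 1 → A → 2, and all three left vertices end up matched.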
5. Conclusion
In this lecture, we explored key concepts related to independent sets, covers, and
maximum bipartite matching. We began with an understanding of independent sets and
vertex and edge covers, which are central in optimization problems. The König-Egerváry
theorem provides an important relationship between the size of the maximum matching
and the minimum vertex cover in bipartite graphs. We also discussed the Augmenting Path
Algorithm, which is an efficient method for finding the maximum matching in bipartite
graphs. These concepts and algorithms are crucial in various applications, such as network
design, resource allocation, and combinatorial optimization.
1. Weighted Bipartite Matching
A weighted bipartite graph is a bipartite graph where each edge has a weight, typically
representing a cost or a profit associated with matching two vertices from different sets. The
goal of weighted bipartite matching is to find a matching of maximum (or minimum,
depending on the context) total weight, rather than just the maximum number of edges.
Weighted Matching: For each edge e = (x, y) ∈ E , there is an associated weight w(e),
which represents the cost or benefit of matching vertex x with vertex y .
Goal:
The goal is to find a maximum weighted matching in the bipartite graph, which maximizes
the sum of the weights of the matched edges, i.e.,
maximize ∑_{(x, y) ∈ M} w(x, y)
Example:
Consider a bipartite graph with sets X= {A, B, C} and Y = {1, 2, 3}, with edges E =
{(A, 1), (B, 2), (C, 3)} and corresponding weights w(A, 1) = 5, w(B, 2) = 7, and
w(C, 3) = 6. A matching could be M = {(A, 1), (B, 2), (C, 3)}, and the total weight
would be 5 + 7 + 6 = 18.
2. Transversal
In the assignment-problem setting, a transversal is a selection of entries of the cost matrix
(equivalently, edges of the bipartite graph) no two of which lie in the same row or column, so
each vertex is incident to at most one chosen edge. A transversal that uses every row and
every column corresponds to a perfect matching. The concept of a transversal is important in
the context of finding optimal matchings in weighted bipartite graphs.
3. Equality Subgraph
In the Hungarian Algorithm, each vertex v is assigned a label l(v) such that l(x) + l(y) ≥ w(x, y) for every edge (x, y) (a feasible labeling). The equality subgraph consists of exactly those edges for which l(x) + l(y) = w(x, y). The algorithm searches for a perfect matching inside the equality subgraph, adjusting the labels whenever no such matching exists.
Example: If every edge has weight 5 and we label every vertex in X with 5 and every vertex in Y with 0, then l(x) + l(y) = 5 = w(x, y) for every edge, so the equality subgraph contains all the edges of the graph.
4. Hungarian Algorithm
Problem Definition:
Given a weighted bipartite graph G = (X ∪ Y , E), where each edge (x, y) ∈ E has a
weight w(x, y), the goal of the Hungarian algorithm is to find a perfect matching M such
that the sum of the weights of the edges in the matching is minimized (or maximized).
1. Subtract Row Minimums: For each row, subtract the minimum value of the row from
every element in that row.
2. Subtract Column Minimums: For each column, subtract the minimum value of the
column from every element in that column.
3. Cover Zeros: Cover all zeros in the matrix with a minimum number of horizontal and
vertical lines.
4. Test Optimality:
If the minimum number of lines needed to cover all zeros equals the number of rows
(vertices on one side), an optimal assignment can be read off from the zeros.
5. Adjust the Matrix: Otherwise, find the smallest uncovered value, subtract it from
every uncovered element, add it to every element covered by two lines, and return to
step 3.
Example:
Consider a bipartite graph with sets X = {A, B, C} and Y = {1, 2, 3}, and a cost matrix
for the matching:
5 7 6
8 6 7
7 8 9
Applying the Hungarian Algorithm would involve row and column reductions, covering zeros,
and adjusting the matrix to find the optimal matching that minimizes the total cost.
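For a 3 × 3 matrix, the optimum that the Hungarian Algorithm would reach can be checked by brute force over all assignments:

```python
from itertools import permutations

# Cost matrix from the example: rows A, B, C; columns 1, 2, 3
cost = [[5, 7, 6],
        [8, 6, 7],
        [7, 8, 9]]

def min_assignment_cost(cost):
    """Brute force over all row-to-column permutations (feasible only for
    small matrices); the Hungarian Algorithm reaches the same optimum in
    polynomial time."""
    n = len(cost)
    return min(sum(cost[row][col] for row, col in enumerate(perm))
               for perm in permutations(range(n)))

print(min_assignment_cost(cost))  # 19
```

The minimum total cost is 19, attained by assigning A→3, B→2, C→1 (costs 6 + 6 + 7).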
5. Applications
Network Flow Problems: Weighted bipartite matching has applications in network flow
problems, particularly in cases where resources must be matched with demands, such
as in supply chain management, scheduling, and transportation.
6. Conclusion
In this lecture, we explored weighted bipartite matching, a generalization of bipartite
matching where edges have weights. We discussed related concepts such as transversals,
equality subgraphs, and introduced the Hungarian Algorithm, which is used to solve the
assignment problem and find optimal matchings in weighted bipartite graphs. These
concepts are widely applicable in areas like assignment problems, resource allocation, and
optimization. The Hungarian Algorithm, in particular, provides an efficient solution to these
problems, making it an essential tool in combinatorial optimization.
1. Stable Matchings
A stable matching refers to a matching between two sets of elements such that no pair of
elements, one from each set, prefers each other over their current partners. The idea of
stability ensures that there are no "blocking pairs" in the matching, which is a concept
central to many applications like matching marriages, job placements, and course
assignments.
Given two sets X = {x1 , x2 , ..., xn } and Y = {y1 , y2 , ..., yn }, a matching M is a stable
matching if there is no pair (xi , yj ), not matched to each other in M , such that both of the
following conditions hold:
1. xi prefers yj over the partner xi is currently matched with in M , and
2. yj prefers xi over the partner yj is currently matched with in M .
If these conditions are met for any pair, then the matching is considered unstable because
(xi , yj ) would prefer to form a match rather than remaining with their current partners.
Example:
Set X = {A, B}, Set Y = {1, 2}
Preferences of A: 1 > 2
Preferences of B : 2 > 1
Preferences of 1: A > B
Preferences of 2: B > A
A matching where A is matched with 1 and B with 2 is stable because no one prefers
someone else more than their current match.
2. Gale-Shapley Algorithm
The Gale-Shapley algorithm is an efficient algorithm for finding a stable matching in two-
sided matching problems, such as the Stable Marriage Problem. The algorithm works in a
"deferred acceptance" manner, where participants make proposals and acceptances are
deferred until a stable matching is found.
1. Initialize: Every element in set X (the proposers) and every element in set Y (the
acceptors) starts out free.
2. Proposal: Each free element of X proposes to the most-preferred element of Y to
which it has not yet proposed.
3. Acceptance/Deferral: Each element of Y tentatively accepts the best proposal it has
received so far and rejects the rest; a tentative acceptance may later be dropped if a
better proposal arrives.
4. Repeat: The process continues until no free elements in X remain, meaning all
elements have been matched.
Termination:
The algorithm terminates when no unmatched participants remain, and the resulting
matching is stable. It has been proven that the Gale-Shapley algorithm always terminates in
a stable matching, and the solution is optimal for the proposing side (i.e., set X ).
Example:
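A minimal Python sketch of the deferred-acceptance procedure, using the preference lists from the stable-matching example above (names as strings):

```python
def gale_shapley(proposer_prefs, acceptor_prefs):
    # rank[a][p] = position of proposer p in acceptor a's list (lower = preferred)
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)                    # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}   # next acceptor index to try
    engaged = {}                                   # acceptor -> proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p                 # a tentatively accepts p
        elif rank[a][p] < rank[a][engaged[a]]:
            free.append(engaged[a])        # a trades up; old partner is free again
            engaged[a] = p
        else:
            free.append(p)                 # a rejects p; p will propose again
    return {p: a for a, p in engaged.items()}

X_prefs = {"A": ["1", "2"], "B": ["2", "1"]}
Y_prefs = {"1": ["A", "B"], "2": ["B", "A"]}
matching = gale_shapley(X_prefs, Y_prefs)
print(sorted(matching.items()))  # [('A', '1'), ('B', '2')]
```

Every participant receives their first choice here, so the matching is trivially stable; in general the result is stable and optimal for the proposing side, as noted above.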
3. Hopcroft-Karp Algorithm
The Hopcroft-Karp algorithm is a fast algorithm for finding the maximum matching in a
bipartite graph. It improves upon the basic approach by finding many shortest augmenting
paths at once, using layered breadth-first searches (BFS). This makes it more efficient than
the basic augmenting-path method, which augments along only one path per iteration.
Key Concepts:
Augmenting Path: An augmenting path is a path where the edges alternate between
unmatched and matched edges, starting and ending with unmatched vertices.
Layered BFS: A BFS is used to find the shortest augmenting paths in the graph,
improving the efficiency of the search.
Maximum Matching: The goal of the Hopcroft-Karp algorithm is to find the maximum
matching in a bipartite graph, i.e., the largest set of pairwise non-adjacent edges.
1. Initialization: Start with an empty matching M .
2. BFS: Perform a BFS to find the shortest augmenting paths in the graph, where the
search is layered to ensure that the shortest paths are found first.
3. DFS: Use a DFS to find augmenting paths starting from unmatched vertices and
augment the matching by flipping the edges along these paths.
4. Repeat: Repeat the BFS and DFS steps until no augmenting paths are found.
Time Complexity:
The Hopcroft-Karp algorithm runs in O(√V ⋅ E) time, where V is the number of vertices and
E is the number of edges in the bipartite graph. This is significantly faster than the basic
augmenting path algorithm, which runs in O(V ⋅ E).
4. Applications
Marriage Matching: The Stable Marriage Problem, where pairs of men and women are
matched in such a way that there are no blocking pairs.
Network Flow Problems: Finding maximum matchings in bipartite graphs that model
flow networks.
Job Scheduling: Assigning workers to tasks where the goal is to maximize total profit or
minimize cost, using bipartite matching.
Resource Allocation: Matching suppliers to customers in a way that maximizes
efficiency.
5. Conclusion
In this lecture, we covered stable matchings, which ensure that no blocking pairs exist in a
bipartite matching, and explored the Gale-Shapley algorithm for solving the Stable Marriage
Problem. We also discussed faster bipartite matching through the Hopcroft-Karp
algorithm, which improves efficiency in finding maximum matchings in bipartite graphs.
These concepts and algorithms are widely applicable in real-world problems, including
marriage matching, job assignments, and resource allocation. The Gale-Shapley algorithm
guarantees stability, while the Hopcroft-Karp algorithm provides an efficient solution to
large-scale matching problems.
1. Factors in Graphs
Formal Definition:
A factor of a graph G is a spanning subgraph of G, i.e., a subgraph that contains all the
vertices of G.
An f-factor is a generalization where the degree of each vertex is prescribed by a
function f : a spanning subgraph H in which every vertex v has degree exactly f (v).
A 1-factor (or perfect matching) is a special case where f (v) = 1 for all vertices in G,
implying that each vertex is matched to exactly one other vertex, and the resulting
subgraph is a matching.
Example: In the complete graph K4 , every Hamiltonian cycle is a 2-factor (each vertex has degree 2), and every perfect matching, consisting of two disjoint edges, is a 1-factor.
2. Perfect Matchings
A perfect matching in a graph is a matching that covers all the vertices in the graph. This
means that every vertex is matched to exactly one other vertex, and no vertex remains
unmatched. A perfect matching is a special case of a 1-factor where the degree of each
vertex is exactly 1.
Formal Definition:
1-Factor: A perfect matching is also referred to as a 1-factor in graph theory, where the
degree of each vertex is 1 in the subgraph formed by the matching.
Example:
For the graph G with vertices V = {1, 2, 3, 4} and edges E = {(1, 2), (2, 3), (3, 4), (1, 4)}
, the set {(1, 2), (3, 4)} forms a perfect matching, because each vertex is included in exactly
one edge.
3. Tutte's 1-Factor Theorem
Tutte's 1-Factor Theorem provides a necessary and sufficient condition for the existence of a
perfect matching in a general graph. The theorem characterizes graphs that have a perfect
matching in terms of certain properties of their subgraphs.
A graph G has a perfect matching if and only if for every subset S of the vertices V , the
number of odd-sized components in the subgraph G − S (the graph obtained by removing
the vertices in S ) is less than or equal to the size of S , i.e.,
o(G − S) ≤ ∣S∣
where o(G − S) denotes the number of connected components of G − S with an odd
number of vertices.
Interpretation:
The theorem provides a structural condition for the existence of a perfect matching.
Specifically, if for any subset S of vertices, the number of odd-sized components in G − S is
not too large, then a perfect matching exists. This condition ensures that the graph has
enough "balance" between vertices and edges to form a perfect matching.
Example:
For a graph with 6 vertices, consider a subset S of 3 vertices. Tutte's theorem requires that
the subgraph remaining after removing S contain at most 3 odd-sized components. A perfect
matching exists only if this condition holds for every subset S ⊆ V , of any size; conversely, a
single subset that violates the condition proves that no perfect matching exists.
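Tutte's condition can be checked exhaustively on small graphs. A sketch (the 4-cycle and the 3-vertex path below are assumed test cases):

```python
from itertools import combinations

def odd_components_after_removal(vertices, edges, removed):
    """Count connected components of G - removed with an odd number of vertices."""
    remaining = set(vertices) - set(removed)
    adj = {v: set() for v in remaining}
    for u, v in edges:
        if u in remaining and v in remaining:
            adj[u].add(v)
            adj[v].add(u)
    seen, odd = set(), 0
    for start in remaining:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:                      # DFS over one component
            x = stack.pop()
            size += 1
            for y in adj[x] - seen:
                seen.add(y)
                stack.append(y)
        odd += size % 2
    return odd

def satisfies_tutte(vertices, edges):
    """Check o(G - S) <= |S| for every subset S of the vertices."""
    for k in range(len(vertices) + 1):
        for S in combinations(vertices, k):
            if odd_components_after_removal(vertices, edges, S) > len(S):
                return False
    return True

V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4), (1, 4)]
print(satisfies_tutte(V, E))  # True: the 4-cycle has a perfect matching
```

By contrast, a path on 3 vertices fails already at S = ∅ (one odd component, zero removed vertices), matching the fact that a graph with an odd number of vertices has no perfect matching.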
4. f-Factor of Graphs
The concept of an f-factor generalizes the idea of perfect matchings to allow for more
flexibility in the degree of vertices. An f-factor of a graph is a subgraph where the degree of
each vertex is prescribed by a function f (v).
Formal Definition: An f-factor of a graph G = (V, E) is a spanning subgraph H such that
degH(v) = f (v) for every vertex v ∈ V ; the function f defines the desired degree for each
vertex in the subgraph.
For example:
A 2-factor is an f-factor with f (v) = 2 for all v : a spanning subgraph in which every
vertex has degree 2, i.e., a disjoint union of cycles covering all the vertices.
An even factor is an f-factor where f (v) is even for all v , i.e., a spanning subgraph in
which each vertex is incident to an even number of edges.
Existence of f-Factors:
The existence of an f-factor in a graph is not always guaranteed, and the necessary and
sufficient conditions for its existence depend on the structure of the graph and the degree
function f . Tutte gave a general characterization of graphs with an f-factor, via a reduction to
the 1-factor case; for bipartite graphs with f (v) = 1 for all v , Hall's Theorem provides the
corresponding condition.
5. Applications
Job Assignment: Perfect matchings are used in assigning workers to jobs such that
every worker is assigned exactly one job, and every job is assigned exactly one worker.
Network Design: In wireless networks, perfect matchings are used to pair up nodes for
communication, ensuring that all nodes are connected in a balanced way.
Edmonds’ Blossom Algorithm: This algorithm can find maximum matchings in general
graphs, including those with odd cycles.
6. Conclusion
In this lecture, we have explored the concepts of f-factors and perfect matchings in general
graphs. We discussed Tutte’s 1-Factor Theorem, which provides a necessary and sufficient
condition for the existence of a perfect matching. We also covered the concept of f-factors,
which generalize the notion of perfect matchings by allowing for arbitrary degree functions.
These results are fundamental in understanding matching theory and its applications, and
they provide important tools for solving real-world problems in fields such as network
design, scheduling, and bioinformatics.
1. Matchings in General Graphs
A matching in a graph is a subset of edges such that no two edges share a common vertex.
A maximum matching is a matching that contains the largest number of edges.
In bipartite graphs, a maximum matching can be found using simpler algorithms such as the
basic augmenting-path method or the Hopcroft-Karp algorithm. However, these algorithms
break down on general graphs (graphs that may contain odd-length cycles, i.e., non-bipartite
graphs). Edmonds' Blossom Algorithm addresses this issue and provides an efficient
solution for finding maximum matchings in general graphs.
2. The Challenge of Odd Cycles
When dealing with general graphs, the primary challenge arises from the existence of odd-
length cycles. In bipartite graphs, matchings can be found easily because the graph is two-
colorable, but in general graphs, the situation is more complicated.
A blossom is an odd-length cycle whose edges alternate between matched and unmatched
edges (except at a single vertex, the base). The presence of blossoms complicates the search
for augmenting paths (paths that increase the size of the matching), so a new strategy is
needed to handle such cycles when searching for maximum matchings.
3. Edmonds' Blossom Algorithm: Overview
The Blossom Algorithm (also known as Edmonds' algorithm) was developed by Jack
Edmonds in 1965 and is designed to find the maximum matching in general graphs,
including graphs that contain odd-length cycles (blossoms). The algorithm is efficient and
works in O(E ⋅ V ) time, where E is the number of edges and V is the number of vertices.
The main idea behind the Blossom Algorithm is to find augmenting paths in a graph, even
when blossoms are present, by shrinking these odd-length cycles into single vertices and
recursively finding augmenting paths in the reduced graph. Once an augmenting path is
found, the matching is increased, and the graph is updated accordingly.
1. Augmenting Path: A path that alternates between unmatched and matched edges,
starting and ending at unmatched vertices. An augmenting path can be used to increase
the size of the matching.
2. Blossom: An odd-length cycle whose edges alternate between matched and unmatched
edges; blossoms are contracted ("shrunk") into a single vertex during the search for
augmenting paths.
3. Alternating Path: A path where edges alternate between edges not in the matching and
edges in the matching.
1. Initialization: Start with an empty matching (or any initial matching).
2. Find an Augmenting Path: Search for an augmenting path starting from an unmatched
vertex. If no augmenting path exists, the algorithm terminates and returns the current
matching.
3. Shrink Blossoms: If a blossom (an odd-length alternating cycle) is encountered during
the search, shrink it into a single vertex and continue the search in the smaller,
contracted graph.
4. Augment the Matching: Once an augmenting path is found, flip the edges along the
path, i.e., if an edge was part of the matching, remove it; if it was not, add it.
5. Repeat: Continue the process until no more augmenting paths can be found. When this
happens, the matching is maximum.
Example:
1. Detecting a blossom: Suppose that, while searching for an augmenting path, the
algorithm encounters an odd-length alternating cycle through vertices B , C , and D .
2. Shrinking the blossom: The cycle is contracted into a single "super vertex," say BCD .
Now the graph is simpler to work with.
3. Search for augmenting paths: Perform a BFS or DFS in the contracted graph to find
augmenting paths.
4. Unshrink: Once an augmenting path is found, unshrink the blossom and update the
matching.
6. Time Complexity
The algorithm's O(E ⋅ V ) running time is much better than the naive approach of checking
all possible matchings, which would have an exponential time complexity.
7. Applications
Matching Theory: Solving problems in economics, game theory, and social sciences that
involve finding optimal pairings or allocations.
8. Conclusion
In this lecture, we explored Edmonds' Blossom Algorithm, which efficiently finds maximum
matchings in general graphs, making it one of the most important algorithms in combinatorial
optimization. The Blossom Algorithm has broad applications in areas such as network
design, scheduling, and economic matching problems.
1. Introduction to Connectivity
Connectivity in graphs refers to the ability to traverse or communicate between any two
vertices within the graph. A graph is said to be connected if there is a path between every
pair of vertices. Connectivity is a fundamental concept in graph theory with applications in
network design, reliability analysis, and communication systems.
In the study of graph connectivity, we are often interested in the minimum number of
elements (vertices or edges) whose removal would disconnect the graph.
2. Vertex Connectivity
Vertex connectivity, denoted by κ(G), is the minimum number of vertices that must be
removed from a graph G to disconnect the graph or make it trivial (i.e., reduce it to a single
vertex or an empty graph).
Formal Definition: The vertex connectivity κ(G) of a graph G is the minimum number
of vertices that must be removed from G to disconnect the graph, or to make it a single
vertex.
If a graph is k-connected, it means there are no sets of fewer than k vertices whose
removal would disconnect the graph.
Example:
Consider a graph G with vertices V = {1, 2, 3, 4} and edges E = {(1, 2), (2, 3), (3, 4)}. To
disconnect this graph, we would need to remove at least one vertex, and therefore the vertex
connectivity κ(G) = 1.
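Vertex connectivity can be computed by brute force on small graphs; a sketch using the path graph from the example:

```python
from itertools import combinations

def is_connected(vertices, edges):
    """BFS/DFS reachability check restricted to the given vertex set."""
    vertices = set(vertices)
    if not vertices:
        return True
    adj = {v: set() for v in vertices}
    for u, v in edges:
        if u in vertices and v in vertices:
            adj[u].add(v)
            adj[v].add(u)
    start = next(iter(vertices))
    seen, stack = {start}, [start]
    while stack:
        x = stack.pop()
        for y in adj[x] - seen:
            seen.add(y)
            stack.append(y)
    return seen == vertices

def vertex_connectivity(vertices, edges):
    """Smallest number of vertices whose removal disconnects the graph
    (brute force; complete graphs on n vertices return n - 1)."""
    for k in range(len(vertices)):
        for S in combinations(vertices, k):
            remaining = [v for v in vertices if v not in S]
            if len(remaining) > 1 and not is_connected(remaining, edges):
                return k
    return len(vertices) - 1

V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4)]
print(vertex_connectivity(V, E))  # 1: removing vertex 2 (or 3) disconnects the path
```

Trying all subsets is exponential, so this only illustrates the definition; efficient computation uses max-flow techniques, as discussed later with Menger's Theorem.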
3. Edge Connectivity
Edge connectivity, denoted by λ(G), is the minimum number of edges that must be
removed from a graph G to disconnect the graph or make it trivial.
Formal Definition: The edge connectivity λ(G) of a graph G is the minimum number
of edges whose removal disconnects the graph.
If a graph is k-edge-connected, it means there are no sets of fewer than k edges whose
removal would disconnect the graph.
Example: In the path graph with V = {1, 2, 3, 4} and E = {(1, 2), (2, 3), (3, 4)}, removing the single edge (2, 3) disconnects the graph, so the edge connectivity λ(G) = 1.
4. Cuts in Graphs
A cut in a graph is a partition of the vertices into two disjoint sets S and T . The edges that
connect these two sets are called the cut-set. Cuts and cut-sets are used to measure the
connectivity of a graph.
The cut-set E(S, T ) is the set of edges that have one endpoint in S and the other
endpoint in T .
The capacity of the cut-set E(S, T ) is the number of edges in E(S, T ), and this is used
to measure the edge connectivity of the graph.
Minimum Cut: The minimum cut is the cut whose cut-set contains the smallest number of
edges. The minimum edge connectivity of a graph λ(G) is the capacity of the minimum cut.
5. Bonds and Blocks
A bond in a graph is a minimal set of edges whose removal disconnects the graph. The
edge connectivity of the graph is the size of the smallest bond.
Formal Definition: A bond is a set of edges B ⊆ E such that the removal of B from
G results in a disconnected graph. A bond is minimal if no proper subset of B
disconnects the graph.
A block in a graph is a maximal connected subgraph that does not have a cut-vertex. A
cut-vertex (also known as an articulation point) is a vertex whose removal disconnects
the graph.
Formal Definition: A block is a subgraph of G such that it is connected and does not
contain any cut-vertex. Blocks are the "building blocks" of a graph.
Example:
Consider a graph G from which removing a certain minimal set of edges disconnects it; that
minimal edge set is a bond. Likewise, any maximal connected subgraph of G that cannot be
disconnected by deleting a single vertex is a block.
6. Theorems on Connectivity
Menger's Theorem: The vertex connectivity κ(G) of a graph G is equal to the
maximum number of vertex-disjoint paths between any two non-adjacent vertices
in G.
Max-Flow Min-Cut Theorem: In a flow network, the maximum amount of flow that
can be sent from the source to the sink is equal to the capacity of the minimum
cut separating the source and sink.
7. Applications of Connectivity
Circuit Design: In electrical networks and integrated circuit design, the connectivity of
the network affects its robustness and reliability.
Social Networks: Analyzing the connectivity in social networks can help understand how
robust or fragile the network is to node removal (e.g., the removal of influential
individuals or groups).
Network Design: Building communication and transportation networks that remain
connected even under failure scenarios.
8. Conclusion
In this lecture, we discussed the fundamental concepts of cuts and connectivity in graphs,
focusing on vertex connectivity, edge connectivity, bonds, and blocks. We also explored
key theorems, such as Menger’s Theorem and the Max-Flow Min-Cut Theorem, which
provide deep insights into the structural properties of graphs and their resilience to
disruptions. These concepts are crucial for understanding the robustness of networks and
have significant applications in various domains, including computer science, engineering,
and social sciences.
1. k -Connected Graphs
A graph G is k -connected if it has more than k vertices and remains connected after the
removal of any fewer than k vertices. The vertex connectivity of G, denoted by κ(G), is
the largest value of k such that G is k -connected.
Example:
A triangle (a graph with three vertices and three edges) is 2-connected because
removing any one vertex still leaves a connected graph.
A star graph (a graph with one central vertex connected to all other vertices) is 1-
connected, as removing the central vertex disconnects the graph.
2. k -Edge-Connected Graphs
A graph G is k -edge-connected if it remains connected after the removal of any fewer than
k edges. The edge connectivity λ(G) is the largest value of k such that G is k -edge-
connected.
Example:
A cycle graph is 2-edge-connected: removing any one edge leaves the graph connected,
but removing two suitably chosen edges disconnects it.
3. Menger's Theorem
Menger's Theorem (Vertex Connectivity): For any two non-adjacent vertices u and v , the
minimum number of vertices whose removal separates u from v equals the maximum
number of vertex-disjoint paths between u and v .
Menger's Theorem (Edge Connectivity): For any two vertices u and v , the minimum
number of edges whose removal separates u from v equals the maximum number of
edge-disjoint paths between u and v .
Menger’s theorem thus provides a way to compute the connectivity of a graph by counting
disjoint paths, which can help identify critical edges or vertices in a network.
Example:
Consider a graph with vertices u and v . If there are two vertex-disjoint paths between u and
v , the vertex connectivity κ(G) is at least 2.
4. Line Graph
Formal Definition: The line graph L(G) of a graph G has one vertex for each edge of G.
Two vertices in L(G) are adjacent if and only if their corresponding edges in G are
incident to a common vertex in G.
Example:
For a graph G with edges {(1, 2), (2, 3), (3, 4)}, the line graph L(G) would have vertices
{(1, 2), (2, 3), (3, 4)}, with edges connecting vertices (1, 2) and (2, 3), and (2, 3) and (3, 4)
since those edges in G share common vertices.
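Constructing L(G) is straightforward; a minimal sketch using the edge list from the example:

```python
def line_graph(edges):
    """Build L(G): one vertex per edge of G; two vertices of L(G) are
    adjacent iff the corresponding edges of G share an endpoint."""
    edges = [tuple(e) for e in edges]
    lg_edges = []
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            if set(edges[i]) & set(edges[j]):   # edges share a vertex in G
                lg_edges.append((edges[i], edges[j]))
    return lg_edges

E = [(1, 2), (2, 3), (3, 4)]
print(line_graph(E))  # [((1, 2), (2, 3)), ((2, 3), (3, 4))]
```

The path on four vertices turns into a path on three vertices in L(G), matching the example above.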
5. Relationships Between Vertex and Edge Connectivity
1. Vertex vs. Edge Connectivity: For every graph G, κ(G) ≤ λ(G) ≤ δ(G), where δ(G)
is the minimum degree; hence every k -vertex-connected graph is also k -edge-
connected.
The converse is not necessarily true: A graph can be k -edge-connected but not k -
vertex-connected.
2. Complete Graphs:
The complete graph Kn is (n − 1)-vertex-connected and (n − 1)-edge-connected,
because every vertex is connected to every other vertex, making it highly resilient to
the removal of vertices or edges.
3. Cyclic Graphs:
A cycle graph Cn (n ≥ 3) is 2-vertex-connected and 2-edge-connected.
4. Planar Graphs:
For planar graphs, the vertex connectivity is at most 5, because every planar graph
contains a vertex of degree at most 5 (and κ(G) never exceeds the minimum
degree); the icosahedron shows that this bound is attained.
6. Applications
Graph Theory and Algorithm Design: Many problems in optimization and algorithm
design require finding or analyzing k -connected graphs, such as in the case of minimum
cut problems or maximum flow problems.
Chemical and Molecular Networks: Line graphs can be used in chemistry to represent
molecules where edges correspond to bonds, and vertices represent atoms. The
structure of these graphs can be studied to understand the stability and reactivity of
molecules.
7. Conclusion
In this lecture, we discussed the important concepts of k -connected graphs and k -edge-
connected graphs, which quantify how resilient a graph is to vertex or edge failures. We
explored Menger’s Theorem in both vertex and edge versions, which provides a powerful
method for computing the connectivity of graphs. Additionally, we introduced the line
graph, which provides a new perspective on graphs by converting edges into vertices and
analyzing their relationships. These concepts are crucial for understanding graph
robustness, with applications in network design, communication systems, and optimization
problems.
1. Introduction to Network Flows
A network flow is a directed graph where each edge has a capacity (maximum allowable
flow) and each edge carries a flow that does not exceed its capacity. The objective of network
flow problems is to maximize or minimize the flow in the network under various constraints.
Flow: The flow on an edge is a non-negative value representing the amount of "stuff"
(e.g., data, goods, or people) being transmitted along the edge.
Capacity: The capacity of an edge (u, v) is the maximum amount of flow that can be
sent from vertex u to vertex v .
Source and Sink: The network has a designated source vertex s from which flow
originates, and a sink vertex t to which the flow is sent.
The Maximum Flow Problem asks to find the maximum amount of flow that can be sent
from the source s to the sink t, subject to the capacities on the edges.
2. Maximum Network Flow
The maximum network flow problem is a classic optimization problem that involves finding
the maximum flow from a source to a sink in a flow network. The flow must respect the
capacity constraints on edges and obey the conservation of flow at each vertex (except at the
source and sink).
Conservation of Flow: For every vertex v in the network (except the source and sink), the
total flow into v must equal the total flow out of v .
Capacity Constraints: The flow on any edge (u, v) cannot exceed the capacity of that
edge.
The goal is to maximize the flow from the source s to the sink t while respecting the
constraints.
Example:
Consider a simple flow network with source s and sink t. The capacities on the edges are
given as follows:
From s to v1 : capacity = 10
From v1 to t: capacity = 5
From s to v2 : capacity = 5
From v2 to t: capacity = 10
The maximum flow here is 10: send 5 units along s → v1 → t (limited by the edge (v1 , t))
and 5 units along s → v2 → t (limited by the edge (s, v2 )).
3. f-Augmenting Path
An augmenting path is a path from the source to the sink along which additional flow can
be sent. The flow along an augmenting path can be increased by finding the bottleneck
along the path — the edge with the minimum residual capacity (the difference between the
edge's capacity and its current flow).
Residual Graph: The residual graph represents the available capacity of the network
after considering the current flow. It is derived by subtracting the flow already sent on
each edge from the edge's capacity.
An f-augmenting path is a path in the residual graph along which the flow can still be
augmented (increased).
Traverse the residual graph from the source to the sink, choosing edges that have
available capacity (positive residual capacity). On such a path, the edge with the
smallest residual capacity is the bottleneck.
Once an augmenting path is found, the flow along that path is increased by the bottleneck
value.
4. Ford-Fulkerson Algorithm
The Ford-Fulkerson Algorithm is a well-known algorithm for solving the maximum flow
problem. It iteratively finds augmenting paths in the residual graph and augments the flow
until no more augmenting paths can be found.
1. Initialize the flow on every edge to 0.
2. While an augmenting path from the source s to the sink t exists in the residual graph,
increase the flow along it by its bottleneck capacity and update the residual graph.
3. Output the maximum flow as the sum of the flow values on edges emanating from the
source.
Example of Ford-Fulkerson:
Given a network with edges (s, v1 ), (v1 , t), (s, v2 ), and (v2 , t) with specified capacities,
Ford-Fulkerson finds augmenting paths iteratively, updating the flow until no more
augmenting paths exist.
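A compact sketch of Ford-Fulkerson with BFS path selection (the Edmonds-Karp variant), applied to a four-edge network like the one above; the adjacency-dict representation is an assumption:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: Ford-Fulkerson using BFS for shortest augmenting paths.
    capacity: dict of dicts, capacity[u][v] = capacity of edge (u, v)."""
    # Residual capacities, with reverse edges initialized to 0
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in capacity:
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:          # no augmenting path: current flow is maximum
            return flow
        # Recover the path and its bottleneck capacity
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        # Augment: decrease forward residuals, increase reverse residuals
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

caps = {"s": {"v1": 10, "v2": 5}, "v1": {"t": 5}, "v2": {"t": 10}, "t": {}}
print(max_flow(caps, "s", "t"))  # 10
```

The two augmenting paths s → v1 → t and s → v2 → t each carry 5 units, giving a maximum flow of 10, consistent with the earlier example.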
5. Max-Flow Min-Cut Theorem
The Max-Flow Min-Cut Theorem is a fundamental result in network flow theory that
establishes a strong connection between the maximum flow and the minimum cut in a flow
network.
Min-Cut: A cut in a flow network is a partition of the vertices into two disjoint sets, S and
T , where the source s is in set S and the sink t is in set T . The capacity of the cut is the
sum of the capacities of the edges going from set S to set T .
Max-Flow Min-Cut Theorem: The maximum flow in a flow network is equal to the
capacity of the minimum cut. In other words, the amount of flow that can be sent from
the source to the sink is limited by the smallest set of edges that, if removed, would
disconnect the source from the sink.
Mathematically:
Max-Flow = Min-Cut
This theorem is powerful because it provides an alternate way to compute the maximum
flow by finding the minimum cut.
Example:
For a network with source s, sink t, and various edges, the maximum flow can be found by
applying the Ford-Fulkerson algorithm. The minimum cut can then be found by identifying
the set of edges crossing from the source side of the flow to the sink side.
We can prove Menger’s Theorem using the Max-Flow Min-Cut Theorem. Menger’s Theorem
relates the maximum number of disjoint paths between two vertices to the minimum
number of vertices or edges whose removal separates them.
Menger’s Theorem (Vertex Version): For any two non-adjacent vertices u and v, the
minimum number of vertices whose removal separates u from v is equal to the maximum
number of internally vertex-disjoint paths between u and v. Minimizing over all such pairs
gives the vertex connectivity κ(G) of the graph G.
Proof Outline:
1. Maximum Number of Vertex-Disjoint Paths: The maximum number of vertex-disjoint
paths between u and v can be determined by applying the Max-Flow Min-Cut Theorem
to a flow network where each vertex is split into two vertices, one for incoming edges
and one for outgoing edges.
2. Minimum Cut: The minimum number of vertices whose removal disconnects the graph
is the size of the minimum vertex cut, which corresponds to the minimum number of
vertices separating u and v .
Thus, the maximum number of vertex-disjoint paths equals the minimum size of a vertex
cut separating u and v, which proves Menger’s theorem.
Network flow techniques apply well beyond computing a single maximum flow:
Project Scheduling: In problems such as job scheduling, where tasks compete for a limited
flow of resources.
Bipartite Matching: In problems where one set of entities is matched with another set,
such as job assignment or resource allocation; maximum bipartite matching reduces
directly to maximum flow.
8. Conclusion
In this lecture, we have covered the key concepts and algorithms related to network flow
problems, including the Maximum Flow Problem, the Ford-Fulkerson Algorithm, the Max-
Flow Min-Cut Theorem, and the proof of Menger’s Theorem using the Max-Flow Min-Cut
Theorem. These concepts are crucial for solving optimization problems in various real-world
applications, such as transportation, communication, and scheduling. By understanding
these algorithms and theorems, we gain powerful tools for analyzing and optimizing flow in
networks.
Chromatic Number: The chromatic number χ(G) of a graph G is the smallest number
of colors required to color the vertices of G so that no two adjacent vertices share the
same color.
Proper Coloring: A coloring is proper if no two adjacent vertices share the same color.
Example:
Consider the cycle C4 with vertices v1, v2, v3, v4 and edges
{(v1, v2), (v2, v3), (v3, v4), (v4, v1)}. A proper 2-coloring of the graph might assign color 1
to v1 and v3 , and color 2 to v2 and v4 , satisfying the requirement that adjacent vertices have
different colors.
2. Optimal Coloring
An optimal coloring refers to a coloring that uses the fewest possible number of colors to
achieve a proper coloring of a graph. The chromatic number of a graph is the minimum
number of colors needed for such a coloring.
Chromatic Polynomial: The chromatic polynomial P(G, k) counts the number of proper
colorings of the graph with k colors; the chromatic number of G is the smallest k for which
P(G, k) > 0.
Example:
For the cycle graph Cn, the chromatic number is 2 if n is even and 3 if n is odd.
The clique number ω(G) of a graph G is the size of the largest complete subgraph (clique)
in G. It provides a lower bound on the chromatic number:
χ(G) ≥ ω(G)
Example:
In a graph with a 3-clique (triangle), the chromatic number is at least 3, because any proper
coloring must assign at least 3 colors to the vertices of the triangle.
The Cartesian product of two graphs G1 and G2, denoted G1 □ G2, is a graph whose vertex
set is the Cartesian product of the vertex sets of G1 and G2. Two vertices (u1, v1) and
(u2, v2) are adjacent if and only if:
1. u1 = u2 and v1 is adjacent to v2 in G2, or
2. v1 = v2 and u1 is adjacent to u2 in G1.
The chromatic number of the Cartesian product of two graphs satisfies the following upper
bound:
χ(G1 □ G2) ≤ χ(G1) · χ(G2)
This upper bound arises because in the Cartesian product, the coloring of G1 and G2 can be
used to color the product graph. If both graphs are properly colored, their product is also
properly colored.
Example:
For a graph G1 with chromatic number 3 and a cycle G2 of
length 4 (C4) with chromatic number 2, the chromatic number of G1 □ G2 will be at most
3 × 2 = 6.
To compute the chromatic number efficiently, several upper bounds can be used:
Greedy Coloring: The greedy coloring algorithm is a simple heuristic that colors the
vertices of the graph one by one. For each vertex, assign the smallest available color that
has not been used by its adjacent vertices. While greedy coloring does not always give
the optimal coloring, it provides an upper bound for the chromatic number.
Example: For a graph with a maximum degree Δ = 4, greedy coloring can use at
most 5 colors.
Interval Graphs: An interval graph is a graph where each vertex can be associated with
an interval on the real line, and two vertices are adjacent if their intervals overlap.
Interval graphs are a class of graphs that have useful properties for coloring.
The chromatic number of an interval graph equals its clique number, which is the
maximum number of overlapping intervals.
Example:
Consider the interval graph where the intervals are [1, 4], [2, 5], [4, 7], [6, 8]. Treating
intervals that meet only at an endpoint as non-overlapping, the maximum clique consists of
the overlapping intervals [1, 4] and [2, 5], so the chromatic number is 2.
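The greedy idea specializes nicely to intervals: sweeping by left endpoint and reusing the color of any interval that has already ended yields an optimal coloring. A minimal sketch, treating intervals that meet only at an endpoint as non-overlapping:

```python
import heapq

def color_intervals(intervals):
    """Sweep intervals by left endpoint, reusing the color of any interval
    that has already ended (touching endpoints treated as non-overlapping)."""
    active = []      # min-heap of (right endpoint, color) for live intervals
    freed = []       # colors released by intervals that have ended
    coloring = {}
    next_color = 0
    for left, right in sorted(intervals):
        # release the colors of all intervals that end at or before `left`
        while active and active[0][0] <= left:
            freed.append(heapq.heappop(active)[1])
        if freed:
            c = freed.pop()
        else:
            c = next_color
            next_color += 1
        coloring[(left, right)] = c
        heapq.heappush(active, (right, c))
    return coloring

ivals = [(1, 4), (2, 5), (4, 7), (6, 8)]
coloring = color_intervals(ivals)
print(max(coloring.values()) + 1)  # 2 colors suffice
```

The number of colors used equals the maximum number of simultaneously active intervals, which is exactly the clique number of the interval graph.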
6. Conclusion
In this lecture, we have explored the concepts of vertex coloring in graphs, optimal coloring,
upper bounds on the chromatic number, and various applications such as register
allocation and interval graphs. We have discussed techniques such as greedy coloring, the
relationship between clique number and chromatic number, and how the Cartesian product
of graphs affects the chromatic number. Understanding these concepts is fundamental for
solving practical problems in areas like computer science, scheduling, and optimization.
1. Brooks' Theorem
Brooks' Theorem provides a characterization of the chromatic number for most types of
connected graphs. It relates the chromatic number of a graph to its maximum degree.
If a graph G is connected and is neither a complete graph nor an odd cycle, then the
chromatic number of G is at most the maximum degree Δ of G:
χ(G) ≤ Δ
For graphs that are either complete graphs or odd cycles, the chromatic number is
Δ + 1, i.e., χ(G) = Δ + 1.
Key Points:
Complete Graph: A complete graph Kn with n vertices has chromatic number n, since
every pair of vertices is adjacent and must receive distinct colors; here Δ = n − 1, so
χ(Kn) = Δ + 1.
Odd Cycle: A cycle graph Cn where n is odd requires 3 colors; since Δ = 2, this is again
Δ + 1.
General Case: If the graph is not a complete graph or an odd cycle, Brooks' theorem
ensures that the chromatic number does not exceed the maximum degree.
Example:
Cycle Graph: For C4 (even cycle), the chromatic number is 2, because its vertices can be
colored with two alternating colors.
General Graph: For a tree, the chromatic number is 2 (since trees are bipartite), and for a
graph with maximum degree 3, the chromatic number is at most 3 according to Brooks'
Theorem.
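A minimal greedy-coloring sketch (the graph and vertex names are illustrative) shows the Δ + 1 bound in action; on the even cycle C4 it happens to find an optimal 2-coloring:

```python
def greedy_coloring(adj):
    """Color vertices one by one with the smallest color unused by neighbors.
    Uses at most max_degree + 1 colors regardless of vertex order."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# even cycle C4: greedy finds an optimal 2-coloring here
adj = {"v1": ["v2", "v4"], "v2": ["v1", "v3"],
       "v3": ["v2", "v4"], "v4": ["v3", "v1"]}
colors = greedy_coloring(adj)
print(max(colors.values()) + 1)  # 2
```

Note that greedy coloring can exceed the chromatic number on unlucky vertex orders; it only guarantees the Δ + 1 upper bound.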
2. Color-Critical Graphs
A color-critical graph (also called k-critical graph) is a graph for which the chromatic
number is exactly k , and the graph becomes colorable with fewer colors if any single vertex
is removed.
Thus, a graph is color-critical if it "requires" exactly k colors, and removing any vertex
reduces the number of colors needed to color the graph.
Example:
Consider a graph G that is a triangle (3 vertices, each connected to the other two). The
chromatic number is 3 because it is a complete graph K3 . If any vertex is removed, the
resulting graph is a path on 2 vertices, which can be colored with 2 colors. Thus, G is 3-
critical.
Minimum Degree: Every vertex of a k-critical graph has degree at least k − 1:
δ(G) ≥ k − 1
In other words, every vertex of a k-critical graph has degree high enough that the graph
cannot be colored with fewer than k colors.
Subgraphs: A k -critical graph can have subgraphs with a chromatic number strictly less
than k , but removing any vertex will result in a graph whose chromatic number is less
than k .
Induced Subgraphs: Any induced subgraph of a k -critical graph that is not itself k -
critical must have a chromatic number less than k . Hence, k -critical graphs are
"maximally" difficult to color.
Connectivity: A k-critical graph is highly connected: its edge connectivity is at least k − 1.
Removing certain edges can reduce the chromatic number, and removing any vertex always
reduces the chromatic number of a k-critical graph.
Kn (Complete Graph): The complete graph on n vertices Kn is n-critical because it
requires exactly n colors, and removing any vertex results in Kn−1, which requires only
n − 1 colors.
Cycle Graphs: Odd cycle graphs are critical for their chromatic number. For example, the
odd cycle C5 is 3-critical, since it requires exactly 3 colors, and removing any vertex leaves
a path, which is 2-colorable.
Complete Bipartite Graphs: The complete bipartite graph Km,n requires exactly 2 colors
(bipartite graphs are 2-colorable), but it is not 2-critical in general: removing a vertex when
m, n ≥ 2 leaves a bipartite graph that still contains an edge and therefore still needs 2
colors. The only 2-critical graph is K2 (a single edge), since removing either endpoint
leaves a graph that is 1-colorable.
Brooks' Theorem and the study of color-critical graphs have several applications in real-world
problems:
Graph Coloring Heuristics: Brooks' Theorem provides an upper bound for graph
coloring algorithms, guiding the design of efficient heuristics for coloring large graphs.
6. Conclusion
In this lecture, we have explored Brooks' Theorem, which provides an upper bound on the
chromatic number for most connected graphs, and we have examined color-critical graphs,
which are graphs that require exactly k colors and have interesting structural properties. We
discussed the elementary properties of k-critical graphs, including their high connectivity
and the relationship between their chromatic number and maximum degree. Understanding
these theorems is crucial for applications in graph coloring problems, such as scheduling,
register allocation, and network design.
A proper coloring of a graph is an assignment of colors to the vertices such that no two
adjacent vertices share the same color. The goal in this section is to count how many
different ways we can color the vertices of a graph using k colors.
Chromatic Number: The chromatic number χ(G) of a graph G is the smallest number
of colors required to properly color the graph.
Counting the number of proper colorings for a graph is related to the chromatic polynomial,
which encodes this counting process.
2. Chromatic Polynomial
The chromatic polynomial P(G, k) of a graph G is a polynomial that gives the number of
proper colorings of G using k colors.
Properties:
If G is a complete graph Kn, the chromatic polynomial is
P(Kn, k) = k(k − 1)(k − 2) ⋯ (k − n + 1), since each vertex must have a unique color.
Example:
For the triangle C3 = K3, P(C3, k) = k(k − 1)(k − 2). This is because the first vertex can
be any of the k colors, the second vertex can be any color except the first vertex’s color
(giving k − 1 choices), and the third vertex can be any color except the two colors of the
adjacent vertices (giving k − 2 choices).
3. Chromatic Recurrence
For certain types of graphs, we can derive the chromatic polynomial using recurrence
relations. One such recurrence is the deletion-contraction recurrence, which allows us to
compute the chromatic polynomial of a graph by breaking it down into simpler components.
Deletion-Contraction: Let e = (u, v) be an edge of G. Write G − e for the graph obtained
by deleting e, and G/e for the graph obtained by contracting e (i.e., merging u and v into
a single vertex). Then the chromatic polynomial satisfies the recurrence:
P(G, k) = P(G − e, k) − P(G/e, k)
This recurrence is helpful because it breaks the problem into smaller subproblems. By
applying this recurrence iteratively, we can compute the chromatic polynomial for a given
graph.
Example:
For a graph with a single edge e joining two vertices, the chromatic polynomial is:
P(G, k) = P(G − e, k) − P(G/e, k) = k^2 − k = k(k − 1)
This recurrence can be used for more complex graphs, leading to a more systematic way of
determining the chromatic polynomial.
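The recurrence can be evaluated directly for a fixed k. A brute-force sketch for simple graphs (exponential time, so only suitable for small examples):

```python
def P(vertices, edges, k):
    """Count proper k-colorings via P(G) = P(G - e) - P(G / e)."""
    if not edges:
        return k ** len(vertices)  # no constraints: color every vertex freely
    (u, v), rest = edges[0], edges[1:]
    # deletion: simply drop the edge e = (u, v)
    deleted = P(vertices, rest, k)
    # contraction: merge v into u, discarding duplicate edges
    merged = []
    for a, b in rest:
        a, b = (u if a == v else a), (u if b == v else b)
        if a != b and (a, b) not in merged and (b, a) not in merged:
            merged.append((a, b))
    contracted = P(vertices - {v}, merged, k)
    return deleted - contracted

triangle = [(1, 2), (2, 3), (3, 1)]
print(P({1, 2, 3}, triangle, 3))  # k(k-1)(k-2) = 3 * 2 * 1 = 6
print(P({1, 2, 3}, triangle, 2))  # 0: a triangle is not 2-colorable
```

Evaluating at several values of k and interpolating recovers the full polynomial, since P(G, k) has degree equal to the number of vertices.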
4. Chromatic Polynomial for Trees
For trees, a well-known property is that the chromatic polynomial is very simple. Since a tree
is a bipartite graph (i.e., it can be colored using two colors), its chromatic polynomial is:
P(T, k) = k(k − 1)^(n−1)
Where T is a tree with n vertices. This is because for the first vertex, we have k color choices,
and for each subsequent vertex, we have k − 1 color choices to ensure it does not share the
color of its adjacent vertex.
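The closed form can be checked against brute-force enumeration on a small tree; the path on 4 vertices below is purely illustrative:

```python
from itertools import product

def tree_formula(n, k):
    # closed form for any tree on n vertices
    return k * (k - 1) ** (n - 1)

def brute_force(n, edges, k):
    # enumerate all k^n color assignments and keep the proper ones
    return sum(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=n))

path = [(0, 1), (1, 2), (2, 3)]  # a path is a tree on 4 vertices
print(tree_formula(4, 3), brute_force(4, path, 3))  # 24 24
```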
The chromatic polynomial provides valuable insights into a graph's coloring behavior. It has
applications in various fields, including:
Scheduling: In scheduling problems, we often need to assign resources (e.g., time slots,
machines) to tasks (represented as vertices) such that conflicting tasks (adjacent
vertices) do not share the same resource. The chromatic polynomial provides a count of
how many different ways we can assign these resources.
Network Design: The chromatic polynomial can be used to model problems in network
coloring, where different devices or channels must be assigned distinct resources to
avoid interference.
Another approach to count the number of proper colorings is through the inclusion-
exclusion principle. This technique involves considering all possible colorings and
subtracting the invalid colorings where adjacent vertices share the same color.
Inclusion-Exclusion Principle:
1. Start by counting all possible ways to color the graph with k colors (without any
restrictions).
2. Subtract the colorings where at least one pair of adjacent vertices shares the same
color.
3. Add back the cases where more than one pair shares the same color, and so on,
alternating addition and subtraction.
This approach is more general and can be applied to graphs where the chromatic polynomial
might be difficult to calculate directly.
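One standard way to organize this count sums over subsets S of edges: forcing every edge in S to be violated (both endpoints equal) merges vertices, leaving k raised to the number of resulting components, with alternating signs. A brute-force sketch:

```python
from itertools import combinations

def count_colorings(n, edges, k):
    """Inclusion-exclusion over edge subsets S: each term is
    (-1)^|S| * k^(number of components of the graph (V, S))."""
    def components(sub):
        # union-find to count connected components of (V, S)
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v in sub:
            parent[find(u)] = find(v)
        return len({find(x) for x in range(n)})
    return sum((-1) ** r * k ** components(sub)
               for r in range(len(edges) + 1)
               for sub in combinations(edges, r))

# triangle on 3 vertices: 27 - 27 + 9 - 3 = 6 proper 3-colorings
print(count_colorings(3, [(0, 1), (1, 2), (2, 0)], 3))  # 6
```

The running time is exponential in the number of edges, which reflects why computing chromatic polynomials is hard in general.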
7. Conclusion
In this lecture, we have explored the process of counting proper colorings of a graph, which
is a central problem in graph theory and combinatorics. We discussed the chromatic
polynomial, which provides a way to compute the number of proper colorings for a given
graph using k colors. We also covered the chromatic recurrence, which helps break down
the problem of counting colorings for more complex graphs, and we provided specific results
for trees. Additionally, we touched on the inclusion-exclusion principle for counting
colorings in general. Understanding these concepts is crucial for applications in scheduling,
network design, and other areas of applied mathematics and computer science.
A planar graph is a graph that can be embedded in the plane, i.e., it can be drawn on a flat
surface without any of its edges crossing. More formally:
A graph G = (V , E) is planar if there exists an embedding of G in the plane, such that
no two edges intersect except at their endpoints.
A graph that cannot be drawn without edge crossings is called a non-planar graph.
Example:
The complete graph on four vertices K4 is planar because it can be drawn without
edge crossings.
A plane graph is a graph that is embedded in the plane such that no two edges cross. A
graph embedding refers to the actual drawing of a graph in the plane.
Embedding: An embedding is the way a graph is laid out on the plane. The graph is
embedded if the edges do not intersect except at the vertices where they meet.
Faces: In a plane graph, the regions created by the edges are called faces. There are two
types of faces: the outer face, which is the region surrounding the entire graph, and the
inner faces, which are enclosed by the edges of the graph.
The key challenge in working with planar graphs is to find a way to embed them in the plane
without violating the condition of edge crossings.
3. Dual Graphs
The dual graph G∗ of a plane graph G has one vertex for each face of G. There is an edge
between two vertices in G∗ if and only if their corresponding faces in G share an edge.
In other words, the dual graph is formed by placing a vertex inside each face of the original
graph and connecting vertices of the dual graph whenever their corresponding faces in the
original graph are adjacent.
Example: The dual of a triangulation of a surface has the property that the dual graph is
also planar, and it reflects the connectivity of the regions (faces) of the original graph.
Properties:
The dual graph has an edge for every edge in the original graph, but the
connections correspond to shared faces rather than shared vertices.
The dual of the dual graph G∗∗ is isomorphic to the original graph G.
Example:
Consider the plane graph that forms a square with a diagonal. It has three faces: the two
triangular interior regions and the outer region. The dual graph has a vertex for each of
these faces; the two triangle vertices are joined because their faces share the diagonal, and
each is also joined to the outer-face vertex along the sides of the square.
One of the most important results in the study of planar graphs is Euler’s formula, which
relates the number of vertices V , edges E , and faces F of a connected planar graph. Euler’s
formula states that:
V −E+F =2
Where:
V is the number of vertices, E the number of edges, and F the number of faces (including
the outer face).
This formula holds for any connected planar graph, and it is a crucial tool in understanding
the structure of planar graphs.
Example: For the square with a diagonal (a graph with four vertices and five edges), Euler’s
formula gives F = 2 − V + E = 2 − 4 + 5 = 3 faces (the two triangular inner faces and the
outer face).
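Euler's formula is easy to verify numerically once V, E, and F are counted (remember that F includes the outer face):

```python
def euler_check(V, E, F):
    """Euler's formula for a connected plane graph: V - E + F = 2."""
    return V - E + F == 2

# square with a diagonal: 4 vertices, 5 edges, 3 faces (two triangles + outer)
print(euler_check(4, 5, 3))  # True
# cube skeleton: 8 vertices, 12 edges, 6 faces (including the outer face)
print(euler_check(8, 12, 6))  # True
```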
A homeomorphism of a graph involves replacing edges with paths. Thus, K5 and K3,3
remain non-planar under homeomorphism: any subdivision of K5 or K3,3 is non-planar.
Planarity Testing: There are algorithms that allow you to test whether a given graph is
planar. One well-known algorithm is the Hopcroft and Tarjan planarity testing
algorithm, which runs in linear time.
6. Regular Polyhedra
In three-dimensional space, the regular polyhedra (also known as Platonic solids) are the
convex solids whose faces are congruent regular polygons, with the same number of faces
meeting at each vertex; their vertex-edge skeletons can be represented as planar graphs.
There are exactly five: the tetrahedron, cube, octahedron, dodecahedron, and icosahedron.
These polyhedra correspond to specific planar graphs and are used in various fields such as
chemistry (modeling molecules), crystallography, and computer graphics.
Geographic Mapping: Planar graphs are used to model geographical regions, such as
countries on a map, where each region is a face, and the edges represent borders.
Circuit Design: In electronics, planar graphs are used in the design of circuits where
wires must be routed without crossings.
Graph Drawing and Visualization: Algorithms for graph drawing aim to display graphs
in a way that emphasizes their planar structure, helping with understanding the graph’s
structure.
Topology and Geometry: Planar graphs are key objects in topological graph theory and
are used to study the properties of surfaces and higher-dimensional spaces.
8. Conclusion
In this lecture, we have covered the basic properties and concepts of planar graphs,
including plane graph embeddings, dual graphs, and Euler’s formula. We also discussed
regular polyhedra and their connection to planar graphs. Euler’s formula provides a
powerful tool for analyzing the structure of planar graphs, while dual graphs offer a way to
explore the relationships between the faces of a planar graph. Understanding planar graphs
is essential for solving problems in geography, network design, and other areas of applied
mathematics.
Understanding these concepts is essential in graph theory and has applications in areas like
graph drawing, network design, and topology.
A graph G′ is a subdivision of G if G′ is obtained from G by replacing edges with paths:
there is a mapping between the edges of G and the paths of G′ such that each edge in G is
replaced by a path in G′ (new vertices of degree 2 may be introduced along the way).
A graph is planar if it can be embedded in the plane without edge crossings. Subdivision
is an important operation because it does not affect planarity: a graph is planar if and only
if every subdivision of it is planar.
Example:
Subdividing an edge of K4 (placing a new vertex in the middle of the edge) yields a graph
that is still planar.
A graph H is called a minor of G if H can be obtained from G by deleting vertices, deleting
edges, and contracting edges. The concept of minors generalizes the idea of subdivision,
and the presence of certain minors implies that a graph is non-planar.
Example:
The Petersen graph contains K5 as a minor (contract the five "spoke" edges), so it is
non-planar.
3. Kuratowski’s Theorem
Kuratowski’s Theorem: A graph G is planar if and only if it does not contain a subgraph that
is a subdivision of either K5 (the complete graph on 5 vertices) or K3,3 (the complete
bipartite graph on two sets of 3 vertices).
K5 is the complete graph in which every pair of its 5 vertices is joined by an edge.
K3,3 is a complete bipartite graph where there are two sets of 3 vertices, and every vertex
in one set is adjacent to every vertex in the other set.
Example:
K5 is non-planar: any drawing of it in the plane forces at least one edge crossing.
K3,3 is also non-planar due to its inherent structure, which cannot be drawn without
crossings.
4. Wagner’s Theorem
Wagner’s Theorem: A graph is planar if and only if it does not contain a minor that is
isomorphic to K5 or K3,3 .
This theorem provides a minor-based characterization of planarity, similar to Kuratowski’s
Theorem, but with the concept of graph minors rather than subdivisions. Specifically, it
states that a graph is planar if it does not contain a K5 or K3,3 minor.
The minor relationship is broader than subdivision, as it allows both edge contraction
and deletion.
Example:
A graph that can be reduced to K5 or K3,3 by a series of edge contractions and deletions is
non-planar.
Both Kuratowski’s Theorem and Wagner’s Theorem provide the theoretical foundation for
planarity testing algorithms. While Kuratowski’s Theorem characterizes planarity via
forbidden subdivisions, Wagner’s Theorem allows planarity testing using graph minors.
Planarity Testing: To determine whether a given graph is planar, one can check for the
presence of K5 or K3,3 minors or subdivisions. Algorithms like Hopcroft and Tarjan’s run in
linear time.
Graph Minors: To test whether a graph contains a minor isomorphic to K5 or K3,3, one
can attempt to find the K5 or K3,3 structure by edge contractions and deletions. This is
generally more involved than checking for subdivisions directly.
The results from Kuratowski’s and Wagner’s Theorems are not only useful in pure graph
theory but also have applications in various practical fields:
Circuit Design: Planar graphs model circuit layouts where components (vertices) are
connected by wires (edges), and edge crossings must be avoided.
Geographic Mapping: Planar graphs are used to represent regions on a map, where the
goal is to assign regions to the plane without crossings, ensuring the map is planar.
Graph Drawing: Algorithms for drawing graphs often rely on these characterization
theorems to determine whether a graph can be drawn without edge crossings.
7. Conclusion
In this lecture, we have explored the characterization of planar graphs through the
concepts of subdivision, minor, Kuratowski’s Theorem, and Wagner’s Theorem. These
theorems provide necessary and sufficient conditions for determining whether a graph is
planar or non-planar. Understanding these characterizations is crucial for graph drawing,
circuit design, and other applied fields. Furthermore, these theorems form the foundation
for planarity testing algorithms, which are essential for practical graph theory applications.
1. Line Graphs
A line graph L(G) of a graph G is a graph where each vertex of L(G) represents an edge of
G, and two vertices of L(G) are adjacent if and only if their corresponding edges in G are
incident to a common vertex.
Formally, given a graph G = (V , E), the line graph L(G) is defined as follows:
The vertices of L(G) correspond to the edges of G.
Two vertices of L(G) are adjacent if and only if their corresponding edges in G are
incident to the same vertex.
Example:
Consider the triangle graph G with vertices A, B, C and edges (A, B), (B, C), (C, A). The
line graph L(G) has a vertex for each of these edges, and edges between two vertices in
L(G) if their corresponding edges in G share a common vertex.
For instance, if (A, B) and (B, C) are incident to vertex B , then in L(G), the vertex
corresponding to edge (A, B) will be adjacent to the vertex corresponding to edge
(B, C).
The line graph operation is useful in applications such as graph coloring and matching
problems, where it helps translate edge-related problems into vertex-related problems.
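The construction translates directly into code; the triangle below is illustrative:

```python
from itertools import combinations

def line_graph(edges):
    """Vertices of L(G) are the edges of G; two are adjacent
    iff the corresponding edges of G share an endpoint."""
    return {frozenset((e, f))
            for e, f in combinations(edges, 2)
            if set(e) & set(f)}

triangle = [("A", "B"), ("B", "C"), ("C", "A")]
L = line_graph(triangle)
print(len(L))  # every pair of triangle edges shares a vertex, so L(K3) = K3: 3
```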
2. Edge Coloring
Edge-coloring is the assignment of colors to the edges of a graph such that no two edges
sharing a common vertex have the same color. The goal is to minimize the number of colors
used.
The chromatic index χ′ (G) of a graph G is the smallest number of colors required for a
proper edge-coloring of G.
Example:
For a simple graph G, if G has a vertex with degree 3, at least 3 colors are required for the
edges incident to that vertex. The chromatic index of G is the minimum number of colors
needed to color all edges of G while ensuring that no two adjacent edges share the same
color.
3. Chromatic Index
The chromatic index of a graph G, denoted as χ′ (G), is the smallest number of colors
needed to color the edges of G such that no two edges that share a common vertex have
the same color.
Results on Chromatic Index:
Vizing’s Theorem: For any simple graph G, the chromatic index χ′(G) satisfies the
inequality:
Δ(G) ≤ χ′(G) ≤ Δ(G) + 1
where Δ(G) is the maximum degree of any vertex in G. This means that the chromatic
index is either equal to Δ(G) or Δ(G) + 1.
A graph with chromatic index equal to Δ(G) can be edge-colored with Δ(G) colors,
whereas a graph with chromatic index Δ(G) + 1 requires one more color.
Example:
For K4, the maximum degree Δ is 3, so by Vizing’s Theorem the chromatic index satisfies
3 ≤ χ′(K4) ≤ 4. In fact χ′(K4) = 3: the six edges of K4 split into three perfect matchings,
and giving each matching its own color yields a proper 3-edge-coloring.
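A 3-edge-coloring of K4 can be written out explicitly and checked; each color class below is one of the three perfect matchings of K4:

```python
from itertools import combinations

# explicit 3-edge-coloring of K4: each color class is a perfect matching
coloring = {("A", "B"): 0, ("C", "D"): 0,
            ("A", "C"): 1, ("B", "D"): 1,
            ("A", "D"): 2, ("B", "C"): 2}

# proper edge-coloring: edges sharing an endpoint must get different colors
proper = all(coloring[e] != coloring[f]
             for e, f in combinations(coloring, 2) if set(e) & set(f))
print(proper)  # True
```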
Multiplicity refers to the number of different ways a graph can be edge-colored with a given
chromatic index. In other words, it is the number of distinct ways to assign colors to the
edges while maintaining a proper edge-coloring.
A 1-factorization of a graph is a partition of its edge set into perfect matchings (1-factors).
Example: In the case of K6, a 1-factorization partitions its 15 edges into 5 perfect
matchings of 3 edges each, which corresponds to a proper 5-edge-coloring.
5. Applications of Edge Coloring
Network Design: Edge coloring is used in network design, where resources like
channels or frequencies need to be assigned to edges (connections) such that no two
adjacent edges share the same resource.
6. Conclusion
In this lecture, we have explored line graphs, edge-coloring, and the chromatic index. We
discussed Vizing’s Theorem and the classification of graphs into Class 1 and Class 2 based
on their chromatic index. We also touched upon multiplicity and 1-factorization, which
provide deeper insights into the structure and coloring of graphs. These concepts are
fundamental in solving practical problems related to scheduling, network design, and other
optimization tasks in various fields. Understanding these ideas enables the effective
application of graph theory to real-world problems, particularly those involving resource
allocation and conflict resolution.
1. Hamiltonian Graphs
A Hamiltonian graph is a graph that contains a Hamiltonian cycle, which is a cycle that visits
each vertex exactly once and returns to the starting vertex. The existence of a Hamiltonian
cycle is a central topic in combinatorial optimization and theoretical computer science.
Hamiltonian Path: A path that visits every vertex exactly once (but does not necessarily
return to the starting vertex).
Example: The complete graph Kn with n ≥ 3 is Hamiltonian, since its vertices can be visited
in any cyclic order; by contrast, a tree contains no Hamiltonian cycle at all.
Hamiltonian Cycle Problem: The decision problem for determining if a given graph contains
a Hamiltonian cycle is NP-complete, meaning it is computationally difficult to solve for large
graphs.
The Traveling Salesman Problem (TSP) is a classic problem in optimization. The problem is
defined as follows:
Problem Definition: Given a set of cities (vertices) and the distances (edges) between
them, the objective is to find the shortest possible route that visits each city exactly once
and returns to the starting city. The TSP is a combinatorial optimization problem that
seeks to minimize the total distance traveled.
Formal Statement: Given a graph G = (V , E) where each edge has a weight (representing
the distance between two cities), find a Hamiltonian cycle that minimizes the total weight of
the cycle.
TSP as an NP-Complete Problem: The TSP is NP-complete, meaning it is unlikely that a
polynomial-time algorithm exists to solve all instances of the problem.
Example:
For a set of 4 cities, the goal is to find the shortest path that visits each city once and returns
to the starting city. The number of possible routes grows exponentially as the number of
cities increases, making the problem computationally hard.
Intractability refers to problems that are so computationally difficult that solving them in a
reasonable amount of time (e.g., polynomial time) is infeasible as the size of the problem
increases. This is closely related to the concept of NP-completeness.
Intractable Problems: Problems that cannot be solved efficiently, and whose solutions
cannot be found in polynomial time, are often classified as NP-hard or NP-complete.
NP-complete problems are particularly important because if any NP-complete problem
can be solved in polynomial time, then all problems in NP can also be solved in
polynomial time (i.e., P = NP).
Cook-Levin Theorem: This theorem establishes that the Boolean satisfiability problem (SAT)
is NP-complete; SAT was the first problem proven NP-complete.
One of the main techniques for proving that a problem is NP-complete is reduction. In this
process, a known NP-complete problem is transformed into the problem in question in
polynomial time. If a known NP-complete problem can be reduced to the new problem, the
new problem is at least as hard (NP-hard); if the new problem is also in NP, it is
NP-complete.
Reduction Example: To prove that the TSP is NP-complete, one can reduce the
Hamiltonian cycle problem to the TSP. If one can transform any instance of the
Hamiltonian cycle problem into an instance of TSP in polynomial time, and if solving the
TSP leads to solving the Hamiltonian cycle problem, then TSP is NP-complete.
Since NP-complete problems like TSP are computationally intractable, various heuristic
methods are used to find approximate solutions in a reasonable amount of time. These
methods do not guarantee an optimal solution, but they often provide good enough
solutions in practice.
Bounds:
Upper bounds provide a limit on the best possible solution (for example, the
maximum distance a salesman could travel).
Lower bounds give a minimum value for the objective function, helping to assess
the quality of approximate solutions.
The Nearest-Neighbor Algorithm for the Traveling Salesman Problem is a simple, greedy
approach:
1. Start at an arbitrary city.
2. At each step, choose the closest city that has not yet been visited.
3. Repeat this process until all cities have been visited, then return to the starting city.
Example: If the cities are A, B , C , and D , and the distances between them are known, the
algorithm would start at an arbitrary city (say A), then move to the nearest city (say B ), then
to the next nearest unvisited city, and so on, until all cities are visited.
While the Nearest-Neighbor algorithm is fast, it does not guarantee the optimal solution and
can sometimes produce a solution that is far from the best possible route.
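The steps above can be sketched as follows; the distance table is a made-up symmetric example:

```python
def nearest_neighbor_tour(dist, start):
    """Greedy TSP heuristic: repeatedly move to the closest unvisited city."""
    tour = [start]
    unvisited = set(dist) - {start}
    while unvisited:
        here = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[here][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)             # return to the starting city
    return tour

# made-up symmetric distances between four cities
dist = {"A": {"B": 1, "C": 4, "D": 3},
        "B": {"A": 1, "C": 2, "D": 5},
        "C": {"A": 4, "B": 2, "D": 1},
        "D": {"A": 3, "B": 5, "C": 1}}
print(nearest_neighbor_tour(dist, "A"))  # ['A', 'B', 'C', 'D', 'A']
```

The heuristic runs in O(n^2) time but, as noted above, the tour it returns can be far from optimal on adversarial inputs.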
To show that the Traveling Salesman Problem (TSP) is NP-complete, we can reduce the
Hamiltonian cycle problem to the TSP in polynomial time:
TSP: Given a complete weighted graph G with distances between vertices, find the
shortest Hamiltonian cycle.
Reduction:
1. From a given instance of the Hamiltonian cycle problem, construct a TSP instance where
the graph is made complete.
2. Assign a distance of 1 for edges that exist in the original graph and a large value (e.g.,
infinity) for edges that do not exist in the original graph.
3. The original graph has a Hamiltonian cycle if and only if the TSP instance has a tour of
total length exactly n, the number of vertices, since such a tour can use only distance-1
edges.
4. Thus, solving the TSP will also solve the Hamiltonian cycle problem, proving the NP-
completeness of TSP.
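The reduction is easy to express in code; the vertex names and the choice of the large value M are illustrative (any value exceeding n works in place of infinity):

```python
def hamiltonian_to_tsp(vertices, edges):
    """Build a complete TSP instance: distance 1 for original edges,
    a large value M (standing in for infinity) for non-edges."""
    M = len(vertices) + 1          # any tour using an M-edge costs more than n
    edge_set = {frozenset(e) for e in edges}
    dist = {u: {} for u in vertices}
    for u in vertices:
        for v in vertices:
            if u != v:
                dist[u][v] = 1 if frozenset((u, v)) in edge_set else M
    return dist

# G has a Hamiltonian cycle iff the optimal tour has total length exactly n
verts = ["a", "b", "c", "d"]
cycle = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
dist = hamiltonian_to_tsp(verts, cycle)
print(dist["a"]["b"], dist["a"]["c"])  # 1 5
```

The construction runs in O(n^2) time, which is what makes it a valid polynomial-time reduction.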
9. Conclusion
In this lecture, we covered fundamental concepts such as Hamiltonian graphs, the Traveling
Salesman Problem (TSP), NP-completeness, and NP-hardness. We explored the theoretical
background of intractable problems, decision problems, and reductions, specifically from the
Hamiltonian Cycle problem to the TSP. Additionally, we discussed various heuristic
approaches for solving TSP, including the Nearest-Neighbor Algorithm and approximation
algorithms like Christofides' Algorithm. Understanding these concepts is crucial for tackling
complex computational problems in fields ranging from logistics to cryptography and
artificial intelligence.
A connected dominating set (CDS) is a dominating set that is also connected, meaning there
is a path between any two vertices in the set D . A connected dominating set is useful in
scenarios like network routing, where we want a small set of nodes that can efficiently
communicate with all nodes in the network, and the set itself must be connected to ensure
network connectivity.
Example:
Consider the cycle C5 with vertices v1, …, v5 and edges
{(v1, v2), (v2, v3), (v3, v4), (v4, v5), (v5, v1)}. The set D = {v1, v2, v3} forms a connected
dominating set: v4 is adjacent to v3, v5 is adjacent to v1, and D itself induces the connected
path v1, v2, v3.
Applications of CDS:
Wireless Networks: In mobile ad hoc networks (MANETs), CDS can be used to reduce the
communication overhead by selecting a small number of nodes that ensure network
connectivity.
Routing: In routing protocols, the CDS serves as a backbone that can be used to
minimize routing paths and reduce energy consumption in a wireless network.
Data Collection: In sensor networks, a connected dominating set can be used for
efficient data aggregation and communication among sensor nodes.
One of the key challenges in the study of CDS is finding the minimum connected
dominating set, that is, a CDS of the smallest possible size. The size of a CDS is an important
factor because it determines how many nodes need to be selected to maintain both
domination and connectivity.
The problem of finding the minimum CDS is NP-hard. This means that finding the optimal
solution in a reasonable time frame is computationally difficult for large networks. As such,
researchers focus on approximation algorithms that provide near-optimal solutions.
Example of Approximation:
For a given graph G, the Greedy Algorithm for finding a CDS works as follows:
1. Start with an empty set D and mark all vertices as uncovered.
2. Repeatedly add to D the vertex that covers the largest number of uncovered vertices (i.e., the
vertex with the largest degree in the subgraph induced by the uncovered vertices).
3. Continue this process until all vertices are covered.
4. Check if the set D is connected. If not, add additional vertices to ensure connectivity.
While this greedy algorithm doesn't guarantee the smallest CDS, it often provides a solution
that is close to optimal.
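The four steps above can be sketched as follows. This is an illustrative, unoptimized implementation; the connectivity repair in step 4 is a deliberately simple variant (it absorbs a single bridge vertex at a time, which suffices when such a vertex exists):

```python
def greedy_cds(n, edges):
    """Greedy CDS heuristic: cover greedily, then patch connectivity."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Steps 1-3: repeatedly pick the vertex covering the most uncovered vertices.
    D, covered = set(), set()
    while len(covered) < n:
        best = max(range(n), key=lambda v: len(({v} | adj[v]) - covered))
        D.add(best)
        covered |= {best} | adj[best]
    # Step 4: while D is split into components, absorb a vertex outside D
    # that touches at least two components.
    def components():
        comps, seen = [], set()
        for s in D:
            if s in seen:
                continue
            comp, stack = {s}, [s]
            while stack:
                u = stack.pop()
                for w in adj[u] & D:
                    if w not in comp:
                        comp.add(w)
                        stack.append(w)
            seen |= comp
            comps.append(comp)
        return comps
    comps = components()
    while len(comps) > 1:
        bridge = next((v for v in range(n) if v not in D
                       and sum(bool(adj[v] & c) for c in comps) >= 2), None)
        if bridge is None:
            break  # no single-vertex bridge; a path-based repair would be needed
        D.add(bridge)
        comps = components()
    return D
```

On the five-cycle with vertices 0-4, this returns the connected dominating set {0, 1, 2}.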
A distributed algorithm is one in which each node in the graph can compute its part of the
solution without needing centralized control. In the context of a connected dominating set,
a distributed algorithm allows nodes to independently decide whether to be included in the
CDS based on local information and communication with neighboring nodes.
We will focus on a distributed algorithm that computes a small connected dominating set
by ensuring that the nodes selected for the CDS can form a connected subgraph.
1. Initialization:
Each vertex v ∈ V initially assumes it is not in the CDS and does not know the
global structure of the graph.
2. Domination:
Nodes that are not part of the CDS but are adjacent to a node in the CDS
propagate this information to their neighbors, ensuring that they are dominated.
3. Ensuring Connectivity:
Once a set of nodes has been selected to form the dominating set, the algorithm
ensures that the set is connected. This is done by checking the local structure of the
network and adding edges between the nodes in the CDS if necessary.
If a node’s addition would make the CDS disconnected, it is removed from the set.
4. Termination:
The algorithm terminates when all nodes have been either dominated or included in
the connected dominating set, and connectivity is guaranteed.
Example:
In a simple line graph, each vertex decides using only local information. For
example, a node in the middle may add itself to the CDS if none of its neighbors is
already in the CDS. The algorithm iterates to ensure that all vertices are dominated
while maintaining connectivity among the selected nodes.
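The line-graph example can be simulated sequentially as a stand-in for the parallel rounds (a toy sketch; the function name and the two-phase structure are assumptions, with ties broken by node id as unique identifiers would allow in practice):

```python
def line_graph_cds(n):
    """Toy simulation of the local rule on a path 0 - 1 - ... - (n-1)."""
    in_cds = [False] * n

    def neighbors(v):
        return [u for u in (v - 1, v + 1) if 0 <= u < n]

    # Phase 1: a node joins if none of its neighbors has joined yet.
    for v in range(n):
        if not any(in_cds[u] for u in neighbors(v)):
            in_cds[v] = True
    # Phase 2: a dominated node joins if it bridges two CDS members,
    # gluing the selected nodes into one connected set.
    for v in range(n):
        if not in_cds[v] and sum(in_cds[u] for u in neighbors(v)) >= 2:
            in_cds[v] = True
    return {v for v in range(n) if in_cds[v]}
```

On a 5-vertex path this simple rule ends up selecting every vertex, a valid but non-minimal CDS (the minimum here is {1, 2, 3}), which illustrates the approximation trade-off discussed below.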
Key properties of this distributed approach:
Locality: Nodes only need to communicate with their immediate neighbors to make
decisions.
Scalability: The algorithm works efficiently even for large graphs, as the decision-making
process is distributed and localized.
Convergence: The algorithm converges when all nodes are either in the CDS or
dominated, ensuring that the set is both dominating and connected.
The distributed algorithm may not always produce the smallest connected dominating set,
but it aims to minimize the size while ensuring that the algorithm runs efficiently and in a
decentralized manner. The quality of the approximation depends on the specific strategy
used for selecting nodes and ensuring connectivity.
Approximation Ratio: The algorithm may not guarantee the optimal size of the CDS but
typically provides a solution within a constant factor of the optimal size. For example,
some algorithms are known to find a CDS that is at most 3 times the size of the smallest
CDS.
Efficiency: The distributed algorithm should run in polynomial time with respect to the
number of nodes and edges, which is important for practical applications in large-scale
networks.
5. Applications of Distributed CDS Algorithms
Mobile Ad-Hoc Networks (MANETs): In MANETs, nodes are distributed and may be
mobile. A distributed CDS algorithm helps in selecting a small number of nodes that can
act as routers or relays, ensuring network connectivity with minimal communication
overhead.
Wireless Sensor Networks: Distributed CDS algorithms are used for efficient data
routing and aggregation in wireless sensor networks, where the nodes must be energy-
efficient while ensuring that the network remains connected and data can flow across
the network.
6. Conclusion
In this final lecture, we covered the concept of the Connected Dominating Set (CDS), its
applications in network design, and the challenges of finding the minimum CDS. We
explored a distributed algorithm for computing a connected dominating set, focusing on its
efficiency and approximation. While finding the minimum CDS is NP-hard, the distributed
algorithm provides a practical solution for large-scale networks. Understanding CDS and
distributed algorithms is crucial for solving real-world problems in areas such as wireless
communication, distributed computing, and sensor networks.
This lecture concludes the course, and the concepts learned here form the foundation for
tackling complex graph-theoretic problems in various applied domains.