3 Greedy
The greedy method
• An optimization problem is one in which we want to find,
not just a solution, but the best solution
– A “greedy algorithm” works well for many, but not all,
optimization problems
Feasible vs. optimal solution
• The greedy method solves a problem by making a sequence of
decisions.
• Decisions are made one by one in some order.
• Each decision is made using a greedy criterion.
• A decision, once made, is (usually) not changed later.
• Given n inputs we are required to obtain a subset that satisfies
some constraints
–Any subset that satisfies the given constraints is called a
feasible solution
–A feasible solution that either maximizes or minimizes a given
objective function is called an optimal solution.
Greedy algorithm
To apply a greedy algorithm:
• Decide optimization measure (maximization of profit or
minimization of cost)
– Sort the input in increasing or decreasing order based on the
optimization measure selected for the given problem
Greedy Choice Property
• Greedy algorithm always makes the choice that looks
best at the moment
– With the hope that a locally optimal choice will
lead to a globally optimal solution
The Problem of Making Coin Change
•Assume the coin denominations are: 50, 25, 10, 5, and 1.
•Problem: Make change for a given amount using the smallest
possible number of coins
•Example: make a change for x = 92.
–Mathematically this is written as
x = 50a + 25b + 10c + 5d + 1e
so that a + b + c + d + e is minimized and a, b, c, d, e ≥ 0.
•Greedy algorithm for coin changing
–Order coins in decreasing order
–Select coins one at a time (divide x by each denomination)
–Solution: a = 1, b = 1, c = 1, d = 1, e = 2.
Algorithm
procedure greedy(A, n)
    Solution ← { }    // set that will hold the solution
    FOR i = 1 to n DO
        x = SELECT(A)
        IF FEASIBLE(Solution, x) THEN
            Solution = UNION(Solution, x)
        END IF
    END FOR
    RETURN Solution
end procedure
•SELECT function:
–selects the most promising candidate from A[ ] and removes it
from the list.
•FEASIBLE function:
–a Boolean-valued function that determines whether x can be
included in the solution vector.
•UNION function:
–combines x with the solution
Algorithm for Coin Change
• Make change for n units using the least possible number of
coins.
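The procedure described above (sort denominations in decreasing order, take as many of each coin as fit) can be sketched in Python. The function name and the dictionary return format are illustrative assumptions, not from the slides:

```python
def greedy_change(x, denominations):
    """Return a dict mapping denomination -> count, chosen greedily."""
    counts = {}
    for d in sorted(denominations, reverse=True):  # decreasing order
        counts[d], x = divmod(x, d)                # take as many d's as fit
    return counts

# Example from the slides: change for 92 with coins 50, 25, 10, 5, 1
result = greedy_change(92, [50, 25, 10, 5, 1])
# -> {50: 1, 25: 1, 10: 1, 5: 1, 1: 2}, i.e. a=1, b=1, c=1, d=1, e=2
```

This greedy choice is optimal for this particular coin system, but not for every coin system, as the kron example on the next slide shows.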
A failure of the greedy algorithm
◼ In some (fictional) monetary system, “krons” come in
1 kron, 7 kron, and 10 kron coins
◼ Using a greedy algorithm to count out 15 krons, we
would get
◼ A 10 kron piece and five 1 kron pieces, for a total of six coins
◼ The optimal solution, however, is only three coins: two 7 kron
pieces and one 1 kron piece
Minimum Spanning Trees
Problem: Laying Telephone Wire
(figure: connection points wired to a central office; omitted)
Minimum Spanning Tree (MST)
• Assume we have an undirected graph G = (V,E) with weights
assigned to edges.
•The objective is to “use the least-cost set of edges of the given
graph to connect everything together”. How?
•A minimum spanning tree is a least-cost subset of the edges of a
graph that connects all the nodes
• An MST is a subgraph of an undirected weighted graph G, such that:
•It is a tree (i.e., it is acyclic)
•It covers all the vertices V
•It contains |V| - 1 edges
•The total cost associated with tree edges is the minimum among
all possible spanning trees
Applications of MST
• Network design: road planning, hydraulic, electric, and
telecommunication networks, etc.
How can we generate a MST?
•A MST is a least-cost subset of the edges of a graph that connects all
the nodes
•A greedy method to obtain a minimum-cost spanning tree builds this
tree edge by edge.
–The next edge to include is chosen according to some optimization
criterion.
•Criteria: to choose an edge that results in a minimum increase in the
sum of the costs of the edges so far included.
•General procedure:
– Start by picking any node and adding it to the tree
– Repeatedly: pick any least-cost edge from a node in
the tree to a node not in the tree, and add the edge
and new node to the tree
– Stop when all nodes have been added to the tree
(figure: example weighted graph omitted)
•Two techniques: Prim’s and Kruskal’s algorithms
Kruskal’s algorithm
• Kruskal’s algorithm always tries the lowest-cost remaining edge
• It considers the edges of the graph in increasing order of cost.
• In this approach, the set T of edges selected so far for the spanning
tree may not be a tree at all stages of the algorithm, but it is always
possible to complete T into a tree.
• Create a forest of trees from the vertices
• Repeatedly merge trees by adding “safe edges” until only one tree
remains. A “safe edge” is an edge of minimum weight which does
not create a cycle
• Example:
(figure: weighted graph on vertices a–e omitted)
Initially there is a forest:
V = {a}, {b}, {c}, {d}, {e}
Edges in increasing order of cost:
E = {(a,d), (c,d), (d,e), (a,c), (b,e), (c,e), (b,d), (a,b)}
Cont..
(figures: intermediate steps of building the MST omitted)
• MST cost = 2 + 2 + 3 + 3 + 7 = 17
Kruskal Algorithm
procedure kruskalMST(G, cost, n, T)
    i = mincost = 0
    while i < n - 1 do
        delete a minimum cost edge (u,v)
        j = find(u)
        k = find(v)
        if j ≠ k then
            i = i + 1
            T(i,1) = u; T(i,2) = v
            mincost = mincost + cost(u,v)
            union(j,k)
        end if
    end while
    if i ≠ n-1 then return “no spanning tree”
    return mincost
end procedure
•After each iteration, every tree in the forest is a MST of the
vertices it connects
•Algorithm terminates when all vertices are connected into one tree
•Running time is bounded by sorting (or findMin): O(n²)
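The find/union calls in the pseudocode are typically realized with a union-find (disjoint-set) structure. Below is a minimal runnable Python sketch of the same algorithm; the (cost, u, v) edge-list format and the function name are illustrative assumptions:

```python
def kruskal_mst(n, edges):
    """n: number of vertices (labeled 0..n-1); edges: list of (cost, u, v).
    Returns (mincost, tree_edges), or (None, []) if no spanning tree exists."""
    parent = list(range(n))          # union-find forest: parent[x] == x at a root

    def find(x):
        while parent[x] != x:        # walk up to the root
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x

    tree, mincost = [], 0
    for cost, u, v in sorted(edges): # edges in increasing order of cost
        j, k = find(u), find(v)
        if j != k:                   # safe edge: it joins two different trees
            parent[j] = k            # union the two trees
            tree.append((u, v))
            mincost += cost
    if len(tree) != n - 1:           # the graph was not connected
        return None, []
    return mincost, tree
```

Sorting the edge list dominates the running time of this sketch, giving O(|E| log |E|).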
Correctness of Kruskal
• If the algorithm is correct, it halts with the right answer, i.e., an
optimal solution.
• Prove by contradiction:
• Suppose the tree T it constructs is not a minimum spanning tree;
comparing T with an actual MST, edges can be exchanged without
increasing cost, contradicting the assumption.
Prim’s algorithm
•Prim’s: Always takes the lowest-cost edge between nodes in the
spanning tree and nodes not yet in the spanning tree
•If A is the set of edges selected so far, then A forms a tree.
–The next edge (u,v) to be included in A is a minimum-cost edge
not in A such that A ∪ {(u,v)} is also a tree, where u is in the tree
and v is not.
•Property: at each step, we add the edge (u,v) whose weight is
minimum among all candidate edges
–The spanning tree grows by one new node and edge at each
iteration.
•Each step maintains a minimum spanning tree of the vertices that
have been included thus far
Prim’s algorithm
•Example: find the minimum spanning tree using Prim’s
algorithm
(figures: a weighted graph and its minimum spanning tree omitted)
Prim’s Algorithm
procedure primMST(G, cost, n, T)
    Pick vertex 1 to be the root of the spanning tree T
    mincost = 0
    for i = 2 to n do near(i) = 1
    near(1) = 0    // near(v) = 0 marks v as already in the tree
    for i = 1 to n-1 do
        find j such that near(j) ≠ 0 and cost(j, near(j)) is min
        T(i,1) = j; T(i,2) = near(j)
        mincost = mincost + cost(j, near(j))
        near(j) = 0
        for k = 1 to n do
            if near(k) ≠ 0 and cost(k, near(k)) > cost(k, j) then
                near(k) = j
        end for
    end for
    return mincost
end procedure
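The primMST pseudocode translates almost directly to Python. This is a minimal sketch under a few assumptions: an n × n cost-matrix format with float('inf') for missing edges, and a -1 sentinel instead of near(v) = 0, since 0 is a valid vertex index here:

```python
def prim_mst(cost):
    """cost: n x n symmetric matrix with float('inf') where no edge exists.
    Returns (mincost, tree_edges), growing the tree from vertex 0."""
    n = len(cost)
    near = [0] * n              # near[v]: tree vertex currently nearest to v
    near[0] = -1                # -1 marks vertices already in the tree
    tree, mincost = [], 0
    for _ in range(n - 1):
        # pick the outside vertex j with the cheapest edge into the tree
        j = min((v for v in range(n) if near[v] != -1),
                key=lambda v: cost[v][near[v]])
        tree.append((j, near[j]))
        mincost += cost[j][near[j]]
        near[j] = -1
        for k in range(n):      # update nearest-tree-vertex records
            if near[k] != -1 and cost[k][near[k]] > cost[k][j]:
                near[k] = j
    return mincost, tree
```

The linear scan for the minimum gives O(n²) time, matching the pseudocode's nested loops.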
Correctness of Prim’s
• If the algorithm is correct, it halts with the right answer, i.e., an
optimal solution.
• Prove by contradiction:
• Suppose the tree it builds is not minimum; as with Kruskal’s
algorithm, an edge-exchange argument against a true MST yields a
contradiction.
Single source shortest path
•A path problem asks questions such as: given a weighted directed graph
G = (V,E) with a cost associated with every edge,
–Is there a path from vi to vj?
–If there is more than one path from A to B, which is the shortest?
•The length of a path is defined as the sum of the weights of the edges on
that path.
–The starting vertex of the path is referred to as the source
–The last vertex is the destination
•To formulate a greedy approach that generates the shortest paths from a
source vertex v0, think of:
–A multi-stage solution: build the shortest paths one by one
–An optimization measure: minimize the sum of all the shortest paths.
For this measure to be minimized, each individual path must be of
minimum length
• Use the sum of the lengths of all paths so far generated.
Dijkstra’s shortest-path algorithm
• Dijkstra’s algorithm finds the shortest paths from a given node to all
other nodes in a graph
–Always takes the shortest edge connecting a known node to an
unknown node
• Initially,
–Mark the given node as known (path length is zero)
–For each out-edge, set the distance in each neighboring node
equal to the cost (length) of the out-edge, and set its predecessor
to the initially given node
• Repeatedly (until all nodes are known),
–Find an unknown node with the smallest distance
–Mark that node as known
–For each node adjacent to the newly known node, check whether its
estimated distance can be reduced (distance to the known node plus
cost of the out-edge)
• If so, also reset that neighbor’s predecessor to the new node
Dijkstra’s shortest-path algorithm
PROCEDURE shortestPath(v, COST, DIST, n)
    // v is the source vertex
    FOR i = 1 to n DO          // initialize S and DIST
        S(i) = 0; DIST(i) = COST(v,i)
    END FOR
    S(v) = 1; DIST(v) = 0      // put the source in S
    FOR num = 2 to n-1 DO
        choose vertex u such that S(u) = 0 and DIST(u) is min
        S(u) = 1               // put u in S
        FOR each w adjacent to u with S(w) = 0 DO
            IF DIST(w) > DIST(u) + COST(u,w) THEN
                DIST(w) = DIST(u) + COST(u,w)
            END IF
        END FOR
    END FOR
END PROCEDURE
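The procedure above can be written as runnable Python. This sketch uses a binary heap in place of the linear scan for the minimum-distance unknown vertex; the adjacency-dict graph format is an assumption for illustration:

```python
import heapq

def dijkstra(graph, source):
    """graph: {u: {v: cost, ...}, ...} with non-negative edge costs.
    Returns a dict of shortest distances from source to every vertex."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    known = set()                         # the set S of finalized vertices
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in known:
            continue                      # stale heap entry, skip it
        known.add(u)                      # u's distance is now final
        for w, cost in graph[u].items():  # relax each unknown neighbor
            if w not in known and dist[u] + cost < dist[w]:
                dist[w] = dist[u] + cost  # shorter path through u found
                heapq.heappush(heap, (dist[w], w))
    return dist
```

With the heap, the running time is O((|V| + |E|) log |V|) rather than the O(n²) of the linear scan.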
Scheduling
• Greedy scheduling is a heuristic approach to scheduling
problems, where the algorithm makes the locally optimal choice at
each step, hoping to achieve a globally optimal solution.
• The basic idea of the greedy algorithm scheduling is to sort the jobs by
some criteria and assign the jobs to the machines one by one, starting
with the job that has the earliest deadline or the shortest processing
time.
• The algorithm then assigns the job to the machine that is available at
that time.
• For example, let's say we have three jobs J1, J2, and J3, with
processing times p1=2, p2=5, and p3=7, and deadlines d1=3, d2=5,
and d3=8, respectively. We also have two machines M1 and M2, and
we want to assign the jobs to the machines in such a way that the
maximum lateness is minimized.
Cont..
• Greedy scheduling can be applied in the following way:
• Step 1: Sort the jobs by the earliest deadline. In this case, the order is
J1, J2, and J3.
• Step 2: Assign job J1 to machine M1, since it has the earliest
deadline. The job completes at time t=2.
• Step 3: Assign job J2 to machine M2, which is idle, since J2 has the
earliest deadline among the remaining jobs. The job completes at
time t=5.
• Step 4: Assign job J3 to machine M1, which becomes free at t=2.
The job completes at time t=2+7=9.
• The lateness of a job is the difference between its completion time
and its deadline (or zero if it finishes on time). Here the lateness of J1
is 0, of J2 is 0, and of J3 is 9-8=1, so the maximum lateness is 1,
achieved by job J3.
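The procedure described above (earliest deadline first, each job placed on the machine that becomes free soonest) can be sketched as follows; the (name, processing_time, deadline) job format is an assumption:

```python
def edf_schedule(jobs, num_machines):
    """jobs: list of (name, processing_time, deadline).
    Returns (max_lateness, completion_times dict)."""
    free_at = [0] * num_machines           # when each machine becomes free
    completion, max_late = {}, 0
    for name, p, d in sorted(jobs, key=lambda j: j[2]):  # earliest deadline
        m = free_at.index(min(free_at))    # machine that frees up first
        free_at[m] += p                    # run the job on machine m
        completion[name] = free_at[m]
        max_late = max(max_late, free_at[m] - d)  # lateness, clipped at 0
    return max_late, completion
```

On the slide's three-job, two-machine instance this yields a maximum lateness of 1 (job J3, finishing at t=9 against deadline 8).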
Cont..
• Another example where greedy scheduling can be applied is
the interval scheduling problem. Suppose we have n tasks,
each with a start time and an end time. We want to find the maximum
number of non-overlapping tasks that can be scheduled.
• Greedy scheduling can be applied in the following way:
• Step 1: Sort the tasks by their end time.
• Step 2: Select the task with the earliest end time, and schedule it.
• Step 3: For each remaining task, if its start time is no earlier than the
end time of the previously scheduled task, then schedule it.
• Repeat steps 2 and 3 until no more tasks remain.
Cont..
• For example, suppose we have the following four tasks:
• Task 1: (1, 3)
• Task 2: (2, 4)
• Task 3: (3, 6)
• Task 4: (5, 7)
• Greedy scheduling can be applied in the following way:
• Step 1: Sort the tasks by their end time: Task 1, Task 2, Task 3, Task 4.
• Step 2: Select Task 1 and schedule it.
• Step 3: Task 2 cannot be scheduled since it overlaps with Task 1.
• Step 4: Task 3 can be scheduled since its start time is no earlier than the end time of Task 1.
• Step 5: Task 4 cannot be scheduled since it overlaps with Task 3.
• Thus, the maximum number of non-overlapping tasks that can be scheduled is 2,
which are Task 1 and Task 3.
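The steps above reduce to a single pass once the tasks are sorted by end time; a minimal Python sketch (the (start, end) tuple format is an assumption):

```python
def max_non_overlapping(tasks):
    """tasks: list of (start, end). Returns the greedily selected tasks."""
    selected, last_end = [], float('-inf')
    for start, end in sorted(tasks, key=lambda t: t[1]):  # by end time
        if start >= last_end:        # does not overlap the last selection
            selected.append((start, end))
            last_end = end
    return selected

tasks = [(1, 3), (2, 4), (3, 6), (5, 7)]
# selects (1, 3) and (3, 6): two non-overlapping tasks, as in the example
```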