
Module 2| BCS401|Anooplal KS

Module 2
Brute force Approaches:
Traveling Salesman Problem:
 Exhaustive search is simply a brute-force approach to combinatorial problems. It suggests
generating each and every element of the problem domain, selecting those of them that satisfy
all the constraints, and then finding a desired element. Its implementation typically requires an
algorithm for generating certain combinatorial objects.
 In layman’s terms, the problem asks us to find the shortest tour through a given set of n cities that
visits each city exactly once before returning to the city where it started. It is modeled by a
weighted graph, with the graph’s vertices representing the cities and the edge weights
specifying the distances. The problem can then be stated as the problem of finding the shortest
Hamiltonian circuit of the graph.
 A Hamiltonian circuit is defined as a cycle that passes through all the vertices of the graph exactly
once. The tour is found by generating all permutations of the n − 1 intermediate cities, computing
the tour lengths, and finding the shortest among them.

[Figure: complete weighted graph on the four cities a, b, c, d with edge weights
a–b = 2, a–c = 5, a–d = 7, b–c = 8, b–d = 3, c–d = 1]

Tour                                     Length

a ---> b ---> c ---> d ---> a    l = 2 + 8 + 1 + 7 = 18
a ---> b ---> d ---> c ---> a    l = 2 + 3 + 1 + 5 = 11   (optimal)
a ---> c ---> b ---> d ---> a    l = 5 + 8 + 3 + 7 = 23
a ---> c ---> d ---> b ---> a    l = 5 + 1 + 3 + 2 = 11   (optimal)
a ---> d ---> b ---> c ---> a    l = 7 + 3 + 8 + 5 = 23
a ---> d ---> c ---> b ---> a    l = 7 + 1 + 8 + 2 = 18
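This exhaustive search is easy to program. Below is a minimal sketch in Python (the distance matrix encodes the example graph above; the function name tsp_brute_force is illustrative):

from itertools import permutations

# Distance matrix of the example graph; cities a, b, c, d are indices 0..3.
dist = [[0, 2, 5, 7],
        [2, 0, 8, 3],
        [5, 8, 0, 1],
        [7, 3, 1, 0]]

def tsp_brute_force(dist):
    n = len(dist)
    start = 0
    best_tour, best_len = None, float("inf")
    # Generate every permutation of the n - 1 intermediate cities.
    for perm in permutations(range(1, n)):
        tour = (start,) + perm + (start,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

print(tsp_brute_force(dist))  # ((0, 1, 3, 2, 0), 11), i.e. a -> b -> d -> c -> a of length 11

Since all (n − 1)! permutations are generated, the running time grows factorially with the number of cities.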


Knapsack Problem
 Given n items of known weights w1, w2, . . . , wn and values v1, v2, . . . , vn and a knapsack of
capacity W, find the most valuable subset of the items that fit into the knapsack.
 The exhaustive-search approach to this problem leads to generating all the subsets of the set of
n items given, computing the total weight of each subset in order to identify feasible subsets
(i.e., the ones with the total weight not exceeding the knapsack capacity), and finding a subset
of the largest value among them. These two problems are the best-known examples of so-called
NP-hard problems; no polynomial-time algorithm is known for any NP-hard problem.
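A minimal exhaustive-search sketch for the knapsack problem in Python (the item data in the last line is only an illustrative instance, not taken from the notes):

from itertools import combinations

def knapsack_brute_force(weights, values, capacity):
    # Try every subset of the n items and keep the most valuable feasible one.
    n = len(weights)
    best_value, best_subset = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            total_weight = sum(weights[i] for i in subset)
            if total_weight <= capacity:              # feasible subset
                total_value = sum(values[i] for i in subset)
                if total_value > best_value:
                    best_value, best_subset = total_value, subset
    return best_value, best_subset

print(knapsack_brute_force([7, 3, 4, 5], [42, 12, 40, 25], 10))  # (65, (2, 3))

Because all 2^n subsets are generated, this approach is impractical for large n.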


Decrease-and-Conquer
The decrease-and-conquer technique is an approach to solving a problem by
1. Reducing a problem instance to a smaller instance of the same problem
2. Solving the smaller instance
3. Extending the solution of the smaller instance to obtain a solution to the original problem
There are three major variations of decrease-and-conquer:
 decrease by a constant
 decrease by a constant factor
 variable size decrease

Decrease-by-a-constant
 In the decrease-by-a-constant variation, the size of an instance is reduced by the same constant
on each iteration of the algorithm.
 Generally the constant is equal to 1
 The relationship between a solution to an instance of size n and an instance of size n − 1 is
given by the obvious formula a^n = a^(n−1) * a. Example: a^10 = a^9 * a.

Application of decrease by constant


 Insertion sort
 Graph searching algorithms

Decrease by a constant factor


 It reduces a problem instance by the same constant factor on each iteration of the algorithm.
 In most applications, this constant factor is equal to two, which means that the size of the
instance is reduced by half on each iteration.


Example: a^10 = a^5 * a^5

• a^n = a^(n/2) * a^(n/2) if n is even and positive

• a^n = a^((n−1)/2) * a^((n−1)/2) * a if n is odd and greater than 1

• a^n = a if n = 1
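A minimal recursive sketch of this decrease-by-a-constant-factor computation of a^n in Python (the function name power is illustrative):

def power(a, n):
    # Computes a^n for n >= 1 by halving the exponent at each step.
    if n == 1:
        return a
    half = power(a, n // 2)        # one recursive call on an instance of half the size
    if n % 2 == 0:                 # n even:  a^n = a^(n/2) * a^(n/2)
        return half * half
    return half * half * a         # n odd:   a^n = a^((n-1)/2) * a^((n-1)/2) * a

print(power(2, 10))  # 1024

Roughly log2 n recursive calls are made, compared with the n − 1 multiplications of the decrease-by-one formula.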

variable-size-decrease
• In the variable-size-decrease variety of decrease-and-conquer, the size-reduction pattern varies
from one iteration of an algorithm to another.
• Euclid’s algorithm for computing the greatest common divisor provides a good example of
such a situation. Recall that this algorithm is based on the formula gcd(m, n) = gcd(n, m mod n).
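A minimal iterative sketch of Euclid's algorithm in Python; note that how much the instance size decreases varies from one iteration to the next, depending on the values involved:

def gcd(m, n):
    # gcd(m, n) = gcd(n, m mod n); stop when the second argument becomes 0.
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(60, 24))  # 12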

Insertion Sort
 It is a decrease-by-one technique for sorting an array A[0..n − 1].
ALGORITHM InsertionSort(A[0..n − 1])
//Sorts a given array by insertion sort
//Input: An array A[0..n − 1] of n orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order

for i ← 1 to n − 1 do
    v ← A[i]
    j ← i − 1
    while j ≥ 0 and A[j] > v do
        A[j + 1] ← A[j]
        j ← j − 1
    A[j + 1] ← v


 The basic operation of the algorithm is the key comparison A[j ] > v.
 The number of key comparisons in this algorithm obviously depends on the nature of the
input. In the worst case, A[j] > v is executed the largest number of times, i.e., for every
j = i − 1, . . . , 0.
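The same algorithm as a runnable sketch in Python (the sample input in the last line is illustrative):

def insertion_sort(a):
    # Decrease-by-one: insert a[i] into the already sorted part a[0..i-1].
    for i in range(1, len(a)):
        v = a[i]
        j = i - 1
        while j >= 0 and a[j] > v:   # the basic operation: key comparison a[j] > v
            a[j + 1] = a[j]          # shift larger elements one position to the right
            j -= 1
        a[j + 1] = v
    return a

print(insertion_sort([89, 45, 68, 90, 29, 34, 17]))  # [17, 29, 34, 45, 68, 89, 90]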

Topological Sorting
 A directed graph, or digraph for short, is a graph with directions specified for all its edges.
 There are only two notable differences between undirected and directed graphs in representing
them:
(1) the adjacency matrix of a directed graph does not have to be symmetric;
(2) an edge in a directed graph has just one (not two) corresponding nodes in the digraph’s adjacency
lists.
 A directed cycle in a digraph is a sequence of three or more of its vertices that starts and ends
with the same vertex and in which, for every pair of consecutive vertices, there is an edge from
the first vertex to the second.
 If a DFS forest of a digraph has no back edges, the digraph is a dag, an acronym for directed
acyclic graph.
 Consider a set of five required courses {C1, C2, C3, C4, C5} a part-time student has to take in
some degree program. The courses can be taken in any order as long as the following course
prerequisites are met: C1 and C2 have no prerequisites, C3 requires C1 and C2, C4 requires
C3, and C5 requires C3 and C4. The student can take only one course per term.


 Topological Sorting is the process in which the main goal is to find an ordering of vertices in
a directed acyclic graph (DAG) that places vertex u before vertex v for any directed edge (u,
v).
 Topological sorting is possible if and only if the graph is a DAG

Depth-first search:
 The first algorithm is a simple application of depth-first search:
 Perform a DFS traversal and note the order in which vertices become dead-ends (i.e., popped
off the traversal stack).
 Reversing this order yields a solution to the topological sorting problem, provided, of course,
no back edge has been encountered during the traversal.
 If a back edge has been encountered, the digraph is not a dag, and topological sorting of its
vertices is impossible.
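A minimal sketch of the DFS-based method in Python, applied to the course-prerequisite example above (the dictionary representation of the digraph and the function name are illustrative):

def topological_sort_dfs(graph):
    # graph: vertex -> list of vertices it points to.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}
    finished = []                       # vertices in the order they become dead ends

    def dfs(u):
        color[u] = GRAY
        for w in graph[u]:
            if color[w] == GRAY:        # back edge: the digraph is not a dag
                raise ValueError("cycle found; topological sorting is impossible")
            if color[w] == WHITE:
                dfs(w)
        color[u] = BLACK
        finished.append(u)              # u is popped off the traversal stack

    for v in graph:
        if color[v] == WHITE:
            dfs(v)
    return list(reversed(finished))     # reverse the dead-end order

courses = {"C1": ["C3"], "C2": ["C3"], "C3": ["C4", "C5"], "C4": ["C5"], "C5": []}
print(topological_sort_dfs(courses))    # ['C2', 'C1', 'C3', 'C4', 'C5']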

Source removal method


The second algorithm is based on a direct implementation of the decrease-(by one)-and-conquer
technique:
 Repeatedly, identify in a remaining digraph a source, which is a vertex with no incoming edges,
and delete it along with all the edges outgoing from it.
 (If there are several sources, break the tie arbitrarily. If there are none, stop because the problem
cannot be solved—see Problem 6a in this section’s exercises.) The order in which the vertices
are deleted yields a solution to the topological sorting problem.
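A minimal sketch of the source-removal method in Python, using the same course digraph (the function name is illustrative):

from collections import deque

def topological_sort_source_removal(graph):
    # Count the incoming edges of every vertex.
    indegree = {v: 0 for v in graph}
    for u in graph:
        for w in graph[u]:
            indegree[w] += 1
    sources = deque(v for v in graph if indegree[v] == 0)
    order = []
    while sources:
        u = sources.popleft()           # pick any remaining source
        order.append(u)
        for w in graph[u]:              # delete u together with its outgoing edges
            indegree[w] -= 1
            if indegree[w] == 0:
                sources.append(w)
    if len(order) != len(graph):        # vertices remain but none of them is a source
        raise ValueError("cycle found; topological sorting is impossible")
    return order

courses = {"C1": ["C3"], "C2": ["C3"], "C3": ["C4", "C5"], "C4": ["C5"], "C5": []}
print(topological_sort_source_removal(courses))  # ['C1', 'C2', 'C3', 'C4', 'C5']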


Divide-and-Conquer
 It is a top-down approach to designing algorithms in which the given problem is divided into
smaller subproblems; the subproblems are solved, and their solutions are then combined to obtain
the solution of the original problem.
 A problem is divided into several subproblems of the same type, ideally of about equal size.
 The subproblems are solved
 If necessary, the solutions to the subproblems are combined to get a solution to the original
problem.


Merge sort
Merge sort is a perfect example of a successful application of the divide-and-conquer technique. It sorts
a given array A[0..n − 1] by dividing it into two halves A[0..⌊n/2⌋ − 1] and A[⌊n/2⌋..n − 1], sorting each of
them recursively, and then merging the two smaller sorted arrays into a single sorted one.

The merging of two sorted arrays can be done as follows. Two pointers (array indices) are initialized to
point to the first elements of the arrays being merged. The elements pointed to are compared, and the
smaller of them is added to a new array being constructed; after that, the index of the smaller element
is incremented to point to its immediate successor in the array it was copied from. This operation is
repeated until one of the two given arrays is exhausted, and then the remaining elements of the other
array are copied to the end of the new array.

The operation of the algorithm can be traced on the list 8, 3, 2, 9, 7, 1, 5, 4 (see the sketch below).
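A minimal sketch of mergesort in Python; it returns a new sorted list instead of sorting in place, and the function names are illustrative:

def merge_sort(a):
    # Divide the list into two halves, sort each recursively, then merge.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

def merge(b, c):
    # Merge two sorted lists into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:                 # key comparison
            merged.append(b[i])
            i += 1
        else:
            merged.append(c[j])
            j += 1
    merged.extend(b[i:])                 # copy whatever remains in either list
    merged.extend(c[j:])
    return merged

print(merge_sort([8, 3, 2, 9, 7, 1, 5, 4]))  # [1, 2, 3, 4, 5, 7, 8, 9]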


Assuming for simplicity that n is a power of 2, the recurrence relation for the number of key
comparisons C(n) is
C(n) = 2C(n/2) + Cmerge(n) for n > 1,  C(1) = 0.
Let us analyse Cmerge(n), the number of key comparisons performed during the merging stage.
At each step, exactly one comparison is made, after which the total number of elements in the two arrays
still needing to be processed is reduced by 1. In the worst case, neither of the two arrays becomes empty
before the other one contains just one element (e.g., smaller elements may come from the alternating
arrays). Therefore, for the worst case, Cmerge(n) = n − 1, and we have the recurrence

Cworst(n) = 2Cworst(n/2) + n − 1 for n > 1,  Cworst(1) = 0,

whose solution is Cworst(n) = n log2 n − n + 1 ∈ Θ(n log n).

For large n, the number of comparisons made by this algorithm in the average case turns out to be about
0.25n less than in the worst case and hence is also in Θ(n log n).
The algorithm can also be implemented bottom up, by merging pairs of the array’s elements, then
merging the sorted pairs, and so on.

QUICK SORT
Quicksort is the other important sorting algorithm based on the divide-and-conquer approach. Unlike
mergesort, which divides its input elements according to their position in the array, quicksort divides
them according to their value. A partition is an arrangement of the array’s
elements so that all the elements to the left of some element A[s] are less than or equal to A[s], and all
the elements to the right of A[s] are greater than or equal to it:


Sort the two subarrays to the left and to the right of A[s] independently. No work is required to combine
the solutions to the subproblems.
Quicksort is invoked as Quicksort(A[0..n − 1]), with HoarePartition used as the partition algorithm; a
sketch is given below.
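A minimal in-place sketch of quicksort in Python, using a Hoare-style partition around the first element of the subarray (function names and the sample data are illustrative):

def quicksort(a, l=0, r=None):
    # Sorts the subarray a[l..r] in place; no work is needed to combine the halves.
    if r is None:
        r = len(a) - 1
    if l < r:
        s = hoare_partition(a, l, r)    # split position
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)

def hoare_partition(a, l, r):
    # Partitions a[l..r] around the pivot a[l] and returns the pivot's final position.
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i < r and a[i] < p:       # scan from the left for an element >= pivot
            i += 1
        j -= 1
        while a[j] > p:                 # scan from the right for an element <= pivot
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]             # put the pivot into its final position
    return j

data = [5, 3, 1, 9, 8, 2, 4, 7]
quicksort(data)
print(data)  # [1, 2, 3, 4, 5, 7, 8, 9]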


Binary Tree Traversals


A binary tree T is defined as a finite set of nodes that is either empty or consists of
 a root and


 two disjoint binary trees, called the left and right subtrees of the root.

 The most important divide-and-conquer algorithms for binary trees are the three classic
traversals: preorder, inorder, and postorder.
 In the preorder traversal, the root is visited before the left and right subtrees are visited (in
that order).
 In the inorder traversal, the root is visited after visiting its left subtree but before visiting the
right subtree.
 In the postorder traversal, the root is visited after visiting the left and right subtrees (in that
order).
Example:
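A minimal sketch of the three traversals in Python, using a simple Node record; the tree built in the last lines is the one used in the height example below (all names are illustrative):

from collections import namedtuple

Node = namedtuple("Node", ["value", "left", "right"])   # a tiny binary-tree node

def preorder(t):
    # root, then left subtree, then right subtree
    return [] if t is None else [t.value] + preorder(t.left) + preorder(t.right)

def inorder(t):
    # left subtree, then root, then right subtree
    return [] if t is None else inorder(t.left) + [t.value] + inorder(t.right)

def postorder(t):
    # left subtree, then right subtree, then root
    return [] if t is None else postorder(t.left) + postorder(t.right) + [t.value]

# Root A with children B and C; B has children D and E, C has no children.
tree = Node("A",
            Node("B", Node("D", None, None), Node("E", None, None)),
            Node("C", None, None))
print(preorder(tree))   # ['A', 'B', 'D', 'E', 'C']
print(inorder(tree))    # ['D', 'B', 'E', 'A', 'C']
print(postorder(tree))  # ['D', 'E', 'B', 'C', 'A']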

Let us consider a recursive algorithm for computing the height of a binary tree. Recall that the height is
defined as the length of the longest path from the root to a leaf.


ALGORITHM Height(T )
//Computes recursively the height of a binary tree
//Input: A binary tree T
//Output: The height of T
if T = ∅ return −1
else return max{Height(Tleft), Height(Tright)} + 1
Example:

• The tree has the root node A, which has two children: B and C.
• B has two children: D and E, while C has no children.
Compute the height of subtree rooted at A:
• We compute Height(T left) for A, which is Height(B).
• We compute Height(T right) for A, which is Height(C).
Height of C:
• C has no children,
• Height(C) = max(Height(∅), Height(∅)) + 1 = max(-1, -1) + 1 = 0.
Height of B:
• B has two children: D and E.
• Height(D) and Height(E) both return 0 because both D and E are leaf nodes with no
children.
• Thus, Height(B) = max(Height(D), Height(E)) + 1 = max(0, 0) + 1 = 1.
Height of A:
• Now we know Height(B) = 1 and Height(C) = 0.
• Therefore, Height(A) = max(Height(B), Height(C)) + 1 = max(1, 0) + 1 = 2.
• The height of the tree rooted at A is 2
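The same computation as a minimal Python sketch, reusing the Node and tree definitions from the traversal sketch above:

def height(t):
    # Height of a binary tree: -1 for the empty tree, otherwise 1 + the taller subtree.
    if t is None:
        return -1
    return max(height(t.left), height(t.right)) + 1

print(height(tree))  # 2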

We measure the problem’s instance size by the number of nodes n(T) in a given binary tree T.
Obviously, the number of comparisons made to compute the maximum of two numbers and
the number of additions A(n(T)) made by the algorithm are the same. We have the following
recurrence relation for A(n(T)):

A(n(T )) = A(n(Tleft)) + A(n(Tright)) + 1 for n(T ) > 0,


A(0) = 0.

For the empty tree, the comparison T = ∅ is executed once but there are no additions, and for a
single-node tree, the numbers of comparisons and additions are 3 and 1, respectively.


It helps in the analysis of tree algorithms to draw the tree’s extension by replacing the empty
subtrees with special nodes. The extra nodes (shown as little squares) are called external nodes;
the original nodes are called internal nodes. By definition, the extension of the empty binary
tree is a single external node.
It is easy to see that the Height algorithm makes exactly one addition for every internal node
of the extended tree, and that it makes one comparison to check whether the tree is empty for
every internal and external node. Therefore, to ascertain the algorithm’s efficiency, we need to
know how many external nodes an extended binary tree with n internal nodes can have.

It turns out that the number of external nodes x is always 1 more than the number of internal nodes n:
x = n + 1.

To prove this equality, consider the total number of nodes, both internal and external. Since
every node, except the root, is one of the two children of an internal node, we have the equation

2n + 1 = x + n,

which immediately implies x = n + 1.
This equality also holds for any full binary tree, in which, by definition, every node has either zero
or two children; for a full binary tree, n and x denote the numbers of parent nodes and leaf nodes,
respectively.
Returning to algorithm Height, the number of comparisons to check whether the tree is empty is

C(n) = x + n = 2n + 1,

and the number of additions is

A(n) = n.


MULTIPLICATION OF LARGE INTEGERS
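A minimal sketch of the divide-and-conquer multiplication of large integers in Python (Karatsuba's method, which replaces the four half-size multiplications of the straightforward split by three); the function name and the decimal splitting are illustrative:

def karatsuba(x, y):
    # x = a * 10^half + b,  y = c * 10^half + d, so
    # x * y = ac * 10^(2*half) + (ad + bc) * 10^half + bd,
    # where ad + bc is obtained as (a + b)(c + d) - ac - bd (one multiplication saved).
    if x < 10 or y < 10:                       # one-digit base case
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    a, b = divmod(x, 10 ** half)
    c, d = divmod(y, 10 ** half)
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    ad_plus_bc = karatsuba(a + b, c + d) - ac - bd
    return ac * 10 ** (2 * half) + ad_plus_bc * 10 ** half + bd

print(karatsuba(2135, 4014))  # 8569890

With three multiplications of roughly half-size numbers per call, the number of digit multiplications is in Θ(n^log2 3) ≈ Θ(n^1.585), versus Θ(n^2) for the classic pen-and-paper algorithm.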



STRASSEN’S MATRIX MULTIPLICATION:
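A minimal sketch of the 2 × 2 base case of Strassen's algorithm in Python: two 2 × 2 matrices are multiplied with 7 multiplications instead of 8. Applied recursively to matrix blocks, this yields a Θ(n^log2 7) ≈ Θ(n^2.807) matrix-multiplication algorithm (the function name is illustrative):

def strassen_2x2(A, B):
    # Strassen's seven products for 2x2 matrices (or 2x2 blocks of larger matrices).
    (a00, a01), (a10, a11) = A
    (b00, b01), (b10, b11) = B
    m1 = (a00 + a11) * (b00 + b11)
    m2 = (a10 + a11) * b00
    m3 = a00 * (b01 - b11)
    m4 = a11 * (b10 - b00)
    m5 = (a00 + a01) * b11
    m6 = (a10 - a00) * (b00 + b01)
    m7 = (a01 - a11) * (b10 + b11)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]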



