CSC 401: Algorithms and Complexity Analysis (3 Units)

Overview
An algorithm, as you were taught in earlier courses, is a step-by-step procedure for solving a problem. In this course we study basic algorithm analysis and different algorithm design strategies. The aim is to equip you with the knowledge needed to develop good algorithms that give the best solutions to problems. Algorithm analysis is at the heart of computer science, serving as a toolset that allows you to evaluate and compare the performance of different algorithms in solving specific tasks. It can be defined as the study of the computational complexity of algorithms, which helps in minimizing the resources required by a program and thereby improves overall program efficiency.
Algorithm analysis compares algorithms at the level of ideas, ignoring low-level details such as the implementation programming language, the hardware the algorithm runs on, or the instruction set of the given CPU. The algorithms are compared in terms of just what they are: ideas of how something is computed. If our algorithm takes 1 second to run for an input of size 1000, how will it behave if the input size is doubled? Will it run just as fast, half as fast, or four times slower? In practical programming this matters because it allows us to predict how our algorithm will behave when the input data becomes larger. For example, if we have built an algorithm for a web application that works well with 1000 users and have measured its running time, complexity analysis gives us a pretty good idea of what will happen once we get 2000 users instead.

Importance of Algorithm Analysis in Computer Science


Algorithm analysis plays a significant role in computer science in various ways:
i. It provides a better understanding of how to optimize code and make informed decisions
regarding algorithm design (that is, choosing the better algorithm).
ii. Performance Optimization: Analyzing algorithm complexity helps you make your code
more efficient, reducing the time and space required for executing a program.
iii. Scalability: By understanding the behavior of an algorithm as input size increases, you can
design algorithms that scale well with the problem size.
iv. Resource Utilization: Efficient algorithms utilize fewer computing resources such as
processing power and memory.
v. Better Decision Making: It allows for a more objective comparison of different algorithms
and data structures based on their efficiency
vi. Studying this gives one a key understanding of how to approach complex coding challenges
and develop more efficient solutions.

Factors that influence program efficiency


i. Problems being solved
ii. Programming language
iii. Programmer effectiveness
iv. Compiler
v. Computer hardware
vi. Algorithm

Analysis of Algorithm
The analysis is a process of estimating the efficiency of an algorithm, that is, trying to know how
good or bad an algorithm could be.
There are two main parameters based on which an algorithm can be analyzed:
• Space Complexity: The space complexity is concerned with the amount of space required
by an algorithm to run to completion.
• Time Complexity: Time complexity is a function of the input size n that describes how the
execution time of an algorithm grows as n increases.

Practical illustration of Algorithm analysis


Assuming there is a problem P1, then it may have many solutions, and each of these solutions is
regarded as an algorithm. So, there may be many algorithms such as A1, A2, A3…, An.
Before making the choice of which algorithm to implement as a program, it is better to find out which among these algorithms (solutions) is good in terms of time and memory, by analyzing every algorithm to find out which one executes faster (time) and which one takes less memory (space).
So, the design and analysis of algorithms focuses on how to design and analyze various algorithms, after which the best algorithm, the one that takes the least time and the least memory, can be chosen and then implemented as a program in any preferred language.
Time is given more weight than space in algorithm analysis because time is the more limiting parameter in terms of hardware: processor speed cannot easily be changed, so we are more or less stuck with the speed the platform can give us. Memory, on the other hand, is relatively more flexible and can easily be increased when required by simply adding more memory.
So, we will focus on time rather than space. Note, however, that running time measured on a particular piece of hardware is not a robust measure: when the same algorithm is run on a different computer (which might be faster) or implemented in a different programming language (which may compile to faster code), the same algorithm takes a different amount of time.

Time Complexity Analysis


This depicts the time efficiency of an algorithm which is analyzed by determining
the number of repetitions of the basic operation as a function of input size (n).
Basic operation is the operation that contributes most towards the running time of
the algorithm.
T(n) ≈ c_op × C(n)

where T(n) is the running time, c_op is the execution time of the basic operation, and C(n) is the number of times the basic operation is executed.

Examples of basic operations and input size measures:

Problem                                   | Input size measure                              | Basic operation
Searching for a key in a list of n items  | Number of list items (n)                        | Key comparison
Checking primality of a given integer n   | Number of digits in n's binary representation   | Division
Matrix multiplication                     | Matrix dimensions or total number of elements   | Multiplication of two numbers
Graph problems                            | Number of vertices and/or edges                 | Visiting a vertex or traversing an edge

Types of Time Complexity Analysis


There are three types of time complexity. These are:
i. Worst-case time complexity
ii. Average case time complexity
iii. Best case time complexity

Worst-case time complexity: For a given input size n, the worst-case time complexity
can be defined as the maximum amount of time needed by an algorithm to complete its
execution. It is simply a function defined by the maximum number of steps performed on
an instance having an input size of n. This is the case we are most interested in when analyzing algorithms.

Average case time complexity: For 'n' input size, the average case time complexity can be
defined as the average amount of time needed by an algorithm to complete its execution.
It is simply a function defined by the average number of steps performed on an instance
having an input size of n.

Best case time complexity: For an input of size n, the best-case time complexity can be defined
as the minimum amount of time needed by an algorithm to complete its execution. It is
simply a function defined by the minimum number of steps performed on an instance
having an input size of n. It can be used to identify an inefficient algorithm: if even the best case is too slow, the algorithm is not worth using.
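
To make these cases concrete, here is a minimal sketch (not part of the original notes) using sequential search, whose basic operation is the key comparison:

def sequential_search(arr, key):
    """Return the index of key in arr, or -1 if it is absent.
    Basic operation: the comparison arr[i] == key."""
    for i in range(len(arr)):
        if arr[i] == key:    # best case: true on the very first comparison
            return i
    return -1                # worst case: all n comparisons were made

# Best case:    key at index 0                       -> C(n) = 1
# Worst case:   key not in the list                  -> C(n) = n
# Average case: key equally likely at any position   -> C(n) ≈ (n + 1) / 2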

Algorithm Complexity
The algorithm complexity measures the number of steps required by the algorithm to
solve a given problem. It evaluates the order of count of basic operations executed by an
algorithm as a function of input data size.
To assess the complexity, the order (approximation) of the count of operation is always
considered instead of counting the exact steps.
The complexity of an algorithm is represented using asymptotic, or "Big O", notation, written O(f(n)). Here f(n) is a function of the input data size. The asymptotic complexity O(f(n)) describes the order in which resources such as CPU time and memory are consumed by the algorithm, expressed as a function of the input data size.
The complexity can take forms such as constant, logarithmic, linear, n*log(n), quadratic, cubic, exponential, etc. It is nothing but the order (constant, logarithmic, linear, and so on) of the number of steps required to complete a particular algorithm. The complexity of an algorithm is also loosely referred to as its "running time".

Typical Complexities of an Algorithm


Our algorithm or program will fall into any of these categories of complexities of an algorithm:
Constant Complexity
In this case, the execution time of the basic operation does not depend on the size of the input, so the algorithm has constant time complexity, classified as O(1) in Big O notation.
It executes a constant number of steps (such as 1, 3, or 10) to solve the given problem.

For example,
// Pseudocode for checking if a number is even or odd
function isEvenOrOdd(number) {
    return (number % 2 == 0) ? 'Even' : 'Odd';
}

Logarithmic Complexity
Imposes a complexity of O(log(N)). An algorithm has logarithmic complexity if its runtime is proportional to the logarithm of the input size. The logarithm is usually taken to base 2. For N = 1,000,000, an algorithm with complexity O(log(N)) would take about 20 steps (up to a constant factor). The logarithmic base does not affect the order of the operation count, so it is usually omitted.
Example algorithm:

def binary_search(arr, target):
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

arr = [2, 5, 8, 12, 16, 23, 38]
target = 16
index = binary_search(arr, target)
if index != -1:
    print("Element", target, "found at index", index)
else:
    print("Element", target, "not found in the list.")

Linear Complexity
Imposes a complexity of O(N). An algorithm whose execution time grows in direct proportion to the input size (n) is said to have linear complexity.
For example, if there are 1000 elements, it will take about 1000 steps. Basically, in linear complexity the number of steps depends linearly on the number of elements. For example, the number of steps for N elements can be N/2 or 3*N. For instance, if a statement within a loop takes 1 unit of time and the loop runs n times, the complexity is 1*O(n) = O(n).
Let's take a practical example:

function sum(n) {
    let sum = 0;                      // 1 unit of time, i.e. O(1)
    for (let i = 1; i <= n; i++) {    // loop repeats up to n times
        sum += i;                     // 1 unit of time, repeated n times
    }
    return sum;                       // 1 unit of time
}

Writing the equation


C(n) = 1 + n + 1
     = 2 + n
     = O(n), ignoring the constant and keeping the dominant term, n

Linearithmic Complexity
It imposes a running time of O(n*log(n)). It is similar to logarithmic complexity but has an extra linear dependency on the input size. It uses the principle of divide and conquer and executes on the order of n*log(n) operations on n elements to solve the given problem. For 1000 elements, a linearithmic algorithm will execute roughly 10,000 steps (1000 × log₂(1000) ≈ 10,000). A good example is merge sort, as shown below:

def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)

def merge(left, right):
    merged = []
    i = 0
    j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

arr = [8, 3, 1, 7, 4, 6, 2, 5]
sorted_arr = merge_sort(arr)
print(sorted_arr)

The algorithm has an array of 8 elements which is split into two continuously until it
cannot be split any more. The arrays are then merged to give a sorted array.

Quadratic Complexity
It imposes a complexity of O(n²). If an algorithm's running time is directly proportional to the square of the input size (n), it is said to have quadratic complexity. If N = 100, it will take 10,000 steps. In other words, whenever the order of operations tends to have a quadratic relation with the input data size, it results in quadratic complexity.
For example, for N elements, the steps may be found to be of the order of 3*N²/2.
Practical example: an algorithm that checks for duplicates in an array:

function checkForDuplicate(array) {
    for (let i = 0; i < array.length; i++) {       // the loop repeats n times
        const tempArray = array.slice(i + 1);      // the rest of the array after index i
        if (tempArray.indexOf(array[i]) !== -1) {  // O(n*n): indexOf scans up to n items,
                                                   // and it runs once per loop iteration
            return true;                           // O(1*n): constant time, but inside the loop
        }
    }
    return false;                                  // O(1): outside the loop, runs at most once
}
Calculating the time complexity of the above code, we have:
C(n) = O(1*n) + O(n*n) + O(1*n) + O(1)
     = O(n*n)
     = O(n²), ignoring all the constants and lower-order terms.
Cubic Complexity
It imposes a complexity of O(n³). This complexity grows even faster than quadratic complexity. For input data size N, it executes on the order of N³ steps on N elements to solve a given problem. It is similar to quadratic complexity, except that instead of two nested loops it has three. If there are 100 elements, it is going to execute 1,000,000 steps. Greater polynomial complexities should be avoided where possible.
The Floyd–Warshall algorithm is an example of cubic complexity:

def floyd_warshall(graph):
    n = len(graph)
    # start from a copy of the matrix of direct edge weights
    distances = [row[:] for row in graph]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                distances[i][j] = min(distances[i][j],
                                      distances[i][k] + distances[k][j])
    return distances

inf = float('inf')
graph = [[0, inf, -2, inf],
         [4, 0, 3, inf],
         [inf, inf, 0, 2],
         [inf, -1, inf, 0]]

shortest_paths = floyd_warshall(graph)
for row in shortest_paths:
    print(row)

Since there are three nested loops, iterating over each vertex pair and each potential intermediate vertex, the complexity is cubic. All the loops depend on the input size, and all iterations must complete.

Exponential Complexity
It imposes a complexity of the form O(2ⁿ) (or, more generally, O(kⁿ)). This simply means the number of operations doubles each time the input size increases by one. This can be illustrated using the recursive Fibonacci process:

def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)

result = fibonacci(5)
print(result)

This example calculates the Fibonacci number at the 5th index.


Exponential algorithms are not usable on large datasets, as the complexity increases very fast.
For example, if N = 10, the exponential function 2ᴺ gives 1,024. Similarly, if
N = 20 it gives 1,048,576, and if N = 100 it gives a number with about 30 digits.
Since constants do not significantly affect the order of the operation count,
it is better to ignore them.

Factorial Complexity
This imposes a complexity of O(N!). The factorial of a number n is the product of all positive integers from 1 up to n. For instance, the factorial of 5 is 1*2*3*4*5, which equals 120. The growth rate of factorial complexity is so huge that such algorithms are rarely usable in everyday programming. However, generating all the permutations of a string, or all possible orderings of a set of individual values, is an example of a factorial-complexity process.

def generate_permutations(elements):
    if len(elements) == 1:
        return [elements]
    permutations = []
    for i in range(len(elements)):
        remaining = elements[:i] + elements[i + 1:]
        sub_permutations = generate_permutations(remaining)
        for perm in sub_permutations:
            permutations.append([elements[i]] + perm)
    return permutations

elements = [1, 2, 3, 4]
permutations = generate_permutations(elements)
for perm in permutations:
    print(perm)

Here we try to find all possible permutations of the elements [1, 2, 3, 4]. Recursively, each element is picked as the starting element and the permutations of the remaining elements are calculated. Because there are four elements, we get 4! = 24 permutations.

Key points in Algorithm complexity


Time complexity measures algorithm performance based on input size.
Factors affecting time complexity include the input size, the basic operations performed, nested loops/recursion, and the hardware.
Common complexity types include O(1), O(log n), O(n), O(n log n), O(n²), O(n³), O(2ⁿ), and O(n!).
Lower time complexity means shorter run time and better scalability with larger datasets.
Understanding time complexity is important for efficient algorithm design and optimization in various fields.
Factors that affect Time Complexity are:
• Input size
• Number of basic operations performed
• Presence of nested loops, recursion and even the hardware we are using.
Some examples of Real-World Applications of Algorithm Analysis
Algorithm Analysis can often be found in a varied array of fields, including database processing,
image manipulation, and even genetics research.

Asymptotic Analysis of algorithms (Growth of function)


Resources used by an algorithm are usually expressed as a function of the input size. Often this function is messy and complicated to work with. For an effective study of function growth, the function should be reduced to its important part.

Let f(n) = an² + bn + c

In this function, the n² term dominates when n gets sufficiently large. In function reduction we are interested in the dominant terms, because they determine the growth rate. Thus, we ignore all constants and coefficients and look at the highest-order term in n.

Asymptotic analysis
It is a technique of representing limiting behavior which can be used to analyze the
performance of an algorithm for some large data set.
In algorithm analysis (considering the performance of algorithms applied to very large input datasets), the simplest example is the function
f(n) = n² + 3n:
the term 3n becomes insignificant compared to n² when n is very large. The function f(n) is said to be asymptotically equivalent to n² as n → ∞, written symbolically as
f(n) ~ n².
Asymptotic notations are used to express the fastest and slowest possible running times of an algorithm, also known as the 'best case' and 'worst case' scenarios respectively.
In asymptotic notation we express the complexity in terms of the input size n. These notations enable us to estimate the complexity of an algorithm without computing its exact running cost. They compare functions while ignoring constant factors and small input sizes.

Importance of Asymptotic Notations


a. Provide simple characteristics of an algorithm's efficiency.
b. Allow the comparisons of the performances of various algorithms.
Types of Asymptotic Notations:
Three notations are used to calculate the running time complexity of an algorithm:
Big-O notation:
Big-O is the formal method of expressing the upper bound of an algorithm's running
time. It is a measure of the longest amount of time the algorithm can take. The function f(n) = O(g(n)) [read
as "f of n is big-O of g of n"] if and only if there exist positive constants c and n0 such that
f(n) ≤ c·g(n) for all n ≥ n0.
Hence, g(n) is an upper bound for f(n); beyond n0, g(n) grows at least as fast as f(n).

Figure 1: Asymptotic Upper bound


Examples:
1. 3n+2=O(n) as 3n+2≤4n for all n≥2
2. 3n+3=O(n) as 3n+3≤4n for all n≥3
Hence, the complexity of f(n) can be represented as O (g (n))

Big Omega (Ω) Notation


The function f(n) = Ω(g(n)) [read as "f of n is omega of g of n"] if and only if there exist
positive constants c and n0 such that
f(n) ≥ c·g(n) for all n ≥ n0.

Figure 2: Asymptotic Lower bound


Example:
f(n) = 4n² + 2n − 3 ≥ 4n² − 3
     = 3n² + (n² − 3) ≥ 3n² = c·g(n) for all n ≥ 2
Thus, c = 3, n0 = 2, and g(n) = n².
Hence, the complexity of f(n) can be represented as Ω(g(n)).
Big Theta (θ) Notation
The function f(n) = θ(g(n)) [read as "f of n is theta of g of n"] if and
only if there exist positive constants k1, k2 and n0 such that
k1·g(n) ≤ f(n) ≤ k2·g(n) for all n ≥ n0.

Figure 3: Asymptotic Tight bound

For example:
3n + 2 = θ(n), since 3n + 2 ≥ 3n and 3n + 2 ≤ 4n for all n ≥ 2;
k1 = 3, k2 = 4, and n0 = 2.
Hence, the complexity of f(n) can be represented as θ(g(n)).
The Theta notation is more precise than both the Big-O and Omega notations: f(n) = θ(g(n)) if and only if g(n) is both an upper and a lower bound of f(n).

ALGORITHM DESIGN STRATEGIES


An algorithm design strategy (or "technique" or "paradigm") is a general, step-by-step approach to solving
problems that is applicable to a variety of problems from different areas of computing.

Reasons for Learning these strategies:


• They provide guidance for designing algorithms for new problems (problems without a
known satisfactory algorithm).
• Algorithm classification: design techniques enable algorithms to be classified
according to an underlying design idea; therefore, they can serve as a natural way to both
categorize and study algorithms.
Though algorithm design techniques provide a powerful set of general approaches to algorithmic
problem solving, designing an algorithm may still be a challenging task. Some problems may not have
an applicable design strategy.
Typical Algorithm Design Strategies
• Brute Force
• Divide and Conquer Approach
• Greedy Strategy
• Dynamic Programming
• Branch and Bound
• Backtracking Algorithm

Brute Force
This is a simple technique with a naïve approach. It relies on sheer processing power and the testing of
all possibilities to find a solution. A scenario where a brute-force search can be used: suppose
you forgot the combination of a 4-digit padlock and still want to use it; the padlock can be opened
by trying all possible 4-digit combinations of the digits 0 to 9. The combination could be
anything between 0000 and 9999, hence there are 10,000 combinations. So we can say that in the
worst case you have to try up to 10,000 possibilities to find the actual combination.
The time complexity of brute-force string matching is O(mn), which can also be written as O(n*m). This means
that if we need to search for a string of n characters within a string of m characters, the number of comparisons
can be about n*m.
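
A minimal sketch of the padlock scenario above (illustrative, not part of the original notes); try_combination is a hypothetical test function standing in for the real padlock:

def crack_padlock(try_combination):
    """Brute force: try every 4-digit code from 0000 to 9999."""
    for code in range(10000):
        guess = f"{code:04d}"          # format as a 4-digit string, e.g. "0042"
        if try_combination(guess):     # in the worst case this runs 10,000 times
            return guess
    return None

# Example usage with a known secret standing in for the real padlock:
secret = "4931"
print(crack_padlock(lambda guess: guess == secret))   # prints 4931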

Divide and Conquer Approach


This algorithmic technique is preferred for complex problems. It uses a top-down approach
following the steps listed below, as the name implies:
Step 1: Divide the problem into several subproblems.
Step 2: Conquer (solve) each subproblem.
Step 3: Combine the sub-solutions to get the required result.

Divide and conquer solves each subproblem recursively, so each subproblem is a smaller instance of the
original problem. An example is shown in Figure 4.

Examples of some standard algorithms of the divide-and-conquer variety:
a. Binary Search: a searching algorithm.
b. Quicksort: a sorting algorithm.
c. Merge Sort: a sorting algorithm.
d. Closest Pair of Points: the problem of finding the closest pair of points in a set of
points in the x-y plane.
Figure 4: Divide and Conquer algorithm
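
To make the divide, conquer, and combine steps above concrete, here is a minimal sketch (not part of the original notes) that finds the maximum of an array by divide and conquer:

def max_divide_conquer(arr, lo, hi):
    """Return the maximum of arr[lo..hi] using divide and conquer."""
    if lo == hi:                                        # base case: a single element
        return arr[lo]
    mid = (lo + hi) // 2                                # divide
    left_max = max_divide_conquer(arr, lo, mid)         # conquer the left half
    right_max = max_divide_conquer(arr, mid + 1, hi)    # conquer the right half
    return left_max if left_max >= right_max else right_max   # combine

arr = [8, 3, 1, 7, 4, 6, 2, 5]
print(max_divide_conquer(arr, 0, len(arr) - 1))   # prints 8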

Greedy Algorithm
This is an algorithm strategy that builds up a solution piece by piece, always choosing the next piece
that offers the most obvious and immediate benefit. It is used to solve optimization problems in
which, for a given set of input values, some objective is to be maximized or minimized. A greedy
algorithm always solves a problem by choosing the option that appears to be the best at the moment
(hence the name greedy). It may not always give the optimal solution.
There are two stages to solving a problem using a greedy algorithm:
a) Examining the list of items.
b) Optimization.
This means that a greedy algorithm selects the best immediate option without reconsidering its
decisions. When it comes to optimizing a solution, this implies that the greedy approach seeks out
local optimum solutions, of which there could be several, and may miss the global optimal solution.
For example, the greedy algorithm illustrated in Figure 5 aims to locate the path with the largest sum.
Figure 5: Greedy Algorithm
With the goal of reaching the largest sum, at each step the greedy algorithm will choose what
appears to be the optimal immediate choice, so it will choose 12 instead of 3 at the second step
and will not reach the best solution, which contains 99.
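
A minimal sketch of the same idea (illustrative, not from the original notes) is the classic greedy coin-change heuristic: always take the largest coin that still fits. For denominations like [25, 10, 5, 1] it happens to be optimal, but for other denomination sets it can miss the global optimum, which mirrors the path example above.

def greedy_coin_change(amount, denominations):
    """Greedily pick the largest coin that fits at each step."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:          # take this coin as long as it still fits
            coins.append(coin)
            amount -= coin
    return coins if amount == 0 else None   # None if the greedy choices cannot reach the amount

print(greedy_coin_change(63, [25, 10, 5, 1]))   # [25, 25, 10, 1, 1, 1] - optimal here
print(greedy_coin_change(6, [4, 3, 1]))         # [4, 1, 1] - but the optimum is [3, 3]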

Examples of Greedy Algorithms


a. Prim's Minimal Spanning Tree Algorithm.
b. Travelling Salesman Problem.
c. Graph – Map Coloring.
d. Kruskal's Minimal Spanning Tree Algorithm.
e. Dijkstra's Shortest Path Algorithm.
f. Graph – Vertex Cover.
g. Knapsack Problem.
h. Job Scheduling Problem.

Dynamic Programming
Dynamic Programming (DP) is an algorithmic technique for solving optimization problems by
breaking them into simpler sub-problems and storing each sub-solution for reuse. For instance,
when using this technique, a solution obtained from an earlier calculation is saved and reused
later instead of being recalculated, so it is well suited to complicated equations and processes;
it is both a mathematical optimization method and a computer programming method. The sub-problems
are optimized to find the overall solution, which usually has to do with finding the maximum or
minimum of some quantity. DP can be used in the calculation of the Fibonacci series, in which each
number is the sum of the two preceding numbers. Suppose the first two numbers of the series are 0, 1.

Fib(n) =Fib(n-1) + Fib(n-2), for n >1

To find the nth number of the series, the overall problem, i.e. Fib(n), can be tackled by breaking
it down into two smaller sub-problems, i.e. Fib(n-1) and Fib(n-2).
Hence, we can use Dynamic Programming to solve the above-mentioned problem, as elaborated in
Figure 6:
Figure 6: Fibonacci Series using Dynamic Programming
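
A minimal sketch (illustrative, not part of the original notes) of this idea is a memoized Fibonacci function: each sub-result Fib(k) is stored the first time it is computed and reused afterwards, which brings the exponential recursion down to linear time.

def fib_dp(n, memo=None):
    """Fibonacci with memoization: each sub-problem is solved once and stored."""
    if memo is None:
        memo = {0: 0, 1: 1}          # base cases of the series 0, 1, 1, 2, 3, ...
    if n not in memo:
        memo[n] = fib_dp(n - 1, memo) + fib_dp(n - 2, memo)   # reuse stored sub-solutions
    return memo[n]

print(fib_dp(10))   # 55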

Some examples of Dynamic Programming algorithms are;


a. Tower of Hanoi
b. Dijkstra Shortest Path
c. Fibonacci sequence
d. Matrix chain multiplication
e. Egg-dropping puzzle, etc

Branch and Bound (BnB) Algorithm


BnB is an algorithmic design strategy for solving combinatorial and discrete optimization problems.
Many optimization problems that cannot be solved in polynomial time are tackled with BnB. The
algorithm enumerates candidate solutions in a stepwise manner by exploring the set of possible
solutions, pruning branches that its bounds show cannot lead to a better solution. An important
advantage of branch-and-bound algorithms is that we can control the quality of the solution to be
expected, even if it has not yet been found: the cost of an optimal solution can only be a bounded
amount smaller than the cost of the best solution computed so far.

Figure 7: Branch and Bound Algorithm Example

First, a rooted decision tree is built in which the root node represents the entire search space. Each
child node is a partial solution and a part of the solution set. Based on the best solution found so far,
we set an upper and a lower bound for the given problem before constructing the rooted decision tree,
and at each level we decide which node to include in the solution set. Finding good upper and lower
bounds is very important: an upper bound can be found with any local optimization method or by picking
any point in the search space, whereas convex relaxation or duality can be used for finding a lower bound.
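
As a hedged sketch of the idea (not from the original notes), the following applies branch and bound to the 0/1 knapsack problem: each node branches on taking or skipping an item, and a fractional (relaxed) knapsack estimate serves as the optimistic bound used to prune branches.

def knapsack_branch_and_bound(values, weights, capacity):
    """0/1 knapsack (maximization) via branch and bound."""
    # consider items in decreasing value/weight order so the bound is tight
    items = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
    best = 0

    def bound(idx, value, remaining):
        # optimistic estimate: fill the remaining capacity fractionally
        b = value
        for i in items[idx:]:
            if weights[i] <= remaining:
                remaining -= weights[i]
                b += values[i]
            else:
                b += values[i] * remaining / weights[i]
                break
        return b

    def branch(idx, value, remaining):
        nonlocal best
        if value > best:
            best = value
        if idx == len(items) or bound(idx, value, remaining) <= best:
            return                                    # prune: cannot beat the best so far
        i = items[idx]
        if weights[i] <= remaining:                   # branch 1: take item i
            branch(idx + 1, value + values[i], remaining - weights[i])
        branch(idx + 1, value, remaining)             # branch 2: skip item i

    branch(0, 0, capacity)
    return best

print(knapsack_branch_and_bound([60, 100, 120], [10, 20, 30], 50))   # 220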

Examples of BnB problems


a. Crew scheduling
b. network flow problems
c. production planning.
d. Traveling Salesman Problem
e. Job Assignment Problem

Randomized Algorithm
The randomized algorithm strategy uses random numbers to determine the next line of action at some
point in its logic. It is usually used to reduce either the running time (time complexity) or the
memory used (space complexity) of a standard algorithm. The algorithm works by generating a random
number, r, from a set of numbers and making decisions based on its value. Such an algorithm can help
make a decision in a situation of doubt, much like flipping a coin or drawing a card from a deck.

Figure 8: Randomized Algorithm Flowchart (the input and a random number are fed to the algorithm, which produces the output)
The output of a randomized algorithm on a given input is a random variable. Thus,
there may be a positive probability that the outcome is incorrect. As long as the
probability of error is small for every possible input to the algorithm, this is not a
problem
When utilizing a randomized method, keep the following two considerations in mind: the algorithm
takes a source of random numbers along with its input and makes random choices during execution;
and its behavior can vary even on a fixed input.
Two main types of randomized algorithms:
a. Las Vegas algorithms
b. Monte-Carlo algorithms.

Examples of Randomized algorithm


In quicksort: using a random number to choose the pivot.
In factoring: trying to factor a large number by choosing random numbers as possible divisors.
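
As a minimal sketch of the first example (not part of the original notes), here is quicksort with a randomly chosen pivot; the random choice makes worst-case input orderings unlikely in practice:

import random

def randomized_quicksort(arr):
    """Quicksort that picks its pivot at random."""
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)                       # the random choice drives the next step
    less    = [x for x in arr if x < pivot]
    equal   = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([8, 3, 1, 7, 4, 6, 2, 5]))   # [1, 2, 3, 4, 5, 6, 7, 8]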

Backtracking Algorithms
This technique steps backward to try another option if the current solution fails. It is a method for
resolving problems recursively by attempting to construct a solution incrementally, one piece at a
time, discarding any solutions that fail to satisfy the problem's constraints at any point in time. It
can be said to use a brute-force approach for problems with multiple solutions. It finds a
solution by building it step by step, increasing levels over time, using recursive calls. A
search tree known as the state-space tree is used to find these solutions. Each branch in a state-
space tree represents a variable, and each level represents a solution.
A backtracking algorithm uses the depth-first search method. When the algorithm begins to
explore the solutions, a bounding function is applied so that the algorithm can determine
whether the proposed solution satisfies the constraints. If it does, it keeps looking. If it does
not, the branch is removed, and the algorithm returns to the previous level.
In any backtracking algorithm, the algorithm seeks a path to a feasible solution that includes some
intermediate checkpoints. If the checkpoints do not lead to a viable solution, the algorithm can return
to the checkpoints and take another path to find a solution.
The algorithm works as follows:
Given a problem:
Backtrack(s):
    if s is not a solution, return false
    if s is a new solution, add s to the list of solutions
    backtrack(expand s)

For example, suppose we want to find all the possible ways of arranging 2 boys and 1 girl on 3 benches,
with the constraint that the girl should not be on the middle bench. There will be 3! = 6 (3 × 2 × 1)
possible arrangements to consider. All possible ways are tried recursively to get the required
solutions, as shown:

Figure 9: Solution of backtracking
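
A minimal sketch of the bench example (illustrative, not from the original notes): build the arrangement seat by seat, and discard any partial arrangement that places the girl on the middle bench.

def arrange(people, seats, arrangement=None, solutions=None):
    """Backtracking: place people on seats, rejecting invalid partial arrangements."""
    if arrangement is None:
        arrangement, solutions = [], []
    if len(arrangement) == seats:
        solutions.append(arrangement[:])        # a complete, valid arrangement
        return solutions
    for person in people:
        if person in arrangement:
            continue
        arrangement.append(person)
        # constraint: the girl must not sit on the middle bench (index 1)
        if not (person == "Girl" and len(arrangement) - 1 == 1):
            arrange(people, seats, arrangement, solutions)
        arrangement.pop()                        # backtrack and try another option
    return solutions

for s in arrange(["Boy1", "Boy2", "Girl"], 3):
    print(s)    # 4 of the 6 permutations satisfy the constraint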

Examples of Backtracking algorithm problems:


• Variety of problems: finding a feasible solution to a decision problem.
• Optimization problems.
• Finding all feasible solutions to the enumeration problem.
• Finding all Hamiltonian paths present in a graph
• Solving the N-Queen problem
• Knights Tour problem, etc
Backtracking, on the other hand, is not regarded as an optimal problem-solving technique. It is
useful when the solution to a problem does not have a time limit.

RECURSION AND RECURSIVE ALGORITHM


Recursion is a method of solving problems that involves breaking a problem down into smaller
sub-problems until you get to a problem small enough that it can be easily solved. In computer
science, recursion involves a function calling itself. With recursion, complex problems can be
programmed elegantly. It can generally be referred to as a self-reference concept.
Recursion in computer science is a programming technique in which a procedure,
subroutine, function, or algorithm calls itself one or more times until a specified condition is
met, at which point the rest of each repetition is processed from the last one called back to the first.
Most programming languages support recursion.

Instances of Recursion
There are two main instances of recursion.
• Recursion as a technique in which a function makes one or more calls to itself.
• A data structure using smaller instances of the exact same type of data structure when it
represents itself.

Importance of Recursion
• It provides an alternative for performing repetitions of the task in which a loop is not
ideal.
• It serves as a great tool for building out particular data structures.

Uses of Recursion
A real-world problem solved by recursion: stacks of documents can be sorted using a recursive
method. Assuming you have about 200 documents with names on them to sort, first place the documents
into piles according to their first letters, then sort each pile.

In software engineering, recursive functions can be used for sorting and searching data structures
like linked lists, binary trees, and graphs. They are also used for string manipulation. Recursion can
also be used for traversing complex data structures such as JSON objects and directory trees in a file system.

Most programming problems can be solved without recursion; recursion is best used only for
repetitive tasks where a loop is not ideal. Recursion should not be used for every problem.

Consider the following factors before opting for recursion as a solution


• In some problems, a recursive solution will be confusing rather than elegant.
• Recursive implementations consume more memory than non-recursive ones.
• They can also consume more running time.
How to write a recursive function
Writing a recursive function involves writing a base case along with the recursive steps that are
taken. For example, the recursive way to write the factorial operation is:

fact(0) = 1
fact(x) = x × fact(x − 1)


The base case is the first line in the example above; it does not refer back to the function itself, as the
rest of the recursive definition does. Recursive functions are evaluated by working backward until the base
case is reached. Using the above to calculate the factorial of 5, the definition states that the factorial of
5 is equal to 5 times the factorial of 5 minus 1. That is:
5! = 5 × (5−1)! = 5 × 4!. The factorial operation calls itself.
What makes a function recursive is that it requires its own terms to figure out its next term. For
instance, to find the nth term, you need to know the previous term and the term before that
one.

Examples of Recursive functions


The factorial function is a good example of recursion.
The factorial function is denoted with an exclamation point (!) and is defined as the product of
the integers from 1 to n. Formally, n! can be stated as:
n! = n ⋅ (n−1) ⋅ (n−2) … 3 ⋅ 2 ⋅ 1
Note that if n = 0, then n! = 1. This is important to note because it will serve as
our base case.

Take this example:


5! = 5 ⋅ 4 ⋅ 3 ⋅ 2 ⋅ 1 = 120.
Stating this in a recursive manner is where the concept of base case comes in. Base case is a key
part of understanding recursion.

Let’s rewrite the above equation of 5! so it looks like this:


5! = 5 ⋅ (4 ⋅ 3 ⋅ 2 ⋅ 1) = 120
Notice that this is the same as:
5! = 5 ⋅ 4! = 120
Meaning we can rewrite the formal definition of factorial in terms of recursion like so:
n! = n ⋅ (n−1)!
Note that if n = 0, then n! = 1. This means the base case occurs when n = 0; the recursive cases are
defined in the equation above. Whenever you are trying to develop a recursive solution, it is very
important to think about the base case, as your solution will need to return the base case once all
the recursive cases have been worked through.
Let’s see how we can create the factorial function in Python:
def fact(n):
    '''
    Returns factorial of n (n!).
    Note use of recursion
    '''
    # BASE CASE!
    if n == 0:
        return 1
    # Recursion!
    else:
        return n * fact(n-1)

Let's see it in action! fact(5) = 120


Take note of the "if statement" that checks whether the base case has occurred.
Without it, this function would never terminate successfully.
We can visualize the recursion with the following figure:

Figure 10: illustration of Fact (5) = 120


We can follow this flow chart from the top, reaching the base case, and
then working our way back up.
Recursion is a powerful tool, but it can be a tricky concept to implement.

Conditionals to Start, Continue, and Stop the Recursion


For a function with a string or array argument, the starting conditions may often be the exact same
conditions that force the recursion to continue.
More importantly, you want to establish a condition under which the recursive action stops. These
conditionals, known as base cases, produce an actual value rather than another call to the
function. (In the case of tail-end recursion, the return value still calls a function but gets
the value of that function right away.)
The establishment of base cases is commonly achieved by having a conditional observe some
quality, such as the length of an array or the size of a number, just like loops do.
Laws of Recursion
All recursive algorithms must obey three important laws:
i. A recursive algorithm must have a base case, which denotes the point when it should
stop.
ii. A recursive algorithm must change its state and move toward the base case which enables
it to store and accumulate values that end up becoming the answer.
iii. A recursive algorithm must call itself, recursively with smaller and smaller values.

Types of Recursion
Recursion is mainly of two types:
i. Direct Recursion: when a function calls itself from within its own body.
ii. Indirect Recursion: when two or more functions call one another mutually.

Direct Recursion
These can be further categorized into four types:
a. Tail Recursion:
If a recursive function calls itself and that recursive call is the last statement in the function,
it is known as tail recursion. After that call the recursive function performs nothing further: the
function does all of its processing at the time of calling and does nothing at
returning time.

Example:
// Code Showing Tail Recursion
#include <iostream>
using namespace std;

// Recursive function
void fun(int n)
{
    if (n > 0) {
        cout << n << " ";
        // Last statement in the function
        fun(n - 1);
    }
}

// Driver Code
int main()
{
    int x = 3;
    fun(x);
    return 0;
}
Output:
3 2 1

Time Complexity for Tail Recursion: O(n)
Space Complexity for Tail Recursion: O(n)

Let us convert the tail recursion into a loop and compare the two in terms of time and space
complexity to decide which is more efficient.
// Converting Tail Recursion into Loop
#include <iostream>
using namespace std;

void fun(int y)
{
    while (y > 0) {
        cout << y << " ";
        y--;
    }
}

// Driver code
int main()
{
    int x = 3;
    fun(x);
    return 0;
}
Output:
3 2 1
Time Complexity: O(n)
Space Complexity: O(1)

So, in the case of the loop the space complexity is O(1). It is therefore better to write such code as a
loop instead of tail recursion: in terms of space complexity, the loop is more efficient than tail
recursion.

b. Head Recursion:
If a recursive function calls itself and that recursive call is the first statement in the function,
it is known as head recursion. There is no statement and no operation before the call. The
function does not have to process or perform any operation at the time of calling; all
operations are done at returning time.

Example:
// C++ program showing Head Recursion
#include <bits/stdc++.h>
using namespace std;

// Recursive function
void fun(int n)
{
    if (n > 0) {
        // First statement in the function
        fun(n - 1);
        cout << " " << n;
    }
}

// Driver code
int main()
{
    int x = 3;
    fun(x);
    return 0;
}
Output:
1 2 3

Time Complexity for Head Recursion: O(n)
Space Complexity for Head Recursion: O(n)

Let’s convert the above code into the loop.


// Converting Head Recursion into Loop
#include <iostream>
using namespace std;

// Equivalent iterative function
void fun(int n)
{
    int i = 1;
    while (i <= n) {
        cout << " " << i;
        i++;
    }
}

// Driver code
int main()
{
    int x = 3;
    fun(x);
    return 0;
}
Output:
1 2 3

c. Tree Recursion:
To understand tree recursion, let's first understand linear recursion. If a recursive function
calls itself only once, it is known as linear recursion. If a recursive
function calls itself more than once, it is known as tree recursion.

Example: Pseudo Code for linear recursion


fun(n)
{
// some code
if(n>0)
{
fun(n-1); // Calling itself only once
}
// some code
}

Program for tree recursion


// C++ program to show Tree Recursion
#include <iostream>
using namespace std;

// Recursive function
void fun(int n)
{
    if (n > 0)
    {
        cout << " " << n;
        // Calling once
        fun(n - 1);
        // Calling twice
        fun(n - 1);
    }
}

// Driver code
int main()
{
    fun(3);
    return 0;
}
Output:
3 2 1 1 2 1 1

Time Complexity for Tree Recursion: O(2ⁿ)
Space Complexity for Tree Recursion: O(n)
d. Nested Recursion:
In this recursion, a recursive function passes a recursive call as its parameter. That means
"recursion inside recursion".

Example:
// C++ program to show Nested Recursion
#include <iostream>
using namespace std;

int fun(int n)
{
    if (n > 100)
        return n - 10;
    // A recursive function passing a recursive
    // call as its parameter (recursion inside
    // the recursion)
    return fun(fun(n + 11));
}

// Driver code
int main()
{
    int r;
    r = fun(95);
    cout << " " << r;
    return 0;
}
Output:
91

Indirect Recursion:
In this recursion, there may be more than one function, and they call one another in a
circular manner. In the diagram below, fun(A) calls fun(B), fun(B) calls
fun(C), and fun(C) calls fun(A), thus forming a cycle.

Figure 11: Indirect Recursion

Example:
// C++ program to show Indirect Recursion
#include <iostream>
using namespace std;

void funB(int n);

void funA(int n)
{
    if (n > 0) {
        cout << " " << n;
        // funA is calling funB
        funB(n - 1);
    }
}

void funB(int n)
{
    if (n > 1) {
        cout << " " << n;
        // funB is calling funA
        funA(n / 2);
    }
}

// Driver code
int main()
{
    funA(20);
    return 0;
}
Output:
20 19 9 8 4 3 1

Recursion versus Iteration


Recursion and iteration both repeatedly execute a set of instructions. Recursion occurs when a
statement in a function calls the function itself repeatedly. Iteration occurs when a loop repeatedly
executes until the controlling condition becomes false. The primary difference between recursion and
iteration is that recursion is a process always applied to a function, whereas iteration is applied to
the set of instructions we want to have executed repeatedly.

Features of Recursion
• Recursion uses selection structure.
• Infinite recursion occurs if the recursion step does not reduce the problem in a manner
that converges on some condition (base case) and Infinite recursion can crash the system.
• Recursion terminates when a base case is recognized.
• Recursion is usually slower than iteration due to the overhead of maintaining the stack.
• Recursion uses more memory than iteration.
• Recursion makes the code smaller.

Features of Iteration
• Iteration uses repetition structure.
• An infinite loop occurs with iteration if the loop condition test never becomes false and
Infinite looping uses CPU cycles repeatedly.
• An iteration terminates when the loop condition fails.
• An iteration does not use the stack so it's faster than recursion.
• Iteration consumes less memory.
• Iteration makes the code longer.

Some examples of Recursive Algorithms


Reversing an Array
Let us consider the problem of reversing the n elements of an array, A, so that the first element
becomes the last, the second element becomes the second to last, and so on. We can solve
this problem using linear recursion, by observing that the reversal of an array can be achieved
by swapping the first and last elements and then recursively
reversing the remaining elements of the array.

Algorithm ReverseArray(A, i, j):


Input: An array A and nonnegative integer indices i and j
Output: The reversal of the elements in A starting at index i and ending at j
if i < j then
Swap A[i] and A[j]
ReverseArray(A, i+1, j-1)
return
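
A minimal Python rendering of this algorithm (illustrative, not part of the original notes):

def reverse_array(A, i, j):
    """Recursively reverse the elements of A between indices i and j (inclusive)."""
    if i < j:
        A[i], A[j] = A[j], A[i]          # swap the first and last elements
        reverse_array(A, i + 1, j - 1)   # recursively reverse the remaining elements

A = [1, 2, 3, 4, 5]
reverse_array(A, 0, len(A) - 1)
print(A)   # [5, 4, 3, 2, 1]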
Fibonacci Sequence
Fibonacci sequence is the sequence of numbers 1, 1, 2, 3, 5, 8, 13, 21,34, 55, .... The first two
numbers of the sequence are both 1, while each succeeding number is the sum of the two
numbers before it. We can define a function F(n) that calculates the nth Fibonacci number.
First, the base cases are: F(0) = 1 and F(1) = 1.
Now, the recursive case: F(n) = F(n-1) + F(n-2).
Write the recursive function and the call tree for F(5).

Algorithm Fib(n) {
if (n < 2) return 1
else return Fib(n-1) + Fib(n-2)
}
The above recursion is called binary recursion since it makes two recursive calls instead of one.
How many calls are needed to compute the kth Fibonacci number? Let nk denote the
number of calls performed in the execution.
n0 = 1
n1 = 1
n2 = n1 + n0 + 1 = 3 > 2^1
n3 = n2 + n1 + 1 = 5 > 2^2
n4 = n3 + n2 + 1 = 9 > 2^3
n5 = n4 + n3 + 1 = 15 > 2^3
...
nk > 2^(k/2)
This means that the Fibonacci recursion makes a number of calls that is exponential in k. In
other words, using binary recursion to compute Fibonacci numbers is very inefficient. Compare
this problem with binary search, which is very efficient at searching items: why is this binary
recursion inefficient? The main problem with the approach
above is that there are multiple overlapping recursive calls.
We can compute F(n) much more efficiently using linear recursion. One way to accomplish this
conversion is to define a recursive function that computes a pair of consecutive Fibonacci
numbers, F(n) and F(n-1), using the convention F(-1) = 0.

Algorithm LinearFib(n) {
Input: A nonnegative integer n
Output: Pair of Fibonacci numbers (Fn, Fn-1)
if (n <= 1) then
return (n, 0)
else
(i, j) <-- LinearFib(n-1)
return (i + j, i)
}
Since each recursive call to LinearFib decreases the argument n by 1, the original call results in a
series of n-1 additional calls. This performance is significantly faster than the exponential time
needed by the binary recursion. Therefore, when using binary recursion, we should first try to
fully partition the problem in two or, we should be sure that overlapping recursive calls are really
necessary.

Let's use iteration to generate the Fibonacci numbers.


public static int IterationFib(int n) {
if (n < 2) return n;
int f0 = 0, f1 = 1, f2 = 1;
for (int i = 2; i < n; i++) {
f0 = f1;
f1 = f2;
f2 = f0 + f1;
}
return f2;
}

What's the complexity of this algorithm?


It has linear complexity, O(n).
Exercises
i. Find the Sum of the elements of an array recursively
ii. Find the maximum element in an array A of n elements using recursion, then using
iteration. What are their time complexities?

More on Recursion Tree Method


The Recursion Tree Method is a pictorial representation of the iteration method, in the
form of a tree whose nodes are expanded at each level.
In general, we consider the second term in the recurrence as the root. The method is useful when a
divide-and-conquer algorithm is being analyzed.
It is sometimes difficult to come up with a good guess. In a recursion tree, each node
represents the cost of a single sub-problem. We sum the costs within each level of the
tree to obtain a set of per-level costs, and then sum all the per-level costs to
determine the total cost of all levels of the recursion.
A recursion tree is best used to generate a good guess, which can then be verified by the
substitution method.
The general method: suppose you have a recurrence of the form T(n) = aT(n/b) + f(n),
where a and b are arbitrary constants and f is some function of n. This recurrence arises
in the analysis of a recursive algorithm that, for large inputs of size n, breaks up the
input into a subproblems each of size n/b, recursively solves the subproblems, and then
recombines the results. The work to split the problem into subproblems and to recombine the
results is f(n).

Example
Consider T(n) = 2T(n/2) + n².
We have to obtain the asymptotic bound using the recursion tree method.
Solution:
The recursion tree for the above recurrence is shown in Figure 12. For input size n,
there are 2 subproblems, each of size n/2; n² is the f(n) term that splits the problem into
subproblems and recombines the results.

Figure 12: Recursion tree for T(n) = 2T(n/2) + n²

T(n) = n² + n²/2 + n²/4 + n²/8 + …   (log n terms)
     ≤ n² · Σ (1/2)ⁱ, summed over i = 0 to ∞
     ≤ n² · 1/(1 − 1/2)
     ≤ 2n²

Therefore, T(n) = Θ(n²).
Exercise
Consider the following recurrence and obtain the asymptotic bound using the recursion tree
method:
T(n) = T(n/3) + T(2n/4) + n
