
Recurrence Relation

A recurrence is an equation or inequality that describes a function in terms of its values on smaller inputs. To solve a recurrence relation means to obtain a function defined on the natural numbers that satisfies the recurrence.

For example, the worst-case running time T(n) of the MERGE SORT procedure is described by the recurrence

T(n) = θ(1) if n = 1
T(n) = 2T(n/2) + θ(n) if n > 1
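To see this recurrence concretely, here is a minimal Python sketch (an illustrative addition, taking T(1) = 1 for the θ(1) base case) that evaluates T(n) = 2T(n/2) + n directly and compares it with n log n; the ratio settling near a constant previews the Θ(n log n) behavior derived below.

import math

def T(n):
    # Base case: constant work, taken as 1.
    if n <= 1:
        return 1
    # Two half-size subproblems plus linear merge work.
    return 2 * T(n // 2) + n

for n in [2 ** k for k in range(1, 11)]:
    print(n, T(n), round(T(n) / (n * math.log2(n)), 3))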

There are four methods for solving recurrences:

1. Substitution Method
2. Iteration Method
3. Recursion Tree Method
4. Master Method

1. Substitution Method:

The Substitution Method consists of two main steps:

1. Guess the solution.
2. Use mathematical induction to find the boundary condition and show that the guess is correct.

Example 1: Solve the following recurrence by the substitution method:

T(n) = T(n/2) + 1

We have to show that it is asymptotically bounded by O(log n).

Solution:

For T(n) = O(log n), we have to show that for some constant c,

T(n) ≤ c log n.

Put this in the given recurrence equation (logs are taken to base 2):

T(n) ≤ c log(n/2) + 1
     = c log n - c log 2 + 1
     ≤ c log n for c ≥ 1

Thus T(n) = O(log n).
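As a quick numeric sanity check (an addition to the original example), the following Python snippet evaluates T(n) = T(n/2) + 1 directly and confirms that T(n) ≤ 2 log2(n) for 2 ≤ n < 10000, consistent with the O(log n) bound just derived.

import math

def T(n):
    # T(1) = 1; otherwise one recursive call on half the input plus constant work.
    return 1 if n <= 1 else T(n // 2) + 1

assert all(T(n) <= 2 * math.log2(n) for n in range(2, 10000))
print("T(n) <= 2*log2(n) holds for 2 <= n < 10000")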

Example 2: Consider the recurrence

T(n) = 2T(n/2) + n, n > 1

Find an asymptotic bound on T.

Solution:

We guess the solution is O(n log n). Thus, for some constant c,

T(n) ≤ c n log n.

Put this in the given recurrence equation. Now,

T(n) ≤ 2c(n/2) log(n/2) + n
     = cn log n - cn log 2 + n
     = cn log n - n(c log 2 - 1)
     ≤ cn log n for c ≥ 1

Thus T(n) = O(n log n).
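The same bound can be checked numerically; this short Python snippet (an addition, taking T(1) = 1) verifies that the guess holds with c = 2 for 2 ≤ n < 5000.

import math

def T(n):
    # T(1) = 1; T(n) = 2T(n/2) + n for n > 1.
    return 1 if n == 1 else 2 * T(n // 2) + n

assert all(T(n) <= 2 * n * math.log2(n) for n in range(2, 5000))
print("T(n) <= 2*n*log2(n) holds for 2 <= n < 5000")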

Example 3: Solve the following recurrence relation using the substitution method:

T(n) = 2T(n/2) + n when n > 1, and T(n) = 1 when n = 1.

Solution: To solve the given recurrence relation we'll use the substitution method.

Let's start by guessing a solution to the recurrence relation: T(n) = O(n log n).

Now, let's prove this using the substitution method.

We assume T(k) ≤ ck log k for all k < n, where c is a constant to be determined.

Now, we substitute this assumption into the original recurrence (logs taken to base 2):

T(n) = 2T(n/2) + n
     ≤ 2c(n/2) log(n/2) + n
     = cn(log n - 1) + n
     = cn log n + n(1 - c)

Now, for T(n) to be O(n log n), we need the last expression to be less than or equal to cn log n: n(1 - c) ≤ 0.
This holds true when c ≥ 1.

So, we've proved that T(n) = O(n log n).

Now, to establish a lower bound, we'll show that T(n) = Ω(n log n) using a similar process. We assume that T(k) ≥ ck log k for all k < n.

By substituting this assumption into the original recurrence and following similar steps as above, we can prove the lower bound.

Finally, since we have both an upper bound O(n log n) and a lower bound Ω(n log n) on T(n), we can conclude that T(n) = Θ(n log n).

2. Iteration Method

The iteration method expands the recurrence and expresses it as a summation of terms of n and the initial condition.

Example 1: Consider the recurrence

T(n) = 1 if n = 1
T(n) = 2T(n-1) if n > 1

Solution:

T(n) = 2T(n-1)
     = 2[2T(n-2)] = 2^2 T(n-2)
     = 4[2T(n-3)] = 2^3 T(n-3)
     = 8[2T(n-4)] = 2^4 T(n-4) .....(Eq. 1)

Repeat the procedure i times:

T(n) = 2^i T(n-i)

Put n-i = 1, i.e. i = n-1, in (Eq. 1):

T(n) = 2^(n-1) T(1)
     = 2^(n-1) * 1    {T(1) = 1 .....given}
     = 2^(n-1)
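A quick Python check (an addition to the original text) confirms the closed form against direct evaluation of the recurrence:

def T(n):
    # T(1) = 1; T(n) = 2*T(n-1) for n > 1.
    return 1 if n == 1 else 2 * T(n - 1)

assert all(T(n) == 2 ** (n - 1) for n in range(1, 20))
print("T(n) == 2^(n-1) verified for 1 <= n < 20")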

Example 2: Consider the recurrence

T(n) = T(n-1) + 1, with T(1) = θ(1).

Solution:

T(n) = T(n-1) + 1
     = (T(n-2) + 1) + 1 = T(n-2) + 2
     = T(n-3) + 3
     = T(n-4) + 4
     = T(n-5) + 5
     ...
     = T(n-k) + k

Where k = n-1:

T(n-k) = T(1) = θ(1)
T(n) = θ(1) + (n-1) = 1 + n - 1 = n = θ(n).
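Again, a short Python check (an illustrative addition, treating the θ(1) base case as 1) confirms the closed form:

def T(n):
    # T(1) = 1; T(n) = T(n-1) + 1 for n > 1.
    return 1 if n == 1 else T(n - 1) + 1

assert all(T(n) == n for n in range(1, 200))
print("T(n) == n verified for 1 <= n < 200")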
3. Recursion Tree Method
Recursion is a fundamental concept in computer science and mathematics that allows functions to call
themselves, enabling complex problems to be solved by reduction to smaller instances of the same problem.
One visual representation commonly used to understand and analyze the execution of recursive functions is
a recursion tree. In this article, we will explore the theory behind recursion trees, their structure, and their
significance in understanding recursive algorithms.

What is a Recursion Tree?


A recursion tree is a graphical representation that illustrates the execution flow of a recursive function. It
provides a visual breakdown of recursive calls, showcasing the progression of the algorithm as it branches
out and eventually reaches a base case. The tree structure helps in analyzing the time complexity and
understanding the recursive process involved.

Tree Structure
Each node in a recursion tree represents a particular recursive call. The initial call is depicted at the top, with
subsequent calls branching out beneath it. The tree grows downward, forming a hierarchical structure. The
branching factor of each node depends on the number of recursive calls made within the function.
Additionally, the depth of the tree corresponds to the number of recursive calls before reaching the base
case.
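To make this structure concrete, here is a small illustrative Python sketch (an addition; print_tree is a hypothetical helper) that prints the recursion tree for the recurrence T(n) = 2T(n/2) + n, one node per line with indentation marking the level:

def print_tree(n, depth=0):
    # Each node shows the subproblem size; indentation encodes its level.
    print("  " * depth + f"T({n})")
    if n > 1:
        # Two recursive calls give every internal node a branching factor of 2.
        print_tree(n // 2, depth + 1)
        print_tree(n // 2, depth + 1)

print_tree(8)  # root T(8), two children T(4), four nodes T(2), eight leaves T(1)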

Base Case
The base case serves as the termination condition for a recursive function. It defines the point at which the
recursion stops and the function starts returning values. In a recursion tree, the nodes representing the base
case are usually depicted as leaf nodes, as they do not result in further recursive calls.


Recursive Calls
The child nodes in a recursion tree represent the recursive calls made within the function. Each child node
corresponds to a separate recursive call, resulting in the creation of new sub problems. The values or
parameters passed to these recursive calls may differ, leading to variations in the sub problems'
characteristics.

Execution Flow:
Traversing a recursion tree provides insights into the execution flow of a recursive function. Starting from the
initial call at the root node, we follow the branches to reach subsequent calls until we encounter the base
case. As the base cases are reached, the recursive calls start to return, and their respective nodes in the tree
are marked with the returned values. The traversal continues until the entire tree has been traversed.

Time Complexity Analysis


Recursion trees aid in analyzing the time complexity of recursive algorithms. By examining the structure of
the tree, we can determine the number of recursive calls made and the work done at each level. This
analysis helps in understanding the overall efficiency of the algorithm and identifying any potential
inefficiencies or opportunities for optimization.

Introduction
o Think of a program that determines a number's factorial. This function takes a number N as an input
and returns the factorial of N as a result. This function's pseudo-code will resemble the following:

// find factorial of a number
factorial(n) {
    // Base case: factorial of 0 or 1 is 1
    if n is less than 2:
        return 1

    // Recursive step
    return n * factorial(n-1); // Factorial of 5 => 5 * Factorial(4)...
}

/* How the function calls are made:

   Factorial(5) [ 120 ]
        |
   5 * Factorial(4) ==> 120
        |
   4 * Factorial(3) ==> 24
        |
   3 * Factorial(2) ==> 6
        |
   2 * Factorial(1) ==> 2
        |
        1
*/
o The function mentioned previously exemplifies recursion. We invoke a function to determine a
number's factorial. Then, given a smaller value of the same number, this function calls itself. This
continues until we reach the base case, in which there are no more function calls.
o Recursion is a technique for handling complicated problems when the outcome depends on the
outcomes of smaller instances of the same problem.
o If we think about functions, a function is said to be recursive if it keeps calling itself until it reaches
the base case.
o Any recursive function has two primary components: the base case and the recursive step. We stop
taking the recursive step once we reach the base case. Base cases must be properly defined and are
crucial to prevent endless recursion. Infinite recursion is a recursion that never reaches the base
case; if a program never reaches the base case, a stack overflow will eventually occur.

Recursion Types
Generally speaking, there are two different forms of recursion:

o Linear Recursion
o Tree Recursion
Linear Recursion
o A function that calls itself just once each time it executes is said to be linearly recursive. A nice
illustration of linear recursion is the factorial function. The name "linear recursion" refers to the fact
that a linearly recursive function takes a linear amount of time to execute.
o Take a look at the pseudo-code below:

function doSomething(n) {
    // base case to stop recursion
    if n is 0:
        return
    // ... some instructions here ...
    // recursive step
    doSomething(n-1);
}
o If we look at the function doSomething(n), it accepts a parameter named n and does some
calculations before calling the same procedure once more, but with a lower value.
o Let's say that T(n) represents the total amount of time needed to complete the computation when
the method doSomething() is called with the argument value n. For this, we can also formulate a
recurrence relation, T(n) = T(n-1) + K. K serves as a constant here. Constant K is included because it
takes time for the function to allocate or de-allocate memory for a variable or to perform a
mathematical operation. Since this work is small and constant, we lump it into K.
o This recursive program's time complexity may be simply calculated since, in the worst case, the
method doSomething() is called n times; the call-counting sketch below makes this visible. Formally
speaking, the function's time complexity is O(N).
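Here is that small Python sketch (an addition to the original text) counting invocations of the linearly recursive function:

def do_something(n, counter):
    counter[0] += 1      # count this invocation
    if n == 0:           # base case
        return
    do_something(n - 1, counter)  # single recursive call: linear recursion

counter = [0]
do_something(100, counter)
print(counter[0])  # 101 calls, one per value of n from 100 down to 0, i.e. O(N)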

Tree Recursion
o When a recursive case makes more than one recursive call, it is referred to as tree recursion. A
classic illustration of tree recursion is the Fibonacci sequence. Tree recursive functions operate in
exponential time; they are not linear in their time complexity.
o Take a look at the pseudo-code below:

function doSomething(n) {
    // base case to stop recursion
    if n is less than 2:
        return n;
    // ... some instructions here ...
    // recursive step
    return doSomething(n-1) + doSomething(n-2);
}
o The only difference between this code and the previous one is that this one makes one more call to
the same function with a lower value of n.
o Let's write T(n) = T(n-1) + T(n-2) + K as the recurrence relation for this function. K serves as a constant
once more.
o When more than one call to the same function with smaller values is performed, this sort of
recursion is known as tree recursion. Now for the intriguing part: how time-consuming is this
function?
o Take a guess based on the recursion tree for the same function.
o It may occur to you that it is challenging to estimate the time complexity by looking directly at a
recursive function, particularly when it is a tree recursion. The Recursion Tree Method is one of several
techniques for calculating the time complexity of such functions. Let's examine it in further detail,
starting with the call-counting sketch below.
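This Python sketch (an addition to the original text) counts the calls made by the tree-recursive function above; the rapid growth in the call count is the exponential blow-up the recursion tree will explain.

def do_something(n, counter):
    counter[0] += 1  # count this invocation
    if n < 2:        # base case
        return n
    # two recursive calls: tree recursion
    return do_something(n - 1, counter) + do_something(n - 2, counter)

for n in (10, 20, 25):
    counter = [0]
    do_something(n, counter)
    print(n, counter[0])  # call counts grow roughly like 1.618^n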

What Is Recursion Tree Method?


o Recurrence relations like T(N) = T(N/2) + N, or the two we covered earlier in the section on types of
recursion, are solved using the recursion tree approach. These recurrence relations often arise from
a divide and conquer strategy for solving problems.
o When a larger problem is broken down into smaller subproblems, it takes time to combine the
answers to those subproblems.
o For instance, the recurrence relation for merge sort is T(N) = 2T(N/2) + O(N). The time needed to
combine the answers to the two subproblems of size N/2 is O(N), which is true at the
implementation level as well.
o Similarly, the recurrence relation for binary search is T(N) = T(N/2) + 1: each iteration of binary
search cuts the search space in half, and once the outcome is determined, we exit the function. The
+1 is added to the recurrence relation because checking the middle element is a constant time
operation.
o Consider the recurrence relation T(n) = 2T(n/2) + Kn, where Kn denotes the amount of time required
to combine the answers to the two subproblems of size n/2.
o Let's depict the recursion tree for this recurrence relation. Studying it, we may draw a few
conclusions, including:

1. The size of the problem at each level is all that matters for determining the value of a node. The problem
size is n at level 0, n/2 at level 1, n/4 at level 2, and so on.

2. In general, we define the height of the tree as equal to log(n), where n is the size of the problem, and the
height of this recursion tree is equal to the number of levels in the tree. This is true because, as we just
established, these recurrence relations follow the divide-and-conquer strategy, and getting from problem
size n to problem size 1 simply requires taking log(n) halving steps.

o Consider the value N = 16, for instance. If we are permitted to divide N by 2 at each step, how
many steps are required to reach N = 1? Considering that we are dividing by two at each step, the
correct answer is 4, which is the value of log(16) base 2:

log(16) base 2
= log(2^4) base 2
= 4 * log(2) base 2
= 4, since log(a) base a = 1

3. At each level, the second term in the recurrence (the cost of the work done outside the recursive calls) is
regarded as the cost of the root node at that level.

Although the word "tree" appears in the name of this strategy, you don't need to be an expert on trees to
comprehend it.

How to Use a Recursion Tree to Solve Recurrence Relations?


The cost of the sub problem in the recursion tree technique is the amount of time needed to solve the sub
problem. Therefore, if you notice the phrase "cost" linked with the recursion tree, it simply refers to the
amount of time needed to solve a certain sub problem.

Let's understand all of these steps with a few examples.

Example
Consider the recurrence relation,

T(n) = 2T(n/2) + K

Solution

The given recurrence relation shows the following properties,

A problem of size n is divided into two sub-problems, each of size n/2. The cost of combining the solutions
to these sub-problems is K.

Each sub-problem of size n/2 is in turn divided into two sub-problems, each of size n/4, and so on.

At the last level, the sub-problem size will be reduced to 1. In other words, we finally hit the base case.

Let's follow the steps to solve this recurrence relation,

Step 1: Draw the Recursion Tree

Step 2: Calculate the Height of the Tree

When we continuously divide a number by 2, it is eventually reduced to 1. The same holds for the problem
size n: suppose that after k divisions by 2, n becomes equal to 1, which implies (n / 2^k) = 1.

Here n / 2^k is the problem size at the last level, and it is always equal to 1.

Now we can easily calculate the value of k from the above expression by taking log() on both sides. Below is
a clearer derivation:

n = 2^k

o log(n) = log(2^k)
o log(n) = k * log(2)
o k = log(n) / log(2)
o k = log(n) base 2

So the height of the tree is log (n) base 2.

Step 3: Calculate the cost at each level

o Cost at Level-0 = K; the two sub-problems are merged once.
o Cost at Level-1 = K + K = 2*K; merging is done twice.
o Cost at Level-2 = K + K + K + K = 4*K; merging is done four times, and so on....

Step 4: Calculate the number of nodes at each level

Let's first determine the number of nodes at each level. From the recursion tree, we can deduce this:

o Level-0 has 1 (2^0) node
o Level-1 has 2 (2^1) nodes
o Level-2 has 4 (2^2) nodes
o Level-3 has 8 (2^3) nodes

So level log(n) should have 2^(log(n)) nodes, i.e. n nodes.

Step 5: Sum up the cost of all the levels

o The total cost can be written as:

o Total Cost = Cost of all levels except the last level + Cost of the last level
o Total Cost = Cost for level-0 + Cost for level-1 + Cost for level-2 + .... + Cost for level-(log(n) - 1) +
Cost of the last level

The cost of the last level is calculated separately because it is the base case and no merging is done at the
last level, so the cost to solve a single problem at this level is some constant value. Let's take it as O(1).

Let's put the values into the formula:

o T(n) = K + 2*K + 4*K + .... (log(n) times) + O(1) * n
o T(n) = K(1 + 2 + 4 + .... log(n) times) + O(n)
o T(n) = K(2^0 + 2^1 + 2^2 + .... + 2^(log(n) - 1)) + O(n)

If you look closely at the above expression, the bracketed part forms a geometric progression (a, ar, ar^2,
ar^3, ....). The sum of the first m terms of a GP is S(m) = a(r^m - 1)/(r - 1), where a is the first term and r is
the common ratio. Here a = 1, r = 2 and m = log(n), so the sum is 2^(log(n)) - 1 = n - 1. Therefore,

T(n) = K(n - 1) + O(n) = O(n)
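The level-by-level sum can also be checked in Python (an illustrative addition, assuming K = 1 and a power-of-two n): the per-level total K(n - 1) plus the n base cases agrees with direct evaluation of the recurrence.

import math

K = 1  # assumed constant combine cost

def T(n):
    # T(1) = 1 (base case cost); T(n) = 2T(n/2) + K otherwise.
    return 1 if n == 1 else 2 * T(n // 2) + K

def level_sum(n):
    # Levels 0 .. log2(n)-1 contribute 2^i * K each; the last level has n
    # base-case nodes of cost 1.
    levels = int(math.log2(n))
    return sum(K * 2 ** i for i in range(levels)) + n

for n in (8, 64, 1024):
    print(n, T(n), level_sum(n))  # both columns equal K*(n-1) + n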

4. Master Method
The Master Method is used for solving the following type of recurrence:

T(n) = a T(n/b) + f(n), with a ≥ 1 and b > 1 constants and f(n) a function, which can be interpreted as below.

Let T(n) be defined on the non-negative integers by the recurrence

T(n) = a T(n/b) + f(n)

In the analysis of a recursive algorithm, the constants and function take on the following significance:

o n is the size of the problem.
o a is the number of subproblems in the recursion.
o n/b is the size of each subproblem. (Here it is assumed that all subproblems are essentially the same size.)
o f(n) is the work done outside the recursive calls, which includes the cost of dividing the problem and the
cost of combining the solutions to the subproblems.
o It is not always possible to bound the function as required, so we distinguish three cases that tell us
what kind of bound we can apply to the function.
Master Theorem:

An asymptotically tight bound can be given in these three cases:

Case 1: If f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then it follows that:

T(n) = Θ(n^(log_b a))

Example:

T(n) = 8 T(n/2) + 1000 n^2; apply the master theorem to it.

Solution:
Compare T(n) = 8 T(n/2) + 1000 n^2 with

T(n) = a T(n/b) + f(n)
a = 8, b = 2, f(n) = 1000 n^2, log_b a = log_2 8 = 3

Put all the values in f(n) = O(n^(log_b a - ε)):

1000 n^2 = O(n^(3-ε))
If we choose ε = 1, we get: 1000 n^2 = O(n^(3-1)) = O(n^2)

Since this equation holds, the first case of the master theorem applies to the given recurrence relation, thus resulting in the
conclusion:

T(n) = Θ(n^(log_b a))
Therefore: T(n) = Θ(n^3)

Case 2: If it is true, for some constant k ≥ 0, that:

f(n) = Θ(n^(log_b a) (log n)^k), then it follows that: T(n) = Θ(n^(log_b a) (log n)^(k+1))

Example:

T(n) = 2 T(n/2) + 10 n; solve the recurrence by using the master method.

Compare the given problem with T(n) = a T(n/b) + f(n):

a = 2, b = 2, k = 0, f(n) = 10 n, log_b a = log_2 2 = 1

Put all the values in f(n) = Θ(n^(log_b a) (log n)^k); we will get:

10 n = Θ(n^1) = Θ(n), which is true.

Therefore: T(n) = Θ(n^(log_b a) (log n)^(k+1))
= Θ(n log n)

Case 3: If it is true that f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and it is also true that
a f(n/b) ≤ c f(n) for some constant c < 1 and all sufficiently large n, then:

T(n) = Θ(f(n))

Example: Solve the recurrence relation:

T(n) = 2 T(n/2) + n^2

Solution:

Compare the given problem with T(n) = a T(n/b) + f(n):

a = 2, b = 2, f(n) = n^2, log_b a = log_2 2 = 1

Put all the values in f(n) = Ω(n^(log_b a + ε)) ..... (Eq. 1)

If we insert all the values in (Eq. 1), we will get:

n^2 = Ω(n^(1+ε)); put ε = 1, then the equality will hold:
n^2 = Ω(n^(1+1)) = Ω(n^2)

Now we will also check the second condition:

2 f(n/2) = 2 (n/2)^2 = n^2 / 2 ≤ c n^2

If we choose c = 1/2, it is true that n^2 / 2 ≤ (1/2) n^2 for all n ≥ 1.

So it follows that T(n) = Θ(f(n)):
T(n) = Θ(n^2)
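To tie the three cases together, here is a small illustrative Python helper (an addition; the function name and its restriction to f(n) = Θ(n^d) are assumptions, a simplified polynomial form of the theorem) that classifies such recurrences by comparing d with log_b(a):

import math

def master_case(a, b, d):
    # Simplified master theorem for T(n) = a*T(n/b) + Theta(n^d):
    # compare d against the critical exponent log_b(a).
    crit = math.log(a, b)
    if d < crit:
        return f"Case 1: T(n) = Theta(n^{crit:g})"
    if d == crit:
        return f"Case 2: T(n) = Theta(n^{d:g} log n)"
    # d > crit; the regularity condition a*f(n/b) <= c*f(n) holds
    # automatically here because a / b**d < 1.
    return f"Case 3: T(n) = Theta(n^{d:g})"

print(master_case(8, 2, 2))  # 8T(n/2) + 1000n^2 -> Theta(n^3)
print(master_case(2, 2, 1))  # 2T(n/2) + 10n     -> Theta(n log n)
print(master_case(2, 2, 2))  # 2T(n/2) + n^2     -> Theta(n^2)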
