
Recurrence Relation

A recurrence is an equation or inequality that describes a function in terms of its values on smaller inputs. To solve a recurrence relation means to obtain a function defined on the natural numbers that satisfies the recurrence.

For example, the worst-case running time T(n) of the MERGE SORT procedure is described by the recurrence

T(n) = Θ(1)             if n = 1
T(n) = 2T(n/2) + Θ(n)   if n > 1
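As an illustrative sketch (not part of the original notes; treating Θ(1) as exactly 1 and Θ(n) as exactly n is an assumption made for the demonstration), the merge-sort recurrence can be evaluated numerically. For powers of two it solves exactly to n·log2(n) + n, which is Θ(n log n):

```python
import math

def T(n):
    """Merge-sort recurrence with Θ(1) taken as 1 and Θ(n) taken as n."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n  # two half-size subproblems plus linear merge work

# For powers of two, the recurrence solves exactly to n*log2(n) + n.
for k in range(6):
    n = 2 ** k
    assert T(n) == n * math.log2(n) + n
```

Checking the closed form against the recurrence like this is a quick sanity test before attempting a formal proof.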

There are four methods for solving recurrences:
1. Substitution Method
2. Iteration Method
3. Recursion Tree Method
4. Master Method

The substitution method is a condensed way of proving an asymptotic bound on a recurrence by induction. In the substitution method, instead of trying to find an exact closed-form solution, we only try to find a closed-form bound on the recurrence. Note that the substitution method still requires the use of induction. The induction will always be of the same basic form, but it is still important to state the property you are trying to prove, split into one or more base cases and the inductive case, and note when the inductive hypothesis is being used.

Substitution method example

Consider the following recurrence relation, which shows up fairly frequently for some types of algorithms:
T(1) = 1
T(n) = 2T(n−1) + c1

By expanding this out a bit (using the "iteration method"), we can guess that this will be O(2^n). To use the substitution method to prove this bound, we now need to guess a closed-form upper bound based on this asymptotic bound. We will guess an upper bound of k·2^n − b, where b is some constant. We include the b in anticipation of having to deal with the constant c1 that appears in the recurrence relation, and because it does no harm. In the process of proving this bound by induction, we will generate a set of constraints on k and b, and if b turns out to be unnecessary, we will be able to set it to whatever we want at the end.

Our property, then, is T(n) ≤ k·2^n − b, for some two constants k and b. Note that this property logically implies that T(n) is O(2^n), which can be verified with reference to the definition of O.

Base case: n = 1. T(1) = 1 ≤ k·2^1 − b = 2k − b. This is true as long as k ≥ (b + 1)/2.


Inductive case: We assume our property is true for n − 1. We now want to show that it is true for n.

T(n) = 2T(n−1) + c1
     ≤ 2(k·2^(n−1) − b) + c1   (by IH)
     = k·2^n − 2b + c1
     ≤ k·2^n − b

The last step is true as long as b ≥ c1.

So we end up with two constraints that need to be satisfied for this proof to work, and we can satisfy them simply by letting b = c1 and k = (b + 1)/2, which is always possible, as the definition of O allows us to choose any constant. Therefore, we have proved that our property is true, and so T(n) is O(2^n).
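The constraints derived in the proof can also be spot-checked numerically. This is a minimal sketch (the concrete value c1 = 5 is an arbitrary choice for the demonstration): with b = c1 and k = (b + 1)/2, the bound T(n) ≤ k·2^n − b should hold for every n.

```python
def T(n, c1):
    """The recurrence from the example: T(1) = 1, T(n) = 2*T(n-1) + c1."""
    if n == 1:
        return 1
    return 2 * T(n - 1, c1) + c1

# Constraints from the proof: b >= c1 and k >= (b + 1) / 2.
c1 = 5
b = c1             # b = c1 satisfies b >= c1
k = (b + 1) / 2    # k = 3 satisfies k >= (b + 1) / 2

# The proved bound T(n) <= k*2^n - b should hold for all n >= 1.
for n in range(1, 15):
    assert T(n, c1) <= k * 2 ** n - b
```

With these particular choices the bound is in fact tight (it holds with equality), which is consistent with the proof generating exactly the constraints needed.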

The biggest thing worth noting about this proof is the importance of adding additional
terms to the upper bound we assume. In almost all cases in which the recurrence has
constants or lower-order terms, it will be necessary to have additional terms in the upper
bound to "cancel out" the constants or lower-order terms. Without the right additional
terms, the inductive case of the proof will get stuck in the middle, or generate an impossible
constraint; this is a signal to go back to your upper bound and determine what else needs
to be added to it that will allow the proof to proceed without causing the bound to change
in asymptotic terms.
Review of the master method
The master method gives us a quick way to find solutions to recurrence relations of the
form T(n) = aT(n/b) + h(n), where a and b are constants, a ≥ 1 and b > 1. Conceptually, a
represents how many recursive calls are made, b represents the factor by which the work
is reduced in each recursive call, and h(n) represents how much work is done by each call
apart from the recursion, as a function of n.
Once we have a recurrence relation of that form, the master method tells us the solution
based on the relation between a, b, and h(n), as follows:
• Case 1: h(n) is O(n^(log_b a − ε)) for some ε > 0, which says that h grows more slowly than the number of leaves. In this case, the total work is dominated by the work done at the leaves, so T(n) is Θ(n^(log_b a)).
• Case 2: h(n) is Θ(n^(log_b a)), which says that h grows at the same rate as the number of leaves. In this case, T(n) is Θ(n^(log_b a) log n).
• Case 3: h(n) is Ω(n^(log_b a + ε)) for some ε > 0, which says that h grows faster than the number of leaves. For the upper bound, we also need an extra smoothness condition on h, namely a·h(n/b) ≤ c·h(n) for some c < 1 and all sufficiently large n. In this case, the total work is dominated by the work done at the root, so T(n) is Θ(h(n)).
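The three cases above can be mechanized for the common situation where the non-recursive work is a simple power of n. This is a hedged sketch (the function name and the restriction to h(n) = n^d are my own simplifications, not from the original text); with h(n) = n^d, the case test reduces to comparing d against log_b(a):

```python
import math

def master_method(a, b, d):
    """Classify T(n) = a*T(n/b) + n**d by the master method.

    Restricted to h(n) = n**d, so each case reduces to comparing
    d with log_b(a). Returns the Θ-class as a string.
    """
    crit = math.log(a, b)  # exponent of the "number of leaves" term, n^(log_b a)
    if math.isclose(d, crit):
        return f"Θ(n^{d} log n)"      # Case 2: work balanced across all levels
    if d < crit:
        return f"Θ(n^{crit:.3f})"     # Case 1: leaves dominate
    return f"Θ(n^{d})"                # Case 3: root dominates (n^d satisfies the smoothness condition)

print(master_method(2, 2, 1))  # merge sort, a=2, b=2, h(n)=n: Θ(n^1 log n)
print(master_method(1, 2, 0))  # binary search, a=1, b=2, h(n)=1: Θ(n^0 log n), i.e. Θ(log n)
print(master_method(3, 2, 1))  # Karatsuba multiplication: Θ(n^1.585)
```

Note that real applications of the master method also cover non-polynomial h(n); this sketch only handles the polynomial case, where the ε and smoothness conditions hold automatically.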
