CSC310 03 Algorithm Efficiency
Dr Fatimah Adamu-Fika
Overview
Asymptotic Notations
Simple Statements
If-Then-Else
Unit Goals
Exercises
Textbooks
Websites
OVERVIEW
We already mentioned that an algorithm can be efficient in two ways: how fast a
given algorithm runs determines its time efficiency, also called time complexity; the
amount of memory units the algorithm requires in addition to the space needed by its
input and output indicates its space efficiency, also called space complexity. In the
past, both time and space were very expensive. With current technological
advancements, computer speed and memory size have greatly improved, and the
amount of extra space required by an algorithm is no longer of much concern.
However, there still exists a difference between the fast main memory, the slower
secondary memory, and the cache. The time issue, though, has not diminished to the
same extent as the space issue. We shall primarily focus on time efficiency, but the
analytical framework we shall be exploring also applies to analysing space efficiency.
Also, when we consider space efficiency, we shall focus on main memory usage rather
than secondary memory. Therefore, unless otherwise stated, when we talk of
efficiency, we are talking about time efficiency; and when we talk of space efficiency,
we are talking about how much main memory the algorithm needs. Algorithms that
have non-appreciable (not significant) space complexity are said to be in place.
Many algorithms transform input objects into output objects. For example, a correct
sorting algorithm that reorders a given list in non-decreasing order when applied to
the problem instance [5, 3, 4, 2, 1] will output [1, 2, 3, 4, 5].
[Figure: a sorting algorithm takes the input object [5, 3, 4, 2, 1] and produces the output object [1, 2, 3, 4, 5].]
From the example above, we can fathom that the running time of the sorting
algorithm will grow as the number of elements to be sorted increases. This
characteristic is typical for most algorithms.
In some situations, even on the same size input, the same algorithm can have very
different running times. In these situations, the nature (or details) of the input affects
the process of solving the problem. For example, a searching algorithm that scans a
list from the first element to the last will run faster if the value being searched for is the
first element in the list. Whereas the same algorithm, when applied to another
instance of the same size but with the value being searched for located at the last
position in the list, will take longer to run.
However, there are cases where the choice of a parameter indicating an input size
does not matter. For example, when computing the product of two square (m × m)
matrices, the size of the problem is measured by the order m of the matrix (the order
of a matrix refers to its dimension, i.e., the number of rows and columns it has).
The selection of an appropriate input size metric can be influenced by the operations
of the given algorithm. For example, how should we decide an input’s size metric for
a spell-checking algorithm? If the algorithm examines individual characters of its input,
we should use the number of characters as the size metric; if it works by processing
words, we should take the number of words as the size metric.
There are cases where the input is just one or two numbers, for example, when
checking the primality of a number (i.e., checking whether a number is prime) or when
multiplying two numbers. Here, the magnitude of the number(s) determines the input
size. In such scenarios, it is preferable to measure size by the number of bits, b, in the
number n's binary representation:
b = ⌊log₂ n⌋ + 1
This metric usually gives a better idea about the efficiency of the algorithm in question.
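As a quick check (a minimal Python sketch; the helper name bit_length_of is ours, purely for illustration), the formula agrees with Python's built-in int.bit_length():

import math

def bit_length_of(n: int) -> int:
    """Number of bits b in n's binary representation: b = floor(log2 n) + 1."""
    return math.floor(math.log2(n)) + 1

# 17 is 10001 in binary, so it needs 5 bits.
assert bit_length_of(17) == 5 == (17).bit_length()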
We shall indicate which input size measure is being used with each problem we study.
To estimate running time, let c_op be the execution time of an algorithm's basic operation on a particular computer, and let Cn be the number of times this operation is executed for inputs of size n. Then the running time Tn can be estimated by:
Tn ≈ c_op × Cn
This formula can give a reasonable estimate of the algorithm's running time. It also
makes it possible to answer questions such as how much faster an algorithm would run
on a faster computer compared to a slower machine. For example, an algorithm will
obviously run 10 times faster on a computer that is 10 times faster than another
computer. More importantly, estimating the algorithm's running time can answer how
much longer an algorithm will run if its input size doubles. For example, suppose
Cn = ½ n²; let's see how we can answer how much longer it will take when the
algorithm's input size is doubled, without knowing the value of c_op.
T2n / Tn ≈ (c_op × C2n) / (c_op × Cn) ≈ (½ (2n)²) / (½ n²) ≈ 4n² / n² ≈ 4
As you can notice, c_op is neatly cancelled out in the ratio. Also note that ½, the
multiplicative constant in the formula for the count, was cancelled out as well. This is
the reason why, when we do efficiency analysis, we ignore multiplicative constants and
focus on the count's order of growth to within a constant multiple for large-size inputs.
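As a numerical sanity check (a minimal Python sketch; the function name and the specific count are just the running example, not something prescribed by the text), doubling the input size of an algorithm with a quadratic count roughly quadruples the count, whatever the constant factor is:

def quadratic_count(n: int) -> float:
    """The operation count Cn = 0.5 * n**2 from the running example."""
    return 0.5 * n ** 2

for n in (10, 100, 1000):
    print(n, quadratic_count(2 * n) / quadratic_count(n))   # always 4.0: T2n/Tn ≈ 4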
Order of Growth
Order of growth (or rate of growth) describes a set of functions whose asymptotic growth
behaviour is considered equivalent. For example, 6n + 300, 100n, n + 1 and 2n all
belong to the same order of growth: they are linear functions. Functions that have n²
as a leading term are called quadratic. We are more interested in the order of growth
of the number of times the basic operation is executed as a function of n (the input size
of an algorithm), because for smaller inputs it is difficult to distinguish efficient algorithms
from inefficient ones. For this reason, we are interested in the order of growth for
large input sizes.
For example, suppose we analyse two algorithms and express their running times in
terms of the size of the input, n: we find that Algorithm A executes 100n operations to
solve a particular problem and Algorithm B executes n² operations to solve the same
problem. The following table shows the running times of the two algorithms for different
input sizes.
n          | 100n         | n²
1          | 100          | 1
10         | 1,000        | 100
100        | 10,000       | 10,000
10,000     | 1,000,000    | 100,000,000
100,000    | 10,000,000   | 10,000,000,000
1,000,000  | 100,000,000  | 1,000,000,000,000
For smaller n, Algorithm A takes longer to execute than Algorithm B: when n = 1, we
can see that Algorithm A takes 100 times longer than Algorithm B, and 10 times longer
when n = 10. However, when n = 100 they have the same execution time. For larger
n, Algorithm A runs much faster; the larger the value of n, the better Algorithm A is
over Algorithm B.
We may guess, from the example above, that functions with larger leading terms
will eventually grow faster than those with smaller leading terms. The leading term, in a
function, is the term with the highest exponent. For Algorithm A, the leading term has
a large coefficient, 100, which is why Algorithm B does better for smaller n. However,
regardless of the coefficients, there will always be some value of n beyond which a·n < b·n².
This also applies to the non-leading terms. Even if the run time of Algorithm A were
100n + 10¹⁰, it would still be better than Algorithm B for a large enough n.
So, we expect an algorithm with a smaller leading term to be a better algorithm for
large problems. However, for smaller problems, there may be a crossover point where
another algorithm is better. The crossover point is the problem size where two
algorithms require the same run time or space. The location of the crossover point
depends on the details of the algorithms, the inputs, and the hardware. This is usually
ignored for algorithmic analysis purposes.
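To make the idea of a crossover point concrete, here is a minimal Python sketch (the helper crossover and the two cost formulas are our own illustration based on Algorithms A and B above) that searches for the smallest n at which one cost stops exceeding the other:

def crossover(cost_a, cost_b, limit=10**6):
    """Return the smallest n (1 <= n < limit) where cost_a(n) <= cost_b(n), else None."""
    for n in range(1, limit):
        if cost_a(n) <= cost_b(n):
            return n
    return None

# Algorithm A performs 100n operations, Algorithm B performs n**2 operations.
print(crossover(lambda n: 100 * n, lambda n: n ** 2))   # 100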
Similarly, when two algorithms have the same leading terms, it is difficult to say which
one is better. Thus, for algorithm analysis, functions with the same leading term are
considered equivalent and belong to the same order of growth, even if they have
different coefficients.
The following table shows some orders of growth (with respect to the input size) that are
common in algorithm analysis:

Function | Name
1        | constant
log n    | logarithmic
n        | linear
n log n  | linearithmic ("n-log-n")
n²       | quadratic
n³       | cubic
2ⁿ       | exponential
n!       | factorial

All logarithmic functions belong to the same order of growth regardless of their base,
because changing the base of a logarithm only multiplies it by a constant:
logₐ n = logₐ b × log_b n
where a is the new base and b the old. For example, to change the log function in
our example from base 2 to base 10, using the formula, we get:
log₁₀ n = log₁₀ 2 × log₂ n = 0.301 log₂ n
∵ log₁₀ 2 ≈ 0.301
Therefore, we omit a logarithm's base and simply write log n in situations where we are
only interested in a function's order of growth within a multiplicative constant. Similarly,
all exponential functions belong to the same order of growth irrespective of the base
of the exponent. Exponential functions are on the opposite end of the spectrum from
logarithmic functions: they grow so quickly that their values become extremely
large even for very small values of n. The factorial function is at this end of the
spectrum as well. Although there is a huge difference between the orders of growth
of 2ⁿ and n!, n! is often also referred to as having exponential growth. We can see that
exponential algorithms are only useful for solving small problems.
Another observation we can make from the table above is the rate of growth of the
various functions when the input size doubles. Let’s summarise it in the following table:
Function | Effect when the input size doubles (n → 2n)
log₂ n   | increases by 1, since log₂(2n) = log₂ n + 1
n        | doubles
n log₂ n | slightly more than doubles
n²       | quadruples, since (2n)² = 4n²
n³       | increases eightfold, since (2n)³ = 8n³
2ⁿ       | gets squared, since 2^(2n) = (2ⁿ)²
The Random Access Machine (RAM) Model
In the RAM model, instructions are executed one after the other, and no two are
executed concurrently. These instructions are simple ones found in real computers:
arithmetic (such as add, subtract, multiply, divide, modulus, etc.), data movement
(such as load, save, etc.) and control (such as conditional and unconditional branch,
subroutine call and return). Each instruction takes a constant amount of time to
execute.
Now that we have specified a model, our next step is to analyse the number of steps
or operations an algorithm requires under the given model.
We shall learn how to count primitive operations using an example algorithm. Consider the
problem of finding the value of the largest element in a list of n numbers. For simplicity,
we assume that the list is implemented as an array. The following is the pseudocode of
a standard algorithm for solving the problem.
Algorithm ArrayMax(A)
// Input: an array A[0 .. n – 1] of n numbers
1. max = A[0]
2. for( i = 1; i < n; i++ )
3.     if( A[i] > max )
4.         max = A[i]
5. return max
1. Line 1, max = A[0], initialises the variable max with the value A[0], that is two
primitive operations, indexing an array and assigning a value to a variable. This
is executed only once at the beginning of the algorithm. Thus, line 1 contributes
2 units to the count.
2. Line 2, for( i = 1; i < n; i++ ), at the beginning of the loop the counter i is initialised
to 1. This corresponds to executing the primitive operation of assigning a value to
a variable. So, i = 1 contributes 1 unit to the count.
Before entering the loop, the condition i < n is verified. This corresponds to
executing the primitive operation of comparing two values. Since the counter
i starts at 1 and gets incremented by 1 at the end of each iteration of the loop,
the comparison i < n is executed n times. Thus, i < n contributes n units to count.
At the end of each iteration of the loop, the counter i gets incremented by 1,
this corresponds to executing two primitive operations, evaluating an
expression, i.e., performing an arithmetic operation, and assigning a value to
a variable. This executes the same number of times the body of the loop
executes, i.e., n – 1 times. Thus, contributing 2(n – 1) units to the count.
3. Line 3, if( A[i] > max ), compares A[i] to max. This corresponds to executing two
primitive operations, indexing an array and comparing two values. This is within
the body of the loop, so it executes n – 1 times. Thus, contributing 2(n – 1) units
to the count.
4. Line 4, max = A[i], only executes when line 3 executes, if it executes it updates
the variable max with A[i]. This corresponds to executing two primitive
operations, indexing an array and assigning a value to a variable.
How much line 4 contributes, depends on how many times it gets executed. In
the best-case (case a), when the first element of the array, i.e., A[0], is the
largest number of the array, line 4 will never get executed. Thus, it will contribute
0 units to the count. In the worst-case (case b), when the array is sorted in
increasing order, this means line 3 will always be true as every ith element of the
array is larger than the element before it. In this case, line 4 will be executed in
each iteration of the loop, i.e., line 4 will be executed n – 1 times. Thus,
contributing 2(n – 1) units to the count.
The following table summarises the primitive operations (Cn) performed by the
algorithm ArrayMax and the number of times they execute:
Line | Statement                  | Count (at least) | Count (at most)
1    | max = A[0]                 | 2                | 2
2    | for( i = 1; i < n; i++ )   | 3n – 1           | 3n – 1
3    | if( A[i] > max )           | 2(n – 1)         | 2(n – 1)
4    | max = A[i]                 | 0                | 2(n – 1)
5    | return max                 | 1                | 1
     | Total count (Cn)           | 5n               | 7n – 2
Thus, in the best case, Cn = 2 + (3n – 1) + 2(n – 1) + 0 + 1 = 5n ∈ O(n); this occurs
when A[0] is the largest element in the array. And in the worst case, Cn = 2 +
(3n – 1) + 2(n – 1) + 2(n – 1) + 1 = 7n – 2 ∈ O(n); this occurs when A is sorted in
increasing order, so that the variable max is reassigned at each iteration of the loop.
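The best-case and worst-case counts above can be checked empirically. The following is a minimal Python sketch (the counter bookkeeping and helper name are ours, purely illustrative) that instruments ArrayMax with the same unit charges used in the analysis:

def array_max_op_count(A):
    """Return (largest element, primitive-operation count) for ArrayMax on array A."""
    n = len(A)
    ops = 2                      # line 1: index A[0] and assign to max
    maximum = A[0]
    i = 1
    ops += 1                     # line 2: initialise i
    while True:
        ops += 1                 # line 2: comparison i < n
        if not i < n:
            break
        ops += 2                 # line 3: index A[i] and compare with max
        if A[i] > maximum:
            ops += 2             # line 4: index A[i] and assign to max
            maximum = A[i]
        ops += 2                 # line 2: evaluate i + 1 and assign to i
        i += 1
    ops += 1                     # line 5: return max
    return maximum, ops

n = 6
print(array_max_op_count([n] + list(range(n - 1))))   # best case:  5n     = 30 operations
print(array_max_op_count(list(range(n))))             # worst case: 7n - 2 = 40 operations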
From the above example, we can summarise the counting primitive operations
approach as:
1. Identify the primitive operations performed by each statement of the algorithm.
2. Determine, as a function of the input size n, how many times each statement (and hence each of its primitive operations) is executed.
3. Sum all the contributions to obtain the total count, Cn.
This approach can be excessively difficult, especially when our algorithms have many
nested loops, and it is usually unnecessary. The thing to do is to identify the most
important operation of the algorithm, called the basic (or fundamental) operation,
and then compute the number of times the basic operation is executed. For ArrayMax,
the basic operation is the comparison A[i] > max on line 3: it lies in the innermost part
of the loop and is executed on every iteration.
Now that we have identified line 3 as the basic operation, next we will use
summations to express the number of times, Cn, it is executed as a function of the input
size, n. ArrayMax executes A[i] > max (line 3) one time in each iteration of the loop,
and the loop iterates over i within the bounds 1 to n – 1, i.e., for i = 1, 2, …, n – 1.
Therefore, we get the following summation from the basic operation:
Cn = ∑_{i=1}^{n–1} 1 = (n – 1) – 1 + 1 = n – 1 ∈ O(n)
From the above, we can summarise the identifying-the-basic-operation approach as:
1. Decide on a parameter indicating the input size, n.
2. Check whether the running time depends only on the size of the input or also on the nature (details) of the input; if it does, the worst-case, best-case and average-case efficiencies must be investigated separately.
3. Identify the algorithm's basic operation (it is typically located in the innermost loop) and set up a summation expressing the number of times, Cn, it is executed.
4. Using standard summation formulas and rules, find a closed-form expression for Cn and establish its order of growth.
So far, the framework we have considered is for analysing the time efficiency of
non-recursive algorithms. To analyse the time efficiency of recursive algorithms, we
replace identifying the basic operation in step 3 with modelling the recursion as a
recurrence relation. We then expand the recurrence and rewrite it as a summation.
The worst-case running time of an algorithm gives us an upper bound on the running
time for any input, that is, the longest time an algorithm can take to execute an input
of size n. For example, we say ArrayMax executes Cn = 7n – 2 primitive operations in
the worst case, meaning that the maximum number of primitive operations executed
by the algorithm, taken over all inputs of size n, is 7n – 2. Or we can similarly say that
its basic operation executes Cn = n – 1 times in the worst case, meaning that the
maximum number of times the basic operation is executed by the algorithm, taken
over all inputs of size n, is n – 1.
To perform a worst-case analysis, we need to identify the input of size n for which the
algorithm takes the longest to execute. This type of analysis is undoubtedly important. It
can lead to better algorithms because, for any instance of input size n, it guarantees
that the running time of the algorithm will not take longer than its running time in the
worst case. This means that making worst-case efficiency the standard metric for
efficiency translates to requiring an algorithm to be efficient on every input.
We shall focus mainly on the worst-case analysis of algorithms, but we shall sometimes
mention other cases and their associated complexities.
The best-case running time of an algorithm gives us a lower bound on the running time
for any input, that is, the fastest running time for any input of size n. Thus, we can
analyse best-case efficiency by identifying the kind of inputs that would yield the
smallest count, Cn. This does not necessarily imply the smallest input but rather the
input of size n on which the algorithm executes the fastest. For example, when we use
the counting primitive operations approach on ArrayMax, in the best case, where the
first element of the array is the largest element it contains, the algorithm executes
Cn = 5n primitive operations, meaning that the minimum number of primitive
operations executed by the algorithm, taken over all inputs of size n, is 5n. We can
similarly say that ArrayMax executes its basic operation Cn = n – 1 times in the best case,
meaning that the minimum number of times the basic operation is executed by the
algorithm, taken over all inputs of size n, is n – 1. Consider another example: in a
sequential search where the search value is the first element in the array, the basic
operation would be executed exactly one time.
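To make the contrast concrete, here is a minimal Python sketch of sequential search (our own illustrative implementation), with the comparison of a list element against the search value taken as the basic operation:

def sequential_search(A, key):
    """Return (index of key or -1, number of basic operations performed)."""
    comparisons = 0
    for i in range(len(A)):
        comparisons += 1             # basic operation: compare A[i] with key
        if A[i] == key:
            return i, comparisons
    return -1, comparisons

A = [7, 3, 9, 1, 5]
print(sequential_search(A, 7))       # best case: key is first, 1 comparison
print(sequential_search(A, 5))       # worst case: key is last, n comparisons
print(sequential_search(A, 8))       # worst case: key is absent, n comparisons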
The best-case analysis is not as important as the worst-case analysis. However, it can
still be useful. If an algorithm has an inefficient best-case, we can immediately discard
it without further analysis.
ASYMPTOTIC NOTATIONS
We have seen that when we focus on just the leading terms, we can focus on the
important part of the algorithm's running time: its order of growth. When we drop the
less significant terms and the constant coefficients, we are using asymptotic notation. The
asymptotic behaviour of a function f(n) refers to the growth of f(n) as n gets large.
We shall explain three forms of asymptotic notation to help us compare and rank
orders of growth: O (big-O), Ω (big-omega) and Θ (big-theta).
Formally, a function f(n) is in O(g(n)) if there exist constants c > 0 and n0 ≥ 0 such
that f(n) ≤ c ⋅ g(n) for every integer n ≥ n0.
As an example, let's prove the running time from our counting primitive operations
example: 7n – 2 ∈ O(n).
By the big-O definition, we need to find c > 0 and n0 ≥ 0 such that 7n – 2 ≤ c ⋅ n for every
integer n ≥ n0. Since 7n – 2 ≤ 7n for all n ≥ 1, we can choose c = 7 and n0 = 1.
Note that the definition gives us a lot of freedom in choosing specific values for the
constants c and n0. For example, by choosing c = 7 and n0 = 2, we can also satisfy the
definition, since 7n – 2 ≤ 7n certainly holds for all n ≥ 2.
The big-O notation gives an upper bound on the running time and is therefore the
notation usually used to state the worst-case complexity of an algorithm. When we use
big-O notation, we are essentially saying that this is the slowest the algorithm can run for
large enough inputs. For example, for an algorithm with an order of growth in O(n), we
can say that its running time increases at most proportionally to the input size n. This
means that, for sufficiently large n, the running time can never be more than a constant
multiple of n.
Formally, a function f(n) is in Ω(g(n)) if there exist constants c > 0 and n0 ≥ 0 such that
f(n) ≥ c ⋅ g(n) for every integer n ≥ n0. For example, for an algorithm with an order of
growth in Ω(n), we can say that its running time increases at least proportionally to the
size of the input.
As an example, let's prove n³ ∈ Ω(n²). We need to find c > 0 and n0 ≥ 0 such that
n³ ≥ c ⋅ n² for every integer n ≥ n0. Let's try c = 1; then we need n0 ≥ 0 such that
n³ ≥ n² for all n ≥ n0. Since n³ ≥ n² holds for all n ≥ 0, we can choose c = 1 and n0 = 0.
The big-omega notation gives a lower bound on the running time and is therefore the
notation usually associated with the best-case complexity of an algorithm. The big-omega
notation essentially denotes that this is the fastest an algorithm can run for arbitrarily
large inputs. For example, for an algorithm with an order of growth in Ω(n²), we can say
that its running time increases at least proportionally to the square of the input size n.
This means that, for sufficiently large n, the running time can never be less than a
constant multiple of n².
Formally, a function f(n) is in Θ(g(n)) if there exist constants c1 > 0, c2 > 0 and n0 ≥ 0
such that c2 ⋅ g(n) ≤ f(n) ≤ c1 ⋅ g(n) for every integer n ≥ n0.
For example, let's show that ½ n(n – 1) ∈ Θ(n²). We need to find c1 > 0, c2 > 0 and n0 ≥ 0
such that c2 ⋅ n² ≤ ½ n(n – 1) ≤ c1 ⋅ n² for every integer n ≥ n0.
First, we prove the upper bound, i.e., the right side of the inequality. Since we have
fractions, we will try fraction values for c1. We try c1 = ½; now we find n0 ≥ 0 such that:
½ n(n – 1) = ½ n² – ½ n ≤ ½ n² for all n ≥ n0.
An obvious choice is n0 = 0. Thus: ½ n² – ½ n ≤ ½ n² for all n ≥ 0.
Second, we prove the lower bound, i.e., the left side of the inequality. We will also try
fraction values for c2. We need c2 < ½ for the definition to hold, so we try c2 = ¼. Now,
we find n0 ≥ 0 such that:
¼ n² ≤ ½ n² – ½ n for every integer n ≥ n0.
Rearranging, ½ n² – ½ n – ¼ n² = ¼ n² – ½ n = ¼ n(n – 2) ≥ 0, which holds for all n ≥ 2.
Therefore, by the definition, we can select c2 = ¼, c1 = ½ and n0 = 2.
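A quick numerical sanity check of these bounds (a throwaway Python sketch, not part of the formal proof):

f = lambda n: 0.5 * n * (n - 1)

# Check c2*n^2 <= f(n) <= c1*n^2 with c2 = 1/4, c1 = 1/2 and n0 = 2.
assert all(0.25 * n ** 2 <= f(n) <= 0.5 * n ** 2 for n in range(2, 10000))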
For example, let's consider the running time of a sequential search algorithm. In the
best-case scenario, we find the search value at the first location, and the running time
is 1. Whereas, in the worst-case scenario, the search value may either be at the end
of the list or not in the list at all; the running time, in this case, will be the same as
the number of elements in the list. It would be true to say that the worst-case running
time of a sequential search is Θ(n). However, it is incorrect to say that sequential search
runs in Θ(n) all the time, because in the best-case scenario it runs in Θ(1). The running
time of sequential search is never worse than Θ(n), but it is sometimes better.
Therefore, it would be more convenient and accurate to use big-O in such situations.
So, if we say the running time of sequential search is always 𝑂(𝑛) what we are saying
is: “its running time grows at most this much, but it could grow more slowly.”
We can make a stronger statement about the sequential search’s worst-case running
time, it is Θ(𝑛). However, for a blanket statement to cover all cases, the strongest
statement we can make is that sequential search runs in 𝑂(𝑛).
Because big-O bounds from above and big-theta bounds from both above and below,
a running time that is Θ(g(n)) in the worst-case scenario is also O(g(n)). But, as we
have seen, the opposite is not always true. Similarly, a running time that is Θ(g(n)) also
implies Ω(g(n)), because big-omega bounds from below.
We can make correct but imprecise statements using big-O or big-omega. We can
correctly but imprecisely say that the best-case running time of sequential search is
O(n), because we know that it takes, at most, linear time. We can make another
correct but imprecise statement: that the worst-case running time of sequential
search is Ω(1), because we know that it takes at least constant time.
Let f(n) and g(n) be any non-negative functions defined on the set of all real numbers.
We say f(n) is in O(g(n)) for all functions f(n) that have a lower or the same order of
growth as g(n), to within a constant multiple, as n → ∞. Think of it as f(n) ≤ g(n),
up to a constant factor.
For example:
• n ∈ O(n²)
• ½ n(n – 1) ∈ O(n²)
• 100n + 5 ∈ O(n²)
Note:
• Asymptotic notations can also be used with multiple variables and with other
expressions on the right side of the equal sign. For example, the notation
f(n, m) = n³ + m² + O(n + m) represents the statement: there exist non-negative
constants c and n0 such that f(n, m) ≤ n³ + m² + c ⋅ (n + m) for all m, n > n0. We
find complexity measures such as these in algorithms used on graphs.
If-Then-Else
if (condition) then
block 1 (sequence of statements)
else
block 2 (sequence of statements)
end if
Here, either block 1 will execute, or block 2 will execute.
Therefore, for worst-case analysis, the worst-case time is the slower of the two
branches:
worst-case time = max( time(block 1), time(block 2) )
Conversely, for best-case analysis, the best-case time is the faster of the two branches:
best-case time = min( time(block 1), time(block 2) )
For example, if block 1 executes in constant time and block 2 executes in linear time,
the if-then-else statement would have linear complexity in the worst case and
constant complexity in the best case.
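A minimal Python sketch of this situation (the function and its two branches are our own illustration): the condition chooses between a constant-time branch and a linear-time branch, so the statement is O(n) in the worst case and O(1) in the best case.

def branchy(A, use_fast_path: bool):
    """If-then-else with one constant-time branch and one linear-time branch."""
    if use_fast_path:
        return A[0]                  # block 1: constant time, O(1)
    else:
        return sum(A)                # block 2: linear time, O(n)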
Simple Loops
for i in 1 … n loop
sequence of statements
end loop
The loop executes n times, so the sequence of statements also executes n times. If we
assume the statements have constant running time, the total time for the loop is n × 1,
which overall is linear, O(n).
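For instance (an illustrative Python sketch of a simple loop whose body runs in constant time):

def simple_loop(n: int) -> int:
    total = 0
    for i in range(1, n + 1):        # the loop executes n times
        total += i                   # constant-time body
    return total                     # overall linear, O(n)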
Nested Loops
for i in 1 … n loop
for j in 1 … m loop
sequence of statements
end loop
end loop
The outer loop executes n times. Each time the outer loop executes, the inner loop
executes m times. As a result, the statements in the inner loop execute a total of n × m
times. Thus, the complexity is O(n × m) (or Ω(n × m) or Θ(n × m), depending on which
notation is being used). This can be extended to loop nesting consisting of k loops
(more than two loops): the total number of times the statements in the innermost loop
(loop k) execute is iterations(loop 1) × iterations(loop 2) × … × iterations(loop k).
In the common special case where the stopping condition of the inner loop is j < n
instead of j < m, i.e., the inner loop also executes n times, the total complexity for the
two loops is quadratic, O(n²).
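An illustrative Python sketch of the nested-loop rule, counting how many times the innermost statement runs:

def nested_loops(n: int, m: int) -> int:
    count = 0
    for i in range(n):               # outer loop: n iterations
        for j in range(m):           # inner loop: m iterations per outer iteration
            count += 1               # innermost statement
    return count                     # always n * m

assert nested_loops(4, 7) == 28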
Function/Procedure calls
When the body of a loop calls a function or procedure, the cost of the call must be
included. Suppose g is a procedure whose running time is linear in its argument. Then,
for the loop:
for j in 1 … n loop
g(j)
end loop
the statement g(j) gives quadratic (n × n) complexity: the loop executes n times and
each procedure call g(n) also has linear complexity relative to the parameter n.
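A small Python sketch of this rule (the linear-time helper g is our own example):

def g(k: int) -> int:
    """A linear-time procedure: performs k constant-time steps."""
    steps = 0
    for _ in range(k):
        steps += 1
    return steps

def loop_with_calls(n: int) -> int:
    total_steps = 0
    for j in range(1, n + 1):        # the loop executes n times
        total_steps += g(j)          # call j does j steps
    return total_steps               # 1 + 2 + ... + n = n(n + 1)/2, i.e., quadratic

assert loop_with_calls(100) == 100 * 101 // 2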
Consecutive Statements
sequence of statements
for i in 1 … n loop
sequence of statements
end loop
sequence of statements
for i in 1 … n loop
for j in 1 … 2n loop
sequence of statements
end loop
end loop
The sequence of statements outside the loops runs in constant time. The first loop runs
in linear time, while the pair of nested loops runs in quadratic time (n × 2n iterations).
When blocks of code execute one after another, we add their running times, and the
total is dominated by the largest term; here the maximum is quadratic, so the overall
complexity is O(n²).
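An illustrative Python sketch of consecutive blocks, where the quadratic block dominates the total:

def consecutive_blocks(n: int) -> int:
    count = 0
    count += 1                       # statements outside the loops: O(1)
    for i in range(n):               # first loop: O(n)
        count += 1
    for i in range(n):               # nested loops: n * 2n iterations, O(n^2)
        for j in range(2 * n):
            count += 1
    return count                     # 1 + n + 2n^2, dominated by the quadratic term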
UNIT GOALS
You need to revise and reacquaint yourselves with the following mathematical
concepts.
• Summations
▪ See appendix: summations for some common ones that you will need a
lot in algorithm analysis.
• Logarithms and Exponents
▪ Logarithms are the inverse of exponents.
o logₐ b = c ⟺ aᶜ = b
▪ Properties of Logarithms:
o logₐ (b ⋅ c) = logₐ b + logₐ c
o logₐ (b / c) = logₐ b – logₐ c
EXERCISES
1. For each of the following algorithms, indicate (i) a natural size metric for its
inputs, (ii) its basic operation, and (iii) whether the basic operation count can
be different for inputs of the same size:
a. computing the sum of n numbers;
b. computing n!;
c. finding the smallest element in a list of n numbers;
d. searching for a particular element in a list of n names.
2. For each of the following functions, indicate how much the function’s value will
change if its argument (n) is increased fourfold (increased four times).
a. log₂ n
b. √n
c. n
d. n²
e. n³
f. 2ⁿ
3. For each of the following pairs of functions, indicate whether the first function
of each of the following pairs has a lower, same, or higher order of growth (to
within a constant multiple) than the second function.
a. n(n + 1) and 2000n²
b. 100n² and 0.013n³
c. log₂(log₂ n) and log₂ n
d. 2ⁿ⁻¹ and 2ⁿ
e. (n – 1)! and n!
f. (log₂ n)² and log₂ (n²)
4. Express the function n³/1000 – 100n² + 100n + 3 in terms of O-notation.
5. Select the 𝑂-notation for the following pseudocode fragments
[the choices are O(1), O(log n), O(n), O(n log n), O(n²), O(n³), O(2ⁿ), O(n!)]:
a. for i = 0 to n – 1
z=x+y
b. for i = 0 to n – 1
for j = 0 to i
for k = 0 to j
sum = A[i] + B[j] + C[k]
c. for i = 1; i ≤ n; i *= 2
for j = 0; j < n; j++
Textbooks
[1] A. Levitin, Introduction to the Design and Analysis of Algorithms, Boston: Pearson,
2012.
Websites