Uploaded by Satvik Nager

Combined & Deduplicated MCQ Practice: Analysis and Design of Algorithms

Instructions: Choose the best option for each question. All questions within each
topic are unique.
Asymptotic Notations & Basic Complexity Analysis
* Question: Which of the following notations describes the upper bound of an
algorithm's running time, ignoring constant factors and lower-order terms?
A) \Omega (Big-Omega)
B) \Theta (Big-Theta)
C) O (Big-O)
D) o (Little-o)
* Question: An algorithm's time complexity is given by T(n) = 5n^3 + 2n \log n +
100. What is its Big-O notation?
A) O(n \log n)
B) O(n^2)
C) O(n^3)
D) O(n^4)
* Question: Which asymptotic notation provides both an upper bound and a lower
bound for the growth rate of a function?
A) Big-O
B) Big-Omega
C) Big-Theta
D) Little-o
* Question: Which statement about asymptotic growth is FALSE?
A) n \in O(n^2)
B) n^2 \in \Omega(n)
C) 2^n \in O(n^k) for any constant k
D) \log n \in O(n)
* Question: If an algorithm has a time complexity of O(N \log N), what does this
imply about its efficiency for large inputs?
A) It's generally very inefficient.
B) Its running time grows linearly with input size.
C) Its running time is proportional to N multiplied by the logarithm of N, which
is efficient.
D) It's slower than O(N^2) but faster than O(N).
* Question: What is the time complexity of finding the maximum element in an
unsorted array of N elements?
A) O(1)
B) O(\log N)
C) O(N)
D) O(N^2)
* Question: The operation of deleting an element from the end of a dynamic array
(like ArrayList or std::vector) typically takes what time complexity?
A) O(1)
B) O(\log N)
C) O(N)
D) O(N^2)
* Question: Which of the following functions grows asymptotically fastest?
A) N^{10}
B) 2^N
C) N!
D) N \log N
* Question: The recurrence relation T(n) = T(n/2) + O(1) is characteristic of
algorithms like:
A) Merge Sort
B) Quick Sort (average case)
C) Binary Search
D) Linear Search
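For reference, the T(n) = T(n/2) + O(1) pattern shows up whenever each step discards half of the remaining input with constant work. A minimal Python sketch (the function name is illustrative, not from the source):

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent.

    Each iteration halves the search range with O(1) work,
    matching the recurrence T(n) = T(n/2) + O(1) = O(log n).
    """
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1          # discard the left half
        else:
            hi = mid - 1          # discard the right half
    return -1
```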
* Question: If f(n) = n^2 + 7n + 10 and g(n) = n^2, then f(n) is:
A) o(g(n))
B) \omega(g(n))
C) \Theta(g(n))
D) O(n)
* Question: What does o(g(n)) (Little-o) notation signify?
A) f(n) grows at the same rate as g(n).
B) f(n) grows asymptotically strictly slower than g(n).
C) f(n) grows asymptotically strictly faster than g(n).
D) f(n) is an upper bound for g(n).
* Question: The space complexity of an algorithm is a measure of:
A) The number of lines of code.
B) The amount of auxiliary memory used by the algorithm during execution.
C) The time taken to execute the algorithm.
D) The size of the input data.
* Question: Which of the following is an example of an algorithm with O(1) time
complexity?
A) Finding an element in a sorted array.
B) Inserting an element at the beginning of a singly linked list.
C) Traversing a binary tree.
D) Sorting a small array.
* Question: An algorithm has a time complexity of T(n) = \sum_{i=1}^{n} i. What is
its Big-O complexity?
A) O(n)
B) O(n \log n)
C) O(n^2)
D) O(2^n)
* Question: When comparing N^2 and N \log N for large values of N, which function
grows faster?
A) N^2
B) N \log N
C) They grow at the same rate.
D) It depends on the base of the logarithm.
* Question: The best-case time complexity of an algorithm refers to:
A) The minimum time required for any input of size N.
B) The maximum time required for any input of size N.
C) The average time required for inputs of size N.
D) The time required for a specific optimal input.
* Question: What is the time complexity of performing a push operation on a stack
implemented using a dynamic array (assuming no resizing is needed for that specific
push)?
A) O(1)
B) O(\log N)
C) O(N)
D) O(N^2)
* Question: Which of the following is typically the most efficient time complexity
for a sorting algorithm?
A) O(N^2)
B) O(N \log N)
C) O(N)
D) O(\log N)
* Question: The recurrence relation T(n) = 2T(n-1) has a solution of:
A) O(n)
B) O(n^2)
C) O(2^n)
D) O(n!)
* Question: What is the time complexity of searching for an element in a balanced
Binary Search Tree (BST) of N nodes?
A) O(1)
B) O(\log N)
C) O(N)
D) O(N \log N)
* Question: Which of the following notations represents a lower bound on the
running time of an algorithm?
A) O (Big-O)
B) \Omega (Big-Omega)
C) \Theta (Big-Theta)
D) o (Little-o)
* Question: An algorithm's running time is T(n) = 1000n + 5000. Its Big-O
complexity is:
A) O(1)
B) O(\log n)
C) O(n)
D) O(n^2)
* Question: If f(n) = \Theta(g(n)), then which of the following is true?
A) f(n) is always less than g(n).
B) f(n) is always greater than g(n).
C) f(n) and g(n) grow at the same rate asymptotically.
D) f(n) is an upper bound for g(n) but not a lower bound.
* Question: The worst-case time complexity of looking up an element by key in a
hash table (using separate chaining) is:
A) O(1)
B) O(\log N)
C) O(N)
D) O(N^2)
* Question: Which of the following is an example of a polynomial time complexity?
A) O(N^2)
B) O(2^N)
C) O(N!)
D) O(N^N)
* Question: The concept of "amortized analysis" considers:
A) The worst-case cost of a single operation.
B) The average cost of a sequence of operations.
C) The best-case cost of an operation.
D) The memory usage of an algorithm.
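Amortized analysis is easiest to see in a doubling dynamic array: an occasional push triggers an O(N) resize, yet the total copy work over N pushes is O(N), so each push costs O(1) amortized. A toy sketch (class and attribute names are hypothetical):

```python
class DynamicArray:
    """Push-only dynamic array that doubles its capacity when full.

    The `copies` counter tracks element moves caused by resizing; over
    n pushes it stays below n, so pushes are O(1) amortized.
    """
    def __init__(self):
        self._cap = 1
        self._n = 0
        self._data = [None] * self._cap
        self.copies = 0  # total element copies due to resizes

    def push(self, x):
        if self._n == self._cap:           # occasional O(N) resize...
            self._cap *= 2
            new = [None] * self._cap
            for i in range(self._n):
                new[i] = self._data[i]
                self.copies += 1
            self._data = new
        self._data[self._n] = x            # ...but the push itself is O(1)
        self._n += 1
```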
* Question: If f(n) = \omega(g(n)), it means that:
A) f(n) grows asymptotically strictly slower than g(n).
B) f(n) grows asymptotically strictly faster than g(n).
C) f(n) grows at the same rate as g(n).
D) f(n) is a lower bound for g(n).
* Question: What is the time complexity of inserting an element at the end of a
fixed-size array?
A) O(1)
B) O(\log N)
C) O(N)
D) O(N^2)
* Question: The time complexity of an algorithm is O(\sqrt{N}). This is
considered:
A) Faster than O(\log N).
B) Faster than O(N) but slower than O(\log N).
C) Slower than O(N \log N).
D) Faster than O(1).
* Question: Which of the following is a common characteristic of an algorithm with
O(N!) complexity?
A) It is highly scalable for large inputs.
B) It is typically used for problems involving permutations or exhaustive search
on small inputs.
C) It is faster than O(2^N).
D) It is an example of a polynomial-time algorithm.
* Question: If f(n) = 3n \log n + 2n and g(n) = n \log n, then f(n) \in O(g(n))
is:
A) True
B) False
C) Cannot be determined
D) True only for small n
* Question: The space complexity of a recursive algorithm is primarily influenced
by:
A) The size of the input data.
B) The maximum depth of the recursion stack.
C) The number of loops in the algorithm.
D) The number of variables declared globally.
* Question: What is the time complexity of merging two sorted arrays of size M and
N into a single sorted array?
A) O(\min(M, N))
B) O(\max(M, N))
C) O(M+N)
D) O(M \cdot N)
* Question: Which of the following best describes the growth rate of N^{\log N}?
A) Polynomial
B) Exponential
C) Sub-exponential (faster than polynomial, slower than exponential)
D) Logarithmic
* Question: If T(n) = T(n/2) + n, using the Master Theorem, its complexity is:
A) O(n)
B) O(n \log n)
C) O(n^2)
D) O(\log n)
* Question: The primary goal of asymptotic analysis is to:
A) Determine the exact running time of an algorithm.
B) Compare the efficiency of algorithms for small input sizes.
C) Describe the behavior of an algorithm's running time or space usage as the
input size grows very large.
D) Find the constant factors in an algorithm's running time.
* Question: Which sorting algorithm has a worst-case time complexity of O(N^2) but
is known for its simplicity and good performance on nearly sorted data?
A) Merge Sort
B) Quick Sort
C) Insertion Sort
D) Heap Sort
* Question: What is the time complexity of deleting an element from a min-heap of
size N?
A) O(1)
B) O(\log N)
C) O(N)
D) O(N \log N)
* Question: If f(n) = 100n and g(n) = 0.01n^2, then for sufficiently large n:
A) f(n) grows faster than g(n).
B) g(n) grows faster than f(n).
C) They grow at the same rate.
D) f(n) is \Theta(g(n)).
* Question: The recurrence relation T(n) = T(n-1) + O(n) typically leads to a time
complexity of:
A) O(n)
B) O(n \log n)
C) O(n^2)
D) O(2^n)
* Question: What is the time complexity of searching for an element in a sorted
array using linear search?
A) O(1)
B) O(\log N)
C) O(N)
D) O(N^2)
* Question: Which of the following is true about the growth of logarithmic
functions?
A) \log_2 N grows faster than \log_{10} N.
B) \log N grows slower than any polynomial function N^k for k > 0.
C) \log N grows faster than N.
D) \log N grows at the same rate as N.
* Question: The space complexity of an algorithm that sorts an array in-place is
typically:
A) O(N)
B) O(\log N)
C) O(1)
D) O(N^2)
* Question: What is the time complexity of initializing all elements of an
N \times M matrix to zero?
A) O(\min(N, M))
B) O(\max(N, M))
C) O(N+M)
D) O(N \cdot M)
* Question: If an algorithm's time complexity is O(1), it means its running time
is:
A) Very fast, but depends on input size.
B) Independent of the input size.
C) Proportional to the input size.
D) Logarithmic.
* Question: Which of the following best represents the relationship f(n) = 2n^2
and g(n) = n^2 + 5n?
A) f(n) \in o(g(n))
B) f(n) \in \omega(g(n))
C) f(n) \in \Theta(g(n))
D) f(n) \in O(n)
* Question: The time complexity of searching for an element in a linked list of N
elements is:
A) O(1)
B) O(\log N)
C) O(N)
D) O(N^2)
* Question: What is the time complexity of building a balanced binary search tree
from a sorted array of N elements?
A) O(N)
B) O(N \log N)
C) O(N^2)
D) O(\log N)
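Building a balanced BST from a sorted array is O(N) because each element is placed exactly once: the middle element becomes the root and the two halves are built recursively. A minimal sketch using index bounds to avoid O(N log N) slicing (dict-based nodes and the helper names are illustrative):

```python
def build_balanced(vals, lo=0, hi=None):
    """Balanced BST from sorted vals[lo:hi]: T(n) = 2T(n/2) + O(1) = O(n)."""
    if hi is None:
        hi = len(vals)
    if lo >= hi:
        return None
    mid = (lo + hi) // 2                  # middle element becomes the root
    return {
        "val": vals[mid],
        "left": build_balanced(vals, lo, mid),
        "right": build_balanced(vals, mid + 1, hi),
    }

def height(node):
    """Height in nodes; a balanced tree of n nodes has height O(log n)."""
    if node is None:
        return 0
    return 1 + max(height(node["left"]), height(node["right"]))
```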
* Question: The growth rate of N^2 is:
A) Slower than N \log N.
B) Faster than N^3.
C) Slower than 2^N.
D) The same as N.
* Question: If an algorithm takes T(n) time and T(n) = \Omega(n \log n), what does
this imply?
A) The algorithm is guaranteed to run in n \log n time.
B) The algorithm's running time is at least a constant times n \log n for
sufficiently large n.
C) The algorithm's running time is at most a constant times n \log n.
D) The algorithm has an exact running time of n \log n.
Solutions: Asymptotic Notations & Basic Complexity Analysis
* C) O (Big-O)
* C) O(n^3)
* C) Big-Theta
* C) 2^n \in O(n^k) for any constant k
* C) Its running time is proportional to N multiplied by the logarithm of N, which
is efficient.
* C) O(N)
* A) O(1)
* C) N!
* C) Binary Search
* C) \Theta(g(n))
* B) f(n) grows asymptotically strictly slower than g(n).
* B) The amount of auxiliary memory used by the algorithm during execution.
* B) Inserting an element at the beginning of a singly linked list.
* C) O(n^2)
* A) N^2
* A) The minimum time required for any input of size N.
* A) O(1)
* B) O(N \log N)
* C) O(2^n)
* B) O(\log N)
* B) \Omega (Big-Omega)
* C) O(n)
* C) f(n) and g(n) grow at the same rate asymptotically.
* C) O(N)
* A) O(N^2)
* B) The average cost of a sequence of operations.
* B) f(n) grows asymptotically strictly faster than g(n).
* A) O(1)
* B) Faster than O(N) but slower than O(\log N).
* B) It is typically used for problems involving permutations or exhaustive search
on small inputs.
* A) True
* B) The maximum depth of the recursion stack.
* C) O(M+N)
* C) Sub-exponential (faster than polynomial, slower than exponential)
* A) O(n)
* C) Describe the behavior of an algorithm's running time or space usage as the
input size grows very large.
* C) Insertion Sort
* B) O(\log N)
* B) g(n) grows faster than f(n).
* C) O(n^2)
* C) O(N)
* B) \log N grows slower than any polynomial function N^k for k > 0.
* C) O(1)
* D) O(N \cdot M)
* B) Independent of the input size.
* C) f(n) \in \Theta(g(n))
* C) O(N)
* A) O(N)
* C) Slower than 2^N.
* B) The algorithm's running time is at least a constant times n \log n for
sufficiently large n.
Divide and Conquer Algorithms
* Question: The Divide and Conquer paradigm fundamentally involves which three
steps?
A) Initialization, Iteration, Termination
B) Divide, Conquer, Combine
C) Analysis, Design, Implementation
D) Preprocessing, Processing, Postprocessing
* Question: Which of the following sorting algorithms is a prime example of the
Divide and Conquer approach?
A) Bubble Sort
B) Selection Sort
C) Quick Sort
D) Counting Sort
* Question: What is the worst-case time complexity of Merge Sort?
A) O(N)
B) O(N \log N)
C) O(N^2)
D) O(\log N)
* Question: Which of the following is true about Quick Sort's stability?
A) It is always stable.
B) It is never stable.
C) It can be implemented to be stable, but it's not inherently stable.
D) Stability is not a relevant concept for Quick Sort.
* Question: The recurrence relation T(n) = 2T(n/2) + O(n) describes the time
complexity of:
A) Binary Search
B) Quick Sort (worst case)
C) Merge Sort
D) Linear Search
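The divide/conquer/combine structure behind the T(n) = 2T(n/2) + O(n) recurrence can be sketched in a few lines of Python (an illustrative top-down merge sort, not from the source):

```python
def merge_sort(arr):
    """Merge sort: divide (split in half), conquer (recurse), combine
    (merge two sorted halves). T(n) = 2T(n/2) + O(n) = O(n log n),
    with O(n) auxiliary space for the merge."""
    if len(arr) <= 1:
        return arr[:]                     # base case terminates the recursion
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0               # combine: linear-time merge
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```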
* Question: The Master Theorem is a powerful tool used to directly solve
recurrence relations of the form:
A) T(n) = T(n-1) + T(n-2)
B) T(n) = aT(n/b) + f(n)
C) T(n) = T(n-1) + c
D) T(n) = T(n/c) + T(n/d)
* Question: Which of these problems is NOT typically solved using a Divide and
Conquer strategy?
A) Finding the maximum and minimum elements in an array.
B) Strassen's Matrix Multiplication.
C) Kruskal's Algorithm for MST.
D) Closest Pair of Points problem.
* Question: What is the auxiliary space complexity of Merge Sort?
A) O(1)
B) O(\log N)
C) O(N)
D) O(N \log N)
* Question: In Quick Sort, which pivot selection strategy leads to the best-case
time complexity?
A) Always picking the first element.
B) Always picking the last element.
C) Picking the median element.
D) Randomly picking an element.
* Question: An efficient Divide and Conquer algorithm to find the maximum and
minimum elements in an array makes approximately how many comparisons?
A) N
B) 1.5N
C) 2N
D) N \log N
* Question: Binary Search is a Divide and Conquer algorithm because it:
A) Divides the array into two equal halves.
B) Divides the problem into smaller subproblems and recursively solves one of
them.
C) Sorts the array before searching.
D) Combines solutions from multiple subproblems.
* Question: A major disadvantage of Quick Sort is its:
A) High constant factors in average case.
B) O(N^2) worst-case time complexity.
C) Inability to be implemented in-place.
D) Requirement for additional data structures.
* Question: In the Divide and Conquer paradigm, the "Conquer" step typically
involves:
A) Breaking the problem down.
B) Solving the smallest subproblems directly (base cases) or recursively.
C) Merging the results.
D) Analyzing the problem's constraints.
* Question: The recurrence T(n) = T(n-1) + T(n-1) is NOT suitable for Master
Theorem. What is its approximate solution?
A) O(n)
B) O(n \log n)
C) O(2^n)
D) O(n^2)
* Question: Which optimization in the Union-Find data structure helps to keep the
tree representing a set flat?
A) Union by size.
B) Union by rank.
C) Path compression.
D) Both B and C.
* Question: The "Combine" step is crucial in Divide and Conquer algorithms because
it:
A) Reduces the original problem size.
B) Solves the base cases.
C) Integrates the solutions of subproblems into a solution for the original
problem.
D) Defines the stopping condition for recursion.
* Question: When using the Master Theorem, if f(n) is asymptotically larger than
n^{\log_b a} (Case 3), what dominates the running time?
A) The cost of the base cases.
B) The cost of the recursive calls.
C) The cost of the division and combination step.
D) The total number of nodes in the recursion tree.
* Question: Which sorting algorithm is known for its guaranteed O(N \log N) worst-
case time complexity and is an in-place algorithm?
A) Merge Sort
B) Quick Sort
C) Heap Sort
D) Insertion Sort
* Question: Quickselect is a Divide and Conquer algorithm used for:
A) Sorting an array.
B) Finding the median of an array.
C) Finding the k-th smallest element in an unsorted array.
D) Finding all pairs of elements with a specific sum.
* Question: Randomizing the pivot choice in Quick Sort helps to:
A) Guarantee O(N \log N) worst-case performance.
B) Make the algorithm stable.
C) Ensure that the worst-case scenario occurs with very low probability.
D) Reduce the auxiliary space complexity.
* Question: The recurrence relation for the Divide and Conquer algorithm to find
Max and Min in an array is T(n) = 2T(n/2) + c. This solves to:
A) O(n)
B) O(n \log n)
C) O(n^2)
D) O(\log n)
* Question: A characteristic of recursive Divide and Conquer algorithms is that
they always require:
A) A sorted input.
B) A base case to terminate recursion.
C) O(1) auxiliary space.
D) A fixed number of subproblems.
* Question: What is the primary reason for the efficiency of algorithms like
Binary Search and Merge Sort?
A) They use iteration instead of recursion.
B) They avoid using extra memory.
C) They repeatedly divide the problem into smaller, manageable subproblems.
D) They are stable sorting algorithms.
* Question: In the partitioning step of Quick Sort, the pivot element is placed
such that:
A) All elements to its left are smaller, and all to its right are larger.
B) It is always the smallest element in the sub-array.
C) It is always the largest element in the sub-array.
D) It divides the array into two exactly equal halves.
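One common way to realize this partitioning step is the Lomuto scheme, sketched here in Python (function names are illustrative; Quick Sort is often written with other schemes such as Hoare's):

```python
def partition(arr, lo, hi):
    """Lomuto partition: move arr[hi] (the pivot) to its final sorted
    position; elements <= pivot end up on its left, larger ones on its right."""
    pivot = arr[hi]
    i = lo
    for j in range(lo, hi):
        if arr[j] <= pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]     # pivot lands at index i
    return i

def quick_sort(arr, lo=0, hi=None):
    """In-place quick sort built on the partition step above."""
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = partition(arr, lo, hi)
        quick_sort(arr, lo, p - 1)
        quick_sort(arr, p + 1, hi)
    return arr
```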
* Question: Strassen's Matrix Multiplication algorithm is faster than the naive
O(N^3) algorithm. Its complexity is approximately:
A) O(N^2)
B) O(N^{2.81})
C) O(N \log N)
D) O(N^{2.5})
* Question: The Master Theorem's Case 2 applies when f(n) is asymptotically equal
to n^{\log_b a}. An example is:
A) T(n) = 2T(n/2) + 1
B) T(n) = 2T(n/2) + n
C) T(n) = 2T(n/2) + n^2
D) T(n) = T(n-1) + n
* Question: Which sorting algorithm is often preferred for external sorting due to
its efficient handling of data that doesn't fit in memory?
A) Quick Sort
B) Heap Sort
C) Merge Sort
D) Insertion Sort
* Question: The average-case efficiency of Quick Sort is primarily due to:
A) Its use of a stable partitioning method.
B) Its ability to divide the problem into roughly equal subproblems on average.
C) Its minimal use of auxiliary space.
D) Its iterative nature.
* Question: When the problem size becomes very small in a recursive Divide and
Conquer algorithm, it reaches the:
A) Termination condition.
B) Base case.
C) Recursive step.
D) Combine step.
* Question: Which of the following problems can be solved using a Divide and
Conquer approach to achieve O(N \log N) time complexity?
A) Finding the maximum subarray sum.
B) Searching for an element in an unsorted array.
C) Finding the k-th smallest element in a sorted array.
D) Matrix Chain Multiplication.
* Question: The "Divide" step in a Divide and Conquer algorithm typically
involves:
A) Solving the base cases directly.
B) Breaking the problem into smaller, independent subproblems.
C) Merging the results of subproblems.
D) Analyzing the algorithm's performance.
* Question: Which of the following is a key advantage of the Divide and Conquer
paradigm?
A) It always guarantees an optimal solution.
B) It always leads to iterative solutions.
C) It can often lead to efficient algorithms by reducing the problem size
significantly.
D) It simplifies the problem statement by ignoring details.
* Question: The "Master Theorem" is used to analyze the complexity of algorithms
that follow a specific recursive structure. What is this structure?
A) Linear recursion.
B) Recurrence relations where a problem is divided into smaller, similar
subproblems.
C) Iterative loops.
D) Problems with overlapping subproblems.
* Question: What is the time complexity of Quick Sort in the worst case, if the
pivot is always chosen as the smallest or largest element?
A) O(N \log N)
B) O(N)
C) O(N^2)
D) O(1)
* Question: Which of the following is true about the space complexity of Merge
Sort?
A) It is O(1) (in-place).
B) It is O(\log N) due to recursion stack.
C) It is O(N) due to the auxiliary array used for merging.
D) It is O(N \log N).
* Question: The "path compression" optimization in the Union-Find data structure
primarily aims to:
A) Reduce the number of Union operations.
B) Reduce the depth of the trees representing sets.
C) Speed up the Union operation.
D) Detect cycles more efficiently.
* Question: For an array of N elements, how many comparisons are made by the
Divide and Conquer algorithm to find both maximum and minimum elements in the worst
case?
A) N-1
B) 2N-2
C) \lceil 3N/2 \rceil - 2
D) N \log N
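The ⌈3N/2⌉ − 2 bound comes from processing elements in pairs: one comparison orders the pair, then one comparison each against the running max and min (3 per pair instead of 4). An iterative pairwise sketch achieving the same count as the divide-and-conquer version (names are illustrative):

```python
def max_min(arr):
    """Find (max, min) of a non-empty list with about 3N/2 comparisons:
    compare elements in pairs, then compare the larger of the pair with
    the max and the smaller with the min."""
    n = len(arr)
    if n % 2:                             # odd length: seed with first element
        mx = mn = arr[0]
        start = 1
    else:                                 # even length: seed with first pair
        if arr[0] > arr[1]:
            mx, mn = arr[0], arr[1]
        else:
            mx, mn = arr[1], arr[0]
        start = 2
    for i in range(start, n - 1, 2):
        a, b = arr[i], arr[i + 1]
        if a > b:
            a, b = b, a                   # 1 comparison: now a <= b
        if b > mx:                        # 1 comparison against max
            mx = b
        if a < mn:                        # 1 comparison against min
            mn = a
    return mx, mn
```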
* Question: Which of the following is NOT a typical characteristic of a Divide and
Conquer algorithm?
A) Recursion.
B) Base cases.
C) Overlapping subproblems.
D) Combining solutions.
* Question: If the recurrence relation for an algorithm is T(n) = T(n-1) + O(1),
what is its complexity?
A) O(1)
B) O(\log n)
C) O(n)
D) O(n^2)
* Question: The "closest pair of points" problem can be solved in O(N \log N) time
using Divide and Conquer. This efficiency comes from:
A) Using a hash table.
B) Sorting points and then dividing the plane.
C) A greedy approach.
D) Dynamic programming.
* Question: Which of the following is an advantage of Merge Sort over Quick Sort?
A) It is generally faster in practice.
B) It is an in-place sorting algorithm.
C) Its worst-case time complexity is O(N \log N).
D) It has lower space complexity.
* Question: When a recursive Divide and Conquer algorithm is implemented, what is
a potential issue related to memory?
A) Excessive heap memory usage.
B) Stack overflow due to deep recursion.
C) Memory leaks.
D) Cache misses.
* Question: The "Master Theorem" is applicable when the subproblems are of:
A) Arbitrary sizes.
B) Unequal sizes but sum to n.
C) Roughly equal sizes (n/b).
D) Always size n-1.
* Question: Which of the following is a common application of the Divide and
Conquer paradigm in computational geometry?
A) Convex Hull problem.
B) Shortest path problem.
C) Minimum Spanning Tree.
D) Max Flow problem.
* Question: The "Combine" step in Merge Sort involves:
A) Partitioning the array around a pivot.
B) Merging two already sorted sub-arrays.
C) Building a heap.
D) Swapping elements.
* Question: If the base case of a recursive Divide and Conquer algorithm is not
properly defined, what is the most likely outcome?
A) Incorrect results.
B) Infinite recursion (stack overflow).
C) Reduced performance.
D) Memory leak.
* Question: Which of the following is true about the "Union by Size" optimization
in Union-Find?
A) It always attaches the smaller tree under the root of the larger tree.
B) It always attaches the larger tree under the root of the smaller tree.
C) It ensures that all trees have a maximum height of \log N.
D) It is only used for Find operations.
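The two Union-Find optimizations mentioned in the questions above fit in a few lines. A minimal sketch with union by size and path compression (implemented here as path halving; class and method names are illustrative):

```python
class DSU:
    """Disjoint Set Union with path compression (find flattens chains)
    and union by size (smaller tree hangs under the larger root)."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                  # same set already: adding edge a-b would form a cycle
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra              # smaller tree under larger root
        self.size[ra] += self.size[rb]
        return True
```

This `union` returning False on a repeated component is exactly the cycle check Kruskal's algorithm relies on.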
* Question: The problem of finding inversions in an array (pairs (i, j) such that
i < j and A[i] > A[j]) can be solved efficiently using a modified version of:
A) Quick Sort
B) Merge Sort
C) Heap Sort
D) Insertion Sort
* Question: What is the fundamental principle that makes Divide and Conquer
algorithms efficient?
A) They use iteration.
B) They solve each subproblem only once.
C) They break down a large problem into smaller, more manageable subproblems.
D) They always find the optimal solution.
* Question: The time complexity of the Karatsuba algorithm for multiplying two N-
digit numbers is approximately:
A) O(N^2)
B) O(N \log N)
C) O(N^{\log_2 3})
D) O(N^{1.5})
Solutions: Divide and Conquer Algorithms
* B) Divide, Conquer, Combine
* C) Quick Sort
* B) O(N \log N)
* C) It can be implemented to be stable, but it's not inherently stable.
* C) Merge Sort
* B) T(n) = aT(n/b) + f(n)
* C) Kruskal's Algorithm for MST.
* C) O(N)
* C) Picking the median element.
* B) 1.5N
* B) Divides the problem into smaller subproblems and recursively solves one of
them.
* B) O(N^2) worst-case time complexity.
* B) Solving the smallest subproblems directly (base cases) or recursively.
* C) O(2^n)
* D) Both B and C.
* C) Integrates the solutions of subproblems into a solution for the original
problem.
* C) The cost of the division and combination step.
* C) Heap Sort
* C) Finding the k-th smallest element in an unsorted array.
* C) Ensure that the worst-case scenario occurs with very low probability.
* A) O(n)
* B) A base case to terminate recursion.
* C) They repeatedly divide the problem into smaller, manageable subproblems.
* A) All elements to its left are smaller, and all to its right are larger.
* B) O(N^{2.81})
* B) T(n) = 2T(n/2) + n
* C) Merge Sort
* B) Its ability to divide the problem into roughly equal subproblems on average.
* B) Base case.
* A) Finding the maximum subarray sum.
* B) Breaking the problem into smaller, independent subproblems.
* C) It can often lead to efficient algorithms by reducing the problem size
significantly.
* B) Recurrence relations where a problem is divided into smaller, similar
subproblems.
* C) O(N^2)
* C) It is O(N) due to the auxiliary array used for merging.
* B) Reduce the depth of the trees representing sets.
* C) \lceil 3N/2 \rceil - 2
* C) Overlapping subproblems.
* C) O(n)
* B) Sorting points and then dividing the plane.
* C) Its worst-case time complexity is O(N \log N).
* B) Stack overflow due to deep recursion.
* C) Roughly equal sizes (n/b).
* A) Convex Hull problem.
* B) Merging two already sorted sub-arrays.
* B) Infinite recursion (stack overflow).
* A) It always attaches the smaller tree under the root of the larger tree.
* B) Merge Sort
* C) They break down a large problem into smaller, more manageable subproblems.
* C) O(N^{\log_2 3})
Greedy Algorithms
* Question: For a problem to be optimally solvable by a greedy algorithm, it
generally needs to exhibit:
A) Overlapping subproblems and memoization.
B) Optimal substructure and the greedy choice property.
C) Polynomial time complexity and a unique solution.
D) Both a base case and a recursive step.
* Question: In the Fractional Knapsack problem, the greedy strategy involves:
A) Picking items with the smallest weight first.
B) Picking items with the highest value first.
C) Prioritizing items with the highest value-to-weight ratio.
D) Picking items that fit perfectly into the remaining capacity.
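The ratio-based greedy choice for Fractional Knapsack is short enough to sketch directly (item representation as (value, weight) tuples is an assumption for illustration):

```python
def fractional_knapsack(items, capacity):
    """Greedy fractional knapsack: take items in decreasing
    value-to-weight ratio, taking a fraction of the last item if needed.
    items is a list of (value, weight) pairs."""
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)      # whole item, or the fitting fraction
        total += value * take / weight
        capacity -= take
    return total
```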
* Question: Which of the following problems is NOT typically solved optimally by a
greedy algorithm?
A) Activity Selection Problem.
B) Shortest path in a graph with non-negative edge weights (Dijkstra's).
C) 0/1 Knapsack Problem.
D) Huffman Coding.
* Question: Kruskal's algorithm for Minimum Spanning Tree (MST) makes a greedy
choice by:
A) Always adding the edge with the largest weight.
B) Adding the smallest weight edge that connects two previously disconnected
components.
C) Connecting the current vertex to its closest neighbor.
D) Selecting edges that form a cycle.
* Question: For the Activity Selection Problem, if activities are sorted by their
finish times, the greedy choice is to:
A) Select the activity with the earliest start time.
B) Select the activity with the shortest duration.
C) Select the activity that finishes earliest among the remaining compatible
activities.
D) Select the activity that overlaps with the fewest other activities.
* Question: Huffman coding is a greedy algorithm used primarily for:
A) Lossy image compression.
B) Optimal prefix code generation for lossless data compression.
C) Error detection in data transmission.
D) Encryption of text data.
* Question: The time complexity of Prim's algorithm for finding MST, when
implemented with a Fibonacci heap, is:
A) O(V^2)
B) O(E \log V)
C) O(E + V \log V)
D) O(E \cdot V)
* Question: Which data structure is essential for efficiently checking for cycles
in Kruskal's algorithm?
A) Priority Queue
B) Adjacency List
C) Disjoint Set Union (Union-Find)
D) Stack
* Question: In the Job Sequencing with Deadlines problem, the greedy approach
involves sorting jobs primarily by:
A) Their deadlines in ascending order.
B) Their profits in descending order.
C) Their durations in ascending order.
D) A combination of deadline and profit.
* Question: Consider jobs: J1(d=2, p=30), J2(d=1, p=40), J3(d=3, p=20), J4(d=2,
p=10). What is the maximum profit using greedy Job Sequencing?
A) 70 (J2, J1)
B) 60 (J1, J3)
C) 90 (J2, J1, J3)
D) 50 (J2, J4)
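The greedy Job Sequencing strategy (sort by profit, place each job in the latest free slot at or before its deadline) can be checked on the instance above with a small sketch (jobs are (deadline, profit) pairs; names are illustrative):

```python
def job_sequencing(jobs):
    """Greedy job sequencing: consider jobs in decreasing profit and
    schedule each in the latest free unit slot <= its deadline.
    jobs is a list of (deadline, profit) pairs; returns total profit."""
    jobs = sorted(jobs, key=lambda dp: dp[1], reverse=True)
    max_d = max(d for d, _ in jobs)
    taken = [False] * (max_d + 1)         # taken[t]: time slot t is occupied
    profit = 0
    for deadline, p in jobs:
        for t in range(deadline, 0, -1):  # latest free slot first
            if not taken[t]:
                taken[t] = True
                profit += p
                break
    return profit
```

On the quoted jobs this schedules J2 (slot 1), J1 (slot 2), and J3 (slot 3) for a profit of 90.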
* Question: Dijkstra's algorithm for single-source shortest paths is a greedy
algorithm. It works correctly only if:
A) The graph is acyclic.
B) All edge weights are positive.
C) The graph is dense.
D) The graph is undirected.
* Question: The greedy choice in Huffman coding involves:
A) Always picking the character with the highest frequency.
B) Combining the two nodes (characters or subtrees) with the smallest
frequencies.
C) Assigning fixed-length codes to all characters.
D) Building a balanced binary tree.
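This repeated merging of the two lowest-frequency nodes is naturally driven by a min-heap. A compact sketch (tree representation, tie-breaking counter, and function name are illustrative choices, not a canonical implementation):

```python
import heapq

def huffman_codes(freqs):
    """Greedy Huffman coding: repeatedly merge the two lowest-frequency
    nodes. Leaves are symbols; internal nodes are (left, right) tuples.
    Left edges append '0', right edges '1', yielding prefix-free codes."""
    # (frequency, tie-breaker, node); the counter avoids comparing nodes
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)    # two smallest frequencies...
        f2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (a, b)))  # ...merged
        counter += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):       # internal node
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                             # leaf symbol
            codes[node] = prefix or "0"   # single-symbol edge case
    walk(heap[0][2], "")
    return codes
```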
* Question: A greedy algorithm is considered "optimal" if it:
A) Always finds a solution, even if not the best.
B) Always finds the best possible solution for the problem.
C) Runs in polynomial time.
D) Uses minimal memory.
* Question: Which of these problems typically requires Dynamic Programming for an
optimal solution, as a greedy approach may fail?
A) Activity Selection.
B) Fractional Knapsack.
C) Coin Change (with arbitrary denominations).
D) Prim's Algorithm.
* Question: The dominant step in Kruskal's algorithm's time complexity is usually:
A) Initializing the Disjoint Set Union structure.
B) Sorting all the edges.
C) Performing Union-Find operations.
D) Constructing the MST.
* Question: The Activity Selection problem aims to:
A) Maximize the total profit from selected activities.
B) Minimize the total time spent on activities.
C) Select the maximum number of non-overlapping activities.
D) Find the earliest possible completion time for all activities.
* Question: For a dense graph (where E \approx V^2), which MST algorithm is
generally more efficient when implemented with an adjacency matrix?
A) Kruskal's Algorithm.
B) Prim's Algorithm.
C) Bellman-Ford Algorithm.
D) Boruvka's Algorithm.
* Question: A key reason to use a greedy algorithm, when applicable, is often its:
A) Guaranteed optimality for all problems.
B) Simplicity of implementation and often faster execution.
C) Ability to handle negative edge weights.
D) Low memory footprint regardless of implementation.
* Question: For the standard US coin denominations ({1, 5, 10, 25} cents), a
greedy approach to the Coin Change Problem (minimizing coins) will always yield the
optimal solution. This is because:
A) The denominations are powers of 2.
B) The denominations form a canonical coin system.
C) The smallest denomination is 1.
D) It's a coincidence.
* Question: The Find operation in a Disjoint Set Union (Union-Find) data structure
returns:
A) The number of elements in the set.
B) A boolean indicating if two elements are in the same set.
C) The representative (root) of the set containing a given element.
D) The smallest element in the set.
* Question: A Minimum Spanning Tree (MST) of a connected, undirected, weighted
graph is:
A) A path that connects all vertices with minimum total weight.
B) A tree that connects all vertices with minimum total edge weight.
C) A subgraph with the fewest possible edges that connects all vertices.
D) A tree that includes all edges of the graph.
* Question: In Prim's algorithm, a min-priority queue is used to efficiently:
A) Store all edges of the graph.
B) Keep track of visited vertices.
C) Extract the vertex closest to the current MST.
D) Detect cycles.
* Question: Given activities: (1,4), (3,5), (0,6), (5,7), (3,9), (5,9), (6,10),
(8,11), (2,13), (12,14). Applying the greedy Activity Selection strategy (sort by
finish time), how many activities can be selected?
A) 3
B) 4
C) 5
D) 6
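The sort-by-finish-time strategy from the question can be run on this exact instance with a short sketch (activities as (start, finish) pairs; it selects 4 of them):

```python
def select_activities(activities):
    """Greedy activity selection: sort by finish time, then take every
    activity whose start is no earlier than the last chosen finish."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:          # compatible with all chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```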
* Question: The main limitation of greedy algorithms is that:
A) They are always computationally expensive.
B) They do not always guarantee a globally optimal solution.
C) They are difficult to implement.
D) They cannot be applied to graph problems.
* Question: What property do Huffman codes possess that allows for unambiguous
decoding?
A) They are fixed-length codes.
B) They are variable-length codes.
C) They are prefix codes (no code is a prefix of another).
D) They are suffix codes.
* Question: What is the time complexity of Kruskal's algorithm when using a
Disjoint Set Union data structure with path compression and union by rank/size?
A) O(V^2)
B) O(E \log V)
C) O(E \log E)
D) O(E + V \log V)
* Question: The "greedy choice property" implies that:
A) An optimal solution can be found by combining optimal solutions to
subproblems.
B) A locally optimal choice at each step leads to a globally optimal solution.
C) The problem can be divided into independent subproblems.
D) The algorithm will always terminate quickly.
* Question: A practical application of Minimum Spanning Trees is:
A) Finding the shortest route between two points on a map.
B) Designing a cost-effective network to connect multiple locations.
C) Scheduling tasks with dependencies.
D) Searching for elements in a database.
* Question: When scheduling jobs with deadlines and profits using a greedy
approach, if multiple jobs can fit into an available time slot, which one is
chosen?
A) The one with the earliest deadline.
B) The one with the highest profit.
C) The one with the shortest processing time.
D) The one that starts earliest.
* Question: What is the key conceptual difference in how Prim's and Kruskal's
algorithms build an MST?
A) Prim's is for directed graphs, Kruskal's for undirected.
B) Prim's grows a single tree, Kruskal's merges components.
C) Prim's uses an adjacency matrix, Kruskal's uses an adjacency list.
D) Prim's is faster for sparse graphs, Kruskal's for dense graphs.
* Question: Consider the Coin Change Problem with denominations {1, 7, 10} and
amount 15. A greedy algorithm would give:
A) 10, 1, 1, 1, 1, 1 (6 coins)
B) 7, 7, 1 (3 coins)
C) 10, 5 (2 coins)
D) 1, 1, ..., 1 (15 coins)
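A small Python sketch (illustrative, with the denominations from this question) makes the greedy failure concrete:

```python
# Greedy coin change: always take the largest denomination that still fits.
# With the non-canonical system {1, 7, 10} and amount 15 this is suboptimal,
# which is why arbitrary denominations call for dynamic programming.
def greedy_change(denominations, amount):
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            amount -= d
            coins.append(d)
    return coins

print(greedy_change([1, 7, 10], 15))  # [10, 1, 1, 1, 1, 1] -> 6 coins
# The optimal answer is [7, 7, 1] -> 3 coins.
```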
* Question: In Dijkstra's algorithm, the decrease-key operation on the priority
queue is crucial for:
A) Adding new vertices to the queue.
B) Updating the distance of a vertex if a shorter path is found.
C) Removing the minimum element from the queue.
D) Detecting negative cycles.
* Question: For the Activity Selection Problem, sorting activities by which
criterion is essential for the greedy approach to work?
A) Start time in ascending order.
B) Finish time in ascending order.
C) Duration in ascending order.
D) Number of conflicts in ascending order.
* Question: The time complexity of building a Huffman tree for N distinct
characters is dominated by:
A) Initializing the frequency table (O(N)).
B) Sorting the initial frequencies (O(N \log N)).
C) Repeatedly extracting minimums from a priority queue (O(N \log N)).
D) Assigning codes (O(N)).
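The priority-queue merging that dominates Huffman construction can be sketched in a few lines of Python (a minimal illustration that computes only the total weighted code length, not the codes themselves):

```python
import heapq

# Huffman tree cost via a min-heap: repeatedly merge the two smallest
# frequencies. Each of the N-1 merges costs O(log N), giving O(N log N).
def huffman_cost(frequencies):
    heap = list(frequencies)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        total += a + b              # weight of the new internal node
        heapq.heappush(heap, a + b)
    return total                    # equals sum of (frequency * code length)

print(huffman_cost([5, 9, 12, 13, 16, 45]))  # 224
```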
* Question: Which greedy strategy is applied in the Job Sequencing with Deadlines
problem?
A) Always pick the job that finishes earliest.
B) Always pick the job with the highest profit that can be scheduled.
C) Always pick the job with the earliest deadline that can be scheduled.
D) Always pick the job with the shortest duration.
* Question: For the Fractional Knapsack problem, if multiple items have the same
value-to-weight ratio, which tie-breaking rule ensures optimality?
A) Prioritize the item with smaller weight.
B) Prioritize the item with larger weight.
C) Any tie-breaking rule will work, as long as the ratio is highest.
D) Prioritize the item with higher value.
* Question: In Prim's algorithm, the key value associated with each vertex in the
priority queue represents:
A) Its degree in the graph.
B) Its distance from the source vertex.
C) The minimum weight of an edge connecting it to a vertex already in the MST.
D) Its rank in the graph.
* Question: The inverse Ackermann function \alpha(N), which appears in the
amortized complexity of Union-Find, grows:
A) Logarithmically.
B) Extremely slowly (effectively constant for practical inputs).
C) Linearly.
D) Exponentially.
* Question: Which of the following problems, if attempted with a greedy approach,
would likely lead to a suboptimal solution?
A) Minimum Spanning Tree (Prim's).
B) Fractional Knapsack.
C) 0/1 Knapsack.
D) Activity Selection.
* Question: A key advantage of Huffman coding is that it produces:
A) Fixed-length codes for all characters.
B) Codes that are easy to encrypt.
C) Optimal prefix codes, minimizing average code length.
D) Codes that are resilient to errors.
* Question: If a graph is represented by an adjacency list, what is the time
complexity of Prim's algorithm using a binary heap?
A) O(V^2)
B) O(E \log V)
C) O(E + V \log V)
D) O(E \log E)
* Question: Which property of a graph is directly related to the concept of a
Minimum Spanning Tree?
A) Shortest paths between any two nodes.
B) Cycles within the graph.
C) Connectivity and edge weights.
D) Bipartiteness.
* Question: In Job Sequencing with Deadlines, if a job has a deadline d, it means
it must be completed by time d. What is the maximum possible deadline for any job
in a set of N jobs, if we consider time slots 1, 2, \dots, N?
A) 1
B) N/2
C) N
D) 2N
* Question: How do Prim's and Kruskal's algorithms differ in their approach to
building the MST?
A) Prim's is vertex-centric, growing a single component; Kruskal's is edge-
centric, connecting components.
B) Prim's uses DFS, Kruskal's uses BFS.
C) Prim's finds shortest paths, Kruskal's finds minimum cuts.
D) Prim's works on directed graphs, Kruskal's on undirected.
* Question: For the Coin Change Problem with denominations {1, 2, 5} and amount 8,
a greedy algorithm would use:
A) 5, 2, 1 (3 coins)
B) 2, 2, 2, 2 (4 coins)
C) 5, 1, 1, 1 (4 coins)
D) 1, 1, 1, 1, 1, 1, 1, 1 (8 coins)
* Question: The greedy nature of Dijkstra's algorithm means it always picks the
unvisited vertex with the smallest distance. This works because:
A) It uses a stack.
B) It never revisits a vertex.
C) Edge weights are non-negative, ensuring that the first path found to a vertex
is the shortest.
D) It always finds all possible paths.
* Question: The Activity Selection Problem is a classic example where the greedy
choice property holds. This means:
A) Any choice of activity will lead to an optimal solution.
B) Selecting the earliest finishing activity at each step always leads to the
maximum number of selected activities.
C) The problem can be solved by dynamic programming.
D) The problem has overlapping subproblems.
* Question: Which algorithm is used for finding the shortest path in a graph where
some edge weights might be negative, provided there are no negative cycles?
A) Dijkstra's
B) Prim's
C) Bellman-Ford
D) Floyd-Warshall (for all-pairs)
* Question: The "optimal substructure" property in greedy algorithms implies that:
A) The problem can be broken into independent subproblems.
B) An optimal solution to the problem contains optimal solutions to its
subproblems.
C) The greedy choice at each step is always correct.
D) The solution can be verified quickly.
* Question: What is the primary role of the Union operation in the Disjoint Set
Union (Union-Find) data structure?
A) To find the representative of a set.
B) To merge two sets into one.
C) To add a new element to a set.
D) To check if two elements are in the same set.
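The Find and Union operations asked about in the questions above can be sketched minimally in Python, with the two standard optimizations (path compression, here via path halving, and union by rank):

```python
# Disjoint Set Union (Union-Find) with path compression and union by rank.
class DSU:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Returns the representative (root) of x's set.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        # Merges the sets containing a and b; returns False if already merged.
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True

dsu = DSU(5)
dsu.union(0, 1)
dsu.union(1, 2)
print(dsu.find(0) == dsu.find(2))  # True
print(dsu.find(0) == dsu.find(4))  # False
```

This is the structure Kruskal's algorithm uses to detect whether an edge would create a cycle.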
Solutions: Greedy Algorithms
* B) Optimal substructure and the greedy choice property.
* C) Prioritizing items with the highest value-to-weight ratio.
* C) 0/1 Knapsack Problem.
* B) Adding the smallest weight edge that connects two previously disconnected
components.
* C) Select the activity that finishes earliest among the remaining compatible
activities.
* B) Optimal prefix code generation for lossless data compression.
* C) O(E + V \log V) (Note: this bound holds when Prim's algorithm uses a
Fibonacci heap; with a binary heap the complexity is O(E \log V).)
* C) Disjoint Set Union (Union-Find)
* B) Their profits in descending order.
* A) 70 (J2, J1)
* B) All edge weights are positive.
* B) Combining the two nodes (characters or subtrees) with the smallest
frequencies.
* B) Always finds the best possible solution for the problem.
* C) Coin Change (with arbitrary denominations).
* B) Sorting all the edges.
* C) Select the maximum number of non-overlapping activities.
* B) Prim's Algorithm.
* B) Simplicity of implementation and often faster execution.
* B) The denominations form a canonical coin system.
* C) The representative (root) of the set containing a given element.
* B) A tree that connects all vertices with minimum total edge weight.
* C) Extract the vertex closest to the current MST.
* B) 4
* B) They do not always guarantee a globally optimal solution.
* C) They are prefix codes (no code is a prefix of another).
* C) O(E \log E)
* B) A locally optimal choice at each step leads to a globally optimal solution.
* B) Designing a cost-effective network to connect multiple locations.
* B) The one with the highest profit.
* B) Prim's grows a single tree, Kruskal's merges components.
* A) 10, 1, 1, 1, 1, 1 (6 coins) (Greedy takes 10 and then five 1s; the
optimal solution is 7, 7, 1 = 3 coins.)
* B) Updating the distance of a vertex if a shorter path is found.
* B) Finish time in ascending order.
* C) Repeatedly extracting minimums from a priority queue (O(N \log N)).
* B) Always pick the job with the highest profit that can be scheduled.
* C) Any tie-breaking rule will work, as long as the ratio is highest.
* C) The minimum weight of an edge connecting it to a vertex already in the MST.
* B) Extremely slowly (effectively constant for practical inputs).
* C) 0/1 Knapsack.
* C) Optimal prefix codes, minimizing average code length.
* B) O(E \log V)
* C) Connectivity and edge weights.
* C) N
* A) Prim's is vertex-centric, growing a single component; Kruskal's is edge-
centric, connecting components.
* A) 5, 2, 1 (3 coins)
* C) Edge weights are non-negative, ensuring that the first path found to a vertex
is the shortest.
* B) Selecting the earliest finishing activity at each step always leads to the
maximum number of selected activities.
* C) Bellman-Ford
* B) An optimal solution to the problem contains optimal solutions to its
subproblems.
* B) To merge two sets into one.
Dynamic Programming
* Question: For a problem to be efficiently solved by dynamic programming, it must
exhibit which two properties?
A) Greedy choice property and optimal substructure.
B) Optimal substructure and overlapping subproblems.
C) Divide and conquer and memoization.
D) Polynomial time complexity and a unique solution.
* Question: What is "memoization" in the context of dynamic programming?
A) An iterative, bottom-up approach.
B) A top-down approach that stores results of subproblems to avoid
recomputation.
C) A technique for managing memory in recursive calls.
D) A method for breaking a problem into independent subproblems.
* Question: The Longest Common Subsequence (LCS) problem is a classic application
of:
A) Greedy Algorithms.
B) Divide and Conquer.
C) Dynamic Programming.
D) Backtracking.
* Question: Which of the following problems is best solved using dynamic
programming for an optimal solution, rather than a greedy approach?
A) Fractional Knapsack.
B) Activity Selection.
C) 0/1 Knapsack.
D) Minimum Spanning Tree.
* Question: The time complexity of the standard dynamic programming solution for
the 0/1 Knapsack problem with N items and capacity W is:
A) O(N)
B) O(W)
C) O(N+W)
D) O(N \cdot W)
* Question: The Floyd-Warshall algorithm, used for all-pairs shortest paths, is an
example of which algorithmic paradigm?
A) Greedy.
B) Divide and Conquer.
C) Dynamic Programming.
D) Backtracking.
* Question: In dynamic programming, "tabulation" refers to:
A) A recursive approach with caching.
B) An iterative, bottom-up approach that fills a table of solutions.
C) A method for pruning the search space.
D) A technique for randomizing choices.
* Question: The optimal solution for Matrix Chain Multiplication (minimizing
scalar multiplications) using dynamic programming has a time complexity of:
A) O(N^2)
B) O(N \log N)
C) O(N^3)
D) O(2^N)
* Question: Which of these problems is generally NOT solved using dynamic
programming?
A) Fibonacci Sequence calculation (optimized).
B) Longest Increasing Subsequence.
C) Bubble Sort.
D) Edit Distance (Levenshtein Distance).
* Question: The "Principle of Optimality" in dynamic programming states that:
A) A globally optimal solution can be achieved by making locally optimal
choices.
B) An optimal solution to a problem can be constructed from optimal solutions to
its subproblems.
C) All subproblems are independent.
D) The problem can be solved in linear time.
* Question: What is the time complexity of finding the shortest path in a Directed
Acyclic Graph (DAG)?
A) O(V^2)
B) O(E \log V)
C) O(V+E)
D) O(V^3)
* Question: What is the key distinction between memoization and tabulation?
A) Memoization is iterative, tabulation is recursive.
B) Memoization is top-down, tabulation is bottom-up.
C) Memoization is for decision problems, tabulation for optimization problems.
D) Memoization uses more memory than tabulation.
* Question: The Coin Change Problem (finding the minimum number of coins for a
given amount) with arbitrary denominations is a classic problem solved by:
A) Greedy approach.
B) Divide and Conquer.
C) Dynamic Programming.
D) Brute force.
* Question: For the Longest Common Subsequence (LCS) problem, the base case in the
DP table is typically when:
A) Both strings have length 1.
B) One or both strings are empty, resulting in an LCS of 0.
C) The first characters of the strings match.
D) The strings are identical.
* Question: Which of the following best exemplifies an "overlapping subproblem"?
A) Sorting two independent halves of an array in Merge Sort.
B) The recursive calculation of fib(n) repeatedly calling fib(n-2).
C) The partitioning step in Quick Sort.
D) The search space reduction in Binary Search.
* Question: The naive recursive implementation of the Fibonacci sequence
calculation has a time complexity of approximately:
A) O(N)
B) O(N \log N)
C) O(N^2)
D) O(2^N)
* Question: Using dynamic programming, the time complexity for computing the N-th
Fibonacci number can be reduced to:
A) O(1)
B) O(\log N)
C) O(N)
D) O(N^2)
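The O(2^N)-to-O(N) improvement can be shown with a short memoized sketch in Python (illustrative only):

```python
from functools import lru_cache

# Naive recursion recomputes the same fib(k) exponentially many times;
# caching each result once reduces the work to O(N) subproblems.
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```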
* Question: The problem of finding the longest path in a general graph (which may
contain cycles) is:
A) Solvable in polynomial time using dynamic programming.
B) A greedy problem.
C) NP-hard.
D) Solvable using Dijkstra's algorithm.
* Question: In the 0/1 Knapsack problem, the dynamic programming recurrence
relation dp[i][w] (max value using first i items with capacity w) typically
involves:
A) dp[i][w] = dp[i-1][w] + value[i]
B) dp[i][w] = max(dp[i-1][w], dp[i-1][w - weight[i]] + value[i])
C) dp[i][w] = dp[i][w - weight[i]] + value[i]
D) dp[i][w] = min(dp[i-1][w], dp[i-1][w - weight[i]] + value[i])
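The recurrence in option B translates directly into a tabulation sketch (a minimal Python illustration with made-up item data):

```python
# 0/1 Knapsack tabulation implementing
# dp[i][w] = max(dp[i-1][w], dp[i-1][w - weight[i]] + value[i]).
def knapsack(weights, values, capacity):
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]            # skip item i
            if weights[i - 1] <= w:            # or take item i if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]

print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))  # 9 (items of weight 3 and 4)
```

The two nested loops make the O(N * W) complexity from the earlier question visible.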
* Question: Which algorithm is designed to find single-source shortest paths in
graphs that may contain negative edge weights (but no negative cycles)?
A) Dijkstra's Algorithm.
B) Prim's Algorithm.
C) Bellman-Ford Algorithm.
D) Kruskal's Algorithm.
* Question: The Coin Change Problem (minimum coins) is a classic DP problem
because it satisfies:
A) The greedy choice property.
B) Both optimal substructure and overlapping subproblems.
C) Only the optimal substructure property.
D) Only the overlapping subproblems property.
* Question: When dealing with very deep recursion that might cause a stack
overflow, which dynamic programming implementation approach is generally preferred?
A) Memoization.
B) Tabulation.
C) Pure recursion.
D) Divide and Conquer.
* Question: For the Longest Common Subsequence (LCS) of strings X and Y, if X[i]
equals Y[j], the recurrence for LCS(i, j) is:
A) LCS(i-1, j) + 1
B) LCS(i, j-1) + 1
C) LCS(i-1, j-1) + 1
D) max(LCS(i-1, j), LCS(i, j-1))
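The LCS recurrence and its base case (empty prefixes give 0) can be sketched as a tabulation in Python (illustrative):

```python
# LCS tabulation: dp[i][j] = dp[i-1][j-1] + 1 when X[i-1] == Y[j-1],
# otherwise max(dp[i-1][j], dp[i][j-1]); row/column 0 are the base case.
def lcs(x, y):
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs("ABCBDAB", "BDCABA"))  # 4 ("BCBA" is one LCS)
```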
* Question: The property of "optimal substructure" is fundamental to dynamic
programming. It means:
A) The problem can be solved by making a sequence of greedy choices.
B) An optimal solution to the overall problem can be constructed from optimal
solutions to its subproblems.
C) The problem can be broken into independent subproblems.
D) The solution can be found in polynomial time.
* Question: What is the most common way to store and retrieve intermediate results
in dynamic programming?
A) A stack.
B) A queue.
C) A table (array or hash map).
D) A linked list.
* Question: The Shortest Common Supersequence (SCS) problem, which aims to find
the shortest string that contains two given strings as subsequences, is solvable
using:
A) Greedy algorithms.
B) Divide and Conquer.
C) Dynamic Programming.
D) Backtracking.
* Question: The standard dynamic programming solution for the Longest Increasing
Subsequence (LIS) of an array of size N has a time complexity of:
A) O(N)
B) O(N \log N)
C) O(N^2)
D) O(N^3)
* Question: The Matrix Chain Multiplication problem seeks to:
A) Multiply a chain of matrices.
B) Find the optimal parenthesization to minimize scalar multiplications.
C) Determine if matrix multiplication is possible.
D) Maximize the product of matrices.
* Question: Dynamic programming is most effective when a recursive solution
suffers from:
A) High memory usage.
B) Deep recursion.
C) Repeated computation of the same subproblems.
D) Inability to find an optimal solution.
* Question: The Edit Distance (Levenshtein Distance) problem, which calculates the
minimum number of operations to transform one string into another, is a classic
example of:
A) Greedy approach.
B) Divide and Conquer.
C) Dynamic Programming.
D) Backtracking.
* Question: The primary benefit of dynamic programming over a naive recursive
solution for problems with overlapping subproblems is:
A) It guarantees a unique solution.
B) It reduces the space complexity.
C) It avoids redundant computations, leading to improved time complexity.
D) It always finds a more accurate solution.
* Question: Dynamic programming is widely applied in which field for sequence
alignment and structure prediction?
A) Computer Graphics.
B) Robotics.
C) Bioinformatics.
D) Cryptography.
* Question: The problem of finding the minimum number of cuts to partition a
string into palindromic substrings can be solved efficiently using:
A) Greedy approach.
B) Divide and Conquer.
C) Dynamic Programming.
D) Backtracking.
* Question: For the Coin Change Problem (minimum coins), the DP recurrence
relation dp[amount] = min(dp[amount - coin] + 1) is an example of:
A) A greedy choice.
B) Optimal substructure.
C) A divide and conquer approach.
D) A backtracking step.
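The dp[amount] = min(dp[amount - coin] + 1) recurrence can be sketched as a bottom-up table in Python (illustrative, reusing the {1, 7, 10} denominations where greedy fails):

```python
# Minimum-coin change: dp[a] is the fewest coins summing to a, built from
# the optimal solutions of the smaller amounts a - coin.
def min_coins(denominations, amount):
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in denominations:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(min_coins([1, 7, 10], 15))  # 3 (7 + 7 + 1), where greedy needed 6
```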
* Question: An optimized algorithm for Longest Increasing Subsequence (LIS) can
achieve O(N \log N) time complexity by using:
A) A hash table.
B) A binary search.
C) A Fenwick tree.
D) A segment tree.
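The binary-search idea can be sketched with the standard "patience" technique in Python (illustrative):

```python
import bisect

# O(N log N) LIS: tails[k] holds the smallest possible tail value of an
# increasing subsequence of length k+1; each element either extends the
# longest subsequence or, via binary search, improves an existing tail.
def lis_length(nums):
    tails = []
    for x in nums:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

print(lis_length([10, 9, 2, 5, 3, 7, 101, 18]))  # 4 (e.g. 2, 3, 7, 18)
```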
* Question: The Subset Sum Problem (decision version) can be solved using dynamic
programming with a time complexity of O(N \cdot S), where N is the number of items
and S is the target sum. This makes it a:
A) Polynomial time algorithm.
B) Pseudo-polynomial time algorithm.
C) Exponential time algorithm.
D) Logarithmic time algorithm.
* Question: Which of the following is a problem where a greedy approach might fail
to find the optimal solution, thus requiring dynamic programming?
A) Activity Selection.
B) Fractional Knapsack.
C) Optimal Binary Search Tree construction.
D) Minimum Spanning Tree.
* Question: In the dynamic programming solution for Matrix Chain Multiplication,
the m[i][j] entry in the DP table stores:
A) The product of matrices A_i through A_j.
B) The minimum number of scalar multiplications to compute A_i \dots A_j.
C) The dimensions of matrix A_i.
D) The maximum number of scalar multiplications.
* Question: The space complexity of the standard dynamic programming solution for
the Longest Common Subsequence (LCS) of two strings of lengths M and N is:
A) O(1)
B) O(M+N)
C) O(M \cdot N)
D) O(\min(M, N))
* Question: The problem of finding the maximum product subarray (contiguous) is a
dynamic programming problem because its solution depends on:
A) A greedy choice at each step.
B) Optimal solutions of smaller, overlapping subarrays.
C) Sorting the array.
D) The sum of all elements.
* Question: A common challenge in applying dynamic programming is:
A) It always leads to exponential time complexity.
B) It requires a large amount of memory for the DP table, especially for large
inputs.
C) It is only applicable to very simple problems.
D) It cannot handle negative values.
* Question: The problem of finding the number of ways to make change for a given
amount (rather than the minimum number of coins) is also a dynamic programming
problem. This highlights that DP can be used for:
A) Optimization problems only.
B) Counting problems.
C) Decision problems only.
D) Graph traversal problems.
* Question: The "Longest Palindromic Subsequence" problem can be solved using
dynamic programming by transforming it into a variation of the:
A) Longest Increasing Subsequence problem.
B) Longest Common Subsequence problem (LCS of string and its reverse).
C) Edit Distance problem.
D) 0/1 Knapsack problem.
* Question: To reconstruct the actual solution (e.g., the LCS string, the path in
shortest path problems) after filling a dynamic programming table, one typically
uses:
A) A greedy approach on the original input.
B) Backtracking through the filled DP table.
C) A separate recursive function without memoization.
D) A brute-force search.
* Question: The "Principle of Optimality" is a foundational concept for dynamic
programming, often attributed to:
A) Donald Knuth.
B) Richard Bellman.
C) Edsger Dijkstra.
D) Robert Floyd.
* Question: In the Rod Cutting problem, the dynamic programming recurrence
relation for max_profit[i] (max profit for a rod of length i) typically considers:
A) The profit from a single cut of length j plus the max profit from the
remaining rod of length i-j.
B) Only cutting the rod into two equal halves.
C) A greedy choice for the most expensive piece.
D) The sum of profits of all possible pieces.
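The recurrence in option A can be sketched as a tabulation in Python (illustrative, using a classic textbook price table):

```python
# Rod Cutting: best[i] = max over first-cut lengths j of
# prices[j] + best[i - j], where prices[j-1] is the price of a length-j piece.
def rod_cutting(prices, length):
    best = [0] * (length + 1)
    for i in range(1, length + 1):
        for j in range(1, i + 1):
            best[i] = max(best[i], prices[j - 1] + best[i - j])
    return best[length]

print(rod_cutting([1, 5, 8, 9, 10, 17, 17, 20], 8))  # 22 (cut as 2 + 6)
```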
* Question: Dynamic programming is particularly well-suited for problems that
involve a sequence of decisions where:
A) Each decision is independent.
B) Future decisions depend on current decisions, and optimal past decisions lead
to optimal future ones.
C) Random choices are made.
D) The problem can be solved with a single loop.
* Question: The Bellman-Ford algorithm, which finds single-source shortest paths
in graphs with negative edge weights, can be viewed as an application of dynamic
programming because:
A) It uses a greedy approach.
B) It iteratively relaxes edges, effectively building up shortest path estimates
from subproblems.
C) It uses a priority queue.
D) It works only on acyclic graphs.
* Question: After identifying the optimal substructure and overlapping subproblems
for a DP problem, the next crucial step in designing the solution is usually to:
A) Implement a brute-force solution.
B) Define the recurrence relation and base cases.
C) Choose between memoization and tabulation.
D) Determine the time and space complexity.
* Question: Kadane's Algorithm, which finds the maximum sum of a contiguous
subarray, is an efficient dynamic programming approach with a time complexity of:
A) O(N^2)
B) O(N \log N)
C) O(N)
D) O(1)
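Kadane's single-pass recurrence can be sketched in Python (illustrative):

```python
# Kadane's algorithm: best_here = max(x, best_here + x) decides whether to
# extend the previous subarray or start fresh at x; one O(N) pass overall.
def max_subarray(nums):
    best_here = best = nums[0]
    for x in nums[1:]:
        best_here = max(x, best_here + x)
        best = max(best, best_here)
    return best

print(max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6, from [4, -1, 2, 1]
```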
Solutions: Dynamic Programming
* B) Optimal substructure and overlapping subproblems.
* B) A top-down approach that stores results of subproblems to avoid
recomputation.
* C) Dynamic Programming.
* C) 0/1 Knapsack.
* D) O(N \cdot W)
* C) Dynamic Programming.
* B) An iterative, bottom-up approach that fills a table of solutions.
* C) O(N^3)
* C) Bubble Sort.
* B) An optimal solution to a problem can be constructed from optimal solutions to
its subproblems.
* C) O(V+E)
* B) Memoization is top-down, tabulation is bottom-up.
* C) Dynamic Programming.
* B) One or both strings are empty, resulting in an LCS of 0.
* B) The recursive calculation of fib(n) repeatedly calling fib(n-2).
* D) O(2^N)
* C) O(N)
* C) NP-hard.
* B) dp[i][w] = max(dp[i-1][w], dp[i-1][w - weight[i]] + value[i])
* C) Bellman-Ford Algorithm.
* B) Both optimal substructure and overlapping subproblems.
* B) Tabulation.
* C) LCS(i-1, j-1) + 1
* B) An optimal solution to the overall problem can be constructed from optimal
solutions to its subproblems.
* C) A table (array or hash map).
* C) Dynamic Programming.
* C) O(N^2)
* B) Find the optimal parenthesization to minimize scalar multiplications.
* C) Repeated computation of the same subproblems.
* C) Dynamic Programming.
* C) It avoids redundant computations, leading to improved time complexity.
* C) Bioinformatics.
* C) Dynamic Programming.
* B) Optimal substructure.
* B) A binary search.
* B) Pseudo-polynomial time algorithm.
* C) Optimal Binary Search Tree construction.
* B) The minimum number of scalar multiplications to compute A_i \dots A_j.
* C) O(M \cdot N)
* B) Optimal solutions of smaller, overlapping subarrays.
* B) It requires a large amount of memory for the DP table, especially for large
inputs.
* B) Counting problems.
* B) Longest Common Subsequence problem (LCS of string and its reverse).
* B) Backtracking through the filled DP table.
* B) Richard Bellman.
* A) The profit from a single cut of length j plus the max profit from the
remaining rod of length i-j.
* B) Future decisions depend on current decisions, and optimal past decisions lead
to optimal future ones.
* B) It iteratively relaxes edges, effectively building up shortest path estimates
from subproblems.
* B) Define the recurrence relation and base cases.
* C) O(N)
Backtracking
* Question: The fundamental principle of backtracking is to:
A) Solve problems by making locally optimal choices.
B) Break down a problem into independent subproblems.
C) Systematically explore all possible solutions, abandoning paths that cannot
lead to a valid solution.
D) Store and reuse solutions to overlapping subproblems.
* Question: In a backtracking algorithm, "pruning" refers to the process of:
A) Optimizing the input data.
B) Eliminating branches of the search space that are guaranteed not to contain a
solution.
C) Storing intermediate results.
D) Combining partial solutions.
* Question: The N-Queens problem is a classic example that is typically solved
using:
A) Greedy algorithms.
B) Dynamic Programming.
C) Backtracking.
D) Divide and Conquer.
* Question: Which of the following problems is a common application of
backtracking?
A) Finding the shortest path in a graph.
B) Constructing a Minimum Spanning Tree.
C) Solving Sudoku puzzles.
D) Computing the Longest Common Subsequence.
* Question: The "state-space tree" in backtracking represents:
A) The call stack of recursive function calls.
B) All possible partial and complete solutions to the problem.
C) The optimal path to a solution.
D) The data structure used to store problem input.
* Question: In the Sum of Subsets problem, if the current sum of selected elements
already exceeds the target sum, the algorithm should:
A) Continue adding elements.
B) Prune the current branch and backtrack.
C) Adjust the target sum.
D) Consider negative numbers.
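The pruning rule from this question (abandon a branch once the running sum exceeds the target) can be sketched in Python; this minimal illustration assumes positive elements:

```python
# Sum of Subsets by backtracking: each element is either included or
# excluded, and branches whose partial sum already exceeds the target
# are pruned immediately.
def subset_sums(nums, target):
    solutions = []

    def explore(i, current, chosen):
        if current == target:
            solutions.append(list(chosen))
            return
        if current > target or i == len(nums):  # prune / dead end
            return
        chosen.append(nums[i])                  # include nums[i]
        explore(i + 1, current + nums[i], chosen)
        chosen.pop()                            # backtrack: exclude nums[i]
        explore(i + 1, current, chosen)

    explore(0, 0, [])
    return solutions

print(subset_sums([3, 34, 4, 12, 5, 2], 9))  # [[3, 4, 2], [4, 5]]
```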
* Question: The time complexity of backtracking algorithms is often characterized
as:
A) Polynomial.
B) Logarithmic.
C) Exponential or factorial.
D) Linear.
* Question: A key difference between backtracking and dynamic programming is that
backtracking:
A) Always finds the optimal solution.
B) Explores all valid paths, while DP focuses on solving overlapping subproblems
once.
C) Is always more efficient.
D) Uses memoization more extensively.
* Question: The Graph Coloring problem (determining if a graph can be colored with
'm' colors without adjacent vertices having the same color) is solved using:
A) Greedy approach.
B) Divide and Conquer.
C) Backtracking.
D) Dynamic Programming.
* Question: When placing queens in the N-Queens problem, which constraints must be
satisfied by a new queen relative to existing queens?
A) Same row, same column, same diagonal.
B) Different row, different column, different diagonal.
C) Same row, different column, different diagonal.
D) Different row, same column, different diagonal.
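A compact N-Queens sketch in Python shows the row/column/diagonal constraints and the backtracking step (illustrative; it counts solutions rather than printing boards):

```python
# N-Queens: one queen per row; a placement is "safe" if its column and both
# diagonals are unattacked. Failed placements are undone (backtracked).
def count_n_queens(n):
    cols, diag1, diag2 = set(), set(), set()

    def place(row):
        if row == n:
            return 1
        total = 0
        for col in range(n):
            if col in cols or row - col in diag1 or row + col in diag2:
                continue                      # not safe: prune this choice
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            total += place(row + 1)
            cols.discard(col); diag1.discard(row - col); diag2.discard(row + col)
        return total

    return place(0)

print(count_n_queens(4))  # 2
print(count_n_queens(8))  # 92
```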
* Question: The "safe" function (or equivalent check) in backtracking algorithms
like N-Queens serves to:
A) Mark a position as visited.
B) Verify if a partial solution satisfies problem constraints.
C) Calculate the cost of the current path.
D) Determine the next element to consider.
* Question: Which area of computer science heavily utilizes backtracking for
problem-solving?
A) Database Management.
B) Compiler Design.
C) Artificial Intelligence (e.g., constraint satisfaction problems, game
playing).
D) Operating Systems.
* Question: The Hamiltonian Cycle problem involves finding a cycle in a graph
that:
A) Visits every edge exactly once.
B) Visits every vertex exactly once and returns to the starting vertex.
C) Is the shortest possible cycle.
D) Is the longest possible cycle.
* Question: In the Sum of Subsets problem, if the current sum plus the sum of all
remaining unconsidered elements is less than the target sum, then:
A) The current path is a valid solution.
B) The current path cannot lead to a solution, so backtrack.
C) We must include the next element.
D) We must exclude the next element.
* Question: A significant advantage of backtracking over a pure brute-force
approach is:
A) It always finds the optimal solution.
B) It avoids exploring paths that are guaranteed to be invalid or suboptimal.
C) It uses less memory.
D) It is simpler to implement.
* Question: Generating all possible permutations of a given set of distinct
elements is a classic problem solved using:
A) Iterative loops.
B) Dynamic Programming.
C) Backtracking.
D) Greedy approach.
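The choose/recurse/undo pattern for generating permutations can be sketched in Python (illustrative):

```python
# Permutations by backtracking: pick any unused element, recurse on the
# rest, then undo the pick so sibling branches see a clean state.
def permutations(elems):
    result, current, used = [], [], [False] * len(elems)

    def build():
        if len(current) == len(elems):
            result.append(list(current))
            return
        for i, e in enumerate(elems):
            if not used[i]:
                used[i] = True
                current.append(e)
                build()
                current.pop()      # backtrack
                used[i] = False

    build()
    return result

print(permutations([1, 2, 3]))
# [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]]
```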
* Question: What data structure is implicitly used by recursive backtracking
algorithms to manage the state of exploration?
A) Queue.
B) Stack (the call stack).
C) Priority Queue.
D) Hash Table.
* Question: In the Graph Coloring problem, if a recursive call to color a vertex
fails (no valid color can be assigned), what action does the algorithm take?
A) Terminates with no solution.
B) Increases the number of available colors.
C) Backtracks to the previous vertex and tries a different color.
D) Assigns a default color and continues.
* Question: Which statement accurately describes the relationship between
backtracking and recursion?
A) All recursive algorithms are backtracking algorithms.
B) All backtracking algorithms must be implemented recursively.
C) Recursion is a common implementation technique for backtracking, but not all
recursive algorithms are backtracking.
D) Backtracking is a specific type of recursion.
* Question: To find all possible paths from a source to a destination in a graph,
without repeating vertices within a single path, which algorithm is typically used?
A) Breadth-First Search (BFS).
B) Depth-First Search (DFS) with backtracking.
C) Dijkstra's Algorithm.
D) Floyd-Warshall Algorithm.
* Question: In the N-Queens problem, how many queens are placed in each row of the
chessboard?
A) Zero.
B) One.
C) Two.
D) It varies depending on the board size.
* Question: A "dead end" in a backtracking search refers to:
A) A node in the state-space tree representing a complete solution.
B) A node from which no further choices can lead to a valid solution.
C) The root node of the search tree.
D) A node that has already been visited.
* Question: For the Sum of Subsets problem, if the current sum plus the sum of all
remaining elements is less than the target sum, this is a pruning condition
because:
A) The current path is a solution.
B) The current path cannot possibly reach the target sum.
C) The target sum is too small.
D) We must include the next element.
* Question: The "Branch and Bound" technique is an optimization often applied to
backtracking algorithms, particularly for:
A) Decision problems.
B) Optimization problems (finding the best solution).
C) Graph traversal.
D) Sorting algorithms.
* Question: What is the base case for the recursive function in the Hamiltonian
Cycle problem?
A) When the current path length equals V.
B) When all vertices have been visited and there's an edge back to the starting
vertex.
C) When a cycle is detected.
D) When the graph becomes disconnected.
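The base case can be seen in a minimal Python sketch of the Hamiltonian Cycle search (an illustrative sketch, not part of the question set): the recursion succeeds only when every vertex is on the path AND an edge leads back to the start.

```python
def hamiltonian_cycle(adj_matrix):
    """Find a Hamiltonian cycle by backtracking. Returns the cycle as a
    vertex list, or None if no cycle exists."""
    n = len(adj_matrix)
    path = [0]                    # fix vertex 0 as the starting vertex
    on_path = [False] * n
    on_path[0] = True

    def backtrack():
        if len(path) == n:        # base case: all vertices visited ...
            return adj_matrix[path[-1]][0] == 1  # ... and an edge closes the cycle
        for v in range(n):
            if not on_path[v] and adj_matrix[path[-1]][v] == 1:
                path.append(v)
                on_path[v] = True
                if backtrack():
                    return True
                path.pop()        # dead end: undo the choice and try another vertex
                on_path[v] = False
        return False

    return path + [0] if backtrack() else None
```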
* Question: Generating all valid combinations of parentheses for a given N (e.g.,
for N=2, (()), ()()) is a problem commonly solved using:
A) Greedy algorithms.
B) Dynamic Programming.
C) Backtracking.
D) Iterative loops.
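A minimal backtracking sketch for this problem (illustrative only): at each step we may open a parenthesis if any remain, and close one only if more have been opened than closed, so every generated string is balanced by construction.

```python
def generate_parentheses(n):
    """Generate all balanced strings of n pairs of parentheses via backtracking."""
    results = []

    def backtrack(current, open_used, close_used):
        if len(current) == 2 * n:
            results.append("".join(current))
            return
        if open_used < n:              # we may still open a parenthesis
            current.append("(")
            backtrack(current, open_used + 1, close_used)
            current.pop()              # undo the choice
        if close_used < open_used:     # closing is valid only if one is open
            current.append(")")
            backtrack(current, open_used, close_used + 1)
            current.pop()

    backtrack([], 0, 0)
    return results
```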
* Question: The main advantage of backtracking over a naive brute-force approach
for problems like N-Queens is that it:
A) Always finds the solution faster.
B) Systematically prunes invalid partial solutions early.
C) Uses less memory.
D) Is easier to implement.
* Question: In backtracking, a "solution vector" is typically used to:
A) Store the time complexity of the algorithm.
B) Keep track of the choices made at each step to build a potential solution.
C) Represent the graph's adjacency.
D) Store all possible inputs.
* Question: The "m-coloring problem" in graph theory asks:
A) What is the minimum number of colors needed to color the graph?
B) Can the graph be colored using at most 'm' colors such that no two adjacent
vertices share the same color?
C) How many different ways can the graph be colored with 'm' colors?
D) Is the graph bipartite?
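The m-coloring decision problem has a compact backtracking formulation (a sketch for illustration, assuming the graph is given as an adjacency list): try each color for the current vertex, recurse, and undo on failure.

```python
def can_color(adj, m):
    """Decide the m-coloring problem for a graph given as an adjacency list:
    can every vertex receive one of m colors with no edge monochromatic?"""
    n = len(adj)
    colors = [None] * n

    def backtrack(v):
        if v == n:
            return True                # all vertices colored consistently
        for c in range(m):
            # A color is usable if no already-colored neighbor has it.
            if all(colors[u] != c for u in adj[v]):
                colors[v] = c
                if backtrack(v + 1):
                    return True
                colors[v] = None       # undo and try the next color
        return False                   # dead end: backtrack to the caller

    return backtrack(0)
```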
* Question: If a backtracking algorithm is designed to find only one solution and
stops after finding it, it is finding:
A) The optimal solution.
B) Any valid solution.
C) All possible solutions.
D) The shortest solution.
* Question: How is the state of the board (e.g., queen positions, Sudoku cell
values) typically represented in backtracking problems like N-Queens or Sudoku
Solver?
A) A linked list.
B) An adjacency matrix.
C) A 2D array (matrix).
D) A hash table.
* Question: The problem of generating all subsets (power set) of a given set is a
fundamental combinatorial problem often solved using:
A) Greedy algorithms.
B) Dynamic Programming.
C) Backtracking.
D) Divide and Conquer.
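Generating the power set follows the classic include/exclude backtracking pattern, sketched below (illustrative only): each element is either left out or taken, giving 2^n subsets.

```python
def power_set(items):
    """Enumerate all subsets of items via include/exclude backtracking."""
    results = []

    def backtrack(i, current):
        if i == len(items):
            results.append(list(current))  # one complete subset per leaf
            return
        backtrack(i + 1, current)          # choice 1: exclude items[i]
        current.append(items[i])           # choice 2: include items[i]
        backtrack(i + 1, current)
        current.pop()                      # undo before returning

    backtrack(0, [])
    return results
```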
* Question: In the context of a backtracking search tree, what does the "depth" of
a node typically represent?
A) The number of solutions found so far.
B) The number of choices made or elements placed to reach that partial state.
C) The maximum number of nodes in the tree.
D) The time taken to reach that node.
* Question: Which type of problem is more likely to involve finding all possible
solutions using backtracking?
A) Shortest path problems.
B) Optimization problems (e.g., 0/1 Knapsack).
C) Constraint satisfaction problems like N-Queens or Sudoku (where multiple
solutions might exist).
D) Minimum Spanning Tree.
* Question: What is the primary challenge when implementing a backtracking
algorithm iteratively (without using direct recursion)?
A) It is generally impossible to do so.
B) Explicitly managing a stack to simulate the recursion call stack.
C) It becomes less memory efficient.
D) The logic becomes simpler.
* Question: Finding all paths from a starting point to an ending point in a maze
is a common application of:
A) Breadth-First Search (BFS).
B) Depth-First Search (DFS) with backtracking.
C) Dijkstra's Algorithm.
D) Prim's Algorithm.
* Question: When a backtracking algorithm encounters a "dead end" (a partial
solution that cannot be extended to a valid complete solution), it performs which
action?
A) It terminates the entire search.
B) It undoes the last choice and explores the next alternative at the previous
decision point.
C) It marks the current path as permanently invalid.
D) It restarts the search from the beginning.
* Question: The "Knight's Tour" problem, which seeks to find a sequence of moves

for a knight on a chessboard such that it visits every square exactly once, is a
classic problem solved using:
A) Greedy algorithms.
B) Dynamic Programming.
C) Backtracking.
D) Divide and Conquer.
* Question: In the N-Queens problem, the isSafe() function checks for conflicts
along rows, columns, and:
A) Adjacent squares.
B) Diagonals.
C) The center of the board.
D) The edges of the board.
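The isSafe() check referenced above can be sketched in Python (an illustrative version, assuming one queen per row so only columns and diagonals need testing; two squares share a diagonal exactly when their row and column distances are equal):

```python
def is_safe(queens, row, col):
    """Check whether placing a queen at (row, col) conflicts with queens
    already placed in earlier rows. queens[r] is the column of row r's queen.
    Row conflicts are impossible by construction (one queen per row)."""
    for r in range(row):
        c = queens[r]
        if c == col:                       # same column
            return False
        if abs(c - col) == abs(r - row):   # same diagonal
            return False
    return True

def solve_n_queens(n):
    """Count all placements of n non-attacking queens via backtracking."""
    count = 0
    queens = []

    def backtrack(row):
        nonlocal count
        if row == n:
            count += 1
            return
        for col in range(n):
            if is_safe(queens, row, col):
                queens.append(col)   # place the queen
                backtrack(row + 1)
                queens.pop()         # undo (the backtracking step)

    backtrack(0)
    return count
```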
* Question: When a backtracking algorithm is used to solve an optimization problem
(e.g., finding the maximum profit subset), what additional mechanism is usually
employed?
A) A fixed upper bound on the number of solutions.
B) A variable to store the best (maximum/minimum) solution found so far.
C) A hash table to store all possible solutions.
D) A priority queue to explore paths.
* Question: Which of the following problems is an example where backtracking is
used to find a solution, and the problem itself is known to be NP-complete?
A) N-Queens.
B) Sudoku Solver.
C) Hamiltonian Cycle.
D) All of the above.
* Question: For the Sum of Subsets problem, if the input elements are sorted, an
additional pruning condition can be applied: if the current sum plus the next
element is greater than the target sum, then:
A) We must include the next element.
B) We must exclude the next element and backtrack.
C) The current path is a solution.
D) The target sum is too small.
* Question: The "Branch and Bound" technique is an extension of backtracking that
is primarily used for:
A) Proving NP-completeness.
B) Solving optimization problems more efficiently by pruning branches that
cannot lead to a better solution than the current best.
C) Generating all permutations.
D) Parallelizing recursive algorithms.
* Question: What is often the most critical aspect in designing an efficient
backtracking algorithm?
A) Ensuring the base cases are simple.
B) Defining a clear recursive structure.
C) Developing effective pruning (bounding) conditions to reduce the search
space.
D) Minimizing the number of parameters passed in recursive calls.
* Question: When generating permutations of a set with duplicate elements, how can
backtracking be modified to avoid generating identical permutations?
A) It cannot be modified.
B) Use a hash set to store and check for unique permutations.
C) Sort the input array and skip duplicate elements during recursive calls.
D) Use a dynamic programming table.
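The sort-and-skip technique can be sketched as follows (illustrative only): after sorting, a value equal to its left neighbor is skipped unless that neighbor is already in the current permutation, so each distinct arrangement is produced exactly once.

```python
def unique_permutations(nums):
    """Generate distinct permutations of nums (which may contain duplicates)
    by sorting first and skipping repeated values at the same tree depth."""
    nums = sorted(nums)
    results, used = [], [False] * len(nums)

    def backtrack(current):
        if len(current) == len(nums):
            results.append(list(current))
            return
        for i in range(len(nums)):
            if used[i]:
                continue
            # Skip a duplicate unless its identical left neighbor is in use.
            if i > 0 and nums[i] == nums[i - 1] and not used[i - 1]:
                continue
            used[i] = True
            current.append(nums[i])
            backtrack(current)
            current.pop()        # undo the choice
            used[i] = False

    backtrack([])
    return results
```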
* Question: Which graph traversal algorithm is most analogous to the systematic
exploration of paths in a maze using backtracking?
A) Breadth-First Search (BFS).
B) Depth-First Search (DFS).
C) Dijkstra's Algorithm.
D) Prim's Algorithm.
* Question: If a backtracking algorithm reaches a leaf node in the state-space
tree that does not represent a valid solution, what is the next action?
A) It reports an error and terminates.
B) It backtracks to its parent node to explore other choices.
C) It continues to search for children nodes.
D) It re-evaluates the entire problem.
* Question: To prevent revisiting nodes or elements in a single path during a
backtracking search (e.g., in finding paths in a graph), what is a common
technique?
A) A global counter.
B) A boolean array or a set to mark visited states.
C) A linked list to store the path.
D) A priority queue.
* Question: In the N-Queens problem, why is checking diagonals crucial?
A) To ensure queens are placed in different rows.
B) To ensure queens are placed in different columns.
C) Because queens can attack along diagonals.
D) To minimize the total number of queens.
* Question: The problem of generating all possible combinations of k items from a
set of n items is a typical application of:
A) Greedy algorithms.
B) Dynamic Programming.
C) Backtracking.
D) Divide and Conquer.
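Generating k-of-n combinations rounds out the backtracking patterns above; a minimal sketch (illustrative only), including a simple bound that stops exploring when too few numbers remain to fill the combination:

```python
def combinations(n, k):
    """Generate all k-element combinations of {1, ..., n} via backtracking."""
    results = []

    def backtrack(start, current):
        if len(current) == k:
            results.append(list(current))
            return
        # Prune: x can go at most to n - (k - len(current)) + 1, otherwise
        # fewer than k - len(current) numbers remain after choosing x.
        for x in range(start, n - (k - len(current)) + 2):
            current.append(x)
            backtrack(x + 1, current)
            current.pop()    # undo the choice

    backtrack(1, [])
    return results
```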
Solutions: Backtracking
* C) Systematically explore all possible solutions, abandoning paths that cannot
lead to a valid solution.
* B) Eliminating branches of the search space that are guaranteed not to contain a
solution.
* C) Backtracking.
* C) Solving Sudoku puzzles.
* B) All possible partial and complete solutions to the problem.
* B) Prune the current branch and backtrack.
* C) Exponential or factorial.
* B) Explores all valid paths, while DP focuses on solving overlapping subproblems
once.
* C) Backtracking.
* B) Different row, different column, different diagonal.
* B) Verify if a partial solution satisfies problem constraints.
* C) Artificial Intelligence (e.g., constraint satisfaction problems, game
playing).
* B) Visits every vertex exactly once and returns to the starting vertex.
* B) The current path cannot lead to a solution, so backtrack.
* B) It avoids exploring paths that are guaranteed to be invalid or suboptimal.
* C) Backtracking.
* B) Stack (the call stack).
* C) Backtracks to the previous vertex and tries a different color.
* C) Backtracking is often implemented using recursion, but not all recursive
algorithms are backtracking.
* B) Depth-First Search (DFS) with backtracking.
* B) One.
* B) A node from which no further choices can lead to a valid solution.
* B) The current path cannot possibly reach the target sum, so backtrack.
* B) Optimization problems (finding the best solution).
* B) When all vertices have been visited and there's an edge back to the starting
vertex.
* C) Backtracking.
* B) Systematically prunes invalid partial solutions early.
* B) Keep track of the choices made at each step to build a potential solution.
* B) Can the graph be colored using at most 'm' colors such that no two adjacent
vertices share the same color?
* B) Any valid solution.
* C) A 2D array (matrix).
* C) Backtracking.
* B) The number of choices made or elements placed to reach that partial state.
* C) Constraint satisfaction problems like N-Queens or Sudoku (where multiple
solutions might exist).
* B) Explicitly managing a stack to simulate the recursion call stack.
* B) Depth-First Search (DFS) with backtracking.
* B) It undoes the last choice and explores the next alternative at the previous
decision point.
* C) Backtracking.
* B) Diagonals.
* B) A variable to store the best (maximum/minimum) solution found so far.
* C) Hamiltonian Cycle.
* B) We must exclude the next element and backtrack.
* B) Solving optimization problems more efficiently by pruning branches that
cannot lead to a better solution than the current best.
* C) Developing effective pruning (bounding) conditions to reduce the search
space.
* C) Sort the input array and skip duplicate elements during recursive calls.
* B) Depth-First Search (DFS).
* B) It backtracks to its parent node to explore other choices.
* B) A boolean array or a set to mark visited states.
* C) Because queens can attack along diagonals.
* C) Backtracking.
Computational Complexity
* Question: Which complexity class includes all decision problems that can be
solved by a deterministic algorithm in polynomial time?
A) NP
B) P
C) NP-hard
D) EXP
* Question: A problem is in the class NP if:
A) It can be solved in polynomial time by a deterministic algorithm.
B) It cannot be solved in polynomial time.
C) A given candidate solution can be verified in polynomial time by a
deterministic algorithm.
D) It can be solved by a non-deterministic algorithm in exponential time.
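The idea of polynomial-time verification can be made concrete with Subset Sum: finding a satisfying subset may take exponential time, but checking a proposed certificate (a candidate subset, here given as indices) is a single linear pass. A minimal sketch (names are mine):

```python
def verify_subset_sum(nums, target, certificate):
    """Polynomial-time verifier for Subset Sum: given a candidate subset
    (as a list of indices into nums), check it in O(len(certificate)).
    Finding such a certificate is the hard part; checking it is easy."""
    if len(set(certificate)) != len(certificate):   # indices must be distinct
        return False
    if any(i < 0 or i >= len(nums) for i in certificate):
        return False
    return sum(nums[i] for i in certificate) == target
```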
* Question: The P vs. NP problem is:
A) A proven theorem stating P = NP.
B) A proven theorem stating P \ne NP.
C) An unsolved problem in theoretical computer science.
D) A problem that has been disproven.
* Question: If problem A can be polynomially reduced to problem B (A \le_p B), and
problem B is known to be in P, then problem A must also be:
A) NP-complete.
B) NP-hard.
C) In P.
D) Undecidable.
* Question: A problem is classified as NP-hard if:
A) It is in NP and is also NP-complete.
B) Every problem in NP can be reduced to it in polynomial time.
C) It has an exponential time complexity, but its verification is polynomial.
D) It is a decision problem that is considered very difficult.
* Question: Which of the following is a defining characteristic of an NP-complete
problem?
A) It can be solved in polynomial time.
B) It is both in NP and NP-hard.
C) It is known to have no polynomial-time solution.
D) It is a subset of the class P.
* Question: The Satisfiability Problem (SAT) is a foundational example of a(n):
A) P problem.
B) NP-complete problem.
C) Undecidable problem.
D) Optimization problem.
* Question: Which of the following problems is generally considered to be in P?
A) Traveling Salesperson Problem (decision version).
B) Hamiltonian Cycle Problem.
C) Finding the shortest path in a graph with non-negative edge weights.
D) Subset Sum Problem.
* Question: What is the commonly accepted belief regarding the relationship
between P and NP?
A) P = NP.
B) P is a proper subset of NP (P \subsetneq NP).
C) NP is a proper subset of P (NP \subsetneq P).
D) P and NP are disjoint sets.
* Question: If a problem is proven to be NP-complete, what does this imply about
its solvability?
A) It can be solved in linear time.
B) It is impossible to solve.
C) It is among the hardest problems in NP, and likely has no polynomial-time
solution.
D) It has only one unique solution.
* Question: An "intractable problem" is a problem for which:
A) No algorithm exists to solve it.
B) A solution exists, but no known polynomial-time algorithm can find it.
C) It can only be solved by a non-deterministic Turing machine.
D) It requires an infinite amount of memory to solve.
* Question: Which of the following is an example of an NP-hard problem that is
typically an optimization problem (and thus not directly in NP)?
A) Vertex Cover (decision version).
B) Longest Path Problem (finding the longest path in a general graph).
C) 3-SAT (decision version).
D) Clique Problem (decision version).
* Question: A "decision problem" is a computational problem that:
A) Seeks to find the optimal value of a function.
B) Outputs a numerical value.
C) Has a yes/no answer.
D) Involves making choices at each step of an algorithm.
* Question: If a problem is undecidable, it means:
A) It is NP-complete.
B) It is NP-hard.
C) No algorithm exists that can solve it for all possible inputs in finite time.
D) It can be solved using approximation algorithms.
* Question: Which of the following is a classic NP-complete problem?
A) Sorting an array.
B) Finding the Minimum Spanning Tree.
C) The Hamiltonian Cycle Problem.
D) Searching for an element in a data structure.
* Question: The class NP is defined as the set of decision problems for which:
A) A solution can be found in polynomial time.
B) A given solution can be verified in polynomial time.
C) No polynomial-time solution exists.
D) The problem is intractable.
* Question: What is the primary implication of proving a problem to be NP-
complete?
A) It guarantees that an efficient (polynomial-time) solution exists for it.
B) It means the problem is trivial and easy to solve.
C) It strongly suggests that no efficient (polynomial-time) solution exists,
guiding research towards approximation or heuristics.
D) It implies that the problem is undecidable.
* Question: If P = NP were proven true, what would be the most significant
consequence?
A) All problems would become undecidable.
B) All NP-complete problems would be solvable in polynomial time.
C) Only problems in P would be affected.
D) The concept of computational complexity would become irrelevant.
* Question: Which of the following is an example of a problem that is definitively
in P?
A) Graph Isomorphism.
B) Integer Linear Programming.
C) Matrix Multiplication (e.g., standard algorithm).
D) Circuit Satisfiability.
* Question: What is the relationship between NP-complete and NP-hard problems?
A) All NP-hard problems are NP-complete.
B) All NP-complete problems are NP-hard.
C) NP-complete problems are a proper superset of NP-hard problems.
D) NP-hard problems are always easier than NP-complete problems.
* Question: A "polynomial-time reduction" from problem A to problem B (A \le_p B)
means that:
A) Problem A can be solved faster than problem B.
B) An instance of problem A can be transformed into an equivalent instance of
problem B in polynomial time.
C) Both problems have the same time complexity.
D) Problem B is a special case of problem A.
* Question: Problems that can be solved in a "reasonable" amount of time (i.e.,
polynomial time) belong to which complexity class?
A) NP
B) NP-hard
C) P
D) EXP
* Question: If problem A is NP-complete, and problem B is in P, then it is
generally accepted that:
A) Problem A is easier than Problem B.
B) Problem B is easier than Problem A.
C) Problem A and Problem B have similar computational difficulty.
D) There is no known relationship between their difficulties.
* Question: For practical applications involving NP-complete problems, which
approach is most commonly used?
A) Exhaustive brute-force search.
B) Developing exact algorithms that guarantee polynomial time solutions.
C) Using approximation algorithms or heuristics.
D) Declaring the problem unsolvable.
* Question: The decision version of the Clique Problem (Does a graph G contain a
clique of size at least k?) is:
A) A P problem.
B) An NP-complete problem.
C) An NP-hard problem but not in NP.
D) An undecidable problem.
* Question: The class of problems for which no algorithm exists that can solve
them for all inputs in finite time is called:
A) P
B) NP
C) NP-hard
D) Undecidable
* Question: If problem A is NP-hard, and problem B is NP-complete, and A can be
reduced to B in polynomial time, then A is:
A) Also NP-complete.
B) In P.
C) Easier than B.
D) Undecidable.
* Question: The decision version of the Subset Sum Problem (Is there a subset of a
given set of integers that sums to a target value?) is:
A) A P problem.
B) An NP-complete problem.
C) An NP-hard problem but not in NP.
D) An undecidable problem.
* Question: Which statement accurately describes the relationship between NP-
complete and NP-hard problems?
A) NP-complete problems are a subset of NP-hard problems.
B) NP-hard problems are a subset of NP-complete problems.
C) NP-complete problems are disjoint from NP-hard problems.
D) They are synonymous terms.
* Question: The term "polynomial time" in complexity theory means that the running
time of an algorithm is bounded by:
A) A constant value.
B) A logarithmic function of the input size.
C) A polynomial function of the input size.
D) An exponential function of the input size.
* Question: If a problem is in P, it is also in NP. This statement is:
A) True.
B) False.
C) True only if P = NP.
D) False only if P \ne NP.
* Question: If P = NP were proven true, what would be the implication for all NP-
complete problems?
A) They would become undecidable.
B) They would all have polynomial-time deterministic algorithms.
C) They would remain intractable.
D) They would be easier to verify but not solve.
* Question: The complexity class EXPTIME contains problems solvable by a
deterministic Turing machine in:
A) Polynomial time.
B) Exponential time.
C) Logarithmic time.
D) Constant time.
* Question: Which of the following is a classic example of an undecidable problem?
A) The Traveling Salesperson Problem.
B) The Halting Problem.
C) The Graph Coloring Problem.
D) The Knapsack Problem.
* Question: What is the fundamental difference between a decision problem and an
optimization problem?
A) A decision problem has a yes/no answer, while an optimization problem seeks
the best possible solution.
B) Decision problems are always easier than optimization problems.
C) Optimization problems can always be reduced to decision problems.
D) Decision problems are solved by deterministic algorithms, optimization
problems by non-deterministic.
* Question: If problem A is NP-complete, and problem B is NP-hard, and A can be
reduced to B in polynomial time, then B is:
A) Necessarily in P.
B) Necessarily NP-complete.
C) At least as hard as A.
D) Easier than A.
* Question: A problem is "strongly NP-hard" if it remains NP-hard even when:
A) The input size is very small.
B) The numerical values in the input are polynomially bounded (e.g., small
numbers).
C) It is restricted to a decision problem.
D) It has a known approximation algorithm.
* Question: The Vertex Cover Problem (finding a minimum set of vertices that
includes at least one endpoint of every edge) is:
A) A P problem.
B) An NP-complete problem (decision version).
C) An NP-hard optimization problem.
D) An undecidable problem.
* Question: Why are NP-complete problems considered significant in computer
science?
A) They are the only problems that can be solved by computers.
B) They represent a class of problems for which no efficient solution is known,
and finding one would have profound implications.
C) They are always solvable in linear time.
D) They are primarily used for theoretical exercises and have no practical
applications.
* Question: In the context of NP, what does "non-deterministic" imply about an
algorithm?
A) It uses random numbers to find a solution.
B) It can "guess" the correct path or solution and then verify it in polynomial
time.
C) Its behavior is unpredictable and varies with each run.
D) It runs on a parallel computing architecture.
* Question: The 3-SAT problem (Boolean satisfiability problem where each clause
has exactly three literals) is:
A) In P.
B) NP-complete.
C) Undecidable.
D) Solvable by a greedy algorithm.
* Question: What is the relationship between the class P and the class of NP-hard
problems?
A) P is a subset of NP-hard.
B) NP-hard is a subset of P.
C) If P=NP, then P is a subset of NP-hard.
D) They are generally considered distinct, with NP-hard problems being harder
than P problems.
* Question: The complexity class PSPACE includes problems that can be solved
using:
A) Polynomial time.
B) Polynomial space.
C) Exponential time.
D) Logarithmic space.
* Question: To prove that a decision problem is NP-complete, one typically needs
to show that:
A) It can be solved in polynomial time.
B) It is in NP, and a known NP-complete problem can be reduced to it in
polynomial time.
C) It is undecidable.
D) It can be solved by a greedy algorithm.
* Question: The Bin Packing Problem (packing items of various sizes into the
minimum number of bins of fixed capacity) is an example of an:
A) P problem.
B) NP-complete problem (decision version).
C) Undecidable problem.
D) Linear time problem.
* Question: If P \ne NP is true, what is the implication for NP-complete problems?
A) They cannot be solved by any algorithm.
B) They cannot be solved in polynomial time by any deterministic algorithm.
C) They are all undecidable.
D) They can be solved in exponential time, but not faster.
* Question: The Graph Isomorphism Problem (determining if two graphs are
structurally identical) is currently known to be:
A) In P.
B) NP-complete.
C) NP-hard.
D) In NP, but its status as P or NP-complete is unknown.
* Question: Which of the following statements correctly describes a relationship
between complexity classes?
A) P = NP-complete.
B) P \cap NP-complete = \emptyset.
C) P \subseteq NP and NP-complete \subseteq NP.
D) NP-hard \subseteq NP-complete.
* Question: The problem of finding the Shortest Common Supersequence (SCS) of an
arbitrary number of input strings is:
A) In P.
B) NP-complete.
C) Solvable by a greedy algorithm.
D) Undecidable.
* Question: How are problems typically classified into complexity classes like P
and NP?
A) By empirical testing on various computer systems.
B) By mathematical proofs based on abstract models of computation (e.g., Turing
machines).
C) By surveying expert programmers.
D) By the number of lines of code required for their solution.
Solutions: Computational Complexity
* B) P
* C) A given candidate solution can be verified in polynomial time by a
deterministic algorithm.
* C) An unsolved problem in theoretical computer science.
* C) In P.
* B) Every problem in NP can be reduced to it in polynomial time.
* B) It is both in NP and NP-hard.
* B) NP-complete problem.
* C) Finding the shortest path in a graph with non-negative edge weights.
* B) P is a proper subset of NP (P \subsetneq NP).
* C) It is among the hardest problems in NP, and likely has no polynomial-time
solution.
* B) A solution exists, but no known polynomial-time algorithm can find it.
* B) Longest Path Problem (finding the longest path in a general graph).
* C) Has a yes/no answer.
* C) No algorithm exists that can solve it for all possible inputs in finite time.
* C) The Hamiltonian Cycle Problem.
* B) A given solution can be verified in polynomial time.
* C) It strongly suggests that no efficient (polynomial-time) solution exists,
guiding research towards approximation or heuristics.
* B) All NP-complete problems would be solvable in polynomial time.
* C) Matrix Multiplication (e.g., standard algorithm).
* B) All NP-complete problems are NP-hard.
* B) An instance of problem A can be transformed into an equivalent instance of
problem B in polynomial time.
* C) P
* B) Problem B is easier than Problem A.
* C) Using approximation algorithms or heuristics.
* B) An NP-complete problem.
* D) Undecidable
* A) Also NP-complete.
* B) An NP-complete problem.
* A) NP-complete problems are a subset of NP-hard problems.
* C) A polynomial function of the input size.
* A) True.
* B) They would all have polynomial-time deterministic algorithms.
* B) Exponential time.
* B) The Halting Problem.
* A) A decision problem has a yes/no answer, while an optimization problem seeks
the best possible solution.
* C) At least as hard as A.
* B) The numerical values in the input are polynomially bounded (e.g., small
numbers).
* C) An NP-hard optimization problem.
* B) They represent a class of problems for which no efficient solution is known,
and finding one would have profound implications.
* B) It can "guess" the correct path or solution and then verify it in polynomial
time.
* B) NP-complete.
* D) They are generally considered distinct, with NP-hard problems being harder
than P problems.
* B) Polynomial space.
* B) It is in NP, and a known NP-complete problem can be reduced to it in
polynomial time.
* B) NP-complete problem (decision version).
* B) They cannot be solved in polynomial time by any deterministic algorithm.
* D) In NP, but its status as P or NP-complete is unknown.
* C) P \subseteq NP and NP-complete \subseteq NP.
* B) NP-complete.
* B) Mathematical proofs based on abstract models of computation (e.g., Turing
machines).
