
Tractable vs Non-Tractable Problems Explained

The document discusses the differences between tractable and non-tractable problems, highlighting that tractable problems can be solved efficiently in polynomial time, while non-tractable problems require exponential time and are impractical for large inputs. It also explains approximation algorithms, which provide near-optimal solutions for NP-hard problems within polynomial time, and the advantages of randomized algorithms that improve performance and simplify implementation. Additionally, it emphasizes the suitability of Insertion Sort for embedded systems due to its low memory usage, efficiency with small datasets, and stable performance.

Differences between Tractable and Non-tractable Problems

Tractable Problems                                  | Non-tractable Problems
----------------------------------------------------|----------------------------------------------------
Can be solved in polynomial time                    | Require exponential or factorial time
Fast and efficient                                  | Very slow and inefficient
Practical to solve even for large inputs            | Impractical as input size increases
Always produce results in reasonable time           | May take years, or be impossible, to compute
Complexity examples: O(n), O(n log n), O(n²)        | Complexity examples: O(2ⁿ), O(n!)
Belong to class P (Polynomial)                      | Belong to NP, NP-complete, NP-hard
Algorithms are easy to design and implement         | Exact solutions are hard or impossible to compute
Used in real-life systems: searching, sorting,      | Mostly theoretical, or solved using approximation
shortest path                                       | or heuristics
Mostly solved by deterministic algorithms           | Mostly solved by approximation, heuristic, or
                                                    | randomized algorithms
Examples: Dijkstra, Kruskal, Merge Sort,            | Examples: TSP, Graph Coloring, 0/1 Knapsack
Binary Search                                       | (exact solution)

One-line summary

Tractable problems can be solved efficiently in polynomial time, while non-tractable problems require exponential time and are impractical for large inputs.

Simple example to remember

 Tractable → Like riding a bike: the effort grows only slowly as the distance increases.

 Non-tractable → Like climbing a mountain: every extra step becomes much harder.
Q6 b) What is an Approximation Algorithm? How is
performance ratio useful?
Approximation Algorithm

 Many difficult problems (NP-hard problems) cannot be solved exactly in reasonable time because they take exponential time.

 For such problems, we use Approximation Algorithms.

 An approximation algorithm gives a near-optimal solution (not exact) within reasonable, polynomial time.

 These algorithms are used when the exact solution is too slow or impossible to compute for large inputs.

Simple Example

 Travelling Salesman Problem (TSP): Find the shortest route visiting all cities once.

 An exact algorithm may take years for a large number of cities.

 Approximation algorithm gives a solution close to the shortest path quickly.

Performance Ratio

 Performance ratio measures how good (close) the approximate solution is compared to the optimal solution.

 If C is the cost of the solution produced by the approximation algorithm and C* is the cost of the optimal solution, then the algorithm has performance (approximation) ratio ρ(n) if:

max(C / C*, C* / C) ≤ ρ(n)

A ratio closer to 1 means a better approximation.
Why Performance Ratio is Useful?

Advantage                                  | Explanation
-------------------------------------------|---------------------------------------------------
Measures quality                           | Tells how close the approximate answer is to the
                                           | true optimal answer
Helps compare algorithms                   | Can judge which approximation algorithm is better
Gives performance guarantee                | Shows the worst-case error limit
Useful when exact algorithms are           | Helps in choosing practical solutions
impossible                                 |

Final exam definition

Approximation algorithms are algorithms that find near-optimal solutions for NP-hard
problems in polynomial time. Performance ratio is used to measure how close the
approximate solution is to the optimal solution, giving a guarantee of the quality of the
solution.
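To make the definition concrete, here is a minimal sketch (our own illustration, not from the original notes) of a classic constant-factor approximation: the greedy 2-approximation for Vertex Cover, which adds both endpoints of every still-uncovered edge. The chosen edges form a matching, so the returned cover is at most twice the size of an optimal cover.

```python
def approx_vertex_cover(edges):
    """2-approximation for Vertex Cover: scan the edges and, whenever
    an edge is not yet covered, add BOTH of its endpoints. Any optimal
    cover must contain at least one endpoint of each chosen edge, so
    this cover is at most 2x the optimum."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Triangle graph: the optimal cover has 2 vertices.
edges = [(0, 1), (1, 2), (0, 2)]
print(approx_vertex_cover(edges))
```

This runs in polynomial (here, linear) time while the exact Vertex Cover problem is NP-hard, which is exactly the trade-off approximation algorithms make.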

Q6 b) What are Approximation Algorithms? Based on approximation ratio, classify the approximation algorithms
Approximation Algorithms

 Many problems, such as NP-hard problems, cannot be solved exactly in reasonable time because their time complexity is exponential or factorial.

 For such problems, Approximation Algorithms are used.

 An approximation algorithm finds a solution that is close to the optimal solution but
may not be perfect.

 It runs in polynomial time, giving a near-optimal and practical answer.

Example

Travelling Salesman Problem (TSP), Knapsack, Graph Coloring, Vertex Cover, etc.
Classification of Approximation Algorithms Based on Approximation Ratio

1. 1-Approximation (exact) algorithms: ratio ρ = 1; the solution is optimal.

2. Constant-factor (r-approximation) algorithms: the ratio is a fixed constant r > 1 (example: the 2-approximation for Vertex Cover).

3. PTAS (Polynomial-Time Approximation Scheme): for every ε > 0, gives a (1 + ε)-approximation in time polynomial in n (but possibly exponential in 1/ε).

4. FPTAS (Fully Polynomial-Time Approximation Scheme): like a PTAS, but the running time is polynomial in both n and 1/ε (example: the FPTAS for 0/1 Knapsack).

Simple Summary

 1-Approximation → Exact answer

 r-Approximation → Near answer (within r times optimal)

 PTAS → Very close answer, adjustable accuracy

 FPTAS → Very close answer like PTAS but faster

Randomized Algorithm (Detailed Explanation)


A randomized algorithm is an algorithm that uses random numbers or random decisions at
some steps while solving a problem.
Because of the randomness, the output or the time taken may differ each time the algorithm is run on the same input.

Why do we use Randomized Algorithms?

Randomized algorithms are used because:

 Sometimes deterministic (normal) algorithms are too slow

 They help to avoid worst case performance

 They are simple and fast

 They give good average performance

Types of Randomized Algorithms

1. Las Vegas Algorithm

 Always gives the correct result

 Time may vary randomly

 Example: Randomized Quick Sort

2. Monte Carlo Algorithm

 Running time is fixed (does not depend on the random choices)

 Result is correct with high probability, but may occasionally be wrong

 Example: Randomized primality testing
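As a sketch of a Monte Carlo algorithm of this kind, here is a compact version of the Miller-Rabin primality test (the function name and the number of rounds are our own choices). Each round takes a fixed amount of work, and a composite number passes any single round with probability at most 1/4.

```python
import random

def is_probably_prime(n, rounds=20):
    """Miller-Rabin primality test (Monte Carlo): a composite n
    fools one random round with probability <= 1/4, so after
    `rounds` rounds the error probability is at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):          # handle small factors directly
        if n % p == 0:
            return n == p
    d, r = n - 1, 0                 # write n - 1 as d * 2^r with d odd
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)            # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # witness found: definitely composite
    return True                     # probably prime
```

A "probably prime" answer can be wrong (with negligible probability), while a "composite" answer is always correct; this is the characteristic Monte Carlo trade-off.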

Example: Randomized Quick Sort


Goal

To sort numbers in increasing order.

Normal Quick Sort

Chooses the first or last element as pivot. This may lead to worst-case performance if the array is already sorted.

Randomized Version

Randomly picks a pivot element → reduces chance of worst case.

Steps

1. Select a random pivot element from the array.

2. Partition array: elements smaller on left, bigger on right.

3. Recursively apply steps to left and right parts.

Example

Array: [10, 80, 30, 90, 40, 50, 70]

Suppose the randomly chosen pivot is 40.

Partition:

Left side < 40 : [10, 30]

Right side > 40 : [80, 90, 50, 70]

Sorted result after recursion: [10, 30, 40, 50, 70, 80, 90]
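The steps above can be sketched as follows (a list-building version for readability; production implementations usually partition in place):

```python
import random

def randomized_quicksort(a):
    """Quick Sort with a randomly chosen pivot. This is a Las Vegas
    algorithm: the result is always correctly sorted, only the
    running time varies with the random choices."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                  # random pivot avoids fixed worst cases
    left = [x for x in a if x < pivot]        # elements smaller than the pivot
    mid = [x for x in a if x == pivot]        # the pivot (and duplicates)
    right = [x for x in a if x > pivot]       # elements larger than the pivot
    return randomized_quicksort(left) + mid + randomized_quicksort(right)

print(randomized_quicksort([10, 80, 30, 90, 40, 50, 70]))
# → [10, 30, 40, 50, 70, 80, 90]
```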

Why is Randomized Quick Sort good?

 Avoids worst case

 Faster in practical use

 Expected time complexity: O(n log n)

Advantages of Randomized Algorithms

✔ Faster than deterministic algorithms in many cases


✔ Simple to write
✔ Works well with large data

Disadvantages
❌ Output or time may vary sometimes
❌ Not always 100% accurate (Monte Carlo type)

Real-life Example

Searching on Google

 Google uses randomization in ranking and load balancing.

 Randomly selects a server among many → response becomes faster.

Conclusion

A randomized algorithm uses randomness to make decisions and improves performance. It is useful for complex and large problems where deterministic methods are slow.

c) Does the randomized algorithm for Quick Sort improve the average-case time complexity?

Yes, the randomized version of Quick Sort improves the average-case (expected) time complexity.

Explanation (Simple and Easy Language)

In normal Quick Sort, we choose the first element or last element as pivot.
If the input array is already sorted or reverse sorted, it becomes the worst case, and the
time complexity becomes:

Worst Case Time Complexity

O(n²)

This is very slow for large data.

Randomized Quick Sort


In randomized Quick Sort, the pivot is chosen randomly rather than at a fixed position, so the chance of selecting the worst pivot every time is very low.

This reduces the probability of worst-case partitions.

Average Case Time Complexity

O(n log n)

Worst Case (still possible but very rare)

O(n²)

Conclusion

Yes, randomized quick sort improves the expected/average performance because:

 It reduces the chance of bad partitions.

 Works faster in practical cases.

 Gives O(n log n) average time.

Final Answer (Short and Easy)

Yes, the randomized algorithm for Quick Sort improves the average-case time complexity. By selecting a random pivot, it makes worst-case partitions very unlikely and ensures that the expected running time becomes O(n log n) instead of O(n²).

Q6 (c) What are randomized algorithms? List a few reasons to use randomized algorithms.
Randomized Algorithms
A randomized algorithm is an algorithm that uses random numbers or random choices
during execution to make decisions.
Because of randomness, the output or running time may vary for the same input each time
it runs.

Reasons to Use Randomized Algorithms


1. Improves Performance

o Randomization helps avoid worst-case scenarios and gives a better average running time.

2. Simplicity

o Many randomized algorithms are easier to implement than their deterministic counterparts.

3. Fast for Large Data

o Works efficiently on large datasets where deterministic algorithms are slow.

4. Good for Complex Problems

o Useful when exact solutions are difficult or impossible to compute.

5. Avoids Pattern-based failures

o Random choices break patterns in input that might degrade performance.

6. Provides Approximate Solutions

o Helps in approximation problems where exact result is not required.

Example

 Randomized Quick Sort

 Randomized Primality Test (Miller–Rabin)

Short Summary

Randomized algorithms use randomness to make decisions and are widely used because
they are simple, fast, avoid worst cases, and work well with complex or large problems.

Q6) a) What are the special needs of embedded algorithms? Which sorting algorithm is best for embedded systems? Why?

Special needs of Embedded Algorithms


Embedded algorithms are designed for embedded systems such as smart watches, sensors,
mobile devices, medical equipment, automotive systems, and IoT devices.
These systems have limited hardware resources, so the algorithm must be optimized for
performance and resource usage.

The special needs of embedded algorithms are:

1. Low Memory Requirement

Embedded devices have very small RAM and ROM. Therefore, algorithms should use minimal extra memory.

2. Low Power Consumption

Most embedded systems run on batteries. Algorithms should consume less power by reducing computations and memory accesses.

3. Fast and Efficient Execution

Processors in embedded systems are slower. Algorithms must be fast and efficient to produce output quickly.

4. Real-Time Response

Many embedded applications need results within a deadline (example: airbags, medical
devices).
Algorithms must provide deterministic and predictable timing.

5. Simple Implementation

Algorithms must be simple because code space is small and debugging is hard.

6. Reliability

Algorithms must work correctly all the time because many embedded systems are used in
critical applications.

Why Insertion Sort is best?

1. Requires very little memory

o It is an in-place sorting algorithm, so no extra memory is needed.

2. Simple and easy to implement

o Small and clean code fits easily in limited program space.


3. Efficient for small datasets

o Embedded systems usually process small sets of data such as sensor readings.

4. Low power usage

o Has fewer operations compared to complex algorithms.

5. Fast for almost sorted data

o Performs well when data is nearly sorted, common in embedded devices.

6. Deterministic performance

o Predictable execution behavior is useful in real-time systems.

Why Insertion Sort is preferred

1. In-place — needs only O(1) extra memory.

2. Very small code size & simple implementation — easy to write and verify.

3. Good for small arrays — embedded systems usually sort small buffers (e.g., sensor
readings of size 10–100). For small n, the O(n²) cost is acceptable and often faster
than more complex algorithms because of low overhead.

4. Adaptive — if the input is already nearly sorted, insertion sort runs in nearly O(n)
time (best case O(n)). Many sensor buffers are almost sorted, so it becomes very fast.

5. Stable — preserves order of equal elements (useful if records have multiple fields).

6. Deterministic / predictable timing: easier to reason about for real-time deadlines (though the worst case is O(n²), it rarely matters at small sizes).

7. Low power — fewer memory accesses and simple loop logic often means less energy
than complex algorithms for small n.

Example use-case

A microcontroller reads 20 latest sensor values into a small array and must compute the
median. Sorting 20 numbers with Insertion Sort is simple, fast in practice, and uses almost
no extra memory — ideal for embedded.
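The use-case above can be sketched as follows (the sensor values are hypothetical; the helper name is our own):

```python
def insertion_sort(a):
    """In-place insertion sort: O(1) extra memory, stable, and
    close to O(n) on almost-sorted data."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                 # drop key into its slot
    return a

# Hypothetical buffer of recent temperature readings; sort, then
# take the middle element as the median.
readings = [36.6, 36.5, 36.7, 36.6, 36.8]
insertion_sort(readings)
median = readings[len(readings) // 2]
print(median)   # → 36.6
```

No auxiliary array is allocated, which is the property that matters most on a RAM-constrained microcontroller.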

Short exam-style conclusion

Embedded algorithms must be memory- and power-efficient, simple, and predictable. For
sorting in embedded systems, Insertion Sort is usually the best choice because it is in-place,
has a tiny code and memory footprint, is adaptive (fast for nearly sorted small arrays),
stable, and simple to implement. For larger arrays or special key types, other algorithms
(Shell, Heap, Radix, or hardware sorting networks) may be considered depending on
available memory, CPU, and real-time requirements.
Conclusion

Embedded algorithms must be memory-efficient, power-efficient, fast, simple, and reliable. For sorting operations in embedded systems, Insertion Sort is the best choice because it uses little memory and power, has a simple implementation, and works efficiently for small inputs.

Q5 (a) Advantages and Disadvantages


i) Aggregate Analysis

Advantages

 Simple and easy to understand because we just calculate total cost and divide by
number of operations.

 Direct method to find amortized cost, without extra bookkeeping or a potential function.

 Works well for problems like binary counter, dynamic array insertion, etc.

 Gives a tight bound on worst-case average time per operation.

Disadvantages

 Does not show the cost of individual operations, only average.

 Cannot track how the cost changes step-by-step in the sequence.

 Not suitable when operations have different types of costs or require stored credit.

 Limited flexibility compared to accounting or potential method.

ii) Accounting Method (Banker’s Method)

Advantages

 Maintains extra credit that can pay for future expensive operations.

 Helps analyze algorithms where some operations depend on earlier operations (like
multipop in stack).

 Shows clearly how expensive operations are prepaid, giving better understanding.

 More flexible than aggregate method, works even when costs vary.

Disadvantages
 Harder to design because choosing the right amortized charge and credit value is not
always easy.

 If credits are assigned incorrectly, analysis becomes wrong.

 More complex to explain and implement in written form.

 Requires careful tracking of stored credits.

Short Summary

Method              | Advantages                           | Disadvantages
--------------------|--------------------------------------|-------------------------------------
Aggregate Analysis  | Simple, direct, easy formula,        | Doesn't show individual cost
                    | accurate average cost                | behavior, less flexible
Accounting Method   | Uses credits, handles variable       | More complex, hard to choose
                    | costs, flexible                      | charges, credit tracking needed

Final One-line Conclusion

Aggregate method is simple but less flexible, while accounting method is powerful but
complex due to managing credit.

Q5 (b) Suitable Sorting Algorithm for an Embedded Medical Device System
In an embedded system for a medical device, where continuous real-time sensor data such
as temperature, heart rate, blood pressure, and timestamps are collected and displayed,
the sorting algorithm must be:

 Fast and Real-Time

 Use very low memory

 Stable (preserve order of equal timestamps)

 Energy efficient (low power consumption)

 Predictable timing (no unexpected delays)

Suitable Sorting Algorithm


Insertion Sort

Justification

Insertion Sort is the best algorithm for real-time embedded systems like medical devices
because:

1. Works efficiently with streaming data

Data arrives continuously from sensors. Insertion Sort can sort as the data arrives, without waiting for the complete dataset.

Example:
When a new sensor reading arrives, it is inserted into the correct position immediately.
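This insert-on-arrival behaviour is exactly one step of insertion sort. A minimal sketch (the buffer contents are hypothetical):

```python
def insert_sorted(buffer, value):
    """One insertion-sort step: append a new reading and swap it
    leftwards until the already-sorted buffer is sorted again.
    O(n) shifts in the worst case, O(1) extra memory, stable."""
    buffer.append(value)
    i = len(buffer) - 1
    while i > 0 and buffer[i - 1] > buffer[i]:
        buffer[i - 1], buffer[i] = buffer[i], buffer[i - 1]
        i -= 1
    return buffer

buf = [36.4, 36.6, 36.9]      # already-sorted readings
insert_sorted(buf, 36.5)      # new reading arrives
print(buf)                    # → [36.4, 36.5, 36.6, 36.9]
```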

2. Very low memory usage

Works in-place and does not need extra memory. This is important because embedded devices have limited RAM.

3. Stable sorting

Maintains the order of records with the same timestamp, ensuring correct display of
multiple sensor values.

4. Predictable and consistent timing

The worst-case time complexity is known and bounded: O(n²). But in real-time applications where the data changes slowly and is almost sorted, the running time approaches O(n), so performance is very fast.

5. Low power consumption

Uses simple operations and requires fewer CPU cycles → saves battery life.

Conclusion

For real-time embedded medical devices that continuously monitor patient vitals, Insertion Sort is the most suitable sorting algorithm because it supports real-time data, uses low memory, is stable, is power-efficient, and works well on nearly sorted data.

Final Answer in One Line

Insertion Sort is best suited for this medical embedded system because it sorts incoming
sensor data in real-time, uses very little memory, is stable, and has predictable
performance suitable for low-power embedded hardware.

b) What is the Potential function method of amortized analysis? To illustrate the Potential method, find the amortized cost of the PUSH, POP and MULTIPOP stack operations.
Potential Method of Amortized Analysis

The Potential Method is used to find the amortized cost of operations by storing extra
“potential energy” during cheap operations and using it to pay for expensive operations
later.

We define a potential function Φ which depends on the current state of the data structure.
For an operation with actual cost c, the amortized cost is:

ĉ = c + (Φ_after − Φ_before)

that is, the actual cost plus the change in potential caused by the operation. If Φ never drops below its initial value, the total amortized cost is an upper bound on the total actual cost.

Amortized Analysis of Stack Operations (PUSH, POP, MULTIPOP)

Let Φ = number of elements in the stack. (This means the potential increases when we PUSH and decreases when we POP.)

1) PUSH operation
 Actual cost = 1

 Stack size increases by 1 → potential increases by 1

 Amortized cost = 1 + 1 = 2

2) POP operation

 Actual cost = 1

 Stack size decreases by 1 → potential decreases by 1

 Amortized cost = 1 − 1 = 0

3) MULTIPOP(k) operation

 Suppose t items are popped (t ≤ k)

 Actual cost = t

 Potential decreases by t

 Amortized cost = t − t = 0

Final Conclusion

 PUSH amortized cost = 2

 POP amortized cost = 0

 MULTIPOP amortized cost = 0

So the average or amortized cost per operation is O(1) (constant time).
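The amortized bound can be checked empirically with a small simulation (our own illustration; the operation mix and costs are arbitrary choices). Since each popped element must first have been pushed, the total actual cost of any operation sequence never exceeds 2 × (number of PUSHes), exactly the PUSH amortized cost derived above:

```python
import random

def run_stack_ops(n_ops, seed=0):
    """Run a random mix of PUSH / POP / MULTIPOP operations and count
    the total actual cost (1 per element pushed or popped)."""
    rng = random.Random(seed)
    stack, actual_cost, pushes = [], 0, 0
    for _ in range(n_ops):
        op = rng.choice(["push", "pop", "multipop"])
        if op == "push":
            stack.append(rng.random())
            actual_cost += 1
            pushes += 1
        elif op == "pop":
            if stack:
                stack.pop()
                actual_cost += 1
        else:                              # MULTIPOP(k): pop up to k items
            k = rng.randrange(1, 6)
            t = min(k, len(stack))
            del stack[len(stack) - t:]
            actual_cost += t
    return actual_cost, pushes

cost, pushes = run_stack_ops(10_000)
print(cost, "<=", 2 * pushes)   # matches the amortized analysis
```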

Q6 (a) Explain, with the help of examples, the methods of Amortized Analysis.
🔹 First – What is Amortized Analysis? (Simple Meaning)

Amortized analysis is a method used to find the average cost per operation in a sequence of
operations, even if some operations are expensive.

👉 Instead of analyzing one operation alone, we analyze many operations together and find
the average time.

Example:
A binary counter sometimes flips many bits (expensive), but most times flips only one bit
(cheap).
So instead of saying worst case O(k), amortized analysis tells us average is O(1).

Methods of Amortized Analysis (Simple and Easy Language)

There are three methods:

1️⃣ Aggregate Method

Simple meaning:

We calculate total cost of n operations and divide it by n.

Amortized Cost = Total Cost / n

Example: Binary Counter

If we update a counter n times, total bit flips < 2n.

So,

Amortized cost = 2n / n = 2 = O(1)

👉 Average cost per increment is constant time O(1).
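The "total bit flips < 2n" claim can be verified with a short sketch (bits are stored least-significant first; the counter width and n are arbitrary choices):

```python
def increment(bits):
    """Increment a binary counter stored as an LSB-first bit list;
    return the number of bit flips this increment performed."""
    flips, i = 0, 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0          # flip a trailing 1 to 0 (carry)
        flips += 1
        i += 1
    if i < len(bits):
        bits[i] = 1          # flip the first 0 to 1
        flips += 1
    return flips

k, n = 16, 1000              # 16-bit counter, 1000 increments
bits = [0] * k
total_flips = sum(increment(bits) for _ in range(n))
print(total_flips, "<", 2 * n)   # total flips < 2n → amortized O(1)
```

Bit 0 flips on every increment, bit 1 on every second, bit 2 on every fourth, and so on, so the total is below n(1 + 1/2 + 1/4 + ...) = 2n.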

2️⃣ Accounting Method

Simple Meaning:

We assign an amortized (slightly larger) cost to each operation in advance. The extra amount is saved like coins in a bank, called credit. Later, when an expensive operation happens, we pay for it from the saved credit.

Example: Stack (PUSH, POP)

Suppose:

 Assign amortized cost for PUSH = 2

 Assign amortized cost for POP = 0

Actual cost of PUSH = 1, so 1 extra coin is stored as credit.

Later, a POP with actual cost 1 is paid from the stored credit.

👉 The average cost becomes O(1).

3️⃣ Potential Method

Simple Meaning:

We define a special function Φ (potential) which stores future cost like energy.
Amortized cost depends on how much potential increases or decreases.

Formula:

ĉ = c + (Φ_new − Φ_old)

Example: Stack

Let Φ = number of items in the stack.

So:

 PUSH increases potential → amortized cost = 2

 POP decreases potential → amortized cost = 0

👉 Overall cost of stack operations = O(1)

📌 Conclusion

Method     | Main Idea                          | Example        | Result
-----------|------------------------------------|----------------|-------
Aggregate  | Divide total cost by n operations  | Binary counter | O(1)
Accounting | Save extra credit in cheap ops     | Stack          | O(1)
Potential  | Use potential energy difference    | Stack          | O(1)

✨ Final Summary (Very Short Exam Note)

Amortized analysis finds average running time per operation over a sequence of
operations. It guarantees performance better than worst case. There are three methods:
Aggregate, Accounting, and Potential. All methods try to show that even if some
operations are expensive, the average cost is low, usually O(1).
