DAA Notes (4th Semester)
Here are detailed notes based on the first PDF (DAA Introduction):

Introduction to Design and Analysis of Algorithms (DAA)

1. What is an Algorithm?

An algorithm is a finite sequence of well-defined instructions used to solve a problem or perform a computation.

Characteristics of a good algorithm:

 Correctness: It should produce the correct output for all valid inputs.

 Efficiency: It should run in minimal time and use optimal space.

 Definiteness: Each step must be precisely defined.

 Finiteness: It must terminate after a finite number of steps.

 Input/Output: It should take zero or more inputs and produce at least one output.
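To make these properties concrete, here is a minimal sketch in Python (illustrative, not from the PDF; the function name and sample data are my own): a linear search that takes an input, is definite at every step, terminates after at most n iterations, and produces an output.

def linear_search(items, target):
    """Return the index of target in items, or -1 if absent.

    Input: a list and a value to find.
    Definiteness: each step is a precise comparison.
    Finiteness: the loop runs at most len(items) times.
    Output: an index or the sentinel -1.
    """
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

print(linear_search([4, 2, 7, 1], 7))  # 2
print(linear_search([4, 2, 7, 1], 5))  # -1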

2. Importance of Algorithm Design

Algorithm design is crucial for:

 Optimizing performance (faster execution, less memory usage).

 Solving complex problems efficiently.

 Developing scalable and robust software systems.

3. Algorithm Analysis

The analysis of algorithms focuses on:

1. Time Complexity: Measures the time an algorithm takes to complete as a function of input
size.

2. Space Complexity: Measures the amount of memory an algorithm uses.

4. Time Complexity Notations

To describe the efficiency of algorithms, we use asymptotic notations:

 Big-O (O) → Upper bound (worst-case scenario). Example: O(n^2).

 Omega (Ω) → Lower bound (best-case scenario). Example: Ω(n).

 Theta (Θ) → Tight bound (the running time is bounded above and below by the same rate; often quoted for the average case). Example: Θ(n log n).
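As a quick sanity check (my own illustration, not from the notes), the snippet below tabulates how these growth rates diverge as n increases:

import math

# Compare the growth of log n, n log n, and n^2 as the input size n increases.
for n in (10, 100, 1000, 10_000):
    print(f"n={n:>6}  log n={math.log2(n):7.1f}  "
          f"n log n={n * math.log2(n):12.0f}  n^2={n * n:>12}")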

5. Algorithm Design Techniques

Different strategies are used to develop efficient algorithms:

1. Divide and Conquer:

o Breaks a problem into smaller subproblems, solves them recursively, and combines the results.
o Example: Merge Sort, Quick Sort, Binary Search.

2. Greedy Method:

o Makes a locally optimal choice at each step in the hope of finding the global optimum.

o Example: Kruskal’s Algorithm, Prim’s Algorithm, Huffman Coding.

3. Dynamic Programming (DP):

o Breaks a problem into overlapping subproblems and stores results to avoid redundant calculations.

o Example: Fibonacci Series, Knapsack Problem (see the Fibonacci sketch after this list).

4. Backtracking:

o Tries all possible solutions and backtracks when an invalid solution is encountered.

o Example: N-Queens Problem, Sudoku Solver.

5. Branch and Bound:

o Similar to backtracking but uses bounds to eliminate unpromising branches early.

o Example: Traveling Salesman Problem (TSP).
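As referenced in the Dynamic Programming item above, here is a minimal Python sketch (my own, illustrative): memoizing Fibonacci stores each subproblem's result, so the exponential naive recursion becomes linear time.

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Fibonacci with memoization: each subproblem is solved once, so O(n) time."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155; the naive recursion would make hundreds of millions of calls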

6. Complexity Classes in Algorithms

Algorithms fall into different complexity classes:

 P (Polynomial Time): Problems solvable in polynomial time (e.g., Sorting, Graph Traversal).

 NP (Nondeterministic Polynomial Time): Problems for which a proposed solution can be verified in polynomial time.

 NP-Complete: Problems that are in NP and to which every problem in NP can be reduced in polynomial time, like the decision version of the Traveling Salesman Problem (TSP).

 NP-Hard: Problems at least as hard as every problem in NP; they need not be in NP themselves (e.g., the optimization version of TSP).

7. Applications of Algorithm Design

 Data compression: Huffman Coding.

 Cryptography: RSA Algorithm.

 Artificial Intelligence: A* Search Algorithm.

 Networking: Dijkstra’s Algorithm for shortest path routing.

Conclusion:

The Design and Analysis of Algorithms (DAA) is crucial in computer science and engineering. It helps
create efficient, optimized solutions for computational problems, ensuring better performance and
scalability in real-world applications.
These notes provide a detailed yet concise overview of the introduction to DAA. Let me know if you need more details on any section! 🚀

Here are detailed notes combining the second (Substitution Method 1) and third
(Substitution Method 2) PDFs:

Solving Recurrence Relations using the Substitution Method
1. Introduction
The Substitution Method is a mathematical technique used to solve recurrence relations by:

1. Guessing a solution
2. Substituting it into the recurrence
3. Using induction to prove the correctness

This method is useful in analyzing the time complexity of recursive algorithms, such as
Merge Sort, Binary Search, and Divide & Conquer approaches.

2. Steps in the Substitution Method


Step 1: Guess the form of the solution

 Identify the recurrence relation that defines the algorithm’s runtime.


 Make an educated guess about the time complexity.

Step 2: Expand the recurrence

 Rewrite the recurrence for multiple iterations to identify a pattern.

Step 3: Solve for constants using boundary conditions

 Use base cases to refine the complexity expression.

Step 4: Verify using induction

 Use mathematical induction to prove the guessed solution is correct.

3. Common Recurrence Relations and Their Solutions


Example 1: Linear Recurrence

Given:

T(n) = T(n−1) + c

Solution:
Expanding step by step:

T(n) = T(n−1) + c
     = T(n−2) + 2c
     = T(n−3) + 3c

After k expansions:

T(n) = T(n−k) + k·c

Setting n − k = 0 gives k = n, so:

T(n) = T(0) + n·c

For a base case T(0) = c₀, the final solution is:

T(n) = O(n)

Example 2: Binary Search Recurrence

Given:

T(n) = T(n/2) + O(1)

Solution:
Expanding step by step:

T(n) = T(n/2) + O(1)
     = T(n/4) + O(1) + O(1)
     = T(n/8) + O(1) + O(1) + O(1)

After k steps:

T(n) = T(n/2^k) + k·O(1)

Setting n/2^k = 1 ⇒ k = log₂ n, so:

T(n) = O(log n)

Thus, Binary Search runs in O(log n) time.
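For reference, a minimal iterative Binary Search in Python (an illustrative sketch, not from the PDFs). The search interval halves every iteration, which is exactly the T(n) = T(n/2) + O(1) recurrence:

def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.
    The interval [lo, hi] halves each iteration, so at most ~log2(n) steps run.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3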

Example 3: Merge Sort Recurrence


Given:

T(n) = 2T(n/2) + O(n)

Solution:
Expanding step by step:

T(n) = 2T(n/2) + O(n)
     = 2(2T(n/4) + O(n/2)) + O(n)
     = 4T(n/4) + 2·O(n/2) + O(n)

After k steps:

T(n) = 2^k·T(n/2^k) + k·O(n)

Setting n/2^k = 1 ⇒ k = log₂ n, we get:

T(n) = O(n log n)

Thus, Merge Sort runs in O(n log n) time.
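A compact Merge Sort in Python for reference (an illustrative sketch). The two recursive calls and the linear-time merge correspond directly to the 2T(n/2) + O(n) recurrence:

def merge_sort(items):
    """Sort a list by splitting in half, sorting each half, and merging: O(n log n)."""
    if len(items) <= 1:               # base case: T(1) = O(1)
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # T(n/2)
    right = merge_sort(items[mid:])   # T(n/2)

    # Merge the two sorted halves in O(n) time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]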

4. General Case Using the Master Theorem

For a recurrence of the form:

T(n) = a·T(n/b) + f(n)

We apply the Master Theorem:

 If f(n) = O(n^c) where c < log_b a, then T(n) = O(n^(log_b a)).
 If f(n) = Θ(n^(log_b a)), then T(n) = O(n^(log_b a) · log n).
 If f(n) = Ω(n^c) where c > log_b a (and f satisfies the regularity condition), then T(n) = O(f(n)).

Example Using Master Theorem

For Merge Sort,

T(n) = 2T(n/2) + O(n)

Here, a = 2, b = 2, and f(n) = Θ(n).

Since f(n) = Θ(n^1) and log₂ 2 = 1, we apply case 2:

T(n) = O(n log n)

Thus, Merge Sort runs in O(n log n) time.
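As a study aid (my own sketch, not part of the PDFs), a small Python helper that applies the simplified three-case Master Theorem above, assuming f(n) = Θ(n^c) and ignoring the case-3 regularity condition (which holds for plain polynomials):

import math

def master_theorem(a, b, c):
    """Classify T(n) = a*T(n/b) + Theta(n^c) using the simplified Master Theorem.
    Assumes a >= 1 and b > 1.
    """
    crit = math.log(a, b)                      # critical exponent log_b(a)
    if math.isclose(c, crit):
        return f"Theta(n^{crit:g} * log n)"    # case 2
    elif c < crit:
        return f"Theta(n^{crit:g})"            # case 1
    else:
        return f"Theta(n^{c:g})"               # case 3

print(master_theorem(2, 2, 1))  # Merge Sort    -> Theta(n^1 * log n)
print(master_theorem(1, 2, 0))  # Binary Search -> Theta(n^0 * log n) = Theta(log n)
print(master_theorem(7, 2, 2))  # Strassen      -> Theta(n^2.80735)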


5. Applications of the Substitution Method
The Substitution Method is useful in:

 Divide & Conquer algorithms like Merge Sort and Quick Sort.
 Recursion-based problems such as Fibonacci calculations.
 Graph algorithms like Dijkstra’s shortest path.
 Tree-based problems like Binary Search Tree (BST) height calculations.

6. Summary Table of Common Recurrences


Recurrence                 Time Complexity    Example
T(n) = T(n−1) + O(1)       O(n)               Simple Recursion
T(n) = 2T(n/2) + O(n)      O(n log n)         Merge Sort
T(n) = T(n/2) + O(1)       O(log n)           Binary Search
T(n) = 7T(n/2) + O(n^2)    O(n^(log₂ 7))      Strassen’s Algorithm

7. Conclusion
The Substitution Method is an effective way to analyze the time complexity of recursive
algorithms. It helps:

 Expand and simplify recurrence relations.
 Guess and verify solutions using induction.
 Apply the Master Theorem for faster solving.

By mastering this technique, one can efficiently analyze recursive algorithms and their
performance.

These notes provide a complete and detailed breakdown of both Substitution Method
PDFs. Let me know if you need more explanations! 😊🚀

Here are detailed notes based on the fourth (Greedy 1) and fifth (Greedy 2) PDFs.

Greedy Algorithm (Detailed Notes)


1. Introduction to Greedy Algorithm
The Greedy Algorithm is a problem-solving technique that makes the locally optimal
choice at each step with the hope that it will lead to the globally optimal solution.

Characteristics of Greedy Algorithms

 Greedy Choice Property – An optimal solution is constructed by making the best choice at each step.
 Optimal Substructure – A problem has an optimal solution that can be constructed from optimal solutions of its subproblems.

2. Steps in a Greedy Algorithm


1. Sort the input (if required) based on some property.
2. Make the best possible choice at each step.
3. Check feasibility (whether the choice maintains constraints).
4. Repeat until a solution is obtained.

3. Applications of Greedy Algorithms


Greedy algorithms are widely used in real-world problems. Below are some famous applications:

1. Activity Selection Problem

 Problem: Given n activities with start and end times, select the maximum number of
activities that can be performed by a single person.
 Solution:
o Sort activities by ending time.
o Select the first activity.
o For each next activity, pick it only if it starts after the previous one ends.
 Time Complexity: O(n log n) (due to sorting).
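A minimal Python sketch of the greedy activity selection described above (illustrative; the function and variable names are my own):

def select_activities(activities):
    """Greedy activity selection: sort by finish time, then keep each activity
    that starts at or after the previously selected one ends.
    activities: list of (start, finish) pairs.
    """
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:      # feasible: no overlap with the last pick
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10), (8, 11)]))
# [(1, 4), (5, 7), (8, 11)]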

2. Huffman Coding (Data Compression)

 Problem: Encode characters with variable-length codes based on their frequency.


 Solution:
o Build a min-heap of character frequencies.
o Merge two smallest frequencies until one tree is formed.
o Assign shorter codes to more frequent characters.
 Time Complexity: O(n log n) (heap operations).
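A compact sketch of Huffman tree construction using Python's heapq module (illustrative; the tuple-based tree representation is my own choice):

import heapq

def huffman_codes(freq):
    """Build Huffman codes from a {symbol: frequency} map.
    Repeatedly merge the two lowest-frequency trees; assign '0'/'1' per merge.
    """
    # Heap entries: (frequency, tie_breaker, tree); a tree is a symbol or a pair.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (t1, t2)))
        counter += 1

    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):       # internal node: recurse into both children
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                             # leaf: record the code for this symbol
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes

print(huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))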

3. Fractional Knapsack Problem


 Problem: Given items with weights and values, choose items to maximize total value
without exceeding weight capacity.
 Solution:
o Calculate value/weight ratio for each item.
o Sort items in decreasing order of value/weight.
o Pick items greedily until weight is filled.
o If an item doesn’t fully fit, take a fraction of it.
 Time Complexity: O(n log n) (sorting step).
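A short Python sketch of the fractional knapsack strategy above (illustrative; names are my own):

def fractional_knapsack(items, capacity):
    """items: list of (value, weight); returns the maximum total value.
    Greedy: take items in decreasing value/weight order, splitting the last one.
    """
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity >= weight:        # the whole item fits
            total += value
            capacity -= weight
        else:                         # take the fitting fraction, then stop
            total += value * (capacity / weight)
            break
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], capacity=50))  # 240.0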

4. Job Sequencing with Deadlines

 Problem: Given n jobs with deadlines and profits, schedule them to maximize total
profit.
 Solution:
o Sort jobs in decreasing order of profit.
o Assign each job to the latest available slot before its deadline.
o Use a greedy strategy to maximize earnings.
 Time Complexity: O(n log n) (sorting step).
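A simple Python sketch of job sequencing (illustrative). Note this version scans slots linearly, so it runs in O(n·d) for maximum deadline d; the O(n log n) bound quoted above additionally needs a disjoint-set structure to find free slots:

def job_sequencing(jobs):
    """jobs: list of (profit, deadline) with deadlines in slots 1..max_deadline.
    Greedy: consider jobs by decreasing profit; place each in the latest free
    slot at or before its deadline.
    """
    max_deadline = max(d for _, d in jobs)
    slots = [None] * (max_deadline + 1)        # slot 0 is unused
    for profit, deadline in sorted(jobs, reverse=True):
        for t in range(deadline, 0, -1):       # try the latest free slot first
            if slots[t] is None:
                slots[t] = profit
                break
    return sum(p for p in slots if p is not None)

print(job_sequencing([(100, 2), (19, 1), (27, 2), (25, 1), (15, 3)]))  # 142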

5. Prim’s Algorithm (Minimum Spanning Tree)

 Problem: Find the minimum-cost spanning tree (MST) of a graph.


 Solution:
o Start with any node and grow the MST by adding the smallest edge
connecting an included node to an excluded node.
o Use a priority queue (Min-Heap) to pick the minimum edge efficiently.
 Time Complexity: O(m log n) (heap operations), where n is the number of vertices and m the number of edges.
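A minimal Prim's sketch in Python using heapq as the priority queue (illustrative; the adjacency-list format is my own choice):

import heapq

def prim_mst_cost(graph, start):
    """graph: {node: [(weight, neighbor), ...]} adjacency list of an undirected graph.
    Grow the tree from start, always taking the lightest edge to a new node.
    Returns the total MST weight.
    """
    visited = {start}
    edges = list(graph[start])
    heapq.heapify(edges)
    total = 0
    while edges and len(visited) < len(graph):
        weight, node = heapq.heappop(edges)
        if node in visited:
            continue                   # edge leads back into the tree: skip it
        visited.add(node)
        total += weight
        for edge in graph[node]:
            if edge[1] not in visited:
                heapq.heappush(edges, edge)
    return total

g = {
    "A": [(2, "B"), (3, "C")],
    "B": [(2, "A"), (1, "C"), (4, "D")],
    "C": [(3, "A"), (1, "B"), (5, "D")],
    "D": [(4, "B"), (5, "C")],
}
print(prim_mst_cost(g, "A"))  # 2 + 1 + 4 = 7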

6. Kruskal’s Algorithm (Minimum Spanning Tree)

 Problem: Find MST using a different greedy approach.


 Solution:
o Sort edges by weight.
o Use Union-Find to add edges without forming cycles.
 Time Complexity: O(m log m) (sorting step).
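A minimal Kruskal's sketch in Python with a small Union-Find (illustrative; vertices are assumed to be numbered 0..n−1):

def kruskal_mst_cost(n, edges):
    """n: number of vertices (0..n-1); edges: list of (weight, u, v).
    Sort edges by weight; add an edge unless it would close a cycle,
    detected with a Union-Find (disjoint set) structure.
    """
    parent = list(range(n))

    def find(x):                      # find the root, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for weight, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                  # different components: the edge is safe
            parent[ru] = rv
            total += weight
    return total

edges = [(2, 0, 1), (1, 1, 2), (3, 0, 2), (4, 1, 3), (5, 2, 3)]
print(kruskal_mst_cost(4, edges))  # 1 + 2 + 4 = 7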

7. Dijkstra’s Algorithm (Shortest Path)

 Problem: Find the shortest path from a source node to all other nodes in a weighted
graph.
 Solution:
o Use a Min-Heap to always expand the nearest unvisited node.
 Time Complexity: O((n + m) log n) with a binary heap.
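A minimal Dijkstra sketch in Python using heapq (illustrative; assumes non-negative edge weights, as Dijkstra requires):

import heapq

def dijkstra(graph, source):
    """graph: {node: [(weight, neighbor), ...]} with non-negative weights.
    Always settle the nearest unvisited node next, via a min-heap.
    Returns {node: shortest distance from source}.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                   # stale entry: a shorter path was already found
        for weight, nbr in graph[node]:
            nd = d + weight
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

g = {
    "A": [(1, "B"), (4, "C")],
    "B": [(2, "C"), (6, "D")],
    "C": [(3, "D")],
    "D": [],
}
print(dijkstra(g, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 6}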

4. Comparison of Greedy vs Dynamic Programming


Feature            Greedy Algorithm                            Dynamic Programming
Choice Making      Locally optimal choice                      Uses past results to make choices
Subproblems        Independent                                 Overlapping subproblems
Solution Type      Works only for certain problems             Always gives the optimal solution
Time Complexity    Generally faster, e.g. O(n log n)           Slower, e.g. O(n^2) or O(n log n)
Example Problems   Activity Selection, Kruskal’s, Dijkstra’s   Knapsack (0/1), Floyd-Warshall

5. Summary
 Greedy algorithms work fast and efficiently for problems with greedy choice
property and optimal substructure.
 Not all problems can be solved optimally with a greedy approach.
 Dynamic Programming (DP) is a better choice when decisions depend on previous
choices (e.g., 0/1 Knapsack).

Final Time Complexity Table

Problem                 Time Complexity
Activity Selection      O(n log n)
Huffman Coding          O(n log n)
Fractional Knapsack     O(n log n)
Job Sequencing          O(n log n)
Prim’s Algorithm        O(m log n)
Kruskal’s Algorithm     O(m log m)
Dijkstra’s Algorithm    O((n + m) log n)

These notes give a detailed breakdown of Greedy Algorithms, their principles, applications, and comparisons. Let me know if you need more details! 😊🚀
