
DAA UNIT II

Brute-Force Approach
1. Definition
The Brute-Force approach is the simplest and most straightforward method for solving a problem by trying
all possible solutions until the best or correct one is found. It does not use any optimization techniques and
relies on exhaustive search.

2. Characteristics
• Simple and Easy to Implement – Requires minimal logic and is easy to understand.

• Guaranteed to Find a Solution – Since it explores all possibilities, it always finds a solution (if one
exists).

• High Time Complexity – Often inefficient for large input sizes because it checks every possibility.

• No Optimization – Does not use heuristics, pruning, or dynamic programming.

3. Time Complexity
Since brute-force tries all possible cases, its time complexity is usually exponential (O(2ⁿ)) or factorial
(O(n!)), making it impractical for large inputs.
However, in some cases, it may have a polynomial complexity (O(n²) or O(n³)), such as in simple
searching and sorting.
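
To make the exponential behaviour concrete, here is a minimal Python sketch of brute-force subset-sum (function name and sample data are illustrative only): it literally tries all 2ⁿ subsets.

```python
from itertools import combinations

def subset_sum_brute_force(nums, target):
    """Try every subset of nums; return one whose sum equals target, or None."""
    n = len(nums)
    for r in range(n + 1):                   # subset sizes 0..n
        for combo in combinations(nums, r):  # all C(n, r) subsets of size r
            if sum(combo) == target:
                return combo
    return None                              # exhausted all 2^n subsets

print(subset_sum_brute_force([3, 34, 4, 12, 5, 2], 9))  # e.g. (4, 5)
```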

4. Advantages & Disadvantages

| Advantages | Disadvantages |
| --- | --- |
| Simple and easy to implement | Very slow for large inputs |
| Always finds the correct solution | High time complexity |
| Works well for small problem sizes | Not suitable for optimization problems |

Greedy Algorithm
1. Definition
A Greedy Algorithm is an approach to solving optimization problems by making a series of locally optimal
choices at each step, assuming that these choices will lead to a globally optimal solution.

Key Idea: Always pick the best immediate option without considering future consequences.

2. Characteristics of Greedy Algorithms

• Greedy Choice Property: The local best choice leads to the global best solution.

• Optimal Substructure: An optimal solution of the problem contains optimal solutions to its
subproblems.

• No Backtracking: Unlike Dynamic Programming (DP) or Backtracking, once a choice is made, it is never reconsidered.

3. Time Complexity

Greedy algorithms are usually very efficient and have a time complexity of O(n log n) or O(n), depending on
the problem.

4. Advantages & Disadvantages

| Advantages | Disadvantages |
| --- | --- |
| Fast and efficient for many problems | Fails when a locally optimal choice doesn’t lead to a globally optimal solution |
| Simple and easy to implement | May require additional sorting (O(n log n)) |
| Works well for problems with the Greedy Choice Property | Not applicable to problems requiring backtracking (like 0/1 Knapsack, TSP) |

5. Examples of Greedy Algorithms


(i) Activity Selection Problem (O(n log n))

Problem Statement:
Given n activities with start and end times, select the maximum number of activities that can be scheduled
without overlapping.
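
A short Python sketch of the greedy selection (names are illustrative; activities are assumed to be (start, finish) pairs):

```python
def select_activities(activities):
    """Greedy: sort by finish time, then pick each activity that starts
    after the previously chosen one ends."""
    activities = sorted(activities, key=lambda a: a[1])  # O(n log n) sort
    chosen, last_finish = [], float("-inf")
    for start, finish in activities:
        if start >= last_finish:      # compatible with the last chosen activity
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (8, 9), (5, 9)]))
# -> [(1, 4), (5, 7), (8, 9)]
```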

(ii) Huffman Coding (O(n log n))

Problem Statement:
Given characters with their frequencies, generate the optimal binary prefix code to minimize the total cost of
encoding.
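
A compact Python sketch using a min-heap (heapq); representing each subtree as a {symbol: code-so-far} dictionary is just an implementation shortcut for brevity, not the standard tree-node formulation:

```python
import heapq

def huffman_codes(freq):
    """Build an optimal prefix code from {symbol: frequency} with a min-heap."""
    # Each heap entry: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}   # prefix 0 on the left
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

print(huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))
```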

(iii) Fractional Knapsack Problem (O(n log n))

Problem Statement:
Given n items with weights and values, maximize the value while keeping the total weight within a given
capacity W.
Unlike 0/1 Knapsack, items can be divided into fractions.
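
A minimal Python sketch (the (weight, value) item format is an assumption for the example):

```python
def fractional_knapsack(items, capacity):
    """Greedy: take items in decreasing value/weight ratio,
    splitting the last item if it does not fit whole."""
    items = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    total = 0.0
    for weight, value in items:
        if capacity == 0:
            break
        take = min(weight, capacity)     # whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total

print(fractional_knapsack([(10, 60), (20, 100), (30, 120)], 50))  # 240.0
```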

(iv) Prim’s and Kruskal’s Algorithm for Minimum Spanning Tree (MST)

Problem Statement:
Find the minimum cost spanning tree of a given graph.
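
As one possible illustration, a short Python sketch of Kruskal's algorithm with a simple union-find; the (weight, u, v) edge format is assumed for the example:

```python
def kruskal(n, edges):
    """Kruskal's greedy MST: sort edges by weight, add each edge
    that joins two different components (tracked by union-find)."""
    parent = list(range(n))

    def find(x):                         # find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, cost = [], 0
    for w, u, v in sorted(edges):        # greedily consider cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                     # no cycle: accept the edge
            parent[ru] = rv
            mst.append((u, v, w))
            cost += w
    return mst, cost

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))   # MST of total weight 1 + 2 + 3 = 6
```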

Dynamic Programming (DP)


1. Definition
Dynamic Programming (DP) is a method used to solve optimization and combinatorial problems by
breaking them down into overlapping subproblems and storing solutions to avoid redundant
computations.

Key Idea: Solve each subproblem once and store its result to avoid recomputation (Memoization or
Tabulation).

2. Characteristics of Dynamic Programming


• Optimal Substructure: The optimal solution to the overall problem depends on the optimal
solutions of its subproblems.

• Overlapping Subproblems: The problem contains smaller subproblems that are solved multiple
times.

• Memoization (Top-Down): Store results of subproblems in a table to avoid recalculating them.

• Tabulation (Bottom-Up): Solve subproblems first and use their results to solve bigger problems
iteratively.

3. Time Complexity
Since DP avoids redundant calculations, it significantly reduces time complexity compared to brute-force
approaches.
Most DP problems run in O(n) or O(n²) instead of exponential time (O(2ⁿ)).
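
Both styles from Section 2 can be seen on the Fibonacci numbers, the standard textbook illustration; a minimal Python sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)          # memoization: cache each fib(n) after first call
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_tab(n):
    """Tabulation: fill a table bottom-up from the base cases."""
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(40), fib_tab(40))   # both 102334155, in O(n) instead of O(2^n)
```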

4. Examples of Dynamic Programming Problems


(i) 0/1 Knapsack Problem (O(n × W))

Problem Statement: Given n items with weights and values, maximize the value while keeping the total
weight ≤ W.
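
A short Python sketch; the 1-D dp array is a common space-optimized refinement of the full n × W table:

```python
def knapsack_01(weights, values, W):
    """dp[w] = best value achievable with capacity w, updated item by item.
    Iterating w downwards ensures each item is used at most once."""
    dp = [0] * (W + 1)
    for wt, val in zip(weights, values):
        for w in range(W, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)   # skip item vs. take item
    return dp[W]

print(knapsack_01([1, 3, 4, 5], [1, 4, 5, 7], 7))  # 9 (items of weight 3 and 4)
```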

(ii) Longest Common Subsequence (LCS) (O(m × n))

Problem Statement: Given two strings X and Y, find the length of the longest common subsequence
(LCS).
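
A straightforward Python sketch of the O(m × n) table:

```python
def lcs_length(X, Y):
    """dp[i][j] = LCS length of X[:i] and Y[:j]."""
    m, n = len(X), len(Y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1     # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("AGGTAB", "GXTXAYB"))  # 4 ("GTAB")
```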

(iii) Matrix Chain Multiplication (O(n³))

Problem Statement: Given a sequence of matrices, find the optimal way to multiply them to minimize
scalar multiplications.
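
A Python sketch of the standard O(n³) interval DP; p is the dimension array, so matrix i has dimensions p[i-1] × p[i]:

```python
def matrix_chain(p):
    """dp[i][j] = min scalar multiplications to compute the product A_i..A_j."""
    n = len(p) - 1                      # number of matrices
    dp = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):      # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            # try every split point k between A_i..A_k and A_{k+1}..A_j
            dp[i][j] = min(dp[i][k] + dp[k + 1][j] + p[i - 1] * p[k] * p[j]
                           for k in range(i, j))
    return dp[1][n]

print(matrix_chain([10, 20, 30, 40, 30]))  # 30000
```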

5. Advantages & Disadvantages

| Advantages | Disadvantages |
| --- | --- |
| Optimized Computation: Avoids redundant calculations | High Memory Usage: Requires additional storage for tables |
| Solves Complex Problems Efficiently | Not Always Applicable: Only works when a problem has overlapping subproblems |
| Ensures Optimal Solution | Recursive DP (Memoization) uses the recursion stack, leading to high space complexity |

6. When to Use Dynamic Programming?

Use DP when:

• The problem has overlapping subproblems.

• The problem has optimal substructure.

• A brute-force solution has exponential time complexity.

Avoid DP when:

• The problem does not have overlapping subproblems (use Greedy or Divide and Conquer).

• A more efficient approach exists (e.g., Binary Search, Greedy).

Greedy Approach vs Dynamic Programming


Greedy approach and Dynamic programming are two different algorithmic approaches that can be used
to solve optimization problems. Here are the main differences between these two approaches:

Greedy Approach:
• The greedy approach makes the best choice at each step with the hope of finding a global
optimum solution.

• It selects the locally optimal solution at each stage without considering the overall effect on the
solution.

• Greedy algorithms are usually simple, easy to implement, and efficient, but they may not always lead
to the best solution.

Dynamic Programming:
• Dynamic programming breaks down a problem into smaller subproblems and solves each
subproblem only once, storing its solution.

• It uses the results of solved subproblems to build up a solution to the larger problem.

• Dynamic programming is typically used when the same subproblems are being solved multiple times,
leading to inefficient recursive algorithms. By storing the results of subproblems, dynamic
programming avoids redundant computations and can be more efficient.

Difference between Greedy Approach and Dynamic Programming

| Feature | Greedy Approach | Dynamic Programming |
| --- | --- | --- |
| Optimality | May not always provide an optimal solution. | Guarantees an optimal solution if the problem exhibits the principle of optimality. |
| Subproblem Reuse | Does not reuse solutions to subproblems. | Reuses solutions to overlapping subproblems. |
| Backtracking | Does not involve backtracking. | May involve backtracking, especially in top-down implementations. |
| Complexity | Typically simpler and faster to implement. | May be more complex and slower to implement. |
| Application | Suitable for problems where local optimization leads to global optimization. | Suitable for problems with overlapping subproblems and optimal substructure. |
| Examples | Minimum Spanning Tree, Shortest Path algorithms. | Fibonacci sequence, Longest Common Subsequence. |

Branch-and-Bound (B&B) Algorithm
1. Definition
Branch-and-Bound (B&B) is an algorithm design technique used to solve optimization problems,
especially NP-hard problems like the Traveling Salesman Problem (TSP) and 0/1 Knapsack
Problem.

Key Idea: Systematically explore possible solutions by branching and use bounding functions to
eliminate unpromising branches early.

2. Characteristics of Branch-and-Bound
• Exhaustive Search: It explores all possible solutions (like brute force) but prunes unnecessary
branches, reducing computations.

• Branching: The problem is divided into subproblems (branches).

• Bounding: A bound function estimates the best possible solution in a branch. If this bound is
worse than the best solution found so far, that branch is discarded.

• Works for Both Maximization and Minimization Problems.

Used for:

• 0/1 Knapsack Problem

• Traveling Salesman Problem (TSP)

• Integer Linear Programming

• Job Scheduling

3. How Branch-and-Bound Works?


1. Branching: Split the problem into subproblems.

2. Bounding: Calculate an upper/lower bound for each subproblem.

3. Pruning: If a subproblem’s bound is worse than the best solution found so far, discard it.

4. Repeat Until All Possibilities Are Explored or Discarded.

Key Concept: If a node (subproblem) cannot give a better solution than the best known solution,
discard it to save time.

4. Example: 0/1 Knapsack Problem Using Branch-and-Bound


Problem Statement:

• You have n items, each with weight wt[i] and value val[i].

• The goal is to maximize value while keeping total weight ≤ W (Knapsack capacity).
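
A simplified Python sketch of depth-first branch-and-bound for this problem; the bounding function here is the greedy fractional-knapsack estimate, which is one common choice among several:

```python
def knapsack_bb(weights, values, W):
    """Depth-first branch-and-bound for 0/1 knapsack."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    wt = [weights[i] for i in order]
    val = [values[i] for i in order]
    n, best = len(wt), 0

    def bound(i, cap, cur):
        # optimistic estimate: fill remaining capacity fractionally
        b = cur
        while i < n and wt[i] <= cap:
            cap -= wt[i]; b += val[i]; i += 1
        if i < n:
            b += val[i] * cap / wt[i]
        return b

    def branch(i, cap, cur):
        nonlocal best
        best = max(best, cur)
        if i == n or bound(i, cap, cur) <= best:
            return                                    # prune: bound can't beat best
        if wt[i] <= cap:
            branch(i + 1, cap - wt[i], cur + val[i])  # branch: take item i
        branch(i + 1, cap, cur)                       # branch: skip item i

    branch(0, W, 0)
    return best

print(knapsack_bb([1, 3, 4, 5], [1, 4, 5, 7], 7))  # 9
```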

Solving the Traveling Salesman Problem (TSP) Using B&B

Problem Statement:
Find the shortest route visiting all cities exactly once and returning to the start.

5. Advantages & Disadvantages

| Advantages | Disadvantages |
| --- | --- |
| Prunes unnecessary computations | Worst case is exponential |
| Guarantees an optimal solution | High memory usage (stores multiple states) |
| Works for large-scale problems | Bounding function must be carefully designed |

6. When to Use Branch-and-Bound?


Use B&B when:
• You need an exact solution (not just an approximation like Greedy algorithms).

• The problem size is small to medium (n < 30 is ideal; beyond this, heuristic methods like A* may be better).

• You need to prune unnecessary computations for efficiency.

Avoid B&B when:

• An approximation (like Greedy or Dynamic Programming) is acceptable.

• The problem is too large, making B&B impractical.

7. Comparison of B&B with Other Techniques

| Algorithm | Best For | Drawback |
| --- | --- | --- |
| Brute Force | Small problems | Too slow for large inputs |
| Greedy Algorithm | Fast heuristics | May not be optimal |
| Dynamic Programming (DP) | Problems with overlapping subproblems | High memory usage |
| Backtracking | Constraint problems | Exponential complexity |
| Branch-and-Bound | Exact solutions for optimization problems | May still be slow in worst cases |

Backtracking Algorithm
1. Definition
Backtracking is a systematic search algorithm used for constraint satisfaction and optimization
problems. It incrementally builds a solution and abandons (backtracks) a path as soon as it is
determined that it cannot lead to a valid solution.

Key Idea: Explore all possible solutions recursively but discard invalid paths as early as possible.

2. Characteristics of Backtracking
• Recursive Approach: Uses recursion to construct solutions step-by-step.

• Depth-First Search (DFS) Based: Explores one path completely before moving to another.

• Pruning (Early Termination): If a partial solution is invalid, backtrack to the previous step.

• Efficient: Reduces unnecessary computations compared to brute-force.

• Used for Constraint-Based Problems: Solves problems with strict rules.

Used for:

• N-Queens Problem

• Sudoku Solver

• Graph Coloring

• Hamiltonian Cycle (TSP)

• Subset and Permutation Problems

3. How Backtracking Works?


1. Start with an empty solution.

2. Make a decision and proceed recursively.

3. If the decision leads to an invalid state, undo (backtrack) and try another option.

4. Repeat until all solutions are found or an optimal solution is reached.

Backtracking = Depth-First Search (DFS) + Pruning (Early Termination).
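
For instance, the N-Queens problem in Python; cols holds the partial solution (one queen per row) and cols.pop() is the backtracking step:

```python
def solve_n_queens(n):
    """DFS + pruning: place one queen per row, backtrack on conflicts."""
    solutions, cols = [], []

    def safe(row, col):
        # reject a column already used, or a shared diagonal
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(row):
        if row == n:
            solutions.append(tuple(cols))   # one complete valid placement
            return
        for col in range(n):
            if safe(row, col):
                cols.append(col)            # tentative choice
                place(row + 1)
                cols.pop()                  # backtrack: undo, try next column

    place(0)
    return solutions

print(len(solve_n_queens(6)))  # 4 solutions for 6 queens
```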

4. Advantages & Disadvantages

| Advantages | Disadvantages |
| --- | --- |
| Efficient for small problems | Slow for large inputs (exponential time complexity) |
| Guarantees correct solutions | Not ideal for optimization problems |
| Prunes unnecessary computations | High recursion depth can cause stack overflow |

Difference between Backtracking & Branch and Bound

| Parameter | Backtracking | Branch and Bound |
| --- | --- | --- |
| Approach | Finds all possible solutions to a problem. When it realises it has made a bad choice, it undoes the last choice by backing up. It searches the state space tree until it finds a solution. | Solves optimization problems. When it realises that the current partial solution cannot lead to a better result than the best solution already found, it abandons that partial solution. It searches the state space tree completely to find the optimal solution. |
| Traversal | Traverses the state space tree in DFS (Depth-First Search) order. | May traverse the tree in any order, DFS or BFS. |
| Function | Uses a feasibility function. | Uses a bounding function. |
| Problems | Used for solving decision problems. | Used for solving optimization problems. |
| Searching | The state space tree is searched until a solution is obtained. | Since the optimum solution may lie anywhere in the state space tree, the tree must be searched completely. |
| Efficiency | More efficient: the search stops at the first solution. | Less efficient: it may need to explore much more of the tree. |
| Applications | N-Queens, Sum of Subsets, Hamiltonian Cycle, Graph Coloring. | Knapsack Problem, Travelling Salesman Problem. |
| Generality | Can be applied to almost any constraint problem (chess, sudoku, etc.). | Restricted to optimization problems. |
| Nodes | Nodes in the state space tree are explored in depth-first order. | Nodes may be explored in depth-first or breadth-first order. |
| Next move | The next move from the current state can lead to a bad choice. | The next move is always towards a better solution. |
| Solution | The search stops as soon as a solution is found in the state space tree. | The entire state space tree is searched to find the optimal solution. |

Problem-Solving Using Algorithm Design Techniques


▪ Bin Packing Problem
▪ Knapsack Problem
▪ Travelling Salesman Problem (TSP)

Heuristic Methods: Characteristics, Applications, and Complexity


1. Introduction to Heuristic Methods
Heuristic methods are problem-solving techniques that provide approximate solutions when finding an
exact solution is impractical due to time constraints or computational complexity. These methods are widely
used in optimization problems, artificial intelligence (AI), machine learning, game theory, and
network routing.

Key Idea: Instead of exhaustively searching all possible solutions, heuristics focus on finding a good
enough solution in a reasonable time.
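
As an example, the nearest-neighbour heuristic for TSP in Python (the distance matrix is illustrative); it runs in O(n²) but may return a suboptimal tour:

```python
def nearest_neighbour_tour(dist):
    """TSP heuristic: from city 0, always visit the closest unvisited city."""
    n = len(dist)
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[last][c])  # greedy local choice
        tour.append(nxt)
        unvisited.remove(nxt)
    # total length, including the return leg to the start city
    cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n - 1)) + dist[tour[-1]][0]
    return tour, cost

dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(nearest_neighbour_tour(dist))  # ([0, 1, 3, 2], 80)
```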

2. Characteristics of Heuristic Algorithms


Heuristic algorithms possess the following key characteristics:

1. Approximate Solutions:

o They do not always guarantee the optimal solution but provide near-optimal results.

o Useful for NP-hard problems where finding the best solution is computationally expensive.

2. Problem-Specific Strategies:

o Designed for specific types of problems.

o Performance depends on the nature of the problem.

3. Efficiency and Speed:

o Faster than exhaustive search methods (like brute force).

o Reduce search space using intelligent rules.

4. Trade-off Between Accuracy and Computation Time:

o Sacrifices precision for speed.

o Balances between solution quality and execution time.

5. Greedy and Probabilistic Approaches:

o Uses greedy techniques (selecting the best immediate option).

o Sometimes employs randomization for improved results.

6. Iterative Refinement:

o Solutions are incrementally improved.

o Some heuristics use local search to refine solutions.

7. Domain-Specific Knowledge:

o Relies on domain expertise to create effective rules.

3. Application Domains of Heuristics


Heuristic methods are used in many fields, including:

1. AI and Machine Learning

• Feature Selection in AI Models: Choosing the best features using heuristic search.

• Hyperparameter Optimization: Finding optimal settings for deep learning models.

• Genetic Algorithms: Used in neural network training.

2. Optimization Problems

• Scheduling Problems: Employee work shifts, examination timetables.

• Vehicle Routing: Logistics and delivery optimization.

• Bin Packing: Allocating memory or container space efficiently.

3. Game Theory

• Chess AI (Minimax & Alpha-Beta Pruning): Uses heuristics for fast decision-making.

• Poker & Board Games: AI uses heuristics to evaluate positions.

• Pathfinding in Video Games: Uses the A* algorithm.

4. Network Routing

• Shortest Path Algorithms (Dijkstra, A*): Find the best route in networks.

• Load Balancing: Distributes network traffic efficiently.

• Packet Scheduling: Prioritizing data packets in communication networks.
