DAA CHAPTER 1: Introduction to Algorithms
Algorithm:
● Finite sequence of well-defined steps to solve a specific problem or perform a computation
Key properties:
● Input - zero or more quantities provided externally
● Output - at least one result produced
● Finiteness - terminates after a finite number of steps
● Definiteness - each step is precisely defined
● Effectiveness - steps are basic enough to be carried out
Mediums of Algorithm:
● Natural language
● Pseudocode
● Flowcharts
● Programming languages
Examples of Algorithm:
● Sorting algorithms
○ Bubble sort
○ Insertion sort
○ Merge sort
● Searching algorithms
○ Linear search
○ Binary search
● Graph algorithms
○ Dijkstra’s shortest path, Breadth-First search
Importance of Designing and Analyzing an Algorithm:
● Ensures efficiency in terms of time and memory
● Allows comparison between alternative solutions
● Helps identify scalability of solutions for large inputs
● Provides correctness guarantees through formal analysis
● Balances trade-offs
Designing an Algorithm:
● Problem definition - understand input, output, and constraints
● Algorithm specification - describe the solution steps
● Choose a design paradigm
● Refinement - simplify or optimize steps
Analysis of an Algorithm:
● Correctness - verify the algorithm produces valid output
● Space complexity - measure memory usage
● Time complexity - count fundamental operations (best case, average case, worst case)
● Asymptotic Analysis - Big-O notation, Omega notation, Theta notation
—-------------------------------------------------
CHAPTER 2: Divide and Conquer
Divide and Conquer:
● Algorithm strategy for solving problems
● Break > solve > combine
● Divide the problem into subproblems, solve them, then merge the results
Key steps:
● Divide - split the problem into smaller parts
● Conquer - solve each part
● Combine - merge the solutions
Why use Divide and Conquer?
● Simplifies complex problems
● Improves efficiency for large inputs
● Naturally fits recursive solutions
Divide and Conquer Algorithms:
● Merge sort - sorting by splitting and merging
● Binary search - searching by halving
● Quick sort - sorting by partitioning
Advantages:
● Clear recursive structure
● Faster than brute force in many cases
● Widely applicable in computer science
Brute force - a straightforward and exhaustive approach to solving problems
Limitations:
● Overhead from recursion
● May use extra memory
● Not always efficient for small inputs
—-------------------------------------------------
CHAPTER 3: Backtracking
Backtracking:
● A refined brute force approach for solving problems
● Incrementally builds a solution and abandons paths that violate constraints
● Ideal for constraint satisfaction problems (CSPs)
Key Steps:
● Choice - select an option from the available candidates
● Constraint - check if the partial solution is valid under the given rules
● Goal - if the constraints are satisfied and the solution is complete, accept it
● Backtrack - if the constraints fail, go back to the previous decision point and try alternatives
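The Choice, Constraint, Goal, and Backtrack steps can be sketched with the classic N-Queens problem. This is a minimal illustrative sketch, not the only way to structure it; the function and helper names are made up for this example:

```python
# N-Queens via backtracking: cols[r] holds the column of the queen in row r.

def is_safe(cols, row, col):
    """A queen at (row, col) must share no column or diagonal with earlier queens."""
    for r, c in enumerate(cols):
        if c == col or abs(c - col) == abs(r - row):
            return False
    return True

def solve_n_queens(n, cols=None):
    """Return one valid placement as a list of column indices, or None."""
    if cols is None:
        cols = []
    if len(cols) == n:                 # Goal: solution is complete, accept it
        return cols[:]
    row = len(cols)
    for col in range(n):               # Choice: try each candidate column
        if is_safe(cols, row, col):    # Constraint: validate the partial solution
            cols.append(col)
            result = solve_n_queens(n, cols)
            if result is not None:
                return result
            cols.pop()                 # Backtrack: undo and try the next option
    return None                        # No column works; the caller backtracks
```

For example, `solve_n_queens(4)` returns `[1, 3, 0, 2]`, while `solve_n_queens(3)` returns `None` because no 3-queens placement exists — exactly the exhaustive-but-pruned behavior described above.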
Why use Backtracking?
● Efficient for solving problems where brute force is impractical
● Avoids exploring unnecessary solution branches
● Works well for problems with large search spaces and constraints
Examples of backtracking algorithms:
● N-Queens Problem - place N queens on an n×n chessboard without attacking each other
● Sudoku solver - fill in numbers while checking row/column/box constraints
● Hamiltonian Path/Cycle - find a path visiting all vertices exactly once
● Graph coloring - assign colors to graph vertices without adjacent conflicts
● Subset and permutation generation - generate all valid combinations
Advantages:
● Systematic and complete: all solutions can be generated
● Prunes invalid states early, saving computation
● General-purpose; adaptable to many CSPs
Limitations:
● Exponential complexity in worst cases
● Can still be slow for large N without optimization
● Performance highly depends on pruning efficiency
—-------------------------------------------------
CHAPTER 4: Greedy
Greedy:
● A problem-solving technique that builds a solution step by step, choosing the best option at each stage
● Relies on the idea that a local optimum will lead to a global optimum
● Used for optimization problems
Key Steps:
1. Selection - choose the best possible option at the moment
2. Feasibility check - ensure the choice doesn’t violate constraints
3. Inclusion - add the selected option to the current solution
4. Repeat - continue until the solution is complete or no more choices remain
Why use Greedy?
● Simple and fast
● Often gives optimal or near-optimal solutions
● Useful for problems with the greedy-choice property and optimal substructure
Limitations of Greedy:
● Not all problems exhibit the greedy-choice property
● May produce suboptimal results in some cases
● Requires proof that the local optimum leads to a global optimum
Advantages of Greedy:
● Easy to implement
● Time-efficient
● Works well for many real-world optimization problems
Examples of Greedy Algorithms:
● Activity selection problem
● Fractional knapsack problem
● Huffman coding
● Prim’s and Kruskal’s algorithms
● Dijkstra’s shortest path algorithm
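The four greedy steps (Selection, Feasibility check, Inclusion, Repeat) can be illustrated with the activity selection problem. This is a sketch under one assumption: activities are given as (start, finish) pairs, and the greedy choice is the activity that finishes earliest:

```python
# Greedy activity selection: pick a maximum set of non-overlapping
# activities, each given as a (start, finish) pair.

def select_activities(activities):
    chosen = []
    last_finish = float("-inf")
    # Selection: sorting by finish time makes the earliest-finishing
    # remaining activity the greedy choice at every step.
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:            # Feasibility: no overlap with last pick
            chosen.append((start, finish))  # Inclusion: commit to the choice
            last_finish = finish
        # Repeat: the loop continues until no activities remain
    return chosen
```

For instance, `select_activities([(1, 4), (3, 5), (5, 7), (6, 10), (8, 11)])` keeps `(1, 4)`, `(5, 7)`, and `(8, 11)`. Earliest-finish-first is one of the cases where the greedy choice is provably optimal; for many other problems, the same pattern needs that proof before it can be trusted.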