AI Unit 2

This document discusses heuristic search and game playing in artificial intelligence, detailing the components and properties of search algorithms, including uninformed and informed searches. It covers various algorithms such as BFS, DFS, A*, and AO*, highlighting their advantages, disadvantages, and applications. Additionally, it explores heuristic techniques and optimization problems, emphasizing the importance of heuristics in efficiently navigating search spaces.


Unit 2 : Heuristic Search and Game Playing

HEURISTICS

Search Algorithms in AI:


Search: Searching is a step-by-step procedure for solving a search problem in a given search space. State
space search is a fundamental concept in artificial intelligence and computer science. It involves
exploring and navigating a space of possible states to find a solution to a problem. This problem-solving
approach is commonly used in various AI applications, such as path-finding, game playing,
planning, and optimization.

3 Main factors of a Search Problem:


Search Space: The set of possible solutions a system may have.
Start State: The state from which the agent begins the search.
Goal test: A function which observes the current state and returns whether the goal state has been
achieved.

Key Components:
Initial State:
The starting point or configuration of the problem.
Goal State:
The desired or target configuration or state that represents the solution.
Operators/Actions:
The possible actions or operators that can be applied to transition from one state to another.
State Space:
The entire set of possible states that the problem can reach.
Formally, a problem can be written as the tuple { S, A, Action(s), Result(s, a), Cost(s, a) }.
Path Cost:
The cost associated with transitioning from one state to another.
Constraints:
Any limitations or restrictions that must be considered during problem-solving.

Search Algorithm Terminologies:


Search tree: A tree representation of the search problem. The root of the search tree is the root node,
which corresponds to the initial state.
Actions: A description of all the actions available to the agent.
Transition model: A description of what each action does.
Path Cost: A function which assigns a numeric cost to each path.
Solution: An action sequence which leads from the start node to the goal node.
Optimal Solution: A solution with the lowest cost among all possible solutions.

Properties of Search Algorithms:


Completeness: A search algorithm is complete if it is guaranteed to return a solution whenever at least
one solution exists for the given input.
Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost)
among all solutions, it is called an optimal solution.
Time Complexity: A measure of the time an algorithm needs to complete its task.
Space Complexity: The maximum storage space required at any point during the search.

Types of Search Algorithms:

Uninformed/Blind Search:
Does not use any domain knowledge, such as closeness or the location of the goal.
Operates in a brute-force way, as it only includes information about how to traverse the tree and how to
identify leaf and goal nodes.
It has no information about the search space, hence it is known as blind search.

Informed Search:
Informed search algorithms use domain knowledge.
Problem-specific information is available to guide the search.
Informed search strategies can find a solution more efficiently than uninformed search strategies.
Informed search is also called heuristic search.
A heuristic is a technique that is not guaranteed to find the best solution, but is designed to find a
good solution in reasonable time.

Uninformed/Blind Search:
BFS:
This algorithm searches breadthwise in a tree or graph.
It starts searching from the root node of the tree and expands all successor nodes at the current level
before moving to the nodes of the next level.
Breadth-first search is implemented using a FIFO queue data structure.

Advantages:
BFS will provide a solution if one exists.
If there is more than one solution for a given problem, BFS provides the minimal solution, i.e. the one
requiring the fewest steps.

Disadvantages:
It requires a lot of memory, since each level of the tree must be saved in memory in order to expand the
next level.
BFS needs a lot of time if the solution is far away from the root node.

Algorithm:
Step 1. Put the initial node on a list START.
Step 2. If (START is empty) or (START = GOAL), terminate the search.
Step 3. Remove the first node from START; call it node a.
Step 4. If (a = GOAL), terminate the search with success.
Step 5. Else, if node a has successors, generate all of them and add them to the tail of START.
Step 6. Go to Step 2.
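The steps above can be sketched in Python using a FIFO queue; the example graph below is a hypothetical one, not taken from the text:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search with a FIFO queue, following the steps above.
    `graph` maps each node to a list of its successors."""
    queue = deque([[start]])              # queue of paths (list START)
    visited = {start}
    while queue:
        path = queue.popleft()            # Step 3: remove the first node
        node = path[-1]
        if node == goal:                  # Step 4: goal test
            return path
        for succ in graph.get(node, []):  # Step 5: generate successors
            if succ not in visited:
                visited.add(succ)
                queue.append(path + [succ])   # add to the tail (FIFO)
    return None                           # START is empty: no solution

# Hypothetical example graph
g = {'S': ['A', 'B'], 'A': ['C'], 'B': ['C', 'G'], 'C': ['G']}
print(bfs(g, 'S', 'G'))   # ['S', 'B', 'G'] — the fewest-steps solution
```

Because the queue is FIFO, whole levels are expanded in order, which is why BFS finds the solution with the least number of steps.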

Properties:
Completeness:
BFS is complete, which means if the shallowest goal node is at some finite depth, then BFS will find a
solution.
Optimality:
BFS is optimal if path cost is a non-decreasing function of the depth of the node.

Applications:
Locating all nodes within one connected component.
Finding the shortest path between two nodes u and v (in an unweighted graph), etc.

DFS:
Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
It is called depth-first search because it starts from the root node and follows each path to its
greatest depth before moving to the next path.
DFS uses a stack data structure for its implementation.

Advantages:
DFS requires much less memory, as it only needs to store the stack of nodes on the path from the root
node to the current node.
It can reach the goal node in less time than the BFS algorithm (if it traverses the right path).
Disadvantages:
Many states may keep re-occurring, and there is no guarantee of finding a solution.
DFS goes deep down the search tree and may sometimes enter an infinite loop.

Algorithm:
Step 1. Put the initial node on a list START.
Step 2. If (START is empty) or (START = GOAL), terminate the search.
Step 3. Remove the first node from START; call it node a.
Step 4. If (a = GOAL), terminate the search with success.
Step 5. Else, if node a has successors, generate all of them and add them to the beginning of START.
Step 6. Go to Step 2.
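A minimal sketch of these steps in Python, using an explicit stack (the example graph is hypothetical):

```python
def dfs(graph, start, goal):
    """Depth-first search with a LIFO stack, following the steps above.
    `graph` maps each node to a list of its successors."""
    stack = [[start]]                     # list START, used as a stack
    visited = set()
    while stack:
        path = stack.pop()                # take the most recently added path
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue                      # guard against re-occurring states
        visited.add(node)
        # push successors in reverse so the first successor is explored first
        for succ in reversed(graph.get(node, [])):
            if succ not in visited:
                stack.append(path + [succ])
    return None

# Hypothetical example graph
g = {'S': ['A', 'B'], 'A': ['C'], 'B': ['G'], 'C': ['G']}
print(dfs(g, 'S', 'G'))   # ['S', 'A', 'C', 'G'] — deep first, not shortest
```

The `visited` check is what prevents the re-occurring states mentioned in the disadvantages above from looping forever (in a finite graph).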

Properties:
Completeness:
DFS is complete within a finite state space, as it will expand every node in a bounded search tree.
Space Complexity:
DFS needs to store only a single path from the root node, so its space complexity is equal to the size of
the fringe set, which is O(bm), where b is the branching factor and m the maximum depth.
Optimality:
DFS is non-optimal, as it may take a large number of steps or incur a high cost to reach the goal node.

Problem with Blind Search:


They do not use any domain-specific knowledge.
As a result, the search process is exhaustive and can be very inefficient.

Informed/Heuristics Search:
An informed search algorithm uses knowledge such as how far we are from the goal, the path cost, and
how to reach the goal node.
This knowledge helps the agent explore less of the search space and find the goal node more efficiently.
It is also called heuristic search.
Informed search is especially useful for large search spaces.

Two categories of problems use heuristics:


Problems for which no exact algorithm is known, where one needs to find an approximate, satisfactory
solution. E.g. computer vision or speech recognition.
Problems for which exact solutions are known but are computationally infeasible. E.g. Rubik's Cube or
chess.

Heuristics Function:
A heuristic is a function used in informed search to find the most promising path.
It takes the current state of the agent as input and produces an estimate of how close the agent is to
the goal.
The heuristic method does not always give the best solution, but it is designed to find a good solution in
reasonable time.
It is denoted h(n), and it estimates the cost of an optimal path between a pair of states.
The value of the heuristic function is always non-negative.

Admissibility of the heuristic function is given as:


h(n) <= h*(n)
Here h(n) is the estimated heuristic cost, and h*(n) is the actual cost of an optimal path to the goal. An
admissible heuristic never overestimates: the heuristic cost must be less than or equal to the actual cost.

Some simple heuristic functions:


8-puzzle: the Hamming distance (number of misplaced tiles) is used.
Chess: material advantage is used.

Best First Search:


The best-first search algorithm always selects the path which appears best at the moment.
It combines the advantages of the depth-first and breadth-first search algorithms.
With best-first search, at each step, we can choose the most promising node.
In the best-first search algorithm, we expand the node closest to the goal node, where closeness is
estimated by the heuristic function, i.e.
f(n) = h(n).

Step 1. Put the initial node on a list START.


Step 2. If (START is empty) or (START=GOAL), terminate search.
Step 3. Remove the first node from START, call it node a.
Step 4. If (a=GOAL) terminate search with success.
Step 5. Else, if node a has successors, generate all of them and find out how far each is from the goal
node. Sort all the children generated so far by their remaining distance from the goal.
Step 6. Name this list START 1 and replace list START with it.
Step 7. Go to step 2

Worked example (for a sample graph, figure not included, with start S and goal G): expand the nodes of S and put them in the CLOSED list.


Initialization: Open [A, B], Closed [S]
Iteration 1: Open [A], Closed [S, B]
Iteration 2: Open [E, F, A], Closed [S, B]
: Open [E, A], Closed [S, B, F]
Iteration 3: Open [I, G, E, A], Closed [S, B, F]
: Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path will be: S----> B----->F----> G
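The trace above can be reproduced with a small sketch; the graph and heuristic values below are assumptions chosen to match the S → B → F → G example:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the node with the
    lowest heuristic value h(n)."""
    open_list = [(h[start], [start])]     # priority queue ordered by h(n)
    closed = set()
    while open_list:
        _, path = heapq.heappop(open_list)
        node = path[-1]
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)                  # move expanded node to CLOSED
        for succ in graph.get(node, []):
            if succ not in closed:
                heapq.heappush(open_list, (h[succ], path + [succ]))
    return None

# Assumed graph and heuristic values matching the trace above
graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E', 'F'],
         'E': ['H'], 'F': ['I', 'G']}
h = {'S': 13, 'A': 12, 'B': 4, 'C': 7, 'D': 3, 'E': 8,
     'F': 2, 'H': 4, 'I': 9, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))  # ['S', 'B', 'F', 'G']
```

Note that only h(n) drives the ordering, which is why this search is greedy and not guaranteed to be optimal.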

Advantages:
Best-first search can switch between BFS-like and DFS-like behaviour, gaining the advantages of both
algorithms.
This algorithm can be more efficient than the BFS and DFS algorithms.

Disadvantages:
It can behave like an unguided depth-first search in the worst case.
It can get stuck in a loop, like DFS.
Incomplete: Greedy best-first search is incomplete, even if the given state space is finite.
Not optimal: Greedy best-first search is not guaranteed to find the lowest-cost solution.

A* Search:
A* uses the heuristic function h(n) together with g(n), the cost to reach node n from the start state.
The A* search algorithm finds the shortest path through the search space using the heuristic function.
It expands a smaller search tree and provides an optimal result faster.
The '*' means admissible: with an admissible heuristic, A* always gives an optimal result.
In the A* search algorithm we use the search heuristic as well as the cost to reach the node, so we
combine both costs as follows; this sum is called the fitness number:
f(n) = g(n) + h(n)
At each point in the search space, only the node with the lowest value of f(n) is expanded, and the
algorithm terminates when the goal node is found.

Example (from a worked figure not included here): from the path S → C → D → E we can move to G, giving

f(G) = g(G) + h(G) = (3 + 7 + 2 + 5) + 0 = 17

Complete: The A* algorithm is complete as long as the branching factor is finite and every step cost is a
fixed positive value.
Optimal: A* tree search is optimal if h(n) is an admissible heuristic.
Consistency: A second condition, consistency, is required for A* graph search only.

Advantages:
A* generally performs better than other search algorithms.
The A* search algorithm is optimal and complete.
This algorithm can solve very complex problems.
Disadvantages:
It does not always produce the shortest path, as it relies on heuristics and approximation
(underestimation and overestimation).
The A* search algorithm has some complexity issues.
The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so it
is not practical for many large-scale problems.
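A minimal A* sketch, expanding the node with the lowest f(n) = g(n) + h(n); the weighted graph and (admissible) heuristic below are hypothetical:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n).
    `graph[u]` is a list of (successor, step_cost) pairs."""
    open_list = [(h[start], 0, [start])]    # entries are (f, g, path)
    best_g = {start: 0}                     # cheapest known g per node
    while open_list:
        f, g, path = heapq.heappop(open_list)
        node = path[-1]
        if node == goal:
            return path, g
        for succ, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):
                best_g[succ] = g2
                heapq.heappush(open_list, (g2 + h[succ], g2, path + [succ]))
    return None

# Hypothetical graph with an admissible heuristic (h never overestimates)
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('C', 5)],
         'B': [('C', 2)], 'C': [('G', 3)]}
h = {'S': 7, 'A': 6, 'B': 4, 'C': 2, 'G': 0}
print(a_star(graph, h, 'S', 'G'))   # (['S', 'A', 'B', 'C', 'G'], 8)
```

Keeping every generated path on the open list is exactly the memory cost described in the disadvantages above.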
AO* Algorithm:
The AO* method divides any given difficult problem into a smaller group of sub-problems that are then
solved using the AND-OR graph concept.

The evaluation function in AO* looks like this:


f(n) = g(n) + h(n)
f(n) = Actual cost + Estimated cost

-A* always gives the optimal solution, but AO* does not guarantee an optimal solution.
-Once AO* finds a solution, it does not explore all possible paths, whereas A* explores all paths.
-Compared to the A* algorithm, the AO* algorithm uses less memory.
-Unlike the A* algorithm, the AO* algorithm cannot go into an endless loop.

Real-Life Applications of AO* algorithm:


Vehicle Routing Problem:
The vehicle routing problem is determining the shortest routes for a fleet of vehicles to visit a set of
customers and return to the depot, while minimizing the total distance traveled and the total time taken.
The AO* algorithm can be used to find the optimal routes that satisfy both objectives.
Portfolio Optimization:
Portfolio optimization is choosing a set of investments that maximize returns while minimizing risks.
The AO* algorithm can be used to find the optimal portfolio that satisfies both objectives, such as
maximizing the expected return and minimizing the standard deviation.

Iterative deepening A* (IDA*):


It performs depth-first search limited to some f-bound.
It uses the formula: f(n) = g(n) + h(n)

Algorithm:
Perform depth-first search limited to some f-bound.
If goal found: OK
Else: increase the f-bound and restart.
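The restart loop above can be sketched as follows (the graph and heuristic are hypothetical assumptions):

```python
def ida_star(graph, h, start, goal):
    """Iterative deepening A*: depth-first search limited to an f-bound,
    raising the bound to the smallest exceeding f-value each round."""
    def search(path, g, bound):
        node = path[-1]
        f = g + h[node]
        if f > bound:
            return f                 # f-bound exceeded; candidate new bound
        if node == goal:
            return path
        minimum = float('inf')
        for succ, cost in graph.get(node, []):
            if succ not in path:     # avoid cycles on the current path
                result = search(path + [succ], g + cost, bound)
                if isinstance(result, list):
                    return result
                minimum = min(minimum, result)
        return minimum

    bound = h[start]
    while True:
        result = search([start], 0, bound)
        if isinstance(result, list):
            return result            # goal found: OK
        if result == float('inf'):
            return None
        bound = result               # increase the f-bound and restart

graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('C', 5)],
         'B': [('C', 2)], 'C': [('G', 3)]}
h = {'S': 7, 'A': 6, 'B': 4, 'C': 2, 'G': 0}
print(ida_star(graph, h, 'S', 'G'))  # ['S', 'A', 'B', 'C', 'G']
```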

Simplified Memory-Bounded A* (SMA*):


Like A* search, SMA* search is an optimal and complete algorithm for finding a least-cost path.
Unlike A*, SMA* will not run out of memory, unless the size of the shortest path exceeds the amount
of space in available memory.
SMA* addresses the possibility of running out of memory by pruning the portion of the search-space
that is being examined.

Heuristic Techniques:
Problem-solving methods that prioritize speed and efficiency over optimality. These methods are
commonly used in situations where finding an exact solution is computationally expensive or
impractical.
Provide approximate solutions that are often "good enough" for practical purposes.
Greedy algorithms:
Make decisions based on the current best option without considering the global optimal solution. At
each step, the algorithm selects the locally optimal choice, hoping that it will lead to a good solution
overall. Greedy algorithms are easy to implement and often efficient but may not always produce the
best possible solution.
Local search: algorithms explore a solution space by iteratively moving from one solution to a
neighboring solution that improves some objective function. These algorithms start with an initial
solution and repeatedly make small modifications to improve it. Examples include hill climbing,
simulated annealing, and genetic algorithms.
Rule-based systems: use a set of predefined rules to make decisions or perform tasks. These systems
are often used in expert systems, where human expertise is encoded into a set of rules that guide
problem-solving or decision-making processes.

Generate and Test:


Description: In the generate and test approach, a solution is generated and then tested to determine if it
meets the desired criteria. If the solution is satisfactory, it is accepted; otherwise, the process is
repeated until an acceptable solution is found.
Example: In a software development context, when designing an algorithm to solve a particular
problem, developers may generate multiple solutions based on different approaches (e.g., brute force,
divide and conquer, dynamic programming) and test each solution against a set of test cases to evaluate
its performance and effectiveness.
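A minimal generate-and-test sketch; the candidate generator and acceptance test below are toy assumptions:

```python
import itertools

def generate_and_test(generator, test):
    """Generate candidate solutions one at a time and return the first
    that passes the test; otherwise keep generating."""
    for candidate in generator:
        if test(candidate):
            return candidate         # acceptable solution found
    return None                      # generator exhausted

# Toy example: find an ordering of task ids that is already sorted
candidates = itertools.permutations([3, 1, 2])
result = generate_and_test(candidates, lambda p: list(p) == sorted(p))
print(result)   # (1, 2, 3)
```

The quality of this method depends entirely on how cheap the generator and the test are; a smarter generator shrinks the number of candidates tried.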

Hill Climbing:
Description: Hill climbing is a local search algorithm that iteratively improves a solution by making
incremental changes at each step. The algorithm evaluates neighboring solutions and selects the one
that maximizes (or minimizes) an objective function, moving in the direction of the steepest ascent (or
descent) until a peak (or valley) is reached.
Example: In route optimization, hill climbing can be used to find the shortest path between two points
on a map by iteratively adjusting the route based on the distance to the destination. At each step, the
algorithm evaluates neighboring routes and selects the one that brings it closer to the destination until
an optimal path is found.
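A minimal hill-climbing sketch for maximizing a simple one-dimensional objective (the objective and neighbor function are toy assumptions):

```python
def hill_climbing(f, neighbors, start):
    """Simple hill climbing: move to the best neighbor while it improves
    the objective f; stop at a (possibly local) maximum."""
    current = start
    while True:
        best = max(neighbors(current), key=f, default=current)
        if f(best) <= f(current):
            return current           # no neighbor improves: peak reached
        current = best

# Maximize f(x) = -(x - 3)^2 over the integers, stepping by ±1
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climbing(f, step, 0))   # 3
```

Because the loop stops as soon as no neighbor improves, the result is only guaranteed to be a local optimum, which is the key limitation discussed later in this section.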

State Space Search:


Description: State space search involves exploring the set of possible states of a problem to find a
solution. Each state represents a configuration or arrangement of elements, and transitions between
states are determined by applying operators or actions. The search proceeds by systematically
exploring the state space until a goal state is reached.
Example: In puzzle-solving, such as the eight puzzle or Rubik's Cube, state space search is used to
find a sequence of moves that transforms the initial configuration of the puzzle into the goal
configuration. The algorithm explores different states of the puzzle by applying valid moves (e.g.,
sliding puzzle tiles or rotating cube faces) until the goal state is achieved.

Constraint Satisfaction Problem (CSP):


Description: A constraint satisfaction problem involves finding a solution that satisfies a set of
constraints or conditions. The problem typically consists of a set of variables, each with a domain of
possible values, and a set of constraints that restrict the allowable combinations of variable values. The
goal is to find an assignment of values to variables that satisfies all constraints.
Example: Sudoku puzzles can be formulated as constraint satisfaction problems, where the goal is to
fill in the empty cells of the grid with digits from 1 to 9 such that each row, column, and 3x3 subgrid
contains each digit exactly once. The constraints are the rules of Sudoku, and the solution is a valid
assignment of digits to cells that satisfies all constraints.
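A small backtracking sketch for a CSP; rather than full Sudoku, it uses a toy map-coloring instance (the variables, domains, and constraints below are assumptions):

```python
def backtrack(assignment, variables, domains, constraints):
    """Backtracking search for a CSP: assign variables one at a time,
    keeping only values consistent with all constraints so far."""
    if len(assignment) == len(variables):
        return assignment            # every variable assigned: solution
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        candidate = {**assignment, var: value}
        if all(c(candidate) for c in constraints):
            result = backtrack(candidate, variables, domains, constraints)
            if result is not None:
                return result
    return None                      # no value works: backtrack

# Toy CSP: color three regions with 2 colors so linked regions differ
variables = ['WA', 'NT', 'SA']
domains = {v: ['red', 'green'] for v in variables}
def differ(a, b):
    # the constraint holds vacuously until both variables are assigned
    return lambda asg: a not in asg or b not in asg or asg[a] != asg[b]
constraints = [differ('WA', 'NT'), differ('WA', 'SA')]
print(backtrack({}, variables, domains, constraints))
# {'WA': 'red', 'NT': 'green', 'SA': 'green'}
```

A Sudoku solver has the same shape: 81 variables with domain 1–9 and row, column, and subgrid "all different" constraints.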
Types of Optimization Problems:

Function optimization
This type of problem involves finding the maximum or minimum value of a given function. The goal is
to find the input value(s) that yield the best output value(s) of the function.
The Hill Climbing Algorithm is well-suited for function optimization problems because it can
iteratively search for the best solution in the search space.
Application: Finding the maximum profit of a business given a set of input variables such as
production costs, sales price, and demand.

Constraint optimization
This type of problem involves finding the best solution that satisfies a set of constraints. The goal is to
find the input value(s) that satisfy the constraints and optimize the objective function.
The Hill Climbing Algorithm can be used for constraint optimization problems by incorporating the
constraints into the neighbor generation and acceptance steps.
Application: Scheduling a set of tasks given constraints such as available resources, deadlines, and
dependencies.

Combinatorial optimization
This type of problem involves finding the best combination of discrete elements from a given set. The
goal is to find the combination that maximizes or minimizes a given objective function. Combinatorial
optimization problems are common in fields such as logistics, scheduling, and resource allocation.
The Hill Climbing Algorithm can be used for combinatorial optimization problems by generating
neighboring solutions that involve adding or removing elements from the current solution.
Application: Packing a set of items into a container to maximize the total value, subject to constraints
such as weight and volume limits.

Search strategies Used in Hill Climbing:


Random mutation:
A random perturbation is made to the current solution to generate a new solution. This approach can be
effective for exploring a large search space, but it can also lead to suboptimal solutions if the
perturbations are too large or not well-guided.

Gradient descent:
The search process is guided by the gradient of the objective function, which indicates the direction of
steepest increase or decrease. This approach can be effective for finding a local optimum, but it can
also get stuck in local optima or saddle points.

Simulated annealing:
The search process is guided by a temperature parameter that controls the likelihood of accepting worse
solutions. This approach can be effective for escaping local optima, but it can also be computationally
expensive and sensitive to the choice of parameters.
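A minimal simulated-annealing sketch for maximization; the temperature schedule, random seed, and toy objective are all assumptions:

```python
import math, random

def simulated_annealing(f, neighbor, start, t0=10.0, cooling=0.95, steps=500):
    """Simulated annealing for maximization: a worse move is accepted
    with probability exp(delta / T), so the search can escape local optima."""
    random.seed(0)                     # fixed seed for a reproducible run
    current, t = start, t0
    for _ in range(steps):
        cand = neighbor(current)
        delta = f(cand) - f(current)
        # accept improvements always; accept worse moves with prob e^(delta/T)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = cand
        t *= cooling                   # lower the temperature each step
    return current

f = lambda x: -(x - 3) ** 2            # toy objective with maximum at x = 3
neighbor = lambda x: x + random.choice([-1, 1])
print(simulated_annealing(f, neighbor, 0))
```

As the temperature drops, the acceptance probability for worse moves shrinks toward zero and the behaviour degenerates into plain hill climbing.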

Tabu Search:
A tabu list is used to keep track of recently visited solutions and prevent backtracking. This approach
can be effective for avoiding cycles and exploring diverse areas of the search space, but it can also get
stuck in local optima if the tabu list is too strict.

Variants of Hill Climbing Algorithm:


Steepest ascent hill climbing:
The search process evaluates all neighbors and moves to the one with the highest objective function
value, provided it improves on the current solution.
This approach can be effective for finding the global optimum, but it can also be computationally
expensive and prone to oscillations.
Application: Finding the optimal parameters of a support vector machine for a binary classification
problem.
Hill climbing with random restarts:
The search process is repeated from multiple random starting points, in the hope of finding a better
solution.
This approach can be effective for escaping local optima, but it can also be computationally expensive
and require a large number of restarts.
Application: Finding the optimal combination of hyperparameters for a deep learning model for image
recognition.

First-choice hill climbing


The search process evaluates a random subset of the neighbors and selects the first one that is better
than the current solution.
This approach can be effective for avoiding cycles and exploring diverse areas of the search space, but
it can also lead to suboptimal solutions if the subset is too small or not well-guided.
Application: Finding the optimal sequence of moves in a chess game.

Hill climbing with simulated annealing:


The search process is guided by a temperature parameter that controls the likelihood of accepting worse
solutions, similar to the simulated annealing search strategy.
This approach can be effective for escaping local optima and exploring diverse areas of the search
space, but it can also be computationally expensive and sensitive to the choice of parameters.
Application: Finding the optimal placement of facilities in a logistics network with multiple
objectives.

Advantages of Hill Climbing:


Simplicity: Hill climbing is easy to implement and understand, and requires few parameters to tune.
Efficiency: Hill climbing can be computationally efficient, especially for problems with simple
objective functions and low-dimensional search spaces.
Local search: Hill climbing is a local search algorithm, which can be beneficial for problems where
the objective function is smooth and the search space is well-behaved

Limitations:
Local optima:
Hill climbing can get stuck in local optima, which can lead to suboptimal solutions.
Suboptimal solutions that are better than their immediate neighbors, but worse than the global
optimum.
This happens when the algorithm chooses a neighboring solution that improves the objective function,
but does not lead to the global optimum.
Once the algorithm is stuck in a local optimum, it cannot escape from it, and may not find the global
optimum.
This limitation can be particularly problematic for problems with many local optima and few global
optima.

Initial solution:
Hill climbing is sensitive to the initial starting point, which can affect the quality of the final solution.
If the initial starting point is far from the global optimum, the algorithm may converge to a suboptimal
solution.
This limitation can be mitigated by using random or heuristic methods to generate a set of starting
points, and selecting the best one as the initial solution.

Search space:
Hill climbing is limited by the search space, which can be problematic for problems with complex or
discontinuous objective functions.
If the search space is large or contains regions with low objective function values, the algorithm may
require a large number of iterations to find a good solution.
Moreover, if the objective function is not smooth or continuous, the algorithm may not be able to
accurately estimate the gradient, and may get stuck in regions with large gradients or noisy values.
Lack of diversity:
The Algorithm focuses on improving the objective function value by iteratively selecting the best
neighboring solution.
This approach can be effective for problems where the objective function is smooth and well-behaved,
but may not be appropriate for problems with multiple objectives or conflicting constraints.
In these cases, the algorithm may converge to a single solution that is not diverse or does not reflect the
trade-offs between different objectives or constraints.

Sensitivity to parameters:
There are several parameters that can affect the performance, such as the step size, the stopping
criterion, and the number of iterations.
Choosing the right parameters can be challenging, and may require a trial-and-error process or a careful
analysis of the problem characteristics.
Moreover, the performance of the algorithm may be sensitive to small changes in the parameters,
which can make it difficult to generalize to new problems or datasets.

GAME PLAYING

Adversarial Search:
Adversarial search is a problem-solving technique used in artificial intelligence (AI) to find the best
move or strategy in a competitive, two-player game.
"Adversarial" refers to the fact that each player's goal is in direct conflict with the other player's goal.

In adversarial search, the two players are typically referred to as "max" and "min," representing the
maximizing player (who seeks to maximize their own outcome) and the minimizing player (who seeks
to minimize the maximizing player's outcome).

Searches in which two or more players with conflicting goals are trying to explore the same search
space for the solution, are called adversarial searches, often known as Games.

Types of Games in AI:
-Perfect information: games in which agents can see the complete game state, e.g. chess and checkers.
-Imperfect information: games in which agents do not have all the information about the state, e.g.
poker and bridge.
-Deterministic: games that follow a strict pattern with no element of chance, e.g. chess and tic-tac-toe.
-Non-deterministic: games with an element of chance such as dice or cards, e.g. backgammon and
Monopoly.

Formalization of the Problem:


A game can be defined as a type of search problem in AI, formalized with the following elements:
-Initial state: It specifies how the game is set up at the start.
-Player(s): It specifies which player has the move in a state.
-Actions(s): It returns the set of legal moves in a state.
-Result(s, a): The transition model, which specifies the result of a move in a state.
-Terminal-Test(s): True if the game is over, false otherwise. States where the game ends are called
terminal states.
-Utility(s, p): A utility function gives the final numeric value for a game that ends in terminal state s
for player p. It is also called the payoff function. For chess, the outcomes are a win, loss, or draw, with
payoff values +1, 0, and 1/2. For tic-tac-toe, the utility values are +1, -1, and 0.

Game Tree:
A game tree is a tree where nodes of the tree are the game states and Edges of the tree are the moves by
players. Game tree involves initial state, actions function, and result Function.
Min-Max Algorithm:
The mini-max algorithm is a recursive, backtracking algorithm used in decision-making and game
theory. It provides an optimal move for the player, assuming that the opponent also plays optimally.

Mini-Max algorithm uses recursion to search through the game-tree.


In this algorithm two players play the game, one is called MAX and other is called MIN.

The minimax algorithm performs a depth-first search over the complete game tree.
It proceeds all the way down to the terminal nodes of the tree, then backtracks up the tree as the
recursion unwinds.

Working of Min-Max:
Consider an example game tree representing a two-player game.
There are two players, one called Maximizer and the other called Minimizer.
Maximizer tries to get the maximum possible score, and Minimizer tries to get the minimum possible
score.
The algorithm applies DFS, so in this game tree we have to go all the way down to the terminal nodes.
The terminal values are given at the terminal nodes, so we compare those values and backtrack up the
tree until the initial state is reached.
Properties of Mini-Max algorithm:
Complete- Min-Max is complete. It will definitely find a solution (if one exists) in a finite search tree.
Optimal- Min-Max is optimal if both opponents play optimally.
Time complexity- As it performs DFS over the game tree, the time complexity of Min-Max is O(b^m),
where b is the branching factor of the game tree and m is its maximum depth.
Space Complexity- The space complexity of Mini-Max is, like DFS, O(bm).
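The algorithm can be sketched over a small hypothetical game tree (the tree and terminal values below are assumptions):

```python
def minimax(node, is_max, tree, values):
    """Minimax over a game tree given as dicts: `tree` maps internal
    nodes to children, `values` maps terminal nodes to payoffs."""
    if node in values:                    # terminal node: return its utility
        return values[node]
    children = [minimax(c, not is_max, tree, values) for c in tree[node]]
    return max(children) if is_max else min(children)

# Two-ply example: MAX moves at the root A, MIN moves at B and C
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
values = {'D': 3, 'E': 5, 'F': 2, 'G': 9}
print(minimax('A', True, tree, values))   # 3
```

MIN backs up min(3, 5) = 3 at B and min(2, 9) = 2 at C, so MAX chooses max(3, 2) = 3 at the root.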

Limitations:
The main drawback of the minimax algorithm is that it becomes very slow for complex games such as
chess and Go.
These games have a huge branching factor, and the player has many choices to consider.

Alpha Beta Pruning:


-A modified version of Min-Max.
-An optimization technique for Min-Max.
-Pruning is a technique by which we can compute the correct minimax decision without checking each
node of the game tree.
-It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta
pruning, or the Alpha-Beta Algorithm.
-Alpha-beta pruning can be applied at any depth of a tree.

The two-parameter can be defined as:


Alpha: The best (highest-value) choice we have found so far at any point along the path of Maximizer.
The initial value of alpha is -∞.
Beta: The best (lowest-value) choice we have found so far at any point along the path of Minimizer.
The initial value of beta is +∞.

It removes all the nodes which do not really affect the final decision but make the algorithm slow.
By pruning these nodes, it makes the algorithm fast.
The condition under which a branch is pruned is:
α >= β

Points to Remember:
The Max player only updates the value of alpha.
The Min player only updates the value of beta.
While backtracking the tree, the node values (not the alpha and beta values) are passed to the upper
nodes.
The alpha and beta values are only passed down to the child nodes.
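A sketch of minimax with alpha-beta pruning over the same kind of toy tree; note how the cutoff fires when alpha >= beta (the tree and values are assumptions):

```python
def alphabeta(node, is_max, tree, values, alpha=float('-inf'),
              beta=float('inf')):
    """Minimax with alpha-beta pruning: stop exploring a node's
    remaining children as soon as alpha >= beta."""
    if node in values:                    # terminal node: return its utility
        return values[node]
    if is_max:
        best = float('-inf')
        for child in tree[node]:
            best = max(best, alphabeta(child, False, tree, values, alpha, beta))
            alpha = max(alpha, best)      # MAX updates alpha only
            if alpha >= beta:
                break                     # prune the remaining children
        return best
    best = float('inf')
    for child in tree[node]:
        best = min(best, alphabeta(child, True, tree, values, alpha, beta))
        beta = min(beta, best)            # MIN updates beta only
        if alpha >= beta:
            break                         # prune the remaining children
    return best

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
values = {'D': 3, 'E': 5, 'F': 2, 'G': 9}
print(alphabeta('A', True, tree, values))   # 3 (leaf G is never evaluated)
```

After B backs up 3, the root's alpha is 3; at C the first child F gives beta = 2, so alpha >= beta and G is pruned without changing the result.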

Move Ordering in Alpha Beta Pruning:


-Worst ordering:
In some cases, the alpha-beta pruning algorithm prunes none of the leaves of the tree and works
exactly like the minimax algorithm. It then also consumes more time because of the alpha-beta
bookkeeping; such an ordering is called worst ordering. In this case, the best move occurs on the right
side of the tree. The time complexity for such an ordering is O(b^m).

-Ideal ordering:
The ideal ordering for alpha-beta pruning occurs when a lot of pruning happens in the tree and the best
moves occur on the left side of the tree. Since we apply DFS, it searches the left of the tree first and can
go twice as deep as the minimax algorithm in the same amount of time. The complexity with ideal
ordering is O(b^(m/2)).

Rules for Finding Good Ordering:


Try the best move from the shallowest node first.
Order the nodes in the tree so that the best nodes are checked first.
Use domain knowledge when ordering moves. E.g. for chess, try this order: captures first, then
threats, then forward moves, then backward moves.
We can bookkeep the states, as there is a possibility that states may repeat.

Perfect Decision Game:


In a perfect decision game, players have complete and perfect information about the game state,
available actions, and potential outcomes.
Players can make decisions based on this perfect information, knowing the consequences of each
possible action.
Examples of perfect decision games include games like Tic-Tac-Toe or Chess, where players can see
the entire board and all possible moves.

Imperfect Decision Game:


In contrast, imperfect decision games involve uncertainty and incomplete information about the game
state or the actions of other players.
Players must make decisions based on incomplete or imperfect information, leading to uncertainty
about the outcomes of their actions.
Examples of imperfect decision games include card games like Poker, where players have limited
information about the cards held by other players and must make decisions based on probabilities and
bluffing.
