Informed Heuristic Search Techniques


Heuristic Search Techniques

• All the previous searches (BFS, DFS, DFID, and Bidirectional) have been blind searches
• They make no use of any knowledge of the problem
• If we know something about the problem, we can usually do much, much better
• Heuristic search techniques are Informed search techniques where we will have functions
to guide the search procedure.
• A heuristic is a rule of thumb that uses domain-specific knowledge to estimate the quality or
potential of partial solutions.
• A heuristic is a criterion for deciding which among several alternatives will be the most
effective in achieving some goal.
• A heuristic technique does not guarantee finding the best answer, but it almost always
finds a very good one.
Heuristic Search Techniques (Contd.)
• Using good heuristics, we can hope to get good solutions to hard problems in less than
exponential time.
• There is no general theory for finding heuristics, because every problem is different
• There are general-purpose heuristics that are useful in a wide variety of problem
domains.
• We can also construct special purpose heuristics, which are domain specific.
Heuristic Search Techniques (Contd.)
• The searches which use some domain knowledge are called informed search strategies.
• Formally, an informed search problem is a 5-tuple [S, S0, O, G, h].
• h( ) is a heuristic function estimating the distance to a goal.
• The function h(n) is the estimated cost of the cheapest path from node n to a goal node.
Heuristic Search Techniques (Contd.)
Branch & Bound Search
• Sometimes, it is called Uniform Cost Search.
• In Branch & Bound we will introduce the notion of cost.
• While generating a search space, it expands the least-cost partial path.
• The cost function g(X) or C(X) assigns the cumulative cost to the path from the start node to
the current node X obtained by applying the sequence of operators.
• If all operators have unit cost, then this is the same as BFS.
Heuristic Search Techniques (Contd.)
Branch & Bound Algorithm
(The algorithm listing is shown on the slide.)
Heuristic Search Techniques (Contd.)
Branch & Bound Algorithm (contd.)
Let us consider an example state space and apply the B&B algorithm. The contents of the
OPEN and CLOSED lists at each step are shown on the slide.
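Since the algorithm listing and the worked example from the slides are not reproduced in this text, here is a minimal branch-and-bound (uniform-cost) search sketch in Python; the graph, node names and costs at the end are purely illustrative.

```python
import heapq

def branch_and_bound(graph, start, goal):
    """graph: dict mapping a node to a list of (neighbor, edge_cost) pairs."""
    open_list = [(0, [start])]               # OPEN: partial paths ordered by g(X)
    closed = set()                           # CLOSED: nodes already expanded
    while open_list:
        g, path = heapq.heappop(open_list)   # expand the least-cost partial path
        node = path[-1]
        if node == goal:
            return g, path                   # first goal reached is the cheapest
        if node in closed:
            continue
        closed.add(node)
        for neighbor, cost in graph.get(node, []):
            if neighbor not in closed:
                heapq.heappush(open_list, (g + cost, path + [neighbor]))
    return None                              # OPEN exhausted: failure

# Hypothetical example graph:
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 5)], 'B': [('G', 1)]}
print(branch_and_bound(graph, 'S', 'G'))     # -> (4, ['S', 'A', 'B', 'G'])
```

With unit edge costs this behaves exactly like BFS, as noted above.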


Heuristic Search Techniques (Contd.)
Hill Climbing
• Hill Climbing is an optimization technique that belongs to the family of local searches.
• Local search algorithms operate using a single current node and generally move only to
neighbors of that node.
• It is a local search algorithm that continuously moves in the direction of increasing value –
that is, uphill – in order to find the best solution to the problem.
• It terminates when it reaches a “peak” where no neighbor has a higher value.
• The algorithm does not maintain a search tree as it only keeps a single current state.
• It is also called greedy local search, as it only looks at its immediate neighbor states and
not beyond them.
• Moving through a tree of paths, hill climbing proceeds in depth-first order, but the choices
are ordered according to some heuristic value.
Heuristic Search Techniques (Contd.)
Hill Climbing Algorithm
Input: Start and goal states of the problem
Output: Yes or No
Method:
1. Initialize: Set OPEN = {S0} ; current-state = start state;
2. Fail: If OPEN ={ }, terminate with failure
3. Select: Select the current state, n, from OPEN
4. Terminate: If n ∈ G (if n is a goal state), terminate with success.
5. Expand:
Generate the successors of n using state transition operators (O).
For each successor m, compute h(m). (i.e., estimated cost)
If a successor m is better than the current state n, then make state m the current
state and insert it into OPEN.
6. Loop: Go to step 2.
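A minimal sketch of the algorithm above in Python, assuming `successors`, `h` (an estimated cost, so lower is better) and `is_goal` are supplied by the problem; for simplicity it keeps only the current state rather than an explicit OPEN list.

```python
def hill_climbing(start, successors, h, is_goal):
    current = start
    while True:
        if is_goal(current):
            return current                    # Terminate: success
        neighbors = successors(current)       # Expand: generate successors
        if not neighbors:
            return None                       # no moves at all: failure
        best = min(neighbors, key=h)          # best-looking successor by h
        if h(best) >= h(current):
            return None                       # no successor improves on the current state
        current = best                        # move only to an improving state
```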
Heuristic Search Techniques (Contd.)
Problems in Hill Climbing
• The search process may reach a position that is not a solution, and from there no move
improves the situation.
• This will happen if we have reached a Local maximum, a plateau or a ridge.
• Local maximum: It is a state that is better than all its neighbors but not better than
some other states farther away. From this state all moves appear to be worse. Since hill
climbing uses a greedy approach, it will not move to a worse state and will terminate,
even though a better solution may exist.
• The solution to this is to backtrack to some earlier state and try going in a different
direction.
Heuristic Search Techniques (Contd.)
Problems in Hill Climbing (Contd.)
• Plateau: It is a flat area of the search space in which a whole set of neighboring states
have the same value. It is not possible to determine the best direction.
• Here, make a big jump in some direction and try to get to a new section of the search
space.
• Ridge: It is an area of the search space that is higher than the surrounding areas, but that
cannot be traversed by single moves in any one direction (a special kind of local maximum).
• Here, apply two or more rules before doing the test, i.e., move in several directions at
once.
Heuristic Search Techniques (Contd.)
Hill Climbing Algorithm (Contd.)
Consider the Blocks World Problem:

The possible operators for this problem are as follows:


■ Pick up one block and put it on the table.

■Pick up one block and put it on another one.

Suppose we use the following heuristic function:


■ A Local heuristic function: Add one point for a block if it is in the correct location and
subtract one point if it is in the wrong location. (Count +1 for every block that sits on the
correct thing; count -1 for every block that sits on an incorrect thing.)
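A minimal sketch of this local heuristic, assuming a state is represented as a dictionary mapping each block to whatever it currently sits on ('table' or another block); the block names and the three-block example are purely illustrative.

```python
def local_heuristic(state, goal):
    score = 0
    for block, support in state.items():
        # +1 if the block sits on the correct thing, -1 otherwise
        score += 1 if support == goal[block] else -1
    return score

# Hypothetical 3-block example: goal is C on B on A on the table.
goal  = {'A': 'table', 'B': 'A', 'C': 'B'}
state = {'A': 'table', 'B': 'A', 'C': 'table'}   # C has been put on the table
print(local_heuristic(state, goal))              # -> 1  (+1 +1 -1)
```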
Heuristic Search Techniques (Contd.)
Hill Climbing Algorithm (Contd.)
(The worked example using the local heuristic is shown on the slide.)
Heuristic Search Techniques (Contd.)

Hill Climbing Algorithm (Contd.)

Now let us consider another heuristic function.
A Global heuristic function: For each block that has a correct support structure (everything
it sits on is as in the goal), add one point for every block in that support structure; for each
block that has a wrong support structure, subtract one point for every block in its support
structure.
■ Using this function, the goal state has score 11. The initial state has the score -1.
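A corresponding sketch of the global heuristic, reusing the state representation and the hypothetical `goal`/`state` from the local-heuristic sketch above; the scores it produces apply to that small example, not to the configuration on the slide.

```python
def support_structure(state, block):
    """Return the list of blocks that a block (transitively) rests on."""
    chain = []
    below = state[block]
    while below != 'table':
        chain.append(below)
        below = state[below]
    return chain

def global_heuristic(state, goal):
    score = 0
    for block in state:
        structure = support_structure(state, block)
        correct = structure == support_structure(goal, block)
        # +1 per block in a correct support structure, -1 per block in a wrong one
        score += len(structure) if correct else -len(structure)
    return score

print(global_heuristic(goal, goal))    # -> 3  (0 + 1 + 2) for the 3-block goal
print(global_heuristic(state, goal))   # -> 1  (0 + 1 + 0)
```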
Heuristic Search Techniques (Contd.)
Hill Climbing Algorithm (Contd.)
(The worked example using the global heuristic is shown on the slide.)
Heuristic Search Techniques (Contd.)
Beam Search
• Beam Search progresses level by level.
• It moves downward from the best W nodes only at each level. Other nodes are ignored.
• W is called width of beam search.
• It is like BFS in that expansion also proceeds level by level.
• The best nodes are chosen according to the heuristic cost associated with each node.
• If B is the branching factor, then there will be only W*B nodes under consideration at any
depth but only W nodes will be selected.
• If W = 1, then it becomes hill-climbing search, where the best node is always chosen from
among the successor nodes.
Heuristic Search Techniques (Contd.)
Algorithm – Beam search
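The algorithm listing on this slide is not reproduced in the text, so here is a minimal beam-search sketch, under the assumption that `successors`, `h` (lower is better) and `is_goal` are supplied by the problem.

```python
def beam_search(start, successors, h, is_goal, W):
    beam = [start]                            # current level: at most W nodes
    while beam:
        # Generate all children of the W nodes kept at this level
        children = [m for n in beam for m in successors(n)]
        for m in children:
            if is_goal(m):
                return m
        # Keep only the best W children (lowest heuristic cost); discard the rest
        beam = sorted(children, key=h)[:W]
    return None                               # beam ran dry: failure
```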
Heuristic Search Techniques (Contd.)
Beam search (Contd.)
The search tree generated by the beam search algorithm, assuming W = 2 and B = 3, is shown
on the slide.
Heuristic Search Techniques (Contd.)
Best-First Search
• Expand the best partial path from current node to the goal node.
• The cost of the partial path is calculated using some heuristic.
• The name Best-first refers to the method of exploring the node with the best “score” first.
• If the state has been generated earlier and new path is better than the previous one, then
change the parent and update the cost.
• It should be noted that in hill climbing, sorting is done on the successor nodes, whereas
in Best-first search, sorting is done on the entire OPEN list.
• It is not guaranteed to find an optimal solution, but it generally finds some solution faster
than other methods do.
• The performance varies directly with the accuracy of the heuristic evaluation function.
Heuristic Search Techniques (Contd.)

Best First Search Algorithm


Input: Start and goal states of the problem
Output: Yes or No
Method:
1. Initialize: Set OPEN={S0}; CLOSED={ }
2. Fail: If OPEN={ }, terminate with failure
3. Select: Select the best score node, n, from OPEN and save it in CLOSED.
4. Terminate: If n ∈ G (if n is a goal state), terminate with success.
5. Expand: Generate all successors of node n using state transition operators (O).
• Assign the successor nodes a score using the evaluation function and add the
scored nodes to OPEN.
• Sort the list OPEN according to their heuristic values.
6. Loop: Go to step 2
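A minimal sketch of the algorithm above in Python. It assumes `successors`, `h`, and `is_goal` are supplied by the problem and that states are hashable; for brevity it omits the re-parenting step (updating the parent and cost when a better path to an already generated state is found) mentioned earlier.

```python
import heapq
import itertools

def best_first_search(start, successors, h, is_goal):
    counter = itertools.count()                      # tie-breaker for equal h values
    open_list = [(h(start), next(counter), start)]   # OPEN, ordered by h
    closed = set()                                   # CLOSED
    parent = {start: None}                           # to recover the path at the end
    while open_list:                                 # Fail: OPEN empty -> failure
        _, _, n = heapq.heappop(open_list)           # Select: best-scoring node
        if is_goal(n):                               # Terminate: success
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return list(reversed(path))
        closed.add(n)                                # save n in CLOSED
        for m in successors(n):                      # Expand: generate successors
            if m not in closed and m not in parent:
                parent[m] = n
                heapq.heappush(open_list, (h(m), next(counter), m))
    return None
```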
Heuristic Search Techniques (Contd.)
A* Algorithm
• The A* (“A-star”) method (Hart, 1972) combines ‘Branch and Bound’ and ‘Best-first
search’ with the dynamic programming principle.
• It uses a heuristic or evaluation function usually denoted by f(n) to determine the order in
which the search visits nodes in the tree.
• The heuristic function (or Evaluation function) for a node n is defined as f(n) = g(n) + h(n)
• The function g is a measure of the cost of getting from the Start node (initial state) to the
current node n.
• The function h is an estimate of additional cost of getting from the current node n to the
Goal node (final state).
• Starting with a given node, the algorithm expands the node with the lowest f(n) value.
Heuristic Search Techniques (Contd.)
The A* formula: f(n) = g(n) + h(n)
• g(n) is the (known) distance from the start to n.
• h(n) is the (estimated) distance from n to a goal.
• f(n) is just the sum of these.
• f(n) is our best guess of the distance from the start to a goal, passing through n.
(The slide illustrates g(n), h(n) and f(n) on a path from the start node through n to the goal.)
Heuristic Search Techniques (Contd.)
A* Algorithm
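The A* listing on this slide is not reproduced in the text; the following is a minimal sketch, under the assumption that the state space is given as a dictionary `graph` mapping each node to (neighbor, step-cost) pairs and `h` gives the heuristic estimate. The small graph and h-values at the end are hypothetical.

```python
import heapq
import itertools

def a_star(graph, h, start, goal):
    """graph: dict node -> list of (neighbor, step_cost); h: dict or callable."""
    estimate = h if callable(h) else (lambda n: h[n])
    counter = itertools.count()                       # tie-breaker for the heap
    open_list = [(estimate(start), next(counter), 0, start, [start])]
    best_g = {start: 0}                               # cheapest known g for each node
    while open_list:
        f, _, g, n, path = heapq.heappop(open_list)   # expand lowest f = g + h
        if n == goal:
            return g, path
        if g > best_g.get(n, float('inf')):
            continue                                  # stale entry: a better path to n exists
        for m, cost in graph.get(n, []):
            g2 = g + cost
            if g2 < best_g.get(m, float('inf')):      # better path to m found:
                best_g[m] = g2                        # update its cost (and parent via path)
                heapq.heappush(open_list,
                               (g2 + estimate(m), next(counter), g2, m, path + [m]))
    return None

# Hypothetical example:
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 5)], 'B': [('G', 1)]}
h = {'S': 3, 'A': 2, 'B': 1, 'G': 0}
print(a_star(graph, h, 'S', 'G'))   # -> (4, ['S', 'A', 'B', 'G'])
```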
Heuristic Search Techniques (Contd.)
A* Algorithm (Contd.)
Let us consider an example state space and apply the A* algorithm. The contents of the
OPEN and CLOSED lists at each step are shown on the slide.
Heuristic Search Techniques (Contd.)
Example – Eight Puzzle Problem
Solve Eight puzzle problem using A* algorithm
Start state:        Goal state:
3 7 6               5 3 6
5 1 2               7 _ 2
4 _ 8               4 1 8
(The blank tile is shown as "_".)

• Evaluation function f(n) = g(n) + h(n)


h(n) = the number of tiles not in their goal position in a given state n
g(n) = depth of node n in the search tree
• Initial node has f(initial_node) = 4
• Apply A* algorithm to solve it.
• The choice of evaluation function critically determines search results.
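A minimal sketch of this evaluation for the 8-puzzle, assuming a state is encoded as a tuple of nine entries read row by row, with 0 standing for the blank; the tuples below encode the start and goal states shown above.

```python
def misplaced_tiles(state, goal):
    # h(n): count the tiles (ignoring the blank) that are not in their goal position
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

start = (3, 7, 6,
         5, 1, 2,
         4, 0, 8)
goal  = (5, 3, 6,
         7, 0, 2,
         4, 1, 8)
print(misplaced_tiles(start, goal))   # -> 4, so f(initial) = g + h = 0 + 4
```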
Heuristic Search Techniques (Contd.)
Example – Eight Puzzle Problem (Contd.)
Search Tree (each state is written row by row, with "_" for the blank):

Start state (f = 0+4):
3 7 6
5 1 2
4 _ 8

Level 1 – moving the blank up, left, or right:
up (f = 1+3):     left (f = 1+5):    right (f = 1+5):
3 7 6             3 7 6              3 7 6
5 _ 2             5 1 2              5 1 2
4 1 8             _ 4 8              4 8 _

Level 2 – expanding the best node (f = 1+3):
up (f = 2+3):     left (f = 2+3):    right (f = 2+4):
3 _ 6             3 7 6              3 7 6
5 7 2             _ 5 2              5 2 _
4 1 8             4 1 8              4 1 8

Level 3 – expanding the "up" node (f = 2+3):
left (f = 3+2):   right (f = 3+4):
_ 3 6             3 6 _
5 7 2             5 7 2
4 1 8             4 1 8

Level 4 – expanding the best node (f = 3+2):
down (f = 4+1):
5 3 6
_ 7 2
4 1 8

Level 5 – moving the blank right reaches the Goal state:
5 3 6
7 _ 2
4 1 8
Heuristic Search Techniques (Contd.)
Harder Problem
• The quality of solution will depend on heuristic function.
• Harder instances of the 8-puzzle cannot be solved effectively with the heuristic function defined earlier.

• A better estimate for the h function might be as follows; the function g may remain the same.
h(n) = the sum of the distances of the tiles from their goal position in a given state n.
■ This is called Manhattan distance heuristic.
• Initial node has h(initial_node) = 1+1+2+2+1+3+0+2=12
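A minimal sketch of the Manhattan-distance heuristic, reusing the tuple representation and the `start`/`goal` tuples from the misplaced-tiles sketch above. (The value 12 quoted in the bullet refers to the harder start state shown only on the slide, not to that earlier start state.)

```python
def manhattan(state, goal):
    # h(n): sum of row + column distances of each tile from its goal position
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue                          # the blank is not counted
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

print(manhattan(start, goal))                 # -> 5 for the earlier start state
```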
Heuristic Search Techniques (Contd.)
Optimal solution by A* Algorithm
The A* algorithm finds an optimal solution if the heuristic function is carefully designed so
that it underestimates (never overestimates) the actual cost to the goal.
Underestimation
• If we can guarantee that h never overestimates the actual cost from the current state to the
goal, then the A* algorithm is guaranteed to find an optimal path to a goal, if one exists.
Heuristic Search Techniques (Contd.)
Explanation -Example of Underestimation

Assume the cost of all arcs to be 1. A is expanded to B, C and D.


The ‘f’ values for each node are computed.
B is chosen to be expanded to E.
We notice that f(E) = f(C) = 5
Suppose we resolve in favor of E, the path we are currently expanding. E is expanded to F.
Expansion of node F is stopped as f(F) = 6, and so we will now expand C.
Thus we see that by underestimating h(B), we have wasted some effort but eventually
discovered that B was farther away than we thought.
Then we go back, try another path, and find the optimal path.
Heuristic Search Techniques (Contd.)
Example – Overestimation
Here h is overestimated.
• A is expanded to B, C and D.
• Now B is expanded to E, E to F, and F to G, for a solution path of length 4.
• Consider a scenario where there is a direct path from D to G giving a solution of length 2.
• We will never find it, because of overestimating h(D).
• Thus, we may find some other, worse solution without ever expanding D.
• So, by overestimating h, we cannot be guaranteed to find the cheapest solution path.
Heuristic Search Techniques (Contd.)
Admissibility of A*
• A search algorithm is admissible if, for any graph, it always terminates with an optimal path
from the initial state to a goal state, if such a path exists.
• If the heuristic function h is an underestimate of the actual cost from the current state to the
goal state, then it is called an admissible function (h(n) ≤ h*(n)).

• Alternatively, we can say that A* always terminates with the optimal path whenever h(n) is an
admissible heuristic function.
Heuristic Search Techniques (Contd.)
Monotonic Function
• A function h is monotone if
  ∀ states Xi and Xj such that Xj is a successor of Xi:
      h(Xi) − h(Xj) ≤ cost(Xi, Xj),
  where cost(Xi, Xj) is the actual cost of going from Xi to Xj, and
  h(goal) = 0.
• In this case the heuristic is locally admissible, i.e., it consistently finds the minimal path to
each state encountered during the search.
• With a monotonic heuristic, if a state is rediscovered, it is not necessary to check whether the
new path is shorter.
• Every monotonic heuristic is admissible.
• A cost function f(n) is monotone if f(n) ≤ f(succ(n)), ∀ n.
• For any admissible cost function f, we can construct a monotonic admissible function.
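One standard way to do this (not spelled out on the slide) is the pathmax adjustment: when a child n' of n is generated, use f(n') = max(f(n), g(n') + h(n')), which is equivalent to h'(n') = max(h(n'), h(n) − cost(n, n')). This keeps f non-decreasing along any path while h' remains an underestimate.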
Heuristic Search Techniques (Contd.)
Iterative Deepening A* (IDA*) Algorithm
• Iterative deepening A* (IDA*) is a combination of the Depth-First Iterative Deepening
(DFID) and A* algorithm.
• But instead of using a depth bound, we use an f-limit.
• At each iteration, perform a DFS pruning off a branch when its total cost (g+h) exceeds a
given threshold.
• The initial threshold starts at the estimate of the cost of the initial state, and increases for
each iteration of the algorithm.
• At each iteration, the threshold used for the next iteration is the minimum f-value among
those that exceeded the current threshold.
• These steps are repeated till we find a goal state.
Heuristic Search Techniques (Contd.)

Iterative Deepening A* (IDA*) Algorithm


The algorithm works as follows.
• Start with f-limit = h(start) and we do a DFS.
• Prune any node if f(node) > f-limit.
• Next f-limit = min-cost of any node pruned.
• These steps are repeated till we find a goal state.
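A minimal recursive IDA* sketch in Python, assuming `successors(n)` yields (child, step-cost) pairs and that `h` and `is_goal` are supplied by the problem.

```python
import math

def ida_star(start, successors, h, is_goal):
    def dfs(node, g, f_limit, path):
        f = g + h(node)
        if f > f_limit:
            return None, f                    # prune; report the exceeded f-value
        if is_goal(node):
            return path, f
        min_pruned = math.inf
        for child, cost in successors(node):
            if child in path:                 # avoid cycles along the current path
                continue
            found, value = dfs(child, g + cost, f_limit, path + [child])
            if found is not None:
                return found, value
            min_pruned = min(min_pruned, value)
        return None, min_pruned

    f_limit = h(start)                        # initial threshold = h(start)
    while True:
        solution, next_limit = dfs(start, 0, f_limit, [start])
        if solution is not None:
            return solution
        if next_limit == math.inf:            # nothing left to explore: failure
            return None
        f_limit = next_limit                  # next threshold = min pruned f-value
```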
Iterative Deepening A* – Example (figure on the slide):
• 1st iteration (threshold = 5): the root node (f = 5) is expanded; its successors have f-values
6, 8, 4, 8 and 9. Every node whose f-value exceeds the threshold is pruned, and the smallest
pruned value, 6, becomes the next threshold.
• 2nd iteration (threshold = 6): the search goes one level deeper; the new nodes have f-values
7, 5, 9, 8, 4 and 7. Those exceeding 6 are pruned, so the next threshold becomes 7.
• 3rd iteration (threshold = 7): the search goes deeper still; the deepest level generated has
f-values 8, 9, 4, 4 and 8, and the goal state is reached.
Heuristic Search Techniques (Contd.)
Iterative Deepening A* (Contd.)
• Given an admissible, monotone cost function, IDA* will find a least-cost (optimal) solution
if one exists.
• IDA* not only finds the cheapest path to a solution but also uses far less space than A*, and
it expands approximately the same number of nodes as A* in a tree search.
• An additional benefit of IDA* over A* is that it is simpler to implement, as there are no
open and closed lists to be maintained.
• A simple recursion performs DFS inside an outer loop to handle iterations.
