Heuristic Search
CS 5233 Artificial Intelligence

Contents

Algorithms
   Best-First Search
   General Best-First Search Algorithm
   A* Search Algorithm
   Iterative Deepening A* Search Algorithm
   IDA*'s Contour Procedure
Analysis
   Properties of A* Search
   Optimality Proof
   Efficiency of A*
   Performance of Heuristic Functions
   Book Experiment Avoiding Back Edges
Local Search
   Local Search
   Local Optima Example
   Local Search Algorithm
   Examples of Local Search Algorithms

Best-First Search

Simple search algorithms such as IDS (iterative deepening search) do not consider the goodness of states.
Best-first search visits states according to an evaluation function.
An evaluation function gives lower numbers to (seemingly) better states.
Heuristic search prefers to visit states that appear to be better.
A* search visits states according to the cost from the initial state to a given state plus a heuristic function.
A heuristic function estimates the cost from a given state to the closest goal state.

General Best-First Search Algorithm

function Best-FS(initial, Expand, Goal, Eval-Fn)
   q ← New-Priority-Queue()
   Insert(initial, q, Eval-Fn(initial))
   while q is not empty
   do current ← Extract-Min(q)
      if Goal(current) then return solution
      for each next in Expand(current)
      do Insert(next, q, Eval-Fn(next))
   return failure
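As a concrete rendering, the Best-FS pseudocode above can be sketched in Python with heapq as the priority queue. All names and the toy usage below are illustrative, not from the slides.

```python
import heapq

def best_fs(initial, expand, is_goal, eval_fn):
    """Sketch of general best-first search: repeatedly extract the
    state with the lowest evaluation-function value."""
    counter = 0  # tie-breaker so heapq never has to compare states
    q = [(eval_fn(initial), counter, initial)]
    while q:
        _, _, current = heapq.heappop(q)      # Extract-Min
        if is_goal(current):
            return current
        for nxt in expand(current):
            counter += 1
            heapq.heappush(q, (eval_fn(nxt), counter, nxt))
    return None                               # failure
```

For example, searching the integers with expand n → {n+1, n+3} and evaluation |10 − n| reaches the goal state 10 greedily.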
A* Search Algorithm

function A*(initial, Expand, Goal, Cost, Heuristic)
   q ← New-Priority-Queue()
   Insert(initial, q, Heuristic(initial))
   while q is not empty
   do current ← Extract-Min(q)
      if Goal(current) then return solution
      for each next in Expand(current)
      do Insert(next, q, Cost(next) + Heuristic(next))
   return failure

IDA*'s Contour Procedure

function Contour(current, limit)
   cost ← Cost(current) + Heuristic(current)
   if limit < cost then return null, cost
   if Goal(current) then return solution, cost
   new-limit ← ∞
   for each next in Expand(current)
   do result, cost ← Contour(next, limit)
      if result then return solution, cost
      new-limit ← min(new-limit, cost)
   return failure, new-limit
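The A* pseudocode above keeps Cost and Heuristic abstract. A minimal Python sketch over an explicit weighted graph (the graph representation and all names are assumptions for illustration) tracks g(n) explicitly and skips re-insertions that cannot improve it:

```python
import heapq

def a_star(graph, start, goal, h):
    """A* sketch: priority is f(n) = g(n) + h(n).
    graph[u] is a list of (v, edge_cost) pairs; h maps a state to its
    heuristic estimate. Returns (path, cost) or (None, inf) on failure."""
    q = [(h(start), 0, start, [start])]       # (f, g, state, path)
    best_g = {start: 0}                       # cheapest g found so far per state
    while q:
        f, g, current, path = heapq.heappop(q)
        if current == goal:
            return path, g
        for nxt, w in graph.get(current, []):
            g2 = g + w
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(q, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")
```

With an admissible h, the first time the goal is extracted from the queue its path cost is optimal, matching the optimality argument later in these slides.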
Iterative Deepening A* Search Algorithm

function IDA*(initial, Expand, Goal, Cost, Heuristic)
   limit ← Heuristic(initial)
   loop
   do result, limit ← Contour(initial, limit)
      if result then return result
      if limit = ∞ then return failure

Analysis

Properties of A* Search

Let n be a state/node.
Let g(n) be the cost from the initial state to n.
Let h(n) be the estimated cost from n to a goal state.
Let f(n) = g(n) + h(n).
h is admissible if it never overestimates the true cost to a goal.
If h is admissible, then A* finds an optimal path.
If h is admissible with at most ε error and the search space is a uniform tree with one goal state, then A* searches at most ε/2 away from the solution path.
Optimality Proof

Let f* be the optimal path cost.
Because h never overestimates, all states n on an optimal path have f(n) ≤ f*.
Any nonoptimal goal state has f(n) > f*.
Because the priority queue always extracts the state with the minimum f value, A* expands the states on the optimal path (all with f ≤ f*) before extracting any nonoptimal goal state (with f > f*).
Efficiency of A*

Assume a tree-structured state space (b = branching factor, d = goal depth), a single goal state, unit edge costs, and a maximum heuristic error of ε.
Any state n more than ε/2 off of the solution path has f(n) = g(n) + h(n) > f*.
All states n on the solution path have f(n) = g(n) + h(n) ≤ f*.
A* and IDA* visit O(d·b^(ε/2)) states.
A* uses O(d·b^(ε/2)) memory. IDA* uses O(d·b) memory.

Local Search

A local search algorithm keeps track of one state at a time.
An evaluation function and a selection procedure are used to decide which state to visit next.
Local search gives up optimality guarantees in hopes of finding good solutions more efficiently.
The main difficulty is local minima/maxima.
Performance of Heuristic Functions

Consider these 8-puzzle heuristic functions:
h1: the number of tiles out of their goal position (misplaced tiles).
h2: the total Manhattan distance from the tiles to their goal positions.
Both never overestimate, and h1 ≤ h2.
Characterize a heuristic by its effective branching factor:
let N be the number of states visited and d the solution depth, and solve for x in N = Σ_{i=0}^{d} x^i.

Book Experiment Avoiding Back Edges

States Visited (Effective BF)

d     IDS             IDA*(h1)       IDA*(h2)
4     52 (2.35)       10 (1.35)      7 (1.17)
8     569 (2.03)      42 (1.36)      14 (1.11)
12    5357 (1.92)     315 (1.47)     45 (1.19)
16    47271 (1.87)    2410 (1.52)    226 (1.28)
20    n/a             17646 (1.55)   764 (1.29)

Local Optima Example

[Figure: plot of sin(5x) − x²/5 for x in [−10, 10]; the sine ripples create many local maxima around the single global maximum.]
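The effective branching factor defined under Performance of Heuristic Functions can be recovered numerically. A bisection sketch (illustrative names; it solves N = Σ_{i=0}^{d} x^i for x):

```python
def effective_branching_factor(n_visited, d, tol=1e-6):
    """Solve N = sum_{i=0}^{d} x^i for x by bisection.
    Assumes n_visited > d + 1, so the root lies in (1, n_visited)."""
    def total(x):
        return sum(x ** i for i in range(d + 1))

    lo, hi = 1.0, float(n_visited)   # total(1) = d+1 < N, total(N) > N
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n_visited:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Applied to the table's first row (N = 52, d = 4) this reproduces the listed effective branching factor of about 2.35.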
Local Search Algorithm

function Local-Search(initial, Expand, Goal, Select)
   current ← initial
   loop
   do if Goal(current) then return solution
      current ← Select(Expand(current))
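If Select keeps the best-evaluated successor, and we additionally stop when no successor improves, the loop above becomes hill-climbing. A Python sketch (a variant of the pseudocode, since the pseudocode itself only stops at a goal; names are illustrative):

```python
def hill_climb(initial, expand, eval_fn, max_steps=1000):
    """Local search sketch: move to the best successor while it improves
    the evaluation function (lower is better, as in the slides)."""
    current = initial
    for _ in range(max_steps):
        neighbors = expand(current)
        if not neighbors:
            return current
        best = min(neighbors, key=eval_fn)    # Select = best-evaluated successor
        if eval_fn(best) >= eval_fn(current):
            return current                    # local optimum reached
        current = best
    return current
```

On a function with many local optima, such as the sin(5x) − x²/5 example above, this can stop far from the global optimum, which motivates the variants on the next slide.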
Examples of Local Search Algorithms

Hill-Climbing, Gradient Descent: select a state that improves the evaluation function.
Random-restart hill-climbing: repeat hill climbing from random initial states.
Simulated Annealing: hill-climbing with randomized selection.
Genetic Algorithms: maintain a set of "current states"; crossover generates new states from pairs of states.
Tabu Search: like hill-climbing, but avoid recently visited states or recently used operators.
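Simulated annealing from the list above can be sketched as hill-climbing that sometimes accepts worse moves, with probability exp(−Δ/T) under a geometrically cooling temperature. All parameters and names below are illustrative assumptions, not from the slides:

```python
import math
import random

def simulated_annealing(initial, expand, eval_fn,
                        t0=1.0, cooling=0.995, steps=10000, seed=0):
    """Simulated annealing sketch (lower eval_fn is better).
    Worse moves are accepted with probability exp(-delta / T)."""
    rng = random.Random(seed)            # seeded for reproducibility
    current, t = initial, t0
    best = current
    for _ in range(steps):
        nxt = rng.choice(expand(current))            # randomized selection
        delta = eval_fn(nxt) - eval_fn(current)      # positive = worse
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = nxt
        if eval_fn(current) < eval_fn(best):
            best = current                           # remember best state seen
        t *= cooling                                 # cool the temperature
    return best
```

Early on (high T) the walk escapes local optima; as T falls it behaves like hill-climbing, which is the trade-off the slide describes.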