
15/11/2024

Informed (Heuristic) Search

BLG 435E: Artificial Intelligence


Informed Searching
Instructor: Professor Mehmet Keskinöz
ITU Artificial Intelligence Research and Development Center (ITUAI)
Faculty of Computer and Informatics
Computer Engineering Department
Istanbul Technical University, Istanbul, Turkey

Email: [email protected]
ODS 2001

Environment Type Discussed in This Lecture

• Static environment; fully observable; deterministic; discrete.
• [Figure: decision tree classifying environments (fully observable? deterministic? sequential? discrete?) into heuristic search, constraint satisfaction, planning/control/cybernetics, and continuous-function optimization (vector search).]

Review: Tree search

• A search strategy is defined by picking the order of node expansion.
• Which nodes to check first?


Knowledge and Heuristics

• Simon and Newell, Human Problem Solving, 1972.
• Thinking out loud: experts have strong opinions like "this looks promising", "no way this is going to work".
• S&N: intelligence comes from heuristics that help find promising states fast.

Best-first search

• Idea: use an evaluation function f(n) for each node
  – estimate of "desirability"
  – expand the most desirable unexpanded node
• Implementation: order the nodes in the frontier in decreasing order of desirability
• Special cases:
  – greedy best-first search
  – A* search

Romania with step costs in km

• [Figure: Romania road map with step costs in km and straight-line distances hSLD to Bucharest.]

Greedy best-first search

• Evaluation function
  – f(n) = h(n) (heuristic)
  – = estimate of cost from n to the goal
• e.g., hSLD(n) = straight-line distance from n to Bucharest
• Greedy best-first search expands the node that appears to be closest to the goal
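The strategy can be sketched as a priority queue keyed on h(n) alone. A minimal, self-contained sketch on a fragment of the Romania map; the road costs and hSLD values below follow the commonly cited AIMA figures, so treat them as illustrative assumptions rather than exact data:

```python
import heapq

# Straight-line distances to Bucharest (h_SLD) for a fragment of the map.
H_SLD = {'Arad': 366, 'Zerind': 374, 'Timisoara': 329, 'Sibiu': 253,
         'Oradea': 380, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
         'Pitesti': 100, 'Bucharest': 0}

# Road segments with step costs in km (undirected, listed both ways).
ROADS = {
    'Arad': {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
    'Zerind': {'Arad': 75, 'Oradea': 71},
    'Oradea': {'Zerind': 71, 'Sibiu': 151},
    'Timisoara': {'Arad': 118},
    'Sibiu': {'Arad': 140, 'Oradea': 151, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97},
    'Pitesti': {'Rimnicu Vilcea': 97, 'Bucharest': 101},
    'Bucharest': {'Fagaras': 211, 'Pitesti': 101},
}

def greedy_best_first(start, goal):
    """Always expand the frontier node with the smallest h(n); g is ignored."""
    frontier = [(H_SLD[start], start, [start])]
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for nbr in ROADS[node]:
            if nbr not in explored:
                heapq.heappush(frontier, (H_SLD[nbr], nbr, path + [nbr]))
    return None

print(greedy_best_first('Arad', 'Bucharest'))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] -- greedy follows h and misses
# the cheaper route through Rimnicu Vilcea and Pitesti (450 km vs. 418 km).
```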


Greedy best-first search example

• [Figure: four slides stepping through greedy expansions on the Romania map.]
• Interactive demo: http://aispace.org/search/


Properties of greedy best-first search

• Complete? No – can get stuck in loops,
  – e.g., with Oradea as the goal: Iasi → Neamt → Iasi → Neamt → …
• Time? O(b^m), but a good heuristic can give dramatic improvement
• Space? O(b^m) – keeps all nodes in memory
• Optimal? No

A* search

• Idea: avoid expanding paths that are already expensive. Very important!
• Evaluation function f(n) = g(n) + h(n)
  – g(n) = cost so far to reach n
  – h(n) = estimated cost from n to the goal
  – f(n) = estimated total cost of the path through n to the goal
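A self-contained A* sketch on a fragment of the Romania map, ordering the frontier by f(n) = g(n) + h(n). The road costs and hSLD values follow the commonly cited AIMA figures, so treat them as illustrative assumptions:

```python
import heapq

# Straight-line distances to Bucharest and road costs in km (illustrative).
H_SLD = {'Arad': 366, 'Zerind': 374, 'Timisoara': 329, 'Sibiu': 253,
         'Oradea': 380, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
         'Pitesti': 100, 'Bucharest': 0}
ROADS = {
    'Arad': {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
    'Zerind': {'Arad': 75, 'Oradea': 71},
    'Oradea': {'Zerind': 71, 'Sibiu': 151},
    'Timisoara': {'Arad': 118},
    'Sibiu': {'Arad': 140, 'Oradea': 151, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97},
    'Pitesti': {'Rimnicu Vilcea': 97, 'Bucharest': 101},
    'Bucharest': {'Fagaras': 211, 'Pitesti': 101},
}

def a_star(start, goal):
    """Expand the frontier node with the smallest f(n) = g(n) + h(n)."""
    frontier = [(H_SLD[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g               # goal popped with lowest f => optimal
        if g > best_g.get(node, float('inf')):
            continue                     # stale queue entry, skip
        for nbr, cost in ROADS[node].items():
            g2 = g + cost
            if g2 < best_g.get(nbr, float('inf')):
                best_g[nbr] = g2
                heapq.heappush(frontier,
                               (g2 + H_SLD[nbr], g2, nbr, path + [nbr]))
    return None, float('inf')

path, cost = a_star('Arad', 'Bucharest')
print(path, cost)
# Unlike greedy best-first search, A* finds the 418 km optimum through
# Rimnicu Vilcea and Pitesti rather than the 450 km route via Fagaras.
```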

A* search example

• [Figure: six slides stepping through A* expansions on the Romania map.]
• Interactive demo: http://aispace.org/search/

• We stop when the node with the lowest f-value is a goal state.
• Is this guaranteed to find the shortest path?


Admissible heuristics

• A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
• An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic.
• Example: hSLD(n) (never overestimates the actual road distance)
• Negative example: "fly heuristic" – if the wall is dark, then the distance from the exit is large.
• Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal.

Optimality of A* (proof)

• Suppose some suboptimal goal path G2 has been generated and is in the frontier. Let n be an unexpanded node in the frontier such that n is on a shortest path to an optimal goal G.
• f(G2) = g(G2) since h(G2) = 0 (h is admissible, so it vanishes at a goal)
• g(G2) > g(G) since G2 is suboptimal; the cost of reaching G is less.
• f(G) = g(G) since h(G) = 0
• f(G2) > f(G) from above

Optimality of A* (proof, continued)

• Suppose some suboptimal goal path G2 has been generated and is in the frontier. Let n be an unexpanded node in the frontier such that n is on a shortest path to an optimal goal G.
• f(G2) > f(G) from above
• h(n) ≤ h*(n) since h is admissible and h* is the minimal distance.
• g(n) + h(n) ≤ g(n) + h*(n)
• f(n) ≤ f(G), since n lies on an optimal path to G, so g(n) + h*(n) = g(G) = f(G)
• Hence f(G2) > f(n), and A* will never select G2 for expansion.

Consistent heuristics

• A heuristic is consistent if for every node n and every successor n' of n generated by any action a,
  h(n) ≤ c(n,a,n') + h(n')
• Intuition: the estimate at n can't be worse than stepping to n' and estimating from there.
• If h is consistent, we have
  f(n') = g(n') + h(n') = g(n) + c(n,a,n') + h(n') ≥ g(n) + h(n) = f(n)
• i.e., f(n) is non-decreasing along any path.
• Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal.
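The consistency condition can be checked mechanically on any small explicit graph. A sketch with a hypothetical four-node graph (the states, edge costs, and heuristic values below are made up for illustration): one heuristic equals the true remaining cost and is consistent, while the other stays admissible but dips too low at one node, violating the edge inequality:

```python
# Toy directed graph: GRAPH[n][n'] is the step cost c(n, a, n'); 'G' is the goal.
GRAPH = {'A': {'B': 1, 'C': 4}, 'B': {'C': 2, 'G': 5}, 'C': {'G': 1}, 'G': {}}

def is_consistent(h, graph):
    """h(n) <= c(n, a, n') + h(n') must hold for every edge (n, n')."""
    return all(h[n] <= cost + h[m]
               for n, nbrs in graph.items()
               for m, cost in nbrs.items())

# True remaining costs here are h* = {A: 4, B: 3, C: 1, G: 0}.
h_exact = {'A': 4, 'B': 3, 'C': 1, 'G': 0}   # consistent (and admissible)
h_dip   = {'A': 4, 'B': 1, 'C': 1, 'G': 0}   # admissible but NOT consistent:
                                             # h(A) = 4 > c(A,B) + h(B) = 2

print(is_consistent(h_exact, GRAPH))   # True
print(is_consistent(h_dip, GRAPH))     # False
```

This also illustrates the slide's point that consistency is strictly stronger than admissibility: `h_dip` never overestimates, yet f would decrease along the edge A→B.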


Optimality of A*

• A* expands nodes in order of increasing f value.
• It gradually adds "f-contours" of nodes: contour i contains all nodes with f = f_i, where f_i < f_{i+1}.
• Interactive demo: http://aispace.org/search/

Properties of A*

• Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))
• Time? Exponential
• Space? Keeps all nodes in memory
• Optimal? Yes

Admissible heuristics

E.g., for the 8-puzzle:

• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance (i.e., number of squares each tile is from its desired location)
• For the start state S in the figure: h1(S) = 8, h2(S) = 3+1+2+2+2+3+3+2 = 18
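Both heuristics are cheap to compute. A minimal sketch, assuming the slide's start state S is the standard configuration used in AIMA (the figure itself is not reproduced here, so the exact layout is an assumption); 0 denotes the blank:

```python
# Assumed start state S (standard AIMA 8-puzzle example); 0 is the blank.
START = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
GOAL = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Total Manhattan distance of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

print(h1(START), h2(START))   # 8 18
```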


Dominance

• If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1.
• h2 is better for search.
• Typical search costs (average number of nodes expanded):
  – d=12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
  – d=24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes

How to Select a Good Heuristic?

• Choosing a good heuristic is crucial for the efficiency and accuracy of the A* algorithm. The heuristic function h(n) should estimate the cost from a node n to the goal. A well-designed heuristic can significantly speed up the search by guiding the algorithm toward the goal more directly.
• Here are some principles and methods to help decide on a good heuristic:
• 1. Admissibility (Underestimation)
  – A heuristic is admissible if it never overestimates the cost to reach the goal. This ensures that A* will find the optimal path.
  – Example: in a grid-based pathfinding problem, the Euclidean distance or Manhattan distance can be used as an admissible heuristic:
    • Euclidean distance is useful if diagonal movement is allowed.
    • Manhattan distance is useful if only horizontal and vertical movements are allowed.
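The two grid heuristics are one-liners. A small sketch (coordinates are assumed to be (row, col) pairs on a unit-cost grid):

```python
from math import hypot

def manhattan(p, q):
    """Admissible for unit-cost 4-directional grid movement."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    """Straight-line distance; never overestimates any path between p and q."""
    return hypot(p[0] - q[0], p[1] - q[1])

print(manhattan((0, 0), (3, 4)))   # 7
print(euclidean((0, 0), (3, 4)))   # 5.0
```

Note that on a 4-directional grid, Manhattan distance is both admissible and tighter than Euclidean; with diagonal moves allowed, Manhattan can overestimate, so Euclidean is the safe choice.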

• 2. Consistency (Monotonicity)
  – Consistency ensures that A* will not revisit nodes and makes the algorithm more efficient.
  – If a heuristic is consistent, it is also admissible, but not all admissible heuristics are consistent.
• 3. Domain-Specific Knowledge
  – The design of the heuristic depends on the nature of the problem. Understanding the problem domain can help create more accurate heuristics.
  – For example:
    • Navigation problems: use straight-line (Euclidean) distance, Manhattan distance, or great-circle distance (for spherical surfaces like Earth).
    • Puzzle problems (like the 8-puzzle): use the number of misplaced tiles or the sum of the distances of each tile from its goal position (Manhattan distance).


• 4. Heuristic Tightness
  – A tighter heuristic provides estimates that are closer to the actual cost, leading to fewer nodes being explored. The tighter the heuristic (as long as it remains admissible), the faster the A* algorithm runs.
  – However, if the heuristic is too complex to compute, it can slow down the algorithm despite being accurate. So there is a balance between heuristic accuracy and computational cost.
• 5. Relaxed Problem Heuristics
  – A heuristic can be derived from a relaxed version of the problem, where some constraints are removed, making it easier to solve. The optimal cost in the relaxed problem is a valid heuristic for the original problem.
  – Example: in the sliding-tile puzzle, if we relax the constraint that tiles can only move into adjacent empty spaces, the shortest distance each tile needs to reach its goal position can serve as a heuristic.

• 6. Combining Heuristics
  – When multiple admissible heuristics h1, h2, ..., hk are available, taking their maximum is also admissible:
    h(n) = max(h1(n), h2(n), ..., hk(n))
  – This often provides a tighter estimate than any individual heuristic, leading to better performance.

• Examples of Heuristics for Common Problems
  – Pathfinding on a grid:
    • Manhattan distance: best for grids where movement is allowed only in horizontal or vertical directions.
    • Euclidean distance: best for grids where diagonal movement is allowed.
    • Chebyshev distance: useful when diagonal moves cost the same as horizontal/vertical moves (e.g., in a chessboard-like setting).
  – 8-puzzle (or sliding puzzles):
    • Number of misplaced tiles: counts how many tiles are out of place.
    • Manhattan distance: sums the distance each tile is away from its goal position.
    • Linear conflict: extends Manhattan distance by adding extra penalties for tiles that are in the correct row or column but are reversed relative to their goal order.
  – Graph problems:
    • If the problem involves traveling between points in a Euclidean space, the straight-line distance is a common choice.
    • For more complex graphs, domain-specific simplifications or models can serve as the heuristic.
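The max-combination rule is a one-liner. A sketch using two hypothetical heuristic tables over three made-up states, chosen so that neither component dominates the other and the maximum is strictly tighter than either:

```python
def max_heuristic(*heuristics):
    """Pointwise maximum of admissible heuristics: the result is still
    admissible and dominates every component."""
    def h(n):
        return max(hf(n) for hf in heuristics)
    return h

# Hypothetical admissible estimates; h_a is tighter at n1, h_b at n2.
h_a = {'n1': 5, 'n2': 2, 'goal': 0}
h_b = {'n1': 3, 'n2': 4, 'goal': 0}

h = max_heuristic(h_a.__getitem__, h_b.__getitem__)
print(h('n1'), h('n2'), h('goal'))   # 5 4 0
```

Since each component never overestimates h*, their maximum cannot overestimate it either, which is why the combined heuristic stays admissible.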


Relaxed problems

• A problem with fewer restrictions on the actions is called a relaxed problem.
• The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.
• If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution.
• If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution.

Summary: choosing a heuristic

• Ensure it is admissible (never overestimates).
• Check whether it is consistent (for efficiency).
• Leverage domain knowledge to estimate the cost.
• Consider the trade-off between the heuristic's computational complexity and its accuracy.
• Use relaxed-problem solutions or combine multiple heuristics where possible.
• A well-chosen heuristic can make A* significantly faster by reducing the number of explored nodes, making it practical for larger problem spaces.

Summary

• Heuristic functions estimate costs of shortest paths.
• Good heuristics can dramatically reduce search cost.
• Greedy best-first search expands lowest h
  – incomplete and not always optimal
• A* search expands lowest g + h
  – complete and optimal
  – also optimally efficient (up to tie-breaks)
• Admissible heuristics can be derived from exact solutions of relaxed problems.
