
Fundamentals of Artificial Intelligence

Unit 2: Uninformed Search (Blind Search)


&
Informed Search (Heuristic Search)

by
Dr. Abdul Ahad
Contents
2.1 Search in Intelligent Systems
2.2 Search Space (State Space Graphs)
2.3 Generate and Test Paradigm (Algorithm)
2.4 Uninformed Search (Blind Search) Algorithms
2.4.1 Breadth First Search (BFS)
2.4.2 Depth First Search (DFS)
2.4.3 Uniform Cost Search (UCS)
2.4.4 Depth First Iterative Deepening Search (DFIDS)
2.4.5 Bi-Directional Search (BDS)
2.5 Comparing Blind Search Algorithms
2.6 Informed Search (Heuristic Search) Algorithms
2.6.1 Best First Search
2.6.2 Hill Climbing (Greedy Search)
2.6.3 A* Search
2.6.4 Beam Search
2.1 Search in Intelligent Systems
Search is a natural part of people’s lives. Search in
Artificial Intelligence is the process of navigating from a
starting state to a goal state by transitioning through
intermediate states.
An intelligent agent is trying to find a set or sequence
of actions to achieve a goal. Software that solves search
problems faster is deemed to be more intelligent.

Search Example

Formulate goal: Be in Hyderabad city.


Formulate problem: states are cities; operators are drives between
pairs of cities.
Find solution: Find a sequence of cities (e.g., Guntur, Vijayawada,
Kodad, Suryapet, Hyderabad) that leads from the current state to
a state meeting the goal condition.
2.2 State Space Graphs (Search Space)
• State
– A description of a possible state of the world
– Includes all features of the world that are pertinent to the problem
• Initial state
– Description of all pertinent aspects of the state in which the agent
starts the search
• Goal test
– Conditions the agent is trying to meet (e.g., have 1M)
• Goal state
– Any state which meets the goal condition
– Thursday, have 1M, live in NTV
– Friday, have 1M, live in TV9



▪ Action
- Function that maps (transitions) from one state to another.
▪ Problem formulation
- Describe a general problem as a search problem.
▪ Solution
- Sequence of actions that transitions the world from the initial
state to a goal state.
▪ Solution cost (additive)
- Sum of the cost of operators
- Alternative: sum of distances, number of steps, etc.
▪ Search
- Process of looking for a solution.
- Search algorithm takes problem as input and returns solution.
- We are searching through a space of possible states.
▪ Execution
- Process of executing sequence of actions (solution).
Search Problem

A search problem is defined by the

1. Initial state (e.g., Guntur)


2. Operators (e.g., Guntur -> Vijayawada,
Guntur -> Miryalaguda, etc.)
3. Goal test (e.g., at Hyderabad)
4. Solution cost (e.g., path cost)



Example Problem – Robot Assembly

States: real-valued coordinates of


• robot joint angles
• parts of the object to be assembled
Operators: rotation of joint angles
Goal test: complete assembly
Path cost: time to complete assembly
Example Problem – Towers of Hanoi

States: combinations of poles and disks


Operators: move disk x from pole y to pole z
subject to constraints
• cannot move disk on top of smaller disk
• cannot move disk if other disks on top
Goal test: disks from largest (at bottom) to smallest on goal pole
Path cost: 1 per move
Example Problem – Eight Puzzle

States: tile locations


Initial state: one specific tile configuration
Operators: move blank tile left, right, up, or down
Goal: tiles are numbered from one to eight around the square
Path cost: cost of 1 per move (solution cost is the same as the number
of moves, or path length)
Visualize Search Space as a Tree

• States are nodes


• Actions are edges
• Initial state is root
• Solution is path
from root to goal
node
• Edges sometimes
have associated
costs
• States resulting
from operator are
children
Search Problem Example (as a tree)



2.3 Generate and Test Paradigm
➢ The generate-and-test search paradigm is a very simple
algorithm that is guaranteed to find a solution if applied
systematically and a solution exists.

Algorithm: Generate-And-Test
– 1. Generate a possible solution.
– 2. Test to see if this is the expected solution.
– 3. If the solution has been found quit else go to step 1.
➢ Like depth-first search, generate-and-test requires
that complete solutions be generated for testing.
Solutions can also be generated randomly, but then
finding a solution is not guaranteed.
➢ This approach is what is known as British Museum
algorithm: finding an object in the British Museum
by wandering randomly.
➢ A depth-first search tree with backtracking can be
used to implement a systematic generate-and-test
procedure.
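As a rough illustration (not from the slides), the paradigm can be sketched in Python, assuming the candidate solutions can be enumerated by any iterable and tested by a goal predicate:

def generate_and_test(candidates, is_goal):
    """Systematic generate-and-test: try each candidate until one passes the test."""
    for candidate in candidates:      # 1. generate a possible solution
        if is_goal(candidate):        # 2. test whether it is the expected solution
            return candidate          # 3. quit if the solution has been found
    return None                       # no candidate passed the test

# Hypothetical usage: find a two-digit number whose digits multiply to 35
print(generate_and_test(range(10, 100), lambda n: (n // 10) * (n % 10) == 35))   # 57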



Example – Traveling Salesman Problem (TSP)
• A salesman has a list of cities, each of which he must visit
exactly once. There are direct roads between each pair of cities
on the list. Find the route the salesman should follow for the
shortest possible round trip that both starts and finishes at any
one of the cities.
– Traveler needs to visit n cities.
– Know the distance between each pair of cities.
– Want to know the shortest route that visits all the cities once.



Select the route with the minimum length as the final solution.
2.4 Search Strategies (Methods)

➢ Search strategies differ only in their queuing function (the order in which fringe nodes are expanded)


➢ Features by which to compare search strategies
– Completeness (always find solution)
– Cost of search (time and space)
– Cost of solution, optimal solution
– Make use of knowledge of the domain

➢ Search Strategies (Methods)


i) Uninformed search
ii) Informed search



Uninformed Search Strategies (Methods)

The search algorithms in this section have no additional
information about the goal node other than what is provided in the
problem definition. The plans to reach the goal state from the
start state differ only in the order and/or length of actions.
Uninformed search is also called Blind search.

The following are uninformed search algorithms:


1) Breadth First Search (BFS)
2) Depth First Search (DFS)
3) Uniform Cost Search (UCS)
4) Depth First Iterative Deepening Search (DFIDS)
5) Bi-Directional Search (BDS)
2.4.1 Breadth-First Search (BFS)

• Breadth-first search (BFS) is an algorithm used to search
graph or tree data structures.
• The algorithm visits and marks all the key nodes in a graph
in a breadthwise fashion. It is a level-by-level search method.
• BFS visits the nearest unvisited nodes one by one and marks
them. These nodes are added to a queue, which works on the
FIFO model.
• BFS is an uninformed search technique (blind search).
Breadth-First Search (BFS)



For Example, Which solution would BFS find to move from node
S to node G if run on the graph below?

Solution: The equivalent search


tree for the above graph is as
follows. As BFS traverses the tree
“shallowest node first”, it would
always pick the shallower branch
until it reaches the solution (or it
runs out of nodes, and goes to the
next branch). The traversal is
shown in blue arrows. Path: S -> D -> G
Assume the goal node is at depth d with a constant branching
factor b (the number of children connected to each node).

Time complexity: Equivalent to the number of nodes traversed in
BFS until the shallowest solution, i.e. O(b^d).

Space complexity: Equivalent to how large the fringe can get, i.e. O(b^d).

Completeness: BFS is complete, meaning that for a given
search tree, BFS will come up with a solution if one exists.

Optimality: BFS is optimal as long as the costs of all edges
are equal.
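A minimal Python sketch of BFS (an illustration, not the slides' own code), assuming the graph is given as an adjacency dictionary; the example graph is a hypothetical one consistent with the S -> D -> G answer above:

from collections import deque

def bfs(graph, start, goal):
    """Level-by-level search using a FIFO queue; returns a path or None."""
    frontier = deque([[start]])          # queue of partial paths (FIFO)
    visited = {start}                    # mark nodes as they enter the queue
    while frontier:
        path = frontier.popleft()        # shallowest unexpanded path first
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                frontier.append(path + [child])
    return None

graph = {'S': ['A', 'D'], 'A': ['B'], 'B': ['C'], 'C': ['G'], 'D': ['G']}
print(bfs(graph, 'S', 'G'))   # ['S', 'D', 'G']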
2.4.2 Depth-First Search (DFS)
• Depth-first search (DFS) is an algorithm used to search
graph or tree data structures.
• The algorithm visits and marks all the key nodes in a graph
in a depthwise fashion.
• DFS visits the nearest unvisited nodes one by one and marks
them. These nodes are added to a stack, which works on the
LIFO model.
• DFS is an uninformed search technique (blind search).
Depth-First Search (DFS)



For Example, Which solution would DFS find to move from node
S to node G if run on the graph below?

Solution: The equivalent


search tree for the above
graph is as follows. As DFS
traverses the tree “deepest
node first”, it would always
pick the deeper branch until
it reaches the solution (or it
runs out of nodes, and goes
to the next branch). The
traversal is shown in blue
arrows. Path: S -> A -> B -> C -> G
Time complexity: Equivalent to the number of nodes traversed
in DFS.

Space complexity: Equivalent to how large can the fringe get.

Completeness: DFS is complete if the search tree is finite,


meaning for a given finite search tree, DFS will come up with a
solution if it exists.

Optimality: DFS is not optimal, meaning the number of steps in
reaching the solution, or the cost spent in reaching it, may be high.
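For comparison, a minimal DFS sketch (illustrative only), using a LIFO stack on the same hypothetical graph; it returns the deeper S -> A -> B -> C -> G path, as in the example above:

def dfs(graph, start, goal):
    """Deepest-node-first search using a LIFO stack; returns a path or None."""
    frontier = [[start]]                       # stack of partial paths (LIFO)
    while frontier:
        path = frontier.pop()                  # deepest unexpanded path first
        node = path[-1]
        if node == goal:
            return path
        for child in reversed(graph.get(node, [])):
            if child not in path:              # avoid cycles along the current path
                frontier.append(path + [child])
    return None

graph = {'S': ['A', 'D'], 'A': ['B'], 'B': ['C'], 'C': ['G'], 'D': ['G']}
print(dfs(graph, 'S', 'G'))   # ['S', 'A', 'B', 'C', 'G']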



2.4.3 Uniform Cost Search (Branch & Bound)

• Uniform-cost search is an uninformed search


algorithm that uses the lowest cumulative cost to
find a path from the source to the destination.
• Nodes are expanded, starting from the root,
according to the minimum cumulative cost.
• Cost of a node is defined as:
• cost(node) = cumulative cost of all nodes from root
• cost(root) = 0



Example: Which solution would UCS find to move
from node S to node G if run on the graph below?

Solution: The equivalent search tree for


the above graph is as follows. Cost of
each node is the cumulative cost of
reaching that node from the root. Based
on UCS strategy, the path with least
cumulative cost is chosen. Note that due
to the many options in the fringe, the
algorithm explores most of them so long
as their cost is low, and discards them
when a lower cost path is found; these
discarded traversals are not shown
below. The actual traversal is shown in blue.
Path: S -> A -> B -> G
Cost: 5
Let m = cost of the optimal solution and c = minimum arc cost.
Then m/c = effective depth of the search (n is the branching factor).

Time complexity: T(n) = O(n^(m/c))

Space complexity: S(n) = O(n^(m/c))
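A sketch of UCS with a priority queue (illustrative; the weighted graph below is hypothetical, simply chosen so that the cheapest route is S -> A -> B -> G with cost 5, like the example above):

import heapq

def ucs(graph, start, goal):
    """Expand the node with the lowest cumulative path cost first."""
    frontier = [(0, start, [start])]              # priority queue keyed on cost
    best = {start: 0}                             # cheapest known cost per node
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        for child, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best.get(child, float('inf')):
                best[child] = new_cost
                heapq.heappush(frontier, (new_cost, child, path + [child]))
    return None, float('inf')

graph = {'S': [('A', 1), ('D', 4), ('G', 7)], 'A': [('B', 2)],
         'B': [('G', 2)], 'D': [('G', 6)]}
print(ucs(graph, 'S', 'G'))   # (['S', 'A', 'B', 'G'], 5)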



2.4.4 Depth First Iterative Deepening Search (DFIDS)
DFIDS is a variation of Depth Limited Search (DLS). If the
lowest depth of a goal state is not known, we can always find the
best limit l for DLS by trying all possible depths l = 0, 1, 2, 3, …
in turn, and stopping once we have achieved a goal state.



▪ This appears wasteful because all the DLS for l less than the
goal level are useless, and many states are expanded many
times.
▪ However, in practice, most of the time is spent at the deepest
part of the search tree, so the algorithm actually combines the
benefits of DFS and BFS.
▪ Because all the nodes are expanded at each level, the
algorithm is complete and optimal like BFS, but has the
modest memory requirements of DFS.
• The space complexity is O(b·d) as in DLS with l = d, which is
better than BFS.
• The time complexity is O(b^d) as in BFS, which is better than
DFS.
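A small iterative-deepening sketch (illustrative only), which simply reruns a depth-limited DFS with limits l = 0, 1, 2, ...:

def depth_limited(graph, node, goal, limit, path):
    """Depth-first search that stops expanding below the given depth limit."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in graph.get(node, []):
        if child not in path:
            result = depth_limited(graph, child, goal, limit - 1, path + [child])
            if result is not None:
                return result
    return None

def iterative_deepening(graph, start, goal, max_depth=50):
    """Run depth-limited search with l = 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit, [start])
        if result is not None:
            return result
    return None

graph = {'S': ['A', 'D'], 'A': ['B'], 'B': ['C'], 'C': ['G'], 'D': ['G']}
print(iterative_deepening(graph, 'S', 'G'))   # ['S', 'D', 'G'], found at limit l = 2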
2.5 Comparison of Blind Search Techniques

                 DFS         BFS          UCS
Complete         N           Y            Y
Optimal          N           N            Y
Heuristic        N           N            N
Time             b^m         b^(d+1)      b^m
Space            b·m         b^(d+1)      b^m

(b = branching factor, d = depth of the shallowest goal, m = maximum depth of the search tree)


2.6 Informed Search Techniques
The informed search algorithms have information on the goal
state, which helps in more efficient searching. This information
is obtained by something called a heuristic.

A heuristic is a function that estimates how close a state is to the


goal state.

For example: Manhattan distance, Euclidean distance, etc.
(the smaller the distance, the closer the state is to the goal).
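For instance, the Manhattan-distance heuristic for the eight puzzle can be sketched as follows (an illustration; states are assumed to be 9-tuples in row-major order with 0 for the blank):

def manhattan_distance(state, goal):
    """Sum of horizontal and vertical displacements of tiles 1..8 (0 = blank)."""
    distance = 0
    for tile in range(1, 9):
        r1, c1 = divmod(state.index(tile), 3)   # current row/column of the tile
        r2, c2 = divmod(goal.index(tile), 3)    # goal row/column of the tile
        distance += abs(r1 - r2) + abs(c1 - c2)
    return distance

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 0, 6, 7, 5, 8)
print(manhattan_distance(state, goal))   # 2: tiles 5 and 8 are each one move away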

The following are informed search algorithms:


1) Best-first search
2) Hill climbing (Greedy Search)
3) A* Search
4) Beam search
2.6.1 Best-First Search
Best-first search is an artificial intelligence search
algorithm that utilizes a priority queue and heuristic search.
Its objective is to find the shortest path from an initial state
to a goal node in a graph. The algorithm expands graph
nodes based on their estimated distance to the goal node,
progressing towards the goal.
For example, imagine you are playing a video game such as
Super Mario or Contra where you have to reach the goal
and defeat the enemy. Best-first search helps the computer
control Mario or the Contra character by checking for the
quickest route to reach and defeat the enemy. It evaluates
distinct paths and selects the closest one with no other threats,
so the goal is reached and the enemy defeated as fast as possible.
The best-first search algorithm in AI utilizes two lists for
monitoring the traversal while searching the graph space,
i.e., the OPEN and CLOSED lists. The OPEN list monitors the
immediate nodes currently available to traverse. In
contrast, the CLOSED list monitors the nodes that have already
been traversed.
Best-first search uses a heuristic function to make informed
decisions. It helps in finding the right and quick path
towards the goal, and is hence called heuristic search. The
current state of the agent (e.g., its position in a maze) is the
input of this function, based on which it estimates how close
the agent is to the goal. Based on this estimate, it assists in
reaching the goal in a reasonable time and with a minimum number of steps.
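A minimal greedy best-first sketch with OPEN as a priority queue ordered by h and CLOSED as a set (illustrative; the graph and heuristic values below are hypothetical, chosen to mirror the S -> D -> E -> G greedy example later in this unit):

import heapq

def best_first_search(graph, h, start, goal):
    """Always expand the node on OPEN with the smallest heuristic value h(n)."""
    open_list = [(h[start], start, [start])]     # OPEN: priority queue ordered by h
    closed = set()                               # CLOSED: nodes already expanded
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for child in graph.get(node, []):
            if child not in closed:
                heapq.heappush(open_list, (h[child], child, path + [child]))
    return None

graph = {'S': ['A', 'D'], 'A': ['B'], 'D': ['B', 'E'], 'B': ['C', 'E'], 'E': ['G']}
h = {'S': 7, 'A': 9, 'D': 5, 'B': 4, 'E': 3, 'C': 2, 'G': 0}
print(best_first_search(graph, h, 'S', 'G'))   # ['S', 'D', 'E', 'G']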
2.6.2 Hill Climbing (Greedy Search)
• Hill climbing algorithm is a local search algorithm which
continuously moves in the direction of increasing elevation/value
to find the peak of the mountain or best solution to the problem.
It terminates when it reaches a peak value where no neighbor has
a higher value.
• Hill climbing algorithm is a technique which is used for
optimizing mathematical problems. It is also called greedy
local search, as it only looks to its good immediate neighbor state
and not beyond that. It does not backtrack in the search space, as it
does not remember the previous states.
• In Hill Climbing, the algorithm starts with an initial solution and
then iteratively makes small changes to it in order to improve the
solution. These changes are based on a heuristic function that
evaluates the quality of the solution.
State-space Diagram for Hill Climbing

Local Maximum: Local maximum is a state which is better than its neighbor
states, but there is also another state which is higher than it.
Global Maximum: Global maximum is the best possible state of state space
landscape. It has the highest value of objective function.
Current state: It is a state in a landscape diagram where an agent is currently
present.
Flat local maximum: It is a flat space in the landscape where all the neighbor
states of the current state have the same value.
Shoulder: It is a plateau region which has an uphill edge.
Simple Hill climbing algorithm:
1. Define the current state as an initial state
2. Loop until the goal state is achieved or no more operators can
be applied on the current state (a minimal sketch follows these steps):
i. Apply an operation to current state and get a new state
ii. Compare the new state with the goal
iii. Quit if the goal state is achieved
iv. Evaluate new state with heuristic function and compare it
with the current state
v. If the newer state is closer to the goal compared to
current state, update the current state
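A minimal sketch of simple hill climbing (illustrative only), here phrased as maximizing an objective value over neighboring states; the one-dimensional objective is a hypothetical stand-in for a real problem:

import random

def hill_climbing(initial, neighbors, value, max_steps=1000):
    """Repeatedly move to a better neighbor until no neighbor improves the value."""
    current = initial
    for _ in range(max_steps):
        candidates = neighbors(current)
        best_neighbor = max(candidates, key=value) if candidates else None
        if best_neighbor is None or value(best_neighbor) <= value(current):
            return current                  # local (possibly global) maximum reached
        current = best_neighbor             # small change that improves the solution
    return current

# Hypothetical objective: maximize f(x) = -(x - 3)^2 over integer states
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climbing(random.randint(-10, 10), neighbors, value))   # 3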



Example-1: The "Traveling Salesman" Problem, in which we
must reduce the salesman's journey distance, is the most hill
climbing algorithm popular. Consider the following graph with
six cities and the distances between them.



From the given graph, since the origin is already mentioned, the
solution must always start from that node. Among the edges
leading from A, A → B has the shortest distance.

Then, B → C is the shortest (and only) edge from B, therefore
it is included in the output graph.



There is only one edge from C, which is C → D, therefore it is added to the
output graph.

There are two outward edges from D. Even though D → B has a lower
distance than D → E, B has already been visited and it would form a
cycle if added to the output graph. Therefore, D → E is added to the
output graph.



There’s only one edge from E, that is E → F.
Therefore, it is added into the output graph.

Again, even though F → C has a lower distance than F → A, F → A is
added to the output graph, because C has already been visited and
adding F → C would form a cycle.



The path found by hill climbing that originates and ends
at A is A → B → C → D → E → F → A

The cost of the path is: 16 + 21 + 12 +


15 + 16 + 34 = 114.



Example 2: Find the path from S to G using greedy search. The
heuristic value h of each node is shown below the name of the node.

Solution: Starting from S, we can traverse to A(h=9) or D(h=5). We


choose D, as it has the lower heuristic cost. Now from D, we can move
to B(h=4) or E(h=3). We choose E with lower heuristic cost. Finally,
from E, we go to G(h=0). This entire traversal is shown in the search tree
below, in blue.
Path: S -> D -> E -> G

▪ Advantage: Works well with informed search problems, with


fewer steps to reach a goal.
▪ Disadvantage: Can turn into unguided DFS in the worst case.
2.6.3 A* Tree Search
A* Tree Search, or simply known as A* Search, combines the
strengths of uniform-cost search and greedy search. In this search,
the evaluation function is the summation of the cost used in UCS,
denoted by g(x), and the heuristic used in greedy search, denoted by
h(x). The summed cost is denoted by f(x). Finally, choose the node
with the lowest f(x) value.

f(x) = g(x) + h(x)


Here, h(x) is called the forward cost, and is an estimate of the
distance of the current node from the goal node. And, g(x) is
called the backward cost, and is the cumulative cost of a node
from the root node.
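A compact A* sketch (illustrative; the weighted graph and heuristic values are hypothetical, chosen to be consistent with the S -> D -> B -> E -> G, cost 7 answer in the example below):

import heapq

def a_star(graph, h, start, goal):
    """Expand the node with the lowest f(x) = g(x) + h(x)."""
    frontier = [(h[start], 0, start, [start])]       # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for child, step in graph.get(node, []):
            new_g = g + step                         # backward cost from the root
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g + h[child], new_g, child, path + [child]))
    return None, float('inf')

graph = {'S': [('A', 2), ('D', 2)], 'A': [('B', 2)], 'D': [('B', 1), ('E', 4)],
         'B': [('E', 2)], 'E': [('G', 2)]}
h = {'S': 7, 'A': 6, 'D': 5, 'B': 4, 'E': 2, 'G': 0}
print(a_star(graph, h, 'S', 'G'))   # (['S', 'D', 'B', 'E', 'G'], 7)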



Example: Find the path to reach from S to G using A* search.

Solution: Starting from S, the algorithm computes g(x) + h(x) for all
nodes in the fringe at each step, choosing the node with the lowest sum.
The entire working is shown in the table below.
Path: S -> D -> B -> E -> G
Cost: 7
2.6.4 Beam Search
Beam search is a heuristic search algorithm that explores a
graph by expanding the most promising node in a limited set.
Beam search is an optimization of best-first search that
reduces its memory requirements.
Beam search uses breadth-first search to build its search
tree. At each level of the tree, it generates all successors of the
states at the current level, sorting them in increasing order of
heuristic cost. However, it only stores a predetermined number
(β) of best states at each level, called the beam width. Only
those states are expanded next.
A beam search takes three components as its input:
▪ A problem to be solved,
▪ A set of heuristic rules for pruning,
▪ And a memory with a limited available capacity.
For example,

Path: A -> C -> F -> G
But the optimal path is A -> D -> G

Step 1: OPEN = {A}
Step 2: OPEN = {B, C}
Step 3: OPEN = {C, E}
Step 4: OPEN = {F, E}
Step 5: OPEN = {G, E}
Step 6: Found the goal node {G}; now stop.

• The worst-case time complexity = O(β·m)
• The worst-case space complexity = O(β·m)
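A minimal beam search sketch (illustrative; the graph and heuristic values are hypothetical, chosen so that with β = 2 the search finds A -> C -> F -> G while the shorter A -> D -> G route is pruned, echoing the example above):

import heapq

def beam_search(graph, h, start, goal, beam_width=2):
    """Level-by-level search that keeps only the beam_width best states per level."""
    level = [[start]]                                 # paths kept at the current level
    while level:
        successors = []
        for path in level:
            for child in graph.get(path[-1], []):
                if child not in path:
                    successors.append(path + [child])
        for path in successors:
            if path[-1] == goal:
                return path
        # prune: keep only the beam_width most promising paths by heuristic value
        level = heapq.nsmallest(beam_width, successors, key=lambda p: h[p[-1]])
    return None

graph = {'A': ['B', 'C', 'D'], 'B': ['E'], 'C': ['F'], 'D': ['G'],
         'E': [], 'F': ['G']}
h = {'A': 5, 'B': 2, 'C': 1, 'D': 4, 'E': 3, 'F': 1, 'G': 0}
print(beam_search(graph, h, 'A', 'G'))   # ['A', 'C', 'F', 'G']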
Example-2: Consider the following example of searching a graph
from S to G. For simplicity, only the edge weights will be used for
evaluation. A beam width of β = 2 will be used, meaning only the top
two nodes are considered at every level.



Initialize the tree and add S to the closed list:

S only has one adjacent node I, so it is added to the tree:

Explore I. Nodes N and H are the top two nodes, so those are added to the tree:



Nodes F (adjacent to H in the graph) and L (adjacent to N in the
graph) are next:



The incompleteness of this algorithm can be seen in this step.
Node G has appeared in the open list but unfortunately, it has been
eliminated by the beam width. Nodes M and C are next:



It is in this step that node G has been found:



Trace the path from S to G: S – I – H – F – M - G
Thank You

