(AI) Searching
Rakesh Kumar Bachchan


CDCSIT, T.U.
Introduction

Search is the determination of the possible sequences of actions that lead from a given state to the desired (goal) state, and then the choice of the best such sequence.

It is mainly used for problem solving. The Water Jug problem, the Puzzle problem, and many more can be solved by searching.

A search problem is associated with two important issues: what to search for (the key) and where to search (the search space).
Introduction

In AI, the search space is called the "state space".

Generally, the state space of most AI problems is not completely known prior to solving the problem.

So, problem solving in AI has two phases:
1. Generation of the state space
2. Searching for the desired problem state (the goal) in that space.
Introduction

Generation of the state space

How are the states of a problem generated? To formalize this, define a four-tuple, called the state space, denoted by

{nodes, arc, goal, current}

where
nodes represents the set of existing states in the search space;
arc denotes an operator applied to an existing state to cause a transition to another state;
goal denotes the desired state to be identified among the nodes;
current represents the state generated now, to be matched with the goal.
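As an illustrative sketch only (not part of the slides), the four-tuple can be written down directly in Python; the field names and types below simply mirror the tuple above and are assumptions of this sketch.

from dataclasses import dataclass
from typing import Callable, Hashable, List, Set

@dataclass
class StateSpace:
    nodes: Set[Hashable]                        # states generated so far
    arc: Callable[[Hashable], List[Hashable]]   # operator: state -> successor states
    goal: Hashable                              # desired state to be identified
    current: Hashable                           # state generated now, matched with the goal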
Introduction

Since the whole state space for a problem is usually quite large, generating the whole space prior to the search may consume a significant amount of storage, leaving little for the search itself.

To overcome this problem, the state space is expanded in steps, and the desired state, called "the goal", is searched for after each incremental expansion of the state space.
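A minimal sketch of this expand-then-check loop, assuming a successors() function and an is_goal() test supplied by the problem (both names are assumptions of this sketch, not given on the slides):

def incremental_search(start, successors, is_goal):
    """Expand the state space one step at a time and test for the goal after
    each expansion, instead of generating the whole space up front."""
    frontier = [start]              # states generated but not yet expanded
    explored = set()                # states already expanded
    while frontier:
        state = frontier.pop(0)
        if is_goal(state):
            return state
        explored.add(state)
        for child in successors(state):        # one incremental expansion step
            if child not in explored and child not in frontier:
                frontier.append(child)
    return None                     # goal not found in the generated space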
Introduction

Once the search space is given, we know the order of traversal in the tree. Such a traversal is generally called "deterministic".

In an alternative type of search, we cannot definitely say which node will be traversed next without computing the details in the algorithm. Further, we may have a transition to one of many possible states with equal likelihood at an instance of the execution of the search algorithm. Such a search, where the order of traversal in the tree is not definite, is generally termed "non-deterministic".
Introduction

Depending upon the state-space expansion methodology and the order of visiting the states, search methods are of two types:

1. Uninformed (Blind) Search methods
2. Informed (Heuristic) Search methods
Uninformed (Blind) Search

These methods use no additional information about the states beyond that provided in the problem definition.

They can only generate successors and distinguish a goal state from a non-goal state. Hence the name "blind" search.

Examples: Breadth First Search, Depth First Search.
Breadth First Search

Simple idea: the root node is expanded first, then all the successors of the root node are expanded, then their successors, and so on.

That is, all nodes at a given depth in the search tree are expanded before any node at the next level is expanded.
Breadth First Search
Algorithm

Begin
1. Place the starting node in a queue;
2. Repeat
       Delete the queue to get the front element;
       IF the front element of the queue = GOAL,
           return SUCCESS and stop;
       ELSE
           insert the children of the front element, if any exist, in any order at the rear end of the queue;
   UNTIL the queue is empty;
End

Figure: breadth first search on a tree, where the node number denotes the order of visiting that node (the root is node 1 at depth 0; nodes 2-4 are at depth 1, nodes 5-8 at depth 2, and nodes 9-12 at depth 3).
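A runnable Python sketch of the same procedure, using a deque as the queue; successors and is_goal are assumed functions supplied by the problem (assumptions of this sketch):

from collections import deque

def breadth_first_search(start, successors, is_goal):
    """Expand nodes level by level; return the first goal node found, else None."""
    queue = deque([start])                  # step 1: place the starting node in a queue
    visited = {start}                       # guard against re-inserting seen states
                                            # (not in the slide's tree-search version, but harmless)
    while queue:                            # step 2: repeat until the queue is empty
        node = queue.popleft()              # delete the queue to get the front element
        if is_goal(node):
            return node                     # SUCCESS
        for child in successors(node):      # insert children at the rear of the queue
            if child not in visited:
                visited.add(child)
                queue.append(child)
    return None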
Breadth First Search

Time complexity

Assume a state space with an equal branching factor b from each node and largest depth d.

Worst case: expand all nodes except the last one at depth d.

Total number of nodes generated: b + b^2 + b^3 + ... + b^d + (b^(d+1) - b) = O(b^(d+1)).
Breadth First Search

Space complexity

Every node that is generated must remain in memory, so the space complexity is the same as the time complexity (plus one node for the root).

Total number of nodes in memory: 1 + b + b^2 + b^3 + ... + b^d + (b^(d+1) - b) = O(b^(d+1)).

Optimality

BFS is optimal if all paths at the same depth have the same cost. Otherwise it is not optimal: it finds the solution with the shortest path length (the shallowest solution), and if step costs differ, the shallowest solution may not be the optimal one.
Uniform Cost Search

This algorithm is due to Dijkstra [1959]. It expands nodes in the order of their path cost from the source.

The path cost is usually taken to be the sum of the step costs.

Newly generated nodes are put into OPEN according to their path costs. This ensures that when a node is selected for expansion, it is the node with the cheapest cost among the nodes in OPEN.

Uniform cost search is complete and optimal if the cost of each step exceeds some positive bound ε.
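A minimal Python sketch of uniform cost search, using a heap as the OPEN list ordered by path cost. The graph is assumed to be a dict mapping each node to a list of (neighbour, step_cost) pairs; this representation is an assumption of the sketch, not something fixed by the slides.

import heapq
from itertools import count

def uniform_cost_search(graph, start, goals):
    """Always expand the cheapest node in OPEN; return (goal, cost, path) or None."""
    tie = count()                                   # tie-breaker so the heap never compares nodes
    open_list = [(0, next(tie), start, [start])]    # (path cost g, tie, node, path so far)
    best_g = {start: 0}                             # cheapest known cost to reach each node
    while open_list:
        g, _, node, path = heapq.heappop(open_list)     # cheapest node in OPEN
        if node in goals:
            return node, g, path
        for child, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(open_list, (new_g, next(tie), child, path + [child]))
    return None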
Uniform Cost Search

It does not care about the number of steps a path has, but only about the total path cost.

So it could get stuck in an infinite loop if it ever expands a node that has a zero-cost action leading back to the same state.

Let C* be the cost of the optimal solution and assume that every action costs at least ε. Then the worst-case time and space complexity is O(b^(1 + floor(C*/ε))), which can be much greater than b^d.
Uniform Cost Search

Example: consider the tree below, rooted at A, with step costs on the edges. A has children B (4) and C (11); B has children D (15) and E (13); C has children F (4) and H (3); D has children I (12) and J (10); E has children G1 (18) and K (6); F has children L (6) and M (3); H has children N (1) and G2 (7). G1 and G2 are goal nodes.
Uniform Cost Search

Example continued: expanding A gives B with cost 4 and C with cost 11. Since B has the least cost, we expand it, giving D (4 + 15 = 19) and E (4 + 13 = 17). Of our three frontier choices C (11), E (17) and D (19), C has the least cost, so we expand it next.
Uniform Cost Search

Example continued: expanding C gives F (11 + 4 = 15) and H (11 + 3 = 14). Node H has the least cost thus far, so we expand it.
Uniform Cost Search

Example continued: expanding H gives N (14 + 1 = 15) and G2 (14 + 7 = 21). We have reached a goal, G2, but we still need to expand the other branches to see whether another goal can be reached at a lower cost.
Uniform Cost Search

Example continued: both F and N now have a cost of 15; we choose to expand the leftmost node, F, first, giving L (15 + 6 = 21) and M (15 + 3 = 18). We continue expanding until all remaining paths have cost greater than 21, the cost of G2, and then return G2 with path cost 21.
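For illustration, the same example can be run through the uniform_cost_search sketch above; the dictionary encoding of the tree is an assumption of this sketch.

example_graph = {
    "A": [("B", 4), ("C", 11)],
    "B": [("D", 15), ("E", 13)],
    "C": [("F", 4), ("H", 3)],
    "D": [("I", 12), ("J", 10)],
    "E": [("G1", 18), ("K", 6)],
    "F": [("L", 6), ("M", 3)],
    "H": [("N", 1), ("G2", 7)],
}

# Expected result: goal G2 is reached with path cost 21 via A -> C -> H -> G2.
print(uniform_cost_search(example_graph, "A", {"G1", "G2"}))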
Depth First Search

Simple idea:

Depth first search generates nodes and compares them with the goal along the largest depth of the tree, and moves up to the parent of the last visited node only when no further node can be generated below the last visited node.

After moving up to the parent, the algorithm attempts to generate a new offspring of the parent node.

This principle is applied recursively to each node of the tree in a depth first search.
Depth First Search
Algorithm

Begin
1. Push the starting node onto the stack, pointed to by the stack-top;
2. While the stack is not empty do
   Begin
       Pop the stack to get the stack-top element;
       IF the stack-top element = GOAL,
           return SUCCESS and stop;
       ELSE
           push the children of the stack-top element, in any order, onto the stack;
   End While
End

Figure: depth first search on a tree, where the node number denotes the order of visiting that node (the root is node 1 at depth 0; the subtree below node 3 is explored down to depth 3 before node 8 and its descendants, nodes 9-12, are visited).
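A Python sketch of the same stack-based procedure; as in the BFS sketch, successors and is_goal are assumed functions supplied by the problem.

def depth_first_search(start, successors, is_goal):
    """Explore along the deepest branch first, backtracking when a branch is exhausted."""
    stack = [start]                       # step 1: push the starting node
    visited = set()                       # guard against revisiting states (not needed on a tree)
    while stack:                          # step 2: while the stack is not empty
        node = stack.pop()                # pop the stack to get the stack-top element
        if is_goal(node):
            return node                   # SUCCESS
        if node in visited:
            continue
        visited.add(node)
        for child in successors(node):    # push the children onto the stack
            if child not in visited:
                stack.append(child)
    return None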
Depth First Search

Time complexity

Let m be the maximum depth of the search tree. In the worst case the solution may exist at depth m.

The root has b successors, each node at the next level again has b successors (b^2 in total), and so on.

Worst case: expand all nodes except the last one at depth m.

Total number of nodes generated: b + b^2 + b^3 + ... + b^m = O(b^m).
Depth First Search

Space complexity

DFS needs to store only a single path from the root node to a leaf node, along with the remaining unexpanded sibling nodes of each node on the path.

Total number of nodes in memory: 1 + b + b + b + ... + b (m times) = O(b·m), i.e. linear in the maximum depth.

Optimality

DFS expands the deepest node first, so it explores the entire left subtree even if the right subtree contains a goal node at level 2 or 3. Thus DFS does not always give an optimal solution.
Depth Limited Search

Nodes at a certain depth limit are treated as if they have no successors.

It imposes a maximum limit on the depth of the search, which solves the problem of infinite paths in Depth First Search.

Let L be the depth limit and d the depth of the shallowest solution.

If L < d, i.e. the shallowest goal is beyond the depth limit, then the algorithm is incomplete.

If L > d, then the algorithm is non-optimal.
Depth Limited Search

Time complexity: O(b^L)

Space complexity: O(b·L)

Depth First Search can be viewed as a special case of depth limited search with L = ∞.
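A recursive Python sketch of depth limited search; as before, successors and is_goal are assumed callables, and the special return value "cutoff" (a convention of this sketch) signals that the limit was hit without finding the goal.

def depth_limited_search(node, successors, is_goal, limit):
    """DFS that treats nodes at depth `limit` as if they had no successors.
    Returns the goal node, None (no goal in this subtree), or "cutoff"."""
    if is_goal(node):
        return node
    if limit == 0:
        return "cutoff"                   # depth limit reached: pretend there are no successors
    cutoff_occurred = False
    for child in successors(node):
        result = depth_limited_search(child, successors, is_goal, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return result
    return "cutoff" if cutoff_occurred else None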
Iterative Deepening Depth First Search
Algorithm

Begin
1. Set the current depth cut-off = 1;
2. Put the initial node into a stack, pointed to by the stack-top;
3. While the stack is not empty and the depth is within the given depth cut-off do
   Begin
       Pop the stack to get the stack-top element;
       IF the stack-top element = GOAL,
           return SUCCESS and stop;
       ELSE
           push the children of the stack-top element, in any order, onto the stack;
   End While
4. Increment the depth cut-off by 1 and repeat from step 2.
End
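A Python sketch of iterative deepening built on the depth_limited_search sketch above; the upper bound max_depth (used only to keep the loop finite) is an assumption of this sketch.

def iterative_deepening_search(start, successors, is_goal, max_depth=50):
    """Run depth limited search with cut-off 1, 2, 3, ... until the goal is found."""
    for limit in range(1, max_depth + 1):            # steps 1 and 4: increasing cut-off
        result = depth_limited_search(start, successors, is_goal, limit)
        if result != "cutoff":                       # either the goal or a definite failure
            return result
    return None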
Iterative Deepening Depth First Search

Iterative Deepening Search evaluation:

Complete: Yes (because it has no infinite paths).

Time complexity:

The algorithm seems costly due to the repeated generation of certain states.

Node generation: nodes at level d are generated once, nodes at level d-1 twice, nodes at level d-2 three times, ..., nodes at level 2 are generated d-1 times, and nodes at level 1 are generated d times.

Total number of nodes generated: d·b + (d-1)·b^2 + (d-2)·b^3 + ... + 1·b^d = O(b^d).
Iterative Deepening Depth First Search

Iterative Deepening Search evaluation:

Space complexity:

It needs to store only a single path from the root node to a leaf node, along with the remaining unexpanded sibling nodes of each node on the path.

Total number of nodes in memory: 1 + b + b + b + ... + b (d times) = O(b·d).

Optimality:

Yes, if the path cost is a non-decreasing function of the depth of the node.
Bidirectional Search

Search from both sides, i.e. two graph searches are needed:

One from the initial state.
Another from the final (goal) state.

At each stage, check whether the nodes generated by one search have also been generated by the other, i.e. whether the two frontiers meet in the middle.

If they meet, concatenate the two paths. That is the solution.
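A compact Python sketch of bidirectional breadth first search; the graph is assumed to be an undirected adjacency dict mapping each node to a list of neighbours (a representation chosen for this sketch).

from collections import deque

def bidirectional_search(graph, start, goal):
    """Run two breadth first searches, from start and from goal, until the frontiers meet.
    Returns the concatenated path, or None if start and goal are not connected."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}    # parent links of the two searches
    frontier_f, frontier_b = deque([start]), deque([goal])

    def build_path(meeting):
        # Walk back to the start, reverse, then walk forward to the goal, and concatenate.
        path, node = [], meeting
        while node is not None:
            path.append(node)
            node = parents_f[node]
        path.reverse()
        node = parents_b[meeting]
        while node is not None:
            path.append(node)
            node = parents_b[node]
        return path

    while frontier_f and frontier_b:
        # Expand one level of the forward search.
        for _ in range(len(frontier_f)):
            node = frontier_f.popleft()
            for nbr in graph.get(node, []):
                if nbr not in parents_f:
                    parents_f[nbr] = node
                    if nbr in parents_b:          # the two searches meet here
                        return build_path(nbr)
                    frontier_f.append(nbr)
        # Expand one level of the backward search.
        for _ in range(len(frontier_b)):
            node = frontier_b.popleft()
            for nbr in graph.get(node, []):
                if nbr not in parents_b:
                    parents_b[nbr] = node
                    if nbr in parents_f:
                        return build_path(nbr)
                    frontier_b.append(nbr)
    return None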
Bidirectional Search

Completeness: Yes.

Optimality: Yes (if done with the correct strategy, e.g. Breadth First Search on both sides).

Time complexity: O(b^(d/2))

Space complexity: O(b^(d/2))
Heuristic Search

"Heuristic" stands for "rule of thumb", i.e. a rule that works successfully in many cases but whose success is not guaranteed.

Nodes are expanded by selecting the more promising ones, where these nodes are identified by measuring their strength relative to their competing counterparts with the help of specialized intuitive functions, called heuristic functions.

Heuristic function h(n): the estimated cost of the cheapest path from node n to a goal node.
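As a concrete illustration (not taken from the slides), a common heuristic for the 8-puzzle is the number of misplaced tiles; it estimates, for any board n, how far n is from the goal board. The tuple board representation below is an assumption of this sketch.

def misplaced_tiles(state, goal):
    """h(n) for the 8-puzzle: count tiles that are not in their goal position.
    Boards are 9-element tuples, with 0 standing for the blank."""
    return sum(1 for tile, target in zip(state, goal)
               if tile != 0 and tile != target)

# Example: tiles 7 and 8 are out of place, so h(n) = 2.
start = (1, 2, 3, 4, 5, 6, 8, 7, 0)
goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
print(misplaced_tiles(start, goal))   # prints 2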
Heuristic Search

Heuristic search is generally employed for:

Forward reasoning, and
Backward reasoning.

When a forward type of search algorithm is realised with heuristic functions, it is generally called "heuristic search for OR graphs" or a "Best First Search algorithm".
Best First Search

Start with a promising state and generate all its offspring.

The performance (fitness) of each of these nodes is then examined, and the most promising node, based on its fitness, is selected for expansion; the node with the lowest value is expanded first.

The most promising node is expanded and the fitness of all its newborn children is measured.

Now, instead of selecting only from the newly generated children, all the nodes having no children (the fringe nodes) are examined, and the most promising of these fringe nodes is selected for expansion.

Two variants: Greedy Best First Search and A* Search.
Greedy Best First Search
Algorithm

1. Begin
   Identify the possible starting states and measure the distance (f) of their closeness to the goal; put them in a list L;
2. While L is not empty do
   Begin
       A. Identify the node n from L that has the minimum f; if there exists more than one node with minimum f, select any one of them (say, n) arbitrarily;
       B. If n is the goal,
              then return n along with the path from the starting node, and exit;
          Else remove n from L and add all the children of n to the list L, with their labeled paths from the starting node;
   End While;
End
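A Python sketch of greedy best first search, keeping the list L as a heap ordered by the heuristic alone (f(n) = h(n)). Here graph is assumed to be a dict of (neighbour, step_cost) lists and h a dict of heuristic values, matching the road-map example further below; both representations are assumptions of this sketch.

import heapq
from itertools import count

def greedy_best_first_search(graph, h, start, goal):
    """Always expand the fringe node whose heuristic value h(n) is smallest."""
    tie = count()
    open_list = [(h[start], next(tie), start, [start])]   # the list L, ordered by f = h
    visited = set()
    while open_list:
        _, _, node, path = heapq.heappop(open_list)       # node n with minimum f
        if node == goal:
            return path                                   # return n with its path
        if node in visited:
            continue
        visited.add(node)
        for child, _step_cost in graph.get(node, []):     # step costs are ignored by greedy search
            if child not in visited:
                heapq.heappush(open_list, (h[child], next(tie), child, path + [child]))
    return None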
Greedy Best First Search

Example: given the following graph of places, starting at Kupondole (S), the problem is to reach Patan (G).

Figure: a road map with step costs on the edges; the edges used in the solution are Kupondole-Thapathali (10), Kupondole-Sanepa (30), Sanepa-Bakhundole (50) and Bakhundole-Patan (60). The map also contains Hanuman Sthan, Kandevta, Pulchowk and Kalimati.

Straight line distance to Patan:
Kupondole 500, Patan 0, Thapathali 600, Hanuman Sthan 340, Kandevta 200, Pulchowk 100, Sanepa 140, Kalimati 700, Bakhundole 50.
Greedy Best First Search

Solution: start at Kupondole (h = 500). Its children are Thapathali (h = 600) and Sanepa (h = 140); Sanepa is the most promising, so it is expanded, giving Kupondole (h = 500) and Bakhundole (h = 50). Bakhundole has the least h among the fringe nodes, so it is expanded, giving Sanepa (h = 140) and Patan (h = 0). Patan is the goal, so the search stops with the path Kupondole -> Sanepa -> Bakhundole -> Patan.
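The same trace can be reproduced with the greedy_best_first_search sketch above; the dictionary encoding below is an assumption of this sketch and includes only the part of the map recoverable from the slide.

road_map = {
    "Kupondole":  [("Thapathali", 10), ("Sanepa", 30)],
    "Thapathali": [("Kupondole", 10)],
    "Sanepa":     [("Kupondole", 30), ("Bakhundole", 50)],
    "Bakhundole": [("Sanepa", 50), ("Patan", 60)],
    "Patan":      [("Bakhundole", 60)],
}
straight_line = {"Kupondole": 500, "Thapathali": 600, "Sanepa": 140,
                 "Bakhundole": 50, "Patan": 0}

# Expected path: Kupondole -> Sanepa -> Bakhundole -> Patan.
print(greedy_best_first_search(road_map, straight_line, "Kupondole", "Patan"))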
A* Search

A* evaluates nodes using f(n) = g(n) + h(n), the path cost so far plus the estimated cost to the goal.

Solution to the same problem: start at Kupondole with f = 0 + 500 = 500. Its children are Thapathali (f = 10 + 600 = 610) and Sanepa (f = 30 + 140 = 170). Sanepa has the least f, so it is expanded, giving Kupondole (f = 60 + 500 = 560) and Bakhundole (f = 80 + 50 = 130). Bakhundole is expanded next, giving Sanepa (f = 130 + 140 = 270) and Patan (f = 140 + 0 = 140). Patan now has the least f on the fringe and is the goal, so A* returns the path Kupondole -> Sanepa -> Bakhundole -> Patan with cost 140.

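A Python sketch of A* using f = g + h, reusing the road_map and straight_line dictionaries defined above (both of which are assumptions of these sketches).

import heapq
from itertools import count

def a_star_search(graph, h, start, goal):
    """Expand the fringe node with the smallest f(n) = g(n) + h(n)."""
    tie = count()
    open_list = [(h[start], next(tie), 0, start, [start])]    # (f, tie, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, _, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g                    # optimal if h never overestimates (admissible h)
        for child, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(open_list,
                               (new_g + h[child], next(tie), new_g, child, path + [child]))
    return None

# Expected result: (['Kupondole', 'Sanepa', 'Bakhundole', 'Patan'], 140)
print(a_star_search(road_map, straight_line, "Kupondole", "Patan"))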
Homework

Use Best First Search and A* Search to reach Bucharest from Lugoj.
Hill Climbing Search

Employ a function f(x) that gives an estimate of the distance of the goal from node x.

After f(x) is evaluated at the possible initial nodes x, the nodes are sorted by their functional values and pushed onto a stack so that the stack-top element has the least f value.

The stack-top element is popped and compared with the goal. If it is not the goal, it is expanded and f is measured for each of its children; the children are sorted by their f values and pushed onto the stack. If the stack-top element is the goal, the algorithm exits; otherwise the process continues until the stack becomes empty. Pushing the sorted nodes onto the stack adds a depth first flavour to the algorithm.
Hill Climbing Search

Iteratively maximize the value of the current state by replacing it with the successor state that has the highest value, as long as possible.

In hill climbing the basic idea is to always head towards a state which is better than the current one. So, if you are at town A and you can get to town B and town C (and your target is town D), then you should make a move only if town B or C appears nearer to town D than town A does.
Hill Climbing Search

Algorithm

Begin
1. Identify the possible starting states and measure the distance (f) of their closeness to the goal node; push them onto a stack according to the ascending order of their f;
2. Repeat
       Pop the stack to get the stack-top element;
       If the stack-top element is the goal, announce it and exit;
       Else push its children onto the stack in the ascending order of their f values;
   Until the stack is empty;
End

Figure: moving along a ridge in two steps (by two successive operators) in hill climbing.
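A sketch of the "iteratively maximize" view of hill climbing (steepest ascent) rather than the stack-based variant above; value(state) and successors(state) are assumed callables supplied by the problem, and a higher value is taken to be better.

def hill_climbing(start, successors, value):
    """Repeatedly move to the best-valued successor; stop when no successor improves
    on the current state (which may be only a local maximum, a plateau or a ridge)."""
    current = start
    while True:
        neighbours = successors(current)
        if not neighbours:
            return current
        best = max(neighbours, key=value)      # steepest ascent: pick the best neighbour
        if value(best) <= value(current):      # no neighbour is better: stop here
            return current
        current = best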
Hill Climbing Search

Example: Block World Problem

Heuristic function h1(n): add 1 for every block that is resting on the correct block; subtract 1 for every block that is resting on a wrong block.

Figure: the start state, the goal state, and the four successor states (a)-(d) of the start state, each labelled with its h1 value.

According to the given heuristic function, the heuristic values of the different states are as indicated in the figure. State (a) has the maximum value of the heuristic function, so (a) is chosen for further exploration.
Hill Climbing Search

Figure: the successors a11-a14 of state (a), each labelled with its h1 value.

All of these states have the same value of the heuristic function. Suppose we choose a13 for further exploration; then we get the states shown on the next slide.
Hill Climbing Search

Figure: the successors a131-a133 of state (a13), each labelled with its h1 value.

State (a132) has the maximum value of the heuristic function, so (a132) would be chosen for further exploration. But by inspection, (a132) would again generate the same state as before (shown on the previous slide), i.e. with h1 we are stuck in a loop without reaching the goal.
Hill Climbing Search

Heuristic function h2(n): add 1 for every block in a correct support structure that the block is sitting on; subtract 1 for every block in a wrong support structure that the block is sitting on.

Figure: the start state, the goal state, and the four successor states (a)-(d), each labelled with its h2 value.

According to this heuristic function, the heuristic values of the different states are as indicated in the figure. State (b) has the maximum value of the heuristic function, so (b) is chosen for further exploration.
Hill Climbing Search

Figure: the successors b11-b14 of state (b), each labelled with its h2 value.

State (b12) has the maximum value of the heuristic function, so (b12) is chosen for further exploration.
Hill Climbing Search

Figure: the successors b121-b124 of state (b12), each labelled with its h2 value.

State (b121) has the maximum value of the heuristic function and it also matches the goal state, so the algorithm stops. Thus, with heuristic h2, hill climbing reaches the goal.
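For illustration, the two heuristics can be coded for a blocks-world state represented as a list of stacks listed bottom-to-top; this representation, and the small example state at the end, are assumptions of this sketch rather than the exact configurations drawn on the slides.

def below(state, block):
    """Return the blocks beneath `block` in its stack (bottom to top), or [] if it is on the table."""
    for stack in state:
        if block in stack:
            return stack[:stack.index(block)]
    return []

def h1(state, goal):
    """+1 for every block resting on the correct block (or correctly on the table),
    -1 for every block resting on a wrong block."""
    score = 0
    for stack in state:
        for i, block in enumerate(stack):
            support = stack[i - 1] if i > 0 else "table"
            goal_below = below(goal, block)
            goal_support = goal_below[-1] if goal_below else "table"
            score += 1 if support == goal_support else -1
    return score

def h2(state, goal):
    """+1 for every block in a correct support structure underneath a block,
    -1 for every block in a wrong support structure underneath it."""
    score = 0
    for stack in state:
        for i, block in enumerate(stack):
            score += i if below(state, block) == below(goal, block) else -i
    return score

# Hypothetical example: the goal is the stack C-B-A plus D on E.
goal_state  = [["C", "B", "A"], ["E", "D"]]
start_state = [["C", "B"], ["E", "D", "A"]]      # A misplaced on top of D
print(h1(start_state, goal_state), h2(start_state, goal_state))   # prints 3 0 for this example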
Hill Climbing Search

Problems of Hill Climbing Search

Trapping at a local maximum (a foothill): the search gets trapped at a state which is better than its neighbouring states but not better than some other states further away. When such a local maximum occurs almost within sight of a solution, it is called a foothill.

Reaching a plateau: a flat area of the search space in which a whole set of neighbouring states have the same value. It is very difficult to determine, by local comparisons alone, the best direction in which to move.

Traversal along a ridge: a ridge on many occasions leads to a local maximum. However, moving along the ridge is not possible in a single step due to the non-availability of appropriate operators; a multiple-step move is required to solve this problem.
THANKS
