Unit1-Part2
3 SOLVING PROBLEMS BY SEARCHING
• Problem Solving Agent
• Example Problems
• Search for solutions
• Uninformed search
• Informed (Heuristic) Search
Solving problems by Searching
• To study how an agent can find a sequence of
actions to achieve its goals when no single
action will do.
– Problem-solving agent: a goal-based agent
that uses an atomic representation
(the state of the world as a whole)
– Planning agent: a goal-based agent
that uses more advanced factored or structured
representations
AI 3
Problem solving agents
• In problem solving,
– Goal formulation - based on the current situation
and the agent’s performance measure is the first
step.
– Problem formulation - is the process of deciding
what actions and states to consider, given a goal.
Terms
• Unknown environment
– The agent has no additional information.
– It then has no choice but to try one of the
actions at random. This is a sad situation.
• An agent with several immediate options of
unknown value can decide what to do by first
examining future actions that eventually lead
to states of known value.
• Environment is observable - the agent always knows
the current state.
• The environment is discrete- at any given state there
are only finitely many actions to choose from.
• The environment is deterministic- each action has
exactly one outcome.
• The process of looking for a sequence of actions that
reaches the goal is called search.
• Search algorithm takes a problem as input and returns
a solution in the form of an action sequence.
• Once a solution is found, the actions it recommends
can be carried out. This is called the execution phase.
• An agent that is executing a solution sequence
ignores its percepts when choosing an action,
because it knows in advance what they will be.
• An agent that carries out its plans with its eyes
closed, so to speak, must be quite certain of what is
going on. Control theorists call this an open-loop
system, because ignoring the percepts breaks the
loop between agent and environment.
Searching is the process of finding a sequence of steps
needed to solve a problem.
FORMULATE → SEARCH → EXECUTE
• Imagine an agent in the city of Arad, Romania, enjoying
a touring holiday. The agent’s performance measure
contains many factors: it wants to improve its suntan,
improve its Romanian, take in the sights, enjoy the
nightlife, avoid hangovers, and so on. The decision
problem is a complex one involving many tradeoffs and
careful reading of guidebooks.
Part 1. Components of well-defined Problem &
Solutions
1. Initial State The initial state that the agent starts in.
For example, the initial state for our agent in Romania might
be described as In(Arad).(marked in yellow)
2. A description of the possible actions available to the agent.
Given a particular state s, ACTIONS(s) returns the set of actions
that can be executed in s.
For example, from the state In(Arad), the applicable actions
are {Go(Sibiu), Go(Timisoara), Go(Zerind)}. (marked in orange)
3. A description of what each action does
The formal name for this is the transition model, specified by
a function RESULT(s, a) that returns the state that results from
doing action a in state s; any such state is called a successor.
– RESULT(In(Arad), Go(Zerind)) = In(Zerind).
– The initial state, actions, and transition model implicitly define
the state space of the problem.
– The state space forms a directed network or graph: nodes are
states and links are actions.
4. The goal test, which determines whether a given state is a goal
state.
Sometimes there is an explicit set of possible goal states, and the
test simply checks whether the given state is one of them.
5. A path cost function that assigns a numeric cost to each path. The
problem-solving agent chooses a cost function that reflects its own
performance measure.
– Step cost - The step cost of taking action a in state s to reach
state s’ is denoted by c(s, a, s’ ).
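The five components above can be collected into one object. The following is a minimal sketch, not code from the slides, using a small assumed fragment of the Romania road map from the running example (distances in km):

```python
# Assumed fragment of the Romania road map, for illustration only.
ROADS = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Zerind": {"Arad": 75, "Oradea": 71},
    "Oradea": {"Zerind": 71, "Sibiu": 151},
    "Sibiu": {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
}

class RouteProblem:
    """Sketch of a well-defined problem: initial state, actions,
    transition model, goal test, and step cost."""
    def __init__(self, initial, goal, roads):
        self.initial, self.goal, self.roads = initial, goal, roads

    def actions(self, s):                 # ACTIONS(s)
        return [f"Go({city})" for city in self.roads[s]]

    def result(self, s, a):               # RESULT(s, a): the transition model
        return a[3:-1]                    # "Go(Sibiu)" -> "Sibiu"

    def goal_test(self, s):               # the goal test
        return s == self.goal

    def step_cost(self, s, a, s2):        # c(s, a, s')
        return self.roads[s][s2]

p = RouteProblem("Arad", "Bucharest", ROADS)
print(p.actions("Arad"))                # ['Go(Sibiu)', 'Go(Timisoara)', 'Go(Zerind)']
print(p.result("Arad", "Go(Zerind)"))   # Zerind
```

The class name `RouteProblem` and the map fragment are assumptions for illustration; the method names mirror ACTIONS, RESULT, the goal test, and c(s, a, s') from the slides.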
Simple problem-solving agent
Formulating Problems
• A solution to a problem is an action sequence that
leads from the initial state to a goal state.
• Solution quality is measured by the path cost function
• Optimal solution has the lowest path cost among all
solutions.
• Compare the simple state description we have chosen, In(Arad), to an actual
cross country trip, where the state of the world includes so many things: the
traveling companions, the current radio program, the scenery out of the
window, the proximity of law enforcement officers, the distance to the next
rest stop, the condition of the road, the weather, and so on.
• All these considerations are left out of our state descriptions because they
are irrelevant to the problem of finding a route to Bucharest.
• The process of removing detail from a representation is
called abstraction.
Abstract the action too!
• In addition to abstracting the state description, we
must abstract the actions themselves.
• An action such as driving from Arad to Sibiu
consumes fuel, generates pollution, and changes the
agent (as they say, travel is broadening).
• The choice of a good abstraction thus involves
removing as much detail as possible while retaining
validity and ensuring that the abstract actions are
easy to carry out.
Class 6
Part 2. Example Problems
• Toy problem
• Real world problem
Example 1: State Space - Vacuum world
Standard Formulation
• States: The state is determined by both the agent location and the
dirt locations. The agent is in one of two locations, each of which
might or might not contain dirt. Thus, there are 2 × 2^2 = 8 possible
world states. A larger environment with n locations has n · 2^n
states.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three
actions: Left, Right, and Suck. Larger environments might also
include Up and Down.
• Transition model: The actions have their expected effects, except
that moving Left in the leftmost square, moving Right in the
rightmost square, and Sucking in a clean square have no effect.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of
steps in the path.
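The two-location vacuum world is small enough to enumerate directly. A sketch (an illustration, not code from the slides), where a state is a pair (agent location, set of dirty squares):

```python
# Sketch of the vacuum-world formulation over locations A and B.
LOCATIONS = ("A", "B")
STATES = [(loc, frozenset(d))
          for loc in LOCATIONS
          for d in (set(), {"A"}, {"B"}, {"A", "B"})]

def result(state, action):
    """Transition model: Left/Right move the agent (no effect at the
    edge); Suck cleans the current square (no effect if already clean)."""
    loc, dirt = state
    if action == "Left":
        return ("A", dirt)
    if action == "Right":
        return ("B", dirt)
    if action == "Suck":
        return (loc, dirt - {loc})
    raise ValueError(action)

def goal_test(state):
    return not state[1]   # goal: all squares clean

print(len(STATES))        # 2 x 2^2 = 8 world states
```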
Example 2: The 8-puzzle
Standard Formulation
• States: A state description specifies the location of each of the
eight tiles and the blank in one of the nine squares.
• Initial state: Any state can be designated as the initial state. Note
that any given goal can be reached from exactly half of the
possible initial states
• Actions: The simplest formulation defines the actions as
movements of the blank space Left, Right, Up, or Down. Different
subsets of these are possible depending on where the blank is.
• Transition model: Given a state and action, this returns the
resulting state; for example, if we apply Left to the start state the
resulting state has the 5 and the blank switched.
• Goal test: This checks whether the state matches the goal
configuration (Other goal configurations are possible.)
• Path cost: Each step costs 1, so the path cost is the number of
steps in the path.
Finding the shortest solution of the n-puzzle family is known to be an NP-complete problem.
Example 3: 8-queens problem
Standard Formulation
• States: Any arrangement of 0 to 8 queens on the board is a
state.
• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a queen added to
the specified square.
• Goal test: 8 queens are on the board, none attacked.
• In this formulation, we have 64 × 63 × ⋯ × 57 ≈ 1.8 × 10^14
possible sequences to investigate.
A better formulation would prohibit placing a queen in any square
that is already attacked:
• States: All possible arrangements of n queens (0 ≤ n ≤ 8), one
per column in the leftmost n columns, with no queen attacking
another.
• Actions: Add a queen to any square in the leftmost empty
column such that it is not attacked by any other queen.
• This formulation reduces the 8-queens state space from
1.8 × 10^14 to just 2,057, and solutions are easy to find.
• On the other hand, for 100 queens the reduction is from
roughly 10^400 states to about 10^52 states.
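The figure of 2,057 states can be checked directly. The sketch below (an illustration, not from the slides) counts every non-attacking placement of queens in the leftmost columns, including the empty board:

```python
def extend(placement, n=8):
    """Yield the safe rows for a queen in the next column, given queens
    already placed one per column (placement[i] = row of column i)."""
    col = len(placement)
    if col == n:
        return
    for row in range(n):
        # Safe if it shares no row and no diagonal with earlier queens.
        if all(row != r and abs(row - r) != col - c
               for c, r in enumerate(placement)):
            yield row

def count_states(placement=()):
    """Count all states of the incremental formulation reachable from
    the empty board (the empty board itself counts as one state)."""
    return 1 + sum(count_states(placement + (row,))
                   for row in extend(placement))

print(count_states())   # 2057 states in the improved formulation
```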
Example 4: Knuth's "starting with 4" problem
• Starting with the number 4, a sequence of factorial, square
root, and floor operations can reach any desired positive
integer.
• There is no bound on how large a number
might be constructed in the process of
reaching a given target.
– For example, the number
620,448,401,733,239,439,360,000 (that is, (4!)!) is generated in
the expression for 5, so the state space for this
problem is infinite.
Airline travel planning (a route-finding problem)
• States: Each state obviously includes a location (e.g., an
airport) and the current time. Furthermore, because the cost
of an action may depend on earlier flight segments and their
fare bases, the state must record extra information about
these "historical" aspects.
• Initial state: This is specified by the user's query.
• Actions: Take any flight from the current location, in any
seat class, leaving after the current time, leaving enough
time for within-airport transfer if needed.
• Transition model: The state resulting from taking a flight
will have the flight's destination as the current location and
the flight's arrival time as the current time.
• Goal test: Are we at the final destination specified by the
user?
• Path cost: This depends on monetary cost, waiting time,
flight time, customs and immigration procedures, seat
quality, and so on.
Other examples:
• Touring problems
• Traveling salesperson problem (TSP)
• VLSI layout
• Robot navigation
• Automatic assembly sequencing
• Protein design: finding the right sequence of amino acids
that will fold into a 3-D protein
Part 3. Searching for solutions
• A solution is an action sequence
• Search algorithms work by considering various possible action sequences
• Possible action sequences starting at the initial state form a search tree
with the initial state at the root
• The branches are actions and the nodes correspond to states in the state
space of the problem
• The root node of the tree corresponds to the initial state
• The first step is to test whether this is a goal state. Then we need to
consider taking various actions.
Difference between State Space and Search Tree
State Space Depicting Initial & Goal States
Search of State Space (figure sequence)
Search of State Space & Search Tree
• Expanding the current state means applying each legal action
to it, thereby generating a new set of states.
• Example:
– parent node: In(Arad)
– child nodes: In(Sibiu), In(Timisoara), and In(Zerind). Each of
these three nodes is a leaf node, that is, a node with no
children in the tree so far.
• The set of all leaf nodes available for expansion at any given
point is called the frontier (open list).
• The process of choosing and expanding nodes in the frontier
continues until either a solution is found or there are no
more states to be expanded.
Partial search trees for finding a route from Arad to Bucharest. Nodes that have been
expanded are shaded; nodes that have been generated but not yet expanded are
outlined in bold; nodes that have not yet been generated are shown in faint dashed
lines.
• Search algorithms all share this basic structure; they vary primarily
according to how they choose which state to expand next—the so-
called search strategy.
• A repeated state in the search tree can be generated by a loopy
path.
• Such paths should be avoided; following them can make a
solvable problem unsolvable.
• Loopy paths are a special case of the more general concept of
redundant paths, exist whenever there is more than one way to
get from one state to another.
• For example, the paths Arad–Sibiu (140 km long) and Arad–Zerind–
Oradea–Sibiu (297 km long) are redundant.
• The way to avoid exploring redundant paths is to remember
where one has been.
Class 7
Search Algorithms
3.1 Infrastructure for search algorithms
Search algorithms require a data structure to keep track of the search
tree that is being constructed.
For each node n of the tree, we will have a structure that contains
the following four components:
• n.STATE: the state in the state space to which the node corresponds;
• n.PARENT: the node in the search tree that generated this node;
• n.ACTION: the action that was applied to the parent to generate the
node;
• n.PATH-COST: the cost, traditionally denoted by g(n), of the path
from the initial state to the node, as indicated by the parent pointers.
Given the components for a parent node, it is easy to see how to
compute the necessary components for a child node.
The function CHILD-NODE takes a parent node and an action and
returns the resulting child node:
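A sketch of the node structure and the CHILD-NODE function (field names follow the slides; a Problem object with `result` and `step_cost` methods is assumed):

```python
class Node:
    """Search-tree node: a state plus bookkeeping fields."""
    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state = state          # n.STATE
        self.parent = parent        # n.PARENT
        self.action = action        # n.ACTION
        self.path_cost = path_cost  # n.PATH-COST, i.e., g(n)

def child_node(problem, parent, action):
    """CHILD-NODE: apply `action` in the parent's state and compute
    the child's components from the parent's."""
    state = problem.result(parent.state, action)
    cost = parent.path_cost + problem.step_cost(parent.state, action, state)
    return Node(state, parent, action, cost)

def solution(node):
    """SOLUTION: follow parent pointers back to the root and return
    the sequence of actions along the path."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```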
Node Data structure
Nodes are the data structures from which the search tree is constructed.
Each has a parent, a state, and various bookkeeping fields. Arrows point
from child to parent.
● The PARENT pointers string the nodes together into a tree structure.
These pointers also allow the solution path to be extracted when a goal
node is found;
● The SOLUTION function returns the sequence of actions obtained by
following parent pointers back to the root.
Distinguish between nodes and states
• A node is a bookkeeping data structure used to represent the search
tree.
• A state corresponds to a configuration of the world.
• Thus, nodes are on particular paths, as defined by PARENT pointers,
whereas states are not.
• The frontier needs to be stored in such a way that the search
algorithm can easily choose the next node to expand according to its
preferred strategy.
• The appropriate data structure for this is a queue.
• The operations on a queue are as follows:
• EMPTY?(queue) returns true only if there are no more elements in
the queue.
• POP(queue) removes the first element of the queue and returns it.
• INSERT(element , queue) inserts an element and returns the
resulting queue.
• Queues are characterized by the order in which they store the
inserted nodes.
• Three common variants are
– first-in, first-out or FIFO queue, which pops the oldest
element of the queue;
– last-in, first-out or LIFO queue (also known as a stack), which
pops the newest element
– priority queue, which pops the element of the queue with the
highest priority according to some ordering function.
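These three queue disciplines can be sketched with the Python standard library (an illustration; the comments note how the slides' POP maps onto each call):

```python
from collections import deque
import heapq

fifo = deque()                 # FIFO queue, used by breadth-first search
fifo.append("a"); fifo.append("b")
assert fifo.popleft() == "a"   # POP returns the oldest element

lifo = []                      # LIFO queue (a stack), used by depth-first search
lifo.append("a"); lifo.append("b")
assert lifo.pop() == "b"       # POP returns the newest element

pq = []                        # priority queue, used by uniform-cost search
heapq.heappush(pq, (5, "a"))   # entries ordered by their first element
heapq.heappush(pq, (2, "b"))
assert heapq.heappop(pq) == (2, "b")   # POP returns the cheapest element
```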
3.2 Measuring problem-solving
performance
We will evaluate an algorithm’s performance in four ways:
• Completeness: Is the algorithm guaranteed to find a solution
when there is one?
• Optimality: Does the strategy find the optimal solution?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to perform the
search?
Time and Space Complexity?
Time and space complexity are measured in terms of:
• b: the branching factor, i.e., the maximum number of successors
of any node;
• d: the depth of the shallowest goal node;
• m: the maximum length of any path in the state space.
Basic Search Algorithms
Part 4: UNINFORMED SEARCH STRATEGIES
Uninformed search strategies:
• Breadth-first
• Depth-first
• Uniform-cost
• Depth-limited
• Iterative deepening
Examples. Properties.
3.4.1 Breadth-first search
• Root node is expanded first, then all the
successors of the root node are expanded next, then their
successors, and so on.
• It is an instance of the general graph-search algorithm in
which the shallowest unexpanded node is chosen for expansion.
• This is achieved by using a FIFO queue for the frontier.
• Breadth-first search always has the shallowest path to every node
on the frontier.
Breadth First Search (BFS)
Breadth First Search (BFS)
Main idea: Expand all nodes at depth i before expanding nodes at depth i + 1
(level-order traversal).
Implementation: Use a first-in, first-out (FIFO) queue; nodes visited first are
expanded first, so enqueue nodes in FIFO order.
• Complete? Yes (provided the branching factor b is finite).
• Optimal? Yes, if the path cost is a nondecreasing function of depth.
• Time complexity: O(b^d)
• Space complexity: O(b^d); note that every node in the fringe is kept in the queue.
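A runnable sketch of breadth-first graph search; the adjacency-list graph at the bottom is an assumption for illustration:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search with a FIFO frontier and an explored set.
    Returns the path from start to goal, or None if there is none."""
    if start == goal:
        return [start]
    frontier = deque([start])
    parent = {start: None}          # also serves as the explored set
    while frontier:
        s = frontier.popleft()      # shallowest unexpanded node
        for s2 in graph[s]:
            if s2 not in parent:
                parent[s2] = s
                if s2 == goal:      # goal test applied at generation time
                    path = [s2]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                frontier.append(s2)
    return None

# Hypothetical graph for illustration.
G = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": []}
print(bfs(G, "A", "E"))   # ['A', 'C', 'E']
```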
At each stage, the node to be expanded next is indicated by a marker.
Class 8
Uniform Cost Search (UCS)
[Figure sequence: step-by-step UCS expansion of a search tree. Edge labels are
step costs; bracketed values [x] = g(n), the path cost of node n. At each step
the cheapest frontier node (lowest g(n)) is expanded, until a goal state is
reached.]
• A portion of the Romania state space, selected to illustrate
uniform-cost search.
• the problem is to get from Sibiu to Bucharest. The successors of
Sibiu are Rimnicu Vilcea and Fagaras, with costs 80 and 99
respectively. The least-cost node, Rimnicu Vilcea, is expanded next,
adding Pitesti with cost 80+97 = 177.
• The least-cost node is now Fagaras, so it is expanded, adding
Bucharest with cost 99 + 211 = 310.
• Although a goal node has been generated, uniform-cost search
continues: the least-cost node is now Pitesti, and expanding it
adds a second path to Bucharest with cost 177 + 101 = 278.
• The algorithm checks whether this new path is better than the
old one; it is, so the old one is discarded. Bucharest, now with g-
cost 278, is selected for expansion and the solution is returned.
• Uniform-cost search is guided by path costs rather than depths, so
its complexity cannot easily be characterized in terms of b and d.
• Instead, let C* be the cost of the optimal solution,
and assume that every action costs at least ϵ. Then the algorithm’s
worst-case time and space complexity is O(b^(1+⌊C*/ϵ⌋)), which can be
much greater than b^d.
• When all step costs are equal, b^(1+⌊C*/ϵ⌋) is just b^(d+1).
Uniform Cost Search (UCS)
Main idea: Expand the cheapest node, where the cost is the path cost g(n).
Implementation:
Enqueue nodes in order of increasing path cost g(n); insert each new node at
the appropriate position in the queue so that we always dequeue the cheapest
node.
• Complete? Yes, provided every step cost is at least some positive ϵ.
• Optimal? Yes, under the same condition.
• Time complexity: O(b^(1+⌊C*/ϵ⌋)), which equals O(b^(d+1)) when all step
costs are equal.
• Space complexity: O(b^(1+⌊C*/ϵ⌋)); note that every node in the fringe is
kept in the queue.
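The Sibiu-to-Bucharest trace above can be reproduced with a priority-queue sketch; the sub-map costs come from the example, while the code itself is an illustration:

```python
import heapq

# Road costs from the uniform-cost search example.
G = {
    "Sibiu": {"Rimnicu Vilcea": 80, "Fagaras": 99},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Fagaras": {"Bucharest": 211},
    "Pitesti": {"Bucharest": 101},
    "Bucharest": {},
}

def ucs(graph, start, goal):
    """Uniform-cost search: expand the frontier node with the lowest
    g(n); the goal test is applied when a node is selected for
    expansion, not when it is generated."""
    frontier = [(0, start, [start])]        # (g(n), state, path)
    best_g = {}                             # states already expanded
    while frontier:
        g, s, path = heapq.heappop(frontier)
        if s in best_g:                     # a cheaper path to s was
            continue                        # already expanded; skip
        best_g[s] = g
        if s == goal:
            return g, path
        for s2, cost in graph[s].items():
            if s2 not in best_g:
                heapq.heappush(frontier, (g + cost, s2, path + [s2]))
    return None

print(ucs(G, "Sibiu", "Bucharest"))
# (278, ['Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

The worse path through Fagaras (g = 310) stays in the queue but is discarded when popped, matching the trace in the slides.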
Class 9
3.4.3 Depth-first search
• Depth-first search always expands the deepest node in the current
frontier of the search tree.
• The search proceeds immediately to the deepest level of the
search tree, where the nodes have no successors.
• As those nodes are expanded, they are dropped from the frontier,
so then the search “backs up” to the next deepest node that still
has unexplored successors.
• Whereas breadth-first search uses a FIFO queue, depth-first
search uses a LIFO queue.
Depth First Search (DFS)
The unexplored region is shown in light gray. Explored nodes with no
descendants in the frontier are removed from memory. Nodes at depth 3
have no successors, and M is the only goal node.
• depth-first tree search needs to store only a single path from the
root to a leaf node, along with the remaining unexpanded sibling
nodes for each node on the path. Once a node has been expanded,
it can be removed from memory as soon as all its descendants
have been fully explored.
• For a state space with branching factor b and maximum depth m,
depth-first search requires storage of only O(bm) nodes
• A variant of depth-first search called backtracking search uses still
less memory: only O(m) memory is needed rather than O(bm).
Depth First Search (DFS)
Main idea: Expand the node at the deepest level (breaking ties left to right).
Implementation: Use a last-in, first-out queue, i.e., a stack (LIFO); enqueue
nodes in LIFO order.
• Complete? No: it fails in infinite-depth spaces and in spaces with loops.
• Optimal? No.
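A sketch of depth-first search with an explicit LIFO stack; the graph is assumed for illustration, and paths are checked to avoid the loopy paths discussed earlier:

```python
def dfs(graph, start, goal, limit=50):
    """Depth-first search using a LIFO stack of partial paths.
    `limit` guards against unbounded descent."""
    stack = [[start]]                  # LIFO frontier of partial paths
    while stack:
        path = stack.pop()             # deepest (most recently pushed) path
        s = path[-1]
        if s == goal:
            return path
        if len(path) <= limit:
            # Push children in reverse so ties break left to right.
            for s2 in reversed(graph[s]):
                if s2 not in path:     # avoid loopy paths
                    stack.append(path + [s2])
    return None

# Hypothetical graph for illustration.
G = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
     "D": [], "E": ["F"], "F": []}
print(dfs(G, "A", "F"))   # ['A', 'B', 'E', 'F']
```

Note that the path found is not the shortest one (A-C-F exists), illustrating why DFS is not optimal.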
Depth-Limited Search (DLS)
Depth Bound = 3
Depth-Limited Search (DLS)
Given the following state space (tree search), give the
sequence of visited nodes when using DLS (Limit = 2):
Limit = 0: A
Limit = 1: B C D E
Limit = 2: F G H I J
(depth 3, beyond the limit): K L M N
Depth-Limited Search (DLS)
The DLS algorithm returns failure (no solution).
The reason is that the goal is beyond the limit (Limit = 2):
the goal depth is d = 4.
(Tree levels: A; B C D E; F G H I J at the limit; K L M N below it.)
Depth-Limited Search (DLS)
Implementation:
Enqueue nodes in LIFO (last-in, first-out) order, but limit the depth to L.
• Complete? No, if the shallowest goal lies deeper than L.
• Optimal? No.
Iterative deepening search, L = 0, 1, 2, 3 (figure sequence).
Iterative Deepening Search (IDS)
Key idea
Iterative deepening search (IDS) applies DLS
repeatedly with increasing depth limits. It terminates when
a solution is found or when no solution exists.
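DLS and IDS can be sketched together: recursive DLS distinguishes failure (no solution anywhere) from cutoff (the limit was hit), and IDS retries with limits 0, 1, 2, ... The graph below is assumed for illustration:

```python
CUTOFF = "cutoff"

def dls(graph, s, goal, limit, path=()):
    """Recursive depth-limited search. Returns a path, None (failure:
    no solution within reach), or CUTOFF (the limit was hit, so a
    deeper solution may still exist)."""
    if s == goal:
        return list(path) + [s]
    if limit == 0:
        return CUTOFF
    cutoff_occurred = False
    for s2 in graph[s]:
        r = dls(graph, s2, goal, limit - 1, path + (s,))
        if r == CUTOFF:
            cutoff_occurred = True
        elif r is not None:
            return r
    return CUTOFF if cutoff_occurred else None

def ids(graph, start, goal, max_depth=20):
    """Iterative deepening: apply DLS with limits 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        r = dls(graph, start, goal, limit)
        if r != CUTOFF:
            return r
    return None

# Hypothetical graph for illustration; the goal E is at depth 3.
G = {"A": ["B", "C"], "B": ["D"], "C": [], "D": ["E"], "E": []}
print(ids(G, "A", "E"))   # ['A', 'B', 'D', 'E']
```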
• States are generated multiple times, but it turns out this is not very
costly.
• In an iterative deepening search, the nodes on the bottom level
(depth d) are generated once, those on the next-to-bottom level
are generated twice, and so on, up to the children of the root,
which are generated d times. So the total number of nodes
generated in the worst case is
N(IDS) = (d)b + (d − 1)b^2 + · · · + (1)b^d,
which gives O(b^d) time complexity. For example, with b = 10 and
d = 5, N(IDS) = 123,450 whereas N(BFS) = 111,110.
• In general, iterative deepening is the preferred uninformed search
method when there is a large search space and the depth of the
solution is not known.
Four iterations of iterative deepening search on a binary tree.
3.4.7 Comparing uninformed search strategies