Lect5 10 Ai Sps Feb2025
▪Define the problem precisely – identify the initial (input) situations as well as the final situations that constitute an acceptable
solution to the problem.
▪Analyse the problem – identify a few important features that may affect the appropriateness of
various possible techniques for solving the problem.
▪Isolate and represent the task knowledge necessary to solve the problem.
▪Choose the best problem-solving technique(s) and apply them to the particular problem.
Problem Definition
• A problem is defined by its ‘elements’ and their ‘relations’. To provide a formal description of a
problem, we need to do the following:
◦ Define a state space that contains all the possible configurations of the relevant objects,
including some impossible ones.
◦ Specify one or more states that describe possible situations from which the problem-solving
process may start. These states are called initial states.
◦ Specify one or more states that would be acceptable solutions to the problem. These states
are called goal states.
◦ Specify a set of rules that describe the actions (operators) available. Each rule consists of two parts: a left
side, which serves as a pattern to be matched against the current situation, and a right side, which
describes the change to be made to the current situation to reflect the application of the rule.
• Example: playing chess, etc.
Defining the problem as a state space search : An example:
Water Jug Problem:
You are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any measuring markers on it. There is a
pump that can be used to fill the jugs with water.
How can you get exactly 2 gallons of water into the 4-gallon jug?
Production Rules (state (four, three) = gallons currently in the 4-gallon and 3-gallon jugs):
1. (four, three) if four < 4 → (4, three): fill four from the tap
2. (four, three) if three < 3 → (four, 3): fill three from the tap
3. (four, three) if four > 0 → (0, three): empty four down the drain
4. (four, three) if three > 0 → (four, 0): empty three down the drain
5. (four, three) if four + three ≤ 4 and three > 0 → (four + three, 0): pour all of three into four
6. (four, three) if four + three ≤ 3 and four > 0 → (0, four + three): pour all of four into three
7. (0, three) if three > 0 → (three, 0): empty three into four
8. (four, 0) if four > 0 → (0, four): empty four into three
9. (0, 2) → (2, 0): empty three into four
10. (2, 0) → (0, 2): empty four into three
11. (four, three) if four < 4 and three > 0 → (4, three − diff), where diff = 4 − four: pour water from three into four until four is full
12. (four, three) if three < 3 and four > 0 → (four − diff, 3), where diff = 3 − three: pour water from four into three until three is full
[A solution trace (columns: four, three, rule applied) appeared here in the original slide.]
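The rules above can be exercised with a small breadth-first solver. A minimal Python sketch, not from the slides: the state is the pair (four, three), and the combined "pour" moves below cover rules 5–8 and 11–12; the action labels are illustrative paraphrases.

```python
from collections import deque

def water_jug(capacity_a=4, capacity_b=3, target=2):
    """BFS over (four, three) states; returns a list of (state, action) steps."""
    def successors(state):
        a, b = state
        yield (capacity_a, b), "fill four from tap"
        yield (a, capacity_b), "fill three from tap"
        yield (0, b), "empty four into drain"
        yield (a, 0), "empty three into drain"
        pour = min(b, capacity_a - a)           # pour three -> four
        yield (a + pour, b - pour), "pour three into four"
        pour = min(a, capacity_b - b)           # pour four -> three
        yield (a - pour, b + pour), "pour four into three"

    start = (0, 0)
    frontier = deque([(start, [])])             # FIFO queue of (state, path)
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state[0] == target:                  # exactly `target` gallons in four
            return path
        for nxt, action in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [(nxt, action)]))
    return None

for state, action in water_jug():
    print(state, action)
```

Because BFS expands states level by level, the path it returns is a shortest solution (six moves for the 4/3-gallon, target-2 instance).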
Production systems provide appropriate structures to AI Program for performing and describing
search processes. A production system has four basic components as enumerated below.
• A set of rules, each consisting of a left side that determines the applicability of the rule and a
right side that describes the operation to be performed if the rule is applied.
• A database of current facts established during the process of inference. Some parts may be
permanent, while others may pertain only to the solution of the current problem. The information in
this database may be structured in any appropriate way.
• A control strategy that specifies the order in which the rules will be compared with the facts in the
database, and also specifies the way of resolving conflicts that arise when several rules match at
once.
• A rule-applying (firing) module.
Production System…
❑ The term production system is very general and encompasses a great many systems, including
descriptions of problem solvers (the chess player, the water-jug solver, etc.). It also encompasses a family of general
production-system interpreters, including:
• Basic production-system languages, such as OPS5 [Brownston et al., 1985] and ACT* [Anderson, 1983].
• Complex, often hybrid systems called expert system shells, which provide a complete environment for the
construction of knowledge-based expert systems.
• General problem-solving architectures like SOAR [Laird et al., 1987], a system based on a specific set of
cognitively motivated hypotheses about the nature of problem solving.
Search Strategy
• Uninformed Search: Breadth-First Search, Uniform-cost search, Depth-First Search, Iterative deepening search
• Informed Search: Greedy best-first search, A* Search
Review: Search problem formulation
oInitial state
oActions
oTransition model
oGoal state
oPath cost
Breadth-first search
Uniform-cost search
Depth-first search
Iterative deepening search
Breadth-first search
Expand shallowest unexpanded node
Implementation:
◦ fringe is a FIFO queue, i.e., new successors go at end
[tree diagram: children B and C at level 1, then D, E, F, G at level 2, expanded level by level]
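The FIFO-queue implementation described above can be sketched in a few lines of Python. The TREE dictionary is a hypothetical stand-in for the slide's diagram (a root A with children B, C and grandchildren D–G):

```python
from collections import deque

# Hypothetical tree: A's children are B and C; B's are D, E; C's are F, G.
TREE = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}

def bfs(tree, start, goal):
    """Expand the shallowest unexpanded node; the fringe is a FIFO queue."""
    fringe = deque([[start]])              # queue of paths
    while fringe:
        path = fringe.popleft()            # shallowest path first
        node = path[-1]
        if node == goal:
            return path
        for child in tree.get(node, []):   # new successors go at the end
            fringe.append(path + [child])
    return None

print(bfs(TREE, "A", "F"))  # shallow goals are found before deep ones
```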
Properties of breadth-first search
Complete?
Yes (if branching factor b is finite)
Optimal?
Yes – if cost = 1 per step
Time?
Number of nodes in a b-ary tree of depth d: O(b^d)
(d is the depth of the optimal solution)
Space?
O(b^d)
Depth-first search
Expand deepest unexpanded node
Implementation:
◦ fringe = LIFO queue, i.e., put successors at front
[tree diagram: nodes B, C and D, E, F, G, expanded branch by branch, deepest node first]
Properties of depth-first search
Complete?
Fails in infinite-depth spaces and in spaces with loops
Modify to avoid repeated states along the path
→ complete in finite spaces
Optimal?
No – returns the first solution it finds
Time?
Could be the time to reach a solution at maximum depth m: O(b^m)
Terrible if m is much larger than d
But if there are lots of solutions, may be much faster than BFS
Space?
O(bm), i.e., linear space!
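The LIFO-fringe implementation from the slides can be sketched the same way as BFS, swapping the queue for a stack. The TREE dictionary is a hypothetical example matching the slides' diagram:

```python
# Hypothetical tree: A's children are B and C; B's are D, E; C's are F, G.
TREE = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}

def dfs(tree, start, goal):
    """Expand the deepest unexpanded node; the fringe is a LIFO stack."""
    fringe = [[start]]                     # stack of paths
    while fringe:
        path = fringe.pop()                # most recently added (deepest) first
        node = path[-1]
        if node == goal:
            return path
        # Push successors in reverse so the leftmost child is expanded first.
        for child in reversed(tree.get(node, [])):
            fringe.append(path + [child])
    return None
```

The stack holds at most one path per level plus its siblings, which is the O(bm) linear-space behaviour noted above.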
Iterative deepening search
Use DFS as a subroutine
1. Check the root
2. Do a DFS searching for a path of length 1
3. If there is no path of length 1, do a DFS searching for a path of length 2
4. If there is no path of length 2, do a DFS searching for a path of length 3…
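Steps 1–4 above amount to depth-limited DFS wrapped in a loop over increasing limits. A minimal sketch (the TREE dictionary is a hypothetical example):

```python
TREE = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}

def depth_limited(tree, node, goal, limit, path=None):
    """DFS restricted to at most `limit` edges below `node`."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:                                   # cutoff reached
        return None
    for child in tree.get(node, []):
        found = depth_limited(tree, child, goal, limit - 1, path)
        if found:
            return found
    return None

def iterative_deepening(tree, start, goal, max_depth=10):
    """Run depth-limited DFS with limits 0, 1, 2, ... (steps 1-4 above)."""
    for limit in range(max_depth + 1):
        found = depth_limited(tree, start, goal, limit)
        if found:
            return found
    return None
```

Shallow levels are re-expanded on every iteration, but since the deepest level dominates the node count, the total work stays O(b^d).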
Properties of iterative deepening search
Complete?
Yes
Optimal?
Yes, if step cost = 1
Time?
(d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + b^d = O(b^d)
Space?
O(bd)
Problem Characteristics:
❑ Is the problem decomposable?
❑ What is the role of knowledge – is it required in large amounts, or is it important only to constrain the
search?
▪ Partially commutative, monotonic production systems are useful for solving ignorable problems: they can be
implemented without the ability to backtrack to previous states when it is discovered that an incorrect path has been
followed, which is important from an implementation point of view. Both types of partially commutative production
systems are significant from an implementation standpoint because they tend to lead to many duplications of
individual states during the search process. Production systems that are not partially commutative are useful for
many problems in which permanent changes occur.
Issues in the Design of Search Programs
• Direction of Search (forward vs. backward reasoning): the tree can be searched forward from the initial state to the goal
state or backwards from the goal state to the initial state.
• Matching (how to select the applicable rules?): to select applicable rules, it is critical to have an efficient procedure for
matching rules against states.
• Knowledge representation (frame) problem: how do we represent each node of the search process? In games, an
array suffices; in other problems, more complex data structures are needed.
Finally, in terms of data structures – taking the water-jug problem as a typical example – do we use a graph or a tree? The breadth-first
structure does take note of all nodes generated, but the depth-first one can be modified.
- Hence the total time required to perform this search is proportional to N!; for 10 cities, 10! = 3,628,800,
which is a very large number. This phenomenon is called 'combinatorial explosion'.
- A 'branch and bound' strategy may be useful (begin generating complete paths, keeping track
of the shortest path found so far), but it still requires exponential time.
Heuristic Search (Informed Search)
❑ For solving many hard problems efficiently, it is often necessary to compromise the requirements of mobility and
systematicity and to construct a control structure that is no longer guaranteed to find the best answer but that will
almost always find a very good answer. This introduces the idea of a 'heuristic' (the word comes from the Greek
'heuriskein', meaning "to discover", the origin of 'eureka' (heurika), attributed to Archimedes and meaning "I have
found [it]").
❑ A heuristic is a technique that improves the efficiency of the search process, possibly by sacrificing claims of completeness.
❑ Heuristics are like tour guides: good ones point in interesting directions; bad ones miss the points of interest to particular individuals.
❑ There are some good general-purpose heuristics that are useful in a wide variety of problem domains, and it is
possible to construct special-purpose heuristics that exploit domain-specific knowledge to solve particular
problems.
❑ Informed Search – idea: give the algorithm "hints" about the desirability of different states
◦ Use an evaluation function to rank nodes and select the most promising one for expansion
Heuristic Search…
❑ For the TSP, a general-purpose heuristic, the 'nearest neighbour heuristic', is useful for avoiding the
combinatorial explosion, using the following procedure:
1. Randomly select a starting city.
2. To select the next city, look at all cities not yet visited, pick the one closest to the current city, and go there.
3. Repeat step 2 until all cities have been visited.
- The procedure executes in time proportional to N^2, a significant improvement over N!
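The three steps above can be sketched directly in Python (a minimal sketch; the coordinate-based distance is an assumption, since any distance metric would do):

```python
import math

def nearest_neighbor_tour(cities, start=0):
    """Greedy nearest-neighbour tour: O(N^2) instead of O(N!) exhaustive search.

    `cities` is a list of (x, y) coordinates; a list of city indices is returned."""
    unvisited = set(range(len(cities))) - {start}
    tour, current = [start], start
    while unvisited:
        # Step 2: among cities not yet visited, pick the closest to the current one.
        nxt = min(unvisited, key=lambda c: math.dist(cities[current], cities[c]))
        unvisited.remove(nxt)
        tour.append(nxt)
        current = nxt                      # step 3: repeat from the new city
    return tour
```

The inner `min` scans all remaining cities, and the loop runs N−1 times, which is where the N^2 bound comes from; the tour found is generally good but not guaranteed optimal.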
Heuristic function
❑ A heuristic function is a function that maps from problem state descriptions to measures of desirability, usually
represented as numbers. Which aspects of the problem state are considered, how those aspects are evaluated, and the
weights assigned to individual aspects are chosen in such a way that the value of the heuristic function at a given node
in the search process gives as good an estimate as possible of whether that node is on the desired path to a solution.
❑ Heuristic search methods use knowledge about the problem domain and choose promising operators first. These
heuristic search methods use heuristic functions to evaluate the next state towards the goal state. For finding a solution,
by using the heuristic technique, one should carry out the following steps:
1. Add domain-specific information to select the best path along which to continue searching.
2. Define a heuristic function h(n) that estimates the 'goodness' of a node n. Specifically, h(n) = estimated cost (or
distance) of the minimal cost path from n to a goal state.
3. The term heuristic means 'serving to aid discovery'; a heuristic is an estimate, based on domain-specific information
computable from the current state description, of how close we are to a goal.
Heuristic function…
Heuristic function h(n) estimates the cost of reaching goal from node n
Example: [figure lost in conversion: a start state and a goal state]
Heuristic function…
- Finding a route from one city to another city is an example of a search problem in which different search
orders and the use of heuristic knowledge are easily understood.
1. State: The current city in which the traveler is located.
2. Operators: Roads linking the current city to other cities.
3. Cost Metric: The cost of taking a given road between cities.
4. Heuristic information: The search could be guided by the direction of the goal city from the current
city, or we could use airline distance as an estimate of the distance to the goal.
- Some simple heuristic functions:
◦ Chess: the material advantage of our side over the opponent
◦ TSP: the sum of the distances covered so far
◦ Tic-Tac-Toe: 1 for each row in which we could win and in which we already have one piece, plus 2 for each such
row in which we have two pieces
Heuristic Search Techniques
I. Generate-And-Test Strategy
Generate-and-test is a very simple search algorithm that is guaranteed to find a solution if applied systematically and
a solution exists.
Algorithm: Generate-And-Test
1. Generate a possible solution. For some problems, this means generating a particular point in the problem space; for
others, it means generating a path from a start state.
2. Test to see if this is actually a solution by comparing the chosen point, or the endpoint of the chosen path, to the set of
acceptable goal states.
3. If a solution has been found, quit. Otherwise, go to step 1.
- Important points:
- Systematic application
- Implementation: DFS + backtracking
- Not efficient for harder problems, but effective when combined with other techniques that restrict the search space
- Plan-Generate-Test: the planning process uses constraint-satisfaction techniques and creates lists of recommended
and contraindicated substructures.
II. Hill Climbing
▪ A variant of the generate-and-test procedure in which feedback from the test procedure is used to help the
generator decide which direction to move in the search space. In the traditional generate-and-test
procedure, the test responds only 'yes' or 'no'; in hill climbing, the test function is augmented with a heuristic (objective)
function that provides an estimate of how close a given state is to a goal state.
▪ HC is often used when a good heuristic function is available for evaluating states and when no other
useful knowledge is available.
▪ Uses the greedy approach: at any point in the state space, the search moves only in the direction that
optimizes the cost function, in the hope of finding the optimal solution at the end.
▪ Types of Hill Climbing
1. Simple hill climbing: examines the neighbouring nodes one by one and selects the first neighbouring
node that improves the current cost as the next node.
Algorithm for simple hill climbing:
Step 1: Evaluate the initial state. If it is a goal state, stop and return success. Otherwise, make the initial
state the current state.
Step 2: Loop until a solution state is found or there are no new operators left to apply to the
current state.
a) Select an operator that has not yet been applied to the current state and apply it to produce a new state.
b) Evaluate the new state:
i. If it is a goal state, stop and return success.
ii. If it is better than the current state, make it the current state and proceed further.
iii. If it is not better than the current state, continue in the loop until a solution is found.
Step 3: Exit.
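The algorithm above can be sketched as follows. This is a minimal sketch: `successors` and `value` are hypothetical problem-specific callables (the neighbour generator and the objective function, here maximized), not names from the slides.

```python
def simple_hill_climbing(initial, successors, value):
    """Take the FIRST neighbour that improves on the current state (step 2).

    `successors(state)` yields neighbouring states; `value(state)` is the
    heuristic (objective) function being maximized."""
    current = initial
    while True:
        improved = False
        for nxt in successors(current):
            if value(nxt) > value(current):   # first better neighbour wins
                current, improved = nxt, True
                break
        if not improved:                      # no operator improves: stop
            return current
```

For example, maximizing `-(x - 3)**2` over the integers with neighbours `x ± 1` climbs from 0 to the peak at 3, one improving step at a time.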
2. Steepest-Ascent Hill climbing
▪ A useful variation on simple HC: it considers all the moves from the current state and selects the best one as the next
state, i.e. it first examines all the neighbouring nodes and then selects the node closest to the solution state as the
next node. Algorithm:
Step 1: Evaluate the initial state. If it is a goal state, exit; otherwise make it the current state.
Step 2: Repeat these steps until a solution is found or the current state does not change.
(a) Let 'target' be a state such that any possible successor of the current state will be better than it.
(b) For each operator that applies to the current state:
a. Apply the operator and create a new state.
b. Evaluate the new state. If it is a goal state, return it and quit; otherwise compare it with 'target'. If it is
better, set 'target' to this state; if it is not better, leave 'target' alone.
(c) If 'target' is better than the current state, set the current state to 'target'.
Step 3: Exit.
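The steepest-ascent variant can be sketched the same way as the simple version. Again a minimal sketch with hypothetical `successors` and `value` callables; taking the `max` over all neighbours plays the role of the 'target' bookkeeping in steps (a)–(c).

```python
def steepest_ascent_hill_climbing(initial, successors, value):
    """Examine ALL neighbours and move to the best one, if it beats current."""
    current = initial
    while True:
        neighbors = list(successors(current))
        if not neighbors:
            return current
        target = max(neighbors, key=value)    # best successor of current
        if value(target) <= value(current):   # no neighbour is better: stop
            return current
        current = target
```

The trade-off discussed next is visible here: each iteration evaluates every neighbour (slower per move than simple HC) but moves in the locally best direction (usually fewer moves overall).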
Issues with HC algorithms
▪ There is a trade-off between the time required to select a move (usually longer for steepest HC) and the
number of moves required to get a solution (usually longer for simple HC) that must be considered when
deciding which method will work better for a particular problem.
▪ Both of these approaches may get stuck on the following issues and fail to find a solution:
▪ Local maximum: a state which is better than its neighbouring states but not better than some other
states farther away. It looks best locally because the value of the objective function there is higher than at its neighbours.
Local maxima are frustrating because they often occur almost within sight of a solution; in this case they are called
foothills.
▪ Plateau: a flat region of the state space in which a whole set of neighbouring states have the same value. In
this case it is not possible to determine the best direction to move by making local comparisons.
▪ Ridge: a special kind of local maximum; an area of the search space which is higher than the surrounding areas
and which itself has a slope. The orientation of the high region, compared to the set of available moves and the
directions in which they move, makes it impossible to traverse a ridge by single moves.
▪ To overcome these situations, the following measures can be taken (though an optimal solution is still not guaranteed if such issues arise):
Issues with HC algorithms…
1. Local maximum: utilize a backtracking technique and try going in a different direction. Maintain a list of
visited states; if the search reaches an undesirable state, it can backtrack to a previous configuration and
explore a new path.
2. Plateau: make a big jump – randomly select a state far away from the current state. Chances are that we will
land in a non-plateau region.
3. Ridge: apply two or more rules before testing, which amounts to moving in several directions
at once.
Heuristic for the Romania problem
Greedy best-first search
Expand the node that has the lowest value of the heuristic function h(n)
Greedy best-first search example
Properties of greedy best-first search
Complete?
No – can get stuck in loops
Optimal?
No
Time?
Worst case: O(b^m)
Best case: O(b^d) – if h(n) is 100% accurate
Space?
Worst case: O(b^m)
A* search
Idea: avoid expanding paths that are already expensive
The evaluation function f(n) is the estimated total cost of the path through node n
to the goal:
f(n) = g(n) + h'(n)
❑ Role of g: the actual cost of the path from the start node to n so far.
❑ Role of h': the heuristic estimate of the remaining cost from n to a goal.
◦ Optimality/admissibility: A* is optimal if h' is admissible, i.e. h' never overestimates the true remaining cost h*.
◦ h' should not overestimate; underestimating only makes the search more cautious.
◦ "Graceful decay of admissibility": if h' rarely overestimates by more than a small amount, A* rarely returns a solution worse than optimal by more than that amount.
Example (8-puzzle):
h1(start) = 8 (number of misplaced tiles)
h2(start) = 3+1+2+2+2+3+3+2 = 18 (total Manhattan distance)
Are h1 and h2 admissible?
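The evaluation function f(n) = g(n) + h(n) can be sketched as a generic priority-queue search. This is a minimal sketch, not tied to any particular problem: `neighbors` and `h` are hypothetical callables supplied by the caller, and the heuristic must be admissible for the returned path to be optimal.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A*: always expand the frontier node with the smallest f(n) = g(n) + h(n).

    `neighbors(n)` yields (successor, step_cost) pairs; `h(n)` estimates the
    remaining cost and must never overestimate it (admissibility)."""
    frontier = [(h(start), 0, start, [start])]      # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):  # found a cheaper route to nxt
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")
```

Keeping `best_g` per node avoids re-expanding paths that are already more expensive than a known route, which is exactly the idea stated above.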
Uninformed search strategies
b: maximum branching factor of the search tree, d: depth of the optimal solution
m: maximum length of any path in the state space, C*: cost of optimal solution
All search strategies
Overview of Artificial Intelligence:
Knowledge Representation
Knowledge Representation in AI is the study of how the beliefs, intentions, and judgments of an
intelligent agent can be expressed suitably for automated reasoning. One of the primary purposes
of Knowledge Representation is modeling intelligent behavior for an agent.
The different kinds of knowledge that need to be represented in AI include:
•Objects and Events
•Performance
•Facts
•Meta-knowledge
•Knowledge base