
Problem Solving Strategies

▪ Define the problem precisely – specify the initial situation(s) as well as the final situations that constitute an acceptable
solution to the problem.
▪ Analyse the problem – identify a few important features that may have an impact on the appropriateness of
various possible techniques for solving the problem.
▪ Isolate and represent the task knowledge necessary to solve the problem.
▪ Choose the best problem-solving technique(s) and apply it to the particular problem.
Problem Definition
• A problem is defined by its ‘elements’ and their ‘relations’. To provide a formal description of a
problem, we need to do the following:
◦ Define a state space that contains all the possible configurations of the relevant objects,
including some impossible ones.
◦ Specify one or more states that describe possible situations, from which the problem solving
process may start. These states are called initial states.
◦ Specify one or more states that would be acceptable as solutions to the problem. These states
are called goal states.
• Specify a set of rules that describe the actions (operators) available. Each rule consists of two parts: a left
side, which serves as a pattern to be matched against the current situation, and a right side, which
describes the change to be made to the current situation to reflect the application of the rule.
•Example: Playing Chess, etc.
Defining the problem as a state space search : An example:
Water Jug Problem:
You are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any measuring markers on it. There is a
pump that can be used to fill the jugs with water.
How can you get exactly 2 gallons of water into the 4-gallon jug?
Production Rules (the state (four, three) gives the gallons currently in the 4-gallon and the 3-gallon jug; the left side is the condition, the right side the resulting state):
1. (four, three) if four < 4 → (4, three) : fill the 4-gallon jug from the tap
2. (four, three) if three < 3 → (four, 3) : fill the 3-gallon jug from the tap
3. (four, three) if four > 0 → (0, three) : empty the 4-gallon jug into the drain
4. (four, three) if three > 0 → (four, 0) : empty the 3-gallon jug into the drain
5. (four, three) if four + three ≤ 4 and three > 0 → (four + three, 0) : empty the 3-gallon jug into the 4-gallon jug
6. (four, three) if four + three ≤ 3 and four > 0 → (0, four + three) : empty the 4-gallon jug into the 3-gallon jug
7. (0, three) if three > 0 → (three, 0) : empty the 3-gallon jug into the empty 4-gallon jug
8. (four, 0) if four > 0 → (0, four) : empty the 4-gallon jug into the empty 3-gallon jug
9. (0, 2) → (2, 0) : pour the 2 gallons from the 3-gallon jug into the 4-gallon jug
10. (2, 0) → (0, 2) : pour the 2 gallons from the 4-gallon jug into the 3-gallon jug
11. (four, three) if four + three ≥ 4 and three > 0 → (4, three - (4 - four)) : pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full
12. (four, three) if four + three ≥ 3 and four > 0 → (four - (3 - three), 3) : pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full

One possible solution from the above production rules:


Four-gallon jug    Three-gallon jug    Rule applied
0                  0                   -
0                  3                   2
3                  0                   7
3                  3                   2
4                  2                   11
0                  2                   3
2                  0                   9
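As an illustration (not part of the original slides), the state space and rules above can be searched mechanically. The sketch below, in Python, performs a breadth-first search over (four, three) states; the function names and the pour-rule encoding are our own choices rather than the exact rules numbered above.

```python
from collections import deque

def successors(state):
    """Generate (next_state, action) pairs for the (four, three) jug state."""
    four, three = state
    results = []
    if four < 4:
        results.append(((4, three), "fill the 4-gallon jug"))
    if three < 3:
        results.append(((four, 3), "fill the 3-gallon jug"))
    if four > 0:
        results.append(((0, three), "empty the 4-gallon jug"))
    if three > 0:
        results.append(((four, 0), "empty the 3-gallon jug"))
    if three > 0 and four < 4:
        d = min(three, 4 - four)          # pour from the 3-gallon into the 4-gallon jug
        results.append(((four + d, three - d), "pour 3-gallon into 4-gallon"))
    if four > 0 and three < 3:
        d = min(four, 3 - three)          # pour from the 4-gallon into the 3-gallon jug
        results.append(((four - d, three + d), "pour 4-gallon into 3-gallon"))
    return results

def solve(start=(0, 0), goal_test=lambda s: s[0] == 2):
    """Breadth-first search; returns a shortest list of actions reaching the goal."""
    frontier = deque([(start, [])])
    explored = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for nxt, action in successors(state):
            if nxt not in explored:
                explored.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

print(solve())  # one shortest action sequence leaving exactly 2 gallons in the 4-gallon jug
```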
Search Process
The problem can then be solved by using the rules, in combination with an appropriate control
strategy, to move through the problem space until a path from an initial state to a goal state is
found. This process is known as ‘search’. Thus:
▪ Search is fundamental to the problem-solving process.
▪Search is a general mechanism that can be used when a more direct method is not known.
▪Search provides the framework into which more direct methods for solving subparts of a problem can
be embedded. A very large number of AI problems are formulated as search problems.
Problem Space:
A problem space is represented by a directed graph, where nodes represent search states and paths
represent the operators applied to change the state.
• All the states the system can be in are represented as nodes of a graph.
• An action that can change the system from one state to another (e. g. a move in a game) is represented
by a link from one node to another.
• Search for a solution.
• A solution might be:
• Any path from start state to goal state.
• The best (e. g. lowest cost) path from start state to goal state (e. g. Travelling salesman problem).
• It may be possible to reach the same state through many different paths.
• There may be loops in the graph (can go round in circle).
Production System

Production systems provide appropriate structures for AI programs to perform and describe
search processes. A production system has four basic components, as enumerated below.
• A set of rules, each consisting of a left side that determines the applicability of the rule and a
right side that describes the operation to be performed if the rule is applied.
• A database of current facts established during the process of inference. Some parts may be
permanent, while others may pertain only to the solution of the current problem. The information in
this database may be structured in any appropriate way.
•A control strategy that specifies the order in which the rules will be compared with facts of the
database and also specifies the way of resolving conflicts that arise when several rules match at
once.
•A rule applying (firing) module.
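To make these four components concrete, here is a minimal, hypothetical production-system interpreter sketch in Python. The rule set, the fact names, and the "first match wins" conflict-resolution scheme are all illustrative assumptions, not part of the slides.

```python
# A toy production system: rules, a database of facts (working memory),
# a control strategy, and a rule-firing loop. The rules and facts are illustrative only.
rules = [
    # (name, condition over the working memory, facts added when the rule fires)
    ("r1", lambda wm: "goal-unreached" in wm and "has-key" in wm, {"door-open"}),
    ("r2", lambda wm: "door-open" in wm, {"goal-reached"}),
    ("r3", lambda wm: "goal-unreached" in wm, {"has-key"}),
]

def run(working_memory, max_cycles=10):
    wm = set(working_memory)
    for _ in range(max_cycles):
        # Match phase: collect the conflict set of applicable rules.
        conflict_set = [(name, adds) for name, cond, adds in rules
                        if cond(wm) and not adds <= wm]
        if not conflict_set:
            break                      # no rule applies: halt
        name, adds = conflict_set[0]   # conflict resolution: first match wins
        wm |= adds                     # act phase: apply the rule's right side
        print(f"fired {name}, working memory = {sorted(wm)}")
    return wm

run({"goal-unreached"})
```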
Production System…
❑ The term 'production system' is very general and encompasses a great many systems, including
descriptions of problem solvers (chess player, water jug, etc.). It also encompasses a family of general
production system interpreters, including:
• Basic production system languages, such as OPS5 [Brownston et al., 1985] and ACT* [Anderson, 1983].
• Complex, often hybrid systems called expert system shells, which provide a complete environment for the
construction of knowledge-based expert systems.
• General problem-solving architectures like SOAR [Laird et al., 1987], a system based on a specific set of
cognitively motivated hypotheses about the nature of problem solving.

❑ Control Strategies: Requirements


• A good control strategy must cause motion,
• and the motion (movement) must also be systematic.
Overview of Artificial Intelligence:
Problem as State space search

Search strategies fall into two families:
• Uninformed search: Breadth-first search, Uniform-cost search, Depth-first search, Iterative deepening search
• Informed search: Greedy best-first search, A* search
Review: Search problem formulation
◦ Initial state
◦ Actions
◦ Transition model
◦ Goal state
◦ Path cost

❖What is the optimal solution?


❖What is the state space?
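The five ingredients listed above map naturally onto a small programming interface. The following sketch (class and method names are our own, not from the slides) shows one way to package a search problem in Python; the later sketches in this document assume this interface.

```python
class SearchProblem:
    """Abstract formulation: initial state, actions, transition model,
    goal test, and path cost (here, a per-step cost)."""

    def initial_state(self):
        raise NotImplementedError

    def actions(self, state):
        """Actions applicable in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state reached by doing `action` in `state`."""
        raise NotImplementedError

    def is_goal(self, state):
        raise NotImplementedError

    def step_cost(self, state, action, next_state):
        return 1   # default: unit cost per step
```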
Review: Tree search
Initialize the fringe using the starting state
While the fringe is not empty
◦ Choose a fringe node to expand according to search strategy
◦ If the node contains the goal state, return solution
◦ Else expand the node and add its children to the fringe

To handle repeated states:
◦ Keep an explored set; add each node to the explored set every time you expand it.
◦ Every time you add a node to the fringe, check whether it already exists in the fringe with a higher path cost, and if yes, replace that node with the new one.
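A direct transcription of this pseudocode might look as follows, assuming the SearchProblem interface sketched earlier. Instead of replacing a higher-cost duplicate inside the fringe, the sketch uses the common "lazy" variant: duplicates may enter the frontier, and a popped node is skipped if its state has already been expanded. Swapping the frontier object changes the search strategy.

```python
from collections import deque

class FifoFrontier:
    """FIFO fringe: with this frontier the skeleton behaves as breadth-first search."""
    def __init__(self):
        self._queue = deque()
    def push(self, node):
        self._queue.append(node)
    def pop(self):
        return self._queue.popleft()
    def empty(self):
        return not self._queue

def graph_search(problem, frontier):
    """Generic search over a SearchProblem; the frontier policy fixes the strategy."""
    frontier.push((problem.initial_state(), [], 0))   # node = (state, path, path cost)
    explored = set()
    while not frontier.empty():
        state, path, cost = frontier.pop()            # the strategy decides which node
        if problem.is_goal(state):
            return path
        if state in explored:
            continue                                  # lazy handling of repeated states
        explored.add(state)
        for action in problem.actions(state):
            nxt = problem.result(state, action)
            if nxt not in explored:
                step = problem.step_cost(state, action, nxt)
                frontier.push((nxt, path + [action], cost + step))
    return None
```

With FifoFrontier this behaves as breadth-first search; a LIFO or priority frontier in its place would give depth-first or uniform-cost behaviour.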
Search strategies
A search strategy is defined by picking the order of node expansion
Strategies are evaluated along the following dimensions:
◦ Completeness: does it always find a solution if one exists?
◦ Optimality: does it always find a least-cost solution?
◦ Time complexity: number of nodes generated
◦ Space complexity: maximum number of nodes in memory

Time and space complexity are measured in terms of


◦ b: maximum branching factor of the search tree
◦ d: depth of the optimal solution
◦ m: maximum length of any path in the state space (may be infinite)
Uninformed search strategies
Uninformed search strategies use only the information available in the problem
definition

Breadth-first search
Uniform-cost search
Depth-first search
Iterative deepening search
Breadth-first search
Expand shallowest unexpanded node
Implementation:
◦ fringe is a FIFO queue, i.e., new successors go at end

[Figure: step-by-step breadth-first expansion of a small example tree with nodes B, C and D, E, F, G]
Properties of breadth-first search
Complete?
Yes (if branching factor b is finite)
Optimal?
Yes – if cost = 1 per step
Time?
Number of nodes in a b-ary tree of depth d: O(b^d)
(d is the depth of the optimal solution)
Space?
O(b^d)

Space is the bigger problem (more than time)


Uniform-cost search
Expand least-cost unexpanded node
Implementation: fringe is a queue ordered by path cost (priority queue)
Equivalent to breadth-first if step costs all equal
Complete? Yes, if step cost is greater than some positive constant ε (we don’t want infinite
sequences of steps that have a finite total cost)
Optimal? Yes – nodes expanded in increasing order of path cost
Time? Number of nodes with path cost ≤ cost of optimal solution (C*): O(b^(C*/ε))
This can be greater than O(b^d): the search can explore long paths consisting of small steps
before exploring shorter paths consisting of larger steps
Space? O(b^(C*/ε))
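A sketch of uniform-cost search with Python's heapq as the priority queue, again assuming the SearchProblem interface above; the tie-breaking counter is just an implementation convenience.

```python
import heapq

def uniform_cost_search(problem):
    """Expand nodes in order of increasing path cost g(n)."""
    start = problem.initial_state()
    frontier = [(0, 0, start, [])]        # (g, tie-breaker, state, path)
    best_g = {start: 0}
    counter = 0
    while frontier:
        g, _, state, path = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                      # stale entry: a cheaper path was found later
        if problem.is_goal(state):
            return path, g
        for action in problem.actions(state):
            nxt = problem.result(state, action)
            g2 = g + problem.step_cost(state, action, nxt)
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                counter += 1
                heapq.heappush(frontier, (g2, counter, nxt, path + [action]))
    return None
```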
Depth-first search
Expand deepest unexpanded node
Implementation:
◦ fringe = LIFO queue, i.e., put successors at front

[Figure: step-by-step depth-first expansion of the same example tree with nodes B, C and D, E, F, G]
Properties of depth-first search
Complete?
Fails in infinite-depth spaces, spaces with loops
Modify to avoid repeated states along path
→ complete in finite spaces
Optimal?
No – returns the first solution it finds
Time?
Could be the time to reach a solution at maximum depth m: O(b^m)
Terrible if m is much larger than d
But if there are lots of solutions, may be much faster than BFS
Space?
O(bm), i.e., linear space!
Iterative deepening search
Use DFS as a subroutine
1. Check the root
2. Do a DFS searching for a path of length 1
3. If there is no path of length 1, do a DFS searching for a path of length 2
4. If there is no path of length 2, do a DFS searching for a path of length 3…
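The subroutine structure above can be written down directly: a depth-limited DFS wrapped in a loop over increasing depth limits. This is a sketch over the assumed SearchProblem interface; the 'cutoff' marker distinguishes "no solution within this limit" from "no solution at all".

```python
def depth_limited_search(problem, state, limit, path=()):
    """Recursive DFS that gives up below depth `limit`."""
    if problem.is_goal(state):
        return list(path)
    if limit == 0:
        return "cutoff"                   # a solution may still exist deeper
    cutoff_hit = False
    for action in problem.actions(state):
        nxt = problem.result(state, action)
        result = depth_limited_search(problem, nxt, limit - 1, path + (action,))
        if result == "cutoff":
            cutoff_hit = True
        elif result is not None:
            return result
    return "cutoff" if cutoff_hit else None

def iterative_deepening_search(problem, max_depth=50):
    for limit in range(max_depth + 1):    # limit 0 checks just the root
        result = depth_limited_search(problem, problem.initial_state(), limit)
        if result != "cutoff":
            return result                 # either a solution path or None
    return None
```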
Properties of iterative deepening search
Complete?
Yes
Optimal?
Yes, if step cost = 1
Time?
(d+1)b^0 + d·b^1 + (d-1)·b^2 + … + b^d = O(b^d)
Space?
O(bd)
Problem Characteristics:
❑Is the problem decomposable?

❑Can solution steps be ignored or undone?

❑Is the problem’s universe predictable?

❑Is a good solution absolute or relative?

❑Is the desired solution a state of the world or a path to a state?

❑The role of knowledge: is it required in large amounts, or is it important only to constrain the
search?

❑Does the task require interaction with a person?


Production System Characteristics:
Production systems provide us with good ways of describing the operations that can be performed in a search
for a solution to a problem. At this time, two questions may arise:
▪ Can production systems be described by a set of characteristics? And how can they be easily implemented?
▪ What relationships are there between the problem types and the types of production systems well suited for
solving the problems?
To answer these questions, first consider the following classes of production systems:
◦ A monotonic production system is a production system in which the application of a rule never prevents the
later application of another rule that could also have been applied at the time the first rule was selected.
◦ A non-monotonic production system is one in which this is not true.
◦ A partially commutative production system is a production system with the property that if the application
of a particular sequence of rules transforms state P into state Q, then any combination of those rules that is
allowable also transforms state P into state Q.
◦ A commutative production system is a production system that is both monotonic and partially commutative.
▪ Is there any relationship between classes of production systems and classes of problems? For any solvable
problem, there exist an infinite number of production systems that show how to find solutions. Any problem
that can be solved by any production system can be solved by a commutative one, but the commutative one is
often practically useless: it may use individual states to represent entire sequences of applications of rules of a
simpler, non-commutative system. In the formal sense, there is no relationship between kinds of problems and
kinds of production systems, since all problems can be solved by all kinds of systems. But in the practical
sense, there is definitely such a relationship between the kinds of problems and the kinds of systems that lend
themselves to describing those problems.

▪ Partially commutative, monotonic production systems are useful for solving ignorable problems. They are
important from an implementation point of view because they can be implemented without the ability to
backtrack to previous states when it is discovered that an incorrect path has been followed. Both types of
partially commutative production systems are significant from an implementation point of view, since they tend
to lead to many duplications of individual states during the search process. Production systems that are not
partially commutative are useful for many problems in which permanent changes occur.
Issues in the Design of Search Programs
• Direction of search (forward vs. backward reasoning): the tree can be searched forward from the initial state to the goal
state, or backwards from the goal state to the initial state.
• Matching (how to select the applicable rules?): to select applicable rules, it is critical to have an efficient procedure for
matching rules against states.
• Knowledge representation (the frame problem): how to represent each node of the search process? In games, an
array suffices; in other problems, more complex data structures are needed.

Finally, in terms of data structures, and taking the water jug as a typical problem, do we use a graph or a tree? The breadth-first
structure does take note of all nodes generated, and the depth-first one can be modified to do so using the following algorithm.

Algorithm: Check Duplicate Nodes


1. Examine the set of nodes that have been created/generated so far to see if the new node already exists.
2. If it does not exist, simply add it to the graph just as for a tree.
Algorithm: Check Duplicate Nodes…

3. If it does already exist, then do the following:


◦ Set the node that is being expanded to point to the already existing node corresponding to its
successor, rather than to the new one. The new one can simply be thrown away.
◦ If you are keeping track of the best (shortest or otherwise least cost) path to each node, then check to see
if the new path is better or worse than the old one. If worse, do nothing. If better, record the new path as
the correct path to use to get to the node and propagate the corresponding change in cost down
through successor nodes as necessary.
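One possible way to realize this bookkeeping (a simplified, hypothetical sketch, not the slides' own code) is to keep, for every generated state, its best known cost, parent, and children, and to propagate a cost improvement only along the recorded best-path links.

```python
def record_successor(graph, parent, state, step_cost):
    """graph maps state -> {'cost': best g, 'parent': state, 'children': set()}.
    Creates the node for `state` if it is new, otherwise merges the duplicate."""
    new_cost = graph[parent]["cost"] + step_cost
    if state not in graph:                       # case 2: a genuinely new node
        graph[state] = {"cost": new_cost, "parent": parent, "children": set()}
    else:                                        # case 3: a duplicate node
        node = graph[state]
        if new_cost < node["cost"]:              # a better path has been found
            delta = node["cost"] - new_cost
            node["cost"], node["parent"] = new_cost, parent
            propagate(graph, state, delta)       # push the improvement downstream
    graph[parent]["children"].add(state)
    return state

def propagate(graph, state, delta):
    """Lower the recorded cost of descendants reached via best-path links by `delta`."""
    for child in graph[state]["children"]:
        if graph[child]["parent"] == state:      # only along recorded best paths
            graph[child]["cost"] -= delta
            propagate(graph, child, delta)

# Tiny illustrative usage: register a start state, then add successors.
g = {"S": {"cost": 0, "parent": None, "children": set()}}
record_successor(g, "S", "A", 3)
record_successor(g, "A", "B", 2)
record_successor(g, "S", "B", 1)   # duplicate B found via a cheaper path
print(g["B"]["cost"])              # 1
```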
Topics to cover from books/references:
❑Additional Problems:
❑Missionaries and Cannibals Problem
❑ The Tower of Hanoi
❑Monkey and Bananas Problem
❑ 8-Puzzle Problem
…………..
Travelling Salesman Problem:
❑The problem may be solved through the principles of ‘motion-causing’ and ‘systematic control structure’ by
simply exploring all possible paths in a tree and returning the one with the shortest length. This approach will
even work in practice for a very short list of cities, but it breaks down as the number of cities grows.
- If there are N cities, the number of different paths among them is 1·2·3·…·(N-1), i.e. (N-1)!

- Hence the total time required to perform this search is proportional to N!. For 10 cities, 10! = 3,628,800,
which is a very large number; this phenomenon is called ‘combinatorial explosion’.

- A ‘branch and bound’ strategy may be useful (begin generating complete paths, keeping track
of the shortest path found so far), but it still requires exponential time.
Heuristic Search (Informed Search)
❑ For solving many hard problems efficiently, it is often necessary to compromise the requirements of mobility and
systematicity and to construct a control structure that is no longer guaranteed to find the best answer but that will
almost always find a very good answer. This introduces the idea of a ‘heuristic’ (the word comes from the Greek word
‘heuriskein’, meaning “to discover”, and shares its origin with ‘eureka (heurika)’, Archimedes’ exclamation meaning “I have
found it”).

❑ A heuristic is a technique that improves the efficiency of the search process, possibly by sacrificing claims of completeness.
❑ Heuristics are like tour guides: good ones point in interesting directions; bad ones miss the points of interest to particular individuals.
❑ There are some good general-purpose heuristics that are useful in a wide variety of problem domains, and it is also
possible to construct special-purpose heuristics that exploit domain-specific knowledge to solve particular
problems.

❑ Informed search – idea: give the algorithm “hints” about the desirability of different states
◦ Use an evaluation function to rank nodes and select the most promising one for expansion
Heuristic Search…
❑ For the TSP, a general-purpose heuristic, the ‘nearest neighbor’ heuristic, is useful for avoiding the
combinatorial explosion, with the following procedure:
1. Arbitrarily select a starting city.
2. To select the next city, look at all cities not yet visited and select the one closest to the current city; go to it.
3. Repeat step 2 until all cities have been visited.
- The procedure executes in time proportional to N^2, a significant improvement over N!
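A sketch of the nearest-neighbor procedure, assuming a symmetric distance matrix; the example distances at the bottom are made up purely for illustration.

```python
import random

def nearest_neighbor_tour(dist):
    """dist: square matrix, dist[i][j] = distance between cities i and j.
    Returns a tour visiting every city once, built greedily."""
    n = len(dist)
    current = random.randrange(n)           # step 1: random starting city
    tour, unvisited = [current], set(range(n)) - {current}
    while unvisited:                         # steps 2-3: repeatedly take the closest city
        nxt = min(unvisited, key=lambda c: dist[current][c])
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return tour

# Example with 4 cities; the O(N^2) loop replaces the O(N!) exhaustive search.
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(nearest_neighbor_tour(d))
```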
Heuristic function
❑ A heuristic function is a function that maps from problem state descriptions to measures of desirability, usually
represented as numbers. Which aspects of the problem state are considered, how those aspects are evaluated, and the
weights assigned to individual aspects are chosen in such a way that the value of the heuristic function at a given node
in the search process gives as good an estimate as possible of whether that node is on the desired path to a solution.
❑ Heuristic search methods use knowledge about the problem domain and choose promising operators first. These
heuristic search methods use heuristic functions to evaluate the next state towards the goal state. For finding a solution,
by using the heuristic technique, one should carry out the following steps:
1. Add domain-specific information to select the best path to continue searching along.
2. Define a heuristic function h(n) that estimates the ‘goodness’ of a node n. Specifically, h(n) = estimated cost (or
distance) of a minimal-cost path from n to a goal state.
3. The term heuristic means ‘serving to aid discovery’; h(n) is an estimate, based on domain-specific information
computable from the current state description, of how close we are to a goal.
Heuristic function…
Heuristic function h(n) estimates the cost of reaching goal from node n
Example:
[Figure: an example start state and goal state]
Heuristic function…
- Finding a route from one city to another city is an example of a search problem in which different search
orders and the use of heuristic knowledge are easily understood.
1. State: The current city in which the traveler is located.
2. Operators: Roads linking the current city to other cities.
3. Cost Metric: The cost of taking a given road between cities.
4. Heuristic information: The search could be guided by the direction of the goal city from the current
city, or we could use airline distance as an estimate of the distance to the goal.
- Some simple heuristic functions:
◦ Chess: the material advantage of our side over the opponents
◦ TSP: the sum of the distance covered so far
◦ Tic-Tac-Toe: 1 for each row in which we could win and in which we already have one piece plus 2 for each such
row in which we have two pieces
Heuristic Search Techniques
I. Generate-And-Test Strategy
The generate-and-test search algorithm is a very simple algorithm that is guaranteed to find a solution, if one
exists, provided the generation is done systematically.
Algorithm: Generate-And-Test
1. Generate a possible solution. For some problems, this means generating a particular point in the problem space; for
others, it means generating a path from a start state.
2. Test to see if this is actually a solution by comparing the chosen point or the endpoint of the chosen path to the set of
acceptable goal states.
3. If a solution has been found, quit. Else go to step 1.
- Important points:
- Systematic application
- Implementation: DFS + backtracking
- Not efficient for harder problems, but becomes effective if combined with other techniques that restrict the search space
- Plan-Generate-Test: the planning process uses constraint-satisfaction techniques and creates lists of recommended
and contraindicated substructures.
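A minimal sketch of the generate-and-test loop; generate_candidates and is_goal are assumed, illustrative names, and the toy usage at the bottom is invented for demonstration.

```python
def generate_and_test(generate_candidates, is_goal):
    """Systematically generate candidates and test each one against the goal."""
    for candidate in generate_candidates():   # step 1: generate
        if is_goal(candidate):                # step 2: test
            return candidate                  # step 3: solution found, quit
    return None                               # exhausted without success

# Toy usage: find a pair (x, y) with x + y == 7 and x * y == 12.
def pairs():
    for x in range(10):
        for y in range(10):
            yield (x, y)

print(generate_and_test(pairs, lambda p: p[0] + p[1] == 7 and p[0] * p[1] == 12))
```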
II. Hill Climbing
▪ Hill climbing is a variant of the generate-and-test procedure in which feedback from the test procedure is used to
help the generator decide which direction to move in the search space. In the traditional generate-and-test
procedure, the test responds only ‘yes’ or ‘no’; here the test function is augmented with a heuristic (objective)
function that provides an estimate of how close a given state is to a goal state.
▪ HC is often used when a good heuristic function is available for evaluating the states and when no other
useful knowledge is available.
▪ It uses a greedy approach: at any point in the state space, the search moves only in the direction that
optimizes the cost function, with the hope of finding the optimal solution at the end.
▪ Types of Hill Climbing
1. Simple Hill climbing : it examines the neighboring nodes one by one and selects the first neighboring
node that optimizes the current cost as the next node.
Algorithm for Simple Hill climbing :
Step 1 : Evaluate the initial state. If it is a goal state then stop and return success. Otherwise, make the
initial state the current state.
Step 2 : Loop until a solution state is found or there are no new operators left that can be applied to the
current state.
a) Select an operator that has not yet been applied to the current state and apply it to produce a new state.
b) Evaluate the new state:
i. If the new state is a goal state, then stop and return success.
ii. If it is better than the current state, then make it the current state and proceed further.
iii. If it is not better than the current state, then continue in the loop until a solution is found.

Step 3 : Exit.
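A sketch of simple hill climbing corresponding to the steps above; neighbors(state) stands for applying the available operators and value(state) for the heuristic (objective) function being maximized. Both helper names are assumptions, not from the slides.

```python
def simple_hill_climbing(start, neighbors, value, is_goal):
    current = start
    while True:
        if is_goal(current):
            return current
        moved = False
        for nxt in neighbors(current):         # examine neighbors one by one
            if is_goal(nxt):
                return nxt
            if value(nxt) > value(current):    # the first improvement is accepted
                current, moved = nxt, True
                break
        if not moved:                          # no operator improves: give up
            return current
```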
2. Steepest-Ascent Hill climbing
▪ A useful variation on simple HC: it considers all the moves from the current state and selects the best one as the next
state, i.e. it first examines all the neighboring nodes and then selects the node closest to the solution state as the
next node. Algorithm:
Step 1 : Evaluate the initial state. If it is a goal state then return it and quit; otherwise, make the initial state the current state.
Step 2 : Repeat these steps until a solution is found or the current state does not change
(a) Let ‘target’ be a state such that any successor of the current state will be better than it;
(b) For each operator that applies to the current state
a. Apply the operator and create a new state
b. Evaluate the new state. If this state is a goal state then return it and quit; else compare it with ‘target’. If this state is
better, then set ‘target’ to this state. If it is not better, leave ‘target’ alone.
(c) If ‘target’ is better than the current state, set the current state to ‘target’.
Step 3 : Exit
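Steepest-ascent hill climbing differs only in examining all neighbors before moving; the same assumed helpers are used. Initializing 'target' to the current state and requiring a strictly better successor plays the role of the "worse than any successor" initial target in the algorithm above.

```python
def steepest_ascent_hill_climbing(start, neighbors, value, is_goal):
    current = start
    while True:
        if is_goal(current):
            return current
        target = current                       # any strictly better successor replaces it
        for nxt in neighbors(current):
            if is_goal(nxt):
                return nxt
            if value(nxt) > value(target):
                target = nxt
        if target is current:                  # no successor is better: stop
            return current
        current = target
```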
Issues with HC algorithms
▪ There is a trade-off between the time required to select a move (usually longer for steepest HC) and the
number of moves required to get a solution (usually longer for simple HC) that must be considered when
deciding which method will work better for a particular problem.
▪ Both these approaches may get stuck on the following issues and fail to find a solution:
▪ Local maximum : a state which is better than its neighboring states but is not better than some other
states farther away. This state is better because the value of the objective function there is higher than at its
neighbors. Local maxima are frustrating because they often occur almost within sight of a solution; in this
case they are called foothills.
▪ Plateau : a flat region of the state space in which a whole set of neighboring states have the same value. In
this case it is not possible to determine the best direction in which to move by making local comparisons.
▪ Ridge : a special kind of local maximum; an area of the search space which is higher than the surrounding
areas and that itself has a slope. The orientation of the high region, compared to the set of available moves
and the directions in which they move, makes it impossible to traverse a ridge by single moves.
▪ To overcome these situations the following methods can be adopted (but they are not guaranteed to give an optimal solution):
Issues with HC algorithms…
▪ To overcome the issues the following measures can be taken; however, even then an optimal solution is not
guaranteed if such issues arise:
1. Local maximum : utilize a backtracking technique and try going in a different direction. Maintain a list of
visited states; if the search reaches an undesirable state, it can backtrack to a previous configuration and
explore a new path.
2. Plateau : make a big jump. Randomly select a state far away from the current state; chances are that we will
land in a non-plateau region.
3. Ridge : in this kind of obstacle, use two or more rules before testing, which amounts to moving in several
directions at once.
Heuristic for the Romania problem
Greedy best-first search
Expand the node that has the lowest value of the heuristic function h(n)
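A sketch of greedy best-first search using a priority queue keyed on h(n) alone, assuming the SearchProblem interface from earlier and a user-supplied heuristic h.

```python
import heapq

def greedy_best_first_search(problem, h):
    """Always expand the frontier node with the smallest heuristic value h(n)."""
    start = problem.initial_state()
    frontier = [(h(start), 0, start, [])]     # (h, tie-breaker, state, path)
    explored, counter = set(), 0
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if problem.is_goal(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for action in problem.actions(state):
            nxt = problem.result(state, action)
            if nxt not in explored:
                counter += 1
                heapq.heappush(frontier, (h(nxt), counter, nxt, path + [action]))
    return None
```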
Greedy best-first search example
[Figure: step-by-step greedy best-first expansion on the Romania route-finding problem]
Properties of greedy best-first search
Complete?
No – can get stuck in loops
Optimal?
No
Time?
Worst case: O(b^m)
Best case: O(bd) – if h(n) is 100% accurate
Space?
Worst case: O(b^m)
A* search
Idea: avoid expanding paths that are already expensive
The evaluation function f(n) is the estimated total cost of the path through node n
to the goal:

f(n) = g(n) + h(n)

g(n): cost so far to reach n (path cost)


h(n): estimated cost from n to goal (heuristic)
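The same priority-queue skeleton with f(n) = g(n) + h(n) as the key gives A*; again a sketch over the assumed SearchProblem interface.

```python
import heapq

def a_star_search(problem, h):
    """Expand the node with the lowest f(n) = g(n) + h(n)."""
    start = problem.initial_state()
    frontier = [(h(start), 0, 0, start, [])]   # (f, tie-breaker, g, state, path)
    best_g, counter = {start: 0}, 0
    while frontier:
        f, _, g, state, path = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                           # stale entry: a cheaper path was found later
        if problem.is_goal(state):
            return path, g
        for action in problem.actions(state):
            nxt = problem.result(state, action)
            g2 = g + problem.step_cost(state, action, nxt)
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                counter += 1
                heapq.heappush(frontier, (g2 + h(nxt), counter, g2, nxt, path + [action]))
    return None
```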
A* search example
[Figure: step-by-step A* expansion on the Romania route-finding problem]
Observations of A*

❑ Role of g
❑ Role of h’
◦ Optimality/admissibility??
◦ h’ should not be underestimated / overestimated
◦ “Graceful decay of admissibility”

❑Relation between tree and graph


Admissible heuristics
A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the
goal state from n
An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic
Example: straight line distance never overestimates the actual road distance
Theorem: If h(n) is admissible, A* is optimal
Other desirable Properties of A*

▪ Completeness condition: Algorithm A is complete if it always terminates with a solution
when one exists.
▪ Dominance property: Let A1 and A2 be admissible algorithms with heuristic estimation
functions h1 and h2 respectively. A1 is said to be more informed than A2 whenever h1(n) >
h2(n) for all n; A1 is then said to dominate A2.
▪ Optimality property: Algorithm A is optimal over a class of algorithms if A dominates all
members of the class.
▪Complete? Yes – unless there are infinitely many nodes with f(n) ≤ C*
▪Optimal? Yes
▪Time? Number of nodes for which f(n) ≤ C* (exponential)
▪Space? Exponential
Designing heuristic functions
Heuristics for the 8-puzzle
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance (number of squares from desired location of
each tile)

h1(start) = 8
h2(start) = 3+1+2+2+2+3+3+2 = 18
Are h1 and h2 admissible?
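Both heuristics can be computed directly from a state represented as a tuple of nine entries (0 for the blank). The goal layout and example state below follow the standard textbook instance that yields h1 = 8 and h2 = 18, assumed to match the omitted figure.

```python
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # goal layout read row by row, blank in the top-left

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for tile, goal in zip(state, GOAL) if tile != 0 and tile != goal)

def h2(state):
    """Total Manhattan distance of every tile from its goal square."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = GOAL.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

# Both heuristics are admissible: neither can exceed the true number of moves needed,
# and h2 dominates h1 (h2(n) >= h1(n) for every state n).
start = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # example scrambled state
print(h1(start), h2(start))           # prints 8 18
```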
Uninformed search strategies (summary)

b: maximum branching factor of the search tree, d: depth of the optimal solution
m: maximum length of any path in the state space, C*: cost of optimal solution
All search strategies (summary)
[Comparison tables of the uninformed search strategies and of all search strategies omitted]
Overview of Artificial Intelligence:
Knowledge Representation
Knowledge Representation in AI describes the representation of knowledge. Basically, it is a
study of how the beliefs, intentions, and judgments of an intelligent agent can be expressed
suitably for automated reasoning. One of the primary purposes of Knowledge Representation
is to model intelligent behavior for an agent.
The different kinds of knowledge that need to be represented in AI include:
•Objects, Events
•Performance
•Facts
•Meta-Knowledge
•Knowledge-base