Unit 1
Elaborate artificial intelligence with suitable example along with its application.
John McCarthy coined the term Artificial Intelligence in 1956. AI is defined as the
science and engineering of making intelligent machines. AI implies that machines are able
to take decisions on their own, and to take the right decision. An AI system works like the
human brain.
Applications of AI are as follows:
i. Robotic vehicles: A driverless robotic car, or autonomous car, named "Stanley" was
developed using AI.
ii. Speech recognition: It recognizes the speech of an authenticated user and allows the
service, while blocking the service to an unauthorized user.
iii. Autonomous planning and scheduling: NASA developed an autonomous planning and
scheduling program for scheduling the different operations of a spacecraft.
iv. Game playing: IBM's Deep Blue became the first computer program to defeat the world
chess champion, Garry Kasparov.
v. Spam fighting: Each day, learning algorithms classify over a billion messages as spam,
saving recipients from having to waste time deleting unwanted mail.
vi. Logistics planning: Logistics planning is used in different industries for performing
planning and scheduling of different activities.
vii. Robotics: The iRobot Corporation has sold over two million robotic vacuum cleaners.
viii. Machine translation: It uses AI technology to translate text from any language into
English, allowing English speakers to communicate efficiently.
Other applications of AI are as follows:
AI in shopping
AI in education
AI in marketing
AI in hospitals
AI in the entertainment industry
AI in Military Application
AI is one of the newest fields in science and engineering.
AI is a general term that implies the use of a computer to model & replicate intelligent
behaviour.
“AI is the design, study & construction of computer programs that behave
intelligently.”
Artificial intelligence (AI) refers to the simulation of human intelligence in machines
that are programmed to think like humans and mimic their actions. The term may also
be applied to any machine that exhibits traits associated with a human mind such as
learning and problem-solving.
The ideal characteristic of artificial intelligence is its ability to rationalize and take
actions that have the best chance of achieving a specific goal.
AI is continuously evolving to benefit many different industries. Machines are wired
using a cross-disciplinary approach based in mathematics, computer science,
linguistics, psychology, and more.
Research in AI focuses on development & analysis of algorithms that learn & perform
intelligent behaviour with minimal human intervention.
AI is the ability of machine or computer program to think and learn.
The concept of AI is based on the idea of building machines capable of thinking, acting and
learning like humans.
AI is the only field that attempts to build machines that will function autonomously in
complex, changing environments.
AI has focused chiefly on the following components of intelligence.
o Learning: learning by trial and error.
o Reasoning: reasoning skills often operate subconsciously and within seconds.
o Decision making: the process of making choices by identifying a decision,
gathering information, and assessing alternative resolutions.
o Problem solving: problem solving, particularly in AI, may be characterized as a
systematic search in order to reach a goal or solution.
Examples of AI:-
1. Alexa
o Alexa's rise to become the smart home's hub has been somewhat meteoric. When
Amazon first introduced Alexa, it took much of the world by storm.
o However, its usefulness and its uncanny ability to decipher speech from anywhere in
the room has made it a revolutionary product that can help us scour the web for
information, shop, schedule appointments, set alarms and a million other things, but
also help power our smart homes and be a conduit for those that might have limited
mobility.
2. Amazon.com
o Amazon's transactional A.I. is something that's been in existence for quite some time,
allowing it to make astronomical amounts of money online.
o With its algorithms refined more and more with each passing year, the company has
gotten acutely smart at predicting just what we're interested in purchasing based on
our online behaviour.
3. Face Detection and Recognition
o Using virtual filters on our face when taking pictures and using face ID for unlocking
our phones are two applications of AI that are now part of our daily lives.
o The former incorporates face detection meaning any human face is identified. The
latter uses face recognition through which a specific face is recognised.
4. Chatbots
o As a customer, getting queries answered can be time-consuming. An artificially
intelligent solution to this is the use of algorithms to train machines to cater to
customers via chatbots.
o This enables machines to answer FAQs, and take and track orders.
5. Social Media
o The advent of social media provided a new narrative to the world with excessive
freedom of speech.
o Various social media applications are using the support of AI to control these
problems and provide users with other entertaining features.
o AI algorithms can spot and swiftly take down posts containing hate speech a lot faster
than humans could. This is made possible through their ability to identify hate
keywords, phrases, and symbols in different languages.
6. E-Payments
o Artificial intelligence has made it possible to deposit cheques from the comfort of
your home. AI is proficient in deciphering handwriting, making online cheque
processing practicable.
o The way fraud can be detected by observing users’ credit card spending patterns is
also an example of artificial intelligence.
What is the purpose of the Turing Test?
I. The Turing Test, proposed by Alan Turing (1950), was designed to provide a
satisfactory operational definition of intelligence. To judge whether a system can act
like a human, Alan Turing designed the test known as the Turing Test.
II. A Turing Test is a method of inquiry in artificial intelligence (AI) for determining
whether or not a computer is capable of thinking like a human being.
III. A computer passes the test if a human interrogator, after posing some written
questions, cannot tell whether the written responses come from a person or from a
computer. Programming a computer to pass a rigorously applied test provides plenty
to work on. The computer would need to possess the following capabilities:
1. Natural language processing to enable it to communicate successfully in
English;
2. Knowledge representation to store what it knows or hears;
3. Automated reasoning to use the stored information to answer questions and
to draw new conclusions;
4. Machine learning to adapt to new circumstances and to detect and extrapolate
patterns.
IV. Turing’s test deliberately avoided direct physical interaction between the interrogator
and the computer, because physical simulation of a person is unnecessary for
intelligence. However, the so-called total Turing Test includes a video signal so that
the interrogator can test the subject’s perceptual abilities, as well as the opportunity
for the interrogator to pass physical objects “through the hatch.” To pass the total
Turing Test, the computer will need
5. Computer vision to perceive objects, and
6. Robotics to manipulate objects and move about.
V. These six disciplines compose most of AI, and Turing deserves credit for designing a
test that remains relevant 60 years later. Yet AI researchers have devoted little effort
to passing the Turing Test, believing that it is more important to study the underlying
principles of intelligence than to duplicate an exemplar.
Explain the concept of agent and environment.
An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators.
A human agent has eyes, ears, and other organs for sensors and hands, legs, vocal
tract, and so on for actuators.
Eyes, ears, nose, skin and tongue sense the environment and are called sensors.
Sensors collect percepts, or inputs, from the environment and pass them to the
processing unit.
Actuators or effectors are the organs or tools with which the agent acts upon the
environment. Once a sensor senses the environment, it gives this information to the
nervous system, which takes the appropriate action with the help of actuators. In the
case of human agents, we have hands and legs as actuators or effectors.
A robotic agent might have cameras and infrared range finders for sensors and various
motors for actuators.
A software agent receives keystrokes, file contents, and network packets as sensory
inputs and acts on the environment by displaying on the screen, writing files, and
sending network packets.
We use the term percept to refer to the agent's perceptual inputs at any given instant. An
agent’s percept sequence is the complete history of everything the agent has ever
perceived.
In general, an agent’s choice of action at any given instant can depend on the entire
percept sequence observed to date, but not on anything it hasn’t perceived.
By specifying the agent’s choice of action for every possible percept sequence, we
have said more or less everything there is to say about the agent. Mathematically
speaking, we say that an agent’s behaviour is described by the agent function that
maps any given percept sequence to an action.
As shown in the figure, there are two squares, A and B, containing some dirt. The vacuum
cleaner agent is supposed to sense the dirt and collect it, thereby making the room clean.
In order to do that, the agent must have a camera to see the dirt and a mechanism to
move forward, backward, left and right to reach the dirt. It should also absorb the
dirt. Based on the percepts, actions will be performed, for example: Move Left, Move
Right, Absorb, No Operation.
Hence the sensors for the vacuum cleaner agent can be a camera and a dirt sensor, and the
actuators can be a motor to make it move and an absorption mechanism. Its behaviour can
be represented by percept-action pairs such as [A, Dirty] → Absorb, [B, Clean] → NoOp, etc.
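The percept-to-action mapping above can be written down as a lookup table. Below is a minimal Python sketch of such a table-driven agent function; the square names and action names follow the example, while the exact table contents are illustrative assumptions:

```python
# A tabular agent function for the two-square vacuum world.
# Each percept is a (location, status) pair, e.g. ("A", "Dirty").
AGENT_TABLE = {
    ("A", "Dirty"): "Absorb",
    ("A", "Clean"): "MoveRight",
    ("B", "Dirty"): "Absorb",
    ("B", "Clean"): "NoOp",
}

def table_driven_agent(percept):
    """Look up the action prescribed for the current percept."""
    return AGENT_TABLE[percept]

print(table_driven_agent(("A", "Dirty")))  # -> Absorb
```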
Types of Environment
I. Fully observable vs. partially observable:
If an agent’s sensors give it access to the complete state of the environment at each
point in time, then we say that the task environment is fully observable.
Fully observable environments are convenient because the agent need not maintain
any internal state to keep track of the world. An environment might be partially
observable because of noisy and inaccurate sensors or because parts of the state are
simply missing from the sensor data.
If the agent has no sensors at all then the environment is unobservable.
II. Single agent vs. multiagent:
An agent solving a crossword puzzle by itself is clearly in a single-agent environment,
while in case of car driving agent, there are multiple agents driving on the road, hence
it’s a multiagent environment.
For example, in chess, the opponent entity B is trying to maximize its performance
measure, which, by the rules of chess, minimizes agent A’s performance measure.
Thus, chess is a competitive multiagent environment.
In the taxi-driving environment, on the other hand, avoiding collisions maximizes the
performance measure of all agents, so it is a partially cooperative multiagent
environment. It is also partially competitive because, for example, only one car can
occupy a parking space.
III. Deterministic vs. stochastic:
If the next state of the environment is completely determined by the current state and
the action executed by the agent, then we say the environment is deterministic;
otherwise, it is stochastic.
If the environment is partially observable, however, then it could appear to be
stochastic.
IV. Episodic vs. sequential:
In an episodic task environment, the agent’s experience is divided into atomic
episodes. In each episode the agent receives a percept and then performs a single
action.
Crucially, the next episode does not depend on the actions taken in previous episodes.
Many classification tasks are episodic.
In sequential environments, on the other hand, the current decision could affect all
future decisions.
Episodic environments are much simpler than sequential environments because the
agent does not need to think ahead.
V. Static vs. dynamic:
If the environment can change while an agent is deliberating, then we say the
environment is dynamic for that agent; otherwise, it is static.
Static environments are easy to deal with because the agent need not keep looking at
the world while it is deciding on an action, nor need it worry about the passage of
time.
Dynamic environments, on the other hand, are continuously asking the agent what it
wants to do; if it hasn’t decided yet, that counts as deciding to do nothing.
If the environment itself does not change with the passage of time but the agent’s
performance score does, then we say the environment is semi-dynamic.
VI. Discrete vs. continuous:
The discrete/continuous distinction applies to the state of the environment, to the way
time is handled, and to the percepts and actions of the agent.
For example, the chess environment has a finite number of distinct states (excluding
the clock).
Chess also has a discrete set of percepts and actions.
Taxi driving is a continuous-state and continuous-time problem: the speed and
location of the taxi and of the other vehicles sweep through a range of continuous
values and do so smoothly over time.
Taxi-driving actions are also continuous (steering angles, etc.). Input from digital
cameras is discrete, strictly speaking, but is typically treated as representing
continuously varying intensities and locations.
VII. Known vs. unknown:
In a known environment, the outcomes for all actions are given; the distinction refers to
the agent's state of knowledge about the "laws of physics" of the environment.
In the case of an unknown environment, for an agent to make a decision, it has to gain
knowledge about how the environment works.
Explain the rational agent approach of AI.
1. A rational agent is one which acts to make the right decision in order to achieve
the best outcome.
2. A rational agent is one that does the right thing; conceptually speaking, every entry in the
table for the agent function is filled out correctly.
3. A general rule says that it is better to design the performance measures according to what
one actually wants in the environment rather than according to how one thinks the agent
should behave.
4. The agent's behaviour is based on the following points:
➢ To achieve high performance.
➢ To achieve the optimized result.
➢ To behave rationally.
5. What is rational at any given time depends upon the following four things:
➢ The performance measure that defines the criteria of success.
➢ The agent's prior knowledge of the environment.
➢ The actions that the agent can perform.
➢ The agent's percept sequence to date.
6. There are four different types of agents, which are as follows:
i. Simple reflex agent
ii. Model based agent
iii. Goal based agent
iv. Utility based agent
Rational Agent:
For each possible percept sequence, a rational agent should select an action that is expected to
maximize its performance measure, based on the evidence provided by the percept sequence
and whatever built-in knowledge the agent has.
1. The concept of rational agents is central to our approach to artificial intelligence.
2. Rationality is distinct from omniscience (all-knowing with infinite knowledge)
3. Agents can perform actions in order to modify future percepts so as to obtain useful
information (information gathering, exploration)
4. An agent is autonomous if its behaviour is determined by its own percepts and experience
(with the ability to learn and adapt) without depending solely on built-in knowledge.
5. A rational agent is one that does the right thing—conceptually speaking, every entry in
the table for the agent function is filled out correctly. Obviously, doing the right thing is
better than doing the wrong thing, but what does it mean to do the right thing?
6. If the sequence is desirable, then the agent has performed well. This notion of desirability
is captured by a performance measure that evaluates any given sequence of environment
states.
7. For every percept sequence a built-in knowledge base is updated, which is very useful for
decision making, because it stores the consequences of performing some particular action.
8. If the consequences lead to the desired goal, then we get a good performance measure
factor; if the consequences do not lead to the desired goal state, then we get a poor
performance measure factor.
For example: if an agent hurts its finger while using a nail and hammer, then the next
time it uses them the agent will be more careful, and the probability of not getting hurt
will increase. In short, the agent will be able to use the hammer and nail more efficiently.
9. A rational agent should be autonomous—it should learn what it can to compensate for
partial or incorrect prior knowledge.
10. A rational agent should not only gather information but also learn as much as possible
from what it perceives.
11. After sufficient experience of its environment, the behaviour of a rational agent can
become effectively independent of its prior knowledge. Hence, the incorporation of
learning allows one to design a single rational agent that will succeed in a vast variety of
environments.
12. What is rational at any given time depends on four things:
The performance measure that defines the criterion of success.
The agent’s prior knowledge of the environment.
The actions that the agent can perform.
The agent’s percept sequence to date.
Acting rationally: The rational agent approach
An agent is just something that acts (agent comes from the Latin agere, to do). Of
course, all computer programs do something, but computer agents are expected to do
more: operate autonomously, perceive their environment, persist over a prolonged
time period, adapt to change, and create and pursue goals.
A rational agent is one that acts so as to achieve the best outcome or, when there is
uncertainty, the best expected outcome. In some situations, there is no provably
correct thing to do, but something must still be done. There are also ways of acting
rationally that cannot be said to involve inference. For example, recoiling from a hot
stove is a reflex action that is usually more successful than a slower action taken after
careful deliberation.
All the skills needed for the Turing Test also allow an agent to act rationally.
Knowledge representation and reasoning enable agents to reach good decisions. We
need to be able to generate comprehensible sentences in natural language to get by in
a complex society. We need learning not only for erudition, but also because it
improves our ability to generate effective behaviour.
The rational-agent approach has two advantages over the other approaches. First, it is
more general than the “laws of thought” approach because correct inference is just
one of several possible mechanisms for achieving rationality. Second, it is more
amenable to scientific development than are approaches based on human behaviour or
human thought. The standard of rationality is mathematically well defined and
completely general, and can be “unpacked” to generate agent designs that provably
achieve it.
One important point to keep in mind: We will see before too long that achieving
perfect rationality—always doing the right thing—is not feasible in complicated
environments.
Explain the working of simple reflex agent.
An agent that performs actions based only on the current state/input, ignoring all the
previous states of the environment, is called a simple reflex agent.
• The simple reflex agent is the most uncomplicated type of agent.
• The simple reflex agent functions based on a situation and its corresponding
action.
• The agent relies on a condition-action rule for performing any action.
• If the condition is true, then the matching action is performed without considering
previous history.
• E.g.: a robotic vacuum cleaner for home use.
I. The simplest kind of agent is the simple reflex agent. These agents select actions on
the basis of the current percept, ignoring the rest of the percept history.
II. A simple reflex agent is the most basic of the intelligent agents out there. It performs
actions based on a current situation. When something happens in the environment of a
simple reflex agent, the agent quickly scans its knowledge base for how to respond to
the situation at-hand based on pre-determined rules.
III. It would be like a home thermostat recognizing that if the temperature increases to 75
degrees in the house, the thermostat is prompted to kick on. It doesn’t need to know
what happened with the temperature yesterday or what might happen tomorrow.
Instead, it operates based on the idea that if _____ happens, _____ is the response.
IV. Simple reflex agents are just that: simple. They cannot compute complex equations or
solve complicated problems. They work only in environments that are fully observable
in the current percept, ignoring any percept history. If you have a smart
light bulb, for example, set to turn on at 6 p.m. every night, the light bulb will not
recognize how the days are longer in summer and the lamp is not needed until much
later. It will continue to turn the lamp on at 6 p.m. because that is the rule it follows.
Simple reflex agents are built on the condition-action rule.
V. Simple reflex behaviors occur even in more complex environments. Imagine yourself
as the driver of the automated taxi. If the car in front brakes and its brake lights come
on, then you should notice this and initiate braking. In other words, some processing
is done on the visual input to establish the condition we call “The car in front is
braking.” Then, this triggers some established connection in the agent program to the
action “initiate braking.” We call such a connection a condition–action rule, written
as
if car-in-front-is-braking then initiate-braking.
VI. For example, the vacuum agent is a simple reflex agent, because its decision is based
only on the current location and on whether that location contains dirt.
An agent program for this agent is shown below:
function REFLEX-VACUUM-AGENT([location,status]) returns an action
if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left
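The pseudocode above translates directly into Python. A minimal sketch, assuming percepts are (location, status) pairs as in the example:

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent for the two-square vacuum world.

    Decisions use only the current percept, never the percept history.
    """
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))  # -> Left
```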
Unit 2
VIII. The problem is to get from Sibiu to Bucharest. The successors of Sibiu are Rimnicu
Vilcea and Fagaras, with costs 80 and 99, respectively. The least-cost node, Rimnicu
Vilcea, is expanded next, adding Pitesti with cost 80 + 97=177.
IX. The least-cost node is now Fagaras, so it is expanded, adding Bucharest with cost
99+211=310. Now a goal node has been generated, but uniform-cost search keeps
going, choosing Pitesti for expansion and adding a second path to Bucharest with cost
80+97+101= 278. Now the algorithm checks to see if this new path is better than the
old one; it is, so the old one is discarded. Bucharest, now with g-cost 278, is selected
for expansion and the solution is returned.
X. It is easy to see that uniform-cost search is optimal in general. Uniform-cost search
does not care about the number of steps a path has, but only about their total cost.
Uniform-cost search is guided by path costs rather than depths, so its complexity is
not easily characterized in terms of b and d.
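The trace above can be reproduced with a few lines of code. A sketch of uniform-cost search using Python's heapq; the graph below contains only the road segments and costs quoted in the example:

```python
import heapq

# Road costs from the Romania example quoted above.
GRAPH = {
    "Sibiu": [("Rimnicu Vilcea", 80), ("Fagaras", 99)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Fagaras": [("Bucharest", 211)],
    "Pitesti": [("Bucharest", 101)],
    "Bucharest": [],
}

def uniform_cost_search(start, goal):
    """Expand nodes in order of path cost g."""
    frontier = [(0, start, [start])]
    explored = set()
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:        # goal test when the node is selected
            return g, path      # for expansion, not when it is generated
        if node in explored:
            continue
        explored.add(node)
        for succ, cost in GRAPH[node]:
            heapq.heappush(frontier, (g + cost, succ, path + [succ]))

print(uniform_cost_search("Sibiu", "Bucharest"))
# -> (278, ['Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```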
Explain the working mechanism of Genetic Algorithm.
V. In each case, the algorithm reaches a point at which no progress is being made.
Starting from a randomly generated 8-queens state, steepest-ascent hill climbing gets
stuck 86% of the time, solving only 14% of problem instances. It works quickly,
taking just 4 steps on average when it succeeds and 3 when it gets stuck—not bad for
a state space with 8^8 ≈ 17 million states.
VI. If we always allow sideways moves when there are no uphill moves, an infinite loop
will occur whenever the algorithm reaches a flat local maximum that is not a
shoulder. One common solution is to put a limit on the number of consecutive
sideways moves allowed. For example, we could allow up to, say, 100 consecutive
sideways moves in the 8-queens problem. This raises the percentage of problem
instances solved by hill climbing from 14% to 94%. Success comes at a cost: the
algorithm averages roughly 21 steps for each successful instance and 64 for each
failure.
VII. For 8-queens instances with no sideways moves allowed, p ≈ 0.14, so we need
roughly 7 iterations to find a goal (6 failures and 1 success). The expected number of
steps is the cost of one successful iteration plus (1−p)/p times the cost of failure, or
roughly 22 steps in all. When we allow sideways moves, 1/0.94 ≈ 1.06 iterations are
needed on average and (1×21) + (0.06/0.94)×64 ≈ 25 steps.
VIII. For 8-queens, then, random-restart hill climbing is very effective indeed. Even for
three million queens, the approach can find solutions in under a minute.
IX. The success of hill climbing depends very much on the shape of the state-space
landscape: if there are few local maxima and plateaux, random-restart hill climbing
will find a good solution very quickly.
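Below is a compact sketch of steepest-ascent hill climbing with random restarts on 8-queens, the setting the statistics above refer to. The conflict-counting heuristic and restart loop are standard, but this is an illustrative sketch, not the exact experimental setup behind those numbers:

```python
import random

N = 8

def conflicts(state):
    """Number of attacking queen pairs; state[c] = row of queen in column c."""
    count = 0
    for c1 in range(N):
        for c2 in range(c1 + 1, N):
            if state[c1] == state[c2] or abs(state[c1] - state[c2]) == c2 - c1:
                count += 1
    return count

def hill_climb(state):
    """Steepest-ascent hill climbing; stops at a local minimum of conflicts."""
    while True:
        best, best_h = None, conflicts(state)
        for col in range(N):
            for row in range(N):
                if row != state[col]:
                    neighbor = state[:col] + [row] + state[col + 1:]
                    h = conflicts(neighbor)
                    if h < best_h:
                        best, best_h = neighbor, h
        if best is None:        # no improving move: local maximum reached
            return state
        state = best

def random_restart():
    """Restart from random states until a conflict-free solution is found."""
    while True:
        state = hill_climb([random.randrange(N) for _ in range(N)])
        if conflicts(state) == 0:
            return state

print(random_restart())
```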
Explain the working of AND-OR search tree.
I. An and–or tree is a graphical representation of the reduction of problems (or goals)
to conjunctions and disjunctions of subproblems (or subgoals).
II. In a deterministic environment, the only branching is introduced by the agent’s own
choices in each state. We call these nodes OR nodes. In the vacuum world, for
example, at an OR node the agent chooses Left or Right or Suck. In a
nondeterministic environment, branching is also introduced by the environment’s
choice of outcome for each action. We call these nodes AND nodes.
III. For example, the Suck action in state 1 leads to a state in the set {5, 7}, so the agent
would need to find a plan for state 5 and for state 7. These two kinds of nodes
alternate, leading to an AND–OR tree as illustrated in the figure.
IV. A solution for an AND–OR search problem is a sub tree that (1) has a goal node at
every leaf, (2) specifies one action at each of its OR nodes, and (3) includes every
outcome branch at each of its AND nodes.
V. The solution is shown in bold lines in the figure; it corresponds to the plan given in
the equation. (The plan uses if–then–else notation to handle the AND branches, but when
there are more than two branches at a node, it might be better to use a case construct.)
Modifying the basic problem-solving agent to execute contingent solutions of this kind
is straightforward. One may also consider a somewhat different agent design, in which
the agent can act before it has found a guaranteed plan and deals with some
contingencies only as they arise during execution.
VI. One key aspect of the algorithm is the way in which it deals with cycles, which often
arise in nondeterministic problems (e.g., if an action sometimes has no effect or if an
unintended effect can be corrected). If the current state is identical to a state on the
path from the root, then it returns with failure. This doesn’t mean that there is no
solution from the current state; it simply means that if there is a noncyclic solution, it
must be reachable from the earlier incarnation of the current state, so the new
incarnation can be discarded.
VII. With this check, we ensure that the algorithm terminates in every finite state space,
because every path must reach a goal, a dead end, or a repeated state. Notice that the
algorithm does not check whether the current state is a repetition of a state on some
other path from the root, which is important for efficiency.
VIII. AND–OR graphs can also be explored by breadth-first or best-first methods.
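A minimal sketch of the recursive AND-OR search described above. The transition table is a hypothetical toy model (only the Suck outcome set {5, 7} is taken from the example): OR-search picks one action per state, AND-search demands a plan for every possible outcome, and the cycle check returns failure on repeated states along a path:

```python
# Hypothetical nondeterministic transition model (illustrative only):
# RESULTS[(state, action)] is the SET of states the action may lead to.
RESULTS = {
    (1, "Suck"): {5, 7},
    (5, "Right"): {6},
    (6, "Suck"): {8},
}
GOALS = {7, 8}

def or_search(state, path):
    """Return a conditional plan from `state`, or None on failure."""
    if state in GOALS:
        return []
    if state in path:                  # cycle: give up on this path
        return None
    for (s, action), outcomes in RESULTS.items():   # OR: choose an action
        if s != state:
            continue
        subplans = and_search(outcomes, path + [state])
        if subplans is not None:
            return [action, subplans]  # do action, then branch on outcome
    return None

def and_search(states, path):
    """AND: find a plan for EVERY possible outcome state."""
    plans = {}
    for s in states:
        plan = or_search(s, path)
        if plan is None:
            return None
        plans[s] = plan
    return plans

print(or_search(1, []))
# -> ['Suck', {5: ['Right', {6: ['Suck', {8: []}]}], 7: []}]
```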
Unit 3
Write the minimax algorithm. Explain in short.
I. Minimax is a kind of backtracking algorithm that is used in decision making and
game theory to find the optimal move for a player, assuming that your opponent also
plays optimally. It is widely used in two player turn-based games such as Tic-Tac-
Toe, Backgammon, Mancala, Chess, etc.
II. In Minimax the two players are called maximizer and minimizer.
The maximizer tries to get the highest score possible while the minimizer tries to do
the opposite and get the lowest score possible.
III. It uses a simple recursive computation of the minimax values of each successor state,
directly implementing the defining equations. The recursion proceeds all the way
down to the leaves of the tree, and then the minimax values are backed up through
the tree as the recursion unwinds.
IV. The minimax algorithm performs a complete depth-first exploration of the game tree.
If the maximum depth of the tree is m and there are b legal moves at each point, then
the time complexity of the minimax algorithm is O(b^m).
V. The space complexity is O(bm) for an algorithm that generates all actions at once, or
O (m) for an algorithm that generates actions one at a time. For real games, of course,
the time cost is totally impractical, but this algorithm serves as the basis for the
mathematical analysis of games and for more practical algorithms.
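A minimal recursive implementation of minimax over an explicit game tree (the tree values below are illustrative):

```python
def minimax(node, maximizing):
    """Depth-first minimax; leaves are ints, internal nodes are lists."""
    if isinstance(node, int):            # terminal node: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A depth-2 game tree: MAX to move at the root, MIN at the next level.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))   # -> 3
```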
Working of Min-Max Algorithm:
The working of the minimax algorithm can be easily described using an example.
Below we have taken an example of a game tree which represents a two-player
game.
In this example, there are two players, one called Maximizer and the other called
Minimizer.
Maximizer will try to get the Maximum possible score, and Minimizer will try to
get the minimum possible score.
This algorithm applies DFS, so in this game-tree, we have to go all the way
through the leaves to reach the terminal nodes.
At the terminal nodes, the terminal values are given, so we will compare those
values and backtrack the tree until the initial state is reached. Following are the main
steps involved in solving the two-player game tree:
First, we need to replace the single value for each node with a vector of values.
For example, in a three-player game with players A, B, and C, a vector (vA, vB,
vC) is associated with each node. For terminal states, this vector gives the utility
of the state from each player’s viewpoint. (In two-player, zero-sum games, the
two-element vector can be reduced to a single value because the values are always
opposite.)
The simplest way to implement this is to have the UTILITY function return a
vector of utilities. Now we have to consider nonterminal states.
Consider the node marked X in the game tree shown in the figure. In that state, player
C chooses what to do. The two choices lead to terminal states with utility vectors
(vA =1, vB =2, vC =6) and (vA =4, vB =2, vC =3). Since 6 is bigger than 3, C
should choose the first move. This means that if state X is reached, subsequent
play will lead to a terminal state with utilities (vA =1, vB =2, vC =6). Hence, the
backed-up value of X is this vector. The backed-up value of a node n is always the
utility vector of the successor state with the highest value for the player choosing
at n.
Anyone who plays multiplayer games, such as Diplomacy, quickly becomes
aware that much more is going on than in two-player games. Multiplayer games
usually involve alliances, whether formal or informal, among the players.
Alliances are made and broken as the game proceeds. Are alliances a natural consequence
of optimal strategies for each player in a multiplayer game? It turns out that they
can be. For example, suppose A and B
are in weak positions and C is in a stronger position. Then it is often optimal for
both A and B to attack C rather than each other, lest C destroy each of them
individually. In this way, collaboration emerges from purely selfish behaviour.
If the game is not zero-sum, then collaboration can also occur with just two
players. Suppose, for example, that there is a terminal state with utilities (vA = 1000,
vB = 1000) and that 1000 is the highest possible utility for each player.
Then the optimal strategy is for both players to do everything possible to reach
this state—that is, the players will automatically cooperate to achieve a mutually
desirable goal.
Explain alpha-beta pruning with suitable example.
I. Alpha-beta pruning is a modified version of the minimax algorithm. It is an
optimization technique for the minimax algorithm.
II. As we have seen in the minimax search algorithm, the number of game states it
has to examine is exponential in the depth of the tree. We cannot eliminate the
exponent, but we can effectively cut it in half. Hence there is a technique by which,
without checking each node of the game tree, we can compute the correct minimax
decision; this technique is called pruning. It involves two threshold parameters, alpha
and beta, for future expansion, so it is called alpha-beta pruning. It is also called
the Alpha-Beta Algorithm.
III. Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes
not only the tree leaves but also entire subtrees.
IV. The two parameters can be defined as:
a. Alpha: The best (highest-value) choice we have found so far at any point
along the path of Maximizer. The initial value of alpha is -∞.
b. Beta: The best (lowest-value) choice we have found so far at any point along
the path of Minimizer. The initial value of beta is +∞.
VI. Alpha-beta pruning applied to a standard minimax algorithm returns the same move as
the standard algorithm does, but it removes all the nodes that do not really affect
the final decision and only make the algorithm slow. Hence, by pruning these nodes,
it makes the algorithm fast.
VII. When applied to a standard minimax tree, it returns the same move as minimax
would, but prunes away branches that cannot possibly influence the final decision.
VIII. Alpha–beta pruning can be applied to trees of any depth, and it is often possible to
prune entire subtrees rather than just leaves.
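A sketch of minimax with alpha-beta cutoffs, again over an explicit tree (leaf values illustrative); the cutoff tests implement the pruning condition (alpha >= beta) used in the example that follows:

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning; leaves are ints, inner nodes lists."""
    if isinstance(node, int):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:      # beta cutoff: MIN will never allow this
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:      # alpha cutoff: MAX will never allow this
                break
        return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 3
```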
Example of alpha beta pruning
We will first start with the initial move. We will initially define the alpha and beta
values as the worst case i.e. α = -∞ and β= +∞. We will prune the node only when
alpha becomes greater than or equal to beta.
Since the initial value of alpha is less than beta, we do not prune. Now it is MAX's
turn. So, at node D, the value of alpha will be calculated. The value of alpha at node D
will be max(2, 3) = 3.
Now the next move will be at node B, and it is MIN's turn. So, at node B, the value of
beta will be min(3, +∞) = 3. So, at node B, alpha = -∞ and beta = 3.
In the next step, the algorithm traverses the next successor of node B, which is node E,
and the values α = -∞ and β = 3 will be passed down.
Now it is MAX's turn. So, at node E we look for a maximum. The current value of
alpha at E is -∞, and it is compared with 5: max(-∞, 5) = 5. So, at node E, alpha = 5
and beta = 3. Now alpha is greater than or equal to beta, which satisfies the pruning
condition, so we can prune the right successor of node E; it will not be traversed, and
the value at node E will be 5.
In the next step the algorithm again comes back to node A from node B. At node A, alpha
will be changed to the maximum value, max(-∞, 3) = 3. So now the values of alpha and
beta at node A will be (3, +∞) respectively and will be passed down to node C. The
same values will be transferred to node F.
At node F the value of alpha is compared with the left child, which is 0: max(0, 3) = 3.
It is then compared with the right child, which is 1: max(3, 1) = 3. So α remains 3, but
the node value of F becomes 1.
Now node F returns its value 1 to C, where it is compared with the beta value.
It is MIN's turn, so min(+∞, 1) = 1. Now at node C, α = 3 and β = 1, and
alpha is greater than beta, which again satisfies the pruning condition. So the next
successor of node C, i.e. G, will be pruned, and the algorithm does not compute the
entire subtree G.
Now C returns the node value 1 to A, and the best value at A will be max(1, 3) = 3.
The tree above is the final tree, showing which nodes were computed and which were
not. So, for this example, the optimal value for the maximizer is 3.
Explain the resolution theorem in brief.
I. The process of forming an inferred clause or resolving from the parent clauses is
called resolution.
II. This method demonstrates that the theorem being false causes an inconsistency with
the axioms, hence the theorem must have been true all along. It uses only one rule of
deduction, used to combine two parent clauses into a resolved clause.
III. We can express the full resolution rule of inference concisely using 'big ∨' notation:
the 'big ∨' is just a more concise way of writing clauses, where underneath the ∨ we
specify a set of indices for the literals L. For example, if A = {1, 2, 7} then the first
parent clause is L1 ∨ L2 ∨ L7. (We can use a similar 'big ∧' notation to express
conjunctions.) The rule resolves literals ¬Pj (a negative literal) and Pk (a positive
literal). We just remove j and k from the sets of indices to get the resolved clause.
IV. We repeatedly resolve clauses until eventually two sentences resolve together to give
the empty clause, which contains no literals.
Initial State: A knowledge base (KB) consisting of the negated theorem and the
axioms in CNF.
Operators: The full resolution rule of inference picks two sentences from KB
and adds a new sentence.
Goal Test: Does KB contain False?
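This search formulation can be implemented directly for propositional clauses. A minimal sketch, with clauses as frozensets of string literals and negation written as '~' (the FOL Socrates example below additionally needs unification, so here it is propositionalized):

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (sets of literals)."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

def resolution_refutation(kb):
    """True iff the empty clause is derivable (KB is inconsistent)."""
    clauses = set(kb)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:           # empty clause: contradiction found
                        return True
                    new.add(frozenset(r))
        if new <= clauses:              # no progress: KB is satisfiable
            return False
        clauses |= new

# Propositionalized Socrates example from the text:
kb = [frozenset({"man_socrates"}),
      frozenset({"~man_socrates", "mortal_socrates"}),
      frozenset({"~mortal_socrates"})]
print(resolution_refutation(kb))  # -> True
```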
V. We illustrate the concept of a resolution search space with the simple example from
Aristotle we've seen before. Apparently, all men are mortal and Socrates was a man.
Given these words of wisdom, we want to prove that Socrates is mortal. We saw how
this could be achieved using the Modus Ponens rule, and it is instructive to use
Resolution to prove this as well.
The initial KB (including the negated theorem) in CNF is:
1) is_man(socrates)
2) ¬is_man(X) ∨ is_mortal(X)
3) ¬is_mortal(socrates)
We can apply resolution to get TWO different solutions. The first alternative is that
we combine (1) and (2) to get the state A:
1) is_man(socrates)
2) ¬is_man(X) ∨ is_mortal(X)
3) ¬is_mortal(socrates)
4) is_mortal(socrates)
Then combine (3) and (4) to get the state B:
1) is_man(socrates)
2) ¬is_man(X) ∨ is_mortal(X)
3) ¬is_mortal(socrates)
4) is_mortal(socrates)
5) False
Alternatively, we could initially combine (2) and (3) to get the state C:
1) is_man(socrates)
2) ¬is_man(X) ∨ is_mortal(X)
3) ¬is_mortal(socrates)
4) ¬is_man(socrates)
We then resolve again to get state D:
1) is_man(socrates)
2) ¬is_man(X) ∨ is_mortal(X)
3) ¬is_mortal(socrates)
4) ¬is_man(socrates)
5) False
So, we have a search space with two alternative paths to a solution: Initial --> A --> B
and Initial --> C --> D.
VI. Instead, it is often more convenient to visualise the developing proof. On the top line
we write the clauses of our initial KB, and we draw lines from the two parent clauses
to the new clause, indicating what substitution was required, if any. Repeating this
process for each step, we get a proof tree. Here's the finished proof tree for the path
Initial --> A --> B in our example above:
And here's the proof tree for the alternative path Initial --> C --> D:
VII. Complex proofs require a bit of effort to lay out, and it is usually best not to write out
all the initial clauses on the top line to begin with, but rather to introduce them into the
tree as they are required.
VIII. Resolution proof trees make it easier to reconstruct a proof. Considering the latter tree,
we can read the proof by working backwards from False. We could read the proof to
Aristotle thus:
IX. "You said that all men were mortal. That means that for all things X, either X is not a
man, or X is mortal [CNF step]. If we assume that Socrates is not mortal, then, given
your previous statement, this means Socrates is not a man [first resolution step]. But
you said that Socrates is a man, which means that our assumption was false [second
resolution step], so Socrates must be mortal."
X. We see that, even in this simple case, it is difficult to translate the resolution proof
into a human readable one. Due to the popularity of resolution theorem proving, and
the difficulty with which humans read the output from the provers, there have been
some projects to translate resolution proofs into a more human readable format. As an
exercise, generate the proof you would give to Aristotle from the first proof tree.
XI. In the slides accompanying these notes is an example taken from Russell and Norvig
about a cat called Tuna being killed by Curiosity. We will work through this example
in the lecture.
Write a note on Wumpus world problem.
I. The wumpus world is a cave consisting of rooms connected by passageways.
Lurking somewhere in the cave is the terrible wumpus, a beast that eats anyone who
enters its room. The wumpus can be shot by an agent, but the agent has only one
arrow. Some rooms contain bottomless pits that will trap anyone who wanders into
these rooms (except for the wumpus, which is too big to fall in). The only mitigating
feature of this bleak environment is the possibility of finding a heap of gold. Although
the wumpus world is rather tame by modern computer game standards, it illustrates
some important points about intelligence.
II. A sample wumpus world is shown in the figure. The precise definition of the task
environment is given by the PEAS description:
Performance measure: +1000 for climbing out of the cave with the gold, –
1000 for falling into a pit or being eaten by the wumpus, –1 for each action
taken and –10 for using up the arrow. The game ends either when the agent
dies or when the agent climbs out of the cave.
Environment: A 4×4 grid of rooms. The agent always starts in the square
labelled [1, 1], facing to the right. The locations of the gold and the wumpus
are chosen randomly, with a uniform distribution, from the squares other than
the start square. In addition, each square other than the start can be a pit, with
probability 0.2.
Actuators: The agent can move Forward, TurnLeft by 90◦, or TurnRight by
90◦. The agent dies a miserable death if it enters a square containing a pit or a
live wumpus. (It is safe, albeit smelly, to enter a square with a dead wumpus.)
If an agent tries to move forward and bumps into a wall, then the agent does
not move. The action Grab can be used to pick up the gold if it is in the same
square as the agent. The action Shoot can be used to fire an arrow in a straight
line in the direction the agent is facing. The arrow continues until it either hits
(and hence kills) the wumpus or hits a wall. The agent has only one arrow, so
only the first Shoot action has any effect. Finally, the action Climb can be
used to climb out of the cave, but only from square [1, 1].
Sensors: The agent has five sensors, each of which gives a single bit of
information:
– In the square containing the wumpus and in the directly (not diagonally)
adjacent squares, the agent will perceive a Stench.
– In the squares directly adjacent to a pit, the agent will perceive a Breeze.
– In the square where the gold is, the agent will perceive a Glitter.
– When an agent walks into a wall, it will perceive a Bump.
– When the wumpus is killed, it emits a woeful Scream that can be perceived
anywhere in the cave.
III. The percepts will be given to the agent program in the form of a list of five symbols;
for example, if there is a stench and a breeze, but no glitter, bump, or scream, the
agent program will get [Stench, Breeze, None, None, None]. We can characterize the
wumpus environment along the various dimensions. Clearly, it is discrete, static, and
single-agent. (The wumpus doesn't move, fortunately.) It is sequential, because rewards may
come only after many actions are taken. It is partially observable, because some
aspects of the state are not directly perceivable: the agent’s location, the wumpus’s
state of health, and the availability of an arrow.
IV. As for the locations of the pits and the wumpus: we could treat them as unobserved
parts of the state that happen to be immutable—in which case, the transition model for
the environment is completely known; or we could say that the transition model itself
is unknown because the agent doesn’t know which Forward actions are fatal—in
which case, discovering the locations of pits and wumpus completes the agent’s
knowledge of the transition model.
Exploring the problem of wumpus world:
I. We use an informal knowledge representation language consisting of writing down
symbols in a grid (as in Figures 1 and 2). The agent's initial knowledge base contains
the rules of the environment, as described previously; in particular, it knows that it is
in [1, 1] and that [1, 1] is a safe square; we denote that with an “A” and “OK,”
respectively, in square [1, 1].
II. The first percept is [None, None, None, None, None], from which the agent can
conclude that its neighboring squares, [1, 2] and [2, 1], are free of dangers—they are
OK. Figure 1(a) shows the agent’s state of knowledge at this point. A cautious agent
will move only into a square that it knows to be OK. Let us suppose the agent decides
to move forward to [2, 1]. The agent perceives a breeze (denoted by “B”) in [2, 1], so
there must be a pit in a neighboring square. The pit cannot be in [1, 1], by the rules of
the game, so there must be a pit in [2, 2] or [3, 1] or both. The notation “P?” in Figure
1(b) indicates a possible pit in those squares. At this point, there is only one known
square that is OK and that has not yet been visited. So the prudent agent will turn
around, go back to [1, 1], and then proceed to [1, 2].
III. The agent perceives a stench in [1, 2], resulting in the state of knowledge shown in
Figure 2(a). The stench in [1, 2] means that there must be a wumpus nearby. But the
wumpus cannot be in [1, 1], by the rules of the game, and it cannot be in [2, 2] (or the
agent would have detected a stench when it was in [2, 1]). Therefore, the agent can
infer that the wumpus is in [1, 3]. The notation W! indicates this inference. Moreover,
the lack of a breeze in [1, 2] implies that there is no pit in [2, 2]. Yet the agent has
already inferred that there must be a pit in either [2, 2] or [3, 1], so this means it must
be in [3, 1]. This is a fairly difficult inference, because it combines knowledge gained
at different times in different places and relies on the lack of a percept to make one
crucial step.
IV. The agent has now proved to itself that there is neither a pit nor a wumpus in [2, 2], so
it is OK to move there. We do not show the agent’s state of knowledge at [2, 2]; we
just assume that the agent turns and moves to [2, 3], giving us Figure 2(b). In [2, 3],
the agent detects a glitter, so it should grab the gold and then return home.
V. Note that in each case for which the agent draws a conclusion from the available
information, that conclusion is guaranteed to be correct if the available information is
correct.
VI. This is a fundamental property of logical reasoning. In the rest of this chapter, we
describe how to build logical agents that can represent information and draw
conclusions such as those described in the preceding paragraphs.
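The pit inference in step III can be checked mechanically by enumerating models, as in the small sketch below. Only the two candidate pit squares left open by the example's percepts are enumerated, and the adjacency facts are hard-coded from the text:

```python
from itertools import product

# Candidate pit squares left open by the percepts in the example.
candidates = [(2, 2), (3, 1)]

def consistent(pits):
    """A pit assignment is consistent with the percepts from the text:
    Breeze in [2,1], and no Breeze in [1,2]."""
    breeze_21 = (2, 2) in pits or (3, 1) in pits  # open neighbors of [2,1]
    breeze_12 = (2, 2) in pits                    # open neighbor of [1,2]
    return breeze_21 and not breeze_12

# Enumerate every assignment of pit / no-pit to the candidate squares.
models = []
for bits in product([False, True], repeat=len(candidates)):
    pits = {sq for sq, has_pit in zip(candidates, bits) if has_pit}
    if consistent(pits):
        models.append(pits)

print(models)  # -> [{(3, 1)}]: every consistent model puts the pit in [3,1]
```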
Unit 4
Explain universal and existential quantifier with suitable example.
A logical quantifier makes an assertion about all (or some) values of a given variable in a formula.
First-order logic contains two standard quantifiers, called universal and existential.
1. Universal quantifier
The symbol ∀ is called the universal quantifier.
It expresses the fact that, in a particular universe of discourse, all objects have a
particular property.
o ∀x means: for all objects x, it is true that ...
∀ is usually pronounced "For all . . .". (Remember that the upside-down A stands for
"all.")
That is: ∀x King(x) ⇒ Person(x).
Thus, the sentence says, "For all x, if x is a king, then x is a person." The symbol x is
called a variable. By convention, variables are lowercase letters. A variable is a term
all by itself, and as such can also serve as the argument of a function—for example,
LeftLeg(x). A term with no variables is called a ground term.
The universal quantifier can be considered as a repeated conjunction: suppose our
universe of discourse consists of the objects X1, X2, X3, and so on; then ∀x P(x) is
equivalent to the conjunction P(X1) ∧ P(X2) ∧ P(X3) ∧ ...
2. Existential quantifier
The symbol ∃ is called the existential quantifier.
It expresses the fact that, in a particular universe of discourse, there exists (at least
one) object having a particular property.
That is: ∃x means: there exists at least one object x such that ...
To say, for example, that King John has a crown on his head, we write
∃x Crown(x) ∧ OnHead(x, John).
∃x is pronounced “There exists an x such that . . .” or “For some x . . .”
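Over a finite universe of discourse, the two quantifiers behave exactly like Python's all() (repeated conjunction) and any() (repeated disjunction). A tiny illustration with made-up predicates:

```python
# A finite universe and two illustrative predicates.
universe = ["John", "Richard", "Excalibur"]
is_king = {"John": True, "Richard": False, "Excalibur": False}
is_person = {"John": True, "Richard": True, "Excalibur": False}

# Universal: for all x, King(x) => Person(x)  (implication: not A, or B)
print(all(not is_king[x] or is_person[x] for x in universe))   # -> True

# Existential: there exists an x such that King(x) and Person(x)
print(any(is_king[x] and is_person[x] for x in universe))      # -> True
```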
What is first order logic? Discuss the different elements used in first order logic.
I. First-order logic is another way of knowledge representation in artificial intelligence.
It is an extension to propositional logic. FOL is sufficiently expressive to represent the
natural language statements in a concise way.
II. First-order logic is also known as Predicate logic or First-order predicate logic.
First-order logic is a powerful language that develops information about the objects in
an easier way and can also express the relationship between those objects.
III. First-order logic (like natural language) does not only assume that the world contains
facts like propositional logic but also assumes the following things in the world:
o Objects: A, B, people, numbers, colors, wars, theories, squares, pits,
wumpus,…
o Relations: It can be a unary relation such as: red, round, is adjacent; or an n-ary
relation such as: the sister of, brother of, has color, comes between
o Function: Father of, best friend, third inning of, end of,…
IV. As a natural language, first-order logic also has two main parts:
o Syntax
o Semantics
V. Basic Elements of First-order logic:
o Following are the basic elements of FOL syntax:
VI. Atomic sentences:
1. Atomic sentence
An atomic sentence (or atom for short) is formed from a predicate symbol
optionally followed by a parenthesized list of terms, such as Brother (Richard,
John).
This states, under the intended interpretation given earlier, that Richard the
Lionheart is the brother of King John. Atomic sentences can have complex
terms as arguments. Thus, Married (Father (Richard), Mother (John)) states
that Richard the Lionheart’s father is married to King John’s mother (again,
under a suitable interpretation).
An atomic sentence is true in a given model if the relation referred to by the
predicate symbol holds among the objects referred to by the arguments.
Atomic Sentence = Predicate(term1, ..., termn) or term1 = term2
An atomic sentence is formed from a predicate symbol followed by a list of
terms.
Examples:-
o LargerThan(2, 3) is false.
o Brother_of(Mary,Pete) is false.
o Married(Father(Richard),Mother(John)) could be true or false.
o Atomic sentences are the most basic sentences of first-order logic. These sentences
are formed from a predicate symbol followed by a parenthesis with a sequence of
terms.
o We can represent atomic sentences as Predicate(term1, term2, ..., termn).
o Example: Ravi and Ajay are brothers: => Brothers(Ravi, Ajay).
Chinky is a cat: => cat (Chinky).
VII. Complex Sentences:
We can use logical connectives to construct more complex sentences, with the
same syntax and semantics as in propositional calculus.
Here are four sentences that are true in the model of Figure under our intended
interpretation:
o ¬Brother (LeftLeg(Richard), John)
o Brother (Richard , John) ∧ Brother (John,Richard)
o King(Richard ) ∨ King(John)
o ¬King(Richard) ⇒ King(John) .
o Complex sentences are made by combining atomic sentences using connectives.
VIII. First-order logic statements can be divided into two parts:
o Subject: Subject is the main part of the statement.
o Predicate: A predicate can be defined as a relation, which binds two atoms together
in a statement.
Consider the statement "x is an integer.": it consists of two parts; the first part, x, is
the subject of the statement, and the second part, "is an integer," is known as the predicate.
Convert the following natural sentences into FOL form.
1) Virat is a software engineer.
Ans. SoftwareEngineer(Virat)
2) All vehicles have wheels.
Ans. ∀x: Vehicle(x) → HasWheels(x)
3) Someone speaks some language in the class.
Ans. ∃x ∃y: Person(x) ∧ InClass(x) ∧ Language(y) ∧ Speaks(x, y)
4) Everybody loves somebody sometimes.
Ans. ∀x ∃y ∃t: Loves(x, y, t)
5) All software engineers develop software.
Ans. ∀x: SoftwareEngineer(x) → Develops(x, Software)
ii. All batsmen are cricketers.
∀x: Batsman(x) → Cricketer(x)
iii. Everybody speaks some language.
∀x ∃y: Person(x) → (Language(y) ∧ Speaks(x, y))
iv. Every car has a wheel.
∀x: Car(x) → ∃y: Wheel(y) ∧ WheelOf(y, x)
v. Everybody loves somebody some time.
∀x ∃y ∃t: Loves(x, y, t)
I. Backward chaining is the same idea as forward chaining except that you start with
requiring the learner to complete the last step of the task analysis. This means that you
will perform all the preceding steps either for or with the learner and then begin to
fade your prompts with the last step only.
II. Reinforcement is provided contingent upon the last step being completed. Once the
learner is able to complete the last step independently, you will require the learner to
complete the last two steps before receiving a reinforcer, and so on, until the learner is
able to complete the entire chain independently before receiving access to a
reinforcer.
III. Backward chaining uses the same basic approach as forward chaining but in reverse
order. That is, you start with the last step in the chain rather than the first. The
therapist can either prompt the learner through the entire sequence, without
opportunities for independent responding, until he gets to the final step (and then
teach that step), or the therapist can initiate the teaching interaction by going straight
to the last step.
IV. Either way, when the last step occurs, the therapist uses prompting to help the learner
perform the step correctly, reinforces correct responding with a powerful reinforcer,
and then fades prompts across subsequent trials. When the last step is mastered, then
each teaching interaction begins with the second-to-last step, and so on, until the first
step in the chain is mastered, at which point the whole task analysis is mastered.
V. Backward chaining is a kind of AND/OR search—the OR part because the goal query
can be proved by any rule in the knowledge base, and the AND part because all the
conjuncts in the lhs of a clause must be proved.
VI. Backward chaining, as we have written it, is clearly a depth-first search algorithm.
This means that its space requirements are linear in the size of the proof (neglecting,
for now, the space required to accumulate the solutions). It also means that backward
chaining (unlike forward chaining) suffers from problems with repeated states and
incompleteness. We will discuss these problems and some potential solutions, but first
we show how backward chaining is used in logic programming systems.
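A minimal propositional backward chainer showing the AND/OR structure from point V: OR over the rules that conclude the goal, AND over the conjuncts in the chosen rule's body. The rule base is illustrative, and like the depth-first algorithm described above, this sketch has no repeated-state check:

```python
# Propositional Horn rules: head <- body. An empty body marks a fact.
RULES = {
    "mortal": [["man"]],        # man => mortal
    "man": [["socrates"]],      # socrates => man
    "socrates": [[]],           # fact
}

def backward_chain(goal):
    """OR over rules for the goal; AND over the conjuncts of each body."""
    for body in RULES.get(goal, []):
        if all(backward_chain(subgoal) for subgoal in body):
            return True
    return False

print(backward_chain("mortal"))  # -> True
```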
I. The figure shows an air cargo transport problem involving loading and unloading cargo
and flying it from place to place. The problem can be defined with three actions:
Load, Unload, and Fly.
II. The actions affect two predicates: In(c, p) means that cargo c is inside plane p, and
At(x, a) means that object x (either plane or cargo) is at airport a. Note that some care
must be taken to make sure the At predicates are maintained properly.
III. When a plane flies from one airport to another, all the cargo inside the plane goes
with it. In first-order logic it would be easy to quantify over all objects that are inside
the plane.
IV. But basic PDDL does not have a universal quantifier, so we need a different solution.
The approach we use is to say that a piece of cargo ceases to be At anywhere when it
is In a plane; the cargo only becomes At the new airport when it is unloaded. So At
really means “available for use at a given location.”
V. The following plan is a solution to the problem:
[Load (C1, P1, SFO), Fly (P1, SFO, JFK), Unload (C1, P1, JFK),
Load (C2, P2, JFK), Fly (P2, JFK, SFO), Unload (C2, P2, SFO)].
VI. Finally, there is the problem of spurious actions such as Fly (P1, JFK, JFK), which
should be a no-op, but which has contradictory effects (according to the definition, the
effect would include At(P1, JFK) ∧ ¬At(P1, JFK)). It is common to ignore such
problems, because they seldom cause incorrect plans to be produced. The correct
approach is to add inequality preconditions saying that the from and to airports must
be different.
STRIPS Operators
I. STRIPS stands for STanford Research Institute Problem Solver
Tidily arranged actions descriptions
Restricted language (function-free literals)
Efficient algorithms
II. States represented by:
Conjunction of ground (function-free) atoms
Example
At(Home), Have(Bread)
Closed world assumption
Atoms that are not present are assumed to be false
Example
State: At(Home), Have(Bread)
Implicitly: ¬Have(Milk),¬Have(Bananas),¬Have(Drill)
Operator applicability
An operator o is applicable in state s if there is a substitution Subst of the free variables
such that Subst(precond(o)) ⊆ s
Example
Buy(x) is applicable in state
At(Shop)∧Sells(Shop,Milk)∧Have(Bread)
with substitution
Subst = { p/Shop, x/Milk }
Resulting state
Computed from old state and literals in Subst(effect)
1. Positive literals are added to the state
2. Negative literals are removed from the state
3. All other literals remain unchanged (avoids the frame problem)
Formally s’ = (s ∪ {P | P a positive atom, P ∈ Subst(effect(o))})
\ {P | P a positive atom, ¬P ∈ Subst(effect(o))}
Example Application of
Drive(a,b) precond: At(a),Road(a,b) effect: At(b),¬At(a)
to state
At(Koblenz), Road(Koblenz,Landau)
results in
At(Landau), Road(Koblenz,Landau)
A complete set of STRIPS operators can be translated into a set of successor-state axioms
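The applicability test and successor-state computation above reduce to set operations on ground atoms. A minimal sketch using the Drive example (atoms represented as plain strings; the operator here is already grounded, so no substitution step is shown):

```python
# Ground STRIPS operator from the example above.
drive = {
    "precond": {"At(Koblenz)", "Road(Koblenz,Landau)"},
    "add":     {"At(Landau)"},     # positive effects
    "delete":  {"At(Koblenz)"},    # negative effects
}

def applicable(op, state):
    """The operator applies iff all preconditions hold in the state."""
    return op["precond"] <= state

def apply_op(op, state):
    """Successor state: remove delete-list atoms, add add-list atoms."""
    return (state - op["delete"]) | op["add"]

state = {"At(Koblenz)", "Road(Koblenz,Landau)"}
if applicable(drive, state):
    print(apply_op(drive, state))
# -> {'At(Landau)', 'Road(Koblenz,Landau)'}
```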
Write note on planning graph.
• A planning graph is a special data structure which is used to get better accuracy. It is a
directed graph and is useful for accomplishing improved heuristic estimates.
• Any search technique can make use of a planning graph. Also, GRAPHPLAN can
be used to extract a solution directly.
• A planning graph works only for propositional problems, without variables.
• A planning graph consists of a series of levels which correspond to time steps
in the plan. Every level has a set of literals and a set of actions.
• Level 0 is the initial state of the planning graph.
• Properties of a planning graph:
⎯ If the goal is absent from the last level, then the goal cannot be achieved.
⎯ If there exists a path to the goal, then the goal is present in the last level.
⎯ If the goal is present in the last level, there still may not exist any path to it.
I. Planning graphs are an efficient way to create a representation of a planning problem
that can be used to achieve better heuristic estimates or to directly construct plans.
II. Planning graphs only work for propositional problems.
III. Planning graphs consist of a sequence of levels that correspond to time steps in the plan.
Level 0 is the initial state. Each level consists of a set of literals and a set of actions
that represent what might be possible at that step in the plan. "Might be" is the key to
efficiency: the graph records only a restricted subset of possible negative interactions
among actions.
IV. Each level consists of
Literals = all those that could be true at that time step, depending upon the
actions executed at preceding time steps.
Actions = all those actions that could have their preconditions satisfied at that
time step, depending on which of the literals actually hold.
V. The GRAPHPLAN algorithm repeatedly adds a level to a planning graph with
EXPAND-GRAPH. Once all the goals show up as nonmutex in the graph,
GRAPHPLAN calls EXTRACT-SOLUTION to search for a plan that solves the
problem. If that fails, it expands another level and tries again, terminating with failure
when there is no reason to go on.
Initially the plan consists of the 5 literals from the initial state and the CWA literals (S0).
Add actions whose preconditions are satisfied by EXPAND-GRAPH (A0).
Also add persistence actions and mutex relations.
Add the effects at level S1.
Repeat until the goal appears in level Si.
VI. EXPAND-GRAPH also looks for mutex relations:
Inconsistent effects: e.g., Remove(Spare, Trunk) and LeaveOvernight, due to
At(Spare, Ground) and ¬At(Spare, Ground)
Interference: e.g., Remove(Flat, Axle) and LeaveOvernight: At(Flat, Axle) as
PRECOND and ¬At(Flat, Axle) as EFFECT
Competing needs: e.g., PutOn(Spare, Axle) and Remove(Flat, Axle), due to
At(Flat, Axle) and ¬At(Flat, Axle)
Inconsistent support: e.g., in S2, At(Spare, Axle) and At(Flat, Axle)
Write note on semantic network.
V. For example, the figure has a MemberOf link between Mary and FemalePersons,
corresponding to the logical assertion Mary ∈ FemalePersons; similarly, the SisterOf
link between Mary and John corresponds to the assertion SisterOf(Mary, John). We
can connect categories using SubsetOf links, and so on. It is such fun drawing bubbles
and arrows that one can get carried away.
VI. For example, we know that persons have female persons as mothers, so can we draw a
HasMother link from Persons to FemalePersons? The answer is no, because
HasMother is a relation between a person and his or her mother, and categories do not
have mothers. For this reason, we have used a special notation, the double-boxed
link, in the figure. This link asserts that
∀x x ∈ Persons ⇒ [∀y HasMother(x, y) ⇒ y ∈ FemalePersons].
We might also want to assert that persons have two legs—that is,
∀x x ∈ Persons ⇒ Legs(x, 2)
VII. Semantic networks are mainly used for:
Representing data
Revealing structure (relations, proximity, relative importance)
Supporting conceptual editing
Supporting navigation
VIII. Advantages of Using Semantic Networks
The semantic network is more natural than the logical representation.
The semantic network permits the use of effective inference algorithms
(graph algorithms).
They are simple and can be easily implemented and understood.
The semantic network can be used as a typical connection application among various
fields of knowledge, for instance, between computer science and anthropology.
The semantic network permits a simple approach to investigating the problem space.
IX. Disadvantages of Using Semantic Networks
There is no standard definition for link names.
Semantic nets are not intelligent; they depend on their creator.
VII. We can extend event calculus to make it possible to represent simultaneous events
(such as two people being necessary to ride a seesaw), exogenous events (such as the
wind blowing and changing the location of an object), continuous events (such as the
level of water in the bathtub continuously rising) and other complications.