
Unit 1

Elaborate artificial intelligence with suitable example along with its application.
John McCarthy coined the term "Artificial Intelligence" in 1956. AI is defined as the
science and engineering of making intelligent machines. AI implies that machines are able
to take decisions, and to take the right decisions. An AI system works like the human brain.
Applications of AI are as follows:
i. Robotic Vehicles: A driverless autonomous car named "Stanley" was developed
using AI.
ii. Speech Recognition: It recognizes the speech of an authenticated user and allows the
service, while blocking the service to an unauthorized user.
iii. Autonomous Planning and Scheduling: NASA developed autonomous planning and
scheduling programs for scheduling the different operations of a spacecraft.
iv. Game Playing: IBM's Deep Blue became the first computer program to defeat a world
chess champion, Garry Kasparov.
v. Spam Fighting: Each day, learning algorithms classify over a billion messages as spam,
saving recipients from having to waste time deleting unwanted mail.
vi. Logistics Planning: AI-based logistics planning is used in different industries for
planning and scheduling different activities.
vii. Robotics: The iRobot Corporation has sold over two million robotic vacuum cleaners.
viii. Machine Translation: AI technology is used to translate text from one language into
another, for example into English, allowing speakers of different languages to
communicate efficiently.
Other applications of AI are as follows:
AI in shopping
AI in education
AI in marketing
AI in hospitals
AI in the entertainment industry
AI in military applications
AI is one of the newest fields in science and engineering.
AI is a general term that implies the use of a computer to model & replicate intelligent
behaviour.
“AI is the design, study & construction of computer programs that behave
intelligently.”
Artificial intelligence (AI) refers to the simulation of human intelligence in machines
that are programmed to think like humans and mimic their actions. The term may also
be applied to any machine that exhibits traits associated with a human mind such as
learning and problem-solving.
The ideal characteristic of artificial intelligence is its ability to rationalize and take
actions that have the best chance of achieving a specific goal.
AI is continuously evolving to benefit many different industries. Machines are wired
using a cross-disciplinary approach based in mathematics, computer science,
linguistics, psychology, and more.
Research in AI focuses on development & analysis of algorithms that learn & perform
intelligent behaviour with minimal human intervention.
AI is the ability of machine or computer program to think and learn.
The concept of AI is based on idea of building machines capable of thinking, acting &
learning like humans.
AI is the only field that attempts to build machines that will function autonomously in
complex, changing environments.
AI has focused chiefly on the following components of intelligence:
o Learning: learning by trial & error.
o Reasoning: reasoning often happens subconsciously and within seconds.
o Decision making: the process of making choices by identifying a decision,
gathering information, and assessing alternative resolutions.
o Problem solving: problem solving, particularly in AI, may be characterized as a
systematic search in order to reach a goal or solution.
Examples of AI:-
1. Alexa
o Alexa's rise to become the smart home's hub has been somewhat meteoric. When
Amazon first introduced Alexa, it took much of the world by storm.
o However, its usefulness and its uncanny ability to decipher speech from anywhere in
the room has made it a revolutionary product that can help us scour the web for
information, shop, schedule appointments, set alarms and a million other things, but
also help power our smart homes and be a conduit for those that might have limited
mobility.
2. Amazon.com
o Amazon's transactional A.I. is something that's been in existence for quite some time,
allowing it to make astronomical amounts of money online.
o With its algorithms refined more and more with each passing year, the company has
gotten acutely smart at predicting just what we're interested in purchasing based on
our online behaviour.
3. Face Detection and Recognition
o Using virtual filters on our face when taking pictures and using face ID for unlocking
our phones are two applications of AI that are now part of our daily lives.
o The former incorporates face detection, meaning any human face is identified. The
latter uses face recognition, through which a specific face is recognised.
4. Chatbots
o As a customer, getting queries answered can be time-consuming. An artificially
intelligent solution to this is the use of algorithms to train machines to cater to
customers via chatbots.
o This enables machines to answer FAQs, and take and track orders.
5. Social Media
o The advent of social media provided a new narrative to the world with excessive
freedom of speech.
o Various social media applications are using the support of AI to control these
problems and provide users with other entertaining features.
o AI algorithms can spot and swiftly take down posts containing hate speech a lot faster
than humans could. This is made possible through their ability to identify hate
keywords, phrases, and symbols in different languages.
6. E-Payments
o Artificial intelligence has made it possible to deposit cheques from the comfort of
your home. AI is proficient in deciphering handwriting, making online cheque
processing practicable.
o The way fraud can be detected by observing users’ credit card spending patterns is
also an example of artificial intelligence.
What is the purpose of the Turing Test?
I. The Turing Test, proposed by Alan Turing (1950), was designed to provide a
satisfactory operational definition of intelligence. To judge whether a system can act
like a human, Alan Turing designed a test known as the Turing Test.
II. A Turing Test is a method of inquiry in artificial intelligence (AI) for determining
whether or not a computer is capable of thinking like a human being.
III. A computer passes the test if a human interrogator, after posing some written
questions, cannot tell whether the written responses come from a person or from a
computer. Programming a computer to pass a rigorously applied test provides plenty
to work on. The computer would need to possess the following capabilities:
1. Natural language processing to enable it to communicate successfully in
English;
2. Knowledge representation to store what it knows or hears;
3. Automated reasoning to use the stored information to answer questions and
to draw new conclusions;
4. Machine learning to adapt to new circumstances and to detect and extrapolate
patterns.
IV. Turing’s test deliberately avoided direct physical interaction between the interrogator
and the computer, because physical simulation of a person is unnecessary for
intelligence. However, the so-called total Turing Test includes a video signal so that
the interrogator can test the subject’s perceptual abilities, as well as the opportunity
for the interrogator to pass physical objects “through the hatch.” To pass the total
Turing Test, the computer will need
5. Computer vision to perceive objects, and
6. Robotics to manipulate objects and move about.
V. These six disciplines compose most of AI, and Turing deserves credit for designing a
test that remains relevant 60 years later. Yet AI researchers have devoted little effort
to passing the Turing Test, believing that it is more important to study the underlying
principles of intelligence than to duplicate an exemplar.
Explain the concept of agent and environment.
An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators.
A human agent has eyes, ears, and other organs for sensors and hands, legs, vocal
tract, and so on for actuators.
Eyes, ears, nose, skin, and tongue sense the environment and are therefore called
sensors. Sensors collect percepts, or inputs, from the environment and pass them to the
processing unit.
Actuators or effectors are the organs or tools using which the agent acts upon the
environment. Once the sensors sense the environment, they pass this information to the
nervous system, which takes appropriate action with the help of actuators. In the case of
human agents, we have hands and legs as actuators or effectors.
A robotic agent might have cameras and infrared range finders for sensors and various
motors for actuators.
A software agent receives keystrokes, file contents, and network packets as sensory
inputs and acts on the environment by displaying on the screen, writing files, and
sending network packets.
We use the term percept to refer to the agent's perceptual inputs at any given instant. An
agent's percept sequence is the complete history of everything the agent has ever
perceived.
In general, an agent’s choice of action at any given instant can depend on the entire
percept sequence observed to date, but not on anything it hasn’t perceived.
By specifying the agent’s choice of action for every possible percept sequence, we
have said more or less everything there is to say about the agent. Mathematically
speaking, we say that an agent’s behaviour is described by the agent function that
maps any given percept sequence to an action.

As shown in the figure, there are two blocks, A and B, containing some dirt. The vacuum
cleaner agent is supposed to sense the dirt and collect it, thereby making the room clean.
In order to do that, the agent must have a camera to see the dirt and a mechanism to
move forward, backward, left and right to reach the dirt. It should also absorb the
dirt. Based on the percepts, actions will be performed, for example: Move Left, Move
Right, Absorb, No Operation.
Hence the sensors for the vacuum cleaner agent can be a camera and a dirt sensor, and the
actuators can be a motor to make it move and an absorption mechanism. Its behaviour can
be represented as [A, Dirty], [B, Clean], [A, Absorb], [B, NoOp], etc.
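To make the idea of the agent function concrete, here is a minimal Python sketch of a
table-driven agent for this vacuum world, mapping percept sequences to actions. The tiny
lookup table and the names are illustrative only, not a standard implementation:

# A table-driven agent: behaviour is specified by a table from percept
# sequences to actions, as described above.
table = {
    (("A", "Dirty"),): "Absorb",
    (("B", "Dirty"),): "Absorb",
    (("A", "Clean"),): "Right",
    (("B", "Clean"),): "Left",
    (("A", "Dirty"), ("A", "Clean")): "Right",
    # ... a complete table needs an entry for every possible percept sequence
}
percepts = []  # the percept sequence: everything perceived so far

def table_driven_agent(percept):
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")

print(table_driven_agent(("A", "Dirty")))  # -> Absorb
print(table_driven_agent(("A", "Clean")))  # -> Right

The table grows exponentially with the length of the percept sequence, which is why the
simple reflex agent described later conditions only on the current percept.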
Types of Environment
I. Fully observable vs. partially observable:
If an agent’s sensors give it access to the complete state of the environment at each
point in time, then we say that the task environment is fully observable.
Fully observable environments are convenient because the agent need not maintain
any internal state to keep track of the world. An environment might be partially
observable because of noisy and inaccurate sensors or because parts of the state are
simply missing from the sensor data.
If the agent has no sensors at all then the environment is unobservable.
II. Single agent vs. multiagent:
An agent solving a crossword puzzle by itself is clearly in a single-agent environment,
while in the case of a car-driving agent there are multiple agents driving on the road,
hence it is a multiagent environment.
For example, in chess, the opponent entity B is trying to maximize its performance
measure, which, by the rules of chess, minimizes agent A’s performance measure.
Thus, chess is a competitive multiagent environment.
In the taxi-driving environment, on the other hand, avoiding collisions maximizes the
performance measure of all agents, so it is a partially cooperative multiagent
environment. It is also partially competitive because, for example, only one car can
occupy a parking space.
III. Deterministic vs. stochastic:
If the next state of the environment is completely determined by the current state and
the action executed by the agent, then we say the environment is deterministic;
otherwise, it is stochastic.
If the environment is partially observable, however, then it could appear to be
stochastic.
IV. Episodic vs. sequential:
In an episodic task environment, the agent’s experience is divided into atomic
episodes. In each episode the agent receives a percept and then performs a single
action.
Crucially, the next episode does not depend on the actions taken in previous episodes.
Many classification tasks are episodic.
In sequential environments, on the other hand, the current decision could affect all
future decisions.
Episodic environments are much simpler than sequential environments because the
agent does not need to think ahead.
V. Static vs. dynamic:
If the environment can change while an agent is deliberating, then we say the
environment is dynamic for that agent; otherwise, it is static.
Static environments are easy to deal with because the agent need not keep looking at
the world while it is deciding on an action, nor need it worry about the passage of
time.
Dynamic environments, on the other hand, are continuously asking the agent what it
wants to do; if it hasn’t decided yet, that counts as deciding to do nothing.
If the environment itself does not change with the passage of time but the agent’s
performance score does, then we say the environment is semi-dynamic.
VI. Discrete vs. continuous:
The discrete/continuous distinction applies to the state of the environment, to the way
time is handled, and to the percepts and actions of the agent.
For example, the chess environment has a finite number of distinct states (excluding
the clock).
Chess also has a discrete set of percepts and actions.
Taxi driving is a continuous-state and continuous-time problem: the speed and
location of the taxi and of the other vehicles sweep through a range of continuous
values and do so smoothly over time.
Taxi-driving actions are also continuous (steering angles, etc.). Input from digital
cameras is discrete, strictly speaking, but is typically treated as representing
continuously varying intensities and locations.
VII. Known vs. unknown:
In a known environment, the outcomes for all probable actions are given; "known"
refers to the agent's state of knowledge about the "laws of physics" of the environment.
In an unknown environment, the agent has to gain knowledge about how the
environment works before it can make good decisions.
Explain the rational agent approach of AI.
1. A rational agent is one which acts so as to make the right decision in order to achieve
the best outcome.
2. A rational agent is one that does the right thing; conceptually speaking, every entry in the
table for the agent function is filled out correctly.
3. A general rule says that it is better to design performance measures according to what
one actually wants in the environment, rather than according to how one thinks the agent
should behave.
4. The agent's behaviour is based on the following points:
➢ To achieve high performance.
➢ To achieve the optimal result.
➢ To behave rationally.
5. What is rational at any given time depends upon the following four things:
➢ The performance measure that defines the criterion of success.
➢ The agent's prior knowledge of the environment.
➢ The actions that the agent can perform.
➢ The agent's percept sequence to date.
6. There are four different types of agents, which are as follows:
i. Simple reflex agent
ii. Model based agent
iii. Goal based agent
iv. Utility based agent
Rational Agent:
For each possible percept sequence, a rational agent should select an action that is expected to
maximize its performance measure, based on the evidence provided by the percept sequence
and whatever built-in knowledge the agent has.
1. The concept of rational agents is central to our approach to artificial intelligence.
2. Rationality is distinct from omniscience (all-knowing with infinite knowledge)
3. Agents can perform actions in order to modify future percepts so as to obtain useful
information (information gathering, exploration)
4. An agent is autonomous if its behaviour is determined by its own percepts & experience
(with ability to learn and adapt) without depending solely on build-in knowledge
5. A rational agent is one that does the right thing—conceptually speaking, every entry in
the table for the agent function is filled out correctly. Obviously, doing the right thing is
better than doing the wrong thing, but what does it mean to do the right thing?
6. If the sequence is desirable, then the agent has performed well. This notion of desirability
is captured by a performance measure that evaluates any given sequence of environment
states.
7. For every percept sequence a built-in knowledge base is updated, which is very useful for
decision making, because it stores the consequences of performing some particular action.
8. If the consequences lead to the desired goal, then we get a good performance
measure; if the consequences do not lead to the desired goal state, then we get a
poor performance measure.
For example, if an agent hurts its finger while using a nail and hammer, then while using
them the next time the agent will be more careful, and the probability of not getting hurt
will increase. In short, the agent will be able to use the hammer and nail more efficiently.
9. A rational agent should be autonomous—it should learn what it can to compensate for
partial or incorrect prior knowledge.
10. A rational agent should not only gather information but also learn as much as possible
from what it perceives.
11. After sufficient experience of its environment, the behaviour of a rational agent can
become effectively independent of its prior knowledge. Hence, the incorporation of
learning allows one to design a single rational agent that will succeed in a vast variety of
environments.
12. What is rational at any given time depends on four things:
The performance measure that defines the criterion of success.
The agent’s prior knowledge of the environment.
The actions that the agent can perform.
The agent’s percept sequence to date.
Acting rationally: The rational agent approach
An agent is just something that acts (agent comes from the Latin agere, to do). Of
course, all computer programs do something, but computer agents are expected to do
more: operate autonomously, perceive their environment, persist over a prolonged
time period, adapt to change, and create and pursue goals.
A rational agent is one that acts so as to achieve the best outcome or, when there is
uncertainty, the best expected outcome. In some situations, there is no provably
correct thing to do, but something must still be done. There are also ways of acting
rationally that cannot be said to involve inference. For example, recoiling from a hot
stove is a reflex action that is usually more successful than a slower action taken after
careful deliberation.
All the skills needed for the Turing Test also allow an agent to act rationally.
Knowledge representation and reasoning enable agents to reach good decisions. We
need to be able to generate comprehensible sentences in natural language to get by in
a complex society. We need learning not only for erudition, but also because it
improves our ability to generate effective behaviour.
The rational-agent approach has two advantages over the other approaches. First, it is
more general than the “laws of thought” approach because correct inference is just
one of several possible mechanisms for achieving rationality. Second, it is more
amenable to scientific development than are approaches based on human behaviour or
human thought. The standard of rationality is mathematically well defined and
completely general, and can be “unpacked” to generate agent designs that provably
achieve it.
One important point to keep in mind: We will see before too long that achieving
perfect rationality—always doing the right thing—is not feasible in complicated
environments.
Explain the working of simple reflex agent.
In simplex reflex agent, an agent performs the action based on the current state/ input
only by ignoring all the previous state of agent of the environment is called as simplex
reflex agent.
• Simple reflex agent is totally uncomplicated type of agent.
• The simplex reflect agent functions is based on the situation and its corresponding
action.
• The agent reflects on condition action protocol for performing any action.
• If the condition is true, then the matching action is performed without considering
previous history.
• E.g.: robotic vacuum cleaner for the home use.
I. The simplest kind of agent is the simple reflex agent. These agents select actions on
the basis of the current percept, ignoring the rest of the percept history.
II. A simple reflex agent is the most basic of the intelligent agents out there. It performs
actions based on a current situation. When something happens in the environment of a
simple reflex agent, the agent quickly scans its knowledge base for how to respond to
the situation at-hand based on pre-determined rules.
III. It would be like a home thermostat recognizing that if the temperature increases to 75
degrees in the house, the thermostat is prompted to kick on. It doesn’t need to know
what happened with the temperature yesterday or what might happen tomorrow.
Instead, it operates based on the idea that if _____ happens, _____ is the response.
IV. Simple reflex agents are just that: simple. They cannot compute complex equations or
solve complicated problems. They work only in environments that are fully observable
through the current percept, ignoring any percept history. If you have a smart
light bulb, for example, set to turn on at 6 p.m. every night, the light bulb will not
recognize that the days are longer in summer and the lamp is not needed until much
later. It will continue to turn the lamp on at 6 p.m. because that is the rule it follows.
Simple reflex agents are built on the condition-action rule.
V. Simple reflex behaviors occur even in more complex environments. Imagine yourself
as the driver of the automated taxi. If the car in front brakes and its brake lights come
on, then you should notice this and initiate braking. In other words, some processing
is done on the visual input to establish the condition we call “The car in front is
braking.” Then, this triggers some established connection in the agent program to the
action “initiate braking.” We call such a connection a condition–action rule, written
as
if car-in-front-is-braking then initiate-braking.
VI. For example, the vacuum agent is a simple reflex agent, because its decision is based
only on the current location and on whether that location contains dirt.
An agent program for this agent is shown below:
function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
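As a rough Python rendering of this pseudocode (a sketch, with the percept encoded as a
(location, status) pair):

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":        # condition-action rule: dirty square -> Suck
        return "Suck"
    elif location == "A":        # clean and in A -> move Right
        return "Right"
    elif location == "B":        # clean and in B -> move Left
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(reflex_vacuum_agent(("B", "Clean")))   # Left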
Unit 2

Explain the algorithm for breadth first search algorithm.


BFS is a simple strategy in which the root node is expanded first, then all the successors of
the root node are expanded next, then their successors, and so on. In BFS the shallowest
unexpanded node is selected for expansion. BFS is also called a FIFO (first-in, first-out)
technique. The BFS algorithm is as follows:
• Create a variable called NODE-LIST and set it to the initial state.
• Until a goal state is found or NODE-LIST is empty, do:
a. Remove the first element from NODE-LIST and call it E. If NODE-LIST was
empty, quit.
b. For each way that each rule can match the state described in E, do:
i. Apply the rule to generate a new state.
ii. If the new state is a goal state, quit and return this state.
iii. Otherwise, add the new state to the end of NODE-LIST.
For the example graph (figure omitted), the solution path found is S → A → G, where this
G has cost 10. Number of nodes expanded (including the goal node) = 7.
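The following is a minimal Python sketch of this algorithm using a FIFO queue. The small
graph is an illustrative stand-in for the omitted figure, arranged so that S → A → G is
found:

from collections import deque

def bfs(graph, start, goal):
    node_list = deque([[start]])            # NODE-LIST holds paths, FIFO order
    visited = {start}
    while node_list:                        # until NODE-LIST is empty
        path = node_list.popleft()          # remove the first element, E
        node = path[-1]
        if node == goal:                    # goal state found: quit
            return path
        for successor in graph.get(node, []):   # expand E
            if successor not in visited:
                visited.add(successor)
                node_list.append(path + [successor])
    return None                             # NODE-LIST empty: no solution

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
print(bfs(graph, "S", "G"))                 # ['S', 'A', 'G']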

Write the uniform cost search algorithm. Explain in short.


Uniform Cost Search (UCS) is an algorithm used to move around a directed, weighted
search space to go from a start node to one of the ending nodes with a minimum
cumulative cost.
III. This search is an uninformed search algorithm, since it operates in a brute-force
manner, i.e., it does not take the state of the node or the search space into consideration.
IV. It is used to find the path with the lowest cumulative cost in a weighted graph, where
nodes are expanded according to their cost of traversal from the root node. This is
implemented using a priority queue: the lower the cost, the higher the priority.
V. When all step costs are equal, breadth-first search is optimal because it always
expands the shallowest unexpanded node. By a simple extension, we can find an
algorithm that is optimal with any step-cost function. Instead of expanding the
shallowest node, uniform-cost search expands the node n with the lowest path cost g
(n). This is done by storing the frontier as a priority queue ordered by g.
VI. In addition to the ordering of the queue by path cost, there are two other significant
differences from breadth-first search. The first is that the goal test is applied to a node
when it is selected for expansion, rather than when it is first generated. The reason is
that the first goal node that is generated may be on a suboptimal path. The second
difference is that a test is added in case a better path is found to a node currently on
the frontier.
VII. Algorithm
function UNIFORM-COST-SEARCH(problem) returns a solution, or failure
  node ← a node with STATE = problem.INITIAL-STATE, PATH-COST = 0
  frontier ← a priority queue ordered by PATH-COST, with node as the only element
  explored ← an empty set
  loop do
    if EMPTY?(frontier) then return failure
    node ← POP(frontier) /* chooses the lowest-cost node in frontier */
    if problem.GOAL-TEST(node.STATE) then return SOLUTION(node)
    add node.STATE to explored
    for each action in problem.ACTIONS(node.STATE) do
      child ← CHILD-NODE(problem, node, action)
      if child.STATE is not in explored or frontier then
        frontier ← INSERT(child, frontier)
      else if child.STATE is in frontier with higher PATH-COST then
        replace that frontier node with child
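A minimal Python sketch of the same idea, using heapq as the priority queue ordered by
g(n). This version re-checks costs lazily instead of replacing frontier entries in place,
which is a common implementation shortcut; the graph fragment matches the Sibiu-to-
Bucharest example that follows:

import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]              # (PATH-COST, state, path)
    best_cost = {}                                # explored set with best costs
    while frontier:
        cost, state, path = heapq.heappop(frontier)   # lowest-cost node
        if state == goal:                         # goal test on expansion
            return cost, path
        if state in best_cost and best_cost[state] <= cost:
            continue                              # a cheaper path already known
        best_cost[state] = cost
        for successor, step_cost in graph.get(state, []):
            heapq.heappush(frontier,
                           (cost + step_cost, successor, path + [successor]))
    return None

graph = {
    "Sibiu": [("Rimnicu Vilcea", 80), ("Fagaras", 99)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Fagaras": [("Bucharest", 211)],
    "Pitesti": [("Bucharest", 101)],
}
print(uniform_cost_search(graph, "Sibiu", "Bucharest"))
# (278, ['Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])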

VIII. The problem is to get from Sibiu to Bucharest. The successors of Sibiu are Rimnicu
Vilcea and Fagaras, with costs 80 and 99, respectively. The least-cost node, Rimnicu
Vilcea, is expanded next, adding Pitesti with cost 80 + 97=177.
IX. The least-cost node is now Fagaras, so it is expanded, adding Bucharest with cost
99+211=310. Now a goal node has been generated, but uniform-cost search keeps
going, choosing Pitesti for expansion and adding a second path to Bucharest with cost
80+97+101= 278. Now the algorithm checks to see if this new path is better than the
old one; it is, so the old one is discarded. Bucharest, now with g-cost 278, is selected
for expansion and the solution is returned.
X. It is easy to see that uniform-cost search is optimal in general. Uniform-cost search
does not care about the number of steps a path has, but only about their total cost.
Uniform-cost search is guided by path costs rather than depths, so its complexity is
not easily characterized in terms of b and d.
Explain the working mechanism of Genetic Algorithm.

A genetic algorithm is a stochastic hill-climbing search in which a large population of
states is maintained.
• New states are generated by mutation and by crossover, which combines pairs of states
from the population.
• Genetic algorithm:
i. Generate a random population of chromosomes.
ii. Evaluate the fitness of each chromosome in the population.
iii. Create a new population by repeating the following steps until the new population
is complete: select two parent chromosomes from the population according to
their fitness; crossover the parents to form a new offspring (child) with some
crossover probability; with some mutation probability, mutate the new offspring
and place it in the new population.
iv. If the end condition is satisfied, stop and return the best solution.
A genetic algorithm (or GA) is a variant of stochastic beam search in which
successor states are generated by combining two parent states rather than by
modifying a single state. The analogy to natural selection is the same as in stochastic
beam search, except that now we are dealing with sexual rather than asexual
reproduction.
II. Like beam searches, GAs begin with a set of k randomly generated states, called the
population. Each state, or individual, is represented as a string over a finite alphabet—
most commonly, a string of 0s and 1s.
III. For example, an 8-queens state must specify the positions of 8 queens, each in a
column of 8 squares, and so requires 8× log2 8=24 bits. Alternatively, the state could
be represented as 8 digits, each in the range from 1 to 8. (We demonstrate later that
the two encodings behave differently.) Figure shows a population of four 8-digit
strings representing 8-queens states.
IV. The following outline how the genetic algorithm works:
1. The algorithm begins by creating a random initial population.
2. The algorithm then creates a sequence of new populations. At each step, the algorithm
uses the individuals in the current generation to create the next population. To create
the new population, the algorithm performs the following steps:
a. Scores each member of the current population by computing its fitness value.
These values are called the raw fitness scores.
b. Scales the raw fitness scores to convert them into a more usable range of
values. These scaled values are called expectation values.
c. Selects members, called parents, based on their expectation.
d. Some of the individuals in the current population that have the best fitness are
chosen as elite. These elite individuals are passed to the next population.
e. Produces children from the parents. Children are produced either by making
random changes to a single parent—mutation—or by combining the vector
entries of a pair of parents—crossover.
f. Replaces the current population with the children to form the next generation.
3. The algorithm stops when one of the stopping criteria is met.

V. Initial Population: The algorithm begins by creating a random initial population.
VI. Creating the Next Generation:
The genetic algorithm creates three types of children for the next generation:
Elite children are the individuals in the current generation with the best fitness values.
These individuals automatically survive to the next generation.
Crossover children are created by combining the vectors of a pair of parents.
Mutation children are created by introducing random changes, or mutations, to a
single parent.
o Crossover Children
The algorithm creates crossover children by combining pairs of parents in the
current population. At each coordinate of the child vector, the default
crossover function randomly selects an entry, or gene, at the same coordinate
from one of the two parents and assigns it to the child. For problems with
linear constraints, the default crossover function creates the child as a random
weighted average of the parents.
o Mutation Children
The algorithm creates mutation children by randomly changing the genes of
individual parents. By default, for unconstrained problems the algorithm adds
a random vector from a Gaussian distribution to the parent. For bounded or
linearly constrained problems, the child remains feasible.
VII. Stopping Conditions for the Algorithm
The genetic algorithm (for example, MATLAB's ga function) uses the following options to
determine when to stop. You can see the default values for each option by running
opts = optimoptions('ga').
MaxGenerations: the algorithm stops when the number of generations
reaches MaxGenerations.
MaxTime: the algorithm stops after running for MaxTime seconds.
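To make these steps concrete, here is a minimal Python sketch of a genetic algorithm for
the 8-queens encoding described above (8 digits in the range 1-8). The population size,
mutation rate, and fitness-proportional selection are illustrative choices, not prescribed
values:

import random

def fitness(state):                       # non-attacking pairs (max 28)
    attacks = sum(1 for i in range(8) for j in range(i + 1, 8)
                  if state[i] == state[j] or abs(state[i] - state[j]) == j - i)
    return 28 - attacks

def crossover(x, y):                      # single-point crossover
    c = random.randrange(1, 8)
    return x[:c] + y[c:]

def mutate(state, rate=0.1):              # randomly change one digit
    state = list(state)
    if random.random() < rate:
        state[random.randrange(8)] = random.randrange(1, 9)
    return tuple(state)

def genetic_algorithm(pop_size=100, generations=1000):
    population = [tuple(random.randrange(1, 9) for _ in range(8))
                  for _ in range(pop_size)]
    for _ in range(generations):
        best = max(population, key=fitness)
        if fitness(best) == 28:           # stopping criterion: a solution
            return best
        weights = [fitness(s) + 1 for s in population]  # raw fitness scores
        population = [mutate(crossover(*random.choices(population, weights, k=2)))
                      for _ in range(pop_size)]
    return max(population, key=fitness)

print(genetic_algorithm())  # best state found (a solution has fitness 28)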
Give the illustration of 8 queen problem using hill climbing algorithm. (5)
I. The hill-climbing search algorithm is simply a loop that continually moves in the
direction of increasing value, that is, uphill. It terminates when it reaches a "peak"
where no neighbour has a higher value. The algorithm does not maintain a search tree,
so the data structure for the current node need only record the state and the value of
the objective function. Hill climbing does not look ahead beyond the immediate
neighbours of the current state. This resembles trying to find the top of Mount Everest
in a thick fog while suffering from amnesia.
II. Local search algorithms typically use a complete-state formulation, where each state
has 8 queens on the board, one per column. The successors of a state are all possible
states generated by moving a single queen to another square in the same column (so
each state has 8×7=56 successors).
III. The heuristic cost function h is the number of pairs of queens that are attacking each
other, either directly or indirectly. The global minimum of this function is zero, which
occurs only at perfect solutions. Figure (a) shows a state with h=17. The figure also
shows the values of all its successors, with the best successors having h=12.
IV. Hill-climbing algorithms typically choose randomly among the set of best successors
if there is more than one. Hill climbing is sometimes called greedy local search
because it grabs a good neighbour state without thinking ahead about where to go
next. Although greed is considered one of the seven deadly sins, it turns out that
greedy algorithms often perform quite well. Hill climbing often makes rapid progress
toward a solution because it is usually quite easy to improve a bad state. For example,
from the state in Figure (a), it takes just five steps to reach the state in Figure (b),
which has h=1 and is very nearly a solution. Unfortunately, hill climbing often gets
stuck for the following reasons:
Local maxima: a local maximum is a peak that is higher than each of its
neighbouring states but lower than the global maximum. Hill-climbing
algorithms that reach the vicinity of a local maximum will be drawn upward
toward the peak but will then be stuck with nowhere else to go. More
concretely, the state in Figure (b) is a local maximum (i.e., a local minimum
for the cost h); every move of a single queen makes the situation worse.
Ridges: Ridges result in a sequence of local maxima that is very difficult for
greedy algorithms to navigate.
Plateaux: a plateau is a flat area of the state-space landscape. It can be a flat
local maximum, from which no uphill exit exists, or a shoulder, from which
progress is possible. A hill-climbing search might get lost on the plateau.

V. In each case, the algorithm reaches a point at which no progress is being made.
Starting from a randomly generated 8-queens state, steepest-ascent hill climbing gets
stuck 86% of the time, solving only 14% of problem instances. It works quickly,
taking just 4 steps on average when it succeeds and 3 when it gets stuck—not bad for
a state space with 8^8 ≈ 17 million states.
VI. If we always allow sideways moves when there are no uphill moves, an infinite loop
will occur whenever the algorithm reaches a flat local maximum that is not a
shoulder. One common solution is to put a limit on the number of consecutive
sideways moves allowed. For example, we could allow up to, say, 100 consecutive
sideways moves in the 8-queens problem. This raises the percentage of problem
instances solved by hill climbing from 14% to 94%. Success comes at a cost: the
algorithm averages roughly 21 steps for each successful instance and 64 for each
failure.
VII. For 8-queens instances with no sideways moves allowed, p ≈ 0.14, so we need
roughly 7 iterations to find a goal (6 failures and 1 success). The expected number of
steps is the cost of one successful iteration plus (1−p)/p times the cost of failure, or
roughly 22 steps in all. When we allow sideways moves, 1/0.94 ≈ 1.06 iterations are
needed on average and (1×21) + (0.06/0.94)×64 ≈ 25 steps.
VIII. For 8-queens, then, random-restart hill climbing is very effective indeed. Even for
three million queens, the approach can find solutions in under a minute.
IX. The success of hill climbing depends very much on the shape of the state-space
landscape: if there are few local maxima and plateaux, random-restart hill climbing
will find a good solution very quickly.
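A minimal Python sketch of steepest-ascent hill climbing with random restarts for
8-queens, using the complete-state formulation and the heuristic h described above (the
restart loop and representation details are illustrative):

import random

def h(state):                             # attacking pairs (to be minimised)
    return sum(1 for i in range(8) for j in range(i + 1, 8)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def hill_climb(state):
    while True:
        neighbours = [state[:c] + [r] + state[c + 1:]  # move one queen within
                      for c in range(8) for r in range(8)  # its own column
                      if r != state[c]]                 # 8 x 7 = 56 successors
        best = min(neighbours, key=h)
        if h(best) >= h(state):           # no better neighbour: local minimum
            return state
        state = best

def random_restart_hill_climb():
    while True:                           # restart until a perfect state found
        result = hill_climb([random.randrange(8) for _ in range(8)])
        if h(result) == 0:
            return result

print(random_restart_hill_climb())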
Explain the working of AND-OR search tree.
I. An AND-OR tree is a graphical representation of the reduction of problems (or goals)
to conjunctions and disjunctions of subproblems (or subgoals).
II. In a deterministic environment, the only branching is introduced by the agent’s own
choices in each state. We call these nodes OR nodes. In the vacuum world, for
example, at an OR node the agent chooses Left or Right or Suck. In a
nondeterministic environment, branching is also introduced by the environment’s
choice of outcome for each action. We call these nodes AND nodes.
III. For example, the Suck action in state 1 leads to a state in the set {5, 7}, so the agent
would need to find a plan for state 5 and for state 7. These two kinds of nodes
alternate, leading to an AND–OR tree as illustrated in Figure
IV. A solution for an AND–OR search problem is a subtree that (1) has a goal node at
every leaf, (2) specifies one action at each of its OR nodes, and (3) includes every
outcome branch at each of its AND nodes.
V. The solution is shown in bold lines in the figure; it corresponds to a conditional plan.
(The plan uses if–then–else notation to handle the AND branches, but when
there are more than two branches at a node, it might be better to use a case construct.)
Modifying the basic problem-solving agent to execute contingent solutions of this kind
is straightforward. One may also consider a somewhat different agent design, in which
the agent can act before it has found a guaranteed plan and deals with some
contingencies only as they arise during execution.
VI. One key aspect of the algorithm is the way in which it deals with cycles, which often
arise in nondeterministic problems (e.g., if an action sometimes has no effect or if an
unintended effect can be corrected). If the current state is identical to a state on the
path from the root, then it returns with failure. This doesn’t mean that there is no
solution from the current state; it simply means that if there is a noncyclic solution, it
must be reachable from the earlier incarnation of the current state, so the new
incarnation can be discarded.
VII. With this check, we ensure that the algorithm terminates in every finite state space,
because every path must reach a goal, a dead end, or a repeated state. Notice that the
algorithm does not check whether the current state is a repetition of a state on some
other path from the root, which is important for efficiency.
VIII. AND–OR graphs can also be explored by breadth-first or best-first methods.
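A minimal Python sketch of this AND-OR search follows. The problem interface, a
dictionary with actions/results/goal_test functions, is an assumption for illustration;
results returns the set of possible outcome states of a nondeterministic action, and the
returned contingency plan is nested as [action, {outcome-state: sub-plan}]:

def and_or_search(problem):
    return or_search(problem["initial"], problem, path=[])

def or_search(state, problem, path):
    if problem["goal_test"](state):
        return []                              # empty plan: already at a goal
    if state in path:                          # cycle on this path: fail
        return None
    for action in problem["actions"](state):
        plan = and_search(problem["results"](state, action),
                          problem, [state] + path)
        if plan is not None:                   # OR node: one workable action
            return [action, plan]
    return None                                # no action works: failure

def and_search(states, problem, path):
    plans = {}
    for s in states:                           # AND node: every possible
        plan = or_search(s, problem, path)     # outcome needs a sub-plan
        if plan is None:
            return None
        plans[s] = plan
    return plans

For the erratic vacuum world mentioned above, results(1, "Suck") would return {5, 7},
and the plan returned for state 1 would prescribe Suck followed by a sub-plan for each of
states 5 and 7.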
Unit 3
Write the minimax algorithm. Explain in short.
I. Minimax is a kind of backtracking algorithm that is used in decision making and
game theory to find the optimal move for a player, assuming that the opponent also
plays optimally. It is widely used in two-player turn-based games such as Tic-Tac-Toe,
Backgammon, Mancala, Chess, etc.
II. In Minimax the two players are called maximizer and minimizer.
The maximizer tries to get the highest score possible while the minimizer tries to do
the opposite and get the lowest score possible.
III. It uses a simple recursive computation of the minimax values of each successor state,
directly implementing the defining equations. The recursion proceeds all the way
down to the leaves of the tree, and then the minimax values are backed up through
the tree as the recursion unwinds.
IV. The minimax algorithm performs a complete depth-first exploration of the game tree.
If the maximum depth of the tree is m and there are b legal moves at each point, then
the time complexity of the minimax algorithm is O(b^m).
V. The space complexity is O(bm) for an algorithm that generates all actions at once, or
O(m) for an algorithm that generates actions one at a time. For real games, of course,
the time cost is totally impractical, but this algorithm serves as the basis for the
mathematical analysis of games and for more practical algorithms.
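The recursion can be sketched in a few lines of Python over an explicit game tree, where
leaves are utility values and internal nodes are lists of successors (the tree values below
are illustrative, not from the question paper):

def minimax(node, maximizing):
    if isinstance(node, (int, float)):         # terminal node: its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]     # MAX at the root, MIN below
print(minimax(tree, True))                     # 3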
Working of Min-Max Algorithm:
The working of the minimax algorithm can be easily described using an example.
Below we have taken an example of a game tree representing a two-player
game.
In this example, there are two players: one is called the Maximizer and the other is
called the Minimizer.
Maximizer will try to get the Maximum possible score, and Minimizer will try to
get the minimum possible score.
This algorithm applies DFS, so in this game tree we have to go all the way
down to the leaves to reach the terminal nodes.
At the terminal nodes the terminal values are given, so we compare those
values and backtrack the tree until the initial state is reached (the worked
game-tree figure is omitted here).
For multiplayer games, we need to replace the single value for each node with a vector
of values. For example, in a three-player game with players A, B, and C, a vector (vA,
vB, vC) is associated with each node. For terminal states, this vector gives the utility
of the state from each player’s viewpoint. (In two-player, zero-sum games, the
two-element vector can be reduced to a single value because the values are always
opposite.)
The simplest way to implement this is to have the UTILITY function return a
vector of utilities. Now we have to consider nonterminal states.
Consider the node marked X in the game tree shown in Figure. In that state, player
C chooses what to do. The two choices lead to terminal states with utility vectors
(vA =1, vB =2, vC =6) and (vA =4, vB =2, vC =3). Since 6 is bigger than 3, C
should choose the first move. This means that if state X is reached, subsequent
play will lead to a terminal state with utilities (vA =1, vB =2, vC =6). Hence, the
backed-up value of X is this vector. The backed-up value of a node n is always the
utility vector of the successor state with the highest value for the player choosing
at n.
Anyone who plays multiplayer games, such as Diplomacy, quickly becomes
aware that much more is going on than in two-player games. Multiplayer games
usually involve alliances, whether formal or informal, among the players.
Alliances are made and broken as the game proceeds. Are alliances consistent with
optimal strategies for each player in a multiplayer game? It turns out that they can be.
For example, suppose A and B
are in weak positions and C is in a stronger position. Then it is often optimal for
both A and B to attack C rather than each other, lest C destroy each of them
individually. In this way, collaboration emerges from purely selfish behaviour.
If the game is not zero-sum, then collaboration can also occur with just two
players. Suppose, for example, that there is a terminal state with utilities (vA = 1000,
vB = 1000) and that 1000 is the highest possible utility for each player.
Then the optimal strategy is for both players to do everything possible to reach
this state—that is, the players will automatically cooperate to achieve a mutually
desirable goal.
Explain alpha-beta pruning with suitable example.
I. Alpha-beta pruning is a modified version of the minimax algorithm. It is an
optimization technique for the minimax algorithm.
II. As we have seen with the minimax search algorithm, the number of game states it
has to examine is exponential in the depth of the tree. We cannot eliminate the
exponent, but we can effectively cut it in half. There is a technique by which, without
checking each node of the game tree, we can compute the correct minimax decision,
and this technique is called pruning. It involves two threshold parameters, alpha
and beta, for future expansion, so it is called alpha-beta pruning. It is also called
the Alpha-Beta Algorithm.
III. Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes
not only the tree's leaves but entire subtrees.
IV. The two parameters can be defined as:
a. Alpha: the best (highest-value) choice found so far at any point
along the path of the Maximizer. The initial value of alpha is -∞.
b. Beta: the best (lowest-value) choice found so far at any point along
the path of the Minimizer. The initial value of beta is +∞.
VI. Alpha-beta pruning applied to a standard minimax algorithm returns the same move
as the standard algorithm does, but it removes all the nodes which do not really
affect the final decision and only make the algorithm slow. Pruning these nodes
makes the algorithm fast.
VII. When applied to a standard minimax tree, it returns the same move as minimax
would, but prunes away branches that cannot possibly influence the final decision.
VIII. Alpha–beta pruning can be applied to trees of any depth, and it is often possible to
prune entire subtrees rather than just leaves.
Example of alpha beta pruning
We will first start with the initial move. We will initially define the alpha and beta
values as the worst case i.e. α = -∞ and β= +∞. We will prune the node only when
alpha becomes greater than or equal to beta.
Since the initial value of alpha is less than beta, we do not prune. Now it is MAX's
turn. So, at node D, the value of alpha will be calculated: the value of alpha at node D
will be max(2, 3) = 3.
Now the next move will be at node B, and it is MIN's turn. So, at node B, the
value of beta will be min(3, +∞) = 3; at node B the values will be alpha = -∞ and
beta = 3.

In the next step, the algorithm traverses the next successor of node B, which is node E,
and the values α = -∞ and β = 3 are passed down.
Now it is MAX's turn. So, at node E we look for the MAX value. The current value of
alpha at E is -∞ and it will be compared with 5, so max(-∞, 5) = 5. So, at
node E, alpha = 5 and beta = 3. Now we can see that alpha is greater than beta, which
satisfies the pruning condition, so we can prune the right successor of node E; it will
not be traversed, and the value at node E will be 5.

In the next step the algorithm again comes to node A from node B. At node A, alpha
will be changed to the maximum value, max(-∞, 3) = 3. So now the values of alpha and
beta at node A will be (3, +∞) respectively, and they will be transferred to node C.
These same values will be transferred to node F.
At node F the value of alpha will be compared to the left branch, which is 0. So,
max(0, 3) = 3; it is then compared with the right child, which is 1, and max(3, 1) = 3.
So α remains 3, but the node value of F becomes 1.


Now node F will return the node value 1 to C, where it is compared with the beta value.
It is MIN's turn, so min(+∞, 1) = 1. Now at node C, α = 3 and β = 1, and
alpha is greater than beta, which again satisfies the pruning condition. So, the next
successor of node C, i.e., G, will be pruned, and the algorithm does not compute the
entire subtree G.

Now, C will return the node value 1 to A, and the best value of A will be max(1, 3) = 3.
The tree above (figure omitted) is the final tree, showing which nodes were
computed and which nodes were pruned. So, for this example, the optimal
value for the maximizer is 3.
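The same logic can be sketched in Python over an explicit tree, as in the minimax sketch
above; the tree below mirrors this walkthrough (A at the root; G's leaves are arbitrary
since they are pruned), and the function returns the minimax value while skipping pruned
branches:

import math

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):
        return node                            # terminal utility
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                  # beta cut-off: prune siblings
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:                  # alpha cut-off: prune siblings
                break
        return value

# A(MAX) -> B, C (MIN) -> D, E, F, G (MAX) -> leaves
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
print(alphabeta(tree, -math.inf, math.inf, True))   # 3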

Explain the concept of knowledge base with example.


I. Knowledge is the basic element that allows a human brain to know and understand
things logically. When a person becomes knowledgeable about something, he is able to
do that thing in a better way. In AI, agents that copy this element of human
beings are known as knowledge-based agents.
II. The central component of a knowledge-based agent is its knowledge base, or KB. A
knowledge base is a set of sentences. (Here “sentence” is used as a technical term. It
is related but not identical to the sentences of English and other natural languages.)
Each sentence is expressed in a language called a knowledge representation
language and represents some assertion about the world. Sometimes we dignify a
sentence with the name axiom, when the sentence is taken as given without being
derived from other sentences.
III. There must be a way to add new sentences to the knowledge base and a way to query
what is known. The standard names for these operations are TELL and ASK,
respectively.
IV. Both operations may involve inference—that is, deriving new sentences from old.
Inference must obey the requirement that when one ASKs a question of the
knowledge base, the answer should follow from what has been told (or TELLed) to
the knowledge base previously. Later in this chapter, we will be more precise about
the crucial word “follow.” For now, take it to mean that the inference process should
not make things up as it goes along.
V. The agent maintains a knowledge base, KB, which may initially contain some
background knowledge.
VI. Each time the agent program is called, it does three things. First, it TELLs the
knowledge base what it perceives. Second, it ASKs the knowledge base what action it
should perform. In the process of answering this query, extensive reasoning may be
done about the current state of the world, about the outcomes of possible action
sequences, and so on. Third, the agent program TELLs the knowledge base which
action was chosen, and the agent executes the action.
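This TELL-ASK-TELL cycle can be sketched as a generic agent program in Python. The toy
KB below, with its single hard-coded rule, is purely illustrative; a real agent would use a
logical knowledge representation language and genuine inference:

class ToyKB:
    def __init__(self):
        self.sentences = []
    def tell(self, sentence):                   # add a sentence to the KB
        self.sentences.append(sentence)
    def ask(self, query):                       # toy "inference": one rule
        return "Suck" if "Dirty" in self.sentences[-1] else "NoOp"

def kb_agent(kb, percept, t):
    kb.tell(f"percept {percept} at time {t}")   # 1. TELL what it perceives
    action = kb.ask(f"action? at time {t}")     # 2. ASK what action to perform
    kb.tell(f"did {action} at time {t}")        # 3. TELL which action was chosen
    return action

kb = ToyKB()
print(kb_agent(kb, ("A", "Dirty"), 0))          # Suck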
VII. Knowledge level: - where we need to specify only what the agent knows and what its
goals are, in order to fix its behaviour. For example, an automated taxi might have the
goal of taking a passenger from San Francisco to Marin County and might know that
the Golden Gate Bridge is the only link between the two locations. Then we can
expect it to cross the Golden Gate Bridge because it knows that that will achieve its
goal.
VIII. Implementation level: - Notice that this analysis is independent of how the taxi
works at the implementation level. It doesn’t matter whether its geographical
knowledge is implemented as linked lists or pixel maps, or whether it reasons by
manipulating strings of symbols stored in registers or by propagating noisy signals in
a network of neurons.
IX. An example of a knowledge-based agent is the Wumpus world agent.
X. The Wumpus world is a simple world example used to illustrate the worth of a
knowledge-based agent and to demonstrate knowledge representation. It was inspired
by a video game, Hunt the Wumpus, by Gregory Yob in 1973.
XI. The Wumpus world is a cave which has 4×4 rooms connected by passageways. So
there are in total 16 rooms which are connected to each other. We have a
knowledge-based agent who will move forward in this world. The cave has a room with
a beast called the Wumpus, which eats anyone who enters the room. The Wumpus can
be shot by the agent, but the agent has a single arrow. In the Wumpus world, there are
some pit rooms which are bottomless, and if the agent falls into a pit, then he will be
stuck there forever. The exciting thing about this cave is that in one room there is a
possibility of finding a heap of gold. So the agent's goal is to find the gold and climb out
of the cave without falling into a pit or being eaten by the Wumpus. The agent will get a
reward if he comes out with the gold, and he will get a penalty if eaten by the Wumpus
or if he falls into a pit.
Explain the syntax for propositional logic.
The syntax of propositional logic defines the allowable sentences. The atomic sentences
consist of a single proposition symbol. Each such symbol stands for a proposition that can
be true or false.
We use symbols that start with an uppercase letter and may contain other letters or
subscripts, for example: P, Q, R, W1,3 and North. The names are arbitrary but are often
chosen to have some mnemonic value; we use W1,3 to stand for the proposition that the
wumpus is in [1,3]. (Remember that symbols such as W1,3 are atomic, i.e., W, 1, and 3
are not meaningful parts of the symbol.) There are two proposition symbols with fixed
meanings: True is the always-true proposition and False is the always-false proposition.
Complex sentences are constructed from simpler sentences, using parentheses and logical
connectives. There are five connectives in common use:
1) ¬ (not):-
A sentence such as ¬W1,3 is called the negation of W1,3. A literal is either an
atomic sentence (a positive literal) or a negated atomic sentence (a negative literal).
Example: - ¬A
2) ∧ (and):-
A sentence whose main connective is ∧, such as W1,3 ∧ P3,1, is called a
conjunction; its parts are the conjuncts. (The ∧ looks like an "A" for "And.")
Example: - A∧B
3) ∨ (or):-
A sentence using ∨, such as (W1,3 ∧ P3,1) ∨ W2,2, is a disjunction of the disjuncts
(W1,3 ∧ P3,1) and W2,2. (Historically, the ∨ comes from the Latin "vel," which
means "or." For most people, it is easier to remember ∨ as an upside-down ∧.)
Example: - A∨B
4) ⇒ (implies):-
A sentence such as (W1,3 ∧ P3,1) ⇒ ¬W2,2 is called an implication (or conditional).
Its premise or antecedent is (W1,3 ∧ P3,1), and its conclusion or consequent is ¬W2,2.
Implications are also known as rules or if–then statements. The implication
symbol is sometimes written as ⊃ or →.
Example: - A⇒B
5) ⇔ (if and only if):-
The sentence W1,3 ⇔ ¬W2,2 is a biconditional. It is sometimes also written as ≡.
Example: - A⇔B
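These connectives can be modelled directly in Python. The sketch below is an illustrative
encoding, not a standard library: sentences are represented as functions from a model (a
dictionary assigning True/False to proposition symbols) to truth values:

def symbol(name):   return lambda m: m[name]
def NOT(p):         return lambda m: not p(m)
def AND(p, q):      return lambda m: p(m) and q(m)
def OR(p, q):       return lambda m: p(m) or q(m)
def IMPLIES(p, q):  return lambda m: (not p(m)) or q(m)  # false only if p true, q false
def IFF(p, q):      return lambda m: p(m) == q(m)

# (W1,3 ∧ P3,1) ⇒ ¬W2,2, evaluated in one particular model
sentence = IMPLIES(AND(symbol("W13"), symbol("P31")), NOT(symbol("W22")))
model = {"W13": True, "P31": True, "W22": False}
print(sentence(model))     # True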


Explain in brief about resolution theorem.
I. The process of forming an inferred clause or resolving from the parent clauses is
called resolution.
II. This method demonstrates that the theorem being false causes an inconsistency with
the axioms, hence the theorem must have been true all along. It uses only one rule of
deduction, used to combine two parent clauses into a resolved clause.
III. We can express the full resolution rule of inference concisely using "big ∨" notation.
The "big ∨" is just a more concise way of writing clauses: underneath the ∨ we
specify a set of indices for the literals L. For example, if A = {1, 2, 7} then the first
parent clause is L1 ∨ L2 ∨ L7. (We can use a similar "big ∧" notation to express
conjunctions.) The rule resolves the literals ¬Pj (a negative literal) and Pk (a positive
literal). We just remove j and k from the sets of indices to get the resolved clause.
IV. We repeatedly resolve clauses until eventually two sentences resolve together to give
the empty clause, which contains no literals.
Initial State: A knowledge base (KB) consisting of negated theorem and
axioms in CNF.
Operators: The full resolution rule of inference picks two sentences from KB
and adds a new sentence.
Goal Test: Does KB contain False?
V. We illustrate the concept of a resolution search space with the simple example from
Aristotle seen before. Apparently, all men are mortal and Socrates was a man.
Given these words of wisdom, we want to prove that Socrates is mortal. We saw how
this could be achieved using the Modus Ponens rule, and it is instructive to use
Resolution to prove this as well.
The initial KB (including the negated theorem) in CNF is:
1) is_man(socrates)
2) ¬is_man(X) is_mortal(X)
3) ¬is_mortal(socrates)
We can apply resolution to get TWO different solutions. The first alternative is that
we combine (1) and (2) to get the state A:
1) is_man(socrates)
2) ¬is_man(X) ∨ is_mortal(X)
3) ¬is_mortal(socrates)
4) is_mortal(socrates)
Then combine (3) and (4) to get the state B:
1) is_man(socrates)
2) ¬is_man(X) ∨ is_mortal(X)
3) ¬is_mortal(socrates)
4) is_mortal(socrates)
5) False
Alternatively, we could initially combine (2) and (3) to get the state C:
1) is_man(socrates)
2) ¬is_man(X) ∨ is_mortal(X)
3) ¬is_mortal(socrates)
4) ¬is_man(socrates)
We then resolve again to get state D:
1) is_man(socrates)
2) ¬is_man(X) ∨ is_mortal(X)
3) ¬is_mortal(socrates)
4) ¬is_man(socrates)
5) False
So, we have a search space with two alternative paths to a solution: Initial --> A --> B
and Initial --> C --> D.
VI. Instead, it is often more convenient to visualise the developing proof as a tree. On the
top line we write the clauses of our initial KB, and draw lines from the two parent
clauses to the new clause, indicating what substitution was required, if any. Repeating
this process for each step, we get a proof tree. Here is the finished proof tree for the
path Initial --> A --> B in our example above:
[Proof tree figure omitted.]
And here is the proof tree for the alternative path Initial --> C --> D:
[Proof tree figure omitted.]
VII. Complex proofs require a bit of effort to lay out, and it is usually best not to write out
all the initial clauses on the top line to begin with, but rather to introduce them into the
tree as they are required.
VIII. Resolution proof trees make it easier to reconstruct a proof. Considering the latter tree,
we can read the proof by working backwards from False. We could read the proof to
Aristotle thus:
IX. "You said that all men were mortal. That means that for all things X, either X is not a
man, or X is mortal [CNF step]. If we assume that Socrates is not mortal, then, given
your previous statement, this means Socrates is not a man [first resolution step]. But
you said that Socrates is a man, which means that our assumption was false [second
resolution step], so Socrates must be mortal."
X. We see that, even in this simple case, it is difficult to translate the resolution proof
into a human readable one. Due to the popularity of resolution theorem proving, and
the difficulty with which humans read the output from the provers, there have been
some projects to translate resolution proofs into a more human readable format. As an
exercise, generate the proof you would give to Aristotle from the first proof tree.
XI. In the slides accompanying these notes is an example taken from Russell and Norvig
about a cat called Tuna being killed by Curiosity. We will work through this example
in the lecture.
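To make the resolution procedure concrete, here is a small propositional sketch in Python (my own illustration; the clause encoding and the grounded proposition names are assumed, not from the source). Clauses are frozensets of string literals with "~" marking negation, and the Socrates example is grounded to propositional symbols:

def negate(lit):
    # "~P" <-> "P"
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    # Return every clause obtained by resolving c1 against c2 on one literal.
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def refute(kb):
    # Resolve repeatedly; deriving the empty clause means KB is unsatisfiable.
    clauses = set(kb)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a != b:
                    for r in resolve(a, b):
                        if not r:          # empty clause: contradiction found
                            return True
                        new.add(r)
        if new <= clauses:                 # nothing new: KB is satisfiable
            return False
        clauses |= new

# KB: is_man(socrates); ¬is_man(socrates) ∨ is_mortal(socrates);
# plus the negated theorem ¬is_mortal(socrates).
kb = [frozenset({"is_man_socrates"}),
      frozenset({"~is_man_socrates", "is_mortal_socrates"}),
      frozenset({"~is_mortal_socrates"})]
print(refute(kb))   # True: the empty clause is derivable, so Socrates is mortal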
Write a note on Wumpus world problem.
I. The wumpus world is a cave consisting of rooms connected by passageways.
Lurking somewhere in the cave is the terrible wumpus, a beast that eats anyone who
enters its room. The wumpus can be shot by an agent, but the agent has only one
arrow. Some rooms contain bottomless pits that will trap anyone who wanders into
these rooms (except for the wumpus, which is too big to fall in). The only mitigating
feature of this bleak environment is the possibility of finding a heap of gold. Although
the wumpus world is rather tame by modern computer game standards, it illustrates
some important points about intelligence.
II. A sample wumpus world is shown in the figure. The precise definition of the task
environment is given by the PEAS description:
Performance measure: +1000 for climbing out of the cave with the gold, –
1000 for falling into a pit or being eaten by the wumpus, –1 for each action
taken and –10 for using up the arrow. The game ends either when the agent
dies or when the agent climbs out of the cave.
Environment: A 4×4 grid of rooms. The agent always starts in the square
labelled [1, 1], facing to the right. The locations of the gold and the wumpus
are chosen randomly, with a uniform distribution, from the squares other than
the start square. In addition, each square other than the start can be a pit, with
probability 0.2.
Actuators: The agent can move Forward, TurnLeft by 90◦, or TurnRight by
90◦. The agent dies a miserable death if it enters a square containing a pit or a
live wumpus. (It is safe, albeit smelly, to enter a square with a dead wumpus.)
If an agent tries to move forward and bumps into a wall, then the agent does
not move. The action Grab can be used to pick up the gold if it is in the same
square as the agent. The action Shoot can be used to fire an arrow in a straight
line in the direction the agent is facing. The arrow continues until it either hits
(and hence kills) the wumpus or hits a wall. The agent has only one arrow, so
only the first Shoot action has any effect. Finally, the action Climb can be
used to climb out of the cave, but only from square [1, 1].
Sensors: The agent has five sensors, each of which gives a single bit of
information:
– In the square containing the wumpus and in the directly (not diagonally)
adjacent squares, the agent will perceive a Stench.
– In the squares directly adjacent to a pit, the agent will perceive a Breeze.
– In the square where the gold is, the agent will perceive a Glitter.
– When an agent walks into a wall, it will perceive a Bump.
– When the wumpus is killed, it emits a woeful Scream that can be perceived
anywhere in the cave.
[Figure omitted: a sample 4×4 wumpus world.]
III. The percepts will be given to the agent program in the form of a list of five symbols;
for example, if there is a stench and a breeze, but no glitter, bump, or scream, the
agent program will get [Stench, Breeze, None, None, None]. We can characterize the
wumpus environment along the various dimensions: clearly, it is discrete, static, and
single-agent. (The wumpus doesn’t move, fortunately.) It is sequential, because rewards may
come only after many actions are taken. It is partially observable, because some
aspects of the state are not directly perceivable: the agent’s location, the wumpus’s
state of health, and the availability of an arrow.
IV. As for the locations of the pits and the wumpus: we could treat them as unobserved
parts of the state that happen to be immutable—in which case, the transition model for
the environment is completely known; or we could say that the transition model itself
is unknown because the agent doesn’t know which Forward actions are fatal—in
which case, discovering the locations of pits and wumpus completes the agent’s
knowledge of the transition model.
Exploring the wumpus world problem:
I. We use an informal knowledge representation language consisting of writing down
symbols in a grid (as in Figures 1 and 2). The agent’s initial knowledge base contains
the rules of the environment, as described previously; in particular, it knows that it is
in [1, 1] and that [1, 1] is a safe square; we denote that with an “A” and “OK,”
respectively, in square [1, 1].
II. The first percept is [None, None, None, None, None], from which the agent can
conclude that its neighboring squares, [1, 2] and [2, 1], are free of dangers—they are
OK. Figure 1(a) shows the agent’s state of knowledge at this point. A cautious agent
will move only into a square that it knows to be OK. Let us suppose the agent decides
to move forward to [2, 1]. The agent perceives a breeze (denoted by “B”) in [2, 1], so
there must be a pit in a neighboring square. The pit cannot be in [1, 1], by the rules of
the game, so there must be a pit in [2, 2] or [3, 1] or both. The notation “P?” in Figure
1(b) indicates a possible pit in those squares. At this point, there is only one known
square that is OK and that has not yet been visited. So the prudent agent will turn
around, go back to [1, 1], and then proceed to [1, 2].
[Figure 1 omitted: the agent’s state of knowledge after its first moves.]
III. The agent perceives a stench in [1, 2], resulting in the state of knowledge shown in
Figure 2(a). The stench in [1, 2] means that there must be a wumpus nearby. But the
wumpus cannot be in [1, 1], by the rules of the game, and it cannot be in [2, 2] (or the
agent would have detected a stench when it was in [2, 1]). Therefore, the agent can
infer that the wumpus is in [1, 3]. The notation W! indicates this inference. Moreover,
the lack of a breeze in [1, 2] implies that there is no pit in [2, 2]. Yet the agent has
already inferred that there must be a pit in either [2, 2] or [3, 1], so this means it must
be in [3, 1]. This is a fairly difficult inference, because it combines knowledge gained
at different times in different places and relies on the lack of a percept to make one
crucial step.
IV. The agent has now proved to itself that there is neither a pit nor a wumpus in [2, 2], so
it is OK to move there. We do not show the agent’s state of knowledge at [2, 2]; we
just assume that the agent turns and moves to [2, 3], giving us Figure 2(b). In [2, 3],
the agent detects a glitter, so it should grab the gold and then return home.
V. Note that in each case for which the agent draws a conclusion from the available
information, that conclusion is guaranteed to be correct if the available information is
correct.
VI. This is a fundamental property of logical reasoning. In the rest of this chapter, we
describe how to build logical agents that can represent information and draw
conclusions such as those described in the preceding paragraphs.
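As a small illustration of this kind of reasoning, the sketch below (my own, with an assumed percept encoding, not the book’s agent) marks a square OK when some visited neighbour reported neither a stench nor a breeze:

# A square is provably OK if a visited neighbour perceived nothing.
def neighbours(x, y, n=4):
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in cand if 1 <= a <= n and 1 <= b <= n]

# percepts[(x, y)] = (stench, breeze) observed at a visited square
percepts = {(1, 1): (False, False),   # [1,1]: nothing  -> [1,2], [2,1] are OK
            (2, 1): (False, True)}    # [2,1]: breeze   -> pit in [2,2] or [3,1]

ok = {(1, 1)}
for sq, (stench, breeze) in percepts.items():
    if not stench and not breeze:
        ok.update(neighbours(*sq))
print(sorted(ok))   # [(1, 1), (1, 2), (2, 1)]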
Unit 4
Explain universal and existential quantifier with suitable example.
A quantifier is a logical operator that asserts something about the values a given
variable may take in a formula.
First-order logic contains two standard quantifiers, called universal and existential.
1. Universal quantifier
The symbol ∀ is called the universal quantifier.
It expresses the fact that, in a particular universe of discourse, all objects have a
particular property.
o ∀x means: for all objects x, it is true that ...
∀ is usually pronounced “For all . . .”. (Remember that the upside-down A stands for
“all.”)
For example: ∀x King(x) ⇒ Person(x).
Thus, the sentence says, “For all x, if x is a king, then x is a person.” The symbol x is
called a variable. By convention, variables are lowercase letters. A variable is a term
all by itself, and as such can also serve as the argument of a function—for example,
LeftLeg(x). A term with no variables is called a ground term.
The universal quantifier can be considered as a repeated conjunction: suppose our
universe of discourse consists of the objects X1, X2, X3, and so on; then ∀x P(x) is
equivalent to P(X1) ∧ P(X2) ∧ P(X3) ∧ . . .
2. Existential quantifier
The symbol ∃ is called the existential quantifier.
It expresses the fact that, in a particular universe of discourse, there exists (at least
one) object having a particular property.
That is: ∃x means: there exists at least one object x such that ...
To say, for example, that King John has a crown on his head, we write
∃x Crown(x) ∧ OnHead(x, John) .
∃x is pronounced “There exists an x such that . . .” or “For some x . . .”
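Over a finite universe of discourse, ∀ behaves like a repeated conjunction and ∃ like a repeated disjunction, which Python’s all() and any() make explicit. A minimal sketch, assuming a toy three-object domain of my own invention:

# ∀ as a repeated conjunction, ∃ as a repeated disjunction.
universe = ["John", "Richard", "Mary"]
is_king = {"John": True, "Richard": False, "Mary": False}
is_person = {"John": True, "Richard": True, "Mary": True}

# ∀x King(x) ⇒ Person(x): implication is (not King(x)) or Person(x)
print(all((not is_king[x]) or is_person[x] for x in universe))   # True

# ∃x King(x) ∧ Person(x)
print(any(is_king[x] and is_person[x] for x in universe))        # True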
What is first order logic? Discuss the different elements used in first order logic.
I. First-order logic is another way of knowledge representation in artificial intelligence.
It is an extension to propositional logic. FOL is sufficiently expressive to represent
natural language statements in a concise way.
II. First-order logic is also known as Predicate logic or First-order predicate logic. First-order
logic is a powerful language that develops information about the objects in an
easier way and can also express the relationships between those objects.
III. First-order logic (like natural language) does not only assume that the world contains
facts like propositional logic but also assumes the following things in the world:
o Objects: A, B, people, numbers, colors, wars, theories, squares, pits,
wumpus,…
o Relations: It can be a unary relation such as: red, round, is adjacent; or an n-ary
relation such as: the sister of, brother of, has color, comes between
o Function: Father of, best friend, third inning of, end of,…
IV. As a natural language, first-order logic also has two main parts:
o Syntax
o Semantics
V. Basic Elements of First-order logic:
o Following are the basic elements of FOL syntax:
Constants: 1, 2, A, John, . . .
Variables: x, y, z, a, b, . . .
Predicates: Brother, Father, King, . . .
Functions: sqrt, LeftLeg, . . .
Connectives: ∧, ∨, ¬, ⇒, ⇔
Equality: =
Quantifiers: ∀, ∃
VI. Atomic sentences:
1. Atomic sentence
An atomic sentence (or atom for short) is formed from a predicate symbol
optionally followed by a parenthesized list of terms, such as Brother (Richard,
John).
This states, under the intended interpretation given earlier, that Richard the
Lionheart is the brother of King John. Atomic sentences can have complex
terms as arguments. Thus, Married (Father (Richard), Mother (John)) states
that Richard the Lionheart’s father is married to King John’s mother (again,
under a suitable interpretation).
An atomic sentence is true in a given model if the relation referred to by the
predicate symbol holds among the objects referred to by the arguments.
Atomic Sentence = Predicate(term1, . . . , term n), or term1 = term2.
An atomic sentence is formed from a predicate symbol followed by a list of
terms.
Examples:
o LargerThan(2, 3) is false.
o Brother_of(Mary, Pete) is false.
o Married(Father(Richard), Mother(John)) could be true or false.
o Atomic sentences are the most basic sentences of first-order logic. These sentences
are formed from a predicate symbol followed by a parenthesis with a sequence of
terms.
o We can represent atomic sentences as Predicate (term1, term2, ......, term n).
o Example: Ravi and Ajay are brothers: => Brothers(Ravi, Ajay).
Chinky is a cat: => cat (Chinky).
VII. Complex Sentences:
We can use logical connectives to construct more complex sentences, with the
same syntax and semantics as in propositional calculus.
Here are four sentences that are true under our intended interpretation:
o ¬Brother (LeftLeg(Richard), John)
o Brother (Richard , John) ∧ Brother (John,Richard)
o King(Richard ) ∨ King(John)
o ¬King(Richard) ⇒ King(John) .
o Complex sentences are made by combining atomic sentences using connectives.
VIII. First-order logic statements can be divided into two parts:
o Subject: Subject is the main part of the statement.
o Predicate: A predicate can be defined as a relation, which binds two atoms together
in a statement.
Consider the statement: "x is an integer.", it consists of two parts, the first part x is
the subject of the statement and second part "is an integer," is known as a predicate.
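One way to see how atomic sentences get their truth values is to evaluate them against an explicit model. The following Python sketch (illustrative only; the model contents and relation names are assumed) treats each predicate symbol as the set of argument tuples for which the relation holds:

# A model maps each predicate symbol to the tuples for which it holds.
model = {
    "Brother": {("Richard", "John"), ("John", "Richard")},
    "King":    {("John",)},
}

def holds(predicate, *terms):
    # An atomic sentence is true iff the argument tuple is in the relation.
    return terms in model.get(predicate, set())

print(holds("Brother", "Richard", "John"))                # True
print(holds("King", "Richard") or holds("King", "John"))  # King(Richard) ∨ King(John): True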
Convert the following natural sentences into FOL form.
1) Virat is a software engineer.
Ans. SoftwareEngineer(Virat)
2) All vehicles have wheels.
Ans. ∀x Vehicle(x) ⇒ HasWheel(x)
3) Someone speaks some language in class.
Ans. ∃x ∃y Person(x) ∧ Language(y) ∧ Speaks(x, y)
4) Everybody loves somebody sometimes.
Ans. ∀x ∃y LovesSometime(x, y)
5) All software engineers develop software.
Ans. ∀x SoftwareEngineer(x) ⇒ DevelopsSoftware(x)
ii. All batsmen are cricketers.
∀x Batsman(x) ⇒ Cricketer(x)
iii. Everybody speaks some language.
∀x Person(x) ⇒ ∃y Language(y) ∧ Speaks(x, y)
iv. Every car has a wheel.
∀x Car(x) ⇒ ∃y WheelOf(y, x)
v. Everybody loves somebody some time.
∀x ∃y LovesSometime(x, y)

What is knowledge engineering? Write the steps for its execution.
Knowledge Engineering
The process of constructing a knowledge base in FOL is called knowledge
engineering. Someone who investigates a particular domain, learns the important concepts of
that domain, and generates a formal representation of the objects in it is known as a
knowledge engineer.
The following steps are followed in the knowledge engineering process:
1. Identify the task:
⎯ The knowledge engineer must delineate the range of questions that the
knowledge base will support and the kinds of facts that will be available for each
specific problem instance.
⎯ This step is analogous to the PEAS process for designing agents.
2. Assemble the relevant knowledge:
⎯ The knowledge engineer might already be an expert in the domain, or might
need to work with real experts to extract what they know—a process called
knowledge acquisition.
⎯ At this stage, the scope of the knowledge is not represented formally.
⎯ The idea is to understand the scope of the knowledge base, as determined by the
task, and to understand how the domain actually works.
3. Decide on vocabulary:
⎯ That is, translate the important domain-level concepts into logic-level names.
⎯ This involves many questions of knowledge-engineering style; like programming
style, it can have a significant impact on the eventual success of the project.
4. Encode general knowledge about the domain:
⎯ The knowledge engineer writes down the axioms for all the vocabulary terms.
⎯ This pins down the meaning of the terms, enabling the expert to check the
content.
⎯ Often, this step reveals misconceptions or gaps in the vocabulary that must be
fixed by returning to step 3 and iterating through the process.
5. Encode a description of the problem instance:
⎯ If the ontology is well thought out, this step will be easy.
⎯ It will involve writing simple atomic sentences about instances of concepts
that are already part of the ontology.
6. Pose queries to the inference procedure and get answers:
⎯ This is where the reward is: we can let the inference procedure operate on the
axioms and problem-specific facts to derive the facts we are interested in
knowing.
⎯ Thus, we avoid the need for writing an application-specific solution algorithm.
7. Debug the knowledge base:
⎯ In this step, debugging of the knowledge base takes place.
⎯ This is the last step of the complete process; here we identify and fix missing
or incorrect axioms.
I. Knowledge engineering is a field of artificial intelligence (AI) that tries to emulate the
judgment and behaviour of a human expert in a given field.
II. Knowledge engineering is the technology behind the creation of expert systems to
assist with issues related to their programmed field of knowledge. Expert systems
involve a large and expandable knowledge base integrated with a rules engine that
specifies how to apply information in the knowledge base to each particular situation.
III. The systems may also incorporate machine learning so that they can learn from
experience in the same way that humans do. Expert systems are used in various fields
including healthcare, customer service, financial services, manufacturing and the law.
IV. Using algorithms to emulate the thought patterns of a subject matter expert,
knowledge engineering tries to take on questions and issues as a human expert would.
Looking at the structure of a task or decision, knowledge engineering studies how the
conclusion is reached.
V. A library of problem-solving methods and a body of collateral knowledge are used to
approach the issue or question. The amount of collateral knowledge can be very large.
Depending on the task and the knowledge that is drawn on, the virtual expert may
assist with troubleshooting, solving issues, assisting a human or acting as a virtual
agent.
VI. Scientists originally attempted knowledge engineering by trying to emulate real
experts. Using the virtual expert was supposed to get you the same answer as you
would get from a human expert. This approach was called the transfer approach.
However, the expertise that a specialist required to answer questions or respond
to issues posed to it needed too much collateral knowledge: information that is not
central to the given issue but still applied to make judgments.
VII. A surprising amount of collateral knowledge is required to enable analogous
reasoning and nonlinear thought. Currently, a modelling approach is used where the
same knowledge and process need not necessarily be used to reach the same
conclusion for a given question or issue. Eventually, it is expected that knowledge
engineering will produce a specialist that surpasses the abilities of its human
counterparts.
Steps for knowledge engineering execution
Knowledge engineering projects vary widely in content, scope, and difficulty, but all such
projects include the following steps:
I. Identify the task.
The knowledge engineer must delineate the range of questions that the
knowledge base will support and the kinds of facts that will be available for
each specific problem instance.
For example, does the wumpus knowledge base need to be able to choose
actions or is it required to answer questions only about the contents of the
environment? Will the sensor facts include the current location? The task will
determine what knowledge must be represented in order to connect problem
instances to answers.
This step is analogous to the PEAS process for designing agents.
II. Assemble the relevant knowledge.
The knowledge engineer might already be an expert in the domain, or might
need to work with real experts to extract what they know—a process called
knowledge acquisition.
At this stage, the knowledge is not represented formally. The idea is to
understand the scope of the knowledge base, as determined by the task, and to
understand how the domain actually works.
For the wumpus world, which is defined by an artificial set of rules, the
relevant knowledge is easy to identify.
For real domains, the issue of relevance can be quite difficult—for example, a
system for simulating VLSI designs might or might not need to take into
account stray capacitances and skin effects.
III. Decide on a vocabulary of predicates, functions, and constants.
That is, translate the important domain-level concepts into logic-level names.
This involves many questions of knowledge-engineering style.
Like programming style, this can have a significant impact on the eventual
success of the project. For example, should pits be represented by objects or
by a unary predicate on squares? Should the agent’s orientation be a function
or a predicate? Should the wumpus’s location depend on time? Once the
choices have been made, the result is a vocabulary that is known as the
ontology of the domain.
The word ontology means a particular theory of the nature of being or
existence.
The ontology determines what kinds of things exist, but does not determine
their specific properties and interrelationships.
IV. Encode general knowledge about the domain.
The knowledge engineer writes down the axioms for all the vocabulary terms.
This pins down (to the extent possible) the meaning of the terms, enabling the
expert to check the content.
Often, this step reveals misconceptions or gaps in the vocabulary that must be
fixed by returning to step 3 and iterating through the process.
V. Encode a description of the specific problem instance.
If the ontology is well thought out, this step will be easy. It will involve
writing simple atomic sentences about instances of concepts that are already
part of the ontology.
For a logical agent, problem instances are supplied by the sensors, whereas a
“disembodied” knowledge base is supplied with additional sentences in the
same way that traditional programs are supplied with input data.
VI. Pose queries to the inference procedure and get answers.
This is where the reward is: we can let the inference procedure operate on the
axioms and problem-specific facts to derive the facts we are interested in
knowing.
Thus, we avoid the need for writing an application-specific solution algorithm.
VII. Debug the knowledge base.
Alas, the answers to queries will seldom be correct on the first try. More
precisely, the answers will be correct for the knowledge base as written,
assuming that the inference procedure is sound, but they will not be the ones
that the user is expecting.
For example, if an axiom is missing, some queries will not be answerable from
the knowledge base. A considerable debugging process could ensue.
Missing axioms or axioms that are too weak can be easily identified by
noticing places where the chain of reasoning stops unexpectedly.
Give comparison between forward chaining and backward chaining.
I. Backward chaining is the same idea as forward chaining except that you start with
requiring the learner to complete the last step of the task analysis. This means that you
will perform all the preceding steps either for or with the learner and then begin to
fade your prompts with the last step only.
II. Reinforcement is provided contingent upon the last step being completed. Once the
learner is able to complete the last step independently, you will require the learner to
complete the last two steps before receiving a reinforcer, and so on, until the learner is
able to complete the entire chain independently before receiving access to a
reinforcer.
III. Backward chaining uses the same basic approach as forward chaining but in reverse
order. That is, you start with the last step in the chain rather than the first. The
therapist can either prompt the learner through the entire sequence, without
opportunities for independent responding, until he gets to the final step (and then
teach that step), or the therapist can initiate the teaching interaction by going straight
to the last step.
IV. Either way, when the last step occurs, the therapist uses prompting to help the learner
perform the step correctly, reinforces correct responding with a powerful reinforcer,
and then fades prompts across subsequent trials. When the last step is mastered, then
each teaching interaction begins with the second-to-last step, and so on, until the first
step in the chain is mastered, at which point the whole task analysis is mastered.
V. Backward chaining is a kind of AND/OR search—the OR part because the goal query
can be proved by any rule in the knowledge base, and the AND part because all the
conjuncts in the lhs of a clause must be proved.
VI. Backward chaining, as we have written it, is clearly a depth-first search algorithm.
This means that its space requirements are linear in the size of the proof (neglecting,
for now, the space required to accumulate the solutions). It also means that backward
chaining (unlike forward chaining) suffers from problems with repeated states and
incompleteness. We will discuss these problems and some potential solutions, but first
we show how backward chaining is used in logic programming systems.
Proof tree constructed by backward chaining to prove that West is a criminal.
[Proof tree figure omitted.]
The tree should be read depth first, left to right. To prove Criminal (West), we have to
prove the four conjuncts below it. Some of these are in the knowledge base, and
others require further backward chaining. Bindings for each successful unification are
shown next to the corresponding subgoal. Note that once one subgoal in a
conjunction succeeds, its substitution is applied to subsequent subgoals. Thus, by the
time FOL-BC-ASK gets to the last conjunct, originally Hostile(z), z is already bound
to Nono.
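The AND/OR structure of backward chaining is easy to see in a propositional sketch. The Python code below is my own simplification of the idea behind FOL-BC-ASK (the propositionalized rule and fact names for the Criminal(West) example are assumed): it tries each rule whose conclusion matches the goal (OR) and recursively proves all of that rule's premises (AND):

# Propositionalized rules for the Criminal(West) example (names assumed).
rules = [(["American_West", "Weapon_M1", "Sells_West_M1_Nono", "Hostile_Nono"],
          "Criminal_West"),
         (["Missile_M1"], "Weapon_M1"),
         (["Missile_M1", "Owns_Nono_M1"], "Sells_West_M1_Nono"),
         (["Enemy_Nono_America"], "Hostile_Nono")]
facts = {"American_West", "Missile_M1", "Owns_Nono_M1", "Enemy_Nono_America"}

def bc(goal):
    if goal in facts:                       # a known fact is proved directly
        return True
    for premises, conclusion in rules:      # OR: any rule with this conclusion
        if conclusion == goal and all(bc(p) for p in premises):  # AND: premises
            return True
    return False

print(bc("Criminal_West"))   # True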
Forward chaining is also known as forward deduction or forward reasoning when
using an inference engine. Forward chaining is a form of reasoning which starts with
atomic sentences in the knowledge base and applies inference rules (Modus Ponens) in the
forward direction to extract more data until a goal is reached. The forward-chaining
algorithm starts from known facts, triggers all rules whose premises are satisfied, and adds
their conclusions to the known facts. This process repeats until the problem is solved.
E.g. if it is raining, we will take the umbrella.
Data: it is raining.
Goal/Decision: we will take the umbrella.
Properties of Forward-Chaining:
o It is a bottom-up approach, as it moves from the data up to the conclusion.
o It is a process of making a conclusion based on known facts or data, starting from
the initial state and reaching the goal state.
o The forward-chaining approach is also called data-driven, as we reach the goal using
the available data.
o The forward-chaining approach is commonly used in expert systems (such as CLIPS),
business rule systems, and production rule systems.
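A minimal propositional forward-chaining loop can be sketched in Python around the umbrella example above (the encoding and rule names are my own illustration): it repeatedly fires any rule whose premises are all known and adds its conclusion, until nothing new can be derived.

# Forward chaining: fire rules until no new facts appear.
rules = [(["raining"], "take_umbrella"),
         (["take_umbrella"], "stay_dry")]
facts = {"raining"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if conclusion not in facts and all(p in facts for p in premises):
            facts.add(conclusion)           # add the rule's conclusion
            changed = True
print(facts)   # {'raining', 'take_umbrella', 'stay_dry'}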
Unit 5
What is planning? What is the need of planning?
Planning in artificial intelligence can be defined as a task that needs decision
making by an intelligent system to accomplish a given target.
• There is one more definition of planning, which says that planning is an activity where
an agent has to come up with a sequence of actions to accomplish the target.
• Here we have information about the initial state of the agent, the goal condition of the
agent, and the set of actions the agent can take.
• The aim of the agent is to find the proper sequence of actions which will lead from the
start state to the goal state and produce an efficient solution.
• A planning agent interacts with the environment using sensors and actuators.
• When a task comes to the agent, it has to decide the sequence of actions to be taken and
then execute those actions accordingly.
• Planning Problem:
⎯ The states of an agent correspond to the probable surrounding environment,
while the actions and goal of an agent are specified in a logical formulation.
⎯ To achieve any goal, an agent has to answer a few questions, such as what the
effect of its action will be and how it will affect subsequent actions.
Planning Domain Definition Language (PDDL)
PDDL is the standard encoding language for “classical” planning tasks. Components of a
PDDL planning task:
o Objects: Things in the world that interest us.
o Predicates: Properties of objects that we are interested in; can be true or false.
o Initial state: The state of the world that we start in.
o Goal specification: Things that we want to be true.
o Actions/Operators: Ways of changing the state of the world.
Init(At(C1, SFO) ∧ At(C2, JFK) ∧ At(P1, SFO) ∧ At(P2, JFK)
∧ Cargo(C1) ∧ Cargo(C2) ∧ Plane(P1) ∧ Plane(P2) ∧ Airport(JFK) ∧ Airport(SFO))
Goal(At(C1, JFK) ∧ At(C2, SFO))
Action(Load(c, p, a),
PRECOND: At(c, a) ∧ At(p, a) ∧ Cargo(c) ∧ Plane(p) ∧ Airport(a)
EFFECT: ¬At(c, a) ∧ In(c, p))
Action(Unload(c, p, a),
PRECOND: In(c, p) ∧ At(p, a) ∧ Cargo(c) ∧ Plane(p) ∧ Airport(a)
EFFECT: At(c, a) ∧ ¬In(c, p))
Action(Fly(p, from, to),
PRECOND: At(p, from) ∧ Plane(p) ∧ Airport(from) ∧ Airport(to)
EFFECT: ¬At(p, from) ∧ At(p, to))
I. Figure shows an air cargo transport problem involving loading and unloading cargo
and flying it from place to place. The problem can be defined with three actions:
Load, Unload, and Fly.
II. The actions affect two predicates: In(c, p) means that cargo c is inside plane p, and
At(x, a) means that object x (either plane or cargo) is at airport a. Note that some care
must be taken to make sure the At predicates are maintained properly.
III. When a plane flies from one airport to another, all the cargo inside the plane goes
with it. In first-order logic it would be easy to quantify over all objects that are inside
the plane.
IV. But basic PDDL does not have a universal quantifier, so we need a different solution.
The approach we use is to say that a piece of cargo ceases to be At anywhere when it
is In a plane; the cargo only becomes At the new airport when it is unloaded. So At
really means “available for use at a given location.”
V. The following plan is a solution to the problem:
[Load (C1, P1, SFO), Fly (P1, SFO, JFK), Unload (C1, P1, JFK),
Load (C2, P2, JFK), Fly (P2, JFK, SFO), Unload (C2, P2, SFO)].
VI. Finally, there is the problem of spurious actions such as Fly (P1, JFK, JFK), which
should be a no-op, but which has contradictory effects (according to the definition, the
effect would include At(P1, JFK) ∧ ¬At(P1, JFK)). It is common to ignore such
problems, because they seldom cause incorrect plans to be produced. The correct
approach is to add inequality preconditions saying that the from and to airports must
be different.
STRIPS Operators
I. STRIPS stands for STanford Research Institute Problem Solver. It offers:
Tidily arranged action descriptions
A restricted language (function-free literals)
Efficient algorithms
II. States are represented by a conjunction of ground (function-free) atoms.
Example: At(Home), Have(Bread)
Closed world assumption: atoms that are not present are assumed to be false.
Example: the state At(Home), Have(Bread) implicitly contains
¬Have(Milk), ¬Have(Bananas), ¬Have(Drill).
Operator applicability:
An operator o is applicable in a state s if there is a substitution Subst of the free variables
such that Subst(precond(o)) ⊆ s.
Example: Buy(x), with precondition At(p) ∧ Sells(p, x), is applicable in the state
At(Shop) ∧ Sells(Shop, Milk) ∧ Have(Bread)
with the substitution
Subst = { p/Shop, x/Milk }
Resulting state:
The resulting state is computed from the old state and the literals in Subst(effect(o)):
1. Positive literals are added to the state.
2. Negative literals are removed from the state.
3. All other literals remain unchanged (this avoids the frame problem).
Formally: s' = (s ∪ {P | P a positive atom, P ∈ Subst(effect(o))})
\ {P | P a positive atom, ¬P ∈ Subst(effect(o))}
Example: applying
Drive(a, b) precond: At(a), Road(a, b) effect: At(b), ¬At(a)
to the state
At(Koblenz), Road(Koblenz, Landau)
results in
At(Landau), Road(Koblenz, Landau)
A complete set of STRIPS operators can be translated into a set of successor-state axioms.
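The applicability test and the add/delete state update described above can be sketched for ground operators in a few lines of Python (my own illustration; states are sets of ground atoms under the closed-world assumption, and the Drive example mirrors the one above):

# STRIPS state update: add positive effects, delete negated ones.
def applicable(state, precond):
    return precond <= state                 # precondition atoms all hold

def apply_op(state, precond, add, delete):
    assert applicable(state, precond)
    return (state | add) - delete           # everything else is unchanged

state = {"At(Koblenz)", "Road(Koblenz,Landau)"}
new_state = apply_op(state,
                     precond={"At(Koblenz)", "Road(Koblenz,Landau)"},
                     add={"At(Landau)"},
                     delete={"At(Koblenz)"})
print(new_state)   # {'At(Landau)', 'Road(Koblenz,Landau)'}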
Write note on planning graph.
• A planning graph is a special data structure which is used to obtain better heuristic
accuracy. It is a directed graph and is useful for computing improved heuristic estimates.
• Any of the search techniques can make use of a planning graph. Also, GRAPHPLAN can
be used to extract a solution directly.
• A planning graph works only for propositional problems, without variables.
• A planning graph consists of a series of levels that correspond to time steps in the plan;
every level has a set of literals and a set of actions.
• Level 0 is the initial state of the planning graph.
• Properties of a planning graph:
⎯ If the goal is absent from the last level, then the goal cannot be achieved.
⎯ If there exists a path to the goal, then the goal is present in the last level.
⎯ If the goal is present in the last level, a path to it may still not exist.
I. Planning graphs are an efficient way to create a representation of a planning problem
that can be used to achieve better heuristic estimates or to construct plans directly.
II. Planning graphs only work for propositional problems.
III. Planning graphs consist of a sequence of levels that correspond to time steps in the plan.
Level 0 is the initial state. Each level consists of a set of literals and a set of actions
that represent what might be possible at that step in the plan. “Might be” is the key to
efficiency: the graph records only a restricted subset of the possible negative interactions
among actions.
IV. Each level consists of
Literals = all those that could be true at that time step, depending upon the
actions executed at preceding time steps.
Actions = all those actions that could have their preconditions satisfied at that
time step, depending on which of the literals actually hold.
V. The GRAPHPLAN algorithm repeatedly adds a level to a planning graph with
EXPAND-GRAPH. Once all the goals show up as nonmutex in the graph,
GRAPHPLAN calls EXTRACT-SOLUTION to search for a plan that solves the
problem. If that fails, it expands another level and tries again, terminating with failure
when there is no reason to go on.
[Figure omitted: the GRAPHPLAN algorithm and the planning graph for the spare tire
problem.]
Initially the graph consists of the 5 literals from the initial state and the CWA literals (S0).
EXPAND-GRAPH (A0) then adds the actions whose preconditions are satisfied, together
with the persistence actions and mutex relations, and adds their effects at level S1.
This is repeated until the goal appears in some level Si.
VI. EXPAND-GRAPH also looks for mutex relations:
Inconsistent effects: e.g., Remove(Spare, Trunk) and LeaveOvernight, due to
At(Spare, Ground) and ¬At(Spare, Ground).
Interference: e.g., Remove(Flat, Axle) and LeaveOvernight, with At(Flat, Axle) as
a precondition and ¬At(Flat, Axle) as an effect.
Competing needs: e.g., PutOn(Spare, Axle) and Remove(Flat, Axle), due to
At(Flat, Axle) and ¬At(Flat, Axle).
Inconsistent support: e.g., in S2, At(Spare, Axle) and At(Flat, Axle).
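As an illustration, the first of these mutex tests, inconsistent effects, can be sketched in Python as follows (the action encoding as dicts of add/delete sets is my own assumption, not GRAPHPLAN's actual data structures):

# Two actions are mutex if one deletes an atom the other adds.
def inconsistent_effects(a, b):
    return bool((a["add"] & b["delete"]) or (b["add"] & a["delete"]))

remove_spare = {"add": {"At(Spare,Ground)"},
                "delete": {"At(Spare,Trunk)"}}
leave_overnight = {"add": set(),
                   "delete": {"At(Spare,Ground)", "At(Spare,Trunk)",
                              "At(Flat,Axle)"}}
print(inconsistent_effects(remove_spare, leave_overnight))   # True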
Write note on semantic network.
I. Semantic networks are an alternative to predicate logic as a form of knowledge
representation. The idea is that we can store our knowledge in the form of a graph,
with nodes representing objects in the world, and arcs representing relationships
between those objects.
II. A semantic network, or frame network is a knowledge base that
represents semantic relations between concepts in a network. It is
a directed or undirected graph consisting of vertices, which represent concepts,
and edges, which represent semantic relations between concepts, mapping or
connecting semantic fields. A semantic network may be instantiated as, for example,
a graph database or a concept map.
III. Typical standardized semantic networks are expressed as semantic triples. Semantic
networks are used in natural language processing applications such as semantic
parsing and word-sense disambiguation.
IV. The structural idea is that knowledge can be stored in the form of graphs, with nodes
representing objects in the world, and arcs representing relationships between those
objects.
Semantic nets consist of nodes, links, and link labels. In these network
diagrams, nodes appear in the form of circles, ellipses, or even rectangles, which
represent objects such as physical objects, concepts, or situations.
Links appear as arrows to express the relationships between objects, and link
labels specify particular relations.
Relationships provide the basic structure needed for organizing knowledge, so
the objects and relations involved need not be concrete.
Semantic nets are also referred to as associative nets, as the nodes are
associated with other nodes.
[Figure omitted: a semantic network linking Mary, John, FemalePersons, Persons, and
related categories.]
V. For example, the figure has a MemberOf link between Mary and FemalePersons,
corresponding to the logical assertion Mary ∈ FemalePersons; similarly, the SisterOf
link between Mary and John corresponds to the assertion SisterOf(Mary, John). We
can connect categories using SubsetOf links, and so on. It is such fun drawing bubbles
and arrows that one can get carried away.
VI. For example, we know that persons have female persons as mothers, so can we draw a
HasMother link from Persons to FemalePersons? The answer is no, because
HasMother is a relation between a person and his or her mother, and categories do not
have mothers. For this reason, we have used a special notation—the double-boxed
link—in the figure. This link asserts that
∀x x ∈ Persons ⇒ [∀y HasMother(x, y) ⇒ y ∈ FemalePersons] .
We might also want to assert that persons have two legs—that is,
∀x x ∈ Persons ⇒ Legs(x, 2)
VII. Semantic networks are majorly used for:
Representing data
Revealing structure (relations, proximity, relative importance)
Supporting conceptual editing
Supporting navigation
VIII. Advantages of Using Semantic Networks
The semantic network is more natural than the logical representation.
The semantic network permits the use of effective inference algorithms
(graph algorithms).
They are simple and can be easily implemented and understood.
The semantic network can be used as a typical connection application among various
fields of knowledge, for instance, between computer science and anthropology.
The semantic network permits a simple approach to investigating the problem space.
IX. Disadvantages of Using Semantic Networks
There is no standard definition for link names.
Semantic nets are not intelligent; they depend on their creator.
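A semantic network is straightforward to store as a labelled directed graph. The sketch below (my own illustration, using the Mary/John example from the figure) represents each arc as a (source, label, destination) triple and follows arcs by label:

# A semantic network as a list of labelled, directed edges.
edges = [("Mary", "MemberOf", "FemalePersons"),
         ("Mary", "SisterOf", "John"),
         ("FemalePersons", "SubsetOf", "Persons")]

def related(node, label):
    # Follow every arc with the given label out of a node.
    return [dst for src, lab, dst in edges if src == node and lab == label]

print(related("Mary", "SisterOf"))    # ['John']
print(related("Mary", "MemberOf"))    # ['FemalePersons']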
Write a note on Event calculus.
We can represent a situation in the real world by specifying the date, time, place, related
people, and many other related objects. But there are many occasions involving continuous
actions, such as filling a bucket with water, solving a puzzle, etc.
Situation calculus can only specify the condition at the start of an action and at the end
of an action; it cannot represent what happened while the action was taking place. Considering
the same example, situation calculus will specify that the bucket was empty at the start of the
action and that at the end the bucket is full.
One more limitation of situation calculus is that it cannot represent simultaneous actions,
e.g., writing an assignment while watching a TV program. To handle such things, we have
event calculus.
Event calculus is based on time points instead of only a start state and an end state. An
event can be described as an instance of an event category. Event calculus consists of events
and fluents. Fluents are objects that represent facts but do not by themselves specify their
truthfulness.
Knowledge is the most important aspect of a knowledge-based agent. Such agents use
the available knowledge and deduce new knowledge from it. As in the case of a human, if
we need some information, we ask a question of the person who knows the matter.
Similarly, the knowledge-based agent also needs to know what information it has in order
to deduce new information. It needs to know about the inference process, so that it gets the
answer it wants to have.
This initiates the need to represent the objects in someone's head, called mental objects,
and the processes that manipulate those objects, called mental events. Using mental objects
and events, an agent can reason about the beliefs of other agents.
Mental objects for agents can express attitudes such as knows, wants, believes, intends,
and informs. These attitudes cannot be used as normal predicates; hence, we need to reify
them.
I. Situation calculus is limited in its applicability: it was designed to describe a world in
which actions are discrete, instantaneous, and happen one at a time. Consider a
continuous action, such as filling a bathtub. Situation calculus can say that the tub is
empty before the action and full when the action is done, but it can’t talk about what
happens during the action. It also can’t describe two actions happening at the same
time—such as brushing one’s teeth while waiting for the tub to fill. To handle such
cases we introduce an alternative formalism known as event calculus, which is based
on points of time rather than on situations.
II. Event calculus reifies fluents and events. The fluent At(Shankar , Berkeley) is an
object that refers to the fact of Shankar being in Berkeley, but does not by itself say
anything about whether it is true. To assert that a fluent is actually true at some point
in time we use the predicate T, as in T(At(Shankar , Berkeley), t).
III. Events are described as instances of event categories. The event E1 of Shankar flying
from San Francisco to Washington, D.C. is described as
E1 ∈ Flyings ∧ Flyer (E1, Shankar ) ∧ Origin(E1, SF) ∧ Destination(E1,DC) .
IV. If this is too verbose, we can define an alternative three-argument version of the
category of flying events and say
E1 ∈ Flyings(Shankar , SF,DC) .
V. We then use Happens(E1, i) to say that the event E1 took place over the time interval
i, and we say the same thing in functional form with Extent(E1)=i. We represent time
intervals by a (start, end) pair of times; that is, i = (t1, t2) is the time interval that
starts at t1 and ends at t2.
VI. The complete set of predicates for one version of the event calculus is:
T(f, t) — fluent f is true at time t
Happens(e, i) — event e happens over the time interval i
Initiates(e, f, t) — event e causes fluent f to start to hold at time t
Terminates(e, f, t) — event e causes fluent f to cease to hold at time t
Clipped(f, i) — fluent f ceases to be true at some point during interval i
Restored(f, i) — fluent f becomes true sometime during interval i
VII. We can extend event calculus to make it possible to represent simultaneous events
(such as two people being necessary to ride a seesaw), exogenous events (such as the
wind blowing and changing the location of an object), continuous events (such as the
level of water in the bathtub continuously rising) and other complications.
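As a rough illustration of this bookkeeping, the Python sketch below (an assumed, much-simplified encoding, not the full event calculus axioms) treats a fluent as true at time t if some event initiated it at or before t and no later event terminated it in between:

# Simplified event-calculus lookup over time points.
initiates = [("E1", "At(Shankar,DC)", 5)]     # (event, fluent, time)
terminates = [("E0", "At(Shankar,SF)", 5)]

def T(fluent, t):
    started = [ti for _, f, ti in initiates if f == fluent and ti <= t]
    if not started:
        return False
    last_start = max(started)
    clipped = any(f == fluent and last_start <= tj <= t
                  for _, f, tj in terminates)
    return not clipped

print(T("At(Shankar,DC)", 7))   # True: initiated at 5, never terminated
print(T("At(Shankar,SF)", 7))   # False: nothing initiated it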