The Shikshak Tyit Sem 5 Artificial Intelligence Question Papers Solution N18 A19 N19 N22 A23

Telegram | youtube | Android App | website | instagram | whatsapp | e-next

For detailed Video Lecture Download The Shikshak Edu App



BSC-IT SEM 5 Exam Planning for 100% Passing Result | Strategies, Tips, and Resources (theshikshak.com)

We keep updating our blog, so bookmark it in your favorite browser.

Stay connected
The Shikshak Edu App Telegram Youtube

AI Lectures playlist
• https://www.youtube.com/watch?v=RK_J8ir1UdU&list=PLG0jPn7yVt507BWEgSI72Nv3irA6RDYga


Unit 1
1. What is Artificial Intelligence? State its applications. (NOV 2018)
OR
What is Artificial Intelligence? Explain with example. (APR 2019)
OR
Elaborate artificial intelligence with suitable example along with its applications. (NOV 2019)
OR
Explain any five different applications of Artificial Intelligence. (APR 2023)
ANS The definitions on top (in Figure 1.1) are concerned with thought processes and reasoning, whereas the ones on the bottom address behavior. The definitions on the left measure success in terms of fidelity to human performance, whereas the ones on the right measure against an ideal performance measure, called rationality. A system is rational if it does the "right thing," given what it knows. Historically, all four approaches to AI have been followed, each by different people with different methods.

Thinking Humanly
"The exciting new effort to make computers think . . . machines with minds, in the full and literal sense." (Haugeland, 1985)
"[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning . . ." (Bellman, 1978)

Thinking Rationally
"The study of mental faculties through the use of computational models." (Charniak and McDermott, 1985)
"The study of the computations that make it possible to perceive, reason, and act." (Winston, 1992)

Acting Humanly
"The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil, 1990)
"The study of how to make computers do things at which, at the moment, people are better." (Rich and Knight, 1991)

Acting Rationally
"Computational Intelligence is the study of the design of intelligent agents." (Poole et al., 1998)
"AI . . . is concerned with intelligent behavior in artifacts." (Nilsson, 1998)

Figure 1.1 Some definitions of artificial intelligence, organized into four categories.

Application of AI
Robotic vehicles: A driverless robotic car named STANLEY sped through the rough terrain of the Mojave desert at 22 mph, finishing the 132-mile course first to win the 2005 DARPA Grand Challenge. STANLEY is a Volkswagen Touareg outfitted with cameras, radar, and laser rangefinders to sense the environment and onboard software to command the steering, braking, and acceleration (Thrun, 2006). The following year CMU's BOSS won the Urban Challenge, safely driving in traffic through the streets of a closed Air Force base, obeying traffic rules and avoiding pedestrians and other vehicles.


Speech recognition: A traveler calling United Airlines to book a flight can have the entire conversation guided by an automated speech recognition and dialog management system.
Autonomous planning and scheduling: A hundred million miles from Earth, NASA's Remote Agent program became the first on-board autonomous planning program to control the scheduling of operations for a spacecraft (Jonsson et al., 2000). REMOTE AGENT generated plans from high-level goals specified from the ground and monitored the execution of those plans, detecting, diagnosing, and recovering from problems as they occurred. Successor program MAPGEN (Al-Chang et al., 2004) plans the daily operations for NASA's Mars Exploration Rovers, and MEXAR2 (Cesta et al., 2007) did mission planning, both logistics and science planning, for the European Space Agency's Mars Express mission in 2008.
Game playing: IBM's DEEP BLUE became the first computer program to defeat the world champion in a chess match when it bested Garry Kasparov by a score of 3.5 to 2.5 in an exhibition match (Goodman and Keene, 1997). Kasparov said that he felt a "new kind of intelligence" across the board from him. Newsweek magazine described the match as "The brain's last stand." The value of IBM's stock increased by $18 billion. Human champions studied Kasparov's loss and were able to draw a few matches in subsequent years, but the most recent human-computer matches have been won convincingly by the computer.
Spam fighting: Each day, learning algorithms classify over a billion messages as spam, saving the recipient from having to waste time deleting what, for many users, could comprise 80% or 90% of all messages if not classified away by algorithms. Because spammers are continually updating their tactics, it is difficult for a static programmed approach to keep up, and learning algorithms work best (Sahami et al., 1998; Goodman and Heckerman, 2004).
Logistics planning: During the Persian Gulf crisis of 1991, U.S. forces deployed a Dynamic Analysis and Replanning Tool, DART (Cross and Walker, 1994), to do automated logistics planning and scheduling for transportation. This involved up to 50,000 vehicles, cargo, and people at a time, and had to account for starting points, destinations, routes, and conflict resolution among all parameters. The AI planning techniques generated in hours a plan that would have taken weeks with older methods. The Defense Advanced Research Projects Agency (DARPA) stated that this single application more than paid back DARPA's 30-year investment in AI.
Robotics: The iRobot Corporation has sold over two million Roomba robotic vacuum cleaners for home use. The company also deploys the more rugged PackBot to Iraq and Afghanistan, where it is used to handle hazardous materials, clear explosives, and identify the location of snipers.

2 Discuss Turing test with Artificial Intelligence approach. (NOV 2018)
OR
Explain Artificial Intelligence with Turing Test approach. (NOV 2022)
OR
What is the purpose of the Turing test?
ANS The definitions on top are concerned with thought processes and reasoning, whereas the ones on the bottom address behavior. The definitions on the left measure success in terms of fidelity to human performance, whereas the ones on the right measure against an ideal performance measure, called rationality. A system is rational if it does the "right thing," given what it knows.

Acting humanly: The Turing Test approach


The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence. A computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer. Here we note that programming a computer to pass a rigorously applied test provides plenty to work on. The computer would need to possess the following capabilities:
• natural language processing to enable it to communicate successfully in English;
• knowledge representation to store what it knows or hears;
• automated reasoning to use the stored information to answer questions and to draw new conclusions;
• machine learning to adapt to new circumstances and to detect and extrapolate patterns.
Turing's test deliberately avoided direct physical interaction between the interrogator and the computer because physical simulation of a person is unnecessary for intelligence. However, the so-called total Turing Test includes a video signal so that the interrogator can test the subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical objects "through the hatch." To pass the total Turing Test, the computer will need
• computer vision to perceive objects, and
• robotics to manipulate objects and move about.
These six disciplines compose most of AI, and Turing deserves credit for designing a test that remains relevant 60 years later.

3 What are agents? Explain how they interact with environment. (NOV 2018)
OR
Explain the concept of agent and environment. (APR 2019)
OR
State the relationship between agents and environment. (NOV 2019)
OR
State the relationship between agents and environment. (NOV 2022)
ANS An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
A human agent has eyes, ears, and other organs for sensors and hands, legs, vocal tract, and so on for actuators.
A robotic agent might have cameras and infrared range finders for sensors and various motors for actuators.
A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.
We use the term percept to refer to the agent's perceptual inputs at any given instant. An agent's percept sequence is the complete history of everything the agent has ever perceived. An agent's behavior is described by the agent function that maps any given percept sequence to an action.


Figure 2.1 Agents interact with environments through sensors and actuators.

Figure 2.2 A vacuum-cleaner world with just two locations.

To illustrate these ideas, we use a very simple example: the vacuum-cleaner world shown in Figure 2.2. This world is so simple that we can describe everything that happens; it's also a made-up world, so we can invent many variations. This particular world has just two locations: squares A and B. The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move left, move right, suck up the dirt, or do nothing. One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square. A partial tabulation of this agent function is shown in Figure 2.3.
Percept sequence                                  Action
[A, Clean]                                        Right
[A, Dirty]                                        Suck
[B, Clean]                                        Left
[B, Dirty]                                        Suck
[A, Clean], [A, Clean]                            Right
[A, Clean], [A, Dirty]                            Suck
...
[A, Clean], [A, Clean], [A, Clean]                Right
[A, Clean], [A, Clean], [A, Dirty]                Suck
...

Figure 2.3 Partial tabulation of a simple agent function for the vacuum-cleaner world shown in Figure 2.2.
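The tabulation in Figure 2.3 can be sketched in code as a table-driven agent: the percept sequence to date is the key into a table that maps sequences to actions. This is a minimal illustrative sketch (the names and structure are assumptions, not from the textbook):

```python
# Table mapping percept sequences (as tuples) to actions, following Figure 2.3.
TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

percepts = []  # the agent's percept sequence to date

def table_driven_agent(percept):
    """Append the new percept and look up the full sequence in the table."""
    percepts.append(percept)
    return TABLE.get(tuple(percepts))

print(table_driven_agent(("A", "Clean")))  # Right
print(table_driven_agent(("A", "Dirty")))  # Suck
```

Note that the table grows exponentially with the length of the percept sequence, which is why table-driven agents are a conceptual device rather than a practical design.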


4 What is a rational agent? Discuss in brief about rationality. (NOV 2018)
OR
Explain the rational agent approach of AI. (APR 2019)
OR
Explain the concept of Rationality. (NOV 2019)
ANS A rational agent is one that does the right thing: conceptually speaking, every entry in the table for the agent function is filled out correctly. Obviously, doing the right thing is better than doing the wrong thing, but what does it mean to do the right thing?
What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent's prior knowledge of the environment.
• The actions that the agent can perform.
• The agent's percept sequence to date.
This leads to a definition of a rational agent:
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
Consider the simple vacuum-cleaner agent that cleans a square if it is dirty and moves to the other square if not; this is the agent function tabulated in Figure 2.3. Is this a rational agent? That depends! First, we need to say what the performance measure is, what is known about the environment, and what sensors and actuators the agent has. Let us assume the following:
• The performance measure awards one point for each clean square at each time step, over a "lifetime" of 1000 time steps.
• The "geography" of the environment is known a priori (Figure 2.2) but the dirt distribution and the initial location of the agent are not. Clean squares stay clean and sucking cleans the current square. The Left and Right actions move the agent left and right except when this would take the agent outside the environment, in which case the agent remains where it is.
• The only available actions are Left, Right, and Suck.
• The agent correctly perceives its location and whether that location contains dirt.
We claim that under these circumstances the agent is indeed rational; its expected performance is at least as high as any other agent's.
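The performance measure above (one point per clean square per time step, over 1000 steps) can be checked with a small simulation of the reflex vacuum agent. This is an illustrative sketch under the stated assumptions; the function and variable names are invented for the example:

```python
# Simulate the reflex vacuum agent in the two-square world of Figure 2.2.
# dirt maps square name -> True if dirty; the agent sucks if dirty,
# otherwise moves to the other square. Score = clean squares per step.
def run(dirt, loc="A", steps=1000):
    world = dict(dirt)           # e.g. {"A": True, "B": False}
    score = 0
    for _ in range(steps):
        if world[loc]:
            world[loc] = False   # Suck
        elif loc == "A":
            loc = "B"            # Right
        else:
            loc = "A"            # Left
        score += sum(1 for dirty in world.values() if not dirty)
    return score

print(run({"A": True, "B": True}))  # 1998: both squares clean after 3 steps
```

Starting with both squares dirty, the agent earns 1 + 1 + 2 points in the first three steps and 2 points in each of the remaining 997 steps, so the score is close to the maximum of 2000.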

5 Explain PEAS description of task environment for automated taxi. (NOV 2018)
OR
Give the PEAS description for taxi's task environment. (APR 2019)
OR
What is PEAS description? Explain with two suitable examples. (NOV 2022)
OR
Explain PEAS description of the task environment for an automated taxi. (APR 2023)

ANS
Agent Type: Taxi driver
Performance Measure: Safe, fast, legal, comfortable trip, maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering, accelerator, brake, signal, horn, display
Sensors: Cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard

Figure 2.4 PEAS description of the task environment for an automated taxi.
First, what is the performance measure to which we would like our automated driver to aspire? Desirable qualities include getting to the correct destination; minimizing fuel consumption and wear and tear; minimizing the trip time or cost; minimizing violations of traffic laws and disturbances to other drivers; maximizing safety and passenger comfort; maximizing profits. Obviously, some of these goals conflict, so tradeoffs will be required.
Next, what is the driving environment that the taxi will face? Any taxi driver must deal with a variety of roads, ranging from rural lanes and urban alleys to 12-lane freeways, with hazards such as potholes. The taxi must also interact with potential and actual passengers. There are also some optional choices. The taxi might need to operate in Southern California, where snow is seldom a problem, or in Alaska, where it seldom is not. It could always be driving on the right, or we might want it to be flexible enough to drive on the left when in Britain or Japan. Obviously, the more restricted the environment, the easier the design problem.
The actuators for an automated taxi include those available to a human driver: control over the engine through the accelerator and control over steering and braking. In addition, it will need output to a display screen or voice synthesizer to talk back to the passengers, and perhaps some way to communicate with other vehicles, politely or otherwise.
The basic sensors for the taxi will include one or more controllable video cameras so that it can see the road; it might augment these with infrared or sonar sensors to detect distances to other cars and obstacles. To avoid speeding tickets, the taxi should have a speedometer, and to control the vehicle properly, especially on curves, it should have an accelerometer. To determine the mechanical state of the vehicle, it will need the usual array of engine, fuel, and electrical system sensors. Like many human drivers, it might want a global positioning system (GPS) so that it doesn't get lost. Finally, it will need a keyboard or microphone for the passenger to request a destination.
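The PEAS description in Figure 2.4 can be represented as a simple record, which is handy when comparing task environments. The field values come from the figure; the class itself is an illustrative assumption, not part of the textbook:

```python
from dataclasses import dataclass

# A PEAS record: Performance measure, Environment, Actuators, Sensors.
@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "accelerometer", "engine sensors", "keyboard"],
)
print(taxi.actuators[0])  # steering
```

The same class could hold a PEAS description for any other agent (e.g. a medical diagnosis system), which is useful for the "two suitable examples" variant of the question.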

6 Give comparison between fully observable and partially observable agents. (NOV 2018)
ANS
Fully observable:
• The agent can access the complete state of the environment at each point in time.
• The agent can detect all aspects that are relevant to the choice of action.
• A fully observable environment is easy to handle, as there is no need to maintain internal state to keep track of the history of the world.
Partially observable:
• The agent cannot access the complete state of the environment at each point in time.
• The agent cannot detect all aspects that are relevant to the choice of action.
• A partially observable environment is harder, as the agent needs to maintain internal state to keep track of the history of the world.
7 Explain the working of simple reflex agent. (APR 2019)
OR
Explain reflex agents with state. (NOV 2019)


ANS
 The simplest kind of agent is the simple reflex agent. These agents select actions on the basis of the current percept, ignoring the rest of the percept history.
 For example, the vacuum agent whose agent function is tabulated in Figure 2.3 is a simple reflex agent, because its decision is based only on the current location and on whether that location contains dirt.
 Simple reflex behaviors occur even in more complex environments.
 Imagine yourself as the driver of the automated taxi. If the car in front brakes and its brake lights come on, then you should notice this and initiate braking. In other words, some processing is done on the visual input to establish the condition we call "The car in front is braking." Then, this triggers some established connection in the agent program to the action "initiate braking." We call such a connection a condition–action rule, written as
if car-in-front-is-braking then initiate-braking.
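The condition–action rule above can be sketched directly in code: a simple reflex agent maps the current percept (not the history) to an action. This is a minimal illustrative sketch; the percept keys and action names are assumptions for the example:

```python
# A simple reflex agent: decisions depend only on the CURRENT percept.
# The single condition-action rule from the text is encoded below.
def simple_reflex_driver(percept):
    # condition-action rule: if car-in-front-is-braking then initiate-braking
    if percept.get("car_in_front_is_braking"):
        return "initiate-braking"
    return "keep-driving"

print(simple_reflex_driver({"car_in_front_is_braking": True}))   # initiate-braking
print(simple_reflex_driver({"car_in_front_is_braking": False}))  # keep-driving
```

Because the function never stores past percepts, it works correctly only when the right action can always be chosen from the current percept alone, which is exactly the limitation of simple reflex agents.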

8 Discuss the historical evolution of Artificial Intelligence. (NOV 2019)

ANS Artificial Intelligence is not a new word and not a new technology for researchers. This technology is much older than you would imagine; there are even myths of mechanical men in ancient Greek and Egyptian mythology. The following are some milestones in the history of AI, tracing the journey from AI's origins to present-day developments.
o Year 1943: The first work which is now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.
o Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons. His rule is now called Hebbian learning.
o Year 1950: Alan Turing, an English mathematician and pioneer of machine learning, published "Computing Machinery and Intelligence" in 1950, in which he proposed a test that checks a machine's ability to exhibit intelligent behavior equivalent to human intelligence, now called the Turing test.


o Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program," which was named the "Logic Theorist". This program proved 38 of 52 mathematics theorems, and found new and more elegant proofs for some of them.
o Year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined as an academic field.
o Year 1966: Researchers emphasized developing algorithms which can solve mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, which was named ELIZA.
o Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
o The duration between 1974 and 1980 was the first AI winter. AI winter refers to a period in which computer scientists dealt with a severe shortage of government funding for AI research.
o During AI winters, public interest in artificial intelligence decreased.
o Year 1980: After the AI winter, AI came back with "Expert Systems". Expert systems were programs that emulate the decision-making ability of a human expert.
o In the year 1980, the first national conference of the American Association of Artificial Intelligence was held at Stanford University.
o The duration between 1987 and 1993 was the second AI winter.
o Investors and governments again stopped funding AI research due to high costs and inefficient results, even though expert systems such as XCON had initially been very cost effective.
o Year 1997: In the year 1997, IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the first computer to beat a world chess champion.
o Year 2002: For the first time, AI entered the home in the form of Roomba, a vacuum cleaner.
o Year 2006: AI came into the business world by the year 2006. Companies like Facebook, Twitter, and Netflix also started using AI.
o Year 2011: In the year 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.


o Year 2012: Google launched the Android app feature "Google Now", which was able to provide information to the user as a prediction.
o Year 2014: In the year 2014, the chatbot "Eugene Goostman" won a competition based on the famous "Turing test."
o Year 2018: The "Project Debater" from IBM debated complex topics with two master debaters and performed extremely well.

Google also demonstrated an AI program, "Duplex", a virtual assistant that booked a hairdresser appointment over the phone; the person on the other end did not notice that she was talking to a machine.

9 Explain types of environments. (NOV 2019)
OR
Explain following task environments:
i) Single Agent vs. Multiagent
ii) Episodic vs. Sequential (NOV 2022)
OR
Explain different types of environments applicable to AI agents. (APR 2023)


ANS Fully observable vs. partially observable
 If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable.
 Fully observable environments are convenient because the agent need not maintain any internal state to keep track of the world.
 An environment might be partially observable because of noisy and inaccurate sensors or because parts of the state are simply missing from the sensor data. For example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi cannot see what other drivers are thinking. If the agent has no sensors at all, then the environment is unobservable.
Single agent vs. multiagent
 The distinction between single-agent and multiagent environments may seem simple enough. For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment.
Deterministic vs. stochastic
 If the next state of the environment is completely determined by the current state and
the action executed by the agent, then we say the environment is deterministic;
otherwise, it is stochastic.
 An agent need not worry about uncertainty in a fully observable, deterministic environment.
 Most real situations are so complex that it is impossible to keep track of all the
unobserved aspects; for practical purposes, they must be treated as stochastic. Taxi
driving is clearly stochastic in this sense, because one can never predict the behavior
of traffic exactly; moreover, one’s tires blow out and one’s engine seizes up without
warning. The vacuum world as we described it is deterministic, but variations can
include stochastic elements such as randomly appearing dirt and an unreliable suction
mechanism
Episodic vs. sequential
 In an episodic task environment, the agent’s experience is divided into atomic episodes.
In each episode the agent receives a percept and then performs a single action.
 The next episode does not depend on the actions taken in previous episodes.
 For example, an agent that has to spot defective parts on an assembly line bases each
decision on the current part, regardless of previous decisions; moreover, the current
decision doesn’t affect whether the next part is defective. In sequential environments,
on the other hand, the current decision could affect all future decisions. Chess and taxi
driving are sequential: in both cases, short-term actions can have long-term
consequences. Episodic environments are much simpler than sequential environments
because the agent does not need to think ahead.
Static vs. dynamic
 If the environment can change while an agent is deliberating, then we say the
environment is dynamic for that agent; otherwise, it is static. Static environments are
easy to deal with because the agent need not keep looking at the world while it is
deciding on an action, nor need it worry about the passage of time.
 Dynamic environments, on the other hand, are continuously asking the agent what it
wants to do; if it hasn’t decided yet, that counts as deciding to do nothing.
Taxi driving is clearly dynamic: the other cars and the taxi itself keep moving while the driving algorithm dithers about what to do next. Chess, when played with a clock, is semidynamic.


Crossword puzzles are static.


Discrete vs. continuous
 The discrete/continuous distinction applies to the state of the environment, to the way
time is handled, and to the percepts and actions of the agent.
 For example, the chess environment has a finite number of distinct states (excluding
the clock). Chess also has a discrete set of percepts and actions.
 Taxi driving is a continuous-state and continuous-time problem: the speed and location
of the taxi and of the other vehicles sweep through a range of continuous values and do
so smoothly over time. Taxi-driving actions are also continuous (steering angles, etc.).
 Input from digital cameras is discrete, strictly speaking, but is typically treated as representing continuously varying intensities and locations.
Known vs. unknown
 Strictly speaking, this distinction refers not to the environment itself but to the agent’s
(or designer’s) state of knowledge about the “laws of physics” of the environment. In
a known environment, the outcomes (or outcome probabilities if the environment is
stochastic) for all actions are given. Obviously, if the environment is unknown, the
agent will have to learn how it works in order to make good decisions.
It is quite possible for a known environment to be partially observable—for example, in solitaire
card games, I know the rules but am still unable to see the cards that have not yet been turned
over. Conversely, an unknown environment can be fully observable—in a new video game, the
screen may show the entire game state but I still don’t know what the buttons do until I try
them.
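The environment properties discussed above can be tabulated for the running examples. The following sketch is an illustrative summary of classifications stated in the text (only properties the text assigns to each task are included):

```python
# Environment properties per task, as classified in the discussion above.
ENVIRONMENTS = {
    "crossword puzzle": {"agents": "single", "static": True},
    "chess (with clock)": {"agents": "multi", "sequential": True,
                           "discrete": True, "semidynamic": True},
    "taxi driving": {"stochastic": True, "sequential": True,
                     "dynamic": True, "continuous": True},
}

for task, props in ENVIRONMENTS.items():
    print(task, props)
```

Such a table is a common exam device: given a new task, fill in each dimension (observable, agents, deterministic, episodic, static, discrete, known) by asking the corresponding question from the definitions above.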

10 Describe the contribution of Philosophy and Mathematics to Artificial Intelligence. (NOV 2022)
ANS
Philosophy
• Can formal rules be used to draw valid conclusions?
• How does the mind arise from a physical brain?
• Where does knowledge come from?
• How does knowledge lead to action?
Aristotle (384–322 B.C.), whose bust appears on the front cover of this book, was the first to formulate a precise set of laws governing the rational part of the mind. He developed an informal system of syllogisms for proper reasoning, which in principle allowed one to generate conclusions mechanically, given initial premises. Descartes was a strong advocate of the power of reasoning in understanding the world, a philosophy now called rationalism, and one that counts Aristotle and Leibnitz as members. But Descartes was also a proponent of dualism. He held that there is a part of the human mind (or soul or spirit) that is outside of nature, exempt from physical laws. Animals, on the other hand, did not possess this dual quality; they could be treated as machines. An alternative to dualism is materialism, which holds that the brain's operation according to the laws of physics constitutes the mind.
Mathematics
• What are the formal rules to draw valid conclusions?
• What can be computed?
• How do we reason with uncertain information?
Philosophers staked out some of the fundamental ideas of AI, but the leap to a formal science required a level of mathematical formalization in three fundamental areas: logic,


computation, and probability. The next step was to determine the limits of what could be done with logic and computation. The first nontrivial algorithm is thought to be Euclid's algorithm for computing greatest common divisors. The word algorithm (and the idea of studying algorithms) comes from al-Khowarazmi, a Persian mathematician of the 9th century.
While decidability and computability are important to an understanding of computation, the notion of tractability has had an even greater impact. Roughly speaking, a problem is called intractable if the time required to solve instances of the problem grows exponentially with the size of the instances. The distinction between polynomial and exponential growth in complexity was first emphasized in the mid-1960s (Cobham, 1964; Edmonds, 1965). It is important because exponential growth means that even moderately large instances cannot be solved in any reasonable time. Therefore, one should strive to divide the overall problem of generating intelligent behavior into tractable subproblems rather than intractable ones. Besides logic and computation, the third great contribution of mathematics to AI is the theory of probability.
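The tractability point above can be made concrete with a quick comparison of polynomial versus exponential step counts (a toy illustration, not from the source):

```python
# Compare how a polynomial cost (n**2) and an exponential cost (2**n)
# grow as the instance size n increases: the exponential quickly dwarfs
# the polynomial, which is why intractable problems are avoided.
for n in (10, 30, 60):
    print(f"n={n}: n^2={n**2}, 2^n={2**n}")
```

At n = 60 the polynomial cost is 3600 steps, while the exponential cost exceeds 10^18 steps, far beyond any reasonable computing budget.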

11
Describe the structure of U�lity based Agent. (NOV 2022)
ANS • Goals alone are not enough to generate high-quality behavior in most environments.
For example, many action sequences will get the taxi to its destination (thereby
achieving the goal), but some are quicker, safer, more reliable, or cheaper than others.
Goals just provide a crude binary distinction between "happy" and "unhappy" states.
A more general performance measure should allow a comparison of different world
states according to exactly how happy they would make the agent. Because "happy"
does not sound very scientific, economists and computer scientists use the term
utility instead.
• We have already seen that a performance measure assigns a score to any given
sequence of environment states, so it can easily distinguish between more and less
desirable ways of getting to the taxi's destination. An agent's utility function is
essentially an internalization of the performance measure. If the internal utility
function and the external performance measure are in agreement, then an agent that
chooses actions to maximize its utility will be rational according to the external
performance measure.
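The idea above can be sketched in a few lines of Python. The taxi scenario, the action names, and the utility values here are hypothetical, invented only to illustrate an agent that picks the action whose resulting state has the highest utility:

```python
# Minimal sketch of a utility-based agent. The states, actions, and
# utility values are made-up placeholders, not from the text.

def utility_based_action(state, actions, result, utility):
    """Pick the action whose resulting state has the highest utility."""
    return max(actions, key=lambda a: utility(result(state, a)))

# Toy taxi example: both routes reach the goal, but one is preferred.
def result(state, action):
    routes = {"fast": "arrived_quickly", "slow": "arrived_late"}
    return routes[action]

def utility(state):
    return {"arrived_quickly": 1.0, "arrived_late": 0.4}[state]

best = utility_based_action("at_airport", ["fast", "slow"], result, utility)
print(best)  # -> fast
```

Both actions achieve the goal, so a goal-based agent could not distinguish them; the utility function supplies the finer comparison.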


12 Explain various foundations of Artificial Intelligence in brief. (APR 2023)


ANS Many disciplines contributed ideas, viewpoints, and techniques to AI. A brief account is
forced to concentrate on a small number of people, events, and ideas and to ignore others
that were also important.
Philosophy
• Can formal rules be used to draw valid conclusions?
• How does the mind arise from a physical brain?
• Where does knowledge come from?
• How does knowledge lead to action?
Aristotle (384–322 B.C.) was the first to formulate a precise set of laws governing the
rational part of the mind. He developed an informal system of syllogisms for proper
reasoning, which in principle allowed one to generate conclusions mechanically, given
initial premises. Descartes was a strong advocate of the power of reasoning in
understanding the world, a philosophy now called rationalism, one that counts Aristotle
and Leibnitz as members. But Descartes was also a proponent of dualism. He held that
there is a part of the human mind (or soul or spirit) that is outside of nature, exempt from
physical laws. Animals, on the other hand, did not possess this dual quality; they could be
treated as machines. An alternative to dualism is materialism, which holds that the brain's
operation according to the laws of physics constitutes the mind.

Mathematics
• What are the formal rules to draw valid conclusions?
• What can be computed?
• How do we reason with uncertain information?
Philosophers staked out some of the fundamental ideas of AI, but the leap to a formal
science required a level of mathematical formalization in three fundamental areas: logic,
computation, and probability. The next step was to determine the limits of what could be
done with logic and computation. The first nontrivial algorithm is thought to be Euclid's
algorithm for computing greatest common divisors. The word algorithm (and the idea of
studying them) comes from al-Khowarazmi, a Persian mathematician of the 9th century.
Decidability and computability are important to an understanding of computation, but the
notion of tractability has had an even greater impact. Roughly speaking, a problem is
called intractable if the time required to solve instances of the problem grows
exponentially with the size of the instances. The distinction between polynomial and
exponential growth in complexity was first emphasized in the mid-1960s (Cobham, 1964;
Edmonds, 1965). It is important because exponential growth means that even moderately
large instances cannot be solved in any reasonable time. Therefore, one should strive to
divide the overall problem of generating intelligent behavior into tractable subproblems
rather than intractable ones. Besides logic and computation, the third great contribution
of mathematics to AI is the theory of probability.

Economics

• How should we make decisions so as to maximize payoff?


• How should we do this when others may not go along?
• How should we do this when the payoff may be far in the future?
The science of economics got its start in 1776, when Scottish philosopher Adam Smith
(1723–1790) published An Inquiry into the Nature and Causes of the Wealth of Nations.
Decision theory, which combines probability theory with utility theory, provides a formal
and complete framework for decisions (economic or otherwise) made under uncertainty,
that is, in cases where probabilistic descriptions appropriately capture the decision
maker's environment.

Neuroscience
• How do brains process information?
Neuroscience is the study of the nervous system, particularly the brain. Although the
exact way in which the brain enables thought is one of the great mysteries of science,
the fact that it does enable thought has been appreciated for thousands of years
because of the evidence that strong blows to the head can lead to mental
incapacitation. It has also long been known that human brains are somehow different.

Psychology
• How do humans and animals think and act?
The origins of scientific psychology are usually traced to the work of the German physicist
Hermann von Helmholtz (1821–1894) and his student Wilhelm Wundt (1832–1920).
Helmholtz applied the scientific method to the study of human vision.
Cognitive psychology, which views the brain as an information-processing device, can be
traced back at least to the works of William James (1842–1910). Helmholtz also insisted that
perception involved a form of unconscious logical inference.

Computer engineering
• How can we build an efficient computer?
The first operational programmable computer was the Z-3, the invention of Konrad Zuse in
Germany in 1941. Zuse also invented floating-point numbers and the first high-level
programming language, Plankalkül. The first electronic computer, the ABC, was assembled by
John Atanasoff and his student Clifford Berry between 1940 and 1942 at Iowa State
University. Since that time, each generation of computer hardware has brought an increase in
speed and capacity and a decrease in price.
AI also owes a debt to the software side of computer science, which has supplied the
operating systems, programming languages, and tools needed to write modern programs (and
papers about them). But this is one area where the debt has been repaid: work in AI has
pioneered many ideas that have made their way back to mainstream computer science,
including time sharing, interactive interpreters, personal computers with windows and mice,
rapid development environments, the linked list data type, automatic storage management,
and key concepts of symbolic, functional, declarative, and object-oriented programming.

Control theory and cybernetics


• How can artifacts operate under their own control?

Ktesibios of Alexandria built a water clock with a regulator that maintained a constant flow
rate. This invention changed the definition of what an artifact could do. Previously, only living
things could modify their behavior in response to changes in the environment. Other examples
of self-regulating feedback control systems include the steam engine governor. The central
figure in the creation of what is now called control theory was Norbert Wiener (1894–1964).
Modern control theory, especially the branch known as stochastic optimal control, has as its
goal the design of systems that maximize an objective function over time. This roughly matches
our view of AI: designing systems that behave optimally. Why, then, are AI and control theory
two different fields, despite the close connections among their founders? The answer lies in
the close coupling between the mathematical techniques that were familiar to the participants
and the corresponding sets of problems that were encompassed in each worldview.

Linguistics
• How does language relate to thought?
In 1957, B. F. Skinner published Verbal Behavior. This was a comprehensive, detailed account
of the behaviorist approach to language learning, written by the foremost expert in the field.
Modern linguistics and AI, then, were "born" at about the same time, and grew up together,
intersecting in a hybrid field called computational linguistics or natural language processing.
The problem of understanding language soon turned out to be considerably more complex
than it seemed in 1957. Understanding language requires an understanding of the subject
matter and context, not just an understanding of the structure of sentences. This might seem
obvious, but it was not widely appreciated until the 1960s. Much of the early work in
knowledge representation (the study of how to put knowledge into a form that a computer
can reason with) was tied to language and informed by research in linguistics, which was
connected in turn to decades of work on the philosophical analysis of language.

13 What is a Learning agent? Explain its four conceptual components. (APR 2023)
ANS A learning agent can be divided into four conceptual components, as shown in Figure
2.15. The most important distinction is between the learning element, which is responsible
for making improvements, and the performance element, which is responsible for selecting
external actions. The performance element is what we have previously considered to be the
entire agent: it takes in percepts and decides on actions. The learning element uses feedback
from the critic on how the agent is doing and determines how the performance element should
be modified to do better in the future.
Figure 2.15: A general learning agent. Its components are the performance element (which
selects external actions via the actuators), the learning element (which makes changes to the
performance element and gains knowledge from it), the critic (which observes the world
through the sensors and sends feedback to the learning element), and the problem generator
(which suggests exploratory actions).

The design of the learning element depends very much on the design of the performance
element. When trying to design an agent that learns a certain capability, the first question is
not "How am I going to get it to learn this?" but "What kind of performance element will my
agent need to do this once it has learned how?" Given an agent design, learning mechanisms
can be constructed to improve every part of the agent.
The last component of the learning agent is the problem generator. It is responsible for
suggesting actions that will lead to new and informative experiences. The point is that if the
performance element had its way, it would keep doing the actions that are best, given what it
knows.
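The four components can be sketched as a minimal Python class. The percepts, actions, and the critic's reward rule below are hypothetical placeholders; the point is only how feedback from the critic flows into the learning element, which then modifies the performance element:

```python
# Illustrative sketch of the four components of a learning agent.
# Component names follow Figure 2.15; the internals are hypothetical.

class LearningAgent:
    def __init__(self):
        self.knowledge = {}          # used by the performance element

    def performance_element(self, percept):
        # Selects an external action from the current percept.
        return self.knowledge.get(percept, "default_action")

    def critic(self, percept, action):
        # Feedback against a fixed performance standard
        # (here simply: a reward of +1 or -1).
        return 1 if action != "default_action" else -1

    def learning_element(self, percept, action, feedback):
        # Uses the critic's feedback to improve the performance element.
        if feedback < 0:
            self.knowledge[percept] = "improved_action"

    def problem_generator(self):
        # Suggests exploratory actions leading to new experiences.
        return "try_something_new"

agent = LearningAgent()
a1 = agent.performance_element("dirty_floor")   # default action at first
agent.learning_element("dirty_floor", a1, agent.critic("dirty_floor", a1))
a2 = agent.performance_element("dirty_floor")   # improved after feedback
print(a1, a2)  # -> default_action improved_action
```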

Unit 2
1 Discuss in brief the formulation of single state problem (NOV 2018)
ANS Problem formulation is the process of deciding what actions and states to consider, given a
goal. For now, let us assume that the agent will consider actions at the level of driving from one
major town to another. Each state therefore corresponds to being in a particular town.
Our agent has now adopted the goal of driving to Bucharest and is considering where to go
from Arad. Three roads lead out of Arad, one toward Sibiu, one to Timisoara, and one to Zerind.
None of these achieves the goal, so unless the agent is familiar with the geography of Romania, it
will not know which road to follow. In other words, the agent will not know which of its
possible actions is best, because it does not yet know enough about the state that results from
taking each action. If the agent has no additional information, i.e., if the environment is
unknown, then it has no choice but to try one of the actions at random.
Now suppose the agent has a map of Romania. The point of a map is to provide the agent with
information about the states it might get itself into and the actions it can take. The agent can use
this information to consider subsequent stages of a hypothetical journey via each of the three
towns, trying to find a journey that eventually gets to Bucharest. Once it has found a path on the
map from Arad to Bucharest, it can achieve its goal by carrying out the driving actions that
correspond to the legs of the journey. In general, an agent with several immediate options of
unknown value can decide what to do by first examining future actions that eventually lead to
states of known value.
We assume that the environment is observable, so the agent always knows the current state.
For the agent driving in Romania, it's reasonable to suppose that each city on the map has a sign
indicating its presence to arriving drivers. We also assume the environment is discrete, so at any


given state there are only finitely many actions to choose from. This is true for navigating in
Romania because each city is connected to a small number of other cities. We will assume the
environment is known, so the agent knows which states are reached by each action. (Having an
accurate map suffices to meet this condition for navigation problems.) Finally, we assume that
the environment is deterministic, so each action has exactly one outcome.
The process of looking for a sequence of actions that reaches the goal is called search. A search
algorithm takes a problem as input and returns a solution in the form of an action sequence.
Once a solution is found, the actions it recommends can be carried out. This is called the
execution phase.
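The formulation above can be written down concretely as a problem object with the standard components (initial state, actions, transition model, goal test, step cost). This is a sketch, not a full solver; the distances out of Arad (140, 118, 75) follow the standard Romania road map, and the remaining edges are the ones quoted in the uniform-cost example later in these notes:

```python
# A single-state route-finding problem expressed through the standard
# components. Each state is a city; each action is driving to a
# neighbouring city.

class RouteProblem:
    def __init__(self, graph, initial, goal):
        self.graph, self.initial, self.goal = graph, initial, goal

    def actions(self, state):          # the drivable roads out of a city
        return list(self.graph[state])

    def result(self, state, action):   # transition model: Go(city)
        return action

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action):
        return self.graph[state][action]

graph = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Rimnicu Vilcea": 80, "Fagaras": 99},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Fagaras": {"Bucharest": 211},
    "Pitesti": {"Bucharest": 101},
    "Bucharest": {},
}
problem = RouteProblem(graph, "Arad", "Bucharest")
print(problem.actions("Arad"))         # the three roads out of Arad
print(problem.goal_test("Bucharest"))  # -> True
```

A search algorithm then takes such a problem object as input and returns an action sequence, which the execution phase carries out.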

2 Give the outline of Breadth First Search algorithm (NOV 2018)


OR
Explain the algorithm for breadth first search algorithm. (NOV 2019)
OR
Give the outline of Breadth First Search algorithm with respect to Artificial Intelligence. (NOV 2022)
ANS
Breadth-first search is a simple strategy in which the root node is expanded first, then all the
successors of the root node are expanded next, then their successors, and so on.
In general, all the nodes are expanded at a given depth in the search tree before any nodes at the
next level are expanded.
Breadth-first search is an instance of the general graph-search algorithm (Figure 3.7) in which the
shallowest unexpanded node is chosen for expansion.
This is achieved very simply by using a FIFO queue for the frontier. Thus, new nodes (which are
always deeper than their parents) go to the back of the queue, and old nodes, which are
shallower than the new nodes, get expanded first.
It is a complete algorithm: if the shallowest goal node is at some finite depth d, breadth-first search
will eventually find it after generating all shallower nodes. The shallowest goal node is not
necessarily the optimal one, however; breadth-first search is optimal only when the path cost is a
nondecreasing function of depth (for example, when all step costs are equal).
Suppose that the solution is at depth d. In the worst case, it is the last node generated at that level.
Then the total number of nodes generated is
b + b^2 + b^3 + ... + b^d = O(b^d)
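The outline above translates directly into code. The following is a minimal sketch using a FIFO queue of paths and an explored set, with the goal test applied when a node is generated, as in the graph-search version of breadth-first search:

```python
# Breadth-first search with a FIFO frontier. The graph is an
# adjacency map; the example graph is made up for illustration.

from collections import deque

def breadth_first_search(graph, start, goal):
    if start == goal:
        return [start]
    frontier = deque([[start]])        # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()      # shallowest node expanded first
        for succ in graph[path[-1]]:
            if succ not in explored:
                if succ == goal:       # goal test when generated
                    return path + [succ]
                explored.add(succ)
                frontier.append(path + [succ])
    return None                        # no solution exists

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}
print(breadth_first_search(g, "A", "E"))  # -> ['A', 'B', 'D', 'E']
```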

3
Give the outline of tree search algorithm (NOV 2018)
ANS
A* Tree Search, or simply A* Search, combines the strengths of uniform-cost search
and greedy search. In this search, the heuristic is the sum of the cost in UCS, denoted by
g(x), and the estimated cost in the greedy search, denoted by h(x). The summed cost is denoted by f(x):
f(x) = g(x) + h(x)
The following points should be noted with respect to heuristics in A* search.


Here, h(x) is called the forward cost and is an estimate of the distance of the current node from
the goal node.
And g(x) is called the backward cost and is the cumulative cost of a node from the root node.
A* search is optimal only when, for all nodes, the forward cost h(x) underestimates the actual
cost h*(x) to reach the goal. This property of the A* heuristic is called admissibility.

Strategy: Choose the node with the lowest f(x) value.

Solution: Starting from S, the algorithm computes g(x) + h(x) for all nodes in the fringe at each
step, choosing the node with the lowest sum. When two paths have an equal summed cost f(x),
both are expanded in the next step, and the path with the lower cost on further expansion is the
chosen path.


Path: S -> D -> B -> E -> G


Cost: 7
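A small runnable sketch of the strategy: always expand the frontier node with the lowest f(x) = g(x) + h(x). The graph and heuristic values below are invented for illustration (the edge costs of the original S → G example are not reproduced in the text), with h chosen to be admissible:

```python
# A* tree search: pop the node with the lowest f = g + h.
# The graph and heuristic are hypothetical illustration values.

import heapq

def a_star(graph, h, start, goal):
    # Priority queue of (f, g, state, path).
    frontier = [(h[start], 0, start, [start])]
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for succ, cost in graph[state].items():
            heapq.heappush(frontier,
                           (g + cost + h[succ], g + cost, succ, path + [succ]))
    return None, float("inf")

graph = {"S": {"A": 1, "B": 4}, "A": {"G": 5}, "B": {"G": 1}, "G": {}}
h = {"S": 4, "A": 5, "B": 1, "G": 0}   # admissible: never overestimates
path, cost = a_star(graph, h, "S", "G")
print(path, cost)  # -> ['S', 'B', 'G'] 5
```

Because h never overestimates the true remaining cost, the first goal node popped from the queue is guaranteed to lie on an optimal path.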
4
Explain the mechanism of genetic algorithm. (NOV 2018)(NOV 2022)
OR
How genetic algorithm works? (April 2019)
OR


Explain the working mechanism of genetic algorithm.


ANS Genetic algorithms
A genetic algorithm (or GA) is a variant of stochastic beam search in which successor
states are generated by combining two parent states rather than by modifying a single
state. The analogy to natural selection is the same as in stochastic beam search, except
that now we are dealing with sexual rather than asexual reproduction.

Crossover: The crossover operator plays the most significant role in the reproduction phase of
the genetic algorithm. In this process, a crossover point is selected at random within the genes.
The crossover operator then swaps the genetic information of two parents from the current
generation to produce a new individual representing the offspring.

Mutation: The mutation operator inserts random genes in the offspring (new child) to maintain
the diversity in the population. It can be done by flipping some bits in the chromosomes.
Mutation helps in solving the issue of premature convergence and enhances diversification.
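The selection, crossover, and mutation cycle can be sketched as follows. The fitness function (OneMax, the count of 1-bits in the chromosome) and all parameters are illustrative choices, not from the text:

```python
# Genetic-algorithm sketch for bit-string chromosomes: keep the
# fittest half, produce offspring by single-point crossover, then
# apply bit-flip mutation. OneMax fitness is a standard toy problem.

import random

def crossover(p1, p2):
    point = random.randrange(1, len(p1))      # random crossover point
    return p1[:point] + p2[point:]            # swap genetic information

def mutate(child, rate=0.05):
    # Flip each bit with a small probability to maintain diversity.
    return [1 - g if random.random() < rate else g for g in child]

def genetic_algorithm(pop_size=20, length=16, generations=60):
    fitness = sum                              # OneMax: count of 1s
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # fittest half reproduces
        pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

random.seed(0)
best = genetic_algorithm()
print(sum(best))   # typically converges toward all ones (16)
```

Keeping the fittest half unchanged (elitism) guarantees the best fitness never decreases, while mutation counteracts premature convergence.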
5
Explain how transition model is used for sensing in vacuum cleaner problem (NOV 2018)
ANS
Vacuum cleaner
This can be formulated as a problem as follows:
• States: The state is determined by both the agent location and the dirt locations. The
agent is in one of two locations, each of which might or might not contain dirt. Thus,
there are 2 × 2^2 = 8 possible world states. A larger environment with n locations has
n · 2^n states.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three actions: Left, Right, and
Suck. Larger environments might also include Up and Down.

• Transition model: The actions have their expected effects, except that moving Left in the
leftmost square, moving Right in the rightmost square, and Sucking in a clean square have
no effect. The complete state space is shown in Figure 3.3.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
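The transition model described above can be written as a function on states. Representing a state as (agent location, set of dirty squares) is one possible encoding, not prescribed by the text:

```python
# Transition model for the two-square vacuum world. Locations are
# "A" (left) and "B" (right); dirt is a frozenset of dirty squares.
# Left/Right at the edge and Suck on a clean square have no effect.

def result(state, action):
    loc, dirt = state
    if action == "Left":
        return ("A", dirt)             # no effect if already at A
    if action == "Right":
        return ("B", dirt)             # no effect if already at B
    if action == "Suck":
        return (loc, dirt - {loc})     # removes dirt at current square
    raise ValueError(action)

def goal_test(state):
    return not state[1]                # goal: all squares clean

s = ("A", frozenset({"A", "B"}))       # agent at A, both squares dirty
s = result(s, "Suck")                  # cleans A
s = result(s, "Right")
s = result(s, "Suck")                  # cleans B
print(goal_test(s))                    # -> True
```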

6
Give the illustration of 8 queen problem using hill climbing algorithm. (NOV 2018)
ANS
The N Queen problem is the problem of placing N chess queens on an N×N chessboard so that no
two queens attack each other. For example, first consider the smaller 4 Queen problem:
Input: N = 4
Output:
0100
0001
1000
0010
Explanation:
The positions of the queens are:
1 – {1, 2}
2 – {2, 4}


3 – {3, 1}
4 – {4, 3}
As we can see, we have placed all 4 queens in a way that no two queens attack each other,
so the output is correct.
Input: N = 8
Output:
00000010
01000000
00000100
00100000
10000000
00010000
00000001
00001000
While there are algorithms like Backtracking to solve the N Queen problem, let's take an AI
approach to solving it.
This approach does not guarantee a globally correct solution every time, but it has quite a
good success rate of about 97%, which is not bad.
The terminology used in the problem is as follows:
Notion of a State – A state in this context is any configuration of the N queens on the N × N
board. In order to reduce the search space, we add the constraint that there can be only a
single queen in a particular column. A state in the program is implemented as an array of
length N, such that if state[i] = j then there is a queen at column index i and row index j.
Notion of Neighbours – Neighbours of a state are other states whose board configuration
differs from the current state's configuration in the position of only a single queen. This
queen may be displaced anywhere within the same column.
Optimisation function or Objective function – Local search is an optimization algorithm that
searches the local space to optimize a function that takes the state as input and gives some
value as output. The value of the objective function of a state in this context is the number
of pairs of queens attacking each other. Our goal here is to find a state with the

minimum objective value. This function has a maximum value of NC2 (N choose 2) and a
minimum value of 0.
Algorithm:
1. Start with a random state (i.e., a random configuration of the board).
2. Scan through all possible neighbours of the current state and jump to the neighbour with
the best (lowest) objective value, if any is found. If no neighbour has an objective strictly
better than the current state's, but one exists with an equal objective, then jump to any
such random neighbour (escaping a shoulder and/or local optimum).
3. Repeat step 2 until a state is found whose objective is strictly better than all of its
neighbours' objectives, then go to step 4.
4. The state thus found after the local search is either a local optimum or the global
optimum. There is no way of guaranteeing escape from local optima, but adding a random
neighbour or a random restart each time a local optimum is encountered increases the
chances of achieving the global optimum (the solution to our problem).
5. Output the state and return.
It is easy to see that the global optimum in our case is 0, since that is the minimum number
of pairs of queens that can attack each other. A random restart has a higher chance of
achieving the global optimum, but we still use a random neighbour because our N queens
problem does not have a high number of local optima, and a random neighbour is faster
than a random restart.
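A runnable sketch of the approach: state[i] holds the row of the queen in column i, the objective counts attacking pairs, and neighbours move one queen within its column. For simplicity this variant restarts when stuck instead of taking sideways moves:

```python
# Hill climbing for N queens. The objective to minimize is the number
# of attacking pairs; a solution has objective 0 (the global optimum).

import random
from itertools import combinations

def attacking_pairs(state):
    # Two queens attack if they share a row or a diagonal
    # (same column is impossible by construction).
    return sum(1 for i, j in combinations(range(len(state)), 2)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def hill_climb(n):
    state = [random.randrange(n) for _ in range(n)]
    while True:
        current = attacking_pairs(state)
        if current == 0:
            return state                       # global optimum found
        # Best neighbour: move a single queen within its column.
        best, best_val = None, current
        for col in range(n):
            for row in range(n):
                if row != state[col]:
                    neighbour = state[:col] + [row] + state[col + 1:]
                    v = attacking_pairs(neighbour)
                    if v < best_val:
                        best, best_val = neighbour, v
        if best is None:                       # local optimum: restart
            state = [random.randrange(n) for _ in range(n)]
        else:
            state = best

random.seed(1)
solution = hill_climb(8)
print(solution, attacking_pairs(solution))     # 0 attacking pairs
```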

7
List and explain performance measuring ways for problem solving. (April 2019)
ANS
The performance of a search algorithm can be measured in four ways:
Completeness: It measures whether the algorithm guarantees to find a solution (if any solution exists).
Optimality: It measures whether the strategy finds an optimal solution.
Time Complexity: The time taken by the algorithm to find a solution.
Space Complexity: The amount of memory required to perform the search.
The complexity of an algorithm depends on the branching factor (the maximum number of
successors), the depth of the shallowest goal node (i.e., the number of steps from the root along
the path), and the maximum length of any path in the state space.
Time and space in complexity analysis are measured with respect to the number of nodes the
problem graph has, in terms of asymptotic notation.
In AI, complexity is expressed by three factors b, d and m:
b, the branching factor, is the maximum number of successors of any node;
d, the depth of the shallowest goal node;


m, the maximum length of any path in the state space.


8
Formulate the vacuum world problem. (April 2019)
OR
Describe the problem formulation of Vacuum World problem (NOV 2022)
ANS
Consider a Vacuum cleaner world. Imagine that our intelligent agent is a robot vacuum cleaner.
Let's suppose that the world has just two rooms. The robot can be in either room, and there can
be dirt in zero, one, or two rooms.

Goal formulation: intuitively, we want all the dirt cleaned up. Formally, the goal is
{ state 7, state 8 }, the two states in which both rooms are clean.

Problem formulation (Actions): Left, Right, Suck, NoOp
State Space Graph


Measuring performance
With any intelligent agent, we want it to find a (good) solution and not spend forever doing it.
The interesting quantities are, therefore:
the search cost – how long the agent takes to come up with the solution to the problem, and
the path cost – how expensive the actions in the solution are.
The total cost of the solution is the sum of the above two quantities.
9
Write the uniform cost search algorithm. Explain in short. (April 2019)
OR
Give the outline of Uniform-cost search algorithm. (NOV 2019)
ANS
Instead of expanding the shallowest node, uniform-cost search expands the node n with the
lowest path cost g(n). This is done by storing the frontier as a priority queue ordered by g.
In addition to the ordering of the queue by path cost, there are two other significant differences
from breadth-first search. The first is that the goal test is applied to a node when it is selected
for expansion rather than when it is first generated. The reason is that the first goal node that is
generated may be on a suboptimal path. The second difference is that a test is added in case a
better path is found to a node currently on the frontier.

Both of these modifications come into play in the example shown in Figure 3.15, where the
problem is to get from Sibiu to Bucharest. The successors of Sibiu are Rimnicu Vilcea and
Fagaras, with costs 80 and 99, respectively. The least-cost node, Rimnicu Vilcea, is expanded
next, adding Pitesti with cost 80 + 97 = 177. The least-cost node is now Fagaras, so it is
expanded, adding Bucharest with cost 99 + 211 = 310. Now a goal node has been generated,
but uniform-cost search keeps going, choosing Pitesti for expansion and adding a second path
to Bucharest with cost 80 + 97 + 101 = 278. Now the algorithm checks to see if this new path is
better than the old one; it is, so the old one is discarded. Bucharest, now with g-cost 278, is
selected for expansion and the solution is returned.


Uniform-cost search does not care about the number of steps a path has, but only about their
total cost. Therefore, it will get stuck in an infinite loop if there is a path with an infinite
sequence of zero-cost actions.
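The Sibiu-to-Bucharest example can be traced with a short implementation: a priority queue ordered by g, with the goal test applied at expansion. This sketch handles the better-path case by skipping already-expanded states rather than updating frontier entries in place, which yields the same result here:

```python
# Uniform-cost search: pop the frontier node with the lowest path
# cost g(n); test for the goal only at expansion. The graph is the
# Sibiu-Bucharest fragment from the example above.

import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]           # (g, state, path)
    explored = set()
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:                      # goal test at expansion
            return path, g
        if state in explored:
            continue                           # a cheaper path was expanded
        explored.add(state)
        for succ, cost in graph[state].items():
            heapq.heappush(frontier, (g + cost, succ, path + [succ]))
    return None, float("inf")

graph = {
    "Sibiu": {"Rimnicu Vilcea": 80, "Fagaras": 99},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Fagaras": {"Bucharest": 211},
    "Pitesti": {"Bucharest": 101},
    "Bucharest": {},
}
path, cost = uniform_cost_search(graph, "Sibiu", "Bucharest")
print(path, cost)
# -> ['Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'] 278
```

The returned cost of 278 matches the trace in the answer above: the first Bucharest path generated (cost 310 via Fagaras) is never expanded, because the cheaper path via Pitesti reaches the front of the queue first.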
10 With suitable diagram explain the following concepts (April 2019) (NOV 2022)
i. Shoulder
ii. Global maximum
iii. Local maximum
ANS
Hill climbing is the simplest local search algorithm to implement. It evaluates one neighbor
node state at a time, selects the first one that improves the current cost, and sets it as the
current state.
Shoulder: a plateau region which has an uphill edge.
Global maximum: the best possible state of the state-space landscape; it has the highest value
of the objective function.
Local maximum: a peak that is higher than each of its neighboring states but lower than the
global maximum. Hill-climbing algorithms that reach the vicinity of a local maximum will be
drawn upward toward the peak but will then be stuck with nowhere else to go.
11 Explain the working of AND-OR search tree (April 2019)
ANS An and-or tree specifies only the search space for solving a problem. Different search
strategies for searching the space are possible. These include searching the tree depth-first,
breadth-first, or best-first using some measure of desirability of solutions. The search strategy
can be sequential, searching or generating one node at a time, or parallel, searching or
generating several nodes in parallel.
An and-or tree is a graphical representation of the reduction of problems (or goals) to
conjunctions and disjunctions of subproblems (or subgoals).


12 Write the procedure for tree search. (NOV 2019)
ANS Searching means finding or locating some specific element or node within a data structure.
Searching for a specific node in a binary search tree is pretty easy, due to the fact that
elements in a BST are stored in a particular order.
1. Compare the element with the root of the tree.
2. If the item is matched, then return the location of the node.
3. Otherwise, check if the item is less than the element present at the root; if so, then move to
the left sub-tree.
4. If not, then move to the right sub-tree.
5. Repeat this procedure recursively until a match is found.
6. If the element is not found, then return NULL.
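The six steps translate directly into a recursive function on a small hand-built tree (the tree contents below are arbitrary examples):

```python
# Binary-search-tree search following the six steps above.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def bst_search(node, item):
    if node is None:
        return None                          # step 6: not found
    if item == node.key:
        return node                          # step 2: match found
    if item < node.key:
        return bst_search(node.left, item)   # step 3: left sub-tree
    return bst_search(node.right, item)      # step 4: right sub-tree

root = Node(8, Node(3, Node(1), Node(6)), Node(10, None, Node(14)))
print(bst_search(root, 6).key)   # -> 6
print(bst_search(root, 7))       # -> None
```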
13 Explain A* algorithm for the shortest path. (NOV 2019)
OR
What is A* Search Algorithm? How to make A* search admissible to get an optimized solution?
Also mention the conditions for optimality with reference to admissibility and consistency.
ANS The most widely known form of best-first search is called A∗ search (pronounced "A-star
search"). It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the
estimated cost to get from the node to the goal:
f(n) = g(n) + h(n).
Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated cost of the
cheapest path from n to the goal, we have
f(n) = estimated cost of the cheapest solution through n.
Thus, if we are trying to find the cheapest solution, a reasonable thing to try first is the node with
the lowest value of g(n) + h(n). It turns out that this strategy is more than just reasonable:
provided that the heuristic function h(n) satisfies certain conditions, A∗ search is both complete
and optimal. The algorithm is identical to UNIFORM-COST-SEARCH except that A∗ uses g + h
instead of g.

The first condition we require for optimality is that h(n) be an admissible heuristic. An admissible
heuristic is one that never overestimates the cost to reach the goal. Because g(n) is the actual
cost to reach n along the current path, and f(n) = g(n) + h(n), we have as an immediate
consequence that f(n) never overestimates the true cost of a solution along the current path
through n.


A second, slightly stronger condition called consistency (or sometimes monotonicity) is required
only for applications of A∗ to graph search. A heuristic h(n) is consistent if, for every node n and
every successor n′ of n generated by any action a, the estimated cost of reaching the goal from
n is no greater than the step cost of getting to n′ plus the estimated cost of reaching the goal
from n′:
h(n) ≤ c(n, a, n′) + h(n′).
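Both conditions are easy to check mechanically on a small example. The graph, the true remaining costs h*(n), and the heuristic values below are invented for illustration; note that this particular h happens to be both admissible and consistent:

```python
# Checking admissibility (h(n) <= h*(n)) and consistency
# (h(n) <= c(n, a, n') + h(n')) on a tiny hand-made graph.

graph = {"S": {"A": 2, "B": 5}, "A": {"G": 3}, "B": {"G": 1}, "G": {}}
true_cost = {"S": 5, "A": 3, "B": 1, "G": 0}   # h*(n), computed by hand
h = {"S": 4, "A": 3, "B": 1, "G": 0}

# Admissible: h never overestimates the true cost to reach the goal.
admissible = all(h[n] <= true_cost[n] for n in graph)

# Consistent: for every edge n -> n' of cost c, h(n) <= c + h(n').
consistent = all(h[n] <= c + h[m]
                 for n in graph for m, c in graph[n].items())

print(admissible, consistent)  # -> True True
```

Consistency is the stronger condition: every consistent heuristic (with h(goal) = 0) is also admissible, but not vice versa.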

14 Give the outline of Hill climbing algorithm. (NOV 2019)
ANS
Hill climbing is simply a loop that continually moves in the direction of increasing value, that
is, uphill.


It terminates when it reaches a "peak" where no neighbor has a higher value. The
algorithm does not maintain a search tree, so the data structure for the current node need
only record the state and the value of the objective function.
Hill climbing is sometimes called greedy local search because it grabs a good neighbor
state without thinking ahead about where to go next.

Hill climbing often gets stuck for the following reasons:

Local maxima: a local maximum is a peak that is higher than each of its neighboring
states but lower than the global maximum. Hill-climbing algorithms that reach the
vicinity of a local maximum will be drawn upward toward the peak but will then be
stuck with nowhere else to go.
Ridges: a ridge (shown in Figure 4.4) results in a sequence of local maxima that is very
difficult for greedy algorithms to navigate.
Plateaux: a plateau is a flat area of the state-space landscape. It can be a flat local
maximum, from which no uphill exit exists, or a shoulder, from which progress is
possible. (See Figure 4.1.) A hill-climbing search might get lost on the plateau.

15 1. Explain following terms: (NOV 2022)


i) State Space of a problem
ii) Path in State Space
iii) Goal Test
iv) Path Cost
v) Optimal Solution to a problem

A state space is a way to mathematically represent a problem by defining all the possible states
in which the problem can be. It is used in search algorithms to represent the initial state, goal
state, and current state of the problem.
Path in state space: a path is a sequence of states connected by a sequence of actions. The
solution of a problem is part of the graph formed by the state space.


Goal test: a test that determines whether a given state is a goal state. (Are we at the final
destination specified by the user?)
Path cost: a function that assigns a numeric value to each path. For route finding, this can
depend on monetary cost, waiting time, flight time, customs and immigration procedures, seat
quality, time of day, type of airplane, frequent-flyer mileage awards, and so on.
Optimal solution: a solution is a path from the initial state to a goal state; the optimal solution
is the one with the lowest path cost among all solutions.

Consider, for example, the problem “Visit every city in Figure 3.2 at least once, starting and
ending in Bucharest.” As with route finding, the actions correspond to trips between
adjacent cities. The state space, however, is quite different. Each state must include not just
the current location but also the set of cities the agent has visited. So the initial state would
be In(Bucharest), Visited({Bucharest}), a typical intermediate state would be In(Vaslui),
Visited({Bucharest, Urziceni, Vaslui}), and the goal test would check whether the agent is in
Bucharest and all 20 cities have been visited.

16 What are the components involved in problem formulation? Explain. (APR 2023)


Problem formulation is the process of deciding what actions and states to consider, given a
goal. In route finding, for example, each state corresponds to being in a particular town.

A well-defined problem can be described by:

Initial state

Operator or successor function - for any state x, returns s(x), the set of states reachable from x
with one action

State space - all states reachable from the initial state by any sequence of actions

Path - a sequence through the state space

Path cost - a function that assigns a cost to a path. The cost of a path is the sum of the costs of
the individual actions along the path

Goal test - a test to determine whether a state is a goal state
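The components above can be sketched as plain data plus two functions. The graph, state names, and step costs below are made up for illustration.

```python
# Successor function: state -> {next_state: step_cost} (hypothetical graph)
successors = {
    "A": {"B": 3, "C": 1},
    "B": {"D": 2},
    "C": {"D": 5},
    "D": {},
}
initial_state = "A"

def goal_test(state):
    """Goal test: is this the goal state?"""
    return state == "D"

def path_cost(path):
    """Path cost: sum of step costs along a path (a sequence of states)."""
    return sum(successors[a][b] for a, b in zip(path, path[1:]))

print(goal_test("D"))              # True
print(path_cost(["A", "B", "D"]))  # 3 + 2 = 5
```

Here ["A", "B", "D"] (cost 5) and ["A", "C", "D"] (cost 6) are both solutions; the first is the optimal one.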

18 Differentiate between Uninformed and Informed Search Strategies


20 Explain in detail the State-space diagram for the Hill Climbing algorithm.

The state-space landscape is a graphical representation of the hill-climbing algorithm,
showing a graph between the various states of the algorithm and the objective function/cost.

The y-axis shows the function, which can be an objective function or a cost function, and the
x-axis shows the state space. If the function on the y-axis is cost, then the goal of the search is
to find the global minimum (or a local minimum). If the function on the y-axis is an objective
function, then the goal of the search is to find the global maximum (or a local maximum).

Hill Climbing Algorithm in AI


Local maximum: a state which is better than its neighbor states, but there is also another
state which is higher than it.

Global maximum: the best possible state of the state-space landscape. It has the highest
value of the objective function.

Current state: the state in the landscape diagram where the agent is currently present.

Flat local maximum: a flat space in the landscape where all the neighbor states of the
current state have the same value.

Shoulder: a plateau region which has an uphill edge.

21 Explain Online search agents and unknown environments with its different types.
An online search agent interleaves computation and action: first it takes an action, then it
observes the environment and computes the next action.
Online search is a good idea in dynamic or semidynamic domains—domains where there is a
penalty for sitting around and computing too long.
Online search is also helpful in nondeterministic domains because it allows the agent to focus
its computational efforts on the contingencies that actually arise rather than those that might
happen but probably won't.
Online search is a necessary idea for unknown environments, where the agent does not know
what states exist or what its actions do. In this state of ignorance, the agent faces an
exploration problem and must use its actions as experiments in order to learn enough to make
deliberation worthwhile.

The canonical example of online search is a robot that is placed in a new building and must
explore it to build a map that it can use for getting from A to B.


Spatial exploration is not the only form of exploration, however.


Consider a newborn baby: it has many possible actions but knows the outcomes of none of
them, and it has experienced only a few of the possible states that it can reach. The baby's
gradual discovery of how the world works is, in part, an online search process.

Online search problems


An online search problem must be solved by an agent executing actions, rather than by pure
computation. We assume a deterministic and fully observable environment. The agent knows only:
ACTIONS(s), which returns a list of actions allowed in state s;
the step-cost function c(s, a, s')—note that this cannot be used until the agent knows that s' is
the outcome; and
GOAL-TEST(s).
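The interleaving of computation and action can be sketched as a loop. The environment dictionary below is a hypothetical deterministic world that is hidden from the agent; the agent builds a `result` model of it only by acting and observing.

```python
# Hidden world: (state, action) -> next state. The agent cannot read this
# directly; it only sees the outcome of each action it executes.
env = {("S", "right"): "A", ("A", "right"): "G"}

def actions(state):
    return ["right"]            # ACTIONS(s), assumed known to the agent

def goal_test(state):
    return state == "G"         # GOAL-TEST(s)

def online_agent(start, max_steps=10):
    result = {}                 # learned model: (state, action) -> state
    state = start
    for _ in range(max_steps):
        if goal_test(state):
            return state, result
        a = actions(state)[0]   # choose an action (trivially, here)
        nxt = env[(state, a)]   # execute it and observe the outcome
        result[(state, a)] = nxt  # record what the action did
        state = nxt
    return state, result

final, model = online_agent("S")
print(final)   # G
```

After the run, `model` holds the transitions the agent discovered, which is exactly the map-building behaviour described above.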

Unit 3
1 Explain the working mechanism of min-max algorithm (Nov 2018)
OR
Write the minimax algorithm. Explain in short. (April 2019)
OR
Give the outline of min-max algorithm. (NOV 2019)
OR
Explain Minimax algorithm in detail. (Nov 2022)
ANS The minimax algorithm
The minimax algorithm computes the minimax decision from the current state.
It uses a simple recursive computation of the minimax values of each successor state,
directly implementing the defining equations.
The recursion proceeds all the way down to the leaves of the tree, and then the
minimax values are backed up through the tree as the recursion unwinds.
The minimax algorithm performs a complete depth-first exploration of the game tree.
If the maximum depth of the tree is m and there are b legal moves at each point, then
the time complexity of the minimax algorithm is O(b^m).


The space complexity is O(bm) for an algorithm that generates all actions at once, or
O(m) for an algorithm that generates actions one at a time.
For real games, of course, the time cost is totally impractical, but this algorithm serves
as the basis for the mathematical analysis of games and for more practical algorithms.

In a two-ply game tree, the nodes at the top level are “MAX nodes,” in which it is MAX's
turn to move, and the nodes at the next level are “MIN nodes.”
The terminal nodes show the utility values for MAX; the other nodes are labeled with
their minimax values.
MAX's best move at the root is a1, because it leads to the state with the highest
minimax value, and MIN's best reply is b1, because it leads to the state with the lowest
minimax value.
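A minimal sketch of the recursion on an explicit two-ply tree. The leaf utilities for the first and third MIN nodes (3, 12, 8 and 14, 5, 2) follow the standard textbook figure; the middle MIN node's leaves are assumed to be 2, 4, 6 for illustration.

```python
# Explicit game tree: MAX moves at the root, MIN at the next level,
# integers are terminal utilities for MAX.
tree = {
    "root": ["b", "c", "d"],
    "b": [3, 12, 8],
    "c": [2, 4, 6],
    "d": [14, 5, 2],
}

def minimax(node, maximizing=True):
    """Depth-first minimax: back leaf utilities up as the recursion unwinds."""
    if isinstance(node, int):          # terminal node
        return node
    values = [minimax(child, not maximizing) for child in tree[node]]
    return max(values) if maximizing else min(values)

print(minimax("root"))  # 3: MAX's best move leads to the MIN node worth 3
```

The MIN nodes back up 3, 2, and 2 respectively, so MAX's decision at the root has minimax value 3, matching the figure described above.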

2 Explain in brief about the resolution theorem (Nov 2018)


OR
Give the outline of the resolution algorithm. (NOV 2019)
ANS Resolution is a theorem-proving technique that proceeds by building refutation proofs,
i.e., proofs by contradiction. It was invented by the mathematician John Alan Robinson
in the year 1965.
It is used when various statements are given and we need to prove a conclusion from those
statements.
Unification is a key concept in proofs by resolution. Resolution is a single inference
rule which can efficiently operate on the conjunctive normal form or clausal form.


Clause: a disjunction of literals (atomic sentences or their negations) is called a clause. A
clause containing a single literal is known as a unit clause.
Conjunctive Normal Form: a sentence represented as a conjunction of clauses is said to
be in conjunctive normal form, or CNF.

Steps for Resolution:


• Conversion of facts into first-order logic.
• Convert FOL statements into CNF.
• Negate the statement which needs to be proved (proof by contradiction).
• Draw the resolution graph (unification).
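The resolution steps above can be sketched for the propositional case (where no unification is needed). Clauses are sets of string literals with a leading `~` marking negation; the KB and query below are a tiny made-up example proving Q from P and P ⇒ Q by refutation.

```python
from itertools import combinations

def resolve(c1, c2):
    """Return all resolvents of two clauses (sets of literals)."""
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {comp})))
    return out

def resolution_entails(kb, query):
    """Prove KB |= query by adding ~query and deriving the empty clause."""
    clauses = set(kb) | {frozenset({"~" + query})}   # negate the goal
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True      # empty clause: contradiction found
                new.add(r)
        if new <= clauses:
            return False             # nothing new derivable: not entailed
        clauses |= new

# KB in CNF: (P => Q) becomes {~P, Q}; plus the fact P.
kb = [frozenset({"~P", "Q"}), frozenset({"P"})]
print(resolution_entails(kb, "Q"))   # True
```

The refutation derives {Q} and {~P} first, then resolves {Q} against {~Q} to reach the empty clause, which completes the proof by contradiction.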
3 Write a note on Kriegspiel: Partially observable chess. (Nov 2018)
OR
Write a short note on Kriegspiel: Partially observable chess. (Nov 2022)
ANS Kriegspiel: Partially observable chess
In deterministic partially observable games, uncertainty about the state of the board
arises entirely from lack of access to the choices made by the opponent.
We will examine the game of Kriegspiel, a partially observable variant of chess in which
pieces can move but are completely invisible to the opponent.
Kriegspiel is a chess variant in which players don't see the whole board. A referee is
needed.

Rules of Kriegspiel
White and Black each see a board containing only their own pieces.

A referee, who can see all the pieces, adjudicates the game and periodically makes
announcements that are heard by both players.

On his turn, White proposes to the referee any move that would be legal if there were no black
pieces. If the move is in fact not legal (because of the black pieces), the referee announces
“illegal.”

In this case, White may keep proposing moves until a legal one is found—and learns more
about the location of Black's pieces in the process.

Once a legal move is proposed, the referee announces one or more of the following: “Capture
on square X” if there is a capture, and “Check by D” if the black king is in check, where D is the
direction of the check, and can be one of “Knight,” “Rank,” “File,” “Long diagonal,” or “Short
diagonal.” (In case of discovered check, the referee may make two “Check” announcements.)


If Black is checkmated or stalemated, the referee says so; otherwise, it is Black's turn to move.

Part of a guaranteed checkmate in the KRK endgame, shown on a reduced board.


In the initial belief state, Black's king is in one of three possible locations.
By a combination of probing moves, the strategy narrows this down to one.
Completion of the checkmate is left as an exercise.

Note that moves must be chosen not just for optimality, but also to minimize the information
that the opponent will gain about position.

Also, playing any predictable optimal strategy gives the opponent information, so optimal play
in partially observable games requires a willingness to play somewhat randomly.

4 Explain in brief about the knowledge-based agent (Nov 2018)


OR
Explain the concept of knowledge base with example. (April 2019)
OR
What is a knowledge-based agent? Explain its importance in problem solving techniques (Nov
2022)
ANS Knowledge-based agents are those agents who have the capability of maintaining an
internal state of knowledge, reasoning over that knowledge, updating their knowledge after
observations, and taking actions.


A knowledge-based agent must be able to do the following:


• An agent should be able to represent states, actions, etc.
• An agent should be able to incorporate new percepts
• An agent can update the internal representation of the world
• An agent can deduce the internal representation of the world
• An agent can deduce appropriate actions.
The generalized architecture for a knowledge-based agent works as follows: the
knowledge-based agent (KBA) takes input from the environment by perceiving the
environment. The input is taken by the inference engine of the agent, which also
communicates with the KB to decide as per the knowledge stored in the KB. The
learning element of the KBA regularly updates the KB by learning new knowledge.
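A minimal skeleton of the TELL/ASK cycle described above. A real inference engine does far more; here the "inference" is just a subset check over hypothetical condition-action rules, to show percepts flowing into the KB and actions coming out.

```python
class KnowledgeBase:
    """Toy KB agent: TELL stores percept sentences, ASK picks an action."""

    def __init__(self, rules):
        self.facts = set()
        self.rules = rules          # {frozenset(conditions): action}

    def tell(self, sentence):
        self.facts.add(sentence)    # add a new percept/assertion to the KB

    def ask(self):
        for conditions, action in self.rules.items():
            if conditions <= self.facts:
                return action       # all conditions of this rule hold
        return "NoOp"

# Hypothetical condition-action rules (Wumpus-flavoured for illustration)
rules = {frozenset({"Stench"}): "Shoot", frozenset({"Glitter"}): "Grab"}
kb = KnowledgeBase(rules)
kb.tell("Glitter")
print(kb.ask())   # Grab
```

The loop of a knowledge-based agent then amounts to: TELL the KB what was perceived, ASK it what to do, execute that action, and repeat.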

5 Explain the syntax for proposi�onal logic (Nov 2018)


ANS The syntax of propositional logic defines the allowable sentences.
The atomic sentences consist of a single proposition symbol. Each such symbol stands for a
proposition that can be true or false.
We use symbols that start with an uppercase letter and may contain other letters or subscripts,
for example: P, Q, R, W1,3 and North.
There are two proposition symbols with fixed meanings: True is the always-true proposition
and False is the always-false proposition.
Complex sentences are constructed from simpler sentences, using parentheses and logical
connectives.
There are five connectives in common use:

¬ (not): negation, e.g. ¬W1,3
∧ (and): conjunction, e.g. W1,3 ∧ P3,1
∨ (or): disjunction, e.g. (W1,3 ∧ P3,1) ∨ W2,2
⇒ (implies): implication, e.g. (W1,3 ∧ P3,1) ⇒ ¬W2,2
⇔ (if and only if): biconditional, e.g. W1,3 ⇔ ¬W2,2


6 Write a note on the Wumpus world problem. (Nov 2018)


OR
Write a note on the Wumpus world problem (NOV 2019)
OR
Write a short note on the Wumpus world problem. (Nov 2022)
ANS The Wumpus world is a simple world example used to illustrate the worth of a knowledge-
based agent and to represent knowledge representation. It was inspired by the video game
Hunt the Wumpus by Gregory Yob in 1973.
The Wumpus world is a cave which has 4×4 rooms connected with passageways, so
there are a total of 16 rooms which are connected with each other. We have a knowledge-
based agent who will go forward in this world. The cave has a room with a beast which
is called the Wumpus, who eats anyone who enters the room. The Wumpus can be shot by
the agent, but the agent has a single arrow. In the Wumpus world, there are some Pit
rooms which are bottomless, and if the agent falls in a Pit, then he will be stuck there
forever. The exciting thing about this cave is that in one room there is a possibility of
finding a heap of gold. So the agent's goal is to find the gold and climb out of the cave
without falling into a Pit or being eaten by the Wumpus. The agent will get a reward if he
comes out with the gold, and he will get a penalty if he is eaten by the Wumpus or falls
into a pit.
A sample diagram of the Wumpus world shows some rooms with Pits, one room with the
Wumpus, and one agent at the (1, 1) square location of the world.
• There are also some components which can help the agent to navigate the cave.
These components are given as follows:
• The rooms adjacent to the Wumpus room are smelly, so they would have
some stench.
• The rooms adjacent to Pits have a breeze, so if the agent reaches near a Pit, then
he will perceive the breeze.
• There will be glitter in the room if and only if the room has gold.
• The Wumpus can be killed by the agent if the agent is facing it, and the Wumpus
will emit a horrible scream which can be heard anywhere in the cave.


PEAS description of Wumpus world:


To explain the Wumpus world we have given the PEAS description below:

Performance measure:
• +1000 reward points if the agent comes out of the cave with the gold.
• -1000 points penalty for being eaten by the Wumpus or falling into a pit.
• -1 for each action, and -10 for using an arrow.
• The game ends if either the agent dies or comes out of the cave.
Environment:
• A 4×4 grid of rooms.
• The agent initially is in room square [1, 1], facing toward the right.
• The locations of the Wumpus and the gold are chosen randomly, except the first square [1,1].
• Each square of the cave can be a pit with probability 0.2, except the first square.
Actuators:
• Left turn
• Right turn
• Move forward
• Grab
• Release
• Shoot
Sensors:
• The agent will perceive the stench if he is in a room adjacent to the Wumpus (not
diagonally).
• The agent will perceive a breeze if he is in a room directly adjacent to a Pit.
• The agent will perceive glitter in the room where the gold is present.
• The agent will perceive a bump if he walks into a wall.
• When the Wumpus is shot, it emits a horrible scream which can be perceived
anywhere in the cave.
• These percepts can be represented as a five-element list, in which we have different
indicators for each sensor.
For example, if the agent perceives stench and breeze, but no glitter, no bump, and no scream,
it can be represented as: [Stench, Breeze, None, None, None].

7 List and explain the elements used to define a game formally. (April 2019)
ANS A game can be defined as a type of search in AI which can be formalized with the following
elements:

Initial state: It specifies how the game is set up at the start.


Player(s): It specifies which player has the move in a state.
Action(s): It returns the set of legal moves in a state.
Result(s, a): It is the transition model, which specifies the result of a move in a state.
Terminal-Test(s): The terminal test is true if the game is over, and false otherwise. States
where the game has ended are called terminal states.
Utility(s, p): A utility function gives the final numeric value for a game that ends in terminal
state s for player p. It is also called a payoff function. For chess, the outcomes are a win, loss, or
draw, with payoff values +1, 0, or ½. For tic-tac-toe, the utility values are +1, -1, and 0.
8 Explain alpha-beta pruning with a suitable example (April 2019)


OR
What is alpha-beta pruning? Explain the function of alpha-beta pruning (NOV 2019)
OR
Describe the technique of Alpha-Beta Pruning. (Nov 2022)
ANS It is possible to compute the correct minimax decision without looking at every node in the
game tree by using a pruning technique.
When applied to a standard minimax tree, alpha-beta pruning returns the same move as
minimax would, but prunes away branches that cannot possibly influence the final decision.
Consider the formula for MINIMAX.
Let the two unevaluated successors of node C in the figure have values x and y. Then the value
of the root node is given by
MINIMAX(root) = max(min(3, 12, 8), min(2, x, y), min(14, 5, 2))
= max(3, min(2, x, y), 2)
= max(3, z, 2) where z = min(2, x, y) ≤ 2
= 3.


α = the value of the best (i.e., highest-value) choice we have found so far at any choice point
along the path for MAX.
β = the value of the best (i.e., lowest-value) choice we have found so far at any choice point
along the path for MIN.
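The α and β bounds can be illustrated on the same two-ply tree as the MINIMAX(root) calculation: the first and third MIN nodes use the values 3, 12, 8 and 14, 5, 2 from the figure, and node C's unevaluated successors x and y are assumed to be 4 and 6 here. A minimal sketch; the `break` lines are the cut-offs.

```python
tree = {
    "root": ["b", "c", "d"],
    "b": [3, 12, 8],
    "c": [2, 4, 6],   # x and y assumed as 4 and 6; they get pruned anyway
    "d": [14, 5, 2],
}

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Minimax with alpha-beta pruning; returns the minimax value."""
    if isinstance(node, int):          # terminal node
        return node
    if maximizing:
        value = float("-inf")
        for child in tree[node]:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                  # beta cut-off: MIN avoids this branch
        return value
    value = float("inf")
    for child in tree[node]:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break                      # alpha cut-off: MAX avoids this branch
    return value

print(alphabeta("root"))  # 3, the same answer plain minimax gives
```

At node C, the first leaf (2) drives β down to 2 while α is already 3 from node B, so the remaining successors x and y are pruned without ever being evaluated, exactly as the hand calculation above showed.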

9 Write the connectives used to form complex sentences of propositional logic. Give an example
for each (April 2019)
ANS All sentences are constructed from atomic sentences and the five connectives; therefore, we
need to specify how to compute the truth of atomic sentences and how to compute the truth
of sentences formed with each of the five connectives.
Atomic sentences are easy:
• True is true in every model and False is false in every model.
• The truth value of every other proposition symbol must be specified directly in the model.
For example, in the model m1 given earlier, P1,2 is false.
For complex sentences, we have five rules, which hold for any subsentences P and Q in any
model m:
• ¬P is true iff P is false in m.
• P ∧ Q is true iff both P and Q are true in m.
• P ∨ Q is true iff either P or Q is true in m.
• P ⇒ Q is true unless P is true and Q is false in m.
• P ⇔ Q is true iff P and Q are both true or both false in m.
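The five truth rules can be written directly as Boolean functions. A minimal sketch; the model assigning values to P and Q is made up.

```python
def neg(p):        return not p
def conj(p, q):    return p and q
def disj(p, q):    return p or q
def implies(p, q): return (not p) or q   # false only when p true and q false
def iff(p, q):     return p == q         # true when both true or both false

# A made-up model assigning truth values to the proposition symbols
model = {"P": True, "Q": False}
print(implies(model["P"], model["Q"]))   # False: P is true, Q is false
print(iff(model["Q"], model["Q"]))       # True
```

Evaluating any complex sentence in a model is then just nesting these functions, mirroring how the sentence is built up from its connectives.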

10 Write a short note on propositional theorem proving. (April 2019)


ANS Propositional theorem proving
Entailment can be established by theorem proving—applying rules of inference directly to the
sentences in our knowledge base to construct a proof of the desired sentence without
consulting models.


If the number of models is large but the length of the proof is short, then theorem proving can
be more efficient than model checking.
Concepts related to entailment:
Logical equivalence
• Two sentences α and β are logically equivalent if they are true in the same set of
models. We write this as α ≡ β.
• An alternative definition of equivalence is as follows: any two sentences α and β are
equivalent if and only if each of them entails the other:
• α ≡ β if and only if α |= β and β |= α.
Validity
• A sentence is valid if it is true in all models.
• Valid sentences are also known as tautologies—they are necessarily true. Because the
sentence True is true in all models, every valid sentence is logically equivalent to True.
Satisfiability
• A sentence is satisfiable if it is true in, or satisfied by, some model.

Inference and proofs


Inference rules can be applied to derive a proof—a chain of conclusions that leads to the
desired goal. The best-known rule is called Modus Ponens and is written as: from α ⇒ β and
α, infer β.

The notation means that, whenever any sentences of the form α ⇒ β and α are given, then the
sentence β can be inferred.
For example, if (WumpusAhead ∧ WumpusAlive) ⇒ Shoot
and (WumpusAhead ∧ WumpusAlive) are given, then Shoot can be inferred.
Another useful inference rule is And-Elimination, which says that, from a conjunction, any of
the conjuncts can be inferred: from α ∧ β, infer α (or β).

For example, from (WumpusAhead ∧ WumpusAlive), WumpusAlive can be inferred.


The Modus Ponens rule is one of the most important rules of inference, and it states that if P
and P → Q are true, then we can infer that Q will be true. It can be represented as: P → Q, P ⊢ Q

Statement-1: "If I am hungry then I will eat" ==> P → Q


Statement-2: "I am hungry" ==> P
Conclusion: "I will eat." ==> Q

11 Write a note on card games (NOV 2019)


ANS Card Games


Card games seem similar to dice games, where the roll is done only once at the beginning. This
is not a correct analogy, but it suggests a first algorithm: consider all possible deals of the
invisible cards, solve each one as if it were fully observable, and choose the move with the best
outcome averaged over all deals.

Here, we run exact MINIMAX if computationally feasible; otherwise, we run H-MINIMAX.


In most card games, the number of possible deals is rather large.
For example, in bridge play, each player sees just two of the four hands; there are two unseen
hands of 13 cards each, so the number of deals is C(26, 13) = 10,400,600.
Monte Carlo approximation: instead of adding up all the deals, we take a random sample of N
deals, where the probability of deal s appearing in the sample is proportional to P(s).

Averaging over clairvoyance


The strategy described in the above equations is sometimes called averaging over clairvoyance
because it assumes that the game will become observable to both players immediately after
the first move.
Despite its intuitive appeal, the strategy can lead one astray.
Consider the following story:
Day 1: Road A leads to a heap of gold; Road B leads to a fork. Take the left fork and you'll find a
bigger heap of gold, but take the right fork and you'll be run over by a bus.
Day 2: Road A leads to a heap of gold; Road B leads to a fork. Take the right fork and you'll find
a bigger heap of gold, but take the left fork and you'll be run over by a bus.
Day 3: Road A leads to a heap of gold; Road B leads to a fork. One branch of the fork leads to a
bigger heap of gold, but take the wrong fork and you'll be hit by a bus.
Unfortunately, you don't know which fork is which.
12 What is a knowledge-based agent? Explain its role and importance. (NOV 2019)
ANS A knowledge base is a set of sentences.
Each sentence is expressed in a language called a knowledge representation language
and represents some assertion about the world.
Knowledge-based agents are those agents who have the capability of maintaining an
internal state of knowledge, reasoning over that knowledge, updating their knowledge
after observations, and taking actions.


The inference system allows us to add a new sentence to the knowledge base. A sentence is a
proposition about the world. The inference system applies logical rules to the KB to deduce
new information.

A knowledge-based agent must be able to do the following:


• An agent should be able to represent states, actions, etc.
• An agent should be able to incorporate new percepts
• An agent can update the internal representation of the world
• An agent can deduce the internal representation of the world
• An agent can deduce appropriate actions.
The generalized architecture for a knowledge-based agent works as follows: the
knowledge-based agent (KBA) takes input from the environment by perceiving the
environment. The input is taken by the inference engine of the agent, which also
communicates with the KB to decide as per the knowledge stored in the KB. The learning
element of the KBA regularly updates the KB by learning new knowledge.
Knowledge base: The knowledge base is a central component of a knowledge-based agent;
it is also known as the KB. It is a collection of sentences (here 'sentence' is a technical term
and is not identical to a sentence in English). These sentences are expressed in a language
which is called a knowledge representation language. The knowledge base of a KBA stores
facts about the world.
13 Explain the Forward-Chaining algorithm for Propositional Definite Clauses (Nov 2022)
ANS Forward chaining is also known as forward deduction or the forward reasoning method
when using an inference engine.
Forward chaining is a form of reasoning which starts with atomic sentences in the knowledge
base and applies inference rules (Modus Ponens) in the forward direction to extract more data
until a goal is reached.
The forward-chaining algorithm starts from known facts, triggers all rules whose premises are
satisfied, and adds their conclusions to the known facts. This process repeats until the problem
is solved.
Properties of Forward-Chaining
• It is a bottom-up approach, as it moves from bottom to top.
• It is a process of making a conclusion based on known facts or data, by starting from
the initial state until the goal state is reached.


• The forward-chaining approach is also called data-driven, as we reach the goal using


available data.
• The forward-chaining approach is commonly used in expert systems, such as CLIPS,
business, and production rule systems.
FORWARD CHAINING Example
"As per the law, it is a crime for an American to sell weapons to hostile nations. Country A,
an enemy of America, has some missiles, and all the missiles were sold to it by Robert, who is
an American citizen."
Prove that "Robert is a criminal."
To solve the above problem, first, we will convert all the above facts into first-order definite
clauses, and then we will use a forward-chaining algorithm to reach the goal.
• It is a crime for an American to sell weapons to hostile nations. (Let's say p, q, and r are
variables)
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
• Country A has some missiles: ∃p Owns(A, p) ∧ Missile(p). It can be written as two
definite clauses by using Existential Instantiation, introducing the new constant T1.
Owns(A, T1) ......(2)
Missile(T1) .......(3)
• All of the missiles were sold to country A by Robert.
∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
• Missiles are weapons.
Missile(p) → Weapon(p) .......(5)
• An enemy of America is known as hostile.
Enemy(p, America) → Hostile(p) ........(6)
• Country A is an enemy of America.
Enemy(A, America) .........(7)
• Robert is American.
American(Robert) ..........(8)
Forward chaining proof
Step-1:
In the first step we will start with the known facts and will choose the sentences which do
not have implications, such as: American(Robert), Enemy(A, America), Owns(A, T1), and
Missile(T1). All these facts are represented below.

Step-2:
In the second step, we will see which facts can be inferred from the available facts with
satisfied premises.
Rule-(1) does not have its premises satisfied, so it will not be added in the first iteration.
Rule-(2) and (3) are already added.
Rule-(4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added, which is
inferred from the conjunction of Rule (2) and (3).
Rule-(6) is satisfied with the substitution {p/A}, so Hostile(A) is added, which is inferred from
Rule-(7).


Step-3:
At step-3, as we can check, Rule-(1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so
we can add Criminal(Robert), which is inferred from all the available facts. And hence we have
reached our goal statement.

Hence it is proved that Robert is a criminal, using the forward-chaining approach.
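The proof above can be sketched as propositional forward chaining, with the substitutions already applied so that each rule is ground. The rule list and fact strings below are a hand-instantiated version of clauses (1)-(8).

```python
# Each rule is (premises, conclusion); facts are ground literals with the
# substitutions p/Robert, q/T1, r/A already made.
rules = [
    ({"Missile(T1)", "Owns(A,T1)"}, "Sells(Robert,T1,A)"),      # rule (4)
    ({"Missile(T1)"}, "Weapon(T1)"),                            # rule (5)
    ({"Enemy(A,America)"}, "Hostile(A)"),                       # rule (6)
    ({"American(Robert)", "Weapon(T1)",
      "Sells(Robert,T1,A)", "Hostile(A)"}, "Criminal(Robert)"), # rule (1)
]
facts = {"American(Robert)", "Enemy(A,America)", "Owns(A,T1)", "Missile(T1)"}

def forward_chain(facts, rules):
    """Fire every rule whose premises hold; repeat until nothing new fires."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # trigger rule, add its conclusion
                changed = True
    return facts

print("Criminal(Robert)" in forward_chain(facts, rules))  # True
```

The run mirrors the three proof steps: it first derives Sells(Robert,T1,A), Weapon(T1), and Hostile(A), and then rule (1) fires to add Criminal(Robert).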

Unit 4
1 What is first-order logic? Discuss the different elements used in first-order logic. (Nov 2018)
OR
What is meant by First Order Logic? Explain the syntax and semantics of First Order Logic. (Nov
2022)
ANS FOL is a way of knowledge representation.
• We can express natural language statements in a concise way.
• It is also known as first-order predicate logic.
• FOL develops the information about objects in an easier way and can also express the
relationships between those objects.
• FOL assumes the following things in the world:
• Objects: A, B, numbers, colors, people, etc.
• Relations: red, round, brother of, etc.
• Functions: father of, best friend, end of, etc.


• FOL has two main parts: syntax and semantics.

Syntax and semantics of FOL


Models for first-order logic have objects in them.
The domain of a model is the set of objects or domain elements it contains.
The domain is required to be nonempty—every possible world must contain at least one
object.
It doesn't matter what these objects are—all that matters is how many there are in each
particular model.
A relation is just the set of tuples of objects that are related.
Models in first-order logic require total functions; that is, there must be a value for every input
tuple.
• Richard refers to Richard the Lionheart and John refers to the evil King John.
• Brother refers to the brotherhood relation, that is, the set of pairs of objects that stand in it;
OnHead refers to the “on head” relation that holds between the crown and King John; Person,
King, and Crown refer to the sets of objects that are persons, kings, and crowns.
• LeftLeg refers to the “left leg” function, that is, the mapping given in Equation (8.2).

2 Explain universal and existential quantifiers with suitable examples. (Nov 2018)
OR
Write a short note on Universal and Existential quantifiers with suitable examples. (Nov 2022)
OR
Explain the following concepts
i. Universal Instantiation
ii. Existential Instantiation (April 2019)
OR
Explain the universal quantifier with an example
ANS A quantifier is a language element which generates quantification, and quantification specifies
the quantity of specimens in the universe of discourse.
These are the symbols that permit us to determine or identify the range and scope of a variable
in a logical expression. There are two types of quantifier:
1) Universal quantification (∀) (for all, everyone, everything)
2) Existential quantification (∃) (for some, at least one)

The universal quantifier is a symbol of logical representation, which specifies that the
statement within its range is true for everything or every instance of a particular thing.


The universal quantifier is represented by the symbol ∀, which resembles an inverted A.

Existential quantifiers express that the statement within their scope is true for at least one instance of something.
The existential quantifier is denoted by the logical operator ∃, which resembles an inverted E. When it is used with a predicate variable, it is called an existential quantifier.

"All kings are persons": ∀x King(x) ⇒ Person(x)

∀ is usually pronounced "For all . . .". The sentence says, "For all x, if x is a king, then x is a person."
The symbol x is called a variable. By convention, variables are lowercase letters.
Universal quantification makes statements about every object. Similarly, we can make a statement about some object in the universe without naming it, by using an existential quantifier.
To say, for example, that King John has a crown on his head, we write
∃x Crown(x) ∧ OnHead(x, John).
∃x is pronounced "There exists an x such that . . ." or "For some x . . .".
The sentence ∃x P says that P is true for at least one object x.

One more example:
All men drink coffee.
∀x man(x) → drink(x, coffee)
It is read as: for all x, if x is a man, then x drinks coffee.

Example:
Some boys are intelligent.
∃x boys(x) ∧ intelligent(x)

It is read as: there is some x such that x is a boy and x is intelligent.

3 Convert the following natural sentences into FOL form:

i. Virat is a cricketer.
ii. All batsmen are cricketers.
iii. Everybody speaks some language.
iv. Every car has a wheel.
v. Everybody loves somebody some time. (Nov 2018)
Virat is a cricketer:
ANS Cricketer(Virat)

All batsmen are cricketers:

∀x Batsman(x) ⇒ Cricketer(x)

Everybody speaks some language:

∀x ∃y Person(x) ⇒ (Language(y) ∧ Speaks(x, y))


Every car has a wheel:

∀x Car(x) ⇒ ∃y (Wheel(y) ∧ PartOf(y, x))

Everybody loves somebody some time:

∀x ∃y ∃t Person(x) ⇒ Loves(x, y, t)
4 What is knowledge engineering? Write the steps for its execution. (Nov 2018)
OR
Explain the steps of Knowledge Engineering projects in First Order Logic. (NOV 2022)
OR
Explain the process of knowledge engineering (Nov 2019)
OR
Explain in detail the knowledge engineering process in First Order Logic. (APR 2023)
ANS The general process of knowledge-base construction is called knowledge engineering.
A knowledge engineer is someone who investigates a particular domain, learns what concepts are important in that domain, and creates a formal representation of the objects and relations in the domain. The steps are:
• Identify the task
• Assemble the relevant knowledge
• Decide on a vocabulary of predicates, functions, and constants
• Encode general knowledge about the domain
• Encode a description of the specific problem instance
• Pose queries to the inference procedure and get answers
• Debug the knowledge base

The knowledge-engineering process

Identify the task
The knowledge engineer must delineate the range of questions that the knowledge base will support and the kinds of facts that will be available for each specific problem instance.

Assemble the relevant knowledge

The knowledge engineer might already be an expert in the domain or might need to work with real experts to extract what they know, a process called knowledge acquisition. At this stage, the knowledge is not represented formally. The idea is to understand the scope of the knowledge base, as determined by the task, and to understand how the domain actually works.


Decide on a vocabulary of predicates, functions, and constants

That is, translate the important domain-level concepts into logic-level names. This involves many questions of knowledge-engineering style. Like programming style, this can have a significant impact on the eventual success of the project.

Encode general knowledge about the domain

The knowledge engineer writes down the axioms for all the vocabulary terms. This pins down (to the extent possible) the meaning of the terms, enabling the expert to check the content.
To encode the general knowledge about the logic circuit, we need the following rules:
If two terminals are connected, then they have the same signal:
∀ t1, t2 Terminal(t1) ∧ Terminal(t2) ∧ Connected(t1, t2) ⇒ Signal(t1) = Signal(t2)
The signal at every terminal is either 0 or 1:
∀ t Terminal(t) ⇒ Signal(t) = 1 ∨ Signal(t) = 0
The Connected predicate is commutative:
∀ t1, t2 Connected(t1, t2) ⇒ Connected(t2, t1)
Encode a description of the specific problem instance
If the ontology is well thought out, this step will be easy. It will involve writing simple atomic sentences about instances of concepts that are already part of the ontology.
First, we categorize the circuit and its gates:
Circuit(C1) ∧ Arity(C1, 3, 2)
Gate(X1) ∧ Type(X1) = XOR
Gate(X2) ∧ Type(X2) = XOR
Gate(A1) ∧ Type(A1) = AND
Gate(A2) ∧ Type(A2) = AND
Gate(O1) ∧ Type(O1) = OR
Then, we show the connections between them:
Connected(Out(1,X1), In(1,X2))    Connected(In(1,C1), In(1,X1))
Connected(Out(1,X1), In(2,A2))    Connected(In(1,C1), In(1,A1))
Connected(Out(1,A2), In(1,O1))    Connected(In(2,C1), In(2,X1))
Connected(Out(1,A1), In(2,O1))    Connected(In(2,C1), In(2,A1))
Connected(Out(1,X2), Out(1,C1))   Connected(In(3,C1), In(2,X2))
Connected(Out(1,O1), Out(2,C1))   Connected(In(3,C1), In(1,A2))

Pose queries to the inference procedure and get answers

This is where the reward is: we can let the inference procedure operate on the axioms and problem-specific facts to derive the facts we are interested in knowing.
We can find all the possible sets of values of the terminals of the adder circuit. The first query will be: what combination of inputs would make the first output of circuit C1 be 0 and the second output be 1?
∃ i1, i2, i3 Signal(In(1, C1)) = i1 ∧ Signal(In(2, C1)) = i2 ∧ Signal(In(3, C1)) = i3 ∧ Signal(Out(1, C1)) = 0 ∧ Signal(Out(2, C1)) = 1
The answers are substitutions for the variables i1, i2, and i3 such that the resulting sentence is entailed by the knowledge base. ASKVARS will give us three such substitutions:
{i1/1, i2/1, i3/0} {i1/1, i2/0, i3/1} {i1/0, i2/1, i3/1}
Debug the knowledge base
Alas, the answers to queries will seldom be correct on the first try. More precisely, the answers will be correct for the knowledge base as written, assuming that the inference procedure is sound, but they will not be the ones that the user is expecting.
Try to debug the issues in the knowledge base.
5 Give a comparison between forward chaining and backward chaining. (Nov 2018)
OR
Give the outline of the simple forward chaining algorithm (Nov 2019)
ANS Forward chaining is also known as forward deduction or forward reasoning when using an inference engine.
Forward chaining is a form of reasoning which starts with atomic sentences in the knowledge base and applies inference rules (Modus Ponens) in the forward direction to extract more data until a goal is reached.
The forward-chaining algorithm starts from known facts, triggers all rules whose premises are satisfied, and adds their conclusions to the known facts. This process repeats until the problem is solved.
Properties of forward chaining
• It is a bottom-up approach, as it moves from the bottom (facts) to the top (goal).
• It is a process of making a conclusion based on known facts or data, starting from the initial state and reaching the goal state.
• The forward-chaining approach is also called data-driven, as we reach the goal using the available data.
The forward-chaining approach is commonly used in expert systems, such as CLIPS, and in business and production rule systems.
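The trigger-and-add loop described above can be sketched in a few lines of Python. The rule encoding and the atom names are illustrative assumptions of this sketch, not a standard API:

```python
# A minimal forward-chaining sketch over propositional definite rules.
# Each rule is (set_of_premises, conclusion); facts is a set of known atoms.
def forward_chain(rules, facts, goal):
    facts = set(facts)
    changed = True
    while changed:                       # repeat until no rule fires
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)    # fire the rule: add its conclusion
                changed = True
        if goal in facts:
            return True
    return goal in facts

rules = [({"Missile(T1)"}, "Weapon(T1)"),
         ({"Weapon(T1)", "American(Robert)"}, "Criminal(Robert)")]
print(forward_chain(rules, {"Missile(T1)", "American(Robert)"}, "Criminal(Robert)"))  # True
```

The loop is data-driven: it never looks at the goal except to stop early, which is exactly the behaviour described above.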


Backward chaining is also known as backward reasoning when using an inference engine. These algorithms work backward from the goal, chaining through rules to find known facts that support the proof.
Properties of backward chaining

• It is known as a top-down approach.

• Backward chaining is based on the modus ponens inference rule.
• In backward chaining, the goal is broken into a sub-goal or sub-goals to prove the facts true.
• It is called a goal-driven approach, as a list of goals decides which rules are selected and used.
• The backward-chaining algorithm is used in game theory, automated theorem-proving tools, inference engines, proof assistants, and various AI applications.
The backward-chaining method mostly uses a depth-first search strategy for proof.

6 Explain in brief about unification (Nov 2018)

OR
What is unification? Explain the process of unification (Nov 2019)
OR
Write a short note on the Unification Process. (NOV 2022)
ANS UNIFICATION
Lifted inference rules require finding substitutions that make different logical expressions look identical. This process is called unification and is a key component of all first-order inference algorithms.
Following are some basic conditions for unification:
• Predicate symbols must be the same; atoms or expressions with different predicate symbols can never be unified.
• The number of arguments in both expressions must be identical.
• Unification will fail if two similar variables are present in the same expression.
The UNIFY algorithm takes two sentences and returns a unifier for them if one exists:
UNIFY(p, q) = θ where SUBST(θ, p) = SUBST(θ, q)
Suppose we have a query AskVars(Knows(John, x)): whom does John know? Answers to this query can be found by finding all sentences in the knowledge base that unify with Knows(John, x). Here are the results of unification with four different sentences that might be in the knowledge base:
UNIFY(Knows(John, x), Knows(John, Jane)) = {x/Jane}
UNIFY(Knows(John, x), Knows(y, Bill)) = {x/Bill, y/John}
UNIFY(Knows(John, x), Knows(y, Mother(y))) = {y/John, x/Mother(John)}
UNIFY(Knows(John, x), Knows(x, Elizabeth)) = fail
The last unification fails because x cannot take on the values John and Elizabeth at the same time.
Now, remember that Knows(x, Elizabeth) means "Everyone knows Elizabeth," so we should be able to infer that John knows Elizabeth.
The problem arises only because the two sentences happen to use the same variable name, x. The problem can be avoided by standardizing apart one of the two sentences being unified, which means renaming its variables to avoid name clashes.


For example, we can rename x in Knows(x, Elizabeth) to x17 (a new variable name) without changing its meaning. Now the unification will work:
UNIFY(Knows(John, x), Knows(x17, Elizabeth)) = {x/Elizabeth, x17/John}
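The behaviour of UNIFY can be sketched in Python. The term encoding (tuples for compound terms, lowercase strings for variables, capitalized strings for constants) is an assumption of this sketch, and the occurs-check is omitted for brevity:

```python
# Terms: "x" (lowercase) is a variable, "John" is a constant,
# ("Knows", "John", "x") is a compound term Knows(John, x).
def is_var(t):
    return isinstance(t, str) and t[0].islower()

def substitute(theta, t):
    if is_var(t):
        return substitute(theta, theta[t]) if t in theta else t
    if isinstance(t, tuple):
        return tuple(substitute(theta, a) for a in t)
    return t

def unify(x, y, theta=None):
    theta = dict(theta or {})
    x, y = substitute(theta, x), substitute(theta, y)
    if x == y:
        return theta
    if is_var(x):
        theta[x] = y
        return theta
    if is_var(y):
        theta[y] = x
        return theta
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y) and x[0] == y[0]:
        for a, b in zip(x[1:], y[1:]):   # unify arguments left to right
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None  # predicate symbols or arities differ: unification fails

# UNIFY(Knows(John, x), Knows(y, Bill)) = {x/Bill, y/John}
print(unify(("Knows", "John", "x"), ("Knows", "y", "Bill")))
```

With this sketch, unifying Knows(John, x) with Knows(x, Elizabeth) returns None, reproducing the clash described above.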
7 Explain the following with example
i. Atomic sentence
ii. Complex sentence (April 2019)
ANS Atomic sentences
An atomic sentence (or atom for short) is formed from a predicate symbol optionally followed by a parenthesized list of terms, such as Brother(Richard, John).
This states, under the intended interpretation given earlier, that Richard the Lionheart is the brother of King John. Atomic sentences can have complex terms as arguments. Thus,
Married(Father(Richard), Mother(John))
states that Richard the Lionheart's father is married to King John's mother.
An atomic sentence is true in a given model if the relation referred to by the predicate symbol holds among the objects referred to by the arguments.

Complex sentences
We can use logical connectives to construct more complex sentences, with the same syntax and semantics as in propositional calculus. Here are four sentences that are true in the model under our intended interpretation:
¬Brother(LeftLeg(Richard), John)
Brother(Richard, John) ∧ Brother(John, Richard)
King(Richard) ∨ King(John)
¬King(Richard) ⇒ King(John)

8 Define the wumpus world problem in terms of first order logic. (April 2019)
ANS Recall that the wumpus agent receives a percept vector with five elements. The corresponding first-order sentence stored in the knowledge base must include both the percept and the time at which it occurred; otherwise, the agent will get confused about when it saw what. We use integers for time steps. A typical percept sentence would be:

Percept([Stench, Breeze, Glitter, None, None], 5)

Here, Percept is a binary predicate, and Stench and so on are constants placed in a list. The actions in the wumpus world can be represented by logical terms:
Turn(Right), Turn(Left), Forward, Shoot, Grab, Climb
To determine which is best, the agent program executes the query
ASKVARS(∃a BestAction(a, 5)), which returns a binding list such as {a/Grab}.

Simple "reflex" behavior can also be implemented by quantified implication sentences.

For example, we have
∀t Glitter(t) ⇒ BestAction(Grab, t)

Given the percept and rules from the preceding paragraphs, this would yield the desired conclusion BestAction(Grab, 5); that is, Grab is the right thing to do.


9 Write and explain a simple backward-chaining algorithm for first-order knowledge bases. (April 2019)
OR
Describe the Backward-Chaining algorithm for First Order Definite Clauses. (Nov 2022)
ANS Backward chaining is also known as backward deduction or backward reasoning when using an inference engine. A backward-chaining algorithm is a form of reasoning which starts with the goal and works backward, chaining through rules to find known facts that support the goal.

Properties of backward chaining:

• It is known as a top-down approach.

• Backward chaining is based on the modus ponens inference rule.
• In backward chaining, the goal is broken into a sub-goal or sub-goals to prove the facts true.
• It is called a goal-driven approach, as a list of goals decides which rules are selected and used.
• The backward-chaining algorithm is used in game theory, automated theorem-proving tools, inference engines, proof assistants, and various AI applications.
• The backward-chaining method mostly uses a depth-first search strategy for proof.
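A minimal goal-driven sketch of the idea, assuming a propositional encoding where `rules` maps each conclusion to lists of premise lists; the atom names are illustrative:

```python
# Depth-first backward chaining over propositional definite rules.
def backward_chain(goal, rules, facts, seen=None):
    seen = seen or set()
    if goal in facts:
        return True
    if goal in seen:              # avoid looping on recursive rules
        return False
    seen = seen | {goal}
    # The goal is provable if some rule for it has all premises provable.
    for premises in rules.get(goal, []):
        if all(backward_chain(p, rules, facts, seen) for p in premises):
            return True
    return False

rules = {"Criminal": [["American", "Weapon", "Sells", "Hostile"]],
         "Weapon": [["Missile"]],
         "Hostile": [["Enemy"]]}
facts = {"American", "Missile", "Sells", "Enemy"}
print(backward_chain("Criminal", rules, facts))  # True
```

Note the contrast with forward chaining: only rules relevant to the current goal are ever examined, and sub-goals are explored depth-first.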
10 Explain the first order definite clause. (April 2019)
ANS Definite clause: a clause which is a disjunction of literals with exactly one positive literal is known as a definite clause or strict Horn clause.
Horn clause: a clause which is a disjunction of literals with at most one positive literal is known as a Horn clause. Hence all definite clauses are Horn clauses.
Example: (¬p ∨ ¬q ∨ k). It has only one positive literal, k.
It is equivalent to p ∧ q → k.
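Counting positive literals makes these definitions easy to check mechanically. A small sketch, assuming clauses are lists of literal strings with `~` marking negation (an illustrative encoding):

```python
# A clause is a list of literals; "~p" means the negative literal ¬p.
def positive_count(clause):
    return sum(1 for lit in clause if not lit.startswith("~"))

def is_horn(clause):
    return positive_count(clause) <= 1   # at most one positive literal

def is_definite(clause):
    return positive_count(clause) == 1   # exactly one positive literal

print(is_definite(["~p", "~q", "k"]))  # True: equivalent to p ∧ q → k
print(is_horn(["~p", "~q"]))           # True, but not definite
```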

12 What are predicates? Explain their syntax and semantics. (Nov 2019)
ANS A predicate is an expression of one or more variables determined on some specific domain. A predicate with variables can be made a proposition either by assigning a value to the variable or by quantifying the variable.

The following are some examples of predicates.

Consider E(x, y) to denote "x = y"
Consider X(a, b, c) to denote "a + b + c = 0"
Consider M(x, y) to denote "x is married to y"

The main task of the syntax of any language is to distinguish the grammatically correct from the grammatically incorrect sequences of words, which in our case are certain symbols; the main task of its semantics is to determine when a grammatically correct sentence is true and when it is false. The syntax and semantics of the language of predicate logic make it possible to strictly define the concepts of logical inference, logical equivalence, contradiction, consistency, logical validity, etc.

The well-formed formulas of predicate logic are interpreted with respect to a domain of objects called the universe of discourse, which we denote by "D".


Semantics: to interpret a formula as a sentence (a statement or an open sentence) from natural language, we need to interpret the predicate letters and the constants in it. The interpretation of a constant consists in determining the object in D it denotes. For example, if D is the set of humans (past and present), we can interpret the constant "a" so that, in effect, it has the meaning of the proper name "Socrates" by determining that it denotes (refers to) the person Socrates.
13 What are Quantifiers? Explain the types with syntax and example
ANS A quantifier is a language element which generates quantification, and quantification specifies the quantity of specimens in the universe of discourse.
Quantifiers are the symbols that permit us to determine or identify the range and scope of a variable in a logical expression. There are two types of quantifier:
1) Universal quantification (∀) (for all, everyone, everything)
2) Existential quantification (∃) (for some, at least one)

A universal quantifier is a symbol of logical representation which specifies that the statement within its range is true for everything, or for every instance of a particular thing.
The universal quantifier is represented by the symbol ∀, which resembles an inverted A.

Existential quantifiers express that the statement within their scope is true for at least one instance of something.
The existential quantifier is denoted by the logical operator ∃, which resembles an inverted E. When it is used with a predicate variable, it is called an existential quantifier.

"All kings are persons": ∀x King(x) ⇒ Person(x)

∀ is usually pronounced "For all . . .". The sentence says, "For all x, if x is a king, then x is a person."
The symbol x is called a variable. By convention, variables are lowercase letters.
Universal quantification makes statements about every object. Similarly, we can make a statement about some object in the universe without naming it, by using an existential quantifier.
To say, for example, that King John has a crown on his head, we write
∃x Crown(x) ∧ OnHead(x, John).
∃x is pronounced "There exists an x such that . . ." or "For some x . . .".
The sentence ∃x P says that P is true for at least one object x.

One more example:
All men drink coffee.
∀x man(x) → drink(x, coffee)
It is read as: for all x, if x is a man, then x drinks coffee.

Example:
Some boys are intelligent.
∃x boys(x) ∧ intelligent(x)

It is read as: there is some x such that x is a boy and x is intelligent.


15 What is Predicate Logic? Differentiate between Propositional Logic and First Order Logic. (APR 2023)

ANS Predicate logic (first-order logic) extends propositional logic with objects, predicates (relations), functions, and quantifiers, so that statements about individual entities and their relationships can be expressed.

Propositional Logic:
• Converts a complete sentence into a symbol and makes it logical.
• Its limitation is that it does not represent individual entities.
• It does not express generalization, specialization, or patterns; for example, quantifiers cannot be used in PL.

First-Order Logic:
• Builds a relation for a particular sentence involving relations, constants, and functions.
• Can easily represent individual entities; a single sentence can be easily represented in FOL.
• Users can easily use quantifiers, as FOL does express generalization, specialization, and patterns.

16 Explain Datalog used in first order definite clauses. (Nov 2022)

ANS Datalog is a declarative logic programming language.
Datalog generally uses a bottom-up rather than top-down evaluation model.

Datalog is closely associated with forward chaining:

Datalog = first-order definite clauses + no functions
Forward chaining terminates for Datalog in a finite number of iterations.

A Datalog program consists of facts, which are statements that are held to be true, and rules, which say how to deduce new facts from known facts. For example, here are two facts that mean xerces is a parent of brooke and brooke is a parent of damocles:
parent(xerces, brooke).
parent(brooke, damocles).
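The bottom-up evaluation model can be illustrated by adding an `ancestor` rule over the two parent facts above and iterating to a fixed point. The Python encoding below is a sketch, not real Datalog syntax:

```python
# Naive bottom-up (forward) evaluation of the Datalog rules
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
parent = {("xerces", "brooke"), ("brooke", "damocles")}
ancestor = set(parent)               # base rule: every parent is an ancestor
changed = True
while changed:                       # iterate until a fixed point is reached
    new = {(x, z) for (x, y1) in parent for (y2, z) in ancestor if y1 == y2}
    changed = not new <= ancestor    # anything genuinely new this round?
    ancestor |= new
print(sorted(ancestor))
```

Because there are no function symbols, only finitely many facts can ever be derived, which is why the iteration is guaranteed to terminate.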
17 Explain the following with respect to FOL:
a. Term
b. Atomic Sentences
c. Complex Sentences
d. Universal Quantifiers
e. Existential Quantifier (APR 2023)


ANS FOL
• FOL expresses information about objects in an easier way and can also express the relationships between those objects.
• FOL assumes the following things in the world:
• Objects: A, B, numbers, colors, people, etc.
• Relations: red, round, brother of, etc.
• Functions: father of, best friend, end of, etc.
• FOL has two main parts: syntax and semantics.

Term
A term is a logical expression that refers to an object. Terms are built from constant symbols (e.g., John, Richard), variables (e.g., x, y), and function symbols applied to terms (e.g., LeftLeg(John)).

Atomic sentences
An atomic sentence (or atom for short) is formed from a predicate symbol optionally followed by a parenthesized list of terms, such as Brother(Richard, John).
This states, under the intended interpretation given earlier, that Richard the Lionheart is the brother of King John. Atomic sentences can have complex terms as arguments. Thus,
Married(Father(Richard), Mother(John))
states that Richard the Lionheart's father is married to King John's mother.
An atomic sentence is true in a given model if the relation referred to by the predicate symbol holds among the objects referred to by the arguments.

Complex sentences
We can use logical connectives to construct more complex sentences, with the same syntax and semantics as in propositional calculus. Here are four sentences that are true in the model under our intended interpretation:
¬Brother(LeftLeg(Richard), John)
Brother(Richard, John) ∧ Brother(John, Richard)
King(Richard) ∨ King(John)
¬King(Richard) ⇒ King(John)

Quantifiers
1) Universal quantification (∀)
2) Existential quantification (∃)

"All kings are persons": ∀x King(x) ⇒ Person(x)

∀ is usually pronounced "For all . . .". The sentence says, "For all x, if x is a king, then x is a person."
The symbol x is called a variable. By convention, variables are lowercase letters.
Universal quantification makes statements about every object. Similarly, we can make a statement about some object in the universe without naming it, by using an existential quantifier.
To say, for example, that King John has a crown on his head, we write
∃x Crown(x) ∧ OnHead(x, John).
∃x is pronounced "There exists an x such that . . ." or "For some x . . .".
The sentence ∃x P says that P is true for at least one object x.

18 Convert the facts given below into FOL and prove that "Colonel is a Criminal" using Forward Chaining.
a. It is a crime for an American to sell weapons to an enemy of America.


b. Country Nono is an enemy of America.


c. Nono has some Missiles.
d. All the Missiles were sold to Nono by Colonel.
e. Missile is a weapon.
f. Colonel is American (APR 2023)
ANS Note: in this answer, "Colonel" is taken to be Robert, and country Nono is written as country A.

"As per the law, it is a crime for an American to sell weapons to hostile nations. Country A, an enemy of America, has some missiles, and all the missiles were sold to it by Robert, who is an American citizen."
Prove that "Robert is a criminal."
To solve the above problem, first we convert all the facts into first-order definite clauses, and then we use a forward-chaining algorithm to reach the goal.
• It is a crime for an American to sell weapons to hostile nations. (Let p, q, and r be variables.)
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
• Country A has some missiles: ∃p Owns(A, p) ∧ Missile(p). This can be written as two definite clauses by using Existential Instantiation, introducing a new constant T1.
Owns(A, T1) ......(2)
Missile(T1) .......(3)
• All of the missiles were sold to country A by Robert.
∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
• Missiles are weapons.
Missile(p) → Weapon(p) .......(5)
• An enemy of America is known as hostile.
Enemy(p, America) → Hostile(p) ........(6)
• Country A is an enemy of America.
Enemy(A, America) .........(7)
• Robert is American.
American(Robert) ..........(8)
Forward chaining proof
Step 1:
In the first step we start with the known facts and choose the sentences which do not have implications, such as American(Robert), Enemy(A, America), Owns(A, T1), and Missile(T1). All these facts are represented as below.

Step 2:
In the second step, we add the facts which can be inferred from the available facts, i.e., rules whose premises are satisfied.
Rule (1) does not have its premises satisfied yet, so nothing is added from it in the first iteration.
Rules (2) and (3) are already added.
Rule (4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added, which is inferred from the conjunction of Rules (2) and (3).
Rule (6) is satisfied with the substitution {p/A}, so Hostile(A) is added, which is inferred from Rule (7).


Step 3:
In step 3, Rule (1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so we can add Criminal(Robert), which is inferred from all the available facts. Hence we have reached our goal statement.

Hence it is proved that Robert is a criminal, using the forward-chaining approach.


19 Differentiate between Forward and Backward Chaining (APR 2023)
ANS
Forward Chaining | Backward Chaining
Forward chaining starts from known facts and applies inference rules to extract more data until it reaches the goal. | Backward chaining starts from the goal and works backward through inference rules to find the required facts that support the goal.
It is a bottom-up approach. | It is a top-down approach.
Forward chaining is known as a data-driven inference technique, as we reach the goal using the available data. | Backward chaining is known as a goal-driven technique, as we start from the goal and divide it into sub-goals to extract the facts.
Forward chaining reasoning applies a breadth-first search strategy. | Backward chaining reasoning applies a depth-first search strategy.
Forward chaining tests all the available rules. | Backward chaining tests only the few required rules.
Forward chaining is suitable for planning, monitoring, control, and interpretation applications. | Backward chaining is suitable for diagnostic, prescription, and debugging applications.


Forward chaining can generate an infinite number of possible conclusions. | Backward chaining generates a finite number of possible conclusions.
It operates in the forward direction. | It operates in the backward direction.
Forward chaining is aimed at any conclusion. | Backward chaining is aimed only at the required data.

20 What is Resolution? Mention the steps for Resolution and also for converting FOL into Conjunctive Normal Form (CNF). Consider the following statements and perform the following conversions on them:
f. Convert to FOL.
g. Convert FOL to CNF.
h. Prove that "Someone is Smiling" using Resolution.
i. Draw the Resolution tree.
Statements:
● All people who are graduating are happy.
● All happy people smile. 
● Someone is Graduating. (APR 2023)
ANS Resolution is a single inference rule that proves theorems by refutation: to prove a goal, add its negation to the knowledge base (in CNF) and derive a contradiction (the empty clause).
Steps for converting FOL to CNF: (1) eliminate implications (α ⇒ β becomes ¬α ∨ β); (2) move ¬ inwards; (3) standardize variables apart; (4) skolemize existentially quantified variables; (5) drop universal quantifiers; (6) distribute ∨ over ∧.
f. FOL:
1. ∀x Graduating(x) ⇒ Happy(x)
2. ∀x Happy(x) ⇒ Smile(x)
3. ∃x Graduating(x)
Goal: ∃x Smile(x)
g. CNF:
1. ¬Graduating(x) ∨ Happy(x)
2. ¬Happy(y) ∨ Smile(y)
3. Graduating(G) (G is a Skolem constant)
Negated goal: ¬Smile(z)
h. Resolution proof:
Resolve ¬Smile(z) with clause 2 using {y/z} to get ¬Happy(z);
resolve ¬Happy(z) with clause 1 using {x/z} to get ¬Graduating(z);
resolve ¬Graduating(z) with clause 3 using {z/G} to get the empty clause.
A contradiction is reached, so the negated goal is false and "Someone is smiling" is proved.
i. Resolution tree:
¬Smile(z)        ¬Happy(y) ∨ Smile(y)
        \        /
        ¬Happy(z)        ¬Graduating(x) ∨ Happy(x)
                \        /
                ¬Graduating(z)        Graduating(G)
                        \             /
                        {} (empty clause)

Unit 5
1 What is planning? Explain STRIPS operators with suitable example. (NOV 2018)
OR
What is planning? Explain the need of planning (NOV 2019)
ANS Planning is an important part of Artificial Intelligence which deals with the tasks and domains of a particular problem. Planning is considered the logical side of acting.

Everything we humans do is with a definite goal in mind, and all our actions are oriented towards achieving our goal. Similarly, planning is also done for Artificial Intelligence.

For example, planning is required to reach a particular destination. It is necessary to find the best route in planning, but the tasks to be done at a particular time and why they are done are also very important.

That is why planning is considered the logical side of acting. In other words, planning is about deciding the tasks to be performed by the artificial intelligence system and the system's functioning under domain-independent conditions.

The Stanford Research Institute Problem Solver (STRIPS) is an automated planning technique that works by executing a domain and problem to find a goal. With STRIPS, you first describe the world. You do this by providing objects, actions, preconditions, and effects. These are all the types of things you can do in the game world.


Once the world is described, you then provide a problem set. A problem consists of an initial state and a goal condition. STRIPS can then search all possible states, starting from the initial one, executing various actions, until it reaches the goal.

A common language for writing STRIPS domain and problem sets is the Planning Domain Definition Language (PDDL). PDDL lets you write most of the code with English words, so that it can be clearly read and (hopefully) well understood. It's a relatively easy approach to writing simple AI planning problems.
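The precondition/add/delete structure of STRIPS operators can be sketched as a breadth-first state-space search. The action names and their effects below are illustrative, not taken from any standard PDDL domain:

```python
from collections import deque

# Each action: (preconditions, add list, delete list), all sets of atoms.
actions = {
    "GetLoan": ({"GoodCredit"}, {"Money"}, set()),
    "BuyLand": ({"Money"}, {"Land"}, {"Money"}),
}

def plan(initial, goal):
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                 # all goal atoms hold in this state
            return steps
        for name, (pre, add, dele) in actions.items():
            if pre <= state:              # action is applicable
                nxt = frozenset((state - dele) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                           # goal unreachable

print(plan({"GoodCredit"}, {"Land"}))  # ['GetLoan', 'BuyLand']
```

The search starts from the initial state and applies any action whose preconditions hold, exactly the "search all possible states" procedure described above.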
2 Explain in brief about partially ordered plan. (NOV 2018)
ANS A partially ordered plan is a partially ordered collection of steps where:
• the Start step has the initial state description as its effect;
• the Finish step has the goal description as its precondition;
• causal links run from the outcome of one step to a precondition of another;
• temporal orderings hold between pairs of steps.
An open condition is a precondition of a step not yet causally linked. A plan is complete iff every precondition is achieved. A precondition is achieved iff it is the effect of an earlier step and no possibly intervening step undoes it.

Alternatively:

A partial-order plan, or partial plan, is a plan which specifies all actions that need to be taken, but only specifies the order between actions when necessary. It is the result of a partial-order planner. A partial-order plan consists of four components:

A set of actions (also known as operators).

A partial order for the actions. It specifies the conditions about the order of some actions.
A set of causal links. It specifies which actions meet which preconditions of other actions. Alternatively, a set of bindings between the variables in actions.
A set of open preconditions. It specifies which preconditions are not fulfilled by any action in the partial-order plan.
In order to keep the possible orders of the actions as open as possible, the set of order conditions and causal links must be as small as possible.

A plan is a solution if the set of open preconditions is empty.


A linearization of a partial-order plan is a total-order plan derived from that partial-order plan; in other words, both plans consist of the same actions, with the order in the linearization being a linear extension of the partial order in the original plan.

Example
For example, a plan for baking a cake might start:

go to the store
get eggs; get flour; get milk
pay for all goods
go to the kitchen
This is a partial plan because the order for getting eggs, flour, and milk is not specified; the agent can wander around the store reactively accumulating all the items on its shopping list until the list is complete.
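One way to linearize this partial plan is a topological sort over the ordering constraints. The step names follow the cake example above, and the sketch assumes the partial order is acyclic (as any valid plan must be):

```python
# Produce one total order consistent with a set of (before, after) constraints.
def linearize(steps, before):
    order, placed = [], set()
    while len(order) < len(steps):
        for s in steps:
            # A step can be placed once all steps ordered before it are placed.
            if s not in placed and all(a in placed for (a, b) in before if b == s):
                order.append(s)
                placed.add(s)
                break
    return order

steps = ["go to store", "get eggs", "get flour", "get milk", "pay", "go to kitchen"]
before = [("go to store", "get eggs"), ("go to store", "get flour"),
          ("go to store", "get milk"), ("get eggs", "pay"),
          ("get flour", "pay"), ("get milk", "pay"), ("pay", "go to kitchen")]
print(linearize(steps, before))
```

Since the three "get" steps are unordered relative to one another, several total orders are valid; this sketch simply returns the first one it finds.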
3 Explain in brief hierarchical planning. (NOV 2018)
OR
Explain hierarchical planning. (NOV 2022)
ANS • Hierarchical organization of actions
• Complex and less complex actions
• The lowest level reflects directly executable actions
• Plans are organized in a hierarchy
• Planning starts with the complex action on top
• The plan is constructed through action decomposition
• Substitute a complex action with a plan of less complex actions
• An idea that pervades almost all attempts to manage complexity.
• For example, complex software is created from a hierarchy of subroutines or object
classes; armies operate as a hierarchy of units; governments and corporations have
hierarchies of departments, subsidiaries, and branch offices.
• The key benefit of hierarchical structure is that, at each level of the hierarchy, a
computational task, military mission, or administrative function is reduced to a small
number of activities at the next lower level, so the computational cost of finding the
correct way to arrange those activities for the current problem is small.
High-level actions
Hierarchical task networks or HTN planning
- At each "level" there are only a small number of individual planning actions; the planner then
descends to lower levels to "solve these" for real.
- At higher levels, the planner ignores the "internal effects" of decompositions, but these have
to be resolved at some level.

HTN Sample: Construction Domain

Actions:
1) Buy Land: Money -> Land
2) Get Loan: Good Credit -> Money
3) Get Permit: Land -> Permit


4) Hire Builder: -> Contract
5) Construction: Permit ∧ Contract -> House Built
6) Pay Builder: Money ∧ House Built -> House

Hierarchy of actions: for example, the action "Going to Goa"
- can be expressed in terms of major actions or minor actions;
- lower-level activities would detail more precise steps for accomplishing the higher-level
tasks.

4 Write a note on mutex relations (NOV 2018)

ANS The planning graph has mutex links for both actions and literals. A mutex relation holds
between two actions at a given level if any of the following three conditions holds:
• Inconsistent effects: one action negates an effect of the other. For example, Eat(Cake) and the
persistence of Have(Cake) have inconsistent effects because they disagree on the effect
Have(Cake).
• Interference: one of the effects of one action is the negation of a precondition of the other.
For example, Eat(Cake) interferes with the persistence of Have(Cake) by negating its
precondition.
• Competing needs: one of the preconditions of one action is mutually exclusive with a
precondition of the other. For example, Bake(Cake) and Eat(Cake) are mutex because they
compete on the value of the Have(Cake) precondition.
A mutex relation holds between two literals at the same level if one is the negation of the other
or if each possible pair of actions that could achieve the two literals is mutually exclusive.
This condition is called inconsistent support.
For example, Have(Cake) and Eaten(Cake) are mutex in S1 because the only way of achieving
Have(Cake), the persistence action, is mutex with the only way of achieving Eaten(Cake), namely
Eat(Cake). In S2 the two literals are not mutex, because there are new ways of achieving them,
such as Bake(Cake) and the persistence of Eaten(Cake), that are not mutex.
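The three action-mutex conditions can be checked mechanically. Below is an illustrative sketch (the representation is ours, not from any planner): an action is a (preconditions, effects) pair of literal sets, where a literal is a (name, truth-value) tuple, and "competing needs" is simplified to direct negation between preconditions.

```python
# Check the three mutex conditions between two actions at one graph level.

def neg(lit):
    """Negate a literal, e.g. ("Have(Cake)", True) -> ("Have(Cake)", False)."""
    name, val = lit
    return (name, not val)

def actions_mutex(a, b):
    """True if actions a and b are mutex: inconsistent effects,
    interference, or (directly) competing needs."""
    pre_a, eff_a = a
    pre_b, eff_b = b
    # Inconsistent effects: one action negates an effect of the other.
    if any(neg(e) in eff_b for e in eff_a):
        return True
    # Interference: an effect of one negates a precondition of the other.
    if any(neg(e) in pre_b for e in eff_a) or any(neg(e) in pre_a for e in eff_b):
        return True
    # Competing needs: a precondition of one negates a precondition of the other.
    if any(neg(p) in pre_b for p in pre_a):
        return True
    return False

have, eaten = ("Have(Cake)", True), ("Eaten(Cake)", True)
eat = ({have}, {neg(have), eaten})            # Eat(Cake)
bake = ({neg(have)}, {have})                  # Bake(Cake)
persist_have = ({have}, {have})               # persistence (no-op) of Have(Cake)
persist_eaten = ({eaten}, {eaten})            # persistence (no-op) of Eaten(Cake)
```

With these definitions, Eat(Cake) is mutex with the persistence of Have(Cake), Bake(Cake) is mutex with Eat(Cake), but Bake(Cake) is not mutex with the persistence of Eaten(Cake), matching the cake example above.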
5 What is a semantic network? Show the semantic representation with a suitable example (NOV
2018) (NOV 2022)
OR
Explain semantic network with an example (April 2019)
OR

Write a note on semantic network. (NOV 2019)

ANS There are mainly four ways of knowledge representation. One of them is semantic network
representation.

Semantic networks are an alternative to predicate logic for knowledge representation. In
semantic networks, we can represent our knowledge in the form of graphical networks. Such a
network consists of nodes representing objects and arcs which describe the relationships
between those objects. Semantic networks can categorize objects in different forms and can
also link those objects. Semantic networks are easy to understand and can be easily extended.

This representation consists of mainly two types of relations:

a. IS-A relation (Inheritance)

b. Kind-of relation
Example: Following are some statements which we need to represent in the form of nodes and
arcs.

Statements:
a. Jerry is a cat.
b. Jerry is a mammal.
c. Jerry is owned by Priya.
d. Jerry is brown colored.
e. All mammals are animals.
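The Jerry statements above can be encoded as a tiny semantic network. This sketch (the edge tuples and relation names are our own illustration) stores each arc as a (subject, relation, object) triple and answers IS-A queries by following inheritance arcs transitively.

```python
# A toy semantic network for the Jerry example: each arc is a
# (subject, relation, object) triple.
edges = [
    ("Jerry", "is-a", "Cat"),
    ("Cat", "is-a", "Mammal"),
    ("Mammal", "is-a", "Animal"),
    ("Jerry", "owned-by", "Priya"),
    ("Jerry", "color", "Brown"),
]

def is_a(node, category, edges):
    """Follow is-a arcs transitively (inheritance)."""
    if node == category:
        return True
    return any(s == node and r == "is-a" and is_a(o, category, edges)
               for s, r, o in edges)
```

A query such as is_a("Jerry", "Animal", edges) succeeds via Cat and Mammal, even though no single arc says Jerry is an animal; this transitive lookup is exactly what the IS-A hierarchy buys us.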

6 Write a note on Event Calculus (NOV 2018)

OR
What are events? Explain their importance (NOV 2019)
OR
What are events? Explain their importance (NOV 2022)

ANS

Or refer to this answer:


Some systems need to classify events into types, others need to listen for specific events, and
some need to predict events. These requirements often involve assigning a score to an event,
and then ranking all the events according to the assigned score.

These events can be Google Calendar events, a medical alarm, dates on a dating website, or, in
the case of GenRush, events can indicate new potential clients for a company based on events
in the real world.

These event-understanding problems are classic classification and regression requirements
hiding behind the complexity of the word "event". For sequence prediction, we use
LSTM/GRU/RNN, and sometimes a DNN (e.g. when the sequence of events forms a pattern you
can graph/see). But let's focus here on the non-sequence and non-location part of event
processing: how to turn an event into a set of features that an AI can use.

So, we have a whole lot of events, and we want the AI to notify human users when specific
events happen. Immediately a classic false-positive versus false-negative conundrum emerges;
a lot of this is seen in hospital/medical notification systems.
If the AI notifies incorrectly too often (false positives) then the users will ignore the notifications,
whereas if the system misses key events, then users will think, and rightly so, that the AI is not
paying attention.
7 Write a PDDL description of an air cargo transportation planning problem (April 2019)
OR
Explain the Planning Domain Definition Language description for an Air Cargo planning problem.
(NOV 2022)
ANS PDDL
Each state is represented as a conjunction of fluents that are ground, functionless atoms.
For example, a state in a package delivery problem might be
At(Truck1, Melbourne) ∧ At(Truck2, Sydney).
Actions are described by a set of action schemas that implicitly define the ACTIONS(s)
and RESULT(s, a) functions needed to do a problem-solving search.

A set of ground (variable-free) actions can be represented by a single action schema.

The schema is a lifted representation: it lifts the level of reasoning from propositional
logic to a restricted subset of first-order logic. For example, here is an action schema for
flying a plane from one location to another:
Action(Fly(p, from, to),
PRECOND: At(p, from) ∧ Plane(p) ∧ Airport(from) ∧ Airport(to)
EFFECT: ¬At(p, from) ∧ At(p, to))
The schema consists of the action name, a list of all the variables used in the schema, a
precondition and an effect.
Air cargo transport
The problem can be defined with three actions: Load, Unload, and Fly.
Load: this action is taken to load cargo.
Unload: this action is taken to unload the cargo when it reaches its destination.
Fly: this action is taken to fly from one place to another.
The actions affect two predicates:
In(c, p) means that cargo c is inside plane p.
At(x, a) means that object x (either plane or cargo) is at airport a.
When a plane flies from one airport to another, all the cargo inside the plane goes with it. In
first-order logic it would be easy to quantify over all objects that are inside the plane. But basic
PDDL does not have a universal quantifier, so we need a different solution.
The approach we use is to say that a piece of cargo ceases to be At anywhere when it is In a
plane; the cargo only becomes At the new airport when it is unloaded. So At really means
"available for use at a given location."
The following plan is a solution to the problem:
[Load(C1, P1, SFO), Fly(P1, SFO, JFK), Unload(C1, P1, JFK),
Load(C2, P2, JFK), Fly(P2, JFK, SFO), Unload(C2, P2, SFO)]
Some details can be ignored because they do not cause any problems in planning.
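A ground action schema of this kind can be sketched directly in code. This is an illustrative encoding (the literal strings and the Action tuple are our own, not PDDL syntax): an action is applicable when its precondition literals are a subset of the state, and the result state is computed by deleting the negated effects and adding the positive ones.

```python
# Sketch of ground STRIPS/PDDL-style actions and state progression,
# following the Load/Fly steps of the air-cargo example.
from typing import NamedTuple, FrozenSet

class Action(NamedTuple):
    name: str
    precond: FrozenSet[str]
    add: FrozenSet[str]
    delete: FrozenSet[str]

def applicable(state, action):
    """An action is applicable when its preconditions hold in the state."""
    return action.precond <= state

def result(state, action):
    """RESULT(s, a): remove the delete list, then add the add list."""
    return (state - action.delete) | action.add

load = Action("Load(C1,P1,SFO)",
              frozenset({"At(C1,SFO)", "At(P1,SFO)"}),
              frozenset({"In(C1,P1)"}),
              frozenset({"At(C1,SFO)"}))      # cargo ceases to be At once In
fly = Action("Fly(P1,SFO,JFK)",
             frozenset({"At(P1,SFO)"}),
             frozenset({"At(P1,JFK)"}),
             frozenset({"At(P1,SFO)"}))

s0 = frozenset({"At(C1,SFO)", "At(P1,SFO)"})
s1 = result(s0, load)
s2 = result(s1, fly)
```

Note how the delete list of Load implements the convention described above: the cargo stops being At the airport the moment it is In the plane.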
8 Explain the GRAPHPLAN algorithm (April 2019)
OR
Write a note on planning graph. (NOV 2019) (NOV 2022)
ANS A special data structure called a planning graph can be used to give better heuristic
estimates. These heuristics can be applied to any of the search techniques. We can also search
for a solution over the space formed by the planning graph, using an algorithm called GRAPHPLAN.
A planning problem asks if we can reach a goal state from the initial state.
Suppose we are given a tree of all possible actions from the initial state to successor states, and
their successors, and so on. If we indexed this tree appropriately, we could answer the planning
question "can we reach state G from state S0?" immediately, just by looking it up.
A planning graph is a polynomial-size approximation to this tree that can be constructed quickly.
The planning graph can't answer definitively whether G is reachable from S0, but it can estimate
how many steps it takes to reach G.
A planning graph is a directed graph organized into levels: first a level S0 for the initial state,
consisting of nodes representing each fluent that holds in S0; then a level A0 consisting of nodes
for each ground action that might be applicable in S0; then alternating levels Si followed by Ai,
until we reach a termination condition.

Si contains all the literals that could hold at time i, depending on the actions executed at
preceding time steps. If it is possible that either P or ¬P could hold, then both will be
represented in Si.
Ai contains all the actions that could have their preconditions satisfied at time i.
Planning graphs work only for propositional planning problems, i.e. ones with no variables.
The "have cake and eat cake too" problem:

Each action at level Ai is connected to its preconditions at Si and its effects at Si+1.
- A literal appears because an action caused it, but we also want to say that a literal can
persist if no action negates it. This is represented by a persistence action (sometimes called a
no-op).
- For every literal C, we add to the problem a persistence action with precondition C and effect
C.
- Level A0 in the figure shows one "real" action, Eat(Cake), along with two persistence actions
drawn as small square boxes.

Level A0 contains all the actions that could occur in state S0, but just as important, it records
conflicts between actions that would prevent them from occurring together.
The gray lines in the figure indicate mutual exclusion (or mutex) links.
For example, Eat(Cake) is mutually exclusive with the persistence of either Have(Cake)
or ¬Eaten(Cake).
We shall see shortly how mutex links are computed.
Level S1 contains all the literals that could result from picking any subset of the actions in A0, as
well as mutex links (gray lines) indicating literals that could not appear together, regardless of
the choice of actions.
For example, Have(Cake) and Eaten(Cake) are mutex: depending on the choice of actions in A0,
either, but not both, could be the result. In other words, S1 represents a belief state: a set of
possible states.
The members of this set are all subsets of the literals such that there is no mutex link between
any members of the subset.
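Building one action/literal level of the planning graph can be sketched for the cake domain. This is a simplified illustration (our own representation, omitting mutex bookkeeping): literals are strings with "~" for negation, and persistence is modeled by carrying every literal of Si forward into Si+1.

```python
# Toy expansion of one planning-graph level for the cake domain.
def neg(l):
    return l[1:] if l.startswith("~") else "~" + l

# action name -> (preconditions, effects)
ACTIONS = {
    "Eat(Cake)":  ({"Have(Cake)"}, {"~Have(Cake)", "Eaten(Cake)"}),
    "Bake(Cake)": ({"~Have(Cake)"}, {"Have(Cake)"}),
}

def expand(literals):
    """Given literal level Si, return (applicable action level Ai,
    literal level Si+1)."""
    level_actions = {name: a for name, a in ACTIONS.items()
                     if a[0] <= literals}
    # Persistence (no-op) actions keep every literal of Si available in Si+1.
    next_literals = set(literals)
    for _, effects in level_actions.values():
        next_literals |= effects
    return level_actions, next_literals

s0 = {"Have(Cake)", "~Eaten(Cake)"}
a0, s1 = expand(s0)
```

As described above, S1 ends up containing both Have(Cake) and ~Have(Cake): each could hold, depending on whether Eat(Cake) or the persistence action was chosen at A0; the mutex links (not modeled here) record that they cannot hold together.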


9 List various classical planning approaches. Explain any one. (April 2019)

ANS The most popular and effective approaches to fully automated planning are:
Translating to a Boolean satisfiability (SAT) problem
Forward state-space search with carefully crafted heuristics
Search using a planning graph
Classical planning as Boolean satisfiability
Propositionalize the actions: replace each action schema with a set of ground actions
formed by substituting constants for each of the variables. These ground actions are not
part of the translation but will be used in subsequent steps.
Define the initial state: assert F^0 for every fluent F in the problem's initial state, and ¬F^0
for every fluent not mentioned in the initial state.
Propositionalize the goal: for every variable in the goal, replace the literals that contain
the variable with a disjunction over constants. For example, the goal of having block A
on another block, On(A, x) ∧ Block(x), in a world with objects A, B and C would be
replaced by the goal
(On(A,A) ∧ Block(A)) ∨ (On(A,B) ∧ Block(B)) ∨ (On(A,C) ∧ Block(C))
Add successor-state axioms: for each fluent F, add an axiom of the form
F^(t+1) ⇔ ActionCausesF^t ∨ (F^t ∧ ¬ActionCausesNotF^t),
where ActionCausesF is a disjunction of all the ground actions that have F in their add list, and
ActionCausesNotF is a disjunction of all the ground actions that have F in their delete list.
Add precondition axioms: for each ground action A, add the axiom A^t ⇒ PRE(A)^t; that is, if an
action is taken at time t, then the preconditions must have been true.
Add action exclusion axioms: say that every action is distinct from every other action.
The resulting translation is in a form that we can hand to SATPLAN to find a solution.
Planning as first-order logical deduction: situation calculus
First-order logic replaces the notion of linear time with a notion of branching situations, using a
representation called situation calculus that works like this:
The initial state is called a situation. If s is a situation and a is an action, then RESULT(s, a) is also
a situation. There are no other situations. Thus, a situation corresponds to a sequence, or
history, of actions.
A situation is the result of applying the actions, but note that two situations are the same only if
their start and actions are the same: (RESULT(s, a) = RESULT(s', a')) ⇔ (s = s' ∧ a = a').
A function or relation that can vary from one situation to the next is a fluent.
By convention, the situation s is always the last argument to the fluent; for example, At(x, l, s) is
a relational fluent that is true when object x is at location l in situation s, and Location is a
functional fluent such that Location(x, s) = l holds in the same situations as At(x, l, s).
Each action's preconditions are described with a possibility axiom that says when the
action can be taken. It has the form Φ(s) ⇒ Poss(a, s), where Φ(s) is some formula
involving s that describes the preconditions. An example from the wumpus world says
that it is possible to shoot if the agent is alive and has an arrow:
Alive(Agent, s) ∧ Have(Agent, Arrow, s) ⇒ Poss(Shoot, s)
Each fluent is described with a successor-state axiom that says what happens to the
fluent, depending on what action is taken. This is like the approach we took for
propositional logic. The axiom has the form
Action is possible ⇒


(Fluent is true in result state ⇔ Action's effect made it true

∨ It was true before and action left it alone)
For example, the axiom for the relational fluent Holding says that the agent is holding
some gold g after executing a possible action if and only if the action was a Grab of g or
if the agent was already holding g and the action was not releasing it:
Poss(a, s) ⇒
(Holding(Agent, g, Result(a, s)) ⇔
a = Grab(g) ∨ (Holding(Agent, g, s) ∧ a ≠ Release(g)))
We need unique action axioms so that the agent can deduce that, for example, a
≠ Release(g). For each distinct pair of action names Ai and Aj we have an axiom that
says the actions are different:
Ai(x, . . .) ≠ Aj(y, . . .)
And for each action name Ai we have an axiom that says two uses of that action name
are equal if and only if all their arguments are equal:
Ai(x1, . . . , xn) = Ai(y1, . . . , yn) ⇔ x1 = y1 ∧ . . . ∧ xn = yn.
A solution is a situation (and hence a sequence of actions) that satisfies the goal.
Planning as constraint satisfaction
Constraint satisfaction has a lot in common with Boolean satisfiability.
CSP techniques are effective for scheduling problems, so it is not surprising that it is
possible to encode a bounded planning problem (i.e., the problem of finding a plan of
length k) as a constraint satisfaction problem (CSP).
The encoding is like the encoding to a SAT problem (Section 10.4.1), with one important
simplification: at each time step we need only a single variable, Action_t, whose domain
is the set of possible actions.
Planning as refinement of partially ordered plans
All the approaches we have seen so far construct totally ordered plans consisting of
strictly linear sequences of actions.
This representation ignores the fact that many subproblems are independent.
A solution to an air cargo problem consists of a totally ordered sequence of actions, yet
if 30 packages are being loaded onto one plane in one airport and 50 packages are
being loaded onto another plane at another airport, it seems pointless to come up with a
strict linear ordering of 80 load actions; the two subsets of actions should be thought of
independently.

10 Explain the following terms:

i. Circumscription
ii. Default logic (April 2019)
ANS Circumscription: Circumscription is a rule of conjecture that allows you to jump to the
conclusion that the objects you can show possess a certain property, p, are in fact all the
objects that possess that property.

Circumscription can also cope with default reasoning.

Suppose we know: bird(tweety)


∀x: penguin(x) → bird(x)

∀x: penguin(x) → ¬flies(x)

and we wish to add the fact that, typically, birds fly.

In circumscription this phrase would be stated as:

"A bird will fly if it is not abnormal"

and can thus be represented by:
∀x: bird(x) ∧ ¬abnormal(x) → flies(x).
However, this is not sufficient.

We cannot conclude
flies(tweety)
since we cannot prove
¬abnormal(tweety).

This is where we apply circumscription: we circumscribe (minimize) the predicate abnormal, so
that ¬abnormal(tweety) may be assumed and flies(tweety) follows.

Default Logic:
Default logic is a non-monotonic logic proposed by Raymond Reiter to formalize reasoning with
default assumptions.

Default logic can express facts like "by default, something is true"; by contrast, standard logic
can only express that something is true or that something is false. This is a problem because
reasoning often involves facts that are true in the majority of cases but not always. A classical
example is: "birds typically fly". This rule can be expressed in standard logic either by "all birds
fly", which is inconsistent with the fact that penguins do not fly, or by "all birds that are not
penguins and not ostriches and ... fly", which requires all exceptions to the rule to be specified.
Default logic aims at formalizing inference rules like this one without explicitly mentioning all
their exceptions.
A default theory is a pair (W, D). W is a set of logical formulas, called the background theory, that
formalize the facts that are known for sure. D is a set of default rules, each one being of the
form:

Prerequisite : Justification_1, ..., Justification_n / Conclusion

According to this default, if we believe that Prerequisite is true, and each Justification_i, for
i = 1, ..., n, is consistent with our current beliefs, we are led to believe that Conclusion is true.

The logical formulae in W and all formulae in a default were originally assumed to be first-order
logic formulae, but they can potentially be formulae in an arbitrary formal logic. The case in
which they are formulae in propositional logic is one of the most studied.

11 Write a short note on description logics (April 2019)


ANS Description logics (DLs) are a family of formal knowledge representation languages. Many
DLs are more expressive than propositional logic but less expressive than first-order logic. In
contrast to the latter, the core reasoning problems for DLs are (usually) decidable, and efficient
decision procedures have been designed and implemented for these problems. There are
general, spatial, temporal, spatiotemporal, and fuzzy description logics, and each description
logic features a different balance between expressive power and reasoning complexity by
supporting different sets of mathematical constructors.

DLs are used in artificial intelligence to describe and reason about the relevant concepts of an
application domain (known as terminological knowledge). They are of particular importance in
providing a logical formalism for ontologies and the Semantic Web: the Web Ontology Language
(OWL) and its profiles are based on DLs. The most notable application of DLs and OWL is in
biomedical informatics, where DL assists in the codification of biomedical knowledge.
12 Explain the blocks world problem for the following start state and end state. (NOV 2019)
ANS

What is the Blocks World Problem?

This is how the problem goes: there is a table on which some blocks are placed. Some blocks
may or may not be stacked on other blocks. We have a robot arm to pick up or put down the
blocks. The robot arm can move only one block at a time, and no other block should be stacked
on top of the block which is to be moved by the robot arm.

Our aim is to change the configuration of the blocks from the Initial State to the Goal State,
both of which have been specified in the diagram above.

Given below is the list of predicates as well as their intended meaning:

ON(A,B) : Block A is on B
ONTABLE(A) : A is on the table
CLEAR(A) : Nothing is on top of A
HOLDING(A) : Arm is holding A
ARMEMPTY : Arm is holding nothing
Using these predicates, we represent the Initial State and the Goal State in our example like
this:

Initial State: ON(B,A) ∧ ONTABLE(A) ∧ ONTABLE(C) ∧ ONTABLE(D) ∧ CLEAR(B) ∧ CLEAR(C) ∧
CLEAR(D) ∧ ARMEMPTY


Goal State: ON(C,A) ∧ ON(B,D) ∧ ONTABLE(A) ∧ ONTABLE(D) ∧ CLEAR(B) ∧ CLEAR(C) ∧
ARMEMPTY

Operations performed by the robot arm

The robot arm can perform 4 operations:

STACK(X,Y) : stack Block X on Block Y
UNSTACK(X,Y) : pick up Block X which is on top of Block Y
PICKUP(X) : pick up Block X which is on top of the table
PUTDOWN(X) : put Block X on the table

All four operations have certain preconditions which need to be satisfied before they can be
performed. These preconditions are represented in the form of predicates.

The effect of these operations is represented using two lists, ADD and DELETE. The DELETE list
contains the predicates which will cease to be true once the operation is performed. The ADD
list, on the other hand, contains the predicates which will become true once the operation is
performed.
The precondition, ADD and DELETE lists for each operation are rather intuitive and have been
listed below.
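One such operation can be sketched in code. This is an illustrative STRIPS-style encoding (the dictionary layout is ours; the precondition/ADD/DELETE contents for UNSTACK follow the standard blocks-world formulation): a state is a set of predicate strings, and applying an operation removes the DELETE list and adds the ADD list.

```python
# UNSTACK(X,Y) as a STRIPS-style operator with precondition, ADD and
# DELETE lists, applied to the Initial State from the example above.

def unstack(x, y):
    return {
        "precond": {f"ON({x},{y})", f"CLEAR({x})", "ARMEMPTY"},
        "add":     {f"HOLDING({x})", f"CLEAR({y})"},
        "delete":  {f"ON({x},{y})", "ARMEMPTY"},
    }

def apply_op(state, op):
    """Apply an operation: check preconditions, then DELETE and ADD."""
    assert op["precond"] <= state, "preconditions not satisfied"
    return (state - op["delete"]) | op["add"]

initial = {"ON(B,A)", "ONTABLE(A)", "ONTABLE(C)", "ONTABLE(D)",
           "CLEAR(B)", "CLEAR(C)", "CLEAR(D)", "ARMEMPTY"}

state = apply_op(initial, unstack("B", "A"))
```

After UNSTACK(B,A), the arm holds B, A becomes clear, and the arm is no longer empty, exactly as the ADD and DELETE lists dictate.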


13 Write a note on Truth Maintenance Systems (NOV 2019)

ANS Truth Maintenance Systems (TMS), also called Reason Maintenance Systems, are used
within problem-solving systems, in conjunction with inference engines (IE) such as rule-based
inference systems, to manage as a dependency network the inference engine's beliefs in given
sentences.

A TMS is intended to satisfy a number of goals:

Provide justifications for conclusions
When a problem-solving system gives an answer to a user's query, an explanation of the answer
is usually required. If the advice to a stockbroker is to invest millions of dollars, an explanation
of the reasons for that advice can help the broker reach a reasonable decision.
An explanation can be constructed by the IE by tracing the justification of the assertion.
Recognise inconsistencies
The IE may tell the TMS that some sentences are contradictory. Then, if on the basis of other IE
commands and of inferences we find that all those sentences are believed true, the TMS
reports to the IE that a contradiction has arisen. For instance, in the ABC example the statement
that either Abbott, or Babbitt, or Cabot is guilty, together with the statements that Abbott is not
guilty, Babbitt is not guilty, and Cabot is not guilty, forms a contradiction.
The IE can eliminate an inconsistency by determining the assumptions used and changing them
appropriately, or by presenting the contradictory set of sentences to the users and asking them
to choose which sentence(s) to retract.
Support default reasoning
In many situations we want, in the absence of firmer knowledge, to reason from default
assumptions. If Tweety is a bird, until told otherwise, we will assume that Tweety flies and use
as justification the fact that Tweety is a bird and the assumption that birds fly.
Remember derivations computed previously
In the process of determining what is responsible for a network problem, we may have derived,
while examining the performance of a name server, that Ms. Doe is an expert on e-mail systems.
That conclusion will not need to be derived again when the name server ceases to be the
potential culprit for the problem and we examine the routers instead.
Support dependency-driven backtracking
The justification of a sentence, as maintained by the TMS, provides the natural indication of
what assumptions need to be changed if we want to invalidate that sentence.


14 Describe the Forward (Progression) State-Space Search algorithm with an example (NOV 2022)
ANS Search from the initial state through the space of states, looking for a goal.
1) Forward search is prone to exploring irrelevant actions.
Consider the noble task of buying a copy of AI: A Modern Approach from an online bookseller.
Suppose there is an action schema Buy(isbn) with effect Own(isbn).
ISBNs are 10 digits, so this action schema represents 10 billion ground actions. An uninformed
forward-search algorithm would have to start enumerating these 10 billion actions to find one
that leads to the goal.
2) Planning problems often have large state spaces.
Consider an air cargo problem with 10 airports, where each airport has 5 planes and 20 pieces
of cargo. The goal is to move all the cargo at airport A to airport B.
There is a simple solution to the problem: load the 20 pieces of cargo into one of the planes at
A, fly the plane to B, and unload the cargo.
Finding the solution can be difficult because the average branching factor is huge: each of the
50 planes can fly to 9 other airports, and each of the 200 packages can be either unloaded (if
it is loaded) or loaded into any plane at its airport (if it is unloaded).
So, in any state there is a minimum of 450 actions (when all the packages are at airports with no
planes) and a maximum of 10,450 (when all packages and planes are at the same airport).
On average, let's say there are about 2000 possible actions per state, so the search graph up to
the depth of the obvious solution has about 2000^41 nodes.
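Forward search itself can be sketched as a breadth-first search over STRIPS-style states. The toy domain below is our own (one piece of cargo, one plane, two airports; the literal strings and action table are illustrative), shrunk so the search terminates instantly, unlike the huge instances discussed above.

```python
# Minimal forward (progression) state-space search: BFS over states.
from collections import deque

# action name -> (preconditions, add list, delete list)
ACTIONS = {
    "Load":   ({"At(C,A)", "At(P,A)"}, {"In(C,P)"}, {"At(C,A)"}),
    "Fly":    ({"At(P,A)"}, {"At(P,B)"}, {"At(P,A)"}),
    "Unload": ({"In(C,P)", "At(P,B)"}, {"At(C,B)"}, {"In(C,P)"}),
}

def forward_search(initial, goal):
    """Return a list of action names reaching every goal literal, or None."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for name, (pre, add, dele) in ACTIONS.items():
            if pre <= state:                      # action applicable?
                nxt = frozenset((state - dele) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

plan = forward_search({"At(C,A)", "At(P,A)"}, {"At(C,B)"})
```

Even in this tiny domain, BFS explores the irrelevant Fly-first branch (which dead-ends with the cargo stranded at A) before finding Load, Fly, Unload; in large state spaces that wasted effort is exactly what the heuristics mentioned elsewhere in this unit are meant to avoid.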

15 Write a short note on the Sensorless Planning Problem (NOV 2022)

ANS Sensorless problems
If the agent has no sensors, then the agent cannot know its current state, and hence it has to
construct action sequences that are guaranteed to reach the goal regardless of its initial state.

Since the agent doesn't know where it is, we can use belief states (sets of states that the agent
might be in). An example is the deterministic, static, single-agent vacuum world.
16 Explain two different types of algorithms for planning as a state-space search (APR 2023)
ANS Forward (progression) state-space search
Search from the initial state through the space of states, looking for a goal.
1) Forward search is prone to exploring irrelevant actions.
Consider the noble task of buying a copy of AI: A Modern Approach from an online bookseller.
Suppose there is an action schema Buy(isbn) with effect Own(isbn).
ISBNs are 10 digits, so this action schema represents 10 billion ground actions. An uninformed
forward-search algorithm would have to start enumerating these 10 billion actions to find one
that leads to the goal.
2) Planning problems often have large state spaces.
Consider an air cargo problem with 10 airports, where each airport has 5 planes and 20 pieces
of cargo. The goal is to move all the cargo at airport A to airport B.
There is a simple solution to the problem: load the 20 pieces of cargo into one of the planes at
A, fly the plane to B, and unload the cargo.
Finding the solution can be difficult because the average branching factor is huge: each of the
50 planes can fly to 9 other airports, and each of the 200 packages can be either unloaded (if


it is loaded) or loaded into any plane at its airport (if it is unloaded).

So, in any state there is a minimum of 450 actions (when all the packages are at airports with no
planes) and a maximum of 10,450 (when all packages and planes are at the same airport).
On average, let's say there are about 2000 possible actions per state, so the search graph up to
the depth of the obvious solution has about 2000^41 nodes.

Backward (regression) relevant-states search

Search backward from the goal, looking for the initial state.
It is called relevant-states search because we only consider actions that are relevant to the goal
(or current state). As in belief-state search, there is a set of relevant states to consider at each
step, not just a single state.
We start with the goal, which is a conjunction of literals forming a description of a set of states;
for example, the goal ¬Poor ∧ Famous describes those states in which Poor is false, Famous is
true, and any other fluent can have any value.
If there are n ground fluents in a domain, then there are 2^n ground states (each fluent can be
true or false), but 3^n descriptions of sets of goal states (each fluent can be positive, negative, or
not mentioned).
Backward search works only when we know how to regress from a state description to the
predecessor state description.
For example, it is hard to search backwards for a solution to the n-queens problem because
there is no easy way to describe the states that are one move away from the goal.
The PDDL representation was designed to make it easy to regress actions: if a domain can be
expressed in PDDL, then we can do regression search on it.

To get the full advantage of backward search, we need to deal with partially uninstantiated
actions and states, not just ground ones. For example, suppose the goal is to deliver a specific
piece of cargo to SFO: At(C2, SFO).
That suggests the action Unload(C2, p', SFO):
Action(Unload(C2, p', SFO),
PRECOND: In(C2, p') ∧ At(p', SFO) ∧ Cargo(C2) ∧ Plane(p') ∧ Airport(SFO)
EFFECT: At(C2, SFO) ∧ ¬In(C2, p'))
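The regression step at the heart of backward search can be sketched for the ground case. This illustration (our own simplified representation, with the plane variable p' instantiated to P2 since the sketch handles only ground actions) computes the predecessor subgoal as (goal − ADD(action)) ∪ PRECOND(action).

```python
# Regressing a goal through a ground action: the predecessor subgoal is
# what must hold *before* the action so that the goal holds after it.
def regress(goal, precond, add):
    return (goal - add) | precond

goal = {"At(C2,SFO)"}
# Unload(C2,P2,SFO), with static type literals (Cargo, Plane, Airport) omitted.
unload_pre = {"In(C2,P2)", "At(P2,SFO)"}
unload_add = {"At(C2,SFO)"}

subgoal = regress(goal, unload_pre, unload_add)
```

Regressing At(C2,SFO) through Unload yields the subgoal that the cargo is in the plane and the plane is at SFO; backward search then keeps regressing until a subgoal is satisfied by the initial state.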

17 Explain various types of planning methods for handling indeterminacy (APR 2023)
ANS • Planning so far does not specify how long an action takes or when an action
occurs, except to say that it is before or after another action.
- When used in the real world, such as in scheduling Hubble Space Telescope
observations, time is also a resource/constraint.
• Job shop scheduling: time is essential.
• An example (Figures 12.1 and 12.2):
- A partial-order plan (with durations)
- Critical path (or the weakest link)
- Slack = LS (latest start) - ES (earliest start)
- Schedule = plan + time (durations for actions)
• Scheduling with resource constraints
- When certain parts are not available, waiting time should be minimized


 The difference depending on which job is completed first, and the possible change shown in Fig 12.4
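The critical-path quantities (ES, LS, and Slack = LS − ES) can be computed with one forward and one backward pass over a partial-order plan. The job names and durations below are a hypothetical two-branch assembly example; the sketch assumes actions are listed in topological order:

```python
def schedule(durations, deps):
    """Forward pass gives earliest start (ES); backward pass gives
    latest start (LS); slack = LS - ES.  Actions with zero slack lie
    on the critical path.  `deps[a]` lists the predecessors of a."""
    es = {}
    for a in durations:  # assumes topological order
        es[a] = max((es[p] + durations[p] for p in deps[a]), default=0)
    makespan = max(es[a] + durations[a] for a in durations)
    succs = {a: [b for b in durations if a in deps[b]] for a in durations}
    ls = {}
    for a in reversed(list(durations)):
        ls[a] = min((ls[s] - durations[a] for s in succs[a]),
                    default=makespan - durations[a])
    return {a: (es[a], ls[a], ls[a] - es[a]) for a in durations}

# Hypothetical jobs: two independent branches ending in an inspection.
durations = {"AddEngine1": 30, "AddWheels1": 30, "Inspect1": 10,
             "AddEngine2": 60, "AddWheels2": 15, "Inspect2": 10}
deps = {"AddEngine1": [], "AddWheels1": ["AddEngine1"],
        "Inspect1": ["AddWheels1"],
        "AddEngine2": [], "AddWheels2": ["AddEngine2"],
        "Inspect2": ["AddWheels2"]}
for a, (e, l, s) in schedule(durations, deps).items():
    print(a, e, l, s)   # branch 2 has zero slack: it is the critical path
```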


• Init: a chair, a table, cans of paint of unknown color
Goal: the chair and table have the same color
• Different types of planning
 Classic planning: fully observable?
 Sensorless planning: coercing
 Conditional planning with sensing: (1) already the same, (2) one painted
with the available color, (3) paint both
 Replanning: paint, check the effect, replan for missing spots
 Continuous planning: paint, can stop for unexpected events, continue
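Sensorless planning's idea of coercion can be illustrated with the paint example: a belief state holds every color assignment the agent considers possible, and painting both pieces from the same can collapses the whole belief state into goal states. A minimal sketch (the two-color set is an assumption for illustration):

```python
from itertools import product

colors = ["red", "blue"]

# Sensorless belief state: every possible (chair, table) color pair,
# since the agent cannot observe which one actually holds.
belief = set(product(colors, colors))

def paint_both(state, can_color):
    # Painting both the chair and the table from the same can
    # coerces both into that color, whatever the prior state was.
    return (can_color, can_color)

new_belief = {paint_both(s, "red") for s in belief}
print(new_belief)   # a single state: chair and table match in every world
```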

18 Explain multi-agent planning with its different types of strategies (APR 2023)
ANS • Multiagent planning problem: When there are multiple agents in the
environment, each agent faces a multiagent planning problem in which it tries
to achieve its own goals with the help or hindrance of others.
• Multieffector planning: An agent with multiple effectors that can operate
concurrently—for example, a human who can type and speak at the same
time—needs to do multieffector planning to manage each effector while
handling positive and negative interactions among the effectors.
• Multibody planning: When the effectors are physically decoupled into detached
units—as in a fleet of delivery robots in a factory—multieffector planning
becomes multibody planning; for example, a fleet/squadron of reconnaissance
robots that are sometimes out of communications range.
• One is in charge of all decisions…

For example, a delivery company may do centralized, offline planning for the routes of
its trucks and planes each day, but leave some aspects open for autonomous decisions
by drivers and pilots, who can respond individually to traffic and weather situations.
Planning with multiple simultaneous actions
The terminology is multiactor settings.
We merge aspects of the multieffector, multibody, and multiagent paradigms, then
consider issues related to transition models, correctness of plans, and the
efficiency/complexity of planning algorithms.
Planning with multiple agents: Cooperation and coordination
Each agent formulates its own plan, but based on shared goals and a shared knowledge
base.
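One core issue in planning with multiple simultaneous actions is whether two concurrent actions interfere. A minimal sketch of such a compatibility check, using hypothetical delivery-robot moves over numbered squares (the predicate names are illustrative assumptions):

```python
def compatible(a1, a2):
    """Two concurrent actions do not interfere if neither deletes a
    precondition or an add-effect of the other - a common condition
    for forming a correct joint action."""
    return not (a1["del"] & (a2["pre"] | a2["add"]) or
                a2["del"] & (a1["pre"] | a1["add"]))

# Hypothetical moves for robots A and B between numbered squares:
a_to_2 = {"pre": {"At(A,1)", "Free(2)"},
          "add": {"At(A,2)"},
          "del": {"At(A,1)", "Free(2)"}}
b_to_2 = {"pre": {"At(B,3)", "Free(2)"},
          "add": {"At(B,2)"},
          "del": {"At(B,3)", "Free(2)"}}
b_to_4 = {"pre": {"At(B,3)", "Free(4)"},
          "add": {"At(B,4)"},
          "del": {"At(B,3)", "Free(4)"}}

print(compatible(a_to_2, b_to_2))  # False: both robots claim square 2
print(compatible(a_to_2, b_to_4))  # True: independent moves
```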

19 Write short notes on:- (APR 2023)


a. Categories and Objects
b. Mental Events and Mental Objects
ANS a. Categories and Objects
The organization of objects into categories is a vital part of knowledge representation. Although
interaction with the world takes place at the level of individual objects, much reasoning takes
place at the level of categories.


b. Mental Events and Mental Objects


Mental events and mental objects are two concepts in philosophy that relate to the workings of
the human mind and consciousness.

Mental events refer to any experience or occurrence that takes place within an individual's
mind. This includes sensations, perceptions, thoughts, emotions, and other conscious
experiences. Mental events are subjective and can only be directly observed by the individual
experiencing them.

Mental objects, on the other hand, are the entities that mental events are directed at or about.
Mental objects can include anything that can be thought about, such as ideas, beliefs,
memories, or perceptions of external objects. They are the content or subject matter of mental
events.

For example, when you read a sentence, the mental event might be the perception of the words
on the page, while the mental object might be the meaning that you derive from those words.
20 What are reasoning systems for categories? Explain Semantic Nets and mention their advantages
and disadvantages (APR 2023)
ANS Reasoning is the mental process of deriving logical conclusions and making predictions from
available knowledge, facts, and beliefs. Or we can say, "Reasoning is a way to infer facts from
existing data." It is a general process of thinking rationally to find valid conclusions.

In artificial intelligence, reasoning is essential so that the machine can also think rationally
like a human brain, and can perform like a human.
Types of Reasoning
In artificial intelligence, reasoning can be divided into the following categories:

o Deductive reasoning
o Inductive reasoning
o Abductive reasoning
o Common Sense Reasoning
o Monotonic Reasoning
o Non-monotonic Reasoning

Drawbacks of Semantic representation:

Semantic networks take more computational time at runtime, as we need to traverse the
complete network tree to answer some questions. In the worst case, we may find after
traversing the entire tree that the solution does not exist in the
network.
Semantic networks try to model human-like memory (which has on the order of 10^15 neurons and links) to
store information, but in practice it is not possible to build such a vast semantic network.
These types of representations are inadequate, as they do not have any equivalent quantifier,
e.g., for all, for some, none, etc.
Semantic networks do not have any standard definition for the link names.


These networks are not intelligent and depend on the creator of the system.
Advantages of Semantic networks:
Semantic networks are a natural representation of knowledge.
Semantic networks convey meaning in a transparent manner.
These networks are simple and easily understandable.
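A semantic network with isa links and property inheritance can be sketched in a few lines; the traversal in `lookup` also illustrates why answering queries can take time proportional to the depth of the network. The Animal/Bird/Penguin hierarchy is a standard illustrative example, not taken from the text:

```python
class SemanticNet:
    """Minimal semantic network: nodes linked by 'isa' edges, with
    property inheritance resolved by walking up the hierarchy."""
    def __init__(self):
        self.isa = {}     # category -> parent category
        self.props = {}   # category -> {property: value}

    def add(self, cat, parent=None, **props):
        if parent:
            self.isa[cat] = parent
        self.props.setdefault(cat, {}).update(props)

    def lookup(self, cat, prop):
        # Walk the isa links until the property is found or the
        # hierarchy is exhausted - this traversal is what makes
        # queries on large networks slow.
        while cat is not None:
            if prop in self.props.get(cat, {}):
                return self.props[cat][prop]
            cat = self.isa.get(cat)
        return None

net = SemanticNet()
net.add("Animal", legs=4)
net.add("Bird", "Animal", legs=2, flies=True)
net.add("Penguin", "Bird", flies=False)   # exception overrides default

print(net.lookup("Penguin", "flies"))  # False (local override)
print(net.lookup("Penguin", "legs"))   # 2 (inherited from Bird)
```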
