The Shikshak Tyit Sem 5 Artificial Intelligence Question Papers Solution N18 A19 N19 N22 A23
AI Lectures playlist
• https://www.youtube.com/watch?v=RK_J8ir1UdU&list=PLG0jPn7yVt507BWEgSI72Nv3irA6RDYga
Unit 1
1. What is Artificial Intelligence? State its applications. (NOV 2018)
OR
What is Artificial Intelligence? Explain with example. (APR 2019)
OR
Elaborate artificial intelligence with suitable example along with its applications. (NOV 2019)
OR
Explain any five different applications of Artificial Intelligence. (APR 2023)
ANS Artificial Intelligence (AI) is the field concerned with building agents that perceive their environment and act intelligently. Definitions of AI vary along two dimensions: some are concerned with thought processes and reasoning, whereas others address behavior; and some measure success in terms of fidelity to human performance, whereas others measure against an ideal performance measure, called rationality. A system is rational if it does the "right thing," given what it knows. Historically, all four approaches to AI have been followed, each by different people with different methods.
Applications of AI
Robotic vehicles: A driverless robotic car named STANLEY sped through the rough terrain of the Mojave Desert at 22 mph, finishing the 132-mile course first to win the 2005 DARPA Grand Challenge. STANLEY is a Volkswagen Touareg outfitted with cameras, radar, and laser rangefinders to sense the environment and onboard software to command the steering, braking, and acceleration (Thrun, 2006). The following year CMU's BOSS won the Urban Challenge, safely driving in traffic through the streets of a closed Air Force base, obeying traffic rules and avoiding pedestrians and other vehicles.
Speech recognition: A traveler calling United Airlines to book a flight can have the entire conversation guided by an automated speech recognition and dialog management system.
Autonomous planning and scheduling: A hundred million miles from Earth, NASA's Remote Agent program became the first on-board autonomous planning program to control the scheduling of operations for a spacecraft (Jonsson et al., 2000). REMOTE AGENT generated plans from high-level goals specified from the ground and monitored the execution of those plans, detecting, diagnosing, and recovering from problems as they occurred. Successor program MAPGEN (Al-Chang et al., 2004) plans the daily operations for NASA's Mars Exploration Rovers, and MEXAR2 (Cesta et al., 2007) did mission planning, both logistics and science planning, for the European Space Agency's Mars Express mission in 2008.
Game playing: IBM’s DEEP BLUE became the first computer program to defeat the world
champion in a chess match when it bested Garry Kasparov by a score of 3.5 to 2.5 in an
exhibi�on match (Goodman and Keene, 1997). Kasparov said that he felt a “new kind of
intelligence” across the board from him. Newsweek magazine described the match as “The
brain’s last stand.” The value of IBM’s stock increased by $18 billion. Human champions
studied Kasparov’s loss and were able to draw a few matches in subsequent years, but the
most recent human-computer matches have been won convincingly by the computer.
Spam fighting: Each day, learning algorithms classify over a billion messages as spam, saving the recipient from having to waste time deleting what, for many users, could comprise 80% or 90% of all messages if not classified away by algorithms. Because spammers are continually updating their tactics, it is difficult for a static programmed approach to keep up, and learning algorithms work best (Sahami et al., 1998; Goodman and Heckerman, 2004).
Logistics planning: During the Persian Gulf crisis of 1991, U.S. forces deployed a Dynamic Analysis and Replanning Tool, DART (Cross and Walker, 1994), to do automated logistics planning and scheduling for transportation. This involved up to 50,000 vehicles, cargo, and people at a time, and had to account for starting points, destinations, routes, and conflict resolution among all parameters. The AI planning techniques generated in hours a plan that would have taken weeks with older methods. The Defense Advanced Research Projects Agency (DARPA) stated that this single application more than paid back DARPA's 30-year investment in AI.
Robotics: The iRobot Corporation has sold over two million Roomba robotic vacuum cleaners for home use. The company also deploys the more rugged PackBot to Iraq and Afghanistan, where it is used to handle hazardous materials, clear explosives, and identify the location of snipers.
The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence. A computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer. Here we note that programming a computer to pass a rigorously applied test provides plenty to work on. The computer would need to possess the following capabilities:
• natural language processing to enable it to communicate successfully in English;
• knowledge representation to store what it knows or hears;
• automated reasoning to use the stored information to answer questions and to draw new conclusions;
• machine learning to adapt to new circumstances and to detect and extrapolate patterns.
Turing's test deliberately avoided direct physical interaction between the interrogator and the computer because physical simulation of a person is unnecessary for intelligence. However, the so-called total Turing Test includes a video signal so that the interrogator can test the subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical objects "through the hatch." To pass the total Turing Test, the computer will need
• computer vision to perceive objects, and
• robotics to manipulate objects and move about.
These six disciplines compose most of AI, and Turing deserves credit for designing a test that remains relevant 60 years later.
3 What are agents? Explain how they interact with the environment. (NOV 2018)
OR
Explain the concept of agent and environment. (APR 2019)
OR
State the relationship between agents and environment. (NOV 2019)
OR
State the relationship between agents and environment. (NOV 2022)
ANS An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
A human agent has eyes, ears, and other organs for sensors and hands, legs, vocal tract, and so on for actuators.
A robotic agent might have cameras and infrared range finders for sensors and various motors for actuators.
A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.
We use the term percept to refer to the agent's perceptual inputs at any given instant. An agent's percept sequence is the complete history of everything the agent has ever perceived. An agent's behavior is described by the agent function that maps any given percept sequence to an action.
Figure 2.1 Agents interact with environments through sensors and actuators.
To illustrate these ideas, we use a very simple example, the vacuum-cleaner world shown in Figure 2.2. This world is so simple that we can describe everything that happens; it's also a made-up world, so we can invent many variations. This particular world has just two locations: squares A and B. The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move left, move right, suck up the dirt, or do nothing. One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square. A partial tabulation of this agent function is shown in Figure 2.3.
Percept sequence                              Action
[A, Clean]                                    Right
[A, Dirty]                                    Suck
[B, Clean]                                    Left
[B, Dirty]                                    Suck
[A, Clean], [A, Clean]                        Right
[A, Clean], [A, Dirty]                        Suck
...
[A, Clean], [A, Clean], [A, Clean]            Right
[A, Clean], [A, Clean], [A, Dirty]            Suck
...
Figure 2.3 Partial tabulation of a simple agent function for the vacuum-cleaner world shown in Figure 2.2.
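The tabulation above can be expressed directly as a lookup table. A minimal Python sketch (the dictionary layout and function name are ours, for illustration only):

```python
# Partial table-driven agent for the two-square vacuum world.
# Keys are full percept sequences (tuples of (location, status) percepts).
AGENT_TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

def table_driven_agent(percept_sequence):
    """Look up the action for the complete percept history."""
    return AGENT_TABLE[tuple(percept_sequence)]

print(table_driven_agent([("A", "Dirty")]))  # Suck
```

Note that the table grows with the length of the percept history, which is why a table-driven design is impractical for anything but tiny worlds.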
5 Explain PEAS description of task environment for automated taxi. (NOV 2018)
OR
Give the PEAS description for taxi's task environment. (APR 2019)
OR
What is PEAS description? Explain with two suitable examples. (NOV 2022)
OR
Explain PEAS description of the task environment for an automated taxi. (APR 2023)
ANS
Agent Type: Taxi driver
Performance Measure: Safe, fast, legal, comfortable trip, maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering, accelerator, brake, signal, horn, display
Sensors: Cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard
Figure 2.4 PEAS description of the task environment for an automated taxi.
First, what is the performance measure to which we would like our automated driver to aspire? Desirable qualities include getting to the correct destination; minimizing fuel consumption and wear and tear; minimizing the trip time or cost; minimizing violations of traffic laws and disturbances to other drivers; maximizing safety and passenger comfort; maximizing profits. Obviously, some of these goals conflict, so tradeoffs will be required.
Next, what is the driving environment that the taxi will face? Any taxi driver must deal with a variety of roads, ranging from rural lanes and urban alleys to 12-lane freeways, complete with obstacles such as potholes. The taxi must also interact with potential and actual passengers. There are also some optional choices. The taxi might need to operate in Southern California, where snow is seldom a problem, or in Alaska, where it seldom is not. It could always be driving on the right, or we might want it to be flexible enough to drive on the left when in Britain or Japan. Obviously, the more restricted the environment, the easier the design problem.
The actuators for an automated taxi include those available to a human driver: control over the engine through the accelerator and control over steering and braking. In addition, it will need output to a display screen or voice synthesizer to talk back to the passengers, and perhaps some way to communicate with other vehicles, politely or otherwise.
The basic sensors for the taxi will include one or more controllable video cameras so that it can see the road; it might augment these with infrared or sonar sensors to detect distances to other cars and obstacles. To avoid speeding tickets, the taxi should have a speedometer, and to control the vehicle properly, especially on curves, it should have an accelerometer. To determine the mechanical state of the vehicle, it will need the usual array of engine, fuel, and electrical system sensors. Like many human drivers, it might want a global positioning system (GPS) so that it doesn't get lost. Finally, it will need a keyboard or microphone for the passenger to request a destination.
6 Give comparison between fully observable and partially observable agent. (NOV 2018)
ANS
Fully observable:
• The agent can access the complete state of the environment at each point in time.
• The agent can detect all aspects that are relevant to the choice of action.
• A fully observable environment is easy to handle, as there is no need to maintain internal state to keep track of the history of the world.
Partially observable:
• The agent cannot access the complete state of the environment at each point in time.
• The agent cannot detect all aspects that are relevant to the choice of action.
• A partially observable environment is harder to handle, as the agent must maintain internal state to keep track of the history of the world.
7 Explain the working of simple reflex agent. (APR 2019)
OR
Explain reflex agents with state. (NOV 2019)
ANS The simplest kind of agent is the simple reflex agent. These agents select actions on the basis of the current percept, ignoring the rest of the percept history.
For example, the vacuum agent whose agent function is tabulated in Figure 2.3 is a simple reflex agent, because its decision is based only on the current location and on whether that location contains dirt.
Simple reflex behaviors occur even in more complex environments. Imagine yourself as the driver of the automated taxi. If the car in front brakes and its brake lights come on, then you should notice this and initiate braking. In other words, some processing is done on the visual input to establish the condition we call "The car in front is braking." Then, this triggers some established connection in the agent program to the action "initiate braking." We call such a connection a condition-action rule, written as
if car-in-front-is-braking then initiate-braking.
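The condition-action rules above can be sketched as a simple reflex agent for the vacuum world. A minimal illustration (function name and percept format are ours):

```python
def simple_reflex_vacuum_agent(percept):
    """Select an action from the current percept alone,
    ignoring all percept history (condition-action rules)."""
    location, status = percept
    if status == "Dirty":      # rule: if dirty then suck
        return "Suck"
    elif location == "A":      # rule: if clean in A then move right
        return "Right"
    else:                      # rule: if clean in B then move left
        return "Left"
```

Because the agent consults only the current percept, it needs no memory, which is exactly what makes simple reflex agents both fast and limited.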
o Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program", named "Logic Theorist". This program proved 38 of 52 mathematics theorems and found new and more elegant proofs for some theorems.
o Year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined as an academic field.
o Year 1966: Researchers emphasized developing algorithms that can solve mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, named ELIZA.
o Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
o The duration between 1974 and 1980 was the first AI winter. AI winter refers to a period in which computer scientists dealt with a severe shortage of government funding for AI research.
o During AI winters, public interest in artificial intelligence decreased.
o Year 1980: After the AI winter, AI came back with "expert systems", programs that emulate the decision-making ability of a human expert.
o In the year 1980, the first national conference of the American Association of Artificial Intelligence was held at Stanford University.
o The duration between 1987 and 1993 was the second AI winter.
o Investors and government again stopped funding AI research due to high cost and inefficient results, even though expert systems such as XCON had been very cost-effective.
o Year 1997: In the year 1997, IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the first computer to beat a world chess champion.
o Year 2002: For the first time, AI entered the home in the form of Roomba, a vacuum cleaner.
o Year 2006: AI came into the business world. Companies like Facebook, Twitter, and Netflix also started using AI.
o Year 2011: In the year 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
o Year 2012: Google launched an Android app feature, "Google Now", which was able to provide information to the user as a prediction.
o Year 2014: The chatbot "Eugene Goostman" won a competition in the famous "Turing test."
o Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and performed extremely well.
o Google demonstrated an AI program, "Duplex", a virtual assistant that took a hairdresser appointment over a call, and the lady on the other side didn't notice that she was talking with a machine.
10 Describe the contribution of Philosophy and Mathematics to Artificial Intelligence. (NOV 2022)
ANS
Philosophy
• Can formal rules be used to draw valid conclusions?
• How does the mind arise from a physical brain?
• Where does knowledge come from?
• How does knowledge lead to action?
Aristotle (384–322 B.C.) was the first to formulate a precise set of laws governing the rational part of the mind. He developed an informal system of syllogisms for proper reasoning, which in principle allowed one to generate conclusions mechanically, given initial premises. Descartes was a strong advocate of the power of reasoning in understanding the world, a philosophy now called rationalism, and one that counts Aristotle and Leibniz as members. But Descartes was also a proponent of dualism. He held that there is a part of the human mind (or soul or spirit) that is outside of nature, exempt from physical laws. Animals, on the other hand, did not possess this dual quality; they could be treated as machines. An alternative to dualism is materialism, which holds that the brain's operation according to the laws of physics constitutes the mind.
Mathematics
• What are the formal rules to draw valid conclusions?
• What can be computed?
• How do we reason with uncertain information?
Philosophers staked out some of the fundamental ideas of AI, but the leap to a formal science required a level of mathematical formalization in three fundamental areas: logic, computation, and probability. The next step was to determine the limits of what could be done with logic and computation. The first nontrivial algorithm is thought to be Euclid's algorithm for computing greatest common divisors. The word algorithm (and the idea of studying them) comes from al-Khowarazmi, a Persian mathematician of the 9th century.
Although decidability and computability are important to an understanding of computation, the notion of tractability has had an even greater impact. Roughly speaking, a problem is called intractable if the time required to solve instances of the problem grows exponentially with the size of the instances. The distinction between polynomial and exponential growth in complexity was first emphasized in the mid-1960s (Cobham, 1964; Edmonds, 1965). It is important because exponential growth means that even moderately large instances cannot be solved in any reasonable time. Therefore, one should strive to divide the overall problem of generating intelligent behavior into tractable subproblems rather than intractable ones. Besides logic and computation, the third great contribution of mathematics to AI is the theory of probability.
11 Describe the structure of Utility-based Agent. (NOV 2022)
ANS Goals alone are not enough to generate high-quality behavior in most environments. For example, many action sequences will get the taxi to its destination (thereby achieving the goal), but some are quicker, safer, more reliable, or cheaper than others. Goals just provide a crude binary distinction between "happy" and "unhappy" states. A more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent. Because "happy" does not sound very scientific, economists and computer scientists use the term utility instead.
We have already seen that a performance measure assigns a score to any given sequence of environment states, so it can easily distinguish between more and less desirable ways of getting to the taxi's destination. An agent's utility function is essentially an internalization of the performance measure. If the internal utility function and the external performance measure are in agreement, then an agent that chooses actions to maximize its utility will be rational according to the external performance measure.
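Action selection by a utility-based agent can be sketched as picking the action whose resulting state maximizes the utility function. The route names and utility numbers below are made up purely for illustration:

```python
def choose_action(state, actions, result, utility):
    """Utility-based selection: take the action whose resulting
    state has the highest utility."""
    return max(actions(state), key=lambda a: utility(result(state, a)))

# Hypothetical taxi example: several routes reach the destination (the goal),
# but their utilities differ (speed, safety, cost rolled into one number).
routes = {"highway": 0.9, "back_roads": 0.6, "toll_road": 0.8}
best = choose_action(
    "at_pickup",
    actions=lambda s: list(routes),   # all routes achieve the goal
    result=lambda s, a: a,            # resulting state = chosen route
    utility=lambda s: routes[s],      # internalized performance measure
)
```

A goal-based agent would treat all three routes as equally acceptable; the utility function is what lets the agent prefer "highway" over the others.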
Economics
• How should we make decisions so as to maximize payoff?
• How should we do this when others may not go along?
• How should we do this when the payoff may be far in the future?
Neuroscience
• How do brains process information?
Neuroscience is the study of the nervous system, particularly the brain. Although the exact way in which the brain enables thought is one of the great mysteries of science, the fact that it does enable thought has been appreciated for thousands of years because of the evidence that strong blows to the head can lead to mental incapacitation. It has also long been known that human brains are somehow different.
Psychology
• How do humans and animals think and act?
The origins of scientific psychology are usually traced to the work of the German physicist Hermann von Helmholtz (1821–1894) and his student Wilhelm Wundt (1832–1920). Helmholtz applied the scientific method to the study of human vision.
Cognitive psychology, which views the brain as an information-processing device, can be traced back at least to the works of William James (1842–1910). Helmholtz also insisted that perception involved a form of unconscious logical inference.
Computer engineering
• How can we build an efficient computer?
The first operational programmable computer was the Z-3, the invention of Konrad Zuse in Germany in 1941. Zuse also invented floating-point numbers and the first high-level programming language, Plankalkül. The first electronic computer, the ABC, was assembled by John Atanasoff and his student Clifford Berry between 1940 and 1942 at Iowa State University. Since that time, each generation of computer hardware has brought an increase in speed and capacity and a decrease in price.
AI also owes a debt to the software side of computer science, which has supplied the operating systems, programming languages, and tools needed to write modern programs (and papers about them). But this is one area where the debt has been repaid: work in AI has pioneered many ideas that have made their way back to mainstream computer science, including time sharing, interactive interpreters, personal computers with windows and mice, rapid development environments, the linked list data type, automatic storage management, and key concepts of symbolic, functional, declarative, and object-oriented programming.
Linguistics
• How does language relate to thought?
In 1957, B. F. Skinner published Verbal Behavior. This was a comprehensive, detailed account of the behaviorist approach to language learning, written by the foremost expert in the field. Modern linguistics and AI, then, were "born" at about the same time, and grew up together, intersecting in a hybrid field called computational linguistics or natural language processing. The problem of understanding language soon turned out to be considerably more complex than it seemed in 1957. Understanding language requires an understanding of the subject matter and context, not just an understanding of the structure of sentences. This might seem obvious, but it was not widely appreciated until the 1960s. Much of the early work in knowledge representation (the study of how to put knowledge into a form that a computer can reason with) was tied to language and informed by research in linguistics, which was connected in turn to decades of work on the philosophical analysis of language.
13 What is a Learning agent? Explain its four conceptual components. (APR 2023)
ANS A learning agent can be divided into four conceptual components, as shown in Figure 2.15. The most important distinction is between the learning element, which is responsible for making improvements, and the performance element, which is responsible for selecting external actions. The performance element is what we have previously considered to be the entire agent: it takes in percepts and decides on actions. The learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future.
Figure 2.15 A general learning agent: the learning element, performance element, critic, and problem generator, connected to the environment through sensors and actuators.
The design of the learning element depends very much on the design of the performance element. When trying to design an agent that learns a certain capability, the first question is not "How am I going to get it to learn this?" but "What kind of performance element will my agent need to do this once it has learned how?" Given an agent design, learning mechanisms can be constructed to improve every part of the agent.
The last component of the learning agent is the problem generator. It is responsible for suggesting actions that will lead to new and informative experiences. The point is that if the performance element had its way, it would keep doing the actions that are best, given what it knows.
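The four components can be sketched as a toy class. This is our own illustrative skeleton, not a standard API; the critic's feedback is reduced to a single numeric reward:

```python
import random

class LearningAgent:
    """Toy sketch of a learning agent's four conceptual components."""

    def __init__(self, default_action="NoOp"):
        self.rules = {}              # performance element's knowledge: percept -> action
        self.default = default_action

    def act(self, percept):
        # Performance element: selects external actions from current knowledge.
        return self.rules.get(percept, self.default)

    def learn(self, percept, action, reward):
        # Learning element: uses the critic's feedback (here a reward signal)
        # to modify the performance element so it does better in the future.
        if reward > 0:
            self.rules[percept] = action

    def explore(self, available_actions):
        # Problem generator: suggests actions that lead to new,
        # informative experiences (here, simply a random action).
        return random.choice(available_actions)
```

Calling `learn` after each `act` closes the loop: the critic scores the outcome, and the performance element's rules improve over time.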
Unit 2
1 Discuss in brief the formulation of single state problem. (NOV 2018)
ANS Problem formulation is the process of deciding what actions and states to consider, given a goal. We discuss this process in more detail later. For now, let us assume that the agent will consider actions at the level of driving from one major town to another. Each state therefore corresponds to being in a particular town.
Our agent has now adopted the goal of driving to Bucharest and is considering where to go from Arad. Three roads lead out of Arad, one toward Sibiu, one to Timisoara, and one to Zerind. None of these achieves the goal, so unless the agent is familiar with the geography of Romania, it will not know which road to follow. In other words, the agent will not know which of its possible actions is best, because it does not yet know enough about the state that results from taking each action. If the agent has no additional information (i.e., if the environment is unknown), then it has no choice but to try one of the actions at random.
Suppose the agent has a map of Romania. The point of a map is to provide the agent with information about the states it might get itself into and the actions it can take. The agent can use this information to consider subsequent stages of a hypothetical journey via each of the three towns, trying to find a journey that eventually gets to Bucharest. Once it has found a path on the map from Arad to Bucharest, it can achieve its goal by carrying out the driving actions that correspond to the legs of the journey. In general, an agent with several immediate options of unknown value can decide what to do by first examining future actions that eventually lead to states of known value.
We assume that the environment is observable, so the agent always knows the current state. For the agent driving in Romania, it's reasonable to suppose that each city on the map has a sign indicating its presence to arriving drivers. We also assume the environment is discrete, so at any given state there are only finitely many actions to choose from. This is true for navigating in Romania because each city is connected to a small number of other cities. We will assume the environment is known, so the agent knows which states are reached by each action. (Having an accurate map suffices to meet this condition for navigation problems.) Finally, we assume that the environment is deterministic, so each action has exactly one outcome.
The process of looking for a sequence of actions that reaches the goal is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the actions it recommends can be carried out. This is called the execution phase.
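The search phase described above can be sketched with a breadth-first search over a small fragment of the Romania map. The adjacencies below follow the textbook map, but the fragment is incomplete and road distances are omitted:

```python
from collections import deque

# Incomplete fragment of the Romania road map (adjacency only, no distances).
ROMANIA = {
    "Arad": ["Sibiu", "Timisoara", "Zerind"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea", "Oradea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Timisoara": ["Arad", "Lugoj"],
    "Zerind": ["Arad", "Oradea"],
    "Oradea": ["Zerind", "Sibiu"],
    "Lugoj": ["Timisoara"],
    "Bucharest": ["Fagaras", "Pitesti"],
}

def breadth_first_search(initial, goal_test, actions):
    """Search phase: find a sequence of states from initial to a goal state."""
    frontier = deque([(initial, [initial])])
    explored = {initial}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path                       # solution: the execution phase follows it
        for nxt in actions(state):
            if nxt not in explored:
                explored.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

path = breadth_first_search("Arad", lambda s: s == "Bucharest", lambda s: ROMANIA[s])
```

Since distances are omitted, this finds the route with the fewest towns, not the shortest driving distance; with step costs, uniform-cost or A* search would be used instead.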
3 Give the outline of tree search algorithm. (NOV 2018)
ANS
A* Tree Search, or simply A* Search, combines the strengths of uniform-cost search and greedy search. In this search, the heuristic is the summation of the cost in UCS, denoted by g(x), and the cost in the greedy search, denoted by h(x). The summed cost is denoted by f(x):
f(x) = g(x) + h(x)
The following points should be noted with respect to heuristics in A* search:
Here, h(x) is called the forward cost and is an estimate of the distance of the current node from the goal node.
And g(x) is called the backward cost and is the cumulative cost of a node from the root node.
A* search is optimal only when, for all nodes, the forward cost h(x) underestimates the actual cost h*(x) to reach the goal. This property of the A* heuristic is called admissibility.
Solution: Starting from the start node S, the algorithm computes g(x) + h(x) for all nodes in the fringe at each step, choosing the node with the lowest sum. If two paths have an equal summed cost f(x), both are expanded in the next step, and the path with the lower cost on further expansion is the chosen path.
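The procedure can be sketched as follows. The graph, step costs, and heuristic values below are made up for illustration, with h chosen to be admissible (it never overestimates the true cost to G):

```python
import heapq

# Hypothetical graph: node -> list of (successor, step cost).
GRAPH = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("C", 5), ("G", 12)],
    "B": [("C", 2)],
    "C": [("G", 3)],
    "G": [],
}
# Forward cost h(x): an admissible estimate of distance to the goal G.
H = {"S": 7, "A": 6, "B": 4, "C": 2, "G": 0}

def a_star(start, goal):
    """Repeatedly expand the fringe node with the lowest f(x) = g(x) + h(x)."""
    frontier = [(H[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, step in GRAPH[node]:
            new_g = g + step                     # backward cost g(x)
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + H[nxt], new_g, nxt, path + [nxt]))
    return None, float("inf")

path, cost = a_star("S", "G")
```

With this admissible heuristic, A* returns the optimal path S → A → B → C → G of total cost 8, rather than the direct but expensive edge A → G.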
Mutation
The mutation operator inserts random genes in the offspring (new child) to maintain the
diversity in the population. It can be done by flipping some bits in the chromosomes.
Mutation helps in solving the issue of premature convergence and enhances diversification.
Crossover: The crossover operator plays the most significant role in the reproduction phase of the genetic algorithm. In this process, a crossover point is selected at random within the genes. Then the crossover operator swaps the genetic information of the two parents from the current generation to produce a new individual representing the offspring.
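The two operators can be sketched for bit-string chromosomes (the parameter names and the default mutation rate are illustrative):

```python
import random

def crossover(parent1, parent2):
    """Single-point crossover: take genes from parent1 up to a random
    point, and from parent2 after it, to produce one offspring."""
    point = random.randint(1, len(parent1) - 1)
    return parent1[:point] + parent2[point:]

def mutate(chromosome, rate=0.1):
    """Flip each bit with probability `rate` to maintain diversity."""
    return [1 - gene if random.random() < rate else gene for gene in chromosome]

random.seed(0)
child = crossover([0] * 8, [1] * 8)
```

A typical generation applies crossover to selected parents and then mutation to each offspring, which helps the population escape premature convergence.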
5 Explain how the transition model is used for sensing in the vacuum cleaner problem. (NOV 2018)
ANS
Vacuum cleaner
This can be formulated as a problem as follows:
• States: The state is determined by both the agent location and the dirt locations. The agent is in one of two locations, each of which might or might not contain dirt. Thus, there are 2 × 2² = 8 possible world states. A larger environment with n locations has n · 2ⁿ states.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three actions: Left, Right, and Suck. Larger environments might also include Up and Down.
• Transition model: The actions have their expected effects, except that moving Left in the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no effect. The complete state space is shown in Figure 3.3.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
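The transition model above can be sketched directly; the state representation (a location plus a frozenset of dirty squares) is our own choice for illustration:

```python
def result(state, action):
    """Transition model for the two-square vacuum world.
    A state is (location, dirty_squares), with dirty_squares a frozenset."""
    loc, dirt = state
    if action == "Left":
        return ("A", dirt)          # no effect if already in the leftmost square
    if action == "Right":
        return ("B", dirt)          # no effect if already in the rightmost square
    if action == "Suck":
        return (loc, dirt - {loc})  # no effect if the square is already clean
    return state

def goal_test(state):
    return not state[1]             # goal: the set of dirty squares is empty
```

Given this model, an agent that knows the initial state can predict the outcome of any action sequence without sensing during execution.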
6 Give the illustration of 8 queen problem using hill climbing algorithm. (NOV 2018)
ANS
The N Queen is the problem of placing N chess queens on an N×N chessboard so that no two
queens attack each other.
For example, solutions for N = 4 and N = 8 are shown below.
Input: N = 4
Output:
0100
0001
1000
0010
Explanation:
The Position of queens are:
1 – {1, 2}
2 – {2, 4}
3 – {3, 1}
4 – {4, 3}
As we can see that we have placed all 4 queens
in a way that no two queens are attacking each other.
So, the output is correct
Input: N = 8
Output:
00000010
01000000
00000100
00100000
10000000
00010000
00000001
00001000
While exact algorithms like backtracking can solve the N Queen problem, let's take a local-search (AI) approach to solving it.
Hill climbing does not guarantee a globally correct solution every time, but it has quite a good success rate of about 97%, which is not bad.
The terminology used in the problem is defined as follows:
Notion of a State – A state here in this context is any configuration of the N queens on the N X
N board. Also, in order to reduce the search space let’s add an additional constraint that there
can only be a single queen in a particular column. A state in the program is implemented using
an array of length N, such that if state[i]=j then there is a queen at column index i and row
index j.
Notion of Neighbours – Neighbours of a state are other states with board configuration that
differ from the current state’s board configuration with respect to the position of only a single
queen. This queen that differs a state from its neighbour may be displaced anywhere in the
same column.
Optimisation function or Objective function – We know that local search is an optimization algorithm that searches the local space to optimize a function that takes the state as input and gives some value as an output. Here, the value of the objective function of a state is the number of pairs of queens attacking each other. Our goal is to find a state with the minimum objective value. This function has a maximum value of C(N, 2) = N(N−1)/2 and a minimum value of 0.
Algorithm:
Start with a random state (i.e., a random configuration of the board).
Scan through all neighbours of the current state and jump to the neighbour with the lowest objective value, if one exists. If no neighbour has an objective strictly lower than the current state's, but one with an equal value exists, jump to any such neighbour at random (escaping a shoulder and/or local optimum).
Repeat step 2 until a state is found whose objective is strictly lower than all of its neighbours' objectives, then go to step 4.
The state found by the local search is either a local optimum or the global optimum. There is no guaranteed way of escaping local optima, but jumping to a random neighbour or performing a random restart each time a local optimum is encountered increases the chance of reaching the global optimum (the solution to our problem).
Output the state and return.
It is easy to see that the global optimum in our case is 0, since that is the minimum possible number of pairs of attacking queens. A random restart gives a higher chance of achieving the global optimum, but we still use a random neighbour because the N Queens problem does not have a high number of local optima, and a random-neighbour jump is faster than a random restart.
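The procedure above can be sketched in Python (a minimal illustration; function names are my own, and random restarts are used to escape local optima):

```python
import random

def attacking_pairs(state):
    """Objective function: number of pairs of queens attacking each other.
    state[i] = row of the queen in column i, as described above."""
    n, count = len(state), 0
    for i in range(n):
        for j in range(i + 1, n):
            if state[i] == state[j] or abs(state[i] - state[j]) == j - i:
                count += 1          # same row, or same diagonal
    return count

def best_neighbour(state):
    """Return the lowest-objective neighbour (one queen moved within its column)."""
    best, best_val = state, attacking_pairs(state)
    for col in range(len(state)):
        for row in range(len(state)):
            if row != state[col]:
                neighbour = state[:col] + [row] + state[col + 1:]
                val = attacking_pairs(neighbour)
                if val < best_val:
                    best, best_val = neighbour, val
    return best

def hill_climb_with_restarts(n, max_restarts=100):
    """Random-restart hill climbing: restart whenever a local optimum is hit."""
    for _ in range(max_restarts):
        state = [random.randrange(n) for _ in range(n)]
        while True:
            neighbour = best_neighbour(state)
            if neighbour == state:          # local optimum reached
                break
            state = neighbour
        if attacking_pairs(state) == 0:     # global optimum: a valid placement
            return state
    return state

solution = hill_climb_with_restarts(8)
print(attacking_pairs(solution))   # 0 when a valid 8-queens placement is found
```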
7
List and explain performance measuring ways for problem solving. (April 2019)
ANS
The performance of a search algorithm can be measured in four ways:
Completeness: whether the algorithm is guaranteed to find a solution (if any solution exists).
Optimality: It measures if the strategy searches for an optimal solution.
Time Complexity: The time taken by the algorithm to find a solution.
Space Complexity: Amount of memory required to perform a search.
The complexity of an algorithm depends on branching factor or maximum number of
successors, depth of the shallowest goal node (i.e., number of steps from root to the path) and
the maximum length of any path in a state space.
Time and Space in complexity analysis are measured with respect to the number of nodes the
problem graph has in terms of asymptotic notations.
In AI, complexity is expressed by three factors b, d and m:
b, the branching factor: the maximum number of successors of any node.
d, the depth of the shallowest goal node.
m, the maximum length of any path in the state space.
Goal formulation: intuitively, we want all the dirt cleaned up. Formally, the goal is
{ state 7, state 8 }.
Problem formulation (Actions): Left, Right, Suck, NoOp
State Space Graph
Measuring performance
With any intelligent agent, we want it to find a (good) solution and not spend forever doing it.
The interesting quantities are, therefore,
the search cost--how long the agent takes to come up with the solution to the problem, and
the path cost--how expensive the actions in the solution are.
The total cost of the solution is the sum of the above two quantities.
9
Write the uniform cost search algorithm. Explain in short. (April 2019)
OR
Give the outline of Uniform-cost search algorithm. (NOV 2019)
ANS
Instead of expanding the shallowest node, uniform-cost search expands the node n with the
lowest path cost g(n). This is done by storing the frontier as a priority queue ordered by g.
In addition to the ordering of the queue by path cost, there are two other significant differences from breadth-first search. The first is that the goal test is applied to a node when it is selected for expansion rather than when it is first generated. The reason is that the first goal node that is generated may be on a suboptimal path. The second difference is that a test is added in case a better path is found to a node currently on the frontier.
Both of these modifications come into play in the example shown in Figure 3.15, where the
problem is to get from Sibiu to Bucharest. The successors of Sibiu are Rimnicu Vilcea and
Fagaras, with costs 80 and 99, respectively. The least-cost node, Rimnicu Vilcea, is expanded
next, adding Pitesti with cost 80 + 97 = 177. The least-cost node is now Fagaras, so it is
expanded, adding Bucharest with cost 99 + 211 = 310. Now a goal node has been generated,
but uniform-cost search keeps going, choosing Pitesti for expansion and adding a second path
to Bucharest with cost 80 + 97 + 101 = 278. Now the algorithm checks to see if this new path is
better than the old one; it is, so the old one is discarded. Bucharest, now with g-cost 278, is
selected for expansion and the solution is returned.
Uniform-cost search does not care about the number of steps a path has, but only about their
total cost. Therefore, it will get stuck in an infinite loop if there is a path with an infinite
sequence of zero-cost actions.
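The Sibiu-to-Bucharest example can be traced with a minimal uniform-cost search sketch in Python (the edge costs are transcribed from the text above; everything else is illustrative):

```python
import heapq

graph = {
    "Sibiu": [("Rimnicu Vilcea", 80), ("Fagaras", 99)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Fagaras": [("Bucharest", 211)],
    "Pitesti": [("Bucharest", 101)],
    "Bucharest": [],
}

def uniform_cost_search(start, goal):
    """Expand the frontier node with the lowest path cost g(n)."""
    frontier = [(0, start, [start])]   # priority queue ordered by g
    best_g = {}                        # cheapest known cost to each expanded node
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:               # goal test on *expansion*, not generation
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                   # a better path to this node already exists
        best_g[node] = g
        for succ, cost in graph[node]:
            heapq.heappush(frontier, (g + cost, succ, path + [succ]))
    return None

print(uniform_cost_search("Sibiu", "Bucharest"))
# (278, ['Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

As in the example, the suboptimal path through Fagaras (cost 310) is generated first but never returned, because the cheaper path through Pitesti (cost 278) is expanded before it.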
10 With suitable diagram explain the following concepts (April 2019) (NOV 2022)
i. shoulder
ii. Global maximum
iii. Local maximum
ANS
Simple hill climbing is the simplest way to implement a hill-climbing algorithm. It evaluates one neighbour node state at a time, selects the first one that optimizes the current cost, and sets it as the current state.
Shoulder: a plateau region which has an uphill edge.
Global maximum: the best possible state of the state-space landscape. It has the highest value of the objective function.
Local maximum: a peak that is higher than each of its neighboring states but lower than the global maximum. Hill-climbing algorithms that reach the vicinity of a local maximum will be drawn upward toward the peak but will then be stuck with nowhere else to go.
11 Explain the working of AND-OR search tree (April 2019)
ANS An and-or tree specifies only the search space for solving a problem. Different search strategies for searching the space are possible. These include searching the tree depth-first, breadth-first, or best-first using some measure of desirability of solutions. The search strategy can be sequential, searching or generating one node at a time, or parallel, searching or generating several nodes in parallel.
An and-or tree is a graphical representation of the reduction of problems (or goals) to conjunctions and disjunctions of subproblems (or subgoals).
The first condition we require for optimality is that h(n) be an admissible heuristic. An admissible heuristic is one that never overestimates the cost to reach the goal. Because g(n) is the actual cost to reach n along the current path, and f(n) = g(n) + h(n), we have as an immediate consequence that f(n) never overestimates the true cost of a solution along the current path through n.
A second, slightly stronger condition called consistency (or sometimes monotonicity) is required only for applications of A* to graph search. A heuristic h(n) is consistent if, for every node n and every successor n′ of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n′ plus the estimated cost of reaching the goal from n′:
h(n) ≤ c(n, a, n′) + h(n′).
It terminates when it reaches a “peak” where no neighbor has a higher value. The
algorithm does not maintain a search tree, so the data structure for the current node need
only record the state and the value of the objective function.
Hill climbing is sometimes called greedy local search because it grabs a good neighbor
state without thinking ahead about where to go next.
A state space is a way to mathematically represent a problem by defining all the possible states in which the problem can be. This is used in search algorithms to represent the initial state, goal state, and current state of the problem.
Path in state space – In the state space, a path is a sequence of states connected by a sequence of actions. The solution of a problem is part of the graph formed by the state space.
Consider, for example, the problem "Visit every city in Figure 3.2 at least once, starting and ending in Bucharest." As with route finding, the actions correspond to trips between adjacent cities. The state space, however, is quite different. Each state must include not just the current location but also the set of cities the agent has visited. So the initial state would be In(Bucharest), Visited({Bucharest}), a typical intermediate state would be In(Vaslui), Visited({Bucharest, Urziceni, Vaslui}), and the goal test would check whether the agent is in Bucharest and all 20 cities have been visited.
Initial state
Operator or successor function – for any state x, returns s(x), the set of states reachable from x with one action.
State space – all states reachable from the initial state by any sequence of actions.
Path cost – a function that assigns a cost to a path. The cost of a path is the sum of the costs of the individual actions along the path.
On the Y-axis we have taken the function, which can be an objective function or a cost function, and state space on the X-axis. If the function on the Y-axis is cost, then the goal of the search is to find the global minimum and local minimum. If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum and local maximum.
Local maximum: a state which is better than its neighbor states, but there is also another state which is higher than it.
Global maximum: the best possible state of the state-space landscape. It has the highest value of the objective function.
Flat local maximum: a flat space in the landscape where all the neighbor states of the current state have the same value.
21 Explain Online search agents and unknown environments with its different types.
Online search agents and unknown environments.
An online search agent interleaves computation and action: first it takes an action, then it observes the environment and computes the next action.
Online search is a good idea in dynamic or semidynamic domains, where there is a penalty for sitting around and computing too long.
Online search is also helpful in nondeterministic domains because it allows the agent to focus its computational efforts on the contingencies that actually arise rather than those that might happen but probably won't.
Online search is a necessary idea for unknown environments, where the agent does not know what states exist or what its actions do. In this state of ignorance, the agent faces an exploration problem and must use its actions as experiments in order to learn enough to make deliberation worthwhile.
The canonical example of online search is a robot that is placed in a new building and must explore it to build a map that it can use for getting from A to B.
Unit 3
1 Explain the working mechanism of min-max algorithm (Nov 2018)
OR
Write the minimax algorithm. Explain in short. (April 2019)
OR
Give the outline of min-max algorithm. (NOV 2019)
OR
Explain Minimax algorithm in detail. (Nov 2022)
ANS The minimax algorithm
The minimax algorithm computes the minimax decision from the current state.
It uses a simple recursive computation of the minimax values of each successor state, directly implementing the defining equations.
The recursion proceeds all the way down to the leaves of the tree, and then the minimax values are backed up through the tree as the recursion unwinds.
The minimax algorithm performs a complete depth-first exploration of the game tree.
If the maximum depth of the tree is m and there are b legal moves at each point, then the time complexity of the minimax algorithm is O(b^m).
The space complexity is O(bm) for an algorithm that generates all actions at once, or O(m) for an algorithm that generates actions one at a time.
For real games, of course, the time cost is totally impractical, but this algorithm serves as the basis for the mathematical analysis of games and for more practical algorithms.
Consider a two-ply game tree. MAX nodes are those in which it is MAX's turn to move, and MIN nodes are those in which it is MIN's turn.
The terminal nodes show the utility values for MAX; the other nodes are labeled with their minimax values.
MAX's best move at the root is a1, because it leads to the state with the highest minimax value, and MIN's best reply is b1, because it leads to the state with the lowest minimax value.
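The recursive back-up described above can be sketched in Python over the standard two-ply example tree (leaf utilities 3, 12, 8 / 2, 4, 6 / 14, 5, 2 follow the textbook figure; the nested-list encoding is my own):

```python
def minimax(node, maximizing):
    """Recursively back minimax values up the tree as the recursion unwinds."""
    if not isinstance(node, list):        # leaf: terminal utility for MAX
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# The root is a MAX node; each sublist is a MIN node over three leaves.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))   # 3: MAX's best move leads to the subtree with min(3,12,8)
```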
Clause: a disjunction of literals is called a clause. A clause consisting of a single literal is known as a unit clause.
Conjunctive Normal Form: a sentence represented as a conjunction of clauses is said to be in conjunctive normal form, or CNF.
Rules of Kriegspiel
White and Black each see a board containing only their own pieces.
A referee, who can see all the pieces, adjudicates the game and periodically makes
announcements that are heard by both players.
On his turn, White proposes to the referee any move that would be legal if there were no black pieces. If the move is in fact not legal (because of the black pieces), the referee announces "illegal."
In this case, White may keep proposing moves until a legal one is found, and learns more about the location of Black's pieces in the process.
Once a legal move is proposed, the referee announces one or more of the following: "Capture on square X" if there is a capture, and "Check by D" if the black king is in check, where D is the direction of the check, and can be one of "Knight," "Rank," "File," "Long diagonal," or "Short diagonal." (In case of discovered check, the referee may make two "Check" announcements.)
If Black is checkmated or stalemated, the referee says so; otherwise, it is Black's turn to move.
Performance measure:
• +1000 reward points if the agent comes out of the cave with the gold.
• -1000 points penalty for being eaten by the Wumpus or falling into the pit.
• -1 for each action, and -10 for using an arrow.
• The game ends if either the agent dies or comes out of the cave.
Environment:
• A 4×4 grid of rooms.
• The agent is initially in room square [1, 1], facing toward the right.
• The locations of the Wumpus and the gold are chosen randomly, excluding the first square [1, 1].
• Each square of the cave can be a pit with probability 0.2, except the first square.
Actuators:
• Left turn
• Right turn
• Move forward
• Grab
• Release
• Shoot.
Sensors:
• The agent will perceive a stench if he is in a room adjacent to the Wumpus (not diagonally).
• The agent will perceive a breeze if he is in a room directly adjacent to a pit.
• The agent will perceive glitter in the room where the gold is present.
• The agent will perceive a bump if he walks into a wall.
• When the Wumpus is shot, it emits a horrible scream which can be perceived anywhere in the cave.
• These percepts can be represented as a five-element list, with a different indicator for each sensor.
For example, if the agent perceives stench and breeze, but no glitter, no bump, and no scream, the percept can be represented as: [Stench, Breeze, None, None, None].
7 List and explain the elements used to define the game formally. (April 2019)
ANS A game can be defined as a type of search in AI which can be formalized with the following elements:
OR
What is alpha-beta pruning? Explain the function of alpha-beta pruning (NOV 2019)
OR
Describe the technique of Alpha-Beta Pruning. (Nov 2022)
ANS It is possible to compute the correct minimax decision without looking at every node in the
game tree using Pruning Technique.
When applied to a standard minimax tree, it returns the same move as minimax would, but
prunes away branches that cannot possibly influence the final decision. The formula for
MINIMAX,
Let the two unevaluated successors of node C in Figure have values x and y. Then the value of
the root node is given by,
MINIMAX(root ) = max(min(3, 12, 8), min(2, x, y), min(14, 5, 2))
= max(3, min(2, x, y), 2)
= max(3, z, 2) where z = min(2, x, y) ≤ 2
= 3.
α = the value of the best (i.e., highest-value) choice we have found so far at any choice point along the path for MAX.
β = the value of the best (i.e., lowest-value) choice we have found so far at any choice point along the path for MIN.
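These α and β bounds give the pruning rule, which can be sketched in Python over the same nested-list tree encoding used for the minimax example (illustrative, not the textbook pseudocode verbatim):

```python
import math

def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning: returns the same value as minimax
    while skipping branches that cannot influence the final decision."""
    if not isinstance(node, list):                     # leaf utility
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                                  # MIN will avoid this branch
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break                                      # MAX will avoid this branch
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, -math.inf, math.inf, True))  # 3, the same move as plain minimax
```

In the second MIN node, the first leaf (value 2) drives β below α = 3, so the remaining leaves are pruned, exactly as in the MINIMAX(root) derivation above.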
9 Write the connectives used to form complex sentences of propositional logic. Give an example for each. (April 2019)
ANS All sentences are constructed from atomic sentences and the five connectives; therefore, we need to specify how to compute the truth of atomic sentences and how to compute the truth of sentences formed with each of the five connectives.
Atomic sentences are easy:
• True is true in every model and False is false in every model.
• The truth value of every other proposition symbol must be specified directly in the model. For example, in the model m1 given earlier, P1,2 is false.
For complex sentences, we have five rules, which hold for any subsentences P and Q in any
model m.
• ¬P is true iff P is false in m.
• P ∧ Q is true iff both P and Q are true in m.
• P ∨ Q is true iff either P or Q is true in m.
• P ⇒ Q is true unless P is true and Q is false in m.
• P ⇔ Q is true iff P and Q are both true or both false in m.
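The five rules above can be turned into a small truth-evaluation sketch in Python (the tuple encoding of sentences and the function name are my own):

```python
def truth(sentence, model):
    """Evaluate a sentence in a model (a dict mapping symbols to truth values)."""
    if isinstance(sentence, str):                  # atomic sentence
        return model[sentence]
    op, *args = sentence
    if op == "not":
        return not truth(args[0], model)
    if op == "and":
        return truth(args[0], model) and truth(args[1], model)
    if op == "or":
        return truth(args[0], model) or truth(args[1], model)
    if op == "implies":                            # true unless P is true and Q is false
        return (not truth(args[0], model)) or truth(args[1], model)
    if op == "iff":                                # both true or both false
        return truth(args[0], model) == truth(args[1], model)
    raise ValueError(f"unknown connective: {op}")

m = {"P": True, "Q": False}
print(truth(("implies", "P", "Q"), m))   # False: P is true and Q is false
print(truth(("or", "P", "Q"), m))        # True
```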
If the number of models is large but the length of the proof is short, then theorem proving can
be more efficient than model checking.
Concepts related to entailment:
Logical equivalence
• Two sentences α and β are logically equivalent if they are true in the same set of models. We write this as α ≡ β.
• An alternative definition of equivalence is as follows: any two sentences α and β are equivalent only if each of them entails the other:
• α ≡ β if and only if α |= β and β |= α.
Validity
• A sentence is valid if it is true in all models.
• Valid sentences are also known as tautologies: they are necessarily true. Because the sentence True is true in all models, every valid sentence is logically equivalent to True.
Satisfiability
• A sentence is satisfiable if it is true in, or satisfied by, some model.
Modus Ponens: the notation means that, whenever any sentences of the form α ⇒ β and α are given, then the sentence β can be inferred.
For example, if (WumpusAhead ∧ WumpusAlive) ⇒ Shoot and (WumpusAhead ∧ WumpusAlive) are given, then Shoot can be inferred.
Another useful inference rule is And-Elimination, which says that, from a conjunction, any of the conjuncts can be inferred: from α ∧ β, either α or β can be inferred.
Seems similar to dice, where the roll is only done once at the beginning. Not a correct analogy
but suggests first algorithm: consider all possible deals of invisible cards, solve each one as if it
were fully observable, choose move with best outcome averaged over all deals.
Inference system allows us to add a new sentence to the knowledge base. A sentence is a
proposition about the world. Inference system applies logical rules to the KB to deduce
new information.
Unit 4
1 What is first order logic? Discuss the different elements used in first order logic. (Nov 2018)
OR
What is meant by First Order Logic? Explain syntax and semantics of First Order Logic. (Nov 2022)
ANS FOL is a way of knowledge representation.
• We can express natural language statements in a concise way.
• It is also known as first-order predicate logic.
• FOL develops the information about objects in an easier way and can also express the relationship between those objects.
• FOL assumes the following things in the world:
• Objects: A, B, numbers, colors, people etc.
• Relations: red, round, brother of etc.
• Functions: father of, best friend, end of etc.
2 Explain universal and existential quantifier with suitable example. (Nov 2018)
OR
Write a short note on Universal and Existential quantifier with suitable example. (Nov 2022)
OR
Explain the following concepts
i. Universal Instantiation
ii. Existential Instantiation (April 2019)
OR
Explain universal quantifier with example
ANS A quantifier is a language element which generates quantification, and quantification specifies the quantity of specimens in the universe of discourse.
These are the symbols that permit us to determine or identify the range and scope of a variable in a logical expression. There are two types of quantifier:
1) Universal quantification (∀) (for all, everyone, everything)
2) Existential quantification (∃) (for some, at least one)
Existential quantifiers express that the statement within their scope is true for at least one instance of something.
The existential quantifier is denoted by the logical operator ∃, which resembles an inverted E. When it is used with a predicate variable, it is called an existential quantifier.
Examples:
All men drink coffee.
∀x man(x) → drink(x, coffee).
It is read as: for all x, if x is a man, then x drinks coffee.
Some boys are intelligent.
∃x: boys(x) ∧ intelligent(x)
It is read as: there is some x such that x is a boy and x is intelligent.
For example, we can rename x in Knows(x, Elizabeth) to x17 (a new variable name) without changing its meaning. Now the unification will work:
UNIFY(Knows(John, x), Knows(x17, Elizabeth)) = {x/Elizabeth, x17/John}
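The renaming-then-unify step can be sketched in Python (a toy unifier without an occurs check; the term encoding and function names are my own: variables are lowercase strings, constants are capitalized, and Knows(John, x) is encoded as a tuple):

```python
def is_variable(term):
    return isinstance(term, str) and term[0].islower()

def substitute(term, subst):
    """Follow variable bindings until a non-variable or unbound variable is reached."""
    while is_variable(term) and term in subst:
        term = subst[term]
    return term

def unify(x, y, subst=None):
    """Return a substitution making x and y identical, or None on failure."""
    if subst is None:
        subst = {}
    x, y = substitute(x, subst), substitute(y, subst)
    if x == y:
        return subst
    if is_variable(x):
        return {**subst, x: y}
    if is_variable(y):
        return {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            subst = unify(xi, yi, subst)
            if subst is None:
                return None
        return subst
    return None                    # mismatched constants or arities

# After standardizing apart (x renamed to x17), unification succeeds:
print(unify(("Knows", "John", "x"), ("Knows", "x17", "Elizabeth")))
# {'x17': 'John', 'x': 'Elizabeth'}
```

Without the renaming, unify(("Knows", "John", "x"), ("Knows", "x", "Elizabeth")) returns None, which is exactly why standardizing apart is needed.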
7 Explain the following with example
i. Atomic sentence
ii. Complex sentence (April 2019)
ANS Atomic sentences
An atomic sentence (or atom for short) is formed from a predicate symbol optionally followed by a parenthesized list of terms, such as Brother(Richard, John).
This states, under the intended interpretation given earlier, that Richard the Lionheart is the brother of King John. Atomic sentences can have complex terms as arguments. Thus,
Married(Father(Richard), Mother(John))
states that Richard the Lionheart's father is married to King John's mother.
An atomic sentence is true in a given model if the relation referred to by the predicate symbol holds among the objects referred to by the arguments.
Complex sentences
We can use logical connectives to construct more complex sentences, with the same syntax and semantics as in propositional calculus. Here are four sentences that are true in the model under our intended interpretation:
¬Brother(LeftLeg(Richard), John)
Brother(Richard, John) ∧ Brother(John, Richard)
King(Richard) ∨ King(John)
¬King(Richard) ⇒ King(John).
8 Define the Wumpus world problem in terms of first order logic. (April 2019)
ANS Recall that the wumpus agent receives a percept vector with five elements. The corresponding first-order sentence stored in the knowledge base must include both the percept and the time at which it occurred; otherwise, the agent will get confused about when it saw what. We use integers for time steps. A typical percept sentence would be,
Percept([Stench, Breeze, Glitter, None, None], 5).
Given the percept and rules from the preceding paragraphs, this would yield the desired conclusion
BestAction(Grab, 5); that is, Grab is the right thing to do.
9 Write and explain a simple backward-chaining algorithm for first-order knowledge bases. (April
2019)
OR
Describe Backward-Chaining algorithm for First Order definite Clauses. (Nov 2022)
ANS Backward chaining is also known as backward deduction or backward reasoning when using an inference engine. A backward chaining algorithm is a form of reasoning which starts with the goal and works backward, chaining through rules to find known facts that support the goal.
12 What are predicates? Explain their syntax and semantics. (Nov 2019)
ANS A predicate is an expression of one or more variables determined on some specific domain. A predicate with variables can be made a proposition either by assigning a value to the variable or by quantifying the variable.
The main task of the syntax of any language is to distinguish the grammatically correct from the grammatically incorrect sequences of words, which in our case are certain symbols, and the main task of its semantics is to determine when a grammatically correct sentence is true and when false. The syntax and semantics of the language of predicate logic make it possible to strictly define the concepts of logical inference, logical equivalence, contradiction, consistency, logical validity, etc.
The well-formed formulas of predicate logic are interpreted with respect to a domain of objects called the universe of discourse, which we denote by "D".
15 What is Predicate Logic? Differentiate between Propositional Logic and First Order Logic. (APR 2023)
ANS
Propositional Logic:
• The limitation of PL is that it does not represent any individual entities.
• PL does not signify or express generalization, specialization or patterns; for example, quantifiers cannot be used in PL.
First Order Logic:
• FOL can easily represent individual entities; a single sentence about an individual can be easily represented in FOL.
• In FOL users can easily use quantifiers, as it does express generalization, specialization, and patterns.
A Datalog program consists of facts, which are statements that are held to be true, and rules,
which say how to deduce new facts from known facts. For example, here are two facts that
mean xerces is a parent of brooke and brooke is a parent of damocles:
parent(xerces, brooke).
parent(brooke, damocles).
17 Explain the following with respect to FOL:
a. Term
b. Atomic Sentences
c. Complex Sentences
d. Universal Quantifiers
e. Existential Quantifier (APR 2023)
ANS FOL
• FOL develops the information about objects in an easier way and can also express the relationship between those objects.
• FOL assumes the following things in the world:
• Objects: A, B, numbers, colors, people etc.
• Relations: red, round, brother of etc.
• Functions: father of, best friend, end of etc.
• FOL has two main parts: syntax and semantics.
Term: a term is a logical expression that refers to an object: a constant symbol, a variable, or a function applied to terms.
Atomic sentences
An atomic sentence (or atom for short) is formed from a predicate symbol optionally followed by a parenthesized list of terms, such as Brother(Richard, John).
This states, under the intended interpretation given earlier, that Richard the Lionheart is the brother of King John. Atomic sentences can have complex terms as arguments. Thus,
Married(Father(Richard), Mother(John))
states that Richard the Lionheart's father is married to King John's mother.
An atomic sentence is true in a given model if the relation referred to by the predicate symbol holds among the objects referred to by the arguments.
Complex sentences
We can use logical connectives to construct more complex sentences, with the same syntax and semantics as in propositional calculus. Here are four sentences that are true in the model under our intended interpretation:
¬Brother(LeftLeg(Richard), John)
Brother(Richard, John) ∧ Brother(John, Richard)
King(Richard) ∨ King(John)
¬King(Richard) ⇒ King(John).
Quantifiers
1) Universal quantification (∀)
2) Existential quantification (∃)
18 Convert the below given facts into FOL and prove that "Colonel is a Criminal" using Forward Chaining.
a. It is a crime for an American to sell weapons to the enemy of America.
"As per the law, it is a crime for an American to sell weapons to hostile nations. Country A, an enemy of America, has some missiles, and all the missiles were sold to it by Robert, who is an American citizen."
Prove that "Robert is criminal."
To solve the above problem, first, we will convert all the above facts into first-order definite clauses, and then we will use a forward-chaining algorithm to reach the goal.
• It is a crime for an American to sell weapons to hostile nations. (Let's say p, q, and r are variables)
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
• Country A has some missiles. ∃p Owns(A, p) ∧ Missile(p). It can be written as two definite clauses by using Existential Instantiation, introducing a new constant T1.
Owns(A, T1) ......(2)
Missile(T1) .......(3)
• All of the missiles were sold to country A by Robert.
∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
• Missiles are weapons.
Missile(p) → Weapon(p) .......(5)
• Enemy of America is known as hostile.
Enemy(p, America) → Hostile(p) ........(6)
• Country A is an enemy of America.
Enemy(A, America) .........(7)
• Robert is American.
American(Robert). ..........(8)
Forward chaining proof
Step-1:
In the first step we will start with the known facts and will choose the sentences which do not have implications, such as: American(Robert), Enemy(A, America), Owns(A, T1), and Missile(T1). All these facts will be represented as below.
Step-2:
At the second step, we will see those facts which can be inferred from the available facts with satisfied premises.
Rule (1) does not satisfy its premises, so it will not be added in the first iteration.
Rules (2) and (3) are already added.
Rule (4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added, which is inferred from the conjunction of rules (2) and (3).
Rule (6) is satisfied with the substitution {p/A}, so Hostile(A) is added, which is inferred from rule (7).
Step-3:
At step 3, as we can check, rule (1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so we can add Criminal(Robert), which is inferred from all the available facts. And hence we reached our goal statement.
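The three steps above can be sketched as a small forward-chaining program in Python (the tuple representation of facts and rules is my own, not a standard API):

```python
# Facts are ground atoms as tuples; each rule is (premises, conclusion),
# with lowercase strings acting as variables.
facts = {("American", "Robert"), ("Missile", "T1"),
         ("Owns", "A", "T1"), ("Enemy", "A", "America")}

rules = [
    ([("American", "p"), ("Weapon", "q"), ("Sells", "p", "q", "r"),
      ("Hostile", "r")], ("Criminal", "p")),                     # rule (1)
    ([("Missile", "p"), ("Owns", "A", "p")],
     ("Sells", "Robert", "p", "A")),                             # rule (4)
    ([("Missile", "p")], ("Weapon", "p")),                       # rule (5)
    ([("Enemy", "p", "America")], ("Hostile", "p")),             # rule (6)
]

def is_var(symbol):
    return symbol[0].islower()

def match(premise, fact, subst):
    """Extend `subst` so that premise matches fact, or return None."""
    if len(premise) != len(fact) or premise[0] != fact[0]:
        return None
    for p, f in zip(premise[1:], fact[1:]):
        if is_var(p):
            if subst.get(p, f) != f:
                return None
            subst = {**subst, p: f}
        elif p != f:
            return None
    return subst

def all_matches(premises, facts, subst=None):
    """Yield every substitution satisfying all premises against the facts."""
    subst = subst or {}
    if not premises:
        yield subst
        return
    for fact in facts:
        s = match(premises[0], fact, subst)
        if s is not None:
            yield from all_matches(premises[1:], facts, s)

def forward_chain(facts, rules):
    """Repeatedly add every fact inferable from satisfied premises, to fixpoint."""
    facts = set(facts)
    while True:
        new = {tuple(s.get(t, t) for t in conclusion)
               for premises, conclusion in rules
               for s in all_matches(premises, facts)}
        if new <= facts:
            return facts
        facts |= new

derived = forward_chain(facts, rules)
print(("Criminal", "Robert") in derived)   # True: the goal Criminal(Robert) is reached
```

The first iteration adds Sells(Robert, T1, A), Weapon(T1), and Hostile(A), and the second adds Criminal(Robert), mirroring Step-2 and Step-3 above.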
20 What is Resolution? Mention the steps for Resolution and also for converting FOL into Conjunctive Normal Form (CNF). Consider the following statements and perform the following conversions on them:
f. Convert to FOL.
g. Convert FOL to CNF.
h. Prove that "Is Someone Smiling?" using Resolution.
i. Draw the Resolution tree.
Statements:
● All people who are graduating are happy.
● All happy people smile. (APR 2023)
● Someone is graduating.
ANS
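The answer is not filled in above; a possible worked sketch follows, with predicate names (Graduating, Happy, Smile) and the Skolem constant C chosen for illustration:

```latex
% f. FOL:
\forall x\,\big(Graduating(x) \Rightarrow Happy(x)\big), \qquad
\forall x\,\big(Happy(x) \Rightarrow Smile(x)\big), \qquad
\exists x\, Graduating(x)

% g. CNF (eliminate implications, standardize variables apart, Skolemize
%    the existential with a new constant C, drop universal quantifiers):
\neg Graduating(x) \lor Happy(x), \qquad
\neg Happy(y) \lor Smile(y), \qquad
Graduating(C)

% h. Negate the goal "someone smiles" and resolve to the empty clause:
\neg Smile(z) \quad \text{(negated goal)}
% resolve with } \neg Happy(y) \lor Smile(y) \text{, unifier } \{z/y\}:
\neg Happy(y)
% resolve with } \neg Graduating(x) \lor Happy(x) \text{, unifier } \{y/x\}:
\neg Graduating(x)
% resolve with } Graduating(C) \text{, unifier } \{x/C\}:
\Box \quad \text{(empty clause)}
```

Deriving the empty clause shows the negated goal is unsatisfiable, so "someone is smiling" is proved; the resolution tree (part i) is this chain of three resolution steps with the empty clause at the root.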
Unit 5
1 What is planning? Explain STRIPS operators with suitable example. (NOV 2018)
OR
What is planning? Explain the need of planning (NOV 2019)
AN Planning is an important part of Artificial Intelligence which deals with the tasks and domains of
S a particular problem. Planning is considered the logical side of acting.
Everything we humans do is with a definite goal in mind, and all our actions are oriented
towards achieving that goal. Similarly, planning is also done for Artificial Intelligence.
For example, planning is required to reach a particular destination. It is necessary to find the
best route in planning, but the tasks to be done at a particular time and why they are done are
also very important.
That is why planning is considered the logical side of acting. In other words, planning is about
deciding the tasks to be performed by the artificial intelligence system and the system's
functioning under domain-independent conditions.
The Stanford Research Institute Problem Solver (STRIPS) is an automated planning technique
that works by executing a domain and problem to find a goal. With STRIPS, you first describe
the world by providing objects, actions, preconditions, and effects. These describe all the
types of things you can do in the world.
Once the world is described, you then provide a problem set. A problem consists of an initial
state and a goal condition. STRIPS can then search all possible states, starting from the initial
one and executing various actions, until it reaches the goal.
A common language for writing STRIPS domain and problem sets is the Planning Domain
Definition Language (PDDL). PDDL lets you write most of the code with English words, so that it
can be clearly read and (hopefully) well understood. It is a relatively easy approach to writing
simple AI planning problems.
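To make the domain/problem split concrete, here is a minimal PDDL sketch (the domain name, predicates, and locations are illustrative assumptions, not from the original text):

```pddl
;; A tiny untyped STRIPS-style domain and problem in PDDL.
(define (domain briefcase)
  (:predicates (at ?loc))
  (:action move
    :parameters (?from ?to)
    :precondition (at ?from)
    :effect (and (at ?to) (not (at ?from)))))

(define (problem go-home)
  (:domain briefcase)
  (:init (at office))
  (:goal (at home)))
```

The domain lists what can be done (actions with preconditions and effects); the problem supplies an initial state and a goal, exactly as described above.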
2 Explain in brief about partially ordered plan. (NOV 2018)
AN A partial-order plan is a partially ordered collection of steps, where:
S • the Start step has the initial state description as its effect;
• the Finish step has the goal description as its precondition;
• causal links run from the outcome of one step to the precondition of another;
• temporal orderings hold between pairs of steps.
An open condition is a precondition of a step not yet causally linked. A plan is complete iff
every precondition is achieved. A precondition is achieved iff it is the effect of an earlier step
and no possibly intervening step undoes it.
A linearization of a partial-order plan is a total-order plan derived from that partial-order
plan; in other words, both plans consist of the same actions, with the order in the
linearization being a linear extension of the partial order in the original plan.
Example
For example, a plan for baking a cake might start:
go to the store
get eggs; get flour; get milk
pay for all goods
go to the kitchen
This is a partial plan because the order for finding eggs, flour and milk is not specified; the
agent can wander around the store reactively, accumulating all the items on its shopping list
until the list is complete.
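Linearizing this plan means picking any total order consistent with the ordering constraints. A small Python sketch using a topological sort (the step names follow the cake plan above; the encoding is an illustrative assumption):

```python
# Any topological order of the ordering constraints is a valid total-order plan.
from graphlib import TopologicalSorter  # Python 3.9+

order = {                         # step -> steps that must come before it
    "get eggs":  {"go to the store"},
    "get flour": {"go to the store"},
    "get milk":  {"go to the store"},
    "pay for all goods": {"get eggs", "get flour", "get milk"},
    "go to the kitchen": {"pay for all goods"},
}

plan = list(TopologicalSorter(order).static_order())
print(plan)  # one valid linearization; eggs/flour/milk may appear in any order
```

Because eggs, flour and milk are unordered relative to each other, several distinct linearizations exist, all of which are correct total-order plans.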
3 Explain in brief about hierarchical planning. ( NOV 2018)
OR
Explain hierarchical planning. (NOV 2022)
AN • Hierarchical organization of actions
S • Complex and less complex actions
• The lowest level reflects directly executable actions
• Plans are organized in a hierarchy
• Planning starts with a complex action on top
• The plan is constructed through action decomposition
• A complex action is substituted with a plan of less complex actions
• An idea that pervades almost all attempts to manage complexity.
• For example, complex software is created from a hierarchy of subroutines or object
classes; armies operate as a hierarchy of units; governments and corporations have
hierarchies of departments, subsidiaries, and branch offices.
• The key benefit of hierarchical structure is that, at each level of the hierarchy, a
computational task, military mission, or administrative function is reduced to a small
number of activities at the next lower level, so the computational cost of finding the
correct way to arrange those activities for the current problem is small.
High-level actions
Hierarchical task networks or HTN planning
- At each level there are only a small number of individual planning actions; the planner
then descends to lower levels to "solve these" for real.
- At higher levels, the planner ignores the "internal effects" of decompositions, but these
have to be resolved at some level.
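Action decomposition can be sketched in a few lines. The task names and the decomposition table below are illustrative assumptions, showing only the top-down expansion idea:

```python
# A toy hierarchical planner: complex actions decompose into less complex ones
# until only directly executable (primitive) actions remain.
methods = {
    "deliver package": ["get package", "travel to customer", "hand over"],
    "get package":     ["go to depot", "pick up package"],
}
primitive = {"go to depot", "pick up package", "travel to customer", "hand over"}

def decompose(task):
    """Expand a task top-down into a flat list of primitive actions."""
    if task in primitive:
        return [task]
    plan = []
    for sub in methods[task]:
        plan.extend(decompose(sub))
    return plan

print(decompose("deliver package"))
# ['go to depot', 'pick up package', 'travel to customer', 'hand over']
```

Planning starts with the complex action on top ("deliver package") and recursively substitutes each complex action with its method until the plan contains only directly executable actions.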
AN There are mainly four ways of knowledge representation. One of them is semantic network
S representation.
Semantic networks are an alternative to predicate logic for knowledge representation. In
semantic networks, we can represent our knowledge in the form of graphical networks. Such a
network consists of nodes representing objects and arcs which describe the relationships
between those objects. Semantic networks can categorize objects in different forms and can
also link those objects. Semantic networks are easy to understand and can be easily extended.
Statements:
a. Jerry is a cat.
b. Jerry is a mammal.
c. Jerry is owned by Priya.
d. Jerry is brown colored.
e. All Mammals are animal.
AN
S
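The original answer is a network diagram; the same network can be sketched as labelled arcs in Python (the relation labels are illustrative assumptions):

```python
# The five statements above as a tiny semantic network:
# nodes are objects/categories, labelled arcs are relations between them.
edges = [
    ("Jerry",  "is-a",      "Cat"),     # a. Jerry is a cat
    ("Jerry",  "is-a",      "Mammal"),  # b. Jerry is a mammal
    ("Jerry",  "owned-by",  "Priya"),   # c. Jerry is owned by Priya
    ("Jerry",  "has-color", "Brown"),   # d. Jerry is brown colored
    ("Mammal", "is-a",      "Animal"),  # e. all mammals are animals
]

def related(node, relation):
    """Follow all arcs with the given label leaving a node."""
    return {dst for src, rel, dst in edges if src == node and rel == relation}

print(related("Jerry", "is-a"))   # {'Cat', 'Mammal'}
print(related("Mammal", "is-a"))  # {'Animal'}
```

Chaining the "is-a" arcs (Jerry → Mammal → Animal) is exactly the kind of inheritance inference semantic networks are used for.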
These events can be Google Calendar events, a medical alarm, dates on a dating website, or,
in the case of GenRush, events that indicate new potential clients for a company based on
events in the real world.
These event-understanding problems are classic classification and regression requirements
hiding behind the complexity of the word "event". For sequence prediction, we use
LSTM/GRU/RNN, and sometimes a DNN (e.g. when the sequence of events forms a pattern
you can graph or see). But let's focus in this article on the non-sequence and non-location
part of event processing. Let's look in some detail at how to turn an event into a set of
features that an AI can use.
So, we have a whole lot of events, and we want the AI to notify human users when specific
events happen. Immediately, a classic false-positive versus false-negative conundrum
emerges. I saw a lot of this in hospital/medical notification systems back when I was working
on medical devices.
If the AI notifies incorrectly too often (false positives), then the users will ignore the
notifications. Whereas if the system misses key events, then users will think, and rightly so,
that the AI is not paying attention.
7 Write a PDDL description of an air cargo transportation planning problem (April 2019)
OR
Explain the Planning Domain Definition Language description for an Air Cargo planning
problem. (NOV 2022)
AN PDDL
S Each state is represented as a conjunction of fluents that are ground, functionless atoms.
For example, a state in a package delivery problem might be
At(Truck1, Melbourne) ∧ At(Truck2, Sydney).
Actions are described by a set of action schemas that implicitly define the ACTIONS(s)
and RESULT(s, a) functions needed to do a problem-solving search.
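For the air cargo problem itself, one action schema in PDDL-style syntax might look like the following sketch, following the standard textbook air-cargo formulation (the Load and Unload schemas follow the same pattern):

```pddl
;; Fly a plane ?p from one airport to another.
(:action Fly
  :parameters (?p ?from ?to)
  :precondition (and (Plane ?p) (Airport ?from) (Airport ?to) (At ?p ?from))
  :effect (and (not (At ?p ?from)) (At ?p ?to)))
```

Each schema names its parameters, states the precondition that must hold, and lists the effects: fluents added and (via `not`) deleted from the state.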
Si contains all the literals that could hold at time i, depending on the actions executed at
preceding time steps. If it is possible that either P or ¬P could hold, then both will be
represented in Si.
Ai contains all the actions that could have their preconditions satisfied at time i.
Planning graphs work only for propositional planning problems, i.e. ones with no variables.
The "have cake and eat cake too" problem
Each action at level Ai is connected to its preconditions at Si and its effects at Si+1.
- So a literal appears because an action caused it, but we also want to say that a literal can
persist if no action negates it. This is represented by a persistence action (sometimes called
a no-op).
- For every literal C, we add to the problem a persistence action with precondition C and
effect C.
- Level A0 in the figure shows one "real" action, Eat(Cake), along with two persistence
actions drawn as small square boxes.
Level A0 contains all the actions that could occur in state S0, but, just as important, it records
conflicts between actions that would prevent them from occurring together.
The gray lines in the figure indicate mutual exclusion (or mutex) links.
For example, Eat(Cake) is mutually exclusive with the persistence of either Have(Cake)
or ¬Eaten(Cake).
We shall see shortly how mutex links are computed.
Level S1 contains all the literals that could result from picking any subset of the actions in A0,
as well as mutex links (gray lines) indicating pairs of literals that could not appear together,
regardless of the choice of actions.
For example, Have(Cake) and Eaten(Cake) are mutex: depending on the choice of actions in
A0, either, but not both, could be the result. In other words, S1 represents a belief state: a
set of possible states.
The members of this set are all subsets of the literals such that there is no mutex link between
any members of the subset.
∀x: penguin(x) → bird(x)
∀x: penguin(x) → ¬flies(x)
We cannot conclude
flies(tweety)
since we cannot prove
¬abnormal(tweety).
Default Logic:
Default logic is a non-monotonic logic proposed by Raymond Reiter to formalize reasoning
with default assumptions.
Default logic can express facts like "by default, something is true"; by contrast, standard logic
can only express that something is true or that something is false. This is a problem because
reasoning often involves facts that are true in the majority of cases but not always. A classical
example is: "birds typically fly". This rule can be expressed in standard logic either by "all birds
fly", which is inconsistent with the fact that penguins do not fly, or by "all birds that are not
penguins and not ostriches and ... fly", which requires all exceptions to the rule to be
specified. Default logic aims at formalizing inference rules like this one without explicitly
mentioning all their exceptions.
A default theory is a pair (W, D). W is a set of logical formulas, called the background theory,
that formalize the facts that are known for sure. D is a set of default rules, each one being of
the form:
Prerequisite : Justification1, ..., Justificationn / Conclusion
According to this default, if we believe that Prerequisite is true, and each Justificationi, for
i = 1, ..., n, is consistent with our current beliefs, we are led to believe that Conclusion is true.
The logical formulae in W and all formulae in a default were originally assumed to be
first-order logic formulae, but they can potentially be formulae in an arbitrary formal logic.
The case in which they are formulae in propositional logic is one of the most studied.
AN Description logics (DLs) are a family of formal knowledge representation languages. Many DLs
S are more expressive than propositional logic but less expressive than first-order logic. In
contrast to the latter, the core reasoning problems for DLs are (usually) decidable, and
efficient decision procedures have been designed and implemented for these problems.
There are general, spatial, temporal, spatiotemporal, and fuzzy description logics, and each
description logic features a different balance between expressive power and reasoning
complexity by supporting different sets of mathematical constructors.
DLs are used in artificial intelligence to describe and reason about the relevant concepts of an
application domain (known as terminological knowledge). They are of particular importance
in providing a logical formalism for ontologies and the Semantic Web: the Web Ontology
Language (OWL) and its profiles are based on DLs. The most notable application of DLs and
OWL is in biomedical informatics, where DLs assist in the codification of biomedical
knowledge.
12 Explain the block world problem for the following start state and end state. (NOV 2019)
AN
S
Our aim is to change the configuration of the blocks from the Initial State to the Goal State,
both of which have been specified in the diagram above.
Given below is the list of predicates as well as their intended meanings:
ON(A,B) : Block A is on B
ONTABLE(A) : A is on the table
CLEAR(A) : Nothing is on top of A
HOLDING(A) : The arm is holding A
ARMEMPTY : The arm is holding nothing
Using these predicates, we represent the Initial State and the Goal State in our example like
this:
All four operations have certain preconditions which need to be satisfied before they can be
performed. These preconditions are represented in the form of predicates.
The effect of these operations is represented using two lists, ADD and DELETE. The DELETE
list contains the predicates which will cease to be true once the operation is performed. The
ADD list, on the other hand, contains the predicates which will become true once the
operation is performed.
The Precondition, Add and Delete lists for each operation are rather intuitive and are listed
below.
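The four operators with their lists can be written compactly in Python. This follows the standard block-world STRIPS formulation (the dict encoding is an illustrative assumption):

```python
# The four block-world operators with their Precondition, Add and Delete lists.
def stack(A, B):       # put held block A onto block B
    return {"pre":    [f"CLEAR({B})", f"HOLDING({A})"],
            "add":    ["ARMEMPTY", f"ON({A},{B})"],
            "delete": [f"CLEAR({B})", f"HOLDING({A})"]}

def unstack(A, B):     # pick block A up off block B
    return {"pre":    ["ARMEMPTY", f"ON({A},{B})", f"CLEAR({A})"],
            "add":    [f"HOLDING({A})", f"CLEAR({B})"],
            "delete": ["ARMEMPTY", f"ON({A},{B})"]}

def pickup(A):         # pick block A up off the table
    return {"pre":    ["ARMEMPTY", f"ONTABLE({A})", f"CLEAR({A})"],
            "add":    [f"HOLDING({A})"],
            "delete": ["ARMEMPTY", f"ONTABLE({A})"]}

def putdown(A):        # put held block A down on the table
    return {"pre":    [f"HOLDING({A})"],
            "add":    ["ARMEMPTY", f"ONTABLE({A})"],
            "delete": [f"HOLDING({A})"]}

print(stack("A", "B")["add"])  # ['ARMEMPTY', 'ON(A,B)']
```

Applying an operator means checking its "pre" list against the current state, then removing the "delete" predicates and adding the "add" predicates.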
14 Describe the Forward (Progression) State-Space Search algorithm with an example (NOV 2022)
AN Search from the initial state through the space of states, looking for a goal.
S 1) Forward search is prone to exploring irrelevant actions.
Consider the noble task of buying a copy of AI: A Modern Approach from an online bookseller.
Suppose there is an action schema Buy(isbn) with effect Own(isbn).
ISBNs are 10 digits, so this action schema represents 10 billion ground actions. An uninformed
forward-search algorithm would have to start enumerating these 10 billion actions to find one
that leads to the goal.
2) Planning problems often have large state spaces.
Consider an air cargo problem with 10 airports, where each airport has 5 planes and 20 pieces
of cargo. The goal is to move all the cargo at airport A to airport B.
There is a simple solution to the problem: load the 20 pieces of cargo into one of the planes at
A, fly the plane to B, and unload the cargo.
Finding the solution can be difficult because the average branching factor is huge: each of the
50 planes can fly to 9 other airports, and each of the 200 packages can be either unloaded (if
it is loaded) or loaded into any plane at its airport (if it is unloaded).
So, in any state there is a minimum of 450 actions (when all the packages are at airports with
no planes) and a maximum of 10,450 (when all packages and planes are at the same airport).
On average, let's say there are about 2000 possible actions per state, so the search graph up
to the depth of the obvious solution has about 2000^41 nodes.
If the agent doesn't know where it is, we can use belief states (sets of states that the agent
might be in), as in the deterministic, static, single-agent vacuum world.
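The progression search itself can be sketched as plain breadth-first search over states. Here is a minimal sketch for a one-plane, one-package version of the air cargo problem (states are sets of ground fluents; the action encoding is an illustrative assumption):

```python
# Forward (progression) state-space search as breadth-first search.
from collections import deque

# Each action: (name, preconditions, add list, delete list), all sets of fluents.
actions = [
    ("Load",   {"At(C1,A)", "At(P1,A)"},  {"In(C1,P1)"}, {"At(C1,A)"}),
    ("Fly",    {"At(P1,A)"},              {"At(P1,B)"},  {"At(P1,A)"}),
    ("Unload", {"In(C1,P1)", "At(P1,B)"}, {"At(C1,B)"},  {"In(C1,P1)"}),
]
initial = frozenset({"At(C1,A)", "At(P1,A)"})
goal = {"At(C1,B)"}

def forward_search(initial, goal, actions):
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                       # goal fluents all hold
            return plan
        for name, pre, add, dele in actions:
            if pre <= state:                    # action applicable
                nxt = frozenset((state - dele) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

print(forward_search(initial, goal, actions))  # ['Load', 'Fly', 'Unload']
```

The `seen` set prevents revisiting states; without a heuristic this is exactly the uninformed search whose branching-factor problems are described above.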
16 Explain two different types of algorithms for planning as a state space search (APR 2023)
AN Forward (progression) state-space search
S Search from the initial state through the space of states, looking for a goal.
1) Forward search is prone to exploring irrelevant actions.
Consider the noble task of buying a copy of AI: A Modern Approach from an online bookseller.
Suppose there is an action schema Buy(isbn) with effect Own(isbn).
ISBNs are 10 digits, so this action schema represents 10 billion ground actions. An uninformed
forward-search algorithm would have to start enumerating these 10 billion actions to find one
that leads to the goal.
2) Planning problems often have large state spaces.
Consider an air cargo problem with 10 airports, where each airport has 5 planes and 20 pieces
of cargo. The goal is to move all the cargo at airport A to airport B.
There is a simple solution to the problem: load the 20 pieces of cargo into one of the planes at
A, fly the plane to B, and unload the cargo.
Finding the solution can be difficult because the average branching factor is huge: each of the
50 planes can fly to 9 other airports, and each of the 200 packages can be either unloaded (if
it is loaded) or loaded into any plane at its airport (if it is unloaded).
Backward (regression) relevant-states search
Search backward from the goal, considering only actions that are relevant to achieving it.
To get the full advantage of backward search, we need to deal with partially uninstantiated
actions and states, not just ground ones. For example, suppose the goal is to deliver a specific
piece of cargo to SFO: At(C2, SFO).
That suggests the action Unload(C2, p', SFO):
Action(Unload(C2, p', SFO),
PRECOND: In(C2, p') ∧ At(p', SFO) ∧ Cargo(C2) ∧ Plane(p') ∧ Airport(SFO)
EFFECT: At(C2, SFO) ∧ ¬In(C2, p'))
17 Explain various types of planning methods for handling indeterminacy (APR 2023)
AN • Planning so far does not specify how long an action takes or when an action
S occurs, except to say that it is before or after another action.
• When used in the real world, such as scheduling Hubble Space Telescope
observations, time is also a resource/constraint.
• Job shop scheduling: time is essential.
• An example is given in Figures 12.1 and 12.2:
a partial-order plan (with durations);
the critical path (or the weakest link);
Slack = LS (latest start) − ES (earliest start);
Schedule = plan + time (durations for actions).
• Scheduling with resource constraints: when certain parts are not available,
waiting time should be minimized.
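ES, LS and slack can be computed with a forward and a backward pass over the precedence graph. A sketch in the spirit of the two-car job-shop example (the task names and durations are illustrative assumptions):

```python
# Earliest start (ES), latest start (LS) and slack for a small job-shop problem.
durations = {"AddEngine1": 30, "AddWheels1": 30, "Inspect1": 10,
             "AddEngine2": 60, "AddWheels2": 15, "Inspect2": 10}
preds = {"AddWheels1": ["AddEngine1"], "Inspect1": ["AddWheels1"],
         "AddWheels2": ["AddEngine2"], "Inspect2": ["AddWheels2"]}

topo = ["AddEngine1", "AddEngine2", "AddWheels1", "AddWheels2",
        "Inspect1", "Inspect2"]            # a precedence-respecting order

# Forward pass: ES = latest finish among predecessors.
es = {}
for t in topo:
    es[t] = max((es[p] + durations[p] for p in preds.get(t, [])), default=0)
finish = max(es[t] + durations[t] for t in durations)   # project duration

# Backward pass: LS = (earliest LS among successors, or finish) - own duration.
ls = {}
for t in reversed(topo):
    succs = [s for s in preds if t in preds[s]]
    ls[t] = min((ls[s] for s in succs), default=finish) - durations[t]

slack = {t: ls[t] - es[t] for t in durations}
print(finish, slack["AddEngine2"], slack["AddEngine1"])  # 85 0 15
```

Tasks with slack 0 (here the 60+15+10 chain for car 2) form the critical path; the 30+30+10 chain for car 1 can slip by 15 without delaying the project.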
18 Explain multi-agent planning with its different types of strategies (APR 2023)
AN • Multiagent planning problem: When there are multiple agents in the
S environment, each agent faces a multiagent planning problem in which it tries
to achieve its own goals with the help or hindrance of others.
• Multieffector planning: An agent with multiple effectors that can operate
concurrently, for example a human who can type and speak at the same
time, needs to do multieffector planning to manage each effector while
handling positive and negative interactions among the effectors.
• Multibody planning: When the effectors are physically decoupled into detached
units, as in a fleet of delivery robots in a factory, multieffector planning
becomes multibody planning; for example, a fleet/squadron of reconnaissance
robots that are sometimes out of communications range.
• Centralized planning: one agent is in charge of all decisions.
For example, a delivery company may do centralized, offline planning for the routes of
its trucks and planes each day, but leave some aspects open for autonomous decisions
by drivers and pilots, who can respond individually to traffic and weather situations.
Planning with multiple simultaneous actions
The terminology is multiactor settings.
We merge aspects of the multieffector, multibody, and multiagent paradigms, then
consider issues related to transition models, correctness of plans, and the
efficiency/complexity of planning algorithms.
Planning with multiple agents: cooperation and coordination
Each agent formulates its own plan, but based on shared goals and a shared knowledge
base.
Mental events refer to any experience or occurrence that takes place within an individual's
mind. This includes sensations, perceptions, thoughts, emotions and other conscious
experiences. Mental events are subjective and can only be directly observed by the individual
experiencing them.
Mental objects, on the other hand, are the entities that mental events are directed at or
about. Mental objects can include anything that can be thought about, such as ideas, beliefs,
memories or perceptions of external objects. They are the content or subject matter of
mental events.
For example, when you read a sentence, the mental event might be the perception of the
words on the page, while the mental object might be the meaning that you derive from those
words.
20 What are reasoning systems for categories? Explain Semantic Nets and mention their
advantages and disadvantages (APR 2023)
AN Reasoning is the mental process of deriving logical conclusions and making predictions from
S available knowledge, facts, and beliefs. Or we can say, "Reasoning is a way to infer facts from
existing data." It is the general process of thinking rationally to find valid conclusions.
In artificial intelligence, reasoning is essential so that the machine can also think rationally,
like a human brain, and can perform like a human.
Types of Reasoning
In artificial intelligence, reasoning can be divided into the following categories:
o Deductive reasoning
o Inductive reasoning
o Abductive reasoning
o Common Sense Reasoning
o Monotonic Reasoning
o Non-monotonic Reasoning
These networks are not intelligent and depend on the creator of the system.
Advantages of semantic networks:
Semantic networks are a natural representation of knowledge.
Semantic networks convey meaning in a transparent manner.
These networks are simple and easily understandable.