IT504B Artificial Intelligence
COURSE OBJECTIVES:
IT505B 1: Apply knowledge of computing and mathematics appropriate
to the discipline.
IT505B 2: Analyze a problem, and identify and define the computing
requirements appropriate to its solution.
IT505B 3: Design, implement, and evaluate a computer-based system,
process, component, or program to meet desired needs.
IT505B 4: Understand current techniques, skills, and tools necessary for
computing practice.
COURSE OUTCOME:
IT505B 1: Understand different types of AI agents and Tools.
IT505B 2: Know various AI search algorithms (uninformed, informed,
heuristic, constraint satisfaction).
IT505B 3: Understand the fundamentals of knowledge representation
(logic-based, frame-based, semantic nets), inference and theorem proving.
IT505B 4: Demonstrate working knowledge of reasoning in the presence
of incomplete and/or uncertain information.
IT505B 5: Ability to apply knowledge representation, reasoning, and
machine learning techniques to real-world problems.
Prerequisite:
1. Basic concepts of computer science and automation.
2. Knowledge of an introductory programming language such as C or C++.
3. Knowledge of a high-level programming language such as Python or R.
4. Basic mathematical concepts such as calculus, probability, matrices, and
statistics.
Module I
1.1 Introduction to Artificial Intelligence
McCarthy coined the term AI in 1956. Today AI is a buzzword in every field of
study, and researchers are working intensively to incorporate AI concepts into
recent applications. Industry now demands young professionals with a basic
knowledge of AI. The basis of AI is the human brain: how it acquires data, and
how that data is processed to produce information and to make decisions. This
philosophy must be analyzed mathematically so that the same concept can be
mimicked in a machine. The machine then acquires machine intelligence, or
Artificial Intelligence, and can serve like a human being. Thus AI is the guide
to the future of Information Technology.
1.1.1 Definition of AI:
Artificial Intelligence (AI) is the technique of creating machines that act
intelligently, like a smart human being.
The term ‘artificially intelligent’ refers to the behavior of a machine that
acts rationally; the intelligent behavior is acquired through intelligent
programming.
The science of AI could be described as "synthetic psychology," "experimental
philosophy," or "computational epistemology"; epistemology is the study of
knowledge.
Finally, AI is concerned with the study of building machines/computers that
simulate human behavior. Traditional computing (numeric information processed
by an algorithm) cannot handle symbolic information or heuristic processing,
but AI is capable of both.
1.1.2 Goals of AI
a) To Create Expert Systems − systems which exhibit intelligent
behavior: they learn, demonstrate, explain, and advise their users. The first
expert system was MYCIN, developed at Stanford University using LISP.
b) To Implement Human Intelligence in Machines − Creating systems that
understand, think, learn, and behave like humans.
1.1.3 AI technique
AI technique is a method that exploits knowledge that should be represented in
such a way that:
a) The knowledge captures generalizations.
b) It can be understood by people who must provide it.
c) It can easily be modified to correct errors
d) It can be used in a great many situations even if it is not entirely
accurate.
Different AI techniques are:
a) Natural Language Processing: techniques for speech synthesis
b) Machine Learning in Medicine: multilayer perceptrons, Bayesian
networks
c) Cognitive Modelling: artificial neural networks
d) Knowledge-Based Systems: intelligent information visualization,
integration of information technology with telecommunication
1.2 Foundations and History of Artificial Intelligence
About 400 years ago people started to write about the nature of thought and
reason. Hobbes (1588-1679), who has been described by Haugeland (1985), p. 85
as the "Grandfather of AI," espoused the position that thinking was symbolic
reasoning like talking out loud or working out an answer with pen and paper. The
idea of symbolic reasoning was further developed by Descartes (1596-1650),
Pascal (1623-1662), Spinoza (1632-1677), Leibniz (1646-1716), and others who
were pioneers in the philosophy of mind.
The idea of symbolic operations became more concrete with the development of
computers. The first general-purpose computer designed (but not built until 1991,
at the Science Museum of London) was the Analytical Engine by Babbage (1792-
1871). In the early part of the 20th century, there was much work done on
understanding computation. Several models of computation were proposed,
including the Turing machine by Alan Turing (1912-1954), a theoretical machine
that writes symbols on an infinitely long tape, and the lambda calculus of Church
(1903-1995), which is a mathematical formalism for rewriting formulas. It can be
shown that these very different formalisms are equivalent in that any function
computable by one is computable by the others. This leads to the Church-Turing
thesis:
Any effectively computable function can be carried out on a Turing machine (and
so also in the lambda calculus or any of the other equivalent formalisms).
1.3 Turing test:
The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability
to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a
human.
Fig.1.1 Turing Test
A ‘human interrogator’ acting as judge engages in a natural-language
conversation with two other parties, one a ‘human’ and the other an ‘AI
system (machine)’; if the judge cannot reliably tell which is which, the
machine is said to pass the test. It is assumed that both the human and the
machine try to appear human. The conclusion of the test is that the machine
can think.
So, AI is a system which can think and act humanly as well as rationally.
1.4 Intelligent Agents and Environment
An agent is defined as anything that perceives its environment through sensors
and acts upon that environment through actuators (effectors). Intelligent
programming controls the behavior of the agent.
Fig.1.2 Agent system/ architecture
An Intelligent Agent (IA) is an agent that acts rationally, that is, one that
does the right thing at the right moment with a good sense of judgment.
1.5 The Structure of Intelligent Agents
An agent’s structure can be viewed as:
a) Agent = Architecture + Agent Program
b) Architecture = the machinery that the agent executes on.
c) Agent Program = an implementation of an agent function.
The agent architecture is the hardware upon which the program runs. This
system may be physical, or may be virtual as in the case of software agents
(softbots), which live in a world of software.
1.6 Types of Agent
1.6.1 Human agent − perceives the environment through eyes, ears, and other
organs as sensors, and acts upon that environment through hands, legs, mouth,
and other body parts as actuators.
1.6.2 Robotic agent − perceives the environment through cameras and infrared
range finders as sensors, and acts upon that environment through various
motors as actuators.
1.6.3 Software agent − perceives the environment through input data as
sensors, and acts upon that environment through output signals via actuators.
1.7 Basic types of agent in order of increasing generalization
1.7.1 Simple Reflex Agents
They choose actions based only on the current percept. They are rational only
if a correct decision can be made on the basis of the current percept alone.
Their environment must be fully observable, and they respond immediately to
individual percepts.
Condition-Action Rule − It is a rule that maps a state (condition) to an action.
Fig.1.3 Simple Reflex Agent environment
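The condition-action rule above can be sketched in Python for a hypothetical two-location vacuum world (the locations 'A'/'B', the percepts, and the actions are illustrative assumptions, not part of the text):

```python
# A minimal simple reflex agent for a two-location vacuum world.
# The agent acts on the current percept alone, via condition-action rules.

def simple_reflex_vacuum_agent(percept):
    """percept is a (location, status) pair, e.g. ('A', 'Dirty')."""
    location, status = percept
    if status == 'Dirty':        # condition: current square is dirty
        return 'Suck'            # action
    if location == 'A':          # condition: clean and at location A
        return 'Right'
    return 'Left'                # condition: clean and at location B

print(simple_reflex_vacuum_agent(('A', 'Dirty')))  # Suck
print(simple_reflex_vacuum_agent(('A', 'Clean')))  # Right
```

Note that the agent keeps no history: the same percept always produces the same action, which is exactly the limitation discussed above.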
1.7.2 Model Based Reflex Agents
They use a model of the world to choose their actions and maintain an internal
state: knowledge about how things happen in the world.
The internal state is a representation of unobserved aspects of the current
state, based on the percept history.
Updating the state requires information about how the world evolves and how
the agent’s actions affect the world.
Fig.1.4 Model Based Reflex Agent environment
1.7.3 Goal Based Agents
They choose their actions in order to achieve goals. The goal-based approach
is more flexible than the reflex agent, since the knowledge supporting a
decision is explicitly modeled, thereby allowing for modifications. The agent
acts to achieve its goals.
A goal is a description of a desirable situation.
Fig.1.5 Goal Based Agent environment
1.7.4 Utility Based Agents
They choose actions based on a preference (utility) for each state. Goals are
inadequate when there are conflicting goals, of which only a few can be
achieved, or when goals have some uncertainty of being achieved and the
likelihood of success must be weighed against the importance of each goal.
Such agents act to maximize their "happiness".
Fig.1.6 Utility Based Agent environment
1.7.5 Table Driven Agent
Table-driven agents respond to the percept sequence seen so far. The advantage
of a table-driven agent is that it does what we want: it implements the
desired agent function.
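The table-driven idea can be sketched as a lookup from the percept sequence seen so far to an action; the table entries below are hypothetical illustrations:

```python
# A table-driven agent: the table maps each possible percept sequence
# (everything seen so far) to an action.

def make_table_driven_agent(table):
    percepts = []                        # remembered percept sequence
    def agent(percept):
        percepts.append(percept)
        # look up the whole sequence; 'NoOp' if the table has no entry
        return table.get(tuple(percepts), 'NoOp')
    return agent

# Hypothetical table for the first two percepts
table = {
    ('dirty',): 'Suck',
    ('dirty', 'clean'): 'Right',
}
agent = make_table_driven_agent(table)
print(agent('dirty'))   # Suck
print(agent('clean'))   # Right
```

The obvious drawback, implicit above, is that the table must contain an entry for every possible percept sequence, which grows explosively.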
1.8 Rational Agent
A rational agent always performs the right action, where the right action is
the one that causes the agent to be most successful for the given percept
sequence.
1.8.1 Rationality
Rationality is nothing but status of being reasonable, sensible, and having good
sense of judgment. Rationality is concerned with expected actions and results
depending upon what the agent has perceived. Performing actions with the aim of
obtaining useful information is an important part of rationality.
1.8.2 Ideal Rational Agent
An ideal rational agent is the one, which is capable of doing expected actions to
maximize its performance measure, on the basis of (a) its percept sequence, and
(b) its built-in knowledge base. Rationality of an agent depends on the following
four factors:
(a) The performance measures, which determine the degree of success.
(b) Agent’s Percept Sequence till now.
(c) The agent’s prior knowledge about the environment.
(d) The actions that the agent can carry out.
The problem the agent solves is characterized by its Performance measure,
Environment, Actuators, and Sensors (PEAS).
1.9 Production System (or production rule system)
A production system is defined as a set of rules, coded in a computer program,
used to provide some form of artificial intelligence. The rules encode
knowledge about human behavior.
Fig.1.7 Architecture of a production system
1.9.1 Components of production system
A production system consists of the following components:
a) A set of rules,
b) Working memory with database,
c) A control strategy with rule memory and
d) A rule applier /interpreter.
1.9.2 Types of production system
There are four types of production system:
1. Non-monotonic production system − when several rules are applicable at
a time, any one of them may be selected at the current time. If the
applied rule does not lead to a solution, this system does not allow
the remaining rules to be applied.
2. Monotonic production system − when several rules are applicable at a
time, any one of them may be selected at the current time. If the
applied rule does not lead to a solution, this system allows the
remaining rules to be applied.
3. Partially commutative production system − a system in which, if the
application of a particular sequence of rules transforms state X into
state Y, then any allowable permutation of those rules also transforms
state X into state Y.
4. Commutative production system − a system with the properties of both
monotonic and partially commutative systems.
1.9.3 Conflict resolution in Production System
This strategy is used for choosing which production rule to fire. The need for
such a strategy arises when the conditions of two or more rules are satisfied
by the currently known facts. Conflict resolution strategies fall into several
categories:
1. Refractoriness: Do not allow a rule to fire twice on the same data.
2. Recency: Take the data, which arrived in working memory most
recently, and find a rule that uses this data.
3. Specificity: Use the most specific rule (the one with the most
conditions attached).
4. Order: Pick the first applicable rule in order of presentation.
5. Arbitrary choice: Pick a rule at random.
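Two of the strategies above, refractoriness and specificity, can be sketched in a toy production system; the animal-classification rules below are invented purely for illustration:

```python
# A toy production system: rules fire on facts held in working memory.
# Conflict resolution: specificity (prefer the rule with the most conditions),
# plus refractoriness (a rule never fires twice).

rules = [
    # (set of conditions, fact to add when the rule fires)
    ({'has_fur'}, 'mammal'),
    ({'has_fur', 'eats_meat'}, 'carnivore'),   # more specific rule
]

def run(facts, rules):
    facts = set(facts)                  # working memory
    fired = set()                       # refractoriness: fire each rule once
    while True:
        applicable = [i for i, (conds, _) in enumerate(rules)
                      if conds <= facts and i not in fired]
        if not applicable:
            return facts
        # specificity: pick the applicable rule with the most conditions
        i = max(applicable, key=lambda i: len(rules[i][0]))
        fired.add(i)
        facts.add(rules[i][1])

print(run({'has_fur', 'eats_meat'}, rules))
```

With both rules applicable, specificity fires the two-condition rule first; refractoriness then keeps the cycle from firing either rule again.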
1.9.4 Benefits of Production System
a) Production systems provide an excellent tool for structuring AI programs.
b) Production systems are highly modular because individual rules can
be added, removed, or modified independently.
c) The production rules are expressed in a natural form, so the statements
contained in the knowledge base record an expert’s thinking.
d) They are language independent.
1.10 Short questions and Answer
1. What is Agent? How many types of agents are there?
A: An agent is anything that can be viewed as perceiving and acting upon the
environment through the sensors and actuators.
The four types of agents are Simple reflex, Model based, Goal based and Utility
based agents.
2. What is percept sequence? [WBUT 2016 CSE] How does the agent work?
A: An agent’s percept sequence is the complete history of everything that the
agent has ever perceived.
An agent program implements a function mapping percepts to actions. When the
environment becomes more complex, the agent needs to plan and search for an
action sequence to achieve the goal.
3. How does the Simple reflex agent work? How can agent improve its
performance?
A: A simple reflex agent acts on the present condition, and so it follows the
condition-action rule.
An agent can improve its performance by storing its previous actions. This
process is also known as learning.
4. What is Problem generator?
A: The problem generator gives suggestions to improve the output of a learning
agent.
5. What is utility function?
A: A utility function maps a state onto a real number, which describes the
associated degree of happiness.
6. What is AI?
A: AI makes things work automatically through machines, without human effort;
the machine gives results from just the input provided by a human. That is,
the system or machine acts intelligently as required.
7. What do you mean by agent behavior?
A: An agent’s behavior is described by the agent function that maps any given
percept sequence to an action; this function can be implemented by an agent
program. The agent function is an abstract mathematical description; the agent
program is a concrete implementation, running on the agent architecture.
8. What is rational agent?
A: A rational agent is one who always does the right thing, right in the sense
that it makes the agent the most successful.
For each possible percept sequence, a rational agent should select an action that is
expected to maximize its performance measure, given the evidence provided by
the percept sequence and whatever built-in knowledge the agent has.
9. What is an omniscient agent? Is it possible in real world?
A: An omniscient agent knows the actual outcome of its actions and can act
accordingly; but omniscience is impossible in reality. Therefore, in reality,
Rational Agent is possible which always does the right thing.
10. What is PEAS?
A: The task environment of an agent is described by four parts: performance
measures, environment, actuators, and sensors, which are generally known as
the PEAS description.
11. State and Explain the PEAS attributes of a Medical Diagnostic System.
A: Performance measures, Environment, Actuators and Sensors are known as the
PEAS descriptions.
Performance measures attributes of a Medical Diagnostic System: Healthy
patient, minimize costs, lawsuits, etc.
Environment attributes of a Medical Diagnostic System: Patient, hospital, staff,
etc.
Actuators attributes of a Medical Diagnostic System: Screen display such as
questions, tests, diagnoses, treatments, referrals, etc.
Sensors attributes of a Medical Diagnostic System: Keyboard such as entry of
symptoms, findings, patient’s answers, etc.
Medical diagnosis is a forward reasoning strategy.
1.11 Multiple Choice Questions (MCQ)
1. Which instruments are used for perceiving and acting upon the environment?
a) Sensors and Actuators
b) Sensors
c) Perceiver
d) None of the mentioned

2. What is meant by agent’s percept sequence?
a) Used to perceive the environment
b) Complete history of actuator
c) Complete history of perceived things
d) Both a & b

3. How many types of agents are there in artificial intelligence?
a) 1
b) 2
c) 3
d) 4

4. What is the rule of simple reflex agent?
a) Simple-action rule
b) Condition-action rule
c) Both a & b
d) None of the mentioned

5. What is the composition of agents in artificial intelligence?
a) Program
b) Architecture
c) Both a & b
d) None of the mentioned

6. In which agent is the problem generator present?
a) Learning agent
b) Observing agent
c) Reflex agent
d) None of the mentioned

7. Which is used to improve the agent’s performance?
a) Perceiving
b) Learning
c) Observing
d) None of the mentioned

8. Which agent deals with happy and unhappy states?
a) Simple reflex agent
b) Model based agent
c) Learning agent
d) Utility based agent

9. Which action sequences are used to achieve the agent’s goal?
a) Search
b) Plan
c) Retrieve
d) Both a & b

10. What is Artificial intelligence?
a) Putting your intelligence into Computer
b) Programming with your own intelligence
c) Making a Machine intelligent
d) Playing a Game
e) Putting more memory into Computer

11. Which is not a commonly used programming language for AI?
a) PROLOG
b) Java
c) LISP
d) Perl

12. Artificial Intelligence has its expansion in the following application.
a) Planning and Scheduling
b) Game Playing
c) Diagnosis and Robotics
d) All of the above

13. What is the perception sequence of an agent?
a) A periodic input set
b) A complete history of everything the agent has ever perceived
c) Both a) and b)
d) None of the mentioned

14. An agent’s behavior can be best described by
a) Perception sequence
b) Agent function
c) Sensors and Actuators
d) Environment in which the agent is performing

15. A Satellite Image Analysis System is
a) Episodic
b) Semi-Static
c) Single agent
d) Partially Observable

16. An agent is composed of
a) Architecture
b) Agent Function
c) Perception Sequence
d) Architecture and Program

17. In which of the following agents is the problem generator present?
a) Learning agent
b) Observing agent
c) Reflex agent
d) None of the mentioned

18. The first expert system was [WBUT 2012 IT]
a) MYCIN
b) DENDRAL
c) EMYCIN
d) PROSPECTOR

19. AI is described as
a) Synthetic psychology
b) Experimental philosophy
c) Computational epistemology
d) All of these
1.12 MCQ Answer keys
1. a   2. c   3. d   4. b   5. c   6. a   7. b   8. d   9. d   10. c
11. d   12. d   13. c   14. b   15. d   16. d   17. d   18. a   19. d
1.13 Test Your Skills
1. What is Artificial Intelligence?
2. What are AI techniques? [WBUT 2013 IT]
3. What is epistemology?
4. Briefly explain Turing Test with example. [WBUT 2009 IT]
5. What is an agent? [WBUT 2016 CSE] What are the advantages of table
driven agent? [WBUT 2011 CSE]
6. Describe various types of agent. [WBUT 2016 CSE]
7. What is Rational Agent? How does it differ from an Omniscient Agent?
8. What is PEAS?
9. State and explain the PEAS attributes of a taxi driver.
10. State and explain different types of environment.
11. What is production system? [WBUT 2016 CSE] Explain conflict
resolution strategies. [WBUT 2010,11 IT]
12. State the components of Production System. [WBUT 2013 IT]
13. Discuss the benefit of production system. [WBUT 2010 CSE]
14. What is agent system? [WBUT 2016 CSE]
Module II
2.1 State Space Search
State space search characterizes problem solving as the process of finding a
solution path from the start state to a goal state.
2.2 Strategies for State Space Search
2.2.1 Data driven search (forward chaining)
It begins with the given facts and a set of legal moves or rules for changing the
state. Searches proceed by applying rules to facts to produce new facts, which are
in turn used by rules to generate more new facts. This process continues until it
generates a path that satisfies the goal condition.
Example: AND-OR Graph
It is an instance of forward chaining. Multiple links joined by an arc
indicate conjunction: every link must be proved. Multiple links without an arc
indicate disjunction: any link can be proved.
For each new piece of data, generate all new facts until the desired fact is
generated (data-directed reasoning).
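Data-driven search can be sketched as forward chaining over simple propositional rules; the facts A, B, C, D and the two rules below are hypothetical:

```python
# Forward (data-driven) chaining: repeatedly apply rules to known facts
# until the goal is derived or no new facts can be generated.

rules = [
    ({'A', 'B'}, 'C'),   # A AND B -> C (conjunction: all conditions needed)
    ({'C'}, 'D'),        # C -> D
]

def forward_chain(facts, rules, goal):
    facts = set(facts)
    changed = True
    while changed and goal not in facts:
        changed = False
        for conds, concl in rules:
            # a rule fires when all of its conditions are known facts
            if conds <= facts and concl not in facts:
                facts.add(concl)
                changed = True
    return goal in facts

print(forward_chain({'A', 'B'}, rules, 'D'))  # True
```

Starting from facts {A, B}, the first rule produces C and the second then produces D, exactly the facts-to-new-facts loop described above.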
2.2.2 Goal driven search (Backward chaining)
Begin with the goal, see what rules or legal moves could be used to generate
it, and determine what conditions must be true to use them. These conditions
become the new goals (sub-goals) for the search. This process continues,
working backward through successive sub-goals, until a path is generated that
leads back to the facts of the problem.
Example: To prove the goal, find a clause that contains the goal as its head, and
prove the body recursively (Goal-directed reasoning)
2.3 Uninformed search strategies
A search strategy is defined by picking the order of node expansion. Uninformed
search strategies use only the information available in the problem definition.
Uninformed search strategies are Breadth-first search, Depth-first search, Iterative
deepening search and Uniform-cost search.
2.3.1 Breadth First Search (BFS)
There are many ways to traverse graphs. BFS is the most commonly used
approach. BFS is a traversing algorithm where you should start traversing from a
selected node (source or starting node) and traverse the graph layer wise thus
exploring the neighbor nodes (nodes which are directly connected to source
node). You must then move towards the next-level neighbor nodes. As the name
BFS suggests, you are required to traverse the graph breadth wise as follows:
✓ First move horizontally and visit all the nodes of the current layer
✓ Move to the next layer.
✓ Repeat this process until the queue is empty.
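The three steps above can be sketched with a FIFO queue on a small adjacency-list graph (the graph itself is an invented example):

```python
# Breadth-first search on an adjacency-list graph using a FIFO queue.
from collections import deque

def bfs(graph, start):
    visited = [start]                 # visit order, layer by layer
    queue = deque([start])
    while queue:                      # repeat until the queue is empty
        node = queue.popleft()
        for nbr in graph[node]:       # explore this node's neighbours
            if nbr not in visited:
                visited.append(nbr)
                queue.append(nbr)
    return visited

# Hypothetical graph: A's neighbours are B and C, both lead to D
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(graph, 'A'))   # ['A', 'B', 'C', 'D']
```

Because the queue is first-in first-out, the whole layer {B, C} is visited before the next layer {D}, which is the breadth-wise order described above.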
2.3.2 Depth First Search (DFS)
The DFS algorithm is a recursive algorithm that uses the idea of backtracking. It
involves exhaustive searches of all the nodes by going ahead, if possible, else by
backtracking.
Here, the word backtrack means that when you are moving forward and there are
no more nodes along the current path, you move backwards on the same path to
find nodes to traverse. All the nodes on the current path are visited until
all unvisited nodes have been traversed, after which the next path is
selected.
This recursive nature of DFS can be implemented using stacks. The basic idea is
as follows:
✓ Pick a starting node and push all its adjacent nodes into a stack.
✓ Pop a node from stack to select the next node to visit and push all its
adjacent nodes into a stack.
✓ Repeat this process until the stack is empty. However, ensure that the
nodes that are visited are marked. This will prevent you from visiting the
same node more than once. If you do not mark the nodes that are visited
and you visit the same node more than once, you may end up in an infinite
loop.
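The stack-based idea, including the visited marking that prevents infinite loops, can be sketched as follows (the example graph is invented):

```python
# Depth-first search with an explicit stack; visited marking avoids
# processing the same node more than once.

def dfs(graph, start):
    visited = []
    stack = [start]
    while stack:                      # repeat until the stack is empty
        node = stack.pop()
        if node not in visited:       # mark nodes so none is visited twice
            visited.append(node)
            # push neighbours; reversed so they are popped in listed order
            stack.extend(reversed(graph[node]))
    return visited

# Hypothetical graph: A's neighbours are B and C, both lead to D
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(dfs(graph, 'A'))   # ['A', 'B', 'D', 'C']
```

Unlike BFS, the stack is last-in first-out, so the search plunges down one path (A, B, D) and only then backtracks to C.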
2.3.3 Iterative deepening search/ Iterative deepening DFS
IDDFS combines depth-first search’s space-efficiency and breadth-first search’s
fast search (for nodes closer to root).
How does IDDFS work?
IDDFS calls DFS for different depths starting from an initial value. In every call,
DFS is restricted from going beyond given depth. So basically we do DFS in a
BFS fashion.
To avoid the infinite depth problem of DFS, we can decide to only search until
depth L, i.e. we don’t expand beyond depth L. Hence the search is called Depth-
Limited Search.
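A minimal sketch of IDDFS, calling a depth-limited DFS for increasing cutoff values (the graph is a hypothetical acyclic example):

```python
# Iterative deepening DFS: depth-limited DFS with an increasing cutoff.

def depth_limited(graph, node, goal, limit):
    if node == goal:
        return True
    if limit == 0:                      # do not expand beyond the cutoff
        return False
    return any(depth_limited(graph, nbr, goal, limit - 1)
               for nbr in graph[node])

def iddfs(graph, start, goal, max_depth=10):
    for depth in range(max_depth + 1):  # call DFS for depths 0, 1, 2, ...
        if depth_limited(graph, start, goal, depth):
            return depth                # depth of the shallowest solution
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(iddfs(graph, 'A', 'D'))   # 2
```

Each iteration repeats the shallower work, but because the tree grows geometrically with depth, the repeated work is cheap relative to the deepest pass; the first depth at which the goal is found is also the shallowest, BFS-like solution depth.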
2.3.4 Uniform-cost search
In this search, expand the frontier node with the lowest path cost.
Implementation: frontier is a priority queue ordered by path cost.
This search is equivalent to breadth-first if step costs are all equal.
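A sketch using Python's heapq as the priority queue, on an invented weighted graph stored as (neighbour, step cost) pairs:

```python
# Uniform-cost search: the frontier is a priority queue ordered by path cost.
import heapq

def ucs(graph, start, goal):
    frontier = [(0, start)]            # (path cost g, node)
    explored = set()
    while frontier:
        cost, node = heapq.heappop(frontier)   # lowest-cost node first
        if node == goal:
            return cost                # cheapest cost to reach the goal
        if node in explored:
            continue
        explored.add(node)
        for nbr, step in graph[node]:
            heapq.heappush(frontier, (cost + step, nbr))
    return None

# Hypothetical weighted graph: going A->B->C (cost 2) beats A->C (cost 5)
graph = {'A': [('B', 1), ('C', 5)], 'B': [('C', 1)], 'C': []}
print(ucs(graph, 'A', 'C'))   # 2
```

With all step costs equal to 1 the priority queue pops nodes in the same order as BFS's FIFO queue, which is the equivalence noted above.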
2.3.5 Bidirectional search Algorithm
Bidirectional search involves alternate searching from the start state toward the
goal and from the goal state toward the start. The algorithm stops when the
frontiers intersect. A search algorithm has to be selected for each half. How does
the algorithm know when the frontiers of the search tree intersect?
For bidirectional search to work well, there must be an efficient way to check
whether a given node belongs to the other search tree. Bidirectional search can
sometimes lead to finding a solution more quickly. The reason can be seen from
inspecting the following figure.
Also note that the algorithm works well only when there are unique start and goal
states.
2.4 Analysis of search strategies
2.5 Limitation of Uninformed search
✓ Depth-first search and breadth-first search are examples of blind (or
uninformed) search strategies.
✓ Breadth-first search produces an optimal solution (eventually, and if one
exists), but it still searches blindly through the state space.
✓ Neither uses any knowledge about the specific domain in question to
search through the state-space in a more directed manner.
✓ If the search space is big, blind search can simply take too long to be
practical, or can significantly limit how deep we're able to look into the
space.
Informed search strategies are introduced here to eliminate the above limitation.
2.6 Informed search strategies
A search strategy, which searches the most promising branches of the state-space
first can: – find a solution more quickly,
– find solutions even when there is limited time available,
– find a better solution, since more profitable parts of the state-space can
be examined, while ignoring the unprofitable parts.
• An evaluation function f(n) determines how promising a node n in the search
tree appears to be for the task of reaching the goal.
• Best-first search chooses to expand the node that appears by the evaluation
function to be the most promising among the candidates.
• Traditionally, one aims at minimizing the value of function f.
• A key component of an evaluation function is a heuristic function h(n), which
estimates the cost of the cheapest path from node n to a goal node.
• In connection with a search problem, “heuristics” refers to a certain (but
loose) upper or lower bound for the cost of the best solution.
• Goal states are nevertheless identified: in a corresponding node n it is required
that h(n) = 0.
• Heuristic functions are the most common form in which additional knowledge is
imported to the search algorithm.
A search strategy which is better than another at identifying the most promising
branches of a search-space is said to be more informed.
Some informed search strategies follow:
2.6.1 Greedy best-first search
• Greedy best-first search tries to expand the node that is closest to the goal, on
the grounds that this is likely to lead to a solution quickly.
• Thus, the evaluation function is f(n) = h(n).
• E.g. in minimizing road distances a heuristic lower bound for distances of cities
is their straight-line distance.
• Greedy search ignores the cost of the path that has already been traversed to
reach n.
• Therefore, the solution given is not necessarily optimal
• If repeating states are not detected, greedy best-first search may oscillate forever
between two promising states.
• Because greedy best-first search can start down an infinite path and never return
to try other possibilities, it is incomplete.
• Because of its greediness the search makes choices that can lead to a dead end;
then one backs up in the search tree to the deepest unexpanded node.
• Greedy best-first search resembles depth-first search in the way it prefers to
follow a single path all the way to the goal, but will back up when it hits a dead
end.
• The worst-case time and space complexity is O(b^m), where m is the maximum
depth of the search space.
• The quality of the heuristic function determines the practical usability of greedy
search.
Example:
Fig1: Romania with Step Costs in km
Solution:
Fig: Greedy search
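Greedy best-first search can be sketched with a frontier ordered only by h(n); the small graph and heuristic values below are invented for illustration, not the Romania map:

```python
# Greedy best-first search: always expand the node with the smallest h(n),
# ignoring the path cost accumulated so far.
import heapq

def greedy_best_first(graph, h, start, goal):
    frontier = [(h[start], start, [start])]   # ordered by heuristic only
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:                   # detect repeated states
            continue
        visited.add(node)
        for nbr in graph[node]:
            heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None

# Hypothetical graph and straight-line-distance-style heuristic values
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = {'S': 5, 'A': 2, 'B': 4, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))   # ['S', 'A', 'G']
```

Note the visited set: without it, the oscillation between two promising states mentioned above could loop forever; and even with it, the returned path need not be optimal, since g(n) is never consulted.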
2.6.2 A* search
• A* combines the value of the heuristic function h(n) and the cost to reach the
node n, g(n)
• Evaluation function f(n) = g(n) + h(n) thus estimates the cost of the cheapest
solution through n
• A* tries the node with the lowest f(n) value first
• This leads to both complete and optimal search algorithm, provided that h(n)
satisfies certain conditions
Example: Fig1
Solution:
Optimality of A*
• Provided that h(n) never overestimates the cost to reach the goal, then in tree
search A* gives the optimal solution
• Suppose G2 is a suboptimal goal node generated to the tree
• Let C* be the cost of the optimal solution
• Because G2 is a goal node, it holds that h(G2) = 0, and we know that f(G2) =
g(G2) > C*
• On the other hand, if a solution exists, there must exist a node n that is on the
optimal solution path in the tree
• Because h(n) does not overestimate the cost of completing the solution path,
f(n) = g(n) + h(n) ≤ C*
• We have shown that f(n) ≤ C* < f(G2), so G2 will not be expanded and A* must
return an optimal solution
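A sketch of A* with f(n) = g(n) + h(n), on an invented weighted graph whose heuristic is admissible (it never overestimates):

```python
# A* search: expand the node with the lowest f(n) = g(n) + h(n).
import heapq

def astar(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {}                                  # cheapest g found per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                             # a cheaper path is known
        best_g[node] = g
        for nbr, step in graph[node]:
            g2 = g + step
            heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None

# Hypothetical weighted graph with an admissible heuristic h
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)],
         'B': [('G', 3)], 'G': []}
h = {'S': 7, 'A': 6, 'B': 2, 'G': 0}
print(astar(graph, h, 'S', 'G'))   # (6, ['S', 'A', 'B', 'G'])
```

Here A* finds the cheaper route S-A-B-G (cost 6) even though B is first reached via the more expensive edge S-B, because the frontier re-expands B when a lower g value for it appears.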
2.6.3 Memory-bounded heuristic search
• Once again, the main drawback of the search is not computation time, but
rather space consumption.
• Therefore, several memory-bounded variants of A* have been developed.
• IDA* (Iterative Deepening A*) adapts the idea of iterative deepening.
• The cutoff used in this context is the f-cost (g + h) rather than the depth.
• At each iteration the cutoff value is the smallest f-cost of any node that exceeded
the cutoff on the previous iteration.
• Subsequent more modern algorithms carry out more complex pruning.
2.7 Local searches: This type of search is used in many problems where the
path to the goal is irrelevant.
• Local search algorithms operate using a single current state and generally move
to neighbors of that state.
• Typically, the paths followed by the search are not retained.
• They use very little memory, usually a constant amount.
• Local search algorithms lead to reasonable results in large or infinite
(continuous) state spaces for which systematic search methods are unsuitable.
Local Search Algorithms and Optimization Problems
• Local search algorithms are also useful for solving pure optimization problems.
• In optimization the aim is to find the best state according to an objective
function.
• Optimization problems are not always search problems in the same sense as they
were considered above.
• For instance, Darwinian evolution tries to optimize reproductive fitness.
• There does not exist any final goal state (goal test).
• Neither does the cost of the path matter in this task.
• Optimization of the value of objective function can be visualized as a state space
landscape, where the height of peaks and depth of valleys corresponds to the
value of the function
• A search algorithm giving the optimal solution to a maximization problem
comes up with the global maximum
• Local maxima are peaks higher than any of their neighbors, but lower than
the global maximum.
2.7.1 Hill climbing search
• In hill-climbing one always chooses the successor s' of the current state s
that has the highest value of the objective function f, i.e. argmax_{s'} f(s')
• Search terminates when all neighbors of the state have a lower value for the
objective function than the current state has
• Most often the search terminates in a local maximum; sometimes, by chance,
in a global maximum
• Plateaux also cause problems for this greedy local search
• On the other hand, improvement starting from the initial state is often very fast
• Sideways moves allow the search to proceed to states that are as good as the
current one.
• Stochastic hill-climbing chooses at random one of the neighbors that improve
the situation.
• First-choice hill climbing examines neighbors in random order and selects the
first one that is better than the current state.
• These versions of hill climbing are also incomplete because they can still get
stuck in a local maximum.
• Random-restart hill climbing guarantees completeness: hill climbing is
restarted from random initial states until a solution is found.
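The variants above can be sketched in a few lines of Python (the one-dimensional objective is a hypothetical example, not from the text):

```python
import random

def hill_climb(f, start, neighbors, max_steps=1000):
    """Greedy hill climbing: move to the best neighbor until none improves f."""
    current = start
    for _ in range(max_steps):
        best = max(neighbors(current), key=f, default=current)
        if f(best) <= f(current):
            return current              # no neighbor is better: a (local) maximum
        current = best
    return current

def random_restart(f, random_start, neighbors, restarts=20):
    """Random-restart hill climbing: best result over several random starts."""
    return max((hill_climb(f, random_start(), neighbors)
                for _ in range(restarts)), key=f)

# Toy objective with a single peak at x = 3 on the integers.
f = lambda x: -(x - 3) ** 2
result = hill_climb(f, 10, lambda x: [x - 1, x + 1])
random.seed(1)
best = random_restart(f, lambda: random.randint(-100, 100), lambda x: [x - 1, x + 1])
print(result, best)  # 3 3
```

Because this toy objective has a single peak, plain hill climbing already finds the global maximum; random restarts matter when the landscape has several local maxima.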
Example:
2.7.2 Constraint satisfaction problems
Formally speaking, a constraint satisfaction problem (or CSP) is defined by a set
of variables, X1, X2, . . . , Xn, and a set of constraints, C1, C2, . . . , Cm. Each
variable Xi has a nonempty domain Di of possible values. Each constraint Ci
involves some subset of the variables and specifies the allowable combinations of
values for that subset. A state of the problem is defined by an assignment of
values to some or all of the variables, {Xi = vi , Xj = vj , . . .}. An assignment that
does not violate any constraints is called a consistent or legal assignment. A
complete assignment is one in which every variable is mentioned, and a solution
to a CSP is a complete assignment that satisfies all the constraints. Some CSPs
also require a solution that maximizes an objective function.
So what does all this mean? Suppose that, having tired of Romania, we are
looking at a map of Australia showing each of its states and territories, as in
Figure 5.1(a), and that we are given the task of coloring each region either red,
green, or blue in such a way that no neighboring regions have the same color. To
formulate this as a CSP, we define the variables to be the regions: WA, NT, Q,
NSW, V, SA, and T. The domain of each variable is the set {red, green, blue}.
Fig: (a) The principal states and territories of Australia. Coloring this map can be viewed as a
constraint satisfaction problem. The goal is to assign colors to each region so that no neighboring
regions have the same color. (b) The map-coloring problem represented as a constraint graph.
The constraints require neighboring regions to have distinct colors; for example,
the allowable combinations for WA and NT are the pairs {(red, green),(red,
blue),(green, red),(green, blue),(blue, red),(blue, green)}. (The constraint can also
be represented more succinctly as the inequality WA ≠ NT, provided the
constraint satisfaction algorithm has some way to evaluate such expressions.)
There are many possible solutions, such as {WA = red, NT = green, Q = red,
NSW = green, V = red, SA = blue, T = red }.
It is helpful to visualize a CSP as a constraint graph, as shown in Figure 5.1(b).
The nodes of the graph correspond to variables of the problem and the arcs
correspond to constraints. Treating a problem as a CSP confers several important
benefits. Because the representation of states in a CSP conforms to a standard
pattern—that is, a set of variables with assigned values—the successor function
and goal test can be written in a generic way that applies to all CSPs. Furthermore,
we can develop effective, generic heuristics that require no additional, domain-
specific expertise. Finally, the structure of the constraint graph can be used to
simplify the solution process, in some cases giving an exponential reduction in
complexity.
It is easy to see that a CSP can be given an incremental formulation as a standard
search problem as follows:
I. Initial state: the empty assignment {}, in which all variables are
unassigned.
II. Successor function: a value can be assigned to any unassigned variable,
provided that it does not conflict with previously assigned variables.
III. Goal test: the current assignment is complete.
IV. Path cost: a constant cost for every step.
Fig: Part of the search tree generated by simple backtracking for the map-coloring problem
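The incremental formulation above can be sketched as a short backtracking solver for the Australia map-coloring CSP:

```python
# Backtracking search for the Australia map-coloring CSP described above.
neighbors = {
    'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
    'SA': ['WA', 'NT', 'Q', 'NSW', 'V'], 'Q': ['NT', 'SA', 'NSW'],
    'NSW': ['Q', 'SA', 'V'], 'V': ['SA', 'NSW'], 'T': [],
}
colors = ['red', 'green', 'blue']

def backtrack(assignment):
    if len(assignment) == len(neighbors):                    # goal test: complete
        return assignment
    var = next(v for v in neighbors if v not in assignment)  # pick an unassigned variable
    for value in colors:
        # successor function: only values consistent with already-colored neighbors
        if all(assignment.get(n) != value for n in neighbors[var]):
            result = backtrack({**assignment, var: value})
            if result is not None:
                return result
    return None                                              # dead end: backtrack

solution = backtrack({})
print(solution)
```

The initial state is the empty assignment `{}`; each recursive call extends it by one consistent value, exactly as in steps I–IV above.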
Summary
• Blind search: Depth-First, Breadth-First, IDS – Do not use knowledge of
problem space to find solution.
• Informed search
• Best-first search: Order agenda based on some measure of how ‘good’ each state
is.
• Uniform-cost: Cost of getting to current state from initial state = g(n)
• Greedy search: Estimated cost of reaching goal from current state – Heuristic
evaluation functions, h(n)
• A* search: f(n) = g(n) + h(n)
• Admissibility: h(n) never overestimates the actual cost of getting to the goal
state.
• Informedness: A search strategy that searches less of the state space in order
to find a goal state is more informed.
• Constraint satisfaction problems (or CSPs) consist of variables with constraints
on them. Many important real-world problems can be described as CSPs. The
structure of a CSP can be represented by its constraint graph.
• Backtracking search, a form of depth-first search, is commonly used for solving
CSPs.
• The minimum remaining values and degree heuristics are domain-independent
methods for deciding which variable to choose next in a backtracking search. The
least-constraining-value heuristic helps in ordering the variable values.
• By propagating the consequences of the partial assignments that it constructs,
the backtracking algorithm can reduce greatly the branching factor of the
problem. Forward checking is the simplest method for doing this. Arc consistency
enforcement is a more powerful technique, but can be more expensive to run.
• Backtracking occurs when no legal assignment can be found for a variable.
Conflict-directed backjumping backtracks directly to the source of the problem.
• Local search using the min-conflicts heuristic has been applied to constraint
satisfaction problems with great success.
• The complexity of solving a CSP is strongly related to the structure of its
constraint graph. Tree-structured problems can be solved in linear time. Cutset
conditioning can reduce a general CSP to a tree-structured one and is very
efficient if a small cutset can be found. Tree decomposition techniques transform
the CSP into a tree of subproblems and are efficient if the treewidth of the
constraint graph is small.
2.8 Stochastic search
Stochastic search is a general approach for solving combinatorial problems with
significant research and practical potential.
Stochastic Local Search:
– randomized initialization step.
– random initial solutions.
– randomized construction heuristics.
• Simulated annealing is a popular stochastic algorithm designed in analogy with
the physical process of cooling a molten substance where condensing of matter
into a crystalline solid takes place. In this context, searching for an optimal
solution is like finding a configuration of the cooled system with minimum free
energy. Because of its ability to escape from local optima, simulated
annealing is a powerful algorithm for numerical and combinatorial optimization.
Genetic and evolutionary methods are another family of stochastic search
algorithms. Stochastic search is easy to implement, but it may be incomplete.
For instance, sophisticated search techniques form the backbone of modern
machine learning and data analysis. Computer systems that are able to extract
information from huge data sets (data mining), to recognize patterns, to do
classification, or to suggest diagnoses, in short, systems that are adaptive and —
to some extent — able to learn, fundamentally rely on effective and efficient
search techniques.
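A minimal sketch of simulated annealing follows; the objective, neighbor function, and cooling schedule are hypothetical choices for illustration:

```python
import math, random

def simulated_annealing(f, start, neighbor, t0=10.0, cooling=0.95, steps=500):
    """Minimize f: always accept improving moves; accept worsening moves
    with probability exp(-delta / T), where the temperature T cools each step."""
    current, t = start, t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = f(candidate) - f(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate        # move (possibly uphill, early on)
        t *= cooling                   # geometric cooling schedule
    return current

# Toy objective with a single global minimum at x = 2.
random.seed(0)
f = lambda x: (x - 2) ** 2
x = simulated_annealing(f, start=20.0, neighbor=lambda c: c + random.uniform(-1, 1))
```

Early on, the high temperature lets the search accept uphill moves (escaping local optima); as T shrinks, the behavior freezes into greedy descent near the minimum.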
Workout Problem:
Module III
3.1 Adversarial search
Adversarial search examines the problems that arise when we try to plan in a
world where other agents are planning against us. A good example is board games.
Adversarial games, while much studied in AI, are a small part of game theory in
economics.
Search versus Games
Search (no adversary):
• Solution is a (heuristic) method for finding a goal.
• Heuristic techniques can find an optimal solution.
• Evaluation function: estimate of cost from start to goal through a given node.
• Examples: path planning, scheduling activities.
Games (adversary):
• Solution is a strategy (a strategy specifies a move for every possible opponent reply).
• Optimality depends on the opponent. Why?
• Time limits force an approximate solution.
• Evaluation function: evaluates the “goodness” of a game position.
• Examples: chess, checkers, Othello, backgammon.
Mini-Max Algorithm
Mini-max is a type of backtracking algorithm used in decision making and game
theory to find the optimal move for a player. It is widely used in two-player,
turn-based games such as Tic-Tac-Toe and Chess.
In Mini-Max algorithm, the two players are called Max and Min. The Max tries to get the
highest score possible while the Min tries to get the lowest score possible.
Example:
Consider a game with 4 final states, where the paths to the final states run
from the root to the 4 leaves of a perfect binary tree, as shown in the figure.
Assume you are the Max player and you get the first chance to move, i.e., you
are at the root and your opponent is at the next level. Which move would you
make as the Max player, considering that your opponent also plays optimally?
Fig. 1
Since this is a backtracking based algorithm, it tries all possible moves, then backtracks
and makes a decision.
Solution:
Step 1. Max goes left:
It is now Min's turn. Min has a choice between 3 and 5. Being the minimizer, it
will choose the lesser of the two, that is 3.
Step 2. Max goes right:
It is now Min's turn. Min has a choice between 2 and 9. It will choose 2, as it
is the lesser of the two values.
Step 3. Being the Max, you would choose the larger value, that is 3. Therefore,
the optimal move for Max is to go left and the optimal value is 3.
Step 4. If there are more levels in the game tree, the process is repeated.
The following fig shows the solution tree.
Fig.2
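The example above can be sketched directly in code, representing the game tree as nested lists of leaf scores:

```python
def minimax(node, is_max):
    """Minimax over a game tree given as nested lists; leaves are scores."""
    if not isinstance(node, list):      # leaf: return its value
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# The 4-leaf perfect binary tree from the example: left subtree [3, 5],
# right subtree [2, 9]. Max moves first.
print(minimax([[3, 5], [2, 9]], is_max=True))  # 3
```

The recursion mirrors the steps above: Min picks min(3, 5) = 3 on the left, min(2, 9) = 2 on the right, and Max picks max(3, 2) = 3.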
The disadvantage of the mini-max algorithm
The disadvantage of the mini-max algorithm is that it explores the entire game
tree: each node must be expanded once to generate its children and visited again
to evaluate all the heuristic values, which increases the complexity.
These disadvantages can be overcome by eliminating nodes from the tree without
analyzing them, and this process is called pruning.
Alpha-Beta pruning Algorithm
The Alpha-Beta pruning algorithm reduces the computation time by a huge factor
compared with Mini-Max. It cuts off (prunes) branches of the game tree that need
not be searched because a better move is already available. It adds two parameters
to the mini-max algorithm, namely alpha and beta, which is why it is called
Alpha-Beta pruning.
Alpha: It is the best choice so far for the player MAX. We want to get the highest possible
value here.
Beta: It is the best choice so far for MIN, and it has to be the lowest possible value.
Let us understand the intuition behind this first and then we will formalize the algorithm.
Suppose, we have the following game tree:
In this case,
Minimax Decision = MAX{MIN{3,5,10}, MIN{2,a,b}, MIN{2,7,3}}
= MAX{3,c,2}
=3
You might be surprised: how could we calculate the maximum with a missing value?
Here is the trick. MIN{2,a,b} is certainly less than or equal to 2, i.e., c <= 2,
and hence MAX{3,c,2} has to be 3. The question now is: do we really need to
calculate c? Of course not. We could have reached the conclusion without looking
at those nodes. This is where alpha-beta pruning comes into the picture.
Example
Solution
o The initial call starts from A. The value of alpha here is -INFINITY and the value
of beta is +INFINITY. These values are passed down to subsequent nodes in the
tree. At A the maximizer must choose max of B and C, so A calls B first
o At B, the minimizer must choose the min of D and E, and hence it calls D first.
o At D, it looks at its left child which is a leaf node. This node returns a value of 3.
Now the value of alpha at D is max( -INF, 3) which is 3.
o To decide whether it is worth looking at its right node or not, it checks the
condition beta <= alpha. This is false, since beta = +INF and alpha = 3. So it
continues the search.
o D now looks at its right child, which returns a value of 5. At D, alpha =
max(3, 5), which is 5. Now the value of node D is 5.
o D returns a value of 5 to B. At B, beta = min( +INF, 5), which is 5. The
minimizer is now guaranteed a value of 5 or less. B now calls E to see if it can
get a lower value than 5.
o At E, the values of alpha and beta are not -INF and +INF but -INF and 5
respectively, because the value of beta was changed at B and that is what B
passed down to E.
o Now E looks at its left child, which is 6. At E, alpha = max(-INF, 6), which
is 6. Here the condition beta <= alpha becomes true, since beta is 5 and alpha
is 6. Hence E breaks and returns 6 to B.
o Note how it did not matter what the value of E's right child is. It could have
been +INF or -INF; we never even had to look at it, because the minimizer was
guaranteed a value of 5 or less. As soon as the maximizer saw the 6, it knew the
minimizer would never come this way, because it can get a 5 on the left side of
B. This way we did not have to look at that 9, and hence we saved computation
time.
o E returns a value of 6 to B. At B, beta = min( 5, 6), which is 5. The value of
node B is also 5.
So far this is how our game tree looks. The 9 is crossed out because it was never
computed.
✓ B returns 5 to A. At A, alpha = max( -INF, 5) which is 5. Now the maximizer is
guaranteed a value of 5 or greater. A now calls C to see if it can get a higher value
than 5.
✓ At C, alpha = 5 and beta = +INF. C calls F
✓ At F, alpha = 5 and beta = +INF. F looks at its left child which is a 1. alpha = max(
5, 1) which is still 5.
✓ F looks at its right child which is a 2. Hence the best value of this node is 2. Alpha
still remains 5
✓ F returns a value of 2 to C. At C, beta = min( +INF, 2). The condition beta <=
alpha becomes true, as beta = 2 and alpha = 5. So it breaks, and it does not
even have to compute the entire sub-tree of G.
✓ The intuition behind this cut-off is that at C the minimizer was guaranteed a
value of 2 or less. But the maximizer was already guaranteed a value of 5 if it
chose B. So why would the maximizer ever choose C and get a value of 2 or less?
Again, you can see that it did not matter what those last 2 values were. We also
saved a lot of computation by skipping a whole sub-tree.
✓ C now returns a value of 2 to A. Therefore the best value at A is max( 5, 2),
which is 5.
✓ Hence the optimal value that the maximizer can get is 5.
This is how our final game tree looks. As you can see, G has been crossed out,
as it was never computed.
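The walkthrough can be sketched in code; the tree below mirrors the figure, and G's leaf values are placeholders (0, 0), since pruning never evaluates them:

```python
import math

def alphabeta(node, is_max, alpha=-math.inf, beta=math.inf):
    """Alpha-beta search over a game tree given as nested lists; leaves are scores."""
    if not isinstance(node, list):
        return node                     # leaf: heuristic value
    if is_max:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if beta <= alpha:
                break                   # beta cut-off: MIN will avoid this branch
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if beta <= alpha:
            break                       # alpha cut-off: MAX already has better
    return value

# A -> (B, C); B -> (D, E); C -> (F, G), matching the walkthrough.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, 0]]]
print(alphabeta(tree, is_max=True))  # 5
```

Tracing it reproduces the cut-offs described above: E is cut at 6 (beta = 5), and G is never visited because C's beta drops to 2 below A's alpha of 5.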
Workout Problem
Module IV
Knowledge
Knowledge is all kinds of facts about the world and it is necessary for intelligent
behavior.
Knowledge Representation (KR)
Knowledge Representation is the area of Artificial Intelligence (AI) concerned with how
knowledge can be represented symbolically and manipulated in an automated way by
reasoning programs. KR is the field of study concerned with representations of
propositions.
Reasoning
Reasoning is the process of deriving new propositions from a knowledge representation.
Knowledge Acquisition (KA)
Knowledge Acquisition (KA) is the transfer of propositions from source to sink.
Knowledge base (KB)
A knowledge base (KB) is used to store complex structured and unstructured (symbolic
and heuristic) information used by a computer system.
Symbolic: It incorporates knowledge that is symbolic as well as numeric.
Heuristic: It reasons with judgmental, imprecise, and qualitative knowledge as well as
with formal knowledge of established theories.
Propositional Logic
Propositional logic is a formal language for representing knowledge and for making
logical inferences. A proposition is a statement that is either true or false.
An atomic proposition is a statement that must be true or false. Examples of atomic
propositions are “7 is a prime” and “program p terminates”.
A compound proposition can be created from other propositions using logical
connectives. Truth-values of elementary propositions and the meaning of connectives
define the truth of a compound proposition. The truth table for a compound proposition is
a table with entries (rows) for all possible combinations of truth-values of elementary
propositions.
Inference
It is the process of deriving new propositions from the existing ones.
Logical connectives
Compound propositions are constructed from atomic propositions by using the
logical connectives: negation (¬), conjunction (∧), disjunction (∨),
implication (→), and biconditional (↔).
Well-formed formula (wff)
The well-formed formulas (wffs) of propositional logic are obtained by the
following construction rules: every atomic proposition is a wff; if φ and ψ are
wffs, then so are ¬φ, (φ ∧ ψ), (φ ∨ ψ), (φ → ψ), and (φ ↔ ψ); nothing else is
a wff.
Truth table
A truth table shows whether a propositional formula is true or false for each possible
truth assignment. Truth functions are sometimes called Boolean functions.
If we know how the five basic logical connectives work, it is easy (in principle) to
construct a truth table.
Tautology and Contradiction
Implication
Logical Equivalence
The propositions p and q are called logically equivalent if they have the same truth table.
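Truth tables, tautologies, contradictions, and logical equivalences can all be checked mechanically by enumerating every truth assignment; a minimal sketch:

```python
from itertools import product

def is_tautology(formula, n):
    """True iff the formula is true in every row of its n-variable truth table."""
    return all(formula(*row) for row in product([True, False], repeat=n))

implies = lambda a, b: (not a) or b   # truth function for p -> q

print(is_tautology(lambda p: p or not p, 1))    # tautology: True
print(is_tautology(lambda p: p and not p, 1))   # contradiction: False
# (p -> q) and (~p v q) agree in every row, so they are logically equivalent:
print(is_tautology(lambda p, q: implies(p, q) == ((not p) or q), 2))  # True
```

`product([True, False], repeat=n)` generates exactly the 2ⁿ rows of the truth table, so two formulas are logically equivalent precisely when their equality test is a tautology.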
Example 1
Example 2
Example 3
First Order Predicate Logic (FOPL)
In propositional logic, each possible atomic fact requires a separate unique propositional
symbol.
Predicate logic includes a richer ontology such as objects (terms), properties (unary
predicates on terms), relations (n-ary predicates on terms), and functions (mappings from
terms to other terms).
FOPL allows more flexible and compact representation of knowledge than propositional
logic using universal and existential quantifiers.
Semantic Net/Network
A semantic network represents knowledge in the form of a graphical network.
A semantic net is a knowledge representation technique used to represent
propositional information.
It represents knowledge about the world in terms of graphs. Nodes correspond to
concepts or objects in the domain, and links to relations. Three kinds of links
are subset links (isa, part-of links), member links (instance links), and
function links. A semantic net can be transformed into the first-order logic
language. The graphical representation is often easier to work with and gives a
better overall view of individual concepts and relations.
Example
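The example figure is not reproduced here. As a minimal sketch (the Tweety hierarchy is a standard illustrative example, not taken from the text), a semantic net can be stored as (node, relation, node) triples, with category membership read off by following isa/instance links:

```python
# A semantic network as a set of (node, relation, node) triples.
triples = {
    ('Tweety', 'instance-of', 'Canary'),
    ('Canary', 'isa', 'Bird'),
    ('Bird', 'isa', 'Animal'),
    ('Bird', 'can', 'Fly'),
}

def isa_closure(node):
    """Follow isa/instance-of links to find every category a node belongs to."""
    found = set()
    frontier = [node]
    while frontier:
        current = frontier.pop()
        for s, r, o in triples:
            if s == current and r in ('isa', 'instance-of') and o not in found:
                found.add(o)
                frontier.append(o)
    return found

print(isa_closure('Tweety'))  # {'Canary', 'Bird', 'Animal'}
```

This traversal is the graph-based inheritance that makes semantic nets convenient: Tweety inherits membership in Bird and Animal without those facts being stated explicitly.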
Frame
A frame is a collection of semantic network nodes and slots that together
describe a stereotyped object, act, or event. It represents knowledge about the
objects and events typical of specific situations.
Example
Generic Restaurant Frame
a-kind-of: Business Establishment
Types:
    Range: (Cafeteria, Seat-Yourself, Wait-to-be-seated, Fastfood)
    Default: IF plastic-orange-counter THEN Fastfood
             IF stack-of-trays THEN Cafeteria
             IF wait-for-waitress-sign OR reservation-made THEN Wait-to-be-seated
             OTHERWISE Seat-Yourself
Location:
    Range: an ADDRESS
    if-needed: (Look at the menu)
Name:
    if-needed: (Look at the menu)
Food-style:
    Range: (Burgers, Chinese, American, Seafood, French)
    Default: Chinese
    if-added: (Update Alternatives of Restaurant)
Time-of-Operation:
    Range: a time-of-day
    Default: open evenings except Mondays
Payment-form:
    Range: (Cash, CreditCard, Check, Washing-Dishes Script)
Event-Sequence:
    Default: Eat-at-Restaurant Script
Alternatives:
    Range: all restaurants with the same food-style
    if-needed: (find all restaurants with the same food-style)
Script
A script is a structure that prescribes a set of circumstances, which could be expected to
follow on from one another. It is similar to a thought sequence or a chain of situations,
which could be anticipated. It could be considered to consist of a number of slots or
frames but with more specialized roles.
The components of a script include:
Entry Conditions: These must be satisfied before events in the script can occur.
Results: Conditions that will be true after events in script occur.
Props: Slots representing objects involved in events.
Roles: Persons involved in the events.
Track: Variations on the script.
Scenes: The sequence of events that occur.
Example
Situation calculus
Situation calculus is Logic for reasoning about changes in the state of the world. The
world is described by the sequences of situations of the current state. Changes from one
situation to another are caused by actions. The situation calculus allows us to describe the
initial state and a goal state. It builds the KB that describes the effect of actions
(operators). It proves that the KB and the initial state lead to a goal state.
Basic Elements of Situation Calculus
a. Actions: actions that can be performed in the world. The actions can be quantified.
Actions:
move(x, y) implies robot is moving to a new location (x, y).
pickup(o) implies robot picks up an object o.
drop(o) implies robot drops the object o that holds.
b. Fluents- that describe the state of the world.
Relational fluents
Relational fluents are statements whose truth value may change and they take a situation
as a final argument.
is_carrying(o, s): robot is carrying object o in situation s.
Functional fluents
Functional fluents are functions that return a situation-dependent value; they
take a situation as a final argument.
location(s): returns the location (x, y) of the robot in situation s.
c. Situations: represent a history of action occurrences. A dynamic world is
modeled as progressing through a series of situations as a result of various
actions being performed within the world. A situation is a finite sequence
of actions: not a state, but a history.
Situations:
Initial situation S0: no actions have yet occurred
A new situation, resulting from the performance of an action a in current situation s, is
denoted using the function symbol do(a, s).
do(move(2, 3), S0): denotes the new situation after the performance of action move(2, 3)
in initial situation S0.
do(pickup(Ball ), do(move(2, 3), S0)).
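These elements can be sketched in code (the tuple encoding of actions is an illustrative assumption, not standard situation-calculus notation): situations are action histories built by `do`, and fluents are computed from the history:

```python
# Situations as histories of actions, built with do(a, s).
S0 = ()  # initial situation: no actions have occurred

def do(action, s):
    return s + (action,)

def location(s):
    """Functional fluent: the robot's (x, y) after the actions in s."""
    loc = (0, 0)                       # assumed starting location
    for name, *args in s:
        if name == 'move':
            loc = tuple(args)
    return loc

def is_carrying(obj, s):
    """Relational fluent: true if obj was picked up and not dropped since."""
    carrying = False
    for name, *args in s:
        if name == 'pickup' and args[0] == obj:
            carrying = True
        elif name == 'drop' and args[0] == obj:
            carrying = False
    return carrying

# do(pickup(Ball), do(move(2, 3), S0)) from the text:
s = do(('pickup', 'Ball'), do(('move', 2, 3), S0))
print(location(s), is_carrying('Ball', s))  # (2, 3) True
```

Note that `s` is a history, not a state: the fluents recompute the state by replaying the actions, which is exactly the situation-calculus view.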
Automated reasoning systems
1) Theorem proving: Prove sentences in the first-order logic. Use inference rules,
resolution rule and resolution refutation.
2) Deductive retrieval systems: Systems based on rules (KBs in Horn form) and
prove theorems or infer new assertions (forward, backward chaining).
3) Production systems: Systems based on rules with actions in their
consequents; they use the forward chaining mode of operation.
4) Semantic networks: Graphical representation of the world, objects are nodes in
the graphs, relations are various links.
Theorem proving
Example 1. From the following facts, prove that Ravi likes peanuts.
a) Ravi likes all kind of food.
b) Apple and chicken are food.
c) Anything anyone eats and is not killed is food.
d) Ajay eats peanuts and still alive.
Solution:
Step 1: Negate the statement to be proved.
To prove: Ravi likes peanuts. FOPL: likes(Ravi, peanuts)
Negation: ¬likes(Ravi, peanuts)
Step 2: Convert the facts into FOPL.
#  Facts                                     FOPL
a  Ravi likes all kinds of food.             ∀x: food(x) → likes(Ravi, x)
b  Apple and chicken are food.               food(Apple)
                                             food(Chicken)
c  Anything anyone eats and is not           ∀x ∀y: eats(x, y) ˄ ¬killed(x) → food(y)
   killed is food.
d  Ajay eats peanuts and is still alive.     eats(Ajay, peanuts) ˄ alive(Ajay)
                                             ¬killed(x) → alive(x)
                                             killed(x) → ¬alive(x)
Step 3: Convert the FOPL statements into CNF.
#  FOPL                                      CNF
1  ∀x: food(x) → likes(Ravi, x)              ¬food(x) ∨ likes(Ravi, x)
2  food(Apple) ˄ food(Chicken)               food(Apple)
                                             food(Chicken)
3  ∀x ∀y: eats(x, y) ˄ ¬killed(x) → food(y)  ¬eats(x, y) ∨ killed(x) ∨ food(y)
4  eats(Ajay, peanuts) ˄ alive(Ajay)         eats(Ajay, peanuts)
                                             alive(Ajay)
5  ¬killed(x) → alive(x)                     killed(x) ∨ alive(x)
   killed(x) → ¬alive(x)                     ¬killed(x) ∨ ¬alive(x)
Step4: Draw resolution graph.
¬likes(Ravi,peanuts) ¬food(x) ∨ likes(Ravi,x) (1)
[x/peanuts]
¬food(peanuts) ¬eats(x, y) ∨ killed(x) ∨food(y) (3)
[y/peanuts]
¬eats(x, peanuts) ∨ killed(x) eats(Ajay, peanuts) (4)
[x/Ajay]
killed(Ajay) ¬killed(x) ∨ ¬alive(x) (5)
[x/Ajay]
¬alive(Ajay) alive(Ajay) (4)
Φ
The resolution graph ends in the empty clause, i.e., a contradiction. So our
assumption ¬likes(Ravi, peanuts) is false, which implies likes(Ravi, peanuts);
i.e., Ravi likes peanuts is proved.
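The refutation above can be sketched in code. As a simplification (an assumption for illustration, not the full first-order procedure), the substitutions x = Ajay, y = peanuts are applied by hand so the clauses become ground, and a brute-force propositional resolution loop searches for the empty clause:

```python
# Each clause is a frozenset of literals; a leading '~' marks negation.
clauses = [
    frozenset({'~likes(Ravi,peanuts)'}),                                  # negated goal
    frozenset({'~food(peanuts)', 'likes(Ravi,peanuts)'}),                 # clause 1
    frozenset({'~eats(Ajay,peanuts)', 'killed(Ajay)', 'food(peanuts)'}),  # clause 3
    frozenset({'eats(Ajay,peanuts)'}),                                    # clause 4
    frozenset({'alive(Ajay)'}),                                           # clause 4
    frozenset({'~killed(Ajay)', '~alive(Ajay)'}),                         # clause 5
]

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolution_refutation(clauses):
    """Repeatedly resolve clause pairs; the empty clause appears iff unsatisfiable."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for lit in c1:
                    if negate(lit) in c2:
                        resolvent = (c1 - {lit}) | (c2 - {negate(lit)})
                        if not resolvent:
                            return True          # empty clause: contradiction
                        new.add(frozenset(resolvent))
        if new <= clauses:
            return False                         # saturated without contradiction
        clauses |= new

print(resolution_refutation(clauses))  # True: Ravi likes peanuts
```

The loop derives exactly the chain drawn in the resolution graph: ¬food(peanuts), then ¬eats(Ajay, peanuts) ∨ killed(Ajay), then killed(Ajay), then ¬alive(Ajay), and finally the empty clause against alive(Ajay).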
Planning
Example POS I:
Example POS II:
Example POS III:
Planning and acting in nondeterministic domains
Probability
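The probability material here is figure-based in the original and is not reproduced. As a minimal grounded sketch of the kind of calculation involved (all numbers below are hypothetical), Bayes' rule P(H|E) = P(E|H)·P(H) / P(E) can be computed directly:

```python
# Bayes' rule, with P(E) expanded by the law of total probability:
# P(E) = P(E|H) P(H) + P(E|~H) P(~H).
def bayes(p_e_given_h, p_h, p_e_given_not_h):
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Hypothetical diagnostic test: 1% prior, 90% sensitivity, 5% false positives.
posterior = bayes(0.9, 0.01, 0.05)
print(round(posterior, 4))  # 0.1538
```

Despite the 90% sensitivity, the posterior is only about 15% because the prior is so low; this base-rate effect is the standard motivation for reasoning with Bayes' rule rather than with likelihoods alone.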
Reasoning
Example
Learning
Learning is defined as the process of making changes in a system that enable it
to perform a task more efficiently. It is an important feature of intelligence.
Learning is the process of knowledge acquisition and skill refinement.
Knowledge acquisition (example: learning physics) implies learning new symbolic
information coupled with the ability to apply that information in an effective manner
whereas skill refinement (example: riding a bicycle, playing the piano) occurs at a
subconscious level by virtue of repeated practice.
Learning is a goal-directed process of a system that improves the knowledge base (KB)
of the system by exploring experience and prior knowledge.
The learning method depends on (i) the type of performance element, (ii) the
available feedback, and (iii) the type of component to be improved and its
representation.
Types of learning agents
Learning Agent    Sensors                      Actuators
Human Agent       Eyes, ears, etc.             Hands, legs, mouth
Robot Agent       Camera, IR range finder      Motors
Software Agent    Keystrokes, file contents    Display to screen, write files
Components of a learning system
There are four components of a learning system:
(i) Learning element-It is responsible for performance improvement by taking
knowledge and feedback.
(ii) Performance element- It is the agent itself, based on percept sequence it
actuates through effectors.
(iii) Critic-It gives feedback (success or failure) to the learning element for
performance improvement.
(iv) Problem generator: It suggests actions that generate new experience for
further learning.
Learning from observation
Inductive learning
Learning a function from examples of its inputs and outputs is called inductive learning.
It is measured by their learning curve, which shows the prediction accuracy as a function
of the number of observed examples.
Learning Decision trees
Decision trees are one of the simplest, universal and most widely used prediction models.
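As a minimal sketch (the attributes below form a hypothetical restaurant-waiting example in the spirit of the classic decision-tree illustration, not one from the text), a learned tree can be stored as nested tuples and evaluated by walking from the root to a leaf:

```python
# A decision tree as nested tuples (attribute, {value: subtree});
# leaves are yes/no decisions.
tree = ('patrons', {
    'none': 'no',
    'some': 'yes',
    'full': ('hungry', {'yes': 'yes', 'no': 'no'}),
})

def decide(tree, example):
    """Walk the tree, testing one attribute per internal node, until a leaf."""
    while isinstance(tree, tuple):
        attribute, branches = tree
        tree = branches[example[attribute]]
    return tree

print(decide(tree, {'patrons': 'full', 'hungry': 'yes'}))  # yes
```

Each internal node tests a single attribute, so a prediction costs at most one lookup per level, which is part of why decision trees are so widely used.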
Explanation-based learning
Explanation-based learning is the learning of general problem-solving techniques
by observing and analyzing human solutions to specific problems.
Machine learning is particularly attractive in several real-life problems
because of the following reasons:
• Some tasks cannot be defined well except by example
• Working environment of machines may not be known at design time
• Explicit knowledge encoding may be difficult and not available
• Environments change over time
• Biological systems learn
Recently, learning has been widely used in a number of application areas, including:
• Data mining and knowledge discovery
• Speech/image/video (pattern) recognition
• Adaptive control
• Autonomous vehicles/robots
• Decision support systems
• Bioinformatics
• WWW
Classification & Clustering
The general idea of classification and clustering is to group inputs into
(hopefully) distinct categories. These categories can then be used to identify
the group membership of new inputs in the future.
Classification
Classification is a form of machine learning where two or more distinct groups, or
classes, of things are known ahead of time and used to group additional things. The
features of members of each class are analyzed or learned, and then generalized to build
an understanding of what it means to be a member of each class.
Classification is an example of supervised learning because classes are known.
Clustering
Clustering is a form of machine learning where groups, or classes, are not known
ahead of time so groupings are created by looking at similar and shared
characteristics among the things being grouped.
Clustering is an example of unsupervised learning because classes are unknown
at the start.
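The contrast can be sketched on toy one-dimensional data (all numbers hypothetical): nearest-centroid classification uses known labels, while a few iterations of k-means clustering discover groups without any labels:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Classification (supervised): labeled training data defines one centroid per class.
training = {'small': [1, 2, 3], 'large': [10, 11, 12]}
centroids = {label: mean(xs) for label, xs in training.items()}
classify = lambda x: min(centroids, key=lambda c: abs(x - centroids[c]))
print(classify(2.5))     # small

# Clustering (unsupervised): no labels; k-means (k = 2) alternates between
# assigning points to the nearest centroid and recomputing the centroids.
data = [1, 2, 3, 10, 11, 12]
c1, c2 = 0.0, 5.0        # arbitrary initial centroids
for _ in range(10):
    g1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
    g2 = [x for x in data if abs(x - c1) > abs(x - c2)]
    c1, c2 = mean(g1), mean(g2)
print(sorted([c1, c2]))  # [2.0, 11.0]
```

Note the asymmetry: the classifier is told which class each training point belongs to, while k-means recovers the same two groups purely from the shared characteristics of the data.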
Reinforcement learning
Reinforcement learning is a computational approach to understanding and
automating goal-directed learning and decision making. It is distinguished from
other computational approaches by its emphasis on learning by an agent from
direct interaction with its environment, without relying on exemplary supervision
or complete models of the environment. In our opinion, reinforcement learning is
the first field to seriously address the computational issues that arise when
learning from interaction with an environment in order to achieve long-term
goals.
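The trial-and-error, delayed-reward idea can be sketched with tabular Q-learning on a toy problem (the corridor world, learning rate, discount, and reward below are hypothetical choices, not from the text):

```python
import random

# A corridor of states 0..4; reward 1 only for reaching state 4.
# Actions move left (-1) or right (+1), clamped to the corridor.
random.seed(0)
n_states, actions = 5, [-1, 1]
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(300):                           # episodes of trial and error
    s = 0
    while s != n_states - 1:
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s2, a')
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

policy = [max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)]
print(policy)  # with enough episodes, the policy moves right in every state
```

The agent is never told which action is correct; the reward at the far end propagates backward through the Q-values over repeated episodes, which is exactly the delayed-reward, learning-from-interaction behavior described above.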
MCQ
1. When a computer program exhibits what appears to be human-like intelligence, it is
probably using an approach known as:
A Electronic thought
B Digital mimicry
C Intelligent learning
D Artificial intelligence
2. Specific examples of Machine Learning where a computer is able to improve its own
performance over time include:
A Performing millions of mathematical computations per second and drawing impressive
3D gaming graphics
B Filtering out spam email messages and converting handwriting into computer text
C Turning on a computer screen saver after a period of inactivity and automatically
dimming a cell phone screen in low-light situations
D Finding the shortest route home on your GPS device and analyzing paint samples to
get a perfect color match
3. Classification and clustering are which of these types of machine learning?
A Supervised and unsupervised learning
B Categorized and de-categorized learning
C Repetitive and experiential learning
D Filtered and identified learning
4. Which one of the following everyday situations is the most like Classification?
A Forming teams when the captains don’t know any of the players
B Figuring out where to sit during lunchtime in a high school cafeteria
C Deciding whether to pay using cash or credit
D Reorganizing an accidentally dumped-out box of 64 crayons
5. What makes Clustering more difficult than Classification?
A Not knowing the class labels ahead of time
B Doing lots of comparisons until you finally find the best clusters
C Dealing with items that don’t seem to fit well into any cluster
D All of the above
Sample questions and answer
1. Explain the concept of learning from example.
Each person will interpret a piece of information according to their level of
understanding and their own way of interpreting things.
2. What is meant by learning?
Learning is a goal-directed process of a system that improves the knowledge or the
Knowledge representation of the system by exploring experience and prior knowledge.
3. How statistical learning method differs from reinforcement learning method?
Reinforcement learning is learning what to do--how to map situations to actions--so as to
maximize a numerical reward signal. The learner is not told which actions to take, as in
most forms of machine learning, but instead must discover which actions yield the most
reward by trying them. In the most interesting and challenging cases, actions may affect
not only the immediate reward but also the next situation and, through that, all
subsequent rewards. These two characteristics--trial-and-error search and delayed
reward--are the two most important distinguishing features of reinforcement learning.
4. Define informational equivalence and computational equivalence.
A transformation from one representation to another causes no loss of
information; the representations can be constructed from each other.
The same information and the same inferences are achieved with the same amount
of effort.
5. Define knowledge acquisition and skill refinement.
Knowledge acquisition (example: learning physics): learning new symbolic
information coupled with the ability to apply that information in an effective
manner.
Skill refinement (example: riding a bicycle, playing the piano): occurs at a
subconscious level by virtue of repeated practice.
6. What is Explanation-Based Learning?
In Explanation-Based Learning, the background knowledge is sufficient to explain
the hypothesis. The agent does not learn anything factually new from the instance; it
extracts general rules from single examples by explaining the examples and generalizing
the explanation.
7. Define Knowledge-Based Inductive Learning.
Knowledge-Based Inductive Learning finds inductive hypotheses that explain a set
of observations with the help of background knowledge.
8. What is truth preserving?
An inference algorithm that derives only entailed sentences is called sound or truth
preserving.
9. Define inductive learning. How can the performance of inductive learning algorithms be
measured?
Learning a function from examples of its inputs and outputs is called inductive
learning.
Performance is measured by the learning curve, which shows the prediction accuracy
as a function of the number of observed examples.
10. List the advantages of Decision Trees
The advantages of Decision Trees are,
It is one of the simplest and most successful forms of learning algorithm.
It serves as a good introduction to the area of inductive learning and is
easy to implement.
11. What is the function of Decision Trees?
A decision tree takes as input an object or situation described by a set of properties,
and outputs a yes/no decision. A decision tree therefore represents a Boolean function.
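As an illustrative sketch, such a property-testing tree can be written directly as nested tests. The attributes `hungry`, `wait_estimate`, and `alternative` are invented for illustration, not taken from the text:

```python
# A hand-built decision tree for a "wait at the restaurant or not" style
# yes/no decision. Each internal node tests one property of the example;
# each leaf returns the decision. Attribute names are illustrative.
def decide(example):
    if not example["hungry"]:              # root test
        return "no"
    if example["wait_estimate"] <= 10:     # short wait: accept
        return "yes"
    # long wait: accept only if there is no alternative restaurant
    return "yes" if not example["alternative"] else "no"

print(decide({"hungry": True, "wait_estimate": 5, "alternative": True}))
```

Because every leaf returns "yes" or "no", the tree as a whole computes a Boolean function of its input properties.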
12. List some of the practical uses of decision tree learning.
Some of the practical uses of decision tree learning are,
Designing oil platform equipment
Learning to fly
13. What is the task of reinforcement learning?
The task of reinforcement learning is to use rewards to learn a successful agent
function.
14. Define Passive learner and Active learner.
A passive learner watches the world going by, and tries to learn the utility of being
in various states.
An active learner acts using the learned information, and can use its problem
generator to suggest explorations of unknown portions of the environment.
15. State the factors that play a role in the design of a learning system.
The factors that play a role in the design of a learning system are,
Learning element
Performance element
Critic
Problem generator
16. What is memoization?
Memoization is used to speed up programs by saving the results of computation.
The basic idea is to accumulate a database of input/output pairs; when the function is
called, it first checks the database to see if it can avoid solving the problem from scratch.
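As a hedged sketch, the input/output-pair caching described above might look like this in Python (the `memoize` decorator and the `fib` example are illustrative, not from the text):

```python
# Minimal memoization sketch: accumulate a database (a dict) of
# input/output pairs; on each call, check it before recomputing.
def memoize(fn):
    cache = {}                       # the input/output database
    def wrapper(*args):
        if args not in cache:        # solve from scratch only once
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # fast, since each subproblem is solved only once
```

Python's standard library offers the same idea ready-made as functools.lru_cache.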
17. Define Q-Learning.
The agent learns an action-value function giving the expected utility of taking a
given action in a given state. This is called Q-Learning.
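A hedged, minimal Q-learning sketch follows; the corridor environment, learning rate, discount factor, and exploration rate are all assumptions for illustration. The action-value table Q(s, a) is updated toward r + γ·max_b Q(s', b):

```python
import random

# Tiny corridor: states 0..3, actions -1/+1, reward 1 only on reaching
# state 3. Update: Q(s,a) += alpha * (r + gamma * max_b Q(s',b) - Q(s,a))
alpha, gamma, epsilon = 0.5, 0.9, 0.1
actions = (-1, 1)
Q = {(s, a): 0.0 for s in range(4) for a in actions}

random.seed(0)
for episode in range(200):
    s = 0
    while s != 3:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda b: Q[(s, b)])
        s2 = min(3, max(0, s + a))            # environment transition
        r = 1.0 if s2 == 3 else 0.0
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# greedy policy extracted from the learned action-value function
print({s: max(actions, key=lambda b: Q[(s, b)]) for s in range(3)})
```

The agent is never told to move right; it discovers the rewarding action by trial and error, exactly the behaviour the definition describes.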
18. Define supervised learning & unsupervised learning.
Any situation in which both inputs and outputs of a component can be perceived is
called supervised learning.
Learning when there is no hint at all about the correct outputs is called
unsupervised learning.
19. Define Bayesian learning.
Bayesian learning simply calculates the probability of each hypothesis, given the
data, and makes predictions on that basis. That is, the predictions are made by using all
the hypotheses, weighted by their probabilities, rather than by using just a single "best"
hypothesis.
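A small numerical sketch of this idea, assuming three hypothetical hypotheses about a coin's bias and a short, invented sequence of observed flips:

```python
# Bayesian learning sketch: weight every hypothesis by its posterior and
# predict by averaging over all of them, not by picking one "best" one.
biases = [0.25, 0.5, 0.75]      # P(heads) under each hypothesis (assumed)
prior = [1 / 3, 1 / 3, 1 / 3]   # P(hypothesis) before seeing any data
data = ["H", "H", "T", "H"]     # observed coin flips (illustrative)

# posterior is proportional to prior times likelihood of the data
post = []
for b, p in zip(biases, prior):
    like = 1.0
    for flip in data:
        like *= b if flip == "H" else (1 - b)
    post.append(p * like)
total = sum(post)
post = [p / total for p in post]

# prediction for the next flip: average over all hypotheses
p_heads = sum(p * b for p, b in zip(post, biases))
print(post, p_heads)
```

Note that the predicted probability of heads (29/46, about 0.63) matches none of the three hypotheses exactly: it is the posterior-weighted mixture of all of them.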
20. What is utility-based agent?
A utility-based agent learns a utility function on states and uses it to select actions
that maximize the expected outcome utility.
21. What is reinforcement learning?
Reinforcement learning refers to a class of problems in machine learning which
postulate an agent exploring an environment in which the agent perceives its current state
and takes actions. The environment, in return, provides a reward (which can be positive or
negative). Reinforcement learning algorithms attempt to find a policy for maximizing
cumulative reward for the agent over the course of the problem.
22. What is the important task of reinforcement learning?
The important task of reinforcement learning is to use rewards to learn a successful
agent function.
Test your skills
1. What is the difference between the terms Classification and Clustering?
2. Explain whether Classification or Clustering is harder and why.
MCQ with explanation (VVI)
Artificial Intelligence
Questions 1 to 10
1. What is Artificial intelligence?
(a) Putting your intelligence into Computer
(b) Programming with your own intelligence
(c) Making a Machine intelligent
(d) Playing a Game
(e) Putting more memory into Computer
2. Which is not the commonly used programming language for AI?
(a) PROLOG (b) Java (c) LISP (d) Perl (e) JavaScript.
3. What is state space?
(a) The whole problem
(b) Your Definition to a problem
(c) Problem you design
(d) Representing your problem with variable and parameter
(e) A space where you know the solution.
4. A production rule consists of
(a) A set of rules (b) A sequence of steps
(c) Both (a) and (b) (d) Arbitrary representation to problem
(e) Directly getting solution.
5. Which search method takes less memory?
(a) Depth-First Search (b) Breadth-First search
(c) Both (a) and (b) (d) Linear Search.
(e) Optimal search.
6. A heuristic is a way of trying
(a) To discover something or an idea embedded in a program
(b) To search and measure how far a node in a search tree seems to be from a goal
(c) To compare two nodes in a search tree to see if one is better than the other
(d) Only (a) and (b)
(e) Only (a), (b) and (c).
7. A* algorithm is based on
(a) Breadth-First-Search (b) Depth-First –Search
(c) Best-First-Search (d) Hill climbing.
(e) Bulkworld Problem.
8. Which is the best way to go for Game playing problem?
(a) Linear approach (b) Heuristic approach
(c) Random approach (d) Optimal approach
(e) Stratified approach.
9. How do you represent "All dogs have tails"?
(a) ∀x: dog(x)→hastail(x) (b) ∀x: dog(x)→hastail(y)
(c) ∀x: dog(y)→hastail(x) (d) ∀x: dog(x)→has→tail(x)
(e) ∀x: dog(x)→has→tail(y)
10. Which is not a property of representation of knowledge?
(a) Representational Verification (b) Representational Adequacy
(c) Inferential Adequacy (d) Inferential Efficiency
(e) Acquisitional Efficiency.
Answers
1. Answer : (c) Q. What is Artificial Intelligence?
Reason : Because AI is about making a machine act intelligently without constant human
effort: given only the input from a human, the machine works out the result on
its own, behaving as the task requires.
2. Answer : (d)
Reason : Because Perl is used as a scripting language and is not of much use in AI
practice. All the others have been used to write AI programs to a great extent.
3. Answer : (d) Q. What is state space? Solve the water jug problem.
Reason : State space is concerned with representing a problem: when you try to solve a
problem, you have to design a mathematical structure for it, which can only be
done through variables and parameters. Example: you are given a 4-gallon jug
and a 3-gallon jug, neither of which has a measuring marker on it. You have to
fill the jugs with water so as to get exactly 2 gallons into the 4-gallon jug. Here
the state space can be defined as the set of ordered pairs of integers (x, y) such
that x = 0, 1, 2, 3 or 4 and y = 0, 1, 2 or 3; x represents the number of gallons
in the 4-gallon jug and y represents the quantity of water in the 3-gallon jug.
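The state space above can be searched mechanically. The breadth-first sketch below over the (x, y) pairs is illustrative; the move set (fill, empty, pour) follows the standard formulation of the puzzle:

```python
from collections import deque

# Breadth-first search over the water-jug state space (x, y):
# x gallons in the 4-gallon jug, y gallons in the 3-gallon jug.
def successors(x, y):
    return {
        (4, y), (x, 3),                              # fill a jug
        (0, y), (x, 0),                              # empty a jug
        (min(4, x + y), y - (min(4, x + y) - x)),    # pour 3-gal into 4-gal
        (x - (min(3, x + y) - y), min(3, x + y)),    # pour 4-gal into 3-gal
    }

def solve(start=(0, 0), goal_x=2):
    frontier, parent = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state[0] == goal_x:        # exactly 2 gallons in the 4-gallon jug
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(*state):
            if nxt not in parent:     # parent map doubles as a visited set
                parent[nxt] = state
                frontier.append(nxt)

path = solve()
print(path)
```

Because the search is breadth-first, the returned sequence of (x, y) states is a shortest solution through the state space just defined.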
4. Answer : (c)
Reason : When you are trying to solve a problem, you should design a step-by-step
solution subject to the constraint conditions of your problem, e.g., the
chessboard problem.
5. Answer : (a)
Reason : Depth-First Search takes less memory since only the nodes on the current path
are stored; in Breadth-First Search, all of the tree generated so far must be
stored.
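This memory difference can be illustrated by measuring the peak frontier size on a complete binary tree; the depth and bit-string node encoding below are assumptions for illustration:

```python
# Same search loop, two disciplines: pop from the end (stack -> DFS) or
# from the front (queue -> BFS), tracking the largest frontier reached.
d = 10  # depth of a complete binary tree (illustrative)

def children(node):
    # nodes are bit-strings; leaves are strings of length d
    return [] if len(node) == d else [node + "0", node + "1"]

def max_frontier(pop_index):
    frontier, peak = [""], 1
    while frontier:
        node = frontier.pop(pop_index)   # -1: DFS stack, 0: BFS queue
        frontier.extend(children(node))
        peak = max(peak, len(frontier))
    return peak

print("DFS peak:", max_frontier(-1))  # grows with the depth d
print("BFS peak:", max_frontier(0))   # grows with the breadth 2**d
```

For depth 10 the DFS frontier never exceeds 11 nodes (one per level of the current path), while the BFS frontier peaks at 1024 nodes, the size of the deepest level.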
6. Answer : (e)
Reason : In a heuristic approach we discover certain ideas and use heuristic functions to
search for a goal, and predicates to compare nodes.
7. Answer : (c)
Reason : Because Best-First Search embodies the ideas of optimization and quick choice
of path, and these characteristics lie in the A* algorithm.
8. Answer : (b)
Reason : We use a heuristic approach because it avoids the brute-force computation of
looking at hundreds of thousands of positions, e.g., a chess competition between
a human and an AI-based computer.
9. Answer : (a)
Reason : We represent the statement in mathematical logic taking x as the dog that has a
tail. We cannot use two variables x and y for the same object, the dog with the
tail. The symbol ∀ represents "for all".
10. Answer : (a)
Reason : There is no property called representational verification; verification is
subsumed under representational adequacy.
Artificial Intelligence
Questions 11 to 20
11. What are you predicating by the logic: ∀x: ∃y: loyalto(x, y)?
(a) Everyone is loyal to some one (b) Everyone is loyal to all
(c) Everyone is not loyal to someone (d) Everyone is loyal
(e) Everyone is not loyal.
12. Which is not a familiar connective in First Order Logic?
(a) and (b) iff (c) or (d) not (e) either a or b.
13. Which is not a type of First Order Logic (FOL) Sentence?
(a) Atomic sentences (b) Complex sentences
(c) Quantified sentence (d) Quality Sentence
(e) Simple sentence.
14. Which is not a Goal-based agent?
(a) Inference (b) Search (c) Planning
(d) Conclusion (e) Dynamic search.
15. A plan that describe how to take actions in levels of increasing refinement and specificity is
(a) Problem solving (b) Planning
(c) Non-hierarchical plan (d) Hierarchical plan (e) Inheritance.
16. A constructive approach in which no commitment is made unless it is necessary to do so, is
(a) Least commitment approach (b) Most commitment approach
(c) Nonlinear planning (d) Opportunistic planning
(e) Problem based planning.
17. Partial order planning involves
(a) Searching over the space of possible plans
(b) Searching over possible situations
(c) Searching the whole problem at once
(d) Searching the best
(e) Searching the goal.
18. Which is true for Decision theory?
(a) Decision Theory = Probability theory + utility theory
(b) Decision Theory = Inference theory + utility theory
(c) Decision Theory = Uncertainty + utility theory
(d) Decision Theory = Probability theory + preference
(e) Decision Theory = Probability theory + inference.
19. Uncertainty arises in the wumpus world because the agent’s sensors give only
(a) Full & Global information (b) Partial & Global Information
(c) Partial & local Information (d) Full & local information
(e) Global information only.
20. A Hybrid Bayesian network contains
(a) Both discrete and continuous variables
(b) Only Discrete variables
(c) Only Discontinuous variable
(d) Both Discrete and Discontinuous variable
(e) Continuous variables only.
Answers
11. Answer : (a)
Reason : ∀x denotes everyone or all, ∃y denotes someone, and loyalto is the predicate
mapping x to y.
12. Answer : (d)
Reason : "not" applies to a single sentence rather than connecting two, so it is not
counted here among the familiar connectives.
13. Answer : (d)
Reason : There is no "quality" sentence type in FOL, while all the others are.
14. Answer : (d)
Reason : A conclusion is a statement produced by a goal-based agent, but it is not itself
a kind of goal-based agent.
15. Answer : (d)
Reason : A plan that describes how to take actions in levels of increasing refinement and
specificity is Hierarchical (e.g., "Do something" becomes the more specific "Go to
work," "Do work," "Go home.") Most plans are hierarchical in nature.
16. Answer : (a)
Reason : Because no commitment should be made while we are still unsure about the
outcome.
17. Answer : (a)
Reason : Partial order planning involves searching over the space of possible plans, rather
than searching over possible situations. The idea is to construct a plan piece-by-
piece. There are two kinds of steps we can take in constructing a plan: add an
operator (action), or add an ordering constraint between operators. The name
"partial order planning" comes from the fact that until we add the ordering
constraints, we don't specify the order in which actions are taken. This
(sometimes) allows a partial-order planner to avoid lots of backtracking that
would slow down a state-space planner.
18. Answer : (a)
Reason : Utility theory is used to represent and reason with preferences; preferences are
expressed by utilities. Utilities are combined with probabilities in the general
theory of rational decisions called decision theory. Decision theory, which
combines probability theory with utility theory, provides a formal and complete
framework for decisions (economic or otherwise) made under uncertainty, that
is, in cases where probabilistic descriptions appropriately capture the
decision-maker's environment.
19. Answer : (c)
Reason : The Wumpus world is a grid of squares surrounded by walls, where each square
can contain agents and objects. The agent (you) always starts in the lower left
corner, a square labeled [1, 1]. The agent's task is to find the gold, return to
[1, 1] and climb out of the cave. Uncertainty arises because the agent's sensors
give only partial and local information; global information alone is not specific
to this goal-directed problem.
20. Answer : (a)
Reason : To specify a Hybrid network, we have to specify two new kinds of distributions:
the conditional distribution for continuous variables given discrete or continuous
parents, and the conditional distribution for a discrete variable given continuous
parents.
Artificial Intelligence
Questions 21 to 30
21. Which is not a desirable property of a logical rule-based system?
(a) Locality (b) Attachment (c) Detachment
(d) Truth-Functionality (e) Global attribute.
22. How is Fuzzy Logic different from conventional control methods?
(a) IF and THEN Approach (b) FOR Approach
(c) WHILE Approach (d) DO Approach
(e) Else If approach.
23. In an Unsupervised learning
(a) Specific output values are given
(b) Specific output values are not given
(c) No specific Inputs are given
(d) Both inputs and outputs are given
(e) Neither inputs nor outputs are given.
24. Inductive learning involves finding a
(a) Consistent Hypothesis (b) Inconsistent Hypothesis
(c) Regular Hypothesis (d) Irregular Hypothesis
(e) Estimated Hypothesis.
25. Computational learning theory analyzes the sample complexity and computational complexity
of
(a) UnSupervised Learning (b) Inductive learning
(c) Forced based learning (d) Weak learning
(e) Knowledge based learning.
26. If a hypothesis says it should be positive, but in fact it is negative, we call it
(a) A consistent hypothesis (b) A false negative hypothesis
(c) A false positive hypothesis (d) A specialized hypothesis
(e) A true positive hypothesis.
27. Neural Networks are complex -----------------------with many parameters.
(a) Linear Functions (b) Nonlinear Functions
(c) Discrete Functions (d) Exponential Functions
(e) Power Functions.
28. A perceptron is a --------------------------------.
(a) Feed-forward neural network (b) Back-propagation algorithm
(c) Back-tracking algorithm (d) Feed Forward-backward algorithm
(e) Optimal algorithm with Dynamic programming.
29. Which is true?
(a) Not all formal languages are context-free
(b) All formal languages are Context free
(c) All formal languages are like natural language
(d) Natural languages are context-oriented free
(e) Natural language is formal.
30. Which is not true?
(a) The union and concatenation of two context-free languages is context-free
(b) The reverse of a context-free language is context-free, but the complement need not be
(c) Every regular language is context-free because it can be described by a regular grammar
(d) The intersection of a context-free language and a regular language is always context-free
(e) The intersection two context-free languages is context-free.
Answers
21. Answer : (b)
Reason : Locality: in logical systems, whenever we have a rule of the form A => B, we can
conclude B, given evidence A, without worrying about any other rules.
Detachment: once a logical proof is found for a proposition B, the proposition can
be used regardless of how it was derived; that is, it can be detached from its
justification. Truth-functionality: in logic, the truth of complex sentences can be
computed from the truth of their components. But there is no attachment
property in a rule-based system. A global attribute defines a particular problem
space as user-specific and changes according to the user's plan for the problem.
22. Answer : (a) Q.What is fuzzy logic? Explain.
Reason : FL incorporates a simple, rule-based IF X AND Y THEN Z approach to solving a
control problem rather than attempting to model the system mathematically. The
FL model is empirically based, relying on an operator's experience rather than
their technical understanding of the system. For example, rather than dealing
with temperature control in terms such as "SP = 500F", "T < 1000F", or
"210C < TEMP < 220C", terms like "IF (process is too cool) AND (process is
getting colder) THEN (add heat to the process)" or "IF (process is too hot) AND
(process is heating rapidly) THEN (cool the process quickly)" are used. These
terms are imprecise and yet very descriptive of what must actually happen.
Consider what you do in the shower if the temperature is too cold: you make the
water comfortable very quickly with little trouble. FL is capable of mimicking this
type of behavior, but at a much higher rate.
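A toy Python sketch of one such IF X AND Y THEN Z rule, using min for AND; the membership functions and thresholds below are invented for illustration:

```python
# Fuzzy rule: IF (process is too cool) AND (process is getting colder)
# THEN (add heat). Membership degrees lie in [0, 1]; AND is taken as min.
def too_cool(temp_c):
    # fully true at 18 degrees C or below, fully false at 21 or above
    return max(0.0, min(1.0, (21.0 - temp_c) / 3.0))

def getting_colder(rate_c_per_min):
    # fully true when the temperature falls 2 degrees C/min or faster
    return max(0.0, min(1.0, -rate_c_per_min / 2.0))

def add_heat_strength(temp_c, rate_c_per_min):
    # rule firing strength: AND of the two antecedent memberships
    return min(too_cool(temp_c), getting_colder(rate_c_per_min))

print(add_heat_strength(19.5, -1.0))  # a partial degree of firing
```

Unlike a crisp IF/THEN, the rule fires to a degree: a slightly-too-cool, slowly-cooling process gets a small heating command rather than an all-or-nothing switch.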
23. Answer : (b) Q. What is unsupervised learning?
Reason : The problem of unsupervised learning involves learning patterns in the input
when no specific output values are supplied. There is no specified output against
which to test the result; the agent does not know in advance what the system
should produce, and must discover regularities in the input on its own.
24. Answer : (a)
Reason : Inductive learning involves finding a consistent hypothesis that agrees with
examples. The difficulty of the task depends on the chosen representation.
25. Answer : (b)
Reason : Computational learning theory analyzes the sample complexity and computational
complexity of inductive learning. There is a trade off between the expressiveness
of the hypothesis language and the ease of learning.
26. Answer : (c) Q. What are false positive and false negative hypotheses?
Reason : A consistent hypothesis agrees with the examples. If the hypothesis says an
example should be negative but in fact it is positive, it is a false negative. If a
hypothesis says it should be positive, but in fact it is negative, it is a false
positive. In a specialized hypothesis we need to impose certain restrictions or
special conditions.
27. Answer : (b) Q. Why do neural networks use nonlinear functions?
Reason : Neural network parameters can be learned from noisy data, and neural networks
have been used for thousands of applications; since the required behavior varies
from problem to problem, they use nonlinear functions.
28. Answer : (a) Q. What is a perceptron?
Reason : A perceptron is a feed-forward neural network with no hidden units that can
represent only linearly separable functions. If the data are linearly separable, a
simple weight-update rule can be used to fit the data exactly.
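A minimal sketch of such a perceptron and its simple weight-update rule, trained here on the linearly separable AND function; the data set, learning rate, and epoch count are illustrative choices:

```python
# Perceptron: threshold unit with bias w[0]; the update rule nudges the
# weights by (target - prediction) on every training example.
def predict(w, x):
    return 1 if w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)) > 0 else 0

def train(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0, 0.0]               # bias plus one weight per input
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(w, x)
            w[0] += lr * err          # bias update
            for i in range(len(x)):
                w[i + 1] += lr * err * x[i]
    return w

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train(AND)
print([predict(w, x) for x, _ in AND])  # [0, 0, 0, 1]
```

On a non-separable target such as XOR the same loop would never converge, which is exactly the representational limit the answer describes.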
29. Answer : (a)
Reason : Not all formal languages are context-free; a well-known counterexample is
{aⁿbⁿcⁿ | n ≥ 1}. This particular language can be generated by a parsing
expression grammar, a relatively new formalism that is particularly well suited
to programming languages.
30. Answer : (e)
Reason : The union and concatenation of two context-free languages is context-free; but
intersection need not be.
Artificial Intelligence
Questions 31 to 40
31. What is Cybernetics?
(a) Study of communication between two machines
(b) Study of communication between human and machine
(c) Study of communication between two humans
(d) Study of Boolean values
(e) Study of communication between logic circuits.
32. What is the goal of artificial intelligence?
(a) To solve real-world problems
(b) To solve artificial problems
(c) To explain various sorts of intelligence
(d) To extract scientific causes
(e) To restrict problems.
33. An algorithm is complete if
(a) It terminates with a solution when one exists
(b) It starts with a solution
(c) It does not terminate with a solution
(d) It has a loop
(e) It has a decision parameter.
34. Which is true regarding BFS?
(a) BFS will get trapped exploring a single path
(b) The entire tree so far been generated must be stored in BFS
(c) BFS is not guaranteed to find a solution, if exists
(d) BFS is nothing but Binary First Search
(e) BFS is one type of sorting.
35. What is a heuristic function?
(a) A function to solve mathematical problems
(b) A function which takes parameters of type string and returns an integer value
(c) A function whose return type is nothing
(d) A function which returns an object
(e) A function that maps from problem state descriptions to measures of desirability.
36. The traveling salesman problem involves n cities with paths connecting the cities. The
time taken for traversing through all the cities, without knowing in advance the length of
a minimum tour, is
(a) O(n)
(b) O(n²)
(c) O(n!)
(d) O(n/2)
(e) O(2ⁿ).
37. The problem space of means-end analysis has
(a) An initial state and one or more goal states
(b) One or more initial states and one goal state
(c) One or more initial states and one or more goal state
(d) One initial state and one goal state
(e) No goal state.
38. An algorithm A is admissible if
(a) It is not guaranteed to return an optimal solution when one exists
(b) It is guaranteed to return an optimal solution when one exists
(c) It returns more solutions, but not an optimal one
(d) It guarantees to return more optimal solutions
(e) It returns no solutions at all.
39. Knowledge may be
I. Declarative.
II. Procedural.
III. Non-procedural.
(a) Only (I) above
(b) Only (II) above
(c) Only (III) above
(d) Both (I) and (II) above
(e) Both (II) and (III) above.
40. Idempotency law is
I. P ∨ P = P.
II. P ∧ P = P.
III. P + P = P.
(a) Only (I) above
(b) Only (II) above
(c) Only (III) above
(d) Both (I) and (II) above
(e) Both (II) and (III) above.
Answers
31. Answer : (b)
Reason : Cybernetics is Study of communication between human and machine
32. Answer : (c) Q. What is the goal of artificial intelligence?
Reason : The scientific goal of artificial intelligence is to explain various sorts of
intelligence
33. Answer : (a)
Reason : An Algorithm is complete if It terminates with a solution when one exists.
34. Answer : (b)
Reason : Regarding BFS-The entire tree so far been generated must be stored in BFS.
35. Answer : (e) Q. What is heuristic function?
Reason : Heuristic function is a function that maps from problem state descriptions to
measures of desirability
36. Answer : (c)
Reason : The traveling salesman problem involves n cities with paths connecting the
cities. The time taken for traversing through all the cities, without knowing in
advance the length of a minimum tour, is O(n!)
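The O(n!) growth can be seen directly by enumerating every tour; the 4-city distance matrix below is an illustrative assumption:

```python
from itertools import permutations

# Brute-force TSP: with no advance knowledge of the minimum tour length,
# we must consider all n! orderings of the n cities.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]

def tour_length(order):
    # sum the edges of the tour, closing the loop back to the start city
    return sum(dist[a][b] for a, b in zip(order, order[1:] + order[:1]))

# 4 cities -> 4! = 24 candidate orderings examined
best = min(permutations(range(4)), key=lambda t: tour_length(list(t)))
print(best, tour_length(list(best)))
```

Already at n = 20 this enumeration would need 20! (about 2.4 × 10¹⁸) tours, which is why the brute-force time bound is O(n!).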
37. Answer : (a)
Reason : The problem space of means-end analysis has an initial state and one or more
goal states
38. Answer : (b) Q. What is the condition for admissibility of an algorithm?
Reason : An algorithm A is admissible if It is guaranteed to return an optimal solution
when one exists.
39. Answer : (d)
Reason : Knowledge may be declarative and procedural
40. Answer : (a)
Reason : Idempotency law is P ∨ P = P
Artificial Intelligence
Questions 41 to 50
41. Which of the following is true related to the ‘Satisfiable’ property?
(a) A statement is satisfiable if there is some interpretation for which it is false
(b) A statement is satisfiable if there is some interpretation for which it is true
(c) A statement is satisfiable if there is no interpretation for which it is true
(d) A statement is satisfiable if there is no interpretation for which it is false
(e) None of the above.
42. Two literals are complementary if
(a) They are equal
(b) They are identical and of equal sign
(c) They are identical but of opposite sign
(d) They are unequal but of equal sign
(e) They are unequal but of opposite sign.
43. Consider a good system for the representation of knowledge in a particular domain.
What property should it possess?
(a) Representational Adequacy
(b) Inferential Adequacy
(c) Inferential Efficiency
(d) Acquisitional Efficiency
(e) All the above.
44. What is the Transposition rule?
(a) From P → Q, infer ~Q → P
(b) From P → Q, infer Q → ~P
(c) From P → Q, infer Q → P
(d) From P → Q, infer ~Q → ~P
(e) None of the above.
45. The third component of a planning system is to
(a) Detect when a solution has been found
(b) Detect when solution will be found
(c) Detect whether solution exists or not
(d) Detect whether multiple solutions exist
(e) Detect a solutionless system.
46. Which of the following is true in statistical reasoning?
(a) The representation is extended to allow some kind of numeric measure of certainty
to be associated with each statement
(b) The representation is extended to allow ‘TRUE or FALSE’ to be associated with
each statement
(c) The representation is extended to allow some kind of numeric measure of certainty
to be associated common to all statements
(d) The representation is extended to allow ‘TRUE or FALSE’ to be associated
common to all statements
(e) None of the above.
47. In default logic, we allow inference rules of the form
(a) (A : B) / C
(b) A / (B : C)
(c) A/B
(d) A/B:C
(e) (A: B) :C.
48. In Bayes’ theorem, what is meant by P(Hi|E)?
(a) The probability that hypothesis Hi is true given evidence E
(b) The probability that hypothesis Hi is false given evidence E
(c) The probability that hypothesis Hi is true given false evidence E
(d) The probability that hypothesis Hi is false given false evidence E
(e) The probability that hypothesis Hi is true given unexpected evidence E.
49. Default reasoning is another type of
(a) Monotonic reasoning
(b) Analogical reasoning
(c) Bitonic reasoning
(d) Non-monotonic reasoning
(e) Closed world assumption.
50. Generality is the measure of
(a) Ease with which the method can be adapted to different domains of application
(b) The average time required to construct the target knowledge structures from
some specified initial structures
(c) A learning system to function with unreliable feedback and with a variety of
training examples
(d) The overall power of the system
(e) Subdividing the system.
Answers
41. Answer : (b)
Reason : A statement is satisfiable if there is some interpretation for which it is true.
42. Answer : (c)
Reason : Two literals are complementary if they are identical but of opposite sign.
43. Answer : (e)
Reason : A good system for the representation of knowledge in a particular domain
should possess Representational Adequacy, Inferential Adequacy, Inferential
Efficiency and Acquisitional Efficiency.
44. Answer : (d)
Reason : Transposition rule- From P → Q, infer ~Q → ~P
45. Answer : (a)
Reason : Third component of a planning system is to detect when a solution has been
found.
46. Answer : (a)
Reason : In statistical reasoning, the representation is extended to allow some kind of
numeric measure of certainty to be associated with each statement.
47. Answer : (a)
Reason : In default logic, we allow inference rules of the form:(A : B) / C
48. Answer : (a)
Reason : In Bayes’ theorem, P(Hi|E) is the probability that hypothesis Hi is true given
evidence E.
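A numerical sketch of this definition for two hypothetical hypotheses, with all probabilities invented for illustration:

```python
# Bayes' theorem: P(Hi|E) = P(E|Hi) * P(Hi) / P(E),
# where P(E) = sum over i of P(E|Hi) * P(Hi).
prior = {"H1": 0.8, "H2": 0.2}          # P(Hi), before the evidence
likelihood = {"H1": 0.1, "H2": 0.9}     # P(E|Hi), illustrative values

p_e = sum(prior[h] * likelihood[h] for h in prior)              # P(E)
posterior = {h: prior[h] * likelihood[h] / p_e for h in prior}  # P(Hi|E)
print(posterior)
```

Even though H1 starts out four times as probable, the evidence E (nine times likelier under H2) leaves H2 with the larger posterior, about 0.69 to 0.31.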
49. Answer : (d)
Reason : Default reasoning is another type of non-monotonic reasoning
50. Answer : (a)
Reason : Generality is the measure of ease with which the method can be adapted to
different domains of application.
Artificial Intelligence
Questions 51 to 60
51. Machine learning is
(a) The autonomous acquisition of knowledge through the use of computer
programs
(b) The autonomous acquisition of knowledge through the use of manual programs
(c) The selective acquisition of knowledge through the use of computer programs
(d) The selective acquisition of knowledge through the use of manual programs
(e) None of the above.
52. Factors which affect the performance of a learner system do not include
(a) Representation scheme used
(b) Training scenario
(c) Type of feedback
(d) Good data structures
(e) Learning algorithm.
53. Different learning methods do not include
(a) Memorization
(b) Analogy
(c) Deduction
(d) Introduction
(e) Acceptance.
54. In language understanding, the levels of knowledge do not include
(a) Phonological
(b) Syntactic
(c) Semantic
(d) Logical
(e) Empirical.
55. A model of language consists of categories which do not include
(a) Language units
(b) Role structure of units
(c) System constraints
(d) Structural units
(e) Components.
56. Semantic grammars
(a) Encode semantic information into a syntactic grammar
(b) Decode semantic information into a syntactic grammar
(c) Encode syntactic information into a semantic grammar
(d) Decode syntactic information into a semantic grammar
(e) Encode syntactic information into a logical grammar.
57. What is a top-down parser?
(a) Begins by hypothesizing a sentence (the symbol S) and successively predicting
lower level constituents until individual preterminal symbols are written
(b) Begins by hypothesizing a sentence (the symbol S) and successively predicting
upper level constituents until individual preterminal symbols are written
(c) Begins by hypothesizing lower level constituents and successively predicting a
sentence (the symbol S)
(d) Begins by hypothesizing upper level constituents and successively predicting a
sentence (the symbol S)
(e) All the above.
58. Perception involves
(a) Sights, sounds, smell and touch
(b) Hitting
(c) Boxing
(d) Dancing
(e) Acting.
59. Among the following, which is not a Horn clause?
(a) p
(b) ~p V q
(c) p → q
(d) ~p → q
(e) All the above.
60. The action ‘STACK(A, B)’ of a robot arm specifies to
(a) Place block B on Block A
(b) Place blocks A, B on the table in that order
(c) Place blocks B, A on the table in that order
(d) Place block A on block B
(e) POP A, B from stack.
Answers
51. Answer : (a) Q. What is machine learning?
Reason : Machine learning is the autonomous acquisition of knowledge through the use
of computer programs.
52. Answer : (d)
Reason : Factors which affect the performance of a learner system do not include good
data structures.
53. Answer : (d)
Reason : Different learning methods do not include introduction.
54. Answer : (e)
Reason : In language understanding, the levels of knowledge do not include empirical
knowledge.
55. Answer : (d)
Reason : A model of language consists of categories which do not include structural
units.
56. Answer : (a)
Reason : Semantic grammars encode semantic information into a syntactic grammar.
57. Answer : (a)
Reason : A top-down parser begins by hypothesizing a sentence (the symbol S) and
successively predicting lower level constituents until individual preterminal symbols
are written.
58. Answer : (a)
Reason : Perception involves Sights, sounds, smell and touch.
59. Answer : (d)
Reason : ~p → q is equivalent to p V q, which has two positive literals and is therefore
not a Horn clause.
60. Answer : (d)
Reason : The action ‘STACK(A, B)’ of a robot arm specifies placing block A on block B.