AI Unit 1 Part 1 Notes
SYLLABUS:
Problem Solving by Search-I: Introduction to AI, Intelligent Agents
Problem Solving by Search –II: Problem-Solving Agents, Searching for Solutions,
Uninformed Search Strategies: Breadth-first search, Uniform cost search, Depth-first search,
Iterative deepening Depth-first search, Bidirectional search,
INTRODUCTION TO AI
➢ WHAT IS AI
➢ THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE
➢ THE HISTORY OF ARTIFICIAL INTELLIGENCE
➢ THE STATE OF THE ART
WHAT IS AI?
Definitions of AI can be organized into four categories:
1. Thinking Humanly
2. Thinking Rationally
3. Acting Humanly
4. Acting Rationally
Thinking Humanly:
• "The exciting new effort to make computers think ... machines with minds, in the full and literal sense." (Haugeland, 1985)
• "[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning ..." (Bellman, 1978)
Thinking Rationally:
• "The study of mental faculties through the use of computational models." (Charniak and McDermott, 1985)
• "The study of the computations that make it possible to perceive, reason, and act." (Winston, 1992)
Acting Humanly:
• "The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil, 1990)
• "The study of how to make computers do things at which, at the moment, people are better." (Rich and Knight, 1991)
Acting Rationally:
• "Computational Intelligence is the study of the design of intelligent agents." (Poole et al., 1998)
• "AI ... is concerned with intelligent behavior in artifacts." (Nilsson, 1998)
Figure 1.1 Some definitions of artificial intelligence, organized into four categories.
Historically, all four approaches to AI have been followed, each by different people with different methods.
A human-centered approach must be in part an empirical science, involving observations and hypotheses about human behavior.
A rationalist approach involves a combination of mathematics and engineering. The various groups have both disparaged and helped each other.
Acting humanly: The Turing Test approach
The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory
operational definition of intelligence. A computer passes the test if a human interrogator, after
posing some written questions, cannot tell whether the written responses come from a person
or from a computer.
The computer would need to possess the following capabilities:
• natural language processing to enable it to communicate successfully in English;
• knowledge representation to store what it knows or hears;
• automated reasoning to use the stored information to answer questions and to draw
new conclusions;
• machine learning to adapt to new circumstances and to detect and extrapolate patterns.
To pass the total Turing Test, the computer will additionally need:
• computer vision to perceive objects, and
• robotics to manipulate objects and move about.
These six disciplines compose most of AI, and Turing deserves credit for designing a test that remains relevant 60 years later.
THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE
Philosophy
• Can formal rules be used to draw valid conclusions?
• How does the mind arise from a physical brain?
• Where does knowledge come from?
• How does knowledge lead to action?
Philosophical inquiries into the nature of knowledge, reasoning, and action have laid the
groundwork for understanding how AI systems should operate. Thinkers like Aristotle,
Descartes, and Hobbes explored questions about formal reasoning, the relationship
between mind and body, and the sources of knowledge and action.
Mathematics
• What are the formal rules to draw valid conclusions?
• What can be computed?
• How do we reason with uncertain information?
Mathematics provided the formal tools necessary for reasoning and computation. From
the development of logic by Boole and Frege to the exploration of computability by
Turing, mathematical concepts formed the backbone of AI, enabling the understanding
of what can be computed and what problems are tractable or intractable.
Economics
• How should we make decisions so as to maximize payoff?
• How should we do this when others may not go along?
• How should we do this when the payoff may be far in the future?
Economic theories introduced the concept of decision-making under uncertainty and the
optimization of outcomes. Scholars like Walras, von Neumann, and Morgenstern laid the
foundation for decision theory and game theory, which have implications for rational
decision-making in AI agents.
Neuroscience
• How do brains process information?
The study of the nervous system, particularly the brain, has revealed much about how
information is processed.
Through the discovery of localized brain functions by scientists like Paul Broca and the
development of techniques like staining by Camillo Golgi, we've gained insight into the
structure and function of neurons. Advances in technology, such as electroencephalography
(EEG) and functional magnetic resonance imaging (fMRI), have furthered our understanding
of brain activity. However, the exact mechanisms of cognition and memory storage remain
elusive.
Psychology
• How do humans and animals think and act?
The origins of scientific psychology can be traced back to figures like Hermann von
Helmholtz and Wilhelm Wundt, who applied scientific methods to the study of human
behaviour. Behaviourism, led by John Watson, focused on observable behaviours rather than subjective mental processes.
Cognitive psychology, influenced by William James and others, views the brain as an
information-processing device, emphasizing mental processes like perception and memory.
Computer engineering
• How can we build an efficient computer?
The development of computers, from early mechanical devices to modern electronic
systems, has revolutionized technology. Figures like Alan Turing and Konrad Zuse played key
roles in pioneering programmable computers. Moore's Law, which describes the doubling of
computer performance roughly every 18 months, has driven continuous advancements in
computing power.
Linguistics
• How does language relate to thought?
Noam Chomsky's critique of behaviorist theories of language learning in the 1950s led to the
emergence of modern linguistics. Chomsky's syntactic models, rooted in formal grammatical
structures, challenged behaviorist ideas and paved the way for computational linguistics.
Understanding language involves more than just syntax—it requires context and subject
matter knowledge, highlighting the complexity of natural language processing.
Overall, the interdisciplinary nature of AI draws on a rich tapestry of knowledge, combining insights from philosophy, mathematics, economics, neuroscience, psychology, cybernetics, and linguistics to create intelligent systems capable of reasoning, learning, and decision-making, while contributing to our understanding of the human mind and the advancement of technology.
THE HISTORY OF ARTIFICIAL INTELLIGENCE
The evolution of Artificial Intelligence (AI) from its early days to the present, highlighting key developments, challenges, and shifts in focus within the field:
1. Blocks World and Early AI Pioneers: The blocks world served as a foundational
microworld for AI research, with notable contributions from David Huffman, David Waltz,
Patrick Winston, Terry Winograd, and Scott Fahlman. Early work on neural networks, such
as perceptrons and adalines, laid the groundwork for subsequent research.
[Figure: a blocks-world scene of stacked blocks labeled Red, Green, and Blue.]
2. Dose of Reality (1966–1973): Early optimism about AI's potential gave way to
recognition of significant challenges, including the need for domain-specific knowledge
and the limitations of early problem-solving approaches.
3. Knowledge-Based Systems (1969–1979): The emergence of knowledge-based systems,
exemplified by projects like DENDRAL and MYCIN, demonstrated the importance of
domain-specific knowledge for solving complex problems.
4. AI Becomes an Industry (1980–present): The 1980s saw the commercialization of AI
with the development of expert systems. However, the period was followed by an "AI
Winter" characterized by unmet expectations and financial setbacks.
5. The Return of Neural Networks (1986–present): Neural networks experienced a
resurgence in the mid-1980s with the rediscovery of back-propagation. Connectionist
models offered an alternative to symbolic AI approaches, leading to advancements in
machine learning and data mining.
6. AI Adopts the Scientific Method (1987–present): AI shifted towards a more rigorous
and empirical approach, embracing existing theories and methodologies from fields like
control theory and statistics.
7. The Emergence of Intelligent Agents (1995–present): Research shifted towards
building complete agent architectures, with applications expanding to internet-based
systems. Efforts towards Human-Level AI (HLAI) and Artificial General Intelligence (AGI)
gained traction.
8. The Availability of Very Large Data Sets (2001–present): Recent AI research has
focused on leveraging large datasets to train algorithms, leading to breakthroughs in
areas like natural language processing and computer vision.
Overall, the history of AI reflects a progression from early optimism and experimentation to a
more grounded and methodical approach, driven by advancements in technology and a deeper
understanding of the challenges involved in creating intelligent systems.
THE STATE OF THE ART
Some representative applications of AI today:
1. Robotic Vehicles: Examples include STANLEY, a driverless car that won the
DARPA Grand Challenge in 2005, and CMU's BOSS, which won the Urban
Challenge the following year. These vehicles use sensors and onboard software for
navigation and obstacle avoidance.
2. Speech Recognition: Automated systems, like those used by United Airlines for
booking flights, employ speech recognition and dialog management to interact with
users over the phone.
3. Autonomous Planning and Scheduling: Programs like NASA's Remote Agent and
its successors, such as MAPGEN and MEXAR2, autonomously plan and schedule
spacecraft operations, including logistics and science planning for missions to Mars.
4. Game Playing: IBM's DEEP BLUE defeated world chess champion Garry Kasparov
in 1997, marking a significant milestone in AI's ability to excel in complex strategic
games.
5. Spam Fighting: Learning algorithms are employed to classify spam messages,
helping users filter out unwanted emails efficiently.
6. Logistics Planning: Systems like Dynamic Analysis and Replanning Tool (DART)
automate logistics planning and scheduling for large-scale operations, such as military
deployments, resulting in significant time savings.
7. Robotics: iRobot's Roomba robotic vacuum cleaners and PackBot deployed in
conflict zones demonstrate AI's role in household chores and hazardous tasks like
bomb disposal.
8. Machine Translation: Programs utilizing statistical models and machine learning
algorithms translate text between languages, facilitating communication across
linguistic barriers.
INTELLIGENT AGENTS
➢ AGENTS AND ENVIRONMENTS
➢ GOOD BEHAVIOUR: CONCEPT OF RATIONALITY
➢ NATURE OF ENVIRONMENTS
➢ THE STRUCTURE OF AGENTS
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. This simple idea is illustrated in Figure 2.1.
[Figure: the agent receives percepts from the environment through its sensors and acts on the environment through its actuators.]
Figure 2.1 Agents interact with environments through sensors and actuators.
A human agent has eyes, ears, and other organs for sensors and hands, legs, vocal tract, and so
on for actuators. A robotic agent might have cameras and infrared range finders for sensors
and various motors for actuators. A software agent receives keystrokes, file contents, and
network packets as sensory inputs and acts on the environment by displaying on the screen,
writing files, and sending network packets.
We use the term percept to refer to the agent’s perceptual inputs at any given instant. An
agent’s percept sequence is the complete history of everything the agent has ever perceived.
In general, an agent’s choice of action at any given instant can depend on the entire percept
sequence observed to date, but not on anything it hasn’t perceived. By specifying the agent’s
choice of action for every possible percept sequence, we have said more or less everything
there is to say about the agent. Mathematically speaking, we say that an agent’s behavior is described
by the agent function that maps any given percept sequence to an action.
We can imagine tabulating the agent function that describes any given agent; for most agents, this would be a very large table—infinite, in fact, unless we place a bound on the length of percept sequences we want to consider. Given an agent to experiment with, we can, in principle, construct this table by trying out all possible percept sequences and recording which actions the agent does in response. The table is, of course, an external characterization of the agent. Internally, the agent function for an artificial agent will be implemented by an agent program. It is important to keep these two ideas distinct. The agent function is an abstract mathematical description; the agent program is a concrete implementation, running within some physical system.
To illustrate these ideas, we use a very simple example—the vacuum-cleaner world shown in Figure
2.2.
This particular world has just two locations: squares A and B. The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move left, move right, suck up the dirt, or do nothing. One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square. A partial tabulation of this agent function is shown in Figure 2.3.
Percept sequence                                    Action
[A, Clean]                                          Right
[A, Dirty]                                          Suck
[B, Clean]                                          Left
[B, Dirty]                                          Suck
[A, Clean], [A, Clean]                              Right
[A, Clean], [A, Dirty]                              Suck
...
[A, Clean], [A, Clean], [A, Clean]                  Right
[A, Clean], [A, Clean], [A, Dirty]                  Suck
...
Figure 2.3 Partial tabulation of a simple agent function for the vacuum-cleaner world
shown in Figure 2.2.
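To make the distinction between the agent function and the agent program concrete, here is a minimal Python sketch of a table-driven agent program for the vacuum world. The table entries come from Figure 2.3; returning "NoOp" for unlisted sequences is an illustrative assumption.

# A table-driven agent program for the vacuum world (a sketch).
# The table maps percept sequences to actions, as in Figure 2.3.
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

percepts = []  # the percept sequence observed so far

def table_driven_agent(percept):
    # Append the latest percept and look up the whole history.
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")  # assumed default if unlisted

print(table_driven_agent(("A", "Dirty")))  # -> Suck

Here the table is the (external) agent function; the few lines of code that consult it are the (internal) agent program.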
Rationality is not the same as perfection. Rationality maximizes expected performance, while perfection maximizes actual performance.
A rational agent should not only gather information but also learn as much as possible from what it perceives. The agent's initial configuration could reflect some prior knowledge of the environment, but as the agent gains experience this may be modified and augmented. There are extreme cases in which the environment is completely known a priori. In such cases, the agent need not perceive or learn; it simply acts correctly.
To the extent that an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks autonomy. A rational agent should be autonomous—it should learn what it can to compensate for partial or incorrect prior knowledge.
NATURE OF ENVIRONMENTS
Task environments are essentially the "problems" to which rational agents are the "solutions." We begin by showing how to specify a task environment, illustrating the process with a number of examples. We then show that task environments come in a variety of flavors. The flavor of the task environment directly affects the appropriate design for the agent program.
Figure 2.4 PEAS description of the task environment for an automated taxi.
An example PEAS description for another agent type:
Agent Type: Satellite image analysis system
Performance Measure: Correct image categorization
Environment: Downlink from orbiting satellite
Actuators: Display of scene categorization
Sensors: Color pixel arrays
THE STRUCTURE OF AGENTS
The job of AI is to design an agent program that implements the agent function— the mapping from
percepts to actions. We assume this program will run on some sort of computing device with physical
sensors and actuators—we call this the architecture:
agent = architecture + program
[Figure: a simple reflex agent; sensors report "what the world is like now," condition-action rules select the action, and actuators execute it.]
Figure 2.10 A simple reflex agent. It acts according to a rule whose condition matches the current state, as defined by the percept.
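As a minimal sketch, the agent function of Figure 2.3 can be implemented as a simple reflex agent program: the chosen action depends only on the current percept, never on the percept history.

def reflex_vacuum_agent(percept):
    # percept is a (location, status) pair, e.g. ("A", "Dirty").
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))   # -> Left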
2.1.5 Model-based reflex agents
Knowledge about "how the world works"—whether implemented in simple Boolean circuits or in complete scientific theories—is called a model of the world. An agent that uses such a model is called a model-based agent. A model-based reflex agent keeps track of internal state, updated from the percept history, so that it can act sensibly in partially observable environments; a minimal sketch follows.
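A minimal sketch of a model-based version for the vacuum world, assuming (for illustration) that the internal model simply records the last known status of each square so the agent can stop once everything it has seen is clean:

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: last known status of each square (assumed model).
        self.model = {"A": None, "B": None}

    def act(self, percept):
        location, status = percept
        self.model[location] = status       # update the model from the percept
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"                   # model says the world is clean
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "Dirty")))  # -> Suck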
Goal and problem formulation are central in guiding intelligent agent behavior towards achieving desired outcomes efficiently.
UNINFORMED SEARCH STRATEGIES
Uninformed (blind) search strategies have no information about the state space beyond what is given in the problem definition. They systematically expand successor states until a goal state is found or the search is exhausted. These algorithms are typically less efficient than informed search algorithms, but they can be applied without any domain-specific knowledge. The strategies covered here are:
• Breadth-First Search (BFS)
• Uniform Cost Search (UCS)
• Depth-First Search (DFS)
• Depth-Limited Search (DLS)
• Iterative Deepening Depth-First Search (IDDFS)
• Bidirectional Search (BS)
But before we go into these search types, let's get to know a few terms that will be used throughout:
Goal State: The desired resulting condition in a given problem; the state the search is looking for.
Path/Step Cost: A number representing the cost of moving from one node to another.
Time Complexity: A function describing the amount of time the algorithm takes, usually in terms of the branching factor b and the depth d of the solution.
Depth-First Search (DFS)
DFS is a search algorithm in which the search tree is traversed from the root node, going deep along each branch while searching for a key at a leaf. If the key is not found, the searcher retraces its steps (backtracking) to the point from which another branch was left unexplored, and the same procedure is repeated for that branch.
In the example tree (figure not reproduced here), the search starts from the root node A and then goes down the branch where node B is present, until there is only one node left to traverse, node H. Since node H has no child nodes, we retrace the path we traversed earlier until we again reach node B; this time we take the untraced path through node E. There are two branches at node E, but we traverse node I first (lexicographical order) and then retrace the path once we find no further nodes. Backtracking again, we eventually return to the root A and traverse the untraced branch through node C, continuing in the same way until the goal is found.
Advantages
• DFS requires very little memory, as it only needs to store a stack of the nodes on the path from the root to the current node.
• It can take less time to reach the goal node than the BFS algorithm (if it happens to traverse down the right path).
Disadvantages
• There is the possibility that many states keep reoccurring, and there is no guarantee of finding a solution.
• The DFS algorithm goes for deep-down searching, and sometimes it may go down an infinite path and never terminate.
Verdict
DFS takes a lot of time to execute when the solution is at the bottom or end of the tree. It is implemented using the LIFO stack data structure.
• Complete: No (it can get lost down an infinite path)
• Optimal: No (the first solution found need not be the shallowest or cheapest)
• Time Complexity: O(b^m), where b is the branching factor and m is the maximum depth
• Space Complexity: O(bm)
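A minimal DFS sketch using the LIFO stack mentioned in the verdict; the example tree is an assumption standing in for the lecture figure, which is not reproduced here.

# A sketch of DFS using an explicit LIFO stack.
def dfs(graph, start, goal):
    stack = [(start, [start])]      # stack of (node, path-so-far)
    visited = set()
    while stack:
        node, path = stack.pop()
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # Push children in reverse so the lexicographically
        # first child is explored first.
        for child in reversed(graph.get(node, [])):
            stack.append((child, path + [child]))
    return None                     # search exhausted, no goal found

# Assumed example tree:
graph = {"A": ["B", "C"], "B": ["D", "E"], "D": ["H"],
         "E": ["I", "J"], "C": ["F", "G"]}
print(dfs(graph, "A", "G"))         # -> ['A', 'C', 'G']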
Breadth-First Search (BFS)
BFS is a search algorithm that traverses the tree level by level while looking for the goal. It begins searching from the root node and expands all the successor nodes at the current level before expanding further breadthwise into the next level.
The example figure (not reproduced here) shows the BFS algorithm at work. It starts from the root node A and then traverses node B. Up to this step it is the same as DFS. But here, instead of expanding the children of B as in the case of DFS, we expand the other child of A, node C, because BFS proceeds level by level; only then do we move to the next level and traverse its nodes, taking them in lexicographical order. This is how the BFS algorithm is implemented.
Advantages
• If there is more than one solution for a given problem, then BFS will find the minimal solution, i.e., the one requiring the fewest steps.
Disadvantages
• It requires lots of memory, since each level of the tree must be saved in memory in order to expand the next level.
• BFS needs lots of time if the solution is far away from the root node.
Verdict
BFS requires a lot of memory space and is time-consuming if the goal state is far from the root. It is implemented using the FIFO queue data structure.
• Complete: Yes (if the branching factor b is finite)
• Optimal: Yes, if step cost = 1 (i.e., all step costs are the same)
• Time Complexity: O(b^d), where d is the depth of the shallowest solution
• Space Complexity: O(b^d)
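A minimal BFS sketch using the FIFO queue noted above, on an assumed example tree:

from collections import deque

# A sketch of BFS using a FIFO queue.
def bfs(graph, start, goal):
    queue = deque([(start, [start])])   # queue of (node, path-so-far)
    visited = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                queue.append((child, path + [child]))
    return None

# Assumed example tree:
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"], "D": ["H"]}
print(bfs(graph, "A", "G"))             # -> ['A', 'C', 'G']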
Uniform Cost Search Algorithm (UCS)
Uniform Cost Search (UCS) is a graph traversal and search algorithm used in AI. It explores a graph by gradually expanding nodes, starting from the initial node and moving towards the goal node while always considering the cumulative path cost.
This algorithm is mainly used when the step costs are not the same but we still need the optimal solution to the goal state. In such cases, we use Uniform Cost Search to find the goal and the path, tracking the cumulative cost to expand each node from the root node to the goal node. It does not go depth-wise or breadth-wise; it always expands the node with the lowest cumulative cost, and in the case of a tie, it can break it by a fixed ordering (e.g., lexicographical).
In the example figure (not reproduced here), consider S to be the start node and G to be the goal state. From node S we look for a node to expand; we have nodes A and G, but since this is uniform cost search, it expands the node with the lowest cumulative cost, so node A becomes the successor rather than our required goal node G. From A we look at its children B and C; since C has the lowest cost, we expand along node C, and then along its child D. Since D has only one child, G, which is our required goal state, we finally reach the goal by applying the UCS algorithm. Traversing this way, our total path cost from S to G is just 6, even after passing through many nodes, rather than going to G directly, where the cost is 12 (and 6 < 12 in terms of path cost).
Advantages
• Uniform cost search is an optimal search method because, at every state, the path with the least cumulative cost is chosen for expansion.
Disadvantages
• It does not care about the number of steps involved in the search; it is only concerned with path cost. This algorithm may therefore get stuck in an infinite loop if there are zero-cost actions.
Verdict
• Complete: Yes (if the branching factor b is finite and every step cost is at least some positive constant ε)
• Optimal: Yes
• Time Complexity: O(b^(1 + ⌊C*/ε⌋)), where C* is the cost of the optimal solution and ε is the minimum step cost
• Space Complexity: O(b^(1 + ⌊C*/ε⌋))
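A minimal UCS sketch using a priority queue keyed on the cumulative path cost g(n). The step costs below are assumptions chosen to be consistent with the walkthrough (the path S-A-C-D-G costs 6 while the direct edge S-G costs 12):

import heapq

# A sketch of UCS: a priority queue ordered by cumulative cost g(n).
def ucs(graph, start, goal):
    frontier = [(0, start, [start])]    # (cost, node, path-so-far)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for child, step in graph.get(node, []):
            if child not in explored:
                heapq.heappush(frontier, (cost + step, child, path + [child]))
    return None

# Assumed step costs, consistent with the walkthrough:
graph = {"S": [("A", 1), ("G", 12)],
         "A": [("B", 5), ("C", 2)],
         "C": [("D", 1)],
         "D": [("G", 2)]}
print(ucs(graph, "S", "G"))             # -> (6, ['S', 'A', 'C', 'D', 'G'])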
Depth-Limited Search (DLS)
DLS is an uninformed search algorithm. It is similar to DFS but differs in a few ways. The embarrassing failure of DFS in infinite state spaces is alleviated by supplying depth-first search with a predetermined depth limit. That is, nodes at the depth limit are treated as if they have no successors. Its failure can be of two cases:
1. Standard Failure Value (SFV): The SFV tells that there is no solution to the problem.
2. Cutoff Failure Value (CFV): The CFV tells that there is no solution to the problem within the given depth limit.
The example figure (not reproduced here) illustrates the implementation of the DLS algorithm. Our start state is node A, our goal state is node H, and node H sits at depth 2. To reach node H, we apply DLS, raising the limit step by step. First, let's set our limit to 0.
Since the limit is 0, the algorithm assumes that there are no children below that depth, even if further nodes exist. Implementing it, we traverse only node A, as it is the only node within limit 0, and it is not our goal state. Using SFV would say there is no solution to the problem at all, whereas CFV says there is no solution within the set depth limit 0. Since we could not find the goal, let's increase our limit to 1 and apply DFS up to limit 1, even though there are further nodes beyond it. Those nodes aren't expanded, as we have limited the depth, and we traverse in lexicographical order. As in the first case, SFV would say there is no solution at all, whereas CFV says there is no solution within the set depth limit 1. Hence we again increase our limit from 1 to 2 in the hope of finding the goal.
Up to limit 2, DFS is performed from our start node A through its children B, C, D, and E. From E it moves to F, then backtracks and explores the unexplored branch where node G is present. It then retraces the path and explores the child of C, node H, and we finally reach our goal by applying the DLS algorithm. Even if node F had further successors, only the nodes up to limit 2 would be explored, as we have limited the depth.
Advantages
• Depth-limited search is memory efficient, like DFS.
Disadvantages
• The DLS has the disadvantage of incompleteness, and it is not optimal if the problem has more than one solution.
Verdict
• Complete: No (the shallowest goal may lie beyond the depth limit)
• Optimal: No
• Time Complexity: O(b^ℓ), where ℓ is the depth limit
• Space Complexity: O(bℓ)
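A minimal recursive DLS sketch that distinguishes the two failure values described above; the tree mirrors the walkthrough (start A, goal H as a child of C at depth 2), with the exact shape assumed:

# A sketch of DLS returning the path, "cutoff" (CFV), or None (SFV).
def dls(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"   # nodes at the depth limit act as leaves
    cutoff_occurred = False
    for child in graph.get(node, []):
        result = dls(graph, child, goal, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return "cutoff" if cutoff_occurred else None

# Assumed tree shaped like the walkthrough (goal H is a child of C):
graph = {"A": ["B", "C", "D", "E"], "E": ["F", "G"], "C": ["H"]}
print(dls(graph, "A", "H", 2))   # -> ['A', 'C', 'H']
print(dls(graph, "A", "H", 1))   # -> cutoff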
Iterative Deepening Depth-First Search (IDDFS)
IDDFS is a search algorithm that uses the combined power of the BFS and DFS algorithms. It performs depth-limited DFS repeatedly until it reaches the goal node: the search runs up to a certain depth limit, and the depth limit increases with every iteration.
In the example figure (not reproduced here), let's consider the goal node to be G and the start state to be A. We perform IDDFS from node A. In the first iteration, it traverses only node A, at level 0. Since the goal is not reached, we increase the depth limit to 1 and move to the next iteration. In that iteration, we traverse nodes A, B, and C. The goal state is still not reached, so we increase the limit to 2; the nodes are traversed again from the start node, the children of B and C are explored as well (in lexicographical order, as before), and we find the goal state G in this iteration. This is how the IDDFS algorithm proceeds.
Advantages
• It combines the benefits of the BFS and DFS search algorithms in terms of fast search and memory efficiency.
Disadvantages
• The main drawback of IDDFS is that it repeats all the work of the previous phase in every iteration.
Verdict
• Complete: Yes (if the branching factor b is finite)
• Optimal: Yes, if step cost = 1
• Time Complexity: O(b^d)
• Space Complexity: O(bd)
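A minimal IDDFS sketch: a depth-limited DFS run repeatedly with an increasing limit (the cutoff/failure distinction is omitted for brevity, and the example tree is assumed):

# A sketch of IDDFS: depth-limited DFS with an increasing limit.
def iddfs(graph, start, goal, max_depth=20):
    def dls(node, limit):
        if node == goal:
            return [node]
        if limit == 0:
            return None
        for child in graph.get(node, []):
            result = dls(child, limit - 1)
            if result is not None:
                return [node] + result
        return None

    for limit in range(max_depth + 1):   # one more level per iteration
        result = dls(start, limit)
        if result is not None:
            return result
    return None

# Assumed tree matching the walkthrough (A at level 0; B, C at level 1):
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(iddfs(graph, "A", "G"))            # -> ['A', 'C', 'G']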
Bidirectional Search (BS)
Before moving into bidirectional search, let's first understand a few terms. Bidirectional search runs two simultaneous searches: a forward search from the start node and a backward search from the goal node. Basically, if the average branching factor going out of a node (the fan-out) is less, prefer forward search; else, if the average branching factor going into a node (the fan-in) is less (i.e., the fan-out is more), prefer backward search. We traverse the tree from both the start node and the goal node, and wherever the two searches meet, the path from the start node to the goal through that meeting point is the solution.
In the example figure (not reproduced here), consider node 1 as the start/root node and node 16 as the goal node. The algorithm divides the search tree into two sub-trees. From the start node 1 we do a forward search, and at the same time we do a backward search from the goal node 16. The backward search traverses through nodes 16, 12, 10, and 9. The forward and backward searches meet at node 9, called the intersection node. The path traced by the forward search joined with the path traced by the backward search is the solution path.
Advantages
• Since BS can use various techniques like DFS, BFS, DLS, etc. internally, it is efficient and requires less memory.
Disadvantages
• Implementation of the bidirectional search tree is difficult.
• One must know the goal state in advance.
Verdict
• Complete: Yes
• Optimal: Yes (if step cost is uniform in both forward and backward directions)
• Time Complexity: O(b^(d/2))
• Space Complexity: O(b^(d/2))
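A minimal sketch of bidirectional search using BFS from both ends on an undirected graph. The node numbering follows the walkthrough (start 1, goal 16, meeting around node 9); the exact edges are assumptions.

from collections import deque

# A sketch of bidirectional search: BFS from both ends until they meet.
def bidirectional_search(graph, start, goal):
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        node = frontier.popleft()
        for nbr in graph.get(node, []):
            if nbr not in parents:
                parents[nbr] = node
                frontier.append(nbr)
                if nbr in other_parents:
                    return nbr          # the two searches have met
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b) or \
               expand(frontier_b, parents_b, parents_f)
        if meet:
            # Stitch the two half-paths together at the meeting node.
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:
                path.append(n)
                n = parents_b[n]
            return path
    return None

# Assumed undirected edges consistent with the walkthrough:
graph = {1: [2, 3, 4], 2: [1], 3: [1], 4: [1, 8], 8: [4, 9],
         9: [8, 10], 10: [9, 12], 12: [10, 16], 16: [12]}
print(bidirectional_search(graph, 1, 16))  # -> [1, 4, 8, 9, 10, 12, 16]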
Final Interpretations
Uninformed search combines the power of unguided search and works in a brute-force way. These algorithms have no information about the state space or the target other than what is given in the problem definition.
This completes the analysis of all the uninformed search strategies. No search algorithm is strictly better than the others; we can use any one of the search strategies based on the problem. The term 'uninformed' means that they do not have any additional information about states or state space. Thus we conclude that each strategy has its place, chosen according to the problem at hand.
Key Takeaways
• Uninformed algorithms are used in search problems, where the goal is to find a path from a start state to a goal state.
• Uninformed algorithms are capable of solving certain problems, but they may also be less efficient than informed (heuristic) algorithms.
Heuristic Function
A heuristic function estimates how close a given state is to the goal. Informed search algorithms use these estimates as an extra source of data to decide which course to take. Heuristic functions are crucial in informed search algorithms for this reason.
Example:
A player can begin the game of tic tac toe from a variety of positions, and each
position has a different chance of winning. The first player, however, has the best
chance of succeeding if they begin from the middle of the board. As a result, winning
chances can be used as a heuristic.
Pure Heuristic Function: The most basic type of heuristic search algorithm is pure
heuristic search. Nodes are expanded according to their heuristic value, h(n). It
keeps an OPEN list and a CLOSED list: the CLOSED list holds the nodes that have already been expanded, and the OPEN list holds the nodes that haven't been expanded yet.
Each node n with the lowest heuristic value is expanded, producing all of its children,
and then n is added to the closed list. The algorithm keeps running until a goal state
is found.
Here h(n) is the heuristic cost (the estimate) and h*(n) is the actual optimal cost to reach the goal. For the heuristic to be admissible, the heuristic cost should be less than or equal to the actual cost: h(n) ≤ h*(n).
Greedy Best-First Search
The greedy best-first search algorithm always chooses the path that appears best at the moment. It combines the advantages of depth-first search and breadth-first search, guided by a heuristic function. With best-first search, we can select the most promising node at each stage: we expand the node that appears nearest to the goal node, where the closeness is estimated by a heuristic function, i.e., f(n) = h(n).
• Step 1: The initial node should first be added to the OPEN list.
• Step 2: Stop and return failure if the OPEN list is empty.
• Step 3: Move the node n that has the lowest h(n) value out of the OPEN list and into the
CLOSED list.
• Step 4: Generate the successors of node n and expand node n.
• Step 5: Determine whether or not each successor of node n is a goal node. Return
success and end the search if any successor node is the goal node; otherwise, move on
to Step 6.
• Step 6: For each successor node, compute the evaluation function f(n) and check whether the node has ever been in the OPEN or CLOSED list. Add the node to the OPEN list if it hasn't been there already.
• Step 7: Return to Step 2.
Example:
Let's use the Best-First Search algorithm with the given heuristic values for nodes S, A, B, C, D, H, F, G, and E, and explore how the algorithm works based on these heuristics:
Node    Heuristic Value
S       10
A       9
B       7
C       8
D       8
H       6
F       6
G       3
E       0
Initialization:
Add the start node S to the OPEN list.
Node Expansion:
The algorithm expands S and then selects the node on the open list with the lowest heuristic value. In this case, it's node B with a heuristic value of 7 (node A, with value 9, remains on the list). Expanding B adds H to the open list with a heuristic value of 6.
Continued Expansion:
The algorithm selects node H from the open list with a heuristic value of 6.
Further Expansion:
The algorithm selects node G from the open list with a heuristic value of 3.
Goal Reached:
The algorithm terminates upon reaching the goal state, which is node E with a
heuristic value of 0.
Result: The correct path found by the Best-First Search algorithm based on the
given heuristic values is indeed S -> B -> H -> G -> E.
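A minimal greedy best-first sketch of this example. The h values come from the table above; the edges are assumptions chosen to be consistent with the walkthrough (S expands to A and B, B to H, H to F and G, G to E):

import heapq

# A sketch of greedy best-first search: the OPEN list is ordered
# by the heuristic value alone, f(n) = h(n).
def greedy_bfs(graph, h, start, goal):
    open_list = [(h[start], start, [start])]
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for child in graph.get(node, []):
            if child not in closed:
                heapq.heappush(open_list, (h[child], child, path + [child]))
    return None

h = {"S": 10, "A": 9, "B": 7, "C": 8, "D": 8, "H": 6, "F": 6, "G": 3, "E": 0}
# Assumed edges, consistent with the walkthrough:
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["H"],
         "H": ["F", "G"], "G": ["E"]}
print(greedy_bfs(graph, h, "S", "E"))   # -> ['S', 'B', 'H', 'G', 'E']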
A* Search
A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) together with g(n), the cost of the path from the initial state to node n. Because it combines the strengths of UCS and greedy best-first search, it solves problems effectively. Using the heuristic function, the A* search method locates the shortest route through the search space. This search algorithm produces optimal results more quickly and expands the search tree somewhat less. In contrast to UCS, the A* algorithm orders the queue by g(n) + h(n) instead of g(n).
We use both the search heuristic and the node-reach cost in the A* search method.
Therefore, we can add both costs as follows; this total is known as the fitness
number.
f(n)=g(n)+h(n)
• Step 1: Place the beginning node in the OPEN list as the first step.
• Step 2: Verify whether or not the OPEN list is empty; if it is, return failure and stop.
• Step 3: Choose the node from the OPEN list that has the evaluation function (g+h) with
the least value. Return success and stop if node n is the destination node; otherwise,
continue.
• Step 4: Generate all of the successors for node n, expand it, and add it to the closed
list. Check to see if each successor, n’, is already in the OPEN or CLOSED list before
computing its evaluation function and adding it to the Open list.
• Step 5: If node n' is not already in OPEN or CLOSED, add it to the OPEN list with a back pointer to the node that yields the lowest g(n') value.
• Step 6: Return to Step 2.
Example: Given the heuristic values and distances between nodes, let’s use the A*
algorithm to find the optimal path from node S to node G.
Here's the table representing the nodes and their heuristic values (the distances between nodes are given in the accompanying figure, which is not reproduced here):
Node    Heuristic Value
S       5
A       3
B       4
C       2
D       6
G       0
Initialization:
Create an open list and add S to it with a cost of 0 (initial cost) and a heuristic value
of 5 (estimated cost to reach G).
Node Expansion:
The algorithm selects the node from the open list with the lowest cost + heuristic
value. In this case, it’s node S with a cost of 0 + 5 = 5.
Calculate the cost to reach A and G and add them to the open list.
Continued Expansion:
The algorithm selects node A from the open list with a cost of 1 (cost to reach A) + 3
(heuristic value of A) = 4.
Goal Reached:
The algorithm terminates upon reaching the goal state, which is node G with a cost
of 10 (cost to reach G) + 0 (heuristic value of G).
Result:
The A* algorithm finds the optimal path from node S to node G: S -> A -> C -> G.
This path has the lowest cost among all possible paths, considering both the actual
distance and the heuristic values.
In this example, the A* algorithm efficiently finds the optimal solution, and the optimal
path is indeed S -> A -> C -> G with a cost of 10.
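A minimal A* sketch of this example. The h values come from the table; the edge costs are assumptions chosen to be consistent with the stated result (S to A costs 1, and the optimal path S -> A -> C -> G has total cost 10):

import heapq

# A sketch of A*: the OPEN list is ordered by f(n) = g(n) + h(n).
def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for child, cost in graph.get(node, []):
            if child not in closed:
                g2 = g + cost
                heapq.heappush(open_list,
                               (g2 + h[child], g2, child, path + [child]))
    return None

h = {"S": 5, "A": 3, "B": 4, "C": 2, "D": 6, "G": 0}
# Assumed edge costs, consistent with the stated result (total cost 10):
graph = {"S": [("A", 1), ("B", 4)],
         "A": [("C", 2), ("D", 3)],
         "B": [("G", 8)],
         "C": [("G", 7)]}
print(a_star(graph, h, "S", "G"))   # -> (10, ['S', 'A', 'C', 'G'])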
UNIT – II
Syllabus:
Problem Solving by Search-II and Propositional Logic:
Adversarial Search: Games, Optimal Decisions in Games, Alpha–Beta Pruning,
Imperfect Real-Time Decisions.
Constraint Satisfaction Problems: Defining Constraint Satisfaction Problems,
Constraint Propagation, Backtracking Search for CSPs, Local Search for CSPs, The
Structure of Problems.
Propositional Logic: Knowledge-Based Agents, The Wumpus World, Logic,
Propositional Logic,
Propositional Theorem Proving: Inference and proofs, Proof by resolution,
Horn clauses and definite clauses,
Forward and backward chaining,
Effective Propositional Model Checking,
Agents Based on Propositional Logic.
UNIT - III
First-Order Logic: Representation, Syntax and Semantics of First-Order Logic, Using First-
Order Logic, Knowledge Engineering in First-Order Logic.
Introduction
• Earlier chapters showed how a knowledge-based agent could represent the world in which it operates and deduce what actions to take.
• Propositional logic was used as the representation language because it sufficed to illustrate the basic concepts of logic and knowledge-based agents.
• Propositional logic is too puny a language to represent knowledge of complex environments in a concise way.
• First-order logic is sufficiently expressive to represent a good deal of our commonsense knowledge.
• It also either subsumes or forms the foundation of many other representation languages and has been studied intensively for many decades.
• First-order logic is also known as predicate logic or first-order predicate logic. It is a powerful language that expresses information about objects in a natural way and can also express the relationships between those objects.
Inference engine:
• The inference engine is the component of an intelligent system in artificial intelligence that applies logical rules to the knowledge base to infer new information from known facts. The first inference engine was part of an expert system. An inference engine commonly proceeds in two modes:
• Forward chaining
• Backward chaining
A. Forward Chaining
• Forward chaining is also known as forward deduction or forward reasoning when using an inference engine. Forward chaining is a form of reasoning that starts with atomic sentences in the knowledge base and applies inference rules (Modus Ponens) in the forward direction to extract more data until a goal is reached.
• The forward-chaining algorithm starts from known facts, triggers all rules whose premises are satisfied, and adds their conclusions to the known facts. This process repeats until the problem is solved.
Properties of Forward-Chaining:
• It is a bottom-up approach, as it moves from the bottom (facts) to the top (conclusions).
• It is a process of making conclusions based on known facts or data, starting from the initial state and reaching the goal state.
• The forward-chaining approach is also called data-driven, as we reach the goal using the available data.
• The forward-chaining approach is commonly used in expert systems, such as CLIPS, and in business and production rule systems.
Example
• "As per the law, it is a crime for an American to sell weapons to hostile nations. Country A, an enemy of America, has some missiles, and all the missiles were sold to it by Robert, who is an American citizen."
• Prove that "Robert is a criminal."
Facts Conversion into FOL:
• It is a crime for an American to sell weapons to hostile nations. (Let p, q, and r be variables.)
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
• Country A has some missiles: ∃p Owns(A, p) ∧ Missile(p). This can be written as two definite clauses by using Existential Instantiation, introducing a new constant T1:
Owns(A, T1) ...(2)
Missile(T1) ...(3)
• All of the missiles were sold to country A by Robert:
Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ...(4)
• Missiles are weapons:
Missile(p) → Weapon(p) ...(5)
• An enemy of America is known as hostile:
Enemy(p, America) → Hostile(p) ...(6)
• Country A is an enemy of America:
Enemy(A, America) ...(7)
• Robert is American:
American(Robert) ...(8)
Forward chaining proof:
• Step-1: In the first step, we start with the known facts and choose the sentences that do not have implications: American(Robert), Enemy(A, America), Owns(A, T1), and Missile(T1).
• Step-2: In the second step, we see which new facts can be inferred from the available facts whose premises are satisfied.
• Rule-(1) does not yet have its premises satisfied, so it is not fired in this iteration.
• Facts (2) and (3) are already known.
• Rule-(4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added; it is inferred from the conjunction of facts (2) and (3).
• Rule-(5) is satisfied with the substitution {p/T1}, so Weapon(T1) is added, inferred from fact (3).
• Rule-(6) is satisfied with the substitution {p/A}, so Hostile(A) is added, inferred from fact (7).
• Step-3: In the third step, Rule-(1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so we can add Criminal(Robert), which is inferred from all the available facts. Hence we have reached our goal statement.
Hence it is proved that Robert is a criminal using the forward chaining approach.
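A minimal Python sketch of the forward-chaining loop for this example. It is propositional: the variables are already instantiated with {p/Robert, q/T1, r/A}, whereas a full first-order implementation would perform unification to find such substitutions itself.

# Known facts (2, 3, 7, 8) after existential instantiation.
facts = {"American(Robert)", "Missile(T1)", "Owns(A,T1)", "Enemy(A,America)"}

# Definite clauses (1, 4, 5, 6), instantiated with {p/Robert, q/T1, r/A}.
rules = [
    ({"Missile(T1)", "Owns(A,T1)"}, "Sells(Robert,T1,A)"),           # Rule 4
    ({"Missile(T1)"}, "Weapon(T1)"),                                  # Rule 5
    ({"Enemy(A,America)"}, "Hostile(A)"),                             # Rule 6
    ({"American(Robert)", "Weapon(T1)", "Sells(Robert,T1,A)",
      "Hostile(A)"}, "Criminal(Robert)"),                             # Rule 1
]

# Forward chaining: keep firing rules whose premises are all known,
# adding their conclusions, until nothing new can be inferred.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("Criminal(Robert)" in facts)   # -> True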
B. Backward Chaining:
• Backward chaining is also known as backward deduction or backward reasoning when using an inference engine. A backward chaining algorithm is a form of reasoning that starts with the goal and works backward, chaining through rules to find known facts that support the goal.
Properties of backward chaining:
• It is known as a top-down approach.
• Backward chaining is based on the Modus Ponens inference rule.
• In backward chaining, the goal is broken into sub-goals to prove the facts true.
• It is called a goal-driven approach, as a list of goals decides which rules are selected and used.
• The backward-chaining algorithm is used in game theory, automated theorem-proving tools, inference engines, proof assistants, and various AI applications.
• The backward-chaining method mostly uses a depth-first search strategy for proof.
Example:
• In backward chaining, we use the same example as above and rewrite all the rules:
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
Owns(A, T1) ...(2)
Missile(T1) ...(3)
Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ...(4)
Missile(p) → Weapon(p) ...(5)
Enemy(p, America) → Hostile(p) ...(6)
Enemy(A, America) ...(7)
American(Robert) ...(8)
Backward-Chaining proof:
• In backward chaining, we start with our goal predicate, which is Criminal(Robert), and then infer further rules.
• Step-1: In the first step, we take the goal fact. From the goal fact, we infer other facts, and at last, we prove those facts true. Our goal fact is "Robert is a criminal," so Criminal(Robert) is the predicate for it.
• Step-2: In the second step, we infer other facts from the goal fact that satisfy the rules. As we can see in Rule-(1), the goal predicate Criminal(Robert) is present with the substitution {p/Robert}. So we add all the conjunctive premises below the first level and replace p with Robert.
with Robert. 24