AI and Intelligent Agents Overview

The document outlines the syllabus for a course on Artificial Intelligence and Machine Learning at MIT, covering topics such as AI history, intelligent agents, search strategies, and the Turing Test. It describes different types of AI, including weak AI, strong AI, and artificial superintelligence, as well as the structure and functioning of intelligent agents. The document also discusses the concept of rationality in AI and various search algorithms used in problem-solving.


MIT School of Computing

Department of Computer Science & Engineering

Third Year Engineering

Artificial Intelligence and Machine Learning

Class - T.Y. (SEM-IV)

Unit - I
Introduction To Artificial Intelligence

AY 2024-2025

Unit-I Syllabus

● AI problems, foundations of AI, and history of AI
● Intelligent agents: agents and environments
● The concept of rationality, the nature of environments
● Structure of agents, problem-solving agents, problem formulation
● Searching: searching for solutions, uninformed search strategies – Breadth First Search, Depth First Search
● Search with partial information (heuristic search): Hill Climbing, A*, AO* algorithms
● Problem reduction
● Game playing: adversarial search, games, the mini-max algorithm, optimal decisions in multiplayer games, problems in game playing, alpha-beta pruning, evaluation functions
What Is Artificial Intelligence?

The science and engineering of making intelligent machines.

or

Artificial Intelligence is the science of getting machines to think and make decisions like humans.
Do you think the concept and existence of Artificial Intelligence is new?

The term “Artificial Intelligence” was actually coined way back in 1956 by John McCarthy, a professor at Dartmouth.

McCarthy is the computer scientist known as the father of AI.
WHY?
DEMAND FOR ARTIFICIAL
INTELLIGENCE (AI)
ARTIFICIAL INTELLIGENCE (AI)
1. AI is a technique that enables machines to mimic human behavior.
2. It is the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
CONT.
1. AI is accomplished by studying how the human brain thinks, learns, decides, and works while solving a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems.
2. AI has made it possible for machines to learn from experience and grow to perform human-like tasks.
3. Flashy examples:
   1. Self-driving cars
   2. Chess-playing computers (based on Deep Learning)
   3. Language translation (Natural Language Processing)
AREAS CONTRIBUTING TO AI
CONT.
IMPACT OF AI ON HUMANS
TYPES OF AI
CONT.
Commonly known as weak AI, Artificial Narrow
Intelligence involves applying AI only to specific
tasks.
CONT.
1. Commonly known as strong AI, Artificial General Intelligence involves machines that possess the ability to perform any intellectual task that a human being can.

2. Today's machines don’t possess such human-like abilities: they have strong processing units that can perform high-level computations, but they are not yet capable of thinking and reasoning like a human.
CONT.
1. Artificial Super Intelligence is a term referring to the time when the capability of computers will surpass humans.

2. ASI is presently seen as a hypothetical situation, as depicted in movies and science-fiction books where machines have taken over the world. However, tech masterminds like Elon Musk believe that ASI will take over the world by 2040!
LANGUAGE FOR AI
TURING TEST
The Turing Test is a method of inquiry for determining whether or not a computer is capable of thinking like a human being.

It was introduced by Alan Turing in 1950.

In this test, Turing proposed that a computer can be said to be intelligent if it can mimic human responses under specific conditions.
CONT.
The test is based on a party game.

Features of Turing test

The Turing Test requires the following:

1. Participants: The main participants include:
   • An interrogator, who is isolated from the other two players and whose job is to find out which of the two players is the machine.
   • A person (human).
   • A computer.

2. The site: This is where the test takes place. The participants are kept in separate places.

3. The test: These are the questions that the respondents must answer.
HOW THE TURING TEST
WORKS?
FEATURES REQUIRED FOR A MACHINE TO PASS THE TURING TEST
1. Natural language processing: NLP is required to communicate with the interrogator in a general human language like English.

2. Knowledge representation: To store and retrieve information during the test.

3. Automated reasoning: To use the previously stored information for answering the questions.

4. Machine learning: To adapt to new circumstances and to detect and extrapolate patterns.
CONCLUSION OF TURING
TEST
1. The test result does not depend on each correct answer, but only on how closely the machine's responses resemble human answers.

2. In this game, if the interrogator is not able to identify which is the machine and which is the human, then the computer passes the test successfully, and the machine is said to be intelligent and able to think like a human.
INTELLIGENT AGENTS
An intelligent agent is a software entity that enables artificial intelligence to take action. An intelligent agent senses the environment, uses actuators to initiate action, and conducts operations in the place of users.

An Intelligent Agent (IA) is an entity that makes decisions.
WHAT IS AN INTELLIGENT
AGENT (IA)?
1. This agent has some level of autonomy that allows it to perform
specific, predictable, and repetitive tasks for users or applications.
2. It’s also termed as ‘intelligent’ because of its ability to learn during
the process of performing tasks.
3. The two main functions of intelligent agents include perception
and action. Perception is done through sensors while actions are
initiated through actuators.
4. Intelligent agents consist of sub-agents that form a hierarchical
structure. Lower-level tasks are performed by these sub-agents.
5. The higher-level agents and lower-level agents form a complete
system that can solve difficult problems through intelligent
behaviors or responses.
STRUCTURE OF INTELLIGENT AGENTS

1. Architecture: The machinery or devices, consisting of actuators and sensors, on which the intelligent agent executes. Examples include a personal computer, a car, or a camera.

2. Agent function: A function in which actions are mapped from a certain percept sequence. A percept sequence is a history of what the intelligent agent has perceived.

3. Agent program: An implementation or execution of the agent function. The agent function is produced through the agent program’s execution on the physical architecture.
HOW DO INTELLIGENT AGENTS WORK?
Sensors, actuators, and effectors are the three main components of an intelligent agent.

1. Sensors: Devices that detect environmental changes and send the information to other devices. An agent observes the environment through sensors. E.g.: camera, GPS, radar.

2. Actuators: Machine components that convert energy into motion. Actuators are responsible for moving and controlling a system. E.g.: electric motor, gears, rails, etc.

3. Effectors: Devices that affect the environment. E.g.: wheels and display.
PEAS PROPERTIES
1. PEAS is a type of model on which an AI agent works.
2. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model.
3. It is made up of four words:
   P: Performance measure
   E: Environment
   A: Actuators
   S: Sensors
PEAS FOR SELF-DRIVING
CARS
1. Performance: Safety, time, legal drive, comfort
2. Environment: Roads, other vehicles, road signs, pedestrians
3. Actuators: Steering, accelerator, brake, signal, horn
4. Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar
TYPES OF AI AGENTS
There are five types of AI agents: the Simple Reflex Agent, the Model-based Reflex Agent, the Goal-based Agent, the Utility-based Agent, and the Learning Agent.

SIMPLE REFLEX AGENT
Reflex means immediate action. For example: “sneezing is a reflex action.”
Fully observable environment: the agent has complete knowledge of the environment. For example: a chess game.

1. Simplest agents
2. Take decisions on the basis of the current percepts
3. Ignore the rest of the percept history
4. Work on the condition-action (if-then) rule
5. Environment should be fully observable
6. Problems: limited intelligence
MODEL-BASED REFLEX AGENT
Partially observable environment: the agent does not have complete knowledge of the environment. For example: traffic on a road.

1. Can work in a partially observable environment and track the situation.
2. Maintains a representation of the current state based on the percept history.

Model-based means the agent keeps a model, i.e., a knowledge base or history of the world.
GOAL-BASED AGENT
1. An expansion of model-based reflex agents, with the addition of "goal" information.
2. Involves searching and planning.

Example: planning to travel from Pune to Ooty.
UTILITY-BASED AGENT
1. Has an extra component of utility measurement.
2. A utility-based agent acts based not only on goals but also on the best way to achieve the goal.
3. The utility-based agent is useful when there are multiple possible alternatives and the agent has to choose in order to perform the best action.
4. The utility function maps each state to a real number to check how efficiently each action achieves the goals.
LEARNING AGENT
Other agents take decisions on the basis of built-in knowledge, but a learning agent starts with some basic knowledge and improves with experience.

1. It has learning capabilities.
2. Four conceptual components:
   1. Learning element: Responsible for making improvements by learning from the environment.
   2. Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
   3. Performance element: Responsible for selecting external actions.
   4. Problem generator: Responsible for suggesting actions that will lead to new and informative experiences.
3. Learning agents are able to learn, analyze their performance, and look for new ways to improve it.
GOALS OF INTELLIGENT
AGENTS
1. High performance
2. Optimal result
3. Rational action
CONCEPT OF RATIONALITY
RATIONAL AGENTS
A rational agent is one that always tries to do the best thing based on what it knows, so it can achieve its goal in the most effective way possible.

• Rationality depends on:
– The performance measure of success: how we measure whether the agent is doing a good job.
– The agent’s prior knowledge of the environment: what the agent already knows about the world before it starts taking actions, such as rules, patterns, or facts.
– The actions the agent can perform: the list of all possible actions the agent can take in different situations.
– The agent’s percept sequence to date: what the agent has seen or sensed so far; a history of all the information the agent has collected.

Definition of a rational agent: A rational agent makes decisions based on what it knows (its prior knowledge), what it has experienced (its percept sequence), and what actions it can take. It chooses the best action that will help it achieve its goal, based on everything it knows.

(Reference: Russell & Norvig, Artificial Intelligence: A Modern Approach)

STATE SPACE SEARCH
Used in problem solving.

State space search is a method used in AI to solve problems by exploring all possible situations (called states) that could occur as the agent tries to achieve its goal.

Problems are modelled as a state space, meaning the problem is represented as a series of states (situations or conditions), and the solution involves moving through these states in an orderly way to reach the goal state.
CONT.
The set of all possible states for a given problem is known as the state space representation of the problem.
1. State space representation consists of defining an INITIAL state (where to start) and the GOAL state (the destination); we then follow a certain sequence of steps (called states).
2. Defined separately:
   1. State: An AI problem can be represented as a well-formed set of possible states. A state can be the initial state, the goal state, or any other possible state.
   2. Space: The entire collection of all possible states the agent could encounter while trying to solve a problem.
   3. Search: A technique that takes the initial state to the goal state by applying a certain set of valid rules while moving through the space of all possible states.
3. For the search process, we need the following:
   1. Initial state
   2. Set of valid rules
   3. Goal state
REPRESENTATION
A problem is represented as a tuple (S, A, Action(s), Result(s,a), Cost(s,a)), where:

• S: Set of all possible states
• A: Set of all possible actions
• Action(s): Function giving which actions are possible in the current state ‘s’
• Result(s,a): Function giving the state reached by performing action ‘a’ on state ‘s’
• Cost(s,a): Cost estimate of applying action ‘a’ on a particular state ‘s’
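As a concrete illustration, the tuple above can be sketched in Python for a made-up toy problem: an agent on positions 0..3 that must reach position 3. All names and costs here are assumptions for illustration, not from the slides.

```python
# Minimal sketch of the (S, A, Action(s), Result(s,a), Cost(s,a))
# representation for a hypothetical 1-D problem.

S = {0, 1, 2, 3}                  # set of all possible states
A = {"left", "right"}             # set of all possible actions

def ACTION(s):
    """Actions applicable in state s (can't step off either end)."""
    acts = set()
    if s > 0:
        acts.add("left")
    if s < 3:
        acts.add("right")
    return acts

def RESULT(s, a):
    """State reached by performing action a in state s."""
    return s - 1 if a == "left" else s + 1

def COST(s, a):
    """Step cost of applying action a in state s (uniform here)."""
    return 1

# Walk from the initial state 0 to the goal state 3:
state, total = 0, 0
while state != 3:
    total += COST(state, "right")
    state = RESULT(state, "right")
print(state, total)   # 3 3
```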
STATE SPACE SEARCH (EXAMPLE: EIGHT TILE PUZZLE)

(Figure: a start state and a goal state of the eight tile puzzle, with the solution path between them.)

Possible actions: Up, Down, Left, Right.
PROPERTIES OF SEARCH ALGORITHMS

• Completeness: An algorithm is said to be complete if it guarantees to return a solution when one exists.
• Optimality: An algorithm is optimal if it is guaranteed to find the best solution (lowest path cost).
• Time complexity: The time for an algorithm to complete its task.
• Space complexity: The maximum storage space required at any point during the search.
TYPES OF SEARCH ALGORITHMS

Search algorithms divide into two families:

• Uninformed/blind search: Breadth First Search, Depth First Search, Uniform Cost Search, Depth Limited Search, Bidirectional Search. (Time consuming; more time and space complexity.)
• Informed search: Best First Search, A* Search. (Quick solution; less time and space complexity.)

CONT.

Uninformed/Blind Search:
• Does not contain any domain knowledge, such as closeness to the goal
• Operates in a brute-force way
• Search is applied without any information
• It is also called blind search
• Examines each option until it achieves the goal

Informed Search:
• Uses domain knowledge
• Finds a solution more efficiently than an uninformed search
• Also called a heuristic search

A heuristic is a method that is not always guaranteed to find the best solution, but is expected to find a good solution in reasonable time.
GRAPH TRAVERSAL

The process of visiting and exploring a graph for processing is


called graph traversal. To be more specific it is all about
visiting and exploring each vertex and edge in a graph such
that all the vertices are explored exactly once.

That sounds simple! But there’s a catch.

The challenge is to use a graph traversal technique that is


most suitable for solving a particular problem.
WHAT IS THE BREADTH-FIRST SEARCH ALGORITHM?
Breadth-First Search is a graph traversal technique in which you select an initial node (source or root node) and traverse the graph layer-wise, so that all the nodes and their respective children are visited and explored.

Two important terms:

1. Visiting a node: selecting a node.
2. Exploring a node: visiting the adjacent nodes of a selected node.
BREADTH-FIRST SEARCH
ALGORITHM WITH AN
EXAMPLE
Breadth-First Search algorithm follows a simple, level-
based approach to solve a problem. Consider the binary
tree (which is a graph). Our aim is to traverse the graph
by using the Breadth-First Search Algorithm.

Data structure involved in the Breadth-First


Search algorithm: Queue
QUEUE
A queue is an abstract data structure that follows the First-In-First-Out methodology (data inserted first will
be accessed first). It is open on both ends, where one end is always used to insert data (enqueue) and the
other is used to remove data (dequeue).
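The FIFO behaviour BFS relies on can be seen directly with Python's `collections.deque` (append enqueues at the rear, popleft dequeues from the front):

```python
from collections import deque

q = deque()
q.append("A")        # enqueue
q.append("B")
q.append("C")
first = q.popleft()  # dequeue: the element inserted first comes out first
print(first, list(q))   # A ['B', 'C']
```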
BREADTH FIRST SEARCH EXAMPLE

(Figures: step-by-step BFS traversal of the example tree, level by level, until the traversal is complete.)
BREADTH-FIRST SEARCH ALGORITHM PSEUDOCODE
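The level-by-level procedure can be sketched in Python as follows. The adjacency-list graph is a made-up example, not the one pictured on the slides; the function returns the order in which nodes are visited.

```python
from collections import deque

def bfs(graph, start):
    visited = [start]
    queue = deque([start])
    while queue:
        node = queue.popleft()              # dequeue the oldest node
        for neighbour in graph[node]:       # explore its adjacent nodes
            if neighbour not in visited:
                visited.append(neighbour)   # mark visited on enqueue
                queue.append(neighbour)
    return visited

graph = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": [], "F": [],
}
print(bfs(graph, "A"))   # ['A', 'B', 'C', 'D', 'E', 'F']
```

Note how the queue guarantees that every node at depth d is visited before any node at depth d+1.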
APPLICATIONS OF BREADTH-
FIRST SEARCH
ADVANTAGES VS DISADVANTAGES

Advantages:
1. BFS will provide a solution if any solution exists.
2. If there is more than one solution for a given problem, BFS will provide the minimal solution, i.e., the one requiring the least number of steps.

Disadvantages:
1. It requires lots of memory, since each level of the tree/graph must be saved in memory in order to expand the next level.
2. BFS needs lots of time if the solution is far away from the root node.
WHAT IS THE DEPTH-FIRST SEARCH ALGORITHM?
The depth-first search (DFS) algorithm starts with the root node of the graph G and travels deeper and deeper, visiting different nodes of the tree, until it finds the goal node or a node which has no children.

The algorithm then backtracks from the dead end, or last node, toward the most recent node that is yet to be completely explored.
CONT.…
Data structure involved in the Depth-First Search algorithm: Stack

A stack is a collection of objects that are inserted and removed according to the last-in-first-out (LIFO) principle.
DFS ALGORITHM
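The stack-based procedure can be sketched in Python over the same kind of adjacency-list graph (the example graph is made up). Pushing neighbours in reverse order keeps the visit order left-to-right; backtracking happens implicitly when a popped node has no unvisited children.

```python
def dfs(graph, start):
    visited, stack = [], [start]
    while stack:
        node = stack.pop()                   # LIFO: take the newest node
        if node not in visited:
            visited.append(node)
            for neighbour in reversed(graph[node]):
                if neighbour not in visited:
                    stack.append(neighbour)  # deeper nodes get popped first
    return visited

graph = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": [], "F": [],
}
print(dfs(graph, "A"))   # ['A', 'B', 'D', 'E', 'C', 'F']
```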
DFS EXAMPLE
SALIENT POINTS: DEPTH FIRST SEARCH
The DFS algorithm is a recursive algorithm that uses the idea of backtracking. It involves exhaustive searches of all the nodes by going ahead, if possible, else by backtracking.
The term backtracking means that when you are moving forward and there are no more nodes along the current path, you move backwards on the same path to find nodes to traverse.

• Utilizes the deepest node
• Greater time complexity
• It may be incomplete
• The solution may not be optimal

In DFS, the edges that go to an unvisited node are called discovery edges, while the edges that go to an already visited node are called block edges.
ADVANTAGES VS DISADVANTAGES

Advantages:
1. DFS requires very little memory, as it only needs to store a stack of the nodes on the path from the root node to the current node.
2. It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).

Disadvantages:
1. There is the possibility that many states keep re-occurring, and there is no guarantee of finding the solution.
2. The DFS algorithm goes for deep-down searching, and it may sometimes go into an infinite loop.
COMPARISON BETWEEN “DFS” AND “BFS”

(Figure: comparison table of DFS and BFS.)

HEURISTIC SEARCH & HEURISTIC FUNCTION

A heuristic always guarantees a good solution, but does not guarantee the optimal solution.

Heuristic Search and the Heuristic Function are used in informed search.

Heuristic Search is a simple searching technique that tries to optimize a problem using a Heuristic Function.

Optimization means that we try to solve a problem in the minimum number of steps and at minimum cost.
HEURISTIC FUNCTION
It is a function H(n) that gives an estimate of the cost of getting from node 'n' to the goal state.

It helps in selecting the optimal node for expansion.

Example: Two routes from Mumbai to Goa.
• Route R1: actual length 40 km + 300 km = 340 km; heuristic estimate H(R1) = 300.
• Route R2: actual length 70 km + 200 km = 270 km; heuristic estimate H(R2) = 200.
The heuristic favours R2, which is indeed the shorter route.
TYPES OF HEURISTIC FUNCTION

A heuristic function can be admissible or non-admissible. Here H(n) is the heuristic (estimated) cost and H*(n) is the actual cost of reaching the goal from node n.

• Admissible: It never overestimates the cost of reaching the goal.
  H(n) <= H*(n)

• Non-admissible: It may overestimate the cost of reaching the goal.
  H(n) > H*(n)
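The admissibility condition can be checked numerically. In this sketch the graph is made up: `h_star` holds hand-computed actual least costs from each node to the goal G, and two candidate heuristics are compared against it.

```python
h_star = {"A": 4, "B": 3, "C": 2, "G": 0}   # actual optimal costs to goal G

h_admissible     = {"A": 3, "B": 3, "C": 1, "G": 0}  # never overestimates
h_non_admissible = {"A": 6, "B": 3, "C": 1, "G": 0}  # overestimates at A

def is_admissible(h, h_star):
    """h is admissible iff h(n) <= h*(n) for every node n."""
    return all(h[n] <= h_star[n] for n in h_star)

print(is_admissible(h_admissible, h_star))      # True
print(is_admissible(h_non_admissible, h_star))  # False
```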
EXAMPLE (ADMISSIBLE)

Consider a tree with start state A and goal state G. Edge costs: A-B = 1, A-C = 1, A-D = 1, B-E = 3, E-F = 5, F-G = 2. Heuristic costs: H(B) = 3, H(C) = 4, H(D) = 5, H(E) = 2, H(F) = 3.

Total cost = heuristic cost + actual cost: F(n) = H(n) + G(n)
For the children of A: B = 3 + 1 = 4; C = 4 + 1 = 5; D = 5 + 1 = 6.

Actual cost from A to G = 1 + 3 + 5 + 2 = 11.
Taking H(B) = 3, we have H(n) = 3 and H*(n) = 11.
H(n) <= H*(n), since 3 <= 11, so the heuristic is ADMISSIBLE.
EXAMPLE (NON-ADMISSIBLE)

Consider the same tree, but now the goal G can be reached via D with edge costs A-D = 1 and D-G = 3, while the heuristic cost is H(D) = 5.

Total cost = heuristic cost + actual cost: F(n) = H(n) + G(n)

Actual cost from A to G (via D) = 1 + 3 = 4.
Taking H(D) = 5, we have H(n) = 5 and H*(n) = 4.
H(n) > H*(n), since 5 > 4, so the heuristic is NON-ADMISSIBLE.
Generate and Test Search Algorithm

INTRODUCTION
Also known as the “British Museum Search Algorithm”.

• Generate and Test Search is a heuristic search technique based on Depth First Search with backtracking, which guarantees to find a solution, if one exists, when done systematically.

• The generate-and-test strategy is the simplest of all the approaches. It consists of the following steps:

Algorithm: Generate-and-Test
1. Generate all possible solutions.
2. Select one solution among the possible solutions.
3. If a solution has been found and is acceptable, quit. Otherwise, return to step 1.
PROPERTIES OF A GOOD GENERATOR

• Complete: Good generators need to be complete, i.e., they should generate all the possible solutions and cover all the possible states. In this way, we can guarantee that our algorithm converges to the correct solution at some point in time.

• Non-redundant: Good generators should not yield a duplicate solution at any point of time, as duplicates reduce the efficiency of the algorithm, thereby increasing the search time.

• Informed: Good generators have knowledge about the search space, which they maintain in the form of an array of knowledge. This can be used to determine how far the agent is from the goal, to calculate the path cost, and even to find a way to reach the goal.
EXAMPLE: (PROBLEM
STATEMENT)
Let us take a simple example to understand the importance of a good
generator. Consider a pin made up of three 2 digit numbers i.e. the
numbers are of the form.
EXAMPLE: (SOLUTION)
• The total number of solutions in this case is 100*100*100, which is approximately 1 million.
• If we do not make use of any informed search technique, the search takes exponential time.
• Say we generate 5 solutions every minute; then the total number generated in 1 hour is 5*60 = 300, while the total number of solutions to be generated is 1 million.
• Consider a brute-force search technique, for example linear search, whose average time complexity is N/2. Then, on average, the total number of solutions to be generated is approximately 5 lakh.
• Using this technique, even if you work about 24 hrs a day, you will still need 10 weeks to complete the task.
EXAMPLE: (SOLUTION)
• Now consider using a heuristic function with the domain knowledge that every number is a prime number between 0 and 99. The possible number of solutions is then 25*25*25, which is approximately 15,000.
• In the same scenario, generating 5 solutions every minute and working 24 hrs a day, you can find the solution in less than 2 days, versus the 10 weeks needed with uninformed search.

Conclusion
We can conclude from here that if we can find a good heuristic, then the time complexity can be reduced greatly. But in the worst case, time and space complexity will still be exponential. It all depends on the generator: the better the generator, the lower the time complexity.
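The pin example can be sketched as code. Note an assumption: the slides count 25 primes in 0-99, while the code below restricts to two-digit values, leaving 21 two-digit primes; the secret pin itself is made up for illustration.

```python
from itertools import product

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

GOAL = (23, 71, 97)                 # hypothetical secret pin

# Informed generator: only two-digit primes (21^3 candidates,
# versus 100^3 for the uninformed generator).
primes = [n for n in range(10, 100) if is_prime(n)]

def generate_and_test(candidates, goal):
    tried = 0
    for pin in candidates:          # 1. generate, 2. select one solution
        tried += 1
        if pin == goal:             # 3. test; quit if acceptable
            return pin, tried
    return None, tried

pin, tried = generate_and_test(product(primes, repeat=3), GOAL)
print(pin, tried)
```

Domain knowledge shrinks the generator's output, which is exactly why the informed version finishes in days rather than weeks.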
Hill Climbing Algorithm
INTRODUCTION
The hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain, or the best solution to the problem. It terminates when it reaches a peak where no neighbor has a higher value.

Salient Features:
• Local search algorithm: Only considers nearby (neighboring) solutions. E.g., you're climbing a hill, seeing only a few steps ahead, and always moving to the highest nearby step without knowing if a taller peak exists.
• Greedy approach: Hill climbing always picks the best available option right now, without thinking about the future.
• No backtracking: Once the algorithm makes a move, it doesn’t go back to try a different path. If it gets stuck, it stops there.
STATE SPACE DIAGRAM FOR HILL CLIMBING
A state-space diagram shows all possible choices and how they affect the result, helping find the best solution.
• The X-axis denotes the state space, i.e., the states or configurations our algorithm may reach.
• The Y-axis denotes the values of the objective function corresponding to a particular state.
The best solution is the state where the objective function has its maximum value, the global maximum.
STATE SPACE DIAGRAM FOR HILL CLIMBING

• Local maximum: A state which is better than its neighboring states; however, there exists a state which is better than it (the global maximum). This state is better because here the value of the objective function is higher than that of its neighbors.
• Global maximum: The best possible state in the state space diagram, because at this state the objective function has its highest value.
• Plateau / flat local maximum: A flat region of the state space where neighboring states have the same value.
• Ridge: A region which is higher than its neighbors but itself has a slope. It is a special kind of local maximum.
• Current state: The region of the state space diagram where we are currently present during the search. (Denoted by the highlighted circle in the given image.)
TYPES OF HILL CLIMBING

1. Simple Hill Climbing
2. Steepest-Ascent Hill Climbing
3. Stochastic Hill Climbing
1. SIMPLE HILL CLIMBING

Simple Hill Climbing is like climbing a hill where you can only see one
step ahead at a time. You check the next step to see if it’s higher
(better) than where you are. If it is, you move to that step; if not, you
stay where you are. The algorithm only checks one step at a time and
moves if it finds a better option. If no better step is found, it stops.

This algorithm has the following features:


1. Less time consuming
2. Less optimal solution
3. The solution is not guaranteed
SIMPLE HILL CLIMBING (ALGORITHM)
Step 1: Evaluate the initial state; if it is the goal state, then return success and stop.

Step 2: Loop until a solution is found or there is no new operator left to apply.

Step 3: Select and apply an operator to the current state.

Step 4: Check the new state:
• If it is the goal state, then return success and quit.
• Else, if it is better than the current state, then assign the new state as the current state.
• Else, if it is not better than the current state, then return to step 2.

Step 5: Exit.
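The steps above can be sketched for a made-up 1-D objective function (a single peak at x = 4, so the local search happens to reach the global maximum; on a multi-peak objective it could get stuck, as discussed later):

```python
def objective(x):
    return -(x - 4) ** 2 + 16          # hypothetical objective, peak at x = 4

def simple_hill_climb(state):
    while True:
        for op in (+1, -1):            # step 3: apply an operator (move right/left)
            new = state + op
            if objective(new) > objective(state):   # step 4: better than current?
                state = new            # take the first improving neighbour
                break
        else:
            return state               # no operator improves: stop (step 5)

print(simple_hill_climb(0))   # 4
```

This illustrates the "first improvement" behaviour: only one neighbour is checked at a time, unlike steepest-ascent, which compares all neighbours before moving.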
2. STEEPEST-ASCENT HILL
CLIMBING

The Steepest-Ascent Hill Climbing algorithm is a more advanced


version of Simple Hill Climbing. Instead of looking at just one step, it
examines all the nearby steps and selects the one that gets closest to
the goal. However, because it checks multiple steps, it takes more
time than Simple Hill Climbing.
STEEPEST-ASCENT HILL CLIMBING (ALGORITHM)
Step 1: Evaluate the initial state; if it is the goal state, then return success and stop, else make the current state your initial state.

Step 2: Loop until a solution is found or the current state does not change.
1. Let S be a state such that any successor of the current state will be better than it.
2. For each operator that applies to the current state:
   • Apply the operator and generate a new state.
   • Evaluate the new state.
   • If it is the goal state, then return it and quit; else compare it to S.
   • If it is better than S, then set the new state as S.
   • If S is better than the current state, then set the current state to S.
Step 3: Exit.
3. STOCHASTIC HILL
CLIMBING

Stochastic Hill Climbing is a version of Hill Climbing that doesn’t look at


all the nearby steps. Instead, it picks a random step, checks if it’s
better than where you are, and moves if it is. If it’s not better, it
randomly picks another step to try.
So, it makes random moves and evaluates them one by one, rather
than checking everything.
PROBLEMS IN DIFFERENT
REGIONS IN HILL CLIMBING
Local maximum: At a local maximum all neighboring states have
values which are worse than the current state. Since hill-climbing uses a
greedy approach, it will not move to the worse state and terminate
itself. The process will end even though a better solution may exist.

To overcome the local maximum problem: Utilize the backtracking


technique. Maintain a list of visited states. If the search reaches an
undesirable state, it can backtrack to the previous configuration and
explore a new path.
Plateau: On a plateau, all neighbors have the same value. Hence, it is not possible to select the best direction.

To overcome plateaus: Make a big jump. Randomly select a state far away from the current state. Chances are that we will land in a non-plateau region.
PROBLEMS IN DIFFERENT
REGIONS IN HILL CLIMBING
Ridge: Any point on a ridge can look like a peak because the movement
in all possible directions is downward. Hence, the algorithm stops when
it reaches such a state.

To overcome Ridge: You could use two or more rules before testing. It
implies moving in several directions at once.
APPLICATIONS OF HILL
CLIMBING TECHNIQUE
1. Hill Climbing technique can be used to solve many
problems, where the current state allows for an
accurate evaluation function, such as Network-
Flow, Travelling Salesman problem, 8-Queens
problem, Integrated Circuit design, etc.
2. Hill Climbing is used in inductive learning methods
too. This technique is also used in robotics for
coordinating multiple robots in a team.
WHAT IS THE A* SEARCH ALGORITHM?
• Moving from one place to another is a task we do almost every day.
• Finding the shortest path by ourselves is difficult.
• We now have algorithms that help us find that shortest route.

A* is one of the most popular algorithms out there.
WHAT IS THE A* ALGORITHM?
It is an advanced form of best-first search that explores shorter paths first rather than longer paths. A* is an optimal as well as a complete algorithm.

• Optimal: A* is sure to find the least-cost path from the source to the destination.
• Complete: A* is going to find all the paths that are available to us from the source to the destination.

So does that make A* the best algorithm? YES.
WHY CHOOSE A* OVER OTHER FASTER ALGORITHMS?
Compare Dijkstra’s algorithm and the A* algorithm:

• Dijkstra’s algorithm: It finds all the paths that can be taken without knowing which one is the most optimal for the problem we are facing. This leads to unoptimized working of the algorithm and unnecessary computations.

• A* algorithm: It finds the most optimal path that it can take from the source to reach the destination. It knows which is the best path that can be taken from its current state and how it needs to reach its destination.
IN-AND-OUT OF A*
ALGORITHM
A* is used to find the most optimal path from a source to a destination. It
optimizes the path by calculating the least distance from one node to the other.

Need to remember: F = G + H

F: This parameter is used to find the least cost from one node to the next node.
It is responsible for finding the most optimal path from source to destination.
G: The cost of moving from the start node to the current node.
This parameter changes with every movement from one node to another along the path.
H: The heuristic, i.e. the estimated cost from the current node to the
destination node.
It is not the actual cost but an assumed (estimated) cost from the node to the destination.
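The F = G + H bookkeeping can be sketched with a priority queue ordered by F. This is an illustrative sketch under assumptions: the graph, edge costs, and heuristic values below are made up for the example, not taken from the slides:

```python
import heapq

def a_star(graph, h, start, goal):
    # Frontier ordered by F = G + H; each entry carries G and the path so far.
    open_set = [(h[start], 0, start, [start])]
    best_g = {start: 0}  # cheapest known G for each node
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost  # G: actual cost from start to this neighbor
            if new_g < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = new_g
                new_f = new_g + h[neighbor]  # F = G + H
                heapq.heappush(open_set, (new_f, new_g, neighbor, path + [neighbor]))
    return None, float('inf')

# Assumed toy graph: {node: [(neighbor, edge_cost), ...]} and heuristic table.
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 5)], 'B': [('G', 1)]}
h = {'S': 3, 'A': 2, 'B': 1, 'G': 0}
print(a_star(graph, h, 'S', 'G'))  # (['S', 'A', 'B', 'G'], 4)
```

Note how the cheaper route S → A → B → G (cost 4) is preferred over the direct-looking S → B → G (cost 5), because the queue always expands the node with the lowest F first.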
A* EXAMPLE
AO* ALGORITHM
AO* is a smart way to find the best path in problems where some
decisions depend on multiple steps (AND) and some offer
different choices (OR).
The AO* algorithm uses the concept of AND-OR graphs to decompose a
complex problem into a smaller set of sub-problems which are then
solved.
AND-OR graphs are specialized graphs used for problems that
can be broken down into sub-problems, where:
• the AND side of the graph represents a set of tasks that all need to be
done to achieve the main goal, whereas
• the OR side of the graph represents the different alternative ways of
performing a task to achieve the same main goal.
CONT..
(Figure: two AND-OR graph examples. The goal "Mobile" is reached either
by "Work hard" AND "Purchase mobile" together, or by "Gift mobile"
alone (OR). The goal "Pass exam" is reached either by "Do hard work" OR
by "Do cheating".)
EXAMPLE OF AO* ALGORITHM WITH
AND-OR GRAPH
Each node in the graph is assigned a heuristic value, denoted as h(n),
and the edge length is considered as 1.

Step 1: Initial Evaluation Using f(n) = g(n) + h(n)
For the path involving the AND nodes (C and D):
Step 2: Explore Node B
Next, we explore node B and calculate the values for nodes E and F.
By this calculation, the path B⇢E is chosen because it is the minimum,
i.e. f(B⇢E) = 8. Since B’s heuristic value differs from its actual
cost, the heuristic is updated and the minimum-cost path is selected;
the minimum value in our situation is 8. Therefore, the heuristic for A
must also be updated due to the change in B’s heuristic, so we need to
calculate it again.
Step 3: Compare and Explore Paths
Now we compare f(A -> B) = 9 with f(A -> C + D) = 8. Since f(A -> C + D) is smaller, we explore this path
and move to node C.
For node C:
The path f(C -> H + I) = 2 is selected as the lowest-cost path, and the
heuristic is left unchanged because it matches the actual cost. Paths H
and I are solved because their heuristics are 0. Next, we calculate the
value for A -> D, since D is part of an AND arc.

For node D:
After updating the heuristic for D, we recalculate:
Now that f(A -> C + D) has the lowest cost, this becomes
the solved path, and the AND-OR graph is now fully
resolved.
EXAMPLE#3
AO* ALGORITHM
A* VS AO*
1. The A* algorithm provides the optimal solution, whereas AO* stops
when it finds any solution.
2. The AO* algorithm requires less memory than the A* algorithm.
3. The AO* algorithm does not go into an infinite loop, whereas the A*
algorithm can.
CONSTRAINT SATISFACTION
PROBLEMS
Constraint satisfaction is a technique where a problem is solved when its
values satisfy certain constraints or rules of the problem.

Constraint Satisfaction:
1. It is a search procedure that operates in a space of constraint sets.
2. Constraint Satisfaction Problems in AI have the goal of discovering some
problem state that satisfies a given set of constraints.

Process:
1. Constraints are discovered and propagated throughout the system.
2. If there is still no solution, search begins. (A guess is made about
something and added as a new constraint.)
CONT..
Constraint satisfaction depends on three components, namely:
V: It is a set of variables.
D: It is a set of domains where the variables reside. There is a specific
domain for each variable.
C: It is a set of constraints which are followed by the set of variables.
The constraint value consists of a pair {scope, rel}:
1. The “scope” is the set of variables which participate in the constraint.
2. The “rel” is a relation which includes a list of values which the
variables can take to satisfy the constraints of the problem.
EXAMPLE
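The V, D, C formulation can be illustrated with a tiny map-coloring sketch. The regions, colors, and brute-force solver below are assumptions made for illustration; a real CSP solver would use backtracking with constraint propagation:

```python
from itertools import product

# V: set of variables (three mutually adjacent regions, assumed for the example).
V = ['WA', 'NT', 'SA']
# D: a domain of values for each variable.
D = {v: ['red', 'green', 'blue'] for v in V}
# C: constraints, each a pair (scope, rel): adjacent regions must differ.
C = [(('WA', 'NT'), lambda a, b: a != b),
     (('WA', 'SA'), lambda a, b: a != b),
     (('NT', 'SA'), lambda a, b: a != b)]

def satisfies(assignment):
    # An assignment is a solution when every constraint's rel holds on its scope.
    return all(rel(*(assignment[v] for v in scope)) for scope, rel in C)

# Brute-force search over the domains (fine for a toy problem).
solutions = [dict(zip(V, vals)) for vals in product(*(D[v] for v in V))
             if satisfies(dict(zip(V, vals)))]
print(len(solutions))  # 6 valid colorings (3! orderings of the three colors)
```

With three colors and three mutually adjacent regions, every valid coloring is a permutation of the three colors, hence six solutions.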
INTRODUCTION TO
ADVERSARIAL SEARCH
Adversarial search in AI is a type of search used when multiple agents
(players) compete against each other, as in games.

Each agent tries to maximize its own success while minimizing the
opponent's success.

An environment with more than one agent is termed a multi-agent
environment, in which each agent is an opponent of the others and they
play against each other.

Searches in which two or more players with conflicting goals try to
explore the same search space for the solution are called adversarial
searches, often known as games.
TECHNIQUES REQUIRES TO GET
THE OPTIMAL SOLUTION
There is always a need to choose those algorithms which provide the
best optimal solution in a limited time. So, we use two techniques
which fulfill our requirements.

Pruning: Pruning in AI means cutting off unnecessary parts of a search
process or model to make decision-making faster and more efficient. It
helps the AI ignore unimportant choices and focus only on the best
options, saving time and computing power.

Heuristic Evaluation Function: A heuristic evaluation function helps
the AI quickly guess the best move by assigning a score to each option.
It helps the AI choose wisely without checking everything.
TYPES OF GAMES
Perfect information: A perfect information game is one where all
players can see everything that is happening; there are no hidden moves
or secrets. Everyone knows the same information, so the game is about
strategy, not luck.

Imperfect information: An imperfect information game is one where some
information is hidden from one or more players. Players must predict
and make decisions without knowing everything.

Deterministic games: A deterministic game is one where there is no luck
or randomness; everything happens based on the players' moves and
strategies. The game depends only on player decisions, not luck.

Non-deterministic games: A non-deterministic game is one where luck or
randomness affects the outcome, meaning players cannot fully predict
what will happen next. The game involves both skill and chance.
FORMALIZATION OF THE PROBLEM:
In Artificial Intelligence (AI), formalizing a problem means defining it in a clear and structured
way so that an AI system can solve it.

Initial state: It specifies where the problem starts.
Actions: It specifies the possible moves or choices.
Transition model: What happens when an action is taken.
Goal state: It is the final desired outcome.
Path cost: The cost of reaching the goal.
TYPES OF ADVERSARIAL
SEARCH
There are two types of adversarial search:
1. Minimax Algorithm
2. Alpha-beta Pruning
MINI-MAX ALGORITHM
The mini-max algorithm is a recursive or backtracking algorithm which is
used in decision-making and game theory. It provides an optimal move
for the player, assuming that the opponent is also playing optimally.

The minimax algorithm proceeds all the way down to the terminal nodes
of the tree, then backtracks up the tree as the recursion unwinds.

In this algorithm two players play the game; one is called MAX and the
other is called MIN.

Both players compete so that the opponent gets the minimum benefit
while they get the maximum benefit.

The minimax algorithm performs a depth-first search for the exploration
of the complete game tree.
EXAMPLE
There are two players: one is called the Maximizer and the other is
called the Minimizer.

The Maximizer will try to get the maximum possible score, and the
Minimizer will try to get the minimum possible score.

This algorithm applies DFS, so in this game tree we have to go all the
way down to the leaves to reach the terminal nodes.

At the last step of the game (terminal nodes), we assign scores to the
outcomes (win, lose, or draw). Then we compare these scores and move
backward (backtrack) through the game tree until we reach the starting
point.
STEPS INVOLVED IN SOLVING THE TWO-PLAYER GAME TREE
Step 1: The algorithm first creates the whole game tree by exploring
all possible moves. Then, it uses a utility function to assign values
(scores) to the final game states (win, lose, or draw). A is the
starting point (initial state) of the tree. The Maximizer (the player
trying to win) plays first and starts with the worst possible score,
-∞ (negative infinity). The Minimizer (opponent) wants to reduce the
Maximizer's score as much as possible. It starts with +∞ (a very high
value) because, at the beginning, it doesn't know what the lowest
possible score will be.
STEPS INVOLVED IN SOLVING THE TWO-PLAYER GAME TREE
Step 2: In this step, the Maximizer aims to pick the highest value from
the given terminal values. It starts with -∞ (a very low value) and
compares each terminal value, keeping the highest one for each parent
node. For node D, the terminal values are -1 and 4, so the maximum is
4. For node E, the values are 2 and 6, so the maximum is 6. For node F,
the values are -3 and -5, so the maximum is -3. Finally, for node G,
the values are 0 and 7, so the maximum is 7. After this step, the
updated values for the parent nodes are D = 4, E = 6, F = -3, and
G = 7. This process ensures that each Maximizer node selects the best
possible outcome.
STEPS INVOLVED IN SOLVING THE TWO-PLAYER GAME TREE
Step 3: In this step, it is the Minimizer's turn, meaning it will
select the lowest value from the given choices. The Minimizer starts
with +∞ (a very high value) and compares the values from its child
nodes, keeping the smallest one for each parent node. For node B, the
values from its children (D and E) are 4 and 6, so the minimum is 4,
making B's value 4. For node C, the values from its children (F and G)
are -3 and 7, so the minimum is -3, making C's value -3. After this
step, the updated values for the nodes are B = 4 and C = -3. This
ensures that the Minimizer chooses the least favorable outcome for the
Maximizer.
STEPS INVOLVED IN SOLVING THE TWO-PLAYER GAME TREE
Step 4: In this final step, it is the Maximizer's turn again. The
Maximizer looks at the values from its child nodes (B and C) and
chooses the highest one, because it wants to maximize the score. The
values of B and C are 4 and -3, so the Maximizer picks 4 as the best
possible outcome. This means the final value for the root node (A) is
4. This completes the minimax algorithm, where the Maximizer makes the
best possible move assuming the Minimizer plays optimally.
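The four steps above (D = 4, E = 6, F = -3, G = 7; then B = 4, C = -3; then A = 4) can be reproduced with a minimal recursive sketch. The nested-list tree encoding is an assumption made for illustration; the leaf values match the walkthrough:

```python
def minimax(node, maximizing):
    # A leaf is just its utility value.
    if isinstance(node, int):
        return node
    # Recurse into the children, alternating between MAX and MIN turns.
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Root A (MAX) -> B, C (MIN) -> D, E, F, G (MAX) -> terminal values.
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(minimax(tree, True))  # 4
```

The recursion visits the tree depth-first, backs the values up exactly as in Steps 2 through 4, and returns 4 at the root.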
PROPERTIES OF MINI-MAX
ALGORITHM
Complete: The min-max algorithm is complete. It will definitely find a
solution (if one exists) in a finite search tree.

Optimal: The min-max algorithm is optimal if both opponents are playing
optimally.
ALPHA BETA PRUNING
Alpha-beta pruning is a modified version of the minimax algorithm; it
is an optimization technique for the minimax algorithm.

There is a technique by which, without checking each node of the game
tree, we can compute the correct minimax decision; this technique is
called pruning. It involves two threshold parameters, alpha and beta,
for future expansion, so it is called alpha-beta pruning. It is also
called the alpha-beta algorithm.

The two parameters can be defined as:
Alpha: The best (highest-value) choice we have found so far at any
point along the path of the Maximizer. The initial value of alpha is -∞.
Beta: The best (lowest-value) choice we have found so far at any point
along the path of the Minimizer. The initial value of beta is +∞.

Alpha-beta pruning applied to a standard minimax algorithm returns the
same move as the standard algorithm does, but it removes all the nodes
which do not really affect the final decision but make the algorithm
slow. Hence, by pruning these nodes, it makes the algorithm fast.
EXAMPLE
(Game-tree figure: the Max root has α = 3; the Min nodes below it carry
β = 3 and β = 2; the leaf values are 3, 4, 2, 1, 7, 8, 9, 10, 2, 11,
1, 12, 14, 9, 13, 16; subtrees whose values cannot affect the final
decision are pruned.)
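The cutoff rule can be sketched as a small extension of minimax. The toy tree encoding below is an assumption made for illustration (the same shape as the earlier minimax walkthrough); the key point is that a branch is abandoned as soon as α ≥ β:

```python
def alphabeta(node, alpha, beta, maximizing):
    # A leaf is just its utility value.
    if isinstance(node, int):
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the Minimizer above will never allow this
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff: the Maximizer above will never allow this
        return value

# Root A (MAX) -> MIN nodes -> MAX nodes -> terminal values.
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(alphabeta(tree, float('-inf'), float('inf'), True))  # 4
```

Starting with α = -∞ and β = +∞, the algorithm returns the same root value as plain minimax (4 here), but it skips subtrees such as node G once the right-hand MIN node is already worse than the best option the Maximizer has secured.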
ADVANTAGES
DISADVANTAGES
Thank you!