AIppt2

Chapter 2 discusses Intelligent Agents (IA), defining them as entities that perceive their environment through sensors and act upon it via actuators. It categorizes agents into five types: simple reflex, model-based reflex, goal-based, utility-based, and learning agents, each with distinct capabilities and operational mechanisms. The chapter also explores various environmental properties that affect agent performance, such as accessibility, determinism, and episodicity.

Chapter 2

Intelligent Agents (IA)

Chapter 2
Outline
 Intelligent Agents

 Introduction
 Agents and Environments
 Acting of Intelligent Agents (Rationality)
 Structure of Intelligent Agents
 Agent Types
 Simple reflex agent
 Model-based reflex agent

 Goal-based agent

 Utility-based agent

 Learning agent

 Types of environment
 Important Concepts and Terms
Intelligent Agents (IA)
Introduction
An agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators
or effectors.
 An agent is one that acts.
 Percept: an agent's perceptual input at any instant.
 Sensors: used to get percepts from the environment.
 Actuators: used to perform actions on the environment.
Cont’d
This simple idea is illustrated by the following examples:

 A human agent has eyes, ears, and other organs for sensors and hands,
legs, mouth, and other body parts for actuators.

 A robotic agent might have cameras and infrared range finders for
sensors and various motors for actuators.

 A software agent receives keystrokes, file contents, and network
packets as sensory inputs and acts on the environment by displaying
on the screen, writing files, and sending network packets.
Cont’d

Figure: Agents interact with environments through sensors and effectors
(source: Stuart J. Russell and Peter Norvig, Artificial Intelligence: A
Modern Approach).
Cont’d
Agent-Related Terms
 Percept sequence: A complete history of everything the agent has ever
perceived. Think of this as the state of the world from the agent’s
perspective.
 Agent function (or policy): maps percept sequences to actions
(determines agent behavior), i.e. maps from percept histories to
actions: f : P* → A
 Agent program: implements the agent function; it runs on the
physical architecture to produce f.
 Agent = architecture + program
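The "agent = architecture + program" split can be made concrete with a minimal Python sketch (all names here are illustrative assumptions, not from the slides): the agent program implements f : P* → A over the percept history, while the architecture delivers percepts to it and carries out its actions.

```python
# Minimal sketch of "agent = architecture + program" (illustrative names).
from typing import Callable, List

Percept = str
Action = str

# The agent function f : P* -> A maps a percept history to an action.
AgentFunction = Callable[[List[Percept]], Action]

def lookup_agent_program(percepts: List[Percept]) -> Action:
    """A toy agent program: looks up the most recent percept in a rule table."""
    table = {"dirty": "suck", "clean": "move"}
    return table.get(percepts[-1], "noop")

def run_architecture(program: AgentFunction, stream: List[Percept]) -> List[Action]:
    """The architecture: delivers percepts to the program and collects actions."""
    history: List[Percept] = []
    actions: List[Action] = []
    for p in stream:
        history.append(p)              # the percept sequence grows over time
        actions.append(program(history))
    return actions

print(run_architecture(lookup_agent_program, ["dirty", "clean", "dirty"]))
# ['suck', 'move', 'suck']
```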
Acting of Intelligent Agents (Rationality)

A rational agent is one that does the right thing; conceptually speaking,
every entry in the table for the agent function is filled out correctly.
 Obviously, doing the right thing is better than doing the wrong thing.
The right action is the one that will cause the agent to be most
successful.
 An agent should strive to "do the right thing", based on what it can
perceive and the actions it can perform.
CONT’D
Performance measures
 A performance measure embodies the criterion for success of an
agent's behavior.
 When an agent is plunked down in an environment, it generates a
sequence of actions according to the percepts it receives.
 This sequence of actions causes the environment to go through a
sequence of states.
 If the sequence is desirable, then the agent has performed well.

RATIONALITY
What is rational at any given time depends on four things:
 The performance measure that defines the criterion of success.
 The agent's prior knowledge of the environment.
 The actions that the agent can perform.
 The agent's percept sequence to date.
 This leads to a definition of a rational agent:
 For each possible percept sequence, a rational agent should select
an action that is expected to maximize its performance measure,
given the evidence provided by the percept sequence and whatever
built-in knowledge the agent has.
CONT’D

 Intelligent Agent: an agent that perceives its
environment via sensors and acts rationally
upon that environment with its effectors.
CONT’D
Omniscient agent
 Omniscience means the agent knows the actual outcome of its
actions and can act accordingly.
 What information would a chess player need to have to be
omniscient?
 Omniscience is (generally) impossible in reality (stochastic,
imperfect knowledge).
 A rational agent should do the right thing based on the knowledge
it has.
 Rationality maximizes expected performance, not actual performance.


CONT’D

Autonomous: agents depend on the knowledge of the designer. To
compensate for partial or incorrect prior knowledge, a rational agent
should be autonomous. (It is best to give the agent some initial
knowledge as well as the ability to learn.)
 The autonomy of an agent is the extent to which its behavior is
determined by its own experience (with the ability to learn and adapt).
 Example: a baby learning to crawl.
STRUCTURE OF INTELLIGENT AGENTS

AI designs the agent program.
The program runs on some kind of architecture.

 To design an agent program, we need to understand its PAGE:
 Percepts
 Actions
 Goals
 Environment
AGENT TYPES
 There are many types of agents; Russell and Norvig (2003)
group agents into five classes based on their degree of
perceived intelligence and capability:
1. Simple reflex agent
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
5. Learning agents
CONT’D
1. Simple reflex agent
 Simple reflex agents are based on condition-action
rules and implemented with an appropriate production
(rule-based) system. They are stateless devices that
have no memory of past world states.
 Specific response to percepts, i.e. condition-action rules.

Figure: A simple reflex agent. Sensors report "what the world is
like now"; condition-action rules determine "what action I should
do now", which is passed to the effectors acting on the environment.
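A hedged sketch in Python (the vacuum-world percepts and rule names are illustrative assumptions): the agent consults only the current percept, never any stored history.

```python
# Sketch of a simple reflex agent: condition-action rules, no memory.
def simple_reflex_vacuum_agent(percept):
    """percept is a (location, status) pair, e.g. ('A', 'dirty')."""
    location, status = percept
    # Condition-action rules, matched against the *current* percept only.
    if status == "dirty":
        return "suck"
    if location == "A":
        return "move_right"
    return "move_left"

print(simple_reflex_vacuum_agent(("A", "dirty")))  # suck
print(simple_reflex_vacuum_agent(("A", "clean")))  # move_right
```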
CONT’D
2. Model-based reflex agents
 A model-based agent can handle partially observable
environments.
 Its current state is stored inside the agent as some kind
of structure that describes the part of the world that
cannot be seen.
 This knowledge about "how the world works" is called a
model of the world, hence the name "model-based
agent".
 A model-based reflex agent should maintain some sort
of internal model that depends on the percept history and
thereby reflects at least some of the unobserved aspects
of the current state.
 The percept history and the impact of actions on the
environment can be determined by using the internal
model. The agent then chooses an action in the same
way as a reflex agent (see the sketch below).
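A minimal sketch of this structure, with hypothetical names (update_state plays the role of the world model, folding the last action and new percept into the internal state):

```python
# Sketch of a model-based reflex agent (hypothetical names).
class ModelBasedReflexAgent:
    def __init__(self):
        self.state = {}          # internal model of the world
        self.last_action = None

    def update_state(self, percept):
        """Fold the new percept and the last action into the model,
        standing in for aspects of the world the sensors cannot see."""
        self.state["seen"] = percept
        self.state["last_action"] = self.last_action

    def rule_match(self, state):
        """Condition-action rules applied to the modeled state."""
        if state.get("seen") == "obstacle":
            return "turn"
        return "forward"

    def act(self, percept):
        self.update_state(percept)
        self.last_action = self.rule_match(self.state)
        return self.last_action

agent = ModelBasedReflexAgent()
print(agent.act("clear"))     # forward
print(agent.act("obstacle"))  # turn
```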
CONT’D
Figure: A model-based reflex agent.
CONT’D
3. Goal-based agent
 Choose actions so as to achieve a (given or computed) goal.
 Goal-based agents further expand on the capabilities of
model-based agents by using "goal" information.
 Goal information describes situations that are desirable. This gives
the agent a way to choose among multiple possibilities, selecting
the one that reaches a goal state.
 Search and planning are the subfields of artificial intelligence
devoted to finding action sequences that achieve the agent's goals.
 It is more flexible because the knowledge that supports its decisions
is represented explicitly and can be modified.
 The agent continues to receive percepts and maintain state.
 The agent also has a goal.
 It makes decisions based on achieving the goal.
 Example
 Pathfinder goal: reach a boulder.
 If Pathfinder trips or gets stuck, it can make decisions to reach
the goal. (A minimal code sketch follows.)
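As a hedged illustration (the grid world, goal, and action set are made-up assumptions, not the actual Pathfinder controller), a goal-based agent selects among possibilities by whether the predicted result moves it toward the goal:

```python
# Sketch of a goal-based agent: pick the action whose predicted result
# is closest to the goal. The grid world and actions are illustrative.
GOAL = (3, 3)
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def goal_based_agent(position):
    """Choose among possibilities by how close the result is to the goal."""
    def result(action):
        dx, dy = ACTIONS[action]
        return (position[0] + dx, position[1] + dy)
    return min(ACTIONS, key=lambda a: manhattan(result(a), GOAL))

pos = (0, 0)
while pos != GOAL:                # replans from whatever state it is in,
    a = goal_based_agent(pos)     # so a trip or detour is handled naturally
    dx, dy = ACTIONS[a]
    pos = (pos[0] + dx, pos[1] + dy)
print("reached", pos)             # reached (3, 3)
```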
CONT’D
Figure: A goal-based agent.
CONT’D
4. Utility-based agent
 Goals are not enough – we also need to know the value of
a goal.
Is this a minor accomplishment, or a major one?
This affects decision making – the agent will take greater
risks for more major goals.
 Utility: a numerical measurement of the importance
of a goal.
 A utility-based agent will attempt to make the
appropriate tradeoff.
CONT’D
Goal-based agents only distinguish between goal states and
non-goal states.
 It is possible to define a measure of how desirable a particular
state is. This measure can be obtained through the use of
a utility function which maps a state to a measure of the utility
of the state.
 A more general performance measure should allow a
comparison of different world states according to exactly how
happy they would make the agent.
 The term utility can be used to describe how "happy" the agent
is.
 A rational utility-based agent chooses the action that maximizes
the expected utility of the action outcomes - that is, what the
agent expects to derive, on average, given the probabilities and
utilities of each outcome.
 A utility-based agent has to model and keep track of its
environment, tasks that have involved a great deal of research
on perception, representation, reasoning, and learning.
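A hedged sketch of expected-utility action selection (the actions, outcome probabilities, and utility values are made-up illustrative numbers): the agent picks the action that maximizes the probability-weighted average utility of its outcomes.

```python
# Sketch of a rational utility-based agent: maximize expected utility.
# The outcome models below are illustrative assumptions.
outcomes = {
    # action: list of (probability, resulting_state)
    "safe_route": [(1.0, "arrive_late")],
    "fast_route": [(0.8, "arrive_on_time"), (0.2, "traffic_jam")],
}
utility = {"arrive_on_time": 10.0, "arrive_late": 4.0, "traffic_jam": 0.0}

def expected_utility(action):
    """Average utility of the action's outcomes, weighted by probability."""
    return sum(p * utility[s] for p, s in outcomes[action])

def utility_based_agent():
    return max(outcomes, key=expected_utility)

print({a: expected_utility(a) for a in outcomes})
# {'safe_route': 4.0, 'fast_route': 8.0}
print(utility_based_agent())  # fast_route
```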
CONT’D
Figure: A utility-based agent.
CONT’D
5. Learning agent
Learning has the advantage that it allows an agent to
initially operate in unknown environments and to
become more competent than its initial knowledge
alone might allow.
The most important distinction is between the
"learning element", which is responsible for making
improvements, and the "performance element",
which is responsible for selecting external actions.
The learning element uses feedback from the "critic"
on how the agent is doing and determines how the
performance element should be modified to do
better in the future.
The performance element is what we have previously
considered to be the entire agent: it takes in percepts
and decides on actions.

CONT’D
5. Learning agent
 The last component of the learning agent is the

"problem generator". It is responsible for suggesting


actions that will lead to new and informative
experiences.

CONT’D
N.B. Performance standard: think of this
as outside the agent, since you don’t want it to
be changed by the agent.
 Critic: tells the learning element how well the
agent is doing with respect to the
performance standard (because the percepts
don’t tell the agent about its success/failure).
 Learning element: responsible for
improving the agent’s behavior with
experience.
 Problem generator: suggests actions to
come up with new and informative
experiences. (A minimal sketch of these
four components follows.)
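As a hedged illustration (the two-action task and all names are hypothetical), the four components can be wired together like this:

```python
# Sketch of a learning agent: performance element, critic, learning element,
# and problem generator. The two-action task is an illustrative assumption.
import random

values = {"A": 0.0, "B": 0.0}   # learned estimates of each action's worth
counts = {"A": 0, "B": 0}

def performance_element():
    """Selects external actions using current knowledge (greedy choice)."""
    return max(values, key=values.get)

def problem_generator():
    """Suggests exploratory actions for new, informative experiences."""
    return random.choice(list(values))

def critic(action):
    """Scores the action against a performance standard outside the agent."""
    true_reward = {"A": 0.3, "B": 0.7}
    return 1.0 if random.random() < true_reward[action] else 0.0

def learning_element(action, feedback):
    """Uses the critic's feedback to improve the performance element."""
    counts[action] += 1
    values[action] += (feedback - values[action]) / counts[action]

for step in range(1000):
    explore = random.random() < 0.1          # occasionally explore
    action = problem_generator() if explore else performance_element()
    learning_element(action, critic(action))

print(performance_element())  # almost certainly 'B' after learning
```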
PROPERTIES OF ENVIRONMENTS
Environments come in several flavors. The principal
distinctions to be made are as follows:
 Accessible/ Inaccessible.

 If an agent's sensory apparatus gives it access to


the complete state of the environment, then we say
that the environment is accessible to that agent.
 An environment is effectively accessible if the
sensors detect all aspects that are relevant to the
choice of action.
 An accessible environment is convenient because
the agent need not maintain any internal state to
keep track of the world; it is freed from the task of
keeping track of changes in the environment.
CONT’D
Deterministic/ Nondeterministic.
If the next state of the environment is
completely determined by the current state
and the actions selected by the agents, then
we say the environment is deterministic.
In principle, an agent need not worry about
uncertainty in an accessible, deterministic
environment.
 If the environment is inaccessible, however,
then it may appear to be nondeterministic.
This is particularly true if the environment is
complex, making it hard to keep track of all
the inaccessible aspects.
CONT’D
Episodic/ Non-episodic.

In an episodic environment, the agent's


experience is divided into "episodes".
 Each episode consists of the agent perceiving
and then acting.
The quality of its action depends just on the
episode itself, because subsequent episodes do
not depend on what actions occur in previous
episodes.
 Episodic environments are much simpler
because the agent does not need to think
ahead.
CONT’D
Static/ Dynamic.
If the environment can change while an agent
is deliberating, then we say the environment
is dynamic for that agent; otherwise it is
static.
 Static environments are easy to deal with
because the agent need not keep looking at
the world while it is deciding on an action,
nor need it worry about the passage of time.
 If the environment does not change with the
passage of time but the agent's performance
score does, then we say the environment is
semi-dynamic.
CONT’D
Discrete/ Continuous.
 If the number of distinct percepts and actions
is limited, the environment is discrete;
otherwise it is continuous.
 Chess is discrete—there are a fixed number

of possible moves on each turn. Taxi driving is


continuous—the speed and location of the taxi
and the other vehicles sweep through a range
of continuous values.

CONT’D

Single agent/ Multi-agent.
 An agent operating by itself in an environment is the
single-agent case; an environment in which many agents
affect each other’s performance measure (cooperatively
or competitively) is multi-agent.

N.B. Some environments are more difficult for
agents than others. The most challenging
environments are inaccessible, nondeterministic,
non-episodic, dynamic, multi-agent, and continuous.
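As an illustrative summary of these properties for the two examples already mentioned above (standard textbook judgments; borderline calls are debatable), expressed as Python data:

```python
# Classifying the two example environments from the slides along the
# properties above. Judgments are illustrative, not definitive.
properties = ("accessible", "deterministic", "episodic", "static", "discrete")

environments = {
    "chess (no clock)": (True,  True,  False, True,  True),
    "taxi driving":     (False, False, False, False, False),
}

for name, flags in environments.items():
    print(name, dict(zip(properties, flags)))
```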


TECHNICAL TERMS
1. Artificial :
Man-made things.
2. Intelligence:
Ability to think and take decisions.
3. Artificial Intelligence:
Artificial Intelligence is the branch of computer science concerned with making
computers behave like humans.
4. Agent :
An agent is one that acts.
5. Rational Agent :
It is the one that acts to achieve the best expected outcome.
6. Environment :
Surroundings.
7. Sensors :
It is used to get percepts from the environment.
8. Actuators :
These are used to perform any actions by the agent.
9. Percept :
An agent's perceptual input at any instant.
TECHNICAL TERMS
10. Percept Sequence :
The complete history of everything the agent has ever perceived.
11. Agent Function:
It maps any given percept sequence to an action.
12. Agent program :
Agent function is implemented by agent program.
13. Performance Measure :
Criteria for success of an agent's behavior.
14. Omniscience:
The actual outcome of the action is known.
15. Autonomy:
Uses inbuilt and perceived knowledge.
16. Simple Reflex Agent :
These agents select actions on the basis of the current percept.
17. Model based Reflex Agent:
The current percept is combined with the old internal state to generate
the updated description of the current state.
18. Goal Based agent :
Uses the current state description along with desirable goal information.
Questions?