Chapter 2
Outline
Intelligent Agents
Introduction
Agents and Environments
Acting of Intelligent Agents (Rationality)
Structure of Intelligent Agents
Agent Types
Simple reflex agent
Model-based reflex agent
Goal-based agent
Utility-based agent
Learning agent
Types of environment
Important Concepts and Terms
Intelligent Agents (IA)
Introduction
An agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators
(effectors).
An agent is one that acts.
Percept: an agent's perceptual input at any given instant.
Sensors: what the agent uses to get percepts from the environment.
Actuators: what the agent uses to perform actions.
Cont’d
This simple idea is illustrated in the figure below.
A human agent has eyes, ears, and other organs for sensors and hands,
legs, mouth, and other body parts for actuators.
A robotic agent might have cameras and infrared range finders for
sensors and various motors for actuators.
Figure: Agents interact with environments through sensors and effectors
(source: Stuart J. Russell and Peter Norvig, Artificial Intelligence: A
Modern Approach)
Cont’d
Agent-Related Terms
Percept sequence: A complete history of everything the agent has ever
perceived. Think of this as the state of the world from the agent’s
perspective.
Agent function (or policy): maps a percept sequence to an action, and so
determines the agent's behavior; formally, a map from percept histories to
actions, f : P* → A.
Agent program: implements the agent function; it runs on the physical
architecture to produce f.
Agent = architecture + program
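To make these terms concrete, here is a minimal Python sketch of a table-driven agent program (an illustration, not from the slides): the percept history is the lookup key and the action is the value. The two-square vacuum world, percept format, and action names are hypothetical.

```python
# A minimal sketch of "agent = architecture + program": the agent program
# implements the agent function f : P* -> A by appending each new percept
# to the percept history and looking that history up in a table.
# The table entries below are hypothetical, for illustration only.

def make_table_driven_agent(table):
    percepts = []                        # percept sequence observed so far

    def program(percept):
        percepts.append(percept)
        # f maps the whole percept history, not just the latest percept,
        # so the table would need an entry for every possible history.
        return table.get(tuple(percepts), "NoOp")

    return program

# Hypothetical two-square vacuum world: a percept is (location, status).
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))             # -> Suck
```

The table-driven design makes the agent function explicit, but the table grows exponentially with the length of the percept history, which is why the agent types described below compute the function instead of storing it.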
Acting of Intelligent Agents (Rationality)
A rational agent is one that does the right thing: conceptually speaking, every
entry in the table for the agent function is filled out correctly.
Obviously, doing the right thing is better than doing the wrong thing. The
right action is the one that will cause the agent to be most successful.
An agent should strive to "do the right thing", based on what it can perceive
and the actions it can perform.
CONT’D
Performance measures
A performance measure embodies the criterion for success of an
agent's behavior.
When an agent is plunked down in an environment, it generates a
sequence of actions according to the percepts it receives.
This sequence of actions causes the environment to go through a
sequence of states.
If the sequence is desirable, then the agent has performed well.
RATIONALITY
What is rational at any given time depends on four things:
The performance measure that defines the criterion of success.
The agent's prior knowledge of the environment.
The actions that the agent can perform.
The agent's percept sequence to date.
This leads to a definition of a rational agent:
For each possible percept sequence, a rational agent should select
an action that is expected to maximize its performance measure,
given the evidence provided by the percept sequence and whatever
built-in knowledge the agent has.
CONT’D
Omniscient agent
Omniscience means the agent knows the actual outcome of its
actions and can act accordingly.
What information would a chess player need to have to be
omniscient?
Omniscience is (generally) impossible in reality: environments are
stochastic and the agent's knowledge is imperfect.
A rational agent should do the right thing based on the knowledge it
has.
A rational agent should also be autonomous: it should learn what it can to
compensate for partial or incorrect prior knowledge. (It would be best to give
the agent some initial knowledge as well as the ability to learn.)
1. Simple reflex agents
Figure: Schematic of a simple reflex agent. Sensors report "what the
world is like now"; condition-action rules determine "what action I
should do now"; effectors carry out that action in the environment.
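The condition-action idea is easy to show in code. A minimal sketch, again using a hypothetical two-square vacuum world; the rules are illustrative only.

```python
# A minimal sketch of a simple reflex agent: it ignores percept history
# and chooses an action by applying condition-action rules to the
# current percept alone. The rules below are hypothetical.

def simple_reflex_agent(percept):
    location, status = percept           # interpret the current percept
    if status == "Dirty":                # rule: dirty square -> clean it
        return "Suck"
    if location == "A":                  # rule: clean in A -> move right
        return "Right"
    return "Left"                        # rule: clean in B -> move left

print(simple_reflex_agent(("A", "Dirty")))   # -> Suck
print(simple_reflex_agent(("B", "Clean")))   # -> Left
```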
CONT’D
2. Model-based reflex agents
A model-based agent can handle a partially observable
environment.
Its current state is stored inside the agent, which maintains an
internal structure describing the part of the world it cannot
currently see.
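A minimal sketch of the internal-state idea, reusing the hypothetical vacuum world; the model's initialization and update rule are assumptions for illustration.

```python
# A minimal sketch of a model-based reflex agent: internal state lets it
# act sensibly even though it can only observe the square it is on.

def make_model_based_agent():
    model = {"A": "Dirty", "B": "Dirty"}     # assume unseen squares dirty

    def program(percept):
        location, status = percept
        model[location] = status             # fold the percept into the model
        if status == "Dirty":
            return "Suck"
        if model["A"] == "Dirty":            # unseen square may need cleaning
            return "Left"
        if model["B"] == "Dirty":
            return "Right"
        return "NoOp"                        # model says everything is clean

    return program

agent = make_model_based_agent()
print(agent(("A", "Clean")))                 # -> Right (B may still be dirty)
```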
CONT’D
3. Goal-based agent
Choose actions so as to achieve a (given or computed) goal.
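A goal-based agent can be sketched as one-step lookahead against a goal test. A minimal sketch, assuming a hypothetical one-dimensional world and a known transition model; every name here is illustrative.

```python
# A minimal sketch of a goal-based agent: rather than fixed rules, it
# predicts what each action would lead to (via a transition model) and
# picks one whose predicted outcome satisfies the goal.

def goal_based_agent(state, goal, actions, result):
    for action in actions:
        if result(state, action) == goal:    # would this action reach the goal?
            return action
    return "NoOp"                            # no single action achieves it

# Hypothetical 1-D world: the agent wants to reach position 3.
result = lambda s, a: s + 1 if a == "Right" else s - 1
print(goal_based_agent(2, 3, ["Left", "Right"], result))   # -> Right
```

In general the agent may need search or planning to find a whole sequence of actions, not just one step; this sketch shows only the simplest case.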
CONT’D
4. Utility-based agent
Goals alone are not enough: we also need to know the value of a goal.
Is this a minor accomplishment, or a major one?
This affects decision making: an agent will take greater risks for more
important goals.
Utility: a numerical measure of the importance of a goal.
A utility-based agent will attempt to make the appropriate tradeoff.
CONT’D
Goal-based agents only distinguish between goal states and
non-goal states.
It is possible to define a measure of how desirable a particular
state is. This measure can be obtained through the use of
a utility function which maps a state to a measure of the utility
of the state.
A more general performance measure should allow a
comparison of different world states according to exactly how
happy they would make the agent.
The term utility can be used to describe how "happy" the agent
is.
A rational utility-based agent chooses the action that maximizes
the expected utility of the action outcomes - that is, what the
agent expects to derive, on average, given the probabilities and
utilities of each outcome.
A utility-based agent has to model and keep track of its environment.
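The decision rule just described, choosing the action that maximizes expected utility, can be written directly. A minimal sketch with hypothetical probabilities and utilities:

```python
# Expected utility of an action a: EU(a) = sum over outcomes s of P(s|a) * U(s).
# A rational utility-based agent picks the action with the highest EU.
# All numbers below are hypothetical, for illustration only.

def expected_utility(action, outcomes, utility):
    # outcomes[action] is a list of (state, probability) pairs
    return sum(p * utility[s] for s, p in outcomes[action])

def utility_based_choice(actions, outcomes, utility):
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

utility = {"safe": 5, "jackpot": 100, "crash": -50}
outcomes = {
    "cautious": [("safe", 1.0)],
    "risky":    [("jackpot", 0.3), ("crash", 0.7)],
}
print(utility_based_choice(["cautious", "risky"], outcomes, utility))
# EU(cautious) = 5.0; EU(risky) = 0.3*100 + 0.7*(-50) = -5.0 -> "cautious"
```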
CONT’D
5. Learning agent
Learning has the advantage that it allows the agent to
initially operate in unknown environments and to
become more competent than its initial knowledge
alone might allow.
The most important distinction is between the
"learning element", which is responsible for making
improvements, and the "performance element",
which is responsible for selecting external actions.
The learning element uses feedback from the "critic"
on how the agent is doing and determines how the
performance element should be modified to do
better in the future.
The performance element is what we have previously
considered to be the entire agent: it takes in percepts and
decides on actions.
CONT’D
N.B. Performance standard: think of this as outside the
agent, since you do not want it to be changed by the agent.
Critic: tells the learning element how well the agent is doing
with respect to the performance standard, based on its
experiences.
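To show how these pieces fit together, here is a skeleton in Python. This is an assumption-laden sketch, not AIMA's design: the critic reads a numeric reward assumed to arrive inside the percept, and the learning element uses a simple running-average update as a stand-in for a real learning algorithm.

```python
# A minimal skeleton of the learning-agent decomposition:
#   performance element - selects external actions,
#   critic              - scores behavior against a fixed standard,
#   learning element    - adjusts the performance element from feedback.

class LearningAgent:
    def __init__(self, actions):
        self.value = {a: 0.0 for a in actions}   # performance element's knowledge
        self.last_action = None

    def step(self, percept):
        if self.last_action is not None:
            # Critic: feedback on the last action (reward assumed in percept).
            feedback = percept["reward"]
            # Learning element: nudge the estimate toward the feedback.
            self.value[self.last_action] += 0.1 * (feedback - self.value[self.last_action])
        # Performance element: select the action currently believed best.
        self.last_action = max(self.value, key=self.value.get)
        return self.last_action

agent = LearningAgent(["Left", "Right"])
print(agent.step({"reward": 0.0}))   # first action, before any learning
print(agent.step({"reward": 1.0}))   # feedback improves the estimate for it
```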
PROPERTIES OF ENVIRONMENTS
Environments come in several flavors. The principal
distinctions to be made are as follows:
Accessible / Inaccessible.
Deterministic / Non-deterministic.
Episodic / Non-episodic.
Static / Dynamic (or semi-dynamic).
Discrete / Continuous.
CONT’D
Discrete / Continuous.
If the number of distinct, clearly defined percepts and actions is
limited, the environment is discrete; otherwise it is continuous.