
Intelligent Agents

Chapter 2
Outline
• Agents and environments
• Rationality
• PEAS (Performance measure,
Environment, Actuators, Sensors)
• Environment types
• Agent types
Agents
• An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon that environment
through actuators

Examples:
• Human Agent
• Robotic Agent
• Software agent

• Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators
• Robotic agent: cameras and infrared range finders for sensors; various motors for actuators
• Software agent: keystrokes, file contents, and network packets for sensors; displaying on the screen, writing files, and sending packets for actuators
An agent can be a software program
or a physical robot. It can interact
with the environment directly or
indirectly, and it can make decisions
based on a set of rules, heuristics, or
machine learning algorithms.
Structure of Agents
• Agent = architecture + program
• architecture
– device with sensors and actuators
– e.g., A robotic car, a camera, a PC, …
• program
– implements the agent function on the
architecture

Agents and environments

• The agent function maps from percept histories to actions: f: P* → A
• The agent program runs on the physical architecture to produce f
• agent = architecture + program
Main components of an agent
These include:
•Sensors: Sensors are used to perceive the state of the environment. They can
include cameras, microphones, pressure sensors, or any other device that can detect
the relevant features of the environment.
•Actuators: Actuators are used to take actions in the environment. They can include
motors, speakers, displays, or any other device that can interact with the
environment.
•Perception: Perception refers to the process of interpreting the sensory input and
extracting meaningful information from it.
•Decision-making: Decision-making refers to the process of choosing an action
based on the current state of the environment and the agent's goals.
•Learning: Learning refers to the process of improving the agent's decision-making
capabilities through experience. It can include supervised, unsupervised, or
reinforcement learning.
An agent can operate in a variety of environments, including physical environments,
simulated environments, and software environments. Some common examples of
agents include autonomous vehicles, chatbots, recommendation systems, and game-
playing agents.
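To make these components concrete, here is a minimal Python skeleton of an agent; this is a sketch under our own naming (Agent, perceive, decide, step are illustrative, not a standard API):

class Agent:
    """Illustrative skeleton tying together perception, decision-making, and learning."""

    def __init__(self, rules):
        self.rules = rules       # decision-making knowledge: state -> action
        self.experience = []     # recorded (state, action) pairs, raw material for learning

    def perceive(self, raw_input):
        # Perception: interpret raw sensor data into a state description
        return raw_input

    def decide(self, state):
        # Decision-making: choose an action for the current state (default: do nothing)
        return self.rules.get(state, "NoOp")

    def step(self, raw_input):
        # One sense-decide-act cycle; the returned action drives the actuators
        state = self.perceive(raw_input)
        action = self.decide(state)
        self.experience.append((state, action))
        return action

agent = Agent(rules={"dirty": "suck"})
print(agent.step("dirty"))  # -> suck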
Some examples of agents in artificial intelligence:

1.Industrial robots: Industrial robots are agents that use sensors and actuators to perform a
variety of manufacturing tasks, such as welding, painting, or assembly.
2.Autonomous vehicles: Autonomous vehicles are agents that use a combination of
sensors, machine learning algorithms, and decision-making systems to navigate the
environment and make driving decisions.
3.Chatbots: Chatbots are agents that use natural language processing techniques to
understand user inputs and provide appropriate responses.
4.Recommendation systems: Recommendation systems are agents that use collaborative
filtering or content-based filtering techniques to suggest products, services, or content to
users.
5.Game-playing agents: Game-playing agents are agents that use decision-making
algorithms to play games such as chess, Go, or poker.
6.Personal assistants: Personal assistants, such as Apple's Siri or Amazon's Alexa, are
agents that use natural language processing and machine learning algorithms to perform
tasks such as setting reminders, making appointments, or playing music.
7.Automated trading systems: Automated trading systems are agents that use machine
learning algorithms to analyze financial data and make trading decisions.
8.Medical diagnosis systems: Medical diagnosis systems are agents that use machine
learning algorithms and medical data to diagnose diseases and recommend treatments.
Acting rationally: rational agent
• Rational behavior: doing the right thing
• Doesn't necessarily involve thinking – e.g.,
blinking reflex – but thinking should be in
the service of rational action
• The right thing: that which is expected to
maximize goal achievement, given the
available information

Rational Agent Advantages:
1. It is more general than the logical approach because
correct inference is only a useful mechanism for
achieving rationality, not a necessary one.
2. It is more amenable to scientific development than
approaches based on human behaviour or human
thought because a standard of rationality can be defined
independent of humans.
Note: Achieving perfect rationality in complex
environments is not possible because the computational
demands are too high. However, we will study perfect
rationality as a starting place.
Vacuum-cleaner world

• Percepts: location and contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck, NoOp
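This world is small enough to write out part of the agent function as a lookup table from percept to action (keyed here by the current percept only, which is the reflex simplification of f). A sketch in Python, with the usual table entries:

# Partial tabulation of the vacuum-world agent function: percept -> action
vacuum_table = {
    ("A", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
}

def table_driven_vacuum_agent(percept):
    location, status = percept
    return vacuum_table[(location, status)]

print(table_driven_vacuum_agent(("A", "Dirty")))  # -> Suck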

Rational agents
• An agent should strive to "do the right thing", based on what
it can perceive and the actions it can perform. The right
action is the one that will cause the agent to be most
successful.
• For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has (stated as a formula below).
• Performance measure: An objective criterion for
success of an agent's behavior
• E.g., performance measure of a vacuum-cleaner
agent could be amount of dirt cleaned up, amount of
time taken, amount of electricity consumed, amount
of noise generated, etc.
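Stated as a formula (our notation, not from the slides): for percept sequence P and built-in knowledge K, the rational action is

\[
a^{*} = \underset{a \in A}{\arg\max}\; \mathbb{E}\left[\, \text{performance measure} \mid P,\, K,\, a \,\right]
\]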
Specifying the Task
Environment
PEAS
•Performance Measure: captures agent’s
aspiration
•Environment: context, restrictions
•Actuators: indicates what the agent can
carry out
•Sensors: indicates what the agent can
perceive
PEAS
• Must first specify the setting for intelligent agent design
• Consider, e.g., the task of designing an automated taxi driver:
– Performance measure: safe, fast, legal, comfortable trip, maximize profits
– Environment: roads, other traffic, pedestrians, customers
– Actuators: steering wheel, accelerator, brake, signal, horn
– Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
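A PEAS specification is easy to record as a small data structure; here is a hedged sketch (the PEAS dataclass and its field names are our own convention, filled in with the taxi example above):

from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list   # what counts as success
    environment: list   # where the agent operates
    actuators: list     # what it can do
    sensors: list       # what it can perceive

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer", "engine sensors", "keyboard"],
)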

PEAS
• Agent: Medical diagnosis system
• Performance measure: Healthy patient,
minimize costs, lawsuits
• Environment: Patient, hospital, staff
• Actuators: Screen display (questions,
tests, diagnoses, treatments, referrals)
• Sensors: Keyboard (entry of symptoms,
findings, patient's answers)
PEAS
• Agent: Part-picking robot
• Performance measure: Percentage of
parts in correct bins
• Environment: Conveyor belt with parts,
bins
• Actuators: Jointed arm and hand
• Sensors: Camera, joint angle sensors
PEAS
• Agent: Interactive English tutor
• Performance measure: Maximize student's
score on test
• Environment: Set of students
• Actuators: Screen display (exercises,
suggestions, corrections)
• Sensors: Keyboard
Agent Environment in AI
• An environment is everything in the world that surrounds the agent but is not part of the agent itself. An environment can be described as the situation in which the agent is present.
• The environment is where the agent lives and operates; it provides the agent with things to sense and act upon.
• Features of Environment
1. Fully observable vs Partially Observable
2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs Multi-agent
6. Episodic vs sequential
7. Known vs Unknown
8. Accessible vs Inaccessible
1. Fully observable vs Partially Observable:
•If an agent's sensors can access the complete state of the environment at each point in time, the environment is fully observable; otherwise it is partially observable.
•A fully observable environment is convenient because the agent need not maintain internal state to keep track of the history of the world.
•If an agent has no sensors at all, the environment is unobservable to it.

2. Deterministic vs Stochastic:
•If the agent's current state and selected action completely determine the next state of the environment, the environment is deterministic.
•A stochastic environment is random in nature and cannot be completely predicted by the agent.
•In a deterministic, fully observable environment, the agent does not need to worry about uncertainty.
3. Episodic vs Sequential:
•In an episodic environment, the agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.
•An episodic environment is thus a series of one-shot actions, and only the current percept is required to choose an action.
•In a sequential environment, by contrast, the agent requires memory of past actions to determine the next best action.
4. Single-agent vs Multi-agent:
•If only one agent is involved in an environment and operates by itself, it is a single-agent environment.
•If multiple agents operate in the environment, it is a multi-agent environment.
•Agent design problems in the multi-agent environment are different from those in the single-agent environment.
5. Static vs Dynamic:
•If the environment can change while the agent is deliberating, it is a dynamic environment; otherwise it is static.
•Static environments are easy to deal with because the agent does not need to keep looking at the world while deciding on an action.
•In a dynamic environment, however, the agent needs to keep observing the world between actions.
•Taxi driving is an example of a dynamic environment, whereas crossword puzzles are an example of a static environment.
6. Discrete vs Continuous:
•If an environment offers a finite number of percepts and actions, it is a discrete environment; otherwise it is continuous.
•A chess game comes under the discrete environments, as there is a finite number of moves that can be performed.
•A self-driving car is an example of a continuous environment.
7. Known vs Unknown:
Known and unknown are not really features of the environment but of the agent's state of knowledge. In a known environment, the results of all actions are known to the agent; in an unknown environment, the agent needs to learn how it works in order to act. It is quite possible for a known environment to be partially observable and for an unknown environment to be fully observable.
8. Accessible vs Inaccessible:
If an agent can obtain complete and accurate information about the environment's state, the environment is accessible; otherwise it is inaccessible. An empty room whose state is defined by its temperature is an example of an accessible environment; information about an arbitrary event on Earth is an example of an inaccessible one.
Environment types

                   Chess with   Chess without   Taxi
                   a clock      a clock         driving
Fully observable   Yes          Yes             No
Deterministic      Strategic    Strategic       No
Episodic           No           No              No
Static             Semi         Yes             No
Discrete           Yes          Yes             No
Single agent       No           No              No

• The environment type largely determines the agent design


• The real world is (of course) partially observable, stochastic,
sequential, dynamic, continuous, multi-agent


Environment Examples

Environment                 Observable  Deterministic  Episodic    Static   Discrete    Agents
Chess with a clock          Fully       Strategic      Sequential  Semi     Discrete    Multi
Chess without a clock       Fully       Strategic      Sequential  Static   Discrete    Multi
Poker                       Partial     Strategic      Sequential  Static   Discrete    Multi
Backgammon                  Fully       Stochastic     Sequential  Static   Discrete    Multi
Taxi driving                Partial     Stochastic     Sequential  Dynamic  Continuous  Multi
Medical diagnosis           Partial     Stochastic     Episodic    Static   Continuous  Single
Image analysis              Fully       Deterministic  Episodic    Semi     Discrete    Single
Robot part picking          Fully       Deterministic  Episodic    Semi     Discrete    Single
Interactive English tutor   Partial     Stochastic     Sequential  Dynamic  Discrete    Multi

Legend: fully observable vs. partially observable; deterministic vs. stochastic / strategic; episodic vs. sequential; static vs. dynamic; discrete vs. continuous; single agent vs. multiagent.
Agent functions and programs
• An agent is completely specified by the
agent function mapping percept
sequences to actions
• One agent function (or a small
equivalence class) is rational
• Aim: find a way to implement the rational
agent function concisely


Agent types
Five basic types, in order of increasing generality and ability to handle complex environments:
•Simple reflex agents
•Model-based reflex agents
•Goal-based agents
•Utility-based agents
•Learning Agents
Simple reflex agents

• Use simple “if ... then” rules
• Can be short-sighted

SimpleReflexAgent(percept)
    state = InterpretInput(percept)    # interpret the current percept as a state description
    rule = RuleMatch(state, rules)     # find the first rule whose condition matches the state
    action = RuleAction(rule)          # the action half of the matched condition-action rule
    return action
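The same loop as runnable Python, with illustrative vacuum-world rules (the function and the condition-action representation are ours; rules are tried in order, first match wins):

def simple_reflex_agent(percept, rules):
    state = percept                      # trivial interpretation: the percept is the state
    for condition, action in rules:      # first matching condition-action rule wins
        if condition(state):
            return action
    return "NoOp"

vacuum_rules = [
    (lambda s: s[1] == "Dirty", "Suck"),
    (lambda s: s[0] == "A", "Right"),
    (lambda s: s[0] == "B", "Left"),
]

print(simple_reflex_agent(("A", "Clean"), vacuum_rules))  # -> Right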
Simple reflex agents
• Simple reflex agents ignore the rest of the percept history and act only on the basis of the current percept.
• Percept history is the history of all that an agent has perceived to date.
• The agent function is based on the condition-action rule. A condition-action rule maps a state (i.e., a condition) to an action: if the condition is true, the action is taken, else not. This agent function only succeeds when the environment is fully observable.
• For simple reflex agents operating in partially observable environments, infinite loops are often unavoidable. It may be possible to escape from infinite loops if the agent can randomize its actions. Problems with simple reflex agents are:
• Very limited intelligence.
• No knowledge of non-perceptual parts of the state.
• The rule set is usually too big to generate and store.
• If any change occurs in the environment, the collection of rules needs to be updated.
Examples of simple reflex agents in artificial intelligence

1.Automatic doors - These doors open when they sense someone approaching and close when they no longer detect movement.
2.Thermostats - Thermostats can be programmed to turn on and off
based on the temperature in a room.
3.Motion detectors - These devices detect movement and can be used
for security purposes, such as turning on lights or setting off an alarm.
4.Traffic lights - Traffic lights switch between green, yellow, and red
based on predefined rules, such as time of day and the amount of
traffic on the road.
5.Elevator systems - Elevator systems are designed to go up and
down based on user input, with some systems using sensors to detect
when someone is waiting for an elevator.
6.Automatic vacuum cleaners - These devices move around a room
and change direction when they encounter an obstacle.
Example: Vacuum Agent

• Performance?
– 1 point for each square cleaned in time T?
– #clean squares per time step - #moves per time step?
• Environment: vacuum, dirt, multiple areas defined by square regions
• Actions: left, right, suck, idle
• Sensors: location and contents
– [A, dirty]

• Rational is not omniscient
– Environment may be partially observable
• Rational is not clairvoyant
– Environment may be stochastic
• Thus, rational is not always successful
Reflex Vacuum Agent

• If status=Dirty then return Suck
else if location=A then return Right
else if location=B then return Left
Reflex Vacuum Agent

• If status=Dirty then Suck
else if the other square has not been visited in >3 time units, go there
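A hedged Python sketch of this stateful variant, tracking when each square was last visited (all names are illustrative):

def make_stateful_vacuum_agent():
    last_visit = {"A": 0, "B": 0}   # internal state: when each square was last seen
    t = 0

    def agent(percept):
        nonlocal t
        t += 1
        location, status = percept
        last_visit[location] = t
        if status == "Dirty":
            return "Suck"
        other = "B" if location == "A" else "A"
        if t - last_visit[other] > 3:             # not visited in > 3 time units
            return "Right" if other == "B" else "Left"
        return "NoOp"

    return agent

agent = make_stateful_vacuum_agent()
print([agent(("A", "Clean")) for _ in range(5)])  # -> ['NoOp', 'NoOp', 'NoOp', 'Right', 'Right']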
Model-based reflex agents
• A model-based agent works by finding a rule whose condition matches the current situation. It can handle partially observable environments by using a model of the world. The agent keeps track of an internal state that is adjusted by each percept and depends on the percept history. The current state is stored inside the agent as a structure describing the part of the world that cannot currently be seen. Updating the state requires information about:
• how the world evolves independently of the agent, and
• how the agent's actions affect the world.
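A minimal sketch of that update loop for the vacuum world (the state representation and helper function are our own illustration of the two kinds of model knowledge):

def update_state(state, percept):
    # Model knowledge: how percepts reveal the world
    location, status = percept
    state = dict(state, location=location)
    state[location] = status
    return state

def model_based_vacuum_agent(percept, state):
    state = update_state(state, percept)
    loc = state["location"]
    other = "B" if loc == "A" else "A"
    if state[loc] == "Dirty":
        state[loc] = "Clean"     # model knowledge: how our action changes the world
        return "Suck", state
    if state.get(other) == "Dirty":
        return ("Right" if loc == "A" else "Left"), state
    return "NoOp", state

belief = {"A": "Dirty", "B": "Dirty"}   # internal state: what we believe about unseen squares
action, belief = model_based_vacuum_agent(("A", "Dirty"), belief)
print(action)  # -> Suck (and belief now records square A as clean)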
Examples of model-based reflex agents

1.Chess-playing AI: A chess-playing AI uses a model of the game board and the
rules of chess to make decisions about which move to make next. It can analyze the
board and determine which moves are legal and which ones would put its own pieces
in danger.
2.Autonomous vehicle: An autonomous vehicle uses sensors and a model of the
environment to make decisions about how to navigate the road. It can detect other
vehicles, pedestrians, and obstacles, and use its model to predict their future
movements and avoid collisions.
3.Stock trading bot: A stock trading bot uses a model of the stock market and
financial trends to make decisions about when to buy or sell stocks. It can analyze
past performance data and current market conditions to predict future trends and make
informed trading decisions.
4.Predictive maintenance system: A predictive maintenance system uses sensor data
and a model of a machine's behavior to predict when maintenance is needed. It can
analyze the data to detect patterns and anomalies, and use its model to predict when a
machine is likely to fail.
5.Weather forecasting AI: A weather forecasting AI uses a model of the atmosphere
and weather patterns to predict future weather conditions. It can analyze current
weather data, historical weather patterns, and other factors to make accurate
predictions about future weather events.
Goal-based agents
• These kinds of agents make decisions based on how far they currently are from their goal (a description of desirable situations). Every action is intended to reduce the distance to the goal. This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state. The knowledge that supports its decisions is represented explicitly and can be modified, which makes these agents more flexible. They usually require search and planning, and their behavior can easily be changed.
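The search at the heart of a goal-based agent can be as simple as breadth-first search over states; a sketch in Python, with a made-up location graph for the example:

from collections import deque

def bfs_plan(start, goal, successors):
    # Breadth-first search: returns a shortest path of states from start to goal
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

graph = {"home": ["a", "b"], "a": ["goal"], "b": ["a"], "goal": []}  # invented map
print(bfs_plan("home", "goal", lambda s: graph[s]))  # -> ['home', 'a', 'goal']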
Examples of Goal-based agents

1.Route planning AI: A route planning AI is a goal-based agent that helps users
find the most efficient route from one location to another. It considers factors such
as distance, traffic conditions, and time of day to find the best route to the
destination.
2.Personal assistant AI: A personal assistant AI is a goal-based agent that helps
users accomplish tasks and achieve goals. It can schedule appointments, set
reminders, and provide recommendations based on the user's preferences and past
behavior.
3.Game-playing AI: A game-playing AI is a goal-based agent that tries to win a
game by making strategic moves. It considers the current state of the game and its
possible future states to determine the best move to make.
4.Search engine AI: A search engine AI is a goal-based agent that helps users find
relevant information based on their search queries. It uses algorithms to rank search
results based on their relevance to the user's query.
5.Autonomous robot: An autonomous robot is a goal-based agent that performs
tasks in a physical environment. It uses sensors to gather information about its
surroundings and takes actions to achieve its goals, such as picking up and moving
objects or navigating through a space.
Utility-based agents
• Agents built around the end uses they serve are called utility-based agents. When there are multiple possible alternatives, utility-based agents are used to decide which one is best. They choose actions based on a preference (utility) for each state. Sometimes achieving the desired goal is not enough; we may look for a quicker, safer, or cheaper trip to reach a destination. Agent happiness should be taken into consideration: utility describes how "happy" the agent is. Because of the uncertainty in the world, a utility agent chooses the action that maximizes the expected utility. A utility function maps a state onto a real number that describes the associated degree of happiness.
Utility-based agents
• These agents are similar to the goal-based agent but add an extra component of utility measurement, which provides a measure of success at a given state.
• A utility-based agent acts based not only on goals but also on the best way to achieve them.
• Utility-based agents are useful when there are multiple possible alternatives and the agent has to choose the best action.
• They choose actions based on a preference (utility) for each state. Sometimes achieving the desired goal is not enough; we may look for a quicker, safer, or cheaper trip to reach a destination.
• Agent happiness should be taken into consideration. Utility describes how "happy" the agent is. Because of the uncertainty in the world, a utility agent chooses the action that maximizes the expected utility.
• A utility function maps a state onto a real number that describes the associated degree of happiness.
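The "maximize expected utility" rule in a few lines of Python (the transition probabilities and utility values are invented for the example; the decision rule itself is the standard one):

def best_action(state, actions, transition, utility):
    # Expected utility of a: sum over outcomes s' of P(s' | state, a) * U(s')
    def expected_utility(a):
        return sum(p * utility[s2] for s2, p in transition(state, a))
    return max(actions, key=expected_utility)

utility = {"arrived fast": 10, "arrived slow": 5, "stuck": -20}

def transition(state, action):
    if action == "highway":
        return [("arrived fast", 0.7), ("stuck", 0.3)]   # fast but risky
    return [("arrived slow", 1.0)]                        # slow but certain

# Highway EU = 0.7*10 + 0.3*(-20) = 1; back road EU = 5, so the safer route wins
print(best_action("start", ["highway", "back road"], transition, utility))  # -> back road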
Examples of Utility-based agents
1.Automated customer service chatbot: A utility-based agent can be used in automated
customer service chatbots that aim to provide the best possible solution to the customer's
problem. The chatbot can assess the situation and provide the most helpful answer that
maximizes the customer's satisfaction.
2.Personalized recommendation system: A utility-based agent can be used in personalized
recommendation systems that aim to recommend the most suitable products or services to the
user. The recommendation system can evaluate the user's preferences, past behaviors, and
other factors to make recommendations that maximize the user's satisfaction.
3.Automated trading system: A utility-based agent can be used in automated trading
systems that aim to maximize profits. The trading system can analyze market trends and
historical data to make trades that maximize the investor's returns.
4.Resource allocation system: A utility-based agent can be used in resource allocation
systems that aim to allocate resources efficiently. The resource allocation system can evaluate
the needs and priorities of different users to allocate resources in a way that maximizes overall
utility.
5.Energy management system: A utility-based agent can be used in energy management
systems that aim to optimize energy consumption. The energy management system can
evaluate the energy needs and costs of different appliances and devices to make decisions that
maximize energy efficiency and cost savings.
Learning agents
• A learning agent in AI is an agent that can learn from its past experiences; it has learning capabilities.
• It starts to act with basic knowledge and is then able to act and adapt automatically through learning.
• A learning agent has four main conceptual components, wired together in the sketch after this list:
– Learning element: responsible for making improvements by learning from the environment.
– Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
– Performance element: responsible for selecting external actions.
– Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
• Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it.
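A schematic of how the four components interact, as plain Python (every method body below is a stand-in for a real learning algorithm, named after the component it plays):

class LearningAgent:
    def __init__(self):
        self.rules = {}   # knowledge used by the performance element

    def performance_element(self, percept):
        # Selects the external action from current knowledge
        return self.rules.get(percept, self.problem_generator(percept))

    def critic(self, reward):
        # Feedback against a fixed performance standard (stand-in: the raw reward)
        return reward

    def learning_element(self, percept, action, feedback):
        # Improves the rules using the critic's feedback
        if feedback > 0:
            self.rules[percept] = action

    def problem_generator(self, percept):
        # Suggests exploratory actions that yield informative experience
        return "explore"

agent = LearningAgent()
print(agent.performance_element("dirty floor"))              # -> explore (no knowledge yet)
agent.learning_element("dirty floor", "suck", agent.critic(1))
print(agent.performance_element("dirty floor"))              # -> suck (learned rule)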
Examples of Learning agents
1.AlphaGo: AlphaGo is a reinforcement learning agent developed by DeepMind that was the
first to defeat a human world champion in the board game Go. AlphaGo learned by playing
against itself and improving its strategies over time.
2.Image Recognition Agents: There are many examples of learning agents that can recognize
images, including Inception, ResNet, and VGG. These agents use deep learning to classify
images into different categories.
3.Siri: Siri is a virtual assistant developed by Apple that uses natural language processing and
machine learning to understand and respond to user requests. Siri learns from user interactions
and improves its responses over time.
4.Self-Driving Car Agents: There are many examples of learning agents being used to develop
self-driving cars, including Waymo, Tesla, and Uber. These agents use reinforcement learning
to learn how to drive safely and make decisions in complex environments.
5.Recommendation Systems: Many online platforms use learning agents to provide
personalized recommendations to users. Examples include Amazon's product
recommendations, Netflix's movie and TV show recommendations, and Spotify's music
recommendations.
6.Chatbots: Chatbots are virtual agents that use natural language processing and machine
learning to interact with users in a conversational manner. Examples include Google Assistant,
Microsoft's Cortana, and Facebook's M.
7.Game Playing Agents: There are many examples of learning agents that can play games at a
high level, including Deep Blue (chess), AlphaZero (chess, Go, and shogi), and OpenAI Five
(Dota 2). These agents use reinforcement learning to learn how to play the game and improve over time.
Examples of Learning agents
1.Q-learning Agent: Q-learning is a model-free reinforcement learning algorithm that uses a
table of values to estimate the utility of taking each action in each state (see the update-rule
sketch after this list). It can be used for tasks such as playing games, controlling robots, and autonomous driving.
2.Deep Reinforcement Learning Agent: Deep reinforcement learning (DRL) is a type of
machine learning that combines deep neural networks with reinforcement learning algorithms
to enable agents to learn from raw sensory input. DRL agents have been used for a variety of
tasks, including playing video games, controlling robots, and trading in financial markets.
3.Monte Carlo Tree Search Agent: Monte Carlo Tree Search (MCTS) is a heuristic search
algorithm that has been used in artificial intelligence for decision-making in complex,
uncertain environments. MCTS can be used to solve problems such as game playing, route
planning, and resource allocation.
4.Genetic Algorithm Agent: Genetic algorithms (GAs) are a type of optimization algorithm
that mimics the process of natural selection. GAs have been used in artificial intelligence for
optimization problems, such as finding the optimal configuration of a neural network or
designing a control system for a robot.
5.Bayesian Learning Agent: Bayesian learning is a statistical learning method that uses
Bayesian inference to update probabilities as new data is received. Bayesian learning agents
have been used in natural language processing, robotics, and computer vision, among other
applications.
6.Neural Network Agent: Neural networks are a type of machine learning algorithm that are
modeled after the structure of the human brain. Neural network agents have been used for a
variety of tasks, including image and speech recognition, natural language processing, and
game playing.
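The tabular Q-learning update mentioned in item 1, as a hedged sketch (the states, reward, and parameter values are invented; the update rule itself is the standard one):

import random
from collections import defaultdict

# Standard update: Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.1
actions = ["Left", "Right"]

def choose_action(state):
    # Epsilon-greedy: mostly exploit the table, occasionally explore
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

q_update("A", "Right", reward=1.0, next_state="B")  # one step on an invented transition
print(Q[("A", "Right")])  # -> 0.1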
