Intelligent Agents
Dr. Naveen Sundar
Artificial Intelligence
Think humanly: Creating models to simulate how humans think.
Think Rationally: e.g., medical diagnosis, legal reasoning.
Act humanly: Chatbots.
Act Rationally: Autonomous Vehicles.
Systems that think like humans:
• Challenge: To build AI that thinks like humans, we first need to understand how humans think.
• Cognitive Science: This is the study of the mind and how it works. It combines knowledge from psychology, neuroscience, computer science, and other fields to understand human thought.
Systems that think ‘rationally’ - laws of thought:
• Rational Thinking AI: These systems use logical rules to solve problems and make decisions.
• Human vs. Logic: Unlike machines, humans don't always use logic perfectly. Emotions and uncertainties affect our decisions.
• Limitations: Logic alone can't handle all situations, especially when things are uncertain or too complex.
• Goal: Study and develop AI that can perceive, think, and act efficiently, balancing logic with practical strategies.
Systems that act like humans:
Goal: Build machines that can do tasks typically needing human intelligence.
Examples of Tasks: Understanding language, recognizing faces, playing chess, making decisions.
Human-Like Performance: The machine should perform these tasks in a way that seems smart, just like a person would.
The Turing Test approach: The computer must behave intelligently.
Systems that act rationally: Rational agent
• Rational Agents: These AI systems make the best decisions to achieve their goals.
• Doing the Right Thing: They aim to maximize success using logical reasoning and available information.
• Limits of Logic: Logic doesn't always provide the right answer, so they use specialized knowledge or extra information to make better decisions.
Turing Test
The success of a system's intelligent behavior can be measured with the Turing Test.
If an interrogator is not able to identify which respondent is a machine and which is a human, the computer passes the test, and the machine is said to be intelligent and able to think like a human.
https://youtu.be/4VROUIAF2Do
https://www.youtube.com/watch?v=zXf7J8y-QgI
So AI can be defined as:
Artificial - Produced by human art or effort, rather than originating naturally.
Intelligence - The ability to acquire and apply knowledge and skills.
AI is the study of ideas that enable computers to be intelligent.
AI is the part of computer science concerned with the design of computer systems that exhibit human intelligence (from the Concise Oxford Dictionary).
From the above two definitions, we can see that AI has two major roles:
1. Study the intelligent part concerned with humans.
2. Represent those actions using computers.
Study AI as a rational agent
Introduction
An AI system is composed of an agent and its environment.
An agent (e.g., a self-driving car) perceives its environment (roads and traffic) through sensors - the complete set of inputs at a given time is called a percept.
The agent can change the environment through actuators or effectors (steering/accelerator/brake).
An operation involving an effector is called an action; a group of actions forms an action sequence.
An agent is anything that can be viewed as
◦ perceiving its environment through sensors and
◦ acting upon that environment through actuators
It’s a system that implements a mapping from percept sequences to actions. [f: P* → A]
A performance measure has to be used in order to evaluate an agent.
An autonomous agent decides autonomously which action to take in the
current situation to maximize progress towards its goals.
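To make this mapping concrete, here is a minimal table-driven sketch in Python; the percept values, action names, and lookup table are invented for illustration and are not part of any real system.

```python
# A minimal sketch of the abstract agent mapping f: P* -> A, written as a
# table-driven agent. The percepts, actions, and lookup table below are
# illustrative assumptions.

class TableDrivenAgent:
    def __init__(self, table):
        # The table maps a tuple of all percepts seen so far to an action.
        self.table = table
        self.percept_sequence = []

    def program(self, percept):
        """Agent program: append the new percept, then look up an action."""
        self.percept_sequence.append(percept)
        return self.table.get(tuple(self.percept_sequence), "NoOp")


# Hypothetical usage:
table = {
    ("obstacle_ahead",): "Brake",
    ("obstacle_ahead", "road_clear"): "Accelerate",
}
agent = TableDrivenAgent(table)
print(agent.program("obstacle_ahead"))  # -> Brake
print(agent.program("road_clear"))      # -> Accelerate
```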
Examples of Agent
A Human agent has eyes, ears, and other organs which act as sensors, and hands, legs, mouth, and other body parts acting as actuators.
A Robotic agent has cameras, sonar, infrared, bumpers, etc. for sensors, and grippers, wheels, lights, speakers, etc. for actuators. Ex: Xavier from CMU (a robotic agent that interacts with people), COG from MIT (a humanoid robot), the AIBO entertainment robot from SONY, etc.
A Software agent, or softbot, uses software functions as its sensors and actuators. Ex: askjeeves.com (a software agent that answers user queries).
Expert systems: Cardiologs - AI serving cardiology.
Intelligent Agents
An Intelligent Agent must sense, must act, must be autonomous (to some extent), and must be rational.
AI is about building rational agents - an agent is something that perceives and acts.
Rationality - the status of being reasonable, sensible, and having good judgment.
A rational agent always does the right thing (the right thing is that which is expected to maximize goal achievement, given the available information). Giving answers to questions is also ‘acting’.
The problem the agent solves is characterized by Performance Measure,
Environment, Actuators, and Sensors (PEAS).
PEAS - Performance, Environment, Actuators, Sensors
Specifying the task environment is always the first step in designing an agent.
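As an example of specifying a task environment, here is a PEAS description for an automated taxi written as a plain Python dictionary; the specific entries are a common textbook-style illustration, not a definitive specification.

```python
# A sketch of a PEAS description for an automated taxi. The individual
# entries are illustrative assumptions.

peas_taxi = {
    "Performance": ["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators":   ["steering", "accelerator", "brake", "signal", "horn"],
    "Sensors":     ["cameras", "sonar", "speedometer", "GPS", "odometer"],
}

for component, items in peas_taxi.items():
    print(f"{component}: {', '.join(items)}")
```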
Agent Terminology
•Performance Measure of Agent − The criterion that determines how successful an agent is.
•Behavior of Agent − The action that the agent performs after any given sequence of percepts.
•Percept − The agent’s perceptual inputs at a given instance.
•Percept Sequence − The history of everything the agent has perceived to date.
•Agent Function − A map from the percept sequence to an action.
The Structure of Intelligent Agents
•Agent = Architecture + Agent Program
•Architecture = The machinery or platform where the agent operates.
•Ex - Robotic Architecture: Physical robot with sensors (cameras, sonar) and actuators (wheels, grippers).
•Agent Program = an implementation of an agent function.
The job of AI is to design the agent program that implements the agent function mapping percepts to
actions
Aim: find a way to implement the rational agent function concisely
It takes the current percept as input from the sensors and returns an action to the actuators.
The agent program takes only the current percept as input - nothing more is available from the environment at that moment.
The agent function takes the entire percept history - to implement it, the agent must remember all the percepts.
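A minimal sketch of how the architecture and the agent program fit together, assuming a made-up environment and a trivial agent program; none of these names come from a real library.

```python
# A minimal sketch of "Agent = Architecture + Agent Program". The
# run_architecture loop and DummyEnvironment are hypothetical stand-ins
# for the real platform; percepts and actions are made up.

def run_architecture(agent_program, environment, steps=3):
    for _ in range(steps):
        percept = environment.sense()       # architecture reads the sensors
        action = agent_program(percept)     # agent program: percept -> action
        environment.act(action)             # architecture drives the actuators


class DummyEnvironment:
    """Illustrative environment that serves a fixed stream of percepts."""
    def __init__(self):
        self.percepts = iter(["clear", "obstacle", "clear"])

    def sense(self):
        return next(self.percepts)

    def act(self, action):
        print("executing:", action)


def simple_program(percept):
    # A trivial agent program using only the current percept.
    return "Brake" if percept == "obstacle" else "Drive"


run_architecture(simple_program, DummyEnvironment())
# prints: executing: Drive / executing: Brake / executing: Drive
```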
Characteristics of an intelligent agent
•Rationality : Perfect Rationality assumes that the rational agent knows everything and will take the action that maximizes its utility. A Rational Action is the action that maximizes the expected value of the performance measure given the percept sequence to date.
•Bounded Rationality : The property of an agent that behaves in a manner that is as nearly optimal with respect to its goals as its resources allow - an intelligent agent is expected to act optimally to the best of its abilities and its resource constraints within the agent environment.
•Agent Environment : Environments in which agents operate can be defined in different ways.
•Observability : In terms of observability, an environment can be characterized as fully observable or partially
observable.
In a fully observable environment, everything relevant to the action being considered is observable, so the agent does not need to keep track of changes in the environment. Ex: a chess-playing system.
In a partially observable environment, the relevant features of the environment are only partially observable. Ex: a bridge-playing program.
•Determinism : In deterministic environments, the next state of the environment is completely
described by the current state and the agent’s action. Ex: Image analysis systems - the processed
image is determined completely by the current image and the processing operations.
•Episodicity : An episodic environment means that subsequent episodes do not depend on what
actions occurred in previous episodes. In a sequential environment, the agent engages in a series of
connected episodes.
•Dynamism:
•Static Environment: does not change from one state to the next while the agent is considering its
course of action. The only changes to the environment are those caused by the agent itself.
◦ A static environment does not change while the agent is thinking (board games).
◦ The passage of time as an agent deliberates is irrelevant.
◦ The agent doesn’t need to observe the world during deliberation.
A Dynamic Environment changes over time independent of the actions of the agent - and thus if an agent does not respond in a timely manner, this counts as a choice to do nothing (stock market).
•Continuity : If the number of distinct percepts and actions is limited, the environment is discrete (chess); otherwise it is continuous (self-driving car).
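One way to make these dimensions concrete is to record them as a small data structure; the example classifications below follow the text above and common textbook usage, and should be treated as assumptions rather than definitive.

```python
# A sketch recording the environment properties above for a few example
# tasks. The classifications are illustrative, not exhaustive.

from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    name: str
    fully_observable: bool
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool

environments = [
    TaskEnvironment("Chess (no clock)", True,  True,  False, True,  True),
    TaskEnvironment("Image analysis",   True,  True,  True,  True,  True),
    TaskEnvironment("Taxi driving",     False, False, False, False, False),
]

for env in environments:
    print(env)
```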
Types of Agents
Agents can be grouped based on their degree of perceived intelligence and capability :
◦Simple Reflex Agents
◦Model-Based Reflex Agents
◦Goal-Based Agents
◦Utility-Based Agents
◦Learning Agent
1. Simple reflex agents
•Also known as the simplest agents - the most basic form of agent, acting only on the current state.
•Very low intelligence capability, as they don’t have the ability to store past states.
They respond to events based on pre-defined (pre-programmed) rules - they hold a static table from which they fetch all the pre-defined rules for acting.
They perform well only when the environment is fully observable.
•They follow the condition-action rule - the agent maps the current state directly to an action (sketched below).
•No knowledge of non-perceptual parts of the state.
If any change occurs in the environment, then the collection of rules needs to be updated.
Ex: a thermostat.
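A minimal sketch of a simple reflex agent, assuming a thermostat with invented temperature thresholds and action names.

```python
# A simple reflex agent: a thermostat driven by condition-action rules
# held in a static table. Thresholds and action names are illustrative
# assumptions.

RULES = [
    (lambda temp: temp < 18.0, "turn_heating_on"),
    (lambda temp: temp > 22.0, "turn_heating_off"),
]

def simple_reflex_agent(percept):
    """Map the current percept (a temperature reading) directly to an action."""
    for condition, action in RULES:
        if condition(percept):
            return action
    return "do_nothing"

print(simple_reflex_agent(15.0))  # -> turn_heating_on
print(simple_reflex_agent(25.0))  # -> turn_heating_off
print(simple_reflex_agent(20.0))  # -> do_nothing
```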
2. Model-based reflex agents
An advanced version of the Simple Reflex agent.
It can also respond to events based on pre-defined conditions, and in addition it can store an internal state (past information) based on previous events.
To perform any action, it relies on both the internal state and the current percept.
•It works in a partially observable environment and keeps track of the state.
• It has two important parts:
• Model: knowledge of “how things happen in the world” - hence the name model-based agent (e.g., the basic layout for a robot).
• Internal State: a representation of the current state based on the percept history (e.g., which areas are already cleaned, where the obstacles are).
Ex: the Roomba vacuum cleaner and the autonomous car known as Waymo. Both interact with their environments by using what they know (an internal model of the world) together with their on-board sensors to make moment-to-moment decisions about their actions.
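A minimal sketch of a model-based reflex agent, assuming a toy two-location vacuum world; the percept format and action names are invented for illustration.

```python
# A model-based reflex agent for a toy two-location vacuum world.
# The world, percept format, and action names are illustrative assumptions.

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: what the agent believes about each location.
        self.model = {"A": "unknown", "B": "unknown"}

    def program(self, percept):
        location, status = percept          # e.g. ("A", "dirty")
        self.model[location] = status       # update internal state from the percept
        if status == "dirty":
            return "suck"
        # Use the model: if the other location might still be dirty, move there.
        other = "B" if location == "A" else "A"
        if self.model[other] != "clean":
            return f"move_to_{other}"
        return "do_nothing"

agent = ModelBasedVacuumAgent()
print(agent.program(("A", "dirty")))   # -> suck
print(agent.program(("A", "clean")))   # -> move_to_B
print(agent.program(("B", "clean")))   # -> do_nothing
```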
3. Goal Based Agents
Goal - a description of a desirable situation. Goal-based agents select their actions in order to achieve goals.
A goal-based agent has an agenda (e.g., a shopping list).
The action taken by these agents depends on the distance from their goal (the desired situation).
The actions are intended to reduce the distance between the current state and the desired state.
To attain its goal, it makes use of search and planning algorithms (a sketch follows the example below).
One drawback of goal-based agents is that they don’t always select the most optimized path to reach the final goal - this can be overcome by using a utility-based agent.
Ex: ‘Alibaba-G Plus’ robots being tested for delivery to homes. How to deliver, where to deliver, and which path to take - all this planning is made by a goal-based agent.
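A minimal sketch of a goal-based agent that plans with a simple breadth-first search over an invented delivery map; the locations and graph are assumptions, not data from any real system.

```python
# A goal-based agent that uses breadth-first search over a made-up
# delivery map to reduce the distance to its goal.

from collections import deque

DELIVERY_MAP = {
    "warehouse": ["street_1", "street_2"],
    "street_1": ["warehouse", "customer_home"],
    "street_2": ["warehouse", "street_1"],
    "customer_home": ["street_1"],
}

def plan_route(start, goal, graph):
    """Breadth-first search returning a list of locations from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # goal unreachable

def goal_based_agent(current_location, goal="customer_home"):
    route = plan_route(current_location, goal, DELIVERY_MAP)
    # Act so as to reduce the distance to the goal: take the next step on the route.
    return route[1] if route and len(route) > 1 else "stay"

print(goal_based_agent("warehouse"))  # -> street_1
```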
4. Utility Based Agents
A utility-based agent acts based not only on what the goal is, but also on the best way to reach that goal.
In short, it is the usefulness, or utility, that distinguishes it from its counterparts.
The action taken by these agents depends on the end objective (preference), which is why they are called utility agents (e.g., the degree of happiness on a tour).
Utility agents are used when there are multiple solutions to a problem and the best possible alternative has to be chosen - the alternative chosen is based on each state’s utility.
They perform a cost-benefit analysis of each solution and select the one that achieves the goal at minimum cost (see the sketch below).
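A minimal sketch of a utility-based agent choosing among several routes by maximizing a utility function; the routes, scores, and weights are invented for illustration.

```python
# A utility-based agent: score each alternative with a utility function
# and pick the best one. All values and weights are illustrative assumptions.

routes = {
    "highway":    {"time_min": 30, "comfort": 0.9,  "toll": 5.0},
    "back_roads": {"time_min": 45, "comfort": 0.6,  "toll": 0.0},
    "scenic":     {"time_min": 70, "comfort": 0.95, "toll": 0.0},
}

def utility(route):
    # Higher is better: reward comfort, penalize travel time and tolls.
    return 10 * route["comfort"] - 0.1 * route["time_min"] - 0.5 * route["toll"]

def utility_based_agent(options):
    # Choose the alternative whose state has the highest utility.
    return max(options, key=lambda name: utility(options[name]))

print(utility_based_agent(routes))  # -> highway
```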
5. Learning Agent
Learning agents have the ability to learn from their past experiences.
These types of agents can start from scratch and, over time, acquire significant knowledge from their environment.
They start acting with basic knowledge and are then able to act and adapt automatically through learning.
Ex: self-driving cars (learn from past experience and improve).
•A learning agent has four main conceptual components:
Learning element: responsible for making improvements by learning from the environment.
Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
Performance element: responsible for selecting external actions.
Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
Example: a taxi driver.
Performance Element - knowledge of how to drive in traffic.
Learning Element - relates low tips to the actions that may be the cause; it is able to formulate a rule saying this was a bad action, and the performance element is modified by installing the new rule.
Critic - observes tips from customers and horn honking from other cars.
Problem Generator - proposes new routes to try in order to improve driving skills; it identifies certain areas of behavior in need of improvement and suggests experiments, such as trying out the brakes on different road surfaces under different conditions (like scientists trying out new experiments). A minimal sketch tying these components together follows below.
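A minimal sketch that wires the four conceptual components together for the taxi example; the rules, percepts, and feedback signals are invented for illustration.

```python
# A learning agent skeleton for the taxi example: performance element,
# critic, learning element, and problem generator. All rules, percepts,
# and feedback values are illustrative assumptions.

class LearningTaxiAgent:
    def __init__(self):
        self.rules = {"default": "drive_normally"}   # performance element's knowledge

    def performance_element(self, percept):
        # Selects the external action using the current rules.
        return self.rules.get(percept, self.rules["default"])

    def critic(self, feedback):
        # Judges the outcome against a fixed performance standard (e.g., tips).
        return "bad" if feedback in ("low_tip", "horn_honking") else "good"

    def learning_element(self, percept, feedback):
        # Uses the critic's judgment to improve the performance element.
        if self.critic(feedback) == "bad":
            self.rules[percept] = "drive_more_smoothly"   # install a new rule

    def problem_generator(self):
        # Suggests exploratory actions that lead to new, informative experiences.
        return "try_alternative_route"

agent = LearningTaxiAgent()
print(agent.performance_element("sharp_turn"))    # -> drive_normally
agent.learning_element("sharp_turn", "low_tip")   # critic says this was bad
print(agent.performance_element("sharp_turn"))    # -> drive_more_smoothly
print(agent.problem_generator())                  # -> try_alternative_route
```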