Intelligent Agents and Their Environments
Report by: Sheryll Mascarenhas (B. Tech. AIML, School of CSIT)
Submitted to: Prof. Rehana Shaikh
Introduction: In artificial intelligence (AI), intelligent agents (IAs) are autonomous entities that use sensors and actuators to interact with their environment in order to achieve goals. IAs can operate in a variety of environments, including physical spaces such as a factory floor and digital spaces such as a website. They use sensors to gather information about their environment, a decision-making mechanism to choose actions based on their goals and the sensor data, and actuators to carry those actions out. IAs can also learn from their environment in order to achieve their goals; a minimal sketch of this sense-decide-act cycle appears after the list of characteristics below. Some common characteristics of IAs include:
• Adaptation: IAs can adapt based on their experiences.
• Real-time problem-solving: IAs can solve problems in real time.
• Error and success rate analysis: IAs can analyse their error and success
rates.
• Memory-based storage and retrieval: IAs can use memory-based storage and
retrieval.
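To make the sense-decide-act cycle concrete, here is a minimal Python sketch. The Environment class and its read_sensors and apply methods are hypothetical placeholders invented for illustration, not part of any particular library, and the decision rule is deliberately trivial.

# Minimal sketch of the sense-decide-act cycle described above.
# Environment, read_sensors(), and apply() are hypothetical
# stand-ins for real sensors and actuators.

class Environment:
    """Toy world exposing sensor readings and accepting actions."""
    def __init__(self):
        self.state = 0

    def read_sensors(self):
        return self.state               # what the agent can perceive

    def apply(self, action):
        self.state += action            # how actions change the world

def agent_loop(env, steps=5):
    for _ in range(steps):
        percept = env.read_sensors()          # 1. sense
        action = 1 if percept < 3 else -1     # 2. decide (toy rule)
        env.apply(action)                     # 3. act

agent_loop(Environment())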
Some examples of IAs include:
• Driverless cars: Use cameras, radar, ultrasonic, and LiDAR sensors to
navigate their environment
• Siri: A virtual assistant that interacts with users
• Chatbots: Interact with users in virtual environments
Researchers can use virtual environments to test and evaluate how IAs perform
different tasks. For example, they can create scenarios where IAs interact in a
virtual grocery store to study their decision-making and planning skills.
Intelligent Agents:
An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators in order to achieve goals. An intelligent agent may learn from the environment to achieve its goals. A thermostat is an example of an intelligent agent.
The following are the four main rules for an AI agent (the thermostat sketch after the list illustrates all four):
o Rule 1: An AI agent must have the ability to perceive the environment.
o Rule 2: The observations must be used to make decisions.
o Rule 3: The decisions should result in an action.
o Rule 4: The action taken by the AI agent must be a rational action.
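A thermostat makes these rules concrete. The sketch below is illustrative only: the 20 °C set point and the sample readings are invented for the example.

# Hypothetical thermostat agent illustrating the four rules:
# it perceives a temperature (Rule 1), uses that observation to
# decide (Rule 2), the decision yields an action (Rule 3), and
# the action is rational with respect to its set point (Rule 4).

def thermostat_agent(temperature, set_point=20.0):
    if temperature < set_point:
        return "heater_on"
    return "heater_off"

for reading in [18.5, 19.9, 21.2]:      # made-up sensor readings
    print(reading, "->", thermostat_agent(reading))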
Rational Agent:
o A rational agent is an agent which has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions.
o A rational agent is said to do the right thing. AI is about creating rational agents, drawing on game theory and decision theory for various real-world scenarios.
o For an AI agent, rational action is most important because in reinforcement learning, the agent receives a positive reward for each best possible action and a negative reward for each wrong action (see the sketch after this list).
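The reward idea can be sketched with a toy action-value update in the spirit of reinforcement learning. The two actions, the reward values, and the learning rate below are all invented for illustration.

# Toy illustration of reward-driven learning: the estimate of each
# action's value is nudged up by positive rewards and down by
# negative ones, so good actions end up preferred.

values = {"left": 0.0, "right": 0.0}    # action-value estimates
rewards = [("right", +1.0), ("left", -1.0), ("right", +1.0)]
alpha = 0.5                             # learning rate

for action, reward in rewards:
    values[action] += alpha * (reward - values[action])

print(values)    # "right" ends up with the higher estimate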
Structure of an AI Agent
The task of AI is to design an agent program which implements the agent function.
The structure of an intelligent agent is a combination of architecture and agent
program. It can be viewed as:
Agent = Architecture + Agent Program
Following are the three main terms involved in the structure of an AI agent (a small sketch of this decomposition follows the list):
i. Architecture: The machinery that the AI agent executes on.
ii. Agent Function: A map from a percept to an action.
iii. Agent Program: An implementation of the agent function, which runs on the architecture to produce actions.
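In code, the agent function is just a mapping from percepts to actions, and the architecture is whatever runs the program implementing it. The class and function names below are hypothetical, chosen only to mirror the three terms above.

# Hypothetical decomposition: the agent program implements the
# agent function (percept -> action); the architecture runs it.

def agent_program(percept):
    """An implementation of the agent function."""
    return "forward" if percept == "clear" else "stop"

class Architecture:
    """The machinery the agent program executes on."""
    def run(self, program, percepts):
        return [program(p) for p in percepts]

print(Architecture().run(agent_program, ["clear", "blocked", "clear"]))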
An agent is anything that can perceive its environment through sensors and act upon that environment through effectors.
• A human agent has sensory organs such as eyes, ears, nose, tongue and skin that serve as sensors, and other organs such as hands, legs and mouth that serve as effectors.
• A robotic agent has cameras and infrared range finders for sensors, and various motors and actuators for effectors.
• A software agent has encoded bit strings as its programs and actions.
Types of Intelligent Agents
I. Simple Reflex Agents
• They choose actions only on the basis of the current percept.
• They are rational only if a correct decision can be made on the basis of the current percept alone.
• Their environment must be completely observable for them to act rationally.
Condition-Action Rule − A rule that maps a state (condition) to an action.
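A condition-action rule table can be written directly as a lookup from percepts to actions. The vacuum-world style locations and actions below are assumptions made for the example.

# Simple reflex agent driven by condition-action rules.
# The rule table is a made-up, vacuum-world style example.

RULES = {
    ("A", "dirty"): "suck",
    ("A", "clean"): "move_right",
    ("B", "dirty"): "suck",
    ("B", "clean"): "move_left",
}

def simple_reflex_agent(percept):
    # Act on the current percept alone, via the rule table.
    return RULES[percept]

print(simple_reflex_agent(("A", "dirty")))    # -> suck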
II. Model-Based Reflex Agents
They use a model of the world to choose their actions and maintain an internal state (a sketch follows the list below).
Model − Knowledge about “how things happen in the world”.
Internal State − A representation of the unobserved aspects of the current state, based on percept history.
Updating the state requires information about −
• How the world evolves.
• How the agent’s actions affect the world.
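A model-based variant keeps an internal state and updates it from the percept history before acting. The trivial update rule and state fields below are hypothetical.

# Sketch of a model-based reflex agent: it updates an internal
# state from each percept (its "model" of the world is trivially
# simple here) and then chooses an action from that state.

class ModelBasedAgent:
    def __init__(self):
        self.state = {"last_seen_dirty": False}    # internal state

    def update_state(self, percept):
        # Model: how percepts relate to the state of the world.
        self.state["last_seen_dirty"] = (percept == "dirty")

    def act(self, percept):
        self.update_state(percept)
        return "suck" if self.state["last_seen_dirty"] else "move"

agent = ModelBasedAgent()
print([agent.act(p) for p in ["dirty", "clean", "dirty"]])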
III. Goal-Based Agents
They choose their actions in order to achieve goals. The goal-based approach is more flexible than the reflex approach, since the knowledge supporting a decision is modelled explicitly and can therefore be modified.
Goal − A description of desirable situations.
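Goal-directed action selection can be sketched as building a plan that moves the current state toward the goal description. The one-dimensional positions and move names below are invented for the example.

# Sketch of a goal-based agent: instead of reacting, it compares
# its state to an explicit goal and plans actions to reach it.

def goal_based_agent(position, goal):
    plan = []
    while position != goal:                # compare state to goal
        step = 1 if position < goal else -1
        plan.append("right" if step == 1 else "left")
        position += step
    return plan

print(goal_based_agent(position=0, goal=3))   # -> ['right', 'right', 'right']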
IV. Utility-Based Agents
They choose actions based on a preference (utility) for each state (a toy expected-utility calculation follows the list below).
Goals alone are inadequate when −
• There are conflicting goals, of which only a few can be achieved.
• Goals have some uncertainty of being achieved, and you need to weigh the likelihood of success against the importance of each goal.
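Weighing likelihood against importance amounts to maximizing expected utility. The two routes, their outcome probabilities, and their utilities below are invented for illustration.

# Sketch of a utility-based agent: it picks the action with the
# highest expected utility. Numbers are made up for the example.

outcomes = {
    # action: list of (probability, utility of resulting state)
    "safe_route": [(1.0, 5.0)],
    "fast_route": [(0.7, 10.0), (0.3, -20.0)],
}

def expected_utility(action):
    return sum(p * u for p, u in outcomes[action])

best = max(outcomes, key=expected_utility)
print(best, expected_utility(best))    # safe_route scores 5.0 vs 1.0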
The Nature of Environments
Some programs operate in an entirely artificial environment confined to keyboard input, databases, computer file systems and character output on a screen. In contrast, some software agents (software robots, or softbots) exist in rich, unlimited softbot domains: the simulated environment is very detailed and complex, and the agent must choose from a long array of actions in real time. A softbot designed to scan a customer's online preferences and show the customer interesting items works in a real as well as an artificial environment.
The most famous artificial environment is the Turing Test environment, in which one real and one artificial agent are tested on equal ground. This is a very challenging environment, as it is highly difficult for a software agent to perform as well as a human.
Properties of Environment
An environment can be characterized along several dimensions (a small code profile of two example environments follows the list) −
• Discrete / Continuous − If there are a limited number of distinct, clearly defined states of the environment, the environment is discrete (for example, chess); otherwise, it is continuous (for example, driving).
• Observable / Partially Observable − If it is possible to determine the complete state of the environment at each time point from the percepts, it is observable; otherwise, it is only partially observable.
• Static / Dynamic − If the environment does not change while an agent is acting, then it is static; otherwise, it is dynamic.
• Single agent / Multiple agents − The environment may contain other agents, which may be of the same or a different kind as the agent.
• Accessible / Inaccessible − If the agent’s sensory apparatus can have
access to the complete state of the environment, then the environment is
accessible to that agent.
• Deterministic / Non-deterministic − If the next state of the environment is
completely determined by the current state and the actions of the agent,
then the environment is deterministic; otherwise, it is non-deterministic.
• Episodic / Non-episodic − In an episodic environment, each episode
consists of the agent perceiving and then acting. The quality of its action
depends just on the episode itself. Subsequent episodes do not depend on
the actions in the previous episodes. Episodic environments are much
simpler because the agent does not need to think ahead.
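These properties can be recorded as a simple per-environment profile. The dataclass and its field names are assumptions for illustration, and the two example rows follow the chess and driving examples given above.

# Recording the properties above as a small profile. The
# classifications of chess and taxi driving are illustrative.

from dataclasses import dataclass

@dataclass
class EnvironmentProfile:
    discrete: bool
    observable: bool
    static: bool
    single_agent: bool
    deterministic: bool
    episodic: bool

chess = EnvironmentProfile(
    discrete=True, observable=True, static=True,
    single_agent=False, deterministic=True, episodic=False)

taxi_driving = EnvironmentProfile(
    discrete=False, observable=False, static=False,
    single_agent=False, deterministic=False, episodic=False)

print(chess)
print(taxi_driving)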