UNIT – 1
agent = architecture + program
The agent program implements the agent function: the mapping from percepts to actions.
Four basic kinds of agent programs that embody the principles underlying almost all intelligent systems:
• Simple reflex agents;
• Model-based reflex agents;
• Goal-based agents;
• Utility-based agents.
Simple Reflex Agents
• These agents select actions on the basis of the current percept, ignoring the rest of the percept history.
• Condition–action rule: if condition then action.
• For example, the vacuum agent whose agent function is tabulated is a simple reflex agent, because its decision is based only on the current location and on whether that location contains dirt.
• Works only if the correct decision can be made on the basis of the current percept alone, that is, only if the environment is fully observable.

function SIMPLE-REFLEX-AGENT(percept) returns an action
  persistent: rules, a set of condition–action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action
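As a rough illustration, here is a minimal Python sketch of this agent program for the two-location vacuum world; the rule table and helper names below are illustrative stand-ins for INTERPRET-INPUT and RULE-MATCH, not a fixed API.

# Minimal sketch of SIMPLE-REFLEX-AGENT for the two-location vacuum world.
# The percept is assumed to be a (location, status) pair, e.g. ("A", "Dirty").

# Condition–action rules: map an interpreted state to an action.
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def interpret_input(percept):
    """INTERPRET-INPUT: here the percept already is the state description."""
    return percept

def simple_reflex_agent(percept):
    """Select an action from the CURRENT percept only (no percept history)."""
    state = interpret_input(percept)
    action = RULES[state]          # RULE-MATCH: look up the matching rule
    return action

print(simple_reflex_agent(("A", "Dirty")))  # -> Suck
print(simple_reflex_agent(("A", "Clean")))  # -> Right

Note that the agent has no memory at all: two identical percepts always produce the same action, which is exactly why full observability is required.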
Model-Based Reflex Agents
A model-based agent handles partial observability by maintaining an internal representation of the world.
• Internal State: stores information from previous percepts to infer parts of the environment not currently visible.
• World Model: encodes knowledge of how the world evolves and how the agent's actions change the environment.
• Percept History: combines the current percept with the internal state, using a function such as UPDATE-STATE, to keep track of the current situation.
• Decision Making: uses the updated internal state to decide what action to take.
• Better for Complex Tasks: suitable for dynamic and partially observable environments such as driving, exploration, or robotics.
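A minimal sketch of this idea in Python, again using the vacuum world: the agent folds each percept into an internal state and matches rules against that state rather than the raw percept. The class and method names are assumptions for illustration.

# Sketch of a model-based reflex agent: the agent keeps an internal state
# that is updated from the new percept, so it can act under partial
# observability (it remembers things no longer directly perceived).

class ModelBasedReflexAgent:
    def __init__(self, rules):
        self.rules = rules          # condition–action rules
        self.state = {}             # internal representation of the world
        self.last_action = None

    def update_state(self, percept):
        """UPDATE-STATE: fold the new percept into the internal model.
        Here the 'world model' is just remembering each square's status."""
        location, status = percept
        self.state["location"] = location
        self.state[location] = status
        return self.state

    def __call__(self, percept):
        state = self.update_state(percept)
        # Rule matching runs on the *inferred* state, not the raw percept:
        if state[state["location"]] == "Dirty":
            action = "Suck"
        else:
            action = self.rules.get(state["location"], "NoOp")
        self.last_action = action
        return action

agent = ModelBasedReflexAgent(rules={"A": "Right", "B": "Left"})
print(agent(("A", "Dirty")))  # -> Suck
print(agent(("A", "Clean")))  # -> Right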
Goal-Based Agent
1. Goal-Oriented Decisions: chooses actions based on a goal (desired outcome), not just the current state.
2. Future Planning: considers possible future outcomes to pick the best action.
3. Flexible Behavior: can easily adapt to new goals by changing the goal input.
4. Requires Reasoning: involves reasoning like "What if I do this?" and "Will this help achieve my goal?"
5. Uses a Model: relies on knowledge of how actions affect the environment (similar to model-based reflex agents).
6. Less Efficient but More Adaptable: slower than reflex agents but handles complex, changing environments better.
7. Supports Search & Planning: a core concept in AI fields like search algorithms and planning systems.

Example:
Scenario: a self-driving taxi at a road junction (same agent program, with a goal function).
Goal: reach the passenger's destination.
Behavior:
o Evaluates its current location.
o Uses a map to decide whether to turn left, right, or go straight.
o Picks the path that leads closer to the goal.
o If it starts raining, updates its model to account for slippery roads and changes braking behavior accordingly.
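The junction example can be sketched in a few lines of Python: the agent simulates each available turn with its map (the model) and keeps the one that ends up closer to the goal. The toy map and the distance numbers below are invented for illustration.

# Sketch of a goal-based agent at a junction: for each action it asks
# "what if I do this?" using its map, then picks the best outcome.

GOAL = "Destination"

# Model: which location each action leads to from the junction (assumed).
MAP = {"left": "Market", "right": "Highway", "straight": "OldTown"}

# Knowledge: remaining distance to the goal from each location (assumed).
DISTANCE_TO_GOAL = {"Market": 7, "Highway": 2, "OldTown": 5}

def goal_based_agent(actions):
    """Simulate every action with the model, keep the one nearest the goal."""
    def result_of(action):
        return MAP[action]                      # predicted next state
    return min(actions, key=lambda a: DISTANCE_TO_GOAL[result_of(a)])

print(goal_based_agent(["left", "right", "straight"]))  # -> right

Changing the goal only means swapping DISTANCE_TO_GOAL for a different one; the agent program itself stays the same, which is the "flexible behavior" point above.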
Utility-Based Agent
1. Goes Beyond Goals: unlike goal-based agents that only distinguish success from failure, utility-based agents evaluate how good each outcome is.
2. Uses a Utility Function: a utility function assigns a numeric value (utility) to each possible state to reflect its desirability to the agent.
3. Decision Under Uncertainty: in uncertain environments (e.g., stochastic or partially observable), it chooses actions based on expected utility, a weighted average over possible outcomes.
4. Handles Trade-offs: useful when goals conflict (e.g., speed vs. safety) or when goals can't be guaranteed; utility helps balance preferences.
5. More Flexible & Rational: enables more intelligent behavior by allowing the agent to compare and prioritize among various choices.
6. Needs Modeling & Computation: requires models of the environment and efficient algorithms to predict outcomes and maximize utility, which can be complex.

✅ Example: Self-Driving Taxi Agent (same agent program, with a utility function)
Goal-based approach: get the passenger from point A to B.
Utility-based approach: get the passenger from A to B safely, quickly, and cheaply, while avoiding traffic, obeying laws, and saving fuel.
The utility function might look like:
Utility = 0.5 * Safety + 0.3 * Speed + 0.2 * Cost-Savings
The agent chooses the route or maneuver that gives the highest expected utility based on these factors.
Learning Agent – Summary in Points
1. Definition:
A learning agent is an AI system that improves its performance over time by learning from experience and feedback.
2. Why Learning?
o Allows adaptation in unknown environments.
o Helps the agent become more competent than its initial programming.
3. Four Main Components:
o Performance Element: chooses actions based on percepts (the "doer" of the agent).
o Learning Element: improves the agent by modifying the performance element.
o Critic: provides feedback on how well the agent is doing based on a fixed standard.
o Problem Generator: suggests exploratory actions to gain new knowledge or experience.
4. Learning Types:
o From percepts (e.g., observing how actions change the environment).
o From feedback (e.g., learning what works well or poorly).
o From utility/reward (e.g., getting tips or penalties).
5. Key Idea:
Learning is the process of modifying internal knowledge/components using feedback to improve future decisions.

🚖 Example: Automated Taxi Agent
Performance Element: drives the taxi based on current knowledge (e.g., traffic rules, GPS).
Critic: observes outcomes like other drivers' reactions or passenger satisfaction.
Learning Element: learns that sudden lane changes lead to poor outcomes and adjusts driving behavior.
Problem Generator: tries braking in different road conditions to learn better control.
(Figure: learning agent flowchart)
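As a rough sketch of how the four components fit together, the toy loop below lets a taxi-like agent learn a braking preference. The actions, feedback values, and update rule are all invented for illustration and stand in for a real learning algorithm.

# Sketch wiring the four learning-agent components into one loop.
import random

random.seed(0)

q = {"gentle_brake": 0.0, "hard_brake": 0.0}   # performance element's knowledge

def performance_element():
    """The 'doer': choose the currently best-rated action."""
    return max(q, key=q.get)

def problem_generator():
    """Suggest an exploratory action to gain new experience."""
    return random.choice(list(q))

def critic(action):
    """Score the outcome against a fixed standard (passenger comfort)."""
    return 1.0 if action == "gentle_brake" else -1.0

def learning_element(action, feedback, lr=0.5):
    """Improve the performance element by nudging its action values."""
    q[action] += lr * (feedback - q[action])

for step in range(20):
    explore = step % 4 == 0                     # explore every 4th step
    action = problem_generator() if explore else performance_element()
    learning_element(action, critic(action))

print(q)  # gentle_brake ends up rated higher than hard_brake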
Representation Types

Atomic
• Description: treats each state as a single indivisible unit ("black box").
• Example: "CityA" or "CityB" in a route-planning problem.
• Strengths: simple to implement; works well in search/game-playing problems.
• Limitations: cannot represent internal structure or shared attributes.

Factored
• Description: splits the state into variables (attributes) with values.
• Example: Location = CityA, Gas = Full, Money = Low.
• Strengths: allows partial knowledge, variable tracking, and easier transitions.
• Limitations: limited when complex relationships exist between entities.

Structured
• Description: represents the world as objects and relations between them.
• Example: Truck(t1), Cow(c1), Blocking(c1, t1), Reversing(t1).
• Strengths: captures rich, relational information; suitable for logic and language.
• Limitations: complex reasoning; computationally expensive.
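One way to make the three levels concrete is to write the same kind of world state in Python under each representation; the encodings below are one possible choice, not the only one.

# Sketch: a state under the three representation types.

# Atomic: the state is one indivisible label.
atomic_state = "CityA"

# Factored: the state is a set of variable -> value assignments.
factored_state = {"location": "CityA", "gas": "Full", "money": "Low"}

# Structured: the state is objects plus relations between them,
# here encoded as (predicate, *arguments) tuples.
structured_state = {
    ("Truck", "t1"),
    ("Cow", "c1"),
    ("Blocking", "c1", "t1"),
    ("Reversing", "t1"),
}

# A factored representation supports partial queries an atomic one cannot:
print(factored_state["gas"])                         # -> Full
# A structured one additionally supports relational queries:
print(("Blocking", "c1", "t1") in structured_state)  # -> True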
UNIT – 3
What is a Problem-Solving Agent?
A problem-solving agent is an intelligent agent that simplifies complex decision-making by adopting a goal and working to
achieve it using a sequence of actions.
Why Use Goals?
• Goals simplify decision-making by narrowing the scope of what the agent needs to consider.
• Instead of optimizing multiple conflicting objectives (e.g., fun, sightseeing, rest), a goal like "get to Bucharest" focuses the agent's efforts.
• They help eliminate irrelevant options early on.
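A small Python sketch of how a goal narrows the choices: with the goal "get to Bucharest", the agent only needs a goal test and the actions relevant to reaching it. The tiny road map below is a fragment of the usual Romania example, included only for illustration.

# Sketch: a goal reduces decision-making to a goal test plus relevant actions.

ROADS = {
    "Arad": ["Sibiu", "Timisoara", "Zerind"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
}

def goal_test(state):
    return state == "Bucharest"

def relevant_actions(state):
    """Only neighbouring cities matter; sightseeing, rest, etc. are ignored."""
    return ROADS.get(state, [])

state = "Arad"
plan = ["Sibiu", "Fagaras", "Bucharest"]   # a sequence of actions to the goal
for step in plan:
    assert step in relevant_actions(state), "not a legal move"
    state = step
print(goal_test(state))  # -> True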