ARTIFICIAL INTELLIGENCE
LECTURE 3
DR. DANISH JAVED
University of Management and Technology
Department of Software Engineering
REPRESENTING KNOWLEDGE
OR “STATES OF THE WORLD”
❖ AI systems need to represent what’s happening in the
world (the state of the environment).
❖ Different problems need different levels of detail, and
the way we represent that information changes how
intelligent the system can be.
REPRESENTING KNOWLEDGE
OR “STATES OF THE WORLD”
There are three main types of representations:
i. Atomic
ii. Factored
iii. Structured
And two ways to store those representations:
i. Localist
ii. Distributed
ATOMIC REPRESENTATION
❖ In an atomic representation, each state of the world is indivisible, and it has no
internal details. Think of this as a black box, a simple label with no internal
details.
Example
❖ Imagine you are finding a driving route across a country.
All you care about is which city you are in, not gas level, weather, or roads.
So your state might just be:
❖ City = Lahore
That’s it. Each city name is an “atom.”
❖ Islamabad ≠ Lahore (different black boxes).
We know nothing inside those boxes, only that they are different.
Used in:
❖ Search problems (e.g., finding shortest path)
❖ Game playing (like chess positions)
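The idea above can be sketched in a few lines of Python. This is a minimal illustration, not a full search implementation; the city names and road links are hypothetical.

```python
# Atomic representation: each state is an opaque label (here, a string).
# The only thing search code can do with such states is compare them.
graph = {
    "Lahore": ["Islamabad", "Multan"],
    "Islamabad": ["Lahore", "Peshawar"],
    "Multan": ["Lahore"],
    "Peshawar": ["Islamabad"],
}

state = "Lahore"

# Equality/inequality is all an atomic state supports -- no internal details:
print(state == "Islamabad")   # different black boxes
print(graph[state])           # cities reachable in one action
```

Note that nothing inside the label "Lahore" is visible to the program; fuel, weather, and roads simply do not exist in this representation.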
FACTORED REPRESENTATION
A factored representation splits each state into a fixed set of variables or attributes,
each of which can have a value.
Example:
Instead of just “Lahore,” we might describe:
➢ City = Lahore
➢ Fuel = Half tank
➢ Money = Rs. 500
➢ TollPaid = No
➢ OilLight = Off
➢ RadioStation = 103 FM
Now, two states might share some features, e.g., the same city but different fuel.
Used in: machine learning models (many use feature variables)
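A factored state maps naturally onto a Python dictionary. A small sketch (the attribute names follow the slide; the values are illustrative):

```python
# Factored representation: a state is a fixed set of variable -> value pairs.
state_a = {"City": "Lahore", "Fuel": "Half", "Money": 500, "TollPaid": False}
state_b = {"City": "Lahore", "Fuel": "Full", "Money": 500, "TollPaid": False}

# Unlike atomic labels, two states can now partially match:
shared = {k for k in state_a if state_a[k] == state_b[k]}
print(shared)   # same city, money, and toll status -- but different fuel
```

This partial overlap is exactly what lets machine learning models generalize across states that share feature values.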
STRUCTURED REPRESENTATION
We represent objects and their relationships.
We explicitly describe entities and how they interact, instead of just variables.
Example:
You’re driving and see:
“A large truck is reversing into the driveway of a dairy farm, but a cow is blocking it.”
You can’t describe that with simple attributes like TruckAhead = True.
You need objects (Truck, Cow, Driveway, Farm) and relationships:
• Object: Truck1, Cow1, FarmDriveway1
• Relation: Blocking(Cow1, Truck1)
• Relation: BackingInto(Truck1, FarmDriveway1)
Now the AI understands who is doing what to whom.
Used in: first-order logic and natural language understanding
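The truck-and-cow scene above can be sketched as objects plus relation tuples. This is only an illustration of the idea, not a real logic engine:

```python
# Structured representation: named objects and relations between them.
objects = {"Truck1", "Cow1", "FarmDriveway1"}
relations = {
    ("Blocking", "Cow1", "Truck1"),
    ("BackingInto", "Truck1", "FarmDriveway1"),
}

def holds(rel, *args):
    """True if the relation is asserted over these objects, in this order."""
    return (rel, *args) in relations

print(holds("Blocking", "Cow1", "Truck1"))   # the cow blocks the truck
print(holds("Blocking", "Truck1", "Cow1"))   # direction matters: not asserted
```

The argument order encodes "who is doing what to whom", which no flat attribute like TruckAhead = True can express.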
HOW IS KNOWLEDGE STORED IN MEMORY?
LOCALIST REPRESENTATION
Each concept → one memory location.
Example:
• “Truck” stored in one neuron (or one memory cell).
• “Cow” stored in another.
If that memory cell is damaged or misread, the concept is lost or
confused.
Analogy:
Like having one file per idea on your computer, and if the file is
corrupted, you lose it.
DISTRIBUTED REPRESENTATION
Each concept is spread across many memory locations.
Each memory cell participates in storing multiple concepts.
Analogy:
Like saving a picture on cloud servers — if one bit is lost, the image is slightly
fuzzy but still recognizable.
Advantage:
More robust to noise or data loss.
Similar meanings are stored close together in “concept space.”
Example:
In a neural network, the concept “Truck” might be represented as a pattern of
activations (like a fingerprint), not a single point.
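A tiny numeric sketch of the idea. The vectors below are invented for illustration; in a real network they would be learned activation patterns:

```python
# Distributed representation: each concept is a pattern over many cells,
# and each cell contributes to many concepts.
truck = [0.9, 0.1, 0.8, 0.2]
lorry = [0.8, 0.2, 0.7, 0.3]   # similar meaning -> nearby pattern
cow   = [0.1, 0.9, 0.2, 0.8]   # unrelated meaning -> distant pattern

def dot(u, v):
    """Overlap between two activation patterns."""
    return sum(a * b for a, b in zip(u, v))

# Similar concepts overlap far more strongly than unrelated ones:
print(dot(truck, lorry) > dot(truck, cow))   # True

# Damaging one cell degrades the pattern but does not destroy it:
noisy_truck = [0.0] + truck[1:]
print(dot(noisy_truck, lorry) > dot(noisy_truck, cow))   # still True
```

This is the robustness advantage: losing one cell in a localist scheme loses the whole concept, while here it only makes the pattern slightly "fuzzy".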
PROBLEM-SOLVING AGENT
An agent is an intelligent system that perceives its environment and
acts to achieve goals.
When the correct action is not immediately clear, the agent must plan
ahead and think about a sequence of actions that will lead it from its
current state to a goal.
This type of agent is called a problem-solving agent.
And the process it uses to plan ahead is called search.
A SIMPLIFIED ROAD MAP OF PART OF
ROMANIA, WITH ROAD DISTANCES IN MILES
HOW PROBLEM-SOLVING WORKS:
THE 4 PHASES
A problem-solving agent follows four main steps when making a plan:
1. Goal Formulation
The agent first decides what it wants to achieve.
The goal gives it direction and focus.
Example:
“Reach Bucharest” is the goal.
HOW PROBLEM-SOLVING WORKS:
THE 4 PHASES
2. Problem Formulation
The agent decides how to describe the problem and what information is
relevant.
It creates a model of the world: what states exist, what actions are possible,
and what results each action causes.
Example:
Define each city as a state (Arad, Sibiu, etc.)
Define actions as traveling between connected cities.
The only thing that changes after an action is the current city.
So:
State: Current city
Action: Move to neighboring city
This simplified model makes reasoning possible.
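The formulation above translates directly into code. A minimal sketch, using only a hypothetical fragment of the Romania road map:

```python
# Problem formulation: states, actions, and a transition model.
roads = {
    "Arad": ["Sibiu", "Timisoara", "Zerind"],
    "Sibiu": ["Arad", "Fagaras", "Oradea", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
}

initial_state = "Arad"
goal_state = "Bucharest"

def actions(state):
    """Possible actions in a state: move to any neighbouring city."""
    return roads.get(state, [])

def result(state, action):
    """Transition model: the action simply names the destination city."""
    return action

print(actions("Arad"))               # the three moves available from Arad
print(result("Arad", "Sibiu"))       # the state after taking one of them
```

Because only the current city changes, the state is a single city name; everything else (fuel, weather) has been abstracted away, which is what makes reasoning tractable.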
HOW PROBLEM-SOLVING WORKS:
THE 4 PHASES
3. Search
Before moving, the agent simulates different paths mentally.
It explores possible routes to find a sequence of actions (a path) that leads to
the goal.
Example:
Try Arad → Sibiu → Fagaras → Bucharest.
Check if it reaches the goal.
If not, try another path.
This process of exploring possible sequences is called search.
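The "try a path, check, try another" loop above can be sketched as a breadth-first search. The road map here is a hypothetical fragment of the Romania problem, kept small for illustration:

```python
from collections import deque

# Search: systematically explore action sequences until one reaches the goal.
roads = {
    "Arad": ["Sibiu", "Timisoara", "Zerind"],
    "Sibiu": ["Arad", "Fagaras", "Oradea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Timisoara": ["Arad"],
    "Zerind": ["Arad", "Oradea"],
    "Oradea": ["Zerind", "Sibiu"],
    "Bucharest": ["Fagaras"],
}

def bfs(start, goal):
    """Return one sequence of cities from start to goal, or None."""
    frontier = deque([[start]])    # paths still to be explored
    reached = {start}              # states already seen
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for city in roads.get(path[-1], []):
            if city not in reached:
                reached.add(city)
                frontier.append(path + [city])
    return None

print(bfs("Arad", "Bucharest"))   # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

All of this happens "mentally", on the model; only after a full path is found does the agent act in the real world (phase 4).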
HOW PROBLEM-SOLVING WORKS:
THE 4 PHASES
4. Execution
Once the agent finds a solution, it performs those actions in
the real world, one by one.
Example:
Actually drive from Arad → Sibiu → Fagaras → Bucharest.
TYPES OF SEARCH ALGORITHMS
1. Uninformed (Blind):
The agent has no estimate of how far it is from the goal; it explores blindly.
Examples: Breadth-First Search, Depth-First Search
2. Informed (Heuristic):
The agent has an estimate (a “hint”) of the distance to the goal and uses
it to search more intelligently.
Examples: A*, Greedy Best-First Search
VACUUM CLEANER WORLD
(SINGLE-STATE PROBLEM)
The Scenario
Imagine a vacuum cleaner robot that has to clean two rooms: A and
B.
Each room can either be Dirty or Clean.
The agent can:
• Move Left
• Move Right
• Clean Dirt
The robot also knows its location (A or B).
VACUUM CLEANER WORLD
(SINGLE-STATE PROBLEM)
Possible States
A state can be defined by:
i. The vacuum’s location (A or B), and
ii. The cleanliness of each room.
So, we have:
❖ Location: 2 possibilities (A or B)
❖ Dirt status: Each room can be Clean or Dirty → 2 × 2 = 4 possibilities
Total possible states = 2 × 4 = 8 states
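The 2 × 2 × 2 count can be verified by enumerating the states directly, a small sketch using the standard library:

```python
from itertools import product

# Enumerate every vacuum-world state: (location, status of A, status of B).
states = list(product(["A", "B"], ["Dirty", "Clean"], ["Dirty", "Clean"]))

print(len(states))    # 8, matching the count above
print(states[0])      # e.g. ('A', 'Dirty', 'Dirty')
```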
VACUUM CLEANER WORLD
(SINGLE-STATE PROBLEM)
State | Vacuum Location | Room A | Room B
------|-----------------|--------|-------
  1   | A               | Dirty  | Dirty
  2   | A               | Dirty  | Clean
  3   | A               | Clean  | Dirty
  4   | A               | Clean  | Clean
  5   | B               | Dirty  | Dirty
  6   | B               | Dirty  | Clean
  7   | B               | Clean  | Dirty
  8   | B               | Clean  | Clean
VACUUM CLEANER WORLD
(SINGLE-STATE PROBLEM)
Action | Result
-------|---------------------------
Clean  | Cleans the current room
Left   | Moves the vacuum to Room A
Right  | Moves the vacuum to Room B
The goal state is:
❑ Room A = Clean
❑ Room B = Clean
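The action table and goal test above can be written as two small functions. A minimal sketch, with a state represented as (location, status of A, status of B):

```python
# Transition model for the vacuum world.
def result(state, action):
    loc, a, b = state
    if action == "Clean":
        # Clean affects only the room the vacuum is currently in.
        return (loc, "Clean", b) if loc == "A" else (loc, a, "Clean")
    if action == "Left":
        return ("A", a, b)
    if action == "Right":
        return ("B", a, b)
    raise ValueError(f"unknown action: {action}")

def is_goal(state):
    """Goal: both rooms clean, regardless of the vacuum's location."""
    _, a, b = state
    return a == "Clean" and b == "Clean"

print(result(("A", "Dirty", "Dirty"), "Clean"))   # ('A', 'Clean', 'Dirty')
print(is_goal(("B", "Clean", "Clean")))           # True
```

Note that Left and Right are no-ops when the vacuum is already in that room, and the goal test ignores location, so both states 4 and 8 in the table are goal states.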
SINGLE-STATE PROBLEM
In a single-state problem, the agent always knows exactly which
state it is in.
That means:
❖ It knows whether it’s in Room A or B.
❖ It knows which rooms are dirty or clean.
❖ It knows the effect of its actions.
So the environment is:
✓ Fully observable (it can see dirt)
✓ Deterministic (actions have predictable results)
✓ Static (dirt doesn’t reappear)
✓ Discrete (finite rooms, actions)
EXAMPLE OF A SOLUTION PATH
IF THE INITIAL STATE IS:
(A, DIRTY, DIRTY)
Possible sequence:
Clean → (A, Clean, Dirty)
Right → (B, Clean, Dirty)
Clean → (B, Clean, Clean)
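The sequence above can be replayed mechanically with a small transition function, a sketch confirming that the plan reaches the goal:

```python
# Replay the solution sequence from (A, Dirty, Dirty).
def result(state, action):
    loc, a, b = state
    if action == "Clean":
        return (loc, "Clean", b) if loc == "A" else (loc, a, "Clean")
    return ("A", a, b) if action == "Left" else ("B", a, b)

state = ("A", "Dirty", "Dirty")
for action in ["Clean", "Right", "Clean"]:
    state = result(state, action)
    print(action, "→", state)

# Final state: ('B', 'Clean', 'Clean') -- the goal.
```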
TREE SEARCH STRUCTURE EXAMPLE
Start: (A, Dirty, Dirty)
├── Clean → (A, Clean, Dirty)
│       └── Right → (B, Clean, Dirty)
│               └── Clean → (B, Clean, Clean) → Goal
└── Right → (B, Dirty, Dirty)
        └── Clean → (B, Dirty, Clean)
TREE SEARCH STRUCTURE EXAMPLE
Initial state: (B, Dirty, Clean)
What is the solution?
TREE SEARCH STRUCTURE EXAMPLE
(B, Dirty, Clean)
├── Clean → (B, Dirty, Clean)   (no change: Room B is already clean)
├── Left  → (A, Dirty, Clean)
│       ├── Clean → (A, Clean, Clean) → Goal
│       └── Right → (B, Dirty, Clean)
└── Right → (B, Dirty, Clean)   (no change: already in Room B)
STATE SPACE GRAPH FOR THE VACUUM
PROBLEM
THE 8-PUZZLE PROBLEM
REAL-WORLD PROBLEMS
ROUTE-FINDING PROBLEM
Consider the airline travel problems that must be solved by a travel-planning website:
States: Each state obviously includes a location (e.g., an airport) and the current time.
Furthermore, because the cost of an action (a flight segment) may depend on previous
segments, their fare bases, and their status as domestic or international, the state must
record extra information about these “historical” aspects.
Initial state: The user’s home airport.
Actions: Take any flight from the current location, in any seat class, leaving after the
current time, leaving enough time for within-airport transfer if needed.
Transition model: The state resulting from taking a flight will have the flight’s destination
as the new location and the flight’s arrival time as the new time.
Goal state: A destination city. Sometimes the goal can be more complex, such as
“arrive at the destination on a nonstop flight.”
Action cost: A combination of monetary cost, waiting time, flight time, customs and
immigration procedures, seat quality, time of day, type of airplane, frequent-flyer reward
points, and so on.
REAL-WORLD PROBLEMS
ROBOTIC ASSEMBLY
States: The state represents the complete configuration of the
robotic assembly system, including the real-valued coordinates of
the robot’s joint angles and the positions/orientations of the parts
to be assembled.
Actions: Continuous motions of robot joints.
Goal test: Complete assembly.
Path cost: Time to execute
Other examples: TSP (NP-hard), robot navigation, protein folding
(unsolved).
SEARCHING FOR SOLUTION
❖ A solution is an action sequence; accordingly, search algorithms work by
considering various possible action sequences.
❖ The possible action sequences starting at the initial state form a search
tree with the initial state at the root; the branches are actions and the
nodes correspond to states in the state space of the problem.
❖ To consider taking various actions, we expand the current state, thereby
generating a new set of states.
❖ In this way, we add branches from the parent node to child nodes.
❖ A node with no children is called a leaf; the set of all leaf nodes available
for expansion at any given point is called the frontier.
❖ The process of expanding nodes on the frontier continues until either a
solution is found or there are no more states to expand.
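The expand-the-frontier loop described above can be sketched generically. The tiny state space here is hypothetical, chosen only to keep the example short:

```python
from collections import deque

# A toy state space: each state maps to the children generated by expanding it.
successors = {"S": ["A", "B"], "A": ["G"], "B": []}

def tree_search(start, goal):
    """Expand frontier nodes until a goal is found or none remain."""
    frontier = deque([(start, [start])])      # leaf nodes awaiting expansion
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path                       # solution: the action sequence
        for child in successors.get(state, []):   # expand: generate children
            frontier.append((child, path + [child]))
    return None                               # no more states to expand

print(tree_search("S", "G"))   # ['S', 'A', 'G']
```

Using a FIFO queue for the frontier gives breadth-first behaviour; swapping in a stack or a priority queue yields the other search strategies mentioned earlier.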
TREE SEARCH EXAMPLE
TREE SEARCH EXAMPLE
Three partial search trees for finding a route from Arad to Bucharest. Nodes that have been
expanded are lavender with bold letters; nodes on the frontier that have been generated but not yet
expanded are in green; the set of states corresponding to these two types of nodes are said to have
been reached. Nodes that could be generated next are shown in faint dashed lines. Notice in the
bottom tree there is a cycle from Arad to Sibiu to Arad; that can’t be an optimal path, so search
should not continue from there.
A SEQUENCE OF SEARCH TREES
GENERATED BY A GRAPH SEARCH ON
THE ROMANIA PROBLEM
❖ A sequence of search trees generated by a graph search on the Romania
problem. At each stage, we have expanded every node on the frontier,
extending every path with all applicable actions that don’t result in a
state that has already been reached.
❖ Notice that at the third stage, the topmost city (Oradea) has two
successors, both of which have already been reached by other paths, so
no paths are extended from Oradea.
A SEQUENCE OF SEARCH TREES GENERATED BY A
GRAPH SEARCH ON THE ROMANIA PROBLEM
THE SEPARATION PROPERTY OF GRAPH
SEARCH, ILLUSTRATED ON A RECTANGULAR-
GRID PROBLEM
The frontier (green) separates the interior (lavender) from the exterior (faint
dashed). The frontier is the set of nodes (and corresponding states) that have
been reached but not yet expanded; the interior is the set of nodes (and
corresponding states) that have been expanded; and the exterior is the set of
states that have not been reached. In (a), just the root has been expanded. In
(b), the top frontier node is expanded. In (c), the remaining successors of the
root are expanded in clockwise order.