AIML-Theory – Session 1: Agents and Environment

The document discusses the fundamentals of artificial intelligence, focusing on agents, environments, and problem-solving through searching. It defines key concepts such as agents, percepts, rational agents, and various types of agents, while also outlining the performance measures and characteristics of different environments. Additionally, it explores real-world applications of AI across various fields, including healthcare, finance, education, and transportation.

AIML-THEORY - SESSION 1

Agents, Environments, Applications and Solving Problems by Searching
Human Intelligence
 Ability to Solve Problems
 Do Logic and Reasoning
 Derive Inferences and Carry Out Resolutions
 Acquire and Apply Knowledge
 Handle Uncertainty
 Ability to Learn at a Macro Level
 Ability to Deep Learn
Agents & Environments
• An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon that
environment through actuators.
• This simple idea is illustrated in the following figure:
Some Typical Examples

 A human agent has eyes, ears, and other organs for sensors, and hands, legs, vocal tract, and so on for actuators.

 A robotic agent might have cameras and infrared range finders for sensors, and switches, motors, etc., for actuators.
Terminology
 Percept: Percept refers to the agent's perceptual
inputs at any given instant.
 Percept Sequence: An agent's percept sequence
is the complete history of everything the agent has
ever perceived.
 Agent Function: The agent’s behavior is
described by the agent function that maps any
given percept sequence to an action.
Example for Agent & Environment – Vacuum
Cleaner
 The Environment constitutes two locations: Squares A and B.
 The vacuum agent perceives which square it is in and whether
there is dirt in the square.
 It can choose to move left, move right, suck up the dirt, or do
nothing.
 The Agent implements a function such as “if the current
square is dirty, then suck; otherwise, move to the other
square”
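The rule above can be sketched as a short Python function. This is a minimal illustration, not part of the original slides; the percept format, a `(location, status)` pair, is an assumption.

```python
# A minimal sketch of the vacuum-cleaner reflex agent described above.
# The percept is assumed to be a (location, status) pair, e.g. ("A", "Dirty").

def vacuum_agent(percept):
    """Return an action for the current percept."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

print(vacuum_agent(("A", "Dirty")))  # Suck
print(vacuum_agent(("A", "Clean")))  # Right
print(vacuum_agent(("B", "Clean")))  # Left
```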
Formulating the agent function
 The agent function can be represented as a table.
 The table is constructed by listing all possible percept sequences together with the actions to be taken in response.
 The table is a kind of external characterization of the agent.
 The agent function is an abstract mathematical description; the agent program is a concrete implementation, running within some physical system.
Tabulating the Agent Function
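The tabulated agent function can be sketched as a lookup from percept sequences to actions. The table entries below are illustrative for the vacuum world, not the full table from the slides:

```python
# A sketch of a table-driven agent: the table maps percept sequences
# (tuples of percepts) to actions. Entries shown are illustrative.

table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("B", "Clean")): "Left",
}

def make_table_driven_agent(table):
    percepts = []          # the percept sequence seen so far
    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))
    return agent

agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))  # Suck
print(agent(("A", "Clean")))  # Right
```

This makes concrete why the table is only an external characterization: the table grows with every possible percept history, which is why real agent programs implement the function compactly instead.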
Rational Agent
 A rational agent does the right thing.
 Conceptually speaking, every entry in the table for the agent function is filled out correctly.
 Doing the right thing is better than doing the wrong thing, but what does it take to do the right thing?

What is rational at any given time depends on four things:

 The performance measure that defines the criterion of success.
 The agent's prior knowledge of the environment.
 The actions that the agent can perform.
 The agent's percept sequence to date.

Definition of a rational agent:

 For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
Types of agents
 Reflex agents
 Problem Solving Agents
 Planning Agents
 Interacting Agents
 Collaborating agents
 Hybrid agents
 Interfacing Agents
Reflexive Agents

 The simplest agents are reflex agents.

 These agents base their actions on a direct mapping from states to actions.

 Such agents cannot operate well in environments for which the mapping of states to actions is too large.

 Example: Vacuum cleaner
Problem Solving Agents

 Problem-solving agents are goal-based.

 Goal-based agents consider future actions and the desirability of their outcomes.

 Problem-solving agents use atomic representations: states of the world are considered as wholes, with no internal structure visible to the problem-solving algorithm.
Planning Agents

 Goal-based agents that use more advanced factored or structured representations are usually called planning agents.
Goal Based Agents
 Problem Solving Agents:
  Set goals
  Find sequences of actions
  Take decisions that lead to desirable states
The Nature of Environments

 To develop any agent, we must specify the performance measure, the environment, and the agent's actuators and sensors.
 All these are grouped under the heading of the task environment.
 An intelligent agent's behavior is specified through PEAS (Performance, Environment, Actuators, Sensors).
Example Agents
Performance Measure of an agent

 An agent, while positioned in an environment, generates a sequence of actions according to the percepts it receives.
 This sequence of actions causes the environment to go through a sequence of states.
 If the sequence of states is desirable, then the agent has performed well.
 The notion of desirability is captured by a performance measure that evaluates any given sequence of environment states.
PEAS description of a Taxi Driver Agent

Agent Type: Taxi driver
Performance Measure: Safe, fast, legal, comfortable trip; maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering, accelerator, brake, signal, horn, display
Sensors: Cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard
Types of Environments
 Fully observable vs. Partially observable
 Single agent vs. Multi agent
 Deterministic vs. Stochastic
 Episodic vs. Sequential
 Static vs. Dynamic
 Discrete vs. Continuous
 Known vs. Unknown
Properties of Familiar Environments
Applications of AI
1. Game Playing
2. Medical Diagnosis
3. Autonomous Control
4. Autonomous Planning and Scheduling
5. Expert Systems
6. Robotics
7. Natural Language Processing
8. Computer Vision
9. e-Commerce
Applications of AI in healthcare

 Diagnostics
  Medical Imaging: AI algorithms analyse X-rays, MRIs, and CT scans to detect cancer, fractures, and infections.
  Pathology: AI systems assist in examining tissue samples for accurate diagnosis.
 Personalized Medicine: AI analyses patient data to tailor treatments to individual genetic profiles and health histories.
 Robotic Surgery: Robots like the da Vinci Surgical System enhance precision in complex surgeries, reducing recovery times and improving outcomes.
 Drug Discovery: AI accelerates the discovery of new drugs by predicting molecular interactions and potential side effects.
Applications of AI in Finance

 Fraud Detection: AI systems monitor transactions in real time to identify suspicious activities and prevent fraud.
 Algorithmic Trading: AI algorithms analyze market data and execute trades at high speeds, optimizing investment strategies.
 Risk Management: AI assesses credit scores and loan eligibility, helping financial institutions manage risks.
 Customer Service: Chatbots and virtual assistants handle customer inquiries, providing efficient and accurate responses.
Applications of AI in Education

 Personalized Learning: AI systems adapt educational content to individual students' learning pace and style.
 Tutoring Systems: AI-powered tutors provide additional support and resources to students outside of the classroom.
 Administrative Tasks: AI automates grading, scheduling, and other administrative tasks, freeing up time for educators.
Applications of AI in Transportation

 Autonomous Vehicles: AI powers self-driving cars by processing data from sensors and cameras to navigate safely.
 Traffic Management: AI systems analyse traffic patterns to optimize signal timings and reduce congestion.
 Logistics and Supply Chain: AI optimizes routes for delivery trucks, manages inventory levels, and predicts demand.
Applications of AI in Smart Homes and Cities

 Home Automation: AI controls home devices such as lights, thermostats, and security systems for convenience and energy efficiency.
 Smart Grids: AI optimises energy distribution and usage in power grids, enhancing sustainability.
 Urban Planning: AI analyses data on traffic, pollution, and population to improve city planning and infrastructure.
Problems, Solutions and Problem-Solving
Components of Well-Defined Problem and solution
 Initial state the agent starts in
 Description of possible actions – a successor function that finds the next states
 Goal test – to test whether a state reached is a goal state
 Path cost (e.g. cost in time, or distance measured in kilometres)
  Total cost = cost of all the actions taken along the path
  Step cost c(x, a, y) = cost of taking action a to go from state x to state y
 A solution to the problem is a path from the initial state to a goal state
 An optimal solution is the lowest-cost path from the initial state to a goal state
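The components above can be sketched as a Python class. This is a minimal sketch; the method names (`actions`, `result`, `goal_test`, `step_cost`) are illustrative conventions, not from the slides:

```python
# A sketch of the well-defined-problem components as a Python class.

class Problem:
    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state
        self.goal_state = goal_state

    def actions(self, state):
        """Possible actions in `state` (the successor function)."""
        raise NotImplementedError

    def result(self, state, action):
        """The state reached by doing `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """Is this state a goal state?"""
        return state == self.goal_state

    def step_cost(self, state, action, next_state):
        """c(x, a, y): cost of taking action a from state x to state y."""
        return 1

def path_cost(problem, states, actions):
    """Total cost = sum of the step costs along the path."""
    return sum(problem.step_cost(s, a, s2)
               for s, a, s2 in zip(states, actions, states[1:]))
```

A concrete problem would subclass `Problem` and fill in `actions` and `result`; the search algorithms later in the deck only need this interface.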
Problem States
 An Agent will be in different states while
solving the problem
 Being in a Location/node is a kind of state
 If the Location/node is the Goal then the
agent is said to be in Goal State.
 There can be many Goal states for a given
problem
8 Puzzle Game
 States: A state description specifies the location of each of the eight tiles and the
blank in one of the nine squares.
 Initial state: Any state can be designated as the initial state. Note that any given
goal can be reached from exactly half of the possible initial states
 Actions: The simplest formulation defines the actions as movements of the blank
space Left, Right, Up, or Down. Different subsets of these are possible depending
on where the blank is.
 Transition model: Given a state and an action, the resulting state is known; for example, if we apply Left to the start state in Figure 3.4, the resulting state has the 5 and the blank switched.
 Goal test: This checks whether the state matches the goal configuration shown in
Figure 3.4. (Other goal configurations are possible.)
 Path cost: Each step costs 1, so the path cost is the number of steps in the path.
8 Puzzle Game
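The 8-puzzle formulation above can be sketched directly in Python. States are 9-tuples read row by row with 0 for the blank; the goal configuration chosen here is an assumption (the slides refer to Figure 3.4, which may use a different one):

```python
# A sketch of the 8-puzzle formulation: states are 9-tuples (0 marks the
# blank), actions move the blank, and each step costs 1.

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # an assumed goal configuration

MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

def actions(state):
    """Legal blank moves depend on where the blank is."""
    b = state.index(0)
    acts = []
    if b >= 3:      acts.append("Up")     # not in the top row
    if b <= 5:      acts.append("Down")   # not in the bottom row
    if b % 3 != 0:  acts.append("Left")   # not in the left column
    if b % 3 != 2:  acts.append("Right")  # not in the right column
    return acts

def result(state, action):
    """Transition model: swap the blank with the neighbouring tile."""
    b = state.index(0)
    t = b + MOVES[action]
    s = list(state)
    s[b], s[t] = s[t], s[b]
    return tuple(s)

def goal_test(state):
    return state == GOAL

start = (1, 0, 2, 3, 4, 5, 6, 7, 8)
print(actions(start))          # ['Down', 'Left', 'Right']
print(result(start, "Left"))   # (0, 1, 2, 3, 4, 5, 6, 7, 8)
```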
Route finding Problem
 To find the best route to move from a city called "Arad" to "Bucharest"

 Goal – To reach "Bucharest" from "Arad"
 Performance measure – Least cost
 Different kinds of actions must be taken to reach "Bucharest"
 The actions that will not help in reaching "Bucharest" can be ignored, so that the decision space gets reduced
 Goals help organise behaviour by limiting the objectives that an agent is trying to achieve
Vacuum world problem
 States: The state is determined by both the agent location and the dirt location. The agent is in one of two locations, each of which might or might not contain dirt. Thus, there are 2 × 2² = 8 possible world states. A larger environment with n locations has n · 2ⁿ states.

 Initial state: Any state can be designated as the initial state.

 Actions: In this simple environment, each state has just three actions: Left, Right, and Suck. Larger environments might also include Up and Down.

 Transition model: The actions have their expected effects, except that moving Left in the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no effect.

 Goal test: This checks whether all the squares are clean.
 Path cost: Each step costs 1, so the path cost is the number of steps in the path.
8 Queen Problem
 States: Any arrangement of 0 to 8 queens on
the board is a state.
 Initial state: No queens on the board.
 Actions: Add a queen to any empty square.
 Transition model: Returns the board with a
queen added to the specified square.
 Goal test: 8 queens are on the board, none attacking another.
8 Queen Problem
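The incremental 8-queens formulation above can be sketched in Python. As an assumption beyond the slides, this sketch uses the common refinement where a queen is only added to a square that is not attacked (rather than any empty square), which keeps the state a tuple of one column per filled row:

```python
# A sketch of the incremental 8-queens formulation: a state is a tuple of
# column positions, one per filled row; each action adds one queen.

N = 8

def attacks(state, col):
    """Would a queen placed in the next row, column `col`, be attacked?"""
    row = len(state)
    return any(c == col or abs(c - col) == abs(r - row)
               for r, c in enumerate(state))

def actions(state):
    """Columns of the next row where a queen can be added unattacked."""
    return [c for c in range(N) if not attacks(state, c)]

def result(state, col):
    return state + (col,)

def goal_test(state):
    return len(state) == N    # 8 queens placed, none attacking another

print(actions(()))     # initially all 8 columns are available
print(actions((0,)))   # columns 0 and 1 are now attacked
```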
Real-world Problems
 Route Finding
 Touring
 Travelling Salesperson
 VLSI Layout
 Robot Navigation
 Automatic Assembly Sequencing
 Software Robot – Internet Searching
The Traveling Salesperson Problem (TSP)
(Real-world Problem)
 TSP is a touring problem in which each city must be
visited exactly once.
 The aim is to find the shortest tour.
 An enormous amount of effort has been expended to
improve the capabilities of TSP algorithms.
 In addition to planning trips for travelling salespersons,
these algorithms have been used for tasks such as
planning movements of automatic circuit-board drills
and of stocking machines on shop floors.
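The "shortest tour" objective can be made concrete with a brute-force sketch on a toy instance. The distance matrix below is entirely illustrative; real TSP algorithms avoid enumerating all tours, which is feasible only for a handful of cities:

```python
# A brute-force TSP sketch: try every tour and keep the shortest.
from itertools import permutations

dist = {                     # illustrative symmetric distances
    ("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
    ("B", "C"): 6, ("B", "D"): 4, ("C", "D"): 8,
}

def d(a, b):
    return dist.get((a, b)) or dist[(b, a)]

def tour_length(tour):
    """Length of the closed tour visiting each city exactly once."""
    return sum(d(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

cities = ["A", "B", "C", "D"]
# Fix the first city to avoid counting rotations of the same tour twice.
best = min((("A",) + p for p in permutations(cities[1:])), key=tour_length)
print(best, tour_length(best))   # ('A', 'B', 'D', 'C') 23
```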
Robot Navigation (Real-world problem)
 Robot navigation is a generalization of the route-finding problem described earlier.
 Rather than following a discrete set of routes, a robot can move in a
continuous space with (in principle) an infinite set of possible actions
and states.
 The space for a circular robot moving on a flat surface is essentially
two-dimensional. When the robot has arms, legs, or wheels that must
also be controlled, the search space becomes many-dimensional.
 Advanced techniques are required to make the search space finite.
Problem Solving Steps
 Goal formulation
  Based on the current situation and the agent's performance measures
 Problem formulation
  What actions and states to consider, given a goal
 Search and find a solution
  Examine the different possible sequences of actions that lead to states of known value, and then choose the best sequence
 Execute the best solution
Problem Formulation
 Using a Grid
 Using a Maze
 Using a Graph
 Using a Tree
Problem Formulation through a Grid
A Sample Grid
Problem Formulation through a Maze
A Sample MAZE (figure: a maze labelled with a Lion and an Elephant)
Problem Formulation through a Graph
Problem Formulation through a Tree

Arad
 Sibiu: Arad, Oradea, Rimnicu Vilcea, Fagaras
 Timisoara: Arad, Lugoj
 Zerind: Arad, Oradea

Partial search trees for finding a route from Arad to Bucharest. Nodes that have been expanded are shaded; nodes that have been generated but not yet expanded are outlined in bold; nodes that have not yet been generated are shown in faint dashed lines.
Search Trees
 B Trees
 B+ Trees
 AVL Trees
Explicit Search Trees

 A tree generated from the initial state and a successor function – both together define the state space

Search Algorithm
 Takes a problem as input and returns a solution in the form of a sequence of actions
 Once the solution is found, the related actions are performed (execution)
SEARCHING FOR SOLUTIONS

 Search Tree
The possible action sequences starting at the initial state form a search tree with the initial state at the root; the branches are actions and the nodes correspond to states in the state space of the problem.

 Solutions
A solution is an action sequence, so search algorithms work by considering various action sequences. This can be implemented using a search tree.
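The idea of exploring action sequences can be sketched with a minimal breadth-first search. The toy state space (integers reachable by +1 or ×2) is an illustration invented here, not from the slides:

```python
# A minimal breadth-first search sketch: the frontier holds (state, path)
# pairs, and expanding a state extends the action sequence that reached it.
from collections import deque

def breadth_first_search(initial, goal_test, successors):
    """Return a list of actions leading from `initial` to a goal state."""
    frontier = deque([(initial, [])])     # (state, actions so far)
    explored = set()
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        explored.add(state)
        for action, next_state in successors(state):
            if next_state not in explored:
                frontier.append((next_state, path + [action]))
    return None

# Toy state space: move between numbers by +1 or *2.
succ = lambda n: [("+1", n + 1), ("*2", n * 2)]
print(breadth_first_search(1, lambda n: n == 10, succ))  # a 4-step sequence
```

Because the frontier is a FIFO queue, the first goal popped lies on a shortest action sequence, which is exactly the completeness/optimality behaviour discussed later for BFS with unit step costs.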
Distinction between node and state
 A node is a bookkeeping data structure used to represent a search tree
 A state corresponds to a configuration of the world
 Nodes lie on paths in the tree, whereas states do not
 A state can be reached through two different nodes

Nodes are the data structures from which the search tree is constructed. Each has a parent, a state, and various bookkeeping fields. Arrows point from child to parent.
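The node structure just described can be sketched as a small class. The field names and the Arad/Sibiu example values are illustrative:

```python
# A sketch of the node bookkeeping structure: each node stores its state,
# its parent, the action that produced it, and the path cost so far.

class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state
        self.parent = parent        # the arrow points from child to parent
        self.action = action
        self.path_cost = path_cost

    def path(self):
        """Follow parent pointers back to the root to recover the path."""
        node, states = self, []
        while node:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))

root = Node("Arad")
child = Node("Sibiu", parent=root, action="go-Sibiu", path_cost=140)
print(child.path())   # ['Arad', 'Sibiu']
```

Note how two different `Node` objects could hold the same `state` value: the node records *how* a state was reached, which is the distinction the slide is making.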
Representing Nodes

Essence of state-space search
 Following up one option now and putting the others aside for later, in case the chosen option does not lead to a solution
 There are a finite number of states in a route-finding problem (20) – one for each city
 There are an infinite number of paths in the state space
 There can be loops in the paths, which need to be avoided
 A tree node represents a search path, which means there can be an infinite number of nodes
Process for searching
1. Start: set the initial state
2. If the current state is the goal state, stop
3. Otherwise, expand the state using the successor function and find new states
4. Check which is the best state
5. Select the new state and repeat from step 2
Measuring problem-solving performance
• Completeness: Is the algorithm guaranteed to find a solution when there is one?
• Optimality: Does the strategy find the optimal solution?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to perform the search?
Search Strategies
The choice of which state to expand is called the search strategy.

Search Strategies
 Uninformed Search Strategies
  – BFS
  – DFS
  – Depth Limited Search
  – Iterative Deepening Search
  – Bidirectional Search

Search Strategies
 Informed Search Strategies (Heuristic Search Strategies)
  – Best First Search
  – Greedy Best First Search
  – A* Search
  – Memory-Bounded Heuristic Search
  – Recursive Best First Search
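Of the informed strategies listed, A* is the most prominent; a compact sketch is given below. The toy state space and the heuristic `h` are assumptions made for illustration (any admissible `h` works; here each step of cost 1 closes a gap of at most 2):

```python
# A compact A* sketch: f(n) = g(n) + h(n), always expanding the frontier
# entry with the lowest f value.
import heapq

def astar(start, goal_test, successors, h):
    """successors(state) yields (action, next_state, step_cost) triples."""
    frontier = [(h(start), 0, start, [])]    # (f, g, state, actions)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        for action, nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):   # keep the cheapest route
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [action]))
    return None, float("inf")

# Toy example: states are integers, the goal is 7, h is an admissible
# estimate of the steps remaining (each step advances by at most 2).
succ = lambda n: [("+1", n + 1, 1), ("+2", n + 2, 1)]
path, cost = astar(0, lambda n: n == 7, succ, h=lambda n: max(0, (7 - n) / 2))
print(path, cost)   # a cheapest 4-step path
```

With a consistent heuristic like this one, the first time a goal is popped its cost is optimal, which is the standard A* guarantee.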
Search Strategies
 Adversarial Search Strategies
  – MIN-MAX
  – Alpha-Beta Pruning
  – Iterative Deepening (BFS+DFS)
  – Monte Carlo Tree Search (MCTS)
Algorithms for Local Search
 Hill Climbing
 Simulated Annealing
 Local Beam Search
 Genetic Algorithms
 Local Search in continuous space
 Online Search Agents
Constraint Satisfaction Problems and solution strategies
 Forward Checking
 Backtracking
 Local Search + CSP
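Backtracking and forward checking can be sketched together on a toy map-colouring CSP. The three-region instance and variable names are illustrative assumptions, not from the slides:

```python
# Backtracking search with forward checking for a small CSP: variables,
# domains, and a binary "neighbouring regions must differ" constraint.

def backtrack(assignment, domains, neighbors):
    if len(assignment) == len(domains):
        return assignment                 # every variable is assigned
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        if all(assignment.get(n) != value for n in neighbors[var]):
            # Forward checking: prune `value` from unassigned neighbours.
            pruned, ok = [], True
            for n in neighbors[var]:
                if n not in assignment and value in domains[n]:
                    domains[n].remove(value)
                    pruned.append(n)
                    if not domains[n]:
                        ok = False        # a neighbour has no value left
            if ok:
                result = backtrack({**assignment, var: value}, domains, neighbors)
                if result:
                    return result
            for n in pruned:              # undo the pruning on backtrack
                domains[n].append(value)
    return None

domains = {"WA": ["R", "G", "B"], "NT": ["R", "G", "B"], "SA": ["R", "G", "B"]}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
solution = backtrack({}, domains, neighbors)
print(solution)   # e.g. {'WA': 'R', 'NT': 'G', 'SA': 'B'}
```

Forward checking prunes a neighbour's domain as soon as a value is chosen, so dead ends are detected one step earlier than with plain backtracking.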
