Instructor: Dr Syed Musharaf Ali
Artificial Intelligence
CS-451
Artificial Intelligence Approach
Agents and Environments:
 The AI approach is built around the concept of an intelligent agent.
 An agent is anything that can perceive its environment through
sensors and act upon that environment through effectors.
 The function that maps an agent's sensor inputs to effector actions
is called the control policy of the agent.
Agents and Environments:
[Diagram: the agent perceives the environment through sensors and
acts on it through effectors; this is the perception-action cycle.]
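The perception-action cycle can be sketched as a simple loop (a minimal illustration; the `Agent` class and the percept/action names here are hypothetical):

```python
class Agent:
    """A minimal agent: its control policy maps percepts to actions."""

    def act(self, percept):
        # Control policy: here a trivial rule that tags the percept.
        return f"act-on-{percept}"


def perception_action_cycle(agent, percepts):
    """Run the agent on a fixed sequence of percepts and collect its actions."""
    actions = []
    for percept in percepts:          # sensors deliver a percept
        action = agent.act(percept)   # the control policy chooses an action
        actions.append(action)        # effectors would carry out the action
    return actions


print(perception_action_cycle(Agent(), ["p1", "p2"]))  # ['act-on-p1', 'act-on-p2']
```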
Artificial Intelligence Approach
An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon that environment
through effectors.
A rational agent is one that does the right action. The right action is
the one that will cause the agent to be the most successful.
A performance measure is the criterion that determines how successful
an agent is, e.g. % accuracy achieved, amount of work done, energy
consumed, time taken in seconds, etc.
Intelligent Agent
An ideal rational agent:
For each possible percept sequence, an ideal rational agent should
do whatever action is expected to maximize its performance
measure, on the basis of the evidence (information) provided by the
percept sequence and whatever built-in knowledge the agent has.
Intelligent Agent
Mapping:
An agent's behaviour depends only on its percept sequence.
We can describe any particular agent by making a table of the action
it takes in response to each possible percept sequence.
Intelligent Agent
Percept Sequence → Action
P1 → A1
P2 → A2
P3 → A3
Mapping from percept sequences to actions
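The mapping table can be implemented directly as a lookup from percept sequences to actions (a sketch; the percept and action names are placeholders):

```python
# Table-driven agent: a dictionary maps each percept sequence
# (stored as a tuple) to an action.
TABLE = {
    ("P1",): "A1",
    ("P1", "P2"): "A2",
    ("P1", "P2", "P3"): "A3",
}


def table_driven_agent(percept_history, percept):
    """Append the new percept, then look the whole sequence up in the table."""
    percept_history.append(percept)
    return TABLE.get(tuple(percept_history))


history = []
print(table_driven_agent(history, "P1"))  # A1
print(table_driven_agent(history, "P2"))  # A2
```

Note that the table grows with every possible percept sequence, which is why explicit tables are only feasible for tiny percept spaces.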
Autonomy:
A system is autonomous if its behaviour is determined by its own
experience and learning.
If the agent's actions are based completely on built-in knowledge,
then we say that the agent lacks autonomy.
It would be reasonable to provide an artificial agent with some built-
in knowledge as well as the ability to learn.
A truly autonomous intelligent agent should be able to operate
successfully in a wide variety of environments, given sufficient time
to adapt.
Intelligent Agent
Agent program: A function that implements the agent mapping from
percepts to actions.
The computing device on which the program runs is called the
architecture.
The architecture makes the percepts from the sensors available to
the program, runs the program, and feeds the program's actions to
the effectors.
Agent = architecture + program
Agent Program
When designing an agent program, we need to keep in mind:
 Possible percepts and actions
 Performance measures (goals)
 What sort of environment it will operate in
Agent Program
Agent type: Taxi driver (self-driving car)
Percepts: cameras, speedometer, GPS, sonar, microphone, etc.
Actions: steer, accelerate, brake, talk to passenger
Goals: safe, fast, legal, comfortable trip; maximize profits
Environments: roads, traffic, pedestrians, customers
The environment provides percepts/information to the agent, and the
agent performs actions on the environment.
Environments
Fully observable vs. partially observable:
If an agent's sensors give it access to the complete state of the
environment at each point in time, then we say that the environment
is fully observable/accessible to that agent, e.g.
Game of chess (fully observable)
Poker, self-driving car (partially observable)
Properties of Environments
Deterministic vs. nondeterministic:
If the next state of the environment is completely determined by the
current state and the actions selected by the agent, then we say the
environment is deterministic, e.g.
Chess (deterministic)
Dice, poker, taxi driving (nondeterministic)
If the environment is fully observable and deterministic, then the
agent need not worry about uncertainty.
Properties of Environments
Episodic vs. non episodic:
The agent's experience is divided into atomic "episodes" (each
episode consists of the agent perceiving and then performing a single
action), and the quality of the action in each episode depends only on
the episode itself.
Subsequent episodes do not depend on what actions occurred in
previous episodes, e.g.
Part-picking robot (episodic)
Chess, poker, taxi driving (non-episodic)
Properties of Environments
Static vs. dynamic:
If the environment can change while an agent is deliberating, then we
say the environment is dynamic for that agent; otherwise it is static,
e.g.
Chess (static)
Taxi driving (dynamic)
Properties of Environments
Discrete vs. continuous:
If there is a limited number of distinct, clearly defined percepts and
actions, we say that the environment is discrete, e.g.
Chess (discrete)… there is a fixed number of possible moves on each
turn
Driving (continuous)
Properties of Environments
 Different environment types require different agent programs to
deal with them effectively.
 An environment that is partially observable, non-episodic, dynamic,
and continuous is the hardest for an artificial intelligent agent to
deal with.
Properties of Environments
The types of agents are:
1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
5. Learning agents
Agent Program
Simple Reflex Agents
They choose actions based only on the current percept.
They are rational only if a correct decision can be made on the basis
of the current percept alone.
Their environment must be completely observable.
Condition-Action Rule − a rule that maps a state (condition) to
an action.
Simple Reflex Agents
This agent function only succeeds when the environment is fully
observable, e.g.
If the car in front is braking, then initiate braking.
If your hand is in fire, then pull it away.
If there is a rock, then pick it up (Mars lander).
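The condition-action rules above can be sketched as a rule list scanned against the current percept only (a minimal illustration; the percept keys and action names are invented):

```python
# Simple reflex agent: each rule maps a condition on the *current*
# percept to an action. No history or internal state is kept.
RULES = [
    (lambda p: p.get("car_in_front_braking"), "initiate-braking"),
    (lambda p: p.get("hand_in_fire"), "pull-away-hand"),
    (lambda p: p.get("rock_seen"), "pick-up-rock"),
]


def simple_reflex_agent(percept):
    """Return the action of the first rule whose condition matches the percept."""
    for condition, action in RULES:
        if condition(percept):
            return action
    return "no-op"  # no rule fired


print(simple_reflex_agent({"car_in_front_braking": True}))  # initiate-braking
```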
Model-based Reflex Agents
Model-based reflex agents are made to deal with partial
accessibility.
They use a model of the world to choose their actions, and they
maintain an internal state.
Model − the knowledge about how things happen in the world.
Internal state − a representation of unobserved aspects of the
current state, based on the percept history.
Updating the state requires information about:
1. How the world evolves.
2. How the agent's actions affect the world.
Model-based Reflex Agents
e.g. This time, our Mars lander, after picking up its first sample,
stores this in its internal state of the world around it, so when it
comes across a second identical sample it passes it by and saves
space for other samples.
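The Mars lander example can be sketched with an explicit internal state (a hypothetical illustration; the sample names and the capacity limit are invented):

```python
class MarsLander:
    """Model-based reflex agent: the internal state records which
    sample types have already been collected."""

    def __init__(self, capacity=3):
        self.collected = set()    # internal state: sample types seen so far
        self.capacity = capacity  # limited storage space

    def act(self, percept):
        sample = percept.get("sample")
        # The internal state lets the agent skip duplicates, which the
        # current percept alone could not reveal.
        if sample and sample not in self.collected and len(self.collected) < self.capacity:
            self.collected.add(sample)
            return "pick-up"
        return "pass-by"


lander = MarsLander()
print(lander.act({"sample": "basalt"}))  # pick-up
print(lander.act({"sample": "basalt"}))  # pass-by (already collected)
```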
Goal-based Reflex Agents
Goal-based agents further expand on the capabilities of model-based
agents by using "goal" information.
Goal information describes situations that are desirable.
This gives the agent a way to choose among multiple
possibilities/actions, selecting the one that reaches a goal state.
Search and planning are the subfields of artificial intelligence
devoted to finding action sequences that achieve the agent's goals.
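As a sketch of search in a goal-based agent, breadth-first search can find an action sequence that reaches a goal state (the state graph here is invented for illustration):

```python
from collections import deque

# A tiny hypothetical state graph: states are locations, and the agent's
# actions are the moves between them.
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}


def search_plan(start, goal):
    """Breadth-first search for the shortest state sequence from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path          # first path found by BFS is a shortest one
        for nxt in GRAPH[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                  # goal unreachable


print(search_plan("A", "D"))  # ['A', 'B', 'D']
```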
Goal-based Reflex Agents
Utility-based Reflex Agents
Just having goals isn't good enough, because often we may have
several actions which all satisfy our goal, so we need some way of
working out the most efficient one.
A utility function maps each state after each action to a real number
representing how efficiently that action achieves the goal.
This is useful when we have many actions that all achieve the same
goal and must choose among them.
Utility-based Reflex Agents
e.g. a shortest-path-finding algorithm.
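A minimal sketch of the utility idea applied to path finding (the routes and states are hypothetical): among several routes that all reach the goal, the utility function, here negative path length, picks the most efficient one.

```python
# Three routes all satisfy the goal of reaching "goal"; utility ranks them.
paths_to_goal = {
    "route-A": ["s", "x", "y", "goal"],  # 3 steps
    "route-B": ["s", "goal"],            # 1 step
    "route-C": ["s", "x", "goal"],       # 2 steps
}


def utility(path):
    """Shorter paths are more efficient, so utility is the negative step count."""
    return -(len(path) - 1)


best = max(paths_to_goal, key=lambda route: utility(paths_to_goal[route]))
print(best)  # route-B
```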
Learning Agents
A learning agent can be divided into four conceptual components:
The learning element is responsible for making improvements.
The performance element selects actions. The performance element is what we have
previously considered to be the entire agent: it takes in percepts and decides on
actions.
The learning element uses feedback from the critic on how the agent is doing
and determines how the performance element should be modified to do better in
the future.
The problem generator is responsible for suggesting actions that will lead to new and
informative experiences.
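The four components can be combined in one toy agent (an illustrative sketch, not a standard implementation; the threshold rule, the critic's target, and the probe values are all invented):

```python
import itertools


class LearningAgent:
    """Sketch of the four conceptual components of a learning agent."""

    def __init__(self):
        self.threshold = 0.5  # parameter tuned by the learning element
        self._probe = itertools.cycle([0.2, 0.6, 0.9])  # exploratory percepts

    def performance_element(self, percept):
        # Selects actions: this is what earlier slides treated as the entire agent.
        return "act" if percept > self.threshold else "wait"

    def critic(self, percept, action):
        # Feedback on how well the agent did (here: it should act iff percept > 0.7).
        return 1.0 if (action == "act") == (percept > 0.7) else -1.0

    def learning_element(self, feedback):
        # Uses the critic's feedback to modify the performance element.
        if feedback < 0:
            self.threshold += 0.05

    def problem_generator(self):
        # Suggests new experiences that are informative for learning.
        return next(self._probe)


agent = LearningAgent()
for _ in range(30):
    p = agent.problem_generator()
    agent.learning_element(agent.critic(p, agent.performance_element(p)))
print(round(agent.threshold, 2))  # converges to 0.6
```

After a few corrections the threshold settles between 0.6 and 0.7, so the performance element matches the critic's standard on every probe.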


Editor's Notes

  • #2: Effector: something that responds to a signal/stimulus/information
  • #4: Rational: based on reason and logic