Artificial Intelligence
CMSC471/671
Section 0101
Tuesday/Thursday 5:30 – 6:45
Math-Psychology 103
Instructor: Professor Yun Peng
ECS Building Room 221
(410)455-3816
ypeng@cs.umbc.edu
Chapter 1: Introduction
• Can machines think?
• And if so, how?
• And if not, why not?
• And what does this say about human beings?
• And what does this say about the mind?
What is artificial intelligence?
• There is no clear consensus on the definition of AI
• Here’s one from John McCarthy, who coined the term AI in 1956 (see
http://www-formal.stanford.edu/jmc/whatisai/):
Q. What is artificial intelligence?
A. It is the science and engineering of making intelligent
machines, especially intelligent computer programs. It
is related to the similar task of using computers to
understand human intelligence, but AI does not have to
confine itself to methods that are biologically
observable.
Q. Yes, but what is intelligence?
A. Intelligence is the computational part of the ability to
achieve goals in the world. Varying kinds and degrees of
intelligence occur in people, many animals and some machines.
Other possible AI definitions
• AI is a collection of hard problems which can be
solved by humans and other living things, but for
which we don’t have good algorithms
– e.g., understanding spoken natural language,
medical diagnosis, circuit design, learning, self-
adaptation, reasoning, chess playing, proving
math theorems, etc.
• Definition from the R & N book: a program that
– Acts like a human (Turing test)
– Thinks like a human (human-like patterns of
thinking steps)
– Acts or thinks rationally (logically, correctly)
• Some problems that used to be thought of as AI are
now considered not to be
– e.g., compiling Fortran in 1955, symbolic
mathematics in 1965, pattern recognition in 1970
What’s easy and what’s hard?
• It’s been easier to mechanize many of the high-level
cognitive tasks we usually associate with “intelligence” in
people
– e.g., symbolic integration, proving theorems, playing
chess, some aspects of medical diagnosis, etc.
• It’s been very hard to mechanize tasks that animals can
do easily
– walking around without running into things
– catching prey and avoiding predators
– interpreting complex sensory information (visual,
aural, …)
– modeling the internal states of other animals from
their behavior
– working as a team (ants, bees)
• Is there a fundamental difference between the two
categories?
History of AI
• AI has roots in a number of scientific disciplines
– computer science and engineering (hardware and
software)
– philosophy (rules of reasoning)
– mathematics (logic, algorithms, optimization)
– cognitive science and psychology (modeling high level
human/animal thinking)
– neuroscience (modeling low-level human/animal brain
activity)
– linguistics
• The birth of AI (1943 – 1956)
– McCulloch and Pitts (1943): a simplified mathematical
model of neurons (resting/firing states) can realize all
propositional logic primitives (and can compute all
Turing-computable functions)
– Alan Turing: Turing machine and Turing test (1950)
– Claude Shannon: information theory; possibility of
chess-playing programs (1950)
• Early enthusiasm (1952 – 1969)
– 1956 Dartmouth conference
John McCarthy (Lisp);
Marvin Minsky (first neural network machine);
Allen Newell and Herbert Simon (GPS);
– Emphasis on intelligent general problem solving:
GPS (means-ends analysis);
Lisp (AI programming language);
Resolution by John Robinson (basis for automatic
theorem proving);
heuristic search (A*, AO*, game tree search)
• Emphasis on knowledge (1966 – 1974)
– domain-specific knowledge is the key to overcoming
existing difficulties
– knowledge representation (KR) paradigms
– declarative vs. procedural representation
• Knowledge-based systems (1969 – 1999)
– DENDRAL: the first knowledge-intensive system
(determining 3D structures of complex chemical
compounds)
– MYCIN: first rule-based expert system (containing 450
rules for diagnosing infectious blood diseases)
EMYCIN: an ES shell
– PROSPECTOR: first knowledge-based system that made
significant profit (geological ES for mineral deposits)
• AI became an industry (1980 – 1989)
– wide applications in various domains
– commercially available tools
• Current trends (1990 – present)
– more realistic goals
– more practical (application oriented)
– distributed AI and intelligent software agents
– resurgence of neural networks and emergence of
genetic algorithms
Chapter 2: Intelligent Agents
• Definition: An intelligent agent perceives its environment via
sensors and acts rationally upon that environment with its
effectors.
• Hence, an agent gets percepts one at a time and maps
this percept sequence to actions.
• Properties
–Autonomous
–Interacts with other agents plus the environment
–Reactive to the environment
–Pro-active (goal-directed)
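To make this percept-to-action mapping concrete, here is a minimal Python sketch of the agent interface. The class and method names are illustrative choices, not from the slides:

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Maps the percept sequence received so far to an action."""

    def __init__(self):
        self.percepts = []                  # percept sequence so far

    def step(self, percept):
        """Receive one percept, then let the agent program pick an action."""
        self.percepts.append(percept)
        return self.program(self.percepts)

    @abstractmethod
    def program(self, percepts):
        """The agent program: percept sequence -> action."""
```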
Rationality
• An ideal rational agent should, for each possible percept
sequence, take whatever action maximizes its
performance measure, based on
(1) the percept sequence, and
(2) its built-in and acquired knowledge.
• Hence it includes information gathering, not "rational
ignorance."
• Rationality => Need a performance measure to say how
well a task has been achieved.
• Types of performance measures: payoffs, false alarm and
false dismissal rates, speed, resources required, effect on
environment, etc.
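As a sketch, this definition of rationality amounts to picking, for the current percept sequence, an action with maximal expected performance. Everything below is illustrative; in particular, expected_performance is a placeholder that a designer would have to derive from the agent's built-in and acquired knowledge:

```python
def rational_action(actions, percepts, knowledge, expected_performance):
    """Return an action maximizing the expected performance measure,
    given the percept sequence so far and the agent's knowledge."""
    return max(actions,
               key=lambda a: expected_performance(a, percepts, knowledge))
```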
Autonomy
• A system is autonomous to the extent that its own
behavior is determined by its own experience and
knowledge.
• To survive, agents must have:
–Enough built-in knowledge to survive.
–Ability to learn.
Some Agent Types
• Table-driven agents
– use a percept sequence/action table in memory to find the next action.
They are implemented by a (large) lookup table (see the sketch after this list).
• Simple reflex agents
– are based on condition-action rules and implemented with an
appropriate production (rule-based) system. They are stateless devices
which do not have memory of past world states.
• Agents with memory
– have internal state which is used to keep track of past states of the
world.
• Agents with goals
– are agents which, in addition to state information, have goal
information that describes desirable situations. Agents of this kind
take future events into consideration.
• Utility-based agents
– base their decisions on classical axiomatic utility theory in order to act
rationally.
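As promised above, here is a minimal table-driven agent in Python. The toy vacuum-world table and all names are assumptions made for illustration:

```python
class TableDrivenAgent:
    """Next action is a lookup of the entire percept sequence in a table."""

    def __init__(self, table):
        self.table = table        # maps percept-sequence tuples to actions
        self.percepts = []

    def step(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts))

# Hypothetical two-percept table; real tables are astronomically large.
table = {("dirty",): "suck", ("clean",): "move", ("clean", "dirty"): "suck"}
agent = TableDrivenAgent(table)
print(agent.step("clean"))    # -> 'move'
print(agent.step("dirty"))    # -> 'suck'
```

The exploding table size is exactly the problem the next slide raises.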
Simple Reflex Agent
• Table lookup of percept-action pairs defining all
possible condition-action rules necessary to interact in
an environment
• Problems
– Too big to generate and to store (Chess has about 10^120
states, for example)
– No knowledge of non-perceptual parts of the current state
– Not adaptive to changes in the environment; requires the entire
table to be updated if changes occur
• Use condition-action rules to summarize portions of
the table
A Simple Reflex Agent: Schema
[Diagram: within the Environment, the Agent's Sensors report "what the world is like now"; Condition-action rules select "what action I should do now"; the Effectors carry it out.]
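A minimal sketch of this schema in Python, assuming hypothetical vacuum-world percepts and rules (none of these names come from the slides):

```python
class SimpleReflexAgent:
    """Stateless: the chosen action depends only on the current percept."""

    def __init__(self, rules):
        self.rules = rules                  # (condition, action) pairs

    def step(self, percept):
        for condition, action in self.rules:
            if condition(percept):          # first matching rule fires
                return action
        return None                         # no applicable rule

# Hypothetical condition-action rules for a vacuum world:
rules = [(lambda p: p["status"] == "dirty", "suck"),
         (lambda p: p["location"] == "A", "right"),
         (lambda p: p["location"] == "B", "left")]
agent = SimpleReflexAgent(rules)
print(agent.step({"location": "A", "status": "dirty"}))   # -> 'suck'
```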
Reflex Agent with Internal State
• Encode "internal state" of the world to remember the past
as contained in earlier percepts
• Needed because sensors do not usually give the entire
state of the world at each input, so perception of the
environment is captured over time. "State" used to
encode different "world states" that generate the same
immediate percept.
• Requires ability to represent change in the world; one
possibility is to represent just the latest state, but then
can't reason about hypothetical courses of action
Goal-Based Agent
• Choose actions so as to achieve a (given or computed) goal.
• A goal is a description of a desirable situation
• Keeping track of the current state is often not enough -- need
to add goals to decide which situations are good
• Deliberative instead of reactive
• May have to consider long sequences of possible actions
before deciding if goal is achieved -- involves consideration of
the future, “what will happen if I do...?”
Agents with Explicit Goals
[Diagram: Sensors report "what the world is like now"; the internal State, together with models of "how the world evolves" and "what my actions do", predicts "what it will be like if I do action A"; the Goals then determine "what action I should do now", carried out by the Effectors in the Environment.]
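A minimal goal-based agent sketch with one-step lookahead; predict and goal_test are assumed models supplied by the designer (real goal-based agents may have to search over long action sequences, as the previous slide notes):

```python
class GoalBasedAgent:
    """Simulates each action with a world model and picks one whose
    predicted outcome satisfies the goal (one-step lookahead only)."""

    def __init__(self, actions, predict, goal_test):
        self.actions = actions        # available actions
        self.predict = predict        # (state, action) -> predicted state
        self.goal_test = goal_test    # state -> bool: is this desirable?

    def step(self, state):
        for action in self.actions:
            if self.goal_test(self.predict(state, action)):
                return action
        return None                   # no single action reaches the goal
```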
Utility-Based Agent
• When there are multiple possible alternatives, how do we decide
which one is best?
• A goal specifies only a crude distinction between a happy and
an unhappy state, but we often need a more general performance
measure that describes the "degree of happiness"
• Utility function U: States --> Reals indicating a measure of
success or happiness when at a given state
• Allows decisions comparing choice between conflicting goals,
and choice between likelihood of success and importance of
goal (if achievement is uncertain)
A Complete Utility-Based Agent
[Diagram: Sensors report "what the world is like now"; State and models of "how the world evolves" and "what my actions do" predict "what it will be like if I do action A"; the Utility function rates "how happy I will be in such a state", which fixes "what action I should do now", executed by the Effectors in the Environment.]
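The corresponding sketch for a utility-based agent differs from the goal-based one only in replacing the binary goal test with a real-valued utility function U: States --> Reals (again one-step lookahead, with assumed helper names):

```python
class UtilityBasedAgent:
    """Scores each predicted next state with a utility function and
    picks the action leading to the highest-utility state."""

    def __init__(self, actions, predict, utility):
        self.actions = actions        # available actions
        self.predict = predict        # (state, action) -> predicted state
        self.utility = utility        # state -> degree of "happiness"

    def step(self, state):
        return max(self.actions,
                   key=lambda a: self.utility(self.predict(state, a)))
```

Unlike a goal test, the utility function lets the agent trade off conflicting goals and weigh likelihood of success against a goal's importance.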
Properties of Environments
• Accessible/Inaccessible.
– If an agent's sensors give it access to the complete state of the
environment needed to choose an action, the environment is
accessible.
– Such environments are convenient, since the agent is freed from the
task of keeping track of the changes in the environment.
• Deterministic/Nondeterministic.
– An environment is deterministic if the next state of the environment
is completely determined by the current state of the environment and
the action of the agent.
– In an accessible and deterministic environment the agent need not
deal with uncertainty.
• Episodic/Nonepisodic.
– An episodic environment means that subsequent episodes do not
depend on what actions occurred in previous episodes.
– Such environments do not require the agent to plan ahead.
Properties of Environments
• Static/Dynamic.
– An environment which does not change while the agent is thinking is
static.
– In a static environment the agent need not worry about the passage of
time while it is thinking, nor does it have to observe the world while
it is thinking.
– In static environments the time it takes to compute a good strategy does
not matter.
• Discrete/Continuous.
– If the number of distinct percepts and actions is limited the environment
is discrete, otherwise it is continuous.
• With/Without rational adversaries.
– If an environment does not contain other rationally thinking, adversarial
agents, the agent need not worry about strategic, game-theoretic aspects
of the environment
– Most engineering environments are without rational adversaries,
whereas most social and economic systems get their complexity from
the interactions of (more or less) rational agents.
– As an example of a game with a rational adversary, consider the Prisoner's
Dilemma
The Prisoner's Dilemma
• The two players in the game can choose between two moves, either
"cooperate" or "defect".
• Each player gains when both cooperate, but if only one of them
cooperates, the other one, who defects, will gain more.
• If both defect, both lose (or gain very little), but not as much as the
"cheated" cooperator whose cooperation is not returned.
• If both decision-makers were purely rational, they would never
cooperate. Indeed, rational decision-making means that you make
the decision which is best for you whatever the other actor chooses.
              Opponent cooperates   Opponent defects
Cooperate              5                 -10
Defect                10                   0
(rows: this player's move; entries: this player's payoff)
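A small sketch that checks the table above makes the argument explicit: whichever move the opponent picks, "defect" earns the row player more, so it dominates. The payoffs are taken from the table; the function name is illustrative:

```python
# payoff[(my_move, opponent_move)] = my gain, from the table above
payoff = {("C", "C"): 5,  ("C", "D"): -10,
          ("D", "C"): 10, ("D", "D"): 0}

def best_response(opponent_move):
    """My payoff-maximizing move, given the opponent's fixed move."""
    return max(("C", "D"), key=lambda m: payoff[(m, opponent_move)])

print(best_response("C"))   # -> 'D' (10 > 5)
print(best_response("D"))   # -> 'D' (0 > -10)
# Defect dominates, so two purely rational players never cooperate.
```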
Summary
• An agent perceives and acts in an environment, has an architecture
and is implemented by an agent program.
• An ideal agent always chooses the action which maximizes its
expected performance, given the percept sequence received so far.
• An autonomous agent uses its own experience rather than relying
only on built-in knowledge of the environment supplied by the designer.
• An agent program maps from percept to action & updates its
internal state.
– Reflex agents respond immediately to percepts.
– Goal-based agents act in order to achieve their goal(s).
– Utility-based agents maximize their own utility function.
• Representing knowledge is important for successful agent design.
• Some environments are more difficult for agents than others. The
most challenging environments are inaccessible, nondeterministic,
nonepisodic, dynamic, and continuous.
