Principles of Artificial Intelligence

Introduction
Prerequisites
• Comfortable programming in a language such as C (or C++) or Java.
• Some knowledge of algorithmic concepts, such as running times of algorithms, and a rough idea of what NP-hard means.
• Some familiarity with probability (we will go over this from the beginning, but we will cover the basics only briefly).
• Not scared of mathematics; able to do simple mathematical proofs.
What is artificial intelligence?
• Artificial: Made as a copy of something natural.
• Intelligence: The ability to gain and apply knowledge and
skills.
• The replication of human intellectual processes by
machines, most notably computer systems, is referred to
as artificial intelligence (AI for short).
• Expert systems, natural language processing, speech
recognition, and machine vision are all examples of
specific uses of artificial intelligence.
• AI is further divided into two categories:
• Strong AI
• Weak AI
• Weak AI programs cannot be called “intelligent” because
they cannot really think.
What is artificial intelligence?
• Strong AI: The computer is not merely a tool in the study
of the mind; rather, the appropriately programmed
computer really is a mind.
• Weak AI: Some “thinking-like” features can be added to computers to make them more useful tools.
• Popular conception driven by science fiction
• Robots good at everything except emotions, empathy,
appreciation of art, culture, …
• Current AI is also bad at lots of simpler stuff!
• There is a lot of AI work on thinking about what other agents
are thinking
Strong AI vs. Weak AI: Parameters of Comparison
• Intelligence abilities: Strong AI has a mind of its own and exhibits human cognitive abilities; weak AI has no mind of its own and no provision for human cognitive abilities.
• Type: Strong AI is the future of artificial intelligence; weak AI is its present form.
• Goal: Strong AI aims to develop innovative approaches for every task or problem; weak AI aims to solve problems and accomplish particular tasks at a faster pace.
• Application: Strong AI is still not applied; weak AI powers voice-based personal assistance models like Siri or Alexa.
• Maximum level: The maximum level of strong AI is to match true human intelligence; the maximum level of weak AI is to provide solutions within predefined responses.
Real AI
• A serious science.
• Real AI is capable of making judgments in a wide range of
situations and identifying the most cost-effective
strategies for achieving a certain goal.
• General-purpose AI like the robots of science fiction is
incredibly hard
• Human brain appears to have lots of special and general
functions, integrated in some amazing way that we really do not
understand at all (yet)
• Special-purpose AI is more achievable (nontrivial)
• E.g., chess/poker playing programs, logistics planning,
automated translation, voice recognition, web search, data
mining, medical diagnosis, keeping a car on the road, etc.
Early History of AI
• The origins of AI can be seen in Turing’s work on intelligent machines, published in 1950.
• Early AI programs could draw logical conclusions, prove some theorems, and create simple plans; there was also some initial work on neural networks.
History of AI…
• 70s, 80s: Creation of expert systems (systems specialized
for one particular task based on experts’ knowledge), wide
industry adoption
• Led to overhyping: researchers promised funding agencies
spectacular progress, but started running into difficulties:
• Ambiguity: highly funded translation programs (Russian to
English) were good at syntactic manipulation but bad at
disambiguation.
• Scalability/complexity: early examples were very small,
programs could not scale to bigger instances
• Limitations of representations used
Definitions of AI
• In order to design intelligent systems, it is important to categorize them into four categories (Luger and Stubberfield 1993; Russell and Norvig 2003):
• Cognitive Science approach: systems that think like humans.
• Laws of Thought approach: systems that think rationally.
• Turing Test approach: systems that act like humans.
• Rational Agent approach: systems that act rationally.
• Focusing on action avoids philosophical issues such as “is the system conscious?”; and if our system can be more rational than humans in some cases, why not?
• We will follow the “act rationally” approach.
Turing Test
• Alan Turing proposed a simple method of determining whether a
machine can demonstrate human intelligence.
• In this test, Turing proposed that a computer can be said to be intelligent if it can mimic human responses under specific conditions.
• Consider: Player A is a computer, Player B is a human, and Player C is an interrogator. The interrogator is aware that one of them is a machine, but needs to identify which on the basis of questions and their responses.
• The conversation between all players is via keyboard and screen, so the result does not depend on the machine's ability to render words as speech.
• The test result does not depend on each answer being correct, but only on how closely the responses resemble a human's answers.
• The computer is permitted to do everything possible to force a wrong
identification by the interrogator.
• The questions and answers can be like:
• Interrogator: Are you a computer?
• Player A (Computer): No
• Interrogator: Multiply two large numbers such as
(256896489*456725896)
• Player A: pauses for a long time and gives a wrong answer (to seem human).
• In this game, if the interrogator cannot identify which player is a machine and which is human, the computer passes the test successfully, and the machine is said to be intelligent and able to think like a human.
“Chinese room”
argument [Searle 1980]
• The Chinese room argument holds that a program cannot give a computer a “mind”, “understanding”, or “consciousness”, regardless of how intelligently or human-like the program may make the computer behave.
• Searle imagines himself as a person who knows no Chinese inside a room, while a Chinese speaker outside tries to converse. He is given a list of Chinese characters and an instruction book detailing how to build character strings, but not their meanings. He has an English version of the computer program, paper, pencils, erasers, and file cabinets.
• The person in the “Chinese room” receives questions from outside the room, posted through a slot in the door and written in Chinese, and consults the library of books to formulate an answer.
• In 1980, John Searle presented the “Chinese Room” thought experiment in his paper “Minds, Brains, and Programs,” arguing against the validity of the Turing Test. According to his argument, “Programming a computer may make it appear to understand a language, but it will not produce real understanding of language or consciousness in a computer.”
Lessons from AI research
• Clearly-defined tasks that we think require intelligence and
education from humans tend to be doable for AI techniques.
• Playing chess, drawing logical inferences from clearly-stated facts,
performing probability calculations in well-defined environments.
• However, scalability can be a significant issue.
• Complex, messy, ambiguous tasks that come naturally to humans (and in some cases other animals) are much harder.
• Recognizing your grandmother in a crowd, drawing the right
conclusion from an ungrammatical or ambiguous sentence, driving
around the city.
• Humans are better at coming up with reasonably good solutions in complex environments.
• Humans are better at adapting, self-evaluation, and creativity (“My usual strategy for chess is getting me into trouble against this person… Why? What else can I do?”).
Modern AI
• More rigorous, scientific, formal/mathematical
• Divided into many subareas interested in particular
aspects
• More directly connected to “neighboring” disciplines
• Theoretical computer science, statistics, economics, operations
research, biology, psychology/neuroscience, …
• This often leads to the question “Is this really AI?”
• Some senior AI researchers are calling for re-integration of
all these topics, return to more grandiose goals of AI
• Somewhat risky proposition for graduate students and junior
faculty…
Topics
• Search
• Constraint satisfaction problems
• Game playing
• Logic, knowledge representation
• Planning
• Probability, decision theory, game theory, reasoning under uncertainty
• Machine learning, reinforcement learning
Agents
• An agent is anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting (a minimal code sketch of this cycle appears below). An agent can be:
• Human agent: A human agent has eyes, ears, and other organs which work as sensors, and hands, legs, and the vocal tract which work as actuators.
• Robotic agent: A robotic agent can have cameras, infrared range finders, and other sensors, and various motors as actuators.
• Software agent: A software agent can receive keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.
• We should first know about sensors, effectors, and actuators.
• Sensor: A sensor is a device which detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.
• Actuators: Actuators are the components of machines that convert energy into motion. Actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
• Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and display screens.
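
A minimal sketch of the perceive-think-act cycle in Python (illustrative only: the environment object and its percept() and apply() methods are assumptions, not any standard API):

class Agent:
    def program(self, percept):
        """Map the current percept to an action; concrete agents override this."""
        raise NotImplementedError

def run(agent, environment, steps=10):
    # environment is a hypothetical object exposing percept() and apply()
    for _ in range(steps):
        percept = environment.percept()   # sense through sensors
        action = agent.program(percept)   # think
        environment.apply(action)         # act through actuators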
Rational Agents
• A rational agent is an agent which has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions (a minimal sketch of this idea follows the list below).
• A rational agent is said to do the right thing. AI is about creating rational agents, drawing on game theory and decision theory for various real-world scenarios.
• For an AI agent, rational action is most important: in reinforcement learning, an agent receives a positive reward for each best possible action and a negative reward for each wrong action.
• True maximization of goals requires omniscience and unlimited
computational abilities.
• Limited rationality involves maximizing goals within the
computational and other resources available.
What is rational at any given time depends on:
• The performance measure that defines degree of success.
• Everything that the agent has perceived so far (percept sequence).
• What the agent knows about the environment.
• The actions that the agent can perform.
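
As a hedged sketch, “acting rationally” can be read as picking the action with the highest expected performance given the percept sequence so far. The expected_performance function below is a stand-in assumption, not a standard API:

def rational_action(actions, percept_history, expected_performance):
    # Choose the action that maximizes expected performance,
    # given everything the agent has perceived so far.
    return max(actions, key=lambda a: expected_performance(a, percept_history))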
Vacuum Cleaning Agent
• The vacuum cleaner agent has a location sensor and a dirt sensor
so that it knows where it is (room A or room B) and whether the
room is dirty. It can go left, go right, suck, and idle.
• A possible performance measure is to maximize the number of
clean rooms over a certain period.
• Agent function = {([A: Clean], Right), ([A: Dirty], Suck), ([B: Clean], Left), ([B: Dirty], Suck), ([A: Clean, B: Clean], Idle), ([A: Clean, B: Dirty], Suck), …} (shown as code below)
• Percept sequence: [A: Clean]
• Action: Right
• Performance Measure: desirable actions performed by the agent
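
A minimal sketch of this agent function as a Python lookup table (action names follow the slide's list of actions):

# Tabulated agent function for the two-room vacuum world.
AGENT_FUNCTION = {
    ("A", "Clean"): "Right",
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
    ("B", "Dirty"): "Suck",
}

def vacuum_agent(location, status):
    return AGENT_FUNCTION[(location, status)]

print(vacuum_agent("A", "Clean"))  # -> Right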
Performance Measure
• The amount of dirt cleaned
• The amount of time taken
• The amount of electricity consumed
• Level of noise generated
PEAS
• The PEAS system is used to group together agents that share similar characteristics.
• A PEAS description specifies the performance measure of a particular agent, together with the agent's environment, actuators, and sensors.
• P: Performance measure
• E: Environment
• A: Actuators
• S: Sensors
• Here, the performance measure is the objective for the success of an agent's behavior.
• Performance measure: The unit used to define the success of an agent. Performance varies between agents based on their different percepts.
• Environment: The surroundings of an agent at every instant. It keeps changing with time if the agent is set in motion.
• Actuator: The part of the agent that delivers the output of an action to the environment.
• Sensor: The receptive parts of an agent which take in the input for the agent.
Agent: Hospital Management System
  Performance measure: Patient's health, admission process, payment
  Environment: Hospital, doctors, patients
  Actuators: Prescription, diagnosis, scan report
  Sensors: Symptoms, patient's responses

Agent: Automated Car Drive
  Performance measure: Comfortable trip, safety, maximum distance
  Environment: Roads, traffic, vehicles
  Actuators: Steering wheel, accelerator, brake, mirror
  Sensors: Camera, GPS, odometer

Agent: Subject Tutoring
  Performance measure: Maximize scores, improvement in students
  Environment: Classroom, desk, chair, board, staff, students
  Actuators: Smart displays, corrections
  Sensors: Eyes, ears, notebooks

Agent: Part-picking robot
  Performance measure: Percentage of parts in correct bins
  Environment: Conveyor belt with parts; bins
  Actuators: Jointed arms and hand
  Sensors: Camera, joint angle sensors

Agent: Satellite image analysis system
  Performance measure: Correct image categorization
  Environment: Downlink from orbiting satellite
  Actuators: Display of scene categorization
  Sensors: Color pixel arrays
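
One minimal way to record a PEAS description in code (purely illustrative; the field names and values mirror the table above):

from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

automated_car = PEAS(
    performance=["comfortable trip", "safety", "maximum distance"],
    environment=["roads", "traffic", "vehicles"],
    actuators=["steering wheel", "accelerator", "brake", "mirror"],
    sensors=["camera", "GPS", "odometer"],
)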
Task Environment
• A task environment refers to the choices, actions, and outcomes available to an agent for a given task.
• In a fully observable environment all of the environment
relevant to the action being considered is observable.
• In deterministic environments, the next state of the
environment is completely described by the current state
and the agent’s action.
• If an element of interference or uncertainty occurs then
the environment is stochastic.
Properties of Task Environment
• An environment in artificial intelligence is the surrounding
of the agent.
• The agent takes input from the environment through
sensors and delivers the output to the environment
through actuators.
• There are several types of environments:
• Fully Observable vs Partially Observable
• Deterministic vs Stochastic
• Competitive vs Collaborative
• Single-agent vs Multi-agent
• Static vs Dynamic
• Discrete vs Continuous
• Episodic vs Sequential
• Known vs Unknown
Fully Observable vs Partially Observable
• When an agent's sensors can sense or access the complete state of the environment at each point in time, the environment is said to be fully observable; otherwise it is partially observable.
• Maintaining a fully observable environment is easy, as there is no need to keep track of the history of the surroundings.
• An environment is called unobservable when the agent has no sensors at all.
•Examples:
• Chess – the board is fully observable, and so are the opponent’s
moves.
• Driving – the environment is partially observable because
what’s around the corner is not known.
Deterministic vs Stochastic
• When the agent's current state and chosen action completely determine the next state of the environment, the environment is said to be deterministic.
• A stochastic environment is random in nature: the next state is not unique and cannot be completely determined by the agent.
•Examples:
• Chess – there are only a limited number of possible moves for a piece in the current state, and these moves can be determined.
• Self-driving cars – the actions of a self-driving car are not unique; they vary from time to time.
Episodic vs Sequential
• In an episodic task environment, the agent's experience is divided into atomic incidents or episodes, and there is no dependency between current and previous incidents. In each incident, the agent receives input from the environment and then performs the corresponding action.
• Example: Consider a pick-and-place robot used to detect defective parts on conveyor belts. The robot (agent) makes each decision based only on the current part, i.e., there is no dependency between current and previous decisions.
• In a sequential environment, previous decisions can affect all future decisions. The agent's next action depends on what actions it has taken previously and what actions it is supposed to take in the future.
• Example:
• Checkers- Where the previous move can affect all the
following moves.
Dynamic vs Static
• An environment that keeps changing while the agent is carrying out some action is said to be dynamic.
•A roller coaster ride is dynamic as it is set in motion and
the environment keeps changing every instant.
•An idle environment with no change in its state is called a
static environment.
•An empty house is static as there’s no change in the
surroundings when an agent enters.
Single-agent vs Multi-agent
•An environment consisting of only one agent is said to be a
single-agent environment.
•A person left alone in a maze is an example of the single-
agent system.
•An environment involving more than one agent is a multi-
agent environment.
•The game of football is multi-agent as it involves 11
players in each team.
Discrete vs Continuous
• If an environment consists of a finite number of actions that can be performed in it to obtain the output, it is said to be a discrete environment.
•The game of chess is discrete as it has only a finite number
of moves. The number of moves might vary with every
game, but still, it’s finite.
• An environment in which the possible actions cannot be counted, i.e., is not discrete, is said to be continuous.
• Self-driving cars are an example of continuous environments, as their actions (steering angles, speeds, etc.) cannot be counted.
Examples of Task Environments
Agent’s Classification
1. Simple Reflex Agents
2. Model based Reflex Agents
3. Goal based agents
4. Utility based agents
5. Learning Agent
Simple Reflex Agent

• This agent decides what actions to take based not on previous percepts but on the agent's present perception of the world.
• For instance, if a Mars lander discovered a rock in a particular location that it needed to collect, it would pick up that rock. However, a simple reflex agent would still pick up that rock even if it found the same rock in a different location, because it does not take into account the fact that it has already collected that rock.
• This is useful when a quick, automated response is needed.
• Condition-Action rule
• if {set of percepts} then {set of actions}
• if it is raining then put up umbrella
• These agents are simple to work with but have very limited intelligence, such as picking up the same rock sample twice.
• Infinite loops are often unavoidable
• If the agent can randomize its actions, it may be possible
to escape from infinite loops.
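
A hedged sketch of condition-action rules in Python (the rule table and the randomized fallback are illustrative assumptions):

import random

# Condition-action rules: if {percept} then {action}.
RULES = {
    "raining": "put up umbrella",
    "sunny": "do nothing",  # hypothetical extra rule for illustration
}

def simple_reflex_agent(percept):
    # Act only on the current percept; no memory of past percepts.
    action = RULES.get(percept)
    if action is None:
        # Randomizing actions may help the agent escape infinite loops.
        action = random.choice(list(RULES.values()))
    return action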
Model Based Reflex Agent

• Model-based reflex agents deal with limited accessibility by keeping track of what they have seen so far.
• The agent keeps an internal state, which depends on what it has seen before, to store information about unseen aspects of the present state.
• This lets it handle partially observable environments.
• Example: This time our Mars lander, after picking up its first sample, stores this in its internal state of the world around it; when it comes across the same sample again, it passes it by and saves space for other samples.
• Updating this internal state requires two kinds of knowledge:
• 1. How the world evolves on its own. E.g., while our Mars lander picks up a rock, the rest of the world continues as normal.
• 2. How the agent's own actions affect the world. E.g., our Mars lander may smash a rock if it takes a sample from under a dangerous ledge.
• If you remove a supporting rock from a ledge, the ledge will fall. Such facts are called models, hence the name model-based agent.
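
A minimal sketch of the Mars-lander example: the internal state is the set of rocks already collected, so duplicates are passed by (rock identifiers are hypothetical):

class ModelBasedLander:
    def __init__(self):
        self.collected = set()  # internal state: rocks collected so far

    def program(self, rock_id):
        if rock_id in self.collected:
            return "pass by"    # already have this sample
        self.collected.add(rock_id)
        return "pick up"

lander = ModelBasedLander()
print(lander.program("rock-1"))  # -> pick up
print(lander.program("rock-1"))  # -> pass by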
Goal Based Agent

• These agents base their decisions on how far they are from their goal (a description of desirable situations).
• Every action aims to get closer to the goal.
• This lets the agent pick from several options to reach a desired state.
• A simple example would be the shopping list;
• Our goal is to pick up everything on that list.
• This makes it easier to decide when you need to choose between milk and orange juice because you can only afford one.
• As milk is on our shopping list and the orange juice is not, we choose the milk.
• These agents' explicit, modifiable knowledge of the goal makes them more versatile.
• They require search and planning to look ahead.
• A goal-based agent's behavior can be changed easily by changing its goal.
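
The shopping-list example as a minimal Python sketch (item names are illustrative):

def goal_based_choice(options, goals):
    # Prefer the option that is on the goal list.
    for option in options:
        if option in goals:
            return option
    return None

goals = {"milk", "bread", "eggs"}
print(goal_based_choice(["orange juice", "milk"], goals))  # -> milk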
Utility Based Agent

• Utility-based agents act as building blocks and are used to choose amongst numerous options.
• They pick actions based on a preference over states (their utility). Achieving a goal isn't always enough:
• we may seek a faster, safer, or cheaper trip.
• Utility indicates an agent's "happiness" with a state.
• Because of uncertainty in the world, a utility-based agent maximizes expected utility.
• A utility function maps a state to a number describing the associated degree of happiness.
• For example, consider our Mars lander on the surface of Mars with an obstacle in its way.
• A goal-based agent could take any path that reaches the goal, but a utility-based agent scores each path with the utility function and chooses the path with the best score.
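
A hedged sketch of that choice in Python (the candidate paths and the utility function are made-up illustrations):

def utility(path):
    # Hypothetical scoring: shorter and safer paths score higher.
    return -path["length"] - 10 * path["risk"]

paths = [
    {"name": "around the rock", "length": 12, "risk": 0.1},
    {"name": "over the rock", "length": 5, "risk": 0.9},
]
best = max(paths, key=utility)
print(best["name"])  # -> around the rock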
Learning Based Agent

• Learning agents can learn from their prior experiences.
• A learning agent starts with basic knowledge and then learns to adapt automatically.
• A learning agent has four main conceptual components:
1. Learning element: responsible for making improvements by learning from the environment.
2. Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
3. Performance element: responsible for selecting external actions.
4. Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
• The learning element is responsible for improvements; it can make changes to any of the knowledge components in the agent. One way of learning is to observe pairs of successive states in the percept sequence; from this, the agent can learn how the world evolves. For utility-based agents, an external performance standard is needed to tell the critic whether the agent's action has a good or a bad effect on the world.
• The performance element is responsible for selecting external actions; it corresponds to the entire agent designs discussed previously.
• The learning agent gains feedback from the critic on how well it is doing and determines how the performance element should be modified, if at all, to improve the agent.
• For example, when you were in school you would take a test and it would be marked: the test is the critic. The teacher would mark the test, see what could be improved, and instruct you on how to do better next time: the teacher is the learning element and you are the performance element.
• The last component is the problem generator. The performance element only suggests actions that it can already do, so we need a way of getting the agent to experience new situations; this is what the problem generator is for. In this way the agent keeps on learning.
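
How the four components might fit together, as a rough Python sketch (all component objects and their methods are illustrative stand-ins, not a standard design):

class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element
        self.learning_element = learning_element
        self.critic = critic
        self.problem_generator = problem_generator

    def step(self, percept):
        feedback = self.critic.evaluate(percept)  # how well is the agent doing?
        self.learning_element.improve(feedback, self.performance_element)
        action = self.performance_element.select(percept)  # choose external action
        # The problem generator may substitute an exploratory action.
        return self.problem_generator.maybe_explore(action)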
