Introduction
Prerequisites
• Comfortable programming in a language such as C (or C++) or Java.
• Some knowledge of algorithmic concepts, such as running times of algorithms; a rough idea of what NP-hard means.
• Some familiarity with probability (we will go over this from the beginning
but we will cover the basics only briefly).
• Not scared of mathematics, able to do simple mathematical proofs.
What is artificial intelligence?
• Artificial: Made as a copy of something natural.
• Intelligence: The ability to gain and apply knowledge and
skills.
• Artificial intelligence (AI for short) is the replication of human intellectual processes by machines, most notably computer systems.
• Expert systems, natural language processing, speech
recognition, and machine vision are all examples of
specific uses of artificial intelligence.
• AI is further divided into two categories:
• Strong AI
• Weak AI
• Weak AI programs cannot be called “intelligent” because
they cannot really think.
What is artificial intelligence?
• Strong AI: The computer is not merely a tool in the study
of the mind; rather, the appropriately programmed
computer really is a mind.
• Weak AI: Some “thinking-like” features can be added to computers to make them more useful tools.
• Popular conception driven by science fiction
• Robots good at everything except emotions, empathy,
appreciation of art, culture, …
• Current AI is also bad at lots of simpler stuff!
• There is a lot of AI work on thinking about what other agents
are thinking
Strong AI vs Weak AI: parameters of comparison
• Application:
  • Strong AI: still not applied in practice.
  • Weak AI: voice-based personal assistance models like Siri or Alexa.
Approaches to building AI systems
• Systems that think like humans.
• Systems that think rationally.
• Turing Test Approach: focuses on action, which avoids philosophical issues such as “is the system conscious?”, etc.
• Rational Agent Approach.
Examples of agents and their task environments
• Automated Car Drive
  • Performance measure: the comfortable trip, safety, maximum distance
  • Environment: roads, traffic, vehicles
  • Actuators: steering wheel, accelerator, brake, mirror
  • Sensors: camera, GPS, odometer
• Part-picking robot
  • Performance measure: percentage of parts in correct bins
  • Environment: conveyor belt with parts; bins
  • Actuators: jointed arms and hand
  • Sensors: camera, joint angle sensors
• Satellite image analysis system
  • Performance measure: correct image categorization
  • Environment: downlink from orbiting satellite
  • Actuators: display categorization of scene
  • Sensors: color pixel arrays
Task Environment
• A task environment refers to the choices, actions and outcomes a given agent has for a given task.
• In a fully observable environment, all of the environment relevant to the action being considered is observable.
• In deterministic environments, the next state of the
environment is completely described by the current state
and the agent’s action.
• If an element of interference or uncertainty occurs then
the environment is stochastic.
Properties of Task Environment
• An environment in artificial intelligence is the surrounding
of the agent.
• The agent takes input from the environment through
sensors and delivers the output to the environment
through actuators.
• There are several types of environments (a short code sketch after this list illustrates these properties and the sensor/actuator cycle):
• Fully Observable vs Partially Observable
• Deterministic vs Stochastic
• Competitive vs Collaborative
• Single-agent vs Multi-agent
• Static vs Dynamic
• Discrete vs Continuous
• Episodic vs Sequential
• Known vs Unknown
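As a rough illustration of this slide, the sketch below models the sensor, agent-program, actuator cycle and the property checklist above. It is a minimal sketch under my own assumptions: the names EnvironmentProperties and run_agent, and the settings chosen for chess and a self-driving car, are illustrative and not part of the course material.

```python
from dataclasses import dataclass

# Illustrative checklist of the environment properties listed above
# (field names are assumptions, not course-defined terminology).
@dataclass
class EnvironmentProperties:
    fully_observable: bool
    deterministic: bool
    competitive: bool
    multi_agent: bool
    dynamic: bool
    discrete: bool
    episodic: bool
    known: bool

# Chess, as characterised later in these slides: fully observable,
# deterministic, competitive, multi-agent, static, discrete, sequential.
chess = EnvironmentProperties(
    fully_observable=True, deterministic=True, competitive=True,
    multi_agent=True, dynamic=False, discrete=True,
    episodic=False, known=True,
)

# A self-driving car: partially observable, stochastic, dynamic, continuous.
driving = EnvironmentProperties(
    fully_observable=False, deterministic=False, competitive=False,
    multi_agent=True, dynamic=True, discrete=False,
    episodic=False, known=True,
)

# The generic cycle from this slide: sensors read the environment,
# the agent program picks an action, actuators deliver it back.
def run_agent(agent_program, sense, act, steps=3):
    for _ in range(steps):
        percept = sense()
        action = agent_program(percept)
        act(action)

# Tiny demonstration with stand-in sense/act functions.
run_agent(agent_program=lambda percept: "noop",
          sense=lambda: "some percept",
          act=lambda action: print("acting:", action))
```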
Fully Observable vs Partially Observable
•When the agent’s sensors can sense or access the complete state of the environment at each point in time, the environment is said to be fully observable; otherwise it is partially observable.
•A fully observable environment is easy to deal with, as there is no need to keep track of the history of the surroundings; the sketch after the examples below contrasts the two cases in code.
•An environment is called unobservable when the agent has no sensors at all.
•Examples:
• Chess – the board is fully observable, and so are the opponent’s
moves.
• Driving – the environment is partially observable because
what’s around the corner is not known.
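To make the contrast concrete, here is a toy, vacuum-world-style corridor. The scenario and all names are my own illustrative assumptions, not from the slides: the fully observable agent sees the whole corridor in one percept, while the partially observable agent sees only the cell it stands on and must remember where it has been.

```python
corridor = ["clean", "dirty", "clean", "dirty"]   # the true world state

# Fully observable: the percept IS the whole corridor, so the agent can
# pick the nearest dirty cell directly, with no memory needed.
def fully_observable_agent(percept):
    for i, cell in enumerate(percept):
        if cell == "dirty":
            return f"go to {i} and clean"
    return "idle"

# Partially observable: the percept is only the cell the agent stands on,
# so it must remember which cells it has already inspected.
class PartiallyObservableAgent:
    def __init__(self, length):
        self.visited = [False] * length
        self.position = 0

    def act(self, local_percept):
        self.visited[self.position] = True
        if local_percept == "dirty":
            return "clean"
        # Head toward some cell it has not inspected yet.
        for i, seen in enumerate(self.visited):
            if not seen:
                return "move right" if i > self.position else "move left"
        return "idle"

print(fully_observable_agent(corridor))            # go to 1 and clean
agent = PartiallyObservableAgent(len(corridor))
print(agent.act(corridor[agent.position]))         # move right
```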
Deterministic vs Stochastic
•When the agent’s current state and chosen action completely determine the next state of the environment, the environment is said to be deterministic.
•A stochastic environment is random in nature: the next state is not unique and cannot be completely determined by the agent.
•Examples (a small code sketch follows):
 • Chess – there are only a few possible moves for a piece in the current state, and the outcome of each move is fully determined.
 • Self-driving cars – the outcomes of a self-driving car’s actions are not unique; they vary from time to time.
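A toy sketch of the difference, under my own assumptions about a one-dimensional world: the deterministic step always yields the same next state, while the stochastic step may “slip”.

```python
import random

# Deterministic: the next state follows uniquely from state and action.
def deterministic_step(position, action):
    return position + (1 if action == "right" else -1)

# Stochastic: the same state and action can lead to different next states.
def stochastic_step(position, action):
    intended = deterministic_step(position, action)
    # With some probability the move "slips" and the agent stays put.
    return intended if random.random() < 0.8 else position

print(deterministic_step(0, "right"))   # always 1
print(stochastic_step(0, "right"))      # usually 1, sometimes 0
```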
Episodic vs Sequential
•In an Episodic task environment, the agent’s experience is divided into atomic incidents or episodes. There is no dependency between the current and previous episodes: in each episode, the agent receives input from the environment and then performs the corresponding action.
 • Example: Consider a pick-and-place robot used to detect defective parts on a conveyor belt. Every time, the robot (agent) makes its decision on the current part alone, i.e. there is no dependency between the current and previous decisions.
•In a Sequential environment, previous decisions can affect all future decisions. The next action of the agent depends on what actions it has taken previously and what actions it expects to take in the future.
 • Example: Checkers, where the previous move can affect all the following moves (both cases are sketched in code below).
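The contrast can be shown with a small sketch (illustrative names, not from the slides): the episodic part-picking robot judges each part in isolation, while the sequential player carries state forward because earlier moves constrain later ones.

```python
# Episodic: the part-picking robot judges each part on the conveyor belt
# by itself; nothing it decided earlier changes the current decision.
def episodic_robot(part):
    return "reject" if part["defective"] else "accept"

parts = [{"defective": False}, {"defective": True}, {"defective": False}]
print([episodic_robot(p) for p in parts])     # ['accept', 'reject', 'accept']

# Sequential: a checkers-like agent must carry the game state forward,
# because every move constrains all the moves that follow it.
class SequentialPlayer:
    def __init__(self):
        self.board_history = []               # past moves shape future options

    def move(self, board):
        self.board_history.append(board)
        # Placeholder policy: the choice may depend on the whole history.
        return f"move #{len(self.board_history)}"

player = SequentialPlayer()
print(player.move("start"), player.move("after move 1"))
```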
Dynamic vs Static
•An environment that keeps changing while the agent is deliberating or acting is said to be dynamic.
•A roller coaster ride is dynamic as it is set in motion and
the environment keeps changing every instant.
•An idle environment with no change in its state is called a
static environment.
•An empty house is static as there’s no change in the
surroundings when an agent enters.
Single-agent vs Multi-agent
•An environment consisting of only one agent is said to be a
single-agent environment.
•A person left alone in a maze is an example of the single-
agent system.
•An environment involving more than one agent is a multi-
agent environment.
•The game of football is multi-agent as it involves 11
players in each team.
Discrete vs Continuous
•If an environment consists of a finite number of actions that can be performed in it to obtain the output, it is said to be a discrete environment.
•The game of chess is discrete as it has only a finite number
of moves. The number of moves might vary with every
game, but still, it’s finite.
•An environment in which the actions cannot be enumerated, i.e. is not discrete, is said to be continuous.
•Self-driving cars are an example of continuous environments, as their actions (driving, parking, etc.) cannot be enumerated.
Examples of Task Environments
Agent’s Classification
1. Simple Reflex Agents
2. Model-based Reflex Agents
3. Goal-based Agents
4. Utility-based Agents
5. Learning Agents
Simple Reflex Agent