AI - Unit 1 - Part 1
Chapter I
Introduction
1.1 What is Artificial Intelligence?
“Artificial intelligence is the simulation of human intelligence processes by machines, especially
computer systems”.
Specific applications of AI include expert systems, natural language processing, speech recognition and
machine vision.
Acting humanly: The Turing Test approach
A computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written
responses come from a person or from a computer.
The Total Turing Test includes a video signal so that the interrogator can test the subject’s perceptual abilities, as well as
the opportunity for the interrogator to pass physical objects “through the hatch.”
To pass the Total Turing Test, the computer will need:
Computer vision to perceive objects, and
Robotics to manipulate objects and move about.
Thinking humanly: The cognitive modeling approach
If we are going to say that a given program thinks like a human, we must have some way of
determining how humans think. We need to get inside the actual workings of human minds.
Once we have a sufficiently precise theory of the mind, it becomes possible to express the
theory as a computer program.
If the program’s input–output behavior matches corresponding human behavior, that is
evidence that some of the program’s mechanisms could also be operating in humans.
Thinking rationally: The “laws of thought” approach
There are two main obstacles to this approach.
First, it is not easy to take informal knowledge and state it in the formal terms required by
logical notation, particularly when the knowledge is less than 100% certain.
Second, there is a big difference between solving a problem “in principle” and solving it in
practice.
Even problems with just a few hundred facts can exhaust the computational resources of any
computer unless it has some guidance as to which reasoning steps to try first.
Although both of these obstacles apply to any attempt to build computational reasoning systems,
they appeared first in the logicist tradition.
Acting rationally: The rational agent approach
An agent is just something that acts (agent comes from the Latin agere, to do).
Of course, all computer programs do something, but computer agents are expected to do
more: operate autonomously, perceive their environment, persist over a prolonged time period,
adapt to change, and create and pursue goals.
A rational agent is one that acts so as to achieve the best outcome or, when there is
uncertainty, the best expected outcome.
1.2 Foundations of Artificial Intelligence
Mathematics:
AI systems draw on formal logic (Boole, 1847), analysis of the limits of what can be computed, probability theory
(which underlies most modern approaches to reasoning under uncertainty in AI), fuzzy logic, and related fields.
Neuroscience:
Early studies of brain-injured patients revealed which parts of the brain perform which functions. Recent
studies use accurate sensors to correlate brain activity with human thought.
By monitoring individual neurons, monkeys can now control a computer mouse using thought alone.
Extrapolations from Moore's law suggested that computers would have as many gates as the human brain has neurons around the year 2020.
Control theory:
Machines can modify their behavior in response to the environment (sense/ action loop).
E.g., the steam engine governor, the thermostat, and the water-flow regulator.
Around 1950, control theory could describe only linear systems; AI arose in part as a response to this limitation.
Linguistics
Speech demonstrates so much of human intelligence.
Analysis of human language reveals thought taking place in ways not understood in other settings. Children
can create sentences they have never heard before.
Languages and thoughts are believed to be tightly intertwined.
Computer Engineering
How to build an efficient computer?
It provides the artifact that makes AI applications possible.
The power of computers makes it feasible to compute solutions to large and difficult problems.
AI has also contributed its own work to computer science, including: time-sharing, the linked list data type,
etc.
Psychology
How do humans think and act?
The study of human reasoning and acting provides reasoning models for AI.
It strengthens the idea that humans and other animals can be regarded as information-processing machines.
1.3 History of Artificial Intelligence
Following are some milestones in the history of AI.
Maturation of Artificial Intelligence (1943-1952)
Year 1943: The first work now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943.
They proposed a model of artificial neurons.
Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons.
His rule is now called Hebbian learning.
Year 1950: Alan Turing, an English mathematician, pioneered machine learning in 1950. Turing published
"Computing Machinery and Intelligence," in which he proposed a test of a machine's ability to exhibit intelligent
behavior equivalent to human intelligence, now called the Turing test.
The birth of Artificial Intelligence (1952-1956)
Year 1955: Allen Newell and Herbert A. Simon created the first artificial intelligence program, named
"Logic Theorist". This program proved 38 of the first 52 theorems in Whitehead and Russell's Principia
Mathematica and found new and more elegant proofs for some of them.
Year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist John McCarthy at the
Dartmouth Conference.
Around that time, high-level computer languages such as FORTRAN, LISP, and COBOL were being invented, and
enthusiasm for AI was very high.
The golden years-Early enthusiasm (1956-1974)
Year 1966: Researchers emphasized developing algorithms that could solve mathematical problems.
Joseph Weizenbaum created the first chatbot, named ELIZA, in 1966.
Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
The first AI winter (1974-1980)
The period between 1974 and 1980 was the first AI winter.
"AI winter" refers to a period in which computer scientists faced a severe shortage of government funding for AI research.
During AI winters, public interest in artificial intelligence declined.
Boom of AI (1980-1987)
Year 1980: After the AI winter, AI came back with "expert systems": programs that emulate the
decision-making ability of a human expert.
In 1980, the first national conference of the American Association for Artificial Intelligence (AAAI) was
held at Stanford University, marking the re-establishment of AI as a thriving academic field.
The second AI winter (1987-1993)
The period between 1987 and 1993 was the second AI winter.
Investors and governments again stopped funding AI research because of the high costs and limited
results.
Expert systems such as XCON, though initially cost-effective, proved very expensive to maintain.
The emergence of intelligent agents (1993-2011)
Year 1997: IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the
first computer to beat a reigning world chess champion.
Year 2002: For the first time, AI entered the home in the form of Roomba, a robotic vacuum cleaner.
Year 2006: By this year, AI had entered the business world. Companies like Facebook, Twitter, and
Netflix started using AI.
Deep learning, big data and artificial general intelligence (2011-present)
Year 2011: IBM's Watson won Jeopardy!, a quiz show in which it had to answer complex
questions as well as riddles. Watson proved that it could understand natural language and answer
tricky questions quickly.
Year 2012: Google launched an Android app feature, "Google Now," which could provide
predictive information to the user.
Year 2014: The chatbot "Eugene Goostman" won a competition based on the famous "Turing test."
Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and
performed extremely well.
Year 2018: Google demonstrated "Duplex," an AI virtual assistant that booked a hairdresser appointment
over the phone; the person on the other end did not notice she was talking to a machine.
1.4 THE STATE OF THE ART
The following are some sectors that apply Artificial Intelligence:
1. AI in Astronomy
Artificial Intelligence can be very useful for solving complex problems about the universe.
AI technology can help us understand the universe: how it works, its origin, and so on.
2. AI in Healthcare
The healthcare industry is applying AI to make better and faster diagnoses than humans.
AI can help doctors with diagnoses and can warn them when a patient's condition is worsening,
so that medical help can reach the patient before hospitalization.
3. AI in Gaming
AI machines can play strategic games like chess, where the machine needs to think about a large number of
possible positions.
IBM’s DEEP BLUE became the first computer program to defeat the world champion in a chess match when it
bested Garry Kasparov by a score of 3.5 to 2.5 in an exhibition match (Goodman and Keene, 1997).
4. AI in Finance
The finance industry is bringing automation, chatbots, adaptive intelligence, algorithmic trading, and machine
learning into financial processes.
5. AI in Data Security
Cyber-attacks are growing very rapidly in the digital world. AI can be used to make data safer and more
secure.
Examples such as the AEG bot and the AI2 platform are used to detect software bugs and cyber-attacks more
effectively.
6. AI in Social Media
Social Media sites such as Facebook, Twitter, and Snapchat contain billions of user profiles, which need to be
stored and managed in a very efficient way.
AI can organize and manage massive amounts of data. It can analyze lots of data to identify the latest trends,
hashtags, and the requirements of different users.
7. AI in the Automotive Industry
Some automotive companies are using AI to provide virtual assistants to their users for better performance; for example,
Tesla has introduced TeslaBot, an intelligent virtual assistant.
Various companies are currently working on self-driving cars, which can make journeys safer and more
secure.
8. AI in Robotics:
Usually, general robots are programmed to perform repetitive tasks, but with the help of AI we can create
intelligent robots that perform tasks from their own experience without being pre-programmed.
Humanoid robots are among the best examples of AI in robotics; recently, the intelligent humanoid robots Erica and
Sophia were developed, which can talk and behave like humans.
The iRobot Corporation has sold over two million Roomba robotic vacuum cleaners for home use.
9. AI in Entertainment
We currently use some AI-based applications in our daily lives through entertainment services such as
Netflix and Amazon. With the help of ML/AI algorithms, these services show recommendations for programs and
shows.
10. AI in Agriculture
Agriculture is an area that requires various resources: labor, money, and time for the best results. Nowadays
agriculture is becoming digital, and AI is emerging in this field.
Agriculture is applying AI through agricultural robotics, soil and crop monitoring, and predictive analysis.
11. AI in E-commerce
AI is helping shoppers discover associated products in their recommended size, color, or even brand.
12. AI in Education:
AI can automate grading, giving tutors more time to teach. An AI chatbot can communicate with students
as a teaching assistant.
In the future, AI may act as a personal virtual tutor for students, easily accessible at any time and
any place.
13. Spam Fighting:
Each day, learning algorithms classify over a billion messages as spam, saving the recipient from having to waste
time deleting what, for many users, could comprise 80% or 90% of all messages, if not classified away by algorithms.
Because the spammers are continually updating their tactics, it is difficult for a static programmed approach to keep
up, and learning algorithms work best (Sahami et al., 1998; Goodman and Heckerman, 2004).
14. Machine Translation
A computer program automatically translates from Arabic to English, allowing an English speaker to see the
headline “Ardogan Confirms That Turkey Would Not Accept Any Pressure, Urging Them to Recognize Cyprus.”
Chapter 2
Intelligent Agents
1.5 AGENTS AND ENVIRONMENTS
What is an Agent?
“An agent is anything that perceives its environment through sensors and acts upon that
environment through actuators” (an actuator is a device that causes a machine or other device to operate).
An agent runs in a cycle of perceiving (noticing or realizing something), thinking, and acting.
An agent can be:
Human agent: A human agent has eyes, ears, and other organs that work as sensors, and hands,
legs, and the vocal tract that work as actuators.
Robotic agent: A robotic agent can have cameras and infrared range finders as sensors and various
motors as actuators.
Software agent: A software agent can receive keystrokes and file contents as sensory input, act on those
inputs, and display output on the screen.
Hence the world around us is full of agents, such as cellphones and cameras; even we ourselves are
agents.
• Sensor: A sensor is a device that detects changes in the environment and sends the
information to other electronic devices. An agent observes its environment through sensors.
• Actuators: Actuators are the components of a machine that convert energy into motion. The
actuators are responsible for moving and controlling a system. An actuator can be an
electric motor, gears, rails, etc.
• Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels,
arms, fingers, wings, fins, and display screen.
Agents interact with environments through sensors and actuators.
In the simple vacuum world, the vacuum-cleaner agent has a location sensor and a dirt sensor, so it knows where it
is (room A or room B) and whether the room is dirty.
It can go left, go right, suck, and idle. A possible performance
measure is to maximize the number of clean rooms over a certain period.
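As a concrete illustration, the following minimal Python sketch simulates this two-room world with a simple agent; the layout, dirt model, and scoring rule are illustrative assumptions, not a fixed specification.

def reflex_vacuum_agent(location, is_dirty):
    # Agent function: the percept is (location, dirt status).
    if is_dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"

def simulate(steps=10):
    dirt = {"A": True, "B": True}      # both rooms start dirty (assumption)
    location = "A"
    score = 0
    for _ in range(steps):
        action = reflex_vacuum_agent(location, dirt[location])
        if action == "Suck":
            dirt[location] = False
        elif action == "Right":
            location = "B"
        elif action == "Left":
            location = "A"
        # performance measure: +1 for every clean room at each time step
        score += sum(1 for is_dirty in dirt.values() if not is_dirty)
    return score

print(simulate())                      # 18 for this layout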
1.6 GOOD BEHAVIOR: THE CONCEPT OF RATIONALITY
A rational agent is one that does the right thing; conceptually speaking, every entry in the table for the agent function is filled out
correctly.
Obviously, doing the right thing is better than doing the wrong thing, but what does it mean to do the right thing?
We answer by considering the consequences of the agent's behavior: when an agent is plunked down in an environment, it generates a
sequence of actions according to the percepts it receives.
This sequence of actions causes the environment to go through a sequence of states. If the sequence is desirable, then the agent has
performed well.
This notion of desirability is captured by a performance measure that evaluates any given sequence of environment states.
Performance measures
As a general rule, it is better to design performance measures according to what one actually wants in the environment, rather than
according to how one thinks the agent should behave.
Rationality
“For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance
measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.” That is, the task
of a rational agent is to maximize the performance measure based on the percept sequence.
Consider the simple vacuum-cleaner agent that cleans a square if it is dirty and moves to the other square if not; this is the agent
function tabulated in the figure. Is this a rational agent? That depends!
First, we need to say what the performance measure is, what is known about the environment, and what sensors and actuators
the agent has.
The Left and Right actions move the agent left and
right except when this would take the agent outside the
environment, in which case the agent remains where it is.
An omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in
reality.
A rational agent should not only gather information but also learn as much as possible from what it perceives.
The agent‘s initial configuration could reflect some prior knowledge of the environment, but as the agent gains
experience this may be modified and augmented.
Successful agents split the task of computing the agent function into three different periods:
when the agent is being designed, some of the computation is done by its designers;
when it is deliberating on its next action, the agent does more computation and
as it learns from experience, it does even more computation to decide how to modify its behavior.
A rational agent should be autonomous – it should learn what it can, to compensate for partial or incorrect prior
knowledge.
1.7 THE NATURE OF ENVIRONMENTS
The environment is the Task Environment (problem) for which the Rational Agent is the solution. Any task
environment is characterized on the basis of PEAS.
• Performance measure – the characteristics that determine whether the agent is successful or not. For
example, in the vacuum-cleaner example, a clean floor and optimal energy consumption might be performance measures.
• Environment – the physical characteristics and constraints expected. For example, wood floors, furniture in the way, etc.
• Actuators – the physical or logical constructs that take action. For the vacuum cleaner, these
are the suction pumps.
• Sensors – again, physical or logical constructs that sense the environment. From our previous example, these
are cameras and dirt sensors.
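Purely as an illustration, the PEAS description of the vacuum-cleaner agent could be recorded as a plain Python dictionary; the field values below are assumptions drawn from the example above, not a standard representation.

vacuum_peas = {
    "performance": ["clean floor", "optimal energy consumption"],
    "environment": ["room A", "room B", "dirt", "furniture in the way"],
    "actuators": ["suction pump", "wheels (move left/right)"],
    "sensors": ["camera", "dirt sensor", "location sensor"],
}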
Specifying the task environment
Consider the PEAS description for the taxi’s task environment. We discuss each element in more detail in the following paragraphs.
First, what is the performance measure to which we would like our automated driver to aspire?
Desirable qualities include getting to the correct destination;
minimizing fuel consumption;
minimizing the trip time or cost;
minimizing violations of traffic laws and disturbances to other drivers;
maximizing safety and passenger comfort; maximizing profits.
Obviously, some of these goals conflict, so tradeoffs will be required.
Next, what is the driving environment that the taxi will face?
Any taxi driver must deal with a variety of roads, ranging from rural lanes and urban alleys to 12-lane freeways.
The roads contain other traffic, pedestrians, stray animals, road works, and police cars. The taxi must also interact with
potential and actual passengers.
The actuators for an automated taxi include those available to a human driver:
control over the engine through the accelerator and control over steering and braking.
In addition, it will need output to a display screen or voice synthesizer to talk back to the passengers, and perhaps
some way to communicate with other vehicles, politely or otherwise.
The basic sensors for the taxi will include one or more controllable video cameras so that it can see
the road;
it might augment these with infrared or sonar sensors to detect distances to other cars and obstacles.
We have sketched the basic PEAS elements for a number of additional agent types.
Properties of task environments
An environment in artificial intelligence is the surrounding of the agent. The agent takes input from the environment
through sensors and delivers the output to the environment through actuators.
Deterministic vs Stochastic
Competitive vs Collaborative
Single-agent vs Multi-agent
Static vs Dynamic
Discrete vs Continuous
Episodic vs sequential
Known vs Unknown
Accessible vs Inaccessible
Fully Observable vs Partially Observable
When an agent's sensors can access the complete state of the environment at each point in time, the environment is said to be
fully observable; otherwise it is partially observable.
Operating in a fully observable environment is easy, as there is no need to keep track of the history of the surroundings.
An environment is called unobservable when the agent has no sensors at all.
Example:
Chess – the board is fully observable, and so are the opponent’s moves.
Driving – the environment is partially observable because what’s around the corner is not known.
Deterministic vs Stochastic
When the agent’s current state and selected action completely determine the next state of the environment, the environment is
said to be deterministic.
A stochastic environment is random in nature: the next state is not unique and cannot be completely determined by the agent.
Example:
Chess – there are only a limited number of possible moves for a piece in the current state, and these moves can be determined.
Self-driving cars – the outcomes of a self-driving car's actions are not unique; they vary from time to time.
Competitive vs Collaborative
An agent is said to be in a competitive environment when it competes against another agent to optimize
the output.
The game of chess is competitive as the agents compete with each other to win the game which is the
output.
An agent is said to be in a collaborative environment when multiple agents cooperate to produce the
desired output.
When multiple self-driving cars are found on the roads, they cooperate with each other to avoid
collisions and reach their destination, which is the desired output.
Single-agent vs Multi-agent
An environment consisting of only one agent is said to be a single-agent environment.
A person left alone in a maze is an example of the single-agent system.
An environment involving more than one agent is a multi-agent environment.
The game of football is multi-agent as it involves 11 players in each team.
Dynamic vs Static
An environment that keeps changing while the agent is deliberating or acting is said to be dynamic.
A roller coaster ride is dynamic as it is set in motion and the environment keeps changing every instant.
An idle environment with no change in its state is called a static environment.
An empty house is static as there’s no change in the surroundings when an agent enters.
Discrete vs Continuous
If an environment consists of a finite number of actions that can be deliberated in the environment to obtain the
output, it is said to be a discrete environment.
The game of chess is discrete as it has only a finite number of moves. The number of moves might vary with every
game, but still, it’s finite.
An environment in which the actions performed cannot be numbered, i.e., is not discrete, is said to be continuous.
Self-driving cars are an example of continuous environments as their actions are driving, parking, etc. which cannot
be numbered.
Known vs Unknown
Known and unknown are not actually features of the environment itself; they describe the agent's state of knowledge about
how the environment works.
In a known environment, the results of all actions are known to the agent, while in an unknown environment
the agent needs to learn how the environment works in order to perform an action.
It is quite possible for a known environment to be partially observable and for an unknown environment to be
fully observable.
Accessible vs Inaccessible
If an agent can obtain complete and accurate information about the environment's state, the
environment is called accessible; otherwise it is inaccessible.
An empty room whose state can be defined by its temperature alone is an example of an accessible environment.
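To recap the properties discussed in this section, the following illustrative Python snippet classifies the two recurring examples along these dimensions; the labels simply restate the claims made above.

environment_properties = {
    "chess": {
        "observable": "fully",
        "deterministic": True,
        "agents": "multi (competitive)",
        "dynamic": False,
        "discrete": True,
    },
    "self-driving car": {
        "observable": "partially",
        "deterministic": False,
        "agents": "multi (collaborative)",
        "dynamic": True,
        "discrete": False,
    },
}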
1.8 THE STRUCTURE OF AGENTS
The job of AI is to design an agent program that implements the agent function. We assume this program will run on some sort of
computing device with physical sensors and actuators, which we call the architecture: agent = architecture + program.
The following are the three main terms involved in the structure of an AI agent:
Architecture: the machinery (the computing device together with its sensors and actuators) that the agent executes on.
Agent function: a map from the percept sequence to an action, f : P* → A.
Agent program: an implementation of the agent function. The agent program executes on the physical architecture to
produce the function f.
The program we choose has to be one that is appropriate for the architecture.
If the program is going to recommend actions like Walk, the architecture had better have legs.
The architecture might be just an ordinary PC, or it might be a robotic car with several onboard computers, cameras, and other sensors.
In general, the architecture makes the percepts from the sensors available to the program, runs the program, and feeds the program’s action
choices to the actuators as they are generated.
Agent programs
An agent program takes the current percept as input from the sensors and returns an action to the actuators.
The difference between the agent program and the agent function is that the agent program takes the current percept as
input, while the agent function takes the entire percept history.
The agent program takes just the current percept as input because nothing more is available from the environment; if the
agent’s actions need to depend on the entire percept sequence, the agent will have to remember the percepts.
A rather trivial agent program keeps track of the percept sequence and then uses it to index into a table of actions to
decide what to do.
The TABLE-DRIVEN-AGENT program is invoked for each new percept and returns an action each time. It retains the
complete percept sequence in memory.
function TABLE-DRIVEN-AGENT(percept) returns an action
persistent: percepts, a sequence, initially empty
            table, a table of actions, indexed by percept sequences, initially fully specified
append percept to the end of percepts
action ← LOOKUP(percepts, table)
return action
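A minimal Python rendering of this pseudocode might look as follows; the table fragment is made up for the vacuum world, and a real table would need an entry for every possible percept sequence.

percepts = []                          # persistent percept sequence, initially empty
table = {                              # hypothetical fragment of the action table
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Dirty"), ("A", "Clean")): "Right",
    # ... one entry for every longer percept sequence
}

def table_driven_agent(percept):
    percepts.append(percept)           # append percept to the end of percepts
    return table.get(tuple(percepts))  # action <- LOOKUP(percepts, table)

print(table_driven_agent(("A", "Dirty")))   # -> Suck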
The agent program for a simple reflex agent in the two-state vacuum environment:
function REFLEX-VACUUM-AGENT([location, status]) returns an action
if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left
To build a rational agent in this way, we as designers must construct a table that contains the appropriate action for
every possible percept sequence.
It is instructive to consider why the table-driven approach to agent construction is doomed to failure.
Let P be the set of possible percepts and let T be the lifetime of the agent (the total number of percepts it will receive).
The lookup table will contain ∑_{t=1}^{T} |P|^t entries.
Consider the automated taxi: the visual input from a single camera comes in at the rate of roughly 27 megabytes per
second (30 frames per second, 640 × 480 pixels with 24 bits of color information).
This gives a lookup table with over 10^250,000,000,000 entries for an hour’s driving.
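As a quick back-of-the-envelope check of the quoted data rate, in Python:

frames_per_second = 30
pixels = 640 * 480
bytes_per_pixel = 3                    # 24 bits of color information
rate = frames_per_second * pixels * bytes_per_pixel
print(rate)                            # 27,648,000 bytes/s, i.e. roughly 27 MB/s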
Agents can be grouped into four classes based on their degree of perceived intelligence and
capability:
Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents
All these agents can improve their performance and generate better actions over time.
Simple reflex agent:
Simple reflex agents are the simplest agents. These agents take decisions on the basis of the current percept and ignore
the rest of the percept history.
A simple reflex agent does not consider any part of the percept history during its decision and action process.
The simple reflex agent works on the condition–action rule, which maps the current state to an action. For example, a room-cleaner
agent works only if there is dirt in the room.
A more general and flexible approach is first to build a general-purpose interpreter for condition– action rules and then to
create rule sets for specific task environments.
The figure below gives the structure of this general program in schematic form, showing how the condition–action rules
allow the agent to make the connection from percept to action.
We use rectangles to denote the current internal state of the agent’s decision process, and ovals to represent the background
information used in the process.
A simple reflex agent acts according to a rule whose condition matches the current state, as
defined by the percept.
function SIMPLE-REFLEX-AGENT(percept) returns an action
persistent: rules, a set of condition–action rules
state ← INTERPRET-INPUT(percept)
rule ← RULE-MATCH(state, rules)
action ← rule.ACTION
return action
The INTERPRET-INPUT function generates an abstracted description of the current state from
the percept, and the RULE-MATCH function returns the first rule in the set of rules that matches
the given state description.
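As a sketch, the simple reflex agent for the two-state vacuum world could be written in Python as follows; the INTERPRET-INPUT abstraction and the rule set here are illustrative assumptions.

rules = {                              # condition-action rules (assumed)
    ("Dirty", "A"): "Suck",
    ("Dirty", "B"): "Suck",
    ("Clean", "A"): "Right",
    ("Clean", "B"): "Left",
}

def interpret_input(percept):
    # build an abstracted state description from the raw percept
    location, status = percept
    return (status, location)

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    action = rules[state]              # RULE-MATCH: the (only) matching rule
    return action

print(simple_reflex_agent(("B", "Dirty")))  # -> Suck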
Model-based reflex agent
The model-based agent can work in a partially observable environment and track the situation.
Model: knowledge about "how things happen in the world" is called a model; hence the name model-based agent.
These agents have a model, "which is knowledge of the world," and perform actions based on it.
This knowledge is embedded in the agent's program and helps the agent understand how the world works.
The implementation of this knowledge is called the model of the world, and an agent that uses this model to decide
what action to take is called a model-based agent.
The figure gives the structure of the model-based reflex agent with internal state, showing how the current percept is
combined with the old internal state to generate the updated description of the current state, based on the agent’s
model of how the world works.
It keeps track of the current state of the world, using an internal model. It then chooses an action in the same
way as the reflex agent.
function MODEL-BASED-REFLEX-AGENT(percept) returns an action
persistent: state, the agent’s current conception of the world state
            model, a description of how the next state depends on the current state and action
            rules, a set of condition–action rules
            action, the most recent action, initially none
state ← UPDATE-STATE(state, action, percept, model)
rule ← RULE-MATCH(state, rules)
action ← rule.ACTION
return action
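The following Python sketch illustrates the idea for the vacuum world; the internal state representation and update rule are illustrative assumptions, and the model here simply assumes a room stays in its last observed status until observed again.

state = {"A": "Unknown", "B": "Unknown"}    # believed status of each room

def update_state(state, percept):
    # combine the current percept with the old internal state
    location, status = percept
    state[location] = status
    return location

def model_based_reflex_agent(percept):
    location = update_state(state, percept)
    if state[location] == "Dirty":
        return "Suck"
    if state["A"] == "Clean" and state["B"] == "Clean":
        return "Idle"                       # both rooms believed clean
    return "Right" if location == "A" else "Left"

print(model_based_reflex_agent(("A", "Dirty")))   # -> Suck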
Goal-based agents
Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
The agent needs to know its goal, which describes desirable situations.
Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.
These agents may have to consider a long sequence of possible actions before deciding whether the goal is
achieved or not.
Such consideration of different scenarios is called searching and planning, which makes an agent
proactive.
A model-based, goal-based agent. It keeps track of the world state as well as a set of goals it is trying to
achieve, and chooses an action that will (eventually) lead to the achievement of its goals.
Utility-based Agents
These agents are similar to goal-based agents but add an extra component, utility
measurement, which provides a measure of success at a given state.
A utility-based agent not only focuses on the goal(s) but also on the best way to achieve the goal(s).
The utility-based agent is useful when there are multiple possible alternatives and the agent has to
choose the best action to perform.
The utility function maps each state to a real number to check how efficiently each action
achieves the goals.
A utility-based agent has to model and keep track of its environment, tasks that have involved a
great deal of research on perception, representation, reasoning, and learning.
A model-based, utility-based agent. It uses a model of the world, along with a utility function that measures its
preferences among states of the world.
Then it chooses the action that leads to the best expected utility, where expected utility is computed by averaging over
all possible outcome states, weighted by the probability of the outcome.
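As a small worked example, this Python sketch computes expected utility by averaging the utilities of all possible outcome states, weighted by their probabilities; the actions, outcomes, and numbers are made up for illustration.

def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs for one action
    return sum(p * u for p, u in outcomes)

fast_risky = [(0.7, 10.0), (0.3, -20.0)]    # quick, but may hit a traffic jam
slow_safe = [(1.0, 4.0)]                    # always moderately good

print(expected_utility(fast_risky))         # 0.7*10 - 0.3*20 = 1.0
print(expected_utility(slow_safe))          # 4.0, so the safe route wins here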
Learning Agents
A learning agent in AI is an agent that can learn from its past experiences; it has learning
capabilities.
It starts by acting with basic knowledge and then adapts automatically through learning.
Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve
it.