Foundations and History of Artificial
Intelligence
• Artificial intelligence is not a new term or a new technology for researchers.
• The technology is much older than you might imagine.
• Myths of mechanical men even appear in ancient Greek and Egyptian mythology.
• The following are some milestones in the history of AI, tracing the journey from AI's origins to its development today.
Maturation of Artificial Intelligence (1943-1952)
• Year 1943: The first work now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943, when they proposed a model of artificial neurons.
• Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection
strength between neurons. His rule is now called Hebbian learning.
• Year 1950: Alan Turing, an English mathematician and pioneer of machine learning, published "Computing Machinery and Intelligence", in which he proposed a test, now called the Turing test, of a machine's ability to exhibit intelligent behavior equivalent to human intelligence.
The birth of Artificial Intelligence (1952-1956)
• Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program", named the "Logic Theorist". This program proved 38 of 52 mathematics theorems and found new, more elegant proofs for some of them.
• Year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined as an academic field.
• At that time, high-level computer languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for AI was very high.
The golden years-Early enthusiasm (1956-1974)
• Year 1966: Researchers emphasized developing algorithms that could solve mathematical problems. Joseph Weizenbaum created the first chatbot, named ELIZA, in 1966.
• Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
The first AI winter (1974-1980)
• The period from 1974 to 1980 was the first AI winter. "AI winter" refers to a period when computer scientists faced a severe shortage of government funding for AI research.
• During AI winters, public interest in artificial intelligence declined.
A boom of AI (1980-1987)
• Year 1980: After the AI winter, AI came back with "expert systems", programs that emulate the decision-making ability of a human expert.
• In 1980, the first national conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford University.
The second AI winter (1987-1993)
• The period from 1987 to 1993 was the second AI winter.
• Investors and governments again stopped funding AI research because of its high cost and inefficient results. Even expert systems such as XCON had proved very costly to maintain.
The emergence of intelligent agents (1993-2011)
• Year 1997: In 1997, IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the first computer to defeat a reigning world chess champion.
• Year 2002: For the first time, AI entered the home in the form of the Roomba, a robotic vacuum cleaner.
• Year 2006: By 2006, AI had entered the business world. Companies like Facebook, Twitter, and Netflix also started using AI.
Deep learning, big data and artificial general
intelligence (2011-present)
• AI has developed to a remarkable level. Deep learning, big data, and data science are now booming.
• Nowadays, companies like Google, Facebook, IBM, and Amazon are working with AI and creating amazing products.
• The future of artificial intelligence is inspiring and promises ever higher intelligence.
Problem solving
• Reflex agents are the simplest agents because they map states directly into actions.
• Unfortunately, these agents fail to operate in environments where the mapping is too large to store and learn.
• Goal-based agent, on the other hand, considers future actions and the desired outcomes.
• Here, we will discuss one type of goal-based agent known as a problem-solving agent,
which uses atomic representation with no internal states visible to the problem-solving
algorithms.
Problem-solving agent
• A problem-solving agent operates by precisely defining problems and their possible solutions.
• According to psychology, "problem solving refers to a state where we wish to reach a definite goal from a present state or condition."
• According to computer science, problem solving is a part of artificial intelligence that encompasses a number of techniques, such as algorithms and heuristics, to solve a problem.
Problem-solving agent
• Therefore, a problem-solving agent is a goal-driven agent and focuses on
satisfying the goal.
• Steps performed by Problem-solving agent
• Goal Formulation: It is the first and simplest step in problem-solving. It organizes
the steps/sequence required to formulate one goal out of multiple goals as well
as actions to achieve that goal. Goal formulation is based on the current situation
and the agent's performance measure (discussed below).
• Problem Formulation: It is the most important step of problem-solving which
decides what actions should be taken to achieve the formulated goal.
• The following five components are involved in problem formulation:
• Initial State: It is the starting state or initial step of the agent towards its goal.
• Actions: It is the description of the possible actions available to the agent.
• Transition Model: It describes what each action does.
• Goal Test: It determines if the given state is a goal state.
• Path cost: It assigns a numeric cost to each path leading toward the goal. The problem-solving agent selects a cost function that reflects its performance measure. Remember, an optimal solution has the lowest path cost among all the solutions.
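The five components above can be sketched as a minimal Python class. All names here (`Problem`, `goal_test`, etc.) are illustrative assumptions, not a standard API:

```python
# A minimal sketch of the five problem-formulation components.
# Class and method names are illustrative, not a standard API.
class Problem:
    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state   # where the agent starts
        self.goal_state = goal_state

    def actions(self, state):
        """Description of the actions available in the given state."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state that results from doing the action."""
        raise NotImplementedError

    def goal_test(self, state):
        """True if the given state is a goal state."""
        return state == self.goal_state

    def path_cost(self, cost_so_far, state, action, next_state):
        """Numeric cost of a path; here every step costs 1."""
        return cost_so_far + 1
```

A concrete problem (such as the 8-puzzle below) would subclass this and fill in `actions` and `result`.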
Problem solving agents
• Note: The initial state, actions, and transition model together implicitly define the state-space of the problem. The state-space of a problem is the set of all states reachable from the initial state by any sequence of actions. The state-space forms a directed graph where the nodes are states, the links between nodes are actions, and a path is a sequence of states connected by a sequence of actions.
• Search: It identifies the best possible sequence of actions to reach the goal state from the current state. It takes a problem as input and returns a solution as output.
• Solution: It finds the best algorithm among various algorithms, which may be proven to give the optimal solution.
• Execution: It executes the best optimal solution found by the searching algorithm to reach the goal state from the current state.
Example Problems
• Basically, there are two types of problems:
• Toy Problem: A concise and exact description of a problem, used by researchers to compare the performance of algorithms.
• Real-world Problem: A problem based on the real world that requires a solution. Unlike a toy problem, it does not depend on a precise description, but we can have a general formulation of the problem.
Some Toy Problems
• 8-Puzzle Problem: Here, we have a 3x3 matrix with movable tiles numbered from 1 to 8 and a blank space. A tile adjacent to the blank space can slide into that space. The objective is to reach a specified goal state, as shown in the figure below.
• In the figure, our task is to convert the current (start) state into the goal state by sliding tiles into the blank space.
The problem formulation is as follows:
• States: It describes the location of each numbered tile and the blank tile.
• Initial State: We can start from any state as the initial state.
• Actions: Here, the actions of the blank space are defined, i.e., left, right, up, or down.
• Transition Model: It returns the resulting state as per the given state and
actions.
• Goal test: It identifies whether we have reached the correct goal-state.
• Path cost: The path cost is the number of steps in the path where the cost of
each step is 1.
• Note: The 8-puzzle problem is a type of sliding-block problem which is used for
testing new search algorithms in artificial intelligence.
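As a sketch, the 8-puzzle formulation above can be coded with states as 9-tuples read row by row, where 0 stands for the blank. The function names are illustrative:

```python
# Sketch of the 8-puzzle actions and transition model.
# A state is a 9-tuple read row by row; 0 is the blank space.
def actions(state):
    """Moves of the blank: 'up', 'down', 'left', 'right' where legal."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if row > 0: moves.append('up')
    if row < 2: moves.append('down')
    if col > 0: moves.append('left')
    if col < 2: moves.append('right')
    return moves

def result(state, action):
    """Transition model: slide the adjacent tile into the blank space."""
    i = state.index(0)
    delta = {'up': -3, 'down': 3, 'left': -1, 'right': 1}[action]
    j = i + delta
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)
```

For example, with the blank in the center, all four actions are legal, and moving 'up' swaps the blank with the tile above it.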
8-queens problem
• The aim of this problem is to place eight queens on a chessboard such that no queen attacks another. A queen can attack other queens diagonally or in the same row or column.
• From the following figure, we can understand the problem as well as its correct
solution.
8-queens problem
• Notice from the above figure that each queen is placed on the chessboard in a position where no other queen lies on the same diagonal, row, or column. Therefore, it is a correct solution to the 8-queens problem.
• For this problem, there are two main kinds of formulation:
• Incremental formulation: It starts from an empty state where the operator
augments a queen at each step.
• Following steps are involved in this formulation:
• States: Arrangement of any 0 to 8 queens on the chessboard.
• Initial State: An empty chessboard
• Actions: Add a queen to any empty box.
• Transition model: Returns the chessboard with the queen added in a box.
• Goal test: Checks whether 8-queens are placed on the chessboard without any
attack.
• Path cost: There is no need for path cost because only final states are counted.
• In this formulation, there are approximately 1.8 × 10^14 possible sequences to investigate.
• Complete-state formulation: It starts with all 8 queens on the chessboard and moves them around to avoid attacks.
8-queens problem
• Following steps are involved in this formulation
• States: Arrangement of all 8 queens, one per column, with no queen attacking another.
• Actions: Move a queen to a location where it is safe from attacks.
• This formulation is better than the incremental formulation, as it reduces the state space from 1.8 × 10^14 sequences to 2,057 states, making solutions easier to find.
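The goal test shared by both formulations can be sketched as follows, representing a board as a list where entry c is the row of the queen in column c (one queen per column already rules out column attacks). The function name is illustrative:

```python
# Goal test for the 8-queens problem.
# board[c] = row of the queen in column c; one queen per column,
# so only row and diagonal attacks need to be checked.
def no_attacks(board):
    """True if no two queens share a row or a diagonal."""
    for c1 in range(len(board)):
        for c2 in range(c1 + 1, len(board)):
            if board[c1] == board[c2]:                  # same row
                return False
            if abs(board[c1] - board[c2]) == c2 - c1:   # same diagonal
                return False
    return True
```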
Some Real-world problems
• Traveling salesperson problem (TSP): It is a touring problem where the salesperson may visit each city only once. The objective is to find the shortest tour while selling the goods in each city.
• VLSI Layout problem: In this problem, millions of components and connections are positioned on a chip so as to minimize the area, circuit delays, and stray capacitances, and to maximize the manufacturing yield.
Some Real-world problems
• The layout problem is split into two parts:
• Cell layout: Here, the primitive components of the circuit are grouped into cells,
each performing its specific function. Each cell has a fixed shape and size. The
task is to place the cells on the chip without overlapping each other.
• Channel routing: It finds a specific route for each wire through the gaps
between the cells.
• Protein Design: The objective is to find a sequence of amino acids that will fold into a 3D protein with properties that can help cure some disease.
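As a sketch, the TSP above can be solved by brute force for a handful of cities by checking every tour; this is purely illustrative, since real instances require far better algorithms:

```python
from itertools import permutations

# Brute-force TSP sketch: dist is a symmetric matrix of pairwise
# distances between cities. Feasible only for very small instances.
def shortest_tour(dist):
    n = len(dist)
    best_cost, best_tour = float('inf'), None
    for perm in permutations(range(1, n)):      # fix city 0 as the start
        tour = (0,) + perm
        # sum the edge costs around the tour, returning to city 0
        cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour
```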
Searching for solutions
• We have seen many problems. Now, there is a need to search for solutions to
solve them.
• In this section, we will understand how searching can be used by the agent to
solve a problem.
• For solving different kinds of problem, an agent makes use of different strategies
to reach the goal by searching the best possible algorithms. This process of
searching is known as search strategy.
Search Strategies
• Uninformed Search (Blind Search):
This type of search strategy has no additional information about the states beyond what is provided in the problem definition. It can only generate successors and distinguish a goal state from a non-goal state. Because it searches without any domain knowledge, it is also known as blind search or uninformed search.
• There are following types of uninformed searches:
• Breadth-first search
• Uniform cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search
• Bidirectional search
Uninformed Search (Blind Search)
• Breadth-first search (BFS)
• It is a simple search strategy where the root node is expanded first, then all the successors of the root node are covered, then the next level of nodes, and the search continues until the goal node is found.
• BFS expands the shallowest (i.e., least deep) unexpanded node first, using FIFO (first in, first out) order. Thus, new nodes (i.e., children of a parent node) wait in the queue, and older unexpanded nodes, which are shallower than the new nodes, get expanded first.
• In BFS, the goal test (a test to check whether the current state is a goal state or not) is applied to each node at the time of its generation rather than when it is selected for expansion.
Breadth-first search tree
• In the above figure, it is seen that the nodes are expanded level by
level starting from the root node A till the last node I in the tree.
Therefore, the BFS sequence followed is: A->B->C->D->E->F->G->I.
• BFS Algorithm
• Set a variable NODE to the initial state, i.e., the root node.
• Set a variable GOAL which contains the value of the goal state.
• Loop over the nodes, traversing level by level, until the goal state is found.
• While performing the looping, start removing the elements from the
queue in FIFO order.
• If the goal state is found, return goal state otherwise continue the
search.
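The BFS steps above can be sketched in Python over a graph given as an adjacency dictionary; note the goal test is applied at generation time, as described above. The function name and graph layout are illustrative:

```python
from collections import deque

# BFS sketch: graph is an adjacency dict mapping a node to its children.
# Returns the path from start to goal, or None if no path exists.
def bfs(graph, start, goal):
    if start == goal:
        return [start]
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # shallowest unexpanded node first
        for child in graph.get(path[-1], []):
            if child in visited:
                continue
            if child == goal:            # goal test at generation time
                return path + [child]
            visited.add(child)
            frontier.append(path + [child])
    return None
```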
BFS
• The performance measure of BFS is as follows:
• Completeness: It is a complete strategy, as it definitely finds the goal state (if one exists at a finite depth).
• Optimality: It gives an optimal solution if the cost of each step is the same.
• Space Complexity: The space complexity of BFS is O(b^d), i.e., it requires a huge amount of memory. Here, b is the branching factor and d denotes the depth/level of the shallowest goal.
• Time Complexity: The time complexity is likewise O(b^d), so BFS consumes much time to reach the goal node for large instances.
• Disadvantages of BFS
• The biggest disadvantage of BFS is that it requires a lot of memory space; it is therefore a memory-bounded strategy.
• BFS is a time-consuming search strategy because it expands the nodes breadthwise.
• Note: BFS expands the nodes level by level, i.e., breadthwise, therefore it
is also known as a Level search technique.
Depth-first search
• This search strategy explores the deepest node first, then backtracks to explore
other nodes.
• It uses LIFO (Last in First Out) order, which is based on the stack, in order to
expand the unexpanded nodes in the search tree.
• The search proceeds to the deepest level of the tree, where the nodes have no successors, and then backtracks.
• Without a depth limit, this search can expand nodes up to the maximum depth of the tree, which may be infinite.
DFS search tree
• In the above figure, DFS works starting from the initial node A (root node) and
traversing in one direction deeply till node I and then backtrack to B and so on.
Therefore, the sequence will be A->B->D->I->E->C->F->G.
• DFS Algorithm
• Set a variable NODE to the initial state, i.e., the root node.
• Set a variable GOAL which contains the value of the goal state.
• Loop over the nodes, traversing deeply along one direction/path in search of the goal node.
• While performing the looping, start removing the elements from the stack in
LIFO order.
• If the goal state is found, return goal state otherwise backtrack to expand
nodes in other direction.
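The DFS steps above can be sketched with an explicit LIFO stack; tracking visited nodes avoids the infinite loops that plague plain DFS. The function name and graph layout are illustrative:

```python
# DFS sketch: graph is an adjacency dict mapping a node to its children.
# Uses an explicit LIFO stack of paths; returns a path or None.
def dfs(graph, start, goal):
    stack = [[start]]                    # LIFO stack of paths
    visited = set()
    while stack:
        path = stack.pop()               # deepest unexpanded node first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push children in reverse so the leftmost child is explored first
        for child in reversed(graph.get(node, [])):
            stack.append(path + [child])
    return None
```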
DFS
• The performance measure of DFS
• Completeness: DFS is not guaranteed to reach the goal state.
• Optimality: It does not give an optimal solution as it expands nodes in
one direction deeply.
• Space complexity: It needs to store only a single path from the root node to a leaf node. Therefore, DFS has O(bm) space complexity, where b is the branching factor (i.e., the number of children a node has) and m is the maximum length of any path.
• Time complexity: DFS has O(b^m) time complexity.
• Disadvantages of DFS
• It may get trapped in an infinite loop.
• It is also possible that it may not reach the goal state.
• DFS does not give an optimal solution.
• Note: DFS uses the concept of backtracking to explore each node in a
search tree.