Module No 2

UNIT II PROBLEM-SOLVING METHODS

• Problem-solving Methods - Search Strategies - Uninformed - Informed
• Heuristics - Local Search Algorithms and Optimization Problems
• Searching with Partial Observations - Constraint Satisfaction Problems
• Constraint Propagation - Backtracking Search - Game Playing - Optimal Decisions in Games - Alpha-Beta Pruning - Stochastic Games
Steps performed by a Problem-solving agent
• Goal Formulation: It is the first and simplest step in problem-solving. It
organizes the steps/sequence required to formulate one goal out of
multiple goals as well as actions to achieve that goal. Goal formulation is
based on the current situation and the agent's performance measure
(discussed below).
• Problem Formulation: It is the most important step of problem-solving
which decides what actions should be taken to achieve the formulated
goal. There are following five components involved in problem
formulation:
• Initial State: It is the starting state or initial step of the agent towards its
goal.
• Actions: It is the description of the possible actions available to the
agent.
• Transition Model: It describes what each action does.
• Goal Test: It determines if the given state is a goal state.
• Path cost: It assigns a numeric cost to each path that leads toward the goal.
The problem-solving agent selects a cost function that reflects its
performance measure. Remember, an optimal solution has the lowest path cost among all solutions.
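A minimal Python sketch of these five components (the class name, method signatures, and unit step cost here are illustrative assumptions, not fixed by the text):

class Problem:
    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state       # Initial State
        self.goal_state = goal_state

    def actions(self, state):                    # Actions available in a state
        raise NotImplementedError

    def result(self, state, action):             # Transition Model
        raise NotImplementedError

    def goal_test(self, state):                  # Goal Test
        return state == self.goal_state

    def step_cost(self, state, action):          # accumulated into the Path cost
        return 1                                 # assumed unit cost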
Search Algorithms in AI
• Artificial Intelligence is the study of building agents that act
rationally. Most of the time, these agents perform some kind of
search algorithm in the background in order to achieve their
tasks.
• A search problem consists of:
• A State Space. Set of all possible states where you can be.
• A Start State. The state from where the search begins.
• A Goal Test. A function that looks at the current state and returns whether
or not it is the goal state.
• The Solution to a search problem is a sequence of actions,
called the plan that transforms the start state to the goal state.
• Initial state, actions, and transition model together define
the state-space of the problem implicitly.
• Toy Problem: It is a concise and exact description of the
problem which is used by the researchers to compare the
performance of algorithms.
• Real-world Problem: These are real-world-based problems that
require solutions. Unlike a toy problem, they do not depend on exact
descriptions, but we can give a general formulation of the problem.
• 8 Puzzle Problem: Here, we have a 3x3 matrix with movable
tiles numbered from 1 to 8 and one blank space. A tile
adjacent to the blank space can slide into that space. The
objective is to transform a given start configuration into a
specified goal configuration.
• 8-queens problem: The aim of this problem is to place eight
queens on a chessboard in an order where no queen may
attack another. A queen can attack other queens
either diagonally or in the same row and column.
• For this problem, there are two main kinds of formulation:
• Incremental formulation: It starts from an empty state where the
operator augments a queen at each step.
• Following steps are involved in this formulation:
• States: Arrangement of any 0 to 8 queens on the chessboard.
• Initial State: An empty chessboard
• Actions: Add a queen to any empty box.
• Transition model: Returns the chessboard with the queen added
in a box.
• Goal test: Checks whether 8-queens are placed on the
chessboard without any attack.
• Path cost: There is no need for path cost because only final states
are counted.
• In this formulation, there are approximately 1.8 x 10^14 possible
sequences to investigate.
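As a small illustration of the goal test in the incremental formulation (the list-based board encoding, where board[col] holds the row of the queen in that column, is an assumption for brevity):

def no_attacks(board):
    # board[col] = row of the queen in that column, so no two share a column
    for c1 in range(len(board)):
        for c2 in range(c1 + 1, len(board)):
            r1, r2 = board[c1], board[c2]
            if r1 == r2 or abs(r1 - r2) == c2 - c1:   # same row or same diagonal
                return False
    return True

def goal_test(board):
    return len(board) == 8 and no_attacks(board)      # all 8 queens placed safely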
• Some Real-world problems
• Traveling salesperson problem (TSP): It is a touring
problem where the salesman can visit each city only once. The
objective is to find the shortest tour that lets the salesman sell
his goods in every city.
• VLSI Layout problem: In this problem, millions of components
and connections are positioned on a chip in order to minimize
area, circuit delays, and stray capacitances, and to maximize the
manufacturing yield.
• Measuring problem-solving performance
• Completeness: It measures whether the algorithm guarantees to find a
solution (if any solution exists).
• Optimality: It measures whether the strategy finds an optimal
solution.
• Time Complexity: The time taken by the algorithm to find a
solution.
• Space Complexity: The memory needed to perform the search.
Uninformed Search Algorithms:
• The search algorithms in this section have no additional
information on the goal node other than the one provided in the
problem definition. The plans to reach the goal state from the
start state differ only by the order and/or length of actions.
Uninformed search is also called Blind search. These
algorithms can only generate the successors and differentiate
between the goal state and non-goal states.
• Depth First Search:
• Depth-first search (DFS) is an algorithm for traversing or
searching tree or graph data structures. The algorithm starts at
the root node (selecting some arbitrary node as the root node in
the case of a graph) and explores as far as possible along each
branch before backtracking. It uses a last-in, first-out strategy and
hence is implemented using a stack.
Path: S -> A -> B -> C -> G (example trace for the graph shown in the figure)

Input: n = 4, e = 6
Edges: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0, 2 -> 3, 3 -> 3
Output: DFS from vertex 1: 1 2 0 3

Input: n = 4, e = 6
Edges: 2 -> 0, 0 -> 2, 1 -> 2, 0 -> 1, 3 -> 3, 1 -> 3
Output: DFS from vertex 2: 2 0 1 3
Complexity calculation: DFS
• Time complexity: O(V + E), where V is the number of vertices and E is the
number of edges in the graph.
• To implement DFS traversal, follow these steps (procedure/algorithm):
• Step 1: Create a stack whose size equals the total number of vertices in the
graph.
• Step 2: Choose any vertex as the traversal's starting point. Visit that vertex
and push it onto the stack.
• Step 3: Visit any one non-visited vertex adjacent to the vertex on top of the
stack, and push it onto the stack.
• Step 4: Repeat step 3 until there are no more non-visited vertices reachable
from the vertex on top of the stack.
• Step 5: If there are no new vertices to visit, backtrack by popping one vertex
from the stack.
• Step 6: Continue with steps 3, 4, and 5 until the stack is empty.
• Step 7: When the stack is entirely empty, create the final spanning
tree by deleting the graph's unused edges.
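A minimal iterative sketch of this procedure, assuming the graph is given as an adjacency-list dict (the reversed() call just makes the visiting order match a recursive DFS):

def dfs(graph, start):
    visited, stack, order = set(), [start], []
    while stack:
        node = stack.pop()                        # last in, first out
        if node not in visited:
            visited.add(node)
            order.append(node)
            for neighbour in reversed(graph.get(node, [])):
                if neighbour not in visited:
                    stack.append(neighbour)       # backtracking happens via pops
    return order

# First example above (n = 4, e = 6):
g = {0: [1, 2], 1: [2], 2: [0, 3], 3: [3]}
print(dfs(g, 1))                                  # [1, 2, 0, 3]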
Breadth-first search (uninformed search)
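The slide shows BFS only as a figure; as a brief companion sketch (same assumed adjacency-list representation as the DFS code above), BFS swaps the stack for a FIFO queue so nodes are expanded level by level:

from collections import deque

def bfs(graph, start):
    visited, queue, order = {start}, deque([start]), []
    while queue:
        node = queue.popleft()                    # first in, first out
        order.append(node)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order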
Uniform-cost search (uninformed search)
• Unlike BFS, this uninformed search explores nodes based on
their path cost from the root node. It expands the node n
having the lowest path cost g(n), where g(n) is the total cost
from the root node to node n. Uniform-cost search is
significantly different from breadth-first search for the
following two reasons:
• First, the goal test is applied to a node only when it is
selected for expansion, not when it is first generated,
because the first goal node that is generated may lie on a
suboptimal path.
• Secondly, a test is added so that, if a better/optimal path is
found to a node already on the frontier, that node is updated.
• Cheapest path from the source to one of the goal states (from the figure):
A->B->E->F gives the optimal path cost, i.e., 0+1+3+4 = 8.
• Uniform-cost search Algorithm
• Set a variable NODE to the initial state, i.e., the root node, and
expand it.
• After expanding the root node, select the node having the
lowest path cost and expand it further. Remember, the
selection of the node should give an optimal path cost.
• If the goal node is reached with the optimal value, return the goal
state; else carry on the search.
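A hedged sketch of this loop using a priority queue ordered by g(n); the (neighbour, step_cost) adjacency format is an assumption:

import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]              # (g(n), node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:                          # goal test on expansion
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbour, step in graph.get(node, []):
            if neighbour not in explored:
                heapq.heappush(frontier,
                               (cost + step, neighbour, path + [neighbour]))
    return None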
• performance measure of Uniform-cost search
• Completeness: It guarantees to reach the goal state.
• Optimality: It gives optimal path cost solutions for the
search.
• Space and time complexity: The worst-case time and space
complexity of uniform-cost search is O(b^(1 + floor(C*/ε))), where b is the
branching factor, C* is the cost of the optimal solution, and ε is the
smallest positive action cost.
• Disadvantages of Uniform-cost search
• It does not care about the number of steps a path takes
to reach the goal state.
• It may get stuck in an infinite loop if there is a path with an
infinite sequence of zero-cost actions.
• It works hard, as it examines every node in search of the lowest-cost
path.
8 Puzzle / Tiles Problem
Question: What is the difference between informed and uninformed search algorithms in AI?
Heuristic Function /Search
• A good heuristic function is judged by its efficiency: the more
information it encodes about the problem, the better it guides the
search, but the more processing time it takes.
• 8-puzzle, 8-queen, tic-tac-toe, etc., can be solved more
efficiently with the help of a heuristic function
Solution
• We can create and use several heuristic functions as per the requirement. It
is also clear from the above example that a heuristic function h(n) can be
defined as the information required to solve a given problem more
efficiently. The information can be related to the nature of the state, the
cost of transforming from one state to another, goal node
characteristics, etc., and it is expressed as a heuristic function.
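For instance, two widely used 8-puzzle heuristics can be written as follows (the tuple-of-tuples state encoding and this particular goal layout are assumptions for illustration):

GOAL = ((1, 2, 3), (4, 5, 6), (7, 8, 0))          # 0 denotes the blank

def misplaced_tiles(state):
    # h1: number of tiles out of their goal position (blank excluded)
    return sum(1 for r in range(3) for c in range(3)
               if state[r][c] != 0 and state[r][c] != GOAL[r][c])

def manhattan_distance(state):
    # h2: total grid distance of every tile from its goal position
    goal_pos = {GOAL[r][c]: (r, c) for r in range(3) for c in range(3)}
    total = 0
    for r in range(3):
        for c in range(3):
            v = state[r][c]
            if v != 0:
                gr, gc = goal_pos[v]
                total += abs(r - gr) + abs(c - gc)
    return total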
• Properties of a Heuristic search Algorithm
• The use of a heuristic function in a heuristic search algorithm leads to the
following properties:
• Admissible Condition: An algorithm is said to be admissible, if it returns an
optimal solution.
• Completeness: An algorithm is said to be complete, if it terminates with a
solution (if the solution exists).
• Dominance Property: If there are two admissible heuristic
algorithms A1 and A2 having h1 and h2 heuristic functions, then A1 is said to
dominate A2 if h1 is better than h2 for all the values of node n.
• Optimality Property: If an algorithm is complete, admissible,
and dominates all other algorithms, it will be the best one and will definitely
give an optimal solution.
Heuristic search algorithms
• Best-First Search
• A* Search
• Bidirectional Search
• Tabu Search
• Beam Search
• Simulated Annealing
• Hill Climbing
• Constraint Satisfaction Problems
Best-First Search (informed / heuristic search)
• Algorithm:
• https://www.youtube.com/watch?v=7ffDUDjwz5E
• The idea of Best-First Search is to use an evaluation function to decide
which adjacent node is most promising and then explore it.
• Let OPEN be a priority queue containing the initial node
• Loop
• If OPEN is empty, return failure
• Node <- Remove-First(OPEN)
• If Node is the goal node, then return the path from the initial (start) node to the final node (goal node)
• Else generate all successors of Node and put the newly generated nodes into OPEN according to their f-values
• End Loop
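A hedged greedy best-first sketch of this loop, where OPEN is ordered by the heuristic h(n) alone (the adjacency-dict format and caller-supplied h are assumptions):

import heapq

def best_first_search(graph, start, goal, h):
    open_list = [(h(start), start, [start])]      # OPEN, ordered by h(n)
    visited = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)  # Remove-First(OPEN)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(open_list,
                               (h(neighbour), neighbour, path + [neighbour]))
    return None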
A* Search Algorithm
• Motivation
• To approximate the shortest path in real-life situations, like in maps or games
where there can be many hindrances.
• We can consider a 2D Grid having several obstacles and we start from a source cell
(colored red below) to reach a goal cell (colored green below)
• A* Search algorithm is one of the best and most popular techniques used in
path-finding and graph traversals.
• It is really a smart algorithm, which separates it from the other conventional
algorithms.
• Many games and web-based maps use this algorithm to find the
shortest path very efficiently (approximately).
• It is an advanced best-first search algorithm that expands cheaper
paths first rather than longer ones. It evaluates each node n by
f(n) = g(n) + h(n), where g(n) is the cost already incurred to reach n
and h(n) is the estimated cost from n to the goal. A* is optimal as well as
a complete algorithm.
• What do I mean by Optimal and Complete? Optimal means that A*
is sure to find the least-cost path from the source to the destination, and
Complete means that, if a path from the source to the destination exists,
A* is guaranteed to find it.
Is A* always the best choice? Well, in most cases, yes. But A* is slow, and the space it requires is large, as it saves all the possible paths that are available to us. This gives other, faster algorithms an upper hand over A*, but it is nevertheless one of the best algorithms out there.
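A hedged A* sketch under the same assumed representations as the earlier searches (a weighted adjacency dict and a caller-supplied admissible h):

import heapq

def a_star(graph, start, goal, h):
    open_list = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:                          # expanded with minimal f = g + h
            return g, path
        for neighbour, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = g2            # cheaper path found
                heapq.heappush(open_list, (g2 + h(neighbour), g2, neighbour,
                                           path + [neighbour]))
    return None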
Heuristics - Local Search Algorithms and
Optimization Problems
• Hill-climbing Search
• Simulated Annealing
• Local Beam Search
An optimization problem is a computational problem whose objective is to
find the best of all feasible solutions.
• The informed and uninformed searches expand the nodes systematically
in two ways:
• keeping different paths in memory, and
• selecting the best suitable path,
• which leads to a solution state required to reach the goal node. But
beyond these "classical search algorithms," we have some "local
search algorithms" in which the path cost does not matter; the focus
is only on the solution state needed to reach the goal node.
• A local search algorithm completes its task by operating on a single
current node rather than multiple paths, and generally by moving only
to the neighbors of that node.
• Advantages
• Local search algorithms use very little, often a constant, amount of
memory, as they operate on only a single path.
• Most often, they find a reasonable solution in large or infinite state
spaces where the classical or systematic algorithms do not work.
• An objective function is a function whose value is either minimized or
maximized in different contexts of optimization problems. In the
case of search algorithms, an objective function can be the path cost
for reaching the goal node, etc.
• Working on a Local search algorithm
• Consider the below state-space landscape having both:
• Location: It is defined by the state.
• Elevation: It is defined by the value of the objective function or
heuristic cost function.
• The local search algorithm explores the above landscape by finding
the following two points:
• Global Minimum: If the elevation corresponds to a cost, then the
task is to find the lowest valley, which is known as the Global Minimum.
• Global Maximum: If the elevation corresponds to an objective function,
then the task is to find the highest peak, which is called the Global
Maximum.
• Note: Local search algorithms are not burdened with remembering all the
nodes in memory; they operate on a complete-state formulation.
• Hill Climbing Algorithm: Hill climbing search is a local search
problem. The purpose of the hill climbing search is to climb a
hill and reach the topmost peak/ point of that hill. It is based
on the heuristic search technique where the person who is
climbing up on the hill estimates the direction which will lead
him to the highest peak.
• State-space Landscape of Hill climbing algorithm
• Global Maximum: It is the highest point on the hill, which is
the goal state.
• Local Maximum: It is a peak that is higher than its neighboring states but
lower than the global maximum.
• Flat local maximum: It is the flat area over the hill where it
has no uphill or downhill. It is a saturated point of the hill.
• Shoulder: It is also a flat area where the summit is possible.
• Current state: It is the current position of the person.
• Simple hill-climbing search
• Simple hill climbing is the simplest technique to climb a hill. The
task is to reach the highest peak of the mountain. Here, the
movement of the climber depends on his moves/steps. If he finds
his next step better than the previous one, he continues to move;
else he remains in the same state. This search focuses only on his
previous and next step.
• Algorithm (a minimal code sketch follows the steps):
1. Create a CURRENT node, a NEIGHBOUR node, and a GOAL node.
2. If the CURRENT node = GOAL node, return GOAL and terminate
the search.
3. Else, if the NEIGHBOUR node is better than the CURRENT node, set
CURRENT <- NEIGHBOUR and move ahead.
4. Loop until the goal is reached or no better point can be found.
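A minimal sketch of the steps above, assuming caller-supplied neighbours() and value() functions:

def simple_hill_climbing(initial, neighbours, value):
    current = initial
    while True:
        improved = False
        for candidate in neighbours(current):
            if value(candidate) > value(current): # first better move wins
                current, improved = candidate, True
                break
        if not improved:                          # no better neighbour: stop
            return current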
The key point while solving any hill-climbing problem is to choose an appropriate heuristic function.

h(x) = +1 for all the blocks in the support structure if the block is correctly
positioned, otherwise -1 for all the blocks in the support structure.
Disadvantages
• Hill Climbing is a short-sighted technique, as it evaluates only immediate
possibilities. So it may end up in a few situations from which it cannot pick
any further states. Let's look at these states and some solutions for them:
1. Local maximum: It's a state which is better than all neighbors, but there
exists a better state which is far from the current state; if local maximum
occurs within sight of the solution, it is known as “foothills”
2. Plateau: In this state, all neighboring states have the same heuristic values,
so it is unclear which state to choose next by making local comparisons
3. Ridge: It's an area which is higher than the surrounding states, but it cannot
be reached in a single move; for example, when we have four possible
directions to explore (N, E, W, S) and the higher area lies in the NE direction
• There are a few solutions to overcome these situations:
1. We can backtrack to one of the previous states and explore other
directions
2. We can skip a few states and make a jump in a new direction
3. We can explore several directions to figure out the correct path
Steepest Ascent Hill Climbing
• This differs from the basic Hill climbing algorithm by choosing the best successor rather than
the first successor that is better. This indicates that it has elements of the breadth-first algorithm.
• 1. Evaluate the initial state.
• 2. If it is a goal state, then quit;
• otherwise make the initial state the current state and proceed.
• 3. Repeat
• set TARGET to a state such that any successor of the current state will be better than it;
• for each operator that can be applied to the current state:
• apply the operator and create a new state
• evaluate this state
• If this state is the goal state, then quit
• Otherwise compare it with TARGET
• If better, set TARGET to this state
• If TARGET is better than the current state, set the current state to TARGET
• Until a solution is found or the current state does not change. (A code sketch of this variant follows.)
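The same helpers assumed in the simple hill-climbing sketch can express this variant, which inspects every successor before moving:

def steepest_ascent(initial, neighbours, value):
    current = initial
    while True:
        successors = list(neighbours(current))
        if not successors:
            return current
        best = max(successors, key=value)         # TARGET: the best successor
        if value(best) <= value(current):         # no improvement: stop
            return current
        current = best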
• Both the basic and STEEPEST methods of hill climbing may fail to find a
solution by reaching a state from which no subsequent improvement can be
made, and this state is not the solution.
• Local maximum state is a state that is better than its neighbors but not
better than faraway states. These are often known as foothills.
• Plateau states are states which have approximately the same value and it is
not clear in which direction to move in order to reach the solution.
• Ridge states are special types of local maximum states. The surrounding
area is basically unfriendly and makes it difficult to escape in single
steps, so the path peters out when surrounded by ridges.
• Escape relies on: backtracking to a previous good state and proceeding in a
completely different direction, which involves keeping a record of the current
path from the outset; making a giant leap forward to a different part of the
search space, perhaps by applying a sensible small step repeatedly (good for
plateaus);
• applying more than one rule before testing, which is good for ridges.
• None of these escape strategies can guarantee success.
Stochastic Hill Climbing
• Stochastic Hill climbing is an optimization algorithm.
• It makes use of randomness as part of the search process. This
makes the algorithm appropriate for nonlinear objective functions
where other local search algorithms do not operate well.
• The algorithm takes the initial point as the current best candidate
solution and generates a new point within the step size distance of
the provided point. The generated point is evaluated, and if it is
equal or better than the current point, it is taken as the current
point.
• Because the generation of the new point uses randomness, the method is
often referred to as Stochastic Hill Climbing. This randomness means the
algorithm can skip over bumpy, noisy, discontinuous, or deceptive regions
of the response surface as part of the search.
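A hedged one-dimensional sketch of this accept-if-no-worse loop (the objective, step size, and iteration budget are illustrative assumptions):

import random

def stochastic_hill_climbing(objective, x0, step_size=0.1, iterations=1000):
    x = x0
    for _ in range(iterations):
        candidate = x + random.uniform(-step_size, step_size)
        if objective(candidate) >= objective(x):  # equal or better: accept
            x = candidate
    return x

# Example: maximize -(x - 3)^2; the optimum is at x = 3
print(stochastic_hill_climbing(lambda x: -(x - 3) ** 2, x0=0.0))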
• Drawbacks of Stochastic hill climbing
• As a local search algorithm, it can get stuck in local optima.
Nevertheless, multiple restarts may allow the algorithm to locate the
global optimum.
• The step size must be large enough to allow better nearby points in
the search space to be located, but not so large that the search jumps
out of the region that contains the local optima.
Implementation of AI algorithms
• DFS
• BFS
• UCS
• BEST FIRST SEARCH
• A* (including a proof that A* is admissible)
• HILL CLIMBING // all versions (simple + steepest + stochastic + random restart)
Random Restart Hill climbing Algorithms
• "Random-restart hill-climbing conducts a series of
hill-climbing searches from randomly generated initial
states, running each until it halts or makes no discernible
progress" (Russell & Norvig, 2003).
• https://www.youtube.com/watch?v=WxsEc1uEXRE
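A hedged wrapper expressing this idea around any of the earlier hill-climbing sketches (climb, random_state, and value are caller-supplied assumptions):

def random_restart(climb, random_state, value, restarts=20):
    best = None
    for _ in range(restarts):
        result = climb(random_state())            # one full hill-climbing run
        if best is None or value(result) > value(best):
            best = result                         # keep the best local optimum
    return best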
Python function // Lab 2

for x in range(3):
    print('Hello!')
# Prints Hello! Hello! Hello!

range() takes the forms range(stop), range(start, stop), and range(start, stop, step).

range(stop): generate a sequence of numbers from 0 to 6.

for x in range(7):
    print(x)
# Prints 0 1 2 3 4 5 6

range(start, stop): generate a sequence of numbers from 2 to 6.

for x in range(2, 7):
    print(x)
# Prints 2 3 4 5 6

range(start, stop, step): the range increments by 1 by default. However, you can
specify a different increment by adding a step parameter.

# Increment the range() with 2
for x in range(2, 7, 2):
    print(x)
# Prints 2 4 6
Constraint Satisfaction Problems(CSP):
Definition & Examples
• Consider a Sudoku game with some numbers filled initially in some
squares. You are expected to fill the empty squares with numbers
ranging from 1 to 9 in such a way that no row, column, or block has a
number repeating itself. This is a very basic constraint satisfaction
problem. You are supposed to solve a problem keeping in mind some
constraints.
• The remaining squares that are to be filled are known as variables,
and the range of numbers (1-9) that can fill them is known as a
domain. Variables take on values from the domain.
• The conditions governing how a variable will choose its domain are
known as constraints.
• A constraint satisfaction problem (CSP) is a problem that
requires its solution within some limitations or conditions
also known as constraints. It consists of the following:
• A finite set of variables that store the solution (V = {V1, V2, V3, ..., Vn})
• A set of discrete values, known as the domain, from which the
solution is picked (D = {D1, D2, D3, ..., Dn})
• A finite set of constraints (C = {C1, C2, C3, ..., Cn})
• Please note that the elements in the domain can be both continuous
and discrete, but in AI we generally deal only with discrete values.
Popular Problems with CSP
• CryptArithmetic (Coding alphabets to numbers.)
• n-Queen (In an n-queen problem, n queens should be placed in an
nXn matrix such that no queen shares the same row, column or
diagonal.)
• Map Coloring (coloring different regions of a map, ensuring no adjacent
regions have the same color; a backtracking sketch for this appears after the list)
• Crossword (everyday puzzles appearing in newspapers)
• Sudoku (a number grid)
• Latin Square Problem
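A hedged backtracking-search sketch for a toy map-coloring CSP; the three-region adjacency below is an invented example, not from the slides:

def backtracking_search(assignment, variables, domains, neighbours):
    if len(assignment) == len(variables):         # every variable assigned
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # constraint: adjacent regions must not share a color
        if all(assignment.get(n) != value for n in neighbours[var]):
            assignment[var] = value
            result = backtracking_search(assignment, variables,
                                         domains, neighbours)
            if result is not None:
                return result
            del assignment[var]                   # backtrack
    return None

variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbours = {"WA": ["NT"], "NT": ["WA", "SA"], "SA": ["NT"]}
print(backtracking_search({}, variables, domains, neighbours))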
Cryptarithmetic problem
• CROSS+ROADS=DANGER
• LET + LEE = ALL , then A + L + L = ?
• Rules for Solving Cryptarithmetic Problems
• Each Letter, Symbol represents only one digit throughout the
problem.
• Numbers must not begin with zero, i.e., 0567 (wrong), 567 (correct).
• The aim is to find the value of each letter in the cryptarithmetic
problem.
• There must be only one solution to the cryptarithmetic problem.
• The numerical base, unless specifically stated, is 10.
• After replacing letters with their digits, the resulting arithmetic
operations must be correct.
• A carry-over can only be 1 in cryptarithmetic problems (when two
numbers are added).
https://www.faceprep.in/logical-reasoning/cryptarithmetic-problems/
Question: Assume E = 5. If KANSAS + OHIO = OREGON, then find the value
of G + R + O + S + S. (Options: 1. 7, 2. 8, 3. 9, 4. 10)

Worked example: LET + LEE = ALL, then A + L + L = ? (https://www.youtube.com/watch?v=VQ4gsqYBv1s)

With L = 1, E = 5, T = 6:

  1 5 6
+ 1 5 5
-------
  3 1 1

So A = 3 and L = 1, giving A + L + L = 3 + 1 + 1 = 5, which is the digit E.
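As a small illustration, the LET + LEE = ALL puzzle can be checked by brute force over distinct digit assignments (a sketch, not the intended pencil-and-paper method):

from itertools import permutations

for L, E, T, A in permutations(range(10), 4):
    if L == 0 or A == 0:                          # numbers must not begin with zero
        continue
    let = 100 * L + 10 * E + T
    lee = 100 * L + 10 * E + E
    all_ = 100 * A + 10 * L + L
    if let + lee == all_:                         # the arithmetic must be correct
        print(f"L={L} E={E} T={T} A={A}: {let} + {lee} = {all_}")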
Game Playing in Artificial Intelligence
• Game Playing is an important domain of artificial intelligence. Games
don’t require much knowledge; the only knowledge we need to
provide is the rules, legal moves, and the conditions of winning or
losing the game.
• Both players try to win the game, so both of them try to make the
best move possible at each turn. Searching techniques like
BFS (breadth-first search) are not suitable for this, as the branching
factor is very high, so searching would take a lot of time. So, we need
other search procedures that improve the:
• Generate procedure, so that only good moves are generated.
• Test procedure, so that the best moves can be explored first.
Minimax search procedure
• The most common search technique in game playing.
• It is a backtracking algorithm; its time complexity is O(b^m) and its
space complexity is O(bm), where b is the branching factor and m is
the maximum depth of the game tree.
• It is a depth-first, depth-limited search procedure. It is used for games
like chess and tic-tac-toe.
• Minimax algorithm uses two functions –
• MOVEGEN: It generates all the possible moves that can be generated
from the current position.
• STATICEVALUATION: It returns a value indicating the goodness of a
position from the viewpoint of the two players.
• https://www.youtube.com/watch?v=Ntu8nNBL28o
• This algorithm applies to two-player games, so we call the first player
PLAYER1 and the second player PLAYER2. The value of each node is
backed up from its children. For PLAYER1 the backed-up value is the
maximum value of its children, and for PLAYER2 the backed-up value is
the minimum value of its children. It provides the most promising
move to PLAYER1, assuming that PLAYER2 has made the best move. It
is a recursive algorithm, as the same procedure occurs at each level.
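A hedged recursive sketch over an explicit game tree, where an inner node is a list of subtrees and a leaf is its STATICEVALUATION value (this nested-list encoding is an assumption for illustration):

def minimax(node, maximizing):
    if isinstance(node, (int, float)):            # leaf: static evaluation value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two-ply example with MAX to move at the root:
print(minimax([[3, 5], [2, 9]], True))            # MIN yields 3 and 2; MAX picks 3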
We assume that PLAYER1 will start the game. Four levels are generated; the values of nodes H, I, J, K, L, M, N, and O are provided by the STATICEVALUATION function. Level 3 is a maximizing level, so all nodes of level 3 take the maximum values of their children. Level 2 is a minimizing level, so all its nodes take the minimum values of their children. This process continues up the tree. The value of A is 23; that means A should choose the C move to win. (The example refers to the game tree in the accompanying figure.)
Alpha-Beta Pruning
• Alpha-Beta pruning is not actually a new algorithm, but rather an
optimization technique for the minimax algorithm. It reduces the
computation time by a huge factor. This allows us to search much
faster and even go into deeper levels in the game tree. It cuts off
branches in the game tree which need not be searched because there
already exists a better move available.
• It is called Alpha-Beta pruning because it passes 2 extra parameters in
the minimax function, namely alpha and beta.
• Alpha is the best value that the maximizer currently can guarantee at
that level or above.
• Beta is the best value that the minimizer currently can guarantee at
that level or below.
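Using the same nested-list game tree assumed in the minimax sketch, alpha-beta pruning can be illustrated as follows:

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):            # leaf: static evaluation value
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)             # best guarantee for MAX so far
            if beta <= alpha:                     # cut-off: MIN avoids this branch
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)                   # best guarantee for MIN so far
        if beta <= alpha:                         # cut-off: MAX avoids this branch
            break
    return value

print(alphabeta([[3, 5], [2, 9]], float("-inf"), float("inf"), True))   # prints 3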
