Unit No 2 (Problem Solving by Intelligent Search)

This document discusses artificial intelligence problem solving through intelligent search. It begins by defining AI and its focus on learning, reasoning and self-correction processes. It then provides examples of AI problems including Google Maps, face detection, text editors, chatbots and online payments. It defines state space search as a technique for solving AI problems, where the state space represents all possible problem states. Finally, it provides a partial example of a state space search tree for a Tic Tac Toe game up to a depth of 3 or 4 moves.

Uploaded by

Pankaj Haldikar

Artificial Intelligence

Unit No: 02
Problem Solving by Intelligent Search
1. With example discuss the nature of AI problems.

The term "Artificial Intelligence" (AI) refers to the simulation of human intelligence processes by
machines, especially computer systems. The field includes expert systems, voice recognition,
machine vision, and natural language processing (NLP).

AI programming focuses on three cognitive processes: learning, reasoning, and self-
correction.

o Learning Processes
o Reasoning Processes
o Self-correction Processes

Learning Processes

This part of AI programming is concerned with gathering data and creating rules for
transforming it into useful information. The rules, also called algorithms, provide
computing devices with step-by-step instructions for accomplishing a particular task.

Reasoning Processes

This part of AI programming is concerned with selecting the best algorithm to achieve the
desired result.

Self-Correction Processes

This part of AI programming aims to fine-tune algorithms regularly in order to ensure that
they offer the most reliable results possible.

Artificial Intelligence is an extensive field of computer science focused on developing
intelligent machines capable of performing activities that would normally require human
intelligence. While AI is a multidisciplinary science with numerous methodologies, advances
in deep learning and machine learning are creating a paradigm shift in almost every area of
technology.
2. List example AI problems. Explain any one of them.

The following are examples of AI applications:

1. Google Maps and Ride-Hailing Applications


2. Face Detection and recognition
3. Text Editors and Autocorrect
4. Chatbots
5. E-Payments

1. Google Maps and Ride-Hailing Applications

Travelling to a new destination no longer requires much planning. Rather than relying on
confusing address directions, we can now simply open our phone's map app and type in our
destination.

So how does the app know the appropriate directions, the best route, and even the presence
of roadblocks and traffic jams? A few years ago, only GPS (satellite-based navigation) was
used as a navigation guide. However, artificial intelligence (AI) now provides users with a
much better experience of their unique surroundings.

The app's algorithm uses machine learning to remember the edges of buildings that are fed
into the system after a person has manually verified them. This enables the map to provide
simple visuals of buildings. Another feature is identifying and understanding handwritten
house numbers, which assists travelers in finding the exact house they need. Locations that
lack formal street signs can also be recognized by their outlines or handwritten labels.

The application has also been trained to recognize and understand traffic. As a result, it
suggests the best route to avoid traffic congestion and bottlenecks. The AI-based algorithm
also informs users of the precise distance and the time it will take to arrive at their
destination, calculated from current traffic conditions. Several ride-hailing applications
have emerged as a result of similar AI technology, so whenever you book a cab via an app
by placing your location on a map, this is how it works.

2. Face Detection and Recognition

Using Face ID to unlock our phones and applying virtual filters to our faces while taking
pictures are two uses of AI that are now part of our daily lives.

The latter uses face detection, which can locate any human face; the former uses face
recognition, which identifies one particular face.

How does it work?

Intelligent machines often match, and in some cases even exceed, human performance.
Human babies begin by identifying facial features such as eyes, lips, nose, and face shape,
but a face is more than just that: a number of characteristics distinguish human faces. Smart
machines are trained to recognize facial coordinates (x, y, w, and h, which form a rectangle
around the face as a region of interest), landmarks (nose, eyes, etc.), and alignment
(geometric structure). This improves on the human ability to identify faces by several
factors. Face recognition is also used by government facilities and at airports for
monitoring and security.

3. Text Editors or Autocorrect

When typing a document, there are built-in or downloadable auto-correcting tools that
check spelling, readability, grammar, and plagiarism.

It took us a long time to master our language and become fluent in it. Artificially
intelligent algorithms use deep learning, machine learning, and natural language processing
to detect incorrect language use and recommend improvements. Linguists and computer
scientists collaborate to teach machines grammar in the same way that we learned it in
school. Machines are fed large volumes of high-quality data that has been structured in a
way they can understand. Thus, when we misplace a single comma, the editor highlights it
in red and offers suggestions.

4. Chatbots

Answering customers' inquiries can take a long time. An artificially intelligent solution to
this problem is to use algorithms to train machines to meet customer needs through
chatbots, which allows machines to answer questions as well as take and track orders.

Chatbots are trained using Natural Language Processing (NLP) to imitate the
conversational style of customer service agents. Advanced chatbots no longer require
restrictive input formats (such as yes/no questions); they are capable of responding to
complex questions that require comprehensive answers, so they appear to be a human
customer representative. If you give a negative rating to a response, the bot identifies what
went wrong and corrects it the next time, ensuring that you get the best possible service.

5. Online-Payments

Rushing to the bank for every transaction can be a time-consuming errand. The good news
is that banks now use Artificial Intelligence to support customers by simplifying the
payment process.

Artificial intelligence has made it possible to deposit cheques from the comfort of our own
homes, since AI is capable of deciphering handwriting and making online cheque processing
practicable. AI can also be used to detect fraud by observing a consumer's credit-card
spending patterns. For example, the algorithms know what items user X purchases, when
and where they are purchased, and in what price range. If there is suspicious behaviour
that does not match the user's profile, the system immediately alerts user X.
3. What is state space search? Discuss its components with example.
A state is a representation of problem elements at a given moment.
A State space is the set of all states reachable from the initial state.
A state space forms a graph in which the nodes are states and the arcs between nodes are
actions.
In the state space, a path is a sequence of states connected by a sequence of actions.
The solution of a problem is part of the graph formed by the state space.

The state space representation forms the basis of most AI methods.
Its structure corresponds to the structure of problem solving in two important ways:
1. It allows for a formal definition of a problem as the need to convert a given situation
into a desired situation using a set of permissible operations.
2. It permits the problem to be solved with the help of known techniques and control
strategies that move through the problem space until a goal state is found.
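As a small illustration of these definitions, the state space reachable from an initial state can be enumerated by repeatedly applying actions. The toy problem below (integer states with the two actions "+1" and "*2", capped at 20) is hypothetical, chosen only to keep the sketch short:

```java
import java.util.*;

public class StateSpaceDemo {
    // Enumerate every state reachable from the initial state: each node is a
    // state, each arc is an action ("+1" or "*2"), exactly as defined above.
    static Set<Integer> stateSpace(int initial) {
        Set<Integer> visited = new LinkedHashSet<>();
        Deque<Integer> frontier = new ArrayDeque<>();
        frontier.add(initial);
        while (!frontier.isEmpty()) {
            int s = frontier.poll();
            if (s > 20 || !visited.add(s)) continue; // skip out-of-bound or already-seen states
            frontier.add(s + 1);  // arc for action "+1"
            frontier.add(s * 2);  // arc for action "*2"
        }
        return visited;
    }

    public static void main(String[] args) {
        System.out.println(stateSpace(1)); // the full state space reachable from state 1
    }
}
```

A path such as 1 -> 2 -> 4 -> 8 is then a sequence of states connected by a sequence of actions, and a solution is such a path ending in a goal state.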

4. Design a partial (tree depth 3 or 4) state space search for the given Tic
Tac Toe game.
Prerequisites: Minimax Algorithm in Game Theory, Evaluation Function in Game Theory
Let us combine what we have learnt so far about minimax and evaluation functions to write
a proper Tic-Tac-Toe AI (Artificial Intelligence) that plays a perfect game. This AI
considers all possible scenarios and makes the most optimal move.

Finding the Best Move :


We shall be introducing a new function called findBestMove(). This function evaluates all
the available moves using minimax() and then returns the best move the maximizer can
make. The pseudocode is as follows :

function findBestMove(board):
    bestMove = NULL
    for each move in board:
        if current move is better than bestMove:
            bestMove = current move
    return bestMove

Minimax:
To check whether or not the current move is better than the best move, we take the help
of the minimax() function, which considers all the possible ways the game can go and
returns the best value for that move, assuming the opponent also plays optimally.
The code for the maximizer and minimizer in the minimax() function is similar
to findBestMove(); the only difference is that instead of returning a move, it returns a
value. Here is the pseudocode:

function minimax(board, depth, isMaximizingPlayer):
    if current board state is a terminal state:
        return value of the board

    if isMaximizingPlayer:
        bestVal = -INFINITY
        for each move in board:
            value = minimax(board, depth+1, false)
            bestVal = max(bestVal, value)
        return bestVal
    else:
        bestVal = +INFINITY
        for each move in board:
            value = minimax(board, depth+1, true)
            bestVal = min(bestVal, value)
        return bestVal

Checking for the game-over state:

To check whether the game is over and to make sure there are moves left, we use the
isMovesLeft() function. It is a simple, straightforward function that checks whether a
move is available and returns true or false accordingly. The pseudocode is as follows:
function isMovesLeft(board):
    for each cell in board:
        if current cell is empty:
            return true
    return false

Making our AI smarter :


One final step is to make our AI a little smarter. Even though the AI above plays
perfectly, it might choose a move that results in a slower victory or a faster loss. Let us
take an example.
Assume that there are 2 possible ways for X to win the game from a given board state:
 Move A: X can win in 2 moves
 Move B: X can win in 4 moves
Our evaluation function returns a value of +10 for both moves A and B. Even though
move A is better because it ensures a faster victory, our AI may sometimes choose B.
To overcome this problem we subtract the depth value from the evaluated score. This
means that in case of a victory it chooses the victory that takes the least number of
moves, and in case of a loss it tries to prolong the game and play as many moves as
possible. The new evaluated values are:
 Move A will have a value of +10 - 2 = 8
 Move B will have a value of +10 - 4 = 6
Since move A now has a higher score than move B, our AI chooses move A over
move B. The same idea applies to the minimizer, except that we add the depth value
instead of subtracting it, because the minimizer always tries to get as negative a value as
possible. We can adjust for depth either inside the evaluation function or outside it;
here it is done outside. The pseudocode is as follows:
if maximizer has won:
    return WIN_SCORE - depth

else if minimizer has won:
    return LOSE_SCORE + depth
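Putting the pieces together, here is a minimal runnable sketch of the approach described above. The class name, board encoding ('x', 'o', '_'), and score constants (+10/-10) are illustrative choices, not fixed by the text:

```java
public class TicTacToeMinimax {
    static final char PLAYER = 'x', OPPONENT = 'o', EMPTY = '_';

    static boolean isMovesLeft(char[][] b) {
        for (char[] row : b) for (char c : row) if (c == EMPTY) return true;
        return false;
    }

    // Evaluation function: +10 if PLAYER has three in a row, -10 for OPPONENT, 0 otherwise.
    static int evaluate(char[][] b) {
        for (int r = 0; r < 3; r++)
            if (b[r][0] == b[r][1] && b[r][1] == b[r][2] && b[r][0] != EMPTY)
                return b[r][0] == PLAYER ? 10 : -10;
        for (int c = 0; c < 3; c++)
            if (b[0][c] == b[1][c] && b[1][c] == b[2][c] && b[0][c] != EMPTY)
                return b[0][c] == PLAYER ? 10 : -10;
        if (b[0][0] == b[1][1] && b[1][1] == b[2][2] && b[0][0] != EMPTY)
            return b[0][0] == PLAYER ? 10 : -10;
        if (b[0][2] == b[1][1] && b[1][1] == b[2][0] && b[0][2] != EMPTY)
            return b[0][2] == PLAYER ? 10 : -10;
        return 0;
    }

    static int minimax(char[][] b, int depth, boolean isMax) {
        int score = evaluate(b);
        if (score == 10) return score - depth;   // subtract depth: prefer faster wins
        if (score == -10) return score + depth;  // add depth: prefer slower losses
        if (!isMovesLeft(b)) return 0;           // draw
        int best = isMax ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                if (b[r][c] == EMPTY) {
                    b[r][c] = isMax ? PLAYER : OPPONENT;
                    int val = minimax(b, depth + 1, !isMax);
                    best = isMax ? Math.max(best, val) : Math.min(best, val);
                    b[r][c] = EMPTY;             // undo the move
                }
        return best;
    }

    static int[] findBestMove(char[][] b) {
        int bestVal = Integer.MIN_VALUE;
        int[] bestMove = {-1, -1};
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                if (b[r][c] == EMPTY) {
                    b[r][c] = PLAYER;
                    int val = minimax(b, 0, false);
                    b[r][c] = EMPTY;
                    if (val > bestVal) { bestVal = val; bestMove = new int[]{r, c}; }
                }
        return bestMove;
    }

    public static void main(String[] args) {
        char[][] board = {                       // a hypothetical mid-game position, X to move
            {'x', 'o', 'x'},
            {'o', 'o', 'x'},
            {'_', '_', '_'}
        };
        int[] m = findBestMove(board);
        System.out.println("Best move: (" + m[0] + ", " + m[1] + ")"); // prints "Best move: (2, 2)"
    }
}
```

For this position the search picks the immediate winning square (2, 2), completing X's right-hand column, rather than merely blocking O, which is exactly the faster-win preference the depth adjustment provides.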

5. How can the water jug problem be represented as a state space search?
Give a partial structure.
Problem: You are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any
measuring marks on it. There is a pump that can be used to fill the jugs with water. How
can you get exactly 2 gallons of water into the 4-gallon jug?

Solution:
The state space for this problem can be described as the set of ordered pairs of integers (X,Y),
where:
X represents the quantity of water in the 4-gallon jug, X = 0, 1, 2, 3, 4
Y represents the quantity of water in the 3-gallon jug, Y = 0, 1, 2, 3
Start State: (0,0)
Goal State: (2,0)
The production rules for the water jug problem are as follows.

Production Rules:
Rule 1: (X,Y | X<4) -> (4,Y)  {Fill the 4-gallon jug}
Rule 2: (X,Y | Y<3) -> (X,3)  {Fill the 3-gallon jug}
Rule 3: (X,Y | X>0) -> (0,Y)  {Empty the 4-gallon jug}
Rule 4: (X,Y | Y>0) -> (X,0)  {Empty the 3-gallon jug}
Rule 5: (X,Y | X+Y>=4 ^ Y>0) -> (4, Y-(4-X))  {Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full}
Rule 6: (X,Y | X+Y>=3 ^ X>0) -> (X-(3-Y), 3)  {Pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full}
Rule 7: (X,Y | X+Y<=4 ^ Y>0) -> (X+Y, 0)  {Pour all water from the 3-gallon jug into the 4-gallon jug}
Rule 8: (X,Y | X+Y<=3 ^ X>0) -> (0, X+Y)  {Pour all water from the 4-gallon jug into the 3-gallon jug}
Rule 9: (0,2) -> (2,0)  {Pour the 2 gallons from the 3-gallon jug into the 4-gallon jug}

Initialization:
Start State: (0,0)
Apply Rule 2: (X,Y | Y<3) -> (X,3)  {Fill the 3-gallon jug}
Now the state is (0,3)

Iteration 1:
Current State: (0,3)
Apply Rule 7: (X,Y | X+Y<=4 ^ Y>0) -> (X+Y,0)  {Pour all water from the 3-gallon jug into the 4-gallon jug}
Now the state is (3,0)

Iteration 2:
Current State: (3,0)
Apply Rule 2: (X,Y | Y<3) -> (X,3)  {Fill the 3-gallon jug}
Now the state is (3,3)

Iteration 3:
Current State: (3,3)
Apply Rule 5: (X,Y | X+Y>=4 ^ Y>0) -> (4,Y-(4-X))  {Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full}
Now the state is (4,2)

Iteration 4:
Current State: (4,2)
Apply Rule 3: (X,Y | X>0) -> (0,Y)  {Empty the 4-gallon jug}
Now the state is (0,2)

Iteration 5:
Current State: (0,2)
Apply Rule 9: (0,2) -> (2,0)  {Pour the 2 gallons from the 3-gallon jug into the 4-gallon jug}
Now the state is (2,0)

Goal Achieved.
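The trace above can also be generated automatically. Below is a minimal sketch (class and method names are my own) that searches the same (X,Y) state space breadth-first and prints the path from (0,0) to (2,0):

```java
import java.util.*;

public class WaterJug {
    // BFS over states (x, y), x in 0..4 and y in 0..3; returns the path (0,0) -> ... -> (2,0).
    static List<int[]> solve() {
        Deque<int[]> queue = new ArrayDeque<>();
        Map<String, String> parent = new HashMap<>();   // child state -> parent state
        queue.add(new int[]{0, 0});
        parent.put("0,0", null);
        while (!queue.isEmpty()) {
            int[] s = queue.poll();
            int x = s[0], y = s[1];
            if (x == 2 && y == 0) {                     // goal state reached: rebuild the path
                List<int[]> path = new ArrayList<>();
                String k = "2,0";
                while (k != null) {
                    String[] p = k.split(",");
                    path.add(0, new int[]{Integer.parseInt(p[0]), Integer.parseInt(p[1])});
                    k = parent.get(k);
                }
                return path;
            }
            int pour43 = Math.min(x, 3 - y);            // amount movable from the 4-gal to the 3-gal jug
            int pour34 = Math.min(y, 4 - x);            // amount movable from the 3-gal to the 4-gal jug
            int[][] next = {
                {4, y}, {x, 3},                         // fill either jug
                {0, y}, {x, 0},                         // empty either jug
                {x - pour43, y + pour43},               // pour 4-gal into 3-gal
                {x + pour34, y - pour34}                // pour 3-gal into 4-gal
            };
            for (int[] n : next) {
                String key = n[0] + "," + n[1];
                if (!parent.containsKey(key)) {         // visit each state only once
                    parent.put(key, x + "," + y);
                    queue.add(n);
                }
            }
        }
        return Collections.emptyList();
    }

    public static void main(String[] args) {
        for (int[] s : solve()) System.out.println("(" + s[0] + "," + s[1] + ")");
    }
}
```

With this successor ordering the search recovers the same seven-state path as the hand trace: (0,0), (0,3), (3,0), (3,3), (4,2), (0,2), (2,0).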

State Space Tree: (figure not reproduced here)

6. Elaborate the procedure of AI problem solving.


A reflex agent in AI directly maps states to actions. When such an agent cannot operate in
an environment because the state-to-action mapping is too large to store or compute, the
problem is handed over to a problem-solving agent, which breaks the large problem into
smaller sub-problems and solves them one by one. The final integrated sequence of actions
produces the desired outcome.
Depending on the problem and its working domain, different types of problem-solving
agents are defined and used at an atomic level, without any visible internal state, together
with a problem-solving algorithm. The problem-solving agent works precisely by defining
the problem and its possible solutions. So we can say that problem solving is a part of
artificial intelligence that encompasses a number of techniques, such as trees, B-trees, and
heuristic algorithms, to solve a problem. We can also say that a problem-solving agent is a
goal-driven agent that always focuses on satisfying its goals.
Steps of problem solving in AI: AI problems are directly associated with human nature
and human activities, so we need a finite number of steps to solve a problem in a way that
makes human work easier.
The following steps are required to solve a problem:
 Goal formulation: This is the first and simplest step in problem solving. It organizes
the finite steps needed to formulate a target/goal that requires some action to achieve.
Today, goal formulation is carried out by AI agents.
 Problem formulation: This is one of the core steps of problem solving; it decides
what actions should be taken to achieve the formulated goal. In AI this core part
depends on a software agent, which uses the following components to formulate the
associated problem.
Components used to formulate the associated problem:
 Initial state: The state from which the AI agent starts working towards the specified
goal.
 Actions: The set of possible actions available to the agent in a given state.
 Transition model: A description of what each action does, i.e., the state that results
from performing an action in a given state.
 Goal test: Determines whether the specified goal has been achieved via the transition
model; once the goal is achieved, the agent stops acting and moves on to determining
the cost of achieving the goal.
 Path cost: A numeric cost assigned to each path, measuring what it will cost to
achieve the goal, including hardware, software, and human working costs.
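The five components above can be written down as a small interface. The sketch below is illustrative (the interface, its names, and the tiny "count to 5" problem are my own, not a standard library), but it shows a complete, if trivial, problem formulation:

```java
import java.util.*;

public class ProblemFormulation {
    /** The five components of a problem formulation, as listed above. */
    interface Problem<S, A> {
        S initialState();                 // Initial state
        List<A> actions(S state);         // Actions available in a state
        S result(S state, A action);      // Transition model
        boolean goalTest(S state);        // Goal test
        double stepCost(S state, A a);    // Path cost (per step)
    }

    /** Hypothetical toy problem: reach the number 5 starting from 0, stepping by +1 or +2. */
    static Problem<Integer, Integer> countTo5 = new Problem<>() {
        public Integer initialState() { return 0; }
        public List<Integer> actions(Integer s) { return List.of(1, 2); }
        public Integer result(Integer s, Integer a) { return s + a; }
        public boolean goalTest(Integer s) { return s == 5; }
        public double stepCost(Integer s, Integer a) { return 1.0; }
    };

    public static void main(String[] args) {
        // Apply a fixed sequence of actions through the transition model, then test the goal.
        Integer s = countTo5.initialState();
        for (int a : new int[]{2, 2, 1}) s = countTo5.result(s, a);
        System.out.println("Goal reached: " + countTo5.goalTest(s)); // prints "Goal reached: true"
    }
}
```

Any of the search algorithms in the following questions can, in principle, be driven by such an interface: they only need the initial state, the successor states, and the goal test.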

7. Compare and contrast blind search and heuristic search techniques.


Uninformed/Blind Search:
An uninformed search does not use any domain knowledge, such as the closeness or
location of the goal. It operates in a brute-force way, as it only includes information about
how to traverse the tree and how to identify leaf and goal nodes. Uninformed search
traverses the search tree without any information about the search space, such as initial
state operators and a test for the goal, which is why it is also called blind search. It
examines each node of the tree until it reaches the goal node.

It can be divided into five main types:


o Breadth-first search
o Uniform cost search
o Depth-first search
o Iterative deepening depth-first search
o Bidirectional Search
Informed Search
Informed search algorithms use domain knowledge. In an informed search, problem
information is available that can guide the search, so informed search strategies can find a
solution more efficiently than uninformed strategies. Informed search is also called
heuristic search.

A heuristic is a technique that is not guaranteed to find the best solution, but is guaranteed
to find a good solution in reasonable time.

Informed search can solve much more complex problems than could be solved otherwise;
the travelling salesman problem is a classic example where heuristics are needed.

Two common informed search algorithms are:

1. Greedy Search
2. A* Search

8. Consider a search tree of your choice and trace the depth first search algorithm.
Example of DFS algorithm
Now, let's understand the working of the DFS algorithm using an example. In the example
below, there is a directed graph with 8 vertices: H, A, B, C, D, E, F, and G (the figure is
not reproduced here).

Now, let's start examining the graph starting from node H.

Step 1 - First, push H onto the stack.


STACK: H

Step 2 - POP the top element from the stack, i.e., H, and print it. Now, PUSH all the
neighbors of H onto the stack that are in ready state.
Print: H
STACK: A

Step 3 - POP the top element from the stack, i.e., A, and print it. Now, PUSH all the
neighbors of A onto the stack that are in ready state.
Print: A
STACK: B, D
Step 4 - POP the top element from the stack, i.e., D, and print it. Now, PUSH all the
neighbors of D onto the stack that are in ready state.
Print: D
STACK: B, F

Step 5 - POP the top element from the stack, i.e., F, and print it. Now, PUSH all the
neighbors of F onto the stack that are in ready state.
Print: F
STACK: B

Step 6 - POP the top element from the stack, i.e., B, and print it. Now, PUSH all the
neighbors of B onto the stack that are in ready state.
Print: B
STACK: C

Step 7 - POP the top element from the stack, i.e., C, and print it. Now, PUSH all the
neighbors of C onto the stack that are in ready state.
Print: C
STACK: E, G

Step 8 - POP the top element from the stack, i.e., G and PUSH all the neighbors of G onto
the stack that are in ready state.
Print: G
STACK: E

Step 9 - POP the top element from the stack, i.e., E and PUSH all the neighbors of E onto
the stack that are in ready state.
Print: E
STACK:

Now, all the graph nodes have been traversed, and the stack is empty.

Complexity of Depth-first search algorithm

The time complexity of the DFS algorithm is O(V+E), where V is the number of vertices
and E is the number of edges in the graph.

The space complexity of the DFS algorithm is O(V).

Implementation of DFS algorithm

Now, let's see the implementation of DFS algorithm in Java.

In this example, the graph that we are using to demonstrate the code is given as follows -
/* A sample Java program to implement the DFS algorithm */

import java.util.*;

class DFSTraversal {
    private LinkedList<Integer> adj[];   /* adjacency list representation */
    private boolean visited[];

    /* Creation of the graph */
    DFSTraversal(int V) {                /* 'V' is the number of vertices in the graph */
        adj = new LinkedList[V];
        visited = new boolean[V];
        for (int i = 0; i < V; i++)
            adj[i] = new LinkedList<Integer>();
    }

    /* Adding an edge to the graph */
    void insertEdge(int src, int dest) {
        adj[src].add(dest);
    }

    void DFS(int vertex) {
        visited[vertex] = true;          /* mark the current node as visited */
        System.out.print(vertex + " ");

        Iterator<Integer> it = adj[vertex].listIterator();
        while (it.hasNext()) {
            int n = it.next();
            if (!visited[n])
                DFS(n);                  /* recurse into each unvisited neighbor */
        }
    }

    public static void main(String args[]) {
        DFSTraversal graph = new DFSTraversal(8);

        graph.insertEdge(0, 1);
        graph.insertEdge(0, 2);
        graph.insertEdge(0, 3);
        graph.insertEdge(1, 3);
        graph.insertEdge(2, 4);
        graph.insertEdge(3, 5);
        graph.insertEdge(3, 6);
        graph.insertEdge(4, 7);
        graph.insertEdge(4, 5);
        graph.insertEdge(5, 2);

        System.out.println("Depth First Traversal for the graph is:");
        graph.DFS(0);
    }
}

Output

Depth First Traversal for the graph is:
0 1 3 5 2 4 7 6
9. Consider a search tree of your choice and trace the breadth first search algorithm.
Example of BFS algorithm
Now, let's understand the working of the BFS algorithm using an example. In the example
below, there is a directed graph with 7 vertices (the figure is not reproduced here).
In this graph, the minimum path 'P' from node A to node E can be found using BFS. The
algorithm uses two queues, namely QUEUE1 and QUEUE2. QUEUE1 holds the nodes that
are yet to be processed, while QUEUE2 holds the nodes that have been processed and
deleted from QUEUE1.
Now, let's start examining the graph starting from node A.

Step 1 - First, add A to queue1 and NULL to queue2.


QUEUE1 = {A}
QUEUE2 = {NULL}

Step 2 - Now, delete node A from queue1 and add it into queue2. Insert all neighbors of
node A to queue1.

QUEUE1 = {B, D}
QUEUE2 = {A}

Step 3 - Now, delete node B from queue1 and add it into queue2. Insert all neighbors of
node B to queue1.

QUEUE1 = {D, C, F}
QUEUE2 = {A, B}

Step 4 - Now, delete node D from queue1 and add it into queue2. Insert all neighbors of
node D to queue1. The only neighbor of Node D is F since it is already inserted, so it will not
be inserted again.

QUEUE1 = {C, F}
QUEUE2 = {A, B, D}

Step 5 - Delete node C from queue1 and add it into queue2. Insert all neighbors of node C to
queue1.

QUEUE1 = {F, E, G}
QUEUE2 = {A, B, D, C}

Step 6 - Delete node F from queue1 and add it into queue2. Insert all neighbors of node F to
queue1. Since all the neighbors of node F are already present, we will not insert them again.

QUEUE1 = {E, G}
QUEUE2 = {A, B, D, C, F}

Step 7 - Delete node E from queue1 and add it into queue2. Since all of its neighbors have
already been added, we will not insert them again. The target node E has now been
encountered, and queue2 holds the traversal order.

QUEUE1 = {G}
QUEUE2 = {A, B, D, C, F, E}
Complexity of BFS algorithm

Time complexity of BFS depends upon the data structure used to represent the graph. The
time complexity of BFS algorithm is O(V+E), since in the worst case, BFS algorithm
explores every node and edge. In a graph, the number of vertices is O(V), whereas the
number of edges is O(E).

The space complexity of BFS can be expressed as O(V), where V is the number of vertices.

Implementation of BFS algorithm

Now, let's see the implementation of the BFS algorithm in Java.

In this code, we are using the adjacency list to represent our graph. Implementing the
Breadth-First Search algorithm in Java makes it much easier to deal with the adjacency list
since we only have to travel through the list of nodes attached to each node once the node is
dequeued from the head (or start) of the queue.

In this example, the graph that we are using to demonstrate the code is given as follows -

import java.io.*;
import java.util.*;

public class BFSTraversal {
    private int vertex;                  /* total number of vertices in the graph */
    private LinkedList<Integer> adj[];   /* adjacency list */
    private Queue<Integer> que;          /* maintaining a queue */

    BFSTraversal(int v) {
        vertex = v;
        adj = new LinkedList[vertex];
        for (int i = 0; i < v; i++) {
            adj[i] = new LinkedList<>();
        }
        que = new LinkedList<Integer>();
    }

    void insertEdge(int v, int w) {
        adj[v].add(w);                   /* adding a directed edge to the adjacency list */
    }

    void BFS(int n) {
        boolean nodes[] = new boolean[vertex];  /* visited flags for every vertex */
        int a = 0;
        nodes[n] = true;
        que.add(n);                      /* the start node is added to the queue */
        while (que.size() != 0) {
            n = que.poll();              /* remove the front element of the queue */
            System.out.print(n + " ");   /* print it */
            /* iterate through the adjacency list and push all unexplored neighbors */
            for (int i = 0; i < adj[n].size(); i++) {
                a = adj[n].get(i);
                if (!nodes[a]) {         /* only insert nodes that have not been explored already */
                    nodes[a] = true;
                    que.add(a);
                }
            }
        }
    }

    public static void main(String args[]) {
        BFSTraversal graph = new BFSTraversal(10);
        graph.insertEdge(0, 1);
        graph.insertEdge(0, 2);
        graph.insertEdge(0, 3);
        graph.insertEdge(1, 3);
        graph.insertEdge(2, 4);
        graph.insertEdge(3, 5);
        graph.insertEdge(3, 6);
        graph.insertEdge(4, 7);
        graph.insertEdge(4, 5);
        graph.insertEdge(5, 2);
        graph.insertEdge(6, 5);
        graph.insertEdge(7, 5);
        graph.insertEdge(7, 8);
        System.out.println("Breadth First Traversal for the graph is:");
        graph.BFS(2);
    }
}

Output

Breadth First Traversal for the graph is:
2 4 7 5 8

10. Consider a search tree of your choice and trace the best first search algorithm.
In BFS and DFS, when we are at a node, we can consider any of the adjacent as the next
node. So both BFS and DFS blindly explore paths without considering any cost function.
The idea of Best First Search is to use an evaluation function to decide which adjacent is
most promising and then explore.
Best First Search falls under the category of Heuristic Search or Informed Search.
Implementation of Best First Search:
We use a priority queue (min-heap) that orders nodes by their evaluation function value,
so the node with the lowest value is expanded first. The implementation is thus a variation
of BFS: we just need to change the Queue to a PriorityQueue.
// Pseudocode for Best First Search
Best-First-Search(Graph g, Node start)
    1) Create an empty PriorityQueue
           PriorityQueue pq;
    2) Insert "start" in pq.
           pq.insert(start)
    3) Until PriorityQueue is empty
           u = PriorityQueue.DeleteMin
           If u is the goal
               Exit
           Else
               Foreach neighbor v of u
                   If v "Unvisited"
                       Mark v "Visited"
                       pq.insert(v)
               Mark u "Examined"
End procedure

Illustration:
Let us consider the below example:

 We start from source "S" and search for goal "I" using the given costs and Best First
Search. (The cost-labelled graph figure is not reproduced here.)
 pq initially contains S.
 We remove S from pq and add the unvisited neighbors of S to pq.
 pq now contains {A, C, B} (C is put before B because C has a lower cost).
 We remove A from pq and add the unvisited neighbors of A to pq.
 pq now contains {C, B, E, D}.
 We remove C from pq and add the unvisited neighbors of C to pq.
 pq now contains {B, H, E, D}.
 We remove B from pq and add the unvisited neighbors of B to pq.
 pq now contains {H, E, D, F, G}.
 We remove H from pq.
 Since our goal "I" is a neighbor of H, we return.
11. Differentiate between Depth first search and Breadth first search.
BFS vs DFS

S.
No. Parameters BFS DFS
BFS stands for Breadth First DFS stands for Depth First
1. Stands for Search. Search.
BFS(Breadth First Search) uses
Queue data structure for finding DFS(Depth First Search) uses
2. Data Structure the shortest path. Stack data structure.
DFS is also a traversal approach
in which the traverse begins at the
BFS is a traversal approach in root node and proceeds through
which we first walk through all the nodes as far as possible until
nodes on the same level before we reach the node with no
3. Definition moving on to the next level. unvisited nearby nodes.
BFS can be used to find a single
source shortest path in an
unweighted graph because, in
BFS, we reach a vertex with a In DFS, we might traverse
minimum number of edges from through more edges to reach a
4. Technique a source vertex. destination vertex from a source.
5. Conceptual difference: BFS builds the tree level by level, whereas DFS builds the tree sub-tree by sub-tree.
6. Approach used: BFS works on the concept of FIFO (First In First Out); DFS works on the concept of LIFO (Last In First Out).
7. Suitable for: BFS is more suitable for searching vertices closer to the given source; DFS is more suitable when there are solutions away from the source.
8. Suitability for decision trees: BFS considers all neighbours first and is therefore not suitable for decision-making trees used in games or puzzles. DFS is more suitable for game or puzzle problems: we make a decision, then explore all paths through this decision, and if the decision leads to a win situation, we stop.
9. Time complexity: for both BFS and DFS, the time complexity is O(V + E) when an adjacency list is used and O(V^2) when an adjacency matrix is used, where V stands for vertices and E stands for edges.
10. Visiting of siblings/children: in BFS, siblings are visited before children; in DFS, children are visited before siblings.
11. Removal of traversed nodes: in BFS, traversed nodes are removed from the queue; in DFS, visited nodes are added to the stack and then removed when there are no more nodes to visit.
12. Backtracking: BFS has no concept of backtracking; DFS is a recursive algorithm that uses the idea of backtracking.
13. Applications: BFS is used in applications such as bipartite graphs and shortest paths; DFS is used in applications such as acyclic graphs and topological ordering.
14. Memory: BFS requires more memory; DFS requires less memory.
15. Optimality: BFS is optimal for finding the shortest path (in an unweighted graph); DFS is not optimal for finding the shortest path.
16. Space complexity: in BFS, space complexity is more critical than time complexity; DFS has lower space complexity because at any time it needs to store only a single path from the root to a leaf node.
17. Speed: BFS is slow compared to DFS; DFS is fast compared to BFS.
18. When to use: when the target is close to the source, BFS performs better; when the target is far from the source, DFS is preferable.
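The contrast between the two traversals can be seen in a short sketch (the graph and its labels are illustrative, not from the notes):

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal: FIFO queue, siblings before children."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                queue.append(nbr)
    return order

def dfs(graph, start):
    """Depth-first traversal: LIFO stack, children before siblings."""
    visited, order = set(), []
    stack = [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # push in reverse so neighbours are explored in listed order
        for nbr in reversed(graph[node]):
            if nbr not in visited:
                stack.append(nbr)
    return order

g = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
print(bfs(g, 'A'))  # ['A', 'B', 'C', 'D'] — level by level
print(dfs(g, 'A'))  # ['A', 'B', 'D', 'C'] — sub-tree by sub-tree
```

Swapping the queue for a stack is the only structural difference; everything in the table above follows from it.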

12. Explain Best First Search algorithm. Illustrate

Greedy best-first search always selects the path which appears best at that moment. It combines ideas from depth-first search and breadth-first search: with the help of a heuristic function, at each step we can choose the most promising node. In greedy best-first search we expand the node which appears closest to the goal, where the closeness is estimated by the heuristic function:

f(n) = h(n)

where h(n) = estimated cost from node n to the goal.
Greedy best-first search is implemented with a priority queue.

Best first search algorithm:

o Step 1: Place the starting node into the OPEN list.
o Step 2: If the OPEN list is empty, stop and return failure.
o Step 3: Remove from the OPEN list the node n which has the lowest value of h(n), and place it in the CLOSED list.
o Step 4: Expand node n and generate its successors.
o Step 5: Check each successor of node n to see whether it is a goal node. If any successor is a goal node, return success and terminate the search; otherwise proceed to Step 6.
o Step 6: For each successor node, the algorithm evaluates the function f(n) and checks whether the node is already in the OPEN or CLOSED list. If it is in neither list, add it to the OPEN list.
o Step 7: Return to Step 2.
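The steps above can be sketched with a priority queue ordered by h(n). The graph and heuristic values below are assumed for illustration (the figure the notes refer to is not reproduced here), chosen so the search reproduces the notes' answer path S → B → F → G:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the OPEN node with lowest h(n)."""
    open_list = [(h[start], start, [start])]   # priority queue ordered by h
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for succ in graph[node]:
            if succ not in closed:
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None

# Assumed graph and heuristic values (illustrative):
graph = {'S': ['A', 'B'], 'B': ['E', 'F'], 'F': ['I', 'G'],
         'A': [], 'E': [], 'I': [], 'G': []}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))  # ['S', 'B', 'F', 'G']
```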
Advantages:

o Best-first search can switch between BFS and DFS behaviour, gaining the advantages of both algorithms.
o This algorithm is more efficient than plain BFS and DFS.

Disadvantages:

o It can behave like an unguided depth-first search in the worst case.
o Like DFS, it can get stuck in a loop.
o The algorithm is not optimal.

Example:

Consider the below search problem, and we will traverse it using greedy best-first search. At
each iteration, each node is expanded using evaluation function f(n)=h(n) , which is given in
the below table.

In this search example, we are using two lists which are OPEN and CLOSED Lists.
Following are the iteration for traversing the above example.

Expand the nodes of S and put them in the CLOSED list.

Initialization: Open [A, B], Closed [S]
Iteration 1: Open [A], Closed [S, B]
Iteration 2: Open [E, F, A], Closed [S, B]
             then Open [E, A], Closed [S, B, F]
Iteration 3: Open [I, G, E, A], Closed [S, B, F]
             then Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path will be: S ---> B ---> F ---> G
Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).

Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space and b is the branching factor.

Complete: Greedy best-first search is incomplete, even if the given state space is finite.

Optimal: Greedy best-first search is not optimal.

13. Explain with example A* algorithm. Illustrate

A* search is the most widely known form of best-first search. It uses the heuristic function h(n) together with g(n), the cost to reach node n from the start state. It combines features of uniform-cost search and greedy best-first search, which lets it solve problems efficiently. A* finds the shortest path through the search space using the heuristic function, expanding a smaller search tree and providing the optimal result faster. A* is similar to UCS except that it uses g(n) + h(n) instead of g(n) alone.

In the A* search algorithm we use the search heuristic as well as the cost to reach the node, so we combine both costs as f(n) = g(n) + h(n); this sum is called the fitness number.

Algorithm of A* search:

Step 1: Place the starting node in the OPEN list.
Step 2: Check if the OPEN list is empty; if it is, return failure and stop.
Step 3: Select the node n from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, return success and stop; otherwise continue.
Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute its evaluation function and place it into the OPEN list.
Step 5: If node n' is already in OPEN or CLOSED, attach it to the back pointer which reflects the lowest g(n') value.
Step 6: Return to Step 2.
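The algorithm above can be sketched as follows. The edge costs and heuristic values are assumptions chosen to reproduce the worked example's answer S → A → C → G with cost 6:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the OPEN node with the lowest f(n) = g(n) + h(n)."""
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                           # lowest g(n') seen so far
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        for succ, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):   # keep the cheapest route
                best_g[succ] = g2
                heapq.heappush(open_list,
                               (g2 + h[succ], g2, succ, path + [succ]))
    return None, float('inf')

# Assumed edge costs and heuristic values (illustrative):
graph = {'S': [('A', 1), ('G', 10)],
         'A': [('B', 2), ('C', 1)],
         'B': [('D', 5)],
         'C': [('D', 3), ('G', 4)],
         'D': [('G', 2)], 'G': []}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
path, cost = a_star(graph, h, 'S', 'G')
print(path, cost)  # ['S', 'A', 'C', 'G'] 6
```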

Advantages:

o A* is better than many other search algorithms.
o A* is optimal and complete (under the conditions below).
o It can solve very complex problems.

Disadvantages:

o With an inadmissible heuristic it need not produce the shortest path, as it relies on heuristics and approximation.
o A* has some complexity issues.
o The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so it is not practical for many large-scale problems.

Example:

In this example, we will traverse the given graph using the A* algorithm. The heuristic value
of all states is given in the below table so we will calculate the f(n) of each state using the
formula f(n)= g(n) + h(n), where g(n) is the cost to reach any node from start state.
Here we will use OPEN and CLOSED list.

Solution:
Initialization: {(S, 5)}

Iteration1: {(S--> A, 4), (S-->G, 10)}

Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}

Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G, 10)}

Iteration 4 gives the final result: S ---> A ---> C ---> G, the optimal path with cost 6.

Points to remember:

o A* returns the path which is found first, and it does not search for all remaining paths.
o The efficiency of the A* algorithm depends on the quality of the heuristic.
o The A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C* is the cost of the optimal solution.

Complete: The A* algorithm is complete as long as:

o The branching factor is finite.
o The cost of every action is fixed.

Optimal: The A* search algorithm is optimal if it satisfies the following two conditions:

o Admissibility: h(n) must be an admissible heuristic for A* tree search. An admissible heuristic never overestimates the true cost, i.e. it is optimistic in nature.
o Consistency: consistency is additionally required for A* graph search.

If the heuristic function is admissible, then A* tree search will always find the least-cost path.

Time Complexity: The time complexity of A* depends on the heuristic function; the number of nodes expanded is exponential in the depth of the solution d, so the time complexity is O(b^d), where b is the branching factor.

Space Complexity: The space complexity of the A* search algorithm is O(b^d).
14. Discuss branch and bound search algorithm. Illustrate

Branch and bound is one of the techniques used for problem solving. It is similar to backtracking, since it also uses the state space tree. It is used for solving optimization (minimization) problems. If we are given a maximization problem, we can solve it with the branch and bound technique by simply converting it into a minimization problem.

Let's understand through an example.

Jobs = {j1, j2, j3, j4}

P = {10, 5, 8, 3}

d = {1, 2, 1, 2}

Here the jobs, their profits P, and their deadlines d are given. We can write the solutions in two ways.

Suppose we want to perform the jobs j1 and j4; then the solution can be represented in two ways:

The first way of representing the solutions is the subsets of jobs.

S1 = {j1, j4}

The second way of representing the solution is to mark that the first job is done, the second and third jobs are not done, and the fourth job is done.

S2 = {1, 0, 0, 1}

The solution s1 is the variable-size solution while the solution s2 is the fixed-size solution.

First, we will see the subset method where we will see the variable size.

First method:
In this case, we first consider the first job, then second job, then third job and finally we
consider the last job.

As we can observe in the above figure, breadth-first search is performed rather than depth-first search: here we move breadth-wise to explore the solutions. In backtracking we go depth-wise, whereas in branch and bound we go breadth-wise.

Now one level is completed. Once I take first job, then we can consider either j2, j3 or j4. If
we follow the route then it says that we are doing jobs j1 and j4 so we will not consider jobs
j2 and j3.

Now we will consider the node 3. In this case, we are doing job j2 so we can consider either
job j3 or j4. Here, we have discarded the job j1.

Now we will expand the node 4. Since here we are doing job j3 so we will consider only job
j4.
Now we will expand node 6, and here we will consider the jobs j3 and j4.

Now we will expand node 7 and here we will consider job j4.

Now we will expand node 9, and here we will consider job j4.

The last node, i.e., node 12 which is left to be expanded. Here, we consider job j4.

The above is the state space tree for the solution s1 = {j1, j4}
Second method:

We will see another way to solve the problem to achieve the solution s1.

First, we consider the node 1 shown as below:

Now, we will expand the node 1. After expansion, the state space tree would be appeared as:

On each expansion, the node will be pushed into the stack shown as below:

Now the expansion would be based on the node that appears on the top of the stack. Since
the node 5 appears on the top of the stack, so we will expand the node 5. We will pop out the
node 5 from the stack. Since the node 5 is in the last job, i.e., j4 so there is no further scope
of expansion.

The next node that appears on the top of the stack is node 4. Pop out the node 4 and expand.
On expansion, job j4 will be considered and node 6 will be added into the stack shown as
below:
The next node is 6 which is to be expanded. Pop out the node 6 and expand. Since the node 6
is in the last job, i.e., j4 so there is no further scope of expansion.

The next node to be expanded is node 3. Since the node 3 works on the job j2 so node 3 will
be expanded to two nodes, i.e., 7 and 8 working on jobs 3 and 4 respectively. The nodes 7
and 8 will be pushed into the stack shown as below:
The next node that appears on the top of the stack is node 8. Pop out the node 8 and expand.
Since the node 8 works on the job j4 so there is no further scope for the expansion.

The next node that appears on the top of the stack is node 7. Pop out the node 7 and expand.
Since the node 7 works on the job j3 so node 7 will be further expanded to node 9 that works
on the job j4 as shown as below and the node 9 will be pushed into the stack.

The next node that appears on the top of the stack is node 9. Since the node 9 works on the
job 4 so there is no further scope for the expansion.

The next node that appears on the top of the stack is node 2. Since node 2 works on job j1, it can be further expanded, up to three nodes named 10, 11 and 12, working on jobs j2, j3 and j4 respectively. These newly created nodes will be pushed into the stack shown as below:

In the above method, we explored all the nodes using the stack that follows the LIFO
principle.
Third method

There is one more method that can be used to find the solution and that method is Least cost
branch and bound. In this technique, nodes are explored based on the cost of the node. The
cost of the node can be defined using the problem and with the help of the given problem, we
can define the cost function. Once the cost function is defined, we can define the cost of the
node.

Let's first consider the node 1 having cost infinity shown as below:

Now we will expand the node 1. The node 1 will be expanded into four nodes named as 2, 3,
4 and 5 shown as below:

Let's assume that cost of the nodes 2, 3, 4, and 5 are 25, 12, 19 and 30 respectively.

Since this is least-cost branch and bound, we will explore the node having the least cost. In the above figure, we can observe that the node with minimum cost is node 3, so we will explore node 3, having cost 12.

Since the node 3 works on the job j2 so it will be expanded into two nodes named as 6 and 7
shown as below:

The node 6 works on job j3 while the node 7 works on job j4. The cost of the node 6 is 8 and
the cost of the node 7 is 7. Now we have to select the node which is having the minimum
cost. The node 7 has the minimum cost so we will explore the node 7. Since the node 7
already works on the job j4 so there is no further scope for the expansion.
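The least-cost strategy can be sketched as a generic skeleton driven by a priority queue. The callbacks `expand`, `cost` and `is_answer`, and the toy cost values (25, 12, 19, 30, then 8 and 7), are illustrative, mirroring the costs assumed in the third method above:

```python
import heapq

def least_cost_bb(start, expand, cost, is_answer):
    """Least-cost branch and bound: always expand the live node of minimum cost.
    expand(node) -> children; cost(node) -> estimated cost; is_answer(node) -> bool."""
    counter = 0                       # tie-breaker so heapq never compares nodes
    live = [(cost(start), counter, start)]
    while live:
        c, _, node = heapq.heappop(live)
        if is_answer(node):
            return node, c
        for child in expand(node):
            counter += 1
            heapq.heappush(live, (cost(child), counter, child))
    return None, float('inf')

# Toy tree mirroring the assumed costs: root 1 -> nodes 2, 3, 4, 5 with costs
# 25, 12, 19, 30; node 3 -> node 6 (cost 8) and node 7 (cost 7, an answer node).
children = {1: [2, 3, 4, 5], 3: [6, 7]}
costs = {1: float('inf'), 2: 25, 3: 12, 4: 19, 5: 30, 6: 8, 7: 7}
node, c = least_cost_bb(1,
                        expand=lambda n: children.get(n, []),
                        cost=lambda n: costs[n],
                        is_answer=lambda n: n == 7)
print(node, c)  # 7 7 — explored in least-cost order 1 -> 3 -> 7
```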
15. What is and-or graph. Give example.
PROBLEM REDUCTION:
So far we have considered search strategies for OR graphs, through which we want to find a single path to a goal. Such structures represent the fact that we know how to get from a node to a goal state if we can discover how to get from that node to a goal state along any one of the branches leaving it.

AND-OR GRAPHS
The AND-OR graph (or tree) is useful for representing the solution of problems that can be solved by decomposing them into a set of smaller problems, all of which must then be solved. This decomposition, or reduction, generates arcs that we call AND arcs. One AND arc may point to any number of successor nodes, all of which must be solved in order for the arc to point to a solution. Just as in an OR graph, several arcs may emerge from a single node, indicating a variety of ways in which the original problem might be solved. This is why the structure is called not simply an AND-graph but rather an AND-OR graph (which also happens to be an AND-OR tree).

EXAMPLE FOR AND-OR GRAPH

ALGORITHM:
1. Let G be a graph with only starting node INIT.
2. Repeat the followings until INIT is labeled SOLVED or h(INIT) > FUTILITY
a) Select an unexpanded node from the most promising path from INIT (call it NODE)
b) Generate successors of NODE. If there are none, set h(NODE) = FUTILITY (i.e.,
NODE is unsolvable); otherwise for each SUCCESSOR that is not an ancestor of
NODE do the following:
i. Add SUCCESSOR to G.
ii. If SUCCESSOR is a terminal node, label it SOLVED and set h(SUCCESSOR) = 0.
iii. If SUCCESSOR is not a terminal node, compute its h.
c) Propagate the newly discovered information up the graph by doing the following: let S
be the set of SOLVED nodes, or nodes whose h values have changed and need to
have their values propagated back to their parents. Initialize S to NODE. Until S is
empty, repeat the following:
i. Remove a node from S and call it CURRENT.
ii. Compute the cost of each of the arcs emerging from CURRENT. Assign the
minimum of these arc costs as CURRENT's h.
iii. Mark the best path out of CURRENT by marking the arc that had the minimum
cost in step ii.
iv. Mark CURRENT as SOLVED if all of the nodes connected to it through the newly
marked arc have been labeled SOLVED.
v. If CURRENT has been labeled SOLVED or its cost was just changed, propagate
its new cost back up through the graph, adding all of the ancestors of
CURRENT to S.
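The cost computation in step (c) can be sketched as a recursive evaluation over a small AND-OR graph. Each arc is written as a tuple of successors (one element for an OR arc, several for an AND arc); the per-branch edge cost of 1 and the heuristic values h(D) = 5, h(B) = 3, h(C) = 4 are assumptions chosen to reproduce the step-1 costs of 6 and 9 in the example below:

```python
def revised_cost(graph, h, node):
    """Revised cost of a node in an AND-OR graph: the minimum, over its
    outgoing arcs, of (edge costs + successor costs). An AND arc sums over
    all of its successors; an OR arc has a single successor."""
    arcs = graph.get(node, [])
    if not arcs:                       # leaf: use its heuristic estimate
        return h[node]
    best = float('inf')
    for arc in arcs:                   # each arc is a tuple of successors
        total = sum(1 + revised_cost(graph, h, s) for s in arc)  # edge cost 1
        best = min(best, total)
    return best

# Step-1 situation: A has an OR arc to D and an AND arc to B and C.
graph = {'A': [('D',), ('B', 'C')]}
h = {'B': 3, 'C': 4, 'D': 5}
print(revised_cost(graph, h, 'A'))  # 6  (arc to D: 1+5 = 6; AND arc B-C: 1+3+1+4 = 9)
```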

EXAMPLE: 1
STEP 1:

A is the only node, so it is at the end of the current best path. It is expanded, yielding nodes B, C, D. The arc to D is labeled as the most promising one emerging from A, since it costs 6 compared with the AND arc to B and C, which costs 9.

STEP 2:

Node D is chosen for expansion. This process produces one new arc, the AND arc to E and F, with a combined cost estimate of 10, so we update the f' value of D to 10. Going back one more level, we see that this makes the AND arc B-C better than the arc to D, so it is labeled as the current best path.

STEP 3:
We traverse the AND arc from A and discover the unexpanded nodes B and C. If we are going to find a solution along this path, we will have to expand both B and C eventually, so let's choose to explore B first. This generates two new arcs, to G and to H. Propagating their f' values backward, we update f' of B to 6 (since that is the best we think we can do, which we can achieve by going through G). This requires updating the cost of the AND arc B-C to 12 (6 + 4 + 2). After doing that, the arc to D is again the better path from A, so we record that as the current best path, and either node E or node F will be chosen for expansion at step 4.

STEP4:

16. “Game playing follows adverse search”, Discuss.

Adversarial search is a search in which we examine the problems that arise when we try to plan ahead in a world where other agents are planning against us.

o In previous topics, we studied search strategies associated with only a single agent that aims to find a solution, often expressed as a sequence of actions.
o But there might be situations where more than one agent is searching for a solution in the same search space; this situation usually occurs in game playing.
o An environment with more than one agent is termed a multi-agent environment, in which each agent is an opponent of the others and plays against them. Each agent needs to consider the actions of the other agents and the effect of those actions on its own performance.
o So, searches in which two or more players with conflicting goals explore the same search space for a solution are called adversarial searches, often known as games.
o Games are modeled as a search problem together with a heuristic evaluation function; these are the two main factors which help to model and solve games in AI.
Types of Games in AI:

                          Deterministic                    Chance moves
Perfect information       Chess, Checkers, Go, Othello     Backgammon, Monopoly
Imperfect information     Battleship, blind tic-tac-toe    Bridge, poker, Scrabble, nuclear war

o Perfect information: A game with the perfect information is that in which agents can
look into the complete board. Agents have all the information about the game, and they
can see each other moves also. Examples are Chess, Checkers, Go, etc.
o Imperfect information: If in a game the agents do not have all the information about the game and are not aware of what is going on, it is a game with imperfect information, such as blind tic-tac-toe, Battleship, Bridge, etc.
o Deterministic games: Deterministic games are those games which follow a strict pattern
and set of rules for the games, and there is no randomness associated with them.
Examples are chess, Checkers, Go, tic-tac-toe, etc.
o Non-deterministic games: Non-deterministic games are those which have various unpredictable events and a factor of chance or luck, introduced by dice or cards. These are random, and each action's response is not fixed. Such games are also called stochastic games.
Example: Backgammon, Monopoly, Poker, etc.

Zero-Sum Game

o Zero-sum games are adversarial search which involves pure competition.


o In a zero-sum game, each agent's gain or loss of utility is exactly balanced by the losses or gains of utility of the other agent.
o One player tries to maximize a single value, while the other player tries to minimize it.
o Each move by one player in the game is called a ply.
o Chess and tic-tac-toe are examples of zero-sum games.

Zero-sum game: Embedded thinking

The zero-sum game involves embedded thinking, in which one agent or player is trying to figure out:

o What to do.
o How to decide the move
o Needs to think about his opponent as well
o The opponent also thinks what to do

Each of the players is trying to find out the response of his opponent to their actions. This
requires embedded thinking or backward reasoning to solve the game problems in AI.

Formalization of the problem:

A game can be defined as a type of search in AI which can be formalized in terms of the following elements:

o Initial state: It specifies how the game is set up at the start.


o Player(s): It specifies which player has the move in a state.
o Action(s): It returns the set of legal moves in state space.
o Result(s, a): It is the transition model, which specifies the result of moves in the state
space.
o Terminal-Test(s): The terminal test is true if the game is over and false otherwise. States where the game ends are called terminal states.
o Utility(s, p): A utility function gives the final numeric value for a game that ends in terminal state s for player p. It is also called the payoff function. For chess, the outcomes are a win, loss, or draw, with payoff values +1, 0, or ½; for tic-tac-toe, the utility values are +1, -1, and 0.
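The elements above can be sketched as a minimal game class. The toy game here (a one-heap Nim where a player takes 1 or 2 stones per turn and whoever takes the last stone wins) and all of its names are illustrative, not from the notes:

```python
class NimGame:
    """A toy formalization following the elements above (snake_case names)."""

    def initial_state(self):
        return (3, 'MAX')                 # 3 stones left, MAX to move

    def player(self, s):                  # which player has the move in s
        return s[1]

    def actions(self, s):                 # legal moves: take 1 or 2 stones
        return [a for a in (1, 2) if a <= s[0]]

    def result(self, s, a):               # transition model
        return (s[0] - a, 'MIN' if s[1] == 'MAX' else 'MAX')

    def terminal_test(self, s):
        return s[0] == 0

    def utility(self, s, p):              # whoever took the last stone wins
        winner = 'MIN' if s[1] == 'MAX' else 'MAX'
        return +1 if winner == p else -1

g = NimGame()
s = g.initial_state()
print(g.player(s), g.actions(s))  # MAX [1, 2]
```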

17. Discuss Min-Max procedure of Game Playing.


o The mini-max algorithm is a recursive or backtracking algorithm used in decision-making and game theory. It provides an optimal move for the player, assuming that the opponent is also playing optimally.
o The mini-max algorithm uses recursion to search through the game tree.
o The min-max algorithm is mostly used for game playing in AI, such as chess, checkers, tic-tac-toe, Go, and various other two-player games. The algorithm computes the minimax decision for the current state.
o In this algorithm two players play the game: one is called MAX and the other is called MIN.
o Each player tries to obtain the maximum benefit while leaving the opponent the minimum benefit.
o Both players are opponents of each other; MAX will select the maximized value and MIN will select the minimized value.
o The minimax algorithm performs a depth-first search for the exploration of the complete game tree.
o The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backtracks up the tree via the recursion.
Working of Min-Max Algorithm:

o The working of the minimax algorithm can be easily described using an example.
Below we have taken an example of game-tree which is representing the two-player
game.
o In this example, there are two players one is called Maximizer and other is called
Minimizer.
o Maximizer will try to get the Maximum possible score, and Minimizer will try to get
the minimum possible score.
o This algorithm applies DFS, so in this game tree we have to go all the way down to the leaves to reach the terminal nodes.
o At the terminal nodes, the terminal values are given, so we compare those values and backtrack the tree until the initial state is reached. The following are the main steps involved in solving the two-player game tree:

Step 1: In the first step, the algorithm generates the entire game tree and applies the utility function to get the utility values for the terminal states. In the tree diagram below, let A be the initial state of the tree. Suppose the maximizer takes the first turn, with worst-case initial value -∞, and the minimizer takes the next turn, with worst-case initial value +∞.

Step 2: Now we find the utility values for the maximizer. Its initial value is -∞, so we compare each terminal value with the maximizer's initial value and determine the higher node values, taking the maximum among them all.

o For node D: max(-1, -∞) => max(-1, 4) = 4
o For node E: max(2, -∞) => max(2, 6) = 6
o For node F: max(-3, -∞) => max(-3, -5) = -3
o For node G: max(0, -∞) => max(0, 7) = 7
Step 3: In the next step it is the minimizer's turn, so it will compare all node values with +∞ and find the 3rd-layer node values.

o For node B: min(4, 6) = 4
o For node C: min(-3, 7) = -3
Step 4: Now it is the maximizer's turn again; it will choose the maximum of all node values and find the maximum value for the root node. In this game tree there are only 4 layers, so we reach the root node immediately, but in real games there will be more than 4 layers.

o For node A: max(4, -3) = 4

That was the complete workflow of the minimax two player game.
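The worked tree above can be evaluated with a short recursive implementation. The leaf names are made up, but the terminal values are those of the example:

```python
def minimax(node, is_max, tree, values):
    """Minimax over a game tree given as an adjacency dict; leaves carry
    terminal utility values. MAX and MIN alternate by level."""
    if node not in tree:                      # terminal node
        return values[node]
    child_vals = [minimax(c, not is_max, tree, values) for c in tree[node]]
    return max(child_vals) if is_max else min(child_vals)

# The 4-level tree from the worked example: terminal values under D, E, F, G.
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['d1', 'd2'], 'E': ['e1', 'e2'],
        'F': ['f1', 'f2'], 'G': ['g1', 'g2']}
values = {'d1': -1, 'd2': 4, 'e1': 2, 'e2': 6,
          'f1': -3, 'f2': -5, 'g1': 0, 'g2': 7}
print(minimax('A', True, tree, values))  # 4, as in step 4 above
```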
Properties of Mini-Max algorithm:

o Complete: the min-max algorithm is complete. It will definitely find a solution (if one exists) in a finite search tree.
o Optimal: the min-max algorithm is optimal if both opponents are playing optimally.
o Time complexity: as it performs a DFS of the game tree, the time complexity of the min-max algorithm is O(b^m), where b is the branching factor of the game tree and m is the maximum depth of the tree.
o Space complexity: the space complexity of the mini-max algorithm is similar to DFS, which is O(bm).

Limitation of the minimax Algorithm:

The main drawback of the minimax algorithm is that it gets really slow for complex games such as chess and Go. These games have a huge branching factor, and the player has lots of choices to decide among. This limitation of the minimax algorithm can be improved by alpha-beta pruning, which we discuss in the next topic.

18. Define Alpha and Beta. Discuss role of Alpha-Beta in game playing.
o Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization
technique for the minimax algorithm.
o As we have seen in the minimax search algorithm, the number of game states it has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can cut it roughly in half. Hence there is a technique by which we can compute the correct minimax decision without checking each node of the game tree, and this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called the alpha-beta algorithm.
o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree leaves but also entire sub-trees.
o The two-parameter can be defined as:
a. Alpha: The best (highest-value) choice we have found so far at any point along the
path of Maximizer. The initial value of alpha is -∞.
b. Beta: The best (lowest-value) choice we have found so far at any point along the
path of Minimizer. The initial value of beta is +∞.
o Alpha-beta pruning returns the same move as the standard minimax algorithm, but it removes all the nodes which do not really affect the final decision yet make the algorithm slow. Pruning these nodes makes the algorithm fast.
Condition for Alpha-beta pruning:

The main condition which required for alpha-beta pruning is:

α>=β

Key points about alpha-beta pruning:

o The Max player will only update the value of alpha.
o The Min player will only update the value of beta.
o While backtracking the tree, node values (not alpha and beta values) are passed up to parent nodes.
o Alpha and beta values are passed only down to child nodes.
Working of Alpha-Beta Pruning:

Let's take an example of two-player search tree to understand the working of Alpha-beta
pruning

Step 1: At the first step, the Max player starts the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.

Step 2: At node D, the value of α will be calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3, and max(2, 3) = 3 will be the value of α at node D; the node value will also be 3.

Step 3: Now the algorithm backtracks to node B, where the value of β changes, as it is Min's turn: β = min(+∞, 3) = 3. Hence at node B, α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed down as well.

Step 4: At node E, Max takes its turn, and the value of alpha changes. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α >= β, the right successor of E is pruned, and the algorithm will not traverse it; the value at node E will be 5.

Step 5: At the next step, the algorithm again backtracks the tree, from node B to node A. At node A, the value of alpha is changed; the maximum available value is 3, as max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right successor of A, which is node C.

At node C, α = 3 and β = +∞, and the same values are passed on to node F.

Step 6: At node F, the value of α is again compared, first with the left child 0, max(3, 0) = 3, and then with the right child 1, max(3, 1) = 3; α remains 3, but the node value of F becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta changes: β = min(+∞, 1) = 1. Now at C, α = 3 and β = 1, which again satisfies the condition α >= β, so the next child of C, which is G, is pruned, and the algorithm does not compute the entire sub-tree of G.

Step 8: C now returns the value 1 to A; here the best value for A is max(3, 1) = 3. The final game tree shows the nodes which were computed and the nodes which were never computed. Hence the optimal value for the maximizer is 3 for this example.
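The walkthrough above can be sketched as minimax with alpha-beta cutoffs. The tree mirrors the example (D's leaves 2 and 3, E's left leaf 5, F's leaves 0 and 1); the leaf values under E's right child and G's subtree are arbitrary placeholders, since the algorithm never examines them:

```python
def alphabeta(node, is_max, tree, values,
              alpha=float('-inf'), beta=float('inf')):
    """Minimax with alpha-beta pruning: stop exploring a node's remaining
    children as soon as alpha >= beta."""
    if node not in tree:                  # terminal node
        return values[node]
    best = float('-inf') if is_max else float('inf')
    for child in tree[node]:
        v = alphabeta(child, not is_max, tree, values, alpha, beta)
        if is_max:
            best = max(best, v)
            alpha = max(alpha, v)
        else:
            best = min(best, v)
            beta = min(beta, v)
        if alpha >= beta:                 # prune the remaining children
            break
    return best

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['d1', 'd2'], 'E': ['e1', 'e2'],
        'F': ['f1', 'f2'], 'G': ['g1', 'g2']}
values = {'d1': 2, 'd2': 3, 'e1': 5, 'e2': 9,   # e2 is pruned (step 4)
          'f1': 0, 'f2': 1, 'g1': 7, 'g2': 5}   # G's subtree is pruned (step 7)
print(alphabeta('A', True, tree, values))  # 3, as in step 8 above
```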