Unit No 2 (Problem Solving by Intelligent Search)
Unit No: 02
Problem Solving by Intelligent Search
1. With example discuss the nature of AI problems.
The term "Artificial Intelligence" refers to the simulation of human intelligence processes by
machines, especially computer systems. Its applications include expert systems, voice
recognition, machine vision, and natural language processing (NLP).
AI programming focuses on three cognitive processes:
o Learning Processes
o Reasoning Processes
o Self-correction Processes
Learning Processes
This part of AI programming is concerned with gathering data and creating rules for
transforming it into useful information. The rules, also called algorithms, provide
computing devices with step-by-step instructions for accomplishing a particular job.
Reasoning Processes
This part of AI programming is concerned with selecting the best algorithm to achieve the
desired result.
Self-Correction Processes
This part of AI programming aims to fine-tune algorithms regularly in order to ensure that
they offer the most reliable results possible.
1. Maps and Navigation
Travelling to a new destination no longer requires much thought. Rather than
relying on confusing address directions, we can now simply open our phone's map app and
type in our destination.
So how does the app know about the appropriate directions, best way, and even the presence
of roadblocks and traffic jams? A few years ago, only GPS (satellite-based navigation) was
used as a navigation guide. However, artificial intelligence (AI) now provides users with a
much better experience in their unique surroundings.
The app's algorithm uses machine learning to recall the edges of buildings that have been fed
into the system after a person has manually acknowledged them. This enables the map to
provide simple visuals of buildings. Another feature is identifying and understanding
handwritten house numbers, which assists travellers in finding the exact house they need.
Locations that lack formal street signs can also be recognized from their outlines or
handwritten labels.
The application has been trained to recognize and understand traffic. As a result, it suggests
the best way to avoid traffic congestion and bottlenecks. The AI-based algorithm also
informs users about the precise distance and time it will take them to arrive at their
destination. It has been trained to calculate this based on the traffic situations. Several ride-
hailing applications have emerged as a result of the use of similar AI technology. So,
whenever you need to book a cab via an app by putting your location on a map, this is how it
works.
2. Face Detection and Recognition
Using Face ID to unlock our phones and applying virtual filters to our faces while taking
pictures are two uses of AI that are now essential in our day-to-day lives.
The former uses face recognition, which identifies a particular face; the latter uses face
detection, by which any human face can be detected.
Intelligent machines often match, and in some cases even exceed, human performance here.
Human babies begin by identifying facial features such as eyes, lips, nose,
and face shapes. A face, though, is more than just that. A number of characteristics
and face shapes. A face, though, is more than just that. A number of characteristics
distinguish human faces. Smart machines are trained to recognize facial coordinates
(x, y, w, and h, which form a square around the face as a region of interest), landmarks (nose,
eyes, etc.), and alignment (geometric structures). This improves the human ability to identify
faces by several factors. Face recognition is also used by government facilities and at
airports for monitoring and security.
3. Text Editors and Autocorrect
When typing a document, there are built-in or downloadable auto-correcting tools that check
spelling, readability, grammatical mistakes, and plagiarism.
It took us a long time to master our language and become fluent in it.
Artificially intelligent algorithms use deep learning, machine learning, and natural
language processing to detect incorrect language use and recommend improvements.
Linguists and computer scientists collaborate in teaching machines grammar in the same way
that we learned it in school. Machines are fed large volumes of high-quality data that has
been structured in a way that machines can understand. Thus, when we misplace a single
comma, the editor will highlight it in red and offer suggestions.
4. Chatbots
Answering a customer's inquiries can take a long time. The use of algorithms to train
machines to meet customer needs through chatbots is an artificially intelligent solution to
this problem. This allows machines to answer questions as well as take and track orders.
5. Online Payments
It can be a time-consuming errand to rush to the bank for any transaction. Good news!
Artificial Intelligence is now being used by banks to support customers by simplifying the
process of payment.
Artificial intelligence has enabled us to deposit cheques from the convenience of our own
homes, since AI is capable of deciphering handwriting and making online cheque processing
practicable. Artificial intelligence can also be used to detect fraud by observing
consumers' credit card spending patterns. For example, the algorithms are aware of what
items User X purchases, when and where they are purchased, and in what price range they
are purchased. If there is some suspicious behaviour that does not match the user's profile,
then the system immediately signals user X.
3. What is state space search? Discuss its components with an example.
A state is a representation of problem elements at a given moment.
A State space is the set of all states reachable from the initial state.
A state space forms a graph in which the nodes are states and the arcs between nodes are
actions.
In the state space, a path is a sequence of states connected by a sequence of actions.
The solution of a problem is part of the graph formed by the state space.
The state space representation forms the basis of most of the AI methods.
Its structure corresponds to the structure of problem solving in two important ways:
1. It allows for a formal definition of a problem as per the need to convert some given
situation into some desired situation using a set of permissible operations.
2. It permits the problem to be solved with the help of known techniques and control
strategies to move through the problem space until goal state is found.
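The components above can be made concrete with a small runnable sketch. The toy problem below is invented purely for illustration (states are integers, the initial state is 2, the available actions are "double" and "add 3", and the goal test is value == 14); the point is the mapping of components, not the problem itself:

```java
import java.util.*;

/* Minimal state-space sketch: states, initial state, actions (transition
 * model), goal test, and a path as a sequence of states. The problem
 * itself (integers, double / add 3, goal 14) is illustrative. */
public class StateSpaceDemo {
    static List<Integer> successors(int s) {          // actions: arcs leaving state s
        return Arrays.asList(s * 2, s + 3);
    }
    static boolean isGoal(int s) { return s == 14; }  // goal test

    /* Breadth-first exploration of the state space; returns a solution path. */
    static List<Integer> solve(int start) {
        Queue<List<Integer>> frontier = new ArrayDeque<>();
        Set<Integer> visited = new HashSet<>();
        frontier.add(new ArrayList<>(List.of(start)));
        visited.add(start);
        while (!frontier.isEmpty()) {
            List<Integer> path = frontier.poll();
            int s = path.get(path.size() - 1);
            if (isGoal(s)) return path;               // path = sequence of states
            for (int nxt : successors(s)) {
                if (visited.add(nxt) && nxt <= 100) { // keep the space finite
                    List<Integer> p2 = new ArrayList<>(path);
                    p2.add(nxt);
                    frontier.add(p2);
                }
            }
        }
        return null;                                  // goal unreachable
    }

    public static void main(String[] args) {
        System.out.println(solve(2));                 // prints [2, 4, 7, 14]
    }
}
```

Here the state space is the set of reachable integers, the arcs are the two actions, and the returned path is the solution: 2 → 4 → 7 → 14.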
4. Design a partial (tree depth 3 or 4) state space search for the given Tic
Tac Toe game.
Prerequisites: Minimax Algorithm in Game Theory, Evaluation Function in Game Theory
Let us combine what we have learnt so far about minimax and the evaluation function to write
a proper Tic-Tac-Toe AI (Artificial Intelligence) that plays a perfect game. This AI will
consider all possible scenarios and make the optimal move.
function findBestMove(board):
    bestMove = NULL
    for each move in board:
        if current move is better than bestMove:
            bestMove = current move
    return bestMove
Minimax :
To check whether or not the current move is better than the best move, we take the help
of the minimax() function, which considers all the possible ways the game can go and
returns the best value for that move, assuming the opponent also plays optimally.
The code for the maximizer and minimizer in the minimax() function is similar
to findBestMove(); the only difference is that, instead of returning a move, it returns a
value. Here is the pseudocode:
if isMaximizingPlayer:
    bestVal = -INFINITY
    for each move in board:
        value = minimax(board, depth + 1, false)
        bestVal = max(bestVal, value)
    return bestVal
else:
    bestVal = +INFINITY
    for each move in board:
        value = minimax(board, depth + 1, true)
        bestVal = min(bestVal, value)
    return bestVal
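The minimax() pseudocode above can be made runnable. To keep the sketch short, it searches a fixed two-level game tree whose leaf scores are supplied in an array (the scores are illustrative) instead of a real Tic-Tac-Toe board:

```java
/* Runnable sketch of the minimax() pseudocode above. Instead of a real
 * Tic-Tac-Toe board it searches a complete binary game tree whose leaf
 * scores are given in an array (the scores are illustrative). */
public class MinimaxDemo {
    static int minimax(int[] scores, int node, int depth, boolean isMaximizingPlayer) {
        if (depth == 0) return scores[node];            // leaf: return its score
        // children of a node at index i are at indices 2i and 2i + 1
        int left  = minimax(scores, node * 2,     depth - 1, !isMaximizingPlayer);
        int right = minimax(scores, node * 2 + 1, depth - 1, !isMaximizingPlayer);
        return isMaximizingPlayer ? Math.max(left, right)   // maximizer's turn
                                  : Math.min(left, right);  // minimizer's turn
    }

    public static void main(String[] args) {
        int[] leaves = {3, 5, 2, 9};                    // depth-2 tree with 4 leaves
        // Maximizer moves first: min(3,5) = 3, min(2,9) = 2, max(3,2) = 3
        System.out.println(minimax(leaves, 0, 2, true));   // prints 3
    }
}
```

Swapping the fixed tree for move generation on a board gives the full Tic-Tac-Toe AI; the max/min alternation is identical.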
5. How can the water jug problem be represented as state space search?
Give a partial structure.
Problem: You are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any
measuring marks on it. There is a pump that can be used to fill the jugs with water. How can
you get exactly 2 gallons of water into the 4-gallon jug?
Solution:
The state space for this problem can be described as the set of ordered pairs of integers (X,Y),
where
X represents the quantity of water in the 4-gallon jug, X = 0, 1, 2, 3, 4
Y represents the quantity of water in the 3-gallon jug, Y = 0, 1, 2, 3
Start State: (0,0)
Goal State: (2,0)
Generate production rules for the water jug problem
Production Rules:
Rule 1: (X,Y | X<4) -> (4,Y) {Fill the 4-gallon jug}
Rule 2: (X,Y | Y<3) -> (X,3) {Fill the 3-gallon jug}
Rule 3: (X,Y | X>0) -> (0,Y) {Empty the 4-gallon jug}
Rule 4: (X,Y | Y>0) -> (X,0) {Empty the 3-gallon jug}
Rule 5: (X,Y | X+Y>=4 ^ Y>0) -> (4, Y-(4-X)) {Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full}
Rule 6: (X,Y | X+Y>=3 ^ X>0) -> (X-(3-Y), 3) {Pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full}
Rule 7: (X,Y | X+Y<=4 ^ Y>0) -> (X+Y, 0) {Pour all water from the 3-gallon jug into the 4-gallon jug}
Rule 8: (X,Y | X+Y<=3 ^ X>0) -> (0, X+Y) {Pour all water from the 4-gallon jug into the 3-gallon jug}
Rule 9: (0,2) -> (2,0) {Pour the 2 gallons from the 3-gallon jug into the 4-gallon jug}
Initialization:
Start State: (0,0)
Apply Rule 2: (X,Y | Y<3) -> (X,3) {Fill the 3-gallon jug}
Now the state is (0,3).

Iteration 1:
Current State: (0,3)
Apply Rule 7: (X,Y | X+Y<=4 ^ Y>0) -> (X+Y,0) {Pour all water from the 3-gallon jug into the 4-gallon jug}
Now the state is (3,0).

Iteration 2:
Current State: (3,0)
Apply Rule 2: (X,Y | Y<3) -> (X,3) {Fill the 3-gallon jug}
Now the state is (3,3).

Iteration 3:
Current State: (3,3)
Apply Rule 5: (X,Y | X+Y>=4 ^ Y>0) -> (4, Y-(4-X)) {Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full}
Now the state is (4,2).

Iteration 4:
Current State: (4,2)
Apply Rule 3: (X,Y | X>0) -> (0,Y) {Empty the 4-gallon jug}
Now the state is (0,2).

Iteration 5:
Current State: (0,2)
Apply Rule 9: (0,2) -> (2,0) {Pour the 2 gallons from the 3-gallon jug into the 4-gallon jug}
Now the state is (2,0). Goal achieved.
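The production-rule trace above can also be run as a program. The sketch below performs a breadth-first search over the (X, Y) state space using the same fill, empty, and pour rules, starting at (0, 0) and stopping at the goal (2, 0):

```java
import java.util.*;

/* BFS over the water jug state space: X = water in the 4-gallon jug,
 * Y = water in the 3-gallon jug. The successors implement the fill,
 * empty, and pour production rules from the text. */
public class WaterJug {
    static List<int[]> successors(int x, int y) {
        List<int[]> next = new ArrayList<>();
        next.add(new int[]{4, y});                     // Rule 1: fill 4-gallon jug
        next.add(new int[]{x, 3});                     // Rule 2: fill 3-gallon jug
        next.add(new int[]{0, y});                     // Rule 3: empty 4-gallon jug
        next.add(new int[]{x, 0});                     // Rule 4: empty 3-gallon jug
        int pourTo4 = Math.min(y, 4 - x);              // Rules 5/7: pour 3 -> 4
        next.add(new int[]{x + pourTo4, y - pourTo4});
        int pourTo3 = Math.min(x, 3 - y);              // Rules 6/8: pour 4 -> 3
        next.add(new int[]{x - pourTo3, y + pourTo3});
        return next;
    }

    /* Returns the shortest sequence of states from (0,0) to (2,0). */
    static List<int[]> solve() {
        Queue<List<int[]>> frontier = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();
        frontier.add(new ArrayList<>(List.of(new int[]{0, 0})));
        visited.add("0,0");
        while (!frontier.isEmpty()) {
            List<int[]> path = frontier.poll();
            int[] s = path.get(path.size() - 1);
            if (s[0] == 2 && s[1] == 0) return path;   // goal state (2, 0)
            for (int[] n : successors(s[0], s[1])) {
                if (visited.add(n[0] + "," + n[1])) {  // avoid revisiting states
                    List<int[]> p2 = new ArrayList<>(path);
                    p2.add(n);
                    frontier.add(p2);
                }
            }
        }
        return null;
    }

    public static void main(String[] args) {
        for (int[] s : solve()) System.out.println("(" + s[0] + ", " + s[1] + ")");
    }
}
```

Because BFS explores level by level, the path it returns uses the fewest rule applications.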
A heuristic is a technique that is not always guaranteed to find the best solution but is
guaranteed to find a good solution in reasonable time.
Informed search can solve complex problems which could not be solved in any other way.
Two important informed search algorithms are:
1. Greedy Search
2. A* Search
8. Consider a search tree of your choice and trace the depth first search algorithm.
Example of DFS algorithm
Now, let's understand the working of the DFS algorithm using an example. In the
example given below, there is a directed graph having 8 vertices.
Step 1 - First, PUSH the starting node H onto the stack.
STACK: H
Step 2 - POP the top element from the stack, i.e., H, and print it. Now, PUSH all the
neighbors of H onto the stack that are in the ready state.
Print: H
STACK: A
Step 3 - POP the top element from the stack, i.e., A, and print it. Now, PUSH all the
neighbors of A onto the stack that are in ready state.
Print: A
STACK: B, D
Step 4 - POP the top element from the stack, i.e., D, and print it. Now, PUSH all the
neighbors of D onto the stack that are in ready state.
Print: D
STACK: B, F
Step 5 - POP the top element from the stack, i.e., F, and print it. Now, PUSH all the
neighbors of F onto the stack that are in ready state.
Print: F
STACK: B
Step 6 - POP the top element from the stack, i.e., B, and print it. Now, PUSH all the
neighbors of B onto the stack that are in ready state.
Print: B
STACK: C
Step 7 - POP the top element from the stack, i.e., C, and print it. Now, PUSH all the
neighbors of C onto the stack that are in ready state.
Print: C
STACK: E, G
Step 8 - POP the top element from the stack, i.e., G and PUSH all the neighbors of G onto
the stack that are in ready state.
Print: G
STACK: E
Step 9 - POP the top element from the stack, i.e., E and PUSH all the neighbors of E onto
the stack that are in ready state.
Print: E
STACK:
Now, all the graph nodes have been traversed, and the stack is empty.
The time complexity of the DFS algorithm is O(V+E), where V is the number of vertices
and E is the number of edges in the graph.
In this example, the graph that we are using to demonstrate the code is given as follows -
/* A sample Java program to implement the DFS algorithm */
import java.util.*;

class DFSTraversal {
    private LinkedList<Integer> adj[]; /* adjacency list representation */
    private boolean visited[];

    DFSTraversal(int V) {
        adj = new LinkedList[V];
        visited = new boolean[V];
        for (int i = 0; i < V; i++)
            adj[i] = new LinkedList<>();
    }

    void insertEdge(int v, int w) {
        adj[v].add(w); /* add a directed edge v -> w */
    }

    void DFS(int vertex) {
        visited[vertex] = true; /* mark the current node as visited */
        System.out.print(vertex + " ");
        Iterator<Integer> it = adj[vertex].listIterator();
        while (it.hasNext()) {
            int n = it.next();
            if (!visited[n])
                DFS(n); /* recur for each unvisited neighbor */
        }
    }

    public static void main(String args[]) {
        DFSTraversal graph = new DFSTraversal(8);
        graph.insertEdge(0, 1);
        graph.insertEdge(0, 2);
        graph.insertEdge(0, 3);
        graph.insertEdge(1, 3);
        graph.insertEdge(2, 4);
        graph.insertEdge(3, 5);
        graph.insertEdge(3, 6);
        graph.insertEdge(4, 7);
        graph.insertEdge(4, 5);
        graph.insertEdge(5, 2);
        System.out.println("Depth First Traversal for the graph is:");
        graph.DFS(0);
    }
}
Output
9. Consider a search tree of your choice and trace the breadth first search algorithm.
Example of BFS algorithm
Now, let's understand the working of BFS algorithm by using an example. In the example
given below, there is a directed graph having 7 vertices.
In the above graph, minimum path 'P' can be found by using the BFS that will start from
Node A and end at Node E. The algorithm uses two queues, namely QUEUE1 and
QUEUE2. QUEUE1 holds all the nodes that are to be processed, while QUEUE2 holds all
the nodes that are processed and deleted from QUEUE1.
Now, let's start examining the graph starting from Node A.
Step 1 - First, insert the starting node A into QUEUE1.
QUEUE1 = {A}
QUEUE2 = {}
Step 2 - Now, delete node A from queue1 and add it into queue2. Insert all neighbors of
node A to queue1.
QUEUE1 = {B, D}
QUEUE2 = {A}
Step 3 - Now, delete node B from queue1 and add it into queue2. Insert all neighbors of
node B to queue1.
QUEUE1 = {D, C, F}
QUEUE2 = {A, B}
Step 4 - Now, delete node D from queue1 and add it into queue2. Insert all neighbors of
node D to queue1. The only neighbor of Node D is F since it is already inserted, so it will not
be inserted again.
QUEUE1 = {C, F}
QUEUE2 = {A, B, D}
Step 5 - Delete node C from queue1 and add it into queue2. Insert all neighbors of node C to
queue1.
QUEUE1 = {F, E, G}
QUEUE2 = {A, B, D, C}
Step 6 - Delete node F from queue1 and add it into queue2. Insert all neighbors of node F to
queue1. Since all the neighbors of node F are already present, we will not insert them again.
QUEUE1 = {E, G}
QUEUE2 = {A, B, D, C, F}
Step 7 - Delete node E from queue1. Since all of its neighbors have already been added,
we will not insert them again. Now, all the nodes are visited, and the target node E is
present in queue2.
QUEUE1 = {G}
QUEUE2 = {A, B, D, C, F, E}
Complexity of BFS algorithm
Time complexity of BFS depends upon the data structure used to represent the graph. The
time complexity of BFS algorithm is O(V+E), since in the worst case, BFS algorithm
explores every node and edge. In a graph, the number of vertices is O(V), whereas the
number of edges is O(E).
The space complexity of BFS can be expressed as O(V), where V is the number of vertices.
In this code, we are using the adjacency list to represent our graph. Implementing the
Breadth-First Search algorithm in Java makes it much easier to deal with the adjacency list
since we only have to travel through the list of nodes attached to each node once the node is
dequeued from the head (or start) of the queue.
In this example, the graph that we are using to demonstrate the code is given as follows -
import java.io.*;
import java.util.*;

public class BFSTraversal
{
    private int vertex;                /* total number of vertices in the graph */
    private LinkedList<Integer> adj[]; /* adjacency list */
    private Queue<Integer> que;        /* maintaining a queue */

    BFSTraversal(int v)
    {
        vertex = v;
        adj = new LinkedList[vertex];
        for (int i = 0; i < v; i++)
        {
            adj[i] = new LinkedList<>();
        }
        que = new LinkedList<Integer>();
    }

    void insertEdge(int v, int w)
    {
        adj[v].add(w); /* adding a directed edge v -> w to the adjacency list */
    }

    void BFS(int n)
    {
        boolean nodes[] = new boolean[vertex]; /* boolean array marking visited nodes */
        int a = 0;
        nodes[n] = true;
        que.add(n); /* root node is added to the queue */
        while (que.size() != 0)
        {
            n = que.poll();            /* remove the front element of the queue */
            System.out.print(n + " "); /* print the removed element */
            for (int i = 0; i < adj[n].size(); i++) /* iterate through the linked list and push all neighbors into the queue */
            {
                a = adj[n].get(i);
                if (!nodes[a]) /* only insert nodes into the queue if they have not been explored already */
                {
                    nodes[a] = true;
                    que.add(a);
                }
            }
        }
    }

    public static void main(String args[])
    {
        BFSTraversal graph = new BFSTraversal(10);
        graph.insertEdge(0, 1);
        graph.insertEdge(0, 2);
        graph.insertEdge(0, 3);
        graph.insertEdge(1, 3);
        graph.insertEdge(2, 4);
        graph.insertEdge(3, 5);
        graph.insertEdge(3, 6);
        graph.insertEdge(4, 7);
        graph.insertEdge(4, 5);
        graph.insertEdge(5, 2);
        graph.insertEdge(6, 5);
        graph.insertEdge(7, 5);
        graph.insertEdge(7, 8);
        System.out.println("Breadth First Traversal for the graph is:");
        graph.BFS(2);
    }
}
Output
10. Consider a search tree of your choice and trace the best first search algorithm.
In BFS and DFS, when we are at a node, we can consider any of the adjacent nodes as the
next node. So both BFS and DFS blindly explore paths without considering any cost function.
The idea of Best First Search is to use an evaluation function to decide which adjacent node is
most promising and then explore it.
Best First Search falls under the category of Heuristic Search or Informed Search.
Implementation of Best First Search:
We use a priority queue (min-heap) that orders nodes by their evaluation function value, so
the node with the lowest value is expanded first. The implementation is thus a variation of
BFS: we just need to change the Queue to a PriorityQueue.
// Pseudocode for Best First Search
Best-First-Search(Graph g, Node start)
    1) Create an empty PriorityQueue
           PriorityQueue pq;
    2) Insert "start" in pq.
           pq.insert(start)
    3) Until PriorityQueue is empty
           u = PriorityQueue.DeleteMin
           If u is the goal
               Exit
           Else
               Foreach neighbor v of u
                   If v "Unvisited"
                       Mark v "Visited"
                       pq.insert(v)
               Mark u "Examined"
End procedure
Illustration:
Let us consider the below example:
We start from source "S" and search for goal "I" using the given costs and Best First
search.
pq initially contains S.
We remove S from pq and insert the unvisited neighbors of S into pq.
pq now contains {A, C, B} (C is put before B because C has a lower cost).
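The pseudocode and trace above can be sketched as a runnable program. The graph and heuristic values below are assumptions for illustration (the figure's actual costs are not reproduced in the text); the priority queue always removes the node with the smallest heuristic value:

```java
import java.util.*;

/* Runnable sketch of the Best-First-Search pseudocode above. The graph
 * and heuristic values are illustrative (the original figure's costs are
 * not in the text); the priority queue expands the node with the
 * smallest h value first. */
public class BestFirstSearch {
    static Map<String, List<String>> adj = Map.of(
        "S", List.of("A", "B", "C"),
        "A", List.of("D"),
        "B", List.of("E"),
        "C", List.of("I"),                // goal I reachable through C
        "D", List.of(), "E", List.of(), "I", List.of());
    static Map<String, Integer> h = Map.of(   // estimated cost to goal I
        "S", 10, "A", 7, "B", 8, "C", 3, "D", 6, "E", 6, "I", 0);

    /* Returns the order in which nodes are expanded. */
    static List<String> search(String start, String goal) {
        PriorityQueue<String> pq = new PriorityQueue<>(Comparator.comparingInt(h::get));
        Set<String> visited = new HashSet<>();
        List<String> expansionOrder = new ArrayList<>();
        pq.add(start);
        visited.add(start);
        while (!pq.isEmpty()) {
            String u = pq.poll();             // DeleteMin: most promising node
            expansionOrder.add(u);
            if (u.equals(goal)) return expansionOrder;
            for (String v : adj.get(u))
                if (visited.add(v)) pq.add(v); // insert unvisited neighbors
        }
        return expansionOrder;
    }

    public static void main(String[] args) {
        System.out.println(search("S", "I"));  // prints [S, C, I]
    }
}
```

Note how A and B stay in the queue: C (h = 3) is removed first, and then I (h = 0), so the search heads straight for the goal.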
Comparison between BFS and DFS:
1. Stands for: BFS stands for Breadth First Search. DFS stands for Depth First Search.
2. Data structure: BFS uses the Queue data structure for finding the shortest path. DFS uses the Stack data structure.
3. Definition: BFS is a traversal approach in which we first walk through all nodes on the same level before moving on to the next level. DFS is also a traversal approach, in which the traversal begins at the root node and proceeds through the nodes as far as possible until we reach a node with no unvisited nearby nodes.
4. Technique: BFS can be used to find the single-source shortest path in an unweighted graph because, in BFS, we reach a vertex with the minimum number of edges from the source vertex. In DFS, we might traverse through more edges to reach a destination vertex from the source.
5. Conceptual difference: BFS builds the tree level by level. DFS builds the tree sub-tree by sub-tree.
6. Approach used: BFS works on the concept of FIFO (First In First Out). DFS works on the concept of LIFO (Last In First Out).
7. Suitable for: BFS is more suitable for searching vertices closer to the given source. DFS is more suitable when there are solutions away from the source.
8. Suitable for decision trees: DFS is more suitable for game or puzzle problems: we make a decision, then explore all paths through this decision, and if the decision leads to a win situation, we stop. BFS considers all neighbors first and is therefore not suitable for decision-making trees used in games or puzzles.
9. Time complexity: The time complexity of BFS is O(V + E) when an adjacency list is used and O(V^2) when an adjacency matrix is used; the same holds for DFS, where V stands for vertices and E stands for edges.
10. Visiting of siblings/children: In BFS, siblings are visited before the children. In DFS, children are visited before the siblings.
11. Removal of traversed nodes: In BFS, nodes that are traversed several times are deleted from the queue. In DFS, the visited nodes are added to the stack and then removed when there are no more nodes to visit.
12. Backtracking: In BFS there is no concept of backtracking. DFS is a recursive algorithm that uses the idea of backtracking.
13. Applications: BFS is used in various applications such as bipartite graphs, shortest paths, etc. DFS is used in various applications such as acyclic graphs, topological ordering, etc.
14. Memory: BFS requires more memory. DFS requires less memory.
15. Optimality: BFS is optimal for finding the shortest path. DFS is not optimal for finding the shortest path.
16. Space complexity: In BFS, the space complexity is more critical as compared to time complexity. DFS has lower space complexity because at a time it needs to store only a single path from the root to a leaf node.
17. Speed: BFS is slow as compared to DFS. DFS is fast as compared to BFS.
18. When to use: When the target is close to the source, BFS performs better. When the target is far from the source, DFS is preferable.
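The FIFO-versus-LIFO difference above can be seen directly by running both traversals on the same graph. The small directed graph below is illustrative:

```java
import java.util.*;

/* Side-by-side demo of the FIFO vs LIFO behavior from the comparison:
 * the same small directed graph (illustrative) traversed breadth-first
 * and depth-first from node 0. */
public class BfsVsDfs {
    static Map<Integer, List<Integer>> g = Map.of(
        0, List.of(1, 2),
        1, List.of(3),
        2, List.of(3),
        3, List.of());

    static List<Integer> bfs(int start) {
        List<Integer> order = new ArrayList<>();
        Queue<Integer> q = new ArrayDeque<>(List.of(start)); // FIFO frontier
        Set<Integer> seen = new HashSet<>(List.of(start));
        while (!q.isEmpty()) {
            int u = q.poll();
            order.add(u);
            for (int v : g.get(u)) if (seen.add(v)) q.add(v);
        }
        return order;
    }

    static List<Integer> dfs(int start) {
        List<Integer> order = new ArrayList<>();
        Deque<Integer> st = new ArrayDeque<>(List.of(start)); // LIFO frontier
        Set<Integer> seen = new HashSet<>(List.of(start));
        while (!st.isEmpty()) {
            int u = st.pop();
            order.add(u);
            for (int v : g.get(u)) if (seen.add(v)) st.push(v);
        }
        return order;
    }

    public static void main(String[] args) {
        System.out.println("BFS: " + bfs(0));  // level by level: [0, 1, 2, 3]
        System.out.println("DFS: " + dfs(0));  // branch by branch: [0, 2, 3, 1]
    }
}
```

The only difference between the two methods is the frontier data structure, exactly as row 2 of the comparison states.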
Greedy best-first search always selects the path which appears best at that moment. It
combines aspects of depth-first search and breadth-first search, and it uses the heuristic
function to guide the search. Best-first search allows us to take the advantages of both
algorithms. With the help of best-first search, at each step we can choose the most
promising node. In the greedy best-first search algorithm, we expand the node which is
closest to the goal node, where the closeness is estimated by the heuristic function, i.e.,
f(n) = h(n)
where h(n) = estimated cost from node n to the goal.
The greedy best-first algorithm is implemented using a priority queue.
Advantages:
o Best-first search can switch between BFS and DFS, gaining the advantages of both
the algorithms.
o This algorithm is more efficient than the BFS and DFS algorithms.
Disadvantages:
o It can behave like an unguided depth-first search in the worst case.
o It can get stuck in a loop, like DFS.
o This algorithm is not optimal.
Example:
Consider the below search problem, and we will traverse it using greedy best-first search. At
each iteration, each node is expanded using evaluation function f(n)=h(n) , which is given in
the below table.
In this search example, we are using two lists which are OPEN and CLOSED Lists.
Following are the iteration for traversing the above example.
Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m),
where m is the maximum depth of the search space and b is the branching factor.
Complete: Greedy best-first search is incomplete, even if the given state space is finite.
A* search is the most commonly known form of best-first search. It uses the heuristic
function h(n) and the cost to reach node n from the start state, g(n). It combines the features
of UCS and greedy best-first search, by which it solves the problem efficiently. The A* search
algorithm finds the shortest path through the search space using the heuristic function. This
search algorithm expands a smaller search tree and provides an optimal result faster. The A*
algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).
In the A* search algorithm, we use the search heuristic as well as the cost to reach the node.
Hence we can combine both costs as follows, and this sum is called the fitness number:
f(n) = g(n) + h(n)
Algorithm of A* search:
Step 5: Else if node n' is already in OPEN and CLOSED, then it should be attached to the
back pointer which reflects the lowest g(n') value.
Advantages:
o A* search algorithm is the best among the other search algorithms.
o A* search algorithm is optimal and complete.
o This algorithm can solve very complex problems.
Disadvantages:
o It does not always produce the shortest path, as it is mostly based on heuristics and
approximation.
o A* search algorithm has some complexity issues.
o The main drawback of A* is memory requirement as it keeps all generated nodes in
the memory, so it is not practical for various large-scale problems.
Example:
In this example, we will traverse the given graph using the A* algorithm. The heuristic value
of all states is given in the below table so we will calculate the f(n) of each state using the
formula f(n)= g(n) + h(n), where g(n) is the cost to reach any node from start state.
Here we will use OPEN and CLOSED list.
Solution:
Initialization: {(S, 5)}
Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G, 10)}
Iteration 4 will give the final result, as S--->A--->C--->G provides the optimal path with
cost 6.
Points to remember:
o A* algorithm returns the path which occurred first, and it does not search for all
remaining paths.
o The efficiency of A* algorithm depends on the quality of heuristic.
o A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C* is the cost of the optimal solution.
o Admissible: the first condition required for optimality is that h(n) should be an
admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature.
o Consistency: the second required condition, needed only for A* graph search, is consistency.
If the heuristic function is admissible, then A* tree search will always find the least cost
path.
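The A* example above can be sketched as a program. The edge costs and heuristic values below are not stated explicitly in the text (the table is missing); they are chosen to be consistent with the iterations listed, e.g. f(S-->A-->C-->G) = 6:

```java
import java.util.*;

/* A* sketch. Edge costs and heuristic values (S:5, A:3, B:4, C:2, D:6,
 * G:0) are reconstructed to match the iterations shown in the example;
 * the OPEN list is a priority queue ordered on f(n) = g(n) + h(n). */
public class AStarDemo {
    static class Edge {
        final String to; final int cost;
        Edge(String to, int cost) { this.to = to; this.cost = cost; }
    }
    static Map<String, List<Edge>> adj = Map.of(
        "S", List.of(new Edge("A", 1), new Edge("G", 10)),
        "A", List.of(new Edge("B", 2), new Edge("C", 1)),
        "B", List.of(new Edge("D", 5)),
        "C", List.of(new Edge("D", 3), new Edge("G", 4)),
        "D", List.of(), "G", List.of());
    static Map<String, Integer> h = Map.of("S", 5, "A", 3, "B", 4, "C", 2, "D", 6, "G", 0);

    static int g(List<String> p) {                       // path cost g(n)
        int cost = 0;
        for (int i = 0; i + 1 < p.size(); i++)
            for (Edge e : adj.get(p.get(i)))
                if (e.to.equals(p.get(i + 1))) cost += e.cost;
        return cost;
    }

    static List<String> search(String start, String goal) {
        PriorityQueue<List<String>> open = new PriorityQueue<>(
            Comparator.comparingInt((List<String> p) -> g(p) + h.get(p.get(p.size() - 1))));
        open.add(List.of(start));
        while (!open.isEmpty()) {
            List<String> path = open.poll();             // lowest fitness number f
            String n = path.get(path.size() - 1);
            if (n.equals(goal)) return path;             // admissible h: first goal popped is optimal
            for (Edge e : adj.get(n)) {                  // expand n, generate successors
                List<String> next = new ArrayList<>(path);
                next.add(e.to);
                open.add(next);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<String> best = search("S", "G");
        System.out.println(best + " with cost " + g(best));  // [S, A, C, G] with cost 6
    }
}
```

Running it reproduces the answer of the example: the optimal path S --> A --> C --> G with cost 6.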
Branch and bound is one of the techniques used for problem solving. It is similar to
backtracking in that it also uses the state space tree. It is used for solving optimization
problems, in particular minimization problems. If we are given a maximization problem,
then we can convert it into a minimization problem and apply the branch and bound
technique.
P = {10, 5, 8, 3}
d = {1, 2, 1, 2}
Here P gives the profit of each job and d gives its deadline. We can write the solutions in two
ways, as given below:
Suppose we want to perform jobs j1 and j4; then the solution can be represented in two
ways:
S1 = {j1, j4}
The second way of representing the solution is that the first job is done, the second and third
jobs are not done, and the fourth job is done:
S2 = {1, 0, 0, 1}
The solution S1 is the variable-size solution while the solution S2 is the fixed-size solution.
First, we will see the subset method, where the solution is of variable size.
First method:
In this case, we first consider the first job, then second job, then third job and finally we
consider the last job.
As we can observe in the above figure, breadth first search is performed, not depth first
search. Here we move breadth-wise for exploring the solutions. In backtracking, we go
depth-wise, whereas in branch and bound, we go breadth-wise.
Now one level is completed. Once we take the first job, we can consider either j2, j3 or j4. If
we follow the route, it says that we are doing jobs j1 and j4, so we will not consider jobs
j2 and j3.
Now we will consider the node 3. In this case, we are doing job j2 so we can consider either
job j3 or j4. Here, we have discarded the job j1.
Now we will expand the node 4. Since here we are doing job j3 so we will consider only job
j4.
Now we will expand node 6, and here we will consider the jobs j3 and j4.
Now we will expand node 7 and here we will consider job j4.
Now we will expand node 9, and here we will consider job j4.
The last node, i.e., node 12 which is left to be expanded. Here, we consider job j4.
The above is the state space tree for the solution s1 = {j1, j4}
Second method:
We will see another way to solve the problem to achieve the solution s1.
Now, we will expand the node 1. After expansion, the state space tree would be appeared as:
On each expansion, the node will be pushed into the stack shown as below:
Now the expansion would be based on the node that appears on the top of the stack. Since
the node 5 appears on the top of the stack, so we will expand the node 5. We will pop out the
node 5 from the stack. Since the node 5 is in the last job, i.e., j4 so there is no further scope
of expansion.
The next node that appears on the top of the stack is node 4. Pop out the node 4 and expand.
On expansion, job j4 will be considered and node 6 will be added into the stack shown as
below:
The next node is 6 which is to be expanded. Pop out the node 6 and expand. Since the node 6
is in the last job, i.e., j4 so there is no further scope of expansion.
The next node to be expanded is node 3. Since the node 3 works on the job j2 so node 3 will
be expanded to two nodes, i.e., 7 and 8 working on jobs 3 and 4 respectively. The nodes 7
and 8 will be pushed into the stack shown as below:
The next node that appears on the top of the stack is node 8. Pop out the node 8 and expand.
Since the node 8 works on the job j4 so there is no further scope for the expansion.
The next node that appears on the top of the stack is node 7. Pop out the node 7 and expand.
Since the node 7 works on the job j3 so node 7 will be further expanded to node 9 that works
on the job j4 as shown as below and the node 9 will be pushed into the stack.
The next node that appears on the top of the stack is node 9. Since the node 9 works on the
job 4 so there is no further scope for the expansion.
The next node that appears on the top of the stack is node 2. Since the node 2 works on the
job j1, it can be further expanded. It can be expanded up to three nodes named 10, 11,
and 12, working on jobs j2, j3, and j4 respectively. These newly created nodes will be pushed
onto the stack as shown below:
In the above method, we explored all the nodes using the stack that follows the LIFO
principle.
Third method
There is one more method that can be used to find the solution and that method is Least cost
branch and bound. In this technique, nodes are explored based on the cost of the node. The
cost of the node can be defined using the problem and with the help of the given problem, we
can define the cost function. Once the cost function is defined, we can define the cost of the
node.
Let's first consider the node 1 having cost infinity shown as below:
Now we will expand the node 1. The node 1 will be expanded into four nodes named as 2, 3,
4 and 5 shown as below:
Let's assume that cost of the nodes 2, 3, 4, and 5 are 25, 12, 19 and 30 respectively.
Since it is least cost branch and bound, we will explore the node having the least cost. In
the above figure, we can observe that the node with the minimum cost is node 3. So, we
will explore node 3, having cost 12.
Since the node 3 works on the job j2 so it will be expanded into two nodes named as 6 and 7
shown as below:
The node 6 works on job j3 while the node 7 works on job j4. The cost of the node 6 is 8 and
the cost of the node 7 is 7. Now we have to select the node which is having the minimum
cost. The node 7 has the minimum cost so we will explore the node 7. Since the node 7
already works on the job j4 so there is no further scope for the expansion.
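The least cost branch and bound walk-through above can be sketched as a program. Since the text does not define the cost function, the node costs below are simply the illustrative numbers from the walk-through (node 1 has cost infinity; nodes 2-5 cost 25, 12, 19, 30; nodes 6-7 cost 8, 7):

```java
import java.util.*;

/* Sketch of least cost branch and bound: live nodes sit in a priority
 * queue keyed by cost, and the cheapest live node is always expanded
 * next. Tree shape and costs mirror the walk-through above; the numbers
 * are illustrative, not derived from a real cost function. */
public class LeastCostBB {
    static Map<Integer, int[]> children = Map.of(
        1, new int[]{2, 3, 4, 5},        // node 1 expands to 2, 3, 4, 5
        3, new int[]{6, 7});             // node 3 expands to 6, 7; others are leaves
    static Map<Integer, Integer> cost = Map.of(
        1, Integer.MAX_VALUE,            // root starts with cost infinity
        2, 25, 3, 12, 4, 19, 5, 30, 6, 8, 7, 7);

    /* Returns the order in which nodes are expanded. */
    static List<Integer> explore(int root) {
        PriorityQueue<Integer> live = new PriorityQueue<>(Comparator.comparingInt(cost::get));
        List<Integer> order = new ArrayList<>();
        live.add(root);
        while (!live.isEmpty()) {
            int node = live.poll();      // least-cost live node
            order.add(node);
            for (int c : children.getOrDefault(node, new int[0]))
                live.add(c);             // its children become live nodes
        }
        return order;
    }

    public static void main(String[] args) {
        System.out.println(explore(1));  // prints [1, 3, 7, 6, 4, 2, 5]
    }
}
```

The expansion order matches the walk-through: after the root, node 3 (cost 12) is explored first, and then node 7 (cost 7) before node 6 (cost 8).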
15. What is an AND-OR graph? Give an example.
PROBLEM REDUCTION:
So far we have considered search strategies for OR graphs, through which we want to find a
single path to a goal. Such a structure represents the fact that we know how to get from a node
to a goal state if we can discover how to get from that node to a goal state along any one of
the branches leaving it.
AND-OR GRAPHS
The AND-OR graph (or tree) is useful for representing the solution of problems that can be
solved by decomposing them into a set of smaller problems, all of which must then be
solved. This decomposition, or reduction, generates arcs that we call AND arcs. One AND
arc may point to any number of successor nodes, all of which must be solved in order for the
arc to point to a solution. Just as in an OR graph, several arcs may emerge from a single
node, indicating a variety of ways in which the original problem might be solved. This is
why the structure is called not simply an AND graph but rather an AND-OR graph (which
also happens to be an AND-OR tree).
ALGORITHM:
1. Let G be a graph with only starting node INIT.
2. Repeat the followings until INIT is labeled SOLVED or h(INIT) > FUTILITY
a) Select an unexpanded node from the most promising path from INIT (call it NODE)
b) Generate successors of NODE. If there are none, set h(NODE) = FUTILITY (i.e.,
NODE is unsolvable); otherwise, for each SUCCESSOR that is not an ancestor of
NODE do the following:
i. Add SUCCESSOR to G.
ii. If SUCCESSOR is a terminal node, label it SOLVED and set h(SUCCESSOR) = 0.
iii. If SUCCESSOR is not a terminal node, compute its h.
c) Propagate the newly discovered information up the graph by doing the following: let S
be the set of SOLVED nodes, or nodes whose h values have been changed and need to
have values propagated back to their parents. Initialize S to NODE. Until S is empty,
repeat the following:
i. Remove a node from S and call it CURRENT.
ii. Compute the cost of each of the arcs emerging from CURRENT. Assign the
minimum of these arc costs as the new h of CURRENT.
iii. Mark the best path out of CURRENT by marking the arc that had the minimum
cost in step ii.
iv. Mark CURRENT as SOLVED if all of the nodes connected to it through the newly
labeled arc have been labeled SOLVED.
v. If CURRENT has been labeled SOLVED or its cost was just changed,
propagate its new status or cost back up through the graph, i.e. add all of the ancestors of
CURRENT to S.
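The AND/OR cost idea the algorithm converges to can be made concrete with a small recursive evaluation. This is a simplified sketch, not the full incremental algorithm above (no FUTILITY, no revision of estimates): the graph, node names, and unit arc costs below are hypothetical.

```python
# Simplified recursive cost evaluation for an AND-OR graph (hypothetical example).
# Each node maps to a list of arcs; an arc is a list of successor nodes
# (length 1 = ordinary OR arc, length > 1 = AND arc). Each successor link
# costs 1 and terminal nodes cost 0, following the usual convention.

GRAPH = {
    "A": [["B"], ["C", "D"]],           # A solved via B, OR via (C AND D)
    "B": [["E", "F"]],                  # B solved via (E AND F)
    "C": [], "D": [], "E": [], "F": [], # terminals: already solved
}

def solution_cost(node):
    """Cost of the cheapest solution rooted at `node`."""
    arcs = GRAPH[node]
    if not arcs:                        # terminal node: SOLVED, h = 0
        return 0
    # OR choice: take the cheapest outgoing arc.
    # AND arc: every successor must be solved; sum their costs plus 1 per link.
    return min(sum(1 + solution_cost(s) for s in arc) for arc in arcs)

print(solution_cost("A"))
```

Here the AND arc to C and D costs (1 + 0) + (1 + 0) = 2, while going through B costs 1 + 2 = 3, so the minimum at A is 2, mirroring how the propagation step assigns each node the minimum of its arc costs.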
EXAMPLE: 1
STEP 1:
A is the only node; it is at the end of the current best path. It is expanded, yielding nodes B, C and D. The arc to D is labeled as the most promising one emerging from A, since it costs 6, compared with the AND arc to B and C, which costs 9.
STEP 2:
Node D is chosen for expansion. This process produces one new arc, the AND arc to E and F, with a combined cost estimate of 10, so we update the f' value of D to 10. Going back one more level, we see that this makes the AND arc B-C better than the arc to D, so it is labeled as the current best path.
STEP 3:
We traverse the arc from A and discover the unexpanded nodes B and C. If we are going to find a solution along this path, we will have to expand both B and C eventually, so let's choose to explore B first. This generates two new arcs, the ones to G and to H. Propagating their f' values backward, we update f' of B to 6 (since that is the best we think we can do, which we can achieve by going through G). This requires updating the cost of the AND arc B-C to 12 (6 + 4 + 2). After doing that, the arc to D is again the better path from A, so we record that as the current best path, and either node E or node F will be chosen for expansion at step 4.
16. What is adversarial search? Discuss types of games in AI.
Adversarial search is a search, where we examine the problem which arises when we
try to plan ahead of the world and other agents are planning against us.
o In previous topics, we have studied search strategies which are only associated with a single agent that aims to find a solution, often expressed in the form of a sequence of actions.
o But, there might be some situations where more than one agent is searching for the
solution in the same search space, and this situation usually occurs in game playing.
o An environment with more than one agent is termed a multi-agent environment, in which each agent is an opponent of the others and plays against them. Each agent needs to consider the actions of the other agents and the effect of those actions on its own performance.
o So, Searches in which two or more players with conflicting goals are trying to
explore the same search space for the solution, are called adversarial searches,
often known as Games.
o Games are modeled as a Search problem and heuristic evaluation function, and these
are the two main factors which help to model and solve games in AI.
Types of Games in AI:
                        Deterministic                   Chance moves
Perfect information     Chess, Checkers, Go, Othello    Backgammon, Monopoly
Imperfect information   Battleships, blind tic-tac-toe  Bridge, poker, scrabble, nuclear war
o Perfect information: A game with perfect information is one in which agents can see the complete board. Agents have all the information about the game, and they can see each other's moves as well. Examples are Chess, Checkers, Go, etc.
o Imperfect information: If in a game agents do not have all the information about the game and are not aware of what is going on, such games are called games with imperfect information, such as blind tic-tac-toe, Battleship, Bridge, etc.
o Deterministic games: Deterministic games are those games which follow a strict pattern and set of rules, and there is no randomness associated with them. Examples are Chess, Checkers, Go, tic-tac-toe, etc.
o Non-deterministic games: Non-deterministic games are those games which have various unpredictable events and a factor of chance or luck. This factor of chance or luck is introduced by either dice or cards. These games are random, and each action's response is not fixed. Such games are also called stochastic games. Examples: Backgammon, Monopoly, Poker, etc.
Zero-Sum Game
A zero-sum game is an adversarial search which involves pure competition: each agent's gain or loss of utility is exactly balanced by the losses or gains of utility of the other agent. The zero-sum game involves embedded thinking, in which one agent or player is trying to
figure out:
o What to do.
o How to decide the move
o Needs to think about his opponent as well
o The opponent also thinks what to do
Each of the players is trying to find out the response of his opponent to their actions. This
requires embedded thinking or backward reasoning to solve the game problems in AI.
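The "exactly balanced" property can be checked on a small payoff matrix. Matching pennies is used here as a standard illustration; it is not a game mentioned in the text.

```python
# Matching pennies: a classic zero-sum game. Each cell holds
# (payoff to player 1, payoff to player 2) for one pair of moves.
payoffs = {
    ("heads", "heads"): (+1, -1),
    ("heads", "tails"): (-1, +1),
    ("tails", "heads"): (-1, +1),
    ("tails", "tails"): (+1, -1),
}

def is_zero_sum(payoffs):
    """True if every outcome's payoffs sum to zero: one player's gain
    is exactly the other player's loss."""
    return all(p1 + p2 == 0 for p1, p2 in payoffs.values())

print(is_zero_sum(payoffs))  # every gain is balanced by an equal loss
```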
17. Explain the working of the Mini-Max algorithm.
The mini-max algorithm is a recursive, backtracking algorithm used in decision-making and game theory; it provides an optimal move for a player on the assumption that the opponent also plays optimally.
o The working of the minimax algorithm can be easily described using an example. Below we have taken an example of a game tree representing a two-player game.
o In this example, there are two players: one is called the Maximizer and the other is called the Minimizer.
o Maximizer will try to get the Maximum possible score, and Minimizer will try to get
the minimum possible score.
o This algorithm applies DFS, so in this game tree we have to go all the way down through the leaves to reach the terminal nodes.
o At the terminal nodes, the terminal values are given, so we compare those values and backtrack up the tree until the initial state is reached. The following are the main steps involved in solving the two-player game tree:
Step 1: In the first step, the algorithm generates the entire game tree and applies the utility function to get the utility values for the terminal states. In the tree diagram, let A be the initial state of the tree. Suppose the Maximizer takes the first turn, which has a worst-case initial value of -∞, and the Minimizer takes the next turn, which has a worst-case initial value of +∞.
Step 2: Now, first we find the utility values for the Maximizer. Its initial value is -∞, so we compare each terminal-state value with the Maximizer's initial value and determine the higher node values. It finds the maximum among them all.
Step 3: In the next step, it is the Minimizer's turn, so it compares all node values with +∞ and determines the values of the third-layer nodes by taking the minimum.
Step 4: Now it is the Maximizer's turn again, and it will again choose the maximum of all node values, finding the maximum value for the root node. In this game tree there are only 4 layers, so we reach the root node immediately, but in real games there will be more than 4 layers.
That was the complete workflow of the minimax two player game.
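The workflow above can be sketched as a short recursive function. The tree shape and leaf values below are illustrative placeholders, since the figure the text refers to is not reproduced here.

```python
# Recursive minimax over a game tree given as nested lists.
# Inner lists are internal nodes; numbers are terminal utility values.
# The player to move alternates at each level (DFS down to the leaves).

def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):  # terminal state: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Root (MAX) -> two MIN nodes -> four MAX nodes -> terminal values.
tree = [[[2, 3], [5, 9]],
        [[0, 1], [7, 5]]]
print(minimax(tree))  # backed-up value at the root: 3
```

The recursion mirrors the steps above: utilities are read at the terminal nodes, then maximums and minimums are backed up layer by layer until the root value is known.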
Properties of Mini-Max algorithm:
o Complete- The Min-Max algorithm is complete. It will definitely find a solution (if one exists) in a finite search tree.
o Optimal- The Min-Max algorithm is optimal if both opponents play optimally.
o Time complexity- As it performs DFS over the game tree, the time complexity of the Min-Max algorithm is O(b^m), where b is the branching factor of the game tree and m is the maximum depth of the tree.
o Space complexity- The space complexity of the Mini-Max algorithm is similar to that of DFS, which is O(bm).
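The exponential blow-up in the time complexity is easy to see numerically. The branching factors below are illustrative; b = 35 is an often-quoted rough branching factor for chess.

```python
# Number of nodes a full minimax search examines grows as b^m
# (branching factor b, search depth m).
def minimax_nodes(b, m):
    return b ** m

for b, m in [(3, 4), (10, 6), (35, 8)]:
    print(f"b={b}, m={m}: about {minimax_nodes(b, m):,} nodes")
```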
The main drawback of the minimax algorithm is that it gets really slow for complex games such as Chess, Go, etc. These games have a huge branching factor, and the player has lots of choices to decide between. This limitation of the minimax algorithm can be improved with alpha-beta pruning, which is discussed in the next topic.
18. Define Alpha and Beta. Discuss role of Alpha-Beta in game playing.
o Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization
technique for the minimax algorithm.
o As we have seen in the minimax search algorithm, the number of game states it has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can cut it in half. Hence there is a technique by which, without checking each node of the game tree, we can compute the correct minimax decision; this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta Algorithm.
o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree's leaves but also entire sub-trees.
o The two parameters can be defined as:
a. Alpha: The best (highest-value) choice we have found so far at any point along the
path of Maximizer. The initial value of alpha is -∞.
b. Beta: The best (lowest-value) choice we have found so far at any point along the
path of Minimizer. The initial value of beta is +∞.
o Alpha-beta pruning applied to a standard minimax algorithm returns the same move as the standard algorithm does, but it removes all the nodes that do not really affect the final decision and only make the algorithm slow. Pruning these nodes makes the algorithm fast.
Condition for Alpha-beta pruning:
α>=β
Let's take an example of a two-player search tree to understand the working of alpha-beta pruning.
Step 1: At the first step, the Max player will make the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.
Step 2: At node D, the value of α will be calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3; max(2, 3) = 3 will be the value of α at node D, and the node value will also be 3.
Step 3: Now the algorithm backtracks to node B, where the value of β will change, as this is Min's turn. Now β = +∞ is compared with the available subsequent node value, i.e. min(∞, 3) = 3; hence at node B we now have α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed down as well.
Step 4: At node E, Max will take its turn, and the value of alpha will change. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α >= β, the right successor of E is pruned, the algorithm will not traverse it, and the value at node E will be 5.
Step 5: At the next step, the algorithm again backtracks the tree, from node B to node A. At node A, the value of alpha will be changed: the maximum available value is 3, as max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right successor of A, which is node C.
At node C, α = 3 and β = +∞, and the same values are passed on to node F.
Step 6: At node F, the value of α is again compared, first with the left child, which is 0, giving max(3, 0) = 3, and then with the right child, which is 1, giving max(3, 1) = 3. So α remains 3, but the node value of F becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta will be changed: it is compared with 1, so min(∞, 1) = 1. Now at C, α = 3 and β = 1, and again the condition α >= β is satisfied, so the next child of C, which is G, is pruned, and the algorithm will not compute the entire sub-tree of G.
Step 8: C now returns the value 1 to A, and the best value for A is max(3, 1) = 3. The final game tree shows the nodes which were computed and the nodes which were never computed. Hence the optimal value for the maximizer is 3 for this example.
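The eight steps above can be reproduced in code. The tree below follows the structure described in the walkthrough (D's leaves are 2 and 3, E's left leaf is 5, F's leaves are 0 and 1); the leaves the text says are pruned are given hypothetical placeholder values (9 under E, 7 and 5 under G), since the search never examines them.

```python
# Alpha-beta pruning over the example tree from the walkthrough.
# Leaves that the text says are pruned get placeholder values; the
# `visited` list records which leaves the search actually examines.

visited = []

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):  # terminal node
        visited.append(node)
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:           # pruning condition from the text
                break                   # remaining children are skipped
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# A (MAX) -> B, C (MIN) -> D, E, F, G (MAX) -> leaves.
tree = [[[2, 3], [5, 9]],   # B: D = [2, 3], E = [5, 9 (pruned)]
        [[0, 1], [7, 5]]]   # C: F = [0, 1], G = [7, 5] (whole sub-tree pruned)
best = alphabeta(tree)
print(best)      # optimal value for the maximizer: 3
print(visited)   # leaves examined: [2, 3, 5, 0, 1]; 9, 7 and 5 are skipped
```

The run visits only 5 of the 8 leaves and still returns 3, the same root value plain minimax computes, matching the claim that pruning never changes the final decision.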