
CSD 3102 - ARTIFICIAL INTELLIGENCE TECHNIQUES

III YEAR CSE, CS & IoT

MODULE II HEURISTIC SEARCH TECHNIQUES 9

General Search Algorithm – Uninformed Search Methods – BFS, Uniform Cost Search, Depth First Search, Depth Limited Search (DLS), Iterative Deepening – Informed Search – Introduction – Generate and Test, Best First Search, A* Search, Memory Bounded Heuristic Search – Local Search Algorithms and Optimization Problems – Hill Climbing and Simulated Annealing.
Search Techniques

⚫ Un-informed (Blind) Search Techniques do not take into account the location of the goal. Intuitively, these algorithms ignore where they are going until they find a goal and report success. Uninformed search methods use only the information available in the problem definition and in past explorations, e.g. the cost of the path generated so far. Examples are
⚫ – Breadth-first search (BFS)
⚫ – Depth-first search (DFS)
⚫ – Iterative deepening search (IDS)
⚫ – Bi-directional search

Un-informed Search Techniques
Breadth-first search (BFS): at each level we expand all nodes (possible candidates), so if a solution exists it will be found.
Space complexity is O(|V|), whereas time complexity is O(|V| + |E|) for a graph with vertex set V and edge set E; |V| denotes the cardinality of V.
⚫ It is complete and optimal (for uniform step costs); its main drawback is that it needs a lot of memory, so it is best used when space is not a problem.

Algorithm BFS
The algorithm uses a queue data structure to store intermediate results as it traverses the
graph, as follows:
1. Create a queue and add the root node to it.
2. Remove a node from the front of the queue and examine it.
⚫ If the element sought is found in this node, quit the search and return a result.
⚫ Otherwise append to the queue any successors (the direct child nodes) that have not yet been discovered.
3. If the queue is empty, every reachable node on the graph has been examined – quit the search and return "not found".
4. If the queue is not empty, repeat from Step 2.
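A minimal Python sketch of the queue-based procedure above, assuming the graph is given as an adjacency-list dictionary (the example graph and node names are hypothetical):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search over an adjacency-list graph.
    Returns a path from start to goal, or None if none exists."""
    queue = deque([[start]])              # queue of partial paths (Step 1)
    discovered = {start}                  # nodes already placed on the queue
    while queue:
        path = queue.popleft()            # Step 2: remove the oldest node
        node = path[-1]
        if node == goal:                  # element sought found -> return a result
            return path
        for child in graph.get(node, []):
            if child not in discovered:   # append undiscovered successors
                discovered.add(child)
                queue.append(path + [child])
    return None                           # Step 3: queue empty -> "not found"

# Hypothetical example graph:
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'], 'D': ['F'], 'E': ['F']}
print(bfs(graph, 'A', 'F'))               # ['A', 'B', 'D', 'F']
```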

Un-informed Search Techniques

Depth-first search (DFS): we start with a node and explore as deeply as possible the solutions reachable from that node.
Time and space complexity: time is O(|V| + |E|), space is O(|V|).
⚫ It is not complete and not optimal, and it may get stuck following an infinite path.

DFS starts at the root node and explores as far as possible along each branch before backtracking.

1. Create a stack and push the root node onto it.
2. Remove (pop) a node from the stack and examine it.
⚫ If the element sought is found in this node, quit the search and return a result.
⚫ Otherwise push any successors (the direct child nodes) that have not yet been discovered, so that they sit above the existing nodes on the stack.
3. If the stack is empty, every reachable node on the graph has been examined – quit the search and return "not found".
4. If the stack is not empty, repeat from Step 2.
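The same traversal with a stack gives DFS; a minimal sketch under the same assumptions as the BFS example (hypothetical adjacency-list graph):

```python
def dfs(graph, start, goal):
    """Depth-first search over an adjacency-list graph.
    Returns a path from start to goal, or None if none exists."""
    stack = [[start]]                     # stack of partial paths (Step 1)
    discovered = {start}
    while stack:
        path = stack.pop()                # Step 2: remove the most recently added node
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in discovered:   # push undiscovered successors so they are
                discovered.add(child)     # examined before the existing (older) nodes
                stack.append(path + [child])
    return None                           # Step 3: stack empty -> "not found"

# Hypothetical example graph (same as in the BFS sketch):
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'], 'D': ['F'], 'E': ['F']}
print(dfs(graph, 'A', 'F'))               # ['A', 'C', 'E', 'F']
```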

BFS for a water jug problem

[Slide figure: BFS search tree for the (4-litre, 3-litre) water jug problem. The root (0,0) is expanded level by level: its children are (4,0) and (0,3); the next level contains (0,0), (1,3), (4,3), (0,0) and (3,0); (4,3) is shown again at the following level.]
DFS for a water jug problem

[Slide figure: DFS search path for the same problem, following a single branch: (0,0) → (4,0) → (4,3) → (0,3) → (3,0) → (3,3) → (4,2) → (0,2) → (2,0).]
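Both trees above are generated by the successor function of the two-jug problem. A minimal sketch, assuming a 4-litre and a 3-litre jug with the usual fill, empty and pour moves (the goal test, e.g. 2 litres in the larger jug, is left to the caller):

```python
def successors(state):
    """Next states for the (4-litre, 3-litre) water jug problem."""
    x, y = state                          # x = amount in the 4-litre jug, y = in the 3-litre jug
    moves = {
        (4, y),                           # fill the 4-litre jug
        (x, 3),                           # fill the 3-litre jug
        (0, y),                           # empty the 4-litre jug
        (x, 0),                           # empty the 3-litre jug
        (x - min(x, 3 - y), y + min(x, 3 - y)),   # pour the 4-litre jug into the 3-litre jug
        (x + min(y, 4 - x), y - min(y, 4 - x)),   # pour the 3-litre jug into the 4-litre jug
    }
    moves.discard(state)                  # drop moves that change nothing
    return moves

print(successors((0, 0)))                 # {(4, 0), (0, 3)}
print(successors((4, 0)))                 # {(0, 0), (1, 3), (4, 3)} (order may vary)
```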
⚫ Search Techniques
⚫ Informed Search Techniques: a search strategy that is better than another at identifying the most promising branches of a search space is said to be more informed. It incorporates an additional measure of the potential of a specific state to reach the goal. The potential of a state (node) to reach a goal is measured through a heuristic function. These are also called intelligent search techniques.
⚫ Best first search
⚫ Greedy search
⚫ A* search

⚫ In every informed search (Best First or A* Search) there is a heuristic function h(n) and/or a path-cost function g(n). The heuristic function evaluated at every state decides the direction in which the next search is to be made.

⚫ Algorithm for Greedy Best First Search
Let h(n) be the heuristic function on the graph. In the simplest case, let it be the straight-line distance (SLD) from a node to the destination.
1. Start from the source node S, determine all nodes reachable directly from S and queue them.
2. Examine the queued nodes (as generated in step 1).
 If one of them is the desired destination node, stop and return success.
 Otherwise evaluate h(n) for each of them. The node with the minimum (best) h(n) becomes the next node to expand; call this node S.
3. Repeat steps 1 and 2.
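A minimal Python sketch of the procedure, assuming the map is an adjacency-list dictionary and h is a table of straight-line distances (both the toy graph and the h values below are made up for illustration):

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the frontier node
    with the smallest heuristic value h(n)."""
    frontier = [(h[start], [start])]          # priority queue ordered by h(n)
    visited = set()
    while frontier:
        _, path = heapq.heappop(frontier)     # node with minimum h(n) becomes the new S
        node = path[-1]
        if node == goal:                      # desired destination reached
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (h[nxt], path + [nxt]))
    return None

# Hypothetical toy map and straight-line-distance table:
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G']}
h = {'S': 10, 'A': 3, 'B': 7, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))  # ['S', 'A', 'G']
```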

Algorithm for A* Search
Let h(n) be the heuristic function on the graph. In the simplest case, let it be the straight-line distance (SLD) from a node to the destination. Let g(n) be the cost of the path from the source to the current node n. Then
f(n) = g(n) + h(n)
1. Start from the source node S, determine all nodes reachable directly from S and queue them.
2. Examine the queued nodes (as generated in step 1).
 * If one of them is the desired destination node, stop and return success.
 * Otherwise evaluate f(n) for each of them. The node with the minimum (best) f(n) becomes the next node to expand; call this node S.
3. Repeat steps 1 and 2.
A* is complete and optimal when h(n) is admissible, i.e. it never overestimates the actual distance from n to the goal. Its running time is exponential in the worst case, and becomes polynomial when the error of h(n) stays within O(log h*(n)), where h*(n) is the actual distance from n to the goal.
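A minimal A* sketch following the same scheme, assuming each edge carries a step cost and h is an admissible estimate (the toy graph, costs and h values are made up for illustration):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search ordered by f(n) = g(n) + h(n).
    graph[u] is a list of (neighbor, step_cost) pairs; h[u] estimates the distance to goal."""
    frontier = [(h[start], 0, [start])]           # entries are (f(n), g(n), path)
    best_g = {start: 0}
    while frontier:
        f, g, path = heapq.heappop(frontier)      # node with minimum f(n) becomes the new S
        node = path[-1]
        if node == goal:
            return g, path                        # path cost and the path itself
        for nxt, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):    # keep only the cheapest g(n) per node
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h[nxt], g2, path + [nxt]))
    return None

# Hypothetical toy map with edge costs and an admissible h table:
graph = {'S': [('A', 2), ('B', 1)], 'A': [('G', 2)], 'B': [('G', 5)]}
h = {'S': 3, 'A': 2, 'B': 4, 'G': 0}
print(a_star(graph, h, 'S', 'G'))                 # (4, ['S', 'A', 'G'])
```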

Best First Search: An Example
There are cities in a country (Romania). The task is to reach B(ucharest) from A(rad).

Method: Greedy Best First Search: start from the source (Arad). For each node n reachable from the current node S, write down the heuristic value h(n). Proceed in the direction in which h(n) is minimum. Repeat until the goal (destination: Bucharest) is reached.

Method: A* Search: start from the source (Arad). For each node n reachable from the current node S, calculate f(n) = g(n) + h(n), where h(n) is the heuristic value and g(n) is the total distance travelled so far. Proceed in the direction in which f(n) is minimum. Repeat until the goal (destination: Bucharest) is reached.

Method: A* Search: start from the source (Arad). For each node reachable from the current node S, write down the heuristic value h(n) and add the total distance travelled so far, g(n). Proceed in the direction in which f(n) = g(n) + h(n) is minimum. Repeat until the goal (destination: Bucharest) is reached.

Heuristics
⚫ Where exhaustive search is impractical, heuristic methods are used to speed up the process of finding a satisfactory solution via mental shortcuts that ease the cognitive load of making a decision. Examples of this method include using a rule of thumb, an educated guess, an intuitive judgment, stereotyping, or common sense.
⚫ In more precise terms, heuristics are strategies that use readily accessible, though loosely applicable, information to control problem solving in human beings and machines. Trial and error is the simplest form of heuristic: for example, we can try fitting values of the variables into an algebraic equation until it is satisfied.

Local Search Algorithms
⚫ Generate and Test
1. Generate a possible solution.
2. Test to see if this is actually a solution.
3. Quit if a solution has been found; otherwise, return to step 1.

Features:
1. Acceptable for simple problems.
2. Inefficient for problems with a large search space.
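A minimal sketch of generate-and-test on a toy problem (the constraint below is hypothetical, chosen only to illustrate the loop):

```python
from itertools import product

def generate_and_test(candidates, is_solution):
    """Generate candidate solutions one by one and test each; stop at the first hit."""
    for candidate in candidates:          # 1. generate a possible solution
        if is_solution(candidate):        # 2. test whether it really is a solution
            return candidate              # 3. quit once a solution has been found
    return None                           # otherwise the generator is exhausted

# Toy problem: find digits a, b, c with a + b + c == 15 and a * b * c == 120.
solution = generate_and_test(
    product(range(10), repeat=3),
    lambda t: sum(t) == 15 and t[0] * t[1] * t[2] == 120,
)
print(solution)                           # (4, 5, 6)
```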

Local Search Algorithms
⚫ They operate on a single current state rather than on multiple paths.
⚫ They generally move only to neighbors of that state.
⚫ The paths followed by the search are not retained, hence the method is not systematic.
Benefits:
1. They use little memory – usually a constant amount, just the current state and some bookkeeping information.
2. They can often find reasonable solutions in large or infinite (continuous) state spaces
⚫ where systematic algorithms are unsuitable.
Local Search

⚫ A state-space landscape has two axes:
⚫ location (defined by the states)
⚫ elevation or height (defined by the objective function or by the value of the heuristic cost function)
⚫ When elevation corresponds to cost, the aim is to find the global minimum; when it corresponds to the objective function (e.g. profit), the aim is to find the global maximum.
Local Search: Hill Climbing

⚫ Hill Climbing is an iterative algorithm that starts with an arbitrary solution to a problem, then attempts to find a better solution by incrementally changing a single element of the solution. If the change produces a better solution, an incremental change is made to the new solution, repeating until no further improvements can be found.
⚫ In simple hill climbing, the first closer node is chosen, whereas in steepest ascent hill climbing all successors are compared and the one closest to the solution is chosen. Both forms fail if there is no closer node, which may happen if there are local maxima in the search space which are not solutions.
⚫ Steepest ascent hill climbing is similar to best-first search, which tries all possible extensions of the current path instead of only one.
⚫ Stochastic hill climbing does not examine all neighbors before deciding how to move. Rather, it selects a neighbor at random and decides (based on the amount of improvement in that neighbor) whether to move to that neighbor or to examine another.
Simple Hill Climbing
• It is the simplest form of the Hill Climbing algorithm. It only takes the neighboring node into account for its operation.
• If the neighboring node is better than the current node, it sets the neighbor node as the current node.
• The algorithm checks only one neighbor at a time. The following are a few of the key features of the Simple Hill Climbing algorithm:
• Since it needs little computation power, it takes less time.
• The algorithm may return a sub-optimal solution, and at times a solution is not guaranteed.

Algorithm
1. Examine the current state; return success if it is a goal state.
2. Continue the loop until a solution is found or no operators are left to apply.
3. Apply an operator to the current state to obtain a new state.
4. Check the new state:
If New State = Goal State, return success and exit
Else if the new state is better than the current state, make it the current state
Else return to step 2
5. Exit
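A minimal sketch of the loop above on a toy one-dimensional objective (the objective function and neighbor moves are made up for illustration):

```python
import random

def simple_hill_climbing(initial, neighbors, value):
    """Simple hill climbing: move to the FIRST neighbor that is better
    than the current state; stop when no operator improves it."""
    current = initial
    while True:
        for nxt in neighbors(current):        # apply operators one at a time
            if value(nxt) > value(current):   # first better neighbor wins
                current = nxt
                break
        else:                                 # no operator produced a better state
            return current                    # exit (possibly only a local maximum)

# Toy objective: maximise f(x) = -(x - 7)**2 over the integers.
f = lambda x: -(x - 7) ** 2
step = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(random.randint(0, 20), step, f))   # 7
```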
Steepest-Ascent Hill Climbing
• Steepest-Ascent hill climbing is an advanced form of the simple Hill Climbing algorithm.
• It runs through all the nearest neighbor nodes and selects the node which is nearest to the goal state.
• The algorithm requires more computation power than the Simple Hill Climbing algorithm, as it searches through multiple neighbors at once.
Algorithm:
1. Examine the current state; return success if it is a goal state.
2. Continue the loop until a solution is found or no operators are left to apply.
Let 'Temp' be a state such that any successor of the current state will have a higher value of the objective function than Temp. For all operators that can be applied to the current state:
• Apply the operator to create a new state
• Examine the new state
• If New State = Goal State, return success and exit
• Else if the new state is better than Temp, set Temp to this new state
• If Temp is better than the Current State, set the Current State to Temp
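A sketch of the steepest-ascent variant on the same toy objective; here 'Temp' is simply the best successor found in each pass:

```python
def steepest_ascent_hill_climbing(initial, neighbors, value):
    """Steepest-ascent hill climbing: examine ALL successors and move to the
    best one, as long as it improves on the current state."""
    current = initial
    while True:
        succ = list(neighbors(current))
        if not succ:                              # no operators left to apply
            return current
        temp = max(succ, key=value)               # 'Temp': the best successor
        if value(temp) <= value(current):         # no successor is an improvement
            return current                        # local maximum (or plateau) reached
        current = temp                            # set the current state to Temp

# Same toy objective as in the simple hill climbing sketch.
f = lambda x: -(x - 7) ** 2
step = lambda x: [x - 2, x - 1, x + 1, x + 2]
print(steepest_ascent_hill_climbing(0, step, f))  # 7
```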
Stochastic Hill Climbing
• Stochastic Hill Climbing does not look at all of its neighboring nodes to check whether one is better than the current node. Instead, it randomly selects one neighboring node and, based on a pre-defined criterion, decides whether to move to that neighbor or to examine another.

Algorithm:
• Evaluate the initial state. If it is a goal state, stop and return success. Otherwise, make the initial state the current state.
• Repeat these steps until a solution is found or the current state does not change:
• Select an operator that has not yet been applied to the current state.
• Apply the successor function to the current state and generate the neighbor states.
• Among the generated neighbor states that are better than the current state, choose one at random (or based on some probability function).
• If the chosen state is the goal state, return success; otherwise make it the current state and repeat from the second step.
• Exit from the function.
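A sketch of the stochastic variant on the same toy objective, using "accept only if better" as the pre-defined criterion (a fixed step budget replaces the goal test):

```python
import random

def stochastic_hill_climbing(initial, neighbors, value, max_steps=1000):
    """Stochastic hill climbing: pick ONE random neighbor and accept it
    only if it improves on the current state."""
    current = initial
    for _ in range(max_steps):
        nxt = random.choice(list(neighbors(current)))   # one randomly selected neighbor
        if value(nxt) > value(current):                 # pre-defined acceptance criterion
            current = nxt
    return current

# Same toy objective as before; the search usually settles at x = 7.
f = lambda x: -(x - 7) ** 2
step = lambda x: [x - 1, x + 1]
print(stochastic_hill_climbing(0, step, f))
```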
Advantages of the Hill Climbing Algorithm

• Hill Climbing is very useful in routing-related problems such as the Travelling Salesman Problem, Job Scheduling, Chip Design, and Portfolio Management.
• It is good at solving optimization problems while using only limited computation power.
