Informed Search Strategies in AI

This document discusses informed search strategies in artificial intelligence, highlighting the differences between uninformed and informed searches. It covers various algorithms such as A*, greedy best-first search, recursive best-first search, and local search algorithms, including hill climbing and simulated annealing. Additionally, it explores adversarial search techniques, including the minimax procedure and alpha-beta pruning for optimizing decision-making in competitive environments.


Artificial Intelligence & Machine Learning
Module 1.2
INFORMED (HEURISTIC) SEARCH STRATEGIES

• This section shows how an informed search strategy—one that uses problem-specific knowledge beyond the definition of the problem itself—can find solutions more efficiently than can an uninformed strategy.
Uninformed & Informed Search

Uninformed Search                      Informed Search
1. Searches without information        1. Searches with information
2. Uses no domain knowledge            2. Uses knowledge to find steps toward the solution
3. Time consuming                      3. Reaches a solution quickly
4. Higher complexity (time, space)     4. Lower complexity (time, space)
5. Examples: DFS, BFS                  5. Examples: A*, RBFS, Best-First Search
Heuristic Function

• The purpose of a heuristic function is to guide the search process along the most profitable path among all those available.
• A heuristic evaluation function h(n) estimates the cost of the cheapest path from the state at node n to a goal state.
• A heuristic evaluation function that accurately reflects the actual cost of reaching the goal state tells us very clearly which nodes in the state space to expand next, and leads quickly to the goal state.
• Straight-line distance (admissible by the triangle inequality)
• Manhattan distance
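As a sketch, the two heuristics named above can be written as plain Python functions over (x, y) coordinates; the coordinate representation is an illustrative assumption:

```python
import math

def straight_line_distance(a, b):
    """Euclidean (straight-line) distance between two (x, y) points; by the
    triangle inequality it never overestimates the true travel cost."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def manhattan_distance(a, b):
    """Sum of the horizontal and vertical offsets; appropriate for grids
    that allow only 4-directional movement."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

print(straight_line_distance((0, 0), (3, 4)))  # 5.0
print(manhattan_distance((0, 0), (3, 4)))      # 7
```

Each is admissible for its own movement model: straight-line distance for unrestricted movement, Manhattan distance for 4-directional grid movement.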
Best-First Search

• The general approach we consider is called best-first search.
• Best-first search is an instance of the general TREE-SEARCH or GRAPH-SEARCH algorithm in which a node is selected for expansion based on an evaluation function, f(n).
• The evaluation function is construed as a cost estimate, so the node with the lowest evaluation is expanded first. The implementation of best-first graph search is identical to that for uniform-cost search, except for the use of f instead of g to order the priority queue.
• The choice of f determines the search strategy. Most best-first algorithms include as a component of f a heuristic function, denoted h(n):
  h(n) = estimated cost of the cheapest path from the state at node n to a goal state.
• Notice that h(n) takes a node as input but, unlike g(n), depends only on the state at that node.
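The description above can be sketched as a generic best-first graph search parameterized by the evaluation function f; the helper names (`successors`, `goal_test`) and the toy graph are illustrative assumptions, not from the text:

```python
import heapq

def best_first_search(start, goal_test, successors, f):
    """Generic best-first graph search: a priority queue ordered by f.
    successors(state) yields (step_cost, next_state) pairs; the path cost
    g is tracked so that f can combine it with a heuristic if desired."""
    frontier = [(f(start, 0), start, 0, [start])]   # (f, state, g, path)
    reached = {start: 0}                            # best g found per state
    while frontier:
        _, state, g, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        for cost, nxt in successors(state):
            g2 = g + cost
            if nxt not in reached or g2 < reached[nxt]:
                reached[nxt] = g2
                heapq.heappush(frontier, (f(nxt, g2), nxt, g2, path + [nxt]))
    return None, float("inf")

# Uniform-cost search is the special case f(n) = g(n):
graph = {'S': [(1, 'A'), (4, 'B')], 'A': [(2, 'G')], 'B': [(1, 'G')], 'G': []}
path, cost = best_first_search('S', lambda s: s == 'G',
                               lambda s: graph[s], lambda s, g: g)
print(path, cost)  # ['S', 'A', 'G'] 3
```

Swapping in f(n) = h(n) gives greedy best-first search, and f(n) = g(n) + h(n) gives A*.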
Best-First Search
Greedy best-first search

• Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly. Thus, it evaluates nodes by using just the heuristic function; that is, f(n) = h(n).
• Let us see how this works for route-finding problems in Romania; we use the straight-line distance heuristic, which we will call hSLD. If the goal is Bucharest, we need to know the straight-line distances to Bucharest, which are shown in Figure. For example, hSLD(In(Arad)) = 366.
• Notice that the values of hSLD cannot be computed from the problem description itself. Moreover, it takes a certain amount of experience to know that hSLD is correlated with actual road distances and is, therefore, a useful heuristic.
• The worst-case time and space complexity for the tree version is O(b^m), where m is the maximum depth of the search space.
• Like DFS, it is incomplete.
Greedy Best-First Search
A* search: Minimizing the total estimated solution cost

• The most widely known form of best-first search is called A∗ search (pronounced “A-star
search”). It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost to
get from the node to the goal:
f(n) = g(n) + h(n)
• Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated cost of
the cheapest path from n to the goal, we have f (n) = estimated cost of the cheapest solution
through n .
• Thus, if we are trying to find the cheapest solution, a reasonable thing to try first is the node
with the lowest value of g(n) + h(n).
• It turns out that this strategy is more than just reasonable: provided that the heuristic function
h(n) satisfies certain conditions, A∗ search is both complete and optimal. The algorithm is
identical to UNIFORM-COST-SEARCH except that A∗ uses g + h instead of g.
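As an illustration, here is a minimal A* on a fragment of the Romania map; the road distances and straight-line distances follow the standard textbook figure and should be treated as assumptions here:

```python
import heapq

# A fragment of the Romania road map (edge costs) and straight-line
# distances to Bucharest (hSLD), per the standard figure.
roads = {
    'Arad': {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
    'Sibiu': {'Arad': 140, 'Fagaras': 99, 'RimnicuVilcea': 80, 'Oradea': 151},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
    'RimnicuVilcea': {'Sibiu': 80, 'Pitesti': 97, 'Craiova': 146},
    'Pitesti': {'RimnicuVilcea': 97, 'Bucharest': 101, 'Craiova': 138},
    'Timisoara': {'Arad': 118}, 'Zerind': {'Arad': 75},
    'Oradea': {'Sibiu': 151}, 'Craiova': {'RimnicuVilcea': 146, 'Pitesti': 138},
    'Bucharest': {},
}
h_sld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'RimnicuVilcea': 193,
         'Pitesti': 100, 'Timisoara': 329, 'Zerind': 374, 'Oradea': 380,
         'Craiova': 160, 'Bucharest': 0}

def a_star(start, goal):
    """A* graph search: always expand the node with the lowest f = g + h."""
    frontier = [(h_sld[start], 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, cost in roads[state].items():
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h_sld[nxt], g2, nxt, path + [nxt]))
    return None, float('inf')

print(a_star('Arad', 'Bucharest'))
# (['Arad', 'Sibiu', 'RimnicuVilcea', 'Pitesti', 'Bucharest'], 418)
```

Note that A* returns the 418 km route via Pitesti, whereas greedy best-first search would take the shorter-looking but costlier route via Fagaras (450 km).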
A* search: Minimizing the total estimated solution cost

[Figure: worked A* example on a graph with start node S, goal nodes G1, G2, G3, labeled edge costs, and a heuristic value at each node]
Recursive best-first search (RBFS)

• Recursive best-first search (RBFS) is a simple recursive algorithm that attempts to mimic the operation of standard best-first search, but using only linear space.
• Its structure is similar to that of a recursive depth-first search, but rather than continuing indefinitely down the current path, it uses the f_limit variable to keep track of the f-value of the best alternative path available from any ancestor of the current node.
• If the current node exceeds this limit, the recursion unwinds back to the alternative path. As the recursion unwinds, RBFS replaces the f-value of each node along the path with a backed-up value—the best f-value of its children.
• In this way, RBFS remembers the f-value of the best leaf in the forgotten subtree and can therefore decide whether it's worth re-expanding the subtree at some later time.
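A minimal sketch of RBFS along the lines described above; the helper names and the toy graph with its heuristic values are illustrative assumptions:

```python
import math

def rbfs_search(start, goal_test, successors, h):
    """Recursive best-first search: best-first behaviour in linear space.
    Forgotten subtrees are summarised by a backed-up f-value, so they can
    be re-expanded later if they become the best alternative again."""
    def rbfs(state, g, f, f_limit):
        if goal_test(state):
            return [state], f
        # Children inherit at least the parent's backed-up f-value.
        succs = [[max(g + cost + h(s), f), g + cost, s]
                 for cost, s in successors(state)]
        if not succs:
            return None, math.inf
        while True:
            succs.sort()                      # best (lowest-f) child first
            best = succs[0]
            if best[0] > f_limit:
                return None, best[0]          # unwind to the alternative path
            alternative = succs[1][0] if len(succs) > 1 else math.inf
            result, best[0] = rbfs(best[2], best[1], best[0],
                                   min(f_limit, alternative))
            if result is not None:
                return [state] + result, best[0]

    return rbfs(start, 0, h(start), math.inf)

# Toy graph: (cost, successor) pairs per node; h values assumed for illustration.
graph = {'S': [(1, 'A'), (2, 'B')], 'A': [(4, 'G')], 'B': [(2, 'G')], 'G': []}
h = {'S': 3, 'A': 4, 'B': 2, 'G': 0}
path, f = rbfs_search('S', lambda s: s == 'G',
                      lambda s: graph[s], lambda s: h[s])
print(path, f)  # ['S', 'B', 'G'] 4
```

Only the current path and the f-values of its siblings are kept in memory, which is what makes the space linear in the depth of the search.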
Recursive Best-First Search (RBFS) Example

[Figure: RBFS algorithm and worked example]
Local Search algorithm and Optimization problem

• If the path to the goal does not matter, we might consider a different class of algorithms, ones that
do not worry about paths at all.
• Local search algorithms operate using a single current node (rather than multiple paths) and
generally move only to neighbours of that node.
• Typically, the paths followed by the search are not retained. Although local search algorithms are not
systematic, they have two key advantages:
1. They use very little memory—usually a constant amount; and
2. They can often find reasonable solutions in large or infinite (continuous) state spaces for which
systematic algorithms are unsuitable.
• In addition to finding goals, local search algorithms are useful for solving pure optimization
problems, in which the aim is to find the best state according to an objective function.
• Many optimization problems do not fit the “standard” search model introduced in previous topics.
Local Search algorithm and Optimization problem

• To understand local search, we find it useful to consider the state-space landscape.
• A landscape has both "location" (defined by the state) and "elevation" (defined by the value of the heuristic cost function or objective function).
• If elevation corresponds to cost, then the aim is to find the lowest valley—a global minimum; if elevation corresponds to an objective function, then the aim is to find the highest peak—a global maximum. (You can convert from one to the other just by inserting a minus sign.)
• Local search algorithms explore this landscape. A complete local search algorithm always finds a goal if one exists; an optimal algorithm always finds a global minimum/maximum.
Hill Climbing

• The hill-climbing search algorithm (steepest-ascent version), shown in Figure, is simply a loop that continually moves in the direction of increasing value—that is, uphill. It terminates when it reaches a "peak" where no neighbour has a higher value.
• The algorithm does not maintain a search tree, so the data structure for the current node need only record the state and the value of the objective function. Hill climbing does not look ahead beyond the immediate neighbours of the current state. This resembles trying to find the top of Mount Everest in a thick fog while suffering from amnesia.
• Hill climbing is the most basic local search technique. At each step, the current node is replaced by the best neighbour; in this version, that means the neighbour with the highest VALUE, but if a heuristic cost estimate h is used, we would instead find the neighbour with the lowest h.
• Hill climbing is a local search with a greedy approach and no backtracking.
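The loop described above might be sketched as follows; the toy objective and neighbourhood functions are illustrative assumptions:

```python
def hill_climbing(initial, neighbors, value):
    """Steepest-ascent hill climbing: move to the best neighbour until no
    neighbour improves on the current state (may stop at a local maximum)."""
    current = initial
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current               # peak reached: no uphill move left
        current = best

# Toy objective on the integers 0..10 with its maximum at x = 6.
def value(x): return -(x - 6) ** 2
def neighbors(x): return [n for n in (x - 1, x + 1) if 0 <= n <= 10]

print(hill_climbing(0, neighbors, value))  # 6
```

Because this landscape has a single peak, the climb always succeeds; on the landscapes described on the next slide it can get stuck.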
Hill Climbing

Unfortunately, hill climbing often gets stuck for the following reasons:
• Local maxima: a local maximum is a peak that is higher than each
of its neighbouring states but lower than the global maximum. Hill
climbing algorithms that reach the vicinity of a local maximum
will be drawn upward toward the peak but will then be stuck with
nowhere else to go.
• Plateau: a plateau is a flat area of the state-space landscape. It can
be a flat local maximum, from which no uphill exit exists, or a
shoulder, from which progress is possible.
• Ridges: Ridges result in a sequence of local maxima that is very
difficult for greedy algorithms to navigate.
Simulated Annealing

• A hill-climbing algorithm that never makes "downhill" moves toward states with lower value (or higher cost) is guaranteed to be incomplete, because it can get stuck on a local maximum. In contrast, a purely random walk—that is, moving to a successor chosen uniformly at random from the set of successors—is complete but extremely inefficient.
• Therefore, it seems reasonable to try to combine hill climbing with a random walk in some way that yields both efficiency and completeness. Simulated annealing is such an algorithm.
• In metallurgy, annealing is the process used to temper or harden metals and glass by heating them to a high temperature and then gradually cooling them, thus allowing the material to reach a low-energy crystalline state.
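A minimal sketch of simulated annealing in this spirit; the toy landscape, neighbourhood, and cooling schedule below are all illustrative assumptions:

```python
import math
import random

def simulated_annealing(initial, neighbors, value, schedule):
    """Simulated annealing: a hill climb that sometimes moves downhill.
    A worse successor is accepted with probability exp(delta / T), so bad
    moves are common early (high T) and rare as the temperature cools."""
    current = initial
    t = 1
    while True:
        T = schedule(t)
        if T <= 0:
            return current
        nxt = random.choice(neighbors(current))
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt
        t += 1

# Toy landscape: local maximum at x = 2, global maximum at x = 8 (assumed).
def value(x): return [0, 3, 5, 3, 1, 2, 4, 7, 9, 6, 0][x]
def neighbors(x): return [n for n in (x - 1, x + 1) if 0 <= n <= 10]
def schedule(t): return 2.0 * (0.99 ** t) if t < 2000 else 0.0

random.seed(0)
print(simulated_annealing(0, neighbors, value, schedule))
```

The early high-temperature phase lets the walk escape the local maximum at x = 2, something plain hill climbing started at x = 0 could never do.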
Local Beam Search

• Keeping just one node in memory might seem to be an extreme reaction to the problem of memory limitations. The local beam search algorithm keeps track of k states rather than just one.
• It begins with k randomly generated states. At each step, all the successors of all k states are generated. If any one is a goal, the algorithm halts. Otherwise, it selects the k best successors from the complete list and repeats.

Example trace with k = 2 (frontier vs. explored set):

Frontier List    Explored
A, B             S
C, E             S, A, B
H, J             S, A, B, C, E
L, N             S, A, B, C, E, H, J
                 S, A, B, C, E, H, J, L, N
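The k-state bookkeeping described above can be sketched as follows; the toy objective is an illustrative assumption:

```python
import heapq

def local_beam_search(initial_states, neighbors, value, goal_test, max_iters=100):
    """Local beam search: keep the k best states among all successors of
    the current k states (k = len(initial_states))."""
    k = len(initial_states)
    beam = list(initial_states)
    for _ in range(max_iters):
        for s in beam:
            if goal_test(s):
                return s
        # Pool all successors of all beam states, then keep the k best.
        pool = {n for s in beam for n in neighbors(s)}
        if not pool:
            break
        beam = heapq.nlargest(k, pool, key=value)
    return max(beam, key=value)

def value(x): return -(x - 6) ** 2
def neighbors(x): return [n for n in (x - 1, x + 1) if 0 <= n <= 10]
print(local_beam_search([0, 10], neighbors, value, lambda x: value(x) == 0))  # 6
```

Unlike k independent restarts, the beam concentrates its states: successors of the currently best states crowd out successors of the worse ones.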
Genetic Algorithm

• John Holland introduced genetic algorithms in the 1960s, based on the concept of Darwin's theory of evolution; his student David E. Goldberg further extended GA in 1989.
• It is an abstraction of real biological evolution.
• It is used to solve complex problems (e.g., NP-hard problems).
• It focuses on optimization.
• Examples: MaxOne, 0/1 Knapsack.
Genetic Algorithm

1. Initial Population
2. Calculate Fitness
3. Selection
4. Crossover
5. Mutation
6. Stopping criteria met? If no, return to step 2; if yes, stop.
Maxone Problem

[Figure: one generation of the MaxOne GA, showing the initial population, selection based on fitness, crossover applied to 60% of the population, and predefined mutation]

• In one generation, the total population fitness changed from 34 to 37, an improvement of about 9%. At this point, we go through the same process all over again, until a stopping criterion is met.
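The generation loop (fitness, selection, crossover, mutation, stopping criterion) might be sketched for MaxOne as below; the population size, rates, and operator choices are illustrative assumptions:

```python
import random

def maxone_ga(pop_size=6, length=10, mutation_rate=0.05, generations=200):
    """A minimal genetic algorithm for the MaxOne problem: evolve bit
    strings toward all 1s. Fitness = number of 1 bits."""
    fitness = lambda ind: sum(ind)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        if any(fitness(ind) == length for ind in pop):
            break                                  # stopping criterion met
        new_pop = []
        for _ in range(pop_size):
            # Selection: fitness-proportionate (roulette-wheel) choice of parents.
            mom, dad = random.choices(pop, weights=[fitness(i) + 1 for i in pop], k=2)
            cut = random.randrange(1, length)      # single-point crossover
            child = mom[:cut] + dad[cut:]
            # Mutation: flip each bit with a small probability.
            child = [b ^ 1 if random.random() < mutation_rate else b for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

random.seed(1)
best = maxone_ga()
print(best, sum(best))
```

With such a small population the run may stop short of the all-ones string; larger populations or elitism (always carrying the best individual forward) make convergence more reliable.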
Adversarial Search

• In this we cover competitive environments, in which the agents’ goals are in conflict,
giving rise to adversarial search problems—often known as games.
• Mathematical game theory, a branch of economics, views any multiagent environment as
a game, provided that the impact of each agent on the others is “significant,” regardless of
whether the agents are cooperative or competitive.
• In AI, the most common games are of a rather specialized kind—what game theorists call
deterministic, turn-taking, two-player, zero-sum games of perfect information (such as
chess).
• In our terminology, this means deterministic, fully observable environments in which two
agents act alternately and in which the utility values at the end of the game are always
equal and opposite. For example, if one player wins a game of chess, the other player
necessarily loses. It is this opposition between the agents’ utility functions that makes the
situation adversarial.
Adversarial Search

• We begin with a definition of the optimal move and an algorithm for finding it. We then look at techniques for choosing a good move when time is limited. Pruning allows us to ignore portions of the search tree that make no difference to the final choice, and heuristic evaluation functions allow us to approximate the true utility of a state without doing a complete search.
Adversarial Search

• A game can be formally defined as a kind of search problem with the following elements:
• S0: The initial state, which specifies how the game is set up at the start.
• PLAYER(s): Defines which player has the move in a state.
• ACTIONS(s): Returns the set of legal moves in a state.
• RESULT(s, a): The transition model, which defines the result of a move.
• TERMINAL-TEST(s): A terminal test, which is true when the game is over and false otherwise. States where the game has ended are called terminal states.
• UTILITY(s, p): A utility function (also called an objective function or payoff function) that defines the final numeric value for a game that ends in terminal state s for player p. In chess, the outcome is a win, loss, or draw, with values +1, −1, or 0, respectively.
MiniMax Procedure

• It is widely used in two-player turn-based games such as Tic-Tac-Toe or chess.
• It is an algorithm used in game theory to find the optimal move for a player, assuming that the opponent also plays optimally.
• In Minimax the two players are called the minimizer and the maximizer. The maximizer tries to obtain the highest score possible, while the minimizer tries to obtain the lowest score possible.
MiniMax Procedure

• Consider a game tree as follows:
• Assume that you are the maximizing player and you get the first chance to move, i.e., you are at the root and your opponent is at the next level.
MiniMax Procedure

[Figure: game tree with MAX at the root, two MIN nodes on the second level, four MAX nodes on the third level, and leaf utilities 4, 8, 9, 3, 2, -2, 9, -1]
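Backing up the leaf utilities 4, 8, 9, 3, 2, -2, 9, -1 through alternating MAX and MIN levels can be sketched as follows; the nested-list tree encoding is an illustrative choice:

```python
def minimax(node, maximizing):
    """Minimax on a game tree given as nested lists; leaves are utilities.
    MAX picks the largest backed-up value, MIN the smallest."""
    if not isinstance(node, list):          # terminal state: return utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# The tree from the slide: MAX root, MIN level, MAX level, then leaves.
tree = [[[4, 8], [9, 3]], [[2, -2], [9, -1]]]
print(minimax(tree, True))  # 8
```

Working upward: the bottom MAX nodes yield 8, 9, 2, 9; the MIN nodes yield 8 and 2; so the root MAX chooses the left branch with value 8.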
Alpha–Beta pruning

• The problem with minimax search is that the number of game states it must
examine is exponential in the depth of the tree.
• Unfortunately, we can’t eliminate the exponent, but it turns out we can
effectively cut it in half. The trick is that it is possible to compute the correct
minimax decision without looking at every node in the game tree.
• That is, we can borrow the idea of pruning from the previous chapter to eliminate large parts of the tree from consideration. The particular technique we examine is called alpha–beta pruning.
• When applied to a standard minimax tree, it returns the same move as
minimax would, but prunes away branches that cannot possibly influence the
final decision.
Alpha–Beta Pruning: Move Ordering

[Figure: effect of move ordering on alpha–beta pruning]
Alpha–Beta pruning

[Figure: alpha–beta pruning applied to the game tree with leaf utilities 4, 8, 9, 3, 2, -2, 9, -1; a branch is pruned at a MAX node when v ≥ β and at a MIN node when v ≤ α]
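The pruning conditions (cut off at a MAX node when v ≥ β, at a MIN node when v ≤ α) can be sketched in code as follows; the nested-list tree encoding is an illustrative choice:

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning: alpha is the best value MAX can
    guarantee so far, beta the best MIN can guarantee; prune on alpha >= beta."""
    if not isinstance(node, list):           # terminal state: return utility
        return node
    if maximizing:
        v = -math.inf
        for child in node:
            v = max(v, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, v)
            if alpha >= beta:
                break                        # beta cutoff: MIN will avoid this node
        return v
    v = math.inf
    for child in node:
        v = min(v, alphabeta(child, True, alpha, beta))
        beta = min(beta, v)
        if alpha >= beta:
            break                            # alpha cutoff: MAX will avoid this node
    return v

tree = [[[4, 8], [9, 3]], [[2, -2], [9, -1]]]
print(alphabeta(tree, True))  # 8, the same decision as plain minimax
```

The backed-up value at the root is unchanged; pruning only skips subtrees that could not alter it.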
CONSTRAINT SATISFACTION PROBLEMS

• This topic describes a way to solve a wide variety of problems more efficiently.
• We use a factored representation for each state: a set of variables, each of
which has a value.
• A problem is solved when each variable has a value that satisfies all the
constraints on the variable. A problem described this way is called a
constraint satisfaction problem, or CSP.
• CSP search algorithms take advantage of the structure of states and use
general-purpose rather than problem-specific heuristics to enable the solution
of complex problems.
• The main idea is to eliminate large portions of the search space all at once by
identifying variable/value combinations that violate the constraints.
DEFINING CONSTRAINT SATISFACTION PROBLEMS

• A constraint satisfaction problem consists of three components, X, D, and C:
• X is a set of variables, {X1, ..., Xn}.
• D is a set of domains, {D1, ..., Dn}, one for each variable.
• C is a set of constraints that specify allowable combinations of values.
• Each domain Di consists of a set of allowable values, {v1, ..., vk}, for variable Xi.
• Each constraint Ci consists of a pair ⟨scope, rel⟩, where scope is a tuple of variables that participate in the constraint and rel is a relation that defines the values that those variables can take on.
• A relation can be represented as an explicit list of all tuples of values that satisfy the constraint, or as an abstract relation that supports two operations: testing if a tuple is a member of the relation and enumerating the members of the relation.
DEFINING CONSTRAINT SATISFACTION PROBLEMS

• For example, if X1 and X2 both have the domain {A, B}, then the constraint saying the two variables must have different values can be written as ⟨(X1, X2), [(A, B), (B, A)]⟩ or as ⟨(X1, X2), X1 ≠ X2⟩.
• To solve a CSP, we need to define a state space and the notion of a solution.
• Each state in a CSP is defined by an assignment of values to some or all of the variables, {Xi = vi, Xj = vj, ...}.
• An assignment that does not violate any constraints is called a consistent or legal assignment. A complete assignment is one in which every variable is assigned, and a solution to a CSP is a consistent, complete assignment.
• A partial assignment is one that assigns values to only some of the variables.
SUDOKU GAME

[Figure: a Sudoku puzzle formulated as a constraint satisfaction problem]
Map coloring

• Suppose that, having tired of Romania, we are looking at a map of Australia showing each of its states and territories (Figure). We are given the task of coloring each region either red, green, or blue in such a way that no neighboring regions have the same color.
• To formulate this as a CSP, we define the variables to be the regions:
  X = {WA, NT, Q, NSW, V, SA, T}
• The domain of each variable is the set Di = {red, green, blue}. The constraints require neighboring regions to have distinct colors. Since there are nine places where regions border, there are nine constraints:
  C = {SA ≠ WA, SA ≠ NT, SA ≠ Q, SA ≠ NSW, SA ≠ V, WA ≠ NT, NT ≠ Q, Q ≠ NSW, NSW ≠ V}
Map coloring

[Figure: map of Australia showing its states and territories]
Map coloring

• Here we are using abbreviations: SA ≠ WA is a shortcut for ⟨(SA, WA), SA ≠ WA⟩, where SA ≠ WA can be fully enumerated in turn as
  {(red, green), (red, blue), (green, red), (green, blue), (blue, red), (blue, green)}.
• There are many possible solutions to this problem, such as
  {WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = red}.