
Course : Artificial Intelligence

Effective Period : July 2020

Local Search

Session 03

1
Learning Outcomes
At the end of this session, students will be able to:
 LO 2: Describe what AI is and identify the concept of an
intelligent agent
 LO 3: Explain various intelligent search algorithms to solve
problems

2
Outline
1. Local Search Algorithms and Optimization Problems
2. Hill Climbing Search (Steepest-Ascent)
3. Genetic Algorithms
4. Local Search in Continuous Spaces
5. Constraint Satisfaction Problems Definition
6. Local Search for Constraint Satisfaction Problems

3
Local Search
• In the problems we have studied so far, the solution is a path.
For example:
– The solution to the traveling-in-Romania problem is a
sequence of cities leading to Bucharest
• In many optimization problems, the path is irrelevant; the
goal state itself is the solution. For example:
– The 8-queens problem
• What matters is the final configuration of queens,
not the order in which they are added
4
Local Search
• Local search algorithms operate using a single current
node (rather than multiple paths) and generally move only
to neighbors of that node.
– Typically, the paths followed by the search are not
retained.
• Local search algorithms have two advantages
– They use very little memory
– They can often find reasonable solutions in large or
infinite (continuous) state spaces
5
Local Search
• Local search algorithms are useful for solving pure
optimization problems
– To find the best state according to an objective function

6
Local Search
• The aim of the optimization algorithm is to find the highest
peak (global maximum)

7
Local Search
• A complete algorithm always finds a goal if one exists
• An optimal algorithm always finds the global
maximum/minimum

8
Hill Climbing
• It is simply a loop that continually moves in the direction of
increasing value (uphill)
• It terminates when it reaches a “peak” where no neighbor
has a higher value
• It starts with a random (potentially poor) solution and
iteratively makes small changes to it, each time
improving it a little
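
As a minimal sketch (not taken from the slides), steepest-ascent hill climbing can be written as a short loop; neighbors(state) and value(state) are assumed, problem-specific helper functions.

# Minimal steepest-ascent hill-climbing sketch (illustrative only).
# neighbors(state) returns the successors; value(state) is the objective.
def hill_climbing(start, neighbors, value):
    current = start
    while True:
        successors = neighbors(current)
        if not successors:
            return current
        best = max(successors, key=value)      # pick the highest-valued successor
        if value(best) <= value(current):      # peak reached: no better neighbor
            return current
        current = best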

9
Hill Climbing
• All successors of a node are evaluated, and the one that
gives the most improvement is selected
• Choose the successor with the highest value (the best one)
• Hill climbing is like “climbing a mountain without a
compass”

10
Hill Climbing
• Example (the blank tile is shown as _):

Start state (h(x) = 2):
1 2 3
4 _ 6
7 5 8

Successors:
1 _ 3      1 2 3      1 2 3      1 2 3
4 2 6      _ 4 6      4 5 6      4 6 _
7 5 8      7 5 8      7 _ 8      7 5 8
h(x) = 3   h(x) = 3   h(x) = 1   h(x) = 3

The successor with h(x) = 1 is selected.

Goal state (h(x) = 0):
1 2 3
4 5 6
7 8 _
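
As an illustrative sketch (not taken from the slide), a heuristic of this kind can be implemented by counting misplaced tiles; the board encoding below (a tuple read row by row, with 0 for the blank) is an assumption, and the slide's h values may be defined slightly differently (for example, whether the blank is counted).

# Illustrative misplaced-tiles heuristic for the 8-puzzle (a sketch, not
# necessarily the exact h used on the slide).
# A board is a tuple of 9 numbers read row by row, with 0 for the blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def h_misplaced(board, goal=GOAL):
    # Count tiles (blank excluded) that are not in their goal position
    return sum(1 for tile, target in zip(board, goal)
               if tile != 0 and tile != target)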
11
Problems in Hill Climbing
• Hill climbing search often gets stuck due to the following
conditions:
– Local maxima, ridges, and plateaux

12
Problems in Hill Climbing
• Local maxima/minima
– A peak that is higher than each of its neighboring
states but lower than the global maximum.

13
Problems in Hill Climbing
• Ridges (a sequence of local maxima)
– They are very difficult for greedy algorithms to navigate

14
Problems in Hill Climbing
• Plateaux
– A flat area of the state-space landscape
– It can be a flat local maximum or a shoulder

Can you tell the difference between the two?

15
Problems in Hill Climbing
• Ways Out
– Backtrack to some earlier node and try going in a
different direction.

– Make a big jump to try to get in a new section.


– Move in several directions at once.
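
The "big jump" idea is often implemented as random-restart hill climbing. A minimal sketch, assuming the hill_climbing helper from the earlier sketch and a hypothetical, problem-specific random_state() generator:

# Random-restart hill climbing: repeat hill climbing from random starting
# points and keep the best result found (uses the hill_climbing sketch above;
# random_state() is a hypothetical problem-specific generator).
def random_restart_hill_climbing(random_state, neighbors, value, restarts=10):
    best = None
    for _ in range(restarts):
        result = hill_climbing(random_state(), neighbors, value)
        if best is None or value(result) > value(best):
            best = result
    return best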

16
Genetic Algorithm
• A genetic algorithm is a variant of stochastic beam search
– Successor states are generated by combining two
parent states
– This is similar to our DNA, which is a mix of our parents'
DNA

17
Genetic Algorithm
• Starts with k randomly generated states, called the population
– Each state is an individual, represented as a string
over a finite alphabet (e.g. 01010100)
• The objective function is called the fitness function: better
states have higher fitness values

18
Genetic Algorithm
• In the 8-queens problem, an individual can be represented
by a string of digits 1 to 8, giving the row position of the
queen in each of the 8 columns

19
Genetic Algorithm
• A possible fitness function is the number of non-attacking
pairs of queens
• Fitness of a solution:
– 7 + 6 + 5 + 4 + 3 + 2 + 1 = 28
• Exercise: what is the fitness of a non-solution state? 
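
A minimal sketch of this fitness function, assuming a state is given as a list of 8 digits where the i-th digit is the row of the queen in column i (the example solution used below is illustrative):

# Fitness for the 8-queens GA: the number of non-attacking pairs of queens.
# state[i] is the row (1..8) of the queen in column i.
def fitness(state):
    n = len(state)
    non_attacking = 0
    for i in range(n):
        for j in range(i + 1, n):
            same_row = state[i] == state[j]
            same_diagonal = abs(state[i] - state[j]) == abs(i - j)
            if not same_row and not same_diagonal:
                non_attacking += 1
    return non_attacking

# A valid solution scores 7 + 6 + ... + 1 = 28
print(fitness([2, 4, 6, 8, 3, 1, 7, 5]))  # 28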

20
Genetic Algorithm
• Pairs of individuals are selected at random for
reproduction, with probabilities related to their fitness
(fitter individuals are more likely to be selected)

21
Genetic Algorithm
• A crossover point is chosen randomly in the string
• Offspring are created by crossing the parents at the
crossover point.

22
Genetic Algorithm
• Each element in the string is also subject to some
mutation with a small probability
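
A minimal sketch of one-point crossover and per-digit mutation for the string-of-digits representation; the mutation rate and the example parents below are illustrative assumptions:

import random

# One-point crossover: cut both parents at a random point and swap the tails
def crossover(parent1, parent2):
    point = random.randint(1, len(parent1) - 1)           # crossover point
    return (parent1[:point] + parent2[point:],
            parent2[:point] + parent1[point:])

# Mutation: with a small probability, replace each digit by a random row 1..8
def mutate(state, rate=0.05):
    return [random.randint(1, 8) if random.random() < rate else gene
            for gene in state]

p1 = [3, 2, 7, 5, 2, 4, 1, 1]   # illustrative parents
p2 = [2, 4, 7, 4, 8, 5, 5, 2]
child1, child2 = crossover(p1, p2)
child1 = mutate(child1)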

23
Genetic Algorithm

24
Local Search in Continuous Space
• Suppose we want to site three airports in Jakarta:

– 6-D state space defined by (x1,y1), (x2,y2), (x3,y3)

– Objective function f(x1,y1, x2,y2, x3,y3) = sum of squared
distances from each city to its nearest airport
• How to choose the next successor?
– Use the gradient of f: update x ← x + α ∇f(x) to increase f,
or x ← x − α ∇f(x) to reduce it
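
A minimal gradient-descent sketch for the airport-siting objective (lower f is better, since f sums squared distances); the coordinate representation, step size alpha, and nearest-airport assignment rule below are illustrative assumptions:

# Illustrative gradient-descent step for airport siting (lower f is better).
# airports and cities are lists of (x, y) coordinate pairs.
def objective(airports, cities):
    # f = sum over cities of the squared distance to the nearest airport
    return sum(min((cx - ax) ** 2 + (cy - ay) ** 2 for ax, ay in airports)
               for cx, cy in cities)

def gradient_step(airports, cities, alpha=0.05):
    updated = []
    for i, (ax, ay) in enumerate(airports):
        gx = gy = 0.0
        for cx, cy in cities:
            # A city contributes to airport i only if i is its nearest airport
            dists = [(cx - bx) ** 2 + (cy - by) ** 2 for bx, by in airports]
            if dists.index(min(dists)) == i:
                gx += 2 * (ax - cx)   # derivative of (cx - ax)^2 w.r.t. ax
                gy += 2 * (ay - cy)
        # Descent update: x <- x - alpha * grad f(x)
        updated.append((ax - alpha * gx, ay - alpha * gy))
    return updated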

25
Local Search in Continuous Space
• Important: we need to understand which function we
want to optimize!
– Higher value is better  gradient ascent optimization
– Lower value is better  gradient descent optimization
• Local search in continuous space plays an important role in
current state-of-the-art deep learning algorithms

26
Defining Constraint Satisfaction Problem
• Standard search problem
– state is a “black box” – any data structure that supports a
successor function, a heuristic function, and a goal test
• Constraint satisfaction problem (CSP)
– state is defined by variables Xi with values from
domain Di
– goal test is a set of constraints specifying allowable
combinations of values for subsets of variables

27
Defining Constraint Satisfaction Problem
Map coloring

• Variables: WA, NT, Q, NSW, V, SA, T
• Domains: Di = {red, green, blue}
• Constraints: adjacent regions must have different colors
• e.g., WA ≠ NT, NT ≠ SA, …; as a set of allowed pairs,
(WA,NT) ∈ {(red, green), (red, blue), (green, red),
(green, blue), (blue, red), (blue, green)}
28
Defining Constraint Satisfaction Problem

• A solution assigns a value to each variable such that all
constraints are satisfied
• e.g., {WA = red, NT = green, Q = red, NSW = green, V = red,
SA = blue, T = green}
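
As an illustrative sketch, the constraints can be checked directly against an assignment; the adjacency list below follows the standard Australia map-coloring example from the referenced textbook:

# Adjacency in the Australia map-coloring CSP (T has no neighbors)
NEIGHBORS = {
    'WA':  ['NT', 'SA'],
    'NT':  ['WA', 'SA', 'Q'],
    'SA':  ['WA', 'NT', 'Q', 'NSW', 'V'],
    'Q':   ['NT', 'SA', 'NSW'],
    'NSW': ['Q', 'SA', 'V'],
    'V':   ['SA', 'NSW'],
    'T':   [],
}

def satisfies(assignment):
    # Every pair of adjacent regions must receive different colors
    return all(assignment[region] != assignment[neighbor]
               for region in NEIGHBORS for neighbor in NEIGHBORS[region])

solution = {'WA': 'red', 'NT': 'green', 'Q': 'red', 'NSW': 'green',
            'V': 'red', 'SA': 'blue', 'T': 'green'}
print(satisfies(solution))  # True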

29
Local Search for CSP
• Hill climbing typically works with "complete" states, i.e., all
variables assigned
• To apply it to CSPs:
– allow states with unsatisfied constraints
– operators reassign variable values
• Variable selection: randomly select any conflicted variable

30
Local Search for CSP
• Value selection by min-conflicts heuristic:
– choose value that violates the fewest constraints
– i.e., hill-climb with h(n) = total number of violated
constraints
• Given a random initial state, min-conflicts can solve n-queens in
almost constant time for arbitrary n with high probability (e.g.,
n = 10,000,000)
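
A minimal min-conflicts sketch for n-queens (one queen per column, rows indexed from 0); the step limit and tie-breaking rule below are illustrative simplifications:

import random

def conflicts(rows, col, row):
    # Number of other queens attacking a queen placed at (col, row)
    return sum(1 for c, r in enumerate(rows)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts(n=8, max_steps=100_000):
    rows = [random.randrange(n) for _ in range(n)]        # random complete state
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(rows, c, rows[c]) > 0]
        if not conflicted:
            return rows                                    # no attacks: solved
        col = random.choice(conflicted)                    # pick a conflicted variable
        # Value selection: the row that violates the fewest constraints
        rows[col] = min(range(n), key=lambda r: conflicts(rows, col, r))
    return None

print(min_conflicts(8))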

31
Local Search for CSP: Example
• States: 4 queens in 4 columns (4^4 = 256 states)

• Actions: move queen in column

• Goal test: no attacks

• Evaluation: h(n) = number of attacks

32
Local Search for CSP: Example
• States: 8 queens in 8 columns (8^8 = 16,777,216 states)

• Actions: move queen in column

• Goal test: no attacks

• Evaluation: h(n) = number of attacks

33
References
• Stuart Russell, Peter Norvig. 2010. Artificial Intelligence: A
Modern Approach. Pearson Education, New Jersey.
ISBN: 9780132071482
• Local Search:
https://courses.edx.org/asset-v1:ColumbiaX+CSMM.101x+2T2017_2+type@asset+block@AI_edx_SearchAgents_Local.pdf

34
