Local Search in AI: Concepts & Applications

Artificial Intelligence (AI) refers to the creation of machines that can mimic human-like intelligence, including learning, reasoning, and problem-solving. AI has applications across sectors such as healthcare, finance, and robotics, and offers advantages like high accuracy and speed, but also presents challenges like high costs and lack of creativity. This document outlines the goals, components, types of agents, and search algorithms in AI, emphasizing the importance of understanding the environment in which these agents operate.
What is Artificial Intelligence?

Artificial Intelligence is composed of two words, Artificial and Intelligence, where "Artificial" means "man-made" and "Intelligence" means "thinking power"; hence AI means "man-made thinking power."

So, we can define AI as:

"A branch of computer science by which we can create intelligent machines that can behave like humans, think like humans, and make decisions."

Artificial Intelligence exists when a machine has human-like skills such as learning, reasoning, and problem-solving.

Why Artificial Intelligence?

Following are some main reasons to learn about AI:

o With the help of AI, you can create software or devices that solve real-world problems easily and accurately, such as health issues, marketing, and traffic issues.
o With the help of AI, you can create your own personal virtual assistant, such as Cortana, Google Assistant, or Siri.
o With the help of AI, you can build robots that can work in environments where human survival is at risk.
o AI opens a path for other new technologies, new devices, and new opportunities.

Goals of Artificial Intelligence:

Following are the main goals of Artificial Intelligence:

1. Replicate human intelligence
2. Solve knowledge-intensive tasks
3. Make an intelligent connection between perception and action
4. Build machines that can perform tasks requiring human intelligence, such as:
o Proving a theorem
o Playing chess
o Planning a surgical operation
o Driving a car in traffic
5. Create systems that can exhibit intelligent behaviour, learn new things by themselves, demonstrate, explain, and advise their users.

What Comprises Artificial Intelligence?
To create AI, we should first understand how intelligence is composed. Intelligence is an intangible faculty of our brain that combines reasoning, learning, problem-solving, perception, language understanding, etc.

To achieve these capabilities in a machine or software, Artificial Intelligence draws on the following disciplines:

o Mathematics
o Biology
o Psychology
o Sociology
o Computer Science
o Neuroscience
o Statistics

Advantages of Artificial Intelligence:

Following are some main advantages of Artificial Intelligence:

o High accuracy with fewer errors
o High speed
o High reliability
o Useful in risky areas
o Digital assistants
o Useful as a public utility

Disadvantages of Artificial Intelligence:

Following are the disadvantages of AI:

o High cost
o Can't think out of the box
o No feelings and emotions
o Increased dependency on machines
o No original creativity

Application of AI:
Following are some sectors which have the application of Artificial Intelligence:

1. AI in Astronomy

2. AI in Healthcare

3. AI in Gaming

4. AI in Finance

5. AI in Data Security

6. AI in Social Media

7. AI in Travel & Transport

8. AI in Automotive Industry

9. AI in Robotics

10. AI in Entertainment

11. AI in Agriculture

12. AI in E-commerce
13. AI in Education

Agents in Artificial Intelligence:

What is an Agent?
An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting. An agent can be:

o Human agent: A human agent has eyes, ears, and other organs that work as sensors, and hands, legs, and the vocal tract that work as actuators.
o Robotic agent: A robotic agent can have cameras and infrared range finders as sensors and various motors as actuators.
o Software agent: A software agent can take keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.

Hence, the world around us is full of agents, such as thermostats, cell phones, and cameras; even we ourselves are agents.

Sensor: A sensor is a device which detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.

Actuators: Actuators are the components of machines that convert energy into motion. Actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.

Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and a display screen.
Intelligent Agents:
An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators to achieve goals. An intelligent agent may learn from the environment to achieve its goals. A thermostat is an example of an intelligent agent.

Following are the main four rules for an AI agent:

o Rule 1: An AI agent must have the ability to perceive the environment.
o Rule 2: The observation must be used to make decisions.
o Rule 3: The decision should result in an action.
o Rule 4: The action taken by an AI agent must be a rational action.

Rational Agent:
A rational agent is an agent which has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions.

A rational agent is said to perform the right things. AI is about creating rational agents that apply game theory and decision theory to various real-world scenarios.

For an AI agent, rational action is most important because in reinforcement learning the agent gets a positive reward for each best possible action and a negative reward for each wrong action.

Structure of an AI Agent:
The structure of an intelligent agent is a combination of an architecture and an agent program. It can be viewed as:

Agent = Architecture + Agent program

Architecture: The machinery that an AI agent executes on.

Agent program: An implementation of the agent function. The agent program executes on the physical architecture to produce the agent function f.
Types of AI Agents:
Agents can be grouped into five classes based on their degree of perceived intelligence and capability. These are:

o Simple reflex agent
o Model-based reflex agent
o Goal-based agent
o Utility-based agent
o Learning agent

1. Simple Reflex Agent:

o Simple reflex agents are the simplest agents. These agents make decisions on the basis of the current percept and ignore the rest of the percept history.
o These agents only succeed in fully observable environments.
o The simple reflex agent does not consider any part of the percept history during its decision and action process.
o The simple reflex agent works on condition-action rules, meaning it maps the current state directly to an action. For example, a room-cleaner agent acts only if there is dirt in the room.
o Problems with the simple reflex agent design approach:
o They have very limited intelligence.
o They have no knowledge of non-perceptual parts of the current state.
o The rule set is mostly too big to generate and store.
o They are not adaptive to changes in the environment.
2. Model-based Reflex Agent:
o A model-based agent can work in a partially observable environment and track the situation.
o A model-based agent has two important factors:
o Model: knowledge about "how things happen in the world"; this is why it is called a model-based agent.
o Internal state: a representation of the current state based on the percept history.
o These agents have a model, i.e., "knowledge of the world," and perform actions based on that model.
o Updating the agent's state requires information about:
o How the world evolves.
o How the agent's actions affect the world.
3. Goal-based Agents:
o Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
o The agent needs to know its goal, which describes desirable situations.
o Goal-based agents expand the capabilities of the model-based agent by adding "goal" information.
o They choose actions so that they can achieve the goal.
o These agents may have to consider a long sequence of possible actions before deciding whether the goal is achievable. Such consideration of different scenarios is called searching and planning, which makes the agent proactive.

4. Utility-based Agents:
o These agents are similar to goal-based agents but add an extra component of utility measurement, which distinguishes them by providing a measure of success at a given state.
o A utility-based agent acts based not only on goals but also on the best way to achieve the goal.
o The utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action.
o The utility function maps each state to a real number, indicating how well each action achieves the goals.
5. Learning Agents:
o A learning agent in AI is a type of agent which can learn from its past experiences; i.e., it has learning capabilities.
o It starts with basic knowledge and is then able to act and adapt automatically through learning.
o A learning agent has four main conceptual components:
a. Learning element: responsible for making improvements by learning from the environment.
b. Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
c. Performance element: responsible for selecting external actions.
d. Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
o Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it.

Agent Environment in AI:

An environment can be described as the situation in which an agent is present. The environment is where the agent lives and operates, and it provides the agent with something to sense and act upon. An environment is mostly said to be non-deterministic.

Features of Environment:
As per Russell and Norvig, an environment can have various features from the point of view of an agent:

1. Fully Observable vs. Partially Observable
2. Deterministic vs. Stochastic
3. Episodic vs. Sequential
4. Single-agent vs. Multi-agent
5. Static vs. Dynamic
6. Discrete vs. Continuous
7. Known vs. Unknown
8. Accessible vs. Inaccessible

1. Fully Observable vs. Partially Observable

o If an agent's sensors can access the complete state of the environment at each point in time, the environment is fully observable; otherwise, it is partially observable.
o A fully observable environment is easier to handle, as there is no need to maintain an internal state to keep track of the history of the world.
o If an agent has no sensors in an environment, that environment is called unobservable.

2. Deterministic vs. Stochastic

o If an agent's current state and selected action completely determine the next state of the environment, the environment is called deterministic.
o A stochastic environment is random in nature and cannot be completely determined by the agent.
o In a deterministic, fully observable environment, the agent does not need to worry about uncertainty.
3. Episodic vs. Sequential

o In an episodic environment, there is a series of one-shot actions, and only the current percept is required to choose an action.
o However, in a sequential environment, an agent requires memory of past actions to determine the next best action.

4. Single-agent vs. Multi-agent

o If only one agent is involved in an environment and operates by itself, it is called a single-agent environment.
o However, if multiple agents operate in an environment, it is called a multi-agent environment.
o Agent design problems in multi-agent environments are different from those in single-agent environments.

5. Static vs. Dynamic

o If the environment can change while an agent is deliberating, it is called a dynamic environment; otherwise, it is called static.
o Static environments are easier to deal with because the agent does not need to keep looking at the world while deciding on an action.
o However, in a dynamic environment, the agent needs to keep observing the world before each action.
o Taxi driving is an example of a dynamic environment, whereas crossword puzzles are an example of a static environment.

6. Discrete vs. Continuous

o If there are a finite number of percepts and actions that can be performed in an environment, it is called a discrete environment; otherwise, it is continuous.
o A chess game is a discrete environment, as there is a finite number of moves that can be performed.
o A self-driving car is an example of a continuous environment.

7. Known vs. Unknown
o Known and unknown are not actually features of an environment; they reflect the agent's state of knowledge for performing an action.
o In a known environment, the results of all actions are known to the agent, while in an unknown environment, the agent needs to learn how it works in order to act.
o It is quite possible for a known environment to be partially observable and for an unknown environment to be fully observable.

8. Accessible vs. Inaccessible

o If an agent can obtain complete and accurate information about the environment's state, the environment is called accessible; otherwise, it is inaccessible.
o An empty room whose state can be defined by its temperature is an example of an accessible environment.
o Information about an event on Earth is an example of an inaccessible environment.

Search Algorithms in Artificial Intelligence:
Problem-solving Agents:
In Artificial Intelligence, search techniques are universal problem-solving methods. Rational agents, or problem-solving agents, in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are goal-based agents and use an atomic representation. In this topic, we will learn about various problem-solving search algorithms.

Properties of Search Algorithms:

o Completeness: whether the algorithm is guaranteed to find a solution if one exists.
o Optimality: whether the algorithm is guaranteed to find the lowest-cost solution.
o Time Complexity: how long the algorithm takes to find a solution.
o Space Complexity: how much memory the algorithm requires.

Types of search algorithms:
Based on the search problem, we can classify search algorithms into uninformed search (blind search) and informed search (heuristic search) algorithms.

Uninformed Search Algorithms:

Uninformed search is a class of general-purpose search algorithms which operate in a brute-force way. Uninformed search algorithms have no additional information about the state or search space other than how to traverse the tree, so they are also called blind search.

Following are the various types of uninformed search algorithms:

1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search
1. Breadth-first Search (BFS):
BFS is a simple search strategy where the root node is expanded first, then all successors of the root node, then the next level of nodes, and so on until the goal node is found.
BFS expands the shallowest (i.e., least deep) node first, using FIFO (first-in, first-out) order. Thus, new nodes (the children of a parent node) wait in the queue while older unexpanded nodes, which are shallower than the new nodes, get expanded first.
In BFS, the goal test (a test to check whether the current state is a goal state or not) is applied to each node at the time of its generation rather than when it is selected for expansion.

(Breadth-first search tree)

In the above figure, the nodes are expanded level by level, starting from the root node A down to the last node I in the tree. Therefore, the BFS sequence followed is: A->B->C->D->E->F->G->I.

BFS Algorithm:

o Set a variable NODE to the initial state, i.e., the root node.
o Set a variable GOAL which contains the value of the goal state.
o Loop over the nodes by traversing level by level until the goal state is found.
o While looping, remove elements from the queue in FIFO order.
o If the goal state is found, return the goal state; otherwise, continue the search.
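The steps above can be sketched in Python. The adjacency-list tree and node names are illustrative assumptions, not the exact figure from the text:

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Expand nodes level by level using a FIFO queue; the goal test
    is applied when a node is generated, not when it is expanded."""
    if start == goal:
        return [start]
    frontier = deque([[start]])          # queue of paths, FIFO order
    explored = {start}
    while frontier:
        path = frontier.popleft()        # shallowest path first
        for child in graph.get(path[-1], []):
            if child in explored:
                continue
            if child == goal:            # goal test at generation time
                return path + [child]
            explored.add(child)
            frontier.append(path + [child])
    return None                          # no solution found

# Illustrative tree, expanded in level order:
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(breadth_first_search(tree, "A", "G"))  # -> ['A', 'C', 'G']
```

Storing whole paths in the queue keeps the sketch short; a production version would store parent pointers instead to save memory.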
The performance measures of BFS are as follows:

o Completeness: It is a complete strategy, as it definitely finds the goal state (provided the branching factor is finite).
o Optimality: It gives an optimal solution if the cost of each step is the same.
o Space Complexity: The space complexity of BFS is O(b^d), i.e., it requires a huge amount of memory. Here, b is the branching factor and d denotes the depth/level of the tree.
o Time Complexity: The time complexity is also O(b^d); BFS consumes much time to reach the goal node in large instances.

Disadvantages of BFS:

o The biggest disadvantage of BFS is that it requires a lot of memory space; therefore, it is a memory-bounded strategy.
o BFS is a time-consuming search strategy because it expands the nodes breadthwise.

Note: BFS expands the nodes level by level, i.e., breadthwise; therefore, it is also known as a level search technique.

2. Depth-first Search (DFS):

This search strategy explores the deepest node first and then backtracks to explore other nodes. It uses LIFO (last-in, first-out) order, based on a stack, to expand the unexpanded nodes in the search tree. The search proceeds to the deepest level of the tree, where nodes have no successors. In the worst case, this search expands nodes all the way down to the maximum depth of the tree.

(DFS search tree)

In the above figure, DFS starts from the initial node A (the root node) and traverses deeply in one direction down to node I, then backtracks to B, and so on. Therefore, the sequence will be A->B->D->I->E->C->F->G.

DFS Algorithm:

o Set a variable NODE to the initial state, i.e., the root node.
o Set a variable GOAL which contains the value of the goal state.
o Loop over the nodes by traversing deeply in one direction/path in search of the goal node.
o While looping, remove elements from the stack in LIFO order.
o If the goal state is found, return the goal state; otherwise, backtrack to expand nodes in another direction.
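The steps above can be sketched with an explicit stack; the tree below is an illustrative assumption:

```python
def depth_first_search(graph, start, goal):
    """Expand the deepest node first using a LIFO stack, backtracking
    when a branch has no unexplored successors."""
    stack = [[start]]                    # stack of paths, LIFO order
    explored = set()
    while stack:
        path = stack.pop()               # deepest path first
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        # Push children in reverse so the leftmost child is expanded first.
        for child in reversed(graph.get(node, [])):
            stack.append(path + [child])
    return None                          # no solution found

tree = {"A": ["B", "C"], "B": ["D", "E"], "D": ["I"]}
print(depth_first_search(tree, "A", "E"))  # -> ['A', 'B', 'E']
```

Note how the search dives through B and D (and I) before backtracking to E, mirroring the figure's traversal order.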

The performance measures of DFS:

o Completeness: DFS does not guarantee reaching the goal state.
o Optimality: It does not give an optimal solution, as it expands nodes deeply in one direction.
o Space Complexity: It needs to store only a single path from the root node to a leaf node, along with the unexpanded sibling nodes. Therefore, DFS has O(bm) space complexity, where b is the branching factor (i.e., the number of child nodes a parent node has) and m is the maximum length of any path.
o Time Complexity: DFS has O(b^m) time complexity.

Disadvantages of DFS:

o It may get trapped in an infinite loop.
o It is also possible that it may not reach the goal state.
o DFS does not give an optimal solution.

Note: DFS uses the concept of backtracking to explore each node in a search tree.

3. Depth-limited Search:
This search strategy is similar to DFS, with one difference: in depth-limited search, we limit the search by imposing a depth limit l on the search tree, so it does not need to explore to unbounded depth. As a result, depth-first search is a special case of depth-limited search in which the limit l is infinite.

(Depth-limited search on a binary tree)

In the above figure, the depth limit is 1. So, only levels 0 and 1 get expanded, in the A->B->C DFS sequence, starting from the root node A down to node B. The result is unsatisfactory because we could not reach the goal node I.

Depth-limited search Algorithm:

o Set a variable NODE to the initial state, i.e., the root node.
o Set a variable GOAL which contains the value of the goal state.
o Set a variable LIMIT which carries the depth-limit value.
o Loop over the nodes by traversing in DFS manner up to the depth-limit value.
o While looping, remove elements from the stack in LIFO order.
o If the goal state is found, return the goal state; otherwise, terminate the search.
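A minimal recursive sketch of the steps above; the distinct "cutoff" return value mirrors the two failure kinds (standard failure vs. cutoff) that depth-limited search distinguishes. The tree is an illustrative assumption:

```python
def depth_limited_search(graph, node, goal, limit, path=None):
    """DFS restricted to a depth limit: returns a path on success,
    'cutoff' when the limit was hit, or None when no solution exists."""
    if path is None:
        path = [node]
    if node == goal:
        return path
    if limit == 0:
        return "cutoff"                  # no solution within the limit
    cutoff_occurred = False
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1,
                                      path + [child])
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return result
    return "cutoff" if cutoff_occurred else None

tree = {"A": ["B", "C"], "B": ["D", "E"]}
print(depth_limited_search(tree, "A", "D", 1))  # -> 'cutoff'
print(depth_limited_search(tree, "A", "D", 2))  # -> ['A', 'B', 'D']
```

With limit 1 the goal at depth 2 is unreachable and the search reports a cutoff; raising the limit to 2 finds it.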

The performance measures of depth-limited search:

o Completeness: Depth-limited search does not guarantee reaching the goal node.
o Optimality: It does not give an optimal solution, as it expands nodes only up to the depth limit.
o Space Complexity: The space complexity of depth-limited search is O(bl).
o Time Complexity: The time complexity of depth-limited search is O(b^l).

Disadvantages of depth-limited search:

o This search strategy is not complete.
o It does not provide an optimal solution.

Note: Depth-limited search terminates with two kinds of failures: the standard failure value, which indicates "no solution," and the cutoff value, which indicates "no solution within the depth limit."

4. Iterative Deepening Depth-first Search / Iterative Deepening Search:
This search is a combination of BFS and DFS: BFS guarantees reaching the goal node, while DFS occupies less memory space. Iterative deepening search combines these two advantages to reach the goal node. It gradually increases the depth limit from 0 to 1, 2, and so on until it reaches the goal node.

In the above figure, the goal node is H and the initial depth limit is [0-1]. So it will expand levels 0 and 1 and terminate with the A->B->C sequence. Further, changing the depth limit to [0-3], it will again expand the nodes from level 0 to level 3, and the search terminates with the A->B->D->F->E->H sequence, where H is the desired goal node.

Iterative deepening search Algorithm:

o Explore the nodes in DFS order.
o Set a LIMIT variable with a limit value.
o Loop over the nodes up to the limit value, then increase the limit value accordingly.
o Terminate the search when the goal state is found.
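The loop above can be sketched as a depth-limited helper called with increasing limits. The tree, node names, and the max_depth safety cap are illustrative assumptions:

```python
def dls(graph, node, goal, limit, path):
    # Depth-limited DFS helper: returns a path on success, else None.
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in graph.get(node, []):
        result = dls(graph, child, goal, limit - 1, path + [child])
        if result is not None:
            return result
    return None

def iterative_deepening_search(graph, start, goal, max_depth=20):
    """Run depth-limited search with limits 0, 1, 2, ... combining the
    memory use of DFS with the completeness of BFS."""
    for limit in range(max_depth + 1):
        result = dls(graph, start, goal, limit, [start])
        if result is not None:
            return result
    return None

tree = {"A": ["B", "C"], "B": ["D", "E"], "D": ["H"]}
print(iterative_deepening_search(tree, "A", "H"))  # -> ['A', 'B', 'D', 'H']
```

Each iteration regenerates the shallow levels, which is the wastefulness noted in the drawback below; in practice the cost is dominated by the deepest level, so the overhead is modest.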

The performance measures of iterative deepening search:

o Completeness: Iterative deepening search is complete if the branching factor is finite.
o Optimality: It gives an optimal solution when all step costs are equal.
o Space Complexity: Its space complexity is O(bd), the same as DFS.
o Time Complexity: It has O(b^d) time complexity.

Disadvantages of iterative deepening search:

o The drawback of iterative deepening search is that it seems wasteful, because it regenerates states multiple times.

Note: Generally, iterative deepening search is preferred when the search space is large and the depth of the solution is unknown.

Informed Search Algorithms:

An informed search is more efficient than an uninformed search because, along with the current state information, some additional (heuristic) information is also available, which makes it easier to reach the goal state.

o Best-first Search Algorithm (Greedy search)
o A* Search Algorithm
o AO* Search Algorithm

1. Best-first Search (Greedy Search):

Best-first search is a general approach to informed search. Here, a node is selected for expansion based on an evaluation function f(n), where f(n) is a cost estimate. The node with the lowest estimated cost is expanded first. A component of f(n) is h(n), which carries the additional information required by the search algorithm:
h(n) = estimated cost of the cheapest path from the current node n to the goal node.
Note: If the current node n is a goal node, the value of h(n) will be 0.
Best-first search is known as greedy search because it always tries to explore the node which appears nearest to the goal and selects the path which promises a quick solution. Thus, it evaluates nodes with the heuristic function alone, i.e., f(n) = h(n).

Best-first search Algorithm:

o Maintain an OPEN list and a CLOSED list, where the OPEN list contains visited but unexpanded nodes and the CLOSED list contains visited and expanded nodes.
o Initially, traverse the root node, visit its successor nodes, and place them in the OPEN list in ascending order of their heuristic values.
o Select the node from the OPEN list with the lowest heuristic value and expand it further.
o Rearrange the remaining unexpanded nodes in the OPEN list and repeat the above two steps.
o If the goal node is reached, terminate the search; otherwise, keep expanding.

In the above figure, the root node is A, and its next-level successor nodes are B and C with h(B)=2 and h(C)=4. Our task is to explore the node with the lowest h(n) value. So, we select node B and expand it further to nodes D and E. Again, we search for the node with the lowest h(n) value and explore it further.
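The OPEN/CLOSED bookkeeping above can be sketched with a priority queue standing in for the sorted OPEN list. The graph and the heuristic values are assumptions chosen for the sketch, not the figure's exact data:

```python
import heapq

def greedy_best_first_search(graph, h, start, goal):
    """Expand the node with the lowest heuristic value h(n) first,
    keeping frontier nodes in a priority queue (the OPEN list)."""
    open_list = [(h[start], [start])]    # (h value, path), lowest h first
    closed = set()                       # the CLOSED list
    while open_list:
        _, path = heapq.heappop(open_list)
        node = path[-1]
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for child in graph.get(node, []):
            heapq.heappush(open_list, (h[child], path + [child]))
    return None

# Illustrative graph and heuristic values (assumed):
graph = {"A": ["B", "C"], "B": ["D", "E"], "D": ["G"]}
h = {"A": 5, "B": 2, "C": 4, "D": 1, "E": 3, "G": 0}
print(greedy_best_first_search(graph, h, "A", "G"))  # -> ['A', 'B', 'D', 'G']
```

Because only h(n) is consulted, the search commits to whichever node looks closest to the goal, which is exactly why it can miss cheaper paths.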

The performance measures of the best-first search algorithm:

o Completeness: Best-first search is incomplete, even in a finite state space.
o Optimality: It does not provide an optimal solution.
o Time and Space Complexity: It has O(b^m) worst-case time and space complexity, where m is the maximum depth of the search tree. If the heuristic function is good, the complexities can be reduced substantially.

Note: Best-first search combines the advantages of BFS and DFS to find the best solution.

Disadvantages of Best-first search:

o Best-first search does not guarantee reaching the goal state.
o Since best-first search is a greedy approach, it does not give an optimal solution.
o It may cover a long distance in some cases.

2. A* Search Algorithm:
A* search is the most widely used informed search algorithm, where a node n is evaluated by combining the values of the functions g(n) and h(n). The function g(n) is the path cost from the start/initial node to node n, and h(n) is the estimated cost of the cheapest path from node n to the goal node. Therefore, we have
f(n) = g(n) + h(n)
where f(n) is the estimated cost of the cheapest solution through n.
So, in order to find the cheapest solution, we try to find the lowest value of f(n). Let's look at the example below.
In the above example, S is the root node and G is the goal node. Starting from the root node S, we move towards its successor nodes A and B. In order to reach the goal node G, we calculate the f(n) values of nodes S, A, and B using the evaluation equation f(n) = g(n) + h(n).

Calculation of f(n) for node S:

f(S) = (distance from node S to S) + h(S) = 0 + 10 = 10

Calculation of f(n) for node A:

f(A) = (distance from node S to A) + h(A) = 2 + 12 = 14

Calculation of f(n) for node B:

f(B) = (distance from node S to B) + h(B) = 3 + 14 = 17

Therefore, node A has the lowest f(n) value. Hence, node A is explored to its next-level nodes C and D, and the lowest f(n) value is calculated again. After calculating, the sequence we get is S->A->D->G with f(n) = 13 (the lowest value).
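The calculation above can be automated. The edge weights and heuristic values below are assumptions chosen so the sketch reproduces the S->A->D->G, f(n) = 13 result described in the text:

```python
import heapq

def a_star_search(graph, h, start, goal):
    """Expand the node with the lowest f(n) = g(n) + h(n), where g(n)
    is the path cost so far and h(n) the heuristic estimate to the goal."""
    open_list = [(h[start], 0, [start])]     # (f, g, path)
    best_g = {start: 0}
    while open_list:
        f, g, path = heapq.heappop(open_list)
        node = path[-1]
        if node == goal:
            return path, g
        for child, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g        # cheaper route to child found
                heapq.heappush(open_list,
                               (new_g + h[child], new_g, path + [child]))
    return None, float("inf")

# Illustrative weighted graph and heuristic (assumed):
graph = {"S": [("A", 2), ("B", 3)], "A": [("C", 3), ("D", 4)], "D": [("G", 7)]}
h = {"S": 10, "A": 12, "B": 14, "C": 9, "D": 6, "G": 0}
print(a_star_search(graph, h, "S", "G"))  # -> (['S', 'A', 'D', 'G'], 13)
```

The `best_g` table discards any path that reaches a node more expensively than one already queued, keeping the frontier small.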

The performance measures of A* search:

o Completeness: A* search is complete; it is guaranteed to reach the goal node if a solution exists (given a finite branching factor and positive step costs).
o Optimality: An underestimating (admissible) heuristic will always give an optimal solution.
o Space and Time Complexity: A* search has O(b^d) space and time complexity.

Disadvantage of A* search:

o A* keeps all generated nodes in memory, so it usually runs out of space on large problems.

3. AO* Search Algorithm:

AO* search works on a specialized graph based on AND/OR operations. It is a problem-decomposition strategy, where a problem is decomposed into smaller pieces that are solved separately to obtain the solution required to reach the desired goal. Although A* search and AO* search both follow a best-first search order, they differ from one another.
Let's understand AO* with the help of the example below:

Here, the goal is to eat some food. We have two ways: either order food from a restaurant, or buy some ingredients and cook the food. We can take either route; the choice is ours. It is not guaranteed that an order will be delivered on time or that the food will be tasty, but if we purchase the ingredients and cook ourselves, we may be more satisfied. Ordering is a single alternative (an OR branch), whereas buying and cooking are subproblems that must both be solved (an AND branch). The AO* search graph therefore combines OR choices with AND decompositions when looking for an optimal solution.

Hill Climbing Algorithm in AI:

Hill Climbing Algorithm:
Hill climbing search is a local search technique. The purpose of hill climbing search is to climb a hill and reach its topmost peak/point. It is based on the heuristic search technique, where the climber estimates the direction that will lead to the highest peak.

State-space Landscape of the Hill Climbing Algorithm:

To understand the hill climbing algorithm, consider the landscape below, representing the goal state/peak and the current state of the climber. The topographical regions shown in the figure can be defined as:

o Global maximum: the highest point of the landscape, which is the goal state.
o Local maximum: a peak higher than all its neighbouring states but lower than the global maximum.
o Flat local maximum: a flat area of the hill with no uphill or downhill; it is a saturated point of the hill.
o Shoulder: also a flat area, but one from which an ascent is still possible.
o Current state: the current position of the climber.

Types of Hill Climbing Search Algorithms:

There are the following types of hill climbing search:

o Simple hill climbing
o Steepest-ascent hill climbing
o Stochastic hill climbing
o Random-restart hill climbing

 Simple hill climbing search:

Simple hill climbing is the simplest technique for climbing a hill. The task is to reach the highest peak of the mountain. Here, the climber moves one step at a time: if the next step is better than the current one, he moves; otherwise, he remains in the same state. This search considers only the current step and the next one.

Simple hill climbing Algorithm:

o Create a CURRENT node, a NEIGHBOUR node, and a GOAL node.
o If the CURRENT node = GOAL node, return GOAL and terminate the search.
o Else, if the NEIGHBOUR node is better than the CURRENT node, move to the NEIGHBOUR node.
o Loop until the goal is reached or no better neighbour is found.
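The loop above can be sketched over an assumed one-dimensional integer landscape; the objective function and neighbour generator are illustrative assumptions:

```python
def simple_hill_climbing(objective, neighbours, current):
    """Move to the first neighbour that improves the objective;
    stop when no neighbour is better (a peak or plateau)."""
    while True:
        improved = False
        for candidate in neighbours(current):
            if objective(candidate) > objective(current):
                current = candidate          # take the first uphill step
                improved = True
                break
        if not improved:
            return current                   # local maximum reached

# Illustrative landscape: maximize -(x - 3)^2 over integer states.
objective = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(objective, neighbours, 0))  # -> 3
```

Taking the first improving neighbour, rather than the best one, is what distinguishes simple hill climbing from the steepest-ascent variant described next.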

 Steepest-ascent hill climbing:


Steepest-ascent hill climbing is different from simple hill climbing search.
Unlike simple hill climbing search, it considers all the successive nodes,
compares them, and chooses the node which is closest to the solution.
Steepest hill climbing search is similar to best-first search because it focuses on
each node instead of one.
Note: Both simple, as well as steepest-ascent hill climbing search, fails when
there is no closer node.

Steepest-ascent hill climbing algorithm:

o Create a CURRENT node and a GOAL node.


o If the CURRENT node = GOAL node, return GOAL and terminate the
search.
o Else evaluate all successor nodes; if any successor is better than the
CURRENT node, expand the best one and make it the new CURRENT node.
o Loop until no better successor can be found.
o When the GOAL is attained, return GOAL and terminate.
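These steps can be sketched in Python, again assuming hypothetical `value` (state score, higher is better) and `neighbours` callables; here every successor is compared and the best one is taken:

```python
def steepest_ascent(start, value, neighbours):
    """Examine ALL successors and move to the best, if it improves."""
    current = start
    while True:
        # Compare every successor and pick the highest-scoring one.
        best = max(neighbours(current), key=value, default=current)
        if value(best) <= value(current):   # no uphill move remains
            return current
        current = best

# Example: maximise f(x) = -(x - 3)^2 over the integers.
print(steepest_ascent(0, lambda x: -(x - 3) ** 2,
                      lambda x: [x - 1, x + 1]))  # 3
```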

 Stochastic hill climbing:


Stochastic hill climbing does not examine all the neighbouring nodes. It
selects one neighbour at random and then decides whether to move to it (if it
improves on the current state) or to examine another.
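A minimal sketch of this random-neighbour rule, with hypothetical `value` and `neighbours` callables; the acceptance rule used here is the simplest one (take the move only if it goes uphill):

```python
import random

def stochastic_hill_climbing(start, value, neighbours, max_steps=1000):
    """Pick ONE neighbour at random; accept it only if it goes uphill."""
    current = start
    for _ in range(max_steps):
        candidate = random.choice(neighbours(current))
        if value(candidate) > value(current):
            current = candidate
    return current

random.seed(0)  # seeded only to make the run repeatable
print(stochastic_hill_climbing(0, lambda x: -(x - 3) ** 2,
                               lambda x: [x - 1, x + 1]))
```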

 Random-restart hill climbing:


The random-restart algorithm is based on a try-and-try-again strategy. It runs
hill climbing repeatedly from randomly generated initial states and keeps the
best result found, until the goal is reached. Its success depends largely on
the shape of the landscape: if there are few plateaus, local maxima, and
ridges, it becomes easy to reach the destination.
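A sketch of random restarts wrapped around a steepest-ascent climber. The `random_state` callable (which draws a fresh start state) and the two-peaked example landscape are illustrative assumptions:

```python
import random

def random_restart(value, neighbours, random_state, restarts=10):
    """Climb from several random starts and keep the best peak found."""
    def climb(current):
        while True:
            best = max(neighbours(current), key=value, default=current)
            if value(best) <= value(current):
                return current
            current = best

    peaks = [climb(random_state()) for _ in range(restarts)]
    return max(peaks, key=value)

# Landscape with a local peak at x = 2 and the global peak at x = 8.
f = lambda x: -(x - 2) ** 2 if x < 5 else 10 - (x - 8) ** 2
random.seed(0)
print(random_restart(f, lambda x: [x - 1, x + 1],
                     lambda: random.randint(0, 12)))
```

A single climb started at x ≤ 3 gets stuck at the local peak; restarting from many random points makes finding the global peak very likely.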

Limitations of Hill climbing algorithm:


Hill climbing is a fast, greedy approach. It finds a solution state rapidly
because it is easy to improve a bad state. But the search has the following
limitations:
 Local Maxima: It is a peak that is higher than all its neighbouring
states but lower than the global maximum. It is not the goal peak,
because another, higher peak exists.
 Plateau: It is a flat surface area where no uphill move exists. The
climber cannot decide which direction leads towards the goal and may
wander, lost, in the flat area.

 Ridges: It is a challenging region where the climber encounters a
sequence of local maxima of roughly the same height. Because every
available single move leads downhill, it becomes difficult to navigate
towards the right point, and the search gets stuck.

Constraint Satisfaction Problems:
By the name, it is understood that constraint satisfaction means solving a
problem under certain constraints or rules.
Constraint satisfaction is a technique in which a problem is solved when its
values satisfy certain constraints or rules of the problem. Such a technique
leads to a deeper understanding of the problem structure as well as its complexity.
Constraint satisfaction depends on three components, namely:

 X: It is a set of variables.
 D: It is a set of domains where the variables reside. There is a specific
domain for each variable.
 C: It is a set of constraints which are followed by the set of variables.

In constraint satisfaction, domains are the spaces where the variables reside,
following the problem specific constraints. These are the three main elements
of a constraint satisfaction technique. The constraint value consists of a pair
of {scope, rel}. The scope is a tuple of variables which participate in the
constraint and rel is a relation which includes a list of values which the
variables can take to satisfy the constraints of the problem.
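The X / D / C components and the {scope, rel} form of a constraint can be written out directly in Python. The two-region map-colouring instance below is an illustrative toy example (the region names are hypothetical):

```python
# X: variables, D: domains, C: constraints of a toy map-colouring CSP.
X = ["WA", "NT"]
D = {"WA": {"red", "green"}, "NT": {"red", "green"}}
# Each constraint is a {scope, rel} pair: `scope` is the tuple of
# participating variables, `rel` lists the allowed value combinations.
C = [{"scope": ("WA", "NT"),
      "rel": [("red", "green"), ("green", "red")]}]

def satisfies(assignment, constraint):
    """True if the assignment obeys one {scope, rel} constraint."""
    values = tuple(assignment[v] for v in constraint["scope"])
    return values in constraint["rel"]

print(satisfies({"WA": "red", "NT": "green"}, C[0]))  # True
print(satisfies({"WA": "red", "NT": "red"}, C[0]))    # False
```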

Solving Constraint Satisfaction Problems:


The requirements to solve a constraint satisfaction problem (CSP) are:

 A state-space
 The notion of the solution.

A state in state-space is defined by assigning values to some or all variables


such as:
{X1=v1, X2=v2, and so on…}.

An assignment of values to a variable can be done in three ways:

 Consistent or Legal Assignment: An assignment which does not violate


any constraint or rule is called Consistent or legal assignment.
 Complete Assignment: An assignment in which every variable is assigned
a value and the solution to the CSP remains consistent. Such an
assignment is known as a complete assignment.
 Partial Assignment: An assignment which assigns values to only some of
the variables. Such assignments are called partial assignments.
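These kinds of assignment can be checked mechanically. A small sketch, using the {scope, rel} constraint shape described earlier in this section (the variable and value names are illustrative):

```python
def is_consistent(assignment, constraints):
    """Legal/consistent: no fully-assigned constraint is violated."""
    for c in constraints:
        if all(v in assignment for v in c["scope"]):
            values = tuple(assignment[v] for v in c["scope"])
            if values not in c["rel"]:
                return False
    return True

def is_complete(assignment, variables):
    """Complete: every variable has been given a value."""
    return all(v in assignment for v in variables)

variables = ["A", "B"]
constraints = [{"scope": ("A", "B"), "rel": [(1, 2), (2, 1)]}]
partial = {"A": 1}              # partial assignment: B is unassigned
full = {"A": 1, "B": 2}         # complete AND consistent: a solution
print(is_consistent(partial, constraints), is_complete(partial, variables))
print(is_consistent(full, constraints), is_complete(full, variables))
```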

Types of Domains in CSP:


There are following two types of domains which are used by the variables:

o Discrete (Infinite) Domain: It is a domain with infinitely many distinct
values, for example the set of all integers. A variable over such a
domain can take any one of an unbounded number of states.
o Finite Domain: It is a domain with a finite number of values for a
variable, for example a fixed set of colours. (Domains over the real
numbers, by contrast, are called continuous domains.)

Constraint Types in CSP:


With respect to the variables, basically there are following types of constraints:

o Unary Constraints: It is the simplest type of constraints that restricts the


value of a single variable.
o Binary Constraints: It is the constraint type which relates exactly two
variables; for example, the constraint X1 ≠ X2 restricts which pairs of
values the two variables may take together.
o Global Constraints: It is the constraint type which involves an arbitrary
number of variables.

Some special types of solution algorithms are used to solve the following
types of constraints:

 Linear Constraints: These constraints are commonly used in linear
programming, where each integer-valued variable appears only in linear
(first-degree) form.
 Non-linear Constraints: These constraints are used in non-linear
programming, where the variables may appear in non-linear form.

Note: A special constraint which works in real-world is known as Preference


constraint.

Constraint Propagation:
In an ordinary state-space search, there is only one choice: to search for a
solution. But in a CSP, we have two choices:
 We can search for a solution or
 We can perform a special type of inference called constraint
propagation.

Constraint propagation is a special type of inference which helps in reducing
the number of legal values for the variables. The idea behind constraint
propagation is local consistency.
In local consistency, variables are treated as nodes, and each binary constraint
is treated as an arc in the given problem.

There are following local consistencies:

 Node Consistency: A single variable is said to be node consistent if all


the values in the variable’s domain satisfy the unary constraints on the
variables.
 Arc Consistency: A variable is arc consistent if every value in its domain
satisfies the binary constraints of the variables.
 Path Consistency: A pair of variables is path consistent with respect to
a third variable if every consistent assignment to the pair can be
extended to that third variable while satisfying all the binary
constraints. It generalises arc consistency from single variables to pairs.
 K-consistency: This type of consistency is used to define the notion of
stronger forms of propagation. Here, we examine the k-consistency of
the variables.
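Arc consistency is usually enforced with the AC-3 algorithm. The sketch below is a simplified version in which, as an assumption of this example, the constraints are given as a dict mapping each directed arc (x, y) to a predicate over a pair of values:

```python
from collections import deque

def ac3(domains, constraints):
    """Prune every domain value that has no support across some arc."""
    queue = deque(constraints)                # start with all arcs (x, y)
    while queue:
        x, y = queue.popleft()
        allowed = constraints[(x, y)]
        # Values of x with NO supporting value of y are removed.
        unsupported = {vx for vx in domains[x]
                       if not any(allowed(vx, vy) for vy in domains[y])}
        if unsupported:
            domains[x] -= unsupported
            for (a, b) in constraints:        # re-check arcs into x
                if b == x:
                    queue.append((a, b))
    return domains

# Example: enforce X < Y over small integer domains.
doms = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
cons = {("X", "Y"): lambda vx, vy: vx < vy,
        ("Y", "X"): lambda vy, vx: vx < vy}
print(ac3(doms, cons))  # X keeps {1, 2}; Y keeps {2, 3}
```

Propagation removes X = 3 (no larger Y exists) and Y = 1 (no smaller X exists) without any search at all.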

CSP Problems:
Constraint satisfaction includes those problems which contain some
constraints to be respected while solving the problem. CSP includes the following problems:

 Graph Colouring: The problem where the constraint is that no two
adjacent regions (vertices) can have the same colour.
 Sudoku Playing: The gameplay where the constraint is that no digit
from 1-9 can be repeated in the same row, column, or 3×3 box.

 N-queen problem: In the n-queen problem, the constraint is that no two
queens may attack each other, i.e., no two queens may share the same
row, column, or diagonal.

 Crossword: In the crossword problem, the constraint is that the words
must be formed correctly, and they should be meaningful.
 Latin square Problem: In this game, the task is to fill an n × n grid
with n symbols so that each symbol occurs exactly once in every row and
every column. The symbols may be shuffled between cells, but each row
and column contains the same set of digits.

 Crypt arithmetic Problem: This problem has one most important
constraint: we cannot assign the same digit to two different characters;
each character must stand for a unique digit.
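Several of these problems can be solved with plain backtracking search. A minimal sketch for the graph-colouring CSP (the triangle graph and colour names are illustrative):

```python
def colour_graph(nodes, edges, colours):
    """Backtracking search: no two adjacent nodes share a colour."""
    neighbours = {n: set() for n in nodes}
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)

    def backtrack(assignment):
        if len(assignment) == len(nodes):
            return assignment             # every node coloured: done
        node = next(n for n in nodes if n not in assignment)
        for colour in colours:
            if all(assignment.get(m) != colour for m in neighbours[node]):
                result = backtrack({**assignment, node: colour})
                if result is not None:
                    return result
        return None                       # dead end: undo and backtrack

    return backtrack({})

# A triangle needs all three colours.
print(colour_graph(["A", "B", "C"],
                   [("A", "B"), ("B", "C"), ("A", "C")],
                   ["red", "green", "blue"]))
```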
Heuristic Functions in AI:
As we have already seen, an informed search makes use of heuristic
functions in order to reach the goal node in a more promising way.
There are usually several pathways in a search tree from the current node
to the goal node, so the selection of a good heuristic function certainly
matters. A good heuristic function is judged by its efficiency: the more
information it carries about the problem, the less time the search spends
exploring unpromising paths.

Some toy problems, such as 8-puzzle, 8-queen, tic-tac-toe, etc., can be


solved more efficiently with the help of a heuristic function.

Consider the following 8-puzzle problem, where we have a start state and a
goal state. Our task is to slide the tiles of the current/start state and place
them in the order followed in the goal state. A tile can be moved in four
directions: left, right, up, or down. There can be several ways to convert the
current/start state to the goal state, but we can use a heuristic function
h(n) to solve the problem more efficiently.

A heuristic function for the 8-puzzle problem is defined below:


h(n) = Number of tiles out of position.
In the start state, a total of three tiles are out of position, i.e., 6, 5 and 4
(the empty tile is not counted), so h(n) = 3. Now, we need to reduce the
value of h(n) to 0.
We can construct a state-space tree to minimize the h (n) value to 0, as shown
below:
It is seen from the above state-space tree that h(n) is reduced from 3 at the
start state to 0 at the goal state. However, we can create and use several heuristic functions
as per the requirement. It is also clear from the above example that a heuristic
function h (n) can be defined as the information required to solve a given
problem more efficiently. The information can be related to the nature of the
state, cost of transforming from one state to another, goal node
characteristics, etc., which is expressed as a heuristic function.
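The tiles-out-of-position heuristic is straightforward to compute. The boards below are illustrative (the figure referenced above is not reproduced here); tiles are listed row by row with 0 standing for the blank:

```python
def misplaced_tiles(state, goal):
    """h(n) = number of tiles out of position; the blank is not counted."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
start = (1, 2, 3, 5, 6, 4, 7, 8, 0)   # tiles 6, 5 and 4 are misplaced
print(misplaced_tiles(start, goal))   # 3
```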

Properties of a Heuristic search Algorithm


Use of heuristic function in a heuristic search algorithm leads to following
properties of a heuristic search algorithm:

 Admissible Condition: An algorithm is said to be admissible if it is


guaranteed to return an optimal solution.
 Completeness: An algorithm is said to be complete, if it terminates with
a solution (if the solution exists).
 Dominance Property: If two admissible heuristic
algorithms A1 and A2 have heuristic functions h1 and h2, then A1 is
said to dominate A2 if h1(n) ≥ h2(n) for all values of node n.
 Optimality Property: If an algorithm is complete, admissible,
and dominating other algorithms, it will be the best one and will
definitely give an optimal solution.
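Dominance can be checked numerically. The sketch below compares the tiles-out-of-position heuristic (h1) against the Manhattan-distance heuristic (h2, the sum of each tile's horizontal and vertical distance from its home square; this second heuristic is a standard 8-puzzle choice, introduced here as an assumption). Every misplaced tile contributes at least 1 to the Manhattan distance, so h2(n) ≥ h1(n) on every state, i.e., h2 dominates h1:

```python
def misplaced(state, goal):
    """h1: tiles out of position (blank excluded)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal):
    """h2: total horizontal + vertical distance of each tile from home."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        home = goal.index(tile)
        total += abs(idx // 3 - home // 3) + abs(idx % 3 - home % 3)
    return total

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 5, 6, 4, 7, 8, 0)   # an illustrative board
h1, h2 = misplaced(state, goal), manhattan(state, goal)
print(h1, h2)  # 3 4
```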
