
Artificial intelligence

Introduction

Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of humans or animals. It is also the field of study in computer science that develops and studies intelligent machines. "AI" may also refer to the machines themselves.
AI technology is widely used throughout industry, government
and science. Some high-profile applications are: advanced web
search engines (e.g., Google Search), recommendation
systems (used by YouTube, Amazon, and Netflix), understanding
human speech (such as Siri and Alexa), self-driving
cars (e.g., Waymo), generative or creative tools (ChatGPT and AI
art), and competing at the highest level in strategic games (such
as chess and Go).

Artificial intelligence was founded as an academic discipline in 1956. The field went through multiple cycles of optimism followed by disappointment and loss of funding, but after 2012, when deep learning surpassed all previous AI techniques, there was a vast increase in funding and interest.
The various sub-fields of AI research are centered around
particular goals and the use of particular tools. The traditional
goals of AI research include reasoning, knowledge
representation, planning, learning, natural language
processing, perception, and support for robotics. General
intelligence (the ability to solve an arbitrary problem) is among
the field's long-term goals. To solve these problems, AI
researchers have adapted and integrated a wide range of
problem-solving techniques, including search and mathematical
optimization, formal logic, artificial neural networks, and methods
based on statistics, operations research, and economics. AI also
draws upon psychology, linguistics, philosophy, neuroscience and
many other fields.

Artificial Intelligence is built on a few core concepts and the environment surrounding them. One of these concepts is the agent. Now, what are agents? Are they highly qualified and intelligent spies working in the field of AI, or are they the electronic devices we work on?

What is an Agent?

An "agent" is an automated entity in the context of artificial


intelligence (AI) that interacts with its surroundings by sensing its
surroundings using sensors, then acting with actuators or
effectors.

Actuators are the elements that translate energy into motion. They take on the function of directing and moving a system. A few examples include gears, motors, and rails.
In simple words, an agent is a decision-making entity that enables the AI to do some work. AI is largely the study of rational agents.

Anything that makes decisions, whether a person, business, computer, or piece of software, can be a rational agent. These rational agents always have an environment, which may contain other agents. Since these agents can automatically learn and process things, they are known as Intelligent Agents (IA).

Generally speaking, the following are examples of intelligent agents:

o Humans: Yes, all humans are agents. Humans have hands, legs, mouths, and other body parts that serve as actuators, in addition to their eyes, ears, and other sensors.

o Software: A software agent receives sensory input from file contents, keystrokes, and network packets, acts on that input, and then displays the result on a screen.

o Robotic: Robotic agents feature sensors such as cameras and infrared range finders, and actuators such as servos and motors.

How are the IAs structured?

Whatever the IA is, it has a particular structure. It consists of the following parts:

o Architecture: This describes the machinery or devices, with actuators and sensors, that the intelligent agent runs on. Personal computers, automobiles, and cameras are a few examples.

o Agent Function: The agent function maps a given percept sequence to an action. A percept sequence is the history of everything the intelligent agent has perceived so far.

o Agent Program: This is the implementation of the agent function. The agent program's execution on the physical architecture produces the agent function.
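To make these three parts concrete, here is a minimal sketch in Python; the class, the table-driven design, and the vacuum-style percepts are illustrative assumptions, not a standard library API:

# Agent program: implements the agent function as a lookup from the
# percept sequence (the history of percepts) to an action.
class TableDrivenAgent:
    def __init__(self, table):
        self.table = table        # maps percept sequences to actions
        self.percepts = []        # percept sequence seen so far

    def program(self, percept):
        self.percepts.append(percept)
        # Agent function: percept sequence -> action
        return self.table.get(tuple(self.percepts), "NoOp")

# The "architecture" is whatever hardware senses percepts and executes
# the returned actions; here we simply call the program directly.
agent = TableDrivenAgent({(("A", "Dirty"),): "Suck"})
print(agent.program(("A", "Dirty")))   # -> Suck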

How do the IAs interact with their environment?

As mentioned above, all intelligent agents have an environment, and they constantly interact with it. There are two main ways in which IAs interact with their environment:

1. Perception:

In a passive interaction called perception, the agent learns about its surroundings without making any changes to them. A robot's sensors enable it to gather information about its surroundings without having an impact on them. Consequently, receiving information from sensors is referred to as perception.

2. Action:

A proactive interaction in which the environment is altered is called an action. When the robot moves an obstruction with its arm, it is an action, because the environment has been altered. As it completes the task, the robot's arm is referred to as an "Effector".

Examples of Effectors include legs, fingers, wheels, display screens, and arms.

Components of IAs

How should Agents act in AI?

The following details how an agent should behave:

o A rational agent makes the proper decision: the course of action that makes the agent the most successful is the right one.
o An omniscient agent can anticipate the effects of a decision and act accordingly, but this is not practical in real life.

What counts as rational at any given time depends on:

o The performance measure that defines the level of success.
o The agent's percept sequence, i.e., its full series of perceptions up to the present.
o The agent's knowledge of the environment.
o The actions the agent is capable of performing.

What are the types of Agents used in AI?

Depending on how intelligent and capable they are considered to be, agents may be divided into five groups. Over time, each of these agents can improve its effectiveness and produce better results.

Here are the five types of agents:

1. Simple Reflex Agent:

The simplest agents are the simple reflex agents. These agents disregard the rest of the percept history and base their decisions only on the current percept. They work well only in environments that are fully observable, since a Simple Reflex Agent makes decisions and takes actions without taking any account of perceptual history.

The Simple Reflex Agent operates under the Condition-Action rule, which maps the current state directly to an action. Like a room cleaner, it only acts if there is dirt in the room (a minimal sketch appears after the list of disadvantages below).

Simple reflex agent design approach

Here are some disadvantages of the simple reflex agent:

o These agents have very limited intelligence.
o They are unaware of non-perceptual aspects of the current situation.
o The condition-action mapping is often too large to generate and store.
o These agents are not able to adjust to environmental changes.
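As an illustration of the condition-action rule, here is a minimal sketch of a reflex vacuum agent in Python; the two-location vacuum world and all names are assumptions for illustration:

# Simple reflex agent: the decision depends only on the current
# percept (location, status); all percept history is ignored.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":      # condition-action rule 1
        return "Suck"
    elif location == "A":      # condition-action rule 2
        return "Right"
    else:                      # condition-action rule 3
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))   # -> Left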

2. Model-Based Reflex Agent:

The model-based agent can track the situation and operate in a partially observable environment.

Two crucial components make up a model-based agent:

o Model: knowledge of "how things happen in the world."

o Internal State: a representation of the current state based on the percept history.

It operates by finding a rule whose condition matches the current situation. By using a model of the world, a model-based agent can manage partially observable settings. The agent must maintain an internal state, which is updated by each percept and depends on its prior percepts. The agent stores the current state and keeps some sort of structure describing the unseen portion of the world.
The model-based reflex agent design approach

3. Goal-Based Agents:

Goal-based agents decide what to do in order to accomplish their objectives. Since the knowledge underpinning a choice is explicitly represented, a goal-based approach is more adaptable than the reflex agent.

This gives the agent the ability to choose among a variety of options, selecting the one that leads to the desired state. These agents are more flexible because the knowledge that underpins their decisions is represented explicitly and can be modified. Typically, they call for searching and planning. The behavior of the goal-based agent is easily changed.
Goal-based agents

4. Utility-Based Agents:

Knowing the environment's current state is not always enough for an agent to make decisions. The agent must be aware of its goal, which describes desirable situations. Goal-based agents extend the model-based agent's capabilities by possessing this "goal" knowledge.

They choose a course of action in order to fulfil their objective. Before determining whether the goal has been achieved or not, these agents may have to consider a long list of possible actions. Such analyses of many scenarios are referred to as searching and planning, which empower an agent to take the initiative.

Goals alone are insufficient when:

o There are many competing objectives, of which only a select number can be accomplished.
o Goals have only a chance of being achieved, so their importance must be weighed against the likelihood of achieving them.

5. Learning Agents:

An AI agent that can learn from its past experiences is referred to as a "learning agent." Starting with fundamental knowledge, it can then act and adapt on its own through learning.

A learning agent consists of four main conceptual parts:

o Learning element: It is responsible for making improvements by absorbing knowledge from the environment.

o Critic: The learning element uses feedback from the critic to gauge how well the agent is performing against a predetermined performance benchmark.

o Performance element: It is in charge of choosing external actions.

o Problem generator: This element is in charge of suggesting actions that will lead to new and informative experiences.

Learning agents

Applications of Intelligent Agents:

Artificial intelligence-based intelligent agents are used extensively in daily life:

1. Information retrieval, navigation, and search:

Intelligent agents improve information access and navigation, typically by employing search engines to look for information. Users might otherwise spend a lot of time searching through the internet's numerous data objects to find a particular item; intelligent agents complete this work quickly on behalf of users.

2. Monotonous office tasks:

To cut operational expenses, several businesses have automated a few administrative duties. Customer service and sales are two functional domains that have been mechanized. Office productivity has also been increased with intelligent agents.

3. Medical evaluation:

Intelligent agents are also used in healthcare services to improve patients' health. The patient is regarded as the environment in this situation. The sensor that collects information on the patient's symptoms is a computer keyboard. This data is used by the intelligent agent to choose the optimal course of action. Actuators used in healthcare include tests and treatments.

4. Vacuum cleaning:

AI agents are also utilized to improve vacuum cleaning's effectiveness and cleanliness. The environment in this scenario might be a table, carpet, or room. Cameras, bump sensors, and sensors for dirt detection are a few of the sensors used in vacuum cleaning. Actuators like brushes, wheels, and vacuum extractors carry out the action.

5. Autonomous vehicle:

Self-driving automobiles operate better with the assistance of intelligent agents. Various sensors are used in autonomous driving to get data from the surroundings. These include radar, GPS, and cameras. The environment in this application might consist of pedestrians, other cars, roadways, or road signs. Actions are carried out using a variety of actuators. For instance, the car's brakes are used to stop it.

How to improve the performance of Intelligent Agents?

When confronting the problem of how to improve intelligent agent performance, we only need to ask ourselves, "How can we increase our performance in a task?" Clearly, the solution is straightforward: we carry out the task, remember the outcomes, and make adjustments in light of our memories of prior tries.

AI agents improve in the same manner that humans do. By storing its previous responses and attempts, the agent improves and learns how to react more effectively in the future. Artificial intelligence and machine learning converge here.
Problem-solving artificial intelligence agents use a variety of algorithms and analyses to provide answers, as follows:

1. Search Algorithms:

The use of search strategies is considered a universal approach to problem solving. Rational or problem-solving agents use these algorithms and tactics to solve problems and produce the best outcomes.

o Uninformed search algorithms, often known as blind searches, operate via brute force and lack any domain knowledge.

o Informed search algorithms, often referred to as heuristic searches, employ domain knowledge to identify the search strategies required to find the solution to a problem.

2. Hill Climbing Algorithms:

Hill climbing algorithms are local search algorithms that move continually upward, increasing their value or height, until they reach the best answer to the problem or the top of the mountain.

Hill climbing algorithms are very good at maximizing the solutions of mathematical problems. Because it only considers its good near neighbour, this algorithm is also known as a "greedy local search." A minimal sketch is given below.

3. Means-Ends Analysis:

Means-ends analysis is a method of problem solving that combines backward and forward search approaches to restrict search in artificial intelligence applications.

Means-ends analysis evaluates the differences between the Initial State and the Final State and determines the optimal operators to apply for each difference. The analysis then reduces the difference between the current and desired states by applying the operators to each matched difference.

Rationality in Artificial Intelligence

Rationality in artificial intelligence refers to the state of being reasonable, sensible, and having a sound sense of judgement. It is concerned with expected behaviours and outcomes based on the agent's percepts.

1. An agent's rationality is determined by the performance measure that defines success, the agent's percept sequence up to this point, the agent's prior knowledge of the environment, and the actions the agent is capable of performing.
The principle of rationality is the idea that agents (including artificial intelligence) should make decisions by considering all relevant information and choosing the option that is most likely to lead to the desired outcome.

2. Rational agents are agents whose actions make sense from the point of view of the information possessed by the agent and its goals (or the task for which it was designed).

3. Rationality is a property of actions: it does not specify, although it does constrain, the process by which the actions are selected.

Problem Solving in Artificial Intelligence

The reflex agents of AI map states directly into actions. Whenever such agents fail to operate in an environment, because the state-to-action mapping is too large and cannot easily be performed by the agent, the stated problem is handed over to a problem-solving domain, which breaks the large problem into smaller sub-problems and resolves them one by one. The final integrated action is the desired outcome.

On the basis of the problem and its working domain, different types of problem-solving agents are defined and used, at an atomic level without any internal state visible, together with a problem-solving algorithm. The problem-solving agent works by precisely defining problems and their several solutions. So we can say that problem solving is a part of artificial intelligence that encompasses a number of techniques, such as trees, B-trees, and heuristic algorithms, to solve a problem.

We can also say that a problem-solving agent is a result-driven agent that always focuses on satisfying its goals.

There are basically three types of problem in artificial intelligence:

1. Ignorable: solution steps can be ignored.

2. Recoverable: solution steps can be undone.

3. Irrecoverable: solution steps cannot be undone.

Steps of problem solving in AI: The problems of AI are directly associated with the nature of humans and their activities, so we need a finite number of steps to solve a problem, which makes human work easy.

The following steps are required to solve a problem:

Problem definition: detailed specification of inputs and acceptable system solutions.

Problem analysis: analyse the problem thoroughly.

Knowledge representation: collect detailed information about the problem and define all possible techniques.

Problem solving: selection of the best technique.

Components to formulate the associated problem:

Initial State: the state from which the AI agent starts towards the specified goal. In this state, new methods also initialize problem-domain solving by a specific class.

Action: this stage of problem formulation works with a function of a specific class taken from the initial state; all possible actions are performed in this stage.

Transition: this stage of problem formulation integrates the actual action from the previous action stage and collects the resulting state to forward it to the next stage.

Goal test: this stage determines whether the specified goal has been achieved by the integrated transition model; whenever the goal is achieved, the action stops, and control moves on to determining the cost of achieving the goal.

Path costing: this component of problem solving assigns a numerical cost to achieving the goal. It accounts for all hardware, software, and human working costs.
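These components can be expressed directly in code. Below is a minimal sketch in Python of a problem formulation; the class layout and the tiny route-finding example are assumptions for illustration:

# A problem is defined by its initial state, actions, transition
# model, goal test, and path cost.
class Problem:
    def __init__(self, initial, goal, graph):
        self.initial = initial
        self.goal = goal
        self.graph = graph            # {state: {next_state: step_cost}}

    def actions(self, state):
        return list(self.graph.get(state, {}))   # legal moves

    def result(self, state, action):
        return action                 # transition model: move there

    def goal_test(self, state):
        return state == self.goal

    def path_cost(self, cost_so_far, state, action):
        return cost_so_far + self.graph[state][action]

graph = {"S": {"A": 1, "B": 4}, "A": {"G": 5}, "B": {"G": 1}}
problem = Problem("S", "G", graph)
print(problem.actions("S"))           # -> ['A', 'B']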
Search Algorithms in AI
Artificial Intelligence is the study of building agents that act
rationally. Most of the time, these agents perform some kind of
search algorithm in the background in order to achieve their
tasks.

A search problem consists of:

A State Space: the set of all possible states you can be in.

A Start State: the state from which the search begins.

A Goal Test: a function that looks at the current state and returns whether or not it is the goal state.

The Solution to a search problem is a sequence of actions, called the plan, that transforms the start state into the goal state.

This plan is achieved through search algorithms.

Types of search algorithms:

There are six fundamental search algorithms, divided into two categories, as shown below.

Categories of search algorithms in AI

Uninformed Search Algorithms:

The search algorithms in this section have no additional information on the goal node other than the one provided in the problem definition. The plans to reach the goal state from the start state differ only by the order and/or length of actions. Uninformed search is also called blind search. These algorithms can only generate the successors and differentiate between goal states and non-goal states.
The following uninformed search algorithms are discussed in this section:

1. Depth First Search

2. Breadth First Search

3. Uniform Cost Search

Each of these algorithms will have:

o A problem graph, containing the start node S and the goal node G.

o A strategy, describing the manner in which the graph will be traversed to get to G.

o A fringe, which is a data structure used to store all the possible states (nodes) that you can go to from the current states.

o A tree that results while traversing to the goal node.

o A solution plan, which is the sequence of nodes from S to G.

Depth First Search:

Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. The algorithm starts at the root node (selecting some arbitrary node as the root node in the case of a graph) and explores as far as possible along each branch before backtracking. It uses a last-in, first-out strategy and hence is implemented using a stack.

Example:
Question. Which solution would DFS find to move from node S to node G if run on the graph below?

Solution. The equivalent search tree for the above graph is as follows. As DFS traverses the tree "deepest node first", it always picks the deeper branch until it reaches the solution (or it runs out of nodes and goes to the next branch). The traversal is shown in blue arrows.

Algorithm:

1. Push the root node onto the stack.
2. Repeat while the stack is not empty:
   a. Pop a node.
      I. If node = goal, stop.
      II. Push all children of the node onto the stack.

Path: S -> A -> B -> C -> G

Let d = the depth of the search tree (the number of levels of the search tree), and let n^i = the number of nodes in level i.
Time complexity: equivalent to the number of nodes traversed in DFS, i.e. O(n^d).
Space complexity: equivalent to how large the fringe (the data structure used to store all the possible states/nodes you can go to from the current states) can get, i.e. O(n × d).
Completeness: DFS is complete if the search tree is finite, meaning that for a given finite search tree, DFS will come up with a solution if it exists.
Optimality: DFS is not optimal, meaning the number of steps in reaching the solution, or the cost spent in reaching it, may be high.

Advantages:
1. Takes less time to reach the goal node if it traverses the right path.
2. Requires less memory.

Disadvantages:
1. No guarantee of finding a solution.
2. May fall into an infinite loop.
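A compact iterative DFS in Python might look like the sketch below; the example graph is an assumed stand-in for the figure above (which is not reproduced in the text), chosen so that DFS returns the same path:

# Iterative depth-first search using an explicit stack (LIFO).
def dfs(graph, start, goal):
    stack = [(start, [start])]          # (node, path taken so far)
    visited = set()
    while stack:
        node, path = stack.pop()        # take the most recently added node
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in reversed(graph.get(node, [])):
            stack.append((child, path + [child]))
    return None                          # no solution found

graph = {"S": ["A", "D"], "A": ["B"], "B": ["C"], "C": ["G"], "D": ["G"]}
print(dfs(graph, "S", "G"))             # -> ['S', 'A', 'B', 'C', 'G']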
Breadth First Search:
Breadth-first search (BFS) is an algorithm for traversing or searching tree or
graph data structures. It starts at the tree root (or some arbitrary node of a
graph, sometimes referred to as a ‘search key’), and explores all of the
neighbour nodes at the present depth prior to moving on to the nodes at the
next depth level. It is implemented using a queue.
Example:
Question. Which solution would BFS find to move from node S to node G if
run on the graph below?

Solution. The equivalent search tree for the above graph is as follows. As
BFS traverses the tree “shallowest node first”, it would always pick the
shallower branch until it reaches the solution (or it runs out of nodes, and
goes to the next branch). The traversal is shown in blue arrows.
Algorithm:
1. Enter the starting node in the queue.
2. If the queue is empty, then return fail and stop.
3. If the first element of the queue is the goal node, then return success and stop.
4. Else, remove and expand the first element of the queue and place its children at the end of the queue.
5. Go to step 2.

Path: S -> D -> G

Let d = the depth of the shallowest solution, and let n^i = the number of nodes in level i.
Time complexity: equivalent to the number of nodes traversed in BFS until the shallowest solution, i.e. O(n^d).
Space complexity: equivalent to how large the fringe can get, i.e. O(n^d).
Completeness: BFS is complete, meaning that for a given search tree, BFS will come up with a solution if it exists.
Optimality: BFS is optimal as long as the costs of all edges are equal.
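A matching BFS sketch in Python, using a FIFO queue; the example graph is again an assumed stand-in for the figure:

from collections import deque

# Breadth-first search using a FIFO queue.
def bfs(graph, start, goal):
    queue = deque([(start, [start])])   # (node, path taken so far)
    visited = {start}
    while queue:
        node, path = queue.popleft()    # take the oldest (shallowest) node
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                queue.append((child, path + [child]))
    return None

graph = {"S": ["A", "D"], "A": ["B"], "B": ["C"], "C": ["G"], "D": ["G"]}
print(bfs(graph, "S", "G"))             # -> ['S', 'D', 'G']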
Uniform-cost Search Algorithm:
Uniform-cost search is a searching algorithm used for traversing a
weighted tree or graph. This algorithm comes into play when a different
cost is available for each edge. The primary goal of the uniform-cost
search is to find a path to the goal node which has the lowest cumulative
cost. Uniform-cost search expands nodes according to their path costs
from the root node. It can be used to solve any graph/tree where the
optimal cost is in demand. A uniform-cost search algorithm is
implemented by the priority queue. It gives maximum priority to the
lowest cumulative cost. Uniform cost search is equivalent to BFS algorithm
if the path cost of all edges is the same.

Advantages:

o Uniform cost search is optimal because at every state the path with
the least cost is chosen.

Disadvantages:

o It does not care about the number of steps involved in the search and is only concerned with path cost, due to which this algorithm may get stuck in an infinite loop.

Completeness:

Uniform-cost search is complete: if there is a solution, UCS will find it.

Time Complexity:

Let C* be the cost of the optimal solution, and ε the minimum cost of each step taken to get closer to the goal node. Then the number of steps is C*/ε + 1. Here we have taken +1, as we start from state 0 and end at C*/ε.

Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).

Space Complexity:

By the same logic, the worst-case space complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).

Optimal:

Uniform-cost search is always optimal, as it only selects the path with the lowest path cost.
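A uniform-cost search sketch in Python using a priority queue (heapq); the weighted example graph is an assumption for illustration:

import heapq

# Uniform-cost search: expand the node with the lowest path cost g(n).
def ucs(graph, start, goal):
    frontier = [(0, start, [start])]    # priority queue of (cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for child, step in graph.get(node, {}).items():
            heapq.heappush(frontier, (cost + step, child, path + [child]))
    return None

graph = {"S": {"A": 1, "D": 10}, "A": {"B": 2}, "B": {"G": 3}, "D": {"G": 1}}
print(ucs(graph, "S", "G"))             # -> (6, ['S', 'A', 'B', 'G'])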
Informed Search Algorithms
So far we have talked about uninformed search algorithms, which looked through the search space for all possible solutions to the problem without having any additional knowledge about the search space. An informed search algorithm, in contrast, uses knowledge such as how far we are from the goal, the path cost, how to reach the goal node, etc. This knowledge helps agents explore less of the search space and find the goal node more efficiently.

Informed search algorithms are more useful for large search spaces. Because an informed search algorithm uses the idea of a heuristic, it is also called heuristic search.

Heuristic function: A heuristic is a function used in informed search to find the most promising path. It takes the current state of the agent as its input and produces an estimate of how close the agent is to the goal. The heuristic method might not always give the best solution, but it is guaranteed to find a good solution in reasonable time. A heuristic function estimates how close a state is to the goal. It is represented by h(n), and it estimates the cost of an optimal path between a pair of states. The value of the heuristic function is always positive.

Admissibility of the heuristic function is given as:

h(n) <= h*(n)

Here h(n) is the heuristic (estimated) cost, and h*(n) is the actual optimal cost. Hence the heuristic cost should be less than or equal to the actual cost; an admissible heuristic never overestimates.

Pure Heuristic Search:

Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes based on their heuristic value h(n). It maintains two lists, an OPEN and a CLOSED list. In the CLOSED list, it places the nodes which have already been expanded, and in the OPEN list, it places the nodes which have not yet been expanded.
On each iteration, the node n with the lowest heuristic value is expanded; the algorithm generates all its successors, and n is placed in the CLOSED list. The algorithm continues until a goal state is found.

In informed search we will discuss the following main algorithms:

o Best First Search Algorithm (Greedy search)

o A* Search Algorithm

o AO* Algorithm

1.) Best-first Search Algorithm (Greedy Search):

The greedy best-first search algorithm always selects the path which appears best at that moment. It is a combination of the depth-first search and breadth-first search algorithms. It uses the heuristic function to guide the search. Best-first search allows us to take the advantages of both algorithms. With the help of best-first search, at each step, we can choose the most promising node. In the greedy best-first search algorithm, we expand the node which is closest to the goal node, where the closeness is estimated by the heuristic function, i.e.

f(n) = h(n)

where h(n) = the estimated cost from node n to the goal.

The greedy best-first algorithm is implemented using a priority queue.

Best first search algorithm:

o Step 1: Place the starting node into the OPEN list.

o Step 2: If the OPEN list is empty, stop and return failure.

o Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and place it in the CLOSED list.

o Step 4: Expand the node n, and generate the successors of node n.

o Step 5: Check each successor of node n, and find whether any node is a goal node or not. If any successor node is a goal node, then return success and terminate the search; else proceed to Step 6.

o Step 6: For each successor node, the algorithm checks its evaluation function f(n) and then checks whether the node is already in the OPEN or CLOSED list. If the node is in neither list, then add it to the OPEN list.

o Step 7: Return to Step 2.

Advantages:

o Best-first search can switch between BFS and DFS, gaining the advantages of both algorithms.

o This algorithm is more efficient than the BFS and DFS algorithms.

Disadvantages:

o It can behave as an unguided depth-first search in the worst-case scenario.

o It can get stuck in a loop, like DFS.

o This algorithm is not optimal.

Example:

Consider the below search problem, which we will traverse using greedy best-first search. At each iteration, each node is expanded using the evaluation function f(n) = h(n), which is given in the below table.

In this search example, we use two lists, the OPEN and CLOSED lists. The following are the iterations for traversing the above example.

[Search tree figure omitted; it shows nodes S, A, B, E, F, etc., with heuristic values such as h(S) = 13, h(A) = 12, h(B) = 4, and h(F) = 2.]

Expand the nodes of S and put them in the CLOSED list:

Initialization: Open [A, B], Closed [S]

Iteration 1: Open [A], Closed [S, B]

Iteration 2: Open [E, F, A], Closed [S, B]
           : Open [E, A], Closed [S, B, F]

Iteration 3: Open [I, G, E, A], Closed [S, B, F]
           : Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path will be: S ----> B ----> F ----> G

Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).

Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.

Complete: Greedy best-first search is incomplete, even if the given state space is finite.

Optimal: The greedy best-first search algorithm is not optimal.
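A sketch of greedy best-first search in Python, ordering the OPEN list by h(n) alone. The graph follows the S -> B -> F -> G example above; the heuristic values for E, I, and G are assumptions, while the others follow the example figure:

import heapq

# Greedy best-first search: always expand the node with smallest h(n).
def greedy_bfs(graph, h, start, goal):
    open_list = [(h[start], start, [start])]   # priority queue on h(n)
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for child in graph.get(node, []):
            if child not in closed:
                heapq.heappush(open_list, (h[child], child, path + [child]))
    return None

graph = {"S": ["A", "B"], "B": ["E", "F"], "F": ["I", "G"]}
h = {"S": 13, "A": 12, "B": 4, "E": 8, "F": 2, "I": 9, "G": 0}
print(greedy_bfs(graph, h, "S", "G"))   # -> ['S', 'B', 'F', 'G']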
2.) A* Search Algorithm:

A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) and the cost to reach node n from the start state, g(n). It combines the features of UCS and greedy best-first search, by which it solves the problem efficiently. The A* search algorithm finds the shortest path through the search space using the heuristic function. This search algorithm expands a smaller search tree and provides optimal results faster. The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).

In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence we can combine both costs as follows, and this sum is called the fitness number:

f(n) = g(n) + h(n)

At each point in the search space, only the node with the lowest value of f(n) is expanded, and the algorithm terminates when the goal node is found.

Algorithm of A* search:

Step 1: Place the starting node in the OPEN list.

Step 2: Check if the OPEN list is empty or not; if the list is empty, then return failure and stop.

Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, then return success and stop; otherwise:

Step 4: Expand node n and generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, then compute the evaluation function for n' and place it into the OPEN list.

Step 5: Else, if node n' is already in OPEN or CLOSED, then it should be attached to the back pointer which reflects the lowest g(n') value.

Step 6: Return to Step 2.

Advantages:

o The A* search algorithm is better than other search algorithms.

o The A* search algorithm is optimal and complete.

o This algorithm can solve very complex problems.

Disadvantages:

o It does not always produce the shortest path, as it relies partly on heuristics and approximation.

o The A* search algorithm has some complexity issues.

o The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so it is not practical for various large-scale problems.

Example:
In this example, we will traverse the given graph using the A* algorithm. The heuristic value of all states is given in the below table, so we will calculate the f(n) of each state using the formula f(n) = g(n) + h(n), where g(n) is the cost to reach any node from the start state.

Here we will use the OPEN and CLOSED lists.

Points to remember:

o The A* algorithm returns the path which occurred first, and it does not search for all remaining paths.

o The efficiency of the A* algorithm depends on the quality of the heuristic.

o The A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C* is the cost of the optimal solution.

Complete: The A* algorithm is complete as long as:

o The branching factor is finite.

o The cost of every action is fixed.

Optimal: The A* search algorithm is optimal if it satisfies the following two conditions:

o Admissible: the first condition required for optimality is that h(n) should be an admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature.

o Consistency: the second required condition is consistency, which is needed only for A* graph search.

If the heuristic function is admissible, then A* tree search will always find the least-cost path.

Time Complexity: The time complexity of the A* search algorithm depends on the heuristic function, and the number of nodes expanded is exponential in the depth of the solution d. So the time complexity is O(b^d), where b is the branching factor.

Space Complexity: The space complexity of the A* search algorithm is O(b^d).
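A minimal A* sketch in Python, ordering the OPEN list by f(n) = g(n) + h(n); the weighted graph and (admissible) heuristic values are assumptions for illustration:

import heapq

# A* search: expand the node with the smallest f(n) = g(n) + h(n).
def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for child, step in graph.get(node, {}).items():
            g2 = g + step
            heapq.heappush(open_list, (g2 + h[child], g2, child, path + [child]))
    return None

graph = {"S": {"A": 1, "B": 4}, "A": {"B": 2, "G": 12}, "B": {"G": 5}}
h = {"S": 7, "A": 6, "B": 4, "G": 0}
print(a_star(graph, h, "S", "G"))       # -> (8, ['S', 'A', 'B', 'G'])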
AO* Algorithm – Artificial Intelligence
The AO* algorithm performs a best-first search. The AO* method divides any given difficult problem into a smaller group of problems that are then resolved using the AND-OR graph concept. AND-OR graphs are specialized graphs used for problems that can be divided into smaller problems. The AND side of the graph represents a set of tasks that must all be completed to achieve the main goal, while the OR side of the graph represents alternative methods for accomplishing the same main goal.

AND-OR Graph

The above figure is an example of a simple AND-OR graph: the goal of getting a car may be broken down into smaller problems or tasks that can be accomplished to achieve the main goal. One option is to steal a car, which would accomplish the main goal; the other is to use your own money to purchase a car, which would also accomplish the main goal. The AND symbol indicates the AND part of the graph, meaning that all subproblems connected by the AND must be resolved before the preceding node or problem can be finished.

The AO* algorithm is a knowledge-based search strategy in which the start state and the target state are already known, and the best path is identified by heuristics. The informed search technique considerably reduces the algorithm's time complexity. The AO* algorithm is far more effective in searching AND-OR trees than the A* algorithm.

Working of the AO* algorithm:

The evaluation function in AO* looks like this:

f(n) = g(n) + h(n)
f(n) = actual cost + estimated cost

here,
f(n) = the actual cost of traversal,
g(n) = the cost from the initial node to the current node,
h(n) = the estimated cost from the current node to the goal state.

Difference between the A* algorithm and the AO* algorithm:

o Both the A* algorithm and the AO* algorithm work on best-first search.

o Both are informed searches and work on given heuristic values.

o A* always gives the optimal solution, but AO* does not guarantee an optimal solution.

o Once AO* has found a solution, it does not explore all possible paths, whereas A* explores all paths.

o When compared to the A* algorithm, the AO* algorithm uses less memory.

o Opposite to the A* algorithm, the AO* algorithm cannot go into an endless loop.

Example:

AO* Algorithm – Question tree

In the example above, the value given below each node is its heuristic value, i.e. h(n). The edge length is taken as 1.
Step 1

AO* Algorithm (Step 1)

Using the evaluation function f(n) = g(n) + h(n), start from node A:

f(A⇢B) = g(B) + h(B)
       = 1 + 5        ……here g(n) = 1 is taken by default for path cost
       = 6

f(A⇢C+D) = g(C) + h(C) + g(D) + h(D)
         = 1 + 2 + 1 + 4   ……here we have added C and D because they are in an AND
         = 8

So, by this calculation, the A⇢B path is chosen, because it is the minimum path, i.e. f(A⇢B).

Step 2

AO* Algorithm (Step 2)

According to the answer of Step 1, explore node B. Here the values of E and F are calculated as follows:

f(B⇢E) = g(E) + h(E)
       = 1 + 7
       = 8

f(B⇢F) = g(F) + h(F)
       = 1 + 9
       = 10

So, by the above calculation, the B⇢E path is chosen, because it is the minimum path, i.e. f(B⇢E). Because B's heuristic value differs from its actual value, the heuristic is updated and the minimum-cost path is selected. The minimum value in our situation is 8. Therefore, the heuristic for A must be updated due to the change in B's heuristic, so we need to calculate it again:

f(A⇢B) = g(B) + updated h(B)
       = 1 + 8
       = 9

We have updated all values in the above tree.

Step 3

AO* Algorithm (Step 3)

Comparing f(A⇢B) and f(A⇢C+D), f(A⇢C+D) is shown to be smaller, i.e. 8 < 9. Now explore f(A⇢C+D). So the current node is C:

f(C⇢G) = g(G) + h(G)
       = 1 + 3
       = 4

f(C⇢H+I) = g(H) + h(H) + g(I) + h(I)
         = 1 + 0 + 1 + 0   ……here we have added H and I because they are in an AND
         = 2

f(C⇢H+I) is selected as the path with the lowest cost, and the heuristic is left unchanged because it matches the actual cost. Paths H and I are solved because the heuristic for those paths is 0, but path A⇢D still needs to be calculated, because it is part of an AND:

f(D⇢J) = g(J) + h(J)
       = 1 + 0
       = 1

The heuristic of node D needs to be updated to 1:

f(A⇢C+D) = g(C) + h(C) + g(D) + h(D)
         = 1 + 2 + 1 + 1
         = 5

As we can see, the path f(A⇢C+D) has been solved, and this tree has now become a solved tree. In simple words, the main flow of this algorithm is that we first have to find the heuristic values of level 1, then those of level 2, and after that update the values going upward, towards the root node.

In the above tree diagram, we have updated all the values.

Means-Ends Analysis in Artificial Intelligence

o We have studied strategies that can reason either forward or backward, but a mixture of the two directions is appropriate for solving a complex and large problem. Such a mixed strategy makes it possible to first solve the major part of a problem and then go back and solve the smaller problems that arise while combining the big parts of the problem. Such a technique is called Means-Ends Analysis.
o Means-Ends Analysis is a problem-solving technique used in artificial intelligence for limiting search in AI programs.
o It is a mixture of backward and forward search techniques.
o The MEA technique was first introduced in 1961 by Allen Newell and Herbert A. Simon in their problem-solving computer program, which was named the General Problem Solver (GPS).
o The MEA process is centered on the evaluation of the difference between the current state and the goal state.
How means-ends analysis works:
The means-ends analysis process can be applied recursively to a problem. It is a strategy to control search in problem solving. The following are the main steps which describe the working of the MEA technique for solving a problem:
a. First, evaluate the difference between the Initial State and the Final State.
b. Select the various operators which can be applied for each difference.
c. Apply the operator at each difference, which reduces the difference between the current state and the goal state.
Operator Subgoaling
In the MEA process, we detect the differences between the current state and the goal state. Once these differences are found, we can apply an operator to reduce them. But sometimes an operator cannot be applied to the current state, so we create a subproblem of the current state in which the operator can be applied. This type of backward chaining, in which operators are selected and then subgoals are set up to establish the preconditions of the operator, is called Operator Subgoaling.
Algorithm for Means-Ends Analysis:
Let us take the current state as CURRENT and the goal state as GOAL. Then the following are the steps for the MEA algorithm:
o Step 1: Compare CURRENT to GOAL; if there are no differences between them, then return Success and exit.
o Step 2: Else, select the most significant difference and reduce it by doing the following steps until success or failure occurs:
a. Select a new operator O which is applicable for the current difference; if there is no such operator, then signal failure.
b. Attempt to apply operator O to CURRENT, making descriptions of two states:
i) O-START, a state in which O's preconditions are satisfied.
ii) O-RESULT, the state that would result if O were applied in O-START.
c. If
(FIRST-PART <------ MEA(CURRENT, O-START))
and
(LAST-PART <------ MEA(O-RESULT, GOAL))
are successful, then signal Success and return the result of combining FIRST-PART, O, and LAST-PART.
The above-discussed algorithm is more suitable for a simple problem and is not adequate for solving complex problems.
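A highly simplified recursive sketch of this idea in Python. States are sets of facts, and the operators, their names, and the car-buying example are illustrative assumptions; real GPS-style systems are far richer:

# Toy means-ends analysis: pick a difference, find an operator that
# reduces it, recursively achieve the operator's preconditions
# (operator subgoaling), then reduce the remaining differences.
def mea(current, goal, operators, depth=5):
    if goal <= current:                    # no differences: success
        return []
    if depth == 0:                         # give up on deep recursion
        return None
    diff = next(iter(goal - current))      # pick one difference
    for op in operators:
        if diff in op["adds"]:
            first = mea(current, op["pre"], operators, depth - 1)
            if first is None:
                continue                   # cannot establish preconditions
            after = current | op["pre"] | op["adds"]
            last = mea(after, goal, operators, depth - 1)
            if last is not None:
                return first + [op["name"]] + last
    return None                            # signal failure

operators = [
    {"name": "earn-money", "pre": set(),     "adds": {"money"}},
    {"name": "buy-car",    "pre": {"money"}, "adds": {"car"}},
]
print(mea(set(), {"car"}, operators))      # -> ['earn-money', 'buy-car']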
Example of Means-Ends Analysis:
Let us take an example where we know the initial state and the goal state, as given below. In this problem, we need to reach the goal state by finding the differences between the initial state and the goal state and applying operators.

Solution:
To solve the above problem, we will first find the differences between the initial state and the goal state, and for each difference we will generate a new state and apply the operators. The operators we have for this problem are:
o Move
o Delete
o Expand
1. Evaluating the initial state: In the first step, we evaluate the initial state and compare the initial and goal states to find the differences between the two.

2. Applying the Delete operator: As we can see, the first difference is that the goal state has no dot symbol, which is present in the initial state, so first we apply the Delete operator to remove this dot.
3. Applying the Move operator: After applying the Delete operator, a new state occurs, which we again compare with the goal state. After comparing these states, there is another difference: the square is outside the circle. So we apply the Move operator.

4. Applying the Expand operator: Now a new state is generated in the third step, and we compare this state with the goal state. After comparing the states, there is still one difference, which is the size of the square, so we apply the Expand operator, and finally it generates the goal state.

Adversarial Search
Adversarial search is a search where we examine the problem which arises when we try to plan ahead in a world where other agents are planning against us.
o In previous topics, we studied search strategies that are only associated with a single agent aiming to find a solution, often expressed in the form of a sequence of actions.
o But there might be situations where more than one agent is searching for a solution in the same search space; this situation commonly occurs in game playing.
o An environment with more than one agent is termed a multi-agent environment, in which each agent is an opponent of the others and plays against them. Each agent needs to consider the actions of other agents and the effect of those actions on its own performance.
o So, searches in which two or more players with conflicting goals are trying to explore the same search space for a solution are called adversarial searches, often known as Games.
o Games are modeled as a search problem with a heuristic evaluation function, and these are the two main factors which help to model and solve games in AI.
Types of Games in AI:

                          Deterministic                    Chance Moves
Perfect information       Chess, Checkers, Go, Othello     Backgammon, Monopoly
Imperfect information     Battleships, blind tic-tac-toe   Bridge, poker, scrabble, nuclear war

o Perfect information: A game with perfect information is one in which agents can look at the complete board. Agents have all the information about the game, and they can see each other's moves as well. Examples are Chess, Checkers, Go, etc.
o Imperfect information: If in a game the agents do not have all the information about the game and are not aware of what's going on, such games are called games with imperfect information, such as blind tic-tac-toe, Battleship, Bridge, etc.
o Deterministic games: Deterministic games are those which follow a strict pattern and set of rules, with no randomness associated with them. Examples are Chess, Checkers, Go, tic-tac-toe, etc.
o Non-deterministic games: Non-deterministic games are those which have various unpredictable events and a factor of chance or luck. This factor of chance or luck is introduced by either dice or cards. These are random, and each action's response is not fixed. Such games are also called stochastic games.
Example: Backgammon, Monopoly, Poker, etc.
Note: In this topic, we will discuss deterministic, zero-sum games with a fully observable environment, where each agent acts alternately.
Zero-Sum Game
o Zero-sum games are adversarial searches which involve pure competition.
o In a zero-sum game, each agent's gain or loss of utility is exactly balanced by the losses or gains of utility of the other agent.
o One player of the game tries to maximize one single value, while the other player tries to minimize it.
o Each move by one player in the game is called a ply.
o Chess and tic-tac-toe are examples of zero-sum games.
Zero-sum game: Embedded thinking
The zero-sum game involves embedded thinking, in which one agent or player is trying to figure out:
o What to do.
o How to decide on the move.
o He needs to think about his opponent as well.
o The opponent also thinks about what to do.
Each of the players is trying to find out the response of his opponent to their actions. This requires embedded thinking or backward reasoning to solve game problems in AI.
Formalization of the problem:
A game can be defined as a type of search in AI which can be formalized with the following elements:
o Initial state: It specifies how the game is set up at the start.
o Player(s): It specifies which player has the move in a state.
o Action(s): It returns the set of legal moves in a state.
o Result(s, a): It is the transition model, which specifies the result of a move in the state space.
o Terminal-Test(s): The terminal test is true if the game is over, else it is false. States where the game ends are called terminal states.
o Utility(s, p): A utility function gives the final numeric value for a game that ends in terminal state s for player p. It is also called the payoff function. For Chess, the outcomes are a win, loss, or draw, with payoff values +1, 0, or ½. For tic-tac-toe, the utility values are +1, -1, and 0.
Game tree:
A game tree is a tree whose nodes are the game states and whose edges are the moves made by players. A game tree involves the initial state, the actions function, and the result function.
Example: Tic-tac-toe game tree:
The following figure shows part of the game tree for tic-tac-toe. Following are some key points of the game:
o There are two players, MAX and MIN.
o Players take alternate turns, starting with MAX.
o MAX maximizes the result of the game tree.
o MIN minimizes the result.
Example Explanation:
o From the initial state, MAX has 9 possible moves, as he starts first. MAX places x and MIN places o, and both players play alternately until we reach a leaf node where one player has three in a row or all squares are filled.
o Both players will compute for each node the minimax value, which is the best achievable utility against an optimal adversary.
o Suppose both players know tic-tac-toe well and play their best game. Each player is doing his best to prevent the other one from winning. MIN is acting against MAX in the game.
o So in the game tree, we have a layer of MAX and a layer of MIN, and each layer is called a ply. MAX places x, then MIN puts o to prevent MAX from winning, and this game continues until the terminal node.
o In the end, either MIN wins, MAX wins, or it is a draw. This game tree is the whole search space of possibilities: MIN and MAX playing tic-tac-toe and taking turns alternately.
Hence adversarial search with the minimax procedure works as follows:
o It aims to find the optimal strategy for MAX to win the game.
o It follows the approach of depth-first search.
o In the game tree, the optimal leaf node could appear at any depth of the tree.
o It propagates the minimax values up the tree from the terminal nodes until the root node is reached.
In a given game tree, the optimal strategy can be determined from the minimax value of each node, which can be written as MINIMAX(n). MAX prefers to move to a state of maximum value and MIN prefers to move to a state of minimum value, so:

MINIMAX(s) =
    UTILITY(s)                                  if s is a terminal state
    max over actions a of MINIMAX(RESULT(s,a))  if it is MAX's move in s
    min over actions a of MINIMAX(RESULT(s,a))  if it is MIN's move in s

Mini-Max Algorithm in Artificial Intelligence

o The mini-max algorithm is a recursive or backtracking algorithm which is used in decision-making and game theory. It provides an optimal move for the player, assuming that the opponent is also playing optimally.

o The mini-max algorithm uses recursion to search through the game tree.

o The min-max algorithm is mostly used for game playing in AI, such as Chess, Checkers, tic-tac-toe, Go, and various other two-player games. This algorithm computes the minimax decision for the current state.

o In this algorithm, two players play the game; one is called MAX and the other is called MIN.

o Both players fight it out, as the opponent player gets the minimum benefit while they get the maximum benefit.

o Both players of the game are opponents of each other: MAX will select the maximized value and MIN will select the minimized value.

o The minimax algorithm performs a depth-first search for the exploration of the complete game tree.

o The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backtracks up the tree as the recursion unwinds.

Pseudo-code for the Min-Max Algorithm:

function minimax(node, depth, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node

    if maximizingPlayer then            // for Maximizer Player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth - 1, false)
            maxEva = max(maxEva, eva)   // gives maximum of the values
        return maxEva

    else                                // for Minimizer player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth - 1, true)
            minEva = min(minEva, eva)   // gives minimum of the values
        return minEva

Initial call:

minimax(node, 3, true)

Working of the Min-Max Algorithm:

o The working of the minimax algorithm can be easily described using an example. Below we have taken an example game tree representing a two-player game.

o In this example, there are two players: one is called the Maximizer and the other is called the Minimizer.

o The Maximizer will try to get the maximum possible score, and the Minimizer will try to get the minimum possible score.

o This algorithm applies DFS, so in this game tree we have to go all the way down to the leaves to reach the terminal nodes.

o At the terminal nodes, the terminal values are given, so we compare those values and backtrack up the tree until the initial state is reached. The following are the main steps involved in solving the two-player game tree:

Step 1: In the first step, the algorithm generates the entire game tree and applies the utility function to get the utility values for the terminal states. In the below tree diagram, let's take A as the initial state of the tree. Suppose the maximizer takes the first turn, which has a worst-case initial value of -infinity, and the minimizer takes the next turn, which has a worst-case initial value of +infinity.

Step 2: Now, first we find the utility values for the Maximizer. Its initial value is -∞, so we compare each value in a terminal state with the initial value of the Maximizer and determine the higher node values. It finds the maximum among them all.

o For node D: max(-1, -∞) => max(-1, 4) = 4

o For node E: max(2, -∞) => max(2, 6) = 6

o For node F: max(-3, -∞) => max(-3, -5) = -3

o For node G: max(0, -∞) => max(0, 7) = 7

Step 3: In the next step, it is the minimizer's turn, so it compares all node values with +∞ and finds the 3rd-layer node values.

o For node B = min(4, 6) = 4

o For node C = min(-3, 7) = -3

Step 4: Now it is the Maximizer's turn, and it will again choose the maximum of all node values and find the maximum value for the root node. In this game tree, there are only 4 layers, hence we reach the root node immediately, but in real games, there will be more than 4 layers.

o For node A: max(4, -3) = 4

That was the complete workflow of the minimax two-player game.
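To make the walkthrough concrete, here is a runnable Python version of the pseudocode above, applied to the same terminal values as Steps 1-4; encoding the tree as nested lists is an assumption for illustration:

# Leaves are numbers; internal nodes are lists of children.
def minimax(node, depth, maximizing_player):
    if depth == 0 or not isinstance(node, list):   # terminal node
        return node                                # static evaluation
    if maximizing_player:
        return max(minimax(child, depth - 1, False) for child in node)
    return min(minimax(child, depth - 1, True) for child in node)

# D = [-1, 4], E = [2, 6], F = [-3, -5], G = [0, 7];
# B = [D, E], C = [F, G], A = [B, C].
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(minimax(tree, 3, True))   # -> 4, as computed in Step 4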

Properties of the Mini-Max algorithm:

o Complete: The min-max algorithm is complete. It will definitely find a solution (if one exists) in a finite search tree.

o Optimal: The min-max algorithm is optimal if both opponents are playing optimally.

o Time complexity: As it performs DFS on the game tree, the time complexity of the min-max algorithm is O(b^m), where b is the branching factor of the game tree and m is the maximum depth of the tree.

o Space complexity: The space complexity of the mini-max algorithm is similar to that of DFS, which is O(bm).
Limitation of the minimax algorithm:

The main drawback of the minimax algorithm is that it gets really slow for complex games such as Chess, Go, etc. These types of games have a huge branching factor, and the player has lots of choices to decide between. This limitation of the minimax algorithm can be improved by alpha-beta pruning, which we discuss in the next topic.

Alpha-Beta Pruning
o Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for the minimax algorithm.

o As we saw in the minimax search algorithm, the number of game states it has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can cut it in half. There is a technique by which, without checking each node of the game tree, we can compute the correct minimax decision; this technique is called pruning. It involves two threshold parameters, Alpha and Beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta Algorithm.

o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree leaves but entire sub-trees.

o The two parameters can be defined as:

a. Alpha: The best (highest-value) choice we have found so far at any point along the path of the Maximizer. The initial value of alpha is -∞.
b. Beta: The best (lowest-value) choice we have found so far at any point along the path of the Minimizer. The initial value of beta is +∞.

o Alpha-beta pruning applied to a standard minimax algorithm returns the same move as the standard algorithm does, but it removes all the nodes which do not really affect the final decision but make the algorithm slow. Hence, by pruning these nodes, it makes the algorithm fast.

Note: To better understand this topic, kindly study the minimax algorithm first.

Condition for Alpha-beta pruning:

The main condition required for alpha-beta pruning is:

α >= β

Key points about alpha-beta pruning:

o The Max player will only update the value of alpha.

o The Min player will only update the value of beta.

o While backtracking the tree, the node values will be passed to upper nodes instead of the values of alpha and beta.

o We only pass the alpha and beta values down to the child nodes.

Pseudo-code for Alpha-beta Pruning:

function minimax(node, depth, alpha, beta, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node

    if maximizingPlayer then            // for Maximizer Player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth - 1, alpha, beta, false)
            maxEva = max(maxEva, eva)
            alpha = max(alpha, maxEva)
            if beta <= alpha then
                break
        return maxEva

    else                                // for Minimizer player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth - 1, alpha, beta, true)
            minEva = min(minEva, eva)
            beta = min(beta, eva)
            if beta <= alpha then
                break
        return minEva

Working of Alpha-Beta Pruning:

Let's take an example of a two-player search tree to understand the working of alpha-beta pruning.

Step 1: In the first step, the Max player starts the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.

Step 2: At node D, the value of α will be calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3, and max(2, 3) = 3 will be the value of α at node D; the node value will also be 3.

Step 3: Now the algorithm backtracks to node B, where the value of β will change, as this is Min's turn. Now β = +∞ is compared with the available subsequent node values, i.e. min(∞, 3) = 3; hence at node B, now α = -∞ and β = 3.

In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed on as well.

Step 4: At node E, Max will take its turn, and the value of alpha will change. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3, where α >= β, so the right successor of E will be pruned, and the algorithm will not traverse it; the value at node E will be 5.
Step 5: In the next step, the algorithm again backtracks the tree, from node B to node A. At node A, the value of alpha will be changed: the maximum available value is 3, as max(-∞, 3) = 3, and β = +∞. These two values are now passed on to the right successor of A, which is node C.

At node C, α = 3 and β = +∞, and the same values are passed on to node F.

Step 6: At node F, the value of α is again compared with the left child, which is 0, and max(3, 0) = 3; it is then compared with the right child, which is 1, and max(3, 1) = 3, so α remains 3, but the node value of F becomes 1.

Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta will be changed: it is compared with 1, so min(∞, 1) = 1. Now at C, α = 3 and β = 1, and again the condition α >= β is satisfied, so the next child of C, which is G, will be pruned, and the algorithm will not compute the entire sub-tree G.

Step 8: C now returns the value 1 to A. Here the best value for A is max(3, 1) = 3. The following is the final game tree, showing the nodes which were computed and the nodes which were never computed. Hence the optimal value for the maximizer is 3 for this example.
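For comparison, here is a runnable Python version of the alpha-beta pseudocode, run on the same nested-list tree used in the minimax example earlier; the visited-leaf counter is added only to show that pruning skips some leaves (the tree encoding is again an illustrative assumption):

# Alpha-beta pruning on a nested-list game tree.
def alphabeta(node, depth, alpha, beta, maximizing_player, visited):
    if depth == 0 or not isinstance(node, list):
        visited.append(node)              # record each evaluated leaf
        return node
    if maximizing_player:
        max_eva = float("-inf")
        for child in node:
            eva = alphabeta(child, depth - 1, alpha, beta, False, visited)
            max_eva = max(max_eva, eva)
            alpha = max(alpha, max_eva)
            if beta <= alpha:             # prune the remaining children
                break
        return max_eva
    min_eva = float("inf")
    for child in node:
        eva = alphabeta(child, depth - 1, alpha, beta, True, visited)
        min_eva = min(min_eva, eva)
        beta = min(beta, eva)
        if beta <= alpha:                 # prune the remaining children
            break
    return min_eva

tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
visited = []
print(alphabeta(tree, 3, float("-inf"), float("inf"), True, visited))  # -> 4
print(len(visited))   # -> 6: node G's two leaves are never evaluated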
Move Ordering in Alpha-Beta Pruning:

The effectiveness of alpha-beta pruning is highly dependent on the order in which each node is examined. Move ordering is an important aspect of alpha-beta pruning.

It can be of two types:

o Worst ordering: In some cases, the alpha-beta pruning algorithm does not prune any of the leaves of the tree and works exactly like the minimax algorithm. In this case, it also consumes more time because of the alpha-beta bookkeeping; such a pruning order is called worst ordering. Here, the best move occurs on the right side of the tree. The time complexity for such an order is O(b^m).
o Ideal ordering: The ideal ordering for alpha-beta pruning occurs when lots of pruning happens in the tree, and the best moves occur on the left side of the tree. We apply DFS, so it first searches the left side of the tree and goes twice as deep as the minimax algorithm in the same amount of time. The complexity with ideal ordering is O(b^(m/2)).

Rules to find good ordering:

The following are some rules for finding good ordering in alpha-beta pruning:

o Make the best move occur from the shallowest node.

o Order the nodes in the tree such that the best nodes are checked first.

o Use domain knowledge while finding the best move. E.g., for Chess, try this order: captures first, then threats, then forward moves, then backward moves.

o We can bookkeep the states, as there is a possibility that states may repeat.