
Unit – 1

Short Answer questions

1. Summarize the history of Intelligent Agents.

Year 1997: In the year 1997, IBM's Deep Blue beat the world chess champion,
Garry Kasparov, and became the first computer to defeat a reigning world
chess champion.
Year 2002: For the first time, AI entered the home in the form of Roomba,
a robotic vacuum cleaner.
Year 2006: AI came into the business world. By the year 2006, companies like
Facebook, Twitter, and Netflix had started using AI.
2. Summarize the features regarding the Nature of Environments
and Structure of Agents.
Structure of Intelligent Agents:
Agent’s structure can be viewed as –
a. Agent = Architecture + Agent Program
b. Architecture = The machinery that an agent executes on.
c. Agent Program = an implementation of an agent function.
The Nature of Environments:
Some programs operate in an entirely artificial environment confined to
keyboard input, databases, computer file systems, and character output on a
screen.
In contrast, some software agents exist in rich, unlimited softbot domains.
The simulator has a very detailed, complex environment, and the software
agent needs to choose from a long array of actions in real time. A softbot
designed to scan the online preferences of a customer and show interesting
items to the customer works in a real as well as an artificial environment.
The most famous artificial environment is the Turing Test environment, in
which one real and one artificial agent are tested on equal ground. This is
a very challenging environment, as it is very hard for a software agent to
perform as well as a human.
Turing Test:
The success of an intelligent behavior of a system can be measured with
Turing Test.
Two persons and a machine to be evaluated participate in the test. Out of
the two persons, one plays the role of the tester. Each of them sits in a
different room. The tester does not know who is the human and who is the
machine. He interrogates both of them by typing questions and sending them
to both intelligences, and receives typed responses.
The test aims at fooling the tester. If the tester fails to distinguish the
machine's responses from the human's responses, then the machine is said to
be intelligent.
3. Outline the properties of DFS and BFS Algorithms.
Comparison of BFS and DFS:
- Stands for: BFS - Breadth First Search; DFS - Depth First Search.
- Data structure: BFS uses a queue to find the shortest path; DFS uses a stack.
- Definition: BFS is a traversal approach in which we first walk through all
nodes on the same level before moving on to the next level. DFS is a
traversal approach in which the traversal begins at the root node and
proceeds through the nodes as far as possible until we reach a node with no
unvisited nearby nodes.
- Technique: BFS finds a single-source shortest path in an unweighted graph,
because in BFS we reach a vertex with the minimum number of edges from the
source. DFS may traverse through more edges to reach a destination vertex
from the source.
- Conceptual difference: BFS builds the tree level by level; DFS builds the
tree sub-tree by sub-tree.
- Approach used: BFS works on the concept of FIFO; DFS works on the concept
of LIFO.
- Suitability: BFS is more suitable for searching vertices closer to the
given source; DFS is more suitable when the solutions are away from the
source.
- Suitability for decision trees: BFS considers all neighbors first and is
therefore not suitable for decision-making trees used in games or puzzles.
DFS is more suitable for game or puzzle problems: we make a decision, then
explore all paths through this decision, and if this decision leads to a
win situation, we stop.
- Time complexity: O(V + E) for both when an adjacency list is used, and
O(V^2) when an adjacency matrix is used.
- Visiting of siblings/children: in BFS, siblings are visited before the
children; in DFS, children are visited before the siblings.
- Removal of traversed nodes: in BFS, nodes that are traversed several times
are deleted from the queue; in DFS, the visited nodes are added to the stack
and then removed when there are no more nodes to visit.
- Backtracking: BFS has no concept of backtracking; DFS is a recursive
algorithm that uses the idea of backtracking.
- Applications: BFS is used for bipartite graphs and shortest paths; DFS is
used for acyclic graphs and topological ordering.
- Memory: BFS requires more memory; DFS requires less memory.
- Optimality: BFS is optimal for finding the shortest path; DFS is not
optimal for finding the shortest path.
- Space complexity: the space complexity of BFS is more critical as compared
to its time complexity; DFS has lesser space complexity because at a time it
needs to store only a single path from the root to the leaf node.
- Speed: BFS is slower; DFS is faster.
- When to use: when the target is close to the source, BFS performs better;
when the target is far from the source, DFS is preferable.
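To make the queue-versus-stack contrast concrete, here is a minimal Python sketch of both traversals; the adjacency-list graph and names are invented for illustration, not taken from the notes above.
from collections import deque

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}

def bfs(start):
    order, frontier, seen = [], deque([start]), {start}
    while frontier:
        node = frontier.popleft()          # FIFO: visit siblings before children
        order.append(node)
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append(nbr)
    return order

def dfs(start):
    order, frontier, seen = [], [start], set()
    while frontier:
        node = frontier.pop()              # LIFO: go deep along a branch first
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        frontier.extend(reversed(graph[node]))
    return order

print(bfs("A"))   # ['A', 'B', 'C', 'D', 'E'] - level by level
print(dfs("A"))   # ['A', 'B', 'D', 'C', 'E'] - branch by branch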

Long Questions

1. Explain the various types of Agents and Environments.

Fully Observable vs Partially Observable:


- When an agent's sensors can access the complete state of the environment
at each point in time, the environment is said to be fully observable;
otherwise it is partially observable.
- A fully observable environment is easy to deal with, as there is no need
to keep track of the history of the surroundings.
- An environment is called unobservable when the agent has no sensors at all.
Deterministic vs Stochastic:
- When the agent's current state and chosen action completely determine the
next state of the environment, the environment is said to be deterministic.
- A stochastic environment is random in nature and cannot be completely
determined by the agent.
Competitive vs Collaborative:
- An agent is said to be in a competitive environment when it competes
against another agent to optimize the output.
- An agent is said to be in a collaborative environment when multiple
agents cooperate to produce the desired output.
Single-agent vs Multi-agent:
- An environment consisting of only one agent is said to be a single-agent
environment.
- An environment involving more than one agent is a
multi-agent environment.
Dynamic vs Static:
- An environment that keeps changing while the agent is performing some
action is said to be dynamic.
- An idle environment with no change in its state is called a
static environment.
Discrete vs Continuous:
- If an environment consists of a finite number of actions that can be
deliberated in the environment to obtain the output, it is said to be a
discrete environment.
- An environment in which the possible actions cannot be enumerated,
i.e. is not discrete, is said to be continuous.
Episodic vs Sequential:
- In an Episodic task environment, each of the agent’s actions is divided
into atomic incidents or episodes. There is no dependency between
current and previous incidents. In each incident, an agent receives
input from the environment and then performs the corresponding
action.
- In a sequential environment, previous decisions can affect all future
decisions. The next action of the agent depends on what actions it has
taken previously and what actions it is supposed to take in the future.
Known vs Unknown:
- In a known environment, the output for all probable actions is
given. Obviously, in case of unknown environment, for an agent to
make a decision, it has to gain knowledge about how the
environment works.
Types of Agents:
Agents can be grouped into five classes based on their degree of perceived
intelligence and capability:
- Simple Reflex Agents
- Model-Based Reflex Agents
- Goal-Based Agents
- Utility-Based Agents
- Learning Agents
Simple Reflex agents:
Simple reflex agents ignore the rest of the percept history and act only on
the basis of the current percept, using a set of condition-action rules.
Problems with Simple reflex agents are:


- Very Limited intelligence.
- No knowledge of non-perceptual parts of the state.
- Usually too big to generate and store.
- If there occurs any change in the environment, then the collection of
rules need to be updated.
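As a rough illustration of the points above, a simple reflex agent can be sketched as nothing more than a lookup into a fixed table of condition-action rules; the vacuum-world percepts and rule table below are invented for illustration.
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    # No memory and no model of the world: anything not covered by a rule
    # (the non-perceptual parts of the state) is invisible to the agent.
    return RULES[percept]

print(simple_reflex_agent(("A", "Dirty")))   # Suck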
Model-based reflex agents:
A model-based agent can handle partially observable environments by the
use of a model about the world. The agent has to keep track of the internal
state which is adjusted by each percept and that depends on the percept
history.
Updating the state requires information about:
- How the world evolves independently from the agent, and
- How the agent’s actions affect the world.
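A minimal sketch of this idea, assuming a two-square vacuum world (all names are illustrative): the agent keeps an internal state updated from each percept and from the effect of its own last action, and uses that state to decide about the part of the world it cannot currently see.
class ModelBasedAgent:
    def __init__(self):
        self.state = {"A": "Unknown", "B": "Unknown"}   # internal model of the world

    def update_state(self, percept, last_action):
        location, status = percept
        self.state[location] = status     # how the world looks according to the sensors
        if last_action == "Suck":         # how the agent's own action affected the world
            self.state[location] = "Clean"

    def act(self, percept, last_action=None):
        self.update_state(percept, last_action)
        location, status = percept
        if status == "Dirty":
            return "Suck"
        # Use the internal state to reason about the square we cannot currently see.
        other = "B" if location == "A" else "A"
        return "NoOp" if self.state[other] == "Clean" else "Move"

agent = ModelBasedAgent()
print(agent.act(("A", "Dirty")))            # Suck
print(agent.act(("A", "Clean"), "Suck"))    # Move - square B is still unknown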
Goal-based agents:
Every action they take is intended to reduce the distance from the goal. This
gives the agent a way to choose among multiple possibilities, selecting the
one which reaches a goal state.
Utility-based agents:
Agents that are developed with their end uses (utilities) as building blocks
are called utility-based agents. They choose actions based on a preference
(utility) for each state.
Learning Agent:
A learning agent in AI is a type of agent that can learn from its past
experiences; that is, it has learning capabilities. A learning agent has
mainly four conceptual components, which are:
- Learning element: It is responsible for making improvements by
learning from the environment.
- Critic: The learning element takes feedback from critics which
describes how well the agent is doing with respect to a fixed
performance standard
- Performance element: It is responsible for selecting external action.
- Problem Generator: This component is responsible for suggesting
actions that will lead to new and informative experiences.
2. Justify the term “Concept of Rationality” with an example.

The Concept of Rationality:


- A rational agent is “one that does the right thing”. … but what is the right
thing?
- Need performance measures
- Performance measure: objective criterion for success of an agent’s
behavior.
- E.g., performance measure of the vacuum-cleaner agent could be amount
of dirt cleaned up, amount of time taken, amount of electricity consumed,
cleanliness of floor afterwards, …
- Rationality here depends on four things:
a. The performance measure that defines the criterion of success
b. The agent’s prior knowledge of the environment
c. The actions that the agent can perform
d. The agent’s percept sequence to date.
Rational agent:
- For each possible percept sequence, a rational agent should select an
action that is expected to maximize its performance measure, given
the evidence provided by the percept sequence and whatever built-in
knowledge the agent has.
- Rationality is distinct from omniscience (all-knowing with
infinite knowledge)
- Agents can perform actions in order to modify future percepts so as to
obtain useful information (information gathering, exploration)
- A rational agent should not only gather information, but also learn
as much as possible from what it perceives
- An agent is autonomous if its behavior is determined by its own
experience (with ability to learn and adapt). Rational agents should be
autonomous.
Rationality:
A rational agent chooses whichever action maximizes the expected value of
the performance measure given the percept sequence to date.
Rational ≠ omniscient
Percepts may not supply all relevant information
Rational ≠ clairvoyant
Action outcomes may not be as expected
Hence rational ≠ successful

Rational 🡪 exploration, learning, autonomy.

3. What is the procedure to evaluate the Agent?

PEAS System is used to categorize similar agents together. The PEAS
system delivers the performance measure with respect to the environment,
actuators, and sensors of the respective agent. Most of the highest
performing agents are Rational Agents.
Rational Agent: A rational agent considers all possibilities and chooses
to perform a highly efficient action. For example, it chooses the shortest
path with low cost for high efficiency.
PEAS stands for Performance measure, Environment, Actuators, and
Sensors.
Performance Measure: Performance measure is the unit to define the
success of an agent. Performance varies with agents based on their
different percepts.
Environment: Environment is the surrounding of an agent at every instant. It
keeps changing with time if the agent is set in motion. There are 5 major
types of environments:
- Fully Observable & Partially Observable
- Episodic & Sequential
- Static & Dynamic
- Discrete & Continuous
- Deterministic & Stochastic
Actuator: An actuator is a part of the agent that delivers the output of
action to the environment
Sensor: Sensors are the receptive parts of an agent that takes in the input
for the agent.
Examples of PEAS descriptions (Agent, Performance Measure, Environment, Actuators):
Agent: Hospital Management System
- Performance Measure: Patient's health, Admission process, Payment
- Environment: Hospital, Doctors, Patients
- Actuators: Prescription, Diagnosis, Scan, Report
Agent: Automated Car Drive
- Performance Measure: Comfortable trip, Safety, Maximum distance
- Environment: Roads, Traffic, Vehicle
- Actuators: Steering wheel, Accelerator, Brake, Mirror
Agent: Subject Tutoring
- Performance Measure: Maximize scores, Improvement in students
- Environment: Classroom, Desk, Chair, Board, Staff, Students
- Actuators: Smart displays, Corrections
Agent: Part-picking robot
- Performance Measure: Percentage of parts in correct bins
- Environment: Conveyor belt with parts; bins
- Actuators: Jointed arms and hand
Agent: Satellite image analysis system
- Performance Measure: Correct image categorization
- Environment: Downlink from orbiting satellite
- Actuators: Display categorization of scene
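As a small illustration, a PEAS description can be captured as a plain data structure. The sketch below uses the Automated Car Drive entry from the list above; the sensors list is an assumption, since the table in the notes does not include a sensor column.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

automated_car = PEAS(
    performance=["Comfortable trip", "Safety", "Maximum distance"],
    environment=["Roads", "Traffic", "Vehicle"],
    actuators=["Steering wheel", "Accelerator", "Brake", "Mirror"],
    sensors=["Camera", "GPS", "Speedometer"],   # assumed; not listed in the notes
)
print(automated_car)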

4. Develop an algorithm for Simple Problem-Solving Agent.

Depending on the problem and its working domain, different types of
problem-solving agents are defined and used at an atomic level, without any
internal state visible to the problem-solving algorithm. The problem-solving
agent works precisely by defining problems and their possible solutions. So
we can say that problem solving is a part of artificial intelligence that
encompasses a number of techniques, such as trees, B-trees, and heuristic
algorithms, to solve a problem.
We can also say that a problem-solving agent is a result-driven agent that
always focuses on satisfying its goals.
Steps of problem-solving in AI:
The problems of AI are directly associated with the nature of humans and
their activities, so we need a finite number of steps to solve a problem,
which makes human work easier.
These are the steps required to solve a problem:
- Goal Formulation: This is the first and simplest step in problem-solving.
It organizes finite steps to formulate a target/goal that requires some
action to achieve it. Today the formulation of the goal is based on AI
agents.
- Problem formulation: It is one of the core steps of problem-solving,
which decides what action should be taken to achieve the formulated goal.
In AI this core part depends upon a software agent, which consists of the
following components to formulate the associated problem.
Components to formulate the associated problem:
- Initial State: The state from which the agent starts working towards the
specified goal; it also initializes the problem domain to be solved.
- Actions: The set of all possible actions the agent can take from a given
state.
- Transition model: Describes the state that results from performing an
action in a given state, and forwards it to the next stage.
- Goal test: Determines whether the specified goal has been achieved through
the transition model; once the goal is achieved, the agent stops acting and
moves on to determining the cost of achieving the goal.
- Path cost: Assigns a numerical cost to achieving the goal. It accounts for
all hardware, software, and human working costs.
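A minimal sketch of these components for a toy route-finding problem; the map, state names, and costs below are invented for illustration.
ROADS = {                        # state space as a weighted graph
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 5},
    "C": {"D": 1},
    "D": {},
}

INITIAL_STATE = "A"
GOAL_STATE = "D"

def actions(state):
    """All possible actions (moves) available in a state."""
    return list(ROADS[state])

def result(state, action):
    """Transition model: the state produced by applying an action."""
    return action

def goal_test(state):
    """Goal test: have we reached the specified goal?"""
    return state == GOAL_STATE

def path_cost(cost_so_far, state, action):
    """Path cost: cumulative numerical cost of the path."""
    return cost_so_far + ROADS[state][action]

print(actions(INITIAL_STATE), goal_test("D"))   # ['B', 'C'] True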
5. Explain the various Search Strategies and with a suitable
example.
Depth First Search:
Depth-first search (DFS) is an algorithm for traversing or searching tree
or graph data structures. The algorithm starts at the root node and
explores as far as possible along each branch before backtracking. It uses a
last-in-first-out strategy and hence is implemented using a stack.
Time Complexity: Equivalent to the number of nodes traversed in DFS.
T(n) = 1 + n + n^2 + n^3 + . . . + n^d = O(n^d)
Space Complexity: Equivalent to how large the fringe can get.
S(n) = O(n x d)
Completeness: DFS is complete if the search tree is finite, meaning for a
given finite search tree, DFS will come up with a solution if it exists.
Optimality: DFS is not optimal, meaning the number of steps in reaching
the solution, or the cost spent in reaching it, is high.
Breadth First Search:
BFS is an algorithm for traversing or searching tree or graph data
structures. It starts at the tree root and explores all of the neighbor nodes
at the present depth prior to moving on to the nodes at the next depth level.
It is implemented using a queue.
Time complexity: Equivalent to the number of nodes traversed in BFS until
the shallowest solution.
T(n) = 1 + n + n^2 + n^3 + . . . + n^s = O(n^s)
Space complexity: Equivalent to how large the fringe can get.
S(n) = O(n^s)
Completeness: BFS is complete, meaning for a given search tree, BFS will
come up with a solution if it exists.
Optimality: BFS is optimal as long as the costs of all edges are equal.
Uniform Cost Search:
UCS is different from BFS and DFS because here the costs come into play.
In other words, traversing via different edges might not have the same cost.
The goal is to find a path where the cumulative sum of costs is the least.
Cost of a node is defined as:
cost(node) = cumulative cost of all nodes from root
cost(root) = 0
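A hedged sketch of UCS using a priority queue ordered by cumulative path cost; the weighted graph below is an invented example.
import heapq

graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 6)],
    "B": [("G", 2)],
    "G": [],
}

def ucs(start, goal):
    frontier = [(0, start, [start])]          # (cumulative cost, node, path)
    best = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for nbr, step in graph[node]:
            new_cost = cost + step            # cost(node) = cumulative cost from root
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(frontier, (new_cost, nbr, path + [nbr]))
    return None

print(ucs("S", "G"))   # (5, ['S', 'A', 'B', 'G'])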
Greedy Search:
In greedy search, we expand the node closest to the goal node. The
"closeness" is estimated by a heuristic h(x).
Heuristic: A heuristic h is defined as h(x) = estimate of the distance of
node x from the goal node. The lower the value of h(x), the closer the node
is to the goal.
Strategy: Expand the node closest to the goal state, i.e. expand the node
with the lowest h value.
A* Tree Search:
This method combines the strengths of uniform-cost search and greedy
search. In this search, the heuristic is the summation of the cost in UCS,
denoted by g(x), and the cost in the greedy search, denoted by h(x). The
summed cost is denoted by f(x).
Heuristic: The following points should be noted w.r.t. heuristics in A*
search. f(x) = g(x) + h(x)
- Here, h(x) is called the forward cost and is an estimate of the distance
of the current node from the goal node.
- And, g(x) is called the backward cost and is the cumulative cost of a node
from the root node.
- A* search is optimal only when, for all nodes, the forward cost for a node
h(x) underestimates the actual cost h*(x) to reach the goal. This property
of the A* heuristic is called admissibility.
Admissibility: 0 ≤ h(x) ≤ h*(x)
Strategy: Choose the node with the lowest f(x) value.
A* Graph Search:
- A* tree search works well, except that it takes time re-exploring the
branches it has already explored. In other words, if the same node has been
expanded twice in different branches of the search tree, A* search might
explore both of those branches, thus wasting time.
- A* Graph Search, or simply Graph Search, removes this limitation by
adding this rule: do not expand the same node more than once.
- Heuristic: Graph search is optimal only when the forward cost between
two successive nodes A and B, given by h(A) – h(B), is less than or equal
to the backward cost between those two nodes g(A 🡪 B). This property of
the graph search heuristic is called consistency.
Consistency: h(A) – h(B) ≤ g(A 🡪 B)
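A minimal A* graph-search sketch combining g(x) and h(x), with a set of already-expanded nodes enforcing the "do not expand the same node more than once" rule; the graph and heuristic values are invented and chosen to be admissible and consistent.
import heapq

graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 12)],
    "B": [("G", 3)],
    "G": [],
}
h = {"S": 5, "A": 4, "B": 2, "G": 0}             # forward-cost estimates

def a_star(start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    expanded = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in expanded:                     # graph-search rule: expand once
            continue
        expanded.add(node)
        for nbr, step in graph[node]:
            g2 = g + step                        # backward cost
            heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None

print(a_star("S", "G"))   # (6, ['S', 'A', 'B', 'G'])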
6. Enumerate the steps regarding Best-First Search
Algorithm with a suitable example.
- Create 2 empty lists: OPEN and CLOSED
- Start from the initial node and put it in the ‘ordered’ OPEN list
- Repeat the next steps until the GOAL node is reached
a. If the OPEN list is empty, then EXIT the loop returning ‘False’
b. Select the first/top node N in the OPEN list and move it to the
CLOSED list. Also, capture the information of the parent node.
c. If N is a GOAL node, then move the node to the Closed list and
exit the loop returning ‘True’. The solution can be found by
backtracking the path.
d. If N is not the GOAL node, expand node N to generate the
‘immediate’ next nodes linked to node N and add all those to the
OPEN list.
e. Reorder the nodes in the OPEN list in ascending order according to
an evaluation function f(n)
This algorithm will traverse the shortest path first in the queue. The time
complexity of the algorithm is given by O(n*log n)
Example:
Let's have a look at the graph below and try to implement both Greedy BFS
and A* algorithms step by step using the two lists, OPEN and CLOSED.
g(n): Path distance
h(n): Estimate of distance to the goal
f(n): Combined heuristic, i.e. g(n) + h(n)
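Since the original graph figure is not reproduced here, the following sketch follows the OPEN/CLOSED steps above on an invented graph; the same loop gives Greedy BFS when f(n) = h(n) and A* when f(n) = g(n) + h(n).
graph = {"S": [("A", 3), ("B", 1)], "A": [("G", 2)], "B": [("G", 5)], "G": []}
h = {"S": 4, "A": 2, "B": 3, "G": 0}

def best_first(start, goal, f):
    open_list = [(f(0, start), 0, start, [start])]     # the ordered OPEN list
    closed = set()                                     # the CLOSED list
    while open_list:                                   # step a: OPEN empty -> fail
        open_list.sort(key=lambda item: item[0])       # step e: reorder by f(n)
        _, g, node, path = open_list.pop(0)            # step b: take the top node N
        closed.add(node)
        if node == goal:                               # step c: goal reached
            return path
        for nbr, cost in graph[node]:                  # step d: expand N
            if nbr not in closed:
                g2 = g + cost
                open_list.append((f(g2, nbr), g2, nbr, path + [nbr]))
    return False

print(best_first("S", "G", lambda g, n: h[n]))       # Greedy BFS: ['S', 'A', 'G']
print(best_first("S", "G", lambda g, n: g + h[n]))   # A*:         ['S', 'A', 'G']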
7. Recall the term “Adversarial Search” and specify its

need.

Adversarial search is a game-playing technique where the agents are
surrounded by a competitive environment. A conflicting goal is given to the
agents. These agents compete with one another and try to defeat one
another in order to win the game. Such conflicting goals give rise to the
adversarial search.
Techniques required to get the best optimal solution
There is always a need to choose those algorithms which provide the best
optimal solution in a limited time. So, we use the following techniques which
could fulfill our requirements:
- Pruning: A technique which allows ignoring the unwanted portions of
a search tree which make no difference in its final result.
- Heuristic Evaluation Function: It allows to approximate the cost value
at each level of the search tree, before reaching the goal node.
Elements of Game Playing search
To play a game, we use a game tree to know all the possible choices and to
pick the best one out. There are following elements of a game-playing:
- S0: It is the initial state from where a game begins
- PLAYER(s): It defines which player is having the current turn to make
a move in the state.
- ACTIONS(s): It defines the set of legal moves to be used in a state.
- RESULT (s, a): It is a transition model which defines the result of
a move.
- TERMINAL-TEST (s): It checks whether the game has ended and returns
true if so.
- UTILITY (s, p): It defines the final value with which the game has
ended. This function is also known as the Objective function or Payoff
function. It is the prize the winner will get, i.e.:
- (-1): if the PLAYER loses
- (+1): if the PLAYER wins
- (0): if there is a draw between the PLAYERS.
Let’s understand the working of the elements with the help of a game tree
designed for tic-tac-toe. Here, the node represents the game state and
edges represent the moves taken by the players.
● INITIAL STATE (S0): The top node in the game tree represents the
initial state and shows all the possible choices from which one is picked.
● PLAYER (s): There are two players, MAX and MIN. MAX begins the
game by picking one best move and placing an X in an empty square.
● ACTIONS (s): Both players can make moves in the empty boxes, turn
by turn.
● RESULT (s, a): The moves made by MIN and MAX will decide the
outcome of the game.
● TERMINAL-TEST (s): When all the empty boxes are filled, the game
reaches its terminating state.
● UTILITY: At the end, we get to know who wins, MAX or MIN, and
accordingly the prize is given to them.
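A hedged sketch of these elements for tic-tac-toe: the state is a 9-cell tuple, MAX plays X and MIN plays O, and the function names follow the elements above (the representation itself is an assumption made for illustration).
S0 = (" ",) * 9                                   # initial state: empty board

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def player(s):
    """Whose turn it is: MAX (X) moves first."""
    return "X" if s.count("X") == s.count("O") else "O"

def actions(s):
    """Legal moves: the indices of the empty squares."""
    return [i for i, cell in enumerate(s) if cell == " "]

def result(s, a):
    """Transition model: the board after the current player marks square a."""
    board = list(s)
    board[a] = player(s)
    return tuple(board)

def winner(s):
    for i, j, k in LINES:
        if s[i] != " " and s[i] == s[j] == s[k]:
            return s[i]
    return None

def terminal_test(s):
    """True when the game has ended (a win or a full board)."""
    return winner(s) is not None or " " not in s

def utility(s):
    """+1 if MAX (X) wins, -1 if MIN (O) wins, 0 for a draw."""
    return {"X": 1, "O": -1, None: 0}[winner(s)]

s1 = result(S0, 4)                                # MAX takes the centre square
print(player(S0), actions(S0)[:3], terminal_test(s1), utility(s1))   # X [0, 1, 2] False 0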
8. Describe the Min-Max Algorithm with an example.

- The min-max algorithm is a recursive or backtracking method. It
suggests the best move for the player, provided that the opponent
is likewise playing well.
- In AI, the Min-Max algorithm is mostly employed for game play.
- The game is played by two players, one named MAX and the other named
MIN in this algorithm.
- Both players fight it out, since the opponent player receives the
minimum benefit while they receive the maximum benefit.
- Both players in the game are adversaries, with MAX selecting
the maximum value and MIN selecting the minimum value.
- For the exploration of the entire game tree, the minimax method uses
a depth-first search strategy
- The minimax algorithm descends all the way to the tree’s terminal node,
then recursively backtracks the tree.
Pseudocode:
- Given a state s
a. The maximizing player picks action a in Actions(s) that produces the
highest value of Min-Value(Result(s, a)).
b. The minimizing player picks action a in Actions(s) that produces
the lowest value of Max-Value(Result(s, a)).
- Function Max-Value(state):
a. v = -∞
b. if Terminal(state): return Utility(state)
c. for action in Actions(state):
v = Max(v, Min-Value(Result(state, action)))
d. return v
- Function Min-Value(state):
a. v = +∞
b. if Terminal(state): return Utility(state)
c. for action in Actions(state):
v = Min(v, Max-Value(Result(state, action)))
d. return v
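A runnable sketch of this pseudocode on a tiny hand-built game tree; the tree shape and its leaf utilities are invented for illustration.
GAME_TREE = {
    "root": ["L", "R"],           # MAX to move at the root
    "L": [3, 5],                  # MIN to move over these leaf utilities
    "R": [2, 9],
}

def is_terminal(state):
    return isinstance(state, int)         # leaves are plain utility numbers

def max_value(state):
    if is_terminal(state):
        return state
    return max(min_value(s) for s in GAME_TREE[state])

def min_value(state):
    if is_terminal(state):
        return state
    return min(max_value(s) for s in GAME_TREE[state])

# The maximizing player picks the action whose Min-Value is highest:
# L guarantees min(3, 5) = 3, R guarantees min(2, 9) = 2, so MAX picks L.
best = max(GAME_TREE["root"], key=min_value)
print(best, min_value(best))              # L 3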
Example: the algorithm is worked through in four steps on a sample game
tree (the step-by-step figures are not reproduced here).
9. Describe the Alpha Beta Pruning Process with an
example.
The minimax algorithm is optimized via alpha-beta pruning. The requirement
for pruning arose from the fact that decision trees can become extremely
complex in some circumstances, and superfluous branches in such a tree add
to the model's complexity without changing its final result. To circumvent
this, alpha-beta pruning is used, which saves the computer from having to
examine the entire tree. These unnecessary nodes slow the algorithm down;
by skipping them, the algorithm becomes more efficient.
Key points in Alpha-beta Pruning:
- Alpha: Alpha is the best choice or the highest value that we have found at
any instance along the path of the Maximizer. The initial value of alpha
is -∞.
- Beta: Beta is the best choice or the lowest value that we have found at
any instance along the path of the Minimizer. The initial value of beta
is +∞.
- The condition for alpha-beta pruning is α >= β.
- Each node has to keep track of its alpha and beta values. Alpha can be
updated only when it is MAX's turn and, similarly, beta can be updated only
when it is MIN's turn.
- MAX will update only alpha values and the MIN player will update only
beta values.
- While backing up the tree, the node values (not the alpha and beta values)
are passed to the upper nodes.
- Alpha and beta values are passed down only to child nodes.
Working of Alpha-beta Pruning:
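Since the worked figures are not reproduced here, the following is a minimal sketch of the working of alpha-beta pruning on a small invented game tree.
TREE = {
    "root": ["A", "B"],
    "A": [3, 5],          # MIN node over leaf utilities
    "B": [2, 9],          # once 2 is seen here, beta=2 <= alpha=3, so 9 is pruned
}

def alphabeta(state, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(state, int):                    # terminal node: return its utility
        return state
    if maximizing:                                # MAX updates only alpha
        value = float("-inf")
        for child in TREE[state]:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                     # cut-off condition
                break
        return value
    else:                                         # MIN updates only beta
        value = float("inf")
        for child in TREE[state]:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

print(alphabeta("root", True))   # 3 (the leaf 9 under node B is never examined)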
