MCQ AI

The document consists of a series of questions and multiple-choice options related to artificial intelligence concepts, such as agent types, search strategies, and performance measures. It covers various scenarios and theoretical frameworks in AI, including decision-making in uncertain environments and the evaluation of algorithms. The questions are structured to assess understanding of AI principles, heuristics, and agent architectures.


Sl. No. Question Options 1 to 3 (Option 4 and the answer key appear in a separate block below)

1. An autonomous drone agent must fly through unknown terrain to a target while avoiding obstacles. What type of environment is it operating in?
1) Fully observable, deterministic 2) Partially observable, dynamic 3) Static, stochastic

2. An agent that updates its internal model using sensor data and adapts to environment changes is classified as:
1) Goal-based 2) Learning agent 3) Reflex agent

3. You are designing a cleaning robot for a multi-room office. Which agent structure is most suitable for optimal coverage and energy saving?
1) Simple reflex 2) Model-based 3) Goal-based

4. In which scenario would uniform-cost search perform better than BFS?
1) When all path costs are equal 2) In graphs with varying step costs 3) In depth-limited trees

5. An agent in a partially observable environment receives noisy sensor input and must plan ahead. Which technique best aids its rationality?
1) Random exploration 2) Use of belief state 3) Rule-based reflexes

6. Which of the following justifies using Iterative Deepening Search over Depth-First or Breadth-First in practical AI systems?
1) Guarantees completeness and optimality with limited memory 2) Always expands all nodes 3) Avoids goal testing

7. Consider a robot that can map an area as it moves through it. Which search strategy is ideal for minimum memory and safety?
1) Depth-first search 2) Uniform cost search 3) Online DFS

8. A rational agent acts to maximize:
1) Its accumulated rewards regardless of goals 2) Expected performance measures given percept sequence 3) The number of actions taken

9. A Mars rover must act in real time with incomplete data about terrain and solar levels. Which environment attributes apply?
1) Static, stochastic, partially observable 2) Dynamic, deterministic, fully observable 3) Episodic, stochastic, observable

10. In a dynamic environment, the agent's decision must account for:
1) Deterministic transitions 2) Environmental changes during decision-making 3) Fully defined percepts

11. An AI agent is trained to play chess using Minimax and Alpha-Beta pruning. What type of agent and environment is modeled?
1) Reflex agent, stochastic 2) Utility agent, single-agent 3) Adversarial agent, deterministic

12. In designing a robot that must balance between speed and safety while navigating, which agent architecture is most applicable?
1) Reflex 2) Model-based 3) Goal-based

13. When using uniform-cost search, what condition must be met to ensure optimality?
1) Zero-cost paths 2) Non-negative step costs 3) Finite state space

14. A delivery robot is equipped with limited sensors and frequently encounters obstacles that were not initially mapped. Which agent model helps it best?
1) Simple reflex 2) Model-based reflex 3) Learning agent

15. Why is BFS often not suitable for memory-constrained environments?
1) It lacks completeness 2) It explores irrelevant nodes 3) It stores every node at each level

16. You are modeling an agent that can adjust behavior based on time, temperature, and previous errors. Which element in the agent's structure allows this?
1) Sensor input 2) Agent function 3) Percept sequence memory

17. What problem is best solved using an online search strategy?
1) A puzzle game with fixed rules 2) A maze where the layout is unknown 3) A sorting algorithm

18. A rational vacuum cleaner agent is expected to:
1) Clean only the visible dirt 2) Clean randomly to avoid energy waste 3) Maximize coverage while minimizing energy use

19. What is the primary risk when using DFS in a space with cycles?
1) Completeness is lost 2) Search becomes optimal 3) Time complexity is linear

20. A GPS navigator using real-time traffic data must choose the shortest route considering travel time. Which algorithm is most appropriate?
1) BFS 2) UCS 3) A*

21. A robot with sensors prone to error must infer its true location over time. What should its agent model include?
1) Reactive rules 2) Belief state estimation 3) Predefined goals

22. Why is partial observability more challenging than full observability in AI systems?
1) Requires less data 2) Uses pre-trained weights 3) Involves belief state maintenance

23. An adversarial agent playing a real-time multiplayer game must adapt rapidly. What must its architecture focus on?
1) Action-reaction cycles 2) Predefined rules 3) Utility maximization and opponent modeling

24. In a multi-agent competitive environment, the assumption of a static environment is violated because:
1) Sensors degrade 2) Agents do not learn 3) Actions of others affect state

25. In a search problem, the branching factor is 4 and the solution depth is 10. What is the approximate node count explored by BFS?
1) 40 2) 4^10 3) 10

26. A belief-state-based agent will outperform others in which of the following?
1) Fully known environments 2) Random mazes with partial maps 3) Deterministic puzzles

27. Why is A* search better than greedy best-first search for shortest pathfinding?
1) Uses no heuristic 2) Uses both cost-so-far and heuristic 3) Faster memory usage

28. In a problem formulation, which of the following is not required?
1) Initial state 2) Transition model 3) Utility function

29. An agent repeatedly chooses actions that seem best locally but gets stuck in sub-optimal regions. What type of algorithm is it using?
1) Breadth-first search 2) Hill-climbing 3) A* search

30. When using Iterative Deepening Search, why is time complexity still acceptable?
1) It explores deeper levels multiple times 2) It ignores shallow nodes 3) It builds the full tree first

31. A rational agent for autonomous driving must consider sensor noise, other drivers, and road conditions. What kind of model is best?
1) Reflex with rules 2) Probabilistic with utility maximization 3) Greedy search

32. What is the worst-case time complexity of Depth-First Search in an infinite-depth tree?
1) O(b^m), where m is maximum depth 2) O(d^b), where d is depth 3) O(1)

33. When must an AI agent use search with partial information?
1) When the transition model is known 2) When the environment is deterministic 3) When the current state is uncertain

34. Which situation best describes a stochastic environment?
1) Agent is in full control 2) Outcome depends solely on agent's action 3) Multiple outcomes possible for the same action

35. What feature of a utility-based agent allows it to choose between conflicting goals?
1) Hard-coded goals 2) Utility function ranking preferences 3) Use of transition models

36. What differentiates a model-based reflex agent from a simple reflex agent?
1) It uses goal formulation 2) It has no internal memory 3) It stores an internal representation of the world
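Several questions in this section (for example, the branching-factor question asking for the BFS node count with b=4 and d=10) reduce to the geometric series 1 + b + b² + … + b^d. A minimal sketch for checking such answers; the function name is illustrative, not from the source:

```python
def bfs_nodes(b, d):
    # Total nodes in a complete tree of branching factor b
    # up to and including depth d: (b^(d+1) - 1) / (b - 1)
    return (b ** (d + 1) - 1) // (b - 1)

print(bfs_nodes(2, 6))   # 127
print(bfs_nodes(3, 4))   # 121
print(4 ** 10)           # 1048576, the dominant last-level term for b=4, d=10
```

Since the deepest level holds roughly (b-1)/b of all nodes, b^d alone is the usual "approximate node count" answer.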

Sl. No. Question Options A to C (Option D and the answer key appear in a separate block below)

1. A robot solved 85% of 240 tasks. How many tasks did it solve correctly? A) 204 B) 212 C) 220
2. A* search uses f(n) = g(n) + h(n). If g(n)=5, h(n)=3, what is f(n)? A) 8 B) 15 C) 3
3. An agent receives 480 percepts/hour. Each takes 0.5s. Processing time in minutes? A) 2 B) 3 C) 4
4. A vacuum agent uses 0.25 units per clean. It cleans 160 times. Total energy used? A) 30 B) 35 C) 40
5. BFS tree with b=2 and d=6. Total nodes? A) 63 B) 64 C) 127
6. In A*, h(n) = 0 for all nodes. What does A* reduce to? A) DFS B) UCS C) BFS
7. Which of the following is not a property of a rational agent? A) Performance measure B) Learning C) Complete perception
8. If a heuristic overestimates cost, it is: A) Admissible B) Consistent C) Inadmissible
9. An agent chooses between two tasks. Task A: 0.9 × 80, Task B: 0.6 × 120. Which is better? A) Task A B) Task B C) Both same
10. Bidirectional search with b=3, d=8. Time complexity? A) O(3^4) B) O(3^8) C) O(8^3)
11. In a search tree, b=3 and d=4. How many nodes generated in BFS? A) 40 B) 81 C) 121
12. A sensor sends 100 signals/hour. Each signal = 2KB. Total data in MB? A) 200 B) 2 C) 20
13. A rational agent increases success rate from 65% to 91%. Percentage improvement? A) 25% B) 26% C) 40%
14. An online agent processes 0.5s per percept. 120 percepts/hour = how many minutes? A) 1 B) 0.5 C) 2
15. Greedy Best-First Search uses which evaluation function? A) g(n) B) h(n) C) f(n) = g(n)+h(n)
16. In CSP, which method chooses the variable with fewest legal values? A) Forward Checking B) Degree Heuristic C) MRV
17. What is the outcome of Alpha-Beta pruning in minimax? A) Less memory B) Better path C) Same result faster
18. If an agent perceives 60 times/min for 2 hrs, how many total percepts? A) 3600 B) 600 C) 7200
19. A problem-solving agent evaluates 100 nodes. Time per node = 0.1s. Total time? A) 1 min B) 10 sec C) 5 sec
20. Admissible heuristics must: A) Be exact B) Never overestimate C) Underestimate randomly
21. A game tree of depth 4 has how many nodes if branching factor is 3? A) 40 B) 81 C) 121
22. In A*, consistent heuristics: A) May overestimate B) Are always perfect C) Never expand a node twice
23. If a rational agent acts with 80% success and reward 100, expected reward is: A) 80 B) 60 C) 20
24. A search agent expands 3 nodes per second. Time to expand 90 nodes? A) 10s B) 30s C) 60s
25. In CSP, arc consistency is violated when: A) All values are legal B) No support exists in domain C) One variable assigned
26. Which strategy uses local knowledge to improve a solution iteratively? A) DFS B) Greedy C) Hill Climbing
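The A* questions above all apply the evaluation function f(n) = g(n) + h(n) and the admissibility condition h(n) ≤ true remaining cost. A minimal sketch, with illustrative helper names:

```python
def a_star_f(g, h):
    # A* evaluation: cost accumulated so far plus heuristic estimate to goal
    return g + h

def is_admissible(h, true_cost):
    # An admissible heuristic never overestimates the true remaining cost
    return h <= true_cost

print(a_star_f(5, 3))        # 8
print(is_admissible(4, 3))   # False: overestimates, hence inadmissible
```

With h(n) = 0 everywhere, a_star_f degenerates to g(n) alone, which is exactly uniform-cost search, matching the keyed answer to question 6.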
Option 4 and Answer (CO1, Topic 1)

Q1. Option 4: Episodic, deterministic. Answer: 2
Q2. Option 4: Utility-based. Answer: 2
Q3. Option 4: Utility-based. Answer: 4
Q4. Option 4: When goal is shallow. Answer: 2
Q5. Option 4: Ignoring noise. Answer: 2
Q6. Option 4: Requires admissible heuristic. Answer: 1
Q7. Option 4: BFS with heuristics. Answer: 3
Q8. Option 4: Accuracy of state transitions. Answer: 2
Q9. Option 4: Dynamic, stochastic, partially observable. Answer: 4
Q10. Option 4: Precomputed action sets. Answer: 2
Q11. Option 4: Model-free agent, episodic. Answer: 3
Q12. Option 4: Utility-based. Answer: 4
Q13. Option 4: Admissible heuristics. Answer: 2
Q14. Option 4: Utility-based agent. Answer: 3
Q15. Option 4: It uses heuristics inefficiently. Answer: 3
Q16. Option 4: Actuators. Answer: 3
Q17. Option 4: A language translation engine. Answer: 2
Q18. Option 4: Clean continuously regardless of power level. Answer: 3
Q19. Option 4: All nodes are reached. Answer: 1
Q20. Option 4: IDS. Answer: 3
Q21. Option 4: Actuator recalibration. Answer: 2
Q22. Option 4: Avoids goal formulation. Answer: 3
Q23. Option 4: Single-state evaluation. Answer: 3
Q24. Option 4: Reflex agents dominate. Answer: 3
Q25. Option 4: 1000. Answer: 2
Q26. Option 4: Grid-based deterministic systems. Answer: 2
Q27. Option 4: Ignores repeated states. Answer: 2
Q28. Option 4: Goal test. Answer: 3
Q29. Option 4: Depth-first search. Answer: 2
Q30. Option 4: Most nodes are at the bottom. Answer: 4
Q31. Option 4: Episodic agent. Answer: 2
Q32. Option 4: O(log b). Answer: 1
Q33. Option 4: When sensors are highly accurate. Answer: 3
Q34. Option 4: Agent never perceives environment. Answer: 3
Q35. Option 4: Reflexive condition checking. Answer: 2
Q36. Option 4: It makes random decisions. Answer: 3

Option D and Answer (c01)

Q1. Option D: 200. Answer: 204
Q2. Option D: 5. Answer: 8
Q3. Option D: 5. Answer: 4
Q4. Option D: 45. Answer: 40
Q5. Option D: 128. Answer: 127
Q6. Option D: IDS. Answer: UCS
Q7. Option D: Random behavior. Answer: Random behavior
Q8. Option D: Efficient. Answer: Inadmissible
Q9. Option D: None. Answer: Task A
Q10. Option D: O(2^3). Answer: O(3^4)
Q11. Option D: 242. Answer: 121
Q12. Option D: 0.2. Answer: 200
Q13. Option D: 38%. Answer: 26%
Q14. Option D: 3. Answer: 1
Q15. Option D: None. Answer: h(n)
Q16. Option D: Arc Consistency. Answer: MRV
Q17. Option D: Fewer branches. Answer: Same result faster
Q18. Option D: 1200. Answer: 7200
Q19. Option D: 15 sec. Answer: 10 sec
Q20. Option D: Be consistent. Answer: Never overestimate
Q21. Option D: 242. Answer: 121
Q22. Option D: Ignore g(n). Answer: Never expand a node twice
Q23. Option D: 90. Answer: 80
Q24. Option D: 90s. Answer: 30s
Q25. Option D: All arcs are consistent. Answer: No support exists in domain
Q26. Option D: BFS. Answer: Hill Climbing
c02
Sl.No Question Options A to D, Correct Answer, CO2 outcome

1. In a CSP with 5 variables each having 3 values, how many total assignments are possible? A) 15 B) 125 C) 243 D) 625. Answer: C (243). [Analyze solution spaces]
2. In a minimax algorithm with depth 4 and branching factor 3, how many nodes are evaluated? A) 81 B) 121 C) 40 D) 121. Answer: B (121). [Analyze adversarial trees]
3. Player has 3 moves, opponent has 2 responses per move. How many game states after opponent's turn? A) 3 B) 6 C) 9 D) 12. Answer: B (6). [Evaluate game trees]
4. Alpha-beta pruning halves nodes in a depth-3 tree (BF=4). How many nodes evaluated? A) 32 B) 40 C) 64 D) 20. Answer: A (32). [Apply optimization]
5. Evaluation scores A=12, B=15, C=20. Which is best? A) A B) B C) C D) All equal. Answer: C. [Apply heuristic selection]
6. Greedy search: h(C)=3, path cost=6. Total estimate? A) 9 B) 6 C) 3 D) 12. Answer: A (9). [Apply heuristic strategy]
7. A* search: actual=5, heuristic=8. Total cost? A) 8 B) 5 C) 13 D) 15. Answer: C (13). [Apply A*]
8. Expand node first using informed search: h(C)=4, g=5. A) A B) B C) C D) D. Answer: C. [Analyze A* decisions]
9. Simulated annealing: T=1000, r=0.95, after 5 iterations? A) 773.78 B) 800 C) 950 D) 900. Answer: A (773.78). [Apply SA]
10. A* again: actual=5, heuristic=8. Total cost? A) 13 B) 12 C) 10 D) 15. Answer: A (13). [Apply A*]
11. Minimax tree: depth 5, BF=4. Total nodes? A) 1365 B) 1024 C) 341 D) 1000. Answer: A (1365). [Analyze complexity]
12. If P→Q = 0.9 and Q→R = 0.9, P is true. P(R)? A) 0.9 B) 0.81 C) 1 D) 0.8. Answer: B (0.81)
13. 8 axioms, 3 conclusions each. Total conclusions? A) 24 B) 8 C) 11 D) 20. Answer: A (24). [Evaluate inference]
14. 8 clauses, avg 5 literals. Pairs of clauses = ? A) 112 B) 280 C) 160 D) 200. Answer: B (280). [Analyze resolution]
15. If P→Q = 0.8, Q→R = 0.8, P is true. P(R)? A) 0.64 B) 0.8 C) 1 D) 0.5. Answer: A (0.64)
16. 6 axioms, 4 conclusions each. Total conclusions? A) 24 B) 10 C) 12 D) 20. Answer: A (24). [Apply FOL]
17. 10 clauses, 4 literals each, comparisons = ? A) 180 B) 200 C) 360 D) 320. Answer: C (360). [Apply resolution]
18. P→Q = 0.85, Q→R = 0.85, P is true. P(R)? A) 0.85 B) 0.7 C) 0.7225 D) 1. Answer: C (0.7225)
19. In a game tree with BF=2, depth=3, nodes = ? A) 8 B) 15 C) 16 D) 10. Answer: B (15). [Analyze game trees]
20. Heuristic values: h(A)=2, h(B)=5, h(C)=1. Best pick? A) A B) B C) C D) None. Answer: C. [Apply GBFS]
21. In alpha-beta, how is performance improved? A) Full tree B) More memory C) Pruning D) Extra depth. Answer: C (Pruning). [Analyze optimization]
22. Admissible heuristic means? A) Overestimate B) Equal C) Never overestimate D) None. Answer: C (Never overestimate). [Understand heuristic design]
23. A* uses? A) Only g(n) B) Only h(n) C) g(n)+h(n) D) h(n)-g(n). Answer: C (g(n)+h(n)). [Understand A*]
24. Which is NOT a heuristic strategy? A) A* B) Greedy C) BFS D) Hill Climbing. Answer: C (BFS). [Distinguish informed/uninformed]
25. Constraint X ≠ Y is? A) Unary B) Binary C) Ternary D) Hard. Answer: B (Binary). [Apply CSP constraint types]
26. Minimum Remaining Values (MRV) heuristic helps? A) Expand deeper B) Value ordering C) Variable ordering D) Reduce constraints. Answer: C (Variable ordering). [Apply CSP heuristics]
27. In hill climbing, main drawback? A) Slow B) No goal C) Gets stuck in local optima D) Memory use. Answer: C (Gets stuck). [Analyze local search]
28. Arc consistency means? A) Values in domain have support B) All constraints are binary C) Unary domain D) All constraints enforced. Answer: A (Values in domain have support). [Understand arc consistency]
29. Which logic supports quantifiers? A) Propositional B) Predicate C) Boolean D) First-order only. Answer: B (Predicate). [Understand FOL]
30. Which is a complete algorithm? A) Hill climbing B) Simulated annealing C) BFS D) Greedy search. Answer: C (BFS). [Compare algorithm completeness]
31. Resolution is used for? A) Construction B) Planning C) Logical inference D) Search. Answer: C (Logical inference). [Apply logic-based inference]
32. Alpha-beta optimal in? A) All cases B) Worst case C) Best case D) Average case. Answer: C (Best case). [Analyze pruning efficiency]
33. Game playing AI uses which? A) DFS B) BFS C) Minimax D) SA. Answer: C (Minimax). [Apply adversarial search]
34. Greedy Best First Search chooses node based on? A) Path cost B) Heuristic C) Sum of costs D) Random. Answer: B (Heuristic). [Apply GBFS]
35. A node has g=4 and h=6. What's f in A*? A) 10 B) 6 C) 4 D) 12. Answer: A (10). [Apply A*]
36. Clause resolution checks for? A) Contradictions B) Solutions C) Costs D) Heuristics. Answer: A (Contradictions). [Understand resolution]
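Two recurring calculations in this section are the geometric cooling schedule of simulated annealing (question 9: 1000 × 0.95⁵) and chained implication probabilities (questions 12, 15, 18). A minimal sketch, with illustrative function names:

```python
def cooled_temp(t0, rate, iters):
    # Geometric cooling schedule: T_k = T_0 * rate**k
    return t0 * rate ** iters

def chain_prob(*probs):
    # Probability that every link in an independent implication chain holds
    p = 1.0
    for q in probs:
        p *= q
    return p

print(round(cooled_temp(1000, 0.95, 5), 2))   # 773.78
print(round(chain_prob(0.9, 0.9), 2))          # 0.81
print(round(chain_prob(0.85, 0.85), 4))        # 0.7225
```

Both match the keyed answers: A (773.78) for the annealing question and B (0.81) / C (0.7225) for the probability chains.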
c03
Sl.No Question Options A to D, Answer

1. CSP with 5 variables and 3 values: total assignments? A) 15 B) 125 C) 243 D) 35. Answer: C
2. Minimax: depth 4, branching 3, nodes evaluated? A) 81 B) 120 C) 121 D) 250. Answer: A
3. Minimax: 3 player moves, opponent has 2, total states? A) 5 B) 6 C) 3 D) 9. Answer: B
4. Alpha-beta pruning: branching 4, depth 3, pruned by half. A) 64 B) 32 C) 48 D) 80. Answer: C
5. Best game state with scores A=12, B=15, C=20. A) A B) B C) C D) None. Answer: C
6. Greedy BFS: h(C)=3, path cost=6, total cost? A) 3 B) 6 C) 9 D) 12. Answer: C
7. A*: cost to current=10, h=15; next node cost=5, h=8, total? A) 15 B) 18 C) 23 D) 13. Answer: B
8. Informed search: expand first? h(A)=9, p=3; h(B)=7, p=4... A) A B) C C) B D) D. Answer: D
9. Simulated annealing: T=1000, rate=0.95, 5 iterations. A) 773.78 B) 950 C) 800 D) 900. Answer: A
10. A*: next node path = 5+8. Total? A) 10 B) 8 C) 13 D) 15. Answer: C
11. Minimax: depth 5, branch 4, total nodes? A) 341 B) 1024 C) 1365 D) 624. Answer: C
12. P→Q (0.9), Q→R (0.9), P is true: P(R)? A) 0.9 B) 0.81 C) 0.99 D) 0.72
13. 8 axioms, 3 conclusions each. Total? A) 24 B) 11 C) 8 D) 18. Answer: A
14. 8 clauses, avg 5 literals. Literal comparisons for all pairs? A) 1120 B) 140 C) 320 D) 560. Answer: D
15. P→Q (0.8), Q→R (0.8), P true. P(R)? A) 0.6 B) 0.8 C) 0.64 D) 0.72. Answer: C
16. 6 axioms, 4 conclusions each. Total conclusions? A) 20 B) 24 C) 18 D) 30. Answer: B
17. 10 clauses, avg 4 literals. Literal comparisons (all pairs)? A) 720 B) 180 C) 640 D) 900. Answer: C
18. P→Q & Q→R (0.85), P true. P(R)? A) 0.7 B) 0.7225 C) 0.85 D) 0.65. Answer: B
19. Prop logic: 4 rules @ 3 cost; FOL: 5 rules @ 2. Cost diff? A) 2 B) 5 C) 3 D) 7. Answer: C
20. P↔Q and Q↔R (0.75), P true. P(R)? A) 0.75 B) 0.5 C) 0.5625 D) 0.625. Answer: C
21. Prop logic: 5×4=20; FOL: 6×3=18. Cost diff? A) 1 B) 2 C) 3 D) 4. Answer: A
22. P↔Q and Q↔R (0.8), P true. P(R)? A) 0.64 B) 0.72 C) 0.8 D) 0.6. Answer: B
23. Precision=0.8, Recall=0.7. F1 score? A) 0.75 B) 0.74 C) 0.76 D) 0.72. Answer: A
24. Prop logic: 6×3=18; FOL: 5×2=10. Diff? A) 6 B) 8 C) 10 D) 9. Answer: B
25. Precision=0.8, Recall=0.5. F1 score? A) 0.6 B) 0.62 C) 0.67 D) 0.5. Answer: C
26. ∀x (Human(x) → Mortal(x)) means? A) Humans are gods B) All humans are mortal C) Only some are mortal D) No humans are mortal. Answer: B
27. 5 propositions, each P=0.7. All true? A) 0.24 B) 0.16807 C) 0.3 D) 0.7. Answer: B
28. P→Q & Q→R true, P true, each 0.8. P(R)? A) 0.8 B) 0.64 C) 1 D) 0.72. Answer: B
29. Modus Ponens: P=0.9, P→Q=0.85. Q=? A) 0.85 B) 0.77 C) 0.765 D) 0.65. Answer: C
30. Domain=5, Predicates=3, arity 2. Atomic sentences? A) 75 B) 100 C) 50 D) 60. Answer: A
31. P(A,x) and P(y,B). MGU? A) {x→B, y→A} B) {A→y} C) {x→y} D) {x→y, A→B}. Answer: A
32. Agent: 10 percepts/sec, update every 2s, 1 min. Stored? A) 600 B) 300 C) 1200 D) 900. Answer: A
33. 4 functions, 3 args, 2 values. Function apps? A) 64 B) 16 C) 32 D) 48. Answer: A
34. P(x,y) and P(A,z). MGU? A) {x→A, y→z} B) {z→y} C) {x→A} D) {x→A, z→y}. Answer: D
35. Agent updates every 5s for 2h. Updates? A) 360 B) 240 C) 720 D) 1800. Answer: A
36. 3 predicates (2 vars), domain=4. Atomic sentences? A) 48 B) 36 C) 24 D) 16. Answer: A
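Question 29 (Modus Ponens with probabilities) and questions 30 and 36 (counting atomic sentences in FOL) follow two simple formulas: P(Q) = P(P) × P(P→Q), and predicates × domain^arity. A minimal sketch; the helper names are illustrative, not from the source:

```python
def modus_ponens(p_p, p_impl):
    # P(Q) when P holds with probability p_p and P->Q holds with p_impl
    return p_p * p_impl

def atomic_sentences(n_predicates, domain_size, arity):
    # Each predicate applies to domain_size**arity distinct argument tuples
    return n_predicates * domain_size ** arity

print(round(modus_ponens(0.9, 0.85), 3))   # 0.765
print(atomic_sentences(3, 5, 2))           # 75
print(atomic_sentences(3, 4, 2))           # 48
```

All three values match the keyed answers (C, A, and A respectively).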
CO4
Sl. No. Question Options A to D, Correct Option

1. In a resolution-based system with 10 clauses and 3 literals each, how many literal comparisons for all pairs? A) 305 B) 405 C) 450 D) 390. Answer: B
2. In unification, how many possible unifications with 4 variables and 5 constants? A) 625 B) 120 C) 1024 D) 540. Answer: A
3. In resolution with 14 clauses of avg. 4 literals, how many literal comparisons for all pairs? A) 1200 B) 1344 C) 1456 D) 1560. Answer: C
4. In FOL with 7 individuals and 4 unary predicates, how many interpretations exist? A) 2¹⁴ B) 2²⁸ C) 2⁷ D) 4⁷. Answer: B
5. How many unifications with 5 variables and 6 constants? A) 3125 B) 1024 C) 7776 D) 120. Answer: C
6. If 100 results retrieved, 60 relevant, 80 total relevant: what is Recall? A) 0.6 B) 0.75 C) 0.85 D) 0.7. Answer: B
7. Given: P ∨ Q, ¬Q ∨ R, ¬P. What is inferred by resolution? A) P B) Q C) R D) None. Answer: C
8. Forward-chaining: 4 rules need 3 facts; 9 initial facts. New facts inferred? A) 3 B) 4 C) 2 D) 1. Answer: A
9. 10 rules, each firing with 80% probability. Expected number of rules to fire? A) 9 B) 7 C) 8 D) 10. Answer: C
10. Backward chaining: 4 subgoals, 3 steps each. Total steps required? A) 6 B) 12 C) 16 D) 9. Answer: B
11. If P is true, and P→Q→R→S each with 0.9 prob, what is prob(S)? A) 0.81 B) 0.729 C) 0.9 D) 0.85. Answer: B
12. Resolving A∨B, ¬B∨C, ¬A results in? A) A B) B C) C D) None. Answer: C
13. Inference: 5 queries/sec, 3 expansions/query, 10 sec. Total nodes expanded? A) 100 B) 120 C) 130 D) 150. Answer: D
14. Forward-chaining: 5 rules, 3 facts/rule, 2 cycles, 10 initial facts. Total facts? A) 25 B) 30 C) 35 D) 40. Answer: D
15. Backward chaining: 6 goals, 4 subgoals each. Total processed? A) 24 B) 30 C) 36 D) 26. Answer: B
16. Given: P∨Q, ¬P, Q∨R. What is inferred? A) Q B) P C) R D) None. Answer: C
17. Given: A∨B, ¬B∨C, ¬A, ¬C∨D. Resolvent? A) B B) D C) C D) A. Answer: B
18. FOL inference: 5 nodes/sec, 4 expansions/node, 10 sec. How many nodes? A) 100 B) 80 C) 200 D) 50. Answer: D
19. Forward-chaining: 15 facts, 4 rules/cycle, 3 facts/rule, 2 cycles. Total facts? A) 45 B) 39 C) 30 D) 40. Answer: B
20. Logical agent: 20 percepts/sec, updates every 10 percepts in 0.5s, 1000 percepts. Total time? A) 50 sec B) 100 sec C) 60 sec D) 120 sec. Answer: B
21. Backward chaining: 5 goals, 3 subgoals/goal, 2 steps each. Total steps? A) 20 B) 18 C) 30 D) 24. Answer: C
22. Given: P∨Q, ¬Q∨R, ¬P∨S. What is inferred? A) S B) Q∨S C) R∨S D) P. Answer: C
23. 7 nodes/sec, 3 children/node, 5 sec. Total expanded nodes? A) 121 B) 35 C) 105 D) 210. Answer: A
24. Forward-chaining: 3 rules/sec, 2 facts/rule, 5 sec, 15 facts initially. Total facts? A) 30 B) 36 C) 50 D) 45. Answer: D
25. Backward chaining: 6 subgoals, 2 steps each. Total steps? A) 10 B) 8 C) 12 D) 16. Answer: C
26. A→B = 0.7, B→C = 0.9, A true. Prob(C)? A) 0.63 B) 0.7 C) 0.9 D) 0.5. Answer: A
27. Given: P∨Q, ¬Q∨R, ¬P. What is inferred? A) Q B) P C) R D) S. Answer: C
28. FOL system: 4 nodes/sec, 2 children/node, 6 seconds. Nodes processed? A) 24 B) 12 C) 48 D) 36. Answer: A
29. Forward-chaining: 4 rules/sec, 3 facts/rule, 3 sec, 20 initial facts. Total? A) 42 B) 56 C) 48 D) 56. Answer: D
30. Backward-chaining: 8 subgoals, 3 steps each. Total steps? A) 24 B) 21 C) 18 D) 30. Answer: A

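The literal-comparison questions in this section (questions 1 and 3) count all unordered clause pairs, C(n, 2), each contributing l² literal-to-literal comparisons for clauses of l literals. A minimal sketch; the function name is illustrative:

```python
from math import comb

def literal_comparisons(n_clauses, literals_per_clause):
    # Every unordered pair of clauses compares all literal pairs:
    # C(n, 2) * l * l
    return comb(n_clauses, 2) * literals_per_clause ** 2

print(literal_comparisons(10, 3))   # 405
print(literal_comparisons(14, 4))   # 1456
```

Both results agree with the keyed answers B (405) and C (1456).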
Sl. No. Question Option A Option B (Options C and D with answers appear in a separate block below)

1. A machine learning model trained with 5 features can classify 4 different categories. If each feature has 3 possible values, how many distinct input combinations exist? A) 243 B) 81
2. A dataset contains 1000 samples, and a learning algorithm achieves 85% accuracy when trained on 80% of the data. If retrained using 100% data, what is the expected accuracy improvement? A) 85 B) 87.5
3. A hypothesis space contains 2ⁿ hypotheses. If the system processes 10,000 hypotheses/sec, how long to evaluate when n=20? A) 10 sec B) 100 sec
4. An EBL system refines 15 rules/iteration, reducing memory by 10% per iteration. If initial memory is 1GB, how much remains after 5 iterations? A) 590 MB B) 656 MB
5. A hidden Markov model has 4 hidden states and 3 observation symbols. For 100 observations, how many state transitions in Viterbi decoding? A) 99 B) 100
6. An SVM with RBF kernel requires O(N²) training time. Dataset has 20,000 samples. Training computations? A) 4 × 10⁸ B) 8 × 10⁸
7. In a multi-agent system, 4 agents take 5 actions/turn in a 10-turn episode. Total action choices? A) 100 B) 200
8. A CNN-based RL agent processes 128×128 pixel images with 3 color channels, 100 images/sec. How many pixel values per sec? A) 4.9 million B) 9.8 million
9. A neuroevolution system has 1000 networks, each with 50 neurons and 20 connections. Total weights updated/generation? A) 1 million B) 2 million
10. An AI model sees 1 million samples with 500 features. If training time scales O(n log n), and 100K samples take 10 min, how long for full dataset? A) 50 min B) 100 min
11. EBL system refines 25 rules/iteration, reducing search space by 12% per iteration. If initial space is 5000, states after 6 iterations? A) 2167 B) 2431
12. Dataset of 5000 samples gives 88% accuracy with 80% training. If data augmentation improves 5%, new accuracy? A) 89% B) 90%
13. SVM with RBF kernel needs O(N²). Dataset has 50,000 samples. Computations required? A) 2.5 × 10⁹ B) 5.0 × 10⁹
14. RL agent explores 20×20 grid, moves 4/sec. If convergence requires 50,000 moves, training time? A) 3,125 sec B) 4,500 sec
15. Multi-agent system: 6 agents, 5 actions/turn, 25-turn episode. Total actions? A) 500 B) 750
16. RL agent processes 512×512 RGB images at 60 fps. Pixel values processed per second? A) 48 million B) 76 million
17. Neuroevolution: 5000 networks, 128 neurons, 40 connections/neuron. 30% selected for 5 generations. Total connections updated? A) 256 million B) 384 million
18. AI model with 1.5 million samples, each 1000 features. Feature extraction O(n log n). 100K takes 12 min. Full dataset time? A) 144 min B) 156 min
19. EBL system reduces space by 15%/iteration. Initial space: 10,000 states. After 6 iterations, states remain? A) 3772 B) 4081
20. Dataset of 7500 samples gives 87% accuracy with 70% training. If augmentation adds 4%, final accuracy? A) 88.50% B) 90.20%
21. SVM with RBF kernel, training O(N²). Dataset: 75,000 samples. Computations required? A) 5.6 × 10⁹ B) 7.2 × 10⁹
22. Multi-agent: 8 agents, 6 actions/turn, 30-turn episode. Total actions? A) 1200 B) 1400
Options C and D with Answer (CO5)

Q1. C) 125 D) 1024. Answer: A
Q2. C) 90 D) 92.5. Answer: B
Q3. C) 102.4 sec D) 1048.57 sec. Answer: C
Q4. C) 729 MB D) 810 MB. Answer: C
Q5. C) 297 D) 400. Answer: A
Q6. C) 16 × 10⁸ D) 32 × 10⁸. Answer: A
Q7. C) 300 D) 400. Answer: D
Q8. C) 19.6 million D) 39.2 million. Answer: C
Q9. C) 5 million D) 10 million. Answer: C
Q10. C) 150 min D) 200 min. Answer: C
Q11. C) 2875 D) 3120. Answer: B
Q12. C) 92% D) 93%. Answer: C
Q13. C) 7.5 × 10⁹ D) 10.0 × 10⁹. Answer: A
Q14. C) 5,000 sec D) 6,250 sec. Answer: D
Q15. C) 1000 D) 1250. Answer: C
Q16. C) 157 million D) 251 million. Answer: C
Q17. C) 512 million D) 640 million. Answer: C
Q18. C) 168 min D) 180 min. Answer: C
Q19. C) 4379 D) 4665. Answer: A
Q20. C) 91.00% D) 92.30%. Answer: C
Q21. C) 8.4 × 10⁹ D) 9.0 × 10⁹. Answer: B
Q22. C) 1600 D) 1800. Answer: C
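Three of the CO5 counting questions reduce to one-line formulas: distinct feature vectors are values^features, O(N²) SVM training is N², and a Viterbi decode over T observations makes T−1 state transitions. A minimal sketch, with illustrative helper names:

```python
def input_combinations(n_features, values_per_feature):
    # Distinct feature vectors: values_per_feature ** n_features
    return values_per_feature ** n_features

def svm_rbf_cost(n_samples):
    # O(N^2) kernel computations for N training samples
    return n_samples ** 2

def viterbi_transitions(n_observations):
    # A length-T observation sequence has T - 1 state transitions
    return n_observations - 1

print(input_combinations(5, 3))    # 243
print(svm_rbf_cost(20000))         # 400000000, i.e. 4 x 10^8
print(viterbi_transitions(100))    # 99
```

Each result matches the keyed answer A for its question (243, 4 × 10⁸, and 99).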
