
AI QB ANS

Short Questions:
1. What is AI?

AI (Artificial Intelligence) is the branch of computer science aimed at creating machines that can mimic human intelligence, including learning, reasoning, and problem-solving.

It enables machines to perform tasks without explicit programming by using algorithms.

2. What are the goals of AI?

Replicate human intelligence.

Solve knowledge-intensive tasks.

Connect perception and action intelligently.

Build systems capable of independent learning and decision-making.

3. What are types of AI?


Based on Capabilities:

Weak AI (Narrow AI)

General AI

Super AI

Based on Functionality:

Reactive Machines

Limited Memory

Theory of Mind

Self-Aware AI

4. What is Weak and Strong AI?

Weak AI: Specialized to perform specific tasks (e.g., Siri, image recognition).

Strong AI: Hypothetical AI capable of performing any intellectual task like humans,
with self-awareness and consciousness.

5. What are agents?

Entities that perceive the environment through sensors and act upon it using
effectors.

6. What is rationality?

Rationality is performing actions based on percepts to achieve the best possible outcome.

7. What is an ideal rational agent?

An agent that maximizes its performance based on percept sequence and prior
knowledge of the environment.

8. What is an environment?

The surroundings in which an agent operates and interacts.

9. What are properties of Search Algorithm?

Completeness, Optimality, Time Complexity, Space Complexity.

10. What are the types of uninformed search?

Breadth-first search

Uniform-cost search

Depth-first search

Iterative deepening depth-first search

Bidirectional search

11. What is the difference between uninformed and informed search?

Uninformed Search: Operates blindly without domain-specific knowledge.

Informed Search: Uses heuristics to guide the search process more efficiently.

12. What is uniform-cost search?

A search algorithm that expands nodes based on the lowest cumulative cost, ensuring
an optimal path.

13. What is Heuristic function?

A function estimating the cost from the current state to the goal, guiding informed
searches.

14. What are the features of the hill climbing algorithm?

Moves in the direction of increasing value.

Focuses on immediate neighbors.

Greedy approach without backtracking.

15. What is the local maximum and global maximum?

Local Maximum: A peak better than surrounding states but not the highest.

Global Maximum: The highest peak in the entire state space.

16. Define agent, rational agent, and agent program.

Agent: An entity perceiving and acting in an environment.

Rational Agent: Acts to maximize performance based on percepts and knowledge.

Agent Program: The implementation defining how an agent acts based on percepts.

17. What do you mean by local maxima with respect to search technique?

Local maxima refer to a state in the search space that is better than neighboring states
but is not the optimal solution (global maxima). Hill climbing can get stuck at local
maxima.

18. List out some of the applications of Artificial Intelligence.

Astronomy, Healthcare, Gaming, Finance, Data Security, Social Media, Travel &
Transport, Automotive Industry, Robotics, Entertainment, Agriculture, E-commerce, and
Education.

19. Mention the criteria for the evaluation of search strategy.

Completeness, Optimality, Time Complexity, and Space Complexity.

20. Differentiate BFS & DFS.

Aspect | BFS (Breadth-First Search) | DFS (Depth-First Search)
Traversal Order | Level by level | Depth-wise along a branch
Data Structure Used | Queue | Stack (or recursion)
Completeness | Complete if the branching factor is finite | Complete only for finite search spaces
Time Complexity | O(b^d) | O(b^m)
Space Complexity | High, O(b^d) | Low, O(b·m)
Optimality | Yes, if all step costs are uniform | No

(b = branching factor, d = depth of the shallowest goal, m = maximum depth.)

21. With a suitable example explain multiple-connected graph.

A multiple-connected graph has multiple paths connecting the same pair of nodes.
Example: In a road network, multiple routes connect two cities.

22. What are the different types of planning?

Classical Planning, Conditional Planning, Contingent Planning, and Hierarchical Task
Network (HTN) Planning.

23. List out the different types of induction heuristics?

Not specifically mentioned in the document.

24. Define an inference procedure.

A systematic process for deriving logical conclusions from a knowledge base using
rules and facts.

25. What is the basic difference between the A* and AO* algorithms?

A*: Searches OR graphs to find a single optimal path, using f(n) = g(n) + h(n).

AO*: Searches AND-OR graphs, used when a problem decomposes into sub-problems that must all be solved.

26. What is futility cutoff in game playing?

A pruning technique that avoids exploring nodes unlikely to improve the current
decision, saving time in algorithms like alpha-beta pruning.

27. Differentiate between Declarative and Procedural representation of knowledge .

Aspect | Declarative Knowledge | Procedural Knowledge
Definition | Describes "what" is known | Describes "how" to achieve tasks
Representation | Facts and assertions | Algorithms and procedures
Usage | Used for reasoning | Used for execution

28. State the significance of using Heuristic functions.

Heuristics guide search algorithms by estimating how close a state is to the goal,
reducing search space and improving efficiency.

29. What do you mean by local maxima with respect to search technique?

Duplicate of Question 17.

30. What are the differences and similarities between Problem Solving and Planning?
Differences:

Problem-solving focuses on immediate actions; planning involves long-term strategies.

Problem-solving uses reactive techniques, planning uses predictive approaches.

Similarities:

Both aim to achieve a goal.

Both involve evaluating states and actions.

31. ‘Minimax is not good for game playing when the opponent is not playing optimally.’
Justify using suitable example.

Minimax assumes the opponent always plays optimally. Against a non-optimal opponent it still chooses the move that is safest against perfect play, so it can miss chances to exploit the opponent's mistakes. Example: in tic-tac-toe, minimax may settle for a move that guarantees a draw even when a riskier move would very likely win against a player who moves randomly.

32. When and why Nonmonotonic Reasoning is used?

Used when conclusions drawn may need to be retracted as new information becomes
available, suitable for dynamic and uncertain environments.

33. Distinguish between Supervised learning and Unsupervised learning.

Aspect | Supervised Learning | Unsupervised Learning
Input Data | Labeled | Unlabeled
Goal | Predict outcomes | Find patterns or clusters
Examples | Classification, Regression | Clustering, Association

34. What is Blocks World Problem?

A problem in AI where blocks are manipulated to achieve a goal configuration using planning algorithms.

35. What is STRIPS?

A planning language for specifying an initial state, goal state, and a set of actions with
preconditions and effects.

36. What is planning in AI?

The process of creating a sequence of actions to transition from an initial state to a desired goal state efficiently.

Descriptive Questions:
1. Advantages and Disadvantages of AI
Advantages:

High Accuracy: AI systems, like medical diagnostic tools, make fewer errors than
humans by using data-driven decision-making.

High Speed and Efficiency: AI processes vast amounts of data and performs complex
calculations faster than humans, such as in financial trading.

Utility in Risky Areas: Robots powered by AI can perform tasks in dangerous environments, such as bomb defusal or space exploration.

Digital Assistance: Virtual assistants like Siri and Alexa help in daily tasks, providing
information and managing schedules.

Improved Public Services: AI is used in self-driving cars, making transportation safer and more efficient.

Disadvantages:

Costly Maintenance: AI requires expensive hardware, software, and regular updates to remain functional.

Lack of Creativity: AI can only work within predefined constraints and lacks human
imagination.

Dependence on Machines: Increasing reliance on AI reduces human cognitive engagement.

Ethical Concerns: AI raises issues like job displacement and potential misuse in
surveillance or warfare.

2. Describe Any Three Applications of AI

Healthcare: AI-powered tools assist in diagnosing diseases, predicting health deterioration, and personalizing treatments. For instance, AI systems analyze medical images for faster and more accurate diagnoses.

Gaming: AI enables machines to play strategic games like chess or Go by predicting opponents' moves and planning its own strategy.

Social Media: Platforms like Facebook and Twitter use AI to organize user data, detect
trends, and personalize content recommendations.

3. Describe Simple Rational Agent and Goal-Based Agents

Simple Rational Agent:

Acts based on current observations or percepts.

Operates using predefined condition-action rules, e.g., turning on a light when it's
dark.

Effective in fully observable and simple environments.

Goal-Based Agents:

Use goals to determine actions, focusing on achieving a specific objective.

More flexible than simple agents as they consider future outcomes.

Ideal for dynamic or complex environments where achieving a specific goal is
crucial, like navigation systems.

4. Describe Properties of Environment

Discrete vs. Continuous: A discrete environment, like a chessboard, has distinct states,
while continuous environments, like a driving scenario, involve infinite possibilities.

Observable vs. Partially Observable: Fully observable environments allow agents to access complete information, unlike partially observable environments, where only partial data is available.

Static vs. Dynamic: In static environments, conditions remain unchanged during actions, whereas dynamic ones, like traffic systems, evolve.

Single-Agent vs. Multi-Agent: A single-agent environment involves one decision-maker, while multi-agent environments require agents to interact or compete, such as in games.

5. Describe Types of Search Algorithms

Uninformed Search:

Operates without additional knowledge, relying only on the problem definition (e.g.,
Breadth-First Search (BFS), Depth-First Search (DFS), Uniform-cost Search).

Informed Search:

Uses heuristics or additional domain knowledge to guide the search (e.g., A* search, Greedy Search).

Local Search:

Focuses on optimizing the solution in a limited area, using techniques like Hill
Climbing or Simulated Annealing.

6. Describe DFS and BFS Algorithms with Their Advantages and Disadvantages

DFS (Depth-First Search):

Explores nodes along a branch before backtracking.

Utilizes a stack or recursion.

Advantages:

Requires minimal memory as it tracks only the current path.

Quicker in finding solutions for deep paths.

Disadvantages:

May fall into infinite loops in graphs without cycle checks.

Does not guarantee finding the shortest path.

BFS (Breadth-First Search):

Explores all neighbors before moving deeper.

Uses a queue data structure.

Advantages:

Guarantees the shortest path if the cost is uniform.

Suitable for systematic exploration of shallow nodes.

Disadvantages:

High memory usage as it tracks all nodes at a level.

Slower when exploring deeper nodes in large graphs.
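A minimal Python sketch of both traversals over an adjacency-list graph; the graph, node names, and function names are illustrative assumptions, not from the question bank:

```python
from collections import deque

def bfs(graph, start, goal):
    """Explore level by level using a FIFO queue; returns the shortest path (in edges)."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

def dfs(graph, start, goal):
    """Explore depth-wise using a LIFO stack; finds a path, not necessarily the shortest."""
    frontier = [[start]]
    visited = set()
    while frontier:
        path = frontier.pop()
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph.get(node, []):
            frontier.append(path + [neighbour])
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(bfs(graph, "A", "G"))   # ['A', 'C', 'G']
print(dfs(graph, "A", "G"))   # a valid path, not guaranteed shortest
```

The only structural difference between the two functions is the frontier: a queue gives BFS, a stack gives DFS.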

7. Describe Uniform Cost Search with Advantages and Disadvantages

Uniform-Cost Search:

A search algorithm that expands nodes based on the lowest cumulative cost.

Ensures the optimal path to the goal.

Advantages:

Always finds the most cost-effective solution.

Guarantees completeness for finite graphs.

Disadvantages:

Computationally expensive due to exploring all paths with equal cost.

Ignores the number of steps, focusing solely on path cost.
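A small sketch of uniform-cost search using a priority queue keyed on cumulative path cost; the road graph and step costs below are made-up values for illustration:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """graph maps node -> list of (neighbour, step_cost) pairs.
    Always expands the frontier node with the lowest cumulative cost."""
    frontier = [(0, start, [start])]          # (cost so far, node, path)
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for neighbour, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best_cost.get(neighbour, float("inf")):
                best_cost[neighbour] = new_cost
                heapq.heappush(frontier, (new_cost, neighbour, path + [neighbour]))
    return None

roads = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)], "B": [("G", 1)]}
print(uniform_cost_search(roads, "S", "G"))   # (4, ['S', 'A', 'B', 'G'])
```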

8. Explain Greedy Search Algorithm with Examples

Greedy Search:

Expands the node that seems closest to the goal using a heuristic h(n).

Example: In route finding, it picks the city closest to the destination based on
straight-line distance.

Advantages:

Faster than uninformed searches.

Effective in certain structured problems.

Disadvantages:

May fail to find the optimal path.

Can loop or get stuck in local optima.
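A short greedy best-first sketch that orders the frontier purely by the heuristic h(n); the graph and heuristic table are illustrative assumptions:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Expands the node whose heuristic estimate h(n) looks closest to the goal,
    ignoring the cost already paid."""
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(frontier, (h[neighbour], neighbour, path + [neighbour]))
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
h = {"S": 7, "A": 6, "B": 2, "G": 0}       # straight-line estimates (assumed)
print(greedy_best_first(graph, h, "S", "G"))   # ['S', 'B', 'G']
```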

9. Explain A* Search Algorithm with Examples

A* Search:

Combines the cost so far g(n) with a heuristic h(n): f(n) = g(n) + h(n).

Finds an optimal path if the heuristic is admissible (never overestimates the remaining cost).

Example:
In a graph where nodes represent cities, A* uses the road distance travelled so far (g(n)) and the straight-line distance to the destination (h(n)) to plan the shortest route.
Advantages:

Complete and optimal.

Reduces unnecessary exploration using heuristics.

Disadvantages:

Memory-intensive due to maintaining open and closed lists.
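A compact A* sketch that orders the frontier by f(n) = g(n) + h(n); the example graph and straight-line estimates are invented for illustration, with the heuristic chosen to be admissible:

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: node -> list of (neighbour, step_cost); h: admissible estimates to the goal."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for neighbour, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier,
                               (new_g + h[neighbour], new_g, neighbour, path + [neighbour]))
    return None

graph = {"S": [("A", 2), ("B", 5)], "A": [("G", 7)], "B": [("G", 3)]}
h = {"S": 6, "A": 7, "B": 3, "G": 0}             # never overestimates the true cost
print(a_star(graph, h, "S", "G"))                 # (8, ['S', 'B', 'G'])
```

Unlike the greedy sketch above, the frontier key includes the cost already paid, which is what makes the returned path optimal.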

10. Describe Simple Hill Climb Algorithm with Example

Hill Climbing:

A local search algorithm that moves toward better states by evaluating neighbors.

Stops when no neighbor is better.

Example:
In the Traveling Salesman Problem, it improves the path by swapping cities to reduce travel
distance.
Advantages:

Simple to implement and requires little memory.

Works well with appropriate heuristics.

Disadvantages:

Stuck in local maxima or plateaus without exploring alternatives.
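A generic steepest-ascent hill-climbing sketch; the objective function and neighbour generator below are toy assumptions:

```python
import random

def hill_climb(initial, neighbours, value, max_steps=1000):
    """Repeatedly move to the best neighbour until no neighbour improves the value."""
    current = initial
    for _ in range(max_steps):
        best = max(neighbours(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current      # a local (possibly not global) maximum
        current = best
    return current

# Toy objective: maximise f(x) = -(x - 3)^2 over integer states.
value = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climb(random.randint(-10, 10), neighbours, value))   # 3
```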

11. Describe Means-End Analysis Algorithm with Examples

Means-End Analysis:

A problem-solving strategy that reduces the difference between the current state
and goal state.

Involves breaking the problem into smaller sub-problems (sub-goals) and solving
each sequentially.

Steps:

1. Compare the current state with the goal state.

2. Identify the largest difference.

3. Apply an operator to reduce the difference.

4. Repeat until the goal state is reached.

Example:
Goal: Bake a cake.

Current State: Ingredients are unprepared.

Operators: Mix ingredients, bake in the oven, decorate.

Solution: Reduce differences (e.g., mix ingredients) step by step until the cake is ready.

12. Describe Constraint Satisfaction Problem (CSP) with Examples

CSP:

A problem where a set of variables must satisfy constraints.

Solutions are found by assigning values to variables that meet all constraints.

Example:

Sudoku Puzzle:

Variables: Cells in the grid.

Constraints: Each row, column, and sub-grid must contain numbers 1-9 without
repetition.
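A minimal backtracking sketch for a CSP, shown on a small map-colouring instance (the region names, colours, and adjacency are assumptions, not from the document); the same pattern applies to Sudoku with cell variables and row/column/sub-grid constraints:

```python
def backtracking_csp(variables, domains, consistent, assignment=None):
    """Assign variables one at a time, backtracking when a value breaks a constraint."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment):
            result = backtracking_csp(variables, domains, consistent,
                                      {**assignment, var: value})
            if result is not None:
                return result
    return None

variables = ["WA", "NT", "SA", "Q"]
domains = {v: ["red", "green", "blue"] for v in variables}
adjacent = {("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"), ("SA", "Q")}

def consistent(var, value, assignment):
    # A colour is consistent if no already-coloured neighbour uses it.
    for a, b in adjacent:
        if var in (a, b):
            other = b if var == a else a
            if assignment.get(other) == value:
                return False
    return True

print(backtracking_csp(variables, domains, consistent))
# one valid colouring, e.g. {'WA': 'red', 'NT': 'green', 'SA': 'blue', 'Q': 'red'}
```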

13. What are the disadvantages of the Steepest Hill Climbing search procedure? Using a suitable search tree, illustrate that these drawbacks are eliminated in Best-First Search.

Disadvantages of the Steepest Hill Climbing Search Procedure:

1. Local Maxima Problem: The algorithm can get stuck at a local maximum, which is a
peak higher than its neighbors but not the global maximum.

2. Plateau Problem: The algorithm can get stuck on a plateau, where all neighboring
states have the same value, preventing further progress.

3. Ridge Problem: The algorithm can get stuck on a ridge, where progress requires
multiple simultaneous changes to variables.

Best-First Search Overcoming the Drawbacks


Best-First Search addresses these limitations by using a heuristic function to estimate the
cost of reaching the goal from a given state. This allows the algorithm to explore promising
paths first, reducing the likelihood of getting stuck in local optima or plateaus.
Illustrative Search Tree
Consider the following search tree:
A
/ \
B C
/ \ / \
D E F G

Let's assume the following heuristic values (estimated cost to reach the goal; lower is better), with G as the goal:

A: 10

B: 5

C: 6

D: 4

E: 7

F: 3

G: 0

Steepest Hill Climbing:

Starting from A, the algorithm moves to its best-looking neighbour B (5 beats C's 6), then to D (4). None of D's successors improves on 4, so the algorithm stops at D: it is stuck at a local optimum and never reaches the goal G, which lies under C.

Best-First Search:
Best-First Search keeps every generated node on an open list ordered by the heuristic. After the dead end at D it falls back to the next best open node, C (6), expands it to obtain F (3) and G (0), and reaches the goal. Because alternative nodes are retained, it is not trapped by the local-maximum, plateau, or ridge problems of hill climbing.

14. Briefly explain semantic networks with their advantages and disadvantages. Construct a semantic net representation for the following: (i) Ram gives Lucy a gift. (ii) Every dog has bitten a mail carrier.

Drawbacks of semantic representation:

1. Semantic networks take more computational time at runtime as we need to traverse the
complete network tree to answer some questions. It might be possible in the worst case
scenario that after traversing the entire tree, we find that the solution does not exist in
this network.

2. Semantic networks try to model human-like memory (which has on the order of 10^15 neurons and links) to store information, but in practice it is not possible to build such a vast semantic network.

3. These types of representations are inadequate as they do not have any equivalent
quantifier, e.g., for all, for some, none, etc.

4. Semantic networks do not have any standard definition for the link names.

5. These networks are not intelligent and depend on the creator of the system.

Advantages of Semantic network:

1. Semantic networks are a natural representation of knowledge.

2. Semantic networks convey meaning in a transparent manner.

3. These networks are simple and easily understandable.

15. Using a suitable example, illustrate the steps of A* search. Why is A* search better
than Best First Search?
A* Search Algorithm (Brief Steps):

1. Initialize: Start with the start node (S) and set f(n) = g(n) + h(n) , where:

g(n) : Path cost from start to node n

h(n) : Heuristic estimate to the goal

2. Expand nodes: Choose the node with the lowest f(n) value, and expand it to its
neighbors.

3. Update costs: For each neighbor, calculate f(n) = g(n) + h(n) and add it to the open list.

4. Repeat until the goal is reached.

Example:
Start: (1, 1) , Goal: (4, 4)

Heuristic: Manhattan distance. Expand nodes based on the lowest f(n) until the goal is
found.

Why A* is Better than Best-First Search:

1. Optimality: A* guarantees the shortest path if the heuristic is admissible, while Best-
First Search may miss optimal paths.

2. Efficiency: A* considers both path cost ( g(n) ) and heuristic ( h(n) ), while Best-First
Search only considers the heuristic.

3. Example: A* accounts for detours, while Best-First Search might overlook them, leading
to suboptimal paths.

16. Describe the Minimax algorithm for searching game tree. Explain the effect of the
addition of alpha-beta cutoffs to this search algorithm.
Minimax Algorithm (Pointwise)

1. Purpose: Used in two-player zero-sum games (e.g., chess, tic-tac-toe) to determine the optimal move.

2. Game Tree: Constructs a tree where each node represents a game state.

3. Player Roles:

MAX: The maximizing player.

MIN: The minimizing player (opponent of MAX).

4. Evaluation Function: Assigns a numerical value to terminal (leaf) nodes, representing the desirability of that state for MAX.

5. Minimax Procedure:

MAX Nodes: Select the move that leads to the child node with the highest value.

MIN Nodes: Select the move that leads to the child node with the lowest value.

6. Backing Up Values: Values are propagated back up the tree, alternating between maximizing and minimizing at each level.

7. Optimal Move: The best move for MAX is the one leading to the child node with the
highest value at the root node.

Alpha-Beta Pruning

1. Purpose: Optimization technique for Minimax to reduce search space and improve
efficiency.

2. Alpha Value: The best (highest) value found so far along MAX's search path.

3. Beta Value: The best (lowest) value found so far along MIN's search path.

4. Pruning Conditions:

Alpha Cutoff: If a MIN node's value becomes ≤ alpha, prune its remaining children (MAX already has a better alternative elsewhere, so this branch will never be chosen).

Beta Cutoff: If a MAX node's value becomes ≥ beta, prune its remaining children (MIN already has a better alternative elsewhere, so it will never allow this branch).

5. Effect:

Reduced Search Space: Eliminates branches that cannot influence the final
decision.

Improved Performance: Reduces the number of nodes to evaluate, speeding up the algorithm.

Deeper Search: With fewer nodes, the algorithm can explore deeper levels for more
accurate decisions.
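A depth-limited minimax sketch with alpha-beta cutoffs over a toy game tree; the tree, scores, and helper names are assumptions for illustration:

```python
import math

def minimax(node, depth, alpha, beta, maximizing, children, evaluate):
    """children(node) yields successor states; evaluate(node) scores leaves from MAX's view."""
    succ = children(node)
    if depth == 0 or not succ:
        return evaluate(node)
    if maximizing:
        best = -math.inf
        for child in succ:
            best = max(best, minimax(child, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, best)
            if beta <= alpha:        # cutoff: MIN already has a better option elsewhere
                break
        return best
    else:
        best = math.inf
        for child in succ:
            best = min(best, minimax(child, depth - 1, alpha, beta, True, children, evaluate))
            beta = min(beta, best)
            if beta <= alpha:        # cutoff: MAX already has a better option elsewhere
                break
        return best

# Toy tree: leaves are integers, internal nodes are lists of children.
tree = [[3, 5], [6, [9, 1]], [1, 2]]
children = lambda n: n if isinstance(n, list) else []
evaluate = lambda n: n
print(minimax(tree, 4, -math.inf, math.inf, True, children, evaluate))   # 6
```

With the cutoffs enabled, some leaves (here the 1 under [9, 1] and the 2 in the last branch) are never evaluated, which is exactly the reduced search space described above.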

17. Short Note on Natural Language Processing (NLP)

NLP:

Natural Language Processing (NLP) is a field of artificial intelligence that focuses on the
interaction between computers and human language. It enables machines to
understand, interpret, and generate human language in a meaningful way. NLP uses
techniques from computational linguistics, machine learning, and deep learning to
analyze and process text or voice data.
Key applications of NLP:

Text summarization: Condensing long documents into shorter summaries.

Sentiment analysis: Determining the emotional tone of text (positive, negative, or neutral).

Machine translation: Translating text from one language to another.

Text classification: Categorizing text into predefined classes (e.g., spam detection).

Chatbots and virtual assistants: Creating conversational agents that can interact
with humans.

Information extraction: Extracting specific information from text documents (e.g., names, dates, addresses).

Speech recognition: Converting spoken language into text.

18. Draw and Explain the Architecture of the Expert System.

1. Knowledge Base:

Stores domain-specific knowledge and facts in a structured format (rules, frames, or semantic networks).

Includes:

Fact Database: Stores factual information relevant to the domain.

Knowledge Base: Stores domain-specific rules and heuristics.

2. Inference Engine:

Uses reasoning techniques (like forward chaining or backward chaining) to apply the
knowledge base to solve problems or make decisions.

Derives new conclusions from existing knowledge.

Can be rule-based, case-based, or model-based.

3. Explanation System:

Provides explanations for the system's reasoning and conclusions.

Helps users understand how the system arrived at a particular decision.

4. Knowledge Base Editor:

Allows domain experts to maintain and update the knowledge base.

Facilitates knowledge acquisition and refinement.

5. User Interface:

Provides a user-friendly way to interact with the expert system.

Enables users to input queries, receive answers, and access explanations.

6. Expert System Shell:

Provides the underlying structure and functionality of the expert system.

Includes the inference engine, explanation system, and user interface.

Can be customized with domain-specific knowledge.

7. User:

Interacts with the expert system through the user interface.

Provides input queries and receives answers and explanations.

19. What are the different types of grammar used in Natural Language Processing? Explain each with examples.

1. Context-Free Grammar (CFG):

Purpose: Defines sentence structure using rules that form hierarchical structures
(parse trees).

Example:

Sentence → Noun Phrase + Verb Phrase

"The cat chases the dog" → Noun Phrase ("The cat") + Verb Phrase ("chases the
dog")

2. Dependency Grammar:

Purpose: Focuses on word relationships (dependencies) in a sentence.

Example:

"John eats an apple"

"eats" (verb) → "John" (subject)

"eats" → "apple" (object)

3. Transformational Grammar:

Purpose: Focuses on transforming sentences (e.g., from active to passive voice).

Example:

Active: "The cat chases the mouse"

Passive: "The mouse is chased by the cat"

20. Assume the following facts: 1. Ram likes only easy courses. 2. Engineering courses are hard. 3. All courses in Arts are easy. 4. AR04 is an Arts course. Use resolution to answer the question, "What courses would Ram like?"

Understanding the Problem and Translating to Logical Form


We're given the following facts:

1. Ram likes only easy courses.

2. Engineering courses are hard.

3. All courses in Arts are easy.

4. AR04 is an Arts course.

Let's translate these into logical form using first-order (predicate) logic:

1. Likes(Ram, X) -> Easy(X) (If Ram likes X, then X is easy)

2. Engg(X) -> Hard(X) (If X is an Engineering course, then X is hard)

3. Arts(X) -> Easy(X) (If X is an Arts course, then X is easy)

4. Arts(AR04) (AR04 is an Arts course)

Using Resolution to Prove the Query

We want to prove: Likes(Ram, AR04).


To do this, we'll use a proof by contradiction. We'll assume the negation of what we want to
prove and derive a contradiction.
Negation of the Goal: ¬Likes(Ram, AR04)

Now, let's convert these statements into clause form. Strictly, fact 1 only says that liked courses are easy; to answer which course Ram would like, the intended reading "Ram likes a course if it is easy" (Easy(X) → Likes(Ram, X)) is used for clause 1:

1. ¬Easy(X) ∨ Likes(Ram, X)
2. ¬Engg(X) ∨ Hard(X)
3. ¬Arts(X) ∨ Easy(X)

4. Arts(AR04)

5. ¬Likes(Ram, AR04) (Negation of the goal)

Applying Resolution:

1. Resolve (3) and (4): Easy(AR04)

2. Resolve (1) with Easy(AR04): Likes(Ram, AR04)

3. Resolve this with (5): the empty clause, i.e., a contradiction.

Since the empty clause is derived, the assumption ¬Likes(Ram, AR04) is false.
Therefore, we can conclude that Likes(Ram, AR04) is true.

Conclusion:
Ram would like AR04: it is an Arts course, all Arts courses are easy, and the easy courses are exactly the ones Ram likes.

21. Explain Forward and Backward Reasoning with the help of examples.

Forward Reasoning (Data-Driven):

Starts with known facts and applies inference rules to derive new facts.

Continues until a goal state is reached or no further inferences can be made.

Example:

Known facts:

It's raining.

If it's raining, the streets are wet.

Inference rule:

If X, then Y.

Goal: Are the streets wet?

Forward chaining:

Since it's raining (known fact), and "If it's raining, the streets are wet" (inference
rule), we can infer that the streets are wet.

Backward Reasoning (Goal-Driven):

Starts with a goal and works backward, identifying the conditions that must be true for
the goal to be achieved.

Continues until a set of initial conditions is reached that can be directly verified.

Example:

Goal: Prove that the suspect is guilty.

Backward chaining:

To prove guilt, we need to prove that the suspect committed the crime.

To prove the crime, we need to prove that the suspect had motive, opportunity, and
means.

We then work backward to find evidence for each of these conditions.
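A tiny forward-chaining sketch matching the rain example above; the rule format and fact names are illustrative assumptions:

```python
def forward_chain(facts, rules):
    """rules are (premises, conclusion) pairs; keep firing rules whose premises
    all hold until no new facts can be added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = {"raining"}
rules = [({"raining"}, "streets_wet"),
         ({"streets_wet"}, "driving_slippery")]
print(forward_chain(facts, rules))
# {'raining', 'streets_wet', 'driving_slippery'}
```

Backward chaining would instead start from a goal fact such as "streets_wet" and recursively look for rules (and ultimately known facts) that establish it.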

22. Explain about knowledge acquisition.


Knowledge acquisition is the process of gathering and organizing knowledge from various
sources and representing it in a form that can be used by a knowledge-based system. This
involves:

Knowledge Engineering: Experts are interviewed and their knowledge is elicited and
formalized.

Machine Learning: Algorithms are used to extract knowledge from data, such as text,
images, or sensor readings.

Knowledge Extraction: Techniques are used to automatically extract knowledge from existing databases or text documents.

23. Explain the basic components of a heuristic system.

A heuristic system typically consists of the following components:

Knowledge Base: Stores domain-specific knowledge, facts, and rules.

Inference Engine: Applies reasoning techniques (e.g., forward chaining, backward chaining) to derive new conclusions from the knowledge base.

User Interface: Allows users to interact with the system, providing input and receiving
output.

Explanation Facility: Can explain the reasoning process and the conclusions reached.

24. What are the different types of learning? Explain with suitable example.

Supervised Learning: The system is trained on labeled data, where the correct output
is provided for each input.

Example: Training a model to classify emails as spam or not spam, using a dataset
of labeled emails.

Unsupervised Learning: The system learns patterns from unlabeled data without
explicit guidance.

Example: Clustering customers into groups based on their purchasing behavior.

Reinforcement Learning: The system learns through trial and error, receiving rewards
or penalties for its actions.

Example: Training a robot to navigate a maze by rewarding it for reaching the goal
and penalizing it for hitting obstacles.
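As a minimal illustration of supervised learning, here is a nearest-neighbour classifier on a toy labelled dataset; the features and labels are made-up values, not from the document:

```python
def predict_nearest(examples, query):
    """examples: list of (features, label) pairs with numeric features.
    Returns the label of the training example closest to the query."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: dist(ex[0], query))[1]

# Labelled emails as (word_count, num_links) -> "spam"/"ham" (values are invented).
training = [((120, 9), "spam"), ((300, 1), "ham"),
            ((80, 12), "spam"), ((450, 0), "ham")]
print(predict_nearest(training, (100, 10)))   # 'spam'
```

An unsupervised method would receive only the feature vectors (no labels) and, for example, cluster them into groups.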

25. How are facts represented using propositional logic? Give an example.
Propositional logic represents facts as propositions (declarative statements that are either true or false). Propositions can be combined using logical operators like AND, OR, NOT, and IMPLIES to form complex statements.

Example:

Facts: "It is raining" (P) and "If it is raining, the ground is wet" (P → Q).

Propositional representation: P ∧ (P → Q), from which Q ("The ground is wet") follows.

Note that a quantified fact such as "All birds can fly" goes beyond propositional logic and needs first-order logic: ∀x (Bird(x) → CanFly(x)).
26. "As per the law, it is a crime for an American to sell weapons to hostile nations.
Country A, an enemy of America, has some missiles, and all the missiles were sold to it
by
Robert, who is an American citizen."
Prove that "Robert is a criminal."
Explain the solution with forward chaining techniques.

1. Initial facts:

American(Robert)

Enemy(CountryA, America)

HasMissiles(CountryA)

SoldTo(Robert, CountryA, Missiles)

2. Rules:

If American(X) and SoldTo(X, Y, Weapons) and Enemy(Y, America), then Criminal(X).

3. Forward chaining:

Treating the missiles as weapons, facts 1, 4, and 2 satisfy the rule's premises, so we can infer:

Criminal(Robert)

Therefore, we have proven that "Robert is a criminal" using forward chaining.

27. Translate the following sentences into FOL.


a. Everything is bitter or sweet
b. Either everything is bitter or everything is sweet
c. There is somebody who is loved by everyone
d. Nobody is loved by no one
e. If someone is noisy, everybody is annoyed
f. Frogs are green.
g. Frogs are not green.
h. No frog is green.
i. Some frogs are not green.
j. A mechanic likes Bob.
k. A mechanic likes herself.
l. Every mechanic likes Bob.
m. Some mechanic likes every nurse.
n. There is a mechanic who is liked by every nurse.

Ans:

a. Everything is bitter or sweet:
∀x (Bitter(x) ∨ Sweet(x))

b. Either everything is bitter or everything is sweet:
(∀x Bitter(x)) ∨ (∀x Sweet(x))

c. There is somebody who is loved by everyone:
∃x ∀y Loves(y, x)

d. Nobody is loved by no one:
¬∃x ¬∃y Loves(y, x)
(which is equivalent to ∀x ∃y Loves(y, x))

e. If someone is noisy, everybody is annoyed:
∃x Noisy(x) → ∀y Annoyed(y)

f. Frogs are green:
∀x (Frog(x) → Green(x))

g. Frogs are not green:
¬∀x (Frog(x) → Green(x))
(which is equivalent to ∃x (Frog(x) ∧ ¬Green(x)))

h. No frog is green:
¬∃x (Frog(x) ∧ Green(x))

i. Some frogs are not green:
∃x (Frog(x) ∧ ¬Green(x))

j. A mechanic likes Bob:
∃x (Mechanic(x) ∧ Likes(x, Bob))

k. A mechanic likes herself:
∃x (Mechanic(x) ∧ Likes(x, x))

l. Every mechanic likes Bob:
∀x (Mechanic(x) → Likes(x, Bob))

m. Some mechanic likes every nurse:
∃x (Mechanic(x) ∧ ∀y (Nurse(y) → Likes(x, y)))

n. There is a mechanic who is liked by every nurse:
∃x (Mechanic(x) ∧ ∀y (Nurse(y) → Likes(y, x)))

28. What is Goal Stack Planning? Explain with one example.


Ans:

Goal Stack Planning:


Goal Stack Planning breaks down a problem into smaller tasks, solving each step
sequentially, ensuring each subgoal is met before moving to the next.
Steps:

1. Start with a main goal.

2. Decompose the main goal into subgoals.

3. Push subgoals onto a stack.

4. Solve the topmost goal (remove it from the stack).

5. If the goal is not directly achievable, decompose it further into smaller subgoals.

6. Repeat until all goals are achieved.

Example:

Goal: Make a cup of tea.

1. Main Goal: Make a cup of tea.

2. Decompose:

Goal 1: Boil water.

Goal 2: Steep the tea.

Goal 3: Pour into a cup.

3. Push Goals onto Stack: [Boil water, Steep tea, Pour into a cup]

4. Solve: Start by boiling water (top of the stack).

5. Decompose further: Boil water → Fill kettle with water, turn on kettle.

6. Complete all subgoals: Once water is boiled, move to steeping tea, then pouring
into a cup.
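A minimal goal-stack sketch for the tea example above; the goals and decomposition table come from the example, while the function and variable names are assumptions:

```python
def goal_stack_plan(goal, decompose, primitive):
    """Push the main goal, pop the top goal, and either execute it (if primitive)
    or push its subgoals; return the ordered list of primitive actions."""
    stack, plan = [goal], []
    while stack:
        g = stack.pop()
        if primitive(g):
            plan.append(g)
        else:
            # Push subgoals in reverse so the first subgoal is handled first.
            for sub in reversed(decompose[g]):
                stack.append(sub)
    return plan

decompose = {
    "make tea": ["boil water", "steep tea", "pour into cup"],
    "boil water": ["fill kettle", "turn on kettle"],
}
primitive = lambda g: g not in decompose
print(goal_stack_plan("make tea", decompose, primitive))
# ['fill kettle', 'turn on kettle', 'steep tea', 'pour into cup']
```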

29. A cryptarithmetic problem: In the following pattern

  SEND
+ MORE
------
 MONEY

we have to replace each letter by a distinct digit so that the resulting sum is correct.
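One way to solve it is a brute-force search over digit assignments; the sketch below (which takes a few seconds, since it tries up to about 1.8 million assignments) finds the well-known solution 9567 + 1085 = 10652, i.e., S=9, E=5, N=6, D=7, M=1, O=0, R=8, Y=2:

```python
from itertools import permutations

def solve_send_more_money():
    """Try every assignment of distinct digits to the 8 letters, rejecting
    leading zeros, until SEND + MORE == MONEY."""
    letters = "SENDMORY"
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a["S"] == 0 or a["M"] == 0:
            continue
        send  = a["S"] * 1000 + a["E"] * 100 + a["N"] * 10 + a["D"]
        more  = a["M"] * 1000 + a["O"] * 100 + a["R"] * 10 + a["E"]
        money = (a["M"] * 10000 + a["O"] * 1000 + a["N"] * 100
                 + a["E"] * 10 + a["Y"])
        if send + more == money:
            return a
    return None

print(solve_send_more_money())
# {'S': 9, 'E': 5, 'N': 6, 'D': 7, 'M': 1, 'O': 0, 'R': 8, 'Y': 2}
```

Treated as a CSP, the same problem can be solved far faster with constraint propagation on the column-wise carry constraints.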

30. Explain different techniques used in statistical NLP.

1. Descriptive Statistics in NLP

Frequency Counts: Tallying occurrences of words/phrases in a text corpus, important for tasks like keyword extraction and sentiment analysis.

Measures of Central Tendency:

Mean: Average length of words or sentences.

Median: Central value of word/sentence lengths, less affected by outliers.

Mode: Most frequent word/sentence length.

Measures of Dispersion:

Variance/Standard Deviation: Measures variability in word or sentence lengths.

2. Probability Distributions

Uniform Distribution: Equal probability for each word/character selection.

Normal Distribution: Many linguistic features (e.g., sentence length) follow a normal
distribution.

3. Statistical Methods for Text Analysis

Tokenization: Splitting text into smaller units (words/sentences).

n-grams: Sequences of n items (words or characters) from text, used in language modeling to improve accuracy (a small bigram example is sketched at the end of this answer).

4. Hypothesis Testing

Chi-Square Test: Tests the independence of categorical variables, useful in text classification.

T-Test & ANOVA: Compare means to identify statistically significant differences (e.g.,
sentiment analysis).

5. Machine Learning Models

Various models like Logistic Regression, Naive Bayes, SVMs, and Neural Networks
(RNNs, LSTMs, Transformers) are used for tasks such as text classification and
language modeling.

6. Evaluation Metrics

Precision, Recall, F1-Score: Measures to evaluate binary classification.

Accuracy: Overall correctness but may be misleading with imbalanced datasets.

7. Dimensionality Reduction

PCA & t-SNE: Techniques for reducing data dimensions, useful for visualizing high-
dimensional word embeddings and document clusters.

8. Clustering

K-Means & Hierarchical Clustering: Techniques for grouping similar texts, useful in
topic modeling.

9. Bayesian Inference

LDA (Latent Dirichlet Allocation): A probabilistic model for discovering hidden topics in
a corpus.

10. Time Series Analysis

Markov Chains & Hidden Markov Models: Used for sequence modeling, useful in tasks
like speech recognition and part-of-speech tagging.
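As a small illustration of the n-gram technique mentioned in point 3 above, the sketch below counts bigram frequencies in a token list; the sample sentence is an arbitrary example:

```python
from collections import Counter

def ngrams(tokens, n):
    """Return a frequency count of the n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

tokens = "the cat sat on the mat the cat slept".split()
print(ngrams(tokens, 2).most_common(3))
# [(('the', 'cat'), 2), ...] followed by bigrams that occur once
```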

