MCA 2 SEM - ARTIFICIAL INTELLIGENCE
Module – 3 - Syllabus
Basic planning systems: STRIPS - Advanced planning systems: K-STRIPS - Strategic explanations: Why, Why Not, How - Forms of learning: inductive, decision trees, ensemble learning - Computational learning theory - Handling uncertainty: non-monotonic reasoning, probabilistic reasoning - Certainty factors - Fuzzy logic
Objectives of the Syllabus
1. Understand basic planning concepts
   • Learn the structure and working of STRIPS as a foundational AI planning system.
2. Explore advanced planning systems
   • Study K-STRIPS and its role in handling uncertainty and incomplete knowledge in planning.
3. Develop skills in AI explanation frameworks
   • Understand “Why”, “Why Not”, and “How” explanations for interpreting AI decisions.
4. Gain knowledge of machine learning approaches
   • Apply inductive learning, decision trees, and ensemble learning techniques to problem-solving.
5. Comprehend computational learning theory
   • Learn theoretical foundations such as PAC learning to evaluate learning models.
6. Handle uncertainty in reasoning systems
   • Apply non-monotonic reasoning and probabilistic reasoning to dynamic knowledge bases.
7. Use certainty factors for decision-making
   • Represent degrees of belief in expert systems for better diagnostic reasoning.
8. Apply fuzzy logic for approximate reasoning
   • Implement fuzzy logic principles for real-world problems where binary logic is insufficient.
Basic Planning Systems: STRIPS (Stanford Research Institute Problem Solver)

About
• Planning in AI involves generating a sequence of actions to achieve a specific goal.
• STRIPS stands for Stanford Research Institute Problem Solver.
• Developed in the late 1960s at the Stanford Research Institute (now SRI International).
• Created by Richard Fikes and Nils Nilsson.
• One of the most influential approaches to automated planning.
• Provided the foundational concepts for many modern AI planning systems.

It tells a computer:
1. Where you are now (initial state)
2. Where you want to go (goal state)
3. What actions you can take (with rules for when they can happen and what they change)

Example: Making Tea
1. Initial State
   • Kettle is empty
   • Water is not boiled
2. Goal State
   • Tea is ready
3. Actions (with Preconditions and Effects)
   1. Fill Kettle
      • Precondition: Kettle is empty
      • Effect: Kettle has water
   2. Boil Water
      • Precondition: Kettle has water
      • Effect: Water is boiled
   3. Brew Tea
      • Precondition: Water is boiled, tea leaves available
      • Effect: Tea is ready
4. STRIPS Plan Generated
   • Fill Kettle → Boil Water → Brew Tea

Example: Robot Moving Boxes
• Initial State: Box in room A, robot in room A
• Goal State: Box in room B
• Actions:
   • Pick Up Box (precondition: robot in same room as box)
   • Move to Room B (precondition: holding box)
   • Put Down Box (precondition: in room B)
• Plan: Pick Up Box → Move to Room B → Put Down Box

Limitations
• Cannot handle:
   • Uncertainty (probabilistic outcomes)
   • Incomplete knowledge
   • Dynamic changes during planning
• This led to advanced systems like K-STRIPS.
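The core of STRIPS is the action description: preconditions, an add list, and a delete list, applied to a state that is simply a set of facts. The sketch below is a minimal plain-Python illustration of this idea for the tea example, using a breadth-first forward search; the predicate strings and function names are illustrative, not actual STRIPS syntax.

```python
from collections import deque

# Each action: (name, preconditions, add effects, delete effects)
ACTIONS = [
    ("Fill Kettle", {"kettle empty"}, {"kettle has water"}, {"kettle empty"}),
    ("Boil Water",  {"kettle has water"}, {"water boiled"}, set()),
    ("Brew Tea",    {"water boiled", "tea leaves available"}, {"tea ready"}, set()),
]

def plan(initial, goal):
    """Breadth-first forward search over STRIPS states (sets of facts)."""
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                        # every goal condition holds
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:                     # preconditions satisfied
                new_state = frozenset((state - delete) | add)
                if new_state not in visited:
                    visited.add(new_state)
                    frontier.append((new_state, steps + [name]))
    return None

initial = {"kettle empty", "tea leaves available"}
print(plan(initial, goal={"tea ready"}))
# -> ['Fill Kettle', 'Boil Water', 'Brew Tea']
```

Real planners use much better search strategies, but the representation (state + preconditions + add/delete effects) is the part STRIPS contributed.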
K-STRIPS

K-STRIPS is an advanced version of STRIPS designed to handle situations where the system does not have complete knowledge about the environment.

Key Points about K-STRIPS
1. Meaning
   • K stands for Knowledge.
   • It is a knowledge-based extension of the original STRIPS planning system.
2. Purpose
   • To plan in environments where some facts are unknown or uncertain.
   • Unlike STRIPS, it can represent what the agent knows, does not know, and needs to find out.

Example
• Scenario: A robot needs to pick up an object from Room A, but it is not sure if the object is there.
• K-STRIPS plan:
   • Go to Room A
   • Check if the object is there (gather knowledge)
   • If found → pick up object
   • Else → go to Room B and search again

Advantage
• Works in dynamic and real-world environments where you cannot know everything in advance.
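K-STRIPS itself is a logical formalism; the sketch below is only a plain-Python illustration of the idea behind the example above, with hypothetical action and predicate names. The key point is that a sensing step gathers the missing knowledge at execution time, and the plan branches on what is learned.

```python
def sense_object_in_room(room, world):
    """Sensing action: the agent only learns this fact when it looks."""
    return ("object_in", room) in world

def execute_conditional_plan(world):
    trace = ["Go to Room A"]
    if sense_object_in_room("A", world):      # gather knowledge
        trace += ["Pick up object in Room A"]
    else:                                     # knowledge was missing: branch
        trace += ["Go to Room B", "Search Room B"]
        if sense_object_in_room("B", world):
            trace += ["Pick up object in Room B"]
    return trace

# The same conditional plan works whichever room actually holds the object.
print(execute_conditional_plan({("object_in", "A")}))
print(execute_conditional_plan({("object_in", "B")}))
```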
Strategic Explanations
• Why – Reason for a chosen action
• Why Not – Reason for not choosing another action
• How – Steps to achieve a goal

What They Are
• Part of explainable AI (XAI).
• Help users, developers, or operators understand the reasoning behind an AI system’s decisions or actions.
• Especially important in planning systems, expert systems, and decision support systems.

Strategic Explanations in AI
1. Why
   • Explains the reason for a chosen action.
   • Example: “Why did the robot turn left?” — Because the right path was blocked.
2. Why Not
   • Explains the reason for not choosing another action.
   • Example: “Why didn’t the robot go straight?” — Because there was an obstacle ahead.
3. How
   • Explains the steps to achieve a goal.
   • Example: “How will the robot reach the charging station?” — Turn left → move forward 2 meters → dock.

Purpose
• Build trust between the user and the AI.
• Allow debugging of decision-making processes.
• Help in learning and training by showing reasoning steps.

When Used
• In expert systems (e.g., medical diagnosis systems like MYCIN).
• In robotics (explaining navigation choices).
• In intelligent tutoring systems (explaining problem-solving steps).
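One common way to support such explanations is for the system to record the reasons behind its choices and answer questions from that record. The sketch below is a hypothetical, minimal Python illustration built around the robot example above; the structure of the decision record is invented for this example, not a standard XAI format.

```python
# A hypothetical record of one planning decision and the plan for a goal.
decision = {
    "chosen_action": "turn left",
    "reason": "the right path was blocked",
    "rejected": {"go straight": "there was an obstacle ahead"},
    "plan_for_goal": {
        "reach the charging station": ["turn left", "move forward 2 meters", "dock"],
    },
}

def explain_why(d):
    return f"Why '{d['chosen_action']}'? Because {d['reason']}."

def explain_why_not(d, action):
    return f"Why not '{action}'? Because {d['rejected'][action]}."

def explain_how(d, goal):
    return f"How to {goal}? " + " -> ".join(d["plan_for_goal"][goal])

print(explain_why(decision))
print(explain_why_not(decision, "go straight"))
print(explain_how(decision, "reach the charging station"))
```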
Forms of Learning
•Inductive Learning – Learn rules from examples
•Decision Trees – Tree-like model for decisions and outcomes
•Ensemble Learning – Combine multiple models for better accuracy
Inductive Learning

Inductive learning is a machine learning approach where the system generalizes rules or patterns from specific examples (data).
👉 It is the process of going from the particular to the general.

How it works
• The learner is given a set of training examples.
• From these examples, it infers (induces) general rules that can be applied to new, unseen data.

Example training data:
Animal   Features                 Class
Cat      Small, 4 legs, Meows     Pet
Dog      Medium, 4 legs, Barks    Pet
Cow      Large, 4 legs, Moos      Farm

Key Characteristics
1. Data-driven – learns from examples.
2. Generalization – creates rules beyond training data.
3. Fallible – conclusions may be wrong if examples are incomplete or biased.

Real-world Applications
• Spam email detection (learning patterns from spam vs. non-spam examples).
• Medical diagnosis (learning disease symptoms from patient records).
• Image classification (cat vs. dog recognition from labeled pictures).

Decision Trees

A Decision Tree is a tree-like structure used for decision-making and prediction. It breaks down data into smaller subsets while gradually developing a decision rule.
• Nodes: represent tests or conditions on an attribute.
• Branches: represent the outcome of the test.
• Leaves: represent the final decision or class label.

How It Works
1. Start at the root node with all training data.
2. Choose the best attribute (using measures like Information Gain, Entropy, or Gini Index).
3. Split the data based on that attribute.
4. Repeat the process recursively for each branch until:
   • All data in a node belongs to the same class (pure).
   • No attributes are left.
   • The tree reaches a stopping condition.

Example 🌳 (Weather & Play Tennis)
Training data:
• If it’s Sunny and Hot → Don’t Play.
• If it’s Overcast → Play.
• If it’s Rainy and Windy → Don’t Play.
• If it’s Rainy and Calm → Play.

Advantages
• Easy to visualize and interpret.
• Works with both categorical and numerical data.
• Requires little preprocessing.

Limitations
• Can overfit if the tree grows too deep.
• Sensitive to noisy data.
• Greedy splitting may not always produce the best global tree.
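Step 2 of the algorithm above (choosing the best attribute) is the heart of decision-tree learning. The sketch below computes entropy and Information Gain in plain Python for a simplified version of the Play Tennis example; the two attributes (Outlook, Windy) and their values are an illustrative encoding of the four rules above, not a standard dataset.

```python
from collections import Counter
from math import log2

# Four training examples, roughly matching the rules above.
data = [
    ({"Outlook": "Sunny",    "Windy": "Calm"},  "Don't Play"),
    ({"Outlook": "Overcast", "Windy": "Calm"},  "Play"),
    ({"Outlook": "Rainy",    "Windy": "Windy"}, "Don't Play"),
    ({"Outlook": "Rainy",    "Windy": "Calm"},  "Play"),
]

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def information_gain(examples, attribute):
    """Entropy of the whole set minus the weighted entropy after splitting."""
    labels = [label for _, label in examples]
    base = entropy(labels)
    remainder = 0.0
    for value in {x[attribute] for x, _ in examples}:
        subset = [label for x, label in examples if x[attribute] == value]
        remainder += len(subset) / len(examples) * entropy(subset)
    return base - remainder

for attr in ("Outlook", "Windy"):
    print(attr, round(information_gain(data, attr), 3))
```

On this toy data, Outlook has the higher gain (0.5 versus about 0.31), so a greedy tree builder would make it the root split and then recurse on each branch.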
Ensemble Learning

Ensemble learning is a machine learning technique where we combine the predictions of multiple models (called weak learners) to produce a better and more robust model (called a strong learner).
👉 Idea: “A group of models working together performs better than a single model alone.”

Why use Ensemble Learning?
• Individual models may be weak (prone to errors or bias).
• By combining them, we reduce errors, variance, and bias, and improve accuracy.

Types of Ensemble Learning
1. Bagging (Bootstrap Aggregating)
   • Trains multiple models on different random samples of the data.
   • Predictions are combined (e.g., majority voting for classification, averaging for regression).
   • Example: Random Forest 🌲
   • Benefit: Reduces variance and prevents overfitting.
2. Boosting
   • Trains models sequentially, where each new model focuses on the errors made by previous ones.
   • Example: AdaBoost, Gradient Boosting, XGBoost, LightGBM.
   • Benefit: Reduces bias and improves accuracy.
3. Stacking
   • Combines multiple different models (heterogeneous learners) and uses a meta-learner to make the final prediction.
   • Example: Combining Logistic Regression, Decision Trees, and SVM → then feeding their outputs into another model like a Neural Network.
   • Benefit: Exploits strengths of different algorithms.

Example (Spam Detection)
• Model 1: Decision Tree → 85% accuracy.
• Model 2: Naïve Bayes → 80% accuracy.
• Model 3: SVM → 82% accuracy.
• Individually, none is perfect, but an ensemble (e.g., majority vote) may reach 90%+ accuracy, because the errors of one model are corrected by others.

Advantages
• Higher accuracy and robustness.
• Works well with complex, noisy datasets.
• Reduces risk of overfitting (especially bagging).

Disadvantages
• More computationally expensive.
• Less interpretable than single models.
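As a rough illustration of the majority-vote idea in the spam example, the sketch below combines a decision tree, Naïve Bayes, and an SVM using scikit-learn's VotingClassifier; scikit-learn is an assumed library (not named above), the dataset is synthetic (standing in for spam features), and the printed accuracies are illustrative, not the figures quoted above.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic two-class data standing in for spam vs. non-spam features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_models = [
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("nb", GaussianNB()),
    ("svm", SVC()),
]

# Each weak learner votes; the ensemble predicts the majority class.
ensemble = VotingClassifier(estimators=base_models, voting="hard")

for name, model in base_models + [("ensemble", ensemble)]:
    model.fit(X_train, y_train)
    print(name, round(model.score(X_test, y_test), 3))
```

Typically the hard-voting ensemble scores at least as well as its weakest members because the three models make different kinds of mistakes; on some splits a single strong model may still win, which is why boosting and stacking are also used.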