ML TERM I&II
Sanskriti University MCA IV Semester Term-I Examination (March 2025) paper on
Machine Learning (MCA 623):
✅ PART – A (6 questions × 1 mark = 6 Marks)
(Very Short Answers / Multiple Choice Questions)
1a. Define machine learning.
➡ Machine Learning is a subset of AI that allows systems to learn from data and improve
from experience without being explicitly programmed.
1b. What is overfitting in machine learning?
➡ Overfitting occurs when a model learns the training data, including its noise, too well, resulting
in poor performance on unseen data.
1c. What is the role of a cost function in machine learning?
➡ A cost function measures how well the model’s predictions match the actual outcomes; it
guides the optimization process.
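As an illustration of the idea above, a minimal sketch of one common cost function, mean squared error (the choice of MSE here is an assumption; the paper does not name a specific cost function):

```python
def mse(y_true, y_pred):
    """Mean squared error: average squared gap between predictions and targets."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# A lower cost means the predictions track the actual outcomes more closely.
print(mse([3.0, 5.0, 7.0], [2.5, 5.0, 8.0]))  # ≈ 0.417
```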
1d. List any two supervised learning algorithms.
➡ Linear Regression and Decision Trees.
1e. Which algorithm uses distance as a measure to classify data?
➡ K-Nearest Neighbors (KNN).
1f. What is the use of the confusion matrix?
➡ It is used to evaluate the performance of a classification model by showing actual vs.
predicted classifications.
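The actual-vs-predicted tabulation can be sketched in plain Python (the example labels are made up for illustration):

```python
from collections import Counter

def confusion_matrix(actual, predicted, labels):
    # counts[(a, p)] = number of samples with actual class a predicted as class p
    counts = Counter(zip(actual, predicted))
    return [[counts[(a, p)] for p in labels] for a in labels]

actual    = ["pos", "pos", "neg", "neg", "pos"]
predicted = ["pos", "neg", "neg", "pos", "pos"]
cm = confusion_matrix(actual, predicted, ["pos", "neg"])
# Rows = actual, columns = predicted: [[TP, FN], [FP, TN]]
print(cm)  # [[2, 1], [1, 1]]
```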
✅ PART – B (3 questions × 2 marks = 6 Marks)
(Short Answer Type Questions)
2. Differentiate between supervised and unsupervised learning.
| Feature | Supervised Learning | Unsupervised Learning |
|----------------|--------------------------------------|---------------------------------|
| Data | Labeled | Unlabeled |
| Purpose | Prediction/classification | Pattern detection/clustering |
| Example | Logistic Regression, Decision Tree | K-Means, PCA |
3. What is the significance of K value in KNN?
➡ The value of K determines the number of nearest neighbors to consider for
classification.
➡ A smaller K can be noisy and lead to overfitting, while a larger K makes the
algorithm more stable.
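The sensitivity to K can be seen with a toy example (the distances and labels below are assumed, not from the paper): the same neighbours can yield different answers for different K.

```python
from collections import Counter

# (distance to test point, class label) — assumed toy values
neighbours = [(0.5, "A"), (0.7, "B"), (0.9, "B"), (1.2, "A"), (1.5, "A")]

def knn_vote(neighbours, k):
    nearest = sorted(neighbours)[:k]          # the k closest points
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn_vote(neighbours, 1))  # "A" — follows only the single closest point
print(knn_vote(neighbours, 3))  # "B" — two of the three closest are "B"
print(knn_vote(neighbours, 5))  # "A" — the wider vote flips back
```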
4. List the types of machine learning.
➡ Supervised Learning
➡ Unsupervised Learning
➡ Reinforcement Learning
✅ PART – C (2 questions × 4 marks = 8 Marks)
(Answer any 2 out of 3)
5. From the given figure (scatter plot), identify the classification line and predict the
class of a new sample point (x=3 and y=4):
Based on the image and scatter plot:
o The line divides two classes.
o The new point (3, 4) appears to fall below the line, so it belongs to the "Negative"
class (based on the figure's orientation).
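The side-of-the-line check can be sketched in code. The boundary y = 2x − 1 below is hypothetical, since the actual line in the figure is not reproduced here; the point only illustrates the mechanics of comparing a point against a linear boundary.

```python
# Hypothetical decision boundary y = 2x - 1; points below it are "Negative".
def classify(x, y, slope=2.0, intercept=-1.0):
    line_y = slope * x + intercept
    return "Positive" if y > line_y else "Negative"

print(classify(3, 4))  # at x = 3 the line is at y = 5, so (3, 4) lies below it: "Negative"
```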
6. Calculate entropy and information gain using given data table:
| Play Tennis | Sunny | Overcast | Rain |
|-------------|-------|----------|------|
| Yes         | 2     | 4        | 3    |
| No          | 3     | 0        | 2    |
Total = 14
Entropy(S) = −(9/14)log₂(9/14) − (5/14)log₂(5/14) ≈ 0.940
Sunny, Overcast and Rain are values of the Outlook feature. Calculate the entropy of each subset, then the Information Gain (IG):
o Entropy(Sunny) = −(2/5)log₂(2/5) − (3/5)log₂(3/5) ≈ 0.971
o Entropy(Overcast) = 0 (all samples are Yes)
o Entropy(Rain) = −(3/5)log₂(3/5) − (2/5)log₂(2/5) ≈ 0.971
o IG(Outlook) = Entropy(S) − [(5/14)(0.971) + (4/14)(0) + (5/14)(0.971)] ≈ 0.940 − 0.694 ≈ 0.247
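The entropy and information-gain arithmetic can be checked in a few lines of Python, using the Yes/No counts from the table:

```python
import math

def entropy(counts):
    """Shannon entropy of a class distribution given as raw counts."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

# (Yes, No) counts per Outlook value, from the table above
outlook = {"Sunny": (2, 3), "Overcast": (4, 0), "Rain": (3, 2)}
total = 14

parent = entropy((9, 5))                                          # ≈ 0.940
weighted = sum(sum(c) / total * entropy(c) for c in outlook.values())
ig = parent - weighted
print(round(ig, 3))  # 0.247
```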
7. Using the given KNN table, classify the test sample based on majority voting:
| Story | Suspense | Emotions | Prediction |
|-------|----------|----------|------------|
| No    | First    | Yes      | Show       |
| Yes   | Second   | No       | Skip       |
| No    | First    | Yes      | Show       |
Using K = 3, the nearest neighbors vote 2 "Show" to 1 "Skip".
Final classification = "Show"
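The majority-voting step can be sketched with the rows from the table. Since the features are categorical, the sketch assumes a Hamming-style distance (number of mismatched attributes); the paper does not specify the distance measure.

```python
from collections import Counter

# Training rows from the table: (Story, Suspense, Emotions) -> Prediction
train = [
    (("No",  "First",  "Yes"), "Show"),
    (("Yes", "Second", "No"),  "Skip"),
    (("No",  "First",  "Yes"), "Show"),
]

def hamming(a, b):
    """Number of attributes on which two rows disagree."""
    return sum(x != y for x, y in zip(a, b))

def knn_classify(test, train, k=3):
    nearest = sorted(train, key=lambda row: hamming(test, row[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn_classify(("No", "First", "Yes"), train))  # "Show" by a 2-to-1 vote
```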
Answers for the second Machine Learning question paper (Course Code: MCA 628):
✅ PART – A (6 questions × 1 mark = 6 Marks)
a. ReLU function φ(x) = max(0, x); φ(-5) =
Answer: 0
b. tanh(x) = (e^x - e^-x) / (e^x + e^-x); tanh(2) ≈
Answer: 0.964
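Both activation values above can be verified directly with the standard library:

```python
import math

def relu(x):
    """ReLU activation: φ(x) = max(0, x)."""
    return max(0.0, x)

print(relu(-5))                 # 0.0
print(round(math.tanh(2), 3))   # 0.964
```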
c. Common activation function in output layer for multi-class classification:
Answer: b) Softmax
d. From the figure, entropy is
Answer: b) High (since the distribution is scattered among classes)
e. For a good split, Information Gain should be ___ and the weighted entropy of the resulting subsets should be ___:
Answer: b) Maximum, Minimum
f. Probability value from figure (5 colored balls out of 10 total):
Answer: d) 0.5
✅ PART – B (3 questions × 2 marks = 6 Marks)
(Short Answer Type Questions)
2. How many internal nodes in the given decision tree?
From image: Tree has 3 internal nodes (i.e., nodes that split into branches).
3. Find the result using the given decision tree:
From the image: traverse the tree according to the attribute values of the given input
(the exact path depends on the figure). The path is assumed to end at a leaf labeled "Yes".
4. What is the measure of impurity in Decision Trees?
Answer: Entropy (the Gini Index is also commonly used)
✅ PART – C (2 questions × 4 marks = 8 Marks)
5. Use the given data to apply Naive Bayes Classifier.
Compute the prior probability of each class and the conditional probability of each feature value given the class.
Example format:
P(Play=Yes) = total Yes / total records
Then for the test instance:
P(Yes|X) ∝ P(Yes) × P(Feature1|Yes) × P(Feature2|Yes) × ...
(do the same for No), and choose the class with the higher score.
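The proportional comparison can be sketched numerically. The counts and conditional probabilities below are hypothetical, chosen only to show the format, since the paper's data table is not reproduced here:

```python
# Hypothetical counts: 9 "Yes" and 5 "No" records out of 14
p_yes, p_no = 9 / 14, 5 / 14

# Assumed conditional probabilities of the test instance's feature values
cond_yes = [2 / 9, 6 / 9]   # P(Feature1=v1|Yes), P(Feature2=v2|Yes)
cond_no  = [3 / 5, 1 / 5]   # P(Feature1=v1|No),  P(Feature2=v2|No)

score_yes = p_yes
for p in cond_yes:
    score_yes *= p          # P(Yes) × Π P(feature|Yes)

score_no = p_no
for p in cond_no:
    score_no *= p           # P(No) × Π P(feature|No)

print("Yes" if score_yes > score_no else "No")  # "Yes" for these numbers
```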
6. Step-by-step procedure to train a neural network:
➡ Initialize weights and biases
➡ Forward propagate the input
➡ Calculate the loss
➡ Backpropagate the error
➡ Update weights using gradient descent
➡ Repeat until convergence
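The steps above can be sketched end-to-end for the simplest possible network, a single linear neuron trained with stochastic gradient descent on made-up noiseless data (the target function y = 2x + 1 is assumed for illustration):

```python
import random

random.seed(0)
data = [(x, 2 * x + 1) for x in range(-5, 6)]   # toy dataset: y = 2x + 1
w, b = random.random(), random.random()         # 1. initialize weight and bias
lr = 0.01                                       # learning rate

for epoch in range(500):                        # 6. repeat until convergence
    for x, target in data:
        pred = w * x + b                        # 2. forward propagate
        error = pred - target                   # 3. loss gradient for squared error
        dw, db = error * x, error               # 4. backpropagate to get gradients
        w -= lr * dw                            # 5. gradient descent update
        b -= lr * db

print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0
```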
7. What are epochs, learning rate, batch size?
➡ Epoch: one complete pass through the training dataset
➡ Learning Rate: step size used when updating weights
➡ Batch Size: number of samples processed before each weight update
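How the three quantities relate can be shown with a small calculation (the dataset size, batch size and epoch count are assumed for illustration):

```python
import math

n_samples, batch_size, epochs = 1000, 32, 5

# One weight update per batch; the last batch of an epoch may be partial.
updates_per_epoch = math.ceil(n_samples / batch_size)
total_updates = updates_per_epoch * epochs
print(updates_per_epoch, total_updates)  # 32 160
```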