Unit 3,4,5 MCQ

This document is a set of multiple-choice questions, with answers, covering density estimation, linear discriminant functions, support vector machines, neural networks, and decision trees. Each question tests a specific concept, such as Parzen windows, k-nearest-neighbor estimation, or the backpropagation algorithm, and the answer key highlights the underlying principle from machine learning and pattern classification.


UNIT-3

1. Density Estimation
Q1. The main disadvantage of histogram-based density estimation is:
A. It is too fast
B. It requires a parametric form
C. It is sensitive to bin width and origin
D. It uses kernel functions
Answer: C
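
To see that sensitivity in practice, here is a minimal sketch (assuming NumPy; the sample size, seed, and bin counts are arbitrary illustrations): the estimated density at the same point changes noticeably as the bin count, and hence the bin width, changes.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=200)  # toy sample from N(0, 1)

for bins in (5, 20, 80):
    density, edges = np.histogram(data, bins=bins, density=True)
    idx = np.searchsorted(edges, 0.0) - 1        # bin containing x = 0
    print(f"{bins:3d} bins -> p_hat(0) = {density[idx]:.3f}")  # value shifts with bin width
```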

2. Parzen Windows
Q2. As the window width (h) in Parzen estimation becomes very small, the resulting estimate
becomes:
A. Smoother
B. More biased
C. More discrete and noisy
D. Less sensitive to outliers
Answer: C
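
A minimal Parzen-window sketch with a Gaussian kernel (assuming NumPy; the sample and the two widths are illustrative): with a very small h the estimate jumps around from point to point, i.e. it is discrete and noisy, while a larger h smooths it.

```python
import numpy as np

def parzen_estimate(x, samples, h):
    """Gaussian-kernel Parzen estimate: p_hat(x) = (1/n) sum_i (1/h) K((x - x_i)/h)."""
    u = (x - samples) / h
    return (np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)).mean() / h

rng = np.random.default_rng(1)
samples = rng.normal(size=100)
for h in (0.01, 0.5):
    vals = [parzen_estimate(x, samples, h) for x in (-0.1, 0.0, 0.1)]
    print(h, np.round(vals, 3))   # h = 0.01 -> wildly varying values; h = 0.5 -> smooth
```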

3. K-Nearest Neighbor Estimation


Q3. In K-NN density estimation, the density is estimated by:
A. Counting total samples
B. Averaging distances of all points
C. Volume enclosing K neighbors
D. Distance to the farthest class centroid
Answer: C
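
A minimal 1-D sketch (assuming NumPy): the estimate is p_hat(x) = k / (nV), where V is the volume (here, an interval length) that just encloses the k nearest neighbors of x.

```python
import numpy as np

def knn_density(x, samples, k):
    dists = np.sort(np.abs(samples - x))
    radius = dists[k - 1]             # distance to the k-th nearest neighbor
    volume = 2 * radius               # 1-D "volume" enclosing the k neighbors
    return k / (len(samples) * volume)

rng = np.random.default_rng(2)
samples = rng.normal(size=500)
print(knn_density(0.0, samples, k=10))   # should land near the N(0,1) peak, ~0.399
```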

4. The Nearest Neighbor Rule


Q4. The asymptotic error rate of 1-NN classifier is:
A. Equal to Bayes error
B. Greater than Bayes error but less than twice it
C. Always zero
D. Equal to training error
Answer: B
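
The bound behind option B is the Cover-Hart result: with unlimited samples, the 1-NN error rate P for a c-class problem is sandwiched between the Bayes error P* and twice it.

```latex
% Cover–Hart bound: P* is the Bayes error, P the asymptotic 1-NN error, c the class count.
P^{*} \;\le\; P \;\le\; P^{*}\!\left(2 - \frac{c}{c-1}\,P^{*}\right) \;\le\; 2P^{*}
```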

5. Metrics and Nearest Neighbor Classification


Q5. Mahalanobis distance is preferred over Euclidean distance when:
A. Features are binary
B. Features are independent
C. Features are correlated
D. Features are normalized
Answer: C
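
A minimal sketch (assuming NumPy; the covariance matrix is an illustrative choice): two points at the same Euclidean distance from the mean get very different Mahalanobis distances once feature correlation is taken into account.

```python
import numpy as np

cov = np.array([[1.0, 0.9],
                [0.9, 1.0]])           # strongly correlated features
cov_inv = np.linalg.inv(cov)
mu = np.zeros(2)

def mahalanobis(x):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

a = np.array([1.0, 1.0])    # along the correlation direction
b = np.array([1.0, -1.0])   # against the correlation direction
print(np.linalg.norm(a), mahalanobis(a))  # same Euclidean distance ...
print(np.linalg.norm(b), mahalanobis(b))  # ... much larger Mahalanobis distance
```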

6. Fuzzy Classification
Q6. In fuzzy K-NN, a sample is assigned to:
A. Only the closest class
B. All classes with crisp probability
C. One class with 100% certainty
D. Multiple classes with varying membership values
Answer: D
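
A minimal sketch of the idea (assuming NumPy), using simple inverse-distance weighting rather than the full Keller-style membership formula: the query point receives graded memberships in several classes that sum to one.

```python
import numpy as np

def fuzzy_knn_memberships(x, X_train, y_train, k=3, eps=1e-9):
    dists = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(dists)[:k]                 # indices of the k nearest neighbors
    weights = 1.0 / (dists[nn] + eps)          # closer neighbors count more
    classes = np.unique(y_train)
    scores = np.array([weights[y_train[nn] == c].sum() for c in classes])
    return {int(c): float(s) for c, s in zip(classes, scores / scores.sum())}

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 0.9]])
y = np.array([0, 0, 1, 1])
print(fuzzy_knn_memberships(np.array([0.5, 0.5]), X, y))  # e.g. {0: ~0.68, 1: ~0.32}
```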

7. Reduced Coulomb Energy (RCE) Networks


Q7. In an RCE network, the radius of a receptive field is adjusted based on:
A. Number of neighbors
B. Activation function
C. Classification error
D. Distance to training points and threshold
Answer: D
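
A heavily hedged sketch of the textbook training rule (the threshold, margin, and data here are illustrative assumptions): each prototype's receptive-field radius is set just short of the distance to its nearest training point of a different class, clipped at a maximum threshold.

```python
import numpy as np

def rce_radii(X, y, lam_max=1.3, eps=1e-3):
    radii = np.empty(len(X))
    for i, (xi, yi) in enumerate(zip(X, y)):
        enemies = X[y != yi]                           # training points of other classes
        nearest_enemy = np.linalg.norm(enemies - xi, axis=1).min()
        radii[i] = min(nearest_enemy - eps, lam_max)   # shrink to the cap if needed
    return radii

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0]])
y = np.array([0, 0, 1])
print(rce_radii(X, y))   # first radius hits the lam_max cap; the others stop at an enemy
```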

8. Approximations by Series Expansions


Q8. In Fourier series expansion, increasing the number of terms results in:
A. Lower training accuracy
B. Smoother approximations
C. More random outputs
D. Less generalization
Answer: B
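
A minimal sketch (assuming NumPy) with the classic odd-harmonic series for a square wave: away from the jump points, the approximation error shrinks steadily as terms are added.

```python
import numpy as np

def square_wave_series(t, n_terms):
    # Square wave on [-pi, pi]: (4/pi) * sum over odd k of sin(k t) / k
    k = np.arange(1, 2 * n_terms, 2)
    return (4 / np.pi) * np.sum(np.sin(np.outer(t, k)) / k, axis=1)

t = np.linspace(0.3, np.pi - 0.3, 50)   # stay away from the discontinuities
for n in (1, 5, 50):
    err = np.abs(square_wave_series(t, n) - 1.0).max()   # the target is +1 here
    print(n, round(float(err), 3))       # more terms -> smoother, closer approximation
```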

UNIT-4

Linear Discriminant Functions and Decision Surfaces

Q1. A linear discriminant function in two dimensions represents a:


A. Point
B. Line
C. Curve
D. Hyperplane
Answer: B
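
For concreteness, the two-dimensional form and the line it traces (standard notation, not specific to this question set):

```latex
% Linear discriminant in 2-D; the decision surface g(x) = 0 is a straight line.
g(\mathbf{x}) = w_1 x_1 + w_2 x_2 + w_0, \qquad
g(\mathbf{x}) = 0 \;\Longleftrightarrow\; x_2 = -\tfrac{w_1}{w_2}\,x_1 - \tfrac{w_0}{w_2} \quad (w_2 \ne 0)
```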

Generalized Linear Discriminant Functions

Q2. A generalized linear discriminant function can be used to:


A. Classify only two categories
B. Handle nonlinear separability
C. Solve multi-class problems
D. Estimate probabilities directly
Answer: B
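
Answer B reflects the usual motivation for the generalized form g(x) = sum_i a_i y_i(x): the nonlinear mapping x -> y(x) lets a function linear in y separate data that is not linearly separable in x. A minimal sketch (assuming NumPy; the data and the hand-picked weights are illustrative): a 1-D problem with one class sandwiched between the other becomes separable after the mapping x -> (1, x, x^2).

```python
import numpy as np

x = np.array([-2.0, -1.5, -0.5, 0.0, 0.5, 1.5, 2.0])
y = np.array([1, 1, 0, 0, 0, 1, 1])      # class 0 sits between the class-1 points

phi = np.stack([np.ones_like(x), x, x**2], axis=1)   # y(x) = (1, x, x^2)
a = np.array([-1.0, 0.0, 1.0])           # g(x) = x^2 - 1, picked by hand for the demo
g = phi @ a                               # linear in phi-space, quadratic in x

print((g > 0).astype(int))                # [1 1 0 0 0 1 1] -> matches the labels
```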

Two-Category Linearly Separable Case


Q3. In a two-class linearly separable problem, the decision boundary is:
A. Non-existent
B. A quadratic curve
C. A straight line
D. A cluster center
Answer: C

Perceptron Criterion Functions

Q4. The perceptron criterion function minimizes:


A. Total distance between classes
B. Misclassified samples' distance from the boundary
C. Weight norm
D. Variance within a class
Answer: B
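
A minimal sketch (assuming NumPy; the toy data is illustrative) of gradient descent on the perceptron criterion J_p(a) = sum over misclassified y of (-a . y), with samples augmented and sign-normalized so a correct classification means a . y > 0. Note for Q6 below: if the data were not linearly separable, the set of misclassified samples would never empty and the loop would never break.

```python
import numpy as np

X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -1.5], [-2.0, -1.0]])
labels = np.array([1, 1, -1, -1])

# Augment with a bias component and multiply by the label, so every sample
# should end up on the positive side: a . y > 0.
Y = labels[:, None] * np.hstack([X, np.ones((len(X), 1))])

a, eta = np.zeros(3), 0.1
for _ in range(100):                     # cap the iterations (see Q6: nonseparable data)
    mis = Y[Y @ a <= 0]                  # currently misclassified samples
    if len(mis) == 0:
        break                            # separable case: the perceptron converges
    a += eta * mis.sum(axis=0)           # a <- a - eta * grad J_p
print(a, bool((Y @ a > 0).all()))
```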

Relaxation Procedures

Q5. Relaxation procedures in pattern classification are used to:


A. Speed up convergence
B. Reduce noise
C. Minimize energy functions
D. Ensure orthogonality
Answer: A

Nonseparable Behavior

Q6. When data is not linearly separable, the perceptron:


A. Always converges
B. Gives infinite solutions
C. Does not converge
D. Classifies all correctly
Answer: C

Minimum Squared Error Procedures


Q7. Minimum squared error methods are equivalent to:
A. Linear regression
B. Perceptron
C. Maximum likelihood
D. Histogram equalization
Answer: A
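
A minimal sketch (assuming NumPy) of the equivalence: the MSE procedure solves Ya = b in the least-squares sense via the pseudoinverse, which is exactly a linear regression of the target vector b (here +-1 labels) on the augmented samples.

```python
import numpy as np

X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -1.5], [-2.0, -1.0]])
b = np.array([1.0, 1.0, -1.0, -1.0])      # regression targets: the class labels

Y = np.hstack([X, np.ones((len(X), 1))])  # augmented sample matrix
a, *_ = np.linalg.lstsq(Y, b, rcond=None) # a = pinv(Y) @ b, the MSE solution

print(a, np.sign(Y @ a))                  # the signs recover the labels
```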

Linear Programming Algorithms

Q8. Linear programming methods in classification are mainly used to:


A. Find kernel functions
B. Solve optimization problems with linear constraints
C. Reduce dimensionality
D. Calculate distances
Answer: B
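
A minimal sketch (assuming SciPy's linprog; the data is illustrative) that phrases separation as a linear program: a pure feasibility problem whose linear constraints demand y_i (w . x_i + b) >= 1 for every sample.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -1.5], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

# y_i (w . x_i + b) >= 1   <=>   -y_i * [x_i, 1] . [w, b] <= -1
A_ub = -y[:, None] * np.hstack([X, np.ones((len(X), 1))])
b_ub = -np.ones(len(X))

res = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 3)   # w and b are free variables
print(res.status == 0, res.x)              # status 0: a feasible (w, b) was found
```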

Support Vector Machines

Q9. The support vectors in an SVM are:


A. Points far from the decision boundary
B. Outliers
C. Points lying on the margin
D. Random samples
Answer: C
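
A minimal sketch using scikit-learn (an assumed library choice; the data is a toy set): after fitting a nearly hard-margin linear SVM, the stored support vectors are exactly the samples on the margin, where the decision function evaluates to about +-1.

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0],
              [-1.0, -1.5], [-2.0, -1.0], [-3.0, -3.0]])
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)          # large C ~ hard margin
print(clf.support_vectors_)                          # only the margin points remain
print(clf.decision_function(clf.support_vectors_))   # ~ +1 / -1 on the margin
```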
UNIT-5
ONE MARK QUESTIONS (Intermediate Level MCQs)
Introduction to Neural Networks

Q1. The function of an activation function in a neural network is to:


A. Multiply inputs by weights
B. Perform output normalization
C. Introduce non-linearity
D. Store weights
Answer: C
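
A minimal sketch (assuming NumPy; shapes and seed are arbitrary) of why the non-linearity matters: two stacked purely linear layers collapse into one linear map, while inserting a ReLU between them breaks that collapse.

```python
import numpy as np

rng = np.random.default_rng(3)
W1, W2 = rng.normal(size=(4, 2)), rng.normal(size=(1, 4))
x = rng.normal(size=2)

two_linear_layers = W2 @ (W1 @ x)          # equals (W2 @ W1) @ x: still linear
collapsed = (W2 @ W1) @ x
with_relu = W2 @ np.maximum(0.0, W1 @ x)   # the activation breaks the collapse

print(np.allclose(two_linear_layers, collapsed),   # True: no activation, no real depth
      np.allclose(with_relu, collapsed))           # False: ReLU adds non-linearity
```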

Multilayer Neural Networks: Feedforward & Classification

Q2. In a feedforward neural network, data flows:


A. In a loop
B. Bidirectionally
C. In one direction from input to output
D. Randomly between layers
Answer: C

Backpropagation Algorithm

Q3. Backpropagation minimizes the error by adjusting:


A. Output values
B. Learning rate
C. Weights using gradient descent
D. Bias only
Answer: C
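
A minimal sketch (assuming NumPy; architecture, seed, and learning rate are illustrative) of backpropagation on a tiny two-layer network with sigmoid units and squared error: the deltas flow backwards through the chain rule and the weights move down the gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(4)
X = rng.normal(size=(8, 2))                          # toy inputs
t = rng.integers(0, 2, size=(8, 1)).astype(float)    # toy binary targets
W1, W2 = rng.normal(size=(2, 3)), rng.normal(size=(3, 1))
eta = 0.5

for _ in range(200):
    h = sigmoid(X @ W1)                      # forward: hidden layer
    yhat = sigmoid(h @ W2)                   # forward: output layer
    delta2 = (yhat - t) * yhat * (1 - yhat)  # dE/d(net) at the output
    delta1 = (delta2 @ W2.T) * h * (1 - h)   # error propagated backwards
    W2 -= eta * h.T @ delta2                 # gradient-descent weight updates
    W1 -= eta * X.T @ delta1
print(float(0.5 * ((sigmoid(sigmoid(X @ W1) @ W2) - t) ** 2).sum()))  # error has dropped
```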

Decision Trees

Q4. In a decision tree, the attribute chosen for a split is based on:
A. Variance
B. Information gain or Gini index
C. Mean square error
D. Learning rate
Answer: B
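
A minimal sketch (assuming NumPy; the labels and the split are illustrative) computing both criteria named in the answer for one candidate split: the split is scored by how much it reduces the parent's impurity.

```python
import numpy as np

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

parent = np.array([0, 0, 0, 1, 1, 1, 1, 0])
left, right = parent[:4], parent[4:]       # one candidate split
w_l, w_r = len(left) / len(parent), len(right) / len(parent)

info_gain = entropy(parent) - (w_l * entropy(left) + w_r * entropy(right))
gini_drop = gini(parent) - (w_l * gini(left) + w_r * gini(right))
print(round(float(info_gain), 3), round(float(gini_drop), 3))  # 0.189 and 0.125
```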

CART (Classification and Regression Trees)

Q5. The CART algorithm generates:


A. Only classification rules
B. Only regression outputs
C. Binary decision trees
D. Fuzzy inference rules
Answer: C
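
A minimal sketch using scikit-learn, whose DecisionTreeClassifier implements an optimized CART variant (an assumption worth checking against your course's definition): every internal node has exactly two children, i.e. the tree is binary.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([0, 0, 1, 1, 0, 0])

tree = DecisionTreeClassifier(criterion="gini").fit(X, y).tree_
internal = tree.children_left != -1        # leaves are flagged with -1
print(bool(np.all(tree.children_right[internal] != -1)), tree.node_count)
# True: every internal node has both children -> a binary decision tree
```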

Applications: Face Recognition System

Q6. In face recognition, neural networks are primarily used for:


A. Data compression
B. Pattern classification
C. Language translation
D. Hashing images
Answer: B
