Untitled Document (1)
---
2. D) Loss function
Explanation: Batch normalization normalizes the inputs of layers (input or hidden) to stabilize
learning. The loss function is not normalized.
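A minimal sketch of the normalization step (NumPy, illustrative values only; real layers also learn the scale `gamma` and shift `beta` during training):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature across the batch to zero mean / unit
    variance, then apply a learnable scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Two features on very different scales
batch = np.array([[1.0, 200.0],
                  [2.0, 400.0],
                  [3.0, 600.0]])
out = batch_norm(batch)
print(out.mean(axis=0))  # ~0 per feature
print(out.std(axis=0))   # ~1 per feature
```

Note that normalization is applied to layer inputs, not to the loss.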
3. B) Mode collapse
Explanation: In GANs, mode collapse happens when the generator produces limited varieties of
outputs, failing to capture the diversity of real data.
5. B) To speed up convergence
Explanation: Teacher forcing feeds the ground-truth target from the previous time step as the
next input during training of sequence models such as LSTMs, instead of the model's own
prediction, which speeds convergence and keeps early mistakes from compounding.
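A toy illustration of the difference (not a real LSTM; `predict` is a hypothetical stand-in for the model's one-step output, made deliberately imperfect):

```python
# Hypothetical one-step "model": maps the previous token to a guess.
# It is wrong about what follows "the" (says "dog" instead of "cat").
def predict(prev_token):
    guesses = {"<s>": "the", "the": "dog", "cat": "sat", "sat": "</s>"}
    return guesses.get(prev_token, "<unk>")

target = ["the", "cat", "sat", "</s>"]

# Teacher forcing: condition each step on the TRUE previous token,
# so one early mistake does not derail the rest of the sequence.
teacher_forced = [predict(prev) for prev in ["<s>"] + target[:-1]]

# Free running: condition each step on the model's OWN output,
# so the single error cascades.
free_running, prev = [], "<s>"
for _ in target:
    prev = predict(prev)
    free_running.append(prev)

print(teacher_forced)  # ['the', 'dog', 'sat', '</s>'] — one error
print(free_running)    # ['the', 'dog', '<unk>', '<unk>'] — errors compound
```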
---
6. B) P(parameters|data)
Explanation: The posterior distribution is the probability of the parameters given the observed
data, central to Bayesian inference.
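A worked example with made-up numbers, using the standard Beta-Binomial conjugate pair: a uniform Beta(1, 1) prior on a coin's bias, updated after observing 7 heads in 10 flips.

```python
from fractions import Fraction

# Prior: Beta(alpha=1, beta=1), i.e. uniform over the bias p.
alpha_prior, beta_prior = 1, 1
heads, tails = 7, 3

# Conjugacy gives the posterior P(p | data) in closed form:
# Beta(alpha + heads, beta + tails)
alpha_post = alpha_prior + heads   # 8
beta_post = beta_prior + tails     # 4

# Posterior mean of p = alpha / (alpha + beta)
post_mean = Fraction(alpha_post, alpha_post + beta_post)
print(post_mean)  # 2/3
```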
8. B) Normality of residuals
Explanation: A QQ plot compares the quantiles of a dataset against the theoretical quantiles of
a reference distribution (typically normal); points falling close to a straight line indicate the
data is approximately normally distributed.
---
16. C) O(n³)
Explanation: The Hungarian algorithm for solving assignment problems has a cubic time
complexity, suitable for small to moderate datasets.
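In practice, the assignment problem can be solved with `scipy.optimize.linear_sum_assignment` (SciPy's solver tackles the same problem the Hungarian algorithm does, though its internal algorithm may differ); example cost matrix is made up:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j] = cost of assigning worker i to task j
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)))   # optimal worker -> task pairs
print(cost[rows, cols].sum())  # minimum total cost: 5
```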
---
Machine Learning
23. B) F1-score
Explanation: The F1-score balances precision and recall, making it suitable for imbalanced
datasets where accuracy can be misleading.
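A small worked example of why accuracy misleads on imbalance (counts are made up): with 90 true negatives, 5 true positives, 5 false negatives, and 0 false positives, accuracy is 95% but half the positives are missed.

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Accuracy = (90 + 5) / 100 = 0.95, yet recall is only 0.5:
f1 = f1_score(tp=5, fp=0, fn=5)
print(round(f1, 4))  # 0.6667 — reveals the weak recall
```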
---
Data Engineering
28. B) HAVING
Explanation: HAVING is used to filter grouped data after aggregation, unlike WHERE, which
filters rows before aggregation.
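A runnable demonstration using Python's built-in `sqlite3` (table and values are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 50), ("alice", 70), ("bob", 20), ("bob", 10)])

# WHERE filters rows BEFORE grouping; HAVING filters groups AFTER
# aggregation, so it can reference aggregates like SUM().
rows = conn.execute("""
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    HAVING total > 100
""").fetchall()
print(rows)  # [('alice', 120.0)] — bob's total (30) is filtered out
```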
29. A) Data modeling approaches
Explanation: Schema-on-read means data is interpreted at query time, while schema-on-write
enforces structure when data is ingested.
---
31. C) 95%
Explanation: In a normal distribution, about 95% of the data falls within ±2 standard deviations
from the mean.
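The exact figure can be checked from the standard normal CDF, which is expressible via the error function in the standard library:

```python
from math import erf, sqrt

def normal_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1 + erf(z / sqrt(2)))

within_2sd = normal_cdf(2) - normal_cdf(-2)
print(round(within_2sd, 4))  # 0.9545 — the "about 95%" rule
```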
34. B) ANOVA
Explanation: ANOVA (Analysis of Variance) is used to test differences between three or more
group means.
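A one-way ANOVA sketch with `scipy.stats.f_oneway` (the three groups are invented; group C's mean is deliberately shifted):

```python
from scipy import stats

# Three groups; one-way ANOVA tests whether any group mean differs.
group_a = [5.1, 4.9, 5.3, 5.0]
group_b = [5.2, 5.0, 5.1, 4.8]
group_c = [7.9, 8.1, 8.0, 8.2]  # clearly shifted mean

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(p_value < 0.05)  # True — reject equal means
```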
37. B) O(log n)
Explanation: A balanced Binary Search Tree allows efficient querying in logarithmic time,
because each comparison discards half of the remaining nodes.
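The same halving principle can be shown with binary search on a sorted array via the standard-library `bisect` module (a stand-in for BST lookup, not a tree implementation):

```python
from bisect import bisect_left

def contains(sorted_vals, x):
    """O(log n) membership test on a sorted array (binary search),
    the same halving idea a balanced BST lookup uses."""
    i = bisect_left(sorted_vals, x)
    return i < len(sorted_vals) and sorted_vals[i] == x

vals = sorted([17, 3, 42, 8, 23])
print(contains(vals, 23))  # True
print(contains(vals, 9))   # False
```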
---
Basic Concepts
---
47. A) df.head()
Explanation: df.head() returns the first 5 rows of a DataFrame by default; pass an integer,
e.g. df.head(10), to change the count.
50. B) TensorFlow
Explanation: TensorFlow is a popular deep learning library for building and training neural
networks.
---
Simple Statistics
51. B) -1 to 1
Explanation: The correlation coefficient ranges from -1 (perfect negative) to 1 (perfect positive),
with 0 indicating no correlation.
52. B) 10
Explanation: Mean = (5 + 10 + 15) / 3 = 10
53. B) 4
Explanation: Sorted list = [1, 3, 5, 7]; Median = (3 + 5)/2 = 4
54. C) Mean
Explanation: The mean is heavily influenced by extreme values, unlike median or mode.
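A quick check with made-up numbers: adding one extreme value drags the mean far more than the median.

```python
from statistics import mean, median

values = [30, 32, 34, 36, 38]
with_outlier = values + [1000]

print(mean(values), median(values))              # 34, 34
print(mean(with_outlier), median(with_outlier))  # 195, 35.0
```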
---