
How to Obtain TP, TN, FP, FN with Scikit-Learn

Last Updated : 09 Aug, 2024

Answer: Scikit-Learn's confusion_matrix function offers a straightforward way to obtain True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN) when evaluating a classification model. You can extract these counts directly from your model's predictions.

This article will guide you through obtaining True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) values using Scikit-Learn in Python.

Introduction to Confusion Matrix

A confusion matrix is a table used to describe the performance of a classification model. For a binary classification problem it contains four counts (the layout Scikit-Learn uses is sketched just after this list):

  • True Positives (TP): The number of instances correctly predicted as positive.
  • True Negatives (TN): The number of instances correctly predicted as negative.
  • False Positives (FP): The number of instances incorrectly predicted as positive.
  • False Negatives (FN): The number of instances incorrectly predicted as negative.
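In Scikit-Learn, confusion_matrix(y_true, y_pred) returns a matrix whose rows correspond to the true classes and whose columns correspond to the predicted classes, with the classes sorted in ascending order. For binary labels 0 and 1 this yields the layout shown in the minimal sketch below (illustrative data only, not the article's example):

Python
from sklearn.metrics import confusion_matrix

# Tiny illustrative example: rows are true labels, columns are predicted
# labels, so for binary 0/1 data the matrix is laid out as
# [[TN, FP],
#  [FN, TP]]
cm = confusion_matrix([0, 0, 1, 1], [0, 1, 0, 1])
print(cm)
# [[1 1]
#  [1 1]]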

Obtaining the Four Metrics: TP, TN, FP, FN

To compute these values, we first build a confusion matrix with Scikit-Learn's confusion_matrix function and then unpack its entries. Below is a complete example that demonstrates how to obtain the TP, TN, FP, and FN values:

Python
from sklearn.metrics import confusion_matrix

# True labels
y_true = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]

# Predicted labels
y_pred = [0, 1, 0, 0, 1, 1, 0, 1, 1, 1]

# Compute confusion matrix
cm = confusion_matrix(y_true, y_pred)

# Extract TP, TN, FP, FN
# For binary labels, cm is laid out as [[TN, FP], [FN, TP]],
# so ravel() returns the counts in the order tn, fp, fn, tp
tn, fp, fn, tp = cm.ravel()

print(f"True Positives (TP): {tp}")
print(f"True Negatives (TN): {tn}")
print(f"False Positives (FP): {fp}")
print(f"False Negatives (FN): {fn}")

Output:

True Positives (TP): 4
True Negatives (TN): 3
False Positives (FP): 2
False Negatives (FN): 1
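
Once the four counts are available, other metrics can be derived directly from them. The following sketch uses the tp, fp, and fn values from the example above to compute precision, recall, and F1 score by hand, and cross-checks the results against Scikit-Learn's built-in functions:

Python
from sklearn.metrics import precision_score, recall_score, f1_score

# Counts taken from the example above
tp, tn, fp, fn = 4, 3, 2, 1

# Derive the metrics from the raw counts
precision = tp / (tp + fp)                          # 4 / 6 ≈ 0.667
recall = tp / (tp + fn)                             # 4 / 5 = 0.8
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.727

print(f"Precision: {precision:.3f}")
print(f"Recall:    {recall:.3f}")
print(f"F1 score:  {f1:.3f}")

# Cross-check against Scikit-Learn using the same labels as before
y_true = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1, 0, 1, 1, 1]
print(precision_score(y_true, y_pred))  # ≈ 0.667
print(recall_score(y_true, y_pred))     # 0.8
print(f1_score(y_true, y_pred))         # ≈ 0.727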

Conclusion

Understanding and extracting TP, TN, FP, and FN values is essential for evaluating the performance of classification models. These metrics provide a deeper insight into the model's behavior beyond simple accuracy. By using Scikit-Learn's confusion_matrix function, you can easily obtain these values and use them to calculate other performance metrics such as precision, recall, and F1 score.

