
Performance Measures

Confusion Matrix
 The confusion matrix is a table that summarizes how well the classification model
predicts examples belonging to the various classes. One axis of the confusion matrix
is the label the model predicted; the other axis is the actual label.
 TP (True Positive)
The predicted value is positive and the actual value is positive.
 FP (False Positive)
The predicted value is positive and the actual value is negative.
 FN (False Negative)
The predicted value is negative and the actual value is positive.
 TN (True Negative)
The predicted value is negative and the actual value is negative.
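The four counts above can be sketched in plain Python by comparing predicted and actual labels pairwise (the label lists below are hypothetical, purely for illustration):

```python
def confusion_counts(y_true, y_pred):
    """Tally TP, FP, FN, TN for binary labels (1 = positive, 0 = negative)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

# Hypothetical labels for illustration
y_true = [1, 1, 1, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]
print(confusion_counts(y_true, y_pred))  # (3, 2, 1, 2)
```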
Measure Factor

 Rate is a measure factor derived from the confusion matrix. There are four rates: TPR, FPR, TNR, FNR.
 True Positive Rate (TPR): True Positives / Total Positives
 False Positive Rate (FPR): False Positives / Total Negatives
 False Negative Rate (FNR): False Negatives / Total Positives
 True Negative Rate (TNR): True Negatives / Total Negatives
For better performance, TPR, TNR should be high and FNR, FPR should be low.
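A minimal sketch of the four rates, using the convention that Total Positives = TP + FN and Total Negatives = TN + FP (the counts passed in are hypothetical):

```python
def rates(tp, fp, fn, tn):
    """Compute TPR, FPR, FNR, TNR from confusion-matrix counts."""
    positives = tp + fn  # all actually-positive samples
    negatives = tn + fp  # all actually-negative samples
    return tp / positives, fp / negatives, fn / positives, tn / negatives

# Hypothetical counts for illustration
tpr, fpr, fnr, tnr = rates(tp=3, fp=2, fn=1, tn=2)
print(tpr, fpr, fnr, tnr)  # 0.75 0.5 0.25 0.5
```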
Accuracy

 The number of samples correctly classified out of all the samples present in the test set.
 It is defined as the total number of correctly classified examples divided by the total
number of classified examples. Let's express it in terms of the confusion matrix:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
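A one-function sketch of accuracy in terms of the confusion-matrix counts (the counts below are hypothetical):

```python
def accuracy(tp, fp, fn, tn):
    """Fraction of all samples that were classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts for illustration
print(accuracy(tp=3, fp=2, fn=1, tn=2))  # 0.625
```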
Precision

 The number of samples actually belonging to the positive class out of all
the samples that were predicted to be of the positive class by the model.
 It is defined as the number of correct positive predictions divided by the total
number of positive predictions made by the model:

Precision = TP / (TP + FP)
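Precision only needs the two counts from the "predicted positive" column; a minimal sketch with hypothetical counts:

```python
def precision(tp, fp):
    """Correct positive predictions out of all positive predictions."""
    return tp / (tp + fp)

# Hypothetical counts for illustration
print(precision(tp=3, fp=2))  # 0.6
```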
Recall

 The number of samples predicted correctly to be belonging to the
positive class out of all the samples that actually belong to the positive class.
 Recall is calculated as the number of true positives divided by the total
number of true positives and false negatives:

Recall = TP / (TP + FN)
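Recall, likewise, needs only the counts for the actually-positive samples; a minimal sketch with hypothetical counts:

```python
def recall(tp, fn):
    """True positives out of all actually-positive samples."""
    return tp / (tp + fn)

# Hypothetical counts for illustration
print(recall(tp=3, fn=1))  # 0.75
```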
F1-Score
 F1 score is the harmonic mean of the precision and recall scores obtained for the
positive class.
 F1 score is usually more useful than accuracy, especially if you have an
uneven class distribution.

F1 = 2 × (Precision × Recall) / (Precision + Recall)
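The harmonic mean can be sketched directly from precision and recall values (the inputs below are hypothetical):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Hypothetical precision/recall values for illustration
print(f1_score(0.6, 0.75))  # ≈ 0.667
```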
Specificity

 The number of samples predicted correctly to be in the negative class
out of all the samples in the dataset that actually belong to the negative class:

Specificity = TN / (TN + FP)
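Specificity mirrors recall but on the negative class; a minimal sketch with hypothetical counts:

```python
def specificity(tn, fp):
    """True negatives out of all actually-negative samples."""
    return tn / (tn + fp)

# Hypothetical counts for illustration
print(specificity(tn=2, fp=2))  # 0.5
```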
Sensitivity

 Sensitivity is the proportion of true positive tests out of all patients with a condition. In
other words, it is the ability of a test or instrument to yield a positive result for a subject
who has the disease. The ability to correctly classify a test is essential.
 Sensitivity does not account for individuals who test positive but do not
have the disease (false positives).
 The equation for sensitivity is the following:

Sensitivity = True Positives / (True Positives + False Negatives)
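Note that sensitivity is computed identically to recall (and to TPR); a minimal sketch with hypothetical counts:

```python
def sensitivity(tp, fn):
    """True positives out of all subjects who actually have the condition.
    Identical in form to recall / TPR."""
    return tp / (tp + fn)

# Hypothetical counts for illustration
print(sensitivity(tp=3, fn=1))  # 0.75
```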
