
Guide to AUC ROC Curve in Machine Learning: What Is Specificity?


Introduction
You’ve built your machine learning model – so what’s next? You need to evaluate and validate
how good (or bad) it is, so you can decide whether to implement it. That’s where the AUC ROC
curve comes in. The name might be a mouthful, but it simply means that we are calculating the
“Area Under the Curve” (AUC) of the “Receiver Operating Characteristic” (ROC) curve.
The AUC ROC curve helps us visualize how well our machine learning classifier performs.

What is the AUC-ROC Curve?


An ROC curve, or receiver operating characteristic curve, is a graph that shows how well a
classification model performs. It helps us see how the model makes decisions at different levels
of certainty. The curve has two axes: one for how often the model correctly identifies positive
cases (true positives) and another for how often it mistakenly identifies negative cases as positive
(false positives). By looking at this graph, we can understand how good the model is and choose
the threshold that gives us the right balance between correct and incorrect predictions.
The Receiver Operating Characteristic (ROC) curve is an evaluation metric for binary
classification problems. It is a probability curve that plots the TPR against the FPR at various
threshold values and essentially separates the ‘signal’ from the ‘noise.’ In other words, it
shows the performance of a classification model at all classification thresholds. The Area Under
the Curve (AUC) measures the ability of a binary classifier to distinguish between
classes and is used as a summary of the ROC curve.
The higher the AUC, the better the model’s performance at distinguishing between the positive
and negative classes.
When AUC = 1, the classifier can correctly distinguish between all the Positive and the Negative
class points. If, however, the AUC had been 0, then the classifier would predict all Negatives as
Positives and all Positives as Negatives.

When 0.5 < AUC < 1, there is a high chance that the classifier will be able to distinguish the
positive class values from the negative ones. This is because the classifier detects more True
Positives and True Negatives than False Negatives and False Positives.

When AUC = 0.5, the classifier is not able to distinguish between the Positive and Negative class
points, meaning it predicts either a random class or a constant class for all the data points.
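
As a minimal sketch of how the AUC is computed in practice (assuming scikit-learn is installed; the label and score arrays below are purely hypothetical):

# Computing AUC from true labels and predicted probabilities for the positive class.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 1, 0]                 # hypothetical ground-truth labels
y_scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.3]  # hypothetical predicted probabilities

auc = roc_auc_score(y_true, y_scores)
print(f"AUC = {auc:.2f}")  # values near 1.0 indicate strong separation; ~0.5 means no discriminative power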
Defining the Terms Used in the AUC-ROC Curve
1. AUC (Area Under the Curve): A single metric representing the overall performance of
a binary classification model based on the area under its ROC curve.
2. ROC Curve (Receiver Operating Characteristic Curve): A graphical plot illustrating
the trade-off between True Positive Rate and False Positive Rate at various classification
thresholds.
3. True Positive Rate (Sensitivity): Proportion of actual positives correctly identified by
the model.
4. False Positive Rate: Proportion of actual negatives incorrectly classified as positives by
the model.
5. Specificity (True Negative Rate): Proportion of actual negatives correctly identified by
the model.
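
For reference, these rates can be written in terms of the standard confusion-matrix counts TP, FP, TN, and FN (these are the textbook definitions, spelled out here for convenience):

\mathrm{TPR}\ (\text{Sensitivity}) = \frac{TP}{TP + FN}, \qquad
\mathrm{FPR} = \frac{FP}{FP + TN}, \qquad
\mathrm{TNR}\ (\text{Specificity}) = \frac{TN}{TN + FP} = 1 - \mathrm{FPR}

The AUC is then the area under the curve obtained by plotting TPR against FPR as the threshold varies.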
What are Sensitivity and Specificity?
Sensitivity / True Positive Rate / Recall

Sensitivity tells us what proportion of the positive class got correctly classified.
A simple example would be determining what proportion of the actual sick people were correctly
detected by the model.
False Negative Rate

False Negative Rate (FNR) tells us what proportion of the positive class got incorrectly classified
by the classifier. A higher TPR and a lower FNR are desirable since we want to classify the
positive class correctly.
Specificity / True Negative Rate

Specificity tells us what proportion of the negative class got correctly classified. Taking the same
example as in Sensitivity, Specificity would mean determining the proportion of healthy people
who were correctly identified by the model.

False Positive Rate

FPR tells us what proportion of the negative class got incorrectly classified by the classifier. A
higher TNR and a lower FPR are desirable since we want to classify the negative class correctly.
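
To make the four rates concrete, here is a small sketch that computes them from a confusion matrix (assuming scikit-learn; the actual and predicted label arrays are hypothetical):

from sklearn.metrics import confusion_matrix

y_actual = [1, 0, 1, 1, 0, 0, 1, 0]     # hypothetical actual classes
y_predicted = [1, 0, 0, 1, 0, 1, 1, 0]  # hypothetical predicted classes

# For binary labels, ravel() returns the counts in the order TN, FP, FN, TP
tn, fp, fn, tp = confusion_matrix(y_actual, y_predicted).ravel()

sensitivity = tp / (tp + fn)  # True Positive Rate / Recall
specificity = tn / (tn + fp)  # True Negative Rate
fnr = fn / (tp + fn)          # False Negative Rate = 1 - Sensitivity
fpr = fp / (tn + fp)          # False Positive Rate = 1 - Specificity

print(sensitivity, specificity, fnr, fpr)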
Out of these metrics, Sensitivity and Specificity are perhaps the most important, and we will see
later on how these are used to build an evaluation metric. But before that, let’s understand why
the probability of prediction is better than predicting the target class directly.

Probability of Predictions
A machine learning classification model can be used to naturally predict the data point’s actual
class or predict its probability of belonging to different classes, employing an AUC-ROC curve
for evaluation. The latter gives us more control over the result. We can determine our own
threshold to interpret the result of the classifier, a valuable aspect when considering the nuances
of the AUC-ROC curve. This approach is sometimes more prudent than just building a
completely new model!
Setting different thresholds for classifying data points as the positive class will inevitably
change the Sensitivity and Specificity of the model. And one of these thresholds will probably
give a better result than the others, depending on whether we are aiming to lower the number of
False Negatives or False Positives.
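
As a sketch of what thresholding looks like in code (the probability array is hypothetical; in practice it would come from something like a fitted classifier’s predict_proba output, and the 0.3 cut-off is an arbitrary example):

import numpy as np

# Hypothetical predicted probabilities for the positive class,
# e.g. model.predict_proba(X_test)[:, 1] for a scikit-learn-style classifier.
probs = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.3])

# predict() normally corresponds to a 0.5 threshold; lowering it to 0.3
# trades some Specificity for extra Sensitivity (fewer False Negatives).
threshold = 0.3
y_pred_custom = (probs >= threshold).astype(int)
print(y_pred_custom)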

The metrics change with the changing threshold values. We could generate a different confusion
matrix for every threshold and compare the various metrics that we discussed in the previous
section, but that would not be a prudent thing to do. Instead, we can plot an ROC curve from
these metrics to quickly visualize which threshold is giving us a better result.
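
A minimal sketch of that visualization, assuming scikit-learn and matplotlib and reusing the hypothetical labels and scores from the earlier AUC example:

from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt

y_true = [0, 0, 1, 1, 0, 1, 1, 0]                     # hypothetical labels
y_scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.3]  # hypothetical probabilities

# fpr and tpr are evaluated at every threshold present in the score array
fpr, tpr, thresholds = roc_curve(y_true, y_scores)

plt.plot(fpr, tpr, label=f"AUC = {roc_auc_score(y_true, y_scores):.2f}")
plt.plot([0, 1], [0, 1], linestyle="--", label="Random classifier")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate (Sensitivity)")
plt.legend()
plt.show()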
How Does the AUC-ROC Curve Work?
In an AUC-ROC curve, a higher X-axis value indicates a larger proportion of False Positives
relative to True Negatives, while a higher Y-axis value indicates a larger proportion of True
Positives relative to False Negatives. So the choice of threshold depends on how we want to
balance False Positives and False Negatives. Let’s dig a bit deeper and understand what our ROC
curve would look like for different threshold values and how the Specificity and Sensitivity would vary.

We can try to understand this graph by generating a confusion matrix for each point
corresponding to a threshold and talking about the performance of our classifier:

Point A is where the Sensitivity is the highest and Specificity the lowest. This means all the
Positive class points are classified correctly, and all the Negative class points are classified
incorrectly.
In fact, any point on the blue line corresponds to a situation where the True Positive Rate is equal
to False Positive Rate.
All points above this line correspond to the situation where the proportion of correctly classified
points belonging to the Positive class is greater than the proportion of incorrectly classified
points belonging to the Negative class.

Although Point B has the same Sensitivity as Point A, it has a higher Specificity, meaning the
number of incorrectly classified Negative class points is lower than at the previous threshold.
This indicates that this threshold is better than the previous one.

Between points C and D, the Sensitivity at point C is higher than point D for the same
Specificity. This means, for the same number of incorrectly classified Negative class points, the
classifier predicted a higher number of Positive class points. Therefore, the threshold at point C
is better than point D.
Now, depending on how many incorrectly classified points we want to tolerate for our classifier,
we would choose between point B or C to predict whether you can defeat me in PUBG or not.

Point E is where the Specificity becomes highest, meaning the model produces no False
Positives and correctly classifies all the Negative class points! We would choose this
point if our problem were to give perfect song recommendations to our users.
Going by this logic, can you guess where the point corresponding to a perfect classifier would lie
on the graph? Yes! It would be on the top-left corner of the ROC graph, corresponding to the
coordinate (0, 1) in the Cartesian plane. Here, both the Sensitivity and Specificity would be at their
highest, and the classifier would correctly classify all the Positive and Negative class points.
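
One common (though not the only) way to pick the threshold closest to that top-left corner is Youden’s J statistic, Sensitivity + Specificity − 1, which equals TPR − FPR. A sketch, again assuming scikit-learn and the hypothetical labels and scores used earlier:

import numpy as np
from sklearn.metrics import roc_curve

y_true = [0, 0, 1, 1, 0, 1, 1, 0]                     # hypothetical labels
y_scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.3]  # hypothetical probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_scores)

# Youden's J = Sensitivity + Specificity - 1 = TPR - FPR
j_scores = tpr - fpr
best_threshold = thresholds[np.argmax(j_scores)]
print("Threshold maximizing TPR - FPR:", best_threshold)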
