Recall: How many of the actual positive cases can our model recall? i.e., of 100 total cancer cases, how many can our model recall (predict correctly)? E.g., of 100 cancer cases, our model can recall 30 of them.

Precision: How precise are our model's positive predictions? i.e., of 200 cases predicted as positive, how many are true cancer cases? E.g., of 200 cases predicted as cancer, only 60 are true cancer cases.
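To make the two definitions concrete, here is a minimal Python sketch (the counts are the illustrative numbers from the two examples above, not real data) that computes recall and precision from true-positive, false-negative, and false-positive counts:

```python
def recall(tp, fn):
    """Of all actual positive cases (tp + fn), what fraction did we catch?"""
    return tp / (tp + fn)

def precision(tp, fp):
    """Of all positive predictions (tp + fp), what fraction were truly positive?"""
    return tp / (tp + fp)

# Recall example above: 100 actual cancer cases, 30 caught by the model.
print(recall(tp=30, fn=70))        # 0.3 -> 30% recall

# Precision example above: 200 predicted as cancer, only 60 truly have it.
print(precision(tp=60, fp=140))    # 0.3 -> 30% precision
```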

Let’s explain this with a cancer detection model example, which gave us the following results:

|                | Predicted Negative | Predicted Positive |
|----------------|--------------------|--------------------|
| Negative Cases | TN: 9,760          | FP: 140            |
| Positive Cases | FN: 40             | TP: 60             |

Now, we might ask three questions to evaluate the model:

  1. What percent of your predictions were correct?
    You answer: the “accuracy” was (9,760+60) out of 10,000 = 98.2%
    This is not really a good metric to use here, since our classes are heavily skewed. Better questions to gauge the model are the following.
  2. What percent of the positive(Cancer) cases did we catch?
    You answer: the “recall” was 60 out of 100 = 60%
    i.e., of the 100 cancer cases, our model captured 60 of them.
  3. What percent of positive(Cancer) predictions were correct?
    You answer: the “precision” was 60 out of 200 = 30%, i.e., of the 200 cases for which our model predicted cancer, only 60 really have it; the other 140 are false alarms. (The sketch below reproduces these numbers.)
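Here is a minimal Python sketch that reproduces the three answers above from the confusion-matrix counts of the cancer detection example:

```python
# Confusion-matrix counts from the table above.
tn, fp, fn, tp = 9_760, 140, 40, 60
total = tn + fp + fn + tp            # 10,000 cases in all

accuracy  = (tp + tn) / total        # (60 + 9,760) / 10,000 = 0.982
recall    = tp / (tp + fn)           # 60 / 100 = 0.60
precision = tp / (tp + fp)           # 60 / 200 = 0.30

print(f"accuracy:  {accuracy:.1%}")   # 98.2%
print(f"recall:    {recall:.1%}")     # 60.0%
print(f"precision: {precision:.1%}")  # 30.0%
```

With the per-case labels and predictions as arrays, scikit-learn's accuracy_score, recall_score, and precision_score from sklearn.metrics would give the same numbers.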

 

 
