How do I calculate support when I have true positives, true negatives, false positives and false negatives?


Somehow my evaluation of my binary classifier does not add up. This is the evaluation of my model:

True Positive(TP)  =  75 
False Positive(FP) =  64 
True Negative(TN)  =  47 
False Negative(FN) =  34 
Accuracy of the binary classification = 0.554545 
precision: [0.58024691 0.53956835] 
recall: [0.42342342 0.68807339] 
fscore: [0.48958333 0.60483871] 
support: [111 109] 

Now so far it looks good, but I just realized that it doesn't really add up. As I see it, support should return the total number of true values in each class. Since I have only two classes, that would be 75 + 47 = 122 and not 111 for the true class. I understand that here TP and FN were summed up to get 109, and TN and FP accordingly for 111. Or do I not understand support correctly? Here, for the first class, False Positives were added to True Negatives. That doesn't make sense, does it? How would I interpret this number?

So either I do not understand what support means, or my code is wrong. I looked at the documentation and made sure that the returned values are assigned correctly for the confusion matrix as well as for precision_recall_fscore_support. So please explain what I am doing wrong here:

from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_recall_fscore_support as score

def evaluation(y_test, y_pred):
    # For a binary problem the confusion matrix is [[TN, FP], [FN, TP]]
    cm = confusion_matrix(y_test, y_pred)
    TN, FP, FN, TP = cm.ravel()
    print('True Positive(TP)  = ', TP)
    print('False Positive(FP) = ', FP)
    print('True Negative(TN)  = ', TN)
    print('False Negative(FN) = ', FN)
    accuracy = (TP + TN) / (TP + FP + TN + FN)
    print('Accuracy of the binary classification = {:0.6f}'.format(accuracy))
    precision, recall, fscore, support = score(y_test, y_pred)
    print('precision: {}'.format(precision))
    print('recall: {}'.format(recall))
    print('fscore: {}'.format(fscore))
    print('support: {}'.format(support))

evaluation(y_test, prediction > 0.5)

CodePudding user response:

The support is the number of real cases of each class, i.e. how many samples actually belong to that class in y_test. In your example there are 109 (75 + 34) real positive and 111 (64 + 47) real negative cases, so support: [111 109] lists class 0 (negatives) first and class 1 (positives) second.
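
In other words, support[i] is the row sum of row i of the confusion matrix. A minimal sketch of that arithmetic with the numbers from your output, assuming scikit-learn's documented [[TN, FP], [FN, TP]] layout for a binary confusion matrix:

import numpy as np

# Values taken from the output in the question
TN, FP, FN, TP = 47, 64, 34, 75

cm = np.array([[TN, FP],    # row 0: samples whose true label is 0
               [FN, TP]])   # row 1: samples whose true label is 1

support_class_0 = cm[0].sum()   # TN + FP = 47 + 64 = 111 real negatives
support_class_1 = cm[1].sum()   # FN + TP = 34 + 75 = 109 real positives

print(support_class_0, support_class_1)   # 111 109, matching support: [111 109]

So the first entry of support belongs to class 0 and the second to class 1; your code assigns the values correctly, it is only the interpretation of the first entry that was off.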
