In a confusion matrix, the diagonal entries give the number of correct predictions for each class and the off-diagonal entries are the errors. Consider a toy problem where all 28 examples belonging to the negative class (label 0) were predicted correctly (TN), and out of 8 positive examples (label 1), 6 were predicted correctly (TP). So there were 2 examples from the positive class that the classifier missed.
Accuracy measures overall performance. To get the individual class performance, I should take the diagonal entry for each class and divide it by the total number of examples in that class (the row sum). I applied this formula and the results for the two classes came out the opposite of what I expected. Can somebody please explain what is going on: why did I get 100% for the positive class when two of its examples were missed? Here is what I did:
cmMatrix =
    28     2
     0     6
acc_0 = 100*cmMatrix(1,1)/sum(cmMatrix(1,:))   % returns 93.33
acc_1 = 100*cmMatrix(2,2)/sum(cmMatrix(2,:))   % returns 100
These values seem counter-intuitive. I expect acc_0 to be 100%, since the classifier did not miss any negative examples (28 of 28 on the diagonal), and acc_1 to be 75% (6 of 8 positives), not 100%.
CodePudding user response:
The expected argument order for confusionmat is
cmMatrix = confusionmat(ytrue, ypredicted)
i.e. true labels first, predicted labels second, so that rows of the result correspond to the true classes and columns to the predicted classes. You most likely swapped the arguments, which transposes the matrix: the diagonal is unchanged, but each row sum becomes the number of examples predicted as that class instead of the number that truly belong to it. That is why acc_1 came out as 6/6 = 100% instead of 6/8 = 75%.
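Here is a minimal sketch reproducing the scenario; the label vectors ytrue and ypred are made up to match the counts in the question (28 negatives all classified correctly, 8 positives of which 6 are classified correctly):

ytrue = [zeros(28,1); ones(8,1)];              % true labels: 28 negatives, 8 positives
ypred = [zeros(28,1); ones(6,1); zeros(2,1)];  % predictions: 2 positives missed

cmGood = confusionmat(ytrue, ypred)     % correct order: rows = true class
%   28    0
%    2    6

cmSwapped = confusionmat(ypred, ytrue)  % swapped order: the transpose, as in the question
%   28    2
%    0    6

acc_0 = 100*cmGood(1,1)/sum(cmGood(1,:))   % 100
acc_1 = 100*cmGood(2,2)/sum(cmGood(2,:))   % 75

With the correct argument order the per-class accuracies come out as expected: 100% for the negative class and 75% (6 of 8) for the positive class.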