Imbalanced data set score after SMOTE


Is it correct to use 'accuracy' as a metric for an imbalanced data set after using oversampling methods such as SMOTE, or do we have to use other metrics such as AUROC or other precision-recall related metrics?

CodePudding user response:

You can use accuracy on the resampled data after applying SMOTE, since as far as I know it should no longer be imbalanced. You should still try the other metrics for a more detailed evaluation (classification_report_imbalanced from imbalanced-learn combines several of them).
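A minimal sketch of that idea, assuming imbalanced-learn is installed and using a synthetic dataset purely for illustration (the data, model, and split are my own assumptions, not from the question):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from imblearn.metrics import classification_report_imbalanced

# Synthetic imbalanced data: roughly 95% class 0 vs 5% class 1
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the training split only, then fit a simple model
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)

# One call reports precision, recall, specificity, F1, geometric mean and IBA per class
print(classification_report_imbalanced(y_test, clf.predict(X_test)))
```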

CodePudding user response:

SMOTE and similar imbalance-treatment techniques should only be applied to your training data. When you have a heavily imbalanced data set, say 99% against 1%, accuracy on the TEST set can still reach 99% simply by always predicting the larger class. Therefore, you should definitely switch to another metric. A popular choice is the F1 score, but there is also a balanced version of accuracy; see the scikit-learn documentation on balanced accuracy.
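A small sketch of that trap, using a hypothetical 99%/1% split and a dummy majority-class predictor (the dataset and classifier here are illustrative assumptions): plain accuracy looks excellent while balanced accuracy and F1 reveal that the minority class is never found.

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, balanced_accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Hypothetical 99% / 1% class split, as described above
X, y = make_classification(n_samples=10000, weights=[0.99, 0.01], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# A "model" that always predicts the majority class
majority = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
y_pred = majority.predict(X_test)

print("accuracy:            ", accuracy_score(y_test, y_pred))              # ~0.99
print("balanced accuracy:   ", balanced_accuracy_score(y_test, y_pred))     # 0.5
print("F1 (minority class): ", f1_score(y_test, y_pred, zero_division=0))   # 0.0
```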

As mentioned by @Nocry, applying several evaluation measures may give you a better picture. For example, check how accuracy (the regular variant) and balanced accuracy perform with and without SMOTE; then you should see the difference, as in the sketch below.
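One hedged way to run that comparison, again on an assumed synthetic dataset and with SMOTE applied to the training split only, so the test set stays untouched:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, balanced_accuracy_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=10000, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Train the same model on the raw and on the SMOTE-resampled training data
variants = {
    "without SMOTE": (X_train, y_train),
    "with SMOTE": SMOTE(random_state=0).fit_resample(X_train, y_train),
}

for label, (Xt, yt) in variants.items():
    clf = LogisticRegression(max_iter=1000).fit(Xt, yt)
    y_pred = clf.predict(X_test)
    print(f"{label:14s} accuracy={accuracy_score(y_test, y_pred):.3f}  "
          f"balanced accuracy={balanced_accuracy_score(y_test, y_pred):.3f}")
```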
