Comparison of Evaluation Metrics in Classification Applications with Imbalanced Datasets

Title: Comparison of Evaluation Metrics in Classification Applications with Imbalanced Datasets
Publication Type: Conference Paper
Year of Publication: 2008
Authors: Fatourechi, M., R. K. Ward, S. G. Mason, J. Huggins, A. Schlogl, and G. E. Birch
Conference Name: Machine Learning and Applications, 2008. ICMLA '08. 7th International Conference on
Pagination: 777-782
Date Published: Dec.
Keywords: classification application, classifier testing, evaluation metrics, imbalanced datasets, Kappa coefficient, model selection, pattern classification
Abstract

A new framework is proposed for comparing evaluation metrics in classification applications with imbalanced datasets (i.e., where the probability of one class vastly exceeds that of the others). For model selection as well as for testing the performance of a classifier, this framework finds the most suitable evaluation metric among a number of candidate metrics. We apply this framework to compare two metrics: overall accuracy and the Kappa coefficient. Simulation results demonstrate that the Kappa coefficient is more suitable.

URL: http://dx.doi.org/10.1109/ICMLA.2008.34
DOI: 10.1109/ICMLA.2008.34
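
The abstract's core point, that overall accuracy can be misleading on imbalanced data while the chance-corrected Kappa coefficient is not, can be illustrated with a minimal sketch. The example below is not from the paper; it simply computes both metrics for a hypothetical trivial classifier that always predicts the majority class.

```python
from collections import Counter

def kappa(y_true, y_pred):
    """Cohen's kappa: agreement corrected for chance-level agreement."""
    n = len(y_true)
    labels = set(y_true) | set(y_pred)
    # Observed agreement (overall accuracy).
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Expected agreement by chance, from the label marginals.
    ct, cp = Counter(y_true), Counter(y_pred)
    p_e = sum(ct[c] * cp[c] for c in labels) / (n * n)
    return (p_o - p_e) / (1 - p_e) if p_e != 1 else 0.0

# Hypothetical imbalanced dataset: 95 negatives, 5 positives.
# The trivial classifier predicts the majority class everywhere.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)               # 0.95 -- looks excellent
print(kappa(y_true, y_pred))  # 0.0  -- no better than chance
```

Accuracy rewards the trivial classifier with 95%, while Kappa correctly scores it at zero, which is the behaviour the paper's simulations examine.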
