Accuracy paradox explained

The accuracy paradox is the paradoxical finding that accuracy is not a good metric for evaluating classification models in predictive analytics. This is because a simple model may have a high level of accuracy yet be too crude to be useful. For example, if category A is dominant, being found in 99% of cases, then a model that always predicts category A will have an accuracy of 99% while telling us nothing about the remaining 1% of cases. Precision and recall are better measures in such cases.

The underlying issue is a class imbalance between the positive class and the negative class. The prior probabilities of these classes need to be accounted for in error analysis. Precision and recall help, but precision too can be biased by very unbalanced class priors in the test sets.
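
To make the paradox concrete, here is a minimal Python sketch; the 99%/1% class split and the always-predict-A baseline are illustrative assumptions rather than details from the text. It shows a model scoring 99% accuracy while finding none of the rare cases:

    # Hypothetical data: 1,000 cases, 99% in the dominant category A
    # (label 0) and 1% in the rare category B (label 1).
    actual = [0] * 990 + [1] * 10

    # A trivial model that always predicts the dominant category A.
    predicted = [0] * len(actual)

    # Accuracy: fraction of correct predictions -- 990/1000 = 99%.
    accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)

    # Treat the rare category B as the positive class.
    tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))  # true positives: 0
    fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # false negatives: 10

    # Recall: fraction of actual positives the model finds -- 0/10 = 0.
    recall = tp / (tp + fn)

    print(f"accuracy = {accuracy:.1%}, recall = {recall:.1%}")
    # accuracy = 99.0%, recall = 0.0%

Despite its 99% accuracy, the model never identifies a single instance of the rare class, which is exactly the failure the accuracy metric hides.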

Example

Consider a city of one million people that contains ten terrorists. A profiling system, in which a person who fails the screening is flagged as a terrorist, produces the following confusion matrix:

             Predicted Fail   Predicted Pass        Sum
Actual Fail              10                0         10
Actual Pass             990           999000     999990
Sum                    1000           999000    1000000

Even though the accuracy is (10 + 999000) / 1000000 ≈ 99.9%, 990 of the 1000 positive predictions are incorrect. The precision of 10 / (10 + 990) = 1% reveals the system's poor performance. Because the classes are so unbalanced, a better metric is the F1 score: with a recall of 10 / (10 + 0) = 1, F1 = 2 × (0.01 × 1) / (0.01 + 1) ≈ 2%.
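
The same arithmetic can be checked with a short Python sketch using the counts from the matrix above (variable names are illustrative, with "Fail" treated as the positive class):

    # Counts taken from the confusion matrix above
    # ("Fail" = flagged as a terrorist).
    tp = 10        # terrorists correctly flagged
    fn = 0         # terrorists missed
    fp = 990       # innocent people incorrectly flagged
    tn = 999_000   # innocent people correctly passed

    total = tp + fn + fp + tn                           # 1,000,000 people

    accuracy = (tp + tn) / total                        # 999,010 / 1,000,000
    precision = tp / (tp + fp)                          # 10 / 1,000
    recall = tp / (tp + fn)                             # 10 / 10
    f1 = 2 * precision * recall / (precision + recall)

    print(f"accuracy  = {accuracy:.3%}")   # 99.901%
    print(f"precision = {precision:.1%}")  # 1.0%
    print(f"recall    = {recall:.0%}")     # 100%
    print(f"f1        = {f1:.1%}")         # 2.0%

The F1 score, as the harmonic mean of precision and recall, stays low whenever either component is low, so it exposes the flood of false positives that the 99.9% accuracy conceals.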
