In machine learning, a probabilistic classifier is a classifier that is able to predict, given an observation of an input, a probability distribution over a set of classes, rather than only outputting the most likely class that the observation should belong to. Probabilistic classifiers provide classification that can be useful in its own right[1] or when combining classifiers into ensembles.
Formally, an "ordinary" classifier is some rule, or function, that assigns to a sample x a class label \hat{y}:
\hat{y}=f(x)
The samples come from some set X (e.g., the set of all documents, or the set of all images), while the class labels form a finite set Y defined prior to training.
Probabilistic classifiers generalize this notion of classifiers: instead of functions, they are conditional distributions
\Pr(Y \mid X)
meaning that for a given x \in X, they assign probabilities to all y \in Y (and these probabilities sum to one). "Hard" classification can then be done using the optimal decision rule
\hat{y} = \operatorname{arg\,max}_{y} \Pr(Y = y \mid X)
or, in English, the predicted class is that which has the highest probability.
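As a toy illustration (the class names and probability values below are made up), the decision rule amounts to taking an argmax over the predicted distribution:

```python
import numpy as np

# Hypothetical predicted distribution Pr(Y = y | x) over three classes.
classes = np.array(["cat", "dog", "bird"])
proba = np.array([0.2, 0.7, 0.1])  # sums to one

y_hat = classes[np.argmax(proba)]  # the optimal decision rule above
print(y_hat)  # -> dog
```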
Binary probabilistic classifiers are also called binary regression models in statistics. In econometrics, probabilistic classification in general is called discrete choice.
Some classification models, such as naive Bayes, logistic regression and multilayer perceptrons (when trained under an appropriate loss function) are naturally probabilistic. Other models such as support vector machines are not, but methods exist to turn them into probabilistic classifiers.
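As a minimal scikit-learn sketch of this difference (the dataset here is synthetic), logistic regression exposes class probabilities directly, while a support vector machine natively produces only scores; passing probability=True makes scikit-learn fit a Platt-style calibration on top of those scores:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

# Naturally probabilistic: predict_proba is part of the model itself.
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[:3]))

# An SVM is not; probability=True fits an internal sigmoid calibration.
svm = SVC(probability=True, random_state=0).fit(X, y)
print(svm.predict_proba(X[:3]))
```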
Some models, such as logistic regression, are conditionally trained: they optimize the conditional probability
\Pr(Y \mid X)
directly on a training set (see empirical risk minimization). Other classifiers, such as naive Bayes, are trained generatively: at training time, the class-conditional distribution
\Pr(X \mid Y)
and the class prior
\Pr(Y)
are found, and the conditional distribution
\Pr(Y \mid X)
is derived using Bayes' rule.
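A brief sketch of the two training regimes, using scikit-learn and the Iris data purely as stand-ins:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

# Conditionally (discriminatively) trained: models Pr(Y | X) directly.
disc = LogisticRegression(max_iter=1000).fit(X, y)

# Generatively trained: fits Pr(X | Y) and Pr(Y), then applies Bayes' rule
# at prediction time to obtain Pr(Y | X).
gen = GaussianNB().fit(X, y)

print(disc.predict_proba(X[:2]))
print(gen.predict_proba(X[:2]))
```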
See main article: Calibration (statistics). Not all classification models are naturally probabilistic, and some that are, notably naive Bayes classifiers, decision trees and boosting methods, produce distorted class probability distributions.[3] In the case of decision trees, where \Pr(y \mid x) is the proportion of training samples with label y in the leaf where x ends up, these distortions come about because learning algorithms such as C4.5 or CART explicitly aim to produce homogeneous leaves (giving probabilities close to zero or one, and thus high bias) while using few samples to estimate the relevant proportion (high variance).[4]
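The effect is easy to see with a fully grown tree, whose leaves are pure and whose predicted probabilities on the training data are therefore almost all exactly zero or one (a scikit-learn sketch on synthetic data):

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X, y)  # grown until leaves are pure
print(tree.predict_proba(X[:5]))  # rows are ~[1, 0] or [0, 1], rarely in between
```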
Calibration can be assessed using a calibration plot (also called a reliability diagram).[5] A calibration plot shows the proportion of items in each class for bands of predicted probability or score (such as a distorted probability distribution or the "signed distance to the hyperplane" in a support vector machine). Deviations from the identity function indicate a poorly calibrated classifier for which the predicted probabilities or scores cannot be used as probabilities. In this case one can use a method to turn these scores into properly calibrated class membership probabilities.
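A reliability diagram can be drawn with a few lines of scikit-learn and matplotlib; this sketch uses a synthetic dataset and naive Bayes (chosen because, as noted above, its probabilities are often distorted):

```python
import matplotlib.pyplot as plt
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

proba = GaussianNB().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
frac_pos, mean_pred = calibration_curve(y_te, proba, n_bins=10)

plt.plot(mean_pred, frac_pos, marker="o", label="naive Bayes")
plt.plot([0, 1], [0, 1], "--", label="perfectly calibrated")
plt.xlabel("mean predicted probability")
plt.ylabel("observed fraction of positives")
plt.legend()
plt.show()
```

A well-calibrated classifier traces the diagonal; systematic deviations above or below it are the distortions discussed above.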
For the binary case, a common approach is to apply Platt scaling, which learns a logistic regression model on the scores.[6] An alternative method using isotonic regression[7] is generally superior to Platt's method when sufficient training data is available.[3]
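In scikit-learn, both approaches are available through CalibratedClassifierCV; the following sketch (synthetic data, a linear SVM as the uncalibrated base model) shows Platt scaling (method="sigmoid") next to isotonic regression:

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Platt scaling: a logistic regression fit on the SVM's decision scores.
platt = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=5)
platt.fit(X_tr, y_tr)

# Isotonic regression: a nonparametric, monotone mapping of the scores.
iso = CalibratedClassifierCV(LinearSVC(), method="isotonic", cv=5)
iso.fit(X_tr, y_tr)

print(platt.predict_proba(X_te[:3]))
print(iso.predict_proba(X_te[:3]))
```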
In the multiclass case, one can use a reduction to binary tasks, followed by univariate calibration with one of the algorithms described above, and then apply the pairwise coupling algorithm of Hastie and Tibshirani to recombine the binary estimates.[8]
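A rough sketch of the pairwise coupling iteration, assuming equal weight for every class pair (the full algorithm weights each pair by its number of training samples):

```python
import numpy as np

def pairwise_coupling(R, n_iter=1000, tol=1e-10):
    """Recover class probabilities p from pairwise estimates
    R[i, j] ~ Pr(class i | class i or j), with R[j, i] = 1 - R[i, j]."""
    K = R.shape[0]
    R = R.copy()
    np.fill_diagonal(R, 0.0)      # diagonal entries are unused
    p = np.full(K, 1.0 / K)       # start from the uniform distribution
    for _ in range(n_iter):
        mu = p[:, None] / (p[:, None] + p[None, :])  # model pairwise probs
        np.fill_diagonal(mu, 0.0)
        p_new = p * (R.sum(axis=1) / mu.sum(axis=1))  # multiplicative update
        p_new /= p_new.sum()
        if np.abs(p_new - p).max() < tol:
            return p_new
        p = p_new
    return p

# Sanity check: pairwise probabilities generated from a known p are recovered.
p_true = np.array([0.5, 0.3, 0.2])
R = p_true[:, None] / (p_true[:, None] + p_true[None, :])
print(pairwise_coupling(R))  # ~ [0.5, 0.3, 0.2]
```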
Commonly used evaluation metrics that compare the predicted probability to observed outcomes include log loss, Brier score, and a variety of calibration errors. The former is also used as a loss function in the training of logistic models.
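Both are one-liners in scikit-learn; the numbers below are invented for illustration:

```python
from sklearn.metrics import brier_score_loss, log_loss

y_true = [0, 1, 1, 0]
y_prob = [0.1, 0.8, 0.6, 0.3]  # predicted Pr(Y = 1); values are made up

print(log_loss(y_true, y_prob))          # lower is better
print(brier_score_loss(y_true, y_prob))  # mean squared error of the probabilities
```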
Calibration error metrics aim to quantify the extent to which a probabilistic classifier's outputs are well calibrated. As Philip Dawid put it, "a forecaster is well-calibrated if, for example, of those events to which he assigns a probability 30 percent, the long-run proportion that actually occurs turns out to be 30 percent".[9] Foundational work in this domain is the Expected Calibration Error (ECE) metric.[10] More recent works propose variants of ECE that address its limitations when classifier scores concentrate on a narrow subset of the interval [0, 1], including the Adaptive Calibration Error (ACE)[11] and the Test-based Calibration Error (TCE).[12]
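A minimal sketch of the binary-case ECE under one common convention (equal-width bins, the gap measured between mean predicted probability and observed positive rate, bins weighted by their share of samples):

```python
import numpy as np

def expected_calibration_error(y_true, y_prob, n_bins=10):
    # Weighted average, over equal-width probability bins, of
    # |mean predicted probability - observed positive rate|.
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    # Map each prediction to a bin; clip so y_prob == 1.0 lands in the last bin.
    bin_ids = np.minimum((y_prob * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            gap = abs(y_prob[mask].mean() - y_true[mask].mean())
            ece += mask.mean() * gap  # weight the bin by its share of samples
    return ece

print(expected_calibration_error([0, 1, 1, 0, 1], [0.2, 0.9, 0.7, 0.4, 0.8]))
```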
A method used to assign scores to pairs of predicted class probabilities
p_\ell(x), \ell = 1, \ldots, K
and actual discrete outcomes, so that different predictive methods can be compared, is called a scoring rule.