Confusion of the inverse, also called the conditional probability fallacy or the inverse fallacy, is a logical fallacy in which a conditional probability is equated with its inverse; that is, given two events A and B, the probability of A happening given that B has happened is assumed to be about the same as the probability of B given A, when there is in fact no basis for this assumption.[1][2] More formally, P(A|B) is assumed to be approximately equal to P(B|A).
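The two quantities are related by Bayes' theorem; written out, the relation makes explicit that in general they coincide only when the marginal probabilities P(A) and P(B) are equal:

P(A|B) = \frac{P(B|A)\,P(A)}{P(B)}.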
| Relative size (%) | Malignant | Benign | Total |
|---|---|---|---|
| Test positive | 0.8 (true positive) | 9.9 (false positive) | 10.7 |
| Test negative | 0.2 (false negative) | 89.1 (true negative) | 89.3 |
| Total | 1 | 99 | 100 |
From the table above, the correct probability of malignancy given a positive test result is approximately 7.5%, derived via Bayes' theorem:
\begin{align}
P(malignant|positive) &= \frac{P(positive|malignant)\,P(malignant)}{P(positive|malignant)\,P(malignant) + P(positive|benign)\,P(benign)} \\[8pt]
&= \frac{0.80 \cdot 0.01}{0.80 \cdot 0.01 + 0.10 \cdot 0.99} \approx 0.075
\end{align}
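As a check on the arithmetic, the same calculation can be written as a short Python sketch; the variable names and printed output are illustrative only, not taken from the source.

```python
# Minimal sketch of the Bayes' theorem calculation for the malignancy example,
# using the prevalence and test characteristics from the table above.
p_malignant = 0.01            # P(malignant): 1% of the group
p_pos_given_malignant = 0.80  # P(positive | malignant), the test's sensitivity
p_pos_given_benign = 0.10     # P(positive | benign), the false-positive rate

# Law of total probability: P(positive)
p_positive = (p_pos_given_malignant * p_malignant
              + p_pos_given_benign * (1 - p_malignant))

# Bayes' theorem: P(malignant | positive)
p_malignant_given_positive = p_pos_given_malignant * p_malignant / p_positive

print(f"P(positive) = {p_positive:.3f}")                              # 0.107
print(f"P(malignant | positive) = {p_malignant_given_positive:.3f}")  # ~0.075
```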
Other examples of confusion include:
For other errors in conditional probability, see the Monty Hall problem and the base rate fallacy. Compare to illicit conversion.
| Relative size (%) | Ill | Well | Total |
|---|---|---|---|
| Test positive | 0.99 (true positive) | 0.99 (false positive) | 1.98 |
| Test negative | 0.01 (false negative) | 98.01 (true negative) | 98.02 |
| Total | 1 | 99 | 100 |
The magnitude of this problem (the large number of false positives produced when screening for a rare condition) is best understood in terms of conditional probabilities.
Suppose 1% of the group suffer from the disease, and the rest are well. Choosing an individual at random,
P(ill) = 1\% = 0.01 \quad and \quad P(well) = 99\% = 0.99.
Suppose that when the screening test is applied to a person not having the disease, there is a 1% chance of getting a false positive result (and hence 99% chance of getting a true negative result, a number known as the specificity of the test), i.e.
P(positive|well) = 1\%, \quad and \quad P(negative|well) = 99\%.
Finally, suppose that when the test is applied to a person having the disease, there is a 1% chance of a false negative result (and 99% chance of getting a true positive result, known as the sensitivity of the test), i.e.
P(negative|ill) = 1\% \quad and \quad P(positive|ill) = 99\%.
The fraction of individuals in the whole group who are well and test negative (true negative):
P(well \cap negative) = P(well) \times P(negative|well) = 99\% \times 99\% = 98.01\%.
The fraction of individuals in the whole group who are ill and test positive (true positive):
P(ill \cap positive) = P(ill) \times P(positive|ill) = 1\% \times 99\% = 0.99\%.
The fraction of individuals in the whole group who have false positive results:
P(well \cap positive) = P(well) \times P(positive|well) = 99\% \times 1\% = 0.99\%.
The fraction of individuals in the whole group who have false negative results:
P(ill \cap negative) = P(ill) \times P(negative|ill) = 1\% \times 1\% = 0.01\%.
Furthermore, the fraction of individuals in the whole group who test positive:
\begin{align}
P(positive) &= P(well \cap positive) + P(ill \cap positive) \\
&= 0.99\% + 0.99\% = 1.98\%.
\end{align}
Finally, the probability that an individual actually has the disease, given that the test result is positive:
P(ill|positive) = \frac{P(ill \cap positive)}{P(positive)} = \frac{0.99\%}{1.98\%} = 50\%.
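The whole chain of calculations for this screening example can be reproduced with a few lines of Python; as in the earlier sketch, the variable names are illustrative assumptions rather than anything from the source.

```python
# Minimal sketch of the screening example: 1% prevalence, 99% sensitivity,
# and 99% specificity, as assumed above.
p_ill = 0.01                              # P(ill)
p_well = 1 - p_ill                        # P(well)
p_pos_given_ill = 0.99                    # P(positive | ill), sensitivity
p_neg_given_well = 0.99                   # P(negative | well), specificity
p_pos_given_well = 1 - p_neg_given_well   # P(positive | well), false-positive rate

# Joint probabilities (fractions of the whole group)
p_true_positive = p_ill * p_pos_given_ill      # P(ill and positive)  = 0.0099
p_false_positive = p_well * p_pos_given_well   # P(well and positive) = 0.0099

# Law of total probability, then Bayes' theorem
p_positive = p_true_positive + p_false_positive   # 0.0198
p_ill_given_positive = p_true_positive / p_positive

print(f"P(positive) = {p_positive:.4f}")                  # 0.0198
print(f"P(ill | positive) = {p_ill_given_positive:.2f}")  # 0.50
```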
This example makes the difference between the two conditional probabilities concrete: P(positive | ill), which with the assumed probabilities is 99%, is the probability that an individual who has the disease tests positive, whereas P(ill | positive), which is 50%, is the probability that an individual who tests positive actually has the disease. Thus, with the probabilities chosen in this example, roughly as many individuals receive the benefits of early treatment as are distressed by false positives; these positive and negative effects can then be weighed in deciding whether to carry out the screening or, if possible, whether to adjust the test criteria to decrease the number of false positives (possibly at the expense of more false negatives).
1. Plous, Scott (1993). The Psychology of Judgment and Decision Making. pp. 131–134. ISBN 978-0-07-050477-6.
2. Eddy, David M. (1982). "Probabilistic reasoning in clinical medicine: Problems and opportunities". In Kahneman, D.; Slovic, P.; Tversky, A. (eds.), Judgment under Uncertainty: Heuristics and Biases. New York: Cambridge University Press. pp. 249–267. ISBN 0-521-24064-6. Description simplified as in .
3. Hastie, Reid; Dawes, Robyn (2001). Rational Choice in an Uncertain World. pp. 122–123. ISBN 978-0-7619-2275-9.