In probability theory, conditional probability is a measure of the probability of an event occurring, given that another event (by assumption, presumption, assertion or evidence) is already known to have occurred.[1] This particular method relies on event A occurring with some sort of relationship with another event B. In this situation, the event A can be analyzed by a conditional probability with respect to B. If the event of interest is A and the event B is known or assumed to have occurred, "the conditional probability of A given B", or "the probability of A under the condition B", is usually written as P(A\mid B)[2] or occasionally P_B(A). This can also be understood as the fraction of the probability of B that intersects with A, or the ratio of the probability of both events happening to the probability of the "given" event happening:
P(A\mid B) = \frac{P(A\cap B)}{P(B)}
For example, the probability that any given person has a cough on any given day may be only 5%. But if we know or assume that the person is sick, then they are much more likely to be coughing. For example, the conditional probability that someone unwell (sick) is coughing might be 75%, in which case we would have P(Cough) = 5% and P(Cough\mid Sick) = 75%. Although there is a relationship between being sick and coughing in this example, such a relationship or dependence between A and B is not necessary, nor do the two events have to occur simultaneously.
P(A\mid B) may or may not be equal to P(A), i.e., the unconditional probability or absolute probability of A. If P(A\mid B) = P(A), then events A and B are said to be independent: in such a case, knowledge about either event does not alter the likelihood of the other. P(A\mid B) (the conditional probability of A given B) typically differs from P(B\mid A). For example, if a person has dengue fever, the person might have a 90% chance of testing positive for the disease. In this case, what is being measured is that if event B (having dengue) has occurred, the probability of A (testing positive) given that B occurred is 90%, i.e., P(A\mid B) = 90%. Alternatively, if a person tests positive for dengue fever, they may have only a 15% chance of actually having this rare disease, due to a high false positive rate. In this case, the probability of the event B (having dengue) given that the event A (testing positive) has occurred is 15%, i.e., P(B\mid A) = 15%. It should be apparent now that falsely equating the two probabilities can lead to various errors of reasoning, commonly seen in base rate fallacies.
While conditional probabilities can provide extremely useful information, the information supplied or at hand is often limited. Therefore, it can be useful to reverse or convert a conditional probability using Bayes' theorem:
P(A\mid B) = \frac{P(B\mid A)\,P(A)}{P(B)}
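As an illustrative sketch, the dengue example above can be reproduced with Bayes' theorem. The 90% sensitivity comes from the text; the 1% prevalence and 5% false positive rate are hypothetical values chosen only so that the reversed probability lands near 15%.

```python
# Hedged sketch of Bayes' theorem for the dengue example.
# Assumed values: 1% prevalence and 5% false positive rate (not from the text);
# the 90% sensitivity matches the example above.
p_dengue = 0.01             # P(B): prevalence of dengue (assumed)
p_pos_given_dengue = 0.90   # P(A | B): chance of testing positive when infected
p_pos_given_healthy = 0.05  # P(A | not B): false positive rate (assumed)

# Law of total probability: P(A) = P(A|B)P(B) + P(A|not B)P(not B)
p_pos = p_pos_given_dengue * p_dengue + p_pos_given_healthy * (1 - p_dengue)

# Bayes' theorem: P(B|A) = P(A|B) P(B) / P(A)
p_dengue_given_pos = p_pos_given_dengue * p_dengue / p_pos
print(f"P(dengue | positive) = {p_dengue_given_pos:.3f}")   # about 0.154
```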
Given two events A and B from the sigma-field of a probability space, with the unconditional probability of B being greater than zero (i.e., P(B) > 0), the conditional probability of A given B (P(A\mid B)) is the probability of A occurring if B has or is assumed to have occurred. It can be found by the quotient of the probability of the joint intersection of events A and B (P(A\cap B)) and the probability of B:

P(A\mid B) = \frac{P(A\cap B)}{P(B)}
For a sample space consisting of equally likely outcomes, the probability of the event A is understood as the fraction of the number of outcomes in A to the number of all outcomes in the sample space. Then, this equation is understood as the fraction of the set A\cap B to the set B. Note that the above equation is a definition, not just a theoretical result. We denote the quantity \frac{P(A\cap B)}{P(B)} as P(A\mid B) and call it the "conditional probability of A given B".
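As a minimal sketch of this counting interpretation, the conditional probability below is computed by counting equally likely outcomes for a single fair die; the specific events are hypothetical choices for illustration.

```python
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}   # one fair six-sided die
A = {2, 4, 6}                       # hypothetical event: "the roll is even"
B = {4, 5, 6}                       # hypothetical event: "the roll is greater than 3"

p_B = Fraction(len(B), len(sample_space))
p_A_and_B = Fraction(len(A & B), len(sample_space))

# P(A | B) = P(A ∩ B) / P(B): the fraction of B's outcomes that also lie in A
p_A_given_B = p_A_and_B / p_B
print(p_A_given_B)   # 2/3
```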
Some authors, such as de Finetti, prefer to introduce conditional probability as an axiom of probability:
P(A\capB)=P(A\midB)P(B)
This equation for a conditional probability, although mathematically equivalent, may be intuitively easier to understand. It can be interpreted as "the probability of B occurring, multiplied by the probability of A occurring provided that B has occurred, is equal to the probability of A and B occurring together, although not necessarily at the same time". Additionally, this may be preferred philosophically; under major probability interpretations, such as the subjective theory, conditional probability is considered a primitive entity. Moreover, this "multiplication rule" can be practically useful in computing the probability of A\cap B, and it pairs naturally with the formula for the probability of a union:

P(A\cup B) = P(A) + P(B) - P(A\cap B)

Thus the equations can be combined to find a new representation of the intersection:

P(A\cap B) = P(A) + P(B) - P(A\cup B) = P(A\mid B)\,P(B)

and of the union:

P(A\cup B) = P(A) + P(B) - P(A\mid B)\,P(B)
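The multiplication rule and the union representation above can be checked numerically; the sketch below uses hypothetical events on a single fair die.

```python
from fractions import Fraction

outcomes = set(range(1, 7))   # one fair die
A = {2, 4, 6}                 # hypothetical: even roll
B = {1, 2, 3}                 # hypothetical: roll at most 3

def p(event):
    return Fraction(len(event & outcomes), len(outcomes))

p_A_given_B = p(A & B) / p(B)

# Multiplication rule: P(A ∩ B) = P(A | B) P(B)
assert p(A & B) == p_A_given_B * p(B)
# Union via conditional probability: P(A ∪ B) = P(A) + P(B) - P(A | B) P(B)
assert p(A | B) == p(A) + p(B) - p_A_given_B * p(B)
```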
Conditional probability can be defined as the probability of a conditional event A_B. The Goodman–Nguyen–Van Fraassen conditional event can be defined as:

A_B = \bigcup_i \left( \bigcap_{j<i} \overline{B}_j,\, A_i B_i \right),

where A_i and B_i represent states or elements of A or B.

It can be shown that

P(A_B) = \frac{P(A\cap B)}{P(B)},
which meets the Kolmogorov definition of conditional probability.
If P(B) = 0, then according to the definition, P(A\mid B) is undefined.
The case of greatest interest is that of a random variable Y, conditioned on a continuous random variable X resulting in a particular outcome x. The event B = \{X = x\} has probability zero and, as such, cannot be conditioned on.
Instead of conditioning on X being exactly x, we could condition on it being closer than distance \epsilon away from x. The event B = \{x-\epsilon < X < x+\epsilon\} has nonzero probability and can therefore be conditioned on.
For example, if two continuous random variables X and Y have a joint density f_{X,Y}(x,y), then by L'Hôpital's rule and the Leibniz integral rule, upon differentiation with respect to \epsilon:

\begin{aligned}
\lim_{\epsilon \to 0} P(Y \in U \mid x_0 - \epsilon < X < x_0 + \epsilon)
&= \lim_{\epsilon \to 0} \frac{\int_{x_0-\epsilon}^{x_0+\epsilon} \int_U f_{X,Y}(x,y)\,dy\,dx}{\int_{x_0-\epsilon}^{x_0+\epsilon} \int_{\mathbb{R}} f_{X,Y}(x,y)\,dy\,dx} \\
&= \frac{\int_U f_{X,Y}(x_0,y)\,dy}{\int_{\mathbb{R}} f_{X,Y}(x_0,y)\,dy}.
\end{aligned}

The resulting limit is the conditional probability distribution of Y given X and exists when the denominator, the probability density f_X(x_0), is strictly positive.
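The limiting argument can be illustrated numerically. The Monte Carlo sketch below uses a standard bivariate normal pair with correlation 0.8, the point x_0 = 1 and the interval U = (0.5, 1.5), all of which are hypothetical choices; the exact value comes from the known conditional distribution of a bivariate normal.

```python
import numpy as np
from scipy.stats import norm

# Condition a standard bivariate normal (correlation rho) on X lying in a
# shrinking window around x0 and compare with the exact conditional law
# Y | X = x0 ~ N(rho * x0, 1 - rho**2). All numbers are illustrative.
rng = np.random.default_rng(0)
rho, x0 = 0.8, 1.0
n = 1_000_000

x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

u_low, u_high = 0.5, 1.5                  # the event {Y in U}, U = (0.5, 1.5)
for eps in (0.5, 0.1, 0.02):
    window = np.abs(x - x0) < eps
    estimate = np.mean((y[window] > u_low) & (y[window] < u_high))
    print(f"eps={eps:<4}  P(Y in U | |X - x0| < eps) ≈ {estimate:.3f}")

exact = norm.cdf(u_high, loc=rho * x0, scale=np.sqrt(1 - rho**2)) \
      - norm.cdf(u_low, loc=rho * x0, scale=np.sqrt(1 - rho**2))
print(f"exact limit = {exact:.3f}")       # about 0.570
```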
It is tempting to define the undefined probability P(A\mid X = x) using this limit, but this cannot be done in a consistent manner. In particular, it is possible to find random variables X and W and values x, w such that the events \{X = x\} and \{W = w\} are identical but the resulting limits are not:

\lim_{\epsilon \to 0} P(A\mid x-\epsilon \le X \le x+\epsilon) \ne \lim_{\epsilon \to 0} P(A\mid w-\epsilon \le W \le w+\epsilon).
See also: Conditional probability distribution, Conditional expectation and Regular conditional probability. Let X be a discrete random variable and its possible outcomes denoted V. For example, if X represents the value of a rolled die, then V is the set \{1, 2, 3, 4, 5, 6\}.
For a value x in V and an event A, the conditional probability is given by P(A\mid X = x). Writing

c(x, A) = P(A\mid X = x)

for short, we see that it is a function of two variables, x and A.

For a fixed A, we can form the random variable Y = c(X, A). It represents an outcome of P(A\mid X = x) whenever a value x of X is observed.

The conditional probability of A given X can thus be treated as a random variable Y with outcomes in the interval [0, 1].
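As a sketch of this construction, the code below computes c(x, A) for two fair dice, where X is the value of the first die and A is the hypothetical event "the sum of the two dice is at least 10"; the map Y = c(X, A) then assigns one conditional probability to each observable value of X.

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))    # equally likely pairs (d1, d2)
A = {o for o in outcomes if sum(o) >= 10}          # hypothetical event

def c(x, event):
    """c(x, A) = P(A | X = x), by counting equally likely outcomes with d1 = x."""
    given = [o for o in outcomes if o[0] == x]
    return Fraction(sum(o in event for o in given), len(given))

# For a fixed A, Y = c(X, A) is itself a random variable, one value per outcome of X.
Y = {x: c(x, A) for x in range(1, 7)}
for x, y in Y.items():
    print(x, y)   # x = 1..3 give 0; x = 4, 5, 6 give 1/6, 1/3, 1/2
```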
The partial conditional probability P(A\mid B_1 \equiv b_1, \ldots, B_m \equiv b_m) is about the probability of event A given that each of the condition events B_i has occurred to a degree b_i (degree of belief, degree of experience) that might be different from 100%. Frequentistically, partial conditional probability makes sense if the conditions are tested in experiment repetitions of appropriate length n. Such n-bounded partial conditional probability can be defined as the conditionally expected average occurrence of event A in testbeds of length n that adhere to all of the probability specifications B_i \equiv b_i, i.e.:

P^n(A\mid B_1 \equiv b_1, \ldots, B_m \equiv b_m) = \operatorname{E}\left(\overline{A}^n \mid \overline{B}_1^n = b_1, \ldots, \overline{B}_m^n = b_m\right)
Based on that, partial conditional probability can be defined as
P(A\mid B_1 \equiv b_1, \ldots, B_m \equiv b_m) = \lim_{n\to\infty} P^n(A\mid B_1 \equiv b_1, \ldots, B_m \equiv b_m),

where b_i n \in \mathbb{N}.
Jeffrey conditionalization[9] is a special case of partial conditional probability, in which the condition events must form a partition:
P(A\mid B_1 \equiv b_1, \ldots, B_m \equiv b_m) = \sum_{i=1}^{m} b_i\, P(A\mid B_i)
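A minimal sketch of Jeffrey conditionalization for a single fair die; the partition B_1, B_2, the event A and the revised weights b_1, b_2 are hypothetical values chosen only for illustration.

```python
from fractions import Fraction

outcomes = set(range(1, 7))
A = {2, 4, 6}                            # hypothetical event: even roll
partition = [{1, 2, 3}, {4, 5, 6}]       # hypothetical partition B1, B2
b = [Fraction(7, 10), Fraction(3, 10)]   # revised probabilities assigned to B1, B2

def p_given(event, cond):
    """P(event | cond) for equally likely outcomes."""
    return Fraction(len(event & cond), len(cond))

# Jeffrey conditionalization: P(A | B1 ≡ b1, ..., Bm ≡ bm) = Σ_i b_i P(A | B_i)
p_A_revised = sum(b_i * p_given(A, B_i) for b_i, B_i in zip(b, partition))
print(p_A_revised)   # 7/10 * 1/3 + 3/10 * 2/3 = 13/30
```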
Suppose that somebody secretly rolls two fair six-sided dice, and we wish to compute the probability that the face-up value of the first one is 2, given the information that their sum is no greater than 5.
Probability that D1 = 2
Table 1 shows the sample space of 36 combinations of rolled values of the two dice, each of which occurs with probability 1/36, with the numbers displayed in the body of the table being D1 + D2.

D1 = 2 in exactly 6 of the 36 outcomes; thus P(D1 = 2) = 6/36 = 1/6:
D1 \ D2 | 1 | 2 | 3 | 4 | 5 | 6
---|---|---|---|---|---|---
1 | 2 | 3 | 4 | 5 | 6 | 7
2 | 3 | 4 | 5 | 6 | 7 | 8
3 | 4 | 5 | 6 | 7 | 8 | 9
4 | 5 | 6 | 7 | 8 | 9 | 10
5 | 6 | 7 | 8 | 9 | 10 | 11
6 | 7 | 8 | 9 | 10 | 11 | 12
Probability that D1 + D2 ≤ 5
Table 2 shows that D1 + D2 ≤ 5 for exactly 10 of the 36 outcomes; thus P(D1 + D2 ≤ 5) = 10/36 = 5/18:
D1 \ D2 | 1 | 2 | 3 | 4 | 5 | 6
---|---|---|---|---|---|---
1 | 2 | 3 | 4 | 5 | 6 | 7
2 | 3 | 4 | 5 | 6 | 7 | 8
3 | 4 | 5 | 6 | 7 | 8 | 9
4 | 5 | 6 | 7 | 8 | 9 | 10
5 | 6 | 7 | 8 | 9 | 10 | 11
6 | 7 | 8 | 9 | 10 | 11 | 12
Probability that D1 = 2 given that D1 + D2 ≤ 5
Table 3 shows that for 3 of these 10 outcomes, D1 = 2.
Thus, the conditional probability P(D1 = 2 | D1 + D2 ≤ 5) = 3/10 = 0.3:
D1 \ D2 | 1 | 2 | 3 | 4 | 5 | 6
---|---|---|---|---|---|---
1 | 2 | 3 | 4 | 5 | 6 | 7
2 | 3 | 4 | 5 | 6 | 7 | 8
3 | 4 | 5 | 6 | 7 | 8 | 9
4 | 5 | 6 | 7 | 8 | 9 | 10
5 | 6 | 7 | 8 | 9 | 10 | 11
6 | 7 | 8 | 9 | 10 | 11 | 12
Here, in the earlier notation for the definition of conditional probability, the conditioning event B is that D1 + D2 ≤ 5, and the event A is D1 = 2. We have
P(A\mid B) = \tfrac{P(A\cap B)}{P(B)} = \tfrac{3/36}{10/36} = \tfrac{3}{10}.
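The same result can be obtained by enumerating the 36 equally likely outcomes, as a quick check:

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # all (D1, D2) pairs
B = [o for o in outcomes if o[0] + o[1] <= 5]     # conditioning event: D1 + D2 <= 5
A_and_B = [o for o in B if o[0] == 2]             # D1 = 2 within B

p_B = Fraction(len(B), len(outcomes))             # 10/36
p_A_and_B = Fraction(len(A_and_B), len(outcomes)) # 3/36
print(p_A_and_B / p_B)                            # 3/10
```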
In statistical inference, the conditional probability is an update of the probability of an event based on new information.[10] The new information can be incorporated as follows: the occurrence of the event A, knowing that event B has or will have occurred, means the occurrence of A as it is restricted to B, that is, A\cap B. The probability of A knowing that B has occurred is then the probability of A\cap B relative to P(B), the probability that B has occurred; this gives P(A\mid B) = \frac{P(A\cap B)}{P(B)} whenever P(B) > 0, and 0 otherwise.
This approach results in a probability measure that is consistent with the original probability measure and satisfies all the Kolmogorov axioms. This conditional probability measure also could have resulted by assuming that the relative magnitude of the probability of A with respect to X will be preserved with respect to B (cf. a Formal Derivation below).
The wording "evidence" or "information" is generally used in the Bayesian interpretation of probability. The conditioning event is interpreted as evidence for the conditioned event. That is, P(A) is the probability of A before accounting for evidence E, and P(A|E) is the probability of A after having accounted for evidence E or after having updated P(A). This is consistent with the frequentist interpretation, which is the first definition given above.
When Morse code is transmitted, there is a certain probability that the "dot" or "dash" that was received is erroneous, typically because of interference in the transmission of the message. It is therefore important to consider, when a "dot" is received, the probability that a "dot" was in fact sent. This is represented by:
P(\text{dot sent} \mid \text{dot received}) = P(\text{dot received} \mid \text{dot sent})\,\frac{P(\text{dot sent})}{P(\text{dot received})}.
If the ratio of dots to dashes sent is 3:4, then P(\text{dot sent}) = \tfrac{3}{7} and P(\text{dash sent}) = \tfrac{4}{7}. Assuming further that the probability that a dot or dash is received correctly is \tfrac{9}{10} (and received incorrectly \tfrac{1}{10}), P(\text{dot received}) can be computed by the law of total probability:
P(\text{dot received}) = P(\text{dot received} \cap \text{dot sent}) + P(\text{dot received} \cap \text{dash sent})

P(\text{dot received}) = P(\text{dot received} \mid \text{dot sent})\,P(\text{dot sent}) + P(\text{dot received} \mid \text{dash sent})\,P(\text{dash sent})

P(\text{dot received}) = \frac{9}{10} \times \frac{3}{7} + \frac{1}{10} \times \frac{4}{7} = \frac{31}{70}
Now, P(\text{dot sent} \mid \text{dot received}) can be calculated:

P(\text{dot sent} \mid \text{dot received}) = P(\text{dot received} \mid \text{dot sent})\,\frac{P(\text{dot sent})}{P(\text{dot received})} = \frac{9}{10} \times \frac{3/7}{31/70} = \frac{27}{31}
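The whole calculation can be checked with exact rational arithmetic:

```python
from fractions import Fraction

p_dot_sent = Fraction(3, 7)
p_dash_sent = Fraction(4, 7)
p_correct = Fraction(9, 10)    # a sent symbol is received as itself
p_flip = Fraction(1, 10)       # a sent symbol is received as the other symbol

# Law of total probability: P(dot received)
p_dot_received = p_correct * p_dot_sent + p_flip * p_dash_sent   # 31/70

# Bayes' theorem: P(dot sent | dot received)
print(p_correct * p_dot_sent / p_dot_received)   # 27/31
```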
See main article: Independence (probability theory).
Events A and B are defined to be statistically independent if the probability of the intersection of A and B is equal to the product of the probabilities of A and B:
P(A\capB)=P(A)P(B).
If P(B) is not zero, then this is equivalent to the statement that
P(A\midB)=P(A).
Similarly, if P(A) is not zero, then
P(B\midA)=P(B)
is also equivalent. Although the derived forms may seem more intuitive, they are not the preferred definition as the conditional probabilities may be undefined, and the preferred definition is symmetrical in A and B. Independence does not refer to a disjoint event.[12]
Given an event pair A and B and a further event C, the pair is defined to be conditionally independent given C if the following product holds true:[13]
P(AB\midC)=P(A\midC)P(B\midC)
This theorem could be useful in applications where multiple independent events are being observed.
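The sketch below checks both the independence and the conditional-independence conditions for two fair dice; the events A, B and C are hypothetical illustrations.

```python
from fractions import Fraction
from itertools import product

outcomes = set(product(range(1, 7), repeat=2))    # two fair dice

def p(event, given=None):
    given = outcomes if given is None else given
    return Fraction(len(event & given), len(given))

A = {o for o in outcomes if o[0] % 2 == 0}        # first die is even
B = {o for o in outcomes if o[1] >= 5}            # second die is at least 5
C = {o for o in outcomes if sum(o) % 2 == 0}      # the sum is even

# Independence: P(A ∩ B) = P(A) P(B)
print(p(A & B) == p(A) * p(B))                    # True
# Conditional independence given C: P(A ∩ B | C) = P(A | C) P(B | C)
print(p(A & B, C) == p(A, C) * p(B, C))           # True for these events
```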
Independent events vs. mutually exclusive events
The concepts of mutually independent events and mutually exclusive events are separate and distinct. The following table contrasts results for the two cases (provided that the probability of the conditioning event is not zero).
 | If A and B are independent | If A and B are mutually exclusive
---|---|---
P(A\mid B) = | P(A) | 0
P(B\mid A) = | P(B) | 0
P(A\cap B) = | P(A)P(B) | 0
These fallacies should not be confused with Robert K. Shope's 1978 "conditional fallacy", which deals with counterfactual examples that beg the question.
See main article: Confusion of the inverse.