In probability theory and statistics, a sequence of independent Bernoulli trials with probability 1/2 of success on each trial is metaphorically called a fair coin. One for which the probability is not 1/2 is called a biased or unfair coin. In theoretical studies, the assumption that a coin is fair is often made by referring to an ideal coin.
John Edmund Kerrich performed experiments in coin flipping and found that a coin made from a wooden disk about the size of a crown and coated on one side with lead landed heads (wooden side up) 679 times out of 1000.[1] In this experiment the coin was tossed by balancing it on the forefinger, flipping it with the thumb so that it spun through the air for about a foot before landing on a flat cloth spread over a table. Edwin Thompson Jaynes claimed that when a coin is caught in the hand, instead of being allowed to bounce, the physical bias in the coin is insignificant compared to the method of the toss, and that with sufficient practice a coin can be made to land heads 100% of the time.[2] The problem of checking whether a coin is fair is a well-established pedagogical example in teaching statistics.
Formally, a fair coin may be modelled by a probability space $(\Omega, \mathcal{F}, P)$, where the sample space is $\Omega = \{H, T\}$, the event space is the power set $\mathcal{F} = 2^\Omega = \{\{\}, \{H\}, \{T\}, \{H,T\}\}$, and the probability measure $P$ assigns

x | \{\} | \{H\} | \{T\} | \{H,T\}
---|---|---|---|---
P(x) | 0 | 0.5 | 0.5 | 1

So the full probability space which defines a fair coin is the triplet $(\Omega, \mathcal{F}, P)$ as defined above. A coin toss may then be represented by a random variable: the mapping $(H,T)\to(1,0)$ gives a Bernoulli random variable, while the mapping $(H,T)\to(1,-1)$ is convenient for modelling random walks.
The probabilistic and statistical properties of coin-tossing games are often used as examples in both introductory and advanced textbooks, and these are mainly based on the assumption that a coin is fair or "ideal". For example, Feller uses this basis to introduce both the idea of random walks and to develop tests for homogeneity within a sequence of observations by looking at the properties of the runs of identical values within a sequence.[3] The latter leads on to a runs test. A time series consisting of the results of tossing a fair coin is called a Bernoulli process.
If a cheat has altered a coin to prefer one side over another (a biased coin), the coin can still be used for fair results by changing the game slightly. John von Neumann gave the following procedure:[4]

1. Toss the coin twice.
2. If the results match, discard both results and start over.
3. If the results differ, use the first result and forget the second.
The reason this process produces a fair result is that the probability of getting heads and then tails must be the same as the probability of getting tails and then heads, as the coin is not changing its bias between flips and the two flips are independent. This works only if getting one result on a trial does not change the bias on subsequent trials, which is the case for most non-malleable coins (but not for processes such as the Pólya urn). By excluding the events of two heads and two tails by repeating the procedure, the coin flipper is left with the only two remaining outcomes having equivalent probability. This procedure only works if the tosses are paired properly; if part of a pair is reused in another pair, the fairness may be ruined. Also, the coin must not be so biased that one side has a probability of zero.
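Von Neumann's procedure can be sketched in Python as follows; the helper `biased_flip` and its 0.7 bias are illustrative stand-ins for any biased source, not part of the source text:

```python
import random

def biased_flip(p_heads=0.7):
    """A biased coin: returns 'H' with probability p_heads (illustrative bias)."""
    return 'H' if random.random() < p_heads else 'T'

def von_neumann_fair_flip(flip=biased_flip):
    """Return a fair 'H' or 'T' from a biased coin via von Neumann's procedure."""
    while True:
        first, second = flip(), flip()
        if first != second:   # HT and TH are equally likely, so use the first
            return first
        # HH or TT: discard the pair and toss again

# Over many samples the output frequencies approach 1/2 each.
results = [von_neumann_fair_flip() for _ in range(100_000)]
print(results.count('H') / len(results))  # close to 0.5
```

Note that the fairness of the output does not depend on knowing the bias, only on the two flips in each pair being independent and identically distributed.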
This method may be extended by also considering sequences of four tosses. That is, if the coin is flipped twice but the results match, and the coin is flipped twice again but the results match now for the opposite side, then the first result can be used. This is because HHTT and TTHH are equally likely. This can be extended to any multiple of 2.
The expected number of flips in the $n$-th game, $E(F_n)$, is straightforward to compute. If the first pair of tosses is HT or TH, the game ends after exactly two flips, so $E(F_n \mid HT, TH) = 2$. If the pair is TT or HH, the two flips are discarded and the procedure starts over, so $E(F_n \mid TT, HH) = 2 + E(F_{n+1})$. Since each game is an independent repetition of the same experiment, the expected number of flips is the same for every game: $E(F) = E(F_n) = E(F_{n+1})$. Therefore

\begin{align}
E(F) &= E(F_n)\\
&= E(F_n \mid TT,HH)\,P(TT,HH) + E(F_n \mid HT,TH)\,P(HT,TH)\\
&= \bigl(2 + E(F_{n+1})\bigr)P(TT,HH) + 2\,P(HT,TH)\\
&= \bigl(2 + E(F)\bigr)P(TT,HH) + 2\,P(HT,TH)\\
&= \bigl(2 + E(F)\bigr)\bigl(P(TT) + P(HH)\bigr) + 2\bigl(P(HT) + P(TH)\bigr)\\
&= \bigl(2 + E(F)\bigr)\bigl(P(T)^2 + P(H)^2\bigr) + 4P(H)P(T)\\
&= \bigl(2 + E(F)\bigr)\bigl(1 - 2P(H)P(T)\bigr) + 4P(H)P(T)\\
&= 2 + E(F) - 2P(H)P(T)E(F)
\end{align}

Solving $E(F) = 2 + E(F) - 2P(H)P(T)E(F)$ gives

$$E(F) = \frac{1}{P(H)P(T)} = \frac{1}{P(H)\bigl(1 - P(H)\bigr)}.$$
The more biased our coin is, the more likely it is that we will have to perform a greater number of trials before a fair result.
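The formula $E(F) = 1/(P(H)P(T))$ is easy to check by simulation; this is a sketch with an assumed bias of $P(H) = 0.7$, chosen only for illustration:

```python
import random

def flips_for_one_fair_result(p_heads):
    """Count the biased flips used by one game of von Neumann's procedure."""
    count = 0
    while True:
        first = random.random() < p_heads
        second = random.random() < p_heads
        count += 2
        if first != second:  # HT or TH: the game ends
            return count

p = 0.7
trials = 200_000
avg = sum(flips_for_one_fair_result(p) for _ in range(trials)) / trials
print(avg)                # empirical average number of flips
print(1 / (p * (1 - p)))  # formula 1/(P(H)P(T)), about 4.76 for p = 0.7
```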
Suppose that the bias $b := P(H)$ of the coin is known. A fair result can then be obtained more efficiently by maintaining a number $p$, interpreted as the probability with which the procedure should still output $H$, and comparing it with $b$ at every step. Initialize $p = 0.5$ and repeat:

1. Flip the coin; let $X \in \{H, T\}$ be the result.
2. If $p \ge b$: if $X = H$, output $H$ and stop; otherwise update $p \gets \dfrac{p - b}{1 - b}$ and go back to step 1.
3. If $p < b$: if $X = T$, output $T$ and stop; otherwise update $p \gets \dfrac{p}{b}$ and go back to step 1.
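Under the assumption that the bias $b$ is known, the loop above can be sketched as follows; the function name is mine, not from the source:

```python
import random

def fair_flip_known_bias(b):
    """Simulate a fair coin from a coin with known P(H) = b, 0 < b < 1.

    Maintains a target probability p, starting at 0.5, and repeatedly
    subdivides the unit interval according to the biased flip: [0, b)
    corresponds to heads and [b, 1) to tails.
    """
    p = 0.5
    while True:
        x_is_heads = random.random() < b   # one biased flip
        if p >= b:
            if x_is_heads:
                return 'H'                 # [0, b) lies inside [0, p): done
            p = (p - b) / (1 - b)          # condition on landing in [b, 1)
        else:
            if not x_is_heads:
                return 'T'                 # [b, 1) lies inside [p, 1): done
            p = p / b                      # condition on landing in [0, b)

results = [fair_flip_known_bias(0.3) for _ in range(100_000)]
print(results.count('H') / len(results))  # close to 0.5
```

At every state the probability of eventually outputting $H$ equals the current $p$: if $p \ge b$ it is $b + (1-b)\frac{p-b}{1-b} = p$, and if $p < b$ it is $b \cdot \frac{p}{b} = p$, so starting from $p = 0.5$ the output is fair.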
Note that the above algorithm does not reach the optimal expected number of coin tosses, which is $1/H(b)$, where $H(b)$ is the binary entropy function. The algorithm instead uses an expected number of biased coin flips equal to

$$\frac{1}{2b(1-b)}.$$
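For a concrete sense of the gap between the entropy bound and the algorithm's cost, the two quantities can be compared numerically; the bias $b = 0.3$ here is an arbitrary example:

```python
from math import log2

def binary_entropy(b):
    """H(b) = -b*log2(b) - (1-b)*log2(1-b), in bits per biased flip."""
    return -b * log2(b) - (1 - b) * log2(1 - b)

b = 0.3
optimal = 1 / binary_entropy(b)   # information-theoretic lower bound, ~1.13 flips
actual = 1 / (2 * b * (1 - b))    # expected flips of the algorithm above, ~2.38
print(optimal, actual)
```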
The correctness of the above algorithm is a good exercise in conditional expectation. We now analyze the expected number of coin flips. Given the bias $b = P(H)$ and the current value of $p$, define $f_b(p)$ to be the expected number of flips until the algorithm outputs a result. Conditioning on the first flip gives the recursion

$$f_b(p) = \begin{cases} 1 + (1-b)\, f_b\!\left(\dfrac{p-b}{1-b}\right) & \text{if } p \ge b,\\[1ex] 1 + b\, f_b\!\left(\dfrac{p}{b}\right) & \text{if } p < b. \end{cases}$$

This recursion admits a closed-form solution; in particular, at the starting value $p = 0.5$ it gives

$$f_b(0.5) = \frac{1}{2b(1-b)}.$$
The idea of this algorithm can be extended to generating a coin of any specified bias, i.e. to sampling an event of arbitrary probability, by initializing $p$ to the desired target value instead of 0.5.