Checking whether a coin is fair

In statistics, the question of checking whether a coin is fair is important for two reasons: first, it provides a simple problem on which to illustrate basic ideas of statistical inference; second, it provides a simple problem that can be used to compare various competing methods of statistical inference, including decision theory. The practical problem of checking whether a coin is fair might be considered easily solved by performing a sufficiently large number of trials, but statistics and probability theory can provide guidance on two types of question: how many trials to undertake, and the accuracy of an estimate of the probability of turning up heads derived from a given sample of trials.

A fair coin is an idealized randomizing device with two states (usually named "heads" and "tails") which are equally likely to occur. It is based on the coin flip used widely in sports and other situations where it is required to give two parties the same chance of winning. Either a specially designed chip or, more usually, a simple currency coin is used, although the latter might be slightly "unfair" due to an asymmetrical weight distribution, which might cause one state to occur more frequently than the other, giving one party an unfair advantage.[1] So it might be necessary to test experimentally whether the coin is in fact "fair", that is, whether the probability of the coin's falling on either side when it is tossed is exactly 50%. It is of course impossible to rule out arbitrarily small deviations from fairness, such as might be expected to affect only one flip in a lifetime of flipping; moreover, it is always possible for an unfair (or "biased") coin to happen to turn up exactly 10 heads in 20 flips. Therefore, any fairness test can only establish a certain degree of confidence in a certain degree of fairness (a certain maximum bias). In more rigorous terminology, the problem is that of determining the parameters of a Bernoulli process, given only a limited sample of Bernoulli trials.

Preamble

This article describes experimental procedures for determining whether a coin is fair. There are many statistical methods for analyzing such an experiment; this article illustrates two of them.

Both methods prescribe an experiment (or trial) in which the coin is tossed many times and the result of each toss is recorded. The results can then be analysed statistically to decide whether the coin is "fair" or "probably not fair".
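As a minimal sketch of such a trial (the bias parameter true_p, the seed, and the toss count below are illustrative choices, not part of either method), the tossing experiment can be simulated in Python:

    import numpy as np

    rng = np.random.default_rng(seed=0)  # fixed seed so the run is reproducible

    true_p = 0.5   # assumed true probability of heads (unknown in a real experiment)
    N = 10         # number of tosses in the trial

    tosses = rng.random(N) < true_p  # True counts as heads, False as tails
    h = int(tosses.sum())            # observed number of heads
    t = N - h                        # observed number of tails
    print(f"heads: {h}, tails: {t}")

In a real experiment the counts h and t would of course come from physical tosses of the coin under test, not from a simulator with a known bias.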

An important difference between these two approaches is that the first approach gives some weight to one's prior experience of tossing coins, while the second does not. The question of how much weight to give to prior experience, depending on the quality (credibility) of that experience, is discussed under credibility theory.

Posterior probability density function

One method, drawn from Bayesian probability theory, is to calculate the posterior probability density function of the coin's bias.

A test is performed by tossing the coin N times and noting the observed numbers of heads, h, and tails, t. The symbols H and T represent more generalised variables expressing the numbers of heads and tails respectively that might have been observed in the experiment. Thus N = H + T = h + t.

Next, let r be the actual probability of obtaining heads in a single toss of the coin. This is the property of the coin which is being investigated. Using Bayes' theorem, the posterior probability density of r conditional on h and t is expressed as follows:

f(r \mid H=h, T=t) = \frac{\Pr(H=h \mid r, N=h+t)\, g(r)}{\int_0^1 \Pr(H=h \mid p, N=h+t)\, g(p)\, dp},

where g(r) represents the prior probability density distribution of r, which lies in the range 0 to 1.

The prior probability density distribution summarizes what is known about the distribution of r in the absence of any observation. We will assume that the prior distribution of r is uniform over the interval [0, 1]. That is, g(r) = 1. (In practice, it would be more appropriate to assume a prior distribution which is much more heavily weighted in the region around 0.5, to reflect our experience with real coins.)
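To make this concrete (a sketch only; the Beta(50, 50) shape below is a hypothetical choice, not prescribed by the method), both the uniform prior and a prior weighted around 0.5 can be written as densities on [0, 1]:

    from scipy.stats import beta

    def g_uniform(r):
        # Uniform prior density on [0, 1]: g(r) = 1 for every r.
        return 1.0

    def g_weighted(r):
        # A hypothetical prior heavily weighted around r = 0.5,
        # modelled here as a Beta(50, 50) density.
        return beta.pdf(r, 50, 50)

    print(g_uniform(0.5), g_weighted(0.5))  # the weighted prior is ~8 at r = 0.5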

The probability of obtaining h heads in N tosses of a coin with a probability of heads equal to r is given by the binomial distribution:

\Pr(H=h \mid r, N=h+t) = \binom{N}{h} r^h (1-r)^t.
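Combining this likelihood with the uniform prior gives the posterior numerically; the sketch below uses h = 7 and t = 3 (anticipating the example further down) and trapezoidal integration in place of the exact integral, so the values are approximate:

    import numpy as np
    from scipy.stats import binom
    from scipy.integrate import trapezoid

    h, t = 7, 3                            # observed counts (match the example below)
    N = h + t

    r = np.linspace(0, 1, 1001)            # grid of candidate values for r
    likelihood = binom.pmf(h, N, r)        # Pr(H = h | r, N)
    prior = np.ones_like(r)                # uniform prior: g(r) = 1

    numerator = likelihood * prior
    denominator = trapezoid(numerator, r)  # numerical integral over [0, 1]
    posterior = numerator / denominator    # f(r | H = h, T = t)

    print(posterior[700])  # density at r = 0.7, approximately 2.94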

Substituting the binomial expression into the posterior formula above:

f(r \mid H=h, T=t) = \frac{\binom{N}{h} r^h (1-r)^t}{\int_0^1 \binom{N}{h} p^h (1-p)^t \, dp} = \frac{r^h (1-r)^t}{\int_0^1 p^h (1-p)^t \, dp}.

This is in fact a beta distribution (the conjugate prior for the binomial distribution), whose denominator can be expressed in terms of the beta function:

f(r \mid H=h, T=t) = \frac{1}{B(h+1, t+1)} r^h (1-r)^t.

As a uniform prior distribution has been assumed, and because h and t are integers, this can also be written in terms of factorials:

f(r \mid H=h, T=t) = \frac{(h+t+1)!}{h!\, t!} r^h (1-r)^t.
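The equivalence of the beta-function and factorial forms is easy to check numerically; a small sketch, again using h = 7 and t = 3:

    from math import factorial
    from scipy.special import beta as beta_fn

    h, t = 7, 3

    # 1 / B(h+1, t+1) and (h+t+1)! / (h! t!) should agree.
    from_beta      = 1 / beta_fn(h + 1, t + 1)
    from_factorial = factorial(h + t + 1) / (factorial(h) * factorial(t))

    print(from_beta, from_factorial)  # both print 1320.0 (up to rounding)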

Example

For example, let N = 10, h = 7, i.e. the coin is tossed 10 times and 7 heads are obtained:

f(r \mid H=7, T=3) = \frac{(10+1)!}{7!\, 3!} r^7 (1-r)^3 = 1320\, r^7 (1-r)^3.

A plot of this probability density function of r, given that 7 heads were obtained in 10 tosses, shows a single peak at r = 0.7, the observed proportion of heads. (Note: r is the probability of obtaining heads when tossing the same coin once.)
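Because this posterior is the Beta(h + 1, t + 1) = Beta(8, 4) distribution, standard library routines can summarize it directly; a sketch of a few such summaries:

    from scipy.stats import beta

    posterior = beta(8, 4)  # Beta(h+1, t+1) with h = 7, t = 3

    print(posterior.pdf(0.7))      # ~2.94: density at the mode r = 0.7
    print(posterior.mean())        # ~0.667: posterior mean (h+1)/(N+2)
    print(1 - posterior.cdf(0.5))  # ~0.887: posterior probability that r > 0.5

Under the uniform-prior assumption, then, about 89% of the posterior probability lies above r = 0.5, so the 7 heads in 10 tosses leave appreciable uncertainty about whether the coin favours heads at all.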

Notes and References

  1. However, if the coin is caught rather than allowed to bounce or spin, it is difficult to bias a coin flip's outcome. See Andrew Gelman and Deborah Nolan, "Teacher's Corner: You Can Load a Die, But You Can't Bias a Coin", The American Statistician, 56(4), 2002, pp. 308–311. doi:10.1198/000313002605.