Bayesian average explained

A Bayesian average is a method of estimating the mean of a population using outside information, especially a pre-existing belief,[1] that is factored into the calculation; incorporating such a prior belief is a central feature of Bayesian interpretation. The approach is particularly useful when the available data set is small.[2]

Calculating the Bayesian average uses the prior mean m and a constant C. C is chosen based on the typical data set size required for a robust estimate of the sample mean. C is larger when the expected variation between data sets (within the larger population) is small, and smaller when the data sets are expected to vary substantially from one another.

\bar{x} = \frac{Cm + \sum_{i=1}^{n} x_i}{C + n}

This is equivalent to adding C data points of value m to the data set. It is a weighted average of the prior mean m and the sample mean.
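
As a minimal sketch of the formula above (the function name bayesian_average and the example ratings are illustrative assumptions, not taken from the cited sources), the weighted average can be computed directly:

    def bayesian_average(ratings, prior_mean, C):
        """Weighted average of the prior mean and the sample mean;
        equivalent to adding C extra data points of value prior_mean."""
        return (C * prior_mean + sum(ratings)) / (C + len(ratings))

    # One 5-star rating barely moves the estimate away from the prior of 3.0 ...
    print(bayesian_average([5], prior_mean=3.0, C=5))        # about 3.33
    # ... but fifty 5-star ratings dominate the prior.
    print(bayesian_average([5] * 50, prior_mean=3.0, C=5))   # about 4.82

The example illustrates the intended behaviour: with little data the estimate stays near the prior mean m, and as n grows the sample mean takes over.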

When the x_i are binary values 0 or 1, m can be interpreted as the prior estimate of a binomial probability, with the Bayesian average giving a posterior estimate for the observed data. In this case, C can be chosen based on the desired binomial proportion confidence interval for the sample value. For example, for rare outcomes when m is small, choosing

C \simeq 9/m

ensures a 99% confidence interval has width about 2m.
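
A short sketch of this binary case, assuming the C \simeq 9/m heuristic above (the function name bayesian_binary_rate and the example counts are hypothetical):

    def bayesian_binary_rate(successes, trials, prior_rate):
        """Bayesian average for 0/1 data, with C set to roughly 9 / prior_rate
        (the rare-outcome heuristic described above)."""
        C = 9 / prior_rate
        return (C * prior_rate + successes) / (C + trials)

    # Hypothetical rare event: prior rate m = 1%, then 2 successes in 50 trials.
    # C = 900, so the estimate stays near the prior: (9 + 2) / 950 = about 0.0116
    print(bayesian_binary_rate(successes=2, trials=50, prior_rate=0.01))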


References

  1. Miller, Evan. "Bayesian Average Ratings". www.evanmiller.org. Retrieved 2016-05-21.
  2. Masurel, Paul. "Of Bayesian average and star ratings". fulmicoton.com. Retrieved 2016-05-21.