Behrens–Fisher problem explained

In statistics, the Behrens–Fisher problem, named after Walter-Ulrich Behrens and Ronald Fisher, is the problem of interval estimation and hypothesis testing concerning the difference between the means of two normally distributed populations when the variances of the two populations are not assumed to be equal, based on two independent samples.

Specification

One difficulty with discussing the Behrens–Fisher problem and proposed solutions is that there are many different interpretations of what is meant by "the Behrens–Fisher problem". These interpretations differ not only in what is counted as a relevant solution, but even in the basic statement of the context being considered.

Context

Let X1, ..., Xn and Y1, ..., Ym be i.i.d. samples from two populations which both come from the same location–scale family of distributions. The scale parameters are assumed to be unknown and not necessarily equal, and the problem is to assess whether the location parameters can reasonably be treated as equal. Lehmann[1] states that "the Behrens–Fisher problem" is used both for this general form of model when the family of distributions is arbitrary, and for when the restriction to a normal distribution is made. While Lehmann discusses a number of approaches to the more general problem, mainly based on nonparametrics,[2] most other sources appear to use "the Behrens–Fisher problem" to refer only to the case where the distribution is assumed to be normal: most of this article makes this assumption.

Requirements of solutions

Solutions to the Behrens–Fisher problem have been presented that make use of either a classical or a Bayesian inference point of view, and either solution would be notionally invalid judged from the other point of view. If consideration is restricted to classical statistical inference only, it is possible to seek solutions to the inference problem that are simple to apply in a practical sense, giving preference to this simplicity over any inaccuracy in the corresponding probability statements. Where exactness of the significance levels of statistical tests is required, there may be an additional requirement that the procedure should make maximum use of the statistical information in the dataset. It is well known that an exact test can be obtained by randomly discarding data from the larger dataset until the sample sizes are equal, assembling the data in pairs and taking differences, and then using an ordinary t-test to test for the mean difference being zero: clearly this would not be "optimal" in any sense.
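As a rough illustration of this pairing device (and not of any recommended procedure), the following Python sketch randomly discards observations from the larger of two hypothetical sample arrays, forms random pairs, and applies an ordinary one-sample t-test to the differences; the function name and the simulated data are purely illustrative.

```python
import numpy as np
from scipy import stats

def paired_discard_test(x, y, seed=None):
    """Naive exact test of equal means: randomly discard observations from
    the larger sample so both samples have the same size, form random pairs,
    and apply an ordinary t-test to the pairwise differences."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = min(len(x), len(y))
    x_sub = rng.choice(x, size=n, replace=False)  # random subsample / reordering
    y_sub = rng.choice(y, size=n, replace=False)
    d = x_sub - y_sub                             # differences of the assembled pairs
    return stats.ttest_1samp(d, 0.0)              # exact under normality, but wasteful

# Illustrative use: equal means, unequal variances, unequal sample sizes.
rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=30)
y = rng.normal(loc=0.0, scale=3.0, size=50)
print(paired_discard_test(x, y, seed=2))
```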

The task of specifying interval estimates for this problem is one where a frequentist approach fails to provide an exact solution, although some approximations are available. Standard Bayesian approaches also fail to provide an answer that can be expressed as straightforward simple formulae, but modern computational methods of Bayesian analysis do allow essentially exact solutions to be found. Thus study of the problem can be used to elucidate the differences between the frequentist and Bayesian approaches to interval estimation.

Outline of different approaches

Behrens and Fisher approach

Ronald Fisher in 1935 introduced fiducial inference[3] [4] in order to apply it to this problem. He referred to an earlier paper by Walter-Ulrich Behrens from 1929. Behrens and Fisher proposed to find the probability distribution of

T \equiv \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{s_1^2/n_1 + s_2^2/n_2}}

where \bar{x}_1 and \bar{x}_2 are the two sample means, and s1 and s2 are their standard deviations. See Behrens–Fisher distribution. Fisher approximated the distribution of this by ignoring the random variation of the relative sizes of the standard deviations,

\frac{s_1/\sqrt{n_1}}{\sqrt{s_1^2/n_1 + s_2^2/n_2}}.

Fisher's solution provoked controversy because it did not have the property that the hypothesis of equal means would be rejected with probability α if the means were in fact equal. Many other methods of treating the problem have been proposed since, and their effects on the resulting confidence intervals have been investigated.[5]
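As a concrete illustration of the statistic T defined above, here is a minimal Python sketch; it assumes that s1 and s2 denote the usual unbiased sample standard deviations (ddof=1), and the function name is illustrative.

```python
import numpy as np

def behrens_fisher_T(x1, x2):
    """Behrens–Fisher statistic T = (xbar1 - xbar2) / sqrt(s1^2/n1 + s2^2/n2),
    taking s1^2 and s2^2 as the unbiased sample variances (ddof=1)."""
    x1, x2 = np.asarray(x1, dtype=float), np.asarray(x2, dtype=float)
    n1, n2 = len(x1), len(x2)
    se2 = x1.var(ddof=1) / n1 + x2.var(ddof=1) / n2  # squared standard error of the difference
    return (x1.mean() - x2.mean()) / np.sqrt(se2)
```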

Welch's approximate t solution

See main article: Welch's t test and Welch–Satterthwaite equation. A widely used method is that of B. L. Welch,[6] who, like Fisher, was at University College London. The estimated variance of the mean difference \bar{d} = \bar{x}_1 - \bar{x}_2 is

s_{\bar{d}}^2 = \frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}.

Welch (1938) approximated the distribution of s_{\bar{d}}^2 by a Pearson type III distribution (a scaled chi-squared distribution) whose first two moments agree with those of s_{\bar{d}}^2. This applies to the following number of degrees of freedom (d.f.), which is generally non-integer:

\nu \approx \frac{(\gamma_1 + \gamma_2)^2}{\gamma_1^2/(n_1 - 1) + \gamma_2^2/(n_2 - 1)}

where \gamma_i = \sigma_i^2 / n_i.

Under the null hypothesis of equal expectations, μ1 = μ2, the distribution of the Behrens–Fisher statistic T, which also depends on the variance ratio σ1²/σ2², can now be approximated by Student's t distribution with these ν degrees of freedom. But this ν contains the population variances σi², and these are unknown. The following estimate replaces the population variances by the sample variances:

\hat{\nu} \approx \frac{(g_1 + g_2)^2}{g_1^2/(n_1 - 1) + g_2^2/(n_2 - 1)}

where g_i = s_i^2 / n_i.

This \hat{\nu} is a random variable. A t distribution with a random number of degrees of freedom does not exist. Nevertheless, the Behrens–Fisher T can be compared with the corresponding quantile of Student's t distribution with this estimated number of degrees of freedom, \hat{\nu}, which is generally non-integer. In this way, the boundary between the acceptance and rejection regions of the test statistic T is calculated based on the empirical variances si², in a way that is a smooth function of them.

This method also does not give exactly the nominal significance level, but it is generally not far off. However, if the population variances are equal, or if the samples are rather small and the population variances can be assumed to be approximately equal, it is more accurate to use Student's t-test.
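A minimal sketch of Welch's procedure, in the notation used above (Python; the helper name and the simulated data are illustrative), computes T, the plug-in degrees of freedom ν̂, and a two-sided p-value from Student's t distribution; SciPy's ttest_ind with equal_var=False implements the same approximation and can be used as a cross-check.

```python
import numpy as np
from scipy import stats

def welch_test(x1, x2):
    """Welch's approximate t solution: refer the Behrens–Fisher statistic T to
    Student's t with the estimated, generally non-integer, degrees of freedom."""
    x1, x2 = np.asarray(x1, dtype=float), np.asarray(x2, dtype=float)
    n1, n2 = len(x1), len(x2)
    g1, g2 = x1.var(ddof=1) / n1, x2.var(ddof=1) / n2    # g_i = s_i^2 / n_i
    T = (x1.mean() - x2.mean()) / np.sqrt(g1 + g2)
    nu_hat = (g1 + g2) ** 2 / (g1**2 / (n1 - 1) + g2**2 / (n2 - 1))
    p_value = 2 * stats.t.sf(abs(T), df=nu_hat)          # two-sided p-value
    return T, nu_hat, p_value

# Cross-check against SciPy's built-in Welch test on simulated data.
rng = np.random.default_rng(0)
x1 = rng.normal(loc=0.0, scale=1.0, size=25)
x2 = rng.normal(loc=0.5, scale=3.0, size=40)
print(welch_test(x1, x2))
print(stats.ttest_ind(x1, x2, equal_var=False))
```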

Other approaches

A number of different approaches to the general problem have been proposed, some of which claim to "solve" some version of the problem. Among these are those of Chapman (1950),[7] of Prokof'yev and Shishkin (1974),[8] of Dudewicz and Ahmed (1998, 1999),[9] and a non-asymptotic t-test proposed by Wang (2022).[10]

In a comparison of selected methods by Dudewicz, Ma, Mai, and Su,[11] the Dudewicz–Ahmed procedure was recommended for practical use.

Exact solutions to the common and generalized Behrens–Fisher problems

For several decades, it was commonly believed that no exact solution to the common Behrens–Fisher problem existed. However, it was proved in 1966 that it has an exact solution.[12] In 2018 the probability density function of a generalized Behrens–Fisher distribution, formed from m means and m distinct standard errors taken from m samples of distinct sizes drawn from independent normal distributions with distinct means and variances, was derived; the same paper also examined its asymptotic approximations.[13] A follow-up paper showed that the classic paired t-test is a central Behrens–Fisher problem with a non-zero population correlation coefficient, and derived the corresponding probability density function by solving the associated non-central Behrens–Fisher problem with a non-zero population correlation coefficient.[14] That paper also solved a more general non-central Behrens–Fisher problem with a non-zero population correlation coefficient in its appendix.[14]

Variants

A minor variant of the Behrens–Fisher problem has been studied.[15] In this instance the problem is, assuming that the two population-means are in fact the same, to make inferences about the common mean: for example, one could require a confidence interval for the common mean.

Generalisations

One generalisation of the problem involves multivariate normal distributions with unknown covariance matrices, and is known as the multivariate Behrens–Fisher problem.[16]

The nonparametric Behrens–Fisher problem does not assume that the distributions are normal.[17] [18] Tests include the Cucconi test of 1968 and the Lepage test of 1971.
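One practical route for the nonparametric version is the Brunner–Munzel test, associated with the asymptotic theory and small-sample approximation cited above,[17] and available in SciPy as stats.brunnermunzel. The sketch below is illustrative only; the simulated skewed data are not taken from any of the cited papers.

```python
import numpy as np
from scipy import stats

# Two samples from skewed distributions with different spreads; no normality assumed.
rng = np.random.default_rng(3)
x = rng.lognormal(mean=0.0, sigma=0.5, size=30)
y = rng.lognormal(mean=0.0, sigma=1.5, size=45)

# Brunner–Munzel test of the nonparametric Behrens–Fisher hypothesis
# P(X < Y) + 0.5 * P(X = Y) = 1/2, using the t-distribution approximation.
print(stats.brunnermunzel(x, y, alternative='two-sided', distribution='t'))
```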

Notes

  1. Lehmann (1975) p. 95
  2. Lehmann (1975) Section 7
  3. Fisher, R. A. (1935). "The fiducial argument in statistical inference". Annals of Eugenics. 8 (4): 391–398. doi:10.1111/j.1469-1809.1935.tb02120.x.
  4. "R. A. Fisher's Fiducial Argument and Bayes' Theorem" by Teddy Seidenfeld (web site).
  5. Sezer, A. et al. (2015). "Comparison of confidence intervals for the Behrens–Fisher problem". Comm. Stats. (web site).
  6. Welch (1938, 1947)
  7. Chapman, D. G. (1950). "Some two sample tests". Annals of Mathematical Statistics. 21 (4): 601–606. doi:10.1214/aoms/1177729755.
  8. Prokof'yev, V. N.; Shishkin, A. D. (1974). "Successive classification of normal sets with unknown variances". Radio Engng. Electron. Phys. 19 (2): 141–143.
  9. Dudewicz & Ahmed (1998, 1999)
  10. Wang, Chang (2022). "A New Non-asymptotic t-test for Behrens-Fisher Problems". arXiv:2210.16473 [math.ST].
  11. Dudewicz, Ma, Mai, and Su (2007)
  12. Kabe, D. G. (December 1966). "On the exact distribution of the Fisher-Behren's-Welch statistic". Metrika. 10 (1): 13–15. doi:10.1007/BF02613414.
  13. Xiao, Yongshun (2018). "On the Solution of a Generalized Behrens-Fisher Problem". Far East Journal of Theoretical Statistics. 54 (1): 21–140. doi:10.17654/TS054010021.
  14. Xiao, Yongshun (2018). "On the Solution of a Non-Central Behrens-Fisher Problem with a Non-Zero Population Correlation Coefficient". Far East Journal of Theoretical Statistics. 54 (6): 527–600. doi:10.17654/TS054060527.
  15. Young, G. A.; Smith, R. L. (2005). Essentials of Statistical Inference. CUP. (page 204)
  16. Belloni & Didier (2008)
  17. Brunner, E. (2000). "Nonparametric Behrens–Fisher Problem: Asymptotic Theory and a Small Sample Approximation". Biometrical Journal. 42: 17–25. doi:10.1002/(SICI)1521-4036(200001)42:1<17::AID-BIMJ17>3.0.CO;2-U.
  18. Konietschke, Frank (2015). "nparcomp: An R Software Package for Nonparametric Multiple Comparisons and Simultaneous Confidence Intervals". Journal of Statistical Software. 64 (9). doi:10.18637/jss.v064.i09.
