F-distribution explained

In probability theory and statistics, the F-distribution or F-ratio, also known as Snedecor's F distribution or the Fisher–Snedecor distribution (after Ronald Fisher and George W. Snedecor), is a continuous probability distribution that arises frequently as the null distribution of a test statistic, most notably in the analysis of variance (ANOVA) and other F-tests.[1][2][3]

Definition

The F-distribution with d1 and d2 degrees of freedom is the distribution of

X = \frac{U_1/d_1}{U_2/d_2}
where U_1 and U_2 are independent random variables with chi-square distributions with respective degrees of freedom d_1 and d_2.

It can be shown to follow that the probability density function (pdf) for X is given by

\begin{align}
f(x; d_1, d_2) &= \frac{\sqrt{\dfrac{(d_1 x)^{d_1} \, d_2^{d_2}}{(d_1 x + d_2)^{d_1 + d_2}}}}{x \, B\!\left(\tfrac{d_1}{2}, \tfrac{d_2}{2}\right)} \\[5pt]
&= \frac{1}{B\!\left(\tfrac{d_1}{2}, \tfrac{d_2}{2}\right)} \left(\frac{d_1}{d_2}\right)^{d_1/2} x^{d_1/2 - 1} \left(1 + \frac{d_1}{d_2}\, x\right)^{-(d_1 + d_2)/2}
\end{align}

for real x > 0. Here B is the beta function. In many applications, the parameters d_1 and d_2 are positive integers, but the distribution is well-defined for positive real values of these parameters.
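As a minimal numerical sketch (pure standard library; the function name `f_pdf` is ours, not any particular package's API), the second form of the density can be evaluated through log-gamma functions, and a crude rectangle-rule sum confirms that it integrates to 1:

```python
import math

def f_pdf(x, d1, d2):
    """F-distribution density, using the beta-function form above.

    B(d1/2, d2/2) is evaluated via log-gamma for numerical stability.
    """
    if x <= 0:
        return 0.0
    log_beta = math.lgamma(d1 / 2) + math.lgamma(d2 / 2) - math.lgamma((d1 + d2) / 2)
    return math.exp((d1 / 2) * math.log(d1 / d2)
                    + (d1 / 2 - 1) * math.log(x)
                    - ((d1 + d2) / 2) * math.log(1 + d1 * x / d2)
                    - log_beta)

# Crude check: the density should carry total mass ~1 (rectangle rule on [0, 200];
# the tail beyond 200 is negligible for these degrees of freedom).
step = 0.001
mass = step * sum(f_pdf(k * step, 5, 7) for k in range(1, 200_000))
```

Working in log space avoids overflow of the gamma function for large degrees of freedom.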

The cumulative distribution function is

F(x; d_1, d_2) = I_{d_1 x/(d_1 x + d_2)}\!\left(\tfrac{d_1}{2}, \tfrac{d_2}{2}\right),

where I is the regularized incomplete beta function.
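The identity can be checked numerically with a sketch (assuming no SciPy is available; `reg_inc_beta` below is a crude midpoint-integration stand-in for a proper regularized-incomplete-beta routine such as `scipy.special.betainc`). When d_1 = d_2, X and 1/X share the same distribution, so the median is 1 and the CDF at 1 must be 1/2:

```python
import math

def reg_inc_beta(z, a, b, n=20_000):
    """Regularized incomplete beta I_z(a, b), by midpoint integration of the
    Beta(a, b) density over [0, z] -- a rough numerical stand-in only."""
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    h = z / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h  # midpoints avoid the endpoints 0 and 1
        total += math.exp((a - 1) * math.log(t) + (b - 1) * math.log(1 - t) - log_beta)
    return total * h

def f_cdf(x, d1, d2):
    """CDF of F(d1, d2) via the regularized-incomplete-beta identity above."""
    if x <= 0:
        return 0.0
    return reg_inc_beta(d1 * x / (d1 * x + d2), d1 / 2, d2 / 2)
```

For example, `f_cdf(1.0, 4, 4)` should return approximately 0.5.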

The expectation, variance, and other details about the F(d_1, d_2) distribution are given in the sidebox; for d_2 > 8, the excess kurtosis is

\gamma_2 = 12 \, \frac{d_1 (5 d_2 - 22)(d_1 + d_2 - 2) + (d_2 - 4)(d_2 - 2)^2}{d_1 (d_2 - 6)(d_2 - 8)(d_1 + d_2 - 2)}.

The k-th moment of an F(d1, d2) distribution exists and is finite only when 2k < d2 and it is equal to

\mu_X(k) = \left(\frac{d_2}{d_1}\right)^k \frac{\Gamma\!\left(\tfrac{d_1}{2} + k\right)}{\Gamma\!\left(\tfrac{d_1}{2}\right)} \, \frac{\Gamma\!\left(\tfrac{d_2}{2} - k\right)}{\Gamma\!\left(\tfrac{d_2}{2}\right)}.[4]
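A quick sketch of the moment formula (standard library only; the function name is ours). For k = 1 the gamma ratios collapse to the familiar mean d_2/(d_2 - 2):

```python
import math

def f_moment(k, d1, d2):
    """k-th raw moment of F(d1, d2); finite only when 2k < d2."""
    if 2 * k >= d2:
        raise ValueError("moment does not exist: need 2k < d2")
    # (d2/d1)^k * Gamma(d1/2 + k)/Gamma(d1/2) * Gamma(d2/2 - k)/Gamma(d2/2),
    # assembled in log space for numerical stability
    return math.exp(k * math.log(d2 / d1)
                    + math.lgamma(d1 / 2 + k) - math.lgamma(d1 / 2)
                    + math.lgamma(d2 / 2 - k) - math.lgamma(d2 / 2))

mean = f_moment(1, 6, 10)   # should equal d2/(d2 - 2) = 10/8 = 1.25
```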

The F-distribution is a particular parametrization of the beta prime distribution, which is also called the beta distribution of the second kind.

The characteristic function is listed incorrectly in many standard references. The correct expression[5] is

\varphi^F_{d_1, d_2}(s) = \frac{\Gamma\!\left(\frac{d_1 + d_2}{2}\right)}{\Gamma\!\left(\tfrac{d_2}{2}\right)} \, U\!\left(\frac{d_1}{2},\, 1 - \frac{d_2}{2},\, -\frac{d_2}{d_1} \imath s\right)

where U(a, b, z) is the confluent hypergeometric function of the second kind.

Characterization

A random variate of the F-distribution with parameters d_1 and d_2 arises as the ratio of two appropriately scaled chi-squared variates:[6]

X = \frac{U_1/d_1}{U_2/d_2}
where U_1 and U_2 have chi-squared distributions with d_1 and d_2 degrees of freedom respectively, and U_1 and U_2 are independent.

In instances where the F-distribution is used, for example in the analysis of variance, independence of U_1 and U_2 might be demonstrated by applying Cochran's theorem.
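This characterization can be checked by direct simulation (a Monte Carlo sketch with names of our own choosing): build each chi-squared variate as a sum of squared standard normals, form the scaled ratio, and compare the sample mean with the theoretical mean d_2/(d_2 - 2):

```python
import random

random.seed(0)  # fixed seed for reproducibility

def f_variate(d1, d2):
    """One F(d1, d2) draw: a ratio of scaled chi-squared variates, each the
    sum of squared independent standard normals."""
    u1 = sum(random.gauss(0, 1) ** 2 for _ in range(d1))
    u2 = sum(random.gauss(0, 1) ** 2 for _ in range(d2))
    return (u1 / d1) / (u2 / d2)

sample = [f_variate(4, 20) for _ in range(50_000)]
sample_mean = sum(sample) / len(sample)   # theory: d2/(d2 - 2) = 20/18
```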

Equivalently, since the chi-squared distribution is the sum of squares of independent standard normal random variables, the random variable of the F-distribution may also be written

X = \frac{s_1^2}{\sigma_1^2} \div \frac{s_2^2}{\sigma_2^2},

where s_1^2 = \frac{S_1^2}{d_1} and s_2^2 = \frac{S_2^2}{d_2}, S_1^2 is the sum of squares of d_1 random variables from normal distribution N(0, \sigma_1^2) and S_2^2 is the sum of squares of d_2 random variables from normal distribution N(0, \sigma_2^2).

In a frequentist context, a scaled F-distribution therefore gives the probability p(s_1^2/s_2^2 \mid \sigma_1^2, \sigma_2^2), with the F-distribution itself, without any scaling, applying where \sigma_1^2 is taken equal to \sigma_2^2. This is the context in which the F-distribution most generally appears in F-tests: where the null hypothesis is that two independent normal variances are equal, and the observed sums of some appropriately selected squares are then examined to see whether their ratio is significantly incompatible with this null hypothesis.
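An illustrative sketch of such an F-test (nothing here is a standard API; `reg_inc_beta` is again a crude numerical substitute for a proper incomplete-beta routine): compare two small samples' variances and convert the ratio into a one-sided p-value through the CDF identity F(x; d_1, d_2) = I_{d_1 x/(d_1 x + d_2)}(d_1/2, d_2/2):

```python
import math

def reg_inc_beta(z, a, b, n=20_000):
    # Regularized incomplete beta I_z(a, b) by midpoint integration
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    h = z / n
    return h * sum(math.exp((a - 1) * math.log((k + 0.5) * h)
                            + (b - 1) * math.log(1 - (k + 0.5) * h)
                            - log_beta)
                   for k in range(n))

def f_test(xs, ys):
    """One-sided F-test of H0: equal normal variances; the larger sample
    variance goes in the numerator.  Returns (F statistic, p-value)."""
    def svar(v):
        m = sum(v) / len(v)
        return sum((t - m) ** 2 for t in v) / (len(v) - 1)
    if svar(xs) < svar(ys):
        xs, ys = ys, xs       # put the larger variance on top
    d1, d2 = len(xs) - 1, len(ys) - 1
    stat = svar(xs) / svar(ys)
    z = d1 * stat / (d1 * stat + d2)
    return stat, 1.0 - reg_inc_beta(z, d1 / 2, d2 / 2)

stat, p = f_test([1.0, 3.0, 5.0, 7.0, 9.0], [1.0, 2.0, 3.0, 4.0])
```

Here the sample variances are 10 and 5/3, so the statistic is 6.0 with (4, 3) degrees of freedom; a small p-value would be evidence against equal variances.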

The quantity X has the same distribution in Bayesian statistics, if an uninformative rescaling-invariant Jeffreys prior is taken for the prior probabilities of \sigma_1^2 and \sigma_2^2.[7] In this context, a scaled F-distribution thus gives the posterior probability p(\sigma_2^2/\sigma_1^2 \mid s_1^2, s_2^2), where the observed sums s_1^2 and s_2^2 are now taken as known.

Properties and related distributions

If X \sim \chi^2_{d_1} and Y \sim \chi^2_{d_2} (chi-squared distributions) are independent, then \frac{X/d_1}{Y/d_2} \sim F(d_1, d_2).

If X_k \sim \Gamma(\alpha_k, \beta_k) (Gamma distributions, k = 1, 2) are independent, then \frac{\alpha_2 \beta_1 X_1}{\alpha_1 \beta_2 X_2} \sim F(2\alpha_1, 2\alpha_2).

If X \sim \operatorname{Beta}(d_1/2, d_2/2) (Beta distribution), then \frac{d_2 X}{d_1 (1 - X)} \sim F(d_1, d_2).

If X \sim F(d_1, d_2), then \frac{d_1 X / d_2}{1 + d_1 X / d_2} \sim \operatorname{Beta}(d_1/2, d_2/2).

If X \sim F(d_1, d_2), then \frac{d_1}{d_2} X has a beta prime distribution: \frac{d_1}{d_2} X \sim \beta^{\prime}\!\left(\tfrac{d_1}{2}, \tfrac{d_2}{2}\right).

If X \sim F(d_1, d_2), then Y = \lim_{d_2 \to \infty} d_1 X has the chi-squared distribution \chi^2_{d_1}.

F(d_1, d_2) is equivalent to the scaled Hotelling's T-squared distribution \frac{d_2}{d_1 (d_1 + d_2 - 1)} \operatorname{T}^2(d_1, d_1 + d_2 - 1).

If X \sim F(d_1, d_2), then X^{-1} \sim F(d_2, d_1).

If X \sim t(n) (Student's t-distribution), then X^2 \sim F(1, n) and X^{-2} \sim F(n, 1).

If X and Y are independent, with X, Y \sim \operatorname{Laplace}(\mu, b), then \frac{|X - \mu|}{|Y - \mu|} \sim F(2, 2).

If X \sim F(n, m), then \tfrac{\log X}{2} \sim \operatorname{FisherZ}(n, m) (Fisher's z-distribution).

The noncentral F-distribution simplifies to the F-distribution if \lambda = 0.

The doubly noncentral F-distribution simplifies to the F-distribution if \lambda_1 = \lambda_2 = 0.

If \operatorname{Q}_X(p) is the quantile p for X \sim F(d_1, d_2) and \operatorname{Q}_Y(1-p) is the quantile 1-p for Y \sim F(d_2, d_1), then \operatorname{Q}_X(p) = \frac{1}{\operatorname{Q}_Y(1-p)}.
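Several of these relations lend themselves to a quick Monte Carlo check. The sketch below (all names are ours) verifies the quantile reciprocity between F(d_1, d_2) and F(d_2, d_1) using empirical quantiles from two independent simulated samples:

```python
import random

random.seed(1)  # fixed seed for reproducibility

def f_variate(d1, d2):
    # F draw as a ratio of scaled chi-squared variates (sums of squared normals)
    u1 = sum(random.gauss(0, 1) ** 2 for _ in range(d1))
    u2 = sum(random.gauss(0, 1) ** 2 for _ in range(d2))
    return (u1 / d1) / (u2 / d2)

n, p = 50_000, 0.9
xs = sorted(f_variate(5, 8) for _ in range(n))   # X ~ F(5, 8)
ys = sorted(f_variate(8, 5) for _ in range(n))   # Y ~ F(8, 5), independent
q_x = xs[int(p * n)]          # empirical Q_X(0.9)
q_y = ys[int((1 - p) * n)]    # empirical Q_Y(0.1)
# Quantile reciprocity: Q_X(p) * Q_Y(1 - p) should be close to 1.
```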

See also

Modified half-normal distribution:[9] the probability density function on (0, \infty) is given as

f(x) = \frac{2 \beta^{\alpha/2} x^{\alpha - 1} \exp(-\beta x^2 + \gamma x)}{\Psi\!\left(\tfrac{\alpha}{2}, \tfrac{\gamma}{\sqrt{\beta}}\right)},

where \Psi(\alpha, z) = {}_1\Psi_1\!\left(\begin{matrix} \left(\alpha, \tfrac{1}{2}\right) \\ (1, 0) \end{matrix}; z \right) denotes the Fox–Wright Psi function.


Notes and References

  1. Johnson, Norman Lloyd; Kotz, Samuel; Balakrishnan, N. (1995). Continuous Univariate Distributions, Volume 2 (Second Edition, Section 27). Wiley. ISBN 0-471-58494-0.
  2. NIST (2006). Engineering Statistics Handbook – F Distribution.
  3. Mood, Alexander; Graybill, Franklin A.; Boes, Duane C. (1974). Introduction to the Theory of Statistics (Third ed.). McGraw-Hill. pp. 246–249. ISBN 0-07-042864-6.
  4. Taboga, Marco. "The F distribution."
  5. Phillips, P. C. B. (1982). "The true characteristic function of the F distribution". Biometrika, 69: 261–264.
  6. DeGroot, M. H. (1986). Probability and Statistics (2nd ed.). Addison-Wesley. p. 500. ISBN 0-201-11366-X.
  7. Box, G. E. P.; Tiao, G. C. (1973). Bayesian Inference in Statistical Analysis. Addison-Wesley. p. 110. ISBN 0-201-00622-7.
  8. Mahmoudi, Amin; Javed, Saad Ahmed (October 2022). "Probabilistic Approach to Multi-Stage Supplier Evaluation: Confidence Level Measurement in Ordinal Priority Approach". Group Decision and Negotiation, 31(5): 1051–1096. doi:10.1007/s10726-022-09790-1.
  9. Sun, Jingchao; Kong, Maiying; Pal, Subhadip (2021). "The Modified-Half-Normal distribution: Properties and an efficient sampling scheme". Communications in Statistics – Theory and Methods, 52(5): 1591–1613. doi:10.1080/03610926.2021.1934700.