Fisher transformation explained

In statistics, the Fisher transformation (or Fisher z-transformation) of a Pearson correlation coefficient is its inverse hyperbolic tangent (artanh). When the sample correlation coefficient r is near 1 or -1, its distribution is highly skewed, which makes it difficult to estimate confidence intervals and apply tests of significance for the population correlation coefficient ρ.[1] [2] [3] The Fisher transformation solves this problem by yielding a variable that is approximately normally distributed, with a variance that is stable over different values of r.

Definition

Given a set of N bivariate sample pairs (X_i, Y_i), i = 1, ..., N, the sample correlation coefficient r is given by

r = \frac{\operatorname{cov}(X,Y)}{\sigma_X \sigma_Y} = \frac{\sum_{i=1}^N (X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^N (X_i - \bar{X})^2}\,\sqrt{\sum_{i=1}^N (Y_i - \bar{Y})^2}}.

Here \operatorname{cov}(X,Y) stands for the covariance between the variables X and Y, and \sigma stands for the standard deviation of the respective variable. Fisher's z-transformation of r is defined as

z = \frac{1}{2}\ln\left(\frac{1+r}{1-r}\right) = \operatorname{artanh}(r),

where "ln" is the natural logarithm function and "artanh" is the inverse hyperbolic tangent function.
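As a concrete illustration, the sample correlation and its Fisher transformation can be computed with NumPy (a minimal sketch; the simulated data and the coefficients 0.6 and 0.8 are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(42)

# Arbitrary paired sample: y is a noisy linear function of x.
x = rng.normal(size=100)
y = 0.6 * x + 0.8 * rng.normal(size=100)

# Sample correlation coefficient r, as defined above.
r = np.corrcoef(x, y)[0, 1]

# Fisher's z-transformation: artanh(r) = (1/2) ln((1 + r) / (1 - r)).
z = np.arctanh(r)
print(r, z)
```

Note that np.arctanh is NumPy's name for artanh, so no hand-rolled logarithm is needed.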

If (X, Y) has a bivariate normal distribution with correlation ρ and the pairs (X_i, Y_i) are independent and identically distributed, then z is approximately normally distributed with mean

\frac{1}{2}\ln\left(\frac{1+\rho}{1-\rho}\right),

and a standard deviation which does not depend on the value of the correlation ρ (i.e., it is a variance-stabilizing transformation):

\frac{1}{\sqrt{N-3}},

where N is the sample size, and ρ is the true correlation coefficient.

This transformation, and its inverse

r = \frac{\exp(2z) - 1}{\exp(2z) + 1} = \tanh(z),

can be used to construct a large-sample confidence interval for r using standard normal theory and derivations. See also application to partial correlation.
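For example, a confidence interval for ρ can be built by transforming r to the z scale, applying the normal approximation there, and mapping the endpoints back with tanh. This is a sketch assuming SciPy is available; the helper name correlation_ci is ours:

```python
import numpy as np
from scipy import stats

def correlation_ci(r, n, confidence=0.95):
    """Approximate confidence interval for rho via Fisher's z (hypothetical helper)."""
    z = np.arctanh(r)                      # transform to the z scale
    se = 1.0 / np.sqrt(n - 3)              # approximate standard error of z
    z_crit = stats.norm.ppf(1 - (1 - confidence) / 2)
    lo, hi = z - z_crit * se, z + z_crit * se
    return np.tanh(lo), np.tanh(hi)        # back to the correlation scale

lo, hi = correlation_ci(r=0.8, n=30)
print(lo, hi)
```

The interval is symmetric on the z scale but asymmetric on the r scale, reflecting the skewness that the transformation removes.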

Derivation

Hotelling gives a concise derivation of the Fisher transformation.[4]

To derive the Fisher transformation, one starts by considering an arbitrary increasing, twice-differentiable function of r, say G(r). Finding the first term in the large-N expansion of the corresponding skewness \kappa_3 results[5] in

\kappa_3 = \frac{6\rho - 3(1-\rho^2)\,G''(\rho)/G'(\rho)}{\sqrt{N}} + O(N^{-3/2}).

Setting \kappa_3 = 0 and solving the corresponding differential equation for G yields the inverse hyperbolic tangent function, G(\rho) = \operatorname{artanh}(\rho).
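The differential equation step can be checked symbolically: with G(ρ) = artanh(ρ), the leading skewness term vanishes identically. A quick SymPy verification (not part of the original derivation):

```python
import sympy as sp

rho = sp.symbols('rho')

G = sp.atanh(rho)             # candidate solution G(rho) = artanh(rho)
Gp = sp.diff(G, rho)          # G'(rho) = 1/(1 - rho**2)
Gpp = sp.diff(G, rho, 2)      # G''(rho) = 2*rho/(1 - rho**2)**2

# Leading skewness term: sqrt(N)*kappa_3 = 6*rho - 3*(1 - rho**2) * G''/G'
leading_term = sp.simplify(6 * rho - 3 * (1 - rho**2) * Gpp / Gp)
print(leading_term)   # 0
```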

Similarly expanding the mean m and variance v of \operatorname{artanh}(r), one gets

m = \operatorname{artanh}(\rho) + \frac{\rho}{2N} + O(N^{-2})

and

v = \frac{1}{N} + \frac{6 - \rho^2}{2N^2} + O(N^{-3})

respectively.

The extra terms are not part of the usual Fisher transformation. For large values of

\rho

and small values of

N

they represent a large improvement of accuracy at minimal cost, although they greatly complicate the computation of the inverse – a closed-form expression is not available. The near-constant variance of the transformation is the result of removing its skewness – the actual improvement is achieved by the latter, not by the extra terms. Including the extra terms, i.e., computing (z - m)/\sqrt{v}, yields

\frac{z - \operatorname{artanh}(\rho) - \dfrac{\rho}{2N}}{\sqrt{\dfrac{1}{N} + \dfrac{6 - \rho^2}{2N^2}}},

which has, to an excellent approximation, a standard normal distribution.[6]
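The accuracy of these expansions can be checked by simulation. A Monte Carlo sketch for one arbitrary choice of ρ and N (exact agreement is not expected, only closeness up to the stated error orders):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, N, reps = 0.7, 20, 100_000

# Draw `reps` bivariate normal samples of size N and compute artanh(r) for each.
samples = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=(reps, N))
x, y = samples[..., 0], samples[..., 1]
xc = x - x.mean(axis=1, keepdims=True)
yc = y - y.mean(axis=1, keepdims=True)
r = (xc * yc).sum(axis=1) / np.sqrt((xc**2).sum(axis=1) * (yc**2).sum(axis=1))
z = np.arctanh(r)

m_approx = np.arctanh(rho) + rho / (2 * N)       # mean, with correction term
v_approx = 1 / N + (6 - rho**2) / (2 * N**2)     # variance, with correction term
print(z.mean(), m_approx)
print(z.var(), v_approx)
```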

Application

The application of Fisher's transformation can be enhanced using a software calculator. Assuming that the r-squared value found is 0.80, that there are 30 data pairs, and accepting a 90% confidence interval, the r-squared value in another random sample from the same population may range from 0.588 to 0.921. When r-squared is outside this range, the population is considered to be different.

Discussion

The Fisher transformation is an approximate variance-stabilizing transformation for r when X and Y follow a bivariate normal distribution. This means that the variance of z is approximately constant for all values of the population correlation coefficient ρ. Without the Fisher transformation, the variance of r grows smaller as |ρ| gets closer to 1. Since the Fisher transformation is approximately the identity function when |r| < 1/2, it is sometimes useful to remember that the variance of r is well approximated by 1/N as long as |ρ| is not too large and N is not too small. This is related to the fact that the asymptotic variance of r is 1 for bivariate normal data.
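This variance-stabilizing behavior is easy to see empirically: across different values of ρ the spread of z stays near 1/\sqrt{N-3}, while the spread of r shrinks as |ρ| grows. A simulation sketch with arbitrary settings:

```python
import numpy as np

rng = np.random.default_rng(1)
N, reps = 50, 20_000
spreads = {}

for rho in (0.0, 0.5, 0.9):
    samples = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=(reps, N))
    x, y = samples[..., 0], samples[..., 1]
    xc = x - x.mean(axis=1, keepdims=True)
    yc = y - y.mean(axis=1, keepdims=True)
    r = (xc * yc).sum(axis=1) / np.sqrt((xc**2).sum(axis=1) * (yc**2).sum(axis=1))
    spreads[rho] = (r.std(), np.arctanh(r).std())   # (sd of r, sd of z)
    print(rho, spreads[rho], 1 / np.sqrt(N - 3))
```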

The behavior of this transform has been extensively studied since Fisher introduced it in 1915. Fisher himself found the exact distribution of z for data from a bivariate normal distribution in 1921; Gayen in 1951[7] determined the exact distribution of z for data from a bivariate Type A Edgeworth distribution. Hotelling in 1953 calculated the Taylor series expressions for the moments of z and several related statistics[8] and Hawkins in 1989 discovered the asymptotic distribution of z for data from a distribution with bounded fourth moments.[9]

An alternative to the Fisher transformation is to use the exact confidence distribution density for ρ given by[10] [11]

\pi(\rho \mid r) = \frac{\nu(\nu - 1)\,\Gamma(\nu - 1)}{\sqrt{2\pi}\,\Gamma\!\left(\nu + \tfrac{1}{2}\right)} (1 - r^2)^{\frac{\nu - 1}{2}} \cdot (1 - \rho^2)^{\frac{\nu - 2}{2}} \cdot (1 - r\rho)^{\frac{1 - 2\nu}{2}} F\!\left(\tfrac{3}{2}, -\tfrac{1}{2}; \nu + \tfrac{1}{2}; \tfrac{1 + r\rho}{2}\right),

where F is the Gaussian hypergeometric function and \nu = N - 1 > 1.
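This density can be evaluated numerically with SciPy's Gaussian hypergeometric function; as a sanity check it should integrate to 1 over ρ ∈ (−1, 1). A sketch (the function name confidence_density is ours, and the values r = 0.6, N = 12 are arbitrary):

```python
import numpy as np
from scipy import special, integrate

def confidence_density(rho, r, nu):
    """Exact confidence density pi(rho | r) for the correlation, with nu = N - 1."""
    const = (nu * (nu - 1) * special.gamma(nu - 1)
             / (np.sqrt(2 * np.pi) * special.gamma(nu + 0.5)))
    return (const
            * (1 - r**2) ** ((nu - 1) / 2)
            * (1 - rho**2) ** ((nu - 2) / 2)
            * (1 - r * rho) ** ((1 - 2 * nu) / 2)
            * special.hyp2f1(1.5, -0.5, nu + 0.5, (1 + r * rho) / 2))

# A proper density integrates to 1 over the support (-1, 1).
total, _ = integrate.quad(confidence_density, -1, 1, args=(0.6, 11))   # N = 12
print(total)
```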

Other uses

While the Fisher transformation is mainly associated with the Pearson product-moment correlation coefficient for bivariate normal observations, it can also be applied to Spearman's rank correlation coefficient in more general cases.[12] A similar result for the asymptotic distribution applies, but with a minor adjustment factor: see the cited article for details.

References

  1. Fisher, R. A. (1915). "Frequency distribution of the values of the correlation coefficient in samples of an indefinitely large population". Biometrika. 10 (4): 507–521. doi:10.2307/2331838.
  2. Fisher, R. A. (1921). "On the 'probable error' of a coefficient of correlation deduced from a small sample". Metron. 1: 3–32.
  3. Wicklin, Rick (September 20, 2017). "Fisher's transformation of the correlation coefficient". https://blogs.sas.com/content/iml/2017/09/20/fishers-transformation-correlation.html. Accessed February 15, 2022.
  4. Hotelling, Harold (1953). "New Light on the Correlation Coefficient and its Transforms". Journal of the Royal Statistical Society, Series B (Methodological). 15 (2): 193–225. doi:10.1111/j.2517-6161.1953.tb00135.x.
  5. Winterbottom, Alan (1979). "A Note on the Derivation of Fisher's Transformation of the Correlation Coefficient". The American Statistician. 33 (3): 142–143. doi:10.2307/2683819.
  6. Vrbik, Jan (December 2005). "Population moments of sampling distributions". Computational Statistics. 20 (4): 611–621. doi:10.1007/BF02741318.
  7. Gayen, A. K. (1951). "The Frequency Distribution of the Product-Moment Correlation Coefficient in Random Samples of Any Size Drawn from Non-Normal Universes". Biometrika. 38 (1/2): 219–247. doi:10.1093/biomet/38.1-2.219.
  8. Hotelling, H. (1953). "New light on the correlation coefficient and its transforms". Journal of the Royal Statistical Society, Series B. 15 (2): 193–225.
  9. Hawkins, D. L. (1989). "Using U statistics to derive the asymptotic distribution of Fisher's Z statistic". The American Statistician. 43 (4): 235–237. doi:10.2307/2685369.
  10. Taraldsen, Gunnar (2021). "The Confidence Density for Correlation". Sankhya A. doi:10.1007/s13171-021-00267-y.
  11. Taraldsen, Gunnar (2020). "Confidence in Correlation". doi:10.13140/RG.2.2.23673.49769.
  12. Zar, Jerrold H. (2005). "Spearman Rank Correlation: Overview". Encyclopedia of Biostatistics. doi:10.1002/9781118445112.stat05964.
