Spearman's rank correlation coefficient explained

In statistics, Spearman's rank correlation coefficient or Spearman's ρ, named after Charles Spearman[1] and often denoted by the Greek letter \rho (rho) or as r_s, is a nonparametric measure of rank correlation (statistical dependence between the rankings of two variables). It assesses how well the relationship between two variables can be described using a monotonic function.

The Spearman correlation between two variables is equal to the Pearson correlation between the rank values of those two variables; while Pearson's correlation assesses linear relationships, Spearman's correlation assesses monotonic relationships (whether linear or not). If there are no repeated data values, a perfect Spearman correlation of +1 or −1 occurs when each of the variables is a perfect monotone function of the other.

Intuitively, the Spearman correlation between two variables will be high when observations have a similar (or identical for a correlation of 1) rank (i.e. relative position label of the observations within the variable: 1st, 2nd, 3rd, etc.) between the two variables, and low when observations have a dissimilar (or fully opposed for a correlation of −1) rank between the two variables.

Spearman's coefficient is appropriate for both continuous and discrete ordinal variables.[2] [3] Both Spearman's \rho and Kendall's \tau can be formulated as special cases of a more general correlation coefficient.

Applications

The coefficient can be used to determine how well data fits a model,[4] or to determine the similarity of text documents.[5]

Definition and calculation

The Spearman correlation coefficient is defined as the Pearson correlation coefficient between the rank variables.[6]

For a sample of size n, the n raw scores X_i, Y_i are converted to ranks \operatorname{R}(X_i), \operatorname{R}(Y_i), and r_s is computed as

r_s = \rho_{\operatorname{R}(X),\operatorname{R}(Y)} = \frac{\operatorname{cov}(\operatorname{R}(X),\operatorname{R}(Y))}{\sigma_{\operatorname{R}(X)}\,\sigma_{\operatorname{R}(Y)}},

where

\rho denotes the usual Pearson correlation coefficient, but applied to the rank variables,
\operatorname{cov}(\operatorname{R}(X),\operatorname{R}(Y)) is the covariance of the rank variables, and
\sigma_{\operatorname{R}(X)} and \sigma_{\operatorname{R}(Y)} are the standard deviations of the rank variables.

Only if all n ranks are distinct integers can it be computed using the popular formula

r_s = 1 - \frac{6 \sum d_i^2}{n(n^2 - 1)},

where

d_i = \operatorname{R}(X_i) - \operatorname{R}(Y_i) is the difference between the two ranks of each observation, and
n is the number of observations.
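As a quick illustration, the following sketch (helper names are illustrative, not from any library) computes r_s both ways for a small tie-free sample: as the Pearson correlation of the rank variables, and via the d_i^2 shortcut. The two agree exactly when all ranks are distinct.

```python
from math import sqrt

def ranks(values):
    # Positional ranks 1..n; assumes all values are distinct (no ties).
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order, start=1):
        r[i] = pos
    return r

def spearman_pearson(x, y):
    # r_s as the Pearson correlation of the rank variables (the definition).
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry)) / n
    sx = sqrt(sum((a - mx) ** 2 for a in rx) / n)
    sy = sqrt(sum((b - my) ** 2 for b in ry) / n)
    return cov / (sx * sy)

def spearman_d2(x, y):
    # The shortcut formula; valid only because all ranks are distinct.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

x = [10, 20, 30, 40, 50]
y = [1, 4, 9, 25, 16]
print(spearman_pearson(x, y))  # 0.9
print(spearman_d2(x, y))       # 0.9
```

Here only the last two y-values are out of order, so d_i^2 sums to 2 and both routes give 1 − 12/120 = 0.9.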

Consider a bivariate sample (x_i, y_i), i = 1, \ldots, n with corresponding ranks (\operatorname{R}(X_i), \operatorname{R}(Y_i)) = (R_i, S_i). Then the Spearman correlation coefficient of x, y is

r_s = \frac{\frac{1}{n} \sum_{i=1}^n R_i S_i - \overline{R}\,\overline{S}}{\sigma_R \sigma_S},

where, as usual,

\overline{R} = \frac{1}{n} \sum_{i=1}^n R_i,

\overline{S} = \frac{1}{n} \sum_{i=1}^n S_i,

\sigma_R^2 = \frac{1}{n} \sum_{i=1}^n (R_i - \overline{R})^2, and

\sigma_S^2 = \frac{1}{n} \sum_{i=1}^n (S_i - \overline{S})^2.

We shall show that r_s can be expressed purely in terms of d_i := R_i - S_i, provided we assume that there are no ties within each sample.

Under this assumption, R and S can be viewed as random variables distributed like a uniformly distributed random variable U on \{1, 2, \ldots, n\}. Hence \overline{R} = \overline{S} = E[U] and \sigma_R^2 = \sigma_S^2 = \operatorname{Var}(U) = E[U^2] - E[U]^2, where

E[U] = \frac{1}{n} \sum_{i=1}^n i = \frac{n+1}{2},

E[U^2] = \frac{1}{n} \sum_{i=1}^n i^2 = \frac{(n+1)(2n+1)}{6},

and thus

\operatorname{Var}(U) = \frac{(n+1)(2n+1)}{6} - \left( \frac{n+1}{2} \right)^2 = \frac{n^2 - 1}{12}.

(These sums can be computed using the formulas for the triangular number and square pyramidal number, or basic summation results from discrete mathematics.)
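These closed forms are easy to sanity-check numerically; a quick sketch for n = 10:

```python
# Check E[U], E[U^2], Var(U) for U uniform on {1, ..., n} against the closed forms.
n = 10
vals = range(1, n + 1)
EU = sum(vals) / n                       # empirical E[U]
EU2 = sum(i * i for i in vals) / n       # empirical E[U^2]
var = EU2 - EU ** 2                      # empirical Var(U)
print(EU, (n + 1) / 2)                   # 5.5   5.5
print(EU2, (n + 1) * (2 * n + 1) / 6)    # 38.5  38.5
print(var, (n ** 2 - 1) / 12)            # 8.25  8.25
```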

Observe now that

\begin{align}
\frac{1}{n} \sum_{i=1}^n R_i S_i - \overline{R}\,\overline{S}
&= \frac{1}{n} \sum_{i=1}^n \frac{1}{2} (R_i^2 + S_i^2 - d_i^2) - \overline{R}^2 \\
&= \frac{1}{2} \cdot \frac{1}{n} \sum_{i=1}^n R_i^2 + \frac{1}{2} \cdot \frac{1}{n} \sum_{i=1}^n S_i^2 - \frac{1}{2n} \sum_{i=1}^n d_i^2 - \overline{R}^2 \\
&= \left( \frac{1}{n} \sum_{i=1}^n R_i^2 - \overline{R}^2 \right) - \frac{1}{2n} \sum_{i=1}^n d_i^2 \\
&= \sigma_R^2 - \frac{1}{2n} \sum_{i=1}^n d_i^2 \\
&= \sigma_R \sigma_S - \frac{1}{2n} \sum_{i=1}^n d_i^2
\end{align}

Putting this all together thus yields

r_s = \frac{\sigma_R \sigma_S - \frac{1}{2n} \sum_{i=1}^n d_i^2}{\sigma_R \sigma_S}
= 1 - \frac{\sum_{i=1}^n d_i^2}{2n \cdot \frac{n^2 - 1}{12}}
= 1 - \frac{6 \sum_{i=1}^n d_i^2}{n(n^2 - 1)}.

Identical values are usually[7] each assigned fractional ranks equal to the average of their positions in the ascending order of the values, which is equivalent to averaging over all possible permutations.

If ties are present in the data set, the simplified formula above yields incorrect results: only if in both variables all ranks are distinct does

\sigma_{\operatorname{R}(X)} \sigma_{\operatorname{R}(Y)} = \operatorname{Var}(\operatorname{R}(X)) = \operatorname{Var}(\operatorname{R}(Y)) = (n^2 - 1)/12

hold (calculated according to the biased variance). The first equation, which normalizes by the standard deviations, may be used even when ranks are normalized to [0, 1] ("relative ranks"), because it is insensitive both to translation and to linear scaling.
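The practical upshot: with ties, assign fractional (average) ranks and compute the Pearson correlation of those ranks, rather than using the d_i^2 shortcut. A small sketch (helper names are illustrative) shows the two routes disagreeing on tied data:

```python
from math import sqrt

def avg_ranks(values):
    # Fractional ranks: tied values get the average of their positions.
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    r = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2  # positions in sorted order are i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    return cov / sqrt(sum((u - ma) ** 2 for u in a) * sum((v - mb) ** 2 for v in b))

x = [1, 2, 2, 4, 5]          # tie in x
y = [3, 1, 4, 5, 5]          # tie in y
rx, ry = avg_ranks(x), avg_ranks(y)
correct = pearson(rx, ry)    # Pearson on the fractional ranks: correct
d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
shortcut = 1 - 6 * d2 / (len(x) * (len(x) ** 2 - 1))
print(correct, shortcut)     # ~0.789 vs 0.8: the shortcut is off under ties
```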

The simplified method should also not be used in cases where the data set is truncated; that is, when the Spearman's correlation coefficient is desired for the top X records (whether by pre-change rank or post-change rank, or both), the user should use the Pearson correlation coefficient formula given above.[8]

Related quantities

See main article: Correlation and dependence.

There are several other numerical measures that quantify the extent of statistical dependence between pairs of observations. The most common of these is the Pearson product-moment correlation coefficient, which is a similar correlation method to Spearman's rank, that measures the “linear” relationships between the raw numbers rather than between their ranks.

An alternative name for the Spearman rank correlation is the “grade correlation”;[9] in this, the “rank” of an observation is replaced by the “grade”. In continuous distributions, the grade of an observation is, by convention, always one half less than the rank, and hence the grade and rank correlations are the same in this case. More generally, the “grade” of an observation is proportional to an estimate of the fraction of a population less than a given value, with the half-observation adjustment at observed values. Thus this corresponds to one possible treatment of tied ranks. While unusual, the term “grade correlation” is still in use.[10]

Interpretation

The sign of the Spearman correlation indicates the direction of association between X (the independent variable) and Y (the dependent variable). If Y tends to increase when X increases, the Spearman correlation coefficient is positive. If Y tends to decrease when X increases, the Spearman correlation coefficient is negative. A Spearman correlation of zero indicates that there is no tendency for Y to either increase or decrease when X increases. The Spearman correlation increases in magnitude as X and Y become closer to being perfectly monotonic functions of each other. When X and Y are perfectly monotonically related, the Spearman correlation coefficient becomes 1. A perfectly monotonic increasing relationship implies that for any two pairs of data values (X_i, Y_i) and (X_j, Y_j), the differences X_i − X_j and Y_i − Y_j always have the same sign. A perfectly monotonic decreasing relationship implies that these differences always have opposite signs.

The Spearman correlation coefficient is often described as being "nonparametric". This can have two meanings. First, a perfect Spearman correlation results when X and Y are related by any monotonic function. Contrast this with the Pearson correlation, which only gives a perfect value when X and Y are related by a linear function. The other sense in which the Spearman correlation is nonparametric is that its exact sampling distribution can be obtained without requiring knowledge (i.e., knowing the parameters) of the joint probability distribution of X and Y.
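The first sense of "nonparametric" can be seen in a toy example: a perfectly monotone but nonlinear relationship such as y = x^3 yields a perfect Spearman correlation, while the Pearson correlation of the raw values stays below 1. A minimal sketch:

```python
from math import sqrt

def pearson(a, b):
    # Plain Pearson product-moment correlation.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    return cov / sqrt(sum((u - ma) ** 2 for u in a) * sum((v - mb) ** 2 for v in b))

x = list(range(1, 11))
y = [v ** 3 for v in x]        # perfectly monotone, but nonlinear

rx = x                         # x is already 1..n, so its ranks equal itself
ry = [sorted(y).index(v) + 1 for v in y]   # no ties: plain positional ranks

print(pearson(rx, ry))         # 1.0: Spearman sees a perfect monotone relation
print(pearson(x, y))           # ~0.93: Pearson penalizes the nonlinearity
```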

Example

In this example, the arbitrary raw data in the table below is used to calculate the correlation between the IQ of a person and the number of hours spent in front of TV per week (fictitious values used).

IQ, X_i    Hours of TV per week, Y_i
106         7
100        27
 86         2
101        50
 99        28
103        29
 97        20
113        12
112         6
110        17

Firstly, evaluate d_i^2. To do so use the following steps, reflected in the table below.
  1. Sort the data by the first column (X_i). Create a new column x_i and assign it the ranked values 1, 2, 3, ..., n.
  2. Next, sort the augmented (with x_i) data by the second column (Y_i). Create a fourth column y_i and similarly assign it the ranked values 1, 2, 3, ..., n.
  3. Create a fifth column d_i to hold the differences between the two rank columns (x_i and y_i).
  4. Create one final column d_i^2 to hold the value of column d_i squared.
IQ, X_i    Hours of TV per week, Y_i    rank x_i    rank y_i    d_i    d_i^2
 86          2                            1           1           0      0
 97         20                            2           6          -4     16
 99         28                            3           8          -5     25
100         27                            4           7          -3      9
101         50                            5          10          -5     25
103         29                            6           9          -3      9
106          7                            7           3           4     16
110         17                            8           5           3      9
112          6                            9           2           7     49
113         12                           10           4           6     36

With d_i^2 found, add them to find

\sum d_i^2 = 194.

The value of n is 10. These values can now be substituted back into the equation

\rho = 1 - \frac{6 \sum d_i^2}{n(n^2 - 1)}

to give

\rho = 1 - \frac{6 \times 194}{10(10^2 - 1)},

which evaluates to \rho = -29/165 \approx -0.176, with a p-value = 0.627188 (using the t-distribution).

That the value is close to zero shows that the correlation between IQ and hours spent watching TV is very low, although the negative value suggests that the longer the time spent watching television the lower the IQ. In the case of ties in the original values, this formula should not be used; instead, the Pearson correlation coefficient should be calculated on the ranks (where ties are given ranks, as described above).
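The worked example above can be reproduced in a few lines. The sketch below (no ties in either variable, so plain positional ranks suffice) recovers both \sum d_i^2 = 194 and \rho = -29/165:

```python
def ranks(values):
    # Positional ranks 1..n (valid here: no ties in either variable).
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order, start=1):
        r[i] = pos
    return r

iq    = [106, 100, 86, 101, 99, 103, 97, 113, 112, 110]
hours = [  7,  27,  2,  50, 28,  29, 20,  12,   6,  17]

rx, ry = ranks(iq), ranks(hours)
d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
n = len(iq)
rho = 1 - 6 * d2 / (n * (n ** 2 - 1))
print(d2)    # 194
print(rho)   # -0.17575757...
```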

Confidence intervals

Confidence intervals for Spearman's ρ can be easily obtained using the jackknife Euclidean likelihood approach in de Carvalho and Marques (2012).[11] The confidence interval with level \alpha is based on a Wilks' theorem given in the latter paper, and is given by

\left\{ \theta : \frac{\left\{ \sum_{i=1}^n (Z_i - \theta) \right\}^2}{\sum_{i=1}^n (Z_i - \theta)^2} \leq \chi^2_{1,\alpha} \right\},

where \chi^2_{1,\alpha} is the \alpha quantile of a chi-square distribution with one degree of freedom, and the Z_i are jackknife pseudo-values. This approach is implemented in the R package spearmanCI.
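For readers without R, the construction can be sketched directly: form the jackknife pseudo-values Z_i = n r_s − (n − 1) r_s^{(−i)}, then note that the Euclidean likelihood condition reduces to a symmetric interval around the pseudo-value mean. The sketch below is an illustration of the construction, not the spearmanCI implementation: it hardcodes the 95% quantile \chi^2_{1,0.95} \approx 3.8415 and assumes tie-free data.

```python
from math import sqrt

def ranks(values):
    # Positional ranks 1..n; assumes no ties.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order, start=1):
        r[i] = pos
    return r

def spearman(x, y):
    # Tie-free Spearman: rank mean is (n + 1) / 2 and both rank variances
    # are equal, so a single sum of squares suffices.
    rx, ry = ranks(x), ranks(y)
    m = (len(x) + 1) / 2
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    ss = sum((a - m) ** 2 for a in rx)
    return cov / ss

def euclidean_likelihood_ci(x, y, chi2_crit=3.841458820694124):
    # chi2_crit: 0.95 quantile of chi-square with 1 df (hardcoded).
    n = len(x)
    r_full = spearman(x, y)
    # Jackknife pseudo-values Z_i = n * r - (n - 1) * r_(-i).
    Z = [n * r_full - (n - 1) * spearman(x[:i] + x[i + 1:], y[:i] + y[i + 1:])
         for i in range(n)]
    zbar = sum(Z) / n
    ss = sum((z - zbar) ** 2 for z in Z)
    # {theta : (sum(Z_i - theta))^2 <= chi2 * sum((Z_i - theta)^2)} reduces
    # to a symmetric interval around zbar (requires n > chi2_crit).
    half = sqrt(chi2_crit * ss / (n * n - n * chi2_crit))
    return zbar - half, zbar + half

x = list(range(12))
y = [0, 2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 11]   # strong but imperfect monotone trend
lo, hi = euclidean_likelihood_ci(x, y)
print(lo, hi)
```

Note that pseudo-value intervals can spill slightly outside [−1, 1]; implementations typically truncate to that range.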

Determining significance

One approach to test whether an observed value of ρ is significantly different from zero (r will always maintain −1 ≤ r ≤ 1) is to calculate the probability that it would be greater than or equal to the observed r, given the null hypothesis, by using a permutation test. An advantage of this approach is that it automatically takes into account the number of tied data values in the sample and the way they are treated in computing the rank correlation.

Another approach parallels the use of the Fisher transformation in the case of the Pearson product-moment correlation coefficient. That is, confidence intervals and hypothesis tests relating to the population value ρ can be carried out using the Fisher transformation:

F(r) = \frac{1}{2} \ln \frac{1+r}{1-r} = \operatorname{arctanh} r.

If F(r) is the Fisher transformation of r, the sample Spearman rank correlation coefficient, and n is the sample size, then

z = \sqrt{\frac{n-3}{1.06}}\, F(r)

is a z-score for r, which approximately follows a standard normal distribution under the null hypothesis of statistical independence.[12] [13]

One can also test for significance using

t = r \sqrt{\frac{n-2}{1-r^2}},

which is distributed approximately as Student's t-distribution with n − 2 degrees of freedom under the null hypothesis.[14] A justification for this result relies on a permutation argument.[15]
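Both test statistics are straightforward to compute. A minimal sketch, applied to the example value r = −29/165 with n = 10 from the worked example above:

```python
from math import atanh, erf, sqrt

def fisher_z_test(r, n):
    # z-score via the Fisher transformation; two-sided p-value from the
    # standard normal (null hypothesis: statistical independence).
    z = sqrt((n - 3) / 1.06) * atanh(r)
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF at |z|
    return z, 2 * (1 - phi)

def t_statistic(r, n):
    # t statistic with n - 2 degrees of freedom under the null.
    return r * sqrt((n - 2) / (1 - r * r))

r, n = -29 / 165, 10
z, p = fisher_z_test(r, n)
tval = t_statistic(r, n)
print(z, p, tval)   # small |z|, large p: no evidence against independence
```

The t statistic would then be referred to a t-distribution with n − 2 = 8 degrees of freedom, giving the p-value quoted in the example.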

A generalization of the Spearman coefficient is useful in the situation where there are three or more conditions, a number of subjects are all observed in each of them, and it is predicted that the observations will have a particular order. For example, a number of subjects might each be given three trials at the same task, and it is predicted that performance will improve from trial to trial. A test of the significance of the trend between conditions in this situation was developed by E. B. Page[16] and is usually referred to as Page's trend test for ordered alternatives.

Correspondence analysis based on Spearman's ρ

Classic correspondence analysis is a statistical method that gives a score to every value of two nominal variables. In this way the Pearson correlation coefficient between them is maximized.

There exists an equivalent of this method, called grade correspondence analysis, which maximizes Spearman's ρ or Kendall's τ.[17]

Approximating Spearman's ρ from a stream

There are two existing approaches to approximating the Spearman's rank correlation coefficient from streaming data.[18] [19] The first approach involves coarsening the joint distribution of (X, Y). For continuous X, Y values, m_1 and m_2 cutpoints are selected for X and Y respectively, discretizing these random variables. Default cutpoints are added at -\infty and \infty. A count matrix of size (m_1 + 1) \times (m_2 + 1), denoted M, is then constructed, where M[i, j] stores the number of observations that fall into the two-dimensional cell indexed by (i, j). For streaming data, when a new observation arrives, the appropriate M[i, j] element is incremented. The Spearman's rank correlation can then be computed, based on the count matrix M, using linear algebra operations (Algorithm 2). Note that for discrete random variables, no discretization procedure is necessary. This method is applicable to stationary streaming data as well as large data sets. For non-stationary streaming data, where the Spearman's rank correlation coefficient may change over time, the same procedure can be applied, but to a moving window of observations. When using a moving window, memory requirements grow linearly with the chosen window size.
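Under the stated assumptions (cutpoints fixed in advance), the count-matrix idea can be sketched as follows. The class below is illustrative rather than the published Algorithm 2: it maintains the count matrix incrementally and computes the Spearman correlation of the discretized stream using midranks derived from the marginal bin counts.

```python
import numpy as np

class StreamingSpearman:
    """Count-matrix sketch: fixed cutpoints discretize (X, Y); Spearman's rho
    of the discretized stream is computed from the counts alone."""

    def __init__(self, x_cuts, y_cuts):
        self.x_cuts = np.asarray(x_cuts, dtype=float)
        self.y_cuts = np.asarray(y_cuts, dtype=float)
        # (m1 + 1) x (m2 + 1) cells; the outer cutpoints -inf, +inf are implicit.
        self.M = np.zeros((len(x_cuts) + 1, len(y_cuts) + 1))

    def update(self, x, y):
        # Locate the two-dimensional cell and bump its count.
        self.M[np.searchsorted(self.x_cuts, x), np.searchsorted(self.y_cuts, y)] += 1

    def corr(self):
        M, n = self.M, self.M.sum()
        row, col = M.sum(axis=1), M.sum(axis=0)      # marginal bin counts
        # Midrank shared by every observation that falls in a given bin.
        Rx = np.cumsum(row) - row + (row + 1) / 2
        Ry = np.cumsum(col) - col + (col + 1) / 2
        mx, my = row @ Rx / n, col @ Ry / n
        cov = (Rx - mx) @ M @ (Ry - my) / n          # count-weighted covariance
        sx = np.sqrt(row @ (Rx - mx) ** 2 / n)
        sy = np.sqrt(col @ (Ry - my) ** 2 / n)
        return cov / (sx * sy)

cuts = [0.5, 1.5, 2.5]
up, down = StreamingSpearman(cuts, cuts), StreamingSpearman(cuts, cuts)
for v in range(4):
    up.update(v, v)        # perfectly concordant stream
    down.update(v, 3 - v)  # perfectly discordant stream
print(up.corr(), down.corr())   # 1.0  -1.0
```

Because only M and the cutpoints are retained, memory is constant in the number of observations, which is the point of the approach.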

The second approach to approximating the Spearman's rank correlation coefficient from streaming data involves the use of Hermite series based estimators. These estimators, based on Hermite polynomials, allow sequential estimation of the probability density function and cumulative distribution function in univariate and bivariate cases. Bivariate Hermite series density estimators and univariate Hermite series based cumulative distribution function estimators are plugged into a large-sample version of the Spearman's rank correlation coefficient estimator, to give a sequential Spearman's correlation estimator. This estimator is phrased in terms of linear algebra operations for computational efficiency (equation (8) and algorithms 1 and 2). These algorithms are only applicable to continuous random variable data, but have certain advantages over the count matrix approach in this setting. The first advantage is improved accuracy when applied to large numbers of observations. The second advantage is that the Spearman's rank correlation coefficient can be computed on non-stationary streams without relying on a moving window. Instead, the Hermite series based estimator uses an exponential weighting scheme to track time-varying Spearman's rank correlation from streaming data, which has constant memory requirements with respect to the "effective" moving window size. A software implementation of these Hermite series based algorithms exists and is discussed in Software implementations.

Software implementations

See also

Further reading

External links

Notes and References

  1. Spearman . C. . January 1904 . The Proof and Measurement of Association between Two Things . The American Journal of Psychology . 15 . 1 . 72–101 . 10.2307/1412159. 1412159 .
  2. Scale types; see Level of measurement § Typology.
  3. Book: Lehman, Ann . Jmp For Basic Univariate And Multivariate Statistics: A Step-by-step Guide . limited . SAS Press . 2005 . 978-1-59047-576-8 . Cary, NC . 123.
  4. Web site: A Guide to Spearman's Rank . Royal Geographical Society.
  5. Web site: A Measure of Similarity in Textual Data Using Spearman's Rank Correlation Coefficient . November 2019 . Nino Arsov . Milan Dukovski . Blagoja Evkoski .
  6. Book: Myers . Jerome L. . Arnold D. . Well . Research Design and Statistical Analysis . limited . Lawrence Erlbaum . 2003 . 2nd . 978-0-8058-4037-7 . 508.
  7. Book: Dodge, Yadolah . 2010 . The Concise Encyclopedia of Statistics . limited . Springer-Verlag New York . 502 . 978-0-387-31742-7 .
  8. Book: Al Jaber . Ahmed Odeh . Haifaa Omar . Elayyan . Toward Quality Assurance and Excellence in Higher Education . River Publishers . 2018 . 978-87-93609-54-9 . 284.
  9. Book: Yule . G. U. . Kendall . M. G. . 1950 . An Introduction to the Theory of Statistics . 14th . 1968 . Charles Griffin & Co. . 268 .
  10. Piantadosi . J. . Howlett . P. . Boland . J. . 2007 . Matching the grade correlation coefficient using a copula with maximum disorder . Journal of Industrial and Management Optimization . 3 . 2 . 305–312 . 10.3934/jimo.2007.3.305. free .
  11. M. . de Carvalho. F. . Marques. 2012 . Jackknife Euclidean likelihood-based inference for Spearman's rho . North American Actuarial Journal . 16 . 4. 487‒492 . 10.1080/10920277.2012.10597644 . 55046385.
  12. Choi . S. C. . 1977 . Tests of Equality of Dependent Correlation Coefficients . . 64 . 3 . 645–647 . 10.1093/biomet/64.3.645 .
  13. Fieller . E. C. . Hartley . H. O. . Pearson . E. S. . 1957 . Tests for rank correlation coefficients. I . Biometrika . 44 . 3–4 . 470–481 . 10.1093/biomet/44.3-4.470 . 10.1.1.474.9634 .
  14. Book: Press . Vettering . Teukolsky . Flannery . 1992 . Numerical Recipes in C: The Art of Scientific Computing . registration . 2nd . 640 . Cambridge University Press . 9780521437202 .
  15. Book: Kendall . M. G. . Stuart . A. . 1973 . The Advanced Theory of Statistics, Volume 2: Inference and Relationship . Griffin . 978-0-85264-215-3 . registration . Sections 31.19, 31.21.
  16. Page, E. B. . Ordered hypotheses for multiple treatments: A significance test for linear ranks . Journal of the American Statistical Association . 58 . 216–230 . 1963 . 10.2307/2282965 . 301 . 2282965 .
  17. Book: Kowalczyk . T. . Pleszczyńska . E. . Ruland . F. . 2004 . Grade Models and Methods for Data Analysis with Applications for the Analysis of Data Populations . Studies in Fuzziness and Soft Computing . 151 . Springer Verlag . Berlin Heidelberg New York . 978-3-540-21120-4.
  18. Book: Xiao, W. . 2019 IEEE International Conference on Big Data (Big Data) . Novel Online Algorithms for Nonparametric Correlations with Application to Analyze Sensor Data . 2019 . 404–412 . 10.1109/BigData47090.2019.9006483. 978-1-7281-0858-2 . 211298570 .
  19. Stephanou, Michael . Varughese, Melvin . Sequential estimation of Spearman rank correlation using Hermite series estimators . Journal of Multivariate Analysis . July 2021 . 186 . 104783 . 10.1016/j.jmva.2021.104783 . 2012.06287 . 235742634 .
  20. Stephanou, M. and Varughese, M . Hermiter: R package for sequential nonparametric estimation . Computational Statistics . 2023 . 10.1007/s00180-023-01382-0 . 2111.14091 . 244715035 .
  21. Web site: Linear or rank correlation - MATLAB corr. www.mathworks.com.