In statistics, the Hodges–Lehmann estimator is a robust and nonparametric estimator of a population's location parameter. For populations that are symmetric about one median, such as the Gaussian (normal) distribution or the Student t-distribution, the Hodges–Lehmann estimator is a consistent and median-unbiased estimator of the population median. For non-symmetric populations, the Hodges–Lehmann estimator estimates the "pseudo-median", which is closely related to the population median.
The Hodges–Lehmann estimator was proposed originally for estimating the location parameter of one-dimensional populations, but it has been used for many more purposes. It has been used to estimate the differences between the members of two populations. It has been generalized from univariate populations to multivariate populations, which produce samples of vectors.
It is based on the Wilcoxon signed-rank statistic. In statistical theory, it was an early example of a rank-based estimator, an important class of estimators both in nonparametric statistics and in robust statistics. The Hodges–Lehmann estimator was proposed in 1963 independently by Pranab Kumar Sen and by Joseph Hodges and Erich Lehmann, and so it is also called the "Hodges–Lehmann–Sen estimator".
In the simplest case, the "Hodges–Lehmann" statistic estimates the location parameter for a univariate population.[1][2] Its computation can be described quickly. For a dataset with n measurements, consider the set of all possible two-element subsets (z_i, z_j) with i ≤ j, so that each measurement is also paired with itself. Each such pair has a mean (z_i + z_j)/2, and there are n(n + 1)/2 of these pairwise means (the Walsh averages). The Hodges–Lehmann statistic is the median of these n(n + 1)/2 averages.
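The computation just described can be sketched in a few lines of Python using only the standard library (the function name `hodges_lehmann` is illustrative, not from any particular package):

```python
from itertools import combinations_with_replacement
from statistics import median

def hodges_lehmann(data):
    """One-sample Hodges-Lehmann estimator of location: the median of
    all n(n + 1)/2 pairwise means (z_i + z_j) / 2 with i <= j,
    including each measurement paired with itself."""
    walsh_averages = [(x + y) / 2
                      for x, y in combinations_with_replacement(data, 2)]
    return median(walsh_averages)

print(hodges_lehmann([1, 5, 2, 2, 7]))  # → 3.5
```

Note that `combinations_with_replacement` yields the self-pairs (z_i, z_i) as well, which the definition requires; using plain `combinations` would silently drop them.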
The Hodges–Lehmann statistic also estimates the difference between two populations. For two sets of data with m and n observations, form all m × n pairs consisting of one observation from each set (their Cartesian product); each such pair defines one difference of values. The Hodges–Lehmann statistic is the median of the m × n differences.[3]
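The two-sample version admits an equally short sketch, again with only the standard library (the function name is illustrative):

```python
from statistics import median

def hodges_lehmann_shift(xs, ys):
    """Two-sample Hodges-Lehmann estimator of the shift between two
    populations: the median of all m * n pairwise differences y - x
    taken over the Cartesian product of the two samples."""
    return median(y - x for y in ys for x in xs)

print(hodges_lehmann_shift([1, 2, 3], [4, 6, 8]))  # → 4
```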
For a population that is symmetric, the Hodges–Lehmann statistic estimates the population's median. It is a robust statistic that has a breakdown point of 0.29, which means that the statistic remains bounded even if nearly 30 percent of the data have been contaminated. This robustness is an important advantage over the sample mean, which has a zero breakdown point: it responds linearly to a change in any single observation, and so can be misled arbitrarily far by even one outlier. The sample median is even more robust, having a breakdown point of 0.50.[4] The Hodges–Lehmann estimator also performs much better than the sample mean when estimating mixtures of normal distributions.[5]
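The contrast in breakdown behaviour can be demonstrated numerically: planting a single gross outlier in a small sample drags the mean far away, while the Hodges–Lehmann estimate barely moves (a Python sketch; the data values are purely illustrative):

```python
from itertools import combinations_with_replacement
from statistics import mean, median

def hodges_lehmann(data):
    """One-sample Hodges-Lehmann estimator: median of the pairwise means."""
    return median((x + y) / 2
                  for x, y in combinations_with_replacement(data, 2))

clean = [9.8, 10.1, 9.9, 10.2, 10.0]
contaminated = clean + [1000.0]  # one gross outlier (under 30% contamination)

# The mean jumps to 175.0, far from the bulk of the data,
# while the Hodges-Lehmann estimate stays near 10.
print(mean(contaminated))            # → 175.0
print(hodges_lehmann(contaminated))  # ≈ 10.05
```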
For symmetric distributions, the Hodges–Lehmann statistic has greater efficiency than does the sample median. For the normal distribution, the Hodges–Lehmann statistic is nearly as efficient as the sample mean. For the Cauchy distribution (the Student t-distribution with one degree of freedom), the Hodges–Lehmann statistic is infinitely more efficient than the sample mean, which is not a consistent estimator of the median.[4]
For non-symmetric populations, the Hodges–Lehmann statistic estimates the population's "pseudo-median", a location parameter that is closely related to the median. The difference between the median and pseudo-median is relatively small, and so this distinction is neglected in elementary discussions. Like the spatial median, the pseudo-median is well defined for all distributions of random variables having dimension two or greater; for one-dimensional distributions, a pseudo-median exists but need not be unique. Like the median, the pseudo-median is defined even for heavy-tailed distributions that lack any (finite) mean.
The one-sample Hodges–Lehmann statistic need not estimate any population mean, which for many distributions does not exist. The two-sample Hodges–Lehmann estimator need not estimate the difference of two means or the difference of two (pseudo-)medians; rather, it estimates the median of the differences between paired random variables drawn respectively from the two populations.[3]
The Hodges–Lehmann univariate statistics have several generalizations in multivariate statistics: