Median absolute deviation explained

In statistics, the median absolute deviation (MAD) is a robust measure of the variability of a univariate sample of quantitative data. It can also refer to the population parameter that is estimated by the MAD calculated from a sample.[1]

For a univariate data set X_1, X_2, ..., X_n, the MAD is defined as the median of the absolute deviations from the data's median

\tilde{X} = \operatorname{median}(X):

\operatorname{MAD} = \operatorname{median}(|X_i - \tilde{X}|)

that is, starting with the residuals (deviations) from the data's median, the MAD is the median of their absolute values.

Example

Consider the data (1, 1, 2, 2, 4, 6, 9). It has a median value of 2. The absolute deviations about 2 are (1, 1, 0, 0, 2, 4, 7) which in turn have a median value of 1 (because the sorted absolute deviations are (0, 0, 1, 1, 2, 4, 7)). So the median absolute deviation for this data is 1.
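
As an illustration, here is a minimal Python sketch (assuming NumPy is available; the helper name `mad` is ours) that reproduces this computation:

```python
import numpy as np

def mad(x):
    """Median absolute deviation: median of |x_i - median(x)|."""
    x = np.asarray(x, dtype=float)
    return np.median(np.abs(x - np.median(x)))

data = [1, 1, 2, 2, 4, 6, 9]
print(mad(data))  # 1.0, matching the worked example above
```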

Uses

The median absolute deviation is a measure of statistical dispersion. Moreover, the MAD is a robust statistic, being more resilient to outliers in a data set than the standard deviation. In the standard deviation, the distances from the mean are squared, so large deviations are weighted more heavily, and thus outliers can heavily influence it. In the MAD, the deviations of a small number of outliers are irrelevant.

Because the MAD is a more robust estimator of scale than the sample variance or standard deviation, it works better with distributions without a mean or variance, such as the Cauchy distribution.
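
A short sketch (again assuming NumPy) illustrates this robustness: a single extreme value inflates the sample standard deviation by orders of magnitude, while the MAD barely changes.

```python
import numpy as np

clean = np.array([1, 1, 2, 2, 4, 6, 9], dtype=float)
dirty = np.append(clean, 1000.0)  # one gross outlier

for name, x in [("clean", clean), ("with outlier", dirty)]:
    sd = np.std(x, ddof=1)
    mad = np.median(np.abs(x - np.median(x)))
    print(f"{name:13s}  std = {sd:8.2f}   MAD = {mad:.2f}")
# The standard deviation explodes; the MAD moves only slightly.
```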

Relation to standard deviation

In order to use the MAD as a consistent estimator for the estimation of the standard deviation \sigma, one takes

\hat{\sigma} = k \cdot \operatorname{MAD},

where k is a constant scale factor, which depends on the distribution.[2]

For normally distributed data, k is taken to be

k = 1/\left(\Phi^{-1}(3/4)\right) \approx 1/0.67449 \approx 1.4826,

i.e., the reciprocal of the quantile function \Phi^{-1} (also known as the inverse of the cumulative distribution function) for the standard normal distribution Z = (X-\mu)/\sigma.[3] [4]
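
As a sketch of how this is used in practice (assuming NumPy; the function name `robust_sigma` is ours, and the constant 1.4826 is the k derived below), a robust estimate of \sigma can be computed from the MAD:

```python
import numpy as np

def robust_sigma(x, k=1.4826):
    """Estimate sigma as k * MAD, with k chosen for normally distributed data."""
    x = np.asarray(x, dtype=float)
    return k * np.median(np.abs(x - np.median(x)))

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=3.0, size=100_000)
print(robust_sigma(sample))       # close to the true sigma of 3.0
print(np.std(sample, ddof=1))     # classical estimate, for comparison
```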

Derivation

The argument 3/4 is such that \pm\operatorname{MAD} covers 50% (between 1/4 and 3/4) of the standard normal cumulative distribution function, i.e.

1/2 = P(|X-\mu| \le \operatorname{MAD}) = P\left(\left|\frac{X-\mu}{\sigma}\right| \le \frac{\operatorname{MAD}}{\sigma}\right) = P\left(|Z| \le \frac{\operatorname{MAD}}{\sigma}\right).

Therefore, we must have that

\Phi\left(\operatorname{MAD}/\sigma\right) - \Phi\left(-\operatorname{MAD}/\sigma\right) = 1/2.

Noticing that

\Phi\left(-\operatorname{MAD}/\sigma\right) = 1 - \Phi\left(\operatorname{MAD}/\sigma\right),

we have that

\operatorname{MAD}/\sigma = \Phi^{-1}(3/4) \approx 0.67449,

from which we obtain the scale factor

k = 1/\Phi^{-1}(3/4) \approx 1.4826.

Another way of establishing the relationship is noting that MAD equals the half-normal distribution median:

\operatorname{MAD} = \sigma\sqrt{2}\,\operatorname{erf}^{-1}(1/2) \approx 0.67449\,\sigma.

This form is used in, e.g., the probable error.
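
As a quick numerical check of the derivation (a sketch assuming SciPy is available), both expressions for the constant agree:

```python
import numpy as np
from scipy.stats import norm
from scipy.special import erfinv

q = norm.ppf(0.75)                # Phi^{-1}(3/4) ≈ 0.67449
print(q, 1.0 / q)                 # 0.67449..., 1.4826...
print(np.sqrt(2) * erfinv(0.5))   # same 0.67449, via the half-normal median form
```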

In the case of complex values (X+iY), the relation of MAD to the standard deviation is unchanged for normally distributed data.

MAD using geometric median

Analogously to how the median generalizes to the geometric median (gm) in multivariate data, the MAD can be generalized to the MADGM (median of distances to the gm) in n dimensions. This is done by replacing the absolute differences in one dimension with Euclidean distances of the data points to the geometric median in n dimensions.[5] In one dimension this gives the same result as the univariate MAD, and it generalizes to any number of dimensions. The MADGM requires the geometric median to be found, which is done by an iterative process.
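
A minimal Python sketch of this idea (assuming NumPy; the geometric median is computed here with a plain Weiszfeld iteration, one common choice of iterative process, and the helper names are ours):

```python
import numpy as np

def geometric_median(points, tol=1e-8, max_iter=1000):
    """Weiszfeld iteration for the geometric median of an (n, d) array."""
    y = points.mean(axis=0)                 # starting guess: the centroid
    for _ in range(max_iter):
        d = np.linalg.norm(points - y, axis=1)
        d = np.where(d < tol, tol, d)       # guard against division by zero
        y_new = (points / d[:, None]).sum(axis=0) / (1.0 / d).sum()
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y

def madgm(points):
    """Median of Euclidean distances from the points to their geometric median."""
    gm = geometric_median(points)
    return np.median(np.linalg.norm(points - gm, axis=1))

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
print(madgm(X))
```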

The population MAD

The population MAD is defined analogously to the sample MAD, but is based on the complete distribution rather than on a sample. For a symmetric distribution with zero mean, the population MAD is the 75th percentile of the distribution.

Unlike the variance, which may be infinite or undefined, the population MAD is always a finite number. For example, the standard Cauchy distribution has undefined variance, but its MAD is 1.
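
For instance, a minimal check with SciPy (used here purely for illustration): since the standard Cauchy distribution is symmetric about 0, its population MAD is its 75th percentile, which equals 1.

```python
from scipy.stats import cauchy

# The 75th percentile of the standard Cauchy distribution is tan(pi/4) = 1,
# so its population MAD is 1 even though its variance is undefined.
print(cauchy.ppf(0.75))  # 1.0
```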

The earliest known mention of the concept of the MAD occurred in 1816, in a paper by Carl Friedrich Gauss on the determination of the accuracy of numerical observations.[6] [7]

References

  1. Dodge, Yadolah (2010). The Concise Encyclopedia of Statistics. New York: Springer. ISBN 978-0-387-32833-1.
  2. Rousseeuw, P. J.; Croux, C. (1993). "Alternatives to the median absolute deviation". Journal of the American Statistical Association. 88 (424): 1273–1283. doi:10.1080/01621459.1993.10476408. hdl:2027.42/142454.
  3. Ruppert, D. (2010). Statistics and Data Analysis for Financial Engineering. Springer. p. 118. ISBN 9781441977878.
  4. Leys, C.; et al. (2013). "Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median". Journal of Experimental Social Psychology. 49 (4): 764–766. doi:10.1016/j.jesp.2013.03.013.
  5. Spacek, Libor. "Rstats - Rust Implementation of Statistical Measures, Vector Algebra, Geometric Median, Data Analysis and Machine Learning". crates.io. Retrieved 26 July 2022.
  6. Gauss, Carl Friedrich (1816). "Bestimmung der Genauigkeit der Beobachtungen". Zeitschrift für Astronomie und Verwandte Wissenschaften. 1: 187–197.
  7. Walker, Helen (1931). Studies in the History of the Statistical Method. Baltimore, MD: Williams & Wilkins Co. pp. 24–25.