Signal-to-noise statistic explained

In mathematics, the signal-to-noise statistic distance between two vectors a and b with mean values \mu_a and \mu_b and standard deviations \sigma_a and \sigma_b, respectively, is:

D_{sn} = \frac{\mu_a - \mu_b}{\sigma_a + \sigma_b}
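As an illustration, here is a minimal Python sketch of the computation; the function name snr_distance and the use of the population standard deviation are choices made for this example only, not prescribed by the definition.

    import numpy as np

    def snr_distance(a, b):
        # Signal-to-noise statistic distance: (mean(a) - mean(b)) / (std(a) + std(b))
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        mu_a, mu_b = a.mean(), b.mean()
        # Population standard deviation (ddof=0); ddof=1 would give the sample estimate.
        sigma_a, sigma_b = a.std(), b.std()
        return (mu_a - mu_b) / (sigma_a + sigma_b)

    # Example: two groups of measurements
    a = [2.1, 2.4, 1.9, 2.2, 2.0]
    b = [1.2, 1.0, 1.4, 1.1, 1.3]
    print(snr_distance(a, b))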

In the case of Gaussian-distributed data and unbiased class distributions (equal class priors), this statistic can be related to classification accuracy under an ideal linear discriminant, and a decision boundary can be derived.[1]
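For example, assuming one-dimensional Gaussian class-conditional densities with equal class priors, one simple choice of boundary is the threshold that lies the same number of class-specific standard deviations from each mean:

x^{*} = \frac{\sigma_b \mu_a + \sigma_a \mu_b}{\sigma_a + \sigma_b}

so that (for \mu_a > \mu_b)

\frac{\mu_a - x^{*}}{\sigma_a} = \frac{x^{*} - \mu_b}{\sigma_b} = D_{sn}

and the accuracy of classifying by thresholding at x^{*} is \Phi(D_{sn}), where \Phi is the standard normal cumulative distribution function. This is one illustrative construction rather than the specific derivation of the cited work.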

This distance is frequently used to identify vectors that differ significantly between two classes. One common use is in bioinformatics, to locate genes that are differentially expressed in microarray experiments.[2][3][4]
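As a sketch of that use, the following Python fragment ranks the rows of a hypothetical expression matrix (genes by samples) by the absolute value of the statistic between two groups of samples; the names expression, group_a, and group_b are placeholders for this example.

    import numpy as np

    def rank_genes_by_snr(expression, group_a, group_b):
        # expression: 2-D array, one row per gene, one column per sample.
        # group_a, group_b: lists of column indices for the two classes.
        # Returns gene indices sorted from most to least differential, plus the scores.
        a = expression[:, group_a]
        b = expression[:, group_b]
        d_sn = (a.mean(axis=1) - b.mean(axis=1)) / (a.std(axis=1) + b.std(axis=1))
        return np.argsort(-np.abs(d_sn)), d_sn

    # Toy example: 4 genes, 6 samples (columns 0-2 in class A, columns 3-5 in class B)
    expression = np.array([
        [5.0, 5.2, 4.9, 1.1, 1.0, 1.2],   # strongly differential
        [2.0, 2.1, 1.9, 2.0, 2.2, 1.8],   # not differential
        [0.5, 0.4, 0.6, 3.0, 3.2, 2.9],   # strongly differential (opposite direction)
        [1.0, 1.5, 0.8, 1.2, 1.4, 0.9],   # weakly differential
    ])
    order, scores = rank_genes_by_snr(expression, [0, 1, 2], [3, 4, 5])
    print(order, scores)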

Notes and References

  1. Auffarth, B., Lopez, M., Cerquides, J. (2010). Comparison of redundancy and relevance measures for feature selection in tissue classification of CT images. Advances in Data Mining. Applications and Theoretical Aspects, pp. 248–262. Springer.
  2. Golub, T.R. et al. (1999). Molecular Classification of Cancer: Class Discovery and Class Prediction by Gene Expression Monitoring. Science 286, 531–537.
  3. Slonim, D.K. et al. (2000). Class Prediction and Discovery Using Gene Expression Data. Proceedings of the Fourth Annual International Conference on Computational Molecular Biology, Tokyo, Japan, April 8–11, pp. 263–272.
  4. Pomeroy, S.L. et al. (2002). Gene Expression-Based Classification and Outcome Prediction of Central Nervous System Embryonal Tumors. Nature 415, 436–442.