Sensitivity and specificity explained

In medicine and statistics, sensitivity and specificity mathematically describe the accuracy of a test that reports the presence or absence of a medical condition. If individuals who have the condition are considered "positive" and those who do not are considered "negative", then sensitivity is a measure of how well a test can identify true positives and specificity is a measure of how well a test can identify true negatives.

If the true status of the condition cannot be known, sensitivity and specificity can be defined relative to a "gold standard test" which is assumed correct. For all tests, both diagnostic and screening, there is usually a trade-off between sensitivity and specificity, such that higher sensitivity means lower specificity and vice versa.

A test which reliably detects the presence of a condition, resulting in a high number of true positives and low number of false negatives, will have a high sensitivity. This is especially important when the consequence of failing to treat the condition is serious and/or the treatment is very effective and has minimal side effects.

A test which reliably excludes individuals who do not have the condition, resulting in a high number of true negatives and low number of false positives, will have a high specificity. This is especially important when people who are identified as having a condition may be subjected to more testing, expense, stigma, anxiety, etc.

The terms "sensitivity" and "specificity" were introduced by American biostatistician Jacob Yerushalmy in 1947.[1]

There are different definitions within laboratory quality control, wherein "analytical sensitivity" is defined as the smallest amount of substance in a sample that can accurately be measured by an assay (synonymous with detection limit), and "analytical specificity" is defined as the ability of an assay to measure one particular organism or substance, rather than others.[2] However, this article deals with diagnostic sensitivity and specificity as defined above.

Application to screening study

Imagine a study evaluating a test that screens people for a disease. Each person taking the test either has or does not have the disease. The test outcome can be positive (classifying the person as having the disease) or negative (classifying the person as not having the disease). The test results for each subject may or may not match the subject's actual status. In that setting:

- True positive: sick people correctly identified as sick
- False positive: healthy people incorrectly identified as sick
- True negative: healthy people correctly identified as healthy
- False negative: sick people incorrectly identified as healthy

After getting the numbers of true positives, false positives, true negatives, and false negatives, the sensitivity and specificity for the test can be calculated. If it turns out that the sensitivity is high then any person who has the disease is likely to be classified as positive by the test. On the other hand, if the specificity is high, any person who does not have the disease is likely to be classified as negative by the test. An NIH web site has a discussion of how these ratios are calculated.[3]
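
As a minimal sketch, these calculations can be written in a few lines of Python; the outcome counts below are hypothetical, standing in for the results of such a screening study:

```python
# Minimal sketch: sensitivity and specificity from the four outcome counts.
# All counts here are hypothetical.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

tp, fp, tn, fn = 90, 30, 870, 10  # hypothetical screening results
print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # 90 / 100 = 90.0%
print(f"specificity = {specificity(tn, fp):.1%}")  # 870 / 900 = 96.7%
```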

Definition

Sensitivity

Consider the example of a medical test for diagnosing a condition. Sensitivity (sometimes also named the detection rate in a clinical setting) refers to the test's ability to correctly identify ill patients among those who actually have the condition.[4] Mathematically, this can be expressed as:

\begin{align}
\text{sensitivity} &= \frac{\text{number of true positives}}{\text{number of true positives} + \text{number of false negatives}} \\[8pt]
&= \frac{\text{number of true positives}}{\text{total number of sick individuals in population}} \\[8pt]
&= \text{probability of a positive test, given that the patient has the disease}
\end{align}

A negative result in a test with high sensitivity can be useful for "ruling out" disease, since it rarely misdiagnoses those who do have the disease. A test with 100% sensitivity will recognize all patients with the disease by testing positive. In this case, a negative test result would definitively rule out the presence of the disease in a patient. However, a positive result in a test with high sensitivity is not necessarily useful for "ruling in" disease. Suppose a 'bogus' test kit is designed to always give a positive reading. When used on diseased patients, all patients test positive, giving the test 100% sensitivity. However, sensitivity does not take into account false positives. The bogus test also returns positive on all healthy patients, giving it a false positive rate of 100%, rendering it useless for detecting or "ruling in" the disease.

The calculation of sensitivity does not take into account indeterminate test results. If a test cannot be repeated, indeterminate samples should either be excluded from the analysis (the number of exclusions should be stated when quoting sensitivity) or be treated as false negatives (which gives the worst-case value for sensitivity and may therefore underestimate it).

A test with a higher sensitivity has a lower type II error rate.

Specificity

Consider the example of a medical test for diagnosing a disease. Specificity refers to the test's ability to correctly reject healthy patients who do not have the condition. Mathematically, this can be written as:

\begin{align}
\text{specificity} &= \frac{\text{number of true negatives}}{\text{number of true negatives} + \text{number of false positives}} \\[8pt]
&= \frac{\text{number of true negatives}}{\text{total number of well individuals in population}} \\[8pt]
&= \text{probability of a negative test, given that the patient is well}
\end{align}

A positive result in a test with high specificity can be useful for "ruling in" disease, since the test rarely gives positive results in healthy patients.[5] A test with 100% specificity will recognize all patients without the disease by testing negative, so a positive test result would definitively rule in the presence of the disease. However, a negative result from a test with high specificity is not necessarily useful for "ruling out" disease. For example, a test that always returns a negative test result will have a specificity of 100% because specificity does not consider false negatives. A test like that would return negative for patients with the disease, making it useless for "ruling out" the disease.

A test with a higher specificity has a lower type I error rate.

Graphical illustration

The above graphical illustration is meant to show the relationship between sensitivity and specificity. The black, dotted line in the center of the graph is where the sensitivity and specificity are the same. As one moves to the left of the black dotted line, the sensitivity increases, reaching its maximum value of 100% at line A, and the specificity decreases. The sensitivity at line A is 100% because at that point there are zero false negatives, meaning that all the negative test results are true negatives. When moving to the right, the opposite applies: the specificity increases until it reaches line B, where it becomes 100%, and the sensitivity decreases. The specificity at line B is 100% because the number of false positives is zero at that line, meaning all the positive test results are true positives.

The middle solid line in both figures showing the level of sensitivity and specificity is the test cutoff point. As previously described, moving this line results in a trade-off between sensitivity and specificity. The left-hand side of this line contains the data points that test below the cutoff point and are considered negative (the blue dots indicate false negatives (FN), the white dots true negatives (TN)). The right-hand side of the line shows the data points that test above the cutoff point and are considered positive (red dots indicate false positives (FP)). Each side contains 40 data points.

For the figure that shows high sensitivity and low specificity, there are 3 FN and 8 FP. Using the fact that positive results = true positives (TP) + FP, we get TP = positive results - FP, or TP = 40 - 8 = 32. The number of sick people in the data set is equal to TP + FN, or 32 + 3 = 35. The sensitivity is therefore 32 / 35 = 91.4%. Using the same method, we get TN = 40 - 3 = 37, and the number of healthy people 37 + 8 = 45, which results in a specificity of 37 / 45 = 82.2%.

For the figure that shows low sensitivity and high specificity, there are 8 FN and 3 FP. Using the same method as the previous figure, we get TP = 40 - 3 = 37. The number of sick people is 37 + 8 = 45, which gives a sensitivity of 37 / 45 = 82.2%. There are 40 - 8 = 32 TN, and the number of healthy people is 32 + 3 = 35. The specificity therefore comes out to 32 / 35 = 91.4%.
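
The same arithmetic can be checked with a short Python sketch; the counts (3 FN and 8 FP, with 40 points on each side of the cutoff) come from the first figure described above:

```python
# Re-deriving the numbers for the high-sensitivity / low-specificity figure:
# 40 data points on each side of the cutoff, with 3 FN and 8 FP.
fn, fp = 3, 8
tp = 40 - fp  # positive results = TP + FP, so TP = 40 - 8 = 32
tn = 40 - fn  # negative results = TN + FN, so TN = 40 - 3 = 37
print(f"sensitivity = {tp / (tp + fn):.1%}")  # 32 / 35 = 91.4%
print(f"specificity = {tn / (tn + fp):.1%}")  # 37 / 45 = 82.2%
```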

The red dot indicates the patient with the medical condition. The red background indicates the area where the test predicts the data point to be positive. There are 6 true positives and 0 false negatives in this figure (because every positive condition is correctly predicted as positive). Therefore, the sensitivity is 100% (from 6 / (6 + 0)). This situation is also illustrated in the previous figure where the dotted line is at position A (the left-hand side is predicted as negative by the model, the right-hand side is predicted as positive by the model). When the dotted line, the test cutoff line, is at position A, the test correctly predicts the entire true positive class, but it will fail to correctly identify the data points from the true negative class.

Similar to the previously explained figure, the red dot indicates the patient with the medical condition. However, in this case, the green background indicates that the test predicts that all patients are free of the medical condition. The number of true negatives is then 26, and the number of false positives is 0. This results in 100% specificity (from 26 / (26 + 0)). Therefore, sensitivity or specificity alone cannot be used to measure the performance of the test.

Medical usage

In medical diagnosis, test sensitivity is the ability of a test to correctly identify those with the disease (true positive rate), whereas test specificity is the ability of the test to correctly identify those without the disease (true negative rate). If 100 patients known to have a disease were tested, and 43 test positive, then the test has 43% sensitivity. If 100 with no disease are tested and 96 return a completely negative result, then the test has 96% specificity. Sensitivity and specificity are prevalence-independent test characteristics, as their values are intrinsic to the test and do not depend on the disease prevalence in the population of interest.[6] Positive and negative predictive values, but not sensitivity or specificity, are influenced by the prevalence of disease in the population being tested. These concepts are illustrated graphically in this applet Bayesian clinical diagnostic model, which shows the positive and negative predictive values as a function of the prevalence, sensitivity and specificity.
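
A short Python sketch makes the prevalence dependence concrete; the sensitivity, specificity, and prevalences below are hypothetical, chosen only to illustrate how the predictive values move:

```python
# Sketch: positive and negative predictive values from prevalence,
# sensitivity, and specificity via Bayes' theorem. Inputs are hypothetical.

def predictive_values(prevalence: float, sens: float, spec: float):
    tp = prevalence * sens              # P(diseased and test positive)
    fn = prevalence * (1 - sens)        # P(diseased and test negative)
    fp = (1 - prevalence) * (1 - spec)  # P(healthy and test positive)
    tn = (1 - prevalence) * spec        # P(healthy and test negative)
    return tp / (tp + fp), tn / (tn + fn)  # PPV, NPV

# The same 90%-sensitive, 96.7%-specific test at two prevalences:
for prev in (0.50, 0.01):
    ppv, npv = predictive_values(prev, 0.90, 0.967)
    print(f"prevalence {prev:.0%}: PPV = {ppv:.1%}, NPV = {npv:.1%}")
# At 50% prevalence the PPV is about 96%; at 1% it drops to roughly 22%,
# even though sensitivity and specificity are unchanged.
```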

Misconceptions

It is often claimed that a highly specific test is effective at ruling in a disease when positive, while a highly sensitive test is deemed effective at ruling out a disease when negative.[7][8] This has led to the widely used mnemonics SPPIN and SNNOUT, according to which a highly specific test, when positive, rules in disease (SP-P-IN), and a highly sensitive test, when negative, rules out disease (SN-N-OUT). Both rules of thumb are, however, inferentially misleading, as the diagnostic power of any test is determined by the prevalence of the condition being tested, the test's sensitivity, and its specificity.[9][10][11] The SNNOUT mnemonic has some validity when the prevalence of the condition in question is extremely low in the tested sample.

The trade-off between specificity and sensitivity is explored in ROC analysis as a trade-off between TPR and FPR (that is, recall and fallout).[12] Giving them equal weight optimizes informedness = specificity + sensitivity − 1 = TPR − FPR, the magnitude of which gives the probability of an informed decision between the two classes (> 0 represents appropriate use of information, 0 represents chance-level performance, < 0 represents perverse use of information).[13]
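
As a sketch, informedness is straightforward to compute from sensitivity and specificity; the inputs below reuse the values from the figures and the 'bogus' always-positive test discussed earlier:

```python
# Sketch: informedness (Youden's J) from sensitivity and specificity.
def informedness(sens: float, spec: float) -> float:
    # J = sensitivity + specificity - 1 = TPR - FPR
    return sens + spec - 1

print(informedness(0.914, 0.822))  # ~0.74 for the first figure above
print(informedness(1.0, 0.0))      # 0.0: the always-positive 'bogus' test
                                   # performs at chance level
```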

Sensitivity index

The sensitivity index or d′ (pronounced "dee-prime") is a statistic used in signal detection theory. It provides the separation between the means of the signal and the noise distributions, compared against the standard deviation of the noise distribution. For normally distributed signal and noise with means \mu_S and \mu_N and standard deviations \sigma_S and \sigma_N, respectively, d′ is defined as:

d' = \frac{\mu_S - \mu_N}{\sqrt{\frac{1}{2}\left(\sigma_S^2 + \sigma_N^2\right)}}[14]

An estimate of d′ can be also found from measurements of the hit rate and false-alarm rate. It is calculated as:

d′ = Z(hit rate) − Z(false alarm rate),[15]

where function Z(p), p ∈ [0, 1], is the inverse of the cumulative Gaussian distribution.

d′ is a dimensionless statistic. A higher d′ indicates that the signal can be more readily detected.
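
A minimal Python sketch of this estimate, using SciPy's inverse cumulative Gaussian as the Z function in the text (the hit and false-alarm rates below are made up):

```python
# Sketch: estimating d' from hit and false-alarm rates. SciPy's norm.ppf
# is the inverse cumulative Gaussian, i.e. the Z function in the text.
from scipy.stats import norm

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Hypothetical rates: Z(0.84) is about +1 and Z(0.16) about -1, so d' ~ 2.
print(d_prime(0.84, 0.16))
```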

Confusion matrix

See main article: Confusion matrix.

The relationship between sensitivity, specificity, and similar terms can be understood using the following table. Consider a group with P positive instances and N negative instances of some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as well as derivations of several metrics using the four outcomes, as follows:

                          Condition positive      Condition negative
  Test outcome positive   True positive (TP)      False positive (FP)
  Test outcome negative   False negative (FN)     True negative (TN)

Sensitivity = TP / (TP + FN), specificity = TN / (TN + FP).

Estimation of errors in quoted sensitivity or specificity

Sensitivity and specificity values alone may be highly misleading. The 'worst-case' sensitivity or specificity must be calculated in order to avoid reliance on experiments with few results. For example, a particular test may easily show 100% sensitivity if tested against the gold standard four times, but a single additional test against the gold standard that gave a poor result would imply a sensitivity of only 80%. A common way to do this is to state the binomial proportion confidence interval, often calculated using a Wilson score interval.

Confidence intervals for sensitivity and specificity can be calculated, giving the range of values within which the correct value lies at a given confidence level (e.g., 95%).[16]
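
As a sketch, the Wilson score interval can be computed by hand; the example below reuses the four-out-of-four scenario above, showing how little a perfect result on four subjects actually constrains the true sensitivity:

```python
# Sketch: 95% Wilson score interval for a quoted sensitivity, by hand.
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Four true positives out of four diseased subjects: a "100%" sensitivity.
lo, hi = wilson_interval(4, 4)
print(f"95% CI: {lo:.1%} to {hi:.1%}")  # roughly 51% to 100%
```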

Terminology in information retrieval

In information retrieval, the positive predictive value is called precision, and sensitivity is called recall. Unlike the specificity vs. sensitivity trade-off, these measures are both independent of the number of true negatives, which is generally unknown and much larger than the actual numbers of relevant and retrieved documents. This assumption of very large numbers of true negatives relative to positives is rare in other applications.

The F-score can be used as a single measure of performance of the test for the positive class. The F-score is the harmonic mean of precision and recall:

F = 2 \times \frac{\text{precision} \times \text{recall}}{\text{precision} + \text{recall}}
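
A minimal sketch in Python, reusing the hypothetical screening counts from the earlier example:

```python
# Sketch: F-score as the harmonic mean of precision and recall,
# reusing the hypothetical screening counts from earlier.
tp, fp, fn = 90, 30, 10
precision = tp / (tp + fp)  # positive predictive value: 0.75
recall = tp / (tp + fn)     # sensitivity: 0.90
f_score = 2 * precision * recall / (precision + recall)
print(f"F = {f_score:.3f}")  # about 0.818
```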

In the traditional language of statistical hypothesis testing, the sensitivity of a test is called the statistical power of the test, although the word power in that setting has a more general usage that is not applicable here. A sensitive test will have fewer type II errors.

Terminology in genome analysis

Similarly to the domain of information retrieval, in the research area of gene prediction, the number of true negatives (non-genes) in genomic sequences is generally unknown and much larger than the actual number of genes (true positives). In this research area, the convenient and intuitively understood term specificity has frequently been used with the mathematical formula for precision, and sensitivity with that for recall, as defined in biostatistics. The pair of thus defined specificity (as positive predictive value) and sensitivity (true positive rate) represents the major parameters characterizing the accuracy of gene prediction algorithms.[17][18][19][20] Conversely, the term specificity in the sense of true negative rate would have little, if any, application in the genome analysis research area.


Notes and References

  1. Yerushalmy J (1947). "Statistical problems in assessing methods of medical diagnosis with special reference to x-ray techniques". Public Health Reports. 62 (2): 1432–39. doi:10.2307/4586294. PMID 20340527.
  2. Saah AJ, Hoover DR (1998). "Sensitivity and specificity revisited: significance of the terms in analytic and diagnostic language". Ann Dermatol Venereol. 125 (4): 291–4. PMID 9747274.
  3. Parikh R, Mathai A, Parikh S, Chandra Sekhar G, Thomas R (2008). "Understanding and using sensitivity, specificity and predictive values". Indian Journal of Ophthalmology. 56 (1): 45–50. doi:10.4103/0301-4738.37595. PMID 18158403.
  4. Altman DG, Bland JM (June 1994). "Diagnostic tests. 1: Sensitivity and specificity". BMJ. 308 (6943): 1552. doi:10.1136/bmj.308.6943.1552. PMID 8019315.
  5. "SpPin and SnNout". Centre for Evidence Based Medicine (CEBM). Retrieved 18 January 2023.
  6. Mangrulkar R. "Diagnostic Reasoning I and II". Archived from the original on 1 August 2011 (https://web.archive.org/web/20110801200357/https://open.umich.edu/education/med/m1/patientspop-decisionmaking/2010/materials). Retrieved 24 January 2012.
  7. "Evidence-Based Diagnosis". Michigan State University. Archived from the original on 2013-07-06 (https://web.archive.org/web/20130706035232/http://omerad.msu.edu/ebm/Diagnosis/Diagnosis4.html). Retrieved 2013-08-23.
  8. "Sensitivity and Specificity". Emory University Medical School Evidence Based Medicine course.
  9. Baron JA (1994). "Too bad it isn't true". Medical Decision Making. 14 (2): 107. doi:10.1177/0272989X9401400202. PMID 8028462.
  10. Boyko EJ (1994). "Ruling out or ruling in disease with the most sensitive or specific diagnostic test: short cut or wrong turn?". Medical Decision Making. 14 (2): 175–9. doi:10.1177/0272989X9401400210. PMID 8028470.
  11. Pewsner D, Battaglia M, Minder C, Marx A, Bucher HC, Egger M (July 2004). "Ruling a diagnosis in or out with 'SpPIn' and 'SnNOut': a note of caution". BMJ. 329 (7459): 209–13. doi:10.1136/bmj.329.7459.209. PMID 15271832.
  12. Fawcett T (2006). "An Introduction to ROC Analysis". Pattern Recognition Letters. 27 (8): 861–874. doi:10.1016/j.patrec.2005.10.010.
  13. Powers DMW (2011). "Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation". Journal of Machine Learning Technologies. 2 (1): 37–63.
  14. Gale SD, Perkel DJ (January 2010). "A basal ganglia pathway drives selective auditory responses in songbird dopaminergic neurons via disinhibition". The Journal of Neuroscience. 30 (3): 1027–37. doi:10.1523/JNEUROSCI.3585-09.2010. PMID 20089911.
  15. Macmillan NA, Creelman CD (2004). Detection Theory: A User's Guide. Psychology Press. p. 7. ISBN 978-1-4106-1114-7.
  16. "Diagnostic test online calculator calculates sensitivity, specificity, likelihood ratios and predictive values from a 2x2 table – calculator of confidence intervals for predictive parameters". medcalc.org.
  17. Burge C, Karlin S (1997). "Prediction of complete gene structures in human genomic DNA". Journal of Molecular Biology. 268 (1): 78–94. doi:10.1006/jmbi.1997.0951. PMID 9149143.
  18. Lomsadze A, et al. (2005). "Gene finding in novel genomes by self-training algorithm". Nucleic Acids Research. 33 (20): 6494–6506. doi:10.1093/nar/gki937. PMID 16314312.
  19. Korf I (2004). "Gene finding in novel genomes". BMC Bioinformatics. 5: 59. doi:10.1186/1471-2105-5-59. PMID 15144565.
  20. Yandell M, Ence D (April 2012). "A beginner's guide to eukaryotic genome annotation". Nature Reviews Genetics. 13 (5): 329–42. doi:10.1038/nrg3174. PMID 22510764.