Confidence interval

Informally, in frequentist statistics, a confidence interval (CI) is an interval which is expected to typically contain the parameter being estimated. More specifically, given a confidence level $\gamma$ (95% and 99% are typical values), a CI is a random interval which contains the parameter being estimated $\gamma$% of the time.[1][2] The confidence level, degree of confidence or confidence coefficient represents the long-run proportion of CIs (at the given confidence level) that theoretically contain the true value of the parameter; this is tantamount to the nominal coverage probability. For example, out of all intervals computed at the 95% level, 95% of them should contain the parameter's true value.[3]

Factors affecting the width of the CI include the sample size, the variability in the sample, and the confidence level.[4] All else being the same, a larger sample produces a narrower confidence interval, greater variability in the sample produces a wider confidence interval, and a higher confidence level produces a wider confidence interval.[5]
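These effects can be made concrete with a small calculation. The following sketch (illustrative only, not from the sources cited above) computes the half-width $z_{1-\alpha/2}\,s/\sqrt{n}$ of the usual normal-theory interval for a mean under varying sample size, variability, and confidence level; all numbers are made up:

```python
# Half-width of a normal-theory CI for a mean: z * s / sqrt(n).
# Illustrates the three dependencies described above.
from scipy.stats import norm

def half_width(s: float, n: int, level: float) -> float:
    z = norm.ppf(0.5 + level / 2)  # e.g. 1.96 for a 95% level
    return z * s / n ** 0.5

print(half_width(s=10, n=25, level=0.95))   # baseline
print(half_width(s=10, n=100, level=0.95))  # larger sample  -> narrower
print(half_width(s=20, n=25, level=0.95))   # more variable  -> wider
print(half_width(s=10, n=25, level=0.99))   # higher level   -> wider
```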

Definition

Let $X$ be a random sample from a probability distribution with statistical parameter $\theta$, which is a quantity to be estimated, and $\varphi$, representing quantities that are not of immediate interest. A confidence interval for the parameter $\theta$, with confidence level or coefficient $\gamma$, is an interval $(u(X), v(X))$ determined by random variables $u(X)$ and $v(X)$ with the property:

$$P(u(X) < \theta < v(X)) = \gamma \quad \text{for every } (\theta, \varphi).$$

The number $\gamma$, whose typical value is close to but not greater than 1, is sometimes given in the form $1 - \alpha$ (or as a percentage $100\%(1 - \alpha)$), where $\alpha$ is a small positive number, often 0.05.

It is important for the bounds $u(X)$ and $v(X)$ to be specified in such a way that as long as $X$ is collected randomly, every time we compute a confidence interval, there is probability $\gamma$ that it would contain $\theta$, the true value of the parameter being estimated. This should hold true for any actual $\theta$ and $\varphi$.[2]
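The defining property can be checked empirically. The following sketch (an illustration under assumed normal data, using the standard Student-t interval derived in the Example section below) repeatedly draws samples and counts how often the computed interval covers the true $\theta$:

```python
# Empirical check of P(u(X) < theta < v(X)) = gamma for the t-interval.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
theta, sigma, n, gamma = 5.0, 2.0, 30, 0.95

trials, hits = 10_000, 0
for _ in range(trials):
    x = rng.normal(theta, sigma, n)
    u, v = stats.t.interval(gamma, df=n - 1, loc=x.mean(), scale=stats.sem(x))
    hits += u < theta < v

print(hits / trials)  # close to 0.95, as the definition requires
```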

Approximate confidence intervals

In many applications, confidence intervals that have exactly the required confidence level are hard to construct, but approximate intervals can be computed. The rule for constructing the interval may be accepted as providing a confidence interval at level $\gamma$ if

$$P(u(X) < \theta < v(X)) \approx \gamma \quad \text{for every } (\theta, \varphi)$$

to an acceptable level of approximation. Alternatively, some authors[6] simply require that

$$P(u(X) < \theta < v(X)) \ge \gamma \quad \text{for every } (\theta, \varphi),$$

which is useful if the probabilities are only partially identified or imprecise, and also when dealing with discrete distributions. Confidence limits of the form

$$P(u(X) < \theta) \ge \gamma \quad \text{and} \quad P(\theta < v(X)) \ge \gamma$$

are called conservative;[7] accordingly, one speaks of conservative confidence intervals and, in general, regions.
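A classic conservative construction is the Clopper–Pearson interval for a binomial proportion (reference [22] below), whose coverage is at least $\gamma$ for every true proportion. A minimal sketch using the beta-quantile form:

```python
# Clopper-Pearson (conservative) CI for a binomial proportion.
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

print(clopper_pearson(k=7, n=50))  # e.g. 7 successes in 50 trials
```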

Desired properties

When applying standard statistical procedures, there will often be standard ways of constructing confidence intervals. These will have been devised so as to meet certain desirable properties, which will hold given that the assumptions on which the procedure relies are true. These desirable properties may be described as: validity, optimality, and invariance.

Of the three, "validity" is most important, followed closely by "optimality". "Invariance" may be considered as a property of the method of derivation of a confidence interval, rather than of the rule for constructing the interval. In non-standard applications, these same desirable properties would be sought:

Validity

This means that the nominal coverage probability (confidence level) of the confidence interval should hold, either exactly or to a good approximation.

Optimality

This means that the rule for constructing the confidence interval should make as much use of the information in the data-set as possible.

One way of assessing optimality is by the width of the interval: a rule for constructing a confidence interval is judged better than another if it leads to intervals whose widths are typically shorter.

Invariance

In many applications, the quantity being estimated might not be tightly defined as such.

For example, a survey might result in an estimate of the median income in a population, but it might equally be considered as providing an estimate of the logarithm of the median income, given that this is a common scale for presenting graphical results. It would be desirable that the method used for constructing a confidence interval for the median income would give equivalent results when applied to constructing a confidence interval for the logarithm of the median income: specifically, the values at the ends of the latter interval would be the logarithms of the values at the ends of the former interval.

Methods of derivation

For non-standard applications, there are several routes that might be taken to derive a rule for the construction of confidence intervals. Established rules for standard procedures might be justified or explained via several of these routes. Typically a rule for constructing confidence intervals is closely tied to a particular way of finding a point estimate of the quantity being considered.

Summary statistics

This is closely related to the method of moments for estimation. A simple example arises where the quantity to be estimated is the population mean, in which case a natural estimate is the sample mean. Similarly, the sample variance can be used to estimate the population variance. A confidence interval for the true mean can be constructed centered on the sample mean with a width which is a multiple of the square root of the sample variance.
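A minimal sketch of this construction, using the normal-theory multiplier 1.96 for an approximate 95% interval (the data are illustrative):

```python
# CI for the mean centred on the sample mean, with width a multiple of
# the square root of the sample variance (divided by sqrt(n)).
import numpy as np

x = np.array([4.2, 5.1, 4.8, 5.6, 4.9, 5.3])  # made-up data
xbar = x.mean()
s = x.std(ddof=1)          # square root of the sample variance
c = 1.96                   # multiplier for an approximate 95% level
half = c * s / np.sqrt(len(x))
print(xbar - half, xbar + half)
```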

Likelihood theory

Estimates can be constructed using the maximum likelihood principle; the likelihood theory for this provides two ways of constructing confidence intervals or confidence regions for the estimates.
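As a sketch of the first of these (the Wald interval from the asymptotic normality of the maximum likelihood estimator; the second inverts the likelihood-ratio statistic instead), consider the rate of an exponential distribution, for which the Fisher information is $n/\lambda^2$:

```python
# Wald-type CI for an exponential rate from maximum likelihood theory.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.exponential(scale=1 / 3.0, size=200)  # true rate 3.0 (illustrative)

lam_hat = 1 / x.mean()                 # MLE of the rate
se = lam_hat / np.sqrt(len(x))         # from Fisher information n / lambda^2
z = norm.ppf(0.975)
print(lam_hat - z * se, lam_hat + z * se)
```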

Estimating equations

The estimation approach here can be considered as both a generalization of the method of moments and a generalization of the maximum likelihood approach. There are corresponding generalizations of the results of maximum likelihood theory that allow confidence intervals to be constructed based on estimates derived from estimating equations.

Hypothesis testing

If hypothesis tests are available for general values of a parameter, then confidence intervals/regions can be constructed by including in the confidence region all those points for which the hypothesis test of the null hypothesis that the true value is the given value is not rejected at a significance level of $1 - \gamma$.[7]
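As a sketch of this duality (illustrative, not from the cited sources), a one-sample t-test can be inverted numerically: every null value retained at level $\alpha$ lies in the confidence region, and the retained set reproduces the usual t-interval.

```python
# Confidence set by test inversion: keep all theta0 not rejected at level alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(10.0, 3.0, size=40)
alpha = 0.05

grid = np.linspace(x.mean() - 3, x.mean() + 3, 2001)
kept = [t0 for t0 in grid if stats.ttest_1samp(x, popmean=t0).pvalue >= alpha]
print(min(kept), max(kept))  # approximately the 95% t-interval endpoints
```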

Bootstrapping

In situations where the distributional assumptions for the above methods are uncertain or violated, resampling methods allow construction of confidence intervals or prediction intervals. The observed data distribution and the internal correlations are used as the surrogate for the correlations in the wider population.
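A minimal sketch of the percentile bootstrap for a mean (illustrative; the empirical distribution of the observed data stands in for the population, as described above):

```python
# Percentile bootstrap CI for the mean of skewed, non-normal data.
import numpy as np

rng = np.random.default_rng(3)
x = rng.lognormal(mean=0.0, sigma=1.0, size=100)  # made-up skewed data

boot_means = np.array([
    rng.choice(x, size=len(x), replace=True).mean()
    for _ in range(10_000)
])
print(np.percentile(boot_means, [2.5, 97.5]))  # 95% percentile interval
```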

Central limit theorem

The central limit theorem is a refinement of the law of large numbers. For a large number of independent identically distributed random variables $X_1, \ldots, X_n$ with finite variance, the average $\overline{X}_n$ approximately has a normal distribution, no matter what the distribution of the $X_i$ is, with the approximation roughly improving in proportion to $\sqrt{n}$.[2]
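The improving approximation can be watched directly. The following sketch (illustrative only) standardizes averages of $n$ skewed exponential variables and measures their distance from the normal distribution with a Kolmogorov–Smirnov statistic, which shrinks as $n$ grows:

```python
# The CLT in action: averages of skewed variables approach normality.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
for n in (2, 10, 50, 250):
    means = rng.exponential(size=(20_000, n)).mean(axis=1)
    z = (means - means.mean()) / means.std()
    print(n, stats.kstest(z, "norm").statistic)  # distance to normal shrinks
```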

Example

Suppose $\{X_1, \ldots, X_n\}$ is an independent sample from a normally distributed population with unknown parameters mean $\mu$ and variance $\sigma^2$. Let

$$\bar{X} = \frac{X_1 + \cdots + X_n}{n}, \qquad S^2 = \frac{1}{n-1} \sum_{i=1}^n (X_i - \bar{X})^2,$$

where $\bar{X}$ is the sample mean and $S^2$ is the sample variance. Then

$$T = \frac{\bar{X} - \mu}{S/\sqrt{n}}$$

has a Student's t distribution with $n - 1$ degrees of freedom.[8] Note that the distribution of $T$ does not depend on the values of the unobservable parameters $\mu$ and $\sigma^2$; i.e., it is a pivotal quantity. Suppose we wanted to calculate a 95% confidence interval for $\mu$. Then, denoting $c$ as the 97.5th percentile of this distribution,

$$P_T(-c \le T \le c) = 0.95.$$

Note that "97.5th" and "0.95" are correct in the preceding expressions. There is a 2.5% chance that

T

will be less than

-c

and a 2.5% chance that it will be larger than

+c.

Thus, the probability that

T

will be between

-c

and

+c

is 95%.

PT

is the probability measure under the student

t

distribution.

Consequently,

$$P_\mu\!\left(\bar{X} - \frac{cS}{\sqrt{n}} \le \mu \le \bar{X} + \frac{cS}{\sqrt{n}}\right) = 0.95,$$

and we have a theoretical (stochastic) 95% confidence interval for $\mu$. Here $P_\mu$ is the probability measure under the distribution with the unknown parameter $\mu$.

After observing the sample we find values $\bar{x}$ for $\bar{X}$ and $s$ for $S$, from which we compute the confidence interval

$$\left[\bar{x} - \frac{cs}{\sqrt{n}},\ \bar{x} + \frac{cs}{\sqrt{n}}\right].$$
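A minimal sketch of this computation (the data are made up; $c$ is obtained from the quantile function of the t distribution):

```python
# 95% t-interval for mu, following the derivation above.
import numpy as np
from scipy import stats

x = np.array([21.0, 19.4, 20.3, 22.1, 18.8, 20.9, 21.5, 19.9])
n = len(x)
xbar, s = x.mean(), x.std(ddof=1)      # observed values of X-bar and S
c = stats.t.ppf(0.975, df=n - 1)       # 97.5th percentile, n-1 d.o.f.
print(xbar - c * s / np.sqrt(n), xbar + c * s / np.sqrt(n))
```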

Interpretation

Various interpretations of a confidence interval can be given (taking the 95% confidence interval as an example in the following).

See also: Neyman construction.

Common misunderstandings

Confidence intervals and levels are frequently misunderstood, and published studies have shown that even professional scientists often misinterpret them.[10][11][12][13][14][15]

Examples of how naïve interpretation of confidence intervals can be problematic

Confidence procedure for uniform location

Welch[17] presented an example which clearly shows the difference between the theory of confidence intervals and other theories of interval estimation (including Fisher's fiducial intervals and objective Bayesian intervals). Robinson[18] called this example "[p]ossibly the best known counterexample for Neyman's version of confidence interval theory." To Welch, it showed the superiority of confidence interval theory; to critics of the theory, it shows a deficiency. Here we present a simplified version.

Suppose that $X_1, X_2$ are independent observations from a uniform $(\theta - 1/2,\ \theta + 1/2)$ distribution. Then the optimal 50% confidence procedure for $\theta$ is[19]

$$\bar{X} \pm \begin{cases} \dfrac{|X_1 - X_2|}{2} & \text{if } |X_1 - X_2| < 1/2, \\[8pt] \dfrac{1 - |X_1 - X_2|}{2} & \text{if } |X_1 - X_2| \ge 1/2. \end{cases}$$

A fiducial or objective Bayesian argument can be used to derive the interval estimate

$$\bar{X} \pm \frac{1 - |X_1 - X_2|}{4},$$

which is also a 50% confidence procedure. Welch showed that the first confidence procedure dominates the second, according to desiderata from confidence interval theory; for every $\theta_1 \neq \theta$, the probability that the first procedure contains $\theta_1$ is less than or equal to the probability that the second procedure contains $\theta_1$. The average width of the intervals from the first procedure is less than that of the second. Hence, the first procedure is preferred under classical confidence interval theory.

However, when $|X_1 - X_2| \ge 1/2$, intervals from the first procedure are guaranteed to contain the true value $\theta$: therefore, the nominal 50% confidence coefficient is unrelated to the uncertainty we should have that a specific interval contains the true value. The second procedure does not have this property.

Moreover, when the first procedure generates a very short interval, this indicates that $X_1, X_2$ are very close together and hence only offer the information in a single data point. Yet the first interval will exclude almost all reasonable values of the parameter due to its short width. The second procedure does not have this property.

The two counter-intuitive properties of the first procedure – 100% coverage when $X_1, X_2$ are far apart and almost 0% coverage when $X_1, X_2$ are close together – balance out to yield 50% coverage on average. However, despite the first procedure being optimal, its intervals offer neither an assessment of the precision of the estimate nor an assessment of the uncertainty one should have that the interval contains the true value.
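These claims are easy to verify by simulation. The following sketch (illustrative only) draws many pairs $X_1, X_2$, applies both procedures, and splits the first procedure's coverage by the distance $|X_1 - X_2|$:

```python
# Welch's uniform-location example: overall vs conditional coverage.
import numpy as np

rng = np.random.default_rng(5)
theta = 0.0
x1, x2 = rng.uniform(theta - 0.5, theta + 0.5, size=(2, 100_000))
xbar, d = (x1 + x2) / 2, np.abs(x1 - x2)

h1 = np.where(d < 0.5, d / 2, (1 - d) / 2)  # first (optimal) procedure
h2 = (1 - d) / 4                            # fiducial/Bayesian procedure
cover1 = np.abs(xbar - theta) <= h1
cover2 = np.abs(xbar - theta) <= h2

print(cover1.mean(), cover2.mean())  # both about 0.50 overall
print(cover1[d >= 0.5].mean())       # 1.0: guaranteed coverage when far apart
print(cover1[d < 0.1].mean())        # small: poor coverage when close together
```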

This example is used to argue against naïve interpretations of confidence intervals. If a confidence procedure is asserted to have properties beyond that of the nominal coverage (such as relation to precision, or a relationship with Bayesian inference), those properties must be proved; they do not follow from the fact that a procedure is a confidence procedure.

Confidence procedure for ω²

Steiger[20] suggested a number of confidence procedures for common effect size measures in ANOVA. Morey et al. point out that several of these confidence procedures, including the one for ω², have the property that as the F statistic becomes increasingly small (indicating misfit with all possible values of ω²) the confidence interval shrinks and can even contain only the single value ω² = 0; that is, the CI is infinitesimally narrow (this occurs when $p \ge 1 - \alpha/2$ for a $100(1-\alpha)\%$ CI).

This behavior is consistent with the relationship between the confidence procedure and significance testing: as F becomes so small that the group means are much closer together than we would expect by chance, a significance test might indicate rejection for most or all values of ω². Hence the interval will be very narrow or even empty (or, by a convention suggested by Steiger, containing only 0). However, this does not indicate that the estimate of ω² is very precise. In a sense, it indicates the opposite: that the trustworthiness of the results themselves may be in doubt. This is contrary to the common interpretation of confidence intervals that they reveal the precision of the estimate.

History

Methods for calculating confidence intervals for the binomial proportion appeared from the 1920s.[21][22] The main ideas of confidence intervals in general were developed in the early 1930s,[23][24][25] and the first thorough and general account was given by Jerzy Neyman in 1937.

Neyman described the development of the ideas as follows (reference numbers have been changed):

[My work on confidence intervals] originated about 1930 from a simple question of Waclaw Pytkowski, then my student in Warsaw, engaged in an empirical study in farm economics. The question was: how to characterize non-dogmatically the precision of an estimated regression coefficient? ...

Pytkowski's monograph ... appeared in print in 1932.[26] It so happened that, somewhat earlier, Fisher published his first paper[27] concerned with fiducial distributions and fiducial argument. Quite unexpectedly, while the conceptual framework of fiducial argument is entirely different from that of confidence intervals, the specific solutions of several particular problems coincided. Thus, in the first paper in which I presented the theory of confidence intervals, published in 1934, I recognized Fisher's priority for the idea that interval estimation is possible without any reference to Bayes' theorem and with the solution being independent from probabilities a priori. At the same time I mildly suggested that Fisher's approach to the problem involved a minor misunderstanding.

In medical journals, confidence intervals were promoted in the 1970s but only became widely used in the 1980s.[28] By 1988, medical journals were requiring the reporting of confidence intervals.[29]

See also

Confidence interval for specific distributions


Notes and References

  1. Zar, Jerrold H. (1999). Biostatistical Analysis (4th ed.). Upper Saddle River, N.J.: Prentice Hall. pp. 43–45. ISBN 978-0130815422. OCLC 39498633.
  2. Dekking, Frederik Michel; Kraaikamp, Cornelis; Lopuhaä, Hendrik Paul; Meester, Ludolf Erwin (2005). A Modern Introduction to Probability and Statistics. Springer Texts in Statistics. doi:10.1007/1-84628-168-7. ISBN 978-1-85233-896-1. ISSN 1431-875X.
  3. Illowsky, Barbara; Dean, Susan L.; OpenStax College. Introductory Statistics. Houston, Texas. ISBN 978-1-947172-05-0. OCLC 899241574.
  4. Hazra, Avijit (October 2017). "Using the confidence interval confidently". Journal of Thoracic Disease 9 (10): 4125–4130. doi:10.21037/jtd.2017.09.14. ISSN 2072-1439. PMC 5723800. PMID 29268424.
  5. Khare, Vikas; Nema, Savita; Baredar, Prashant (2020). Ocean Energy Modeling and Simulation with Big Data: Computational Intelligence for System Optimization and Grid Integration. Butterworth-Heinemann. ISBN 978-0-12-818905-4. OCLC 1153294021.
  6. Roussas, George G. (1997). A Course in Mathematical Statistics (2nd ed.). Academic Press. p. 397.
  7. Cox, D.R.; Hinkley, D.V. (1974). Theoretical Statistics. Chapman & Hall.
  8. Rees, D.G. (2001). Essential Statistics (4th ed.). Chapman and Hall/CRC. Section 9.5.
  9. Cox, D.R.; Hinkley, D.V. (1974). Theoretical Statistics. Chapman & Hall. pp. 214, 225, 233.
  10. Kalinowski, Pawel (2010). "Identifying Misconceptions about Confidence Intervals". Retrieved 2021-12-22.
  11. "Archived copy". Archived at https://web.archive.org/web/20160304043241/http://irt.com.ne.kr/data/researchers%20misunderstand%20ci%20and%20error%20bars.pdf on 2016-03-04. Retrieved 2014-09-16 (dead link).
  12. Hoekstra, R.; Morey, R.D.; Rouder, J.N.; Wagenmakers, E.-J. (2014). "Robust misinterpretation of confidence intervals". Psychonomic Bulletin & Review 21 (5): 1157–1164. http://www.ejwagenmakers.com/inpress/HoekstraEtAlPBR.pdf
  13. "Scientists' grasp of confidence intervals doesn't inspire confidence". https://www.sciencenews.org/blog/context/scientists-grasp-confidence-intervals-doesnt-inspire-confidence
  14. Greenland, Sander; Senn, Stephen J.; Rothman, Kenneth J.; Carlin, John B.; Poole, Charles; Goodman, Steven N.; Altman, Douglas G. (April 2016). "Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations". European Journal of Epidemiology 31 (4): 337–350. doi:10.1007/s10654-016-0149-3. PMC 4877414. PMID 27209009.
  15. Helske, Jouni; Helske, Satu; Cooper, Matthew; Ynnerman, Anders; Besancon, Lonni (2021). "Can Visualization Alleviate Dichotomous Thinking? Effects of Visual Representations on the Cliff Effect". IEEE Transactions on Visualization and Computer Graphics 27 (8): 3397–3409. doi:10.1109/tvcg.2021.3073466. arXiv:2002.07671. PMID 33856998. S2CID 233230810.
  16. "1.3.5.2. Confidence Limits for the Mean". nist.gov. Archived at https://web.archive.org/web/20080205120031/http://www.itl.nist.gov/div898/handbook/eda/section3/eda352.htm on 2008-02-05. Retrieved 2014-09-16 (dead link).
  17. Welch, B. L. (1939). "On Confidence Limits and Sufficiency, with Particular Reference to Parameters of Location". The Annals of Mathematical Statistics 10 (1): 58–69. doi:10.1214/aoms/1177732246. JSTOR 2235987.
  18. Robinson, G. K. (1975). "Some Counterexamples to the Theory of Confidence Intervals". Biometrika 62 (1): 155–161. doi:10.2307/2334498. JSTOR 2334498.
  19. Pratt, J. W. (1961). "Book Review: Testing Statistical Hypotheses, by E. L. Lehmann". Journal of the American Statistical Association 56 (293): 163–167. doi:10.1080/01621459.1961.10482103. JSTOR 2282344.
  20. Steiger, J. H. (2004). "Beyond the F test: Effect size confidence intervals and tests of close fit in the analysis of variance and contrast analysis". Psychological Methods 9 (2): 164–182. doi:10.1037/1082-989x.9.2.164. PMID 15137887.
  21. Wilson, Edwin B. (1927). "Probable Inference, the Law of Succession, and Statistical Inference". Journal of the American Statistical Association 22 (158): 209–212. https://doi.org/10.1080/01621459.1927.10502953
  22. Clopper, C.J.; Pearson, E.S. (1934). "The use of confidence or fiducial limits illustrated in the case of the binomial". Biometrika 26 (4): 404–413. https://doi.org/10.1093/biomet/26.4.404
  23. Neyman, J. (1934). "On the Two Different Aspects of the Representative Method: The Method of Stratified Sampling and the Method of Purposive Selection". Journal of the Royal Statistical Society 97 (4): 558–625. https://doi.org/10.2307/2342192 (see Note I in the appendix)
  24. Neyman, J. (1935). The Annals of Mathematical Statistics 6 (3): 111–116. https://doi.org/10.1214/aoms/1177732585
  25. Neyman, J. (1970). "A glance at some of my personal experiences in the process of research". In Scientists at Work: Festschrift in honour of Herman Wold. Edited by T. Dalenius, G. Karlsson, S. Malmquist. Stockholm: Almqvist & Wiksell. https://worldcat.org/en/title/195948
  26. Pytkowski, W. (1932). The dependence of the income in small farms upon their area, the outlay and the capital invested in cows (Polish, English summary). Biblioteka Puławska.
  27. Fisher, R. (1930). "Inverse Probability". Mathematical Proceedings of the Cambridge Philosophical Society 26 (4): 528–535. https://doi.org/10.1017/S0305004100016297
  28. Altman, Douglas G. (1991). "Statistics in medical journals: Developments in the 1980s". Statistics in Medicine 10 (12): 1897–1913. doi:10.1002/sim.4780101206. ISSN 1097-0258. PMID 1805317.
  29. Gardner, Martin J.; Altman, Douglas G. (1988). "Estimating with confidence". British Medical Journal 296 (6631): 1210–1211. doi:10.1136/bmj.296.6631.1210. PMC 2545695. PMID 3133015.