Generalized p-value

In statistics, a generalized p-value is an extended version of the classical p-value which, except in a limited number of applications, provides only approximate solutions.

Conventional statistical methods do not provide exact solutions to many statistical problems, such as those arising in mixed models and MANOVA, especially when the problem involves a number of nuisance parameters. As a result, practitioners often resort to approximate statistical methods or asymptotic statistical methods that are valid only when the sample size is large. With small samples, such methods often have poor performance. Use of approximate and asymptotic methods may lead to misleading conclusions or may fail to detect truly significant results from experiments.

Tests based on generalized p-values are exact statistical methods in that they are based on exact probability statements. While conventional statistical methods do not provide exact solutions to such problems as testing variance components or ANOVA under unequal variances, exact tests for such problems can be obtained based on generalized p-values.[1][2]

To overcome the shortcomings of classical p-values, Tsui and Weerahandi[2] extended the classical definition so that one can obtain exact solutions for such problems as the Behrens–Fisher problem and testing variance components. This is accomplished by allowing test variables to depend on observable random vectors as well as their observed values, as in the Bayesian treatment of the problem, but without having to treat constant parameters as random variables.
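As a brief sketch of the general construction (the notation below is introduced here for illustration and is not quoted from the references), let X denote the observable random vector with observed value x, let \theta be the parameter of interest, and let \delta denote the nuisance parameters. A generalized test variable T(X;x,\theta,\delta) is chosen so that (i) its observed value t_{obs}=T(x;x,\theta,\delta) is free of the nuisance parameters \delta, (ii) for fixed x, its distribution with \theta fixed at the null value \theta_0 is free of \delta, and (iii) it is monotone in \theta, either in distribution or, as in the example below, through its observed value, so that extreme values of T can be taken as evidence against the null hypothesis. A generalized p-value for a one-sided alternative then takes the form

p=\Pr(T(X;x,\theta_0,\delta)\ge t_{obs}),

with the direction of the inequality determined by the alternative hypothesis. In the example below, T=R, \theta=\rho, and the observed value of R is \rho itself.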

Example

To describe the idea of generalized p-values in a simple example, consider a situation of sampling from a normal population with mean \mu and variance \sigma^2. Let \overline{X} and S^2 be the sample mean and the sample variance. Inferences on all unknown parameters can be based on the distributional results

Z=\sqrt{n}(\overline{X}-\mu)/\sigma \sim N(0,1)

and

U=nS^2/\sigma^2 \sim \chi^2_{n-1}.
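As a small illustration of these two distributional results, the following Python sketch (numeric settings are arbitrary choices, and S^2 is taken with divisor n so that nS^2/\sigma^2 has n-1 degrees of freedom) simulates repeated samples and compares the empirical distributions of Z and U with N(0,1) and \chi^2_{n-1}.

```python
# Simulation check of the pivotal quantities Z and U (illustrative settings).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, mu, sigma = 12, 5.0, 2.0          # arbitrary sample size and parameters

# Draw many samples of size n and form the two pivotal quantities.
x = rng.normal(mu, sigma, size=(100_000, n))
xbar = x.mean(axis=1)
s2 = x.var(axis=1, ddof=0)           # divisor-n sample variance S^2
Z = np.sqrt(n) * (xbar - mu) / sigma
U = n * s2 / sigma**2

# Kolmogorov-Smirnov distances to N(0,1) and chi-square(n - 1);
# values close to zero indicate agreement with the stated distributions.
print(stats.kstest(Z, "norm").statistic)
print(stats.kstest(U, "chi2", args=(n - 1,)).statistic)
```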

Now suppose we need to test the coefficient of variation, \rho=\mu/\sigma. While the problem is not trivial with conventional p-values, the task can be easily accomplished based on the generalized test variable

R = \frac{\overline{x}\,S}{s\,\sigma} - \frac{\overline{X}-\mu}{\sigma} = \frac{\overline{x}}{s}\sqrt{\frac{U}{n}} - \frac{Z}{\sqrt{n}},

where \overline{x} is the observed value of \overline{X} and s is the observed value of S. Note that the observed value of R is \rho itself, and that both the distribution of R and its observed value are free of nuisance parameters. Therefore, a test of a hypothesis with a one-sided alternative such as H_A: \rho<\rho_0 can be based on the generalized p-value p=\Pr(R\ge\rho_0), a quantity that can be easily evaluated via Monte Carlo simulation or using the non-central t-distribution.
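To make the computation concrete, the following Python sketch estimates this generalized p-value by Monte Carlo simulation and cross-checks it with the non-central t-distribution. The observed summaries n, \overline{x}, s and the null value \rho_0 are hypothetical numbers chosen for illustration, and the closed-form expression used in the cross-check is one standard way of writing \Pr(R\ge\rho_0), not a formula quoted from the references.

```python
# Generalized p-value for the coefficient of variation rho = mu / sigma,
# based on the generalized test variable R described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Observed summaries (hypothetical, for illustration only).
n = 15        # sample size
xbar = 10.3   # observed sample mean
s = 2.1       # observed sample standard deviation (divisor-n convention)
rho0 = 4.0    # null value of rho = mu / sigma

# Draw the pivotal quantities Z ~ N(0,1) and U ~ chi-square(n - 1).
B = 500_000
Z = rng.standard_normal(B)
U = rng.chisquare(df=n - 1, size=B)

# Generalized test variable R = (xbar / s) * sqrt(U / n) - Z / sqrt(n);
# its distribution and its observed value are free of nuisance parameters.
R = (xbar / s) * np.sqrt(U / n) - Z / np.sqrt(n)

# Generalized p-value for the one-sided alternative H_A: rho < rho0.
p_mc = np.mean(R >= rho0)

# Cross-check: Pr(R >= rho0) equals the CDF of a non-central t-distribution
# with n - 1 degrees of freedom and non-centrality sqrt(n) * rho0,
# evaluated at sqrt(n - 1) * xbar / s.
p_nct = stats.nct.cdf(np.sqrt(n - 1) * xbar / s, df=n - 1, nc=np.sqrt(n) * rho0)

print(f"Monte Carlo estimate:   {p_mc:.4f}")
print(f"non-central t formula:  {p_nct:.4f}")
```

The two values agree up to Monte Carlo error; a small p is interpreted as evidence in favor of the alternative \rho<\rho_0.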

Notes and References

  1. Weerahandi (1995)
  2. Tsui & Weerahandi (1989)