Well-behaved statistic

Although the term well-behaved statistic often seems to be used in the scientific literature in somewhat the same way as well-behaved is used in mathematics (that is, to mean "non-pathological"[1] [2]), it can also be assigned a precise mathematical meaning, and in more than one way. In the former case, the meaning of the term varies from context to context. In the latter case, the mathematical conditions can be used to derive classes of combinations of distributions with statistics which are well-behaved in each sense.

First Definition: A well-behaved statistical estimator has finite variance, and its mean is differentiable in the parameter being estimated.[3]

Second Definition: The statistic is monotonic, well-defined, and locally sufficient.[4]

Conditions for a Well-Behaved Statistic: First Definition

More formally, the conditions can be expressed in this way. T is a statistic for \theta that is a function of the sample X_1, \ldots, X_n. For T to be well-behaved we require:

E_\theta\left[T\left(X_1, \ldots, X_n\right)\right] < \infty \quad \forall \quad \theta \in \Theta : Condition 1

E_\theta\left(T\right) differentiable in \theta \quad \forall \quad \theta \in \Theta, and the derivative satisfies:

\frac{\partial}{\partial\theta} \int T(x_1, \ldots, x_n) \prod_{i=1}^{n} f(x_i; \theta) \, dx_1 \cdots dx_n = \int T(x_1, \ldots, x_n) \frac{\partial}{\partial\theta}\left[\prod_{i=1}^{n} f(x_i; \theta)\right] dx_1 \cdots dx_n : Condition 2
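As an illustration of Condition 2, the following minimal sketch (not taken from the cited sources; the exponential model, the statistic T equal to the sum of the sample, and all numerical settings are assumptions chosen for demonstration) checks numerically that differentiating under the integral sign holds, by comparing a finite-difference derivative of E_\theta[T] with a Monte Carlo estimate of E_\theta[T \cdot \partial \log L / \partial\theta]:

    import numpy as np

    rng = np.random.default_rng(0)
    m, theta, eps, n_mc = 5, 2.0, 1e-4, 400_000

    # Seeds z ~ U(0,1); the sample is generated as x = -log(z) / theta (exponential).
    z = rng.uniform(size=(n_mc, m))

    def expected_T(th):
        # Monte Carlo estimate of E_th[T], with T = sum_i X_i, using common seeds.
        x = -np.log(z) / th
        return x.sum(axis=1).mean()

    # Left-hand side of Condition 2: derivative of E_theta[T] (finite differences).
    lhs = (expected_T(theta + eps) - expected_T(theta - eps)) / (2 * eps)

    # Right-hand side: E_theta[T * score], with score = sum_i (1/theta - x_i),
    # since d/dtheta prod_i f(x_i; theta) = prod_i f(x_i; theta) * score.
    x = -np.log(z) / theta
    T = x.sum(axis=1)
    score = (1.0 / theta - x).sum(axis=1)
    rhs = (T * score).mean()

    print(lhs, rhs)  # both close to the exact value -m / theta**2 = -1.25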

Conditions for a Well-Behaved Statistic: Second Definition

In order to derive the distribution law of the parameter T, compatible with the observed sample \boldsymbol{x}, the statistic must obey some technical properties. Namely, a statistic s is said to be well-behaved if it satisfies the following three statements:
  1. monotonicity. A uniformly monotone relation exists between s and \theta for any fixed seed \{z_1, \ldots, z_m\}, so as to have a unique solution of (1);
  2. well-defined. On each observed s the statistic is well defined for every value of \theta, i.e. any sample specification \{x_1, \ldots, x_m\} \in \mathfrak{X}^m such that \rho(x_1, \ldots, x_m) = s has a probability density different from 0, so as to avoid considering a non-surjective mapping from \mathfrak{X}^m to \mathfrak{S}, i.e. associating via s to a sample \{x_1, \ldots, x_m\} a \theta that could not generate the sample itself;
  3. local sufficiency. \{\breve\theta_1, \ldots, \breve\theta_N\} constitutes a true T sample for the observed s, so that the same probability distribution can be attributed to each sampled value. Now, \breve\theta_j = h^{-1}(s, \breve z_1^j, \ldots, \breve z_m^j) is a solution of (1) with the seed \{\breve z_1^j, \ldots, \breve z_m^j\}. Since the seeds are equally distributed, the sole caveat comes from their independence or, conversely, from their dependence on \theta itself. This check can be restricted to the seeds involved by s, i.e. this drawback can be avoided by requiring that the distribution of \{Z_1, \ldots, Z_m \mid S = s\} is independent of \theta. An easy way to check this property is by mapping seed specifications into x_i specifications. The mapping of course depends on \theta, but the distribution of \{X_1, \ldots, X_m \mid S = s\} will not depend on \theta if the above seed independence holds, a condition that looks like a local sufficiency of the statistic S. A small numerical sketch of how such a parameter sample is produced follows this list.
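The following small numerical sketch (my own illustration, not taken from [4]; it borrows the exponential explaining function h(\theta, z) = -\sum_i \log z_i / \theta discussed in the example further below, and every numerical value is an arbitrary assumption) shows the resampling scheme behind local sufficiency: new seed sets are drawn, and each is mapped through h^{-1}(s, \cdot) to a parameter value \breve\theta_j, producing a parameter sample compatible with the observed s.

    import numpy as np

    rng = np.random.default_rng(1)
    m, N = 10, 5
    s_observed = 4.0                      # an observed value of s = sum_i x_i (assumed)

    # For the exponential case, the master equation s = -sum_i log(z_i) / theta
    # is uniformly monotone in theta, hence invertible for each seed set:
    #   theta_j = h^{-1}(s, z^j) = -sum_i log(z_i^j) / s.
    seeds = rng.uniform(size=(N, m))      # N replicated seed sets {z_1^j, ..., z_m^j}
    theta_samples = -np.log(seeds).sum(axis=1) / s_observed

    print(theta_samples)                  # a parameter sample {theta_1, ..., theta_N}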

The remainder of the present article is mainly concerned with the context of data mining procedures applied to statistical inference and, in particular, with the group of computationally intensive procedures that have been called algorithmic inference.

Algorithmic inference

See main article: Algorithmic inference.

In algorithmic inference, the property of a statistic that is of most relevance is the pivoting step, which allows the transfer of probability considerations from the sample distribution to the distribution of the parameters representing the population distribution, in such a way that the conclusion of this statistical inference step is compatible with the sample actually observed.

By default, capital letters (such as U, X) will denote random variables and small letters (u, x) their corresponding realizations; gothic letters (such as \mathfrak{U}, \mathfrak{X}) denote the domains where the variables take their specifications. Facing a sample \boldsymbol{x} = \{x_1, \ldots, x_m\}, given a sampling mechanism (g_\theta, Z), with \theta scalar, for the random variable X, we have

\boldsymbol{x} = \{g_\theta(z_1), \ldots, g_\theta(z_m)\}.

The sampling mechanism (g_\theta, \boldsymbol{z}) of the statistic s, as a function \rho of \{x_1, \ldots, x_m\} with specifications in \mathfrak{S}, has an explaining function defined by the master equation:

s = \rho(x_1, \ldots, x_m) = \rho(g_\theta(z_1), \ldots, g_\theta(z_m)) = h(\theta, z_1, \ldots, z_m),          (1)

for suitable seeds \boldsymbol{z} = \{z_1, \ldots, z_m\} and parameter \theta.
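As a concrete check of the master equation (1), the sketch below (an illustration of mine, using the Bernoulli sampling mechanism g_p(u) = 1 if u \leq p introduced in the example that follows, with p and m chosen arbitrarily) evaluates the statistic once on the generated sample and once directly on the seeds, and confirms that the two routes agree:

    import numpy as np

    rng = np.random.default_rng(2)
    m, p = 8, 0.3
    z = rng.uniform(size=m)                  # seeds z_i drawn uniformly on (0, 1)

    def g(u, p):
        # Sampling mechanism of the Bernoulli variable: 1 if u <= p, 0 otherwise.
        return (u <= p).astype(int)

    def rho(x):
        # The statistic s = sum_i x_i computed on the sample.
        return int(x.sum())

    x = g(z, p)                              # the sample {x_1, ..., x_m}
    s_from_sample = rho(x)                   # s = rho(x_1, ..., x_m)
    s_from_seeds = int((z <= p).sum())       # h(p, z_1, ..., z_m), the same value by (1)
    assert s_from_sample == s_from_seeds
    print(s_from_sample)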

Example

For instance, for both the Bernoulli distribution with parameter p and the exponential distribution with parameter \lambda the statistic

\sum_{i=1}^{m} x_i

is well-behaved. The satisfaction of the above three properties is straightforward when looking at both explaining functions: g_p(u) = 1 if u \leq p, 0 otherwise in the case of the Bernoulli random variable, and g_\lambda(u) = -\log u / \lambda for the exponential random variable, giving rise to statistics

s_p = \sum_{i=1}^{m} I_{[0,p]}(u_i)

and

s_\lambda = -\frac{1}{\lambda} \sum_{i=1}^{m} \log u_i.
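A short sketch (my own, with arbitrary parameter values) of the two explaining functions and the statistics they induce from a common set of uniform seeds might look as follows:

    import numpy as np

    rng = np.random.default_rng(3)
    m, p, lam = 6, 0.4, 2.0
    u = rng.uniform(size=m)                   # common uniform seeds u_1, ..., u_m

    x_bern = (u <= p).astype(int)             # g_p(u) = 1 if u <= p, 0 otherwise
    x_exp = -np.log(u) / lam                  # g_lambda(u) = -log(u) / lambda

    s_p = x_bern.sum()                        # s_p      = sum_i I_[0,p](u_i)
    s_lambda = x_exp.sum()                    # s_lambda = -(1/lambda) * sum_i log(u_i)

    print(s_p, s_lambda)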

Vice versa, in the case of X following a continuous uniform distribution on [0, A], the same statistic does not meet the second requirement. For instance, the observed sample \{c, c/2, c/3\} gives

s_A = \frac{11}{6} c.

But the explaining function of this X is g_a(u) = u a. Hence a master equation

s_A = \sum_{i=1}^{m} u_i a

would produce, with a U sample \{0.8, 0.8, 0.8\}, the solution \breve a = 0.76 c. This conflicts with the observed sample, since the first observed value would have to be greater than the right extreme of the X range. The statistic

s_A = \max\{x_1, \ldots, x_m\}

is well-behaved in this case.
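The counterexample can be reproduced numerically; in the sketch below (mine, not from the source) c is set to 1 and the U sample to \{0.8, 0.8, 0.8\}, as in the text:

    import numpy as np

    c = 1.0
    sample = np.array([c, c / 2, c / 3])      # the observed sample {c, c/2, c/3}
    s_A = sample.sum()                        # 11/6 * c

    u = np.array([0.8, 0.8, 0.8])             # a possible U sample of seeds
    a_from_sum = s_A / u.sum()                # solves s_A = a * sum_i u_i  ->  ~0.76 c

    # The solution is smaller than the largest observation c, which the range [0, A]
    # cannot accommodate; this is the failure of the second requirement.
    print(a_from_sum, a_from_sum < sample.max())

    # The max statistic avoids the defect: since u_i <= 1, the solution of
    # max_i x_i = a * max_i u_i is never smaller than the largest observation.
    a_from_max = sample.max() / u.max()
    print(a_from_max, a_from_max >= sample.max())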

Analogously, for a random variable X following the Pareto distribution with parameters K and A (see the Pareto example for more detail on this case),

s_1 = \sum_{i=1}^{m} \log x_i

and

s_2 = \min_{i=1, \ldots, m}\{x_i\}

can be used as joint statistics for these parameters.
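A brief sketch of the two joint statistics computed from a simulated sample (my own illustration; the inverse-CDF explaining function x = k u^{-1/a} and the parameter values are assumptions):

    import numpy as np

    rng = np.random.default_rng(4)
    a, k, m = 3.0, 2.0, 100
    u = rng.uniform(size=m)
    x = k * u ** (-1.0 / a)                 # Pareto(a, k) sample from uniform seeds

    s1 = np.log(x).sum()                    # joint statistic for the shape parameter
    s2 = x.min()                            # joint statistic for the scale parameter
    print(s1, s2)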

As a general statement that holds under weak conditions, sufficient statistics are well-behaved with respect to the related parameters. The table below gives sufficient/well-behaved statistics for the parameters of some of the most commonly used probability distributions.

Common distribution laws together with related sufficient and well-behaved statistics.

Distribution | Definition of density function | Sufficient/well-behaved statistic
Uniform discrete | f(x; n) = 1/n \, I_{\{1, 2, \ldots, n\}}(x) | s_N = \max_i x_i
Bernoulli | f(x; p) = p^x (1-p)^{1-x} I_{\{0,1\}}(x) | s_P = \sum_{i=1}^{m} x_i
Binomial | f(x; n, p) = \binom{n}{x} p^x (1-p)^{n-x} I_{\{0,1,\ldots,n\}}(x) | s_P = \sum_{i=1}^{m} x_i
Geometric | f(x; p) = p (1-p)^x I_{\{0,1,\ldots\}}(x) | s_P = \sum_{i=1}^{m} x_i
Poisson | f(x; \mu) = e^{-\mu} \mu^x / x! \, I_{\{0,1,\ldots\}}(x) | s_M = \sum_{i=1}^{m} x_i
Uniform continuous | f(x; a, b) = 1/(b-a) \, I_{[a,b]}(x) | s_A = \min_i x_i; s_B = \max_i x_i
Negative exponential | f(x; \lambda) = \lambda e^{-\lambda x} I_{[0,\infty)}(x) | s_\Lambda = \sum_{i=1}^{m} x_i
Pareto | f(x; a, k) = \frac{a}{k} \left(\frac{x}{k}\right)^{-a-1} I_{[k,\infty)}(x) | s_A = \sum_{i=1}^{m} \log x_i; s_K = \min_i x_i
Gaussian | f(x; \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-(x-\mu)^2 / (2\sigma^2)} | s_M = \sum_{i=1}^{m} x_i; s_\Sigma = \sqrt{\sum_{i=1}^{m} (x_i - \bar{x})^2}
Gamma | f(x; r, \lambda) = \frac{\lambda}{\Gamma(r)} (\lambda x)^{r-1} e^{-\lambda x} I_{[0,\infty)}(x) | s_\Lambda = \sum_{i=1}^{m} x_i; s_K = \prod_{i=1}^{m} x_i

References


  1. Iacobucci, Dawn. "Mediation analysis and categorical variables: The final frontier". Retrieved 7 February 2017.
  2. DiNardo, John; Winfree, Jason. "The Law of Genius and Home Runs Refuted". Retrieved 7 February 2017.
  3. DasGupta, A. (no title). Retrieved 7 February 2017.
  4. Apolloni, B.; Bassis, S.; Malchiodi, D.; Witold, P. (2008). The Puzzle of Granular Computing. Studies in Computational Intelligence, vol. 138. Berlin: Springer.