Minimax estimator

In statistical decision theory, where we are faced with the problem of estimating a deterministic parameter (vector) $\theta \in \Theta$ from observations $x \in \mathcal{X}$, an estimator (estimation rule) $\delta^M$ is called minimax if its maximal risk is minimal among all estimators of $\theta$. In a sense this means that $\delta^M$ is an estimator which performs best in the worst possible case allowed in the problem.

Problem setup

Consider the problem of estimating a deterministic (not Bayesian) parameter $\theta \in \Theta$ from noisy or corrupt data $x \in \mathcal{X}$ related through the conditional probability distribution $P(x\mid\theta)$. Our goal is to find a "good" estimator $\delta(x)$ for estimating the parameter $\theta$, one which minimizes some given risk function $R(\theta,\delta)$. Here the risk function (technically a functional or operator, since $R$ is a function of a function, not a function composition) is the expectation of some loss function $L(\theta,\delta)$ with respect to $P(x\mid\theta)$. A popular example of a loss function[1] is the squared error loss $L(\theta,\delta)=\|\theta-\delta\|^2$, and the risk function for this loss is the mean squared error (MSE).
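
As a minimal illustration (an assumption-laden sketch, not part of the original article), the following Python snippet approximates the MSE risk of the sample mean by Monte Carlo for a Gaussian model; the model, sample size, and estimator are all chosen purely for the example.

```python
import numpy as np

def mse_risk(estimator, theta, sigma=1.0, n=100, n_trials=20_000, seed=0):
    """Monte Carlo approximation of R(theta, delta) = E[(delta(x) - theta)^2]
    when the data are n i.i.d. N(theta, sigma^2) observations."""
    rng = np.random.default_rng(seed)
    x = rng.normal(theta, sigma, size=(n_trials, n))   # one row per simulated data set
    estimates = np.apply_along_axis(estimator, 1, x)   # delta applied to each data set
    return np.mean((estimates - theta) ** 2)

# For squared error loss, the risk of the sample mean is sigma^2 / n for every theta,
# which the simulation should reproduce (here 0.01) at each parameter value.
for theta in (-2.0, 0.0, 3.0):
    print(f"theta = {theta:+.1f}, estimated risk = {mse_risk(np.mean, theta):.4f}")
```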

Unfortunately, in general, the risk cannot be minimized directly, since it depends on the unknown parameter $\theta$ itself (if we knew the actual value of $\theta$, we would not need to estimate it). Therefore additional criteria for finding an optimal estimator in some sense are required. One such criterion is the minimax criterion.

Definition

Definition: An estimator $\delta^M:\mathcal{X}\rightarrow\Theta$ is called minimax with respect to a risk function $R(\theta,\delta)$ if it achieves the smallest maximum risk among all estimators, meaning it satisfies

$$\sup_\theta R(\theta,\delta^M)=\inf_\delta \sup_\theta R(\theta,\delta).$$
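
To make the definition concrete, here is a small numerical sketch for a toy problem that is not taken from the article: a single observation $x\sim N(\theta,1)$ with $\theta$ restricted to $[-1,1]$, and candidate linear rules $\delta_c(x)=cx$, whose exact risk is $R(\theta,\delta_c)=c^2+(1-c)^2\theta^2$. Comparing maximal risks over a grid of $\theta$ values shows which candidate is best in the worst case.

```python
import numpy as np

# Toy illustration of "smallest maximum risk" (hypothetical setup, see lead-in):
# for delta_c(x) = c * x and x ~ N(theta, 1), the exact squared-error risk is
# R(theta, delta_c) = c^2 + (1 - c)^2 * theta^2, with theta confined to [-1, 1].
thetas = np.linspace(-1.0, 1.0, 201)

def max_risk(c):
    return np.max(c**2 + (1.0 - c)**2 * thetas**2)

for c in (1.0, 0.75, 0.5, 0.25):
    print(f"c = {c:.2f}  sup_theta R(theta, delta_c) = {max_risk(c):.3f}")
# The candidate with the smallest maximal risk is minimax within this family;
# the minimax estimator of the definition minimizes the supremum over all estimators.
```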

Least favorable distribution

Logically, an estimator is minimax when it is the best in the worst case. Continuing this logic, a minimax estimator should be a Bayes estimator with respect to a least favorable prior distribution of $\theta$. To demonstrate this notion, denote the average risk of the Bayes estimator $\delta_\pi$ with respect to a prior distribution $\pi$ as

$$r_\pi=\int R(\theta,\delta_\pi)\,d\pi(\theta).$$

Definition: A prior distribution $\pi$ is called least favorable if for every other distribution $\pi'$ the average risk satisfies $r_\pi \geq r_{\pi'}$.

Theorem 1: If $r_\pi=\sup_\theta R(\theta,\delta_\pi)$, then:

  1. $\delta_\pi$ is minimax.
  2. If $\delta_\pi$ is a unique Bayes estimator, it is also the unique minimax estimator.
  3. $\pi$ is least favorable.

Corollary: If a Bayes estimator has constant risk, it is minimax. Note that this is not a necessary condition.

Example 1: Unfair coin[2]: Consider the problem of estimating the "success" rate of a binomial variable, $x\sim B(n,\theta)$. This may be viewed as estimating the rate at which an unfair coin falls on "heads" or "tails". In this case the Bayes estimator with respect to a Beta-distributed prior, $\theta\sim \text{Beta}(\sqrt{n}/2,\sqrt{n}/2)$, is

$$\delta^M=\frac{x+0.5\sqrt{n}}{n+\sqrt{n}},$$

with constant Bayes risk

$$r=\frac{1}{4(1+\sqrt{n})^2}$$

and, according to the Corollary, is minimax.
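
As a numerical check (a sketch, not part of the original text), the exact risk of this Bayes rule can be computed from the binomial pmf at several values of $\theta$ and compared with the predicted constant $1/(4(1+\sqrt{n})^2)$; the choice $n=25$ is arbitrary.

```python
import numpy as np
from scipy.stats import binom

# Verify numerically that delta(x) = (x + 0.5*sqrt(n)) / (n + sqrt(n)) has
# constant squared-error risk, so by the Corollary it is minimax.
n = 25
x = np.arange(n + 1)
delta = (x + 0.5 * np.sqrt(n)) / (n + np.sqrt(n))

for theta in (0.1, 0.3, 0.5, 0.9):
    pmf = binom.pmf(x, n, theta)                  # P(x | theta)
    risk = np.sum(pmf * (delta - theta) ** 2)     # exact E[(delta - theta)^2]
    print(f"theta = {theta:.1f}  risk = {risk:.6f}")

print("predicted constant risk:", 1.0 / (4.0 * (1.0 + np.sqrt(n)) ** 2))
```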

Definition: A sequence of prior distributions $\pi_n$ is called least favorable if for any other distribution $\pi'$,

$$\lim_{n\to\infty} r_{\pi_n}\geq r_{\pi'}.$$

Theorem 2: If there is a sequence of priors $\pi_n$ and an estimator $\delta$ such that $\sup_\theta R(\theta,\delta)=\lim_{n\to\infty} r_{\pi_n}$, then:

  1. $\delta$ is minimax.
  2. The sequence $\pi_n$ is least favorable.

Notice that no uniqueness is guaranteed here. For example, the ML estimator of Example 2 below may be attained as the limit of Bayes estimators with respect to a uniform prior $\pi_n\sim U[-n,n]$ with increasing support, and also with respect to a zero-mean normal prior $\pi_n\sim N(0,n\sigma^2)$ with increasing variance. So the resulting ML estimator is not the unique minimax estimator, nor is the least favorable prior unique.
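
The following short sketch (an illustration with assumed values, not from the article) shows the normal-prior case in the scalar setting: for a single observation $x\sim N(\theta,\sigma^2)$ and prior $\theta\sim N(0,n\sigma^2)$, the posterior mean shrinks $x$ by the factor $n/(n+1)$, which tends to the ML estimate $x$ as $n$ grows.

```python
# Scalar sketch: posterior-mean (Bayes) estimates under N(0, n*sigma^2) priors
# converge to the ML estimate delta(x) = x as the prior variance grows.
sigma2 = 1.0
x = 2.7  # an arbitrary observed value

for n in (1, 10, 100, 10_000):
    shrinkage = n * sigma2 / (n * sigma2 + sigma2)   # posterior-mean factor n / (n + 1)
    print(f"n = {n:6d}  Bayes estimate = {shrinkage * x:.4f}")

print(f"ML estimate = {x:.4f}")
```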

Example 2: Consider the problem of estimating the mean of a $p$-dimensional Gaussian random vector, $x\sim N(\theta,I_p\sigma^2)$. The maximum likelihood (ML) estimator for $\theta$ in this case is simply $\delta_{ML}=x$, and its risk is

$$R(\theta,\delta_{ML})=E\{\|\delta_{ML}-\theta\|^2\}=\sum_{i=1}^{p}E\{(x_i-\theta_i)^2\}=p\sigma^2.$$

The risk is constant, but the ML estimator is actually not a Bayes estimator, so the Corollary of Theorem 1 does not apply. However, the ML estimator is the limit of the Bayes estimators with respect to the prior sequence $\pi_n\sim N(0,n\sigma^2 I_p)$, and hence it is indeed minimax according to Theorem 2. Nonetheless, minimaxity does not always imply admissibility. In fact, in this example the ML estimator is known to be inadmissible (not admissible) whenever $p>2$. The famous James–Stein estimator dominates the ML estimator whenever $p>2$. Though both estimators have the same risk $p\sigma^2$ when $\|\theta\|\to\infty$, and they are both minimax, the James–Stein estimator has smaller risk for any finite $\|\theta\|$.
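
A Monte Carlo sketch (with assumed values $\sigma=1$, $p=10$; not from the article) comparing the two estimators' risks at several values of $\|\theta\|$ makes the domination visible: the ML risk stays near $p\sigma^2$, while the James–Stein risk is smaller and approaches it only as $\|\theta\|$ grows.

```python
import numpy as np

# Compare the risk of the ML estimator delta_ML(x) = x with the James-Stein
# estimator delta_JS(x) = (1 - (p - 2) / ||x||^2) * x for x ~ N(theta, I_p).
rng = np.random.default_rng(0)
p, n_trials = 10, 50_000

def risks(theta):
    x = rng.normal(theta, 1.0, size=(n_trials, p))
    shrink = 1.0 - (p - 2) / np.sum(x**2, axis=1, keepdims=True)
    js = shrink * x
    risk_ml = np.mean(np.sum((x - theta) ** 2, axis=1))
    risk_js = np.mean(np.sum((js - theta) ** 2, axis=1))
    return risk_ml, risk_js

for norm in (0.0, 2.0, 5.0, 20.0):
    theta = np.zeros(p)
    theta[0] = norm                       # a parameter vector with ||theta|| = norm
    r_ml, r_js = risks(theta)
    print(f"||theta|| = {norm:5.1f}  ML risk ~ {r_ml:5.2f}  JS risk ~ {r_js:5.2f}")
```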

Some examples

In general, it is difficult, and often even impossible, to determine the minimax estimator. Nonetheless, in many cases a minimax estimator has been determined.

Example 3: Bounded normal mean: Consider the problem of estimating the mean of a normal vector $x\sim N(\theta,I_n\sigma^2)$, where it is known that $\|\theta\|^2\leq M$. The Bayes estimator with respect to a prior which is uniformly distributed on the edge of the bounding sphere is known to be minimax whenever $M\leq n$. The analytical expression for this estimator is

$$\delta^M(x)=\frac{M J_{n+1}(M\|x\|)}{\|x\|\,J_n(M\|x\|)}\,x,$$

where $J_n(t)$ is the modified Bessel function of the first kind of order $n$.
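
Since the $J_n$ above denotes the modified Bessel function of the first kind (commonly written $I_n$), it can be evaluated with scipy.special.iv; the sketch below (an illustration, with an arbitrary observation and bound) simply implements the displayed formula.

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def bounded_mean_estimator(x, M):
    """Minimax estimator for a normal mean known to satisfy ||theta||^2 <= M,
    following the displayed formula (the article's J_n evaluated via iv)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    norm_x = np.linalg.norm(x)
    factor = M * iv(n + 1, M * norm_x) / (norm_x * iv(n, M * norm_x))
    return factor * x

# Example call with an arbitrary observation and bound M = 1: the result is a
# shrunken version of x whose norm respects the constraint more closely.
x = np.array([0.8, -1.3, 2.1])
print(bounded_mean_estimator(x, M=1.0))
```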

Asymptotic minimax estimator

The difficulty of determining the exact minimax estimator has motivated the study of asymptotically minimax estimators: an estimator $\delta'$ is called $c$-asymptotic (or approximate) minimax if

$$\sup_{\theta\in\Theta}R(\theta,\delta')\leq c\,\inf_\delta\sup_\theta R(\theta,\delta).$$

For many estimation problems, especially in the non-parametric estimation setting, various approximate minimax estimators have been established. The design of the approximate minimax estimator is intimately related to the geometry of $\Theta$, such as its metric entropy.

Randomised minimax estimator

Sometimes, a minimax estimator may take the form of a randomised decision rule. Consider a decision problem in which the parameter space has just two elements, so that every decision rule can be represented by a point in the plane: the x-coordinate is its risk when the parameter is $\theta_1$ and the y-coordinate is its risk when the parameter is $\theta_2$. In such a problem, the minimax estimator may lie on a line segment connecting two deterministic estimators: choosing $\delta_1$ with probability $1-p$ and $\delta_2$ with probability $p$ minimises the supremum risk.
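
A small numerical sketch (with made-up risk values for two hypothetical deterministic rules; nothing here comes from the article) shows how randomisation can beat both pure rules in the worst case: the mixed rule's risk pair moves along the segment joining the two pure rules' risk pairs, and the minimax choice of $p$ minimises the larger coordinate.

```python
import numpy as np

# Risks (R(theta_1, .), R(theta_2, .)) of two hypothetical deterministic rules.
r1 = np.array([1.0, 4.0])   # delta_1
r2 = np.array([3.0, 2.0])   # delta_2

p_grid = np.linspace(0.0, 1.0, 1001)
mixed = np.outer(1.0 - p_grid, r1) + np.outer(p_grid, r2)   # one risk pair per p
worst_case = mixed.max(axis=1)                              # sup over the two parameters

best = np.argmin(worst_case)
print(f"optimal p ~ {p_grid[best]:.3f}  minimax risk ~ {worst_case[best]:.3f}")
# Here the pure rules have worst-case risks 4.0 and 3.0, while the best
# randomised rule (p = 0.75) achieves a strictly smaller maximum risk of 2.5.
```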

Relationship to robust optimization

Robust optimization is an approach for solving optimization problems under uncertainty in the knowledge of the underlying parameters. For instance, MMSE Bayesian estimation of a parameter requires knowledge of the parameter correlation function. If this correlation function is not perfectly known, a popular minimax robust optimization approach is to define a set characterizing the uncertainty about the correlation function, and then to pursue a minimax optimization over the uncertainty set and the estimator respectively. Similar minimax optimizations can be pursued to make estimators robust to certain imprecisely known parameters. For instance, such techniques have been studied recently in the area of signal processing.

In R. Fandom Noubiap and W. Seidel (2001), an algorithm for calculating a Gamma-minimax decision rule was developed for the case in which Gamma is given by a finite number of generalized moment conditions. Such a decision rule minimizes the maximum of the integrals of the risk function with respect to all distributions in Gamma. Gamma-minimax decision rules are of interest in robustness studies in Bayesian statistics.

References


  1. Berger, J.O. (1985). Statistical Decision Theory and Bayesian Analysis (2nd ed.). New York: Springer-Verlag. xv+425 pp. ISBN 0-387-96098-8. MR 0580664.

  2. Steinhaus, Hugon (1957). "The problem of estimation". Ann. Math. Statist. 28 (3): 633–648. doi:10.1214/aoms/1177706876. JSTOR 2237224. MR 92313. Zbl 0088.35503.