Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements. In estimation theory, two approaches are generally considered:[1]

- The probabilistic approach (described in this article) assumes that the measured data is random, with a probability distribution dependent on the parameters of interest.
- The set-membership approach assumes that the measured data vector belongs to a set which depends on the parameter vector.
For example, it is desired to estimate the proportion of a population of voters who will vote for a particular candidate. That proportion is the parameter sought; the estimate is based on a small random sample of voters. Alternatively, it is desired to estimate the probability of a voter voting for a particular candidate, based on some demographic features, such as age.
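As a rough sketch of the first example, the proportion can be estimated by the fraction of sampled voters favoring the candidate. The Python snippet below simulates such a poll; the true proportion and sample size are hypothetical values chosen for illustration, not figures from the text:

```python
import random

random.seed(0)

TRUE_PROPORTION = 0.42  # hypothetical population parameter (unknown in practice)
SAMPLE_SIZE = 1000      # hypothetical poll size

# Simulate each polled voter: 1 if they favor the candidate, 0 otherwise.
sample = [1 if random.random() < TRUE_PROPORTION else 0 for _ in range(SAMPLE_SIZE)]

# The natural estimator of the proportion is the sample fraction.
estimate = sum(sample) / SAMPLE_SIZE
print(f"true proportion = {TRUE_PROPORTION}, estimate = {estimate:.3f}")
```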
Or, for example, in radar the aim is to find the range of objects (airplanes, boats, etc.) by analyzing the two-way transit timing of received echoes of transmitted pulses. Since the reflected pulses are unavoidably embedded in electrical noise, their measured values are randomly distributed, so that the transit time must be estimated.
As another example, in electrical communication theory, the measurements which contain information regarding the parameters of interest are often associated with a noisy signal.
For a given model, several statistical "ingredients" are needed so the estimator can be implemented. The first is a statistical sample – a set of data points taken from a random vector (RV) of size $N$. Put into a vector,

$$\mathbf{x} = \begin{bmatrix} x[0] \\ x[1] \\ \vdots \\ x[N-1] \end{bmatrix}.$$

Secondly, there are $M$ parameters

$$\boldsymbol{\theta} = \begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_M \end{bmatrix},$$

whose values are to be estimated. Third, the continuous probability density function (pdf) or its discrete counterpart, the probability mass function (pmf), of the underlying distribution that generated the data must be stated conditional on the values of the parameters:

$$p(\mathbf{x} \mid \boldsymbol{\theta}).$$

It is also possible for the parameters themselves to have a probability distribution (e.g., Bayesian statistics). It is then necessary to define the Bayesian probability

$$\pi(\boldsymbol{\theta}).$$

After the model is formed, the goal is to estimate the parameters, with the estimates commonly denoted $\hat{\boldsymbol{\theta}}$, where the "hat" indicates the estimate.
One common estimator is the minimum mean squared error (MMSE) estimator, which utilizes the error between the estimated parameters and the actual value of the parameters,

$$\mathbf{e} = \hat{\boldsymbol{\theta}} - \boldsymbol{\theta},$$

as the basis for optimality. This error term is then squared, and the expected value of the squared error is minimized for the MMSE estimator:

$$\mathrm{MSE} = E\left[(\hat{\boldsymbol{\theta}} - \boldsymbol{\theta})^2\right].$$
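To make the squared-error criterion concrete, the sketch below approximates $E[(\hat{\theta} - \theta)^2]$ for a given estimator by Monte Carlo simulation. This is an illustrative sketch, not part of the original text; the scalar Gaussian setting and all numeric values are assumptions:

```python
import random

random.seed(1)

def empirical_mse(estimator, theta, n_samples, n_trials=10_000, sigma=1.0):
    """Approximate E[(theta_hat - theta)^2] by averaging squared errors
    over many simulated experiments (assumed Gaussian noise model)."""
    total = 0.0
    for _ in range(n_trials):
        data = [theta + random.gauss(0.0, sigma) for _ in range(n_samples)]
        err = estimator(data) - theta
        total += err * err
    return total / n_trials

def sample_mean(data):
    return sum(data) / len(data)

# For an unbiased estimator the MSE equals the variance; here ~ sigma^2 / N = 0.1.
print(empirical_mse(sample_mean, theta=2.0, n_samples=10))
```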
See main article: Estimator.
Commonly used estimators (estimation methods) and topics related to them include:

- Maximum likelihood estimators
- Bayes estimators
- Method of moments estimators
- Cramér–Rao bound
- Least squares
- Minimum mean squared error (MMSE)
- Maximum a posteriori (MAP)
- Minimum variance unbiased estimator (MVUE)
- Best linear unbiased estimator (BLUE)
- Particle filter
- Markov chain Monte Carlo (MCMC)
- Kalman filter and its variants
- Wiener filter
Consider a received discrete signal, $x[n]$, of $N$ independent samples that consists of an unknown constant $A$ with additive white Gaussian noise (AWGN) $w[n]$ with zero mean and known variance $\sigma^2$ (i.e., $\mathcal{N}(0, \sigma^2)$). Since the variance is known, the only unknown parameter is $A$.

The model for the signal is then

$$x[n] = A + w[n], \qquad n = 0, 1, \ldots, N-1.$$
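A minimal sketch of drawing data from this model (the particular values of $A$, $\sigma$, and $N$ below are assumptions for illustration; in a real problem $A$ would be unknown):

```python
import random

random.seed(2)

A = 3.0      # the unknown constant (known here only because we simulate)
SIGMA = 1.0  # known noise standard deviation
N = 100      # number of samples

# x[n] = A + w[n], with w[n] ~ N(0, sigma^2) i.i.d. (AWGN)
x = [A + random.gauss(0.0, SIGMA) for _ in range(N)]
```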
Two possible (of many) estimators for the parameter $A$ are:

$$\hat{A}_1 = x[0]$$

$$\hat{A}_2 = \frac{1}{N} \sum_{n=0}^{N-1} x[n]$$

the latter being the sample mean of the data.
Both of these estimators have a mean of $A$, which can be shown by taking the expected value of each estimator:

$$E[\hat{A}_1] = E[x[0]] = A$$

and

$$E[\hat{A}_2] = E\left[\frac{1}{N}\sum_{n=0}^{N-1} x[n]\right] = \frac{1}{N}\sum_{n=0}^{N-1} E[x[n]] = \frac{1}{N}(NA) = A.$$

At this point, these two estimators would appear to perform identically. However, the difference between them becomes apparent when comparing the variances:

$$\mathrm{var}(\hat{A}_1) = \mathrm{var}(x[0]) = \sigma^2$$

and

$$\mathrm{var}(\hat{A}_2) = \mathrm{var}\left(\frac{1}{N}\sum_{n=0}^{N-1} x[n]\right) = \frac{1}{N^2}\sum_{n=0}^{N-1}\mathrm{var}(x[n]) = \frac{1}{N^2}(N\sigma^2) = \frac{\sigma^2}{N}.$$

It would seem that the sample mean is a better estimator, since its variance is lower for every $N > 1$.
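The variance comparison can be checked numerically by repeating the experiment many times and computing the empirical variance of each estimator; the sketch below does so under assumed parameter values:

```python
import random
import statistics

random.seed(3)

A, SIGMA, N, TRIALS = 3.0, 1.0, 100, 5000

a1_estimates, a2_estimates = [], []
for _ in range(TRIALS):
    x = [A + random.gauss(0.0, SIGMA) for _ in range(N)]
    a1_estimates.append(x[0])        # A_hat_1 = x[0]
    a2_estimates.append(sum(x) / N)  # A_hat_2 = sample mean

print(statistics.variance(a1_estimates))  # ~ sigma^2     = 1.0
print(statistics.variance(a2_estimates))  # ~ sigma^2 / N = 0.01
```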
See main article: Maximum likelihood.
Continuing the example using the maximum likelihood estimator, the probability density function (pdf) of the noise for one sample $w[n]$ is

$$p(w[n]) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{1}{2\sigma^2} w[n]^2\right)$$

and the probability of $x[n]$ becomes ($x[n]$ can be thought of as $\mathcal{N}(A, \sigma^2)$)

$$p(x[n]; A) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{1}{2\sigma^2}(x[n]-A)^2\right).$$

By independence, the probability of $\mathbf{x}$ becomes

$$p(\mathbf{x}; A) = \prod_{n=0}^{N-1} p(x[n]; A) = \frac{1}{\left(\sigma\sqrt{2\pi}\right)^N} \exp\left(-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n]-A)^2\right).$$

Taking the natural logarithm of the pdf

$$\ln p(\mathbf{x}; A) = -N\ln\left(\sigma\sqrt{2\pi}\right) - \frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n]-A)^2$$

the maximum likelihood estimator is

$$\hat{A} = \arg\max \ln p(\mathbf{x}; A).$$
Taking the first derivative of the log-likelihood function

$$\frac{\partial}{\partial A} \ln p(\mathbf{x}; A) = \frac{1}{\sigma^2}\sum_{n=0}^{N-1}(x[n]-A) = \frac{1}{\sigma^2}\left(\sum_{n=0}^{N-1} x[n] - NA\right)$$

and setting it to zero

$$0 = \frac{1}{\sigma^2}\left(\sum_{n=0}^{N-1} x[n] - NA\right) \implies \sum_{n=0}^{N-1} x[n] = NA.$$

This results in the maximum likelihood estimator

$$\hat{A} = \frac{1}{N}\sum_{n=0}^{N-1} x[n],$$

which is simply the sample mean. From this example, it was found that the sample mean is the maximum likelihood estimator for $N$ samples of a fixed, unknown parameter corrupted by AWGN.
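This result can be sanity-checked numerically by evaluating the log-likelihood over a grid of candidate values of $A$ and confirming that the maximizer agrees with the sample mean (the data and grid below are illustrative assumptions):

```python
import math
import random

random.seed(4)

A_TRUE, SIGMA, N = 3.0, 1.0, 50
x = [A_TRUE + random.gauss(0.0, SIGMA) for _ in range(N)]

def log_likelihood(a):
    # ln p(x; A) = -N ln(sigma sqrt(2 pi)) - (1 / (2 sigma^2)) sum (x[n] - A)^2
    return (-N * math.log(SIGMA * math.sqrt(2 * math.pi))
            - sum((xn - a) ** 2 for xn in x) / (2 * SIGMA ** 2))

grid = [i / 1000 for i in range(2000, 4001)]  # candidate values of A in [2, 4]
a_ml = max(grid, key=log_likelihood)
print(a_ml, sum(x) / N)  # grid maximizer matches the sample mean (to grid resolution)
```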
To find the Cramér–Rao lower bound (CRLB) of the sample mean estimator, it is first necessary to find the Fisher information number

$$\mathcal{I}(A) = E\left[\left(\frac{\partial}{\partial A}\ln p(\mathbf{x};A)\right)^2\right] = -E\left[\frac{\partial^2}{\partial A^2}\ln p(\mathbf{x};A)\right]$$

and, copying from above,

$$\frac{\partial}{\partial A}\ln p(\mathbf{x};A) = \frac{1}{\sigma^2}\left(\sum_{n=0}^{N-1} x[n] - NA\right).$$

Taking the second derivative

$$\frac{\partial^2}{\partial A^2}\ln p(\mathbf{x};A) = \frac{1}{\sigma^2}(-N) = -\frac{N}{\sigma^2}$$

and finding the negative expected value is trivial, since it is now a deterministic constant:

$$-E\left[\frac{\partial^2}{\partial A^2}\ln p(\mathbf{x};A)\right] = \frac{N}{\sigma^2}.$$

Finally, putting the Fisher information into

$$\mathrm{var}(\hat{A}) \geq \frac{1}{\mathcal{I}}$$

results in

$$\mathrm{var}(\hat{A}) \geq \frac{\sigma^2}{N}.$$

Comparing this to the variance of the sample mean (determined previously) shows that the sample mean is equal to the Cramér–Rao lower bound for all values of $N$ and $A$. In other words, the sample mean is the efficient estimator, and hence also the minimum variance unbiased estimator (MVUE), in addition to being the maximum likelihood estimator.
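Efficiency can likewise be checked by Monte Carlo: the empirical variance of the sample mean should track the bound $\sigma^2/N$ as $N$ grows. The sketch below uses assumed values of $A$ and $\sigma$:

```python
import random
import statistics

random.seed(5)

A, SIGMA, TRIALS = 3.0, 2.0, 10_000

for N in (10, 100, 1000):
    means = [sum(A + random.gauss(0.0, SIGMA) for _ in range(N)) / N
             for _ in range(TRIALS)]
    # Empirical variance of the sample mean vs. the CRLB sigma^2 / N.
    print(N, statistics.variance(means), SIGMA ** 2 / N)
```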
See main article: German tank problem.
One of the simplest non-trivial examples of estimation is the estimation of the maximum of a uniform distribution. It is used as a hands-on classroom exercise and to illustrate basic principles of estimation theory. Further, in the case of estimation based on a single sample, it demonstrates philosophical issues and possible misunderstandings in the use of maximum likelihood estimators and likelihood functions.
Given a discrete uniform distribution $1, 2, \ldots, N$ with unknown maximum, the uniformly minimum-variance unbiased (UMVU) estimator for the maximum is given by

$$\hat{N} = \frac{k+1}{k} m - 1 = m + \frac{m}{k} - 1,$$

where $m$ is the sample maximum and $k$ is the sample size, sampling without replacement. The formula may be understood intuitively as "the sample maximum plus the average gap between observations in the sample", the gap being added to compensate for the negative bias of the sample maximum as an estimator for the population maximum.
This has a variance of

$$\frac{1}{k}\frac{(N-k)(N+1)}{k+2} \approx \frac{N^2}{k^2} \quad \text{for small samples } k \ll N,$$

so a standard deviation of approximately $N/k$, the (population) average size of a gap between samples; compare $m/k$ in the formula above.
The sample maximum is the maximum likelihood estimator for the population maximum, but, as discussed above, it is biased.
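A simulation makes the bias correction visible: drawing samples without replacement from $1, \ldots, N$ and comparing the raw sample maximum with the corrected estimate $m + m/k - 1$. The population size, sample size, and trial count below are assumptions for illustration:

```python
import random
import statistics

random.seed(6)

N_TRUE, K, TRIALS = 1000, 10, 20_000  # population maximum, sample size, repetitions

raw_max, umvu = [], []
for _ in range(TRIALS):
    m = max(random.sample(range(1, N_TRUE + 1), K))  # sampling without replacement
    raw_max.append(m)
    umvu.append(m + m / K - 1)  # UMVU estimate: (k+1)/k * m - 1

print(statistics.mean(raw_max))  # biased low: noticeably below 1000
print(statistics.mean(umvu))     # ~ 1000, the bias is corrected
print(statistics.stdev(umvu))    # on the order of N/k = 100 for k << N
```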
Numerous fields require the use of estimation theory. Some of these fields include:

- Interpretation of scientific experiments
- Signal processing
- Clinical trials
- Opinion polls
- Quality control
- Telecommunications
- Control theory (in particular adaptive control)
- Orbit determination
Measured data are likely to be subject to noise or uncertainty and it is through statistical probability that optimal solutions are sought to extract as much information from the data as possible.