Least-squares adjustment explained

Least-squares adjustment is a model for the solution of an overdetermined system of equations based on the principle of least squares of observation residuals. It is used extensively in surveying, geodesy, and photogrammetry, disciplines which collectively constitute the field of geomatics.

Formulation

There are three forms of least squares adjustment: parametric, conditional, and combined. In the parametric adjustment, one can find an observation equation h(X)=Y relating observations Y explicitly in terms of parameters X. In the conditional adjustment, there exists a condition equation g(Y)=0 involving only observations. In the combined adjustment, both parameters and observations are involved implicitly in a mixed-model equation f(X,Y)=0.

Clearly, parametric and conditional adjustments correspond to the more general combined case when f(X,Y)=h(X)-Y and f(X,Y)=g(Y), respectively. Yet the special cases warrant simpler solutions, as detailed below. Often in the literature, Y may be denoted L.
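
As a concrete illustration (the three examples below are chosen here and are not part of the original formulation), a line fit with known abscissas, a triangle condition, and an errors-in-variables line fit instantiate the three forms:

    % Parametric: line fit with known abscissas t_i; parameters X=(a,b), observations Y_i.
    h(X) = a t_i + b = Y_i
    % Conditional: the measured angles of a plane triangle must sum to 180 degrees.
    g(Y) = \alpha + \beta + \gamma - 180^\circ = 0
    % Combined: line fit in which both coordinates (Y_{x,i}, Y_{y,i}) are measured.
    f(X,Y) = a Y_{x,i} + b - Y_{y,i} = 0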

Solution

The equalities above only hold for the estimated parameters \hat{X} and observations \hat{Y}, thus f\left(\hat{X},\hat{Y}\right)=0. In contrast, measured observations \tilde{Y} and approximate parameters \tilde{X} produce a nonzero misclosure:

\tilde{w}=f\left(\tilde{X},\tilde{Y}\right).

One can proceed to a Taylor series expansion of the equations, which results in the Jacobians or design matrices: the first one,

A=\partial f/\partial X;

and the second one,

B=\partial f/\partial Y.

The linearized model then reads:

\tilde{w}+A\hat{x}+B\hat{y}=0,

where \hat{x}=\hat{X}-\tilde{X} are estimated parameter corrections to the a priori values, and \hat{y}=\hat{Y}-\tilde{Y} are post-fit observation residuals.
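
As a hedged numerical sketch (the model function f, the sample numbers, and the finite-difference scheme are all assumptions made here for illustration), the following Python code forms the misclosure vector and both design matrices for a combined model in which a line is fitted to points whose two coordinates are both measured:

    import numpy as np

    def f(X, Y):
        # Residual equations of an illustrative combined model:
        # a*x_i + b - y_i = 0, with both coordinates measured.
        a, b = X
        n = len(Y) // 2
        xs, ys = Y[:n], Y[n:]
        return a * xs + b - ys

    X_approx = np.array([1.1, 0.2])                    # approximate parameters X~
    Y_meas = np.array([0.0, 1.0, 2.0, 0.3, 1.2, 2.4])  # measured observations Y~

    w = f(X_approx, Y_meas)                            # misclosure vector w~

    # Jacobians by forward differences; analytic derivatives work equally well.
    eps = 1e-7
    A = np.column_stack([(f(X_approx + eps * np.eye(2)[j], Y_meas) - w) / eps
                         for j in range(2)])           # A = df/dX
    B = np.column_stack([(f(X_approx, Y_meas + eps * np.eye(6)[j]) - w) / eps
                         for j in range(6)])           # B = df/dY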

In the parametric adjustment, the second design matrix is a negative identity, B=-I, and the misclosure vector can be interpreted as the pre-fit residuals, \tilde{y}=\tilde{w}=h(\tilde{X})-\tilde{Y}, so the system simplifies to:

A\hat{x}=\hat{y}-\tilde{y},

which is in the form of ordinary least squares. In the conditional adjustment, the first design matrix is null, A=0. For the more general cases, Lagrange multipliers are introduced to relate the two Jacobian matrices and transform the constrained least-squares problem into an unconstrained one (albeit a larger one). In any case, their manipulation leads to the \hat{X} and \hat{Y} vectors as well as the respective a posteriori covariance matrices of the parameters and observations.
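
To make the Lagrange-multiplier step concrete, the standard solution of the combined model can be written out as follows; the weight matrix W and its cofactor inverse Q=W^{-1} are the usual ingredients of weighted least squares and are introduced here, not in the text above:

    % Minimize \hat{y}^\top W \hat{y} subject to \tilde{w} + A\hat{x} + B\hat{y} = 0.
    % With Q = W^{-1} and M = (B Q B^\top)^{-1}:
    \begin{align}
      \hat{x} &= -\left(A^\top M A\right)^{-1} A^\top M\,\tilde{w},\\
      \hat{k} &= -M\left(\tilde{w} + A\hat{x}\right),\\
      \hat{y} &= Q B^\top \hat{k},
    \end{align}

where \hat{k} is the vector of Lagrange multipliers (correlates); setting B=-I and Q=I recovers the ordinary least-squares normal equations of the parametric case.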

Computation

Given the matrices and vectors above, the solution is found via standard least-squares methods, e.g., forming the normal matrix and applying the Cholesky decomposition, applying the QR factorization directly to the Jacobian matrix, or using iterative methods for very large systems.
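
As a minimal sketch of the first two approaches (the design matrix and synthetic data below are assumptions chosen for the demo), both routes recover the same parametric-case estimate:

    import numpy as np

    rng = np.random.default_rng(42)
    A = rng.standard_normal((100, 3))       # design matrix: 100 observations, 3 parameters
    x_true = np.array([1.0, -2.0, 0.5])
    b = A @ x_true + 0.01 * rng.standard_normal(100)   # right-hand side with noise

    # 1) Normal matrix + Cholesky: solve (A^T A) x = A^T b via N = L L^T.
    N = A.T @ A
    L = np.linalg.cholesky(N)
    x_chol = np.linalg.solve(L.T, np.linalg.solve(L, A.T @ b))

    # 2) QR factorization applied directly to A (better conditioned than N).
    Q, R = np.linalg.qr(A)
    x_qr = np.linalg.solve(R, Q.T @ b)

    print(np.allclose(x_chol, x_qr))        # True: both methods agree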

Applications

Related concepts

Extensions

If rank deficiency is encountered, it can often be rectified by the inclusion of additional equations imposing constraints on the parameters and/or observations, leading to constrained least squares.
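
For instance (a hypothetical minimal example, with the network and the constraint chosen here for illustration), a leveling network in which only height differences are measured has a rank-deficient normal matrix, and a single constraint fixing the datum restores a unique solution via the bordered (Lagrange) system:

    import numpy as np

    # Height-difference design matrix for three points; rank 2 (datum defect of 1).
    A = np.array([[-1.0,  1.0, 0.0],
                  [ 0.0, -1.0, 1.0],
                  [-1.0,  0.0, 1.0]])
    b = np.array([1.0, 2.0, 3.1])            # measured height differences
    C = np.array([[1.0, 0.0, 0.0]])          # constraint: fix the height of point 1 to 0

    # Bordered normal system: [[N, C^T], [C, 0]] [x; k] = [A^T b; 0].
    N = A.T @ A
    K = np.block([[N, C.T], [C, np.zeros((1, 1))]])
    rhs = np.concatenate([A.T @ b, np.zeros(1)])
    x_hat = np.linalg.solve(K, rhs)[:3]      # adjusted heights under the datum constraint
    print(x_hat)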

  4. Neitzel . Frank . Generalization of total least-squares on example of unweighted and weighted 2D similarity transformation . Journal of Geodesy . Springer Science and Business Media LLC . 84 . 12 . 2010-09-17 . 0949-7714 . 10.1007/s00190-010-0408-0 . 751–762. 2010JGeod..84..751N . 123207786 .