Distortion (optics)

In geometric optics, distortion is a deviation from rectilinear projection: a projection in which straight lines in a scene remain straight in an image. It is a form of optical aberration.

Radial distortion

Although distortion can be irregular or follow many patterns, the most commonly encountered distortions are radially symmetric, or approximately so, arising from the symmetry of a photographic lens. These radial distortions can usually be classified as either barrel distortions or pincushion distortions.[1]

Mathematically, barrel and pincushion distortion are quadratic, meaning they increase as the square of distance from the center. In mustache distortion the quartic (degree 4) term is significant: in the center, the degree 2 barrel distortion is dominant, while at the edge the degree 4 distortion in the pincushion direction dominates. Other distortions are in principle possible – pincushion in center and barrel at the edge, or higher order distortions (degree 6, degree 8) – but do not generally occur in practical lenses, and higher order distortions are small relative to the main barrel and pincushion effects.
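The interplay of the degree-2 and degree-4 terms can be illustrated numerically. The sketch below uses hypothetical coefficients (a negative quadratic term and a positive quartic term, chosen only for illustration) so that the mapping is barrel-like near the center and pincushion-like at the edge, as in mustache distortion:

```python
# Radial distortion as a polynomial scale factor: r_img = r * (1 + k2*r^2 + k4*r^4).
# Hypothetical coefficients: k2 < 0 gives barrel behavior near the center,
# while k4 > 0 lets the quartic (pincushion) term dominate toward the edge.
k2, k4 = -0.30, 0.35

def radial_scale(r):
    """Multiplicative distortion factor at normalized radius r (0..1)."""
    return 1 + k2 * r**2 + k4 * r**4

for r in (0.2, 0.5, 1.0):
    s = radial_scale(r)
    kind = "barrel (compressed)" if s < 1 else "pincushion (stretched)"
    print(f"r={r:.1f}  scale={s:.3f}  {kind}")
```

With these illustrative values the scale factor is below 1 at small and mid radii (barrel) but above 1 at the edge (pincushion), reproducing the mustache pattern described above.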

Occurrence

In photography, distortion is particularly associated with zoom lenses, especially large-range zooms, but may also be found in prime lenses, and depends on focal distance: for example, the Canon EF 50mm f/1.4 exhibits barrel distortion at extremely short focal distances. Barrel distortion may be found in wide-angle lenses, and is often seen at the wide-angle end of zoom lenses, while pincushion distortion is often seen in older or low-end telephoto lenses. Mustache distortion is observed particularly on the wide end of zooms, with certain retrofocus lenses, and more recently on large-range zooms such as the Nikon 18–200 mm.

A certain amount of pincushion distortion is often found with visual optical instruments, e.g., binoculars, where it serves to counteract the globe effect.

To understand these distortions, remember that they are radial defects; the optical systems in question have rotational symmetry (omitting non-radial defects), so the didactically correct test image would be a set of concentric circles with even separation, like a shooter's target. It will then be observed that these common distortions actually imply a nonlinear radius mapping from the object to the image: what appears to be pincushion distortion is simply an exaggerated radius mapping for large radii in comparison with small radii, so a graph of the radius transformation (from object to image) is steeper at the upper (rightmost) end. Conversely, barrel distortion is a diminished radius mapping for large radii in comparison with small radii, and the corresponding graph is less steep at the upper (rightmost) end.
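The slope argument can be checked with a small numerical sketch. The cubic mappings and coefficients below are purely illustrative, not measured lens data:

```python
# Pincushion: the radius mapping is exaggerated at large radii (slope grows);
# barrel: the mapping is diminished at large radii (slope shrinks).
def pincushion(r):
    return r * (1 + 0.2 * r**2)   # mapping steepens with r

def barrel(r):
    return r * (1 - 0.2 * r**2)   # mapping flattens with r

def slope(f, r, h=1e-6):
    """Numerical derivative d(r_image)/d(r_object) at radius r."""
    return (f(r + h) - f(r - h)) / (2 * h)

print(f"pincushion slope: center {slope(pincushion, 0.1):.3f}, edge {slope(pincushion, 1.0):.3f}")
print(f"barrel     slope: center {slope(barrel, 0.1):.3f}, edge {slope(barrel, 1.0):.3f}")
```

As the text predicts, the pincushion mapping is steeper toward the edge, while the barrel mapping is flatter there.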

Chromatic aberration

Radial distortion that depends on wavelength is called "lateral chromatic aberration" – "lateral" because radial, "chromatic" because dependent on color (wavelength). This can cause colored fringes in high-contrast areas in the outer parts of the image. This should not be confused with axial (longitudinal) chromatic aberration, which causes aberrations throughout the field, particularly purple fringing.

Origin of terms

The names for these distortions come from familiar objects which are visually similar.

Software correction

See also: Image rectification.

[Image: With uncorrected barrel distortion (at 26 mm)]
[Image: Barrel distortion corrected with software (the ENIAC computer)]

Radial distortion, while typically dominated by low-order radial components,[2] can be corrected using Brown's distortion model,[3] also known as the Brown–Conrady model, based on earlier work by Conrady.[4] The Brown–Conrady model corrects both for radial distortion and for tangential distortion caused by physical elements in a lens not being perfectly aligned; the latter is also known as decentering distortion. See Zhang[5] for additional discussion of radial distortion. The Brown–Conrady distortion model is

\begin{align}
x_\mathrm{u} = x_\mathrm{d} & + (x_\mathrm{d} - x_\mathrm{c})(K_1 r^2 + K_2 r^4 + \cdots) + \big(P_1(r^2 + 2(x_\mathrm{d} - x_\mathrm{c})^2) \\
& + 2 P_2 (x_\mathrm{d} - x_\mathrm{c})(y_\mathrm{d} - y_\mathrm{c})\big)(1 + P_3 r^2 + P_4 r^4 + \cdots) \\
y_\mathrm{u} = y_\mathrm{d} & + (y_\mathrm{d} - y_\mathrm{c})(K_1 r^2 + K_2 r^4 + \cdots) + \big(2 P_1 (x_\mathrm{d} - x_\mathrm{c})(y_\mathrm{d} - y_\mathrm{c}) \\
& + P_2(r^2 + 2(y_\mathrm{d} - y_\mathrm{c})^2)\big)(1 + P_3 r^2 + P_4 r^4 + \cdots),
\end{align}

where

(x_\mathrm{d}, y_\mathrm{d}) is the distorted image point as projected on the image plane using the specified lens;
(x_\mathrm{u}, y_\mathrm{u}) is the undistorted image point as projected by an ideal pinhole camera;
(x_\mathrm{c}, y_\mathrm{c}) is the distortion center;
K_n is the n-th radial distortion coefficient;
P_n is the n-th tangential distortion coefficient; and
r = \sqrt{(x_\mathrm{d} - x_\mathrm{c})^2 + (y_\mathrm{d} - y_\mathrm{c})^2}, the Euclidean distance between the distorted image point and the distortion center.[2]
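Under these conventions the model maps a distorted point to its undistorted position, and can be sketched directly in code. The function below is an illustrative implementation (not taken from any particular library), truncated after K_2 and P_2, so the higher-order tangential factor (1 + P_3 r^2 + ...) reduces to 1:

```python
def brown_conrady_undistort(xd, yd, xc, yc, K, P):
    """Map a distorted point (xd, yd) to its undistorted position.

    K = (K1, K2, ...): radial coefficients; P = (P1, P2): tangential
    (decentering) coefficients. Higher-order tangential terms (P3, P4, ...)
    are omitted, so their factor is taken as 1.
    """
    dx, dy = xd - xc, yd - yc
    r2 = dx * dx + dy * dy  # squared distance to the distortion center
    radial = sum(k * r2 ** (i + 1) for i, k in enumerate(K))
    xu = xd + dx * radial + P[0] * (r2 + 2 * dx * dx) + 2 * P[1] * dx * dy
    yu = yd + dy * radial + 2 * P[0] * dx * dy + P[1] * (r2 + 2 * dy * dy)
    return xu, yu

# With all coefficients zero, the mapping is the identity:
print(brown_conrady_undistort(0.3, 0.4, 0.0, 0.0, (0.0,), (0.0, 0.0)))
```

With a negative K_1 (barrel), an off-center point is pulled toward the center, matching the sign convention discussed below.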

Barrel distortion typically corresponds to a negative K_1 term, whereas pincushion distortion corresponds to a positive one. Moustache distortion corresponds to a non-monotonic radial series in which the distortion changes sign at some radius r.

To model radial distortion, the division model[6] typically provides a more accurate approximation than the Brown–Conrady even-order polynomial model:[7]

\begin{align}
x_\mathrm{u} &= x_\mathrm{c} + \frac{x_\mathrm{d} - x_\mathrm{c}}{1 + K_1 r^2 + K_2 r^4 + \cdots} \\
y_\mathrm{u} &= y_\mathrm{c} + \frac{y_\mathrm{d} - y_\mathrm{c}}{1 + K_1 r^2 + K_2 r^4 + \cdots},
\end{align}

using the same parameters previously defined. For radial distortion, this division model is often preferred over the Brown–Conrady model, as it requires fewer terms to more accurately describe severe distortion.[7] Using this model, a single term is usually sufficient to model most cameras.[8]
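A single-coefficient division model is short enough to sketch directly. The code below is illustrative (symbols as defined above), not taken from any library:

```python
def division_undistort(xd, yd, xc, yc, k1):
    """Single-term division model: map a distorted point to its
    undistorted position."""
    dx, dy = xd - xc, yd - yc
    r2 = dx * dx + dy * dy        # squared distance to the distortion center
    denom = 1 + k1 * r2
    return xc + dx / denom, yc + dy / denom

# With a negative k1 (barrel-like compression), an off-center point
# is pushed back outward by the correction:
print(division_undistort(0.5, 0.0, 0.0, 0.0, -0.1))
```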

Software can correct these distortions by warping the image with a reverse distortion. This involves determining which distorted pixel corresponds to each undistorted pixel, which is non-trivial due to the non-linearity of the distortion equation.[2] Lateral chromatic aberration (purple/green fringing) can be significantly reduced by applying such warping to the red, green and blue channels separately.
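As a sketch of the per-channel idea: if each color channel is warped with its own radial coefficient, red and blue can be pulled slightly more or less than green, shrinking the colored fringes. The coefficients below are hypothetical, not measured values:

```python
def warp_channel(x, y, k1):
    """Apply a single-term radial warp to one channel's coordinates
    (distortion center at the origin)."""
    r2 = x * x + y * y
    return x * (1 + k1 * r2), y * (1 + k1 * r2)

# Hypothetical per-channel coefficients: green carries the base geometric
# correction; red and blue deviate slightly to cancel lateral color.
coeffs = {"red": -0.051, "green": -0.050, "blue": -0.049}
point = (0.6, 0.8)  # a point at unit radius from the center
for channel, k1 in coeffs.items():
    print(channel, warp_channel(*point, k1))
```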

Distorting or undistorting requires either both sets of coefficients or inverting the non-linear problem, which, in general, lacks an analytical solution. Standard approaches, such as polynomial approximation, local linearization, and iterative solvers, all apply; which solver is preferable depends on the accuracy required and the computational resources available.
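As an illustration of the iterative route, the sketch below inverts a purely radial single-term polynomial model by fixed-point iteration. The coefficient is hypothetical and the distortion center is placed at the origin; real implementations add convergence checks and handle tangential terms:

```python
def distort(xu, yu, k1):
    """Forward radial model (center at origin): p_d = p_u * (1 + k1 * r_u^2)."""
    r2 = xu * xu + yu * yu
    s = 1 + k1 * r2
    return xu * s, yu * s

def undistort_iterative(xd, yd, k1, iters=20):
    """Invert the forward model by fixed-point iteration: start from the
    distorted point and repeatedly divide out the distortion factor
    evaluated at the current estimate."""
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        s = 1 + k1 * r2
        xu, yu = xd / s, yd / s
    return xu, yu

xd, yd = distort(0.6, 0.8, -0.05)
xu, yu = undistort_iterative(xd, yd, -0.05)
print(round(xu, 6), round(yu, 6))  # converges back to (0.6, 0.8)
```

For mild distortion the iteration contracts quickly; for severe distortion more iterations, or a Newton step, may be needed.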

In addition to usually being sufficient to model most cameras, as mentioned, the single-term division model has an analytical solution to the reverse-distortion problem.[7] In this case, the distorted pixels are given by

\begin{align}
x_\mathrm{d} &= x_\mathrm{c} + \frac{x_\mathrm{u} - x_\mathrm{c}}{2 K_1 r_\mathrm{u}^2} \left(1 - \sqrt{1 - 4 K_1 r_\mathrm{u}^2}\right) \\
y_\mathrm{d} &= y_\mathrm{c} + \frac{y_\mathrm{u} - y_\mathrm{c}}{2 K_1 r_\mathrm{u}^2} \left(1 - \sqrt{1 - 4 K_1 r_\mathrm{u}^2}\right),
\end{align}

where r_\mathrm{u} = \sqrt{(x_\mathrm{u} - x_\mathrm{c})^2 + (y_\mathrm{u} - y_\mathrm{c})^2}, the Euclidean distance between the undistorted image point and the distortion center.
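The closed-form inverse can be verified by a round trip: distort an ideal point with it, then undistort with the single-term division model. The code below is illustrative, with the distortion center at the origin and a hypothetical K_1 (the closed form requires K_1 and r_u to be nonzero):

```python
import math

def distort_division(xu, yu, k1):
    """Closed-form inverse of the single-term division model
    (center at origin; requires k1 != 0 and (xu, yu) != (0, 0))."""
    ru2 = xu * xu + yu * yu
    scale = (1 - math.sqrt(1 - 4 * k1 * ru2)) / (2 * k1 * ru2)
    return xu * scale, yu * scale

def undistort_division(xd, yd, k1):
    """Single-term division model: distorted -> undistorted."""
    s = 1 + k1 * (xd * xd + yd * yd)
    return xd / s, yd / s

# Round trip: distort an ideal point, then undistort it again.
xd, yd = distort_division(0.3, 0.4, -0.2)
xu, yu = undistort_division(xd, yd, -0.2)
print(round(xu, 9), round(yu, 9))  # recovers (0.3, 0.4)
```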

Calibrated

Calibrated systems correct distortion using a table of lens/camera transfer functions, such as those provided by PTlens and Lensfun.

Manual

Manual systems allow manual adjustment of distortion parameters; for example, ImageMagick can apply a barrel correction from user-supplied coefficients:

convert distorted_image.jpg -distort barrel "0.06335 -0.18432 -0.13009" corrected_image.jpg

Besides these systems that address still images, some tools, such as FFmpeg, can also adjust distortion parameters for video.

Related phenomena

Radial distortion is a failure of a lens to be rectilinear: a failure to image lines into lines. If a photograph is not taken straight-on then, even with a perfect rectilinear lens, rectangles will appear as trapezoids: lines are imaged as lines, but the angles between them are not preserved (tilt is not a conformal map). This effect can be controlled by using a perspective control lens, or corrected in post-processing.

Due to perspective, cameras image a cube as a square frustum (a truncated pyramid, with trapezoidal sides) – the far end is smaller than the near end. This creates perspective, and the rate at which this scaling happens (how quickly more distant objects shrink) creates a sense of a scene being deep or shallow. This cannot be changed or corrected by a simple transform of the resulting image, because it requires 3D information, namely the depth of objects in the scene. This effect is known as perspective distortion; the image itself is not distorted, but is perceived as distorted when viewed from a normal viewing distance.

Note that if the center of the image is closer than the edges (for example, a straight-on shot of a face), then barrel distortion and wide-angle distortion (taking the shot from close) both increase the size of the center, while pincushion distortion and telephoto distortion (taking the shot from far) both decrease the size of the center. However, radial distortion bends straight lines (out or in), while perspective distortion does not bend lines, and these are distinct phenomena. Fisheye lenses are wide-angle lenses with heavy barrel distortion and thus exhibit both these phenomena, so objects in the center of the image (if shot from a short distance) are particularly enlarged: even if the barrel distortion is corrected, the resulting image is still from a wide-angle lens, and will still have a wide-angle perspective.

See also

References

  1. Paul van Walree, "Distortion", Photographic optics, retrieved 2 February 2009. Archived 29 January 2009 at https://web.archive.org/web/20090129134205/http://toothwalker.org/optics/distortion.html.
  2. J. P. de Villiers, F. W. Leuschner and R. Geldenhuys, "Centi-pixel accurate real-time inverse distortion correction", 2008 International Symposium on Optomechatronic Technologies, SPIE, 17–19 November 2008. doi:10.1117/12.804771.
  3. Duane C. Brown, "Decentering distortion of lenses", Photogrammetric Engineering 32(3): 444–462, May 1966. Archived 12 March 2018 at https://web.archive.org/web/20180312205006/https://www.asprs.org/wp-content/uploads/pers/1966journal/may/1966_may_444-462.pdf.
  4. A. E. Conrady, "Decentred Lens-Systems", Monthly Notices of the Royal Astronomical Society 79(5): 384–390, 1919. doi:10.1093/mnras/79.5.384. Bibcode:1919MNRAS..79..384C.
  5. Zhengyou Zhang, "A Flexible New Technique for Camera Calibration", Technical Report MSR-TR-98-71, Microsoft Research, 1998.
  6. A. W. Fitzgibbon, "Simultaneous linear estimation of multiple view geometry and lens distortion", Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2001. doi:10.1109/CVPR.2001.990465.
  7. F. Bukhari and M. N. Dailey, "Automatic Radial Distortion Estimation from a Single Image", Journal of Mathematical Imaging and Vision, Springer, 2013. doi:10.1007/s10851-012-0342-2.
  8. J. Wang, F. Shi, J. Zhang and Y. Liu, "A new calibration model of camera lens distortion", Pattern Recognition, Elsevier, 2008. doi:10.1016/j.patcog.2007.06.012.
  9. "PTlens", retrieved 2 January 2012.
  10. "Lensfun", retrieved 16 April 2022.
  11. "lensfun – Rev 246 – /trunk/README", archived 13 October 2013 at https://archive.today/20131013142525/http://svn.berlios.de/wsvn/lensfun/trunk/README?rev=246.
  12. "OpenCV", opencv.org, retrieved 22 January 2018.
  13. Carlisle Wiley, "Articles: Digital Photography Review", dpreview.com, retrieved 3 July 2013. Archived 7 July 2012 at https://web.archive.org/web/20120707100436/http://www.dpreview.com/articles/distortion/.
  14. "ImageMagick v6 Examples – Lens Corrections".
  15. "Hugin tutorial – Simulating an architectural projection", 9 September 2009.
  16. "FFmpeg Filters Documentation".

External links