Pythagorean addition explained

In mathematics, Pythagorean addition is a binary operation on the real numbers that computes the length of the hypotenuse of a right triangle, given its two sides. According to the Pythagorean theorem, for a triangle with sides a and b, this length can be calculated as

a \oplus b = \sqrt{a^2 + b^2},

where \oplus denotes the Pythagorean addition operation.

This operation can be used in the conversion of Cartesian coordinates to polar coordinates. It also provides a simple notation and terminology for some formulas when its summands are complicated; for example, the energy-momentum relation in physics becomes E = mc^2 \oplus pc. It is implemented in many programming libraries as the hypot function, in a way designed to avoid errors arising due to limited-precision calculations performed on computers. In its applications to signal processing and propagation of measurement uncertainty, the same operation is also called addition in quadrature; it is related to the quadratic mean or "root mean square".
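As a quick illustration, Python's standard library exposes this operation as math.hypot (a minimal sketch):

```python
import math

# Pythagorean addition of the two sides of a right triangle:
# a (+) b = sqrt(a**2 + b**2), available in Python as math.hypot.
a, b = 3.0, 4.0
hypotenuse = math.hypot(a, b)
print(hypotenuse)  # 5.0 (the classic 3-4-5 right triangle)
```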

Applications

Pythagorean addition is used in the conversion of Cartesian coordinates (x, y) to polar coordinates (r, \theta):

r = x \oplus y = \operatorname{hypot}(x, y)
\theta = \operatorname{atan2}(y, x).
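A minimal sketch of this conversion in Python, using the standard library's math.hypot and math.atan2:

```python
import math

def to_polar(x: float, y: float) -> tuple[float, float]:
    """Convert Cartesian (x, y) to polar (r, theta)."""
    r = math.hypot(x, y)      # r = x (+) y
    theta = math.atan2(y, x)  # angle, correct in all four quadrants
    return r, theta

r, theta = to_polar(1.0, 1.0)
print(r, theta)  # sqrt(2) and pi/4
```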

If measurements X, Y, Z, \ldots have independent errors \Delta_X, \Delta_Y, \Delta_Z, \ldots respectively, the quadrature method gives the overall error

\Delta_o = \sqrt{\Delta_X^2 + \Delta_Y^2 + \Delta_Z^2 + \cdots},

whereas the upper limit of the overall error is

\Delta_u = \Delta_X + \Delta_Y + \Delta_Z + \cdots

if the errors were not independent.

This is equivalent to finding the magnitude of the resultant of adding orthogonal vectors, each with magnitude equal to the uncertainty, using the Pythagorean theorem.
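For example, combining independent uncertainties in quadrature can be sketched in Python; the error values here are hypothetical, chosen only for illustration:

```python
import math
from functools import reduce

# Hypothetical independent errors of three measurements
errors = [0.3, 0.4, 1.2]

# Quadrature: fold the errors together with Pythagorean addition
delta_o = reduce(math.hypot, errors)

# Upper limit, applicable if the errors were not independent
delta_u = sum(errors)

print(delta_o, delta_u)  # quadrature gives ~1.3, below the linear sum ~1.9
```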

In signal processing, addition in quadrature is used to find the overall noise from independent sources of noise. For example, if an image sensor gives six digital numbers of shot noise, three of dark current noise and two of Johnson–Nyquist noise under a specific condition, the overall noise is \sigma = 6 \oplus 3 \oplus 2 = \sqrt{6^2 + 3^2 + 2^2} = 7 digital numbers, showing the dominance of larger sources of noise.
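This noise figure can be checked directly; since Python 3.8, math.hypot accepts any number of arguments:

```python
import math

shot, dark, johnson = 6.0, 3.0, 2.0      # noise sources, in digital numbers
sigma = math.hypot(shot, dark, johnson)  # sqrt(36 + 9 + 4) = sqrt(49)
print(sigma)  # ~7, the overall noise
```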

The root mean square of a finite set of n numbers is just their Pythagorean sum, normalized to form a generalized mean by dividing by \sqrt{n}.
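This relationship between the root mean square and the Pythagorean sum can be sketched as:

```python
import math

def rms(values: list[float]) -> float:
    """Root mean square: the Pythagorean sum divided by sqrt(n)."""
    return math.hypot(*values) / math.sqrt(len(values))

print(rms([1.0, 1.0, 1.0, 1.0]))  # 1.0
```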

Properties

The operation \oplus is associative and commutative, and

\sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} = x_1 \oplus x_2 \oplus \cdots \oplus x_n.

This means that the real numbers under \oplus form a commutative semigroup.

The real numbers under \oplus are not a group, because \oplus can never produce a negative number as its result, whereas each element of a group must be the result of applying the group operation to itself and the identity element. On the non-negative numbers, it is still not a group, because Pythagorean addition of one number with a second positive number can only increase the first number, so no positive number can have an inverse element. Instead, it forms a commutative monoid on the non-negative numbers, with zero as its identity.
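These algebraic properties can be checked numerically, up to floating-point rounding, in Python:

```python
import math

a, b, c = 2.0, 3.0, 6.0

# Commutativity: a (+) b == b (+) a
assert math.hypot(a, b) == math.hypot(b, a)

# Associativity, up to rounding error
lhs = math.hypot(math.hypot(a, b), c)
rhs = math.hypot(a, math.hypot(b, c))
assert math.isclose(lhs, rhs)

# Zero is the identity on the non-negative numbers
assert math.hypot(a, 0.0) == a

# No inverses: Pythagorean addition of a positive number can only increase
assert math.hypot(a, b) >= a
```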

Implementation

Hypot is a mathematical function defined to calculate the length of the hypotenuse of a right triangle. It was designed to avoid errors arising due to limited-precision calculations performed on computers. Calculating the length of the hypotenuse of a triangle is possible using the square root function on the sum of two squares, but hypot avoids problems that occur when squaring very large or very small numbers. If calculated using the natural formula

r = \sqrt{x^2 + y^2},

the squares of very large or small values of x and y may exceed the range of machine precision when calculated on a computer, leading to an inaccurate result caused by arithmetic underflow and overflow. The hypot function was designed to calculate the result without causing this problem.

If either input to hypot is infinite, the result is infinite. Because this is true for all possible values of the other input, the IEEE 754 floating-point standard requires that this remains true even when the other input is not a number (NaN).
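Python's math.hypot follows this IEEE 754 behavior:

```python
import math

inf = float("inf")
nan = float("nan")

print(math.hypot(inf, 12.0))  # inf
print(math.hypot(inf, nan))   # inf: per IEEE 754, an infinite input
                              # dominates even a NaN in the other argument
```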

Since C++17, there has been an additional hypot function for 3D calculations:[1]

r = \sqrt{x^2 + y^2 + z^2}.

Calculation order

The difficulty with the naive implementation is that x^2 + y^2 may overflow or underflow, unless the intermediate result is computed with extended precision. A common implementation technique is to exchange the values, if necessary, so that |x| \ge |y|, and then to use the equivalent form

r = |x| \sqrt{1 + (y/x)^2}.

The computation of y/x cannot overflow unless both x and y are zero. If y/x underflows, the final result is equal to |x|, which is correct within the precision of the calculation. The square root is taken of a value between 1 and 2. Finally, the multiplication by |x| cannot underflow, and overflows only when the result is too large to represent. This implementation has the downside that it requires an additional floating-point division, which can double the cost of the naive implementation, as multiplication and addition are typically far faster than division and square root. Typically, the implementation is slower by a factor of 2.5 to 3.[2]
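A sketch of this technique in Python, illustrative only (real libraries add further refinements and handle special values):

```python
import math

def hypot_scaled(x: float, y: float) -> float:
    """Compute sqrt(x*x + y*y) without intermediate overflow/underflow,
    using r = |x| * sqrt(1 + (y/x)**2) with |x| >= |y|."""
    x, y = abs(x), abs(y)
    if x < y:
        x, y = y, x          # exchange so that x >= y
    if x == 0.0:
        return 0.0           # both inputs are zero
    t = y / x                # in [0, 1], cannot overflow
    return x * math.sqrt(1.0 + t * t)

# The naive formula overflows to infinity here; the scaled form does not:
x, y = 3e200, 4e200
print(math.sqrt(x * x + y * y))  # inf (x*x overflows)
print(hypot_scaled(x, y))        # ~5e200, the correct magnitude
```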

More complex implementations avoid this by dividing the inputs into more cases:

- If x is much larger than y, then x \oplus y = |x|, to within machine precision.
- If x^2 overflows, multiply both x and y by a small scaling factor (e.g. 2^{-64} for IEEE single precision), use the naive algorithm, which will now not overflow, and multiply the result by the (large) inverse (e.g. 2^{64}).
- If y^2 underflows, scale as above but reverse the scaling factors to scale up the intermediate values.

However, this implementation is extremely slow when its different cases cause branch mispredictions. Additional techniques allow the result to be computed more accurately, e.g. to less than one ulp.
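The case analysis above can be sketched in Python; the threshold and scale constants here are illustrative choices for IEEE double precision, not the exact cutoffs any particular library uses:

```python
import math

# Illustrative thresholds for IEEE double precision (assumptions only).
BIG = 2.0 ** 500     # above this, x*x risks overflow
SMALL = 2.0 ** -500  # below this, y*y risks underflow
SCALE = 2.0 ** 600   # a power of two, so scaling by it is exact

def hypot_cases(x: float, y: float) -> float:
    """Case-based hypot sketch: scale inputs by an exact power of two
    so the naive formula can be applied safely, then undo the scaling."""
    x, y = abs(x), abs(y)
    if x < y:
        x, y = y, x                  # ensure x >= y
    if x == 0.0:
        return 0.0
    if y <= x * 2.0 ** -54:
        return x                     # y is negligible at machine precision
    if x > BIG:
        x, y = x / SCALE, y / SCALE  # scale down, compute, scale back up
        return math.sqrt(x * x + y * y) * SCALE
    if y < SMALL:
        x, y = x * SCALE, y * SCALE  # scale up to avoid underflow in y*y
        return math.sqrt(x * x + y * y) / SCALE
    return math.sqrt(x * x + y * y)  # safe range: naive formula

# Near the bottom of the range, the naive formula underflows to zero,
# while the scaled version recovers the correct magnitude:
print(math.sqrt(3e-200 * 3e-200 + 4e-200 * 4e-200))  # 0.0 (underflow)
print(hypot_cases(3e-200, 4e-200))                   # ~5e-200
```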

Programming language support

The function is present in many programming languages and libraries, including CSS, C++11, D, Fortran (since Fortran 2008), Go, JavaScript (since ES2015), Julia, Java (since version 1.5), Kotlin, MATLAB, PHP, Python, Ruby, Rust, and Scala.


Notes and References

  1. Common mathematical functions: std::hypot, std::hypotf, std::hypotl. https://en.cppreference.com/w/cpp/numeric/math/hypot
  2. Measured on ARM and x64 (Intel and AMD) for different compilers with maximum optimization for 32 bit and 64 bit floats.