In numerical analysis, the Kahan summation algorithm, also known as compensated summation,[1] significantly reduces the numerical error in the total obtained by adding a sequence of finite-precision floating-point numbers, compared to the obvious approach. This is done by keeping a separate running compensation (a variable to accumulate small errors), in effect extending the precision of the sum by the precision of the compensation variable.
In particular, simply summing n numbers in sequence has a worst-case error that grows proportional to n, and a root mean square error that grows as \sqrt{n} for random inputs (the rounding errors form a random walk).[2] With compensated summation, using a compensation variable with sufficiently high precision, the worst-case error bound is effectively independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision of the result.[2]
The algorithm is attributed to William Kahan;[3] Ivo Babuška seems to have come up with a similar algorithm independently (hence Kahan–Babuška summation).[4] Similar, earlier techniques include Bresenham's line algorithm, which keeps track of the accumulated error in integer operations (although first documented around the same time[5]), and delta-sigma modulation.[6]
In pseudocode, the algorithm is:
function KahanSum(input)
    // Prepare the accumulator.
    var sum = 0.0
    // A running compensation for lost low-order bits.
    var c = 0.0
    // The array input has elements indexed input[1] to input[input.length].
    for i = 1 to input.length do
        // c is zero the first time around.
        var y = input[i] - c
        // Alas, sum is big, y small, so low-order digits of y are lost.
        var t = sum + y
        // (t - sum) cancels the high-order part of y;
        // subtracting y recovers negative (low part of y).
        c = (t - sum) - y
        // Algebraically, c should always be zero. Beware
        // overly-aggressive optimizing compilers!
        sum = t
        // Next time around, the lost low part will be added to y in a fresh attempt.
    next i

    return sum
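As a concrete illustration, here is a minimal Python sketch of the same loop (the function names and the test data below are illustrative choices, not part of the original pseudocode). Python floats are IEEE 754 doubles, so the effect of the compensation becomes visible when many small terms are added to a large one:

def kahan_sum(values):
    total = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for x in values:
        y = x - c            # fold in the correction from the previous step
        t = total + y        # low-order digits of y may be lost here
        c = (t - total) - y  # (t - total) recovers the high-order part of y;
                             # subtracting y leaves (minus) the lost low part
        total = t
    return total

def naive_sum(values):
    total = 0.0
    for x in values:
        total += x           # each addition rounds, losing the tiny terms
    return total

data = [1.0] + [1e-16] * 1_000_000   # exact sum is 1.0 + 1e-10 = 1.0000000001
print(naive_sum(data))               # stays at 1.0: every 1e-16 is rounded away
print(kahan_sum(data))               # close to 1.0000000001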
This algorithm can also be rewritten to use the Fast2Sum algorithm:[7]
function KahanSum2(input)
    // Prepare the accumulator.
    var sum = 0.0
    // A running compensation for lost low-order bits.
    var c = 0.0
    // The array input has elements indexed input[1] to input[input.length].
    for i = 1 to input.length do
        // c is zero the first time around.
        var y = input[i] + c
        // sum + c is an approximation to the exact sum.
        (sum, c) = Fast2Sum(sum, y)
        // Next time around, the lost low part will be added to y in a fresh attempt.
    next i

    return sum
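For reference, the following Python sketch shows what Fast2Sum computes and how the rewritten loop uses it. The helper name fast2sum is illustrative, and the error-free property of Fast2Sum assumes |a| >= |b|:

def fast2sum(a, b):
    # Error-free transformation of a + b, assuming |a| >= |b|.
    s = a + b          # rounded sum
    z = s - a          # the part of b that was actually absorbed into s
    e = b - z          # the rounding error, exactly representable
    return s, e

def kahan_sum2(values):
    total = 0.0
    c = 0.0            # running compensation
    for x in values:
        y = x + c                      # total + c approximates the exact sum
        total, c = fast2sum(total, y)  # c becomes the new rounding error
    return total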
The algorithm does not mandate any specific choice of radix, only that the arithmetic "normalize floating-point sums before rounding or truncating". Computers typically use binary arithmetic, but to make the example easier to read, it will be given in decimal. Suppose we are using six-digit decimal floating-point arithmetic, sum has attained the value 10000.0, and the next two values of input[i] are 3.14159 and 2.71828. The exact result is 10005.85987, which rounds to 10005.9. With a plain summation, each incoming value would be aligned with sum, and many low-order digits would be lost (by truncation or rounding). The first result, after rounding, would be 10003.1. The second result would be 10005.81828 before rounding and 10005.8 after rounding. This is not correct.

However, with compensated summation, we get the correctly rounded result of 10005.9.
Assume that c has the initial value zero. Trailing zeros are shown where they are significant for the six-digit floating-point numbers.

  y = 3.14159 - 0.00000               y = input[i] - c
  t = 10000.0 + 3.14159               t = sum + y
    = 10003.14159                     Normalization done, next round off to six digits.
    = 10003.1                         Few digits from input[i] met those of sum. Many digits have been lost!
  c = (10003.1 - 10000.0) - 3.14159   c = (t - sum) - y  (Note: parentheses must be evaluated first!)
    = 3.10000 - 3.14159               The assimilated part of y minus the original full y.
    = -0.0415900                      Because c is close to zero, normalization retains many digits after the floating point.
sum = 10003.1                         sum = t
The sum is so large that only the high-order digits of the input numbers are being accumulated. But on the next step, c, an approximation of the running error, counteracts the problem.

  y = 2.71828 - (-0.0415900)          Most digits meet, since c is of a size similar to y.
    = 2.75987                         The shortfall (low-order digits lost) of the previous iteration is successfully reinstated.
  t = 10003.1 + 2.75987               But still only a few digits meet those of sum.
    = 10005.85987                     Normalization done, next round to six digits.
    = 10005.9                         Again, many digits have been lost, but c helped nudge the round-off.
  c = (10005.9 - 10003.1) - 2.75987   Estimate the accumulated error, based on the adjusted y.
    = 2.80000 - 2.75987               As expected, the low-order parts can be retained in c with no or minor round-off effects.
    = 0.0401300                       In this iteration, t was a bit too high; the excess will be subtracted off in the next iteration.
sum = 10005.9                         Exact result is 10005.85987; sum is correct, rounded to six digits.
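The walk-through above can be reproduced with Python's decimal module, whose context precision counts significant digits. The sketch below (function and variable names are illustrative) rounds every operation to six digits, as in the example:

from decimal import Decimal, getcontext

getcontext().prec = 6          # six significant decimal digits, as in the example

def kahan_sum_dec(values, start="0"):
    total = Decimal(start)
    c = Decimal(0)
    for x in values:
        y = Decimal(x) - c     # apply the previous compensation
        t = total + y          # rounded to six digits
        c = (t - total) - y    # recover the lost low-order part
        total = t
    return total

print(kahan_sum_dec(["3.14159", "2.71828"], start="10000.0"))        # 10005.9
print(Decimal("10000.0") + Decimal("3.14159") + Decimal("2.71828"))  # 10005.8, the plain result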
The algorithm performs summation with two accumulators: sum holds the sum, and c accumulates the parts not assimilated into sum, to nudge the low-order part of sum the next time around. Thus the summation proceeds with "guard digits" in c, which is better than not having any, but is not as good as performing the calculations with double the precision of the input. However, simply increasing the precision of the calculations is not practical in general; if input is already in double precision, few systems supply quadruple precision, and if they did, input could then be in quadruple precision.
A careful analysis of the errors in compensated summation is needed to appreciate its accuracy characteristics. While it is more accurate than naive summation, it can still give large relative errors for ill-conditioned sums.
Suppose that one is summing n values x_i, for i = 1, \ldots, n. The exact sum is

    S_n = \sum_{i=1}^n x_i

(computed with infinite precision). With compensated summation, one instead obtains S_n + E_n, where the error E_n is bounded by

    |E_n| \le \left[2\varepsilon + O(n\varepsilon^2)\right] \sum_{i=1}^n |x_i|,

where \varepsilon is the machine precision of the arithmetic being employed (e.g. \varepsilon \approx 10^{-16} for IEEE standard double-precision floating point). Usually, the quantity of interest is the relative error |E_n|/|S_n|, which is therefore bounded above by

    \frac{|E_n|}{|S_n|} \le \left[2\varepsilon + O(n\varepsilon^2)\right] \frac{\sum_{i=1}^n |x_i|}{\left|\sum_{i=1}^n x_i\right|}.
In the expression for the relative error bound, the fraction \sum|x_i| / \left|\sum x_i\right| is the condition number of the summation problem. Essentially, the condition number represents the intrinsic sensitivity of the summation problem to errors, regardless of how it is computed; the relative error bound of any summation method in fixed precision is proportional to it. An ill-conditioned summation problem is one in which this ratio is large, and in this case even compensated summation can have a large relative error. For example, if the summands x_i are uncorrelated random numbers with zero mean, the sum is a random walk, and the condition number grows proportional to \sqrt{n}. On the other hand, for random inputs with nonzero mean the condition number asymptotes to a finite constant as n \to \infty. If the inputs are all non-negative, then the condition number is 1.
Given a condition number, the relative error of compensated summation is effectively independent of n. In principle, there is the O(n\varepsilon^2) term that grows linearly with n, but in practice this term is effectively zero: since the final result is rounded to a precision \varepsilon, the n\varepsilon^2 term rounds to zero unless n is roughly 1/\varepsilon or larger. In double precision, this corresponds to an n of roughly 10^{16}, much larger than most sums. So, for a fixed condition number, the errors of compensated summation are effectively O(\varepsilon), independent of n.
In comparison, the relative error bound for naive summation (simply adding the numbers in sequence, rounding at each step) grows as O(\varepsilon n) multiplied by the condition number. In practice, it is much more likely that the rounding errors have a random sign, with zero mean, so that they form a random walk; in that case, naive summation has a root mean square relative error that grows as O\left(\varepsilon\sqrt{n}\right) multiplied by the condition number. This is still much worse than compensated summation. If the sum can be performed in twice the working precision, however, then \varepsilon is replaced by \varepsilon^2, and naive summation has a worst-case error comparable to the O(n\varepsilon^2) term in compensated summation at the original precision.
By the same token, the \sum|x_i| that appears in the bound on E_n above is a worst case that occurs only if all the rounding errors have the same sign (and are of maximal possible magnitude). In practice, it is more likely that the rounding errors have random sign, in which case they behave like a random walk; then, even for random inputs with zero mean, the error E_n grows only as O\left(\varepsilon\sqrt{n}\right) (ignoring the n\varepsilon^2 term), at the same rate the sum S_n grows, canceling the \sqrt{n} factors when the relative error is computed. So even for asymptotically ill-conditioned sums, the relative error of compensated summation is often much smaller than a worst-case analysis might suggest.
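The behaviour described above can be checked numerically. The Python sketch below (the input size and distribution are arbitrary illustrative choices) compares naive and compensated summation against math.fsum, which computes an accurately rounded sum, on zero-mean random inputs:

import math
import random

def naive_sum(values):
    total = 0.0
    for x in values:
        total += x
    return total

def kahan_sum(values):
    total, c = 0.0, 0.0
    for x in values:
        y = x - c
        t = total + y
        c = (t - total) - y
        total = t
    return total

random.seed(0)
data = [random.uniform(-1.0, 1.0) for _ in range(10**6)]   # ill-conditioned: zero mean

exact = math.fsum(data)                                    # accurately rounded reference
print("naive error:", abs(naive_sum(data) - exact))
print("Kahan error:", abs(kahan_sum(data) - exact))        # typically far smaller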
Neumaier[10] introduced an improved version of the Kahan algorithm, which he calls an "improved Kahan–Babuška algorithm", which also covers the case when the next term to be added is larger in absolute value than the running sum, effectively swapping the role of what is large and what is small. In pseudocode, the algorithm is:

function KahanBabushkaNeumaierSum(input)
    var sum = 0.0
    var c = 0.0                       // A running compensation for lost low-order bits.

    for i = 1 to input.length do
        var t = sum + input[i]
        if |sum| >= |input[i]| then
            c += (sum - t) + input[i] // If sum is bigger, low-order digits of input[i] are lost.
        else
            c += (input[i] - t) + sum // Else low-order digits of sum are lost.
        endif
        sum = t
    next i

    return sum + c                    // Correction only applied once in the very end.
This enhancement is similar to the replacement of Fast2Sum by 2Sum in Kahan's algorithm rewritten with Fast2Sum.
For many sequences of numbers, both algorithms agree, but a simple example due to Peters shows how they can differ: when summing [1.0, 10^{100}, 1.0, -10^{100}] in double precision, Kahan's algorithm yields 0.0, whereas Neumaier's algorithm yields the correct value 2.0.
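This difference is easy to reproduce. The Python sketch below (function names are illustrative) transcribes both pseudocode listings and runs them on Peters's example:

def kahan_sum(values):
    total, c = 0.0, 0.0
    for x in values:
        y = x - c
        t = total + y
        c = (t - total) - y
        total = t
    return total

def neumaier_sum(values):
    total, c = 0.0, 0.0
    for x in values:
        t = total + x
        if abs(total) >= abs(x):
            c += (total - t) + x   # low-order digits of x are lost
        else:
            c += (x - t) + total   # low-order digits of total are lost
        total = t
    return total + c               # correction applied once at the end

data = [1.0, 1e100, 1.0, -1e100]
print(kahan_sum(data))     # 0.0
print(neumaier_sum(data))  # 2.0, the exact result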
Higher-order modifications of better accuracy are also possible. For example, Klein[11] suggested a variant that he called a second-order "iterative Kahan–Babuška algorithm". In pseudocode, the algorithm is:
function KahanBabushkaKleinSum(input)
    var sum = 0.0
    var cs = 0.0
    var ccs = 0.0
    var c = 0.0
    var cc = 0.0

    for i = 1 to input.length do
        var t = sum + input[i]
        if |sum| >= |input[i]| then
            c = (sum - t) + input[i]
        else
            c = (input[i] - t) + sum
        endif
        sum = t
        t = cs + c
        if |cs| >= |c| then
            cc = (cs - t) + c
        else
            cc = (c - t) + cs
        endif
        cs = t
        ccs = ccs + cc
    end loop

    return sum + cs + ccs
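A direct Python transcription of this pseudocode (the function name is illustrative) looks as follows; on Peters's example above it also returns 2.0:

def klein_sum(values):
    total = 0.0
    cs = 0.0       # first-order compensation
    ccs = 0.0      # second-order compensation
    for x in values:
        t = total + x
        if abs(total) >= abs(x):
            c = (total - t) + x
        else:
            c = (x - t) + total
        total = t
        t = cs + c
        if abs(cs) >= abs(c):
            cc = (cs - t) + c
        else:
            cc = (c - t) + cs
        cs = t
        ccs += cc
    return total + cs + ccs

print(klein_sum([1.0, 1e100, 1.0, -1e100]))   # 2.0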
Although Kahan's algorithm achieves O(1) error growth for summing n numbers, only slightly worse O(\log n) growth can be achieved by pairwise summation: one recursively divides the set of numbers into two halves, sums each half, and then adds the two sums. This has the advantage of requiring the same number of arithmetic operations as naive summation (unlike Kahan's algorithm, which requires four times the arithmetic per input) and of being easy to parallelize. In practice, with rounding errors of random signs, the root mean square errors of pairwise summation actually grow only as O\left(\sqrt{\log n}\right).
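A minimal Python sketch of pairwise summation follows; the base-case size of 128 is an arbitrary illustrative choice made to amortize the recursion overhead:

def pairwise_sum(values, base=128):
    n = len(values)
    if n <= base:
        total = 0.0
        for x in values:       # plain summation on small blocks
            total += x
        return total
    mid = n // 2               # split, sum each half recursively, then combine
    return pairwise_sum(values[:mid], base) + pairwise_sum(values[mid:], base)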
Another alternative is to use arbitrary-precision arithmetic, which in principle needs no rounding at all, at the cost of much greater computational effort. A way of performing correctly rounded sums using arbitrary precision is to extend adaptively using multiple floating-point components. This minimizes computational cost in common cases where high precision is not needed.[13][14] Another method that uses only integer arithmetic, but a large accumulator, was described by Kirchner and Kulisch;[15] a hardware implementation was described by Müller, Rüb and Rülling.[16]
In principle, a sufficiently aggressive optimizing compiler could destroy the effectiveness of Kahan summation: for example, if the compiler simplified expressions according to the associativity rules of real arithmetic, it might "simplify" the second step in the sequence
t = sum + y;
c = (t - sum) - y;
to
c = ((sum + y) - sum) - y;
and then to
c = 0;
thus eliminating the error compensation.[17] In practice, many compilers do not use associativity rules (which are only approximate in floating-point arithmetic) in simplifications, unless explicitly directed to do so by compiler options enabling "unsafe" optimizations,[18] [19] [20] [21] although the Intel C++ Compiler is one example that allows associativity-based transformations by default.[22] The original K&R C version of the C programming language allowed the compiler to re-order floating-point expressions according to real-arithmetic associativity rules, but the subsequent ANSI C standard prohibited re-ordering in order to make C better suited for numerical applications (and more similar to Fortran, which also prohibits re-ordering),[23] although in practice compiler options can re-enable re-ordering, as mentioned above.
A portable way to inhibit such optimizations locally is to break one of the lines in the original formulation into two statements and make two of the intermediate products volatile:

function KahanSum(input)
    var sum = 0.0
    var c = 0.0

    for i = 1 to input.length do
        var y = input[i] - c
        volatile var t = sum + y
        volatile var z = t - sum
        c = z - y
        sum = t
    next i

    return sum
In general, built-in "sum" functions in computer languages typically provide no guarantees that a particular summation algorithm will be employed, much less Kahan summation. The BLAS standard for linear algebra subroutines explicitly avoids mandating any particular computational order of operations for performance reasons,[24] and BLAS implementations typically do not use Kahan summation. The standard library of the Python language specifies an fsum function for accurate summation. Starting with Python 3.12, the built-in sum function uses Neumaier summation.[25]
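For example, Python's math.fsum returns an accurately rounded sum, so it handles cases like Peters's example above:

import math

data = [1.0, 1e100, 1.0, -1e100]
print(math.fsum(data))   # 2.0, accurately rounded
print(sum(data))         # 2.0 on Python >= 3.12 (Neumaier summation); 0.0 on older versions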
In the Julia language, the default implementation of the sum function does pairwise summation for high accuracy with good performance,[26] but an external library provides an implementation of Neumaier's variant named sum_kbn for the cases when higher accuracy is needed.[27]
In the C# language, the HPCsharp NuGet package implements the Neumaier variant and pairwise summation, both in scalar form, in data-parallel form using SIMD processor instructions, and in parallel multi-core form.[28]