CORDIC (coordinate rotation digital computer), also known as Volder's algorithm or the digit-by-digit method, with variants including circular CORDIC (Jack E. Volder), linear CORDIC, hyperbolic CORDIC (John Stephen Walther) and generalized hyperbolic CORDIC (GH CORDIC, Yuanyong Luo et al.), is a simple and efficient algorithm for calculating trigonometric functions, hyperbolic functions, square roots, multiplications, divisions, and exponentials and logarithms with arbitrary base, typically converging with one digit (or bit) per iteration. CORDIC is therefore an example of a digit-by-digit algorithm. CORDIC and closely related methods known as pseudo-multiplication and pseudo-division or factor combining are commonly used when no hardware multiplier is available (e.g. in simple microcontrollers and field-programmable gate arrays, FPGAs), as the only operations they require are additions, subtractions, bit shifts and table lookups. As such, they all belong to the class of shift-and-add algorithms. In computer science, CORDIC is often used to implement floating-point arithmetic when the target platform lacks a hardware multiplier, for cost or space reasons.
Similar mathematical techniques were published by Henry Briggs as early as 1624 and Robert Flower in 1771, but CORDIC is better optimized for low-complexity finite-state CPUs.
CORDIC was conceived in 1956 by Jack E. Volder at the aeroelectronics department of Convair out of necessity to replace the analog resolver in the B-58 bomber's navigation computer with a more accurate and faster real-time digital solution. Therefore, CORDIC is sometimes referred to as a digital resolver.
In his research Volder was inspired by a formula in the 1946 edition of the CRC Handbook of Chemistry and Physics:
\begin{align}
K_n R\sin(\theta \pm \varphi) &= R\sin(\theta) \pm 2^{-n} R\cos(\theta),\\
K_n R\cos(\theta \pm \varphi) &= R\cos(\theta) \mp 2^{-n} R\sin(\theta),
\end{align}
where $\varphi$ is the angle satisfying $\tan(\varphi) = 2^{-n}$ and $K_n := \sqrt{1 + 2^{-2n}}$ is the associated scale factor.
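As an illustration, taking $n = 0$ gives $\tan(\varphi) = 1$, that is $\varphi = 45°$ and $K_0 = \sqrt{2}$, and the identity reduces to the familiar angle-sum formulas
\begin{align}
\sqrt{2}\,R\sin(\theta \pm 45°) &= R\sin(\theta) \pm R\cos(\theta),\\
\sqrt{2}\,R\cos(\theta \pm 45°) &= R\cos(\theta) \mp R\sin(\theta).
\end{align}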
His research led to an internal technical report proposing the CORDIC algorithm to solve sine and cosine functions and a prototypical computer implementing it. The report also discussed the possibility of computing hyperbolic coordinate rotation, logarithms and exponential functions with modified CORDIC algorithms. Utilizing CORDIC for multiplication and division was also conceived at this time. Based on the CORDIC principle, Dan H. Daggett, a colleague of Volder at Convair, developed conversion algorithms between binary and binary-coded decimal (BCD).
In 1958, Convair finally started to build a demonstration system to solve radar fix-taking problems named CORDIC I, completed in 1960 without Volder, who had already left the company. More universal CORDIC II models A (stationary) and B (airborne) were built and tested by Daggett and Harry Schuss in 1962.
Volder's CORDIC algorithm was first described in public in 1959, which caused it to be incorporated into navigation computers by companies including Martin-Orlando, Computer Control, Litton, Kearfott, Lear-Siegler, Sperry, Raytheon, and Collins Radio.
Volder teamed up with Malcolm McMillan to build Athena, a fixed-point desktop calculator utilizing his binary CORDIC algorithm. The design was introduced to Hewlett-Packard in June 1965, but not accepted. Still, McMillan introduced David S. Cochran (HP) to Volder's algorithm and when Cochran later met Volder he referred him to a similar approach John E. Meggitt (IBM) had proposed as pseudo-multiplication and pseudo-division in 1961. Meggitt's method also suggested the use of base 10 rather than base 2, as used by Volder's CORDIC so far. These efforts led to the ROMable logic implementation of a decimal CORDIC prototype machine inside of Hewlett-Packard in 1966, built by and conceptually derived from Thomas E. Osborne's prototypical Green Machine, a four-function, floating-point desktop calculator he had completed in DTL logic in December 1964. This project resulted in the public demonstration of Hewlett-Packard's first desktop calculator with scientific functions, the HP 9100A in March 1968, with series production starting later that year.
When Wang Laboratories found that the HP 9100A used an approach similar to the factor combining method in their earlier LOCI-1 (September 1964) and LOCI-2 (January 1965) Logarithmic Computing Instrument desktop calculators, they unsuccessfully accused Hewlett-Packard of infringement of one of An Wang's patents in 1968.
John Stephen Walther at Hewlett-Packard generalized the algorithm into the Unified CORDIC algorithm in 1971, allowing it to calculate hyperbolic functions, natural exponentials, natural logarithms, multiplications, divisions, and square roots. The CORDIC subroutines for trigonometric and hyperbolic functions could share most of their code. This development resulted in the first scientific handheld calculator, the HP-35 in 1972. Based on hyperbolic CORDIC, Yuanyong Luo et al. further proposed a Generalized Hyperbolic CORDIC (GH CORDIC) to directly compute logarithms and exponentials with an arbitrary fixed base in 2019. Theoretically, Hyperbolic CORDIC is a special case of GH CORDIC.
Originally, CORDIC was implemented only in the binary numeral system. Despite Meggitt suggesting the use of the decimal system for his pseudo-multiplication approach, decimal CORDIC remained mostly unheard of for several more years, so that Hermann Schmid and Anthony Bogacki still presented it as a novelty as late as 1973, and it was found only later that Hewlett-Packard had already implemented it in 1966.
Decimal CORDIC became widely used in pocket calculators, most of which operate in binary-coded decimal (BCD) rather than binary. This change in the input and output format did not alter CORDIC's core calculation algorithms. CORDIC is particularly well-suited for handheld calculators, in which low cost – and thus low chip gate count – is much more important than speed.
CORDIC has been implemented in the ARM-based STM32G4, Intel 8087, 80287, 80387 up to the 80486 coprocessor series as well as in the Motorola 68881 and 68882 for some kinds of floating-point instructions, mainly as a way to reduce the gate counts (and complexity) of the FPU sub-system.
CORDIC uses simple shift-add operations for several computing tasks such as the calculation of trigonometric, hyperbolic and logarithmic functions, real and complex multiplications, division, square-root calculation, solution of linear systems, eigenvalue estimation, singular value decomposition, QR factorization and many others. As a consequence, CORDIC has been used for applications in diverse areas such as signal and image processing, communication systems, robotics and 3D graphics apart from general scientific and technical computation.
The algorithm was used in the navigational system of the Apollo program's Lunar Roving Vehicle to compute bearing and range, or distance from the Lunar module. CORDIC was used to implement the Intel 8087 math coprocessor in 1980, avoiding the need to implement hardware multiplication.
CORDIC is generally faster than other approaches when a hardware multiplier is not available (e.g., in a microcontroller), or when the number of gates required to implement the functions it supports should be minimized (e.g., in an FPGA or ASIC). In fact, CORDIC is a standard drop-in IP block in FPGA development tools such as Vivado for Xilinx, while a power-series implementation is not, due to the specificity of such an IP: CORDIC can compute many different functions (general purpose), while a hardware multiplier configured to execute a power-series implementation can only compute the function it was designed for.
On the other hand, when a hardware multiplier is available (e.g., in a DSP microprocessor), table-lookup methods and power series are generally faster than CORDIC. In recent years, the CORDIC algorithm has been used extensively for various biomedical applications, especially in FPGA implementations.
The STM32G4 series and certain STM32H7 series MCUs implement a CORDIC module to accelerate computations in various mixed-signal applications such as graphics for human-machine interfaces and field-oriented control of motors. While not as fast as a power-series approximation, CORDIC is indeed faster than interpolating table-based implementations such as the ones provided by the ARM CMSIS and C standard libraries, though the results may be slightly less accurate, as the CORDIC modules provided only achieve 20 bits of precision in the result. Most of the performance difference compared to the ARM implementation is due to the overhead of the interpolation algorithm, which achieves full floating-point precision (24 bits) and can likely achieve relative error to that precision. Another benefit is that the CORDIC module is a coprocessor and can run in parallel with other CPU tasks.
The issue with using Taylor series is that, while they provide small absolute error, they do not exhibit well-behaved relative error. Other means of polynomial approximation, such as minimax optimization, may be used to control both kinds of error.
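As a rough illustration of this point (a sketch with an arbitrarily chosen truncation depth), the following compares a truncated Maclaurin series for sine against math.sin near $x = \pi$, where the true value approaches zero, so even a small absolute error translates into a large relative error:

from math import sin, factorial

def taylor_sin(x, terms=6):
    # Maclaurin series of sine truncated after `terms` nonzero terms
    # (i.e. a polynomial of degree 2*terms - 1).
    return sum((-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
               for k in range(terms))

for x in (0.5, 1.5, 3.0, 3.1, 3.14):
    approx, exact = taylor_sin(x), sin(x)
    abs_err = abs(approx - exact)
    print(f"x={x:5.2f}  abs_err={abs_err:.2e}  rel_err={abs_err / abs(exact):.2e}")

The absolute error stays modest over the whole interval, but the relative error grows without bound as sin(x) approaches zero.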
Many older systems with integer-only CPUs have implemented CORDIC to varying extents as part of their IEEE floating-point libraries. As most modern general-purpose CPUs have floating-point registers with common operations such as add, subtract, multiply, divide, sine, cosine, square root, log10, natural log, the need to implement CORDIC in them with software is nearly non-existent. Only microcontroller or special safety and time-constrained software applications would need to consider using CORDIC.
CORDIC can be used to calculate a number of different functions. This explanation shows how to use CORDIC in rotation mode to calculate the sine and cosine of an angle, assuming that the desired angle is given in radians and represented in a fixed-point format. To determine the sine or cosine of an angle $\beta$, one starts with the vector $v_0$ on the positive x axis:
v_0 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}.
In the first iteration, this vector is rotated 45° counterclockwise to get the vector $v_1$. Successive iterations rotate the vector in one or the other direction by steps of decreasing size $\gamma_i = \arctan(2^{-i})$, for $i = 0, 1, 2, \dots$
More formally, every iteration calculates a rotation, which is performed by multiplying the vector $v_i$ with the rotation matrix $R_i$:
v_{i+1} = R_i v_i.
The rotation matrix is given by
R_i = \begin{bmatrix} \cos(\gamma_i) & -\sin(\gamma_i) \\ \sin(\gamma_i) & \cos(\gamma_i) \end{bmatrix}.
Using the trigonometric identity
\tan(\gamma_i) \equiv \frac{\sin(\gamma_i)}{\cos(\gamma_i)},
the cosine factor can be taken out to give:
R_i = \cos(\gamma_i) \begin{bmatrix} 1 & -\tan(\gamma_i) \\ \tan(\gamma_i) & 1 \end{bmatrix}.
The expression for the rotated vector $v_{i+1} = R_i v_i$ then becomes
\begin{bmatrix} x_{i+1} \\ y_{i+1} \end{bmatrix} = \cos(\gamma_i) \begin{bmatrix} 1 & -\tan(\gamma_i) \\ \tan(\gamma_i) & 1 \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix},
where $x_i$ and $y_i$ are the components of $v_i$. Restricting the angles $\gamma_i$ so that $\tan(\gamma_i) = \pm 2^{-i}$, the multiplication by the tangent reduces to a shift by $i$ bits, and the expression becomes
\begin{bmatrix} x_{i+1} \\ y_{i+1} \end{bmatrix} = \cos(\arctan(2^{-i})) \begin{bmatrix} 1 & -\sigma_i 2^{-i} \\ \sigma_i 2^{-i} & 1 \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix},
where $\sigma_i = \pm 1$ determines the direction of the rotation: the step angle $\gamma_i$ is added when $\sigma_i = +1$ (counterclockwise) and subtracted when $\sigma_i = -1$ (clockwise).
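Written out componentwise, and leaving aside the common factor $\cos(\gamma_i)$ for the moment, each iteration therefore needs nothing more than shifts and additions:
\begin{align}
x_{i+1} &= x_i - \sigma_i \, 2^{-i} y_i,\\
y_{i+1} &= y_i + \sigma_i \, 2^{-i} x_i.
\end{align}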
The following trigonometric identity can be used to replace the cosine:
\cos(\gamma_i) \equiv \frac{1}{\sqrt{1 + \tan^2(\gamma_i)}},
giving this multiplier for each iteration:
K_i = \cos(\arctan(2^{-i})) = \frac{1}{\sqrt{1 + 2^{-2i}}}.
The factors $K_i$ can be taken out of the iterative process and applied all at once afterwards as a cumulative scaling factor
K(n) = \prod_{i=0}^{n-1} K_i = \prod_{i=0}^{n-1} \frac{1}{\sqrt{1 + 2^{-2i}}},
which is calculated in advance and stored in a table, or as a single constant if the number of iterations is fixed. This correction could also be made in advance, by scaling $v_0$ and hence saving a multiplication. Additionally, it can be noted that
K = \lim_{n \to \infty} K(n) \approx 0.6072529350088812561694,
to allow further reduction of the algorithm's complexity. Some applications may avoid correcting for $K$ altogether, resulting in a processing gain $A$:
A = \frac{1}{K} = \lim_{n \to \infty} \prod_{i=0}^{n-1} \sqrt{1 + 2^{-2i}} \approx 1.64676.
After a sufficient number of iterations, the vector's angle will be close to the wanted angle $\beta$.
The only task left is to determine whether the rotation should be clockwise or counterclockwise at each iteration (choosing the value of $\sigma_i$). This is done by keeping track of the residual angle $\beta_i$ that remains to be rotated: if $\beta_i$ is positive, the rotation is counterclockwise ($\sigma_i = +1$), otherwise it is clockwise ($\sigma_i = -1$):
\beta_0 = \beta, \qquad \beta_{i+1} = \beta_i - \sigma_i \gamma_i, \qquad \gamma_i = \arctan(2^{-i}).
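As a small worked illustration (angles rounded to two decimals), for $\beta = 30°$ the first few steps are
\begin{align}
\sigma_0 &= +1, & \beta_1 &= 30° - 45° = -15°,\\
\sigma_1 &= -1, & \beta_2 &= -15° + 26.57° = 11.57°,\\
\sigma_2 &= +1, & \beta_3 &= 11.57° - 14.04° = -2.47°,
\end{align}
so the residual angle shrinks toward zero while the vector converges to the direction $\beta$.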
The values of $\gamma_i = \arctan(2^{-i})$ must also be precomputed and stored; for small angles, $\arctan(2^{-i}) \approx 2^{-i}$ in a fixed-point representation, which reduces the size of the required table.
The sine of the angle $\beta$ is then the y coordinate of the final vector $v_n$, while the x coordinate is the cosine value.
The rotation-mode algorithm described above can rotate any vector (not only a unit vector aligned along the x axis) by an angle between −90° and +90°. Decisions on the direction of the rotation depend on $\beta_i$ being positive or negative.
The vectoring mode of operation requires a slight modification of the algorithm. It starts with a vector whose x coordinate is positive whereas the y coordinate is arbitrary. Successive rotations have the goal of rotating the vector onto the x axis (and therefore reducing the y coordinate to zero). At each step, the sign of y determines the direction of the rotation. The final value of $\beta_i$ contains the total angle of rotation, which is the angle of the original vector.
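A minimal Python sketch of the vectoring mode (the function name cordic_vectoring and the fixed iteration count are illustrative choices):

from math import atan2, sqrt

ITERS = 16
theta_table = [atan2(1, 2**i) for i in range(ITERS)]
K_n = 1.0
for i in range(ITERS):
    K_n *= 1 / sqrt(1 + 2 ** (-2 * i))

def cordic_vectoring(x, y):
    """Rotate (x, y) onto the positive x axis (x must be positive).

    Returns (magnitude, angle), where angle = atan2(y, x).
    """
    beta = 0.0
    P2i = 1.0  # 2**(-i) in the loop below
    for arc_tangent in theta_table:
        sigma = -1 if y > 0 else +1   # choose the rotation that drives y toward zero
        x, y = x - sigma * y * P2i, sigma * P2i * x + y
        beta -= sigma * arc_tangent   # accumulate the angle of the original vector
        P2i /= 2
    return x * K_n, beta              # undo the CORDIC gain with K(n)

For example, cordic_vectoring(3.0, 4.0) returns approximately (5.0, 0.9273), i.e. the polar form of the input, which is the rectangular-to-polar use of vectoring mode.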
Each iteration multiplies by a power of two, which in floating point amounts to an exponent adjustment: in Java the Math class has a scalb(double x, int scale) method to perform such a shift, C has the ldexp function, and the x86 class of processors has the fscale floating-point operation.
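Python exposes the same facility as math.ldexp; a small illustrative check:

import math

x = 0.75
for i in range(4):
    # math.ldexp(x, -i) returns x * 2**(-i) by adjusting the exponent,
    # avoiding a general-purpose multiplication.
    assert math.ldexp(x, -i) == x * 2 ** (-i)

The rotation-mode listing below instead keeps an explicit variable P2i that is halved on every iteration.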
from math import atan2, sqrt, radians, sin, cos

ITERS = 16
theta_table = [atan2(1, 2**i) for i in range(ITERS)]


def compute_K(n):
    """
    Compute K(n) for n = ITERS. This could also be
    stored as an explicit constant if ITERS above is fixed.
    """
    k = 1.0
    for i in range(n):
        k *= 1 / sqrt(1 + 2 ** (-2 * i))
    return k


def CORDIC(alpha, n):
    K_n = compute_K(n)
    theta = 0.0
    x = 1.0
    y = 0.0
    P2i = 1  # This will be 2**(-i) in the loop below
    for arc_tangent in theta_table:
        sigma = +1 if theta < alpha else -1
        theta += sigma * arc_tangent
        x, y = x - sigma * y * P2i, sigma * P2i * x + y
        P2i /= 2
    return x * K_n, y * K_n


if __name__ == "__main__":
    # Print CORDIC results next to the math library values for a few angles.
    for deg in range(-90, 91, 15):
        cos_x, sin_x = CORDIC(radians(deg), ITERS)
        print(f"{deg:+3d} deg: cos {cos_x:+.8f} ({cos_x - cos(radians(deg)):+.1e}), "
              f"sin {sin_x:+.8f} ({sin_x - sin(radians(deg)):+.1e})")
The number of logic gates for the implementation of a CORDIC is roughly comparable to the number required for a multiplier as both require combinations of shifts and additions. The choice for a multiplier-based or CORDIC-based implementation will depend on the context. The multiplication of two complex numbers represented by their real and imaginary components (rectangular coordinates), for example, requires 4 multiplications, but could be realized by a single CORDIC operating on complex numbers represented by their polar coordinates, especially if the magnitude of the numbers is not relevant (multiplying a complex vector with a vector on the unit circle actually amounts to a rotation). CORDICs are often used in circuits for telecommunications such as digital down converters.
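To sketch that last point in Python: rotating the rectangular components of a complex number $z$ by an angle $\varphi$ with rotation-mode CORDIC is the same as multiplying $z$ by the unit-modulus number $\cos(\varphi) + i\sin(\varphi)$, with no general multiplications other than the final gain correction (the helper below is illustrative):

from cmath import exp
from math import atan2, sqrt

ITERS = 24
theta_table = [atan2(1, 2**i) for i in range(ITERS)]
K_n = 1.0
for i in range(ITERS):
    K_n *= 1 / sqrt(1 + 2 ** (-2 * i))

def rotate(re, im, phi):
    """Multiply re + j*im by exp(j*phi) using shift-and-add pseudo-rotations."""
    acc = 0.0   # angle rotated so far
    P2i = 1.0   # 2**(-i) in the loop below
    for arc_tangent in theta_table:
        sigma = +1 if acc < phi else -1
        acc += sigma * arc_tangent
        re, im = re - sigma * im * P2i, sigma * P2i * re + im
        P2i /= 2
    return re * K_n, im * K_n

# Check against a direct complex multiplication.
z = 3.0 - 2.0j
w = z * exp(0.5j)
print(rotate(z.real, z.imag, 0.5), (w.real, w.imag))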
In two of the publications by Vladimir Baykov,[1] [2] it was proposed to use the double-iterations method for the implementation of the functions arcsine, arccosine, natural logarithm and exponential, as well as for the calculation of the hyperbolic functions. In the double-iterations method, unlike the classical CORDIC method where the iteration step value changes on every iteration, each step value is repeated twice and changes only after every second iteration. The exponent sequence for double iterations is thus
i = 0, 0, 1, 1, 2, 2, \dots
whereas the classical method uses
i = 0, 1, 2, \dots
The generalization of the CORDIC convergence problems to an arbitrary positional number system with radix $R$ showed that for most of the supported functions it is enough to perform $(R-1)$ iterations for each value of $i$, that is, for each digit of the result. For inverse hyperbolic sine and arcosine functions, the number of iterations will be $2R$ for each $i$, that is, for each result digit.
CORDIC is part of the class of "shift-and-add" algorithms, as are the logarithm and exponential algorithms derived from Henry Briggs' work. Another shift-and-add algorithm which can be used for computing many elementary functions is the BKM algorithm, which is a generalization of the logarithm and exponential algorithms to the complex plane. For instance, BKM can be used to compute the sine and cosine of a real angle $x$ (in radians) by computing the exponential of $0 + ix$, which is $\operatorname{cis}(x) = \cos(x) + i\sin(x)$.