Non-adjacent form explained

The non-adjacent form (NAF) of a number is a unique signed-digit representation in which no two adjacent digits are non-zero. For example:

(0 1 1 1)2 = 4 + 2 + 1 = 7

(1 0 −1 1)2 = 8 − 2 + 1 = 7

(1 −1 1 1)2 = 8 − 4 + 2 + 1 = 7

(1 0 0 −1)2 = 8 − 1 = 7

All are valid signed-digit representations of 7, but only the final representation, (1 0 0 −1), is in non-adjacent form.
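
As a quick check, the following minimal Python sketch (the helper names signed_digit_value and is_naf are illustrative, not standard) evaluates a signed-digit string and tests the non-adjacency condition for the four representations above:

def signed_digit_value(digits):
    """Evaluate digits drawn from {-1, 0, 1}, most significant first."""
    value = 0
    for d in digits:
        value = 2 * value + d
    return value

def is_naf(digits):
    """True if no two adjacent digits are both non-zero."""
    return all(a == 0 or b == 0 for a, b in zip(digits, digits[1:]))

# The four representations of 7 from the text:
for rep in [(0, 1, 1, 1), (1, 0, -1, 1), (1, -1, 1, 1), (1, 0, 0, -1)]:
    print(rep, signed_digit_value(rep), is_naf(rep))
# All four evaluate to 7, but only (1, 0, 0, -1) is in non-adjacent form.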

The non-adjacent form is also known as "canonical signed digit" representation.

Properties

NAF assures a unique representation of an integer, but its main benefit is that the Hamming weight (the number of non-zero digits) of the value is minimal. In an ordinary binary representation half of all bits are non-zero on average, whereas with NAF this drops to only one third of all digits. This leads to efficient implementations of add/subtract networks (e.g. multiplication by a constant) in hardwired digital signal processing.[1]
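
To illustrate the add/subtract-network idea, here is a rough Python sketch (the function multiply_by_constant is made up for illustration and is not from the cited paper): each non-zero NAF digit of the constant contributes one shifted add or subtract, so the NAF of 7, (1 0 0 −1), multiplies by 7 with a single subtraction, 7x = (x << 3) − x.

def multiply_by_constant(x, naf_digits):
    """naf_digits: most significant digit first, values drawn from {-1, 0, 1}."""
    acc = 0
    for d in naf_digits:
        acc <<= 1        # shift the running sum
        if d == 1:
            acc += x     # add a shifted copy of x
        elif d == -1:
            acc -= x     # subtract a shifted copy of x
    return acc

print(multiply_by_constant(5, (1, 0, 0, -1)))   # 35, using one subtraction
print(multiply_by_constant(5, (0, 1, 1, 1)))    # 35 as well, but three additions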

Since no two adjacent digits can be non-zero, at most half of the digits are non-zero; this is why it was introduced by G.W. Reitwiesner[2] for speeding up early multiplication algorithms, much like Booth encoding.

Because non-zero digits are always separated by at least one 0, the NAF representation of a value takes at most m + 1 digits when the ordinary binary representation takes m bits.

The properties of NAF make it useful in various algorithms, especially some in cryptography, e.g. for reducing the number of multiplications needed to perform an exponentiation. In exponentiation by squaring, the number of multiplications depends on the number of non-zero digits of the exponent. If the exponent is given in NAF form, a digit value of 1 calls for a multiplication by the base, and a digit value of −1 for a multiplication by its reciprocal.
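
The following hedged Python sketch shows left-to-right exponentiation by squaring with a signed-digit exponent (the function name naf_pow is made up; working modulo a modulus is an added assumption here, so that the "reciprocal" of the base is its modular inverse):

def naf_pow(base, exponent_naf, modulus):
    """exponent_naf: NAF digits, most significant first, values in {-1, 0, 1}."""
    inv = pow(base, -1, modulus)   # multiplicative inverse of the base
    result = 1
    for d in exponent_naf:
        result = (result * result) % modulus     # square once per digit
        if d == 1:
            result = (result * base) % modulus   # digit 1: multiply by the base
        elif d == -1:
            result = (result * inv) % modulus    # digit -1: multiply by the inverse
    return result

# 3^7 mod 1000 with 7 = (1 0 0 -1) in NAF: two multiplications besides the
# squarings, instead of three with the plain binary exponent (1 1 1).
print(naf_pow(3, (1, 0, 0, -1), 1000))   # 187, i.e. 2187 mod 1000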

Other ways of encoding integers that avoid consecutive 1s include Booth encoding and Fibonacci coding.

Converting to NAF

There are several algorithms for obtaining the NAF representation of a value given in binary. One of them is the following method using repeated division; it works by choosing non-zero coefficients such that the resulting quotient is divisible by 2, and hence the next coefficient is zero.[3]

Input:  E = (em−1 em−2 ··· e1 e0)2
Output: Z = (zm zm−1 ··· z1 z0)NAF

i ← 0
while E > 0 do
    if E is odd then
        zi ← 2 − (E mod 4)
        E ← E − zi
    else
        zi ← 0
    E ← E/2
    i ← i + 1
return Z
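
A direct transcription of this pseudocode into Python might look as follows (a sketch; the function name to_naf is illustrative, and the digits come out least significant first, as in the pseudocode):

def to_naf(e):
    """Return the NAF digits z0, z1, ... of a non-negative integer e."""
    z = []
    while e > 0:
        if e % 2 == 1:             # e is odd
            zi = 2 - (e % 4)       # zi is 1 or -1, chosen so e - zi is divisible by 4
            e -= zi
        else:
            zi = 0
        z.append(zi)
        e //= 2
    return z

print(to_naf(7))   # [-1, 0, 0, 1], i.e. (1 0 0 -1) read most significant digit first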

A faster way is given by Prodinger,[4] where x is the input, np a bitmask marking the positions of the positive digits, and nm a bitmask marking the positions of the negative digits:

Input: x
Output: np, nm

xh = x >> 1;
x3 = x + xh;
c = xh ^ x3;
np = x3 & c;
nm = xh & c;

which is used, for example, in .
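
In Python, the same bit manipulations can be written as the short sketch below (prodinger_naf is an illustrative name). np marks the positions of the +1 digits and nm the positions of the −1 digits, so that x = np − nm and np | nm has no two adjacent set bits:

def prodinger_naf(x):
    xh = x >> 1
    x3 = x + xh
    c = xh ^ x3
    np = x3 & c      # bitmask of +1 digits
    nm = xh & c      # bitmask of -1 digits
    return np, nm

np, nm = prodinger_naf(7)
print(bin(np), bin(nm), np - nm)   # 0b1000 0b1 7, i.e. the digits (1 0 0 -1)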

References

  1. Hewlitt, R.M. (2000). "Canonical signed digit representation for FIR digital filters". Signal Processing Systems, 2000 (SiPS 2000): 2000 IEEE Workshop on, pp. 416–426. doi:10.1109/SIPS.2000.886740. ISBN 978-0-7803-6488-2. S2CID 122082511.
  2. Reitwiesner, George W. (1960). "Binary Arithmetic". Advances in Computers. 1: 231–308. doi:10.1016/S0065-2458(08)60610-5. ISBN 9780120121014.
  3. Hankerson, D.; Menezes, A.; Vanstone, S.A. (2004). Guide to Elliptic Curve Cryptography. Springer. p. 98. ISBN 978-0-387-21846-5.
  4. Prodinger, Helmut. "On Binary Representations of Integers with Digits −1, 0, 1". Integers. Retrieved 25 June 2021.