The IEEE 754-2008 standard includes decimal floating-point number formats in which the significand and the exponent (and the payloads of NaNs) can be encoded in two ways, referred to as binary encoding and decimal encoding.[1]
Both formats break a number down into a sign bit s, an exponent q (between qmin and qmax), and a p-digit significand c (between 0 and 10^p − 1). The value encoded is (−1)^s × 10^q × c. In both formats the range of possible values is identical, but they differ in how the significand c is represented. In the decimal encoding, it is encoded as a series of p decimal digits (using the densely packed decimal (DPD) encoding). This makes conversion to decimal form efficient, but requires a specialized decimal ALU to process. In the binary integer decimal (BID) encoding, it is encoded as a binary number.
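As a minimal illustration of the value formula (a sketch of ours, not any standard API), the helper below evaluates a (sign, exponent, significand) triple as a binary double:

```c
#include <math.h>
#include <stdio.h>

/* Evaluate (-1)^s * 10^q * c as a binary double. Purely illustrative:
 * converting a decimal value to binary floating point may round it. */
static double decimal_value(int s, int q, long long c) {
    double v = (double)c * pow(10.0, (double)q);
    return s ? -v : v;
}

int main(void) {
    printf("%g\n", decimal_value(0, -1, 1000)); /* 1000 x 10^-1 = 100 */
    return 0;
}
```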
Using the fact that 2^10 = 1024 is only slightly more than 10^3 = 1000, 3n-digit decimal numbers can be efficiently packed into 10n binary bits. However, the IEEE formats have significands of 3n+1 digits, which would generally require 10n + 4 binary bits to represent.
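For example, three decimal digits fit in ten bits because 10^3 = 1000 ≤ 2^10 = 1024, so a six-digit significand (n = 2) fits in 20 bits; but the seven-digit significand of Decimal32 (3n + 1 digits with n = 2) needs 24 bits, since 10^7 − 1 = 9999999 exceeds 2^23 − 1 = 8388607.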
This would not be efficient, because only 10 of the 16 possible values of the additional four bits are needed. A more efficient encoding can be designed using the fact that the exponent range is of the form 3×2^k, so the exponent never starts with the bit pair 11. Using the Decimal32 encoding (with a significand of 3×2+1 = 7 decimal digits) as an example (e stands for exponent, m for mantissa, i.e. significand):

- If the significand starts with 0mmm, omitting the leading 0 bit lets the significand fit into 23 bits:

      s 00eeeeee   (0)mmm mmmmmmmmmm mmmmmmmmmm
      s 01eeeeee   (0)mmm mmmmmmmmmm mmmmmmmmmm
      s 10eeeeee   (0)mmm mmmmmmmmmm mmmmmmmmmm

- If the significand starts with 100m, omitting the leading 100 bits lets the significand fit into 21 bits. The exponent is shifted over two bits, and the bit pair 11 shows that this form is being used:

      s 1100eeeeee   (100)m mmmmmmmmmm mmmmmmmmmm
      s 1101eeeeee   (100)m mmmmmmmmmm mmmmmmmmmm
      s 1110eeeeee   (100)m mmmmmmmmmm mmmmmmmmmm

- Infinity, quiet NaN, and signaling NaN use encodings beginning with s 1111:

      s 11110 xxxxxxxxxxxxxxxxxxxxxxxxxx
      s 111110 xxxxxxxxxxxxxxxxxxxxxxxxx
      s 111111 xxxxxxxxxxxxxxxxxxxxxxxxx

The bits shown in parentheses are implicit: they are not included in the 32 bits of the Decimal32 encoding, but are implied by the two bits after the sign bit.
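Putting the three cases together, a Decimal32 BID decoder needs only a few branches. The following C sketch is illustrative (the struct and function names are ours, not from the standard or any particular library); it recovers the sign, unbiased exponent, and integer significand, using the bias of 101 from the table below:

```c
#include <stdint.h>

/* Decoded fields of a Decimal32 BID bit pattern (illustrative types). */
typedef struct {
    int sign;              /* 0 or 1 */
    int exponent;          /* unbiased exponent q = E - 101 */
    uint32_t significand;  /* integer coefficient c, 0 .. 9999999 */
    int is_infinity, is_nan;
} Dec32Fields;

Dec32Fields decode_decimal32_bid(uint32_t bits) {
    Dec32Fields f = {0};
    f.sign = (int)(bits >> 31);

    if (((bits >> 29) & 0x3u) != 0x3u) {
        /* First form: the two bits after the sign are 00, 01, or 10.
         * 8-bit exponent in bits 30..23, 23-bit significand in bits 22..0. */
        f.exponent = (int)((bits >> 23) & 0xFFu) - 101;
        f.significand = bits & 0x7FFFFFu;
    } else if (((bits >> 27) & 0xFu) != 0xFu) {
        /* Second form: prefix 11, exponent shifted to bits 28..21, and
         * the implicit bits 100 prepended to the 21 stored bits. */
        f.exponent = (int)((bits >> 21) & 0xFFu) - 101;
        f.significand = 0x800000u | (bits & 0x1FFFFFu);
    } else if (((bits >> 26) & 0x1Fu) == 0x1Eu) {
        f.is_infinity = 1;   /* s 11110 ... */
    } else {
        f.is_nan = 1;        /* s 11111 ...; bit 25 = 0 means quiet */
    }
    /* Significands of 10^7 or more do not occur in canonical encodings
     * and are read as zero. */
    if (f.significand > 9999999u) f.significand = 0;
    return f;
}
```

For example, decode_decimal32_bid(0x32800001) yields sign 0, exponent 0, and significand 1, i.e. the value 1 × 10^0 = 1.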
The Decimal64 and Decimal128 encodings have larger exponent and significand fields, but operate in a similar fashion.
For the Decimal128 encoding, 113 significand bits are enough to encode 34 decimal digits (10^34 − 1 < 2^113), so the second form is never actually required.
A decimal floating-point number can be encoded in several ways, with the different encodings representing different precisions: for example, 100.0 is encoded as 1000×10^−1, while 100.00 is encoded as 10000×10^−2. The set of possible encodings of the same numerical value is called a cohort in the standard. If the result of a calculation is inexact, the largest amount of significant data is preserved by selecting the cohort member with the largest integer that can be stored in the significand, along with the required exponent.
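The two cohort members above can be hand-assembled as Decimal32 bit patterns using the first form described earlier (biased exponent in bits 30..23 with bias 101, significand in bits 22..0); this is a sketch of ours, not library code:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* 100.0 = 1000 x 10^-1: biased exponent -1 + 101 = 100, c = 1000. */
    uint32_t d100_0  = (100u << 23) | 1000u;   /* 0x320003E8 */
    /* 100.00 = 10000 x 10^-2: biased exponent -2 + 101 = 99, c = 10000. */
    uint32_t d100_00 = (99u << 23) | 10000u;   /* 0x31802710 */
    /* Same numerical value, but distinct members of its cohort. */
    printf("%08X %08X\n", (unsigned)d100_0, (unsigned)d100_00);
    return 0;
}
```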
The proposed IEEE 754r standard limits the range of numbers to a significand of the form 10^n − 1, where n is the number of whole decimal digits that can be stored in the available bits, so that decimal rounding is effected correctly.
| | 32 bit | 64 bit | 128 bit |
|---|---|---|---|
| Storage bits | 32 | 64 | 128 |
| Trailing significand bits | 20 | 50 | 110 |
| Significand bits | 23/24 | 53/54 | 113 |
| Significand digits | 7 | 16 | 34 |
| Combination bits | 11 | 13 | 17 |
| Exponent bits | 8 | 10 | 14 |
| Bias | 101 | 398 | 6176 |
| Standard emax | 96 | 384 | 6144 |
| Standard emin | −95 | −383 | −6143 |
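The rows of the table are linked by simple formulas in the storage width k; the following sketch (our own arithmetic, following the parameter relationships given in IEEE 754-2008) reproduces them for k = 32, 64, and 128:

```c
#include <stdio.h>

/* Reproduce the table's parameters from the storage width k alone. */
int main(void) {
    int widths[] = {32, 64, 128};
    for (int i = 0; i < 3; i++) {
        int k           = widths[i];
        int digits      = 9 * k / 32 - 2;      /* significand digits p    */
        int combination = k / 16 + 9;          /* combination field bits  */
        int trailing    = k - combination - 1; /* trailing significand    */
        int expbits     = combination - 3;     /* exponent bits, w + 2    */
        int emax        = 3 << (expbits - 3);  /* 3 * 2^(w-1)             */
        int bias        = emax + digits - 2;
        printf("k=%d: digits=%d comb=%d trailing=%d expbits=%d "
               "emax=%d emin=%d bias=%d\n",
               k, digits, combination, trailing, expbits,
               emax, 1 - emax, bias);
    }
    return 0;
}
```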
A binary encoding is inherently less efficient for conversions to or from decimal-encoded data, such as strings (ASCII, Unicode, etc.) and BCD. A binary encoding is therefore best chosen only when the data are binary rather than decimal. IBM has published some unverified performance data.[2]