decimal128 is a decimal floating-point computer number format that occupies 128 bits in computer memory. Formally introduced in IEEE 754-2008,[1] it is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial and tax computations.[2]
decimal128 supports 34 decimal digits of significand and an exponent range of −6143 to +6144, i.e. ±0.000000000000000000000000000000000 × 10^−6143 to ±9.999999999999999999999999999999999 × 10^6144. Because the significand is not normalized, most values with fewer than 34 significant digits have multiple possible representations; 1 × 10^2 = 0.1 × 10^3 = 0.01 × 10^4, etc. Zero has 12288 possible representations (24576 including negative zero).
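For illustration only: Python's decimal module follows the same arithmetic model (though not the 128-bit storage encoding), and can show two members of such a cohort comparing equal while carrying different exponents. The context parameters below are chosen to match decimal128's precision and exponent range.

```python
from decimal import Context

# Context approximating decimal128: 34-digit precision, exponents -6143..+6144
# (the decimal module models the arithmetic, not the 128-bit storage format).
ctx = Context(prec=34, Emin=-6143, Emax=6144)

a = ctx.create_decimal("100")    # significand 100, exponent 0
b = ctx.create_decimal("1E+2")   # significand 1,   exponent 2
print(a == b)                        # True: the same value ...
print(a.as_tuple(), b.as_tuple())    # ... in two different representations
```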
IEEE 754 allows two alternative representation methods for decimal128 values. The standard does not specify how to signify which representation is used, for instance in a situation where decimal128 values are communicated between systems.
In one representation method, based on binary integer decimal (BID), the significand is represented as a binary-coded positive integer.
The other, alternative, representation method is based on densely packed decimal (DPD) for most of the significand (except the most significant digit).
Both alternatives provide exactly the same range of representable numbers: 34 digits of significand and 3 × 2^12 = 12288 possible exponent values.
In both cases, the most significant 4 bits of the significand (which actually only have 10 possible values) are combined with the most significant 2 bits of the exponent (3 possible values) to use 30 of the 32 possible values of 5 bits in the combination field. The remaining combinations encode infinities and NaNs.
| Combination field | Exponent MSBs | Significand MSBs | Other |
|---|---|---|---|
| 00xxx | 00 | 0xxx | |
| 01xxx | 01 | 0xxx | |
| 10xxx | 10 | 0xxx | |
| 1100x | 00 | 100x | |
| 1101x | 01 | 100x | |
| 1110x | 10 | 100x | |
| 11110 | — | — | ±Infinity |
| 11111 | — | — | NaN. Sign bit ignored. Sixth bit of the combination field determines if the NaN is signaling. |
In the case of Infinity and NaN, all other bits of the encoding are ignored. Thus, it is possible to initialize an array to Infinities or NaNs by filling it with a single byte value.
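To illustrate, the following Python sketch (with an invented helper name) classifies a 16-byte big-endian pattern by the five bits after the sign bit; filling every byte with 0x78 yields +Infinity and filling with 0x7C yields a NaN, regardless of the remaining bits.

```python
def classify(buf: bytes) -> str:
    """Classify a big-endian decimal128 bit pattern by its 5-bit combination field."""
    x = int.from_bytes(buf, "big")
    g = (x >> 122) & 0x1F          # the 5 bits immediately after the sign bit
    if g == 0b11110:
        return "infinity"
    if g == 0b11111:
        return "NaN"
    return "finite"

print(classify(bytes([0x78]) * 16))  # 0x7878...78 -> 'infinity' (other bits ignored)
print(classify(bytes([0x7C]) * 16))  # 0x7C7C...7C -> 'NaN' (this pattern is a quiet NaN)
```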
The binary integer decimal (BID) representation uses a binary significand from 0 to 10^34 − 1 = 9999999999999999999999999999999999 = 1ED09BEAD87C0378D8E63FFFFFFFF₁₆. The encoding can represent binary significands up to 10 × 2^110 − 1 = 12980742146337069071326240823050239, but values larger than 10^34 − 1 are illegal (and the standard requires implementations to treat them as 0, if encountered on input).
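These limits can be checked directly, for instance with a few lines of Python re-deriving the numbers quoted above:

```python
max_significand = 10**34 - 1
print(hex(max_significand))          # 0x1ed09bead87c0378d8e63ffffffff
print(max_significand.bit_length())  # 113, so it fits the 113-bit significand field
print(10 * 2**110 - 1)               # 12980742146337069071326240823050239,
                                     # the largest binary value the encoding can hold
```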
As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7 (0000₂ to 0111₂), or higher (1000₂ or 1001₂).
If the 2 bits after the sign bit are "00", "01", or "10", then the exponent field consists of the 14 bits following the sign bit, and the significand is the remaining 113 bits, with an implicit leading 0 bit:
s 00eeeeeeeeeeee (0)ttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 01eeeeeeeeeeee (0)ttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 10eeeeeeeeeeee (0)ttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
This includes subnormal numbers where the leading significand digit is 0.
If the 2 bits after the sign bit are "11", then the 14-bit exponent field is shifted 2 bits to the right (after both the sign bit and the "11" bits thereafter), and the represented significand is in the remaining 111 bits. In this case there is an implicit (that is, not stored) leading 3-bit sequence "100" in the true significand.
s 1100eeeeeeeeeeee (100)t tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 1101eeeeeeeeeeee (100)t tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 1110eeeeeeeeeeee (100)t tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
The "11" 2-bit sequence after the sign bit indicates that there is an implicit "100" 3-bit prefix to the significand. Compare having an implicit 1 in the significand of normal values for the binary formats. The "00", "01", or "10" bits are part of the exponent field.
For the decimal128 format, all of these significands are out of the valid range (they begin with), and are thus decoded as zero, but the pattern is same as decimal32 and decimal64.
In the above cases, the value represented is
(−1)^sign × 10^(exponent − 6176) × significand
If the four bits after the sign bit are "1111" then the value is an infinity or a NaN, as described above:
s 11110 xx...x   ±infinity
s 11111 0x...x   a quiet NaN
s 11111 1x...x   a signalling NaN
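Putting the BID rules above together, the following Python sketch decodes a 128-bit pattern (supplied as an integer) into a Python Decimal; the function name and handling details are illustrative assumptions, not part of the standard.

```python
from decimal import Decimal

def decode_decimal128_bid(bits: int) -> Decimal:
    """Decode a BID-encoded decimal128 value given as a 128-bit Python integer."""
    sign = (bits >> 127) & 1
    g = (bits >> 122) & 0x1F                 # 5-bit combination field
    if g == 0b11110:
        return Decimal("-Infinity" if sign else "Infinity")
    if g == 0b11111:
        return Decimal("NaN")                # quiet/signaling distinction omitted here
    if (bits >> 125) & 0b11 != 0b11:
        # First form: 14-bit exponent after the sign bit, 113-bit significand
        exponent = (bits >> 113) & 0x3FFF
        significand = bits & ((1 << 113) - 1)
    else:
        # Second form: exponent shifted right by 2, implicit "100" before 111 stored bits
        exponent = (bits >> 111) & 0x3FFF
        significand = (0b100 << 111) | (bits & ((1 << 111) - 1))
    if significand > 10**34 - 1:
        significand = 0                      # non-canonical; for decimal128 this is
                                             # every second-form encoding
    return Decimal((sign, tuple(int(d) for d in str(significand)), exponent - 6176))

print(decode_decimal128_bid(0x78 << 120))        # Infinity
print(decode_decimal128_bid((6176 << 113) | 1))  # 1  (biased exponent 6176, significand 1)
```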
In the alternative, densely packed decimal (DPD) representation, the significand is stored as a series of decimal digits. The leading digit is between 0 and 9 (3 or 4 binary bits), and the rest of the significand uses the DPD encoding.
The leading 2 bits of the exponent and the leading digit (3 or 4 bits) of the significand are combined into the five bits that follow the sign bit.
The twelve bits after that are the exponent continuation field, providing the less-significant bits of the exponent.
The last 110 bits are the significand continuation field, consisting of eleven 10-bit declets.[3] Each declet encodes three decimal digits[3] using the DPD encoding.
If the first two bits after the sign bit are "00", "01", or "10", then those are the leading bits of the exponent, and the three bits after that are interpreted as the leading decimal digit (0 to 7). If the first two bits after the sign bit are "11", then the second two bits are the leading bits of the exponent, and the last bit is prefixed with "100" to form the leading decimal digit (8 or 9). The remaining two combinations (11110 and 11111) of the 5-bit field are used to represent ±infinity and NaNs, respectively.
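The interpretation of these five bits can also be expressed in code; the helper below is a minimal illustrative sketch (its name and return convention are invented for this example).

```python
def dpd_combination(g5: int):
    """Interpret the 5-bit combination field that follows the sign bit (DPD form)."""
    if (g5 >> 3) != 0b11:                  # 00ddd, 01ddd, 10ddd: leading digit 0..7
        return {"exp_msb": g5 >> 3, "lead_digit": g5 & 0b111}
    if (g5 >> 1) != 0b1111:                # 1100d, 1101d, 1110d: leading digit 8 or 9
        return {"exp_msb": (g5 >> 1) & 0b11, "lead_digit": 8 | (g5 & 1)}
    return {"special": "infinity" if (g5 & 1) == 0 else "NaN"}   # 11110 / 11111

print(dpd_combination(0b01000))   # {'exp_msb': 1, 'lead_digit': 0}
print(dpd_combination(0b11011))   # {'exp_msb': 1, 'lead_digit': 9}
```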
The DPD/3BCD transcoding for the declets is given by the following table. b9...b0 are the bits of the DPD, and d2...d0 are the three BCD digits.

| DPD bits b9 b8 b7 b6 b5 b4 b3 b2 b1 b0 | d2 | d1 | d0 |
|---|---|---|---|
| a b c d e f 0 g h i | 0abc | 0def | 0ghi |
| a b c d e f 1 0 0 i | 0abc | 0def | 100i |
| a b c g h f 1 0 1 i | 0abc | 100f | 0ghi |
| g h c d e f 1 1 0 i | 100c | 0def | 0ghi |
| g h c 0 0 f 1 1 1 i | 100c | 100f | 0ghi |
| d e c 0 1 f 1 1 1 i | 100c | 0def | 100i |
| a b c 1 0 f 1 1 1 i | 0abc | 100f | 100i |
| x x c 1 1 f 1 1 1 i | 100c | 100f | 100i |
The 8 decimal values whose digits are all 8s or 9s have four codings each. The bits marked x in the table above are ignored on input, but will always be 0 in computed results. (The non-standard encodings fill in the gap between the 10^3 = 1000 needed codings and the 2^10 = 1024 possible declet bit patterns.)
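The following Python sketch implements the decoding direction of this table (the function name is invented for illustration); it maps one 10-bit declet to the integer value of its three digits.

```python
def decode_declet(b: int) -> int:
    """Decode one 10-bit DPD declet (bits b9..b0) into an integer 0..999."""
    b9, b8, b7 = (b >> 9) & 1, (b >> 8) & 1, (b >> 7) & 1
    b6, b5, b4 = (b >> 6) & 1, (b >> 5) & 1, (b >> 4) & 1
    b3, b2, b1, b0 = (b >> 3) & 1, (b >> 2) & 1, (b >> 1) & 1, b & 1

    if b3 == 0:                                   # three small digits
        d2, d1, d0 = 4*b9 + 2*b8 + b7, 4*b6 + 2*b5 + b4, 4*b2 + 2*b1 + b0
    elif (b2, b1) == (0, 0):                      # d0 is large
        d2, d1, d0 = 4*b9 + 2*b8 + b7, 4*b6 + 2*b5 + b4, 8 + b0
    elif (b2, b1) == (0, 1):                      # d1 is large
        d2, d1, d0 = 4*b9 + 2*b8 + b7, 8 + b4, 4*b6 + 2*b5 + b0
    elif (b2, b1) == (1, 0):                      # d2 is large
        d2, d1, d0 = 8 + b7, 4*b6 + 2*b5 + b4, 4*b9 + 2*b8 + b0
    elif (b6, b5) == (0, 0):                      # d2 and d1 are large
        d2, d1, d0 = 8 + b7, 8 + b4, 4*b9 + 2*b8 + b0
    elif (b6, b5) == (0, 1):                      # d2 and d0 are large
        d2, d1, d0 = 8 + b7, 4*b9 + 2*b8 + b4, 8 + b0
    elif (b6, b5) == (1, 0):                      # d1 and d0 are large
        d2, d1, d0 = 4*b9 + 2*b8 + b7, 8 + b4, 8 + b0
    else:                                         # all three large (b9, b8 ignored)
        d2, d1, d0 = 8 + b7, 8 + b4, 8 + b0
    return 100*d2 + 10*d1 + d0

print(decode_declet(0b0000001001))   # 9
print(decode_declet(0b0011111111))   # 999 (one of its four codings)
```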
In the above cases, with the true significand as the sequence of decimal digits decoded, the value represented is
(−1)^signbit × 10^(exponentbits₂ − 6176₁₀) × truesignificand₁₀
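Combining the pieces (and reusing the dpd_combination and decode_declet sketches above), a complete DPD decoder might look as follows; this is a sketch under the bit layout described in this section, not a reference implementation.

```python
from decimal import Decimal

def decode_decimal128_dpd(bits: int) -> Decimal:
    """Decode a DPD-encoded decimal128 value given as a 128-bit Python integer."""
    sign = (bits >> 127) & 1
    comb = dpd_combination((bits >> 122) & 0x1F)       # 5 bits after the sign bit
    if "special" in comb:
        if comb["special"] == "infinity":
            return Decimal("-Infinity" if sign else "Infinity")
        return Decimal("NaN")
    exponent = (comb["exp_msb"] << 12) | ((bits >> 110) & 0xFFF)   # 2 + 12 bits
    significand = comb["lead_digit"]
    for i in range(10, -1, -1):                        # eleven declets, most significant first
        significand = significand * 1000 + decode_declet((bits >> (10 * i)) & 0x3FF)
    return Decimal((sign, tuple(int(d) for d in str(significand)), exponent - 6176))

# 1 = +1 x 10^0: combination 01000 (exponent MSBs 01, leading digit 0),
# exponent continuation 0x820 (biased exponent 6176), last declet encodes 001.
print(decode_decimal128_dpd((0b01000 << 122) | (0x820 << 110) | 0b0000000001))   # 1
```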