G.711
Long Name: Pulse code modulation (PCM) of voice frequencies
Status: In force
Year Started: 1972
Version: (02/00)
Version Date: February 2000
Organization: ITU-T
Related Standards: G.191, G.711.0, G.711.1, G.729
Domain: audio compression
Website: https://www.itu.int/rec/T-REC-G.711
G.711 is a narrowband audio codec, originally designed for use in telephony, that provides toll-quality audio at 64 kbit/s. It is an ITU-T standard (Recommendation) for audio encoding, titled Pulse code modulation (PCM) of voice frequencies, first released in 1972.
G.711 passes audio signals in the frequency band of 300–3400 Hz and samples them at a rate of 8000 Hz, with a tolerance of 50 parts per million (ppm) on that rate.
It uses one of two logarithmic companding algorithms: μ-law, used primarily in North America and Japan, and A-law, used in most countries outside those regions. Each companded sample is quantized to 8 bits, resulting in a 64 kbit/s bit rate.
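The 64 kbit/s figure follows directly from these parameters; a quick sketch of the arithmetic:

```python
# G.711 bit-rate arithmetic: 8000 samples per second, 8 bits per companded sample.
SAMPLE_RATE_HZ = 8000
BITS_PER_SAMPLE = 8

bit_rate = SAMPLE_RATE_HZ * BITS_PER_SAMPLE
assert bit_rate == 64_000   # 64 kbit/s
```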
G.711 is a required codec in many technologies, such as the H.320 and H.323 standards.[1] It can also be used for fax communication over IP networks (as defined in the T.38 specification).
Two enhancements to G.711 have been published: G.711.0 uses lossless data compression to reduce bandwidth usage, and G.711.1 increases audio quality by increasing bandwidth.
G.711 defines two main companding algorithms, the μ-law algorithm and the A-law algorithm. Both are logarithmic, but A-law was specifically designed to be simpler for a computer to process. The standard also defines a repeating sequence of code values that defines the power level of 0 dB.
The μ-law and A-law algorithms encode 14-bit and 13-bit signed linear PCM samples (respectively) to logarithmic 8-bit samples. Thus, the G.711 encoder will create a 64 kbit/s bitstream for a signal sampled at 8 kHz.
G.711 μ-law tends to give more resolution to higher range signals while G.711 A-law provides more quantization levels at lower signal levels.
The terms PCMU, G711u and G711MU are also used for G.711 μ-law, and PCMA and G711A for G.711 A-law.[2]
See main article: A-law algorithm. A-law encoding takes a 13-bit signed linear audio sample as input and converts it to an 8-bit value as follows:
Linear input code [3] | Compressed code XOR 01010101 | Linear output code [4]
---|---|---
s0000000abcdx | {{overline|s}}000abcd | s0000000abcd1
s0000001abcdx | {{overline|s}}001abcd | s0000001abcd1
s000001abcdxx | {{overline|s}}010abcd | s000001abcd10
s00001abcdxxx | {{overline|s}}011abcd | s00001abcd100
s0001abcdxxxx | {{overline|s}}100abcd | s0001abcd1000
s001abcdxxxxx | {{overline|s}}101abcd | s001abcd10000
s01abcdxxxxxx | {{overline|s}}110abcd | s01abcd100000
s1abcdxxxxxxx | {{overline|s}}111abcd | s1abcd1000000
Where s is the sign bit, {{overline|s}} is its inverse (i.e. positive values are encoded with MSB = 1), and bits marked x are discarded. Note that the first column of the table uses a different representation of negative values than the third column. For example, the input decimal value −21 is represented in binary, after bit inversion, as 1000000010100, which maps to 00001010 (according to the first row of the table). When decoding, this maps back to 1000000010101, which is interpreted as the output value −21 in decimal. Input value +52 (0000000110100 in binary) maps to 10011010 (according to the second row), which maps back to 0000000110101 (+53 in decimal).
This can be seen as a floating-point number with 4 bits of mantissa (equivalent to 5-bit precision), 3 bits of exponent and 1 sign bit, formatted as {{overline|s}}eeemmmm, with the decoded linear value given by the formula

y = (−1)^s · (16·min{e, 1} + m + 0.5) · 2^max{e, 1}
In addition, the standard specifies that all even bits (counting the LSB as bit zero) are inverted before the octet is transmitted. This provides frequent 0/1 transitions to facilitate the clock recovery process in PCM receivers. Thus, a silent A-law encoded PCM channel has its 8-bit samples coded as 0xD5 instead of 0x80 in the octets.
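The table, decoding formula and even-bit inversion above can be combined into a short, illustrative Python sketch (the function names and structure are my own, not the normative ITU-T STL code):

```python
def alaw_encode(pcm: int) -> int:
    """Encode a 13-bit signed linear sample (-4096..4095) to one A-law byte."""
    if pcm >= 0:
        sign = 0x80               # positive values carry MSB = 1 before the XOR
    else:
        sign = 0x00
        pcm = ~pcm                # invert magnitude bits, as in the -21 example
    if pcm < 32:                  # first row of the table: s0000000abcdx
        exponent, mantissa = 0, pcm >> 1
    else:                         # later rows: chord given by the leading 1 bit
        exponent = pcm.bit_length() - 5
        mantissa = (pcm >> exponent) & 0x0F
    return (sign | (exponent << 4) | mantissa) ^ 0x55   # even-bit inversion

def alaw_decode(code: int) -> int:
    """Decode one A-law byte back to a 13-bit signed linear value."""
    code ^= 0x55                  # undo the even-bit inversion
    positive = bool(code & 0x80)  # MSB = 1 marks a positive value
    e, m = (code >> 4) & 0x07, code & 0x0F
    # (16*min{e,1} + m + 0.5) * 2^max{e,1}, kept in integer arithmetic
    magnitude = (32 * min(e, 1) + 2 * m + 1) << (max(e, 1) - 1)
    return magnitude if positive else -magnitude

# Worked examples from the text (compressed codes shown before the XOR):
assert alaw_encode(-21) == 0b00001010 ^ 0x55
assert alaw_decode(alaw_encode(-21)) == -21
assert alaw_encode(52) == 0b10011010 ^ 0x55
assert alaw_decode(alaw_encode(52)) == 53
assert alaw_encode(0) == 0xD5     # a silent channel codes as 0xD5
```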
When data is sent over an E0 channel (G.703), the MSB (sign bit) is sent first and the LSB last.
The ITU-T Software Tool Library (STL)[5] defines the decoding algorithm, placing the decoded values in the 13 most significant bits of the 16-bit output data type. See also the "ITU-T Software Tool Library 2009 User's manual".[6]
See main article: μ-law algorithm. μ-law encoding (sometimes referred to as ulaw, G.711Mu, or G.711μ) takes a 14-bit signed linear audio sample in two's complement representation as input, inverts all bits after the sign bit if the value is negative, adds 33 (binary 100001), and converts the result to an 8-bit value as follows:
Linear input value [7] | Compressed code XOR 11111111 | Linear output value [8]
---|---|---
s00000001abcdx | s000abcd | s00000001abcd1
s0000001abcdxx | s001abcd | s0000001abcd10
s000001abcdxxx | s010abcd | s000001abcd100
s00001abcdxxxx | s011abcd | s00001abcd1000
s0001abcdxxxxx | s100abcd | s0001abcd10000
s001abcdxxxxxx | s101abcd | s001abcd100000
s01abcdxxxxxxx | s110abcd | s01abcd1000000
s1abcdxxxxxxxx | s111abcd | s1abcd10000000
Where s is the sign bit, and bits marked x are discarded.
In addition, the standard specifies that all bits of the encoded value are inverted before the octet is transmitted. Thus, a silent μ-law encoded PCM channel has its 8-bit samples transmitted as 0xFF instead of 0x00 in the octets.
Adding 33 is necessary so that all values fall into one of the compression groups; it is subtracted again when decoding.
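The encoding steps above (bit inversion for negative values, the +33 bias, chord selection from the table, and the final complement) can be sketched in Python. This is an illustrative reading of the table, not the normative ITU-T code, and the function name is my own:

```python
def ulaw_encode(pcm: int) -> int:
    """Encode a 14-bit signed linear sample (-8192..8191) to one mu-law byte."""
    if pcm < 0:
        sign = 0x80
        pcm = ~pcm               # invert all bits after the sign bit
    else:
        sign = 0x00
    pcm += 33                    # bias so every value lands in a chord
    pcm = min(pcm, 0x1FFF)       # clip to 13 bits of magnitude
    exponent = pcm.bit_length() - 6   # first chord of the table: s00000001abcdx
    mantissa = (pcm >> (exponent + 1)) & 0x0F
    return (sign | (exponent << 4) | mantissa) ^ 0xFF  # invert before transmit

# Edge cases noted in the text:
assert ulaw_encode(0) == 0xFF    # silence
assert ulaw_encode(-1) == 0x7F
```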
Breaking the encoded value, formatted as seeemmmm, into 1 sign bit, 3 bits of exponent and 4 bits of mantissa, the decoded linear value is given by the formula

y = (−1)^s · [(33 + 2m) · 2^e − 33]
Note that 0 is transmitted as 0xFF, and −1 is transmitted as 0x7F, but when received the result is 0 in both cases.
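The decoding formula above can likewise be sketched as an illustrative decoder (again, the name is my own, not the normative ITU-T code):

```python
def ulaw_decode(code: int) -> int:
    """Decode one mu-law byte back to a 14-bit signed linear value."""
    code ^= 0xFF                       # undo the transmission inversion
    negative = bool(code & 0x80)
    e = (code >> 4) & 0x07
    m = code & 0x0F
    magnitude = ((33 + 2 * m) << e) - 33   # y = (33 + 2m) * 2^e - 33
    return -magnitude if negative else magnitude

# Both encodings of silence decode to exactly 0, as noted above:
assert ulaw_decode(0xFF) == 0
assert ulaw_decode(0x7F) == 0
```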
G.711.0, also known as G.711 LLC, utilizes lossless data compression to reduce the bandwidth usage by as much as 50 percent.[9] The "Lossless compression of G.711 pulse code modulation" standard was approved by ITU-T in September 2009.[10]
G.711.1 "Wideband embedded extension for G.711 pulse code modulation" is a higher-fidelity extension to G.711, ratified in 2008 and further extended in 2012.
G.711.1 allows a series of enhancement layers on top of a raw G.711 core stream (Layer 0): Layer 1 codes 16-bit audio in the same 4 kHz narrowband, and Layer 2 adds 8 kHz wideband coding using the MDCT; each uses a fixed 16 kbit/s in addition to the 64 kbit/s core. They may be used together or singly, and each encodes the differences from the previous layer. Ratified in 2012, Layer 3 extends Layer 2 to 16 kHz "superwideband", adding another 16 kbit/s for the highest frequencies while retaining layer independence. The peak bit rate is thus 96 kbit/s in the original G.711.1, or 112 kbit/s with superwideband. No internal method of identifying or separating the layers is defined; packetizing or signalling them is left to the implementation.
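The layer bit-rate accounting above can be summarized in a tiny sketch (the constant names are mine, chosen for illustration):

```python
# G.711.1 bit-rate accounting: a 64 kbit/s core plus fixed 16 kbit/s layers.
CORE_KBITS = 64           # Layer 0: raw G.711 core stream
LAYER_KBITS = 16          # each enhancement layer adds a fixed 16 kbit/s

original_peak = CORE_KBITS + 2 * LAYER_KBITS        # Layers 0-2 (2008 spec)
superwideband_peak = original_peak + LAYER_KBITS    # plus Layer 3 (2012 spec)
assert original_peak == 96
assert superwideband_peak == 112
```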
A decoder that doesn't understand some set of fidelity layers may ignore or drop their packets without affecting the core stream, enabling graceful degradation across any G.711 (or original G.711.1) telephony system with no changes.
Also ratified in 2012 was an extension of G.711.0 lossless compression to the new fidelity layers. As with G.711.0, full G.711 backward compatibility is sacrificed for efficiency, though a G.711.0-aware node may still ignore or drop layer packets it doesn't understand.
G.711 was released in 1972; its patents have since expired, so it may be used without a licence.