The μ-law algorithm (sometimes written mu-law, often approximated as u-law) is a companding algorithm, primarily used in 8-bit PCM digital telecommunication systems in North America and Japan. It is one of the two companding algorithms in the G.711 standard from ITU-T, the other being the similar A-law. A-law is used in regions where digital telecommunication signals are carried on E-1 circuits, such as Europe.
The terms PCMU, G711u and G711MU are used for G.711 μ-law.[1]
Companding algorithms reduce the dynamic range of an audio signal. In analog systems, this can increase the signal-to-noise ratio (SNR) achieved during transmission; in the digital domain, it can reduce the quantization error (hence increasing the signal-to-quantization-noise ratio). These SNR increases can be traded instead for reduced bandwidth for equivalent SNR.
At the cost of a reduced peak SNR, it can be mathematically shown that μ-law's non-linear quantization effectively increases dynamic range by 33 dB, or about 5.5 bits, over a linearly quantized signal; hence 13.5 bits (which rounds up to 14 bits) is the most resolution required for an input digital signal to be compressed for 8-bit μ-law.[2]
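The bit figure follows from the rule of thumb that each bit of linear PCM contributes 20 log₁₀ 2 ≈ 6.02 dB of dynamic range:

\[
\frac{33\ \text{dB}}{6.02\ \text{dB/bit}} \approx 5.5\ \text{bits},
\qquad
8\ \text{bits} + 5.5\ \text{bits} = 13.5\ \text{bits} \ \text{(round up to 14)}.
\]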
The μ-law algorithm may be described in an analog form and in a quantized digital form.
For a given input x, the equation for μ-law encoding is[3]

F(x) = \operatorname{sgn}(x) \, \frac{\ln(1 + \mu |x|)}{\ln(1 + \mu)}, \qquad -1 \le x \le 1,

where μ = 255 (8 bits) in the North American and Japanese standards, and sgn(x) is the sign function. The range of this function is −1 to 1.
μ-law expansion is then given by the inverse equation:

F^{-1}(y) = \operatorname{sgn}(y) \, \frac{(1 + \mu)^{|y|} - 1}{\mu}, \qquad -1 \le y \le 1.
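These two formulas translate directly into code. Below is a minimal sketch in C, assuming samples normalized to [−1, 1] and μ = 255; the function names are illustrative, not from any standard API:

```c
#include <math.h>

#define MU 255.0   /* mu = 255 in the North American and Japanese standards */

/* F(x): continuous mu-law compression of a normalized sample x in [-1, 1]. */
double mulaw_compress(double x)
{
    return copysign(log(1.0 + MU * fabs(x)) / log(1.0 + MU), x);
}

/* F^-1(y): mu-law expansion, the inverse of mulaw_compress. */
double mulaw_expand(double y)
{
    return copysign((pow(1.0 + MU, fabs(y)) - 1.0) / MU, y);
}
```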
The discrete form is defined in ITU-T Recommendation G.711.[4]
G.711 is unclear about how to code the values at the limit of a range (e.g. whether +31 codes to 0xEF or 0xF0). However, G.191 provides example code in the C language for a μ-law encoder.[5] Note the offset between the positive and negative ranges: e.g., the negative range corresponding to +30 to +1 is −31 to −2. This is accounted for by the use of 1s' complement (simple bit inversion) rather than 2's complement to convert a negative value to a positive value during encoding.
14-bit linear input value | 8-bit compressed code
+8158 to +4063 in 16 intervals of 256 | 0x80 + interval number
+4062 to +2015 in 16 intervals of 128 | 0x90 + interval number
+2014 to +991 in 16 intervals of 64 | 0xA0 + interval number
+990 to +479 in 16 intervals of 32 | 0xB0 + interval number
+478 to +223 in 16 intervals of 16 | 0xC0 + interval number
+222 to +95 in 16 intervals of 8 | 0xD0 + interval number
+94 to +31 in 16 intervals of 4 | 0xE0 + interval number
+30 to +1 in 15 intervals of 2 | 0xF0 + interval number
0 | 0xFF
−1 | 0x7F
−31 to −2 in 15 intervals of 2 | 0x70 + interval number
−95 to −32 in 16 intervals of 4 | 0x60 + interval number
−223 to −96 in 16 intervals of 8 | 0x50 + interval number
−479 to −224 in 16 intervals of 16 | 0x40 + interval number
−991 to −480 in 16 intervals of 32 | 0x30 + interval number
−2015 to −992 in 16 intervals of 64 | 0x20 + interval number
−4063 to −2016 in 16 intervals of 128 | 0x10 + interval number
−8159 to −4064 in 16 intervals of 256 | 0x00 + interval number
The μ-law algorithm may be implemented in several ways:
Analog: use an amplifier with non-linear gain to achieve companding entirely in the analog domain.
Non-linear ADC: use an analog-to-digital converter with quantization levels that are unequally spaced to match the μ-law curve.
Digital: use the quantized digital form above to convert data once it is in the digital domain (see the sketch after this list).
Software/DSP: use the continuous form of the μ-law algorithm to calculate the companded values.
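As an illustration of the digital approach, here is a minimal encoding sketch implementing the table above. It is modelled on the style of well-known reference encoders but is not the official G.191 code; the names (linear14_to_ulaw, ULAW_BIAS, seg_end) are illustrative, and a 14-bit signed input in the range −8159 to +8158 is assumed:

```c
#include <stdint.h>

#define ULAW_BIAS 33  /* shifts the segment edges onto powers of two */

/* Hypothetical name; out-of-range inputs clip to the extreme codes. */
static uint8_t linear14_to_ulaw(int16_t sample)
{
    /* Upper limit of the biased magnitude in each of the 8 segments. */
    static const int seg_end[8] = {
        0x3F, 0x7F, 0xFF, 0x1FF, 0x3FF, 0x7FF, 0xFFF, 0x1FFF
    };
    int mag, mask, seg;

    /* The sign is carried by complementing the code word (1s' complement),
     * which produces the offset negative ranges noted above. */
    if (sample < 0) {
        mag  = ULAW_BIAS - sample;  /* biased magnitude */
        mask = 0x7F;                /* negative codes end up 0x00..0x7F */
    } else {
        mag  = ULAW_BIAS + sample;
        mask = 0xFF;                /* positive codes end up 0x80..0xFF */
    }

    /* Find the segment (table row) containing the biased magnitude. */
    for (seg = 0; seg < 8; seg++)
        if (mag <= seg_end[seg])
            break;
    if (seg == 8)                   /* beyond the last segment: clip */
        return (uint8_t)(0x7F ^ mask);

    /* Pack segment and 4-bit mantissa, then complement via XOR. */
    return (uint8_t)(((seg << 4) | ((mag >> (seg + 1)) & 0x0F)) ^ mask);
}
```

For example, this sketch encodes 0 to 0xFF, +8158 to 0x80, and resolves the boundary case mentioned above by encoding +31 to 0xEF.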
μ-law encoding is used because speech has a wide dynamic range. In analog signal transmission, in the presence of relatively constant background noise, the finer detail is lost. Given that the precision of the detail is compromised anyway, and assuming that the signal is to be perceived as audio by a human, one can take advantage of the fact that perceived acoustic intensity level, or loudness, is logarithmic (the Weber–Fechner law) by compressing the signal with a logarithmic-response operational amplifier. In telecommunications circuits, most of the noise is injected on the lines; thus, after the compressor, the intended signal is perceived as significantly louder than the static, compared to an uncompressed source. This became a common solution, and thus, prior to common digital usage, the μ-law specification was developed to define an interoperable standard.
This pre-existing algorithm had the effect of significantly reducing the number of bits required to encode a recognizable human voice in digital systems. A sample could be effectively encoded using μ-law in as few as 8 bits, which conveniently matched the symbol size of the majority of common computers.
μ-law encoding effectively reduced the dynamic range of the signal, thereby increasing coding efficiency while biasing the signal in a way that yields a signal-to-distortion ratio greater than that obtained by linear encoding for a given number of bits.
The μ-law algorithm is also used in the .au format, which dates back at least to the SPARCstation 1 by Sun Microsystems as the native method used by the /dev/audio interface, widely used as a de facto standard for sound on Unix systems. The .au format is also used in various common audio APIs, such as the classes in the sun.audio Java package in Java 1.1 and in some C# methods.
This plot illustrates how μ-law concentrates sampling in the smaller (softer) values. The horizontal axis represents the byte values 0–255 and the vertical axis is the 16-bit linear decoded value of μ-law encoding.
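The mapping the plot traces can be generated with a decoder matching the encoding sketch above. Again the name is illustrative; this version returns the 14-bit midpoint of each quantization interval (shift the result left by 2 for the 16-bit scale used in the plot):

```c
#include <stdint.h>

/* Hypothetical name; inverts linear14_to_ulaw above, reconstructing each
 * code word at the midpoint of its quantization interval. */
static int16_t ulaw_to_linear14(uint8_t code)
{
    code = ~code;                      /* undo the 1s' complement */
    int seg      = (code >> 4) & 0x07; /* segment (table row) */
    int mantissa = code & 0x0F;        /* position within the segment */
    /* Rebuild the biased magnitude at the interval midpoint, then unbias. */
    int mag = ((2 * mantissa + 33) << seg) - 33;
    return (code & 0x80) ? (int16_t)-mag : (int16_t)mag;
}
```

Note that both 0xFF and 0x7F decode to 0, reflecting the 0 and −1 rows of the table above.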
The μ-law algorithm provides a slightly larger dynamic range than the A-law at the cost of worse proportional distortion for small signals. By convention, A-law is used for an international connection if at least one country uses it.