An audio coding format[1] (or sometimes audio compression format) is a content representation format for storage or transmission of digital audio (such as in digital television, digital radio and in audio and video files). Examples of audio coding formats include MP3, AAC, Vorbis, FLAC, and Opus. A specific software or hardware implementation capable of audio compression and decompression to/from a specific audio coding format is called an audio codec; an example of an audio codec is LAME, one of several different codecs that implement encoding and decoding of audio in the MP3 audio coding format in software.
Some audio coding formats are documented by a detailed technical specification document known as an audio coding specification. Some such specifications are written and approved by standardization organizations as technical standards, and are thus known as an audio coding standard. The term "standard" is sometimes applied to de facto standards as well as formal standards.
Audio content encoded in a particular audio coding format is normally encapsulated within a container format. As such, the user normally doesn't have a raw AAC file, but instead has an .m4a audio file, which is an MPEG-4 Part 14 container containing AAC-encoded audio. The container also contains metadata such as title and other tags, and perhaps an index for fast seeking.[2] A notable exception is MP3 files, which consist of raw encoded audio without a container format. De facto standards for adding metadata tags such as title and artist to MP3s, such as ID3, are hacks which work by prepending or appending the tags to the MP3, and then relying on the MP3 player to recognize the chunk as malformed audio coding and therefore skip it. In video files with audio, the encoded audio content is bundled with video (in a video coding format) inside a multimedia container format.
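The ID3v1 scheme, for example, appends a fixed 128-byte block beginning with the ASCII marker `TAG` to the very end of the file, so a tag reader can simply inspect the trailing bytes. A minimal sketch of that check (real-world tag handling, such as ID3v2 blocks prepended to the file, is considerably more involved):

```python
def read_id3v1(path):
    """Return (title, artist) from a trailing ID3v1 tag, or None if absent."""
    with open(path, "rb") as f:
        f.seek(0, 2)                      # jump to end of file
        if f.tell() < 128:
            return None
        f.seek(-128, 2)                   # ID3v1 is always the last 128 bytes
        block = f.read(128)
    if not block.startswith(b"TAG"):      # marker identifying an ID3v1 tag
        return None
    # Fixed layout: "TAG"(3) + title(30) + artist(30) + album(30) + ...
    title = block[3:33].rstrip(b"\x00 ").decode("latin-1")
    artist = block[33:63].rstrip(b"\x00 ").decode("latin-1")
    return title, artist
```

An MP3 decoder that knows nothing of ID3v1 still plays such a file, because the trailing block does not parse as valid MP3 frames and is skipped, which is exactly the hack described above.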
An audio coding format does not dictate all algorithms used by a codec implementing the format. Lossy audio compression works in part by removing data in ways humans cannot hear, according to a psychoacoustic model; the implementer of an encoder has some freedom in choosing which data to remove, guided by their psychoacoustic model.
A lossless audio coding format reduces the total data needed to represent a sound but can be decoded to its original, uncompressed form. A lossy audio coding format additionally reduces the bit resolution of the sound on top of compression, which results in far less data at the cost of irretrievably lost information.
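The distinction can be illustrated with a general-purpose lossless compressor such as zlib (not an audio codec, but the round-trip property is the same): decompression recovers the input bit-for-bit, whereas a lossy step, crudely sketched here as truncating 16-bit samples to 8 bits, discards information that no decoder can recover.

```python
import zlib

# Some fake 16-bit little-endian PCM samples, as raw bytes.
pcm = bytes(range(256)) * 64

# Lossless: the round trip is bit-exact.
compressed = zlib.compress(pcm)
assert zlib.decompress(compressed) == pcm

# Lossy (crude sketch): keeping only the high byte of each 16-bit sample
# halves the data, but the discarded low bits are gone for good.
truncated = pcm[1::2]
assert len(truncated) == len(pcm) // 2
```

Real lossy audio codecs are far more sophisticated, removing perceptually masked detail rather than raw sample bits, but the one-way nature of the data loss is the same.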
Transmitted (streamed) audio is most often compressed using lossy audio codecs as the smaller size is far more convenient for distribution. The most widely used audio coding formats are MP3 and Advanced Audio Coding (AAC), both of which are lossy formats based on modified discrete cosine transform (MDCT) and perceptual coding algorithms.
Lossless audio coding formats such as FLAC and Apple Lossless are sometimes available, though at the cost of larger files.
Uncompressed audio formats, such as pulse-code modulation (PCM, or .wav), are also sometimes used. PCM was the standard format for Compact Disc Digital Audio (CDDA).
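As an illustration of how little machinery uncompressed PCM needs, Python's standard-library `wave` module can write a playable `.wav` file directly from raw samples (a sketch; the filename and tone parameters are arbitrary choices):

```python
import math
import struct
import wave

RATE = 44100          # CD-quality sample rate in Hz
SECONDS = 1
FREQ = 440.0          # A4 test tone

# 16-bit signed mono samples of a sine wave, packed little-endian.
frames = b"".join(
    struct.pack("<h", int(32767 * 0.5 * math.sin(2 * math.pi * FREQ * n / RATE)))
    for n in range(RATE * SECONDS)
)

with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)          # mono
    w.setsampwidth(2)          # 2 bytes = 16 bits per sample
    w.setframerate(RATE)
    w.writeframes(frames)      # raw PCM goes into the file unmodified
```

The resulting file is just a short RIFF header followed by the raw PCM data; at full CD parameters (44.1 kHz, 16-bit, stereo) this works out to roughly 10 MB per minute, which is why lossy formats dominate distribution.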
In 1950, Bell Labs filed a patent on differential pulse-code modulation (DPCM). Adaptive DPCM (ADPCM) was introduced by P. Cummiskey, Nikil S. Jayant and James L. Flanagan at Bell Labs in 1973.[3][4]
Perceptual coding was first used for speech coding compression, with linear predictive coding (LPC).[5] Initial concepts for LPC date back to the work of Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966.[6] During the 1970s, Bishnu S. Atal and Manfred R. Schroeder at Bell Labs developed a form of LPC called adaptive predictive coding (APC), a perceptual coding algorithm that exploited the masking properties of the human ear, followed in the early 1980s with the code-excited linear prediction (CELP) algorithm which achieved a significant compression ratio for its time.[5] Perceptual coding is used by modern audio compression formats such as MP3[5] and AAC.
The discrete cosine transform (DCT), developed by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974,[7] provided the basis for the modified discrete cosine transform (MDCT), proposed by J. P. Princen, A. W. Johnson and A. B. Bradley in 1987[9] following earlier work by Princen and Bradley in 1986.[10] The MDCT is used by modern audio compression formats such as Dolby Digital,[11][12] MP3,[8] and Advanced Audio Coding (AAC).[13]
| Basic compression algorithm | Audio coding standard | Abbreviation | Introduction | Market share[14] |
|---|---|---|---|---|
| Modified discrete cosine transform (MDCT) | Dolby Digital (AC-3) | AC3 | 1991 | 58%[15] |
| | Adaptive Transform Acoustic Coding | ATRAC | 1992 | |
| | MPEG Layer III | MP3 | 1993 | 49%[16] |
| | Advanced Audio Coding (MPEG-2 / MPEG-4) | AAC | 1997 | 88% |
| | Windows Media Audio | WMA | 1999 | |
| | Ogg Vorbis | Ogg | 2000 | 7%[17] |
| | Constrained Energy Lapped Transform | CELT | 2011 | [18] |
| | Opus | Opus | 2012 | 8%[19] |
| | LDAC | LDAC | 2015 | [20][21] |
| Adaptive differential pulse-code modulation (ADPCM) | aptX / aptX-HD | aptX | 1989 | [22] |
| | Digital Theater Systems | DTS | 1990 | 14%[23][24] |
| | Master Quality Authenticated | MQA | 2014 | |
| Sub-band coding (SBC) | MPEG-1 Audio Layer II | MP2 | 1993 | |
| | Musepack | MPC | 1997 | |