A compression artifact (or artefact) is a noticeable distortion of media (including images, audio, and video) caused by the application of lossy compression. Lossy data compression involves discarding some of the media's data so that it becomes small enough to be stored within the desired disk space or transmitted (streamed) within the available bandwidth (known as the data rate or bit rate). If the compressor cannot store enough data in the compressed version, the result is a loss of quality, or introduction of artifacts. The compression algorithm may not be intelligent enough to discriminate between distortions of little subjective importance and those objectionable to the user.
The most common digital compression artifacts are DCT blocks, caused by the discrete cosine transform (DCT) compression algorithm used in many digital media standards, such as JPEG, MP3, and MPEG video file formats.[1] These compression artifacts appear when heavy compression is applied,[1] and occur often in common digital media, such as DVDs, common computer file formats such as JPEG, MP3 and MPEG files, and some alternatives to the compact disc, such as Sony's MiniDisc format. Uncompressed media (such as on Laserdiscs, Audio CDs, and WAV files) or losslessly compressed media (such as FLAC or PNG) do not suffer from compression artifacts.
The minimization of perceivable artifacts is a key goal in implementing a lossy compression algorithm. However, artifacts are occasionally intentionally produced for artistic purposes, a style known as glitch art[2] or datamoshing.[3]
Technically speaking, a compression artifact is a particular class of data error that is usually the consequence of quantization in lossy data compression. Where transform coding is used, it typically assumes the form of one of the basis functions of the coder's transform space.
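To illustrate (using an arbitrary test signal and an orthonormal 8-point DCT; the signal and quantization step below are illustrative choices, not taken from any particular codec), the reconstruction error introduced by quantizing transform coefficients is exactly a weighted sum of the transform's basis functions, with the weights being the per-coefficient quantization errors:

```python
import numpy as np

# Illustrative sketch only: build an orthonormal 8-point DCT-II basis, quantize
# the coefficients of a test signal, and confirm that the reconstruction error
# is a weighted sum of the basis functions (weighted by each coefficient's
# quantization error).
N = 8
n = np.arange(N)
D = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
D[0, :] /= np.sqrt(2.0)            # rows of D are the DCT basis functions

x = np.sin(0.9 * n)                # arbitrary test signal
c = D @ x                          # transform coefficients
q = 0.5                            # coarse quantization step, to make the error visible
c_hat = np.round(c / q) * q        # quantize and dequantize every coefficient

error = D.T @ c_hat - x            # error in the reconstructed signal
# The same error, expressed directly as basis functions scaled by the
# per-coefficient quantization errors:
assert np.allclose(error, (c_hat - c) @ D)
print(error)
```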
When performing block-based discrete cosine transform (DCT)[1] coding for quantization, as in JPEG-compressed images, several types of artifacts can appear.
Other lossy algorithms, which use pattern matching to deduplicate similar symbols, are prone to introducing hard-to-detect errors in printed text; for example, the numerals "6" and "8" may be swapped. This has been observed with JBIG2 in certain photocopiers.[4][5]
At low bit rates, any lossy block-based coding scheme introduces visible artifacts in pixel blocks and at block boundaries. These boundaries can be transform block boundaries, prediction block boundaries, or both, and may coincide with macroblock boundaries. The term macroblocking is commonly used regardless of the artifact's cause. Other names include blocking,[6] tiling,[7] mosaicing, pixelating, quilting, and checkerboarding.
Block artifacts are a consequence of the very principle of block transform coding. The transform (for example, the discrete cosine transform) is applied to a block of pixels, and to achieve lossy compression, the transform coefficients of each block are quantized. The lower the bit rate, the more coarsely the coefficients are represented and the more of them are quantized to zero. Statistically, images contain more low-frequency than high-frequency content, so it is mostly low-frequency content that survives quantization, which results in blurry, low-resolution blocks. In the most extreme case only the DC coefficient, that is, the coefficient representing the average color of a block, is retained, and the reconstructed block is a single flat color.
Because this quantization process is applied individually in each block, neighboring blocks quantize coefficients differently. This leads to discontinuities at the block boundaries. These are most visible in flat areas, where there is little detail to mask the effect.
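The following minimal sketch illustrates both effects, assuming 8x8 blocks, an orthonormal 2-D DCT, and a single very coarse uniform quantization step (real codecs use per-frequency quantization tables): a smooth gradient reconstructs as flat tiles whose values jump only at the 8-pixel block boundaries.

```python
import numpy as np

N = 8
n = np.arange(N)
D = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
D[0, :] /= np.sqrt(2.0)                           # orthonormal 8-point DCT-II matrix

def code_block(block, q_step):
    """Transform, coarsely quantize, dequantize and inverse-transform one block."""
    coeffs = D @ block @ D.T                      # 2-D DCT of the 8x8 block
    coeffs = np.round(coeffs / q_step) * q_step   # uniform quantization (illustrative)
    return D.T @ coeffs @ D                       # reconstruction

# A smooth horizontal gradient: no blocks are visible in the source image.
image = np.tile(np.linspace(0, 255, 64), (64, 1))

q_step = 200                                      # so coarse that only the DC coefficient survives
out = np.empty_like(image)
for y in range(0, 64, N):
    for x in range(0, 64, N):
        out[y:y+N, x:x+N] = code_block(image[y:y+N, x:x+N], q_step)

# Each block is quantized independently, so the reconstruction is a staircase of
# flat 8x8 tiles; the steps sit exactly on the block boundaries.
jumps = np.abs(np.diff(out[0]))
print("mean jump inside blocks:      ", jumps[np.arange(63) % 8 != 7].mean())
print("mean jump at block boundaries:", jumps[np.arange(63) % 8 == 7].mean())
```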
See main article: Deblocking filter. Various approaches have been proposed to reduce image compression effects, but to use standardized compression/decompression techniques and retain the benefits of compression (for instance, lower transmission and storage costs), many of these methods focus on "post-processing"—that is, processing images when received or viewed. No post-processing technique has been shown to improve image quality in all cases; consequently, none has garnered widespread acceptance, though some have been implemented and are in use in proprietary systems. Many photo editing programs, for instance, have proprietary JPEG artifact reduction algorithms built-in. Consumer equipment often calls this post-processing "MPEG Noise Reduction".[8]
Boundary artifacts in JPEG can be turned into more pleasing "grains", not unlike those in high-ISO photographic film. Instead of simply multiplying the quantized coefficients by the quantization step pertaining to the 2D frequency, intelligent noise, in the form of a random number drawn from within the quantization interval, can be added to the dequantized coefficient. This method can be built into JPEG decompressors as an integral part, applicable to the trillions of existing and future JPEG images; as such, it is not a "post-processing" technique.[9]
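A minimal sketch of this dithering idea, assuming a plain uniform quantizer; the choice of noise interval and of which coefficients are dithered is illustrative here, not the exact procedure of the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)

def dequantize(levels, q_step, dither=False):
    """Reconstruct DCT coefficients from their quantization levels.

    Plain dequantization places every coefficient exactly on a multiple of
    q_step, so neighboring blocks snap to identical flat values. With dithering,
    a random offset inside the quantization interval is added instead, which
    turns the blocky error into an unstructured, grain-like error.
    """
    values = levels * q_step
    if dither:
        values = values + rng.uniform(-q_step / 2, q_step / 2, size=values.shape)
    return values

# Example: a run of coefficients that all quantized to zero reconstructs as dead
# flat without dithering, but as low-level "grain" with it.
levels = np.zeros(8)
print(dequantize(levels, q_step=80))               # [0. 0. 0. 0. 0. 0. 0. 0.]
print(dequantize(levels, q_step=80, dither=True))  # small random values, film-grain-like
```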
Ringing can be reduced at encode time by overshooting the DCT values so that the rings are clamped away.[10]
Posterization generally occurs only at low quality settings, when the DC values are given too little precision in the quantization table. Tuning the quantization table helps.[11]
When motion prediction is used, as in MPEG-1, MPEG-2 or MPEG-4, compression artifacts tend to remain on several generations of decompressed frames, and move with the optic flow of the image, leading to a peculiar effect, part way between a painting effect and "grime" that moves with objects in the scene.
Data errors in the compressed bit-stream, possibly due to transmission errors, can lead to errors similar to large quantization errors, or can disrupt the parsing of the data stream entirely for a short time, leading to "break-up" of the picture. Where gross errors have occurred in the bit-stream, decoders continue to apply updates to the damaged picture for a short interval, creating a "ghost image" effect, until they receive the next independently compressed frame, known in MPEG picture coding as an "I-frame", with the 'I' standing for "intra". Until the next I-frame arrives, the decoder can perform error concealment.
Block boundary discontinuities can occur at edges of motion compensation prediction blocks. In motion compensated video compression, the current picture is predicted by shifting blocks (macroblocks, partitions, or prediction units) of pixels from previously decoded frames. If two neighboring blocks use different motion vectors, there will be a discontinuity at the edge between the blocks.
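A minimal sketch of this effect, assuming simple block-copy prediction from a reference frame containing a smooth ramp; the block size and motion vectors below are arbitrary illustrative values:

```python
import numpy as np

# Reference frame: a smooth horizontal ramp, so adjacent columns differ by 1.
ref = np.tile(np.arange(32, dtype=float), (16, 1))

# Predict a 16x16 region of the current frame from two neighboring 16x8 blocks
# that happen to use different horizontal motion vectors.
pred = np.zeros((16, 16))
pred[:, :8] = ref[:, 4:12]     # left block copied from 4 columns to the right (MV = +4)
pred[:, 8:16] = ref[:, 6:14]   # right block copied from 2 columns to the left (MV = -2)

# Inside either block the ramp is intact, but where the two blocks meet the
# prediction jumps, because the blocks were fetched from different places.
print("step inside the left block:", pred[0, 4] - pred[0, 3])   # 1.0, matches the ramp
print("step at the block boundary:", pred[0, 8] - pred[0, 7])   # -5.0, a visible seam
```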
Video compression artifacts include the cumulative results of compressing the constituent still images; for instance, ringing or other edge busyness in successive still images appears, in sequence, as a shimmering blur of dots around edges, called mosquito noise because it resembles mosquitoes swarming around the object.[12][13] Mosquito noise is caused by the block-based discrete cosine transform (DCT) compression algorithm used in most video coding standards, such as the MPEG formats.[14]
See main article: Deblocking filter. The artifacts at block boundaries can be reduced by applying a deblocking filter. As in still image coding, it is possible to apply a deblocking filter to the decoder output as post-processing.
In motion-predicted video coding with a closed prediction loop, the encoder uses the decoder output as the prediction reference from which future frames are predicted. To that end, the encoder conceptually integrates a decoder. If this internal decoder applies deblocking, the deblocked picture is then used as the reference picture for motion compensation, which improves coding efficiency by preventing block artifacts from propagating across frames. This is referred to as an in-loop deblocking filter. Standards which specify an in-loop deblocking filter include VC-1, H.263 Annex J, H.264/AVC, and H.265/HEVC.
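The following is a crude post-processing deblocking sketch, not the filter specified by any of the standards above: it smooths the pixel pair straddling each vertical block edge, but only when the step across the edge is small enough to look like a quantization discontinuity rather than a real image edge.

```python
import numpy as np

def deblock_vertical_edges(img, block=8, threshold=20.0):
    """Crude deblocking sketch for vertical block edges only.

    For each pixel pair straddling a block boundary, the step between the two
    pixels is reduced, but only if it is small; a large step is assumed to be a
    genuine image edge and is left alone, mimicking the per-boundary
    edge-strength decision that standardized in-loop filters make.
    """
    out = img.astype(float).copy()
    for x in range(block, out.shape[1], block):
        step = out[:, x] - out[:, x - 1]
        delta = np.where(np.abs(step) < threshold, step / 4.0, 0.0)
        out[:, x - 1] += delta
        out[:, x] -= delta
    return out

# Example: two flat half-images differing by 8 gray levels across an 8-pixel
# boundary; the filter halves the visible step.
img = np.zeros((4, 16))
img[:, 8:] = 8.0
print(deblock_vertical_edges(img)[0])
```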
Lossy audio compression typically works with a psychoacoustic model, a model of human hearing perception. Lossy audio formats typically involve the use of a time/frequency-domain transform, such as a modified discrete cosine transform. With the psychoacoustic model, masking effects such as frequency masking and temporal masking are exploited so that sounds that should be imperceptible are not recorded. For example, human beings are generally unable to perceive a quiet tone played simultaneously with a similar but louder tone; a lossy compression technique may identify this quiet tone and attempt to remove it. In addition, quantization noise can be "hidden" where it would be masked by more prominent sounds. At low compression ratios, a conservative psychoacoustic model and small block sizes are used.
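A toy sketch of frequency masking under strong simplifying assumptions (a single linear-in-dB spreading slope and a fixed threshold floor; real psychoacoustic models are considerably more elaborate): spectral lines that stay below the masking curve cast by louder neighbors are treated as inaudible and dropped.

```python
import numpy as np

def apply_frequency_masking(magnitudes, slope_db_per_bin=10.0, floor_db=-60.0):
    """Zero out spectral lines masked by louder neighbors (toy model).

    Each line casts a masking threshold over its neighbors that falls off
    linearly in dB with distance in bins; any line below the combined threshold
    (or below a fixed hearing floor) is considered inaudible and removed.
    """
    db = 20.0 * np.log10(np.maximum(magnitudes, 1e-12))
    bins = np.arange(len(db))
    threshold = np.full_like(db, floor_db)
    for i, level in enumerate(db):
        spread = level - slope_db_per_bin * np.abs(bins - i)
        spread[i] = floor_db                   # a line does not mask itself
        threshold = np.maximum(threshold, spread)
    return np.where(db > threshold, magnitudes, 0.0)

# Example: a loud line at bin 50 masks a quiet line two bins away, so only the
# loud line survives; the same quiet line far from any masker would be kept.
mags = np.zeros(100)
mags[50] = 1.0      # loud tone (0 dB)
mags[52] = 0.02     # quiet neighbor (-34 dB), below the masker's spread threshold
print(np.nonzero(apply_frequency_masking(mags))[0])   # -> [50]
```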
Compression artifacts may appear when the psychoacoustic model is inaccurate, when the transform block size is restricted, or when aggressive compression is used. In compressed audio they typically show up as ringing, pre-echo, "birdie artifacts", drop-outs, rattling, warbling, metallic ringing, an underwater feeling, hissing, or "graininess".
An example of compression artifacts in audio is applause in a relatively heavily compressed audio file (e.g. a 96 kbit/s MP3). In general, musical tones have repeating waveforms and fairly predictable variations in volume, whereas applause is essentially random and therefore hard to compress. A heavily compressed recording of applause may exhibit "metallic ringing" and other compression artifacts.
Compression artifacts may intentionally be used as a visual style, sometimes known as glitch art. Rosa Menkman's glitch art makes use of compression artifacts,[15] particularly the discrete cosine transform blocks (DCT blocks) found in most digital media data compression formats such as JPEG digital images and MP3 digital audio.[16] In still images, an example is Jpegs by German photographer Thomas Ruff, which uses intentional JPEG artifacts as the basis of the picture's style.[17] [18]
In video art, one technique used is datamoshing, where two videos are interleaved so intermediate frames are interpolated from two separate sources. Another technique involves simply transcoding from one lossy video format to another, which exploits the difference in how the separate video codecs process motion and color information.[19] The technique was pioneered by artists Bertrand Planes in collaboration with Christian Jacquemin in 2006 with DivXPrime,[20] Sven König, Takeshi Murata, Jacques Perconte and Paul B. Davis in collaboration with Paperrad, and more recently used by David OReilly and within music videos for Chairlift and by Nabil Elderkin in the "Welcome to Heartbreak" music video for Kanye West.[21] [22]
There is also a genre of internet memes in which often-nonsensical images are deliberately and heavily compressed, sometimes multiple times, for comedic effect. Images created using this technique are often referred to as "deep fried".[23]