Image compression explained

Image compression is a type of data compression applied to digital images to reduce the cost of storing or transmitting them. Algorithms may take advantage of visual perception and the statistical properties of image data to provide superior results compared with generic data compression methods used for other digital data.[1]

Lossy and lossless image compression

Image compression may be lossy or lossless. Lossless compression is preferred for archival purposes and often for medical imaging, technical drawings, clip art, or comics. Lossy compression methods, especially when used at low bit rates, introduce compression artifacts. Lossy methods are especially suitable for natural images such as photographs in applications where minor (sometimes imperceptible) loss of fidelity is acceptable to achieve a substantial reduction in bit rate. Lossy compression that produces negligible differences may be called visually lossless.

Methods for lossy compression:

  - Transform coding, such as the discrete cosine transform (DCT) or wavelet transform, followed by quantization and entropy coding
  - Chroma subsampling, which exploits the fact that the eye perceives changes in brightness more sharply than changes in color
  - Color quantization, reducing the color space to the most common colors in the image
  - Fractal compression

Methods for lossless compression:

  - Run-length encoding, used as the default method in PCX and as one option in BMP, TGA, and TIFF
  - Predictive coding, such as DPCM
  - Entropy coding, such as Huffman coding and arithmetic coding
  - Adaptive dictionary algorithms such as LZW, used in GIF and TIFF
  - DEFLATE, used in PNG, MNG, and TIFF
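As a concrete illustration of the lossless case, the sketch below implements run-length encoding, one of the simplest lossless methods. The function names are illustrative, not part of any standard API; the key property shown is that decoding reproduces the input exactly.

```python
def rle_encode(pixels):
    """Run-length encode a sequence of pixel values as (value, count) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, count) pairs back into the original pixel sequence."""
    return [v for v, n in runs for _ in range(n)]

row = [255, 255, 255, 0, 0, 17, 17, 17, 17]
encoded = rle_encode(row)
print(encoded)                        # [(255, 3), (0, 2), (17, 4)]
assert rle_decode(encoded) == row     # lossless: exact round trip
```

RLE compresses well only when long runs of identical values are common (e.g., flat backgrounds in icons or scanned drawings), which is why it suits clip art better than photographs.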

Other properties

The main goal of image compression is the best image quality at a given compression rate (or bit rate). However, there are other important properties of image compression schemes:

Scalability generally refers to a quality reduction achieved by manipulating the bitstream or file (without decompression and re-compression). Other names for scalability are progressive coding or embedded bitstreams. Although it may seem counterintuitive, scalability is also found in lossless codecs, usually in the form of coarse-to-fine pixel scans. Scalability is especially useful for previewing images while downloading them (e.g., in a web browser) or for providing variable-quality access to, e.g., image databases. There are several types of scalability:

  - Quality progressive (or layer progressive): the bitstream successively refines the reconstructed image.
  - Resolution progressive: first encode a lower image resolution, then encode the difference to higher resolutions.
  - Component progressive: first encode the greyscale version, then add full color.
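As a toy illustration of quality-progressive decoding, the sketch below keeps only the most significant bit planes of 8-bit grayscale pixels, as if a decoder had stopped reading an embedded bitstream early. `coarse_preview` is an illustrative name under that assumption, not part of any codec's API:

```python
def coarse_preview(pixels, kept_bits):
    """Quality-scalable preview: keep only the top `kept_bits` bit planes
    of each 8-bit pixel, as if the bitstream were truncated early."""
    mask = 0xFF & (0xFF << (8 - kept_bits))   # e.g. kept_bits=2 -> 0b11000000
    return [p & mask for p in pixels]

row = [200, 130, 65, 7]
print(coarse_preview(row, 2))   # [192, 128, 64, 0] -- coarse but recognizable
print(coarse_preview(row, 8))   # [200, 130, 65, 7] -- full quality
```

Each additional bit plane read from the stream refines the preview toward the full-quality image, which is the essence of quality scalability.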

Region of interest coding. Certain parts of the image are encoded with higher quality than others. This may be combined with scalability (encode these parts first, others later).

Meta information. Compressed data may contain information about the image which may be used to categorize, search, or browse images. Such information may include color and texture statistics, small preview images, and author or copyright information.

Processing power. Compression algorithms require different amounts of processing power to encode and decode. Algorithms that achieve high compression ratios typically demand correspondingly high processing power.

The quality of a compression method is often measured by the peak signal-to-noise ratio (PSNR), which measures the amount of noise introduced through lossy compression of the image. However, the subjective judgment of the viewer is also regarded as an important measure, perhaps the most important one.
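PSNR is defined as 10·log10(MAX²/MSE), where MAX is the maximum possible pixel value (255 for 8-bit images) and MSE is the mean squared error between the original and the compressed image. A minimal sketch, treating images as flat lists of pixel values:

```python
import math

def psnr(original, compressed, max_value=255):
    """Peak signal-to-noise ratio in dB between two equal-size images,
    given here as flat lists of 8-bit pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return float('inf')     # identical images: no noise introduced
    return 10 * math.log10(max_value ** 2 / mse)

orig  = [52, 55, 61, 66]
lossy = [50, 55, 60, 70]
print(round(psnr(orig, lossy), 2))   # ~40.93 dB
```

Higher values indicate less distortion; for 8-bit images, values above roughly 40 dB are usually hard to distinguish from the original, though PSNR correlates only loosely with perceived quality.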

History

Entropy coding started in the late 1940s with the introduction of Shannon–Fano coding,[8] the basis for Huffman coding, which was published in 1952. Transform coding dates back to the late 1960s, with the introduction of fast Fourier transform (FFT) coding in 1968 and the Hadamard transform in 1969.[9]

An important development in image data compression was the discrete cosine transform (DCT), a lossy compression technique first proposed by Nasir Ahmed, T. Natarajan and K. R. Rao in 1973.[10] JPEG was introduced by the Joint Photographic Experts Group (JPEG) in 1992.[11] JPEG compresses images down to much smaller file sizes, and has become the most widely used image file format.[12] JPEG was largely responsible for the wide proliferation of digital images and digital photos,[13] with several billion JPEG images produced every day as of 2015.[14]

Lempel–Ziv–Welch (LZW) is a lossless compression algorithm developed by Abraham Lempel, Jacob Ziv and Terry Welch in 1984. It is used in the GIF format, introduced in 1987.[15] DEFLATE, a lossless compression algorithm developed by Phil Katz and specified in 1996, is used in the Portable Network Graphics (PNG) format.[16]

The JPEG 2000 standard was developed from 1997 to 2000 by a JPEG committee chaired by Touradj Ebrahimi (later the JPEG president).[17] In contrast to the DCT algorithm used by the original JPEG format, JPEG 2000 instead uses discrete wavelet transform (DWT) algorithms. It uses the CDF 9/7 wavelet transform (developed by Ingrid Daubechies in 1992) for its lossy compression algorithm,[18] and the Le Gall–Tabatabai (LGT) 5/3 wavelet transform[19][20] (developed by Didier Le Gall and Ali J. Tabatabai in 1988)[21] for its lossless compression algorithm.[18] JPEG 2000 technology, which includes the Motion JPEG 2000 extension, was selected as the video coding standard for digital cinema in 2004.[22]

Huffman Coding

Huffman coding is a fundamental technique used in image compression algorithms to achieve efficient data representation. Named after its inventor David A. Huffman, this method is widely employed in various image compression standards such as JPEG and PNG.

Principle of Huffman Coding

Huffman coding is a form of entropy encoding that assigns variable-length codes to input symbols based on their frequencies of occurrence. The basic principle is to assign shorter codes to more frequently occurring symbols and longer codes to less frequent symbols, thereby reducing the average code length compared to fixed-length codes.

Application in Image Compression

In image compression, Huffman coding is typically applied after other transformations, such as the discrete cosine transform (DCT) in the case of JPEG. After the image data is transformed into a frequency-domain representation, Huffman coding is used to encode the transformed (and, in JPEG, quantized) coefficients efficiently.
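To make the transform stage concrete, here is a deliberately naive 2-D DCT-II sketch. Real codecs use heavily optimized 8×8 fast DCTs; this textbook O(N⁴) version only illustrates how a block of pixels becomes frequency coefficients, with the energy of smooth regions concentrated in the low-frequency (DC) corner:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an N x N block.
    Returns a matrix of frequency coefficients; [0][0] is the DC term."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

# A flat block concentrates all of its energy in the DC coefficient:
flat = [[100] * 4 for _ in range(4)]
coeffs = dct2(flat)
print(round(coeffs[0][0]))   # 400: DC term = N * mean for a constant block
print(round(coeffs[0][1]))   # 0: no AC energy in a flat block
```

Because most AC coefficients of natural-image blocks are near zero after quantization, the symbol stream fed to the entropy coder is highly skewed, which is exactly what Huffman coding exploits.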

Steps in Huffman Coding for Image Compression

  1. Frequency Analysis: Calculate the frequency of occurrence of each symbol or symbol combination in the transformed image data.
  2. Constructing the Huffman Tree: Build a Huffman tree based on the symbol frequencies. The tree is built by repeatedly combining the two nodes with the lowest frequencies until a single root node is formed.
  3. Assigning Codewords: Traverse the Huffman tree to assign variable-length codewords to each symbol, with shorter codewords assigned to more frequent symbols.
  4. Encoding: Replace the original symbols in the image data with their corresponding Huffman codewords to generate the compressed data stream.
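The four steps above can be sketched as follows. This is illustrative Python, not a production JPEG entropy coder (baseline JPEG, for instance, uses predefined code tables and specific symbol alphabets):

```python
import heapq
from collections import Counter
from itertools import count

def huffman_codes(symbols):
    """Build a Huffman code table (steps 1-3) from a symbol sequence."""
    freq = Counter(symbols)                    # step 1: frequency analysis
    if len(freq) == 1:                         # degenerate one-symbol input
        return {next(iter(freq)): '0'}
    tiebreak = count()                         # keeps heap comparisons off the dicts
    heap = [[f, next(tiebreak), {s: ''}] for s, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:                       # step 2: merge the two rarest nodes
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # step 3: codewords grow as '0'/'1' prefixes on the way back up the tree
        merged = {s: '0' + c for s, c in c1.items()}
        merged.update({s: '1' + c for s, c in c2.items()})
        heapq.heappush(heap, [f1 + f2, next(tiebreak), merged])
    return heap[0][2]

data = "aaaabbc"                               # e.g. a skewed symbol stream
codes = huffman_codes(data)
encoded = ''.join(codes[s] for s in data)      # step 4: encoding
print(codes)     # {'c': '00', 'b': '01', 'a': '1'}
print(encoded)   # '1111010100' -- 10 bits, versus 56 bits at 8 bits/symbol
```

The frequent symbol 'a' receives a 1-bit codeword while the rare 'b' and 'c' receive 2 bits, and because the code is prefix-free the bitstream decodes unambiguously without separators.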

Benefits of Huffman Coding in Image Compression

Huffman coding is lossless, so the entropy-coding stage itself introduces no distortion. For a known symbol distribution it produces an optimal prefix code, meaning no other code assigning whole-bit codewords per symbol achieves a shorter average length, and the prefix property lets the decoder read the bitstream unambiguously without separators. Its main limitations are that codeword lengths are restricted to whole bits (arithmetic coding can come closer to the entropy limit) and that the code table must be available to the decoder, either transmitted with the data or predefined as in baseline JPEG.

Conclusion

Huffman coding plays a crucial role in image compression by efficiently encoding image data into a compact representation. Its ability to adaptively assign variable-length codewords based on symbol frequencies makes it an essential component in modern image compression techniques, contributing to the reduction of storage space and transmission bandwidth while maintaining image quality.

Notes and References

  1. "Image Data Compression". Web page.
  2. Ahmed, N.; Natarajan, T.; Rao, K.R. (1974). "Discrete Cosine Transform". IEEE Transactions on Computers: 90–93. doi:10.1109/T-C.1974.223784. Archived at https://web.archive.org/web/20111125071212/http://dasan.sejong.ac.kr/~dihan/dip/p5_DCT.pdf (2011-11-25).
  3. Maayan, Gilad David (Nov 24, 2021). "AI-Based Image Compression: The State of the Art". Towards Data Science. Retrieved 6 April 2023.
  4. "High-Fidelity Generative Image Compression". Retrieved 6 April 2023.
  5. Bühlmann, Matthias (2022-09-28). "Stable Diffusion Based Image Compression". Medium. Retrieved 2022-11-02.
  6. Burt, P.; Adelson, E. (1 April 1983). "The Laplacian Pyramid as a Compact Image Code". IEEE Transactions on Communications. 31 (4): 532–540. doi:10.1109/TCOM.1983.1095851.
  7. Shao, Dan; Kropatsch, Walter G. (February 3–5, 2010). "Irregular Laplacian Graph Pyramid". In Špaček, Libor; Franc, Vojtěch (eds.), Computer Vision Winter Workshop 2010. Nové Hrady, Czech Republic: Czech Pattern Recognition Society. Archived at https://web.archive.org/web/20130527045502/http://www.prip.tuwien.ac.at/twist/docs/irregularLaplacian.pdf (2013-05-27).
  8. Shannon, Claude Elwood (1948). "A Mathematical Theory of Communication". Bell System Technical Journal. 27 (3–4): 379–423, 623–656. doi:10.1002/j.1538-7305.1948.tb01338.x. Archived at https://web.archive.org/web/20110524064232/http://math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf (2011-05-24).
  9. Pratt, W.K.; Kane, J.; Andrews, H.C. (1969). "Hadamard transform image coding". Proceedings of the IEEE. 57: 58–68. doi:10.1109/PROC.1969.6869.
  10. Ahmed, Nasir (January 1991). "How I Came Up With the Discrete Cosine Transform". Digital Signal Processing. 1 (1): 4–5. doi:10.1016/1051-2004(91)90086-Z.
  11. "T.81 – Digital compression and coding of continuous-tone still images – Requirements and guidelines" (September 1992). Retrieved 12 July 2019. Archived at https://web.archive.org/web/20000818020219/http://www.w3.org/Graphics/JPEG/itu-t81.pdf (2000-08-18).
  12. "The JPEG image format explained" (31 May 2018). Retrieved 5 August 2019.
  13. "What Is a JPEG? The Invisible Object You See Every Day" (24 September 2013). Retrieved 13 September 2019.
  14. Baraniuk, Chris (15 October 2015). "Copy protections could come to JPEGs". Retrieved 13 September 2019.
  15. "The GIF Controversy: A Software Developer's Perspective" (27 January 1995). Retrieved 26 May 2015.
  16. Deutsch, L. Peter (May 1996). DEFLATE Compressed Data Format Specification version 1.3. RFC 1951. Retrieved 2014-04-23.
  17. Taubman, David; Marcellin, Michael (2012). JPEG2000 Image Compression Fundamentals, Standards and Practice. ISBN 9781461507994.
  18. Unser, M.; Blu, T. (2003). "Mathematical properties of the JPEG2000 wavelet filters". IEEE Transactions on Image Processing. 12 (9): 1080–1090. doi:10.1109/TIP.2003.812329. Bibcode 2003ITIP...12.1080U. Archived at https://web.archive.org/web/20191013222932/https://pdfs.semanticscholar.org/6ed4/dece8b364416d9c390ba53df913bca7fb9a6.pdf (2019-10-13).
  19. Sullivan, Gary (8–12 December 2003). "General characteristics and design considerations for temporal subband video coding". Retrieved 13 September 2019.
  20. Bovik, Alan C. (2009). The Essential Guide to Video Processing. p. 355. ISBN 9780080922508.
  21. Le Gall, Didier; Tabatabai, Ali J. (1988). "Sub-band coding of digital images using symmetric short kernel filters and arithmetic coding techniques". ICASSP-88, International Conference on Acoustics, Speech, and Signal Processing: 761–764 vol. 2. doi:10.1109/ICASSP.1988.196696.
  22. Swartz, Charles S. (2005). Understanding Digital Cinema: A Professional Handbook. p. 147. ISBN 9780240806174.