A digital image is an image composed of picture elements, also known as pixels, each holding a finite, discrete numeric value representing its intensity or gray level. The image can be regarded as the output of a two-dimensional function whose inputs are spatial coordinates, denoted x and y for the horizontal and vertical axes, respectively.[1] Depending on whether the image resolution is fixed, it may be of vector or raster type.
See main article: Raster image. Raster images have a finite set of digital values, called picture elements or pixels. The digital image contains a fixed number of rows and columns of pixels. Pixels are the smallest individual elements in an image, holding quantized values that represent the brightness of a given color at any specific point.
Typically, the pixels are stored in computer memory as a raster image or raster map, a two-dimensional array of small integers. These values are often transmitted or stored in a compressed form.
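This two-dimensional-array representation can be sketched in a few lines of code. The dimensions and sample values below are illustrative, not drawn from any particular image format:

```python
# A minimal sketch of a raster image: a two-dimensional array of small
# integers, each pixel holding a quantized 8-bit gray level (0-255).
# The image size and gradient pattern here are purely illustrative.

rows, cols = 4, 6

# Build a simple horizontal gradient, quantized to 8-bit integers.
raster = [
    [round(c * 255 / (cols - 1)) for c in range(cols)]
    for r in range(rows)
]

for row in raster:
    print(row)

# Each pixel is addressed by its spatial coordinates (x, y):
x, y = 2, 1
brightness = raster[y][x]  # gray level at column 2, row 1
print(brightness)
```

In practice such arrays are rarely stored uncompressed; formats like PNG and JPEG encode the same grid of values far more compactly.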
Raster images can be created by a variety of input devices and techniques, such as digital cameras, scanners, coordinate-measuring machines, seismographic profiling, airborne radar, and more. They can also be synthesized from arbitrary non-image data, such as mathematical functions or three-dimensional geometric models; the latter being a major sub-area of computer graphics. The field of digital image processing is the study of algorithms for their transformation.
Most users come into contact with raster images through digital cameras, which use any of several image file formats.
Some digital cameras give access to almost all the data captured by the camera, using a raw image format. The Universal Photographic Digital Imaging Guidelines (UPDIG) suggest these formats be used when possible, since raw files produce the best-quality images. These file formats allow the photographer and the processing agent the greatest level of control and accuracy for output. Their use is inhibited by the prevalence of proprietary information (trade secrets) held by some camera makers, but there have been initiatives such as OpenRAW to influence manufacturers to release these records publicly. An alternative may be Digital Negative (DNG), a proprietary Adobe product described as "the public, archival format for digital camera raw data".[2] Although this format is not yet universally accepted, support for it is growing, and increasingly professional archivists and conservationists, working for respected organizations, variously suggest or recommend DNG for archival purposes.[3] [4] [5] [6] [7] [8] [9] [10]
Vector images result from mathematical geometry (vectors). In mathematical terms, a vector consists of both a magnitude, or length, and a direction.
Often, both raster and vector elements will be combined in one image; for example, in the case of a billboard with text (vector) and photographs (raster).
Examples of vector file types are EPS, PDF, and AI.
Image viewer software displays images. Web browsers can display standard internet image formats, including JPEG, GIF, and PNG. Some can also show SVG, a standard W3C format. In the past, when Internet connections were slow, it was common to provide "preview" images that would load and appear on a website before being replaced by the main image, to give a preliminary impression. Now that Internet connections are fast, preview images are seldom used.
Some scientific images can be very large (for instance, a 46-gigapixel image of the Milky Way, about 194 GB in size).[11] Such images are difficult to download and are usually browsed online through more complex web interfaces.
Some viewers offer a slideshow utility to display a sequence of images.
Early digital fax machines such as the Bartlane cable picture transmission system preceded digital cameras and computers by decades. The first picture to be scanned, stored, and recreated in digital pixels was displayed on the Standards Eastern Automatic Computer (SEAC) at NIST.[12] The advancement of digital imagery continued in the early 1960s, alongside the development of the space program and medical research. Projects at the Jet Propulsion Laboratory, MIT, Bell Labs, and the University of Maryland, among others, used digital images to advance satellite imagery, wirephoto standards conversion, medical imaging, videophone technology, character recognition, and photo enhancement.[13]
Rapid advances in digital imaging began with the introduction of MOS integrated circuits in the 1960s and microprocessors in the early 1970s, alongside progress in related computer memory storage, display technologies, and data compression algorithms.
The invention of computerized axial tomography (CAT scanning), using X-rays to produce a digital image of a "slice" through a three-dimensional object, was of great importance to medical diagnostics. In addition to the origination of digital images, the digitization of analog images allowed the enhancement and restoration of archaeological artifacts, and began to be used in fields as diverse as nuclear medicine, astronomy, law enforcement, defence, and industry.[14]
Advances in microprocessor technology paved the way for the development and marketing of charge-coupled devices (CCDs) for use in a wide range of image capture devices and gradually displaced the use of analog film and tape in photography and videography towards the end of the 20th century. The computing power necessary to process digital image capture also allowed computer-generated digital images to achieve a level of refinement close to photorealism.[15]
See main article: Image sensor.
The first semiconductor image sensor was the CCD, developed by Willard S. Boyle and George E. Smith at Bell Labs in 1969.[16] While researching MOS technology, they realized that an electric charge was the analog of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next.[17] The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting.[18]
Early CCD sensors suffered from shutter lag. This was largely resolved with the invention of the pinned photodiode (PPD). It was invented by Nobukazu Teranishi, Hiromitsu Shiraki and Yasuo Ishihara at NEC in 1980. It was a photodetector structure with low lag, low noise, high quantum efficiency and low dark current. In 1987, the PPD began to be incorporated into most CCD devices, becoming a fixture in consumer electronic video cameras and then digital still cameras. Since then, the PPD has been used in nearly all CCD sensors and then CMOS sensors.
The NMOS active-pixel sensor (APS) was invented by Olympus in Japan during the mid-1980s. This was enabled by advances in MOS semiconductor device fabrication, with MOSFET scaling reaching smaller micron and then sub-micron levels.[19] [20] The NMOS APS was fabricated by Tsutomu Nakamura's team at Olympus in 1985.[21] The CMOS active-pixel sensor (CMOS sensor) was later developed by Eric Fossum's team at the NASA Jet Propulsion Laboratory in 1993.[22] By 2007, sales of CMOS sensors had surpassed CCD sensors.[23]
See main article: Image compression.
An important development in digital image compression technology was the discrete cosine transform (DCT), a lossy compression technique first proposed by Nasir Ahmed in 1972.[24] DCT compression is used in JPEG, which was introduced by the Joint Photographic Experts Group in 1992.[25] JPEG compresses images down to much smaller file sizes, and has become the most widely used image file format on the Internet.[26]
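The idea behind DCT-based compression can be illustrated with a small sketch. This is a direct (unoptimized) type-II DCT of an 8x8 pixel block, conceptually similar to the transform step in JPEG; the block contents are illustrative, and real codecs use fast factorized transforms plus quantization and entropy coding:

```python
import math

N = 8  # JPEG operates on 8x8 pixel blocks

def dct_2d(block):
    """Direct 2-D type-II DCT of an NxN block (orthonormal scaling)."""
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = cu * cv * s
    return out

# A flat (constant) block: all signal energy collapses into the single
# DC coefficient, leaving the AC coefficients near zero. That energy
# concentration is what lossy compression exploits.
flat = [[128] * N for _ in range(N)]
coeffs = dct_2d(flat)
print(round(coeffs[0][0]))  # DC term: 8 * 128 = 1024
print(round(coeffs[0][1]))  # AC term: 0 for a flat block
```

Smooth image regions behave much like this flat block, so most of their coefficients can be coarsely quantized or discarded with little visible loss.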
See also: Image stitching.
In digital imaging, a mosaic is a combination of non-overlapping images, arranged in some tessellation. Gigapixel images are an example of such digital image mosaics. Satellite imagery is often mosaicked to cover Earth regions.
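The tiling idea can be sketched briefly: non-overlapping tiles of equal size are laid side by side in a rectangular tessellation to form one larger raster. The tile sizes and gray levels below are illustrative, and real mosaicking pipelines also handle georeferencing, blending, and seam correction:

```python
# A minimal sketch of an image mosaic: equally sized, non-overlapping
# tiles arranged in a rectangular grid and combined into one raster.
# Tile dimensions and pixel values here are purely illustrative.

TILE = 2  # each tile is TILE x TILE pixels

def mosaic(tiles):
    """Combine a grid of equally sized tiles into one 2-D raster."""
    grid_rows = len(tiles)
    grid_cols = len(tiles[0])
    out = []
    for gr in range(grid_rows):
        for r in range(TILE):
            row = []
            for gc in range(grid_cols):
                row.extend(tiles[gr][gc][r])  # append this tile's row
            out.append(row)
    return out

# Four 2x2 tiles, each filled with a distinct gray level.
tiles = [
    [[[10, 10], [10, 10]], [[20, 20], [20, 20]]],
    [[[30, 30], [30, 30]], [[40, 40], [40, 40]]],
]
image = mosaic(tiles)
for row in image:
    print(row)
# The result is a 4x4 raster with the tiles side by side, no overlap.
```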
Interactive viewing is provided by virtual-reality photography.