Deep image compositing is a technique for compositing and rendering digital images that emerged in the early 2010s. In addition to the usual color and opacity channels, a notion of spatial depth is stored, allowing multiple samples along the depth of the image to contribute to the final resulting color. This technique produces high-quality results and removes artifacts around edges that could not otherwise be dealt with.
Deep data is encoded by advanced 3D renderers into an image that samples information about the path each rendered pixel's view ray takes along the z axis extending outward from the virtual camera through space, including the color and opacity of every non-opaque surface or volume it passes through along the way, as well as those of neighboring samples. It might be considered somewhat analogous to the way ray tracing generates simulated photon paths through such media; however, ray tracing and other traditional rendering techniques generally produce images containing only three or four channels of color and opacity values per pixel, flattened into a two-dimensional frame.
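As a rough illustration, a deep pixel can be thought of as a list of depth-ordered samples rather than a single flattened RGBA value. The following sketch uses illustrative, hypothetical names rather than any particular renderer's or file format's API:

```python
from dataclasses import dataclass

@dataclass
class DeepSample:
    z: float       # distance from the camera along the view ray
    color: tuple   # (r, g, b), premultiplied by alpha
    alpha: float   # opacity of this sample (0 = fully transparent)

# One deep pixel: a ray that passes through haze, then glass, then a wall.
deep_pixel = [
    DeepSample(z=2.0, color=(0.05, 0.05, 0.06), alpha=0.1),  # thin haze
    DeepSample(z=5.0, color=(0.10, 0.20, 0.20), alpha=0.4),  # glass pane
    DeepSample(z=9.0, color=(0.50, 0.30, 0.10), alpha=1.0),  # opaque wall
]
```

A flat render would collapse these three samples into one RGBA value; the deep pixel keeps them separate, which is what later makes depth-aware merging and matte extraction possible.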
Depth maps, on the other hand, contain z axis information encoded in a grayscale image, with each level of gray representing a different slice of the z space. The "thickness" of each slice is determined at render time, allowing more or less depth fidelity depending on how deep the scene is. Depth maps have been a boon to compositors for blending 3D renders with live-action and practical elements. To be useful, a map must have a high enough bit depth to encode separation between close-to-camera objects and objects near infinity; most 3D software packages can now generate 16-bit and 32-bit depth maps, providing up to 2 billion depth levels. Depth maps do not, however, include transparency information about non-opaque surfaces or volumes, so objects beyond and viewed through semi- or fully-transparent objects have no depth information of their own and may not be composited or blurred correctly. Even cryptomattes, a popular addition to many post-production and VFX studios' pipelines that provide separate color-coded ID shapes for individual elements in a rendered scene to further bridge the gap between CGI and compositing, do not allow for the nearly automated and fully non-linear workflows that deep data does. This is because deep images encapsulate enough 3D information that normally time-intensive tasks, such as rotoscoping with numerous holdout mattes for complex interactions between moving characters and semi-transparent environmental volumes like smoke or water, become essentially trivial: multiple mattes can be generated from a single set of deep images with no need to re-render every matte element and background for each case.
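The slice-based nature of a depth map, and the way a matte can be pulled from it by selecting a z range, can be sketched in a few lines. This is a simplified illustration with made-up values, not production code:

```python
import numpy as np

# A tiny grayscale depth map: per-pixel distance from the camera.
depth = np.array([[1.0, 4.0, 8.0],
                  [2.0, 5.0, 9.0]])

near, far = 3.0, 6.0  # the slice of z space we want to isolate
matte = ((depth >= near) & (depth <= far)).astype(float)
# matte is 1.0 where a surface falls inside the slice, 0.0 elsewhere.
# Note that anything seen *through* glass or smoke at another depth is
# simply absent from the map, which is exactly the limitation above.
```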
In addition to that efficiency and flexibility, deep data images inherently provide much higher visual quality in areas that have traditionally been difficult for renders, such as the motion-blurred edges of characters with semi-transparent elements like hair.
One downside to deep images is their substantial file size, since they encode a relatively enormous amount of data per frame, even compared with multichannel formats such as OpenEXR.
One approach stores the data as a function of depth, yielding a function curve that can be used to look up color and opacity at any arbitrary depth; manipulating data stored this way, however, is harder.
Alternatively, each sample can be stored as an independent piece of data, which makes individual samples easy to manipulate. To ensure the data represents the right level of detail, an additional expand value needs to be introduced.
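The independence of samples can be illustrated with a hypothetical edit that touches only one region of z space, here dimming every sample beyond an arbitrary depth while leaving nearer samples untouched:

```python
# A deep pixel as a list of independent samples (illustrative layout).
samples = [
    {"z": 2.0, "color": (0.8, 0.8, 0.8), "alpha": 0.2},   # near haze
    {"z": 7.0, "color": (0.4, 0.4, 0.5), "alpha": 0.6},   # distant smoke
]

# Dim only the samples beyond z = 6; a flat image could not make this
# distinction, because its depths are already merged away.
for s in samples:
    if s["z"] > 6.0:
        s["color"] = tuple(c * 0.5 for c in s["color"])
```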
3D renderers produce the necessary data as part of the rendering pipeline: samples are gathered in depth and then combined, and the deep data can be written out before that combination happens, so it adds nothing new to the process. Generating deep data from camera footage, by contrast, requires a proper depth map. This is done in some cases but is still not accurate enough for a detailed representation, though it can be sufficient for basic holdout tasks.
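Promoting a flat render and its depth map to deep data can be sketched as follows: each pixel becomes a single deep sample placed at the depth-map value, which is why the result is far coarser than renderer-generated deep data. Names and values are illustrative:

```python
import numpy as np

rgba = np.zeros((2, 2, 4))
rgba[..., 3] = 1.0                    # dummy fully opaque flat render
depth = np.array([[1.0, 2.0],
                  [3.0, 4.0]])        # matching depth map

# Each deep pixel becomes a list holding one sample: (z, r, g, b, a).
# With only one sample per ray, transparency along the ray is lost.
deep = [
    [(float(depth[y, x]), *rgba[y, x])]
    for y in range(depth.shape[0])
    for x in range(depth.shape[1])
]
```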
Deep images can be composited like regular images. The depth component makes it easy to determine the layering order: traditionally this had to be specified by the user, whereas deep images carry that information themselves and need no user input. Edge artifacts are also reduced, because transparent pixels have more data to work with.
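A minimal sketch of this depth-driven compositing, assuming premultiplied colors and samples stored as (z, r, g, b, a) tuples: the two deep pixels are merged, sorted by depth, and flattened front-to-back with the standard "over" operation, with no layer order supplied by the user.

```python
def flatten(samples):
    """Composite depth-sorted deep samples front-to-back into flat RGBA."""
    r = g = b = a = 0.0
    for z, sr, sg, sb, sa in sorted(samples):  # nearest sample first
        r += sr * (1.0 - a)                    # "over": each sample adds
        g += sg * (1.0 - a)                    # only what the remaining
        b += sb * (1.0 - a)                    # transparency lets through
        a += sa * (1.0 - a)
    return (r, g, b, a)

fg = [(2.0, 0.0, 0.0, 0.5, 0.5)]   # semi-transparent blue-ish, near
bg = [(8.0, 0.3, 0.3, 0.0, 1.0)]   # opaque yellow-ish wall, far

print(flatten(fg + bg))  # → (0.15, 0.15, 0.5, 1.0)
```

Because depth decides the ordering, `flatten(bg + fg)` gives the same result; swapping the inputs does not change the composite, which is the "no user input" property described above.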
Deep images have been available in 3D rendering packages for quite a while. Their use for holdouts was pioneered at several VFX houses in shaders, generating holdout mattes at render time. Using deep data in a more interactive manner is more recent: SideFX integrated it into its Houdini software, and facilities such as Industrial Light & Magic, DreamWorks Animation, Weta Digital, Animal Logic and DRD Studios have implemented interactive solutions.
In 2014 the Academy of Motion Picture Arts and Sciences honored the technology with its annual SciTech Awards, recognizing Dr. Peter Hillman for the long-term development and continued advancement of innovative, robust and complete toolsets for deep compositing, and Colin Doncaster, Johannes Saam, Areito Echevarria, Janne Kontkanen and Chris Cooper for the development, prototyping and promotion of technologies and workflows for deep compositing.