VC-6
Organization: SMPTE
Domain: Video compression
SMPTE ST 2117-1,[1] informally known as VC-6, is a video coding format.[2]
The VC-6 codec is optimized for intermediate, mezzanine or contribution coding applications. Typically, these involve compressing finished compositions for editing, contribution, primary distribution, archiving and other workflows where image quality must be preserved as close to the original as possible while reducing bitrates and optimizing processing, power and storage requirements. VC-6, like other codecs in this category,[3][4] uses only intra-frame compression, where each frame is stored independently and can be decoded with no dependencies on any other frame.[5] The codec supports both lossless and lossy compression, depending on the selected encoding parameters. It was standardized in 2020. Earlier variants of the codec have been deployed by V-Nova since 2015 under the trade name Perseus. The codec is based on hierarchical data structures called s-trees, and does not involve DCT or wavelet transform compression. The compression mechanism is independent of the data being compressed, and can be applied to pixels as well as to other, non-image data.[6]
Unlike DCT-based codecs, VC-6 is based on hierarchical, repeatable s-tree structures that are similar to modified quadtrees. These simple structures provide intrinsic capabilities, such as massive parallelism[7] and the ability to choose the type of filtering used to reconstruct higher-resolution images from lower-resolution images. The VC-6 standard also provides an up-sampler developed with an in-loop Convolutional Neural Network to optimize the detail in the reconstructed image without requiring a large computational overhead. The ability to navigate spatially within the VC-6 bitstream at multiple levels also allows decoding devices to apply more resources to different regions of the image, so that Region-of-Interest applications can operate on compressed bitstreams without requiring a decode of the full-resolution image.[8]
At the NAB Show in 2015, V-Nova claimed "2x–3x average compression gains, at all quality levels, under practical real-time operating scenarios versus H.264, HEVC and JPEG2000".[9] Making this announcement on 1 April before a major trade show attracted the attention of many compression experts.[10] Since then, V-Nova have deployed and licensed the technology, known at the time as Perseus, in both contribution and distribution applications around the world, including Sky Italia,[11] Fast Filmz,[12][13] Harmonic Inc. and others. A variant of the technology optimized for enhancing distribution codecs will soon be standardized as MPEG-5 Part 2 (LCEVC).[14][15][16]
The standard describes a compression algorithm that is applied to independent planes of data. These planes might be RGB or RGBA pixels originating in a camera, YCbCr pixels from a conventional TV-centric video source, or planes of some other data. There may be up to 255 independent planes of data, and each plane can have a grid of data values of dimensions up to 65535 × 65535.[17] The SMPTE ST 2117-1 standard focuses on compressing planes of data values, typically pixels. To compress and decompress the data in each plane, VC-6 uses hierarchical representations built from small tree-like structures that carry metadata used to predict other trees. Three fundamental structures are repeated in each plane.
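As a rough, non-normative illustration of this layout, the sketch below models a picture as a collection of independent planes; the class and field names are hypothetical and are not taken from the standard.

```python
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class Plane:
    """One independent plane of data values (e.g. Y, Cb, Cr, R, G, B or A)."""
    width: int           # the standard allows up to 65535
    height: int          # the standard allows up to 65535
    values: np.ndarray   # 2-D grid of data values, shape (height, width)

    def __post_init__(self):
        if not (1 <= self.width <= 65535 and 1 <= self.height <= 65535):
            raise ValueError("plane dimensions must be in the range 1..65535")


@dataclass
class Picture:
    """A single picture: up to 255 independently compressed planes."""
    planes: List[Plane]

    def __post_init__(self):
        if not (1 <= len(self.planes) <= 255):
            raise ValueError("a picture carries between 1 and 255 planes")
```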
The core compression structure in VC-6 is the s-tree. It is similar to the quadtree structure common in other schemes. An s-tree is composed of nodes arranged in a tree structure, where each node links to four nodes in the next layer. The total number of layers above the root node is known as the rise of the s-tree. Compression is achieved in an s-tree by using metadata to signal whether levels can be predicted, with enhancement data carried selectively in the bitstream. The more data that can be predicted, the less information needs to be sent and the better the compression ratio.
The standard defines a tableau as the root node, or highest layer, of an s-tree that contains nodes for another s-tree. Like the generic s-trees from which they are constructed, tableaux are arranged in layers, with metadata in the nodes indicating whether higher layers are predicted or transmitted in the bitstream.
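A minimal sketch of these two structures, assuming a simple in-memory model; the class and field names are illustrative rather than taken from the standard, and the bitstream encoding of the metadata is not shown.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class SNode:
    """One node of an s-tree; each non-leaf node links to four nodes in the next layer."""
    children: List["SNode"] = field(default_factory=list)  # empty (leaf) or exactly four
    predicted: bool = False            # metadata: this part of the tree is predicted, not transmitted
    enhancement: Optional[int] = None  # enhancement value carried when the node is not predicted


def rise(root: SNode) -> int:
    """The rise of an s-tree: the total number of layers above the root node."""
    if not root.children:
        return 0
    return 1 + max(rise(child) for child in root.children)


@dataclass
class Tableau:
    """An s-tree whose nodes describe further s-trees.

    As with plain s-trees, node metadata indicates whether higher layers are
    predicted by the decoder or transmitted in the bitstream.
    """
    structure: SNode           # the tableau's own metadata s-tree
    child_trees: List[SNode]   # the s-trees that this tableau indexes
```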
The hierarchical s-tree and tableau structures in the standard are used to carry enhancements (called resid-vals) and other metadata to reduce the amount of raw data that needs to be carried in the bitstream payload. The final hierarchical tool is the ability to arrange the tableaux so that data from each plane (i.e. pixels) can be dequantized at different resolutions and used as predictors for higher resolutions. Each of these resolutions is defined by the standard as an echelon. Each echelon within a plane is identified by an index, where a more negative index indicates a lower resolution and a larger, more positive index indicates a higher resolution.
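For example, assuming each echelon halves the width and height of the one above it (an illustrative choice only; the actual pyramid is governed by the bitstream parameters), echelon indices map to resolutions roughly as follows.

```python
def echelon_dimensions(full_width: int, full_height: int, index: int) -> tuple:
    """Illustrative resolution of the echelon at a given non-positive index,
    assuming a factor-of-two pyramid with full resolution at index 0."""
    if index > 0:
        raise ValueError("echelon indices in this sketch run from a negative value up to 0")
    factor = 2 ** (-index)
    return max(1, full_width // factor), max(1, full_height // factor)


# A 1920x1080 plane with echelons -3..0 in this sketch:
for idx in range(-3, 1):
    print(idx, echelon_dimensions(1920, 1080, idx))
# -3 (240, 135), -2 (480, 270), -1 (960, 540), 0 (1920, 1080)
```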
VC-6 is an example of intra-frame coding, where each picture is coded without referencing other pictures. It is also intra-plane: no information from one plane is used to predict another plane. As a result, the VC-6 bitstream contains all of the information for all of the planes of a single image. An image sequence is created by concatenating the bitstreams for multiple images, or by packaging them in a container such as MXF, QuickTime or Matroska.
The VC-6 bitstream is defined in the standard by pseudocode, and a reference decoder has been demonstrated based on that definition. The primary header is the only fixed structure defined by the standard. The secondary header contains marker and sizing information that depends on the values in the primary header. The tertiary header is entirely calculated, and the payload structure is then derived from the parameters calculated during header decoding.
The standard defines a process called plane reconstruction for decoding images from a bitstream. The process starts with the echelon having the lowest index, for which no predictions are used. First, the bitstream rules are used to reconstruct residuals. Next, desparsification and entropy decoding processes are performed to fill the grid with data values at each coordinate. These values are then dequantised to create full-range values that can be used as predictions for the echelon with the next highest index. Each echelon uses the upsampler specified in the header to create a predicted plane from the echelon below; this prediction is added to the residual grid of the current echelon, and the result can itself be upsampled as a prediction for the next echelon.[18]
The final, full-resolution echelon defined by the standard is at index 0; its result is displayed rather than used as a prediction for another echelon.
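A compact sketch of this reconstruction loop is shown below. It assumes the residuals for every echelon have already been entropy-decoded, desparsified and dequantised into dense grids, that each echelon at most doubles the resolution of the one below it, and it uses a nearest-neighbour upsampler as a stand-in for whichever upsampler the header actually signals. All names are illustrative.

```python
import numpy as np


def upsample_nearest(plane: np.ndarray, shape: tuple) -> np.ndarray:
    """Nearest-neighbour 2x upsampling, cropped to the target (height, width)."""
    doubled = np.repeat(np.repeat(plane, 2, axis=0), 2, axis=1)
    return doubled[: shape[0], : shape[1]]


def reconstruct_plane(residuals_by_echelon: dict) -> np.ndarray:
    """Illustrative plane reconstruction.

    residuals_by_echelon maps each echelon index (most negative = lowest
    resolution, 0 = full resolution) to its dequantised residual grid.
    """
    indices = sorted(residuals_by_echelon)       # e.g. [-3, -2, -1, 0]
    current = residuals_by_echelon[indices[0]]   # lowest echelon: no prediction is used
    for idx in indices[1:]:
        residual = residuals_by_echelon[idx]
        prediction = upsample_nearest(current, residual.shape)  # predict from the echelon below
        current = prediction + residual          # add this echelon's residuals
    return current                               # echelon 0: the full-resolution result
```

In a real decoder the residual grids come from the desparsification, entropy-decoding and dequantisation steps described above, and the upsampler is the one selected in the header.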
The standard defines a number of basic upsamplers[19] to create higher-resolution predictions from lower-resolution echelons. There are two linear upsamplers, bicubic and sharp, and a nearest-neighbour upsampler.
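A linear upsampler of this kind can be sketched as a separable filter applied after zero insertion. The example below shows the general mechanism with a simple interpolation kernel; it does not reproduce the actual bicubic or sharp coefficients, which are defined in the standard.

```python
import numpy as np


def upsample_separable(plane: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Generic 2x separable upsampler: insert zeros between samples, then filter
    the rows and columns with the given 1-D kernel."""
    h, w = plane.shape
    up = np.zeros((h * 2, w * 2), dtype=float)
    up[::2, ::2] = plane  # zero insertion
    up = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, up)
    up = np.apply_along_axis(lambda col: np.convolve(col, kernel, mode="same"), 0, up)
    return up


# A simple linear-interpolation kernel, used here only as a placeholder:
bilinear_kernel = np.array([0.5, 1.0, 0.5])
```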
Six different non-linear upsamplers are defined by a set of processes and coefficients that are provided in JSON format. These coefficients were generated using Convolutional Neural Network[20] techniques.
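The JSON schema for these coefficients is defined by the standard and is not reproduced here; the fragment below only illustrates the general idea of loading externally supplied kernels and applying one of them as a small 2-D filter, using an entirely hypothetical JSON layout.

```python
import json

import numpy as np


def load_kernels(json_text: str) -> list:
    """Load a list of 2-D kernels from a JSON document.
    The field names used here are hypothetical, not the standard's schema."""
    doc = json.loads(json_text)
    return [np.array(layer["weights"], dtype=float) for layer in doc["layers"]]


def apply_kernel(plane: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """'Same'-size 2-D filtering (cross-correlation) with zero padding, illustrative only."""
    kh, kw = kernel.shape
    padded = np.pad(plane, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(plane.shape, dtype=float)
    for y in range(plane.shape[0]):
        for x in range(plane.shape[1]):
            out[y, x] = np.sum(padded[y : y + kh, x : x + kw] * kernel)
    return out
```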