Color layout descriptor explained

In digital image and video processing, a color layout descriptor (CLD) is designed to capture the spatial distribution of color in an image. The feature extraction process consists of two parts: grid-based representative color selection and discrete cosine transform with quantization.

Color is the most basic quality of visual content, so colors can be used to describe and represent an image. The MPEG-7 standard tested several procedures for describing color and selected those that gave the most satisfactory results. The standard proposes different methods to obtain these descriptors; one of the tools it defines for describing color is the CLD, which makes it possible to describe the color relationships within a sequence or group of images.

The CLD captures the spatial layout of the representative colors on a grid superimposed on a region or image. The representation is based on coefficients of the discrete cosine transform (DCT). It is a very compact descriptor and is highly efficient in fast browsing and search applications. It can be applied to still images as well as to video segments.

Definition

The CLD is a very compact and resolution-invariant representation of color for high-speed image retrieval, and it has been designed to efficiently represent the spatial distribution of colors. This feature can be used for a wide variety of similarity-based retrieval, content filtering and visualization. It is especially useful for spatial structure-based retrieval applications. The descriptor is obtained by applying the DCT to a 2-D array of local representative colors in each of the Y, Cb and Cr channels. The main functionality of the CLD is matching:

– Image-to-image matching

– Video clip-to-video clip matching

Note that the CLD is one of the most precise and fastest color descriptors.

Extraction

The extraction process of this color descriptor consists of four stages: image partitioning, representative color selection, DCT transformation and zigzag scanning.

The MPEG-7 standard recommends using the YCbCr color space for the CLD.

Image partitioning

See also: Image segmentation.

In the image partitioning stage, the input picture (in RGB color space) is divided into an 8x8 grid of 64 blocks to guarantee invariance to resolution and scale.
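As a rough illustration of this step, the partitioning can be sketched in Python with NumPy. The function name partition_image and its handling of grid boundaries are assumptions made here for illustration, not part of the standard:

```python
import numpy as np

def partition_image(rgb, grid=8):
    """Split an H x W x 3 RGB array into a grid x grid set of blocks.

    Block boundaries are derived from the image size, so the same 8x8
    grid is obtained regardless of the image resolution.
    """
    h, w, _ = rgb.shape
    rows = np.linspace(0, h, grid + 1, dtype=int)   # row boundaries of the grid
    cols = np.linspace(0, w, grid + 1, dtype=int)   # column boundaries of the grid
    return [[rgb[rows[i]:rows[i + 1], cols[j]:cols[j + 1]]
             for j in range(grid)]
            for i in range(grid)]
```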

Representative color selection

After the image partitioning stage, a single representative color is selected from each block. Any method of selecting the representative color can be applied, but the standard recommends using the average of the pixel colors in a block as the corresponding representative color, since it is simpler and the description accuracy is sufficient in general. The selection results in a tiny image icon of size 8x8.

Once the tiny image icon is obtained, it is converted from RGB to the YCbCr color space.
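Continuing the sketch above, these two steps could look as follows. The block averaging follows the standard's recommendation, while the particular YCbCr conversion (full-range BT.601 weights with a 128 offset) and the function names are assumptions for illustration:

```python
import numpy as np

def representative_colors(blocks):
    """Average the pixels of each block, producing an 8x8x3 RGB icon."""
    return np.array([[block.reshape(-1, 3).mean(axis=0) for block in row]
                     for row in blocks])

def rgb_to_ycbcr(icon):
    """Convert the 8x8 RGB icon to YCbCr (full-range BT.601 assumed here)."""
    r, g, b = icon[..., 0], icon[..., 1], icon[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)
```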

DCT transformation

In this stage, the luminance (Y) and the blue and red chrominance (Cb and Cr) components are each transformed by an 8x8 DCT, so three sets of 64 DCT coefficients are obtained. The DCT of a 2-D array A of size M x N is calculated with the formulas below, where B_{pq} denotes the resulting DCT coefficients:

B_{pq} = \alpha_p \alpha_q \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} A_{mn} \cos\frac{\pi(2m+1)p}{2M} \cos\frac{\pi(2n+1)q}{2N}, \qquad 0 \le p \le M-1,\; 0 \le q \le N-1

\alpha_p = \begin{cases} \frac{1}{\sqrt{M}}, & p = 0 \\ \sqrt{\frac{2}{M}}, & 1 \le p \le M-1 \end{cases}
\qquad
\alpha_q = \begin{cases} \frac{1}{\sqrt{N}}, & q = 0 \\ \sqrt{\frac{2}{N}}, & 1 \le q \le N-1 \end{cases}

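Under the same illustrative assumptions as the sketches above, the transform can be written directly from these formulas. For an 8x8 block this is equivalent to an orthonormal DCT-II applied along each axis (e.g. scipy.fft.dct with norm='ortho'):

```python
import numpy as np

def dct_2d(channel):
    """2-D DCT (type II) of a single channel, following the formula above."""
    M, N = channel.shape
    # Normalization factors alpha_p and alpha_q.
    alpha_p = np.full(M, np.sqrt(2.0 / M))
    alpha_p[0] = 1.0 / np.sqrt(M)
    alpha_q = np.full(N, np.sqrt(2.0 / N))
    alpha_q[0] = 1.0 / np.sqrt(N)
    m = np.arange(M)[:, None]   # spatial row index
    n = np.arange(N)[:, None]   # spatial column index
    p = np.arange(M)[None, :]   # frequency row index
    q = np.arange(N)[None, :]   # frequency column index
    cos_m = np.cos(np.pi * (2 * m + 1) * p / (2 * M))   # shape (M, M)
    cos_n = np.cos(np.pi * (2 * n + 1) * q / (2 * N))   # shape (N, N)
    return alpha_p[:, None] * alpha_q[None, :] * (cos_m.T @ channel @ cos_n)
```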

Zigzag scanning

A zigzag scan is performed on each of the three sets of 64 DCT coefficients. The purpose of the zigzag scan is to group the low-frequency coefficients of the 8x8 matrix at the beginning of the scanned sequence.

Finally, these three sets of zigzag-scanned coefficients constitute the CLD of the input image.
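A small sketch of a JPEG-style zigzag scan of one 8x8 coefficient matrix; the traversal direction on each anti-diagonal follows the conventional JPEG ordering, which is assumed here for illustration:

```python
import numpy as np

def zigzag_scan(block):
    """Read an 8x8 coefficient matrix in zigzag order, low frequencies first."""
    n = block.shape[0]
    # Sort positions by anti-diagonal (i + j); alternate the direction of
    # traversal on each diagonal, as in the JPEG zigzag pattern.
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda ij: (ij[0] + ij[1],
                                   ij[0] if (ij[0] + ij[1]) % 2 else ij[1]))
    return np.array([block[i, j] for i, j in order])
```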

Matching

The matching process evaluates how similar two elements are by comparing them and calculating the distance between them. In the case of color descriptors, the matching process evaluates whether two images are similar. Its procedure is the following:

– Given an image as an input, the application attempts to find an image with a similar descriptor in a database of images.

If we consider two CLDs, {DY_i, DCb_i, DCr_i} and {DY_i', DCb_i', DCr_i'}, the distance between the two descriptors can be computed as:

D = \sqrt{\sum_i w_{yi} \left(DY_i - DY_i'\right)^2} + \sqrt{\sum_i w_{bi} \left(DCb_i - DCb_i'\right)^2} + \sqrt{\sum_i w_{ri} \left(DCr_i - DCr_i'\right)^2}
The subscript i represents the zigzag-scanning order of the coefficients. Furthermore, notice that it is possible to weight the coefficients (w) in order to adjust the performance of the matching process. These weights let us give some components of the descriptor more importance than others. From the formula, it follows that:

– Two images are identical if the distance is 0

– Two images are similar if the distance is close to 0

Therefore, this matching process makes it possible to identify images with similar color descriptors. Since the complexity of the similarity matching shown above is low, high-speed image matching can be achieved.
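A minimal sketch of this distance computation in Python; the function name cld_distance, the dictionary layout of the descriptors and the weight arrays are illustrative assumptions:

```python
import numpy as np

def cld_distance(cld1, cld2, wy, wb, wr):
    """Weighted distance between two CLDs, following the formula above.

    cld1 and cld2 are dicts holding the zigzag-ordered 'Y', 'Cb' and 'Cr'
    coefficients; wy, wb and wr are the per-coefficient weight arrays.
    """
    def term(a, b, w):
        return np.sqrt(np.sum(np.asarray(w) * (np.asarray(a) - np.asarray(b)) ** 2))
    return (term(cld1['Y'],  cld2['Y'],  wy) +
            term(cld1['Cb'], cld2['Cb'], wb) +
            term(cld1['Cr'], cld2['Cr'], wr))
```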
