Distributed source coding (DSC) is an important problem in information theory and communication. DSC problems regard the compression of multiple correlated information sources that do not communicate with each other.[1] By modeling the correlation between multiple sources at the decoder side together with channel codes, DSC is able to shift the computational complexity from the encoder side to the decoder side, and therefore provides an appropriate framework for applications with complexity-constrained senders, such as sensor networks and video/multimedia compression (see distributed video coding[2]). One of the main properties of distributed source coding is that the computational burden of the encoders is shifted to the joint decoder.
In 1973, David Slepian and Jack Keil Wolf proposed the information theoretical lossless compression bound on distributed compression of two correlated i.i.d. sources X and Y.[3] After that, this bound was extended to cases with more than two sources by Thomas M. Cover in 1975,[4] while the theoretical results in the lossy compression case are presented by Aaron D. Wyner and Jacob Ziv in 1976.[5]
Although the theorems on DSC were proposed in the 1970s, it took about 30 years before attempts were made at practical techniques, based on the idea, proposed in 1974 by Aaron D. Wyner, that DSC is closely related to channel coding.[6] The asymmetric DSC problem was addressed by S. S. Pradhan and K. Ramchandran in 1999; their work focused on statistically dependent binary and Gaussian sources and used scalar and trellis coset constructions to solve the problem.[7] They further extended the work to the symmetric DSC case.[8]
Syndrome decoding technology was first used in distributed source coding by the DISCUS system of S. S. Pradhan and K. Ramchandran (Distributed Source Coding Using Syndromes).[7] They compress binary block data from one source into syndromes and transmit data from the other source uncompressed as side information. This kind of DSC scheme achieves asymmetric compression rates per source and results in asymmetric DSC. This asymmetric DSC scheme can be easily extended to the case of more than two correlated information sources. There are also some DSC schemes that use parity bits rather than syndrome bits.
The correlation between two sources in DSC has been modeled as a virtual channel, which is usually referred to as a binary symmetric channel.[9] [10]
Starting from DISCUS, DSC has attracted significant research activity and more sophisticated channel coding techniques have been adopted into DSC frameworks, such as Turbo Code, LDPC Code, and so on.
Similar to the lossless coding framework based on the Slepian–Wolf theorem, efforts have been made on lossy cases based on the Wyner–Ziv theorem. Theoretical results on quantizer design were provided by R. Zamir and S. Shamai,[11] and different frameworks have been proposed based on this result, including a nested lattice quantizer and a trellis-coded quantizer.
Moreover, DSC has been used in video compression for applications which require low complexity video encoding, such as sensor networks, multiview video camcorders, and so on.[12]
With deterministic and probabilistic discussions of the correlation model of two correlated information sources, DSC schemes with more general compression rates have been developed.[13] [14] [15] In these non-asymmetric schemes, both correlated sources are compressed.
Under a certain deterministic assumption of correlation between information sources, a DSC framework in which any number of information sources can be compressed in a distributed way has been demonstrated by X. Cao and M. Kuijper.[16] This method performs non-asymmetric compression with flexible rates for each source, achieving the same overall compression rate as repeatedly applying asymmetric DSC for more than two sources. Then, by investigating the unique connection between syndromes and complementary codewords of linear codes, they have translated the major steps of DSC joint decoding into syndrome decoding followed by channel encoding via a linear block code and also via its complement code,[17] which theoretically illustrated a method of assembling a DSC joint decoder from linear code encoders and decoders.
The information theoretical lossless compression bound on DSC (the Slepian–Wolf bound) was first proposed by David Slepian and Jack Keil Wolf in terms of entropies of correlated information sources in 1973.[3] They also showed that two isolated sources can compress data as efficiently as if they were communicating with each other. This bound has been extended to the case of more than two correlated sources by Thomas M. Cover in 1975.[4]
Similar results were obtained in 1976 by Aaron D. Wyner and Jacob Ziv with regard to lossy coding of joint Gaussian sources.[5]
Distributed coding is the coding of two or more dependent sources with separate encoders and a joint decoder. Given two statistically dependent i.i.d. finite-alphabet random sequences $X$ and $Y$, the Slepian–Wolf theorem gives a theoretical bound for the lossless coding rate of distributed coding of the two sources as below:[3]
$$R_X \geq H(X|Y),$$
$$R_Y \geq H(Y|X),$$
$$R_X + R_Y \geq H(X,Y).$$
If both the encoder and the decoder of the two sources are independent, the lowest rates we can achieve for lossless compression are $H(X)$ and $H(Y)$ for $X$ and $Y$ respectively, where $H(X)$ and $H(Y)$ are the entropies of $X$ and $Y$. However, with joint decoding, if a vanishing error probability for long sequences is accepted, the Slepian–Wolf theorem shows that a much better compression rate can be achieved: as long as the total rate of $X$ and $Y$ is larger than their joint entropy $H(X,Y)$, and neither source is encoded at a rate below its conditional entropy given the other, distributed coding can achieve an arbitrarily small error probability for long sequences.
A special case of distributed coding is compression with decoder side information, where source $Y$ is available at the decoder side but not accessible at the encoder side. This can be treated as the condition that the rate $R_Y = H(Y)$ has already been used for $Y$, while we intend to achieve the rate $H(X|Y)$ for $X$.
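To make the bound concrete, the following Python sketch (the joint distribution and all names here are illustrative assumptions, not taken from [3]) computes the entropies that define the Slepian–Wolf region for a toy pair of correlated binary sources:

```python
import numpy as np

# Hypothetical joint pmf p(x, y) of two correlated binary sources
# (rows index x, columns index y); the values are an assumption for illustration.
p_xy = np.array([[0.45, 0.05],
                 [0.05, 0.45]])

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

H_xy = entropy(p_xy.flatten())      # joint entropy H(X,Y)
H_x = entropy(p_xy.sum(axis=1))     # marginal entropy H(X)
H_y = entropy(p_xy.sum(axis=0))     # marginal entropy H(Y)
H_x_given_y = H_xy - H_y            # conditional entropy H(X|Y)
H_y_given_x = H_xy - H_x            # conditional entropy H(Y|X)

# Slepian-Wolf region: R_X >= H(X|Y), R_Y >= H(Y|X), R_X + R_Y >= H(X,Y)
print(f"H(X) = {H_x:.3f}, H(Y) = {H_y:.3f}, H(X,Y) = {H_xy:.3f}")
print(f"Separate coding needs about {H_x + H_y:.3f} bits per symbol pair,")
print(f"while the Slepian-Wolf bound allows {H_xy:.3f} bits per symbol pair.")
```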
Shortly after the Slepian–Wolf theorem on lossless distributed compression was published, the extension to lossy compression with decoder side information was proposed as the Wyner–Ziv theorem.[5] Similarly to the lossless case, two statistically dependent i.i.d. sources $X$ and $Y$ are given, where $Y$ is available at the decoder side but not accessible at the encoder side.
The Wyner–Ziv theorem presents the achievable lower bound for the bit rate of $X$ at a given distortion $D$. It was found that for Gaussian memoryless sources and mean-squared error distortion, the lower bound for the bit rate of $X$ remains the same whether or not the side information is available at the encoder.
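For the jointly Gaussian case with mean-squared error distortion (a standard result stated here as an illustration; the notation $\sigma^2_{X|Y}$ for the conditional variance of $X$ given $Y$ is an assumption of this sketch), the Wyner–Ziv rate coincides with the conditional rate–distortion function:

$$R_{WZ}(D) = \frac{1}{2}\log_2^{+}\!\left(\frac{\sigma^2_{X|Y}}{D}\right), \qquad \log_2^{+}(z) = \max\{\log_2 z,\, 0\},$$

so in this case there is no rate loss relative to the situation in which the side information is also available at the encoder.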
The correlation between the sources can be described with either a deterministic model, in which, for example, the Hamming distance between the sources is bounded, or a probabilistic model, in which the correlation is described by a channel such as a binary symmetric channel.
Asymmetric DSC means that different bitrates are used in coding the input sources, while the same bitrate is used in symmetric DSC. Taking a DSC design with two sources as an example, $X$ and $Y$ are two discrete, memoryless, uniformly distributed sources which generate sets of variables $x$ and $y$, each of length 7 bits, such that the Hamming distance between $x$ and $y$ is at most one. The Slepian–Wolf bound for them is:

$$R_X + R_Y \geq 10,$$
$$R_X \geq 3,$$
$$R_Y \geq 3.$$
This means the theoretical bound is $R_X + R_Y = 10$, and symmetric DSC means 5 bits for each source. Other pairs with $R_X + R_Y = 10$ are asymmetric cases of the rate distribution between $X$ and $Y$, where $R_X = 3$, $R_Y = 7$ and $R_Y = 3$, $R_X = 7$ represent the two extreme cases, known as decoding with side information.
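To see where these numbers come from (a short derivation assuming the single-bit difference pattern is uniform over its 8 possibilities):

$$H(X) = 7, \qquad H(Y \mid X) = \log_2 8 = 3, \qquad H(X,Y) = H(X) + H(Y \mid X) = 10,$$
$$H(X \mid Y) = H(X,Y) - H(Y) = 10 - 7 = 3 \quad (\text{by symmetry } H(Y) = 7),$$

so the corner points of the Slepian–Wolf region are exactly $(R_X, R_Y) = (7, 3)$ and $(3, 7)$.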
It was understood in 1974 that Slepian–Wolf coding is closely related to channel coding,[6] and after about 30 years practical DSC schemes began to be implemented using different channel codes. The motivation behind the use of channel codes comes from the two-source case: the correlation between the input sources can be modeled as a virtual channel whose input is source $X$ and whose output is source $Y$.
The basic framework of syndrome-based DSC is that, for each source, its input space is partitioned into several cosets according to the particular channel coding method used. Every input of each source is mapped to an output indicating which coset the input belongs to, and the joint decoder can decode all inputs from the received coset indices and the dependence between the sources. The design of the channel codes should take the correlation between the input sources into account.
A group of codes can be used to generate coset partitions,[18] such as trellis codes and lattice codes. Pradhan and Ramchandran designed rules for constructing sub-codes for each source, and presented results of trellis-based coset constructions in DSC, based on convolutional codes and set-partitioning rules as in trellis modulation, as well as lattice-code-based DSC.[7] [8] After this, an embedded trellis code was proposed for asymmetric coding as an improvement over their results.[19]
After the DISCUS system was proposed, more sophisticated channel codes have been adapted to the DSC setting, such as Turbo codes, LDPC codes and iterative channel codes. The encoders of these codes are usually simple and easy to implement, while the decoders have much higher computational complexity and are able to achieve good performance by utilizing the source statistics. With sophisticated channel codes whose performance approaches the capacity of the correlation channel, the corresponding DSC system can approach the Slepian–Wolf bound.
Although most research has focused on DSC with two dependent sources, Slepian–Wolf coding has been extended to the case of more than two input sources, and methods for generating sub-codes from a single channel code have been proposed by V. Stankovic, A. D. Liveris, et al. for particular correlation models.[20]
Theorem: Any pair of correlated uniformly distributed sources, $X, Y \in \{0,1\}^n$, with $d_H(X, Y) \leq t$, can be compressed separately at a rate pair $(R_1, R_2)$ such that $R_1, R_2 \geq n-k$ and $R_1 + R_2 \geq 2n-k$, where $R_1$ and $R_2$ are integers and $k \leq n - \log\left(\sum_{i=0}^{t} \binom{n}{i}\right)$. This can be achieved using an $(n, k, 2t+1)$ binary linear code.
Proof: The Hamming bound for an $(n, k, 2t+1)$ binary linear code is $k \leq n - \log\left(\sum_{i=0}^{t} \binom{n}{i}\right)$, so such a code exists. Let $C$ be such a code, with a $k \times n$ generator matrix $G$.

Let $R_1 + R_2 = 2n - k$. Construct the generator matrix $G_1$ from the first $(n - R_1)$ rows of $G$ and the generator matrix $G_2$ from the remaining $(n - R_2)$ rows of $G$. Let $C_1$ and $C_2$ be the subcodes generated by $G_1$ and $G_2$, with parity-check matrices $H_1$ and $H_2$ respectively.
For a pair of inputs $(x, y)$, the encoders output the syndromes $s_1 = H_1 x$ and $s_2 = H_2 y$. Accordingly, $x$ and $y$ can be expressed as $x = u_1 G_1 + c_{s_1}$ and $y = u_2 G_2 + c_{s_2}$, where $c_{s_1}, c_{s_2}$ are coset representatives corresponding to the syndromes $s_1, s_2$ with respect to $C_1, C_2$ respectively. Since $y = x + e$ with $w(e) \leq t$, we have $x + y = uG + c_s = e$, where $u = \left[u_1, u_2\right]$ and $c_s = c_{s_1} + c_{s_2}$.
Suppose there were two different input pairs with the same syndromes; that would mean there are two different strings $u^1, u^2 \in \{0,1\}^k$ such that $u^1 G + c_s = e^1$ and $u^2 G + c_s = e^2$. Since $u^1 \neq u^2$, $(u^1 - u^2)G$ is a nonzero codeword of $C$, and because the minimum distance of $C$ is $2t+1$, the distance between the codewords $u^1 G$ and $u^2 G$ is $\geq 2t+1$. On the other hand, $w(e^1) \leq t$ and $w(e^2) \leq t$ give $d(u^1 G, c_s) \leq t$ and $d(u^2 G, c_s) \leq t$, so by the triangle inequality $d(u^1 G, u^2 G) \leq 2t$, which contradicts $d(u^1 G, u^2 G) \geq 2t+1$.
Therefore, the two dependent sources can be successfully compressed with subcodes constructed from an $(n, k, 2t+1)$ binary linear code, at a rate pair $(R_1, R_2)$ such that $R_1, R_2 \geq n-k$ and $R_1 + R_2 \geq 2n-k$, where $R_1$ and $R_2$ are integers and $k \leq n - \log\left(\sum_{i=0}^{t} \binom{n}{i}\right)$.
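For instance, instantiating the theorem with $n = 7$ and $t = 1$ (the $(7,4,3)$ Hamming code used in the examples below) gives

$$k \leq 7 - \log_2\!\left(\binom{7}{0} + \binom{7}{1}\right) = 7 - 3 = 4,$$

so the bound is met with equality, and the achievable rate pairs satisfy $R_1, R_2 \geq n - k = 3$ and $R_1 + R_2 \geq 2n - k = 10$.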
Taking the same example as in the previous Asymmetric DSC vs. Symmetric DSC part, this part presents the corresponding DSC schemes with coset codes and syndromes, covering the asymmetric case and the symmetric case. The Slepian–Wolf bound for this DSC design is given in the previous part.
In the case where $R_X = 3$ and $R_Y = 7$, the full 7 bits of $y$ are transmitted, so $Y$ is available at the decoder as side information, while $x$ is compressed into 3 bits. At the decoder, $x$ is recovered from the 3 received bits together with $y$, exploiting the fact that $x$ and $y$ differ in at most one bit position; in effect, source $X$ is compressed with the help of $y$ even though the encoder of $X$ never sees $y$.
In this example, we can use the $(7,4,3)$ binary Hamming code $C$ with parity-check matrix $H$. For an input $x$ from source $X$, only the syndrome given by $s = Hx$ is transmitted, which is 3 bits. With the received $y$ and $s$, suppose there were two inputs $x_1$ and $x_2$ with the same syndrome $s$; then $Hx_1 = Hx_2$, which gives $H(x_1 - x_2) = 0$, so $x_1 - x_2$ is a codeword. Since the minimum Hamming distance of the $(7,4,3)$ code is 3, $d_H(x_1, x_2) \geq 3$. Therefore the input $x$ can be recovered uniquely, because $d_H(x, y) \leq 1$ and each coset with syndrome $s$ contains at most one word within Hamming distance 1 of $y$.
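The following Python sketch illustrates this asymmetric scheme; the particular parity-check matrix and the helper names are assumptions made for the illustration rather than choices fixed by the text:

```python
import numpy as np

# Parity-check matrix of the (7,4,3) Hamming code (one common choice;
# the text does not fix a particular H, so this is an assumption).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)

def syndrome(H, v):
    """Syndrome of v with respect to H, over GF(2)."""
    return H.dot(v) % 2

def encode_x(x):
    """Asymmetric DSC encoder for X: transmit only the 3-bit syndrome."""
    return syndrome(H, x)

def decode_x(s, y):
    """Joint decoder: recover x from its syndrome s and side information y,
    using the assumption d_H(x, y) <= 1."""
    # e = x + y has weight <= 1 and syndrome H e = s + H y.
    se = (s + syndrome(H, y)) % 2
    if not se.any():
        return y.copy()                  # e = 0, so x = y
    # Columns of H are distinct and nonzero, so the syndrome of a
    # single-bit error identifies the erroneous position uniquely.
    for i in range(7):
        if np.array_equal(H[:, i], se):
            e = np.zeros(7, dtype=int)
            e[i] = 1
            return (y + e) % 2
    raise ValueError("side information differs from x in more than one bit")

# Example: x and y differ in exactly one (randomly chosen) position.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 7)
y = x.copy()
y[rng.integers(7)] ^= 1

s = encode_x(x)            # 3 bits sent for X; Y sent uncompressed (7 bits)
assert np.array_equal(decode_x(s, y), x)
print("x recovered from 3-bit syndrome and side information y:", decode_x(s, y))
```

The decoder exploits the fact that the syndrome of the difference $e = x + y$ identifies the single differing position, exactly as in ordinary Hamming decoding.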
Similarly, the bit distribution with $R_X = 7$ and $R_Y = 3$ can be achieved by reversing the roles of $X$ and $Y$.
In the symmetric case, what we want is an equal bitrate for the two sources: 5 bits each, with separate encoders and a joint decoder. We still use linear codes for this system, as in the asymmetric case. The basic idea is similar, but in this case we need to do the coset partitioning for both sources, and for a pair of received syndromes (which corresponds to a pair of cosets), only one pair of input variables is possible given the correlation between the two sources.
Let $C_1$ and $C_2$ be the two codes used for the two sources, with transmitted syndromes $s_1 = H_1 x$ and $s_2 = H_2 y$. Suppose two pairs $(x_1, y_1)$ and $(x_2, y_2)$ produced the same pair of syndromes, i.e. $H_1 x_1 = H_1 x_2$ and $H_2 y_1 = H_2 y_2$. Because the Hamming distance between corresponding inputs is at most 1 (with $w$ denoting Hamming weight), $y_1 = x_1 + e_1$ with $w(e_1) \leq 1$ and $y_2 = x_2 + e_2$ with $w(e_2) \leq 1$. Thus:

$$x_1 + x_2 \in C_1,$$
$$y_1 + y_2 = x_1 + x_2 + e_3 \in C_2,$$

where $e_3 = e_2 + e_1$ and $w(e_3) \leq 2$. Since $C_1$ and $C_2$ are both subcodes of the $(7,4,3)$ code $C$, $e_3 = (x_1 + x_2) + (y_1 + y_2) \in C$; but the minimum Hamming weight of a nonzero codeword of $C$ is 3, so $e_3 = 0$. Then $x_1 + x_2 = y_1 + y_2 \in C_1 \cap C_2 = \{0\}$ (the intersection is trivial because $G_1$ and $G_2$ consist of disjoint sets of linearly independent rows of $G$), hence $(x_1, y_1) = (x_2, y_2)$, so each pair of syndromes identifies a unique input pair.
The two codes $C_1$ and $C_2$ are constructed as subcodes of the $(7,4,3)$ Hamming code and thus have minimum distance 3. Given the generator matrix $G$ of the Hamming code, the generator matrix $G_1$ for $C_1$ is constructed by taking the first two rows of $G$, and $G_2$ is constructed from the remaining two rows of $G$. The corresponding $(5 \times 7)$ parity-check matrices $H_1$ and $H_2$ then produce the 5-bit syndromes.
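A brute-force sketch of this symmetric scheme is given below; the systematic generator matrix, the derived parity-check matrices $H_1$ and $H_2$, and all function names are assumptions made for illustration, and the decoder simply searches for the unique pair consistent with both 5-bit syndromes:

```python
import itertools
import numpy as np

# Generator matrix of a (7,4,3) Hamming code in systematic form (an assumed,
# standard choice; the text does not pin down a specific G).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)

# C1 is generated by the first two rows of G, C2 by the last two rows.
# H1 and H2 below were computed as bases of the dual codes of C1 and C2.
H1 = np.array([[0, 0, 1, 0, 0, 0, 0],
               [0, 0, 0, 1, 0, 0, 0],
               [1, 1, 0, 0, 1, 0, 0],
               [1, 0, 0, 0, 0, 1, 0],
               [0, 1, 0, 0, 0, 0, 1]], dtype=int)
H2 = np.array([[1, 0, 0, 0, 0, 0, 0],
               [0, 1, 0, 0, 0, 0, 0],
               [0, 0, 0, 1, 1, 0, 0],
               [0, 0, 1, 1, 0, 1, 0],
               [0, 0, 1, 1, 0, 0, 1]], dtype=int)

def syn(H, v):
    return H.dot(v) % 2

def joint_decode(s1, s2):
    """Brute-force joint decoder: find the unique pair (x, y) with
    d_H(x, y) <= 1 whose 5-bit syndromes match (s1, s2)."""
    error_patterns = [np.zeros(7, dtype=int)]
    for i in range(7):
        e = np.zeros(7, dtype=int)
        e[i] = 1
        error_patterns.append(e)
    for bits in itertools.product([0, 1], repeat=7):
        x = np.array(bits, dtype=int)
        if not np.array_equal(syn(H1, x), s1):
            continue
        for e in error_patterns:
            y = (x + e) % 2
            if np.array_equal(syn(H2, y), s2):
                return x, y
    raise ValueError("no consistent pair found")

# Example: correlated 7-bit words differing in one position.
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 7)
y = x.copy()
y[rng.integers(7)] ^= 1

s1, s2 = syn(H1, x), syn(H2, y)      # 5 bits sent per source
xr, yr = joint_decode(s1, s2)
assert np.array_equal(xr, x) and np.array_equal(yr, y)
print("recovered x:", xr, "recovered y:", yr)
```

A practical decoder would replace the exhaustive search with syndrome decoding of the subcodes, but the search makes the uniqueness argument above directly visible.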
In general, a Wyner–Ziv coding scheme is obtained by adding a quantizer and a de-quantizer to the Slepian–Wolf coding scheme. Therefore, a Wyner–Ziv coder design could focus on the quantizer and the corresponding reconstruction method. Several quantizer designs have been proposed, such as a nested lattice quantizer,[21] a trellis-coded quantizer,[22] and Lloyd quantization methods.[23]
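As a toy illustration of the nested-quantization idea (a one-dimensional sketch with assumed parameters, not the lattice or trellis designs of [21][22]), the encoder below sends only the index of its fine quantization cell modulo $M$, and the decoder resolves the remaining ambiguity using the side information:

```python
import numpy as np

# Toy modulo (nested) scalar quantizer for Wyner-Ziv style coding.
# Step size, modulus and the correlation model are illustrative assumptions.
DELTA = 0.5          # fine quantizer step size
M = 8                # number of cosets; the transmitted index takes 3 bits

def encode(x):
    """Quantize x finely and transmit only the coset index (fine index mod M)."""
    q = int(np.round(x / DELTA))
    return q % M

def decode(idx, y):
    """Pick, among the reconstruction levels in the received coset,
    the one closest to the side information y."""
    base = int(np.round(y / DELTA))
    # search a window of candidate fine indices around y's quantization cell
    candidates = [k for k in range(base - M, base + M + 1) if k % M == idx]
    best = min(candidates, key=lambda k: abs(k * DELTA - y))
    return best * DELTA

rng = np.random.default_rng(2)
y_val = rng.normal(0.0, 1.0)             # side information at the decoder
x_val = y_val + rng.normal(0.0, 0.1)     # source correlated with y
x_hat = decode(encode(x_val), y_val)
print(f"x = {x_val:.3f}, reconstruction = {x_hat:.3f}")
```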
Unfortunately, the above quantizer-based approaches do not scale (in design or operational complexity requirements) to sensor networks of large sizes, a scenario where distributed compression is most helpful. If there are N sources transmitting at R bits each (with some distributed coding scheme), the number of possible reconstructions scales as $2^{NR}$. To cope with this, an approach has been proposed in which design and operational complexity are traded against decoder performance by retaining only a subset of the received bits at the decoder.
The central idea is the presence of a bit-subset selector, which maintains a certain subset of the received bits ($NR$ bits, in the above example) for each source. Let $\mathcal{B} = 2^{\{1,\ldots,NR\}}$ denote the set of all subsets of the received bits. Then the bit-subset selector is defined as the mapping $\mathcal{S}: \{1,\ldots,N\} \to \mathcal{B}$.
Note that each choice of the bit-subset selector imposes a storage requirement $C$ that is exponential in the cardinality of the set of chosen bits:

$$C = \sum_{n=1}^{N} 2^{|\mathcal{S}(n)|}.$$
This allows a judicious choice of bits that minimizes the distortion, given the constraints on decoder storage. Additional limitations on the set of allowable subsets are still needed. The effective cost function that needs to be minimized is a weighted sum of distortion and decoder storage:

$$J = D + \lambda C.$$
The system design is performed by iteratively (and incrementally) optimizing the encoders, the decoder and the bit-subset selector until convergence.
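A minimal numerical sketch of this cost (the selector map, distortion value and multiplier are hypothetical):

```python
# Storage cost C and Lagrangian cost J = D + lambda * C for a hypothetical
# bit-subset selector; all numbers below are assumptions for illustration.
N, R = 4, 2                   # 4 sources at 2 bits each -> NR = 8 received bits
selector = {                  # S(n): which received bit positions the decoder keeps
    1: {1, 2},
    2: {3, 4, 5},
    3: {2, 6},
    4: {7, 8},
}

C = sum(2 ** len(bits) for bits in selector.values())   # decoder storage requirement
D = 1.25                                                 # distortion of some design (assumed)
lam = 0.01                                               # Lagrange multiplier
J = D + lam * C
print(f"C = {C}, J = {J:.3f}")   # C = 4 + 8 + 4 + 4 = 20
```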
The syndrome approach can still be used for more than two sources. Consider $a$ binary sources of length $n$, $x_1, x_2, \ldots, x_a \in \{0,1\}^n$, with corresponding coding matrices $H_1, H_2, \ldots, H_a$ of sizes $m_1 \times n, m_2 \times n, \ldots, m_a \times n$. The sources are compressed into the syndromes $s_1 = H_1 x_1, s_2 = H_2 x_2, \ldots, s_a = H_a x_a$, for a total of $m = m_1 + m_2 + \cdots + m_a$ transmitted bits.
A general theoretical result does not seem to exist. However, for a restricted kind of source, the so-called Hamming source,[25] in which at most one source differs from the rest and in at most one bit location, practical lossless DSC has been shown to exist in some cases. For the case with more than two sources, the number of source tuples in a Hamming source is $2^n(an+1)$, so a simple counting bound for lossless recovery is $2^m \geq 2^n(an+1)$. A simplest set of $a, n, m$ satisfying this bound with equality is $a = 3, n = 5, m = 9$.
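Both this parameter set and the larger one used below meet the counting bound with equality:

$$2^n(an+1) = 2^5(3 \cdot 5 + 1) = 2^5 \cdot 2^4 = 2^9, \qquad m = 9;$$
$$2^{21}(3 \cdot 21 + 1) = 2^{21} \cdot 2^6 = 2^{27}, \qquad m = 27.$$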
For the case $a = 3$, $n = 21$, $m = 27$, coding matrices for a Hamming source can be constructed as follows. Define
Q1= \begin{pmatrix} 1 0 0 0 0 0 1 0 0 0 0 1 1 1 0 1 1 0 0 0 0\\ 0 1 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 0 1 1 1\\ 0 0 1 0 0 0 0 1 1 0 0 0 0 1 1 1 0 1 0 1 1\\ 0 0 0 1 0 0 0 0 1 1 0 0 0 1 0 0 1 1 1 1 0\\ 0 0 0 0 1 0 0 0 0 1 1 0 1 0 1 1 0 1 1 1 1\\ 0 0 0 0 0 1 0 0 0 0 1 1 0 0 1 0 0 1 1 0 1 \end{pmatrix},
Q2=\begin{pmatrix} 0 0 0 1 0 1 1 0 1 1 1 1 0 1 0 0 0 1 1 1 1\\ 1 0 0 0 1 0 1 1 0 1 1 1 1 0 1 1 1 1 0 0 0\\ 0 1 0 0 0 1 1 1 1 0 1 1 1 0 0 0 0 0 1 0 1\\ 1 0 1 0 0 0 1 1 1 1 0 1 0 1 1 1 0 0 1 1 1\\ 0 1 0 1 0 0 1 1 1 1 1 0 0 0 1 0 1 1 0 1 1\\ 0 0 1 0 1 0 0 1 1 1 1 1 1 1 0 1 0 1 1 1 0 \end{pmatrix},
Q3=\begin{pmatrix} 1 0 0 1 0 1 0 0 1 1 1 0 1 0 0 1 1 1 1 1 1\\ 1 1 0 0 1 0 0 0 0 1 1 1 0 0 1 1 1 1 1 1 1\\ 0 1 1 0 0 1 1 0 0 0 1 1 1 1 1 1 0 1 1 1 0\\ 1 0 1 1 0 0 1 1 0 0 0 1 0 0 1 1 1 1 0 0 1\\ 0 1 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 0 0\\ 0 0 1 0 1 1 0 1 1 1 0 0 1 1 1 1 0 0 0 1 1 \end{pmatrix},
and let

$$G = [0 \mid I_9], \qquad G = \begin{pmatrix} G_1 \\ G_2 \\ G_3 \end{pmatrix},$$

where $G_1, G_2, G_3$ are the submatrices obtained by partitioning the rows of $G$. The coding matrices are then

$$H_1 = \begin{pmatrix} G_1 \\ Q_1 \end{pmatrix}, \quad H_2 = \begin{pmatrix} G_2 \\ Q_2 \end{pmatrix}, \quad H_3 = \begin{pmatrix} G_3 \\ Q_3 \end{pmatrix};$$

explicitly,
H1= \begin{pmatrix} 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0\\ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0\\ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1\\ 1 0 0 0 0 0 1 0 0 0 0 1 1 1 0 1 1 0 0 0 0\\ 0 1 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 0 1 1 1\\ 0 0 1 0 0 0 0 1 1 0 0 0 0 1 1 1 0 1 0 1 1\\ 0 0 0 1 0 0 0 0 1 1 0 0 0 1 0 0 1 1 1 1 0\\ 0 0 0 0 1 0 0 0 0 1 1 0 1 0 1 1 0 1 1 1 1\\ 0 0 0 0 0 1 0 0 0 0 1 1 0 0 1 0 0 1 1 0 1 \end{pmatrix},
H2=\begin{pmatrix} 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0\\ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0\\ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0\\ 0 0 0 1 0 1 1 0 1 1 1 1 0 1 0 0 0 1 1 1 1\\ 1 0 0 0 1 0 1 1 0 1 1 1 1 0 1 1 1 1 0 0 0\\ 0 1 0 0 0 1 1 1 1 0 1 1 1 0 0 0 0 0 1 0 1\\ 1 0 1 0 0 0 1 1 1 1 0 1 0 1 1 1 0 0 1 1 1\\ 0 1 0 1 0 0 1 1 1 1 1 0 0 0 1 0 1 1 0 1 1\\ 0 0 1 0 1 0 0 1 1 1 1 1 1 1 0 1 0 1 1 1 0 \end{pmatrix},
H3=\begin{pmatrix} 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0\\ 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0\\ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0\\ 1 0 0 1 0 1 0 0 1 1 1 0 1 0 0 1 1 1 1 1 1\\ 1 1 0 0 1 0 0 0 0 1 1 1 0 0 1 1 1 1 1 1 1\\ 0 1 1 0 0 1 1 0 0 0 1 1 1 1 1 1 0 1 1 1 0\\ 1 0 1 1 0 0 1 1 0 0 0 1 0 0 1 1 1 1 0 0 1\\ 0 1 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 0 0\\ 0 0 1 0 1 1 0 1 1 1 0 0 1 1 1 1 0 0 0 1 1 \end{pmatrix}.