In computer science and information theory, Tunstall coding is a form of entropy coding used for lossless data compression.
Tunstall coding was the subject of Brian Parker Tunstall's PhD thesis in 1967, while he was at the Georgia Institute of Technology. The thesis was titled "Synthesis of noiseless compression codes".[1]
Its design is a precursor to Lempel–Ziv.
Unlike variable-length codes, which include Huffman and Lempel–Ziv coding, Tunstall coding maps a variable number of source symbols to a fixed number of bits.[2]
Both Tunstall codes and Lempel–Ziv codes represent variable-length words by fixed-length codes.[3]
Unlike typical set encoding, Tunstall coding parses a stochastic source with codewords of variable length.
It can be shown[4] that, for a large enough dictionary, the number of bits per source letter can be arbitrarily close to $H(U)$, the entropy of the source.
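As a rough sketch of why (this formulation is the editor's own, with $\mathbb{E}[\ell]$ denoting the expected length of the dictionary word matched at each parsing step): every parsed word is emitted as $\lceil \log_2 |D| \rceil$ bits, so the code spends

$$ \frac{\lceil \log_2 |D| \rceil}{\mathbb{E}[\ell]} $$

bits per source letter, and Tunstall's construction makes this ratio approach $H(U)$ as the dictionary size $|D|$ grows.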
The algorithm requires as input an input alphabet $\mathcal{U}$, along with a distribution of probabilities for each letter of that alphabet. It also requires an arbitrary constant $C$, which is an upper bound on the size of the dictionary that it will compute. The dictionary in question, $D$, is constructed as a tree of probabilities, in which each edge is associated with a letter from the input alphabet. The algorithm goes like this:

    D := tree of |U| leaves, one for each letter in U.
    While |D| < C:
        Convert the most probable leaf to a tree with |U| leaves.
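For concreteness, here is a minimal Python sketch of that construction (the function name `tunstall_dictionary` and the heap-based bookkeeping are illustrative choices, not part of the original description; it returns a mapping from dictionary words to fixed-length binary codewords):

```python
from math import ceil, log2
import heapq

def tunstall_dictionary(probabilities, max_size):
    """Build a Tunstall dictionary of at most max_size variable-length words.

    probabilities: dict mapping each source letter to its probability.
    max_size: the constant C bounding the dictionary size.
    """
    # Start with one leaf (one-letter word) per letter of the alphabet.
    # Heap entries are (-probability, word), so the smallest entry is the
    # most probable word.
    heap = [(-p, letter) for letter, p in probabilities.items()]
    heapq.heapify(heap)

    # Each expansion removes one leaf and adds |U| new ones, i.e. the
    # dictionary grows by |U| - 1 words; expand while it still fits.
    while len(heap) + len(probabilities) - 1 <= max_size:
        neg_p, word = heapq.heappop(heap)  # most probable leaf
        for letter, p in probabilities.items():
            heapq.heappush(heap, (neg_p * p, word + letter))

    # Assign each dictionary word a fixed-length binary codeword.
    words = sorted(word for _, word in heap)
    bits = ceil(log2(len(words)))
    return {word: format(i, f"0{bits}b") for i, word in enumerate(words)}
```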
Let's imagine that we wish to encode the string "hello, world". Let's further assume (somewhat unrealistically) that the input alphabet $\mathcal{U}$ contains only the characters that appear in that string, that is, 'h', 'e', 'l', 'o', ',', ' ', 'w', 'r' and 'd'. We can therefore compute the probability of each character from its frequency in the input string: for instance, the letter 'l' appears 3 times in a string of 12 characters, so its probability is $\frac{3}{12}$.

We initialize the tree, starting with a tree of $|\mathcal{U}| = 9$ leaves, so that each word is directly associated with a letter of the alphabet. The 9 words that we thus obtain can be encoded into a fixed-sized output of $\lceil \log_2(9) \rceil = 4$ bits.

We then take the leaf of highest probability (here, $w_1$, the leaf for 'l') and convert it into yet another tree of $|\mathcal{U}| = 9$ leaves, one for each character, and re-compute the probabilities of those leaves. For instance, the sequence of two letters 'l' occurs once: of the three occurrences of 'l', one is followed by another 'l', so the resulting probability is $\frac{1}{3} \cdot \frac{3}{12} = \frac{1}{12}$.

We obtain 17 words, which can each be encoded into a fixed-sized output of $\lceil \log_2(17) \rceil = 5$ bits.

Note that we could iterate further, increasing the number of words by $|\mathcal{U}| - 1 = 8$ every time.
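The same steps can be reproduced with the hypothetical `tunstall_dictionary` sketch from above, as an illustration only:

```python
from collections import Counter

text = "hello, world"
# Estimate letter probabilities from their frequencies in the string (9 letters).
probabilities = {ch: n / len(text) for ch, n in Counter(text).items()}

code = tunstall_dictionary(probabilities, max_size=17)
print(len(code))                           # 17 dictionary words
print(set(len(c) for c in code.values()))  # {5}: 5-bit fixed-length codewords
```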
Tunstall coding requires the algorithm to know, prior to the parsing operation, what the distribution of probabilities for each letter of the alphabet is. This issue is shared with Huffman coding.
Its requirement of a fixed-length block output makes it less flexible than Lempel–Ziv coding, which has a similar dictionary-based design but a variable-sized block output.
This is an example of a Tunstall code being used to read (for transmission) data that has been scrambled, e.g. by polynomial scrambling, so that its bits are effectively independent and equiprobable. This particular example converts the base of the data from 2 to 3 in a stream, thereby avoiding expensive base-conversion routines. With base conversion we are particularly concerned with the 'efficiency' of the reads: ideally, about $\log_2 3$ bits are consumed on average per ternary symbol produced. This ensures that switching to the new base, whose symbols can each carry at most $\log_2 3$ bits, does not cost us the transmission efficiency for which we employed the base conversion in the first place. We can then use this read-to-convert-base mechanism to transmit data efficiently across channels that operate in a different base, e.g. transmitting binary data across MLT-3 channels with higher efficiency than simple mapping codes (which leave a large number of codes unused). The codewords are listed in the table below, followed by a short parsing sketch.
| Symbol (ternary output) | Code (binary bits read) |
|---|---|
| AA | 010 |
| AB | 011 |
| AC | 100 |
| B | 00 |
| CA | 101 |
| CB | 110 |
| CC | 111 |
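A minimal sketch of that parsing, assuming the interpretation above (the binary column is a prefix-free set of words to read, the symbol column is the ternary output emitted); the function name `binary_to_ternary` is purely illustrative:

```python
# Codewords from the table: prefix-free binary words -> ternary symbols.
TABLE = {
    "00": "B",
    "010": "AA", "011": "AB", "100": "AC",
    "101": "CA", "110": "CB", "111": "CC",
}

def binary_to_ternary(bits: str) -> str:
    """Parse a scrambled bit string into a stream of ternary symbols A/B/C."""
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in TABLE:  # codewords are prefix-free, so the first hit is the parse
            out.append(TABLE[buf])
            buf = ""
    return "".join(out)

print(binary_to_ternary("0010111001101000"))  # reads 16 bits, emits 10 ternary symbols
```

With equiprobable bits, a read consumes $\frac{1}{4} \cdot 2 + \frac{3}{4} \cdot 3 = 2.75$ bits on average and emits $\frac{1}{4} \cdot 1 + \frac{3}{4} \cdot 2 = 1.75$ ternary symbols, i.e. about 1.57 bits per symbol, close to the $\log_2 3 \approx 1.585$ bits a ternary symbol can carry.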