Run-length coding is an example of a mapping that directly results in data compression in this
initial stage of the overall source encoding process. The representation of an image by a set of
transform coefficients is an example of the opposite case. Here, the mapper transforms the image
into an array of coefficients, making its interpixel redundancies more accessible for compression
in later stages of the encoding process.
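As a small illustration, the Python sketch below applies a run-length mapper to one scan line of pixel values; the function name and the sample data are chosen only for this example.

def run_length_encode(row):
    # Map a row of pixel values to (value, run_length) pairs. Long runs of
    # identical pixels, a form of interpixel redundancy, collapse to one pair.
    pairs = []
    run_value, run_length = row[0], 1
    for pixel in row[1:]:
        if pixel == run_value:
            run_length += 1
        else:
            pairs.append((run_value, run_length))
            run_value, run_length = pixel, 1
    pairs.append((run_value, run_length))
    return pairs

print(run_length_encode([255, 255, 255, 255, 0, 0, 7]))
# [(255, 4), (0, 2), (7, 1)]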
The second stage, or quantizer block in Fig. 11.3 (a), reduces the accuracy of the mapper’s output
in accordance with some pre-established fidelity criterion. This stage reduces the psychovisual
redundancies of the input image. Because this operation is irreversible, it must be omitted when
error-free compression is desired.
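The loss introduced by quantization can be seen in a short sketch: a uniform quantizer with an assumed step size rounds the mapper's output, and dequantizing afterwards does not return the original values.

def quantize(values, step):
    # Reduce accuracy by rounding each value to the nearest multiple of step.
    return [round(v / step) for v in values]

def dequantize(indices, step):
    # Best possible reconstruction; the rounding error is gone for good.
    return [i * step for i in indices]

coeffs = [12.7, -3.2, 0.4, 8.9]
q = quantize(coeffs, step=4)      # [3, -1, 0, 2]
print(dequantize(q, step=4))      # [12, -4, 0, 8], not the original coefficients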
In the third and final stage of the source encoding process, the symbol coder creates a fixed- or
variable-length code to represent the quantizer output and maps the output in accordance with the
code. The term symbol coder distinguishes this coding operation from the overall source encoding
process. In most cases, a variable-length code is used to represent the mapped and quantized
data set. It assigns the shortest code words to the most frequently occurring output values and
thus reduces coding redundancy. The operation, of course, is reversible.
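A Huffman code is one common way to build such a variable-length code. The sketch below constructs one for a small symbol stream; it is meant only to illustrate what a symbol coder does, and the symbol values are arbitrary.

import heapq
from collections import Counter

def huffman_code(symbols):
    # Build a variable-length code by repeatedly merging the two least
    # frequent nodes; frequent symbols end up with the shortest code words.
    freq = Counter(symbols)
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

data = [0, 0, 0, 0, 1, 1, 2, 3]
print(huffman_code(data))
# {0: '0', 1: '10', 2: '110', 3: '111'}: the most frequent symbol gets 1 bit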
Fig. 11.3 (a) shows the source encoding process as three successive operations, but not all three
operations are necessarily included in every compression system. Recall, for example, that the
quantizer must be omitted when error-free compression is desired. In addition, some compression
techniques normally are modeled by merging blocks that are physically separate in Figure 11.3
(a). In predictive compression systems, for instance, the mapper and quantizer are often
represented by a single block, which simultaneously performs both operations.
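A minimal DPCM-style sketch makes this concrete: each sample is replaced by its quantized prediction error in a single step, so mapping and quantization are not separable stages here. The predictor (the previous reconstructed sample) and the step size are arbitrary choices for this illustration.

def dpcm_encode(samples, step=4):
    # Map each sample to a prediction error and quantize it in one operation.
    errors, prediction = [], 0
    for s in samples:
        e = round((s - prediction) / step)
        errors.append(e)
        prediction += e * step    # track the value the decoder will rebuild
    return errors

print(dpcm_encode([100, 104, 109, 107, 107]))
# [25, 1, 1, 0, 0]: after the first sample, small integers that are cheap to code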
The source decoder shown in Fig. 11.3 (b) contains only two components: a symbol decoder and
an inverse mapper. These blocks perform, in reverse order, the inverse operations of the source
encoder’s symbol encoder and mapper blocks. Because quantization results in irreversible
information loss, an inverse quantizer block is not included in the general source decoder model
shown in Fig. 11.3 (b).
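Continuing the run-length example given earlier, the inverse mapper simply expands each (value, run_length) pair back into pixels once the symbol decoder has recovered the pairs:

def inverse_map(pairs):
    # Rebuild the scan line from (value, run_length) pairs.
    row = []
    for value, run_length in pairs:
        row.extend([value] * run_length)
    return row

print(inverse_map([(255, 4), (0, 2), (7, 1)]))
# [255, 255, 255, 255, 0, 0, 7], the original row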
11.5.2 The Channel Encoder and Decoder
The channel encoder and decoder play an important role in the overall encoding-decoding process
when the channel of Fig. 11.2 is noisy or prone to error. They are designed to reduce the impact
of channel noise by inserting a controlled form of redundancy into the source encoded data.
As the output of the source encoder contains little redundancy, it would be highly sensitive to
transmission noise without the addition of this “controlled redundancy.” One of the most useful
channel encoding techniques was devised by R. W. Hamming. It is based on appending enough
bits to the data being encoded to ensure that some minimum number of bits must change between
valid code words. Hamming showed, for example, that if 3 bits of redundancy are added to a
4-bit word, so that the distance between any two valid code words is 3, all single-bit errors can
be detected and corrected. (By appending additional bits of redundancy, multiple-bit errors can
be detected and corrected.) The 7-bit Hamming (7, 4) code word h₁, h₂, h₃, …, h₆, h₇ associated
with a 4-bit binary number b₃b₂b₁b₀ is

h₁ = b₃ ⊕ b₂ ⊕ b₀        h₃ = b₃
h₂ = b₃ ⊕ b₁ ⊕ b₀        h₅ = b₂
h₄ = b₂ ⊕ b₁ ⊕ b₀        h₆ = b₁
                         h₇ = b₀
where ⊕ denotes the exclusive OR operation. Note that bits h₁, h₂, and h₄ are even-parity bits for
the bit fields b₃b₂b₀, b₃b₁b₀, and b₂b₁b₀, respectively. (Recall that a string of binary bits has even
parity if the number of bits with a value of 1 is even.)
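The encoding equations above translate directly into code. In the sketch below, the encoder is a literal transcription of those equations; the decoder, which recomputes the three even-parity checks to locate and correct a single-bit error, follows the standard syndrome procedure and is included only to show the error-correcting property at work.

def hamming74_encode(b3, b2, b1, b0):
    # Literal transcription of the equations above; b3 is the most
    # significant bit of the 4-bit word.
    h1 = b3 ^ b2 ^ b0
    h2 = b3 ^ b1 ^ b0
    h4 = b2 ^ b1 ^ b0
    return [h1, h2, b3, h4, b2, b1, b0]    # transmitted as h1 h2 h3 h4 h5 h6 h7

def hamming74_decode(h):
    h1, h2, h3, h4, h5, h6, h7 = h
    # Recompute the even-parity checks; the syndrome c4 c2 c1 read as a
    # binary number gives the position (1..7) of a single bit in error.
    c1 = h1 ^ h3 ^ h5 ^ h7
    c2 = h2 ^ h3 ^ h6 ^ h7
    c4 = h4 ^ h5 ^ h6 ^ h7
    error_position = 4 * c4 + 2 * c2 + c1
    if error_position:
        h[error_position - 1] ^= 1         # correct the single-bit error
    return [h[2], h[4], h[5], h[6]]        # b3 b2 b1 b0

code = hamming74_encode(1, 0, 1, 1)
code[5] ^= 1                               # the channel flips one bit
print(hamming74_decode(code))              # [1, 0, 1, 1], recovered exactly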