parity if the number of bits with a value of 1 is even.) To decode a Hamming-encoded result,
the channel decoder must check the encoded value for odd parity over the bit fields in which
even parity was previously established. A single-bit error is indicated by a nonzero parity word
c4c2c1, where
c1 = h1 ⊕ h3 ⊕ h5 ⊕ h7
c2 = h2 ⊕ h3 ⊕ h6 ⊕ h7
c4 = h4 ⊕ h5 ⊕ h6 ⊕ h7
If a nonzero value is found, the decoder simply complements the code word bit position indicated
by the parity word. The decoded binary value is then extracted from the corrected code word
as h3h5h6h7.
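For illustration, the short Python sketch below carries out this check for one 7-bit code word. It assumes the usual Hamming(7,4) arrangement implied by the equations above, in which h1, h2 and h4 are the even-parity bits and h3, h5, h6, h7 carry the data; the function name and the example code word are chosen for the example, not taken from the text.

def hamming74_decode(h):
    # Decode one 7-bit Hamming code word given as a list [h1, h2, ..., h7].
    # Assumed layout (consistent with the equations above): h1, h2, h4 are the
    # even-parity bits; h3, h5, h6, h7 carry the 4-bit data value.
    h = list(h)                             # work on a copy
    h1, h2, h3, h4, h5, h6, h7 = h
    c1 = h1 ^ h3 ^ h5 ^ h7                  # re-check each even-parity group
    c2 = h2 ^ h3 ^ h6 ^ h7
    c4 = h4 ^ h5 ^ h6 ^ h7
    syndrome = 4 * c4 + 2 * c2 + c1         # parity word c4 c2 c1 read as a number
    if syndrome != 0:                       # nonzero: complement the bit it points to
        h[syndrome - 1] ^= 1                # positions are 1-based
    return [h[2], h[4], h[5], h[6]]         # decoded binary value h3 h5 h6 h7

# Hypothetical example: the data 1011 encodes (with even parity) as 0110011.
# Corrupting h5 still yields the original data after decoding.
received = [0, 1, 1, 0, 0, 1, 1]
received[4] ^= 1                            # introduce a single-bit error in h5
print(hamming74_decode(received))           # prints [1, 0, 1, 1]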
Select a video clip and compress it using a video converter. Write down the steps in your
notebook.
A Brief History of Compression
"If you think of it, it exists somewhere." —David Byrne
Digital video compression techniques have played an important role in the world of telecommunication and multimedia systems, where bandwidth is still a valuable commodity. Hence, video compression techniques are of prime importance for reducing the amount of information needed for a picture sequence without losing much of its quality, as judged by human viewers. Modern compression techniques involve very complex electronic circuits, and their cost can only be kept to an acceptable level by high-volume production of LSI chips. This means that we have to standardize the techniques of video compression.
The history of compression begins in the 1960s. An analogue videophone system had been
tried out in the 1960s, but it required a wide bandwidth and the postcard-size black-and-white
pictures produced did not add appreciably to voice communication! In the 1970s, it was realized
that visual speaker identification could substantially improve a multiparty discussion and
videoconference services were considered. Interest increased with improvements in picture
quality and digital coding.
With the technology available in the 1980s, the COST211 video codec (encoder/decoder), based on differential pulse code modulation (DPCM), was standardized by the CCITT under the H.120 standard. (Pulse Code Modulation is still used for CD audio, which is why such files are called PCM/.wav files.) This codec's target bitrate was 2 Mbit/s for Europe and 1.544 Mbit/s for North America, suitable for their respective first levels of the digital hierarchy. However, although the image quality had very good spatial resolution (due to DPCM working on a pixel-by-pixel basis), it had very poor temporal quality. It was soon realized that, in order to improve the image quality without exceeding the target bitrate, less than one bit should be used, on average, to code each pixel. This was only possible if a group of pixels (a "block") was coded together, so that the number of bits per pixel could be fractional. This led to the design of the so-called block-based codecs.
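As a rough illustration of that arithmetic, the sketch below compares the two approaches for one block; the 8 × 8 block size and the 40-bit budget are made-up numbers for the example, not figures from any particular standard.

# Illustrative arithmetic only: why joint (block-based) coding allows a
# fractional number of bits per pixel, while pixel-by-pixel coding cannot.
block_w, block_h = 8, 8            # assumed block size, typical of later DCT codecs
pixels = block_w * block_h         # 64 pixels in the block

pixelwise_bits = pixels * 1        # pixel-by-pixel coding spends at least 1 bit each
block_bits = 40                    # hypothetical budget for coding the block jointly

print(pixelwise_bits / pixels)     # 1.0 bit per pixel (the per-pixel lower bound)
print(block_bits / pixels)         # 0.625 bit per pixel, a fractional rate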
During the late 1980s study period, of the 15 block-based videoconferencing proposals
submitted to the ITU-T (formerly the CCITT), 14 were based on the Discrete Cosine Transform
(DCT) and only one on Vector Quantization (VQ). The subjective quality of video sequences
Contd...