Unit 11: Compression
presented to the panel showed hardly any significant differences between the two coding
techniques. In parallel to ITU-T’s investigation during 1984-88, the Joint Photographic Experts
techniques. In parallel to ITU-T’s investigation during 1984-88, the Joint Photographic Experts
Group (JPEG) was also interested in compression of static images. They chose the DCT as the
main unit of compression, mainly due to the possibility of progressive image transmission.
JPEG’s decision undoubtedly influenced the ITU-T in favouring DCT over VQ. By now there
was worldwide activity in implementing the DCT in chips and on DSPs.
In the late 1980s it was clear that the recommended ITU-T videoconferencing codec would use a
combination of interframe DPCM for minimum coding delay and the DCT. The codec showed
greatly improved picture quality over H.120. In fact, the image quality for videoconferencing
applications was found to be reasonable at 384 kbit/s or higher, and good quality was possible
at significantly higher bit rates of around 1 Mbit/s. This effort was later extended to systems
based on multiples of 64 kbit/s (up to 30 multiples of this value). The definition of the standard
was completed in late 1989; it is officially called the H.261 standard, and the coding method is
referred to as the “p × 64” method (p is an integer between 1 and 30).
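As a rough numerical illustration of the p × 64 scheme (a small Python sketch, not part of the standard itself; the function name is hypothetical), the available channel rates can be enumerated directly:

    # Sketch: channel rates covered by the H.261 "p x 64" scheme.
    # Each rate is p multiples of a 64 kbit/s channel, with p from 1 to 30.
    def h261_rates_kbps():
        return [p * 64 for p in range(1, 31)]

    rates = h261_rates_kbps()
    print(rates[0], rates[-1])   # 64 kbit/s up to 1920 kbit/s (about 2 Mbit/s)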
In the early 1990s, the Moving Picture Experts Group (MPEG) started investigating coding
techniques for the storage of video on media such as CD-ROMs. The aim was to develop a video
codec capable of compressing highly active video, such as movies, on hard disks, with quality
comparable to that of VHS. In fact, the first generation of MPEG, called the MPEG-1 standard,
built on the basic framework of H.261 and was capable of accomplishing this task at 1.5 Mbit/s.
Since encoding and decoding delays are not a major constraint for the storage of video, one
can trade delay for compression efficiency. For example, in the temporal domain a DCT might
be used rather than DPCM, or DPCM might be retained but with much improved motion
estimation, such that the motion compensation removes the temporal correlation. This latter
option was adopted in MPEG-1, as illustrated in the sketch below.
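To make the idea of motion-compensated DPCM concrete, the following is a minimal full-search block-matching sketch in Python (the function name, block size and search range are illustrative assumptions, not the MPEG-1 reference method). The best-matching block in the previous frame serves as the temporal predictor, so only the prediction error, together with the motion vector, has to be coded.

    import numpy as np

    def block_match(ref, cur, by, bx, block=16, search=7):
        # Hypothetical sketch of full-search block matching: find the
        # displacement (dy, dx) of the block at (by, bx) in the current
        # frame that best matches the previous (reference) frame.
        target = cur[by:by+block, bx:bx+block].astype(np.int32)
        h, w = ref.shape
        best_sad, best_mv = None, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = by + dy, bx + dx
                if y < 0 or x < 0 or y + block > h or x + block > w:
                    continue
                cand = ref[y:y+block, x:x+block].astype(np.int32)
                sad = int(np.abs(target - cand).sum())  # sum of absolute differences
                if best_sad is None or sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
        return best_mv  # the matched block becomes the DPCM predictor

In MPEG-1 the prediction error of each block is then transformed with the DCT, quantized and entropy coded, which is where the actual bit-rate saving is obtained.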
These days, MPEG-1 decoders/players are becoming commonplace for multimedia on
computers. The MPEG-1 decoder plug-in hardware boards (e.g., MPEG magic cards) have
been around for a while, and now software MPEG-1 decoders are available with the release of
operating systems or multimedia extensions for PC and Mac platforms. Since in all standard
video codecs only the decoder has to comply with the standard syntax, software-based encoding
adds extra flexibility that might even improve the performance of MPEG-1 in the future.
MPEG-1 was originally optimized for typical applications using non-interlaced video at
25 frames per second (fps) in the European format and 29.97 fps in the North American format,
at bit rates in the range of 1.2–1.5 Mbit/s, for image quality comparable to home VCRs; it can
certainly be used at higher bit rates and resolutions. Early versions of MPEG-1 for interlaced
video, such as those used in broadcast, were called MPEG-1+. A new generation of MPEG, called MPEG-2, was
soon adopted by broadcasters (who were initially reluctant to use any compression on video!).
MPEG-2 codes interlaced video at bit rates of 4–9 Mbit/s and is now well on its way to
making a significant impact in a range of applications such as digital terrestrial broadcasting,
digital satellite TV, digital cable TV, the digital versatile disc (DVD) and many others. Television
broadcasters have been using MPEG-2 coded digital transmission since the late 1990s.
A slightly improved version of MPEG-2, called MPEG-3, was to be used for coding of High
Definition (HD) TV, but since MPEG-2 could itself achieve this, MPEG-3 standards were
folded into MPEG-2.
It is foreseen that by 2014, the existing NTSC (North American format) transmissions will cease
and HDTV with MPEG-2 compression will be used instead in terrestrial broadcasting.
Questions:
1. Explain the term block-based codecs.
2. Prepare a list of MPEG standards.