CSA3020

Lecture 6 - Video

References:
Steinmetz, R., and Nahrstedt, K. (1995). Multimedia: Computing, Communications & Applications. Prentice Hall. Chapter 5.
Steinmetz, R., and Nahrstedt, K. (2002). Multimedia Fundamentals: Vol. 1. Prentice Hall. Chapter 5.

The Role of Digital Video

Analog video and broadcast TV are mature technologies, and there is little room left for improvement over what we have today. Digital video offers many advantages over analog video: higher quality, no deterioration over time, interactivity, more versatile transmission and distribution options, and lower-cost editing, to name a few.

Interactivity

Because digital video is stored on random-access devices (e.g., magnetic or optical disk), as opposed to the sequential-access devices (e.g., magnetic tape) used by analog video, it is possible to rapidly access any part of the video, giving the real-time responses required for interactivity.

Editing

The ability to edit and re-edit video-based productions without risking deterioration of the storage medium is important not only in the film industry, but also in the "home movies" sector (e.g., holiday footage). Also important in the latter sector is the availability of cheap digital video editing suites (e.g., Adobe Premiere), which can still offer reasonable-to-good quality, as well as the ability to add transitions and other special effects, and to add audio tracks.

Quality

Analog signals deteriorate over time because of the storage media used for analog video, and they are susceptible to atmospheric conditions during transmission; digital signals are far less susceptible (mainly because of the error-correction protocols used, which can determine that an error has occurred in transmission and re-request the "bad" data). Additionally, (compressed) digital video has lower bandwidth requirements than analog video, and consequently higher-definition video can be transmitted in the same bandwidth (e.g., HDTV), resulting in better picture quality.
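To make the bandwidth saving concrete, here is a rough back-of-the-envelope calculation in Python. The frame size, colour depth, and 40:1 compression ratio are illustrative assumptions (of the order achieved by MPEG-2-style codecs), not exact broadcast figures:

```python
# Rough illustration of why compression matters for digital video.
# All figures below are illustrative assumptions.

width, height = 720, 576       # PAL-style frame size, in pixels
bytes_per_pixel = 3            # 24-bit colour
frames_per_second = 25         # PAL frame rate

raw_rate = width * height * bytes_per_pixel * frames_per_second
print(f"Uncompressed: {raw_rate / 1e6:.1f} MB/s")    # ~31.1 MB/s

compression_ratio = 40         # assumed, MPEG-2-like order of magnitude
compressed_rate = raw_rate / compression_ratio
print(f"Compressed ~{compression_ratio}:1: {compressed_rate / 1e6:.2f} MB/s")  # ~0.78 MB/s
```

At roughly 31 MB/s, even a minute of uncompressed video is enormous; at 40:1 it becomes practical to store and transmit.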

Transmission/Distribution Options

The low bandwidth requirements of compressed digital video mean that it can be stored and distributed on compact disc. They also mean that significantly more channels can be transmitted over the airwaves or over cable than with analog video. Digital video can also go places that analog video cannot: it can be attached to e-mail, or accessed over the WWW. Video-conferencing will become a killer application once transport costs are cheap enough (currently, excellent-quality video-conferencing is possible only over ISDN and faster networks) or internet bandwidth is regularised. Even so, low-band video-conferencing is possible even over 14.4K modem connections. The ability to store once and transmit many times has opened the way for video-on-demand services, where it is possible to watch what you want to, when you want to, and where you want to.

Important aspects of analog video (mainly television)

So, how does TV work? Let's assume that we are watching a live television broadcast, as opposed to watching a video or a delayed broadcast on our TV. A television camera converts light into electrical signals by using a prism behind the lens to split incoming light into red, green, and blue channels. Arrays of separate solid-state, light-sensitive receptors (called charge coupled devices, or CCDs) detect brightness differences at different points throughout an image. The surface of the CCD contains hundreds of thousands to millions of pixel points, each of which can respond electrically to the amount of light focused on its surface. The differences in image brightness detected at each of these points on the surface of the CCD are changed into electrical impulses: the brighter the light, the greater the impulse generated. The impulses from these points can then be "read out" on a line-by-line basis from the CCD chips.
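As a toy illustration of that line-by-line read-out (a sketch only; real CCD read-out is an analog, clocked process, not a Python loop):

```python
# Toy model of CCD read-out: treat an image as a grid of brightness
# values and deliver it one scan line at a time, top to bottom.

image = [
    [10,  40,  40, 10],    # each number stands for the amount of light
    [40, 200, 200, 40],    # falling on one pixel point of the CCD
    [40, 200, 200, 40],
    [10,  40,  40, 10],
]

def read_out(image):
    """Yield the image one scan line at a time."""
    for line in image:
        yield line

for n, line in enumerate(read_out(image)):
    print(f"scan line {n}: {line}")
```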
Assume, for a moment, that the TV which will receive the video signals is a black-and-white TV. The TV receiver is tasked with reconstructing the representation of the scene created by the TV camera, by converting the pixel-point impulses generated by the TV camera back into light. The back of a TV screen is covered with phosphor lines. An electron beam charges the phosphor lines, causing them to glow. The TV receiver picks up signals transmitted on a particular frequency, which are reproduced on the TV display in a left-to-right, top-to-bottom scanning sequence. The problem, however, is that phosphor has a "memory span" that is too short: by the time the beam has charged the last line on the screen, the first line's charge will have faded, resulting in flicker. So, to reduce flicker when the video signal is eventually displayed on a TV screen, the TV camera captures each image in alternate odd and even lines. Each sequence of odd (or even) lines is called a field, and two fields comprise a frame. This process is called interlacing. The process is continually repeated, creating a constant sequence of changing field and frame information.
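A minimal sketch of the idea, assuming a frame is simply a list of scan lines: the even and odd lines are separated into two fields, which are then displayed alternately:

```python
# Minimal sketch of interlacing: a frame (a list of scan lines) is
# split into two fields, which are transmitted and displayed in turn.

frame = [f"line {i}" for i in range(8)]   # a toy 8-line frame

even_field = frame[0::2]   # lines 0, 2, 4, 6
odd_field = frame[1::2]    # lines 1, 3, 5, 7

# Two fields make up one frame, so a 25 frames-per-second signal
# actually carries 50 fields per second.
for field in (even_field, odd_field):
    for line in field:
        print(line)
```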
A standard video camera outputs three main elements: luminance, chrominance, and sync. In production and post-production recording, the luminance and chrominance are kept separate (component video), resulting in higher quality. For broadcast and distribution (usually on low-cost VHS video tape), the luminance and chrominance are combined into one signal (composite video). Luminance is the brightness of a pixel point. In black-and-white TV this is the only information needed to represent and reconstruct a scene (as explained above). Chrominance is colour information. A colour TV could, in principle, manage without a separate luminance signal; however, luminance is part of the video signal standard so that black-and-white TVs can still reconstruct scenes captured by colour TV cameras and colour video. In colour TVs, the chrominance signals are interlaced with the luminance signals. Chrominance is made up of two parts: hue and saturation. Hue describes the actual colour displayed, and saturation is the intensity of the colour. Computer monitors are different from TVs. Computer monitors are usually RGB, which means that they require separate red, green, and blue signals, and luminance information is kept separate (i.e., computer monitors are best suited to component video).
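The split between luminance and chrominance can be made concrete with the standard luminance weighting of RGB. A minimal sketch, using the well-known ITU-R BT.601 luminance weights and the unscaled colour-difference form of chrominance (the exact scaling of the two chrominance components varies between standards):

```python
# Luminance (Y) as a weighted sum of R, G and B (BT.601 weights), with
# chrominance carried as the two colour differences B-Y and R-Y.

def rgb_to_luma_chroma(r, g, b):
    """Convert RGB (0-255 each) to luminance plus two colour differences."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # what a black-and-white set displays
    u = b - y                               # blue colour difference
    v = r - y                               # red colour difference
    return y, u, v

print(rgb_to_luma_chroma(255, 255, 255))   # white: ~(255.0, 0.0, 0.0)
print(rgb_to_luma_chroma(255, 0, 0))       # red:   ~(76.2, -76.2, 178.8)
```

Note how a saturated red carries relatively little luminance: a black-and-white set would reproduce it as a fairly dark grey.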
Finally, the sync information in the video signal is used to ensure that the image remains stable. Sync is a series of electrical pulses that control the timing of each frame of video information. The electrical timing reference used for both fields and frames is none other than the household electrical current. In PAL, there are 25 frames (and hence 50 fields) per second, and alternating electrical current has precisely 50 cycles per second. So, each cycle provides a sync pulse for one field, and every two cycles set the timing for one frame. Basically, every time a line is scanned, the electron gun is shut off. When the beam gets to the next line, the gun is switched on again. When an entire field has been scanned, the gun is switched off again, to allow the beam to travel to the first line of the next field. This is one important reason why there are different international video standards: in the United States and other countries where AC current has 60 cycles per second, NTSC provides 30 frames per second.
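The arithmetic linking mains frequency, fields, frames, and scan lines can be checked directly. A small sketch using the nominal line counts (625 lines per frame for PAL, 525 for NTSC):

```python
# Nominal field, frame and line timing derived from the mains frequency.

for name, mains_hz, lines_per_frame in (("PAL", 50, 625), ("NTSC", 60, 525)):
    field_rate = mains_hz            # one field per mains cycle
    frame_rate = field_rate / 2      # two interlaced fields per frame
    line_rate = frame_rate * lines_per_frame
    print(f"{name}: {frame_rate:g} frames/s, {field_rate} fields/s, "
          f"{line_rate:g} lines/s")

# PAL:  25 frames/s, 50 fields/s, 15625 lines/s
# NTSC: 30 frames/s, 60 fields/s, 15750 lines/s
```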
Today, rather than using an interlaced approach to scanning, some video systems (including computer monitors and some of the new digital television standards) use a progressive or non-interlaced scanning approach where the fields (odd and even lines) are combined and reproduced at the same time in their proper sequence. Progressive scanning has a number of advantages, including the ability to more easily interface with computer-based video equipment.
There are three major broadcasting standards in force today (a fourth, HDTV, is fast becoming a standard, although take-up is likely to be slow-ish: predictions are that by 2006 only 30% of US homes will have HDTV). The three standards are NTSC (National Television System Committee), SECAM (Séquentiel Couleur à Mémoire, i.e., sequential colour with memory), and PAL (Phase Alternating Line). NTSC is predominant in the US, PAL in Western Europe, and SECAM in France, Russia, and Eastern Europe.
The differences between these basic international broadcast standards centre primarily on four things: the number of scan lines per frame, the frame (and field) rate, the way colour information is encoded, and the bandwidth allocated to each broadcast channel.

Digitizing analog video

As usual, before video can be manipulated in a digital environment, it must first be digitized. We must also cater for the major differences between the way the video signal is encoded for playback on a TV monitor (which assumes a composite video signal) and on a computer monitor (which assumes a component video signal). Computer monitors also support the simultaneous display of a greater number of colours than a TV set.
On the face of it, as video is composed of many individual images, digitizing video can re-use many of the techniques used to digitize photographs. And, indeed, many of the techniques can be shared, subject to the following major differences.
Analog video needs to be digitized at the same rate at which it is played. If the video is played at 25 or 30 frames per second, then the digitizer needs to digitize frames at the corresponding rate. Failure to capture full-motion video means that less than the full frame rate is digitized, and, if the number of dropped frames is significant, the video playback will appear jittery. In its favour, video supports fewer colours than photo-quality digital images.
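A quick calculation shows why capturing at the full frame rate is demanding. The resolution, colour depth, and achievable throughput below are illustrative assumptions:

```python
# Estimate the sustained data rate a digitizer must handle, and the
# effective frame rate if the system cannot keep up. All figures are
# illustrative assumptions.

def capture_rate(width, height, bytes_per_pixel, fps):
    """Bytes per second required to capture every frame."""
    return width * height * bytes_per_pixel * fps

required = capture_rate(640, 480, 2, 25)    # 16-bit colour at 25 fps
print(f"Required: {required / 1e6:.1f} MB/s")    # ~15.4 MB/s

achievable = 10e6                           # assume the system sustains 10 MB/s
captured_fps = min(25, 25 * achievable / required)
print(f"~{captured_fps:.0f} of 25 frames per second captured")   # ~16
```

If only around 16 of every 25 frames reach the disk, the rest are dropped, and playback appears jittery.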

Nowadays, FireWire enables high-speed data transfer rates between video capture devices and computer hard disks. Sometimes the video capture device will also compress the captured video, normally using MPEG-2, the industry-standard video compressor/decompressor (codec) for digital television and DVD.

If your input device does not have FireWire, or does not itself compress the video, then you'll probably want a video capture card. Video capture cards vary in price according to: the number of frames per second that can be captured; the maximum resolution at which full-motion video can be captured; the signal sources that can be digitized (e.g., whether the source can be composite or component video (VHS/Betacam SP), PAL, NTSC, SECAM, etc.); the colour depth supported; and, finally, whether there is an on-board video compressor and the compression ratios supported.
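As a rough guide to the compression ratios such a card needs to offer, one can work backwards from the throughput the rest of the system can sustain (both figures below are assumptions for illustration):

```python
# Minimum on-board compression ratio needed for captured video to fit
# the available disk/bus throughput. Both figures are assumptions.

raw_rate = 720 * 576 * 3 * 25    # uncompressed PAL-size video, bytes/s (~31.1 MB/s)
available = 4e6                  # assume ~4 MB/s sustainable to disk

ratio = raw_rate / available
print(f"Need at least {ratio:.0f}:1 on-board compression")   # about 8:1
```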

Relevant Links

Digital Video Resources


In case of any difficulties or for further information e-mail cstaff@cs.um.edu.mt

Date last amended: 2nd September, 2002