
1.1 INFORMATION REPRESENTATION AHMED THAKUR

1.1.4 VIDEO

 Show understanding of the characteristics of video streams:
 the frame rate (frames/second)
 interlaced and progressive encoding
 video interframe compression algorithms and spatial and temporal redundancy
 multimedia container formats

Video is an electronic medium for the recording, copying, playback, broadcasting, and display of moving visual
media.

Characteristics of video streams


 Number of frames per second (frame rate)
 Interlaced and progressive encoding
 Aspect ratio
 Color space and bits per pixel
 Video quality
 Video compression method (digital only)
 Stereoscopic

 Frame Rate/Number of frames per second


Frame rate, the number of still pictures per unit of time of video, ranges from six or eight frames per second (frame/s)
for old mechanical cameras to 120 or more frames per second for new professional cameras.

PAL standards (Europe, Asia, Australia, etc.) and SECAM (France, Russia, parts of Africa, etc.) specify 25 frames/s,
while NTSC standards (USA, Canada, Japan, etc.) specify 29.97 frames/s. Film is shot at the slower frame rate of 24
frames per second, which slightly complicates the process of transferring a cinematic motion picture to video. The
minimum frame rate needed to achieve a comfortable illusion of a moving image is about 16 frames/s.
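
To see the storage implication of frame rate, the raw (uncompressed) data rate can be worked out directly. A minimal sketch in Python, assuming 24-bit RGB and standard-definition resolutions chosen purely for illustration:

```python
# Raw (uncompressed) video data rate = width x height x bits-per-pixel x frame rate.
def raw_data_rate_mb(width: int, height: int, bits_per_pixel: int, fps: float) -> float:
    """Uncompressed data rate in megabytes per second."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps / 1_000_000

print(raw_data_rate_mb(720, 576, 24, 25))     # PAL SD:  ~31.1 MB/s
print(raw_data_rate_mb(720, 480, 24, 29.97))  # NTSC SD: ~31.1 MB/s
```

Figures like these are why the compression methods covered below are essential for practical video.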

 Interlaced vs progressive
Video can be interlaced or progressive.

Interlacing was invented as a way to reduce flicker in early mechanical and CRT video displays without increasing the
number of complete frames per second, which would have sacrificed image detail to remain within the limitations of a
narrow bandwidth. The horizontal scan lines of each complete frame are treated as if numbered consecutively, and
captured as two fields: an odd field (upper field) consisting of the odd-numbered lines and an even field (lower field)
consisting of the even-numbered lines.

Analog display devices reproduce each frame in the same way, effectively doubling the frame rate as far as perceptible
overall flicker is concerned. When the image capture device acquires the fields one at a time, rather than dividing up a
complete frame after it is captured, the frame rate for motion is effectively doubled as well, resulting in smoother, more
lifelike reproduction (although with halved detail) of rapidly moving parts of the image when viewed on an interlaced
CRT display, but the display of such a signal on a progressive scan device is problematic.

NTSC, PAL and SECAM are interlaced formats. Abbreviated video resolution specifications often include an i to
indicate interlacing. For example, PAL video format is often specified as 576i50, where 576 indicates the total number
of horizontal scan lines, i indicates interlacing, and 50 indicates 50 fields (half-frames) per second.
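
The splitting of a frame into two fields can be illustrated in a few lines of Python; a minimal sketch (the capture and display timing of real systems is omitted):

```python
# Treat a frame as a list of consecutively numbered scan lines,
# then separate it into the odd (upper) and even (lower) fields.
frame = [f"line {n}" for n in range(1, 9)]   # lines 1..8

odd_field  = frame[0::2]   # lines 1, 3, 5, 7
even_field = frame[1::2]   # lines 2, 4, 6, 8

# In 576i50, fields alternate 50 times per second,
# so only 25 complete frames are transmitted each second.
print(odd_field)
print(even_field)
```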

In progressive scan systems, each refresh period updates all scan lines in each frame in sequence. When displaying a
natively progressive broadcast or recorded signal, the result is optimum spatial resolution of both the stationary and
moving parts of the image. When displaying a natively interlaced signal, however, overall spatial resolution is degraded
by simple line doubling—artifacts
such as flickering or "comb" effects in moving parts of the image appear unless special signal processing eliminates
them. A procedure known as deinterlacing can optimize the display of an interlaced video signal from an analog, DVD
or satellite source on a progressive scan device such as an LCD Television, digital video projector or plasma panel.
Deinterlacing cannot, however, produce video quality that is equivalent to true progressive scan source material.

Aspect ratio
Aspect ratio describes the dimensions of video screens and video picture elements. All popular video formats are
rectilinear, and so can be described by a ratio between width and height. The screen aspect ratio of a traditional
television screen is 4:3, or about 1.33:1. High definition televisions use an aspect ratio of 16:9, or about 1.78:1. The
aspect ratio of a full 35 mm film frame with soundtrack (also known as the Academy ratio) is 1.375:1.

[Figure: comparison of common cinematography and traditional television (green) aspect ratios]

Ratios where height is taller than width are uncommon in general everyday use, but are used in computer systems
where some applications are better suited for a vertical layout. The most common tall aspect ratio of 3:4 is referred to
as portrait mode and is created by physically rotating the display device 90 degrees from the normal position. Other tall
aspect ratios such as 9:16 are technically possible but rarely used.

Pixels on computer monitors are usually square, but pixels used in digital video often have non-square aspect ratios,
such as those used in the PAL and NTSC variants of the CCIR 601 digital video standard, and the corresponding
anamorphic widescreen formats. Therefore, a 720 by 480 pixel NTSC DV image displays with the 4:3 aspect ratio (the
traditional television standard) if the pixels are thin, and displays at the 16:9 aspect ratio (the anamorphic widescreen
standard) if the pixels are fat.
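
Numerically, the displayed aspect ratio is the storage aspect ratio (the pixel counts) multiplied by the pixel aspect ratio (PAR). A minimal sketch; the PAR values used here are common nominal figures, and the exact conventions differ slightly between standards:

```python
# Display aspect ratio (DAR) = storage aspect ratio (SAR) x pixel aspect ratio (PAR).
def display_aspect_ratio(width: int, height: int, par: float) -> float:
    return (width / height) * par

# 720x480 with "thin" pixels (PAR 8:9)             -> 1.333... (4:3)
print(display_aspect_ratio(720, 480, 8 / 9))
# 720x480 with "fat" anamorphic pixels (PAR 32:27) -> 1.777... (16:9)
print(display_aspect_ratio(720, 480, 32 / 27))
```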

Color space and bits per pixel


The color model describes how color is represented in the video. YIQ was used in NTSC television; it corresponds
closely to the YUV scheme used in NTSC and PAL television and to the YDbDr scheme used by SECAM television.

[Figure: example of the U-V color plane at Y value = 0.5]

The number of distinct colors a pixel can represent depends on the number of bits per pixel (bpp). A common way to
reduce the amount of data required in digital video is by chroma subsampling (e.g., 4:4:4, 4:2:2, 4:2:0/4:1:1). Because
the human eye is less sensitive to details in color than brightness, the luminance data for all pixels is maintained, while
the chrominance data is averaged for a number of pixels in a block and that same value is used for all of them. For
example, this results in a 50% reduction in chrominance data using 2-pixel blocks (4:2:2) or 75% using 4-pixel blocks
(4:2:0). This process does not reduce the number of possible color values that can be displayed; it reduces the number
of distinct points at which the color changes.
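
The arithmetic behind those percentages can be checked directly; a minimal sketch assuming 8 bits (1 byte) per sample:

```python
# Bytes per block of pixels: every pixel keeps its own Y (luminance) sample,
# while one (Cb, Cr) chrominance pair is shared by the whole block.
def bytes_per_block(pixels: int, chroma_pairs: int) -> int:
    return pixels * 1 + chroma_pairs * 2

print(bytes_per_block(2, 2))  # 4:4:4, 2 pixels: 6 bytes (4 bytes of chroma, no saving)
print(bytes_per_block(2, 1))  # 4:2:2, 2 pixels: 4 bytes (2 bytes of chroma, 50% chroma saving)
print(bytes_per_block(4, 1))  # 4:2:0, 4 pixels: 6 bytes (2 bytes of chroma, 75% chroma saving)
```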

Video quality
Video quality can be measured with formal metrics like PSNR, or by subjective video quality assessment using expert
observation.

The subjective video quality of a video processing system is evaluated as follows:


 Choose the video sequences (the SRC) to use for testing.
 Choose the settings of the system to evaluate (the HRC).
 Choose a test method for how to present video sequences to experts and to collect their ratings.
 Invite a sufficient number of experts, preferably not fewer than 15.
 Carry out testing.
 Calculate the average marks for each HRC based on the experts' ratings.

Many subjective video quality methods are described in the ITU-R recommendation BT.500. One standardized
method is the Double Stimulus Impairment Scale (DSIS). In DSIS, each expert views an unimpaired reference video
followed by an impaired version of the same video. The expert then rates the impaired video using a scale ranging from
"impairments are imperceptible" to "impairments are very annoying".

Video compression method (digital only)


Uncompressed video delivers maximum quality, but with a very high data rate. A variety of methods are used to
compress video streams, with the most effective ones using a Group Of Pictures (GOP) to reduce spatial and temporal
redundancy. Broadly speaking, spatial redundancy is reduced by registering differences between parts of a single
frame; this task is known as intraframe compression and is closely related to image compression. Likewise, temporal
redundancy can be reduced by registering differences between frames; this task is known as interframe compression,
including motion compensation and other techniques. The most common modern standards are MPEG-2, used for
DVD, Blu-ray and satellite television, and MPEG-4, used for AVCHD, mobile phones (3GP) and the Internet.
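
As an illustration of the GOP idea, here is one common layout; a minimal sketch (the 12-frame pattern shown is just one typical choice, not mandated by the standards):

```python
# A common Group Of Pictures layout: "IBBPBBPBBPBB".
# I: intra-coded, self-contained (only spatial redundancy reduced).
# P: predicted from the previous I or P frame (temporal redundancy reduced).
# B: predicted from both the previous and the next I/P frame.
gop = list("IBBPBBPBBPBB")

# Only one frame in twelve is stored in full; the rest store differences,
# which is where most of the compression comes from.
print(gop.count("I"), "I-frame,", gop.count("P"), "P-frames,", gop.count("B"), "B-frames")
```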

Stereoscopic
Stereoscopic video can be created using several different methods:
 Two channels: a right channel for the right eye and a left channel for the left eye. Both channels may be viewed
simultaneously by using light-polarizing filters 90 degrees off-axis from each other on two video projectors. These
separately polarized channels are viewed wearing eyeglasses with matching polarization filters.

 One channel with two overlaid color-coded layers. This left and right layer technique is occasionally used for
network broadcast, or recent "anaglyph" releases of 3D movies on DVD. Simple Red/Cyan plastic glasses provide
the means to view the images discretely to form a stereoscopic view of the content.

 One channel with alternating left and right frames for the corresponding eye, using LCD shutter glasses that read the
frame sync from the VGA Display Data Channel to alternately block the image to each eye, so the appropriate eye sees
the correct frame. This method is most common in computer virtual reality applications such as in a Cave Automatic
Virtual Environment, but reduces effective video framerate to one-half of normal (for example, from 120 Hz to 60 Hz).


Blu-ray Discs greatly improve the sharpness and detail of the two-color 3D effect in color-coded stereo programs.

Video formats
Different layers of video transmission and storage each provide their own set of formats to choose from.

For transmission, there is a physical connector and signal protocol (a "video connection standard"). A given
physical link can carry certain "display standards" that specify a particular refresh rate, display resolution, and color
space.

Many analog and digital recording formats are in use, and digital video clips can also be stored on a computer file
system as files, which have their own formats. In addition to the physical format used by the data storage device or
transmission medium, the stream of ones and zeros that is sent must be in a particular digital video compression format,
of which a number are available.

Analog video
Analog video is a video signal transferred by an analog signal. An analog color video signal contains luminance (Y)
and chrominance (C) information for an analog television image. When combined into one channel, it is called
composite video, as is the case, among others, with NTSC, PAL and SECAM.

Analog video may be carried in separate channels, as in two channel S-Video (YC) and multi-channel component video
formats.

Analog video is used in both consumer and professional television production applications. However, digital video
signal formats with higher quality have been adopted, including serial digital interface (SDI), Digital Visual Interface
(DVI), High-Definition Multimedia Interface (HDMI) and DisplayPort Interface.

STREAMING MEDIA
Streaming media is multimedia that is constantly received by and presented to an end-user while being delivered by a
provider. The verb "to stream" refers to the process of delivering media in this manner; the term refers to the delivery
method of the medium, rather than the medium itself, and is an alternative to downloading.

A client media player can begin to play the data (such as a movie) before the entire file has been transmitted.
Distinguishing delivery method from the media distributed applies specifically to telecommunications networks, as
most of the delivery systems are either inherently streaming (e.g., radio, television) or inherently nonstreaming (e.g.,
books, video cassettes, audio CDs). For example,
in the 1930s, elevator music was among the earliest popularly available streaming media; nowadays Internet television
is a common form of streamed media. The term "streaming media" can apply to media other than video and audio such
as live closed captioning, ticker tape, and real-time text, which are all considered "streaming text". The term
"streaming" was first used in the early 1990s as a better description for video on demand on IP networks; at the time
such video was usually referred to as "store and forward video", which was misleading nomenclature.

 Spatial and Temporal Redundancy Removal


The following compression types are commonly used in video compression:
 Spatial Redundancy Removal - Intraframe coding (JPEG)
 Spatial and Temporal Redundancy Removal - Intraframe and Interframe coding (H.261, MPEG)

Interframe compression
Interframe compression is compression applied to a sequence of video frames, rather than a single image. In general,
relatively little changes from one video frame to the next. Interframe compression exploits the similarities between
successive frames, known as temporal redundancy, to reduce the volume of data required to describe the sequence.

There are several interframe compression techniques, of various degrees of complexity, most of which attempt to more
efficiently describe the sequence by reusing parts of frames the receiver already has, in order to construct new frames.
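
A minimal sketch of the idea, assuming a plain frame-difference scheme (real interframe coders add motion compensation so that moving objects also produce small residuals):

```python
# Interframe coding: send frame 1 in full, then only the per-pixel differences.
frame1 = [50, 50, 50, 200, 50, 50]
frame2 = [50, 50, 50, 50, 200, 50]   # the bright object moved one pixel right

residual = [b - a for a, b in zip(frame1, frame2)]
print(residual)   # [0, 0, 0, -150, 150, 0] -> zeros wherever nothing changed

# The receiver rebuilds frame 2 from frame 1 plus the residual:
rebuilt = [a + r for a, r in zip(frame1, residual)]
assert rebuilt == frame2
```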

Spatial Redundancy
Spatial redundancy is the redundancy of information which exists within the same frame, i.e. it is an intra-frame
redundancy. The numerical similarity of pixel values that exists between frames also exists within the space of a given
frame: spatially, a frame contains pixels which have similar or near-similar values to their adjacent neighbors.

Conceptually, if this spatial correlation or redundancy among pixels is exploited, then predictions can be made about
adjacent neighbors with reasonable accuracy. Because of this correlation (and the better predictions it allows), less
information needs to be encoded and transmitted, which leads to video frame compression. Unless the section of the
picture frame is particularly complex, this redundancy can be exploited efficiently in compressing a video sequence.
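
A minimal sketch of such a prediction, using the simplest possible predictor (each pixel predicted from its left neighbor, as in basic DPCM; real intraframe coders use more elaborate predictors plus transforms):

```python
# Predict each pixel from its left neighbor and keep only the residuals.
row = [100, 101, 101, 102, 150, 151, 151]   # a smooth region with one edge

residuals = [row[0]] + [cur - prev for prev, cur in zip(row, row[1:])]
print(residuals)   # [100, 1, 0, 1, 48, 1, 0] -> mostly small values, easy to compress

# The decoder inverts the prediction to recover the original row:
reconstructed = []
for r in residuals:
    reconstructed.append(r if not reconstructed else reconstructed[-1] + r)
assert reconstructed == row
```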

Example
 In video telephony, the adjacent pixels in the face/eye region have similar values and have minimal (numerical)
difference among them. So here there is a lot of spatial information which is redundant and can be compressed
efficiently.

 In high motion sequences like a soccer match, there are still large regions in a given frame which are uniform and
smooth, such as the seated crowd. The prediction of pixels from their neighbors becomes easier and more efficient in
these regions, since the information is almost the same across them.

Temporal Redundancy
Temporal redundancy is the redundancy of information which exists between a set of frames, i.e. it is an inter-frame
redundancy. Given the nature of any video sequence, the numerical values of the luminance/brightness and
chrominance/color of all the pixels in a given frame are either identical or very similar to those values in the previous
frames. This redundancy, or in simpler words the 'repetition of information between frames', is exploited by all video
compression algorithms.

The basic idea is not to encode the similar or the near similar pixel values which have already been encoded and
transmitted.

Example
 In video telephony there are fine movements of the face and hands yet the background information from frame to
frame remains the same throughout the sequence.

 In a live streaming of a tennis match, the motion is represented by the players and the ball, while the remaining
information, that of the stadium and the audience, remains similar all through the video.


 Multimedia Container Format


Multimedia Container Format, abbreviated MCF, is an unfinished container format specification and a predecessor of
Matroska. The project has been abandoned since early 2004, but many of its innovative features found their way into
Matroska.

Features
One of the objectives of the new format was to simplify its handling by players. This was to be done by making it
feature-complete, eliminating the need for third-party extensions and actively discouraging them. Because of the
simple, fixed structure, the time required to read and parse the header information was minimal. The small size of the
header (2.5 kB), which at the same time contained all the important data, facilitated quick scanning of collections of
MCF files, even over slow network links.

The key feature of MCF was being able to store several chapters of video, menus, subtitles in several languages and
multiple audio streams (e.g. for different languages) in the same file. At the same time, the content could be split
between several files called segments; assembling the segments into a complete movie was automatic, given the
segments were all present. Segments could also be played separately, and overlap between segments was customizable.
The format also allowed for variable frame rate video. To verify integrity, CRC32 checksums were embedded into the
file, and digital signatures were supported. A degree of resilience was built into the parser, allowing for playback of
partially corrupted movies.

MCF's per-frame overhead (7 bytes) was considerably lower than AVI (40 bytes), and comparable to Matroska (10
bytes).

Limits
The limits of the CF format were based on human perception and expectations of progress in bitrates of video. The time
code precision of the format is limited to 1 ms. The addressing in the file is limited to 64 bits, which is extremely large.
Frame size is limited by 32-bit frame size number, limiting frame size at 4 GiB. Time codes are stored as 40-bit
integers, which caps maximum movie length at approximately 35 years. The number of distinct streams in one file is
216, or 65536. A movie can be split into a maximum of 255 segments.
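
Those limits follow directly from the field widths; a minimal sketch of the arithmetic:

```python
# Deriving the MCF limits from the stated field widths.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

print(2 ** 40 / 1000 / SECONDS_PER_YEAR)  # 40-bit, 1 ms time codes -> ~34.8 years
print(2 ** 32 / 2 ** 30)                  # 32-bit frame size field -> 4.0 GiB per frame
print(2 ** 16)                            # 16-bit stream IDs       -> 65536 streams
```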

ACRONYMS
ATM Asynchronous Transfer Mode
BMMC Block-Matching Motion Compensation
CBR Constant Bit Rate
CIF Common Intermediate Format
DCT Discrete Cosine Transform
DVD Digital Video Disk
GOB Group of Blocks
GOP Group of Pictures
HRD Hypothetical Reference Decoder
ITU International Telecommunication Union


ITU-R ITU Radiocommunication Sector
ITU-T ITU Telecommunication Standardization Sector
JPEG Joint Photographic Experts Group
MB Macroblock
MPEG Moving Picture Experts Group
NTSC National Television Systems Committee
OBMC Overlapped Block Motion Compensation
PAL Phase Alternating Line
QCIF Quarter-CIF
RGB Red Green Blue (color system)
SECAM Systeme Electronique Couleur Avec Memoire
SIF Source Input Format
VBR Variable Bit Rate
VBV Video Buffering Verifier
VLC Variable Length Code
VQ Vector Quantization
