High Level, Low Level Modulation (Refer s4 Communication): Dolby Digital®

The ATSC has created 18 commonly used digital broadcast formats for video. The lowest quality
digital format is about the same as the highest quality an analog TV can display. The 18 formats
cover differences in:

• Aspect ratio - Standard television has a 4:3 aspect ratio -- it is four units wide by three
units high. HDTV has a 16:9 aspect ratio, more like a movie screen.
• Resolution - The lowest standard resolution (SDTV) will be about the same as analog
TV and will go up to 704 x 480 pixels. The highest HDTV resolution is 1920 x 1080
pixels. HDTV can display about ten times as many pixels as an analog TV set.
• Frame rate - A set's frame rate describes how many times it creates a complete picture
on the screen every second. DTV frame rates usually end in "i" or "p" to denote whether
they are interlaced or progressive. DTV frame rates range from 24p (24 frames per
second, progressive) to 60p (60 frames per second, progressive).

Many of these standards have exactly the same aspect ratio and resolution -- their frame rates
differentiate them from one another. When you hear someone mention a "1080i" HDTV set,
they're talking about one that has a native resolution of 1920 x 1080 pixels and can display 60
frames per second, interlaced.
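
To make the naming concrete, here is a minimal Python sketch that decodes a few of these
designations into resolution, pixel count, and scan type. The table is an illustrative subset,
not the full list of 18 ATSC formats, and its exact entries are assumptions for the example.

# Illustrative subset of the ATSC formats (not the full table of 18).
FORMATS = {
    "480i": (704, 480, "interlaced"),     # SDTV, roughly analog quality
    "480p": (704, 480, "progressive"),
    "720p": (1280, 720, "progressive"),
    "1080i": (1920, 1080, "interlaced"),  # the HDTV set described above
}

def describe(designation: str) -> str:
    width, height, scan = FORMATS[designation]
    return f"{designation}: {width} x {height} ({width * height:,} pixels), {scan}"

for name in FORMATS:
    print(describe(name))
# "1080i: 1920 x 1080 (2,073,600 pixels), interlaced" is roughly ten times
# the pixel count an analog set can resolve, as noted above.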

Broadcasters get to decide which of these formats they will use and whether they will broadcast
in high definition -- many are already using digital and high-definition signals. Electronics
manufacturers get to decide which aspect ratios and resolutions their TVs will use. Consumers
get to decide which resolutions are most important to them and buy their new equipment based
on that.

MPEG-2
DTV usually uses MPEG-2 encoding, the industry standard for most DVDs, to compress the signal to a reasonable
size. MPEG-2 compression reduces the size of the data by a factor of about 55:1, and it discards a lot of the visual
information the human eye would not notice was missing.
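
A rough back-of-the-envelope check shows what a 55:1 reduction buys. The Python sketch below
assumes 8-bit 4:2:0 sampling of a 1080-line picture at 30 full frames per second; these
parameters are assumptions for the illustration, not figures taken from the text.

width, height = 1920, 1080
frames_per_second = 30        # 1080i: 60 interlaced fields = 30 full frames
bits_per_pixel = 12           # 8-bit luma plus 4:2:0 subsampled chroma

raw_bps = width * height * bits_per_pixel * frames_per_second
compressed_bps = raw_bps / 55             # the ~55:1 factor quoted above

print(f"raw:        {raw_bps / 1e6:.0f} Mbit/s")          # ~746 Mbit/s
print(f"compressed: {compressed_bps / 1e6:.1f} Mbit/s")   # ~13.6 Mbit/s

At 13-14 Mbit/s the compressed picture fits comfortably in a single broadcast channel; the raw
signal clearly could not.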

Until the analog shutoff date, broadcasters will have two available channels to send their signal --
a channel for analog, and a "virtual" channel for digital. Right now, people can watch an over-
the-air digital signal only if they are tuned in to the broadcaster's virtual digital channel. After
analog broadcasting ends, the only signals people will receive over the air will be digital.

However, even though a digital signal is better quality than an analog signal, it isn't necessarily
high definition. HDTV is simply the highest of all the DTV standards. But whether you see a
high-definition picture and hear the accompanying Dolby Digital® sound depends on two things.
First, the station has to be broadcasting a high-definition signal. Second, you have to have the
right equipment to receive and view it. We'll look at how to get an HDTV set and signal next.

High level, Low level modulation (refer s4 communication)

HIGH-LEVEL MODULATION is modulation produced in the plate circuit of the last radio
stage of the system.
LOW-LEVEL MODULATION is modulation produced in an earlier stage than the final
power amplifier.

The PLATE MODULATOR is a high-level modulator. The modulator tube must be capable of
varying the plate-supply voltage of the final power amplifier. It must vary the plate voltage so
that the plate current pulses will vary between 0 and nearly twice their unmodulated value to
achieve 100-percent modulation.
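
The swing from zero to twice the unmodulated value is easy to verify numerically. Below is a
minimal Python sketch of a 100-percent-modulated AM envelope; the carrier and audio frequencies
are illustrative assumptions.

import numpy as np

fs = 100_000                      # sample rate, Hz
t = np.arange(fs // 100) / fs     # 10 ms of signal
fc, fm = 10_000, 1_000            # carrier and audio frequencies, Hz
m = 1.0                           # modulation index; 1.0 = 100 percent

audio = np.sin(2 * np.pi * fm * t)
# Plate modulation varies the effective plate supply, so the carrier
# amplitude follows (1 + m * audio).
am = (1 + m * audio) * np.cos(2 * np.pi * fc * t)

envelope = 1 + m * audio
print(envelope.min(), envelope.max())   # 0.0 and 2.0: zero to twice the rest value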

(or)

Low level

Here a small audio stage is used to modulate a low power stage; the output of this stage is then
amplified using a linear RF amplifier.

Advantages

The advantage of using a linear RF amplifier is that the smaller early stages can be modulated,
which only requires a small audio amplifier to drive the modulator.

Disadvantages

The great disadvantage of this system is that the amplifier chain is less efficient, because it has to
be linear to preserve the modulation. Hence Class C amplifiers cannot be employed.

An approach which marries the advantages of low-level modulation with the efficiency of a
Class C power amplifier chain is to arrange a feedback system to compensate for the substantial
distortion of the AM envelope. A simple detector at the transmitter output (which can be little
more than a loosely coupled diode) recovers the audio signal, and this is used as negative
feedback to the audio modulator stage. The overall chain then acts as a linear amplifier as far as
the actual modulation is concerned, though the RF amplifier itself still retains the Class C
efficiency. This approach is widely used in practical medium power transmitters, such as AM
radiotelephones.
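
A toy numerical model illustrates the idea. Everything below (the power law standing in for the
Class C chain's envelope distortion, the loop gain, the test tone) is an assumption chosen for
the sketch, not a model of any particular transmitter.

import numpy as np

fs = 48_000
t = np.arange(fs // 50) / fs             # 20 ms of signal
audio = np.sin(2 * np.pi * 200 * t)      # 200 Hz test tone
desired = 1 + 0.8 * audio                # the AM envelope we want

def class_c_envelope(x):
    # Toy compressive nonlinearity standing in for the Class C chain.
    return np.abs(x) ** 1.3

open_loop = class_c_envelope(desired)    # no feedback: distorted envelope

# Envelope feedback: a simple detector recovers the output envelope, and the
# error against the desired envelope is integrated back into the drive.
out = np.zeros_like(desired)
correction, loop_gain = 0.0, 0.9
for i, d in enumerate(desired):
    out[i] = class_c_envelope(d + correction)
    correction += loop_gain * (d - out[i])   # negative feedback

print(np.abs(open_loop - desired).mean())    # noticeable envelope distortion
print(np.abs(out - desired).mean())          # much smaller with feedback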

High level

With high-level modulation, the modulation takes place at the final amplifier stage, where the
carrier signal is at its maximum.
Advantages

One advantage of using class C amplifiers in a broadcast AM transmitter is that only the final
stage needs to be modulated, and that all the earlier stages can be driven at a constant level.
These class C stages will be able to generate the drive for the final stage for a smaller DC power
input. However, in many designs in order to obtain better quality AM the penultimate RF stages
will need to be subject to modulation as well as the final stage.

Disadvantages

A large audio amplifier will be needed for the modulation stage, at least equal to the power of the
transmitter output itself. Traditionally the modulation is applied using an audio transformer, and
this can be bulky. Direct coupling from the audio amplifier is also possible (known as a cascode
arrangement), though this usually requires quite a high DC supply voltage (say 30 V or more),
which is not suitable for mobile units.

CCD
A charge-coupled device (CCD) is a device for the movement of electrical charge, usually from
within the device to an area where the charge can be manipulated, for example conversion into a
digital value. This is achieved by "shifting" the signals between stages within the device one at a
time. Technically, CCDs are implemented as shift registers that move charge between capacitive
bins in the device, with the shift allowing for the transfer of charge between bins.

Often the device is integrated with an image sensor, such as a photoelectric device to produce the
charge that is being read, thus making the CCD a major technology for digital imaging. Although
CCDs are not the only technology to allow for light detection, CCDs are widely used in
professional, medical, and scientific applications where high-quality image data are required.

The charge-coupled device was invented in 1969 at AT&T Bell Labs by Willard Boyle and
George E. Smith.

Operation:

In a CCD for capturing images, there is a photoactive region (an epitaxial layer of silicon), and a
transmission region made out of a shift register (the CCD, properly speaking).

An image is projected through a lens onto the capacitor array (the photoactive region), causing
each capacitor to accumulate an electric charge proportional to the light intensity at that location.
A one-dimensional array, used in line-scan cameras, captures a single slice of the image, while a
two-dimensional array, used in video and still cameras, captures a two-dimensional picture
corresponding to the scene projected onto the focal plane of the sensor. Once the array has been
exposed to the image, a control circuit causes each capacitor to transfer its contents to its
neighbor (operating as a shift register). The last capacitor in the array dumps its charge into a
charge amplifier, which converts the charge into a voltage. By repeating this process, the
controlling circuit converts the entire contents of the array in the semiconductor to a sequence of
voltages. In a digital device, these voltages are then sampled, digitized, and usually stored in
memory; in an analog device (such as an analog video camera), they are processed into a
continuous analog signal (e.g. by feeding the output of the charge amplifier into a low-pass filter)
which is then processed and fed out to other circuits for transmission, recording, or other
processing.
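
The bucket-brigade readout just described can be mimicked in a few lines of Python. In this
sketch the array size and the charge-to-voltage gain are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
# Toy "exposure": electrons accumulated in a 4 x 5 array of pixel wells.
charge = rng.integers(0, 1000, size=(4, 5)).astype(float)

UV_PER_ELECTRON = 5.0   # hypothetical charge-amplifier gain, microvolts per e-

voltages = []
while charge.size:
    # Parallel transfer: the bottom row shifts into the serial register;
    # the rows above each move one step closer.
    serial_register = list(charge[-1])
    charge = charge[:-1]
    # Serial transfer: packets are clocked one at a time into the charge
    # amplifier, which converts each packet to a voltage.
    while serial_register:
        packet = serial_register.pop()   # the capacitor nearest the amplifier dumps first
        voltages.append(packet * UV_PER_ELECTRON)

print(f"read out {len(voltages)} voltage samples")   # 20 = 4 x 5 pixels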

[Figure: Charge packets (electrons, blue) are collected in potential wells (yellow) created by
applying positive voltage at the gate electrodes (G). Applying positive voltage to the gate
electrodes in the correct sequence transfers the charge packets.]

The photoactive region of the CCD is, generally, an epitaxial layer of silicon. It has a doping of
p+ (Boron) and is grown upon a substrate material, often p++. In buried channel devices, the
type of design utilized in most modern CCDs, certain areas of the surface of the silicon are ion
implanted with phosphorus, giving them an n-doped designation. This region defines the channel
in which the photogenerated charge packets will travel. The gate oxide, i.e. the capacitor
dielectric, is grown on top of the epitaxial layer and substrate. Later on in the process polysilicon
gates are deposited by chemical vapor deposition, patterned with photolithography, and etched in
such a way that the separately phased gates lie perpendicular to the channels. The channels are
further defined by utilization of the LOCOS process to produce the channel stop region. Channel
stops are thermally grown oxides that serve to isolate the charge packets in one column from
those in another. These channel stops are produced before the polysilicon gates are, as the
LOCOS process utilizes a high temperature step that would destroy the gate material. The
channel stops are parallel to, and exclusive of, the channel, or "charge carrying", regions.
Channel stops often have a p+ doped region underlying them, providing a further barrier to the
electrons in the charge packets (this discussion of the physics of CCD devices assumes an
electron-transfer device, though hole transfer is possible).

One should note that the clocking of the gates, alternately high and low, will forward and reverse
bias the diode that is provided by the buried channel (n-doped) and the epitaxial layer (p-
doped). This will cause the CCD to deplete near the p-n junction and will collect and move the
charge packets beneath the gates, and within the channels, of the device.

It should be noted that CCD manufacturing and operation can be optimized for different uses.
The above process describes a frame transfer CCD. While CCDs may be manufactured on a
heavily doped p++ wafer, it is also possible to manufacture a device inside p-wells that have been
placed on an n-wafer. This second method, reportedly, reduces smear, dark current, and infrared
and red response. This method of manufacture is used in the construction of interline transfer
devices.
Another version of CCD is called a peristaltic CCD. In a peristaltic charge-coupled device, the
charge packet transfer operation is analogous to the peristaltic contraction and dilation of the
digestive system. The peristaltic CCD has an additional implant that keeps the charge away from
the silicon/silicon dioxide interface and generates a large lateral electric field from one gate to
the next. This provides an additional driving force to aid in transfer of the charge packets.

Architecture

CCD image sensors can be implemented in several different architectures. The most
common are full-frame, frame-transfer, and interline. The distinguishing characteristic of each of
these architectures is their approach to the problem of shuttering.

In a full-frame device, all of the image area is active, and there is no electronic shutter. A
mechanical shutter must be added to this type of sensor or the image smears as the device is
clocked or read out.

With a frame-transfer CCD, half of the silicon area is covered by an opaque mask (typically
aluminum). The image can be quickly transferred from the image area to the opaque area or
storage region with acceptable smear of a few percent. That image can then be read out slowly
from the storage region while a new image is integrating or exposing in the active area. Frame-
transfer devices typically do not require a mechanical shutter and were a common architecture
for early solid-state broadcast cameras. The downside to the frame-transfer architecture is that it
requires twice the silicon real estate of an equivalent full-frame device; hence, it costs roughly
twice as much.

The interline architecture extends this concept one step further and masks every other column of
the image sensor for storage. In this device, only one pixel shift has to occur to transfer from
image area to storage area; thus, shutter times can be less than a microsecond and smear is
essentially eliminated. The advantage is not free, however, as the imaging area is now covered
by opaque strips, dropping the fill factor to approximately 50 percent and the effective quantum
efficiency by an equivalent amount. Modern designs have addressed this deleterious
characteristic by adding microlenses on the surface of the device to direct light away from the
opaque regions and onto the active area. Microlenses can bring the fill factor back up to 90 percent
or more depending on pixel size and the overall system's optical design.

The choice of architecture comes down to one of utility. If the application cannot tolerate an
expensive, failure-prone, power-intensive mechanical shutter, an interline device is the right
choice. Consumer snapshot cameras have used interline devices. On the other hand, for
applications that require the best possible light collection, where money, power, and time matter
less, the full-frame device is the right choice. Astronomers tend to prefer full-frame
devices. The frame-transfer falls in between and was a common choice before the fill-factor
issue of interline devices was addressed. Today, frame-transfer is usually chosen when an
interline architecture is not available, such as in a back-illuminated device.

CCDs containing grids of pixels are used in digital cameras, optical scanners, and video cameras
as light-sensing devices. They commonly respond to 70 percent of the incident light (meaning a
quantum efficiency of about 70 percent) making them far more efficient than photographic film,
which captures only about 2 percent of the incident light.
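
In numbers, using the percentages quoted in this section (a back-of-the-envelope sketch, not
measured data):

qe = 0.70          # CCD quantum efficiency quoted above
film = 0.02        # photographic film

print(qe / film)   # 35.0: the CCD is about 35 times as efficient as film

# For an interline device, effective sensitivity also scales with fill factor
# (the 50 and 90 percent figures from the Architecture discussion above):
print(qe * 0.50)   # bare interline:   0.35 effective QE
print(qe * 0.90)   # with microlenses: 0.63 effective QE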

Most common types of CCDs are sensitive to near-infrared light, which allows infrared
photography, night-vision devices, and zero lux (or near zero lux) video-recording/photography.
For normal silicon-based detectors, the sensitivity is limited to wavelengths below about 1.1 μm.
One other consequence of
their sensitivity to infrared is that infrared from remote controls often appears on CCD-based
digital cameras or camcorders if they do not have infrared blockers.

Cooling reduces the array's dark current, improving the sensitivity of the CCD to low light
intensities, even for ultraviolet and visible wavelengths. Professional observatories often cool
their detectors with liquid nitrogen to reduce the dark current, and therefore the thermal noise,
to negligible levels.
