LECTURE NOTES
BY-GIRRAJ SHARMA
ANALOG COMMUNICATION
UNIT 1: NOISE EFFECTS IN COMMUNICATION SYSTEMS: Resistor noise, Networks with reactive
elements, Noise temperature, Noise bandwidth, effective input noise temperature, Noise figure. Noise
figure & equivalent noise temperature in cascaded circuits.
UNIT 2 : AMPLITUDE MODULATION : Frequency translation, Recovery of base band signal, Spectrum
& power relations in AM systems. Methods of generation & demodulation of AM-DSB, AM-DSB/SC and
AM-SSB signals. Modulation & detector circuits for AM systems. AM transmitters & receivers.
UNIT 3: FREQUENCY MODULATION : Phase & freq. modulation & their relationship, Spectrum &
bandwidth of a sinusoidally modulated FM signal, phasor diagram, Narrow band & wide band FM.
Generation & demodulation of FM signals. FM transmitters & receivers. Comparison of AM, FM & PM.
Pre-emphasis & de-emphasis. Threshold in FM, PLL demodulator.
UNIT 4: NOISE IN AM AND FM: Calculation of signal-to-noise ratio in SSB-SC, DSB-SC, DSB with
carrier, Noise calculation of square law demodulator & envelope detector. Calculation of S/N ratio in FM
demodulators, Super heterodyne receivers.
UNIT 5: PULSE ANALOG MODULATION : Practical aspects of sampling: Natural and flat top sampling.
Noise
While noise is generally unwanted, it can serve a useful purpose in some applications, such as
random number generation or dithering.
Types
Thermal noise
Thermal noise is approximately white, meaning that its power spectral density is nearly equal
throughout the frequency spectrum. The amplitude of the signal has very nearly a Gaussian
probability density function. A communication system affected by thermal noise is often
modelled as an additive white Gaussian noise (AWGN) channel.
The root mean square (RMS) voltage due to thermal noise, vn, generated in a resistance R (ohms)
over bandwidth Δf (hertz), is given by

vn = √(4 kB T R Δf)

where kB is Boltzmann's constant (joules per kelvin) and T is the resistor's absolute temperature
(kelvin).
As the amount of thermal noise generated depends upon the temperature of the circuit, very
sensitive circuits such as preamplifiers in radio telescopes are sometimes cooled in liquid
nitrogen to reduce the noise level.
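
As a quick numeric check of the formula above, here is a minimal sketch in Python; the resistor value and bandwidth are arbitrary example numbers, not taken from the notes:

import numpy as np

k_B = 1.380649e-23  # Boltzmann's constant, J/K

def thermal_noise_vrms(R, T, delta_f):
    # RMS Johnson-Nyquist noise voltage: vn = sqrt(4*kB*T*R*df)
    return np.sqrt(4 * k_B * T * R * delta_f)

# Example: a 50-ohm resistor at 290 K over a 1 MHz bandwidth -> about 0.9 uV
print(thermal_noise_vrms(R=50.0, T=290.0, delta_f=1e6))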
Shot noise
Shot noise in electronic devices consists of unavoidable random statistical fluctuations of the
electric current in an electrical conductor. Random fluctuations are inherent when current flows,
as the current is a flow of discrete charges (electrons).
Flicker noise
Flicker noise, also known as 1/f noise, is a signal or process with a frequency spectrum that falls
off steadily into the higher frequencies, with a pink spectrum. It occurs in almost all electronic
devices, and results from a variety of effects, though always related to a direct current.
Burst noise
Burst noise consists of sudden step-like transitions between two or more levels (non-Gaussian),
as high as several hundred millivolts, at random and unpredictable times. Each shift in offset
voltage or current lasts for several milliseconds, and the intervals between pulses tend to be in
the audio range (less than 100 Hz), leading to the term popcorn noise for the popping or
crackling sounds it produces in audio circuits.
Avalanche noise
Avalanche noise is the noise produced when a junction diode is operated at the onset of
avalanche breakdown, a semiconductor junction phenomenon in which carriers in a high voltage
gradient develop sufficient energy to dislodge additional carriers through physical impact,
creating ragged current flows.
Quantification
The noise level in an electronic system is typically measured as an electrical power N in watts or
dBm, a root mean square (RMS) voltage (identical to the noise standard deviation) in volts,
dBμV or a mean squared error (MSE) in volts squared. Noise may also be characterized by its
probability distribution and noise spectral density N0(f) in watts per hertz.
A noise signal is typically considered as a linear addition to a useful information signal. Typical
signal quality measures involving noise are signal-to-noise ratio (SNR or S/N), signal-to-
quantization-noise ratio (SQNR) in analog-to-digital conversion and compression, peak signal-to-
noise ratio (PSNR) in image and video coding, Eb/N0 in digital transmission, carrier-to-noise ratio
(CNR) before the detector in carrier-modulated systems, and noise figure in cascaded amplifiers.
Noise power is measured in watts or decibels (dB) relative to a standard power, usually
indicated by adding a suffix after dB. Examples of electrical noise-level measurement units are
dBu, dBm0, dBrn, dBrnC, dBrn(f1 − f2), and dBrn(144-line).
Noise levels are usually viewed in opposition to signal levels and so are often seen as part of a
signal-to-noise ratio (SNR). Telecommunication systems strive to increase the ratio of signal
level to noise level in order to effectively transmit data. In practice, if the transmitted signal falls
below the level of the noise (often designated as the noise floor) in the system, data can no
longer be decoded at the receiver. Noise in telecommunication systems is a product of both
internal and external sources to the system.
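
As a minimal illustration of these quality measures (Python; the power values below are arbitrary examples, not from the notes):

import numpy as np

def snr_db(signal_power_w, noise_power_w):
    # Signal-to-noise ratio in decibels from powers in watts
    return 10 * np.log10(signal_power_w / noise_power_w)

# A 1 mW signal over a 1 uW noise floor gives 30 dB of SNR; if the signal
# fell below the noise floor, snr_db would go negative and decoding fails.
print(snr_db(1e-3, 1e-6))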
Dither
If the noise source is correlated with the signal, such as in the case of quantisation error, the
intentional introduction of additional noise, called dither, can reduce overall noise in the
bandwidth of interest. This technique allows retrieval of signals below the nominal detection
threshold of an instrument. This is an example of stochastic resonance.
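
A toy demonstration of retrieving a signal below the quantization threshold (Python/numpy; the step size, dither distribution and signal level are assumed for illustration):

import numpy as np

rng = np.random.default_rng(2)
step = 0.1                                   # quantizer step size (assumed)
x = 0.03                                     # constant signal below step/2
n = 100_000

plain = step * np.round(x / step)            # always 0: the signal is lost
dither = rng.uniform(-step/2, step/2, n)     # uniform dither for brevity
dithered = step * np.round((x + dither) / step)

print(plain, dithered.mean())                # 0.0 vs ~0.03: recovered on average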
White noise
Colors of noise include white, pink, brown/red, and grey.
White noise is a random signal (or process) with a flat power spectral density. In other words,
the signal contains equal power within a fixed bandwidth at any center frequency. White noise
draws its name from white light in which the power spectral density of the light is distributed
over the visible band in such a way that the eye's three color receptors (cones) are approximately
equally stimulated.
Statistical properties
Being uncorrelated in time does not restrict the values a signal can take. Any distribution of
values is possible (although it must have zero DC component). Even a binary signal which can
only take on the values 1 or -1 will be white if the sequence is statistically uncorrelated. Noise
having a continuous distribution, such as a normal distribution, can of course be white.
It is often incorrectly assumed that Gaussian noise (i.e., noise with a Gaussian amplitude
distribution — see normal distribution) is necessarily white noise, yet neither property implies
the other. Gaussianity refers to the probability distribution with respect to the value i.e. the
probability that the signal has a certain given value, while the term 'white' refers to the way the
signal power is distributed over time or among frequencies.
FFT spectrogram of pink noise (left) and white noise (right), shown with linear frequency axis
(vertical).
We can therefore find Gaussian white noise, but also Poisson, Cauchy, etc. white noises. Thus,
the two words "Gaussian" and "white" are often both specified in mathematical models of
systems. Gaussian white noise is a good approximation of many real-world situations and
generates mathematically tractable models. These models are used so frequently that the term
additive white Gaussian noise has a standard abbreviation: AWGN. Gaussian white noise has the
useful statistical property that its values are independent (see Statistical independence).
White noise is the generalized mean-square derivative of the Wiener process or Brownian
motion.
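
A short numerical sketch of these properties (Python with numpy; the lag and sample count are arbitrary choices): Gaussian samples that are uncorrelated in time show near-zero autocorrelation at non-zero lag and an approximately flat periodogram.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
w = rng.standard_normal(n)   # Gaussian white noise with unit variance

# Sample autocorrelation at a non-zero lag should be close to zero
lag = 1
print(np.mean(w[:-lag] * w[lag:]))

# The periodogram should be approximately flat, averaging ~1 (the variance)
psd = np.abs(np.fft.rfft(w))**2 / n
print(psd.mean())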
Applications
It is used by some emergency vehicle sirens due to its ability to cut through background noise,
which makes it easier to locate.
It is also used to generate impulse responses. To set up the equalization (EQ) for a concert or
other performance in a venue, a short burst of white or pink noise is sent through the PA system
and monitored from various points in the venue so that the engineer can tell if the acoustics of
the building naturally boost or cut any frequencies. The engineer can then adjust the overall
equalization to ensure a balanced mix.
White noise can be used for frequency response testing of amplifiers and electronic filters. It is
not used for testing loudspeakers as its spectrum contains too great an amount of high frequency
content. Pink noise is used for testing transducers such as loudspeakers and microphones.
White noise is a common synthetic noise source used for sound masking by a tinnitus masker.[1]
White noise is a particularly good source signal for masking devices as it contains higher
frequencies in equal volumes to lower ones, and so is capable of more effective masking for high
pitched ringing tones most commonly perceived by tinnitus sufferers.
White noise is used as the basis of some random number generators. For example, Random.org
uses a system of atmospheric antennae to generate random digit patterns from white noise.
White noise machines are sold as privacy enhancers and sleep aids and to mask tinnitus. Some
people claim white noise, when used with headphones, can aid concentration by masking
irritating or distracting noises in a person's environment.[2]
Mathematical definition
White random vector
A random vector w is a white random vector if and only if its mean vector and autocorrelation
matrix are the following:

μw = E{w} = 0
Rww = E{w w^T} = σ^2 I
That is, it is a zero mean random vector, and its autocorrelation matrix is a multiple of the
identity matrix. When the autocorrelation matrix is a multiple of the identity, we say that it has
spherical correlation.
A continuous-time random process w(t), where t ∈ ℝ, is a white noise process if and only if its
mean function and autocorrelation function satisfy the following:

μw(t) = E{w(t)} = 0
Rww(t1, t2) = E{w(t1) w(t2)} = (N0/2) δ(t1 − t2)

i.e. it is a zero-mean process for all time and has infinite power at zero time shift since its
autocorrelation function is the Dirac delta function.
The above autocorrelation function implies the following power spectral density:

Sww(ω) = N0/2

since the Fourier transform of the delta function is equal to 1. Since this power spectral density is
the same at all frequencies, we call it white as an analogy to the frequency spectrum of white
light.
These two ideas are crucial in applications such as channel estimation and channel equalization
in communications and audio. These concepts are also used in data compression.
Suppose that a random vector x has covariance matrix Kxx. Since this matrix is Hermitian
symmetric and positive semidefinite, by the spectral theorem from linear algebra, we can
diagonalize or factor the matrix in the following way:

Kxx = E Λ E^T

where E is the orthogonal matrix of eigenvectors and Λ is the diagonal matrix of eigenvalues.
We can simulate the 1st and 2nd moment properties of this random vector with mean μ and
covariance matrix Kxx via the following transformation of a white vector w of unit variance:

x = H w + μ

where

H = E Λ^(1/2)

The method for whitening a vector x with mean μ and covariance matrix Kxx is to perform the
following calculation:

w = Λ^(−1/2) E^T (x − μ)
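
A minimal numpy sketch of both transformations; the 2×2 covariance matrix below is an assumed example:

import numpy as np

rng = np.random.default_rng(1)
K_xx = np.array([[2.0, 0.8],
                 [0.8, 1.0]])      # assumed example covariance matrix
mu = np.zeros(2)

# Factor K_xx = E diag(lam) E^T, then H = E diag(sqrt(lam))
lam, E = np.linalg.eigh(K_xx)
H = E @ np.diag(np.sqrt(lam))

w = rng.standard_normal((2, 100_000))    # white vector with unit variance
x = H @ w + mu[:, None]                  # simulated vector: covariance ~ K_xx
print(np.cov(x))

# Whitening: undo the transformation to recover a white vector
w_hat = np.diag(1/np.sqrt(lam)) @ E.T @ (x - mu[:, None])
print(np.cov(w_hat))                     # ~ identity matrix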
White noise fed into a linear, time-invariant filter to simulate the 1st and 2nd moments of an
arbitrary random process.
Because Kx(τ) is Hermitian symmetric and positive semi-definite, it follows that Sx(ω) is real and
can be factored as

Sx(ω) = H(ω) H*(ω) = |H(ω)|^2

Choosing a minimum phase H(ω) so that its poles and zeros lie inside the left half s-plane, we
can then simulate x(t) by passing white noise through a filter with H(ω) as its transfer function:

x̂(t) = (h ∗ w)(t)

where w(t) is a continuous-time, white-noise signal with the following 1st and 2nd moment
properties:

μw(t) = 0
Rww(τ) = E{w(t) w(t − τ)} = δ(τ)

Thus, the resultant signal x̂(t) has the same 2nd moment properties as the desired signal x(t).
An arbitrary random process x(t) fed into a linear, time-invariant filter that whitens x(t) to create
white noise at the output.
We can whiten this signal using frequency domain techniques. We factor the power spectral
density Sx(ω) as described above and choose the minimum phase H(ω) so that its poles and zeros
lie inside the left half s-plane. We can then whiten x(t) with the following inverse filter:

Hinv(ω) = 1 / H(ω)

We choose the minimum phase filter so that the resulting inverse filter is stable. Additionally, we
must be sure that H(ω) is strictly positive for all ω so that Hinv(ω) does not have any
singularities. Passing x(t) through this inverse filter gives

W(ω) = Hinv(ω) X(ω)

so that w(t) is a white noise random process with zero mean and constant, unit power spectral
density:

Sw(ω) = |Hinv(ω)|^2 Sx(ω) = 1
Note that this power spectral density corresponds to a delta function for the covariance function
of w(t).
In music
White noise, pink noise, and brown noise are used as percussion in 8-bit (chiptune) music.
Flicker noise
Flicker noise is often characterized by the corner frequency ƒc between the region dominated by
flicker (1/f) noise and the region dominated by white noise. MOSFETs have a higher ƒc than
JFETs or bipolar transistors; for the latter, ƒc is usually below 2 kHz.
The flicker noise voltage power in a MOSFET can be expressed as K/(Cox·W·L·ƒ), where K is a
process-dependent constant, Cox is the gate-oxide capacitance per unit area, and W and L are the
channel width and length respectively[1].
Flicker noise is found in carbon composition resistors, where it is referred to as excess noise,
since it increases the overall noise level above the thermal noise level, which is present in all
resistors. In contrast, wire-wound resistors have the least amount of flicker noise. Since flicker
noise is related to the level of DC, if the current is kept low, thermal noise will be the
predominant effect in the resistor, and the type of resistor used will not affect noise levels.
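
To make the corner-frequency idea concrete, here is a rough sketch that equates the 1/f term K/(Cox·W·L·ƒ) to an assumed white-noise floor. Every device value below (K, Cox, W, L, gm, and the 4kT(2/3)/gm long-channel thermal model) is a hypothetical illustration, not data from these notes:

k_B = 1.380649e-23
T = 290.0

K = 1e-25           # flicker-noise coefficient (assumed)
Cox = 5e-3          # gate-oxide capacitance per unit area, F/m^2 (assumed)
W, L = 10e-6, 1e-6  # channel width and length, m (assumed)
gm = 1e-3           # transconductance, S (assumed)

# Assumed white-noise floor: long-channel MOSFET thermal model 4kT(2/3)/gm
white_floor = 4 * k_B * T * (2.0/3.0) / gm

# Corner frequency: K/(Cox*W*L*fc) = white_floor  =>  fc = K/(Cox*W*L*white_floor)
fc = K / (Cox * W * L * white_floor)
print(fc)   # on the order of a hundred kHz for these assumed numbers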
Measurement
For measurements, the interest is in the "drift" of a variable with respect to a measurement at a
previous time. This is calculated by differencing the signal at the two measurement times, and
the result includes the contribution of both positive and negative frequency terms.
After some manipulation, the variance of the voltage difference can be expressed in terms of an
upper cutoff frequency ƒh, a brick-wall filter limiting the upper bandwidth during measurement.[2]
Note: The equivalent noise resistance Rn in terms of the mean-square noise-generator voltage, e²,
within a frequency increment, Δf, is given by

Rn = e² / (4 kB T0 Δf)

where T0 is the standard reference temperature (290 K).
Noise temperature
The noise power delivered by a noise source to a matched load RL can be written as

PRL = kB Ts Bn

in watts, where kB is Boltzmann's constant, Ts is the noise temperature of the source (kelvin),
and Bn is the noise bandwidth (hertz).
Engineers often model noisy components as an ideal component in series with a noisy resistor.
The source resistor is often assumed to be at room temperature, conventionally taken as 290 K
(17 °C, 62 °F).[1]
The additive noise in a receiving system can be of thermal origin (thermal noise) or can be from
other noise-generating processes. Most of these other processes generate noise whose spectrum
and probability distributions are similar to thermal noise. Because of these similarities, the
contributions of all noise sources can be lumped together and regarded as thermal noise. The
noise power generated by all these sources (Pn) can be described by assigning to the noise a
noise temperature (Tn) defined as:[3]

Tn = Pn / (kB Bn)
In a wireless communications receiver, Tn would equal the sum of two noise temperatures:

Tn = Tant + Tsys

Tant is the antenna noise temperature and determines the noise power seen at the output of the
antenna. The physical temperature of the antenna has no effect on Tant. Tsys is the noise
temperature of the receiver circuitry and is representative of the noise generated by the non-ideal
components inside the receiver.
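
A minimal sketch of these relations (Python; the temperatures and bandwidth below are assumed example values):

import numpy as np

k_B = 1.380649e-23

T_ant, T_sys = 60.0, 150.0   # assumed antenna and receiver noise temperatures, K
B_n = 1e6                    # assumed noise bandwidth, Hz

T_n = T_ant + T_sys          # Tn = Tant + Tsys
P_n = k_B * T_n * B_n        # Pn = kB * Tn * Bn, in watts
print(P_n, 10*np.log10(P_n/1e-3))   # about -115 dBm for these values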
The noise factor F (a linear term) can be converted to noise figure NF (in decibels) using:

NF = 10 log10(F)

For a device with effective input noise temperature Te, F = 1 + Te/T0, where T0 = 290 K is the
standard reference temperature.
Components early in the cascade have a much larger influence on the overall noise temperature
than those later in the chain. This is because noise introduced by the early stages is, along with
the signal, amplified by the later stages. The Friis equation shows why a good quality
preamplifier is important in a receive chain.
Similar problems arise when trying to measure the noise temperature of an antenna. Since the
noise temperature is heavily dependent on the orientation of the antenna, the direction in which
the antenna was pointed during the measurement must be specified.
Noise figure
Noise figure (NF) is a measure of degradation of the signal-to-noise ratio (SNR), caused by
components in a radio frequency (RF) signal chain. The noise figure is defined as the ratio of the
output noise power of a device to the portion thereof attributable to thermal noise in the input
termination at standard noise temperature T0 (usually 290 K). The noise figure is thus the ratio of
actual output noise to that which would remain if the device itself did not introduce noise. It is a
number by which the performance of a radio receiver can be specified.
General
The noise figure is the difference in decibels (dB) between the noise output of the actual receiver
and the noise output of an “ideal” receiver with the same overall gain and bandwidth when the
receivers are connected to sources at the standard noise temperature T0 (usually 290 K). The
noise power from a simple load is equal to kTB, where k is Boltzmann's constant, T is the
absolute temperature of the load (for example a resistor), and B is the measurement bandwidth.
This makes the noise figure a useful figure of merit for terrestrial systems where the antenna
effective temperature is usually near the standard 290 K. In this case, one receiver with a noise
figure say 2 dB better than another, will have an output signal to noise ratio that is about 2 dB
better than the other. However, in the case of satellite communications systems, where the
antenna is pointed out into cold space, the antenna effective temperature is often colder than
290 K. In these cases a 2 dB improvement in receiver noise figure will result in more than a 2 dB
improvement in the output signal to noise ratio. For this reason, the related figure of effective
noise temperature is therefore often used instead of the noise figure for characterizing satellite-
communication receivers and low noise amplifiers.
In heterodyne systems, output noise power includes spurious contributions from image-
frequency transformation, but the portion attributable to thermal noise in the input termination at
standard noise temperature includes only that which appears in the output via the principal
frequency transformation of the system and excludes that which appears via the image frequency
transformation.
Definition
The noise factor of a system is defined as:

F = SNRin / SNRout

where SNRin and SNRout are the input and output power signal-to-noise ratios, respectively. The
noise figure is defined as:

NF = SNRin,dB − SNRout,dB

where SNRin,dB and SNRout,dB are in decibels (dB). The noise figure is the noise factor, given in
dB:

NF = 10 log10(F)
These formulae are only valid when the input termination is at standard noise temperature T0,
although in practice small differences in temperature do not significantly affect the values.
Devices with no gain (e.g., attenuators) have a noise factor F equal to their attenuation L
(absolute value, not in dB) when their physical temperature equals T0. More generally, for an
attenuator at a physical temperature T, the noise temperature is Te = (L − 1)T, giving a noise
factor of:

F = 1 + (L − 1) T / T0

If several devices are cascaded, the total noise factor can be found with Friis' formula:

F = F1 + (F2 − 1)/G1 + (F3 − 1)/(G1 G2) + … + (Fn − 1)/(G1 G2 ⋯ Gn−1)

where Fn is the noise factor of the n-th device and Gn is its power gain (linear).
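
A small sketch of Friis' formula in Python; the stage noise figures and gains are assumed example values:

import math

def friis_noise_factor(stages):
    # stages: list of (F, G) pairs in linear (not dB) terms, first stage first
    F_total, gain = 0.0, 1.0
    for i, (F, G) in enumerate(stages):
        F_total += F if i == 0 else (F - 1) / gain
        gain *= G
    return F_total

# Assumed cascade: LNA (NF 1 dB, gain 20 dB) then mixer (NF 8 dB, gain -6 dB)
stages = [(10**(1/10), 10**(20/10)), (10**(8/10), 10**(-6/10))]
print(10 * math.log10(friis_noise_factor(stages)))  # ~1.2 dB: the LNA dominates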
Superheterodyne receiver
History
The superheterodyne principle was revisited in 1918 by U.S. Army Major Edwin Armstrong in
France during World War I. [2] He invented this receiver as a means of overcoming the
deficiencies of early vacuum tube triodes used as high-frequency amplifiers in radio direction
finding equipment. Unlike simple radio communication, which only needs to make transmitted
signals audible, direction-finders measure the received signal strength, which necessitates linear
amplification of the actual carrier wave.
In a triode radio-frequency (RF) amplifier, if both the plate and grid are connected to resonant
circuits tuned to the same frequency, stray capacitive coupling between the grid and the plate
will cause the amplifier to go into oscillation if the stage gain is much more than unity. In early
designs, dozens (in some cases over 100) low-gain triode stages had to be connected in cascade
to make workable equipment, which drew enormous amounts of power in operation and required
a team of maintenance engineers. The strategic value was so high, however, that the British
Admiralty felt the high cost was justified.
Armstrong realized that if RDF receivers could be operated at a higher frequency, this would
allow better detection of enemy shipping. However, at that time, no practical "short wave"
amplifier existed, (defined then as any frequency above 500 kHz) due to the limitations of
existing triodes.
A "heterodyne" refers to a beat or "difference" frequency produced when two or more radio
frequency carrier waves are fed to a detector. The term was coined by Canadian engineer
Reginald Fessenden to describe his proposed method of producing an audible signal from the
Morse Code transmissions of an Alexanderson alternator-type transmitter. With the spark gap
transmitters then in use, the Morse Code signal consisted of short bursts of a heavily modulated
carrier wave which could be clearly heard as a series of short chirps or buzzes in the receiver's
headphones. However, the signal from an Alexanderson Alternator did not have any such
inherent modulation and Morse Code from one of those would only be heard as a series of clicks
or thumps. Fessenden's idea was to run two Alexanderson Alternators, one producing a carrier
frequency 3 kHz higher than the other. In the receiver's detector the two carriers would beat
together to produce a 3 kHz tone; thus, in the headphones, the Morse signals would be heard
as a series of 3 kHz beeps. For this he coined the term "heterodyne" meaning "Generated by a
Difference" (in frequency).
It had been noticed some time before that if a regenerative receiver was allowed to go into
oscillation, other receivers nearby would suddenly start picking up stations on frequencies
different from those that the stations were actually transmitted on. Armstrong (and others)
eventually deduced that this was caused by a "supersonic heterodyne" between the station's
carrier frequency and the oscillator frequency. Thus if a station was transmitting on 300 kHz and
the oscillating receiver was set to 400 kHz, the station would be heard not only at the original
300 kHz, but also at 100 kHz and 700 kHz.
Armstrong realized that this was a potential solution to the "short wave" amplification problem,
since the beat frequency still retained its original modulation, but on a lower carrier frequency.
To monitor a frequency of 1500 kHz for example, he could set up an oscillator to, for example,
1560 kHz, which would produce a heterodyne difference frequency of 60 kHz, a frequency that
could then be more conveniently amplified by the triodes of the day. He termed this the
"Intermediate Frequency" often abbreviated to "IF".
Early superheterodyne receivers used IFs as low as 20 kHz, often based on the self-resonance of
iron-cored transformers. This made them extremely susceptible to image frequency interference,
but at the time, the main objective was sensitivity rather than selectivity. Using this technique, a
small number of triodes could be made to do the work that formerly required dozens of triodes.
In the 1920s, commercial IF filters looked very similar to 1920s audio interstage coupling
transformers, had very similar construction and were wired up in an almost identical manner, and
so they were referred to as "IF Transformers". By the mid-1930s however, superheterodynes
were using higher intermediate frequencies, (typically around 440–470 kHz), with tuned coils
similar in construction to the aerial and oscillator coils, however the name "IF Transformer"
persisted and is still used today. Modern receivers typically use a mixture of ceramic and SAW
(surface-acoustic-wave) resonators as well as traditional tuned-inductor IF transformers.
By the 1930s, improvements in vacuum tube technology rapidly eroded the TRF receiver's cost
advantages, and the explosion in the number of broadcasting stations created a demand for
cheaper, higher-performance receivers.
The development of practical indirectly-heated-cathode tubes allowed the mixer and oscillator
functions to be combined in a single pentode tube, in the so-called autodyne mixer. This was
rapidly followed by the introduction of low-cost multi-element tubes specifically designed for
superheterodyne operation. These allowed the use of much higher intermediate frequencies
(typically around 440–470 kHz) which eliminated the problem of image frequency interference.
By the mid-1930s, the TRF technique was obsolete for commercial receiver production.
The superheterodyne principle was eventually taken up for virtually all commercial radio and TV
designs.
Overview
The superheterodyne receiver has three elements: the local oscillator, a frequency mixer that
mixes the local oscillator's signal with the received signal, and a tuned amplifier.
Reception starts with an antenna signal, optionally amplified, including the frequency the user
wishes to tune, fd. The local oscillator is tuned to produce a frequency fLO close to fd. The
received signal is mixed with the local oscillator signal. This stage does not just linearly add the
two inputs, like an audio mixer. Instead it multiplies the input by the local oscillator, producing
four frequencies in the output: the original signal, the original fLO, and the two new frequencies
fd + fLO and fd − fLO. The output signal also generally contains a number of undesirable mixtures as
well. (These are 3rd- and higher-order intermodulation products. If the mixing were performed
as a pure, ideal multiplication, the original fd and fLO would also not appear; in practice they do
appear because mixing is done by a nonlinear process that only approximates true ideal
multiplication.)
The amplifier portion of the system is tuned to be highly selective at a single frequency, fIF. By
changing fLO, the resulting fd-fLO (or fd+fLO) signal can be tuned to the amplifier's fIF. In typical
amplitude modulation ("AM radio" in the U.S., or MW) receivers, that frequency is 455 kHz; for
FM receivers, it is usually 10.7 MHz; for television, 45 MHz. Other signals from the mixed
output of the heterodyne are filtered out by the amplifier.
The advantage to this method is that most of the radio's signal path has to be sensitive to only a
narrow range of frequencies. Only the front end (the part before the frequency converter stage)
needs to be sensitive to a wide frequency range. For example, the front end might need to be
sensitive to 1–30 MHz, while the rest of the radio might need to be sensitive only to 455 kHz, a
typical IF. Only one or two tuned stages need to be adjusted to track over the tuning range of the
receiver; all the intermediate-frequency stages operate at a fixed frequency which need not be
adjusted.
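
The mixing arithmetic can be checked with a short simulation (Python/numpy; the signal, LO and sample-rate values below are arbitrary choices). An ideal multiplying mixer leaves only the sum and difference frequencies:

import numpy as np

fs = 10e6                     # sample rate, Hz (assumed)
t = np.arange(0, 1e-3, 1/fs)
f_d, f_LO = 1.0e6, 1.455e6    # desired signal and local oscillator (assumed)

mixed = np.cos(2*np.pi*f_d*t) * np.cos(2*np.pi*f_LO*t)

spec = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(t), 1/fs)
print(freqs[spec > 0.3*spec.max()])  # peaks near 455 kHz (f_LO - f_d) and 2.455 MHz
# A real (nonlinear) mixer would also leak f_d, f_LO and higher-order products.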
To overcome obstacles such as image response, multiple IF stages are used, and in some cases
multiple stages with two IFs of different values are used. For example, the front end might be
sensitive to 1–30 MHz, the first half of the radio to 5 MHz, and the last half to 50 kHz. Two
frequency converters would be used, and the radio would be a "double conversion
superheterodyne";[6] a common example is a television receiver where the audio information
is obtained from a second stage of intermediate-frequency conversion. Receivers which are
tunable over a wide bandwidth (e.g. scanners) may use an intermediate frequency higher than the
signal, in order to improve image rejection.[7]
In the case of modern television receivers, no other technique was able to produce the precise
bandpass characteristic needed for vestigial sideband reception, first used with the original
NTSC system introduced in 1941. This originally involved a complex collection of tuneable
inductors which needed careful adjustment, but since the 1970s or early 1980s these
have been replaced with precision electromechanical surface acoustic wave (SAW) filters.
Fabricated by precision laser milling techniques, SAW filters are cheaper to produce, can be
made to extremely close tolerances, and are stable in operation. To avoid tooling costs associated
with these components most manufacturers then tended to design their receivers around the fixed
range of frequencies offered which resulted in de-facto standardization of intermediate
frequencies.
Radio transmitters may also use a mixer stage to produce an output frequency, working more or
less as the reverse of a superheterodyne receiver.
Intermediate frequencies
Usually the intermediate frequency is lower than either the carrier or oscillator frequencies, but
with some types of receiver (e.g. scanners and spectrum analyzers) it is more convenient to use a
higher intermediate frequency.
In order to avoid interference to and from signal frequencies close to the intermediate frequency,
in many countries IF frequencies are controlled by regulatory authorities. Examples of common
IFs are 455 kHz for medium-wave AM radio, 10.7 MHz for FM, 38.9 MHz (Europe) or 45 MHz
(US) for television, and 70 MHz for satellite and terrestrial microwave equipment.
Drawbacks
High-side and low-side injection
One major disadvantage to the superheterodyne receiver is the problem of image frequency. In
heterodyne receivers, an image frequency is an undesired input frequency equal to the station
frequency plus (for high-side injection) or minus (for low-side injection) twice the intermediate
frequency. The image frequency results in two stations being received at the same time, thus
producing interference. Image frequencies can be eliminated by sufficient attenuation of the
incoming signal by the RF amplifier filter of the superheterodyne receiver.
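
As a sketch of the arithmetic (Python; the frequencies are the usual broadcast-band examples):

def image_frequency(f_station, f_if, high_side=True):
    # With high-side injection (f_LO above the station) the image sits
    # 2*f_IF above the wanted frequency; with low-side injection, 2*f_IF below.
    return f_station + 2*f_if if high_side else f_station - 2*f_if

# A station at 1000 kHz received with a 455 kHz IF has its image at 1910 kHz
print(image_frequency(1000e3, 455e3))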
Early Autodyne receivers typically used IFs of only 150 kHz or so, as it was difficult to maintain
reliable oscillation if higher frequencies were used. As a consequence, most Autodyne receivers
needed quite elaborate antenna tuning networks, often involving double-tuned coils, to avoid
image interference. Later superhets used tubes especially designed for oscillator/mixer use,
which were able to work reliably with much higher IFs, reducing the problem of image
interference and so allowing simpler and cheaper aerial tuning circuitry.
For medium-wave AM radio, a variety of IFs have been used, but usually 455 kHz is used.
It is difficult to keep stray radiation from the local oscillator below the level that a nearby
receiver can detect. The receiver's local oscillator can act like a miniature CW transmitter. This
means that there can be mutual interference in the operation of two or more superheterodyne
receivers in close proximity. In espionage, oscillator radiation gives a means to detect a covert
receiver and its operating frequency. One effective way of preventing the local oscillator signal
from radiating out from the receiver's antenna is by adding a stage of RF amplification between
the receiver's antenna and its mixer stage.
Local oscillator sideband noise
Local oscillators typically generate a single frequency signal that has negligible amplitude
modulation but some random phase modulation. Either of these impurities spreads some of the
signal's energy into sideband frequencies. That causes a corresponding widening of the receiver's
frequency response, which would defeat the aim to make a very narrow bandwidth receiver such
as to receive low-rate digital signals. Care needs to be taken to minimise oscillator phase noise,
usually by ensuring that the oscillator never enters a non-linear mode.
In electronics, modulation is the process of varying one or more properties of a high frequency
periodic waveform, called the carrier signal, with respect to a modulating signal. This is done in
a similar fashion as a musician may modulate a tone (a periodic waveform) from a musical
instrument by varying its volume, timing and pitch. The three key parameters of a periodic
waveform are its amplitude ("volume"), its phase ("timing") and its frequency ("pitch"), all of
which can be modified in accordance with a low frequency signal to obtain the modulated signal.
Typically a high-frequency sinusoid waveform is used as the carrier signal, but a square wave
pulse train may also be used.
In music synthesizers, modulation may be used to synthesise waveforms with a desired overtone
spectrum. In this case the carrier frequency is typically of the same order as, or much lower than,
the frequency of the modulating waveform. See for example frequency modulation synthesis or
ring modulation.
A device that performs modulation is known as a modulator and a device that performs the
inverse operation of modulation is known as a demodulator (sometimes detector or demod). A
device that can do both operations is a modem (short for "Modulator-Demodulator").
The aim of analog modulation is to transfer an analog baseband (or lowpass) signal, for
example an audio signal or TV signal, over an analog bandpass channel, for example a limited
radio frequency band or a cable TV network channel.
Analog and digital modulation facilitate frequency division multiplexing (FDM), where several
low pass information signals are transferred simultaneously over the same shared physical
medium, using separate passband channels.
The aim of digital baseband modulation methods, also known as line coding, is to transfer a
digital bit stream over a baseband channel, typically a non-filtered copper wire such as a serial
bus or a wired local area network.
The aim of pulse modulation methods is to transfer a narrowband analog signal, for example a
phone call over a wideband baseband channel or, in some of the schemes, as a bit stream over
another digital transmission system.
Amplitude modulation (AM) (here the amplitude of the carrier signal is varied in
accordance with the instantaneous amplitude of the modulating signal)
o Double-sideband modulation (DSB)
Double-sideband modulation with carrier (DSB-WC) (used on the AM
radio broadcasting band)
Double-sideband suppressed-carrier transmission (DSB-SC)
Double-sideband reduced carrier transmission (DSB-RC)
o Single-sideband modulation (SSB, or SSB-AM),
SSB with carrier (SSB-WC)
SSB suppressed carrier modulation (SSB-SC)
o Vestigial sideband modulation (VSB, or VSB-AM)
o Quadrature amplitude modulation (QAM)
Angle modulation
o Frequency modulation (FM) (here the frequency of the carrier signal is varied in
accordance with the instantaneous amplitude of the modulating signal)
o Phase modulation (PM) (here the phase shift of the carrier signal is varied in
accordance with the instantaneous amplitude of the modulating signal)
The accompanying figure shows the results of (amplitude-)modulating a signal onto a carrier
(both of which are sine waves). At any point in time, the amplitude of the modulated signal is
equal to the sum of the carrier amplitude and the modulating signal amplitude.
A simple example: A telephone line is designed for transferring audible sounds, for example
tones, and not digital bits (zeros and ones). Computers may however communicate over a
telephone line by means of modems, which are representing the digital bits by tones, called
symbols. If there are four alternative symbols (corresponding to a musical instrument that can
generate four different tones, one at a time), the first symbol may represent the bit sequence 00,
the second 01, the third 10 and the fourth 11. If the modem plays a melody consisting of 1000
tones per second, the symbol rate is 1000 symbols/second, or baud. Since each tone (i.e.,
symbol) represents a message consisting of two digital bits in this example, the bit rate is twice
the symbol rate, i.e. 2000 bits per second. This is similar to the technique used by dialup
modems as opposed to DSL modems.
According to one definition of digital signal, the modulated signal is a digital signal, and
according to another definition, the modulation is a form of digital-to-analog conversion. Most
textbooks would consider digital modulation schemes as a form of digital transmission,
synonymous to data transmission; very few would consider it as analog transmission.
In the case of PSK (phase-shift keying), a finite number of phases are used.
In the case of FSK (frequency-shift keying), a finite number of frequencies are used.
In the case of ASK (amplitude-shift keying), a finite number of amplitudes are used.
In the case of QAM (quadrature amplitude modulation), a finite number of at least two
phases, and at least two amplitudes are used.
In QAM, an in-phase signal (the I signal, for example a cosine waveform) and a quadrature-phase
signal (the Q signal, for example a sine wave) are amplitude modulated with a finite number of
amplitudes, and summed. It can be seen as a two-channel system, each channel using ASK. The
resulting signal is equivalent to a combination of PSK and ASK.
In all of the above methods, each of these phases, frequencies or amplitudes is assigned a
unique pattern of binary bits. Usually, each phase, frequency or amplitude encodes an equal
number of bits. This number of bits comprises the symbol that is represented by the particular
phase, frequency or amplitude.
For example, with an alphabet consisting of 16 alternative symbols, each symbol represents 4
bits. Thus, the data rate is four times the baud rate.
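
A one-line version of this bookkeeping (Python):

import math

def bit_rate(baud, alphabet_size):
    # bits per symbol = log2(number of alternative symbols)
    return baud * math.log2(alphabet_size)

print(bit_rate(1000, 4))   # the 4-tone modem example above: 2000 bit/s
print(bit_rate(1000, 16))  # 16 symbols -> 4 bits each: 4000 bit/s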
In the case of PSK, ASK or QAM, where the carrier frequency of the modulated signal is
constant, the modulation alphabet is often conveniently represented on a constellation diagram,
showing the amplitude of the I signal at the x-axis, and the amplitude of the Q signal at the y-
axis, for each symbol.
PSK and ASK, and sometimes also FSK, are often generated and detected using the principle of
QAM. The I and Q signals can be combined into a complex-valued signal I+jQ (where j is the
imaginary unit). The resulting so called equivalent lowpass signal or equivalent baseband signal
is a complex-valued representation of the real-valued modulated physical signal (the so called
passband signal or RF signal).
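
A minimal sketch of this I+jQ representation (Python/numpy; the 4-QAM bit mapping, carrier frequency and rates below are assumed purely for illustration):

import numpy as np

# Hypothetical Gray-coded 4-QAM (QPSK) mapping: 2 bits -> one complex symbol
constellation = {(0, 0): 1+1j, (0, 1): -1+1j, (1, 1): -1-1j, (1, 0): 1-1j}

bits = [0, 0, 1, 1, 0, 1]
symbols = np.array([constellation[(bits[i], bits[i+1])]
                    for i in range(0, len(bits), 2)])  # equivalent lowpass signal

# Real passband signal: Re{(I + jQ) * exp(j*2*pi*f_c*t)}
f_c, fs, sps = 10e3, 80e3, 8          # carrier, sample rate, samples/symbol (assumed)
t = np.arange(len(symbols)*sps) / fs
passband = np.real(np.repeat(symbols, sps) * np.exp(2j*np.pi*f_c*t))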
These are the general steps used by the demodulator to recover the data:
1. Bandpass filtering.
2. Automatic gain control, AGC (to compensate for attenuation, for example fading).
3. Frequency shifting of the RF signal to the equivalent baseband I and Q signals, or to an
intermediate frequency (IF) signal, by multiplying the RF signal with a local oscillator
sinewave and cosine wave frequency (see the superheterodyne receiver principle).
4. Sampling and analog-to-digital conversion (ADC) (Sometimes before or instead of the
above point, for example by means of undersampling).
5. Equalization filtering, for example a matched filter, compensation for multipath
propagation, time spreading, phase distortion and frequency selective fading, to avoid
intersymbol interference and symbol distortion.
6. Detection of the amplitudes of the I and Q signals, or the frequency or phase of the IF
signal.
7. Quantization of the amplitudes, frequencies or phases to the nearest allowed symbol
values.
8. Mapping of the quantized amplitudes, frequencies or phases to codewords (bit groups).
9. Parallel-to-serial conversion of the codewords into a bit stream.
10. Pass the resultant bit stream on for further processing such as removal of any error-
correcting codes.
Non-coherent modulation methods do not require a receiver reference clock signal that is phase
synchronized with the sender carrier wave. In this case, modulation symbols (rather than bits,
characters, or data packets) are asynchronously transferred. The opposite is coherent modulation.
MSK and GMSK are particular cases of continuous phase modulation. Indeed, MSK is a
particular case of the sub-family of CPM known as continuous-phase frequency-shift keying
(CPFSK) which is defined by a rectangular frequency pulse (i.e. a linearly increasing phase
pulse) of one symbol-time duration (total response signaling).
OFDM is based on the idea of frequency-division multiplexing (FDM), but is utilized as a digital
modulation scheme. The bit stream is split into several parallel data streams, each transferred
over its own sub-carrier using some conventional digital modulation scheme. The modulated
sub-carriers are summed to form an OFDM signal. OFDM is considered as a modulation
technique rather than a multiplex technique, since it transfers one bit stream over one
communication channel using one sequence of so-called OFDM symbols. OFDM can be
extended to multi-user channel access method in the orthogonal frequency-division multiple
access (OFDMA) and multi-carrier code division multiple access (MC-CDMA) schemes,
allowing several users to share the same physical medium by giving different sub-carriers or
spreading codes to different users.
Of the two kinds of RF power amplifier, switching amplifiers (Class C amplifiers) cost less and
use less battery power than linear amplifiers of the same output power. However, they only work
with relatively constant-amplitude-modulation signals such as angle modulation (FSK or PSK)
and CDMA, but not with QAM and OFDM. Nevertheless, even though switching amplifiers are
completely unsuitable for normal QAM constellations, the QAM modulation principle is often
used to drive switching amplifiers with these FM and other waveforms, and sometimes QAM
demodulators are used to receive the signals put out by these switching amplifiers.
As originally developed for the electric telephone, amplitude modulation was used to add audio
information to the low-powered direct current flowing from a telephone transmitter to a receiver.
As a simplified explanation, at the transmitting end, a telephone microphone was used to vary
the strength of the transmitted current, according to the frequency and loudness of the sounds
received. Then, at the receiving end of the telephone line, the transmitted electrical current
affected an electromagnet, which strengthened and weakened in response to the strength of the
current. In turn, the electromagnet produced vibrations in the receiver diaphragm, thus closely
reproducing the frequency and loudness of the sounds originally heard at the transmitter.
To increase transmitter efficiency, the carrier can be removed (suppressed) from the AM signal.
This produces a reduced-carrier transmission or double-sideband suppressed-carrier (DSBSC)
signal. A suppressed-carrier amplitude modulation scheme is three times more power-efficient
than traditional DSB-AM. If the carrier is only partially suppressed, a double-sideband reduced-
carrier (DSBRC) signal results. DSBSC and DSBRC signals need their carrier to be regenerated
(by a beat frequency oscillator, for instance) to be demodulated using conventional techniques.
Even greater efficiency is achieved—at the expense of increased transmitter and receiver
complexity—by completely suppressing both the carrier and one of the sidebands. This is single-
sideband modulation, widely used in amateur radio due to its efficient use of both power and
bandwidth.
A simple form of AM often used for digital communications is on-off keying, a type of
amplitude-shift keying by which binary data is represented as the presence or absence of a carrier
wave. This is commonly used at radio frequencies to transmit Morse code, referred to as
continuous wave (CW) operation.
ITU designations
In 1982, the International Telecommunication Union (ITU) designated the various types of
amplitude modulation as follows:
Designation   Description
A3E           double-sideband full-carrier (the basic AM modulation scheme)
R3E           single-sideband reduced-carrier
H3E           single-sideband full-carrier
J3E           single-sideband suppressed-carrier
B8E           independent-sideband emission
C3F           vestigial-sideband
Lincompex     linked compressor and expander
Example: double-sideband AM
Let m(t) represent an arbitrary waveform that is the message to be transmitted, and let the
constant M represent its largest magnitude. For instance:

m(t) = M cos(ωm t)

The carrier is c(t) = A cos(ωc t), where A represents the carrier amplitude, a constant that we
would choose to demonstrate the modulation index. The modulated signal is then

y(t) = [A + m(t)] cos(ωc t)

The values A = 1 and M = 0.5 produce a y(t) depicted by the graph labelled "50% Modulation"
in Figure 4.
For this simple example, y(t) can be trigonometrically manipulated into the following equivalent
form:

y(t) = A cos(ωc t) + (M/2) cos((ωc + ωm) t) + (M/2) cos((ωc − ωm) t)

Therefore, the modulated signal has three components: a carrier wave and two sinusoidal waves
(known as sidebands) whose frequencies are slightly above and below ωc.
Also notice that the choice A = 0 eliminates the carrier component, but leaves the sidebands.
That is the DSBSC transmission mode. To generate double-sideband full carrier (A3E), we must
choose:

A ≥ M
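
The example can be reproduced numerically (Python/numpy; the tone frequencies and sample rate are arbitrary choices):

import numpy as np

fs = 100e3
t = np.arange(0, 10e-3, 1/fs)
f_c, f_m = 10e3, 500.0
A, M = 1.0, 0.5                       # 50% modulation, as in the text

m = M * np.cos(2*np.pi*f_m*t)         # message m(t)
y = (A + m) * np.cos(2*np.pi*f_c*t)   # y(t) = [A + m(t)] cos(wc t)

# Setting A = 0 instead suppresses the carrier and leaves only the
# sidebands at f_c - f_m and f_c + f_m (the DSBSC mode described above).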
Spectrum
For more general forms of m(t), trigonometry is not sufficient. But if the top trace of Figure 2
depicts the frequency spectrum of m(t), then the bottom trace depicts the modulated carrier. It
has two groups of components: one at positive frequencies (centered on + ωc) and one at
negative frequencies (centered on − ωc). Each group contains the two sidebands and a narrow
component in between that represents the energy at the carrier frequency. We need only be
concerned with the positive frequencies. The negative ones are a mathematical artifact that
contains no additional information. Therefore, we see that an AM signal's spectrum consists
basically of its original (2-sided) spectrum shifted up to the carrier frequency.
Fig 3: The spectrogram of an AM broadcast shows its two sidebands (green) separated by the
carrier signal (red).
In terms of the positive frequencies, the transmission bandwidth of AM is twice the signal's
original (baseband) bandwidth—since both the positive and negative sidebands are shifted up to
the carrier frequency. Thus, double-sideband AM (DSB-AM) is spectrally inefficient, meaning
that fewer radio stations can be accommodated in a given broadcast band. The various
suppression methods in Forms of AM can be readily understood in terms of the diagram in
Figure 2. With the carrier suppressed there would be no energy at the center of a group, and
with a sideband suppressed, the "group" would have the same bandwidth as the positive
frequencies of m(t).
Modulation index
It can be defined as the measure of extent of amplitude variation about an unmodulated
carrier. As with other modulation indices, in AM this quantity, also called modulation depth,
indicates by how much the modulated variable varies around its 'original' level. For AM, it
relates to the variations in the carrier amplitude and is defined as:

h = M / A

where M and A are the message peak amplitude and carrier amplitude, as in the example above.
So if h = 0.5, the carrier amplitude varies by 50% above and below its unmodulated level, and
for h = 1.0 it varies by 100%. To avoid distortion in the A3E transmission mode, modulation
depth greater than 100% must be avoided. Practical transmitter systems will usually incorporate
some kind of limiter circuit, such as a VOGAD, to ensure this. However, AM demodulators can
be designed to detect the inversion (or 180-degree phase reversal) that occurs when modulation
exceeds 100% and automatically correct for this effect.
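
For a sinusoidally modulated carrier, the depth can also be read off the envelope; a minimal sketch (Python/numpy, with a synthetic envelope standing in for a measurement):

import numpy as np

def modulation_index(envelope):
    # For sinusoidal modulation: h = (Amax - Amin) / (Amax + Amin)
    a_max, a_min = envelope.max(), envelope.min()
    return (a_max - a_min) / (a_max + a_min)

# Envelope of the earlier 50% example swings between 0.5 and 1.5
env = 1 + 0.5*np.cos(np.linspace(0, 2*np.pi, 1000))
print(modulation_index(env))   # ~0.5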
Circuits
A wide range of different circuits have been used for AM, but one of the simplest circuits uses
anode or collector modulation applied via a transformer. While it is perfectly possible to create
good designs using solid-state electronics, valved (vacuum tube) circuits are shown here. In
general, valves can more easily yield RF powers in excess of what can be achieved using
solid-state transistors. Many high-power broadcast stations still use valves.
Anode modulation using a transformer. The tetrode is supplied with an anode supply (and screen
grid supply) which is modulated via the transformer. The resistor R1 sets the grid bias; both the
input and outputs are tuned LC circuits which are tapped into by inductive coupling
Modulation circuit designs can be broadly divided into low and high level.
Low level
Here a small audio stage is used to modulate a low power stage; the output of this stage is then
amplified using a linear RF amplifier. Modulation is thus done at low power, and a wideband
(linear) power amplifier at the output preserves the sidebands of the modulated wave.
Advantages
The advantage of using a linear RF amplifier is that the smaller early stages can be modulated,
which only requires a small audio amplifier to drive the modulator.
Disadvantages
The great disadvantage of this system is that the amplifier chain is less efficient, because it has to
be linear to preserve the modulation. Hence Class C amplifiers cannot be employed.
An approach which marries the advantages of low-level modulation with the efficiency of a
Class C power amplifier chain is to arrange a feedback system to compensate for the substantial
distortion of the AM envelope. A simple detector at the transmitter output (which can be little
more than a loosely coupled diode) recovers the audio signal, and this is used as negative
feedback to the audio modulator stage. The overall chain then acts as a linear amplifier as far as
the actual modulation is concerned, though the RF amplifier itself still retains the Class C
efficiency. This approach is widely used in practical medium power transmitters, such as AM
radiotelephones.
High level
With high level modulation, the modulation takes place at the final amplifier stage, where the
carrier signal is at its maximum power level.
Advantages
One advantage of using class C amplifiers in a broadcast AM transmitter is that only the final
stage needs to be modulated, and that all the earlier stages can be driven at a constant level.
These class C stages will be able to generate the drive for the final stage for a smaller DC power
input. However, in many designs in order to obtain better quality AM the penultimate RF stages
will need to be subject to modulation as well as the final stage.
Disadvantages
A large audio amplifier will be needed for the modulation stage, at least equal to the power of the
transmitter output itself. Traditionally the modulation is applied using an audio transformer, and
this can be bulky. Direct coupling from the audio amplifier is also possible (known as a cascode
arrangement), though this usually requires quite a high DC supply voltage (say 30 V or more),
which is not suitable for mobile units.
AM demodulation methods
The simplest form of AM demodulator consists of a diode configured to act as an envelope
detector. Another type of demodulator, the product detector, can provide better-quality
demodulation, at the cost of added circuit complexity.
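
An idealized envelope detector is easy to sketch in software (Python with scipy; the AM parameters and filter cutoff are assumed): half-wave rectification followed by low-pass filtering, standing in for the diode and RC network.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 200e3
t = np.arange(0, 20e-3, 1/fs)
am = (1 + 0.5*np.cos(2*np.pi*500*t)) * np.cos(2*np.pi*20e3*t)

rectified = np.maximum(am, 0.0)        # the diode: half-wave rectification
b, a = butter(4, 2e3/(fs/2))           # ~2 kHz low-pass, the RC smoothing
audio = filtfilt(b, a, rectified)      # recovered message (plus a DC offset)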
Double-sideband suppressed-carrier (DSB-SC)
This is used for RDS (Radio Data System) because it is difficult to decouple.
Spectrum
This is basically an amplitude modulation wave without the carrier, therefore reducing power
wastage and giving it a 50% efficiency rate.
Demodulation
For demodulation, the locally generated carrier must match the transmitted carrier exactly in
frequency; otherwise we get distortion.
The information represented by the modulating signal is contained in both the upper and the
lower sidebands. Since each modulating frequency fi produces corresponding upper and lower
side-frequencies

fc + fi

and

fc − fi
it is not necessary to transmit both side-bands. Either one can be suppressed at the transmitter
without any loss of information.
Advantages
Less transmitter power.
Less bandwidth, one-half that of Double-Sideband (DSB).
Less noise at the receiver.
Size, weight and peak antenna voltage of a single-sideband (SSB) transmitter are
significantly less than those of a standard AM transmitter.
Methods
At the beginning of the 20th century, there were four chief methods of arranging the
transmitting circuits:[2]
1. The transmitting system consists of two tuned circuits such that the one containing the spark-gap
is a persistent oscillator; the other, containing the aerial structure, is a free radiator maintained in
oscillation by being coupled to the first (Nikola Tesla and Guglielmo Marconi).
2. The oscillating system, including the aerial structure with its associated inductance-coils and
condensers, is designed to be both a sufficiently persistent oscillator and a sufficiently active
radiator (Oliver Joseph Lodge).
3. The transmitting system consists of two electrically coupled circuits, one of which, containing the
air-gap, is a powerful but not persistent oscillator, being provided with a device for quenching the
spark so soon as it has imparted sufficient energy to the other circuit containing the aerial
structure, this second circuit then independently radiating the train of slightly damped waves at its
own period (Oliver Joseph Lodge and Wilhelm Wien).
4. The transmitting system, by means either of an oscillating arc (Valdemar Poulsen) or a high-
frequency alternator (Rudolf Goldschmidt), emits a persistent train of undamped waves
interrupted only by being broken up into long and short groups by the operator's key.
Frequency synthesis
Fixed frequency systems
For a fixed-frequency transmitter, one commonly used method is to use a resonant quartz crystal
in a crystal oscillator to fix the frequency. Where the frequency has to be variable, several
options can be used.
Frequency multiplication
Frequency doubler and frequency tripler: basic designs (screen grids, bias supplies and other
elements are not shown).
For VHF transmitters, it is often not possible to operate the oscillator at the final output
frequency. In such cases, for reasons including frequency stability, it is better to multiply the
frequency of the free running oscillator up to the final, required frequency.
If the output of an amplifier stage is tuned to a multiple of the frequency with which the stage is
driven, the stage will give a larger harmonic output than a linear amplifier. In a push-push stage,
the output will only contain even harmonics. This is because the currents which would generate
the fundamental and the odd harmonics in this circuit (if one valve was removed) are canceled
by the second valve. In the diagrams, bias supplies and neutralization measures have been omitted
for clarity. In a real system, it is likely that tetrodes would be used, as plate-to-grid capacitance
in a tetrode is lower, thereby reducing stage instability.
In a push-pull stage, the output will contain only odd harmonics because of the canceling effect.
Frequency mixing and modulation
The task of many transmitters is to transmit some form of information using a radio signal
(carrier wave) which has been modulated to carry the intelligence. A few rare types of
transmitter do not carry information: the RF generator in a microwave oven, electrosurgery, and
induction heating. RF transmitters that do not carry information are required by law to operate in
an ISM band.
AM modes
In many cases the carrier wave is mixed with another electrical signal to impose information
upon it. This occurs in amplitude modulation (AM): the amplitude of the carrier is changed
instantaneously in accordance with the amplitude of the modulating (baseband) signal.
Low level
Here a small audio stage is used to modulate a low power stage, the output of this stage is then
amplified using a linear RF amplifier.
Advantages
The advantage of using a linear RF amplifier is that the smaller early stages can be modulated,
which only requires a small audio amplifier to drive the modulator.
Disadvantages
The great disadvantage of this system is that the amplifier chain is less efficient, because it has to
be linear to preserve the modulation. Hence class C amplifiers cannot be employed.
An approach which marries the advantages of low-level modulation with the efficiency of a
Class C power amplifier chain is to arrange a feedback system to compensate for the substantial
distortion of the AM envelope. A simple detector at the transmitter output (which can be little
more than a loosely coupled diode) recovers the audio signal, and this is used as negative
feedback to the audio modulator stage. The overall chain then acts as a linear amplifier as far as
the actual modulation is concerned, though the RF amplifier itself still retains the Class C
efficiency. This approach is widely used in practical medium power transmitters, such as AM
radiotelephones.
Advantages
One advantage of using class C amplifiers in a broadcast AM transmitter is that only the final
stage needs to be modulated, and that all the earlier stages can be driven at a constant level.
These class C stages will be able to generate the drive for the final stage for a smaller DC power
input. However, in many designs, in order to obtain better quality AM, the penultimate RF stages
need to be modulated as well as the final stage.
Disadvantages
A large audio amplifier will be needed for the modulation stage, at least equal to the power of the
transmitter output itself. Traditionally the modulation is applied using an audio transformer, and
this can be bulky. Direct coupling from the audio amplifier is also possible (known as a cascode
arrangement), though this usually requires quite a high DC supply voltage (say 30 V or more),
which is not suitable for mobile units.
Types of AM modulators
A wide range of different circuits have been used for AM. While it is perfectly possible to create
good designs using solid-state electronics, valved (tube) circuits are shown here. In general,
valves are able to easily yield RF powers far in excess of what can be achieved using solid state.
Most high-power broadcast stations still use valves.
In plate modulation systems the voltage delivered to the stage is changed. As the power output
available is a function of the supply voltage, the output power is modulated. This can be done
using a transformer to alter the anode (plate) voltage. The advantage of the transformer method
is that the audio power can be supplied to the RF stage and converted into RF power. With anode
modulation using a transformer, the tetrode is supplied with an anode supply (and screen grid
supply) which is modulated via the transformer. The resistor R1 sets the grid bias; both the input
and output are tuned LC circuits which are tapped into by inductive coupling. In series
modulated amplitude modulation, the tetrode is supplied with an anode supply (and screen grid
supply) which is modulated by the modulator valve. The resistor VR1 sets the grid bias for the
modulator valve. When the valve at the top conducts more, the potential difference between the
anode and cathode of the lower valve (the RF valve) will increase. The two valves can be
thought of as two resistors forming a potentiometer.
Screen AM modulators
Screen AM modulator.
Under steady-state conditions (no audio drive) the stage is a simple RF amplifier whose grid
bias is set by the cathode current. When the stage is modulated, the screen potential changes and
so alters the gain of the stage.
Single-sideband modulation
Filter method
Using a balanced mixer, a double-sideband signal is generated; this is then passed through a very
narrow bandpass filter to leave only one sideband. By convention, the upper sideband (USB) is
used in communication systems, except in amateur (ham) radio below 10 MHz, where the lower
sideband (LSB) is normally used.
Phasing method
This is an alternative method for generating single-sideband signals. One of the
weaknesses of this method is the need for a network which imposes a constant 90° phase shift on
audio signals throughout the entire audio spectrum. Reducing the audio bandwidth makes the
design of the phase-shift network easier.
The audio signal is passed through the phase-shift network to give two signals of identical
amplitude which differ in phase by 90°.
These audio outputs are mixed in nonlinear mixers with a carrier; the carrier drive for one of
these mixers is shifted by 90°. The outputs of these mixers are combined in a linear circuit to
give the SSB signal.
Vestigial-sideband modulation
Morse
Strictly speaking, the commonly used 'AM' is double-sideband full carrier. Morse code is often
sent using on-off keying of an unmodulated carrier (continuous wave, CW); this can be thought
of as an AM mode.
FM modes
Angle modulation is the proper term for modulation by changing the instantaneous frequency or
phase of the carrier signal. True FM and phase modulation are the most commonly employed
forms of analogue angle modulation.
Indirect FM
In some indirect FM solid state circuits, an RF drive is applied to the base of a transistor. The
tank circuit (LC), connected to the collector via a capacitor, contains a pair of varicap diodes. As
the voltage applied to the varicaps is changed, the phase shift of the output will change.
For high power systems it is normal to use valves, please see Valve RF amplifier for details of
how valved RF power stages work.
Advantages of valves
Disadvantages of valves
Solid state
For low and medium power it is often the case that solid state power stages are used. For higher
power systems these cost more per watt of output power than a valved system.
See Antenna tuner and balun for details of matching networks and baluns respectively.
EMC matters
While this section was written from the point of view of an amateur radio operator in relation
to television interference, it applies to the construction and use of all radio transmitters, and to
other electronic devices which generate high RF powers with no intention of radiating them. For
instance, a dielectric heater might contain a 2000 watt 27 MHz source within it; if the machine
operates as intended then none of this RF power will leak out. However, if the device develops a
fault, then RF will leak out while it operates, and it will then act as a transmitter. Computers are
also RF devices; if the case is poorly made, the computer will radiate at VHF. For example, if
you attempt to tune into a weak FM radio station (88 to 108 MHz, band II) at your desk, you may
lose reception when you switch on your PC. Equipment which is not intended to generate RF,
but does so through, for example, sparking at switch contacts, is not considered here.
All equipment using RF electronics should be inside a screened metal box, and all connections
in or out of the box should be filtered to avoid the ingress or egress of radio signals. A common
and effective method of doing so for wires carrying DC supplies, 50/60 Hz AC connections,
audio and control signals is to use a feedthrough capacitor. This is a capacitor which is mounted
in a hole in the shield; one terminal of the capacitor is its metal body, which touches the
shielding of the box, while its other two terminals are on either side of the shield. The
feedthrough capacitor can be thought of as a metal rod which has a dielectric sheath, which in
turn has a metal coating.
In addition to the feed through capacitor, either a resistor or RF choke can be used to increase the
filtering on the lead. In transmitters it is vital to prevent RF from entering the transmitter through
any lead such as an electric power, microphone or control connection. If RF does enter a
transmitter in this way then an instability known as motorboating can occur. Motorboating is an
example of a self-inflicted EMC problem.
Spurious emissions
Early in the development of radio technology it was recognised that the signals emitted by
transmitters had to be 'pure'. For instance, spark-gap transmitters were quickly outlawed because
their output occupies a very wide bandwidth. In modern equipment there are three main
types of spurious emissions.
The term spurious emissions refers to any signal which comes out of a transmitter other than the
wanted signal. Spurious emissions include harmonics, out-of-band mixer products which are
not fully suppressed, and leakage from the local oscillator and other systems within the
transmitter.
Harmonics
These are multiples of the operating frequency of the transmitter. They can be generated in a
stage of the transmitter even if it is driven with a perfect sine wave, because no real-life amplifier
is perfectly linear.
Note that B+ is the anode supply and C- is the grid bias. While the circuit shown here uses tetrode
valves (for example 2 x 4CX250B), many designs have used solid state semiconductor parts (such as
MOSFETs). Note that NC is a neutralization capacitor.
Note that B+ is the anode supply and C- is the grid bias. While the circuit shown here uses a tetrode
valve (for example the 4CX250B), many designs have used solid state semiconductor parts (such as
MOSFETs).
It is best if these harmonics are designed out at an early stage. For instance, a push-pull amplifier
consisting of two tetrode valves attached to an anode tank resonant LC circuit, whose coil is
connected to the high-voltage DC supply at the centre (which is also RF ground), will only give
a signal for the fundamental and the odd harmonics.
In addition to the good design of the amplifier stages, the transmitter's output should be filtered
with a low pass filter to reduce the level of the harmonics.
The harmonics can be tested for using an RF spectrum analyser (expensive) or an absorption
wavemeter (cheap). If a harmonic falls at the same frequency as the signal wanted at a nearby
receiver, then this spurious emission can prevent the wanted signal from being received.
Imagine a transmitter, which has an intermediate frequency (IF) of 144 MHz, which is mixed
with 94 MHz to create a signal at 50 MHz, which is then amplified and transmitted. If the local
oscillator signal was to enter the power amplifier and not be adequately suppressed then it could
be radiated. It would then have the potential to interfere with radio signals at 94 MHz in the FM
audio (band II) broadcast band. Also the unwanted mixing product at 238 MHz could in a poorly
designed system be radiated. Normally with good choice of the intermediate and local oscillator
frequencies this type of trouble can be avoided, but one potentially bad situation is the
construction of a 144 to 70 MHz converter: here the local oscillator is at 74 MHz, which is very
close to the wanted output. Good, well-made units have been built which use this conversion,
but their design and construction has been challenging; for instance, in the late 1980s Practical
Wireless published a design (Meon-4) for such a transverter [1][2]. This problem can be thought
of as being related to the image response problem which exists in receivers.
One method of reducing the potential for this transmitter defect is the use of balanced and
double-balanced mixers. If the mixer's equation is assumed to be
E = E1 · E2
and it is driven by two simple sine waves, f1 and f2, then the output of a simple stage will be a
mixture of four frequencies:
f1
f2
f1 + f2
f1 - f2
If the simple mixer is replaced with a balanced mixer then the number of possible products is
reduced. Imagine that two mixers which have the equation {I = E1 E2} are wired up so that the
current outputs are wired to the two ends of a coil (the centre of this coil is wired to ground) then
the total current flowing through the coil is the difference between the output of the two mixer
stages. If the f1 drive for one of the mixers is phase shifted by 180° then the overall system will
be a balanced mixer.
Note that while this hypothetical design uses tetrodes, many designs have used solid state
semiconductor parts (such as MOSFETs).
The balanced mixer's output can be written as
E = K · Ef2 · ΔEf1
and contains the frequencies:
f1 + f2
f1 - f2
f2
Now as the frequency mixer has fewer outputs the task of making sure that the final output is
clean will be simpler.
If a stage in a transmitter is unstable and is able to oscillate, then it can start to generate RF at
either a frequency close to the operating frequency or at a very different frequency. One good
sign that this is occurring is if an RF stage produces output power even without being driven by
an exciting stage. Another sign is if the output power suddenly increases wildly when the input
power is increased slightly; note that in a class C stage this behaviour can be seen under normal
conditions. The best defence against this transmitter defect is good design, and it is also
important to pay close attention to the neutralization of the valves or transistors.
Basic designs
Crystal radio
A crystal set receiver consisting of an antenna, a variable inductor, a cat's whisker, and a
capacitor.
Advantages
o Simple, easy to make. This is the classic design for a clandestine receiver in a
POW camp.
Disadvantages
o Insensitive, it needs a very strong RF signal to operate.
o Poor selectivity, as it often has only one tuned circuit.
Direct amplification
The directly amplifying receiver contains the input radio frequency filter, the radio frequency
amplifier (amplifying radio signal of the tuned station), the detector and the sound frequency
amplifier. This design is simple and reliable, but much less sensitive than the superheterodyne
(described below).
Reflectional
The reflectional (reflex) receiver contains a single amplifier that amplifies the signal first at
radio frequency and then (after detection) at audio frequency. It is simpler, smaller and consumes
less power, but it is also comparatively unstable.
Regenerative
The regenerative circuit has the advantage of being potentially very sensitive; it uses positive
feedback to increase the gain of the stage. Many valved sets were made which used a single
stage. However, if misused, it has great potential to cause radio interference: if the set is
adjusted wrongly (too much feedback used), then the detector stage will oscillate, causing the
interference.
The RF interference that the local oscillator can generate can be controlled with the use of a
buffer stage between the LO and the detector, and a buffer or RF amplifier stage between the LO
and the antenna.
Direct conversion
Main article: Direct-conversion receiver
In the Direct conversion receiver, the signals from the aerial pass through a band pass filter and
an amplifier before reaching a non-linear mixer where they are mixed with a signal from a local
oscillator which is tuned to the carrier wave frequency of an AM or SSB transmitter. The output
of this mixer is then passed through a low pass filter before an audio amplifier. This is then the
output of the radio.
For CW morse the local oscillator is tuned to a frequency slightly different from that of the
transmitter to make the received signal audible.
Advantages
o Simpler than a superhet
o Better tuning than a simple crystal set
Disadvantages
o Less selective than a superhet with regard to strong in-band signals
o A wider bandwidth than a good SSB communications radio, because no
sideband filter exists in this circuit.
Superheterodyne
Here are two superheterodyne designs for AM and FM respectively. The FM design is a cheap
design intended for a broadcast band household receiver.
A schematic of a superhet AM receiver. Note that the radio has an AGC loop.
For single conversion superheterodyne AM receivers designed for mediumwave and longwave
the IF is commonly 455 kHz.
For many single conversion superheterodyne receivers designed for band II FM (88 - 108 MHz)
the IF is commonly 10.7 MHz. For TV sets the IF tends to be at 33 to 40 MHz.
FM vs. AM
To make a good AM receiver, an automatic gain control loop is essential; this requires good
design. To make a good FM receiver, a large number of RF amplifiers driven into limiting are
required to create a receiver which can take advantage of the capture effect, one of the biggest
advantages of FM. With valved (tube) systems it is more expensive to make active stages than it
is to make the same number of stages with solid state parts, so for a valved superhet it is simpler
to make an AM receiver with an automatic gain control loop, while for a solid state receiver it is
simpler to make an FM unit. Hence, even though the idea of FM was known before World War
II, its use was rare because of the cost of valves (in the UK the government had a valve holder
tax which encouraged radio receiver designers to use as few active stages as possible), but when
solid state parts became available FM started to gain favour.
UNIT-3
Frequency modulation
In telecommunications and signal processing, frequency modulation (FM) conveys information
over a carrier wave by varying its instantaneous frequency (contrast this with amplitude
modulation, in which the amplitude of the carrier is varied while its frequency remains constant).
In analog applications, the difference between the instantaneous and the base frequency of the
carrier is directly proportional to the instantaneous value of the input signal amplitude. Digital
data can be sent by shifting the carrier's frequency among a set of discrete values, a technique
known as frequency-shift keying.
Frequency modulation can be regarded as phase modulation where the carrier phase modulation
is the time integral of the FM modulating signal.
Theory
Suppose the baseband data signal (the message) to be transmitted is $x_m(t)$ and the sinusoidal
carrier is $x_c(t) = A_c \cos(2\pi f_c t)$, where $f_c$ is the carrier's base frequency and $A_c$ is the
carrier's amplitude. The modulator combines the carrier with the baseband data signal to get the
transmitted signal:
$$y(t) = A_c \cos\left(2\pi \int_0^t \left[f_c + f_\Delta\, x_m(\tau)\right] d\tau\right)$$
In this equation, $f(t) = f_c + f_\Delta\, x_m(t)$ is the instantaneous frequency of the oscillator and
$f_\Delta$ is the frequency deviation, which represents the maximum shift away from $f_c$ in one
direction, assuming $x_m(t)$ is limited to the range ±1.
Although it may seem that this limits the frequencies in use to fc ± fΔ, this neglects the distinction
between instantaneous frequency and spectral frequency. The frequency spectrum of an actual
FM signal has components extending out to infinite frequency, although they become negligibly
small beyond a point.
For a single-tone modulating signal $x_m(t) = \cos(2\pi f_m t)$, the modulated signal is
$$y(t) = A_c \cos\left(2\pi f_c t + \frac{f_\Delta}{f_m}\sin(2\pi f_m t)\right)$$
where the amplitude of the modulating sinusoid is represented by the peak deviation $f_\Delta$ (see
frequency deviation).
The harmonic distribution of a sine wave carrier modulated by such a sinusoidal signal can be
represented with Bessel functions - this provides a basis for a mathematical understanding of
frequency modulation in the frequency domain.
Modulation index
As with other modulation indices, this quantity indicates by how much the modulated variable
varies around its unmodulated level. It relates to the variations in the frequency of the carrier
signal:
$$h = \frac{f_\Delta}{f_m}$$
where $f_m$ is the highest frequency component present in the modulating signal $x_m(t)$, and
$f_\Delta$ is the peak frequency deviation, i.e. the maximum deviation of the instantaneous
frequency from the carrier frequency. If $h \ll 1$, the modulation is called narrowband FM, and
its bandwidth is approximately $2 f_m$. If $h \gg 1$, the modulation is called wideband FM and
its bandwidth is approximately $2 f_\Delta$. While wideband FM uses more bandwidth, it can
improve the signal-to-noise ratio significantly.
With a tone-modulated FM wave, if the modulation frequency is held constant and the
modulation index is increased, the (non-negligible) bandwidth of the FM signal increases, but
the spacing between spectra stays the same; some spectral components decrease in strength as
others increase. If the frequency deviation is held constant and the modulation frequency
increased, the spacing between spectra increases.
Carson's rule
Carson's rule of thumb states that nearly all of the power of a frequency-modulated signal lies
within a bandwidth $B_T$ of
$$B_T = 2(f_\Delta + f_m)$$
where $f_\Delta$, as defined above, is the peak deviation of the instantaneous frequency from the
center carrier frequency $f_c$, and $f_m$ is the highest frequency in the modulating signal.
Noise quieting
The noise power in the receiver output decreases as the received signal power increases;
therefore, the SNR goes up significantly.
Bessel functions
The carrier and sideband amplitudes are tabulated below for different modulation indices of FM
signals, based on the Bessel functions. The numbered columns give the amplitudes of the
sideband pairs at $f_c \pm n f_m$; entries beyond those shown are negligible.

Mod. index   Carrier     1      2      3      4      5      6      7      8      9     10
0.00          1.00
0.25          0.98     0.12
0.5           0.94     0.24   0.03
1.0           0.77     0.44   0.11   0.02
1.5           0.51     0.56   0.23   0.06   0.01
2.0           0.22     0.58   0.35   0.13   0.03
2.41          0        0.52   0.43   0.20   0.06   0.02
2.5          -0.05     0.50   0.45   0.22   0.07   0.02   0.01
3.0          -0.26     0.34   0.49   0.31   0.13   0.04   0.01
4.0          -0.40    -0.07   0.36   0.43   0.28   0.13   0.05   0.02
5.0          -0.18    -0.33   0.05   0.36   0.39   0.26   0.13   0.05   0.02
5.53          0       -0.34  -0.13   0.25   0.40   0.32   0.19   0.09   0.03   0.01
6.0           0.15    -0.28  -0.24   0.11   0.36   0.36   0.25   0.13   0.06   0.02
Practical Implementation
Modulation
Direct FM modulation can be achieved by directly feeding the message into the input of a
VCO.
For indirect FM modulation, the message signal is integrated to generate a phase
modulated signal. This is used to modulate a crystal controlled oscillator, and the result is
passed through a frequency multiplier to give an FM signal[1].
Demodulation
Applications
Broadcasting
FM is commonly used at VHF radio frequencies for high-fidelity broadcasts of music and speech
(see FM broadcasting). Normal (analog) TV sound is also broadcast using FM. A narrow band
form is used for voice communications in commercial and amateur radio settings. The type of
FM used in broadcast is generally called wide-FM, or W-FM. In two-way radio, narrowband
FM (N-FM) is used to conserve bandwidth. In addition, FM is used to send signals into
space.
FM is also used at intermediate frequencies by all analog VCR systems, including VHS, to
record both the luminance (black and white) and the chrominance portions of the video signal.
FM is the only feasible method of recording video to and retrieving video from Magnetic tape
without extreme distortion, as video signals have a very large range of frequency components —
from a few hertz to several megahertz, too wide for equalizers to work with due to electronic
noise below −60 dB. FM also keeps the tape at saturation level, and therefore acts as a form of
noise reduction, and a simple limiter can mask variations in the playback output, and the FM
capture effect removes print-through and pre-echo. A continuous pilot-tone, if added to the
signal — as was done on V2000 and many Hi-band formats — can keep mechanical jitter under
control and assist timebase correction.
These FM systems are unusual in that they have a ratio of carrier to maximum modulation
frequency of less than two; contrast this with FM audio broadcasting where the ratio is around
10,000. Consider for example a 6 MHz carrier modulated at a 3.5 MHz rate; by Bessel analysis
the first sidebands are on 9.5 and 2.5 MHz, while the second sidebands are on 13 MHz and
−1 MHz. The result is a sideband of reversed phase on +1 MHz; on demodulation, this results in
an unwanted output at 6−1 = 5 MHz. The system must be designed so that this is at an acceptable
level.[3]
Sound
Radio
Edwin Howard Armstrong (1890–1954) was an American electrical engineer who invented
frequency modulation (FM) radio.[4] He patented the regenerative circuit in 1914, the
superheterodyne receiver in 1918 and the super-regenerative circuit in 1922. [5] He presented his
paper: "A Method of Reducing Disturbances in Radio Signaling by a System of Frequency
Modulation", which first described FM radio, before the New York section of the Institute of
Radio Engineers on November 6, 1935. The paper was published in 1936.[6]
As the name implies, wideband FM (W-FM) requires a wider signal bandwidth than amplitude
modulation by an equivalent modulating signal, but this also makes the signal more robust
against noise and interference. Frequency modulation is also more robust against simple signal
amplitude fading phenomena. As a result, FM was chosen as the modulation standard for high
frequency, high fidelity radio transmission: hence the term "FM radio" (although for many years
the BBC called it "VHF radio", because commercial FM broadcasting uses a well-known part of
the VHF band -- the FM broadcast band).
FM receivers employ a special detector for FM signals and exhibit a phenomenon called capture
effect, where the tuner is able to clearly receive the stronger of two stations being broadcast on
the same frequency. Problematically however, frequency drift or lack of selectivity may cause
one station or signal to be suddenly overtaken by another on an adjacent channel. Frequency drift
typically constituted a problem on very old or inexpensive receivers, while inadequate selectivity
may plague any tuner.
An FM signal can also be used to carry a stereo signal: see FM stereo. However, this is done by
using multiplexing and demultiplexing before and after the FM process. The rest of this article
ignores the stereo multiplexing and demultiplexing process used in "stereo FM", and
concentrates on the FM modulation and demodulation process, which is identical in stereo and
mono processes.
FM broadcasting
FM broadcasting is a broadcast technology pioneered by Edwin Howard Armstrong that uses
frequency modulation (FM) to provide high-fidelity sound over broadcast radio.
Terminology
The term 'FM band' is effectively shorthand for 'frequency band in which FM is used for
broadcasting'. This term can upset purists because it conflates a modulation scheme with a range
of frequencies.
The term 'VHF' (Very High Frequency) was previously in common use in Europe. 'UKW', which
stands for Ultrakurzwellen (ultra short wave) in German, is still widely used in Germany, as is
'UKV' (Ultrakortvåg) in Sweden.
Broadcast bands
Throughout the world, the broadcast band falls within the VHF part of the radio spectrum.
Usually 87.5 to 108.0 MHz is used, or some portion thereof, with a few exceptions:
In the former Soviet republics, and some former Eastern Bloc countries, the older 65-
74 MHz band is also used. Assigned frequencies are at intervals of 30 kHz. This band,
sometimes referred to as the OIRT band, is slowly being phased out in many countries. In
those countries the 87.5-108.0 MHz band is referred to as the CCIR band.
In Japan, the band 76-90 MHz is used.
The frequency of an FM broadcast station (more strictly its assigned nominal centre frequency)
is usually an exact multiple of 100 kHz. In most of the Americas and the Caribbean, only odd
multiples are used. In some parts of Europe, Greenland and Africa, only even multiples are used.
In Italy, multiples of 50 kHz are used. There are other unusual and obsolete standards in some
countries, including 0.001, 0.01, 0.03, 0.074, 0.5, and 0.3 MHz.
Random noise has a 'triangular' spectral distribution in an FM system, with the effect that noise
occurs predominantly at the highest frequencies within the baseband. This can be offset, to a
limited extent, by boosting the high frequencies before transmission and reducing them by a
corresponding amount in the receiver. Reducing the high frequencies in the receiver also reduces
the high-frequency noise. These processes of boosting and then reducing certain frequencies are
known as pre-emphasis and de-emphasis, respectively.
The amount of pre-emphasis and de-emphasis used is defined by the time constant of a simple
RC filter circuit. In most of the world a 50 µs time constant is used. In North America, 75 µs is
used. This applies to both mono and stereo transmissions and to baseband audio (not the
subcarriers).
The amount of pre-emphasis that can be applied is limited by the fact that many forms of
contemporary music contain more high-frequency energy than the musical styles which
prevailed at the birth of FM broadcasting. They cannot be pre-emphasized as much because it
would cause excessive deviation of the FM carrier. (Systems more modern than FM broadcasting
tend to use either programme-dependent variable pre-emphasis—e.g. dbx in the BTSC TV sound
system—or none at all.)
FM stereo
In the late 1950s, several systems to add stereo to FM radio were considered by the FCC.
Included were systems from 14 proponents including Crosley, Halstead, Electrical and Musical
Industries, Ltd (EMI), Zenith Electronics Corporation and General Electric. The individual
systems were evaluated for their strengths and weaknesses during field tests in Uniontown,
Pennsylvania using KDKA-FM in Pittsburgh as the originating station. The Crosley system was
rejected by the FCC because it degraded the signal-to-noise ratio of the main channel and did not
perform well under multipath RF conditions. In addition, it did not allow for SCA services.
It is important that stereo broadcasts should be compatible with mono receivers. For this reason,
the left (L) and right (R) channels are algebraically encoded into sum (L+R) and difference
(L−R) signals. A mono receiver will use just the L+R signal so the listener will hear both
channels in the single loudspeaker. A stereo receiver will add the difference signal to the sum
signal to recover the left channel, and subtract the difference signal from the sum to recover the
right channel.
The (L+R) Main channel signal is transmitted as baseband audio in the range of 30 Hz to
15 kHz. The (L−R) Sub-channel signal is modulated onto a 38 kHz double-sideband suppressed
carrier (DSBSC) signal occupying the baseband range of 23 to 53 kHz.
A 19 kHz pilot tone, at exactly half the 38 kHz sub-carrier frequency and with a precise phase
relationship to it, as defined by the formula below, is also generated. This is transmitted at 8–
10% of overall modulation level and used by the receiver to regenerate the 38 kHz sub-carrier
with the correct phase.
The final multiplex signal from the stereo generator contains the Main Channel (L+R), the pilot
tone, and the sub-channel (L−R). This composite signal, along with any other sub-carriers,
modulates the FM transmitter.
The instantaneous deviation of the transmitter carrier frequency due to the stereo audio and pilot
tone (at 10% modulation) is:
$$\Delta f(t) = 75\ \mathrm{kHz} \times \left[ 0.9\left( \frac{A+B}{2} + \frac{A-B}{2}\,\sin(4\pi f_p t) \right) + 0.1\,\sin(2\pi f_p t) \right]$$ [2]
where A and B are the pre-emphasized left and right audio signals and $f_p$ is the frequency of
the pilot tone. Slight variations in the peak deviation may occur in the presence of other
subcarriers or because of local regulations.
Converting the multiplex signal back into left and right audio signals is performed by a stereo
decoder, which is built into stereo receivers.
Stereo FM signals are more susceptible to noise and multipath distortion than are mono FM
signals.[3]
In addition, for a given RF level at the receiver, the signal-to-noise ratio for the stereo signal will
be worse than for the mono receiver. For this reason many FM stereo receivers include a
stereo/mono switch to allow listening in mono when reception conditions are less than ideal, and
most car radios are arranged to reduce the separation as the signal-to-noise ratio worsens,
eventually going to mono while still indicating a stereo signal is being received.
Quadraphonic FM
In 1969 Louis Dorren invented the Quadraplex system of single station, discrete, compatible
four-channel FM broadcasting. There are two additional subcarriers in the Quadraplex system,
supplementing the single one used in standard stereo FM. The baseband layout is as follows:
There were several variations on this system submitted by GE, Zenith, RCA, and Denon for
testing and consideration during the National Quadraphonic Radio Committee field trials for the
FCC. The original Dorren Quadraplex System outperformed all the others and was chosen as the
national standard for Quadraphonic FM broadcasting in the United States. The first commercial
FM station to broadcast quadraphonic program content was WIQB (now called WWWW-FM) in
Ann Arbor/Saline, Michigan under the guidance of Chief Engineer Brian Brown.[4]
The subcarrier system has been further extended to add other services. Initially these were
private analog audio channels which could be used internally or rented out. Radio reading
services for the blind are also still common, and there were experiments with quadraphonic
sound. If a station does not broadcast stereo, everything from 23 kHz on up can be used for other services.
The guard band around 19 kHz (±4 kHz) must still be maintained, so as not to trigger stereo
decoders on receivers. If there is stereo, there will typically be a guard band between the upper
limit of the DSBSC stereo signal (53 kHz) and the lower limit of any other subcarrier.
Digital services are now also available. A 57 kHz subcarrier (phase locked to the third harmonic
of the stereo pilot tone) is used to carry a low-bandwidth digital Radio Data System signal,
providing extra features such as Alternative Frequency (AF) and Network (NN). This
narrowband signal runs at only 1187.5 bits per second, thus is only suitable for text. A few
proprietary systems are used for private communications. A variant of RDS is the North
American RBDS or "smart radio" system. In Germany the analog ARI system was used prior to
RDS for broadcasting traffic announcements to motorists (without disturbing other listeners).
Plans to use ARI for other European countries led to the development of RDS as a more
powerful system. RDS is designed to be capable of being used alongside ARI despite using
identical subcarrier frequencies.
In the United States, digital radio services are being deployed within the FM band rather than
using Eureka 147 or the Japanese standard ISDB. This in-band on-channel approach, like all
digital radio techniques, makes use of advanced compressed audio. The proprietary iBiquity
system, branded as "HD Radio", currently is authorized for "hybrid" mode operation, wherein
both the conventional analog FM carrier and digital sideband subcarriers are transmitted.
Eventually, presuming widespread deployment of HD Radio receivers, the analog services could
theoretically be discontinued and the FM band become all digital.
In the USA services (other than stereo, quad and RDS) using subcarriers are sometimes referred
to as SCA (subsidiary communications authorisation) services. Uses for such subcarriers include
book/newspaper reading services for blind listeners, private data transmission services (for
example, sending stock market information to stockbrokers or stolen credit card number
blacklists to stores), subscription commercial-free background music services for shops, paging
("beeper") services and providing a program feed for AM transmitters of AM/FM stations. SCA
subcarriers are typically 67 kHz and 92 kHz.
Dolby FM
A commercially unsuccessful noise reduction system used with FM radio in some countries
during the late 1970s, Dolby FM used a modified 25 µs pre-emphasis time constant and a
frequency selective companding arrangement to reduce noise. See: Dolby noise reduction
system.
For FM stereo, the maximum distance covered is significantly reduced. This is due to the
presence of the 38 kHz subcarrier modulation. Vigorous audio processing improves the coverage
area of an FM stereo station.
The medium wave band (known as "AM" in North America) is overcrowded in Western Europe,
leading to interference problems; as a result, many MW frequencies are suitable only for speech
broadcasting.
Belgium, the Netherlands, Denmark and particularly Germany were among the first countries to
adopt FM on a widespread scale. Among the reasons for this were:
1. The medium wave band in Western Europe became overcrowded after World War II,
mainly due to the best available medium wave frequencies being used at high power
levels by the Allied occupation forces, both for broadcasting entertainment to their troops
and for broadcasting cold war propaganda across the Iron curtain.
2. After World War II, broadcasting frequencies were reorganized and reallocated by
delegates of the victorious countries in the Copenhagen Frequency Plan. German
broadcasters were left with only two remaining AM frequencies, and were forced to look
to FM for expansion.
Public service broadcasters in Ireland and Australia were far slower at adopting FM radio than
those in either North America or continental Europe. However, in Ireland several unlicensed
commercial FM stations were on air by the mid-1980s. These generally simulcast on AM and
FM.
In the United Kingdom, the BBC began FM broadcasting in 1955, with three national networks
carrying the Light Programme, Third Programme and Home Service (renamed Radio 2, Radio 3
and Radio 4 respectively in 1967). These three networks used the sub-band 88.0–94.6 MHz. The
sub-band 94.6–97.6 MHz was later used for BBC and local commercial services. Only when
commercial broadcasting was introduced to the UK in 1973 did the use of FM pick up in Britain.
With the gradual clearance of other users (notably Public Services such as police, fire and
ambulance) and the extension of the FM band to 108.0 MHz between 1980 and 1995, FM
expanded rapidly throughout the British Isles and effectively took over from LW and MW as the
delivery platform of choice for fixed and portable domestic and vehicle-based receivers.
FM started in Australia in 1947 but did not catch on and was shut down in 1961 to expand the
television band. It was not reopened until 1975. Subsequently, it developed steadily until in the
1980s many AM stations transferred to FM because of its superior sound quality. Today, as
elsewhere in the developed world, most Australian broadcasting is on FM – although AM talk
stations are still very popular.
Most other countries expanded their use of FM through the 1990s. Because it takes a large
number of FM transmitting stations to cover a geographically large country, particularly where
there are terrain difficulties, FM is more suited to local broadcasting than for national networks.
In such countries, particularly where there are economic or infrastructural problems, "rolling
out" a national FM broadcast network to reach the majority of the population can be a slow and
expensive process.
The frequencies available for FM were decided by a series of important ITU conferences, the
milestone being the Stockholm Agreement of 1961 among 38 countries. A 1984 conference in
Geneva made some modifications to the original Stockholm Agreement, particularly in the
frequency range above 100 MHz.
In some countries, small-scale (Part 15 in United States terms) transmitters are available that can
transmit a signal from an audio device (usually an MP3 player or similar) to a standard FM radio
receiver; such devices range from small units built to carry audio to a car radio with no audio-in
capability (often formerly provided by special adapters for audio cassette decks, which are
becoming less common on car radio designs) up to full-sized, near-professional-grade
broadcasting systems that can be used to transmit audio throughout a property. Most such units
transmit in full stereo, though some models designed for beginner hobbyists may not. Similar
transmitters are often included in satellite radio receivers and some toys.
Legality of these devices varies by country. The FCC in the US and Industry Canada allow them.
Starting on 1 October 2006 these devices became legal in most countries in the European Union.
Devices made to the harmonised European specification became legal in the UK on 8 December
2006.
FM radio microphones
The FM broadcast band can also be used by some inexpensive wireless microphones, but
professional-grade wireless microphones generally use bands in the UHF region so they can run
on dedicated equipment without broadcast interference. Such inexpensive wireless microphones
are generally sold as toys for karaoke or similar purposes, allowing the user to use an FM radio
as an output rather than a dedicated amplifier and speaker.
Microbroadcasting
Low-power transmitters such as those mentioned above are also sometimes used for
neighborhood or campus radio stations, though campus radio stations are often run over carrier
current. This is generally considered a form of microbroadcasting. As a general rule,
enforcement towards low-power FM stations is stricter than AM stations due to issues such as
the capture effect, and as a result, FM microbroadcasters generally do not reach as far as their
AM competitors.
FM transmitters have been used to construct miniature wireless microphones for espionage and
surveillance purposes (covert listening devices or so-called "bugs"); the advantage to using the
FM broadcast band for such operations is that the receiving equipment would not be considered
particularly suspect. Common practice is to tune the bug's transmitter off the ends of the
broadcast band, into what in the United States would be TV channel 6 (<87.9 MHz) or aviation
navigation frequencies (>107.9 MHz); most FM radios with analog tuners have sufficient
overcoverage to pick up these slightly-beyond-outermost frequencies, although many digitally
tuned radios do not.
Constructing a "bug" is a common early project for electronics hobbyists, and project kits to do
so are available from a wide variety of sources. The devices constructed, however, are often too
large and poorly shielded for use in clandestine activity.
In addition, much pirate radio activity is broadcast in the FM range because of the band's greater
clarity and listenership, and the smaller size and lower cost of equipment.
UNIT-4
PULSE MODULATION
Theory
For convenience, we will discuss signals which vary with time. However, the same results can be
applied to signals varying in space or in any other dimension and similar results are obtained in
two or more dimensions.
Let x(t) be a continuous signal which is to be sampled, and suppose that sampling is performed
by measuring the value of the continuous signal every T seconds, which is called the sampling
interval. Thus, the sampled signal x[n] is given by:
$$x[n] = x(nT), \quad n = 0, 1, 2, \dots$$
We can now ask: under what circumstances is it possible to reconstruct the original signal
completely and exactly (perfect reconstruction)?
The frequency equal to one-half of the sampling rate is therefore a bound on the highest
frequency that can be unambiguously represented by the sampled signal. This frequency (half the
sampling rate) is called the Nyquist frequency of the sampling system. Frequencies above the
Nyquist frequency fN can be observed in the sampled signal, but their frequency is ambiguous.
That is, a frequency component with frequency f cannot be distinguished from other components
with frequencies NfN + f and NfN – f for nonzero integers N. This ambiguity is called aliasing. To
handle this problem as gracefully as possible, most analog signals are filtered with an anti-
aliasing filter (usually a low-pass filter with cutoff near the Nyquist frequency) before
conversion to the sampled discrete representation.
Observation period
The observation period is the span of time during which a series of data samples are collected at
regular intervals. More broadly, it can refer to any specific period during which a set of data
points is gathered, regardless of whether or not the data is periodic in nature. Thus a researcher
might study the incidence of earthquakes and tsunamis over a particular time period, such as a
year or a century.
The observation period is simply the span of time during which the data is studied, regardless of
whether data so gathered represents a set of discrete events having arbitrary timing within the
interval, or whether the samples are explicitly bound to specified sub-intervals.
Practical implications
In practice, the continuous signal is sampled using an analog-to-digital converter (ADC), a non-
ideal device with various physical limitations. This results in deviations from the theoretically
perfect reconstruction capabilities, collectively referred to as distortion.
Aliasing. A precondition of the sampling theorem is that the signal be bandlimited. However, in
practice, no time-limited signal can be bandlimited. Since signals of interest are almost always
time-limited (e.g., at most spanning the lifetime of the sampling device in question), it follows
that they are not bandlimited. However, by designing a sampler with an appropriate guard band, it
is possible to obtain output that is as accurate as necessary.
Integration effect or aperture effect. This results from the fact that the sample is obtained as a
time average within a sampling region, rather than just being equal to the signal value at the
sampling instant. The integration effect is readily noticeable in photography when the exposure is
too long and creates a blur in the image. An ideal camera would have an exposure time of zero. In
a capacitor-based sample and hold circuit, the integration effect is introduced because the
capacitor cannot instantly change voltage thus requiring the sample to have non-zero width.
Jitter or deviation from the precise sample timing intervals.
Noise, including thermal sensor noise, analog circuit noise, etc.
Slew rate limit error, caused by an inability for an ADC output value to change sufficiently
rapidly.
Quantization as a consequence of the finite precision of words that represent the converted
values.
Error due to other non-linear effects of the mapping of input voltage to converted output value (in
addition to the effects of quantization).
The conventional, practical digital-to-analog converter (DAC) does not output a sequence of
Dirac impulses (which, if ideally low-pass filtered, would result in the original signal before
sampling) but instead outputs a sequence of piecewise constant values or rectangular pulses. This
means that there is an inherent effect of the zero-order hold on the effective frequency response
of the DAC, resulting in a mild roll-off of gain at the higher frequencies (a 3.9224 dB loss at the
Nyquist frequency). This zero-order hold effect is a consequence of the hold action of the DAC
and is not due to the sample and hold that might precede a conventional ADC, as is often
misunderstood. The DAC can also suffer errors from jitter, noise, slewing, and non-linear
mapping of input value to output voltage.
Jitter, noise, and quantization are often analyzed by modeling them as random errors added to the
sample values. Integration and zero-order hold effects can be analyzed as a form of low-pass
filtering. The non-linearities of either ADC or DAC are analyzed by replacing the ideal linear
function mapping with a proposed nonlinear function.
Applications
Audio sampling
When it is necessary to capture audio covering the entire 20–20,000 Hz range of human hearing,
such as when recording music or many types of acoustic events, audio waveforms are typically
sampled at 44.1 kHz (CD), 48 kHz (professional audio), or 96 kHz. The approximately double-
rate requirement is a consequence of the Nyquist theorem.
There has been an industry trend towards sampling rates well beyond the basic requirements;
96 kHz and even 192 kHz are available.[1] This is in contrast with laboratory experiments, which
have failed to show that ultrasonic frequencies are audible to human observers; however in some
cases ultrasonic sounds do interact with and modulate the audible part of the frequency spectrum
(intermodulation distortion). It is noteworthy that intermodulation distortion is not present in the
live audio and so it represents an artificial coloration to the live sound.[2]
One advantage of higher sampling rates is that they can relax the low-pass filter design
requirements for ADCs and DACs, but with modern oversampling sigma-delta converters this
advantage is less important.
Audio is typically recorded at 8-, 16-, and 20-bit depth, which yield a theoretical maximum
signal to quantization noise ratio (SQNR) for a pure sine wave of, approximately, 49.93 dB,
98.09 dB and 122.17 dB [3]. Eight-bit audio is generally not used due to prominent and inherent
quantization noise (low maximum SQNR), although the A-law and u-law 8-bit encodings pack
more resolution into 8 bits while increasing total harmonic distortion. CD quality audio is recorded
at 16-bit. In practice, not many consumer stereos can produce more than about 90 dB of dynamic
range, although some can exceed 100 dB. Thermal noise limits the true number of bits that can
be used in quantization. Few analog systems have signal to noise ratios (SNR) exceeding 120
dB; consequently, few situations will require more than 20-bit quantization.
For playback rather than recording purposes, a proper analysis of typical programme levels
throughout an audio system reveals that the capabilities of well-engineered 16-bit material far
exceed those of the very best hi-fi systems, with the microphone noise and loudspeaker
headroom being the real limiting factors.
Speech sampling
Speech signals, i.e., signals intended to carry only human speech, can usually be sampled at a
much lower rate. For most phonemes, almost all of the energy is contained in the 5 Hz-4 kHz
range, allowing a sampling rate of 8 kHz. This is the sampling rate used by nearly all telephony
systems, which use the G.711 sampling and quantization specifications.
Video sampling
Standard-definition television (SDTV) uses either 720 by 480 pixels (US NTSC 525-line) or 704
by 576 pixels (UK PAL 625-line) for the visible picture area.
Undersampling
Plot of sample rates (y axis) versus the upper edge frequency (x axis) for a band of width 1; gray areas
are combinations that are "allowed" in the sense that no two frequencies in the band alias to the same
frequency. The darker gray areas correspond to undersampling with the lowest allowable sample rate.
When one samples a bandpass signal at a rate lower than the Nyquist rate, the samples are equal
to samples of a low-frequency alias of the high-frequency signal; the original signal will still be
uniquely represented and recoverable if the spectrum of its alias does not cross over half the
sampling rate. Such undersampling is also known as bandpass sampling, harmonic sampling, IF
sampling, and direct IF to digital conversion.[4]
Oversampling
Oversampling is used in most modern analog-to-digital converters to reduce the distortion
introduced by practical digital-to-analog converters, such as a zero-order hold instead of
idealizations like the Whittaker–Shannon interpolation formula.
Complex sampling
Although complex-valued samples can be obtained as described above, they are much more
commonly created by manipulating samples of a real-valued waveform. For instance, the
equivalent baseband waveform can be created without explicitly computing the analytic signal,
by processing the product sequence $s(nT)\, e^{-i 2\pi (f_c/f_s) n}$ through a digital lowpass
filter whose cutoff frequency is B/2. Computing only every other sample of the output sequence
reduces the sample rate commensurate with the reduced Nyquist rate. The result is half as many
complex-valued samples as the original number of real samples. No information is lost, and the
original s(t) waveform can be recovered, if necessary.
The Nyquist–Shannon sampling theorem, which has been named after Harry Nyquist and
Claude Shannon, is a fundamental result in the field of information theory, in particular
telecommunications and signal processing. Sampling is the process of converting a signal (for
example, a function of continuous time or space) into a numeric sequence (a function of discrete
time or space). Shannon's version of the theorem states:[1]
    If a function x(t) contains no frequencies higher than B hertz, it is completely determined
    by giving its ordinates at a series of points spaced 1/(2B) seconds apart.
The theorem is commonly called the Nyquist sampling theorem; since it was also discovered
independently by E. T. Whittaker, by Vladimir Kotelnikov, and by others, it is also known as
Nyquist–Shannon–Kotelnikov, Whittaker–Shannon–Kotelnikov, Whittaker–Nyquist–
Kotelnikov–Shannon, WKS, etc., sampling theorem, as well as the Cardinal Theorem of
Interpolation Theory. It is often referred to simply as the sampling theorem.
In essence, the theorem shows that a bandlimited analog signal that has been sampled can be
perfectly reconstructed from an infinite sequence of samples if the sampling rate exceeds 2B
samples per second, where B is the highest frequency in the original signal. If a signal contains a
component at exactly B hertz, then samples spaced at exactly 1/(2B) seconds do not completely
determine the signal, Shannon's statement notwithstanding. This sufficient condition can be
weakened, as discussed at Sampling of non-baseband signals below.
More recent statements of the theorem are sometimes careful to exclude the equality condition;
that is, the condition is if x(t) contains no frequencies higher than or equal to B; this condition is
equivalent to Shannon's except when the function includes a steady sinusoidal component at
exactly frequency B.
The theorem assumes an idealization of any real-world situation, as it only applies to signals that
are sampled for infinite time; any time-limited x(t) cannot be perfectly bandlimited. Perfect
reconstruction is mathematically possible for the idealized model but only an approximation for
real-world signals and sampling techniques, albeit in practice often a very good one.
The theorem also leads to a formula for reconstruction of the original signal. The constructive
proof of the theorem leads to an understanding of the aliasing that can occur when a sampling
system does not satisfy the conditions of the theorem.
The sampling theorem provides a sufficient condition, but not a necessary one, for perfect
reconstruction. The field of compressed sensing provides a weaker sampling condition when the
underlying signal is known to be sparse in some domain.
Introduction
A signal or function is bandlimited if it contains no energy at frequencies higher than some
bandlimit or bandwidth B. A signal that is bandlimited is constrained in how rapidly it changes
in time, and therefore how much detail it can convey in an interval of time. The sampling
theorem asserts that the uniformly spaced discrete samples are a complete representation of the
signal if this bandwidth is less than half the sampling rate. To formalize these concepts, let x(t)
represent a continuous-time signal and X(f) be the continuous Fourier transform of that signal:
$$X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-i 2\pi f t}\, dt$$
The signal is bandlimited if
$$X(f) = 0 \quad \text{for all } |f| > B,$$
or, equivalently, supp(X) ⊆ [-B, B].[2] Then the sufficient condition for exact reconstructability
from samples at a uniform sampling rate $f_s$ (in samples per unit time) is:
$$f_s > 2B$$
or equivalently:
$$B < f_s / 2$$
2B is called the Nyquist rate and is a property of the bandlimited signal, while fs / 2 is called the
Nyquist frequency and is a property of this sampling system.
The time interval between successive samples is referred to as the sampling interval:
$$T = \frac{1}{f_s}$$
The sampling theorem leads to a procedure for reconstructing the original x(t) from the samples
and states sufficient conditions for such a reconstruction to be exact.
The continuous signal varies over time (or space in a digitized image, or another independent
variable in some other application) and the sampling process is performed by measuring the
continuous signal's value every T units of time (or space), which is called the sampling interval.
In practice, for signals that are a function of time, the sampling interval is typically quite small,
on the order of milliseconds, microseconds, or less. This results in a sequence of numbers, called
samples, to represent the original signal. Each sample value is associated with the instant in time
when it was measured. The reciprocal of the sampling interval (1/T) is the sampling frequency
denoted fs, which is measured in samples per unit of time. If T is expressed in seconds, then fs is
expressed in Hz.
Reconstruction
Reconstruction of the original signal is an interpolation process that mathematically defines a
continuous-time signal x(t) from the discrete samples x[n], at times in between the sample
instants nT.
Fig. 2: The normalized sinc function sin(πx) / (πx), showing the central peak at x = 0 and zero-
crossings at the other integer values of x.
The procedure: Each sample value is multiplied by the sinc function scaled so that the
zero-crossings of the sinc function occur at the sampling instants and that the sinc
function's central point is shifted to the time of that sample, nT. All of these shifted and
scaled functions are then added together to recover the original signal. The scaled and
time-shifted sinc functions are continuous making the sum of these also continuous, so
the result of this operation is a continuous signal. This procedure is represented by the
Whittaker–Shannon interpolation formula.
The condition: The signal obtained from this reconstruction process can have no
frequencies higher than one-half the sampling frequency. According to the theorem, the
reconstructed signal will match the original signal provided that the original signal
contains no frequencies at or above this limit. This condition is called the Nyquist
criterion, or sometimes the Raabe condition.
If the original signal contains a frequency component equal to one-half the sampling rate, the
condition is not satisfied. The resulting reconstructed signal may have a component at that
frequency, but the amplitude and phase of that component generally will not match the original
component.
This reconstruction or interpolation using sinc functions is not the only interpolation scheme.
Indeed, it is impossible in practice because it requires summing an infinite number of terms.
However, it is the interpolation method that in theory exactly reconstructs any given bandlimited
x(t) with any bandlimit B < 1/(2T); any other method that does so is formally equivalent to it.
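As a sketch of the Whittaker–Shannon formula in Python (necessarily truncated to a finite number
of terms, so only approximate; the test tone is an assumed example):

    import numpy as np

    def sinc_reconstruct(samples, T, t):
        """Approximate x(t) = sum_n x[n]*sinc((t - n*T)/T) from finitely many samples."""
        n = np.arange(len(samples))
        # np.sinc(u) computes sin(pi*u)/(pi*u), the normalized sinc used above
        return np.sum(samples * np.sinc((t - n * T) / T))

    fs = 8000.0
    T = 1.0 / fs
    n = np.arange(200)
    x = np.cos(2 * np.pi * 1000.0 * n * T)   # 1 kHz tone, well below fs/2
    t = 100.5 * T                            # a point midway between two samples
    print(sinc_reconstruct(x, T, t), np.cos(2 * np.pi * 1000.0 * t))  # nearly equal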
Practical considerations
A few consequences can be drawn from the theorem:
If the highest frequency B in the original signal is known, the theorem gives the lower
bound on the sampling frequency for which perfect reconstruction can be assured. This
lower bound to the sampling frequency, 2B, is called the Nyquist rate.
If instead the sampling frequency is known, the theorem gives us an upper bound for
frequency components, B<fs/2, of the signal to allow for perfect reconstruction. This
upper bound is the Nyquist frequency, denoted fN.
Both of these cases imply that the signal to be sampled must be bandlimited; that is, any
component of this signal which has a frequency above a certain bound should be zero, or
at least sufficiently close to zero to allow us to neglect its influence on the resulting
reconstruction. In the first case, the bandlimitation of the sampled signal can be taken as a
modelling assumption about its frequency content; in the second case it is usually enforced by
passing the signal through a suitable low-pass (anti-aliasing) filter before sampling.
In practice, neither of the two statements of the sampling theorem described above can be
completely satisfied, and neither can the reconstruction formula be precisely
implemented. The reconstruction process that involves scaled and delayed sinc functions
can be described as ideal. It cannot be realized in practice since it implies that each
sample contributes to the reconstructed signal at almost all time points, requiring
summing an infinite number of terms. Instead, some type of approximation of the sinc
functions, finite in length, has to be used. The error that corresponds to the sinc-function
approximation is referred to as interpolation error. Practical digital-to-analog converters
produce neither scaled and delayed sinc functions nor ideal impulses (that if ideally low-
pass filtered would yield the original signal), but a sequence of scaled and delayed
rectangular pulses. This practical piecewise-constant output can be modeled as a zero-
order hold filter driven by the sequence of scaled and delayed Dirac impulses referred to
in the mathematical basis section below. A shaping filter is sometimes used after the
DAC with zero-order hold to make a better overall approximation.
Furthermore, in practice, a signal can never be perfectly bandlimited, since ideal "brick-
wall" filters cannot be realized. All practical filters can only attenuate frequencies outside
a certain range, not remove them entirely. In addition to this, a "time-limited" signal can
never be bandlimited. This means that even if an ideal reconstruction could be made, the
reconstructed signal would not be exactly the original signal. The error that corresponds
to the failure of bandlimitation is referred to as aliasing.
The sampling theorem does not say what happens when the conditions and procedures
are not exactly met, but its proof suggests an analytical framework in which the non-
ideality can be studied. A designer of a system that deals with sampling and
reconstruction processes needs a thorough understanding of the signal to be sampled, in
particular its frequency content, the sampling frequency, how the signal is reconstructed
in terms of interpolation, and the requirement for the total reconstruction error, including
aliasing, sampling, interpolation and other errors. These properties and parameters may
need to be carefully tuned in order to obtain a useful system.
The Poisson summation formula shows that the samples, x[n] = x(nT), of function x(t) are
sufficient to create a periodic summation of function X(f). The result is:

    Xs(f) ≜ Σk X(f − k·fs) = Σn T·x(nT)·e^(−i2πnTf)   (Eq.1)

where the sums run over all integers k and n.
As depicted in Figures 3, 4, and 8, copies of X(f) are shifted by multiples of fs and combined by
addition.
Fig.3: Hypothetical spectrum of a properly sampled bandlimited signal (blue) and images
(green) that do not overlap. A "brick-wall" low-pass filter can remove the images and leave the
original spectrum, thus recovering the original signal from the samples.
If the sampling condition is not satisfied, adjacent copies overlap, and it is not possible in general
to discern an unambiguous X(f). Any frequency component above fs/2 is indistinguishable from a
lower-frequency component, called an alias, associated with one of the copies. The
reconstruction technique described below produces the alias, rather than the original component,
in such cases.
For a sinusoidal component of exactly half the sampling frequency, the component will in
general alias to another sinusoid of the same frequency, but with a different phase and amplitude.
To prevent or reduce aliasing, two things can be done:
1. Increase the sampling rate, to above twice some or all of the frequencies that are aliasing.
2. Introduce an anti-aliasing filter or make the anti-aliasing filter more stringent.
The purpose of the anti-aliasing filter is to restrict the bandwidth of the signal to satisfy the condition for proper
sampling. Such a restriction works in theory, but is not precisely satisfiable in reality, because
realizable filters will always allow some leakage of high frequencies. However, the leakage
energy can be made small enough so that the aliasing effects are negligible.
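The folding of high frequencies into the baseband can be computed directly. A small sketch (the
helper name is illustrative):

    def alias_frequency(f, fs):
        """Apparent frequency of a component f when sampled at rate fs."""
        f = f % fs              # aliases repeat at multiples of fs
        return min(f, fs - f)   # the upper half of [0, fs) folds back down

    print(alias_frequency(700.0, 1000.0))   # 300.0: a 700 Hz tone sampled at
                                            # 1 kHz is indistinguishable from 300 Hz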
The sampling theorem is usually formulated for functions of a single variable. Consequently, the
theorem is directly applicable to time-dependent signals and is normally formulated in that
context. However, the sampling theorem can be extended in a straightforward way to functions
of arbitrarily many variables. Grayscale images, for example, are often represented as two-
dimensional arrays (or matrices) of real numbers representing the relative intensities of pixels
(picture elements) located at the intersections of row and column sample locations. As a result,
images require two independent variables, or indices, to specify each pixel uniquely — one for
the row, and one for the column.
Color images typically consist of a composite of three separate grayscale images, one to
represent each of the three primary colors — red, green, and blue, or RGB for short. Other
colorspaces using 3-vectors for colors include HSV, LAB, XYZ, etc. Some colorspaces such as
cyan, magenta, yellow, and black (CMYK) may represent color by four dimensions. All of these
are treated as vector-valued functions over a two-dimensional sampled domain.
Similar to one-dimensional discrete-time signals, images can also suffer from aliasing if the
sampling resolution, or pixel density, is inadequate. For example, a digital photograph of a
striped shirt with high frequencies (in other words, the distance between the stripes is small), can
cause aliasing of the shirt when it is sampled by the camera's image sensor. The aliasing appears
as a moiré pattern. The "solution" to higher sampling in the spatial domain for this case would be
to move closer to the shirt, use a higher resolution sensor, or to optically blur the image before
acquiring it with the sensor.
Another example is shown to the left in the brick patterns. The top image shows the effects when
the sampling theorem's condition is not satisfied. When software rescales an image (the same
process that creates the thumbnail shown in the lower image) it, in effect, runs the image through
a low-pass filter first and then downsamples the image to result in a smaller image that does not
exhibit the moiré pattern. The top image is what happens when the image is downsampled
without low-pass filtering: aliasing results.
The application of the sampling theorem to images should be made with care. For example, the
sampling process in any standard image sensor (CCD or CMOS camera) is relatively far from
the ideal sampling which would measure the image intensity at a single point. Instead these
devices have a relatively large sensor area at each sample point in order to obtain sufficient
amount of light. In other words, any detector has a finite-width point spread function. The analog
optical image intensity function which is sampled by the sensor device is not in general
bandlimited, and the non-ideal sampling is itself a useful type of low-pass filter, though not
always sufficient to remove enough high frequencies to sufficiently reduce aliasing. When the
area of the sampling spot (the size of the pixel sensor) is not large enough to provide sufficient
anti-aliasing, a separate anti-aliasing filter (optical low-pass filter) is typically included in a
camera system to further blur the optical image. Despite images having these problems in
relation to the sampling theorem, the theorem can be used to describe the basics of down and up
sampling of images.
Downsampling
When a signal is downsampled, the sampling theorem can be invoked via the artifice of
resampling a hypothetical continuous-time reconstruction. The Nyquist criterion must still be
satisfied with respect to the new lower sampling frequency in order to avoid aliasing. To meet
the requirements of the theorem, the signal must usually pass through a low-pass filter of
appropriate cutoff frequency as part of the downsampling operation. This low-pass filter, which
prevents aliasing, is called an anti-aliasing filter.
Critical frequency
To illustrate the necessity of fs > 2B, consider the sinusoid x(t) = cos(2πBt + θ) sampled at
exactly the critical rate fs = 2B, i.e. at t = nT = n/(2B). The samples are
x(nT) = cos(πn + θ) = (−1)^n·cos(θ), identical to the samples of the amplitude-scaled sinusoid
xA(t) = cos(θ)·cos(2πBt).
But for any θ such that sin(θ) ≠ 0, x(t) and xA(t) have different amplitudes and different phase.
This and other ambiguities are the reason for the strict inequality of the sampling theorem's
condition.
Fig.8: Spectrum, Xs(f), of a properly sampled bandlimited signal (blue) and images (green) that
do not overlap. A "brick-wall" low-pass filter, H(f), removes the images, leaves the original
spectrum, X(f), and recovers the original signal from the samples.
From Figures 3 and 8, it is apparent that when there is no overlap of the copies (aka "images") of
X(f), the k = 0 term of Xs(f) can be recovered by the product:

    X(f) = H(f)·Xs(f),

where:

    H(f) ≜ 1 for |f| < B, and 0 for |f| > fs − B.

H(f) need not be precisely defined in the region [B, fs − B], because Xs(f) is zero in that region.
However, the worst case is when B = fs/2, the Nyquist frequency. A function that is sufficient for
that and all less severe cases is the ideal rectangular filter H(f) = rect(f/fs). Therefore:

    x(t) = Σn x(nT)·sinc((t − nT)/T),[3]
which is the Whittaker–Shannon interpolation formula. It shows explicitly how the samples,
x(nT), can be combined to reconstruct x(t).
On the left are values of f(t) at the sampling points. The integral on the right will be
recognized as essentially the nth coefficient in a Fourier-series expansion of the function
F(ω), taking the interval –W to W as a fundamental period. This means that the values of
the samples f(n / 2W) determine the Fourier coefficients in the series expansion of F(ω).
Thus they determine F(ω), since F(ω) is zero for frequencies greater than W, and for
lower frequencies F(ω) is determined if its Fourier coefficients are determined. But F(ω)
determines the original function f(t) completely, since a function is determined if its
spectrum is known. Therefore the original samples determine the function f(t)
completely.
Shannon's proof of the theorem is complete at that point, but he goes on to discuss reconstruction
via sinc functions, what we now call the Whittaker–Shannon interpolation formula as discussed
above. He does not derive or prove the properties of the sinc function, but these would have been
familiar to engineers reading his works at the time, since the Fourier pair relationship between
rect (the rectangular function) and sinc was well known. Quoting Shannon:
Let xn be the nth sample. Then the function f(t) is represented by:

    f(t) = Σn xn · sin(π(2Wt − n)) / (π(2Wt − n)).
As in the other proof, the existence of the Fourier transform of the original signal is assumed, so
the proof does not say whether the sampling theorem extends to bandlimited stationary random
processes.
A similar result is true if the band does not start at zero frequency but at some
higher value, and can be proved by a linear translation (corresponding physically
to single-sideband modulation) of the zero-frequency case. In this case the
elementary pulse is obtained from sin(x)/x by single-side-band modulation.
That is, a sufficient no-loss condition for sampling signals that do not have baseband
components exists that involves the width of the non-zero frequency interval as opposed to its
highest frequency component. See Sampling (signal processing) for more details and examples.
A bandpass condition is that X(f) = 0, for all nonnegative f outside the open band of frequencies:

    ( N·fs/2 , (N+1)·fs/2 )

for some nonnegative integer N. This formulation includes the normal baseband condition as the
case N=0.
The corresponding interpolation function is the impulse response of an ideal brick-wall bandpass
filter (as opposed to the ideal brick-wall lowpass filter used above) with cutoffs at the upper and
lower edges of the specified band, which is the difference between a pair of lowpass impulse
responses:

    (N+1)·sinc((N+1)·t/T) − N·sinc(N·t/T).
Other generalizations, for example to signals occupying multiple non-contiguous bands, are
possible as well. Even the most generalized form of the sampling theorem does not have a
provably true converse. That is, one cannot conclude that information is necessarily lost just
because the conditions of the sampling theorem are not satisfied; from an engineering
perspective, however, it is generally safe to assume that if the sampling theorem is not satisfied
then information will most likely be lost.
Nonuniform sampling
The sampling theory of Shannon can be generalized for the case of nonuniform samples, that is,
samples not taken equally spaced in time. Shannon sampling theory for non-uniform sampling
states that a band-limited signal can be perfectly reconstructed from its samples if the average
sampling rate satisfies the Nyquist condition.[4] Therefore, although uniformly spaced samples
may result in easier reconstruction algorithms, uniform spacing is not a necessary condition for
perfect reconstruction.
Beyond Nyquist
The Nyquist–Shannon sampling theorem is a sufficient condition for the reconstruction of a
band-limited signal. It is also necessary, in the sense that if samples are taken at a slower rate,
then there is some band-limited signal that cannot be reconstructed by this sampling. However, if
further restrictions are imposed on the signal, then the Nyquist-Shannon sampling theorem may
no longer be a necessary condition. For example, if the signal is band-limited and also has no
components near DC, then the full Nyquist rate is not necessary, since the passband technique
discussed earlier in the article applies.
A non-trivial example of exploiting extra assumptions about the signal is given by the recent
field of compressed sensing, which allows for full reconstruction with a sub-Nyquist sampling
rate. Specifically, this applies to signals that are sparse (or compressible) in some domain. As an
example, compressed sensing deals with signals that may have a low over-all bandwidth (say,
the effective bandwidth EB), but the frequency components are spread out in the overall
bandwidth B, rather than all together in a single band, so that the passband technique doesn't
apply. In other words, the frequency spectrum is sparse. Traditionally, the necessary sampling
rate is thus 2B. Using compressed sensing techniques, the signal could be perfectly
reconstructed if it is sampled at a rate slightly greater than 2·EB. The downside of this
approach is that reconstruction is no longer given by a formula, but instead by the solution to a
convex optimization program which requires well-studied but nonlinear methods.
Historical background
The sampling theorem was implied by the work of Harry Nyquist in 1928 ("Certain topics in
telegraph transmission theory"), in which he showed that up to 2B independent pulse samples
could be sent through a system of bandwidth B; but he did not explicitly consider the problem of
sampling and reconstruction of continuous signals. About the same time, Karl Küpfmüller
showed a similar result,[5] and discussed the sinc-function impulse response of a band-limiting
filter, via its integral, the step response Integralsinus; this bandlimiting and reconstruction filter
that is so central to the sampling theorem is sometimes referred to as a Küpfmüller filter (but
seldom so in English).
The sampling theorem, essentially a dual of Nyquist's result, was proved by Claude E. Shannon
in 1949 ("Communication in the presence of noise"). V. A. Kotelnikov published similar results
in 1933 ("On the transmission capacity of the 'ether' and of cables in electrical communications",
translation from the Russian), as did the mathematician E. T. Whittaker in 1915 ("Expansions of
the Interpolation-Theory", "Theorie der Kardinalfunktionen"), J. M. Whittaker in 1935
("Interpolatory function theory"), and Gabor in 1946 ("Theory of communication").
Other discoverers
Meijering[8] mentions several other discoverers and names in a paragraph and pair of footnotes:
As pointed out by Higgins [135], the sampling theorem should really be considered in two parts,
as done above: the first stating the fact that a bandlimited function is completely determined by
its samples, the second describing how to reconstruct the function using its samples. Both parts
of the sampling theorem were given in a somewhat different form by J. M. Whittaker [350, 351,
353] and before him also by Ogura [241, 242]. They were probably not aware of the fact that the
first part of the theorem had been stated as early as 1897 by Borel [25].27 As we have seen, Borel
also used around that time what became known as the cardinal series. However, he appears not
to have made the link [135]. In later years it became known that the sampling theorem had been
presented before Shannon to the Russian communication community by Kotel'nikov [173]. In
more implicit, verbal form, it had also been described in the German literature by Raabe [257].
Several authors [33, 205] have mentioned that Someya [296] introduced the theorem in the
Japanese literature parallel to Shannon. In the English literature, Weston [347] introduced it
independently of Shannon around the same time.28
27
Several authors, following Black [16], have claimed that this first part of the sampling theorem
was stated even earlier by Cauchy, in a paper [41] published in 1841. However, the paper of
Cauchy does not contain such a statement, as has been pointed out by Higgins [135].
28
As a consequence of the discovery of the several independent introductions of the sampling
theorem, people started to refer to the theorem by including the names of the aforementioned
authors, resulting in such catchphrases as “the Whittaker-Kotel’nikov-Shannon (WKS) sampling
theorem" [155] or even "the Whittaker-Kotel'nikov-Raabe-Shannon-Someya sampling theorem"
[33]. To avoid confusion, perhaps the best thing to do is to refer to it as the sampling theorem,
"rather than trying to find a title that does justice to all claimants" [136].
Why Nyquist?
Exactly how, when, or why Harry Nyquist had his name attached to the sampling theorem
remains obscure. The term Nyquist Sampling Theorem (capitalized thus) appeared as early as
1959 in a book from his former employer, Bell Labs,[9] and appeared again in 1963, and not
capitalized in 1965. It had been called the Shannon Sampling Theorem as early as 1954, but also
just the sampling theorem by several other books in the early 1950s.
When Shannon stated and proved the sampling theorem in his 1949 paper, according to
Meijering[8] "he referred to the critical sampling interval T = 1/(2W) as the Nyquist interval
corresponding to the band W, in recognition of Nyquist’s discovery of the fundamental
importance of this interval in connection with telegraphy." This explains Nyquist's name on the
critical interval, but not on the theorem.
Similarly, Nyquist's name was attached to Nyquist rate in 1953 by Harold S. Black:[14]
"If the essential frequency range is limited to B cycles per second, 2B was given by
Nyquist as the maximum number of code elements per second that could be
unambiguously resolved, assuming the peak interference is less than half a quantum step. This
rate is generally referred to as signaling at the Nyquist rate and 1/(2B) has been termed
a Nyquist interval." (bold added for emphasis; italics as in the original)
According to the OED, this may be the origin of the term Nyquist rate. In Black's usage, it is not
a sampling rate, but a signaling rate.
Pulse-amplitude modulation
Principle of PAM; (1) original Signal, (2) PAM-Signal, (a) Amplitude of Signal, (b) Time
Overview
Pulse-amplitude modulation, acronym PAM, is a form of signal modulation where the
message information is encoded in the amplitude of a series of signal pulses.
Example: A two-bit modulator (PAM-4) takes two bits at a time and maps the signal
amplitude to one of four possible levels, for example −3 volts, −1 volt, 1 volt, and 3 volts.
Demodulation is performed by detecting the amplitude level of the carrier at every symbol
period.
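A sketch of this PAM-4 mapping in Python (the particular bit-pair-to-level assignment below is one
illustrative, Gray-coded choice; real standards may differ):

    LEVELS = {(0, 0): -3.0, (0, 1): -1.0, (1, 1): +1.0, (1, 0): +3.0}  # volts

    def pam4_modulate(bits):
        """Group the bit stream in pairs and map each pair to an amplitude level."""
        assert len(bits) % 2 == 0
        return [LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

    print(pam4_modulate([0, 0, 1, 1, 1, 0, 0, 1]))   # [-3.0, 1.0, 3.0, -1.0]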
Pulse-amplitude modulation is widely used in baseband transmission of digital data, with non-
baseband applications having been largely superseded by pulse-code modulation, and, more
recently, by pulse-position modulation.
In particular, all telephone modems faster than 300 bit/s use quadrature amplitude modulation
(QAM). (QAM uses a two-dimensional constellation).
The IEEE 802.3an standard defines the wire-level modulation for 10GBASE-T as a Tomlinson-
Harashima Precoded (THP) version of pulse-amplitude modulation with 16 discrete levels
(PAM-16), encoded in a two-dimensional checkerboard pattern known as DSQ128. Several
proposals were considered for wire-level modulation, including PAM with 12 discrete levels
(PAM-12), 10 levels (PAM-10), or 8 levels (PAM-8), each with and without Tomlinson-
Harashima Precoding (THP).
Usage in Photobiology
The concept is also utilized for the study of photosynthesis using a PAM fluorometer. This
specialized instrument involves a spectrofluorometric measurement of the kinetics of
fluorescence rise and decay in the light-harvesting antenna of thylakoid membranes, thus
querying various aspects of the state of the photosystems under different environmental
conditions.
Pulse Amplitude Modulation LED drivers are able to synchronize pulses across multiple LED
channels to enable perfect colour matching. Due to the inherent nature of PAM, in conjunction
with the rapid switching speed of LEDs, it is possible to use LED lighting as a means of wireless
data transmission at high speed.
Pulse-width modulation
Pulse-width modulation (PWM) is a commonly used technique for controlling power to inertial
electrical devices, made practical by modern electronic power switches.
The average value of voltage (and current) fed to the load is controlled by turning the switch
between supply and load on and off at a fast pace. The longer the switch is on compared to the
off periods, the higher the power supplied to the load is.
The PWM switching frequency has to be much faster than what would affect the load, which is
to say the device that uses the power. Typically the switching has to be done several times a
minute in an electric stove, at 120 Hz in a lamp dimmer, from a few kilohertz (kHz) to tens of kHz
for a motor drive, and well into the tens or hundreds of kHz in audio amplifiers and computer
power supplies.
The term duty cycle describes the proportion of 'on' time to the regular interval or 'period' of
time; a low duty cycle corresponds to low power, because the power is off for most of the time.
Duty cycle is expressed in percent, 100% being fully on.
The main advantage of PWM is that power loss in the switching devices is very low. When a
switch is off there is practically no current, and when it is on, there is almost no voltage drop
across the switch. Power loss, being the product of voltage and current, is thus in both cases
close to zero. PWM works also well with digital controls, which, because of their on/off nature,
can easily set the needed duty cycle.
History
In the past, when only partial power was needed (such as for a sewing machine motor), a
rheostat (located in the sewing machine's foot pedal) connected in series with the motor adjusted
the amount of current flowing through the motor, but also wasted power as heat in the resistor
element. It was an inefficient scheme, but tolerable because the total power was low. This was
one of several methods of controlling power. There were others—some still in use—such as
variable autotransformers, including the trademarked 'Autrastat' for theatrical lighting; and the
Variac, for general AC power adjustment. These were quite efficient, but also relatively costly.
For about a century, some variable-speed electric motors have had decent efficiency, but they
were somewhat more complex than constant-speed motors, and sometimes required bulky
external electrical apparatus, such as a bank of variable power resistors or a rotating converter
such as a Ward Leonard drive.
However, in addition to motor drives for fans, pumps and robotic servos, there was a great need
for compact and low cost means for applying adjustable power for many devices, such as electric
stoves and lamp dimmers.
One of the earliest applications of PWM was in the Sinclair X10, a 10 W audio amplifier available in
kit form in the 1960s. At around the same time PWM started to be used in AC motor control.[1]
Principle
Pulse-width modulation uses a rectangular pulse wave whose pulse width is modulated, resulting
in the variation of the average value of the waveform. If we consider a pulse waveform f(t) with
period T, a low value ymin, a high value ymax and a duty cycle D, the average value of the
waveform is given by:

    ȳ = (1/T) ∫ f(t) dt   (integral over one period T).

As f(t) is a pulse wave, its value is ymax for 0 < t < D·T and ymin for D·T < t < T. The above
expression then becomes:

    ȳ = (1/T)·(D·T·ymax + T·(1 − D)·ymin) = D·ymax + (1 − D)·ymin.

This latter expression can be fairly simplified in many cases where ymin = 0, as ȳ = D·ymax.
From this, it is obvious that the average value of the signal (ȳ) is directly dependent on the duty
cycle D.
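A one-line numeric check of this result (arbitrary example values):

    def pwm_average(duty, y_max, y_min=0.0):
        """Average of a pulse wave: D*ymax + (1 - D)*ymin."""
        return duty * y_max + (1.0 - duty) * y_min

    print(pwm_average(0.25, 12.0))   # 25% duty on a 12 V rail -> 3.0 V average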
Fig. 2: A simple method to generate the PWM pulse train corresponding to a given signal is the
intersective PWM: the signal (here the green sinewave) is compared with a sawtooth waveform (blue).
When the latter is less than the former, the PWM signal (magenta) is in high state (1). Otherwise it is in
the low state (0).
The simplest way to generate a PWM signal is the intersective method, which requires only a
sawtooth or a triangle waveform (easily generated using a simple oscillator) and a comparator.
When the value of the reference signal (the green sine wave in figure 2) is more than the
modulation waveform (blue), the PWM signal (magenta) is in the high state, otherwise it is in the
low state.
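A sketch of the intersective method in Python (the 2 Hz reference and 50 Hz sawtooth carrier are
assumed example values):

    import numpy as np

    t = np.linspace(0.0, 1.0, 10000, endpoint=False)      # one second of time
    reference = 0.5 + 0.5 * np.sin(2 * np.pi * 2.0 * t)   # slow modulating signal in [0, 1]
    carrier = (50.0 * t) % 1.0                            # 50 Hz sawtooth in [0, 1)
    pwm = (carrier < reference).astype(float)             # high while sawtooth < signal

    print(pwm.mean())   # overall duty tracks the mean of the reference (about 0.5)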
Delta
In the use of delta modulation for PWM control, the output signal is integrated, and the result is
compared with limits, which correspond to a reference signal offset by a constant. Every time the
integral of the output signal reaches one of the limits, the PWM signal changes state.
Fig. 3 : Principle of the delta PWM. The output signal (blue) is compared with the limits (green). These
limits correspond to the reference signal (red), offset by a given value. Every time the output signal
reaches one of the limits, the PWM signal changes state.
Delta-sigma
In delta-sigma modulation as a PWM control method, the output signal is subtracted from a
reference signal to form an error signal. This error is integrated, and when the integral of the
error exceeds the limits, the output changes state.
Space vector modulation

Space vector modulation is a PWM control algorithm for multi-phase AC generation, in which
the reference signal is sampled regularly; after each sample, non-zero active switching vectors
adjacent to the reference vector and one or more of the zero switching vectors are selected for the
appropriate fraction of the sampling period in order to synthesize the reference signal as the
average of the used vectors.
Direct torque control (DTC)

Direct torque control is a method used to control AC motors. It is closely related to the delta
modulation (see above). Motor torque and magnetic flux are estimated and these are controlled
to stay within their hysteresis bands by turning on a new combination of the device's
semiconductor switches each time either signal tries to deviate out of its band.
Time proportioning
Many digital circuits can generate PWM signals (e.g. many microcontrollers have PWM
outputs). They normally use a counter that increments periodically (it is connected directly or
indirectly to the clock of the circuit) and is reset at the end of every period of the PWM. When
the counter value is more than the reference value, the PWM output changes state from high to
low (or low to high).[2] This technique is referred to as time proportioning, particularly as time-
proportioning control[3] – the proportion of a fixed cycle time spent in the high state.
The incremented and periodically reset counter is the discrete version of the intersective
method's sawtooth. The analog comparator of the intersective method becomes a simple integer
comparison between the current counter value and the digital (possibly digitized) reference
value. The duty cycle can only be varied in discrete steps, as a function of the counter resolution.
However, a high-resolution counter can provide quite satisfactory performance.
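A sketch of one period of such a counter-based PWM (register width and names are illustrative,
not tied to any particular microcontroller):

    def counter_pwm_period(reference, resolution_bits=8):
        """Output is high while the free-running counter is below the reference value."""
        top = 1 << resolution_bits   # the counter resets after reaching this count
        return [1 if count < reference else 0 for count in range(top)]

    wave = counter_pwm_period(reference=64)
    print(sum(wave) / len(wave))   # duty cycle = 64/256 = 0.25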
Types
Fig. 5 : Three types of PWM signals (blue): leading edge modulation (top), trailing edge modulation
(middle) and centered pulses (both edges are modulated, bottom). The green lines are the sawtooth
waveform (first and second cases) and a triangle waveform (third case) used to generate the PWM
waveforms using the intersective method.
1. The pulse center may be fixed in the center of the time window and both edges of the pulse
moved to compress or expand the width.
2. The lead edge can be held at the lead edge of the window and the tail edge modulated.
3. The tail edge can be fixed and the lead edge modulated.
4. The pulse repetition frequency can be varied by the signal, and the pulse width can be constant.
However, this method has a more-restricted range of average output than the other three.
Spectrum
The resulting spectra (of the three cases) are similar, and each contains a dc component, a base
sideband containing the modulating signal and phase modulated carriers at each harmonic of the
frequency of the pulse. The amplitudes of the harmonic groups are restricted by a sin(x)/x
envelope (sinc function) and extend to infinity.
In contrast, delta modulation is a random process that produces a continuous spectrum
without distinct harmonics.
Applications
Telecommunications
In telecommunications, the widths of the pulses correspond to specific data values encoded at
one end and decoded at the other.
Pulses of various lengths (the information itself) will be sent at regular intervals (the carrier
frequency of the modulation).
(Diagram: a clock pulse train at regular intervals and, below it, a PWM signal whose pulse
widths encode the data values 0, 1, 2, 4, 0, 4, 1, 0.)
The inclusion of a clock signal is not necessary, as the leading edge of the data signal can be
used as the clock if a small offset is added to the data value in order to avoid a data value with a
zero length pulse.
(Diagram: the same data values encoded by pulse widths alone, without a separate clock.)
Power delivery
PWM can be used to adjust the total amount of power delivered to a load without losses
normally incurred when a power transfer is limited by resistive means. The drawbacks are the
pulsations defined by the duty cycle, switching frequency and the properties of the load. With a
sufficiently high switching frequency and, when necessary, additional passive electronic
filters, the pulse train can be smoothed and the average analog waveform recovered.
Variable-speed fan controllers for computers usually use PWM, as it is far more efficient when
compared to a potentiometer or rheostat. (Neither of the latter is practical to operate
electronically; they would require a small drive motor.)
Light dimmers for home use employ a specific type of PWM control. Home-use light dimmers
typically include electronic circuitry which suppresses current flow during defined portions of
each cycle of the AC line voltage. Adjusting the brightness of light emitted by a light source is
then merely a matter of setting at what voltage (or phase) in the AC halfcycle the dimmer begins
to provide electrical current to the light source (e.g. by using an electronic switch such as a triac).
In this case the PWM duty cycle is the ratio of the conduction time to the duration of the half AC
cycle defined by the frequency of the AC line voltage (50 Hz or 60 Hz depending on the
country).
These rather simple types of dimmers can be effectively used with inert (or relatively slow
reacting) light sources such as incandescent lamps, for example, for which the additional
modulation in supplied electrical energy which is caused by the dimmer causes only negligible
additional fluctuations in the emitted light. Some other types of light sources such as light-
emitting diodes (LEDs), however, turn on and off extremely rapidly and would perceivably
flicker if supplied with low frequency drive voltages. Perceivable flicker effects from such rapid
response light sources can be reduced by increasing the PWM frequency. If the light fluctuations
are sufficiently rapid, the human visual system can no longer resolve them and the eye perceives
the time average intensity without flicker (see flicker fusion threshold).
In electric cookers, continuously-variable power is applied to the heating elements such as the
hob or the grill using a device known as a simmerstat. This consists of a thermal oscillator
running at approximately two cycles per minute, and the mechanism varies the duty cycle
according to the knob setting.
Voltage regulation
PWM is also used in efficient voltage regulators. By switching voltage to the load with the
appropriate duty cycle, the output will approximate a voltage at the desired level. The switching
noise is usually filtered with an inductor and a capacitor.
One method measures the output voltage. When it is lower than the desired voltage, it turns on
the switch. When the output voltage is above the desired voltage, it turns off the switch.
A new class of audio amplifiers based on the PWM principle is becoming popular. Called
"Class-D amplifiers", these amplifiers produce a PWM equivalent of the analog input signal
which is fed to the loudspeaker via a suitable filter network to block the carrier and recover the
original audio. These amplifiers are characterized by very good efficiency figures (≥ 90%) and
compact size/light weight for large power outputs. For a few decades, industrial and military
PWM amplifiers have been in common use, often for driving servo motors. They offer very good
efficiency, commonly well above 90%. Field-gradient coils in MRI machines are driven by
relatively-high-power PWM amplifiers.
Historically, a crude form of PWM has been used to play back PCM digital sound on the PC
speaker, which is driven by only two voltage levels, typically 0 V and 5 V. By carefully timing
the duration of the pulses, and by relying on the speaker's physical filtering properties (limited
frequency response, self-inductance, etc.) it was possible to obtain an approximate playback of
mono PCM samples, although at a very low quality, and with greatly varying results between
implementations.
In more recent times, the Direct Stream Digital sound encoding method was introduced, which
uses a generalized form of pulse-width modulation called pulse density modulation, at a high
enough sampling rate (typically on the order of MHz) to cover the whole acoustic frequency
range with sufficient fidelity. This method is used in the SACD format, and reproduction of the
encoded audio signal is essentially similar to the method used in class-D amplifiers.
Pulse-position modulation
Pulse-position modulation (PPM) is a form of signal modulation in which M message bits are
encoded by transmitting a single pulse in one of 2^M possible time shifts. This is repeated every T
seconds, such that the transmitted bit rate is M/T bits per second. It is primarily useful for optical
communications systems, where there tends to be little or no multipath interference.
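A sketch of the encoding step (slot and frame structure as described above; names are
illustrative):

    def ppm_encode(bits):
        """Place a single pulse in one of 2**M time slots selected by M message bits."""
        M = len(bits)
        index = int("".join(str(b) for b in bits), 2)   # the bits name the slot
        frame = [0] * (1 << M)
        frame[index] = 1
        return frame

    print(ppm_encode([1, 0]))   # M = 2 -> 4 slots, pulse in slot 2: [0, 0, 1, 0]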
Synchronization
One of the key difficulties of implementing this technique is that the receiver must be properly
synchronized to align the local clock with the beginning of each symbol. Therefore, it is often
implemented differentially as differential pulse-position modulation, whereby each pulse
position is encoded relative to the previous pulse, such that the receiver must only measure the
difference in the arrival time of successive pulses. It is possible to limit the propagation of errors
to adjacent symbols, so that an error in measuring the differential delay of one pulse will affect
only two symbols, instead of affecting all successive measurements.
Non-coherent detection
One of the principal advantages of PPM is that it is an M-ary modulation technique that can be
implemented non-coherently, such that the receiver does not need to use a phase-locked loop
(PLL) to track the phase of the carrier. This makes it a suitable candidate for optical
communications systems, where coherent phase modulation and detection are difficult and
extremely expensive. The only other common M-ary non-coherent modulation technique is M-
ary Frequency Shift Keying (M-FSK), which is the frequency-domain dual to PPM.
Optical communications systems (even wireless ones) tend to have weak multipath distortions,
and PPM is a viable modulation scheme in many such applications.
Servos made for model radio control include some of the electronics required to convert the
pulse to the motor position - the receiver is merely required to demultiplex the separate channels
and feed the pulses to each servo.
More sophisticated R/C systems are now often based on pulse-code modulation, which is more
complex but offers greater flexibility and reliability.
Pulse position modulation is also used for communication to the ISO/IEC 15693 contactless
smart card as well as the HF implementation of the EPC Class 1 protocol for RFID tags.
UNIT-5
NOISE IN AM AND FM
Signal-to-noise ratio
Signal-to-noise ratio (often abbreviated SNR or S/N) is a measure used in science and
engineering to quantify how much a signal has been corrupted by noise. It is defined as the ratio
of signal power to the noise power corrupting the signal. A ratio higher than 1:1 indicates more
signal than noise. While SNR is commonly quoted for electrical signals, it can be applied to any
form of signal (such as isotope levels in an ice core or biochemical signaling between cells).
In less technical terms, signal-to-noise ratio compares the level of a desired signal (such as
music) to the level of background noise. The higher the ratio, the less obtrusive the background
noise is.
"Signal-to-noise ratio" is sometimes used informally to refer to the ratio of useful information to
false or irrelevant data in a conversation or exchange. For example, in online discussion forums
and other online communities, off-topic posts and spam are regarded as "noise" that interferes
with the "signal" of appropriate discussion.
Definition
Signal-to-noise ratio is defined as the power ratio between a signal (meaningful information) and
the background noise (unwanted signal):

    SNR = Psignal / Pnoise,

where P is average power. Both signal and noise power must be measured at the same or
equivalent points in a system, and within the same system bandwidth. If the signal and the noise
are measured across the same impedance, then the SNR can be obtained by calculating the
square of the amplitude ratio:

    SNR = Psignal / Pnoise = (Asignal / Anoise)²,

where A is the root mean square (RMS) amplitude.
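A sketch of this computation on sampled records, with the result in decibels (the test signal and
noise are assumed examples; both records are taken at the same point, as required above):

    import numpy as np

    def snr_db(signal, noise):
        """10*log10(P_signal/P_noise), with P the mean-square (average) power."""
        p_signal = np.mean(np.asarray(signal) ** 2)
        p_noise = np.mean(np.asarray(noise) ** 2)
        return 10.0 * np.log10(p_signal / p_noise)

    rng = np.random.default_rng(0)
    tone = np.sin(2 * np.pi * 0.01 * np.arange(10000))   # example signal, P = 0.5
    noise = 0.1 * rng.standard_normal(10000)             # example noise, P = 0.01
    print(snr_db(tone, noise))                           # about 17 dB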
The concepts of signal-to-noise ratio and dynamic range are closely related. Dynamic range
measures the ratio between the strongest un-distorted signal on a channel and the minimum
discernable signal, which for most purposes is the noise level. SNR measures the ratio between
an arbitrary signal level (not necessarily the most powerful signal possible) and noise. Measuring
signal-to-noise ratios requires the selection of a representative or reference signal. In audio
engineering, the reference signal is usually a sine wave at a standardized nominal or alignment
level, such as 1 kHz at +4 dBu (1.228 VRMS).
SNR is usually taken to indicate an average signal-to-noise ratio, as it is possible that (near)
instantaneous signal-to-noise ratios will be considerably different. The concept can be
understood as normalizing the noise level to 1 (0 dB) and measuring how far the signal 'stands
out'.
Alternative definition
An alternative definition of SNR is as the reciprocal of the coefficient of variation, i.e., the ratio
of mean to standard deviation of a signal or measurement:[2] [3]

    SNR = μ / σ,

where μ is the signal mean or expected value and σ is the standard deviation of the noise, or an
estimate thereof.[note 2] Notice that such an alternative definition is only useful for variables that
are always positive (such as photon counts and luminance). Thus it is commonly used in image
processing,[4][5][6][7] where the SNR of an image is usually calculated as the ratio of the mean pixel
value to the standard deviation of the pixel values over a given neighborhood. Sometimes SNR is
defined as the square of the alternative definition above.
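A sketch of the mean-over-sigma definition applied to an image patch (the synthetic patch is an
assumed example):

    import numpy as np

    def image_snr(region):
        """SNR = mean pixel value / standard deviation over a neighborhood."""
        region = np.asarray(region, dtype=float)
        return region.mean() / region.std()

    rng = np.random.default_rng(1)
    patch = 100.0 + 5.0 * rng.standard_normal((32, 32))   # mean 100, sigma 5
    print(image_snr(patch))                               # close to 20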
Yet another alternative, very specific and distinct definition of SNR is employed to characterize
sensitivity of imaging systems; see signal to noise ratio (imaging).
Related measures are the "contrast ratio" and the "contrast-to-noise ratio".
Recording of the noise of a thermogravimetric analysis device that is poorly isolated from a
mechanical point of view; the middle of the curve shows a lower noise, due to a lesser
surrounding human activity at night.
All real measurements are disturbed by noise. This includes electronic noise, but can also include
external events that affect the measured phenomenon — wind, vibrations, gravitational attraction
of the moon, variations of temperature, variations of humidity, etc., depending on what is
measured and of the sensitivity of the device. It is often possible to reduce the noise by
controlling the environment. Otherwise, when the characteristics of the noise are known and are
different from the signals, it is possible to filter it or to process the signal. When the signal is
constant or periodic and the noise is random, it is possible to enhance the SNR by averaging the
measurement.
Digital signals
When a measurement is digitised, the number of bits used to represent the measurement
determines the maximum possible signal-to-noise ratio. This is because the minimum possible
noise level is the error caused by the quantization of the signal, sometimes called quantization
noise. This noise level is non-linear and signal-dependent; different calculations exist for
different signal models. Quantization noise is modeled as an analog error signal summed with
the signal before quantization ("additive noise").
Although noise levels in a digital system can be expressed using SNR, it is more common to use
Eb/N0, the ratio of energy per bit to noise power spectral density.
The modulation error ratio (MER) is a measure of the SNR in a digitally modulated signal.
Fixed point
For n-bit integers with equal distance between quantization levels (uniform quantization), the
dynamic range (DR) equals the maximum achievable SNR.
Assuming a uniform distribution of input signal values, the quantization noise is a uniformly-
distributed random signal with a peak-to-peak amplitude of one quantization level, making the
amplitude ratio 2^n/1. The formula is then:

    DR = SNR = 20·log10(2^n) ≈ 6.02·n dB.
This relationship is the origin of statements like "16-bit audio has a dynamic range of 96 dB".
Each extra quantization bit increases the dynamic range by roughly 6 dB.
Assuming a full-scale sine wave signal (that is, the quantizer is designed such that it has the
same minimum and maximum values as the input signal), the quantization noise approximates a
sawtooth wave with peak-to-peak amplitude of one quantization level [9] and uniform distribution.
In this case, the SNR is approximately

    SNR ≈ 20·log10(2^n · √(3/2)) ≈ 6.02·n + 1.76 dB.
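These two rules are easy to evaluate numerically; a sketch:

    import math

    def dynamic_range_db(n_bits):
        """Uniform quantization: 20*log10(2**n), roughly 6.02 dB per bit."""
        return 20.0 * math.log10(2 ** n_bits)

    def sine_sqnr_db(n_bits):
        """Full-scale sine approximation: 6.02*n + 1.76 dB."""
        return 6.02 * n_bits + 1.76

    print(dynamic_range_db(16))   # ~96.3 dB, the usual "16-bit audio" figure
    print(sine_sqnr_db(16))       # ~98.1 dB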
Floating point
Floating-point numbers provide a way to trade off signal-to-noise ratio for an increase in
dynamic range. For n-bit floating-point numbers, with n − m bits in the mantissa and m bits in the
exponent:

    DR ≈ 6.02·2^m dB,    SNR ≈ 6.02·(n − m) dB.
Optical SNR
Optical signals have a carrier frequency that is much higher than the modulation frequency
(about 200 THz and more). This way the noise covers a bandwidth that is much wider than the
signal itself. The resulting signal influence relies mainly on the filtering of the noise. To describe
the signal quality without taking the receiver into account, the optical SNR (OSNR) is used. The
OSNR is the ratio between the signal power and the noise power in a given bandwidth. Most
commonly a reference bandwidth of 0.1 nm is used. This bandwidth is independent of the
modulation format, the frequency and the receiver. For instance an OSNR of 20 dB/0.1 nm could
be given, even though a 40 Gbit/s DPSK signal would not fit within this bandwidth. OSNR is
measured with an optical spectrum analyzer.
No single measurement can assess audio quality. Instead, engineers use a series of measurements
to analyze various types of degradation that can reduce fidelity. Thus, when testing an analogue
tape machine it is necessary to test for wow and flutter and tape speed variations over longer
periods, as well as for distortion and noise. When testing a digital system, testing for speed
variations is normally considered unnecessary because of the accuracy of clocks in digital
circuitry, but testing for aliasing and timing jitter is often desirable, as these have caused audible
degradation in many systems.[citation needed]
Once subjectively valid methods have been shown to correlate well with listening tests over a
wide range of conditions, then such methods are generally adopted as preferred. Standard
engineering methods are not always sufficient when comparing like with like. One CD player,
for example, might have higher measured noise than another CD player when measured with a
RMS method, or even an A-weighted RMS method, yet sound quieter and measure lower when
468-weighting is used. This could be because it has more noise at high frequencies, or even at
frequencies beyond 20 kHz, both of which are less important since human ears are less sensitive
to them. (See noise shaping.) This effect is how Dolby B works and why it was introduced.
Cassette noise, which was predominantly high frequency and unavoidable given the small size
and speed of the recorded track, could be made subjectively much less important. The noise
sounded 10 dB quieter, but failed to measure much better unless 468-weighting was used rather
than A-weighting.
Measurable performance
Digital
Note that digital systems do not suffer from many of these effects at a signal level, though the
same processes occur in the circuitry, since the data being handled is symbolic. As long as the
symbol survives the transfer between components and can be perfectly regenerated (e.g., by pulse-
shaping techniques), the data itself is perfectly maintained. The data is typically buffered in a
memory, and is clocked out by a very precise crystal oscillator. The data usually does not
degenerate as it passes through many stages, because each stage regenerates new symbols for
transmission.
Digital systems have their own problems. Digitizing adds noise, which is measurable and
depends on the resolution ('number of bits') of the system, regardless of other quality issues.
Timing errors in sampling clocks (jitter) result in non-linear distortion (FM modulation) of the
signal. One quality measurement for a digital system (Bit Error Rate) relates to the probability of
an error in transmission or reception. Other metrics on the quality of the system are defined by
the sample rate and bit depth. In general, digital systems are much less prone to error than analog
systems; however, nearly all digital systems have analog inputs and/or outputs, and certainly all
of those that interact with the analog world do so. These analog components of the digital system
can suffer analog effects and potentially compromise the integrity of a well designed digital
system.
Jitter
A measurement of the variation in period (periodic jitter) and absolute timing (random
jitter) between measured clock timing versus an ideal clock. Less jitter is generally better
for sampling systems.
Sample rate
A specification of the rate at which measurements are taken of the analog signal. This is
measured in samples per second, or hertz. A higher sampling rate allows a greater total
bandwidth or pass-band frequency response and allows less-steep anti-aliasing/anti-
imaging filters to be used in the stop-band, which can in turn improve overall phase
linearity in the pass-band.
Bit depth
A specification of the accuracy of each measurement. For example, a 3-bit system would
be able to measure 2^3 = 8 different levels, so it would round the actual level at each point
to the nearest representable level. Typical values for audio are 8-bit, 16-bit, 24-bit, and 32-bit.
The bit depth determines the theoretical maximum signal-to-noise ratio or dynamic range
for the system. It is common for devices to create more noise than the minimum possible
noise floor, however. Sometimes this is done intentionally; dither noise is added to
decrease the negative effects of quantization noise by converting it into a higher level of
uncorrelated noise.
To calculate the maximum theoretical dynamic range of a digital system, find the total
number of levels in the system: Dynamic Range = 20·log10(# of different levels). Example:
an 8-bit system has 256 different levels, from 0 to 255. The smallest nonzero signal is 1 and
the largest is 255, so Dynamic Range = 20·log10(255) ≈ 48 dB.
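The same arithmetic as a sketch:

    import math

    def dynamic_range_from_levels(levels):
        """20*log10(largest/smallest nonzero value), for levels 0..levels-1."""
        return 20.0 * math.log10(levels - 1)

    print(dynamic_range_from_levels(256))   # ~48.1 dB, matching the 8-bit example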
Sample accuracy/synchronization
Not as much a specification as an ability. Since independent digital audio devices are
each run by their own crystal oscillator, and no two crystals are exactly the same, the
sample rate will be slightly different. This will cause the devices to drift apart over time.
The effects of this can vary. If one digital device is used to monitor another digital
device, this will cause dropouts in the audio, as one device will be producing more or less
data than the other per unit time. If two independent devices record at the same time, one
will lag the other more and more over time. This effect can be circumvented with a
wordclock synchronization. Sample rate will also vary slightly over time, as crystals
change in temperature, etc. See also clock recovery
Linearity
Differential non-linearity and integral non-linearity are two measurements of the
accuracy of an analog-to-digital converter. Basically, they measure how close the
threshold levels for each bit are to the theoretical equally-spaced levels.
Unquantifiable?
Many audio components are tested for performance using objective and quantifiable
measurements, e.g., THD, dynamic range and frequency response. Some take the view that
objective measurements are useful and often relate well to subjective performance, i.e., the sound
quality as experienced by the listener [9]. An example of this is the work of Toole on
loudspeakers. He has shown that the performance of loudspeakers, as assessed in listening tests,
is linked to objective measurements of loudspeaker performance [10]. In Toole's work, listening
tests were designed to eliminate any potential biases in results. Tests of this sort are called blind
(or controlled) tests.
Some argue that because human hearing and perception are not fully understood, listener
experience should be valued above everything else. This tactic is often encountered in the "high-
end audio" world, where it is used to sell amplifiers with poor specifications. The usefulness of
blind listening tests and common objective performance measurements, e.g., THD, are
questioned. For instance, crossover distortion at a given THD is much more audible than
clipping distortion at the same THD, since the harmonics produced are at higher frequencies.
This does not imply that the defect is somehow unquantifiable or unmeasurable; just that a single
THD number is inadequate to specify it and must be interpreted with care. Taking THD
measurements at different output levels would expose whether the distortion is clipping (which
increases with level) or crossover (which decreases with level).
It is claimed that subtle changes in sound quality are easier to hear in non-blind tests than blind
tests. Objective performance measurements are said not to fit in with ordinary listener
experience. Writing in Stereophile magazine, John Atkinson recalls his experience of an
amplifier that performed well objectively and in blind listening tests, but did not sound good in
actual use.
Superheterodyne receiver
History
Two-section variable capacitor, used in superheterodyne receivers
The word heterodyne is derived from the Greek roots hetero- "different", and -dyne "power".
The heterodyne technique was developed by Tesla circa 1896. Isochronous electro-mechanical
local oscillators were used in connection with those receiving circuits. [1] The heterodyne
technique was subsequently adopted by Canadian inventor Reginald Fessenden, [2] but it was not
pursued because the local oscillator used was unstable in its frequency output, and vacuum tubes
were not yet available.[3]
The superheterodyne principle was revisited in 1918 by U.S. Army Major Edwin Armstrong in
France during World War I. [2] He invented this receiver as a means of overcoming the
deficiencies of early vacuum tube triodes used as high-frequency amplifiers in radio direction
finding equipment. Unlike simple radio communication, which only needs to make transmitted
signals audible, direction-finders measure the received signal strength, which necessitates linear
amplification of the actual carrier wave.
In a triode radio-frequency (RF) amplifier, if both the plate and grid are connected to resonant
circuits tuned to the same frequency, stray capacitive coupling between the grid and the plate
will cause the amplifier to go into oscillation if the stage gain is much more than unity. In early
designs, dozens (in some cases over 100) low-gain triode stages had to be connected in cascade
to make workable equipment, which drew enormous amounts of power in operation and required
a team of maintenance engineers. The strategic value was so high, however, that the British
Admiralty felt the high cost was justified.
Armstrong realized that if RDF receivers could be operated at a higher frequency, this would
allow better detection of enemy shipping. However, at that time, no practical "short wave"
amplifier existed, (defined then as any frequency above 500 kHz) due to the limitations of
existing triodes.
A "heterodyne" refers to a beat or "difference" frequency produced when two or more radio
frequency carrier waves are fed to a detector. The term was coined by Canadian Engineer
Later, when vacuum triodes became available, the same result could be achieved more
conveniently by incorporating a "local oscillator" in the receiver, which became known as a
"beat frequency oscillator" (BFO). As the BFO frequency was varied, the pitch of the heterodyne
could be heard to vary with it. If the frequencies were too far apart the heterodyne became
ultrasonic and hence no longer audible.
It had been noticed some time before that if a regenerative receiver was allowed to go into
oscillation, other receivers nearby would suddenly start picking up stations on frequencies
different from those on which the stations were actually transmitting. Armstrong (and others)
eventually deduced that this was caused by a "supersonic heterodyne" between the station's
carrier frequency and the oscillator frequency. Thus if a station was transmitting on 300 kHz and
the oscillating receiver was set to 400 kHz, the station would be heard not only at the original
300 kHz, but also at 100 kHz and 700 kHz.
Armstrong realized that this was a potential solution to the "short wave" amplification problem,
since the beat frequency still retained its original modulation, but on a lower carrier frequency.
To monitor a frequency of 1500 kHz, for example, he could set an oscillator to 1560 kHz,
producing a heterodyne difference frequency of 60 kHz, a frequency that could then be amplified
far more conveniently by the triodes of the day. He termed this the "intermediate frequency",
often abbreviated to "IF".
In the 1920s, commercial IF filters looked very similar to 1920s audio interstage coupling
transformers, had very similar construction, and were wired up in an almost identical manner,
so they were referred to as "IF transformers". By the mid-1930s, however, superheterodynes
were using higher intermediate frequencies (typically around 440–470 kHz), with tuned coils
similar in construction to the aerial and oscillator coils; nevertheless the name "IF transformer"
persisted and is still used today. Modern receivers typically use a mixture of ceramic or SAW
(surface acoustic wave) resonators as well as traditional tuned-inductor IF transformers.
Armstrong quickly put his ideas into practice, and the technique was rapidly adopted by the
military. However, it was less popular when commercial radio broadcasting began in the 1920s,
mostly due to the need for an extra tube (for the oscillator), the generally higher cost of the
receiver, and the level of technical skill required to operate it. For early domestic radios, tuned
radio frequency (TRF) receivers such as the Neutrodyne were more popular because they were
cheaper, easier for a non-technical owner to use, and less costly to operate. Armstrong eventually
sold his superheterodyne patent to Westinghouse, which then sold it to RCA, the latter
monopolizing the market for superheterodyne receivers until 1930.[5]
By the 1930s, improvements in vacuum tube technology rapidly eroded the TRF receiver's cost
advantages, and the explosion in the number of broadcasting stations created a demand for
cheaper, higher-performance receivers.
The development of practical indirectly heated cathode tubes allowed the mixer and oscillator
functions to be combined in a single pentode tube, in the so-called autodyne mixer. This was
rapidly followed by the introduction of low-cost multi-element tubes specifically designed for
superheterodyne operation. These allowed the use of much higher intermediate frequencies
(typically around 440–470 kHz), which greatly reduced the problem of image-frequency
interference. By the mid-1930s, the TRF technique was obsolete for commercial receiver
production.
The superheterodyne principle was eventually taken up for virtually all commercial radio and TV
designs.
Overview
The superheterodyne receiver has three essential elements: a local oscillator, a frequency mixer
that mixes the local oscillator's signal with the received signal, and a tuned amplifier.
The amplifier portion of the system is tuned to be highly selective at a single frequency, the
intermediate frequency fIF. By changing the local oscillator frequency fLO, the difference
product |fRF − fLO| or the sum product (fRF + fLO) of the received frequency fRF can be placed
at the amplifier's fIF. In typical amplitude modulation ("AM radio" in the U.S., or MW)
receivers, that frequency is 455 kHz; for FM receivers, it is usually 10.7 MHz; for television,
about 45 MHz. The other products at the mixer output are filtered out by the tuned amplifier.
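In practice, tuning therefore amounts to choosing fLO so that the wanted station lands on the
fixed fIF. A minimal sketch of that arithmetic (the helper name is invented for this illustration):

# Hypothetical helper: choose the local-oscillator frequency that mixes a
# wanted station down to a fixed intermediate frequency (all values in kHz).
def lo_for_station(f_station, f_if=455, high_side=True):
    if high_side:
        return f_station + f_if    # high-side injection: fLO above the station
    return f_station - f_if        # low-side injection: fLO below the station

for station in (540, 1000, 1600):  # AM broadcast stations, kHz
    print(station, "kHz ->", lo_for_station(station), "kHz LO")
# a 1000 kHz station needs a 1455 kHz LO to produce the 455 kHz IF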
The advantage of this method is that most of the radio's signal path needs to be sensitive to only
a narrow range of frequencies. Only the front end (the part before the frequency converter stage)
needs to be sensitive to a wide frequency range. For example, the front end might need to be
sensitive to 1–30 MHz, while the rest of the radio might need to be sensitive only to 455 kHz, a
typical IF. Only one or two tuned stages need to be adjusted to track over the tuning range of the
receiver.
To overcome obstacles such as image response, multiple IF stages are used, and in some cases
multiple stages with two IFs of different values. For example, the front end might be sensitive to
1–30 MHz, the first half of the radio to 5 MHz, and the last half to 50 kHz. Two frequency
converters would be used, and the radio would be a "double conversion superheterodyne";[6] a
common example is a television receiver, where the audio information is obtained from a second
stage of intermediate-frequency conversion. Receivers that are tunable over a wide bandwidth
(e.g. scanners) may use an intermediate frequency higher than the signal frequency, in order to
improve image rejection.[7]
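Using the figures above, a quick sketch of the double-conversion arithmetic (illustrative only;
the names and the 14.2 MHz example signal are invented):

def convert(f_in, f_lo):
    # Ideal mixer: the difference product becomes the new IF (values in kHz).
    return abs(f_in - f_lo)

f_station = 14_200                 # a 14.2 MHz signal within the 1-30 MHz front end
f_if1, f_if2 = 5_000, 50           # 5 MHz first IF, 50 kHz second IF

f_lo1 = f_station + f_if1          # first conversion, high-side injection
f_lo2 = f_if1 + f_if2              # second conversion, also high-side

print(convert(f_station, f_lo1))   # -> 5000 kHz: the first IF
print(convert(f_if1, f_lo2))       # -> 50 kHz: the second IF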
In the case of modern television receivers, no other technique was able to produce the precise
bandpass characteristic needed for vestigial sideband reception, first used with the original
NTSC system introduced in 1941. This originally involved a complex collection of tuneable
inductors which needed careful adjustment, but since around the 1970s or early 1980s these
have been replaced with precision electromechanical surface acoustic wave (SAW) filters.
Fabricated by precision laser-milling techniques, SAW filters are cheaper to produce, can be
made to extremely close tolerances, and are stable in operation. To avoid the tooling costs
associated with these components, most manufacturers then tended to design their receivers
around the fixed range of frequencies offered, which resulted in a de facto standardization of
intermediate frequencies.
Radio transmitters may also use a mixer stage to produce an output frequency, working more or
less as the reverse of a superheterodyne receiver.
Intermediate frequencies
Usually the intermediate frequency is lower than either the carrier or the local oscillator
frequency, but in some types of receiver (e.g. scanners and spectrum analyzers) it is more
convenient to use a higher intermediate frequency.
To avoid interference to and from signal frequencies close to the intermediate frequency, the
choice of IF is controlled by regulatory authorities in many countries. Examples of common IFs
are 455 kHz for medium-wave AM radio, 10.7 MHz for FM, 38.9 MHz (Europe) or 45 MHz
(US) for television, and 70 MHz for satellite and terrestrial microwave equipment.
Drawbacks
High-side and low-side injection
The amount that a signal is down-shifted by the local oscillator depends on whether its frequency
f is higher or lower than fLO. That is because its new frequency is |f − fLO| in either case.
Therefore, there are potentially two signals that could both shift to the same fIF; one at f = fLO + fIF
and another at f = fLO − fIF. One of those signals, called the image frequency, has to be filtered out
prior to the mixer to avoid aliasing. When the upper one is filtered out, it is called high-side
injection, because fLO is above the frequency of the received signal. The other case is called low-
side injection. High-side injection also reverses the order of a signal's frequency components.
Whether that actually changes the signal depends on whether it has spectral symmetry. The
reversal can be undone later in the receiver, if necessary.
One major disadvantage of the superheterodyne receiver is the problem of image frequency. In
heterodyne receivers, an image frequency is an undesired input frequency equal to the station
frequency plus (or minus) twice the intermediate frequency, depending on which side of the
station the local oscillator sits. The image frequency results in two stations being received at the
same time, thus producing interference. Image responses can be suppressed by sufficient
attenuation of the incoming image signal in the RF amplifier filter of the superheterodyne
receiver.
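A short illustrative check of that relationship (the function name is invented for this sketch):

def image_frequency(f_station, f_if=455, high_side=True):
    # With high-side injection (fLO = station + IF) the image lies above the
    # station by twice the IF; with low-side injection it lies below (kHz).
    return f_station + 2 * f_if if high_side else f_station - 2 * f_if

print(image_frequency(1000))   # -> 1910: both 1000 kHz and 1910 kHz mix to a
                               #    455 kHz IF against a 1455 kHz LO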
Early autodyne receivers typically used IFs of only 150 kHz or so, as it was difficult to maintain
reliable oscillation at higher frequencies. As a consequence, most autodyne receivers suffered
from image interference, since such a low IF places the image close to the wanted frequency.
For medium-wave AM radio a variety of IFs have been used, but 455 kHz is by far the most
common.
It is difficult to keep stray radiation from the local oscillator below the level that a nearby
receiver can detect. The receiver's local oscillator can act like a miniature CW transmitter. This
means that there can be mutual interference in the operation of two or more superheterodyne
receivers in close proximity. In espionage, oscillator radiation gives a means to detect a covert
receiver and its operating frequency. One effective way of preventing the local oscillator signal
from radiating out from the receiver's antenna is by adding a stage of RF amplification between
the receiver's antenna and its mixer stage.
Local oscillators typically generate a single-frequency signal that has negligible amplitude
modulation but some random phase modulation. Either of these impurities spreads some of the
signal's energy into sideband frequencies, causing a corresponding widening of the receiver's
frequency response; this would defeat the aim of making a very narrow-bandwidth receiver, such
as one for receiving low-rate digital signals. Care must be taken to minimise oscillator phase
noise, usually by ensuring that the oscillator never enters a non-linear mode.