always 1, so such a message conveys no information.
2. The sun does not rise in the east:
Here the uncertainty is high and the message carries maximum information, because the event is not possible.
When the base of the logarithm is e, the unit is the nat, and when the base is 10, the unit is the Hartley or decit. The use of such units is analogous to the radian used in angle measurement and the decibel used in connection with power ratios.
The use of base 2 is especially convenient when binary PCM is employed. If the two possible binary digits (bits) may occur with equal likelihood, each with probability 1/2, then the correct identification of a binary digit conveys an amount of information I = log_2 2 = 1 bit.
In the past the term bit was used as an abbreviation for the phrase binary digit. When there is uncertainty whether the word bit is intended as the unit of information or as an abbreviation for binary digit, the binary digit is referred to as a binit.
Assume there are M equally likely and independent messages, with M = 2^N, where N is an integer. In this case the information in each message is
I = log_2 M = log_2 2^N = N log_2 2 = N bits
To identify each message by a binary PCM code word, the number of binary digits required for each of the 2^N messages is also N. Hence, in this case, the information in each message, as measured in bits, is numerically the same as the number of binits needed to encode the message.
When p_k = 1, only one possible message is allowed. In this instance, since the receiver already knows the message, there is really no need for transmission. We find that I_k = log_2 1 = 0. As p_k decreases from 1 to 0, I_k increases monotonically from 0 to infinity. Therefore, a greater amount of information is conveyed when the receiver correctly identifies a less likely message.
When two independent messages m_k and m_j are correctly identified, we can readily prove that the amount of information conveyed is the sum of the information associated with each message individually. Therefore, with
I_k = log_2 (1 / p_k)
I_j = log_2 (1 / p_j)
the total information conveyed is I_k + I_j.
Problem 1
A source produces one of four possible symbols during each interval, with probabilities p_1 = 1/2, p_2 = 1/4, p_3 = p_4 = 1/8. Obtain the information content of each of these symbols.
Solution
We know that the information content of each symbol is given as
I_k = log_2 (1 / p_k)
Thus we can write
I_1 = log_2 (1 / p_1) = log_2 2 = 1 bit
I_2 = log_2 (1 / p_2) = log_2 4 = 2 bits
I_3 = log_2 (1 / p_3) = log_2 8 = 3 bits
I_4 = log_2 (1 / p_4) = log_2 8 = 3 bits
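As a quick numerical check of Problem 1, the following short Python sketch (not part of the original text) evaluates I_k = log_2(1/p_k) for the four given probabilities.

import math

# Symbol probabilities from Problem 1
probabilities = {"s1": 1/2, "s2": 1/4, "s3": 1/8, "s4": 1/8}

for name, p in probabilities.items():
    info_bits = math.log2(1 / p)   # I_k = log2(1/p_k)
    print(f"I({name}) = {info_bits:.0f} bit(s)")
# Expected output: 1, 2, 3 and 3 bits respectively.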
Problem 2
Calculate the amount of information if it is given that p_k = 1/2.
Solution
The amount of information is
I_k = log_2 (1 / p_k) = log_10 (1 / p_k) / log_10 2 = log_10 2 / log_10 2 = 1 bit
or
I_k = log_2 (1 / (1/2)) = log_2 2 = 1 bit
Problem 3
Calculate the amount of information if binary digits occur with equal likelihood in a binary PCM system.
Solution
We know that in binary PCM there are two binary levels, i.e., 1 or 0.
Therefore the probabilities are
p_1 (level 0) = p_2 (level 1) = 1/2
The amount of information carried by each level is given as
I_1 = log_2 (1 / p_1) = log_2 2 = log_10 2 / log_10 2 = 1 bit
I_2 = log_2 (1 / p_2) = log_2 2 = log_10 2 / log_10 2 = 1 bit
Therefore I_1 = I_2 = 1 bit.
The information carried by the message m_1 is
I_1 = log_2 (1 / p_1)
• Suppose the source emits a long sequence of L messages. Since there are p_1 L occurrences of m_1, the total information due to all occurrences of m_1 will be
I_1(total) = p_1 L log_2 (1 / p_1)
Similarly,
I_2(total) = p_2 L log_2 (1 / p_2), and so on.
• The average information per message, called the entropy, is
Entropy, H = I(total) / L
H = Σ_{k=1}^{M} p_k log_2 (1 / p_k)
• When p_k = 1 (a certain message), the above equation gives for that message
H = log_2 1 = log_10 1 / log_10 2 = 0
• Consider again
H = Σ_{k=1}^{M} p_k log_2 (1 / p_k)
The right-hand side of the above equation also tends to zero when p_k → 0. Hence the entropy will be zero, i.e.,
H = 0
Therefore, entropy is zero both for a certain message (p_k = 1) and for an impossible message (p_k → 0).
Property 2
When p_k = 1/M for all k, i.e., all M symbols are equally likely, the source entropy is given by H = log_2 M.
Proof
We know that the probability of each of the M equally likely messages is
P = 1/M
• This probability is the same for all M messages, i.e.,
P_1 = P_2 = P_3 = ... = P_M = 1/M    ... (1)
Substituting in the expression for entropy,
H = (1/M) log_2 M + (1/M) log_2 M + ... + (1/M) log_2 M
Since there are M such terms,
H = M × (1/M) log_2 M = log_2 M
Property 3
The upper bound on entropy is given by H ≤ log_2 M, i.e., H_max = log_2 M. Here M is the number of messages emitted by the source.
Proof
• To prove the above property, the following property of the natural logarithm is used:
ln α ≤ α − 1
• Let {p_k} be the actual probabilities of the M symbols and let {q_k} be any other probability distribution defined on the same M symbols. Consider the sum
Σ_{k=1}^{M} p_k log_2 (q_k / p_k) = Σ_{k=1}^{M} p_k [ log_10 (q_k / p_k) / log_10 2 ]
Multiplying and dividing by log_10 e,
Σ_{k=1}^{M} p_k log_2 (q_k / p_k) = Σ_{k=1}^{M} p_k (log_10 e / log_10 2) [ log_10 (q_k / p_k) / log_10 e ]
= Σ_{k=1}^{M} p_k log_2 e · log_e (q_k / p_k)
Here log_e (q_k / p_k) = ln (q_k / p_k). Hence the above equation becomes
Σ_{k=1}^{M} p_k log_2 (q_k / p_k) = log_2 e Σ_{k=1}^{M} p_k ln (q_k / p_k)
• Applying ln (q_k / p_k) ≤ (q_k / p_k) − 1,
Σ_{k=1}^{M} p_k log_2 (q_k / p_k) ≤ log_2 e [ Σ_{k=1}^{M} q_k − Σ_{k=1}^{M} p_k ]
• Note that Σ_{k=1}^{M} q_k = 1 as well as Σ_{k=1}^{M} p_k = 1.
• Hence the above equation becomes
Σ_{k=1}^{M} p_k log_2 (q_k / p_k) ≤ 0
Now consider q_k = 1/M for all k; that is, all symbols in the alphabet are taken as equally likely. Writing log_2 (q_k / p_k) = log_2 q_k + log_2 (1 / p_k), the above equation becomes
Σ_{k=1}^{M} p_k [ log_2 q_k + log_2 (1 / p_k) ] ≤ 0
∴ Σ_{k=1}^{M} p_k log_2 q_k + Σ_{k=1}^{M} p_k log_2 (1 / p_k) ≤ 0
∴ Σ_{k=1}^{M} p_k log_2 (1 / p_k) ≤ Σ_{k=1}^{M} p_k log_2 (1 / q_k)
Replacing q_k = 1/M in the above equation,
Σ_{k=1}^{M} p_k log_2 (1 / p_k) ≤ Σ_{k=1}^{M} p_k log_2 M
≤ log_2 M Σ_{k=1}^{M} p_k
We know that Σ_{k=1}^{M} p_k = 1; hence the above equation becomes
Σ_{k=1}^{M} p_k log_2 (1 / p_k) ≤ log_2 M
H(X) ≤ log_2 M
The equality holds for the equiprobable case, so
H_max(X) = log_2 M
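The entropy properties above are easy to verify numerically. The Python sketch below (an illustration added here, not from the text) defines an entropy function and checks Property 2 (H = log_2 M for an equiprobable source) and Property 3 (H never exceeds log_2 M) for an example alphabet of M = 4 symbols.

import math

def entropy(probs):
    """H = sum of p_k * log2(1/p_k), ignoring zero-probability symbols."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

M = 4
equiprobable = [1 / M] * M
skewed = [0.5, 0.25, 0.125, 0.125]

print(entropy(equiprobable))          # 2.0 = log2(4)   (Property 2)
print(entropy(skewed))                # 1.75 < log2(4)  (Property 3)
assert entropy(skewed) <= math.log2(M) + 1e-12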
Problem 1
In binary PCM, if '0' occurs with probability 1/4 and '1' occurs with probability 3/4, calculate the amount of information carried by each binit.
Solution
The amount of information is
I(x_i) = log_2 [1 / P(x_i)]
With P(x_1) = 1/4,
I(x_1) = log_2 4 = log_10 4 / log_10 2 = 2 bits
With P(x_2) = 3/4,
I(x_2) = log_2 (4/3) = log_10 (4/3) / log_10 2 = 0.415 bits
Here it may be observed that binary '0' has probability 1/4 and carries 2 bits of information, whereas binary '1' has probability 3/4 and carries 0.415 bits of information.
Thus, this reveals the fact that if the probability of occurrence is smaller, the information carried is greater, and vice versa.
Problem 2
If there are M equally likely and independent symbols, then prove that the amount of information carried by each symbol will be
I(x_i) = N bits, where M = 2^N and N is an integer.
Solution
Since it is given that all the M symbols are equally likely and independent, the probability of occurrence of each symbol must be 1/M.
We know that the amount of information is given as
I(x_i) = log_2 [1 / P(x_i)]    ... (1)
Here
P(x_i) = 1/M
Hence equation (1) becomes
I(x_i) = log_2 M    ... (2)
Since M = 2^N,
I(x_i) = log_2 2^N = N log_2 2 = N bits    [since log_2 2 = 1]
Hence the amount of information carried by each symbol is N bits. We know that M = 2^N. This means that there are N binary digits (binits) in each symbol. This indicates that when the symbols are equally likely and coded with an equal number of binary digits (binits), the information carried by each symbol (measured in bits) is numerically the same as the number of binits used for each symbol.
Problem 3
Prove the following statement: "If a receiver knows the message being transmitted, the amount of information carried will be zero."
Solution
Here it is stated that the receiver "knows" the message. This means that only one message is transmitted. Thus the probability of occurrence of this message will be P(x_i) = 1, because there is only one message and its occurrence is certain (the probability of a certain event is 1). The amount of information carried by this message will be
I(x_i) = log_2 [1 / P(x_i)] = log_10 [1 / P(x_i)] / log_10 2
Substituting P(x_i) = 1,
I(x_i) = log_10 1 / log_10 2 = 0
or
I(x_i) = 0 bits
This proves the statement: if the receiver knows the message, the amount of information carried will be zero.
Also, as P(x_i) is decreased from 1 to 0, I(x_i) increases monotonically from 0 to infinity. This shows that the amount of information conveyed is greater when the receiver correctly identifies a less likely message.
Problem 4
Verify the expression I(x_i x_j) = I(x_i) + I(x_j) for two independent symbols x_i and x_j.
Solution
If x_i and x_j are independent, then we know that
P(x_i x_j) = P(x_i) P(x_j)
Also,
I(x_i x_j) = log_2 [1 / P(x_i x_j)]
I(x_i x_j) = log_2 [1 / (P(x_i) P(x_j))]
I(x_i x_j) = log_2 [1 / P(x_i)] + log_2 [1 / P(x_j)]
I(x_i x_j) = I(x_i) + I(x_j)
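The additivity proved in Problem 4 can be illustrated with the short sketch below; the probabilities chosen are arbitrary example values, not from the text.

import math

p_xi, p_xj = 0.25, 0.1            # arbitrary example probabilities
p_joint = p_xi * p_xj             # independence: P(xi xj) = P(xi) P(xj)

I_xi = math.log2(1 / p_xi)
I_xj = math.log2(1 / p_xj)
I_joint = math.log2(1 / p_joint)

assert abs(I_joint - (I_xi + I_xj)) < 1e-12
print(I_xi, I_xj, I_joint)        # I(xi xj) = I(xi) + I(xj)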
Problem 5
A discrete source emits one of five symbols once every millisecond with probabilities 1/2, 1/4, 1/8, 1/16 and 1/16 respectively. Determine the source entropy and the information rate.
Solution
We know that the source entropy is given as
H(X) = Σ_{i=1}^{m} P(x_i) log_2 [1 / P(x_i)]
= Σ_{i=1}^{5} P(x_i) log_2 [1 / P(x_i)]  bits/symbol
(or) H(X) = (1/2) log_2 2 + (1/4) log_2 4 + (1/8) log_2 8 + (1/16) log_2 16 + (1/16) log_2 16
(or) H(X) = 1/2 + 1/2 + 3/8 + 1/4 + 1/4 = 15/8
(or) H(X) = 1.875 bits/symbol
The symbol rate is r = 1/T_b = 1/10^(-3) = 1000 symbols/sec.
Therefore, the information rate is
R = r H(X) = 1000 × 1.875 = 1875 bits/sec
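The figures of Problem 5 can be reproduced with this sketch (illustrative only): it computes H(X) and then the information rate R = r·H(X) for the stated symbol rate of 1000 symbols/sec.

import math

probs = [1/2, 1/4, 1/8, 1/16, 1/16]
symbol_rate = 1000                             # symbols per second (1 symbol per ms)

H = sum(p * math.log2(1 / p) for p in probs)   # bits/symbol
R = symbol_rate * H                            # bits/sec

print(H)   # 1.875 bits/symbol
print(R)   # 1875 bits/sec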
Problem 6
The probabilities of the five possible outcomes of an experiment
are given as
P(x_1) = 1/2, P(x_2) = 1/4, P(x_3) = 1/8, P(x_4) = P(x_5) = 1/16
Determine the entropy and the information rate if there are 16 outcomes per second.
Solution
The entropy of the system is given as
H(X) = Σ_{i=1}^{5} P(x_i) log_2 [1 / P(x_i)]  bits/symbol
(or) H(X) = (1/2) log_2 2 + (1/4) log_2 4 + (1/8) log_2 8 + (1/16) log_2 16 + (1/16) log_2 16 = 15/8
H(X) = 1.875 bits/outcome
Since there are 16 outcomes per second, the information rate is
R = 16 × 1.875 = 30 bits/sec
Problem 7
An analog signal is band-limited to f_m Hz and sampled at the Nyquist rate. The samples are quantized into four levels. Each level represents one symbol; thus there are four symbols. The probabilities of these four levels (symbols) are P(x_1) = P(x_4) = 1/8 and P(x_2) = P(x_3) = 3/8. Obtain the information rate of the source.
Solution
We are given four symbols with probabilities P(x_1) = P(x_4) = 1/8 and P(x_2) = P(x_3) = 3/8. The average information H(X) (or entropy) is expressed as
H(X) = P(x_1) log_2 [1/P(x_1)] + P(x_2) log_2 [1/P(x_2)] + P(x_3) log_2 [1/P(x_3)] + P(x_4) log_2 [1/P(x_4)]
= (1/8) log_2 8 + (3/8) log_2 (8/3) + (3/8) log_2 (8/3) + (1/8) log_2 8 ≈ 1.8 bits/symbol
Since the signal is sampled at the Nyquist rate, the symbol rate is r = 2f_m symbols/sec, so the information rate is
R = r H(X) ≈ 3.6 f_m bits/sec
In this example there are four levels. These four levels may be coded using binary PCM as shown in Table 6.1.
S.No   Symbol or level   Probability   Binary digits
1      Q1                1/8           00
2      Q2                3/8           01
3      Q3                3/8           10
4      Q4                1/8           11
Table 6.1
Hence two binary digits (binits) are required to send each symbol. Symbols are sent at the rate of 2f_m symbols/sec. Therefore, the transmission rate of binary digits will be 2 binits/symbol × 2f_m symbols/sec = 4f_m binits/sec. Because one binit is capable of conveying one bit of information, the above coding scheme is capable of conveying 4f_m bits of information per second. But in this example we have obtained that we are transmitting only 3.6f_m bits of information per second. This means that the information-carrying ability of binary PCM is not completely utilized by this transmission scheme.
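The comparison made in Problem 7 can be checked with the sketch below (my own illustration; the bandwidth f_m is kept symbolic by working with the coefficients of f_m): the source delivers about 1.8 bits/symbol, so the information rate is about 3.6 f_m bits/sec, while the 2-binit PCM code transmits 4 f_m binits/sec.

import math

probs = [1/8, 3/8, 3/8, 1/8]
H = sum(p * math.log2(1 / p) for p in probs)     # bits/symbol, about 1.81

bits_per_codeword = 2                            # two binits per quantized level
info_rate_per_fm = 2 * H                         # coefficient of f_m in R = 2*f_m*H
binit_rate_per_fm = 2 * bits_per_codeword        # coefficient of f_m in the binit rate

print(round(H, 3))                  # ~1.811 bits/symbol
print(round(info_rate_per_fm, 2))   # ~3.62, i.e. about 3.6*f_m bits/sec
print(binit_rate_per_fm)            # 4, i.e. 4*f_m binits/sec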
4.3 SOURCE CODING TO INCREASE AVERAGE INFORMATION
PER BIT
Let N be the average number of binits used per message and N_min be the minimum value of N. Then the coding efficiency of the source encoder is defined as
η = N_min / N    ... (2)
With N_min equal to the source entropy H, the efficiency becomes
η = H / N    ... (4)
Need
(i) If the probabilities of occurrence of the messages are not equally likely, then the average information (entropy) is reduced, and as a result the information rate is reduced.
(ii) This problem can be solved by coding the messages with different numbers of bits.
NOTE
(i) Shannon-Fano coding is used to encode the messages depending upon their probabilities.
(ii) This algorithm assigns a smaller number of bits to highly probable messages and more bits to rarely occurring messages.
Procedure
Step 1: List the source symbols in order of decreasing probability.
Step 2: Partition the set into two sets that are as close to equiprobable as possible, and assign 0 to the upper set and 1 to the lower set.
Step 3: Continue this process, each time partitioning the sets with as nearly equal probabilities as possible, until further partitioning is not possible.
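A minimal Python sketch of the Shannon-Fano procedure described above is given below (an illustration, not code from the text). Ties in the partition step can be broken in more than one way, so the code words it produces may differ from the tables that follow while still giving the same average code-word length.

def shannon_fano(symbols):
    """symbols: list of (name, probability), assumed sorted in decreasing
    probability. Returns dict name -> code word (string of '0'/'1')."""
    codes = {name: "" for name, _ in symbols}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(p for _, p in group)
        running, best_idx, best_diff = 0.0, 1, float("inf")
        # find the split giving two sets as nearly equiprobable as possible
        for i in range(1, len(group)):
            running += group[i - 1][1]
            diff = abs(2 * running - total)
            if diff < best_diff:
                best_diff, best_idx = diff, i
        upper, lower = group[:best_idx], group[best_idx:]
        for name, _ in upper:       # 0 to the upper set
            codes[name] += "0"
        for name, _ in lower:       # 1 to the lower set
            codes[name] += "1"
        split(upper)
        split(lower)

    split(symbols)
    return codes

# Example: the five-symbol source of Problem 1 below (decreasing order)
source = [("x1", 0.4), ("x2", 0.2), ("x3", 0.2), ("x4", 0.1), ("x5", 0.1)]
print(shannon_fano(source))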
Problem 1
A discrete source emits five symbols with probabilities P_1 = 0.4, P_2 = 0.2, P_3 = 0.1, P_4 = 0.2 and P_5 = 0.1. Construct a Shannon-Fano code for the source and calculate its efficiency.
Solution
Given probabilities: P_1 = 0.4, P_2 = 0.2, P_3 = 0.1, P_4 = 0.2, P_5 = 0.1.
Step 1: Arrange the symbols in order of decreasing probability.
Symbol   Probability
x1       0.4
x2       0.2
x3       0.2
x4       0.1
x5       0.1
Step 2: Partition the symbols into nearly equiprobable sets and assign the code digits stage by stage, as shown in Table 6.3.

Symbol   Probability   Stage 1   Stage 2   Stage 3   Code word   No. of bits per message (n_k)
x1       0.4           0                             0           1
x2       0.2           1         0         0         100         3
x3       0.2           1         0         1         101         3
x4       0.1           1         1         0         110         3
x5       0.1           1         1         1         111         3
Table 6.3

(i) Average code-word length
N = Σ_{k=1}^{5} P_k n_k
= (0.4 × 1) + (0.2 × 3) + (0.2 × 3) + (0.1 × 3) + (0.1 × 3)
= 2.2 binits/symbol
(ii) Entropy
H = Σ_{k=1}^{M} P_k log_2 (1 / P_k)
= Σ_{k=1}^{5} P_k log_2 (1 / P_k)
= P_1 log_2 (1/P_1) + P_2 log_2 (1/P_2) + P_3 log_2 (1/P_3) + P_4 log_2 (1/P_4) + P_5 log_2 (1/P_5)
= 0.4 log_2 (1/0.4) + 0.2 log_2 (1/0.2) + 0.1 log_2 (1/0.1) + 0.2 log_2 (1/0.2) + 0.1 log_2 (1/0.1)
= (0.4 × 1.3219) + (0.2 × 2.3219) + (0.1 × 3.3219) + (0.2 × 2.3219) + (0.1 × 3.3219)
= 0.52876 + 0.46439 + 0.33219 + 0.46439 + 0.33219
= 2.12192 bits/symbol
(iii) Efficiency
η = H / N
= 2.12192 / 2.2 = 0.96450
η = 96.45 %
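The average length, entropy and efficiency figures above can be cross-checked with this short sketch (illustrative only; the probabilities and code-word lengths are those of Table 6.3):

import math

# probabilities and code-word lengths from Table 6.3
probs   = [0.4, 0.2, 0.2, 0.1, 0.1]
lengths = [1, 3, 3, 3, 3]

N_avg = sum(p * n for p, n in zip(probs, lengths))     # average code-word length
H     = sum(p * math.log2(1 / p) for p in probs)       # source entropy
eta   = H / N_avg                                      # coding efficiency

print(N_avg)                 # 2.2 binits/symbol
print(round(H, 4))           # ~2.1219 bits/symbol
print(round(eta * 100, 2))   # ~96.45 %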
Step 6: Start encoding with the last stage, which consists of exactly two ordered probabilities. Assign 0 as the first digit in the code words for all the source symbols associated with the first probability, and assign 1 to the second probability.
Step 7: Now go back and assign 0 and 1 to the second digit for the two probabilities that were combined in the previous stage, retaining all assignments made in that stage.
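A compact Python sketch of the Huffman procedure is given below (an illustration, not from the text). Because Huffman codes are not unique, the code words it produces may differ from those tabulated in the problems that follow, but the average code-word length is the same. The example source shown is the six-symbol source of Problem 1 below.

import heapq
import itertools

def huffman(probabilities):
    """probabilities: dict symbol -> probability. Returns dict symbol -> code word."""
    counter = itertools.count()          # tie-breaker so heap tuples always compare
    heap = [(p, next(counter), {sym: ""}) for sym, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, codes1 = heapq.heappop(heap)   # two least probable entries
        p2, _, codes2 = heapq.heappop(heap)
        for sym in codes1:                    # prepend 0 / 1 on either side
            codes1[sym] = "0" + codes1[sym]
        for sym in codes2:
            codes2[sym] = "1" + codes2[sym]
        merged = {**codes1, **codes2}
        heapq.heappush(heap, (p1 + p2, next(counter), merged))
    return heap[0][2]

source = {"x1": 0.30, "x2": 0.25, "x3": 0.20, "x4": 0.12, "x5": 0.08, "x6": 0.05}
codes = huffman(source)
print(codes)
print(sum(source[s] * len(c) for s, c in codes.items()))   # average length, ~2.38 binits/symbol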
Problem 1
A discrete memoryless source has six symbols x1, x2, x3, x4, x5, x6 with probabilities 0.30, 0.25, 0.20, 0.12, 0.08, 0.05 respectively. Construct a Huffman code, calculate its efficiency and also calculate the redundancy of the code.
Solution
The Huffman coding stages are shown below; the code words are read off from the successive stages. We can then write the code words for the respective probabilities as follows.

X_i   P(X_i) Stage I   Stage II   Stage III   Stage IV   Stage V
x1    0.30             0.30       0.30        0.45       0.55
x2    0.25             0.25       0.25        0.30       0.45
x3    0.20             0.20       0.25        0.25
x4    0.12             0.13       0.20
x5    0.08             0.12
x6    0.05

Message   Probability   Code word   Number of bits n_k
x1        0.3           00          2
x2        0.25          01          2
x3        0.2           11          2
x4        0.12          101         3
x5        0.08          1000        4
x6        0.05          1001        4
Table 6.5
(iii) To find the efficiency η we have to calculate the average code-word length N and the entropy H.
Average code-word length:
N = Σ_{k=1}^{M} P_k n_k, where n_k is the number of bits in the code word for x_k
= Σ_{k=1}^{6} P_k n_k = (0.3 × 2) + (0.25 × 2) + (0.2 × 2) + (0.12 × 3) + (0.08 × 4) + (0.05 × 4) = 2.38 binits/symbol
Entropy:
H = Σ_{k=1}^{M} P_k log_2 (1 / P_k) = Σ_{k=1}^{6} P_k log_2 (1 / P_k)
= 0.30 log_2 (1/0.30) + 0.25 log_2 (1/0.25) + 0.20 log_2 (1/0.20) + 0.12 log_2 (1/0.12) + 0.08 log_2 (1/0.08) + 0.05 log_2 (1/0.05)
= 0.521 + 0.5 + 0.4643 + 0.367 + 0.2915 + 0.216
= 2.3598 bits of information/message
Efficiency:
η = H / N = 2.3598 / 2.38 ≈ 0.99, i.e., η ≈ 99 %
Redundancy:
γ = 1 − η = 1 − 0.99 = 0.01
Problem 2
Solution
Given
P(x_1) = 1/2, P(x_2) = 1/4, P(x_3) = P(x_4) = 1/8, and each symbol x_i is coded with n_i = I(x_i) binits.
We know that I(x_i) = log_2 [1 / P(x_i)]
I(x_1) = log_2 [1 / P(x_1)] = log_2 2 = log_10 2 / log_10 2 = 1 bit
I(x_2) = log_2 [1 / P(x_2)] = log_2 4 = log_10 4 / log_10 2 = 2 bits
I(x_3) = log_2 [1 / P(x_3)] = log_2 8 = 3 bits
I(x_4) = log_2 [1 / P(x_4)] = log_2 8 = 3 bits
Entropy
H(X) = Σ P(x_i) I(x_i)
= P(x_1) I(x_1) + P(x_2) I(x_2) + P(x_3) I(x_3) + P(x_4) I(x_4)
= (1/2 × 1) + (1/4 × 2) + (1/8 × 3) + (1/8 × 3)
= 1/2 + 1/2 + 3/8 + 3/8
= 1.75 bits/message
Average code-word length
N = Σ_{k=1}^{M} P_k n_k (or) Σ_{i=1}^{M} P(x_i) n_i
= P(x_1) n_1 + P(x_2) n_2 + P(x_3) n_3 + P(x_4) n_4
= (1/2 × 1) + (1/4 × 2) + (1/8 × 3) + (1/8 × 3)
= 1.75 binits/symbol
Code efficiency
η = H(X) / N = 1.75 / 1.75 = 1
η = 100 %
Problem 3
A DMS has five equally likely symbols. Construct a Shannon-Fano code for X and calculate the efficiency of the code. Construct another Shannon-Fano code and compare the results. Repeat for the Huffman code and compare the results.
Solution
Entropy
H(X) = Σ_{i=1}^{5} P(x_i) log_2 [1 / P(x_i)]
Here all five probabilities are the same (i.e., 0.2), so we can write
H(X) = 5 × P(x_i) log_2 [1 / P(x_i)]
= 5 × 0.2 × log_2 (1/0.2)
= 5 × 0.2 × [log_10 (1/0.2) / log_10 2]
H(X) = 2.32 bits/message
(ii) Another Shannon-Fano code [obtained by choosing another pair of approximately equiprobable sets (0.6 versus 0.4)] is constructed as follows:

Symbol   Probability   Stage 1   Stage 2   Stage 3   Code word   No. of bits per message (n_k)
x1       0.2           0         0                   00          2
x2       0.2           0         1         0         010         3
x3       0.2           0         1         1         011         3
x4       0.2           1         0                   10          2
x5       0.2           1         1                   11          2

The entropy is unchanged:
H(X) = Σ_{i=1}^{5} P(x_i) log_2 [1 / P(x_i)] = 2.32 bits/message
Average code-word length
N = P(x_1) n_1 + P(x_2) n_2 + P(x_3) n_3 + P(x_4) n_4 + P(x_5) n_5
= (0.2 × 2) + (0.2 × 3) + (0.2 × 3) + (0.2 × 2) + (0.2 × 2)
= 0.4 + 0.6 + 0.6 + 0.4 + 0.4
= 2.4 binits/symbol
Coding efficiency (η)
η = H(X) / N
= 2.32 / 2.4 = 0.967
η = 96.7 %
Since the average code-word length is the same as that for the code of part (i), the efficiency is the same.
(iii) The Huffman code is constructed as follows:

x_i   P(x_i) Stage I   Stage II    Stage III   Stage IV
x1    0.2 (01)         0.4 (1)     0.4 (1)     0.6 (0)
x2    0.2 (000)        0.2 (01)    0.4 (00)    0.4 (1)
x3    0.2 (001)        0.2 (000)   0.2 (01)
x4    0.2 (10)         0.2 (001)
x5    0.2 (11)

Average code-word length
N = P(x_1) n_1 + P(x_2) n_2 + P(x_3) n_3 + P(x_4) n_4 + P(x_5) n_5
Here all probabilities have the same value (0.2), so
N = 0.2 × [n_1 + n_2 + n_3 + n_4 + n_5]
= 0.2 × [2 + 3 + 3 + 2 + 2]
= 0.2 × 12
= 2.4 binits/symbol
The entropy and efficiency are also the same as those of the Shannon-Fano code, since the average code-word length is the same.
Entropy
H(X) = Σ_{i=1}^{5} P(x_i) log_2 [1 / P(x_i)]
= P(x_1) log_2 [1/P(x_1)] + P(x_2) log_2 [1/P(x_2)] + P(x_3) log_2 [1/P(x_3)] + P(x_4) log_2 [1/P(x_4)] + P(x_5) log_2 [1/P(x_5)]
Here all five probabilities have the same value 0.2, so we can write
H(X) = 5 × P(x_1) log_2 [1 / P(x_1)]
= 5 × 0.2 log_2 (1/0.2)
= 5 × 0.2 × [log_10 (1/0.2) / log_10 2]
= 2.32 bits/message
Coding efficiency (η)
η = H(X) / N
= 2.32 / 2.4 = 0.967
η = 96.7 %
Problem 4
A discrete memoryless source (DMS) has five symbols x1, x2, x3, x4 and x5 with probabilities 0.4, 0.19, 0.16, 0.15 and 0.1 respectively. Construct a Shannon-Fano code and a Huffman code for this source and compare their efficiencies.
Solution
Entropy
H(X) = Σ_{i=1}^{5} P(x_i) log_2 [1 / P(x_i)]
= P(x_1) log_2 [1/P(x_1)] + P(x_2) log_2 [1/P(x_2)] + P(x_3) log_2 [1/P(x_3)] + P(x_4) log_2 [1/P(x_4)] + P(x_5) log_2 [1/P(x_5)]
= 0.4 log_2 (1/0.4) + 0.19 log_2 (1/0.19) + 0.16 log_2 (1/0.16) + 0.15 log_2 (1/0.15) + 0.1 log_2 (1/0.1)
H(X) = 2.15 bits/symbol
Code efficiency (η) of the Shannon-Fano code
η = H(X) / N = 2.15 / 2.25 = 0.956
η = 95.6 %
The Huffman code is constructed as follows:

X_i   P(X_i) Stage I   Stage II     Stage III    Stage IV
x1    0.4 (1)          0.4 (1)      0.4 (1)      0.6 (0)
x2    0.19 (000)       0.25 (01)    0.35 (00)    0.4 (1)
x3    0.16 (001)       0.19 (000)   0.25 (01)
x4    0.15 (010)       0.16 (001)
x5    0.1 (011)
Entropy H(X)
The entropy H(X) of the Huffman code is the same as that for the Shannon-Fano code:
H(X) = Σ_{i=1}^{5} P(x_i) log_2 [1 / P(x_i)]
= P(x_1) log_2 [1/P(x_1)] + P(x_2) log_2 [1/P(x_2)] + P(x_3) log_2 [1/P(x_3)] + P(x_4) log_2 [1/P(x_4)] + P(x_5) log_2 [1/P(x_5)]
= 0.4 log_2 (1/0.4) + 0.19 log_2 (1/0.19) + 0.16 log_2 (1/0.16) + 0.15 log_2 (1/0.15) + 0.1 log_2 (1/0.1)
H(X) = 2.15 bits/message
Average code-word length
N = P(x_1) n_1 + P(x_2) n_2 + P(x_3) n_3 + P(x_4) n_4 + P(x_5) n_5
= (0.4 × 1) + (0.19 × 3) + (0.16 × 3) + (0.15 × 3) + (0.1 × 3)
N = 2.2 binits/symbol
Code efficiency (η)
η = H / N = 2.15 / 2.2 = 0.977
η = 97.7 %
Solution
Arranging the symbols in decreasing order of probability, we obtain the Huffman code as follows:

X_i   P(X_i) Stage I   Stage II     Stage III    Stage IV    Stage V    Stage VI
x6    0.3 (00)         0.3 (00)     0.3 (00)     0.3 (00)    0.4 (1)    0.6 (0)
x3    0.2 (10)         0.2 (10)     0.2 (10)     0.3 (10)    0.3 (00)   0.4 (1)
x2    0.15 (010)       0.15 (010)   0.2 (11)     0.2 (10)    0.3 (01)
x5    0.15 (011)       0.15 (011)   0.15 (010)   0.2 (11)
x7    0.1 (110)        0.1 (110)    0.15 (011)
x1    0.05 (1110)      0.1 (111)
x4    0.05 (1111)

Average code-word length
N = P(x_1) n_1 + P(x_2) n_2 + P(x_3) n_3 + P(x_4) n_4 + P(x_5) n_5 + P(x_6) n_6 + P(x_7) n_7
= (0.05 × 4) + (0.15 × 3) + (0.2 × 2) + (0.05 × 4) + (0.15 × 3) + (0.3 × 2) + (0.1 × 3)
N = 2.6 binits/symbol
Entropy H(X)
H(X) = Σ_{i=1}^{7} P(x_i) log_2 [1 / P(x_i)]
= 0.05 log_2 (1/0.05) + 0.15 log_2 (1/0.15) + 0.2 log_2 (1/0.2) + 0.05 log_2 (1/0.05) + 0.15 log_2 (1/0.15) + 0.3 log_2 (1/0.3) + 0.1 log_2 (1/0.1)
H(X) = 2.57 bits/message
Coding efficiency (η)
η = H / N = 2.57 / 2.6 = 0.9885
η = 98.85 %
Problem 7
A discrete memoryless source has the alphabet given below. Compute two different Huffman codes for this source, and for each of the two codes find
(i) the average code-word length, and
(ii) the variance of the average code-word length over the ensemble of source symbols.

Symbol        S0     S1     S2     S3     S4
Probability   0.55   0.15   0.15   0.10   0.05
Solution
The two different Huffman codes are obtained by placing the combined probability as high as possible or as low as possible.
1. Placing the combined probability as high as possible
With this placement the code-word lengths work out to n_0 = 1 for S0 and n_k = 3 for S1, S2, S3 and S4, so that
N = Σ_k P_k n_k = (0.55 × 1) + (0.15 × 3) + (0.15 × 3) + (0.10 × 3) + (0.05 × 3) = 1.9
Variance σ² = Σ_k P_k (n_k − N)²
= 0.55 (1 − 1.9)² + 0.15 (3 − 1.9)² + 0.15 (3 − 1.9)² + 0.10 (3 − 1.9)² + 0.05 (3 − 1.9)²
= 0.99
2. Placing the combined probability as low as possible

Symbol   P(X_i) Stage I   Stage II     Stage III   Stage IV
s0       0.55 (0)         0.55 (0)     0.55 (0)    0.55 (0)
s1       0.15 (11)        0.15 (11)    0.3 (10)    0.45 (1)
s2       0.15 (100)       0.15 (100)   0.15 (11)
s3       0.1 (1010)       0.15 (101)
s4       0.05 (1011)

(i) Average code-word length
N = Σ_{k=0}^{4} P_k n_k = (0.55 × 1) + (0.15 × 2) + (0.15 × 3) + (0.10 × 4) + (0.05 × 4) = 1.9
(ii) Variance
σ² = Σ_{k=0}^{4} P_k (n_k − N)²
= 0.55 (1 − 1.9)² + 0.15 (2 − 1.9)² + 0.15 (3 − 1.9)² + 0.10 (4 − 1.9)² + 0.05 (4 − 1.9)²
= 1.29
Method                 Average code-word length   Variance
As high as possible    1.9                        0.99
As low as possible     1.9                        1.29
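The two variance figures can be verified with this sketch (my own illustration). The code-word lengths for the "as low as possible" placement are read from the table above; the lengths for the "as high as possible" placement, whose table is not reproduced here, are taken as 1, 3, 3, 3, 3 consistent with the average length of 1.9 and the variance of 0.99.

probs = [0.55, 0.15, 0.15, 0.10, 0.05]
lengths_high = [1, 3, 3, 3, 3]     # combined probability placed as high as possible
lengths_low  = [1, 2, 3, 4, 4]     # combined probability placed as low as possible

def avg_and_variance(p, n):
    n_avg = sum(pk * nk for pk, nk in zip(p, n))               # average code-word length
    var = sum(pk * (nk - n_avg) ** 2 for pk, nk in zip(p, n))  # variance of the lengths
    return n_avg, var

print(avg_and_variance(probs, lengths_high))   # (1.9, 0.99)
print(avg_and_variance(probs, lengths_low))    # (1.9, 1.29)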
The mutual information between x_i and y_j is defined as
I(x_i ; y_j) = log_2 [ P(x_i / y_j) / P(x_i) ]  bits    ... (1)
Here I(x_i ; y_j) is the mutual information and P(x_i / y_j) is the conditional probability of x_i given y_j.
Properties of mutual information:
(i) The mutual information of a channel is symmetric: I(X;Y) = I(Y;X)
(ii) The mutual information can be expressed in terms of the entropies of the channel input or channel output and the conditional entropies:
I(X;Y) = H(X) − H(X/Y)
I(Y;X) = H(Y) − H(Y/X)
where H(X/Y) and H(Y/X) are conditional entropies.
(iii) The mutual information is non-negative: I(X;Y) ≥ 0
(iv) I(X;Y) = H(X) + H(Y) − H(X,Y)
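To make these identities concrete, the sketch below (with an assumed example joint distribution, not one from the text) computes I(X;Y) from a 2 × 2 joint probability table using I(X;Y) = H(X) + H(Y) − H(X,Y).

import math

def entropy(probs):
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# Assumed example joint distribution P(X=i, Y=j) for a 2x2 channel
P_XY = [[0.4, 0.1],
        [0.1, 0.4]]

P_X = [sum(row) for row in P_XY]                                   # marginal of X
P_Y = [sum(P_XY[i][j] for i in range(2)) for j in range(2)]        # marginal of Y

H_X  = entropy(P_X)
H_Y  = entropy(P_Y)
H_XY = entropy([p for row in P_XY for p in row])                   # joint entropy

I = H_X + H_Y - H_XY            # I(X;Y) = H(X) + H(Y) - H(X,Y)
print(round(I, 4))              # non-negative, and symmetric in X and Y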
Property 1
The mutual information of a channel is symmetric.
i.e., I(X;Y) = I(Y;X)
Proof
Let us consider some standard relationships from probability theory. These are as follows:
P(X_i, Y_j) = P(X_i / Y_j) P(Y_j)    ... (1)
and P(X_i, Y_j) = P(Y_j / X_i) P(X_i)    ... (2)
From equations (1) and (2) we can write
P(X_i / Y_j) P(Y_j) = P(Y_j / X_i) P(X_i)
i.e., P(X_i / Y_j) / P(X_i) = P(Y_j / X_i) / P(Y_j)    ... (3)
The mutual information is
I(X;Y) = Σ_{i=1}^{m} Σ_{j=1}^{n} P(X_i, Y_j) log_2 [ P(X_i / Y_j) / P(X_i) ]
Similarly,
I(Y;X) = Σ_{i=1}^{m} Σ_{j=1}^{n} P(X_i, Y_j) log_2 [ P(Y_j / X_i) / P(Y_j) ]
By equation (3) the two logarithms are equal term by term; hence
I(Y;X) = I(X;Y)
Property 2
I(X;Y) = H(X) − H(X/Y) and I(Y;X) = H(Y) − H(Y/X)
Proof
The conditional entropy H(X/Y) is given as
H(X/Y) = Σ_{i=1}^{m} Σ_{j=1}^{n} P(X_i, Y_j) log_2 [ 1 / P(X_i / Y_j) ]    ... (1)
The mutual information is
I(X;Y) = Σ_{i=1}^{m} Σ_{j=1}^{n} P(X_i, Y_j) log_2 [ P(X_i / Y_j) / P(X_i) ]
Splitting the logarithm,
I(X;Y) = Σ_{i=1}^{m} Σ_{j=1}^{n} P(X_i, Y_j) log_2 [ 1 / P(X_i) ] − H(X/Y)    ... (2)
Since
Σ_{j=1}^{n} P(X_i, Y_j) = P(X_i)
the first term of equation (2) equals Σ_{i=1}^{m} P(X_i) log_2 [ 1 / P(X_i) ] = H(X). Hence
I(X;Y) = H(X) − H(X/Y)
Similarly,
I(Y;X) = Σ_{i=1}^{m} Σ_{j=1}^{n} P(X_i, Y_j) log_2 [ P(Y_j / X_i) / P(Y_j) ]
= Σ_{j=1}^{n} Σ_{i=1}^{m} P(X_i, Y_j) log_2 [ 1 / P(Y_j) ] − Σ_{i=1}^{m} Σ_{j=1}^{n} P(X_i, Y_j) log_2 [ 1 / P(Y_j / X_i) ]    ... (6)
The conditional entropy H(Y/X) is given as
H(Y/X) = Σ_{i=1}^{m} Σ_{j=1}^{n} P(X_i, Y_j) log_2 [ 1 / P(Y_j / X_i) ]    ... (7)
Since
Σ_{i=1}^{m} P(X_i, Y_j) = P(Y_j)    ... (9)
and we know that
H(Y) = Σ_{j=1}^{n} P(Y_j) log_2 [ 1 / P(Y_j) ]
the first term of equation (6) represents H(Y). Hence the above equation becomes
I(Y;X) = H(Y) − H(Y/X)    ... (10)
Property 3
I(X;Y) ≥ 0
Proof
The average mutual information can be written as
I(X;Y) = Σ_{i=1}^{m} Σ_{j=1}^{n} P(X_i, Y_j) log_2 [ P(X_i / Y_j) / P(X_i) ]    ... (1)
Hence
−I(X;Y) = Σ_{i=1}^{m} Σ_{j=1}^{n} P(X_i, Y_j) log_2 [ P(X_i) / P(X_i / Y_j) ]
= (1 / ln 2) Σ_{i=1}^{m} Σ_{j=1}^{n} P(X_i, Y_j) ln [ P(X_i) P(Y_j) / P(X_i, Y_j) ]    ... (2)
Also we know that
ln α ≤ α − 1
Therefore we have
−I(X;Y) ≤ (1 / ln 2) Σ_{i=1}^{m} Σ_{j=1}^{n} P(X_i, Y_j) [ P(X_i) P(Y_j) / P(X_i, Y_j) − 1 ]
−I(X;Y) ≤ (1 / ln 2) [ Σ_{i=1}^{m} Σ_{j=1}^{n} P(X_i) P(Y_j) − Σ_{i=1}^{m} Σ_{j=1}^{n} P(X_i, Y_j) ]    ... (3)
Since
Σ_{i=1}^{m} Σ_{j=1}^{n} P(X_i) P(Y_j) = Σ_{i=1}^{m} P(X_i) Σ_{j=1}^{n} P(Y_j) = (1)(1) = 1
and
Σ_{i=1}^{m} Σ_{j=1}^{n} P(X_i, Y_j) = Σ_{i=1}^{m} P(X_i) = 1
the right-hand side of (3) is zero, so −I(X;Y) ≤ 0, i.e., I(X;Y) ≥ 0.
Hence proved.
Property 4
I(X;Y) = H(X) + H(Y) − H(X,Y)
Proof
We know the relation
H(X/Y) = H(X,Y) − H(Y)
Therefore, using I(X;Y) = H(X) − H(X/Y),
I(X;Y) = H(X) − [H(X,Y) − H(Y)]
I(X;Y) = H(X) + H(Y) − H(X,Y)
The channel efficiency is defined as
η = actual transinformation / maximum transinformation
η = I(X;Y) / max I(X;Y) = I(X;Y) / C    ... (2)
The redundancy of the channel is
R = 1 − η = [C − I(X;Y)] / C    ... (3)
4.9 MAXIMUM ENTROPY FOR CONTINUOUS CHANNEL OR GAUSSIAN CHANNEL
• The probability density function of a Gaussian source is given as
p(x) = [1 / (σ √(2π))] e^(−x² / 2σ²)
where σ² = average power of the source.
The maximum (differential) entropy is computed as follows:
h(x) = ∫_{−∞}^{∞} p(x) log_2 [1 / p(x)] dx
= − ∫_{−∞}^{∞} p(x) log_2 p(x) dx
= − ∫_{−∞}^{∞} p(x) log_2 { [1 / (σ √(2π))] e^(−x² / 2σ²) } dx
Since log_2 (AB) = log_2 A + log_2 B,
h(x) = ∫_{−∞}^{∞} p(x) log_2 (σ √(2π)) dx + ∫_{−∞}^{∞} p(x) log_2 e^(x² / 2σ²) dx
Using n log m = log mⁿ, we have log_2 e^(x² / 2σ²) = (x² / 2σ²) log_2 e, so
h(x) = (1/2) log_2 (2πσ²) ∫_{−∞}^{∞} p(x) dx + (log_2 e / 2σ²) ∫_{−∞}^{∞} x² p(x) dx
Now
∫_{−∞}^{∞} p(x) dx = 1, from the properties of a pdf
∫_{−∞}^{∞} x² p(x) dx = σ², from the definition of variance
Hence
h(x) = (1/2) log_2 (2πσ²) + (1/2) log_2 e
h(x) = (1/2) log_2 (2πσ²e)
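The closed-form result h(x) = (1/2) log_2(2πeσ²) can be cross-checked numerically, as in the sketch below (illustrative only; σ = 1.5 is an arbitrary example value).

import math

sigma = 1.5
closed_form = 0.5 * math.log2(2 * math.pi * math.e * sigma ** 2)

# Riemann-sum approximation of -∫ p(x) log2 p(x) dx over a wide interval
dx = 1e-3
total = 0.0
x = -10 * sigma
while x < 10 * sigma:
    p = math.exp(-x * x / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    total += -p * math.log2(p) * dx
    x += dx

print(round(closed_form, 4), round(total, 4))   # the two values agree closely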
The capacity of a band-limited Gaussian channel is
C = B log_2 (1 + S/N)  bits/sec
where
B → the channel bandwidth,
S → the signal power,
N → the total noise power within the channel bandwidth.
Here B is the bandwidth and the power spectral density of the white noise is N_o/2; hence the noise power N becomes
N = ∫_{−B}^{B} (N_o / 2) df
Noise power
N = N_o B
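A small helper (an illustration, not from the text) that evaluates C = B log_2(1 + S/N) with N = N_o B; the bandwidth, signal power and noise density used in the example call are assumed values.

import math

def channel_capacity(bandwidth_hz, signal_power_w, noise_psd_w_per_hz):
    """Shannon capacity C = B * log2(1 + S/N) with noise power N = N0 * B."""
    noise_power = noise_psd_w_per_hz * bandwidth_hz
    return bandwidth_hz * math.log2(1 + signal_power_w / noise_power)

# Assumed example values: B = 3 kHz, S = 1 mW, N0 = 1e-9 W/Hz
print(channel_capacity(3000, 1e-3, 1e-9))   # capacity in bits/sec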
[Figure: channel model with source X, additive noise N and destination Y, where Y = X + N]
H[x,y] = H[y] + H[x/y]    ... (2)
The noise added in the system is Gaussian in nature. As the source is independent of the noise,
H[x,y] = H[x] + H[N]    ... (3)
As y depends on x and N, Y = f(x, N) = x + N, therefore
H[x,y] = H[x,N]    ... (4)
Combining equations (2), (3) and (4),
H[y] + H[x/y] = H[x] + H[N]
H[x] − H[x/y] = H[y] − H[N]    ... (5)
Maximising the right-hand side of equation (5) using the Gaussian maximum-entropy result, and taking 2B independent samples per second for a signal band-limited to B Hz, gives
C = 2B × (1/2) log_2 (1 + S/N)
C = B log_2 (1 + S/N)  bits/sec
where B is the channel bandwidth. We know that with noise power spectral density N_o/2 the noise power is
N = N_o B
so that
C = B log_2 (1 + S / (N_o B))  bits/sec
If the noise power tends to zero (a noiseless channel), then
C = B log_2 (1 + ∞) = ∞
For a channel bandwidth of B = 10000 Hz, the same capacity of about 10⁴ bits/sec requires
10000 = 10000 log_2 (1 + S/N)
∴ S/N = 1
Here, B = 3000 Hz requires S/N = 9, whereas B = 10000 Hz requires only S/N = 1. This illustrates the trade-off between bandwidth and signal-to-noise ratio for a given capacity.
Problem 2
The channel capacity is given by
C = B log_2 (1 + S/N)  bits/sec    ... (6)
In the above equation, when the signal power is fixed and white Gaussian noise is present, the channel capacity approaches an upper limit as the bandwidth B is increased. Prove that this upper limit is given as
C_∞ = lim_{B→∞} C = (1 / ln 2) (S / N_o) = 1.44 S / N_o
Solution
With N = N_o B, the capacity is
C = B log_2 (1 + S / (N_o B))
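The limit stated in Problem 2 can be observed numerically with the sketch below (illustrative; S and N_o are arbitrary example values): as B grows, C approaches 1.44 S/N_o.

import math

S, N0 = 1e-3, 1e-9      # assumed example: 1 mW signal, 1 nW/Hz noise density

def capacity(B):
    return B * math.log2(1 + S / (N0 * B))

for B in (1e3, 1e5, 1e7, 1e9):
    print(B, round(capacity(B)))        # capacity grows but saturates

print(round(1.44 * S / N0))             # the upper limit C_inf = 1.44 * S/N0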
Problem 3
A black-and-white TV picture consists of about 2 × 10⁶ picture elements with 16 different brightness levels, all equally likely. Pictures are repeated at the rate of 32 per second. Calculate the average rate of information conveyed by this TV picture source. If the SNR is 30 dB, what is the minimum bandwidth required to support the transmission of the resultant video signal?
Solution
Given
Picture elements = 2 × 10⁶
Source levels (symbols) = 16, i.e., M = 16
Picture repetition rate = 32 per second
(S/N)_dB = 30 dB
(i) The source symbol entropy (H)
The source emits any one of the 16 brightness levels; here M = 16 and the levels are equiprobable. Hence the entropy of the source is
H = log_2 M = log_2 16 = 4 bits/symbol (level)
(ii) Symbol rate (r)
Each picture consists of 2 × 10⁶ picture elements, and 32 pictures are transmitted per second. Hence the number of picture elements per second will be
r = 2 × 10⁶ × 32 = 64 × 10⁶ symbols/sec
(iii) Information rate (R)
R = r H = 64 × 10⁶ × 4 = 2.56 × 10⁸ bits/sec
(iv) Required bandwidth
We know that (S/N)_dB = 10 log_10 (S/N)
∴ 30 = 10 log_10 (S/N)
∴ S/N = 1000
The channel coding theorem states that information can be received without error if
R ≤ C
Here R = 2.56 × 10⁸ bits/sec and C = B log_2 (1 + S/N), so
2.56 × 10⁸ ≤ B log_2 (1 + 1000)
(or) B ≥ 2.56 × 10⁸ / log_2 (1001), i.e., B ≥ 25.68 MHz
Therefore, the transmission channel must have a bandwidth of at least 25.68 MHz to transmit the resultant video signal.
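The numbers in Problem 3 can be reproduced with this sketch (illustrative only):

import math

pixels_per_picture = 2e6
levels = 16
pictures_per_sec = 32
snr_db = 30

H = math.log2(levels)                       # 4 bits per picture element
r = pixels_per_picture * pictures_per_sec   # 64e6 symbols/sec
R = r * H                                   # 2.56e8 bits/sec

snr = 10 ** (snr_db / 10)                   # 1000
B_min = R / math.log2(1 + snr)              # from R <= B*log2(1 + S/N)
print(R, round(B_min / 1e6, 2))             # ~25.68 MHz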
Problem 4
A voice-grade telephone channel has a bandwidth of 3400 Hz. If the signal-to-noise ratio (SNR) on the channel is 30 dB, determine the capacity of the channel. If the above channel is to be used to transmit data at 4.8 kbps, determine the minimum SNR required on the channel.
Solution:
Given data: channel bandwidth B = 3400 Hz, (S/N)_dB = 30 dB.
We know that
(S/N)_dB = 10 log_10 (S/N)
∴ 30 = 10 log_10 (S/N)
log_10 (S/N) = 3
∴ S/N = 1000
(i) Channel capacity
C = B log_2 (1 + S/N) = 3400 log_2 (1 + 1000) ≈ 3400 × 9.97 ≈ 33.9 kbps
(ii) Minimum SNR for 4.8 kbps
For error-free transmission we require R ≤ C. Here R = 4.8 kbps and C = B log_2 (1 + S/N), hence
4.8 kbps ≤ B log_2 (1 + S/N)
i.e., 4800 ≤ 3400 log_2 (1 + S/N)
i.e., log_2 (1 + S/N) ≥ 1.41176
log_10 (1 + S/N) / log_10 2 ≥ 1.41176
∴ S/N ≥ 1.66
This means (S/N)_min = 1.66 to transmit data at the rate of 4.8 kbps.
Problem 5
For an AWGN channel with 4.0 kHz bandwidth, the noise spectral density η/2 is 1.0 picowatt/Hz and the signal power at the receiver is 0.1 mW. Determine the maximum capacity as well as the actual capacity of the above AWGN channel.
Solution:
Given: B = 4000 Hz, S = 0.1 × 10⁻³ W