
Unit IV INFORMATION THEORY AND CODING
• Introduction
• Information Measure
• Function Determination for Information
• Average Information per Symbol
• Information rate
• Coding
• Shannon-Fano Coding
Source coding
1.1 Introduction to information theory

• Information theory and coding answer two fundamental questions:
• What is the irreducible complexity below which a signal cannot be compressed?
• What is the ultimate transmission rate for reliable communication over a noisy channel?
• The answers lie in the entropy of a source and the capacity of a channel, respectively.
• Advantages
– Error probability is reduced
– Excellent channel coding techniques are available
– Utilization of full channel capacity
– Coding techniques for forward error correction and automatic repeat request are available
• Disadvantages
– All coding techniques add redundancy bits for error detection and correction
– Computations in coding and decoding introduce delay in transmission and reception
• Entropy is defined in terms of the probabilistic behavior of a source of information.
• Capacity is defined as the intrinsic ability of a channel to convey information.
Uncertainty, Information, Entropy

• Let the output of a source (a discrete random variable) produced by a probabilistic experiment be
S = {s0, s1, ..., sK-1}
with probabilities
P(S = sk) = pk,  k = 0, 1, ..., K-1
• The set of probabilities must satisfy the condition
Σk pk = 1  (the sum runs over k = 0, 1, ..., K-1)
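As a quick illustrative sketch (not part of the slides, written in Python), such a source can be modelled as a mapping from symbols to probabilities; the four-symbol alphabet and its probabilities below are assumed purely for illustration.

    # Hypothetical discrete source: symbols s0..s3 with assumed probabilities
    source = {"s0": 0.5, "s1": 0.25, "s2": 0.125, "s3": 0.125}

    # The probabilities pk must satisfy sum(pk) = 1
    assert abs(sum(source.values()) - 1.0) < 1e-9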
Information measure
• So the question is: which function can we use to measure information?
• Information = F(1/Probability)
Requirements the function must satisfy:
1. Its output must be a non-negative quantity.
2. Its minimum value is 0.
3. It should turn a product into a summation.

Information I(sk) = logb(1/pk)

Here b may be 2, e or 10.
If b = 2, the unit is bits
b = e, the unit is nats
b = 10, the unit is decits
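The following is a minimal Python sketch of this measure (the probability value is assumed for illustration); changing the logarithm base only changes the unit, giving bits, nats or decits for the same event.

    import math

    def information(p, base=2):
        # Self-information I = log_base(1/p) of an event with probability p
        return math.log(1.0 / p, base)

    p = 0.125
    print(information(p, 2))         # about 3.0 bits
    print(information(p, math.e))    # about 2.079 nats
    print(information(p, 10))        # about 0.903 decits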
Information Measure-properties
• We can also state the following rules from intuition.

Rule 1: The information I(sk) approaches 0 as pk approaches 1.

Mathematically, I(sk) → 0 as pk → 1

e.g. The sun rises in the east.

The outcome is certain, so it carries no information.
Rule 2: The information content I(sk) must be a non-negative quantity; it may be zero.

Mathematically, I(sk) ≥ 0 for 0 ≤ pk ≤ 1

e.g. The sun rises in the west.

The event S = sk provides some or no information, but it never causes a loss of information.
Rule 3: The information content of a message with higher probability is less than the information content of a message with lower probability.

Mathematically, I(sk) > I(sj) when pk < pj

Rule 4: For two messages whose occurrences are mutually independent, the information content of the combined message equals the sum of the information contents of the individual messages.

e.g. There will be sunny weather today.
There will be cloudy weather tomorrow.

Mathematically,

I(sk and sj) = I(sk sj) = I(sk) + I(sj)
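A short numerical check of Rule 4, sketched in Python with two event probabilities assumed purely for illustration: for mutually independent events the joint probability is the product pk*pj, and the logarithm turns that product into a sum.

    import math

    def information(p, base=2):
        return math.log(1.0 / p, base)

    p_sunny, p_cloudy = 0.7, 0.4       # assumed probabilities of independent events
    p_joint = p_sunny * p_cloudy       # independence: P(sk and sj) = pk * pj

    # I(sk sj) equals I(sk) + I(sj)
    print(information(p_joint))                           # about 1.836 bits
    print(information(p_sunny) + information(p_cloudy))   # about 1.836 bits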
Information measure

This is utilized to determine the information rate of discrete sources.

Consider two messages:
A dog bites a man → high probability → less information
A man bites a dog → low probability → high information

So we can say that

– Information ∝ 1/(Probability of occurrence)
• Before the event S = sk occurs, there is an amount of uncertainty.
• When S = sk occurs, there is an amount of surprise.
• After the occurrence of the event S = sk, there is a gain in information.
Average Information Content-Entropy

• It is necessary to define the information content of a particular symbol, since the communication channel deals with symbols.

• Here we make the following assumptions:

1. The source is stationary, so the probabilities remain constant with time.

2. The successive symbols are statistically independent and are emitted at an average rate of r symbols per second.

• Entropy is the average information content per source symbol.
• H(S) = E[I(sk)]
• H(S) = Σk pk log2(1/pk)
• Entropy H satisfies the following inequality, where M is the number of source symbols:
0 ≤ H ≤ log2(M)
• The maximum H occurs when all the messages are equally probable.
• Hence H also measures the uncertainty about which symbol will occur: as H approaches its maximum value, we cannot predict which message will occur.
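A minimal Python sketch of these formulas (the source probabilities and the symbol rate r are assumed for illustration): it computes H(S) = Σk pk log2(1/pk), checks the bound 0 ≤ H ≤ log2(M), and forms the information rate R = r·H.

    import math

    def entropy(probs):
        # Average information per symbol, in bits/symbol
        return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

    probs = [0.5, 0.25, 0.125, 0.125]   # assumed source probabilities
    H = entropy(probs)                  # 1.75 bits/symbol
    M = len(probs)
    assert 0 <= H <= math.log2(M)       # 0 <= H <= log2(M) = 2 bits

    r = 1000                            # assumed symbol rate (symbols/second)
    print(r * H)                        # information rate R = r*H = 1750.0 bits/second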
1.3 Average mutual information and entropy
[Figure: Maximum entropy - entropy H(X) plotted against probability p]

• When the letters from the source are equally probable, the entropy of a discrete source is maximum.
• Uncertainty is maximum!
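Assuming the plot above shows a two-symbol source (entropy as a function of a single probability p), a small Python sketch of the binary entropy function illustrates the peak of 1 bit at p = 0.5; the sample probabilities are arbitrary.

    import math

    def binary_entropy(p):
        # H(p) = p*log2(1/p) + (1-p)*log2(1/(1-p)), in bits
        if p in (0.0, 1.0):
            return 0.0
        return p * math.log2(1.0 / p) + (1 - p) * math.log2(1.0 / (1 - p))

    for p in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(f"p = {p:.1f}  H = {binary_entropy(p):.3f} bits")
    # The maximum, H = 1 bit, occurs at p = 0.5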
