Coding Theory Exam Questions

This document is an examination paper for the M.Tech II Semester Supplementary Examinations in Coding Theory & Techniques from Jawaharlal Nehru Technological University Hyderabad. It includes a series of questions on topics such as prefix codes, Shannon-Fano coding, systematic codes, Hamming codes, convolution codes, and BCH codes. Students are required to answer any five questions, each carrying equal marks.


NR
Code No: B3803
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
M.Tech II Semester Supplementary Examinations March/April 2010
CODING THEORY & TECHNIQUES
(DE&CS)
Time: 3 hours
Answer any five questions
All questions carry equal marks
---
1.a) What are the properties of prefix codes?
b) Why is source coding required?
c) Find the entropy of the source transmitting four symbols with probabilities 0.4, 0.2, 0.2, 0.2.

2.a) Design a Shannon-Fano code for the following symbols and their probabilities:
a1 − 0.5, a2 − 0.2, a3 − 0.2, a4 − 0.05, a5 − 0.05
b) Write the Lempel–Ziv algorithm.

3. Consider an (8, 4) systematic code whose parity check matrix is

        ⎡1 0 0 0 1 1 0 1⎤
   H =  ⎢0 1 0 0 1 0 1 1⎥
        ⎢0 0 1 0 1 1 1 0⎥
        ⎣0 0 0 1 0 1 1 1⎦

(i)   Express the syndrome bits in terms of the received word bits.
(ii)  Construct the syndrome circuit for this code.
(iii) Construct the standard array of this code.

4.a) Show that Hamming codes achieve the Hamming bound.
b) A fairly good linear code of length n = 63 with minimum distance dmin = 5 is desired.
(i)  Determine the range of parity check bits.
(ii) Find the largest possible value of k.

5.

Consider the (7, 4) cyclic Hamming code generated by g(x) = 1 + x + x³. The received polynomial r(x) is fed into the syndrome register from the right end.
(i)  What is the syndrome when a single error occurs at the location e(x) = x⁶?
(ii) Derive the decoding circuit for this code.

6. For the (3, 1) systematic convolution code with m = 5, the generator sequences are given as g(1) = (100000), g(2) = (101101) and g(3) = (11011).
(i)  Find the generator matrix of this code.
(ii) Find the code word corresponding to the information sequence d = (1101).

7.a) Write a stack sequential decoding algorithm for convolution codes.


b) Draw the code tree for the (3, 1, 2) code with L = 5 and decode the sequence
r = (010, 010, 001, 110, 100, 101, 011).

8. Determine the generator polynomials of all the primitive BCH codes of length 31.
Use the Galois field GF(2⁵) generated by p(x) = 1 + x² + x⁵.
*******


Common questions


Prefix codes are codes in which no codeword is a prefix of any other. This property makes a coded stream uniquely and instantaneously decodable without delimiters: as soon as a received bit string matches a codeword, that match is the only possible one. Source coding is required because it removes redundancy from the transmitted data, allowing efficient use of bandwidth and storage.
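The delimiter-free decoding described above can be sketched with a greedy matcher. The codebook below is illustrative only (not from the exam paper); any prefix-free set of codewords would behave the same way.

```python
# Sketch: decoding a prefix code without delimiters.
# The codebook is an illustrative example, not from the exam paper.
CODEBOOK = {"a": "0", "b": "10", "c": "110", "d": "111"}

def decode(bits, codebook):
    """Greedily match codewords; this works because no codeword is a
    prefix of another, so the first match is the only possible one."""
    inverse = {v: k for k, v in codebook.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in inverse:          # unique match found, emit symbol
            out.append(inverse[buf])
            buf = ""
    return "".join(out)

print(decode("0101110111", CODEBOOK))   # -> "abdad"
```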

To design a Shannon-Fano code, first arrange the symbols in order of decreasing probability. For probabilities 0.5, 0.2, 0.2, 0.05, 0.05, split the list into two groups with approximately equal total probability, assign a '0' to symbols in the first group and '1' to the second, and repeat this process recursively within each group until every symbol has a unique binary code. Shannon-Fano codes are among the earliest variable-length coding schemes, shortening the expected code length by matching codeword lengths to symbol probabilities.
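The recursive split described above can be sketched directly, using the probabilities from question 2. The split rule (minimising the difference between the two groups' totals) is one common convention for Shannon-Fano construction.

```python
# Sketch of Shannon-Fano coding for the exam's symbol probabilities.
def shannon_fano(symbols):
    """symbols: list of (name, probability), sorted by decreasing p."""
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    total = sum(p for _, p in symbols)
    # Find the split point that best balances the two group totals.
    run, best_i, best_diff = 0.0, 0, float("inf")
    for i in range(len(symbols) - 1):
        run += symbols[i][1]
        diff = abs(run - (total - run))
        if diff < best_diff:
            best_diff, best_i = diff, i
    left = shannon_fano(symbols[:best_i + 1])      # gets prefix '0'
    right = shannon_fano(symbols[best_i + 1:])     # gets prefix '1'
    codes = {s: "0" + c for s, c in left.items()}
    codes.update({s: "1" + c for s, c in right.items()})
    return codes

probs = [("a1", 0.5), ("a2", 0.2), ("a3", 0.2), ("a4", 0.05), ("a5", 0.05)]
print(shannon_fano(probs))
# -> a1: 0, a2: 10, a3: 110, a4: 1110, a5: 1111
```

The resulting average codeword length is 0.5·1 + 0.2·2 + 0.2·3 + 0.05·4 + 0.05·4 = 1.9 bits per symbol.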

The generator sequences g(i) determine the generator matrix G: each output stream is the discrete convolution (mod 2) of the input with one generator sequence, and G is built by placing shifted copies of the generator sequences along its rows. For systematic convolution codes, G must contain an identity part that passes the information bits through unchanged, alongside the parity streams produced by the convolutions; this structure directly sets the form and efficiency of the code in terms of rate and error-correction capability.
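The convolution-and-interleave operation described above can be sketched as follows. The generator taps used here are short illustrative values chosen for the sketch, not the exam's g(1), g(2), g(3) (whose listing in the paper appears truncated).

```python
# Sketch: encoding with a rate-1/3 convolution code by discrete
# convolution of the input with each generator sequence (mod 2).
# The generator taps below are illustrative assumptions.
def conv_encode(d, gens):
    m = max(len(g) for g in gens) - 1        # memory order
    out = []
    for t in range(len(d) + m):              # include the tail bits
        for g in gens:
            # output bit at time t = sum over taps of g[j] * d[t-j] mod 2
            bit = 0
            for j, gj in enumerate(g):
                if gj and 0 <= t - j < len(d):
                    bit ^= d[t - j]
            out.append(bit)
    return out

g1, g2, g3 = [1, 0, 0], [1, 0, 1], [1, 1, 1]   # assumed taps, m = 2
print(conv_encode([1, 1, 0, 1], [g1, g2, g3]))
```

Note that g1 = (1, 0, 0) simply passes the input through, which is what makes this sketch systematic: the first of every three output bits is an information bit.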

BCH codes are significant for their strong error-correcting capabilities, based on polynomials over finite fields. To construct a primitive BCH code of length n = 2^m − 1, take a primitive element α of GF(2^m); the generator polynomial is then the least common multiple (LCM) of the minimal polynomials of α, α², ..., α^(2t), where t is the designed error-correcting capability. This construction guarantees the designed minimum distance, making BCH codes highly effective in data transmission systems.
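A useful shortcut when working question 8: the degree of each minimal polynomial equals the size of the corresponding 2-cyclotomic coset mod n, so the degree of g(x) can be found without any field arithmetic. The sketch below does this for n = 31 and a double-error-correcting (t = 2) code; the choice t = 2 is an illustrative assumption.

```python
# Sketch: degrees of minimal polynomials over GF(2) via the sizes of
# the 2-cyclotomic cosets mod n (here n = 31, so GF(2^5)).
def cyclotomic_cosets(n):
    seen, cosets = set(), []
    for s in range(n):
        if s in seen:
            continue
        coset, x = [], s
        while x not in coset:      # multiply by 2 mod n until we cycle
            coset.append(x)
            seen.add(x)
            x = (2 * x) % n
        cosets.append(coset)
    return cosets

cosets = cyclotomic_cosets(31)
# For t = 2, g(x) must have alpha^1 .. alpha^4 as roots, which lie in
# the cosets of 1 and 3; deg g = sum of those coset sizes.
deg = sum(len(c) for c in cosets if any(i in c for i in (1, 2, 3, 4)))
print(deg)   # degree of g(x) for the (31, 31-deg) double-error code
```

Since both relevant cosets have size 5, deg g(x) = 10 and the t = 2 code is the (31, 21) BCH code.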

Drawing a code tree involves plotting a decision tree in which each node represents a possible state of the encoder. Branches represent transitions corresponding to input bits, so valid code sequences appear as paths of branches through the tree, distinguished from noise-induced variations. The tree guides step-by-step decoding: starting at the root, the decoder follows the path whose branch labels best match the received sequence, recovering the input bits along the way.

The entropy H of a source can be calculated using the formula H = −Σ p_i · log2(p_i), where p_i is the probability of each symbol. For probabilities 0.4, 0.2, 0.2 and 0.2, H = −(0.4·log2(0.4) + 0.2·log2(0.2) + 0.2·log2(0.2) + 0.2·log2(0.2)) ≈ 1.92 bits. Entropy is significant because it quantifies the average amount of information produced by a stochastic source, and thus sets a theoretical limit on the best possible lossless compression.
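The calculation for question 1(c) can be checked in a few lines:

```python
import math

def entropy(probs):
    """H = -sum p * log2(p), in bits per symbol; skip zero terms."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

H = entropy([0.4, 0.2, 0.2, 0.2])
print(round(H, 3))   # -> 1.922
```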

Hamming codes achieve the Hamming bound because with n = 2^m − 1 and k = n − m, the 2^(n−k) = n + 1 distinct syndromes exactly cover the no-error case plus the n single-error patterns, so the sphere-packing condition 2^k·(n + 1) = 2^n holds with equality. Each correctable error pattern has a unique syndrome, allowing correction of one error per codeword; codes meeting the bound with equality are called perfect, which makes Hamming codes optimally efficient single-error-correcting codes for their length and rate.
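The equality 2^k·(1 + n) = 2^n can be verified numerically for the first few Hamming code lengths:

```python
# Sketch: check that Hamming codes meet the Hamming (sphere-packing)
# bound with equality for single-error correction:
#   2^k * (1 + n) = 2^n   when n = 2^m - 1 and k = n - m.
for m in range(2, 8):
    n = 2**m - 1
    k = n - m
    spheres = 2**k * (1 + n)      # codewords times Hamming-sphere volume
    assert spheres == 2**n, (n, k)
print("Hamming codes are perfect for m = 2..7")
```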

For cyclic codes, the syndrome is the remainder when the received polynomial r(x) is divided by the generator polynomial g(x): s(x) = r(x) mod g(x). When a single error occurs at position x⁶, r(x) = c(x) + x⁶; since the transmitted codeword c(x) is divisible by g(x), the syndrome depends only on the error term. A non-zero syndrome thus locates the single error, and correction amounts to flipping the corresponding bit. This method exploits the cyclic nature of the code, ensuring reliable decoding.
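The division r(x) mod g(x) for question 5(i) can be sketched with polynomials stored as bit lists (index = power of x):

```python
# Sketch: syndrome s(x) = r(x) mod g(x) over GF(2), with polynomials
# as bit lists. g(x) = 1 + x + x^3 is the exam's generator.
def poly_mod(r, g):
    r = r[:]                       # work on a copy of the coefficients
    for i in range(len(r) - 1, len(g) - 2, -1):
        if r[i]:                   # cancel the degree-i term with x^(i-3)*g(x)
            for j, gj in enumerate(g):
                r[i - len(g) + 1 + j] ^= gj
    return r[:len(g) - 1]          # remainder has degree < deg g

g = [1, 1, 0, 1]                   # 1 + x + x^3
e = [0, 0, 0, 0, 0, 0, 1]          # single error e(x) = x^6
print(poly_mod(e, g))              # -> [1, 0, 1], i.e. s(x) = 1 + x^2
```

So an error at x⁶ produces the syndrome s(x) = 1 + x², i.e. the register contents (1, 0, 1).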

Challenges in sequential decoding include computational complexity and memory usage, both of which grow with code length and channel noise; these can lead to long decoding times and buffer overflows. Addressing them involves bounding the stack size, managing decoding paths efficiently to prevent overflow, and adopting heuristics or parallel processing to speed the search through the code tree, maintaining decoding performance within resource constraints.

A syndrome circuit processes the received word with the parity-check matrix H to produce a syndrome that indicates errors. The steps are: (1) multiply the received word by H transpose, (2) reduce each component mod 2 to obtain the syndrome vector s = rHᵀ. A zero syndrome means the received word is a valid codeword; a non-zero syndrome signals an error and, matched against the columns of H, helps pinpoint the position to correct.
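The two steps above can be sketched with the (8, 4) parity-check matrix from question 3 of the paper:

```python
# Sketch: syndrome s = r * H^T (mod 2), using the (8, 4) parity-check
# matrix given in question 3.
H = [
    [1, 0, 0, 0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 1, 1, 1],
]

def syndrome(r, H):
    """One syndrome bit per row of H: inner product with r, mod 2."""
    return [sum(hi * ri for hi, ri in zip(row, r)) % 2 for row in H]

r = [0] * 8                        # the all-zero codeword is valid
r[5] ^= 1                          # flip bit 5 to simulate an error
print(syndrome(r, H))              # -> [1, 0, 1, 1], column 5 of H
```

Because the syndrome of a single error equals the corresponding column of H, comparing the computed syndrome against H's columns identifies the erroneous position.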
