Channel Coding New

Channel coding techniques are used to add redundancy to transmitted data to reduce the effect of noise during transmission. This allows for error detection and correction. Techniques discussed in the document include Hamming-distance-based codes, which use parity bits for error detection and single-bit error correction; linear block codes, which encode data words into codewords with a generator matrix; and cyclic codes, which use a generator polynomial to encode data words algebraically. Systematic cyclic codes keep the first bits of the codeword identical to the data word. Decoding involves calculating the syndrome and looking up the error vector in a decoding table. Convolutional codes use shift registers and modulo-2 adders to spread each data bit over several transmitted digits, and interlaced coding protects against burst errors.


Channel Coding

To reduce the effect of noise
Redundancy
Error detection

Data word   Codeword
0           00
1           11
Invalid codewords : 01 or 10
Hamming Distance

Words of the same length
Number of digits in which they differ
00 and 11 : distance 2
000 and 111 : distance 3
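A one-line Python check of these distances (a sketch, not part of the slides):

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions in which two equal-length words differ."""
    assert len(a) == len(b), "words must have the same length"
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("00", "11"))    # 2
print(hamming_distance("000", "111"))  # 3
```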
Hamming Distance for Error Detection

To detect t errors : Hamming distance t + 1
Words : 00 and 11
Hamming distance : 2
1 error can be detected.
Error Correction Coding

Data word   Codeword
0           000
1           111
Majority rule
1-bit error correction
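A minimal sketch of this repetition code and its majority-rule decoder (illustrative only):

```python
def encode_repetition(bit: int) -> str:
    # Data word 0 -> 000, 1 -> 111
    return str(bit) * 3

def decode_majority(word: str) -> int:
    # Majority rule corrects any single-bit error
    return 1 if word.count("1") >= 2 else 0

print(encode_repetition(1))    # 111
print(decode_majority("101"))  # 1 (single error in the middle is corrected)
```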
Rate Reduction in Coding

Data rate : α bits per second
Digits accumulated over T seconds : αT bits
αT data bits are coded into βT bits (β > α)
αT bits are sent in t1 seconds
βT bits are sent in t = (β/α) t1 seconds
t / t1 = β / α

Error Probability

Fraction of valid words = 2^(αT) / 2^(βT) = 2^-((β-α)T)
Pe reduces as T is increased
Hamming Distance for Error Correction

Number of correctable errors : t
Distance between codewords : 2t + 1
0   000
1   111
Distance is 3 bits, so 1 error can be corrected.
Hamming Bound

k : Number of digits in the data word
n : Number of digits in the codeword
m = n - k : Parity or check digits
2^m >= Σ nCj , summed from j = 0 to j = t
Table for (n, k) pairs

t    n    k
1    3    1
1    4    1
1    7    4
2    10   4
2    15   8
3    10   2
3    15   5
3    23   12
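Each row can be checked against the bound with a short Python sketch (not from the slides):

```python
from math import comb

def satisfies_hamming_bound(n: int, k: int, t: int) -> bool:
    # 2^m >= sum_{j=0..t} C(n, j), with m = n - k parity digits
    m = n - k
    return 2 ** m >= sum(comb(n, j) for j in range(t + 1))

pairs = [(3, 1, 1), (4, 1, 1), (7, 4, 1),
         (10, 4, 2), (15, 8, 2),
         (10, 2, 3), (15, 5, 3), (23, 12, 3)]
for n, k, t in pairs:
    print(n, k, t, satisfies_hamming_bound(n, k, t))  # all True
```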
Linear Block Codes

c = (c1, c2, c3, ..., cn)
d = (d1, d2, d3, ..., dk)
c1 = d1 ; c2 = d2 ; ... ; ck = dk
c(k+1) = h11 d1 + h12 d2 + ... + h1k dk
c(k+2) = h21 d1 + h22 d2 + ... + h2k dk
...
cn = hm1 d1 + hm2 d2 + ... + hmk dk
Generator Matrix

c = d G

        | 1 0 .. 0   h11 h21 .. hm1 |
G   =   | 0 1 .. 0   h12 h22 .. hm2 |
        | .  .       .              |
        | 0 0 .. 1   h1k h2k .. hmk |

G = [ Ik | P ] , Ik : k by k , P : k by m

For a (6,3) code the generator matrix is

        | 1 0 0 1 0 1 |
G   =   | 0 1 0 0 1 1 |
        | 0 0 1 1 1 0 |

Data word d    Code word c
111            111000
110            110110
101            101011
100            100101
011            011101
010            010011
001            001110
000            000000
The Example

                        | 1 0 0 1 0 1 |
c = [ d1 d2 d3 ]  x     | 0 1 0 0 1 1 |
                        | 0 0 1 1 1 0 |

c1 = d1 ; c2 = d2 ; c3 = d3
c4 = d1 + d3 ; c5 = d2 + d3 ; c6 = d1 + d2
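A minimal Python sketch (not from the slides) that computes c = dG modulo 2 for this generator matrix and reproduces a row of the codeword table:

```python
# Generator matrix of the (6,3) code from the slide: G = [I3 | P]
G = [
    [1, 0, 0, 1, 0, 1],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 1, 0],
]

def encode(d):
    # c = d G with modulo-2 arithmetic
    return [sum(d[i] * G[i][j] for i in range(3)) % 2 for j in range(6)]

print(encode([1, 0, 1]))  # [1, 0, 1, 0, 1, 1] -> 101011, as in the table
```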
Code Word Property

c = d G = d [ Ik | P ] = [ d , dP ]

        | P  |
H^T  =  |    |      H^T : n by m
        | Im |

c H^T = [ d , dP ] H^T = dP + dP = 0
Error Vector

d : data word 100
c : code word 100101
r : received word 101101
e : error vector 001000
r = c + e
c = r + e (modulo-2 addition, so adding e again removes it)
Syndrome Vector

S = r H^T
  = (c + e) H^T
  = e H^T
r : 1 by n , H^T : n by m
S : 1 by m
Decoding Table

s = e H^T

        | 1 0 1 |
        | 0 1 1 |
H^T  =  | 1 1 0 |
        | 1 0 0 |
        | 0 1 0 |
        | 0 0 1 |
Decoding Table Continued

e        s
000000   000
100000   101
010000   011
001000   110
000100   100
000010   010
000001   001
100010   111
Decoding

(1) Calculate the syndrome : S = r H^T
(2) Obtain the error vector e from the look-up table
(3) c = r + e
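These three steps can be sketched in Python for the (6,3) code, using the H^T and the decoding table given above (illustrative only):

```python
# H^T for the (6,3) code: rows of P followed by the 3x3 identity
HT = [
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 0],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
]

# Decoding table from the slides: syndrome -> correctable error vector
TABLE = {
    (0, 0, 0): [0, 0, 0, 0, 0, 0],
    (1, 0, 1): [1, 0, 0, 0, 0, 0],
    (0, 1, 1): [0, 1, 0, 0, 0, 0],
    (1, 1, 0): [0, 0, 1, 0, 0, 0],
    (1, 0, 0): [0, 0, 0, 1, 0, 0],
    (0, 1, 0): [0, 0, 0, 0, 1, 0],
    (0, 0, 1): [0, 0, 0, 0, 0, 1],
    (1, 1, 1): [1, 0, 0, 0, 1, 0],
}

def decode(r):
    # (1) syndrome s = r H^T, (2) look up e, (3) c = r + e (mod 2)
    s = tuple(sum(r[i] * HT[i][j] for i in range(6)) % 2 for j in range(3))
    e = TABLE[s]
    return [(ri + ei) % 2 for ri, ei in zip(r, e)]

print(decode([1, 0, 1, 1, 0, 1]))  # [1, 0, 0, 1, 0, 1] -> data word 100
```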
Algebraic Codes

c = (c1, c2, ..., cn)
c(x) = c1 x^(n-1) + c2 x^(n-2) + ... + cn
d = (d1, d2, ..., dk)
d(x) = d1 x^(k-1) + d2 x^(k-2) + ... + dk

Examples

n = 7 , k = 4
d = (1 0 1 0)
d(x) = x^3 + x
c = (1 0 1 0 1 0 0)
c(x) = x^6 + x^4 + x^2
Modulo-2 Addition

  (x^6 + x^4 + x)
+ (x^5 + x^4 + x)
= x^6 + x^5
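Since modulo-2 addition cancels equal terms, it amounts to a bitwise XOR of the coefficient masks; a tiny sketch (not from the slides):

```python
# Represent a GF(2) polynomial as an integer bit mask: bit i is the coefficient of x^i
p = (1 << 6) | (1 << 4) | (1 << 1)   # x^6 + x^4 + x
q = (1 << 5) | (1 << 4) | (1 << 1)   # x^5 + x^4 + x

print(bin(p ^ q))  # 0b1100000 -> x^6 + x^5 (equal terms cancel)
```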
Generation of Cyclic Codes

(1) Find the generator polynomial g(x) :
    an (n-k)th order factor of x^n + 1
(2) For each d calculate d(x)
    c(x) = d(x) g(x)
    Hence get c
An Example

n = 7 , k = 4 , n - k = 3
Third order factors of x^7 + 1 :
x^3 + 1 , x^3 + x + 1 , etc.
We first try x^3 + 1
Division

x^3 + 1 ) x^7 + 1 ( x^4 + x
          x^7 + x^4
          ---------
                x^4 + 1
                x^4 + x
                -------
                    x + 1

The remainder is not zero, so x^3 + 1 is not a factor of x^7 + 1.
Trial Division

(x^7 + 1) / (x^3 + x^2 + 1)

The Result

(x^7 + 1) / (x^3 + x^2 + 1) = x^4 + x^3 + x^2 + 1

The division leaves no remainder, so g(x) = x^3 + x^2 + 1 can be used as the generator polynomial.
Code Generation

d = (1 0 1 0)
d(x) = x^3 + x
c(x) = d(x) . g(x)
     = (x^3 + x)(x^3 + x^2 + 1)
     = x^6 + x^5 + x^4 + x
c = (1 1 1 0 0 1 0)
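One way to reproduce this multiplication in Python (a sketch, not part of the slides) is to represent each polynomial as an integer bit mask and use carry-less multiplication:

```python
def gf2_mul(a: int, b: int) -> int:
    # Carry-less (modulo-2) polynomial multiplication of bit masks
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

d = 0b1010       # d(x) = x^3 + x
g = 0b1101       # g(x) = x^3 + x^2 + 1
c = gf2_mul(d, g)
print(format(c, "07b"))  # 1110010, matching the slide
```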
Systematic Cyclic Code

• The first k digits of c are the same as those of d

c(x) = x^(n-k) d(x) + ρ(x)
ρ(x) is the remainder of x^(n-k) d(x) / g(x)
Systematic Code Continued

x^(n-k) d(x) / g(x) = q(x) + ρ(x) / g(x)
x^(n-k) d(x) / g(x) + ρ(x) / g(x) = q(x)        (modulo-2, so subtraction is addition)
x^(n-k) d(x) + ρ(x) = q(x) . g(x)
Hence the systematic codeword c(x) is also a multiple of g(x).
Decoding

• Every valid code polynomial is a multiple of g(x)
• If the received word polynomial r(x) is not a multiple of g(x), an error is indicated and the remainder polynomial is used as the syndrome

s(x) = Remainder of r(x) / g(x)
Error Polynomial

r(x) = c(x) + e(x)
s(x) = Rem. of r(x) / g(x)
     = Rem. of ( c(x) + e(x) ) / g(x)
     = Rem. of e(x) / g(x)
Syndrome-Error Table

• For various errors the syndrome is calculated and stored in the form of a table.
• Decoding steps :
  • Get s(x) as the remainder in dividing r(x) by g(x)
  • Get the error from the table
  • Get c = r + e
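A minimal Python sketch of systematic encoding and syndrome computation with bit-mask polynomials (not from the slides; it reuses g(x) = x^3 + x^2 + 1 found earlier):

```python
def gf2_mod(num: int, div: int) -> int:
    # Remainder of modulo-2 polynomial division (bit masks, bit i = coeff of x^i)
    dlen = div.bit_length()
    while num.bit_length() >= dlen:
        num ^= div << (num.bit_length() - dlen)
    return num

def systematic_encode(d: int, g: int, n: int, k: int) -> int:
    # c(x) = x^(n-k) d(x) + rho(x), rho = remainder of x^(n-k) d(x) / g(x)
    shifted = d << (n - k)
    return shifted | gf2_mod(shifted, g)

def syndrome(r: int, g: int) -> int:
    # s(x) = remainder of r(x) / g(x); zero for every valid codeword
    return gf2_mod(r, g)

g = 0b1101                              # g(x) = x^3 + x^2 + 1
c = systematic_encode(0b1001, g, 7, 4)  # d = 1001
print(format(c, "07b"))                 # 1001011 (rho = 011)
print(syndrome(c, g))                   # 0 : valid codeword
print(format(syndrome(c ^ 0b0000100, g), "03b"))  # nonzero syndrome flags a single-bit error
```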
Cyclic Code Generation

g(x) = x^(n-k) + g1 x^(n-k-1) + g2 x^(n-k-2) + ... + g(n-k-1) x + 1

[Encoder circuit : a shift register of n-k flip-flops (D) with modulo-2 adders at the feedback taps g1, g2, ..., g(n-k-1)]

n-k flip-flops and modulo-2 adders are required.
An Example

(7, 4) code
g(x) = x^3 + x^2 + 1
     = x^3 + 1.x^2 + 0.x + 1
g1 = 1 ; g2 = 0
n - k = 3 , thus 3 flip-flops : D1 D2 D3
Verification

d = (1 0 0 1)
d(x) = x^3 + 1
x^(n-k) d(x) = x^3 (x^3 + 1)
             = x^6 + x^3
To find ρ(x) we divide x^(n-k) d(x) by g(x)
Remainder Calculation

x^3 + x^2 + 1 ) x^6 + x^3 ( x^3 + x^2 + x + 1
                x^6 + x^5 + x^3
                ---------------
                      x^5
                      x^5 + x^4 + x^2
                      ---------------
                            x^4 + x^2
                            x^4 + x^3 + x
                            -------------
                                  x^3 + x^2 + x
                                  x^3 + x^2 + 1
                                  -------------
                                          x + 1
Remainder Polynomial

ρ(x) = x + 1
ρ = 011
c(x) = x^3 d(x) + ρ(x) = x^6 + x^3 + x + 1 ; c = (1 0 0 1 0 1 1)
Register Contents Clock by Clock

Data word d = 1001 ; the input sequence x^3 d(x) = 1001000 is shifted into the D1 D2 D3 register one bit per clock.

Clock   D1 D2 D3
0       0  0  0
1       1  0  0
2       0  1  0
3       0  0  1
4       0  0  1
5       1  0  1
6       1  1  1
7       1  1  0

After clock 7 the register holds the remainder ρ = 011.
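A minimal Python sketch of the divider, assuming the wiring implied by g1 = 1, g2 = 0 and the register table above (feedback taken from D3); it reproduces the table and the remainder 011:

```python
def lfsr_divide(bits, g1=1, g2=0):
    """Shift the input bits through the divider for g(x) = x^3 + g1*x^2 + g2*x + 1.

    Returns the register contents (D1, D2, D3) after each clock.
    """
    d1 = d2 = d3 = 0
    history = [(d1, d2, d3)]
    for bit in bits:
        fb = d3                                          # feedback from the last stage
        d1, d2, d3 = bit ^ fb, d1 ^ (g2 & fb), d2 ^ (g1 & fb)
        history.append((d1, d2, d3))
    return history

# Input sequence x^(n-k) d(x) = x^6 + x^3 for d = 1001, shifted in MSB first
for clock, state in enumerate(lfsr_divide([1, 0, 0, 1, 0, 0, 0])):
    print(clock, state)
# Final state (1, 1, 0): read as D3 D2 D1 this is 011, i.e. rho(x) = x + 1
```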
Interlaced Coding for Burst Errors

x1 x2 x3 ------- x14 x15
y1 y2 y3 ------- y14 y15
z1 z2 z3 ------- z14 z15

Each row is a codeword of a (15, 8) code with t = 2.
The digits are transmitted column by column (x1 y1 z1 x2 y2 z2 ...), so a burst of channel errors is spread over different codewords, as in the sketch below.
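A small sketch (not from the slides) of the interlacing order and its inverse:

```python
# Three 15-digit codewords (x, y, z), written as rows
rows = [[f"x{i}" for i in range(1, 16)],
        [f"y{i}" for i in range(1, 16)],
        [f"z{i}" for i in range(1, 16)]]

# Interlaced transmission: send column by column
transmitted = [rows[r][c] for c in range(15) for r in range(3)]
print(transmitted[:6])   # ['x1', 'y1', 'z1', 'x2', 'y2', 'z2']

# De-interlacing at the receiver restores the rows, so a burst of consecutive
# channel errors is shared out among the three codewords
received = [transmitted[c * 3 + r] for r in range(3) for c in range(15)]
assert received == [s for row in rows for s in row]
```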
Convolution Coder
• N-stage shift register
• v modulo-2 adders
An Example

Data in : 11010
[Figure : three-stage shift register S1 S2 S3 feeding two modulo-2 adders that produce v1 and v2]

S1 S2 S3   v1 v2
0  0  0    0  0
1  0  0    1  1
1  1  0    0  1
0  1  1    0  1
1  0  1    0  0
0  1  0    1  0
0  0  1    1  1
0  0  0    0  0
Number of output digits
• n = (k+N-1).v
• = (5+3-1).2 = 14
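A sketch of the encoder in Python, assuming (from the table above) output connections v1 = S1 ⊕ S2 ⊕ S3 and v2 = S1 ⊕ S3, with two zero bits flushing the register:

```python
def conv_encode(data, flush=2):
    # Three-stage shift register; outputs inferred from the slide's table:
    # v1 = S1 + S2 + S3, v2 = S1 + S3 (modulo 2)
    s1 = s2 = s3 = 0
    out = []
    for bit in list(data) + [0] * flush:
        s1, s2, s3 = bit, s1, s2        # shift the new bit in
        out.append((s1 ^ s2 ^ s3, s1 ^ s3))
    return out

print(conv_encode([1, 1, 0, 1, 0]))
# [(1, 1), (0, 1), (0, 1), (0, 0), (1, 0), (1, 1), (0, 0)] -> 14 output digits
```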
Channel Coding in GSM

Speech frame : 50 + 132 + 78 bits
Parity bits : 50 + 3          Trailing bits : 132 + 4
(50 + 3) + (132 + 4) = 189 bits
Convolution coder (rate 1/2) : 189 bits -> 378 bits
378 + 78 = 456 bits
Code Tree

[Code-tree figure : starting from state a, each node branches on the input bit (0 or 1); branches are labelled with the output pair v1 v2]

States (the two most recent input bits) :
a = 00   b = 01   c = 10   d = 11

From a : input 0 -> a (output 00) , input 1 -> b (output 11)
From b : input 0 -> c (output 10) , input 1 -> d (output 01)
From c : input 0 -> a (output 11) , input 1 -> b (output 00)
From d : input 0 -> c (output 01) , input 1 -> d (output 10)
Trellis Diagram

[Trellis figure : states a, b, c, d repeated at each stage, with the same branches and output labels as the code tree : a->a : 00 , a->b : 11 , b->c : 10 , b->d : 01 , c->a : 11 , c->b : 00 , d->c : 01 , d->d : 10]
Received digits : 01 00 01 00
Paths into node a : 00 00 00   HD : 2   Survivor path
                    11 10 11   HD : 3
Paths into node b : 00 00 11   HD : 2   Survivor path
                    11 10 00   HD : 3
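A small Python sketch (not from the slides) of how the survivor is chosen: accumulate the Hamming distance of each candidate path's branch outputs against the received digits (only the first three pairs are compared, as on the slide) and keep the smaller metric:

```python
def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

received = ["01", "00", "01"]

# Branch output sequences of the two paths entering each node (from the code tree)
paths = {
    "a": [["00", "00", "00"], ["11", "10", "11"]],
    "b": [["00", "00", "11"], ["11", "10", "00"]],
}

for node, candidates in paths.items():
    metrics = [sum(hamming(r, o) for r, o in zip(received, p)) for p in candidates]
    best = min(range(len(candidates)), key=metrics.__getitem__)
    print(node, metrics, "survivor:", " ".join(candidates[best]))
# a [2, 3] survivor: 00 00 00
# b [2, 3] survivor: 00 00 11
```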
Error Probability in Uncoded System

PEU ≈ k . Q( sqrt(2 λ) )
λ = Si / (η f0)
f0 : Bit rate
Si : Signal power
η : PSD of noise
k : Number of bits in the data word
Q(x) : Probability (X > x) for a Gaussian random variable
Error Probability in Coded System

PEC ≈ nC(t+1) . [ Q( sqrt(2 k λ / n) ) ]^(t+1)
nC(t+1) : binomial coefficient
n : Number of bits in the coded word
t : Number of correctable errors
An Example

n = 15 , k = 11 , λ = 9.12 , t = 1
PEU ≈ 1.1 x 10^-4
PEC ≈ 1.96 x 10^-6
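These figures can be checked with a short Python sketch, assuming Q(x) = 0.5 erfc(x/√2); small differences from the quoted values can come from the Q-function table used on the slide:

```python
from math import comb, erfc, sqrt

def Q(x: float) -> float:
    # Tail probability of a standard Gaussian random variable
    return 0.5 * erfc(x / sqrt(2))

n, k, lam, t = 15, 11, 9.12, 1

PEU = k * Q(sqrt(2 * lam))
PEC = comb(n, t + 1) * Q(sqrt(2 * k * lam / n)) ** (t + 1)

print(f"PEU ~ {PEU:.2e}")  # ~1.1e-04
print(f"PEC ~ {PEC:.2e}")  # ~1.7e-06 (the slide quotes 1.96e-06)
```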
