
Channel coding
Introduction & linear codes

Manuel A. Vázquez
Jose Miguel Leiva
Joaquı́n Mı́guez

February 20, 2024



Index
1 Introduction
Channel models
Fundamentals
2 Encoding
3 Decoding
Hard decoding
Soft decoding
Coding gain
4 Linear block codes
Fundamentals
Decoding
5 Cyclic codes
Polynomials
Decoding

(Channel) Coding

Goal
Add redundancy to the transmitted information so that it can be
recovered if errors happen during transmission.

Example: repetition code


0 → 000
1 → 111
so that, e.g.,
010 → 000 111 000
What should we decide was transmitted if we receive

010 100 000 ?

Majority voting within each block yields 000 (instead of the transmitted 010): the two bit errors in the second block cause a decoding error.
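As a quick illustration (not part of the original slides), the repetition encoder and majority-vote decoder can be sketched in a few lines of Python; names and structure are ours:

    import numpy as np

    def encode_repetition(bits, n=3):
        # Repeat every information bit n times: 0 -> 000, 1 -> 111.
        return np.repeat(np.asarray(bits), n)

    def decode_repetition(received, n=3):
        # Majority vote inside each block of n received bits.
        blocks = np.asarray(received).reshape(-1, n)
        return (blocks.sum(axis=1) > n // 2).astype(int)

    sent = [0, 1, 0]                        # encoded as 000 111 000
    received = [0, 1, 0, 1, 0, 0, 0, 0, 0]  # 010 100 000
    print(decode_repetition(received))      # -> [0 0 0], not 010

The two bit errors in the second block overwhelm the majority vote, reproducing the decoding error above.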


Digital communications system

Figure: block diagram of a digital communications system. Transmitter: B → Encoder → A → Modulator → s(t). Channel: s(t) plus AWGN noise n(t) with PSD N0 /2 gives r (t). Receiver: r (t) → Demodulator → q = A + n → Detector → B̂.

This model can be analyzed at different levels...

Digital channel
Gaussian channel
Digital channel

Looking at the system end to end, from the encoder output B to the detector output B̂, everything in between behaves as a digital channel:

Figure: B → Digital channel → B̂
Gaussian channel (with digital input)

Alternatively, from the modulator input A to the demodulator output q, the system behaves as a Gaussian channel:

Figure: A → Gaussian channel → q
Some basic concepts

Definition: Code
A mapping from a sequence of k bits, b ∈ {b1 , b2 , · · · , b2^k }, onto another one of n > k bits, c ∈ {c1 , c2 , · · · , c2^k }:

bi −(coding)→ ci −(transmission)→ q[0], q[1], · · · or B̂[0], B̂[1], · · · −(decoding)→ b̂,    i = 1, · · · , 2^k

Probability of error for bi :

Pei = Pr{b̂ ≠ bi | b = bi },  i = 1, . . . , 2^k

Maximum probability of error: Pemax = maxi Pei

Rate: the rate of a code is the fraction of information bits, k, in a codeword of length n:

R = k/n
Codeword vs bit error probability

Pe : codeword error probability

Pe = (# codewords received incorrectly) / (overall # codewords) = v /w

BER (Bit Error Rate): bit error probability

BER = (# incorrect bits) / (# transmitted bits)

(they match if every codeword carries a single information bit)

worst-case scenario → BER = (v × k)/(w × k) = Pe
best-case scenario → BER = (v × 1)/(w × k) = Pe /k

⇒ Pe /k ≤ BER ≤ Pe
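A small Monte Carlo sketch (ours, with illustrative parameters) that checks the bounds numerically; a "wrong" codeword is simulated by flipping between 1 and k of its information bits:

    import numpy as np

    rng = np.random.default_rng(0)
    k, w = 4, 10_000                      # info bits per word, # of words
    b = rng.integers(0, 2, size=(w, k))   # transmitted information bits
    b_hat = b.copy()
    for i in np.flatnonzero(rng.random(w) < 0.05):  # ~5% wrong words
        n_err = rng.integers(1, k + 1)              # 1..k wrong bits each
        b_hat[i, rng.choice(k, size=n_err, replace=False)] ^= 1

    Pe = np.mean((b != b_hat).any(axis=1))  # v/w
    BER = np.mean(b != b_hat)
    assert Pe / k <= BER <= Pe
    print(f"Pe = {Pe:.4f}, BER = {BER:.4f}")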
Channel coding theorem

Theorem: Channel coding (Shannon, 1948)
If C is the capacity of a channel, then it is possible to reliably transmit with any rate R < C .

Capacity
It is the maximum of the mutual information between the input and output of the channel.

Reliable transmission
There is a sequence of codes (n, k) = (n, nR) such that, when n → ∞, Pemax → 0.
Channel coding theorem: example

Figure: binary symmetric channel. Input 0 goes to output 0 with probability 1 − p and to output 1 with probability p; likewise for input 1.

C = 1 − Hb (p),

p being the channel BER and Hb the binary entropy function.

Let us consider 4 binary channels with

p = 0.15 ⇒ C1 = 0.39    p = 0.13 ⇒ C2 = 0.44
p = 0.17 ⇒ C3 = 0.34    p = 0.19 ⇒ C4 = 0.29

and a code with rate R = 1/3 ≈ 0.33.

Channel coding theorem
A code with rate R = 1/3 only respects the Shannon limit (R < C ) in the first three scenarios.
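The capacities above follow directly from C = 1 − Hb (p); a minimal Python check (function names are ours):

    from math import log2

    def binary_entropy(p):
        # Hb(p) = -p*log2(p) - (1 - p)*log2(1 - p)
        return -p * log2(p) - (1 - p) * log2(1 - p)

    R = 1 / 3
    for p in (0.15, 0.13, 0.17, 0.19):
        C = 1 - binary_entropy(p)
        print(f"p = {p:.2f}: C = {C:.2f}, R < C? {R < C}")
    # only the last channel (p = 0.19) violates R < C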
Channel coding theorem: example

The figure shows the evolution of the codeword error probability as a function of n: it approaches 0 when R < C .

Figure: Left: logarithmic scale; right: linear scale


Definitions

Definition: Redundancy
The number of bits, r = n − k, added by the encoder.

It allows rewriting the rate of the code as R = k/n = (n − r )/n = 1 − r /n

Definition: Hamming distance...
...between two binary sequences is the number of bits in which they differ.

It is a measure of how different two sequences of bits are. For instance, dH (1010, 1001) = 2.

Definition: Minimum distance of a code

dmin = min_{i≠j} dH (ci , cj )
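The Hamming distance is one line of Python (a sketch of ours):

    def hamming_distance(a, b):
        # Number of positions in which two equal-length sequences differ.
        assert len(a) == len(b)
        return sum(x != y for x, y in zip(a, b))

    print(hamming_distance("1010", "1001"))  # -> 2, as in the example above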

Coding

In the usual model for a digital communications system,

Figure: B → Encoder → A → Modulator → s(t) → AWGN channel → r (t) → Demodulator → q = A + n → Detector → B̂

the coding scheme is always placed before the rest of the system:

Figure: unencoded bits → Coding scheme → B → Encoder → Modulator → · · ·

and we have B[0] = C [0], B[1] = C [1], . . . , i.e., the bits fed to the system are the codeword bits.


Hard decoding

Decoding at the bit level

It relies on the digital channel

Figure: B → Digital channel → B̂

The inputs to the decoder are the bits coming from the Detector, the B̂’s.
The metric is the Hamming distance.

Notation

ci = [Ci [0], Ci [1], · · · , Ci [n − 1]] ≡ i-th codeword
r = [B̂[0], B̂[1], · · · , B̂[n − 1]] ≡ received word

Hard decoding: decision rule

Maximum a Posteriori (MAP) rule: we decide ci if

p(ci |r) > p(cj |r) ∀j ≠ i

If all the codewords are equally likely, it is equivalent to Maximum Likelihood (ML),

p(r|ci ) > p(r|cj ) ∀j ≠ i

Likelihoods can be expressed in terms of dH :

p(r|ci ) = ε^dH(r,ci) (1 − ε)^(n−dH(r,ci))

ε ≡ channel bit error probability

If ε < 0.5, the ML rule is tantamount to deciding ci if

dH (r, ci ) < dH (r, cj ) ∀j ≠ i.
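A minimal sketch (ours) of this minimum-Hamming-distance rule; the codebook below is the repetition code, but any list of codewords works:

    import numpy as np

    def hard_decode(r, codewords):
        # Return the codeword closest to r in Hamming distance.
        C = np.asarray(codewords)
        distances = (C != np.asarray(r)).sum(axis=1)  # dH(r, ci) for every i
        return C[np.argmin(distances)]

    print(hard_decode([0, 1, 0], [[0, 0, 0], [1, 1, 1]]))  # -> [0 0 0]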


Hard decoding: error detection vs. correction

Assuming errors happened during transmission, there are two possible scenarios:

We do not detect them
(we only detect errors if r ≠ ci , i = 1, . . . , 2^k, i.e., if r is not a codeword)

We do detect them, in which case we must make a decision:
We don’t risk correcting them and request a retransmission (we cannot correct with confidence)
We try and correct them (a risk is involved!!)

We need a policy for the latter scenario: in this course we always try and fix the errors.
Hard decoding: detection

We detect a word error whenever fewer than dmin bit errors happen.

Probability of an erroneous codeword going undetected (at least dmin bit errors):

Pnd ≤ \sum_{m=dmin}^{n} \binom{n}{m} ε^m (1 − ε)^{n−m}

where ε is the bit error probability in the system, and dmin is the minimum distance between codewords.

A bound on the probability of error...
...since it might happen that dmin or more bit errors do not turn a codeword into another one ⇒ ≤ rather than =
Hard decoding: correction (“always correct” policy)

Decoding is correct if there are fewer than dmin /2 erroneous bits ⇒ the code can correct up to

t = ⌊(dmin − 1)/2⌋ errors.

Codeword error probability:

Pe ≤ \sum_{m=t+1}^{n} \binom{n}{m} ε^m (1 − ε)^{n−m}

A bound on the probability of error...
...since it is sometimes possible to correct more than t errors (there is no guarantee, though) ⇒ ≤ rather than =

Approximate bound
The first element in the summation is a good approximation if ε is small and dmin is large.
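Both bounds are easy to evaluate numerically; a sketch of ours, here with n = 7 and dmin = 3 (the Hamming (7, 4) code seen later) and an assumed ε = 10⁻²:

    from math import comb, floor

    def tail_bound(n, eps, m_start):
        # sum_{m = m_start}^{n} C(n, m) eps^m (1 - eps)^(n - m)
        return sum(comb(n, m) * eps**m * (1 - eps)**(n - m)
                   for m in range(m_start, n + 1))

    n, d_min, eps = 7, 3, 1e-2
    t = floor((d_min - 1) / 2)                  # -> 1 correctable error
    print("t =", t)
    print("Pnd <=", tail_bound(n, eps, d_min))  # detection bound
    print("Pe  <=", tail_bound(n, eps, t + 1))  # correction bound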
Soft decoding

Decoding at the constellation-symbol level

It relies on the Gaussian channel

Figure: A → Gaussian channel → q

with q = A + n, where n is a Gaussian noise vector.

The inputs to the decoder are the observations coming from the Demodulator, the q’s.
The metric is the Euclidean distance.

Notation

m ≡ # bits carried by every symbol A
c̃i = [A(i) [0], A(i) [1], · · · , A(i) [n/m − 1]] ≡ i-th codeword
r̃ = [q[0], q[1], · · · , q[n/m − 1]] ≡ received word
Soft decoding: correction

The codeword error probability can be approximated as

Pe ≈ κ Q( (dmin /2) / √(N0 /2) )    (1)

where κ is the kissing number.

Definition: kissing number
It is the maximum number of codewords that are at distance dmin from any given one.
Coding gain

If we equate the BER with and without coding, the coding gain is obtained as

G = (Eb /N0 )nc / (Eb /N0 )c

It is different for soft and hard decoding.

To compute the individual Eb /N0 ’s, the following approximation of the Q-function is often useful:

Q(x) ≈ (1/2) e^(−x²/2)
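A quick comparison of Q(x) against this approximation (a sketch of ours, using Q(x) = erfc(x/√2)/2 from the standard library):

    from math import erfc, exp, sqrt

    def Q(x):
        return 0.5 * erfc(x / sqrt(2.0))

    def Q_approx(x):
        return 0.5 * exp(-x**2 / 2.0)

    for x in (1.0, 2.0, 3.0, 4.0):
        print(f"x = {x}: Q = {Q(x):.3e}, approx = {Q_approx(x):.3e}")

The approximation overestimates Q(x), but it captures the dominant exponential behavior, which is what matters when equating BERs to extract a gain.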
Coding gain: example

Figure: 2-PAM constellation, points −√Es and √Es on the φ1 (t) axis

Let us consider a binary antipodal constellation 2-PAM (±√Es ), with the code

bi    ci
00    000
01    011
10    110
11    101
Coding gain: example - hard decoding

This code cannot correct any error since t = ⌊(dmin − 1)/2⌋ = 0, and the codeword error probability is

Pe ≤ \sum_{m=1}^{3} \binom{3}{m} ε^m (1 − ε)^{3−m} ≈ 3ε

where ε = Q(√(2Es /N0 )).

Bit error probability:

BER ≈ (2/3) · 3Q(√(2Es /N0 ))

In order to express it in terms of Eb , we use that 2Eb = 3Es , and hence

BER ≈ 2Q(√(4Eb /(3N0 )))
Coding gain: example - soft decoding

We decide b from the output of the Gaussian channel,

q = (q[0], q[1], q[2]) = (A[0] + n[0], A[1] + n[1], A[2] + n[2])

This is tantamount to the detector for the constellation

(−√Es , −√Es , −√Es ), (−√Es , √Es , √Es ), (√Es , √Es , −√Es ), (√Es , −√Es , √Es )

which has minimum (Euclidean) distance dmin = 2√(2Es ).

From (1), the codeword error probability is

Pe ≈ 3Q(√(4Es /N0 ))

BER as a function of Eb :

BER ≈ 2Q(√(8Eb /(3N0 )))
Coding gain: example - hard vs soft decoding

Without coding, we have Eb = Es , and

BERnc = ε = Q(√(2Eb /N0 ))

Gain with hard decoding
We set BERc equal to BERnc , using the approximation of Q(·):

G = (Eb /N0 )nc / (Eb /N0 )c = 2/3 ≈ −1.76 dB

We are actually losing performance!! (expected, since the code is not able to correct any error)

Soft decoding
G = 4/3 ≈ 1.25 dB

Now we are making good use of coding.


Linear block codes

Galois field modulo 2 (GF (2))
a + b = (a + b) mod 2
a · b = (a · b) mod 2

Definition: Linear Block Code
A linear block code is a code in which any linear combination of codewords is also a codeword.

Properties
It is a subspace of GF (2)^n with 2^k elements.
The all-zeros word is a codeword.
Every codeword has at least one other codeword at distance dmin from it.
dmin is the smallest weight (number of 1s) among the non-null codewords.

Linear block codes: structure

Elements in an (n, k) linear block code:
b is the message, 1 × k
c is the codeword, 1 × n
r is the received word, 1 × n, with r = c + e
e is the error (noise) word, 1 × n
G is the generator matrix (for encoding), k × n
H is the parity-check matrix (for decoding), (n − k) × n

Encoding

The mapping b → c is performed through matrix multiplication, i.e.,

c = bG.

Keep in mind:
b is 1 × k
G is k × n
c is 1 × n

Property
Every row of G is a codeword.
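As a sketch (ours), encoding over GF (2) is one matrix product modulo 2; the G below is the Hamming (7, 4) generator matrix used later in these slides:

    import numpy as np

    G = np.array([[1, 0, 0, 0, 1, 0, 1],
                  [0, 1, 0, 0, 1, 1, 0],
                  [0, 0, 1, 0, 1, 1, 1],
                  [0, 0, 0, 1, 0, 1, 1]])

    def encode(b, G):
        # c = bG with arithmetic modulo 2
        return np.asarray(b) @ G % 2

    print(encode([1, 0, 1, 1], G))  # -> [1 0 1 1 0 0 1]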

Parity-check matrix

The parity-check matrix, H, spans the orthogonal complement of G, so that

cHᵀ = 0 ⇔ c is a codeword

For the sake of convenience,

Definition: Syndrome
The syndrome of the received sequence r is

s = rHᵀ (with dimensions 1 × (n − k))

Then,
s = 0 ⇔ r is a codeword.

Syndrome-error connection

s = rHᵀ = (c + e)Hᵀ = cHᵀ + eHᵀ = eHᵀ

since cHᵀ = 0 for any codeword c.

Hard decoding: syndrome decoding

The minimum distance rule requires computing dH between the received word, r, and every codeword... but we can instead carry out syndrome decoding.

Beforehand:
Fill in a table yielding the syndrome associated with every possible error pattern, e → s = eHᵀ. (If several errors yield the same syndrome, choose the one that is most likely, i.e., the one with the smallest weight.)

In operation, given the received word, r:
1 Compute the syndrome s = rHᵀ.
2 Look up the table for the error pattern, e, with that syndrome.
3 Undo the error:
ĉ = r + e
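A hedged sketch (ours) of the whole procedure for a single-error-correcting code; the table maps each syndrome to its lowest-weight (here, weight-0 or weight-1) error pattern:

    import numpy as np

    def build_syndrome_table(H):
        # Map each syndrome (as a tuple) to its lowest-weight error pattern.
        n_k, n = H.shape
        table = {(0,) * n_k: np.zeros(n, dtype=int)}
        for i in range(n):
            e = np.zeros(n, dtype=int)
            e[i] = 1
            table[tuple(e @ H.T % 2)] = e
        return table

    def syndrome_decode(r, H, table):
        s = tuple(np.asarray(r) @ H.T % 2)  # 1. compute the syndrome
        e = table[s]                        # 2. look up the error pattern
        return (np.asarray(r) + e) % 2      # 3. undo the error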

Systematic codes

Definition: Systematic code
A code in which the message is always embedded in the encoded sequence (in the same place).

This can be easily imposed through the generator matrix,

G = [Ik  P]  or  G = [P  Ik]

First/last k bits in c are equal to b, and the remaining n − k are redundancy.

If G = [Ik  P], it can be shown that

H = [Pᵀ  In−k]

Exercise
Prove it!

Systematic code example: Hamming (7, 4)

Generator matrix:

G = [1 0 0 0 1 0 1]
    [0 1 0 0 1 1 0]
    [0 0 1 0 1 1 1]
    [0 0 0 1 0 1 1]

Parity-check matrix:

H = [1 1 1 0 1 0 0]
    [0 1 1 1 0 1 0]
    [1 0 1 1 0 0 1]

Every Hamming code:
It’s perfect
dmin = 3
k = 2^j − j − 1 and n = 2^j − 1, for every integer j ≥ 2
j = 2 → (3, 1)
j = 3 → (7, 4)
j = 4 → (15, 11)

Hamming (7, 4): coding gain

Figure: BER vs. Eb /N0 for no coding, hard decoding, and soft decoding; the horizontal gaps between the curves mark the hard and soft coding gains.

Hamming (7, 4): decoding

Beforehand we apply

s = eHᵀ

to every e that entails a single error (the code can only correct 1 erroneous bit):

error      syndrome
0000000    000
1000000    101
0100000    110
0010000    111
0001000    011
0000100    100
0000010    010
0000001    001

Example: r = [1100101]. With

     [1 0 1]
     [1 1 0]
     [1 1 1]
Hᵀ = [0 1 1]
     [1 0 0]
     [0 1 0]
     [0 0 1]

we get s = rHᵀ = [110], and hence e = [0100000], so that

ĉ = r + e = [1000101].
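The worked example can be checked with a couple of lines (a sketch of ours):

    import numpy as np

    H = np.array([[1, 1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 1, 0, 1, 0],
                  [1, 0, 1, 1, 0, 0, 1]])

    r = np.array([1, 1, 0, 0, 1, 0, 1])
    s = r @ H.T % 2
    print(s)                              # -> [1 1 0]
    e = np.array([0, 1, 0, 0, 0, 0, 0])  # table entry for syndrome 110
    print((r + e) % 2)                    # -> [1 0 0 0 1 0 1]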

Equivalent codes

Computing H from G
If the code is systematic, we have an easy way of computing the parity-check matrix... but what if it’s not? If the code is not systematic, one can apply operations on the generator matrix, G, to try and transform it into that of an equivalent systematic code, G′ = [Ik  P].

Allowed operations are:
On rows: replacing any row with a linear combination of itself and other rows, or swapping rows.
On columns: swapping columns.

Definition: Equivalent codes
Two codes are equivalent if they have the same codewords (after, maybe, reordering the bits).


Cyclic codes

For large values of k and n, working with matrices is not efficient!!

Definition: Cyclic code
It is a linear block code in which any circular shift of a codeword results in another codeword.

In a cyclic code, if [c0 , c1 , . . . , cn−1 ] is a codeword, then so is [cn−1 , c0 , c1 , . . . , cn−2 ], i.e., every codeword is a (circularly) shifted version of another codeword.

Polynomial representation of codewords

Codeword [c0 , c1 , · · · , cn−1 ] is represented as the polynomial

c(x) = c0 + c1 x + c2 x^2 + · · · + cn−1 x^{n−1}

How is [c0 , c1 , · · · , cn−1 ] → [cn−1 , c0 , · · · , cn−2 ] achieved mathematically? By multiplying c(x) by x modulo (x^n − 1), i.e.,

xc(x) = c0 x + c1 x^2 + · · · + cn−1 x^n
      = cn−1 (x^n − 1) + cn−1 + c0 x + c1 x^2 + · · · + cn−2 x^{n−1}

Hence,

(xc(x)) mod (x^n − 1) = cn−1 + c0 x + c1 x^2 + · · · + cn−2 x^{n−1}

which is exactly the polynomial of [cn−1 , c0 , · · · , cn−2 ].
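On coefficient vectors, multiplying by x modulo x^n − 1 is exactly a circular shift, as a one-line sketch (ours) confirms:

    import numpy as np

    def shift_poly(c):
        # Coefficients of (x*c(x)) mod (x^n - 1):
        # [c0, ..., cn-1] -> [cn-1, c0, ..., cn-2]
        return np.roll(np.asarray(c), 1)

    print(shift_poly([1, 0, 1, 1]))  # -> [1 1 0 1]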

Encoding

G → g (x): the generator matrix becomes a generator polynomial.

Coding is carried out by multiplying, modulo x^n − 1, the polynomial representing bi by a generator polynomial, g (x),

c(x) = (b(x)g (x)) mod (x^n − 1)

The generator polynomial, g (x):
it is of degree r = n − k,
it must be a factor of x^n − 1.
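A sketch (ours) of this polynomial encoding over GF (2); we assume g (x) = 1 + x + x^3, which is a factor of x^7 − 1 and generates the cyclic Hamming (7, 4) code:

    import numpy as np

    def cyclic_encode(b, g, n):
        # c(x) = (b(x) g(x)) mod (x^n - 1), coefficients over GF(2)
        prod = np.convolve(b, g) % 2       # polynomial product
        c = np.zeros(n, dtype=int)
        for i, coef in enumerate(prod):    # x^i == x^(i mod n) mod (x^n - 1)
            c[i % n] ^= int(coef)
        return c

    g = [1, 1, 0, 1]                       # 1 + x + x^3
    print(cyclic_encode([1, 0, 1, 1], g, n=7))  # -> [1 1 1 1 1 1 1]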

Decoding

H → h(x): the parity-check matrix becomes a parity-check polynomial.

The parity-check polynomial, h(x):
it is of degree k (since g (x)h(x) = x^n − 1),
it must satisfy

(g (x)h(x)) mod (x^n − 1) = 0.

Just like in regular linear block codes, we can perform syndrome decoding,

s(x) = (r (x)h(x)) mod (x^n − 1)
